Add TI's DSP Bridge driver to the staging area.
TI's DSP Bridge driver supplies a direct link between
host applications and DSP tasks running on a remote processor.
Please pull from:
git://wizery.com/pub/tidspbridge.git for-greg
The patches will be sent as a follow-on to this message to lkml and l-o for
people to see.
The patches are the result of a staging relocation and a linux-next
rebase of 85343cd5491260881b34ab7bb7cdc8fdeef078e4 at
git://dev.omapzoom.org/pub/scm/tidspbridge/kernel-dspbridge.git dspbridge
For more information about TI's DSP Bridge, check out the
submitted documentation and also:
http://omapzoom.org/gf/project/omapbridge/docman/?subdir=3
The DSP/Bridge project wishes to thank all of its contributors;
the current bridge driver is the result of the work of all of them.
The following is an alphabetical list of all contributors that we
know of (if by any chance we forgot to mention anyone, please let
us know, thanks!):
Suman Anna
Sripal Bagadia
Felipe Balbi
Ohad Ben-Cohen
Phil Carmody
Deepak Chitriki
Felipe Contreras
Hiroshi Doyu
Seth Forshee
Ivan Gomez Castellanos
Mark Grosen
Ramesh Gupta G
Fernando Guzman Lugo
Axel Haslam
Janet Head
Shivananda Hebbar
Hari Kanigeri
Tony Lindgren
Antonio Luna
Hari Nagalla
Nishanth Menon
Ameya Palande
Vijay Pasam
Gilbert Pitney
Omar Ramirez Luna
Ernesto Ramos
Chris Ring
Larry Schiefer
Rebecca Schultz Zavin
Bhavin Shah
Andy Shevchenko
Jeff Taylor
Roman Tereshonkov
Armando Uribe de Leon
Nischal Varide
Wenbiao Wang
Thanks,
Ohad Ben-Cohen (1):
staging: ti dspbridge: add TODO file
Omar Ramirez Luna (10):
staging: ti dspbridge: add driver documentation
staging: ti dspbridge: add core driver sources
staging: ti dspbridge: add platform manager code
staging: ti dspbridge: add resource manager
staging: ti dspbridge: add MMU support
staging: ti dspbridge: add generic utilities
staging: ti dspbridge: add services
staging: ti dspbridge: add DOFF binaries loader
staging: ti dspbridge: add header files
staging: ti dspbridge: enable driver building
drivers/staging/Kconfig | 2 +
drivers/staging/Makefile | 1 +
.../staging/tidspbridge/Documentation/CONTRIBUTORS | 82 +
drivers/staging/tidspbridge/Documentation/README | 70 +
.../staging/tidspbridge/Documentation/error-codes | 157 +
drivers/staging/tidspbridge/Kconfig | 88 +
drivers/staging/tidspbridge/Makefile | 34 +
drivers/staging/tidspbridge/TODO | 18 +
drivers/staging/tidspbridge/core/_cmm.h | 45 +
drivers/staging/tidspbridge/core/_deh.h | 35 +
drivers/staging/tidspbridge/core/_msg_sm.h | 142 +
drivers/staging/tidspbridge/core/_tiomap.h | 377 +++
drivers/staging/tidspbridge/core/_tiomap_pwr.h | 85 +
drivers/staging/tidspbridge/core/chnl_sm.c | 1015 ++++++
drivers/staging/tidspbridge/core/dsp-clock.c | 421 +++
drivers/staging/tidspbridge/core/io_sm.c | 2410 +++++++++++++++
drivers/staging/tidspbridge/core/mmu_fault.c | 139 +
drivers/staging/tidspbridge/core/mmu_fault.h | 36 +
drivers/staging/tidspbridge/core/msg_sm.c | 673 ++++
drivers/staging/tidspbridge/core/tiomap3430.c | 1887 ++++++++++++
drivers/staging/tidspbridge/core/tiomap3430_pwr.c | 604 ++++
drivers/staging/tidspbridge/core/tiomap_io.c | 458 +++
drivers/staging/tidspbridge/core/tiomap_io.h | 104 +
drivers/staging/tidspbridge/core/ue_deh.c | 303 ++
drivers/staging/tidspbridge/core/wdt.c | 150 +
drivers/staging/tidspbridge/dynload/cload.c | 1960 ++++++++++++
.../staging/tidspbridge/dynload/dload_internal.h | 351 +++
drivers/staging/tidspbridge/dynload/doff.h | 344 +++
drivers/staging/tidspbridge/dynload/getsection.c | 416 +++
drivers/staging/tidspbridge/dynload/header.h | 55 +
drivers/staging/tidspbridge/dynload/module_list.h | 159 +
drivers/staging/tidspbridge/dynload/params.h | 226 ++
drivers/staging/tidspbridge/dynload/reloc.c | 484 +++
drivers/staging/tidspbridge/dynload/reloc_table.h | 102 +
.../tidspbridge/dynload/reloc_table_c6000.c | 257 ++
drivers/staging/tidspbridge/dynload/tramp.c | 1143 +++++++
.../tidspbridge/dynload/tramp_table_c6000.c | 164 +
drivers/staging/tidspbridge/gen/gb.c | 167 +
drivers/staging/tidspbridge/gen/gh.c | 213 ++
drivers/staging/tidspbridge/gen/gs.c | 89 +
drivers/staging/tidspbridge/gen/uuidutil.c | 223 ++
drivers/staging/tidspbridge/hw/EasiGlobal.h | 41 +
drivers/staging/tidspbridge/hw/GlobalTypes.h | 308 ++
drivers/staging/tidspbridge/hw/MMUAccInt.h | 76 +
drivers/staging/tidspbridge/hw/MMURegAcM.h | 226 ++
drivers/staging/tidspbridge/hw/hw_defs.h | 60 +
drivers/staging/tidspbridge/hw/hw_mmu.c | 587 ++++
drivers/staging/tidspbridge/hw/hw_mmu.h | 161 +
.../tidspbridge/include/dspbridge/_chnl_sm.h | 181 ++
.../tidspbridge/include/dspbridge/brddefs.h | 39 +
.../staging/tidspbridge/include/dspbridge/cfg.h | 222 ++
.../tidspbridge/include/dspbridge/cfgdefs.h | 81 +
.../staging/tidspbridge/include/dspbridge/chnl.h | 130 +
.../tidspbridge/include/dspbridge/chnldefs.h | 67 +
.../tidspbridge/include/dspbridge/chnlpriv.h | 101 +
.../staging/tidspbridge/include/dspbridge/clk.h | 101 +
.../staging/tidspbridge/include/dspbridge/cmm.h | 386 +++
.../tidspbridge/include/dspbridge/cmmdefs.h | 105 +
.../staging/tidspbridge/include/dspbridge/cod.h | 369 +++
.../staging/tidspbridge/include/dspbridge/dbc.h | 46 +
.../staging/tidspbridge/include/dspbridge/dbdcd.h | 358 +++
.../tidspbridge/include/dspbridge/dbdcddef.h | 78 +
.../staging/tidspbridge/include/dspbridge/dbdefs.h | 546 ++++
.../tidspbridge/include/dspbridge/dbldefs.h | 140 +
.../staging/tidspbridge/include/dspbridge/dbll.h | 59 +
.../tidspbridge/include/dspbridge/dblldefs.h | 496 +++
.../staging/tidspbridge/include/dspbridge/dbtype.h | 88 +
.../tidspbridge/include/dspbridge/dehdefs.h | 32 +
.../staging/tidspbridge/include/dspbridge/dev.h | 702 +++++
.../tidspbridge/include/dspbridge/devdefs.h | 26 +
.../staging/tidspbridge/include/dspbridge/disp.h | 204 ++
.../tidspbridge/include/dspbridge/dispdefs.h | 35 +
.../staging/tidspbridge/include/dspbridge/dmm.h | 75 +
.../staging/tidspbridge/include/dspbridge/drv.h | 522 ++++
.../tidspbridge/include/dspbridge/drvdefs.h | 25 +
.../tidspbridge/include/dspbridge/dspapi-ioctl.h | 475 +++
.../staging/tidspbridge/include/dspbridge/dspapi.h | 167 +
.../tidspbridge/include/dspbridge/dspchnl.h | 72 +
.../tidspbridge/include/dspbridge/dspdefs.h | 1128 +++++++
.../staging/tidspbridge/include/dspbridge/dspdeh.h | 47 +
.../staging/tidspbridge/include/dspbridge/dspdrv.h | 62 +
.../staging/tidspbridge/include/dspbridge/dspio.h | 41 +
.../tidspbridge/include/dspbridge/dspioctl.h | 73 +
.../staging/tidspbridge/include/dspbridge/dspmsg.h | 56 +
.../tidspbridge/include/dspbridge/dynamic_loader.h | 492 +++
drivers/staging/tidspbridge/include/dspbridge/gb.h | 79 +
.../tidspbridge/include/dspbridge/getsection.h | 108 +
drivers/staging/tidspbridge/include/dspbridge/gh.h | 32 +
drivers/staging/tidspbridge/include/dspbridge/gs.h | 59 +
.../tidspbridge/include/dspbridge/host_os.h | 89 +
drivers/staging/tidspbridge/include/dspbridge/io.h | 114 +
.../staging/tidspbridge/include/dspbridge/io_sm.h | 309 ++
.../staging/tidspbridge/include/dspbridge/iodefs.h | 36 +
.../staging/tidspbridge/include/dspbridge/ldr.h | 29 +
.../staging/tidspbridge/include/dspbridge/list.h | 225 ++
.../staging/tidspbridge/include/dspbridge/mbx_sh.h | 198 ++
.../tidspbridge/include/dspbridge/memdefs.h | 30 +
.../staging/tidspbridge/include/dspbridge/mgr.h | 205 ++
.../tidspbridge/include/dspbridge/mgrpriv.h | 45 +
.../staging/tidspbridge/include/dspbridge/msg.h | 86 +
.../tidspbridge/include/dspbridge/msgdefs.h | 29 +
.../staging/tidspbridge/include/dspbridge/nldr.h | 55 +
.../tidspbridge/include/dspbridge/nldrdefs.h | 293 ++
.../staging/tidspbridge/include/dspbridge/node.h | 579 ++++
.../tidspbridge/include/dspbridge/nodedefs.h | 28 +
.../tidspbridge/include/dspbridge/nodepriv.h | 182 ++
.../staging/tidspbridge/include/dspbridge/ntfy.h | 217 ++
.../staging/tidspbridge/include/dspbridge/proc.h | 621 ++++
.../tidspbridge/include/dspbridge/procpriv.h | 25 +
.../staging/tidspbridge/include/dspbridge/pwr.h | 107 +
.../staging/tidspbridge/include/dspbridge/pwr_sh.h | 33 +
.../include/dspbridge/resourcecleanup.h | 63 +
.../staging/tidspbridge/include/dspbridge/rmm.h | 181 ++
.../staging/tidspbridge/include/dspbridge/rms_sh.h | 95 +
.../tidspbridge/include/dspbridge/rmstypes.h | 28 +
.../tidspbridge/include/dspbridge/services.h | 50 +
.../staging/tidspbridge/include/dspbridge/std.h | 94 +
.../staging/tidspbridge/include/dspbridge/strm.h | 404 +++
.../tidspbridge/include/dspbridge/strmdefs.h | 46 +
.../staging/tidspbridge/include/dspbridge/sync.h | 109 +
.../tidspbridge/include/dspbridge/utildefs.h | 39 +
.../tidspbridge/include/dspbridge/uuidutil.h | 62 +
.../staging/tidspbridge/include/dspbridge/wdt.h | 79 +
drivers/staging/tidspbridge/pmgr/chnl.c | 163 +
drivers/staging/tidspbridge/pmgr/chnlobj.h | 46 +
drivers/staging/tidspbridge/pmgr/cmm.c | 1172 +++++++
drivers/staging/tidspbridge/pmgr/cod.c | 658 ++++
drivers/staging/tidspbridge/pmgr/dbll.c | 1585 ++++++++++
drivers/staging/tidspbridge/pmgr/dev.c | 1171 +++++++
drivers/staging/tidspbridge/pmgr/dmm.c | 533 ++++
drivers/staging/tidspbridge/pmgr/dspapi.c | 1685 ++++++++++
drivers/staging/tidspbridge/pmgr/io.c | 142 +
drivers/staging/tidspbridge/pmgr/ioobj.h | 38 +
drivers/staging/tidspbridge/pmgr/msg.c | 129 +
drivers/staging/tidspbridge/pmgr/msgobj.h | 38 +
drivers/staging/tidspbridge/rmgr/dbdcd.c | 1506 +++++++++
drivers/staging/tidspbridge/rmgr/disp.c | 754 +++++
drivers/staging/tidspbridge/rmgr/drv.c | 1047 +++++++
drivers/staging/tidspbridge/rmgr/drv_interface.c | 644 ++++
drivers/staging/tidspbridge/rmgr/drv_interface.h | 27 +
drivers/staging/tidspbridge/rmgr/dspdrv.c | 142 +
drivers/staging/tidspbridge/rmgr/mgr.c | 374 +++
drivers/staging/tidspbridge/rmgr/nldr.c | 1999 ++++++++++++
drivers/staging/tidspbridge/rmgr/node.c | 3231 ++++++++++++++++++++
drivers/staging/tidspbridge/rmgr/proc.c | 1948 ++++++++++++
drivers/staging/tidspbridge/rmgr/pwr.c | 182 ++
drivers/staging/tidspbridge/rmgr/rmm.c | 535 ++++
drivers/staging/tidspbridge/rmgr/strm.c | 861 ++++++
drivers/staging/tidspbridge/services/cfg.c | 253 ++
drivers/staging/tidspbridge/services/ntfy.c | 31 +
drivers/staging/tidspbridge/services/services.c | 69 +
drivers/staging/tidspbridge/services/sync.c | 104 +
152 files changed, 51105 insertions(+), 0 deletions(-)
create mode 100644 drivers/staging/tidspbridge/Documentation/CONTRIBUTORS
create mode 100644 drivers/staging/tidspbridge/Documentation/README
create mode 100644 drivers/staging/tidspbridge/Documentation/error-codes
create mode 100644 drivers/staging/tidspbridge/Kconfig
create mode 100644 drivers/staging/tidspbridge/Makefile
create mode 100644 drivers/staging/tidspbridge/TODO
create mode 100644 drivers/staging/tidspbridge/core/_cmm.h
create mode 100644 drivers/staging/tidspbridge/core/_deh.h
create mode 100644 drivers/staging/tidspbridge/core/_msg_sm.h
create mode 100644 drivers/staging/tidspbridge/core/_tiomap.h
create mode 100644 drivers/staging/tidspbridge/core/_tiomap_pwr.h
create mode 100644 drivers/staging/tidspbridge/core/chnl_sm.c
create mode 100644 drivers/staging/tidspbridge/core/dsp-clock.c
create mode 100644 drivers/staging/tidspbridge/core/io_sm.c
create mode 100644 drivers/staging/tidspbridge/core/mmu_fault.c
create mode 100644 drivers/staging/tidspbridge/core/mmu_fault.h
create mode 100644 drivers/staging/tidspbridge/core/msg_sm.c
create mode 100644 drivers/staging/tidspbridge/core/tiomap3430.c
create mode 100644 drivers/staging/tidspbridge/core/tiomap3430_pwr.c
create mode 100644 drivers/staging/tidspbridge/core/tiomap_io.c
create mode 100644 drivers/staging/tidspbridge/core/tiomap_io.h
create mode 100644 drivers/staging/tidspbridge/core/ue_deh.c
create mode 100644 drivers/staging/tidspbridge/core/wdt.c
create mode 100644 drivers/staging/tidspbridge/dynload/cload.c
create mode 100644 drivers/staging/tidspbridge/dynload/dload_internal.h
create mode 100644 drivers/staging/tidspbridge/dynload/doff.h
create mode 100644 drivers/staging/tidspbridge/dynload/getsection.c
create mode 100644 drivers/staging/tidspbridge/dynload/header.h
create mode 100644 drivers/staging/tidspbridge/dynload/module_list.h
create mode 100644 drivers/staging/tidspbridge/dynload/params.h
create mode 100644 drivers/staging/tidspbridge/dynload/reloc.c
create mode 100644 drivers/staging/tidspbridge/dynload/reloc_table.h
create mode 100644 drivers/staging/tidspbridge/dynload/reloc_table_c6000.c
create mode 100644 drivers/staging/tidspbridge/dynload/tramp.c
create mode 100644 drivers/staging/tidspbridge/dynload/tramp_table_c6000.c
create mode 100644 drivers/staging/tidspbridge/gen/gb.c
create mode 100644 drivers/staging/tidspbridge/gen/gh.c
create mode 100644 drivers/staging/tidspbridge/gen/gs.c
create mode 100644 drivers/staging/tidspbridge/gen/uuidutil.c
create mode 100644 drivers/staging/tidspbridge/hw/EasiGlobal.h
create mode 100644 drivers/staging/tidspbridge/hw/GlobalTypes.h
create mode 100644 drivers/staging/tidspbridge/hw/MMUAccInt.h
create mode 100644 drivers/staging/tidspbridge/hw/MMURegAcM.h
create mode 100644 drivers/staging/tidspbridge/hw/hw_defs.h
create mode 100644 drivers/staging/tidspbridge/hw/hw_mmu.c
create mode 100644 drivers/staging/tidspbridge/hw/hw_mmu.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/_chnl_sm.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/brddefs.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/cfg.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/cfgdefs.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/chnl.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/chnldefs.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/chnlpriv.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/clk.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/cmm.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/cmmdefs.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/cod.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/dbc.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/dbdcd.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/dbdcddef.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/dbdefs.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/dbldefs.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/dbll.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/dblldefs.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/dbtype.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/dehdefs.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/dev.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/devdefs.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/disp.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/dispdefs.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/dmm.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/drv.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/drvdefs.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/dspapi-ioctl.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/dspapi.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/dspchnl.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/dspdefs.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/dspdeh.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/dspdrv.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/dspio.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/dspioctl.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/dspmsg.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/dynamic_loader.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/gb.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/getsection.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/gh.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/gs.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/host_os.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/io.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/io_sm.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/iodefs.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/ldr.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/list.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/mbx_sh.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/memdefs.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/mgr.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/mgrpriv.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/msg.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/msgdefs.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/nldr.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/nldrdefs.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/node.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/nodedefs.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/nodepriv.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/ntfy.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/proc.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/procpriv.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/pwr.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/pwr_sh.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/resourcecleanup.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/rmm.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/rms_sh.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/rmstypes.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/services.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/std.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/strm.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/strmdefs.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/sync.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/utildefs.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/uuidutil.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/wdt.h
create mode 100644 drivers/staging/tidspbridge/pmgr/chnl.c
create mode 100644 drivers/staging/tidspbridge/pmgr/chnlobj.h
create mode 100644 drivers/staging/tidspbridge/pmgr/cmm.c
create mode 100644 drivers/staging/tidspbridge/pmgr/cod.c
create mode 100644 drivers/staging/tidspbridge/pmgr/dbll.c
create mode 100644 drivers/staging/tidspbridge/pmgr/dev.c
create mode 100644 drivers/staging/tidspbridge/pmgr/dmm.c
create mode 100644 drivers/staging/tidspbridge/pmgr/dspapi.c
create mode 100644 drivers/staging/tidspbridge/pmgr/io.c
create mode 100644 drivers/staging/tidspbridge/pmgr/ioobj.h
create mode 100644 drivers/staging/tidspbridge/pmgr/msg.c
create mode 100644 drivers/staging/tidspbridge/pmgr/msgobj.h
create mode 100644 drivers/staging/tidspbridge/rmgr/dbdcd.c
create mode 100644 drivers/staging/tidspbridge/rmgr/disp.c
create mode 100644 drivers/staging/tidspbridge/rmgr/drv.c
create mode 100644 drivers/staging/tidspbridge/rmgr/drv_interface.c
create mode 100644 drivers/staging/tidspbridge/rmgr/drv_interface.h
create mode 100644 drivers/staging/tidspbridge/rmgr/dspdrv.c
create mode 100644 drivers/staging/tidspbridge/rmgr/mgr.c
create mode 100644 drivers/staging/tidspbridge/rmgr/nldr.c
create mode 100644 drivers/staging/tidspbridge/rmgr/node.c
create mode 100644 drivers/staging/tidspbridge/rmgr/proc.c
create mode 100644 drivers/staging/tidspbridge/rmgr/pwr.c
create mode 100644 drivers/staging/tidspbridge/rmgr/rmm.c
create mode 100644 drivers/staging/tidspbridge/rmgr/strm.c
create mode 100644 drivers/staging/tidspbridge/services/cfg.c
create mode 100644 drivers/staging/tidspbridge/services/ntfy.c
create mode 100644 drivers/staging/tidspbridge/services/services.c
create mode 100644 drivers/staging/tidspbridge/services/sync.c
From: Omar Ramirez Luna <[email protected]>
Add a README with a general overview of TI's DSP Bridge driver,
a short explanation of how error codes are currently used,
and a CONTRIBUTORS file with all past & present contributors.
For additional information about TI's DSP Bridge,
check out http://omapzoom.org/gf/project/omapbridge/docman/?subdir=3
Note: if by any chance we forgot to mention any contributor,
please let us know and we will fix that.
Signed-off-by: Omar Ramirez Luna <[email protected]>
Signed-off-by: Kanigeri, Hari <[email protected]>
Signed-off-by: Ameya Palande <[email protected]>
Signed-off-by: Guzman Lugo, Fernando <[email protected]>
Signed-off-by: Hebbar, Shivananda <[email protected]>
Signed-off-by: Ramos Falcon, Ernesto <[email protected]>
Signed-off-by: Felipe Contreras <[email protected]>
Signed-off-by: Anna, Suman <[email protected]>
Signed-off-by: Gupta, Ramesh <[email protected]>
Signed-off-by: Gomez Castellanos, Ivan <[email protected]>
Signed-off-by: Andy Shevchenko <[email protected]>
Signed-off-by: Armando Uribe De Leon <[email protected]>
Signed-off-by: Deepak Chitriki <[email protected]>
Signed-off-by: Menon, Nishanth <[email protected]>
Signed-off-by: Phil Carmody <[email protected]>
Signed-off-by: Ohad Ben-Cohen <[email protected]>
---
.../staging/tidspbridge/Documentation/CONTRIBUTORS | 82 ++++++++++
drivers/staging/tidspbridge/Documentation/README | 70 +++++++++
.../staging/tidspbridge/Documentation/error-codes | 157 ++++++++++++++++++++
3 files changed, 309 insertions(+), 0 deletions(-)
create mode 100644 drivers/staging/tidspbridge/Documentation/CONTRIBUTORS
create mode 100644 drivers/staging/tidspbridge/Documentation/README
create mode 100644 drivers/staging/tidspbridge/Documentation/error-codes
diff --git a/drivers/staging/tidspbridge/Documentation/CONTRIBUTORS b/drivers/staging/tidspbridge/Documentation/CONTRIBUTORS
new file mode 100644
index 0000000..b40e7a6
--- /dev/null
+++ b/drivers/staging/tidspbridge/Documentation/CONTRIBUTORS
@@ -0,0 +1,82 @@
+TI DSP/Bridge Driver - Contributors File
+
+The DSP/Bridge project wishes to thank all of its contributors; the current
+bridge driver is the result of the work of all of them. If any name is
+accidentally omitted, let us know by sending a mail to [email protected]
+or [email protected].
+
+Please keep the following list in alphabetical order.
+
+ Suman Anna
+ Sripal Bagadia
+ Felipe Balbi
+ Ohad Ben-Cohen
+ Phil Carmody
+ Deepak Chitriki
+ Felipe Contreras
+ Hiroshi Doyu
+ Seth Forshee
+ Ivan Gomez Castellanos
+ Mark Grosen
+ Ramesh Gupta G
+ Fernando Guzman Lugo
+ Axel Haslam
+ Janet Head
+ Shivananda Hebbar
+ Hari Kanigeri
+ Tony Lindgren
+ Antonio Luna
+ Hari Nagalla
+ Nishanth Menon
+ Ameya Palande
+ Vijay Pasam
+ Gilbert Pitney
+ Omar Ramirez Luna
+ Ernesto Ramos
+ Chris Ring
+ Larry Schiefer
+ Rebecca Schultz Zavin
+ Bhavin Shah
+ Andy Shevchenko
+ Jeff Taylor
+ Roman Tereshonkov
+ Armando Uribe de Leon
+ Nischal Varide
+ Wenbiao Wang
+
+
+
+The following list was taken from the file revision history; if you recognize
+your alias or made any contribution to the project, please let us know so we
+can properly credit your work.
+
+ ag
+ ap
+ cc
+ db
+ dh4
+ dr
+ hp
+ jg
+ kc
+ kln
+ kw
+ ge
+ gv
+ map
+ mf
+ mk
+ mr
+ nn
+ rajesh
+ rg
+ rr
+ rt
+ sb
+ sg
+ sh
+ sp
+ srid
+ swa
+ vp
+ ww
diff --git a/drivers/staging/tidspbridge/Documentation/README b/drivers/staging/tidspbridge/Documentation/README
new file mode 100644
index 0000000..df6d371
--- /dev/null
+++ b/drivers/staging/tidspbridge/Documentation/README
@@ -0,0 +1,70 @@
+ Linux DSP/BIOS Bridge release
+
+DSP/BIOS Bridge overview
+========================
+
+DSP/BIOS Bridge is designed for platforms that contain a GPP and one or more
+attached DSPs. The GPP is considered the master or "host" processor, and the
+attached DSPs are processing resources that can be utilized by applications
+and drivers running on the GPP.
+
+The abstraction that DSP/BIOS Bridge supplies is a direct link between a GPP
+program and a DSP task. This communication link is partitioned into two
+types of sub-links: messaging (short, fixed-length packets) and data
+streaming (multiple, large buffers). Each sub-link operates independently,
+and features in-order delivery of data, meaning that messages are delivered
+in the order they were submitted to the message link, and stream buffers are
+delivered in the order they were submitted to the stream link.
+
+In addition, a GPP client can specify what inputs and outputs a DSP task
+uses. DSP tasks typically use message objects for passing control and status
+information and stream objects for efficient streaming of real-time data.
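+
+As a rough illustration of the GPP-side flow (only DSPNode_Create,
+DSPNode_Execute and DSPNode_Delete below are actual API entry points, also
+mentioned in Documentation/error-codes; the handle and argument usage is a
+simplified sketch rather than the exact signatures):
+
+    /* GPP client: drive one DSP node through its life cycle */
+    status = DSPNode_Create(node);          /* run the node's create phase */
+    if (!status)
+            status = DSPNode_Execute(node); /* start the execute phase */
+    /* ... exchange messages and/or stream buffers with the node ... */
+    DSPNode_Delete(node);                   /* delete phase, free the node */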
+
+GPP Software Architecture
+=========================
+
+A GPP application communicates with its associated DSP task running on the
+DSP subsystem using the DSP/BIOS Bridge API. For example, a GPP audio
+application can use the API to pass messages to a DSP task that is managing
+data flowing from analog-to-digital converters (ADCs) to digital-to-analog
+converters (DACs).
+
+From the perspective of the GPP OS, the DSP is treated as just another
+peripheral device. Most high-level GPP OSes support a device driver
+model, whereby applications can safely access and share a hardware peripheral
+through standard driver interfaces. Therefore, to allow multiple GPP
+applications to share access to the DSP, the GPP side of DSP/BIOS Bridge
+implements a device driver for the DSP.
+
+Since driver interfaces are not always standard across GPP OSes, and to
+provide some level of interoperability of application code using DSP/BIOS
+Bridge between GPP OSes, DSP/BIOS Bridge provides a standard library of APIs
+which wrap calls into the device driver. So, rather than calling GPP
+OS-specific driver interfaces, applications (and even other device drivers)
+can use the standard API library directly.
+
+DSP Software Architecture
+=========================
+
+For DSP/BIOS, DSP/BIOS Bridge adds a device-independent streaming I/O (STRM)
+interface, a messaging interface (NODE), and a Resource Manager (RM) Server.
+The RM Server runs as a task of DSP/BIOS and is subservient to commands
+and queries from the GPP. It executes commands to start and stop DSP signal
+processing nodes in response to GPP programs making requests through the
+(GPP-side) API.
+
+DSP tasks started by the RM Server are similar to any other DSP task with two
+important differences: they must follow a specific task model consisting of
+three C-callable functions (node create, execute, and delete), with specific
+sets of arguments, and they have a pre-defined task environment established
+by the RM Server.
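+
+A minimal, purely illustrative sketch of such a node follows (the function
+names and empty argument lists are placeholders; the task model prescribes
+specific argument sets for each phase, which are not reproduced here):
+
+    /* DSP-side node: the three C-callable phase functions */
+    int mynode_create(void)  { /* allocate node resources       */ return 0; }
+    int mynode_execute(void) { /* STRM/NODE processing loop     */ return 0; }
+    int mynode_delete(void)  { /* release resources at teardown */ return 0; }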
+
+Tasks started by the RM Server communicate using the STRM and NODE interfaces
+and act as servers for their corresponding GPP clients, performing signal
+processing functions as requested by messages sent by their GPP client.
+Typically, a DSP task moves data from source devices to sink devices using
+device independent I/O streams, performing application-specific processing
+and transformations on the data while it is moved. For example, an audio
+task might perform audio decompression (ADPCM, MPEG, CELP) on data received
+from a GPP audio driver and then send the decompressed linear samples to a
+digital-to-analog converter.
diff --git a/drivers/staging/tidspbridge/Documentation/error-codes b/drivers/staging/tidspbridge/Documentation/error-codes
new file mode 100644
index 0000000..12826e2
--- /dev/null
+++ b/drivers/staging/tidspbridge/Documentation/error-codes
@@ -0,0 +1,157 @@
+ DSP/Bridge Error Code Guide
+
+
+The success code is always 0, except for one case where a success status
+different from 0 is possible: when enumerating a series of DSP objects,
+running out of objects to enumerate is still considered a successful case,
+and a positive ENODATA is returned (TODO: change to avoid this case).
+
+Errors are returned as -1; if a specific code is expected, it can be
+retrieved in user space by reading the errno symbol defined in errno.h. For
+specific details on the implementation, a copy of the standard used should
+be read first.
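+
+For example, a GPP application could react to a failing bridge call roughly
+like this (a sketch only; dsp_call() stands in for any bridge API entry
+point and is not a real function):
+
+    #include <errno.h>
+
+    if (dsp_call() < 0) {
+            if (errno == ETIME)
+                    ; /* the operation timed out, see [ETIME] below */
+    }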
+
+The error codes used by this driver are:
+
+[EPERM]
+ General driver failure.
+
+ According to the use case the following might apply:
+ - Device is in 'sleep/suspend' mode due to DPM.
+ - User cannot mark end of stream on an input channel.
+ - Requested operation is invalid for the node type.
+ - Invalid alignment for the node messaging buffer.
+ - The specified direction is invalid for the stream.
+ - Invalid stream mode.
+
+[ENOENT]
+ The specified object or file was not found.
+
+[ESRCH]
+ A shared memory buffer contained in a message or stream could not be mapped
+ to the GPP client process's virtual space.
+
+[EIO]
+ Driver interface I/O error.
+
+ or:
+ - Unable to plug channel ISR for configured IRQ.
+ - No free I/O request packets are available.
+
+[ENXIO]
+ Unable to find a named section in DSP executable or a non-existent memory
+ segment identifier was specified.
+
+[EBADF]
+ General error for file handling:
+
+ - Unable to open file.
+ - Unable to read file.
+ - An error occurred while parsing the DSP executable file.
+
+[ENOMEM]
+ A memory allocation failure occurred.
+
+[EACCES]
+ - Unable to read content of DCD data section; this is typically caused by
+ improperly configured nodes.
+ - Unable to decode DCD data section content; this is typically caused by
+ changes to DSP/BIOS Bridge data structures.
+ - Unable to get pointer to DCD data section; this is typically caused by
+ improperly configured UUIDs.
+ - Unable to load file containing DCD data section; this is typically
+ caused by a missing COFF file.
+ - The specified COFF file does not contain a valid node registration
+ section.
+
+[EFAULT]
+ Invalid pointer or handler.
+
+[EEXIST]
+ Attempted to create a channel manager when one already exists.
+
+[EINVAL]
+ Invalid argument.
+
+[ESPIPE]
+ Symbol not found in the COFF file. DSPNode_Create will return this if
+ the iAlg function table for an xDAIS socket is not found in the COFF file.
+ In this case, force the symbol to be linked into the COFF file.
+ DSPNode_Create, DSPNode_Execute, and DSPNode_Delete will return this if
+ the create, execute, or delete phase function, respectively, could not be
+ found in the COFF file.
+
+ - No symbol table is loaded/found for this board.
+ - Unable to initialize the ZL COFF parsing module.
+
+[EPIPE]
+ I/O is currently pending.
+
+ - End of stream was already requested on this output channel.
+
+[EDOM]
+ A parameter is specified outside its valid range.
+
+[ENOSYS]
+ The indicated operation is not supported.
+
+[EIDRM]
+ During enumeration a change in the number or properties of the objects
+ has occurred.
+
+[ECHRNG]
+ Attempt to create a channel manager with too many channels, or channel ID
+ out of range.
+
+[EBADR]
+ The state of the specified object is incorrect for the requested operation.
+
+ - Invalid segment ID.
+
+[ENODATA]
+ Unable to retrieve resource information from the registry.
+
+ - No more registry values.
+
+[ETIME]
+ A timeout occurred before the requested operation could complete.
+
+[ENOSR]
+ A stream has been issued the maximum number of buffers allowed in the
+ stream at once; buffers must be reclaimed from the stream before any more
+ can be issued.
+
+ - No free channels are available.
+
+[EILSEQ]
+ Error occurred in a dynamic loader library function.
+
+[EISCONN]
+ The specified connection already exists.
+
+[ENOTCONN]
+ Nodes not connected.
+
+[ETIMEDOUT]
+ Timeout occurred waiting for a response from the hardware.
+
+ - Wait for flush operation on an output channel timed out.
+
+[ECONNREFUSED]
+ No more connections can be made for this node.
+
+[EALREADY]
+ Channel is already in use.
+
+[EREMOTEIO]
+ dwTimeOut parameter was CHNL_IOCNOWAIT, yet no I/O completions were
+ queued.
+
+[ECANCELED]
+ I/O has been cancelled on this channel.
+
+[ENOKEY]
+ Invalid subkey parameter.
+
+ - UUID not found in registry.
--
1.7.0.4
From: Omar Ramirez Luna <[email protected]>
Add TI's DSP Bridge core driver sources
Signed-off-by: Omar Ramirez Luna <[email protected]>
Signed-off-by: Kanigeri, Hari <[email protected]>
Signed-off-by: Ameya Palande <[email protected]>
Signed-off-by: Guzman Lugo, Fernando <[email protected]>
Signed-off-by: Hebbar, Shivananda <[email protected]>
Signed-off-by: Ramos Falcon, Ernesto <[email protected]>
Signed-off-by: Felipe Contreras <[email protected]>
Signed-off-by: Anna, Suman <[email protected]>
Signed-off-by: Gupta, Ramesh <[email protected]>
Signed-off-by: Gomez Castellanos, Ivan <[email protected]>
Signed-off-by: Andy Shevchenko <[email protected]>
Signed-off-by: Armando Uribe De Leon <[email protected]>
Signed-off-by: Deepak Chitriki <[email protected]>
Signed-off-by: Menon, Nishanth <[email protected]>
Signed-off-by: Phil Carmody <[email protected]>
Signed-off-by: Ohad Ben-Cohen <[email protected]>
---
drivers/staging/tidspbridge/core/_cmm.h | 45 +
drivers/staging/tidspbridge/core/_deh.h | 35 +
drivers/staging/tidspbridge/core/_msg_sm.h | 142 ++
drivers/staging/tidspbridge/core/_tiomap.h | 377 ++++
drivers/staging/tidspbridge/core/_tiomap_pwr.h | 85 +
drivers/staging/tidspbridge/core/chnl_sm.c | 1015 +++++++++
drivers/staging/tidspbridge/core/dsp-clock.c | 421 ++++
drivers/staging/tidspbridge/core/io_sm.c | 2410 +++++++++++++++++++++
drivers/staging/tidspbridge/core/mmu_fault.c | 139 ++
drivers/staging/tidspbridge/core/mmu_fault.h | 36 +
drivers/staging/tidspbridge/core/msg_sm.c | 673 ++++++
drivers/staging/tidspbridge/core/tiomap3430.c | 1887 ++++++++++++++++
drivers/staging/tidspbridge/core/tiomap3430_pwr.c | 604 ++++++
drivers/staging/tidspbridge/core/tiomap_io.c | 458 ++++
drivers/staging/tidspbridge/core/tiomap_io.h | 104 +
drivers/staging/tidspbridge/core/ue_deh.c | 303 +++
drivers/staging/tidspbridge/core/wdt.c | 150 ++
17 files changed, 8884 insertions(+), 0 deletions(-)
create mode 100644 drivers/staging/tidspbridge/core/_cmm.h
create mode 100644 drivers/staging/tidspbridge/core/_deh.h
create mode 100644 drivers/staging/tidspbridge/core/_msg_sm.h
create mode 100644 drivers/staging/tidspbridge/core/_tiomap.h
create mode 100644 drivers/staging/tidspbridge/core/_tiomap_pwr.h
create mode 100644 drivers/staging/tidspbridge/core/chnl_sm.c
create mode 100644 drivers/staging/tidspbridge/core/dsp-clock.c
create mode 100644 drivers/staging/tidspbridge/core/io_sm.c
create mode 100644 drivers/staging/tidspbridge/core/mmu_fault.c
create mode 100644 drivers/staging/tidspbridge/core/mmu_fault.h
create mode 100644 drivers/staging/tidspbridge/core/msg_sm.c
create mode 100644 drivers/staging/tidspbridge/core/tiomap3430.c
create mode 100644 drivers/staging/tidspbridge/core/tiomap3430_pwr.c
create mode 100644 drivers/staging/tidspbridge/core/tiomap_io.c
create mode 100644 drivers/staging/tidspbridge/core/tiomap_io.h
create mode 100644 drivers/staging/tidspbridge/core/ue_deh.c
create mode 100644 drivers/staging/tidspbridge/core/wdt.c
diff --git a/drivers/staging/tidspbridge/core/_cmm.h b/drivers/staging/tidspbridge/core/_cmm.h
new file mode 100644
index 0000000..7660bef
--- /dev/null
+++ b/drivers/staging/tidspbridge/core/_cmm.h
@@ -0,0 +1,45 @@
+/*
+ * _cmm.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Private header file defining CMM manager objects and defines needed
+ * by IO manager to register shared memory regions when DSP base image
+ * is loaded (bridge_io_on_loaded).
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef _CMM_
+#define _CMM_
+
+/*
+ * These target side symbols define the beginning and ending addresses
+ * of the section of shared memory used for shared memory manager CMM.
+ * They are defined in the *cfg.cmd file by cdb code.
+ */
+#define SHM0_SHARED_BASE_SYM "_SHM0_BEG"
+#define SHM0_SHARED_END_SYM "_SHM0_END"
+#define SHM0_SHARED_RESERVED_BASE_SYM "_SHM0_RSVDSTRT"
+
+/*
+ * Shared Memory Region #0 (SHMSEG0) is used in the following way:
+ *
+ * |(_SHM0_BEG) | (_SHM0_RSVDSTRT) | (_SHM0_END)
+ * V V V
+ * ------------------------------------------------------------
+ * | DSP-side allocations | GPP-side allocations |
+ * ------------------------------------------------------------
+ *
+ *
+ */
+
+#endif /* _CMM_ */
diff --git a/drivers/staging/tidspbridge/core/_deh.h b/drivers/staging/tidspbridge/core/_deh.h
new file mode 100644
index 0000000..8da2212
--- /dev/null
+++ b/drivers/staging/tidspbridge/core/_deh.h
@@ -0,0 +1,35 @@
+/*
+ * _deh.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Private header for DEH module.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef _DEH_
+#define _DEH_
+
+#include <dspbridge/ntfy.h>
+#include <dspbridge/dspdefs.h>
+
+/* DEH Manager: only one created per board: */
+struct deh_mgr {
+ struct bridge_dev_context *hbridge_context; /* Bridge context. */
+ struct ntfy_object *ntfy_obj; /* NTFY object */
+ struct dsp_errorinfo err_info; /* DSP exception info. */
+
+ /* MMU Fault DPC */
+ struct tasklet_struct dpc_tasklet;
+};
+
+#endif /* _DEH_ */
diff --git a/drivers/staging/tidspbridge/core/_msg_sm.h b/drivers/staging/tidspbridge/core/_msg_sm.h
new file mode 100644
index 0000000..556de5c
--- /dev/null
+++ b/drivers/staging/tidspbridge/core/_msg_sm.h
@@ -0,0 +1,142 @@
+/*
+ * _msg_sm.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Private header file defining msg_ctrl manager objects and defines needed
+ * by IO manager.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef _MSG_SM_
+#define _MSG_SM_
+
+#include <dspbridge/list.h>
+#include <dspbridge/msgdefs.h>
+
+/*
+ * These target side symbols define the beginning and ending addresses
+ * of the section of shared memory used for messages. They are
+ * defined in the *cfg.cmd file by cdb code.
+ */
+#define MSG_SHARED_BUFFER_BASE_SYM "_MSG_BEG"
+#define MSG_SHARED_BUFFER_LIMIT_SYM "_MSG_END"
+
+#ifndef _CHNL_WORDSIZE
+#define _CHNL_WORDSIZE 4 /* default _CHNL_WORDSIZE is 4 bytes/word */
+#endif
+
+/*
+ * ======== msg_ctrl ========
+ * There is a control structure for messages to the DSP, and a control
+ * structure for messages from the DSP. The shared memory region for
+ * transferring messages is partitioned as follows:
+ *
+ * ----------------------------------------------------------
+ * |Control | Messages from DSP | Control | Messages to DSP |
+ * ----------------------------------------------------------
+ *
+ * msg_ctrl control structure for messages to the DSP is used in the following
+ * way:
+ *
+ * buf_empty - This flag is set to FALSE by the GPP after it has output
+ * messages for the DSP. The DSP host driver sets it to
+ * TRUE after it has copied the messages.
+ * post_swi - Set to 1 by the GPP after it has written the messages,
+ * set the size, and set buf_empty to FALSE.
+ * The DSP Host driver uses SWI_andn of the post_swi field
+ * when a host interrupt occurs. The host driver clears
+ * this after posting the SWI.
+ * size - Number of messages to be read by the DSP.
+ *
+ * For messages from the DSP:
+ * buf_empty - This flag is set to FALSE by the DSP after it has output
+ * messages for the GPP. The DPC on the GPP sets it to
+ * TRUE after it has copied the messages.
+ * post_swi - Set to 1 by the DPC on the GPP after copying the messages.
+ * size - Number of messages to be read by the GPP.
+ */
+struct msg_ctrl {
+ u32 buf_empty; /* to/from DSP buffer is empty */
+ u32 post_swi; /* Set to "1" to post msg_ctrl SWI */
+ u32 size; /* Number of messages to/from the DSP */
+ u32 resvd;
+};
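+
+/*
+ * Illustrative GPP -> DSP send sequence using the fields above (a sketch of
+ * the protocol described in the msg_ctrl comment, not code taken from this
+ * driver):
+ *
+ *   copy the messages into the "Messages to DSP" area;
+ *   ctrl->size = num_msgs;
+ *   ctrl->buf_empty = false;
+ *   ctrl->post_swi = 1;
+ *   interrupt the DSP (e.g. via the mailbox);
+ */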
+
+/*
+ * ======== msg_mgr ========
+ * The msg_mgr maintains a list of all MSG_QUEUEs. Each NODE object can
+ * have a msg_queue to hold all messages that come up from the corresponding
+ * node on the DSP. The msg_mgr also has a shared queue of messages
+ * ready to go to the DSP.
+ */
+struct msg_mgr {
+ /* The first field must match that in msgobj.h */
+
+ /* Function interface to Bridge driver */
+ struct bridge_drv_interface *intf_fxns;
+
+ struct io_mgr *hio_mgr; /* IO manager */
+ struct lst_list *queue_list; /* List of MSG_QUEUEs */
+ spinlock_t msg_mgr_lock; /* For critical sections */
+ /* Signalled when MsgFrame is available */
+ struct sync_object *sync_event;
+ struct lst_list *msg_free_list; /* Free MsgFrames ready to be filled */
+ struct lst_list *msg_used_list; /* MsgFrames ready to go to DSP */
+ u32 msgs_pending; /* # of queued messages to go to DSP */
+ u32 max_msgs; /* Max # of msgs that fit in buffer */
+ msg_onexit on_exit; /* called when RMS_EXIT is received */
+};
+
+/*
+ * ======== msg_queue ========
+ * Each NODE has a msg_queue for receiving messages from the
+ * corresponding node on the DSP. The msg_queue object maintains a list
+ * of messages that have been sent to the host, but not yet read (MSG_Get),
+ * and a list of free frames that can be filled when new messages arrive
+ * from the DSP.
+ * The msg_queue's sync_event gets posted when a message is ready.
+ */
+struct msg_queue {
+ struct list_head list_elem;
+ struct msg_mgr *hmsg_mgr;
+ u32 max_msgs; /* Node message depth */
+ u32 msgq_id; /* Node environment pointer */
+ struct lst_list *msg_free_list; /* Free MsgFrames ready to be filled */
+	/* Filled MsgFrames waiting to be read */
+ struct lst_list *msg_used_list;
+ void *arg; /* Handle passed to mgr on_exit callback */
+ struct sync_object *sync_event; /* Signalled when message is ready */
+ struct sync_object *sync_done; /* For synchronizing cleanup */
+ struct sync_object *sync_done_ack; /* For synchronizing cleanup */
+ struct ntfy_object *ntfy_obj; /* For notification of message ready */
+ bool done; /* TRUE <==> deleting the object */
+ u32 io_msg_pend; /* Number of pending MSG_get/put calls */
+};
+
+/*
+ * ======== msg_dspmsg ========
+ */
+struct msg_dspmsg {
+ struct dsp_msg msg;
+ u32 msgq_id; /* Identifies the node the message goes to */
+};
+
+/*
+ * ======== msg_frame ========
+ */
+struct msg_frame {
+ struct list_head list_elem;
+ struct msg_dspmsg msg_data;
+};
+
+#endif /* _MSG_SM_ */
diff --git a/drivers/staging/tidspbridge/core/_tiomap.h b/drivers/staging/tidspbridge/core/_tiomap.h
new file mode 100644
index 0000000..bf0164e
--- /dev/null
+++ b/drivers/staging/tidspbridge/core/_tiomap.h
@@ -0,0 +1,377 @@
+/*
+ * _tiomap.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Definitions and types private to this Bridge driver.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef _TIOMAP_
+#define _TIOMAP_
+
+#include <plat/powerdomain.h>
+#include <plat/clockdomain.h>
+#include <mach-omap2/prm-regbits-34xx.h>
+#include <mach-omap2/cm-regbits-34xx.h>
+#include <dspbridge/devdefs.h>
+#include <hw_defs.h>
+#include <dspbridge/dspioctl.h> /* for bridge_ioctl_extproc defn */
+#include <dspbridge/sync.h>
+#include <dspbridge/clk.h>
+
+struct map_l4_peripheral {
+ u32 phys_addr;
+ u32 dsp_virt_addr;
+};
+
+#define ARM_MAILBOX_START 0xfffcf000
+#define ARM_MAILBOX_LENGTH 0x800
+
+/* New Registers in OMAP3.1 */
+
+#define TESTBLOCK_ID_START 0xfffed400
+#define TESTBLOCK_ID_LENGTH 0xff
+
+/* ID Returned by OMAP1510 */
+#define TBC_ID_VALUE 0xB47002F
+
+#define SPACE_LENGTH 0x2000
+#define API_CLKM_DPLL_DMA 0xfffec000
+#define ARM_INTERRUPT_OFFSET 0xb00
+
+#define BIOS24XX
+
+#define L4_PERIPHERAL_NULL 0x0
+#define DSPVA_PERIPHERAL_NULL 0x0
+
+#define MAX_LOCK_TLB_ENTRIES 15
+
+#define L4_PERIPHERAL_PRM 0x48306000 /*PRM L4 Peripheral */
+#define DSPVA_PERIPHERAL_PRM 0x1181e000
+#define L4_PERIPHERAL_SCM 0x48002000 /*SCM L4 Peripheral */
+#define DSPVA_PERIPHERAL_SCM 0x1181f000
+#define L4_PERIPHERAL_MMU 0x5D000000 /*MMU L4 Peripheral */
+#define DSPVA_PERIPHERAL_MMU 0x11820000
+#define L4_PERIPHERAL_CM 0x48004000 /* Core L4, Clock Management */
+#define DSPVA_PERIPHERAL_CM 0x1181c000
+#define L4_PERIPHERAL_PER 0x48005000 /* PER */
+#define DSPVA_PERIPHERAL_PER 0x1181d000
+
+#define L4_PERIPHERAL_GPIO1 0x48310000
+#define DSPVA_PERIPHERAL_GPIO1 0x11809000
+#define L4_PERIPHERAL_GPIO2 0x49050000
+#define DSPVA_PERIPHERAL_GPIO2 0x1180a000
+#define L4_PERIPHERAL_GPIO3 0x49052000
+#define DSPVA_PERIPHERAL_GPIO3 0x1180b000
+#define L4_PERIPHERAL_GPIO4 0x49054000
+#define DSPVA_PERIPHERAL_GPIO4 0x1180c000
+#define L4_PERIPHERAL_GPIO5 0x49056000
+#define DSPVA_PERIPHERAL_GPIO5 0x1180d000
+
+#define L4_PERIPHERAL_IVA2WDT 0x49030000
+#define DSPVA_PERIPHERAL_IVA2WDT 0x1180e000
+
+#define L4_PERIPHERAL_DISPLAY 0x48050000
+#define DSPVA_PERIPHERAL_DISPLAY 0x1180f000
+
+#define L4_PERIPHERAL_SSI 0x48058000
+#define DSPVA_PERIPHERAL_SSI 0x11804000
+#define L4_PERIPHERAL_GDD 0x48059000
+#define DSPVA_PERIPHERAL_GDD 0x11805000
+#define L4_PERIPHERAL_SS1 0x4805a000
+#define DSPVA_PERIPHERAL_SS1 0x11806000
+#define L4_PERIPHERAL_SS2 0x4805b000
+#define DSPVA_PERIPHERAL_SS2 0x11807000
+
+#define L4_PERIPHERAL_CAMERA 0x480BC000
+#define DSPVA_PERIPHERAL_CAMERA 0x11819000
+
+#define L4_PERIPHERAL_SDMA 0x48056000
+#define DSPVA_PERIPHERAL_SDMA 0x11810000 /* 0x1181d000 conflict w/ PER */
+
+#define L4_PERIPHERAL_UART1 0x4806a000
+#define DSPVA_PERIPHERAL_UART1 0x11811000
+#define L4_PERIPHERAL_UART2 0x4806c000
+#define DSPVA_PERIPHERAL_UART2 0x11812000
+#define L4_PERIPHERAL_UART3 0x49020000
+#define DSPVA_PERIPHERAL_UART3 0x11813000
+
+#define L4_PERIPHERAL_MCBSP1 0x48074000
+#define DSPVA_PERIPHERAL_MCBSP1 0x11814000
+#define L4_PERIPHERAL_MCBSP2 0x49022000
+#define DSPVA_PERIPHERAL_MCBSP2 0x11815000
+#define L4_PERIPHERAL_MCBSP3 0x49024000
+#define DSPVA_PERIPHERAL_MCBSP3 0x11816000
+#define L4_PERIPHERAL_MCBSP4 0x49026000
+#define DSPVA_PERIPHERAL_MCBSP4 0x11817000
+#define L4_PERIPHERAL_MCBSP5 0x48096000
+#define DSPVA_PERIPHERAL_MCBSP5 0x11818000
+
+#define L4_PERIPHERAL_GPTIMER5 0x49038000
+#define DSPVA_PERIPHERAL_GPTIMER5 0x11800000
+#define L4_PERIPHERAL_GPTIMER6 0x4903a000
+#define DSPVA_PERIPHERAL_GPTIMER6 0x11801000
+#define L4_PERIPHERAL_GPTIMER7 0x4903c000
+#define DSPVA_PERIPHERAL_GPTIMER7 0x11802000
+#define L4_PERIPHERAL_GPTIMER8 0x4903e000
+#define DSPVA_PERIPHERAL_GPTIMER8 0x11803000
+
+#define L4_PERIPHERAL_SPI1 0x48098000
+#define DSPVA_PERIPHERAL_SPI1 0x1181a000
+#define L4_PERIPHERAL_SPI2 0x4809a000
+#define DSPVA_PERIPHERAL_SPI2 0x1181b000
+
+#define L4_PERIPHERAL_MBOX 0x48094000
+#define DSPVA_PERIPHERAL_MBOX 0x11808000
+
+#define PM_GRPSEL_BASE 0x48307000
+#define DSPVA_GRPSEL_BASE 0x11821000
+
+#define L4_PERIPHERAL_SIDETONE_MCBSP2 0x49028000
+#define DSPVA_PERIPHERAL_SIDETONE_MCBSP2 0x11824000
+#define L4_PERIPHERAL_SIDETONE_MCBSP3 0x4902a000
+#define DSPVA_PERIPHERAL_SIDETONE_MCBSP3 0x11825000
+
+/* define a static array with L4 mappings */
+static const struct map_l4_peripheral l4_peripheral_table[] = {
+ {L4_PERIPHERAL_MBOX, DSPVA_PERIPHERAL_MBOX},
+ {L4_PERIPHERAL_SCM, DSPVA_PERIPHERAL_SCM},
+ {L4_PERIPHERAL_MMU, DSPVA_PERIPHERAL_MMU},
+ {L4_PERIPHERAL_GPTIMER5, DSPVA_PERIPHERAL_GPTIMER5},
+ {L4_PERIPHERAL_GPTIMER6, DSPVA_PERIPHERAL_GPTIMER6},
+ {L4_PERIPHERAL_GPTIMER7, DSPVA_PERIPHERAL_GPTIMER7},
+ {L4_PERIPHERAL_GPTIMER8, DSPVA_PERIPHERAL_GPTIMER8},
+ {L4_PERIPHERAL_GPIO1, DSPVA_PERIPHERAL_GPIO1},
+ {L4_PERIPHERAL_GPIO2, DSPVA_PERIPHERAL_GPIO2},
+ {L4_PERIPHERAL_GPIO3, DSPVA_PERIPHERAL_GPIO3},
+ {L4_PERIPHERAL_GPIO4, DSPVA_PERIPHERAL_GPIO4},
+ {L4_PERIPHERAL_GPIO5, DSPVA_PERIPHERAL_GPIO5},
+ {L4_PERIPHERAL_IVA2WDT, DSPVA_PERIPHERAL_IVA2WDT},
+ {L4_PERIPHERAL_DISPLAY, DSPVA_PERIPHERAL_DISPLAY},
+ {L4_PERIPHERAL_SSI, DSPVA_PERIPHERAL_SSI},
+ {L4_PERIPHERAL_GDD, DSPVA_PERIPHERAL_GDD},
+ {L4_PERIPHERAL_SS1, DSPVA_PERIPHERAL_SS1},
+ {L4_PERIPHERAL_SS2, DSPVA_PERIPHERAL_SS2},
+ {L4_PERIPHERAL_UART1, DSPVA_PERIPHERAL_UART1},
+ {L4_PERIPHERAL_UART2, DSPVA_PERIPHERAL_UART2},
+ {L4_PERIPHERAL_UART3, DSPVA_PERIPHERAL_UART3},
+ {L4_PERIPHERAL_MCBSP1, DSPVA_PERIPHERAL_MCBSP1},
+ {L4_PERIPHERAL_MCBSP2, DSPVA_PERIPHERAL_MCBSP2},
+ {L4_PERIPHERAL_MCBSP3, DSPVA_PERIPHERAL_MCBSP3},
+ {L4_PERIPHERAL_MCBSP4, DSPVA_PERIPHERAL_MCBSP4},
+ {L4_PERIPHERAL_MCBSP5, DSPVA_PERIPHERAL_MCBSP5},
+ {L4_PERIPHERAL_CAMERA, DSPVA_PERIPHERAL_CAMERA},
+ {L4_PERIPHERAL_SPI1, DSPVA_PERIPHERAL_SPI1},
+ {L4_PERIPHERAL_SPI2, DSPVA_PERIPHERAL_SPI2},
+ {L4_PERIPHERAL_PRM, DSPVA_PERIPHERAL_PRM},
+ {L4_PERIPHERAL_CM, DSPVA_PERIPHERAL_CM},
+ {L4_PERIPHERAL_PER, DSPVA_PERIPHERAL_PER},
+ {PM_GRPSEL_BASE, DSPVA_GRPSEL_BASE},
+ {L4_PERIPHERAL_SIDETONE_MCBSP2, DSPVA_PERIPHERAL_SIDETONE_MCBSP2},
+ {L4_PERIPHERAL_SIDETONE_MCBSP3, DSPVA_PERIPHERAL_SIDETONE_MCBSP3},
+ {L4_PERIPHERAL_NULL, DSPVA_PERIPHERAL_NULL}
+};
+
+/*
+ * 15 10 0
+ * ---------------------------------
+ * |0|0|1|0|0|0|c|c|c|i|i|i|i|i|i|i|
+ * ---------------------------------
+ * | (class) | (module specific) |
+ *
+ * where c -> External Clock Command: Clk & Autoidle Disable/Enable
+ * i -> External Clock ID: Timers 5,6,7,8, McBSP1,2 and WDT3
+ */
+
+/* MBX_PM_CLK_IDMASK: DSP External clock id mask. */
+#define MBX_PM_CLK_IDMASK 0x7F
+
+/* MBX_PM_CLK_CMDSHIFT: DSP External clock command shift. */
+#define MBX_PM_CLK_CMDSHIFT 7
+
+/* MBX_PM_CLK_CMDMASK: DSP External clock command mask. */
+#define MBX_PM_CLK_CMDMASK 7
+
+/* MBX_CORE1_RESOURCES: CORE 1 clock resources. */
+#define MBX_CORE1_RESOURCES 7
+
+/* MBX_CORE2_RESOURCES: CORE 2 clock resources. */
+#define MBX_CORE2_RESOURCES 1
+
+/* MBX_PM_MAX_RESOURCES: Total clock resources. */
+#define MBX_PM_MAX_RESOURCES 11
+
+/* Power Management Commands */
+#define BPWR_DISABLE_CLOCK 0
+#define BPWR_ENABLE_CLOCK 1
+
+/* OMAP242x specific resources */
+enum bpwr_ext_clock_id {
+ BPWR_GP_TIMER5 = 0x10,
+ BPWR_GP_TIMER6,
+ BPWR_GP_TIMER7,
+ BPWR_GP_TIMER8,
+ BPWR_WD_TIMER3,
+ BPWR_MCBSP1,
+ BPWR_MCBSP2,
+ BPWR_MCBSP3,
+ BPWR_MCBSP4,
+ BPWR_MCBSP5,
+ BPWR_SSI = 0x20
+};
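+
+/*
+ * Purely illustrative composition of such a clock command word, following
+ * the bit layout above (not code taken from this driver; the fixed class
+ * bits are omitted):
+ *
+ *   cmd = ((BPWR_ENABLE_CLOCK & MBX_PM_CLK_CMDMASK) << MBX_PM_CLK_CMDSHIFT) |
+ *         (BPWR_GP_TIMER5 & MBX_PM_CLK_IDMASK);
+ */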
+
+static const u32 bpwr_clkid[] = {
+ (u32) BPWR_GP_TIMER5,
+ (u32) BPWR_GP_TIMER6,
+ (u32) BPWR_GP_TIMER7,
+ (u32) BPWR_GP_TIMER8,
+ (u32) BPWR_WD_TIMER3,
+ (u32) BPWR_MCBSP1,
+ (u32) BPWR_MCBSP2,
+ (u32) BPWR_MCBSP3,
+ (u32) BPWR_MCBSP4,
+ (u32) BPWR_MCBSP5,
+ (u32) BPWR_SSI
+};
+
+struct bpwr_clk_t {
+ u32 clk_id;
+ enum dsp_clk_id clk;
+};
+
+static const struct bpwr_clk_t bpwr_clks[] = {
+ {(u32) BPWR_GP_TIMER5, DSP_CLK_GPT5},
+ {(u32) BPWR_GP_TIMER6, DSP_CLK_GPT6},
+ {(u32) BPWR_GP_TIMER7, DSP_CLK_GPT7},
+ {(u32) BPWR_GP_TIMER8, DSP_CLK_GPT8},
+ {(u32) BPWR_WD_TIMER3, DSP_CLK_WDT3},
+ {(u32) BPWR_MCBSP1, DSP_CLK_MCBSP1},
+ {(u32) BPWR_MCBSP2, DSP_CLK_MCBSP2},
+ {(u32) BPWR_MCBSP3, DSP_CLK_MCBSP3},
+ {(u32) BPWR_MCBSP4, DSP_CLK_MCBSP4},
+ {(u32) BPWR_MCBSP5, DSP_CLK_MCBSP5},
+ {(u32) BPWR_SSI, DSP_CLK_SSI}
+};
+
+/* Interrupt Register Offsets */
+#define INTH_IT_REG_OFFSET 0x00 /* Interrupt register offset */
+#define INTH_MASK_IT_REG_OFFSET 0x04 /* Mask Interrupt reg offset */
+
+#define DSP_MAILBOX1_INT 10
+/*
+ * Bit definition of Interrupt Level Registers
+ */
+
+/* Mail Box defines */
+#define MB_ARM2DSP1_REG_OFFSET 0x00
+
+#define MB_ARM2DSP1B_REG_OFFSET 0x04
+
+#define MB_DSP2ARM1B_REG_OFFSET 0x0C
+
+#define MB_ARM2DSP1_FLAG_REG_OFFSET 0x18
+
+#define MB_ARM2DSP_FLAG 0x0001
+
+#define MBOX_ARM2DSP HW_MBOX_ID0
+#define MBOX_DSP2ARM HW_MBOX_ID1
+#define MBOX_ARM HW_MBOX_U0_ARM
+#define MBOX_DSP HW_MBOX_U1_DSP1
+
+#define ENABLE true
+#define DISABLE false
+
+#define HIGH_LEVEL true
+#define LOW_LEVEL false
+
+/* Macro's */
+#define REG16(A) (*(reg_uword16 *)(A))
+
+#define CLEAR_BIT(reg, mask) (reg &= ~mask)
+#define SET_BIT(reg, mask) (reg |= mask)
+
+#define SET_GROUP_BITS16(reg, position, width, value) \
+ do {\
+ reg &= ~((0xFFFF >> (16 - (width))) << (position)) ; \
+ reg |= ((value & (0xFFFF >> (16 - (width)))) << (position)); \
+	} while (0)
+
+#define CLEAR_BIT_INDEX(reg, index) (reg &= ~(1 << (index)))
+
+/* This Bridge driver's device context: */
+struct bridge_dev_context {
+ struct dev_object *hdev_obj; /* Handle to Bridge device object. */
+ u32 dw_dsp_base_addr; /* Arm's API to DSP virt base addr */
+ /*
+ * DSP External memory prog address as seen virtually by the OS on
+ * the host side.
+ */
+ u32 dw_dsp_ext_base_addr; /* See the comment above */
+ u32 dw_api_reg_base; /* API mem map'd registers */
+ void __iomem *dw_dsp_mmu_base; /* DSP MMU Mapped registers */
+ u32 dw_api_clk_base; /* CLK Registers */
+ u32 dw_dsp_clk_m2_base; /* DSP Clock Module m2 */
+ u32 dw_public_rhea; /* Pub Rhea */
+ u32 dw_int_addr; /* MB INTR reg */
+ u32 dw_tc_endianism; /* TC Endianism register */
+ u32 dw_test_base; /* DSP MMU Mapped registers */
+ u32 dw_self_loop; /* Pointer to the selfloop */
+ u32 dw_dsp_start_add; /* API Boot vector */
+ u32 dw_internal_size; /* Internal memory size */
+
+ struct omap_mbox *mbox; /* Mail box handle */
+
+ struct cfg_hostres *resources; /* Host Resources */
+
+ /*
+ * Processor specific info is set when prog loaded and read from DCD.
+ * [See bridge_dev_ctrl()] PROC info contains DSP-MMU TLB entries.
+ */
+ /* DMMU TLB entries */
+ struct bridge_ioctl_extproc atlb_entry[BRDIOCTL_NUMOFMMUTLB];
+ u32 dw_brd_state; /* Last known board state. */
+ u32 ul_int_mask; /* int mask */
+ u16 io_base; /* Board I/O base */
+ u32 num_tlb_entries; /* DSP MMU TLB entry counter */
+ u32 fixed_tlb_entries; /* Fixed DSPMMU TLB entry count */
+
+ /* TC Settings */
+ bool tc_word_swap_on; /* Traffic Controller Word Swap */
+ struct pg_table_attrs *pt_attrs;
+ u32 dsp_per_clks;
+};
+
+/*
+ * If dsp_debug is true, do not branch to the DSP entry
+ * point and wait for DSP to boot.
+ */
+extern s32 dsp_debug;
+
+/*
+ * ======== sm_interrupt_dsp ========
+ * Purpose:
+ * Set interrupt value & send an interrupt to the DSP processor(s).
+ * This is typically used when mailbox interrupt mechanisms allow data
+ * to be associated with the interrupt, such as for OMAP's CMD/DATA regs.
+ * Parameters:
+ * dev_context: Handle to Bridge driver defined device info.
+ * mb_val: Value associated with the interrupt (e.g. mailbox value).
+ * Returns:
+ * 0: Interrupt sent;
+ * else: Unable to send interrupt.
+ * Requires:
+ * Ensures:
+ */
+int sm_interrupt_dsp(struct bridge_dev_context *dev_context, u16 mb_val);
+
+#endif /* _TIOMAP_ */
diff --git a/drivers/staging/tidspbridge/core/_tiomap_pwr.h b/drivers/staging/tidspbridge/core/_tiomap_pwr.h
new file mode 100644
index 0000000..b9a3453
--- /dev/null
+++ b/drivers/staging/tidspbridge/core/_tiomap_pwr.h
@@ -0,0 +1,85 @@
+/*
+ * _tiomap_pwr.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Definitions and types for the DSP wake/sleep routines.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef _TIOMAP_PWR_
+#define _TIOMAP_PWR_
+
+#ifdef CONFIG_PM
+extern s32 dsp_test_sleepstate;
+#endif
+
+extern struct mailbox_context mboxsetting;
+
+/*
+ * ======== wake_dsp =========
+ * Wakes up the DSP from DeepSleep
+ */
+extern int wake_dsp(struct bridge_dev_context *dev_context,
+ IN void *pargs);
+
+/*
+ * ======== sleep_dsp =========
+ * Places the DSP in DeepSleep.
+ */
+extern int sleep_dsp(struct bridge_dev_context *dev_context,
+ IN u32 dw_cmd, IN void *pargs);
+/*
+ * ========interrupt_dsp========
+ * Sends an interrupt to DSP unconditionally.
+ */
+extern void interrupt_dsp(struct bridge_dev_context *dev_context,
+ IN u16 mb_val);
+
+/*
+ * ======== dsp_peripheral_clk_ctrl ========
+ * Enables/disables the DSP peripheral clocks requested by the DSP.
+ */
+extern int dsp_peripheral_clk_ctrl(struct bridge_dev_context
+ *dev_context, IN void *pargs);
+/*
+ * ======== handle_hibernation_from_dsp ========
+ * Handle Hibernation requested from DSP
+ */
+int handle_hibernation_from_dsp(struct bridge_dev_context *dev_context);
+/*
+ * ======== post_scale_dsp ========
+ * Handle Post Scale notification to DSP
+ */
+int post_scale_dsp(struct bridge_dev_context *dev_context,
+ IN void *pargs);
+/*
+ * ======== pre_scale_dsp ========
+ * Handle Pre Scale notification to DSP
+ */
+int pre_scale_dsp(struct bridge_dev_context *dev_context,
+ IN void *pargs);
+/*
+ * ======== handle_constraints_set ========
+ * Handle constraints request from DSP
+ */
+int handle_constraints_set(struct bridge_dev_context *dev_context,
+ IN void *pargs);
+
+/*
+ * ======== dsp_clk_wakeup_event_ctrl ========
+ * This function sets the group selection bits used while enabling or
+ * disabling wake-up events for the given clock.
+ */
+void dsp_clk_wakeup_event_ctrl(u32 ClkId, bool enable);
+
+#endif /* _TIOMAP_PWR_ */
diff --git a/drivers/staging/tidspbridge/core/chnl_sm.c b/drivers/staging/tidspbridge/core/chnl_sm.c
new file mode 100644
index 0000000..714b6f7
--- /dev/null
+++ b/drivers/staging/tidspbridge/core/chnl_sm.c
@@ -0,0 +1,1015 @@
+/*
+ * chnl_sm.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Implements upper edge functions for Bridge driver channel module.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+/*
+ * The lower edge functions must be implemented by the Bridge driver
+ * writer, and are declared in chnl_sm.h.
+ *
+ * Care is taken in this code to prevent simultaneous access to channel
+ * queues from
+ * 1. Threads.
+ * 2. io_dpc(), scheduled from the io_isr() as an event.
+ *
+ * This is done primarily by:
+ * - Semaphores.
+ * - state flags in the channel object; and
+ * - ensuring the IO_Dispatch() routine, which is called from both
+ * CHNL_AddIOReq() and the DPC(if implemented), is not re-entered.
+ *
+ * Channel Invariant:
+ * There is an important invariant condition which must be maintained per
+ * channel outside of bridge_chnl_get_ioc() and IO_Dispatch(), violation of
+ * which may cause timeouts and/or failure of the sync_wait_on_event function.
+ * This invariant condition is:
+ *
+ * LST_Empty(pchnl->pio_completions) ==> pchnl->sync_event is reset
+ * and
+ * !LST_Empty(pchnl->pio_completions) ==> pchnl->sync_event is set.
+ */
+
+/* ----------------------------------- OS */
+#include <dspbridge/host_os.h>
+
+/* ----------------------------------- DSP/BIOS Bridge */
+#include <dspbridge/std.h>
+#include <dspbridge/dbdefs.h>
+
+/* ----------------------------------- Trace & Debug */
+#include <dspbridge/dbc.h>
+
+/* ----------------------------------- OS Adaptation Layer */
+#include <dspbridge/cfg.h>
+#include <dspbridge/sync.h>
+
+/* ----------------------------------- Bridge Driver */
+#include <dspbridge/dspdefs.h>
+#include <dspbridge/dspchnl.h>
+#include "_tiomap.h"
+
+/* ----------------------------------- Platform Manager */
+#include <dspbridge/dev.h>
+
+/* ----------------------------------- Others */
+#include <dspbridge/io_sm.h>
+
+/* ----------------------------------- Define for This */
+#define USERMODE_ADDR PAGE_OFFSET
+
+#define MAILBOX_IRQ INT_MAIL_MPU_IRQ
+
+/* ----------------------------------- Function Prototypes */
+static struct lst_list *create_chirp_list(u32 uChirps);
+
+static void free_chirp_list(struct lst_list *pList);
+
+static struct chnl_irp *make_new_chirp(void);
+
+static int search_free_channel(struct chnl_mgr *chnl_mgr_obj,
+ OUT u32 *pdwChnl);
+
+/*
+ * ======== bridge_chnl_add_io_req ========
+ * Enqueue an I/O request for data transfer on a channel to the DSP.
+ * The direction (mode) is specified in the channel object. Note the DSP
+ * address is specified for channels opened in direct I/O mode.
+ */
+int bridge_chnl_add_io_req(struct chnl_object *chnl_obj, void *pHostBuf,
+ u32 byte_size, u32 buf_size,
+ OPTIONAL u32 dw_dsp_addr, u32 dw_arg)
+{
+ int status = 0;
+ struct chnl_object *pchnl = (struct chnl_object *)chnl_obj;
+ struct chnl_irp *chnl_packet_obj = NULL;
+ struct bridge_dev_context *dev_ctxt;
+ struct dev_object *dev_obj;
+ u8 dw_state;
+ bool is_eos;
+ struct chnl_mgr *chnl_mgr_obj = pchnl->chnl_mgr_obj;
+ u8 *host_sys_buf = NULL;
+ bool sched_dpc = false;
+ u16 mb_val = 0;
+
+ is_eos = (byte_size == 0);
+
+ /* Validate args */
+ if (!pHostBuf || !pchnl) {
+ status = -EFAULT;
+ } else if (is_eos && CHNL_IS_INPUT(pchnl->chnl_mode)) {
+ status = -EPERM;
+ } else {
+ /*
+ * Check the channel state: only queue chirp if channel state
+ * allows it.
+ */
+ dw_state = pchnl->dw_state;
+ if (dw_state != CHNL_STATEREADY) {
+ if (dw_state & CHNL_STATECANCEL)
+ status = -ECANCELED;
+ else if ((dw_state & CHNL_STATEEOS) &&
+ CHNL_IS_OUTPUT(pchnl->chnl_mode))
+ status = -EPIPE;
+ else
+ /* No other possible states left */
+ DBC_ASSERT(0);
+ }
+ }
+
+ dev_obj = dev_get_first();
+ dev_get_bridge_context(dev_obj, &dev_ctxt);
+ if (!dev_ctxt)
+ status = -EFAULT;
+
+ if (DSP_FAILED(status))
+ goto func_end;
+
+ if (pchnl->chnl_type == CHNL_PCPY && pchnl->chnl_id > 1 && pHostBuf) {
+ if (!(pHostBuf < (void *)USERMODE_ADDR)) {
+ host_sys_buf = pHostBuf;
+ goto func_cont;
+ }
+ /* if addr in user mode, then copy to kernel space */
+ host_sys_buf = kmalloc(buf_size, GFP_KERNEL);
+ if (host_sys_buf == NULL) {
+ status = -ENOMEM;
+ goto func_end;
+ }
+ if (CHNL_IS_OUTPUT(pchnl->chnl_mode)) {
+ status = copy_from_user(host_sys_buf, pHostBuf,
+ buf_size);
+ if (status) {
+ kfree(host_sys_buf);
+ host_sys_buf = NULL;
+ status = -EFAULT;
+ goto func_end;
+ }
+ }
+ }
+func_cont:
+ /* Mailbox IRQ is disabled to avoid race condition with DMA/ZCPY
+ * channels. DPCCS is held to avoid race conditions with PCPY channels.
+ * If DPC is scheduled in process context (iosm_schedule) and any
+ * non-mailbox interrupt occurs, that DPC will run and break CS. Hence
+ * we disable ALL DPCs. We will try to disable ONLY IO DPC later. */
+ spin_lock_bh(&chnl_mgr_obj->chnl_mgr_lock);
+ omap_mbox_disable_irq(dev_ctxt->mbox, IRQ_RX);
+ if (pchnl->chnl_type == CHNL_PCPY) {
+ /* This is a processor-copy channel. */
+ if (DSP_SUCCEEDED(status) && CHNL_IS_OUTPUT(pchnl->chnl_mode)) {
+ /* Check buffer size on output channels for fit. */
+ if (byte_size >
+ io_buf_size(pchnl->chnl_mgr_obj->hio_mgr))
+ status = -EINVAL;
+
+ }
+ }
+ if (DSP_SUCCEEDED(status)) {
+ /* Get a free chirp: */
+ chnl_packet_obj =
+ (struct chnl_irp *)lst_get_head(pchnl->free_packets_list);
+ if (chnl_packet_obj == NULL)
+ status = -EIO;
+
+ }
+ if (DSP_SUCCEEDED(status)) {
+ /* Enqueue the chirp on the chnl's IORequest queue: */
+ chnl_packet_obj->host_user_buf = chnl_packet_obj->host_sys_buf =
+ pHostBuf;
+ if (pchnl->chnl_type == CHNL_PCPY && pchnl->chnl_id > 1)
+ chnl_packet_obj->host_sys_buf = host_sys_buf;
+
+ /*
+ * Note: for dma chans dw_dsp_addr contains dsp address
+ * of SM buffer.
+ */
+ DBC_ASSERT(chnl_mgr_obj->word_size != 0);
+ /* DSP address */
+ chnl_packet_obj->dsp_tx_addr =
+ dw_dsp_addr / chnl_mgr_obj->word_size;
+ chnl_packet_obj->byte_size = byte_size;
+ chnl_packet_obj->buf_size = buf_size;
+ /* Only valid for output channel */
+ chnl_packet_obj->dw_arg = dw_arg;
+ chnl_packet_obj->status = (is_eos ? CHNL_IOCSTATEOS :
+ CHNL_IOCSTATCOMPLETE);
+ lst_put_tail(pchnl->pio_requests,
+ (struct list_head *)chnl_packet_obj);
+ pchnl->cio_reqs++;
+ DBC_ASSERT(pchnl->cio_reqs <= pchnl->chnl_packets);
+ /*
+ * If end of stream, update the channel state to prevent
+ * more IOR's.
+ */
+ if (is_eos)
+ pchnl->dw_state |= CHNL_STATEEOS;
+
+ /* Legacy DSM Processor-Copy */
+ DBC_ASSERT(pchnl->chnl_type == CHNL_PCPY);
+ /* Request IO from the DSP */
+ io_request_chnl(chnl_mgr_obj->hio_mgr, pchnl,
+ (CHNL_IS_INPUT(pchnl->chnl_mode) ? IO_INPUT :
+ IO_OUTPUT), &mb_val);
+ sched_dpc = true;
+
+ }
+ omap_mbox_enable_irq(dev_ctxt->mbox, IRQ_RX);
+ spin_unlock_bh(&chnl_mgr_obj->chnl_mgr_lock);
+ if (mb_val != 0)
+ io_intr_dsp2(chnl_mgr_obj->hio_mgr, mb_val);
+
+ /* Schedule a DPC, to do the actual data transfer */
+ if (sched_dpc)
+ iosm_schedule(chnl_mgr_obj->hio_mgr);
+
+func_end:
+ return status;
+}
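+
+/*
+ * Usage sketch (illustration only, not part of the driver): the upper CHNL
+ * layer typically queues a buffer and later reclaims its completion:
+ *
+ *	status = bridge_chnl_add_io_req(chnl, buf, nbytes, buf_size, 0, arg);
+ *	if (!status)
+ *		status = bridge_chnl_get_ioc(chnl, CHNL_IOCINFINITE, &ioc);
+ *
+ * where chnl comes from bridge_chnl_open(), dw_dsp_addr matters only for
+ * channels opened in direct I/O mode (see the note above), and arg is an
+ * opaque token echoed back in the completion record (ioc.dw_arg).
+ */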
+
+/*
+ * ======== bridge_chnl_cancel_io ========
+ * Return all I/O requests to the client which have not yet been
+ * transferred. The channel's I/O completion object is
+ * signalled, and all the I/O requests are queued as IOC's, with the
+ * status field set to CHNL_IOCSTATCANCEL.
+ * This call is typically used in abort situations, and is a prelude to
+ * chnl_close();
+ */
+int bridge_chnl_cancel_io(struct chnl_object *chnl_obj)
+{
+ int status = 0;
+ struct chnl_object *pchnl = (struct chnl_object *)chnl_obj;
+ u32 chnl_id = -1;
+ s8 chnl_mode;
+ struct chnl_irp *chnl_packet_obj;
+ struct chnl_mgr *chnl_mgr_obj = NULL;
+
+ /* Check args: */
+ if (pchnl && pchnl->chnl_mgr_obj) {
+ chnl_id = pchnl->chnl_id;
+ chnl_mode = pchnl->chnl_mode;
+ chnl_mgr_obj = pchnl->chnl_mgr_obj;
+ } else {
+ status = -EFAULT;
+ }
+ if (DSP_FAILED(status))
+ goto func_end;
+
+	/* Mark this channel as cancelled, to prevent further IORequests or
+	 * dispatching. */
+ spin_lock_bh(&chnl_mgr_obj->chnl_mgr_lock);
+ pchnl->dw_state |= CHNL_STATECANCEL;
+ if (LST_IS_EMPTY(pchnl->pio_requests))
+ goto func_cont;
+
+ if (pchnl->chnl_type == CHNL_PCPY) {
+ /* Indicate we have no more buffers available for transfer: */
+ if (CHNL_IS_INPUT(pchnl->chnl_mode)) {
+ io_cancel_chnl(chnl_mgr_obj->hio_mgr, chnl_id);
+ } else {
+ /* Record that we no longer have output buffers
+ * available: */
+ chnl_mgr_obj->dw_output_mask &= ~(1 << chnl_id);
+ }
+ }
+ /* Move all IOR's to IOC queue: */
+ while (!LST_IS_EMPTY(pchnl->pio_requests)) {
+ chnl_packet_obj =
+ (struct chnl_irp *)lst_get_head(pchnl->pio_requests);
+ if (chnl_packet_obj) {
+ chnl_packet_obj->byte_size = 0;
+ chnl_packet_obj->status |= CHNL_IOCSTATCANCEL;
+ lst_put_tail(pchnl->pio_completions,
+ (struct list_head *)chnl_packet_obj);
+ pchnl->cio_cs++;
+ pchnl->cio_reqs--;
+ DBC_ASSERT(pchnl->cio_reqs >= 0);
+ }
+ }
+func_cont:
+ spin_unlock_bh(&chnl_mgr_obj->chnl_mgr_lock);
+func_end:
+ return status;
+}
+
+/*
+ * ======== bridge_chnl_close ========
+ * Purpose:
+ * Ensures all pending I/O on this channel is cancelled, discards all
+ * queued I/O completion notifications, then frees the resources allocated
+ * for this channel, and makes the corresponding logical channel id
+ * available for subsequent use.
+ */
+int bridge_chnl_close(struct chnl_object *chnl_obj)
+{
+ int status;
+ struct chnl_object *pchnl = (struct chnl_object *)chnl_obj;
+
+ /* Check args: */
+ if (!pchnl) {
+ status = -EFAULT;
+ goto func_cont;
+ }
+ {
+ /* Cancel IO: this ensures no further IO requests or
+ * notifications. */
+ status = bridge_chnl_cancel_io(chnl_obj);
+ }
+func_cont:
+ if (DSP_SUCCEEDED(status)) {
+ /* Assert I/O on this channel is now cancelled: Protects
+ * from io_dpc. */
+ DBC_ASSERT((pchnl->dw_state & CHNL_STATECANCEL));
+ /* Invalidate channel object: Protects from
+ * CHNL_GetIOCompletion(). */
+ /* Free the slot in the channel manager: */
+ pchnl->chnl_mgr_obj->ap_channel[pchnl->chnl_id] = NULL;
+ spin_lock_bh(&pchnl->chnl_mgr_obj->chnl_mgr_lock);
+ pchnl->chnl_mgr_obj->open_channels -= 1;
+ spin_unlock_bh(&pchnl->chnl_mgr_obj->chnl_mgr_lock);
+ if (pchnl->ntfy_obj) {
+ ntfy_delete(pchnl->ntfy_obj);
+ kfree(pchnl->ntfy_obj);
+ pchnl->ntfy_obj = NULL;
+ }
+ /* Reset channel event: (NOTE: user_event freed in user
+ * context.). */
+ if (pchnl->sync_event) {
+ sync_reset_event(pchnl->sync_event);
+ kfree(pchnl->sync_event);
+ pchnl->sync_event = NULL;
+ }
+ /* Free I/O request and I/O completion queues: */
+ if (pchnl->pio_completions) {
+ free_chirp_list(pchnl->pio_completions);
+ pchnl->pio_completions = NULL;
+ pchnl->cio_cs = 0;
+ }
+ if (pchnl->pio_requests) {
+ free_chirp_list(pchnl->pio_requests);
+ pchnl->pio_requests = NULL;
+ pchnl->cio_reqs = 0;
+ }
+ if (pchnl->free_packets_list) {
+ free_chirp_list(pchnl->free_packets_list);
+ pchnl->free_packets_list = NULL;
+ }
+ /* Release channel object. */
+ kfree(pchnl);
+ pchnl = NULL;
+ }
+ DBC_ENSURE(DSP_FAILED(status) || !pchnl);
+ return status;
+}
+
+/*
+ * ======== bridge_chnl_create ========
+ * Create a channel manager object, responsible for opening new channels
+ * and closing old ones for a given board.
+ */
+int bridge_chnl_create(OUT struct chnl_mgr **phChnlMgr,
+ struct dev_object *hdev_obj,
+ IN CONST struct chnl_mgrattrs *pMgrAttrs)
+{
+ int status = 0;
+ struct chnl_mgr *chnl_mgr_obj = NULL;
+ u8 max_channels;
+
+ /* Check DBC requirements: */
+ DBC_REQUIRE(phChnlMgr != NULL);
+ DBC_REQUIRE(pMgrAttrs != NULL);
+ DBC_REQUIRE(pMgrAttrs->max_channels > 0);
+ DBC_REQUIRE(pMgrAttrs->max_channels <= CHNL_MAXCHANNELS);
+ DBC_REQUIRE(pMgrAttrs->word_size != 0);
+
+ /* Allocate channel manager object */
+ chnl_mgr_obj = kzalloc(sizeof(struct chnl_mgr), GFP_KERNEL);
+ if (chnl_mgr_obj) {
+ /*
+ * The max_channels attr must equal the # of supported chnls for
+ * each transport(# chnls for PCPY = DDMA = ZCPY): i.e.
+ * pMgrAttrs->max_channels = CHNL_MAXCHANNELS =
+ * DDMA_MAXDDMACHNLS = DDMA_MAXZCPYCHNLS.
+ */
+ DBC_ASSERT(pMgrAttrs->max_channels == CHNL_MAXCHANNELS);
+ max_channels = CHNL_MAXCHANNELS + CHNL_MAXCHANNELS * CHNL_PCPY;
+ /* Create array of channels */
+ chnl_mgr_obj->ap_channel = kzalloc(sizeof(struct chnl_object *)
+ * max_channels, GFP_KERNEL);
+ if (chnl_mgr_obj->ap_channel) {
+ /* Initialize chnl_mgr object */
+ chnl_mgr_obj->dw_type = CHNL_TYPESM;
+ chnl_mgr_obj->word_size = pMgrAttrs->word_size;
+ /* Total # chnls supported */
+ chnl_mgr_obj->max_channels = max_channels;
+ chnl_mgr_obj->open_channels = 0;
+ chnl_mgr_obj->dw_output_mask = 0;
+ chnl_mgr_obj->dw_last_output = 0;
+ chnl_mgr_obj->hdev_obj = hdev_obj;
+ if (DSP_SUCCEEDED(status))
+ spin_lock_init(&chnl_mgr_obj->chnl_mgr_lock);
+ } else {
+ status = -ENOMEM;
+ }
+ } else {
+ status = -ENOMEM;
+ }
+
+ if (DSP_FAILED(status)) {
+ bridge_chnl_destroy(chnl_mgr_obj);
+ *phChnlMgr = NULL;
+ } else {
+ /* Return channel manager object to caller... */
+ *phChnlMgr = chnl_mgr_obj;
+ }
+ return status;
+}
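+
+/*
+ * Illustrative pairing (sketch only): the platform manager is expected to
+ * create one channel manager per board and tear it down on shutdown,
+ * roughly as:
+ *
+ *	struct chnl_mgrattrs mgr_attrs = {
+ *		.max_channels = CHNL_MAXCHANNELS,
+ *		.word_size = word_size,
+ *	};
+ *
+ *	status = bridge_chnl_create(&hchnl_mgr, hdev_obj, &mgr_attrs);
+ *	...
+ *	bridge_chnl_destroy(hchnl_mgr);
+ *
+ * The field names follow the DBC_REQUIREs above; the actual word_size and
+ * attribute values are supplied by the upper layer, not assumed here.
+ */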
+
+/*
+ * ======== bridge_chnl_destroy ========
+ * Purpose:
+ * Close all open channels, and destroy the channel manager.
+ */
+int bridge_chnl_destroy(struct chnl_mgr *hchnl_mgr)
+{
+ int status = 0;
+ struct chnl_mgr *chnl_mgr_obj = hchnl_mgr;
+ u32 chnl_id;
+
+ if (hchnl_mgr) {
+ /* Close all open channels: */
+ for (chnl_id = 0; chnl_id < chnl_mgr_obj->max_channels;
+ chnl_id++) {
+ status =
+ bridge_chnl_close(chnl_mgr_obj->ap_channel
+ [chnl_id]);
+ if (DSP_FAILED(status))
+ dev_dbg(bridge, "%s: Error status 0x%x\n",
+ __func__, status);
+ }
+
+ /* Free channel manager object: */
+ kfree(chnl_mgr_obj->ap_channel);
+
+ /* Set hchnl_mgr to NULL in device object. */
+ dev_set_chnl_mgr(chnl_mgr_obj->hdev_obj, NULL);
+ /* Free this Chnl Mgr object: */
+ kfree(hchnl_mgr);
+ } else {
+ status = -EFAULT;
+ }
+ return status;
+}
+
+/*
+ * ======== bridge_chnl_flush_io ========
+ * Purpose:
+ * Flushes all the outstanding data requests on a channel.
+ */
+int bridge_chnl_flush_io(struct chnl_object *chnl_obj, u32 dwTimeOut)
+{
+ int status = 0;
+ struct chnl_object *pchnl = (struct chnl_object *)chnl_obj;
+ s8 chnl_mode = -1;
+ struct chnl_mgr *chnl_mgr_obj;
+ struct chnl_ioc chnl_ioc_obj;
+ /* Check args: */
+ if (pchnl) {
+ if ((dwTimeOut == CHNL_IOCNOWAIT)
+ && CHNL_IS_OUTPUT(pchnl->chnl_mode)) {
+ status = -EINVAL;
+ } else {
+ chnl_mode = pchnl->chnl_mode;
+ chnl_mgr_obj = pchnl->chnl_mgr_obj;
+ }
+ } else {
+ status = -EFAULT;
+ }
+ if (DSP_SUCCEEDED(status)) {
+ /* Note: Currently, if another thread continues to add IO
+ * requests to this channel, this function will continue to
+ * flush all such queued IO requests. */
+ if (CHNL_IS_OUTPUT(chnl_mode)
+ && (pchnl->chnl_type == CHNL_PCPY)) {
+ /* Wait for IO completions, up to the specified
+ * timeout: */
+ while (!LST_IS_EMPTY(pchnl->pio_requests) &&
+ DSP_SUCCEEDED(status)) {
+ status = bridge_chnl_get_ioc(chnl_obj,
+ dwTimeOut, &chnl_ioc_obj);
+ if (DSP_FAILED(status))
+ continue;
+
+ if (chnl_ioc_obj.status & CHNL_IOCSTATTIMEOUT)
+ status = -ETIMEDOUT;
+
+ }
+ } else {
+ status = bridge_chnl_cancel_io(chnl_obj);
+ /* Now, leave the channel in the ready state: */
+ pchnl->dw_state &= ~CHNL_STATECANCEL;
+ }
+ }
+ DBC_ENSURE(DSP_FAILED(status) || LST_IS_EMPTY(pchnl->pio_requests));
+ return status;
+}
+
+/*
+ * ======== bridge_chnl_get_info ========
+ * Purpose:
+ * Retrieve information related to a channel.
+ */
+int bridge_chnl_get_info(struct chnl_object *chnl_obj,
+ OUT struct chnl_info *pInfo)
+{
+ int status = 0;
+ struct chnl_object *pchnl = (struct chnl_object *)chnl_obj;
+ if (pInfo != NULL) {
+ if (pchnl) {
+ /* Return the requested information: */
+ pInfo->hchnl_mgr = pchnl->chnl_mgr_obj;
+ pInfo->event_obj = pchnl->user_event;
+ pInfo->cnhl_id = pchnl->chnl_id;
+ pInfo->dw_mode = pchnl->chnl_mode;
+ pInfo->bytes_tx = pchnl->bytes_moved;
+ pInfo->process = pchnl->process;
+ pInfo->sync_event = pchnl->sync_event;
+ pInfo->cio_cs = pchnl->cio_cs;
+ pInfo->cio_reqs = pchnl->cio_reqs;
+ pInfo->dw_state = pchnl->dw_state;
+ } else {
+ status = -EFAULT;
+ }
+ } else {
+ status = -EFAULT;
+ }
+ return status;
+}
+
+/*
+ * ======== bridge_chnl_get_ioc ========
+ * Optionally wait for I/O completion on a channel. Dequeue an I/O
+ * completion record, which contains information about the completed
+ * I/O request.
+ * Note: Ensures Channel Invariant (see notes above).
+ */
+int bridge_chnl_get_ioc(struct chnl_object *chnl_obj, u32 dwTimeOut,
+ OUT struct chnl_ioc *pIOC)
+{
+ int status = 0;
+ struct chnl_object *pchnl = (struct chnl_object *)chnl_obj;
+ struct chnl_irp *chnl_packet_obj;
+ int stat_sync;
+ bool dequeue_ioc = true;
+ struct chnl_ioc ioc = { NULL, 0, 0, 0, 0 };
+ u8 *host_sys_buf = NULL;
+ struct bridge_dev_context *dev_ctxt;
+ struct dev_object *dev_obj;
+
+ /* Check args: */
+ if (!pIOC || !pchnl) {
+ status = -EFAULT;
+ } else if (dwTimeOut == CHNL_IOCNOWAIT) {
+ if (LST_IS_EMPTY(pchnl->pio_completions))
+ status = -EREMOTEIO;
+
+ }
+
+ dev_obj = dev_get_first();
+ dev_get_bridge_context(dev_obj, &dev_ctxt);
+ if (!dev_ctxt)
+ status = -EFAULT;
+
+ if (DSP_FAILED(status))
+ goto func_end;
+
+ ioc.status = CHNL_IOCSTATCOMPLETE;
+ if (dwTimeOut !=
+ CHNL_IOCNOWAIT && LST_IS_EMPTY(pchnl->pio_completions)) {
+ if (dwTimeOut == CHNL_IOCINFINITE)
+ dwTimeOut = SYNC_INFINITE;
+
+ stat_sync = sync_wait_on_event(pchnl->sync_event, dwTimeOut);
+ if (stat_sync == -ETIME) {
+ /* No response from DSP */
+ ioc.status |= CHNL_IOCSTATTIMEOUT;
+ dequeue_ioc = false;
+ } else if (stat_sync == -EPERM) {
+ /* This can occur when the user mode thread is
+ * aborted (^C), or when _VWIN32_WaitSingleObject()
+			 * fails due to unknown causes. */
+ /* Even though Wait failed, there may be something in
+ * the Q: */
+ if (LST_IS_EMPTY(pchnl->pio_completions)) {
+ ioc.status |= CHNL_IOCSTATCANCEL;
+ dequeue_ioc = false;
+ }
+ }
+ }
+ /* See comment in AddIOReq */
+ spin_lock_bh(&pchnl->chnl_mgr_obj->chnl_mgr_lock);
+ omap_mbox_disable_irq(dev_ctxt->mbox, IRQ_RX);
+ if (dequeue_ioc) {
+ /* Dequeue IOC and set pIOC; */
+ DBC_ASSERT(!LST_IS_EMPTY(pchnl->pio_completions));
+ chnl_packet_obj =
+ (struct chnl_irp *)lst_get_head(pchnl->pio_completions);
+ /* Update pIOC from channel state and chirp: */
+ if (chnl_packet_obj) {
+ pchnl->cio_cs--;
+ /* If this is a zero-copy channel, then set IOC's pbuf
+ * to the DSP's address. This DSP address will get
+ * translated to user's virtual addr later. */
+ {
+ host_sys_buf = chnl_packet_obj->host_sys_buf;
+ ioc.pbuf = chnl_packet_obj->host_user_buf;
+ }
+ ioc.byte_size = chnl_packet_obj->byte_size;
+ ioc.buf_size = chnl_packet_obj->buf_size;
+ ioc.dw_arg = chnl_packet_obj->dw_arg;
+ ioc.status |= chnl_packet_obj->status;
+ /* Place the used chirp on the free list: */
+ lst_put_tail(pchnl->free_packets_list,
+ (struct list_head *)chnl_packet_obj);
+ } else {
+ ioc.pbuf = NULL;
+ ioc.byte_size = 0;
+ }
+ } else {
+ ioc.pbuf = NULL;
+ ioc.byte_size = 0;
+ ioc.dw_arg = 0;
+ ioc.buf_size = 0;
+ }
+ /* Ensure invariant: If any IOC's are queued for this channel... */
+ if (!LST_IS_EMPTY(pchnl->pio_completions)) {
+ /* Since DSPStream_Reclaim() does not take a timeout
+ * parameter, we pass the stream's timeout value to
+ * bridge_chnl_get_ioc. We cannot determine whether or not
+ * we have waited in User mode. Since the stream's timeout
+ * value may be non-zero, we still have to set the event.
+ * Therefore, this optimization is taken out.
+ *
+ * if (dwTimeOut == CHNL_IOCNOWAIT) {
+ * ... ensure event is set..
+ * sync_set_event(pchnl->sync_event);
+ * } */
+ sync_set_event(pchnl->sync_event);
+ } else {
+ /* else, if list is empty, ensure event is reset. */
+ sync_reset_event(pchnl->sync_event);
+ }
+ omap_mbox_enable_irq(dev_ctxt->mbox, IRQ_RX);
+ spin_unlock_bh(&pchnl->chnl_mgr_obj->chnl_mgr_lock);
+ if (dequeue_ioc
+ && (pchnl->chnl_type == CHNL_PCPY && pchnl->chnl_id > 1)) {
+ if (!(ioc.pbuf < (void *)USERMODE_ADDR))
+ goto func_cont;
+
+ /* If the addr is in user mode, then copy it */
+ if (!host_sys_buf || !ioc.pbuf) {
+ status = -EFAULT;
+ goto func_cont;
+ }
+ if (!CHNL_IS_INPUT(pchnl->chnl_mode))
+ goto func_cont1;
+
+ /*host_user_buf */
+ status = copy_to_user(ioc.pbuf, host_sys_buf, ioc.byte_size);
+ if (status) {
+ if (current->flags & PF_EXITING)
+ status = 0;
+ }
+ if (status)
+ status = -EFAULT;
+func_cont1:
+ kfree(host_sys_buf);
+ }
+func_cont:
+ /* Update User's IOC block: */
+ *pIOC = ioc;
+func_end:
+ return status;
+}
+
+/*
+ * ======== bridge_chnl_get_mgr_info ========
+ * Retrieve information related to the channel manager.
+ */
+int bridge_chnl_get_mgr_info(struct chnl_mgr *hchnl_mgr, u32 uChnlID,
+ OUT struct chnl_mgrinfo *pMgrInfo)
+{
+ int status = 0;
+ struct chnl_mgr *chnl_mgr_obj = (struct chnl_mgr *)hchnl_mgr;
+
+ if (pMgrInfo != NULL) {
+ if (uChnlID <= CHNL_MAXCHANNELS) {
+ if (hchnl_mgr) {
+ /* Return the requested information: */
+ pMgrInfo->chnl_obj =
+ chnl_mgr_obj->ap_channel[uChnlID];
+ pMgrInfo->open_channels =
+ chnl_mgr_obj->open_channels;
+ pMgrInfo->dw_type = chnl_mgr_obj->dw_type;
+ /* total # of chnls */
+ pMgrInfo->max_channels =
+ chnl_mgr_obj->max_channels;
+ } else {
+ status = -EFAULT;
+ }
+ } else {
+ status = -ECHRNG;
+ }
+ } else {
+ status = -EFAULT;
+ }
+
+ return status;
+}
+
+/*
+ * ======== bridge_chnl_idle ========
+ * Idles a particular channel.
+ */
+int bridge_chnl_idle(struct chnl_object *chnl_obj, u32 dwTimeOut,
+ bool fFlush)
+{
+ s8 chnl_mode;
+ struct chnl_mgr *chnl_mgr_obj;
+ int status = 0;
+
+ DBC_REQUIRE(chnl_obj);
+
+ chnl_mode = chnl_obj->chnl_mode;
+ chnl_mgr_obj = chnl_obj->chnl_mgr_obj;
+
+ if (CHNL_IS_OUTPUT(chnl_mode) && !fFlush) {
+ /* Wait for IO completions, up to the specified timeout: */
+ status = bridge_chnl_flush_io(chnl_obj, dwTimeOut);
+ } else {
+ status = bridge_chnl_cancel_io(chnl_obj);
+
+ /* Reset the byte count and put channel back in ready state. */
+ chnl_obj->bytes_moved = 0;
+ chnl_obj->dw_state &= ~CHNL_STATECANCEL;
+ }
+
+ return status;
+}
+
+/*
+ * ======== bridge_chnl_open ========
+ * Open a new half-duplex channel to the DSP board.
+ */
+int bridge_chnl_open(OUT struct chnl_object **phChnl,
+ struct chnl_mgr *hchnl_mgr, s8 chnl_mode,
+ u32 uChnlId, CONST IN struct chnl_attr *pattrs)
+{
+ int status = 0;
+ struct chnl_mgr *chnl_mgr_obj = hchnl_mgr;
+ struct chnl_object *pchnl = NULL;
+ struct sync_object *sync_event = NULL;
+ /* Ensure DBC requirements: */
+ DBC_REQUIRE(phChnl != NULL);
+ DBC_REQUIRE(pattrs != NULL);
+ DBC_REQUIRE(hchnl_mgr != NULL);
+ *phChnl = NULL;
+ /* Validate Args: */
+ if (pattrs->uio_reqs == 0) {
+ status = -EINVAL;
+ } else {
+ if (!hchnl_mgr) {
+ status = -EFAULT;
+ } else {
+ if (uChnlId != CHNL_PICKFREE) {
+ if (uChnlId >= chnl_mgr_obj->max_channels)
+ status = -ECHRNG;
+ else if (chnl_mgr_obj->ap_channel[uChnlId] !=
+ NULL)
+ status = -EALREADY;
+ } else {
+ /* Check for free channel */
+ status =
+ search_free_channel(chnl_mgr_obj, &uChnlId);
+ }
+ }
+ }
+ if (DSP_FAILED(status))
+ goto func_end;
+
+ DBC_ASSERT(uChnlId < chnl_mgr_obj->max_channels);
+ /* Create channel object: */
+ pchnl = kzalloc(sizeof(struct chnl_object), GFP_KERNEL);
+ if (!pchnl) {
+ status = -ENOMEM;
+ goto func_end;
+ }
+ /* Protect queues from io_dpc: */
+ pchnl->dw_state = CHNL_STATECANCEL;
+ /* Allocate initial IOR and IOC queues: */
+ pchnl->free_packets_list = create_chirp_list(pattrs->uio_reqs);
+ pchnl->pio_requests = create_chirp_list(0);
+ pchnl->pio_completions = create_chirp_list(0);
+ pchnl->chnl_packets = pattrs->uio_reqs;
+ pchnl->cio_cs = 0;
+ pchnl->cio_reqs = 0;
+ sync_event = kzalloc(sizeof(struct sync_object), GFP_KERNEL);
+ if (sync_event)
+ sync_init_event(sync_event);
+ else
+ status = -ENOMEM;
+
+ if (DSP_SUCCEEDED(status)) {
+ pchnl->ntfy_obj = kmalloc(sizeof(struct ntfy_object),
+ GFP_KERNEL);
+ if (pchnl->ntfy_obj)
+ ntfy_init(pchnl->ntfy_obj);
+ else
+ status = -ENOMEM;
+ }
+
+ if (DSP_SUCCEEDED(status)) {
+ if (pchnl->pio_completions && pchnl->pio_requests &&
+ pchnl->free_packets_list) {
+ /* Initialize CHNL object fields: */
+ pchnl->chnl_mgr_obj = chnl_mgr_obj;
+ pchnl->chnl_id = uChnlId;
+ pchnl->chnl_mode = chnl_mode;
+ pchnl->user_event = sync_event;
+ pchnl->sync_event = sync_event;
+ /* Get the process handle */
+ pchnl->process = current->tgid;
+ pchnl->pcb_arg = 0;
+ pchnl->bytes_moved = 0;
+ /* Default to proc-copy */
+ pchnl->chnl_type = CHNL_PCPY;
+ } else {
+ status = -ENOMEM;
+ }
+ }
+
+ if (DSP_FAILED(status)) {
+ /* Free memory */
+ if (pchnl->pio_completions) {
+ free_chirp_list(pchnl->pio_completions);
+ pchnl->pio_completions = NULL;
+ pchnl->cio_cs = 0;
+ }
+ if (pchnl->pio_requests) {
+ free_chirp_list(pchnl->pio_requests);
+ pchnl->pio_requests = NULL;
+ }
+ if (pchnl->free_packets_list) {
+ free_chirp_list(pchnl->free_packets_list);
+ pchnl->free_packets_list = NULL;
+ }
+ kfree(sync_event);
+ sync_event = NULL;
+
+ if (pchnl->ntfy_obj) {
+ ntfy_delete(pchnl->ntfy_obj);
+ kfree(pchnl->ntfy_obj);
+ pchnl->ntfy_obj = NULL;
+ }
+ kfree(pchnl);
+ } else {
+ /* Insert channel object in channel manager: */
+ chnl_mgr_obj->ap_channel[pchnl->chnl_id] = pchnl;
+ spin_lock_bh(&chnl_mgr_obj->chnl_mgr_lock);
+ chnl_mgr_obj->open_channels++;
+ spin_unlock_bh(&chnl_mgr_obj->chnl_mgr_lock);
+ /* Return result... */
+ pchnl->dw_state = CHNL_STATEREADY;
+ *phChnl = pchnl;
+ }
+func_end:
+ DBC_ENSURE((DSP_SUCCEEDED(status) && pchnl) || (*phChnl == NULL));
+ return status;
+}
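+
+/*
+ * Illustration only: a caller that does not need a specific channel id can
+ * pass CHNL_PICKFREE and let search_free_channel() choose the slot:
+ *
+ *	struct chnl_attr attrs = { .uio_reqs = 16 };
+ *
+ *	status = bridge_chnl_open(&chnl, hchnl_mgr, CHNL_MODETODSP,
+ *				  CHNL_PICKFREE, &attrs);
+ *
+ * CHNL_MODETODSP and the request count of 16 are assumed values used only
+ * for this sketch.
+ */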
+
+/*
+ * ======== bridge_chnl_register_notify ========
+ * Registers for events on a particular channel.
+ */
+int bridge_chnl_register_notify(struct chnl_object *chnl_obj,
+ u32 event_mask, u32 notify_type,
+ struct dsp_notification *hnotification)
+{
+ int status = 0;
+
+ DBC_ASSERT(!(event_mask & ~(DSP_STREAMDONE | DSP_STREAMIOCOMPLETION)));
+
+ if (event_mask)
+ status = ntfy_register(chnl_obj->ntfy_obj, hnotification,
+ event_mask, notify_type);
+ else
+ status = ntfy_unregister(chnl_obj->ntfy_obj, hnotification);
+
+ return status;
+}
+
+/*
+ * ======== create_chirp_list ========
+ * Purpose:
+ * Initialize a queue of channel I/O Request/Completion packets.
+ * Parameters:
+ * uChirps: Number of Chirps to allocate.
+ * Returns:
+ * Pointer to queue of IRPs, or NULL.
+ * Requires:
+ * Ensures:
+ */
+static struct lst_list *create_chirp_list(u32 uChirps)
+{
+ struct lst_list *chirp_list;
+ struct chnl_irp *chnl_packet_obj;
+ u32 i;
+
+ chirp_list = kzalloc(sizeof(struct lst_list), GFP_KERNEL);
+
+ if (chirp_list) {
+ INIT_LIST_HEAD(&chirp_list->head);
+ /* Make N chirps and place on queue. */
+ for (i = 0; (i < uChirps)
+ && ((chnl_packet_obj = make_new_chirp()) != NULL); i++) {
+ lst_put_tail(chirp_list,
+ (struct list_head *)chnl_packet_obj);
+ }
+
+ /* If we couldn't allocate all chirps, free those allocated: */
+ if (i != uChirps) {
+ free_chirp_list(chirp_list);
+ chirp_list = NULL;
+ }
+ }
+
+ return chirp_list;
+}
+
+/*
+ * ======== free_chirp_list ========
+ * Purpose:
+ * Free the queue of Chirps.
+ */
+static void free_chirp_list(struct lst_list *chirp_list)
+{
+ DBC_REQUIRE(chirp_list != NULL);
+
+ while (!LST_IS_EMPTY(chirp_list))
+ kfree(lst_get_head(chirp_list));
+
+ kfree(chirp_list);
+}
+
+/*
+ * ======== make_new_chirp ========
+ * Allocate the memory for a new channel IRP.
+ */
+static struct chnl_irp *make_new_chirp(void)
+{
+ struct chnl_irp *chnl_packet_obj;
+
+ chnl_packet_obj = kzalloc(sizeof(struct chnl_irp), GFP_KERNEL);
+ if (chnl_packet_obj != NULL) {
+ /* lst_init_elem only resets the list's member values. */
+ lst_init_elem(&chnl_packet_obj->link);
+ }
+
+ return chnl_packet_obj;
+}
+
+/*
+ * ======== search_free_channel ========
+ * Search for a free channel slot in the array of channel pointers.
+ */
+static int search_free_channel(struct chnl_mgr *chnl_mgr_obj,
+ OUT u32 *pdwChnl)
+{
+ int status = -ENOSR;
+ u32 i;
+
+ DBC_REQUIRE(chnl_mgr_obj);
+
+ for (i = 0; i < chnl_mgr_obj->max_channels; i++) {
+ if (chnl_mgr_obj->ap_channel[i] == NULL) {
+ status = 0;
+ *pdwChnl = i;
+ break;
+ }
+ }
+
+ return status;
+}
diff --git a/drivers/staging/tidspbridge/core/dsp-clock.c b/drivers/staging/tidspbridge/core/dsp-clock.c
new file mode 100644
index 0000000..abaa595
--- /dev/null
+++ b/drivers/staging/tidspbridge/core/dsp-clock.c
@@ -0,0 +1,421 @@
+/*
+ * dsp-clock.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Clock and Timer services.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+/* ----------------------------------- Host OS */
+#include <dspbridge/host_os.h>
+#include <plat/dmtimer.h>
+#include <plat/mcbsp.h>
+
+/* ----------------------------------- DSP/BIOS Bridge */
+#include <dspbridge/std.h>
+#include <dspbridge/dbdefs.h>
+#include <dspbridge/cfg.h>
+#include <dspbridge/drv.h>
+#include <dspbridge/dev.h>
+#include "_tiomap.h"
+
+/* ----------------------------------- Trace & Debug */
+#include <dspbridge/dbc.h>
+
+/* ----------------------------------- This */
+#include <dspbridge/clk.h>
+
+/* ----------------------------------- Defines, Data Structures, Typedefs */
+
+#define OMAP_SSI_OFFSET 0x58000
+#define OMAP_SSI_SIZE 0x1000
+#define OMAP_SSI_SYSCONFIG_OFFSET 0x10
+
+#define SSI_AUTOIDLE (1 << 0)
+#define SSI_SIDLE_SMARTIDLE (2 << 3)
+#define SSI_MIDLE_NOIDLE (1 << 12)
+
+/* Clk types requested by the dsp */
+#define IVA2_CLK 0
+#define GPT_CLK 1
+#define WDT_CLK 2
+#define MCBSP_CLK 3
+#define SSI_CLK 4
+
+/* Bridge GPT id (1 - 4), DM Timer id (5 - 8) */
+#define DMT_ID(id) ((id) + 4)
+
+/* Bridge MCBSP id (6 - 10), OMAP Mcbsp id (0 - 4) */
+#define MCBSP_ID(id) ((id) - 6)
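+
+/*
+ * Example (illustration only): DMT_ID(1) maps bridge GPT 1 to DM timer 5
+ * and DMT_ID(4) to DM timer 8; MCBSP_ID(6) maps bridge McBSP 6 to OMAP
+ * McBSP 0 and MCBSP_ID(10) to OMAP McBSP 4.
+ */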
+
+static struct omap_dm_timer *timer[4];
+
+struct clk *iva2_clk;
+
+struct dsp_ssi {
+ struct clk *sst_fck;
+ struct clk *ssr_fck;
+ struct clk *ick;
+};
+
+static struct dsp_ssi ssi;
+
+static u32 dsp_clocks;
+
+static inline u32 is_dsp_clk_active(u32 clk, u8 id)
+{
+ return clk & (1 << id);
+}
+
+static inline void set_dsp_clk_active(u32 *clk, u8 id)
+{
+ *clk |= (1 << id);
+}
+
+static inline void set_dsp_clk_inactive(u32 *clk, u8 id)
+{
+ *clk &= ~(1 << id);
+}
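+
+/*
+ * Example (illustration only): set_dsp_clk_active(&dsp_clocks, 3) sets
+ * bit 3 of the bitmask, after which is_dsp_clk_active(dsp_clocks, 3)
+ * returns non-zero; set_dsp_clk_inactive(&dsp_clocks, 3) clears it again.
+ */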
+
+static s8 get_clk_type(u8 id)
+{
+ s8 type;
+
+ if (id == DSP_CLK_IVA2)
+ type = IVA2_CLK;
+ else if (id <= DSP_CLK_GPT8)
+ type = GPT_CLK;
+ else if (id == DSP_CLK_WDT3)
+ type = WDT_CLK;
+ else if (id <= DSP_CLK_MCBSP5)
+ type = MCBSP_CLK;
+ else if (id == DSP_CLK_SSI)
+ type = SSI_CLK;
+ else
+ type = -1;
+
+ return type;
+}
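+
+/*
+ * Example (illustration only): with the dsp_clk_id ordering assumed above,
+ * get_clk_type(DSP_CLK_GPT8) returns GPT_CLK and get_clk_type(DSP_CLK_MCBSP1)
+ * returns MCBSP_CLK, while any id beyond DSP_CLK_SSI yields -1.
+ */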
+
+/*
+ * ======== dsp_clk_exit ========
+ * Purpose:
+ * Cleanup CLK module.
+ */
+void dsp_clk_exit(void)
+{
+ dsp_clock_disable_all(dsp_clocks);
+
+ clk_put(iva2_clk);
+ clk_put(ssi.sst_fck);
+ clk_put(ssi.ssr_fck);
+ clk_put(ssi.ick);
+}
+
+/*
+ * ======== dsp_clk_init ========
+ * Purpose:
+ * Initialize CLK module.
+ */
+void dsp_clk_init(void)
+{
+ static struct platform_device dspbridge_device;
+
+ dspbridge_device.dev.bus = &platform_bus_type;
+
+ iva2_clk = clk_get(&dspbridge_device.dev, "iva2_ck");
+ if (IS_ERR(iva2_clk))
+ dev_err(bridge, "failed to get iva2 clock %p\n", iva2_clk);
+
+ ssi.sst_fck = clk_get(&dspbridge_device.dev, "ssi_sst_fck");
+ ssi.ssr_fck = clk_get(&dspbridge_device.dev, "ssi_ssr_fck");
+ ssi.ick = clk_get(&dspbridge_device.dev, "ssi_ick");
+
+ if (IS_ERR(ssi.sst_fck) || IS_ERR(ssi.ssr_fck) || IS_ERR(ssi.ick))
+ dev_err(bridge, "failed to get ssi: sst %p, ssr %p, ick %p\n",
+ ssi.sst_fck, ssi.ssr_fck, ssi.ick);
+}
+
+#ifdef CONFIG_OMAP_MCBSP
+static void mcbsp_clk_prepare(bool flag, u8 id)
+{
+ struct cfg_hostres *resources;
+ struct dev_object *hdev_object = NULL;
+ struct bridge_dev_context *bridge_context = NULL;
+ u32 val;
+
+ hdev_object = (struct dev_object *)drv_get_first_dev_object();
+ if (!hdev_object)
+ return;
+
+ dev_get_bridge_context(hdev_object, &bridge_context);
+ if (!bridge_context)
+ return;
+
+ resources = bridge_context->resources;
+ if (!resources)
+ return;
+
+ if (flag) {
+ if (id == DSP_CLK_MCBSP1) {
+ /* set MCBSP1_CLKS, on McBSP1 ON */
+ val = __raw_readl(resources->dw_sys_ctrl_base + 0x274);
+ val |= 1 << 2;
+ __raw_writel(val, resources->dw_sys_ctrl_base + 0x274);
+ } else if (id == DSP_CLK_MCBSP2) {
+ /* set MCBSP2_CLKS, on McBSP2 ON */
+ val = __raw_readl(resources->dw_sys_ctrl_base + 0x274);
+ val |= 1 << 6;
+ __raw_writel(val, resources->dw_sys_ctrl_base + 0x274);
+ }
+ } else {
+ if (id == DSP_CLK_MCBSP1) {
+ /* clear MCBSP1_CLKS, on McBSP1 OFF */
+ val = __raw_readl(resources->dw_sys_ctrl_base + 0x274);
+ val &= ~(1 << 2);
+ __raw_writel(val, resources->dw_sys_ctrl_base + 0x274);
+ } else if (id == DSP_CLK_MCBSP2) {
+ /* clear MCBSP2_CLKS, on McBSP2 OFF */
+ val = __raw_readl(resources->dw_sys_ctrl_base + 0x274);
+ val &= ~(1 << 6);
+ __raw_writel(val, resources->dw_sys_ctrl_base + 0x274);
+ }
+ }
+}
+#endif
+
+/**
+ * dsp_gpt_wait_overflow - set gpt overflow and wait for fixed timeout
+ * @clk_id: GP Timer clock id.
+ * @load: Overflow value.
+ *
+ * Sets an overflow interrupt on the desired GPT and waits up to 5 msecs
+ * for the interrupt to occur.
+ */
+void dsp_gpt_wait_overflow(short int clk_id, unsigned int load)
+{
+ struct omap_dm_timer *gpt = timer[clk_id - 1];
+ unsigned long timeout;
+
+ if (!gpt)
+ return;
+
+ /* Enable overflow interrupt */
+ omap_dm_timer_set_int_enable(gpt, OMAP_TIMER_INT_OVERFLOW);
+
+ /*
+ * Set counter value to overflow counter after
+ * one tick and start timer.
+ */
+ omap_dm_timer_set_load_start(gpt, 0, load);
+
+ /* Wait 80us for timer to overflow */
+ udelay(80);
+
+ timeout = msecs_to_jiffies(5);
+ /* Check interrupt status and wait for interrupt */
+ while (!(omap_dm_timer_read_status(gpt) & OMAP_TIMER_INT_OVERFLOW)) {
+ if (time_is_after_jiffies(timeout)) {
+ pr_err("%s: GPTimer interrupt failed\n", __func__);
+ break;
+ }
+ }
+}
+
+/*
+ * ======== dsp_clk_enable ========
+ * Purpose:
+ * Enable the clock.
+ *
+ */
+int dsp_clk_enable(IN enum dsp_clk_id clk_id)
+{
+ int status = 0;
+
+ if (is_dsp_clk_active(dsp_clocks, clk_id)) {
+ dev_err(bridge, "WARN: clock id %d already enabled\n", clk_id);
+ goto out;
+ }
+
+ switch (get_clk_type(clk_id)) {
+ case IVA2_CLK:
+ clk_enable(iva2_clk);
+ break;
+ case GPT_CLK:
+ timer[clk_id - 1] =
+ omap_dm_timer_request_specific(DMT_ID(clk_id));
+ break;
+#ifdef CONFIG_OMAP_MCBSP
+ case MCBSP_CLK:
+ mcbsp_clk_prepare(true, clk_id);
+ omap_mcbsp_set_io_type(MCBSP_ID(clk_id), OMAP_MCBSP_POLL_IO);
+ omap_mcbsp_request(MCBSP_ID(clk_id));
+ break;
+#endif
+ case WDT_CLK:
+ dev_err(bridge, "ERROR: DSP requested to enable WDT3 clk\n");
+ break;
+ case SSI_CLK:
+ clk_enable(ssi.sst_fck);
+ clk_enable(ssi.ssr_fck);
+ clk_enable(ssi.ick);
+
+		/*
+		 * The SSI module needs to be configured not to force idle
+		 * on the master interface. If it is set to forced idle, the
+		 * SSI module transitions to standby, causing the client on
+		 * the DSP to hang while waiting for the SSI module to become
+		 * active again after the clocks are enabled.
+		 */
+ ssi_clk_prepare(true);
+ break;
+ default:
+ dev_err(bridge, "Invalid clock id for enable\n");
+ status = -EPERM;
+ }
+
+ if (DSP_SUCCEEDED(status))
+ set_dsp_clk_active(&dsp_clocks, clk_id);
+
+out:
+ return status;
+}
+
+/**
+ * dsp_clock_enable_all - Enable clocks used by the DSP
+ * @dsp_per_clocks:	Bitmask of the DSP peripheral clocks to be enabled
+ *
+ * This function enables all the peripheral clocks that were requested by DSP.
+ */
+u32 dsp_clock_enable_all(u32 dsp_per_clocks)
+{
+ u32 clk_id;
+ u32 status = -EPERM;
+
+ for (clk_id = 0; clk_id < DSP_CLK_NOT_DEFINED; clk_id++) {
+ if (is_dsp_clk_active(dsp_per_clocks, clk_id))
+ status = dsp_clk_enable(clk_id);
+ }
+
+ return status;
+}
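+
+/*
+ * Usage sketch (illustration only): the power management code is expected
+ * to restore the DSP's clock requests on wake-up with something like
+ *
+ *	dsp_clock_enable_all(dev_context->dsp_per_clks);
+ *
+ * where dsp_per_clks is the per-device bitmask kept in
+ * struct bridge_dev_context.
+ */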
+
+/*
+ * ======== dsp_clk_disable ========
+ * Purpose:
+ * Disable the clock.
+ *
+ */
+int dsp_clk_disable(IN enum dsp_clk_id clk_id)
+{
+ int status = 0;
+
+ if (!is_dsp_clk_active(dsp_clocks, clk_id)) {
+ dev_err(bridge, "ERR: clock id %d already disabled\n", clk_id);
+ goto out;
+ }
+
+ switch (get_clk_type(clk_id)) {
+ case IVA2_CLK:
+ clk_disable(iva2_clk);
+ break;
+ case GPT_CLK:
+ omap_dm_timer_free(timer[clk_id - 1]);
+ break;
+#ifdef CONFIG_OMAP_MCBSP
+ case MCBSP_CLK:
+ mcbsp_clk_prepare(false, clk_id);
+ omap_mcbsp_free(MCBSP_ID(clk_id));
+ break;
+#endif
+ case WDT_CLK:
+ dev_err(bridge, "ERROR: DSP requested to disable WDT3 clk\n");
+ break;
+ case SSI_CLK:
+		ssi_clk_prepare(false);
+ clk_disable(ssi.sst_fck);
+ clk_disable(ssi.ssr_fck);
+ clk_disable(ssi.ick);
+ break;
+ default:
+ dev_err(bridge, "Invalid clock id for disable\n");
+ status = -EPERM;
+ }
+
+ if (DSP_SUCCEEDED(status))
+ set_dsp_clk_inactive(&dsp_clocks, clk_id);
+
+out:
+ return status;
+}
+
+/**
+ * dsp_clock_disable_all - Disable all active clocks
+ * @dsp_per_clocks:	Bitmask of the DSP peripheral clocks to be disabled
+ *
+ * This function disables all the peripheral clocks that were enabled by DSP.
+ * It is meant to be called only when DSP is entering hibernation or when DSP
+ * is in error state.
+ */
+u32 dsp_clock_disable_all(u32 dsp_per_clocks)
+{
+ u32 clk_id;
+ u32 status = -EPERM;
+
+ for (clk_id = 0; clk_id < DSP_CLK_NOT_DEFINED; clk_id++) {
+ if (is_dsp_clk_active(dsp_per_clocks, clk_id))
+ status = dsp_clk_disable(clk_id);
+ }
+
+ return status;
+}
+
+u32 dsp_clk_get_iva2_rate(void)
+{
+ u32 clk_speed_khz;
+
+ clk_speed_khz = clk_get_rate(iva2_clk);
+ clk_speed_khz /= 1000;
+ dev_dbg(bridge, "%s: clk speed Khz = %d\n", __func__, clk_speed_khz);
+
+ return clk_speed_khz;
+}
+
+void ssi_clk_prepare(bool FLAG)
+{
+ void __iomem *ssi_base;
+ unsigned int value;
+
+ ssi_base = ioremap(L4_34XX_BASE + OMAP_SSI_OFFSET, OMAP_SSI_SIZE);
+ if (!ssi_base) {
+ pr_err("%s: error, SSI not configured\n", __func__);
+ return;
+ }
+
+ if (FLAG) {
+ /* Set Autoidle, SIDLEMode to smart idle, and MIDLEmode to
+ * no idle
+ */
+ value = SSI_AUTOIDLE | SSI_SIDLE_SMARTIDLE | SSI_MIDLE_NOIDLE;
+ } else {
+ /* Set Autoidle, SIDLEMode to forced idle, and MIDLEmode to
+ * forced idle
+ */
+ value = SSI_AUTOIDLE;
+ }
+
+ __raw_writel(value, ssi_base + OMAP_SSI_SYSCONFIG_OFFSET);
+ iounmap(ssi_base);
+}
+
diff --git a/drivers/staging/tidspbridge/core/io_sm.c b/drivers/staging/tidspbridge/core/io_sm.c
new file mode 100644
index 0000000..7fb840d
--- /dev/null
+++ b/drivers/staging/tidspbridge/core/io_sm.c
@@ -0,0 +1,2410 @@
+/*
+ * io_sm.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * IO dispatcher for a shared memory channel driver.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+/*
+ * Channel Invariant:
+ * There is an important invariant condition which must be maintained per
+ * channel outside of bridge_chnl_get_ioc() and IO_Dispatch(), violation of
+ * which may cause timeouts and/or failure of the sync_wait_on_event
+ * function.
+ */
+
+/* Host OS */
+#include <dspbridge/host_os.h>
+#include <linux/workqueue.h>
+
+/* ----------------------------------- DSP/BIOS Bridge */
+#include <dspbridge/std.h>
+#include <dspbridge/dbdefs.h>
+
+/* Trace & Debug */
+#include <dspbridge/dbc.h>
+
+/* Services Layer */
+#include <dspbridge/cfg.h>
+#include <dspbridge/ntfy.h>
+#include <dspbridge/sync.h>
+
+/* Hardware Abstraction Layer */
+#include <hw_defs.h>
+#include <hw_mmu.h>
+
+/* Bridge Driver */
+#include <dspbridge/dspdeh.h>
+#include <dspbridge/dspio.h>
+#include <dspbridge/dspioctl.h>
+#include <dspbridge/wdt.h>
+#include <_tiomap.h>
+#include <tiomap_io.h>
+#include <_tiomap_pwr.h>
+
+/* Platform Manager */
+#include <dspbridge/cod.h>
+#include <dspbridge/node.h>
+#include <dspbridge/dev.h>
+
+/* Others */
+#include <dspbridge/rms_sh.h>
+#include <dspbridge/mgr.h>
+#include <dspbridge/drv.h>
+#include "_cmm.h"
+#include "module_list.h"
+
+/* This */
+#include <dspbridge/io_sm.h>
+#include "_msg_sm.h"
+
+/* Defines, Data Structures, Typedefs */
+#define OUTPUTNOTREADY 0xffff
+#define NOTENABLED 0xffff /* Channel(s) not enabled */
+
+#define EXTEND "_EXT_END"
+
+#define SWAP_WORD(x) (x)
+#define UL_PAGE_ALIGN_SIZE 0x10000 /* Page Align Size */
+
+#define MAX_PM_REQS 32
+
+#define MMU_FAULT_HEAD1 0xa5a5a5a5
+#define MMU_FAULT_HEAD2 0x96969696
+#define POLL_MAX 1000
+#define MAX_MMU_DBGBUFF 10240
+
+/* IO Manager: only one created per board */
+struct io_mgr {
+	/* These fields must be the first fields in an io_mgr struct */
+ /* Bridge device context */
+ struct bridge_dev_context *hbridge_context;
+ /* Function interface to Bridge driver */
+ struct bridge_drv_interface *intf_fxns;
+ struct dev_object *hdev_obj; /* Device this board represents */
+
+ /* These fields initialized in bridge_io_create() */
+ struct chnl_mgr *hchnl_mgr;
+ struct shm *shared_mem; /* Shared Memory control */
+ u8 *input; /* Address of input channel */
+ u8 *output; /* Address of output channel */
+ struct msg_mgr *hmsg_mgr; /* Message manager */
+	/* Msg control for messages from the DSP */
+ struct msg_ctrl *msg_input_ctrl;
+	/* Msg control for messages to the DSP */
+ struct msg_ctrl *msg_output_ctrl;
+ u8 *msg_input; /* Address of input messages */
+ u8 *msg_output; /* Address of output messages */
+ u32 usm_buf_size; /* Size of a shared memory I/O channel */
+ bool shared_irq; /* Is this IRQ shared? */
+ u32 word_size; /* Size in bytes of DSP word */
+ u16 intr_val; /* Interrupt value */
+	/* Private extended proc info; MMU setup */
+ struct mgr_processorextinfo ext_proc_info;
+ struct cmm_object *hcmm_mgr; /* Shared Mem Mngr */
+ struct work_struct io_workq; /* workqueue */
+#ifndef DSP_TRACEBUF_DISABLED
+ u32 ul_trace_buffer_begin; /* Trace message start address */
+ u32 ul_trace_buffer_end; /* Trace message end address */
+ u32 ul_trace_buffer_current; /* Trace message current address */
+ u32 ul_gpp_read_pointer; /* GPP Read pointer to Trace buffer */
+ u8 *pmsg;
+ u32 ul_gpp_va;
+ u32 ul_dsp_va;
+#endif
+ /* IO Dpc */
+ u32 dpc_req; /* Number of requested DPC's. */
+ u32 dpc_sched; /* Number of executed DPC's. */
+ struct tasklet_struct dpc_tasklet;
+ spinlock_t dpc_lock;
+
+};
+
+/* Function Prototypes */
+static void io_dispatch_chnl(IN struct io_mgr *pio_mgr,
+ IN OUT struct chnl_object *pchnl, u8 iMode);
+static void io_dispatch_msg(IN struct io_mgr *pio_mgr,
+ struct msg_mgr *hmsg_mgr);
+static void io_dispatch_pm(struct io_mgr *pio_mgr);
+static void notify_chnl_complete(struct chnl_object *pchnl,
+ struct chnl_irp *chnl_packet_obj);
+static void input_chnl(struct io_mgr *pio_mgr, struct chnl_object *pchnl,
+ u8 iMode);
+static void output_chnl(struct io_mgr *pio_mgr, struct chnl_object *pchnl,
+ u8 iMode);
+static void input_msg(struct io_mgr *pio_mgr, struct msg_mgr *hmsg_mgr);
+static void output_msg(struct io_mgr *pio_mgr, struct msg_mgr *hmsg_mgr);
+static u32 find_ready_output(struct chnl_mgr *chnl_mgr_obj,
+ struct chnl_object *pchnl, u32 dwMask);
+static u32 read_data(struct bridge_dev_context *hDevContext, void *dest,
+ void *pSrc, u32 usize);
+static u32 write_data(struct bridge_dev_context *hDevContext, void *dest,
+ void *pSrc, u32 usize);
+
+/* Bus Addr (cached kernel) */
+static int register_shm_segs(struct io_mgr *hio_mgr,
+ struct cod_manager *cod_man,
+ u32 dw_gpp_base_pa);
+
+/*
+ * ======== bridge_io_create ========
+ * Create an IO manager object.
+ */
+int bridge_io_create(OUT struct io_mgr **phIOMgr,
+ struct dev_object *hdev_obj,
+ IN CONST struct io_attrs *pMgrAttrs)
+{
+ int status = 0;
+ struct io_mgr *pio_mgr = NULL;
+ struct shm *shared_mem = NULL;
+ struct bridge_dev_context *hbridge_context = NULL;
+ struct cfg_devnode *dev_node_obj;
+ struct chnl_mgr *hchnl_mgr;
+ u8 dev_type;
+
+ /* Check requirements */
+ if (!phIOMgr || !pMgrAttrs || pMgrAttrs->word_size == 0) {
+ status = -EFAULT;
+ goto func_end;
+ }
+ dev_get_chnl_mgr(hdev_obj, &hchnl_mgr);
+ if (!hchnl_mgr || hchnl_mgr->hio_mgr) {
+ status = -EFAULT;
+ goto func_end;
+ }
+ /*
+ * Message manager will be created when a file is loaded, since
+ * size of message buffer in shared memory is configurable in
+ * the base image.
+ */
+ dev_get_bridge_context(hdev_obj, &hbridge_context);
+ if (!hbridge_context) {
+ status = -EFAULT;
+ goto func_end;
+ }
+ dev_get_dev_type(hdev_obj, &dev_type);
+ /*
+ * DSP shared memory area will get set properly when
+ * a program is loaded. They are unknown until a COFF file is
+ * loaded. I chose the value -1 because it was less likely to be
+ * a valid address than 0.
+ */
+ shared_mem = (struct shm *)-1;
+
+ /* Allocate IO manager object */
+ pio_mgr = kzalloc(sizeof(struct io_mgr), GFP_KERNEL);
+ if (pio_mgr == NULL) {
+ status = -ENOMEM;
+ goto func_end;
+ }
+
+ /* Initialize chnl_mgr object */
+#ifndef DSP_TRACEBUF_DISABLED
+ pio_mgr->pmsg = NULL;
+#endif
+ pio_mgr->hchnl_mgr = hchnl_mgr;
+ pio_mgr->word_size = pMgrAttrs->word_size;
+ pio_mgr->shared_mem = shared_mem;
+
+ if (dev_type == DSP_UNIT) {
+ /* Create an IO DPC */
+ tasklet_init(&pio_mgr->dpc_tasklet, io_dpc, (u32) pio_mgr);
+
+ /* Initialize DPC counters */
+ pio_mgr->dpc_req = 0;
+ pio_mgr->dpc_sched = 0;
+
+ spin_lock_init(&pio_mgr->dpc_lock);
+
+ if (DSP_SUCCEEDED(status))
+ status = dev_get_dev_node(hdev_obj, &dev_node_obj);
+ }
+
+ if (DSP_SUCCEEDED(status)) {
+ pio_mgr->hbridge_context = hbridge_context;
+ pio_mgr->shared_irq = pMgrAttrs->irq_shared;
+ if (dsp_wdt_init())
+ status = -EPERM;
+ } else {
+ status = -EIO;
+ }
+func_end:
+ if (DSP_FAILED(status)) {
+ /* Cleanup */
+ bridge_io_destroy(pio_mgr);
+ if (phIOMgr)
+ *phIOMgr = NULL;
+ } else {
+ /* Return IO manager object to caller... */
+ hchnl_mgr->hio_mgr = pio_mgr;
+ *phIOMgr = pio_mgr;
+ }
+ return status;
+}
+
+/*
+ * ======== bridge_io_destroy ========
+ * Purpose:
+ * Disable interrupts, destroy the IO manager.
+ */
+int bridge_io_destroy(struct io_mgr *hio_mgr)
+{
+ int status = 0;
+ if (hio_mgr) {
+ /* Free IO DPC object */
+ tasklet_kill(&hio_mgr->dpc_tasklet);
+
+#ifndef DSP_TRACEBUF_DISABLED
+ kfree(hio_mgr->pmsg);
+#endif
+ dsp_wdt_exit();
+ /* Free this IO manager object */
+ kfree(hio_mgr);
+ } else {
+ status = -EFAULT;
+ }
+
+ return status;
+}
+
+/*
+ * ======== bridge_io_on_loaded ========
+ * Purpose:
+ * Called when a new program is loaded to get shared memory buffer
+ * parameters from COFF file. ulSharedBufferBase and ulSharedBufferLimit
+ * are in DSP address units.
+ */
+int bridge_io_on_loaded(struct io_mgr *hio_mgr)
+{
+ struct cod_manager *cod_man;
+ struct chnl_mgr *hchnl_mgr;
+ struct msg_mgr *hmsg_mgr;
+ u32 ul_shm_base;
+ u32 ul_shm_base_offset;
+ u32 ul_shm_limit;
+ u32 ul_shm_length = -1;
+ u32 ul_mem_length = -1;
+ u32 ul_msg_base;
+ u32 ul_msg_limit;
+ u32 ul_msg_length = -1;
+ u32 ul_ext_end;
+ u32 ul_gpp_pa = 0;
+ u32 ul_gpp_va = 0;
+ u32 ul_dsp_va = 0;
+ u32 ul_seg_size = 0;
+ u32 ul_pad_size = 0;
+ u32 i;
+ int status = 0;
+ u8 num_procs = 0;
+ s32 ndx = 0;
+ /* DSP MMU setup table */
+ struct bridge_ioctl_extproc ae_proc[BRDIOCTL_NUMOFMMUTLB];
+ struct cfg_hostres *host_res;
+ struct bridge_dev_context *pbridge_context;
+ u32 map_attrs;
+ u32 shm0_end;
+ u32 ul_dyn_ext_base;
+ u32 ul_seg1_size = 0;
+ u32 pa_curr = 0;
+ u32 va_curr = 0;
+ u32 gpp_va_curr = 0;
+ u32 num_bytes = 0;
+ u32 all_bits = 0;
+ u32 page_size[] = { HW_PAGE_SIZE16MB, HW_PAGE_SIZE1MB,
+ HW_PAGE_SIZE64KB, HW_PAGE_SIZE4KB
+ };
+
+ status = dev_get_bridge_context(hio_mgr->hdev_obj, &pbridge_context);
+ if (!pbridge_context) {
+ status = -EFAULT;
+ goto func_end;
+ }
+
+ host_res = pbridge_context->resources;
+ if (!host_res) {
+ status = -EFAULT;
+ goto func_end;
+ }
+ status = dev_get_cod_mgr(hio_mgr->hdev_obj, &cod_man);
+ if (!cod_man) {
+ status = -EFAULT;
+ goto func_end;
+ }
+ hchnl_mgr = hio_mgr->hchnl_mgr;
+ /* The message manager is destroyed when the board is stopped. */
+ dev_get_msg_mgr(hio_mgr->hdev_obj, &hio_mgr->hmsg_mgr);
+ hmsg_mgr = hio_mgr->hmsg_mgr;
+ if (!hchnl_mgr || !hmsg_mgr) {
+ status = -EFAULT;
+ goto func_end;
+ }
+ if (hio_mgr->shared_mem)
+ hio_mgr->shared_mem = NULL;
+
+ /* Get start and length of channel part of shared memory */
+ status = cod_get_sym_value(cod_man, CHNL_SHARED_BUFFER_BASE_SYM,
+ &ul_shm_base);
+ if (DSP_FAILED(status)) {
+ status = -EFAULT;
+ goto func_end;
+ }
+ status = cod_get_sym_value(cod_man, CHNL_SHARED_BUFFER_LIMIT_SYM,
+ &ul_shm_limit);
+ if (DSP_FAILED(status)) {
+ status = -EFAULT;
+ goto func_end;
+ }
+ if (ul_shm_limit <= ul_shm_base) {
+ status = -EINVAL;
+ goto func_end;
+ }
+ /* Get total length in bytes */
+ ul_shm_length = (ul_shm_limit - ul_shm_base + 1) * hio_mgr->word_size;
+ /* Calculate size of a PROCCOPY shared memory region */
+ dev_dbg(bridge, "%s: (proc)proccopy shmmem size: 0x%x bytes\n",
+ __func__, (ul_shm_length - sizeof(struct shm)));
+
+ if (DSP_SUCCEEDED(status)) {
+ /* Get start and length of message part of shared memory */
+ status = cod_get_sym_value(cod_man, MSG_SHARED_BUFFER_BASE_SYM,
+ &ul_msg_base);
+ }
+ if (DSP_SUCCEEDED(status)) {
+ status = cod_get_sym_value(cod_man, MSG_SHARED_BUFFER_LIMIT_SYM,
+ &ul_msg_limit);
+ if (DSP_SUCCEEDED(status)) {
+ if (ul_msg_limit <= ul_msg_base) {
+ status = -EINVAL;
+ } else {
+ /*
+ * Length (bytes) of messaging part of shared
+ * memory.
+ */
+ ul_msg_length =
+ (ul_msg_limit - ul_msg_base +
+ 1) * hio_mgr->word_size;
+ /*
+ * Total length (bytes) of shared memory:
+ * chnl + msg.
+ */
+ ul_mem_length = ul_shm_length + ul_msg_length;
+ }
+ } else {
+ status = -EFAULT;
+ }
+ } else {
+ status = -EFAULT;
+ }
+ if (DSP_SUCCEEDED(status)) {
+#ifndef DSP_TRACEBUF_DISABLED
+ status =
+ cod_get_sym_value(cod_man, DSP_TRACESEC_END, &shm0_end);
+#else
+ status = cod_get_sym_value(cod_man, SHM0_SHARED_END_SYM,
+ &shm0_end);
+#endif
+ if (DSP_FAILED(status))
+ status = -EFAULT;
+ }
+ if (DSP_SUCCEEDED(status)) {
+ status =
+ cod_get_sym_value(cod_man, DYNEXTBASE, &ul_dyn_ext_base);
+ if (DSP_FAILED(status))
+ status = -EFAULT;
+ }
+ if (DSP_SUCCEEDED(status)) {
+ status = cod_get_sym_value(cod_man, EXTEND, &ul_ext_end);
+ if (DSP_FAILED(status))
+ status = -EFAULT;
+ }
+ if (DSP_SUCCEEDED(status)) {
+ /* Get memory reserved in host resources */
+ (void)mgr_enum_processor_info(0, (struct dsp_processorinfo *)
+ &hio_mgr->ext_proc_info,
+ sizeof(struct
+ mgr_processorextinfo),
+ &num_procs);
+
+ /* The first MMU TLB entry(TLB_0) in DCD is ShmBase. */
+ ndx = 0;
+ ul_gpp_pa = host_res->dw_mem_phys[1];
+ ul_gpp_va = host_res->dw_mem_base[1];
+ /* This is the virtual uncached ioremapped address!!! */
+ /* Why can't we directly take the DSPVA from the symbols? */
+ ul_dsp_va = hio_mgr->ext_proc_info.ty_tlb[0].ul_dsp_virt;
+ ul_seg_size = (shm0_end - ul_dsp_va) * hio_mgr->word_size;
+ ul_seg1_size =
+ (ul_ext_end - ul_dyn_ext_base) * hio_mgr->word_size;
+ /* 4K align */
+ ul_seg1_size = (ul_seg1_size + 0xFFF) & (~0xFFFUL);
+ /* 64K align */
+ ul_seg_size = (ul_seg_size + 0xFFFF) & (~0xFFFFUL);
+ ul_pad_size = UL_PAGE_ALIGN_SIZE - ((ul_gpp_pa + ul_seg1_size) %
+ UL_PAGE_ALIGN_SIZE);
+ if (ul_pad_size == UL_PAGE_ALIGN_SIZE)
+ ul_pad_size = 0x0;
+
+ dev_dbg(bridge, "%s: ul_gpp_pa %x, ul_gpp_va %x, ul_dsp_va %x, "
+ "shm0_end %x, ul_dyn_ext_base %x, ul_ext_end %x, "
+ "ul_seg_size %x ul_seg1_size %x \n", __func__,
+ ul_gpp_pa, ul_gpp_va, ul_dsp_va, shm0_end,
+ ul_dyn_ext_base, ul_ext_end, ul_seg_size, ul_seg1_size);
+
+ if ((ul_seg_size + ul_seg1_size + ul_pad_size) >
+ host_res->dw_mem_length[1]) {
+ pr_err("%s: shm Error, reserved 0x%x required 0x%x\n",
+ __func__, host_res->dw_mem_length[1],
+ ul_seg_size + ul_seg1_size + ul_pad_size);
+ status = -ENOMEM;
+ }
+ }
+ if (DSP_FAILED(status))
+ goto func_end;
+
+ pa_curr = ul_gpp_pa;
+ va_curr = ul_dyn_ext_base * hio_mgr->word_size;
+ gpp_va_curr = ul_gpp_va;
+ num_bytes = ul_seg1_size;
+
+ /*
+ * Try to fit into TLB entries. If not possible, push them to page
+ * tables. It is quite possible that if sections are not on
+ * bigger page boundary, we may end up making several small pages.
+ * So, push them onto page tables, if that is the case.
+ */
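+	/*
+	 * Worked example (illustration only): with pa_curr = 0x87200000 and
+	 * va_curr = 0x20000000, all_bits = 0xa7200000. A 16 MB page fails
+	 * the alignment test (the low 24 bits are not all zero) but a 1 MB
+	 * page passes, so, assuming at least 1 MB remains to be mapped,
+	 * 1 MB is mapped and pa_curr/va_curr both advance by 0x100000.
+	 */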
+ map_attrs = 0x00000000;
+ map_attrs = DSP_MAPLITTLEENDIAN;
+ map_attrs |= DSP_MAPPHYSICALADDR;
+ map_attrs |= DSP_MAPELEMSIZE32;
+ map_attrs |= DSP_MAPDONOTLOCK;
+
+ while (num_bytes) {
+ /*
+ * To find the max. page size with which both PA & VA are
+ * aligned.
+ */
+ all_bits = pa_curr | va_curr;
+ dev_dbg(bridge, "all_bits %x, pa_curr %x, va_curr %x, "
+ "num_bytes %x\n", all_bits, pa_curr, va_curr,
+ num_bytes);
+ for (i = 0; i < 4; i++) {
+ if ((num_bytes >= page_size[i]) && ((all_bits &
+ (page_size[i] -
+ 1)) == 0)) {
+ status =
+ hio_mgr->intf_fxns->
+ pfn_brd_mem_map(hio_mgr->hbridge_context,
+ pa_curr, va_curr,
+ page_size[i], map_attrs,
+ NULL);
+ if (DSP_FAILED(status))
+ goto func_end;
+ pa_curr += page_size[i];
+ va_curr += page_size[i];
+ gpp_va_curr += page_size[i];
+ num_bytes -= page_size[i];
+ /*
+ * Don't try smaller sizes. Hopefully we have
+ * reached an address aligned to a bigger page
+ * size.
+ */
+ break;
+ }
+ }
+ }
+ pa_curr += ul_pad_size;
+ va_curr += ul_pad_size;
+ gpp_va_curr += ul_pad_size;
+
+ /* Configure the TLB entries for the next cacheable segment */
+ num_bytes = ul_seg_size;
+ va_curr = ul_dsp_va * hio_mgr->word_size;
+ while (num_bytes) {
+ /*
+ * To find the max. page size with which both PA & VA are
+ * aligned.
+ */
+ all_bits = pa_curr | va_curr;
+ dev_dbg(bridge, "all_bits for Seg1 %x, pa_curr %x, "
+ "va_curr %x, num_bytes %x\n", all_bits, pa_curr,
+ va_curr, num_bytes);
+ for (i = 0; i < 4; i++) {
+ if (!(num_bytes >= page_size[i]) ||
+ !((all_bits & (page_size[i] - 1)) == 0))
+ continue;
+ if (ndx < MAX_LOCK_TLB_ENTRIES) {
+ /*
+ * This is the physical address written to
+ * DSP MMU.
+ */
+ ae_proc[ndx].ul_gpp_pa = pa_curr;
+ /*
+ * This is the virtual uncached ioremapped
+ * address!!!
+ */
+ ae_proc[ndx].ul_gpp_va = gpp_va_curr;
+ ae_proc[ndx].ul_dsp_va =
+ va_curr / hio_mgr->word_size;
+ ae_proc[ndx].ul_size = page_size[i];
+ ae_proc[ndx].endianism = HW_LITTLE_ENDIAN;
+ ae_proc[ndx].elem_size = HW_ELEM_SIZE16BIT;
+ ae_proc[ndx].mixed_mode = HW_MMU_CPUES;
+ dev_dbg(bridge, "shm MMU TLB entry PA %x"
+ " VA %x DSP_VA %x Size %x\n",
+ ae_proc[ndx].ul_gpp_pa,
+ ae_proc[ndx].ul_gpp_va,
+ ae_proc[ndx].ul_dsp_va *
+ hio_mgr->word_size, page_size[i]);
+ ndx++;
+ } else {
+ status =
+ hio_mgr->intf_fxns->
+ pfn_brd_mem_map(hio_mgr->hbridge_context,
+ pa_curr, va_curr,
+ page_size[i], map_attrs,
+ NULL);
+ dev_dbg(bridge,
+ "shm MMU PTE entry PA %x"
+ " VA %x DSP_VA %x Size %x\n",
+ pa_curr, gpp_va_curr,
+ va_curr, page_size[i]);
+ if (DSP_FAILED(status))
+ goto func_end;
+ }
+ pa_curr += page_size[i];
+ va_curr += page_size[i];
+ gpp_va_curr += page_size[i];
+ num_bytes -= page_size[i];
+ /*
+ * Don't try smaller sizes. Hopefully we have reached
+ * an address aligned to a bigger page size.
+ */
+ break;
+ }
+ }
+
+ /*
+ * Copy remaining entries from CDB. All entries are 1 MB and
+ * should not conflict with shm entries on MPU or DSP side.
+ */
+ for (i = 3; i < 7 && ndx < BRDIOCTL_NUMOFMMUTLB; i++) {
+ if (hio_mgr->ext_proc_info.ty_tlb[i].ul_gpp_phys == 0)
+ continue;
+
+ if ((hio_mgr->ext_proc_info.ty_tlb[i].ul_gpp_phys >
+ ul_gpp_pa - 0x100000
+ && hio_mgr->ext_proc_info.ty_tlb[i].ul_gpp_phys <=
+ ul_gpp_pa + ul_seg_size)
+ || (hio_mgr->ext_proc_info.ty_tlb[i].ul_dsp_virt >
+ ul_dsp_va - 0x100000 / hio_mgr->word_size
+ && hio_mgr->ext_proc_info.ty_tlb[i].ul_dsp_virt <=
+ ul_dsp_va + ul_seg_size / hio_mgr->word_size)) {
+ dev_dbg(bridge,
+ "CDB MMU entry %d conflicts with "
+ "shm.\n\tCDB: GppPa %x, DspVa %x.\n\tSHM: "
+ "GppPa %x, DspVa %x, Bytes %x.\n", i,
+ hio_mgr->ext_proc_info.ty_tlb[i].ul_gpp_phys,
+ hio_mgr->ext_proc_info.ty_tlb[i].ul_dsp_virt,
+ ul_gpp_pa, ul_dsp_va, ul_seg_size);
+ status = -EPERM;
+ } else {
+ if (ndx < MAX_LOCK_TLB_ENTRIES) {
+ ae_proc[ndx].ul_dsp_va =
+ hio_mgr->ext_proc_info.ty_tlb[i].
+ ul_dsp_virt;
+ ae_proc[ndx].ul_gpp_pa =
+ hio_mgr->ext_proc_info.ty_tlb[i].
+ ul_gpp_phys;
+ ae_proc[ndx].ul_gpp_va = 0;
+ /* 1 MB */
+ ae_proc[ndx].ul_size = 0x100000;
+ dev_dbg(bridge, "shm MMU entry PA %x "
+ "DSP_VA 0x%x\n", ae_proc[ndx].ul_gpp_pa,
+ ae_proc[ndx].ul_dsp_va);
+ ndx++;
+ } else {
+ status = hio_mgr->intf_fxns->pfn_brd_mem_map
+ (hio_mgr->hbridge_context,
+ hio_mgr->ext_proc_info.ty_tlb[i].
+ ul_gpp_phys,
+ hio_mgr->ext_proc_info.ty_tlb[i].
+ ul_dsp_virt, 0x100000, map_attrs,
+ NULL);
+ }
+ }
+ if (DSP_FAILED(status))
+ goto func_end;
+ }
+
+ map_attrs = 0x00000000;
+ map_attrs = DSP_MAPLITTLEENDIAN;
+ map_attrs |= DSP_MAPPHYSICALADDR;
+ map_attrs |= DSP_MAPELEMSIZE32;
+ map_attrs |= DSP_MAPDONOTLOCK;
+
+ /* Map the L4 peripherals */
+ i = 0;
+ while (l4_peripheral_table[i].phys_addr) {
+ status = hio_mgr->intf_fxns->pfn_brd_mem_map
+ (hio_mgr->hbridge_context, l4_peripheral_table[i].phys_addr,
+ l4_peripheral_table[i].dsp_virt_addr, HW_PAGE_SIZE4KB,
+ map_attrs, NULL);
+ if (DSP_FAILED(status))
+ goto func_end;
+ i++;
+ }
+
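+ /* Zero out the remaining, unused locked TLB entry slots */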
+ for (i = ndx; i < BRDIOCTL_NUMOFMMUTLB; i++) {
+ ae_proc[i].ul_dsp_va = 0;
+ ae_proc[i].ul_gpp_pa = 0;
+ ae_proc[i].ul_gpp_va = 0;
+ ae_proc[i].ul_size = 0;
+ }
+ /*
+ * Set the shm physical address entry (grayed out in CDB file)
+ * to the virtual uncached ioremapped address of shm reserved
+ * on MPU.
+ */
+ hio_mgr->ext_proc_info.ty_tlb[0].ul_gpp_phys =
+ (ul_gpp_va + ul_seg1_size + ul_pad_size);
+
+ /*
+ * Need shm Phys addr. IO supports only one DSP for now:
+ * num_procs = 1.
+ */
+ if (!hio_mgr->ext_proc_info.ty_tlb[0].ul_gpp_phys || num_procs != 1) {
+ status = -EFAULT;
+ goto func_end;
+ } else {
+ if (ae_proc[0].ul_dsp_va > ul_shm_base) {
+ status = -EPERM;
+ goto func_end;
+ }
+ /* ul_shm_base may not be at ul_dsp_va address */
+ ul_shm_base_offset = (ul_shm_base - ae_proc[0].ul_dsp_va) *
+ hio_mgr->word_size;
+ /*
+ * bridge_dev_ctrl() will set dev context dsp-mmu info. In
+ * bridge_brd_start() the MMU will be re-programmed with MMU
+ * DSPVa-GPPPa pair info while DSP is in a known
+ * (reset) state.
+ */
+
+ status =
+ hio_mgr->intf_fxns->pfn_dev_cntrl(hio_mgr->hbridge_context,
+ BRDIOCTL_SETMMUCONFIG,
+ ae_proc);
+ if (DSP_FAILED(status))
+ goto func_end;
+ ul_shm_base = hio_mgr->ext_proc_info.ty_tlb[0].ul_gpp_phys;
+ ul_shm_base += ul_shm_base_offset;
+ ul_shm_base = (u32) MEM_LINEAR_ADDRESS((void *)ul_shm_base,
+ ul_mem_length);
+ if (ul_shm_base == 0) {
+ status = -EFAULT;
+ goto func_end;
+ }
+ /* Register SM */
+ status =
+ register_shm_segs(hio_mgr, cod_man, ae_proc[0].ul_gpp_pa);
+ }
+
+ hio_mgr->shared_mem = (struct shm *)ul_shm_base;
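+ /*
+  * The data portion of shared memory (after the shm control structure)
+  * is split evenly between the input and output buffers.
+  */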
+ hio_mgr->input = (u8 *) hio_mgr->shared_mem + sizeof(struct shm);
+ hio_mgr->output = hio_mgr->input + (ul_shm_length -
+ sizeof(struct shm)) / 2;
+ hio_mgr->usm_buf_size = hio_mgr->output - hio_mgr->input;
+
+ /* Set up Shared memory addresses for messaging. */
+ hio_mgr->msg_input_ctrl = (struct msg_ctrl *)((u8 *) hio_mgr->shared_mem
+ + ul_shm_length);
+ hio_mgr->msg_input =
+ (u8 *) hio_mgr->msg_input_ctrl + sizeof(struct msg_ctrl);
+ hio_mgr->msg_output_ctrl =
+ (struct msg_ctrl *)((u8 *) hio_mgr->msg_input_ctrl +
+ ul_msg_length / 2);
+ hio_mgr->msg_output =
+ (u8 *) hio_mgr->msg_output_ctrl + sizeof(struct msg_ctrl);
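+ /*
+  * The number of message frames is however many fit between the
+  * message input area and the output control block.
+  */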
+ hmsg_mgr->max_msgs =
+ ((u8 *) hio_mgr->msg_output_ctrl - hio_mgr->msg_input)
+ / sizeof(struct msg_dspmsg);
+ dev_dbg(bridge, "IO MGR shm details: shared_mem %p, input %p, "
+ "output %p, msg_input_ctrl %p, msg_input %p, "
+ "msg_output_ctrl %p, msg_output %p\n",
+ (u8 *) hio_mgr->shared_mem, hio_mgr->input,
+ hio_mgr->output, (u8 *) hio_mgr->msg_input_ctrl,
+ hio_mgr->msg_input, (u8 *) hio_mgr->msg_output_ctrl,
+ hio_mgr->msg_output);
+ dev_dbg(bridge, "(proc) Max msgs in shared memory: 0x%x\n",
+ hmsg_mgr->max_msgs);
+ memset((void *)hio_mgr->shared_mem, 0, sizeof(struct shm));
+
+#ifndef DSP_TRACEBUF_DISABLED
+ /* Get the start address of trace buffer */
+ status = cod_get_sym_value(cod_man, SYS_PUTCBEG,
+ &hio_mgr->ul_trace_buffer_begin);
+ if (DSP_FAILED(status)) {
+ status = -EFAULT;
+ goto func_end;
+ }
+
+ hio_mgr->ul_gpp_read_pointer = hio_mgr->ul_trace_buffer_begin =
+ (ul_gpp_va + ul_seg1_size + ul_pad_size) +
+ (hio_mgr->ul_trace_buffer_begin - ul_dsp_va);
+ /* Get the end address of trace buffer */
+ status = cod_get_sym_value(cod_man, SYS_PUTCEND,
+ &hio_mgr->ul_trace_buffer_end);
+ if (DSP_FAILED(status)) {
+ status = -EFAULT;
+ goto func_end;
+ }
+ hio_mgr->ul_trace_buffer_end =
+ (ul_gpp_va + ul_seg1_size + ul_pad_size) +
+ (hio_mgr->ul_trace_buffer_end - ul_dsp_va);
+ /* Get the current address of DSP write pointer */
+ status = cod_get_sym_value(cod_man, BRIDGE_SYS_PUTC_CURRENT,
+ &hio_mgr->ul_trace_buffer_current);
+ if (DSP_FAILED(status)) {
+ status = -EFAULT;
+ goto func_end;
+ }
+ hio_mgr->ul_trace_buffer_current =
+ (ul_gpp_va + ul_seg1_size + ul_pad_size) +
+ (hio_mgr->ul_trace_buffer_current - ul_dsp_va);
+ /* Calculate the size of trace buffer */
+ kfree(hio_mgr->pmsg);
+ hio_mgr->pmsg = kmalloc(((hio_mgr->ul_trace_buffer_end -
+ hio_mgr->ul_trace_buffer_begin) *
+ hio_mgr->word_size) + 2, GFP_KERNEL);
+ if (!hio_mgr->pmsg)
+ status = -ENOMEM;
+
+ hio_mgr->ul_dsp_va = ul_dsp_va;
+ hio_mgr->ul_gpp_va = (ul_gpp_va + ul_seg1_size + ul_pad_size);
+
+#endif
+func_end:
+ return status;
+}
+
+/*
+ * ======== io_buf_size ========
+ * Size of shared memory I/O channel.
+ */
+u32 io_buf_size(struct io_mgr *hio_mgr)
+{
+ if (hio_mgr)
+ return hio_mgr->usm_buf_size;
+ else
+ return 0;
+}
+
+/*
+ * ======== io_cancel_chnl ========
+ * Cancel IO on a given PCPY channel.
+ */
+void io_cancel_chnl(struct io_mgr *hio_mgr, u32 ulChnl)
+{
+ struct io_mgr *pio_mgr = (struct io_mgr *)hio_mgr;
+ struct shm *sm;
+
+ if (!hio_mgr)
+ goto func_end;
+ sm = hio_mgr->shared_mem;
+
+ /* Inform DSP that we have no more buffers on this channel */
+ IO_AND_VALUE(pio_mgr->hbridge_context, struct shm, sm, host_free_mask,
+ (~(1 << ulChnl)));
+
+ sm_interrupt_dsp(pio_mgr->hbridge_context, MBX_PCPY_CLASS);
+func_end:
+ return;
+}
+
+/*
+ * ======== io_dispatch_chnl ========
+ * Proc-copy channel dispatch.
+ */
+static void io_dispatch_chnl(IN struct io_mgr *pio_mgr,
+ IN OUT struct chnl_object *pchnl, u8 iMode)
+{
+ if (!pio_mgr)
+ goto func_end;
+
+ /* See if there is any data available for transfer */
+ if (iMode != IO_SERVICE)
+ goto func_end;
+
+ /* Any channel will do for this mode */
+ input_chnl(pio_mgr, pchnl, iMode);
+ output_chnl(pio_mgr, pchnl, iMode);
+func_end:
+ return;
+}
+
+/*
+ * ======== io_dispatch_msg ========
+ * Performs I/O dispatch on message queues.
+ */
+static void io_dispatch_msg(IN struct io_mgr *pio_mgr, struct msg_mgr *hmsg_mgr)
+{
+ if (!pio_mgr)
+ goto func_end;
+
+ /* We are performing both input and output processing. */
+ input_msg(pio_mgr, hmsg_mgr);
+ output_msg(pio_mgr, hmsg_mgr);
+func_end:
+ return;
+}
+
+/*
+ * ======== io_dispatch_pm ========
+ * Performs I/O dispatch on PM-related messages from the DSP.
+ */
+static void io_dispatch_pm(struct io_mgr *pio_mgr)
+{
+ int status;
+ u32 parg[2];
+
+ /* Perform Power message processing here */
+ parg[0] = pio_mgr->intr_val;
+
+ /* Send the command to the Bridge clk/pwr manager to handle */
+ if (parg[0] == MBX_PM_HIBERNATE_EN) {
+ dev_dbg(bridge, "PM: Hibernate command\n");
+ status = pio_mgr->intf_fxns->
+ pfn_dev_cntrl(pio_mgr->hbridge_context,
+ BRDIOCTL_PWR_HIBERNATE, parg);
+ if (DSP_FAILED(status))
+ pr_err("%s: hibernate cmd failed 0x%x\n",
+ __func__, status);
+ } else if (parg[0] == MBX_PM_OPP_REQ) {
+ parg[1] = pio_mgr->shared_mem->opp_request.rqst_opp_pt;
+ dev_dbg(bridge, "PM: Requested OPP = 0x%x\n", parg[1]);
+ status = pio_mgr->intf_fxns->
+ pfn_dev_cntrl(pio_mgr->hbridge_context,
+ BRDIOCTL_CONSTRAINT_REQUEST, parg);
+ if (DSP_FAILED(status))
+ dev_dbg(bridge, "PM: Failed to set constraint "
+ "= 0x%x \n", parg[1]);
+ } else {
+ dev_dbg(bridge, "PM: clk control value of msg = 0x%x\n",
+ parg[0]);
+ status = pio_mgr->intf_fxns->
+ pfn_dev_cntrl(pio_mgr->hbridge_context,
+ BRDIOCTL_CLK_CTRL, parg);
+ if (DSP_FAILED(status))
+ dev_dbg(bridge, "PM: Failed to ctrl the DSP clk"
+ "= 0x%x\n", *parg);
+ }
+}
+
+/*
+ * ======== io_dpc ========
+ * Deferred procedure call for shared memory channel driver ISR. Carries
+ * out the dispatch of I/O as a non-preemptible event. It can only be
+ * pre-empted by an ISR.
+ */
+void io_dpc(IN OUT unsigned long pRefData)
+{
+ struct io_mgr *pio_mgr = (struct io_mgr *)pRefData;
+ struct chnl_mgr *chnl_mgr_obj;
+ struct msg_mgr *msg_mgr_obj;
+ struct deh_mgr *hdeh_mgr;
+ u32 requested;
+ u32 serviced;
+
+ if (!pio_mgr)
+ goto func_end;
+ chnl_mgr_obj = pio_mgr->hchnl_mgr;
+ dev_get_msg_mgr(pio_mgr->hdev_obj, &msg_mgr_obj);
+ dev_get_deh_mgr(pio_mgr->hdev_obj, &hdeh_mgr);
+ if (!chnl_mgr_obj)
+ goto func_end;
+
+ requested = pio_mgr->dpc_req;
+ serviced = pio_mgr->dpc_sched;
+
+ if (serviced == requested)
+ goto func_end;
+
+ /* Process pending DPC's */
+ do {
+ /* Check value of interrupt reg to ensure it's a valid error */
+ if ((pio_mgr->intr_val > DEH_BASE) &&
+ (pio_mgr->intr_val < DEH_LIMIT)) {
+ /* Notify DSP/BIOS exception */
+ if (hdeh_mgr) {
+#ifndef DSP_TRACE_BUF_DISABLED
+ print_dsp_debug_trace(pio_mgr);
+#endif
+ bridge_deh_notify(hdeh_mgr, DSP_SYSERROR,
+ pio_mgr->intr_val);
+ }
+ }
+ io_dispatch_chnl(pio_mgr, NULL, IO_SERVICE);
+#ifdef CHNL_MESSAGES
+ if (msg_mgr_obj)
+ io_dispatch_msg(pio_mgr, msg_mgr_obj);
+#endif
+#ifndef DSP_TRACEBUF_DISABLED
+ if (pio_mgr->intr_val & MBX_DBG_SYSPRINTF) {
+ /* Notify DSP Trace message */
+ print_dsp_debug_trace(pio_mgr);
+ }
+#endif
+ serviced++;
+ } while (serviced != requested);
+ pio_mgr->dpc_sched = requested;
+func_end:
+ return;
+}
+
+/*
+ * ======== io_mbox_msg ========
+ * Main interrupt handler for the shared memory IO manager.
+ * Calls the Bridge's CHNL_ISR to determine if this interrupt is ours, then
+ * schedules a DPC to dispatch I/O.
+ */
+void io_mbox_msg(u32 msg)
+{
+ struct io_mgr *pio_mgr;
+ struct dev_object *dev_obj;
+ unsigned long flags;
+
+ dev_obj = dev_get_first();
+ dev_get_io_mgr(dev_obj, &pio_mgr);
+
+ if (!pio_mgr)
+ return;
+
+ pio_mgr->intr_val = (u16)msg;
+ if (pio_mgr->intr_val & MBX_PM_CLASS)
+ io_dispatch_pm(pio_mgr);
+
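+ /*
+  * For MBX_DEH_RESET just clear the interrupt value; anything else
+  * schedules the DPC tasklet to dispatch I/O.
+  */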
+ if (pio_mgr->intr_val == MBX_DEH_RESET) {
+ pio_mgr->intr_val = 0;
+ } else {
+ spin_lock_irqsave(&pio_mgr->dpc_lock, flags);
+ pio_mgr->dpc_req++;
+ spin_unlock_irqrestore(&pio_mgr->dpc_lock, flags);
+ tasklet_schedule(&pio_mgr->dpc_tasklet);
+ }
+ return;
+}
+
+/*
+ * ======== io_request_chnl ========
+ * Purpose:
+ *      Request channel I/O from the DSP. Sets flags in shared memory, then
+ * interrupts the DSP.
+ */
+void io_request_chnl(struct io_mgr *pio_mgr, struct chnl_object *pchnl,
+ u8 iMode, OUT u16 *pwMbVal)
+{
+ struct chnl_mgr *chnl_mgr_obj;
+ struct shm *sm;
+
+ if (!pchnl || !pwMbVal)
+ goto func_end;
+ chnl_mgr_obj = pio_mgr->hchnl_mgr;
+ sm = pio_mgr->shared_mem;
+ if (iMode == IO_INPUT) {
+ /*
+ * Assertion fires if CHNL_AddIOReq() called on a stream
+ * which was cancelled, or attached to a dead board.
+ */
+ DBC_ASSERT((pchnl->dw_state == CHNL_STATEREADY) ||
+ (pchnl->dw_state == CHNL_STATEEOS));
+ /* Indicate to the DSP we have a buffer available for input */
+ IO_OR_VALUE(pio_mgr->hbridge_context, struct shm, sm,
+ host_free_mask, (1 << pchnl->chnl_id));
+ *pwMbVal = MBX_PCPY_CLASS;
+ } else if (iMode == IO_OUTPUT) {
+ /*
+ * This assertion fails if CHNL_AddIOReq() was called on a
+ * stream which was cancelled, or attached to a dead board.
+ */
+ DBC_ASSERT((pchnl->dw_state & ~CHNL_STATEEOS) ==
+ CHNL_STATEREADY);
+ /*
+ * Record the fact that we have a buffer available for
+ * output.
+ */
+ chnl_mgr_obj->dw_output_mask |= (1 << pchnl->chnl_id);
+ } else {
+ DBC_ASSERT(iMode); /* Shouldn't get here. */
+ }
+func_end:
+ return;
+}
+
+/*
+ * ======== iosm_schedule ========
+ * Schedule DPC for IO.
+ */
+void iosm_schedule(struct io_mgr *pio_mgr)
+{
+ unsigned long flags;
+
+ if (!pio_mgr)
+ return;
+
+ /* Increment count of DPC's pending. */
+ spin_lock_irqsave(&pio_mgr->dpc_lock, flags);
+ pio_mgr->dpc_req++;
+ spin_unlock_irqrestore(&pio_mgr->dpc_lock, flags);
+
+ /* Schedule DPC */
+ tasklet_schedule(&pio_mgr->dpc_tasklet);
+}
+
+/*
+ * ======== find_ready_output ========
+ * Search for a host output channel which is ready to send. If this is
+ * called as a result of servicing the DPC, then implement a round
+ * robin search; otherwise, this was called by a client thread (via
+ * IO_Dispatch()), so just start searching from the current channel id.
+ */
+static u32 find_ready_output(struct chnl_mgr *chnl_mgr_obj,
+ struct chnl_object *pchnl, u32 dwMask)
+{
+ u32 ret = OUTPUTNOTREADY;
+ u32 id, start_id;
+ u32 shift;
+
+ id = (pchnl != NULL) ? pchnl->chnl_id :
+       (chnl_mgr_obj->dw_last_output + 1);
+ id = ((id == CHNL_MAXCHANNELS) ? 0 : id);
+ if (id >= CHNL_MAXCHANNELS)
+ goto func_end;
+ if (dwMask) {
+ shift = (1 << id);
+ start_id = id;
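+ /*
+  * Scan the mask starting at 'id', wrapping at CHNL_MAXCHANNELS,
+  * until we come back to where we started.
+  */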
+ do {
+ if (dwMask & shift) {
+ ret = id;
+ if (pchnl == NULL)
+ chnl_mgr_obj->dw_last_output = id;
+ break;
+ }
+ id = id + 1;
+ id = ((id == CHNL_MAXCHANNELS) ? 0 : id);
+ shift = (1 << id);
+ } while (id != start_id);
+ }
+func_end:
+ return ret;
+}
+
+/*
+ * ======== input_chnl ========
+ * Dispatch a buffer on an input channel.
+ */
+static void input_chnl(struct io_mgr *pio_mgr, struct chnl_object *pchnl,
+ u8 iMode)
+{
+ struct chnl_mgr *chnl_mgr_obj;
+ struct shm *sm;
+ u32 chnl_id;
+ u32 bytes;
+ struct chnl_irp *chnl_packet_obj = NULL;
+ u32 dw_arg;
+ bool clear_chnl = false;
+ bool notify_client = false;
+
+ sm = pio_mgr->shared_mem;
+ chnl_mgr_obj = pio_mgr->hchnl_mgr;
+
+ /* Attempt to perform input */
+ if (!IO_GET_VALUE(pio_mgr->hbridge_context, struct shm, sm, input_full))
+ goto func_end;
+
+ bytes =
+ IO_GET_VALUE(pio_mgr->hbridge_context, struct shm, sm,
+ input_size) * chnl_mgr_obj->word_size;
+ chnl_id = IO_GET_VALUE(pio_mgr->hbridge_context, struct shm,
+ sm, input_id);
+ dw_arg = IO_GET_LONG(pio_mgr->hbridge_context, struct shm, sm, arg);
+ if (chnl_id >= CHNL_MAXCHANNELS) {
+ /* Shouldn't be here: would indicate corrupted shm. */
+ DBC_ASSERT(chnl_id);
+ goto func_end;
+ }
+ pchnl = chnl_mgr_obj->ap_channel[chnl_id];
+ if ((pchnl != NULL) && CHNL_IS_INPUT(pchnl->chnl_mode)) {
+ if ((pchnl->dw_state & ~CHNL_STATEEOS) == CHNL_STATEREADY) {
+ if (!pchnl->pio_requests)
+ goto func_end;
+ /* Get the I/O request, and attempt a transfer */
+ chnl_packet_obj = (struct chnl_irp *)
+ lst_get_head(pchnl->pio_requests);
+ if (chnl_packet_obj) {
+ pchnl->cio_reqs--;
+ if (pchnl->cio_reqs < 0)
+ goto func_end;
+ /*
+ * Ensure we don't overflow the client's
+ * buffer.
+ */
+ bytes = min(bytes, chnl_packet_obj->byte_size);
+ /* Transfer buffer from DSP side */
+ bytes = read_data(pio_mgr->hbridge_context,
+ chnl_packet_obj->host_sys_buf,
+ pio_mgr->input, bytes);
+ pchnl->bytes_moved += bytes;
+ chnl_packet_obj->byte_size = bytes;
+ chnl_packet_obj->dw_arg = dw_arg;
+ chnl_packet_obj->status = CHNL_IOCSTATCOMPLETE;
+
+ if (bytes == 0) {
+ /*
+ * This assertion fails if the DSP
+ * sends EOS more than once on this
+ * channel.
+ */
+ if (pchnl->dw_state & CHNL_STATEEOS)
+ goto func_end;
+ /*
+ * Zero bytes indicates EOS. Update
+ * IOC status for this chirp, and also
+ * the channel state.
+ */
+ chnl_packet_obj->status |=
+ CHNL_IOCSTATEOS;
+ pchnl->dw_state |= CHNL_STATEEOS;
+ /*
+ * Notify that end of stream has
+ * occurred.
+ */
+ ntfy_notify(pchnl->ntfy_obj,
+ DSP_STREAMDONE);
+ }
+ /* Tell DSP if no more I/O buffers available */
+ if (!pchnl->pio_requests)
+ goto func_end;
+ if (LST_IS_EMPTY(pchnl->pio_requests)) {
+ IO_AND_VALUE(pio_mgr->hbridge_context,
+ struct shm, sm,
+ host_free_mask,
+ ~(1 << pchnl->chnl_id));
+ }
+ clear_chnl = true;
+ notify_client = true;
+ } else {
+ /*
+ * Input full for this channel, but we have no
+ * buffers available. The channel must be
+ * "idling". Clear out the physical input
+ * channel.
+ */
+ clear_chnl = true;
+ }
+ } else {
+ /* Input channel cancelled: clear input channel */
+ clear_chnl = true;
+ }
+ } else {
+ /* DPC fired after host closed channel: clear input channel */
+ clear_chnl = true;
+ }
+ if (clear_chnl) {
+ /* Indicate to the DSP we have read the input */
+ IO_SET_VALUE(pio_mgr->hbridge_context, struct shm, sm,
+ input_full, 0);
+ sm_interrupt_dsp(pio_mgr->hbridge_context, MBX_PCPY_CLASS);
+ }
+ if (notify_client) {
+ /* Notify client with IO completion record */
+ notify_chnl_complete(pchnl, chnl_packet_obj);
+ }
+func_end:
+ return;
+}
+
+/*
+ * ======== input_msg ========
+ * Copies messages from shared memory to the message queues.
+ */
+static void input_msg(struct io_mgr *pio_mgr, struct msg_mgr *hmsg_mgr)
+{
+ u32 num_msgs;
+ u32 i;
+ u8 *msg_input;
+ struct msg_queue *msg_queue_obj;
+ struct msg_frame *pmsg;
+ struct msg_dspmsg msg;
+ struct msg_ctrl *msg_ctr_obj;
+ u32 input_empty;
+ u32 addr;
+
+ msg_ctr_obj = pio_mgr->msg_input_ctrl;
+ /* Get the number of input messages to be read */
+ input_empty =
+ IO_GET_VALUE(pio_mgr->hbridge_context, struct msg_ctrl, msg_ctr_obj,
+ buf_empty);
+ num_msgs =
+ IO_GET_VALUE(pio_mgr->hbridge_context, struct msg_ctrl, msg_ctr_obj,
+ size);
+ if (input_empty)
+ goto func_end;
+
+ msg_input = pio_mgr->msg_input;
+ for (i = 0; i < num_msgs; i++) {
+ /* Read the next message */
+ addr = (u32) &(((struct msg_dspmsg *)msg_input)->msg.dw_cmd);
+ msg.msg.dw_cmd =
+ read_ext32_bit_dsp_data(pio_mgr->hbridge_context, addr);
+ addr = (u32) &(((struct msg_dspmsg *)msg_input)->msg.dw_arg1);
+ msg.msg.dw_arg1 =
+ read_ext32_bit_dsp_data(pio_mgr->hbridge_context, addr);
+ addr = (u32) &(((struct msg_dspmsg *)msg_input)->msg.dw_arg2);
+ msg.msg.dw_arg2 =
+ read_ext32_bit_dsp_data(pio_mgr->hbridge_context, addr);
+ addr = (u32) &(((struct msg_dspmsg *)msg_input)->msgq_id);
+ msg.msgq_id =
+ read_ext32_bit_dsp_data(pio_mgr->hbridge_context, addr);
+ msg_input += sizeof(struct msg_dspmsg);
+ if (!hmsg_mgr->queue_list)
+ goto func_end;
+
+ /* Determine which queue to put the message in */
+ msg_queue_obj =
+ (struct msg_queue *)lst_first(hmsg_mgr->queue_list);
+ dev_dbg(bridge, "input msg: dw_cmd=0x%x dw_arg1=0x%x "
+ "dw_arg2=0x%x msgq_id=0x%x \n", msg.msg.dw_cmd,
+ msg.msg.dw_arg1, msg.msg.dw_arg2, msg.msgq_id);
+ /*
+ * Interrupt may occur before shared memory and message
+ * input locations have been set up. If all nodes were
+ * cleaned up, hmsg_mgr->max_msgs should be 0.
+ */
+ while (msg_queue_obj != NULL) {
+ if (msg.msgq_id == msg_queue_obj->msgq_id) {
+ /* Found it */
+ if (msg.msg.dw_cmd == RMS_EXITACK) {
+ /*
+ * Call the node exit notification.
+ * The exit message does not get
+ * queued.
+ */
+ (*hmsg_mgr->on_exit) ((void *)
+ msg_queue_obj->arg,
+ msg.msg.dw_arg1);
+ } else {
+ /*
+ * Not an exit acknowledgement, queue
+ * the message.
+ */
+ if (!msg_queue_obj->msg_free_list)
+ goto func_end;
+ pmsg = (struct msg_frame *)lst_get_head
+ (msg_queue_obj->msg_free_list);
+ if (msg_queue_obj->msg_used_list
+ && pmsg) {
+ pmsg->msg_data = msg;
+ lst_put_tail
+ (msg_queue_obj->msg_used_list,
+ (struct list_head *)pmsg);
+ ntfy_notify
+ (msg_queue_obj->ntfy_obj,
+ DSP_NODEMESSAGEREADY);
+ sync_set_event
+ (msg_queue_obj->sync_event);
+ } else {
+ /*
+ * No free frame to copy the
+ * message into.
+ */
+ pr_err("%s: no free msg frames,"
+ " discarding msg\n",
+ __func__);
+ }
+ }
+ break;
+ }
+
+ if (!hmsg_mgr->queue_list || !msg_queue_obj)
+ goto func_end;
+ msg_queue_obj =
+ (struct msg_queue *)lst_next(hmsg_mgr->queue_list,
+ (struct list_head *)
+ msg_queue_obj);
+ }
+ }
+ /* Set the post SWI flag */
+ if (num_msgs > 0) {
+ /* Tell the DSP we've read the messages */
+ IO_SET_VALUE(pio_mgr->hbridge_context, struct msg_ctrl,
+ msg_ctr_obj, buf_empty, true);
+ IO_SET_VALUE(pio_mgr->hbridge_context, struct msg_ctrl,
+ msg_ctr_obj, post_swi, true);
+ sm_interrupt_dsp(pio_mgr->hbridge_context, MBX_PCPY_CLASS);
+ }
+func_end:
+ return;
+}
+
+/*
+ * ======== notify_chnl_complete ========
+ * Purpose:
+ * Signal the channel event, notifying the client that I/O has completed.
+ */
+static void notify_chnl_complete(struct chnl_object *pchnl,
+ struct chnl_irp *chnl_packet_obj)
+{
+ bool signal_event;
+
+ if (!pchnl || !pchnl->sync_event ||
+ !pchnl->pio_completions || !chnl_packet_obj)
+ goto func_end;
+
+ /*
+ * Note: we signal the channel event only if the queue of IO
+ * completions is empty. If it is not empty, the event is sure to be
+ * signalled by the only IO completion list consumer:
+ * bridge_chnl_get_ioc().
+ */
+ signal_event = LST_IS_EMPTY(pchnl->pio_completions);
+ /* Enqueue the IO completion info for the client */
+ lst_put_tail(pchnl->pio_completions,
+ (struct list_head *)chnl_packet_obj);
+ pchnl->cio_cs++;
+
+ if (pchnl->cio_cs > pchnl->chnl_packets)
+ goto func_end;
+ /* Signal the channel event (if not already set) that IO is complete */
+ if (signal_event)
+ sync_set_event(pchnl->sync_event);
+
+ /* Notify that IO is complete */
+ ntfy_notify(pchnl->ntfy_obj, DSP_STREAMIOCOMPLETION);
+func_end:
+ return;
+}
+
+/*
+ * ======== output_chnl ========
+ * Purpose:
+ * Dispatch a buffer on an output channel.
+ */
+static void output_chnl(struct io_mgr *pio_mgr, struct chnl_object *pchnl,
+ u8 iMode)
+{
+ struct chnl_mgr *chnl_mgr_obj;
+ struct shm *sm;
+ u32 chnl_id;
+ struct chnl_irp *chnl_packet_obj;
+ u32 dw_dsp_f_mask;
+
+ chnl_mgr_obj = pio_mgr->hchnl_mgr;
+ sm = pio_mgr->shared_mem;
+ /* Attempt to perform output */
+ if (IO_GET_VALUE(pio_mgr->hbridge_context, struct shm, sm, output_full))
+ goto func_end;
+
+ if (pchnl && !((pchnl->dw_state & ~CHNL_STATEEOS) == CHNL_STATEREADY))
+ goto func_end;
+
+ /* Look to see if both a PC and DSP output channel are ready */
+ dw_dsp_f_mask = IO_GET_VALUE(pio_mgr->hbridge_context, struct shm, sm,
+ dsp_free_mask);
+ chnl_id =
+ find_ready_output(chnl_mgr_obj, pchnl,
+ (chnl_mgr_obj->dw_output_mask & dw_dsp_f_mask));
+ if (chnl_id == OUTPUTNOTREADY)
+ goto func_end;
+
+ pchnl = chnl_mgr_obj->ap_channel[chnl_id];
+ if (!pchnl || !pchnl->pio_requests) {
+ /* Shouldn't get here */
+ goto func_end;
+ }
+ /* Get the I/O request, and attempt a transfer */
+ chnl_packet_obj = (struct chnl_irp *)lst_get_head(pchnl->pio_requests);
+ if (!chnl_packet_obj)
+ goto func_end;
+
+ pchnl->cio_reqs--;
+ if (pchnl->cio_reqs < 0 || !pchnl->pio_requests)
+ goto func_end;
+
+ /* Record fact that no more I/O buffers available */
+ if (LST_IS_EMPTY(pchnl->pio_requests))
+ chnl_mgr_obj->dw_output_mask &= ~(1 << chnl_id);
+
+ /* Transfer buffer to DSP side */
+ chnl_packet_obj->byte_size =
+ write_data(pio_mgr->hbridge_context, pio_mgr->output,
+ chnl_packet_obj->host_sys_buf, min(pio_mgr->usm_buf_size,
+ chnl_packet_obj->byte_size));
+ pchnl->bytes_moved += chnl_packet_obj->byte_size;
+ /* Write all 32 bits of arg */
+ IO_SET_LONG(pio_mgr->hbridge_context, struct shm, sm, arg,
+ chnl_packet_obj->dw_arg);
+#if _CHNL_WORDSIZE == 2
+ IO_SET_VALUE(pio_mgr->hbridge_context, struct shm, sm, output_id,
+ (u16) chnl_id);
+ IO_SET_VALUE(pio_mgr->hbridge_context, struct shm, sm, output_size,
+ (u16) (chnl_packet_obj->byte_size +
+ (chnl_mgr_obj->word_size -
+ 1)) / (u16) chnl_mgr_obj->word_size);
+#else
+ IO_SET_VALUE(pio_mgr->hbridge_context, struct shm, sm, output_id,
+ chnl_id);
+ IO_SET_VALUE(pio_mgr->hbridge_context, struct shm, sm, output_size,
+ (chnl_packet_obj->byte_size +
+ (chnl_mgr_obj->word_size - 1)) / chnl_mgr_obj->word_size);
+#endif
+ IO_SET_VALUE(pio_mgr->hbridge_context, struct shm, sm, output_full, 1);
+ /* Indicate to the DSP we have written the output */
+ sm_interrupt_dsp(pio_mgr->hbridge_context, MBX_PCPY_CLASS);
+ /* Notify client with IO completion record (keep EOS) */
+ chnl_packet_obj->status &= CHNL_IOCSTATEOS;
+ notify_chnl_complete(pchnl, chnl_packet_obj);
+ /* Notify if stream is done. */
+ if (chnl_packet_obj->status & CHNL_IOCSTATEOS)
+ ntfy_notify(pchnl->ntfy_obj, DSP_STREAMDONE);
+
+func_end:
+ return;
+}
+
+/*
+ * ======== output_msg ========
+ * Copies messages from the message queues to the shared memory.
+ */
+static void output_msg(struct io_mgr *pio_mgr, struct msg_mgr *hmsg_mgr)
+{
+ u32 num_msgs = 0;
+ u32 i;
+ u8 *msg_output;
+ struct msg_frame *pmsg;
+ struct msg_ctrl *msg_ctr_obj;
+ u32 output_empty;
+ u32 val;
+ u32 addr;
+
+ msg_ctr_obj = pio_mgr->msg_output_ctrl;
+
+ /* Check if output has been cleared */
+ output_empty =
+ IO_GET_VALUE(pio_mgr->hbridge_context, struct msg_ctrl, msg_ctr_obj,
+ buf_empty);
+ if (output_empty) {
+ num_msgs = (hmsg_mgr->msgs_pending > hmsg_mgr->max_msgs) ?
+ hmsg_mgr->max_msgs : hmsg_mgr->msgs_pending;
+ msg_output = pio_mgr->msg_output;
+ /* Copy num_msgs messages into shared memory */
+ for (i = 0; i < num_msgs; i++) {
+ if (!hmsg_mgr->msg_used_list) {
+ pmsg = NULL;
+ goto func_end;
+ } else {
+ pmsg = (struct msg_frame *)
+ lst_get_head(hmsg_mgr->msg_used_list);
+ }
+ if (pmsg != NULL) {
+ val = (pmsg->msg_data).msgq_id;
+ addr = (u32) &(((struct msg_dspmsg *)
+ msg_output)->msgq_id);
+ write_ext32_bit_dsp_data(
+ pio_mgr->hbridge_context, addr, val);
+ val = (pmsg->msg_data).msg.dw_cmd;
+ addr = (u32) &((((struct msg_dspmsg *)
+ msg_output)->msg).dw_cmd);
+ write_ext32_bit_dsp_data(
+ pio_mgr->hbridge_context, addr, val);
+ val = (pmsg->msg_data).msg.dw_arg1;
+ addr = (u32) &((((struct msg_dspmsg *)
+ msg_output)->msg).dw_arg1);
+ write_ext32_bit_dsp_data(
+ pio_mgr->hbridge_context, addr, val);
+ val = (pmsg->msg_data).msg.dw_arg2;
+ addr = (u32) &((((struct msg_dspmsg *)
+ msg_output)->msg).dw_arg2);
+ write_ext32_bit_dsp_data(
+ pio_mgr->hbridge_context, addr, val);
+ msg_output += sizeof(struct msg_dspmsg);
+ if (!hmsg_mgr->msg_free_list)
+ goto func_end;
+ lst_put_tail(hmsg_mgr->msg_free_list,
+ (struct list_head *)pmsg);
+ sync_set_event(hmsg_mgr->sync_event);
+ }
+ }
+
+ if (num_msgs > 0) {
+ hmsg_mgr->msgs_pending -= num_msgs;
+#if _CHNL_WORDSIZE == 2
+ IO_SET_VALUE(pio_mgr->hbridge_context, struct msg_ctrl,
+ msg_ctr_obj, size, (u16) num_msgs);
+#else
+ IO_SET_VALUE(pio_mgr->hbridge_context, struct msg_ctrl,
+ msg_ctr_obj, size, num_msgs);
+#endif
+ IO_SET_VALUE(pio_mgr->hbridge_context, struct msg_ctrl,
+ msg_ctr_obj, buf_empty, false);
+ /* Set the post SWI flag */
+ IO_SET_VALUE(pio_mgr->hbridge_context, struct msg_ctrl,
+ msg_ctr_obj, post_swi, true);
+ /* Tell the DSP we have written the output. */
+ sm_interrupt_dsp(pio_mgr->hbridge_context,
+ MBX_PCPY_CLASS);
+ }
+ }
+func_end:
+ return;
+}
+
+/*
+ * ======== register_shm_segs ========
+ * Purpose:
+ * Registers GPP SM segment with CMM.
+ */
+static int register_shm_segs(struct io_mgr *hio_mgr,
+ struct cod_manager *cod_man,
+ u32 dw_gpp_base_pa)
+{
+ int status = 0;
+ u32 ul_shm0_base = 0;
+ u32 shm0_end = 0;
+ u32 ul_shm0_rsrvd_start = 0;
+ u32 ul_rsrvd_size = 0;
+ u32 ul_gpp_phys;
+ u32 ul_dsp_virt;
+ u32 ul_shm_seg_id0 = 0;
+ u32 dw_offset, dw_gpp_base_va, ul_dsp_size;
+
+ /*
+ * Read address and size info for first SM region.
+ * Get start of 1st SM Heap region.
+ */
+ status =
+ cod_get_sym_value(cod_man, SHM0_SHARED_BASE_SYM, &ul_shm0_base);
+ if (ul_shm0_base == 0) {
+ status = -EPERM;
+ goto func_end;
+ }
+ /* Get end of 1st SM Heap region */
+ if (DSP_SUCCEEDED(status)) {
+ /* Get the end address of the first SM heap region */
+ status = cod_get_sym_value(cod_man, SHM0_SHARED_END_SYM,
+ &shm0_end);
+ if (shm0_end == 0) {
+ status = -EPERM;
+ goto func_end;
+ }
+ }
+ /* Start of Gpp reserved region */
+ if (DSP_SUCCEEDED(status)) {
+ /* Get the start of the GPP reserved region */
+ status =
+ cod_get_sym_value(cod_man, SHM0_SHARED_RESERVED_BASE_SYM,
+ &ul_shm0_rsrvd_start);
+ if (ul_shm0_rsrvd_start == 0) {
+ status = -EPERM;
+ goto func_end;
+ }
+ }
+ /* Register with CMM */
+ if (DSP_SUCCEEDED(status)) {
+ status = dev_get_cmm_mgr(hio_mgr->hdev_obj, &hio_mgr->hcmm_mgr);
+ if (DSP_SUCCEEDED(status)) {
+ status = cmm_un_register_gppsm_seg(hio_mgr->hcmm_mgr,
+ CMM_ALLSEGMENTS);
+ }
+ }
+ /* Register new SM region(s) */
+ if (DSP_SUCCEEDED(status) && (shm0_end - ul_shm0_base) > 0) {
+ /* Calc size (bytes) of SM the GPP can alloc from */
+ ul_rsrvd_size =
+ (shm0_end - ul_shm0_rsrvd_start + 1) * hio_mgr->word_size;
+ if (ul_rsrvd_size <= 0) {
+ status = -EPERM;
+ goto func_end;
+ }
+ /* Calc size of SM DSP can alloc from */
+ ul_dsp_size =
+ (ul_shm0_rsrvd_start - ul_shm0_base) * hio_mgr->word_size;
+ if (ul_dsp_size <= 0) {
+ status = -EPERM;
+ goto func_end;
+ }
+ /* First TLB entry reserved for Bridge SM use. */
+ ul_gpp_phys = hio_mgr->ext_proc_info.ty_tlb[0].ul_gpp_phys;
+ /* DSP virtual address of the first TLB entry, in bytes */
+ ul_dsp_virt =
+ hio_mgr->ext_proc_info.ty_tlb[0].ul_dsp_virt *
+ hio_mgr->word_size;
+ /*
+ * Calc byte offset used to convert GPP phys <-> DSP byte
+ * address.
+ */
+ if (dw_gpp_base_pa > ul_dsp_virt)
+ dw_offset = dw_gpp_base_pa - ul_dsp_virt;
+ else
+ dw_offset = ul_dsp_virt - dw_gpp_base_pa;
+
+ if (ul_shm0_rsrvd_start * hio_mgr->word_size < ul_dsp_virt) {
+ status = -EPERM;
+ goto func_end;
+ }
+ /*
+  * Calc GPP base of SM region.
+  * This is actually the uncached kernel virtual address.
+  */
+ dw_gpp_base_va =
+ ul_gpp_phys + ul_shm0_rsrvd_start * hio_mgr->word_size -
+ ul_dsp_virt;
+ /*
+ * Calc Gpp phys base of SM region.
+ * This is the physical address.
+ */
+ dw_gpp_base_pa =
+ dw_gpp_base_pa + ul_shm0_rsrvd_start * hio_mgr->word_size -
+ ul_dsp_virt;
+ /* Register SM Segment 0. */
+ status =
+ cmm_register_gppsm_seg(hio_mgr->hcmm_mgr, dw_gpp_base_pa,
+ ul_rsrvd_size, dw_offset,
+ (dw_gpp_base_pa >
+ ul_dsp_virt) ? CMM_ADDTODSPPA :
+ CMM_SUBFROMDSPPA,
+ (u32) (ul_shm0_base *
+ hio_mgr->word_size),
+ ul_dsp_size, &ul_shm_seg_id0,
+ dw_gpp_base_va);
+ /* First SM region is seg_id = 1 */
+ if (ul_shm_seg_id0 != 1)
+ status = -EPERM;
+ }
+func_end:
+ return status;
+}
+
+/*
+ * ======== read_data ========
+ * Copies buffers from the shared memory to the host buffer.
+ */
+static u32 read_data(struct bridge_dev_context *hDevContext, void *dest,
+ void *pSrc, u32 usize)
+{
+ memcpy(dest, pSrc, usize);
+ return usize;
+}
+
+/*
+ * ======== write_data ========
+ * Copies buffers from the host side buffer to the shared memory.
+ */
+static u32 write_data(struct bridge_dev_context *hDevContext, void *dest,
+ void *pSrc, u32 usize)
+{
+ memcpy(dest, pSrc, usize);
+ return usize;
+}
+
+/* ZCPY IO routines. */
+void io_intr_dsp2(IN struct io_mgr *pio_mgr, IN u16 mb_val)
+{
+ sm_interrupt_dsp(pio_mgr->hbridge_context, mb_val);
+}
+
+/*
+ * ======== IO_SHMcontrol ========
+ * ======== io_sh_msetting ========
+ */
+int io_sh_msetting(struct io_mgr *hio_mgr, u8 desc, void *pargs)
+{
+#ifdef CONFIG_BRIDGE_DVFS
+ u32 i;
+ struct dspbridge_platform_data *pdata =
+ omap_dspbridge_dev->dev.platform_data;
+
+ switch (desc) {
+ case SHM_CURROPP:
+ /* Update the shared memory with requested OPP information */
+ if (pargs != NULL)
+ hio_mgr->shared_mem->opp_table_struct.curr_opp_pt =
+ *(u32 *) pargs;
+ else
+ return -EPERM;
+ break;
+ case SHM_OPPINFO:
+ /*
+ * Update the shared memory with the voltage, frequency,
+ * min and max frequency values for an OPP.
+ */
+ for (i = 0; i <= dsp_max_opps; i++) {
+ hio_mgr->shared_mem->opp_table_struct.opp_point[i].
+ voltage = vdd1_dsp_freq[i][0];
+ dev_dbg(bridge, "OPP-shm: voltage: %d\n",
+ vdd1_dsp_freq[i][0]);
+ hio_mgr->shared_mem->opp_table_struct.
+ opp_point[i].frequency = vdd1_dsp_freq[i][1];
+ dev_dbg(bridge, "OPP-shm: frequency: %d\n",
+ vdd1_dsp_freq[i][1]);
+ hio_mgr->shared_mem->opp_table_struct.opp_point[i].
+ min_freq = vdd1_dsp_freq[i][2];
+ dev_dbg(bridge, "OPP-shm: min freq: %d\n",
+ vdd1_dsp_freq[i][2]);
+ hio_mgr->shared_mem->opp_table_struct.opp_point[i].
+ max_freq = vdd1_dsp_freq[i][3];
+ dev_dbg(bridge, "OPP-shm: max freq: %d\n",
+ vdd1_dsp_freq[i][3]);
+ }
+ hio_mgr->shared_mem->opp_table_struct.num_opp_pts =
+ dsp_max_opps;
+ dev_dbg(bridge, "OPP-shm: max OPP number: %d\n", dsp_max_opps);
+ /* Update the current OPP number */
+ if (pdata->dsp_get_opp)
+ i = (*pdata->dsp_get_opp) ();
+ hio_mgr->shared_mem->opp_table_struct.curr_opp_pt = i;
+ dev_dbg(bridge, "OPP-shm: value programmed = %d\n", i);
+ break;
+ case SHM_GETOPP:
+ /* Get the OPP that DSP has requested */
+ *(u32 *) pargs = hio_mgr->shared_mem->opp_request.rqst_opp_pt;
+ break;
+ default:
+ break;
+ }
+#endif
+ return 0;
+}
+
+/*
+ * ======== bridge_io_get_proc_load ========
+ * Gets the Processor's Load information
+ */
+int bridge_io_get_proc_load(IN struct io_mgr *hio_mgr,
+ OUT struct dsp_procloadstat *pProcStat)
+{
+ pProcStat->curr_load = hio_mgr->shared_mem->load_mon_info.curr_dsp_load;
+ pProcStat->predicted_load =
+ hio_mgr->shared_mem->load_mon_info.pred_dsp_load;
+ pProcStat->curr_dsp_freq =
+ hio_mgr->shared_mem->load_mon_info.curr_dsp_freq;
+ pProcStat->predicted_freq =
+ hio_mgr->shared_mem->load_mon_info.pred_dsp_freq;
+
+ dev_dbg(bridge, "Curr Load = %d, Pred Load = %d, Curr Freq = %d, "
+ "Pred Freq = %d\n", pProcStat->curr_load,
+ pProcStat->predicted_load, pProcStat->curr_dsp_freq,
+ pProcStat->predicted_freq);
+ return 0;
+}
+
+#ifndef DSP_TRACEBUF_DISABLED
+void print_dsp_debug_trace(struct io_mgr *hio_mgr)
+{
+ u32 ul_new_message_length = 0, ul_gpp_cur_pointer;
+
+ while (true) {
+ /* Get the DSP current pointer */
+ ul_gpp_cur_pointer =
+ *(u32 *) (hio_mgr->ul_trace_buffer_current);
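+ /* Translate the DSP-side write pointer into a GPP virtual address */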
+ ul_gpp_cur_pointer =
+ hio_mgr->ul_gpp_va + (ul_gpp_cur_pointer -
+ hio_mgr->ul_dsp_va);
+
+ /* No new debug messages available yet */
+ if (ul_gpp_cur_pointer == hio_mgr->ul_gpp_read_pointer) {
+ break;
+ } else if (ul_gpp_cur_pointer > hio_mgr->ul_gpp_read_pointer) {
+ /* Continuous data */
+ ul_new_message_length =
+ ul_gpp_cur_pointer - hio_mgr->ul_gpp_read_pointer;
+
+ memcpy(hio_mgr->pmsg,
+ (char *)hio_mgr->ul_gpp_read_pointer,
+ ul_new_message_length);
+ hio_mgr->pmsg[ul_new_message_length] = '\0';
+ /*
+ * Advance the GPP trace pointer to DSP current
+ * pointer.
+ */
+ hio_mgr->ul_gpp_read_pointer += ul_new_message_length;
+ /* Print the trace messages */
+ pr_info("DSPTrace: %s\n", hio_mgr->pmsg);
+ } else if (ul_gpp_cur_pointer < hio_mgr->ul_gpp_read_pointer) {
+ /* Handle trace buffer wraparound */
+ memcpy(hio_mgr->pmsg,
+ (char *)hio_mgr->ul_gpp_read_pointer,
+ hio_mgr->ul_trace_buffer_end -
+ hio_mgr->ul_gpp_read_pointer);
+ ul_new_message_length =
+ ul_gpp_cur_pointer - hio_mgr->ul_trace_buffer_begin;
+ memcpy(&hio_mgr->pmsg[hio_mgr->ul_trace_buffer_end -
+ hio_mgr->ul_gpp_read_pointer],
+ (char *)hio_mgr->ul_trace_buffer_begin,
+ ul_new_message_length);
+ hio_mgr->pmsg[hio_mgr->ul_trace_buffer_end -
+ hio_mgr->ul_gpp_read_pointer +
+ ul_new_message_length] = '\0';
+ /*
+ * Advance the GPP trace pointer to DSP current
+ * pointer.
+ */
+ hio_mgr->ul_gpp_read_pointer =
+ hio_mgr->ul_trace_buffer_begin +
+ ul_new_message_length;
+ /* Print the trace messages */
+ pr_info("DSPTrace: %s\n", hio_mgr->pmsg);
+ }
+ }
+}
+#endif
+
+/*
+ * ======== print_dsp_trace_buffer ========
+ * Prints the trace buffer returned from the DSP (if DBG_Trace is enabled).
+ * Parameters:
+ *	hbridge_context:	Handle to Bridge driver device context.
+ * Returns:
+ * 0: Success.
+ * -ENOMEM: Unable to allocate memory.
+ * Requires:
+ *	hbridge_context must be valid. Checked in bridge_deh_notify.
+ */
+int print_dsp_trace_buffer(struct bridge_dev_context *hbridge_context)
+{
+ int status = 0;
+ struct cod_manager *cod_mgr;
+ u32 ul_trace_end;
+ u32 ul_trace_begin;
+ u32 trace_cur_pos;
+ u32 ul_num_bytes = 0;
+ u32 ul_num_words = 0;
+ u32 ul_word_size = 2;
+ char *psz_buf;
+ char *str_beg;
+ char *trace_end;
+ char *buf_end;
+ char *new_line;
+
+ struct bridge_dev_context *pbridge_context = hbridge_context;
+ struct bridge_drv_interface *intf_fxns;
+ struct dev_object *dev_obj = (struct dev_object *)
+ pbridge_context->hdev_obj;
+
+ status = dev_get_cod_mgr(dev_obj, &cod_mgr);
+
+ if (cod_mgr) {
+ /* Look for SYS_PUTCBEG/SYS_PUTCEND */
+ status =
+ cod_get_sym_value(cod_mgr, COD_TRACEBEG, &ul_trace_begin);
+ } else {
+ status = -EFAULT;
+ }
+ if (DSP_SUCCEEDED(status))
+ status =
+ cod_get_sym_value(cod_mgr, COD_TRACEEND, &ul_trace_end);
+
+ if (DSP_SUCCEEDED(status))
+ /* trace_cur_pos will hold the address of a DSP pointer */
+ status = cod_get_sym_value(cod_mgr, COD_TRACECURPOS,
+ &trace_cur_pos);
+
+ if (DSP_FAILED(status))
+ goto func_end;
+
+ ul_num_bytes = (ul_trace_end - ul_trace_begin);
+
+ ul_num_words = ul_num_bytes * ul_word_size;
+ status = dev_get_intf_fxns(dev_obj, &intf_fxns);
+
+ if (DSP_FAILED(status))
+ goto func_end;
+
+ psz_buf = kzalloc(ul_num_bytes + 2, GFP_ATOMIC);
+ if (psz_buf != NULL) {
+ /* Read trace buffer data */
+ status = (*intf_fxns->pfn_brd_read)(pbridge_context,
+ (u8 *)psz_buf, (u32)ul_trace_begin,
+ ul_num_bytes, 0);
+
+ if (DSP_FAILED(status))
+ goto func_end;
+
+ /* Pack and do newline conversion */
+ pr_debug("PrintDspTraceBuffer: "
+ "before pack and unpack.\n");
+ pr_debug("%s: DSP Trace Buffer Begin:\n"
+ "=======================\n%s\n",
+ __func__, psz_buf);
+
+ /* Read the value at the DSP address in trace_cur_pos. */
+ status = (*intf_fxns->pfn_brd_read)(pbridge_context,
+ (u8 *)&trace_cur_pos, (u32)trace_cur_pos,
+ 4, 0);
+ if (DSP_FAILED(status))
+ goto func_end;
+ /* Pack and do newline conversion */
+ pr_info("DSP Trace Buffer Begin:\n"
+ "=======================\n%s\n",
+ psz_buf);
+
+
+ /* convert to offset */
+ trace_cur_pos = trace_cur_pos - ul_trace_begin;
+
+ if (ul_num_bytes) {
+ /*
+ * The buffer is not full, find the end of the
+ * data -- buf_end will be >= psz_buf after the
+ * while loop.
+ */
+ buf_end = &psz_buf[ul_num_bytes+1];
+ /* DSP print position */
+ trace_end = &psz_buf[trace_cur_pos];
+
+ /*
+ * Search buffer for a new_line and replace it
+ * with '\0', then print as string.
+ * Continue until end of buffer is reached.
+ */
+ str_beg = trace_end;
+ ul_num_bytes = buf_end - str_beg;
+
+ while (str_beg < buf_end) {
+ new_line = strnchr(str_beg, ul_num_bytes,
+ '\n');
+ if (new_line && new_line < buf_end) {
+ *new_line = 0;
+ pr_debug("%s\n", str_beg);
+ str_beg = ++new_line;
+ ul_num_bytes = buf_end - str_beg;
+ } else {
+ /*
+ * Assume buffer empty if it contains
+ * a zero
+ */
+ if (*str_beg != '\0') {
+ str_beg[ul_num_bytes] = 0;
+ pr_debug("%s\n", str_beg);
+ }
+ str_beg = buf_end;
+ ul_num_bytes = 0;
+ }
+ }
+ /*
+ * Search buffer for a newline and replace it
+ * with '\0', then print as string.
+ * Continue until buffer is exhausted.
+ */
+ str_beg = psz_buf;
+ ul_num_bytes = trace_end - str_beg;
+
+ while (str_beg < trace_end) {
+ new_line = strnchr(str_beg, ul_num_bytes, '\n');
+ if (new_line != NULL && new_line < trace_end) {
+ *new_line = 0;
+ pr_debug("%s\n", str_beg);
+ str_beg = ++new_line;
+ ul_num_bytes = trace_end - str_beg;
+ } else {
+ /*
+ * Assume buffer empty if it contains
+ * a zero
+ */
+ if (*str_beg != '\0') {
+ str_beg[ul_num_bytes] = 0;
+ pr_debug("%s\n", str_beg);
+ }
+ str_beg = trace_end;
+ ul_num_bytes = 0;
+ }
+ }
+ }
+ pr_info("\n=======================\n"
+ "DSP Trace Buffer End:\n");
+ kfree(psz_buf);
+ } else {
+ status = -ENOMEM;
+ }
+func_end:
+ if (DSP_FAILED(status))
+ dev_dbg(bridge, "%s Failed, status 0x%x\n", __func__, status);
+ return status;
+}
+
+void io_sm_init(void)
+{
+ /* Do nothing */
+}
+/**
+ * dump_dsp_stack() - This function dumps the data on the DSP stack.
+ * @bridge_context: Bridge driver's device context pointer.
+ *
+ */
+int dump_dsp_stack(struct bridge_dev_context *bridge_context)
+{
+ int status = 0;
+ struct cod_manager *code_mgr;
+ struct node_mgr *node_mgr;
+ u32 trace_begin;
+ char name[256];
+ struct {
+ u32 head[2];
+ u32 size;
+ } mmu_fault_dbg_info;
+ u32 *buffer;
+ u32 *buffer_beg;
+ u32 *buffer_end;
+ u32 exc_type;
+ u32 dyn_ext_base;
+ u32 i;
+ u32 offset_output;
+ u32 total_size;
+ u32 poll_cnt;
+ const char *dsp_regs[] = {"EFR", "IERR", "ITSR", "NTSR",
+ "IRP", "NRP", "AMR", "SSR",
+ "ILC", "RILC", "IER", "CSR"};
+ const char *exec_ctxt[] = {"Task", "SWI", "HWI", "Unknown"};
+ struct bridge_drv_interface *intf_fxns;
+ struct dev_object *dev_object = bridge_context->hdev_obj;
+
+ status = dev_get_cod_mgr(dev_object, &code_mgr);
+ if (!code_mgr) {
+ pr_debug("%s: Failed on dev_get_cod_mgr.\n", __func__);
+ status = -EFAULT;
+ }
+
+ if (DSP_SUCCEEDED(status)) {
+ status = dev_get_node_manager(dev_object, &node_mgr);
+ if (!node_mgr) {
+ pr_debug("%s: Failed on dev_get_node_manager.\n",
+ __func__);
+ status = -EFAULT;
+ }
+ }
+
+ if (DSP_SUCCEEDED(status)) {
+ /* Look for SYS_PUTCBEG/SYS_PUTCEND: */
+ status =
+ cod_get_sym_value(code_mgr, COD_TRACEBEG, &trace_begin);
+ pr_debug("%s: trace_begin Value 0x%x\n",
+ __func__, trace_begin);
+ if (DSP_FAILED(status))
+ pr_debug("%s: Failed on cod_get_sym_value.\n",
+ __func__);
+ }
+ if (DSP_SUCCEEDED(status))
+ status = dev_get_intf_fxns(dev_object, &intf_fxns);
+ /*
+ * Check for the "magic number" in the trace buffer. If it has
+ * yet to appear then poll the trace buffer to wait for it. Its
+ * appearance signals that the DSP has finished dumping its state.
+ */
+ mmu_fault_dbg_info.head[0] = 0;
+ mmu_fault_dbg_info.head[1] = 0;
+ if (DSP_SUCCEEDED(status)) {
+ poll_cnt = 0;
+ while ((mmu_fault_dbg_info.head[0] != MMU_FAULT_HEAD1 ||
+ mmu_fault_dbg_info.head[1] != MMU_FAULT_HEAD2) &&
+ poll_cnt < POLL_MAX) {
+
+ /* Read DSP dump size from the DSP trace buffer... */
+ status = (*intf_fxns->pfn_brd_read)(bridge_context,
+ (u8 *)&mmu_fault_dbg_info, (u32)trace_begin,
+ sizeof(mmu_fault_dbg_info), 0);
+
+ if (DSP_FAILED(status))
+ break;
+
+ poll_cnt++;
+ }
+
+ if (mmu_fault_dbg_info.head[0] != MMU_FAULT_HEAD1 &&
+ mmu_fault_dbg_info.head[1] != MMU_FAULT_HEAD2) {
+ status = -ETIME;
+ pr_err("%s: No DSP MMU-Fault information available.\n",
+ __func__);
+ }
+ }
+
+ if (DSP_SUCCEEDED(status)) {
+ total_size = mmu_fault_dbg_info.size;
+ /* Limit the size in case DSP went crazy */
+ if (total_size > MAX_MMU_DBGBUFF)
+ total_size = MAX_MMU_DBGBUFF;
+
+ buffer = kzalloc(total_size, GFP_ATOMIC);
+ if (!buffer) {
+ status = -ENOMEM;
+ pr_debug("%s: Failed to "
+ "allocate stack dump buffer.\n", __func__);
+ goto func_end;
+ }
+
+ buffer_beg = buffer;
+ buffer_end = buffer + total_size / 4;
+
+ /* Read bytes from the DSP trace buffer... */
+ status = (*intf_fxns->pfn_brd_read)(bridge_context,
+ (u8 *)buffer, (u32)trace_begin,
+ total_size, 0);
+ if (DSP_FAILED(status)) {
+ pr_debug("%s: Failed to Read Trace Buffer.\n",
+ __func__);
+ goto func_end;
+ }
+
+ pr_err("\nApproximate Crash Position:\n"
+ "--------------------------\n");
+
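+ /*
+  * Word 3 of the dump is the exception type; a zero value means the
+  * crash address is in IRP, otherwise it is in NRP.
+  */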
+ exc_type = buffer[3];
+ if (!exc_type)
+ i = buffer[79]; /* IRP */
+ else
+ i = buffer[80]; /* NRP */
+
+ status =
+ cod_get_sym_value(code_mgr, DYNEXTBASE, &dyn_ext_base);
+ if (DSP_FAILED(status)) {
+ status = -EFAULT;
+ goto func_end;
+ }
+
+ if ((i > dyn_ext_base) && (node_find_addr(node_mgr, i,
+ 0x1000, &offset_output, name) == 0))
+ pr_err("0x%-8x [\"%s\" + 0x%x]\n", i, name,
+ i - offset_output);
+ else
+ pr_err("0x%-8x [Unable to match to a symbol.]\n", i);
+
+ buffer += 4;
+
+ pr_err("\nExecution Info:\n"
+ "---------------\n");
+
+ if (*buffer < ARRAY_SIZE(exec_ctxt)) {
+ pr_err("Execution context \t%s\n",
+ exec_ctxt[*buffer++]);
+ } else {
+ pr_err("Execution context corrupt\n");
+ kfree(buffer_beg);
+ return -EFAULT;
+ }
+ pr_err("Task Handle\t\t0x%x\n", *buffer++);
+ pr_err("Stack Pointer\t\t0x%x\n", *buffer++);
+ pr_err("Stack Top\t\t0x%x\n", *buffer++);
+ pr_err("Stack Bottom\t\t0x%x\n", *buffer++);
+ pr_err("Stack Size\t\t0x%x\n", *buffer++);
+ pr_err("Stack Size In Use\t0x%x\n", *buffer++);
+
+ pr_err("\nCPU Registers\n"
+ "---------------\n");
+
+ for (i = 0; i < 32; i++) {
+ if (i == 4 || i == 6 || i == 8)
+ pr_err("A%d 0x%-8x [Function Argument %d]\n",
+ i, *buffer++, i-3);
+ else if (i == 15)
+ pr_err("A15 0x%-8x [Frame Pointer]\n",
+ *buffer++);
+ else
+ pr_err("A%d 0x%x\n", i, *buffer++);
+ }
+
+ pr_err("\nB0 0x%x\n", *buffer++);
+ pr_err("B1 0x%x\n", *buffer++);
+ pr_err("B2 0x%x\n", *buffer++);
+
+ if ((*buffer > dyn_ext_base) && (node_find_addr(node_mgr,
+ *buffer, 0x1000, &offset_output, name) == 0))
+
+ pr_err("B3 0x%-8x [Function Return Pointer:"
+ " \"%s\" + 0x%x]\n", *buffer, name,
+ *buffer - offset_output);
+ else
+ pr_err("B3 0x%-8x [Function Return Pointer:"
+ "Unable to match to a symbol.]\n", *buffer);
+
+ buffer++;
+
+ for (i = 4; i < 32; i++) {
+ if (i == 4 || i == 6 || i == 8)
+ pr_err("B%d 0x%-8x [Function Argument %d]\n",
+ i, *buffer++, i-2);
+ else if (i == 14)
+ pr_err("B14 0x%-8x [Data Page Pointer]\n",
+ *buffer++);
+ else
+ pr_err("B%d 0x%x\n", i, *buffer++);
+ }
+
+ pr_err("\n");
+
+ for (i = 0; i < ARRAY_SIZE(dsp_regs); i++)
+ pr_err("%s 0x%x\n", dsp_regs[i], *buffer++);
+
+ pr_err("\nStack:\n"
+ "------\n");
+
+ for (i = 0; buffer < buffer_end; i++, buffer++) {
+ if ((*buffer > dyn_ext_base) && (
+ node_find_addr(node_mgr, *buffer , 0x600,
+ &offset_output, name) == 0))
+ pr_err("[%d] 0x%-8x [\"%s\" + 0x%x]\n",
+ i, *buffer, name,
+ *buffer - offset_output);
+ else
+ pr_err("[%d] 0x%x\n", i, *buffer);
+ }
+ kfree(buffer_beg);
+ }
+func_end:
+ return status;
+}
+
+/**
+ * dump_dl_modules() - This function dumps the _DLModules loaded on the DSP side
+ * @bridge_context: Bridge driver's device context pointer.
+ *
+ */
+void dump_dl_modules(struct bridge_dev_context *bridge_context)
+{
+ struct cod_manager *code_mgr;
+ struct bridge_drv_interface *intf_fxns;
+ struct bridge_dev_context *bridge_ctxt = bridge_context;
+ struct dev_object *dev_object = bridge_ctxt->hdev_obj;
+ struct modules_header modules_hdr;
+ struct dll_module *module_struct = NULL;
+ u32 module_dsp_addr;
+ u32 module_size;
+ u32 module_struct_size = 0;
+ u32 sect_ndx;
+ char *sect_str;
+ int status = 0;
+
+ status = dev_get_intf_fxns(dev_object, &intf_fxns);
+ if (DSP_FAILED(status)) {
+ pr_debug("%s: Failed on dev_get_intf_fxns.\n", __func__);
+ goto func_end;
+ }
+
+ status = dev_get_cod_mgr(dev_object, &code_mgr);
+ if (!code_mgr) {
+ pr_debug("%s: Failed on dev_get_cod_mgr.\n", __func__);
+ status = -EFAULT;
+ goto func_end;
+ }
+
+ /* Lookup the address of the modules_header structure */
+ status = cod_get_sym_value(code_mgr, "_DLModules", &module_dsp_addr);
+ if (DSP_FAILED(status)) {
+ pr_debug("%s: Failed on cod_get_sym_value for _DLModules.\n",
+ __func__);
+ goto func_end;
+ }
+
+ pr_debug("%s: _DLModules at 0x%x\n", __func__, module_dsp_addr);
+
+ /* Copy the modules_header structure from DSP memory. */
+ status = (*intf_fxns->pfn_brd_read)(bridge_context, (u8 *) &modules_hdr,
+ (u32) module_dsp_addr, sizeof(modules_hdr), 0);
+
+ if (DSP_FAILED(status)) {
+ pr_debug("%s: Failed to read modules header.\n",
+ __func__);
+ goto func_end;
+ }
+
+ module_dsp_addr = modules_hdr.first_module;
+ module_size = modules_hdr.first_module_size;
+
+ pr_debug("%s: dll_module_header 0x%x %d\n", __func__, module_dsp_addr,
+ module_size);
+
+ pr_err("\nDynamically Loaded Modules:\n"
+ "---------------------------\n");
+
+ /* For each dll_module structure in the list... */
+ while (module_size) {
+ /*
+ * Allocate/re-allocate memory to hold the dll_module
+ * structure. The memory is re-allocated only if the existing
+ * allocation is too small.
+ */
+ if (module_size > module_struct_size) {
+ kfree(module_struct);
+ module_struct = kzalloc(module_size+128, GFP_ATOMIC);
+ module_struct_size = module_size+128;
+ pr_debug("%s: allocated module struct %p %d\n",
+ __func__, module_struct, module_struct_size);
+ if (!module_struct)
+ goto func_end;
+ }
+ /* Copy the dll_module structure from DSP memory */
+ status = (*intf_fxns->pfn_brd_read)(bridge_context,
+ (u8 *)module_struct, module_dsp_addr, module_size, 0);
+
+ if (DSP_FAILED(status)) {
+ pr_debug(
+ "%s: Failed to read dll_module struct for 0x%x.\n",
+ __func__, module_dsp_addr);
+ break;
+ }
+
+ /* Update info regarding the _next_ module in the list. */
+ module_dsp_addr = module_struct->next_module;
+ module_size = module_struct->next_module_size;
+
+ pr_debug("%s: next module 0x%x %d, this module num sects %d\n",
+ __func__, module_dsp_addr, module_size,
+ module_struct->num_sects);
+
+ /*
+ * The section name strings start immediately following
+ * the array of dll_sect structures.
+ */
+ sect_str = (char *) &module_struct->
+ sects[module_struct->num_sects];
+ pr_err("%s\n", sect_str);
+
+ /*
+ * Advance to the first section name string.
+ * Each string follows the one before.
+ */
+ sect_str += strlen(sect_str) + 1;
+
+ /* Access each dll_sect structure and its name string. */
+ for (sect_ndx = 0;
+ sect_ndx < module_struct->num_sects; sect_ndx++) {
+ pr_err(" Section: 0x%x ",
+ module_struct->sects[sect_ndx].sect_load_adr);
+
+ if (((u32) sect_str - (u32) module_struct) <
+ module_struct_size) {
+ pr_err("%s\n", sect_str);
+ /* Each string follows the one before. */
+ sect_str += strlen(sect_str)+1;
+ } else {
+ pr_err("<string error>\n");
+ pr_debug("%s: section name string address "
+ "is invalid %p\n", __func__, sect_str);
+ }
+ }
+ }
+func_end:
+ kfree(module_struct);
+}
+
diff --git a/drivers/staging/tidspbridge/core/mmu_fault.c b/drivers/staging/tidspbridge/core/mmu_fault.c
new file mode 100644
index 0000000..5c0124f
--- /dev/null
+++ b/drivers/staging/tidspbridge/core/mmu_fault.c
@@ -0,0 +1,139 @@
+/*
+ * mmu_fault.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Implements DSP MMU fault handling functions.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+/* ----------------------------------- DSP/BIOS Bridge */
+#include <dspbridge/std.h>
+#include <dspbridge/dbdefs.h>
+
+/* ----------------------------------- Trace & Debug */
+#include <dspbridge/host_os.h>
+#include <dspbridge/dbc.h>
+
+/* ----------------------------------- OS Adaptation Layer */
+#include <dspbridge/drv.h>
+
+/* ----------------------------------- Link Driver */
+#include <dspbridge/dspdeh.h>
+
+/* ------------------------------------ Hardware Abstraction Layer */
+#include <hw_defs.h>
+#include <hw_mmu.h>
+
+/* ----------------------------------- This */
+#include "_deh.h"
+#include <dspbridge/cfg.h>
+#include "_tiomap.h"
+#include "mmu_fault.h"
+
+static u32 dmmu_event_mask;
+u32 fault_addr;
+
+static bool mmu_check_if_fault(struct bridge_dev_context *dev_context);
+
+/*
+ * ======== mmu_fault_dpc ========
+ * Deferred procedure call to handle DSP MMU fault.
+ */
+void mmu_fault_dpc(IN unsigned long pRefData)
+{
+ struct deh_mgr *hdeh_mgr = (struct deh_mgr *)pRefData;
+
+ if (hdeh_mgr)
+ bridge_deh_notify(hdeh_mgr, DSP_MMUFAULT, 0L);
+
+}
+
+/*
+ * ======== mmu_fault_isr ========
+ * ISR to be triggered by a DSP MMU fault interrupt.
+ */
+irqreturn_t mmu_fault_isr(int irq, IN void *pRefData)
+{
+ struct deh_mgr *deh_mgr_obj = (struct deh_mgr *)pRefData;
+ struct bridge_dev_context *dev_context;
+ struct cfg_hostres *resources;
+
+ DBC_REQUIRE(irq == INT_DSP_MMU_IRQ);
+ DBC_REQUIRE(deh_mgr_obj);
+
+ if (deh_mgr_obj) {
+
+ dev_context =
+ (struct bridge_dev_context *)deh_mgr_obj->hbridge_context;
+
+ resources = dev_context->resources;
+
+ if (!resources) {
+ dev_dbg(bridge, "%s: Failed to get Host Resources\n",
+ __func__);
+ return IRQ_HANDLED;
+ }
+ if (mmu_check_if_fault(dev_context)) {
+ printk(KERN_INFO "***** DSPMMU FAULT ***** IRQStatus "
+ "0x%x\n", dmmu_event_mask);
+ printk(KERN_INFO "***** DSPMMU FAULT ***** fault_addr "
+ "0x%x\n", fault_addr);
+ /*
+ * Schedule a DPC directly. In the future, it may be
+ * necessary to check if DSP MMU fault is intended for
+ * Bridge.
+ */
+ tasklet_schedule(&deh_mgr_obj->dpc_tasklet);
+
+ /* Reset err_info structure before use. */
+ deh_mgr_obj->err_info.dw_err_mask = DSP_MMUFAULT;
+ deh_mgr_obj->err_info.dw_val1 = fault_addr >> 16;
+ deh_mgr_obj->err_info.dw_val2 = fault_addr & 0xFFFF;
+ deh_mgr_obj->err_info.dw_val3 = 0L;
+ /* Disable the MMU events, else once we clear it will
+ * start to raise INTs again */
+ hw_mmu_event_disable(resources->dw_dmmu_base,
+ HW_MMU_TRANSLATION_FAULT);
+ } else {
+ hw_mmu_event_disable(resources->dw_dmmu_base,
+ HW_MMU_ALL_INTERRUPTS);
+ }
+ }
+ return IRQ_HANDLED;
+}
+
+/*
+ * ======== mmu_check_if_fault ========
+ * Check to see if MMU Fault is valid TLB miss from DSP
+ * Note: This function is called from an ISR
+ */
+static bool mmu_check_if_fault(struct bridge_dev_context *dev_context)
+{
+
+ bool ret = false;
+ hw_status hw_status_obj;
+ struct cfg_hostres *resources = dev_context->resources;
+
+ if (!resources) {
+ dev_dbg(bridge, "%s: Failed to get Host Resources\n",
+ __func__);
+ return ret;
+ }
+ hw_status_obj =
+ hw_mmu_event_status(resources->dw_dmmu_base, &dmmu_event_mask);
+ if (dmmu_event_mask == HW_MMU_TRANSLATION_FAULT) {
+ hw_mmu_fault_addr_read(resources->dw_dmmu_base, &fault_addr);
+ ret = true;
+ }
+ return ret;
+}
diff --git a/drivers/staging/tidspbridge/core/mmu_fault.h b/drivers/staging/tidspbridge/core/mmu_fault.h
new file mode 100644
index 0000000..74db489
--- /dev/null
+++ b/drivers/staging/tidspbridge/core/mmu_fault.h
@@ -0,0 +1,36 @@
+/*
+ * mmu_fault.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Defines DSP MMU fault handling functions.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef MMU_FAULT_
+#define MMU_FAULT_
+
+extern u32 fault_addr;
+
+/*
+ * ======== mmu_fault_dpc ========
+ * Deferred procedure call to handle DSP MMU fault.
+ */
+void mmu_fault_dpc(IN unsigned long pRefData);
+
+/*
+ * ======== mmu_fault_isr ========
+ * ISR to be triggered by a DSP MMU fault interrupt.
+ */
+irqreturn_t mmu_fault_isr(int irq, IN void *pRefData);
+
+#endif /* MMU_FAULT_ */
diff --git a/drivers/staging/tidspbridge/core/msg_sm.c b/drivers/staging/tidspbridge/core/msg_sm.c
new file mode 100644
index 0000000..7c6d6cc
--- /dev/null
+++ b/drivers/staging/tidspbridge/core/msg_sm.c
@@ -0,0 +1,673 @@
+/*
+ * msg_sm.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Implements upper edge functions for Bridge message module.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+/* ----------------------------------- DSP/BIOS Bridge */
+#include <dspbridge/std.h>
+#include <dspbridge/dbdefs.h>
+
+/* ----------------------------------- Trace & Debug */
+#include <dspbridge/dbc.h>
+
+/* ----------------------------------- OS Adaptation Layer */
+#include <dspbridge/list.h>
+#include <dspbridge/sync.h>
+
+/* ----------------------------------- Platform Manager */
+#include <dspbridge/dev.h>
+
+/* ----------------------------------- Others */
+#include <dspbridge/io_sm.h>
+
+/* ----------------------------------- This */
+#include <_msg_sm.h>
+#include <dspbridge/dspmsg.h>
+
+/* ----------------------------------- Function Prototypes */
+static int add_new_msg(struct lst_list *msgList);
+static void delete_msg_mgr(struct msg_mgr *hmsg_mgr);
+static void delete_msg_queue(struct msg_queue *msg_queue_obj, u32 uNumToDSP);
+static void free_msg_list(struct lst_list *msgList);
+
+/*
+ * ======== bridge_msg_create ========
+ * Create an object to manage message queues. Only one of these objects
+ * can exist per device object.
+ */
+int bridge_msg_create(OUT struct msg_mgr **phMsgMgr,
+ struct dev_object *hdev_obj,
+ msg_onexit msgCallback)
+{
+ struct msg_mgr *msg_mgr_obj;
+ struct io_mgr *hio_mgr;
+ int status = 0;
+
+ if (!phMsgMgr || !msgCallback || !hdev_obj) {
+ status = -EFAULT;
+ goto func_end;
+ }
+ dev_get_io_mgr(hdev_obj, &hio_mgr);
+ if (!hio_mgr) {
+ status = -EFAULT;
+ goto func_end;
+ }
+ *phMsgMgr = NULL;
+ /* Allocate msg_ctrl manager object */
+ msg_mgr_obj = kzalloc(sizeof(struct msg_mgr), GFP_KERNEL);
+
+ if (msg_mgr_obj) {
+ msg_mgr_obj->on_exit = msgCallback;
+ msg_mgr_obj->hio_mgr = hio_mgr;
+ /* List of MSG_QUEUEs */
+ msg_mgr_obj->queue_list = kzalloc(sizeof(struct lst_list),
+ GFP_KERNEL);
+ /* Queues of message frames for messages to the DSP. Message
+ * frames will only be added to the free queue when a
+ * msg_queue object is created. */
+ msg_mgr_obj->msg_free_list = kzalloc(sizeof(struct lst_list),
+ GFP_KERNEL);
+ msg_mgr_obj->msg_used_list = kzalloc(sizeof(struct lst_list),
+ GFP_KERNEL);
+ if (msg_mgr_obj->queue_list == NULL ||
+ msg_mgr_obj->msg_free_list == NULL ||
+ msg_mgr_obj->msg_used_list == NULL) {
+ status = -ENOMEM;
+ } else {
+ INIT_LIST_HEAD(&msg_mgr_obj->queue_list->head);
+ INIT_LIST_HEAD(&msg_mgr_obj->msg_free_list->head);
+ INIT_LIST_HEAD(&msg_mgr_obj->msg_used_list->head);
+ spin_lock_init(&msg_mgr_obj->msg_mgr_lock);
+ }
+
+ /* Create an event to be used by bridge_msg_put() in waiting
+ * for an available free frame from the message manager. */
+ msg_mgr_obj->sync_event =
+ kzalloc(sizeof(struct sync_object), GFP_KERNEL);
+ if (!msg_mgr_obj->sync_event)
+ status = -ENOMEM;
+ else
+ sync_init_event(msg_mgr_obj->sync_event);
+
+ if (DSP_SUCCEEDED(status))
+ *phMsgMgr = msg_mgr_obj;
+ else
+ delete_msg_mgr(msg_mgr_obj);
+
+ } else {
+ status = -ENOMEM;
+ }
+func_end:
+ return status;
+}
+
+/*
+ * ======== bridge_msg_create_queue ========
+ * Create a msg_queue for sending/receiving messages to/from a node
+ * on the DSP.
+ */
+int bridge_msg_create_queue(struct msg_mgr *hmsg_mgr,
+ OUT struct msg_queue **phMsgQueue,
+ u32 msgq_id, u32 max_msgs, void *arg)
+{
+ u32 i;
+ u32 num_allocated = 0;
+ struct msg_queue *msg_q;
+ int status = 0;
+
+ if (!hmsg_mgr || phMsgQueue == NULL || !hmsg_mgr->msg_free_list) {
+ status = -EFAULT;
+ goto func_end;
+ }
+
+ *phMsgQueue = NULL;
+ /* Allocate msg_queue object */
+ msg_q = kzalloc(sizeof(struct msg_queue), GFP_KERNEL);
+ if (!msg_q) {
+ status = -ENOMEM;
+ goto func_end;
+ }
+ lst_init_elem((struct list_head *)msg_q);
+ msg_q->max_msgs = max_msgs;
+ msg_q->hmsg_mgr = hmsg_mgr;
+ msg_q->arg = arg; /* Node handle */
+ msg_q->msgq_id = msgq_id; /* Node env (not valid yet) */
+ /* Queues of Message frames for messages from the DSP */
+ msg_q->msg_free_list = kzalloc(sizeof(struct lst_list), GFP_KERNEL);
+ msg_q->msg_used_list = kzalloc(sizeof(struct lst_list), GFP_KERNEL);
+ if (msg_q->msg_free_list == NULL || msg_q->msg_used_list == NULL)
+ status = -ENOMEM;
+ else {
+ INIT_LIST_HEAD(&msg_q->msg_free_list->head);
+ INIT_LIST_HEAD(&msg_q->msg_used_list->head);
+ }
+
+ /* Create event that will be signalled when a message from
+ * the DSP is available. */
+ if (DSP_SUCCEEDED(status)) {
+ msg_q->sync_event = kzalloc(sizeof(struct sync_object),
+ GFP_KERNEL);
+ if (msg_q->sync_event)
+ sync_init_event(msg_q->sync_event);
+ else
+ status = -ENOMEM;
+ }
+
+ /* Create a notification list for message ready notification. */
+ if (DSP_SUCCEEDED(status)) {
+ msg_q->ntfy_obj = kmalloc(sizeof(struct ntfy_object),
+ GFP_KERNEL);
+ if (msg_q->ntfy_obj)
+ ntfy_init(msg_q->ntfy_obj);
+ else
+ status = -ENOMEM;
+ }
+
+ /* Create events that will be used to synchronize cleanup
+ * when the object is deleted. sync_done will be set to
+ * unblock threads in MSG_Put() or MSG_Get(). sync_done_ack
+ * will be set by the unblocked thread to signal that it
+ * is unblocked and will no longer reference the object. */
+ if (DSP_SUCCEEDED(status)) {
+ msg_q->sync_done = kzalloc(sizeof(struct sync_object),
+ GFP_KERNEL);
+ if (msg_q->sync_done)
+ sync_init_event(msg_q->sync_done);
+ else
+ status = -ENOMEM;
+ }
+
+ if (DSP_SUCCEEDED(status)) {
+ msg_q->sync_done_ack = kzalloc(sizeof(struct sync_object),
+ GFP_KERNEL);
+ if (msg_q->sync_done_ack)
+ sync_init_event(msg_q->sync_done_ack);
+ else
+ status = -ENOMEM;
+ }
+
+ if (DSP_SUCCEEDED(status)) {
+ /* Enter critical section */
+ spin_lock_bh(&hmsg_mgr->msg_mgr_lock);
+ /* Initialize message frames and put in appropriate queues */
+ for (i = 0; i < max_msgs && DSP_SUCCEEDED(status); i++) {
+ status = add_new_msg(hmsg_mgr->msg_free_list);
+ if (DSP_SUCCEEDED(status)) {
+ num_allocated++;
+ status = add_new_msg(msg_q->msg_free_list);
+ }
+ }
+ if (DSP_FAILED(status)) {
+ /* Stay inside CS to prevent others from taking any
+ * of the newly allocated message frames. */
+ delete_msg_queue(msg_q, num_allocated);
+ } else {
+ lst_put_tail(hmsg_mgr->queue_list,
+ (struct list_head *)msg_q);
+ *phMsgQueue = msg_q;
+ /* Signal that free frames are now available */
+ if (!LST_IS_EMPTY(hmsg_mgr->msg_free_list))
+ sync_set_event(hmsg_mgr->sync_event);
+
+ }
+ /* Exit critical section */
+ spin_unlock_bh(&hmsg_mgr->msg_mgr_lock);
+ } else {
+ delete_msg_queue(msg_q, 0);
+ }
+func_end:
+ return status;
+}
+
+/*
+ * ======== bridge_msg_delete ========
+ * Delete a msg_ctrl manager allocated in bridge_msg_create().
+ */
+void bridge_msg_delete(struct msg_mgr *hmsg_mgr)
+{
+ if (hmsg_mgr)
+ delete_msg_mgr(hmsg_mgr);
+}
+
+/*
+ * ======== bridge_msg_delete_queue ========
+ * Delete a msg_ctrl queue allocated in bridge_msg_create_queue.
+ */
+void bridge_msg_delete_queue(struct msg_queue *msg_queue_obj)
+{
+ struct msg_mgr *hmsg_mgr;
+ u32 io_msg_pend;
+
+ if (!msg_queue_obj || !msg_queue_obj->hmsg_mgr)
+ goto func_end;
+
+ hmsg_mgr = msg_queue_obj->hmsg_mgr;
+ msg_queue_obj->done = true;
+ /* Unblock all threads blocked in MSG_Get() or MSG_Put(). */
+ io_msg_pend = msg_queue_obj->io_msg_pend;
+ while (io_msg_pend) {
+ /* Unblock thread */
+ sync_set_event(msg_queue_obj->sync_done);
+ /* Wait for acknowledgement */
+ sync_wait_on_event(msg_queue_obj->sync_done_ack, SYNC_INFINITE);
+ io_msg_pend = msg_queue_obj->io_msg_pend;
+ }
+ /* Remove message queue from hmsg_mgr->queue_list */
+ spin_lock_bh(&hmsg_mgr->msg_mgr_lock);
+ lst_remove_elem(hmsg_mgr->queue_list,
+ (struct list_head *)msg_queue_obj);
+ /* Free the message queue object */
+ delete_msg_queue(msg_queue_obj, msg_queue_obj->max_msgs);
+ if (!hmsg_mgr->msg_free_list)
+ goto func_cont;
+ if (LST_IS_EMPTY(hmsg_mgr->msg_free_list))
+ sync_reset_event(hmsg_mgr->sync_event);
+func_cont:
+ spin_unlock_bh(&hmsg_mgr->msg_mgr_lock);
+func_end:
+ return;
+}
+
+/*
+ * ======== bridge_msg_get ========
+ * Get a message from a msg_ctrl queue.
+ */
+int bridge_msg_get(struct msg_queue *msg_queue_obj,
+ struct dsp_msg *pmsg, u32 utimeout)
+{
+ struct msg_frame *msg_frame_obj;
+ struct msg_mgr *hmsg_mgr;
+ bool got_msg = false;
+ struct sync_object *syncs[2];
+ u32 index;
+ int status = 0;
+
+ if (!msg_queue_obj || pmsg == NULL) {
+ status = -ENOMEM;
+ goto func_end;
+ }
+
+ hmsg_mgr = msg_queue_obj->hmsg_mgr;
+ if (!msg_queue_obj->msg_used_list) {
+ status = -EFAULT;
+ goto func_end;
+ }
+
+ /* Enter critical section */
+ spin_lock_bh(&hmsg_mgr->msg_mgr_lock);
+ /* If a message is already there, get it */
+ if (!LST_IS_EMPTY(msg_queue_obj->msg_used_list)) {
+ msg_frame_obj = (struct msg_frame *)
+ lst_get_head(msg_queue_obj->msg_used_list);
+ if (msg_frame_obj != NULL) {
+ *pmsg = msg_frame_obj->msg_data.msg;
+ lst_put_tail(msg_queue_obj->msg_free_list,
+ (struct list_head *)msg_frame_obj);
+ if (LST_IS_EMPTY(msg_queue_obj->msg_used_list))
+ sync_reset_event(msg_queue_obj->sync_event);
+
+ got_msg = true;
+ }
+ } else {
+ if (msg_queue_obj->done)
+ status = -EPERM;
+ else
+ msg_queue_obj->io_msg_pend++;
+
+ }
+ /* Exit critical section */
+ spin_unlock_bh(&hmsg_mgr->msg_mgr_lock);
+ if (DSP_SUCCEEDED(status) && !got_msg) {
+		/* Wait until a message is available, a timeout occurs, or the
+		 * queue is marked done. We don't have to schedule the DPC,
+		 * since the DSP will send messages when they are available. */
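+		/* syncs[0] is signalled when a message arrives, syncs[1] when
+		 * the queue is being deleted (see bridge_msg_delete_queue()) */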
+ syncs[0] = msg_queue_obj->sync_event;
+ syncs[1] = msg_queue_obj->sync_done;
+ status = sync_wait_on_multiple_events(syncs, 2, utimeout,
+ &index);
+ /* Enter critical section */
+ spin_lock_bh(&hmsg_mgr->msg_mgr_lock);
+ if (msg_queue_obj->done) {
+ msg_queue_obj->io_msg_pend--;
+ /* Exit critical section */
+ spin_unlock_bh(&hmsg_mgr->msg_mgr_lock);
+ /* Signal that we're not going to access msg_queue_obj
+ * anymore, so it can be deleted. */
+ (void)sync_set_event(msg_queue_obj->sync_done_ack);
+ status = -EPERM;
+ } else {
+ if (DSP_SUCCEEDED(status)) {
+ DBC_ASSERT(!LST_IS_EMPTY
+ (msg_queue_obj->msg_used_list));
+ /* Get msg from used list */
+ msg_frame_obj = (struct msg_frame *)
+ lst_get_head(msg_queue_obj->msg_used_list);
+ /* Copy message into pmsg and put frame on the
+ * free list */
+ if (msg_frame_obj != NULL) {
+ *pmsg = msg_frame_obj->msg_data.msg;
+ lst_put_tail
+ (msg_queue_obj->msg_free_list,
+ (struct list_head *)
+ msg_frame_obj);
+ }
+ }
+ msg_queue_obj->io_msg_pend--;
+ /* Reset the event if there are still queued messages */
+ if (!LST_IS_EMPTY(msg_queue_obj->msg_used_list))
+ sync_set_event(msg_queue_obj->sync_event);
+
+ /* Exit critical section */
+ spin_unlock_bh(&hmsg_mgr->msg_mgr_lock);
+ }
+ }
+func_end:
+ return status;
+}
+
+/*
+ * ======== bridge_msg_put ========
+ * Put a message onto a msg_ctrl queue.
+ */
+int bridge_msg_put(struct msg_queue *msg_queue_obj,
+ IN CONST struct dsp_msg *pmsg, u32 utimeout)
+{
+ struct msg_frame *msg_frame_obj;
+ struct msg_mgr *hmsg_mgr;
+ bool put_msg = false;
+ struct sync_object *syncs[2];
+ u32 index;
+ int status = 0;
+
+ if (!msg_queue_obj || !pmsg || !msg_queue_obj->hmsg_mgr) {
+ status = -ENOMEM;
+ goto func_end;
+ }
+ hmsg_mgr = msg_queue_obj->hmsg_mgr;
+ if (!hmsg_mgr->msg_free_list) {
+ status = -EFAULT;
+ goto func_end;
+ }
+
+ spin_lock_bh(&hmsg_mgr->msg_mgr_lock);
+
+ /* If a message frame is available, use it */
+ if (!LST_IS_EMPTY(hmsg_mgr->msg_free_list)) {
+ msg_frame_obj =
+ (struct msg_frame *)lst_get_head(hmsg_mgr->msg_free_list);
+ if (msg_frame_obj != NULL) {
+ msg_frame_obj->msg_data.msg = *pmsg;
+ msg_frame_obj->msg_data.msgq_id =
+ msg_queue_obj->msgq_id;
+ lst_put_tail(hmsg_mgr->msg_used_list,
+ (struct list_head *)msg_frame_obj);
+ hmsg_mgr->msgs_pending++;
+ put_msg = true;
+ }
+ if (LST_IS_EMPTY(hmsg_mgr->msg_free_list))
+ sync_reset_event(hmsg_mgr->sync_event);
+
+ /* Release critical section before scheduling DPC */
+ spin_unlock_bh(&hmsg_mgr->msg_mgr_lock);
+ /* Schedule a DPC, to do the actual data transfer: */
+ iosm_schedule(hmsg_mgr->hio_mgr);
+ } else {
+ if (msg_queue_obj->done)
+ status = -EPERM;
+ else
+ msg_queue_obj->io_msg_pend++;
+
+ spin_unlock_bh(&hmsg_mgr->msg_mgr_lock);
+ }
+ if (DSP_SUCCEEDED(status) && !put_msg) {
+		/* Wait until a free message frame is available, a timeout
+		 * occurs, or the queue is marked done */
+ syncs[0] = hmsg_mgr->sync_event;
+ syncs[1] = msg_queue_obj->sync_done;
+ status = sync_wait_on_multiple_events(syncs, 2, utimeout,
+ &index);
+ if (DSP_FAILED(status))
+ goto func_end;
+ /* Enter critical section */
+ spin_lock_bh(&hmsg_mgr->msg_mgr_lock);
+ if (msg_queue_obj->done) {
+ msg_queue_obj->io_msg_pend--;
+ /* Exit critical section */
+ spin_unlock_bh(&hmsg_mgr->msg_mgr_lock);
+ /* Signal that we're not going to access msg_queue_obj
+ * anymore, so it can be deleted. */
+ (void)sync_set_event(msg_queue_obj->sync_done_ack);
+ status = -EPERM;
+ } else {
+ if (LST_IS_EMPTY(hmsg_mgr->msg_free_list)) {
+ status = -EFAULT;
+ goto func_cont;
+ }
+ /* Get msg from free list */
+ msg_frame_obj = (struct msg_frame *)
+ lst_get_head(hmsg_mgr->msg_free_list);
+ /*
+ * Copy message into pmsg and put frame on the
+ * used list.
+ */
+ if (msg_frame_obj) {
+ msg_frame_obj->msg_data.msg = *pmsg;
+ msg_frame_obj->msg_data.msgq_id =
+ msg_queue_obj->msgq_id;
+ lst_put_tail(hmsg_mgr->msg_used_list,
+ (struct list_head *)msg_frame_obj);
+ hmsg_mgr->msgs_pending++;
+ /*
+ * Schedule a DPC, to do the actual
+ * data transfer.
+ */
+ iosm_schedule(hmsg_mgr->hio_mgr);
+ }
+
+ msg_queue_obj->io_msg_pend--;
+ /* Reset event if there are still frames available */
+ if (!LST_IS_EMPTY(hmsg_mgr->msg_free_list))
+ sync_set_event(hmsg_mgr->sync_event);
+func_cont:
+ /* Exit critical section */
+ spin_unlock_bh(&hmsg_mgr->msg_mgr_lock);
+ }
+ }
+func_end:
+ return status;
+}
+
+/*
+ * ======== bridge_msg_register_notify ========
+ */
+int bridge_msg_register_notify(struct msg_queue *msg_queue_obj,
+ u32 event_mask, u32 notify_type,
+ struct dsp_notification *hnotification)
+{
+ int status = 0;
+
+ if (!msg_queue_obj || !hnotification) {
+ status = -ENOMEM;
+ goto func_end;
+ }
+
+ if (!(event_mask == DSP_NODEMESSAGEREADY || event_mask == 0)) {
+ status = -EPERM;
+ goto func_end;
+ }
+
+ if (notify_type != DSP_SIGNALEVENT) {
+ status = -EBADR;
+ goto func_end;
+ }
+
+ if (event_mask)
+ status = ntfy_register(msg_queue_obj->ntfy_obj, hnotification,
+ event_mask, notify_type);
+ else
+ status = ntfy_unregister(msg_queue_obj->ntfy_obj,
+ hnotification);
+
+ if (status == -EINVAL) {
+ /* Not registered. Ok, since we couldn't have known. Node
+ * notifications are split between node state change handled
+ * by NODE, and message ready handled by msg_ctrl. */
+ status = 0;
+ }
+func_end:
+ return status;
+}
+
+/*
+ * ======== bridge_msg_set_queue_id ========
+ */
+void bridge_msg_set_queue_id(struct msg_queue *msg_queue_obj, u32 msgq_id)
+{
+ /*
+ * A message queue must be created when a node is allocated,
+ * so that node_register_notify() can be called before the node
+ * is created. Since we don't know the node environment until the
+ * node is created, we need this function to set msg_queue_obj->msgq_id
+ * to the node environment, after the node is created.
+ */
+ if (msg_queue_obj)
+ msg_queue_obj->msgq_id = msgq_id;
+}
+
+/*
+ * ======== add_new_msg ========
+ * Must be called in message manager critical section.
+ */
+static int add_new_msg(struct lst_list *msgList)
+{
+ struct msg_frame *pmsg;
+ int status = 0;
+
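+	/* GFP_ATOMIC: this helper runs under the msg_mgr spinlock taken by
+	 * its callers (see bridge_msg_create_queue()) */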
+ pmsg = kzalloc(sizeof(struct msg_frame), GFP_ATOMIC);
+ if (pmsg != NULL) {
+ lst_init_elem((struct list_head *)pmsg);
+ lst_put_tail(msgList, (struct list_head *)pmsg);
+ } else {
+ status = -ENOMEM;
+ }
+
+ return status;
+}
+
+/*
+ * ======== delete_msg_mgr ========
+ */
+static void delete_msg_mgr(struct msg_mgr *hmsg_mgr)
+{
+ if (!hmsg_mgr)
+ goto func_end;
+
+ if (hmsg_mgr->queue_list) {
+ if (LST_IS_EMPTY(hmsg_mgr->queue_list)) {
+ kfree(hmsg_mgr->queue_list);
+ hmsg_mgr->queue_list = NULL;
+ }
+ }
+
+ if (hmsg_mgr->msg_free_list) {
+ free_msg_list(hmsg_mgr->msg_free_list);
+ hmsg_mgr->msg_free_list = NULL;
+ }
+
+ if (hmsg_mgr->msg_used_list) {
+ free_msg_list(hmsg_mgr->msg_used_list);
+ hmsg_mgr->msg_used_list = NULL;
+ }
+
+ kfree(hmsg_mgr->sync_event);
+
+ kfree(hmsg_mgr);
+func_end:
+ return;
+}
+
+/*
+ * ======== delete_msg_queue ========
+ */
+static void delete_msg_queue(struct msg_queue *msg_queue_obj, u32 uNumToDSP)
+{
+ struct msg_mgr *hmsg_mgr;
+ struct msg_frame *pmsg;
+ u32 i;
+
+ if (!msg_queue_obj ||
+ !msg_queue_obj->hmsg_mgr || !msg_queue_obj->hmsg_mgr->msg_free_list)
+ goto func_end;
+
+ hmsg_mgr = msg_queue_obj->hmsg_mgr;
+
+ /* Pull off uNumToDSP message frames from Msg manager and free */
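+	/* These frames were added to the manager's free list on behalf of
+	 * this queue by bridge_msg_create_queue() */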
+ for (i = 0; i < uNumToDSP; i++) {
+
+ if (!LST_IS_EMPTY(hmsg_mgr->msg_free_list)) {
+ pmsg = (struct msg_frame *)
+ lst_get_head(hmsg_mgr->msg_free_list);
+ kfree(pmsg);
+ } else {
+ /* Cannot free all of the message frames */
+ break;
+ }
+ }
+
+ if (msg_queue_obj->msg_free_list) {
+ free_msg_list(msg_queue_obj->msg_free_list);
+ msg_queue_obj->msg_free_list = NULL;
+ }
+
+ if (msg_queue_obj->msg_used_list) {
+ free_msg_list(msg_queue_obj->msg_used_list);
+ msg_queue_obj->msg_used_list = NULL;
+ }
+
+ if (msg_queue_obj->ntfy_obj) {
+ ntfy_delete(msg_queue_obj->ntfy_obj);
+ kfree(msg_queue_obj->ntfy_obj);
+ }
+
+ kfree(msg_queue_obj->sync_event);
+ kfree(msg_queue_obj->sync_done);
+ kfree(msg_queue_obj->sync_done_ack);
+
+ kfree(msg_queue_obj);
+func_end:
+ return;
+
+}
+
+/*
+ * ======== free_msg_list ========
+ */
+static void free_msg_list(struct lst_list *msgList)
+{
+ struct msg_frame *pmsg;
+
+ if (!msgList)
+ goto func_end;
+
+ while ((pmsg = (struct msg_frame *)lst_get_head(msgList)) != NULL)
+ kfree(pmsg);
+
+ DBC_ASSERT(LST_IS_EMPTY(msgList));
+
+ kfree(msgList);
+func_end:
+ return;
+}
diff --git a/drivers/staging/tidspbridge/core/tiomap3430.c b/drivers/staging/tidspbridge/core/tiomap3430.c
new file mode 100644
index 0000000..ee9205b
--- /dev/null
+++ b/drivers/staging/tidspbridge/core/tiomap3430.c
@@ -0,0 +1,1887 @@
+/*
+ * tiomap.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Processor Manager Driver for TI OMAP3430 EVM.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+/* ----------------------------------- Host OS */
+#include <dspbridge/host_os.h>
+#include <linux/mm.h>
+#include <linux/mmzone.h>
+#include <plat/control.h>
+
+/* ----------------------------------- DSP/BIOS Bridge */
+#include <dspbridge/std.h>
+#include <dspbridge/dbdefs.h>
+
+/* ----------------------------------- Trace & Debug */
+#include <dspbridge/dbc.h>
+
+/* ----------------------------------- OS Adaptation Layer */
+#include <dspbridge/cfg.h>
+#include <dspbridge/drv.h>
+#include <dspbridge/sync.h>
+
+/* ------------------------------------ Hardware Abstraction Layer */
+#include <hw_defs.h>
+#include <hw_mmu.h>
+
+/* ----------------------------------- Link Driver */
+#include <dspbridge/dspdefs.h>
+#include <dspbridge/dspchnl.h>
+#include <dspbridge/dspdeh.h>
+#include <dspbridge/dspio.h>
+#include <dspbridge/dspmsg.h>
+#include <dspbridge/pwr.h>
+#include <dspbridge/io_sm.h>
+
+/* ----------------------------------- Platform Manager */
+#include <dspbridge/dev.h>
+#include <dspbridge/dspapi.h>
+#include <dspbridge/dmm.h>
+#include <dspbridge/wdt.h>
+
+/* ----------------------------------- Local */
+#include "_tiomap.h"
+#include "_tiomap_pwr.h"
+#include "tiomap_io.h"
+
+/* Offset in shared mem to write to in order to synchronize start with DSP */
+#define SHMSYNCOFFSET 4 /* GPP byte offset */
+
+#define BUFFERSIZE 1024
+
+#define TIHELEN_ACKTIMEOUT 10000
+
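+/* Address masks for the four DSP MMU page sizes: 1 MB section,
+ * 16 MB supersection, 64 KB large page and 4 KB small page */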
+#define MMU_SECTION_ADDR_MASK 0xFFF00000
+#define MMU_SSECTION_ADDR_MASK 0xFF000000
+#define MMU_LARGE_PAGE_MASK 0xFFFF0000
+#define MMU_SMALL_PAGE_MASK 0xFFFFF000
+#define OMAP3_IVA2_BOOTADDR_MASK 0xFFFFFC00
+#define PAGES_II_LVL_TABLE 512
+#define PHYS_TO_PAGE(phys) pfn_to_page((phys) >> PAGE_SHIFT)
+
+#define MMU_GFLUSH 0x60
+
+/* Forward Declarations: */
+static int bridge_brd_monitor(struct bridge_dev_context *dev_context);
+static int bridge_brd_read(struct bridge_dev_context *dev_context,
+ OUT u8 *pbHostBuf,
+ u32 dwDSPAddr, u32 ul_num_bytes,
+ u32 ulMemType);
+static int bridge_brd_start(struct bridge_dev_context *dev_context,
+ u32 dwDSPAddr);
+static int bridge_brd_status(struct bridge_dev_context *dev_context,
+ int *pdwState);
+static int bridge_brd_stop(struct bridge_dev_context *dev_context);
+static int bridge_brd_write(struct bridge_dev_context *dev_context,
+ IN u8 *pbHostBuf,
+ u32 dwDSPAddr, u32 ul_num_bytes,
+ u32 ulMemType);
+static int bridge_brd_set_state(struct bridge_dev_context *hDevContext,
+ u32 ulBrdState);
+static int bridge_brd_mem_copy(struct bridge_dev_context *hDevContext,
+ u32 ulDspDestAddr, u32 ulDspSrcAddr,
+ u32 ul_num_bytes, u32 ulMemType);
+static int bridge_brd_mem_write(struct bridge_dev_context *dev_context,
+ IN u8 *pbHostBuf, u32 dwDSPAddr,
+ u32 ul_num_bytes, u32 ulMemType);
+static int bridge_brd_mem_map(struct bridge_dev_context *hDevContext,
+ u32 ul_mpu_addr, u32 ulVirtAddr,
+ u32 ul_num_bytes, u32 ul_map_attr,
+ struct page **mapped_pages);
+static int bridge_brd_mem_un_map(struct bridge_dev_context *hDevContext,
+ u32 ulVirtAddr, u32 ul_num_bytes);
+static int bridge_dev_create(OUT struct bridge_dev_context
+ **ppDevContext,
+ struct dev_object *hdev_obj,
+ IN struct cfg_hostres *pConfig);
+static int bridge_dev_ctrl(struct bridge_dev_context *dev_context,
+ u32 dw_cmd, IN OUT void *pargs);
+static int bridge_dev_destroy(struct bridge_dev_context *dev_context);
+static u32 user_va2_pa(struct mm_struct *mm, u32 address);
+static int pte_update(struct bridge_dev_context *hDevContext, u32 pa,
+ u32 va, u32 size,
+ struct hw_mmu_map_attrs_t *map_attrs);
+static int pte_set(struct pg_table_attrs *pt, u32 pa, u32 va,
+ u32 size, struct hw_mmu_map_attrs_t *attrs);
+static int mem_map_vmalloc(struct bridge_dev_context *hDevContext,
+ u32 ul_mpu_addr, u32 ulVirtAddr,
+ u32 ul_num_bytes,
+ struct hw_mmu_map_attrs_t *hw_attrs);
+
+bool wait_for_start(struct bridge_dev_context *dev_context, u32 dw_sync_addr);
+
+/* ----------------------------------- Globals */
+
+/* Attributes of L2 page tables for DSP MMU */
+struct page_info {
+ u32 num_entries; /* Number of valid PTEs in the L2 PT */
+};
+
+/* Attributes used to manage the DSP MMU page tables */
+struct pg_table_attrs {
+ spinlock_t pg_lock; /* Critical section object handle */
+
+ u32 l1_base_pa; /* Physical address of the L1 PT */
+ u32 l1_base_va; /* Virtual address of the L1 PT */
+ u32 l1_size; /* Size of the L1 PT */
+ u32 l1_tbl_alloc_pa;
+ /* Physical address of Allocated mem for L1 table. May not be aligned */
+ u32 l1_tbl_alloc_va;
+ /* Virtual address of Allocated mem for L1 table. May not be aligned */
+ u32 l1_tbl_alloc_sz;
+ /* Size of consistent memory allocated for L1 table.
+ * May not be aligned */
+
+ u32 l2_base_pa; /* Physical address of the L2 PT */
+ u32 l2_base_va; /* Virtual address of the L2 PT */
+ u32 l2_size; /* Size of the L2 PT */
+ u32 l2_tbl_alloc_pa;
+ /* Physical address of Allocated mem for L2 table. May not be aligned */
+ u32 l2_tbl_alloc_va;
+ /* Virtual address of Allocated mem for L2 table. May not be aligned */
+ u32 l2_tbl_alloc_sz;
+ /* Size of consistent memory allocated for L2 table.
+ * May not be aligned */
+
+ u32 l2_num_pages; /* Number of allocated L2 PT */
+ /* Array [l2_num_pages] of L2 PT info structs */
+ struct page_info *pg_info;
+};
+
+/*
+ * This Bridge driver's function interface table.
+ */
+static struct bridge_drv_interface drv_interface_fxns = {
+ /* Bridge API ver. for which this bridge driver is built. */
+ BRD_API_MAJOR_VERSION,
+ BRD_API_MINOR_VERSION,
+ bridge_dev_create,
+ bridge_dev_destroy,
+ bridge_dev_ctrl,
+ bridge_brd_monitor,
+ bridge_brd_start,
+ bridge_brd_stop,
+ bridge_brd_status,
+ bridge_brd_read,
+ bridge_brd_write,
+ bridge_brd_set_state,
+ bridge_brd_mem_copy,
+ bridge_brd_mem_write,
+ bridge_brd_mem_map,
+ bridge_brd_mem_un_map,
+ /* The following CHNL functions are provided by chnl_io.lib: */
+ bridge_chnl_create,
+ bridge_chnl_destroy,
+ bridge_chnl_open,
+ bridge_chnl_close,
+ bridge_chnl_add_io_req,
+ bridge_chnl_get_ioc,
+ bridge_chnl_cancel_io,
+ bridge_chnl_flush_io,
+ bridge_chnl_get_info,
+ bridge_chnl_get_mgr_info,
+ bridge_chnl_idle,
+ bridge_chnl_register_notify,
+ /* The following DEH functions are provided by tihelen_ue_deh.c */
+ bridge_deh_create,
+ bridge_deh_destroy,
+ bridge_deh_notify,
+ bridge_deh_register_notify,
+ bridge_deh_get_info,
+ /* The following IO functions are provided by chnl_io.lib: */
+ bridge_io_create,
+ bridge_io_destroy,
+ bridge_io_on_loaded,
+ bridge_io_get_proc_load,
+ /* The following msg_ctrl functions are provided by chnl_io.lib: */
+ bridge_msg_create,
+ bridge_msg_create_queue,
+ bridge_msg_delete,
+ bridge_msg_delete_queue,
+ bridge_msg_get,
+ bridge_msg_put,
+ bridge_msg_register_notify,
+ bridge_msg_set_queue_id,
+};
+
+static inline void tlb_flush_all(const void __iomem *base)
+{
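+	/* Setting bit 0 of MMU_GFLUSH flushes all non-protected TLB entries */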
+ __raw_writeb(__raw_readb(base + MMU_GFLUSH) | 1, base + MMU_GFLUSH);
+}
+
+static inline void flush_all(struct bridge_dev_context *dev_context)
+{
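+	/* Wake the DSP first: the MMU registers are not accessible while
+	 * the IVA2 domain is in hibernation */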
+ if (dev_context->dw_brd_state == BRD_DSP_HIBERNATION ||
+ dev_context->dw_brd_state == BRD_HIBERNATION)
+ wake_dsp(dev_context, NULL);
+
+ tlb_flush_all(dev_context->dw_dsp_mmu_base);
+}
+
+static void bad_page_dump(u32 pa, struct page *pg)
+{
+ pr_emerg("DSPBRIDGE: MAP function: COUNT 0 FOR PA 0x%x\n", pa);
+ pr_emerg("Bad page state in process '%s'\n"
+ "page:%p flags:0x%0*lx mapping:%p mapcount:%d count:%d\n"
+ "Backtrace:\n",
+ current->comm, pg, (int)(2 * sizeof(unsigned long)),
+ (unsigned long)pg->flags, pg->mapping,
+ page_mapcount(pg), page_count(pg));
+ dump_stack();
+}
+
+/*
+ * ======== bridge_drv_entry ========
+ * purpose:
+ * Bridge Driver entry point.
+ */
+void bridge_drv_entry(OUT struct bridge_drv_interface **ppDrvInterface,
+ IN CONST char *driver_file_name)
+{
+
+ DBC_REQUIRE(driver_file_name != NULL);
+
+ io_sm_init(); /* Initialization of io_sm module */
+
+ if (strcmp(driver_file_name, "UMA") == 0)
+ *ppDrvInterface = &drv_interface_fxns;
+ else
+ dev_dbg(bridge, "%s Unknown Bridge file name", __func__);
+
+}
+
+/*
+ * ======== bridge_brd_monitor ========
+ * purpose:
+ *      Puts the DSP into a loadable state, i.e. the application can
+ *      load and start the device.
+ *
+ * Preconditions:
+ * Device in 'OFF' state.
+ */
+static int bridge_brd_monitor(struct bridge_dev_context *hDevContext)
+{
+ int status = 0;
+ struct bridge_dev_context *dev_context = hDevContext;
+ u32 temp;
+ struct dspbridge_platform_data *pdata =
+ omap_dspbridge_dev->dev.platform_data;
+
+ temp = (*pdata->dsp_prm_read)(OMAP3430_IVA2_MOD, OMAP2_PM_PWSTST) &
+ OMAP_POWERSTATEST_MASK;
+ if (!(temp & 0x02)) {
+ /* IVA2 is not in ON state */
+ /* Read and set PM_PWSTCTRL_IVA2 to ON */
+ (*pdata->dsp_prm_rmw_bits)(OMAP_POWERSTATEST_MASK,
+ PWRDM_POWER_ON, OMAP3430_IVA2_MOD, OMAP2_PM_PWSTCTRL);
+ /* Set the SW supervised state transition */
+ (*pdata->dsp_cm_write)(OMAP34XX_CLKSTCTRL_FORCE_WAKEUP,
+ OMAP3430_IVA2_MOD, OMAP2_CM_CLKSTCTRL);
+
+ /* Wait until the state has moved to ON */
+ while ((*pdata->dsp_prm_read)(OMAP3430_IVA2_MOD, OMAP2_PM_PWSTST) &
+ OMAP_INTRANSITION_MASK)
+ ;
+ /* Disable Automatic transition */
+ (*pdata->dsp_cm_write)(OMAP34XX_CLKSTCTRL_DISABLE_AUTO,
+ OMAP3430_IVA2_MOD, OMAP2_CM_CLKSTCTRL);
+ }
+ (*pdata->dsp_prm_rmw_bits)(OMAP3430_RST2_IVA2_MASK, 0,
+ OMAP3430_IVA2_MOD, OMAP2_RM_RSTCTRL);
+ dsp_clk_enable(DSP_CLK_IVA2);
+
+ if (DSP_SUCCEEDED(status)) {
+ /* set the device state to IDLE */
+ dev_context->dw_brd_state = BRD_IDLE;
+ }
+ return status;
+}
+
+/*
+ * ======== bridge_brd_read ========
+ * purpose:
+ * Reads buffers for DSP memory.
+ */
+static int bridge_brd_read(struct bridge_dev_context *hDevContext,
+ OUT u8 *pbHostBuf, u32 dwDSPAddr,
+ u32 ul_num_bytes, u32 ulMemType)
+{
+ int status = 0;
+ struct bridge_dev_context *dev_context = hDevContext;
+ u32 offset;
+ u32 dsp_base_addr = hDevContext->dw_dsp_base_addr;
+
+ if (dwDSPAddr < dev_context->dw_dsp_start_add) {
+ status = -EPERM;
+ return status;
+ }
+ /* change here to account for the 3 bands of the DSP internal memory */
+ if ((dwDSPAddr - dev_context->dw_dsp_start_add) <
+ dev_context->dw_internal_size) {
+ offset = dwDSPAddr - dev_context->dw_dsp_start_add;
+ } else {
+ status = read_ext_dsp_data(dev_context, pbHostBuf, dwDSPAddr,
+ ul_num_bytes, ulMemType);
+ return status;
+ }
+ /* copy the data from DSP memory, */
+ memcpy(pbHostBuf, (void *)(dsp_base_addr + offset), ul_num_bytes);
+ return status;
+}
+
+/*
+ * ======== bridge_brd_set_state ========
+ * purpose:
+ * This routine updates the Board status.
+ */
+static int bridge_brd_set_state(struct bridge_dev_context *hDevContext,
+ u32 ulBrdState)
+{
+ int status = 0;
+ struct bridge_dev_context *dev_context = hDevContext;
+
+ dev_context->dw_brd_state = ulBrdState;
+ return status;
+}
+
+/*
+ * ======== bridge_brd_start ========
+ * purpose:
+ * Initializes DSP MMU and Starts DSP.
+ *
+ * Preconditions:
+ * a) DSP domain is 'ACTIVE'.
+ * b) DSP_RST1 is asserted.
+ *      c) DSP_RST2 is released.
+ */
+static int bridge_brd_start(struct bridge_dev_context *hDevContext,
+ u32 dwDSPAddr)
+{
+ int status = 0;
+ struct bridge_dev_context *dev_context = hDevContext;
+ u32 dw_sync_addr = 0;
+ u32 ul_shm_base; /* Gpp Phys SM base addr(byte) */
+ u32 ul_shm_base_virt; /* Dsp Virt SM base addr */
+ u32 ul_tlb_base_virt; /* Base of MMU TLB entry */
+ /* Offset of shm_base_virt from tlb_base_virt */
+ u32 ul_shm_offset_virt;
+ s32 entry_ndx;
+ s32 itmp_entry_ndx = 0; /* DSP-MMU TLB entry base address */
+ struct cfg_hostres *resources = NULL;
+ u32 temp;
+ u32 ul_dsp_clk_rate;
+ u32 ul_dsp_clk_addr;
+ u32 ul_bios_gp_timer;
+ u32 clk_cmd;
+ struct io_mgr *hio_mgr;
+ u32 ul_load_monitor_timer;
+ struct dspbridge_platform_data *pdata =
+ omap_dspbridge_dev->dev.platform_data;
+
+ /* The device context contains all the mmu setup info from when the
+ * last dsp base image was loaded. The first entry is always
+ * SHMMEM base. */
+ /* Get SHM_BEG - convert to byte address */
+ (void)dev_get_symbol(dev_context->hdev_obj, SHMBASENAME,
+ &ul_shm_base_virt);
+ ul_shm_base_virt *= DSPWORDSIZE;
+ DBC_ASSERT(ul_shm_base_virt != 0);
+ /* DSP Virtual address */
+ ul_tlb_base_virt = dev_context->atlb_entry[0].ul_dsp_va;
+ DBC_ASSERT(ul_tlb_base_virt <= ul_shm_base_virt);
+ ul_shm_offset_virt =
+ ul_shm_base_virt - (ul_tlb_base_virt * DSPWORDSIZE);
+ /* Kernel logical address */
+ ul_shm_base = dev_context->atlb_entry[0].ul_gpp_va + ul_shm_offset_virt;
+
+ DBC_ASSERT(ul_shm_base != 0);
+	/* The 2nd word is used as the sync field */
+ dw_sync_addr = ul_shm_base + SHMSYNCOFFSET;
+ /* Write a signature into the shm base + offset; this will
+ * get cleared when the DSP program starts. */
+ if ((ul_shm_base_virt == 0) || (ul_shm_base == 0)) {
+ pr_err("%s: Illegal SM base\n", __func__);
+ status = -EPERM;
+ } else
+ *((volatile u32 *)dw_sync_addr) = 0xffffffff;
+
+ if (DSP_SUCCEEDED(status)) {
+ resources = dev_context->resources;
+ if (!resources)
+ status = -EPERM;
+
+		/* Assert RST1, i.e. reset only the DSP megacell */
+ if (DSP_SUCCEEDED(status)) {
+ (*pdata->dsp_prm_rmw_bits)(OMAP3430_RST1_IVA2_MASK,
+ OMAP3430_RST1_IVA2_MASK, OMAP3430_IVA2_MOD,
+ OMAP2_RM_RSTCTRL);
+ /* Mask address with 1K for compatibility */
+ __raw_writel(dwDSPAddr & OMAP3_IVA2_BOOTADDR_MASK,
+ OMAP343X_CTRL_REGADDR(
+ OMAP343X_CONTROL_IVA2_BOOTADDR));
+ /*
+ * Set bootmode to self loop if dsp_debug flag is true
+ */
+ __raw_writel((dsp_debug) ? OMAP3_IVA2_BOOTMOD_IDLE : 0,
+ OMAP343X_CTRL_REGADDR(
+ OMAP343X_CONTROL_IVA2_BOOTMOD));
+ }
+ }
+ if (DSP_SUCCEEDED(status)) {
+ /* Reset and Unreset the RST2, so that BOOTADDR is copied to
+ * IVA2 SYSC register */
+ (*pdata->dsp_prm_rmw_bits)(OMAP3430_RST2_IVA2_MASK,
+ OMAP3430_RST2_IVA2_MASK, OMAP3430_IVA2_MOD, OMAP2_RM_RSTCTRL);
+ udelay(100);
+ (*pdata->dsp_prm_rmw_bits)(OMAP3430_RST2_IVA2_MASK, 0,
+ OMAP3430_IVA2_MOD, OMAP2_RM_RSTCTRL);
+ udelay(100);
+
+		/* Disable the DSP MMU */
+ hw_mmu_disable(resources->dw_dmmu_base);
+ /* Disable TWL */
+ hw_mmu_twl_disable(resources->dw_dmmu_base);
+
+ /* Only make TLB entry if both addresses are non-zero */
+ for (entry_ndx = 0; entry_ndx < BRDIOCTL_NUMOFMMUTLB;
+ entry_ndx++) {
+ struct bridge_ioctl_extproc *e = &dev_context->atlb_entry[entry_ndx];
+ struct hw_mmu_map_attrs_t map_attrs = {
+ .endianism = e->endianism,
+ .element_size = e->elem_size,
+ .mixed_size = e->mixed_mode,
+ };
+
+ if (!e->ul_gpp_pa || !e->ul_dsp_va)
+ continue;
+
+ dev_dbg(bridge,
+ "MMU %d, pa: 0x%x, va: 0x%x, size: 0x%x",
+ itmp_entry_ndx,
+ e->ul_gpp_pa,
+ e->ul_dsp_va,
+ e->ul_size);
+
+ hw_mmu_tlb_add(dev_context->dw_dsp_mmu_base,
+ e->ul_gpp_pa,
+ e->ul_dsp_va,
+ e->ul_size,
+ itmp_entry_ndx,
+ &map_attrs, 1, 1);
+
+ itmp_entry_ndx++;
+ }
+ }
+
+ /* Lock the above TLB entries and get the BIOS and load monitor timer
+ * information */
+ if (DSP_SUCCEEDED(status)) {
+ hw_mmu_num_locked_set(resources->dw_dmmu_base, itmp_entry_ndx);
+ hw_mmu_victim_num_set(resources->dw_dmmu_base, itmp_entry_ndx);
+ hw_mmu_ttb_set(resources->dw_dmmu_base,
+ dev_context->pt_attrs->l1_base_pa);
+ hw_mmu_twl_enable(resources->dw_dmmu_base);
+ /* Enable the SmartIdle and AutoIdle bit for MMU_SYSCONFIG */
+
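+		/* Offset 0x10 is MMU_SYSCONFIG: bit 0 enables autoidle and
+		 * bits [4:3] = 2 select smart-idle */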
+ temp = __raw_readl((resources->dw_dmmu_base) + 0x10);
+ temp = (temp & 0xFFFFFFEF) | 0x11;
+ __raw_writel(temp, (resources->dw_dmmu_base) + 0x10);
+
+ /* Let the DSP MMU run */
+ hw_mmu_enable(resources->dw_dmmu_base);
+
+ /* Enable the BIOS clock */
+ (void)dev_get_symbol(dev_context->hdev_obj,
+ BRIDGEINIT_BIOSGPTIMER, &ul_bios_gp_timer);
+ (void)dev_get_symbol(dev_context->hdev_obj,
+ BRIDGEINIT_LOADMON_GPTIMER,
+ &ul_load_monitor_timer);
+ }
+
+ if (DSP_SUCCEEDED(status)) {
+ if (ul_load_monitor_timer != 0xFFFF) {
+ clk_cmd = (BPWR_ENABLE_CLOCK << MBX_PM_CLK_CMDSHIFT) |
+ ul_load_monitor_timer;
+ dsp_peripheral_clk_ctrl(dev_context, &clk_cmd);
+ } else {
+ dev_dbg(bridge, "Not able to get the symbol for Load "
+ "Monitor Timer\n");
+ }
+ }
+
+ if (DSP_SUCCEEDED(status)) {
+ if (ul_bios_gp_timer != 0xFFFF) {
+ clk_cmd = (BPWR_ENABLE_CLOCK << MBX_PM_CLK_CMDSHIFT) |
+ ul_bios_gp_timer;
+ dsp_peripheral_clk_ctrl(dev_context, &clk_cmd);
+ } else {
+ dev_dbg(bridge,
+ "Not able to get the symbol for BIOS Timer\n");
+ }
+ }
+
+ if (DSP_SUCCEEDED(status)) {
+ /* Set the DSP clock rate */
+ (void)dev_get_symbol(dev_context->hdev_obj,
+ "_BRIDGEINIT_DSP_FREQ", &ul_dsp_clk_addr);
+ /*Set Autoidle Mode for IVA2 PLL */
+ (*pdata->dsp_cm_write)(1 << OMAP3430_AUTO_IVA2_DPLL_SHIFT,
+ OMAP3430_IVA2_MOD, OMAP3430_CM_AUTOIDLE_PLL);
+
+ if ((unsigned int *)ul_dsp_clk_addr != NULL) {
+ /* Get the clock rate */
+ ul_dsp_clk_rate = dsp_clk_get_iva2_rate();
+			dev_dbg(bridge, "%s: DSP clock rate (KHz): 0x%x\n",
+ __func__, ul_dsp_clk_rate);
+ (void)bridge_brd_write(dev_context,
+ (u8 *) &ul_dsp_clk_rate,
+ ul_dsp_clk_addr, sizeof(u32), 0);
+ }
+ /*
+ * Enable Mailbox events and also drain any pending
+ * stale messages.
+ */
+ dev_context->mbox = omap_mbox_get("dsp");
+ if (IS_ERR(dev_context->mbox)) {
+ dev_context->mbox = NULL;
+ pr_err("%s: Failed to get dsp mailbox handle\n",
+ __func__);
+ status = -EPERM;
+ }
+
+ }
+ if (DSP_SUCCEEDED(status)) {
+ dev_context->mbox->rxq->callback = (int (*)(void *))io_mbox_msg;
+
+/*PM_IVA2GRPSEL_PER = 0xC0;*/
+ temp = (u32) *((reg_uword32 *)
+ ((u32) (resources->dw_per_pm_base) + 0xA8));
+ temp = (temp & 0xFFFFFF30) | 0xC0;
+ *((reg_uword32 *) ((u32) (resources->dw_per_pm_base) + 0xA8)) =
+ (u32) temp;
+
+/*PM_MPUGRPSEL_PER &= 0xFFFFFF3F; */
+ temp = (u32) *((reg_uword32 *)
+ ((u32) (resources->dw_per_pm_base) + 0xA4));
+ temp = (temp & 0xFFFFFF3F);
+ *((reg_uword32 *) ((u32) (resources->dw_per_pm_base) + 0xA4)) =
+ (u32) temp;
+/*CM_SLEEPDEP_PER |= 0x04; */
+ temp = (u32) *((reg_uword32 *)
+ ((u32) (resources->dw_per_base) + 0x44));
+ temp = (temp & 0xFFFFFFFB) | 0x04;
+ *((reg_uword32 *) ((u32) (resources->dw_per_base) + 0x44)) =
+ (u32) temp;
+
+/*CM_CLKSTCTRL_IVA2 = 0x00000003 -To Allow automatic transitions */
+ (*pdata->dsp_cm_write)(OMAP34XX_CLKSTCTRL_ENABLE_AUTO,
+ OMAP3430_IVA2_MOD, OMAP2_CM_CLKSTCTRL);
+
+ /* Let DSP go */
+ dev_dbg(bridge, "%s Unreset\n", __func__);
+ /* Enable DSP MMU Interrupts */
+ hw_mmu_event_enable(resources->dw_dmmu_base,
+ HW_MMU_ALL_INTERRUPTS);
+ /* release the RST1, DSP starts executing now .. */
+ (*pdata->dsp_prm_rmw_bits)(OMAP3430_RST1_IVA2_MASK, 0,
+ OMAP3430_IVA2_MOD, OMAP2_RM_RSTCTRL);
+
+ dev_dbg(bridge, "Waiting for Sync @ 0x%x\n", dw_sync_addr);
+ dev_dbg(bridge, "DSP c_int00 Address = 0x%x\n", dwDSPAddr);
+ if (dsp_debug)
+ while (*((volatile u16 *)dw_sync_addr))
+				;
+
+ /* Wait for DSP to clear word in shared memory */
+ /* Read the Location */
+ if (!wait_for_start(dev_context, dw_sync_addr))
+ status = -ETIMEDOUT;
+
+ /* Start wdt */
+ dsp_wdt_sm_set((void *)ul_shm_base);
+ dsp_wdt_enable(true);
+
+ status = dev_get_io_mgr(dev_context->hdev_obj, &hio_mgr);
+ if (hio_mgr) {
+ io_sh_msetting(hio_mgr, SHM_OPPINFO, NULL);
+ /* Write the synchronization bit to indicate the
+ * completion of OPP table update to DSP
+ */
+ *((volatile u32 *)dw_sync_addr) = 0XCAFECAFE;
+
+ /* update board state */
+ dev_context->dw_brd_state = BRD_RUNNING;
+ /* (void)chnlsm_enable_interrupt(dev_context); */
+ } else {
+ dev_context->dw_brd_state = BRD_UNKNOWN;
+ }
+ }
+ return status;
+}
+
+/*
+ * ======== bridge_brd_stop ========
+ * purpose:
+ * Puts DSP in self loop.
+ *
+ * Preconditions :
+ * a) None
+ */
+static int bridge_brd_stop(struct bridge_dev_context *hDevContext)
+{
+ int status = 0;
+ struct bridge_dev_context *dev_context = hDevContext;
+ struct pg_table_attrs *pt_attrs;
+ u32 dsp_pwr_state;
+ int clk_status;
+ struct dspbridge_platform_data *pdata =
+ omap_dspbridge_dev->dev.platform_data;
+
+ if (dev_context->dw_brd_state == BRD_STOPPED)
+ return status;
+
+	/* As per the TRM, it is advised to first drive the IVA2 to 'Standby'
+	 * mode before turning off the clocks. This is to ensure that there are
+	 * no pending L3 or other transactions from the IVA2 */
+ dsp_pwr_state = (*pdata->dsp_prm_read)(OMAP3430_IVA2_MOD, OMAP2_PM_PWSTST) &
+ OMAP_POWERSTATEST_MASK;
+ if (dsp_pwr_state != PWRDM_POWER_OFF) {
+ sm_interrupt_dsp(dev_context, MBX_PM_DSPIDLE);
+ mdelay(10);
+
+ clk_status = dsp_clk_disable(DSP_CLK_IVA2);
+
+ /* IVA2 is not in OFF state */
+ /* Set PM_PWSTCTRL_IVA2 to OFF */
+ (*pdata->dsp_prm_rmw_bits)(OMAP_POWERSTATEST_MASK,
+ PWRDM_POWER_OFF, OMAP3430_IVA2_MOD, OMAP2_PM_PWSTCTRL);
+ /* Set the SW supervised state transition for Sleep */
+ (*pdata->dsp_cm_write)(OMAP34XX_CLKSTCTRL_FORCE_SLEEP,
+ OMAP3430_IVA2_MOD, OMAP2_CM_CLKSTCTRL);
+ } else {
+ clk_status = dsp_clk_disable(DSP_CLK_IVA2);
+ }
+ udelay(10);
+ /* Release the Ext Base virtual Address as the next DSP Program
+ * may have a different load address */
+ if (dev_context->dw_dsp_ext_base_addr)
+ dev_context->dw_dsp_ext_base_addr = 0;
+
+ dev_context->dw_brd_state = BRD_STOPPED; /* update board state */
+
+ dsp_wdt_enable(false);
+
+ /* This is a good place to clear the MMU page tables as well */
+ if (dev_context->pt_attrs) {
+ pt_attrs = dev_context->pt_attrs;
+ memset((u8 *) pt_attrs->l1_base_va, 0x00, pt_attrs->l1_size);
+ memset((u8 *) pt_attrs->l2_base_va, 0x00, pt_attrs->l2_size);
+ memset((u8 *) pt_attrs->pg_info, 0x00,
+ (pt_attrs->l2_num_pages * sizeof(struct page_info)));
+ }
+ /* Disable the mailbox interrupts */
+ if (dev_context->mbox) {
+ omap_mbox_disable_irq(dev_context->mbox, IRQ_RX);
+ omap_mbox_put(dev_context->mbox);
+ dev_context->mbox = NULL;
+ }
+ /* Reset IVA2 clocks*/
+ (*pdata->dsp_prm_write)(OMAP3430_RST1_IVA2_MASK | OMAP3430_RST2_IVA2_MASK |
+ OMAP3430_RST3_IVA2_MASK, OMAP3430_IVA2_MOD, OMAP2_RM_RSTCTRL);
+
+ return status;
+}
+
+/*
+ * ======== bridge_brd_delete ========
+ * purpose:
+ * Puts DSP in Low power mode
+ *
+ * Preconditions :
+ * a) None
+ */
+static int bridge_brd_delete(struct bridge_dev_context *hDevContext)
+{
+ int status = 0;
+ struct bridge_dev_context *dev_context = hDevContext;
+ struct pg_table_attrs *pt_attrs;
+ int clk_status;
+ struct dspbridge_platform_data *pdata =
+ omap_dspbridge_dev->dev.platform_data;
+
+ if (dev_context->dw_brd_state == BRD_STOPPED)
+ return status;
+
+	/* As per the TRM, it is advised to first drive
+	 * the IVA2 to 'Standby' mode before turning off the clocks. This is
+	 * to ensure that there are no pending L3 or other transactions from
+	 * the IVA2 */
+ status = sleep_dsp(dev_context, PWR_EMERGENCYDEEPSLEEP, NULL);
+ clk_status = dsp_clk_disable(DSP_CLK_IVA2);
+
+ /* Release the Ext Base virtual Address as the next DSP Program
+ * may have a different load address */
+ if (dev_context->dw_dsp_ext_base_addr)
+ dev_context->dw_dsp_ext_base_addr = 0;
+
+ dev_context->dw_brd_state = BRD_STOPPED; /* update board state */
+
+ /* This is a good place to clear the MMU page tables as well */
+ if (dev_context->pt_attrs) {
+ pt_attrs = dev_context->pt_attrs;
+ memset((u8 *) pt_attrs->l1_base_va, 0x00, pt_attrs->l1_size);
+ memset((u8 *) pt_attrs->l2_base_va, 0x00, pt_attrs->l2_size);
+ memset((u8 *) pt_attrs->pg_info, 0x00,
+ (pt_attrs->l2_num_pages * sizeof(struct page_info)));
+ }
+	/* Disable the mailbox interrupts */
+ if (dev_context->mbox) {
+ omap_mbox_disable_irq(dev_context->mbox, IRQ_RX);
+ omap_mbox_put(dev_context->mbox);
+ dev_context->mbox = NULL;
+ }
+ /* Reset IVA2 clocks*/
+ (*pdata->dsp_prm_write)(OMAP3430_RST1_IVA2_MASK | OMAP3430_RST2_IVA2_MASK |
+ OMAP3430_RST3_IVA2_MASK, OMAP3430_IVA2_MOD, OMAP2_RM_RSTCTRL);
+
+ return status;
+}
+
+/*
+ * ======== bridge_brd_status ========
+ * Returns the board status.
+ */
+static int bridge_brd_status(struct bridge_dev_context *hDevContext,
+ int *pdwState)
+{
+ struct bridge_dev_context *dev_context = hDevContext;
+ *pdwState = dev_context->dw_brd_state;
+ return 0;
+}
+
+/*
+ * ======== bridge_brd_write ========
+ * Copies the buffers to DSP internal or external memory.
+ */
+static int bridge_brd_write(struct bridge_dev_context *hDevContext,
+ IN u8 *pbHostBuf, u32 dwDSPAddr,
+ u32 ul_num_bytes, u32 ulMemType)
+{
+ int status = 0;
+ struct bridge_dev_context *dev_context = hDevContext;
+
+ if (dwDSPAddr < dev_context->dw_dsp_start_add) {
+ status = -EPERM;
+ return status;
+ }
+ if ((dwDSPAddr - dev_context->dw_dsp_start_add) <
+ dev_context->dw_internal_size) {
+ status = write_dsp_data(hDevContext, pbHostBuf, dwDSPAddr,
+ ul_num_bytes, ulMemType);
+ } else {
+ status = write_ext_dsp_data(dev_context, pbHostBuf, dwDSPAddr,
+ ul_num_bytes, ulMemType, false);
+ }
+
+ return status;
+}
+
+/*
+ * ======== bridge_dev_create ========
+ * Creates a driver object. Puts DSP in self loop.
+ */
+static int bridge_dev_create(OUT struct bridge_dev_context
+ **ppDevContext,
+ struct dev_object *hdev_obj,
+ IN struct cfg_hostres *pConfig)
+{
+ int status = 0;
+ struct bridge_dev_context *dev_context = NULL;
+ s32 entry_ndx;
+ struct cfg_hostres *resources = pConfig;
+ struct pg_table_attrs *pt_attrs;
+ u32 pg_tbl_pa;
+ u32 pg_tbl_va;
+ u32 align_size;
+ struct drv_data *drv_datap = dev_get_drvdata(bridge);
+
+ /* Allocate and initialize a data structure to contain the bridge driver
+ * state, which becomes the context for later calls into this driver */
+ dev_context = kzalloc(sizeof(struct bridge_dev_context), GFP_KERNEL);
+ if (!dev_context) {
+ status = -ENOMEM;
+ goto func_end;
+ }
+
+ dev_context->dw_dsp_start_add = (u32) OMAP_GEM_BASE;
+ dev_context->dw_self_loop = (u32) NULL;
+ dev_context->dsp_per_clks = 0;
+ dev_context->dw_internal_size = OMAP_DSP_SIZE;
+ /* Clear dev context MMU table entries.
+ * These get set on bridge_io_on_loaded() call after program loaded. */
+ for (entry_ndx = 0; entry_ndx < BRDIOCTL_NUMOFMMUTLB; entry_ndx++) {
+ dev_context->atlb_entry[entry_ndx].ul_gpp_pa =
+ dev_context->atlb_entry[entry_ndx].ul_dsp_va = 0;
+ }
+ dev_context->num_tlb_entries = 0;
+ dev_context->dw_dsp_base_addr = (u32) MEM_LINEAR_ADDRESS((void *)
+ (pConfig->
+ dw_mem_base
+ [3]),
+ pConfig->
+ dw_mem_length
+ [3]);
+ if (!dev_context->dw_dsp_base_addr)
+ status = -EPERM;
+
+ pt_attrs = kzalloc(sizeof(struct pg_table_attrs), GFP_KERNEL);
+ if (pt_attrs != NULL) {
+		/* Assuming that we use only the DSP's memory map up to
+		 * 0x4000:0000, we need only 1024 L1 entries of 4 bytes
+		 * each, i.e. an L1 size of 4K */
+ pt_attrs->l1_size = 0x1000;
+ align_size = pt_attrs->l1_size;
+		/* Alignment sizes are expected to be a power of 2 */
+		/* We want the allocation aligned on the L1 table size */
+ pg_tbl_va = (u32) mem_alloc_phys_mem(pt_attrs->l1_size,
+ align_size, &pg_tbl_pa);
+
+ /* Check if the PA is aligned for us */
+ if ((pg_tbl_pa) & (align_size - 1)) {
+			/* PA not aligned to page table size;
+			 * allocate more and align manually */
+ mem_free_phys_mem((void *)pg_tbl_va, pg_tbl_pa,
+ pt_attrs->l1_size);
+			/* We want the allocation aligned on the L1 table size */
+ pg_tbl_va =
+ (u32) mem_alloc_phys_mem((pt_attrs->l1_size) * 2,
+ align_size, &pg_tbl_pa);
+ /* We should be able to get aligned table now */
+ pt_attrs->l1_tbl_alloc_pa = pg_tbl_pa;
+ pt_attrs->l1_tbl_alloc_va = pg_tbl_va;
+ pt_attrs->l1_tbl_alloc_sz = pt_attrs->l1_size * 2;
+ /* Align the PA to the next 'align' boundary */
+ pt_attrs->l1_base_pa =
+ ((pg_tbl_pa) +
+ (align_size - 1)) & (~(align_size - 1));
+ pt_attrs->l1_base_va =
+ pg_tbl_va + (pt_attrs->l1_base_pa - pg_tbl_pa);
+ } else {
+ /* We got aligned PA, cool */
+ pt_attrs->l1_tbl_alloc_pa = pg_tbl_pa;
+ pt_attrs->l1_tbl_alloc_va = pg_tbl_va;
+ pt_attrs->l1_tbl_alloc_sz = pt_attrs->l1_size;
+ pt_attrs->l1_base_pa = pg_tbl_pa;
+ pt_attrs->l1_base_va = pg_tbl_va;
+ }
+ if (pt_attrs->l1_base_va)
+ memset((u8 *) pt_attrs->l1_base_va, 0x00,
+ pt_attrs->l1_size);
+
+ /* number of L2 page tables = DMM pool used + SHMMEM +EXTMEM +
+ * L4 pages */
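+		/* Each coarse (L2) table maps 1 MB of DSP virtual space, so
+		 * DMMPOOLSIZE >> 20 tables cover the DMM pool; the extra 6
+		 * cover the SHM, EXT and L4 regions */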
+ pt_attrs->l2_num_pages = ((DMMPOOLSIZE >> 20) + 6);
+ pt_attrs->l2_size = HW_MMU_COARSE_PAGE_SIZE *
+ pt_attrs->l2_num_pages;
+ align_size = 4; /* Make it u32 aligned */
+		/* Allocate physically contiguous memory for the L2 tables */
+ pg_tbl_va = (u32) mem_alloc_phys_mem(pt_attrs->l2_size,
+ align_size, &pg_tbl_pa);
+ pt_attrs->l2_tbl_alloc_pa = pg_tbl_pa;
+ pt_attrs->l2_tbl_alloc_va = pg_tbl_va;
+ pt_attrs->l2_tbl_alloc_sz = pt_attrs->l2_size;
+ pt_attrs->l2_base_pa = pg_tbl_pa;
+ pt_attrs->l2_base_va = pg_tbl_va;
+
+ if (pt_attrs->l2_base_va)
+ memset((u8 *) pt_attrs->l2_base_va, 0x00,
+ pt_attrs->l2_size);
+
+ pt_attrs->pg_info = kzalloc(pt_attrs->l2_num_pages *
+ sizeof(struct page_info), GFP_KERNEL);
+ dev_dbg(bridge,
+ "L1 pa %x, va %x, size %x\n L2 pa %x, va "
+ "%x, size %x\n", pt_attrs->l1_base_pa,
+ pt_attrs->l1_base_va, pt_attrs->l1_size,
+ pt_attrs->l2_base_pa, pt_attrs->l2_base_va,
+ pt_attrs->l2_size);
+ dev_dbg(bridge, "pt_attrs %p L2 NumPages %x pg_info %p\n",
+ pt_attrs, pt_attrs->l2_num_pages, pt_attrs->pg_info);
+ }
+ if ((pt_attrs != NULL) && (pt_attrs->l1_base_va != 0) &&
+ (pt_attrs->l2_base_va != 0) && (pt_attrs->pg_info != NULL))
+ dev_context->pt_attrs = pt_attrs;
+ else
+ status = -ENOMEM;
+
+ if (DSP_SUCCEEDED(status)) {
+ spin_lock_init(&pt_attrs->pg_lock);
+ dev_context->tc_word_swap_on = drv_datap->tc_wordswapon;
+
+ /* Set the Clock Divisor for the DSP module */
+ udelay(5);
+ /* MMU address is obtained from the host
+ * resources struct */
+ dev_context->dw_dsp_mmu_base = resources->dw_dmmu_base;
+ }
+ if (DSP_SUCCEEDED(status)) {
+ dev_context->hdev_obj = hdev_obj;
+ dev_context->ul_int_mask = 0;
+ /* Store current board state. */
+ dev_context->dw_brd_state = BRD_STOPPED;
+ dev_context->resources = resources;
+ /* Return ptr to our device state to the DSP API for storage */
+ *ppDevContext = dev_context;
+ } else {
+ if (pt_attrs != NULL) {
+ kfree(pt_attrs->pg_info);
+
+ if (pt_attrs->l2_tbl_alloc_va) {
+ mem_free_phys_mem((void *)
+ pt_attrs->l2_tbl_alloc_va,
+ pt_attrs->l2_tbl_alloc_pa,
+ pt_attrs->l2_tbl_alloc_sz);
+ }
+ if (pt_attrs->l1_tbl_alloc_va) {
+ mem_free_phys_mem((void *)
+ pt_attrs->l1_tbl_alloc_va,
+ pt_attrs->l1_tbl_alloc_pa,
+ pt_attrs->l1_tbl_alloc_sz);
+ }
+ }
+ kfree(pt_attrs);
+ kfree(dev_context);
+ }
+func_end:
+ return status;
+}
+
+/*
+ * ======== bridge_dev_ctrl ========
+ * Receives device specific commands.
+ */
+static int bridge_dev_ctrl(struct bridge_dev_context *dev_context,
+ u32 dw_cmd, IN OUT void *pargs)
+{
+ int status = 0;
+ struct bridge_ioctl_extproc *pa_ext_proc =
+ (struct bridge_ioctl_extproc *)pargs;
+ s32 ndx;
+
+ switch (dw_cmd) {
+ case BRDIOCTL_CHNLREAD:
+ break;
+ case BRDIOCTL_CHNLWRITE:
+ break;
+ case BRDIOCTL_SETMMUCONFIG:
+ /* store away dsp-mmu setup values for later use */
+ for (ndx = 0; ndx < BRDIOCTL_NUMOFMMUTLB; ndx++, pa_ext_proc++)
+ dev_context->atlb_entry[ndx] = *pa_ext_proc;
+ break;
+ case BRDIOCTL_DEEPSLEEP:
+ case BRDIOCTL_EMERGENCYSLEEP:
+		/* Currently only DSP Idle is supported; needs to be updated
+		 * for later releases */
+ status = sleep_dsp(dev_context, PWR_DEEPSLEEP, pargs);
+ break;
+ case BRDIOCTL_WAKEUP:
+ status = wake_dsp(dev_context, pargs);
+ break;
+ case BRDIOCTL_CLK_CTRL:
+ status = 0;
+ /* Looking For Baseport Fix for Clocks */
+ status = dsp_peripheral_clk_ctrl(dev_context, pargs);
+ break;
+ case BRDIOCTL_PWR_HIBERNATE:
+ status = handle_hibernation_from_dsp(dev_context);
+ break;
+ case BRDIOCTL_PRESCALE_NOTIFY:
+ status = pre_scale_dsp(dev_context, pargs);
+ break;
+ case BRDIOCTL_POSTSCALE_NOTIFY:
+ status = post_scale_dsp(dev_context, pargs);
+ break;
+ case BRDIOCTL_CONSTRAINT_REQUEST:
+ status = handle_constraints_set(dev_context, pargs);
+ break;
+ default:
+ status = -EPERM;
+ break;
+ }
+ return status;
+}
+
+/*
+ * ======== bridge_dev_destroy ========
+ * Destroys the driver object.
+ */
+static int bridge_dev_destroy(struct bridge_dev_context *hDevContext)
+{
+ struct pg_table_attrs *pt_attrs;
+ int status = 0;
+ struct bridge_dev_context *dev_context = (struct bridge_dev_context *)
+ hDevContext;
+ struct cfg_hostres *host_res;
+ u32 shm_size;
+ struct drv_data *drv_datap = dev_get_drvdata(bridge);
+
+ /* It should never happen */
+ if (!hDevContext)
+ return -EFAULT;
+
+ /* first put the device to stop state */
+ bridge_brd_delete(dev_context);
+ if (dev_context->pt_attrs) {
+ pt_attrs = dev_context->pt_attrs;
+ kfree(pt_attrs->pg_info);
+
+ if (pt_attrs->l2_tbl_alloc_va) {
+ mem_free_phys_mem((void *)pt_attrs->l2_tbl_alloc_va,
+ pt_attrs->l2_tbl_alloc_pa,
+ pt_attrs->l2_tbl_alloc_sz);
+ }
+ if (pt_attrs->l1_tbl_alloc_va) {
+ mem_free_phys_mem((void *)pt_attrs->l1_tbl_alloc_va,
+ pt_attrs->l1_tbl_alloc_pa,
+ pt_attrs->l1_tbl_alloc_sz);
+ }
+ kfree(pt_attrs);
+
+ }
+
+ if (dev_context->resources) {
+ host_res = dev_context->resources;
+ shm_size = drv_datap->shm_size;
+ if (shm_size >= 0x10000) {
+ if ((host_res->dw_mem_base[1]) &&
+ (host_res->dw_mem_phys[1])) {
+ mem_free_phys_mem((void *)
+ host_res->dw_mem_base
+ [1],
+ host_res->dw_mem_phys
+ [1], shm_size);
+ }
+ } else {
+ dev_dbg(bridge, "%s: Error getting shm size "
+ "from registry: %x. Not calling "
+ "mem_free_phys_mem\n", __func__,
+ status);
+ }
+ host_res->dw_mem_base[1] = 0;
+ host_res->dw_mem_phys[1] = 0;
+
+ if (host_res->dw_mem_base[0])
+ iounmap((void *)host_res->dw_mem_base[0]);
+ if (host_res->dw_mem_base[2])
+ iounmap((void *)host_res->dw_mem_base[2]);
+ if (host_res->dw_mem_base[3])
+ iounmap((void *)host_res->dw_mem_base[3]);
+ if (host_res->dw_mem_base[4])
+ iounmap((void *)host_res->dw_mem_base[4]);
+ if (host_res->dw_dmmu_base)
+ iounmap(host_res->dw_dmmu_base);
+ if (host_res->dw_per_base)
+ iounmap(host_res->dw_per_base);
+ if (host_res->dw_per_pm_base)
+ iounmap((void *)host_res->dw_per_pm_base);
+ if (host_res->dw_core_pm_base)
+ iounmap((void *)host_res->dw_core_pm_base);
+ if (host_res->dw_sys_ctrl_base)
+ iounmap(host_res->dw_sys_ctrl_base);
+
+ host_res->dw_mem_base[0] = (u32) NULL;
+ host_res->dw_mem_base[2] = (u32) NULL;
+ host_res->dw_mem_base[3] = (u32) NULL;
+ host_res->dw_mem_base[4] = (u32) NULL;
+ host_res->dw_dmmu_base = NULL;
+ host_res->dw_sys_ctrl_base = NULL;
+
+ kfree(host_res);
+ }
+
+ /* Free the driver's device context: */
+ kfree(drv_datap->base_img);
+ kfree(drv_datap);
+ dev_set_drvdata(bridge, NULL);
+ kfree((void *)hDevContext);
+ return status;
+}
+
+static int bridge_brd_mem_copy(struct bridge_dev_context *hDevContext,
+ u32 ulDspDestAddr, u32 ulDspSrcAddr,
+ u32 ul_num_bytes, u32 ulMemType)
+{
+ int status = 0;
+ u32 src_addr = ulDspSrcAddr;
+ u32 dest_addr = ulDspDestAddr;
+ u32 copy_bytes = 0;
+ u32 total_bytes = ul_num_bytes;
+ u8 host_buf[BUFFERSIZE];
+ struct bridge_dev_context *dev_context = hDevContext;
+ while ((total_bytes > 0) && DSP_SUCCEEDED(status)) {
+ copy_bytes =
+ total_bytes > BUFFERSIZE ? BUFFERSIZE : total_bytes;
+ /* Read from External memory */
+ status = read_ext_dsp_data(hDevContext, host_buf, src_addr,
+ copy_bytes, ulMemType);
+ if (DSP_SUCCEEDED(status)) {
+ if (dest_addr < (dev_context->dw_dsp_start_add +
+ dev_context->dw_internal_size)) {
+ /* Write to Internal memory */
+ status = write_dsp_data(hDevContext, host_buf,
+ dest_addr, copy_bytes,
+ ulMemType);
+ } else {
+ /* Write to External memory */
+ status =
+ write_ext_dsp_data(hDevContext, host_buf,
+ dest_addr, copy_bytes,
+ ulMemType, false);
+ }
+ }
+ total_bytes -= copy_bytes;
+ src_addr += copy_bytes;
+ dest_addr += copy_bytes;
+ }
+ return status;
+}
+
+/* Unlike bridge_brd_write, this memory write does not halt the DSP */
+static int bridge_brd_mem_write(struct bridge_dev_context *hDevContext,
+ IN u8 *pbHostBuf, u32 dwDSPAddr,
+ u32 ul_num_bytes, u32 ulMemType)
+{
+ int status = 0;
+ struct bridge_dev_context *dev_context = hDevContext;
+ u32 ul_remain_bytes = 0;
+ u32 ul_bytes = 0;
+ ul_remain_bytes = ul_num_bytes;
+ while (ul_remain_bytes > 0 && DSP_SUCCEEDED(status)) {
+ ul_bytes =
+ ul_remain_bytes > BUFFERSIZE ? BUFFERSIZE : ul_remain_bytes;
+ if (dwDSPAddr < (dev_context->dw_dsp_start_add +
+ dev_context->dw_internal_size)) {
+ status =
+ write_dsp_data(hDevContext, pbHostBuf, dwDSPAddr,
+ ul_bytes, ulMemType);
+ } else {
+ status = write_ext_dsp_data(hDevContext, pbHostBuf,
+ dwDSPAddr, ul_bytes,
+ ulMemType, true);
+ }
+ ul_remain_bytes -= ul_bytes;
+ dwDSPAddr += ul_bytes;
+ pbHostBuf = pbHostBuf + ul_bytes;
+ }
+ return status;
+}
+
+/*
+ * ======== bridge_brd_mem_map ========
+ *      This function maps an MPU buffer to the DSP address space. It performs
+ *  linear to physical address translation if required. It translates each
+ *  page since linear addresses can be physically non-contiguous.
+ *  All address & size arguments are assumed to be page aligned (in proc.c).
+ *
+ * TODO: Disable MMU while updating the page tables (but that'll stall DSP)
+ */
+static int bridge_brd_mem_map(struct bridge_dev_context *hDevContext,
+ u32 ul_mpu_addr, u32 ulVirtAddr,
+ u32 ul_num_bytes, u32 ul_map_attr,
+ struct page **mapped_pages)
+{
+ u32 attrs;
+ int status = 0;
+ struct bridge_dev_context *dev_context = hDevContext;
+ struct hw_mmu_map_attrs_t hw_attrs;
+ struct vm_area_struct *vma;
+ struct mm_struct *mm = current->mm;
+ u32 write = 0;
+ u32 num_usr_pgs = 0;
+ struct page *mapped_page, *pg;
+ s32 pg_num;
+ u32 va = ulVirtAddr;
+ struct task_struct *curr_task = current;
+ u32 pg_i = 0;
+ u32 mpu_addr, pa;
+
+ dev_dbg(bridge,
+ "%s hDevCtxt %p, pa %x, va %x, size %x, ul_map_attr %x\n",
+ __func__, hDevContext, ul_mpu_addr, ulVirtAddr, ul_num_bytes,
+ ul_map_attr);
+ if (ul_num_bytes == 0)
+ return -EINVAL;
+
+ if (ul_map_attr & DSP_MAP_DIR_MASK) {
+ attrs = ul_map_attr;
+ } else {
+ /* Assign default attributes */
+ attrs = ul_map_attr | (DSP_MAPVIRTUALADDR | DSP_MAPELEMSIZE16);
+ }
+ /* Take mapping properties */
+ if (attrs & DSP_MAPBIGENDIAN)
+ hw_attrs.endianism = HW_BIG_ENDIAN;
+ else
+ hw_attrs.endianism = HW_LITTLE_ENDIAN;
+
+ hw_attrs.mixed_size = (enum hw_mmu_mixed_size_t)
+ ((attrs & DSP_MAPMIXEDELEMSIZE) >> 2);
+ /* Ignore element_size if mixed_size is enabled */
+ if (hw_attrs.mixed_size == 0) {
+ if (attrs & DSP_MAPELEMSIZE8) {
+ /* Size is 8 bit */
+ hw_attrs.element_size = HW_ELEM_SIZE8BIT;
+ } else if (attrs & DSP_MAPELEMSIZE16) {
+ /* Size is 16 bit */
+ hw_attrs.element_size = HW_ELEM_SIZE16BIT;
+ } else if (attrs & DSP_MAPELEMSIZE32) {
+ /* Size is 32 bit */
+ hw_attrs.element_size = HW_ELEM_SIZE32BIT;
+ } else if (attrs & DSP_MAPELEMSIZE64) {
+ /* Size is 64 bit */
+ hw_attrs.element_size = HW_ELEM_SIZE64BIT;
+ } else {
+ /*
+ * Mixedsize isn't enabled, so size can't be
+ * zero here
+ */
+ return -EINVAL;
+ }
+ }
+ if (attrs & DSP_MAPDONOTLOCK)
+ hw_attrs.donotlockmpupage = 1;
+ else
+ hw_attrs.donotlockmpupage = 0;
+
+ if (attrs & DSP_MAPVMALLOCADDR) {
+ return mem_map_vmalloc(hDevContext, ul_mpu_addr, ulVirtAddr,
+ ul_num_bytes, &hw_attrs);
+ }
+ /*
+ * Do OS-specific user-va to pa translation.
+ * Combine physically contiguous regions to reduce TLBs.
+ * Pass the translated pa to pte_update.
+ */
+ if ((attrs & DSP_MAPPHYSICALADDR)) {
+ status = pte_update(dev_context, ul_mpu_addr, ulVirtAddr,
+ ul_num_bytes, &hw_attrs);
+ goto func_cont;
+ }
+
+ /*
+ * Important Note: ul_mpu_addr is mapped from user application process
+ * to current process - it must lie completely within the current
+ * virtual memory address space in order to be of use to us here!
+ */
+ down_read(&mm->mmap_sem);
+ vma = find_vma(mm, ul_mpu_addr);
+ if (vma)
+ dev_dbg(bridge,
+			"VMA for UserBuf: ul_mpu_addr=%x, ul_num_bytes=%x, "
+ "vm_start=%lx, vm_end=%lx, vm_flags=%lx\n", ul_mpu_addr,
+ ul_num_bytes, vma->vm_start, vma->vm_end,
+ vma->vm_flags);
+
+ /*
+ * It is observed that under some circumstances, the user buffer is
+ * spread across several VMAs. So loop through and check if the entire
+ * user buffer is covered
+ */
+ while ((vma) && (ul_mpu_addr + ul_num_bytes > vma->vm_end)) {
+ /* jump to the next VMA region */
+ vma = find_vma(mm, vma->vm_end + 1);
+ dev_dbg(bridge,
+ "VMA for UserBuf ul_mpu_addr=%x ul_num_bytes=%x, "
+ "vm_start=%lx, vm_end=%lx, vm_flags=%lx\n", ul_mpu_addr,
+ ul_num_bytes, vma->vm_start, vma->vm_end,
+ vma->vm_flags);
+ }
+ if (!vma) {
+ pr_err("%s: Failed to get VMA region for 0x%x (%d)\n",
+ __func__, ul_mpu_addr, ul_num_bytes);
+ status = -EINVAL;
+ up_read(&mm->mmap_sem);
+ goto func_cont;
+ }
+
+ if (vma->vm_flags & VM_IO) {
+ num_usr_pgs = ul_num_bytes / PG_SIZE4K;
+ mpu_addr = ul_mpu_addr;
+
+ /* Get the physical addresses for user buffer */
+ for (pg_i = 0; pg_i < num_usr_pgs; pg_i++) {
+ pa = user_va2_pa(mm, mpu_addr);
+ if (!pa) {
+ status = -EPERM;
+				pr_err("DSPBRIDGE: VM_IO mapping physical"
+					" address is invalid\n");
+ break;
+ }
+ if (pfn_valid(__phys_to_pfn(pa))) {
+ pg = PHYS_TO_PAGE(pa);
+ get_page(pg);
+ if (page_count(pg) < 1) {
+ pr_err("Bad page in VM_IO buffer\n");
+ bad_page_dump(pa, pg);
+ }
+ }
+ status = pte_set(dev_context->pt_attrs, pa,
+ va, HW_PAGE_SIZE4KB, &hw_attrs);
+ if (DSP_FAILED(status))
+ break;
+
+ va += HW_PAGE_SIZE4KB;
+ mpu_addr += HW_PAGE_SIZE4KB;
+ pa += HW_PAGE_SIZE4KB;
+ }
+ } else {
+ num_usr_pgs = ul_num_bytes / PG_SIZE4K;
+ if (vma->vm_flags & (VM_WRITE | VM_MAYWRITE))
+ write = 1;
+
+ for (pg_i = 0; pg_i < num_usr_pgs; pg_i++) {
+ pg_num = get_user_pages(curr_task, mm, ul_mpu_addr, 1,
+ write, 1, &mapped_page, NULL);
+ if (pg_num > 0) {
+ if (page_count(mapped_page) < 1) {
+					pr_err("Bad page count after doing"
+							" get_user_pages on"
+							" user buffer\n");
+ bad_page_dump(page_to_phys(mapped_page),
+ mapped_page);
+ }
+ status = pte_set(dev_context->pt_attrs,
+ page_to_phys(mapped_page), va,
+ HW_PAGE_SIZE4KB, &hw_attrs);
+ if (DSP_FAILED(status))
+ break;
+
+ if (mapped_pages)
+ mapped_pages[pg_i] = mapped_page;
+
+ va += HW_PAGE_SIZE4KB;
+ ul_mpu_addr += HW_PAGE_SIZE4KB;
+ } else {
+				pr_err("DSPBRIDGE: get_user_pages FAILED,"
+						" MPU addr = 0x%x,"
+						" vma->vm_flags = 0x%lx,"
+						" get_user_pages Err"
+						" Value = %d, Buffer"
+						" size=0x%x\n", ul_mpu_addr,
+ vma->vm_flags, pg_num, ul_num_bytes);
+ status = -EPERM;
+ break;
+ }
+ }
+ }
+ up_read(&mm->mmap_sem);
+func_cont:
+ if (DSP_SUCCEEDED(status)) {
+ status = 0;
+ } else {
+ /*
+		 * Roll back the mapped pages in case the mapping failed
+		 * part way through
+ */
+ if (pg_i) {
+ bridge_brd_mem_un_map(dev_context, ulVirtAddr,
+ (pg_i * PG_SIZE4K));
+ }
+ status = -EPERM;
+ }
+ /*
+ * In any case, flush the TLB
+	 * This is called from here instead of from pte_update to avoid unnecessary
+ * repetition while mapping non-contiguous physical regions of a virtual
+ * region
+ */
+ flush_all(dev_context);
+ dev_dbg(bridge, "%s status %x\n", __func__, status);
+ return status;
+}
+
+/*
+ * ======== bridge_brd_mem_un_map ========
+ * Invalidate the PTEs for the DSP VA block to be unmapped.
+ *
+ * PTEs of a mapped memory block are contiguous in any page table
+ * So, instead of looking up the PTE address for every 4K block,
+ * we clear consecutive PTEs until we unmap all the bytes
+ */
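+/*
+ * For example, a 64 KB region mapped with a single 64 KB L2 entry is
+ * unmapped with one PTE clear (releasing the sixteen underlying 4 KB
+ * pages), whereas the same region mapped as sixteen 4 KB entries needs
+ * sixteen individual PTE clears.
+ */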
+static int bridge_brd_mem_un_map(struct bridge_dev_context *hDevContext,
+ u32 ulVirtAddr, u32 ul_num_bytes)
+{
+ u32 l1_base_va;
+ u32 l2_base_va;
+ u32 l2_base_pa;
+ u32 l2_page_num;
+ u32 pte_val;
+ u32 pte_size;
+ u32 pte_count;
+ u32 pte_addr_l1;
+ u32 pte_addr_l2 = 0;
+ u32 rem_bytes;
+ u32 rem_bytes_l2;
+ u32 va_curr;
+ struct page *pg = NULL;
+ int status = 0;
+ struct bridge_dev_context *dev_context = hDevContext;
+ struct pg_table_attrs *pt = dev_context->pt_attrs;
+ u32 temp;
+ u32 paddr;
+ u32 numof4k_pages = 0;
+
+ va_curr = ulVirtAddr;
+ rem_bytes = ul_num_bytes;
+ rem_bytes_l2 = 0;
+ l1_base_va = pt->l1_base_va;
+ pte_addr_l1 = hw_mmu_pte_addr_l1(l1_base_va, va_curr);
+ dev_dbg(bridge, "%s hDevContext %p, va %x, NumBytes %x l1_base_va %x, "
+ "pte_addr_l1 %x\n", __func__, hDevContext, ulVirtAddr,
+ ul_num_bytes, l1_base_va, pte_addr_l1);
+
+ while (rem_bytes && (DSP_SUCCEEDED(status))) {
+ u32 va_curr_orig = va_curr;
+ /* Find whether the L1 PTE points to a valid L2 PT */
+ pte_addr_l1 = hw_mmu_pte_addr_l1(l1_base_va, va_curr);
+ pte_val = *(u32 *) pte_addr_l1;
+ pte_size = hw_mmu_pte_size_l1(pte_val);
+
+ if (pte_size != HW_MMU_COARSE_PAGE_SIZE)
+ goto skip_coarse_page;
+
+ /*
+ * Get the L2 PA from the L1 PTE, and find
+ * corresponding L2 VA
+ */
+ l2_base_pa = hw_mmu_pte_coarse_l1(pte_val);
+ l2_base_va = l2_base_pa - pt->l2_base_pa + pt->l2_base_va;
+ l2_page_num =
+ (l2_base_pa - pt->l2_base_pa) / HW_MMU_COARSE_PAGE_SIZE;
+ /*
+ * Find the L2 PTE address from which we will start
+ * clearing, the number of PTEs to be cleared on this
+ * page, and the size of VA space that needs to be
+ * cleared on this L2 page
+ */
+ pte_addr_l2 = hw_mmu_pte_addr_l2(l2_base_va, va_curr);
+ pte_count = pte_addr_l2 & (HW_MMU_COARSE_PAGE_SIZE - 1);
+ pte_count = (HW_MMU_COARSE_PAGE_SIZE - pte_count) / sizeof(u32);
+ if (rem_bytes < (pte_count * PG_SIZE4K))
+ pte_count = rem_bytes / PG_SIZE4K;
+ rem_bytes_l2 = pte_count * PG_SIZE4K;
+
+ /*
+ * Unmap the VA space on this L2 PT. A quicker way
+ * would be to clear pte_count entries starting from
+		 * pte_addr_l2. However, the code below checks that we don't
+ * clear invalid entries or less than 64KB for a 64KB
+ * entry. Similar checking is done for L1 PTEs too
+ * below
+ */
+ while (rem_bytes_l2 && (DSP_SUCCEEDED(status))) {
+ pte_val = *(u32 *) pte_addr_l2;
+ pte_size = hw_mmu_pte_size_l2(pte_val);
+ /* va_curr aligned to pte_size? */
+ if (pte_size == 0 || rem_bytes_l2 < pte_size ||
+ va_curr & (pte_size - 1)) {
+ status = -EPERM;
+ break;
+ }
+
+ /* Collect Physical addresses from VA */
+ paddr = (pte_val & ~(pte_size - 1));
+ if (pte_size == HW_PAGE_SIZE64KB)
+ numof4k_pages = 16;
+ else
+ numof4k_pages = 1;
+ temp = 0;
+ while (temp++ < numof4k_pages) {
+ if (!pfn_valid(__phys_to_pfn(paddr))) {
+ paddr += HW_PAGE_SIZE4KB;
+ continue;
+ }
+ pg = PHYS_TO_PAGE(paddr);
+ if (page_count(pg) < 1) {
+ pr_info("DSPBRIDGE: UNMAP function: "
+ "COUNT 0 FOR PA 0x%x, size = "
+ "0x%x\n", paddr, ul_num_bytes);
+ bad_page_dump(paddr, pg);
+ } else {
+ SetPageDirty(pg);
+ page_cache_release(pg);
+ }
+ paddr += HW_PAGE_SIZE4KB;
+ }
+ if (hw_mmu_pte_clear(pte_addr_l2, va_curr, pte_size)
+ == RET_FAIL) {
+ status = -EPERM;
+ goto EXIT_LOOP;
+ }
+
+ status = 0;
+ rem_bytes_l2 -= pte_size;
+ va_curr += pte_size;
+ pte_addr_l2 += (pte_size >> 12) * sizeof(u32);
+ }
+ spin_lock(&pt->pg_lock);
+ if (rem_bytes_l2 == 0) {
+ pt->pg_info[l2_page_num].num_entries -= pte_count;
+ if (pt->pg_info[l2_page_num].num_entries == 0) {
+ /*
+ * Clear the L1 PTE pointing to the L2 PT
+ */
+ if (hw_mmu_pte_clear(l1_base_va, va_curr_orig,
+ HW_MMU_COARSE_PAGE_SIZE) ==
+ RET_OK)
+ status = 0;
+ else {
+ status = -EPERM;
+ spin_unlock(&pt->pg_lock);
+ goto EXIT_LOOP;
+ }
+ }
+ rem_bytes -= pte_count * PG_SIZE4K;
+ } else
+ status = -EPERM;
+
+ spin_unlock(&pt->pg_lock);
+ continue;
+skip_coarse_page:
+ /* va_curr aligned to pte_size? */
+ /* pte_size = 1 MB or 16 MB */
+ if (pte_size == 0 || rem_bytes < pte_size ||
+ va_curr & (pte_size - 1)) {
+ status = -EPERM;
+ break;
+ }
+
+ if (pte_size == HW_PAGE_SIZE1MB)
+ numof4k_pages = 256;
+ else
+ numof4k_pages = 4096;
+ temp = 0;
+ /* Collect Physical addresses from VA */
+ paddr = (pte_val & ~(pte_size - 1));
+ while (temp++ < numof4k_pages) {
+ if (pfn_valid(__phys_to_pfn(paddr))) {
+ pg = PHYS_TO_PAGE(paddr);
+ if (page_count(pg) < 1) {
+ pr_info("DSPBRIDGE: UNMAP function: "
+ "COUNT 0 FOR PA 0x%x, size = "
+ "0x%x\n", paddr, ul_num_bytes);
+ bad_page_dump(paddr, pg);
+ } else {
+ SetPageDirty(pg);
+ page_cache_release(pg);
+ }
+ }
+ paddr += HW_PAGE_SIZE4KB;
+ }
+ if (hw_mmu_pte_clear(l1_base_va, va_curr, pte_size) == RET_OK) {
+ status = 0;
+ rem_bytes -= pte_size;
+ va_curr += pte_size;
+ } else {
+ status = -EPERM;
+ goto EXIT_LOOP;
+ }
+ }
+ /*
+	 * It is better to flush the TLB here, so that any stale entries
+	 * get flushed
+ */
+EXIT_LOOP:
+ flush_all(dev_context);
+ dev_dbg(bridge,
+ "%s: va_curr %x, pte_addr_l1 %x pte_addr_l2 %x rem_bytes %x,"
+ " rem_bytes_l2 %x status %x\n", __func__, va_curr, pte_addr_l1,
+ pte_addr_l2, rem_bytes, rem_bytes_l2, status);
+ return status;
+}
+
+/*
+ * ======== user_va2_pa ========
+ * Purpose:
+ * This function walks through the page tables to convert a userland
+ *      virtual address to a physical address
+ */
+static u32 user_va2_pa(struct mm_struct *mm, u32 address)
+{
+ pgd_t *pgd;
+ pmd_t *pmd;
+ pte_t *ptep, pte;
+
+ pgd = pgd_offset(mm, address);
+ if (!(pgd_none(*pgd) || pgd_bad(*pgd))) {
+ pmd = pmd_offset(pgd, address);
+ if (!(pmd_none(*pmd) || pmd_bad(*pmd))) {
+ ptep = pte_offset_map(pmd, address);
+ if (ptep) {
+ pte = *ptep;
+ if (pte_present(pte))
+ return pte & PAGE_MASK;
+ }
+ }
+ }
+
+ return 0;
+}
+
+/*
+ * ======== pte_update ========
+ * This function calculates the optimum page-aligned addresses and sizes
+ * Caller must pass page-aligned values
+ */
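+/*
+ * For example, mapping pa 0x87000000 to va 0x60000000 for 0x110000 bytes
+ * uses one 1 MB entry followed by one 64 KB entry: both addresses are
+ * 1 MB aligned but fewer than 16 MB remain, and after the 1 MB entry the
+ * remaining 0x10000 bytes are exactly one 64 KB page.
+ */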
+static int pte_update(struct bridge_dev_context *hDevContext, u32 pa,
+ u32 va, u32 size,
+ struct hw_mmu_map_attrs_t *map_attrs)
+{
+ u32 i;
+ u32 all_bits;
+ u32 pa_curr = pa;
+ u32 va_curr = va;
+ u32 num_bytes = size;
+ struct bridge_dev_context *dev_context = hDevContext;
+ int status = 0;
+ u32 page_size[] = { HW_PAGE_SIZE16MB, HW_PAGE_SIZE1MB,
+ HW_PAGE_SIZE64KB, HW_PAGE_SIZE4KB
+ };
+
+ while (num_bytes && DSP_SUCCEEDED(status)) {
+ /* To find the max. page size with which both PA & VA are
+ * aligned */
+ all_bits = pa_curr | va_curr;
+
+ for (i = 0; i < 4; i++) {
+ if ((num_bytes >= page_size[i]) && ((all_bits &
+ (page_size[i] -
+ 1)) == 0)) {
+ status =
+ pte_set(dev_context->pt_attrs, pa_curr,
+ va_curr, page_size[i], map_attrs);
+ pa_curr += page_size[i];
+ va_curr += page_size[i];
+ num_bytes -= page_size[i];
+ /* Don't try smaller sizes. Hopefully we have
+ * reached an address aligned to a bigger page
+ * size */
+ break;
+ }
+ }
+ }
+
+ return status;
+}
+
+/*
+ * ======== pte_set ========
+ * This function calculates PTE address (MPU virtual) to be updated
+ * It also manages the L2 page tables
+ */
+static int pte_set(struct pg_table_attrs *pt, u32 pa, u32 va,
+ u32 size, struct hw_mmu_map_attrs_t *attrs)
+{
+ u32 i;
+ u32 pte_val;
+ u32 pte_addr_l1;
+ u32 pte_size;
+ /* Base address of the PT that will be updated */
+ u32 pg_tbl_va;
+ u32 l1_base_va;
+ /* Compiler warns that the next three variables might be used
+ * uninitialized in this function. Doesn't seem so. Working around,
+	 * anyway. */
+ u32 l2_base_va = 0;
+ u32 l2_base_pa = 0;
+ u32 l2_page_num = 0;
+ int status = 0;
+
+ l1_base_va = pt->l1_base_va;
+ pg_tbl_va = l1_base_va;
+ if ((size == HW_PAGE_SIZE64KB) || (size == HW_PAGE_SIZE4KB)) {
+ /* Find whether the L1 PTE points to a valid L2 PT */
+ pte_addr_l1 = hw_mmu_pte_addr_l1(l1_base_va, va);
+ if (pte_addr_l1 <= (pt->l1_base_va + pt->l1_size)) {
+ pte_val = *(u32 *) pte_addr_l1;
+ pte_size = hw_mmu_pte_size_l1(pte_val);
+ } else {
+ return -EPERM;
+ }
+ spin_lock(&pt->pg_lock);
+ if (pte_size == HW_MMU_COARSE_PAGE_SIZE) {
+ /* Get the L2 PA from the L1 PTE, and find
+ * corresponding L2 VA */
+ l2_base_pa = hw_mmu_pte_coarse_l1(pte_val);
+ l2_base_va =
+ l2_base_pa - pt->l2_base_pa + pt->l2_base_va;
+ l2_page_num =
+ (l2_base_pa -
+ pt->l2_base_pa) / HW_MMU_COARSE_PAGE_SIZE;
+ } else if (pte_size == 0) {
+ /* L1 PTE is invalid. Allocate a L2 PT and
+ * point the L1 PTE to it */
+ /* Find a free L2 PT. */
+ for (i = 0; (i < pt->l2_num_pages) &&
+ (pt->pg_info[i].num_entries != 0); i++)
+				;
+ if (i < pt->l2_num_pages) {
+ l2_page_num = i;
+ l2_base_pa = pt->l2_base_pa + (l2_page_num *
+ HW_MMU_COARSE_PAGE_SIZE);
+ l2_base_va = pt->l2_base_va + (l2_page_num *
+ HW_MMU_COARSE_PAGE_SIZE);
+ /* Endianness attributes are ignored for
+ * HW_MMU_COARSE_PAGE_SIZE */
+ status =
+ hw_mmu_pte_set(l1_base_va, l2_base_pa, va,
+ HW_MMU_COARSE_PAGE_SIZE,
+ attrs);
+ } else {
+ status = -ENOMEM;
+ }
+ } else {
+ /* Found valid L1 PTE of another size.
+ * Should not overwrite it. */
+ status = -EPERM;
+ }
+ if (DSP_SUCCEEDED(status)) {
+ pg_tbl_va = l2_base_va;
+ if (size == HW_PAGE_SIZE64KB)
+ pt->pg_info[l2_page_num].num_entries += 16;
+ else
+ pt->pg_info[l2_page_num].num_entries++;
+ dev_dbg(bridge, "PTE: L2 BaseVa %x, BasePa %x, PageNum "
+ "%x, num_entries %x\n", l2_base_va,
+ l2_base_pa, l2_page_num,
+ pt->pg_info[l2_page_num].num_entries);
+ }
+ spin_unlock(&pt->pg_lock);
+ }
+ if (DSP_SUCCEEDED(status)) {
+ dev_dbg(bridge, "PTE: pg_tbl_va %x, pa %x, va %x, size %x\n",
+ pg_tbl_va, pa, va, size);
+ dev_dbg(bridge, "PTE: endianism %x, element_size %x, "
+ "mixed_size %x\n", attrs->endianism,
+ attrs->element_size, attrs->mixed_size);
+ status = hw_mmu_pte_set(pg_tbl_va, pa, va, size, attrs);
+ }
+
+ return status;
+}
+
+/* Memory map kernel VA -- memory allocated with vmalloc */
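+/*
+ * For instance, if vmalloc_to_page() reports physical pages at 0x81000000,
+ * 0x81001000 and 0x81003000, the first two are merged into one 8 KB region
+ * and passed to pte_update() in a single call; the third, discontiguous
+ * page is mapped by a separate call.
+ */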
+static int mem_map_vmalloc(struct bridge_dev_context *dev_context,
+ u32 ul_mpu_addr, u32 ulVirtAddr,
+ u32 ul_num_bytes,
+ struct hw_mmu_map_attrs_t *hw_attrs)
+{
+ int status = 0;
+ struct page *page[1];
+ u32 i;
+ u32 pa_curr;
+ u32 pa_next;
+ u32 va_curr;
+ u32 size_curr;
+ u32 num_pages;
+ u32 pa;
+ u32 num_of4k_pages;
+ u32 temp = 0;
+
+ /*
+ * Do Kernel va to pa translation.
+ * Combine physically contiguous regions to reduce TLBs.
+ * Pass the translated pa to pte_update.
+ */
+ num_pages = ul_num_bytes / PAGE_SIZE; /* PAGE_SIZE = OS page size */
+ i = 0;
+ va_curr = ul_mpu_addr;
+ page[0] = vmalloc_to_page((void *)va_curr);
+ pa_next = page_to_phys(page[0]);
+ while (DSP_SUCCEEDED(status) && (i < num_pages)) {
+ /*
+		 * Reuse pa_next from the previous iteration to avoid
+ * an extra va2pa call
+ */
+ pa_curr = pa_next;
+ size_curr = PAGE_SIZE;
+ /*
+ * If the next page is physically contiguous,
+ * map it with the current one by increasing
+ * the size of the region to be mapped
+ */
+ while (++i < num_pages) {
+ page[0] =
+ vmalloc_to_page((void *)(va_curr + size_curr));
+ pa_next = page_to_phys(page[0]);
+
+ if (pa_next == (pa_curr + size_curr))
+ size_curr += PAGE_SIZE;
+ else
+ break;
+
+ }
+ if (pa_next == 0) {
+ status = -ENOMEM;
+ break;
+ }
+ pa = pa_curr;
+ num_of4k_pages = size_curr / HW_PAGE_SIZE4KB;
+ while (temp++ < num_of4k_pages) {
+ get_page(PHYS_TO_PAGE(pa));
+ pa += HW_PAGE_SIZE4KB;
+ }
+ status = pte_update(dev_context, pa_curr, ulVirtAddr +
+ (va_curr - ul_mpu_addr), size_curr,
+ hw_attrs);
+ va_curr += size_curr;
+ }
+ if (DSP_SUCCEEDED(status))
+ status = 0;
+ else
+ status = -EPERM;
+
+ /*
+ * In any case, flush the TLB
+	 * This is called from here instead of from pte_update to avoid unnecessary
+ * repetition while mapping non-contiguous physical regions of a virtual
+ * region
+ */
+ flush_all(dev_context);
+ dev_dbg(bridge, "%s status %x\n", __func__, status);
+ return status;
+}
+
+/*
+ * ======== wait_for_start ========
+ * Wait for the signal from the DSP that it has started, or time out.
+ */
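+/*
+ * The sync word at dw_sync_addr is expected to be cleared by the DSP once
+ * it has started; the loop below polls it in 10 us steps for up to
+ * TIHELEN_ACKTIMEOUT iterations before giving up.
+ */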
+bool wait_for_start(struct bridge_dev_context *dev_context, u32 dw_sync_addr)
+{
+ u16 timeout = TIHELEN_ACKTIMEOUT;
+
+ /* Wait for response from board */
+ while (*((volatile u16 *)dw_sync_addr) && --timeout)
+ udelay(10);
+
+	/* If timed out: return false */
+	if (!timeout) {
+		pr_err("%s: Timed out waiting for DSP to start\n", __func__);
+		return false;
+	}
+	return true;
+}
diff --git a/drivers/staging/tidspbridge/core/tiomap3430_pwr.c b/drivers/staging/tidspbridge/core/tiomap3430_pwr.c
new file mode 100644
index 0000000..00ebc0b
--- /dev/null
+++ b/drivers/staging/tidspbridge/core/tiomap3430_pwr.c
@@ -0,0 +1,604 @@
+/*
+ * tiomap3430_pwr.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Implementation of DSP wake/sleep routines.
+ *
+ * Copyright (C) 2007-2008 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+/* ----------------------------------- DSP/BIOS Bridge */
+#include <dspbridge/dbdefs.h>
+#include <dspbridge/cfg.h>
+#include <dspbridge/drv.h>
+#include <dspbridge/io_sm.h>
+
+/* ----------------------------------- Platform Manager */
+#include <dspbridge/brddefs.h>
+#include <dspbridge/dev.h>
+#include <dspbridge/iodefs.h>
+
+/* ------------------------------------ Hardware Abstraction Layer */
+#include <hw_defs.h>
+#include <hw_mmu.h>
+
+#include <dspbridge/pwr_sh.h>
+
+/* ----------------------------------- Bridge Driver */
+#include <dspbridge/dspdeh.h>
+#include <dspbridge/wdt.h>
+
+/* ----------------------------------- specific to this file */
+#include "_tiomap.h"
+#include "_tiomap_pwr.h"
+#include <mach-omap2/prm-regbits-34xx.h>
+#include <mach-omap2/cm-regbits-34xx.h>
+
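+/* Max wait for an IVA2 power transition, in ms (polled in 10 ms steps) */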
+#define PWRSTST_TIMEOUT 200
+
+/*
+ * ======== handle_constraints_set ========
+ * Sets new DSP constraint
+ */
+int handle_constraints_set(struct bridge_dev_context *dev_context,
+ IN void *pargs)
+{
+#ifdef CONFIG_BRIDGE_DVFS
+ u32 *constraint_val;
+ struct dspbridge_platform_data *pdata =
+ omap_dspbridge_dev->dev.platform_data;
+
+ constraint_val = (u32 *) (pargs);
+ /* Read the target value requested by DSP */
+ dev_dbg(bridge, "OPP: %s opp requested = 0x%x\n", __func__,
+ (u32) *(constraint_val + 1));
+
+ /* Set the new opp value */
+ if (pdata->dsp_set_min_opp)
+ (*pdata->dsp_set_min_opp) ((u32) *(constraint_val + 1));
+#endif /* #ifdef CONFIG_BRIDGE_DVFS */
+ return 0;
+}
+
+/*
+ * ======== handle_hibernation_from_dsp ========
+ * Handle Hibernation requested from DSP
+ */
+int handle_hibernation_from_dsp(struct bridge_dev_context *dev_context)
+{
+ int status = 0;
+#ifdef CONFIG_PM
+ u16 timeout = PWRSTST_TIMEOUT / 10;
+ u32 pwr_state;
+#ifdef CONFIG_BRIDGE_DVFS
+ u32 opplevel;
+ struct io_mgr *hio_mgr;
+#endif
+ struct dspbridge_platform_data *pdata =
+ omap_dspbridge_dev->dev.platform_data;
+
+ pwr_state = (*pdata->dsp_prm_read)(OMAP3430_IVA2_MOD, OMAP2_PM_PWSTST) &
+ OMAP_POWERSTATEST_MASK;
+ /* Wait for DSP to move into OFF state */
+ while ((pwr_state != PWRDM_POWER_OFF) && --timeout) {
+ if (msleep_interruptible(10)) {
+ pr_err("Waiting for DSP OFF mode interrupted\n");
+ return -EPERM;
+ }
+ pwr_state = (*pdata->dsp_prm_read)(OMAP3430_IVA2_MOD,
+ OMAP2_PM_PWSTST) & OMAP_POWERSTATEST_MASK;
+ }
+ if (timeout == 0) {
+ pr_err("%s: Timed out waiting for DSP off mode\n", __func__);
+ status = -ETIMEDOUT;
+ return status;
+ } else {
+
+ /* Save mailbox settings */
+ omap_mbox_save_ctx(dev_context->mbox);
+
+ /* Turn off DSP Peripheral clocks and DSP Load monitor timer */
+ status = dsp_clock_disable_all(dev_context->dsp_per_clks);
+
+ /* Disable wdt on hibernation. */
+ dsp_wdt_enable(false);
+
+ if (DSP_SUCCEEDED(status)) {
+			/* Update the Bridge driver state */
+ dev_context->dw_brd_state = BRD_DSP_HIBERNATION;
+#ifdef CONFIG_BRIDGE_DVFS
+ status =
+ dev_get_io_mgr(dev_context->hdev_obj, &hio_mgr);
+ if (!hio_mgr) {
+ status = DSP_EHANDLE;
+ return status;
+ }
+ io_sh_msetting(hio_mgr, SHM_GETOPP, &opplevel);
+
+ /*
+ * Set the OPP to low level before moving to OFF
+ * mode
+ */
+ if (pdata->dsp_set_min_opp)
+ (*pdata->dsp_set_min_opp) (VDD1_OPP1);
+ status = 0;
+#endif /* CONFIG_BRIDGE_DVFS */
+ }
+ }
+#endif
+ return status;
+}
+
+/*
+ * ======== sleep_dsp ========
+ * Put DSP in low power consuming state.
+ */
+int sleep_dsp(struct bridge_dev_context *dev_context, IN u32 dw_cmd,
+ IN void *pargs)
+{
+ int status = 0;
+#ifdef CONFIG_PM
+#ifdef CONFIG_BRIDGE_NTFY_PWRERR
+ struct deh_mgr *hdeh_mgr;
+#endif /* CONFIG_BRIDGE_NTFY_PWRERR */
+ u16 timeout = PWRSTST_TIMEOUT / 10;
+ u32 pwr_state, target_pwr_state;
+ struct dspbridge_platform_data *pdata =
+ omap_dspbridge_dev->dev.platform_data;
+
+ /* Check if sleep code is valid */
+ if ((dw_cmd != PWR_DEEPSLEEP) && (dw_cmd != PWR_EMERGENCYDEEPSLEEP))
+ return -EINVAL;
+
+ switch (dev_context->dw_brd_state) {
+ case BRD_RUNNING:
+ omap_mbox_save_ctx(dev_context->mbox);
+ if (dsp_test_sleepstate == PWRDM_POWER_OFF) {
+ sm_interrupt_dsp(dev_context, MBX_PM_DSPHIBERNATE);
+ dev_dbg(bridge, "PM: %s - sent hibernate cmd to DSP\n",
+ __func__);
+ target_pwr_state = PWRDM_POWER_OFF;
+ } else {
+ sm_interrupt_dsp(dev_context, MBX_PM_DSPRETENTION);
+ target_pwr_state = PWRDM_POWER_RET;
+ }
+ break;
+ case BRD_RETENTION:
+ omap_mbox_save_ctx(dev_context->mbox);
+ if (dsp_test_sleepstate == PWRDM_POWER_OFF) {
+ sm_interrupt_dsp(dev_context, MBX_PM_DSPHIBERNATE);
+ target_pwr_state = PWRDM_POWER_OFF;
+ } else
+ return 0;
+ break;
+ case BRD_HIBERNATION:
+ case BRD_DSP_HIBERNATION:
+ /* Already in Hibernation, so just return */
+ dev_dbg(bridge, "PM: %s - DSP already in hibernation\n",
+ __func__);
+ return 0;
+ case BRD_STOPPED:
+ dev_dbg(bridge, "PM: %s - Board in STOP state\n", __func__);
+ return 0;
+ default:
+ dev_dbg(bridge, "PM: %s - Bridge in Illegal state\n", __func__);
+ return -EPERM;
+ }
+
+ /* Get the PRCM DSP power domain status */
+ pwr_state = (*pdata->dsp_prm_read)(OMAP3430_IVA2_MOD, OMAP2_PM_PWSTST) &
+ OMAP_POWERSTATEST_MASK;
+
+ /* Wait for DSP to move into target power state */
+ while ((pwr_state != target_pwr_state) && --timeout) {
+ if (msleep_interruptible(10)) {
+ pr_err("Waiting for DSP to Suspend interrupted\n");
+ return -EPERM;
+ }
+ pwr_state = (*pdata->dsp_prm_read)(OMAP3430_IVA2_MOD,
+ OMAP2_PM_PWSTST) & OMAP_POWERSTATEST_MASK;
+ }
+
+ if (!timeout) {
+ pr_err("%s: Timed out waiting for DSP off mode, state %x\n",
+ __func__, pwr_state);
+#ifdef CONFIG_BRIDGE_NTFY_PWRERR
+ dev_get_deh_mgr(dev_context->hdev_obj, &hdeh_mgr);
+ bridge_deh_notify(hdeh_mgr, DSP_PWRERROR, 0);
+#endif /* CONFIG_BRIDGE_NTFY_PWRERR */
+ return -ETIMEDOUT;
+ } else {
+		/* Update the Bridge driver state */
+ if (dsp_test_sleepstate == PWRDM_POWER_OFF)
+ dev_context->dw_brd_state = BRD_HIBERNATION;
+ else
+ dev_context->dw_brd_state = BRD_RETENTION;
+
+ /* Disable wdt on hibernation. */
+ dsp_wdt_enable(false);
+
+ /* Turn off DSP Peripheral clocks */
+ status = dsp_clock_disable_all(dev_context->dsp_per_clks);
+ if (DSP_FAILED(status))
+ return status;
+#ifdef CONFIG_BRIDGE_DVFS
+ else if (target_pwr_state == PWRDM_POWER_OFF) {
+ /*
+ * Set the OPP to low level before moving to OFF mode
+ */
+ if (pdata->dsp_set_min_opp)
+ (*pdata->dsp_set_min_opp) (VDD1_OPP1);
+ }
+#endif /* CONFIG_BRIDGE_DVFS */
+ }
+#endif /* CONFIG_PM */
+ return status;
+}
+
+/*
+ * ======== wake_dsp ========
+ * Wake up DSP from sleep.
+ */
+int wake_dsp(struct bridge_dev_context *dev_context, IN void *pargs)
+{
+ int status = 0;
+#ifdef CONFIG_PM
+
+ /* Check the board state, if it is not 'SLEEP' then return */
+ if (dev_context->dw_brd_state == BRD_RUNNING ||
+ dev_context->dw_brd_state == BRD_STOPPED) {
+		/* The board is already running or stopped, so there is no
+		 * sleep state to wake from; just return */
+ return 0;
+ }
+
+ /* Send a wakeup message to DSP */
+ sm_interrupt_dsp(dev_context, MBX_PM_DSPWAKEUP);
+
+	/* Set the device state to RUNNING */
+ dev_context->dw_brd_state = BRD_RUNNING;
+#endif /* CONFIG_PM */
+ return status;
+}
+
+/*
+ * ======== dsp_peripheral_clk_ctrl ========
+ * Enable/Disable the DSP peripheral clocks as needed.
+ */
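+/*
+ * The mailbox argument packs a clock identifier (masked with
+ * MBX_PM_CLK_IDMASK and matched against bpwr_clkid[]) together with a
+ * command field (shifted down by MBX_PM_CLK_CMDSHIFT); enable/disable
+ * commands also update the corresponding bit in dev_context->dsp_per_clks.
+ */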
+int dsp_peripheral_clk_ctrl(struct bridge_dev_context *dev_context,
+ IN void *pargs)
+{
+ u32 ext_clk = 0;
+ u32 ext_clk_id = 0;
+ u32 ext_clk_cmd = 0;
+ u32 clk_id_index = MBX_PM_MAX_RESOURCES;
+ u32 tmp_index;
+ u32 dsp_per_clks_before;
+ int status = 0;
+
+ dsp_per_clks_before = dev_context->dsp_per_clks;
+
+ ext_clk = (u32) *((u32 *) pargs);
+ ext_clk_id = ext_clk & MBX_PM_CLK_IDMASK;
+
+ /* process the power message -- TODO, keep it in a separate function */
+ for (tmp_index = 0; tmp_index < MBX_PM_MAX_RESOURCES; tmp_index++) {
+ if (ext_clk_id == bpwr_clkid[tmp_index]) {
+ clk_id_index = tmp_index;
+ break;
+ }
+ }
+	/* TODO -- Assert may be too hard a restriction here. Maybe we should
+	 * just return failure when the CLK ID does not match */
+ /* DBC_ASSERT(clk_id_index < MBX_PM_MAX_RESOURCES); */
+ if (clk_id_index == MBX_PM_MAX_RESOURCES) {
+		/* return with a more meaningful error code */
+ return -EPERM;
+ }
+ ext_clk_cmd = (ext_clk >> MBX_PM_CLK_CMDSHIFT) & MBX_PM_CLK_CMDMASK;
+ switch (ext_clk_cmd) {
+ case BPWR_DISABLE_CLOCK:
+ status = dsp_clk_disable(bpwr_clks[clk_id_index].clk);
+ dsp_clk_wakeup_event_ctrl(bpwr_clks[clk_id_index].clk_id,
+ false);
+ if (DSP_SUCCEEDED(status)) {
+ (dev_context->dsp_per_clks) &=
+ (~((u32) (1 << bpwr_clks[clk_id_index].clk)));
+ }
+ break;
+ case BPWR_ENABLE_CLOCK:
+ status = dsp_clk_enable(bpwr_clks[clk_id_index].clk);
+ dsp_clk_wakeup_event_ctrl(bpwr_clks[clk_id_index].clk_id, true);
+ if (DSP_SUCCEEDED(status))
+ (dev_context->dsp_per_clks) |=
+ (1 << bpwr_clks[clk_id_index].clk);
+ break;
+ default:
+ dev_dbg(bridge, "%s: Unsupported CMD\n", __func__);
+ /* unsupported cmd */
+ /* TODO -- provide support for AUTOIDLE Enable/Disable
+ * commands */
+ }
+ return status;
+}
+
+/*
+ * ======== pre_scale_dsp ========
+ * Sends prescale notification to DSP
+ *
+ */
+int pre_scale_dsp(struct bridge_dev_context *dev_context, IN void *pargs)
+{
+#ifdef CONFIG_BRIDGE_DVFS
+ u32 level;
+ u32 voltage_domain;
+
+ voltage_domain = *((u32 *) pargs);
+ level = *((u32 *) pargs + 1);
+
+ dev_dbg(bridge, "OPP: %s voltage_domain = %x, level = 0x%x\n",
+ __func__, voltage_domain, level);
+ if ((dev_context->dw_brd_state == BRD_HIBERNATION) ||
+ (dev_context->dw_brd_state == BRD_RETENTION) ||
+ (dev_context->dw_brd_state == BRD_DSP_HIBERNATION)) {
+		dev_dbg(bridge, "OPP: %s IVA in sleep. No message to DSP\n",
+			__func__);
+ return 0;
+ } else if ((dev_context->dw_brd_state == BRD_RUNNING)) {
+		/* Send a pre-notification to the DSP */
+ dev_dbg(bridge, "OPP: %s sent notification to DSP\n", __func__);
+ sm_interrupt_dsp(dev_context, MBX_PM_SETPOINT_PRENOTIFY);
+ return 0;
+ } else {
+ return -EPERM;
+ }
+#endif /* #ifdef CONFIG_BRIDGE_DVFS */
+ return 0;
+}
+
+/*
+ * ======== post_scale_dsp ========
+ * Sends postscale notification to DSP
+ *
+ */
+int post_scale_dsp(struct bridge_dev_context *dev_context,
+ IN void *pargs)
+{
+ int status = 0;
+#ifdef CONFIG_BRIDGE_DVFS
+ u32 level;
+ u32 voltage_domain;
+ struct io_mgr *hio_mgr;
+
+ status = dev_get_io_mgr(dev_context->hdev_obj, &hio_mgr);
+ if (!hio_mgr)
+ return -EFAULT;
+
+ voltage_domain = *((u32 *) pargs);
+ level = *((u32 *) pargs + 1);
+ dev_dbg(bridge, "OPP: %s voltage_domain = %x, level = 0x%x\n",
+ __func__, voltage_domain, level);
+ if ((dev_context->dw_brd_state == BRD_HIBERNATION) ||
+ (dev_context->dw_brd_state == BRD_RETENTION) ||
+ (dev_context->dw_brd_state == BRD_DSP_HIBERNATION)) {
+ /* Update the OPP value in shared memory */
+ io_sh_msetting(hio_mgr, SHM_CURROPP, &level);
+ dev_dbg(bridge, "OPP: %s IVA in sleep. Wrote to shm\n",
+ __func__);
+ } else if ((dev_context->dw_brd_state == BRD_RUNNING)) {
+ /* Update the OPP value in shared memory */
+ io_sh_msetting(hio_mgr, SHM_CURROPP, &level);
+ /* Send a post notification to DSP */
+ sm_interrupt_dsp(dev_context, MBX_PM_SETPOINT_POSTNOTIFY);
+ dev_dbg(bridge, "OPP: %s wrote to shm. Sent post notification "
+ "to DSP\n", __func__);
+ } else {
+ status = -EPERM;
+ }
+#endif /* #ifdef CONFIG_BRIDGE_DVFS */
+ return status;
+}
+
+void dsp_clk_wakeup_event_ctrl(u32 ClkId, bool enable)
+{
+ struct cfg_hostres *resources;
+ int status = 0;
+ u32 iva2_grpsel;
+ u32 mpu_grpsel;
+ struct dev_object *hdev_object = NULL;
+ struct bridge_dev_context *bridge_context = NULL;
+
+ hdev_object = (struct dev_object *)drv_get_first_dev_object();
+ if (!hdev_object)
+ return;
+
+ status = dev_get_bridge_context(hdev_object, &bridge_context);
+ if (!bridge_context)
+ return;
+
+ resources = bridge_context->resources;
+ if (!resources)
+ return;
+
+ switch (ClkId) {
+ case BPWR_GP_TIMER5:
+ iva2_grpsel = (u32) *((reg_uword32 *)
+ ((u32) (resources->dw_per_pm_base) +
+ 0xA8));
+ mpu_grpsel = (u32) *((reg_uword32 *)
+ ((u32) (resources->dw_per_pm_base) +
+ 0xA4));
+ if (enable) {
+ iva2_grpsel |= OMAP3430_GRPSEL_GPT5_MASK;
+ mpu_grpsel &= ~OMAP3430_GRPSEL_GPT5_MASK;
+ } else {
+ mpu_grpsel |= OMAP3430_GRPSEL_GPT5_MASK;
+ iva2_grpsel &= ~OMAP3430_GRPSEL_GPT5_MASK;
+ }
+ *((reg_uword32 *) ((u32) (resources->dw_per_pm_base) + 0xA8))
+ = iva2_grpsel;
+ *((reg_uword32 *) ((u32) (resources->dw_per_pm_base) + 0xA4))
+ = mpu_grpsel;
+ break;
+ case BPWR_GP_TIMER6:
+ iva2_grpsel = (u32) *((reg_uword32 *)
+ ((u32) (resources->dw_per_pm_base) +
+ 0xA8));
+ mpu_grpsel = (u32) *((reg_uword32 *)
+ ((u32) (resources->dw_per_pm_base) +
+ 0xA4));
+ if (enable) {
+ iva2_grpsel |= OMAP3430_GRPSEL_GPT6_MASK;
+ mpu_grpsel &= ~OMAP3430_GRPSEL_GPT6_MASK;
+ } else {
+ mpu_grpsel |= OMAP3430_GRPSEL_GPT6_MASK;
+ iva2_grpsel &= ~OMAP3430_GRPSEL_GPT6_MASK;
+ }
+ *((reg_uword32 *) ((u32) (resources->dw_per_pm_base) + 0xA8))
+ = iva2_grpsel;
+ *((reg_uword32 *) ((u32) (resources->dw_per_pm_base) + 0xA4))
+ = mpu_grpsel;
+ break;
+ case BPWR_GP_TIMER7:
+ iva2_grpsel = (u32) *((reg_uword32 *)
+ ((u32) (resources->dw_per_pm_base) +
+ 0xA8));
+ mpu_grpsel = (u32) *((reg_uword32 *)
+ ((u32) (resources->dw_per_pm_base) +
+ 0xA4));
+ if (enable) {
+ iva2_grpsel |= OMAP3430_GRPSEL_GPT7_MASK;
+ mpu_grpsel &= ~OMAP3430_GRPSEL_GPT7_MASK;
+ } else {
+ mpu_grpsel |= OMAP3430_GRPSEL_GPT7_MASK;
+ iva2_grpsel &= ~OMAP3430_GRPSEL_GPT7_MASK;
+ }
+ *((reg_uword32 *) ((u32) (resources->dw_per_pm_base) + 0xA8))
+ = iva2_grpsel;
+ *((reg_uword32 *) ((u32) (resources->dw_per_pm_base) + 0xA4))
+ = mpu_grpsel;
+ break;
+ case BPWR_GP_TIMER8:
+ iva2_grpsel = (u32) *((reg_uword32 *)
+ ((u32) (resources->dw_per_pm_base) +
+ 0xA8));
+ mpu_grpsel = (u32) *((reg_uword32 *)
+ ((u32) (resources->dw_per_pm_base) +
+ 0xA4));
+ if (enable) {
+ iva2_grpsel |= OMAP3430_GRPSEL_GPT8_MASK;
+ mpu_grpsel &= ~OMAP3430_GRPSEL_GPT8_MASK;
+ } else {
+ mpu_grpsel |= OMAP3430_GRPSEL_GPT8_MASK;
+ iva2_grpsel &= ~OMAP3430_GRPSEL_GPT8_MASK;
+ }
+ *((reg_uword32 *) ((u32) (resources->dw_per_pm_base) + 0xA8))
+ = iva2_grpsel;
+ *((reg_uword32 *) ((u32) (resources->dw_per_pm_base) + 0xA4))
+ = mpu_grpsel;
+ break;
+ case BPWR_MCBSP1:
+ iva2_grpsel = (u32) *((reg_uword32 *)
+ ((u32) (resources->dw_core_pm_base) +
+ 0xA8));
+ mpu_grpsel = (u32) *((reg_uword32 *)
+ ((u32) (resources->dw_core_pm_base) +
+ 0xA4));
+ if (enable) {
+ iva2_grpsel |= OMAP3430_GRPSEL_MCBSP1_MASK;
+ mpu_grpsel &= ~OMAP3430_GRPSEL_MCBSP1_MASK;
+ } else {
+ mpu_grpsel |= OMAP3430_GRPSEL_MCBSP1_MASK;
+ iva2_grpsel &= ~OMAP3430_GRPSEL_MCBSP1_MASK;
+ }
+ *((reg_uword32 *) ((u32) (resources->dw_core_pm_base) + 0xA8))
+ = iva2_grpsel;
+ *((reg_uword32 *) ((u32) (resources->dw_core_pm_base) + 0xA4))
+ = mpu_grpsel;
+ break;
+ case BPWR_MCBSP2:
+ iva2_grpsel = (u32) *((reg_uword32 *)
+ ((u32) (resources->dw_per_pm_base) +
+ 0xA8));
+ mpu_grpsel = (u32) *((reg_uword32 *)
+ ((u32) (resources->dw_per_pm_base) +
+ 0xA4));
+ if (enable) {
+ iva2_grpsel |= OMAP3430_GRPSEL_MCBSP2_MASK;
+ mpu_grpsel &= ~OMAP3430_GRPSEL_MCBSP2_MASK;
+ } else {
+ mpu_grpsel |= OMAP3430_GRPSEL_MCBSP2_MASK;
+ iva2_grpsel &= ~OMAP3430_GRPSEL_MCBSP2_MASK;
+ }
+ *((reg_uword32 *) ((u32) (resources->dw_per_pm_base) + 0xA8))
+ = iva2_grpsel;
+ *((reg_uword32 *) ((u32) (resources->dw_per_pm_base) + 0xA4))
+ = mpu_grpsel;
+ break;
+ case BPWR_MCBSP3:
+ iva2_grpsel = (u32) *((reg_uword32 *)
+ ((u32) (resources->dw_per_pm_base) +
+ 0xA8));
+ mpu_grpsel = (u32) *((reg_uword32 *)
+ ((u32) (resources->dw_per_pm_base) +
+ 0xA4));
+ if (enable) {
+ iva2_grpsel |= OMAP3430_GRPSEL_MCBSP3_MASK;
+ mpu_grpsel &= ~OMAP3430_GRPSEL_MCBSP3_MASK;
+ } else {
+ mpu_grpsel |= OMAP3430_GRPSEL_MCBSP3_MASK;
+ iva2_grpsel &= ~OMAP3430_GRPSEL_MCBSP3_MASK;
+ }
+ *((reg_uword32 *) ((u32) (resources->dw_per_pm_base) + 0xA8))
+ = iva2_grpsel;
+ *((reg_uword32 *) ((u32) (resources->dw_per_pm_base) + 0xA4))
+ = mpu_grpsel;
+ break;
+ case BPWR_MCBSP4:
+ iva2_grpsel = (u32) *((reg_uword32 *)
+ ((u32) (resources->dw_per_pm_base) +
+ 0xA8));
+ mpu_grpsel = (u32) *((reg_uword32 *)
+ ((u32) (resources->dw_per_pm_base) +
+ 0xA4));
+ if (enable) {
+ iva2_grpsel |= OMAP3430_GRPSEL_MCBSP4_MASK;
+ mpu_grpsel &= ~OMAP3430_GRPSEL_MCBSP4_MASK;
+ } else {
+ mpu_grpsel |= OMAP3430_GRPSEL_MCBSP4_MASK;
+ iva2_grpsel &= ~OMAP3430_GRPSEL_MCBSP4_MASK;
+ }
+ *((reg_uword32 *) ((u32) (resources->dw_per_pm_base) + 0xA8))
+ = iva2_grpsel;
+ *((reg_uword32 *) ((u32) (resources->dw_per_pm_base) + 0xA4))
+ = mpu_grpsel;
+ break;
+ case BPWR_MCBSP5:
+ iva2_grpsel = (u32) *((reg_uword32 *)
+ ((u32) (resources->dw_core_pm_base) +
+ 0xA8));
+ mpu_grpsel = (u32) *((reg_uword32 *)
+ ((u32) (resources->dw_core_pm_base) +
+ 0xA4));
+ if (enable) {
+ iva2_grpsel |= OMAP3430_GRPSEL_MCBSP5_MASK;
+ mpu_grpsel &= ~OMAP3430_GRPSEL_MCBSP5_MASK;
+ } else {
+ mpu_grpsel |= OMAP3430_GRPSEL_MCBSP5_MASK;
+ iva2_grpsel &= ~OMAP3430_GRPSEL_MCBSP5_MASK;
+ }
+ *((reg_uword32 *) ((u32) (resources->dw_core_pm_base) + 0xA8))
+ = iva2_grpsel;
+ *((reg_uword32 *) ((u32) (resources->dw_core_pm_base) + 0xA4))
+ = mpu_grpsel;
+ break;
+ }
+}
diff --git a/drivers/staging/tidspbridge/core/tiomap_io.c b/drivers/staging/tidspbridge/core/tiomap_io.c
new file mode 100644
index 0000000..3b2ea70
--- /dev/null
+++ b/drivers/staging/tidspbridge/core/tiomap_io.c
@@ -0,0 +1,458 @@
+/*
+ * tiomap_io.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Implementation for the io read/write routines.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+/* ----------------------------------- DSP/BIOS Bridge */
+#include <dspbridge/dbdefs.h>
+
+/* ----------------------------------- Trace & Debug */
+#include <dspbridge/dbc.h>
+
+/* ----------------------------------- Platform Manager */
+#include <dspbridge/dev.h>
+#include <dspbridge/drv.h>
+
+/* ----------------------------------- OS Adaptation Layer */
+#include <dspbridge/cfg.h>
+#include <dspbridge/wdt.h>
+
+/* ----------------------------------- specific to this file */
+#include "_tiomap.h"
+#include "_tiomap_pwr.h"
+#include "tiomap_io.h"
+
+static u32 ul_ext_base;
+static u32 ul_ext_end;
+
+static u32 shm0_end;
+static u32 ul_dyn_ext_base;
+static u32 ul_trace_sec_beg;
+static u32 ul_trace_sec_end;
+static u32 ul_shm_base_virt;
+
+bool symbols_reloaded = true;
+
+/*
+ * ======== read_ext_dsp_data ========
+ * Copies DSP external memory buffers to the host side buffers.
+ */
+int read_ext_dsp_data(struct bridge_dev_context *hDevContext,
+ OUT u8 *pbHostBuf, u32 dwDSPAddr,
+ u32 ul_num_bytes, u32 ulMemType)
+{
+ int status = 0;
+ struct bridge_dev_context *dev_context = hDevContext;
+ u32 offset;
+ u32 ul_tlb_base_virt = 0;
+ u32 ul_shm_offset_virt = 0;
+ u32 dw_ext_prog_virt_mem;
+ u32 dw_base_addr = dev_context->dw_dsp_ext_base_addr;
+ bool trace_read = false;
+
+ if (!ul_shm_base_virt) {
+ status = dev_get_symbol(dev_context->hdev_obj,
+ SHMBASENAME, &ul_shm_base_virt);
+ }
+ DBC_ASSERT(ul_shm_base_virt != 0);
+
+ /* Check if it is a read of Trace section */
+ if (DSP_SUCCEEDED(status) && !ul_trace_sec_beg) {
+ status = dev_get_symbol(dev_context->hdev_obj,
+ DSP_TRACESEC_BEG, &ul_trace_sec_beg);
+ }
+ DBC_ASSERT(ul_trace_sec_beg != 0);
+
+ if (DSP_SUCCEEDED(status) && !ul_trace_sec_end) {
+ status = dev_get_symbol(dev_context->hdev_obj,
+ DSP_TRACESEC_END, &ul_trace_sec_end);
+ }
+ DBC_ASSERT(ul_trace_sec_end != 0);
+
+ if (DSP_SUCCEEDED(status)) {
+ if ((dwDSPAddr <= ul_trace_sec_end) &&
+ (dwDSPAddr >= ul_trace_sec_beg))
+ trace_read = true;
+ }
+
+ /* If reading from TRACE, force remap/unmap */
+ if (trace_read && dw_base_addr) {
+ dw_base_addr = 0;
+ dev_context->dw_dsp_ext_base_addr = 0;
+ }
+
+ if (!dw_base_addr) {
+ /* Initialize ul_ext_base and ul_ext_end */
+ ul_ext_base = 0;
+ ul_ext_end = 0;
+
+ /* Get DYNEXT_BEG, EXT_BEG and EXT_END. */
+ if (DSP_SUCCEEDED(status) && !ul_dyn_ext_base) {
+ status = dev_get_symbol(dev_context->hdev_obj,
+ DYNEXTBASE, &ul_dyn_ext_base);
+ }
+ DBC_ASSERT(ul_dyn_ext_base != 0);
+
+ if (DSP_SUCCEEDED(status)) {
+ status = dev_get_symbol(dev_context->hdev_obj,
+ EXTBASE, &ul_ext_base);
+ }
+ DBC_ASSERT(ul_ext_base != 0);
+
+ if (DSP_SUCCEEDED(status)) {
+ status = dev_get_symbol(dev_context->hdev_obj,
+ EXTEND, &ul_ext_end);
+ }
+ DBC_ASSERT(ul_ext_end != 0);
+
+ /* Trace buffer is right after the shm SEG0,
+ * so set the base address to SHMBASE */
+ if (trace_read) {
+ ul_ext_base = ul_shm_base_virt;
+ ul_ext_end = ul_trace_sec_end;
+ }
+
+ DBC_ASSERT(ul_ext_end != 0);
+ DBC_ASSERT(ul_ext_end > ul_ext_base);
+
+ if (ul_ext_end < ul_ext_base)
+ status = -EPERM;
+
+ if (DSP_SUCCEEDED(status)) {
+ ul_tlb_base_virt =
+ dev_context->atlb_entry[0].ul_dsp_va * DSPWORDSIZE;
+ DBC_ASSERT(ul_tlb_base_virt <= ul_shm_base_virt);
+ dw_ext_prog_virt_mem =
+ dev_context->atlb_entry[0].ul_gpp_va;
+
+ if (!trace_read) {
+ ul_shm_offset_virt =
+ ul_shm_base_virt - ul_tlb_base_virt;
+ ul_shm_offset_virt +=
+ PG_ALIGN_HIGH(ul_ext_end - ul_dyn_ext_base +
+ 1, HW_PAGE_SIZE64KB);
+ dw_ext_prog_virt_mem -= ul_shm_offset_virt;
+ dw_ext_prog_virt_mem +=
+ (ul_ext_base - ul_dyn_ext_base);
+ dev_context->dw_dsp_ext_base_addr =
+ dw_ext_prog_virt_mem;
+
+ /*
+ * This dw_dsp_ext_base_addr will get cleared
+ * only when the board is stopped.
+ */
+ if (!dev_context->dw_dsp_ext_base_addr)
+ status = -EPERM;
+ }
+
+ dw_base_addr = dw_ext_prog_virt_mem;
+ }
+ }
+
+ if (!dw_base_addr || !ul_ext_base || !ul_ext_end)
+ status = -EPERM;
+
+ offset = dwDSPAddr - ul_ext_base;
+
+ if (DSP_SUCCEEDED(status))
+ memcpy(pbHostBuf, (u8 *) dw_base_addr + offset, ul_num_bytes);
+
+ return status;
+}
+
+/*
+ * ======== write_dsp_data ========
+ * Purpose:
+ * Copies buffers to the DSP internal/external memory.
+ */
+int write_dsp_data(struct bridge_dev_context *hDevContext,
+ IN u8 *pbHostBuf, u32 dwDSPAddr, u32 ul_num_bytes,
+ u32 ulMemType)
+{
+ u32 offset;
+ u32 dw_base_addr = hDevContext->dw_dsp_base_addr;
+ struct cfg_hostres *resources = hDevContext->resources;
+ int status = 0;
+ u32 base1, base2, base3;
+ base1 = OMAP_DSP_MEM1_SIZE;
+ base2 = OMAP_DSP_MEM2_BASE - OMAP_DSP_MEM1_BASE;
+ base3 = OMAP_DSP_MEM3_BASE - OMAP_DSP_MEM1_BASE;
+
+ if (!resources)
+ return -EPERM;
+
+ offset = dwDSPAddr - hDevContext->dw_dsp_start_add;
+ if (offset < base1) {
+ dw_base_addr = MEM_LINEAR_ADDRESS(resources->dw_mem_base[2],
+ resources->dw_mem_length[2]);
+ } else if (offset > base1 && offset < base2 + OMAP_DSP_MEM2_SIZE) {
+ dw_base_addr = MEM_LINEAR_ADDRESS(resources->dw_mem_base[3],
+ resources->dw_mem_length[3]);
+ offset = offset - base2;
+ } else if (offset >= base2 + OMAP_DSP_MEM2_SIZE &&
+ offset < base3 + OMAP_DSP_MEM3_SIZE) {
+ dw_base_addr = MEM_LINEAR_ADDRESS(resources->dw_mem_base[4],
+ resources->dw_mem_length[4]);
+ offset = offset - base3;
+ } else {
+ return -EPERM;
+ }
+ if (ul_num_bytes)
+ memcpy((u8 *) (dw_base_addr + offset), pbHostBuf, ul_num_bytes);
+ else
+ *((u32 *) pbHostBuf) = dw_base_addr + offset;
+
+ return status;
+}
+
+/*
+ * ======== write_ext_dsp_data ========
+ * Purpose:
+ * Copies buffers to the external memory.
+ *
+ */
+int write_ext_dsp_data(struct bridge_dev_context *dev_context,
+ IN u8 *pbHostBuf, u32 dwDSPAddr,
+ u32 ul_num_bytes, u32 ulMemType,
+ bool bDynamicLoad)
+{
+ u32 dw_base_addr = dev_context->dw_dsp_ext_base_addr;
+ u32 dw_offset = 0;
+ u8 temp_byte1, temp_byte2;
+ u8 remain_byte[4];
+ s32 i;
+ int ret = 0;
+ u32 dw_ext_prog_virt_mem;
+ u32 ul_tlb_base_virt = 0;
+ u32 ul_shm_offset_virt = 0;
+ struct cfg_hostres *host_res = dev_context->resources;
+ bool trace_load = false;
+ temp_byte1 = 0x0;
+ temp_byte2 = 0x0;
+
+ if (symbols_reloaded) {
+ /* Check if it is a load to Trace section */
+ ret = dev_get_symbol(dev_context->hdev_obj,
+ DSP_TRACESEC_BEG, &ul_trace_sec_beg);
+ if (DSP_SUCCEEDED(ret))
+ ret = dev_get_symbol(dev_context->hdev_obj,
+ DSP_TRACESEC_END,
+ &ul_trace_sec_end);
+ }
+ if (DSP_SUCCEEDED(ret)) {
+ if ((dwDSPAddr <= ul_trace_sec_end) &&
+ (dwDSPAddr >= ul_trace_sec_beg))
+ trace_load = true;
+ }
+
+ /* If dynamic, force remap/unmap */
+ if ((bDynamicLoad || trace_load) && dw_base_addr) {
+ dw_base_addr = 0;
+ MEM_UNMAP_LINEAR_ADDRESS((void *)
+ dev_context->dw_dsp_ext_base_addr);
+ dev_context->dw_dsp_ext_base_addr = 0x0;
+ }
+ if (!dw_base_addr) {
+ if (symbols_reloaded)
+ /* Get SHM_BEG EXT_BEG and EXT_END. */
+ ret = dev_get_symbol(dev_context->hdev_obj,
+ SHMBASENAME, &ul_shm_base_virt);
+ DBC_ASSERT(ul_shm_base_virt != 0);
+ if (bDynamicLoad) {
+ if (DSP_SUCCEEDED(ret)) {
+ if (symbols_reloaded)
+ ret =
+ dev_get_symbol
+ (dev_context->hdev_obj, DYNEXTBASE,
+ &ul_ext_base);
+ }
+ DBC_ASSERT(ul_ext_base != 0);
+ if (DSP_SUCCEEDED(ret)) {
+ /* DR OMAPS00013235 : DLModules array may be
+ * in EXTMEM. It is expected that DYNEXTMEM and
+ * EXTMEM are contiguous, so checking for the
+ * upper bound at EXTEND should be Ok. */
+ if (symbols_reloaded)
+ ret =
+ dev_get_symbol
+ (dev_context->hdev_obj, EXTEND,
+ &ul_ext_end);
+ }
+ } else {
+ if (symbols_reloaded) {
+ if (DSP_SUCCEEDED(ret))
+ ret =
+ dev_get_symbol
+ (dev_context->hdev_obj, EXTBASE,
+ &ul_ext_base);
+ DBC_ASSERT(ul_ext_base != 0);
+ if (DSP_SUCCEEDED(ret))
+ ret =
+ dev_get_symbol
+ (dev_context->hdev_obj, EXTEND,
+ &ul_ext_end);
+ }
+ }
+		/* Trace buffer is right after the shm SEG0, so set the
+ * base address to SHMBASE */
+ if (trace_load)
+ ul_ext_base = ul_shm_base_virt;
+
+ DBC_ASSERT(ul_ext_end != 0);
+ DBC_ASSERT(ul_ext_end > ul_ext_base);
+ if (ul_ext_end < ul_ext_base)
+ ret = -EPERM;
+
+ if (DSP_SUCCEEDED(ret)) {
+ ul_tlb_base_virt =
+ dev_context->atlb_entry[0].ul_dsp_va * DSPWORDSIZE;
+ DBC_ASSERT(ul_tlb_base_virt <= ul_shm_base_virt);
+
+ if (symbols_reloaded) {
+ if (DSP_SUCCEEDED(ret)) {
+ ret =
+ dev_get_symbol
+ (dev_context->hdev_obj,
+ DSP_TRACESEC_END, &shm0_end);
+ }
+ if (DSP_SUCCEEDED(ret)) {
+ ret =
+ dev_get_symbol
+ (dev_context->hdev_obj, DYNEXTBASE,
+ &ul_dyn_ext_base);
+ }
+ }
+ ul_shm_offset_virt =
+ ul_shm_base_virt - ul_tlb_base_virt;
+ if (trace_load) {
+ dw_ext_prog_virt_mem =
+ dev_context->atlb_entry[0].ul_gpp_va;
+ } else {
+ dw_ext_prog_virt_mem = host_res->dw_mem_base[1];
+ dw_ext_prog_virt_mem +=
+ (ul_ext_base - ul_dyn_ext_base);
+ }
+
+ dev_context->dw_dsp_ext_base_addr =
+ (u32) MEM_LINEAR_ADDRESS((void *)
+ dw_ext_prog_virt_mem,
+ ul_ext_end - ul_ext_base);
+ dw_base_addr += dev_context->dw_dsp_ext_base_addr;
+ /* This dw_dsp_ext_base_addr will get cleared only when
+ * the board is stopped. */
+ if (!dev_context->dw_dsp_ext_base_addr)
+ ret = -EPERM;
+ }
+ }
+ if (!dw_base_addr || !ul_ext_base || !ul_ext_end)
+ ret = -EPERM;
+
+ if (DSP_SUCCEEDED(ret)) {
+ for (i = 0; i < 4; i++)
+ remain_byte[i] = 0x0;
+
+ dw_offset = dwDSPAddr - ul_ext_base;
+ /* Also make sure the dwDSPAddr is < ul_ext_end */
+ if (dwDSPAddr > ul_ext_end || dw_offset > dwDSPAddr)
+ ret = -EPERM;
+ }
+ if (DSP_SUCCEEDED(ret)) {
+ if (ul_num_bytes)
+ memcpy((u8 *) dw_base_addr + dw_offset, pbHostBuf,
+ ul_num_bytes);
+ else
+ *((u32 *) pbHostBuf) = dw_base_addr + dw_offset;
+ }
+ /* Unmap here to force remap for other Ext loads */
+ if ((bDynamicLoad || trace_load) && dev_context->dw_dsp_ext_base_addr) {
+ MEM_UNMAP_LINEAR_ADDRESS((void *)
+ dev_context->dw_dsp_ext_base_addr);
+ dev_context->dw_dsp_ext_base_addr = 0x0;
+ }
+ symbols_reloaded = false;
+ return ret;
+}
+
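+/*
+ * ======== sm_interrupt_dsp ========
+ * Send mb_val to the DSP through the mailbox. When the DSP is in
+ * hibernation, the peripheral clocks, watchdog, IVA2 DPLL settings and
+ * mailbox context are restored first; from retention only the
+ * peripheral clocks are re-enabled.
+ */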
+int sm_interrupt_dsp(struct bridge_dev_context *dev_context, u16 mb_val)
+{
+#ifdef CONFIG_BRIDGE_DVFS
+ u32 opplevel = 0;
+#endif
+ struct dspbridge_platform_data *pdata =
+ omap_dspbridge_dev->dev.platform_data;
+ struct cfg_hostres *resources = dev_context->resources;
+ int status = 0;
+ u32 temp;
+
+ if (!dev_context->mbox)
+ return 0;
+
+ if (!resources)
+ return -EPERM;
+
+ if (dev_context->dw_brd_state == BRD_DSP_HIBERNATION ||
+ dev_context->dw_brd_state == BRD_HIBERNATION) {
+#ifdef CONFIG_BRIDGE_DVFS
+ if (pdata->dsp_get_opp)
+ opplevel = (*pdata->dsp_get_opp) ();
+ if (opplevel == VDD1_OPP1) {
+ if (pdata->dsp_set_min_opp)
+ (*pdata->dsp_set_min_opp) (VDD1_OPP2);
+ }
+#endif
+ /* Restart the peripheral clocks */
+ dsp_clock_enable_all(dev_context->dsp_per_clks);
+ dsp_wdt_enable(true);
+
+ /*
+ * 2:0 AUTO_IVA2_DPLL - Enabling IVA2 DPLL auto control
+ * in CM_AUTOIDLE_PLL_IVA2 register
+ */
+ (*pdata->dsp_cm_write)(1 << OMAP3430_AUTO_IVA2_DPLL_SHIFT,
+ OMAP3430_IVA2_MOD, OMAP3430_CM_AUTOIDLE_PLL);
+
+ /*
+ * 7:4 IVA2_DPLL_FREQSEL - IVA2 internal frq set to
+ * 0.75 MHz - 1.0 MHz
+ * 2:0 EN_IVA2_DPLL - Enable IVA2 DPLL in lock mode
+ */
+ (*pdata->dsp_cm_rmw_bits)(OMAP3430_IVA2_DPLL_FREQSEL_MASK |
+ OMAP3430_EN_IVA2_DPLL_MASK,
+ 0x3 << OMAP3430_IVA2_DPLL_FREQSEL_SHIFT |
+ 0x7 << OMAP3430_EN_IVA2_DPLL_SHIFT,
+ OMAP3430_IVA2_MOD, OMAP3430_CM_CLKEN_PLL);
+
+ /* Restore mailbox settings */
+ omap_mbox_restore_ctx(dev_context->mbox);
+
+ /* Access MMU SYS CONFIG register to generate a short wakeup */
+ temp = *(reg_uword32 *) (resources->dw_dmmu_base + 0x10);
+
+ dev_context->dw_brd_state = BRD_RUNNING;
+ } else if (dev_context->dw_brd_state == BRD_RETENTION) {
+ /* Restart the peripheral clocks */
+ dsp_clock_enable_all(dev_context->dsp_per_clks);
+ }
+
+ status = omap_mbox_msg_send(dev_context->mbox, mb_val);
+
+ if (status) {
+		pr_err("omap_mbox_msg_send failed, status = %d\n", status);
+ status = -EPERM;
+ }
+
+ return 0;
+}
diff --git a/drivers/staging/tidspbridge/core/tiomap_io.h b/drivers/staging/tidspbridge/core/tiomap_io.h
new file mode 100644
index 0000000..a176e5c
--- /dev/null
+++ b/drivers/staging/tidspbridge/core/tiomap_io.h
@@ -0,0 +1,104 @@
+/*
+ * tiomap_io.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Definitions, types and function prototypes for the io (r/w external mem).
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef _TIOMAP_IO_
+#define _TIOMAP_IO_
+
+/*
+ * Symbol that defines beginning of shared memory.
+ * For OMAP (Helen) this is the DSP Virtual base address of SDRAM.
+ * This will be used to program DSP MMU to map DSP Virt to GPP phys.
+ * (see dspMmuTlbEntry()).
+ */
+#define SHMBASENAME "SHM_BEG"
+#define EXTBASE "EXT_BEG"
+#define EXTEND "_EXT_END"
+#define DYNEXTBASE "_DYNEXT_BEG"
+#define DYNEXTEND "_DYNEXT_END"
+#define IVAEXTMEMBASE "_IVAEXTMEM_BEG"
+#define IVAEXTMEMEND "_IVAEXTMEM_END"
+
+#define DSP_TRACESEC_BEG "_BRIDGE_TRACE_BEG"
+#define DSP_TRACESEC_END "_BRIDGE_TRACE_END"
+
+#define SYS_PUTCBEG "_SYS_PUTCBEG"
+#define SYS_PUTCEND "_SYS_PUTCEND"
+#define BRIDGE_SYS_PUTC_CURRENT "_BRIDGE_SYS_PUTC_current"
+
+#define WORDSWAP_ENABLE 0x3 /* Enable word swap */
+
+/*
+ * ======== read_ext_dsp_data ========
+ * Reads data from DSP external memory. The external memory for the DSP
+ * is configured by the combination of DSP MMU and shm Memory manager in the CDB
+ */
+extern int read_ext_dsp_data(struct bridge_dev_context *dev_context,
+ OUT u8 *pbHostBuf, u32 dwDSPAddr,
+ u32 ul_num_bytes, u32 ulMemType);
+
+/*
+ * ======== write_dsp_data ========
+ */
+extern int write_dsp_data(struct bridge_dev_context *dev_context,
+ OUT u8 *pbHostBuf, u32 dwDSPAddr,
+ u32 ul_num_bytes, u32 ulMemType);
+
+/*
+ * ======== write_ext_dsp_data ========
+ * Writes to the DSP External memory for external program.
+ *      The ext mem for program is configured by the combination of DSP MMU and
+ * shm Memory manager in the CDB
+ */
+extern int write_ext_dsp_data(struct bridge_dev_context *dev_context,
+ IN u8 *pbHostBuf, u32 dwDSPAddr,
+ u32 ul_num_bytes, u32 ulMemType,
+ bool bDynamicLoad);
+
+/*
+ * ======== write_ext32_bit_dsp_data ========
+ * Writes 32 bit data to the external memory
+ */
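+/*
+ * With tc_word_swap_on set, the two 16-bit halves of the value are
+ * exchanged (e.g. 0x12345678 is written as 0x56781234); otherwise the
+ * value is written unchanged.
+ */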
+extern inline void write_ext32_bit_dsp_data(IN const
+ struct bridge_dev_context *dev_context,
+ IN u32 dwDSPAddr, IN u32 val)
+{
+ *(u32 *) dwDSPAddr = ((dev_context->tc_word_swap_on) ? (((val << 16) &
+ 0xFFFF0000) |
+ ((val >> 16) &
+ 0x0000FFFF)) :
+ val);
+}
+
+/*
+ * ======== read_ext32_bit_dsp_data ========
+ * Reads 32 bit data from the external memory
+ */
+extern inline u32 read_ext32_bit_dsp_data(IN const struct bridge_dev_context
+ *dev_context, IN u32 dwDSPAddr)
+{
+ u32 ret;
+ ret = *(u32 *) dwDSPAddr;
+
+ ret = ((dev_context->tc_word_swap_on) ? (((ret << 16)
+ & 0xFFFF0000) | ((ret >> 16) &
+ 0x0000FFFF))
+ : ret);
+ return ret;
+}
+
+#endif /* _TIOMAP_IO_ */
diff --git a/drivers/staging/tidspbridge/core/ue_deh.c b/drivers/staging/tidspbridge/core/ue_deh.c
new file mode 100644
index 0000000..64e9366
--- /dev/null
+++ b/drivers/staging/tidspbridge/core/ue_deh.c
@@ -0,0 +1,303 @@
+/*
+ * ue_deh.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Implements upper edge DSP exception handling (DEH) functions.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+/* ----------------------------------- Host OS */
+#include <dspbridge/host_os.h>
+
+/* ----------------------------------- DSP/BIOS Bridge */
+#include <dspbridge/std.h>
+#include <dspbridge/dbdefs.h>
+
+/* ----------------------------------- Trace & Debug */
+#include <dspbridge/dbc.h>
+
+/* ----------------------------------- OS Adaptation Layer */
+#include <dspbridge/cfg.h>
+#include <dspbridge/clk.h>
+#include <dspbridge/ntfy.h>
+#include <dspbridge/drv.h>
+
+/* ----------------------------------- Link Driver */
+#include <dspbridge/dspdeh.h>
+
+/* ----------------------------------- Platform Manager */
+#include <dspbridge/dev.h>
+#include <dspbridge/dspapi.h>
+#include <dspbridge/wdt.h>
+
+/* ------------------------------------ Hardware Abstraction Layer */
+#include <hw_defs.h>
+#include <hw_mmu.h>
+
+/* ----------------------------------- This */
+#include "mmu_fault.h"
+#include "_tiomap.h"
+#include "_deh.h"
+#include "_tiomap_pwr.h"
+#include <dspbridge/io_sm.h>
+
+
+static struct hw_mmu_map_attrs_t map_attrs = { HW_LITTLE_ENDIAN,
+ HW_ELEM_SIZE16BIT,
+ HW_MMU_CPUES
+};
+
+static void *dummy_va_addr;
+
+int bridge_deh_create(struct deh_mgr **ret_deh_mgr,
+ struct dev_object *hdev_obj)
+{
+ int status = 0;
+ struct deh_mgr *deh_mgr;
+ struct bridge_dev_context *hbridge_context = NULL;
+
+ /* Message manager will be created when a file is loaded, since
+ * size of message buffer in shared memory is configurable in
+ * the base image. */
+ /* Get Bridge context info. */
+ dev_get_bridge_context(hdev_obj, &hbridge_context);
+ DBC_ASSERT(hbridge_context);
+ dummy_va_addr = NULL;
+	/* Allocate DEH manager object */
+ deh_mgr = kzalloc(sizeof(struct deh_mgr), GFP_KERNEL);
+ if (!deh_mgr) {
+ status = -ENOMEM;
+ goto leave;
+ }
+
+ /* Create an NTFY object to manage notifications */
+ deh_mgr->ntfy_obj = kmalloc(sizeof(struct ntfy_object), GFP_KERNEL);
+ if (deh_mgr->ntfy_obj) {
+ ntfy_init(deh_mgr->ntfy_obj);
+ } else {
+ status = -ENOMEM;
+ goto err;
+ }
+
+	/* Create an MMU fault DPC */
+ tasklet_init(&deh_mgr->dpc_tasklet, mmu_fault_dpc, (u32) deh_mgr);
+
+ /* Fill in context structure */
+ deh_mgr->hbridge_context = hbridge_context;
+ deh_mgr->err_info.dw_err_mask = 0L;
+ deh_mgr->err_info.dw_val1 = 0L;
+ deh_mgr->err_info.dw_val2 = 0L;
+ deh_mgr->err_info.dw_val3 = 0L;
+
+ /* Install ISR function for DSP MMU fault */
+ if ((request_irq(INT_DSP_MMU_IRQ, mmu_fault_isr, 0,
+ "DspBridge\tiommu fault",
+ (void *)deh_mgr)) == 0)
+ status = 0;
+ else
+ status = -EPERM;
+
+err:
+ if (DSP_FAILED(status)) {
+ /* If create failed, cleanup */
+ bridge_deh_destroy(deh_mgr);
+ deh_mgr = NULL;
+ }
+leave:
+ *ret_deh_mgr = deh_mgr;
+
+ return status;
+}
+
+int bridge_deh_destroy(struct deh_mgr *deh_mgr)
+{
+ if (!deh_mgr)
+ return -EFAULT;
+
+ /* Release dummy VA buffer */
+ bridge_deh_release_dummy_mem();
+ /* If notification object exists, delete it */
+ if (deh_mgr->ntfy_obj) {
+ ntfy_delete(deh_mgr->ntfy_obj);
+ kfree(deh_mgr->ntfy_obj);
+ }
+ /* Disable DSP MMU fault */
+ free_irq(INT_DSP_MMU_IRQ, deh_mgr);
+
+ /* Free DPC object */
+ tasklet_kill(&deh_mgr->dpc_tasklet);
+
+ /* Deallocate the DEH manager object */
+ kfree(deh_mgr);
+
+ return 0;
+}
+
+int bridge_deh_register_notify(struct deh_mgr *deh_mgr, u32 event_mask,
+ u32 notify_type,
+ struct dsp_notification *hnotification)
+{
+ int status = 0;
+
+ if (!deh_mgr)
+ return -EFAULT;
+
+ if (event_mask)
+ status = ntfy_register(deh_mgr->ntfy_obj, hnotification,
+ event_mask, notify_type);
+ else
+ status = ntfy_unregister(deh_mgr->ntfy_obj, hnotification);
+
+ return status;
+}
+
+void bridge_deh_notify(struct deh_mgr *deh_mgr, u32 ulEventMask, u32 dwErrInfo)
+{
+ struct bridge_dev_context *dev_context;
+ int status = 0;
+ u32 hw_mmu_max_tlb_count = 31;
+ struct cfg_hostres *resources;
+ hw_status hw_status_obj;
+
+ if (!deh_mgr)
+ return;
+
+ dev_info(bridge, "%s: device exception\n", __func__);
+ dev_context = (struct bridge_dev_context *)deh_mgr->hbridge_context;
+ resources = dev_context->resources;
+
+ switch (ulEventMask) {
+ case DSP_SYSERROR:
+ /* reset err_info structure before use */
+ deh_mgr->err_info.dw_err_mask = DSP_SYSERROR;
+ deh_mgr->err_info.dw_val1 = 0L;
+ deh_mgr->err_info.dw_val2 = 0L;
+ deh_mgr->err_info.dw_val3 = 0L;
+ deh_mgr->err_info.dw_val1 = dwErrInfo;
+ dev_err(bridge, "%s: %s, err_info = 0x%x\n",
+ __func__, "DSP_SYSERROR", dwErrInfo);
+ dump_dl_modules(dev_context);
+ dump_dsp_stack(dev_context);
+ break;
+ case DSP_MMUFAULT:
+ /* MMU fault routine should have set err info structure. */
+ deh_mgr->err_info.dw_err_mask = DSP_MMUFAULT;
+ dev_err(bridge, "%s: %s, err_info = 0x%x\n",
+ __func__, "DSP_MMUFAULT", dwErrInfo);
+ dev_info(bridge, "%s: %s, high=0x%x, low=0x%x, "
+ "fault=0x%x\n", __func__, "DSP_MMUFAULT",
+ (unsigned int) deh_mgr->err_info.dw_val1,
+ (unsigned int) deh_mgr->err_info.dw_val2,
+ (unsigned int) fault_addr);
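+		/*
+		 * Allocate a scratch page and (via hw_mmu_tlb_add below) map
+		 * it at the faulting DSP address so the DSP can continue
+		 * running; GPT8 is then enabled and its overflow awaited
+		 * before the MMU fault is acknowledged.
+		 */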
+		dummy_va_addr = (void *)__get_free_page(GFP_ATOMIC);
+ dev_context = (struct bridge_dev_context *)
+ deh_mgr->hbridge_context;
+
+ print_dsp_trace_buffer(dev_context);
+ dump_dl_modules(dev_context);
+
+ /*
+		 * Reset the dynamic MMU index to the fixed count if it
+		 * exceeds 31, so that the dynamic index always stays in the
+		 * range between the standard/fixed entries and 31.
+ */
+ if (dev_context->num_tlb_entries >
+ hw_mmu_max_tlb_count) {
+ dev_context->num_tlb_entries =
+ dev_context->fixed_tlb_entries;
+ }
+ if (DSP_SUCCEEDED(status)) {
+ hw_status_obj =
+ hw_mmu_tlb_add(resources->dw_dmmu_base,
+ virt_to_phys(dummy_va_addr), fault_addr,
+ HW_PAGE_SIZE4KB, 1,
+ &map_attrs, HW_SET, HW_SET);
+ }
+
+ dsp_clk_enable(DSP_CLK_GPT8);
+
+ dsp_gpt_wait_overflow(DSP_CLK_GPT8, 0xfffffffe);
+
+ /* Clear MMU interrupt */
+ hw_mmu_event_ack(resources->dw_dmmu_base,
+ HW_MMU_TRANSLATION_FAULT);
+ dump_dsp_stack(deh_mgr->hbridge_context);
+ dsp_clk_disable(DSP_CLK_GPT8);
+ break;
+#ifdef CONFIG_BRIDGE_NTFY_PWRERR
+ case DSP_PWRERROR:
+ /* reset err_info structure before use */
+ deh_mgr->err_info.dw_err_mask = DSP_PWRERROR;
+ deh_mgr->err_info.dw_val1 = 0L;
+ deh_mgr->err_info.dw_val2 = 0L;
+ deh_mgr->err_info.dw_val3 = 0L;
+ deh_mgr->err_info.dw_val1 = dwErrInfo;
+ dev_err(bridge, "%s: %s, err_info = 0x%x\n",
+ __func__, "DSP_PWRERROR", dwErrInfo);
+ break;
+#endif /* CONFIG_BRIDGE_NTFY_PWRERR */
+ case DSP_WDTOVERFLOW:
+ deh_mgr->err_info.dw_err_mask = DSP_WDTOVERFLOW;
+ deh_mgr->err_info.dw_val1 = 0L;
+ deh_mgr->err_info.dw_val2 = 0L;
+ deh_mgr->err_info.dw_val3 = 0L;
+ dev_err(bridge, "%s: DSP_WDTOVERFLOW\n", __func__);
+ break;
+ default:
+ dev_dbg(bridge, "%s: Unknown Error, err_info = 0x%x\n",
+ __func__, dwErrInfo);
+ break;
+ }
+
+ /* Filter subsequent notifications when an error occurs */
+ if (dev_context->dw_brd_state != BRD_ERROR) {
+ ntfy_notify(deh_mgr->ntfy_obj, ulEventMask);
+#ifdef CONFIG_BRIDGE_RECOVERY
+ bridge_recover_schedule();
+#endif
+ }
+
+ /* Set the Board state as ERROR */
+ dev_context->dw_brd_state = BRD_ERROR;
+ /* Disable all the clocks that were enabled by DSP */
+ dsp_clock_disable_all(dev_context->dsp_per_clks);
+	/*
+	 * Avoid further WDT notifications once one has fired,
+	 * and also after a fatal error occurs.
+	 */
+ dsp_wdt_enable(false);
+}
+
+int bridge_deh_get_info(struct deh_mgr *deh_mgr,
+ struct dsp_errorinfo *pErrInfo)
+{
+ DBC_REQUIRE(deh_mgr);
+ DBC_REQUIRE(pErrInfo);
+
+ if (!deh_mgr)
+ return -EFAULT;
+
+ /* Copy DEH error info structure to PROC error info structure. */
+ pErrInfo->dw_err_mask = deh_mgr->err_info.dw_err_mask;
+ pErrInfo->dw_val1 = deh_mgr->err_info.dw_val1;
+ pErrInfo->dw_val2 = deh_mgr->err_info.dw_val2;
+ pErrInfo->dw_val3 = deh_mgr->err_info.dw_val3;
+
+ return 0;
+}
+
+void bridge_deh_release_dummy_mem(void)
+{
+ free_page((unsigned long)dummy_va_addr);
+ dummy_va_addr = NULL;
+}
diff --git a/drivers/staging/tidspbridge/core/wdt.c b/drivers/staging/tidspbridge/core/wdt.c
new file mode 100644
index 0000000..5881fe0
--- /dev/null
+++ b/drivers/staging/tidspbridge/core/wdt.c
@@ -0,0 +1,150 @@
+/*
+ * wdt.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * DSP watchdog timer (WDT3) support.
+ *
+ * Copyright (C) 2010 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#include <dspbridge/std.h>
+#include <dspbridge/dbdefs.h>
+#include <dspbridge/dspdeh.h>
+#include <dspbridge/dev.h>
+#include <dspbridge/_chnl_sm.h>
+#include <dspbridge/wdt.h>
+#include <dspbridge/host_os.h>
+
+
+#ifdef CONFIG_BRIDGE_WDT3
+
+#define OMAP34XX_WDT3_BASE (L4_PER_34XX_BASE + 0x30000)
+
+static struct dsp_wdt_setting dsp_wdt;
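+
+/*
+ * Expected call order (an illustrative sketch, not copied from a caller):
+ * dsp_wdt_init() at driver startup, dsp_wdt_sm_set() once the shared-memory
+ * control structure is available, dsp_wdt_enable(true)/dsp_wdt_enable(false)
+ * around DSP execution, and dsp_wdt_exit() on driver removal.
+ */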
+
+void dsp_wdt_dpc(unsigned long data)
+{
+ struct deh_mgr *deh_mgr;
+ dev_get_deh_mgr(dev_get_first(), &deh_mgr);
+ if (deh_mgr)
+ bridge_deh_notify(deh_mgr, DSP_WDTOVERFLOW, 0);
+}
+
+irqreturn_t dsp_wdt_isr(int irq, void *data)
+{
+ u32 value;
+ /* ack wdt3 interrupt */
+ value = __raw_readl(dsp_wdt.reg_base + OMAP3_WDT3_ISR_OFFSET);
+ __raw_writel(value, dsp_wdt.reg_base + OMAP3_WDT3_ISR_OFFSET);
+
+ tasklet_schedule(&dsp_wdt.wdt3_tasklet);
+ return IRQ_HANDLED;
+}
+
+int dsp_wdt_init(void)
+{
+ int ret = 0;
+
+ dsp_wdt.sm_wdt = NULL;
+ dsp_wdt.reg_base = OMAP2_L4_IO_ADDRESS(OMAP34XX_WDT3_BASE);
+ tasklet_init(&dsp_wdt.wdt3_tasklet, dsp_wdt_dpc, 0);
+
+ dsp_wdt.fclk = clk_get(NULL, "wdt3_fck");
+
+ if (dsp_wdt.fclk) {
+ dsp_wdt.iclk = clk_get(NULL, "wdt3_ick");
+ if (!dsp_wdt.iclk) {
+ clk_put(dsp_wdt.fclk);
+ dsp_wdt.fclk = NULL;
+ ret = -EFAULT;
+ }
+ } else
+ ret = -EFAULT;
+
+ if (!ret)
+ ret = request_irq(INT_34XX_WDT3_IRQ, dsp_wdt_isr, 0,
+ "dsp_wdt", &dsp_wdt);
+
+ /* Disable at this moment, it will be enabled when DSP starts */
+ if (!ret)
+ disable_irq(INT_34XX_WDT3_IRQ);
+
+ return ret;
+}
+
+void dsp_wdt_sm_set(void *data)
+{
+ dsp_wdt.sm_wdt = data;
+ dsp_wdt.sm_wdt->wdt_overflow = CONFIG_WDT_TIMEOUT;
+}
+
+
+void dsp_wdt_exit(void)
+{
+ free_irq(INT_34XX_WDT3_IRQ, &dsp_wdt);
+ tasklet_kill(&dsp_wdt.wdt3_tasklet);
+
+ if (dsp_wdt.fclk)
+ clk_put(dsp_wdt.fclk);
+ if (dsp_wdt.iclk)
+ clk_put(dsp_wdt.iclk);
+
+ dsp_wdt.fclk = NULL;
+ dsp_wdt.iclk = NULL;
+ dsp_wdt.sm_wdt = NULL;
+ dsp_wdt.reg_base = NULL;
+}
+
+void dsp_wdt_enable(bool enable)
+{
+ u32 tmp;
+ static bool wdt_enable;
+
+ if (wdt_enable == enable || !dsp_wdt.fclk || !dsp_wdt.iclk)
+ return;
+
+ wdt_enable = enable;
+
+ if (enable) {
+ clk_enable(dsp_wdt.fclk);
+ clk_enable(dsp_wdt.iclk);
+ dsp_wdt.sm_wdt->wdt_setclocks = 1;
+ tmp = __raw_readl(dsp_wdt.reg_base + OMAP3_WDT3_ISR_OFFSET);
+ __raw_writel(tmp, dsp_wdt.reg_base + OMAP3_WDT3_ISR_OFFSET);
+ enable_irq(INT_34XX_WDT3_IRQ);
+ } else {
+ disable_irq(INT_34XX_WDT3_IRQ);
+ dsp_wdt.sm_wdt->wdt_setclocks = 0;
+ clk_disable(dsp_wdt.iclk);
+ clk_disable(dsp_wdt.fclk);
+ }
+}
+
+#else
+void dsp_wdt_enable(bool enable)
+{
+}
+
+void dsp_wdt_sm_set(void *data)
+{
+}
+
+int dsp_wdt_init(void)
+{
+ return 0;
+}
+
+void dsp_wdt_exit(void)
+{
+}
+#endif
+
--
1.7.0.4
From: Omar Ramirez Luna <[email protected]>
Add TI's DSP Bridge platform manager driver sources
Signed-off-by: Omar Ramirez Luna <[email protected]>
Signed-off-by: Kanigeri, Hari <[email protected]>
Signed-off-by: Ameya Palande <[email protected]>
Signed-off-by: Guzman Lugo, Fernando <[email protected]>
Signed-off-by: Hebbar, Shivananda <[email protected]>
Signed-off-by: Ramos Falcon, Ernesto <[email protected]>
Signed-off-by: Felipe Contreras <[email protected]>
Signed-off-by: Anna, Suman <[email protected]>
Signed-off-by: Gupta, Ramesh <[email protected]>
Signed-off-by: Gomez Castellanos, Ivan <[email protected]>
Signed-off-by: Andy Shevchenko <[email protected]>
Signed-off-by: Armando Uribe De Leon <[email protected]>
Signed-off-by: Deepak Chitriki <[email protected]>
Signed-off-by: Menon, Nishanth <[email protected]>
Signed-off-by: Phil Carmody <[email protected]>
Signed-off-by: Ohad Ben-Cohen <[email protected]>
---
drivers/staging/tidspbridge/pmgr/chnl.c | 163 +++
drivers/staging/tidspbridge/pmgr/chnlobj.h | 46 +
drivers/staging/tidspbridge/pmgr/cmm.c | 1172 +++++++++++++++++++
drivers/staging/tidspbridge/pmgr/cod.c | 658 +++++++++++
drivers/staging/tidspbridge/pmgr/dbll.c | 1585 ++++++++++++++++++++++++++
drivers/staging/tidspbridge/pmgr/dev.c | 1171 +++++++++++++++++++
drivers/staging/tidspbridge/pmgr/dmm.c | 533 +++++++++
drivers/staging/tidspbridge/pmgr/dspapi.c | 1685 ++++++++++++++++++++++++++++
drivers/staging/tidspbridge/pmgr/io.c | 142 +++
drivers/staging/tidspbridge/pmgr/ioobj.h | 38 +
drivers/staging/tidspbridge/pmgr/msg.c | 129 +++
drivers/staging/tidspbridge/pmgr/msgobj.h | 38 +
12 files changed, 7360 insertions(+), 0 deletions(-)
create mode 100644 drivers/staging/tidspbridge/pmgr/chnl.c
create mode 100644 drivers/staging/tidspbridge/pmgr/chnlobj.h
create mode 100644 drivers/staging/tidspbridge/pmgr/cmm.c
create mode 100644 drivers/staging/tidspbridge/pmgr/cod.c
create mode 100644 drivers/staging/tidspbridge/pmgr/dbll.c
create mode 100644 drivers/staging/tidspbridge/pmgr/dev.c
create mode 100644 drivers/staging/tidspbridge/pmgr/dmm.c
create mode 100644 drivers/staging/tidspbridge/pmgr/dspapi.c
create mode 100644 drivers/staging/tidspbridge/pmgr/io.c
create mode 100644 drivers/staging/tidspbridge/pmgr/ioobj.h
create mode 100644 drivers/staging/tidspbridge/pmgr/msg.c
create mode 100644 drivers/staging/tidspbridge/pmgr/msgobj.h
diff --git a/drivers/staging/tidspbridge/pmgr/chnl.c b/drivers/staging/tidspbridge/pmgr/chnl.c
new file mode 100644
index 0000000..bc969d8
--- /dev/null
+++ b/drivers/staging/tidspbridge/pmgr/chnl.c
@@ -0,0 +1,163 @@
+/*
+ * chnl.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * DSP API channel interface: multiplexes data streams through the single
+ * physical link managed by a Bridge driver.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+/* ----------------------------------- Host OS */
+#include <dspbridge/host_os.h>
+
+/* ----------------------------------- DSP/BIOS Bridge */
+#include <dspbridge/std.h>
+#include <dspbridge/dbdefs.h>
+
+/* ----------------------------------- Trace & Debug */
+#include <dspbridge/dbc.h>
+
+/* ----------------------------------- OS Adaptation Layer */
+#include <dspbridge/cfg.h>
+#include <dspbridge/sync.h>
+
+/* ----------------------------------- Platform Manager */
+#include <dspbridge/proc.h>
+#include <dspbridge/dev.h>
+
+/* ----------------------------------- Others */
+#include <dspbridge/chnlpriv.h>
+#include <chnlobj.h>
+
+/* ----------------------------------- This */
+#include <dspbridge/chnl.h>
+
+/* ----------------------------------- Globals */
+static u32 refs;
+
+/*
+ * ======== chnl_create ========
+ * Purpose:
+ * Create a channel manager object, responsible for opening new channels
+ * and closing old ones for a given 'Bridge board.
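+ *
+ *      Illustrative lifetime (a sketch based on this file's interface,
+ *      not an actual call site): chnl_init(); chnl_create(&chnl_mgr,
+ *      hdev_obj, &mgr_attrs); ... chnl_destroy(chnl_mgr); chnl_exit();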
+ */
+int chnl_create(OUT struct chnl_mgr **phChnlMgr,
+ struct dev_object *hdev_obj,
+ IN CONST struct chnl_mgrattrs *pMgrAttrs)
+{
+ int status;
+ struct chnl_mgr *hchnl_mgr;
+ struct chnl_mgr_ *chnl_mgr_obj = NULL;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(phChnlMgr != NULL);
+ DBC_REQUIRE(pMgrAttrs != NULL);
+
+ *phChnlMgr = NULL;
+
+ /* Validate args: */
+ if ((0 < pMgrAttrs->max_channels) &&
+ (pMgrAttrs->max_channels <= CHNL_MAXCHANNELS))
+ status = 0;
+ else if (pMgrAttrs->max_channels == 0)
+ status = -EINVAL;
+ else
+ status = -ECHRNG;
+
+ if (pMgrAttrs->word_size == 0)
+ status = -EINVAL;
+
+ if (DSP_SUCCEEDED(status)) {
+ status = dev_get_chnl_mgr(hdev_obj, &hchnl_mgr);
+ if (DSP_SUCCEEDED(status) && hchnl_mgr != NULL)
+ status = -EEXIST;
+
+ }
+
+ if (DSP_SUCCEEDED(status)) {
+ struct bridge_drv_interface *intf_fxns;
+ dev_get_intf_fxns(hdev_obj, &intf_fxns);
+ /* Let Bridge channel module finish the create: */
+ status = (*intf_fxns->pfn_chnl_create) (&hchnl_mgr, hdev_obj,
+ pMgrAttrs);
+ if (DSP_SUCCEEDED(status)) {
+ /* Fill in DSP API channel module's fields of the
+ * chnl_mgr structure */
+ chnl_mgr_obj = (struct chnl_mgr_ *)hchnl_mgr;
+ chnl_mgr_obj->intf_fxns = intf_fxns;
+ /* Finally, return the new channel manager handle: */
+ *phChnlMgr = hchnl_mgr;
+ }
+ }
+
+ DBC_ENSURE(DSP_FAILED(status) || chnl_mgr_obj);
+
+ return status;
+}
+
+/*
+ * ======== chnl_destroy ========
+ * Purpose:
+ * Close all open channels, and destroy the channel manager.
+ */
+int chnl_destroy(struct chnl_mgr *hchnl_mgr)
+{
+ struct chnl_mgr_ *chnl_mgr_obj = (struct chnl_mgr_ *)hchnl_mgr;
+ struct bridge_drv_interface *intf_fxns;
+ int status;
+
+ DBC_REQUIRE(refs > 0);
+
+ if (chnl_mgr_obj) {
+ intf_fxns = chnl_mgr_obj->intf_fxns;
+ /* Let Bridge channel module destroy the chnl_mgr: */
+ status = (*intf_fxns->pfn_chnl_destroy) (hchnl_mgr);
+ } else {
+ status = -EFAULT;
+ }
+
+ return status;
+}
+
+/*
+ * ======== chnl_exit ========
+ * Purpose:
+ * Discontinue usage of the CHNL module.
+ */
+void chnl_exit(void)
+{
+ DBC_REQUIRE(refs > 0);
+
+ refs--;
+
+ DBC_ENSURE(refs >= 0);
+}
+
+/*
+ * ======== chnl_init ========
+ * Purpose:
+ * Initialize the CHNL module's private state.
+ */
+bool chnl_init(void)
+{
+ bool ret = true;
+
+ DBC_REQUIRE(refs >= 0);
+
+ if (ret)
+ refs++;
+
+ DBC_ENSURE((ret && (refs > 0)) || (!ret && (refs >= 0)));
+
+ return ret;
+}
diff --git a/drivers/staging/tidspbridge/pmgr/chnlobj.h b/drivers/staging/tidspbridge/pmgr/chnlobj.h
new file mode 100644
index 0000000..6795e0a
--- /dev/null
+++ b/drivers/staging/tidspbridge/pmgr/chnlobj.h
@@ -0,0 +1,46 @@
+/*
+ * chnlobj.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Structure subcomponents of channel class library channel objects which
+ * are exposed to DSP API from Bridge driver.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef CHNLOBJ_
+#define CHNLOBJ_
+
+#include <dspbridge/chnldefs.h>
+#include <dspbridge/dspdefs.h>
+
+/*
+ * This struct is the first field in a chnl_mgr struct. Other,
+ * implementation-specific fields follow this structure in memory.
+ */
+struct chnl_mgr_ {
+ /* These must be the first fields in a chnl_mgr struct: */
+
+ /* Function interface to Bridge driver. */
+ struct bridge_drv_interface *intf_fxns;
+};
+
+/*
+ * This struct is the first field in a chnl_object struct. Other,
+ * implementation specific fields follow this structure in memory.
+ */
+struct chnl_object_ {
+ /* These must be the first fields in a chnl_object struct: */
+ struct chnl_mgr_ *chnl_mgr_obj; /* Pointer back to channel manager. */
+};
+
+#endif /* CHNLOBJ_ */
diff --git a/drivers/staging/tidspbridge/pmgr/cmm.c b/drivers/staging/tidspbridge/pmgr/cmm.c
new file mode 100644
index 0000000..7aa4ca4
--- /dev/null
+++ b/drivers/staging/tidspbridge/pmgr/cmm.c
@@ -0,0 +1,1172 @@
+/*
+ * cmm.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * The Communication (Shared) Memory Management (CMM) module provides
+ * shared memory management services for DSP/BIOS Bridge data streaming
+ * and messaging.
+ *
+ * Multiple shared memory segments can be registered with CMM.
+ * Each registered SM segment is represented by a SM "allocator" that
+ * describes a block of physically contiguous shared memory used for
+ * future allocations by CMM.
+ *
+ * Memory is coalesced back to the appropriate heap when a buffer is
+ * freed.
+ *
+ * Notes:
+ * Va: Virtual address.
+ * Pa: Physical or kernel system address.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
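+/*
+ * Illustrative call sequence (a sketch built from the interfaces below, not
+ * copied from a real caller):
+ *
+ *	struct cmm_object *cmm_mgr;
+ *	u32 seg_id;
+ *	void *pa, *va;
+ *
+ *	cmm_create(&cmm_mgr, hdev_obj, NULL);
+ *	cmm_register_gppsm_seg(cmm_mgr, gpp_pa, size, dsp_addr_offset,
+ *			       CMM_ADDTODSPPA, dsp_base, dsp_size,
+ *			       &seg_id, gpp_va);
+ *	pa = cmm_calloc_buf(cmm_mgr, 0x1000, NULL, &va);
+ *	...
+ *	cmm_free_buf(cmm_mgr, pa, seg_id);
+ *	cmm_un_register_gppsm_seg(cmm_mgr, CMM_ALLSEGMENTS);
+ *	cmm_destroy(cmm_mgr, false);
+ */
+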
+/* ----------------------------------- DSP/BIOS Bridge */
+#include <dspbridge/std.h>
+#include <dspbridge/dbdefs.h>
+
+/* ----------------------------------- Trace & Debug */
+#include <dspbridge/dbc.h>
+
+/* ----------------------------------- OS Adaptation Layer */
+#include <dspbridge/cfg.h>
+#include <dspbridge/list.h>
+#include <dspbridge/sync.h>
+#include <dspbridge/utildefs.h>
+
+/* ----------------------------------- Platform Manager */
+#include <dspbridge/dev.h>
+#include <dspbridge/proc.h>
+
+/* ----------------------------------- This */
+#include <dspbridge/cmm.h>
+
+/* ----------------------------------- Defines, Data Structures, Typedefs */
+#define NEXT_PA(pnode) (pnode->dw_pa + pnode->ul_size)
+
+/* Other bus/platform translations */
+#define DSPPA2GPPPA(base, x, y) ((x)+(y))
+#define GPPPA2DSPPA(base, x, y) ((x)-(y))
+
+/*
+ * Allocators define a block of contiguous memory used for future allocations.
+ *
+ * sma - shared memory allocator.
+ * vma - virtual memory allocator.(not used).
+ */
+struct cmm_allocator { /* sma */
+ unsigned int shm_base; /* Start of physical SM block */
+ u32 ul_sm_size; /* Size of SM block in bytes */
+ unsigned int dw_vm_base; /* Start of VM block. (Dev driver
+ * context for 'sma') */
+ u32 dw_dsp_phys_addr_offset; /* DSP PA to GPP PA offset for this
+ * SM space */
+ s8 c_factor; /* DSPPa to GPPPa Conversion Factor */
+ unsigned int dw_dsp_base; /* DSP virt base byte address */
+ u32 ul_dsp_size; /* DSP seg size in bytes */
+ struct cmm_object *hcmm_mgr; /* back ref to parent mgr */
+ /* node list of available memory */
+ struct lst_list *free_list_head;
+ /* node list of memory in use */
+ struct lst_list *in_use_list_head;
+};
+
+struct cmm_xlator { /* Pa<->Va translator object */
+	/* CMM object this translator is associated with */
+ struct cmm_object *hcmm_mgr;
+ /*
+ * Client process virtual base address that corresponds to phys SM
+ * base address for translator's ul_seg_id.
+ * Only 1 segment ID currently supported.
+ */
+ unsigned int dw_virt_base; /* virtual base address */
+ u32 ul_virt_size; /* size of virt space in bytes */
+ u32 ul_seg_id; /* Segment Id */
+};
+
+/* CMM Mgr */
+struct cmm_object {
+	/*
+	 * The CMM lock serializes access to the memory manager across threads.
+	 */
+ struct mutex cmm_lock; /* Lock to access cmm mgr */
+ struct lst_list *node_free_list_head; /* Free list of memory nodes */
+ u32 ul_min_block_size; /* Min SM block; default 16 bytes */
+ u32 dw_page_size; /* Memory Page size (1k/4k) */
+ /* GPP SM segment ptrs */
+ struct cmm_allocator *pa_gppsm_seg_tab[CMM_MAXGPPSEGS];
+};
+
+/* Default CMM Mgr attributes */
+static struct cmm_mgrattrs cmm_dfltmgrattrs = {
+ /* ul_min_block_size, min block size(bytes) allocated by cmm mgr */
+ 16
+};
+
+/* Default allocation attributes */
+static struct cmm_attrs cmm_dfltalctattrs = {
+ 1 /* ul_seg_id, default segment Id for allocator */
+};
+
+/* Address translator default attrs */
+static struct cmm_xlatorattrs cmm_dfltxlatorattrs = {
+ /* ul_seg_id, does not have to match cmm_dfltalctattrs ul_seg_id */
+ 1,
+ 0, /* dw_dsp_bufs */
+ 0, /* dw_dsp_buf_size */
+ NULL, /* vm_base */
+ 0, /* dw_vm_size */
+};
+
+/* SM node representing a block of memory. */
+struct cmm_mnode {
+ struct list_head link; /* must be 1st element */
+ u32 dw_pa; /* Phys addr */
+ u32 dw_va; /* Virtual address in device process context */
+ u32 ul_size; /* SM block size in bytes */
+ u32 client_proc; /* Process that allocated this mem block */
+};
+
+/* ----------------------------------- Globals */
+static u32 refs; /* module reference count */
+
+/* ----------------------------------- Function Prototypes */
+static void add_to_free_list(struct cmm_allocator *allocator,
+ struct cmm_mnode *pnode);
+static struct cmm_allocator *get_allocator(struct cmm_object *cmm_mgr_obj,
+ u32 ul_seg_id);
+static struct cmm_mnode *get_free_block(struct cmm_allocator *allocator,
+ u32 usize);
+static struct cmm_mnode *get_node(struct cmm_object *cmm_mgr_obj, u32 dw_pa,
+ u32 dw_va, u32 ul_size);
+/* get available slot for new allocator */
+static s32 get_slot(struct cmm_object *hcmm_mgr);
+static void un_register_gppsm_seg(struct cmm_allocator *psma);
+
+/*
+ * ======== cmm_calloc_buf ========
+ * Purpose:
+ * Allocate a SM buffer, zero contents, and return the physical address
+ * and optional driver context virtual address(pp_buf_va).
+ *
+ *      The freelist is sorted in increasing size order. Get the first
+ *      block that satisfies the request and, if the remainder is large
+ *      enough, put it back on the freelist as a new node. The kept block
+ *      is placed on the in-use list.
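+ *
+ *      Worked example (illustrative): with ul_min_block_size = 16, a
+ *      request of usize = 100 is rounded up to 112; if the first free
+ *      block that is large enough holds 256 bytes, the caller keeps 112
+ *      bytes and the remaining 144 bytes go back on the freelist as a
+ *      new node.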
+ */
+void *cmm_calloc_buf(struct cmm_object *hcmm_mgr, u32 usize,
+ struct cmm_attrs *pattrs, OUT void **pp_buf_va)
+{
+ struct cmm_object *cmm_mgr_obj = (struct cmm_object *)hcmm_mgr;
+ void *buf_pa = NULL;
+ struct cmm_mnode *pnode = NULL;
+ struct cmm_mnode *new_node = NULL;
+ struct cmm_allocator *allocator = NULL;
+ u32 delta_size;
+ u8 *pbyte = NULL;
+ s32 cnt;
+
+ if (pattrs == NULL)
+ pattrs = &cmm_dfltalctattrs;
+
+ if (pp_buf_va != NULL)
+ *pp_buf_va = NULL;
+
+ if (cmm_mgr_obj && (usize != 0)) {
+ if (pattrs->ul_seg_id > 0) {
+ /* SegId > 0 is SM */
+ /* get the allocator object for this segment id */
+ allocator =
+ get_allocator(cmm_mgr_obj, pattrs->ul_seg_id);
+ /* keep block size a multiple of ul_min_block_size */
+ usize =
+ ((usize - 1) & ~(cmm_mgr_obj->ul_min_block_size -
+ 1))
+ + cmm_mgr_obj->ul_min_block_size;
+ mutex_lock(&cmm_mgr_obj->cmm_lock);
+ pnode = get_free_block(allocator, usize);
+ }
+ if (pnode) {
+ delta_size = (pnode->ul_size - usize);
+ if (delta_size >= cmm_mgr_obj->ul_min_block_size) {
+ /* create a new block with the leftovers and
+ * add to freelist */
+ new_node =
+ get_node(cmm_mgr_obj, pnode->dw_pa + usize,
+ pnode->dw_va + usize,
+ (u32) delta_size);
+ /* leftovers go free */
+ add_to_free_list(allocator, new_node);
+ /* adjust our node's size */
+ pnode->ul_size = usize;
+ }
+			/* Tag node with the client process requesting the
+			 * allocation. We'll need to free up a process's
+			 * allocated SM if the client process goes away.
+			 */
+ /* Return TGID instead of process handle */
+ pnode->client_proc = current->tgid;
+
+ /* put our node on InUse list */
+ lst_put_tail(allocator->in_use_list_head,
+ (struct list_head *)pnode);
+ buf_pa = (void *)pnode->dw_pa; /* physical address */
+ /* clear mem */
+ pbyte = (u8 *) pnode->dw_va;
+ for (cnt = 0; cnt < (s32) usize; cnt++, pbyte++)
+ *pbyte = 0;
+
+ if (pp_buf_va != NULL) {
+ /* Virtual address */
+ *pp_buf_va = (void *)pnode->dw_va;
+ }
+ }
+ mutex_unlock(&cmm_mgr_obj->cmm_lock);
+ }
+ return buf_pa;
+}
+
+/*
+ * ======== cmm_create ========
+ * Purpose:
+ * Create a communication memory manager object.
+ */
+int cmm_create(OUT struct cmm_object **ph_cmm_mgr,
+ struct dev_object *hdev_obj,
+ IN CONST struct cmm_mgrattrs *pMgrAttrs)
+{
+ struct cmm_object *cmm_obj = NULL;
+ int status = 0;
+ struct util_sysinfo sys_info;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(ph_cmm_mgr != NULL);
+
+ *ph_cmm_mgr = NULL;
+ /* create, zero, and tag a cmm mgr object */
+ cmm_obj = kzalloc(sizeof(struct cmm_object), GFP_KERNEL);
+ if (cmm_obj != NULL) {
+ if (pMgrAttrs == NULL)
+ pMgrAttrs = &cmm_dfltmgrattrs; /* set defaults */
+
+ /* 4 bytes minimum */
+ DBC_ASSERT(pMgrAttrs->ul_min_block_size >= 4);
+ /* save away smallest block allocation for this cmm mgr */
+ cmm_obj->ul_min_block_size = pMgrAttrs->ul_min_block_size;
+ /* save away the systems memory page size */
+ sys_info.dw_page_size = PAGE_SIZE;
+ sys_info.dw_allocation_granularity = PAGE_SIZE;
+ sys_info.dw_number_of_processors = 1;
+ if (DSP_SUCCEEDED(status)) {
+ cmm_obj->dw_page_size = sys_info.dw_page_size;
+ } else {
+ cmm_obj->dw_page_size = 0;
+ status = -EPERM;
+ }
+		/* Note: the DSP SM segment table (pa_gppsm_seg_tab[]) is
+		 * zeroed by the kzalloc above */
+ if (DSP_SUCCEEDED(status)) {
+ /* create node free list */
+ cmm_obj->node_free_list_head =
+ kzalloc(sizeof(struct lst_list),
+ GFP_KERNEL);
+ if (cmm_obj->node_free_list_head == NULL)
+ status = -ENOMEM;
+ else
+ INIT_LIST_HEAD(&cmm_obj->
+ node_free_list_head->head);
+ }
+ if (DSP_SUCCEEDED(status))
+ mutex_init(&cmm_obj->cmm_lock);
+
+ if (DSP_SUCCEEDED(status))
+ *ph_cmm_mgr = cmm_obj;
+ else
+ cmm_destroy(cmm_obj, true);
+
+ } else {
+ status = -ENOMEM;
+ }
+ return status;
+}
+
+/*
+ * ======== cmm_destroy ========
+ * Purpose:
+ * Release the communication memory manager resources.
+ */
+int cmm_destroy(struct cmm_object *hcmm_mgr, bool bForce)
+{
+ struct cmm_object *cmm_mgr_obj = (struct cmm_object *)hcmm_mgr;
+ struct cmm_info temp_info;
+ int status = 0;
+ s32 slot_seg;
+ struct cmm_mnode *pnode;
+
+ DBC_REQUIRE(refs > 0);
+ if (!hcmm_mgr) {
+ status = -EFAULT;
+ return status;
+ }
+ mutex_lock(&cmm_mgr_obj->cmm_lock);
+ /* If not force then fail if outstanding allocations exist */
+ if (!bForce) {
+ /* Check for outstanding memory allocations */
+ status = cmm_get_info(hcmm_mgr, &temp_info);
+ if (DSP_SUCCEEDED(status)) {
+ if (temp_info.ul_total_in_use_cnt > 0) {
+ /* outstanding allocations */
+ status = -EPERM;
+ }
+ }
+ }
+ if (DSP_SUCCEEDED(status)) {
+ /* UnRegister SM allocator */
+ for (slot_seg = 0; slot_seg < CMM_MAXGPPSEGS; slot_seg++) {
+ if (cmm_mgr_obj->pa_gppsm_seg_tab[slot_seg] != NULL) {
+ un_register_gppsm_seg
+ (cmm_mgr_obj->pa_gppsm_seg_tab[slot_seg]);
+ /* Set slot to NULL for future reuse */
+ cmm_mgr_obj->pa_gppsm_seg_tab[slot_seg] = NULL;
+ }
+ }
+ }
+ if (cmm_mgr_obj->node_free_list_head != NULL) {
+ /* Free the free nodes */
+ while (!LST_IS_EMPTY(cmm_mgr_obj->node_free_list_head)) {
+ pnode = (struct cmm_mnode *)
+ lst_get_head(cmm_mgr_obj->node_free_list_head);
+ kfree(pnode);
+ }
+ /* delete NodeFreeList list */
+ kfree(cmm_mgr_obj->node_free_list_head);
+ }
+ mutex_unlock(&cmm_mgr_obj->cmm_lock);
+ if (DSP_SUCCEEDED(status)) {
+ /* delete CS & cmm mgr object */
+ mutex_destroy(&cmm_mgr_obj->cmm_lock);
+ kfree(cmm_mgr_obj);
+ }
+ return status;
+}
+
+/*
+ * ======== cmm_exit ========
+ * Purpose:
+ * Discontinue usage of module; free resources when reference count
+ * reaches 0.
+ */
+void cmm_exit(void)
+{
+ DBC_REQUIRE(refs > 0);
+
+ refs--;
+}
+
+/*
+ * ======== cmm_free_buf ========
+ * Purpose:
+ * Free the given buffer.
+ */
+int cmm_free_buf(struct cmm_object *hcmm_mgr, void *buf_pa,
+ u32 ul_seg_id)
+{
+ struct cmm_object *cmm_mgr_obj = (struct cmm_object *)hcmm_mgr;
+ int status = -EFAULT;
+ struct cmm_mnode *mnode_obj = NULL;
+ struct cmm_allocator *allocator = NULL;
+ struct cmm_attrs *pattrs;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(buf_pa != NULL);
+
+ if (ul_seg_id == 0) {
+ pattrs = &cmm_dfltalctattrs;
+ ul_seg_id = pattrs->ul_seg_id;
+ }
+ if (!hcmm_mgr || !(ul_seg_id > 0)) {
+ status = -EFAULT;
+ return status;
+ }
+ /* get the allocator for this segment id */
+ allocator = get_allocator(cmm_mgr_obj, ul_seg_id);
+ if (allocator != NULL) {
+ mutex_lock(&cmm_mgr_obj->cmm_lock);
+ mnode_obj =
+ (struct cmm_mnode *)lst_first(allocator->in_use_list_head);
+ while (mnode_obj) {
+ if ((u32) buf_pa == mnode_obj->dw_pa) {
+ /* Found it */
+ lst_remove_elem(allocator->in_use_list_head,
+ (struct list_head *)mnode_obj);
+ /* back to freelist */
+ add_to_free_list(allocator, mnode_obj);
+ status = 0; /* all right! */
+ break;
+ }
+ /* next node. */
+ mnode_obj = (struct cmm_mnode *)
+ lst_next(allocator->in_use_list_head,
+ (struct list_head *)mnode_obj);
+ }
+ mutex_unlock(&cmm_mgr_obj->cmm_lock);
+ }
+ return status;
+}
+
+/*
+ * ======== cmm_get_handle ========
+ * Purpose:
+ * Return the communication memory manager object for this device.
+ * This is typically called from the client process.
+ */
+int cmm_get_handle(void *hprocessor, OUT struct cmm_object ** ph_cmm_mgr)
+{
+ int status = 0;
+ struct dev_object *hdev_obj;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(ph_cmm_mgr != NULL);
+ if (hprocessor != NULL)
+ status = proc_get_dev_object(hprocessor, &hdev_obj);
+ else
+ hdev_obj = dev_get_first(); /* default */
+
+ if (DSP_SUCCEEDED(status))
+ status = dev_get_cmm_mgr(hdev_obj, ph_cmm_mgr);
+
+ return status;
+}
+
+/*
+ * ======== cmm_get_info ========
+ * Purpose:
+ * Return the current memory utilization information.
+ */
+int cmm_get_info(struct cmm_object *hcmm_mgr,
+ OUT struct cmm_info *cmm_info_obj)
+{
+ struct cmm_object *cmm_mgr_obj = (struct cmm_object *)hcmm_mgr;
+ u32 ul_seg;
+ int status = 0;
+ struct cmm_allocator *altr;
+ struct cmm_mnode *mnode_obj = NULL;
+
+ DBC_REQUIRE(cmm_info_obj != NULL);
+
+ if (!hcmm_mgr) {
+ status = -EFAULT;
+ return status;
+ }
+ mutex_lock(&cmm_mgr_obj->cmm_lock);
+ cmm_info_obj->ul_num_gppsm_segs = 0; /* # of SM segments */
+ /* Total # of outstanding alloc */
+ cmm_info_obj->ul_total_in_use_cnt = 0;
+ /* min block size */
+ cmm_info_obj->ul_min_block_size = cmm_mgr_obj->ul_min_block_size;
+ /* check SM memory segments */
+ for (ul_seg = 1; ul_seg <= CMM_MAXGPPSEGS; ul_seg++) {
+ /* get the allocator object for this segment id */
+ altr = get_allocator(cmm_mgr_obj, ul_seg);
+ if (altr != NULL) {
+ cmm_info_obj->ul_num_gppsm_segs++;
+ cmm_info_obj->seg_info[ul_seg - 1].dw_seg_base_pa =
+ altr->shm_base - altr->ul_dsp_size;
+ cmm_info_obj->seg_info[ul_seg - 1].ul_total_seg_size =
+ altr->ul_dsp_size + altr->ul_sm_size;
+ cmm_info_obj->seg_info[ul_seg - 1].dw_gpp_base_pa =
+ altr->shm_base;
+ cmm_info_obj->seg_info[ul_seg - 1].ul_gpp_size =
+ altr->ul_sm_size;
+ cmm_info_obj->seg_info[ul_seg - 1].dw_dsp_base_va =
+ altr->dw_dsp_base;
+ cmm_info_obj->seg_info[ul_seg - 1].ul_dsp_size =
+ altr->ul_dsp_size;
+ cmm_info_obj->seg_info[ul_seg - 1].dw_seg_base_va =
+ altr->dw_vm_base - altr->ul_dsp_size;
+ cmm_info_obj->seg_info[ul_seg - 1].ul_in_use_cnt = 0;
+ mnode_obj = (struct cmm_mnode *)
+ lst_first(altr->in_use_list_head);
+ /* Count inUse blocks */
+ while (mnode_obj) {
+ cmm_info_obj->ul_total_in_use_cnt++;
+ cmm_info_obj->seg_info[ul_seg -
+ 1].ul_in_use_cnt++;
+ /* next node. */
+ mnode_obj = (struct cmm_mnode *)
+ lst_next(altr->in_use_list_head,
+ (struct list_head *)mnode_obj);
+ }
+ }
+ } /* end for */
+ mutex_unlock(&cmm_mgr_obj->cmm_lock);
+ return status;
+}
+
+/*
+ * ======== cmm_init ========
+ * Purpose:
+ * Initializes private state of CMM module.
+ */
+bool cmm_init(void)
+{
+ bool ret = true;
+
+ DBC_REQUIRE(refs >= 0);
+ if (ret)
+ refs++;
+
+ DBC_ENSURE((ret && (refs > 0)) || (!ret && (refs >= 0)));
+
+ return ret;
+}
+
+/*
+ * ======== cmm_register_gppsm_seg ========
+ * Purpose:
+ * Register a block of SM with the CMM to be used for later GPP SM
+ * allocations.
+ */
+int cmm_register_gppsm_seg(struct cmm_object *hcmm_mgr,
+ u32 dw_gpp_base_pa, u32 ul_size,
+ u32 dwDSPAddrOffset, s8 c_factor,
+ u32 dw_dsp_base, u32 ul_dsp_size,
+ u32 *pulSegId, u32 dw_gpp_base_va)
+{
+ struct cmm_object *cmm_mgr_obj = (struct cmm_object *)hcmm_mgr;
+ struct cmm_allocator *psma = NULL;
+ int status = 0;
+ struct cmm_mnode *new_node;
+ s32 slot_seg;
+
+ DBC_REQUIRE(ul_size > 0);
+ DBC_REQUIRE(pulSegId != NULL);
+ DBC_REQUIRE(dw_gpp_base_pa != 0);
+ DBC_REQUIRE(dw_gpp_base_va != 0);
+ DBC_REQUIRE((c_factor <= CMM_ADDTODSPPA) &&
+ (c_factor >= CMM_SUBFROMDSPPA));
+ dev_dbg(bridge, "%s: dw_gpp_base_pa %x ul_size %x dwDSPAddrOffset %x "
+ "dw_dsp_base %x ul_dsp_size %x dw_gpp_base_va %x\n", __func__,
+ dw_gpp_base_pa, ul_size, dwDSPAddrOffset, dw_dsp_base,
+ ul_dsp_size, dw_gpp_base_va);
+ if (!hcmm_mgr) {
+ status = -EFAULT;
+ return status;
+ }
+ /* make sure we have room for another allocator */
+ mutex_lock(&cmm_mgr_obj->cmm_lock);
+ slot_seg = get_slot(cmm_mgr_obj);
+ if (slot_seg < 0) {
+ /* get a slot number */
+ status = -EPERM;
+ goto func_end;
+ }
+ /* Check if input ul_size is big enough to alloc at least one block */
+ if (DSP_SUCCEEDED(status)) {
+ if (ul_size < cmm_mgr_obj->ul_min_block_size) {
+ status = -EINVAL;
+ goto func_end;
+ }
+ }
+ if (DSP_SUCCEEDED(status)) {
+ /* create, zero, and tag an SM allocator object */
+ psma = kzalloc(sizeof(struct cmm_allocator), GFP_KERNEL);
+ }
+ if (psma != NULL) {
+ psma->hcmm_mgr = hcmm_mgr; /* ref to parent */
+ psma->shm_base = dw_gpp_base_pa; /* SM Base phys */
+ psma->ul_sm_size = ul_size; /* SM segment size in bytes */
+ psma->dw_vm_base = dw_gpp_base_va;
+ psma->dw_dsp_phys_addr_offset = dwDSPAddrOffset;
+ psma->c_factor = c_factor;
+ psma->dw_dsp_base = dw_dsp_base;
+ psma->ul_dsp_size = ul_dsp_size;
+ if (psma->dw_vm_base == 0) {
+ status = -EPERM;
+ goto func_end;
+ }
+ if (DSP_SUCCEEDED(status)) {
+ /* return the actual segment identifier */
+ *pulSegId = (u32) slot_seg + 1;
+ /* create memory free list */
+ psma->free_list_head = kzalloc(sizeof(struct lst_list),
+ GFP_KERNEL);
+ if (psma->free_list_head == NULL) {
+ status = -ENOMEM;
+ goto func_end;
+ }
+ INIT_LIST_HEAD(&psma->free_list_head->head);
+ }
+ if (DSP_SUCCEEDED(status)) {
+ /* create memory in-use list */
+ psma->in_use_list_head = kzalloc(sizeof(struct
+ lst_list), GFP_KERNEL);
+ if (psma->in_use_list_head == NULL) {
+ status = -ENOMEM;
+ goto func_end;
+ }
+ INIT_LIST_HEAD(&psma->in_use_list_head->head);
+ }
+ if (DSP_SUCCEEDED(status)) {
+ /* Get a mem node for this hunk-o-memory */
+ new_node = get_node(cmm_mgr_obj, dw_gpp_base_pa,
+ psma->dw_vm_base, ul_size);
+ /* Place node on the SM allocator's free list */
+ if (new_node) {
+ lst_put_tail(psma->free_list_head,
+ (struct list_head *)new_node);
+ } else {
+ status = -ENOMEM;
+ goto func_end;
+ }
+ }
+ if (DSP_FAILED(status)) {
+ /* Cleanup allocator */
+ un_register_gppsm_seg(psma);
+ }
+ } else {
+ status = -ENOMEM;
+ goto func_end;
+ }
+ /* make entry */
+ if (DSP_SUCCEEDED(status))
+ cmm_mgr_obj->pa_gppsm_seg_tab[slot_seg] = psma;
+
+func_end:
+ mutex_unlock(&cmm_mgr_obj->cmm_lock);
+ return status;
+}
+
+/*
+ * ======== cmm_un_register_gppsm_seg ========
+ * Purpose:
+ * UnRegister GPP SM segments with the CMM.
+ */
+int cmm_un_register_gppsm_seg(struct cmm_object *hcmm_mgr,
+ u32 ul_seg_id)
+{
+ struct cmm_object *cmm_mgr_obj = (struct cmm_object *)hcmm_mgr;
+ int status = 0;
+ struct cmm_allocator *psma;
+ u32 ul_id = ul_seg_id;
+
+ DBC_REQUIRE(ul_seg_id > 0);
+ if (hcmm_mgr) {
+ if (ul_seg_id == CMM_ALLSEGMENTS)
+ ul_id = 1;
+
+ if ((ul_id > 0) && (ul_id <= CMM_MAXGPPSEGS)) {
+ while (ul_id <= CMM_MAXGPPSEGS) {
+ mutex_lock(&cmm_mgr_obj->cmm_lock);
+ /* slot = seg_id-1 */
+ psma = cmm_mgr_obj->pa_gppsm_seg_tab[ul_id - 1];
+ if (psma != NULL) {
+ un_register_gppsm_seg(psma);
+ /* Set alctr ptr to NULL for future
+ * reuse */
+ cmm_mgr_obj->pa_gppsm_seg_tab[ul_id -
+ 1] = NULL;
+ } else if (ul_seg_id != CMM_ALLSEGMENTS) {
+ status = -EPERM;
+ }
+ mutex_unlock(&cmm_mgr_obj->cmm_lock);
+ if (ul_seg_id != CMM_ALLSEGMENTS)
+ break;
+
+ ul_id++;
+ } /* end while */
+ } else {
+ status = -EINVAL;
+ }
+ } else {
+ status = -EFAULT;
+ }
+ return status;
+}
+
+/*
+ * ======== un_register_gppsm_seg ========
+ * Purpose:
+ * UnRegister the SM allocator by freeing all its resources and
+ * nulling cmm mgr table entry.
+ * Note:
+ * This routine is always called within cmm lock crit sect.
+ */
+static void un_register_gppsm_seg(struct cmm_allocator *psma)
+{
+ struct cmm_mnode *mnode_obj = NULL;
+ struct cmm_mnode *next_node = NULL;
+
+ DBC_REQUIRE(psma != NULL);
+ if (psma->free_list_head != NULL) {
+ /* free nodes on free list */
+ mnode_obj = (struct cmm_mnode *)lst_first(psma->free_list_head);
+ while (mnode_obj) {
+ next_node =
+ (struct cmm_mnode *)lst_next(psma->free_list_head,
+ (struct list_head *)
+ mnode_obj);
+ lst_remove_elem(psma->free_list_head,
+ (struct list_head *)mnode_obj);
+ kfree((void *)mnode_obj);
+ /* next node. */
+ mnode_obj = next_node;
+ }
+ kfree(psma->free_list_head); /* delete freelist */
+ /* free nodes on InUse list */
+ mnode_obj =
+ (struct cmm_mnode *)lst_first(psma->in_use_list_head);
+ while (mnode_obj) {
+ next_node =
+ (struct cmm_mnode *)lst_next(psma->in_use_list_head,
+ (struct list_head *)
+ mnode_obj);
+ lst_remove_elem(psma->in_use_list_head,
+ (struct list_head *)mnode_obj);
+ kfree((void *)mnode_obj);
+ /* next node. */
+ mnode_obj = next_node;
+ }
+ kfree(psma->in_use_list_head); /* delete InUse list */
+ }
+ if ((void *)psma->dw_vm_base != NULL)
+ MEM_UNMAP_LINEAR_ADDRESS((void *)psma->dw_vm_base);
+
+ /* Free allocator itself */
+ kfree(psma);
+}
+
+/*
+ * ======== get_slot ========
+ * Purpose:
+ * An available slot # is returned. Returns negative on failure.
+ */
+static s32 get_slot(struct cmm_object *cmm_mgr_obj)
+{
+ s32 slot_seg = -1; /* neg on failure */
+ DBC_REQUIRE(cmm_mgr_obj != NULL);
+ /* get first available slot in cmm mgr SMSegTab[] */
+ for (slot_seg = 0; slot_seg < CMM_MAXGPPSEGS; slot_seg++) {
+ if (cmm_mgr_obj->pa_gppsm_seg_tab[slot_seg] == NULL)
+ break;
+
+ }
+ if (slot_seg == CMM_MAXGPPSEGS)
+ slot_seg = -1; /* failed */
+
+ return slot_seg;
+}
+
+/*
+ * ======== get_node ========
+ * Purpose:
+ * Get a memory node from freelist or create a new one.
+ */
+static struct cmm_mnode *get_node(struct cmm_object *cmm_mgr_obj, u32 dw_pa,
+ u32 dw_va, u32 ul_size)
+{
+ struct cmm_mnode *pnode = NULL;
+
+ DBC_REQUIRE(cmm_mgr_obj != NULL);
+ DBC_REQUIRE(dw_pa != 0);
+ DBC_REQUIRE(dw_va != 0);
+ DBC_REQUIRE(ul_size != 0);
+ /* Check cmm mgr's node freelist */
+ if (LST_IS_EMPTY(cmm_mgr_obj->node_free_list_head)) {
+ pnode = kzalloc(sizeof(struct cmm_mnode), GFP_KERNEL);
+ } else {
+ /* surely a valid element */
+ pnode = (struct cmm_mnode *)
+ lst_get_head(cmm_mgr_obj->node_free_list_head);
+ }
+ if (pnode) {
+ lst_init_elem((struct list_head *)pnode); /* set self */
+ pnode->dw_pa = dw_pa; /* Physical addr of start of block */
+		pnode->dw_va = dw_va;	/* Virtual addr of start of block */
+ pnode->ul_size = ul_size; /* Size of block */
+ }
+ return pnode;
+}
+
+/*
+ * ======== delete_node ========
+ * Purpose:
+ * Put a memory node on the cmm nodelist for later use.
+ * Doesn't actually delete the node. Heap thrashing friendly.
+ */
+static void delete_node(struct cmm_object *cmm_mgr_obj, struct cmm_mnode *pnode)
+{
+ DBC_REQUIRE(pnode != NULL);
+ lst_init_elem((struct list_head *)pnode); /* init .self ptr */
+ lst_put_tail(cmm_mgr_obj->node_free_list_head,
+ (struct list_head *)pnode);
+}
+
+/*
+ * ====== get_free_block ========
+ * Purpose:
+ * Scan the free block list and return the first block that satisfies
+ * the size.
+ */
+static struct cmm_mnode *get_free_block(struct cmm_allocator *allocator,
+ u32 usize)
+{
+ if (allocator) {
+ struct cmm_mnode *mnode_obj = (struct cmm_mnode *)
+ lst_first(allocator->free_list_head);
+ while (mnode_obj) {
+ if (usize <= (u32) mnode_obj->ul_size) {
+ lst_remove_elem(allocator->free_list_head,
+ (struct list_head *)mnode_obj);
+ return mnode_obj;
+ }
+ /* next node. */
+ mnode_obj = (struct cmm_mnode *)
+ lst_next(allocator->free_list_head,
+ (struct list_head *)mnode_obj);
+ }
+ }
+ return NULL;
+}
+
+/*
+ * ======== add_to_free_list ========
+ * Purpose:
+ *      Coalesce node into the freelist in ascending size order.
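+ *
+ *      Example (illustrative): freeing a 0x100-byte block at PA 0x1000
+ *      while the freelist already holds a 0x100-byte block ending at PA
+ *      0x1000 and a 0x200-byte block starting at PA 0x1100 collapses all
+ *      three into a single 0x400-byte node at PA 0xf00.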
+ */
+static void add_to_free_list(struct cmm_allocator *allocator,
+ struct cmm_mnode *pnode)
+{
+ struct cmm_mnode *node_prev = NULL;
+ struct cmm_mnode *node_next = NULL;
+ struct cmm_mnode *mnode_obj;
+ u32 dw_this_pa;
+ u32 dw_next_pa;
+
+ DBC_REQUIRE(pnode != NULL);
+ DBC_REQUIRE(allocator != NULL);
+ dw_this_pa = pnode->dw_pa;
+ dw_next_pa = NEXT_PA(pnode);
+ mnode_obj = (struct cmm_mnode *)lst_first(allocator->free_list_head);
+ while (mnode_obj) {
+ if (dw_this_pa == NEXT_PA(mnode_obj)) {
+ /* found the block ahead of this one */
+ node_prev = mnode_obj;
+ } else if (dw_next_pa == mnode_obj->dw_pa) {
+ node_next = mnode_obj;
+ }
+ if ((node_prev == NULL) || (node_next == NULL)) {
+ /* next node. */
+ mnode_obj = (struct cmm_mnode *)
+ lst_next(allocator->free_list_head,
+ (struct list_head *)mnode_obj);
+ } else {
+ /* got 'em */
+ break;
+ }
+ } /* while */
+ if (node_prev != NULL) {
+ /* combine with previous block */
+ lst_remove_elem(allocator->free_list_head,
+ (struct list_head *)node_prev);
+ /* grow node to hold both */
+ pnode->ul_size += node_prev->ul_size;
+ pnode->dw_pa = node_prev->dw_pa;
+ pnode->dw_va = node_prev->dw_va;
+ /* place node on mgr nodeFreeList */
+ delete_node((struct cmm_object *)allocator->hcmm_mgr,
+ node_prev);
+ }
+ if (node_next != NULL) {
+ /* combine with next block */
+ lst_remove_elem(allocator->free_list_head,
+ (struct list_head *)node_next);
+ /* grow da node */
+ pnode->ul_size += node_next->ul_size;
+ /* place node on mgr nodeFreeList */
+ delete_node((struct cmm_object *)allocator->hcmm_mgr,
+ node_next);
+ }
+ /* Now, let's add to freelist in increasing size order */
+ mnode_obj = (struct cmm_mnode *)lst_first(allocator->free_list_head);
+ while (mnode_obj) {
+ if (pnode->ul_size <= mnode_obj->ul_size)
+ break;
+
+ /* next node. */
+ mnode_obj =
+ (struct cmm_mnode *)lst_next(allocator->free_list_head,
+ (struct list_head *)mnode_obj);
+ }
+ /* if mnode_obj is NULL then add our pnode to the end of the freelist */
+ if (mnode_obj == NULL) {
+ lst_put_tail(allocator->free_list_head,
+ (struct list_head *)pnode);
+ } else {
+ /* insert our node before the current traversed node */
+ lst_insert_before(allocator->free_list_head,
+ (struct list_head *)pnode,
+ (struct list_head *)mnode_obj);
+ }
+}
+
+/*
+ * ======== get_allocator ========
+ * Purpose:
+ * Return the allocator for the given SM Segid.
+ * SegIds: 1,2,3..max.
+ */
+static struct cmm_allocator *get_allocator(struct cmm_object *cmm_mgr_obj,
+ u32 ul_seg_id)
+{
+ struct cmm_allocator *allocator = NULL;
+
+ DBC_REQUIRE(cmm_mgr_obj != NULL);
+ DBC_REQUIRE((ul_seg_id > 0) && (ul_seg_id <= CMM_MAXGPPSEGS));
+ allocator = cmm_mgr_obj->pa_gppsm_seg_tab[ul_seg_id - 1];
+ if (allocator != NULL) {
+ /* make sure it's for real */
+ if (!allocator) {
+ allocator = NULL;
+ DBC_ASSERT(false);
+ }
+ }
+ return allocator;
+}
+
+/*
+ * The CMM_Xlator[xxx] routines below are used by Node and Stream
+ * to perform SM address translation to the client process address space.
+ * A "translator" object is created by a node/stream for each SM seg used.
+ */
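+
+/*
+ * Illustrative translator usage (a sketch from the routines below, not an
+ * actual call site): cmm_xlator_create(&xlator, cmm_mgr, NULL);
+ * cmm_xlator_info(xlator, &va_base, va_size, 1, true); then
+ * cmm_xlator_alloc_buf()/cmm_xlator_free_buf() or cmm_xlator_translate() as
+ * needed; finally cmm_xlator_delete(xlator, false).
+ */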
+
+/*
+ * ======== cmm_xlator_create ========
+ * Purpose:
+ * Create an address translator object.
+ */
+int cmm_xlator_create(OUT struct cmm_xlatorobject **phXlator,
+ struct cmm_object *hcmm_mgr,
+ struct cmm_xlatorattrs *pXlatorAttrs)
+{
+ struct cmm_xlator *xlator_object = NULL;
+ int status = 0;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(phXlator != NULL);
+ DBC_REQUIRE(hcmm_mgr != NULL);
+
+ *phXlator = NULL;
+ if (pXlatorAttrs == NULL)
+ pXlatorAttrs = &cmm_dfltxlatorattrs; /* set defaults */
+
+ xlator_object = kzalloc(sizeof(struct cmm_xlator), GFP_KERNEL);
+ if (xlator_object != NULL) {
+ xlator_object->hcmm_mgr = hcmm_mgr; /* ref back to CMM */
+ /* SM seg_id */
+ xlator_object->ul_seg_id = pXlatorAttrs->ul_seg_id;
+ } else {
+ status = -ENOMEM;
+ }
+ if (DSP_SUCCEEDED(status))
+ *phXlator = (struct cmm_xlatorobject *)xlator_object;
+
+ return status;
+}
+
+/*
+ * ======== cmm_xlator_delete ========
+ * Purpose:
+ * Free the Xlator resources.
+ * VM gets freed later.
+ */
+int cmm_xlator_delete(struct cmm_xlatorobject *xlator, bool bForce)
+{
+ struct cmm_xlator *xlator_obj = (struct cmm_xlator *)xlator;
+ int status = 0;
+
+ DBC_REQUIRE(refs > 0);
+
+ if (xlator_obj)
+ kfree(xlator_obj);
+ else
+ status = -EFAULT;
+
+ return status;
+}
+
+/*
+ * ======== cmm_xlator_alloc_buf ========
+ */
+void *cmm_xlator_alloc_buf(struct cmm_xlatorobject *xlator, void *pVaBuf,
+ u32 uPaSize)
+{
+ struct cmm_xlator *xlator_obj = (struct cmm_xlator *)xlator;
+ void *pbuf = NULL;
+ struct cmm_attrs attrs;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(xlator != NULL);
+ DBC_REQUIRE(xlator_obj->hcmm_mgr != NULL);
+ DBC_REQUIRE(pVaBuf != NULL);
+ DBC_REQUIRE(uPaSize > 0);
+ DBC_REQUIRE(xlator_obj->ul_seg_id > 0);
+
+ if (xlator_obj) {
+ attrs.ul_seg_id = xlator_obj->ul_seg_id;
+ *(volatile u32 *)pVaBuf = 0;
+ /* Alloc SM */
+ pbuf =
+ cmm_calloc_buf(xlator_obj->hcmm_mgr, uPaSize, &attrs, NULL);
+ if (pbuf) {
+ /* convert to translator(node/strm) process Virtual
+ * address */
+ *(volatile u32 **)pVaBuf =
+ (u32 *) cmm_xlator_translate(xlator,
+ pbuf, CMM_PA2VA);
+ }
+ }
+ return pbuf;
+}
+
+/*
+ * ======== cmm_xlator_free_buf ========
+ * Purpose:
+ * Free the given SM buffer and descriptor.
+ * Does not free virtual memory.
+ */
+int cmm_xlator_free_buf(struct cmm_xlatorobject *xlator, void *pBufVa)
+{
+ struct cmm_xlator *xlator_obj = (struct cmm_xlator *)xlator;
+ int status = -EPERM;
+ void *buf_pa = NULL;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(pBufVa != NULL);
+ DBC_REQUIRE(xlator_obj->ul_seg_id > 0);
+
+ if (xlator_obj) {
+ /* convert Va to Pa so we can free it. */
+ buf_pa = cmm_xlator_translate(xlator, pBufVa, CMM_VA2PA);
+ if (buf_pa) {
+ status = cmm_free_buf(xlator_obj->hcmm_mgr, buf_pa,
+ xlator_obj->ul_seg_id);
+ if (DSP_FAILED(status)) {
+ /* Uh oh, this shouldn't happen. Descriptor
+ * gone! */
+ DBC_ASSERT(false); /* CMM is leaking mem */
+ }
+ }
+ }
+ return status;
+}
+
+/*
+ * ======== cmm_xlator_info ========
+ * Purpose:
+ * Set/Get translator info.
+ */
+int cmm_xlator_info(struct cmm_xlatorobject *xlator, IN OUT u8 ** paddr,
+ u32 ul_size, u32 uSegId, bool set_info)
+{
+ struct cmm_xlator *xlator_obj = (struct cmm_xlator *)xlator;
+ int status = 0;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(paddr != NULL);
+ DBC_REQUIRE((uSegId > 0) && (uSegId <= CMM_MAXGPPSEGS));
+
+ if (xlator_obj) {
+ if (set_info) {
+ /* set translators virtual address range */
+ xlator_obj->dw_virt_base = (u32) *paddr;
+ xlator_obj->ul_virt_size = ul_size;
+ } else { /* return virt base address */
+ *paddr = (u8 *) xlator_obj->dw_virt_base;
+ }
+ } else {
+ status = -EFAULT;
+ }
+ return status;
+}
+
+/*
+ * ======== cmm_xlator_translate ========
+ */
+void *cmm_xlator_translate(struct cmm_xlatorobject *xlator, void *paddr,
+ enum cmm_xlatetype xType)
+{
+ u32 dw_addr_xlate = 0;
+ struct cmm_xlator *xlator_obj = (struct cmm_xlator *)xlator;
+ struct cmm_object *cmm_mgr_obj = NULL;
+ struct cmm_allocator *allocator = NULL;
+ u32 dw_offset = 0;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(paddr != NULL);
+ DBC_REQUIRE((xType >= CMM_VA2PA) && (xType <= CMM_DSPPA2PA));
+
+ if (!xlator_obj)
+ goto loop_cont;
+
+ cmm_mgr_obj = (struct cmm_object *)xlator_obj->hcmm_mgr;
+ /* get this translator's default SM allocator */
+ DBC_ASSERT(xlator_obj->ul_seg_id > 0);
+ allocator = cmm_mgr_obj->pa_gppsm_seg_tab[xlator_obj->ul_seg_id - 1];
+ if (!allocator)
+ goto loop_cont;
+
+ if ((xType == CMM_VA2DSPPA) || (xType == CMM_VA2PA) ||
+ (xType == CMM_PA2VA)) {
+ if (xType == CMM_PA2VA) {
+ /* Gpp Va = Va Base + offset */
+ dw_offset = (u8 *) paddr - (u8 *) (allocator->shm_base -
+ allocator->
+ ul_dsp_size);
+ dw_addr_xlate = xlator_obj->dw_virt_base + dw_offset;
+ /* Check if translated Va base is in range */
+ if ((dw_addr_xlate < xlator_obj->dw_virt_base) ||
+ (dw_addr_xlate >=
+ (xlator_obj->dw_virt_base +
+ xlator_obj->ul_virt_size))) {
+ dw_addr_xlate = 0; /* bad address */
+ }
+ } else {
+ /* Gpp PA = Gpp Base + offset */
+ dw_offset =
+ (u8 *) paddr - (u8 *) xlator_obj->dw_virt_base;
+ dw_addr_xlate =
+ allocator->shm_base - allocator->ul_dsp_size +
+ dw_offset;
+ }
+ } else {
+ dw_addr_xlate = (u32) paddr;
+ }
+	/* Now convert address to proper target physical address if needed */
+ if ((xType == CMM_VA2DSPPA) || (xType == CMM_PA2DSPPA)) {
+ /* Got Gpp Pa now, convert to DSP Pa */
+ dw_addr_xlate =
+ GPPPA2DSPPA((allocator->shm_base - allocator->ul_dsp_size),
+ dw_addr_xlate,
+ allocator->dw_dsp_phys_addr_offset *
+ allocator->c_factor);
+ } else if (xType == CMM_DSPPA2PA) {
+ /* Got DSP Pa, convert to GPP Pa */
+ dw_addr_xlate =
+ DSPPA2GPPPA(allocator->shm_base - allocator->ul_dsp_size,
+ dw_addr_xlate,
+ allocator->dw_dsp_phys_addr_offset *
+ allocator->c_factor);
+ }
+loop_cont:
+ return (void *)dw_addr_xlate;
+}
diff --git a/drivers/staging/tidspbridge/pmgr/cod.c b/drivers/staging/tidspbridge/pmgr/cod.c
new file mode 100644
index 0000000..f9c0f30
--- /dev/null
+++ b/drivers/staging/tidspbridge/pmgr/cod.c
@@ -0,0 +1,658 @@
+/*
+ * cod.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * This module implements DSP code management for the DSP/BIOS Bridge
+ * environment. It is mostly a thin wrapper around the dynamic loader (DBLL).
+ *
+ * This module provides an interface for loading both static and
+ * dynamic code objects onto DSP systems.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
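+
+/*
+ * Illustrative usage (a sketch assembled from this file's interface, not an
+ * actual caller): cod_create(&cod_mgr, dummy_file, NULL); then, once a base
+ * image has been loaded, cod_get_sym_value(cod_mgr, "_symbol", &val),
+ * cod_open(cod_mgr, lib_path, COD_SYMB, &lib), cod_get_section(lib, ".text",
+ * &addr, &len) and cod_close(lib) may be used; cod_delete(cod_mgr) tears the
+ * manager down.
+ */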
+
+/* ----------------------------------- Host OS */
+#include <dspbridge/host_os.h>
+#include <linux/fs.h>
+#include <linux/uaccess.h>
+
+/* ----------------------------------- DSP/BIOS Bridge */
+#include <dspbridge/std.h>
+#include <dspbridge/dbdefs.h>
+
+/* ----------------------------------- Trace & Debug */
+#include <dspbridge/dbc.h>
+
+/* ----------------------------------- OS Adaptation Layer */
+#include <dspbridge/ldr.h>
+
+/* ----------------------------------- Platform Manager */
+/* Include appropriate loader header file */
+#include <dspbridge/dbll.h>
+
+/* ----------------------------------- This */
+#include <dspbridge/cod.h>
+
+/* magic number for handle validation */
+#define MAGIC 0xc001beef
+
+/* macro to validate COD manager handles */
+#define IS_VALID(h) ((h) != NULL && (h)->ul_magic == MAGIC)
+
+/*
+ * ======== cod_manager ========
+ */
+struct cod_manager {
+ struct dbll_tar_obj *target;
+ struct dbll_library_obj *base_lib;
+ bool loaded; /* Base library loaded? */
+ u32 ul_entry;
+ struct ldr_module *dll_obj;
+ struct dbll_fxns fxns;
+ struct dbll_attrs attrs;
+ char sz_zl_file[COD_MAXPATHLENGTH];
+ u32 ul_magic;
+};
+
+/*
+ * ======== cod_libraryobj ========
+ */
+struct cod_libraryobj {
+ struct dbll_library_obj *dbll_lib;
+ struct cod_manager *cod_mgr;
+};
+
+static u32 refs = 0L;
+
+static struct dbll_fxns ldr_fxns = {
+ (dbll_close_fxn) dbll_close,
+ (dbll_create_fxn) dbll_create,
+ (dbll_delete_fxn) dbll_delete,
+ (dbll_exit_fxn) dbll_exit,
+ (dbll_get_attrs_fxn) dbll_get_attrs,
+ (dbll_get_addr_fxn) dbll_get_addr,
+ (dbll_get_c_addr_fxn) dbll_get_c_addr,
+ (dbll_get_sect_fxn) dbll_get_sect,
+ (dbll_init_fxn) dbll_init,
+ (dbll_load_fxn) dbll_load,
+ (dbll_load_sect_fxn) dbll_load_sect,
+ (dbll_open_fxn) dbll_open,
+ (dbll_read_sect_fxn) dbll_read_sect,
+ (dbll_set_attrs_fxn) dbll_set_attrs,
+ (dbll_unload_fxn) dbll_unload,
+ (dbll_unload_sect_fxn) dbll_unload_sect,
+};
+
+static bool no_op(void);
+
+/*
+ * File operations (originally were under kfile.c)
+ */
+static s32 cod_f_close(struct file *filp)
+{
+ /* Check for valid handle */
+ if (!filp)
+ return -EFAULT;
+
+ filp_close(filp, NULL);
+
+ return 0;
+}
+
+static struct file *cod_f_open(CONST char *psz_file_name, CONST char *pszMode)
+{
+ mm_segment_t fs;
+ struct file *filp;
+
+ fs = get_fs();
+ set_fs(get_ds());
+
+ /* ignore given mode and open file as read-only */
+ filp = filp_open(psz_file_name, O_RDONLY, 0);
+
+ if (IS_ERR(filp))
+ filp = NULL;
+
+ set_fs(fs);
+
+ return filp;
+}
+
+static s32 cod_f_read(void __user *pbuffer, s32 size, s32 cCount,
+ struct file *filp)
+{
+ /* check for valid file handle */
+ if (!filp)
+ return -EFAULT;
+
+ if ((size > 0) && (cCount > 0) && pbuffer) {
+ u32 dw_bytes_read;
+ mm_segment_t fs;
+
+ /* read from file */
+ fs = get_fs();
+ set_fs(get_ds());
+ dw_bytes_read = filp->f_op->read(filp, pbuffer, size * cCount,
+ &(filp->f_pos));
+ set_fs(fs);
+
+ if (!dw_bytes_read)
+ return -EBADF;
+
+ return dw_bytes_read / size;
+ }
+
+ return -EINVAL;
+}
+
+static s32 cod_f_seek(struct file *filp, s32 lOffset, s32 cOrigin)
+{
+ loff_t dw_cur_pos;
+
+ /* check for valid file handle */
+ if (!filp)
+ return -EFAULT;
+
+ /* based on the origin flag, move the internal pointer */
+ dw_cur_pos = filp->f_op->llseek(filp, lOffset, cOrigin);
+
+ if ((s32) dw_cur_pos < 0)
+ return -EPERM;
+
+ return 0;
+}
+
+static s32 cod_f_tell(struct file *filp)
+{
+ loff_t dw_cur_pos;
+
+ if (!filp)
+ return -EFAULT;
+
+ /* Get current position */
+ dw_cur_pos = filp->f_op->llseek(filp, 0, SEEK_CUR);
+
+ if ((s32) dw_cur_pos < 0)
+ return -EPERM;
+
+ return dw_cur_pos;
+}
+
+/*
+ * ======== cod_close ========
+ */
+void cod_close(struct cod_libraryobj *lib)
+{
+ struct cod_manager *hmgr;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(lib != NULL);
+ DBC_REQUIRE(IS_VALID(((struct cod_libraryobj *)lib)->cod_mgr));
+
+ hmgr = lib->cod_mgr;
+ hmgr->fxns.close_fxn(lib->dbll_lib);
+
+ kfree(lib);
+}
+
+/*
+ * ======== cod_create ========
+ * Purpose:
+ * Create an object to manage code on a DSP system.
+ * This object can be used to load an initial program image with
+ * arguments that can later be expanded with
+ * dynamically loaded object files.
+ *
+ */
+int cod_create(OUT struct cod_manager **phMgr, char *pstrDummyFile,
+ IN OPTIONAL CONST struct cod_attrs *attrs)
+{
+ struct cod_manager *mgr_new;
+ struct dbll_attrs zl_attrs;
+ int status = 0;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(phMgr != NULL);
+
+ /* assume failure */
+ *phMgr = NULL;
+
+ /* we don't support non-default attrs yet */
+ if (attrs != NULL)
+ return -ENOSYS;
+
+ mgr_new = kzalloc(sizeof(struct cod_manager), GFP_KERNEL);
+ if (mgr_new == NULL)
+ return -ENOMEM;
+
+ mgr_new->ul_magic = MAGIC;
+
+ /* Set up loader functions */
+ mgr_new->fxns = ldr_fxns;
+
+ /* initialize the ZL module */
+ mgr_new->fxns.init_fxn();
+
+ zl_attrs.alloc = (dbll_alloc_fxn) no_op;
+ zl_attrs.free = (dbll_free_fxn) no_op;
+ zl_attrs.fread = (dbll_read_fxn) cod_f_read;
+ zl_attrs.fseek = (dbll_seek_fxn) cod_f_seek;
+ zl_attrs.ftell = (dbll_tell_fxn) cod_f_tell;
+ zl_attrs.fclose = (dbll_f_close_fxn) cod_f_close;
+ zl_attrs.fopen = (dbll_f_open_fxn) cod_f_open;
+ zl_attrs.sym_lookup = NULL;
+ zl_attrs.base_image = true;
+ zl_attrs.log_write = NULL;
+ zl_attrs.log_write_handle = NULL;
+ zl_attrs.write = NULL;
+ zl_attrs.rmm_handle = NULL;
+ zl_attrs.input_params = NULL;
+ zl_attrs.sym_handle = NULL;
+ zl_attrs.sym_arg = NULL;
+
+ mgr_new->attrs = zl_attrs;
+
+ status = mgr_new->fxns.create_fxn(&mgr_new->target, &zl_attrs);
+
+ if (DSP_FAILED(status)) {
+ cod_delete(mgr_new);
+ return -ESPIPE;
+ }
+
+ /* return the new manager */
+ *phMgr = mgr_new;
+
+ return 0;
+}
+
+/*
+ * ======== cod_delete ========
+ * Purpose:
+ * Delete a code manager object.
+ */
+void cod_delete(struct cod_manager *hmgr)
+{
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(IS_VALID(hmgr));
+
+ if (hmgr->base_lib) {
+ if (hmgr->loaded)
+ hmgr->fxns.unload_fxn(hmgr->base_lib, &hmgr->attrs);
+
+ hmgr->fxns.close_fxn(hmgr->base_lib);
+ }
+ if (hmgr->target) {
+ hmgr->fxns.delete_fxn(hmgr->target);
+ hmgr->fxns.exit_fxn();
+ }
+ hmgr->ul_magic = ~MAGIC;
+ kfree(hmgr);
+}
+
+/*
+ * ======== cod_exit ========
+ * Purpose:
+ * Discontinue usage of the COD module.
+ *
+ */
+void cod_exit(void)
+{
+ DBC_REQUIRE(refs > 0);
+
+ refs--;
+
+ DBC_ENSURE(refs >= 0);
+}
+
+/*
+ * ======== cod_get_base_lib ========
+ * Purpose:
+ * Get handle to the base image DBL library.
+ */
+int cod_get_base_lib(struct cod_manager *cod_mgr_obj,
+ struct dbll_library_obj **plib)
+{
+ int status = 0;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(IS_VALID(cod_mgr_obj));
+ DBC_REQUIRE(plib != NULL);
+
+ *plib = (struct dbll_library_obj *)cod_mgr_obj->base_lib;
+
+ return status;
+}
+
+/*
+ * ======== cod_get_base_name ========
+ */
+int cod_get_base_name(struct cod_manager *cod_mgr_obj, char *pszName,
+ u32 usize)
+{
+ int status = 0;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(IS_VALID(cod_mgr_obj));
+ DBC_REQUIRE(pszName != NULL);
+
+ if (usize <= COD_MAXPATHLENGTH)
+ strncpy(pszName, cod_mgr_obj->sz_zl_file, usize);
+ else
+ status = -EPERM;
+
+ return status;
+}
+
+/*
+ * ======== cod_get_entry ========
+ * Purpose:
+ * Retrieve the entry point of a loaded DSP program image
+ *
+ */
+int cod_get_entry(struct cod_manager *cod_mgr_obj, u32 *pulEntry)
+{
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(IS_VALID(cod_mgr_obj));
+ DBC_REQUIRE(pulEntry != NULL);
+
+ *pulEntry = cod_mgr_obj->ul_entry;
+
+ return 0;
+}
+
+/*
+ * ======== cod_get_loader ========
+ * Purpose:
+ * Get handle to the DBLL loader.
+ */
+int cod_get_loader(struct cod_manager *cod_mgr_obj,
+ struct dbll_tar_obj **phLoader)
+{
+ int status = 0;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(IS_VALID(cod_mgr_obj));
+ DBC_REQUIRE(phLoader != NULL);
+
+ *phLoader = (struct dbll_tar_obj *)cod_mgr_obj->target;
+
+ return status;
+}
+
+/*
+ * ======== cod_get_section ========
+ * Purpose:
+ * Retrieve the starting address and length of a section in the COFF file
+ * given the section name.
+ */
+int cod_get_section(struct cod_libraryobj *lib, IN char *pstrSect,
+ OUT u32 *puAddr, OUT u32 *puLen)
+{
+ struct cod_manager *cod_mgr_obj;
+ int status = 0;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(lib != NULL);
+ DBC_REQUIRE(IS_VALID(lib->cod_mgr));
+ DBC_REQUIRE(pstrSect != NULL);
+ DBC_REQUIRE(puAddr != NULL);
+ DBC_REQUIRE(puLen != NULL);
+
+ *puAddr = 0;
+ *puLen = 0;
+ if (lib != NULL) {
+ cod_mgr_obj = lib->cod_mgr;
+ status = cod_mgr_obj->fxns.get_sect_fxn(lib->dbll_lib, pstrSect,
+ puAddr, puLen);
+ } else {
+ status = -ESPIPE;
+ }
+
+ DBC_ENSURE(DSP_SUCCEEDED(status) || ((*puAddr == 0) && (*puLen == 0)));
+
+ return status;
+}
+
+/*
+ * ======== cod_get_sym_value ========
+ * Purpose:
+ * Retrieve the value for the specified symbol. The symbol is first
+ * searched for literally and then, if not found, searched for as a
+ * C symbol (the same name with a leading underscore prepended).
+ *
+ */
+int cod_get_sym_value(struct cod_manager *hmgr, char *pstrSym,
+ u32 *pul_value)
+{
+ struct dbll_sym_val *dbll_sym;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(IS_VALID(hmgr));
+ DBC_REQUIRE(pstrSym != NULL);
+ DBC_REQUIRE(pul_value != NULL);
+
+ dev_dbg(bridge, "%s: hmgr: %p pstrSym: %s pul_value: %p\n",
+ __func__, hmgr, pstrSym, pul_value);
+ if (hmgr->base_lib) {
+ if (!hmgr->fxns.
+ get_addr_fxn(hmgr->base_lib, pstrSym, &dbll_sym)) {
+ if (!hmgr->fxns.
+ get_c_addr_fxn(hmgr->base_lib, pstrSym, &dbll_sym))
+ return -ESPIPE;
+ }
+ } else {
+ return -ESPIPE;
+ }
+
+ *pul_value = dbll_sym->value;
+
+ return 0;
+}
+
+/*
+ * ======== cod_init ========
+ * Purpose:
+ * Initialize the COD module's private state.
+ *
+ */
+bool cod_init(void)
+{
+ bool ret = true;
+
+ DBC_REQUIRE(refs >= 0);
+
+ if (ret)
+ refs++;
+
+ DBC_ENSURE((ret && refs > 0) || (!ret && refs >= 0));
+ return ret;
+}
+
+/*
+ * ======== cod_load_base ========
+ * Purpose:
+ * Load the initial program image, optionally with command-line arguments,
+ * on the DSP system managed by the supplied handle. The program to be
+ * loaded must be the first element of the args array and must be a fully
+ * qualified pathname.
+ * Details:
+ * If nArgc doesn't match the number of arguments in the aArgs array, the
+ * aArgs array is searched for a NULL terminating entry, and nArgc is
+ * recalculated to reflect the actual count. In this way, NULL-terminated
+ * aArgs arrays are supported even when nArgc is too large.
+ */
+int cod_load_base(struct cod_manager *hmgr, u32 nArgc, char *aArgs[],
+ cod_writefxn pfn_write, void *pArb, char *envp[])
+{
+ dbll_flags flags;
+ struct dbll_attrs save_attrs;
+ struct dbll_attrs new_attrs;
+ int status;
+ u32 i;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(IS_VALID(hmgr));
+ DBC_REQUIRE(nArgc > 0);
+ DBC_REQUIRE(aArgs != NULL);
+ DBC_REQUIRE(aArgs[0] != NULL);
+ DBC_REQUIRE(pfn_write != NULL);
+ DBC_REQUIRE(hmgr->base_lib != NULL);
+
+ /*
+ * Make sure every argv[] entry counted by nArgc has a value; otherwise
+ * trim nArgc down to the actual length of the NULL-terminated array.
+ */
+ for (i = 0; i < nArgc; i++) {
+ if (aArgs[i] == NULL) {
+ nArgc = i;
+ break;
+ }
+ }
+
+ /* set the write function for this operation */
+ hmgr->fxns.get_attrs_fxn(hmgr->target, &save_attrs);
+
+ new_attrs = save_attrs;
+ new_attrs.write = (dbll_write_fxn) pfn_write;
+ new_attrs.input_params = pArb;
+ new_attrs.alloc = (dbll_alloc_fxn) no_op;
+ new_attrs.free = (dbll_free_fxn) no_op;
+ new_attrs.log_write = NULL;
+ new_attrs.log_write_handle = NULL;
+
+ /* Load the image */
+ flags = DBLL_CODE | DBLL_DATA | DBLL_SYMB;
+ status = hmgr->fxns.load_fxn(hmgr->base_lib, flags, &new_attrs,
+ &hmgr->ul_entry);
+ if (DSP_FAILED(status))
+ hmgr->fxns.close_fxn(hmgr->base_lib);
+
+ if (DSP_SUCCEEDED(status))
+ hmgr->loaded = true;
+ else
+ hmgr->base_lib = NULL;
+
+ return status;
+}
+
+/*
+ * ======== cod_open ========
+ * Open library for reading sections.
+ */
+int cod_open(struct cod_manager *hmgr, IN char *pszCoffPath,
+ u32 flags, struct cod_libraryobj **pLib)
+{
+ int status = 0;
+ struct cod_libraryobj *lib = NULL;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(IS_VALID(hmgr));
+ DBC_REQUIRE(pszCoffPath != NULL);
+ DBC_REQUIRE(flags == COD_NOLOAD || flags == COD_SYMB);
+ DBC_REQUIRE(pLib != NULL);
+
+ *pLib = NULL;
+
+ lib = kzalloc(sizeof(struct cod_libraryobj), GFP_KERNEL);
+ if (lib == NULL)
+ status = -ENOMEM;
+
+ if (DSP_SUCCEEDED(status)) {
+ lib->cod_mgr = hmgr;
+ status = hmgr->fxns.open_fxn(hmgr->target, pszCoffPath, flags,
+ &lib->dbll_lib);
+ if (DSP_SUCCEEDED(status))
+ *pLib = lib;
+ }
+
+ if (DSP_FAILED(status))
+ pr_err("%s: error status 0x%x, pszCoffPath: %s flags: 0x%x\n",
+ __func__, status, pszCoffPath, flags);
+ return status;
+}
+
+/*
+ * ======== cod_open_base ========
+ * Purpose:
+ * Open base image for reading sections.
+ */
+int cod_open_base(struct cod_manager *hmgr, IN char *pszCoffPath,
+ dbll_flags flags)
+{
+ int status = 0;
+ struct dbll_library_obj *lib;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(IS_VALID(hmgr));
+ DBC_REQUIRE(pszCoffPath != NULL);
+
+ /* if we previously opened a base image, close it now */
+ if (hmgr->base_lib) {
+ if (hmgr->loaded) {
+ hmgr->fxns.unload_fxn(hmgr->base_lib, &hmgr->attrs);
+ hmgr->loaded = false;
+ }
+ hmgr->fxns.close_fxn(hmgr->base_lib);
+ hmgr->base_lib = NULL;
+ }
+ status = hmgr->fxns.open_fxn(hmgr->target, pszCoffPath, flags, &lib);
+ if (DSP_SUCCEEDED(status)) {
+ /* hang onto the library for subsequent sym table usage */
+ hmgr->base_lib = lib;
+ strncpy(hmgr->sz_zl_file, pszCoffPath, COD_MAXPATHLENGTH - 1);
+ hmgr->sz_zl_file[COD_MAXPATHLENGTH - 1] = '\0';
+ }
+
+ if (DSP_FAILED(status))
+ pr_err("%s: error status 0x%x pszCoffPath: %s\n", __func__,
+ status, pszCoffPath);
+ return status;
+}
+
+/*
+ * ======== cod_read_section ========
+ * Purpose:
+ * Retrieve the content of a code section given the section name.
+ */
+int cod_read_section(struct cod_libraryobj *lib, IN char *pstrSect,
+ OUT char *pstrContent, IN u32 cContentSize)
+{
+ int status = 0;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(lib != NULL);
+ DBC_REQUIRE(IS_VALID(lib->cod_mgr));
+ DBC_REQUIRE(pstrSect != NULL);
+ DBC_REQUIRE(pstrContent != NULL);
+
+ if (lib != NULL)
+ status =
+ lib->cod_mgr->fxns.read_sect_fxn(lib->dbll_lib, pstrSect,
+ pstrContent, cContentSize);
+ else
+ status = -ESPIPE;
+
+ return status;
+}
+
+/*
+ * ======== no_op ========
+ * Purpose:
+ * No Operation.
+ *
+ */
+static bool no_op(void)
+{
+ return true;
+}
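+
+/*
+ * Typical use of the functions above (illustrative sketch only; the image
+ * path is hypothetical and cod_mgr/dev_obj are assumed to come from the
+ * device layer):
+ *
+ * char *args[] = { "/lib/dsp/baseimage.dof", NULL };
+ * u32 entry;
+ *
+ * cod_open_base(cod_mgr, args[0], DBLL_SYMB);
+ * cod_load_base(cod_mgr, 1, args, dev_brd_write_fxn, dev_obj, NULL);
+ * cod_get_entry(cod_mgr, &entry);
+ * ...
+ * cod_delete(cod_mgr);
+ */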
diff --git a/drivers/staging/tidspbridge/pmgr/dbll.c b/drivers/staging/tidspbridge/pmgr/dbll.c
new file mode 100644
index 0000000..3619d53
--- /dev/null
+++ b/drivers/staging/tidspbridge/pmgr/dbll.c
@@ -0,0 +1,1585 @@
+/*
+ * dbll.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+/* ----------------------------------- Host OS */
+#include <dspbridge/host_os.h>
+
+/* ----------------------------------- DSP/BIOS Bridge */
+#include <dspbridge/std.h>
+#include <dspbridge/dbdefs.h>
+
+/* ----------------------------------- Trace & Debug */
+#include <dspbridge/dbc.h>
+#include <dspbridge/gh.h>
+
+/* ----------------------------------- OS Adaptation Layer */
+
+/* Dynamic loader library interface */
+#include <dspbridge/dynamic_loader.h>
+#include <dspbridge/getsection.h>
+
+/* ----------------------------------- This */
+#include <dspbridge/dbll.h>
+#include <dspbridge/rmm.h>
+
+/* Number of buckets for symbol hash table */
+#define MAXBUCKETS 211
+
+/* Max buffer length */
+#define MAXEXPR 128
+
+#ifndef UINT32_C
+#define UINT32_C(zzz) ((uint32_t)zzz)
+#endif
+#define DOFF_ALIGN(x) (((x) + 3) & ~UINT32_C(3))
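+/*
+ * Illustration: DOFF_ALIGN() rounds a byte count up to the next multiple
+ * of four, e.g. DOFF_ALIGN(5) == 8 and DOFF_ALIGN(8) == 8.
+ */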
+
+/*
+ * ======== struct dbll_tar_obj* ========
+ * A target may have one or more libraries of symbols/code/data loaded
+ * onto it, where a library is simply the symbols/code/data contained
+ * in a DOFF file.
+ */
+/*
+ * ======== dbll_tar_obj ========
+ */
+struct dbll_tar_obj {
+ struct dbll_attrs attrs;
+ struct dbll_library_obj *head; /* List of all opened libraries */
+};
+
+/*
+ * The following 4 typedefs are "super classes" of the dynamic loader
+ * library types used in dynamic loader functions (dynamic_loader.h).
+ */
+/*
+ * ======== dbll_stream ========
+ * Contains dynamic_loader_stream
+ */
+struct dbll_stream {
+ struct dynamic_loader_stream dl_stream;
+ struct dbll_library_obj *lib;
+};
+
+/*
+ * ======== ldr_symbol ========
+ */
+struct ldr_symbol {
+ struct dynamic_loader_sym dl_symbol;
+ struct dbll_library_obj *lib;
+};
+
+/*
+ * ======== dbll_alloc ========
+ */
+struct dbll_alloc {
+ struct dynamic_loader_allocate dl_alloc;
+ struct dbll_library_obj *lib;
+};
+
+/*
+ * ======== dbll_init_obj ========
+ */
+struct dbll_init_obj {
+ struct dynamic_loader_initialize dl_init;
+ struct dbll_library_obj *lib;
+};
+
+/*
+ * ======== DBLL_Library ========
+ * A library handle is returned by DBLL_Open() and is passed to dbll_load()
+ * to load symbols/code/data, and to dbll_unload(), to remove the
+ * symbols/code/data loaded by dbll_load().
+ */
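+/*
+ * Typical call sequence (illustrative sketch only, mirroring how the COD
+ * layer above drives these functions; the file name is hypothetical):
+ *
+ * dbll_create(&target, &attrs);
+ * dbll_open(target, "/lib/dsp/baseimage.dof", DBLL_SYMB, &lib);
+ * dbll_load(lib, DBLL_CODE | DBLL_DATA | DBLL_SYMB, &attrs, &entry);
+ * ... use dbll_get_addr()/dbll_get_sect() on the loaded library ...
+ * dbll_unload(lib, &attrs);
+ * dbll_close(lib);
+ * dbll_delete(target);
+ */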
+
+/*
+ * ======== dbll_library_obj ========
+ */
+struct dbll_library_obj {
+ struct dbll_library_obj *next; /* Next library in target's list */
+ struct dbll_library_obj *prev; /* Previous in the list */
+ struct dbll_tar_obj *target_obj; /* target for this library */
+
+ /* Objects needed by dynamic loader */
+ struct dbll_stream stream;
+ struct ldr_symbol symbol;
+ struct dbll_alloc allocate;
+ struct dbll_init_obj init;
+ void *dload_mod_obj;
+
+ char *file_name; /* COFF file name */
+ void *fp; /* Opaque file handle */
+ u32 entry; /* Entry point */
+ void *desc; /* desc of DOFF file loaded */
+ u32 open_ref; /* Number of times opened */
+ u32 load_ref; /* Number of times loaded */
+ struct gh_t_hash_tab *sym_tab; /* Hash table of symbols */
+ u32 ul_pos;
+};
+
+/*
+ * ======== dbll_symbol ========
+ */
+struct dbll_symbol {
+ struct dbll_sym_val value;
+ char *name;
+};
+
+static void dof_close(struct dbll_library_obj *zl_lib);
+static int dof_open(struct dbll_library_obj *zl_lib);
+static s32 no_op(struct dynamic_loader_initialize *thisptr, void *bufr,
+ ldr_addr locn, struct ldr_section_info *info, unsigned bytsiz);
+
+/*
+ * Functions called by dynamic loader
+ *
+ */
+/* dynamic_loader_stream */
+static int dbll_read_buffer(struct dynamic_loader_stream *this, void *buffer,
+ unsigned bufsize);
+static int dbll_set_file_posn(struct dynamic_loader_stream *this,
+ unsigned int pos);
+/* dynamic_loader_sym */
+static struct dynload_symbol *dbll_find_symbol(struct dynamic_loader_sym *this,
+ const char *name);
+static struct dynload_symbol *dbll_add_to_symbol_table(struct dynamic_loader_sym
+ *this, const char *name,
+ unsigned moduleId);
+static struct dynload_symbol *find_in_symbol_table(struct dynamic_loader_sym
+ *this, const char *name,
+ unsigned moduleid);
+static void dbll_purge_symbol_table(struct dynamic_loader_sym *this,
+ unsigned moduleId);
+static void *allocate(struct dynamic_loader_sym *this, unsigned memsize);
+static void deallocate(struct dynamic_loader_sym *this, void *memPtr);
+static void dbll_err_report(struct dynamic_loader_sym *this, const char *errstr,
+ va_list args);
+/* dynamic_loader_allocate */
+static int dbll_rmm_alloc(struct dynamic_loader_allocate *this,
+ struct ldr_section_info *info, unsigned align);
+static void rmm_dealloc(struct dynamic_loader_allocate *this,
+ struct ldr_section_info *info);
+
+/* dynamic_loader_initialize */
+static int connect(struct dynamic_loader_initialize *this);
+static int read_mem(struct dynamic_loader_initialize *this, void *buf,
+ ldr_addr addr, struct ldr_section_info *info,
+ unsigned nbytes);
+static int write_mem(struct dynamic_loader_initialize *this, void *buf,
+ ldr_addr addr, struct ldr_section_info *info,
+ unsigned nbytes);
+static int fill_mem(struct dynamic_loader_initialize *this, ldr_addr addr,
+ struct ldr_section_info *info, unsigned nbytes,
+ unsigned val);
+static int execute(struct dynamic_loader_initialize *this, ldr_addr start);
+static void release(struct dynamic_loader_initialize *this);
+
+/* symbol table hash functions */
+static u16 name_hash(void *name, u16 max_bucket);
+static bool name_match(void *name, void *sp);
+static void sym_delete(void *sp);
+
+static u32 refs; /* module reference count */
+
+/* Symbol Redefinition */
+static int redefined_symbol;
+static int gbl_search = 1;
+/* Set by dbll_load() whenever a library's symbols are (re)loaded */
+static bool symbols_reloaded;
+
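+/*
+ * redefined_symbol is set by dbll_add_to_symbol_table() when a symbol being
+ * added is already defined; dbll_load() checks it and fails the load with
+ * -EILSEQ. gbl_search temporarily suppresses the "Symbol not found" debug
+ * message while dbll_add_to_symbol_table() probes for an existing definition.
+ */
+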
+/*
+ * ======== dbll_close ========
+ */
+void dbll_close(struct dbll_library_obj *zl_lib)
+{
+ struct dbll_tar_obj *zl_target;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(zl_lib);
+ DBC_REQUIRE(zl_lib->open_ref > 0);
+ zl_target = zl_lib->target_obj;
+ zl_lib->open_ref--;
+ if (zl_lib->open_ref == 0) {
+ /* Remove library from list */
+ if (zl_target->head == zl_lib)
+ zl_target->head = zl_lib->next;
+
+ if (zl_lib->prev)
+ (zl_lib->prev)->next = zl_lib->next;
+
+ if (zl_lib->next)
+ (zl_lib->next)->prev = zl_lib->prev;
+
+ /* Free DOF resources */
+ dof_close(zl_lib);
+ kfree(zl_lib->file_name);
+
+ /* remove symbols from symbol table */
+ if (zl_lib->sym_tab)
+ gh_delete(zl_lib->sym_tab);
+
+ /* remove the library object itself */
+ kfree(zl_lib);
+ zl_lib = NULL;
+ }
+}
+
+/*
+ * ======== dbll_create ========
+ */
+int dbll_create(struct dbll_tar_obj **target_obj,
+ struct dbll_attrs *pattrs)
+{
+ struct dbll_tar_obj *pzl_target;
+ int status = 0;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(pattrs != NULL);
+ DBC_REQUIRE(target_obj != NULL);
+
+ /* Allocate DBL target object */
+ pzl_target = kzalloc(sizeof(struct dbll_tar_obj), GFP_KERNEL);
+ if (target_obj != NULL) {
+ if (pzl_target == NULL) {
+ *target_obj = NULL;
+ status = -ENOMEM;
+ } else {
+ pzl_target->attrs = *pattrs;
+ *target_obj = (struct dbll_tar_obj *)pzl_target;
+ }
+ DBC_ENSURE((DSP_SUCCEEDED(status) && *target_obj) ||
+ (DSP_FAILED(status) && *target_obj == NULL));
+ }
+
+ return status;
+}
+
+/*
+ * ======== dbll_delete ========
+ */
+void dbll_delete(struct dbll_tar_obj *target)
+{
+ struct dbll_tar_obj *zl_target = (struct dbll_tar_obj *)target;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(zl_target);
+
+ if (zl_target != NULL)
+ kfree(zl_target);
+
+}
+
+/*
+ * ======== dbll_exit ========
+ * Discontinue usage of DBL module.
+ */
+void dbll_exit(void)
+{
+ DBC_REQUIRE(refs > 0);
+
+ refs--;
+
+ if (refs == 0)
+ gh_exit();
+
+ DBC_ENSURE(refs >= 0);
+}
+
+/*
+ * ======== dbll_get_addr ========
+ * Get address of name in the specified library.
+ */
+bool dbll_get_addr(struct dbll_library_obj *zl_lib, char *name,
+ struct dbll_sym_val **ppSym)
+{
+ struct dbll_symbol *sym;
+ bool status = false;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(zl_lib);
+ DBC_REQUIRE(name != NULL);
+ DBC_REQUIRE(ppSym != NULL);
+ DBC_REQUIRE(zl_lib->sym_tab != NULL);
+
+ sym = (struct dbll_symbol *)gh_find(zl_lib->sym_tab, name);
+ if (sym != NULL) {
+ *ppSym = &sym->value;
+ status = true;
+ }
+
+ dev_dbg(bridge, "%s: lib: %p name: %s paddr: %p, status 0x%x\n",
+ __func__, zl_lib, name, ppSym, status);
+ return status;
+}
+
+/*
+ * ======== dbll_get_attrs ========
+ * Retrieve the attributes of the target.
+ */
+void dbll_get_attrs(struct dbll_tar_obj *target, struct dbll_attrs *pattrs)
+{
+ struct dbll_tar_obj *zl_target = (struct dbll_tar_obj *)target;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(zl_target);
+ DBC_REQUIRE(pattrs != NULL);
+
+ if ((pattrs != NULL) && (zl_target != NULL))
+ *pattrs = zl_target->attrs;
+
+}
+
+/*
+ * ======== dbll_get_c_addr ========
+ * Get address of a "C" name in the specified library.
+ */
+bool dbll_get_c_addr(struct dbll_library_obj *zl_lib, char *name,
+ struct dbll_sym_val **ppSym)
+{
+ struct dbll_symbol *sym;
+ char cname[MAXEXPR + 1];
+ bool status = false;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(zl_lib);
+ DBC_REQUIRE(ppSym != NULL);
+ DBC_REQUIRE(zl_lib->sym_tab != NULL);
+ DBC_REQUIRE(name != NULL);
+
+ cname[0] = '_';
+
+ strncpy(cname + 1, name, sizeof(cname) - 2);
+ cname[MAXEXPR] = '\0'; /* ensure '\0' string termination */
+
+ /* Check for C name, if not found */
+ sym = (struct dbll_symbol *)gh_find(zl_lib->sym_tab, cname);
+
+ if (sym != NULL) {
+ *ppSym = &sym->value;
+ status = true;
+ }
+
+ return status;
+}
+
+/*
+ * ======== dbll_get_sect ========
+ * Get the base address and size (in bytes) of a COFF section.
+ */
+int dbll_get_sect(struct dbll_library_obj *lib, char *name, u32 *paddr,
+ u32 *psize)
+{
+ u32 byte_size;
+ bool opened_doff = false;
+ const struct ldr_section_info *sect = NULL;
+ struct dbll_library_obj *zl_lib = (struct dbll_library_obj *)lib;
+ int status = 0;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(name != NULL);
+ DBC_REQUIRE(paddr != NULL);
+ DBC_REQUIRE(psize != NULL);
+ DBC_REQUIRE(zl_lib);
+
+ /* If DOFF file is not open, we open it. */
+ if (zl_lib != NULL) {
+ if (zl_lib->fp == NULL) {
+ status = dof_open(zl_lib);
+ if (DSP_SUCCEEDED(status))
+ opened_doff = true;
+
+ } else {
+ (*(zl_lib->target_obj->attrs.fseek)) (zl_lib->fp,
+ zl_lib->ul_pos,
+ SEEK_SET);
+ }
+ } else {
+ status = -EFAULT;
+ }
+ if (DSP_SUCCEEDED(status)) {
+ byte_size = 1;
+ if (dload_get_section_info(zl_lib->desc, name, &sect)) {
+ *paddr = sect->load_addr;
+ *psize = sect->size * byte_size;
+ /* Make sure size is even for good swap */
+ if (*psize % 2)
+ (*psize)++;
+
+ /* Align size */
+ *psize = DOFF_ALIGN(*psize);
+ } else {
+ status = -ENXIO;
+ }
+ }
+ if (opened_doff) {
+ dof_close(zl_lib);
+ opened_doff = false;
+ }
+
+ dev_dbg(bridge, "%s: lib: %p name: %s paddr: %p psize: %p, "
+ "status 0x%x\n", __func__, lib, name, paddr, psize, status);
+
+ return status;
+}
+
+/*
+ * ======== dbll_init ========
+ */
+bool dbll_init(void)
+{
+ DBC_REQUIRE(refs >= 0);
+
+ if (refs == 0)
+ gh_init();
+
+ refs++;
+
+ return true;
+}
+
+/*
+ * ======== dbll_load ========
+ */
+int dbll_load(struct dbll_library_obj *lib, dbll_flags flags,
+ struct dbll_attrs *attrs, u32 *pEntry)
+{
+ struct dbll_library_obj *zl_lib = (struct dbll_library_obj *)lib;
+ struct dbll_tar_obj *dbzl;
+ bool got_symbols = true;
+ s32 err;
+ int status = 0;
+ bool opened_doff = false;
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(zl_lib);
+ DBC_REQUIRE(pEntry != NULL);
+ DBC_REQUIRE(attrs != NULL);
+
+ /*
+ * Load if not already loaded.
+ */
+ if (zl_lib->load_ref == 0 || !(flags & DBLL_DYNAMIC)) {
+ dbzl = zl_lib->target_obj;
+ dbzl->attrs = *attrs;
+ /* Create a hash table for symbols if not already created */
+ if (zl_lib->sym_tab == NULL) {
+ got_symbols = false;
+ zl_lib->sym_tab = gh_create(MAXBUCKETS,
+ sizeof(struct dbll_symbol),
+ name_hash,
+ name_match, sym_delete);
+ if (zl_lib->sym_tab == NULL)
+ status = -ENOMEM;
+
+ }
+ /*
+ * Set up objects needed by the dynamic loader
+ */
+ /* Stream */
+ zl_lib->stream.dl_stream.read_buffer = dbll_read_buffer;
+ zl_lib->stream.dl_stream.set_file_posn = dbll_set_file_posn;
+ zl_lib->stream.lib = zl_lib;
+ /* Symbol */
+ zl_lib->symbol.dl_symbol.find_matching_symbol =
+ dbll_find_symbol;
+ if (got_symbols) {
+ zl_lib->symbol.dl_symbol.add_to_symbol_table =
+ find_in_symbol_table;
+ } else {
+ zl_lib->symbol.dl_symbol.add_to_symbol_table =
+ dbll_add_to_symbol_table;
+ }
+ zl_lib->symbol.dl_symbol.purge_symbol_table =
+ dbll_purge_symbol_table;
+ zl_lib->symbol.dl_symbol.dload_allocate = allocate;
+ zl_lib->symbol.dl_symbol.dload_deallocate = deallocate;
+ zl_lib->symbol.dl_symbol.error_report = dbll_err_report;
+ zl_lib->symbol.lib = zl_lib;
+ /* Allocate */
+ zl_lib->allocate.dl_alloc.dload_allocate = dbll_rmm_alloc;
+ zl_lib->allocate.dl_alloc.dload_deallocate = rmm_dealloc;
+ zl_lib->allocate.lib = zl_lib;
+ /* Init */
+ zl_lib->init.dl_init.connect = connect;
+ zl_lib->init.dl_init.readmem = read_mem;
+ zl_lib->init.dl_init.writemem = write_mem;
+ zl_lib->init.dl_init.fillmem = fill_mem;
+ zl_lib->init.dl_init.execute = execute;
+ zl_lib->init.dl_init.release = release;
+ zl_lib->init.lib = zl_lib;
+ /* If COFF file is not open, we open it. */
+ if (zl_lib->fp == NULL) {
+ status = dof_open(zl_lib);
+ if (DSP_SUCCEEDED(status))
+ opened_doff = true;
+
+ }
+ if (DSP_SUCCEEDED(status)) {
+ zl_lib->ul_pos = (*(zl_lib->target_obj->attrs.ftell))
+ (zl_lib->fp);
+ /* Reset file cursor */
+ (*(zl_lib->target_obj->attrs.fseek)) (zl_lib->fp,
+ (long)0,
+ SEEK_SET);
+ symbols_reloaded = true;
+ /* The 5th argument, DLOAD_INITBSS, tells the DLL
+ * module to zero-init all BSS sections. In general,
+ * this is not necessary and also increases load time.
+ * We may want to make this configurable by the user */
+ err = dynamic_load_module(&zl_lib->stream.dl_stream,
+ &zl_lib->symbol.dl_symbol,
+ &zl_lib->allocate.dl_alloc,
+ &zl_lib->init.dl_init,
+ DLOAD_INITBSS,
+ &zl_lib->dload_mod_obj);
+
+ if (err != 0) {
+ status = -EILSEQ;
+ } else if (redefined_symbol) {
+ zl_lib->load_ref++;
+ dbll_unload(zl_lib, (struct dbll_attrs *)attrs);
+ redefined_symbol = false;
+ status = -EILSEQ;
+ } else {
+ *pEntry = zl_lib->entry;
+ }
+ }
+ }
+ if (DSP_SUCCEEDED(status))
+ zl_lib->load_ref++;
+
+ /* Clean up DOFF resources */
+ if (opened_doff)
+ dof_close(zl_lib);
+
+ DBC_ENSURE(DSP_FAILED(status) || zl_lib->load_ref > 0);
+
+ dev_dbg(bridge, "%s: lib: %p flags: 0x%x pEntry: %p, status 0x%x\n",
+ __func__, lib, flags, pEntry, status);
+
+ return status;
+}
+
+/*
+ * ======== dbll_load_sect ========
+ * Not supported for COFF.
+ */
+int dbll_load_sect(struct dbll_library_obj *zl_lib, char *sectName,
+ struct dbll_attrs *attrs)
+{
+ DBC_REQUIRE(zl_lib);
+
+ return -ENOSYS;
+}
+
+/*
+ * ======== dbll_open ========
+ */
+int dbll_open(struct dbll_tar_obj *target, char *file, dbll_flags flags,
+ struct dbll_library_obj **pLib)
+{
+ struct dbll_tar_obj *zl_target = (struct dbll_tar_obj *)target;
+ struct dbll_library_obj *zl_lib = NULL;
+ s32 err;
+ int status = 0;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(zl_target);
+ DBC_REQUIRE(zl_target->attrs.fopen != NULL);
+ DBC_REQUIRE(file != NULL);
+ DBC_REQUIRE(pLib != NULL);
+
+ zl_lib = zl_target->head;
+ while (zl_lib != NULL) {
+ if (strcmp(zl_lib->file_name, file) == 0) {
+ /* Library is already opened */
+ zl_lib->open_ref++;
+ break;
+ }
+ zl_lib = zl_lib->next;
+ }
+ if (zl_lib == NULL) {
+ /* Allocate DBL library object */
+ zl_lib = kzalloc(sizeof(struct dbll_library_obj), GFP_KERNEL);
+ if (zl_lib == NULL) {
+ status = -ENOMEM;
+ } else {
+ zl_lib->ul_pos = 0;
+ /* Increment ref count to allow close on failure
+ * later on */
+ zl_lib->open_ref++;
+ zl_lib->target_obj = zl_target;
+ /* Keep a copy of the file name */
+ zl_lib->file_name = kzalloc(strlen(file) + 1,
+ GFP_KERNEL);
+ if (zl_lib->file_name == NULL) {
+ status = -ENOMEM;
+ } else {
+ strncpy(zl_lib->file_name, file,
+ strlen(file) + 1);
+ }
+ zl_lib->sym_tab = NULL;
+ }
+ }
+ /*
+ * Set up objects needed by the dynamic loader
+ */
+ if (DSP_FAILED(status))
+ goto func_cont;
+
+ /* Stream */
+ zl_lib->stream.dl_stream.read_buffer = dbll_read_buffer;
+ zl_lib->stream.dl_stream.set_file_posn = dbll_set_file_posn;
+ zl_lib->stream.lib = zl_lib;
+ /* Symbol */
+ zl_lib->symbol.dl_symbol.add_to_symbol_table = dbll_add_to_symbol_table;
+ zl_lib->symbol.dl_symbol.find_matching_symbol = dbll_find_symbol;
+ zl_lib->symbol.dl_symbol.purge_symbol_table = dbll_purge_symbol_table;
+ zl_lib->symbol.dl_symbol.dload_allocate = allocate;
+ zl_lib->symbol.dl_symbol.dload_deallocate = deallocate;
+ zl_lib->symbol.dl_symbol.error_report = dbll_err_report;
+ zl_lib->symbol.lib = zl_lib;
+ /* Allocate */
+ zl_lib->allocate.dl_alloc.dload_allocate = dbll_rmm_alloc;
+ zl_lib->allocate.dl_alloc.dload_deallocate = rmm_dealloc;
+ zl_lib->allocate.lib = zl_lib;
+ /* Init */
+ zl_lib->init.dl_init.connect = connect;
+ zl_lib->init.dl_init.readmem = read_mem;
+ zl_lib->init.dl_init.writemem = write_mem;
+ zl_lib->init.dl_init.fillmem = fill_mem;
+ zl_lib->init.dl_init.execute = execute;
+ zl_lib->init.dl_init.release = release;
+ zl_lib->init.lib = zl_lib;
+ if (DSP_SUCCEEDED(status) && zl_lib->fp == NULL)
+ status = dof_open(zl_lib);
+
+ zl_lib->ul_pos = (*(zl_lib->target_obj->attrs.ftell)) (zl_lib->fp);
+ (*(zl_lib->target_obj->attrs.fseek)) (zl_lib->fp, (long)0, SEEK_SET);
+ /* Create a hash table for symbols if flag is set */
+ if (zl_lib->sym_tab != NULL || !(flags & DBLL_SYMB))
+ goto func_cont;
+
+ zl_lib->sym_tab =
+ gh_create(MAXBUCKETS, sizeof(struct dbll_symbol), name_hash,
+ name_match, sym_delete);
+ if (zl_lib->sym_tab == NULL) {
+ status = -ENOMEM;
+ } else {
+ /* Do a fake load to get symbols - set write func to no_op */
+ zl_lib->init.dl_init.writemem = no_op;
+ err = dynamic_open_module(&zl_lib->stream.dl_stream,
+ &zl_lib->symbol.dl_symbol,
+ &zl_lib->allocate.dl_alloc,
+ &zl_lib->init.dl_init, 0,
+ &zl_lib->dload_mod_obj);
+ if (err != 0) {
+ status = -EILSEQ;
+ } else {
+ /* Now that we have the symbol table, we can unload */
+ err = dynamic_unload_module(zl_lib->dload_mod_obj,
+ &zl_lib->symbol.dl_symbol,
+ &zl_lib->allocate.dl_alloc,
+ &zl_lib->init.dl_init);
+ if (err != 0)
+ status = -EILSEQ;
+
+ zl_lib->dload_mod_obj = NULL;
+ }
+ }
+func_cont:
+ if (DSP_SUCCEEDED(status)) {
+ if (zl_lib->open_ref == 1) {
+ /* First time opened - insert in list */
+ if (zl_target->head)
+ (zl_target->head)->prev = zl_lib;
+
+ zl_lib->prev = NULL;
+ zl_lib->next = zl_target->head;
+ zl_target->head = zl_lib;
+ }
+ *pLib = (struct dbll_library_obj *)zl_lib;
+ } else {
+ *pLib = NULL;
+ if (zl_lib != NULL)
+ dbll_close((struct dbll_library_obj *)zl_lib);
+
+ }
+ DBC_ENSURE((DSP_SUCCEEDED(status) && (zl_lib->open_ref > 0) && *pLib)
+ || (DSP_FAILED(status) && *pLib == NULL));
+
+ dev_dbg(bridge, "%s: target: %p file: %s pLib: %p, status 0x%x\n",
+ __func__, target, file, pLib, status);
+
+ return status;
+}
+
+/*
+ * ======== dbll_read_sect ========
+ * Get the content of a COFF section.
+ */
+int dbll_read_sect(struct dbll_library_obj *lib, char *name,
+ char *pContent, u32 size)
+{
+ struct dbll_library_obj *zl_lib = (struct dbll_library_obj *)lib;
+ bool opened_doff = false;
+ u32 byte_size; /* size of bytes */
+ u32 ul_sect_size; /* size of section */
+ const struct ldr_section_info *sect = NULL;
+ int status = 0;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(zl_lib);
+ DBC_REQUIRE(name != NULL);
+ DBC_REQUIRE(pContent != NULL);
+ DBC_REQUIRE(size != 0);
+
+ /* If DOFF file is not open, we open it. */
+ if (zl_lib != NULL) {
+ if (zl_lib->fp == NULL) {
+ status = dof_open(zl_lib);
+ if (DSP_SUCCEEDED(status))
+ opened_doff = true;
+
+ } else {
+ (*(zl_lib->target_obj->attrs.fseek)) (zl_lib->fp,
+ zl_lib->ul_pos,
+ SEEK_SET);
+ }
+ } else {
+ status = -EFAULT;
+ }
+ if (DSP_FAILED(status))
+ goto func_cont;
+
+ byte_size = 1;
+ if (!dload_get_section_info(zl_lib->desc, name, &sect)) {
+ status = -ENXIO;
+ goto func_cont;
+ }
+ /*
+ * Ensure the supplied buffer size is sufficient to store
+ * the section content to be read.
+ */
+ ul_sect_size = sect->size * byte_size;
+ /* Make sure size is even for good swap */
+ if (ul_sect_size % 2)
+ ul_sect_size++;
+
+ /* Align size */
+ ul_sect_size = DOFF_ALIGN(ul_sect_size);
+ if (ul_sect_size > size) {
+ status = -EPERM;
+ } else {
+ if (!dload_get_section(zl_lib->desc, sect, pContent))
+ status = -EBADF;
+
+ }
+func_cont:
+ if (opened_doff) {
+ dof_close(zl_lib);
+ opened_doff = false;
+ }
+
+ dev_dbg(bridge, "%s: lib: %p name: %s pContent: %p size: 0x%x, "
+ "status 0x%x\n", __func__, lib, name, pContent, size, status);
+ return status;
+}
+
+/*
+ * ======== dbll_set_attrs ========
+ * Set the attributes of the target.
+ */
+void dbll_set_attrs(struct dbll_tar_obj *target, struct dbll_attrs *pattrs)
+{
+ struct dbll_tar_obj *zl_target = (struct dbll_tar_obj *)target;
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(zl_target);
+ DBC_REQUIRE(pattrs != NULL);
+
+ if ((pattrs != NULL) && (zl_target != NULL))
+ zl_target->attrs = *pattrs;
+
+}
+
+/*
+ * ======== dbll_unload ========
+ */
+void dbll_unload(struct dbll_library_obj *lib, struct dbll_attrs *attrs)
+{
+ struct dbll_library_obj *zl_lib = (struct dbll_library_obj *)lib;
+ s32 err = 0;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(zl_lib);
+ DBC_REQUIRE(zl_lib->load_ref > 0);
+ dev_dbg(bridge, "%s: lib: %p\n", __func__, lib);
+ zl_lib->load_ref--;
+ /* Unload only if reference count is 0 */
+ if (zl_lib->load_ref != 0)
+ goto func_end;
+
+ zl_lib->target_obj->attrs = *attrs;
+ if (zl_lib->dload_mod_obj) {
+ err = dynamic_unload_module(zl_lib->dload_mod_obj,
+ &zl_lib->symbol.dl_symbol,
+ &zl_lib->allocate.dl_alloc,
+ &zl_lib->init.dl_init);
+ if (err != 0)
+ dev_dbg(bridge, "%s: failed: 0x%x\n", __func__, err);
+ }
+ /* remove symbols from symbol table */
+ if (zl_lib->sym_tab != NULL) {
+ gh_delete(zl_lib->sym_tab);
+ zl_lib->sym_tab = NULL;
+ }
+ /* delete DOFF desc since it holds *lots* of host OS
+ * resources */
+ dof_close(zl_lib);
+func_end:
+ DBC_ENSURE(zl_lib->load_ref >= 0);
+}
+
+/*
+ * ======== dbll_unload_sect ========
+ * Not supported for COFF.
+ */
+int dbll_unload_sect(struct dbll_library_obj *lib, char *sectName,
+ struct dbll_attrs *attrs)
+{
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(sectName != NULL);
+
+ return -ENOSYS;
+}
+
+/*
+ * ======== dof_close ========
+ */
+static void dof_close(struct dbll_library_obj *zl_lib)
+{
+ if (zl_lib->desc) {
+ dload_module_close(zl_lib->desc);
+ zl_lib->desc = NULL;
+ }
+ /* close file */
+ if (zl_lib->fp) {
+ (zl_lib->target_obj->attrs.fclose) (zl_lib->fp);
+ zl_lib->fp = NULL;
+ }
+}
+
+/*
+ * ======== dof_open ========
+ */
+static int dof_open(struct dbll_library_obj *zl_lib)
+{
+ void *open = *(zl_lib->target_obj->attrs.fopen);
+ int status = 0;
+
+ /* First open the file for the dynamic loader, then open COF */
+ zl_lib->fp =
+ (void *)((dbll_f_open_fxn) (open)) (zl_lib->file_name, "rb");
+
+ /* Open DOFF module */
+ if (zl_lib->fp && zl_lib->desc == NULL) {
+ (*(zl_lib->target_obj->attrs.fseek)) (zl_lib->fp, (long)0,
+ SEEK_SET);
+ zl_lib->desc =
+ dload_module_open(&zl_lib->stream.dl_stream,
+ &zl_lib->symbol.dl_symbol);
+ if (zl_lib->desc == NULL) {
+ (zl_lib->target_obj->attrs.fclose) (zl_lib->fp);
+ zl_lib->fp = NULL;
+ status = -EBADF;
+ }
+ } else {
+ status = -EBADF;
+ }
+
+ return status;
+}
+
+/*
+ * ======== name_hash ========
+ */
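+/*
+ * Worked example (illustrative): hashing "main" shift-xors the bytes to 592,
+ * so with MAXBUCKETS == 211 the symbol lands in bucket 592 % 211 == 170.
+ */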
+static u16 name_hash(void *key, u16 max_bucket)
+{
+ u16 ret;
+ u16 hash;
+ char *name = (char *)key;
+
+ DBC_REQUIRE(name != NULL);
+
+ hash = 0;
+
+ while (*name) {
+ hash <<= 1;
+ hash ^= *name++;
+ }
+
+ ret = hash % max_bucket;
+
+ return ret;
+}
+
+/*
+ * ======== name_match ========
+ */
+static bool name_match(void *key, void *value)
+{
+ DBC_REQUIRE(key != NULL);
+ DBC_REQUIRE(value != NULL);
+
+ if ((key != NULL) && (value != NULL)) {
+ if (strcmp((char *)key, ((struct dbll_symbol *)value)->name) ==
+ 0)
+ return true;
+ }
+ return false;
+}
+
+/*
+ * ======== no_op ========
+ */
+static int no_op(struct dynamic_loader_initialize *thisptr, void *bufr,
+ ldr_addr locn, struct ldr_section_info *info, unsigned bytsize)
+{
+ return 1;
+}
+
+/*
+ * ======== sym_delete ========
+ */
+static void sym_delete(void *value)
+{
+ struct dbll_symbol *sp = (struct dbll_symbol *)value;
+
+ kfree(sp->name);
+}
+
+/*
+ * Dynamic Loader Functions
+ */
+
+/* dynamic_loader_stream */
+/*
+ * ======== dbll_read_buffer ========
+ */
+static int dbll_read_buffer(struct dynamic_loader_stream *this, void *buffer,
+ unsigned bufsize)
+{
+ struct dbll_stream *pstream = (struct dbll_stream *)this;
+ struct dbll_library_obj *lib;
+ int bytes_read = 0;
+
+ DBC_REQUIRE(this != NULL);
+ lib = pstream->lib;
+ DBC_REQUIRE(lib);
+
+ if (lib != NULL) {
+ bytes_read =
+ (*(lib->target_obj->attrs.fread)) (buffer, 1, bufsize,
+ lib->fp);
+ }
+ return bytes_read;
+}
+
+/*
+ * ======== dbll_set_file_posn ========
+ */
+static int dbll_set_file_posn(struct dynamic_loader_stream *this,
+ unsigned int pos)
+{
+ struct dbll_stream *pstream = (struct dbll_stream *)this;
+ struct dbll_library_obj *lib;
+ int status = 0; /* Success */
+
+ DBC_REQUIRE(this != NULL);
+ lib = pstream->lib;
+ DBC_REQUIRE(lib);
+
+ if (lib != NULL) {
+ status = (*(lib->target_obj->attrs.fseek)) (lib->fp, (long)pos,
+ SEEK_SET);
+ }
+
+ return status;
+}
+
+/* dynamic_loader_sym */
+
+/*
+ * ======== dbll_find_symbol ========
+ */
+static struct dynload_symbol *dbll_find_symbol(struct dynamic_loader_sym *this,
+ const char *name)
+{
+ struct dynload_symbol *ret_sym;
+ struct ldr_symbol *ldr_sym = (struct ldr_symbol *)this;
+ struct dbll_library_obj *lib;
+ struct dbll_sym_val *dbll_sym = NULL;
+ bool status = false; /* Symbol not found yet */
+
+ DBC_REQUIRE(this != NULL);
+ lib = ldr_sym->lib;
+ DBC_REQUIRE(lib);
+
+ if (lib != NULL) {
+ if (lib->target_obj->attrs.sym_lookup) {
+ /* Check current lib + base lib + dep lib +
+ * persistent lib */
+ status = (*(lib->target_obj->attrs.sym_lookup))
+ (lib->target_obj->attrs.sym_handle,
+ lib->target_obj->attrs.sym_arg,
+ lib->target_obj->attrs.rmm_handle, name,
+ &dbll_sym);
+ } else {
+ /* Just check current lib for symbol */
+ status = dbll_get_addr((struct dbll_library_obj *)lib,
+ (char *)name, &dbll_sym);
+ if (!status) {
+ status =
+ dbll_get_c_addr((struct dbll_library_obj *)
+ lib, (char *)name,
+ &dbll_sym);
+ }
+ }
+ }
+
+ if (!status && gbl_search)
+ dev_dbg(bridge, "%s: Symbol not found: %s\n", __func__, name);
+
+ DBC_ASSERT((status && (dbll_sym != NULL))
+ || (!status && (dbll_sym == NULL)));
+
+ ret_sym = (struct dynload_symbol *)dbll_sym;
+ return ret_sym;
+}
+
+/*
+ * ======== find_in_symbol_table ========
+ */
+static struct dynload_symbol *find_in_symbol_table(struct dynamic_loader_sym
+ *this, const char *name,
+ unsigned moduleid)
+{
+ struct dynload_symbol *ret_sym;
+ struct ldr_symbol *ldr_sym = (struct ldr_symbol *)this;
+ struct dbll_library_obj *lib;
+ struct dbll_symbol *sym;
+
+ DBC_REQUIRE(this != NULL);
+ lib = ldr_sym->lib;
+ DBC_REQUIRE(lib);
+ DBC_REQUIRE(lib->sym_tab != NULL);
+
+ sym = (struct dbll_symbol *)gh_find(lib->sym_tab, (char *)name);
+
+ ret_sym = (struct dynload_symbol *)&sym->value;
+ return ret_sym;
+}
+
+/*
+ * ======== dbll_add_to_symbol_table ========
+ */
+static struct dynload_symbol *dbll_add_to_symbol_table(struct dynamic_loader_sym
+ *this, const char *name,
+ unsigned moduleId)
+{
+ struct dbll_symbol *sym_ptr = NULL;
+ struct dbll_symbol symbol;
+ struct dynload_symbol *dbll_sym = NULL;
+ struct ldr_symbol *ldr_sym = (struct ldr_symbol *)this;
+ struct dbll_library_obj *lib;
+ struct dynload_symbol *ret;
+
+ DBC_REQUIRE(this != NULL);
+ DBC_REQUIRE(name);
+ lib = ldr_sym->lib;
+ DBC_REQUIRE(lib);
+
+ /* Check to see if symbol is already defined in symbol table */
+ if (!(lib->target_obj->attrs.base_image)) {
+ gbl_search = false;
+ dbll_sym = dbll_find_symbol(this, name);
+ gbl_search = true;
+ if (dbll_sym) {
+ redefined_symbol = true;
+ dev_dbg(bridge, "%s already defined in symbol table\n",
+ name);
+ return NULL;
+ }
+ }
+ /* Allocate string to copy symbol name */
+ symbol.name = kzalloc(strlen((char *const)name) + 1, GFP_KERNEL);
+ if (symbol.name == NULL)
+ return NULL;
+
+ if (symbol.name != NULL) {
+ /* Just copy name (value will be filled in by dynamic loader) */
+ strncpy(symbol.name, (char *const)name,
+ strlen((char *const)name) + 1);
+
+ /* Add symbol to symbol table */
+ sym_ptr =
+ (struct dbll_symbol *)gh_insert(lib->sym_tab, (void *)name,
+ (void *)&symbol);
+ if (sym_ptr == NULL)
+ kfree(symbol.name);
+
+ }
+ if (sym_ptr != NULL)
+ ret = (struct dynload_symbol *)&sym_ptr->value;
+ else
+ ret = NULL;
+
+ return ret;
+}
+
+/*
+ * ======== dbll_purge_symbol_table ========
+ */
+static void dbll_purge_symbol_table(struct dynamic_loader_sym *this,
+ unsigned moduleId)
+{
+ struct ldr_symbol *ldr_sym = (struct ldr_symbol *)this;
+ struct dbll_library_obj *lib;
+
+ DBC_REQUIRE(this != NULL);
+ lib = ldr_sym->lib;
+ DBC_REQUIRE(lib);
+
+ /* May not need to do anything */
+}
+
+/*
+ * ======== allocate ========
+ */
+static void *allocate(struct dynamic_loader_sym *this, unsigned memsize)
+{
+ struct ldr_symbol *ldr_sym = (struct ldr_symbol *)this;
+ struct dbll_library_obj *lib;
+ void *buf;
+
+ DBC_REQUIRE(this != NULL);
+ lib = ldr_sym->lib;
+ DBC_REQUIRE(lib);
+
+ buf = kzalloc(memsize, GFP_KERNEL);
+
+ return buf;
+}
+
+/*
+ * ======== deallocate ========
+ */
+static void deallocate(struct dynamic_loader_sym *this, void *memPtr)
+{
+ struct ldr_symbol *ldr_sym = (struct ldr_symbol *)this;
+ struct dbll_library_obj *lib;
+
+ DBC_REQUIRE(this != NULL);
+ lib = ldr_sym->lib;
+ DBC_REQUIRE(lib);
+
+ kfree(memPtr);
+}
+
+/*
+ * ======== dbll_err_report ========
+ */
+static void dbll_err_report(struct dynamic_loader_sym *this, const char *errstr,
+ va_list args)
+{
+ struct ldr_symbol *ldr_sym = (struct ldr_symbol *)this;
+ struct dbll_library_obj *lib;
+ char temp_buf[MAXEXPR];
+
+ DBC_REQUIRE(this != NULL);
+ lib = ldr_sym->lib;
+ DBC_REQUIRE(lib);
+ vsnprintf((char *)temp_buf, MAXEXPR, (char *)errstr, args);
+ dev_dbg(bridge, "%s\n", temp_buf);
+}
+
+/* dynamic_loader_allocate */
+
+/*
+ * ======== dbll_rmm_alloc ========
+ */
+static int dbll_rmm_alloc(struct dynamic_loader_allocate *this,
+ struct ldr_section_info *info, unsigned align)
+{
+ struct dbll_alloc *dbll_alloc_obj = (struct dbll_alloc *)this;
+ struct dbll_library_obj *lib;
+ int status = 0;
+ u32 mem_sect_type;
+ struct rmm_addr rmm_addr_obj;
+ s32 ret = TRUE;
+ unsigned stype = DLOAD_SECTION_TYPE(info->type);
+ char *token = NULL;
+ char *sz_sec_last_token = NULL;
+ char *sz_last_token = NULL;
+ char *sz_sect_name = NULL;
+ char *psz_cur;
+ s32 token_len = 0;
+ s32 seg_id = -1;
+ s32 req = -1;
+ s32 count = 0;
+ u32 alloc_size = 0;
+ u32 run_addr_flag = 0;
+
+ DBC_REQUIRE(this != NULL);
+ lib = dbll_alloc_obj->lib;
+ DBC_REQUIRE(lib);
+
+ mem_sect_type =
+ (stype == DLOAD_TEXT) ? DBLL_CODE : (stype ==
+ DLOAD_BSS) ? DBLL_BSS :
+ DBLL_DATA;
+
+ /* Attempt to extract the segment ID and requirement information from
+ the name of the section */
+ DBC_REQUIRE(info->name);
+ token_len = strlen((char *)(info->name)) + 1;
+
+ sz_sect_name = kzalloc(token_len, GFP_KERNEL);
+ sz_last_token = kzalloc(token_len, GFP_KERNEL);
+ sz_sec_last_token = kzalloc(token_len, GFP_KERNEL);
+
+ if (sz_sect_name == NULL || sz_sec_last_token == NULL ||
+ sz_last_token == NULL) {
+ status = -ENOMEM;
+ goto func_cont;
+ }
+ strncpy(sz_sect_name, (char *)(info->name), token_len);
+ psz_cur = sz_sect_name;
+ while ((token = strsep(&psz_cur, ":")) && *token != '\0') {
+ strncpy(sz_sec_last_token, sz_last_token,
+ strlen(sz_last_token) + 1);
+ strncpy(sz_last_token, token, strlen(token) + 1);
+ token = strsep(&psz_cur, ":");
+ count++; /* optimizes processing */
+ }
+ /* If the last token is 0 or 1, and sz_sec_last_token is DYN_DARAM,
+ DYN_SARAM or DYN_EXTERNAL, then memory granularity information is
+ present within the section name - only process if there are at least
+ three tokens within the section name (just a minor optimization) */
+ if (count >= 3)
+ strict_strtol(sz_last_token, 10, (long *)&req);
+
+ if ((req == 0) || (req == 1)) {
+ if (strcmp(sz_sec_last_token, "DYN_DARAM") == 0) {
+ seg_id = 0;
+ } else {
+ if (strcmp(sz_sec_last_token, "DYN_SARAM") == 0) {
+ seg_id = 1;
+ } else {
+ if (strcmp(sz_sec_last_token,
+ "DYN_EXTERNAL") == 0)
+ seg_id = 2;
+ }
+ }
+ }
+func_cont:
+ kfree(sz_sect_name);
+ sz_sect_name = NULL;
+ kfree(sz_last_token);
+ sz_last_token = NULL;
+ kfree(sz_sec_last_token);
+ sz_sec_last_token = NULL;
+
+ if (mem_sect_type == DBLL_CODE)
+ alloc_size = info->size + GEM_L1P_PREFETCH_SIZE;
+ else
+ alloc_size = info->size;
+
+ if (info->load_addr != info->run_addr)
+ run_addr_flag = 1;
+ /* TODO - ideally, we can pass the alignment requirement also
+ * from here */
+ if (lib != NULL) {
+ status =
+ (lib->target_obj->attrs.alloc) (lib->target_obj->attrs.
+ rmm_handle, mem_sect_type,
+ alloc_size, align,
+ (u32 *) &rmm_addr_obj,
+ seg_id, req, FALSE);
+ }
+ if (DSP_FAILED(status)) {
+ ret = false;
+ } else {
+ /* RMM gives word address. Need to convert to byte address */
+ info->load_addr = rmm_addr_obj.addr * DSPWORDSIZE;
+ if (!run_addr_flag)
+ info->run_addr = info->load_addr;
+ info->context = (u32) rmm_addr_obj.segid;
+ dev_dbg(bridge, "%s: %s base = 0x%x len = 0x%x, "
+ "info->run_addr 0x%x, info->load_addr 0x%x\n",
+ __func__, info->name, info->load_addr / DSPWORDSIZE,
+ info->size / DSPWORDSIZE, info->run_addr,
+ info->load_addr);
+ }
+ return ret;
+}
+
+/*
+ * ======== rmm_dealloc ========
+ */
+static void rmm_dealloc(struct dynamic_loader_allocate *this,
+ struct ldr_section_info *info)
+{
+ struct dbll_alloc *dbll_alloc_obj = (struct dbll_alloc *)this;
+ struct dbll_library_obj *lib;
+ u32 segid;
+ int status = 0;
+ unsigned stype = DLOAD_SECTION_TYPE(info->type);
+ u32 mem_sect_type;
+ u32 free_size = 0;
+
+ mem_sect_type =
+ (stype == DLOAD_TEXT) ? DBLL_CODE : (stype ==
+ DLOAD_BSS) ? DBLL_BSS :
+ DBLL_DATA;
+ DBC_REQUIRE(this != NULL);
+ lib = dbll_alloc_obj->lib;
+ DBC_REQUIRE(lib);
+ /* segid was set by alloc function */
+ segid = (u32) info->context;
+ if (mem_sect_type == DBLL_CODE)
+ free_size = info->size + GEM_L1P_PREFETCH_SIZE;
+ else
+ free_size = info->size;
+ if (lib != NULL) {
+ status =
+ (lib->target_obj->attrs.free) (lib->target_obj->attrs.
+ sym_handle, segid,
+ info->load_addr /
+ DSPWORDSIZE, free_size,
+ false);
+ }
+}
+
+/* dynamic_loader_initialize */
+/*
+ * ======== connect ========
+ */
+static int connect(struct dynamic_loader_initialize *this)
+{
+ return true;
+}
+
+/*
+ * ======== read_mem ========
+ * This function does not need to be implemented.
+ */
+static int read_mem(struct dynamic_loader_initialize *this, void *buf,
+ ldr_addr addr, struct ldr_section_info *info,
+ unsigned nbytes)
+{
+ struct dbll_init_obj *init_obj = (struct dbll_init_obj *)this;
+ struct dbll_library_obj *lib;
+ int bytes_read = 0;
+
+ DBC_REQUIRE(this != NULL);
+ lib = init_obj->lib;
+ DBC_REQUIRE(lib);
+ /* Need bridge_brd_read function */
+ return bytes_read;
+}
+
+/*
+ * ======== write_mem ========
+ */
+static int write_mem(struct dynamic_loader_initialize *this, void *buf,
+ ldr_addr addr, struct ldr_section_info *info,
+ unsigned bytes)
+{
+ struct dbll_init_obj *init_obj = (struct dbll_init_obj *)this;
+ struct dbll_library_obj *lib;
+ struct dbll_tar_obj *target_obj;
+ struct dbll_sect_info sect_info;
+ u32 mem_sect_type;
+ bool ret = true;
+
+ DBC_REQUIRE(this != NULL);
+ lib = init_obj->lib;
+ if (!lib)
+ return false;
+
+ target_obj = lib->target_obj;
+
+ mem_sect_type =
+ (DLOAD_SECTION_TYPE(info->type) ==
+ DLOAD_TEXT) ? DBLL_CODE : DBLL_DATA;
+ if (target_obj && target_obj->attrs.write) {
+ ret =
+ (*target_obj->attrs.write) (target_obj->attrs.input_params,
+ addr, buf, bytes,
+ mem_sect_type);
+
+ if (target_obj->attrs.log_write) {
+ sect_info.name = info->name;
+ sect_info.sect_run_addr = info->run_addr;
+ sect_info.sect_load_addr = info->load_addr;
+ sect_info.size = info->size;
+ sect_info.type = mem_sect_type;
+ /* Pass the information about what we've written to
+ * another module */
+ (*target_obj->attrs.log_write) (target_obj->attrs.
+ log_write_handle,
+ &sect_info, addr,
+ bytes);
+ }
+ }
+ return ret;
+}
+
+/*
+ * ======== fill_mem ========
+ * Fill bytes of memory at a given address with a given value by
+ * writing from a buffer containing the given value. Write in
+ * sets of MAXEXPR (128) bytes to avoid large stack buffer issues.
+ */
+static int fill_mem(struct dynamic_loader_initialize *this, ldr_addr addr,
+ struct ldr_section_info *info, unsigned bytes, unsigned val)
+{
+ bool ret = true;
+ char *pbuf;
+ struct dbll_library_obj *lib;
+ struct dbll_init_obj *init_obj = (struct dbll_init_obj *)this;
+
+ DBC_REQUIRE(this != NULL);
+ lib = init_obj->lib;
+ pbuf = NULL;
+ /* Pass the NULL-initialized pbuf to write_mem with a zero byte count to
+ get the start address of shared memory; this is only a trick to obtain
+ the address, no actual write takes place with this write_mem call.
+ */
+ if ((lib->target_obj->attrs.write) != (dbll_write_fxn) no_op)
+ write_mem(this, &pbuf, addr, info, 0);
+ if (pbuf)
+ memset(pbuf, val, bytes);
+
+ return ret;
+}
+
+/*
+ * ======== execute ========
+ */
+static int execute(struct dynamic_loader_initialize *this, ldr_addr start)
+{
+ struct dbll_init_obj *init_obj = (struct dbll_init_obj *)this;
+ struct dbll_library_obj *lib;
+ bool ret = true;
+
+ DBC_REQUIRE(this != NULL);
+ lib = init_obj->lib;
+ DBC_REQUIRE(lib);
+ /* Save entry point */
+ if (lib != NULL)
+ lib->entry = (u32) start;
+
+ return ret;
+}
+
+/*
+ * ======== release ========
+ */
+static void release(struct dynamic_loader_initialize *this)
+{
+}
+
+/**
+ * find_symbol_context - Basic symbol context structure
+ * @address: Symbol address to search around
+ * @offset_range: Maximum offset below @address within which a symbol
+ * is accepted
+ * @cur_best_offset: Smallest offset found so far while iterating the
+ * symbol table
+ * @sym_addr: Address of the DSP symbol
+ * @name: Symbol name
+ *
+ */
+struct find_symbol_context {
+ /* input */
+ u32 address;
+ u32 offset_range;
+ /* state */
+ u32 cur_best_offset;
+ /* output */
+ u32 sym_addr;
+ char name[120];
+};
+
+/**
+ * find_symbol_callback() - Validates symbol address and copies the symbol name
+ * to the user data.
+ * @elem: Symbol table entry (struct dbll_symbol)
+ * @user_data: Find symbol context
+ *
+ */
+void find_symbol_callback(void *elem, void *user_data)
+{
+ struct dbll_symbol *symbol = elem;
+ struct find_symbol_context *context = user_data;
+ u32 symbol_addr = symbol->value.value;
+ u32 offset = context->address - symbol_addr;
+
+ /*
+ * Address given should be greater than symbol address,
+ * symbol address should be within specified range
+ * and the offset should be better than previous one
+ */
+ if (context->address >= symbol_addr && symbol_addr < (u32)-1 &&
+ offset < context->cur_best_offset) {
+ context->cur_best_offset = offset;
+ context->sym_addr = symbol_addr;
+ strncpy(context->name, symbol->name, sizeof(context->name));
+ }
+
+ return;
+}
+
+/**
+ * dbll_find_dsp_symbol() - Retrieve the DSP symbol nearest to a given address
+ * @zl_lib: DSP binary obj library pointer
+ * @address: Given address to find the dsp symbol
+ * @offset_range: offset range to look for dsp symbol
+ * @sym_addr_output: Symbol Output address
+ * @name_output: String with the dsp symbol
+ *
+ * Searches the library's symbol table for the symbol closest to, but not
+ * above, the given address within offset_range.
+ */
+bool dbll_find_dsp_symbol(struct dbll_library_obj *zl_lib, u32 address,
+ u32 offset_range, u32 *sym_addr_output,
+ char *name_output)
+{
+ bool status = false;
+ struct find_symbol_context context;
+
+ context.address = address;
+ context.offset_range = offset_range;
+ context.cur_best_offset = offset_range;
+ context.sym_addr = 0;
+ context.name[0] = '\0';
+
+ gh_iterate(zl_lib->sym_tab, find_symbol_callback, &context);
+
+ if (context.name[0]) {
+ status = true;
+ strcpy(name_output, context.name);
+ *sym_addr_output = context.sym_addr;
+ }
+
+ return status;
+}
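+
+/*
+ * Example use of dbll_find_dsp_symbol() (illustrative only; fault_addr is a
+ * hypothetical DSP address, e.g. one reported by the MMU fault handler):
+ *
+ * u32 sym_addr;
+ * char name[120];
+ *
+ * if (dbll_find_dsp_symbol(zl_lib, fault_addr, 0x1000, &sym_addr, name))
+ * pr_info("nearest symbol: %s at 0x%x\n", name, sym_addr);
+ */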
diff --git a/drivers/staging/tidspbridge/pmgr/dev.c b/drivers/staging/tidspbridge/pmgr/dev.c
new file mode 100644
index 0000000..50a5d97
--- /dev/null
+++ b/drivers/staging/tidspbridge/pmgr/dev.c
@@ -0,0 +1,1171 @@
+/*
+ * dev.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Implementation of Bridge driver device operations.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+/* ----------------------------------- Host OS */
+#include <dspbridge/host_os.h>
+
+/* ----------------------------------- DSP/BIOS Bridge */
+#include <dspbridge/std.h>
+#include <dspbridge/dbdefs.h>
+
+/* ----------------------------------- Trace & Debug */
+#include <dspbridge/dbc.h>
+
+/* ----------------------------------- OS Adaptation Layer */
+#include <dspbridge/cfg.h>
+#include <dspbridge/ldr.h>
+#include <dspbridge/list.h>
+
+/* ----------------------------------- Platform Manager */
+#include <dspbridge/cod.h>
+#include <dspbridge/drv.h>
+#include <dspbridge/proc.h>
+#include <dspbridge/dmm.h>
+
+/* ----------------------------------- Resource Manager */
+#include <dspbridge/mgr.h>
+#include <dspbridge/node.h>
+
+/* ----------------------------------- Others */
+#include <dspbridge/dspapi.h> /* DSP API version info. */
+
+#include <dspbridge/chnl.h>
+#include <dspbridge/io.h>
+#include <dspbridge/msg.h>
+#include <dspbridge/cmm.h>
+
+/* ----------------------------------- This */
+#include <dspbridge/dev.h>
+
+/* ----------------------------------- Defines, Data Structures, Typedefs */
+
+#define MAKEVERSION(major, minor) (major * 10 + minor)
+#define BRD_API_VERSION MAKEVERSION(BRD_API_MAJOR_VERSION, \
+ BRD_API_MINOR_VERSION)
+
+/* The Bridge device object: */
+struct dev_object {
+ /* LST requires "link" to be first field! */
+ struct list_head link; /* Link to next dev_object. */
+ u8 dev_type; /* Device Type */
+ struct cfg_devnode *dev_node_obj; /* Platform specific dev id */
+ /* Bridge Context Handle */
+ struct bridge_dev_context *hbridge_context;
+ /* Function interface to Bridge driver. */
+ struct bridge_drv_interface bridge_interface;
+ struct brd_object *lock_owner; /* Client with exclusive access. */
+ struct cod_manager *cod_mgr; /* Code manager handle. */
+ struct chnl_mgr *hchnl_mgr; /* Channel manager. */
+ struct deh_mgr *hdeh_mgr; /* DEH manager. */
+ struct msg_mgr *hmsg_mgr; /* Message manager. */
+ struct io_mgr *hio_mgr; /* IO manager (CHNL, msg_ctrl) */
+ struct cmm_object *hcmm_mgr; /* SM memory manager. */
+ struct dmm_object *dmm_mgr; /* Dynamic memory manager. */
+ struct ldr_module *module_obj; /* Bridge Module handle. */
+ u32 word_size; /* DSP word size: quick access. */
+ struct drv_object *hdrv_obj; /* Driver Object */
+ struct lst_list *proc_list; /* List of processors attached to
+ * this device */
+ struct node_mgr *hnode_mgr;
+};
+
+/* ----------------------------------- Globals */
+static u32 refs; /* Module reference count */
+
+/* ----------------------------------- Function Prototypes */
+static int fxn_not_implemented(int arg, ...);
+static int init_cod_mgr(struct dev_object *dev_obj);
+static void store_interface_fxns(struct bridge_drv_interface *drv_fxns,
+ OUT struct bridge_drv_interface *intf_fxns);
+/*
+ * ======== dev_brd_write_fxn ========
+ * Purpose:
+ * Exported function to be used as the COD write function. This function
+ * is passed a handle to a DEV_hObject, then calls the
+ * device's bridge_brd_write() function.
+ */
+u32 dev_brd_write_fxn(void *pArb, u32 ulDspAddr, void *pHostBuf,
+ u32 ul_num_bytes, u32 nMemSpace)
+{
+ struct dev_object *dev_obj = (struct dev_object *)pArb;
+ u32 ul_written = 0;
+ int status;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(pHostBuf != NULL); /* Required of BrdWrite(). */
+ if (dev_obj) {
+ /* Require of BrdWrite() */
+ DBC_ASSERT(dev_obj->hbridge_context != NULL);
+ status = (*dev_obj->bridge_interface.pfn_brd_write) (
+ dev_obj->hbridge_context, pHostBuf,
+ ulDspAddr, ul_num_bytes, nMemSpace);
+ /* Special case of getting the address only */
+ if (ul_num_bytes == 0)
+ ul_num_bytes = 1;
+ if (DSP_SUCCEEDED(status))
+ ul_written = ul_num_bytes;
+
+ }
+ return ul_written;
+}
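+
+/*
+ * Note (illustrative): dev_brd_write_fxn() matches the COD write-callback
+ * shape, so a caller (for instance the processor loader) can pass it as the
+ * pfn_write argument of cod_load_base() with the dev_object handle as pArb;
+ * each section emitted by the dynamic loader then reaches DSP memory through
+ * bridge_brd_write().
+ */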
+
+/*
+ * ======== dev_create_device ========
+ * Purpose:
+ * Called by the operating system to load the PM Bridge Driver for a
+ * PM board (device).
+ */
+int dev_create_device(OUT struct dev_object **phDevObject,
+ IN CONST char *driver_file_name,
+ struct cfg_devnode *dev_node_obj)
+{
+ struct cfg_hostres *host_res;
+ struct ldr_module *module_obj = NULL;
+ struct bridge_drv_interface *drv_fxns = NULL;
+ struct dev_object *dev_obj = NULL;
+ struct chnl_mgrattrs mgr_attrs;
+ struct io_attrs io_mgr_attrs;
+ u32 num_windows;
+ struct drv_object *hdrv_obj = NULL;
+ int status = 0;
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(phDevObject != NULL);
+ DBC_REQUIRE(driver_file_name != NULL);
+
+ status = drv_request_bridge_res_dsp((void *)&host_res);
+
+ if (DSP_FAILED(status)) {
+ dev_dbg(bridge, "%s: Failed to reserve bridge resources\n",
+ __func__);
+ goto leave;
+ }
+
+ /* Get the Bridge driver interface functions */
+ bridge_drv_entry(&drv_fxns, driver_file_name);
+ if (DSP_FAILED(cfg_get_object((u32 *) &hdrv_obj, REG_DRV_OBJECT))) {
+ /* don't propagate CFG errors from this PROC function */
+ status = -EPERM;
+ }
+ /* Create the device object, and pass a handle to the Bridge driver for
+ * storage. */
+ if (DSP_SUCCEEDED(status)) {
+ DBC_ASSERT(drv_fxns);
+ dev_obj = kzalloc(sizeof(struct dev_object), GFP_KERNEL);
+ if (dev_obj) {
+ /* Fill out the rest of the Dev Object structure: */
+ dev_obj->dev_node_obj = dev_node_obj;
+ dev_obj->module_obj = module_obj;
+ dev_obj->cod_mgr = NULL;
+ dev_obj->hchnl_mgr = NULL;
+ dev_obj->hdeh_mgr = NULL;
+ dev_obj->lock_owner = NULL;
+ dev_obj->word_size = DSPWORDSIZE;
+ dev_obj->hdrv_obj = hdrv_obj;
+ dev_obj->dev_type = DSP_UNIT;
+ /* Store this Bridge's interface functions, based on its
+ * version. */
+ store_interface_fxns(drv_fxns,
+ &dev_obj->bridge_interface);
+
+ /* Call fxn_dev_create() to get the Bridge's device
+ * context handle. */
+ status = (dev_obj->bridge_interface.pfn_dev_create)
+ (&dev_obj->hbridge_context, dev_obj,
+ host_res);
+ /* Assert bridge_dev_create()'s ensure clause: */
+ DBC_ASSERT(DSP_FAILED(status)
+ || (dev_obj->hbridge_context != NULL));
+ } else {
+ status = -ENOMEM;
+ }
+ }
+ /* Attempt to create the COD manager for this device: */
+ if (DSP_SUCCEEDED(status))
+ status = init_cod_mgr(dev_obj);
+
+ /* Attempt to create the channel manager for this device: */
+ if (DSP_SUCCEEDED(status)) {
+ mgr_attrs.max_channels = CHNL_MAXCHANNELS;
+ io_mgr_attrs.birq = host_res->birq_registers;
+ io_mgr_attrs.irq_shared =
+ (host_res->birq_attrib & CFG_IRQSHARED);
+ io_mgr_attrs.word_size = DSPWORDSIZE;
+ mgr_attrs.word_size = DSPWORDSIZE;
+ num_windows = host_res->num_mem_windows;
+ if (num_windows) {
+ /* Assume last memory window is for CHNL */
+ io_mgr_attrs.shm_base = host_res->dw_mem_base[1] +
+ host_res->dw_offset_for_monitor;
+ io_mgr_attrs.usm_length =
+ host_res->dw_mem_length[1] -
+ host_res->dw_offset_for_monitor;
+ } else {
+ io_mgr_attrs.shm_base = 0;
+ io_mgr_attrs.usm_length = 0;
+ pr_err("%s: No memory reserved for shared structures\n",
+ __func__);
+ }
+ status = chnl_create(&dev_obj->hchnl_mgr, dev_obj, &mgr_attrs);
+ if (status == -ENOSYS) {
+ /* It's OK for a device not to have a channel
+ * manager: */
+ status = 0;
+ }
+ /* Create CMM mgr even if Msg Mgr not impl. */
+ status = cmm_create(&dev_obj->hcmm_mgr,
+ (struct dev_object *)dev_obj, NULL);
+ /* Only create IO manager if we have a channel manager */
+ if (DSP_SUCCEEDED(status) && dev_obj->hchnl_mgr) {
+ status = io_create(&dev_obj->hio_mgr, dev_obj,
+ &io_mgr_attrs);
+ }
+ /* Only create DEH manager if we have an IO manager */
+ if (DSP_SUCCEEDED(status)) {
+ /* Instantiate the DEH module */
+ status = (*dev_obj->bridge_interface.pfn_deh_create)
+ (&dev_obj->hdeh_mgr, dev_obj);
+ }
+ /* Create DMM mgr . */
+ status = dmm_create(&dev_obj->dmm_mgr,
+ (struct dev_object *)dev_obj, NULL);
+ }
+ /* Add the new DEV_Object to the global list: */
+ if (DSP_SUCCEEDED(status)) {
+ lst_init_elem(&dev_obj->link);
+ status = drv_insert_dev_object(hdrv_obj, dev_obj);
+ }
+ /* Create the Processor List */
+ if (DSP_SUCCEEDED(status)) {
+ dev_obj->proc_list = kzalloc(sizeof(struct lst_list),
+ GFP_KERNEL);
+ if (!(dev_obj->proc_list))
+ status = -EPERM;
+ else
+ INIT_LIST_HEAD(&dev_obj->proc_list->head);
+ }
+leave:
+ /* If all went well, return a handle to the dev object;
+ * else, cleanup and return NULL in the OUT parameter. */
+ if (DSP_SUCCEEDED(status)) {
+ *phDevObject = dev_obj;
+ } else {
+ if (dev_obj) {
+ kfree(dev_obj->proc_list);
+ if (dev_obj->cod_mgr)
+ cod_delete(dev_obj->cod_mgr);
+ if (dev_obj->dmm_mgr)
+ dmm_destroy(dev_obj->dmm_mgr);
+ kfree(dev_obj);
+ }
+
+ *phDevObject = NULL;
+ }
+
+ DBC_ENSURE((DSP_SUCCEEDED(status) && *phDevObject) ||
+ (DSP_FAILED(status) && !*phDevObject));
+ return status;
+}
+
+/*
+ * ======== dev_create2 ========
+ * Purpose:
+ * After successful loading of the image from api_init_complete2
+ * (PROC Auto_Start) or proc_load this fxn is called. This creates
+ * the Node Manager and updates the DEV Object.
+ */
+int dev_create2(struct dev_object *hdev_obj)
+{
+ int status = 0;
+ struct dev_object *dev_obj = hdev_obj;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(hdev_obj);
+
+ /* There can be only one Node Manager per DEV object */
+ DBC_ASSERT(!dev_obj->hnode_mgr);
+ status = node_create_mgr(&dev_obj->hnode_mgr, hdev_obj);
+ if (DSP_FAILED(status))
+ dev_obj->hnode_mgr = NULL;
+
+ DBC_ENSURE((DSP_SUCCEEDED(status) && dev_obj->hnode_mgr != NULL)
+ || (DSP_FAILED(status) && dev_obj->hnode_mgr == NULL));
+ return status;
+}
+
+/*
+ * ======== dev_destroy2 ========
+ * Purpose:
+ * Destroys the Node manager for this device.
+ */
+int dev_destroy2(struct dev_object *hdev_obj)
+{
+ int status = 0;
+ struct dev_object *dev_obj = hdev_obj;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(hdev_obj);
+
+ if (dev_obj->hnode_mgr) {
+ if (DSP_FAILED(node_delete_mgr(dev_obj->hnode_mgr)))
+ status = -EPERM;
+ else
+ dev_obj->hnode_mgr = NULL;
+
+ }
+
+ DBC_ENSURE((DSP_SUCCEEDED(status) && dev_obj->hnode_mgr == NULL) ||
+ DSP_FAILED(status));
+ return status;
+}
+
+/*
+ * ======== dev_destroy_device ========
+ * Purpose:
+ * Destroys the channel manager for this device, if any, calls
+ * bridge_dev_destroy(), and then attempts to unload the Bridge module.
+ */
+int dev_destroy_device(struct dev_object *hdev_obj)
+{
+ int status = 0;
+ struct dev_object *dev_obj = hdev_obj;
+
+ DBC_REQUIRE(refs > 0);
+
+ if (hdev_obj) {
+ if (dev_obj->cod_mgr) {
+ cod_delete(dev_obj->cod_mgr);
+ dev_obj->cod_mgr = NULL;
+ }
+
+ if (dev_obj->hnode_mgr) {
+ node_delete_mgr(dev_obj->hnode_mgr);
+ dev_obj->hnode_mgr = NULL;
+ }
+
+ /* Free the io, channel, and message managers for this board: */
+ if (dev_obj->hio_mgr) {
+ io_destroy(dev_obj->hio_mgr);
+ dev_obj->hio_mgr = NULL;
+ }
+ if (dev_obj->hchnl_mgr) {
+ chnl_destroy(dev_obj->hchnl_mgr);
+ dev_obj->hchnl_mgr = NULL;
+ }
+ if (dev_obj->hmsg_mgr) {
+ msg_delete(dev_obj->hmsg_mgr);
+ dev_obj->hmsg_mgr = NULL;
+ }
+
+ if (dev_obj->hdeh_mgr) {
+ /* Uninitialize DEH module. */
+ (*dev_obj->bridge_interface.pfn_deh_destroy)
+ (dev_obj->hdeh_mgr);
+ dev_obj->hdeh_mgr = NULL;
+ }
+ if (dev_obj->hcmm_mgr) {
+ cmm_destroy(dev_obj->hcmm_mgr, true);
+ dev_obj->hcmm_mgr = NULL;
+ }
+
+ if (dev_obj->dmm_mgr) {
+ dmm_destroy(dev_obj->dmm_mgr);
+ dev_obj->dmm_mgr = NULL;
+ }
+
+ /* Call the driver's bridge_dev_destroy() function: */
+ /* Require of DevDestroy */
+ if (dev_obj->hbridge_context) {
+ status = (*dev_obj->bridge_interface.pfn_dev_destroy)
+ (dev_obj->hbridge_context);
+ dev_obj->hbridge_context = NULL;
+ } else
+ status = -EPERM;
+ if (DSP_SUCCEEDED(status)) {
+ kfree(dev_obj->proc_list);
+ dev_obj->proc_list = NULL;
+
+ /* Remove this DEV_Object from the global list: */
+ drv_remove_dev_object(dev_obj->hdrv_obj, dev_obj);
+ /* Free the library:
+ * LDR_FreeModule(dev_obj->module_obj); */
+ /* Free this dev object: */
+ kfree(dev_obj);
+ dev_obj = NULL;
+ }
+ } else {
+ status = -EFAULT;
+ }
+
+ return status;
+}
+
+/*
+ * ======== dev_get_chnl_mgr ========
+ * Purpose:
+ * Retrieve the handle to the channel manager created for this
+ * device.
+ */
+int dev_get_chnl_mgr(struct dev_object *hdev_obj,
+ OUT struct chnl_mgr **phMgr)
+{
+ int status = 0;
+ struct dev_object *dev_obj = hdev_obj;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(phMgr != NULL);
+
+ if (hdev_obj) {
+ *phMgr = dev_obj->hchnl_mgr;
+ } else {
+ *phMgr = NULL;
+ status = -EFAULT;
+ }
+
+ DBC_ENSURE(DSP_SUCCEEDED(status) || ((phMgr != NULL) &&
+ (*phMgr == NULL)));
+ return status;
+}
+
+/*
+ * ======== dev_get_cmm_mgr ========
+ * Purpose:
+ * Retrieve the handle to the shared memory manager created for this
+ * device.
+ */
+int dev_get_cmm_mgr(struct dev_object *hdev_obj,
+ OUT struct cmm_object **phMgr)
+{
+ int status = 0;
+ struct dev_object *dev_obj = hdev_obj;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(phMgr != NULL);
+
+ if (hdev_obj) {
+ *phMgr = dev_obj->hcmm_mgr;
+ } else {
+ *phMgr = NULL;
+ status = -EFAULT;
+ }
+
+ DBC_ENSURE(DSP_SUCCEEDED(status) || ((phMgr != NULL) &&
+ (*phMgr == NULL)));
+ return status;
+}
+
+/*
+ * ======== dev_get_dmm_mgr ========
+ * Purpose:
+ * Retrieve the handle to the dynamic memory manager created for this
+ * device.
+ */
+int dev_get_dmm_mgr(struct dev_object *hdev_obj,
+ OUT struct dmm_object **phMgr)
+{
+ int status = 0;
+ struct dev_object *dev_obj = hdev_obj;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(phMgr != NULL);
+
+ if (hdev_obj) {
+ *phMgr = dev_obj->dmm_mgr;
+ } else {
+ *phMgr = NULL;
+ status = -EFAULT;
+ }
+
+ DBC_ENSURE(DSP_SUCCEEDED(status) || ((phMgr != NULL) &&
+ (*phMgr == NULL)));
+ return status;
+}
+
+/*
+ * ======== dev_get_cod_mgr ========
+ * Purpose:
+ * Retrieve the COD manager created for this device.
+ */
+int dev_get_cod_mgr(struct dev_object *hdev_obj,
+ OUT struct cod_manager **phCodMgr)
+{
+ int status = 0;
+ struct dev_object *dev_obj = hdev_obj;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(phCodMgr != NULL);
+
+ if (hdev_obj) {
+ *phCodMgr = dev_obj->cod_mgr;
+ } else {
+ *phCodMgr = NULL;
+ status = -EFAULT;
+ }
+
+ DBC_ENSURE(DSP_SUCCEEDED(status) || ((phCodMgr != NULL) &&
+ (*phCodMgr == NULL)));
+ return status;
+}
+
+/*
+ * ========= dev_get_deh_mgr ========
+ */
+int dev_get_deh_mgr(struct dev_object *hdev_obj,
+ OUT struct deh_mgr **phDehMgr)
+{
+ int status = 0;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(phDehMgr != NULL);
+ DBC_REQUIRE(hdev_obj);
+ if (hdev_obj) {
+ *phDehMgr = hdev_obj->hdeh_mgr;
+ } else {
+ *phDehMgr = NULL;
+ status = -EFAULT;
+ }
+ return status;
+}
+
+/*
+ * ======== dev_get_dev_node ========
+ * Purpose:
+ * Retrieve the platform specific device ID for this device.
+ */
+int dev_get_dev_node(struct dev_object *hdev_obj,
+ OUT struct cfg_devnode **phDevNode)
+{
+ int status = 0;
+ struct dev_object *dev_obj = hdev_obj;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(phDevNode != NULL);
+
+ if (hdev_obj) {
+ *phDevNode = dev_obj->dev_node_obj;
+ } else {
+ *phDevNode = NULL;
+ status = -EFAULT;
+ }
+
+ DBC_ENSURE(DSP_SUCCEEDED(status) || ((phDevNode != NULL) &&
+ (*phDevNode == NULL)));
+ return status;
+}
+
+/*
+ * ======== dev_get_first ========
+ * Purpose:
+ * Retrieve the first Device Object handle from an internal linked list
+ * of DEV_OBJECTs maintained by DEV.
+ */
+struct dev_object *dev_get_first(void)
+{
+ struct dev_object *dev_obj = NULL;
+
+ dev_obj = (struct dev_object *)drv_get_first_dev_object();
+
+ return dev_obj;
+}
+
+/*
+ * ======== dev_get_intf_fxns ========
+ * Purpose:
+ * Retrieve the Bridge interface function structure for the loaded driver.
+ * ppIntfFxns != NULL.
+ */
+int dev_get_intf_fxns(struct dev_object *hdev_obj,
+ OUT struct bridge_drv_interface **ppIntfFxns)
+{
+ int status = 0;
+ struct dev_object *dev_obj = hdev_obj;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(ppIntfFxns != NULL);
+
+ if (hdev_obj) {
+ *ppIntfFxns = &dev_obj->bridge_interface;
+ } else {
+ *ppIntfFxns = NULL;
+ status = -EFAULT;
+ }
+
+ DBC_ENSURE(DSP_SUCCEEDED(status) || ((ppIntfFxns != NULL) &&
+ (*ppIntfFxns == NULL)));
+ return status;
+}
+
+/*
+ * ========= dev_get_io_mgr ========
+ */
+int dev_get_io_mgr(struct dev_object *hdev_obj,
+ OUT struct io_mgr **phIOMgr)
+{
+ int status = 0;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(phIOMgr != NULL);
+ DBC_REQUIRE(hdev_obj);
+
+ if (hdev_obj) {
+ *phIOMgr = hdev_obj->hio_mgr;
+ } else {
+ *phIOMgr = NULL;
+ status = -EFAULT;
+ }
+
+ return status;
+}
+
+/*
+ * ======== dev_get_next ========
+ * Purpose:
+ * Retrieve the next Device Object handle from an internal linked list
+ * of DEV_OBJECTs maintained by DEV, after having previously called
+ * dev_get_first() and zero or more dev_get_next() calls.
+ */
+struct dev_object *dev_get_next(struct dev_object *hdev_obj)
+{
+ struct dev_object *next_dev_object = NULL;
+
+ if (hdev_obj) {
+ next_dev_object = (struct dev_object *)
+ drv_get_next_dev_object((u32) hdev_obj);
+ }
+
+ return next_dev_object;
+}
+
+/*
+ * ========= dev_get_msg_mgr ========
+ */
+void dev_get_msg_mgr(struct dev_object *hdev_obj, OUT struct msg_mgr **phMsgMgr)
+{
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(phMsgMgr != NULL);
+ DBC_REQUIRE(hdev_obj);
+
+ *phMsgMgr = hdev_obj->hmsg_mgr;
+}
+
+/*
+ * ======== dev_get_node_manager ========
+ * Purpose:
+ * Retrieve the Node Manager Handle
+ */
+int dev_get_node_manager(struct dev_object *hdev_obj,
+ OUT struct node_mgr **phNodeMgr)
+{
+ int status = 0;
+ struct dev_object *dev_obj = hdev_obj;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(phNodeMgr != NULL);
+
+ if (hdev_obj) {
+ *phNodeMgr = dev_obj->hnode_mgr;
+ } else {
+ *phNodeMgr = NULL;
+ status = -EFAULT;
+ }
+
+ DBC_ENSURE(DSP_SUCCEEDED(status) || ((phNodeMgr != NULL) &&
+ (*phNodeMgr == NULL)));
+ return status;
+}
+
+/*
+ * ======== dev_get_symbol ========
+ */
+int dev_get_symbol(struct dev_object *hdev_obj,
+ IN CONST char *pstrSym, OUT u32 * pul_value)
+{
+ int status = 0;
+ struct cod_manager *cod_mgr;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(pstrSym != NULL && pul_value != NULL);
+
+ if (hdev_obj) {
+ status = dev_get_cod_mgr(hdev_obj, &cod_mgr);
+ if (cod_mgr)
+ status = cod_get_sym_value(cod_mgr, (char *)pstrSym,
+ pul_value);
+ else
+ status = -EFAULT;
+ }
+
+ return status;
+}
+
+/*
+ * ======== dev_get_bridge_context ========
+ * Purpose:
+ * Retrieve the Bridge Context handle, as returned by the
+ * bridge_dev_create fxn.
+ */
+int dev_get_bridge_context(struct dev_object *hdev_obj,
+ OUT struct bridge_dev_context **phbridge_context)
+{
+ int status = 0;
+ struct dev_object *dev_obj = hdev_obj;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(phbridge_context != NULL);
+
+ if (hdev_obj) {
+ *phbridge_context = dev_obj->hbridge_context;
+ } else {
+ *phbridge_context = NULL;
+ status = -EFAULT;
+ }
+
+ DBC_ENSURE(DSP_SUCCEEDED(status) || ((phbridge_context != NULL) &&
+ (*phbridge_context == NULL)));
+ return status;
+}
+
+/*
+ * ======== dev_exit ========
+ * Purpose:
+ * Decrement reference count, and free resources when reference count is
+ * 0.
+ */
+void dev_exit(void)
+{
+ DBC_REQUIRE(refs > 0);
+
+ refs--;
+
+ if (refs == 0) {
+ cmm_exit();
+ dmm_exit();
+ }
+
+ DBC_ENSURE(refs >= 0);
+}
+
+/*
+ * ======== dev_init ========
+ * Purpose:
+ * Initialize DEV's private state, keeping a reference count on each call.
+ */
+bool dev_init(void)
+{
+ bool cmm_ret, dmm_ret, ret = true;
+
+ DBC_REQUIRE(refs >= 0);
+
+ if (refs == 0) {
+ cmm_ret = cmm_init();
+ dmm_ret = dmm_init();
+
+ ret = cmm_ret && dmm_ret;
+
+ if (!ret) {
+ if (cmm_ret)
+ cmm_exit();
+
+ if (dmm_ret)
+ dmm_exit();
+
+ }
+ }
+
+ if (ret)
+ refs++;
+
+ DBC_ENSURE((ret && (refs > 0)) || (!ret && (refs >= 0)));
+
+ return ret;
+}
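+
+/*
+ * Pairing sketch (illustrative): dev_init()/dev_exit() are reference
+ * counted, so every successful dev_init() must be balanced by one
+ * dev_exit():
+ *
+ *	if (dev_init()) {
+ *		... use DEV, CMM and DMM services ...
+ *		dev_exit();
+ *	}
+ *
+ * CMM and DMM are only initialized on the 0 -> 1 transition of "refs"
+ * and released again when the count drops back to 0.
+ */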
+
+/*
+ * ======== dev_notify_clients ========
+ * Purpose:
+ * Notify all clients of this device of a change in device status.
+ */
+int dev_notify_clients(struct dev_object *hdev_obj, u32 ulStatus)
+{
+ int status = 0;
+
+ struct dev_object *dev_obj = hdev_obj;
+ void *proc_obj;
+
+ for (proc_obj = (void *)lst_first(dev_obj->proc_list);
+ proc_obj != NULL;
+ proc_obj = (void *)lst_next(dev_obj->proc_list,
+ (struct list_head *)proc_obj))
+ proc_notify_clients(proc_obj, (u32) ulStatus);
+
+ return status;
+}
+
+/*
+ * ======== dev_remove_device ========
+ */
+int dev_remove_device(struct cfg_devnode *dev_node_obj)
+{
+ struct dev_object *hdev_obj; /* handle to device object */
+ int status = 0;
+ struct dev_object *dev_obj;
+
+ /* Retrieve the device object handle originally stored with
+ * the dev_node: */
+ status = cfg_get_dev_object(dev_node_obj, (u32 *) &hdev_obj);
+ if (DSP_SUCCEEDED(status)) {
+ /* Remove the Processor List */
+ dev_obj = (struct dev_object *)hdev_obj;
+ /* Destroy the device object. */
+ status = dev_destroy_device(hdev_obj);
+ }
+
+ return status;
+}
+
+/*
+ * ======== dev_set_chnl_mgr ========
+ * Purpose:
+ * Set the channel manager for this device.
+ */
+int dev_set_chnl_mgr(struct dev_object *hdev_obj,
+ struct chnl_mgr *hmgr)
+{
+ int status = 0;
+ struct dev_object *dev_obj = hdev_obj;
+
+ DBC_REQUIRE(refs > 0);
+
+ if (hdev_obj)
+ dev_obj->hchnl_mgr = hmgr;
+ else
+ status = -EFAULT;
+
+ DBC_ENSURE(DSP_FAILED(status) || (dev_obj->hchnl_mgr == hmgr));
+ return status;
+}
+
+/*
+ * ======== dev_set_msg_mgr ========
+ * Purpose:
+ * Set the message manager for this device.
+ */
+void dev_set_msg_mgr(struct dev_object *hdev_obj, struct msg_mgr *hmgr)
+{
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(hdev_obj);
+
+ hdev_obj->hmsg_mgr = hmgr;
+}
+
+/*
+ * ======== dev_start_device ========
+ * Purpose:
+ * Initializes the new device with the BRIDGE environment.
+ */
+int dev_start_device(struct cfg_devnode *dev_node_obj)
+{
+ struct dev_object *hdev_obj = NULL; /* handle to Bridge Device */
+ /* Bridge driver filename */
+ char bridge_file_name[CFG_MAXSEARCHPATHLEN] = "UMA";
+ int status;
+ struct mgr_object *hmgr_obj = NULL;
+
+ DBC_REQUIRE(refs > 0);
+
+ /* Given all resources, create a device object. */
+ status = dev_create_device(&hdev_obj, bridge_file_name,
+ dev_node_obj);
+ if (DSP_SUCCEEDED(status)) {
+ /* Store away the hdev_obj with the DEVNODE */
+ status = cfg_set_dev_object(dev_node_obj, (u32) hdev_obj);
+ if (DSP_FAILED(status)) {
+ /* Clean up */
+ dev_destroy_device(hdev_obj);
+ hdev_obj = NULL;
+ }
+ }
+ if (DSP_SUCCEEDED(status)) {
+ /* Create the Manager Object */
+ status = mgr_create(&hmgr_obj, dev_node_obj);
+ }
+ if (DSP_FAILED(status)) {
+ if (hdev_obj)
+ dev_destroy_device(hdev_obj);
+
+ /* Ensure the device extension is NULL */
+ cfg_set_dev_object(dev_node_obj, 0L);
+ }
+
+ return status;
+}
+
+/*
+ * ======== fxn_not_implemented ========
+ * Purpose:
+ * Takes the place of a Bridge Null Function.
+ * Parameters:
+ * Multiple, optional.
+ * Returns:
+ * -ENOSYS: Always.
+ */
+static int fxn_not_implemented(int arg, ...)
+{
+ return -ENOSYS;
+}
+
+/*
+ * ======== init_cod_mgr ========
+ * Purpose:
+ * Create a COD manager for this device.
+ * Parameters:
+ * dev_obj: Pointer to device object created with
+ * dev_create_device()
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid hdev_obj.
+ * Requires:
+ * Should only be called once by dev_create_device() for a given DevObject.
+ * Ensures:
+ */
+static int init_cod_mgr(struct dev_object *dev_obj)
+{
+ int status = 0;
+ char *sz_dummy_file = "dummy";
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(!dev_obj || (dev_obj->cod_mgr == NULL));
+
+ status = cod_create(&dev_obj->cod_mgr, sz_dummy_file, NULL);
+
+ return status;
+}
+
+/*
+ * ======== dev_insert_proc_object ========
+ * Purpose:
+ * Insert a ProcObject into the list maintained by DEV.
+ * Parameters:
+ * p_proc_object: Ptr to ProcObject to insert.
+ * dev_obj: Ptr to Dev Object where the list is.
+ * pbAlreadyAttached: Ptr to return whether a proc object was already attached.
+ * Returns:
+ * 0: If successful.
+ * Requires:
+ * List Exists
+ * hdev_obj is Valid handle
+ * DEV Initialized
+ * pbAlreadyAttached != NULL
+ * proc_obj != 0
+ * Ensures:
+ * 0 and List is not Empty.
+ */
+int dev_insert_proc_object(struct dev_object *hdev_obj,
+ u32 proc_obj, OUT bool *pbAlreadyAttached)
+{
+ int status = 0;
+ struct dev_object *dev_obj = (struct dev_object *)hdev_obj;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(dev_obj);
+ DBC_REQUIRE(proc_obj != 0);
+ DBC_REQUIRE(dev_obj->proc_list != NULL);
+ DBC_REQUIRE(pbAlreadyAttached != NULL);
+ if (!LST_IS_EMPTY(dev_obj->proc_list))
+ *pbAlreadyAttached = true;
+
+ /* Add the proc object to the tail of the list. */
+ lst_put_tail(dev_obj->proc_list, (struct list_head *)proc_obj);
+
+ DBC_ENSURE(DSP_SUCCEEDED(status) && !LST_IS_EMPTY(dev_obj->proc_list));
+
+ return status;
+}
+
+/*
+ * ======== dev_remove_proc_object ========
+ * Purpose:
+ * Search for and remove a Proc object from the given list maintained
+ * by the DEV.
+ * Parameters:
+ * p_proc_object: Ptr to ProcObject to remove.
+ * dev_obj: Ptr to Dev Object where the list is.
+ * Returns:
+ * 0: If successful.
+ * Requires:
+ * List exists and is not empty
+ * proc_obj != 0
+ * hdev_obj is a valid Dev handle.
+ * Ensures:
+ * Details:
+ * List will be deleted when the DEV is destroyed.
+ */
+int dev_remove_proc_object(struct dev_object *hdev_obj, u32 proc_obj)
+{
+ int status = -EPERM;
+ struct list_head *cur_elem;
+ struct dev_object *dev_obj = (struct dev_object *)hdev_obj;
+
+ DBC_REQUIRE(dev_obj);
+ DBC_REQUIRE(proc_obj != 0);
+ DBC_REQUIRE(dev_obj->proc_list != NULL);
+ DBC_REQUIRE(!LST_IS_EMPTY(dev_obj->proc_list));
+
+ /* Search list for proc_obj: */
+ for (cur_elem = lst_first(dev_obj->proc_list); cur_elem != NULL;
+ cur_elem = lst_next(dev_obj->proc_list, cur_elem)) {
+ /* If found, remove it. */
+ if ((u32) cur_elem == proc_obj) {
+ lst_remove_elem(dev_obj->proc_list, cur_elem);
+ status = 0;
+ break;
+ }
+ }
+
+ return status;
+}
+
+int dev_get_dev_type(struct dev_object *hdevObject, u8 *dev_type)
+{
+ int status = 0;
+ struct dev_object *dev_obj = (struct dev_object *)hdevObject;
+
+ *dev_type = dev_obj->dev_type;
+
+ return status;
+}
+
+/*
+ * ======== store_interface_fxns ========
+ * Purpose:
+ * Copy the Bridge's interface functions into the device object,
+ * ensuring that fxn_not_implemented() is set for:
+ *
+ * 1. All Bridge function pointers which are NULL; and
+ * 2. All function slots in the struct dev_object structure which have no
+ * corresponding slots in the Bridge's interface, because the Bridge
+ * is of an *older* version.
+ * Parameters:
+ * intf_fxns: Interface fxn Structure of the Bridge's Dev Object.
+ * drv_fxns: Interface Fxns offered by the Bridge during DEV_Create().
+ * Returns:
+ * Requires:
+ * Input pointers are valid.
+ * Bridge driver is *not* written for a newer DSP API.
+ * Ensures:
+ * All function pointers in the dev object's fxn interface are not NULL.
+ */
+static void store_interface_fxns(struct bridge_drv_interface *drv_fxns,
+ OUT struct bridge_drv_interface *intf_fxns)
+{
+ u32 bridge_version;
+
+ /* Local helper macro: */
+#define STORE_FXN(cast, pfn) \
+ (intf_fxns->pfn = ((drv_fxns->pfn != NULL) ? drv_fxns->pfn : \
+ (cast)fxn_not_implemented))
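+
+ /*
+ * Illustrative expansion of the helper macro for one slot (a
+ * sketch; pfn_brd_start is one of the slots stored below):
+ *
+ *	intf_fxns->pfn_brd_start =
+ *		(drv_fxns->pfn_brd_start != NULL) ?
+ *			drv_fxns->pfn_brd_start :
+ *			(fxn_brd_start)fxn_not_implemented;
+ *
+ * so any slot the Bridge driver leaves NULL ends up pointing at
+ * fxn_not_implemented(), which simply returns -ENOSYS.
+ */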
+
+ DBC_REQUIRE(intf_fxns != NULL);
+ DBC_REQUIRE(drv_fxns != NULL);
+ DBC_REQUIRE(MAKEVERSION(drv_fxns->brd_api_major_version,
+ drv_fxns->brd_api_minor_version) <= BRD_API_VERSION);
+ bridge_version = MAKEVERSION(drv_fxns->brd_api_major_version,
+ drv_fxns->brd_api_minor_version);
+ intf_fxns->brd_api_major_version = drv_fxns->brd_api_major_version;
+ intf_fxns->brd_api_minor_version = drv_fxns->brd_api_minor_version;
+ /* Install functions up to DSP API version .80 (first alpha): */
+ if (bridge_version > 0) {
+ STORE_FXN(fxn_dev_create, pfn_dev_create);
+ STORE_FXN(fxn_dev_destroy, pfn_dev_destroy);
+ STORE_FXN(fxn_dev_ctrl, pfn_dev_cntrl);
+ STORE_FXN(fxn_brd_monitor, pfn_brd_monitor);
+ STORE_FXN(fxn_brd_start, pfn_brd_start);
+ STORE_FXN(fxn_brd_stop, pfn_brd_stop);
+ STORE_FXN(fxn_brd_status, pfn_brd_status);
+ STORE_FXN(fxn_brd_read, pfn_brd_read);
+ STORE_FXN(fxn_brd_write, pfn_brd_write);
+ STORE_FXN(fxn_brd_setstate, pfn_brd_set_state);
+ STORE_FXN(fxn_brd_memcopy, pfn_brd_mem_copy);
+ STORE_FXN(fxn_brd_memwrite, pfn_brd_mem_write);
+ STORE_FXN(fxn_brd_memmap, pfn_brd_mem_map);
+ STORE_FXN(fxn_brd_memunmap, pfn_brd_mem_un_map);
+ STORE_FXN(fxn_chnl_create, pfn_chnl_create);
+ STORE_FXN(fxn_chnl_destroy, pfn_chnl_destroy);
+ STORE_FXN(fxn_chnl_open, pfn_chnl_open);
+ STORE_FXN(fxn_chnl_close, pfn_chnl_close);
+ STORE_FXN(fxn_chnl_addioreq, pfn_chnl_add_io_req);
+ STORE_FXN(fxn_chnl_getioc, pfn_chnl_get_ioc);
+ STORE_FXN(fxn_chnl_cancelio, pfn_chnl_cancel_io);
+ STORE_FXN(fxn_chnl_flushio, pfn_chnl_flush_io);
+ STORE_FXN(fxn_chnl_getinfo, pfn_chnl_get_info);
+ STORE_FXN(fxn_chnl_getmgrinfo, pfn_chnl_get_mgr_info);
+ STORE_FXN(fxn_chnl_idle, pfn_chnl_idle);
+ STORE_FXN(fxn_chnl_registernotify, pfn_chnl_register_notify);
+ STORE_FXN(fxn_deh_create, pfn_deh_create);
+ STORE_FXN(fxn_deh_destroy, pfn_deh_destroy);
+ STORE_FXN(fxn_deh_notify, pfn_deh_notify);
+ STORE_FXN(fxn_deh_registernotify, pfn_deh_register_notify);
+ STORE_FXN(fxn_deh_getinfo, pfn_deh_get_info);
+ STORE_FXN(fxn_io_create, pfn_io_create);
+ STORE_FXN(fxn_io_destroy, pfn_io_destroy);
+ STORE_FXN(fxn_io_onloaded, pfn_io_on_loaded);
+ STORE_FXN(fxn_io_getprocload, pfn_io_get_proc_load);
+ STORE_FXN(fxn_msg_create, pfn_msg_create);
+ STORE_FXN(fxn_msg_createqueue, pfn_msg_create_queue);
+ STORE_FXN(fxn_msg_delete, pfn_msg_delete);
+ STORE_FXN(fxn_msg_deletequeue, pfn_msg_delete_queue);
+ STORE_FXN(fxn_msg_get, pfn_msg_get);
+ STORE_FXN(fxn_msg_put, pfn_msg_put);
+ STORE_FXN(fxn_msg_registernotify, pfn_msg_register_notify);
+ STORE_FXN(fxn_msg_setqueueid, pfn_msg_set_queue_id);
+ }
+ /* Add code for any additional functions in newer Bridge versions here */
+ /* Ensure postcondition: */
+ DBC_ENSURE(intf_fxns->pfn_dev_create != NULL);
+ DBC_ENSURE(intf_fxns->pfn_dev_destroy != NULL);
+ DBC_ENSURE(intf_fxns->pfn_dev_cntrl != NULL);
+ DBC_ENSURE(intf_fxns->pfn_brd_monitor != NULL);
+ DBC_ENSURE(intf_fxns->pfn_brd_start != NULL);
+ DBC_ENSURE(intf_fxns->pfn_brd_stop != NULL);
+ DBC_ENSURE(intf_fxns->pfn_brd_status != NULL);
+ DBC_ENSURE(intf_fxns->pfn_brd_read != NULL);
+ DBC_ENSURE(intf_fxns->pfn_brd_write != NULL);
+ DBC_ENSURE(intf_fxns->pfn_chnl_create != NULL);
+ DBC_ENSURE(intf_fxns->pfn_chnl_destroy != NULL);
+ DBC_ENSURE(intf_fxns->pfn_chnl_open != NULL);
+ DBC_ENSURE(intf_fxns->pfn_chnl_close != NULL);
+ DBC_ENSURE(intf_fxns->pfn_chnl_add_io_req != NULL);
+ DBC_ENSURE(intf_fxns->pfn_chnl_get_ioc != NULL);
+ DBC_ENSURE(intf_fxns->pfn_chnl_cancel_io != NULL);
+ DBC_ENSURE(intf_fxns->pfn_chnl_flush_io != NULL);
+ DBC_ENSURE(intf_fxns->pfn_chnl_get_info != NULL);
+ DBC_ENSURE(intf_fxns->pfn_chnl_get_mgr_info != NULL);
+ DBC_ENSURE(intf_fxns->pfn_chnl_idle != NULL);
+ DBC_ENSURE(intf_fxns->pfn_chnl_register_notify != NULL);
+ DBC_ENSURE(intf_fxns->pfn_deh_create != NULL);
+ DBC_ENSURE(intf_fxns->pfn_deh_destroy != NULL);
+ DBC_ENSURE(intf_fxns->pfn_deh_notify != NULL);
+ DBC_ENSURE(intf_fxns->pfn_deh_register_notify != NULL);
+ DBC_ENSURE(intf_fxns->pfn_deh_get_info != NULL);
+ DBC_ENSURE(intf_fxns->pfn_io_create != NULL);
+ DBC_ENSURE(intf_fxns->pfn_io_destroy != NULL);
+ DBC_ENSURE(intf_fxns->pfn_io_on_loaded != NULL);
+ DBC_ENSURE(intf_fxns->pfn_io_get_proc_load != NULL);
+ DBC_ENSURE(intf_fxns->pfn_msg_set_queue_id != NULL);
+
+#undef STORE_FXN
+}
diff --git a/drivers/staging/tidspbridge/pmgr/dmm.c b/drivers/staging/tidspbridge/pmgr/dmm.c
new file mode 100644
index 0000000..c8abce8
--- /dev/null
+++ b/drivers/staging/tidspbridge/pmgr/dmm.c
@@ -0,0 +1,533 @@
+/*
+ * dmm.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * The Dynamic Memory Manager (DMM) module manages the DSP Virtual address
+ * space that can be directly mapped to any MPU buffer or memory region
+ *
+ * Notes:
+ * Region: Generic memory entity having a start address and a size
+ * Chunk: Reserved region
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
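+
+/*
+ * Typical call sequence (an illustrative sketch using the functions
+ * defined in this file; error handling omitted):
+ *
+ *	u32 rsv_addr, unmap_size;
+ *
+ *	dmm_reserve_memory(dmm_mgr, size, &rsv_addr);
+ *	dmm_map_memory(dmm_mgr, rsv_addr, size);
+ *	...
+ *	dmm_un_map_memory(dmm_mgr, rsv_addr, &unmap_size);
+ *	dmm_un_reserve_memory(dmm_mgr, rsv_addr);
+ */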
+
+/* ----------------------------------- Host OS */
+#include <dspbridge/host_os.h>
+
+/* ----------------------------------- DSP/BIOS Bridge */
+#include <dspbridge/std.h>
+#include <dspbridge/dbdefs.h>
+
+/* ----------------------------------- Trace & Debug */
+#include <dspbridge/dbc.h>
+
+/* ----------------------------------- OS Adaptation Layer */
+#include <dspbridge/sync.h>
+
+/* ----------------------------------- Platform Manager */
+#include <dspbridge/dev.h>
+#include <dspbridge/proc.h>
+
+/* ----------------------------------- This */
+#include <dspbridge/dmm.h>
+
+/* ----------------------------------- Defines, Data Structures, Typedefs */
+#define DMM_ADDR_VIRTUAL(a) \
+ (((struct map_page *)(a) - virtual_mapping_table) * PG_SIZE4K +\
+ dyn_mem_map_beg)
+#define DMM_ADDR_TO_INDEX(a) (((a) - dyn_mem_map_beg) / PG_SIZE4K)
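+
+/*
+ * Worked example for the two helpers above (a sketch, assuming
+ * PG_SIZE4K is 4096 and dyn_mem_map_beg is 0x20000000):
+ *
+ *	DMM_ADDR_TO_INDEX(0x20003000) == 0x3000 / 4096 == 3
+ *	DMM_ADDR_VIRTUAL(&virtual_mapping_table[3])
+ *		== 3 * 4096 + 0x20000000 == 0x20003000
+ *
+ * i.e. each map_page entry describes one 4 KiB page of the DSP
+ * virtual region starting at dyn_mem_map_beg.
+ */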
+
+/* DMM Mgr */
+struct dmm_object {
+ /* Dmm Lock is used to serialize access to the memory manager
+ * for multiple threads. */
+ spinlock_t dmm_lock; /* Lock to access dmm mgr */
+};
+
+/* ----------------------------------- Globals */
+static u32 refs; /* module reference count */
+struct map_page {
+ u32 region_size:15;
+ u32 mapped_size:15;
+ u32 reserved:1;
+ u32 mapped:1;
+};
+
+/* Create the free list */
+static struct map_page *virtual_mapping_table;
+static u32 free_region; /* The index of free region */
+static u32 free_size;
+static u32 dyn_mem_map_beg; /* The Beginning of dynamic memory mapping */
+static u32 table_size; /* The size of virt and phys pages tables */
+
+/* ----------------------------------- Function Prototypes */
+static struct map_page *get_region(u32 addr);
+static struct map_page *get_free_region(u32 aSize);
+static struct map_page *get_mapped_region(u32 aAddr);
+
+/* ======== dmm_create_tables ========
+ * Purpose:
+ * Create the table that holds information about the physical
+ * addresses of the buffer pages passed in by the user, and the
+ * table that holds information about the virtual memory reserved
+ * for the DSP.
+ */
+int dmm_create_tables(struct dmm_object *dmm_mgr, u32 addr, u32 size)
+{
+ struct dmm_object *dmm_obj = (struct dmm_object *)dmm_mgr;
+ int status = 0;
+
+ status = dmm_delete_tables(dmm_obj);
+ if (DSP_SUCCEEDED(status)) {
+ dyn_mem_map_beg = addr;
+ table_size = PG_ALIGN_HIGH(size, PG_SIZE4K) / PG_SIZE4K;
+ /* Create the free list */
+ virtual_mapping_table = __vmalloc(table_size *
+ sizeof(struct map_page), GFP_KERNEL |
+ __GFP_HIGHMEM | __GFP_ZERO, PAGE_KERNEL);
+ if (virtual_mapping_table == NULL)
+ status = -ENOMEM;
+ else {
+ /* On successful allocation,
+ * all entries are zero ('free') */
+ free_region = 0;
+ free_size = table_size * PG_SIZE4K;
+ virtual_mapping_table[0].region_size = table_size;
+ }
+ }
+
+ if (DSP_FAILED(status))
+ pr_err("%s: failure, status 0x%x\n", __func__, status);
+
+ return status;
+}
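+
+/*
+ * Sizing sketch for the table created above (illustrative, assuming
+ * PG_SIZE4K is 4096): reserving a 16 MB DSP virtual region gives
+ *
+ *	table_size = PG_ALIGN_HIGH(16 MB, PG_SIZE4K) / PG_SIZE4K = 4096
+ *
+ * entries; with one 32-bit struct map_page per 4 KiB page the whole
+ * mapping table occupies 16 KB of vmalloc'ed memory.
+ */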
+
+/*
+ * ======== dmm_create ========
+ * Purpose:
+ * Create a dynamic memory manager object.
+ */
+int dmm_create(OUT struct dmm_object **phDmmMgr,
+ struct dev_object *hdev_obj,
+ IN CONST struct dmm_mgrattrs *pMgrAttrs)
+{
+ struct dmm_object *dmm_obj = NULL;
+ int status = 0;
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(phDmmMgr != NULL);
+
+ *phDmmMgr = NULL;
+ /* create, zero, and tag a dmm mgr object */
+ dmm_obj = kzalloc(sizeof(struct dmm_object), GFP_KERNEL);
+ if (dmm_obj != NULL) {
+ spin_lock_init(&dmm_obj->dmm_lock);
+ *phDmmMgr = dmm_obj;
+ } else {
+ status = -ENOMEM;
+ }
+
+ return status;
+}
+
+/*
+ * ======== dmm_destroy ========
+ * Purpose:
+ * Release the dynamic memory manager resources.
+ */
+int dmm_destroy(struct dmm_object *dmm_mgr)
+{
+ struct dmm_object *dmm_obj = (struct dmm_object *)dmm_mgr;
+ int status = 0;
+
+ DBC_REQUIRE(refs > 0);
+ if (dmm_mgr) {
+ status = dmm_delete_tables(dmm_obj);
+ if (DSP_SUCCEEDED(status))
+ kfree(dmm_obj);
+ } else
+ status = -EFAULT;
+
+ return status;
+}
+
+/*
+ * ======== dmm_delete_tables ========
+ * Purpose:
+ * Delete DMM Tables.
+ */
+int dmm_delete_tables(struct dmm_object *dmm_mgr)
+{
+ int status = 0;
+
+ DBC_REQUIRE(refs > 0);
+ /* Delete all DMM tables */
+ if (dmm_mgr)
+ vfree(virtual_mapping_table);
+ else
+ status = -EFAULT;
+ return status;
+}
+
+/*
+ * ======== dmm_exit ========
+ * Purpose:
+ * Discontinue usage of module; free resources when reference count
+ * reaches 0.
+ */
+void dmm_exit(void)
+{
+ DBC_REQUIRE(refs > 0);
+
+ refs--;
+}
+
+/*
+ * ======== dmm_get_handle ========
+ * Purpose:
+ * Return the dynamic memory manager object for this device.
+ * This is typically called from the client process.
+ */
+int dmm_get_handle(void *hprocessor, OUT struct dmm_object **phDmmMgr)
+{
+ int status = 0;
+ struct dev_object *hdev_obj;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(phDmmMgr != NULL);
+ if (hprocessor != NULL)
+ status = proc_get_dev_object(hprocessor, &hdev_obj);
+ else
+ hdev_obj = dev_get_first(); /* default */
+
+ if (DSP_SUCCEEDED(status))
+ status = dev_get_dmm_mgr(hdev_obj, phDmmMgr);
+
+ return status;
+}
+
+/*
+ * ======== dmm_init ========
+ * Purpose:
+ * Initializes private state of DMM module.
+ */
+bool dmm_init(void)
+{
+ bool ret = true;
+
+ DBC_REQUIRE(refs >= 0);
+
+ if (ret)
+ refs++;
+
+ DBC_ENSURE((ret && (refs > 0)) || (!ret && (refs >= 0)));
+
+ virtual_mapping_table = NULL;
+ table_size = 0;
+
+ return ret;
+}
+
+/*
+ * ======== dmm_map_memory ========
+ * Purpose:
+ * Add a mapping block to the reserved chunk. DMM assumes that this block
+ * will be mapped in the DSP/IVA's address space. DMM returns an error if a
+ * mapping overlaps another one. This function stores the info that will be
+ * required later while unmapping the block.
+ */
+int dmm_map_memory(struct dmm_object *dmm_mgr, u32 addr, u32 size)
+{
+ struct dmm_object *dmm_obj = (struct dmm_object *)dmm_mgr;
+ struct map_page *chunk;
+ int status = 0;
+
+ spin_lock(&dmm_obj->dmm_lock);
+ /* Find the Reserved memory chunk containing the DSP block to
+ * be mapped */
+ chunk = (struct map_page *)get_region(addr);
+ if (chunk != NULL) {
+ /* Mark the region 'mapped', leave the 'reserved' info as-is */
+ chunk->mapped = true;
+ chunk->mapped_size = (size / PG_SIZE4K);
+ } else
+ status = -ENOENT;
+ spin_unlock(&dmm_obj->dmm_lock);
+
+ dev_dbg(bridge, "%s dmm_mgr %p, addr %x, size %x\n\tstatus %x, "
+ "chunk %p", __func__, dmm_mgr, addr, size, status, chunk);
+
+ return status;
+}
+
+/*
+ * ======== dmm_reserve_memory ========
+ * Purpose:
+ * Reserve a chunk of virtually contiguous DSP/IVA address space.
+ */
+int dmm_reserve_memory(struct dmm_object *dmm_mgr, u32 size,
+ u32 *prsv_addr)
+{
+ int status = 0;
+ struct dmm_object *dmm_obj = (struct dmm_object *)dmm_mgr;
+ struct map_page *node;
+ u32 rsv_addr = 0;
+ u32 rsv_size = 0;
+
+ spin_lock(&dmm_obj->dmm_lock);
+
+ /* Try to get a DSP chunk from the free list */
+ node = get_free_region(size);
+ if (node != NULL) {
+ /* DSP chunk of given size is available. */
+ rsv_addr = DMM_ADDR_VIRTUAL(node);
+ /* Calculate the number of entries to use */
+ rsv_size = size / PG_SIZE4K;
+ if (rsv_size < node->region_size) {
+ /* Mark remainder of free region */
+ node[rsv_size].mapped = false;
+ node[rsv_size].reserved = false;
+ node[rsv_size].region_size =
+ node->region_size - rsv_size;
+ node[rsv_size].mapped_size = 0;
+ }
+ /* get_free_region() returns a first-fit chunk, but we only
+ use what was requested. */
+ node->mapped = false;
+ node->reserved = true;
+ node->region_size = rsv_size;
+ node->mapped_size = 0;
+ /* Return the chunk's starting address */
+ *prsv_addr = rsv_addr;
+ } else
+ /* DSP chunk of given size is not available */
+ status = -ENOMEM;
+
+ spin_unlock(&dmm_obj->dmm_lock);
+
+ dev_dbg(bridge, "%s dmm_mgr %p, size %x, prsv_addr %p\n\tstatus %x, "
+ "rsv_addr %x, rsv_size %x\n", __func__, dmm_mgr, size,
+ prsv_addr, status, rsv_addr, rsv_size);
+
+ return status;
+}
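+
+/*
+ * Region-split sketch for dmm_reserve_memory() above (illustrative):
+ * if the table starts with one free 64-page region at index 0 and
+ * 16 pages (64 KB) are reserved, it ends up as
+ *
+ *	[0]  reserved = true,  region_size = 16
+ *	[16] reserved = false, region_size = 48
+ *
+ * and free_region/free_size are advanced past the reserved chunk by
+ * get_free_region().
+ */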
+
+/*
+ * ======== dmm_un_map_memory ========
+ * Purpose:
+ * Remove the mapped block from the reserved chunk.
+ */
+int dmm_un_map_memory(struct dmm_object *dmm_mgr, u32 addr, u32 *psize)
+{
+ struct dmm_object *dmm_obj = (struct dmm_object *)dmm_mgr;
+ struct map_page *chunk;
+ int status = 0;
+
+ spin_lock(&dmm_obj->dmm_lock);
+ chunk = get_mapped_region(addr);
+ if (chunk == NULL)
+ status = -ENOENT;
+
+ if (DSP_SUCCEEDED(status)) {
+ /* Unmap the region */
+ *psize = chunk->mapped_size * PG_SIZE4K;
+ chunk->mapped = false;
+ chunk->mapped_size = 0;
+ }
+ spin_unlock(&dmm_obj->dmm_lock);
+
+ dev_dbg(bridge, "%s: dmm_mgr %p, addr %x, psize %p\n\tstatus %x, "
+ "chunk %p\n", __func__, dmm_mgr, addr, psize, status, chunk);
+
+ return status;
+}
+
+/*
+ * ======== dmm_un_reserve_memory ========
+ * Purpose:
+ * Free a chunk of reserved DSP/IVA address space.
+ */
+int dmm_un_reserve_memory(struct dmm_object *dmm_mgr, u32 rsv_addr)
+{
+ struct dmm_object *dmm_obj = (struct dmm_object *)dmm_mgr;
+ struct map_page *chunk;
+ u32 i;
+ int status = 0;
+ u32 chunk_size;
+
+ spin_lock(&dmm_obj->dmm_lock);
+
+ /* Find the chunk containing the reserved address */
+ chunk = get_mapped_region(rsv_addr);
+ if (chunk == NULL)
+ status = -ENOENT;
+
+ if (DSP_SUCCEEDED(status)) {
+ /* Free all the mapped pages for this reserved region */
+ i = 0;
+ while (i < chunk->region_size) {
+ if (chunk[i].mapped) {
+ /* Remove mapping from the page tables. */
+ chunk_size = chunk[i].mapped_size;
+ /* Clear the mapping flags */
+ chunk[i].mapped = false;
+ chunk[i].mapped_size = 0;
+ i += chunk_size;
+ } else
+ i++;
+ }
+ /* Clear the flags (mark the region 'free') */
+ chunk->reserved = false;
+ /* NOTE: We do NOT coalesce free regions here.
+ * Free regions are coalesced in get_free_region(), as it
+ * traverses the whole mapping table.
+ */
+ }
+ spin_unlock(&dmm_obj->dmm_lock);
+
+ dev_dbg(bridge, "%s: dmm_mgr %p, rsv_addr %x\n\tstatus %x chunk %p",
+ __func__, dmm_mgr, rsv_addr, status, chunk);
+
+ return status;
+}
+
+/*
+ * ======== get_region ========
+ * Purpose:
+ * Returns the region containing the specified address
+ */
+static struct map_page *get_region(u32 aAddr)
+{
+ struct map_page *curr_region = NULL;
+ u32 i = 0;
+
+ if (virtual_mapping_table != NULL) {
+ /* find page mapped by this address */
+ i = DMM_ADDR_TO_INDEX(aAddr);
+ if (i < table_size)
+ curr_region = virtual_mapping_table + i;
+ }
+
+ dev_dbg(bridge, "%s: curr_region %p, free_region %d, free_size %d\n",
+ __func__, curr_region, free_region, free_size);
+ return curr_region;
+}
+
+/*
+ * ======== get_free_region ========
+ * Purpose:
+ * Returns the requested free region
+ */
+static struct map_page *get_free_region(u32 aSize)
+{
+ struct map_page *curr_region = NULL;
+ u32 i = 0;
+ u32 region_size = 0;
+ u32 next_i = 0;
+
+ if (virtual_mapping_table == NULL)
+ return curr_region;
+ if (aSize > free_size) {
+ /* Find the largest free region
+ * (coalesce during the traversal) */
+ while (i < table_size) {
+ region_size = virtual_mapping_table[i].region_size;
+ next_i = i + region_size;
+ if (virtual_mapping_table[i].reserved == false) {
+ /* Coalesce, if possible */
+ if (next_i < table_size &&
+ virtual_mapping_table[next_i].reserved
+ == false) {
+ virtual_mapping_table[i].region_size +=
+ virtual_mapping_table
+ [next_i].region_size;
+ continue;
+ }
+ region_size *= PG_SIZE4K;
+ if (region_size > free_size) {
+ free_region = i;
+ free_size = region_size;
+ }
+ }
+ i = next_i;
+ }
+ }
+ if (aSize <= free_size) {
+ curr_region = virtual_mapping_table + free_region;
+ free_region += (aSize / PG_SIZE4K);
+ free_size -= aSize;
+ }
+ return curr_region;
+}
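+
+/*
+ * Coalescing sketch for get_free_region() above (illustrative): when
+ * the cached free_size is too small, the traversal merges adjacent
+ * unreserved entries, e.g.
+ *
+ *	[0] reserved = false, region_size = 8
+ *	[8] reserved = false, region_size = 4
+ *
+ * become a single 12-page region at index 0 before the largest free
+ * region is selected again.
+ */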
+
+/*
+ * ======== get_mapped_region ========
+ * Purpose:
+ * Returns the requested mapped region
+ */
+static struct map_page *get_mapped_region(u32 aAddr)
+{
+ u32 i = 0;
+ struct map_page *curr_region = NULL;
+
+ if (virtual_mapping_table == NULL)
+ return curr_region;
+
+ i = DMM_ADDR_TO_INDEX(aAddr);
+ if (i < table_size && (virtual_mapping_table[i].mapped ||
+ virtual_mapping_table[i].reserved))
+ curr_region = virtual_mapping_table + i;
+ return curr_region;
+}
+
+#ifdef DSP_DMM_DEBUG
+u32 dmm_mem_map_dump(struct dmm_object *dmm_mgr)
+{
+ struct map_page *curr_node = NULL;
+ u32 i;
+ u32 freemem = 0;
+ u32 bigsize = 0;
+
+ spin_lock(&dmm_mgr->dmm_lock);
+
+ if (virtual_mapping_table != NULL) {
+ for (i = 0; i < table_size; i +=
+ virtual_mapping_table[i].region_size) {
+ curr_node = virtual_mapping_table + i;
+ if (curr_node->reserved == TRUE) {
+ /*printk("RESERVED size = 0x%x, "
+ "Map size = 0x%x\n",
+ (curr_node->region_size * PG_SIZE4K),
+ (curr_node->mapped == false) ? 0 :
+ (curr_node->mapped_size * PG_SIZE4K));
+ */
+ } else {
+/* printk("UNRESERVED size = 0x%x\n",
+ (curr_node->region_size * PG_SIZE4K));
+ */
+ freemem += (curr_node->region_size * PG_SIZE4K);
+ if (curr_node->region_size > bigsize)
+ bigsize = curr_node->region_size;
+ }
+ }
+ }
+ spin_unlock(&dmm_mgr->dmm_lock);
+ printk(KERN_INFO "Total DSP VA FREE memory = %d Mbytes\n",
+ freemem / (1024 * 1024));
+ printk(KERN_INFO "Total DSP VA USED memory= %d Mbytes \n",
+ (((table_size * PG_SIZE4K) - freemem)) / (1024 * 1024));
+ printk(KERN_INFO "DSP VA - Biggest FREE block = %d Mbytes \n\n",
+ (bigsize * PG_SIZE4K / (1024 * 1024)));
+
+ return 0;
+}
+#endif
diff --git a/drivers/staging/tidspbridge/pmgr/dspapi.c b/drivers/staging/tidspbridge/pmgr/dspapi.c
new file mode 100644
index 0000000..7597210
--- /dev/null
+++ b/drivers/staging/tidspbridge/pmgr/dspapi.c
@@ -0,0 +1,1685 @@
+/*
+ * dspapi.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Common DSP API functions, also includes the wrapper
+ * functions called directly by the DeviceIOControl interface.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+/* ----------------------------------- Host OS */
+#include <dspbridge/host_os.h>
+
+/* ----------------------------------- DSP/BIOS Bridge */
+#include <dspbridge/std.h>
+#include <dspbridge/dbdefs.h>
+
+/* ----------------------------------- Trace & Debug */
+#include <dspbridge/dbc.h>
+
+/* ----------------------------------- OS Adaptation Layer */
+#include <dspbridge/cfg.h>
+#include <dspbridge/ntfy.h>
+#include <dspbridge/services.h>
+
+/* ----------------------------------- Platform Manager */
+#include <dspbridge/chnl.h>
+#include <dspbridge/dev.h>
+#include <dspbridge/drv.h>
+
+#include <dspbridge/proc.h>
+#include <dspbridge/strm.h>
+
+/* ----------------------------------- Resource Manager */
+#include <dspbridge/disp.h>
+#include <dspbridge/mgr.h>
+#include <dspbridge/node.h>
+#include <dspbridge/rmm.h>
+
+/* ----------------------------------- Others */
+#include <dspbridge/msg.h>
+#include <dspbridge/cmm.h>
+#include <dspbridge/io.h>
+
+/* ----------------------------------- This */
+#include <dspbridge/dspapi.h>
+#include <dspbridge/dbdcd.h>
+
+#include <dspbridge/resourcecleanup.h>
+
+/* ----------------------------------- Defines, Data Structures, Typedefs */
+#define MAX_TRACEBUFLEN 255
+#define MAX_LOADARGS 16
+#define MAX_NODES 64
+#define MAX_STREAMS 16
+#define MAX_BUFS 64
+
+/* Used to get dspbridge ioctl table */
+#define DB_GET_IOC_TABLE(cmd) (DB_GET_MODULE(cmd) >> DB_MODULE_SHIFT)
+
+/* Device IOCtl function pointer */
+struct api_cmd {
+ u32(*fxn) (union Trapped_Args *args, void *pr_ctxt);
+ u32 dw_index;
+};
+
+/* ----------------------------------- Globals */
+static u32 api_c_refs;
+
+/*
+ * Function tables.
+ * The order of these functions MUST be the same as the order of the command
+ * numbers defined in dspapi-ioctl.h. This is how an IOCTL number in user mode
+ * turns into a function call in kernel mode.
+ */
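+
+/*
+ * Dispatch sketch (illustrative): for a PROC module command,
+ *
+ *	DB_GET_MODULE(cmd) == DB_PROC	selects proc_cmd[]
+ *	DB_GET_IOC_TABLE(cmd)		indexes size_cmd[] (bounds check)
+ *	i = DB_GET_IOC(cmd)		indexes proc_cmd[]
+ *	*result = proc_cmd[i].fxn(args, pr_ctxt);
+ *
+ * which is what api_call_dev_ioctl() below does, so reordering either
+ * the tables here or the command numbers in dspapi-ioctl.h breaks the
+ * mapping.
+ */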
+
+/* MGR wrapper functions */
+static struct api_cmd mgr_cmd[] = {
+ {mgrwrap_enum_node_info}, /* MGR_ENUMNODE_INFO */
+ {mgrwrap_enum_proc_info}, /* MGR_ENUMPROC_INFO */
+ {mgrwrap_register_object}, /* MGR_REGISTEROBJECT */
+ {mgrwrap_unregister_object}, /* MGR_UNREGISTEROBJECT */
+ {mgrwrap_wait_for_bridge_events}, /* MGR_WAIT */
+ {mgrwrap_get_process_resources_info}, /* MGR_GET_PROC_RES */
+};
+
+/* PROC wrapper functions */
+static struct api_cmd proc_cmd[] = {
+ {procwrap_attach}, /* PROC_ATTACH */
+ {procwrap_ctrl}, /* PROC_CTRL */
+ {procwrap_detach}, /* PROC_DETACH */
+ {procwrap_enum_node_info}, /* PROC_ENUMNODE */
+ {procwrap_enum_resources}, /* PROC_ENUMRESOURCES */
+ {procwrap_get_state}, /* PROC_GET_STATE */
+ {procwrap_get_trace}, /* PROC_GET_TRACE */
+ {procwrap_load}, /* PROC_LOAD */
+ {procwrap_register_notify}, /* PROC_REGISTERNOTIFY */
+ {procwrap_start}, /* PROC_START */
+ {procwrap_reserve_memory}, /* PROC_RSVMEM */
+ {procwrap_un_reserve_memory}, /* PROC_UNRSVMEM */
+ {procwrap_map}, /* PROC_MAPMEM */
+ {procwrap_un_map}, /* PROC_UNMAPMEM */
+ {procwrap_flush_memory}, /* PROC_FLUSHMEMORY */
+ {procwrap_stop}, /* PROC_STOP */
+ {procwrap_invalidate_memory}, /* PROC_INVALIDATEMEMORY */
+ {procwrap_begin_dma}, /* PROC_BEGINDMA */
+ {procwrap_end_dma}, /* PROC_ENDDMA */
+};
+
+/* NODE wrapper functions */
+static struct api_cmd node_cmd[] = {
+ {nodewrap_allocate}, /* NODE_ALLOCATE */
+ {nodewrap_alloc_msg_buf}, /* NODE_ALLOCMSGBUF */
+ {nodewrap_change_priority}, /* NODE_CHANGEPRIORITY */
+ {nodewrap_connect}, /* NODE_CONNECT */
+ {nodewrap_create}, /* NODE_CREATE */
+ {nodewrap_delete}, /* NODE_DELETE */
+ {nodewrap_free_msg_buf}, /* NODE_FREEMSGBUF */
+ {nodewrap_get_attr}, /* NODE_GETATTR */
+ {nodewrap_get_message}, /* NODE_GETMESSAGE */
+ {nodewrap_pause}, /* NODE_PAUSE */
+ {nodewrap_put_message}, /* NODE_PUTMESSAGE */
+ {nodewrap_register_notify}, /* NODE_REGISTERNOTIFY */
+ {nodewrap_run}, /* NODE_RUN */
+ {nodewrap_terminate}, /* NODE_TERMINATE */
+ {nodewrap_get_uuid_props}, /* NODE_GETUUIDPROPS */
+};
+
+/* STRM wrapper functions */
+static struct api_cmd strm_cmd[] = {
+ {strmwrap_allocate_buffer}, /* STRM_ALLOCATEBUFFER */
+ {strmwrap_close}, /* STRM_CLOSE */
+ {strmwrap_free_buffer}, /* STRM_FREEBUFFER */
+ {strmwrap_get_event_handle}, /* STRM_GETEVENTHANDLE */
+ {strmwrap_get_info}, /* STRM_GETINFO */
+ {strmwrap_idle}, /* STRM_IDLE */
+ {strmwrap_issue}, /* STRM_ISSUE */
+ {strmwrap_open}, /* STRM_OPEN */
+ {strmwrap_reclaim}, /* STRM_RECLAIM */
+ {strmwrap_register_notify}, /* STRM_REGISTERNOTIFY */
+ {strmwrap_select}, /* STRM_SELECT */
+};
+
+/* CMM wrapper functions */
+static struct api_cmd cmm_cmd[] = {
+ {cmmwrap_calloc_buf}, /* CMM_ALLOCBUF */
+ {cmmwrap_free_buf}, /* CMM_FREEBUF */
+ {cmmwrap_get_handle}, /* CMM_GETHANDLE */
+ {cmmwrap_get_info}, /* CMM_GETINFO */
+};
+
+/* Array used to store ioctl table sizes. It can hold up to 8 entries */
+static u8 size_cmd[] = {
+ ARRAY_SIZE(mgr_cmd),
+ ARRAY_SIZE(proc_cmd),
+ ARRAY_SIZE(node_cmd),
+ ARRAY_SIZE(strm_cmd),
+ ARRAY_SIZE(cmm_cmd),
+};
+
+static inline void _cp_fm_usr(void *to, const void __user * from,
+ int *err, unsigned long bytes)
+{
+ if (DSP_FAILED(*err))
+ return;
+
+ if (unlikely(!from)) {
+ *err = -EFAULT;
+ return;
+ }
+
+ if (unlikely(copy_from_user(to, from, bytes)))
+ *err = -EFAULT;
+}
+
+#define CP_FM_USR(to, from, err, n) \
+ _cp_fm_usr(to, from, &(err), (n) * sizeof(*(to)))
+
+static inline void _cp_to_usr(void __user *to, const void *from,
+ int *err, unsigned long bytes)
+{
+ if (DSP_FAILED(*err))
+ return;
+
+ if (unlikely(!to)) {
+ *err = -EFAULT;
+ return;
+ }
+
+ if (unlikely(copy_to_user(to, from, bytes)))
+ *err = -EFAULT;
+}
+
+#define CP_TO_USR(to, from, err, n) \
+ _cp_to_usr(to, from, &(err), (n) * sizeof(*(from)))
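+
+/*
+ * Usage sketch for the copy helpers above (illustrative, mirroring the
+ * wrappers below): the helpers only act while *err is still a success
+ * code, so several copies can be chained with one final status check.
+ * The count argument is a number of elements, not bytes:
+ *
+ *	int status = 0;
+ *	struct dsp_uuid uuid_obj;
+ *
+ *	CP_FM_USR(&uuid_obj, args->args_mgr_registerobject.uuid_obj,
+ *		  status, 1);
+ *	if (DSP_FAILED(status))
+ *		return status;
+ */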
+
+/*
+ * ======== api_call_dev_ioctl ========
+ * Purpose:
+ * Call the (wrapper) function for the corresponding API IOCTL.
+ */
+inline int api_call_dev_ioctl(u32 cmd, union Trapped_Args *args,
+ u32 *result, void *pr_ctxt)
+{
+ u32(*ioctl_cmd) (union Trapped_Args *args, void *pr_ctxt) = NULL;
+ int i;
+
+ if (_IOC_TYPE(cmd) != DB) {
+ pr_err("%s: Incompatible dspbridge ioctl number\n", __func__);
+ goto err;
+ }
+
+ if (DB_GET_IOC_TABLE(cmd) >= ARRAY_SIZE(size_cmd)) {
+ pr_err("%s: undefined ioctl module\n", __func__);
+ goto err;
+ }
+
+ /* Check the size of the required cmd table */
+ i = DB_GET_IOC(cmd);
+ if (i >= size_cmd[DB_GET_IOC_TABLE(cmd)]) {
+ pr_err("%s: requested ioctl %d out of bounds for table %d\n",
+ __func__, i, DB_GET_IOC_TABLE(cmd));
+ goto err;
+ }
+
+ switch (DB_GET_MODULE(cmd)) {
+ case DB_MGR:
+ ioctl_cmd = mgr_cmd[i].fxn;
+ break;
+ case DB_PROC:
+ ioctl_cmd = proc_cmd[i].fxn;
+ break;
+ case DB_NODE:
+ ioctl_cmd = node_cmd[i].fxn;
+ break;
+ case DB_STRM:
+ ioctl_cmd = strm_cmd[i].fxn;
+ break;
+ case DB_CMM:
+ ioctl_cmd = cmm_cmd[i].fxn;
+ break;
+ }
+
+ if (!ioctl_cmd) {
+ pr_err("%s: requested ioctl not defined\n", __func__);
+ goto err;
+ } else {
+ *result = (*ioctl_cmd) (args, pr_ctxt);
+ }
+
+ return 0;
+
+err:
+ return -EINVAL;
+}
+
+/*
+ * ======== api_exit ========
+ */
+void api_exit(void)
+{
+ DBC_REQUIRE(api_c_refs > 0);
+ api_c_refs--;
+
+ if (api_c_refs == 0) {
+ /* Release all modules initialized in api_init(). */
+ cod_exit();
+ dev_exit();
+ chnl_exit();
+ msg_exit();
+ io_exit();
+ strm_exit();
+ disp_exit();
+ node_exit();
+ proc_exit();
+ mgr_exit();
+ rmm_exit();
+ drv_exit();
+ }
+ DBC_ENSURE(api_c_refs >= 0);
+}
+
+/*
+ * ======== api_init ========
+ * Purpose:
+ * Module initialization used by Bridge API.
+ */
+bool api_init(void)
+{
+ bool ret = true;
+ bool fdrv, fdev, fcod, fchnl, fmsg, fio;
+ bool fmgr, fproc, fnode, fdisp, fstrm, frmm;
+
+ if (api_c_refs == 0) {
+ /* initialize driver and other modules */
+ fdrv = drv_init();
+ fmgr = mgr_init();
+ fproc = proc_init();
+ fnode = node_init();
+ fdisp = disp_init();
+ fstrm = strm_init();
+ frmm = rmm_init();
+ fchnl = chnl_init();
+ fmsg = msg_mod_init();
+ fio = io_init();
+ fdev = dev_init();
+ fcod = cod_init();
+ ret = fdrv && fdev && fchnl && fcod && fmsg && fio;
+ ret = ret && fmgr && fproc && frmm;
+ if (!ret) {
+ if (fdrv)
+ drv_exit();
+
+ if (fmgr)
+ mgr_exit();
+
+ if (fstrm)
+ strm_exit();
+
+ if (fproc)
+ proc_exit();
+
+ if (fnode)
+ node_exit();
+
+ if (fdisp)
+ disp_exit();
+
+ if (fchnl)
+ chnl_exit();
+
+ if (fmsg)
+ msg_exit();
+
+ if (fio)
+ io_exit();
+
+ if (fdev)
+ dev_exit();
+
+ if (fcod)
+ cod_exit();
+
+ if (frmm)
+ rmm_exit();
+
+ }
+ }
+ if (ret)
+ api_c_refs++;
+
+ return ret;
+}
+
+/*
+ * ======== api_init_complete2 ========
+ * Purpose:
+ * Perform any required bridge initialization which cannot
+ * be performed in api_init() or dev_start_device() because some
+ * services are not yet completely initialized.
+ * Parameters:
+ * Returns:
+ * 0: Allow this device to load
+ * -EPERM: Failure.
+ * Requires:
+ * Bridge API initialized.
+ * Ensures:
+ */
+int api_init_complete2(void)
+{
+ int status = 0;
+ struct cfg_devnode *dev_node;
+ struct dev_object *hdev_obj;
+ u8 dev_type;
+ u32 tmp;
+
+ DBC_REQUIRE(api_c_refs > 0);
+
+ /* Walk the list of DevObjects, get each devnode, and attempt to
+ * autostart the board. Note that this requires COF loading, which
+ * requires KFILE. */
+ for (hdev_obj = dev_get_first(); hdev_obj != NULL;
+ hdev_obj = dev_get_next(hdev_obj)) {
+ if (DSP_FAILED(dev_get_dev_node(hdev_obj, &dev_node)))
+ continue;
+
+ if (DSP_FAILED(dev_get_dev_type(hdev_obj, &dev_type)))
+ continue;
+
+ if ((dev_type == DSP_UNIT) || (dev_type == IVA_UNIT))
+ if (cfg_get_auto_start(dev_node, &tmp) == 0
+ && tmp)
+ proc_auto_start(dev_node, hdev_obj);
+ }
+
+ return status;
+}
+
+/* TODO: Remove deprecated and not implemented ioctl wrappers */
+
+/*
+ * ======== mgrwrap_enum_node_info ========
+ */
+u32 mgrwrap_enum_node_info(union Trapped_Args *args, void *pr_ctxt)
+{
+ u8 *pndb_props;
+ u32 num_nodes;
+ int status = 0;
+ u32 size = args->args_mgr_enumnode_info.undb_props_size;
+
+ if (size < sizeof(struct dsp_ndbprops))
+ return -EINVAL;
+
+ pndb_props = kmalloc(size, GFP_KERNEL);
+ if (pndb_props == NULL)
+ status = -ENOMEM;
+
+ if (DSP_SUCCEEDED(status)) {
+ status =
+ mgr_enum_node_info(args->args_mgr_enumnode_info.node_id,
+ (struct dsp_ndbprops *)pndb_props, size,
+ &num_nodes);
+ }
+ CP_TO_USR(args->args_mgr_enumnode_info.pndb_props, pndb_props, status,
+ size);
+ CP_TO_USR(args->args_mgr_enumnode_info.pu_num_nodes, &num_nodes, status,
+ 1);
+ kfree(pndb_props);
+
+ return status;
+}
+
+/*
+ * ======== mgrwrap_enum_proc_info ========
+ */
+u32 mgrwrap_enum_proc_info(union Trapped_Args *args, void *pr_ctxt)
+{
+ u8 *processor_info;
+ u8 num_procs;
+ int status = 0;
+ u32 size = args->args_mgr_enumproc_info.processor_info_size;
+
+ if (size < sizeof(struct dsp_processorinfo))
+ return -EINVAL;
+
+ processor_info = kmalloc(size, GFP_KERNEL);
+ if (processor_info == NULL)
+ status = -ENOMEM;
+
+ if (DSP_SUCCEEDED(status)) {
+ status =
+ mgr_enum_processor_info(args->args_mgr_enumproc_info.
+ processor_id,
+ (struct dsp_processorinfo *)
+ processor_info, size, &num_procs);
+ }
+ CP_TO_USR(args->args_mgr_enumproc_info.processor_info, processor_info,
+ status, size);
+ CP_TO_USR(args->args_mgr_enumproc_info.pu_num_procs, &num_procs,
+ status, 1);
+ kfree(processor_info);
+
+ return status;
+}
+
+#define WRAP_MAP2CALLER(x) x
+/*
+ * ======== mgrwrap_register_object ========
+ */
+u32 mgrwrap_register_object(union Trapped_Args *args, void *pr_ctxt)
+{
+ u32 ret;
+ struct dsp_uuid uuid_obj;
+ u32 path_size = 0;
+ char *psz_path_name = NULL;
+ int status = 0;
+
+ CP_FM_USR(&uuid_obj, args->args_mgr_registerobject.uuid_obj, status, 1);
+ if (DSP_FAILED(status))
+ goto func_end;
+ /* path_size is increased by 1 to accommodate NULL */
+ path_size = strlen_user((char *)
+ args->args_mgr_registerobject.psz_path_name) +
+ 1;
+ psz_path_name = kmalloc(path_size, GFP_KERNEL);
+ if (!psz_path_name) {
+ status = -ENOMEM;
+ goto func_end;
+ }
+ ret = strncpy_from_user(psz_path_name,
+ (char *)args->args_mgr_registerobject.
+ psz_path_name, path_size);
+ if (!ret) {
+ status = -EFAULT;
+ goto func_end;
+ }
+
+ if (args->args_mgr_registerobject.obj_type >= DSP_DCDMAXOBJTYPE)
+ return -EINVAL;
+
+ status = dcd_register_object(&uuid_obj,
+ args->args_mgr_registerobject.obj_type,
+ (char *)psz_path_name);
+func_end:
+ kfree(psz_path_name);
+ return status;
+}
+
+/*
+ * ======== mgrwrap_unregister_object ========
+ */
+u32 mgrwrap_unregister_object(union Trapped_Args *args, void *pr_ctxt)
+{
+ int status = 0;
+ struct dsp_uuid uuid_obj;
+
+ CP_FM_USR(&uuid_obj, args->args_mgr_registerobject.uuid_obj, status, 1);
+ if (DSP_FAILED(status))
+ goto func_end;
+
+ status = dcd_unregister_object(&uuid_obj,
+ args->args_mgr_unregisterobject.
+ obj_type);
+func_end:
+ return status;
+
+}
+
+/*
+ * ======== mgrwrap_wait_for_bridge_events ========
+ */
+u32 mgrwrap_wait_for_bridge_events(union Trapped_Args *args, void *pr_ctxt)
+{
+ int status = 0, real_status = 0;
+ struct dsp_notification *anotifications[MAX_EVENTS];
+ struct dsp_notification notifications[MAX_EVENTS];
+ u32 index, i;
+ u32 count = args->args_mgr_wait.count;
+
+ if (count > MAX_EVENTS)
+ status = -EINVAL;
+
+ /* get the array of pointers to user structures */
+ CP_FM_USR(anotifications, args->args_mgr_wait.anotifications,
+ status, count);
+ /* get the events */
+ for (i = 0; i < count; i++) {
+ CP_FM_USR(¬ifications[i], anotifications[i], status, 1);
+ if (DSP_SUCCEEDED(status)) {
+ /* set the array of pointers to kernel structures */
+ anotifications[i] = ¬ifications[i];
+ }
+ }
+ if (DSP_SUCCEEDED(status)) {
+ real_status = mgr_wait_for_bridge_events(anotifications, count,
+ &index,
+ args->args_mgr_wait.
+ utimeout);
+ }
+ CP_TO_USR(args->args_mgr_wait.pu_index, &index, status, 1);
+ return real_status;
+}
+
+/*
+ * ======== mgrwrap_get_process_resources_info ========
+ */
+u32 __deprecated mgrwrap_get_process_resources_info(union Trapped_Args * args,
+ void *pr_ctxt)
+{
+ pr_err("%s: deprecated dspbridge ioctl\n", __func__);
+ return 0;
+}
+
+/*
+ * ======== procwrap_attach ========
+ */
+u32 procwrap_attach(union Trapped_Args *args, void *pr_ctxt)
+{
+ void *processor;
+ int status = 0;
+ struct dsp_processorattrin proc_attr_in, *attr_in = NULL;
+
+ /* Optional argument */
+ if (args->args_proc_attach.attr_in) {
+ CP_FM_USR(&proc_attr_in, args->args_proc_attach.attr_in, status,
+ 1);
+ if (DSP_SUCCEEDED(status))
+ attr_in = &proc_attr_in;
+ else
+ goto func_end;
+
+ }
+ status = proc_attach(args->args_proc_attach.processor_id, attr_in,
+ &processor, pr_ctxt);
+ CP_TO_USR(args->args_proc_attach.ph_processor, &processor, status, 1);
+func_end:
+ return status;
+}
+
+/*
+ * ======== procwrap_ctrl ========
+ */
+u32 procwrap_ctrl(union Trapped_Args *args, void *pr_ctxt)
+{
+ u32 cb_data_size, __user * psize = (u32 __user *)
+ args->args_proc_ctrl.pargs;
+ u8 *pargs = NULL;
+ int status = 0;
+
+ if (psize) {
+ if (get_user(cb_data_size, psize)) {
+ status = -EPERM;
+ goto func_end;
+ }
+ cb_data_size += sizeof(u32);
+ pargs = kmalloc(cb_data_size, GFP_KERNEL);
+ if (pargs == NULL) {
+ status = -ENOMEM;
+ goto func_end;
+ }
+
+ CP_FM_USR(pargs, args->args_proc_ctrl.pargs, status,
+ cb_data_size);
+ }
+ if (DSP_SUCCEEDED(status)) {
+ status = proc_ctrl(args->args_proc_ctrl.hprocessor,
+ args->args_proc_ctrl.dw_cmd,
+ (struct dsp_cbdata *)pargs);
+ }
+
+ /* CP_TO_USR(args->args_proc_ctrl.pargs, pargs, status, 1); */
+ kfree(pargs);
+func_end:
+ return status;
+}
+
+/*
+ * ======== procwrap_detach ========
+ */
+u32 __deprecated procwrap_detach(union Trapped_Args * args, void *pr_ctxt)
+{
+ /* proc_detach called at bridge_release only */
+ pr_err("%s: deprecated dspbridge ioctl\n", __func__);
+ return 0;
+}
+
+/*
+ * ======== procwrap_enum_node_info ========
+ */
+u32 procwrap_enum_node_info(union Trapped_Args *args, void *pr_ctxt)
+{
+ int status;
+ void *node_tab[MAX_NODES];
+ u32 num_nodes;
+ u32 alloc_cnt;
+
+ if (!args->args_proc_enumnode_info.node_tab_size)
+ return -EINVAL;
+
+ status = proc_enum_nodes(args->args_proc_enumnode_info.hprocessor,
+ node_tab,
+ args->args_proc_enumnode_info.node_tab_size,
+ &num_nodes, &alloc_cnt);
+ CP_TO_USR(args->args_proc_enumnode_info.node_tab, node_tab, status,
+ num_nodes);
+ CP_TO_USR(args->args_proc_enumnode_info.pu_num_nodes, &num_nodes,
+ status, 1);
+ CP_TO_USR(args->args_proc_enumnode_info.pu_allocated, &alloc_cnt,
+ status, 1);
+ return status;
+}
+
+u32 procwrap_end_dma(union Trapped_Args *args, void *pr_ctxt)
+{
+ int status;
+
+ if (args->args_proc_dma.dir >= DMA_NONE)
+ return -EINVAL;
+
+ status = proc_end_dma(pr_ctxt,
+ args->args_proc_dma.pmpu_addr,
+ args->args_proc_dma.ul_size,
+ args->args_proc_dma.dir);
+ return status;
+}
+
+u32 procwrap_begin_dma(union Trapped_Args *args, void *pr_ctxt)
+{
+ int status;
+
+ if (args->args_proc_dma.dir >= DMA_NONE)
+ return -EINVAL;
+
+ status = proc_begin_dma(pr_ctxt,
+ args->args_proc_dma.pmpu_addr,
+ args->args_proc_dma.ul_size,
+ args->args_proc_dma.dir);
+ return status;
+}
+
+/*
+ * ======== procwrap_flush_memory ========
+ */
+u32 procwrap_flush_memory(union Trapped_Args *args, void *pr_ctxt)
+{
+ int status;
+
+ if (args->args_proc_flushmemory.ul_flags >
+ PROC_WRITEBACK_INVALIDATE_MEM)
+ return -EINVAL;
+
+ status = proc_flush_memory(pr_ctxt,
+ args->args_proc_flushmemory.pmpu_addr,
+ args->args_proc_flushmemory.ul_size,
+ args->args_proc_flushmemory.ul_flags);
+ return status;
+}
+
+/*
+ * ======== procwrap_invalidate_memory ========
+ */
+u32 procwrap_invalidate_memory(union Trapped_Args *args, void *pr_ctxt)
+{
+ int status;
+
+ status =
+ proc_invalidate_memory(pr_ctxt,
+ args->args_proc_invalidatememory.pmpu_addr,
+ args->args_proc_invalidatememory.ul_size);
+ return status;
+}
+
+/*
+ * ======== procwrap_enum_resources ========
+ */
+u32 procwrap_enum_resources(union Trapped_Args *args, void *pr_ctxt)
+{
+ int status = 0;
+ struct dsp_resourceinfo resource_info;
+
+ if (args->args_proc_enumresources.resource_info_size <
+ sizeof(struct dsp_resourceinfo))
+ return -EINVAL;
+
+ status =
+ proc_get_resource_info(args->args_proc_enumresources.hprocessor,
+ args->args_proc_enumresources.resource_type,
+ &resource_info,
+ args->args_proc_enumresources.
+ resource_info_size);
+
+ CP_TO_USR(args->args_proc_enumresources.resource_info, &resource_info,
+ status, 1);
+
+ return status;
+
+}
+
+/*
+ * ======== procwrap_get_state ========
+ */
+u32 procwrap_get_state(union Trapped_Args *args, void *pr_ctxt)
+{
+ int status;
+ struct dsp_processorstate proc_state;
+
+ if (args->args_proc_getstate.state_info_size <
+ sizeof(struct dsp_processorstate))
+ return -EINVAL;
+
+ status =
+ proc_get_state(args->args_proc_getstate.hprocessor, &proc_state,
+ args->args_proc_getstate.state_info_size);
+ CP_TO_USR(args->args_proc_getstate.proc_state_obj, &proc_state, status,
+ 1);
+ return status;
+
+}
+
+/*
+ * ======== procwrap_get_trace ========
+ */
+u32 procwrap_get_trace(union Trapped_Args *args, void *pr_ctxt)
+{
+ int status;
+ u8 *pbuf;
+
+ if (args->args_proc_gettrace.max_size > MAX_TRACEBUFLEN)
+ return -EINVAL;
+
+ pbuf = kzalloc(args->args_proc_gettrace.max_size, GFP_KERNEL);
+ if (pbuf != NULL) {
+ status = proc_get_trace(args->args_proc_gettrace.hprocessor,
+ pbuf,
+ args->args_proc_gettrace.max_size);
+ } else {
+ status = -ENOMEM;
+ }
+ CP_TO_USR(args->args_proc_gettrace.pbuf, pbuf, status,
+ args->args_proc_gettrace.max_size);
+ kfree(pbuf);
+
+ return status;
+}
+
+/*
+ * ======== procwrap_load ========
+ */
+u32 procwrap_load(union Trapped_Args *args, void *pr_ctxt)
+{
+ s32 i, len;
+ int status = 0;
+ char *temp;
+ s32 count = args->args_proc_load.argc_index;
+ u8 **argv = NULL, **envp = NULL;
+
+ if (count <= 0 || count > MAX_LOADARGS) {
+ status = -EINVAL;
+ goto func_cont;
+ }
+
+ argv = kmalloc(count * sizeof(u8 *), GFP_KERNEL);
+ if (!argv) {
+ status = -ENOMEM;
+ goto func_cont;
+ }
+
+ CP_FM_USR(argv, args->args_proc_load.user_args, status, count);
+ if (DSP_FAILED(status)) {
+ kfree(argv);
+ argv = NULL;
+ goto func_cont;
+ }
+
+ for (i = 0; i < count; i++) {
+ if (argv[i]) {
+ /* User space pointer to argument */
+ temp = (char *)argv[i];
+ /* len is increased by 1 to accommodate NULL */
+ len = strlen_user((char *)temp) + 1;
+ /* Kernel space pointer to argument */
+ argv[i] = kmalloc(len, GFP_KERNEL);
+ if (argv[i]) {
+ CP_FM_USR(argv[i], temp, status, len);
+ if (DSP_FAILED(status)) {
+ kfree(argv[i]);
+ argv[i] = NULL;
+ goto func_cont;
+ }
+ } else {
+ status = -ENOMEM;
+ goto func_cont;
+ }
+ }
+ }
+ /* TODO: validate this */
+ if (args->args_proc_load.user_envp) {
+ /* number of elements in the envp array including NULL */
+ count = 0;
+ do {
+ get_user(temp, args->args_proc_load.user_envp + count);
+ count++;
+ } while (temp);
+ envp = kmalloc(count * sizeof(u8 *), GFP_KERNEL);
+ if (!envp) {
+ status = -ENOMEM;
+ goto func_cont;
+ }
+
+ CP_FM_USR(envp, args->args_proc_load.user_envp, status, count);
+ if (DSP_FAILED(status)) {
+ kfree(envp);
+ envp = NULL;
+ goto func_cont;
+ }
+ for (i = 0; envp[i]; i++) {
+ /* User space pointer to argument */
+ temp = (char *)envp[i];
+ /* len is increased by 1 to accommodate the terminating NULL */
+ len = strlen_user((char *)temp) + 1;
+ /* Kernel space pointer to argument */
+ envp[i] = kmalloc(len, GFP_KERNEL);
+ if (envp[i]) {
+ CP_FM_USR(envp[i], temp, status, len);
+ if (DSP_FAILED(status)) {
+ kfree(envp[i]);
+ envp[i] = NULL;
+ goto func_cont;
+ }
+ } else {
+ status = -ENOMEM;
+ goto func_cont;
+ }
+ }
+ }
+
+ if (DSP_SUCCEEDED(status)) {
+ status = proc_load(args->args_proc_load.hprocessor,
+ args->args_proc_load.argc_index,
+ (CONST char **)argv, (CONST char **)envp);
+ }
+func_cont:
+ if (envp) {
+ i = 0;
+ while (envp[i])
+ kfree(envp[i++]);
+
+ kfree(envp);
+ }
+
+ if (argv) {
+ count = args->args_proc_load.argc_index;
+ for (i = 0; (i < count) && argv[i]; i++)
+ kfree(argv[i]);
+
+ kfree(argv);
+ }
+
+ return status;
+}
+
+/*
+ * ======== procwrap_map ========
+ */
+u32 procwrap_map(union Trapped_Args *args, void *pr_ctxt)
+{
+ int status;
+ void *map_addr;
+
+ if (!args->args_proc_mapmem.ul_size)
+ return -EINVAL;
+
+ status = proc_map(args->args_proc_mapmem.hprocessor,
+ args->args_proc_mapmem.pmpu_addr,
+ args->args_proc_mapmem.ul_size,
+ args->args_proc_mapmem.req_addr, &map_addr,
+ args->args_proc_mapmem.ul_map_attr, pr_ctxt);
+ if (DSP_SUCCEEDED(status)) {
+ if (put_user(map_addr, args->args_proc_mapmem.pp_map_addr)) {
+ status = -EINVAL;
+ proc_un_map(args->args_proc_mapmem.hprocessor,
+ map_addr, pr_ctxt);
+ }
+
+ }
+ return status;
+}
+
+/*
+ * ======== procwrap_register_notify ========
+ */
+u32 procwrap_register_notify(union Trapped_Args *args, void *pr_ctxt)
+{
+ int status;
+ struct dsp_notification notification;
+
+ /* Initialize the notification data structure */
+ notification.ps_name = NULL;
+ notification.handle = NULL;
+
+ status =
+ proc_register_notify(args->args_proc_register_notify.hprocessor,
+ args->args_proc_register_notify.event_mask,
+ args->args_proc_register_notify.notify_type,
+ &notification);
+ CP_TO_USR(args->args_proc_register_notify.hnotification, &notification,
+ status, 1);
+ return status;
+}
+
+/*
+ * ======== procwrap_reserve_memory ========
+ */
+u32 procwrap_reserve_memory(union Trapped_Args *args, void *pr_ctxt)
+{
+ int status;
+ void *prsv_addr;
+
+ if ((args->args_proc_rsvmem.ul_size <= 0) ||
+ (args->args_proc_rsvmem.ul_size & (PG_SIZE4K - 1)) != 0)
+ return -EINVAL;
+
+ status = proc_reserve_memory(args->args_proc_rsvmem.hprocessor,
+ args->args_proc_rsvmem.ul_size, &prsv_addr,
+ pr_ctxt);
+ if (DSP_SUCCEEDED(status)) {
+ if (put_user(prsv_addr, args->args_proc_rsvmem.pp_rsv_addr)) {
+ status = -EINVAL;
+ proc_un_reserve_memory(args->args_proc_rsvmem.
+ hprocessor, prsv_addr, pr_ctxt);
+ }
+ }
+ return status;
+}
+
+/*
+ * ======== procwrap_start ========
+ */
+u32 procwrap_start(union Trapped_Args *args, void *pr_ctxt)
+{
+ u32 ret;
+
+ ret = proc_start(args->args_proc_start.hprocessor);
+ return ret;
+}
+
+/*
+ * ======== procwrap_un_map ========
+ */
+u32 procwrap_un_map(union Trapped_Args *args, void *pr_ctxt)
+{
+ int status;
+
+ status = proc_un_map(args->args_proc_unmapmem.hprocessor,
+ args->args_proc_unmapmem.map_addr, pr_ctxt);
+ return status;
+}
+
+/*
+ * ======== procwrap_un_reserve_memory ========
+ */
+u32 procwrap_un_reserve_memory(union Trapped_Args *args, void *pr_ctxt)
+{
+ int status;
+
+ status = proc_un_reserve_memory(args->args_proc_unrsvmem.hprocessor,
+ args->args_proc_unrsvmem.prsv_addr,
+ pr_ctxt);
+ return status;
+}
+
+/*
+ * ======== procwrap_stop ========
+ */
+u32 procwrap_stop(union Trapped_Args *args, void *pr_ctxt)
+{
+ u32 ret;
+
+ ret = proc_stop(args->args_proc_stop.hprocessor);
+
+ return ret;
+}
+
+/*
+ * ======== nodewrap_allocate ========
+ */
+u32 nodewrap_allocate(union Trapped_Args *args, void *pr_ctxt)
+{
+ int status = 0;
+ struct dsp_uuid node_uuid;
+ u32 cb_data_size = 0;
+ u32 __user *psize = (u32 __user *) args->args_node_allocate.pargs;
+ u8 *pargs = NULL;
+ struct dsp_nodeattrin proc_attr_in, *attr_in = NULL;
+ struct node_object *hnode;
+
+ /* Optional argument */
+ if (psize) {
+ if (get_user(cb_data_size, psize))
+ status = -EPERM;
+
+ cb_data_size += sizeof(u32);
+ if (DSP_SUCCEEDED(status)) {
+ pargs = kmalloc(cb_data_size, GFP_KERNEL);
+ if (pargs == NULL)
+ status = -ENOMEM;
+
+ }
+ CP_FM_USR(pargs, args->args_node_allocate.pargs, status,
+ cb_data_size);
+ }
+ CP_FM_USR(&node_uuid, args->args_node_allocate.node_id_ptr, status, 1);
+ if (DSP_FAILED(status))
+ goto func_cont;
+ /* Optional argument */
+ if (args->args_node_allocate.attr_in) {
+ CP_FM_USR(&proc_attr_in, args->args_node_allocate.attr_in,
+ status, 1);
+ if (DSP_SUCCEEDED(status))
+ attr_in = &proc_attr_in;
+ else
+ status = -ENOMEM;
+
+ }
+ if (DSP_SUCCEEDED(status)) {
+ status = node_allocate(args->args_node_allocate.hprocessor,
+ &node_uuid, (struct dsp_cbdata *)pargs,
+ attr_in, &hnode, pr_ctxt);
+ }
+ if (DSP_SUCCEEDED(status)) {
+ CP_TO_USR(args->args_node_allocate.ph_node, &hnode, status, 1);
+ if (DSP_FAILED(status)) {
+ status = -EFAULT;
+ node_delete(hnode, pr_ctxt);
+ }
+ }
+func_cont:
+ kfree(pargs);
+
+ return status;
+}
+
+/*
+ * ======== nodewrap_alloc_msg_buf ========
+ */
+u32 nodewrap_alloc_msg_buf(union Trapped_Args *args, void *pr_ctxt)
+{
+ int status = 0;
+ struct dsp_bufferattr *pattr = NULL;
+ struct dsp_bufferattr attr;
+ u8 *pbuffer = NULL;
+
+ if (!args->args_node_allocmsgbuf.usize)
+ return -EINVAL;
+
+ if (args->args_node_allocmsgbuf.pattr) { /* Optional argument */
+ CP_FM_USR(&attr, args->args_node_allocmsgbuf.pattr, status, 1);
+ if (DSP_SUCCEEDED(status))
+ pattr = &attr;
+
+ }
+ /* IN OUT argument */
+ CP_FM_USR(&pbuffer, args->args_node_allocmsgbuf.pbuffer, status, 1);
+ if (DSP_SUCCEEDED(status)) {
+ status = node_alloc_msg_buf(args->args_node_allocmsgbuf.hnode,
+ args->args_node_allocmsgbuf.usize,
+ pattr, &pbuffer);
+ }
+ CP_TO_USR(args->args_node_allocmsgbuf.pbuffer, &pbuffer, status, 1);
+ return status;
+}
+
+/*
+ * ======== nodewrap_change_priority ========
+ */
+u32 nodewrap_change_priority(union Trapped_Args *args, void *pr_ctxt)
+{
+ u32 ret;
+
+ ret = node_change_priority(args->args_node_changepriority.hnode,
+ args->args_node_changepriority.prio);
+
+ return ret;
+}
+
+/*
+ * ======== nodewrap_connect ========
+ */
+u32 nodewrap_connect(union Trapped_Args *args, void *pr_ctxt)
+{
+ int status = 0;
+ struct dsp_strmattr attrs;
+ struct dsp_strmattr *pattrs = NULL;
+ u32 cb_data_size;
+ u32 __user *psize = (u32 __user *) args->args_node_connect.conn_param;
+ u8 *pargs = NULL;
+
+ /* Optional argument */
+ if (psize) {
+ if (get_user(cb_data_size, psize))
+ status = -EPERM;
+
+ cb_data_size += sizeof(u32);
+ if (DSP_SUCCEEDED(status)) {
+ pargs = kmalloc(cb_data_size, GFP_KERNEL);
+ if (pargs == NULL) {
+ status = -ENOMEM;
+ goto func_cont;
+ }
+
+ }
+ CP_FM_USR(pargs, args->args_node_connect.conn_param, status,
+ cb_data_size);
+ if (DSP_FAILED(status))
+ goto func_cont;
+ }
+ if (args->args_node_connect.pattrs) { /* Optional argument */
+ CP_FM_USR(&attrs, args->args_node_connect.pattrs, status, 1);
+ if (DSP_SUCCEEDED(status))
+ pattrs = &attrs;
+
+ }
+ if (DSP_SUCCEEDED(status)) {
+ status = node_connect(args->args_node_connect.hnode,
+ args->args_node_connect.stream_id,
+ args->args_node_connect.other_node,
+ args->args_node_connect.other_stream,
+ pattrs, (struct dsp_cbdata *)pargs);
+ }
+func_cont:
+ kfree(pargs);
+
+ return status;
+}
+
+/*
+ * ======== nodewrap_create ========
+ */
+u32 nodewrap_create(union Trapped_Args *args, void *pr_ctxt)
+{
+ u32 ret;
+
+ ret = node_create(args->args_node_create.hnode);
+
+ return ret;
+}
+
+/*
+ * ======== nodewrap_delete ========
+ */
+u32 nodewrap_delete(union Trapped_Args *args, void *pr_ctxt)
+{
+ u32 ret;
+
+ ret = node_delete(args->args_node_delete.hnode, pr_ctxt);
+
+ return ret;
+}
+
+/*
+ * ======== nodewrap_free_msg_buf ========
+ */
+u32 nodewrap_free_msg_buf(union Trapped_Args *args, void *pr_ctxt)
+{
+ int status = 0;
+ struct dsp_bufferattr *pattr = NULL;
+ struct dsp_bufferattr attr;
+ if (args->args_node_freemsgbuf.pattr) { /* Optional argument */
+ CP_FM_USR(&attr, args->args_node_freemsgbuf.pattr, status, 1);
+ if (DSP_SUCCEEDED(status))
+ pattr = &attr;
+
+ }
+
+ if (!args->args_node_freemsgbuf.pbuffer)
+ return -EFAULT;
+
+ if (DSP_SUCCEEDED(status)) {
+ status = node_free_msg_buf(args->args_node_freemsgbuf.hnode,
+ args->args_node_freemsgbuf.pbuffer,
+ pattr);
+ }
+
+ return status;
+}
+
+/*
+ * ======== nodewrap_get_attr ========
+ */
+u32 nodewrap_get_attr(union Trapped_Args *args, void *pr_ctxt)
+{
+ int status = 0;
+ struct dsp_nodeattr attr;
+
+ status = node_get_attr(args->args_node_getattr.hnode, &attr,
+ args->args_node_getattr.attr_size);
+ CP_TO_USR(args->args_node_getattr.pattr, &attr, status, 1);
+
+ return status;
+}
+
+/*
+ * ======== nodewrap_get_message ========
+ */
+u32 nodewrap_get_message(union Trapped_Args *args, void *pr_ctxt)
+{
+ int status;
+ struct dsp_msg msg;
+
+ status = node_get_message(args->args_node_getmessage.hnode, &msg,
+ args->args_node_getmessage.utimeout);
+
+ CP_TO_USR(args->args_node_getmessage.message, &msg, status, 1);
+
+ return status;
+}
+
+/*
+ * ======== nodewrap_pause ========
+ */
+u32 nodewrap_pause(union Trapped_Args *args, void *pr_ctxt)
+{
+ u32 ret;
+
+ ret = node_pause(args->args_node_pause.hnode);
+
+ return ret;
+}
+
+/*
+ * ======== nodewrap_put_message ========
+ */
+u32 nodewrap_put_message(union Trapped_Args *args, void *pr_ctxt)
+{
+ int status = 0;
+ struct dsp_msg msg;
+
+ CP_FM_USR(&msg, args->args_node_putmessage.message, status, 1);
+
+ if (DSP_SUCCEEDED(status)) {
+ status =
+ node_put_message(args->args_node_putmessage.hnode, &msg,
+ args->args_node_putmessage.utimeout);
+ }
+
+ return status;
+}
+
+/*
+ * ======== nodewrap_register_notify ========
+ */
+u32 nodewrap_register_notify(union Trapped_Args *args, void *pr_ctxt)
+{
+ int status = 0;
+ struct dsp_notification notification;
+
+ /* Initialize the notification data structure */
+ notification.ps_name = NULL;
+ notification.handle = NULL;
+
+ if (!args->args_proc_register_notify.event_mask)
+ CP_FM_USR(&notification,
+ args->args_proc_register_notify.hnotification,
+ status, 1);
+
+ status = node_register_notify(args->args_node_registernotify.hnode,
+ args->args_node_registernotify.event_mask,
+ args->args_node_registernotify.
+ notify_type, &notification);
+ CP_TO_USR(args->args_node_registernotify.hnotification, &notification,
+ status, 1);
+ return status;
+}
+
+/*
+ * ======== nodewrap_run ========
+ */
+u32 nodewrap_run(union Trapped_Args *args, void *pr_ctxt)
+{
+ u32 ret;
+
+ ret = node_run(args->args_node_run.hnode);
+
+ return ret;
+}
+
+/*
+ * ======== nodewrap_terminate ========
+ */
+u32 nodewrap_terminate(union Trapped_Args *args, void *pr_ctxt)
+{
+ int status;
+ int tempstatus;
+
+ status = node_terminate(args->args_node_terminate.hnode, &tempstatus);
+
+ CP_TO_USR(args->args_node_terminate.pstatus, &tempstatus, status, 1);
+
+ return status;
+}
+
+/*
+ * ======== nodewrap_get_uuid_props ========
+ */
+u32 nodewrap_get_uuid_props(union Trapped_Args *args, void *pr_ctxt)
+{
+ int status = 0;
+ struct dsp_uuid node_uuid;
+ struct dsp_ndbprops *pnode_props = NULL;
+
+ CP_FM_USR(&node_uuid, args->args_node_getuuidprops.node_id_ptr, status,
+ 1);
+ if (DSP_FAILED(status))
+ goto func_cont;
+ pnode_props = kmalloc(sizeof(struct dsp_ndbprops), GFP_KERNEL);
+ if (pnode_props != NULL) {
+ status =
+ node_get_uuid_props(args->args_node_getuuidprops.hprocessor,
+ &node_uuid, pnode_props);
+ CP_TO_USR(args->args_node_getuuidprops.node_props, pnode_props,
+ status, 1);
+ } else
+ status = -ENOMEM;
+func_cont:
+ kfree(pnode_props);
+ return status;
+}
+
+/*
+ * ======== strmwrap_allocate_buffer ========
+ */
+u32 strmwrap_allocate_buffer(union Trapped_Args *args, void *pr_ctxt)
+{
+ int status;
+ u8 **ap_buffer = NULL;
+ u32 num_bufs = args->args_strm_allocatebuffer.num_bufs;
+
+ if (num_bufs > MAX_BUFS)
+ return -EINVAL;
+
+ ap_buffer = kmalloc((num_bufs * sizeof(u8 *)), GFP_KERNEL);
+
+ status = strm_allocate_buffer(args->args_strm_allocatebuffer.hstream,
+ args->args_strm_allocatebuffer.usize,
+ ap_buffer, num_bufs, pr_ctxt);
+ if (DSP_SUCCEEDED(status)) {
+ CP_TO_USR(args->args_strm_allocatebuffer.ap_buffer, ap_buffer,
+ status, num_bufs);
+ if (DSP_FAILED(status)) {
+ status = -EFAULT;
+ strm_free_buffer(args->args_strm_allocatebuffer.hstream,
+ ap_buffer, num_bufs, pr_ctxt);
+ }
+ }
+ kfree(ap_buffer);
+
+ return status;
+}
+
+/*
+ * ======== strmwrap_close ========
+ */
+u32 strmwrap_close(union Trapped_Args *args, void *pr_ctxt)
+{
+ return strm_close(args->args_strm_close.hstream, pr_ctxt);
+}
+
+/*
+ * ======== strmwrap_free_buffer ========
+ */
+u32 strmwrap_free_buffer(union Trapped_Args *args, void *pr_ctxt)
+{
+ int status = 0;
+ u8 **ap_buffer = NULL;
+ u32 num_bufs = args->args_strm_freebuffer.num_bufs;
+
+ if (num_bufs > MAX_BUFS)
+ return -EINVAL;
+
+ ap_buffer = kmalloc((num_bufs * sizeof(u8 *)), GFP_KERNEL);
+
+ CP_FM_USR(ap_buffer, args->args_strm_freebuffer.ap_buffer, status,
+ num_bufs);
+
+ if (DSP_SUCCEEDED(status)) {
+ status = strm_free_buffer(args->args_strm_freebuffer.hstream,
+ ap_buffer, num_bufs, pr_ctxt);
+ }
+ CP_TO_USR(args->args_strm_freebuffer.ap_buffer, ap_buffer, status,
+ num_bufs);
+ kfree(ap_buffer);
+
+ return status;
+}
+
+/*
+ * ======== strmwrap_get_event_handle ========
+ */
+u32 __deprecated strmwrap_get_event_handle(union Trapped_Args * args,
+ void *pr_ctxt)
+{
+ pr_err("%s: deprecated dspbridge ioctl\n", __func__);
+ return -ENOSYS;
+}
+
+/*
+ * ======== strmwrap_get_info ========
+ */
+u32 strmwrap_get_info(union Trapped_Args *args, void *pr_ctxt)
+{
+ int status = 0;
+ struct stream_info strm_info;
+ struct dsp_streaminfo user;
+ struct dsp_streaminfo *temp;
+
+ CP_FM_USR(&strm_info, args->args_strm_getinfo.stream_info, status, 1);
+ temp = strm_info.user_strm;
+
+ strm_info.user_strm = &user;
+
+ if (DSP_SUCCEEDED(status)) {
+ status = strm_get_info(args->args_strm_getinfo.hstream,
+ &strm_info,
+ args->args_strm_getinfo.
+ stream_info_size);
+ }
+ CP_TO_USR(temp, strm_info.user_strm, status, 1);
+ strm_info.user_strm = temp;
+ CP_TO_USR(args->args_strm_getinfo.stream_info, &strm_info, status, 1);
+ return status;
+}
+
+/*
+ * ======== strmwrap_idle ========
+ */
+u32 strmwrap_idle(union Trapped_Args *args, void *pr_ctxt)
+{
+ u32 ret;
+
+ ret = strm_idle(args->args_strm_idle.hstream,
+ args->args_strm_idle.flush_flag);
+
+ return ret;
+}
+
+/*
+ * ======== strmwrap_issue ========
+ */
+u32 strmwrap_issue(union Trapped_Args *args, void *pr_ctxt)
+{
+ int status = 0;
+
+ if (!args->args_strm_issue.pbuffer)
+ return -EFAULT;
+
+ /* No need to do CP_FM_USR for the user buffer (pbuffer),
+ as this is done in the Bridge internal function bridge_chnl_add_io_req
+ in chnl_sm.c */
+ status = strm_issue(args->args_strm_issue.hstream,
+ args->args_strm_issue.pbuffer,
+ args->args_strm_issue.dw_bytes,
+ args->args_strm_issue.dw_buf_size,
+ args->args_strm_issue.dw_arg);
+
+ return status;
+}
+
+/*
+ * ======== strmwrap_open ========
+ */
+u32 strmwrap_open(union Trapped_Args *args, void *pr_ctxt)
+{
+ int status = 0;
+ struct strm_attr attr;
+ struct strm_object *strm_obj;
+ struct dsp_streamattrin strm_attr_in;
+
+ CP_FM_USR(&attr, args->args_strm_open.attr_in, status, 1);
+
+ if (attr.stream_attr_in != NULL) { /* Optional argument */
+ CP_FM_USR(&strm_attr_in, attr.stream_attr_in, status, 1);
+ if (DSP_SUCCEEDED(status)) {
+ attr.stream_attr_in = &strm_attr_in;
+ if (attr.stream_attr_in->strm_mode == STRMMODE_LDMA)
+ return -ENOSYS;
+ }
+
+ }
+ status = strm_open(args->args_strm_open.hnode,
+ args->args_strm_open.direction,
+ args->args_strm_open.index, &attr, &strm_obj,
+ pr_ctxt);
+ CP_TO_USR(args->args_strm_open.ph_stream, &strm_obj, status, 1);
+ return status;
+}
+
+/*
+ * ======== strmwrap_reclaim ========
+ */
+u32 strmwrap_reclaim(union Trapped_Args *args, void *pr_ctxt)
+{
+ int status = 0;
+ u8 *buf_ptr;
+ u32 ul_bytes;
+ u32 dw_arg;
+ u32 ul_buf_size;
+
+ status = strm_reclaim(args->args_strm_reclaim.hstream, &buf_ptr,
+ &ul_bytes, &ul_buf_size, &dw_arg);
+ CP_TO_USR(args->args_strm_reclaim.buf_ptr, &buf_ptr, status, 1);
+ CP_TO_USR(args->args_strm_reclaim.bytes, &ul_bytes, status, 1);
+ CP_TO_USR(args->args_strm_reclaim.pdw_arg, &dw_arg, status, 1);
+
+ if (args->args_strm_reclaim.buf_size_ptr != NULL) {
+ CP_TO_USR(args->args_strm_reclaim.buf_size_ptr, &ul_buf_size,
+ status, 1);
+ }
+
+ return status;
+}
+
+/*
+ * ======== strmwrap_register_notify ========
+ */
+u32 strmwrap_register_notify(union Trapped_Args *args, void *pr_ctxt)
+{
+ int status = 0;
+ struct dsp_notification notification;
+
+ /* Initialize the notification data structure */
+ notification.ps_name = NULL;
+ notification.handle = NULL;
+
+ status = strm_register_notify(args->args_strm_registernotify.hstream,
+ args->args_strm_registernotify.event_mask,
+ args->args_strm_registernotify.
+ notify_type, &notification);
+ CP_TO_USR(args->args_strm_registernotify.hnotification, &notification,
+ status, 1);
+
+ return status;
+}
+
+/*
+ * ======== strmwrap_select ========
+ */
+u32 strmwrap_select(union Trapped_Args *args, void *pr_ctxt)
+{
+ u32 mask;
+ struct strm_object *strm_tab[MAX_STREAMS];
+ int status = 0;
+
+ if (args->args_strm_select.strm_num > MAX_STREAMS)
+ return -EINVAL;
+
+ CP_FM_USR(strm_tab, args->args_strm_select.stream_tab, status,
+ args->args_strm_select.strm_num);
+ if (DSP_SUCCEEDED(status)) {
+ status = strm_select(strm_tab, args->args_strm_select.strm_num,
+ &mask, args->args_strm_select.utimeout);
+ }
+ CP_TO_USR(args->args_strm_select.pmask, &mask, status, 1);
+ return status;
+}
+
+/* CMM */
+
+/*
+ * ======== cmmwrap_calloc_buf ========
+ */
+u32 __deprecated cmmwrap_calloc_buf(union Trapped_Args * args, void *pr_ctxt)
+{
+ /* This operation is done in kernel */
+ pr_err("%s: deprecated dspbridge ioctl\n", __func__);
+ return -ENOSYS;
+}
+
+/*
+ * ======== cmmwrap_free_buf ========
+ */
+u32 __deprecated cmmwrap_free_buf(union Trapped_Args * args, void *pr_ctxt)
+{
+ /* This operation is done in kernel */
+ pr_err("%s: deprecated dspbridge ioctl\n", __func__);
+ return -ENOSYS;
+}
+
+/*
+ * ======== cmmwrap_get_handle ========
+ */
+u32 cmmwrap_get_handle(union Trapped_Args *args, void *pr_ctxt)
+{
+ int status = 0;
+ struct cmm_object *hcmm_mgr;
+
+ status = cmm_get_handle(args->args_cmm_gethandle.hprocessor, &hcmm_mgr);
+
+ CP_TO_USR(args->args_cmm_gethandle.ph_cmm_mgr, &hcmm_mgr, status, 1);
+
+ return status;
+}
+
+/*
+ * ======== cmmwrap_get_info ========
+ */
+u32 cmmwrap_get_info(union Trapped_Args *args, void *pr_ctxt)
+{
+ int status = 0;
+ struct cmm_info cmm_info_obj;
+
+ status = cmm_get_info(args->args_cmm_getinfo.hcmm_mgr, &cmm_info_obj);
+
+ CP_TO_USR(args->args_cmm_getinfo.cmm_info_obj, &cmm_info_obj, status,
+ 1);
+
+ return status;
+}
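All of the wrappers above funnel user pointers through the CP_FM_USR()/CP_TO_USR()
helpers defined earlier in dspapi.c, which is why most of them only test
DSP_SUCCEEDED(status) after a copy rather than checking the copy routine's return
value directly. A minimal sketch of that pattern, assuming the helpers simply wrap
copy_from_user()/copy_to_user() and fold any failure into the status variable (the
in-tree macros may differ in detail):

	/* Hedged sketch only -- not the actual dspapi.c definitions. */
	#define CP_FM_USR(to, from, status, elements) \
		do { \
			if (DSP_SUCCEEDED(status) && \
			    copy_from_user((to), (from), (elements) * sizeof(*(to)))) \
				(status) = -EFAULT; \
		} while (0)

	#define CP_TO_USR(to, from, status, elements) \
		do { \
			if (DSP_SUCCEEDED(status) && \
			    copy_to_user((to), (from), (elements) * sizeof(*(from)))) \
				(status) = -EFAULT; \
		} while (0)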
diff --git a/drivers/staging/tidspbridge/pmgr/io.c b/drivers/staging/tidspbridge/pmgr/io.c
new file mode 100644
index 0000000..c6ad203
--- /dev/null
+++ b/drivers/staging/tidspbridge/pmgr/io.c
@@ -0,0 +1,142 @@
+/*
+ * io.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * IO manager interface: Manages IO between CHNL and msg_ctrl.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+/* ----------------------------------- Host OS */
+#include <dspbridge/host_os.h>
+
+/* ----------------------------------- DSP/BIOS Bridge */
+#include <dspbridge/std.h>
+#include <dspbridge/dbdefs.h>
+
+/* ----------------------------------- Trace & Debug */
+#include <dspbridge/dbc.h>
+
+/* ----------------------------------- OS Adaptation Layer */
+#include <dspbridge/cfg.h>
+
+/* ----------------------------------- Platform Manager */
+#include <dspbridge/dev.h>
+
+/* ----------------------------------- This */
+#include <ioobj.h>
+#include <dspbridge/iodefs.h>
+#include <dspbridge/io.h>
+
+/* ----------------------------------- Globals */
+static u32 refs;
+
+/*
+ * ======== io_create ========
+ * Purpose:
+ * Create an IO manager object, responsible for managing IO between
+ * CHNL and msg_ctrl
+ */
+int io_create(OUT struct io_mgr **phIOMgr, struct dev_object *hdev_obj,
+ IN CONST struct io_attrs *pMgrAttrs)
+{
+ struct bridge_drv_interface *intf_fxns;
+ struct io_mgr *hio_mgr = NULL;
+ struct io_mgr_ *pio_mgr = NULL;
+ int status = 0;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(phIOMgr != NULL);
+ DBC_REQUIRE(pMgrAttrs != NULL);
+
+ *phIOMgr = NULL;
+
+ /* A memory base of 0 implies no memory base: */
+ if ((pMgrAttrs->shm_base != 0) && (pMgrAttrs->usm_length == 0))
+ status = -EINVAL;
+
+ if (pMgrAttrs->word_size == 0)
+ status = -EINVAL;
+
+ if (DSP_SUCCEEDED(status)) {
+ dev_get_intf_fxns(hdev_obj, &intf_fxns);
+
+ /* Let Bridge channel module finish the create: */
+ status = (*intf_fxns->pfn_io_create) (&hio_mgr, hdev_obj,
+ pMgrAttrs);
+
+ if (DSP_SUCCEEDED(status)) {
+ pio_mgr = (struct io_mgr_ *)hio_mgr;
+ pio_mgr->intf_fxns = intf_fxns;
+ pio_mgr->hdev_obj = hdev_obj;
+
+ /* Return the new channel manager handle: */
+ *phIOMgr = hio_mgr;
+ }
+ }
+
+ return status;
+}
+
+/*
+ * ======== io_destroy ========
+ * Purpose:
+ * Delete IO manager.
+ */
+int io_destroy(struct io_mgr *hio_mgr)
+{
+ struct bridge_drv_interface *intf_fxns;
+ struct io_mgr_ *pio_mgr = (struct io_mgr_ *)hio_mgr;
+ int status;
+
+ DBC_REQUIRE(refs > 0);
+
+ intf_fxns = pio_mgr->intf_fxns;
+
+ /* Let Bridge channel module destroy the io_mgr: */
+ status = (*intf_fxns->pfn_io_destroy) (hio_mgr);
+
+ return status;
+}
+
+/*
+ * ======== io_exit ========
+ * Purpose:
+ * Discontinue usage of the IO module.
+ */
+void io_exit(void)
+{
+ DBC_REQUIRE(refs > 0);
+
+ refs--;
+
+ DBC_ENSURE(refs >= 0);
+}
+
+/*
+ * ======== io_init ========
+ * Purpose:
+ * Initialize the IO module's private state.
+ */
+bool io_init(void)
+{
+ bool ret = true;
+
+ DBC_REQUIRE(refs >= 0);
+
+ if (ret)
+ refs++;
+
+ DBC_ENSURE((ret && (refs > 0)) || (!ret && (refs >= 0)));
+
+ return ret;
+}
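io.c itself only validates the attributes, resolves the Bridge driver's function
table and delegates; a purely illustrative call sequence from the platform-manager
side would look roughly like the sketch below, where hdev_obj comes from the device
setup code and the io_attrs values are placeholders:

	struct io_mgr *hio_mgr;
	struct io_attrs attrs = {
		.word_size = 2,		/* placeholder; must be non-zero */
	};
	int status;

	io_init();			/* take a module reference */
	status = io_create(&hio_mgr, hdev_obj, &attrs);
	if (DSP_SUCCEEDED(status)) {
		/* ... hand hio_mgr to the IO/CHNL machinery ... */
		io_destroy(hio_mgr);	/* delegates to pfn_io_destroy() */
	}
	io_exit();			/* drop the module reference */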
diff --git a/drivers/staging/tidspbridge/pmgr/ioobj.h b/drivers/staging/tidspbridge/pmgr/ioobj.h
new file mode 100644
index 0000000..f46355f
--- /dev/null
+++ b/drivers/staging/tidspbridge/pmgr/ioobj.h
@@ -0,0 +1,38 @@
+/*
+ * ioobj.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Structure subcomponents of channel class library IO objects which
+ * are exposed to DSP API from Bridge driver.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef IOOBJ_
+#define IOOBJ_
+
+#include <dspbridge/devdefs.h>
+#include <dspbridge/dspdefs.h>
+
+/*
+ * This struct is the first field in an io_mgr struct. Other,
+ * implementation-specific fields follow this structure in memory.
+ */
+struct io_mgr_ {
+ /* These must be the first fields in an io_mgr struct: */
+ struct bridge_dev_context *hbridge_context; /* Bridge context. */
+ /* Function interface to Bridge driver. */
+ struct bridge_drv_interface *intf_fxns;
+ struct dev_object *hdev_obj; /* Device this board represents. */
+};
+
+#endif /* IOOBJ_ */
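This header is what makes the cast in io_create() above -- (struct io_mgr_ *)hio_mgr
-- legal: the Bridge driver's private io_mgr is assumed to begin with exactly these
members. Illustratively (the real layout lives in the Bridge driver sources):

	/* Illustrative only: the Bridge-side object must start with this prefix. */
	struct io_mgr {
		struct bridge_dev_context *hbridge_context;	/* must be first */
		struct bridge_drv_interface *intf_fxns;		/* same order as */
		struct dev_object *hdev_obj;			/* struct io_mgr_ */
		/* ... implementation-specific fields follow ... */
	};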
diff --git a/drivers/staging/tidspbridge/pmgr/msg.c b/drivers/staging/tidspbridge/pmgr/msg.c
new file mode 100644
index 0000000..64f1cb4
--- /dev/null
+++ b/drivers/staging/tidspbridge/pmgr/msg.c
@@ -0,0 +1,129 @@
+/*
+ * msg.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * DSP/BIOS Bridge msg_ctrl Module.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+/* ----------------------------------- Host OS */
+#include <dspbridge/host_os.h>
+
+/* ----------------------------------- DSP/BIOS Bridge */
+#include <dspbridge/std.h>
+#include <dspbridge/dbdefs.h>
+
+/* ----------------------------------- Trace & Debug */
+#include <dspbridge/dbc.h>
+
+/* ----------------------------------- Bridge Driver */
+#include <dspbridge/dspdefs.h>
+
+/* ----------------------------------- Platform Manager */
+#include <dspbridge/dev.h>
+
+/* ----------------------------------- This */
+#include <msgobj.h>
+#include <dspbridge/msg.h>
+
+/* ----------------------------------- Globals */
+static u32 refs; /* module reference count */
+
+/*
+ * ======== msg_create ========
+ * Purpose:
+ * Create an object to manage message queues. Only one of these objects
+ * can exist per device object.
+ */
+int msg_create(OUT struct msg_mgr **phMsgMgr,
+ struct dev_object *hdev_obj, msg_onexit msgCallback)
+{
+ struct bridge_drv_interface *intf_fxns;
+ struct msg_mgr_ *msg_mgr_obj;
+ struct msg_mgr *hmsg_mgr;
+ int status = 0;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(phMsgMgr != NULL);
+ DBC_REQUIRE(msgCallback != NULL);
+ DBC_REQUIRE(hdev_obj != NULL);
+
+ *phMsgMgr = NULL;
+
+ dev_get_intf_fxns(hdev_obj, &intf_fxns);
+
+ /* Let Bridge message module finish the create: */
+ status =
+ (*intf_fxns->pfn_msg_create) (&hmsg_mgr, hdev_obj, msgCallback);
+
+ if (DSP_SUCCEEDED(status)) {
+ /* Fill in DSP API message module's fields of the msg_mgr
+ * structure */
+ msg_mgr_obj = (struct msg_mgr_ *)hmsg_mgr;
+ msg_mgr_obj->intf_fxns = intf_fxns;
+
+ /* Finally, return the new message manager handle: */
+ *phMsgMgr = hmsg_mgr;
+ } else {
+ status = -EPERM;
+ }
+ return status;
+}
+
+/*
+ * ======== msg_delete ========
+ * Purpose:
+ * Delete a msg_ctrl manager allocated in msg_create().
+ */
+void msg_delete(struct msg_mgr *hmsg_mgr)
+{
+ struct msg_mgr_ *msg_mgr_obj = (struct msg_mgr_ *)hmsg_mgr;
+ struct bridge_drv_interface *intf_fxns;
+
+ DBC_REQUIRE(refs > 0);
+
+ if (msg_mgr_obj) {
+ intf_fxns = msg_mgr_obj->intf_fxns;
+
+ /* Let Bridge message module destroy the msg_mgr: */
+ (*intf_fxns->pfn_msg_delete) (hmsg_mgr);
+ } else {
+ dev_dbg(bridge, "%s: Error hmsg_mgr handle: %p\n",
+ __func__, hmsg_mgr);
+ }
+}
+
+/*
+ * ======== msg_exit ========
+ */
+void msg_exit(void)
+{
+ DBC_REQUIRE(refs > 0);
+ refs--;
+
+ DBC_ENSURE(refs >= 0);
+}
+
+/*
+ * ======== msg_mod_init ========
+ */
+bool msg_mod_init(void)
+{
+ DBC_REQUIRE(refs >= 0);
+
+ refs++;
+
+ DBC_ENSURE(refs >= 0);
+
+ return true;
+}
diff --git a/drivers/staging/tidspbridge/pmgr/msgobj.h b/drivers/staging/tidspbridge/pmgr/msgobj.h
new file mode 100644
index 0000000..14ca633
--- /dev/null
+++ b/drivers/staging/tidspbridge/pmgr/msgobj.h
@@ -0,0 +1,38 @@
+/*
+ * msgobj.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Structure subcomponents of channel class library msg_ctrl objects which
+ * are exposed to DSP API from Bridge driver.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef MSGOBJ_
+#define MSGOBJ_
+
+#include <dspbridge/dspdefs.h>
+
+#include <dspbridge/msgdefs.h>
+
+/*
+ * This struct is the first field in a msg_mgr struct. Other,
+ * implementation-specific fields follow this structure in memory.
+ */
+struct msg_mgr_ {
+ /* The first field must match that in _msg_sm.h */
+
+ /* Function interface to Bridge driver. */
+ struct bridge_drv_interface *intf_fxns;
+};
+
+#endif /* MSGOBJ_ */
--
1.7.0.4
From: Omar Ramirez Luna <[email protected]>
Add TI's DSP Bridge MMU support
Signed-off-by: Omar Ramirez Luna <[email protected]>
Signed-off-by: Kanigeri, Hari <[email protected]>
Signed-off-by: Ameya Palande <[email protected]>
Signed-off-by: Guzman Lugo, Fernando <[email protected]>
Signed-off-by: Hebbar, Shivananda <[email protected]>
Signed-off-by: Ramos Falcon, Ernesto <[email protected]>
Signed-off-by: Felipe Contreras <[email protected]>
Signed-off-by: Anna, Suman <[email protected]>
Signed-off-by: Gupta, Ramesh <[email protected]>
Signed-off-by: Gomez Castellanos, Ivan <[email protected]>
Signed-off-by: Andy Shevchenko <[email protected]>
Signed-off-by: Armando Uribe De Leon <[email protected]>
Signed-off-by: Deepak Chitriki <[email protected]>
Signed-off-by: Menon, Nishanth <[email protected]>
Signed-off-by: Phil Carmody <[email protected]>
Signed-off-by: Ohad Ben-Cohen <[email protected]>
---
drivers/staging/tidspbridge/hw/EasiGlobal.h | 41 ++
drivers/staging/tidspbridge/hw/GlobalTypes.h | 308 ++++++++++++++
drivers/staging/tidspbridge/hw/MMUAccInt.h | 76 ++++
drivers/staging/tidspbridge/hw/MMURegAcM.h | 226 ++++++++++
drivers/staging/tidspbridge/hw/hw_defs.h | 60 +++
drivers/staging/tidspbridge/hw/hw_mmu.c | 587 ++++++++++++++++++++++++++
drivers/staging/tidspbridge/hw/hw_mmu.h | 161 +++++++
7 files changed, 1459 insertions(+), 0 deletions(-)
create mode 100644 drivers/staging/tidspbridge/hw/EasiGlobal.h
create mode 100644 drivers/staging/tidspbridge/hw/GlobalTypes.h
create mode 100644 drivers/staging/tidspbridge/hw/MMUAccInt.h
create mode 100644 drivers/staging/tidspbridge/hw/MMURegAcM.h
create mode 100644 drivers/staging/tidspbridge/hw/hw_defs.h
create mode 100644 drivers/staging/tidspbridge/hw/hw_mmu.c
create mode 100644 drivers/staging/tidspbridge/hw/hw_mmu.h
diff --git a/drivers/staging/tidspbridge/hw/EasiGlobal.h b/drivers/staging/tidspbridge/hw/EasiGlobal.h
new file mode 100644
index 0000000..9b45aa7
--- /dev/null
+++ b/drivers/staging/tidspbridge/hw/EasiGlobal.h
@@ -0,0 +1,41 @@
+/*
+ * EasiGlobal.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Copyright (C) 2007 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef _EASIGLOBAL_H
+#define _EASIGLOBAL_H
+#include <linux/types.h>
+
+/*
+ * DEFINE: READ_ONLY, WRITE_ONLY & READ_WRITE
+ *
+ * DESCRIPTION: Defines used to describe register types for EASI-checker tests.
+ */
+
+#define READ_ONLY 1
+#define WRITE_ONLY 2
+#define READ_WRITE 3
+
+/*
+ * MACRO: _DEBUG_LEVEL1_EASI
+ *
+ * DESCRIPTION: A MACRO which can be used to indicate that a particular
+ * register access function was called.
+ *
+ * NOTE: We currently don't use this functionality.
+ */
+#define _DEBUG_LEVEL1_EASI(easiNum) ((void)0)
+
+#endif /* _EASIGLOBAL_H */
diff --git a/drivers/staging/tidspbridge/hw/GlobalTypes.h b/drivers/staging/tidspbridge/hw/GlobalTypes.h
new file mode 100644
index 0000000..9b55150
--- /dev/null
+++ b/drivers/staging/tidspbridge/hw/GlobalTypes.h
@@ -0,0 +1,308 @@
+/*
+ * GlobalTypes.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Global HW definitions
+ *
+ * Copyright (C) 2007 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef _GLOBALTYPES_H
+#define _GLOBALTYPES_H
+
+/*
+ * Definition: TRUE, FALSE
+ *
+ * DESCRIPTION: Boolean Definitions
+ */
+#ifndef TRUE
+#define FALSE 0
+#define TRUE (!(FALSE))
+#endif
+
+/*
+ * Definition: NULL
+ *
+ * DESCRIPTION: Invalid pointer
+ */
+#ifndef NULL
+#define NULL (void *)0
+#endif
+
+/*
+ * Definition: RET_CODE_BASE
+ *
+ * DESCRIPTION: Base value for return code offsets
+ */
+#define RET_CODE_BASE 0
+
+/*
+ * Definition: *BIT_OFFSET
+ *
+ * DESCRIPTION: offset in bytes from start of 32-bit word.
+ */
+#define LOWER16BIT_OFFSET 0
+#define UPPER16BIT_OFFSET 2
+
+#define LOWER8BIT_OFFSET 0
+#define LOWER_MIDDLE8BIT_OFFSET 1
+#define UPPER_MIDDLE8BIT_OFFSET 2
+#define UPPER8BIT_OFFSET 3
+
+#define LOWER8BIT_OF16_OFFSET 0
+#define UPPER8BIT_OF16_OFFSET 1
+
+/*
+ * Definition: *BIT_SHIFT
+ *
+ * DESCRIPTION: offset in bits from start of 32-bit word.
+ */
+#define LOWER16BIT_SHIFT 0
+#define UPPER16BIT_SHIFT 16
+
+#define LOWER8BIT_SHIFT 0
+#define LOWER_MIDDLE8BIT_SHIFT 8
+#define UPPER_MIDDLE8BIT_SHIFT 16
+#define UPPER8BIT_SHIFT 24
+
+#define LOWER8BIT_OF16_SHIFT 0
+#define UPPER8BIT_OF16_SHIFT 8
+
+/*
+ * Definition: LOWER16BIT_MASK
+ *
+ * DESCRIPTION: 16 bit mask used for inclusion of lower 16 bits i.e. mask out
+ * the upper 16 bits
+ */
+#define LOWER16BIT_MASK 0x0000FFFF
+
+/*
+ * Definition: LOWER8BIT_MASK
+ *
+ * DESCRIPTION: 8 bit mask used for inclusion of the lower 8 bits, i.e. mask
+ * out the upper 24 bits
+ */
+#define LOWER8BIT_MASK 0x000000FF
+
+/*
+ * Definition: RETURN32BITS_FROM16LOWER_AND16UPPER(lower16Bits, upper16Bits)
+ *
+ * DESCRIPTION: Returns a 32 bit value given a 16 bit lower value and a 16
+ * bit upper value
+ */
+#define RETURN32BITS_FROM16LOWER_AND16UPPER(lower16Bits, upper16Bits)\
+ (((((u32)lower16Bits) & LOWER16BIT_MASK)) | \
+ (((((u32)upper16Bits) & LOWER16BIT_MASK) << UPPER16BIT_SHIFT)))
+
+/*
+ * Definition: RETURN16BITS_FROM8LOWER_AND8UPPER(lower8Bits, upper8Bits)
+ *
+ * DESCRIPTION: Returns a 16 bit value given an 8 bit lower value and an 8
+ * bit upper value
+ */
+#define RETURN16BITS_FROM8LOWER_AND8UPPER(lower8Bits, upper8Bits)\
+ (((((u32)lower8Bits) & LOWER8BIT_MASK)) | \
+ (((((u32)upper8Bits) & LOWER8BIT_MASK) << UPPER8BIT_OF16_SHIFT)))
+
+/*
+ * Definition: RETURN32BITS_FROM48BIT_VALUES(lower8Bits, lowerMiddle8Bits,
+ * lowerUpper8Bits, upper8Bits)
+ *
+ * DESCRIPTION: Returns a 32 bit value given four 8 bit values
+ */
+#define RETURN32BITS_FROM48BIT_VALUES(lower8Bits, lowerMiddle8Bits,\
+ lowerUpper8Bits, upper8Bits)\
+ (((((u32)lower8Bits) & LOWER8BIT_MASK)) | \
+ (((((u32)lowerMiddle8Bits) & LOWER8BIT_MASK) <<\
+ LOWER_MIDDLE8BIT_SHIFT)) | \
+ (((((u32)lowerUpper8Bits) & LOWER8BIT_MASK) <<\
+ UPPER_MIDDLE8BIT_SHIFT)) | \
+ (((((u32)upper8Bits) & LOWER8BIT_MASK) <<\
+ UPPER8BIT_SHIFT)))
+
+/*
+ * Definition: READ_LOWER16BITS_OF32(value32bits)
+ *
+ * DESCRIPTION: Returns the lower 16 bits of a 32-bit value
+ */
+#define READ_LOWER16BITS_OF32(value32bits)\
+ ((u16)((u32)(value32bits) & LOWER16BIT_MASK))
+
+/*
+ * Definition: READ_UPPER16BITS_OF32(value32bits)
+ *
+ * DESCRIPTION: Returns the upper 16 bits of a 32-bit value
+ */
+#define READ_UPPER16BITS_OF32(value32bits)\
+ (((u16)((u32)(value32bits) >> UPPER16BIT_SHIFT)) &\
+ LOWER16BIT_MASK)
+
+/*
+ * Definition: READ_LOWER8BITS_OF32(value32bits)
+ *
+ * DESCRIPTION: Returns the lower 8 bits of a 32-bit value
+ */
+#define READ_LOWER8BITS_OF32(value32bits)\
+ ((u8)((u32)(value32bits) & LOWER8BIT_MASK))
+
+/*
+ * Definition: READ_LOWER_MIDDLE8BITS_OF32(value32bits)
+ *
+ * DESCRIPTION: Returns the lower-middle 8 bits of a 32-bit value
+ */
+#define READ_LOWER_MIDDLE8BITS_OF32(value32bits)\
+ (((u8)((u32)(value32bits) >> LOWER_MIDDLE8BIT_SHIFT)) &\
+ LOWER8BIT_MASK)
+
+/*
+ * Definition: READ_UPPER_MIDDLE8BITS_OF32(value32bits)
+ *
+ * DESCRIPTION: Returns the upper-middle 8 bits of a 32-bit value
+ */
+#define READ_UPPER_MIDDLE8BITS_OF32(value32bits)\
+ (((u8)((u32)(value32bits) >> UPPER_MIDDLE8BIT_SHIFT)) &\
+ LOWER8BIT_MASK)
+
+/*
+ * Definition: READ_UPPER8BITS_OF32(value32bits)
+ *
+ * DESCRIPTION: Returns the upper 8 bits of a 32-bit value
+ */
+#define READ_UPPER8BITS_OF32(value32bits)\
+ (((u8)((u32)(value32bits) >> UPPER8BIT_SHIFT)) & LOWER8BIT_MASK)
+
+/*
+ * Definition: READ_LOWER8BITS_OF16(value16bits)
+ *
+ * DESCRIPTION: Returns the lower 8 bits of a 16-bit value
+ */
+#define READ_LOWER8BITS_OF16(value16bits)\
+ ((u8)((u16)(value16bits) & LOWER8BIT_MASK))
+
+/*
+ * Definition: READ_UPPER8BITS_OF16(value16bits)
+ *
+ * DESCRIPTION: Returns the upper 8 bits of a 16-bit value
+ */
+#define READ_UPPER8BITS_OF16(value16bits)\
+ (((u8)((u32)(value16bits) >> UPPER8BIT_OF16_SHIFT)) & LOWER8BIT_MASK)
+
+/* UWORD16: 16 bit types */
+
+/* reg_uword8, reg_word8: 8 bit register types */
+typedef volatile unsigned char reg_uword8;
+typedef volatile signed char reg_word8;
+
+/* reg_uword16, reg_word16: 16 bit register types */
+#ifndef OMAPBRIDGE_TYPES
+typedef volatile unsigned short reg_uword16;
+#endif
+typedef volatile short reg_word16;
+
+/* reg_uword32, REG_WORD32: 32 bit register types */
+typedef volatile unsigned long reg_uword32;
+
+/* FLOAT
+ *
+ * Type to be used for floating point calculation. Note that floating point
+ * calculation is very CPU expensive, and you should only use it if you
+ * absolutely need to. */
+
+/* boolean_t: Boolean Type True, False */
+/* return_code_t: Return codes to be returned by all library functions */
+enum return_code_label {
+ RET_OK = 0,
+ RET_FAIL = -1,
+ RET_BAD_NULL_PARAM = -2,
+ RET_PARAM_OUT_OF_RANGE = -3,
+ RET_INVALID_ID = -4,
+ RET_EMPTY = -5,
+ RET_FULL = -6,
+ RET_TIMEOUT = -7,
+ RET_INVALID_OPERATION = -8,
+
+ /* Add new error codes at end of above list */
+
+ RET_NUM_RET_CODES /* this should ALWAYS be LAST entry */
+};
+
+/* MACRO: RD_MEM8, WR_MEM8
+ *
+ * DESCRIPTION: 8 bit memory access macros
+ */
+#define RD_MEM8(addr) ((u8)(*((u8 *)(addr))))
+#define WR_MEM8(addr, data) (*((u8 *)(addr)) = (u8)(data))
+
+/* MACRO: RD_MEM8_VOLATILE, WR_MEM8_VOLATILE
+ *
+ * DESCRIPTION: 8 bit register access macros
+ */
+#define RD_MEM8_VOLATILE(addr) ((u8)(*((reg_uword8 *)(addr))))
+#define WR_MEM8_VOLATILE(addr, data) (*((reg_uword8 *)(addr)) = (u8)(data))
+
+/*
+ * MACRO: RD_MEM16, WR_MEM16
+ *
+ * DESCRIPTION: 16 bit memory access macros
+ */
+#define RD_MEM16(addr) ((u16)(*((u16 *)(addr))))
+#define WR_MEM16(addr, data) (*((u16 *)(addr)) = (u16)(data))
+
+/*
+ * MACRO: RD_MEM16_VOLATILE, WR_MEM16_VOLATILE
+ *
+ * DESCRIPTION: 16 bit register access macros
+ */
+#define RD_MEM16_VOLATILE(addr) ((u16)(*((reg_uword16 *)(addr))))
+#define WR_MEM16_VOLATILE(addr, data) (*((reg_uword16 *)(addr)) =\
+ (u16)(data))
+
+/*
+ * MACRO: RD_MEM32, WR_MEM32
+ *
+ * DESCRIPTION: 32 bit memory access macros
+ */
+#define RD_MEM32(addr) ((u32)(*((u32 *)(addr))))
+#define WR_MEM32(addr, data) (*((u32 *)(addr)) = (u32)(data))
+
+/*
+ * MACRO: RD_MEM32_VOLATILE, WR_MEM32_VOLATILE
+ *
+ * DESCRIPTION: 32 bit register access macros
+ */
+#define RD_MEM32_VOLATILE(addr) ((u32)(*((reg_uword32 *)(addr))))
+#define WR_MEM32_VOLATILE(addr, data) (*((reg_uword32 *)(addr)) =\
+ (u32)(data))
+
+/* Not sure if this all belongs here */
+
+#define CHECK_RETURN_VALUE(actualValue, expectedValue, returnCodeIfMismatch,\
+ spyCodeIfMisMatch)
+#define CHECK_RETURN_VALUE_RET(actualValue, expectedValue, returnCodeIfMismatch)
+#define CHECK_RETURN_VALUE_RES(actualValue, expectedValue, spyCodeIfMisMatch)
+#define CHECK_RETURN_VALUE_RET_VOID(actualValue, expectedValue,\
+ spyCodeIfMisMatch)
+
+#define CHECK_INPUT_PARAM(actualValue, invalidValue, returnCodeIfMismatch,\
+ spyCodeIfMisMatch)
+#define CHECK_INPUT_PARAM_NO_SPY(actualValue, invalidValue,\
+ returnCodeIfMismatch)
+#define CHECK_INPUT_RANGE(actualValue, minValidValue, maxValidValue,\
+ returnCodeIfMismatch, spyCodeIfMisMatch)
+#define CHECK_INPUT_RANGE_NO_SPY(actualValue, minValidValue, maxValidValue,\
+ returnCodeIfMismatch)
+#define CHECK_INPUT_RANGE_MIN0(actualValue, maxValidValue,\
+ returnCodeIfMismatch, spyCodeIfMisMatch)
+#define CHECK_INPUT_RANGE_NO_SPY_MIN0(actualValue, maxValidValue,\
+ returnCodeIfMismatch)
+
+#endif /* _GLOBALTYPES_H */
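A quick worked example of the packing and extraction macros above (values picked
purely for illustration):

	u32 v = RETURN32BITS_FROM16LOWER_AND16UPPER(0x5678, 0x1234);	/* 0x12345678 */
	u32 w = RETURN32BITS_FROM48BIT_VALUES(0x78, 0x56, 0x34, 0x12);	/* 0x12345678 */

	u16 hi = READ_UPPER16BITS_OF32(v);		/* 0x1234 */
	u8  b0 = READ_LOWER8BITS_OF32(v);		/* 0x78   */
	u8  b1 = READ_LOWER_MIDDLE8BITS_OF32(v);	/* 0x56   */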
diff --git a/drivers/staging/tidspbridge/hw/MMUAccInt.h b/drivers/staging/tidspbridge/hw/MMUAccInt.h
new file mode 100644
index 0000000..1cefca3
--- /dev/null
+++ b/drivers/staging/tidspbridge/hw/MMUAccInt.h
@@ -0,0 +1,76 @@
+/*
+ * MMUAccInt.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Copyright (C) 2007 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef _MMU_ACC_INT_H
+#define _MMU_ACC_INT_H
+
+/* Mappings of level 1 EASI function numbers to function names */
+
+#define EASIL1_MMUMMU_SYSCONFIG_READ_REGISTER32 (MMU_BASE_EASIL1 + 3)
+#define EASIL1_MMUMMU_SYSCONFIG_IDLE_MODE_WRITE32 (MMU_BASE_EASIL1 + 17)
+#define EASIL1_MMUMMU_SYSCONFIG_AUTO_IDLE_WRITE32 (MMU_BASE_EASIL1 + 39)
+#define EASIL1_MMUMMU_IRQSTATUS_WRITE_REGISTER32 (MMU_BASE_EASIL1 + 51)
+#define EASIL1_MMUMMU_IRQENABLE_READ_REGISTER32 (MMU_BASE_EASIL1 + 102)
+#define EASIL1_MMUMMU_IRQENABLE_WRITE_REGISTER32 (MMU_BASE_EASIL1 + 103)
+#define EASIL1_MMUMMU_WALKING_STTWL_RUNNING_READ32 (MMU_BASE_EASIL1 + 156)
+#define EASIL1_MMUMMU_CNTLTWL_ENABLE_READ32 (MMU_BASE_EASIL1 + 174)
+#define EASIL1_MMUMMU_CNTLTWL_ENABLE_WRITE32 (MMU_BASE_EASIL1 + 180)
+#define EASIL1_MMUMMU_CNTLMMU_ENABLE_WRITE32 (MMU_BASE_EASIL1 + 190)
+#define EASIL1_MMUMMU_FAULT_AD_READ_REGISTER32 (MMU_BASE_EASIL1 + 194)
+#define EASIL1_MMUMMU_TTB_WRITE_REGISTER32 (MMU_BASE_EASIL1 + 198)
+#define EASIL1_MMUMMU_LOCK_READ_REGISTER32 (MMU_BASE_EASIL1 + 203)
+#define EASIL1_MMUMMU_LOCK_WRITE_REGISTER32 (MMU_BASE_EASIL1 + 204)
+#define EASIL1_MMUMMU_LOCK_BASE_VALUE_READ32 (MMU_BASE_EASIL1 + 205)
+#define EASIL1_MMUMMU_LOCK_CURRENT_VICTIM_READ32 (MMU_BASE_EASIL1 + 209)
+#define EASIL1_MMUMMU_LOCK_CURRENT_VICTIM_WRITE32 (MMU_BASE_EASIL1 + 211)
+#define EASIL1_MMUMMU_LOCK_CURRENT_VICTIM_SET32 (MMU_BASE_EASIL1 + 212)
+#define EASIL1_MMUMMU_LD_TLB_READ_REGISTER32 (MMU_BASE_EASIL1 + 213)
+#define EASIL1_MMUMMU_LD_TLB_WRITE_REGISTER32 (MMU_BASE_EASIL1 + 214)
+#define EASIL1_MMUMMU_CAM_WRITE_REGISTER32 (MMU_BASE_EASIL1 + 226)
+#define EASIL1_MMUMMU_RAM_WRITE_REGISTER32 (MMU_BASE_EASIL1 + 268)
+#define EASIL1_MMUMMU_FLUSH_ENTRY_WRITE_REGISTER32 (MMU_BASE_EASIL1 + 322)
+
+/* Register offset address definitions */
+#define MMU_MMU_SYSCONFIG_OFFSET 0x10
+#define MMU_MMU_IRQSTATUS_OFFSET 0x18
+#define MMU_MMU_IRQENABLE_OFFSET 0x1c
+#define MMU_MMU_WALKING_ST_OFFSET 0x40
+#define MMU_MMU_CNTL_OFFSET 0x44
+#define MMU_MMU_FAULT_AD_OFFSET 0x48
+#define MMU_MMU_TTB_OFFSET 0x4c
+#define MMU_MMU_LOCK_OFFSET 0x50
+#define MMU_MMU_LD_TLB_OFFSET 0x54
+#define MMU_MMU_CAM_OFFSET 0x58
+#define MMU_MMU_RAM_OFFSET 0x5c
+#define MMU_MMU_GFLUSH_OFFSET 0x60
+#define MMU_MMU_FLUSH_ENTRY_OFFSET 0x64
+/* Bitfield mask and offset declarations */
+#define MMU_MMU_SYSCONFIG_IDLE_MODE_MASK 0x18
+#define MMU_MMU_SYSCONFIG_IDLE_MODE_OFFSET 3
+#define MMU_MMU_SYSCONFIG_AUTO_IDLE_MASK 0x1
+#define MMU_MMU_SYSCONFIG_AUTO_IDLE_OFFSET 0
+#define MMU_MMU_WALKING_ST_TWL_RUNNING_MASK 0x1
+#define MMU_MMU_WALKING_ST_TWL_RUNNING_OFFSET 0
+#define MMU_MMU_CNTL_TWL_ENABLE_MASK 0x4
+#define MMU_MMU_CNTL_TWL_ENABLE_OFFSET 2
+#define MMU_MMU_CNTL_MMU_ENABLE_MASK 0x2
+#define MMU_MMU_CNTL_MMU_ENABLE_OFFSET 1
+#define MMU_MMU_LOCK_BASE_VALUE_MASK 0xfc00
+#define MMU_MMU_LOCK_BASE_VALUE_OFFSET 10
+#define MMU_MMU_LOCK_CURRENT_VICTIM_MASK 0x3f0
+#define MMU_MMU_LOCK_CURRENT_VICTIM_OFFSET 4
+
+#endif /* _MMU_ACC_INT_H */
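As a worked example of the MMU_LOCK bitfields above, a raw MMU_LOCK value of
0x0C20 encodes a locked-base of 3 and a current victim of 2:

	u32 lock = 0x0C20;	/* example register value */
	u32 base_value = (lock & MMU_MMU_LOCK_BASE_VALUE_MASK)
				>> MMU_MMU_LOCK_BASE_VALUE_OFFSET;	/* (0x0C00 >> 10) == 3 */
	u32 victim = (lock & MMU_MMU_LOCK_CURRENT_VICTIM_MASK)
				>> MMU_MMU_LOCK_CURRENT_VICTIM_OFFSET;	/* (0x020 >> 4) == 2 */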
diff --git a/drivers/staging/tidspbridge/hw/MMURegAcM.h b/drivers/staging/tidspbridge/hw/MMURegAcM.h
new file mode 100644
index 0000000..8c0c549
--- /dev/null
+++ b/drivers/staging/tidspbridge/hw/MMURegAcM.h
@@ -0,0 +1,226 @@
+/*
+ * MMURegAcM.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Copyright (C) 2007 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef _MMU_REG_ACM_H
+#define _MMU_REG_ACM_H
+
+#include <GlobalTypes.h>
+#include <linux/io.h>
+#include <EasiGlobal.h>
+
+#include "MMUAccInt.h"
+
+#if defined(USE_LEVEL_1_MACROS)
+
+#define MMUMMU_SYSCONFIG_READ_REGISTER32(baseAddress)\
+ (_DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_SYSCONFIG_READ_REGISTER32),\
+ __raw_readl((baseAddress)+MMU_MMU_SYSCONFIG_OFFSET))
+
+#define MMUMMU_SYSCONFIG_IDLE_MODE_WRITE32(baseAddress, value)\
+{\
+ const u32 offset = MMU_MMU_SYSCONFIG_OFFSET;\
+ register u32 data = __raw_readl((baseAddress)+offset);\
+ register u32 newValue = (value);\
+ _DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_SYSCONFIG_IDLE_MODE_WRITE32);\
+ data &= ~(MMU_MMU_SYSCONFIG_IDLE_MODE_MASK);\
+ newValue <<= MMU_MMU_SYSCONFIG_IDLE_MODE_OFFSET;\
+ newValue &= MMU_MMU_SYSCONFIG_IDLE_MODE_MASK;\
+ newValue |= data;\
+ __raw_writel(newValue, baseAddress+offset);\
+}
+
+#define MMUMMU_SYSCONFIG_AUTO_IDLE_WRITE32(baseAddress, value)\
+{\
+ const u32 offset = MMU_MMU_SYSCONFIG_OFFSET;\
+ register u32 data = __raw_readl((baseAddress)+offset);\
+ register u32 newValue = (value);\
+ _DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_SYSCONFIG_AUTO_IDLE_WRITE32);\
+ data &= ~(MMU_MMU_SYSCONFIG_AUTO_IDLE_MASK);\
+ newValue <<= MMU_MMU_SYSCONFIG_AUTO_IDLE_OFFSET;\
+ newValue &= MMU_MMU_SYSCONFIG_AUTO_IDLE_MASK;\
+ newValue |= data;\
+ __raw_writel(newValue, baseAddress+offset);\
+}
+
+#define MMUMMU_IRQSTATUS_READ_REGISTER32(baseAddress)\
+ (_DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_IRQSTATUSReadRegister32),\
+ __raw_readl((baseAddress)+MMU_MMU_IRQSTATUS_OFFSET))
+
+#define MMUMMU_IRQSTATUS_WRITE_REGISTER32(baseAddress, value)\
+{\
+ const u32 offset = MMU_MMU_IRQSTATUS_OFFSET;\
+ register u32 newValue = (value);\
+ _DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_IRQSTATUS_WRITE_REGISTER32);\
+ __raw_writel(newValue, (baseAddress)+offset);\
+}
+
+#define MMUMMU_IRQENABLE_READ_REGISTER32(baseAddress)\
+ (_DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_IRQENABLE_READ_REGISTER32),\
+ __raw_readl((baseAddress)+MMU_MMU_IRQENABLE_OFFSET))
+
+#define MMUMMU_IRQENABLE_WRITE_REGISTER32(baseAddress, value)\
+{\
+ const u32 offset = MMU_MMU_IRQENABLE_OFFSET;\
+ register u32 newValue = (value);\
+ _DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_IRQENABLE_WRITE_REGISTER32);\
+ __raw_writel(newValue, (baseAddress)+offset);\
+}
+
+#define MMUMMU_WALKING_STTWL_RUNNING_READ32(baseAddress)\
+ (_DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_WALKING_STTWL_RUNNING_READ32),\
+ (((__raw_readl(((baseAddress)+(MMU_MMU_WALKING_ST_OFFSET))))\
+ & MMU_MMU_WALKING_ST_TWL_RUNNING_MASK) >>\
+ MMU_MMU_WALKING_ST_TWL_RUNNING_OFFSET))
+
+#define MMUMMU_CNTLTWL_ENABLE_READ32(baseAddress)\
+ (_DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_CNTLTWL_ENABLE_READ32),\
+ (((__raw_readl(((baseAddress)+(MMU_MMU_CNTL_OFFSET)))) &\
+ MMU_MMU_CNTL_TWL_ENABLE_MASK) >>\
+ MMU_MMU_CNTL_TWL_ENABLE_OFFSET))
+
+#define MMUMMU_CNTLTWL_ENABLE_WRITE32(baseAddress, value)\
+{\
+ const u32 offset = MMU_MMU_CNTL_OFFSET;\
+ register u32 data = __raw_readl((baseAddress)+offset);\
+ register u32 newValue = (value);\
+ _DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_CNTLTWL_ENABLE_WRITE32);\
+ data &= ~(MMU_MMU_CNTL_TWL_ENABLE_MASK);\
+ newValue <<= MMU_MMU_CNTL_TWL_ENABLE_OFFSET;\
+ newValue &= MMU_MMU_CNTL_TWL_ENABLE_MASK;\
+ newValue |= data;\
+ __raw_writel(newValue, baseAddress+offset);\
+}
+
+#define MMUMMU_CNTLMMU_ENABLE_WRITE32(baseAddress, value)\
+{\
+ const u32 offset = MMU_MMU_CNTL_OFFSET;\
+ register u32 data = __raw_readl((baseAddress)+offset);\
+ register u32 newValue = (value);\
+ _DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_CNTLMMU_ENABLE_WRITE32);\
+ data &= ~(MMU_MMU_CNTL_MMU_ENABLE_MASK);\
+ newValue <<= MMU_MMU_CNTL_MMU_ENABLE_OFFSET;\
+ newValue &= MMU_MMU_CNTL_MMU_ENABLE_MASK;\
+ newValue |= data;\
+ __raw_writel(newValue, baseAddress+offset);\
+}
+
+#define MMUMMU_FAULT_AD_READ_REGISTER32(baseAddress)\
+ (_DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_FAULT_AD_READ_REGISTER32),\
+ __raw_readl((baseAddress)+MMU_MMU_FAULT_AD_OFFSET))
+
+#define MMUMMU_TTB_WRITE_REGISTER32(baseAddress, value)\
+{\
+ const u32 offset = MMU_MMU_TTB_OFFSET;\
+ register u32 newValue = (value);\
+ _DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_TTB_WRITE_REGISTER32);\
+ __raw_writel(newValue, (baseAddress)+offset);\
+}
+
+#define MMUMMU_LOCK_READ_REGISTER32(baseAddress)\
+ (_DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_LOCK_READ_REGISTER32),\
+ __raw_readl((baseAddress)+MMU_MMU_LOCK_OFFSET))
+
+#define MMUMMU_LOCK_WRITE_REGISTER32(baseAddress, value)\
+{\
+ const u32 offset = MMU_MMU_LOCK_OFFSET;\
+ register u32 newValue = (value);\
+ _DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_LOCK_WRITE_REGISTER32);\
+ __raw_writel(newValue, (baseAddress)+offset);\
+}
+
+#define MMUMMU_LOCK_BASE_VALUE_READ32(baseAddress)\
+ (_DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_LOCK_BASE_VALUE_READ32),\
+ (((__raw_readl(((baseAddress)+(MMU_MMU_LOCK_OFFSET)))) &\
+ MMU_MMU_LOCK_BASE_VALUE_MASK) >>\
+ MMU_MMU_LOCK_BASE_VALUE_OFFSET))
+
+#define MMUMMU_LOCK_BASE_VALUE_WRITE32(baseAddress, value)\
+{\
+ const u32 offset = MMU_MMU_LOCK_OFFSET;\
+ register u32 data = __raw_readl((baseAddress)+offset);\
+ register u32 newValue = (value);\
+ _DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_LOCKBaseValueWrite32);\
+ data &= ~(MMU_MMU_LOCK_BASE_VALUE_MASK);\
+ newValue <<= MMU_MMU_LOCK_BASE_VALUE_OFFSET;\
+ newValue &= MMU_MMU_LOCK_BASE_VALUE_MASK;\
+ newValue |= data;\
+ __raw_writel(newValue, baseAddress+offset);\
+}
+
+#define MMUMMU_LOCK_CURRENT_VICTIM_READ32(baseAddress)\
+ (_DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_LOCK_CURRENT_VICTIM_READ32),\
+ (((__raw_readl(((baseAddress)+(MMU_MMU_LOCK_OFFSET)))) &\
+ MMU_MMU_LOCK_CURRENT_VICTIM_MASK) >>\
+ MMU_MMU_LOCK_CURRENT_VICTIM_OFFSET))
+
+#define MMUMMU_LOCK_CURRENT_VICTIM_WRITE32(baseAddress, value)\
+{\
+ const u32 offset = MMU_MMU_LOCK_OFFSET;\
+ register u32 data = __raw_readl((baseAddress)+offset);\
+ register u32 newValue = (value);\
+ _DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_LOCK_CURRENT_VICTIM_WRITE32);\
+ data &= ~(MMU_MMU_LOCK_CURRENT_VICTIM_MASK);\
+ newValue <<= MMU_MMU_LOCK_CURRENT_VICTIM_OFFSET;\
+ newValue &= MMU_MMU_LOCK_CURRENT_VICTIM_MASK;\
+ newValue |= data;\
+ __raw_writel(newValue, baseAddress+offset);\
+}
+
+#define MMUMMU_LOCK_CURRENT_VICTIM_SET32(var, value)\
+ (_DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_LOCK_CURRENT_VICTIM_SET32),\
+ (((var) & ~(MMU_MMU_LOCK_CURRENT_VICTIM_MASK)) |\
+ (((value) << MMU_MMU_LOCK_CURRENT_VICTIM_OFFSET) &\
+ MMU_MMU_LOCK_CURRENT_VICTIM_MASK)))
+
+#define MMUMMU_LD_TLB_READ_REGISTER32(baseAddress)\
+ (_DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_LD_TLB_READ_REGISTER32),\
+ __raw_readl((baseAddress)+MMU_MMU_LD_TLB_OFFSET))
+
+#define MMUMMU_LD_TLB_WRITE_REGISTER32(baseAddress, value)\
+{\
+ const u32 offset = MMU_MMU_LD_TLB_OFFSET;\
+ register u32 newValue = (value);\
+ _DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_LD_TLB_WRITE_REGISTER32);\
+ __raw_writel(newValue, (baseAddress)+offset);\
+}
+
+#define MMUMMU_CAM_WRITE_REGISTER32(baseAddress, value)\
+{\
+ const u32 offset = MMU_MMU_CAM_OFFSET;\
+ register u32 newValue = (value);\
+ _DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_CAM_WRITE_REGISTER32);\
+ __raw_writel(newValue, (baseAddress)+offset);\
+}
+
+#define MMUMMU_RAM_WRITE_REGISTER32(baseAddress, value)\
+{\
+ const u32 offset = MMU_MMU_RAM_OFFSET;\
+ register u32 newValue = (value);\
+ _DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_RAM_WRITE_REGISTER32);\
+ __raw_writel(newValue, (baseAddress)+offset);\
+}
+
+#define MMUMMU_FLUSH_ENTRY_WRITE_REGISTER32(baseAddress, value)\
+{\
+ const u32 offset = MMU_MMU_FLUSH_ENTRY_OFFSET;\
+ register u32 newValue = (value);\
+ _DEBUG_LEVEL1_EASI(EASIL1_MMUMMU_FLUSH_ENTRY_WRITE_REGISTER32);\
+ __raw_writel(newValue, (baseAddress)+offset);\
+}
+
+#endif /* USE_LEVEL_1_MACROS */
+
+#endif /* _MMU_REG_ACM_H */
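Each *_WRITE32 macro above that targets a bitfield performs a read-modify-write of
that single field. Open-coded for readability, MMUMMU_CNTLTWL_ENABLE_WRITE32(base, 1)
behaves roughly like the sketch below (the helper name is hypothetical):

	static inline void mmu_cntl_twl_enable_write(void __iomem *base, u32 value)
	{
		u32 reg = __raw_readl(base + MMU_MMU_CNTL_OFFSET);

		reg &= ~MMU_MMU_CNTL_TWL_ENABLE_MASK;		/* clear the TWL enable bit */
		reg |= (value << MMU_MMU_CNTL_TWL_ENABLE_OFFSET) &
		       MMU_MMU_CNTL_TWL_ENABLE_MASK;		/* shift new value into place */
		__raw_writel(reg, base + MMU_MMU_CNTL_OFFSET);	/* write the register back */
	}

hw_mmu.c below relies on these macros directly; hw_mmu_enable(), for example, simply
calls MMUMMU_CNTLMMU_ENABLE_WRITE32(baseAddress, HW_SET).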
diff --git a/drivers/staging/tidspbridge/hw/hw_defs.h b/drivers/staging/tidspbridge/hw/hw_defs.h
new file mode 100644
index 0000000..98f6045
--- /dev/null
+++ b/drivers/staging/tidspbridge/hw/hw_defs.h
@@ -0,0 +1,60 @@
+/*
+ * hw_defs.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Global HW definitions
+ *
+ * Copyright (C) 2007 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef _HW_DEFS_H
+#define _HW_DEFS_H
+
+#include <GlobalTypes.h>
+
+/* Page size */
+#define HW_PAGE_SIZE4KB 0x1000
+#define HW_PAGE_SIZE64KB 0x10000
+#define HW_PAGE_SIZE1MB 0x100000
+#define HW_PAGE_SIZE16MB 0x1000000
+
+/* hw_status: return type for HW API */
+typedef long hw_status;
+
+/* Macro used to set and clear any bit */
+#define HW_CLEAR 0
+#define HW_SET 1
+
+/* hw_endianism_t: Enumerated Type used to specify the endianism
+ * Do NOT change these values. They are used as bit fields. */
+enum hw_endianism_t {
+ HW_LITTLE_ENDIAN,
+ HW_BIG_ENDIAN
+};
+
+/* hw_element_size_t: Enumerated Type used to specify the element size
+ * Do NOT change these values. They are used as bit fields. */
+enum hw_element_size_t {
+ HW_ELEM_SIZE8BIT,
+ HW_ELEM_SIZE16BIT,
+ HW_ELEM_SIZE32BIT,
+ HW_ELEM_SIZE64BIT
+};
+
+/* hw_idle_mode_t: Enumerated Type used to specify Idle modes */
+enum hw_idle_mode_t {
+ HW_FORCE_IDLE,
+ HW_NO_IDLE,
+ HW_SMART_IDLE
+};
+
+#endif /* _HW_DEFS_H */
diff --git a/drivers/staging/tidspbridge/hw/hw_mmu.c b/drivers/staging/tidspbridge/hw/hw_mmu.c
new file mode 100644
index 0000000..965b659
--- /dev/null
+++ b/drivers/staging/tidspbridge/hw/hw_mmu.c
@@ -0,0 +1,587 @@
+/*
+ * hw_mmu.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * API definitions to setup MMU TLB and PTE
+ *
+ * Copyright (C) 2007 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#include <GlobalTypes.h>
+#include <linux/io.h>
+#include "MMURegAcM.h"
+#include <hw_defs.h>
+#include <hw_mmu.h>
+#include <linux/types.h>
+
+#define MMU_BASE_VAL_MASK 0xFC00
+#define MMU_PAGE_MAX 3
+#define MMU_ELEMENTSIZE_MAX 3
+#define MMU_ADDR_MASK 0xFFFFF000
+#define MMU_TTB_MASK 0xFFFFC000
+#define MMU_SECTION_ADDR_MASK 0xFFF00000
+#define MMU_SSECTION_ADDR_MASK 0xFF000000
+#define MMU_PAGE_TABLE_MASK 0xFFFFFC00
+#define MMU_LARGE_PAGE_MASK 0xFFFF0000
+#define MMU_SMALL_PAGE_MASK 0xFFFFF000
+
+#define MMU_LOAD_TLB 0x00000001
+
+/*
+ * hw_mmu_page_size_t: Enumerated Type used to specify the MMU Page Size(SLSS)
+ */
+enum hw_mmu_page_size_t {
+ HW_MMU_SECTION,
+ HW_MMU_LARGE_PAGE,
+ HW_MMU_SMALL_PAGE,
+ HW_MMU_SUPERSECTION
+};
+
+/*
+ * FUNCTION : mmu_flush_entry
+ *
+ * INPUTS:
+ *
+ * Identifier : baseAddress
+ * Type : const u32
+ * Description : Base Address of instance of MMU module
+ *
+ * RETURNS:
+ *
+ * Type : hw_status
+ * Description : RET_OK -- No errors occurred
+ * RET_BAD_NULL_PARAM -- A Pointer
+ * Parameter was set to NULL
+ *
+ * PURPOSE: : Flush the TLB entry pointed to by the
+ * lock counter register,
+ * even if this entry is marked protected
+ *
+ * METHOD: : Check the Input parameter and Flush a
+ * single entry in the TLB.
+ */
+static hw_status mmu_flush_entry(const void __iomem *baseAddress);
+
+/*
+ * FUNCTION : mmu_set_cam_entry
+ *
+ * INPUTS:
+ *
+ * Identifier : baseAddress
+ * Type : const u32
+ * Description : Base Address of instance of MMU module
+ *
+ * Identifier : pageSize
+ * Type : const u32
+ * Description : It indicates the page size
+ *
+ * Identifier : preservedBit
+ * Type : const u32
+ * Description : It indicates the TLB entry is preserved entry
+ * or not
+ *
+ * Identifier : validBit
+ * Type : const u32
+ * Description : It indicates the TLB entry is valid entry or not
+ *
+ *
+ * Identifier : virtual_addr_tag
+ * Type : const u32
+ * Description : virtual Address
+ *
+ * RETURNS:
+ *
+ * Type : hw_status
+ * Description : RET_OK -- No errors occurred
+ * RET_BAD_NULL_PARAM -- A Pointer Parameter
+ * was set to NULL
+ * RET_PARAM_OUT_OF_RANGE -- Input Parameter out
+ * of Range
+ *
+ * PURPOSE: : Set MMU_CAM reg
+ *
+ * METHOD: : Check the Input parameters and set the CAM entry.
+ */
+static hw_status mmu_set_cam_entry(const void __iomem *baseAddress,
+ const u32 pageSize,
+ const u32 preservedBit,
+ const u32 validBit,
+ const u32 virtual_addr_tag);
+
+/*
+ * FUNCTION : mmu_set_ram_entry
+ *
+ * INPUTS:
+ *
+ * Identifier : baseAddress
+ * Type : const u32
+ * Description : Base Address of instance of MMU module
+ *
+ * Identifier : physicalAddr
+ * Type : const u32
+ * Description : Physical Address to which the corresponding
+ * virtual Address should point
+ *
+ * Identifier : endianism
+ * Type : hw_endianism_t
+ * Description : endianism for the given page
+ *
+ * Identifier : element_size
+ * Type : hw_element_size_t
+ * Description : The element size ( 8,16, 32 or 64 bit)
+ *
+ * Identifier : mixed_size
+ * Type : hw_mmu_mixed_size_t
+ * Description : Element Size to follow CPU or TLB
+ *
+ * RETURNS:
+ *
+ * Type : hw_status
+ * Description : RET_OK -- No errors occurred
+ * RET_BAD_NULL_PARAM -- A Pointer Parameter
+ * was set to NULL
+ * RET_PARAM_OUT_OF_RANGE -- Input Parameter
+ * out of Range
+ *
+ * PURPOSE: : Set MMU_RAM reg
+ *
+ * METHOD: : Check the Input parameters and set the RAM entry.
+ */
+static hw_status mmu_set_ram_entry(const void __iomem *baseAddress,
+ const u32 physicalAddr,
+ enum hw_endianism_t endianism,
+ enum hw_element_size_t element_size,
+ enum hw_mmu_mixed_size_t mixed_size);
+
+/* HW FUNCTIONS */
+
+hw_status hw_mmu_enable(const void __iomem *baseAddress)
+{
+ hw_status status = RET_OK;
+
+ MMUMMU_CNTLMMU_ENABLE_WRITE32(baseAddress, HW_SET);
+
+ return status;
+}
+
+hw_status hw_mmu_disable(const void __iomem *baseAddress)
+{
+ hw_status status = RET_OK;
+
+ MMUMMU_CNTLMMU_ENABLE_WRITE32(baseAddress, HW_CLEAR);
+
+ return status;
+}
+
+hw_status hw_mmu_num_locked_set(const void __iomem *baseAddress,
+ u32 numLockedEntries)
+{
+ hw_status status = RET_OK;
+
+ MMUMMU_LOCK_BASE_VALUE_WRITE32(baseAddress, numLockedEntries);
+
+ return status;
+}
+
+hw_status hw_mmu_victim_num_set(const void __iomem *baseAddress,
+ u32 victimEntryNum)
+{
+ hw_status status = RET_OK;
+
+ MMUMMU_LOCK_CURRENT_VICTIM_WRITE32(baseAddress, victimEntryNum);
+
+ return status;
+}
+
+hw_status hw_mmu_event_ack(const void __iomem *baseAddress, u32 irqMask)
+{
+ hw_status status = RET_OK;
+
+ MMUMMU_IRQSTATUS_WRITE_REGISTER32(baseAddress, irqMask);
+
+ return status;
+}
+
+hw_status hw_mmu_event_disable(const void __iomem *baseAddress, u32 irqMask)
+{
+ hw_status status = RET_OK;
+ u32 irq_reg;
+
+ irq_reg = MMUMMU_IRQENABLE_READ_REGISTER32(baseAddress);
+
+ MMUMMU_IRQENABLE_WRITE_REGISTER32(baseAddress, irq_reg & ~irqMask);
+
+ return status;
+}
+
+hw_status hw_mmu_event_enable(const void __iomem *baseAddress, u32 irqMask)
+{
+ hw_status status = RET_OK;
+ u32 irq_reg;
+
+ irq_reg = MMUMMU_IRQENABLE_READ_REGISTER32(baseAddress);
+
+ MMUMMU_IRQENABLE_WRITE_REGISTER32(baseAddress, irq_reg | irqMask);
+
+ return status;
+}
+
+hw_status hw_mmu_event_status(const void __iomem *baseAddress, u32 *irqMask)
+{
+ hw_status status = RET_OK;
+
+ *irqMask = MMUMMU_IRQSTATUS_READ_REGISTER32(baseAddress);
+
+ return status;
+}
+
+hw_status hw_mmu_fault_addr_read(const void __iomem *baseAddress, u32 *addr)
+{
+ hw_status status = RET_OK;
+
+ /*Check the input Parameters */
+ CHECK_INPUT_PARAM(baseAddress, 0, RET_BAD_NULL_PARAM,
+ RES_MMU_BASE + RES_INVALID_INPUT_PARAM);
+
+ /* read values from register */
+ *addr = MMUMMU_FAULT_AD_READ_REGISTER32(baseAddress);
+
+ return status;
+}
+
+hw_status hw_mmu_ttb_set(const void __iomem *baseAddress, u32 TTBPhysAddr)
+{
+ hw_status status = RET_OK;
+ u32 load_ttb;
+
+ /*Check the input Parameters */
+ CHECK_INPUT_PARAM(baseAddress, 0, RET_BAD_NULL_PARAM,
+ RES_MMU_BASE + RES_INVALID_INPUT_PARAM);
+
+ load_ttb = TTBPhysAddr & ~0x7FUL;
+ /* write values to register */
+ MMUMMU_TTB_WRITE_REGISTER32(baseAddress, load_ttb);
+
+ return status;
+}
+
+hw_status hw_mmu_twl_enable(const void __iomem *baseAddress)
+{
+ hw_status status = RET_OK;
+
+ MMUMMU_CNTLTWL_ENABLE_WRITE32(baseAddress, HW_SET);
+
+ return status;
+}
+
+hw_status hw_mmu_twl_disable(const void __iomem *baseAddress)
+{
+ hw_status status = RET_OK;
+
+ MMUMMU_CNTLTWL_ENABLE_WRITE32(baseAddress, HW_CLEAR);
+
+ return status;
+}
+
+hw_status hw_mmu_tlb_flush(const void __iomem *baseAddress, u32 virtualAddr,
+ u32 pageSize)
+{
+ hw_status status = RET_OK;
+ u32 virtual_addr_tag;
+ enum hw_mmu_page_size_t pg_size_bits;
+
+ switch (pageSize) {
+ case HW_PAGE_SIZE4KB:
+ pg_size_bits = HW_MMU_SMALL_PAGE;
+ break;
+
+ case HW_PAGE_SIZE64KB:
+ pg_size_bits = HW_MMU_LARGE_PAGE;
+ break;
+
+ case HW_PAGE_SIZE1MB:
+ pg_size_bits = HW_MMU_SECTION;
+ break;
+
+ case HW_PAGE_SIZE16MB:
+ pg_size_bits = HW_MMU_SUPERSECTION;
+ break;
+
+ default:
+ return RET_FAIL;
+ }
+
+ /* Generate the 20-bit tag from virtual address */
+ virtual_addr_tag = ((virtualAddr & MMU_ADDR_MASK) >> 12);
+
+ mmu_set_cam_entry(baseAddress, pg_size_bits, 0, 0, virtual_addr_tag);
+
+ mmu_flush_entry(baseAddress);
+
+ return status;
+}
+
+hw_status hw_mmu_tlb_add(const void __iomem *baseAddress,
+ u32 physicalAddr,
+ u32 virtualAddr,
+ u32 pageSize,
+ u32 entryNum,
+ struct hw_mmu_map_attrs_t *map_attrs,
+ s8 preservedBit, s8 validBit)
+{
+ hw_status status = RET_OK;
+ u32 lock_reg;
+ u32 virtual_addr_tag;
+ enum hw_mmu_page_size_t mmu_pg_size;
+
+ /*Check the input Parameters */
+ CHECK_INPUT_PARAM(baseAddress, 0, RET_BAD_NULL_PARAM,
+ RES_MMU_BASE + RES_INVALID_INPUT_PARAM);
+ CHECK_INPUT_RANGE_MIN0(pageSize, MMU_PAGE_MAX, RET_PARAM_OUT_OF_RANGE,
+ RES_MMU_BASE + RES_INVALID_INPUT_PARAM);
+ CHECK_INPUT_RANGE_MIN0(map_attrs->element_size, MMU_ELEMENTSIZE_MAX,
+ RET_PARAM_OUT_OF_RANGE, RES_MMU_BASE +
+ RES_INVALID_INPUT_PARAM);
+
+ switch (pageSize) {
+ case HW_PAGE_SIZE4KB:
+ mmu_pg_size = HW_MMU_SMALL_PAGE;
+ break;
+
+ case HW_PAGE_SIZE64KB:
+ mmu_pg_size = HW_MMU_LARGE_PAGE;
+ break;
+
+ case HW_PAGE_SIZE1MB:
+ mmu_pg_size = HW_MMU_SECTION;
+ break;
+
+ case HW_PAGE_SIZE16MB:
+ mmu_pg_size = HW_MMU_SUPERSECTION;
+ break;
+
+ default:
+ return RET_FAIL;
+ }
+
+ lock_reg = MMUMMU_LOCK_READ_REGISTER32(baseAddress);
+
+ /* Generate the 20-bit tag from virtual address */
+ virtual_addr_tag = ((virtualAddr & MMU_ADDR_MASK) >> 12);
+
+ /* Write the fields in the CAM Entry Register */
+ mmu_set_cam_entry(baseAddress, mmu_pg_size, preservedBit, validBit,
+ virtual_addr_tag);
+
+ /* Write the different fields of the RAM Entry Register */
+ /* endianism of the page,Element Size of the page (8, 16, 32, 64 bit) */
+ mmu_set_ram_entry(baseAddress, physicalAddr, map_attrs->endianism,
+ map_attrs->element_size, map_attrs->mixed_size);
+
+ /* Update the MMU Lock Register */
+ /* currentVictim between lockedBaseValue and (MMU_Entries_Number - 1) */
+ MMUMMU_LOCK_CURRENT_VICTIM_WRITE32(baseAddress, entryNum);
+
+ /* Enable loading of an entry in TLB by writing 1
+ into LD_TLB_REG register */
+ MMUMMU_LD_TLB_WRITE_REGISTER32(baseAddress, MMU_LOAD_TLB);
+
+ MMUMMU_LOCK_WRITE_REGISTER32(baseAddress, lock_reg);
+
+ return status;
+}
+
+hw_status hw_mmu_pte_set(const u32 pg_tbl_va,
+ u32 physicalAddr,
+ u32 virtualAddr,
+ u32 pageSize, struct hw_mmu_map_attrs_t *map_attrs)
+{
+ hw_status status = RET_OK;
+ u32 pte_addr, pte_val;
+ s32 num_entries = 1;
+
+ switch (pageSize) {
+ case HW_PAGE_SIZE4KB:
+ pte_addr = hw_mmu_pte_addr_l2(pg_tbl_va,
+ virtualAddr &
+ MMU_SMALL_PAGE_MASK);
+ pte_val =
+ ((physicalAddr & MMU_SMALL_PAGE_MASK) |
+ (map_attrs->endianism << 9) | (map_attrs->
+ element_size << 4) |
+ (map_attrs->mixed_size << 11) | 2);
+ break;
+
+ case HW_PAGE_SIZE64KB:
+ num_entries = 16;
+ pte_addr = hw_mmu_pte_addr_l2(pg_tbl_va,
+ virtualAddr &
+ MMU_LARGE_PAGE_MASK);
+ pte_val =
+ ((physicalAddr & MMU_LARGE_PAGE_MASK) |
+ (map_attrs->endianism << 9) | (map_attrs->
+ element_size << 4) |
+ (map_attrs->mixed_size << 11) | 1);
+ break;
+
+ case HW_PAGE_SIZE1MB:
+ pte_addr = hw_mmu_pte_addr_l1(pg_tbl_va,
+ virtualAddr &
+ MMU_SECTION_ADDR_MASK);
+ pte_val =
+ ((((physicalAddr & MMU_SECTION_ADDR_MASK) |
+ (map_attrs->endianism << 15) | (map_attrs->
+ element_size << 10) |
+ (map_attrs->mixed_size << 17)) & ~0x40000) | 0x2);
+ break;
+
+ case HW_PAGE_SIZE16MB:
+ num_entries = 16;
+ pte_addr = hw_mmu_pte_addr_l1(pg_tbl_va,
+ virtualAddr &
+ MMU_SSECTION_ADDR_MASK);
+ pte_val =
+ (((physicalAddr & MMU_SSECTION_ADDR_MASK) |
+ (map_attrs->endianism << 15) | (map_attrs->
+ element_size << 10) |
+ (map_attrs->mixed_size << 17)
+ ) | 0x40000 | 0x2);
+ break;
+
+ case HW_MMU_COARSE_PAGE_SIZE:
+ pte_addr = hw_mmu_pte_addr_l1(pg_tbl_va,
+ virtualAddr &
+ MMU_SECTION_ADDR_MASK);
+ pte_val = (physicalAddr & MMU_PAGE_TABLE_MASK) | 1;
+ break;
+
+ default:
+ return RET_FAIL;
+ }
+
+ while (--num_entries >= 0)
+ ((u32 *) pte_addr)[num_entries] = pte_val;
+
+ return status;
+}
+
+hw_status hw_mmu_pte_clear(const u32 pg_tbl_va, u32 virtualAddr, u32 page_size)
+{
+ hw_status status = RET_OK;
+ u32 pte_addr;
+ s32 num_entries = 1;
+
+ switch (page_size) {
+ case HW_PAGE_SIZE4KB:
+ pte_addr = hw_mmu_pte_addr_l2(pg_tbl_va,
+ virtualAddr &
+ MMU_SMALL_PAGE_MASK);
+ break;
+
+ case HW_PAGE_SIZE64KB:
+ num_entries = 16;
+ pte_addr = hw_mmu_pte_addr_l2(pg_tbl_va,
+ virtualAddr &
+ MMU_LARGE_PAGE_MASK);
+ break;
+
+ case HW_PAGE_SIZE1MB:
+ case HW_MMU_COARSE_PAGE_SIZE:
+ pte_addr = hw_mmu_pte_addr_l1(pg_tbl_va,
+ virtualAddr &
+ MMU_SECTION_ADDR_MASK);
+ break;
+
+ case HW_PAGE_SIZE16MB:
+ num_entries = 16;
+ pte_addr = hw_mmu_pte_addr_l1(pg_tbl_va,
+ virtualAddr &
+ MMU_SSECTION_ADDR_MASK);
+ break;
+
+ default:
+ return RET_FAIL;
+ }
+
+ while (--num_entries >= 0)
+ ((u32 *) pte_addr)[num_entries] = 0;
+
+ return status;
+}
+
+/* mmu_flush_entry */
+static hw_status mmu_flush_entry(const void __iomem *baseAddress)
+{
+ hw_status status = RET_OK;
+ u32 flush_entry_data = 0x1;
+
+ /*Check the input Parameters */
+ CHECK_INPUT_PARAM(baseAddress, 0, RET_BAD_NULL_PARAM,
+ RES_MMU_BASE + RES_INVALID_INPUT_PARAM);
+
+ /* write values to register */
+ MMUMMU_FLUSH_ENTRY_WRITE_REGISTER32(baseAddress, flush_entry_data);
+
+ return status;
+}
+
+/* mmu_set_cam_entry */
+static hw_status mmu_set_cam_entry(const void __iomem *baseAddress,
+ const u32 pageSize,
+ const u32 preservedBit,
+ const u32 validBit,
+ const u32 virtual_addr_tag)
+{
+ hw_status status = RET_OK;
+ u32 mmu_cam_reg;
+
+ /*Check the input Parameters */
+ CHECK_INPUT_PARAM(baseAddress, 0, RET_BAD_NULL_PARAM,
+ RES_MMU_BASE + RES_INVALID_INPUT_PARAM);
+
+ mmu_cam_reg = (virtual_addr_tag << 12);
+ mmu_cam_reg = (mmu_cam_reg) | (pageSize) | (validBit << 2) |
+ (preservedBit << 3);
+
+ /* write values to register */
+ MMUMMU_CAM_WRITE_REGISTER32(baseAddress, mmu_cam_reg);
+
+ return status;
+}
+
+/* mmu_set_ram_entry */
+static hw_status mmu_set_ram_entry(const void __iomem *baseAddress,
+ const u32 physicalAddr,
+ enum hw_endianism_t endianism,
+ enum hw_element_size_t element_size,
+ enum hw_mmu_mixed_size_t mixed_size)
+{
+ hw_status status = RET_OK;
+ u32 mmu_ram_reg;
+
+ /*Check the input Parameters */
+ CHECK_INPUT_PARAM(baseAddress, 0, RET_BAD_NULL_PARAM,
+ RES_MMU_BASE + RES_INVALID_INPUT_PARAM);
+ CHECK_INPUT_RANGE_MIN0(element_size, MMU_ELEMENTSIZE_MAX,
+ RET_PARAM_OUT_OF_RANGE, RES_MMU_BASE +
+ RES_INVALID_INPUT_PARAM);
+
+ mmu_ram_reg = (physicalAddr & MMU_ADDR_MASK);
+ mmu_ram_reg = (mmu_ram_reg) | ((endianism << 9) | (element_size << 7) |
+ (mixed_size << 6));
+
+ /* write values to register */
+ MMUMMU_RAM_WRITE_REGISTER32(baseAddress, mmu_ram_reg);
+
+ return status;
+
+}
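To see how the calls above compose, here is a rough sketch of loading one preserved, valid 1 MB TLB entry and turning the MMU on. The register base and both DSP addresses are placeholders, error checking is dropped, and the attribute types come from hw_mmu.h (next file):

/* Sketch only: dsp_mmu_base and the two addresses are made up. */
static void program_boot_tlb_sketch(void __iomem *dsp_mmu_base)
{
	struct hw_mmu_map_attrs_t attrs = {
		.endianism	= HW_LITTLE_ENDIAN,
		.element_size	= HW_ELEM_SIZE16BIT,
		.mixed_size	= HW_MMU_CPUES,
	};

	/* One entry: DSP virtual 0x11000000 -> physical 0x80000000 */
	hw_mmu_tlb_add(dsp_mmu_base, 0x80000000, 0x11000000,
		       HW_PAGE_SIZE1MB, 0, &attrs,
		       HW_SET /* preserved */, HW_SET /* valid */);
	hw_mmu_num_locked_set(dsp_mmu_base, 1);
	hw_mmu_enable(dsp_mmu_base);
}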
diff --git a/drivers/staging/tidspbridge/hw/hw_mmu.h b/drivers/staging/tidspbridge/hw/hw_mmu.h
new file mode 100644
index 0000000..9b13468
--- /dev/null
+++ b/drivers/staging/tidspbridge/hw/hw_mmu.h
@@ -0,0 +1,161 @@
+/*
+ * hw_mmu.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * MMU types and API declarations
+ *
+ * Copyright (C) 2007 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef _HW_MMU_H
+#define _HW_MMU_H
+
+#include <linux/types.h>
+
+/* Bitmasks for interrupt sources */
+#define HW_MMU_TRANSLATION_FAULT 0x2
+#define HW_MMU_ALL_INTERRUPTS 0x1F
+
+#define HW_MMU_COARSE_PAGE_SIZE 0x400
+
+/* hw_mmu_mixed_size_t: Enumerated Type used to specify whether to follow
+ CPU/TLB Element size */
+enum hw_mmu_mixed_size_t {
+ HW_MMU_TLBES,
+ HW_MMU_CPUES
+};
+
+/* hw_mmu_map_attrs_t: Struct containing MMU mapping attributes */
+struct hw_mmu_map_attrs_t {
+ enum hw_endianism_t endianism;
+ enum hw_element_size_t element_size;
+ enum hw_mmu_mixed_size_t mixed_size;
+ bool donotlockmpupage;
+};
+
+extern hw_status hw_mmu_enable(const void __iomem *baseAddress);
+
+extern hw_status hw_mmu_disable(const void __iomem *baseAddress);
+
+extern hw_status hw_mmu_num_locked_set(const void __iomem *baseAddress,
+ u32 numLockedEntries);
+
+extern hw_status hw_mmu_victim_num_set(const void __iomem *baseAddress,
+ u32 victimEntryNum);
+
+/* For MMU faults */
+extern hw_status hw_mmu_event_ack(const void __iomem *baseAddress,
+ u32 irqMask);
+
+extern hw_status hw_mmu_event_disable(const void __iomem *baseAddress,
+ u32 irqMask);
+
+extern hw_status hw_mmu_event_enable(const void __iomem *baseAddress,
+ u32 irqMask);
+
+extern hw_status hw_mmu_event_status(const void __iomem *baseAddress,
+ u32 *irqMask);
+
+extern hw_status hw_mmu_fault_addr_read(const void __iomem *baseAddress,
+ u32 *addr);
+
+/* Set the TT base address */
+extern hw_status hw_mmu_ttb_set(const void __iomem *baseAddress,
+ u32 TTBPhysAddr);
+
+extern hw_status hw_mmu_twl_enable(const void __iomem *baseAddress);
+
+extern hw_status hw_mmu_twl_disable(const void __iomem *baseAddress);
+
+extern hw_status hw_mmu_tlb_flush(const void __iomem *baseAddress,
+ u32 virtualAddr, u32 pageSize);
+
+extern hw_status hw_mmu_tlb_add(const void __iomem *baseAddress,
+ u32 physicalAddr,
+ u32 virtualAddr,
+ u32 pageSize,
+ u32 entryNum,
+ struct hw_mmu_map_attrs_t *map_attrs,
+ s8 preservedBit, s8 validBit);
+
+/* For PTEs */
+extern hw_status hw_mmu_pte_set(const u32 pg_tbl_va,
+ u32 physicalAddr,
+ u32 virtualAddr,
+ u32 pageSize,
+ struct hw_mmu_map_attrs_t *map_attrs);
+
+extern hw_status hw_mmu_pte_clear(const u32 pg_tbl_va,
+ u32 virtualAddr, u32 page_size);
+
+static inline u32 hw_mmu_pte_addr_l1(u32 L1_base, u32 va)
+{
+ u32 pte_addr;
+ u32 va31_to20;
+
+ /* L1 index (va >> 20) with the << 2 byte offset folded into one shift */
+ va31_to20 = va >> (20 - 2);
+ va31_to20 &= 0xFFFFFFFCUL;
+ pte_addr = L1_base + va31_to20;
+
+ return pte_addr;
+}
+
+static inline u32 hw_mmu_pte_addr_l2(u32 L2_base, u32 va)
+{
+ u32 pte_addr;
+
+ pte_addr = (L2_base & 0xFFFFFC00) | ((va >> 10) & 0x3FC);
+
+ return pte_addr;
+}
+
+static inline u32 hw_mmu_pte_coarse_l1(u32 pte_val)
+{
+ u32 pte_coarse;
+
+ pte_coarse = pte_val & 0xFFFFFC00;
+
+ return pte_coarse;
+}
+
+static inline u32 hw_mmu_pte_size_l1(u32 pte_val)
+{
+ u32 pte_size = 0;
+
+ if ((pte_val & 0x3) == 0x1) {
+ /* Points to L2 PT */
+ pte_size = HW_MMU_COARSE_PAGE_SIZE;
+ }
+
+ if ((pte_val & 0x3) == 0x2) {
+ if (pte_val & (1 << 18))
+ pte_size = HW_PAGE_SIZE16MB;
+ else
+ pte_size = HW_PAGE_SIZE1MB;
+ }
+
+ return pte_size;
+}
+
+static inline u32 hw_mmu_pte_size_l2(u32 pte_val)
+{
+ u32 pte_size = 0;
+
+ if (pte_val & 0x2)
+ pte_size = HW_PAGE_SIZE4KB;
+ else if (pte_val & 0x1)
+ pte_size = HW_PAGE_SIZE64KB;
+
+ return pte_size;
+}
+
+#endif /* _HW_MMU_H */
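The two hw_mmu_pte_addr_l*() helpers above are pure address arithmetic: the L1 descriptor lives at L1_base + (va >> 20) * 4, and the L2 descriptor at (L2_base & ~0x3FF) + ((va >> 12) & 0xFF) * 4. A standalone check of that arithmetic (helper names are local to this sketch):

#include <assert.h>
#include <stdint.h>

/* Same shifts and masks as hw_mmu_pte_addr_l1()/_l2(), repeated here. */
static uint32_t pte_addr_l1(uint32_t l1_base, uint32_t va)
{
	return l1_base + ((va >> 18) & 0xFFFFFFFCu);
}

static uint32_t pte_addr_l2(uint32_t l2_base, uint32_t va)
{
	return (l2_base & 0xFFFFFC00u) | ((va >> 10) & 0x3FCu);
}

int main(void)
{
	/* VA 0x11234567: L1 index 0x112 -> byte offset 0x448 */
	assert(pte_addr_l1(0x9FFF0000u, 0x11234567u) == 0x9FFF0448u);
	/* L2 index (bits 19..12) = 0x34 -> byte offset 0xD0 */
	assert(pte_addr_l2(0x9FFF8000u, 0x11234567u) == 0x9FFF80D0u);
	return 0;
}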
--
1.7.0.4
From: Omar Ramirez Luna <[email protected]>
Add TI's DSP Bridge driver services code
Signed-off-by: Omar Ramirez Luna <[email protected]>
Signed-off-by: Kanigeri, Hari <[email protected]>
Signed-off-by: Ameya Palande <[email protected]>
Signed-off-by: Guzman Lugo, Fernando <[email protected]>
Signed-off-by: Hebbar, Shivananda <[email protected]>
Signed-off-by: Ramos Falcon, Ernesto <[email protected]>
Signed-off-by: Felipe Contreras <[email protected]>
Signed-off-by: Anna, Suman <[email protected]>
Signed-off-by: Gupta, Ramesh <[email protected]>
Signed-off-by: Gomez Castellanos, Ivan <[email protected]>
Signed-off-by: Andy Shevchenko <[email protected]>
Signed-off-by: Armando Uribe De Leon <[email protected]>
Signed-off-by: Deepak Chitriki <[email protected]>
Signed-off-by: Menon, Nishanth <[email protected]>
Signed-off-by: Phil Carmody <[email protected]>
Signed-off-by: Ohad Ben-Cohen <[email protected]>
---
drivers/staging/tidspbridge/services/cfg.c | 253 +++++++++++++++++++++++
drivers/staging/tidspbridge/services/ntfy.c | 31 +++
drivers/staging/tidspbridge/services/services.c | 69 ++++++
drivers/staging/tidspbridge/services/sync.c | 104 +++++++++
4 files changed, 457 insertions(+), 0 deletions(-)
create mode 100644 drivers/staging/tidspbridge/services/cfg.c
create mode 100644 drivers/staging/tidspbridge/services/ntfy.c
create mode 100644 drivers/staging/tidspbridge/services/services.c
create mode 100644 drivers/staging/tidspbridge/services/sync.c
diff --git a/drivers/staging/tidspbridge/services/cfg.c b/drivers/staging/tidspbridge/services/cfg.c
new file mode 100644
index 0000000..8ae64f4
--- /dev/null
+++ b/drivers/staging/tidspbridge/services/cfg.c
@@ -0,0 +1,253 @@
+/*
+ * cfg.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Implementation of platform specific config services.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+/* ----------------------------------- DSP/BIOS Bridge */
+#include <dspbridge/std.h>
+#include <dspbridge/dbdefs.h>
+
+/* ----------------------------------- Trace & Debug */
+#include <dspbridge/dbc.h>
+
+/* ----------------------------------- OS Adaptation Layer */
+
+/* ----------------------------------- This */
+#include <dspbridge/cfg.h>
+#include <dspbridge/drv.h>
+
+struct drv_ext {
+ struct list_head link;
+ char sz_string[MAXREGPATHLENGTH];
+};
+
+/*
+ * ======== cfg_exit ========
+ * Purpose:
+ * Discontinue usage of the CFG module.
+ */
+void cfg_exit(void)
+{
+ /* Do nothing */
+}
+
+/*
+ * ======== cfg_get_auto_start ========
+ * Purpose:
+ * Retrieve the autostart mask, if any, for this board.
+ */
+int cfg_get_auto_start(struct cfg_devnode *dev_node_obj,
+ OUT u32 *pdwAutoStart)
+{
+ int status = 0;
+ u32 dw_buf_size;
+ struct drv_data *drv_datap = dev_get_drvdata(bridge);
+
+ dw_buf_size = sizeof(*pdwAutoStart);
+ if (!dev_node_obj)
+ status = -EFAULT;
+ if (!pdwAutoStart || !drv_datap)
+ status = -EFAULT;
+ if (DSP_SUCCEEDED(status))
+ *pdwAutoStart = (drv_datap->base_img) ? 1 : 0;
+
+ DBC_ENSURE((status == 0 &&
+ (*pdwAutoStart == 0 || *pdwAutoStart == 1))
+ || status != 0);
+ return status;
+}
+
+/*
+ * ======== cfg_get_dev_object ========
+ * Purpose:
+ * Retrieve the Device Object handle for a given devnode.
+ */
+int cfg_get_dev_object(struct cfg_devnode *dev_node_obj,
+ OUT u32 *pdwValue)
+{
+ int status = 0;
+ u32 dw_buf_size;
+ struct drv_data *drv_datap = dev_get_drvdata(bridge);
+
+ if (!drv_datap)
+ status = -EPERM;
+
+ if (!dev_node_obj)
+ status = -EFAULT;
+
+ if (!pdwValue)
+ status = -EFAULT;
+
+ dw_buf_size = sizeof(pdwValue);
+ if (DSP_SUCCEEDED(status)) {
+
+ /* check the device string and then store dev object */
+ if (!strcmp((char *)((struct drv_ext *)dev_node_obj)->sz_string,
+ "TIOMAP1510"))
+ *pdwValue = (u32)drv_datap->dev_object;
+ }
+ if (DSP_FAILED(status))
+ pr_err("%s: Failed, status 0x%x\n", __func__, status);
+ return status;
+}
+
+/*
+ * ======== cfg_get_exec_file ========
+ * Purpose:
+ * Retrieve the default executable, if any, for this board.
+ */
+int cfg_get_exec_file(struct cfg_devnode *dev_node_obj, u32 ul_buf_size,
+ OUT char *pstrExecFile)
+{
+ int status = 0;
+ struct drv_data *drv_datap = dev_get_drvdata(bridge);
+
+ if (!dev_node_obj)
+ status = -EFAULT;
+
+ else if (!pstrExecFile || !drv_datap)
+ status = -EFAULT;
+
+ else if (drv_datap->base_img &&
+ strlen(drv_datap->base_img) > ul_buf_size)
+ status = -EINVAL;
+
+ if (DSP_SUCCEEDED(status) && drv_datap->base_img)
+ strcpy(pstrExecFile, drv_datap->base_img);
+
+ if (DSP_FAILED(status))
+ pr_err("%s: Failed, status 0x%x\n", __func__, status);
+ DBC_ENSURE(((status == 0) &&
+ (strlen(pstrExecFile) <= ul_buf_size))
+ || (status != 0));
+ return status;
+}
+
+/*
+ * ======== cfg_get_object ========
+ * Purpose:
+ * Retrieve the Object handle from the Registry
+ */
+int cfg_get_object(OUT u32 *pdwValue, u8 dw_type)
+{
+ int status = -EINVAL;
+ struct drv_data *drv_datap = dev_get_drvdata(bridge);
+
+ DBC_REQUIRE(pdwValue != NULL);
+
+ if (!drv_datap)
+ return -EPERM;
+
+ switch (dw_type) {
+ case (REG_DRV_OBJECT):
+ if (drv_datap->drv_object) {
+ *pdwValue = (u32)drv_datap->drv_object;
+ status = 0;
+ } else {
+ status = -ENODATA;
+ }
+ break;
+ case (REG_MGR_OBJECT):
+ if (drv_datap->mgr_object) {
+ *pdwValue = (u32)drv_datap->mgr_object;
+ status = 0;
+ } else {
+ status = -ENODATA;
+ }
+ break;
+
+ default:
+ break;
+ }
+ if (DSP_FAILED(status)) {
+ *pdwValue = 0;
+ pr_err("%s: Failed, status 0x%x\n", __func__, status);
+ }
+ DBC_ENSURE((DSP_SUCCEEDED(status) && *pdwValue != 0) ||
+ (DSP_FAILED(status) && *pdwValue == 0));
+ return status;
+}
+
+/*
+ * ======== cfg_init ========
+ * Purpose:
+ * Initialize the CFG module's private state.
+ */
+bool cfg_init(void)
+{
+ return true;
+}
+
+/*
+ * ======== cfg_set_dev_object ========
+ * Purpose:
+ * Store the Device Object handle and dev_node pointer for a given devnode.
+ */
+int cfg_set_dev_object(struct cfg_devnode *dev_node_obj, u32 dwValue)
+{
+ int status = 0;
+ struct drv_data *drv_datap = dev_get_drvdata(bridge);
+
+ if (!drv_datap) {
+ pr_err("%s: Failed, status 0x%x\n", __func__, status);
+ return -EPERM;
+ }
+
+ if (!dev_node_obj)
+ status = -EFAULT;
+
+ if (DSP_SUCCEEDED(status)) {
+ /* Store the Bridge device object in the Registry */
+
+ if (!(strcmp((char *)dev_node_obj, "TIOMAP1510")))
+ drv_datap->dev_object = (void *) dwValue;
+ }
+ if (DSP_FAILED(status))
+ pr_err("%s: Failed, status 0x%x\n", __func__, status);
+
+ return status;
+}
+
+/*
+ * ======== cfg_set_object ========
+ * Purpose:
+ * Store the Driver Object handle
+ */
+int cfg_set_object(u32 dwValue, u8 dw_type)
+{
+ int status = -EINVAL;
+ struct drv_data *drv_datap = dev_get_drvdata(bridge);
+
+ if (!drv_datap)
+ return -EPERM;
+
+ switch (dw_type) {
+ case (REG_DRV_OBJECT):
+ drv_datap->drv_object = (void *)dwValue;
+ status = 0;
+ break;
+ case (REG_MGR_OBJECT):
+ drv_datap->mgr_object = (void *)dwValue;
+ status = 0;
+ break;
+ default:
+ break;
+ }
+ if (DSP_FAILED(status))
+ pr_err("%s: Failed, status 0x%x\n", __func__, status);
+ return status;
+}
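The "registry" here is simply the drv_data fields hanging off the bridge device, so storing and fetching an object handle is symmetric. A rough usage sketch (the handle itself is created elsewhere by the DRV module; error handling trimmed):

static void *publish_and_fetch_drv_object_sketch(void *drv_obj)
{
	u32 stored = 0;

	cfg_set_object((u32)drv_obj, REG_DRV_OBJECT);	/* publish the handle */
	if (!cfg_get_object(&stored, REG_DRV_OBJECT))	/* fetch it back */
		return (void *)stored;
	return NULL;
}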
diff --git a/drivers/staging/tidspbridge/services/ntfy.c b/drivers/staging/tidspbridge/services/ntfy.c
new file mode 100644
index 0000000..a2ea698
--- /dev/null
+++ b/drivers/staging/tidspbridge/services/ntfy.c
@@ -0,0 +1,31 @@
+/*
+ * ntfy.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Manage lists of notification events.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+/* ----------------------------------- This */
+#include <dspbridge/ntfy.h>
+
+int dsp_notifier_event(struct notifier_block *this, unsigned long event,
+ void *data)
+{
+ struct ntfy_event *ne = container_of(this, struct ntfy_event,
+ noti_block);
+ if (ne->event & event)
+ sync_set_event(&ne->sync_obj);
+ return NOTIFY_OK;
+}
+
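dsp_notifier_event() relies on the caller embedding its notifier_block inside a struct ntfy_event (declared in dspbridge/ntfy.h, which is part of the header patch, not this one) so container_of() can recover the event mask and the sync object to signal. A rough sketch of that wiring; the field names follow the usage above, while sync_init_event() and the DSP_SYSERROR mask are assumptions taken from the bridge headers:

static struct ntfy_event *make_ntfy_event_sketch(void)
{
	struct ntfy_event *ne = kzalloc(sizeof(*ne), GFP_KERNEL);

	if (!ne)
		return NULL;
	sync_init_event(&ne->sync_obj);		/* assumed sync.h helper */
	ne->event = DSP_SYSERROR;		/* event mask of interest */
	ne->noti_block.notifier_call = dsp_notifier_event;
	/*
	 * Register ne->noti_block on the bridge notifier chain via the
	 * ntfy.h API; a later notifier call with a matching event then
	 * lands in dsp_notifier_event() and signals ne->sync_obj.
	 */
	return ne;
}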
diff --git a/drivers/staging/tidspbridge/services/services.c b/drivers/staging/tidspbridge/services/services.c
new file mode 100644
index 0000000..23be95c
--- /dev/null
+++ b/drivers/staging/tidspbridge/services/services.c
@@ -0,0 +1,69 @@
+/*
+ * services.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Provide SERVICES loading.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#include <dspbridge/host_os.h>
+
+/* ----------------------------------- DSP/BIOS Bridge */
+#include <dspbridge/std.h>
+#include <dspbridge/dbdefs.h>
+
+/* ----------------------------------- Trace & Debug */
+#include <dspbridge/dbc.h>
+
+/* ----------------------------------- OS Adaptation Layer */
+#include <dspbridge/cfg.h>
+#include <dspbridge/ntfy.h>
+#include <dspbridge/sync.h>
+#include <dspbridge/clk.h>
+
+/* ----------------------------------- This */
+#include <dspbridge/services.h>
+
+/*
+ * ======== services_exit ========
+ * Purpose:
+ * Discontinue usage of module; free resources when reference count
+ * reaches 0.
+ */
+void services_exit(void)
+{
+ cfg_exit();
+}
+
+/*
+ * ======== services_init ========
+ * Purpose:
+ * Initializes SERVICES modules.
+ */
+bool services_init(void)
+{
+ bool ret = true;
+ bool fcfg;
+
+ /* Perform required initialization of SERVICES modules. */
+ fcfg = cfg_init();
+
+ ret = fcfg;
+
+ if (!ret) {
+ if (fcfg)
+ cfg_exit();
+ }
+
+ return ret;
+}
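services_init() and services_exit() are thin today (CFG is the only module left behind them), so the only contract a caller needs is the pairing sketched here:

/* Sketch of the expected pairing in the driver's setup/teardown path. */
static int bridge_setup_sketch(void)
{
	if (!services_init())
		return -ENODEV;
	/* ... rest of bridge initialization ... */
	return 0;
}

static void bridge_teardown_sketch(void)
{
	services_exit();
}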
diff --git a/drivers/staging/tidspbridge/services/sync.c b/drivers/staging/tidspbridge/services/sync.c
new file mode 100644
index 0000000..9010b37
--- /dev/null
+++ b/drivers/staging/tidspbridge/services/sync.c
@@ -0,0 +1,104 @@
+/*
+ * sync.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Synchronization services.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+/* ----------------------------------- Host OS */
+#include <dspbridge/host_os.h>
+
+/* ----------------------------------- This */
+#include <dspbridge/sync.h>
+
+DEFINE_SPINLOCK(sync_lock);
+
+/**
+ * sync_set_event() - set or signal the specified event
+ * @event: Event to be set.
+ *
+ * Set the @event; if there is a thread waiting for the event
+ * it will be woken up. This function only wakes one thread.
+ */
+
+void sync_set_event(struct sync_object *event)
+{
+ spin_lock_bh(&sync_lock);
+ complete(&event->comp);
+ if (event->multi_comp)
+ complete(event->multi_comp);
+ spin_unlock_bh(&sync_lock);
+}
+
+/**
+ * sync_wait_on_multiple_events() - wait until any of the given events is set
+ * @events: Array of events to wait on.
+ * @count: Number of elements in the array.
+ * @timeout: Timeout, in milliseconds, for waiting on the events.
+ * @index: Index of the event that was set.
+ *
+ * This function will wait until any of the array elements is set or until
+ * the timeout expires. On success it returns 0 and @index stores the index
+ * of the array element that was set; on timeout it returns -ETIME, and if
+ * interrupted by a signal it returns -EPERM.
+ */
+
+int sync_wait_on_multiple_events(struct sync_object **events,
+ unsigned count, unsigned timeout,
+ unsigned *index)
+{
+ unsigned i;
+ int status = -EPERM;
+ struct completion m_comp;
+
+ init_completion(&m_comp);
+
+ if (SYNC_INFINITE == timeout)
+ timeout = MAX_SCHEDULE_TIMEOUT;
+
+ spin_lock_bh(&sync_lock);
+ for (i = 0; i < count; i++) {
+ if (completion_done(&events[i]->comp)) {
+ INIT_COMPLETION(events[i]->comp);
+ *index = i;
+ spin_unlock_bh(&sync_lock);
+ status = 0;
+ goto func_end;
+ }
+ }
+
+ for (i = 0; i < count; i++)
+ events[i]->multi_comp = &m_comp;
+
+ spin_unlock_bh(&sync_lock);
+
+ if (!wait_for_completion_interruptible_timeout(&m_comp,
+ msecs_to_jiffies(timeout)))
+ status = -ETIME;
+
+ spin_lock_bh(&sync_lock);
+ for (i = 0; i < count; i++) {
+ if (completion_done(&events[i]->comp)) {
+ INIT_COMPLETION(events[i]->comp);
+ *index = i;
+ status = 0;
+ }
+ events[i]->multi_comp = NULL;
+ }
+ spin_unlock_bh(&sync_lock);
+func_end:
+ return status;
+}
+
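A short sketch of driving sync_wait_on_multiple_events() from a caller; sync_init_event() is assumed from dspbridge/sync.h (part of the header patch) and the 5000 ms timeout is arbitrary:

static int wait_for_bridge_event_sketch(void)
{
	struct sync_object chnl_done, shutdown_req;
	struct sync_object *evts[2] = { &chnl_done, &shutdown_req };
	unsigned which = 0;
	int err;

	sync_init_event(&chnl_done);		/* assumed init helper */
	sync_init_event(&shutdown_req);

	/* elsewhere, an ISR or notifier calls sync_set_event(&chnl_done) */

	err = sync_wait_on_multiple_events(evts, 2, 5000, &which);
	if (!err)
		pr_info("event %u fired\n", which);
	else if (err == -ETIME)
		pr_info("timed out\n");
	return err;
}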
--
1.7.0.4
From: Omar Ramirez Luna <[email protected]>
Add TI's DSP Bridge DOFF binaries dynamic loader driver sources
Signed-off-by: Omar Ramirez Luna <[email protected]>
Signed-off-by: Kanigeri, Hari <[email protected]>
Signed-off-by: Ameya Palande <[email protected]>
Signed-off-by: Guzman Lugo, Fernando <[email protected]>
Signed-off-by: Hebbar, Shivananda <[email protected]>
Signed-off-by: Ramos Falcon, Ernesto <[email protected]>
Signed-off-by: Felipe Contreras <[email protected]>
Signed-off-by: Anna, Suman <[email protected]>
Signed-off-by: Gupta, Ramesh <[email protected]>
Signed-off-by: Gomez Castellanos, Ivan <[email protected]>
Signed-off-by: Andy Shevchenko <[email protected]>
Signed-off-by: Armando Uribe De Leon <[email protected]>
Signed-off-by: Deepak Chitriki <[email protected]>
Signed-off-by: Menon, Nishanth <[email protected]>
Signed-off-by: Phil Carmody <[email protected]>
Signed-off-by: Ohad Ben-Cohen <[email protected]>
---
drivers/staging/tidspbridge/dynload/cload.c | 1960 ++++++++++++++++++++
.../staging/tidspbridge/dynload/dload_internal.h | 351 ++++
drivers/staging/tidspbridge/dynload/doff.h | 344 ++++
drivers/staging/tidspbridge/dynload/getsection.c | 416 +++++
drivers/staging/tidspbridge/dynload/header.h | 55 +
drivers/staging/tidspbridge/dynload/module_list.h | 159 ++
drivers/staging/tidspbridge/dynload/params.h | 226 +++
drivers/staging/tidspbridge/dynload/reloc.c | 484 +++++
drivers/staging/tidspbridge/dynload/reloc_table.h | 102 +
.../tidspbridge/dynload/reloc_table_c6000.c | 257 +++
drivers/staging/tidspbridge/dynload/tramp.c | 1143 ++++++++++++
.../tidspbridge/dynload/tramp_table_c6000.c | 164 ++
12 files changed, 5661 insertions(+), 0 deletions(-)
create mode 100644 drivers/staging/tidspbridge/dynload/cload.c
create mode 100644 drivers/staging/tidspbridge/dynload/dload_internal.h
create mode 100644 drivers/staging/tidspbridge/dynload/doff.h
create mode 100644 drivers/staging/tidspbridge/dynload/getsection.c
create mode 100644 drivers/staging/tidspbridge/dynload/header.h
create mode 100644 drivers/staging/tidspbridge/dynload/module_list.h
create mode 100644 drivers/staging/tidspbridge/dynload/params.h
create mode 100644 drivers/staging/tidspbridge/dynload/reloc.c
create mode 100644 drivers/staging/tidspbridge/dynload/reloc_table.h
create mode 100644 drivers/staging/tidspbridge/dynload/reloc_table_c6000.c
create mode 100644 drivers/staging/tidspbridge/dynload/tramp.c
create mode 100644 drivers/staging/tidspbridge/dynload/tramp_table_c6000.c
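Before the loader sources themselves, a rough sketch of how a client drives the main entry point (dynamic_load_module(), defined in cload.c below). The four interface objects are caller-owned; their callbacks (read_buffer, dload_allocate, connect, execute, ...) wrap the host file stream, host heap, target memory and target I/O, and only the top-level call is shown:

static void load_doff_sketch(struct dynamic_loader_stream *strm,
			     struct dynamic_loader_sym *syms,
			     struct dynamic_loader_allocate *alloc,
			     struct dynamic_loader_initialize *init)
{
	void *mod_handle = NULL;
	int errs;

	errs = dynamic_load_module(strm, syms, alloc, init,
				   DLOAD_INITBSS, &mod_handle);
	if (errs) {
		pr_err("dspbridge: DOFF load failed, %d error(s)\n", errs);
		return;
	}
	/* keep mod_handle; it is later fed to dynamic_unload_module() */
}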
diff --git a/drivers/staging/tidspbridge/dynload/cload.c b/drivers/staging/tidspbridge/dynload/cload.c
new file mode 100644
index 0000000..d4f71b5
--- /dev/null
+++ b/drivers/staging/tidspbridge/dynload/cload.c
@@ -0,0 +1,1960 @@
+/*
+ * cload.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#include "header.h"
+
+#include "module_list.h"
+#define LINKER_MODULES_HEADER ("_" MODULES_HEADER)
+
+/*
+ * we use the fact that DOFF section records are shaped just like
+ * ldr_section_info to reduce our section storage usage. This macro marks
+ * the places where that assumption is made
+ */
+#define DOFFSEC_IS_LDRSEC(pdoffsec) ((struct ldr_section_info *)(pdoffsec))
+
+/*
+ * forward references
+ */
+static void dload_symbols(struct dload_state *dlthis);
+static void dload_data(struct dload_state *dlthis);
+static void allocate_sections(struct dload_state *dlthis);
+static void string_table_free(struct dload_state *dlthis);
+static void symbol_table_free(struct dload_state *dlthis);
+static void section_table_free(struct dload_state *dlthis);
+static void init_module_handle(struct dload_state *dlthis);
+#if BITS_PER_AU > BITS_PER_BYTE
+static char *unpack_name(struct dload_state *dlthis, u32 soffset);
+#endif
+
+static const char cinitname[] = { ".cinit" };
+static const char loader_dllview_root[] = { "?DLModules?" };
+
+/*
+ * Error strings
+ */
+static const char readstrm[] = { "Error reading %s from input stream" };
+static const char err_alloc[] = { "Syms->dload_allocate( %d ) failed" };
+static const char tgtalloc[] = {
+ "Target memory allocate failed, section %s size " FMT_UI32 };
+static const char initfail[] = { "%s to target address " FMT_UI32 " failed" };
+static const char dlvwrite[] = { "Write to DLLview list failed" };
+static const char iconnect[] = { "Connect call to init interface failed" };
+static const char err_checksum[] = { "Checksum failed on %s" };
+
+/*************************************************************************
+ * Procedure dload_error
+ *
+ * Parameters:
+ * errtxt description of the error, printf style
+ * ... additional information
+ *
+ * Effect:
+ * Reports or records the error as appropriate.
+ *********************************************************************** */
+void dload_error(struct dload_state *dlthis, const char *errtxt, ...)
+{
+ va_list args;
+
+ va_start(args, errtxt);
+ dlthis->mysym->error_report(dlthis->mysym, errtxt, args);
+ va_end(args);
+ dlthis->dload_errcount += 1;
+
+} /* dload_error */
+
+#define DL_ERROR(zza, zzb) dload_error(dlthis, zza, zzb)
+
+/*************************************************************************
+ * Procedure dload_syms_error
+ *
+ * Parameters:
+ * errtxt description of the error, printf style
+ * ... additional information
+ *
+ * Effect:
+ * Reports or records the error as appropriate.
+ *********************************************************************** */
+void dload_syms_error(struct dynamic_loader_sym *syms, const char *errtxt, ...)
+{
+ va_list args;
+
+ va_start(args, errtxt);
+ syms->error_report(syms, errtxt, args);
+ va_end(args);
+}
+
+/*************************************************************************
+ * Procedure dynamic_load_module
+ *
+ * Parameters:
+ * module The input stream that supplies the module image
+ * syms Host-side symbol table and malloc/free functions
+ * alloc Target-side memory allocation
+ * init Target-side memory initialization
+ * options Option flags DLOAD_*
+ * mhandle A module handle for use with Dynamic_Unload
+ *
+ * Effect:
+ * The module image is read using *module. Target storage for the new
+ * image is obtained from *alloc. Symbols defined and referenced by the
+ * module are
+ * managed using *syms. The image is then relocated and references
+ * resolved as necessary, and the resulting executable bits are placed
+ * into target memory using *init.
+ *
+ * Returns:
+ * On a successful load, a module handle is placed in *mhandle,
+ * and zero is returned. On error, the number of errors detected is
+ * returned. Individual errors are reported during the load process
+ * using syms->error_report().
+ ********************************************************************** */
+int dynamic_load_module(struct dynamic_loader_stream *module,
+ struct dynamic_loader_sym *syms,
+ struct dynamic_loader_allocate *alloc,
+ struct dynamic_loader_initialize *init,
+ unsigned options, void **mhandle)
+{
+ register unsigned *dp, sz;
+ struct dload_state dl_state; /* internal state for this call */
+
+ /* blast our internal state */
+ dp = (unsigned *)&dl_state;
+ for (sz = sizeof(dl_state) / sizeof(unsigned); sz > 0; sz -= 1)
+ *dp++ = 0;
+
+ /* Enable _only_ BSS initialization if enabled by user */
+ if ((options & DLOAD_INITBSS) == DLOAD_INITBSS)
+ dl_state.myoptions = DLOAD_INITBSS;
+
+ /* Check that mandatory arguments are present */
+ if (!module || !syms) {
+ dload_error(&dl_state, "Required parameter is NULL");
+ } else {
+ dl_state.strm = module;
+ dl_state.mysym = syms;
+ dload_headers(&dl_state);
+ if (!dl_state.dload_errcount)
+ dload_strings(&dl_state, false);
+ if (!dl_state.dload_errcount)
+ dload_sections(&dl_state);
+
+ if (init && !dl_state.dload_errcount) {
+ if (init->connect(init)) {
+ dl_state.myio = init;
+ dl_state.myalloc = alloc;
+ /* do now, before reducing symbols */
+ allocate_sections(&dl_state);
+ } else
+ dload_error(&dl_state, iconnect);
+ }
+
+ if (!dl_state.dload_errcount) {
+ /* fix up entry point address */
+ unsigned sref = dl_state.dfile_hdr.df_entry_secn - 1;
+ if (sref < dl_state.allocated_secn_count)
+ dl_state.dfile_hdr.df_entrypt +=
+ dl_state.ldr_sections[sref].run_addr;
+
+ dload_symbols(&dl_state);
+ }
+
+ if (init && !dl_state.dload_errcount)
+ dload_data(&dl_state);
+
+ init_module_handle(&dl_state);
+
+ /* dl_state.myio is init or 0 at this point. */
+ if (dl_state.myio) {
+ if ((!dl_state.dload_errcount) &&
+ (dl_state.dfile_hdr.df_entry_secn != DN_UNDEF) &&
+ (!init->execute(init,
+ dl_state.dfile_hdr.df_entrypt)))
+ dload_error(&dl_state, "Init->Execute Failed");
+ init->release(init);
+ }
+
+ symbol_table_free(&dl_state);
+ section_table_free(&dl_state);
+ string_table_free(&dl_state);
+ dload_tramp_cleanup(&dl_state);
+
+ if (dl_state.dload_errcount) {
+ dynamic_unload_module(dl_state.myhandle, syms, alloc,
+ init);
+ dl_state.myhandle = NULL;
+ }
+ }
+
+ if (mhandle)
+ *mhandle = dl_state.myhandle; /* give back the handle */
+
+ return dl_state.dload_errcount;
+} /* dynamic_load_module */
+
+/*************************************************************************
+ * Procedure dynamic_open_module
+ *
+ * Parameters:
+ * module The input stream that supplies the module image
+ * syms Host-side symbol table and malloc/free functions
+ * alloc Target-side memory allocation
+ * init Target-side memory initialization
+ * options Option flags DLOAD_*
+ * mhandle A module handle for use with Dynamic_Unload
+ *
+ * Effect:
+ * The module image is read using *module. Target storage for the new
+ * image is obtained from *alloc. Symbols defined and referenced by the
+ * module are
+ * managed using *syms. The image is then relocated and references
+ * resolved as necessary, and the resulting executable bits are placed
+ * into target memory using *init.
+ *
+ * Returns:
+ * On a successful load, a module handle is placed in *mhandle,
+ * and zero is returned. On error, the number of errors detected is
+ * returned. Individual errors are reported during the load process
+ * using syms->error_report().
+ ********************************************************************** */
+int
+dynamic_open_module(struct dynamic_loader_stream *module,
+ struct dynamic_loader_sym *syms,
+ struct dynamic_loader_allocate *alloc,
+ struct dynamic_loader_initialize *init,
+ unsigned options, void **mhandle)
+{
+ register unsigned *dp, sz;
+ struct dload_state dl_state; /* internal state for this call */
+
+ /* blast our internal state */
+ dp = (unsigned *)&dl_state;
+ for (sz = sizeof(dl_state) / sizeof(unsigned); sz > 0; sz -= 1)
+ *dp++ = 0;
+
+ /* Enable _only_ BSS initialization if enabled by user */
+ if ((options & DLOAD_INITBSS) == DLOAD_INITBSS)
+ dl_state.myoptions = DLOAD_INITBSS;
+
+ /* Check that mandatory arguments are present */
+ if (!module || !syms) {
+ dload_error(&dl_state, "Required parameter is NULL");
+ } else {
+ dl_state.strm = module;
+ dl_state.mysym = syms;
+ dload_headers(&dl_state);
+ if (!dl_state.dload_errcount)
+ dload_strings(&dl_state, false);
+ if (!dl_state.dload_errcount)
+ dload_sections(&dl_state);
+
+ if (init && !dl_state.dload_errcount) {
+ if (init->connect(init)) {
+ dl_state.myio = init;
+ dl_state.myalloc = alloc;
+ /* do now, before reducing symbols */
+ allocate_sections(&dl_state);
+ } else
+ dload_error(&dl_state, iconnect);
+ }
+
+ if (!dl_state.dload_errcount) {
+ /* fix up entry point address */
+ unsigned sref = dl_state.dfile_hdr.df_entry_secn - 1;
+ if (sref < dl_state.allocated_secn_count)
+ dl_state.dfile_hdr.df_entrypt +=
+ dl_state.ldr_sections[sref].run_addr;
+
+ dload_symbols(&dl_state);
+ }
+
+ init_module_handle(&dl_state);
+
+ /* dl_state.myio is either 0 or init at this point. */
+ if (dl_state.myio) {
+ if ((!dl_state.dload_errcount) &&
+ (dl_state.dfile_hdr.df_entry_secn != DN_UNDEF) &&
+ (!init->execute(init,
+ dl_state.dfile_hdr.df_entrypt)))
+ dload_error(&dl_state, "Init->Execute Failed");
+ init->release(init);
+ }
+
+ symbol_table_free(&dl_state);
+ section_table_free(&dl_state);
+ string_table_free(&dl_state);
+
+ if (dl_state.dload_errcount) {
+ dynamic_unload_module(dl_state.myhandle, syms, alloc,
+ init);
+ dl_state.myhandle = NULL;
+ }
+ }
+
+ if (mhandle)
+ *mhandle = dl_state.myhandle; /* give back the handle */
+
+ return dl_state.dload_errcount;
+} /* dynamic_open_module */
+
+/*************************************************************************
+ * Procedure dload_headers
+ *
+ * Parameters:
+ * none
+ *
+ * Effect:
+ * Loads the DOFF header and verify record. Deals with any byte-order
+ * issues and checks them for validity.
+ *********************************************************************** */
+#define COMBINED_HEADER_SIZE (sizeof(struct doff_filehdr_t)+ \
+ sizeof(struct doff_verify_rec_t))
+
+void dload_headers(struct dload_state *dlthis)
+{
+ u32 map;
+
+ /* Read the header and the verify record as one. If we don't get it
+ all, we're done */
+ if (dlthis->strm->read_buffer(dlthis->strm, &dlthis->dfile_hdr,
+ COMBINED_HEADER_SIZE) !=
+ COMBINED_HEADER_SIZE) {
+ DL_ERROR(readstrm, "File Headers");
+ return;
+ }
+ /*
+ * Verify that we have the byte order of the file correct.
+ * If not, must fix it before we can continue
+ */
+ map = REORDER_MAP(dlthis->dfile_hdr.df_byte_reshuffle);
+ if (map != REORDER_MAP(BYTE_RESHUFFLE_VALUE)) {
+ /* input is either byte-shuffled or bad */
+ if ((map & 0xFCFCFCFC) == 0) { /* no obviously bogus bits */
+ dload_reorder(&dlthis->dfile_hdr, COMBINED_HEADER_SIZE,
+ map);
+ }
+ if (dlthis->dfile_hdr.df_byte_reshuffle !=
+ BYTE_RESHUFFLE_VALUE) {
+ /* didn't fix the problem, the byte swap map is bad */
+ dload_error(dlthis,
+ "Bad byte swap map " FMT_UI32 " in header",
+ dlthis->dfile_hdr.df_byte_reshuffle);
+ return;
+ }
+ dlthis->reorder_map = map; /* keep map for future use */
+ }
+
+ /*
+ * Verify checksum of header and verify record
+ */
+ if (~dload_checksum(&dlthis->dfile_hdr,
+ sizeof(struct doff_filehdr_t)) ||
+ ~dload_checksum(&dlthis->verify,
+ sizeof(struct doff_verify_rec_t))) {
+ DL_ERROR(err_checksum, "header or verify record");
+ return;
+ }
+#if HOST_ENDIANNESS
+ dlthis->dfile_hdr.df_byte_reshuffle = map; /* put back for later */
+#endif
+
+ /* Check for valid target ID */
+ if ((dlthis->dfile_hdr.df_target_id != TARGET_ID) &&
+ (dlthis->dfile_hdr.df_target_id != TMS470_ID)) {
+ dload_error(dlthis, "Bad target ID 0x%x and TARGET_ID 0x%x",
+ dlthis->dfile_hdr.df_target_id, TARGET_ID);
+ return;
+ }
+ /* Check for valid file format */
+ if ((dlthis->dfile_hdr.df_doff_version != DOFF0)) {
+ dload_error(dlthis, "Bad DOFF version 0x%x",
+ dlthis->dfile_hdr.df_doff_version);
+ return;
+ }
+
+ /*
+ * Apply reasonableness checks to count fields
+ */
+ if (dlthis->dfile_hdr.df_strtab_size > MAX_REASONABLE_STRINGTAB) {
+ dload_error(dlthis, "Excessive string table size " FMT_UI32,
+ dlthis->dfile_hdr.df_strtab_size);
+ return;
+ }
+ if (dlthis->dfile_hdr.df_no_scns > MAX_REASONABLE_SECTIONS) {
+ dload_error(dlthis, "Excessive section count 0x%x",
+ dlthis->dfile_hdr.df_no_scns);
+ return;
+ }
+#ifndef TARGET_ENDIANNESS
+ /*
+ * Check that endianness does not disagree with explicit specification
+ */
+ if ((dlthis->dfile_hdr.df_flags >> ALIGN_COFF_ENDIANNESS) &
+ dlthis->myoptions & ENDIANNESS_MASK) {
+ dload_error(dlthis,
+ "Input endianness disagrees with specified option");
+ return;
+ }
+ dlthis->big_e_target = dlthis->dfile_hdr.df_flags & DF_BIG;
+#endif
+
+} /* dload_headers */
+
+/* COFF Section Processing
+ *
+ * COFF sections are read in and retained intact. Each record is embedded
+ * in a new structure that records the updated load and
+ * run addresses of the section */
+
+static const char secn_errid[] = { "section" };
+
+/*************************************************************************
+ * Procedure dload_sections
+ *
+ * Parameters:
+ * none
+ *
+ * Effect:
+ * Loads the section records into an internal table.
+ *********************************************************************** */
+void dload_sections(struct dload_state *dlthis)
+{
+ s16 siz;
+ struct doff_scnhdr_t *shp;
+ unsigned nsecs = dlthis->dfile_hdr.df_no_scns;
+
+ /* allocate space for the DOFF section records */
+ siz = nsecs * sizeof(struct doff_scnhdr_t);
+ shp =
+ (struct doff_scnhdr_t *)dlthis->mysym->dload_allocate(dlthis->mysym,
+ siz);
+ if (!shp) { /* not enough storage */
+ DL_ERROR(err_alloc, siz);
+ return;
+ }
+ dlthis->sect_hdrs = shp;
+
+ /* read in the section records */
+ if (dlthis->strm->read_buffer(dlthis->strm, shp, siz) != siz) {
+ DL_ERROR(readstrm, secn_errid);
+ return;
+ }
+
+ /* if we need to fix up byte order, do it now */
+ if (dlthis->reorder_map)
+ dload_reorder(shp, siz, dlthis->reorder_map);
+
+ /* check for validity */
+ if (~dload_checksum(dlthis->sect_hdrs, siz) !=
+ dlthis->verify.dv_scn_rec_checksum) {
+ DL_ERROR(err_checksum, secn_errid);
+ return;
+ }
+
+} /* dload_sections */
+
+/*****************************************************************************
+ * Procedure allocate_sections
+ *
+ * Parameters:
+ * alloc target memory allocator class
+ *
+ * Effect:
+ * Assigns new (target) addresses for sections
+ **************************************************************************** */
+static void allocate_sections(struct dload_state *dlthis)
+{
+ u16 curr_sect, nsecs, siz;
+ struct doff_scnhdr_t *shp;
+ struct ldr_section_info *asecs;
+ struct my_handle *hndl;
+ nsecs = dlthis->dfile_hdr.df_no_scns;
+ if (!nsecs)
+ return;
+ if ((dlthis->myalloc == NULL) &&
+ (dlthis->dfile_hdr.df_target_scns > 0)) {
+ DL_ERROR("Arg 3 (alloc) required but NULL", 0);
+ return;
+ }
+ /*
+ * allocate space for the module handle, which we will keep for unload
+ * purposes include an additional section store for an auto-generated
+ * trampoline section in case we need it.
+ */
+ siz = (dlthis->dfile_hdr.df_target_scns + 1) *
+ sizeof(struct ldr_section_info) + MY_HANDLE_SIZE;
+
+ hndl =
+ (struct my_handle *)dlthis->mysym->dload_allocate(dlthis->mysym,
+ siz);
+ if (!hndl) { /* not enough storage */
+ DL_ERROR(err_alloc, siz);
+ return;
+ }
+ /* initialize the handle header */
+ hndl->dm.hnext = hndl->dm.hprev = hndl; /* circular list */
+ hndl->dm.hroot = NULL;
+ hndl->dm.dbthis = 0;
+ dlthis->myhandle = hndl; /* save away for return */
+ /* pointer to the section list of allocated sections */
+ dlthis->ldr_sections = asecs = hndl->secns;
+ /* Insert names into all sections, make copies of
+ the sections we allocate */
+ shp = dlthis->sect_hdrs;
+ for (curr_sect = 0; curr_sect < nsecs; curr_sect++) {
+ u32 soffset = shp->ds_offset;
+#if BITS_PER_AU <= BITS_PER_BYTE
+ /* attempt to insert the name of this section */
+ if (soffset < dlthis->dfile_hdr.df_strtab_size)
+ DOFFSEC_IS_LDRSEC(shp)->name = dlthis->str_head +
+ soffset;
+ else {
+ dload_error(dlthis, "Bad name offset in section %d",
+ curr_sect);
+ DOFFSEC_IS_LDRSEC(shp)->name = NULL;
+ }
+#endif
+ /* allocate target storage for sections that require it */
+ if (DS_NEEDS_ALLOCATION(shp)) {
+ *asecs = *DOFFSEC_IS_LDRSEC(shp);
+ asecs->context = 0; /* zero the context field */
+#if BITS_PER_AU > BITS_PER_BYTE
+ asecs->name = unpack_name(dlthis, soffset);
+ dlthis->debug_string_size = soffset + dlthis->temp_len;
+#else
+ dlthis->debug_string_size = soffset;
+#endif
+ if (dlthis->myalloc != NULL) {
+ if (!dlthis->myalloc->
+ dload_allocate(dlthis->myalloc, asecs,
+ DS_ALIGNMENT(asecs->type))) {
+ dload_error(dlthis, tgtalloc,
+ asecs->name, asecs->size);
+ return;
+ }
+ }
+ /* keep address deltas in original section table */
+ shp->ds_vaddr = asecs->load_addr - shp->ds_vaddr;
+ shp->ds_paddr = asecs->run_addr - shp->ds_paddr;
+ dlthis->allocated_secn_count += 1;
+ } /* allocate target storage */
+ shp += 1;
+ asecs += 1;
+ }
+#if BITS_PER_AU <= BITS_PER_BYTE
+ dlthis->debug_string_size +=
+ strlen(dlthis->str_head + dlthis->debug_string_size) + 1;
+#endif
+} /* allocate sections */
+
+/*************************************************************************
+ * Procedure section_table_free
+ *
+ * Parameters:
+ * none
+ *
+ * Effect:
+ * Frees any state used by the symbol table.
+ *
+ * WARNING:
+ * This routine is not allowed to declare errors!
+ *********************************************************************** */
+static void section_table_free(struct dload_state *dlthis)
+{
+ struct doff_scnhdr_t *shp;
+
+ shp = dlthis->sect_hdrs;
+ if (shp)
+ dlthis->mysym->dload_deallocate(dlthis->mysym, shp);
+
+} /* section_table_free */
+
+/*************************************************************************
+ * Procedure dload_strings
+ *
+ * Parameters:
+ * sec_names_only If true only read in the "section names"
+ * portion of the string table
+ *
+ * Effect:
+ * Loads the DOFF string table into memory. DOFF keeps all strings in a
+ * big unsorted array. We just read that array into memory in bulk.
+ *********************************************************************** */
+static const char stringtbl[] = { "string table" };
+
+void dload_strings(struct dload_state *dlthis, bool sec_names_only)
+{
+ u32 ssiz;
+ char *strbuf;
+
+ if (sec_names_only) {
+ ssiz = BYTE_TO_HOST(DOFF_ALIGN
+ (dlthis->dfile_hdr.df_scn_name_size));
+ } else {
+ ssiz = BYTE_TO_HOST(DOFF_ALIGN
+ (dlthis->dfile_hdr.df_strtab_size));
+ }
+ if (ssiz == 0)
+ return;
+
+ /* get some memory for the string table */
+#if BITS_PER_AU > BITS_PER_BYTE
+ strbuf = (char *)dlthis->mysym->dload_allocate(dlthis->mysym, ssiz +
+ dlthis->dfile_hdr.
+ df_max_str_len);
+#else
+ strbuf = (char *)dlthis->mysym->dload_allocate(dlthis->mysym, ssiz);
+#endif
+ if (strbuf == NULL) {
+ DL_ERROR(err_alloc, ssiz);
+ return;
+ }
+ dlthis->str_head = strbuf;
+#if BITS_PER_AU > BITS_PER_BYTE
+ dlthis->str_temp = strbuf + ssiz;
+#endif
+ /* read in the strings and verify them */
+ if ((unsigned)(dlthis->strm->read_buffer(dlthis->strm, strbuf,
+ ssiz)) != ssiz) {
+ DL_ERROR(readstrm, stringtbl);
+ }
+ /* if we need to fix up byte order, do it now */
+#ifndef _BIG_ENDIAN
+ if (dlthis->reorder_map)
+ dload_reorder(strbuf, ssiz, dlthis->reorder_map);
+
+ if ((!sec_names_only) && (~dload_checksum(strbuf, ssiz) !=
+ dlthis->verify.dv_str_tab_checksum)) {
+ DL_ERROR(err_checksum, stringtbl);
+ }
+#else
+ if (dlthis->dfile_hdr.df_byte_reshuffle !=
+ HOST_BYTE_ORDER(REORDER_MAP(BYTE_RESHUFFLE_VALUE))) {
+ /* put strings in big-endian order, not in PC order */
+ dload_reorder(strbuf, ssiz,
+ HOST_BYTE_ORDER(dlthis->
+ dfile_hdr.df_byte_reshuffle));
+ }
+ if ((!sec_names_only) && (~dload_reverse_checksum(strbuf, ssiz) !=
+ dlthis->verify.dv_str_tab_checksum)) {
+ DL_ERROR(err_checksum, stringtbl);
+ }
+#endif
+} /* dload_strings */
+
+/*************************************************************************
+ * Procedure string_table_free
+ *
+ * Parameters:
+ * none
+ *
+ * Effect:
+ * Frees any state used by the string table.
+ *
+ * WARNING:
+ * This routine is not allowed to declare errors!
+ ************************************************************************ */
+static void string_table_free(struct dload_state *dlthis)
+{
+ if (dlthis->str_head)
+ dlthis->mysym->dload_deallocate(dlthis->mysym,
+ dlthis->str_head);
+
+} /* string_table_free */
+
+/*
+ * Symbol Table Maintenance Functions
+ *
+ * COFF symbols are read by dload_symbols(), which is called after
+ * sections have been allocated. Symbols which might be used in
+ * relocation (i.e., not debug info) are retained in an internal temporary
+ * compressed table (type local_symbol). A particular symbol is recovered
+ * by index by calling dload_find_symbol(). dload_find_symbol
+ * reconstructs a more explicit representation (type SLOTVEC) which is
+ * used by reloc.c
+ */
+/* real size of debug header */
+#define DBG_HDR_SIZE (sizeof(struct dll_module) - sizeof(struct dll_sect))
+
+static const char sym_errid[] = { "symbol" };
+
+/**************************************************************************
+ * Procedure dload_symbols
+ *
+ * Parameters:
+ * none
+ *
+ * Effect:
+ * Reads in symbols and retains ones that might be needed for relocation
+ * purposes.
+ *********************************************************************** */
+/* size of symbol buffer no bigger than target data buffer, to limit stack
+ * usage */
+#define MY_SYM_BUF_SIZ (BYTE_TO_HOST(IMAGE_PACKET_SIZE)/\
+ sizeof(struct doff_syment_t))
+
+static void dload_symbols(struct dload_state *dlthis)
+{
+ u32 sym_count, siz, dsiz, symbols_left;
+ u32 checks;
+ struct local_symbol *sp;
+ struct dynload_symbol *symp;
+ struct dynload_symbol *newsym;
+
+ sym_count = dlthis->dfile_hdr.df_no_syms;
+ if (sym_count == 0)
+ return;
+
+ /*
+ * We keep a local symbol table for all of the symbols in the input.
+ * This table contains only section & value info, as we do not have
+ * to do any name processing for locals. We reuse this storage
+ * as a temporary for .dllview record construction.
+ * Allocate storage for the whole table. Add 1 to the section count
+ * in case a trampoline section is auto-generated as well as the
+ * size of the trampoline section name so DLLView doens't get lost.
+ */
+
+ siz = sym_count * sizeof(struct local_symbol);
+ dsiz = DBG_HDR_SIZE +
+ (sizeof(struct dll_sect) * dlthis->allocated_secn_count) +
+ BYTE_TO_HOST_ROUND(dlthis->debug_string_size + 1);
+ if (dsiz > siz)
+ siz = dsiz; /* larger of symbols and .dllview temp */
+ sp = (struct local_symbol *)dlthis->mysym->dload_allocate(dlthis->mysym,
+ siz);
+ if (!sp) {
+ DL_ERROR(err_alloc, siz);
+ return;
+ }
+ dlthis->local_symtab = sp;
+ /* Read the symbols in the input, store them in the table, and post any
+ * globals to the global symbol table. In the process, externals
+ become defined from the global symbol table */
+ checks = dlthis->verify.dv_sym_tab_checksum;
+ symbols_left = sym_count;
+ do { /* read all symbols */
+ char *sname;
+ u32 val;
+ s32 delta;
+ struct doff_syment_t *input_sym;
+ unsigned syms_in_buf;
+ struct doff_syment_t my_sym_buf[MY_SYM_BUF_SIZ];
+ input_sym = my_sym_buf;
+ syms_in_buf = symbols_left > MY_SYM_BUF_SIZ ?
+ MY_SYM_BUF_SIZ : symbols_left;
+ siz = syms_in_buf * sizeof(struct doff_syment_t);
+ if (dlthis->strm->read_buffer(dlthis->strm, input_sym, siz) !=
+ siz) {
+ DL_ERROR(readstrm, sym_errid);
+ return;
+ }
+ if (dlthis->reorder_map)
+ dload_reorder(input_sym, siz, dlthis->reorder_map);
+
+ checks += dload_checksum(input_sym, siz);
+ do { /* process symbols in buffer */
+ symbols_left -= 1;
+ /* attempt to derive the name of this symbol */
+ sname = NULL;
+ if (input_sym->dn_offset > 0) {
+#if BITS_PER_AU <= BITS_PER_BYTE
+ if ((u32) input_sym->dn_offset <
+ dlthis->dfile_hdr.df_strtab_size)
+ sname = dlthis->str_head +
+ BYTE_TO_HOST(input_sym->dn_offset);
+ else
+ dload_error(dlthis,
+ "Bad name offset in symbol "
+ " %d", symbols_left);
+#else
+ sname = unpack_name(dlthis,
+ input_sym->dn_offset);
+#endif
+ }
+ val = input_sym->dn_value;
+ delta = 0;
+ sp->sclass = input_sym->dn_sclass;
+ sp->secnn = input_sym->dn_scnum;
+ /* if this is an undefined symbol,
+ * define it (or fail) now */
+ if (sp->secnn == DN_UNDEF) {
+ /* pointless for static undefined */
+ if (input_sym->dn_sclass != DN_EXT)
+ goto loop_cont;
+
+ /* try to define symbol from previously
+ * loaded images */
+ symp = dlthis->mysym->find_matching_symbol
+ (dlthis->mysym, sname);
+ if (!symp) {
+ DL_ERROR
+ ("Undefined external symbol %s",
+ sname);
+ goto loop_cont;
+ }
+ val = delta = symp->value;
+#ifdef ENABLE_TRAMP_DEBUG
+ dload_syms_error(dlthis->mysym,
+ "===> ext sym [%s] at %x",
+ sname, val);
+#endif
+
+ goto loop_cont;
+ }
+ /* symbol defined by this module */
+ if (sp->secnn > 0) {
+ /* symbol references a section */
+ if ((unsigned)sp->secnn <=
+ dlthis->allocated_secn_count) {
+ /* section was allocated */
+ struct doff_scnhdr_t *srefp =
+ &dlthis->sect_hdrs[sp->secnn - 1];
+
+ if (input_sym->dn_sclass ==
+ DN_STATLAB ||
+ input_sym->dn_sclass == DN_EXTLAB) {
+ /* load */
+ delta = srefp->ds_vaddr;
+ } else {
+ /* run */
+ delta = srefp->ds_paddr;
+ }
+ val += delta;
+ }
+ goto loop_itr;
+ }
+ /* This symbol is an absolute symbol */
+ if (sp->secnn == DN_ABS && ((sp->sclass == DN_EXT) ||
+ (sp->sclass ==
+ DN_EXTLAB))) {
+ symp =
+ dlthis->mysym->find_matching_symbol(dlthis->
+ mysym,
+ sname);
+ if (!symp)
+ goto loop_itr;
+ /* This absolute symbol is already defined. */
+ if (symp->value == input_sym->dn_value) {
+ /* If symbol values are equal, continue
+ * but don't add to the global symbol
+ * table */
+ sp->value = val;
+ sp->delta = delta;
+ sp += 1;
+ input_sym += 1;
+ continue;
+ } else {
+ /* If symbol values are not equal,
+ * return with redefinition error */
+ DL_ERROR("Absolute symbol %s is "
+ "defined multiple times with "
+ "different values", sname);
+ return;
+ }
+ }
+loop_itr:
+ /* if this is a global symbol, post it to the
+ * global table */
+ if (input_sym->dn_sclass == DN_EXT ||
+ input_sym->dn_sclass == DN_EXTLAB) {
+ /* Keep this global symbol for subsequent
+ * modules. Don't complain on error, to allow
+ * symbol API to suppress global symbols */
+ if (!sname)
+ goto loop_cont;
+
+ newsym = dlthis->mysym->add_to_symbol_table
+ (dlthis->mysym, sname,
+ (unsigned)dlthis->myhandle);
+ if (newsym)
+ newsym->value = val;
+
+ } /* global */
+loop_cont:
+ sp->value = val;
+ sp->delta = delta;
+ sp += 1;
+ input_sym += 1;
+ } while ((syms_in_buf -= 1) > 0); /* process sym in buf */
+ } while (symbols_left > 0); /* read all symbols */
+ if (~checks)
+ dload_error(dlthis, "Checksum of symbols failed");
+
+} /* dload_symbols */
+
+/*****************************************************************************
+ * Procedure symbol_table_free
+ *
+ * Parameters:
+ * none
+ *
+ * Effect:
+ * Frees any state used by the symbol table.
+ *
+ * WARNING:
+ * This routine is not allowed to declare errors!
+ **************************************************************************** */
+static void symbol_table_free(struct dload_state *dlthis)
+{
+ if (dlthis->local_symtab) {
+ if (dlthis->dload_errcount) { /* blow off our symbols */
+ dlthis->mysym->purge_symbol_table(dlthis->mysym,
+ (unsigned)
+ dlthis->myhandle);
+ }
+ dlthis->mysym->dload_deallocate(dlthis->mysym,
+ dlthis->local_symtab);
+ }
+} /* symbol_table_free */
+
+/* .cinit Processing
+ *
+ * The dynamic loader does .cinit interpretation. cload_cinit()
+ * acts as a special write-to-target function, in that it takes relocated
+ * data from the normal data flow, and interprets it as .cinit actions.
+ * Because the normal data flow does not necessarily process the whole
+ * .cinit section in one buffer, cload_cinit() must be prepared to
+ * interpret the data piecemeal. A state machine is used for this
+ * purpose.
+ */
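+/*
+ * Sketch of the record layout interpreted below: each record is a count,
+ * followed by a target address and <count> words of initialization data
+ * copied to that address. A zero count ends the table and a negative
+ * count marks the BSS table; either one stops further interpretation.
+ */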
+
+/* The following are only for use by reloc.c and things it calls */
+static const struct ldr_section_info cinit_info_init = { cinitname, 0, 0,
+ (ldr_addr)-1, 0, DLOAD_BSS, 0
+};
+
+/*************************************************************************
+ * Procedure cload_cinit
+ *
+ * Parameters:
+ * ipacket Pointer to data packet to be loaded
+ *
+ * Effect:
+ * Interprets the data in the buffer as .cinit data, and performs the
+ * appropriate initializations.
+ *********************************************************************** */
+static void cload_cinit(struct dload_state *dlthis,
+ struct image_packet_t *ipacket)
+{
+#if TDATA_TO_HOST(CINIT_COUNT)*BITS_PER_AU > 16
+ s32 init_count, left;
+#else
+ s16 init_count, left;
+#endif
+ unsigned char *pktp = ipacket->img_data;
+ unsigned char *pktend = pktp + BYTE_TO_HOST_ROUND(ipacket->packet_size);
+ int temp;
+ ldr_addr atmp;
+ struct ldr_section_info cinit_info;
+
+ /* PROCESS ALL THE INITIALIZATION RECORDS IN THE BUFFER. */
+ while (true) {
+ left = pktend - pktp;
+ switch (dlthis->cinit_state) {
+ case CI_COUNT: /* count field */
+ if (left < TDATA_TO_HOST(CINIT_COUNT))
+ goto loopexit;
+ temp = dload_unpack(dlthis, (tgt_au_t *) pktp,
+ CINIT_COUNT * TDATA_AU_BITS, 0,
+ ROP_SGN);
+ pktp += TDATA_TO_HOST(CINIT_COUNT);
+ /* negative signifies BSS table, zero means done */
+ if (temp <= 0) {
+ dlthis->cinit_state = CI_DONE;
+ break;
+ }
+ dlthis->cinit_count = temp;
+ dlthis->cinit_state = CI_ADDRESS;
+ break;
+#if CINIT_ALIGN < CINIT_ADDRESS
+ case CI_PARTADDRESS:
+ pktp -= TDATA_TO_HOST(CINIT_ALIGN);
+ /* back up pointer into space courtesy of caller */
+ *(uint16_t *) pktp = dlthis->cinit_addr;
+ /* stuff in saved bits !! FALL THRU !! */
+#endif
+ case CI_ADDRESS: /* Address field for a copy packet */
+ if (left < TDATA_TO_HOST(CINIT_ADDRESS)) {
+#if CINIT_ALIGN < CINIT_ADDRESS
+ if (left == TDATA_TO_HOST(CINIT_ALIGN)) {
+ /* address broken into halves */
+ dlthis->cinit_addr = *(uint16_t *) pktp;
+ /* remember 1st half */
+ dlthis->cinit_state = CI_PARTADDRESS;
+ left = 0;
+ }
+#endif
+ goto loopexit;
+ }
+ atmp = dload_unpack(dlthis, (tgt_au_t *) pktp,
+ CINIT_ADDRESS * TDATA_AU_BITS, 0,
+ ROP_UNS);
+ pktp += TDATA_TO_HOST(CINIT_ADDRESS);
+#if CINIT_PAGE_BITS > 0
+ dlthis->cinit_page = atmp &
+ ((1 << CINIT_PAGE_BITS) - 1);
+ atmp >>= CINIT_PAGE_BITS;
+#else
+ dlthis->cinit_page = CINIT_DEFAULT_PAGE;
+#endif
+ dlthis->cinit_addr = atmp;
+ dlthis->cinit_state = CI_COPY;
+ break;
+ case CI_COPY: /* copy bits to the target */
+ init_count = HOST_TO_TDATA(left);
+ if (init_count > dlthis->cinit_count)
+ init_count = dlthis->cinit_count;
+ if (init_count == 0)
+ goto loopexit; /* get more bits */
+ cinit_info = cinit_info_init;
+ cinit_info.page = dlthis->cinit_page;
+ if (!dlthis->myio->writemem(dlthis->myio, pktp,
+ TDATA_TO_TADDR
+ (dlthis->cinit_addr),
+ &cinit_info,
+ TDATA_TO_HOST(init_count))) {
+ dload_error(dlthis, initfail, "write",
+ dlthis->cinit_addr);
+ }
+ dlthis->cinit_count -= init_count;
+ if (dlthis->cinit_count <= 0) {
+ dlthis->cinit_state = CI_COUNT;
+ init_count = (init_count + CINIT_ALIGN - 1) &
+ -CINIT_ALIGN;
+ /* align to next init */
+ }
+ pktp += TDATA_TO_HOST(init_count);
+ dlthis->cinit_addr += init_count;
+ break;
+ case CI_DONE: /* no more .cinit to do */
+ return;
+ } /* switch (cinit_state) */
+ } /* while */
+
+loopexit:
+ if (left > 0) {
+ dload_error(dlthis, "%d bytes left over in cinit packet", left);
+ dlthis->cinit_state = CI_DONE; /* left over bytes are bad */
+ }
+} /* cload_cinit */
+
+/* Functions to interface to reloc.c
+ *
+ * reloc.c is the relocation module borrowed from the linker, with
+ * minimal (we hope) changes for our purposes. cload_sect_data() invokes
+ * this module on a section to relocate and load the image data for that
+ * section. The actual read and write actions are supplied by the global
+ * routines below.
+ */
+
+/************************************************************************
+ * Procedure relocate_packet
+ *
+ * Parameters:
+ * ipacket Pointer to an image packet to relocate
+ *
+ * Effect:
+ * Performs the required relocations on the packet. Returns a checksum
+ * of the relocation operations.
+ *********************************************************************** */
+#define MY_RELOC_BUF_SIZ 8
+/* careful! exists at the same time as the image buffer */
+static int relocate_packet(struct dload_state *dlthis,
+ struct image_packet_t *ipacket,
+ u32 *checks, bool *tramps_generated)
+{
+ u32 rnum;
+ *tramps_generated = false;
+
+ rnum = ipacket->num_relocs;
+ do { /* all relocs */
+ unsigned rinbuf;
+ int siz;
+ struct reloc_record_t *rp, rrec[MY_RELOC_BUF_SIZ];
+ rp = rrec;
+ rinbuf = rnum > MY_RELOC_BUF_SIZ ? MY_RELOC_BUF_SIZ : rnum;
+ siz = rinbuf * sizeof(struct reloc_record_t);
+ if (dlthis->strm->read_buffer(dlthis->strm, rp, siz) != siz) {
+ DL_ERROR(readstrm, "relocation");
+ return 0;
+ }
+ /* reorder the bytes if need be */
+ if (dlthis->reorder_map)
+ dload_reorder(rp, siz, dlthis->reorder_map);
+
+ *checks += dload_checksum(rp, siz);
+ do {
+ /* perform the relocation operation */
+ dload_relocate(dlthis, (tgt_au_t *) ipacket->img_data,
+ rp, tramps_generated, false);
+ rp += 1;
+ rnum -= 1;
+ } while ((rinbuf -= 1) > 0);
+ } while (rnum > 0); /* all relocs */
+ /* If trampoline(s) were generated, we need to do an update of the
+ * trampoline copy of the packet since a 2nd phase relo will be done
+ * later. */
+ if (*tramps_generated == true) {
+ dload_tramp_pkt_udpate(dlthis,
+ (dlthis->image_secn -
+ dlthis->ldr_sections),
+ dlthis->image_offset, ipacket);
+ }
+
+ return 1;
+} /* relocate_packet */
+
+#define IPH_SIZE (sizeof(struct image_packet_t) - sizeof(u32))
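+/*
+ * IPH_SIZE is the on-file part of an image packet header: num_relocs,
+ * packet_size and img_chksum, but not the host-side img_data pointer
+ * (the arithmetic assumes a 32-bit pointer; see doff.h).
+ */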
+
+/* VERY dangerous */
+static const char imagepak[] = { "image packet" };
+
+/*************************************************************************
+ * Procedure dload_data
+ *
+ * Parameters:
+ * none
+ *
+ * Effect:
+ * Read image data from input file, relocate it, and download it to the
+ * target.
+ *********************************************************************** */
+static void dload_data(struct dload_state *dlthis)
+{
+ u16 curr_sect;
+ struct doff_scnhdr_t *sptr = dlthis->sect_hdrs;
+ struct ldr_section_info *lptr = dlthis->ldr_sections;
+#ifdef OPT_ZERO_COPY_LOADER
+ bool zero_copy = false;
+#endif
+ u8 *dest;
+
+ struct {
+ struct image_packet_t ipacket;
+ u8 bufr[BYTE_TO_HOST(IMAGE_PACKET_SIZE)];
+ } ibuf;
+
+ /* Indicates whether CINIT processing has occurred */
+ bool cinit_processed = false;
+
+ /* Loop through the sections and load them one at a time.
+ */
+ for (curr_sect = 0; curr_sect < dlthis->dfile_hdr.df_no_scns;
+ curr_sect += 1) {
+ if (DS_NEEDS_DOWNLOAD(sptr)) {
+ s32 nip;
+ ldr_addr image_offset = 0;
+ /* set relocation info for this section */
+ if (curr_sect < dlthis->allocated_secn_count)
+ dlthis->delta_runaddr = sptr->ds_paddr;
+ else {
+ lptr = DOFFSEC_IS_LDRSEC(sptr);
+ dlthis->delta_runaddr = 0;
+ }
+ dlthis->image_secn = lptr;
+#if BITS_PER_AU > BITS_PER_BYTE
+ lptr->name = unpack_name(dlthis, sptr->ds_offset);
+#endif
+ nip = sptr->ds_nipacks;
+ while ((nip -= 1) >= 0) { /* process packets */
+
+ s32 ipsize;
+ u32 checks;
+ bool tramp_generated = false;
+
+ /* get the fixed header bits */
+ if (dlthis->strm->read_buffer(dlthis->strm,
+ &ibuf.ipacket,
+ IPH_SIZE) !=
+ IPH_SIZE) {
+ DL_ERROR(readstrm, imagepak);
+ return;
+ }
+ /* reorder the header if need be */
+ if (dlthis->reorder_map) {
+ dload_reorder(&ibuf.ipacket, IPH_SIZE,
+ dlthis->reorder_map);
+ }
+ /* now read the rest of the packet */
+ ipsize =
+ BYTE_TO_HOST(DOFF_ALIGN
+ (ibuf.ipacket.packet_size));
+ if (ipsize > BYTE_TO_HOST(IMAGE_PACKET_SIZE)) {
+ DL_ERROR("Bad image packet size %d",
+ ipsize);
+ return;
+ }
+ dest = ibuf.bufr;
+#ifdef OPT_ZERO_COPY_LOADER
+ zero_copy = false;
+ if (DLOAD_SECT_TYPE(sptr) != DLOAD_CINIT) {
+ dlthis->myio->writemem(dlthis->myio,
+ &dest,
+ lptr->load_addr +
+ image_offset,
+ lptr, 0);
+ zero_copy = (dest != ibuf.bufr);
+ }
+#endif
+ /* End of determination */
+
+ if (dlthis->strm->read_buffer(dlthis->strm,
+ ibuf.bufr,
+ ipsize) !=
+ ipsize) {
+ DL_ERROR(readstrm, imagepak);
+ return;
+ }
+ ibuf.ipacket.img_data = dest;
+
+ /* reorder the bytes if need be */
+#if !defined(_BIG_ENDIAN) || (TARGET_AU_BITS > 16)
+ if (dlthis->reorder_map) {
+ dload_reorder(dest, ipsize,
+ dlthis->reorder_map);
+ }
+ checks = dload_checksum(dest, ipsize);
+#else
+ if (dlthis->dfile_hdr.df_byte_reshuffle !=
+ TARGET_ORDER(REORDER_MAP
+ (BYTE_RESHUFFLE_VALUE))) {
+ /* put image bytes in big-endian order,
+ * not PC order */
+ dload_reorder(dest, ipsize,
+ TARGET_ORDER
+ (dlthis->dfile_hdr.
+ df_byte_reshuffle));
+ }
+#if TARGET_AU_BITS > 8
+ checks = dload_reverse_checksum16(dest, ipsize);
+#else
+ checks = dload_reverse_checksum(dest, ipsize);
+#endif
+#endif
+
+ checks += dload_checksum(&ibuf.ipacket,
+ IPH_SIZE);
+ /* relocate the image bits as needed */
+ if (ibuf.ipacket.num_relocs) {
+ dlthis->image_offset = image_offset;
+ if (!relocate_packet(dlthis,
+ &ibuf.ipacket,
+ &checks,
+ &tramp_generated))
+ return; /* serious error */
+ }
+ if (~checks)
+ DL_ERROR(err_checksum, imagepak);
+ /* Only write the result to the target if no
+ * trampoline was generated. Otherwise it
+ * will be done during trampoline finalize. */
+
+ if (tramp_generated == false) {
+
+ /* stuff the result into target
+ * memory */
+ if (DLOAD_SECT_TYPE(sptr) ==
+ DLOAD_CINIT) {
+ cload_cinit(dlthis,
+ &ibuf.ipacket);
+ cinit_processed = true;
+ } else {
+#ifdef OPT_ZERO_COPY_LOADER
+ if (!zero_copy) {
+#endif
+ /* FIXME */
+ if (!dlthis->myio->
+ writemem(dlthis->
+ myio,
+ ibuf.bufr,
+ lptr->
+ load_addr +
+ image_offset,
+ lptr,
+ BYTE_TO_HOST
+ (ibuf.
+ ipacket.
+ packet_size))) {
+ DL_ERROR
+ ("Write to "
+ FMT_UI32
+ " failed",
+ lptr->
+ load_addr +
+ image_offset);
+ }
+#ifdef OPT_ZERO_COPY_LOADER
+ }
+#endif
+ }
+ }
+ image_offset +=
+ BYTE_TO_TADDR(ibuf.ipacket.packet_size);
+ } /* process packets */
+ /* if this is a BSS section, we may want to fill it */
+ if (DLOAD_SECT_TYPE(sptr) != DLOAD_BSS)
+ goto loop_cont;
+
+ if (!(dlthis->myoptions & DLOAD_INITBSS))
+ goto loop_cont;
+
+ if (cinit_processed) {
+ /* Don't clear BSS after load-time
+ * initialization */
+ DL_ERROR
+ ("Zero-initialization at " FMT_UI32
+ " after " "load-time initialization!",
+ lptr->load_addr);
+ goto loop_cont;
+ }
+ /* fill the .bss area */
+ dlthis->myio->fillmem(dlthis->myio,
+ TADDR_TO_HOST(lptr->load_addr),
+ lptr, TADDR_TO_HOST(lptr->size),
+ DLOAD_FILL_BSS);
+ goto loop_cont;
+ }
+ /* if DS_DOWNLOAD_MASK */
+ /* If not loading, but BSS, zero initialize */
+ if (DLOAD_SECT_TYPE(sptr) != DLOAD_BSS)
+ goto loop_cont;
+
+ if (!(dlthis->myoptions & DLOAD_INITBSS))
+ goto loop_cont;
+
+ if (curr_sect >= dlthis->allocated_secn_count)
+ lptr = DOFFSEC_IS_LDRSEC(sptr);
+
+ if (cinit_processed) {
+ /*Don't clear BSS after load-time initialization */
+ DL_ERROR("Zero-initialization at " FMT_UI32
+ " attempted after "
+ "load-time initialization!", lptr->load_addr);
+ goto loop_cont;
+ }
+ /* fill the .bss area */
+ dlthis->myio->fillmem(dlthis->myio,
+ TADDR_TO_HOST(lptr->load_addr), lptr,
+ TADDR_TO_HOST(lptr->size),
+ DLOAD_FILL_BSS);
+loop_cont:
+ sptr += 1;
+ lptr += 1;
+ } /* load sections */
+
+ /* Finalize any trampolines that were created during the load */
+ if (dload_tramp_finalize(dlthis) == 0) {
+ DL_ERROR("Finalization of auto-trampolines (size = " FMT_UI32
+ ") failed", dlthis->tramp.tramp_sect_next_addr);
+ }
+} /* dload_data */
+
+/*************************************************************************
+ * Procedure dload_reorder
+ *
+ * Parameters:
+ * data 32-bit aligned pointer to data to be byte-swapped
+ * dsiz size of the data to be reordered in sizeof() units.
+ * map 32-bit map defining how to reorder the data. Value
+ * must be REORDER_MAP() of some permutation
+ * of 0x00 01 02 03
+ *
+ * Effect:
+ * Re-arranges the bytes in each word according to the map specified.
+ *
+ *********************************************************************** */
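+/*
+ * For example, a map of 0x03020100 leaves every word unchanged, while
+ * 0x00010203 moves the lowest byte of each word to the top, i.e. fully
+ * reverses the byte order of each 32-bit word.
+ */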
+/* mask for byte shift count */
+#define SHIFT_COUNT_MASK (3 << LOG_BITS_PER_BYTE)
+
+void dload_reorder(void *data, int dsiz, unsigned int map)
+{
+ register u32 tmp, tmap, datv;
+ u32 *dp = (u32 *) data;
+
+ map <<= LOG_BITS_PER_BYTE; /* align map with SHIFT_COUNT_MASK */
+ do {
+ tmp = 0;
+ datv = *dp;
+ tmap = map;
+ do {
+ tmp |= (datv & BYTE_MASK) << (tmap & SHIFT_COUNT_MASK);
+ tmap >>= BITS_PER_BYTE;
+ } while (datv >>= BITS_PER_BYTE);
+ *dp++ = tmp;
+ } while ((dsiz -= sizeof(u32)) > 0);
+} /* dload_reorder */
+
+/*************************************************************************
+ * Procedure dload_checksum
+ *
+ * Parameters:
+ * data 32-bit aligned pointer to data to be checksummed
+ * siz size of the data to be checksummed in sizeof() units.
+ *
+ * Effect:
+ * Returns a checksum of the specified block
+ *
+ *********************************************************************** */
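+/*
+ * Note: as used by the checks in this file, the stored DOFF checksums are
+ * such that the 32-bit sum of a record plus its checksum equals ~0, hence
+ * the "if (~checks)" tests and the comparisons against ~sum elsewhere.
+ */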
+u32 dload_checksum(void *data, unsigned siz)
+{
+ u32 sum;
+ u32 *dp;
+ int left;
+
+ sum = 0;
+ dp = (u32 *) data;
+ for (left = siz; left > 0; left -= sizeof(u32))
+ sum += *dp++;
+ return sum;
+} /* dload_checksum */
+
+#if HOST_ENDIANNESS
+/*************************************************************************
+ * Procedure dload_reverse_checksum
+ *
+ * Parameters:
+ * data 32-bit aligned pointer to data to be checksummed
+ * siz size of the data to be checksummed in sizeof() units.
+ *
+ * Effect:
+ * Returns a checksum of the specified block, which is assumed to be bytes
+ * in big-endian order.
+ *
+ * Notes:
+ * In a big-endian host, things like the string table are stored as bytes
+ * in host order. But dllcreate always checksums in little-endian order.
+ * It is most efficient to just handle the difference a word at a time.
+ *
+ ********************************************************************** */
+u32 dload_reverse_checksum(void *data, unsigned siz)
+{
+ u32 sum, temp;
+ u32 *dp;
+ int left;
+
+ sum = 0;
+ dp = (u32 *) data;
+
+ for (left = siz; left > 0; left -= sizeof(u32)) {
+ temp = *dp++;
+ sum += temp << BITS_PER_BYTE * 3;
+ sum += temp >> BITS_PER_BYTE * 3;
+ sum += (temp >> BITS_PER_BYTE) & (BYTE_MASK << BITS_PER_BYTE);
+ sum += (temp & (BYTE_MASK << BITS_PER_BYTE)) << BITS_PER_BYTE;
+ }
+
+ return sum;
+} /* dload_reverse_checksum */
+
+#if (TARGET_AU_BITS > 8) && (TARGET_AU_BITS < 32)
+u32 dload_reverse_checksum16(void *data, unsigned siz)
+{
+ u32 sum, temp;
+ u32 *dp;
+ int left;
+
+ sum = 0;
+ dp = (u32 *) data;
+
+ for (left = siz; left > 0; left -= sizeof(u32)) {
+ temp = *dp++;
+ sum += temp << BITS_PER_BYTE * 2;
+ sum += temp >> BITS_PER_BYTE * 2;
+ }
+
+ return sum;
+} /* dload_reverse_checksum16 */
+#endif
+#endif
+
+/*************************************************************************
+ * Procedure swap_words
+ *
+ * Parameters:
+ * data 32-bit aligned pointer to data to be swapped
+ * siz size of the data to be swapped.
+ * bitmap Bit map of how to swap each 32-bit word; 1 => 2 shorts,
+ * 0 => 1 long
+ *
+ * Effect:
+ * Swaps the specified data according to the specified map
+ *
+ *********************************************************************** */
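+/*
+ * swap_words is used below with DLL_MODULE_BITMAP and MODULES_HEADER_BITMAP,
+ * which flag the 32-bit slots of the debug records that really hold two
+ * 16-bit fields and so must keep their half-word order.
+ */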
+static void swap_words(void *data, unsigned siz, unsigned bitmap)
+{
+ register int i;
+#if TARGET_AU_BITS < 16
+ register u16 *sp;
+#endif
+ register u32 *lp;
+
+ siz /= sizeof(u16);
+
+#if TARGET_AU_BITS < 16
+ /* pass 1: do all the bytes */
+ i = siz;
+ sp = (u16 *) data;
+ do {
+ register u16 tmp;
+ tmp = *sp;
+ *sp++ = SWAP16BY8(tmp);
+ } while ((i -= 1) > 0);
+#endif
+
+#if TARGET_AU_BITS < 32
+ /* pass 2: fixup the 32-bit words */
+ i = siz >> 1;
+ lp = (u32 *) data;
+ do {
+ if ((bitmap & 1) == 0) {
+ register u32 tmp;
+ tmp = *lp;
+ *lp = SWAP32BY16(tmp);
+ }
+ lp += 1;
+ bitmap >>= 1;
+ } while ((i -= 1) > 0);
+#endif
+} /* swap_words */
+
+/*************************************************************************
+ * Procedure copy_tgt_strings
+ *
+ * Parameters:
+ * dstp Destination address. Assumed to be 32-bit aligned
+ * srcp Source address. Assumed to be 32-bit aligned
+ * charcount Number of characters to copy.
+ *
+ * Effect:
+ * Copies strings from the source (which is in usual .dof file order on
+ * the loading processor) to the destination buffer (which should be in proper
+ * target addressable unit order). Makes sure the last string in the
+ * buffer is NULL terminated (for safety).
+ * Returns the first unused destination address.
+ *********************************************************************** */
+static char *copy_tgt_strings(void *dstp, void *srcp, unsigned charcount)
+{
+ register tgt_au_t *src = (tgt_au_t *) srcp;
+ register tgt_au_t *dst = (tgt_au_t *) dstp;
+ register int cnt = charcount;
+ do {
+#if TARGET_AU_BITS <= BITS_PER_AU
+ /* byte-swapping issues may exist for strings on target */
+ *dst++ = *src++;
+#else
+ *dst++ = *src++;
+#endif
+ } while ((cnt -= (sizeof(tgt_au_t) * BITS_PER_AU / BITS_PER_BYTE)) > 0);
+ /* make sure that the string table has a null terminator */
+#if (BITS_PER_AU == BITS_PER_BYTE) && (TARGET_AU_BITS == BITS_PER_BYTE)
+ dst[-1] = 0;
+#else
+ /* little endian */
+ dst[-1] &= (1 << (BITS_PER_AU - BITS_PER_BYTE)) - 1;
+#endif
+ return (char *)dst;
+} /* copy_tgt_strings */
+
+/*************************************************************************
+ * Procedure init_module_handle
+ *
+ * Parameters:
+ * none
+ *
+ * Effect:
+ * Initializes the module handle we use to enable unloading, and installs
+ * the debug information required by the target.
+ *
+ * Notes:
+ * The handle returned from dynamic_load_module needs to encapsulate all the
+ * allocations done for the module, and enable them plus the modules symbols to
+ * be deallocated.
+ *
+ *********************************************************************** */
+#ifndef _BIG_ENDIAN
+static const struct ldr_section_info dllview_info_init = { ".dllview", 0, 0,
+ (ldr_addr)-1, DBG_LIST_PAGE, DLOAD_DATA, 0
+};
+#else
+static const struct ldr_section_info dllview_info_init = { ".dllview", 0, 0,
+ (ldr_addr)-1, DLOAD_DATA, DBG_LIST_PAGE, 0
+};
+#endif
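+/*
+ * The .dllview record built here is a struct dll_module header, one
+ * struct dll_sect per allocated section (plus the trampoline section, if
+ * one was generated), followed by the section-name strings.
+ */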
+static void init_module_handle(struct dload_state *dlthis)
+{
+ struct my_handle *hndl;
+ u16 curr_sect;
+ struct ldr_section_info *asecs;
+ struct dll_module *dbmod;
+ struct dll_sect *dbsec;
+ struct dbg_mirror_root *mlist;
+ register char *cp;
+ struct modules_header mhdr;
+ struct ldr_section_info dllview_info;
+ struct dynload_symbol *debug_mirror_sym;
+ hndl = dlthis->myhandle;
+ if (!hndl)
+ return; /* must be errors detected, so forget it */
+
+ /* Store the section count */
+ hndl->secn_count = dlthis->allocated_secn_count;
+
+ /* If a trampoline section was created, add it in */
+ if (dlthis->tramp.tramp_sect_next_addr != 0)
+ hndl->secn_count += 1;
+
+ hndl->secn_count = hndl->secn_count << 1;
+
+#ifndef TARGET_ENDIANNESS
+ if (dlthis->big_e_target)
+ hndl->secn_count += 1; /* flag for big-endian */
+#endif
+ if (dlthis->dload_errcount)
+ return; /* abandon if errors detected */
+ /* Locate the symbol that names the header for the CCS debug list
+ * of modules. If not found, we just don't generate the debug record.
+ * If found, we create our modules list. We make sure to create the
+ * loader_dllview_root even if there is no relocation info to record,
+ * just to try to put both symbols in the same symbol table and
+ * module. */
+ debug_mirror_sym = dlthis->mysym->find_matching_symbol(dlthis->mysym,
+ loader_dllview_root);
+ if (!debug_mirror_sym) {
+ struct dynload_symbol *dlmodsym;
+ struct dbg_mirror_root *mlst;
+
+ /* our root symbol is not yet present;
+ * check if we have DLModules defined */
+ dlmodsym = dlthis->mysym->find_matching_symbol(dlthis->mysym,
+ LINKER_MODULES_HEADER);
+ if (!dlmodsym)
+ return; /* no DLModules list so no debug info */
+ /* if we have DLModules defined, construct our header */
+ mlst = (struct dbg_mirror_root *)
+ dlthis->mysym->dload_allocate(dlthis->mysym,
+ sizeof(struct
+ dbg_mirror_root));
+ if (!mlst) {
+ DL_ERROR(err_alloc, sizeof(struct dbg_mirror_root));
+ return;
+ }
+ mlst->hnext = NULL;
+ mlst->changes = 0;
+ mlst->refcount = 0;
+ mlst->dbthis = TDATA_TO_TADDR(dlmodsym->value);
+ /* add our root symbol */
+ debug_mirror_sym = dlthis->mysym->add_to_symbol_table
+ (dlthis->mysym, loader_dllview_root,
+ (unsigned)dlthis->myhandle);
+ if (!debug_mirror_sym) {
+ /* failed, recover memory */
+ dlthis->mysym->dload_deallocate(dlthis->mysym, mlst);
+ return;
+ }
+ debug_mirror_sym->value = (u32) mlst;
+ }
+ /* First create the DLLview record and stuff it into the buffer.
+ * Then write it to the DSP. Record pertinent locations in our hndl,
+ * and add it to the per-processor list of handles with debug info. */
+#ifndef DEBUG_HEADER_IN_LOADER
+ mlist = (struct dbg_mirror_root *)debug_mirror_sym->value;
+ if (!mlist)
+ return;
+#else
+ mlist = (struct dbg_mirror_root *)&debug_list_header;
+#endif
+ hndl->dm.hroot = mlist; /* set pointer to root into our handle */
+ if (!dlthis->allocated_secn_count)
+ return; /* no load addresses to be recorded */
+ /* reuse temporary symbol storage */
+ dbmod = (struct dll_module *)dlthis->local_symtab;
+ /* Create the DLLview record in the memory we retain for our handle */
+ dbmod->num_sects = dlthis->allocated_secn_count;
+ dbmod->timestamp = dlthis->verify.dv_timdat;
+ dbmod->version = INIT_VERSION;
+ dbmod->verification = VERIFICATION;
+ asecs = dlthis->ldr_sections;
+ dbsec = dbmod->sects;
+ for (curr_sect = dlthis->allocated_secn_count;
+ curr_sect > 0; curr_sect -= 1) {
+ dbsec->sect_load_adr = asecs->load_addr;
+ dbsec->sect_run_adr = asecs->run_addr;
+ dbsec += 1;
+ asecs += 1;
+ }
+
+ /* If a trampoline section was created go ahead and add its info */
+ if (dlthis->tramp.tramp_sect_next_addr != 0) {
+ dbmod->num_sects++;
+ dbsec->sect_load_adr = asecs->load_addr;
+ dbsec->sect_run_adr = asecs->run_addr;
+ dbsec++;
+ asecs++;
+ }
+
+ /* now cram in the names */
+ cp = copy_tgt_strings(dbsec, dlthis->str_head,
+ dlthis->debug_string_size);
+
+ /* If a trampoline section was created, add its name so DLLView
+ * can show the user the section info. */
+ if (dlthis->tramp.tramp_sect_next_addr != 0) {
+ cp = copy_tgt_strings(cp,
+ dlthis->tramp.final_string_table,
+ strlen(dlthis->tramp.final_string_table) +
+ 1);
+ }
+
+ /* round off the size of the debug record, and remember same */
+ hndl->dm.dbsiz = HOST_TO_TDATA_ROUND(cp - (char *)dbmod);
+ *cp = 0; /* strictly to make our test harness happy */
+ dllview_info = dllview_info_init;
+ dllview_info.size = TDATA_TO_TADDR(hndl->dm.dbsiz);
+ /* Initialize memory context to default heap */
+ dllview_info.context = 0;
+ hndl->dm.context = 0;
+ /* fill in next pointer and size */
+ if (mlist->hnext) {
+ dbmod->next_module = TADDR_TO_TDATA(mlist->hnext->dm.dbthis);
+ dbmod->next_module_size = mlist->hnext->dm.dbsiz;
+ } else {
+ dbmod->next_module_size = 0;
+ dbmod->next_module = 0;
+ }
+ /* allocate memory for on-DSP DLLview debug record */
+ if (!dlthis->myalloc)
+ return;
+ if (!dlthis->myalloc->dload_allocate(dlthis->myalloc, &dllview_info,
+ HOST_TO_TADDR(sizeof(u32)))) {
+ return;
+ }
+ /* Store load address of .dllview section */
+ hndl->dm.dbthis = dllview_info.load_addr;
+ /* Store memory context (segid) in which .dllview section
+ * was allocated */
+ hndl->dm.context = dllview_info.context;
+ mlist->refcount += 1;
+ /* swap bytes in the entire debug record, but not the string table */
+ if (TARGET_ENDIANNESS_DIFFERS(TARGET_BIG_ENDIAN)) {
+ swap_words(dbmod, (char *)dbsec - (char *)dbmod,
+ DLL_MODULE_BITMAP);
+ }
+ /* Update the DLLview list on the DSP write new record */
+ if (!dlthis->myio->writemem(dlthis->myio, dbmod,
+ dllview_info.load_addr, &dllview_info,
+ TADDR_TO_HOST(dllview_info.size))) {
+ return;
+ }
+ /* write new header */
+ mhdr.first_module_size = hndl->dm.dbsiz;
+ mhdr.first_module = TADDR_TO_TDATA(dllview_info.load_addr);
+ /* swap bytes in the module header, if needed */
+ if (TARGET_ENDIANNESS_DIFFERS(TARGET_BIG_ENDIAN)) {
+ swap_words(&mhdr, sizeof(struct modules_header) - sizeof(u16),
+ MODULES_HEADER_BITMAP);
+ }
+ dllview_info = dllview_info_init;
+ if (!dlthis->myio->writemem(dlthis->myio, &mhdr, mlist->dbthis,
+ &dllview_info,
+ sizeof(struct modules_header) -
+ sizeof(u16))) {
+ return;
+ }
+ /* Add the module handle to this processor's list
+ * of handles with debug info */
+ hndl->dm.hnext = mlist->hnext;
+ if (hndl->dm.hnext)
+ hndl->dm.hnext->dm.hprev = hndl;
+ hndl->dm.hprev = (struct my_handle *)mlist;
+ mlist->hnext = hndl; /* insert after root */
+} /* init_module_handle */
+
+/*************************************************************************
+ * Procedure dynamic_unload_module
+ *
+ * Parameters:
+ * mhandle A module handle from dynamic_load_module
+ * syms Host-side symbol table and malloc/free functions
+ * alloc Target-side memory allocation
+ *
+ * Effect:
+ * The module specified by mhandle is unloaded. Unloading causes all
+ * target memory to be deallocated, all symbols defined by the module to
+ * be purged, and any host-side storage used by the dynamic loader for
+ * this module to be released.
+ *
+ * Returns:
+ * Zero for success. On error, the number of errors detected is returned.
+ * Individual errors are reported using syms->error_report().
+ *********************************************************************** */
+int dynamic_unload_module(void *mhandle,
+ struct dynamic_loader_sym *syms,
+ struct dynamic_loader_allocate *alloc,
+ struct dynamic_loader_initialize *init)
+{
+ s16 curr_sect;
+ struct ldr_section_info *asecs;
+ struct my_handle *hndl;
+ struct dbg_mirror_root *root;
+ unsigned errcount = 0;
+ struct ldr_section_info dllview_info = dllview_info_init;
+ struct modules_header mhdr;
+
+ hndl = (struct my_handle *)mhandle;
+ if (!hndl)
+ return 0; /* if handle is null, nothing to do */
+ /* Clear out the module symbols.
+ * Note that if this is the module that defined MODULES_HEADER
+ * (the head of the target debug list), then this operation will
+ * blow away that symbol. It will therefore be impossible for
+ * subsequent operations to add entries to this un-referenceable
+ * list. */
+ if (!syms)
+ return 1;
+ syms->purge_symbol_table(syms, (unsigned)hndl);
+ /* Deallocate target memory for sections
+ * NOTE: The trampoline section, if created, gets deleted here, too */
+
+ asecs = hndl->secns;
+ if (alloc)
+ for (curr_sect = (hndl->secn_count >> 1); curr_sect > 0;
+ curr_sect -= 1) {
+ asecs->name = NULL;
+ alloc->dload_deallocate(alloc, asecs++);
+ }
+ root = hndl->dm.hroot;
+ if (!root) {
+ /* there is a debug list containing this module */
+ goto func_end;
+ }
+ if (!hndl->dm.dbthis) { /* no target-side dllview record */
+ goto loop_end;
+ }
+ /* Retrieve memory context in which .dllview was allocated */
+ dllview_info.context = hndl->dm.context;
+ if (hndl->dm.hprev == hndl)
+ goto exitunltgt;
+
+ /* target-side dllview record is in list */
+ /* dequeue this record from our GPP-side mirror list */
+ hndl->dm.hprev->dm.hnext = hndl->dm.hnext;
+ if (hndl->dm.hnext)
+ hndl->dm.hnext->dm.hprev = hndl->dm.hprev;
+ /* Update next_module of previous entry in target list
+ * We are using mhdr here as a surrogate for either a
+ * struct modules_header or a dll_module */
+ if (hndl->dm.hnext) {
+ mhdr.first_module = TADDR_TO_TDATA(hndl->dm.hnext->dm.dbthis);
+ mhdr.first_module_size = hndl->dm.hnext->dm.dbsiz;
+ } else {
+ mhdr.first_module = 0;
+ mhdr.first_module_size = 0;
+ }
+ if (!init)
+ goto exitunltgt;
+
+ if (!init->connect(init)) {
+ dload_syms_error(syms, iconnect);
+ errcount += 1;
+ goto exitunltgt;
+ }
+ /* swap bytes in the module header, if needed */
+ if (TARGET_ENDIANNESS_DIFFERS(hndl->secn_count & 0x1)) {
+ swap_words(&mhdr, sizeof(struct modules_header) - sizeof(u16),
+ MODULES_HEADER_BITMAP);
+ }
+ if (!init->writemem(init, &mhdr, hndl->dm.hprev->dm.dbthis,
+ &dllview_info, sizeof(struct modules_header) -
+ sizeof(mhdr.update_flag))) {
+ dload_syms_error(syms, dlvwrite);
+ errcount += 1;
+ }
+ /* update change counter */
+ root->changes += 1;
+ if (!init->writemem(init, &(root->changes),
+ root->dbthis + HOST_TO_TADDR
+ (sizeof(mhdr.first_module) +
+ sizeof(mhdr.first_module_size)),
+ &dllview_info, sizeof(mhdr.update_flag))) {
+ dload_syms_error(syms, dlvwrite);
+ errcount += 1;
+ }
+ init->release(init);
+exitunltgt:
+ /* release target storage */
+ dllview_info.size = TDATA_TO_TADDR(hndl->dm.dbsiz);
+ dllview_info.load_addr = hndl->dm.dbthis;
+ if (alloc)
+ alloc->dload_deallocate(alloc, &dllview_info);
+ root->refcount -= 1;
+ /* end of target-side dllview record handling */
+loop_end:
+#ifndef DEBUG_HEADER_IN_LOADER
+ if (root->refcount <= 0) {
+ /* if all references gone, blow off the header */
+ /* our root symbol may be gone due to the Purge above,
+ * but if not, do not destroy the root */
+ if (syms->find_matching_symbol
+ (syms, loader_dllview_root) == NULL)
+ syms->dload_deallocate(syms, root);
+ }
+#endif
+func_end:
+ /* end of debug list handling */
+ syms->dload_deallocate(syms, mhandle); /* release our storage */
+ return errcount;
+} /* dynamic_unload_module */
+
+#if BITS_PER_AU > BITS_PER_BYTE
+/*************************************************************************
+ * Procedure unpack_name
+ *
+ * Parameters:
+ * soffset Byte offset into the string table
+ *
+ * Effect:
+ * Returns a pointer to the string specified by the offset supplied, or
+ * NULL for error.
+ *
+ *********************************************************************** */
+static char *unpack_name(struct dload_state *dlthis, u32 soffset)
+{
+ u8 tmp, *src;
+ char *dst;
+
+ if (soffset >= dlthis->dfile_hdr.df_strtab_size) {
+ dload_error(dlthis, "Bad string table offset " FMT_UI32,
+ soffset);
+ return NULL;
+ }
+ src = (u8 *) dlthis->str_head +
+ (soffset >> (LOG_BITS_PER_AU - LOG_BITS_PER_BYTE));
+ dst = dlthis->str_temp;
+ if (soffset & 1)
+ *dst++ = *src++; /* only 1 character in first word */
+ do {
+ tmp = *src++;
+ *dst = (tmp >> BITS_PER_BYTE);
+ if (!(*dst++))
+ break;
+ } while ((*dst++ = tmp & BYTE_MASK));
+ dlthis->temp_len = dst - dlthis->str_temp;
+ /* squirrel away length including terminating null */
+ return dlthis->str_temp;
+} /* unpack_name */
+#endif
diff --git a/drivers/staging/tidspbridge/dynload/dload_internal.h b/drivers/staging/tidspbridge/dynload/dload_internal.h
new file mode 100644
index 0000000..8037561
--- /dev/null
+++ b/drivers/staging/tidspbridge/dynload/dload_internal.h
@@ -0,0 +1,351 @@
+/*
+ * dload_internal.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef _DLOAD_INTERNAL_
+#define _DLOAD_INTERNAL_
+
+#include <linux/types.h>
+
+/*
+ * Internal state definitions for the dynamic loader
+ */
+
+#define TRUE 1
+#define FALSE 0
+
+/* type used for relocation intermediate results */
+typedef s32 rvalue;
+
+/* unsigned version of same; must have at least as many bits */
+typedef u32 urvalue;
+
+/*
+ * Dynamic loader configuration constants
+ */
+/* error issued if input has more sections than this limit */
+#define REASONABLE_SECTION_LIMIT 100
+
+/* (Addressable unit) value used to clear BSS section */
+#define DLOAD_FILL_BSS 0
+
+/*
+ * Reorder maps explained (?)
+ *
+ * The doff file format defines a 32-bit pattern used to determine the
+ * byte order of an image being read. That value is
+ * BYTE_RESHUFFLE_VALUE == 0x00010203
+ * For purposes of the reorder routine, we would rather have the all-is-OK
+ * pattern for 32-bit data be 0x03020100. This first macro makes the
+ * translation from doff file header value to MAP value: */
+#define REORDER_MAP(rawmap) ((rawmap) ^ 0x3030303)
+/* This translation is made in dload_headers. Thereafter, the all-is-OK
+ * value for the maps stored in dlthis is REORDER_MAP(BYTE_RESHUFFLE_VALUE).
+ * But sadly, not all bits of the doff file are 32-bit integers.
+ * The notable exceptions are strings and image bits.
+ * Strings obey host byte order: */
+#if defined(_BIG_ENDIAN)
+#define HOST_BYTE_ORDER(cookedmap) ((cookedmap) ^ 0x3030303)
+#else
+#define HOST_BYTE_ORDER(cookedmap) (cookedmap)
+#endif
+/* Target bits consist of target AUs (could be bytes, or 16-bits,
+ * or 32-bits) stored as an array in host order. A target order
+ * map is defined by: */
+#if !defined(_BIG_ENDIAN) || TARGET_AU_BITS > 16
+#define TARGET_ORDER(cookedmap) (cookedmap)
+#elif TARGET_AU_BITS > 8
+#define TARGET_ORDER(cookedmap) ((cookedmap) ^ 0x2020202)
+#else
+#define TARGET_ORDER(cookedmap) ((cookedmap) ^ 0x3030303)
+#endif
+
+/* forward declaration for handle returned by dynamic loader */
+struct my_handle;
+
+/*
+ * a list of module handles, which mirrors the debug list on the target
+ */
+struct dbg_mirror_root {
+ /* must be same as dbg_mirror_list; __DLModules address on target */
+ u32 dbthis;
+ struct my_handle *hnext; /* must be same as dbg_mirror_list */
+ u16 changes; /* change counter */
+ u16 refcount; /* number of modules referencing this root */
+};
+
+struct dbg_mirror_list {
+ u32 dbthis;
+ struct my_handle *hnext, *hprev;
+ struct dbg_mirror_root *hroot;
+ u16 dbsiz;
+ u32 context; /* Save context for .dllview memory allocation */
+};
+
+#define VARIABLE_SIZE 1
+/*
+ * the structure we actually return as an opaque module handle
+ */
+struct my_handle {
+ struct dbg_mirror_list dm; /* !!! must be first !!! */
+ /* sections following << 1, LSB is set for big-endian target */
+ u16 secn_count;
+ struct ldr_section_info secns[VARIABLE_SIZE];
+};
+#define MY_HANDLE_SIZE (sizeof(struct my_handle) -\
+ sizeof(struct ldr_section_info))
+/* real size of my_handle */
+
+/*
+ * reduced symbol structure used for symbols during relocation
+ */
+struct local_symbol {
+ s32 value; /* Relocated symbol value */
+ s32 delta; /* Original value in input file */
+ s16 secnn; /* section number */
+ s16 sclass; /* symbol class */
+};
+
+/*
+ * Trampoline data structures
+ */
+#define TRAMP_NO_GEN_AVAIL 65535
+#define TRAMP_SYM_PREFIX "__$dbTR__"
+#define TRAMP_SECT_NAME ".dbTR"
+/* MUST MATCH THE LENGTH OF TRAMP_SYM_PREFIX!! */
+#define TRAMP_SYM_PREFIX_LEN 9
+/* Includes NULL termination */
+#define TRAMP_SYM_HEX_ASCII_LEN 9
+
+#define GET_CONTAINER(ptr, type, field) ((type *)((unsigned long)ptr -\
+ (unsigned long)(&((type *)0)->field)))
+#ifndef FIELD_OFFSET
+#define FIELD_OFFSET(type, field) ((unsigned long)(&((type *)0)->field))
+#endif
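+/* GET_CONTAINER and FIELD_OFFSET above are local equivalents of the
+ * kernel's container_of() and offsetof() helpers */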
+
+/*
+ The trampoline code for the target is located in a table called
+ "tramp_gen_info" with is indexed by looking up the index in the table
+ "tramp_map". The tramp_map index is acquired using the target
+ HASH_FUNC on the relocation type that caused the trampoline. Each
+ trampoline code table entry MUST follow this format:
+
+ |----------------------------------------------|
+ | tramp_gen_code_hdr |
+ |----------------------------------------------|
+ | Trampoline image code |
+ | (the raw instruction code for the target) |
+ |----------------------------------------------|
+ | Relocation entries for the image code |
+ |----------------------------------------------|
+
+ This is very similar to how image data is laid out in the DOFF file
+ itself.
+ */
+struct tramp_gen_code_hdr {
+ u32 tramp_code_size; /* in BYTES */
+ u32 num_relos;
+ u32 relo_offset; /* in BYTES */
+};
+
+struct tramp_img_pkt {
+ struct tramp_img_pkt *next; /* MUST BE FIRST */
+ u32 base;
+ struct tramp_gen_code_hdr hdr;
+ u8 payload[VARIABLE_SIZE];
+};
+
+struct tramp_img_dup_relo {
+ struct tramp_img_dup_relo *next;
+ struct reloc_record_t relo;
+};
+
+struct tramp_img_dup_pkt {
+ struct tramp_img_dup_pkt *next; /* MUST BE FIRST */
+ s16 secnn;
+ u32 offset;
+ struct image_packet_t img_pkt;
+ struct tramp_img_dup_relo *relo_chain;
+
+ /* PAYLOAD OF IMG PKT FOLLOWS */
+};
+
+struct tramp_sym {
+ struct tramp_sym *next; /* MUST BE FIRST */
+ u32 index;
+ u32 str_index;
+ struct local_symbol sym_info;
+};
+
+struct tramp_string {
+ struct tramp_string *next; /* MUST BE FIRST */
+ u32 index;
+ char str[VARIABLE_SIZE]; /* NULL terminated */
+};
+
+struct tramp_info {
+ u32 tramp_sect_next_addr;
+ struct ldr_section_info sect_info;
+
+ struct tramp_sym *symbol_head;
+ struct tramp_sym *symbol_tail;
+ u32 tramp_sym_next_index;
+ struct local_symbol *final_sym_table;
+
+ struct tramp_string *string_head;
+ struct tramp_string *string_tail;
+ u32 tramp_string_next_index;
+ u32 tramp_string_size;
+ char *final_string_table;
+
+ struct tramp_img_pkt *tramp_pkts;
+ struct tramp_img_dup_pkt *dup_pkts;
+};
+
+/*
+ * States of the .cinit state machine
+ */
+enum cinit_mode {
+ CI_COUNT = 0, /* expecting a count */
+ CI_ADDRESS, /* expecting an address */
+#if CINIT_ALIGN < CINIT_ADDRESS /* handle case of partial address field */
+ CI_PARTADDRESS, /* have only part of the address */
+#endif
+ CI_COPY, /* in the middle of copying data */
+ CI_DONE /* end of .cinit table */
+};
+
+/*
+ * The internal state of the dynamic loader, which is passed around as
+ * an object
+ */
+struct dload_state {
+ struct dynamic_loader_stream *strm; /* The module input stream */
+ struct dynamic_loader_sym *mysym; /* Symbols for this session */
+ /* target memory allocator */
+ struct dynamic_loader_allocate *myalloc;
+ struct dynamic_loader_initialize *myio; /* target memory initializer */
+ unsigned myoptions; /* Options parameter dynamic_load_module */
+
+ char *str_head; /* Pointer to string table */
+#if BITS_PER_AU > BITS_PER_BYTE
+ char *str_temp; /* Pointer to temporary buffer for strings */
+ /* big enough to hold longest string */
+ unsigned temp_len; /* length of last temporary string */
+ char *xstrings; /* Pointer to buffer for expanded */
+ /* strings for sec names */
+#endif
+ /* Total size of strings for DLLView section names */
+ unsigned debug_string_size;
+ struct doff_scnhdr_t *sect_hdrs; /* Pointer to section table */
+ /* Pointer to parallel section info for allocated sections only */
+ struct ldr_section_info *ldr_sections;
+#if TMS32060
+ /* The address of the start of the .bss section */
+ ldr_addr bss_run_base;
+#endif
+ struct local_symbol *local_symtab; /* Relocation symbol table */
+
+ /* pointer to DL section info for the section being relocated */
+ struct ldr_section_info *image_secn;
+ /* change in run address for current section during relocation */
+ ldr_addr delta_runaddr;
+ ldr_addr image_offset; /* offset of current packet in section */
+ enum cinit_mode cinit_state; /* current state of cload_cinit() */
+ int cinit_count; /* the current count */
+ ldr_addr cinit_addr; /* the current address */
+ s16 cinit_page; /* the current page */
+ /* Handle to be returned by dynamic_load_module */
+ struct my_handle *myhandle;
+ unsigned dload_errcount; /* Total # of errors reported so far */
+ /* Number of target sections that require allocation and relocation */
+ unsigned allocated_secn_count;
+#ifndef TARGET_ENDIANNESS
+ int big_e_target; /* Target data in big-endian format */
+#endif
+ /* map for reordering bytes, 0 if not needed */
+ u32 reorder_map;
+ struct doff_filehdr_t dfile_hdr; /* DOFF file header structure */
+ struct doff_verify_rec_t verify; /* Verify record */
+
+ struct tramp_info tramp; /* Trampoline data, if needed */
+
+ int relstkidx; /* index into relocation value stack */
+ /* relocation value stack used in relexp.c */
+ rvalue relstk[STATIC_EXPR_STK_SIZE];
+
+};
+
+#ifdef TARGET_ENDIANNESS
+#define TARGET_BIG_ENDIAN TARGET_ENDIANNESS
+#else
+#define TARGET_BIG_ENDIAN (dlthis->big_e_target)
+#endif
+
+/*
+ * Exports from cload.c to rest of the world
+ */
+extern void dload_error(struct dload_state *dlthis, const char *errtxt, ...);
+extern void dload_syms_error(struct dynamic_loader_sym *syms,
+ const char *errtxt, ...);
+extern void dload_headers(struct dload_state *dlthis);
+extern void dload_strings(struct dload_state *dlthis, bool sec_names_only);
+extern void dload_sections(struct dload_state *dlthis);
+extern void dload_reorder(void *data, int dsiz, u32 map);
+extern u32 dload_checksum(void *data, unsigned siz);
+
+#if HOST_ENDIANNESS
+extern u32 dload_reverse_checksum(void *data, unsigned siz);
+#if (TARGET_AU_BITS > 8) && (TARGET_AU_BITS < 32)
+extern u32 dload_reverse_checksum16(void *data, unsigned siz);
+#endif
+#endif
+
+#define IS_DATA_SCN(zzz) (DLOAD_SECTION_TYPE((zzz)->type) != DLOAD_TEXT)
+#define IS_DATA_SCN_NUM(zzz) \
+ (DLOAD_SECT_TYPE(&dlthis->sect_hdrs[(zzz)-1]) != DLOAD_TEXT)
+
+/*
+ * exported by reloc.c
+ */
+extern void dload_relocate(struct dload_state *dlthis, tgt_au_t * data,
+ struct reloc_record_t *rp, bool * tramps_generated,
+ bool second_pass);
+
+extern rvalue dload_unpack(struct dload_state *dlthis, tgt_au_t * data,
+ int fieldsz, int offset, unsigned sgn);
+
+extern int dload_repack(struct dload_state *dlthis, rvalue val, tgt_au_t * data,
+ int fieldsz, int offset, unsigned sgn);
+
+/*
+ * exported by tramp.c
+ */
+extern bool dload_tramp_avail(struct dload_state *dlthis,
+ struct reloc_record_t *rp);
+
+int dload_tramp_generate(struct dload_state *dlthis, s16 secnn,
+ u32 image_offset, struct image_packet_t *ipacket,
+ struct reloc_record_t *rp);
+
+extern int dload_tramp_pkt_udpate(struct dload_state *dlthis,
+ s16 secnn, u32 image_offset,
+ struct image_packet_t *ipacket);
+
+extern int dload_tramp_finalize(struct dload_state *dlthis);
+
+extern void dload_tramp_cleanup(struct dload_state *dlthis);
+
+#endif /* _DLOAD_INTERNAL_ */
diff --git a/drivers/staging/tidspbridge/dynload/doff.h b/drivers/staging/tidspbridge/dynload/doff.h
new file mode 100644
index 0000000..5bf9924
--- /dev/null
+++ b/drivers/staging/tidspbridge/dynload/doff.h
@@ -0,0 +1,344 @@
+/*
+ * doff.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Structures & definitions used for dynamically loaded modules file format.
+ * This format is a reformatted version of COFF. It optimizes the layout for
+ * the dynamic loader.
+ *
+ * .dof files, when viewed as a sequence of 32-bit integers, look the same
+ * on big-endian and little-endian machines.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef _DOFF_H
+#define _DOFF_H
+
+#ifndef UINT32_C
+#define UINT32_C(zzz) ((u32)zzz)
+#endif
+
+#define BYTE_RESHUFFLE_VALUE UINT32_C(0x00010203)
+
+/* DOFF file header containing fields categorizing the remainder of the file */
+struct doff_filehdr_t {
+
+ /* string table size, including filename, in bytes */
+ u32 df_strtab_size;
+
+ /* entry point if one exists */
+ u32 df_entrypt;
+
+ /* identifies byte ordering of file;
+ * always set to BYTE_RESHUFFLE_VALUE */
+ u32 df_byte_reshuffle;
+
+ /* Size of the string table up to and including the last section name */
+ /* Size includes the name of the COFF file also */
+ u32 df_scn_name_size;
+
+#ifndef _BIG_ENDIAN
+ /* number of symbols */
+ u16 df_no_syms;
+
+ /* length in bytes of the longest string, including terminating NULL */
+ /* excludes the name of the file */
+ u16 df_max_str_len;
+
+ /* total number of sections including no-load ones */
+ u16 df_no_scns;
+
+ /* number of sections containing target code allocated or downloaded */
+ u16 df_target_scns;
+
+ /* unique id for dll file format & version */
+ u16 df_doff_version;
+
+ /* identifies ISA */
+ u16 df_target_id;
+
+ /* useful file flags */
+ u16 df_flags;
+
+ /* section reference for entry point, N_UNDEF for none, */
+ /* N_ABS for absolute address */
+ s16 df_entry_secn;
+#else
+ /* length of the longest string, including terminating NULL */
+ u16 df_max_str_len;
+
+ /* number of symbols */
+ u16 df_no_syms;
+
+ /* number of sections containing target code allocated or downloaded */
+ u16 df_target_scns;
+
+ /* total number of sections including no-load ones */
+ u16 df_no_scns;
+
+ /* identifies ISA */
+ u16 df_target_id;
+
+ /* unique id for dll file format & version */
+ u16 df_doff_version;
+
+ /* section reference for entry point, N_UNDEF for none, */
+ /* N_ABS for absolute address */
+ s16 df_entry_secn;
+
+ /* useful file flags */
+ u16 df_flags;
+#endif
+ /* checksum for file header record */
+ u32 df_checksum;
+
+};
+
+/* flags in the df_flags field */
+#define DF_LITTLE 0x100
+#define DF_BIG 0x200
+#define DF_BYTE_ORDER (DF_LITTLE | DF_BIG)
+
+/* Supported processors */
+#define TMS470_ID 0x97
+#define LEAD_ID 0x98
+#define TMS32060_ID 0x99
+#define LEAD3_ID 0x9c
+
+/* Primary processor for loading */
+#if TMS32060
+#define TARGET_ID TMS32060_ID
+#endif
+
+/* Verification record containing values used to test integrity of the bits */
+struct doff_verify_rec_t {
+
+ /* time and date stamp */
+ u32 dv_timdat;
+
+ /* checksum for all section records */
+ u32 dv_scn_rec_checksum;
+
+ /* checksum for string table */
+ u32 dv_str_tab_checksum;
+
+ /* checksum for symbol table */
+ u32 dv_sym_tab_checksum;
+
+ /* checksum for verification record */
+ u32 dv_verify_rec_checksum;
+
+};
+
+/* String table is an array of null-terminated strings. The first entry is
+ * the filename, which is added by DLLcreate. No new structure definitions
+ * are required.
+ */
+
+/* Section Records including information on the corresponding image packets */
+/*
+ * !!WARNING!!
+ *
+ * This structure is expected to match in form ldr_section_info in
+ * dynamic_loader.h
+ */
+
+struct doff_scnhdr_t {
+
+ s32 ds_offset; /* offset into string table of name */
+ s32 ds_paddr; /* RUN address, in target AU */
+ s32 ds_vaddr; /* LOAD address, in target AU */
+ s32 ds_size; /* section size, in target AU */
+#ifndef _BIG_ENDIAN
+ u16 ds_page; /* memory page id */
+ u16 ds_flags; /* section flags */
+#else
+ u16 ds_flags; /* section flags */
+ u16 ds_page; /* memory page id */
+#endif
+ u32 ds_first_pkt_offset;
+ /* Absolute byte offset into the file */
+ /* where the first image record resides */
+
+ s32 ds_nipacks; /* number of image packets */
+
+};
+
+/* Symbol table entry */
+struct doff_syment_t {
+
+ s32 dn_offset; /* offset into string table of name */
+ s32 dn_value; /* value of symbol */
+#ifndef _BIG_ENDIAN
+ s16 dn_scnum; /* section number */
+ s16 dn_sclass; /* storage class */
+#else
+ s16 dn_sclass; /* storage class */
+ s16 dn_scnum; /* section number, 1-based */
+#endif
+
+};
+
+/* special values for dn_scnum */
+#define DN_UNDEF 0 /* undefined symbol */
+#define DN_ABS (-1) /* value of symbol is absolute */
+/* special values for dn_sclass */
+#define DN_EXT 2
+#define DN_STATLAB 20
+#define DN_EXTLAB 21
+
+/* Default value of image bits in packet */
+/* Configurable by user on the command line */
+#define IMAGE_PACKET_SIZE 1024
+
+/* An image packet contains a chunk of data from a section along with */
+/* information necessary for its processing. */
+struct image_packet_t {
+
+ s32 num_relocs; /* number of relocations for */
+ /* this packet */
+
+ s32 packet_size; /* number of bytes in array */
+ /* "bits" occupied by */
+ /* valid data. Could be */
+ /* < IMAGE_PACKET_SIZE to */
+ /* prevent splitting a */
+ /* relocation across packets. */
+ /* Last packet of a section */
+ /* will most likely contain */
+ /* < IMAGE_PACKET_SIZE bytes */
+ /* of valid data */
+
+ s32 img_chksum; /* Checksum for image packet */
+ /* and the corresponding */
+ /* relocation records */
+
+ u8 *img_data; /* Actual data in section */
+
+};
+
+/* The relocation structure definition matches the COFF version. Offsets */
+/* however are relative to the image packet base not the section base. */
+struct reloc_record_t {
+
+ s32 vaddr;
+
+ /* expressed in target AUs */
+
+ union {
+ struct {
+#ifndef _BIG_ENDIAN
+ u8 _offset; /* bit offset of rel fld */
+ u8 _fieldsz; /* size of rel fld */
+ u8 _wordsz; /* # bytes containing rel fld */
+ u8 _dum1;
+ u16 _dum2;
+ u16 _type;
+#else
+ unsigned _dum1:8;
+ unsigned _wordsz:8; /* # bytes containing rel fld */
+ unsigned _fieldsz:8; /* size of rel fld */
+ unsigned _offset:8; /* bit offset of rel fld */
+ u16 _type;
+ u16 _dum2;
+#endif
+ } _r_field;
+
+ struct {
+ u32 _spc; /* image packet relative PC */
+#ifndef _BIG_ENDIAN
+ u16 _dum;
+ u16 _type; /* relocation type */
+#else
+ u16 _type; /* relocation type */
+ u16 _dum;
+#endif
+ } _r_spc;
+
+ struct {
+ u32 _uval; /* constant value */
+#ifndef _BIG_ENDIAN
+ u16 _dum;
+ u16 _type; /* relocation type */
+#else
+ u16 _type; /* relocation type */
+ u16 _dum;
+#endif
+ } _r_uval;
+
+ struct {
+ s32 _symndx; /* 32-bit sym tbl index */
+#ifndef _BIG_ENDIAN
+ u16 _disp; /* extra addr encode data */
+ u16 _type; /* relocation type */
+#else
+ u16 _type; /* relocation type */
+ u16 _disp; /* extra addr encode data */
+#endif
+ } _r_sym;
+ } _u_reloc;
+
+};
+
+/* abbreviations for convenience */
+#ifndef TYPE
+#define TYPE _u_reloc._r_sym._type
+#define UVAL _u_reloc._r_uval._uval
+#define SYMNDX _u_reloc._r_sym._symndx
+#define OFFSET _u_reloc._r_field._offset
+#define FIELDSZ _u_reloc._r_field._fieldsz
+#define WORDSZ _u_reloc._r_field._wordsz
+#define R_DISP _u_reloc._r_sym._disp
+#endif
+
+/**************************************************************************** */
+/* */
+/* Important DOFF macros used for file processing */
+/* */
+/**************************************************************************** */
+
+/* DOFF Versions */
+#define DOFF0 0
+
+/* Return the address/size >= to addr that is at a 32-bit boundary */
+/* This assumes that a byte is 8 bits */
+#define DOFF_ALIGN(addr) (((addr) + 3) & ~UINT32_C(3))
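+/* e.g. DOFF_ALIGN(5) == 8 and DOFF_ALIGN(8) == 8 */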
+
+/**************************************************************************** */
+/* */
+/* The DOFF section header flags field is laid out as follows: */
+/* */
+/* Bits 0-3 : Section Type */
+/* Bit 4 : Set when section requires target memory to be allocated by DL */
+/* Bit 5 : Set when section requires downloading */
+/* Bits 8-11: Alignment, same as COFF */
+/* */
+/**************************************************************************** */
+
+/* Enum for DOFF section types (bits 0-3 of flag): See dynamic_loader.h */
+
+/* Macros to help processing of sections */
+#define DLOAD_SECT_TYPE(s_hdr) ((s_hdr)->ds_flags & 0xF)
+
+/* DS_ALLOCATE indicates whether a section needs space on the target */
+#define DS_ALLOCATE_MASK 0x10
+#define DS_NEEDS_ALLOCATION(s_hdr) ((s_hdr)->ds_flags & DS_ALLOCATE_MASK)
+
+/* DS_DOWNLOAD indicates that the loader needs to copy bits */
+#define DS_DOWNLOAD_MASK 0x20
+#define DS_NEEDS_DOWNLOAD(s_hdr) ((s_hdr)->ds_flags & DS_DOWNLOAD_MASK)
+
+/* Section alignment requirement in AUs */
+#define DS_ALIGNMENT(ds_flags) (1 << (((ds_flags) >> 8) & 0xF))
+
+#endif /* _DOFF_H */
diff --git a/drivers/staging/tidspbridge/dynload/getsection.c b/drivers/staging/tidspbridge/dynload/getsection.c
new file mode 100644
index 0000000..029898f
--- /dev/null
+++ b/drivers/staging/tidspbridge/dynload/getsection.c
@@ -0,0 +1,416 @@
+/*
+ * getsection.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#include <dspbridge/getsection.h>
+#include "header.h"
+
+/*
+ * Error strings
+ */
+static const char readstrm[] = { "Error reading %s from input stream" };
+static const char seek[] = { "Set file position to %d failed" };
+static const char isiz[] = { "Bad image packet size %d" };
+static const char err_checksum[] = { "Checksum failed on %s" };
+
+static const char err_reloc[] = { "dload_get_section unable to read "
+ "sections containing relocation entries"
+};
+
+#if BITS_PER_AU > BITS_PER_BYTE
+static const char err_alloc[] = { "Syms->dload_allocate( %d ) failed" };
+static const char stbl[] = { "Bad string table offset " FMT_UI32 };
+#endif
+
+/*
+ * we use the fact that DOFF section records are shaped just like
+ * ldr_section_info to reduce our section storage usage. These macros
+ * mark the places where that assumption is made.
+ */
+#define DOFFSEC_IS_LDRSEC(pdoffsec) ((struct ldr_section_info *)(pdoffsec))
+#define LDRSEC_IS_DOFFSEC(ldrsec) ((struct doff_scnhdr_t *)(ldrsec))
+
+/************************************************************** */
+/********************* SUPPORT FUNCTIONS ********************** */
+/************************************************************** */
+
+#if BITS_PER_AU > BITS_PER_BYTE
+/**************************************************************************
+ * Procedure unpack_sec_name
+ *
+ * Parameters:
+ * dlthis Handle from dload_module_open for this module
+ * soffset Byte offset into the string table
+ * dst Place to store the expanded string
+ *
+ * Effect:
+ * Stores a string from the string table into the destination, expanding
+ * it in the process. Returns a pointer just past the end of the stored
+ * string on success, or NULL on failure.
+ *
+ ************************************************************************ */
+static char *unpack_sec_name(struct dload_state *dlthis, u32 soffset, char *dst)
+{
+ u8 tmp, *src;
+
+ if (soffset >= dlthis->dfile_hdr.df_scn_name_size) {
+ dload_error(dlthis, stbl, soffset);
+ return NULL;
+ }
+ src = (u8 *) dlthis->str_head +
+ (soffset >> (LOG_BITS_PER_AU - LOG_BITS_PER_BYTE));
+ if (soffset & 1)
+ *dst++ = *src++; /* only 1 character in first word */
+ do {
+ tmp = *src++;
+ *dst = (tmp >> BITS_PER_BYTE);
+ if (!(*dst++))
+ break;
+ } while ((*dst++ = tmp & BYTE_MASK));
+
+ return dst;
+}
+
+/**************************************************************************
+ * Procedure expand_sec_names
+ *
+ * Parameters:
+ * dlthis Handle from dload_module_open for this module
+ *
+ * Effect:
+ * Allocates a buffer, then unpacks and copies strings from the string table into it.
+ * Stores a pointer to the buffer into a state variable.
+ ************************************************************************* */
+static void expand_sec_names(struct dload_state *dlthis)
+{
+ char *xstrings, *curr, *next;
+ u32 xsize;
+ u16 sec;
+ struct ldr_section_info *shp;
+ /* assume worst-case size requirement */
+ xsize = dlthis->dfile_hdr.df_max_str_len * dlthis->dfile_hdr.df_no_scns;
+ xstrings = (char *)dlthis->mysym->dload_allocate(dlthis->mysym, xsize);
+ if (xstrings == NULL) {
+ dload_error(dlthis, err_alloc, xsize);
+ return;
+ }
+ dlthis->xstrings = xstrings;
+ /* For each sec, copy and expand its name */
+ curr = xstrings;
+ for (sec = 0; sec < dlthis->dfile_hdr.df_no_scns; sec++) {
+ shp = DOFFSEC_IS_LDRSEC(&dlthis->sect_hdrs[sec]);
+ next = unpack_sec_name(dlthis, *(u32 *) &shp->name, curr);
+ if (next == NULL)
+ break; /* error */
+ shp->name = curr;
+ curr = next;
+ }
+}
+
+#endif
+
+/************************************************************** */
+/********************* EXPORTED FUNCTIONS ********************* */
+/************************************************************** */
+
+/**************************************************************************
+ * Procedure dload_module_open
+ *
+ * Parameters:
+ * module The input stream that supplies the module image
+ * syms Host-side malloc/free and error reporting functions.
+ * Other methods are unused.
+ *
+ * Effect:
+ * Reads header information from a dynamic loader module using the
+ * specified stream object, and returns a handle for the module
+ * information. This
+ * handle may be used in subsequent query calls to obtain information
+ * contained in the module.
+ *
+ * Returns:
+ * NULL if an error is encountered, otherwise a module handle for use
+ * in subsequent operations.
+ ************************************************************************* */
+void *dload_module_open(struct dynamic_loader_stream *module,
+ struct dynamic_loader_sym *syms)
+{
+ struct dload_state *dlthis; /* internal state for this call */
+ unsigned *dp, sz;
+ u32 sec_start;
+#if BITS_PER_AU <= BITS_PER_BYTE
+ u16 sec;
+#endif
+
+ /* Check that mandatory arguments are present */
+ if (!module || !syms) {
+ if (syms != NULL)
+ dload_syms_error(syms, "Required parameter is NULL");
+
+ return NULL;
+ }
+
+ dlthis = (struct dload_state *)
+ syms->dload_allocate(syms, sizeof(struct dload_state));
+ if (!dlthis) {
+ /* not enough storage */
+ dload_syms_error(syms, "Can't allocate module info");
+ return NULL;
+ }
+
+ /* clear our internal state */
+ dp = (unsigned *)dlthis;
+ for (sz = sizeof(struct dload_state) / sizeof(unsigned);
+ sz > 0; sz -= 1)
+ *dp++ = 0;
+
+ dlthis->strm = module;
+ dlthis->mysym = syms;
+
+ /* read in the doff image and store in our state variable */
+ dload_headers(dlthis);
+
+ if (!dlthis->dload_errcount)
+ dload_strings(dlthis, true);
+
+ /* skip ahead past the unread portion of the string table */
+ sec_start = sizeof(struct doff_filehdr_t) +
+ sizeof(struct doff_verify_rec_t) +
+ BYTE_TO_HOST(DOFF_ALIGN(dlthis->dfile_hdr.df_strtab_size));
+
+ if (dlthis->strm->set_file_posn(dlthis->strm, sec_start) != 0) {
+ dload_error(dlthis, seek, sec_start);
+ return NULL;
+ }
+
+ if (!dlthis->dload_errcount)
+ dload_sections(dlthis);
+
+ if (dlthis->dload_errcount) {
+ dload_module_close(dlthis); /* errors, blow off our state */
+ dlthis = NULL;
+ return NULL;
+ }
+#if BITS_PER_AU > BITS_PER_BYTE
+ /* Expand all section names from the string table into the */
+ /* state variable, and convert section names from a relative */
+ /* string table offset to a pointer to the expanded string. */
+ expand_sec_names(dlthis);
+#else
+ /* Convert section names from a relative string table offset */
+ /* to a pointer into the string table. */
+ for (sec = 0; sec < dlthis->dfile_hdr.df_no_scns; sec++) {
+ struct ldr_section_info *shp =
+ DOFFSEC_IS_LDRSEC(&dlthis->sect_hdrs[sec]);
+ shp->name = dlthis->str_head + *(u32 *) &shp->name;
+ }
+#endif
+
+ return dlthis;
+}
+
+/***************************************************************************
+ * Procedure dload_get_section_info
+ *
+ * Parameters:
+ * minfo Handle from dload_module_open for this module
+ * sectionName Pointer to the string name of the section desired
+ * sectionInfo Address of a section info structure pointer to be
+ * initialized
+ *
+ * Effect:
+ * Finds the specified section in the module information, and initializes
+ * the provided struct ldr_section_info pointer.
+ *
+ * Returns:
+ * true for success, false for section not found
+ ************************************************************************* */
+int dload_get_section_info(void *minfo, const char *sectionName,
+ const struct ldr_section_info **const sectionInfo)
+{
+ struct dload_state *dlthis;
+ struct ldr_section_info *shp;
+ u16 sec;
+
+ dlthis = (struct dload_state *)minfo;
+ if (!dlthis)
+ return false;
+
+ for (sec = 0; sec < dlthis->dfile_hdr.df_no_scns; sec++) {
+ shp = DOFFSEC_IS_LDRSEC(&dlthis->sect_hdrs[sec]);
+ if (strcmp(sectionName, shp->name) == 0) {
+ *sectionInfo = shp;
+ return true;
+ }
+ }
+
+ return false;
+}
+
+#define IPH_SIZE (sizeof(struct image_packet_t) - sizeof(u32))
+#define REVERSE_REORDER_MAP(rawmap) ((rawmap) ^ 0x3030303)
+
+/**************************************************************************
+ * Procedure dload_get_section
+ *
+ * Parameters:
+ * minfo Handle from dload_module_open for this module
+ * sectionInfo Pointer to a section info structure for the desired
+ * section
+ * sectionData Buffer to contain the section initialized data
+ *
+ * Effect:
+ * Copies the initialized data for the specified section into the
+ * supplied buffer.
+ *
+ * Returns:
+ * true for success, false for section not found
+ ************************************************************************* */
+int dload_get_section(void *minfo,
+ const struct ldr_section_info *sectionInfo,
+ void *sectionData)
+{
+ struct dload_state *dlthis;
+ u32 pos;
+ struct doff_scnhdr_t *sptr = NULL;
+ s32 nip;
+ struct image_packet_t ipacket;
+ s32 ipsize;
+ u32 checks;
+ s8 *dest = (s8 *) sectionData;
+
+ dlthis = (struct dload_state *)minfo;
+ if (!dlthis)
+ return false;
+ sptr = LDRSEC_IS_DOFFSEC(sectionInfo);
+ if (sptr == NULL)
+ return false;
+
+ /* skip ahead to the start of the first packet */
+ pos = BYTE_TO_HOST(DOFF_ALIGN((u32) sptr->ds_first_pkt_offset));
+ if (dlthis->strm->set_file_posn(dlthis->strm, pos) != 0) {
+ dload_error(dlthis, seek, pos);
+ return false;
+ }
+
+ nip = sptr->ds_nipacks;
+ while ((nip -= 1) >= 0) { /* for each packet */
+ /* get the fixed header bits */
+ if (dlthis->strm->read_buffer(dlthis->strm, &ipacket,
+ IPH_SIZE) != IPH_SIZE) {
+ dload_error(dlthis, readstrm, "image packet");
+ return false;
+ }
+ /* reorder the header if need be */
+ if (dlthis->reorder_map)
+ dload_reorder(&ipacket, IPH_SIZE, dlthis->reorder_map);
+
+ /* Now read the packet image bits. Note: round the size up to
+ * the next multiple of 4 bytes; this is what checksum
+ * routines want. */
+ ipsize = BYTE_TO_HOST(DOFF_ALIGN(ipacket.packet_size));
+ if (ipsize > BYTE_TO_HOST(IMAGE_PACKET_SIZE)) {
+ dload_error(dlthis, isiz, ipsize);
+ return false;
+ }
+ if (dlthis->strm->read_buffer
+ (dlthis->strm, dest, ipsize) != ipsize) {
+ dload_error(dlthis, readstrm, "image packet");
+ return false;
+ }
+ /* reorder the bytes if need be */
+#if !defined(_BIG_ENDIAN) || (TARGET_AU_BITS > 16)
+ if (dlthis->reorder_map)
+ dload_reorder(dest, ipsize, dlthis->reorder_map);
+
+ checks = dload_checksum(dest, ipsize);
+#else
+ if (dlthis->dfile_hdr.df_byte_reshuffle !=
+ TARGET_ORDER(REORDER_MAP(BYTE_RESHUFFLE_VALUE))) {
+ /* put image bytes in big-endian order, not PC order */
+ dload_reorder(dest, ipsize,
+ TARGET_ORDER(dlthis->
+ dfile_hdr.df_byte_reshuffle));
+ }
+#if TARGET_AU_BITS > 8
+ checks = dload_reverse_checksum16(dest, ipsize);
+#else
+ checks = dload_reverse_checksum(dest, ipsize);
+#endif
+#endif
+ checks += dload_checksum(&ipacket, IPH_SIZE);
+
+ /* NYI: unable to handle relocation entries here. Reloc
+ * entries referring to fields that span the packet boundaries
+ * may result in packets whose sizes are not a multiple of
+ * 4 bytes. Our checksum implementation works on 32-bit words
+ * only. */
+ if (ipacket.num_relocs != 0) {
+ dload_error(dlthis, err_reloc, ipsize);
+ return false;
+ }
+
+ if (~checks) {
+ dload_error(dlthis, err_checksum, "image packet");
+ return false;
+ }
+
+ /*Advance destination ptr by the size of the just-read packet */
+ dest += ipsize;
+ }
+
+ return true;
+}
+
+/***************************************************************************
+ * Procedure dload_module_close
+ *
+ * Parameters:
+ * minfo Handle from dload_module_open for this module
+ *
+ * Effect:
+ * Releases any storage associated with the module handle. On return,
+ * the module handle is invalid.
+ *
+ * Returns:
+ * Zero for success. On error, the number of errors detected is returned.
+ * Individual errors are reported using syms->error_report(), where syms was
+ * an argument to dload_module_open
+ ************************************************************************* */
+void dload_module_close(void *minfo)
+{
+ struct dload_state *dlthis;
+
+ dlthis = (struct dload_state *)minfo;
+ if (!dlthis)
+ return;
+
+ if (dlthis->str_head)
+ dlthis->mysym->dload_deallocate(dlthis->mysym,
+ dlthis->str_head);
+
+ if (dlthis->sect_hdrs)
+ dlthis->mysym->dload_deallocate(dlthis->mysym,
+ dlthis->sect_hdrs);
+
+#if BITS_PER_AU > BITS_PER_BYTE
+ if (dlthis->xstrings)
+ dlthis->mysym->dload_deallocate(dlthis->mysym,
+ dlthis->xstrings);
+
+#endif
+
+ dlthis->mysym->dload_deallocate(dlthis->mysym, dlthis);
+}
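
The per-packet integrity check in dload_get_section() above boils down to one
rule: the 32-bit word sum of the packet header plus the 4-byte-aligned payload
must come out all-ones when the packet is intact, which is why any non-zero
~checks is treated as corruption. A stand-alone sketch of that rule follows;
sum32() and packet_ok() are hypothetical names, and the word-sum behaviour of
dload_checksum() is assumed from the 32-bit-alignment comments in the code:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* plain 32-bit word sum over a 4-byte-aligned buffer */
    static uint32_t sum32(const void *buf, size_t nbytes)
    {
            const uint32_t *p = buf;
            uint32_t sum = 0;

            for (nbytes /= sizeof(uint32_t); nbytes; nbytes--)
                    sum += *p++;
            return sum;
    }

    /* mirrors the "if (~checks) -> error" test in dload_get_section() */
    static bool packet_ok(const void *hdr, size_t hdr_bytes,
                          const void *payload, size_t payload_bytes)
    {
            uint32_t checks = sum32(hdr, hdr_bytes) +
                              sum32(payload, payload_bytes);

            return ~checks == 0;  /* file format makes the total all-ones */
    }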
diff --git a/drivers/staging/tidspbridge/dynload/header.h b/drivers/staging/tidspbridge/dynload/header.h
new file mode 100644
index 0000000..5cef360
--- /dev/null
+++ b/drivers/staging/tidspbridge/dynload/header.h
@@ -0,0 +1,55 @@
+/*
+ * header.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#define TRUE 1
+#define FALSE 0
+#ifndef NULL
+#define NULL 0
+#endif
+
+#include <linux/string.h>
+#define DL_STRCMP strcmp
+
+/* maximum parenthesis nesting in relocation stack expressions */
+#define STATIC_EXPR_STK_SIZE 10
+
+#include <linux/types.h>
+
+#include "doff.h"
+#include <dspbridge/dynamic_loader.h>
+#include "params.h"
+#include "dload_internal.h"
+#include "reloc_table.h"
+
+/*
+ * Plausibility limits
+ *
+ * These limits are imposed upon the input DOFF file as a check for validity.
+ * They are hard limits, in that the load will fail if they are exceeded.
+ * The numbers selected are arbitrary, in that the loader implementation does
+ * not require these limits.
+ */
+
+/* maximum number of bytes in string table */
+#define MAX_REASONABLE_STRINGTAB (0x100000)
+/* maximum number of code,data,etc. sections */
+#define MAX_REASONABLE_SECTIONS (200)
+/* maximum number of linker symbols */
+#define MAX_REASONABLE_SYMBOLS (100000)
+
+/* shift count to align F_BIG with DLOAD_LITTLE */
+#define ALIGN_COFF_ENDIANNESS 7
+#define ENDIANNESS_MASK (DF_BYTE_ORDER >> ALIGN_COFF_ENDIANNESS)
diff --git a/drivers/staging/tidspbridge/dynload/module_list.h b/drivers/staging/tidspbridge/dynload/module_list.h
new file mode 100644
index 0000000..a216bb1
--- /dev/null
+++ b/drivers/staging/tidspbridge/dynload/module_list.h
@@ -0,0 +1,159 @@
+/*
+ * dspbridge/mpu_driver/src/dynload/module_list.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Copyright (C) 2008 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+/*
+ * This C header file gives the layout of the data structure created by the
+ * dynamic loader to describe the set of modules loaded into the DSP.
+ *
+ * Linked List Structure:
+ * ----------------------
+ * The data structure defined here is a singly-linked list. The list
+ * represents the set of modules which are currently loaded in the DSP memory.
+ * The first entry in the list is a header record which contains a flag
+ * representing the state of the list. The rest of the entries in the list
+ * are module records.
+ *
+ * Global symbol _DLModules designates the first record in the list (i.e. the
+ * header record). This symbol must be defined in any program that wishes to
+ * use the DLLview plug-in.
+ *
+ * String Representation:
+ * ----------------------
+ * The string names of the module and its sections are stored in a block of
+ * memory which follows the module record itself. The strings are ordered:
+ * module name first, followed by section names in order from the first
+ * section to the last. String names are tightly packed arrays of 8-bit
+ * characters (two characters per 16-bit word on the C55x). Strings are
+ * zero-byte-terminated.
+ *
+ * Creating and updating the list:
+ * -------------------------------
+ * Upon loading a new module into the DSP memory the dynamic loader inserts a
+ * new module record as the first module record in the list. The fields of
+ * this module record are initialized to reflect the properties of the module.
+ * The dynamic loader does NOT increment the flag/counter in the list's header
+ * record.
+ *
+ * Upon unloading a module from the DSP memory the dynamic loader removes the
+ * module's record from this list. The dynamic loader also increments the
+ * flag/counter in the list's header record to indicate that the list has been
+ * changed.
+ */
+
+#ifndef _MODULE_LIST_H_
+#define _MODULE_LIST_H_
+
+#include <linux/types.h>
+
+/* Global pointer to the modules_header structure */
+#define MODULES_HEADER "_DLModules"
+#define MODULES_HEADER_NO_UNDERSCORE "DLModules"
+
+/* Initial version number */
+#define INIT_VERSION 1
+
+/* Verification number -- to be recorded in each module record */
+#define VERIFICATION 0x79
+
+/* forward declarations */
+struct dll_module;
+struct dll_sect;
+
+/* the first entry in the list is the modules_header record;
+ * its address is contained in the global _DLModules pointer */
+struct modules_header {
+
+ /*
+ * Address of the first dll_module record in the list or NULL.
+ * Note: for C55x this is a word address (C55x data is
+ * word-addressable)
+ */
+ u32 first_module;
+
+ /* Combined storage size (in target addressable units) of the
+ * dll_module record which follows this header record, or zero
+ * if the list is empty. This size includes the module's string table.
+ * Note: for C55x the unit is a 16-bit word */
+ u16 first_module_size;
+
+ /* Counter is incremented whenever a module record is removed from
+ * the list */
+ u16 update_flag;
+
+};
+
+/* for each 32-bits in above structure, a bitmap, LSB first, whose bits are:
+ * 0 => a 32-bit value, 1 => 2 16-bit values */
+/* swapping bitmap for type modules_header */
+#define MODULES_HEADER_BITMAP 0x2
+
+/* information recorded about each section in a module */
+struct dll_sect {
+
+ /* Load-time address of the section.
+ * Note: for C55x this is a byte address for program sections, and
+ * a word address for data sections. C55x program memory is
+ * byte-addressable, while data memory is word-addressable. */
+ u32 sect_load_adr;
+
+ /* Run-time address of the section.
+ * Note 1: for C55x this is a byte address for program sections, and
+ * a word address for data sections.
+ * Note 2: for C55x two most significant bits of this field indicate
+ * the section type: '00' for a code section, '11' for a data section
+ * (C55 addresses are really only 24-bits wide). */
+ u32 sect_run_adr;
+
+};
+
+/* the rest of the entries in the list are module records */
+struct dll_module {
+
+ /* Address of the next dll_module record in the list, or 0 if this is
+ * the last record in the list.
+ * Note: for C55x this is a word address (C55x data is
+ * word-addressable) */
+ u32 next_module;
+
+ /* Combined storage size (in target addressable units) of the
+ * dll_module record which follows this one, or zero if this is the
+ * last record in the list. This size includes the module's string
+ * table.
+ * Note: for C55x the unit is a 16-bit word. */
+ u16 next_module_size;
+
+ /* version number of the tooling; set to INIT_VERSION for Phase 1 */
+ u16 version;
+
+ /* the verification word; set to VERIFICATION */
+ u16 verification;
+
+ /* Number of sections in the sects array */
+ u16 num_sects;
+
+ /* Module's "unique" id; copy of the timestamp from the host
+ * COFF file */
+ u32 timestamp;
+
+ /* Array of num_sects elements of the module's section records */
+ struct dll_sect sects[1];
+};
+
+/* for each 32 bits in above structure, a bitmap, LSB first, whose bits are:
+ * 0 => a 32-bit value, 1 => 2 16-bit values */
+#define DLL_MODULE_BITMAP 0x6 /* swapping bitmap for type dll_module */
+
+#endif /* _MODULE_LIST_H_ */
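
Since the swapping bitmaps above (MODULES_HEADER_BITMAP, DLL_MODULE_BITMAP)
are easy to misread, here is a sketch of how a host-side tool might apply one
when the target's byte order differs from the host's. dllview_swap_record()
is hypothetical and only illustrates the "bit 0 = one 32-bit value, bit 1 =
two 16-bit values" rule stated above:

    #include <linux/swab.h>
    #include <linux/types.h>

    /* hypothetical helper: byte-swap one DLLview record image in place */
    static void dllview_swap_record(u32 *slots, unsigned int nslots, u32 bitmap)
    {
            unsigned int i;

            for (i = 0; i < nslots; i++, bitmap >>= 1) {
                    if (bitmap & 1) {               /* two 16-bit values */
                            u16 *half = (u16 *)&slots[i];

                            half[0] = swab16(half[0]);
                            half[1] = swab16(half[1]);
                    } else {                        /* one 32-bit value */
                            slots[i] = swab32(slots[i]);
                    }
            }
    }

For example, a struct modules_header image read from a target of opposite
endianness would be fixed up with
dllview_swap_record((u32 *)&mhdr, 2, MODULES_HEADER_BITMAP).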
diff --git a/drivers/staging/tidspbridge/dynload/params.h b/drivers/staging/tidspbridge/dynload/params.h
new file mode 100644
index 0000000..d797fcd
--- /dev/null
+++ b/drivers/staging/tidspbridge/dynload/params.h
@@ -0,0 +1,226 @@
+/*
+ * params.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * This file defines host and target properties for all machines
+ * supported by the dynamic loader. To be tedious...
+ *
+ * host: the machine on which the dynamic loader runs
+ * target: the machine that the dynamic loader is loading
+ *
+ * Host and target may or may not be the same, depending upon the particular
+ * use.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+/******************************************************************************
+ *
+ * Host Properties
+ *
+ **************************************************************************** */
+
+#define BITS_PER_BYTE 8 /* bits in the standard PC/SUN byte */
+#define LOG_BITS_PER_BYTE 3 /* log base 2 of same */
+#define BYTE_MASK ((1U<<BITS_PER_BYTE)-1)
+
+#if defined(__TMS320C55X__) || defined(_TMS320C5XX)
+#define BITS_PER_AU 16
+#define LOG_BITS_PER_AU 4
+ /* use this print string in error messages for uint32_t */
+#define FMT_UI32 "0x%lx"
+#define FMT8_UI32 "%08lx" /* same but no 0x, fixed width field */
+#else
+/* bits in the smallest addressable data storage unit */
+#define BITS_PER_AU 8
+/* log base 2 of the same; useful for shift counts */
+#define LOG_BITS_PER_AU 3
+#define FMT_UI32 "0x%x"
+#define FMT8_UI32 "%08x"
+#endif
+
+/* generic fastest method for swapping bytes and shorts */
+#define SWAP32BY16(zz) (((zz) << 16) | ((zz) >> 16))
+#define SWAP16BY8(zz) (((zz) << 8) | ((zz) >> 8))
+
+/* !! don't be tempted to insert type definitions here; use <stdint.h> !! */
+
+/******************************************************************************
+ *
+ * Target Properties
+ *
+ **************************************************************************** */
+
+/*-------------------------------------------------------------------------- */
+/* TMS320C6x Target Specific Parameters (byte-addressable) */
+/*-------------------------------------------------------------------------- */
+#if TMS32060
+#define MEMORG 0x0L /* Size of configured memory */
+#define MEMSIZE 0x0L /* (full address space) */
+
+#define CINIT_ALIGN 8 /* alignment of cinit record in TDATA AUs */
+#define CINIT_COUNT 4 /* width of count field in TDATA AUs */
+#define CINIT_ADDRESS 4 /* width of address field in TDATA AUs */
+#define CINIT_PAGE_BITS 0 /* Number of LSBs of address that
+ * are page number */
+
+#define LENIENT_SIGNED_RELEXPS 0 /* DOES SIGNED ALLOW MAX UNSIGNED */
+
+#undef TARGET_ENDIANNESS /* may be big or little endian */
+
+/* align a target address to a word boundary */
+#define TARGET_WORD_ALIGN(zz) (((zz) + 0x3) & -0x4)
+#endif
+
+/*--------------------------------------------------------------------------
+ *
+ * DEFAULT SETTINGS and DERIVED PROPERTIES
+ *
+ * This section establishes defaults for values not specified above
+ *-------------------------------------------------------------------------- */
+#ifndef TARGET_AU_BITS
+#define TARGET_AU_BITS 8 /* width of the target addressable unit */
+#define LOG_TARGET_AU_BITS 3 /* log2 of same */
+#endif
+
+#ifndef CINIT_DEFAULT_PAGE
+#define CINIT_DEFAULT_PAGE 0 /* default .cinit page number */
+#endif
+
+#ifndef DATA_RUN2LOAD
+#define DATA_RUN2LOAD(zz) (zz) /* translate data run address to load address */
+#endif
+
+#ifndef DBG_LIST_PAGE
+#define DBG_LIST_PAGE 0 /* page number for .dllview section */
+#endif
+
+#ifndef TARGET_WORD_ALIGN
+/* align a target address to a word boundary */
+#define TARGET_WORD_ALIGN(zz) (zz)
+#endif
+
+#ifndef TDATA_TO_TADDR
+#define TDATA_TO_TADDR(zz) (zz) /* target data address to target AU address */
+#define TADDR_TO_TDATA(zz) (zz) /* target AU address to target data address */
+#define TDATA_AU_BITS TARGET_AU_BITS /* bits per data AU */
+#define LOG_TDATA_AU_BITS LOG_TARGET_AU_BITS
+#endif
+
+/*
+ *
+ * Useful properties and conversions derived from the above
+ *
+ */
+
+/*
+ * Conversions between host and target addresses
+ */
+#if LOG_BITS_PER_AU == LOG_TARGET_AU_BITS
+/* translate target addressable unit to host address */
+#define TADDR_TO_HOST(x) (x)
+/* translate host address to target addressable unit */
+#define HOST_TO_TADDR(x) (x)
+#elif LOG_BITS_PER_AU > LOG_TARGET_AU_BITS
+#define TADDR_TO_HOST(x) ((x) >> (LOG_BITS_PER_AU-LOG_TARGET_AU_BITS))
+#define HOST_TO_TADDR(x) ((x) << (LOG_BITS_PER_AU-LOG_TARGET_AU_BITS))
+#else
+#define TADDR_TO_HOST(x) ((x) << (LOG_TARGET_AU_BITS-LOG_BITS_PER_AU))
+#define HOST_TO_TADDR(x) ((x) >> (LOG_TARGET_AU_BITS-LOG_BITS_PER_AU))
+#endif
+
+#if LOG_BITS_PER_AU == LOG_TDATA_AU_BITS
+/* translate target addressable unit to host address */
+#define TDATA_TO_HOST(x) (x)
+/* translate host address to target addressable unit */
+#define HOST_TO_TDATA(x) (x)
+/* translate host address to target addressable unit, round up */
+#define HOST_TO_TDATA_ROUND(x) (x)
+/* byte offset to host offset, rounded up for TDATA size */
+#define BYTE_TO_HOST_TDATA_ROUND(x) BYTE_TO_HOST_ROUND(x)
+#elif LOG_BITS_PER_AU > LOG_TDATA_AU_BITS
+#define TDATA_TO_HOST(x) ((x) >> (LOG_BITS_PER_AU-LOG_TDATA_AU_BITS))
+#define HOST_TO_TDATA(x) ((x) << (LOG_BITS_PER_AU-LOG_TDATA_AU_BITS))
+#define HOST_TO_TDATA_ROUND(x) ((x) << (LOG_BITS_PER_AU-LOG_TDATA_AU_BITS))
+#define BYTE_TO_HOST_TDATA_ROUND(x) BYTE_TO_HOST_ROUND(x)
+#else
+#define TDATA_TO_HOST(x) ((x) << (LOG_TDATA_AU_BITS-LOG_BITS_PER_AU))
+#define HOST_TO_TDATA(x) ((x) >> (LOG_TDATA_AU_BITS-LOG_BITS_PER_AU))
+#define HOST_TO_TDATA_ROUND(x) (((x) +\
+ (1<<(LOG_TDATA_AU_BITS-LOG_BITS_PER_AU))-1) >>\
+ (LOG_TDATA_AU_BITS-LOG_BITS_PER_AU))
+#define BYTE_TO_HOST_TDATA_ROUND(x) (BYTE_TO_HOST((x) +\
+ (1<<(LOG_TDATA_AU_BITS-LOG_BITS_PER_BYTE))-1) &\
+ -(TDATA_AU_BITS/BITS_PER_AU))
+#endif
+
+/*
+ * Input in DOFF format is always expressed in bytes, regardless of the loading host,
+ * so we wind up converting from bytes to target and host units even when the
+ * host is not a byte machine.
+ */
+#if LOG_BITS_PER_AU == LOG_BITS_PER_BYTE
+#define BYTE_TO_HOST(x) (x)
+#define BYTE_TO_HOST_ROUND(x) (x)
+#define HOST_TO_BYTE(x) (x)
+#elif LOG_BITS_PER_AU >= LOG_BITS_PER_BYTE
+#define BYTE_TO_HOST(x) ((x) >> (LOG_BITS_PER_AU - LOG_BITS_PER_BYTE))
+#define BYTE_TO_HOST_ROUND(x) ((x + (BITS_PER_AU/BITS_PER_BYTE-1)) >>\
+ (LOG_BITS_PER_AU - LOG_BITS_PER_BYTE))
+#define HOST_TO_BYTE(x) ((x) << (LOG_BITS_PER_AU - LOG_BITS_PER_BYTE))
+#else
+/* lets not try to deal with sub-8-bit byte machines */
+#endif
+
+#if LOG_TARGET_AU_BITS == LOG_BITS_PER_BYTE
+/* translate target addressable unit to byte address */
+#define TADDR_TO_BYTE(x) (x)
+/* translate byte address to target addressable unit */
+#define BYTE_TO_TADDR(x) (x)
+#elif LOG_TARGET_AU_BITS > LOG_BITS_PER_BYTE
+#define TADDR_TO_BYTE(x) ((x) << (LOG_TARGET_AU_BITS-LOG_BITS_PER_BYTE))
+#define BYTE_TO_TADDR(x) ((x) >> (LOG_TARGET_AU_BITS-LOG_BITS_PER_BYTE))
+#else
+/* lets not try to deal with sub-8-bit byte machines */
+#endif
+
+#ifdef _BIG_ENDIAN
+#define HOST_ENDIANNESS 1
+#else
+#define HOST_ENDIANNESS 0
+#endif
+
+#ifdef TARGET_ENDIANNESS
+#define TARGET_ENDIANNESS_DIFFERS(rtend) (HOST_ENDIANNESS^TARGET_ENDIANNESS)
+#elif HOST_ENDIANNESS
+#define TARGET_ENDIANNESS_DIFFERS(rtend) (!(rtend))
+#else
+#define TARGET_ENDIANNESS_DIFFERS(rtend) (rtend)
+#endif
+
+/* the unit in which we process target image data */
+#if TARGET_AU_BITS <= 8
+typedef u8 tgt_au_t;
+#elif TARGET_AU_BITS <= 16
+typedef u16 tgt_au_t;
+#else
+typedef u32 tgt_au_t;
+#endif
+
+/* size of that unit */
+#if TARGET_AU_BITS < BITS_PER_AU
+#define TGTAU_BITS BITS_PER_AU
+#define LOG_TGTAU_BITS LOG_BITS_PER_AU
+#else
+#define TGTAU_BITS TARGET_AU_BITS
+#define LOG_TGTAU_BITS LOG_TARGET_AU_BITS
+#endif
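
A worked example of the byte/host conversions above may help: on the usual
8-bit host they all collapse to the identity, while on a 16-bit-addressable
host (the __TMS320C55X__ branch, BITS_PER_AU == 16, LOG_BITS_PER_AU == 4) they
become shifts by one. The EX_* constants below restate that 16-bit case only,
so the arithmetic can be checked in isolation; they are not part of the driver:

    #include <assert.h>

    #define EX_LOG_BITS_PER_AU      4
    #define EX_LOG_BITS_PER_BYTE    3
    #define EX_BYTE_TO_HOST(x)       ((x) >> (EX_LOG_BITS_PER_AU - EX_LOG_BITS_PER_BYTE))
    #define EX_BYTE_TO_HOST_ROUND(x) (((x) + 1) >> (EX_LOG_BITS_PER_AU - EX_LOG_BITS_PER_BYTE))
    #define EX_HOST_TO_BYTE(x)       ((x) << (EX_LOG_BITS_PER_AU - EX_LOG_BITS_PER_BYTE))

    int main(void)
    {
            assert(EX_BYTE_TO_HOST(10) == 5);       /* 10 bytes fill 5 16-bit AUs  */
            assert(EX_BYTE_TO_HOST_ROUND(7) == 4);  /* 7 bytes need 4 AUs, rounded */
            assert(EX_HOST_TO_BYTE(5) == 10);       /* and back again              */
            return 0;
    }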
diff --git a/drivers/staging/tidspbridge/dynload/reloc.c b/drivers/staging/tidspbridge/dynload/reloc.c
new file mode 100644
index 0000000..316a38c
--- /dev/null
+++ b/drivers/staging/tidspbridge/dynload/reloc.c
@@ -0,0 +1,484 @@
+/*
+ * reloc.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#include "header.h"
+
+#if TMS32060
+/* the magic symbol for the start of BSS */
+static const char bsssymbol[] = { ".bss" };
+#endif
+
+#if TMS32060
+#include "reloc_table_c6000.c"
+#endif
+
+#if TMS32060
+/* From coff.h - ignore these relocation operations */
+#define R_C60ALIGN 0x76 /* C60: Alignment info for compressor */
+#define R_C60FPHEAD 0x77 /* C60: Explicit assembly directive */
+#define R_C60NOCMP 0x100 /* C60: Don't compress this code scn */
+#endif
+
+/**************************************************************************
+ * Procedure dload_unpack
+ *
+ * Parameters:
+ * data pointer to storage unit containing lowest host address of
+ * image data
+ * fieldsz Size of bit field, 0 < fieldsz <= sizeof(rvalue)*BITS_PER_AU
+ * offset Offset from LSB, 0 <= offset < BITS_PER_AU
+ * sgn Signedness of the field (ROP_SGN, ROP_UNS, ROP_MAX, ROP_ANY)
+ *
+ * Effect:
+ * Extracts the specified field and returns it.
+ ************************************************************************* */
+rvalue dload_unpack(struct dload_state *dlthis, tgt_au_t * data, int fieldsz,
+ int offset, unsigned sgn)
+{
+ register rvalue objval;
+ register int shift, direction;
+ register tgt_au_t *dp = data;
+
+ fieldsz -= 1; /* avoid nastiness with 32-bit shift of 32-bit value */
+ /* * collect up enough bits to contain the desired field */
+ if (TARGET_BIG_ENDIAN) {
+ dp += (fieldsz + offset) >> LOG_TGTAU_BITS;
+ direction = -1;
+ } else
+ direction = 1;
+ objval = *dp >> offset;
+ shift = TGTAU_BITS - offset;
+ while (shift <= fieldsz) {
+ dp += direction;
+ objval += (rvalue) *dp << shift;
+ shift += TGTAU_BITS;
+ }
+
+ /* * sign or zero extend the value appropriately */
+ if (sgn == ROP_UNS)
+ objval &= (2 << fieldsz) - 1;
+ else {
+ shift = sizeof(rvalue) * BITS_PER_AU - 1 - fieldsz;
+ objval = (objval << shift) >> shift;
+ }
+
+ return objval;
+
+} /* dload_unpack */
+
+/**************************************************************************
+ * Procedure dload_repack
+ *
+ * Parameters:
+ * val Value to insert
+ * data Pointer to storage unit containing lowest host address of
+ * image data
+ * fieldsz Size of bit field, 0 < fieldsz <= sizeof(rvalue)*BITS_PER_AU
+ * offset Offset from LSB, 0 <= offset < BITS_PER_AU
+ * sgn Signedness of the field (ROP_SGN, ROP_UNS, ROP_MAX, ROP_ANY)
+ *
+ * Effect:
+ * Stuffs the specified value in the specified field. Returns 0 for
+ * success
+ * or 1 if the value will not fit in the specified field according to the
+ * specified signedness rule.
+ ************************************************************************* */
+static const unsigned char ovf_limit[] = { 1, 2, 2 };
+
+int dload_repack(struct dload_state *dlthis, rvalue val, tgt_au_t * data,
+ int fieldsz, int offset, unsigned sgn)
+{
+ register urvalue objval, mask;
+ register int shift, direction;
+ register tgt_au_t *dp = data;
+
+ fieldsz -= 1; /* avoid nastiness with 32-bit shift of 32-bit value */
+ /* clip the bits */
+ mask = ((UINT32_C(2) << fieldsz) - 1);
+ objval = (val & mask);
+ /* * store the bits through the specified mask */
+ if (TARGET_BIG_ENDIAN) {
+ dp += (fieldsz + offset) >> LOG_TGTAU_BITS;
+ direction = -1;
+ } else
+ direction = 1;
+
+ /* insert LSBs */
+ *dp = (*dp & ~(mask << offset)) + (objval << offset);
+ shift = TGTAU_BITS - offset;
+ /* align mask and objval with AU boundary */
+ objval >>= shift;
+ mask >>= shift;
+
+ while (mask) {
+ dp += direction;
+ *dp = (*dp & ~mask) + objval;
+ objval >>= TGTAU_BITS;
+ mask >>= TGTAU_BITS;
+ }
+
+ /*
+ * check for overflow
+ */
+ if (sgn) {
+ unsigned tmp = (val >> fieldsz) + (sgn & 0x1);
+ if (tmp > ovf_limit[sgn - 1])
+ return 1;
+ }
+ return 0;
+
+} /* dload_repack */
+
+/* lookup table for the scaling amount in a C6x instruction */
+#if TMS32060
+#define SCALE_BITS 4 /* there are 4 bits in the scale field */
+#define SCALE_MASK 0x7 /* we really only use the bottom 3 bits */
+static const u8 c60_scale[SCALE_MASK + 1] = {
+ 1, 0, 0, 0, 1, 1, 2, 2
+};
+#endif
+
+/**************************************************************************
+ * Procedure dload_relocate
+ *
+ * Parameters:
+ * data Pointer to base of image data
+ * rp Pointer to relocation operation
+ *
+ * Effect:
+ * Performs the specified relocation operation
+ ************************************************************************* */
+void dload_relocate(struct dload_state *dlthis, tgt_au_t * data,
+ struct reloc_record_t *rp, bool *tramps_generated,
+ bool second_pass)
+{
+ rvalue val, reloc_amt, orig_val = 0;
+ unsigned int fieldsz = 0;
+ unsigned int offset = 0;
+ unsigned int reloc_info = 0;
+ unsigned int reloc_action = 0;
+ register int rx = 0;
+ rvalue *stackp = NULL;
+ int top;
+ struct local_symbol *svp = NULL;
+#ifdef RFV_SCALE
+ unsigned int scale = 0;
+#endif
+ struct image_packet_t *img_pkt = NULL;
+
+ /* The image packet data struct is only used during first pass
+ * relocation in the event that a trampoline is needed. 2nd pass
+ * relocation doesn't guarantee that data is coming from an
+ * image_packet_t structure. See cload.c, dload_data for how img_data is
+ * set. If that changes this needs to be updated!!! */
+ if (second_pass == false)
+ img_pkt = (struct image_packet_t *)((u8 *) data -
+ sizeof(struct
+ image_packet_t));
+
+ rx = HASH_FUNC(rp->TYPE);
+ while (rop_map1[rx] != rp->TYPE) {
+ rx = HASH_L(rop_map2[rx]);
+ if (rx < 0) {
+#if TMS32060
+ switch (rp->TYPE) {
+ case R_C60ALIGN:
+ case R_C60NOCMP:
+ case R_C60FPHEAD:
+ /* Ignore these reloc types and return */
+ break;
+ default:
+ /* Unknown reloc type, print error and return */
+ dload_error(dlthis, "Bad coff operator 0x%x",
+ rp->TYPE);
+ }
+#else
+ dload_error(dlthis, "Bad coff operator 0x%x", rp->TYPE);
+#endif
+ return;
+ }
+ }
+ rx = HASH_I(rop_map2[rx]);
+ if ((rx < (sizeof(rop_action) / sizeof(u16)))
+ && (rx < (sizeof(rop_info) / sizeof(u16))) && (rx > 0)) {
+ reloc_action = rop_action[rx];
+ reloc_info = rop_info[rx];
+ } else {
+ dload_error(dlthis, "Buffer Overflow - Array Index Out "
+ "of Bounds");
+ }
+
+ /* Compute the relocation amount for the referenced symbol, if any */
+ reloc_amt = rp->UVAL;
+ if (RFV_SYM(reloc_info)) { /* relocation uses a symbol reference */
+ /* If this is first pass, use the module local symbol table,
+ * else use the trampoline symbol table. */
+ if (second_pass == false) {
+ if ((u32) rp->SYMNDX < dlthis->dfile_hdr.df_no_syms) {
+ /* real symbol reference */
+ svp = &dlthis->local_symtab[rp->SYMNDX];
+ reloc_amt = (RFV_SYM(reloc_info) == ROP_SYMD) ?
+ svp->delta : svp->value;
+ }
+ /* reloc references current section */
+ else if (rp->SYMNDX == -1) {
+ reloc_amt = (RFV_SYM(reloc_info) == ROP_SYMD) ?
+ dlthis->delta_runaddr :
+ dlthis->image_secn->run_addr;
+ }
+ }
+ }
+ /* relocation uses a symbol reference */
+ /* Handle stack adjustment */
+ val = 0;
+ top = RFV_STK(reloc_info);
+ if (top) {
+ top += dlthis->relstkidx - RSTK_UOP;
+ if (top >= STATIC_EXPR_STK_SIZE) {
+ dload_error(dlthis,
+ "Expression stack overflow in %s at offset "
+ FMT_UI32, dlthis->image_secn->name,
+ rp->vaddr + dlthis->image_offset);
+ return;
+ }
+ val = dlthis->relstk[dlthis->relstkidx];
+ dlthis->relstkidx = top;
+ stackp = &dlthis->relstk[top];
+ }
+ /* Derive field position and size, if we need them */
+ if (reloc_info & ROP_RW) { /* read or write action in our future */
+ fieldsz = RFV_WIDTH(reloc_action);
+ if (fieldsz) { /* field info from table */
+ offset = RFV_POSN(reloc_action);
+ if (TARGET_BIG_ENDIAN)
+ /* make sure vaddr is the lowest target
+ * address containing bits */
+ rp->vaddr += RFV_BIGOFF(reloc_info);
+ } else { /* field info from relocation op */
+ fieldsz = rp->FIELDSZ;
+ offset = rp->OFFSET;
+ if (TARGET_BIG_ENDIAN)
+ /* make sure vaddr is the lowest target
+ address containing bits */
+ rp->vaddr += (rp->WORDSZ - offset - fieldsz)
+ >> LOG_TARGET_AU_BITS;
+ }
+ data = (tgt_au_t *) ((char *)data + TADDR_TO_HOST(rp->vaddr));
+ /* compute lowest host location of referenced data */
+#if BITS_PER_AU > TARGET_AU_BITS
+ /* conversion from target address to host address may lose
+ address bits; add loss to offset */
+ if (TARGET_BIG_ENDIAN) {
+ offset += -((rp->vaddr << LOG_TARGET_AU_BITS) +
+ offset + fieldsz) &
+ (BITS_PER_AU - TARGET_AU_BITS);
+ } else {
+ offset += (rp->vaddr << LOG_TARGET_AU_BITS) &
+ (BITS_PER_AU - 1);
+ }
+#endif
+#ifdef RFV_SCALE
+ scale = RFV_SCALE(reloc_info);
+#endif
+ }
+ /* read the object value from the current image, if so ordered */
+ if (reloc_info & ROP_R) {
+ /* relocation reads current image value */
+ val = dload_unpack(dlthis, data, fieldsz, offset,
+ RFV_SIGN(reloc_info));
+ /* Save off the original value in case the relo overflows and
+ * we can trampoline it. */
+ orig_val = val;
+
+#ifdef RFV_SCALE
+ val <<= scale;
+#endif
+ }
+ /* perform the necessary arithmetic */
+ switch (RFV_ACTION(reloc_action)) { /* relocation actions */
+ case RACT_VAL:
+ break;
+ case RACT_ASGN:
+ val = reloc_amt;
+ break;
+ case RACT_ADD:
+ val += reloc_amt;
+ break;
+ case RACT_PCR:
+ /*-----------------------------------------------------------
+ * Handle special cases of jumping from absolute sections
+ * (special reloc type) or to absolute destination
+ * (symndx == -1). In either case, set the appropriate
+ * relocation amount to 0.
+ *----------------------------------------------------------- */
+ if (rp->SYMNDX == -1)
+ reloc_amt = 0;
+ val += reloc_amt - dlthis->delta_runaddr;
+ break;
+ case RACT_ADDISP:
+ val += rp->R_DISP + reloc_amt;
+ break;
+ case RACT_ASGPC:
+ val = dlthis->image_secn->run_addr + reloc_amt;
+ break;
+ case RACT_PLUS:
+ if (stackp != NULL)
+ val += *stackp;
+ break;
+ case RACT_SUB:
+ if (stackp != NULL)
+ val = *stackp - val;
+ break;
+ case RACT_NEG:
+ val = -val;
+ break;
+ case RACT_MPY:
+ if (stackp != NULL)
+ val *= *stackp;
+ break;
+ case RACT_DIV:
+ if (stackp != NULL)
+ val = *stackp / val;
+ break;
+ case RACT_MOD:
+ if (stackp != NULL)
+ val = *stackp % val;
+ break;
+ case RACT_SR:
+ if (val >= sizeof(rvalue) * BITS_PER_AU)
+ val = 0;
+ else if (stackp != NULL)
+ val = (urvalue) *stackp >> val;
+ break;
+ case RACT_ASR:
+ if (val >= sizeof(rvalue) * BITS_PER_AU)
+ val = sizeof(rvalue) * BITS_PER_AU - 1;
+ else if (stackp != NULL)
+ val = *stackp >> val;
+ break;
+ case RACT_SL:
+ if (val >= sizeof(rvalue) * BITS_PER_AU)
+ val = 0;
+ else if (stackp != NULL)
+ val = *stackp << val;
+ break;
+ case RACT_AND:
+ if (stackp != NULL)
+ val &= *stackp;
+ break;
+ case RACT_OR:
+ if (stackp != NULL)
+ val |= *stackp;
+ break;
+ case RACT_XOR:
+ if (stackp != NULL)
+ val ^= *stackp;
+ break;
+ case RACT_NOT:
+ val = ~val;
+ break;
+#if TMS32060
+ case RACT_C6SECT:
+ /* actually needed address of secn containing symbol */
+ if (svp != NULL) {
+ if (rp->SYMNDX >= 0)
+ if (svp->secnn > 0)
+ reloc_amt = dlthis->ldr_sections
+ [svp->secnn - 1].run_addr;
+ }
+ /* !!! FALL THRU !!! */
+ case RACT_C6BASE:
+ if (dlthis->bss_run_base == 0) {
+ struct dynload_symbol *symp;
+ symp = dlthis->mysym->find_matching_symbol
+ (dlthis->mysym, bsssymbol);
+ /* lookup value of global BSS base */
+ if (symp)
+ dlthis->bss_run_base = symp->value;
+ else
+ dload_error(dlthis,
+ "Global BSS base referenced in %s "
+ "offset" FMT_UI32 " but not "
+ "defined",
+ dlthis->image_secn->name,
+ rp->vaddr + dlthis->image_offset);
+ }
+ reloc_amt -= dlthis->bss_run_base;
+ /* !!! FALL THRU !!! */
+ case RACT_C6DSPL:
+ /* scale factor determined by 3 LSBs of field */
+ scale = c60_scale[val & SCALE_MASK];
+ offset += SCALE_BITS;
+ fieldsz -= SCALE_BITS;
+ val >>= SCALE_BITS; /* ignore the scale field hereafter */
+ val <<= scale;
+ val += reloc_amt; /* do the usual relocation */
+ if (((1 << scale) - 1) & val)
+ dload_error(dlthis,
+ "Unaligned reference in %s offset "
+ FMT_UI32, dlthis->image_secn->name,
+ rp->vaddr + dlthis->image_offset);
+ break;
+#endif
+ } /* relocation actions */
+ /* * Put back result as required */
+ if (reloc_info & ROP_W) { /* relocation writes image value */
+#ifdef RFV_SCALE
+ val >>= scale;
+#endif
+ if (dload_repack(dlthis, val, data, fieldsz, offset,
+ RFV_SIGN(reloc_info))) {
+ /* Check to see if this relo can be trampolined,
+ * but only in first phase relocation. 2nd phase
+ * relocation cannot trampoline. */
+ if ((second_pass == false) &&
+ (dload_tramp_avail(dlthis, rp) == true)) {
+
+ /* Before generating the trampoline, restore
+ * the value to its original so the 2nd pass
+ * relo will work. */
+ dload_repack(dlthis, orig_val, data, fieldsz,
+ offset, RFV_SIGN(reloc_info));
+ if (!dload_tramp_generate(dlthis,
+ (dlthis->image_secn -
+ dlthis->ldr_sections),
+ dlthis->image_offset,
+ img_pkt, rp)) {
+ dload_error(dlthis,
+ "Failed to "
+ "generate trampoline for "
+ "bit overflow");
+ dload_error(dlthis,
+ "Relocation val " FMT_UI32
+ " overflows %d bits in %s "
+ "offset " FMT_UI32, val,
+ fieldsz,
+ dlthis->image_secn->name,
+ dlthis->image_offset +
+ rp->vaddr);
+ } else
+ *tramps_generated = true;
+ } else {
+ dload_error(dlthis, "Relocation value "
+ FMT_UI32 " overflows %d bits in %s"
+ " offset " FMT_UI32, val, fieldsz,
+ dlthis->image_secn->name,
+ dlthis->image_offset + rp->vaddr);
+ }
+ }
+ } else if (top)
+ *stackp = val;
+} /* dload_relocate */
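
dload_unpack()/dload_repack() above are easiest to follow with concrete
numbers. The usage sketch below assumes the default 8-bit target addressable
unit and a little-endian target, so two consecutive AUs {0xAB, 0xCD} hold the
16-bit value 0xCDAB; reloc_field_demo() is hypothetical and exists only to
show the calling convention:

    static void reloc_field_demo(struct dload_state *dlthis)
    {
            tgt_au_t buf[2] = { 0xAB, 0xCD };       /* little-endian 0xCDAB */
            rvalue v;

            /* 12-bit unsigned field starting at bit offset 4 -> 0xCDA */
            v = dload_unpack(dlthis, buf, 12, 4, ROP_UNS);

            /* write it back incremented; a non-zero return means the value
             * no longer fits the field under the given signedness rule */
            if (dload_repack(dlthis, v + 1, buf, 12, 4, ROP_UNS))
                    dload_error(dlthis, "field overflow");
    }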
diff --git a/drivers/staging/tidspbridge/dynload/reloc_table.h b/drivers/staging/tidspbridge/dynload/reloc_table.h
new file mode 100644
index 0000000..6aab03d
--- /dev/null
+++ b/drivers/staging/tidspbridge/dynload/reloc_table.h
@@ -0,0 +1,102 @@
+/*
+ * reloc_table.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef _RELOC_TABLE_H_
+#define _RELOC_TABLE_H_
+/*
+ * Table of relocation operator properties
+ */
+#include <linux/types.h>
+
+/* How does this relocation operation access the program image? */
+#define ROP_N 0 /* does not access image */
+#define ROP_R 1 /* read from image */
+#define ROP_W 2 /* write to image */
+#define ROP_RW 3 /* read from and write to image */
+
+/* For program image access, what are the overflow rules for the bit field? */
+/* Beware! Procedure repack depends on this encoding */
+#define ROP_ANY 0 /* no overflow ever, just truncate the value */
+#define ROP_SGN 1 /* signed field */
+#define ROP_UNS 2 /* unsigned field */
+#define ROP_MAX 3 /* allow maximum range of either signed or unsigned */
+
+/* How does the relocation operation use the symbol reference */
+#define ROP_IGN 0 /* no symbol is referenced */
+#define ROP_LIT 0 /* use rp->UVAL literal field */
+#define ROP_SYM 1 /* symbol value is used in relocation */
+#define ROP_SYMD 2 /* delta value vs last link is used */
+
+/* How does the reloc op use the stack? */
+#define RSTK_N 0 /* Does not use */
+#define RSTK_POP 1 /* Does a POP */
+#define RSTK_UOP 2 /* Unary op, stack position unaffected */
+#define RSTK_PSH 3 /* Does a push */
+
+/*
+ * Computational actions performed by the dynamic loader
+ */
+enum dload_actions {
+ /* don't alter the current val (from stack or mem fetch) */
+ RACT_VAL,
+ /* set value to reference amount (from symbol reference) */
+ RACT_ASGN,
+ RACT_ADD, /* add reference to value */
+ RACT_PCR, /* add reference minus PC delta to value */
+ RACT_ADDISP, /* add reference plus R_DISP */
+ RACT_ASGPC, /* set value to section addr plus reference */
+
+ RACT_PLUS, /* stack + */
+ RACT_SUB, /* stack - */
+ RACT_NEG, /* stack unary - */
+
+ RACT_MPY, /* stack * */
+ RACT_DIV, /* stack / */
+ RACT_MOD, /* stack % */
+
+ RACT_SR, /* stack unsigned >> */
+ RACT_ASR, /* stack signed >> */
+ RACT_SL, /* stack << */
+ RACT_AND, /* stack & */
+ RACT_OR, /* stack | */
+ RACT_XOR, /* stack ^ */
+ RACT_NOT, /* stack ~ */
+ RACT_C6SECT, /* for C60 R_SECT op */
+ RACT_C6BASE, /* for C60 R_BASE op */
+ RACT_C6DSPL, /* for C60 scaled 15-bit displacement */
+ RACT_PCR23T /* for ARM Thumb long branch */
+};
+
+/*
+ * macros used to extract values
+ */
+#define RFV_POSN(aaa) ((aaa) & 0xF)
+#define RFV_WIDTH(aaa) (((aaa) >> 4) & 0x3F)
+#define RFV_ACTION(aaa) ((aaa) >> 10)
+
+#define RFV_SIGN(iii) (((iii) >> 2) & 0x3)
+#define RFV_SYM(iii) (((iii) >> 4) & 0x3)
+#define RFV_STK(iii) (((iii) >> 6) & 0x3)
+#define RFV_ACCS(iii) ((iii) & 0x3)
+
+#if (TMS32060)
+#define RFV_SCALE(iii) ((iii) >> 11)
+#define RFV_BIGOFF(iii) (((iii) >> 8) & 0x7)
+#else
+#define RFV_BIGOFF(iii) ((iii) >> 8)
+#endif
+
+#endif /* _RELOC_TABLE_H_ */
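
To see how the packed action/info words driven by these macros are read, take
the pair 2304/35 (both values also occur in the C6000 tables later in this
series): the action word encodes "RACT_ADD into a 16-bit field at bit position
0", and the info word encodes read/write access, no overflow checking, and a
symbol-delta reference. A minimal check of that decode, with the macros
restated locally as EX_* so it compiles on its own:

    #include <assert.h>

    #define EX_RFV_POSN(a)    ((a) & 0xF)
    #define EX_RFV_WIDTH(a)   (((a) >> 4) & 0x3F)
    #define EX_RFV_ACTION(a)  ((a) >> 10)
    #define EX_RFV_SIGN(i)    (((i) >> 2) & 0x3)
    #define EX_RFV_SYM(i)     (((i) >> 4) & 0x3)
    #define EX_RFV_ACCS(i)    ((i) & 0x3)

    int main(void)
    {
            assert(EX_RFV_ACTION(2304) == 2);       /* RACT_ADD          */
            assert(EX_RFV_WIDTH(2304) == 16);       /* 16-bit field      */
            assert(EX_RFV_POSN(2304) == 0);         /* at bit position 0 */
            assert(EX_RFV_ACCS(35) == 3);           /* ROP_RW            */
            assert(EX_RFV_SIGN(35) == 0);           /* ROP_ANY           */
            assert(EX_RFV_SYM(35) == 2);            /* ROP_SYMD          */
            return 0;
    }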
diff --git a/drivers/staging/tidspbridge/dynload/reloc_table_c6000.c b/drivers/staging/tidspbridge/dynload/reloc_table_c6000.c
new file mode 100644
index 0000000..8ae3b38
--- /dev/null
+++ b/drivers/staging/tidspbridge/dynload/reloc_table_c6000.c
@@ -0,0 +1,257 @@
+/*
+ * reloc_table_c6000.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+/* Tables generated for c6000 */
+
+#define HASH_FUNC(zz) (((((zz) + 1) * UINT32_C(1845)) >> 11) & 63)
+#define HASH_L(zz) ((zz) >> 8)
+#define HASH_I(zz) ((zz) & 0xFF)
+
+static const u16 rop_map1[] = {
+ 0,
+ 1,
+ 2,
+ 20,
+ 4,
+ 5,
+ 6,
+ 15,
+ 80,
+ 81,
+ 82,
+ 83,
+ 84,
+ 85,
+ 86,
+ 87,
+ 17,
+ 18,
+ 19,
+ 21,
+ 16,
+ 16394,
+ 16404,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 32,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 40,
+ 112,
+ 113,
+ 65535,
+ 16384,
+ 16385,
+ 16386,
+ 16387,
+ 16388,
+ 16389,
+ 16390,
+ 16391,
+ 16392,
+ 16393,
+ 16395,
+ 16396,
+ 16397,
+ 16398,
+ 16399,
+ 16400,
+ 16401,
+ 16402,
+ 16403,
+ 16405,
+ 16406,
+ 65535,
+ 65535,
+ 65535
+};
+
+static const s16 rop_map2[] = {
+ -256,
+ -255,
+ -254,
+ -245,
+ -253,
+ -252,
+ -251,
+ -250,
+ -241,
+ -240,
+ -239,
+ -238,
+ -237,
+ -236,
+ 1813,
+ 5142,
+ -248,
+ -247,
+ 778,
+ -244,
+ -249,
+ -221,
+ -211,
+ -1,
+ -1,
+ -1,
+ -1,
+ -1,
+ -1,
+ -243,
+ -1,
+ -1,
+ -1,
+ -1,
+ -1,
+ -1,
+ -242,
+ -233,
+ -232,
+ -1,
+ -231,
+ -230,
+ -229,
+ -228,
+ -227,
+ -226,
+ -225,
+ -224,
+ -223,
+ 5410,
+ -220,
+ -219,
+ -218,
+ -217,
+ -216,
+ -215,
+ -214,
+ -213,
+ 5676,
+ -210,
+ -209,
+ -1,
+ -1,
+ -1
+};
+
+static const u16 rop_action[] = {
+ 2560,
+ 2304,
+ 2304,
+ 2432,
+ 2432,
+ 2560,
+ 2176,
+ 2304,
+ 2560,
+ 3200,
+ 3328,
+ 3584,
+ 3456,
+ 2304,
+ 4208,
+ 20788,
+ 21812,
+ 3415,
+ 3245,
+ 2311,
+ 4359,
+ 19764,
+ 2311,
+ 3191,
+ 3280,
+ 6656,
+ 7680,
+ 8704,
+ 9728,
+ 10752,
+ 11776,
+ 12800,
+ 13824,
+ 14848,
+ 15872,
+ 16896,
+ 17920,
+ 18944,
+ 0,
+ 0,
+ 0,
+ 0,
+ 1536,
+ 1536,
+ 1536,
+ 5632,
+ 512,
+ 0
+};
+
+static const u16 rop_info[] = {
+ 0,
+ 35,
+ 35,
+ 35,
+ 35,
+ 35,
+ 35,
+ 35,
+ 35,
+ 39,
+ 39,
+ 39,
+ 39,
+ 35,
+ 34,
+ 283,
+ 299,
+ 4135,
+ 4391,
+ 291,
+ 33059,
+ 283,
+ 295,
+ 4647,
+ 4135,
+ 64,
+ 64,
+ 128,
+ 64,
+ 64,
+ 64,
+ 64,
+ 64,
+ 64,
+ 64,
+ 64,
+ 64,
+ 128,
+ 201,
+ 197,
+ 74,
+ 70,
+ 208,
+ 196,
+ 200,
+ 192,
+ 192,
+ 66
+};
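
These three tables are tied together by the hash lookup at the top of
dload_relocate() in reloc.c above. The sketch below restates that lookup in
isolation, followed by two worked cases; ex_lookup() is a hypothetical helper
and assumes it lives in the same translation unit as the tables:

    /* map a relocation type to its rop_action[]/rop_info[] index */
    static int ex_lookup(unsigned int type)
    {
            int rx = HASH_FUNC(type);

            /* follow the collision chain until rop_map1[] matches */
            while (rop_map1[rx] != type) {
                    rx = HASH_L(rop_map2[rx]);
                    if (rx < 0)
                            return -1;      /* unknown relocation type */
            }
            return HASH_I(rop_map2[rx]);
    }

    /*
     * Worked cases:
     *   type 4:  HASH_FUNC(4) == 4 and rop_map1[4] == 4, so the result is
     *            HASH_I(rop_map2[4]) == HASH_I(-253) == 3.
     *   type 15: HASH_FUNC(15) == 14 but rop_map1[14] == 86, so chain via
     *            HASH_L(rop_map2[14]) == HASH_L(1813) == 7; rop_map1[7] == 15
     *            and HASH_I(rop_map2[7]) == HASH_I(-250) == 6.
     */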
diff --git a/drivers/staging/tidspbridge/dynload/tramp.c b/drivers/staging/tidspbridge/dynload/tramp.c
new file mode 100644
index 0000000..7b593fc
--- /dev/null
+++ b/drivers/staging/tidspbridge/dynload/tramp.c
@@ -0,0 +1,1143 @@
+/*
+ * tramp.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Copyright (C) 2009 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#include "header.h"
+
+#if TMS32060
+#include "tramp_table_c6000.c"
+#endif
+
+#define MAX_RELOS_PER_PASS 4
+
+/*
+ * Function: priv_tramp_sect_tgt_alloc
+ * Description: Allocate target memory for the trampoline section. The
+ * target mem size is easily obtained as the next available address.
+ */
+static int priv_tramp_sect_tgt_alloc(struct dload_state *dlthis)
+{
+ int ret_val = 0;
+ struct ldr_section_info *sect_info;
+
+ /* Populate the trampoline loader section and allocate it on the
+ * target. The section name is ALWAYS the first string in the final
+ * string table for trampolines. The trampoline section is always
+ * 1 beyond the total number of allocated sections. */
+ sect_info = &dlthis->ldr_sections[dlthis->allocated_secn_count];
+
+ sect_info->name = dlthis->tramp.final_string_table;
+ sect_info->size = dlthis->tramp.tramp_sect_next_addr;
+ sect_info->context = 0;
+ sect_info->type =
+ (4 << 8) | DLOAD_TEXT | DS_ALLOCATE_MASK | DS_DOWNLOAD_MASK;
+ sect_info->page = 0;
+ sect_info->run_addr = 0;
+ sect_info->load_addr = 0;
+ ret_val = dlthis->myalloc->dload_allocate(dlthis->myalloc,
+ sect_info,
+ DS_ALIGNMENT
+ (sect_info->type));
+
+ if (ret_val == 0)
+ dload_error(dlthis, "Failed to allocate target memory for"
+ " trampoline");
+
+ return ret_val;
+}
+
+/*
+ * Function: priv_h2a
+ * Description: Helper function to convert a hex value to its ASCII
+ * representation. Used for trampoline symbol name generation.
+ */
+static u8 priv_h2a(u8 value)
+{
+ if (value > 0xF)
+ return 0xFF;
+
+ if (value <= 9)
+ value += 0x30;
+ else
+ value += 0x37;
+
+ return value;
+}
+
+/*
+ * Function: priv_tramp_sym_gen_name
+ * Description: Generate a trampoline symbol name (ASCII) using the value
+ * of the symbol. This places the new name into the user buffer.
+ * The name is fixed in length and of the form: __$dbTR__xxxxxxxx
+ * (where "xxxxxxxx" is the hex value).
+ */
+static void priv_tramp_sym_gen_name(u32 value, char *dst)
+{
+ u32 i;
+ volatile char *prefix = TRAMP_SYM_PREFIX;
+ volatile char *dst_local = dst;
+ u8 tmp;
+
+ /* Clear out the destination, including the ending NULL */
+ for (i = 0; i < (TRAMP_SYM_PREFIX_LEN + TRAMP_SYM_HEX_ASCII_LEN); i++)
+ *(dst_local + i) = 0;
+
+ /* Copy the prefix to start */
+ for (i = 0; i < strlen(TRAMP_SYM_PREFIX); i++) {
+ *dst_local = *(prefix + i);
+ dst_local++;
+ }
+
+ /* Now convert the value passed in to a string equiv of the hex */
+ for (i = 0; i < sizeof(value); i++) {
+#ifndef _BIG_ENDIAN
+ tmp = *(((u8 *) &value) + (sizeof(value) - 1) - i);
+ *dst_local = priv_h2a((tmp & 0xF0) >> 4);
+ dst_local++;
+ *dst_local = priv_h2a(tmp & 0x0F);
+ dst_local++;
+#else
+ tmp = *(((u8 *) &value) + i);
+ *dst_local = priv_h2a((tmp & 0xF0) >> 4);
+ dst_local++;
+ *dst_local = priv_h2a(tmp & 0x0F);
+ dst_local++;
+#endif
+ }
+
+ /* NULL terminate */
+ *dst_local = 0;
+}
+
+/*
+ * Function: priv_tramp_string_create
+ * Description: Create a new string specific to the trampoline loading and add
+ * it to the trampoline string list. This list contains the
+ * trampoline section name and trampoline point symbols.
+ */
+static struct tramp_string *priv_tramp_string_create(struct dload_state *dlthis,
+ u32 str_len, char *str)
+{
+ struct tramp_string *new_string = NULL;
+ u32 i;
+
+ /* Create a new string object with the specified size. */
+ new_string =
+ (struct tramp_string *)dlthis->mysym->dload_allocate(dlthis->mysym,
+ (sizeof
+ (struct
+ tramp_string)
+ + str_len +
+ 1));
+ if (new_string != NULL) {
+ /* Clear the string first. This ensures the ending NULL is
+ * present and the optimizer won't touch it. */
+ for (i = 0; i < (sizeof(struct tramp_string) + str_len + 1);
+ i++)
+ *((u8 *) new_string + i) = 0;
+
+ /* Add this string to our virtual table by assigning it the
+ * next index and pushing it to the tail of the list. */
+ new_string->index = dlthis->tramp.tramp_string_next_index;
+ dlthis->tramp.tramp_string_next_index++;
+ dlthis->tramp.tramp_string_size += str_len + 1;
+
+ new_string->next = NULL;
+ if (dlthis->tramp.string_head == NULL)
+ dlthis->tramp.string_head = new_string;
+ else
+ dlthis->tramp.string_tail->next = new_string;
+
+ dlthis->tramp.string_tail = new_string;
+
+ /* Copy the string over to the new object */
+ for (i = 0; i < str_len; i++)
+ new_string->str[i] = str[i];
+ }
+
+ return new_string;
+}
+
+/*
+ * Function: priv_tramp_string_find
+ * Description: Walk the trampoline string list and find a match for the
+ * provided string. If no match is found, NULL is returned.
+ */
+static struct tramp_string *priv_tramp_string_find(struct dload_state *dlthis,
+ char *str)
+{
+ struct tramp_string *cur_str = NULL;
+ struct tramp_string *ret_val = NULL;
+ u32 i;
+ u32 str_len = strlen(str);
+
+ for (cur_str = dlthis->tramp.string_head;
+ (ret_val == NULL) && (cur_str != NULL); cur_str = cur_str->next) {
+ /* If the string lengths aren't equal, don't bother
+ * comparing */
+ if (str_len != strlen(cur_str->str))
+ continue;
+
+ /* Walk the strings until one of them ends */
+ for (i = 0; i < str_len; i++) {
+ /* If they don't match in the current position then
+ * break out now, no sense in continuing to look at
+ * this string. */
+ if (str[i] != cur_str->str[i])
+ break;
+ }
+
+ if (i == str_len)
+ ret_val = cur_str;
+ }
+
+ return ret_val;
+}
+
+/*
+ * Function: priv_string_tbl_finalize
+ * Description: Flatten the trampoline string list into a table of NULL
+ * terminated strings. This is the same format of string table
+ * as used by the COFF/DOFF file.
+ */
+static int priv_string_tbl_finalize(struct dload_state *dlthis)
+{
+ int ret_val = 0;
+ struct tramp_string *cur_string;
+ char *cur_loc;
+ char *tmp;
+
+ /* Allocate enough space for all strings that have been created. The
+ * table is simply all strings concatenated together will NULL
+ * endings. */
+ dlthis->tramp.final_string_table =
+ (char *)dlthis->mysym->dload_allocate(dlthis->mysym,
+ dlthis->tramp.
+ tramp_string_size);
+ if (dlthis->tramp.final_string_table != NULL) {
+ /* We got our buffer; walk the list and release the nodes as
+ * we go */
+ cur_loc = dlthis->tramp.final_string_table;
+ cur_string = dlthis->tramp.string_head;
+ while (cur_string != NULL) {
+ /* Move the head/tail pointers */
+ dlthis->tramp.string_head = cur_string->next;
+ if (dlthis->tramp.string_tail == cur_string)
+ dlthis->tramp.string_tail = NULL;
+
+ /* Copy the string contents */
+ for (tmp = cur_string->str;
+ *tmp != '\0'; tmp++, cur_loc++)
+ *cur_loc = *tmp;
+
+ /* Add the NULL termination explicitly, since the copy loop
+ * above stops before writing it. */
+ *cur_loc = '\0';
+ cur_loc++;
+
+ /* Free the string node, we don't need it any more. */
+ dlthis->mysym->dload_deallocate(dlthis->mysym,
+ cur_string);
+
+ /* Move our pointer to the next one */
+ cur_string = dlthis->tramp.string_head;
+ }
+
+ /* Update our return value to success */
+ ret_val = 1;
+ } else
+ dload_error(dlthis, "Failed to allocate trampoline "
+ "string table");
+
+ return ret_val;
+}
+
+/*
+ * Function: priv_tramp_sect_alloc
+ * Description: Virtually allocate space from the trampoline section. This
+ * function returns the next offset within the trampoline section
+ * that is available and moved the next available offset by the
+ * requested size. NO TARGET ALLOCATION IS DONE AT THIS TIME.
+ */
+static u32 priv_tramp_sect_alloc(struct dload_state *dlthis, u32 tramp_size)
+{
+ u32 ret_val;
+
+ /* If the next available address is 0, this is our first allocation.
+ * Create a section name string to go into the string table. */
+ if (dlthis->tramp.tramp_sect_next_addr == 0) {
+ dload_syms_error(dlthis->mysym, "*** WARNING *** created "
+ "dynamic TRAMPOLINE section for module %s",
+ dlthis->str_head);
+ }
+
+ /* Reserve space for the new trampoline */
+ ret_val = dlthis->tramp.tramp_sect_next_addr;
+ dlthis->tramp.tramp_sect_next_addr += tramp_size;
+ return ret_val;
+}
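+
+/*
+ * Illustrative sketch of the allocator above (variable names hypothetical):
+ * with a trampoline size of, say, 32 bytes, successive calls simply hand
+ * back offsets 0, 32, 64, ... within the virtual trampoline section:
+ *
+ *     off0 = priv_tramp_sect_alloc(dlthis, tramp_size_get());
+ *     off1 = priv_tramp_sect_alloc(dlthis, tramp_size_get());
+ *
+ * Target memory for the whole section is only reserved later, in
+ * priv_tramp_sect_tgt_alloc() during dload_tramp_finalize().
+ */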
+
+/*
+ * Function: priv_tramp_sym_create
+ * Description: Allocate and create a new trampoline specific symbol and add
+ * it to the trampoline symbol list. These symbols will include
+ * trampoline points as well as the external symbols they
+ * reference.
+ */
+static struct tramp_sym *priv_tramp_sym_create(struct dload_state *dlthis,
+ u32 str_index,
+ struct local_symbol *tmp_sym)
+{
+ struct tramp_sym *new_sym = NULL;
+ u32 i;
+
+ /* Allocate new space for the symbol in the symbol table. */
+ new_sym =
+ (struct tramp_sym *)dlthis->mysym->dload_allocate(dlthis->mysym,
+ sizeof(struct tramp_sym));
+ if (new_sym != NULL) {
+ for (i = 0; i != sizeof(struct tramp_sym); i++)
+ *((char *)new_sym + i) = 0;
+
+ /* Assign this symbol the next symbol index for easier
+ * reference later during relocation. */
+ new_sym->index = dlthis->tramp.tramp_sym_next_index;
+ dlthis->tramp.tramp_sym_next_index++;
+
+ /* Populate the symbol information. At this point any
+ * trampoline symbols will hold the section offset, not the
+ * final address. Copy over the symbol info to start, then be sure to
+ * get the string index from the trampoline string table. */
+ new_sym->sym_info = *tmp_sym;
+ new_sym->str_index = str_index;
+
+ /* Push the new symbol to the tail of the symbol table list */
+ new_sym->next = NULL;
+ if (dlthis->tramp.symbol_head == NULL)
+ dlthis->tramp.symbol_head = new_sym;
+ else
+ dlthis->tramp.symbol_tail->next = new_sym;
+
+ dlthis->tramp.symbol_tail = new_sym;
+ }
+
+ return new_sym;
+}
+
+/*
+ * Function: priv_tramp_sym_get
+ * Description: Search for the symbol with the matching string index (from
+ * the trampoline string table) and return the trampoline
+ * symbol object, if found. Otherwise return NULL.
+ */
+static struct tramp_sym *priv_tramp_sym_get(struct dload_state *dlthis,
+ u32 string_index)
+{
+ struct tramp_sym *sym_found = NULL;
+
+ /* Walk the symbol table list and search vs. the string index */
+ for (sym_found = dlthis->tramp.symbol_head;
+ sym_found != NULL; sym_found = sym_found->next) {
+ if (sym_found->str_index == string_index)
+ break;
+ }
+
+ return sym_found;
+}
+
+/*
+ * Function: priv_tramp_sym_find
+ * Description: Search for a trampoline symbol based on the string name of
+ * the symbol. Return the symbol object, if found, otherwise
+ * return NULL.
+ */
+static struct tramp_sym *priv_tramp_sym_find(struct dload_state *dlthis,
+ char *string)
+{
+ struct tramp_sym *sym_found = NULL;
+ struct tramp_string *str_found = NULL;
+
+ /* First, search for the string, then search for the sym based on the
+ string index. */
+ str_found = priv_tramp_string_find(dlthis, string);
+ if (str_found != NULL)
+ sym_found = priv_tramp_sym_get(dlthis, str_found->index);
+
+ return sym_found;
+}
+
+/*
+ * Function: priv_tramp_sym_finalize
+ * Description: Allocate a flat symbol table for the trampoline section,
+ * put each trampoline symbol into the table, adjust the
+ * symbol value based on the section address on the target and
+ * free the trampoline symbol list nodes.
+ */
+static int priv_tramp_sym_finalize(struct dload_state *dlthis)
+{
+ int ret_val = 0;
+ struct tramp_sym *cur_sym;
+ struct ldr_section_info *tramp_sect =
+ &dlthis->ldr_sections[dlthis->allocated_secn_count];
+ struct local_symbol *new_sym;
+
+ /* Allocate a table to hold a flattened version of all symbols
+ * created. */
+ dlthis->tramp.final_sym_table =
+ (struct local_symbol *)dlthis->mysym->dload_allocate(dlthis->mysym,
+ (sizeof(struct local_symbol) * dlthis->tramp.
+ tramp_sym_next_index));
+ if (dlthis->tramp.final_sym_table != NULL) {
+ /* Walk the list of all symbols, copy it over to the flattened
+ * table. After it has been copied, the node can be freed as
+ * it is no longer needed. */
+ new_sym = dlthis->tramp.final_sym_table;
+ cur_sym = dlthis->tramp.symbol_head;
+ while (cur_sym != NULL) {
+ /* Pop it off the list */
+ dlthis->tramp.symbol_head = cur_sym->next;
+ if (cur_sym == dlthis->tramp.symbol_tail)
+ dlthis->tramp.symbol_tail = NULL;
+
+ /* Copy the symbol contents into the flat table */
+ *new_sym = cur_sym->sym_info;
+
+ /* Now finalize the symbol. If it is in the tramp
+ * section, we need to adjust for the section start.
+ * If it is external then we don't need to adjust at
+ * all.
+ * NOTE: THIS CODE ASSUMES THAT THE TRAMPOLINE IS
+ * REFERENCED LIKE A CALL TO AN EXTERNAL SO VALUE AND
+ * DELTA ARE THE SAME. SEE THE FUNCTION dload_symbols
+ * WHERE DN_UNDEF IS HANDLED FOR MORE REFERENCE. */
+ if (new_sym->secnn < 0) {
+ new_sym->value += tramp_sect->load_addr;
+ new_sym->delta = new_sym->value;
+ }
+
+ /* Let go of the symbol node */
+ dlthis->mysym->dload_deallocate(dlthis->mysym, cur_sym);
+
+ /* Move to the next node */
+ cur_sym = dlthis->tramp.symbol_head;
+ new_sym++;
+ }
+
+ ret_val = 1;
+ } else
+ dload_error(dlthis, "Failed to alloc trampoline sym table");
+
+ return ret_val;
+}
+
+/*
+ * Function: priv_tgt_img_gen
+ * Description: Allocate storage for and copy the target specific image data
+ * and fix up its relocations for the new external symbol. If
+ * a trampoline image packet was successfully created it is added
+ * to the trampoline list.
+ */
+static int priv_tgt_img_gen(struct dload_state *dlthis, u32 base,
+ u32 gen_index, struct tramp_sym *new_ext_sym)
+{
+ struct tramp_img_pkt *new_img_pkt = NULL;
+ u32 i;
+ u32 pkt_size = tramp_img_pkt_size_get();
+ u8 *gen_tbl_entry;
+ u8 *pkt_data;
+ struct reloc_record_t *cur_relo;
+ int ret_val = 0;
+
+ /* Allocate a new image packet and set it up. */
+ new_img_pkt =
+ (struct tramp_img_pkt *)dlthis->mysym->dload_allocate(dlthis->mysym,
+ pkt_size);
+ if (new_img_pkt != NULL) {
+ /* Save the base, this is where it goes in the section */
+ new_img_pkt->base = base;
+
+ /* Copy over the image data and relos from the target table */
+ pkt_data = (u8 *) &new_img_pkt->hdr;
+ gen_tbl_entry = (u8 *) &tramp_gen_info[gen_index];
+ for (i = 0; i < pkt_size; i++) {
+ *pkt_data = *gen_tbl_entry;
+ pkt_data++;
+ gen_tbl_entry++;
+ }
+
+ /* Update the relocations to point to the external symbol */
+ cur_relo =
+ (struct reloc_record_t *)((u8 *) &new_img_pkt->hdr +
+ new_img_pkt->hdr.relo_offset);
+ for (i = 0; i < new_img_pkt->hdr.num_relos; i++)
+ cur_relo[i].SYMNDX = new_ext_sym->index;
+
+ /* Add it to the trampoline list. */
+ new_img_pkt->next = dlthis->tramp.tramp_pkts;
+ dlthis->tramp.tramp_pkts = new_img_pkt;
+
+ ret_val = 1;
+ }
+
+ return ret_val;
+}
+
+/*
+ * Function: priv_pkt_relo
+ * Description: Take the provided image data and the collection of relocations
+ * for it and perform the relocations. Note that all relocations
+ * at this stage are considered SECOND PASS since the original
+ * image has already been processed in the first pass. This means
+ * TRAMPOLINES ARE TREATED AS 2ND PASS even though this is really
+ * the first (and only) relocation that will be performed on them.
+ */
+static int priv_pkt_relo(struct dload_state *dlthis, tgt_au_t * data,
+ struct reloc_record_t *rp[], u32 relo_count)
+{
+ int ret_val = 1;
+ u32 i;
+ bool tmp;
+
+ /* Walk through all of the relos and process them. This function is
+ * the equivalent of relocate_packet() from cload.c, but specialized
+ * for trampolines and 2nd phase relocations. */
+ for (i = 0; i < relo_count; i++)
+ dload_relocate(dlthis, data, rp[i], &tmp, true);
+
+ return ret_val;
+}
+
+/*
+ * Function: priv_tramp_pkt_finalize
+ * Description: Walk the list of all trampoline packets and finalize them.
+ * Each trampoline image packet will be relocated now that the
+ * trampoline section has been allocated on the target. Once
+ * all of the relocations are done the trampoline image data
+ * is written into target memory and the trampoline packet
+ * is freed: it is no longer needed after this point.
+ */
+static int priv_tramp_pkt_finalize(struct dload_state *dlthis)
+{
+ int ret_val = 1;
+ struct tramp_img_pkt *cur_pkt = NULL;
+ struct reloc_record_t *relos[MAX_RELOS_PER_PASS];
+ u32 relos_done;
+ u32 i;
+ struct reloc_record_t *cur_relo;
+ struct ldr_section_info *sect_info =
+ &dlthis->ldr_sections[dlthis->allocated_secn_count];
+
+ /* Walk the list of trampoline packets and relocate each packet. This
+ * function is the trampoline equivalent of dload_data() from
+ * cload.c. */
+ cur_pkt = dlthis->tramp.tramp_pkts;
+ while ((ret_val != 0) && (cur_pkt != NULL)) {
+ /* Remove the pkt from the list */
+ dlthis->tramp.tramp_pkts = cur_pkt->next;
+
+ /* Setup section and image offset information for the relo */
+ dlthis->image_secn = sect_info;
+ dlthis->image_offset = cur_pkt->base;
+ dlthis->delta_runaddr = sect_info->run_addr;
+
+ /* Walk through all relos for the packet */
+ relos_done = 0;
+ cur_relo = (struct reloc_record_t *)((u8 *) &cur_pkt->hdr +
+ cur_pkt->hdr.relo_offset);
+ while (relos_done < cur_pkt->hdr.num_relos) {
+#ifdef ENABLE_TRAMP_DEBUG
+ dload_syms_error(dlthis->mysym,
+ "===> Trampoline %x branches to %x",
+ sect_info->run_addr +
+ dlthis->image_offset,
+ dlthis->
+ tramp.final_sym_table[cur_relo->
+ SYMNDX].value);
+#endif
+
+ for (i = 0;
+ ((i < MAX_RELOS_PER_PASS) &&
+ ((i + relos_done) < cur_pkt->hdr.num_relos)); i++)
+ relos[i] = cur_relo + i;
+
+ /* Do the actual relo */
+ ret_val = priv_pkt_relo(dlthis,
+ (tgt_au_t *) &cur_pkt->payload,
+ relos, i);
+ if (ret_val == 0) {
+ dload_error(dlthis,
+ "Relocation of trampoline pkt at %x"
+ " failed", cur_pkt->base +
+ sect_info->run_addr);
+ break;
+ }
+
+ relos_done += i;
+ cur_relo += i;
+ }
+
+ /* Make sure we didn't hit a problem */
+ if (ret_val != 0) {
+ /* Relos are done for the packet, write it to the
+ * target */
+ ret_val = dlthis->myio->writemem(dlthis->myio,
+ &cur_pkt->payload,
+ sect_info->load_addr +
+ cur_pkt->base,
+ sect_info,
+ BYTE_TO_HOST
+ (cur_pkt->hdr.
+ tramp_code_size));
+ if (ret_val == 0) {
+ dload_error(dlthis,
+ "Write to " FMT_UI32 " failed",
+ sect_info->load_addr +
+ cur_pkt->base);
+ }
+
+ /* Done with the pkt, let it go */
+ dlthis->mysym->dload_deallocate(dlthis->mysym, cur_pkt);
+
+ /* Get the next packet to process */
+ cur_pkt = dlthis->tramp.tramp_pkts;
+ }
+ }
+
+ return ret_val;
+}
+
+/*
+ * Function: priv_dup_pkt_finalize
+ * Description: Walk the list of duplicate image packets and finalize them.
+ * Each duplicate packet will be relocated again for the
+ * relocations that previously failed and have been adjusted
+ * to point at a trampoline. Once all relocations for a packet
+ * have been done, write the packet into target memory. The
+ * duplicate packet and its relocation chain are freed here
+ * after use, as they are no longer needed beyond this point.
+ */
+static int priv_dup_pkt_finalize(struct dload_state *dlthis)
+{
+ int ret_val = 1;
+ struct tramp_img_dup_pkt *cur_pkt;
+ struct tramp_img_dup_relo *cur_relo;
+ struct reloc_record_t *relos[MAX_RELOS_PER_PASS];
+ struct doff_scnhdr_t *sect_hdr = NULL;
+ s32 i;
+
+ /* Similar to the trampoline pkt finalize, this function walks each dup
+ * pkt that was generated and performs all relocations that were
+ * deferred to a 2nd pass. This is the equivalent of dload_data() from
+ * cload.c, but does not need the additional reorder and checksum
+ * processing as it has already been done. */
+ cur_pkt = dlthis->tramp.dup_pkts;
+ while ((ret_val != 0) && (cur_pkt != NULL)) {
+ /* Remove the node from the list, we'll be freeing it
+ * shortly */
+ dlthis->tramp.dup_pkts = cur_pkt->next;
+
+ /* Setup the section and image offset for relocation */
+ dlthis->image_secn = &dlthis->ldr_sections[cur_pkt->secnn];
+ dlthis->image_offset = cur_pkt->offset;
+
+ /* In order to get the delta run address, we need to reference
+ * the original section header. It's a bit ugly, but needed
+ * for relo. */
+ i = (s32) (dlthis->image_secn - dlthis->ldr_sections);
+ sect_hdr = dlthis->sect_hdrs + i;
+ dlthis->delta_runaddr = sect_hdr->ds_paddr;
+
+ /* Walk all relos in the chain and process each. */
+ cur_relo = cur_pkt->relo_chain;
+ while (cur_relo != NULL) {
+ /* Process them a chunk at a time to be efficient */
+ for (i = 0; (i < MAX_RELOS_PER_PASS)
+ && (cur_relo != NULL);
+ i++, cur_relo = cur_relo->next) {
+ relos[i] = &cur_relo->relo;
+ cur_pkt->relo_chain = cur_relo->next;
+ }
+
+ /* Do the actual relo */
+ ret_val = priv_pkt_relo(dlthis,
+ cur_pkt->img_pkt.img_data,
+ relos, i);
+ if (ret_val == 0) {
+ dload_error(dlthis,
+ "Relocation of dup pkt at %x"
+ " failed", cur_pkt->offset +
+ dlthis->image_secn->run_addr);
+ break;
+ }
+
+ /* Release all of these relos, we're done with them */
+ while (i > 0) {
+ dlthis->mysym->dload_deallocate(dlthis->mysym,
+ GET_CONTAINER
+ (relos[i - 1],
+ struct tramp_img_dup_relo,
+ relo));
+ i--;
+ }
+
+ /* DO NOT ADVANCE cur_relo, IT IS ALREADY READY TO
+ * GO! */
+ }
+
+ /* Done with all relos. Make sure we didn't have a problem and
+ * write it out to the target */
+ if (ret_val != 0) {
+ ret_val = dlthis->myio->writemem(dlthis->myio,
+ cur_pkt->img_pkt.
+ img_data,
+ dlthis->image_secn->
+ load_addr +
+ cur_pkt->offset,
+ dlthis->image_secn,
+ BYTE_TO_HOST
+ (cur_pkt->img_pkt.
+ packet_size));
+ if (ret_val == 0) {
+ dload_error(dlthis,
+ "Write to " FMT_UI32 " failed",
+ dlthis->image_secn->load_addr +
+ cur_pkt->offset);
+ }
+
+ dlthis->mysym->dload_deallocate(dlthis->mysym, cur_pkt);
+
+ /* Advance to the next packet */
+ cur_pkt = dlthis->tramp.dup_pkts;
+ }
+ }
+
+ return ret_val;
+}
+
+/*
+ * Function: priv_dup_find
+ * Description: Walk the list of existing duplicate packets and find a
+ * match based on the section number and image offset. Return
+ * the duplicate packet if found, otherwise NULL.
+ */
+static struct tramp_img_dup_pkt *priv_dup_find(struct dload_state *dlthis,
+ s16 secnn, u32 image_offset)
+{
+ struct tramp_img_dup_pkt *cur_pkt = NULL;
+
+ for (cur_pkt = dlthis->tramp.dup_pkts;
+ cur_pkt != NULL; cur_pkt = cur_pkt->next) {
+ if ((cur_pkt->secnn == secnn) &&
+ (cur_pkt->offset == image_offset)) {
+ /* Found a match, break out */
+ break;
+ }
+ }
+
+ return cur_pkt;
+}
+
+/*
+ * Function: priv_img_pkt_dup
+ * Description: Duplicate the original image packet. If this is the first
+ * time this image packet has been seen (based on section number
+ * and image offset), create a new duplicate packet and add it
+ * to the dup packet list. If not, just get the existing one and
+ * update it with the current packet contents (since relocation
+ * on the packet is still ongoing in the first pass). Create a
+ * duplicate of the provided relocation, but update it to point
+ * to the new trampoline symbol. Add the new relocation dup to
+ * the dup packet's relo chain for 2nd pass relocation later.
+ */
+static int priv_img_pkt_dup(struct dload_state *dlthis,
+ s16 secnn, u32 image_offset,
+ struct image_packet_t *ipacket,
+ struct reloc_record_t *rp,
+ struct tramp_sym *new_tramp_sym)
+{
+ struct tramp_img_dup_pkt *dup_pkt = NULL;
+ u32 new_dup_size;
+ s32 i;
+ int ret_val = 0;
+ struct tramp_img_dup_relo *dup_relo = NULL;
+
+ /* Determine whether this image packet is already being tracked in the
+ dup list for other trampolines. */
+ dup_pkt = priv_dup_find(dlthis, secnn, image_offset);
+
+ if (dup_pkt == NULL) {
+ /* This image packet does not exist in our tracking, so create
+ * a new one and add it to the head of the list. */
+ new_dup_size = sizeof(struct tramp_img_dup_pkt) +
+ ipacket->packet_size;
+
+ dup_pkt = (struct tramp_img_dup_pkt *)
+ dlthis->mysym->dload_allocate(dlthis->mysym, new_dup_size);
+ if (dup_pkt != NULL) {
+ /* Save off the section and offset information */
+ dup_pkt->secnn = secnn;
+ dup_pkt->offset = image_offset;
+ dup_pkt->relo_chain = NULL;
+
+ /* Copy the original packet content */
+ dup_pkt->img_pkt = *ipacket;
+ dup_pkt->img_pkt.img_data = (u8 *) (dup_pkt + 1);
+ for (i = 0; i < ipacket->packet_size; i++)
+ *(dup_pkt->img_pkt.img_data + i) =
+ *(ipacket->img_data + i);
+
+ /* Add the packet to the dup list */
+ dup_pkt->next = dlthis->tramp.dup_pkts;
+ dlthis->tramp.dup_pkts = dup_pkt;
+ } else
+ dload_error(dlthis, "Failed to create dup packet!");
+ } else {
+ /* The image packet contents could have changed since
+ * trampoline detection happens during relocation of the image
+ * packets. So, we need to update the image packet contents
+ * before adding relo information. */
+ for (i = 0; i < dup_pkt->img_pkt.packet_size; i++)
+ *(dup_pkt->img_pkt.img_data + i) =
+ *(ipacket->img_data + i);
+ }
+
+ /* Since the previous code may have allocated a new dup packet for us,
+ double check that we actually have one. */
+ if (dup_pkt != NULL) {
+ /* Allocate a new node for the relo chain. Each image packet
+ * can potentially have multiple relocations that cause a
+ * trampoline to be generated. So, we keep them in a chain,
+ * order is not important. */
+ dup_relo = dlthis->mysym->dload_allocate(dlthis->mysym,
+ sizeof(struct tramp_img_dup_relo));
+ if (dup_relo != NULL) {
+ /* Copy the relo contents, adjust for the new
+ * trampoline and add it to the list. */
+ dup_relo->relo = *rp;
+ dup_relo->relo.SYMNDX = new_tramp_sym->index;
+
+ dup_relo->next = dup_pkt->relo_chain;
+ dup_pkt->relo_chain = dup_relo;
+
+ /* That's it, we're done. Make sure we update our
+ * return value to be success since everything finished
+ * ok */
+ ret_val = 1;
+ } else
+ dload_error(dlthis, "Unable to alloc dup relo");
+ }
+
+ return ret_val;
+}
+
+/*
+ * Function: dload_tramp_avail
+ * Description: Check to see if the target supports a trampoline for this type
+ * of relocation. Return true if it does, otherwise false.
+ */
+bool dload_tramp_avail(struct dload_state *dlthis, struct reloc_record_t *rp)
+{
+ bool ret_val = false;
+ u16 map_index;
+ u16 gen_index;
+
+ /* Check type hash vs. target tramp table */
+ map_index = HASH_FUNC(rp->TYPE);
+ gen_index = tramp_map[map_index];
+ if (gen_index != TRAMP_NO_GEN_AVAIL)
+ ret_val = true;
+
+ return ret_val;
+}
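+
+/*
+ * A minimal usage sketch (the caller below is hypothetical): the relocation
+ * code can probe for trampoline support before attempting the real fix-up:
+ *
+ *     if (!dload_tramp_avail(dlthis, rp))
+ *         dload_error(dlthis, "Relo type %x cannot be trampolined", rp->TYPE);
+ *     else
+ *         dload_tramp_generate(dlthis, secnn, image_offset, ipacket, rp);
+ *
+ * Both calls hash rp->TYPE through tramp_map[], so "avail" and "generate"
+ * always agree on whether a generator exists.
+ */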
+
+/*
+ * Function: dload_tramp_generate
+ * Description: Create a new trampoline for the provided image packet and
+ * relocation causing problems. This will create the trampoline
+ * as well as duplicate/update the image packet and relocation
+ * causing the problem, which will be relo'd again during
+ * finalization.
+ */
+int dload_tramp_generate(struct dload_state *dlthis, s16 secnn,
+ u32 image_offset, struct image_packet_t *ipacket,
+ struct reloc_record_t *rp)
+{
+ u16 map_index;
+ u16 gen_index;
+ int ret_val = 1;
+ char tramp_sym_str[TRAMP_SYM_PREFIX_LEN + TRAMP_SYM_HEX_ASCII_LEN];
+ struct local_symbol *ref_sym;
+ struct tramp_sym *new_tramp_sym;
+ struct tramp_sym *new_ext_sym;
+ struct tramp_string *new_tramp_str;
+ u32 new_tramp_base;
+ struct local_symbol tmp_sym;
+ struct local_symbol ext_tmp_sym;
+
+ /* Hash the relo type to get our generator information */
+ map_index = HASH_FUNC(rp->TYPE);
+ gen_index = tramp_map[map_index];
+ if (gen_index != TRAMP_NO_GEN_AVAIL) {
+ /* If this is the first trampoline, create the section name in
+ * our string table for debug help later. */
+ if (dlthis->tramp.string_head == NULL) {
+ priv_tramp_string_create(dlthis,
+ strlen(TRAMP_SECT_NAME),
+ TRAMP_SECT_NAME);
+ }
+#ifdef ENABLE_TRAMP_DEBUG
+ dload_syms_error(dlthis->mysym,
+ "Trampoline at img loc %x, references %x",
+ dlthis->ldr_sections[secnn].run_addr +
+ image_offset + rp->vaddr,
+ dlthis->local_symtab[rp->SYMNDX].value);
+#endif
+
+ /* Generate the trampoline string, check if already defined.
+ * If the relo symbol index is -1, it means we need the section
+ * info for relo later. To do this we'll dummy up a symbol
+ * with the section delta and run addresses. */
+ if (rp->SYMNDX == -1) {
+ ext_tmp_sym.value =
+ dlthis->ldr_sections[secnn].run_addr;
+ ext_tmp_sym.delta = dlthis->sect_hdrs[secnn].ds_paddr;
+ ref_sym = &ext_tmp_sym;
+ } else
+ ref_sym = &(dlthis->local_symtab[rp->SYMNDX]);
+
+ priv_tramp_sym_gen_name(ref_sym->value, tramp_sym_str);
+ new_tramp_sym = priv_tramp_sym_find(dlthis, tramp_sym_str);
+ if (new_tramp_sym == NULL) {
+ /* If the tramp string is not yet defined, create a new
+ * string and symbol for it, as well as a symbol for the
+ * original external symbol which caused the trampoline. */
+ new_tramp_str = priv_tramp_string_create(dlthis,
+ strlen
+ (tramp_sym_str),
+ tramp_sym_str);
+ if (new_tramp_str == NULL) {
+ dload_error(dlthis, "Failed to create new "
+ "trampoline string\n");
+ ret_val = 0;
+ } else {
+ /* Allocate tramp section space for the new
+ * tramp from the target */
+ new_tramp_base = priv_tramp_sect_alloc(dlthis,
+ tramp_size_get());
+
+ /* We have a string, create the new symbol and
+ * duplicate the external. */
+ tmp_sym.value = new_tramp_base;
+ tmp_sym.delta = 0;
+ tmp_sym.secnn = -1;
+ tmp_sym.sclass = 0;
+ new_tramp_sym = priv_tramp_sym_create(dlthis,
+ new_tramp_str->
+ index,
+ &tmp_sym);
+
+ new_ext_sym = priv_tramp_sym_create(dlthis, -1,
+ ref_sym);
+
+ if ((new_tramp_sym != NULL) &&
+ (new_ext_sym != NULL)) {
+ /* Call the image generator to get the
+ * new image data and fix up its
+ * relocations for the external
+ * symbol. */
+ ret_val = priv_tgt_img_gen(dlthis,
+ new_tramp_base,
+ gen_index,
+ new_ext_sym);
+
+ /* Add generated image data to tramp
+ * image list */
+ if (ret_val != 1) {
+ dload_error(dlthis, "Failed to "
+ "create img pkt for"
+ " trampoline\n");
+ }
+ } else {
+ dload_error(dlthis, "Failed to create "
+ "new tramp syms "
+ "(%8.8X, %8.8X)\n",
+ new_tramp_sym, new_ext_sym);
+ ret_val = 0;
+ }
+ }
+ }
+
+ /* Duplicate the image data and relo record that caused the
+ * tramp, and update the relo data to point to the tramp
+ * symbol. */
+ if (ret_val == 1) {
+ ret_val = priv_img_pkt_dup(dlthis, secnn, image_offset,
+ ipacket, rp, new_tramp_sym);
+ if (ret_val != 1) {
+ dload_error(dlthis, "Failed to create dup of "
+ "original img pkt\n");
+ }
+ }
+ }
+
+ return ret_val;
+}
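+
+/*
+ * Summary of the generate path above:
+ *   1. Hash rp->TYPE; if no generator exists, return success untouched.
+ *   2. Build a symbol name from the referenced address and look it up;
+ *      an existing trampoline for that target is simply reused.
+ *   3. Otherwise reserve space in the virtual tramp section, create the
+ *      tramp symbol plus a duplicate of the external symbol, and emit
+ *      the target-specific image packet via priv_tgt_img_gen().
+ *   4. Finally duplicate the offending image packet and relo so the relo
+ *      can be redone against the tramp symbol during finalization.
+ */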
+
+/*
+ * Function: dload_tramp_pkt_update
+ * Description: Update the duplicate copy of this image packet, which the
+ * trampoline layer is already tracking. This call is critical
+ * to make if trampolines were generated anywhere within the
+ * packet and first pass relo continued on the remainder. The
+ * trampoline layer needs the updated image data so that when
+ * 2nd pass relo is done during finalize, the image packet can
+ * be written to the target with all relocations applied.
+ */
+int dload_tramp_pkt_udpate(struct dload_state *dlthis, s16 secnn,
+ u32 image_offset, struct image_packet_t *ipacket)
+{
+ struct tramp_img_dup_pkt *dup_pkt = NULL;
+ s32 i;
+ int ret_val = 0;
+
+ /* Find the image packet in question, the caller needs us to update it
+ since a trampoline was previously generated. */
+ dup_pkt = priv_dup_find(dlthis, secnn, image_offset);
+ if (dup_pkt != NULL) {
+ for (i = 0; i < dup_pkt->img_pkt.packet_size; i++)
+ *(dup_pkt->img_pkt.img_data + i) =
+ *(ipacket->img_data + i);
+
+ ret_val = 1;
+ } else {
+ dload_error(dlthis,
+ "Unable to find existing DUP pkt for %x, offset %x",
+ secnn, image_offset);
+
+ }
+
+ return ret_val;
+}
+
+/*
+ * Function: dload_tramp_finalize
+ * Description: If any trampolines were created, finalize everything on the
+ * target by allocating the trampoline section on the target,
+ * finalizing the trampoline symbols, finalizing the trampoline
+ * packets (write the new section to target memory) and finalize
+ * the duplicate packets by doing 2nd pass relo over them.
+ */
+int dload_tramp_finalize(struct dload_state *dlthis)
+{
+ int ret_val = 1;
+
+ if (dlthis->tramp.tramp_sect_next_addr != 0) {
+ /* Finalize strings into a flat table. This is needed so it
+ * can be added to the debug string table later. */
+ ret_val = priv_string_tbl_finalize(dlthis);
+
+ /* Do target allocation for section BEFORE finalizing
+ * symbols. */
+ if (ret_val != 0)
+ ret_val = priv_tramp_sect_tgt_alloc(dlthis);
+
+ /* Finalize symbols with their correct target information and
+ * flatten */
+ if (ret_val != 0)
+ ret_val = priv_tramp_sym_finalize(dlthis);
+
+ /* Finalize all trampoline packets. This performs the
+ * relocation on the packets as well as writing them to target
+ * memory. */
+ if (ret_val != 0)
+ ret_val = priv_tramp_pkt_finalize(dlthis);
+
+ /* Perform a 2nd pass relocation on the dup list. */
+ if (ret_val != 0)
+ ret_val = priv_dup_pkt_finalize(dlthis);
+ }
+
+ return ret_val;
+}
+
+/*
+ * Function: dload_tramp_cleanup
+ * Description: Release all temporary resources used in the trampoline layer.
+ * Note that the target memory which may have been allocated and
+ * written to store the trampolines is NOT RELEASED HERE since it
+ * is potentially still in use. It is automatically released
+ * when the module is unloaded.
+ */
+void dload_tramp_cleanup(struct dload_state *dlthis)
+{
+ struct tramp_info *tramp = &dlthis->tramp;
+ struct tramp_sym *cur_sym;
+ struct tramp_string *cur_string;
+ struct tramp_img_pkt *cur_tramp_pkt;
+ struct tramp_img_dup_pkt *cur_dup_pkt;
+ struct tramp_img_dup_relo *cur_dup_relo;
+
+ /* If there were no tramps generated, just return */
+ if (tramp->tramp_sect_next_addr == 0)
+ return;
+
+ /* Destroy all tramp information */
+ for (cur_sym = tramp->symbol_head;
+ cur_sym != NULL; cur_sym = tramp->symbol_head) {
+ tramp->symbol_head = cur_sym->next;
+ if (tramp->symbol_tail == cur_sym)
+ tramp->symbol_tail = NULL;
+
+ dlthis->mysym->dload_deallocate(dlthis->mysym, cur_sym);
+ }
+
+ if (tramp->final_sym_table != NULL)
+ dlthis->mysym->dload_deallocate(dlthis->mysym,
+ tramp->final_sym_table);
+
+ for (cur_string = tramp->string_head;
+ cur_string != NULL; cur_string = tramp->string_head) {
+ tramp->string_head = cur_string->next;
+ if (tramp->string_tail == cur_string)
+ tramp->string_tail = NULL;
+
+ dlthis->mysym->dload_deallocate(dlthis->mysym, cur_string);
+ }
+
+ if (tramp->final_string_table != NULL)
+ dlthis->mysym->dload_deallocate(dlthis->mysym,
+ tramp->final_string_table);
+
+ for (cur_tramp_pkt = tramp->tramp_pkts;
+ cur_tramp_pkt != NULL; cur_tramp_pkt = tramp->tramp_pkts) {
+ tramp->tramp_pkts = cur_tramp_pkt->next;
+ dlthis->mysym->dload_deallocate(dlthis->mysym, cur_tramp_pkt);
+ }
+
+ for (cur_dup_pkt = tramp->dup_pkts;
+ cur_dup_pkt != NULL; cur_dup_pkt = tramp->dup_pkts) {
+ tramp->dup_pkts = cur_dup_pkt->next;
+
+ for (cur_dup_relo = cur_dup_pkt->relo_chain;
+ cur_dup_relo != NULL;
+ cur_dup_relo = cur_dup_pkt->relo_chain) {
+ cur_dup_pkt->relo_chain = cur_dup_relo->next;
+ dlthis->mysym->dload_deallocate(dlthis->mysym,
+ cur_dup_relo);
+ }
+
+ dlthis->mysym->dload_deallocate(dlthis->mysym, cur_dup_pkt);
+ }
+}
diff --git a/drivers/staging/tidspbridge/dynload/tramp_table_c6000.c b/drivers/staging/tidspbridge/dynload/tramp_table_c6000.c
new file mode 100644
index 0000000..e38d631
--- /dev/null
+++ b/drivers/staging/tidspbridge/dynload/tramp_table_c6000.c
@@ -0,0 +1,164 @@
+/*
+ * tramp_table_c6000.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#include "dload_internal.h"
+
+/* These are defined in coff.h, but may not be available on all platforms
+ so we'll go ahead and define them here. */
+#ifndef R_C60LO16
+#define R_C60LO16 0x54 /* C60: MVK Low Half Register */
+#define R_C60HI16 0x55 /* C60: MVKH/MVKLH High Half Register */
+#endif
+
+#define C6X_TRAMP_WORD_COUNT 8
+#define C6X_TRAMP_MAX_RELOS 8
+
+/* THIS HASH FUNCTION MUST MATCH THE ONE IN reloc_table_c6000.c */
+#define HASH_FUNC(zz) (((((zz) + 1) * 1845UL) >> 11) & 63)
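+
+/*
+ * The hash folds the 16-bit relocation TYPE down to a 6-bit index (0..63)
+ * into tramp_map[] below; only types whose slot holds something other than
+ * 65535 (i.e. TRAMP_NO_GEN_AVAIL) can be trampolined. In this table a
+ * single slot, index 10, maps to generator 0, the R_C60PCR21 entry in
+ * tramp_gen_info[].
+ */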
+
+/* THIS MUST MATCH reloc_record_t FOR A SYMBOL BASED RELO */
+struct c6000_relo_record {
+ s32 vaddr;
+ s32 symndx;
+#ifndef _BIG_ENDIAN
+ u16 disp;
+ u16 type;
+#else
+ u16 type;
+ u16 disp;
+#endif
+};
+
+struct c6000_gen_code {
+ struct tramp_gen_code_hdr hdr;
+ u32 tramp_instrs[C6X_TRAMP_WORD_COUNT];
+ struct c6000_relo_record relos[C6X_TRAMP_MAX_RELOS];
+};
+
+/* Hash mapping for relos that can cause trampolines. */
+static const u16 tramp_map[] = {
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 0,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535,
+ 65535
+};
+
+static const struct c6000_gen_code tramp_gen_info[] = {
+ /* Tramp caused by R_C60PCR21 */
+ {
+ /* Header - 8 instructions, 2 relos */
+ {
+ sizeof(u32) * C6X_TRAMP_WORD_COUNT,
+ 2,
+ FIELD_OFFSET(struct c6000_gen_code, relos)
+ },
+
+ /* Trampoline instructions */
+ {
+ 0x053C54F7, /* STW.D2T2 B10, *sp--[2] */
+ 0x0500002A, /* || MVK.S2 <blank>, B10 */
+ 0x0500006A, /* MVKH.S2 <blank>, B10 */
+ 0x00280362, /* B.S2 B10 */
+ 0x053C52E6, /* LDW.D2T2 *++sp[2], B10 */
+ 0x00006000, /* NOP 4 */
+ 0x00000000, /* NOP */
+ 0x00000000 /* NOP */
+ },
+
+ /* Relocations */
+ {
+ {4, 0, 0, R_C60LO16},
+ {8, 0, 0, R_C60HI16},
+ {0, 0, 0, 0x0000},
+ {0, 0, 0, 0x0000},
+ {0, 0, 0, 0x0000},
+ {0, 0, 0, 0x0000},
+ {0, 0, 0, 0x0000},
+ {0, 0, 0, 0x0000}
+ }
+ }
+};
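+
+/*
+ * Roughly, the trampoline above spills B10, rebuilds the full 32-bit
+ * destination in B10 via the MVK/MVKH pair (the two relocation records
+ * patch the blank halves at byte offsets 4 and 8 using R_C60LO16 and
+ * R_C60HI16), branches through B10, and reloads B10 behind the branch;
+ * the trailing NOPs pad out the branch delay slots.
+ */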
+
+/* TARGET SPECIFIC FUNCTIONS THAT MUST BE DEFINED */
+static u32 tramp_size_get(void)
+{
+ return sizeof(u32) * C6X_TRAMP_WORD_COUNT;
+}
+
+static u32 tramp_img_pkt_size_get(void)
+{
+ return sizeof(struct c6000_gen_code);
+}
--
1.7.0.4
From: Omar Ramirez Luna <[email protected]>
Add TI's DSP Bridge resource manager driver sources
Signed-off-by: Omar Ramirez Luna <[email protected]>
Signed-off-by: Kanigeri, Hari <[email protected]>
Signed-off-by: Ameya Palande <[email protected]>
Signed-off-by: Guzman Lugo, Fernando <[email protected]>
Signed-off-by: Hebbar, Shivananda <[email protected]>
Signed-off-by: Ramos Falcon, Ernesto <[email protected]>
Signed-off-by: Felipe Contreras <[email protected]>
Signed-off-by: Anna, Suman <[email protected]>
Signed-off-by: Gupta, Ramesh <[email protected]>
Signed-off-by: Gomez Castellanos, Ivan <[email protected]>
Signed-off-by: Andy Shevchenko <[email protected]>
Signed-off-by: Armando Uribe De Leon <[email protected]>
Signed-off-by: Deepak Chitriki <[email protected]>
Signed-off-by: Menon, Nishanth <[email protected]>
Signed-off-by: Phil Carmody <[email protected]>
Signed-off-by: Ohad Ben-Cohen <[email protected]>
---
drivers/staging/tidspbridge/rmgr/dbdcd.c | 1506 ++++++++++
drivers/staging/tidspbridge/rmgr/disp.c | 754 +++++
drivers/staging/tidspbridge/rmgr/drv.c | 1047 +++++++
drivers/staging/tidspbridge/rmgr/drv_interface.c | 644 +++++
drivers/staging/tidspbridge/rmgr/drv_interface.h | 27 +
drivers/staging/tidspbridge/rmgr/dspdrv.c | 142 +
drivers/staging/tidspbridge/rmgr/mgr.c | 374 +++
drivers/staging/tidspbridge/rmgr/nldr.c | 1999 +++++++++++++
drivers/staging/tidspbridge/rmgr/node.c | 3231 ++++++++++++++++++++++
drivers/staging/tidspbridge/rmgr/proc.c | 1948 +++++++++++++
drivers/staging/tidspbridge/rmgr/pwr.c | 182 ++
drivers/staging/tidspbridge/rmgr/rmm.c | 535 ++++
drivers/staging/tidspbridge/rmgr/strm.c | 861 ++++++
13 files changed, 13250 insertions(+), 0 deletions(-)
create mode 100644 drivers/staging/tidspbridge/rmgr/dbdcd.c
create mode 100644 drivers/staging/tidspbridge/rmgr/disp.c
create mode 100644 drivers/staging/tidspbridge/rmgr/drv.c
create mode 100644 drivers/staging/tidspbridge/rmgr/drv_interface.c
create mode 100644 drivers/staging/tidspbridge/rmgr/drv_interface.h
create mode 100644 drivers/staging/tidspbridge/rmgr/dspdrv.c
create mode 100644 drivers/staging/tidspbridge/rmgr/mgr.c
create mode 100644 drivers/staging/tidspbridge/rmgr/nldr.c
create mode 100644 drivers/staging/tidspbridge/rmgr/node.c
create mode 100644 drivers/staging/tidspbridge/rmgr/proc.c
create mode 100644 drivers/staging/tidspbridge/rmgr/pwr.c
create mode 100644 drivers/staging/tidspbridge/rmgr/rmm.c
create mode 100644 drivers/staging/tidspbridge/rmgr/strm.c
diff --git a/drivers/staging/tidspbridge/rmgr/dbdcd.c b/drivers/staging/tidspbridge/rmgr/dbdcd.c
new file mode 100644
index 0000000..e014600
--- /dev/null
+++ b/drivers/staging/tidspbridge/rmgr/dbdcd.c
@@ -0,0 +1,1506 @@
+/*
+ * dbdcd.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * This file contains the implementation of the DSP/BIOS Bridge
+ * Configuration Database (DCD).
+ *
+ * Notes:
+ * The fxn dcd_get_objects can apply a callback fxn to each DCD object
+ * that is located in a specified COFF file. At the moment,
+ * dcd_auto_register, dcd_auto_unregister, and the NLDR module all use
+ * dcd_get_objects.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+/* ----------------------------------- Host OS */
+#include <dspbridge/host_os.h>
+
+/* ----------------------------------- DSP/BIOS Bridge */
+#include <dspbridge/std.h>
+#include <dspbridge/dbdefs.h>
+/* ----------------------------------- Trace & Debug */
+#include <dspbridge/dbc.h>
+
+/* ----------------------------------- Platform Manager */
+#include <dspbridge/cod.h>
+
+/* ----------------------------------- Others */
+#include <dspbridge/uuidutil.h>
+
+/* ----------------------------------- This */
+#include <dspbridge/dbdcd.h>
+
+/* ----------------------------------- Global defines. */
+#define MAX_INT2CHAR_LENGTH 16 /* Max int2char len of 32 bit int */
+
+/* Name of section containing dependent libraries */
+#define DEPLIBSECT ".dspbridge_deplibs"
+
+/* DCD specific structures. */
+struct dcd_manager {
+ struct cod_manager *cod_mgr; /* Handle to COD manager object. */
+};
+
+/* Pointer to the registry support key */
+static struct list_head reg_key_list;
+static DEFINE_SPINLOCK(dbdcd_lock);
+
+/* Global reference variables. */
+static u32 refs;
+static u32 enum_refs;
+
+/* Helper function prototypes. */
+static s32 atoi(char *psz_buf);
+static int get_attrs_from_buf(char *psz_buf, u32 ul_buf_size,
+ enum dsp_dcdobjtype obj_type,
+ struct dcd_genericobj *pGenObj);
+static void compress_buf(char *psz_buf, u32 ul_buf_size, s32 cCharSize);
+static char dsp_char2_gpp_char(char *pWord, s32 cDspCharSize);
+static int get_dep_lib_info(IN struct dcd_manager *hdcd_mgr,
+ IN struct dsp_uuid *uuid_obj,
+ IN OUT u16 *pNumLibs,
+ OPTIONAL OUT u16 *pNumPersLibs,
+ OPTIONAL OUT struct dsp_uuid *pDepLibUuids,
+ OPTIONAL OUT bool *pPersistentDepLibs,
+ IN enum nldr_phase phase);
+
+/*
+ * ======== dcd_auto_register ========
+ * Purpose:
+ * Parses the supplied image and registers its objects with the DCD.
+ */
+int dcd_auto_register(IN struct dcd_manager *hdcd_mgr,
+ IN char *pszCoffPath)
+{
+ int status = 0;
+
+ DBC_REQUIRE(refs > 0);
+
+ if (hdcd_mgr)
+ status = dcd_get_objects(hdcd_mgr, pszCoffPath,
+ (dcd_registerfxn) dcd_register_object,
+ (void *)pszCoffPath);
+ else
+ status = -EFAULT;
+
+ return status;
+}
+
+/*
+ * ======== dcd_auto_unregister ========
+ * Purpose:
+ * Parses the supplied DSP image and unregisters its objects from the DCD.
+ */
+int dcd_auto_unregister(IN struct dcd_manager *hdcd_mgr,
+ IN char *pszCoffPath)
+{
+ int status = 0;
+
+ DBC_REQUIRE(refs > 0);
+
+ if (hdcd_mgr)
+ status = dcd_get_objects(hdcd_mgr, pszCoffPath,
+ (dcd_registerfxn) dcd_register_object,
+ NULL);
+ else
+ status = -EFAULT;
+
+ return status;
+}
+
+/*
+ * ======== dcd_create_manager ========
+ * Purpose:
+ * Creates DCD manager.
+ */
+int dcd_create_manager(IN char *pszZlDllName,
+ OUT struct dcd_manager **phDcdMgr)
+{
+ struct cod_manager *cod_mgr; /* COD manager handle */
+ struct dcd_manager *dcd_mgr_obj = NULL; /* DCD Manager pointer */
+ int status = 0;
+
+ DBC_REQUIRE(refs >= 0);
+ DBC_REQUIRE(phDcdMgr);
+
+ status = cod_create(&cod_mgr, pszZlDllName, NULL);
+ if (DSP_FAILED(status))
+ goto func_end;
+
+ /* Create a DCD object. */
+ dcd_mgr_obj = kzalloc(sizeof(struct dcd_manager), GFP_KERNEL);
+ if (dcd_mgr_obj != NULL) {
+ /* Fill out the object. */
+ dcd_mgr_obj->cod_mgr = cod_mgr;
+
+ /* Return handle to this DCD interface. */
+ *phDcdMgr = dcd_mgr_obj;
+ } else {
+ status = -ENOMEM;
+
+ /*
+ * If allocation of DcdManager object failed, delete the
+ * COD manager.
+ */
+ cod_delete(cod_mgr);
+ }
+
+ DBC_ENSURE((DSP_SUCCEEDED(status)) ||
+ ((dcd_mgr_obj == NULL) && (status == -ENOMEM)));
+
+func_end:
+ return status;
+}
+
+/*
+ * ======== dcd_destroy_manager ========
+ * Purpose:
+ * Frees DCD Manager object.
+ */
+int dcd_destroy_manager(IN struct dcd_manager *hdcd_mgr)
+{
+ struct dcd_manager *dcd_mgr_obj = hdcd_mgr;
+ int status = -EFAULT;
+
+ DBC_REQUIRE(refs >= 0);
+
+ if (hdcd_mgr) {
+ /* Delete the COD manager. */
+ cod_delete(dcd_mgr_obj->cod_mgr);
+
+ /* Deallocate a DCD manager object. */
+ kfree(dcd_mgr_obj);
+
+ status = 0;
+ }
+
+ return status;
+}
+
+/*
+ * ======== dcd_enumerate_object ========
+ * Purpose:
+ * Enumerates objects in the DCD.
+ */
+int dcd_enumerate_object(IN s32 cIndex, IN enum dsp_dcdobjtype obj_type,
+ OUT struct dsp_uuid *uuid_obj)
+{
+ int status = 0;
+ char sz_reg_key[DCD_MAXPATHLENGTH];
+ char sz_value[DCD_MAXPATHLENGTH];
+ struct dsp_uuid dsp_uuid_obj;
+ char sz_obj_type[MAX_INT2CHAR_LENGTH]; /* str. rep. of obj_type. */
+ u32 dw_key_len = 0;
+ struct dcd_key_elem *dcd_key;
+ int len;
+
+ DBC_REQUIRE(refs >= 0);
+ DBC_REQUIRE(cIndex >= 0);
+ DBC_REQUIRE(uuid_obj != NULL);
+
+ if ((cIndex != 0) && (enum_refs == 0)) {
+ /*
+ * If an enumeration is being performed on an index greater
+ * than zero, then the current enum_refs must have been
+ * incremented to greater than zero.
+ */
+ status = -EIDRM;
+ } else {
+ /*
+ * Pre-determine final key length. It's the length of DCD_REGKEY +
+ * "_\0" + the length of the sz_obj_type string + a terminating NULL.
+ */
+ dw_key_len = strlen(DCD_REGKEY) + 1 + sizeof(sz_obj_type) + 1;
+ DBC_ASSERT(dw_key_len < DCD_MAXPATHLENGTH);
+
+ /* Create proper REG key; concatenate DCD_REGKEY with
+ * obj_type. */
+ strncpy(sz_reg_key, DCD_REGKEY, strlen(DCD_REGKEY) + 1);
+ if ((strlen(sz_reg_key) + strlen("_\0")) <
+ DCD_MAXPATHLENGTH) {
+ strncat(sz_reg_key, "_\0", 2);
+ } else {
+ status = -EPERM;
+ }
+
+ /* This snprintf is guaranteed not to exceed max size of an
+ * integer. */
+ status = snprintf(sz_obj_type, MAX_INT2CHAR_LENGTH, "%d",
+ obj_type);
+
+ if (status == -1) {
+ status = -EPERM;
+ } else {
+ status = 0;
+ if ((strlen(sz_reg_key) + strlen(sz_obj_type)) <
+ DCD_MAXPATHLENGTH) {
+ strncat(sz_reg_key, sz_obj_type,
+ strlen(sz_obj_type) + 1);
+ } else {
+ status = -EPERM;
+ }
+ }
+
+ if (DSP_SUCCEEDED(status)) {
+ len = strlen(sz_reg_key);
+ spin_lock(&dbdcd_lock);
+ list_for_each_entry(dcd_key, &reg_key_list, link) {
+ if (!strncmp(dcd_key->name, sz_reg_key, len)
+ && !cIndex--) {
+ strncpy(sz_value, &dcd_key->name[len],
+ strlen(&dcd_key->name[len]) + 1);
+ break;
+ }
+ }
+ spin_unlock(&dbdcd_lock);
+
+ if (&dcd_key->link == &reg_key_list)
+ status = -ENODATA;
+ }
+
+ if (DSP_SUCCEEDED(status)) {
+ /* Create UUID value using string retrieved from
+ * registry. */
+ uuid_uuid_from_string(sz_value, &dsp_uuid_obj);
+
+ *uuid_obj = dsp_uuid_obj;
+
+ /* Increment enum_refs to update reference count. */
+ enum_refs++;
+
+ status = 0;
+ } else if (status == -ENODATA) {
+ /* At the end of enumeration. Reset enum_refs. */
+ enum_refs = 0;
+
+ /*
+ * TODO: Revisit; this is not an error case but the code
+ * expects a non-zero value.
+ */
+ status = ENODATA;
+ } else {
+ status = -EPERM;
+ }
+ }
+
+ DBC_ENSURE(uuid_obj || (status == -EPERM));
+
+ return status;
+}
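+
+/*
+ * A hypothetical caller enumerates objects of one type by walking the index
+ * until the call stops returning 0 (use_uuid() below is purely illustrative):
+ *
+ *     struct dsp_uuid uuid;
+ *     s32 i;
+ *
+ *     for (i = 0; !dcd_enumerate_object(i, DSP_DCDNODETYPE, &uuid); i++)
+ *         use_uuid(&uuid);
+ *
+ * Passing index 0 (re)starts an enumeration; the positive ENODATA return
+ * marks the end of the list and resets enum_refs for the next walk.
+ */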
+
+/*
+ * ======== dcd_exit ========
+ * Purpose:
+ * Discontinue usage of the DCD module.
+ */
+void dcd_exit(void)
+{
+ struct dcd_key_elem *rv, *rv_tmp;
+ DBC_REQUIRE(refs > 0);
+
+ refs--;
+ if (refs == 0) {
+ cod_exit();
+ list_for_each_entry_safe(rv, rv_tmp, &reg_key_list, link) {
+ list_del(&rv->link);
+ kfree(rv->path);
+ kfree(rv);
+ }
+ }
+
+ DBC_ENSURE(refs >= 0);
+}
+
+/*
+ * ======== dcd_get_dep_libs ========
+ */
+int dcd_get_dep_libs(IN struct dcd_manager *hdcd_mgr,
+ IN struct dsp_uuid *uuid_obj,
+ u16 numLibs, OUT struct dsp_uuid *pDepLibUuids,
+ OUT bool *pPersistentDepLibs,
+ IN enum nldr_phase phase)
+{
+ int status = 0;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(hdcd_mgr);
+ DBC_REQUIRE(uuid_obj != NULL);
+ DBC_REQUIRE(pDepLibUuids != NULL);
+ DBC_REQUIRE(pPersistentDepLibs != NULL);
+
+ status =
+ get_dep_lib_info(hdcd_mgr, uuid_obj, &numLibs, NULL, pDepLibUuids,
+ pPersistentDepLibs, phase);
+
+ return status;
+}
+
+/*
+ * ======== dcd_get_num_dep_libs ========
+ */
+int dcd_get_num_dep_libs(IN struct dcd_manager *hdcd_mgr,
+ IN struct dsp_uuid *uuid_obj,
+ OUT u16 *pNumLibs, OUT u16 *pNumPersLibs,
+ IN enum nldr_phase phase)
+{
+ int status = 0;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(hdcd_mgr);
+ DBC_REQUIRE(pNumLibs != NULL);
+ DBC_REQUIRE(pNumPersLibs != NULL);
+ DBC_REQUIRE(uuid_obj != NULL);
+
+ status = get_dep_lib_info(hdcd_mgr, uuid_obj, pNumLibs, pNumPersLibs,
+ NULL, NULL, phase);
+
+ return status;
+}
+
+/*
+ * ======== dcd_get_object_def ========
+ * Purpose:
+ * Retrieves the properties of a node or processor based on the UUID and
+ * object type.
+ */
+int dcd_get_object_def(IN struct dcd_manager *hdcd_mgr,
+ IN struct dsp_uuid *pObjUuid,
+ IN enum dsp_dcdobjtype obj_type,
+ OUT struct dcd_genericobj *pObjDef)
+{
+ struct dcd_manager *dcd_mgr_obj = hdcd_mgr; /* ptr to DCD mgr */
+ struct cod_libraryobj *lib = NULL;
+ int status = 0;
+ u32 ul_addr = 0; /* Used by cod_get_section */
+ u32 ul_len = 0; /* Used by cod_get_section */
+ u32 dw_buf_size; /* Used by REG functions */
+ char sz_reg_key[DCD_MAXPATHLENGTH];
+ char *sz_uuid; /*[MAXUUIDLEN]; */
+ struct dcd_key_elem *dcd_key = NULL;
+ char sz_sect_name[MAXUUIDLEN + 2]; /* ".[UUID]\0" */
+ char *psz_coff_buf;
+ u32 dw_key_len; /* Len of REG key. */
+ char sz_obj_type[MAX_INT2CHAR_LENGTH]; /* str. rep. of obj_type. */
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(pObjDef != NULL);
+ DBC_REQUIRE(pObjUuid != NULL);
+
+ sz_uuid = kzalloc(MAXUUIDLEN, GFP_KERNEL);
+ if (!sz_uuid) {
+ status = -ENOMEM;
+ goto func_end;
+ }
+
+ if (!hdcd_mgr) {
+ status = -EFAULT;
+ goto func_end;
+ }
+
+ /* Pre-determine final key length. It's the length of DCD_REGKEY +
+ * "_\0" + the length of the sz_obj_type string + a terminating NULL */
+ dw_key_len = strlen(DCD_REGKEY) + 1 + sizeof(sz_obj_type) + 1;
+ DBC_ASSERT(dw_key_len < DCD_MAXPATHLENGTH);
+
+ /* Create proper REG key; concatenate DCD_REGKEY with obj_type. */
+ strncpy(sz_reg_key, DCD_REGKEY, strlen(DCD_REGKEY) + 1);
+
+ if ((strlen(sz_reg_key) + strlen("_\0")) < DCD_MAXPATHLENGTH)
+ strncat(sz_reg_key, "_\0", 2);
+ else
+ status = -EPERM;
+
+ status = snprintf(sz_obj_type, MAX_INT2CHAR_LENGTH, "%d", obj_type);
+ if (status == -1) {
+ status = -EPERM;
+ } else {
+ status = 0;
+
+ if ((strlen(sz_reg_key) + strlen(sz_obj_type)) <
+ DCD_MAXPATHLENGTH) {
+ strncat(sz_reg_key, sz_obj_type,
+ strlen(sz_obj_type) + 1);
+ } else {
+ status = -EPERM;
+ }
+
+ /* Create UUID value to set in registry. */
+ uuid_uuid_to_string(pObjUuid, sz_uuid, MAXUUIDLEN);
+
+ if ((strlen(sz_reg_key) + MAXUUIDLEN) < DCD_MAXPATHLENGTH)
+ strncat(sz_reg_key, sz_uuid, MAXUUIDLEN);
+ else
+ status = -EPERM;
+
+ /* Retrieve paths from the registry based on struct dsp_uuid */
+ dw_buf_size = DCD_MAXPATHLENGTH;
+ }
+ if (DSP_SUCCEEDED(status)) {
+ spin_lock(&dbdcd_lock);
+ list_for_each_entry(dcd_key, &reg_key_list, link) {
+ if (!strncmp(dcd_key->name, sz_reg_key,
+ strlen(sz_reg_key) + 1))
+ break;
+ }
+ spin_unlock(&dbdcd_lock);
+ if (&dcd_key->link == &reg_key_list) {
+ status = -ENOKEY;
+ goto func_end;
+ }
+ }
+
+ /* Open COFF file. */
+ status = cod_open(dcd_mgr_obj->cod_mgr, dcd_key->path,
+ COD_NOLOAD, &lib);
+ if (DSP_FAILED(status)) {
+ status = -EACCES;
+ goto func_end;
+ }
+
+ /* Ensure sz_uuid + 1 is not greater than sizeof sz_sect_name. */
+ DBC_ASSERT((strlen(sz_uuid) + 1) < sizeof(sz_sect_name));
+
+ /* Create section name based on node UUID. A period is
+ * pre-pended to the UUID string to form the section name.
+ * I.e. ".24BC8D90_BB45_11d4_B756_006008BDB66F" */
+ strncpy(sz_sect_name, ".", 2);
+ strncat(sz_sect_name, sz_uuid, strlen(sz_uuid));
+
+ /* Get section information. */
+ status = cod_get_section(lib, sz_sect_name, &ul_addr, &ul_len);
+ if (DSP_FAILED(status)) {
+ status = -EACCES;
+ goto func_end;
+ }
+
+ /* Allocate zeroed buffer. */
+ psz_coff_buf = kzalloc(ul_len + 4, GFP_KERNEL);
+#ifdef _DB_TIOMAP
+ if (strstr(dcd_key->path, "iva") == NULL) {
+ /* Locate section by objectID and read its content. */
+ status =
+ cod_read_section(lib, sz_sect_name, psz_coff_buf, ul_len);
+ } else {
+ status =
+ cod_read_section(lib, sz_sect_name, psz_coff_buf, ul_len);
+ dev_dbg(bridge, "%s: Skipped Byte swap for IVA!!\n", __func__);
+ }
+#else
+ status = cod_read_section(lib, sz_sect_name, psz_coff_buf, ul_len);
+#endif
+ if (DSP_SUCCEEDED(status)) {
+ /* Compress DSP buffer to conform to PC format. */
+ if (strstr(dcd_key->path, "iva") == NULL) {
+ compress_buf(psz_coff_buf, ul_len, DSPWORDSIZE);
+ } else {
+ compress_buf(psz_coff_buf, ul_len, 1);
+ dev_dbg(bridge, "%s: Compressing IVA COFF buffer by 1 "
+ "for IVA!!\n", __func__);
+ }
+
+ /* Parse the content of the COFF buffer. */
+ status =
+ get_attrs_from_buf(psz_coff_buf, ul_len, obj_type, pObjDef);
+ if (DSP_FAILED(status))
+ status = -EACCES;
+ } else {
+ status = -EACCES;
+ }
+
+ /* Free the previously allocated dynamic buffer. */
+ kfree(psz_coff_buf);
+func_end:
+ if (lib)
+ cod_close(lib);
+
+ kfree(sz_uuid);
+
+ return status;
+}
+
+/*
+ * ======== dcd_get_objects ========
+ */
+int dcd_get_objects(IN struct dcd_manager *hdcd_mgr,
+ IN char *pszCoffPath, dcd_registerfxn registerFxn,
+ void *handle)
+{
+ struct dcd_manager *dcd_mgr_obj = hdcd_mgr;
+ int status = 0;
+ char *psz_coff_buf;
+ char *psz_cur;
+ struct cod_libraryobj *lib = NULL;
+ u32 ul_addr = 0; /* Used by cod_get_section */
+ u32 ul_len = 0; /* Used by cod_get_section */
+ char seps[] = ":, ";
+ char *token = NULL;
+ struct dsp_uuid dsp_uuid_obj;
+ s32 object_type;
+
+ DBC_REQUIRE(refs > 0);
+ if (!hdcd_mgr) {
+ status = -EFAULT;
+ goto func_end;
+ }
+
+ /* Open DSP coff file, don't load symbols. */
+ status = cod_open(dcd_mgr_obj->cod_mgr, pszCoffPath, COD_NOLOAD, &lib);
+ if (DSP_FAILED(status)) {
+ status = -EACCES;
+ goto func_cont;
+ }
+
+ /* Get DCD_REGISTER_SECTION section information. */
+ status = cod_get_section(lib, DCD_REGISTER_SECTION, &ul_addr, &ul_len);
+ if (DSP_FAILED(status) || !(ul_len > 0)) {
+ status = -EACCES;
+ goto func_cont;
+ }
+
+ /* Allocate zeroed buffer. */
+ psz_coff_buf = kzalloc(ul_len + 4, GFP_KERNEL);
+#ifdef _DB_TIOMAP
+ if (strstr(pszCoffPath, "iva") == NULL) {
+ /* Locate section by objectID and read its content. */
+ status = cod_read_section(lib, DCD_REGISTER_SECTION,
+ psz_coff_buf, ul_len);
+ } else {
+ dev_dbg(bridge, "%s: Skipped Byte swap for IVA!!\n", __func__);
+ status = cod_read_section(lib, DCD_REGISTER_SECTION,
+ psz_coff_buf, ul_len);
+ }
+#else
+ status =
+ cod_read_section(lib, DCD_REGISTER_SECTION, psz_coff_buf, ul_len);
+#endif
+ if (DSP_SUCCEEDED(status)) {
+ /* Compress DSP buffer to conform to PC format. */
+ if (strstr(pszCoffPath, "iva") == NULL) {
+ compress_buf(psz_coff_buf, ul_len, DSPWORDSIZE);
+ } else {
+ compress_buf(psz_coff_buf, ul_len, 1);
+ dev_dbg(bridge, "%s: Compress COFF buffer with 1 word "
+ "for IVA!!\n", __func__);
+ }
+
+ /* Read from buffer and register object in buffer. */
+ psz_cur = psz_coff_buf;
+ while ((token = strsep(&psz_cur, seps)) && *token != '\0') {
+ /* Retrieve UUID string. */
+ uuid_uuid_from_string(token, &dsp_uuid_obj);
+
+ /* Retrieve object type string */
+ token = strsep(&psz_cur, seps);
+
+ /* Retrieve object type */
+ object_type = atoi(token);
+
+ /*
+ * Apply registerFxn to the found DCD object.
+ * Possible actions include:
+ *
+ * 1) Register found DCD object.
+ * 2) Unregister found DCD object (when handle == NULL)
+ * 3) Add overlay node.
+ */
+ status =
+ registerFxn(&dsp_uuid_obj, object_type, handle);
+ if (DSP_FAILED(status)) {
+ /* if error occurs, break from while loop. */
+ break;
+ }
+ }
+ } else {
+ status = -EACCES;
+ }
+
+ /* Free the previously allocated dynamic buffer. */
+ kfree(psz_coff_buf);
+func_cont:
+ if (lib)
+ cod_close(lib);
+
+func_end:
+ return status;
+}
+
+/*
+ * ======== dcd_get_library_name ========
+ * Purpose:
+ * Retrieves the library name for the given UUID.
+ *
+ */
+int dcd_get_library_name(IN struct dcd_manager *hdcd_mgr,
+ IN struct dsp_uuid *uuid_obj,
+ IN OUT char *pstrLibName, IN OUT u32 * pdwSize,
+ enum nldr_phase phase, OUT bool *phase_split)
+{
+ char sz_reg_key[DCD_MAXPATHLENGTH];
+ char sz_uuid[MAXUUIDLEN];
+ u32 dw_key_len; /* Len of REG key. */
+ char sz_obj_type[MAX_INT2CHAR_LENGTH]; /* str. rep. of obj_type. */
+ int status = 0;
+ struct dcd_key_elem *dcd_key = NULL;
+
+ DBC_REQUIRE(uuid_obj != NULL);
+ DBC_REQUIRE(pstrLibName != NULL);
+ DBC_REQUIRE(pdwSize != NULL);
+ DBC_REQUIRE(hdcd_mgr);
+
+ dev_dbg(bridge, "%s: hdcd_mgr %p, uuid_obj %p, pstrLibName %p, pdwSize "
+ "%p\n", __func__, hdcd_mgr, uuid_obj, pstrLibName, pdwSize);
+
+ /*
+ * Pre-determine final key length. It's the length of DCD_REGKEY +
+ * "_\0" + the length of the sz_obj_type string + a terminating NULL.
+ */
+ dw_key_len = strlen(DCD_REGKEY) + 1 + sizeof(sz_obj_type) + 1;
+ DBC_ASSERT(dw_key_len < DCD_MAXPATHLENGTH);
+
+ /* Create proper REG key; concatenate DCD_REGKEY with obj_type. */
+ strncpy(sz_reg_key, DCD_REGKEY, strlen(DCD_REGKEY) + 1);
+ if ((strlen(sz_reg_key) + strlen("_\0")) < DCD_MAXPATHLENGTH)
+ strncat(sz_reg_key, "_\0", 2);
+ else
+ status = -EPERM;
+
+ switch (phase) {
+ case NLDR_CREATE:
+ /* create phase type */
+ sprintf(sz_obj_type, "%d", DSP_DCDCREATELIBTYPE);
+ break;
+ case NLDR_EXECUTE:
+ /* execute phase type */
+ sprintf(sz_obj_type, "%d", DSP_DCDEXECUTELIBTYPE);
+ break;
+ case NLDR_DELETE:
+ /* delete phase type */
+ sprintf(sz_obj_type, "%d", DSP_DCDDELETELIBTYPE);
+ break;
+ case NLDR_NOPHASE:
+ /* known to be a dependent library */
+ sprintf(sz_obj_type, "%d", DSP_DCDLIBRARYTYPE);
+ break;
+ default:
+ status = -EINVAL;
+ DBC_ASSERT(false);
+ }
+ if (DSP_SUCCEEDED(status)) {
+ if ((strlen(sz_reg_key) + strlen(sz_obj_type)) <
+ DCD_MAXPATHLENGTH) {
+ strncat(sz_reg_key, sz_obj_type,
+ strlen(sz_obj_type) + 1);
+ } else {
+ status = -EPERM;
+ }
+ /* Create UUID value to find match in registry. */
+ uuid_uuid_to_string(uuid_obj, sz_uuid, MAXUUIDLEN);
+ if ((strlen(sz_reg_key) + MAXUUIDLEN) < DCD_MAXPATHLENGTH)
+ strncat(sz_reg_key, sz_uuid, MAXUUIDLEN);
+ else
+ status = -EPERM;
+ }
+ if (DSP_SUCCEEDED(status)) {
+ spin_lock(&dbdcd_lock);
+ list_for_each_entry(dcd_key, &reg_key_list, link) {
+ /* See if the name matches. */
+ if (!strncmp(dcd_key->name, sz_reg_key,
+ strlen(sz_reg_key) + 1))
+ break;
+ }
+ spin_unlock(&dbdcd_lock);
+ }
+
+ if (&dcd_key->link == &reg_key_list)
+ status = -ENOKEY;
+
+ /* If we can't find it, the phases might be registered as generic LIBRARYTYPE */
+ if (DSP_FAILED(status) && phase != NLDR_NOPHASE) {
+ if (phase_split)
+ *phase_split = false;
+
+ strncpy(sz_reg_key, DCD_REGKEY, strlen(DCD_REGKEY) + 1);
+ if ((strlen(sz_reg_key) + strlen("_\0")) <
+ DCD_MAXPATHLENGTH) {
+ strncat(sz_reg_key, "_\0", 2);
+ } else {
+ status = -EPERM;
+ }
+ sprintf(sz_obj_type, "%d", DSP_DCDLIBRARYTYPE);
+ if ((strlen(sz_reg_key) + strlen(sz_obj_type))
+ < DCD_MAXPATHLENGTH) {
+ strncat(sz_reg_key, sz_obj_type,
+ strlen(sz_obj_type) + 1);
+ } else {
+ status = -EPERM;
+ }
+ uuid_uuid_to_string(uuid_obj, sz_uuid, MAXUUIDLEN);
+ if ((strlen(sz_reg_key) + MAXUUIDLEN) < DCD_MAXPATHLENGTH)
+ strncat(sz_reg_key, sz_uuid, MAXUUIDLEN);
+ else
+ status = -EPERM;
+
+ spin_lock(&dbdcd_lock);
+ list_for_each_entry(dcd_key, &reg_key_list, link) {
+ /* See if the name matches. */
+ if (!strncmp(dcd_key->name, sz_reg_key,
+ strlen(sz_reg_key) + 1))
+ break;
+ }
+ spin_unlock(&dbdcd_lock);
+
+ status = (&dcd_key->link != &reg_key_list) ?
+ 0 : -ENOKEY;
+ }
+
+ if (DSP_SUCCEEDED(status))
+ memcpy(pstrLibName, dcd_key->path, strlen(dcd_key->path) + 1);
+ return status;
+}
+
+/*
+ * ======== dcd_init ========
+ * Purpose:
+ * Initialize the DCD module.
+ */
+bool dcd_init(void)
+{
+ bool init_cod;
+ bool ret = true;
+
+ DBC_REQUIRE(refs >= 0);
+
+ if (refs == 0) {
+ /* Initialize required modules. */
+ init_cod = cod_init();
+
+ if (!init_cod) {
+ ret = false;
+ /* Exit initialized modules. */
+ if (init_cod)
+ cod_exit();
+ }
+
+ INIT_LIST_HEAD(&reg_key_list);
+ }
+
+ if (ret)
+ refs++;
+
+ DBC_ENSURE((ret && (refs > 0)) || (!ret && (refs == 0)));
+
+ return ret;
+}
+
+/*
+ * ======== dcd_register_object ========
+ * Purpose:
+ * Registers a node or a processor with the DCD.
+ * If psz_path_name == NULL, unregister the specified DCD object.
+ */
+int dcd_register_object(IN struct dsp_uuid *uuid_obj,
+ IN enum dsp_dcdobjtype obj_type,
+ IN char *psz_path_name)
+{
+ int status = 0;
+ char sz_reg_key[DCD_MAXPATHLENGTH];
+ char sz_uuid[MAXUUIDLEN + 1];
+ u32 dw_path_size = 0;
+ u32 dw_key_len; /* Len of REG key. */
+ char sz_obj_type[MAX_INT2CHAR_LENGTH]; /* str. rep. of obj_type. */
+ struct dcd_key_elem *dcd_key = NULL;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(uuid_obj != NULL);
+ DBC_REQUIRE((obj_type == DSP_DCDNODETYPE) ||
+ (obj_type == DSP_DCDPROCESSORTYPE) ||
+ (obj_type == DSP_DCDLIBRARYTYPE) ||
+ (obj_type == DSP_DCDCREATELIBTYPE) ||
+ (obj_type == DSP_DCDEXECUTELIBTYPE) ||
+ (obj_type == DSP_DCDDELETELIBTYPE));
+
+ dev_dbg(bridge, "%s: object UUID %p, obj_type %d, szPathName %s\n",
+ __func__, uuid_obj, obj_type, psz_path_name);
+
+ /*
+ * Pre-determine final key length. It's the length of DCD_REGKEY +
+ * "_\0" + the length of the sz_obj_type string + a terminating NULL.
+ */
+ dw_key_len = strlen(DCD_REGKEY) + 1 + sizeof(sz_obj_type) + 1;
+ DBC_ASSERT(dw_key_len < DCD_MAXPATHLENGTH);
+
+ /* Create proper REG key; concatenate DCD_REGKEY with obj_type. */
+ strncpy(sz_reg_key, DCD_REGKEY, strlen(DCD_REGKEY) + 1);
+ if ((strlen(sz_reg_key) + strlen("_\0")) < DCD_MAXPATHLENGTH)
+ strncat(sz_reg_key, "_\0", 2);
+ else {
+ status = -EPERM;
+ goto func_end;
+ }
+
+ status = snprintf(sz_obj_type, MAX_INT2CHAR_LENGTH, "%d", obj_type);
+ if (status == -1) {
+ status = -EPERM;
+ } else {
+ status = 0;
+ if ((strlen(sz_reg_key) + strlen(sz_obj_type)) <
+ DCD_MAXPATHLENGTH) {
+ strncat(sz_reg_key, sz_obj_type,
+ strlen(sz_obj_type) + 1);
+ } else
+ status = -EPERM;
+
+ /* Create UUID value to set in registry. */
+ uuid_uuid_to_string(uuid_obj, sz_uuid, MAXUUIDLEN);
+ if ((strlen(sz_reg_key) + MAXUUIDLEN) < DCD_MAXPATHLENGTH)
+ strncat(sz_reg_key, sz_uuid, MAXUUIDLEN);
+ else
+ status = -EPERM;
+ }
+
+ if (DSP_FAILED(status))
+ goto func_end;
+
+ /*
+ * If psz_path_name != NULL, perform registration, otherwise,
+ * perform unregistration.
+ */
+
+ if (psz_path_name) {
+ dw_path_size = strlen(psz_path_name) + 1;
+ spin_lock(&dbdcd_lock);
+ list_for_each_entry(dcd_key, &reg_key_list, link) {
+ /* See if the name matches. */
+ if (!strncmp(dcd_key->name, sz_reg_key,
+ strlen(sz_reg_key) + 1))
+ break;
+ }
+ spin_unlock(&dbdcd_lock);
+ if (&dcd_key->link == &reg_key_list) {
+ /*
+ * Add new reg value (UUID+obj_type)
+ * with COFF path info
+ */
+
+ dcd_key = kmalloc(sizeof(struct dcd_key_elem),
+ GFP_KERNEL);
+ if (!dcd_key) {
+ status = -ENOMEM;
+ goto func_end;
+ }
+
+ dcd_key->path = kmalloc(strlen(sz_reg_key) + 1,
+ GFP_KERNEL);
+
+ if (!dcd_key->path) {
+ kfree(dcd_key);
+ status = -ENOMEM;
+ goto func_end;
+ }
+
+ strncpy(dcd_key->name, sz_reg_key,
+ strlen(sz_reg_key) + 1);
+ strncpy(dcd_key->path, psz_path_name,
+ dw_path_size);
+ spin_lock(&dbdcd_lock);
+ list_add_tail(&dcd_key->link, &reg_key_list);
+ spin_unlock(&dbdcd_lock);
+ } else {
+ /* Make sure the new data is the same. */
+ if (strncmp(dcd_key->path, psz_path_name,
+ dw_path_size)) {
+ /* The caller needs a different data size! */
+ kfree(dcd_key->path);
+ dcd_key->path = kmalloc(dw_path_size,
+ GFP_KERNEL);
+ if (dcd_key->path == NULL) {
+ status = -ENOMEM;
+ goto func_end;
+ }
+ }
+
+ /* We have a match! Copy out the data. */
+ memcpy(dcd_key->path, psz_path_name, dw_path_size);
+ }
+ dev_dbg(bridge, "%s: psz_path_name=%s, dw_path_size=%d\n",
+ __func__, psz_path_name, dw_path_size);
+ } else {
+ /* Deregister an existing object */
+ spin_lock(&dbdcd_lock);
+ list_for_each_entry(dcd_key, &reg_key_list, link) {
+ if (!strncmp(dcd_key->name, sz_reg_key,
+ strlen(sz_reg_key) + 1)) {
+ list_del(&dcd_key->link);
+ kfree(dcd_key->path);
+ kfree(dcd_key);
+ break;
+ }
+ }
+ spin_unlock(&dbdcd_lock);
+ if (&dcd_key->link == &reg_key_list)
+ status = -EPERM;
+ }
+
+ if (DSP_SUCCEEDED(status)) {
+ /*
+ * Because the node database has been updated through a
+ * successful object registration/de-registration operation,
+ * we need to reset the object enumeration counter to allow
+ * current enumerations to reflect this update in the node
+ * database.
+ */
+ enum_refs = 0;
+ }
+func_end:
+ return status;
+}
+
+/*
+ * ======== dcd_unregister_object ========
+ * Call dcd_register_object() with psz_path_name set to NULL to
+ * perform actual object de-registration.
+ */
+int dcd_unregister_object(IN struct dsp_uuid *uuid_obj,
+ IN enum dsp_dcdobjtype obj_type)
+{
+ int status = 0;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(uuid_obj != NULL);
+ DBC_REQUIRE((obj_type == DSP_DCDNODETYPE) ||
+ (obj_type == DSP_DCDPROCESSORTYPE) ||
+ (obj_type == DSP_DCDLIBRARYTYPE) ||
+ (obj_type == DSP_DCDCREATELIBTYPE) ||
+ (obj_type == DSP_DCDEXECUTELIBTYPE) ||
+ (obj_type == DSP_DCDDELETELIBTYPE));
+
+ /*
+ * When dcd_register_object is called with NULL as pathname,
+ * it indicates an unregister object operation.
+ */
+ status = dcd_register_object(uuid_obj, obj_type, NULL);
+
+ return status;
+}
+
+/*
+ **********************************************************************
+ * DCD Helper Functions
+ **********************************************************************
+ */
+
+/*
+ * ======== atoi ========
+ * Purpose:
+ * This function converts strings in decimal or hex format to integers.
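+ * A trailing 'h' (e.g. "10h") selects base 16, a leading '-' or '+'
+ * selects base 10, otherwise the base is auto-detected by
+ * simple_strtoul (so "0x" prefixes are parsed as hex).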
+ */
+static s32 atoi(char *psz_buf)
+{
+ char *pch = psz_buf;
+ s32 base = 0;
+
+ while (isspace(*pch))
+ pch++;
+
+ if (*pch == '-' || *pch == '+') {
+ base = 10;
+ pch++;
+ } else if (*pch && tolower(pch[strlen(pch) - 1]) == 'h') {
+ base = 16;
+ }
+
+ return simple_strtoul(pch, NULL, base);
+}
+
+/*
+ * ======== get_attrs_from_buf ========
+ * Purpose:
+ * Parse the content of a buffer filled with DSP-side data and
+ * retrieve an object's attributes from it. IMPORTANT: Assume the
+ * buffer has been converted from DSP format to GPP format.
+ */
+static int get_attrs_from_buf(char *psz_buf, u32 ul_buf_size,
+ enum dsp_dcdobjtype obj_type,
+ struct dcd_genericobj *pGenObj)
+{
+ int status = 0;
+ char seps[] = ", ";
+ char *psz_cur;
+ char *token;
+ s32 token_len = 0;
+ u32 i = 0;
+#ifdef _DB_TIOMAP
+ s32 entry_id;
+#endif
+
+ DBC_REQUIRE(psz_buf != NULL);
+ DBC_REQUIRE(ul_buf_size != 0);
+ DBC_REQUIRE((obj_type == DSP_DCDNODETYPE)
+ || (obj_type == DSP_DCDPROCESSORTYPE));
+ DBC_REQUIRE(pGenObj != NULL);
+
+ switch (obj_type) {
+ case DSP_DCDNODETYPE:
+ /*
+ * Parse COFF sect buffer to retrieve individual tokens used
+ * to fill in object attrs.
+ */
+ psz_cur = psz_buf;
+ token = strsep(&psz_cur, seps);
+
+ /* u32 cb_struct */
+ pGenObj->obj_data.node_obj.ndb_props.cb_struct =
+ (u32) atoi(token);
+ token = strsep(&psz_cur, seps);
+
+ /* dsp_uuid ui_node_id */
+ uuid_uuid_from_string(token,
+ &pGenObj->obj_data.node_obj.ndb_props.
+ ui_node_id);
+ token = strsep(&psz_cur, seps);
+
+ /* ac_name */
+ DBC_REQUIRE(token);
+ token_len = strlen(token);
+ if (token_len > DSP_MAXNAMELEN - 1)
+ token_len = DSP_MAXNAMELEN - 1;
+
+ strncpy(pGenObj->obj_data.node_obj.ndb_props.ac_name,
+ token, token_len);
+ pGenObj->obj_data.node_obj.ndb_props.ac_name[token_len] = '\0';
+ token = strsep(&psz_cur, seps);
+ /* u32 ntype */
+ pGenObj->obj_data.node_obj.ndb_props.ntype = atoi(token);
+ token = strsep(&psz_cur, seps);
+ /* u32 cache_on_gpp */
+ pGenObj->obj_data.node_obj.ndb_props.cache_on_gpp = atoi(token);
+ token = strsep(&psz_cur, seps);
+ /* dsp_resourcereqmts dsp_resource_reqmts */
+ pGenObj->obj_data.node_obj.ndb_props.dsp_resource_reqmts.
+ cb_struct = (u32) atoi(token);
+ token = strsep(&psz_cur, seps);
+
+ pGenObj->obj_data.node_obj.ndb_props.
+ dsp_resource_reqmts.static_data_size = atoi(token);
+ token = strsep(&psz_cur, seps);
+ pGenObj->obj_data.node_obj.ndb_props.
+ dsp_resource_reqmts.global_data_size = atoi(token);
+ token = strsep(&psz_cur, seps);
+ pGenObj->obj_data.node_obj.ndb_props.
+ dsp_resource_reqmts.program_mem_size = atoi(token);
+ token = strsep(&psz_cur, seps);
+ pGenObj->obj_data.node_obj.ndb_props.
+ dsp_resource_reqmts.uwc_execution_time = atoi(token);
+ token = strsep(&psz_cur, seps);
+ pGenObj->obj_data.node_obj.ndb_props.
+ dsp_resource_reqmts.uwc_period = atoi(token);
+ token = strsep(&psz_cur, seps);
+
+ pGenObj->obj_data.node_obj.ndb_props.
+ dsp_resource_reqmts.uwc_deadline = atoi(token);
+ token = strsep(&psz_cur, seps);
+
+ pGenObj->obj_data.node_obj.ndb_props.
+ dsp_resource_reqmts.avg_exection_time = atoi(token);
+ token = strsep(&psz_cur, seps);
+
+ pGenObj->obj_data.node_obj.ndb_props.
+ dsp_resource_reqmts.minimum_period = atoi(token);
+ token = strsep(&psz_cur, seps);
+
+ /* s32 prio */
+ pGenObj->obj_data.node_obj.ndb_props.prio = atoi(token);
+ token = strsep(&psz_cur, seps);
+
+ /* u32 stack_size */
+ pGenObj->obj_data.node_obj.ndb_props.stack_size = atoi(token);
+ token = strsep(&psz_cur, seps);
+
+ /* u32 sys_stack_size */
+ pGenObj->obj_data.node_obj.ndb_props.sys_stack_size =
+ atoi(token);
+ token = strsep(&psz_cur, seps);
+
+ /* u32 stack_seg */
+ pGenObj->obj_data.node_obj.ndb_props.stack_seg = atoi(token);
+ token = strsep(&psz_cur, seps);
+
+ /* u32 message_depth */
+ pGenObj->obj_data.node_obj.ndb_props.message_depth =
+ atoi(token);
+ token = strsep(&psz_cur, seps);
+
+ /* u32 num_input_streams */
+ pGenObj->obj_data.node_obj.ndb_props.num_input_streams =
+ atoi(token);
+ token = strsep(&psz_cur, seps);
+
+ /* u32 num_output_streams */
+ pGenObj->obj_data.node_obj.ndb_props.num_output_streams =
+ atoi(token);
+ token = strsep(&psz_cur, seps);
+
+ /* u32 utimeout */
+ pGenObj->obj_data.node_obj.ndb_props.utimeout = atoi(token);
+ token = strsep(&psz_cur, seps);
+
+ /* char *pstr_create_phase_fxn */
+ DBC_REQUIRE(token);
+ token_len = strlen(token);
+ pGenObj->obj_data.node_obj.pstr_create_phase_fxn =
+ kzalloc(token_len + 1, GFP_KERNEL);
+ strncpy(pGenObj->obj_data.node_obj.pstr_create_phase_fxn,
+ token, token_len);
+ pGenObj->obj_data.node_obj.pstr_create_phase_fxn[token_len] =
+ '\0';
+ token = strsep(&psz_cur, seps);
+
+ /* char *pstr_execute_phase_fxn */
+ DBC_REQUIRE(token);
+ token_len = strlen(token);
+ pGenObj->obj_data.node_obj.pstr_execute_phase_fxn =
+ kzalloc(token_len + 1, GFP_KERNEL);
+ strncpy(pGenObj->obj_data.node_obj.pstr_execute_phase_fxn,
+ token, token_len);
+ pGenObj->obj_data.node_obj.pstr_execute_phase_fxn[token_len] =
+ '\0';
+ token = strsep(&psz_cur, seps);
+
+ /* char *pstr_delete_phase_fxn */
+ DBC_REQUIRE(token);
+ token_len = strlen(token);
+ pGenObj->obj_data.node_obj.pstr_delete_phase_fxn =
+ kzalloc(token_len + 1, GFP_KERNEL);
+ strncpy(pGenObj->obj_data.node_obj.pstr_delete_phase_fxn,
+ token, token_len);
+ pGenObj->obj_data.node_obj.pstr_delete_phase_fxn[token_len] =
+ '\0';
+ token = strsep(&psz_cur, seps);
+
+ /* Segment id for message buffers */
+ pGenObj->obj_data.node_obj.msg_segid = atoi(token);
+ token = strsep(&psz_cur, seps);
+
+ /* Message notification type */
+ pGenObj->obj_data.node_obj.msg_notify_type = atoi(token);
+ token = strsep(&psz_cur, seps);
+
+ /* char *pstr_i_alg_name */
+ if (token) {
+ token_len = strlen(token);
+ pGenObj->obj_data.node_obj.pstr_i_alg_name =
+ kzalloc(token_len + 1, GFP_KERNEL);
+ strncpy(pGenObj->obj_data.node_obj.pstr_i_alg_name,
+ token, token_len);
+ pGenObj->obj_data.node_obj.pstr_i_alg_name[token_len] =
+ '\0';
+ token = strsep(&psz_cur, seps);
+ }
+
+ /* Load type (static, dynamic, or overlay) */
+ if (token) {
+ pGenObj->obj_data.node_obj.us_load_type = atoi(token);
+ token = strsep(&psz_cur, seps);
+ }
+
+ /* Dynamic load data requirements */
+ if (token) {
+ pGenObj->obj_data.node_obj.ul_data_mem_seg_mask =
+ atoi(token);
+ token = strsep(&psz_cur, seps);
+ }
+
+ /* Dynamic load code requirements */
+ if (token) {
+ pGenObj->obj_data.node_obj.ul_code_mem_seg_mask =
+ atoi(token);
+ token = strsep(&psz_cur, seps);
+ }
+
+ /* Extract node profiles into node properties */
+ if (token) {
+
+ pGenObj->obj_data.node_obj.ndb_props.count_profiles =
+ atoi(token);
+ for (i = 0;
+ i <
+ pGenObj->obj_data.node_obj.
+ ndb_props.count_profiles; i++) {
+ token = strsep(&psz_cur, seps);
+ if (token) {
+ /* Heap Size for the node */
+ pGenObj->obj_data.node_obj.
+ ndb_props.node_profiles[i].
+ ul_heap_size = atoi(token);
+ }
+ }
+ }
+ token = strsep(&psz_cur, seps);
+ if (token) {
+ pGenObj->obj_data.node_obj.ndb_props.stack_seg_name =
+ (u32) (token);
+ }
+
+ break;
+
+ case DSP_DCDPROCESSORTYPE:
+ /*
+ * Parse COFF sect buffer to retrieve individual tokens used
+ * to fill in object attrs.
+ */
+ psz_cur = psz_buf;
+ token = strsep(&psz_cur, seps);
+
+ pGenObj->obj_data.proc_info.cb_struct = atoi(token);
+ token = strsep(&psz_cur, seps);
+
+ pGenObj->obj_data.proc_info.processor_family = atoi(token);
+ token = strsep(&psz_cur, seps);
+
+ pGenObj->obj_data.proc_info.processor_type = atoi(token);
+ token = strsep(&psz_cur, seps);
+
+ pGenObj->obj_data.proc_info.clock_rate = atoi(token);
+ token = strsep(&psz_cur, seps);
+
+ pGenObj->obj_data.proc_info.ul_internal_mem_size = atoi(token);
+ token = strsep(&psz_cur, seps);
+
+ pGenObj->obj_data.proc_info.ul_external_mem_size = atoi(token);
+ token = strsep(&psz_cur, seps);
+
+ pGenObj->obj_data.proc_info.processor_id = atoi(token);
+ token = strsep(&psz_cur, seps);
+
+ pGenObj->obj_data.proc_info.ty_running_rtos = atoi(token);
+ token = strsep(&psz_cur, seps);
+
+ pGenObj->obj_data.proc_info.node_min_priority = atoi(token);
+ token = strsep(&psz_cur, seps);
+
+ pGenObj->obj_data.proc_info.node_max_priority = atoi(token);
+
+#ifdef _DB_TIOMAP
+ /* Proc object may contain additional(extended) attributes. */
+ /* attr must match proc.hxx */
+ for (entry_id = 0; entry_id < 7; entry_id++) {
+ token = strsep(&psz_cur, seps);
+ pGenObj->obj_data.ext_proc_obj.ty_tlb[entry_id].
+ ul_gpp_phys = atoi(token);
+
+ token = strsep(&psz_cur, seps);
+ pGenObj->obj_data.ext_proc_obj.ty_tlb[entry_id].
+ ul_dsp_virt = atoi(token);
+ }
+#endif
+
+ break;
+
+ default:
+ status = -EPERM;
+ break;
+ }
+
+ return status;
+}
+
+/*
+ * ======== compress_buf ========
+ * Purpose:
+ * Compress the DSP buffer, if necessary, to conform to PC format.
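+ * Each cCharSize-wide DSP character is collapsed to a single GPP char,
+ * backslash escapes (\t, \n, \r, \0) are expanded, and the unused tail
+ * of the buffer is zeroed.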
+ */
+static void compress_buf(char *psz_buf, u32 ul_buf_size, s32 cCharSize)
+{
+ char *p;
+ char ch;
+ char *q;
+
+ p = psz_buf;
+ if (p == NULL)
+ return;
+
+ for (q = psz_buf; q < (psz_buf + ul_buf_size);) {
+ ch = dsp_char2_gpp_char(q, cCharSize);
+ if (ch == '\\') {
+ q += cCharSize;
+ ch = dsp_char2_gpp_char(q, cCharSize);
+ switch (ch) {
+ case 't':
+ *p = '\t';
+ break;
+
+ case 'n':
+ *p = '\n';
+ break;
+
+ case 'r':
+ *p = '\r';
+ break;
+
+ case '0':
+ *p = '\0';
+ break;
+
+ default:
+ *p = ch;
+ break;
+ }
+ } else {
+ *p = ch;
+ }
+ p++;
+ q += cCharSize;
+ }
+
+ /* NULL out remainder of buffer. */
+ while (p < q)
+ *p++ = '\0';
+}
+
+/*
+ * ======== dsp_char2_gpp_char ========
+ * Purpose:
+ * Convert DSP char to host GPP char in a portable manner
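+ * The cDspCharSize bytes of the DSP word are OR'ed together, so a
+ * character padded with zero bytes is recovered unchanged.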
+ */
+static char dsp_char2_gpp_char(char *pWord, s32 cDspCharSize)
+{
+ char ch = '\0';
+ char *ch_src;
+ s32 i;
+
+ for (ch_src = pWord, i = cDspCharSize; i > 0; i--)
+ ch |= *ch_src++;
+
+ return ch;
+}
+
+/*
+ * ======== get_dep_lib_info ========
+ */
+static int get_dep_lib_info(IN struct dcd_manager *hdcd_mgr,
+ IN struct dsp_uuid *uuid_obj,
+ IN OUT u16 *pNumLibs,
+ OPTIONAL OUT u16 *pNumPersLibs,
+ OPTIONAL OUT struct dsp_uuid *pDepLibUuids,
+ OPTIONAL OUT bool *pPersistentDepLibs,
+ enum nldr_phase phase)
+{
+ struct dcd_manager *dcd_mgr_obj = hdcd_mgr;
+ char *psz_coff_buf = NULL;
+ char *psz_cur;
+ char *psz_file_name = NULL;
+ struct cod_libraryobj *lib = NULL;
+ u32 ul_addr = 0; /* Used by cod_get_section */
+ u32 ul_len = 0; /* Used by cod_get_section */
+ u32 dw_data_size = COD_MAXPATHLENGTH;
+ char seps[] = ", ";
+ char *token = NULL;
+ bool get_uuids = (pDepLibUuids != NULL);
+ u16 dep_libs = 0;
+ int status = 0;
+
+ DBC_REQUIRE(refs > 0);
+
+ DBC_REQUIRE(hdcd_mgr);
+ DBC_REQUIRE(pNumLibs != NULL);
+ DBC_REQUIRE(uuid_obj != NULL);
+
+ /* If only counting the number of dependent libraries, initialize
+ * the counts to zero */
+ if (!get_uuids) {
+ *pNumLibs = 0;
+ *pNumPersLibs = 0;
+ }
+
+ /* Allocate a buffer for file name */
+ psz_file_name = kzalloc(dw_data_size, GFP_KERNEL);
+ if (psz_file_name == NULL) {
+ status = -ENOMEM;
+ } else {
+ /* Get the name of the library */
+ status = dcd_get_library_name(hdcd_mgr, uuid_obj, psz_file_name,
+ &dw_data_size, phase, NULL);
+ }
+
+ /* Open the library */
+ if (DSP_SUCCEEDED(status)) {
+ status = cod_open(dcd_mgr_obj->cod_mgr, psz_file_name,
+ COD_NOLOAD, &lib);
+ }
+ if (DSP_SUCCEEDED(status)) {
+ /* Get dependent library section information. */
+ status = cod_get_section(lib, DEPLIBSECT, &ul_addr, &ul_len);
+
+ if (DSP_FAILED(status)) {
+ /* Ok, no dependent libraries */
+ ul_len = 0;
+ status = 0;
+ }
+ }
+
+ if (DSP_FAILED(status) || !(ul_len > 0))
+ goto func_cont;
+
+ /* Allocate zeroed buffer. */
+ psz_coff_buf = kzalloc(ul_len + 4, GFP_KERNEL);
+ if (psz_coff_buf == NULL) {
+ status = -ENOMEM;
+ goto func_cont;
+ }
+
+ /* Read section contents. */
+ status = cod_read_section(lib, DEPLIBSECT, psz_coff_buf, ul_len);
+ if (DSP_FAILED(status))
+ goto func_cont;
+
+ /* Compress and format DSP buffer to conform to PC format. */
+ compress_buf(psz_coff_buf, ul_len, DSPWORDSIZE);
+
+ /* Read from buffer */
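+ /* Each entry is a "<uuid>, <persistent-flag>" pair. */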
+ psz_cur = psz_coff_buf;
+ while ((token = strsep(&psz_cur, seps)) && *token != '\0') {
+ if (get_uuids) {
+ if (dep_libs >= *pNumLibs) {
+ /* Gone beyond the limit */
+ break;
+ } else {
+ /* Retrieve UUID string. */
+ uuid_uuid_from_string(token,
+ &(pDepLibUuids
+ [dep_libs]));
+ /* Is this library persistent? */
+ token = strsep(&psz_cur, seps);
+ pPersistentDepLibs[dep_libs] = atoi(token);
+ dep_libs++;
+ }
+ } else {
+ /* Advance to the next token */
+ token = strsep(&psz_cur, seps);
+ if (atoi(token))
+ (*pNumPersLibs)++;
+
+ /* Just counting number of dependent libraries */
+ (*pNumLibs)++;
+ }
+ }
+func_cont:
+ if (lib)
+ cod_close(lib);
+
+ /* Free previously allocated dynamic buffers. */
+ kfree(psz_file_name);
+
+ kfree(psz_coff_buf);
+
+ return status;
+}
diff --git a/drivers/staging/tidspbridge/rmgr/disp.c b/drivers/staging/tidspbridge/rmgr/disp.c
new file mode 100644
index 0000000..7195415
--- /dev/null
+++ b/drivers/staging/tidspbridge/rmgr/disp.c
@@ -0,0 +1,754 @@
+/*
+ * disp.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Node Dispatcher interface. Communicates with Resource Manager Server
+ * (RMS) on DSP. Access to RMS is synchronized in NODE.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+/* ----------------------------------- Host OS */
+#include <dspbridge/host_os.h>
+
+/* ----------------------------------- DSP/BIOS Bridge */
+#include <dspbridge/std.h>
+#include <dspbridge/dbdefs.h>
+
+/* ----------------------------------- Trace & Debug */
+#include <dspbridge/dbc.h>
+
+/* ----------------------------------- OS Adaptation Layer */
+#include <dspbridge/sync.h>
+
+/* ----------------------------------- Link Driver */
+#include <dspbridge/dspdefs.h>
+
+/* ----------------------------------- Platform Manager */
+#include <dspbridge/dev.h>
+#include <dspbridge/chnldefs.h>
+
+/* ----------------------------------- Resource Manager */
+#include <dspbridge/nodedefs.h>
+#include <dspbridge/nodepriv.h>
+#include <dspbridge/rms_sh.h>
+
+/* ----------------------------------- This */
+#include <dspbridge/disp.h>
+
+/* Size of a reply from RMS */
+#define REPLYSIZE (3 * sizeof(rms_word))
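+/* send_message() reads reply word 0 (RMS status) and word 1 (argument) */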
+
+/* Reserved channel offsets for communication with RMS */
+#define CHNLTORMSOFFSET 0
+#define CHNLFROMRMSOFFSET 1
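+/* Commands are sent on (ul_chnl_offset + CHNLTORMSOFFSET) and replies are
+ * read on (ul_chnl_offset + CHNLFROMRMSOFFSET); see disp_create() */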
+
+#define CHNLIOREQS 1
+
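+/* Exchange the upper and lower 16-bit halves of a 32-bit word */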
+#define SWAP_WORD(x) (((u32)(x) >> 16) | ((u32)(x) << 16))
+
+/*
+ * ======== disp_object ========
+ */
+struct disp_object {
+ struct dev_object *hdev_obj; /* Device for this processor */
+ /* Function interface to Bridge driver */
+ struct bridge_drv_interface *intf_fxns;
+ struct chnl_mgr *hchnl_mgr; /* Channel manager */
+ struct chnl_object *chnl_to_dsp; /* Chnl for commands to RMS */
+ struct chnl_object *chnl_from_dsp; /* Chnl for replies from RMS */
+ u8 *pbuf; /* Buffer for commands, replies */
+ u32 ul_bufsize; /* pbuf size in bytes */
+ u32 ul_bufsize_rms; /* pbuf size in RMS words */
+ u32 char_size; /* Size of DSP character */
+ u32 word_size; /* Size of DSP word */
+ u32 data_mau_size; /* Size of DSP Data MAU */
+};
+
+static u32 refs;
+
+static void delete_disp(struct disp_object *disp_obj);
+static int fill_stream_def(rms_word *pdw_buf, u32 *ptotal, u32 offset,
+ struct node_strmdef strm_def, u32 max,
+ u32 chars_in_rms_word);
+static int send_message(struct disp_object *disp_obj, u32 dwTimeout,
+ u32 ul_bytes, OUT u32 *pdw_arg);
+
+/*
+ * ======== disp_create ========
+ * Create a NODE Dispatcher object.
+ */
+int disp_create(OUT struct disp_object **phDispObject,
+ struct dev_object *hdev_obj,
+ IN CONST struct disp_attr *pDispAttrs)
+{
+ struct disp_object *disp_obj;
+ struct bridge_drv_interface *intf_fxns;
+ u32 ul_chnl_id;
+ struct chnl_attr chnl_attr_obj;
+ int status = 0;
+ u8 dev_type;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(phDispObject != NULL);
+ DBC_REQUIRE(pDispAttrs != NULL);
+ DBC_REQUIRE(hdev_obj != NULL);
+
+ *phDispObject = NULL;
+
+ /* Allocate Node Dispatcher object */
+ disp_obj = kzalloc(sizeof(struct disp_object), GFP_KERNEL);
+ if (disp_obj == NULL)
+ status = -ENOMEM;
+ else
+ disp_obj->hdev_obj = hdev_obj;
+
+ /* Get Channel manager and Bridge function interface */
+ if (DSP_SUCCEEDED(status)) {
+ status = dev_get_chnl_mgr(hdev_obj, &(disp_obj->hchnl_mgr));
+ if (DSP_SUCCEEDED(status)) {
+ (void)dev_get_intf_fxns(hdev_obj, &intf_fxns);
+ disp_obj->intf_fxns = intf_fxns;
+ }
+ }
+
+ /* Check device type and decide whether streams or messaging is used
+ * for RMS/EDS */
+ if (DSP_FAILED(status))
+ goto func_cont;
+
+ status = dev_get_dev_type(hdev_obj, &dev_type);
+
+ if (DSP_FAILED(status))
+ goto func_cont;
+
+ if (dev_type != DSP_UNIT) {
+ status = -EPERM;
+ goto func_cont;
+ }
+
+ disp_obj->char_size = DSPWORDSIZE;
+ disp_obj->word_size = DSPWORDSIZE;
+ disp_obj->data_mau_size = DSPWORDSIZE;
+ /* Open channels for communicating with the RMS */
+ chnl_attr_obj.uio_reqs = CHNLIOREQS;
+ chnl_attr_obj.event_obj = NULL;
+ ul_chnl_id = pDispAttrs->ul_chnl_offset + CHNLTORMSOFFSET;
+ status = (*intf_fxns->pfn_chnl_open) (&(disp_obj->chnl_to_dsp),
+ disp_obj->hchnl_mgr,
+ CHNL_MODETODSP, ul_chnl_id,
+ &chnl_attr_obj);
+
+ if (DSP_SUCCEEDED(status)) {
+ ul_chnl_id = pDispAttrs->ul_chnl_offset + CHNLFROMRMSOFFSET;
+ status =
+ (*intf_fxns->pfn_chnl_open) (&(disp_obj->chnl_from_dsp),
+ disp_obj->hchnl_mgr,
+ CHNL_MODEFROMDSP, ul_chnl_id,
+ &chnl_attr_obj);
+ }
+ if (DSP_SUCCEEDED(status)) {
+ /* Allocate buffer for commands, replies */
+ disp_obj->ul_bufsize = pDispAttrs->ul_chnl_buf_size;
+ disp_obj->ul_bufsize_rms = RMS_COMMANDBUFSIZE;
+ disp_obj->pbuf = kzalloc(disp_obj->ul_bufsize, GFP_KERNEL);
+ if (disp_obj->pbuf == NULL)
+ status = -ENOMEM;
+ }
+func_cont:
+ if (DSP_SUCCEEDED(status))
+ *phDispObject = disp_obj;
+ else
+ delete_disp(disp_obj);
+
+ DBC_ENSURE(((DSP_FAILED(status)) && ((*phDispObject == NULL))) ||
+ ((DSP_SUCCEEDED(status)) && *phDispObject));
+ return status;
+}
+
+/*
+ * ======== disp_delete ========
+ * Delete the NODE Dispatcher.
+ */
+void disp_delete(struct disp_object *disp_obj)
+{
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(disp_obj);
+
+ delete_disp(disp_obj);
+}
+
+/*
+ * ======== disp_exit ========
+ * Discontinue usage of DISP module.
+ */
+void disp_exit(void)
+{
+ DBC_REQUIRE(refs > 0);
+
+ refs--;
+
+ DBC_ENSURE(refs >= 0);
+}
+
+/*
+ * ======== disp_init ========
+ * Initialize the DISP module.
+ */
+bool disp_init(void)
+{
+ bool ret = true;
+
+ DBC_REQUIRE(refs >= 0);
+
+ if (ret)
+ refs++;
+
+ DBC_ENSURE((ret && (refs > 0)) || (!ret && (refs >= 0)));
+ return ret;
+}
+
+/*
+ * ======== disp_node_change_priority ========
+ * Change the priority of a node currently running on the target.
+ */
+int disp_node_change_priority(struct disp_object *disp_obj,
+ struct node_object *hnode,
+ u32 ulRMSFxn, nodeenv node_env, s32 prio)
+{
+ u32 dw_arg;
+ struct rms_command *rms_cmd;
+ int status = 0;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(disp_obj);
+ DBC_REQUIRE(hnode != NULL);
+
+ /* Send message to RMS to change priority */
+ rms_cmd = (struct rms_command *)(disp_obj->pbuf);
+ rms_cmd->fxn = (rms_word) (ulRMSFxn);
+ rms_cmd->arg1 = (rms_word) node_env;
+ rms_cmd->arg2 = prio;
+ status = send_message(disp_obj, node_get_timeout(hnode),
+ sizeof(struct rms_command), &dw_arg);
+
+ return status;
+}
+
+/*
+ * ======== disp_node_create ========
+ * Create a node on the DSP by remotely calling the node's create function.
+ */
+int disp_node_create(struct disp_object *disp_obj,
+ struct node_object *hnode, u32 ulRMSFxn,
+ u32 ul_create_fxn,
+ IN CONST struct node_createargs *pargs,
+ OUT nodeenv *pNodeEnv)
+{
+ struct node_msgargs node_msg_args;
+ struct node_taskargs task_arg_obj;
+ struct rms_command *rms_cmd;
+ struct rms_msg_args *pmsg_args;
+ struct rms_more_task_args *more_task_args;
+ enum node_type node_type;
+ u32 dw_length;
+ rms_word *pdw_buf = NULL;
+ u32 ul_bytes;
+ u32 i;
+ u32 total;
+ u32 chars_in_rms_word;
+ s32 task_args_offset;
+ s32 sio_in_def_offset;
+ s32 sio_out_def_offset;
+ s32 sio_defs_offset;
+ s32 args_offset = -1;
+ s32 offset;
+ struct node_strmdef strm_def;
+ u32 max;
+ int status = 0;
+ struct dsp_nodeinfo node_info;
+ u8 dev_type;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(disp_obj);
+ DBC_REQUIRE(hnode != NULL);
+ DBC_REQUIRE(node_get_type(hnode) != NODE_DEVICE);
+ DBC_REQUIRE(pNodeEnv != NULL);
+
+ status = dev_get_dev_type(disp_obj->hdev_obj, &dev_type);
+
+ if (DSP_FAILED(status))
+ goto func_end;
+
+ if (dev_type != DSP_UNIT) {
+ dev_dbg(bridge, "%s: unknown device type = 0x%x\n",
+ __func__, dev_type);
+ goto func_end;
+ }
+ DBC_REQUIRE(pargs != NULL);
+ node_type = node_get_type(hnode);
+ node_msg_args = pargs->asa.node_msg_args;
+ max = disp_obj->ul_bufsize_rms; /* Max # of RMS words that can be sent */
+ DBC_ASSERT(max == RMS_COMMANDBUFSIZE);
+ chars_in_rms_word = sizeof(rms_word) / disp_obj->char_size;
+ /* Number of RMS words needed to hold arg data */
+ dw_length =
+ (node_msg_args.arg_length + chars_in_rms_word -
+ 1) / chars_in_rms_word;
+ /* Make sure msg args and command fit in buffer */
+ total = sizeof(struct rms_command) / sizeof(rms_word) +
+ sizeof(struct rms_msg_args)
+ / sizeof(rms_word) - 1 + dw_length;
+ if (total >= max) {
+ status = -EPERM;
+ dev_dbg(bridge, "%s: Message args too large for buffer! size "
+ "= %d, max = %d\n", __func__, total, max);
+ }
+ /*
+ * Fill in buffer to send to RMS.
+ * The buffer will have the following format:
+ *
+ * RMS command:
+ * Address of RMS_CreateNode()
+ * Address of node's create function
+ * dummy argument
+ * node type
+ *
+ * Message Args:
+ * max number of messages
+ * segid for message buffer allocation
+ * notification type to use when message is received
+ * length of message arg data
+ * message args data
+ *
+ * Task Args (if task or socket node):
+ * priority
+ * stack size
+ * system stack size
+ * stack segment
+ * misc
+ * number of input streams
+ * pSTRMInDef[] - offsets of STRM definitions for input streams
+ * number of output streams
+ * pSTRMOutDef[] - offsets of STRM definitions for output
+ * streams
+ * STRMInDef[] - array of STRM definitions for input streams
+ * STRMOutDef[] - array of STRM definitions for output streams
+ *
+ * Socket Args (if DAIS socket node):
+ *
+ */
+ if (DSP_SUCCEEDED(status)) {
+ total = 0; /* Total number of words in buffer so far */
+ pdw_buf = (rms_word *) disp_obj->pbuf;
+ rms_cmd = (struct rms_command *)pdw_buf;
+ rms_cmd->fxn = (rms_word) (ulRMSFxn);
+ rms_cmd->arg1 = (rms_word) (ul_create_fxn);
+ if (node_get_load_type(hnode) == NLDR_DYNAMICLOAD) {
+ /* Flush ICACHE on Load */
+ rms_cmd->arg2 = 1; /* dummy argument */
+ } else {
+ /* Do not flush ICACHE */
+ rms_cmd->arg2 = 0; /* dummy argument */
+ }
+ rms_cmd->data = node_get_type(hnode);
+ /*
+ * args_offset is the offset of the data field in struct
+ * rms_command structure. We need this to calculate stream
+ * definition offsets.
+ */
+ args_offset = 3;
+ total += sizeof(struct rms_command) / sizeof(rms_word);
+ /* Message args */
+ pmsg_args = (struct rms_msg_args *)(pdw_buf + total);
+ pmsg_args->max_msgs = node_msg_args.max_msgs;
+ pmsg_args->segid = node_msg_args.seg_id;
+ pmsg_args->notify_type = node_msg_args.notify_type;
+ pmsg_args->arg_length = node_msg_args.arg_length;
+ total += sizeof(struct rms_msg_args) / sizeof(rms_word) - 1;
+ memcpy(pdw_buf + total, node_msg_args.pdata,
+ node_msg_args.arg_length);
+ total += dw_length;
+ }
+ if (DSP_FAILED(status))
+ goto func_end;
+
+ /* If node is a task node, copy task create arguments into buffer */
+ if (node_type == NODE_TASK || node_type == NODE_DAISSOCKET) {
+ task_arg_obj = pargs->asa.task_arg_obj;
+ task_args_offset = total;
+ total += sizeof(struct rms_more_task_args) / sizeof(rms_word) +
+ 1 + task_arg_obj.num_inputs + task_arg_obj.num_outputs;
+ /* Copy task arguments */
+ if (total < max) {
+ total = task_args_offset;
+ more_task_args = (struct rms_more_task_args *)(pdw_buf +
+ total);
+ /*
+ * Get some important info about the node. Note that we
+ * don't just reach into the hnode struct because
+ * that would break the node object's abstraction.
+ */
+ get_node_info(hnode, &node_info);
+ more_task_args->priority = node_info.execution_priority;
+ more_task_args->stack_size = task_arg_obj.stack_size;
+ more_task_args->sysstack_size =
+ task_arg_obj.sys_stack_size;
+ more_task_args->stack_seg = task_arg_obj.stack_seg;
+ more_task_args->heap_addr = task_arg_obj.udsp_heap_addr;
+ more_task_args->heap_size = task_arg_obj.heap_size;
+ more_task_args->misc = task_arg_obj.ul_dais_arg;
+ more_task_args->num_input_streams =
+ task_arg_obj.num_inputs;
+ total +=
+ sizeof(struct rms_more_task_args) /
+ sizeof(rms_word);
+ dev_dbg(bridge, "%s: udsp_heap_addr %x, heap_size %x\n",
+ __func__, task_arg_obj.udsp_heap_addr,
+ task_arg_obj.heap_size);
+ /* Keep track of the pSIOInDef[] and pSIOOutDef[]
+ * positions in the buffer, since these need to be
+ * filled in later. */
+ sio_in_def_offset = total;
+ total += task_arg_obj.num_inputs;
+ pdw_buf[total++] = task_arg_obj.num_outputs;
+ sio_out_def_offset = total;
+ total += task_arg_obj.num_outputs;
+ sio_defs_offset = total;
+ /* Fill SIO defs and offsets */
+ offset = sio_defs_offset;
+ for (i = 0; i < task_arg_obj.num_inputs; i++) {
+ if (DSP_FAILED(status))
+ break;
+
+ pdw_buf[sio_in_def_offset + i] =
+ (offset - args_offset)
+ * (sizeof(rms_word) / DSPWORDSIZE);
+ strm_def = task_arg_obj.strm_in_def[i];
+ status =
+ fill_stream_def(pdw_buf, &total, offset,
+ strm_def, max,
+ chars_in_rms_word);
+ offset = total;
+ }
+ for (i = 0; (i < task_arg_obj.num_outputs) &&
+ (DSP_SUCCEEDED(status)); i++) {
+ pdw_buf[sio_out_def_offset + i] =
+ (offset - args_offset)
+ * (sizeof(rms_word) / DSPWORDSIZE);
+ strm_def = task_arg_obj.strm_out_def[i];
+ status =
+ fill_stream_def(pdw_buf, &total, offset,
+ strm_def, max,
+ chars_in_rms_word);
+ offset = total;
+ }
+ } else {
+ /* Args won't fit */
+ status = -EPERM;
+ }
+ }
+ if (DSP_SUCCEEDED(status)) {
+ ul_bytes = total * sizeof(rms_word);
+ DBC_ASSERT(ul_bytes < (RMS_COMMANDBUFSIZE * sizeof(rms_word)));
+ status = send_message(disp_obj, node_get_timeout(hnode),
+ ul_bytes, pNodeEnv);
+ if (DSP_SUCCEEDED(status)) {
+ /*
+ * Message successfully received from RMS.
+ * Return the status of the Node's create function
+ * on the DSP-side
+ */
+ status = (((rms_word *) (disp_obj->pbuf))[0]);
+ if (DSP_FAILED(status))
+ dev_dbg(bridge, "%s: DSP-side failed: 0x%x\n",
+ __func__, status);
+ }
+ }
+func_end:
+ return status;
+}
+
+/*
+ * ======== disp_node_delete ========
+ * Purpose:
+ * Delete a node on the DSP by remotely calling the node's delete function.
+ *
+ */
+int disp_node_delete(struct disp_object *disp_obj,
+ struct node_object *hnode, u32 ulRMSFxn,
+ u32 ul_delete_fxn, nodeenv node_env)
+{
+ u32 dw_arg;
+ struct rms_command *rms_cmd;
+ int status = 0;
+ u8 dev_type;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(disp_obj);
+ DBC_REQUIRE(hnode != NULL);
+
+ status = dev_get_dev_type(disp_obj->hdev_obj, &dev_type);
+
+ if (DSP_SUCCEEDED(status)) {
+
+ if (dev_type == DSP_UNIT) {
+
+ /*
+ * Fill in buffer to send to RMS
+ */
+ rms_cmd = (struct rms_command *)disp_obj->pbuf;
+ rms_cmd->fxn = (rms_word) (ulRMSFxn);
+ rms_cmd->arg1 = (rms_word) node_env;
+ rms_cmd->arg2 = (rms_word) (ul_delete_fxn);
+ rms_cmd->data = node_get_type(hnode);
+
+ status = send_message(disp_obj, node_get_timeout(hnode),
+ sizeof(struct rms_command),
+ &dw_arg);
+ if (DSP_SUCCEEDED(status)) {
+ /*
+ * Message successfully received from RMS.
+ * Return the status of the Node's delete
+ * function on the DSP-side
+ */
+ status = (((rms_word *) (disp_obj->pbuf))[0]);
+ if (DSP_FAILED(status))
+ dev_dbg(bridge, "%s: DSP-side failed: "
+ "0x%x\n", __func__, status);
+ }
+
+ }
+ }
+ return status;
+}
+
+/*
+ * ======== disp_node_run ========
+ * Purpose:
+ * Start execution of a node's execute phase, or resume execution of a node
+ * that has been suspended (via DISP_NodePause()) on the DSP.
+ */
+int disp_node_run(struct disp_object *disp_obj,
+ struct node_object *hnode, u32 ulRMSFxn,
+ u32 ul_execute_fxn, nodeenv node_env)
+{
+ u32 dw_arg;
+ struct rms_command *rms_cmd;
+ int status = 0;
+ u8 dev_type;
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(disp_obj);
+ DBC_REQUIRE(hnode != NULL);
+
+ status = dev_get_dev_type(disp_obj->hdev_obj, &dev_type);
+
+ if (DSP_SUCCEEDED(status)) {
+
+ if (dev_type == DSP_UNIT) {
+
+ /*
+ * Fill in buffer to send to RMS.
+ */
+ rms_cmd = (struct rms_command *)disp_obj->pbuf;
+ rms_cmd->fxn = (rms_word) (ulRMSFxn);
+ rms_cmd->arg1 = (rms_word) node_env;
+ rms_cmd->arg2 = (rms_word) (ul_execute_fxn);
+ rms_cmd->data = node_get_type(hnode);
+
+ status = send_message(disp_obj, node_get_timeout(hnode),
+ sizeof(struct rms_command),
+ &dw_arg);
+ if (DSP_SUCCEEDED(status)) {
+ /*
+ * Message successfully received from RMS.
+ * Return the status of the Node's execute
+ * function on the DSP-side
+ */
+ status = (((rms_word *) (disp_obj->pbuf))[0]);
+ if (DSP_FAILED(status))
+ dev_dbg(bridge, "%s: DSP-side failed: "
+ "0x%x\n", __func__, status);
+ }
+
+ }
+ }
+
+ return status;
+}
+
+/*
+ * ======== delete_disp ========
+ * Purpose:
+ * Frees the resources allocated for the dispatcher.
+ */
+static void delete_disp(struct disp_object *disp_obj)
+{
+ int status = 0;
+ struct bridge_drv_interface *intf_fxns;
+
+ if (disp_obj) {
+ intf_fxns = disp_obj->intf_fxns;
+
+ /* Free Node Dispatcher resources */
+ if (disp_obj->chnl_from_dsp) {
+ /* Channel close can fail only if the channel handle
+ * is invalid. */
+ status = (*intf_fxns->pfn_chnl_close)
+ (disp_obj->chnl_from_dsp);
+ if (DSP_FAILED(status)) {
+ dev_dbg(bridge, "%s: Failed to close channel "
+ "from RMS: 0x%x\n", __func__, status);
+ }
+ }
+ if (disp_obj->chnl_to_dsp) {
+ status =
+ (*intf_fxns->pfn_chnl_close) (disp_obj->
+ chnl_to_dsp);
+ if (DSP_FAILED(status)) {
+ dev_dbg(bridge, "%s: Failed to close channel to"
+ " RMS: 0x%x\n", __func__, status);
+ }
+ }
+ kfree(disp_obj->pbuf);
+
+ kfree(disp_obj);
+ }
+}
+
+/*
+ * ======== fill_stream_def ========
+ * Purpose:
+ * Fills stream definitions.
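+ * Appends one rms_strm_def, followed by the stream's device name padded
+ * to whole RMS words, at offset *ptotal in pdw_buf and, on success,
+ * advances *ptotal past it.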
+ */
+static int fill_stream_def(rms_word *pdw_buf, u32 *ptotal, u32 offset,
+ struct node_strmdef strm_def, u32 max,
+ u32 chars_in_rms_word)
+{
+ struct rms_strm_def *strm_def_obj;
+ u32 total = *ptotal;
+ u32 name_len;
+ u32 dw_length;
+ int status = 0;
+
+ if (total + sizeof(struct rms_strm_def) / sizeof(rms_word) >= max) {
+ status = -EPERM;
+ } else {
+ strm_def_obj = (struct rms_strm_def *)(pdw_buf + total);
+ strm_def_obj->bufsize = strm_def.buf_size;
+ strm_def_obj->nbufs = strm_def.num_bufs;
+ strm_def_obj->segid = strm_def.seg_id;
+ strm_def_obj->align = strm_def.buf_alignment;
+ strm_def_obj->timeout = strm_def.utimeout;
+ }
+
+ if (DSP_SUCCEEDED(status)) {
+ /*
+ * Since we haven't added the device name yet, subtract
+ * 1 from total.
+ */
+ total += sizeof(struct rms_strm_def) / sizeof(rms_word) - 1;
+ DBC_REQUIRE(strm_def.sz_device);
+ dw_length = strlen(strm_def.sz_device) + 1;
+
+ /* Number of RMS_WORDS needed to hold device name */
+ name_len =
+ (dw_length + chars_in_rms_word - 1) / chars_in_rms_word;
+
+ if (total + name_len >= max) {
+ status = -EPERM;
+ } else {
+ /*
+ * Zero out last word, since the device name may not
+ * extend to completely fill this word.
+ */
+ pdw_buf[total + name_len - 1] = 0;
+ /* TODO: use services */
+ memcpy(pdw_buf + total, strm_def.sz_device, dw_length);
+ total += name_len;
+ *ptotal = total;
+ }
+ }
+
+ return status;
+}
+
+/*
+ * ======== send_message ======
+ * Send command message to RMS, get reply from RMS.
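+ * The command in disp_obj->pbuf is queued on chnl_to_dsp, then a
+ * REPLYSIZE reply is read back on chnl_from_dsp; reply word 0 carries
+ * the RMS status and word 1 the returned argument.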
+ */
+static int send_message(struct disp_object *disp_obj, u32 dwTimeout,
+ u32 ul_bytes, u32 *pdw_arg)
+{
+ struct bridge_drv_interface *intf_fxns;
+ struct chnl_object *chnl_obj;
+ u32 dw_arg = 0;
+ u8 *pbuf;
+ struct chnl_ioc chnl_ioc_obj;
+ int status = 0;
+
+ DBC_REQUIRE(pdw_arg != NULL);
+
+ *pdw_arg = (u32) NULL;
+ intf_fxns = disp_obj->intf_fxns;
+ chnl_obj = disp_obj->chnl_to_dsp;
+ pbuf = disp_obj->pbuf;
+
+ /* Send the command */
+ status = (*intf_fxns->pfn_chnl_add_io_req) (chnl_obj, pbuf, ul_bytes, 0,
+ 0L, dw_arg);
+ if (DSP_FAILED(status))
+ goto func_end;
+
+ status =
+ (*intf_fxns->pfn_chnl_get_ioc) (chnl_obj, dwTimeout, &chnl_ioc_obj);
+ if (DSP_SUCCEEDED(status)) {
+ if (!CHNL_IS_IO_COMPLETE(chnl_ioc_obj)) {
+ if (CHNL_IS_TIMED_OUT(chnl_ioc_obj))
+ status = -ETIME;
+ else
+ status = -EPERM;
+ }
+ }
+ /* Get the reply */
+ if (DSP_FAILED(status))
+ goto func_end;
+
+ chnl_obj = disp_obj->chnl_from_dsp;
+ ul_bytes = REPLYSIZE;
+ status = (*intf_fxns->pfn_chnl_add_io_req) (chnl_obj, pbuf, ul_bytes,
+ 0, 0L, dw_arg);
+ if (DSP_FAILED(status))
+ goto func_end;
+
+ status =
+ (*intf_fxns->pfn_chnl_get_ioc) (chnl_obj, dwTimeout, &chnl_ioc_obj);
+ if (DSP_SUCCEEDED(status)) {
+ if (CHNL_IS_TIMED_OUT(chnl_ioc_obj)) {
+ status = -ETIME;
+ } else if (chnl_ioc_obj.byte_size < ul_bytes) {
+ /* Did not get all of the reply from the RMS */
+ status = -EPERM;
+ } else {
+ if (CHNL_IS_IO_COMPLETE(chnl_ioc_obj)) {
+ DBC_ASSERT(chnl_ioc_obj.pbuf == pbuf);
+ status = (*((rms_word *) chnl_ioc_obj.pbuf));
+ *pdw_arg =
+ (((rms_word *) (chnl_ioc_obj.pbuf))[1]);
+ } else {
+ status = -EPERM;
+ }
+ }
+ }
+func_end:
+ return status;
+}
diff --git a/drivers/staging/tidspbridge/rmgr/drv.c b/drivers/staging/tidspbridge/rmgr/drv.c
new file mode 100644
index 0000000..c6e38e5
--- /dev/null
+++ b/drivers/staging/tidspbridge/rmgr/drv.c
@@ -0,0 +1,1047 @@
+/*
+ * drv.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * DSP/BIOS Bridge resource allocation module.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+/* ----------------------------------- Host OS */
+#include <dspbridge/host_os.h>
+
+/* ----------------------------------- DSP/BIOS Bridge */
+#include <dspbridge/std.h>
+#include <dspbridge/dbdefs.h>
+
+/* ----------------------------------- Trace & Debug */
+#include <dspbridge/dbc.h>
+
+/* ----------------------------------- OS Adaptation Layer */
+#include <dspbridge/cfg.h>
+#include <dspbridge/list.h>
+
+/* ----------------------------------- This */
+#include <dspbridge/drv.h>
+#include <dspbridge/dev.h>
+
+#include <dspbridge/node.h>
+#include <dspbridge/proc.h>
+#include <dspbridge/strm.h>
+#include <dspbridge/nodepriv.h>
+#include <dspbridge/dspchnl.h>
+#include <dspbridge/resourcecleanup.h>
+
+/* ----------------------------------- Defines, Data Structures, Typedefs */
+struct drv_object {
+ struct lst_list *dev_list;
+ struct lst_list *dev_node_string;
+};
+
+/*
+ * This is the Device Extension. Named with the prefix
+ * DRV_ since it lives in this module.
+ */
+struct drv_ext {
+ struct list_head link;
+ char sz_string[MAXREGPATHLENGTH];
+};
+
+/* ----------------------------------- Globals */
+static s32 refs;
+static bool ext_phys_mem_pool_enabled;
+struct ext_phys_mem_pool {
+ u32 phys_mem_base;
+ u32 phys_mem_size;
+ u32 virt_mem_base;
+ u32 next_phys_alloc_ptr;
+};
+static struct ext_phys_mem_pool ext_mem_pool;
+
+/* ----------------------------------- Function Prototypes */
+static int request_bridge_resources(struct cfg_hostres *res);
+
+
+/* GPP PROCESS CLEANUP CODE */
+
+static int drv_proc_free_node_res(void *hPCtxt);
+
+/* Allocate and add a node resource element.
+ * This function is called from node_allocate(). */
+int drv_insert_node_res_element(void *hnode, void *hNodeRes,
+ void *hPCtxt)
+{
+ struct node_res_object **node_res_obj =
+ (struct node_res_object **)hNodeRes;
+ struct process_context *ctxt = (struct process_context *)hPCtxt;
+ int status = 0;
+ struct node_res_object *temp_node_res = NULL;
+
+ *node_res_obj = kzalloc(sizeof(struct node_res_object), GFP_KERNEL);
+ if (*node_res_obj == NULL)
+ status = -EFAULT;
+
+ if (DSP_SUCCEEDED(status)) {
+ if (mutex_lock_interruptible(&ctxt->node_mutex)) {
+ kfree(*node_res_obj);
+ return -EPERM;
+ }
+ (*node_res_obj)->hnode = hnode;
+ if (ctxt->node_list != NULL) {
+ temp_node_res = ctxt->node_list;
+ while (temp_node_res->next != NULL)
+ temp_node_res = temp_node_res->next;
+
+ temp_node_res->next = *node_res_obj;
+ } else {
+ ctxt->node_list = *node_res_obj;
+ }
+ mutex_unlock(&ctxt->node_mutex);
+ }
+
+ return status;
+}
+
+/* Release a node resource element and remove it from the context's list.
+ * This is called from node_delete(). */
+int drv_remove_node_res_element(void *hNodeRes, void *hPCtxt)
+{
+ struct node_res_object *node_res_obj =
+ (struct node_res_object *)hNodeRes;
+ struct process_context *ctxt = (struct process_context *)hPCtxt;
+ struct node_res_object *temp_node;
+ int status = 0;
+
+ if (mutex_lock_interruptible(&ctxt->node_mutex))
+ return -EPERM;
+ temp_node = ctxt->node_list;
+ if (temp_node == node_res_obj) {
+ ctxt->node_list = node_res_obj->next;
+ } else {
+ while (temp_node && temp_node->next != node_res_obj)
+ temp_node = temp_node->next;
+ if (!temp_node)
+ status = -ENOENT;
+ else
+ temp_node->next = node_res_obj->next;
+ }
+ mutex_unlock(&ctxt->node_mutex);
+ kfree(node_res_obj);
+ return status;
+}
+
+/* Actual Node De-Allocation */
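+/* Each node still allocated in the context, and not yet past NODE_DELETING,
+ * is terminated if it is running, paused or terminating, and then deleted */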
+static int drv_proc_free_node_res(void *hPCtxt)
+{
+ struct process_context *ctxt = (struct process_context *)hPCtxt;
+ int status = 0;
+ struct node_res_object *node_list = NULL;
+ struct node_res_object *node_res_obj = NULL;
+ u32 node_state;
+
+ node_list = ctxt->node_list;
+ while (node_list != NULL) {
+ node_res_obj = node_list;
+ node_list = node_list->next;
+ if (node_res_obj->node_allocated) {
+ node_state = node_get_state(node_res_obj->hnode);
+ if (node_state <= NODE_DELETING) {
+ if ((node_state == NODE_RUNNING) ||
+ (node_state == NODE_PAUSED) ||
+ (node_state == NODE_TERMINATING))
+ status = node_terminate
+ (node_res_obj->hnode, &status);
+
+ status = node_delete(node_res_obj->hnode, ctxt);
+ }
+ }
+ }
+ return status;
+}
+
+/* Release all Mapped and Reserved DMM resources */
+int drv_remove_all_dmm_res_elements(void *hPCtxt)
+{
+ struct process_context *ctxt = (struct process_context *)hPCtxt;
+ int status = 0;
+ struct dmm_map_object *temp_map, *map_obj;
+ struct dmm_rsv_object *temp_rsv, *rsv_obj;
+
+ /* Free DMM mapped memory resources */
+ list_for_each_entry_safe(map_obj, temp_map, &ctxt->dmm_map_list, link) {
+ status = proc_un_map(ctxt->hprocessor,
+ (void *)map_obj->dsp_addr, ctxt);
+ if (DSP_FAILED(status))
+ pr_err("%s: proc_un_map failed!"
+ " status = 0x%xn", __func__, status);
+ }
+
+ /* Free DMM reserved memory resources */
+ list_for_each_entry_safe(rsv_obj, temp_rsv, &ctxt->dmm_rsv_list, link) {
+ status = proc_un_reserve_memory(ctxt->hprocessor, (void *)
+ rsv_obj->dsp_reserved_addr,
+ ctxt);
+ if (DSP_FAILED(status))
+ pr_err("%s: proc_un_reserve_memory failed!"
+ " status = 0x%xn", __func__, status);
+ }
+ return status;
+}
+
+/* Update Node allocation status */
+void drv_proc_node_update_status(void *hNodeRes, s32 status)
+{
+ struct node_res_object *node_res_obj =
+ (struct node_res_object *)hNodeRes;
+ DBC_ASSERT(hNodeRes != NULL);
+ node_res_obj->node_allocated = status;
+}
+
+/* Update Node Heap status */
+void drv_proc_node_update_heap_status(void *hNodeRes, s32 status)
+{
+ struct node_res_object *node_res_obj =
+ (struct node_res_object *)hNodeRes;
+ DBC_ASSERT(hNodeRes != NULL);
+ node_res_obj->heap_allocated = status;
+}
+
+/* Release all node resource elements in the process context.
+ * This is called from bridge_release().
+ */
+int drv_remove_all_node_res_elements(void *hPCtxt)
+{
+ struct process_context *ctxt = (struct process_context *)hPCtxt;
+ int status = 0;
+ struct node_res_object *temp_node2 = NULL;
+ struct node_res_object *temp_node = NULL;
+
+ drv_proc_free_node_res(ctxt);
+ temp_node = ctxt->node_list;
+ while (temp_node != NULL) {
+ temp_node2 = temp_node;
+ temp_node = temp_node->next;
+ kfree(temp_node2);
+ }
+ ctxt->node_list = NULL;
+ return status;
+}
+
+/* Getting the node resource element */
+int drv_get_node_res_element(void *hnode, void *hNodeRes,
+ void *hPCtxt)
+{
+ struct node_res_object **node_res = (struct node_res_object **)hNodeRes;
+ struct process_context *ctxt = (struct process_context *)hPCtxt;
+ int status = 0;
+ struct node_res_object *temp_node2 = NULL;
+ struct node_res_object *temp_node = NULL;
+
+ if (mutex_lock_interruptible(&ctxt->node_mutex))
+ return -EPERM;
+
+ temp_node = ctxt->node_list;
+ while ((temp_node != NULL) && (temp_node->hnode != hnode)) {
+ temp_node2 = temp_node;
+ temp_node = temp_node->next;
+ }
+
+ mutex_unlock(&ctxt->node_mutex);
+
+ if (temp_node != NULL)
+ *node_res = temp_node;
+ else
+ status = -ENOENT;
+
+ return status;
+}
+
+/* Allocate a STRM resource element.
+ * This is called after the actual resource is allocated.
+ */
+int drv_proc_insert_strm_res_element(void *hStreamHandle,
+ void *hstrm_res, void *hPCtxt)
+{
+ struct strm_res_object **pstrm_res =
+ (struct strm_res_object **)hstrm_res;
+ struct process_context *ctxt = (struct process_context *)hPCtxt;
+ int status = 0;
+ struct strm_res_object *temp_strm_res = NULL;
+
+ *pstrm_res = kzalloc(sizeof(struct strm_res_object), GFP_KERNEL);
+ if (*pstrm_res == NULL)
+ status = -EFAULT;
+
+ if (DSP_SUCCEEDED(status)) {
+ if (mutex_lock_interruptible(&ctxt->strm_mutex)) {
+ kfree(*pstrm_res);
+ return -EPERM;
+ }
+ (*pstrm_res)->hstream = hStreamHandle;
+ if (ctxt->pstrm_list != NULL) {
+ temp_strm_res = ctxt->pstrm_list;
+ while (temp_strm_res->next != NULL)
+ temp_strm_res = temp_strm_res->next;
+
+ temp_strm_res->next = *pstrm_res;
+ } else {
+ ctxt->pstrm_list = *pstrm_res;
+ }
+ mutex_unlock(&ctxt->strm_mutex);
+ }
+ return status;
+}
+
+/* Release a stream resource element.
+ * This function is called after the actual resource is freed.
+ */
+int drv_proc_remove_strm_res_element(void *hstrm_res, void *hPCtxt)
+{
+ struct strm_res_object *pstrm_res = (struct strm_res_object *)hstrm_res;
+ struct process_context *ctxt = (struct process_context *)hPCtxt;
+ struct strm_res_object *temp_strm_res;
+ int status = 0;
+
+ if (mutex_lock_interruptible(&ctxt->strm_mutex))
+ return -EPERM;
+ temp_strm_res = ctxt->pstrm_list;
+
+ if (ctxt->pstrm_list == pstrm_res) {
+ ctxt->pstrm_list = pstrm_res->next;
+ } else {
+ while (temp_strm_res && temp_strm_res->next != pstrm_res)
+ temp_strm_res = temp_strm_res->next;
+ if (temp_strm_res == NULL)
+ status = -ENOENT;
+ else
+ temp_strm_res->next = pstrm_res->next;
+ }
+ mutex_unlock(&ctxt->strm_mutex);
+ kfree(pstrm_res);
+ return status;
+}
+
+/* Release all stream resources in the process context.
+ * This is called from bridge_release().
+ */
+int drv_remove_all_strm_res_elements(void *hPCtxt)
+{
+ struct process_context *ctxt = (struct process_context *)hPCtxt;
+ int status = 0;
+ struct strm_res_object *strm_res = NULL;
+ struct strm_res_object *strm_tmp = NULL;
+ struct stream_info strm_info;
+ struct dsp_streaminfo user;
+ u8 **ap_buffer = NULL;
+ u8 *buf_ptr;
+ u32 ul_bytes;
+ u32 dw_arg;
+ s32 ul_buf_size;
+
+ strm_tmp = ctxt->pstrm_list;
+ while (strm_tmp) {
+ strm_res = strm_tmp;
+ strm_tmp = strm_tmp->next;
+ if (strm_res->num_bufs) {
+ ap_buffer = kmalloc((strm_res->num_bufs *
+ sizeof(u8 *)), GFP_KERNEL);
+ if (ap_buffer) {
+ status = strm_free_buffer(strm_res->hstream,
+ ap_buffer,
+ strm_res->num_bufs,
+ ctxt);
+ kfree(ap_buffer);
+ }
+ }
+ strm_info.user_strm = &user;
+ user.number_bufs_in_stream = 0;
+ strm_get_info(strm_res->hstream, &strm_info, sizeof(strm_info));
+ while (user.number_bufs_in_stream--)
+ strm_reclaim(strm_res->hstream, &buf_ptr, &ul_bytes,
+ (u32 *) &ul_buf_size, &dw_arg);
+ status = strm_close(strm_res->hstream, ctxt);
+ }
+ return status;
+}
+
+/* Getting the stream resource element */
+int drv_get_strm_res_element(void *hStrm, void *hstrm_res,
+ void *hPCtxt)
+{
+ struct strm_res_object **strm_res =
+ (struct strm_res_object **)hstrm_res;
+ struct process_context *ctxt = (struct process_context *)hPCtxt;
+ int status = 0;
+ struct strm_res_object *temp_strm2 = NULL;
+ struct strm_res_object *temp_strm;
+
+ if (mutex_lock_interruptible(&ctxt->strm_mutex))
+ return -EPERM;
+
+ temp_strm = ctxt->pstrm_list;
+ while ((temp_strm != NULL) && (temp_strm->hstream != hStrm)) {
+ temp_strm2 = temp_strm;
+ temp_strm = temp_strm->next;
+ }
+
+ mutex_unlock(&ctxt->strm_mutex);
+
+ if (temp_strm != NULL)
+ *strm_res = temp_strm;
+ else
+ status = -ENOENT;
+
+ return status;
+}
+
+/* Updating the stream resource element */
+int drv_proc_update_strm_res(u32 num_bufs, void *hstrm_res)
+{
+ int status = 0;
+ struct strm_res_object **strm_res =
+ (struct strm_res_object **)hstrm_res;
+
+ (*strm_res)->num_bufs = num_bufs;
+ return status;
+}
+
+/* GPP PROCESS CLEANUP CODE END */
+
+/*
+ * ======== drv_create ========
+ * Purpose:
+ * DRV Object gets created only once during Driver Loading.
+ */
+int drv_create(OUT struct drv_object **phDRVObject)
+{
+ int status = 0;
+ struct drv_object *pdrv_object = NULL;
+
+ DBC_REQUIRE(phDRVObject != NULL);
+ DBC_REQUIRE(refs > 0);
+
+ pdrv_object = kzalloc(sizeof(struct drv_object), GFP_KERNEL);
+ if (pdrv_object) {
+ /* Create and Initialize List of device objects */
+ pdrv_object->dev_list = kzalloc(sizeof(struct lst_list),
+ GFP_KERNEL);
+ if (pdrv_object->dev_list) {
+ /* Create and Initialize List of device Extension */
+ pdrv_object->dev_node_string =
+ kzalloc(sizeof(struct lst_list), GFP_KERNEL);
+ if (!(pdrv_object->dev_node_string)) {
+ status = -EPERM;
+ } else {
+ INIT_LIST_HEAD(&pdrv_object->
+ dev_node_string->head);
+ INIT_LIST_HEAD(&pdrv_object->dev_list->head);
+ }
+ } else {
+ status = -ENOMEM;
+ }
+ } else {
+ status = -ENOMEM;
+ }
+ /* Store the DRV Object in the Registry */
+ if (DSP_SUCCEEDED(status))
+ status = cfg_set_object((u32) pdrv_object, REG_DRV_OBJECT);
+ if (DSP_SUCCEEDED(status)) {
+ *phDRVObject = pdrv_object;
+ } else {
+ kfree(pdrv_object->dev_list);
+ kfree(pdrv_object->dev_node_string);
+ /* Free the DRV Object */
+ kfree(pdrv_object);
+ }
+
+ DBC_ENSURE(DSP_FAILED(status) || pdrv_object);
+ return status;
+}
+
+/*
+ * ======== drv_exit ========
+ * Purpose:
+ * Discontinue usage of the DRV module.
+ */
+void drv_exit(void)
+{
+ DBC_REQUIRE(refs > 0);
+
+ refs--;
+
+ DBC_ENSURE(refs >= 0);
+}
+
+/*
+ * ======== drv_destroy ========
+ * Purpose:
+ * Invoked during bridge de-initialization
+ */
+int drv_destroy(struct drv_object *hDRVObject)
+{
+ int status = 0;
+ struct drv_object *pdrv_object = (struct drv_object *)hDRVObject;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(pdrv_object);
+
+ /*
+ * Delete the lists if they still exist. We should not get here,
+ * since drv_remove_dev_object and the last drv_release_resources
+ * remove the lists once they are empty.
+ */
+ kfree(pdrv_object->dev_list);
+ kfree(pdrv_object->dev_node_string);
+ kfree(pdrv_object);
+ /* Update the DRV Object in Registry to be 0 */
+ (void)cfg_set_object(0, REG_DRV_OBJECT);
+
+ return status;
+}
+
+/*
+ * ======== drv_get_dev_object ========
+ * Purpose:
+ * Given an index, returns a handle to a DevObject from the list.
+ */
+int drv_get_dev_object(u32 index, struct drv_object *hdrv_obj,
+ struct dev_object **phDevObject)
+{
+ int status = 0;
+#ifdef CONFIG_BRIDGE_DEBUG
+ /* used only for Assertions and debug messages */
+ struct drv_object *pdrv_obj = (struct drv_object *)hdrv_obj;
+#endif
+ struct dev_object *dev_obj;
+ u32 i;
+ DBC_REQUIRE(pdrv_obj);
+ DBC_REQUIRE(phDevObject != NULL);
+ DBC_REQUIRE(index >= 0);
+ DBC_REQUIRE(refs > 0);
+ DBC_ASSERT(!(LST_IS_EMPTY(pdrv_obj->dev_list)));
+
+ dev_obj = (struct dev_object *)drv_get_first_dev_object();
+ for (i = 0; i < index; i++) {
+ dev_obj =
+ (struct dev_object *)drv_get_next_dev_object((u32) dev_obj);
+ }
+ if (dev_obj) {
+ *phDevObject = (struct dev_object *)dev_obj;
+ } else {
+ *phDevObject = NULL;
+ status = -EPERM;
+ }
+
+ return status;
+}
+
+/*
+ * ======== drv_get_first_dev_object ========
+ * Purpose:
+ * Retrieve the first Device Object handle from an internal linked list
+ * of DEV_OBJECTs maintained by DRV.
+ */
+u32 drv_get_first_dev_object(void)
+{
+ u32 dw_dev_object = 0;
+ struct drv_object *pdrv_obj;
+
+ if (DSP_SUCCEEDED(cfg_get_object((u32 *) &pdrv_obj, REG_DRV_OBJECT))) {
+ if ((pdrv_obj->dev_list != NULL) &&
+ !LST_IS_EMPTY(pdrv_obj->dev_list))
+ dw_dev_object = (u32) lst_first(pdrv_obj->dev_list);
+ }
+
+ return dw_dev_object;
+}
+
+/*
+ * ======== drv_get_first_dev_extension ========
+ * Purpose:
+ * Retrieve the first Device Extension from an internal linked list
+ * of pointers to dev_node strings maintained by DRV.
+ */
+u32 drv_get_first_dev_extension(void)
+{
+ u32 dw_dev_extension = 0;
+ struct drv_object *pdrv_obj;
+
+ if (DSP_SUCCEEDED(cfg_get_object((u32 *) &pdrv_obj, REG_DRV_OBJECT))) {
+
+ if ((pdrv_obj->dev_node_string != NULL) &&
+ !LST_IS_EMPTY(pdrv_obj->dev_node_string)) {
+ dw_dev_extension =
+ (u32) lst_first(pdrv_obj->dev_node_string);
+ }
+ }
+
+ return dw_dev_extension;
+}
+
+/*
+ * ======== drv_get_next_dev_object ========
+ * Purpose:
+ * Retrieve the next Device Object handle from an internal linked list
+ * of DEV_OBJECTs maintained by DRV, after having previously called
+ * drv_get_first_dev_object() and zero or more drv_get_next_dev_object().
+ */
+u32 drv_get_next_dev_object(u32 hdev_obj)
+{
+ u32 dw_next_dev_object = 0;
+ struct drv_object *pdrv_obj;
+
+ DBC_REQUIRE(hdev_obj != 0);
+
+ if (DSP_SUCCEEDED(cfg_get_object((u32 *) &pdrv_obj, REG_DRV_OBJECT))) {
+
+ if ((pdrv_obj->dev_list != NULL) &&
+ !LST_IS_EMPTY(pdrv_obj->dev_list)) {
+ dw_next_dev_object = (u32) lst_next(pdrv_obj->dev_list,
+ (struct list_head *)
+ hdev_obj);
+ }
+ }
+ return dw_next_dev_object;
+}
+
+/*
+ * ======== drv_get_next_dev_extension ========
+ * Purpose:
+ * Retrieve the next Device Extension from an internal linked list
+ * of pointers to DevNodeString maintained by DRV, after having previously
+ * called drv_get_first_dev_extension() and zero or more
+ * drv_get_next_dev_extension().
+ */
+u32 drv_get_next_dev_extension(u32 hDevExtension)
+{
+ u32 dw_dev_extension = 0;
+ struct drv_object *pdrv_obj;
+
+ DBC_REQUIRE(hDevExtension != 0);
+
+ if (DSP_SUCCEEDED(cfg_get_object((u32 *) &pdrv_obj, REG_DRV_OBJECT))) {
+ if ((pdrv_obj->dev_node_string != NULL) &&
+ !LST_IS_EMPTY(pdrv_obj->dev_node_string)) {
+ dw_dev_extension =
+ (u32) lst_next(pdrv_obj->dev_node_string,
+ (struct list_head *)hDevExtension);
+ }
+ }
+
+ return dw_dev_extension;
+}
+
+/*
+ * ======== drv_init ========
+ * Purpose:
+ * Initialize DRV module private state.
+ */
+int drv_init(void)
+{
+ s32 ret = 1; /* function return value */
+
+ DBC_REQUIRE(refs >= 0);
+
+ if (ret)
+ refs++;
+
+ DBC_ENSURE((ret && (refs > 0)) || (!ret && (refs >= 0)));
+
+ return ret;
+}
+
+/*
+ * ======== drv_insert_dev_object ========
+ * Purpose:
+ * Insert a DevObject into the device list of the DRV object.
+ */
+int drv_insert_dev_object(struct drv_object *hDRVObject,
+ struct dev_object *hdev_obj)
+{
+ int status = 0;
+ struct drv_object *pdrv_object = (struct drv_object *)hDRVObject;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(hdev_obj != NULL);
+ DBC_REQUIRE(pdrv_object);
+ DBC_ASSERT(pdrv_object->dev_list);
+
+ lst_put_tail(pdrv_object->dev_list, (struct list_head *)hdev_obj);
+
+ DBC_ENSURE(DSP_SUCCEEDED(status)
+ && !LST_IS_EMPTY(pdrv_object->dev_list));
+
+ return status;
+}
+
+/*
+ * ======== drv_remove_dev_object ========
+ * Purpose:
+ * Search for and remove a DeviceObject from the given DRV object's
+ * device list.
+ */
+int drv_remove_dev_object(struct drv_object *hDRVObject,
+ struct dev_object *hdev_obj)
+{
+ int status = -EPERM;
+ struct drv_object *pdrv_object = (struct drv_object *)hDRVObject;
+ struct list_head *cur_elem;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(pdrv_object);
+ DBC_REQUIRE(hdev_obj != NULL);
+
+ DBC_REQUIRE(pdrv_object->dev_list != NULL);
+ DBC_REQUIRE(!LST_IS_EMPTY(pdrv_object->dev_list));
+
+ /* Search list for p_proc_object: */
+ for (cur_elem = lst_first(pdrv_object->dev_list); cur_elem != NULL;
+ cur_elem = lst_next(pdrv_object->dev_list, cur_elem)) {
+ /* If found, remove it. */
+ if ((struct dev_object *)cur_elem == hdev_obj) {
+ lst_remove_elem(pdrv_object->dev_list, cur_elem);
+ status = 0;
+ break;
+ }
+ }
+ /* Remove list if empty. */
+ if (LST_IS_EMPTY(pdrv_object->dev_list)) {
+ kfree(pdrv_object->dev_list);
+ pdrv_object->dev_list = NULL;
+ }
+ DBC_ENSURE((pdrv_object->dev_list == NULL) ||
+ !LST_IS_EMPTY(pdrv_object->dev_list));
+
+ return status;
+}
+
+/*
+ * ======== drv_request_resources ========
+ * Purpose:
+ * Requests resources from the OS.
+ */
+int drv_request_resources(u32 dw_context, u32 *pDevNodeString)
+{
+ int status = 0;
+ struct drv_object *pdrv_object;
+ struct drv_ext *pszdev_node;
+
+ DBC_REQUIRE(dw_context != 0);
+ DBC_REQUIRE(pDevNodeString != NULL);
+
+ /*
+ * Allocate memory to hold the string. This will live until
+ * it is freed in drv_release_resources. Update the driver object
+ * list.
+ */
+
+ status = cfg_get_object((u32 *) &pdrv_object, REG_DRV_OBJECT);
+ if (DSP_SUCCEEDED(status)) {
+ pszdev_node = kzalloc(sizeof(struct drv_ext), GFP_KERNEL);
+ if (pszdev_node) {
+ lst_init_elem(&pszdev_node->link);
+ strncpy(pszdev_node->sz_string,
+ (char *)dw_context, MAXREGPATHLENGTH - 1);
+ pszdev_node->sz_string[MAXREGPATHLENGTH - 1] = '\0';
+ /* Update the Driver Object List */
+ *pDevNodeString = (u32) pszdev_node->sz_string;
+ lst_put_tail(pdrv_object->dev_node_string,
+ (struct list_head *)pszdev_node);
+ } else {
+ status = -ENOMEM;
+ *pDevNodeString = 0;
+ }
+ } else {
+ dev_dbg(bridge, "%s: Failed to get Driver Object from Registry",
+ __func__);
+ *pDevNodeString = 0;
+ }
+
+ DBC_ENSURE((DSP_SUCCEEDED(status) && pDevNodeString != NULL &&
+ !LST_IS_EMPTY(pdrv_object->dev_node_string)) ||
+ (DSP_FAILED(status) && *pDevNodeString == 0));
+
+ return status;
+}
+
+/*
+ * ======== drv_release_resources ========
+ * Purpose:
+ * Releases resources from the OS.
+ */
+int drv_release_resources(u32 dw_context, struct drv_object *hdrv_obj)
+{
+ int status = 0;
+ struct drv_object *pdrv_object = (struct drv_object *)hdrv_obj;
+ struct drv_ext *pszdev_node;
+
+	/*
+	 * Irrespective of the status, go ahead and clean up.
+	 * The following will overwrite the status.
+	 */
+ for (pszdev_node = (struct drv_ext *)drv_get_first_dev_extension();
+ pszdev_node != NULL; pszdev_node = (struct drv_ext *)
+ drv_get_next_dev_extension((u32) pszdev_node)) {
+ if (!pdrv_object->dev_node_string) {
+			/* When could this happen? */
+ continue;
+ }
+ if ((u32) pszdev_node == dw_context) {
+ /* Found it */
+ /* Delete from the Driver object list */
+ lst_remove_elem(pdrv_object->dev_node_string,
+ (struct list_head *)pszdev_node);
+ kfree((void *)pszdev_node);
+ break;
+ }
+ /* Delete the List if it is empty */
+ if (LST_IS_EMPTY(pdrv_object->dev_node_string)) {
+ kfree(pdrv_object->dev_node_string);
+ pdrv_object->dev_node_string = NULL;
+ }
+ }
+ return status;
+}
+
+/*
+ * ======== request_bridge_resources ========
+ * Purpose:
+ * Reserves shared memory for bridge.
+ */
+static int request_bridge_resources(struct cfg_hostres *res)
+{
+ int status = 0;
+ struct cfg_hostres *host_res = res;
+
+ /* num_mem_windows must not be more than CFG_MAXMEMREGISTERS */
+ host_res->num_mem_windows = 2;
+
+ /* First window is for DSP internal memory */
+ host_res->dw_sys_ctrl_base = ioremap(OMAP_SYSC_BASE, OMAP_SYSC_SIZE);
+ dev_dbg(bridge, "dw_mem_base[0] 0x%x\n", host_res->dw_mem_base[0]);
+ dev_dbg(bridge, "dw_mem_base[3] 0x%x\n", host_res->dw_mem_base[3]);
+ dev_dbg(bridge, "dw_dmmu_base %p\n", host_res->dw_dmmu_base);
+
+	/* For the 24xx base port, DSP internal memory is not mapped;
+	 * TODO: do an ioremap here */
+ /* Second window is for DSP external memory shared with MPU */
+
+ /* These are hard-coded values */
+ host_res->birq_registers = 0;
+ host_res->birq_attrib = 0;
+ host_res->dw_offset_for_monitor = 0;
+ host_res->dw_chnl_offset = 0;
+ /* CHNL_MAXCHANNELS */
+ host_res->dw_num_chnls = CHNL_MAXCHANNELS;
+ host_res->dw_chnl_buf_size = 0x400;
+
+ return status;
+}
+
+/*
+ * ======== drv_request_bridge_res_dsp ========
+ * Purpose:
+ * Reserves shared memory for bridge.
+ */
+int drv_request_bridge_res_dsp(void **phost_resources)
+{
+ int status = 0;
+ struct cfg_hostres *host_res;
+ u32 dw_buff_size;
+ u32 dma_addr;
+ u32 shm_size;
+ struct drv_data *drv_datap = dev_get_drvdata(bridge);
+
+ dw_buff_size = sizeof(struct cfg_hostres);
+
+ host_res = kzalloc(dw_buff_size, GFP_KERNEL);
+
+ if (host_res != NULL) {
+ request_bridge_resources(host_res);
+ /* num_mem_windows must not be more than CFG_MAXMEMREGISTERS */
+ host_res->num_mem_windows = 4;
+
+ host_res->dw_mem_base[0] = 0;
+ host_res->dw_mem_base[2] = (u32) ioremap(OMAP_DSP_MEM1_BASE,
+ OMAP_DSP_MEM1_SIZE);
+ host_res->dw_mem_base[3] = (u32) ioremap(OMAP_DSP_MEM2_BASE,
+ OMAP_DSP_MEM2_SIZE);
+ host_res->dw_mem_base[4] = (u32) ioremap(OMAP_DSP_MEM3_BASE,
+ OMAP_DSP_MEM3_SIZE);
+ host_res->dw_per_base = ioremap(OMAP_PER_CM_BASE,
+ OMAP_PER_CM_SIZE);
+ host_res->dw_per_pm_base = (u32) ioremap(OMAP_PER_PRM_BASE,
+ OMAP_PER_PRM_SIZE);
+ host_res->dw_core_pm_base = (u32) ioremap(OMAP_CORE_PRM_BASE,
+ OMAP_CORE_PRM_SIZE);
+ host_res->dw_dmmu_base = ioremap(OMAP_DMMU_BASE,
+ OMAP_DMMU_SIZE);
+
+ dev_dbg(bridge, "dw_mem_base[0] 0x%x\n",
+ host_res->dw_mem_base[0]);
+ dev_dbg(bridge, "dw_mem_base[1] 0x%x\n",
+ host_res->dw_mem_base[1]);
+ dev_dbg(bridge, "dw_mem_base[2] 0x%x\n",
+ host_res->dw_mem_base[2]);
+ dev_dbg(bridge, "dw_mem_base[3] 0x%x\n",
+ host_res->dw_mem_base[3]);
+ dev_dbg(bridge, "dw_mem_base[4] 0x%x\n",
+ host_res->dw_mem_base[4]);
+ dev_dbg(bridge, "dw_dmmu_base %p\n", host_res->dw_dmmu_base);
+
+ shm_size = drv_datap->shm_size;
+ if (shm_size >= 0x10000) {
+ /* Allocate Physically contiguous,
+ * non-cacheable memory */
+ host_res->dw_mem_base[1] =
+ (u32) mem_alloc_phys_mem(shm_size, 0x100000,
+ &dma_addr);
+ if (host_res->dw_mem_base[1] == 0) {
+ status = -ENOMEM;
+ pr_err("shm reservation Failed\n");
+ } else {
+ host_res->dw_mem_length[1] = shm_size;
+ host_res->dw_mem_phys[1] = dma_addr;
+
+ dev_dbg(bridge, "%s: Bridge shm address 0x%x "
+ "dma_addr %x size %x\n", __func__,
+ host_res->dw_mem_base[1],
+ dma_addr, shm_size);
+ }
+ }
+ if (DSP_SUCCEEDED(status)) {
+ /* These are hard-coded values */
+ host_res->birq_registers = 0;
+ host_res->birq_attrib = 0;
+ host_res->dw_offset_for_monitor = 0;
+ host_res->dw_chnl_offset = 0;
+ /* CHNL_MAXCHANNELS */
+ host_res->dw_num_chnls = CHNL_MAXCHANNELS;
+ host_res->dw_chnl_buf_size = 0x400;
+ dw_buff_size = sizeof(struct cfg_hostres);
+ }
+ *phost_resources = host_res;
+ }
+ /* End Mem alloc */
+ return status;
+}
+
+void mem_ext_phys_pool_init(u32 poolPhysBase, u32 poolSize)
+{
+ u32 pool_virt_base;
+
+ /* get the virtual address for the physical memory pool passed */
+ pool_virt_base = (u32) ioremap(poolPhysBase, poolSize);
+
+	if (!pool_virt_base) {
+ pr_err("%s: external physical memory map failed\n", __func__);
+ ext_phys_mem_pool_enabled = false;
+ } else {
+ ext_mem_pool.phys_mem_base = poolPhysBase;
+ ext_mem_pool.phys_mem_size = poolSize;
+ ext_mem_pool.virt_mem_base = pool_virt_base;
+ ext_mem_pool.next_phys_alloc_ptr = poolPhysBase;
+ ext_phys_mem_pool_enabled = true;
+ }
+}
+
+void mem_ext_phys_pool_release(void)
+{
+ if (ext_phys_mem_pool_enabled) {
+ iounmap((void *)(ext_mem_pool.virt_mem_base));
+ ext_phys_mem_pool_enabled = false;
+ }
+}
+
+/*
+ * ======== mem_ext_phys_mem_alloc ========
+ * Purpose:
+ * Allocate physically contiguous, uncached memory from external memory pool
+ */
+
+static void *mem_ext_phys_mem_alloc(u32 bytes, u32 align, OUT u32 * pPhysAddr)
+{
+ u32 new_alloc_ptr;
+ u32 offset;
+ u32 virt_addr;
+
+ if (align == 0)
+ align = 1;
+
+ if (bytes > ((ext_mem_pool.phys_mem_base + ext_mem_pool.phys_mem_size)
+ - ext_mem_pool.next_phys_alloc_ptr)) {
+		*pPhysAddr = 0;
+ return NULL;
+ } else {
+ offset = (ext_mem_pool.next_phys_alloc_ptr & (align - 1));
+ if (offset == 0)
+ new_alloc_ptr = ext_mem_pool.next_phys_alloc_ptr;
+ else
+ new_alloc_ptr = (ext_mem_pool.next_phys_alloc_ptr) +
+ (align - offset);
+ if ((new_alloc_ptr + bytes) <=
+ (ext_mem_pool.phys_mem_base + ext_mem_pool.phys_mem_size)) {
+ /* we can allocate */
+ *pPhysAddr = new_alloc_ptr;
+ ext_mem_pool.next_phys_alloc_ptr =
+ new_alloc_ptr + bytes;
+ virt_addr =
+ ext_mem_pool.virt_mem_base + (new_alloc_ptr -
+ ext_mem_pool.
+ phys_mem_base);
+ return (void *)virt_addr;
+ } else {
+ *pPhysAddr = 0;
+ return NULL;
+ }
+ }
+}
+
+/*
+ * ======== mem_alloc_phys_mem ========
+ * Purpose:
+ * Allocate physically contiguous, uncached memory
+ */
+void *mem_alloc_phys_mem(u32 byte_size, u32 ulAlign, OUT u32 * pPhysicalAddress)
+{
+ void *va_mem = NULL;
+ dma_addr_t pa_mem;
+
+ if (byte_size > 0) {
+ if (ext_phys_mem_pool_enabled) {
+ va_mem = mem_ext_phys_mem_alloc(byte_size, ulAlign,
+ (u32 *) &pa_mem);
+ } else
+ va_mem = dma_alloc_coherent(NULL, byte_size, &pa_mem,
+ GFP_KERNEL);
+ if (va_mem == NULL)
+ *pPhysicalAddress = 0;
+ else
+ *pPhysicalAddress = pa_mem;
+ }
+ return va_mem;
+}
+
+/*
+ * ======== mem_free_phys_mem ========
+ * Purpose:
+ * Free the given block of physically contiguous memory.
+ */
+void mem_free_phys_mem(void *pVirtualAddress, u32 pPhysicalAddress,
+ u32 byte_size)
+{
+ DBC_REQUIRE(pVirtualAddress != NULL);
+
+ if (!ext_phys_mem_pool_enabled)
+ dma_free_coherent(NULL, byte_size, pVirtualAddress,
+ pPhysicalAddress);
+}
diff --git a/drivers/staging/tidspbridge/rmgr/drv_interface.c b/drivers/staging/tidspbridge/rmgr/drv_interface.c
new file mode 100644
index 0000000..f0f089b
--- /dev/null
+++ b/drivers/staging/tidspbridge/rmgr/drv_interface.c
@@ -0,0 +1,644 @@
+/*
+ * drv_interface.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * DSP/BIOS Bridge driver interface.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+/* ----------------------------------- Host OS */
+
+#include <dspbridge/host_os.h>
+#include <linux/platform_device.h>
+#include <linux/pm.h>
+
+#ifdef MODULE
+#include <linux/module.h>
+#endif
+
+#include <linux/device.h>
+#include <linux/init.h>
+#include <linux/moduleparam.h>
+#include <linux/cdev.h>
+
+/* ----------------------------------- DSP/BIOS Bridge */
+#include <dspbridge/std.h>
+#include <dspbridge/dbdefs.h>
+
+/* ----------------------------------- Trace & Debug */
+#include <dspbridge/dbc.h>
+
+/* ----------------------------------- OS Adaptation Layer */
+#include <dspbridge/services.h>
+#include <dspbridge/clk.h>
+#include <dspbridge/sync.h>
+
+/* ----------------------------------- Platform Manager */
+#include <dspbridge/dspapi-ioctl.h>
+#include <dspbridge/dspapi.h>
+#include <dspbridge/dspdrv.h>
+
+/* ----------------------------------- Resource Manager */
+#include <dspbridge/pwr.h>
+
+/* ----------------------------------- This */
+#include <drv_interface.h>
+
+#include <dspbridge/cfg.h>
+#include <dspbridge/resourcecleanup.h>
+#include <dspbridge/chnl.h>
+#include <dspbridge/proc.h>
+#include <dspbridge/dev.h>
+#include <dspbridge/drvdefs.h>
+#include <dspbridge/drv.h>
+
+#ifdef CONFIG_BRIDGE_DVFS
+#include <mach-omap2/omap3-opp.h>
+#endif
+
+#define BRIDGE_NAME "C6410"
+/* ----------------------------------- Globals */
+#define DRIVER_NAME "DspBridge"
+#define DSPBRIDGE_VERSION "0.3"
+s32 dsp_debug;
+
+struct platform_device *omap_dspbridge_dev;
+struct device *bridge;
+
+/* This is a test variable used by Bridge to test different sleep states */
+s32 dsp_test_sleepstate;
+
+static struct cdev bridge_cdev;
+
+static struct class *bridge_class;
+
+static u32 driver_context;
+static s32 driver_major;
+static char *base_img;
+char *iva_img;
+static s32 shm_size = 0x500000; /* 5 MB */
+static int tc_wordswapon; /* Default value is always false */
+#ifdef CONFIG_BRIDGE_RECOVERY
+#define REC_TIMEOUT 5000 /*recovery timeout in msecs */
+static atomic_t bridge_cref; /* number of bridge open handles */
+static struct workqueue_struct *bridge_rec_queue;
+static struct work_struct bridge_recovery_work;
+static DECLARE_COMPLETION(bridge_comp);
+static DECLARE_COMPLETION(bridge_open_comp);
+static bool recover;
+#endif
+
+#ifdef CONFIG_PM
+struct omap34_xx_bridge_suspend_data {
+ int suspended;
+ wait_queue_head_t suspend_wq;
+};
+
+static struct omap34_xx_bridge_suspend_data bridge_suspend_data;
+
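+/*
+ * Block ioctl callers while the bridge is suspended; callers using
+ * non-blocking file descriptors get -EPERM instead of waiting for resume.
+ */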
+static int omap34_xxbridge_suspend_lockout(struct omap34_xx_bridge_suspend_data
+ *s, struct file *f)
+{
+ if ((s)->suspended) {
+ if ((f)->f_flags & O_NONBLOCK)
+ return -EPERM;
+ wait_event_interruptible((s)->suspend_wq, (s)->suspended == 0);
+ }
+ return 0;
+}
+#endif
+
+module_param(dsp_debug, int, 0);
+MODULE_PARM_DESC(dsp_debug, "Wait after loading DSP image. default = false");
+
+module_param(dsp_test_sleepstate, int, 0);
+MODULE_PARM_DESC(dsp_test_sleepstate, "DSP sleep state, default = 0");
+
+module_param(base_img, charp, 0);
+MODULE_PARM_DESC(base_img, "DSP base image, default = NULL");
+
+module_param(shm_size, int, 0);
+MODULE_PARM_DESC(shm_size, "shm size, default = 5 MB, minimum = 64 KB");
+
+module_param(tc_wordswapon, int, 0);
+MODULE_PARM_DESC(tc_wordswapon, "TC Word Swap Option. default = 0");
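+
+/*
+ * Typical module parameter usage (illustrative only; the module name and
+ * firmware path depend on the build and DSP image in use):
+ *   insmod bridgedriver.ko base_img=/lib/dsp/baseimage.dof shm_size=0x500000
+ */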
+
+MODULE_AUTHOR("Texas Instruments");
+MODULE_LICENSE("GPL");
+MODULE_VERSION(DSPBRIDGE_VERSION);
+
+static char *driver_name = DRIVER_NAME;
+
+static const struct file_operations bridge_fops = {
+ .open = bridge_open,
+ .release = bridge_release,
+ .unlocked_ioctl = bridge_ioctl,
+ .mmap = bridge_mmap,
+};
+
+#ifdef CONFIG_PM
+static u32 time_out = 1000;
+#ifdef CONFIG_BRIDGE_DVFS
+s32 dsp_max_opps = VDD1_OPP5;
+#endif
+
+/* Maximum Opps that can be requested by IVA */
+/*vdd1 rate table */
+#ifdef CONFIG_BRIDGE_DVFS
+const struct omap_opp vdd1_rate_table_bridge[] = {
+ {0, 0, 0},
+ /*OPP1 */
+ {S125M, VDD1_OPP1, 0},
+ /*OPP2 */
+ {S250M, VDD1_OPP2, 0},
+ /*OPP3 */
+ {S500M, VDD1_OPP3, 0},
+ /*OPP4 */
+ {S550M, VDD1_OPP4, 0},
+ /*OPP5 */
+ {S600M, VDD1_OPP5, 0},
+};
+#endif
+#endif
+
+struct dspbridge_platform_data *omap_dspbridge_pdata;
+
+u32 vdd1_dsp_freq[6][4] = {
+ {0, 0, 0, 0},
+ /*OPP1 */
+ {0, 90000, 0, 86000},
+ /*OPP2 */
+ {0, 180000, 80000, 170000},
+ /*OPP3 */
+ {0, 360000, 160000, 340000},
+ /*OPP4 */
+ {0, 396000, 325000, 376000},
+ /*OPP5 */
+ {0, 430000, 355000, 430000},
+};
+
+#ifdef CONFIG_BRIDGE_RECOVERY
+static void bridge_recover(struct work_struct *work)
+{
+ struct dev_object *dev;
+ struct cfg_devnode *dev_node;
+ if (atomic_read(&bridge_cref)) {
+ INIT_COMPLETION(bridge_comp);
+ while (!wait_for_completion_timeout(&bridge_comp,
+ msecs_to_jiffies(REC_TIMEOUT)))
+ pr_info("%s:%d handle(s) still opened\n",
+ __func__, atomic_read(&bridge_cref));
+ }
+ dev = dev_get_first();
+ dev_get_dev_node(dev, &dev_node);
+ if (!dev_node || DSP_FAILED(proc_auto_start(dev_node, dev)))
+ pr_err("DSP could not be restarted\n");
+ recover = false;
+ complete_all(&bridge_open_comp);
+}
+
+void bridge_recover_schedule(void)
+{
+ INIT_COMPLETION(bridge_open_comp);
+ recover = true;
+ queue_work(bridge_rec_queue, &bridge_recovery_work);
+}
+#endif
+#ifdef CONFIG_BRIDGE_DVFS
+static int dspbridge_scale_notification(struct notifier_block *op,
+ unsigned long val, void *ptr)
+{
+ struct dspbridge_platform_data *pdata =
+ omap_dspbridge_dev->dev.platform_data;
+
+ if (CPUFREQ_POSTCHANGE == val && pdata->dsp_get_opp)
+ pwr_pm_post_scale(PRCM_VDD1, pdata->dsp_get_opp());
+
+ return 0;
+}
+
+static struct notifier_block iva_clk_notifier = {
+ .notifier_call = dspbridge_scale_notification,
+};
+#endif
+
+/**
+ * omap3_bridge_startup() - perform low level initializations
+ * @pdev: pointer to platform device
+ *
+ * Initializes recovery, PM and DVFS required data, before calling
+ * clk and memory init routines.
+ */
+static int omap3_bridge_startup(struct platform_device *pdev)
+{
+ struct dspbridge_platform_data *pdata = pdev->dev.platform_data;
+ struct drv_data *drv_datap = NULL;
+	u32 phys_membase, phys_memsize;
+	int err;
+#ifdef CONFIG_BRIDGE_DVFS
+	int i;
+#endif
+
+#ifdef CONFIG_BRIDGE_RECOVERY
+ bridge_rec_queue = create_workqueue("bridge_rec_queue");
+ INIT_WORK(&bridge_recovery_work, bridge_recover);
+ INIT_COMPLETION(bridge_comp);
+#endif
+
+#ifdef CONFIG_PM
+ /* Initialize the wait queue */
+ bridge_suspend_data.suspended = 0;
+ init_waitqueue_head(&bridge_suspend_data.suspend_wq);
+
+#ifdef CONFIG_BRIDGE_DVFS
+ for (i = 0; i < 6; i++)
+ pdata->mpu_speed[i] = vdd1_rate_table_bridge[i].rate;
+
+ err = cpufreq_register_notifier(&iva_clk_notifier,
+ CPUFREQ_TRANSITION_NOTIFIER);
+ if (err)
+ pr_err("%s: clk_notifier_register failed for iva2_ck\n",
+ __func__);
+#endif
+#endif
+
+ dsp_clk_init();
+ services_init();
+
+ drv_datap = kzalloc(sizeof(struct drv_data), GFP_KERNEL);
+ if (!drv_datap) {
+ err = -ENOMEM;
+ goto err1;
+ }
+
+ drv_datap->shm_size = shm_size;
+ drv_datap->tc_wordswapon = tc_wordswapon;
+
+ if (base_img) {
+ drv_datap->base_img = kmalloc(strlen(base_img) + 1, GFP_KERNEL);
+ if (!drv_datap->base_img) {
+ err = -ENOMEM;
+ goto err2;
+ }
+ strncpy(drv_datap->base_img, base_img, strlen(base_img) + 1);
+ }
+
+ dev_set_drvdata(bridge, drv_datap);
+
+ if (shm_size < 0x10000) { /* 64 KB */
+ err = -EINVAL;
+ pr_err("%s: shm size must be at least 64 KB\n", __func__);
+ goto err3;
+ }
+ dev_dbg(bridge, "%s: requested shm_size = 0x%x\n", __func__, shm_size);
+
+ phys_membase = pdata->phys_mempool_base;
+ phys_memsize = pdata->phys_mempool_size;
+ if (phys_membase > 0 && phys_memsize > 0)
+ mem_ext_phys_pool_init(phys_membase, phys_memsize);
+
+ if (tc_wordswapon)
+ dev_dbg(bridge, "%s: TC Word Swap is enabled\n", __func__);
+
+ driver_context = dsp_init(&err);
+ if (err) {
+ pr_err("DSP Bridge driver initialization failed\n");
+ goto err4;
+ }
+
+ return 0;
+
+err4:
+ mem_ext_phys_pool_release();
+err3:
+ kfree(drv_datap->base_img);
+err2:
+ kfree(drv_datap);
+err1:
+#ifdef CONFIG_BRIDGE_DVFS
+ cpufreq_unregister_notifier(&iva_clk_notifier,
+ CPUFREQ_TRANSITION_NOTIFIER);
+#endif
+ dsp_clk_exit();
+ services_exit();
+
+ return err;
+}
+
+static int __devinit omap34_xx_bridge_probe(struct platform_device *pdev)
+{
+ int err;
+ dev_t dev = 0;
+
+ omap_dspbridge_dev = pdev;
+
+ /* Global bridge device */
+ bridge = &omap_dspbridge_dev->dev;
+
+ /* Bridge low level initializations */
+ err = omap3_bridge_startup(pdev);
+ if (err)
+ goto err1;
+
+ /* use 2.6 device model */
+ err = alloc_chrdev_region(&dev, 0, 1, driver_name);
+ if (err) {
+ pr_err("%s: Can't get major %d\n", __func__, driver_major);
+ goto err1;
+ }
+
+ cdev_init(&bridge_cdev, &bridge_fops);
+ bridge_cdev.owner = THIS_MODULE;
+
+ err = cdev_add(&bridge_cdev, dev, 1);
+ if (err) {
+ pr_err("%s: Failed to add bridge device\n", __func__);
+ goto err2;
+ }
+
+ /* udev support */
+ bridge_class = class_create(THIS_MODULE, "ti_bridge");
+ if (IS_ERR(bridge_class)) {
+ pr_err("%s: Error creating bridge class\n", __func__);
+ goto err3;
+ }
+
+ driver_major = MAJOR(dev);
+ device_create(bridge_class, NULL, MKDEV(driver_major, 0),
+ NULL, "DspBridge");
+ pr_info("DSP Bridge driver loaded\n");
+
+ return 0;
+
+err3:
+ cdev_del(&bridge_cdev);
+err2:
+ unregister_chrdev_region(dev, 1);
+err1:
+ return err;
+}
+
+static int __devexit omap34_xx_bridge_remove(struct platform_device *pdev)
+{
+ dev_t devno;
+ bool ret;
+ int status = 0;
+ void *hdrv_obj = NULL;
+
+ status = cfg_get_object((u32 *) &hdrv_obj, REG_DRV_OBJECT);
+ if (DSP_FAILED(status))
+ goto func_cont;
+
+#ifdef CONFIG_BRIDGE_DVFS
+ if (cpufreq_unregister_notifier(&iva_clk_notifier,
+ CPUFREQ_TRANSITION_NOTIFIER))
+ pr_err("%s: cpufreq_unregister_notifier failed for iva2_ck\n",
+ __func__);
+#endif /* #ifdef CONFIG_BRIDGE_DVFS */
+
+ if (driver_context) {
+ /* Put the DSP in reset state */
+ ret = dsp_deinit(driver_context);
+ driver_context = 0;
+ DBC_ASSERT(ret == true);
+ }
+
+func_cont:
+ mem_ext_phys_pool_release();
+
+ dsp_clk_exit();
+ services_exit();
+
+ devno = MKDEV(driver_major, 0);
+ cdev_del(&bridge_cdev);
+ unregister_chrdev_region(devno, 1);
+ if (bridge_class) {
+ /* remove the device from sysfs */
+ device_destroy(bridge_class, MKDEV(driver_major, 0));
+ class_destroy(bridge_class);
+
+ }
+ return 0;
+}
+
+#ifdef CONFIG_PM
+static int BRIDGE_SUSPEND(struct platform_device *pdev, pm_message_t state)
+{
+ u32 status;
+ u32 command = PWR_EMERGENCYDEEPSLEEP;
+
+ status = pwr_sleep_dsp(command, time_out);
+ if (DSP_FAILED(status))
+ return -1;
+
+ bridge_suspend_data.suspended = 1;
+ return 0;
+}
+
+static int BRIDGE_RESUME(struct platform_device *pdev)
+{
+ u32 status;
+
+ status = pwr_wake_dsp(time_out);
+ if (DSP_FAILED(status))
+ return -1;
+
+ bridge_suspend_data.suspended = 0;
+ wake_up(&bridge_suspend_data.suspend_wq);
+ return 0;
+}
+#else
+#define BRIDGE_SUSPEND NULL
+#define BRIDGE_RESUME NULL
+#endif
+
+static struct platform_driver bridge_driver = {
+ .driver = {
+ .name = BRIDGE_NAME,
+ },
+ .probe = omap34_xx_bridge_probe,
+ .remove = __devexit_p(omap34_xx_bridge_remove),
+ .suspend = BRIDGE_SUSPEND,
+ .resume = BRIDGE_RESUME,
+};
+
+static int __init bridge_init(void)
+{
+ return platform_driver_register(&bridge_driver);
+}
+
+static void __exit bridge_exit(void)
+{
+ platform_driver_unregister(&bridge_driver);
+}
+
+/*
+ * This function is called when an application opens handle to the
+ * bridge driver.
+ */
+static int bridge_open(struct inode *ip, struct file *filp)
+{
+ int status = 0;
+ struct process_context *pr_ctxt = NULL;
+
+ /*
+ * Allocate a new process context and insert it into global
+ * process context list.
+ */
+
+#ifdef CONFIG_BRIDGE_RECOVERY
+ if (recover) {
+ if (filp->f_flags & O_NONBLOCK ||
+ wait_for_completion_interruptible(&bridge_open_comp))
+ return -EBUSY;
+ }
+#endif
+ pr_ctxt = kzalloc(sizeof(struct process_context), GFP_KERNEL);
+ if (pr_ctxt) {
+ pr_ctxt->res_state = PROC_RES_ALLOCATED;
+ spin_lock_init(&pr_ctxt->dmm_map_lock);
+ INIT_LIST_HEAD(&pr_ctxt->dmm_map_list);
+ spin_lock_init(&pr_ctxt->dmm_rsv_lock);
+ INIT_LIST_HEAD(&pr_ctxt->dmm_rsv_list);
+ mutex_init(&pr_ctxt->node_mutex);
+ mutex_init(&pr_ctxt->strm_mutex);
+ } else {
+ status = -ENOMEM;
+ }
+
+ filp->private_data = pr_ctxt;
+#ifdef CONFIG_BRIDGE_RECOVERY
+ if (!status)
+ atomic_inc(&bridge_cref);
+#endif
+ return status;
+}
+
+/*
+ * This function is called when an application closes handle to the bridge
+ * driver.
+ */
+static int bridge_release(struct inode *ip, struct file *filp)
+{
+ int status = 0;
+ struct process_context *pr_ctxt;
+
+ if (!filp->private_data) {
+ status = -EIO;
+ goto err;
+ }
+
+ pr_ctxt = filp->private_data;
+ flush_signals(current);
+ drv_remove_all_resources(pr_ctxt);
+ proc_detach(pr_ctxt);
+ kfree(pr_ctxt);
+
+ filp->private_data = NULL;
+
+err:
+#ifdef CONFIG_BRIDGE_RECOVERY
+ if (!atomic_dec_return(&bridge_cref))
+ complete(&bridge_comp);
+#endif
+ return status;
+}
+
+/* This function provides IO interface to the bridge driver. */
+static long bridge_ioctl(struct file *filp, unsigned int code,
+ unsigned long args)
+{
+ int status;
+ u32 retval = 0;
+ union Trapped_Args buf_in;
+
+ DBC_REQUIRE(filp != NULL);
+#ifdef CONFIG_BRIDGE_RECOVERY
+ if (recover) {
+ status = -EIO;
+ goto err;
+ }
+#endif
+#ifdef CONFIG_PM
+ status = omap34_xxbridge_suspend_lockout(&bridge_suspend_data, filp);
+ if (status != 0)
+ return status;
+#endif
+
+ if (!filp->private_data) {
+ status = -EIO;
+ goto err;
+ }
+
+ status = copy_from_user(&buf_in, (union Trapped_Args *)args,
+ sizeof(union Trapped_Args));
+
+ if (!status) {
+ status = api_call_dev_ioctl(code, &buf_in, &retval,
+ filp->private_data);
+
+ if (DSP_SUCCEEDED(status)) {
+ status = retval;
+ } else {
+ dev_dbg(bridge, "%s: IOCTL Failed, code: 0x%x "
+ "status 0x%x\n", __func__, code, status);
+ status = -1;
+ }
+
+ }
+
+err:
+ return status;
+}
+
+/* This function maps kernel space memory to user space memory. */
+static int bridge_mmap(struct file *filp, struct vm_area_struct *vma)
+{
+ u32 offset = vma->vm_pgoff << PAGE_SHIFT;
+ u32 status;
+
+ DBC_ASSERT(vma->vm_start < vma->vm_end);
+
+ vma->vm_flags |= VM_RESERVED | VM_IO;
+ vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
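+
+	/*
+	 * The user-supplied mmap offset (vma->vm_pgoff) is used directly as
+	 * the page frame number of the physically contiguous region to map.
+	 */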
+
+ dev_dbg(bridge, "%s: vm filp %p offset %x start %lx end %lx page_prot "
+ "%lx flags %lx\n", __func__, filp, offset,
+ vma->vm_start, vma->vm_end, vma->vm_page_prot, vma->vm_flags);
+
+ status = remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
+ vma->vm_end - vma->vm_start,
+ vma->vm_page_prot);
+ if (status != 0)
+ status = -EAGAIN;
+
+ return status;
+}
+
+/* To remove all process resources before removing the process from the
+ * process context list */
+int drv_remove_all_resources(void *hPCtxt)
+{
+ int status = 0;
+ struct process_context *ctxt = (struct process_context *)hPCtxt;
+ drv_remove_all_strm_res_elements(ctxt);
+ drv_remove_all_node_res_elements(ctxt);
+ drv_remove_all_dmm_res_elements(ctxt);
+ ctxt->res_state = PROC_RES_FREED;
+ return status;
+}
+
+/* Bridge driver initialization and de-initialization functions */
+module_init(bridge_init);
+module_exit(bridge_exit);
diff --git a/drivers/staging/tidspbridge/rmgr/drv_interface.h b/drivers/staging/tidspbridge/rmgr/drv_interface.h
new file mode 100644
index 0000000..fd6f489
--- /dev/null
+++ b/drivers/staging/tidspbridge/rmgr/drv_interface.h
@@ -0,0 +1,27 @@
+/*
+ * drv_interface.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef _DRV_INTERFACE_H_
+#define _DRV_INTERFACE_H_
+
+/* Prototypes for all functions in this bridge */
+static int __init bridge_init(void); /* Initialize bridge */
+static void __exit bridge_exit(void); /* Opposite of initialize */
+static int bridge_open(struct inode *, struct file *); /* Open */
+static int bridge_release(struct inode *, struct file *); /* Release */
+static long bridge_ioctl(struct file *, unsigned int, unsigned long);
+static int bridge_mmap(struct file *filp, struct vm_area_struct *vma);
+#endif /* ifndef _DRV_INTERFACE_H_ */
diff --git a/drivers/staging/tidspbridge/rmgr/dspdrv.c b/drivers/staging/tidspbridge/rmgr/dspdrv.c
new file mode 100644
index 0000000..ec9ba4f
--- /dev/null
+++ b/drivers/staging/tidspbridge/rmgr/dspdrv.c
@@ -0,0 +1,142 @@
+/*
+ * dspdrv.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Interface to allocate and free bridge resources.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+/* ----------------------------------- Host OS */
+#include <dspbridge/host_os.h>
+
+/* ----------------------------------- DSP/BIOS Bridge */
+#include <dspbridge/std.h>
+#include <dspbridge/dbdefs.h>
+
+/* ----------------------------------- Trace & Debug */
+#include <dspbridge/dbc.h>
+
+/* ----------------------------------- OS Adaptation Layer */
+#include <dspbridge/cfg.h>
+
+/* ----------------------------------- Platform Manager */
+#include <dspbridge/drv.h>
+#include <dspbridge/dev.h>
+#include <dspbridge/dspapi.h>
+
+/* ----------------------------------- Resource Manager */
+#include <dspbridge/mgr.h>
+
+/* ----------------------------------- This */
+#include <dspbridge/dspdrv.h>
+
+/*
+ * ======== dsp_init ========
+ * Allocates bridge resources. Loads a base image onto DSP, if specified.
+ */
+u32 dsp_init(OUT u32 *init_status)
+{
+ char dev_node[MAXREGPATHLENGTH] = "TIOMAP1510";
+ int status = -EPERM;
+ struct drv_object *drv_obj = NULL;
+ u32 device_node;
+ u32 device_node_string;
+
+ if (!api_init())
+ goto func_cont;
+
+ status = drv_create(&drv_obj);
+ if (DSP_FAILED(status)) {
+ api_exit();
+ goto func_cont;
+ }
+
+ /* End drv_create */
+ /* Request Resources */
+ status = drv_request_resources((u32) &dev_node, &device_node_string);
+ if (DSP_SUCCEEDED(status)) {
+ /* Attempt to Start the Device */
+ status = dev_start_device((struct cfg_devnode *)
+ device_node_string);
+ if (DSP_FAILED(status))
+ (void)drv_release_resources
+ ((u32) device_node_string, drv_obj);
+ } else {
+ dev_dbg(bridge, "%s: drv_request_resources Failed\n", __func__);
+ status = -EPERM;
+ }
+
+ /* Unwind whatever was loaded */
+ if (DSP_FAILED(status)) {
+		/* Irrespective of the status of dev_remove_device we continue
+		 * unloading. Get the Driver Object, iterate through and remove.
+ * Reset the status to E_FAIL to avoid going through
+ * api_init_complete2. */
+ for (device_node = drv_get_first_dev_extension();
+ device_node != 0;
+ device_node = drv_get_next_dev_extension(device_node)) {
+ (void)dev_remove_device((struct cfg_devnode *)
+ device_node);
+ (void)drv_release_resources((u32) device_node, drv_obj);
+ }
+ /* Remove the Driver Object */
+ (void)drv_destroy(drv_obj);
+ drv_obj = NULL;
+ api_exit();
+ dev_dbg(bridge, "%s: Logical device failed init\n", __func__);
+ } /* Unwinding the loaded drivers */
+func_cont:
+ /* Attempt to Start the Board */
+ if (DSP_SUCCEEDED(status)) {
+		/* BRD_AutoStart could fail if the dsp executable is not the
+ * correct one. We should not propagate that error
+ * into the device loader. */
+ (void)api_init_complete2();
+ } else {
+ dev_dbg(bridge, "%s: Failed\n", __func__);
+ } /* End api_init_complete2 */
+ DBC_ENSURE((DSP_SUCCEEDED(status) && drv_obj != NULL) ||
+ (DSP_FAILED(status) && drv_obj == NULL));
+ *init_status = status;
+ /* Return the Driver Object */
+ return (u32) drv_obj;
+}
+
+/*
+ * ======== dsp_deinit ========
+ * Frees the resources allocated for bridge.
+ */
+bool dsp_deinit(u32 deviceContext)
+{
+ bool ret = true;
+ u32 device_node;
+ struct mgr_object *mgr_obj = NULL;
+
+ while ((device_node = drv_get_first_dev_extension()) != 0) {
+ (void)dev_remove_device((struct cfg_devnode *)device_node);
+
+ (void)drv_release_resources((u32) device_node,
+ (struct drv_object *)deviceContext);
+ }
+
+ (void)drv_destroy((struct drv_object *)deviceContext);
+
+ /* Get the Manager Object from Registry
+ * MGR Destroy will unload the DCD dll */
+ if (DSP_SUCCEEDED(cfg_get_object((u32 *) &mgr_obj, REG_MGR_OBJECT)))
+ (void)mgr_destroy(mgr_obj);
+
+ api_exit();
+
+ return ret;
+}
diff --git a/drivers/staging/tidspbridge/rmgr/mgr.c b/drivers/staging/tidspbridge/rmgr/mgr.c
new file mode 100644
index 0000000..b1a68ac
--- /dev/null
+++ b/drivers/staging/tidspbridge/rmgr/mgr.c
@@ -0,0 +1,374 @@
+/*
+ * mgr.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Implementation of Manager interface to the device object at the
+ * driver level. This queries the NDB database and retrieves the
+ * data about Node and Processor.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+/* ----------------------------------- DSP/BIOS Bridge */
+#include <dspbridge/std.h>
+#include <dspbridge/dbdefs.h>
+
+/* ----------------------------------- Trace & Debug */
+#include <dspbridge/dbc.h>
+
+/* ----------------------------------- OS Adaptation Layer */
+#include <dspbridge/cfg.h>
+#include <dspbridge/sync.h>
+
+/* ----------------------------------- Others */
+#include <dspbridge/dbdcd.h>
+#include <dspbridge/drv.h>
+#include <dspbridge/dev.h>
+
+/* ----------------------------------- This */
+#include <dspbridge/mgr.h>
+
+/* ----------------------------------- Defines, Data Structures, Typedefs */
+#define ZLDLLNAME ""
+
+struct mgr_object {
+ struct dcd_manager *hdcd_mgr; /* Proc/Node data manager */
+};
+
+/* ----------------------------------- Globals */
+static u32 refs;
+
+/*
+ * ========= mgr_create =========
+ * Purpose:
+ * MGR Object gets created only once during driver Loading.
+ */
+int mgr_create(OUT struct mgr_object **phMgrObject,
+ struct cfg_devnode *dev_node_obj)
+{
+ int status = 0;
+ struct mgr_object *pmgr_obj = NULL;
+
+ DBC_REQUIRE(phMgrObject != NULL);
+ DBC_REQUIRE(refs > 0);
+
+ pmgr_obj = kzalloc(sizeof(struct mgr_object), GFP_KERNEL);
+ if (pmgr_obj) {
+ status = dcd_create_manager(ZLDLLNAME, &pmgr_obj->hdcd_mgr);
+ if (DSP_SUCCEEDED(status)) {
+ /* If succeeded store the handle in the MGR Object */
+ status = cfg_set_object((u32) pmgr_obj, REG_MGR_OBJECT);
+ if (DSP_SUCCEEDED(status)) {
+ *phMgrObject = pmgr_obj;
+ } else {
+ dcd_destroy_manager(pmgr_obj->hdcd_mgr);
+ kfree(pmgr_obj);
+ }
+ } else {
+ /* failed to Create DCD Manager */
+ kfree(pmgr_obj);
+ }
+ } else {
+ status = -ENOMEM;
+ }
+
+ DBC_ENSURE(DSP_FAILED(status) || pmgr_obj);
+ return status;
+}
+
+/*
+ * ========= mgr_destroy =========
+ * This function is invoked during bridge driver unloading. Frees the MGR object.
+ */
+int mgr_destroy(struct mgr_object *hmgr_obj)
+{
+ int status = 0;
+ struct mgr_object *pmgr_obj = (struct mgr_object *)hmgr_obj;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(hmgr_obj);
+
+ /* Free resources */
+ if (hmgr_obj->hdcd_mgr)
+ dcd_destroy_manager(hmgr_obj->hdcd_mgr);
+
+ kfree(pmgr_obj);
+ /* Update the Registry with NULL for MGR Object */
+ (void)cfg_set_object(0, REG_MGR_OBJECT);
+
+ return status;
+}
+
+/*
+ * ======== mgr_enum_node_info ========
+ * Enumerate and get configuration information about nodes configured
+ * in the node database.
+ */
+int mgr_enum_node_info(u32 node_id, OUT struct dsp_ndbprops *pndb_props,
+ u32 undb_props_size, OUT u32 *pu_num_nodes)
+{
+ int status = 0;
+ struct dsp_uuid node_uuid, temp_uuid;
+ u32 temp_index = 0;
+ u32 node_index = 0;
+ struct dcd_genericobj gen_obj;
+ struct mgr_object *pmgr_obj = NULL;
+
+ DBC_REQUIRE(pndb_props != NULL);
+ DBC_REQUIRE(pu_num_nodes != NULL);
+ DBC_REQUIRE(undb_props_size >= sizeof(struct dsp_ndbprops));
+ DBC_REQUIRE(refs > 0);
+
+ *pu_num_nodes = 0;
+ /* Get The Manager Object from the Registry */
+ status = cfg_get_object((u32 *) &pmgr_obj, REG_MGR_OBJECT);
+ if (DSP_FAILED(status))
+ goto func_cont;
+
+ DBC_ASSERT(pmgr_obj);
+	/* Loop until enumeration fails or there are no more items;
+	 * we exit the loop on any non-zero status. */
+ while (status == 0) {
+ status = dcd_enumerate_object(temp_index++, DSP_DCDNODETYPE,
+ &temp_uuid);
+ if (status == 0) {
+ node_index++;
+ if (node_id == (node_index - 1))
+ node_uuid = temp_uuid;
+
+ }
+ }
+ if (DSP_SUCCEEDED(status)) {
+ if (node_id > (node_index - 1)) {
+ status = -EINVAL;
+ } else {
+ status = dcd_get_object_def(pmgr_obj->hdcd_mgr,
+ (struct dsp_uuid *)
+ &node_uuid, DSP_DCDNODETYPE,
+ &gen_obj);
+ if (DSP_SUCCEEDED(status)) {
+ /* Get the Obj def */
+ *pndb_props =
+ gen_obj.obj_data.node_obj.ndb_props;
+ *pu_num_nodes = node_index;
+ }
+ }
+ }
+
+func_cont:
+ DBC_ENSURE((DSP_SUCCEEDED(status) && *pu_num_nodes > 0) ||
+ (DSP_FAILED(status) && *pu_num_nodes == 0));
+
+ return status;
+}
+
+/*
+ * ======== mgr_enum_processor_info ========
+ * Enumerate and get configuration information about available
+ * DSP processors.
+ */
+int mgr_enum_processor_info(u32 processor_id,
+ OUT struct dsp_processorinfo *
+ processor_info, u32 processor_info_size,
+ OUT u8 *pu_num_procs)
+{
+ int status = 0;
+ int status1 = 0;
+ int status2 = 0;
+ struct dsp_uuid temp_uuid;
+ u32 temp_index = 0;
+ u32 proc_index = 0;
+ struct dcd_genericobj gen_obj;
+ struct mgr_object *pmgr_obj = NULL;
+ struct mgr_processorextinfo *ext_info;
+ struct dev_object *hdev_obj;
+ struct drv_object *hdrv_obj;
+ u8 dev_type;
+ struct cfg_devnode *dev_node;
+ bool proc_detect = false;
+
+ DBC_REQUIRE(processor_info != NULL);
+ DBC_REQUIRE(pu_num_procs != NULL);
+ DBC_REQUIRE(processor_info_size >= sizeof(struct dsp_processorinfo));
+ DBC_REQUIRE(refs > 0);
+
+ *pu_num_procs = 0;
+ status = cfg_get_object((u32 *) &hdrv_obj, REG_DRV_OBJECT);
+ if (DSP_SUCCEEDED(status)) {
+ status = drv_get_dev_object(processor_id, hdrv_obj, &hdev_obj);
+ if (DSP_SUCCEEDED(status)) {
+ status = dev_get_dev_type(hdev_obj, (u8 *) &dev_type);
+ status = dev_get_dev_node(hdev_obj, &dev_node);
+ if (dev_type != DSP_UNIT)
+ status = -EPERM;
+
+ if (DSP_SUCCEEDED(status))
+ processor_info->processor_type = DSPTYPE64;
+ }
+ }
+ if (DSP_FAILED(status))
+ goto func_end;
+
+ /* Get The Manager Object from the Registry */
+ if (DSP_FAILED(cfg_get_object((u32 *) &pmgr_obj, REG_MGR_OBJECT))) {
+ dev_dbg(bridge, "%s: Failed to get MGR Object\n", __func__);
+ goto func_end;
+ }
+ DBC_ASSERT(pmgr_obj);
+	/* Loop until there are no more items in the enumeration;
+	 * we exit the loop on any non-zero status. */
+ while (status1 == 0) {
+ status1 = dcd_enumerate_object(temp_index++,
+ DSP_DCDPROCESSORTYPE,
+ &temp_uuid);
+ if (status1 != 0)
+ break;
+
+ proc_index++;
+ /* Get the Object properties to find the Device/Processor
+ * Type */
+ if (proc_detect != false)
+ continue;
+
+ status2 = dcd_get_object_def(pmgr_obj->hdcd_mgr,
+ (struct dsp_uuid *)&temp_uuid,
+ DSP_DCDPROCESSORTYPE, &gen_obj);
+ if (DSP_SUCCEEDED(status2)) {
+ /* Get the Obj def */
+ if (processor_info_size <
+ sizeof(struct mgr_processorextinfo)) {
+ *processor_info = gen_obj.obj_data.proc_info;
+ } else {
+ /* extended info */
+ ext_info = (struct mgr_processorextinfo *)
+ processor_info;
+ *ext_info = gen_obj.obj_data.ext_proc_obj;
+ }
+ dev_dbg(bridge, "%s: Got proctype from DCD %x\n",
+ __func__, processor_info->processor_type);
+ /* See if we got the needed processor */
+ if (dev_type == DSP_UNIT) {
+ if (processor_info->processor_type ==
+ DSPPROCTYPE_C64)
+ proc_detect = true;
+ } else if (dev_type == IVA_UNIT) {
+ if (processor_info->processor_type ==
+ IVAPROCTYPE_ARM7)
+ proc_detect = true;
+ }
+			/* User applications only check for chip type, hence
+			 * this clumsy overwrite */
+ processor_info->processor_type = DSPTYPE64;
+ } else {
+ dev_dbg(bridge, "%s: Failed to get DCD processor info "
+ "%x\n", __func__, status2);
+ status = -EPERM;
+ }
+ }
+ *pu_num_procs = proc_index;
+ if (proc_detect == false) {
+ dev_dbg(bridge, "%s: Failed to get proc info from DCD, so use "
+ "CFG registry\n", __func__);
+ processor_info->processor_type = DSPTYPE64;
+ }
+func_end:
+ return status;
+}
+
+/*
+ * ======== mgr_exit ========
+ * Decrement reference count, and free resources when reference count is
+ * 0.
+ */
+void mgr_exit(void)
+{
+ DBC_REQUIRE(refs > 0);
+ refs--;
+ if (refs == 0)
+ dcd_exit();
+
+ DBC_ENSURE(refs >= 0);
+}
+
+/*
+ * ======== mgr_get_dcd_handle ========
+ * Retrieves the MGR handle. Accessor Function.
+ */
+int mgr_get_dcd_handle(struct mgr_object *hMGRHandle,
+ OUT u32 *phDCDHandle)
+{
+ int status = -EPERM;
+ struct mgr_object *pmgr_obj = (struct mgr_object *)hMGRHandle;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(phDCDHandle != NULL);
+
+ *phDCDHandle = (u32) NULL;
+ if (pmgr_obj) {
+ *phDCDHandle = (u32) pmgr_obj->hdcd_mgr;
+ status = 0;
+ }
+ DBC_ENSURE((DSP_SUCCEEDED(status) && *phDCDHandle != (u32) NULL) ||
+ (DSP_FAILED(status) && *phDCDHandle == (u32) NULL));
+
+ return status;
+}
+
+/*
+ * ======== mgr_init ========
+ * Initialize MGR's private state, keeping a reference count on each call.
+ */
+bool mgr_init(void)
+{
+ bool ret = true;
+ bool init_dcd = false;
+
+ DBC_REQUIRE(refs >= 0);
+
+ if (refs == 0) {
+ init_dcd = dcd_init(); /* DCD Module */
+
+ if (!init_dcd)
+ ret = false;
+ }
+
+ if (ret)
+ refs++;
+
+ DBC_ENSURE((ret && (refs > 0)) || (!ret && (refs >= 0)));
+
+ return ret;
+}
+
+/*
+ * ======== mgr_wait_for_bridge_events ========
+ * Block on any Bridge event(s)
+ */
+int mgr_wait_for_bridge_events(struct dsp_notification **anotifications,
+ u32 count, OUT u32 *pu_index,
+ u32 utimeout)
+{
+ int status;
+ struct sync_object *sync_events[MAX_EVENTS];
+ u32 i;
+
+ DBC_REQUIRE(count < MAX_EVENTS);
+
+ for (i = 0; i < count; i++)
+ sync_events[i] = anotifications[i]->handle;
+
+ status = sync_wait_on_multiple_events(sync_events, count, utimeout,
+ pu_index);
+
+ return status;
+
+}
diff --git a/drivers/staging/tidspbridge/rmgr/nldr.c b/drivers/staging/tidspbridge/rmgr/nldr.c
new file mode 100644
index 0000000..d0138af
--- /dev/null
+++ b/drivers/staging/tidspbridge/rmgr/nldr.c
@@ -0,0 +1,1999 @@
+/*
+ * nldr.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * DSP/BIOS Bridge dynamic + overlay Node loader.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#include <dspbridge/host_os.h>
+
+#include <dspbridge/std.h>
+#include <dspbridge/dbdefs.h>
+
+#include <dspbridge/dbc.h>
+
+/* Platform manager */
+#include <dspbridge/cod.h>
+#include <dspbridge/dev.h>
+
+/* Resource manager */
+#include <dspbridge/dbll.h>
+#include <dspbridge/dbdcd.h>
+#include <dspbridge/rmm.h>
+#include <dspbridge/uuidutil.h>
+
+#include <dspbridge/nldr.h>
+
+/* Name of section containing dynamic load mem */
+#define DYNMEMSECT ".dspbridge_mem"
+
+/* Name of section containing dependent library information */
+#define DEPLIBSECT ".dspbridge_deplibs"
+
+/* Max depth of recursion for loading node's dependent libraries */
+#define MAXDEPTH 5
+
+/* Max number of persistent libraries kept by a node */
+#define MAXLIBS 5
+
+/*
+ * Defines for extracting packed dynamic load memory requirements from two
+ * masks.
+ * These defines must match node.cdb and dynm.cdb
+ * Format of data/code mask is:
+ * uuuuuuuu|fueeeeee|fudddddd|fucccccc|
+ * where
+ * u = unused
+ * cccccc = preferred/required dynamic mem segid for create phase data/code
+ * dddddd = preferred/required dynamic mem segid for delete phase data/code
+ * eeeeee = preferred/required dynamic mem segid for execute phase data/code
+ * f = flag indicating if memory is preferred or required:
+ * f = 1 if required, f = 0 if preferred.
+ *
+ * The 6 bits of the segid are interpreted as follows:
+ *
+ * If the 6th bit (bit 5) is not set, then this specifies a memory segment
+ * between 0 and 31 (a maximum of 32 dynamic loading memory segments).
+ * If the 6th bit (bit 5) is set, segid has the following interpretation:
+ * segid = 32 - Any internal memory segment can be used.
+ * segid = 33 - Any external memory segment can be used.
+ * segid = 63 - Any memory segment can be used (in this case the
+ * required/preferred flag is irrelevant).
+ *
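+ * Illustrative example (mask value chosen here, not taken from node.cdb):
+ * a data mask of 0x000000a1 yields a create phase data segid of
+ * (0xa1 >> CREATEBIT) & SEGMASK = 33 (any external memory segment), and
+ * bit (CREATEBIT + FLAGBIT) is set, so that segment is required rather
+ * than preferred. nldr_allocate() below does the actual unpacking.
+ *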
+ */
+/* Maximum allowed dynamic loading memory segments */
+#define MAXMEMSEGS 32
+
+#define MAXSEGID 3 /* Largest possible (real) segid */
+#define MEMINTERNALID 32 /* Segid meaning use internal mem */
+#define MEMEXTERNALID 33 /* Segid meaning use external mem */
+#define NULLID 63 /* Segid meaning no memory req/pref */
+#define FLAGBIT 7 /* 7th bit is pref./req. flag */
+#define SEGMASK 0x3f /* Bits 0 - 5 */
+
+#define CREATEBIT 0 /* Create segid starts at bit 0 */
+#define DELETEBIT 8 /* Delete segid starts at bit 8 */
+#define EXECUTEBIT 16 /* Execute segid starts at bit 16 */
+
+/*
+ * Masks that define memory type. Must match defines in dynm.cdb.
+ */
+#define DYNM_CODE 0x2
+#define DYNM_DATA 0x4
+#define DYNM_CODEDATA (DYNM_CODE | DYNM_DATA)
+#define DYNM_INTERNAL 0x8
+#define DYNM_EXTERNAL 0x10
+
+/*
+ * Defines for packing memory requirement/preference flags for code and
+ * data of each of the node's phases into one mask.
+ * The bit is set if the segid is required for loading code/data of the
+ * given phase. The bit is not set, if the segid is preferred only.
+ *
+ * These defines are also used as indices into a segid array for the node.
+ * e.g. the node's seg_id[CREATEDATAFLAGBIT] is the memory segment id that the
+ * create phase data is required or preferred to be loaded into.
+ */
+#define CREATEDATAFLAGBIT 0
+#define CREATECODEFLAGBIT 1
+#define EXECUTEDATAFLAGBIT 2
+#define EXECUTECODEFLAGBIT 3
+#define DELETEDATAFLAGBIT 4
+#define DELETECODEFLAGBIT 5
+#define MAXFLAGS 6
+
+#define IS_INTERNAL(nldr_obj, segid) (((segid) <= MAXSEGID && \
+ nldr_obj->seg_table[(segid)] & DYNM_INTERNAL) || \
+ (segid) == MEMINTERNALID)
+
+#define IS_EXTERNAL(nldr_obj, segid) (((segid) <= MAXSEGID && \
+ nldr_obj->seg_table[(segid)] & DYNM_EXTERNAL) || \
+ (segid) == MEMEXTERNALID)
+
+#define SWAPLONG(x) ((((x) << 24) & 0xFF000000) | (((x) << 8) & 0xFF0000L) | \
+ (((x) >> 8) & 0xFF00L) | (((x) >> 24) & 0xFF))
+
+#define SWAPWORD(x) ((((x) << 8) & 0xFF00) | (((x) >> 8) & 0xFF))
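+
+/* Illustrative examples (arbitrary values): SWAPWORD(0x1234) == 0x3412,
+ * SWAPLONG(0x12345678) == 0x78563412. */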
+
+ /*
+ * These names may be embedded in overlay sections to identify the
+ * node phase for which the section should be overlaid.
+ */
+#define PCREATE "create"
+#define PDELETE "delete"
+#define PEXECUTE "execute"
+
+#define IS_EQUAL_UUID(uuid1, uuid2) (\
+ ((uuid1).ul_data1 == (uuid2).ul_data1) && \
+ ((uuid1).us_data2 == (uuid2).us_data2) && \
+ ((uuid1).us_data3 == (uuid2).us_data3) && \
+ ((uuid1).uc_data4 == (uuid2).uc_data4) && \
+ ((uuid1).uc_data5 == (uuid2).uc_data5) && \
+ (strncmp((void *)(uuid1).uc_data6, (void *)(uuid2).uc_data6, 6)) == 0)
+
+ /*
+ * ======== mem_seg_info ========
+ * Format of dynamic loading memory segment info in coff file.
+ * Must match dynm.h55.
+ */
+struct mem_seg_info {
+ u32 segid; /* Dynamic loading memory segment number */
+ u32 base;
+ u32 len;
+ u32 type; /* Mask of DYNM_CODE, DYNM_INTERNAL, etc. */
+};
+
+/*
+ * ======== lib_node ========
+ * For maintaining a tree of library dependencies.
+ */
+struct lib_node {
+ struct dbll_library_obj *lib; /* The library */
+ u16 dep_libs; /* Number of dependent libraries */
+ struct lib_node *dep_libs_tree; /* Dependent libraries of lib */
+};
+
+/*
+ * ======== ovly_sect ========
+ * Information needed to overlay a section.
+ */
+struct ovly_sect {
+ struct ovly_sect *next_sect;
+ u32 sect_load_addr; /* Load address of section */
+ u32 sect_run_addr; /* Run address of section */
+ u32 size; /* Size of section */
+ u16 page; /* DBL_CODE, DBL_DATA */
+};
+
+/*
+ * ======== ovly_node ========
+ * For maintaining a list of overlay nodes, with sections that need to be
+ * overlayed for each of the nodes phases.
+ */
+struct ovly_node {
+ struct dsp_uuid uuid;
+ char *node_name;
+ struct ovly_sect *create_sects_list;
+ struct ovly_sect *delete_sects_list;
+ struct ovly_sect *execute_sects_list;
+ struct ovly_sect *other_sects_list;
+ u16 create_sects;
+ u16 delete_sects;
+ u16 execute_sects;
+ u16 other_sects;
+ u16 create_ref;
+ u16 delete_ref;
+ u16 execute_ref;
+ u16 other_ref;
+};
+
+/*
+ * ======== nldr_object ========
+ * Overlay loader object.
+ */
+struct nldr_object {
+ struct dev_object *hdev_obj; /* Device object */
+ struct dcd_manager *hdcd_mgr; /* Proc/Node data manager */
+ struct dbll_tar_obj *dbll; /* The DBL loader */
+ struct dbll_library_obj *base_lib; /* Base image library */
+ struct rmm_target_obj *rmm; /* Remote memory manager for DSP */
+ struct dbll_fxns ldr_fxns; /* Loader function table */
+ struct dbll_attrs ldr_attrs; /* attrs to pass to loader functions */
+ nldr_ovlyfxn ovly_fxn; /* "write" for overlay nodes */
+ nldr_writefxn write_fxn; /* "write" for dynamic nodes */
+ struct ovly_node *ovly_table; /* Table of overlay nodes */
+ u16 ovly_nodes; /* Number of overlay nodes in base */
+ u16 ovly_nid; /* Index for tracking overlay nodes */
+ u16 dload_segs; /* Number of dynamic load mem segs */
+ u32 *seg_table; /* memtypes of dynamic memory segs
+ * indexed by segid
+ */
+ u16 us_dsp_mau_size; /* Size of DSP MAU */
+ u16 us_dsp_word_size; /* Size of DSP word */
+};
+
+/*
+ * ======== nldr_nodeobject ========
+ * Dynamic node object. This object is created when a node is allocated.
+ */
+struct nldr_nodeobject {
+ struct nldr_object *nldr_obj; /* Dynamic loader handle */
+ void *priv_ref; /* Handle to pass to dbl_write_fxn */
+ struct dsp_uuid uuid; /* Node's UUID */
+ bool dynamic; /* Dynamically loaded node? */
+ bool overlay; /* Overlay node? */
+ bool *pf_phase_split; /* Multiple phase libraries? */
+ struct lib_node root; /* Library containing node phase */
+ struct lib_node create_lib; /* Library with create phase lib */
+ struct lib_node execute_lib; /* Library with execute phase lib */
+ struct lib_node delete_lib; /* Library with delete phase lib */
+ /* libs remain loaded until Delete */
+ struct lib_node pers_lib_table[MAXLIBS];
+ s32 pers_libs; /* Number of persistent libraries */
+ /* Path in lib dependency tree */
+ struct dbll_library_obj *lib_path[MAXDEPTH + 1];
+ enum nldr_phase phase; /* Node phase currently being loaded */
+
+ /*
+ * Dynamic loading memory segments for data and code of each phase.
+ */
+ u16 seg_id[MAXFLAGS];
+
+ /*
+ * Mask indicating whether each mem segment specified in seg_id[]
+ * is preferred or required.
+ * For example
+ * if (code_data_flag_mask & (1 << EXECUTEDATAFLAGBIT)) != 0,
+ * then it is required to load execute phase data into the memory
+ * specified by seg_id[EXECUTEDATAFLAGBIT].
+ */
+ u32 code_data_flag_mask;
+};
+
+/* Dynamic loader function table */
+static struct dbll_fxns ldr_fxns = {
+ (dbll_close_fxn) dbll_close,
+ (dbll_create_fxn) dbll_create,
+ (dbll_delete_fxn) dbll_delete,
+ (dbll_exit_fxn) dbll_exit,
+ (dbll_get_attrs_fxn) dbll_get_attrs,
+ (dbll_get_addr_fxn) dbll_get_addr,
+ (dbll_get_c_addr_fxn) dbll_get_c_addr,
+ (dbll_get_sect_fxn) dbll_get_sect,
+ (dbll_init_fxn) dbll_init,
+ (dbll_load_fxn) dbll_load,
+ (dbll_load_sect_fxn) dbll_load_sect,
+ (dbll_open_fxn) dbll_open,
+ (dbll_read_sect_fxn) dbll_read_sect,
+ (dbll_set_attrs_fxn) dbll_set_attrs,
+ (dbll_unload_fxn) dbll_unload,
+ (dbll_unload_sect_fxn) dbll_unload_sect,
+};
+
+static u32 refs; /* module reference count */
+
+static int add_ovly_info(void *handle, struct dbll_sect_info *sect_info,
+ u32 addr, u32 bytes);
+static int add_ovly_node(struct dsp_uuid *uuid_obj,
+ enum dsp_dcdobjtype obj_type, IN void *handle);
+static int add_ovly_sect(struct nldr_object *nldr_obj,
+ struct ovly_sect **pList,
+ struct dbll_sect_info *pSectInfo,
+ bool *pExists, u32 addr, u32 bytes);
+static s32 fake_ovly_write(void *handle, u32 dspAddr, void *buf, u32 bytes,
+ s32 mtype);
+static void free_sects(struct nldr_object *nldr_obj,
+ struct ovly_sect *phase_sects, u16 alloc_num);
+static bool get_symbol_value(void *handle, void *parg, void *rmm_handle,
+ char *symName, struct dbll_sym_val **sym);
+static int load_lib(struct nldr_nodeobject *nldr_node_obj,
+ struct lib_node *root, struct dsp_uuid uuid,
+ bool rootPersistent,
+ struct dbll_library_obj **lib_path,
+ enum nldr_phase phase, u16 depth);
+static int load_ovly(struct nldr_nodeobject *nldr_node_obj,
+ enum nldr_phase phase);
+static int remote_alloc(void **pRef, u16 mem_sect_type, u32 size,
+ u32 align, u32 *dspAddr, OPTIONAL s32 segmentId,
+ OPTIONAL s32 req, bool reserve);
+static int remote_free(void **pRef, u16 space, u32 dspAddr, u32 size,
+ bool reserve);
+
+static void unload_lib(struct nldr_nodeobject *nldr_node_obj,
+ struct lib_node *root);
+static void unload_ovly(struct nldr_nodeobject *nldr_node_obj,
+ enum nldr_phase phase);
+static bool find_in_persistent_lib_array(struct nldr_nodeobject *nldr_node_obj,
+ struct dbll_library_obj *lib);
+static u32 find_lcm(u32 a, u32 b);
+static u32 find_gcf(u32 a, u32 b);
+
+/*
+ * ======== nldr_allocate ========
+ */
+int nldr_allocate(struct nldr_object *nldr_obj, void *priv_ref,
+ IN CONST struct dcd_nodeprops *node_props,
+ OUT struct nldr_nodeobject **phNldrNode,
+ IN bool *pf_phase_split)
+{
+ struct nldr_nodeobject *nldr_node_obj = NULL;
+ int status = 0;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(node_props != NULL);
+ DBC_REQUIRE(phNldrNode != NULL);
+ DBC_REQUIRE(nldr_obj);
+
+ /* Initialize handle in case of failure */
+ *phNldrNode = NULL;
+ /* Allocate node object */
+ nldr_node_obj = kzalloc(sizeof(struct nldr_nodeobject), GFP_KERNEL);
+
+ if (nldr_node_obj == NULL) {
+ status = -ENOMEM;
+ } else {
+ nldr_node_obj->pf_phase_split = pf_phase_split;
+ nldr_node_obj->pers_libs = 0;
+ nldr_node_obj->nldr_obj = nldr_obj;
+ nldr_node_obj->priv_ref = priv_ref;
+ /* Save node's UUID. */
+ nldr_node_obj->uuid = node_props->ndb_props.ui_node_id;
+ /*
+ * Determine if node is a dynamically loaded node from
+ * ndb_props.
+ */
+ if (node_props->us_load_type == NLDR_DYNAMICLOAD) {
+ /* Dynamic node */
+ nldr_node_obj->dynamic = true;
+ /*
+ * Extract memory requirements from ndb_props masks
+ */
+ /* Create phase */
+ nldr_node_obj->seg_id[CREATEDATAFLAGBIT] = (u16)
+ (node_props->ul_data_mem_seg_mask >> CREATEBIT) &
+ SEGMASK;
+ nldr_node_obj->code_data_flag_mask |=
+ ((node_props->ul_data_mem_seg_mask >>
+ (CREATEBIT + FLAGBIT)) & 1) << CREATEDATAFLAGBIT;
+ nldr_node_obj->seg_id[CREATECODEFLAGBIT] = (u16)
+ (node_props->ul_code_mem_seg_mask >>
+ CREATEBIT) & SEGMASK;
+ nldr_node_obj->code_data_flag_mask |=
+ ((node_props->ul_code_mem_seg_mask >>
+ (CREATEBIT + FLAGBIT)) & 1) << CREATECODEFLAGBIT;
+ /* Execute phase */
+ nldr_node_obj->seg_id[EXECUTEDATAFLAGBIT] = (u16)
+ (node_props->ul_data_mem_seg_mask >>
+ EXECUTEBIT) & SEGMASK;
+ nldr_node_obj->code_data_flag_mask |=
+ ((node_props->ul_data_mem_seg_mask >>
+ (EXECUTEBIT + FLAGBIT)) & 1) <<
+ EXECUTEDATAFLAGBIT;
+ nldr_node_obj->seg_id[EXECUTECODEFLAGBIT] = (u16)
+ (node_props->ul_code_mem_seg_mask >>
+ EXECUTEBIT) & SEGMASK;
+ nldr_node_obj->code_data_flag_mask |=
+ ((node_props->ul_code_mem_seg_mask >>
+ (EXECUTEBIT + FLAGBIT)) & 1) <<
+ EXECUTECODEFLAGBIT;
+ /* Delete phase */
+ nldr_node_obj->seg_id[DELETEDATAFLAGBIT] = (u16)
+ (node_props->ul_data_mem_seg_mask >> DELETEBIT) &
+ SEGMASK;
+ nldr_node_obj->code_data_flag_mask |=
+ ((node_props->ul_data_mem_seg_mask >>
+ (DELETEBIT + FLAGBIT)) & 1) << DELETEDATAFLAGBIT;
+ nldr_node_obj->seg_id[DELETECODEFLAGBIT] = (u16)
+ (node_props->ul_code_mem_seg_mask >>
+ DELETEBIT) & SEGMASK;
+ nldr_node_obj->code_data_flag_mask |=
+ ((node_props->ul_code_mem_seg_mask >>
+ (DELETEBIT + FLAGBIT)) & 1) << DELETECODEFLAGBIT;
+ } else {
+ /* Non-dynamically loaded nodes are part of the
+ * base image */
+ nldr_node_obj->root.lib = nldr_obj->base_lib;
+ /* Check for overlay node */
+ if (node_props->us_load_type == NLDR_OVLYLOAD)
+ nldr_node_obj->overlay = true;
+
+ }
+ *phNldrNode = (struct nldr_nodeobject *)nldr_node_obj;
+ }
+ /* Cleanup on failure */
+ if (DSP_FAILED(status) && nldr_node_obj)
+ kfree(nldr_node_obj);
+
+ DBC_ENSURE((DSP_SUCCEEDED(status) && *phNldrNode)
+ || (DSP_FAILED(status) && *phNldrNode == NULL));
+ return status;
+}
+
+/*
+ * ======== nldr_create ========
+ */
+int nldr_create(OUT struct nldr_object **phNldr,
+ struct dev_object *hdev_obj,
+ IN CONST struct nldr_attrs *pattrs)
+{
+ struct cod_manager *cod_mgr; /* COD manager */
+ char *psz_coff_buf = NULL;
+ char sz_zl_file[COD_MAXPATHLENGTH];
+ struct nldr_object *nldr_obj = NULL;
+ struct dbll_attrs save_attrs;
+ struct dbll_attrs new_attrs;
+ dbll_flags flags;
+ u32 ul_entry;
+ u16 dload_segs = 0;
+ struct mem_seg_info *mem_info_obj;
+ u32 ul_len = 0;
+ u32 ul_addr;
+ struct rmm_segment *rmm_segs = NULL;
+ u16 i;
+ int status = 0;
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(phNldr != NULL);
+ DBC_REQUIRE(hdev_obj != NULL);
+ DBC_REQUIRE(pattrs != NULL);
+ DBC_REQUIRE(pattrs->pfn_ovly != NULL);
+ DBC_REQUIRE(pattrs->pfn_write != NULL);
+
+ /* Allocate dynamic loader object */
+ nldr_obj = kzalloc(sizeof(struct nldr_object), GFP_KERNEL);
+ if (nldr_obj) {
+ nldr_obj->hdev_obj = hdev_obj;
+ /* warning, lazy status checking alert! */
+ dev_get_cod_mgr(hdev_obj, &cod_mgr);
+ if (cod_mgr) {
+ status = cod_get_loader(cod_mgr, &nldr_obj->dbll);
+ DBC_ASSERT(DSP_SUCCEEDED(status));
+ status = cod_get_base_lib(cod_mgr, &nldr_obj->base_lib);
+ DBC_ASSERT(DSP_SUCCEEDED(status));
+ status =
+ cod_get_base_name(cod_mgr, sz_zl_file,
+ COD_MAXPATHLENGTH);
+ DBC_ASSERT(DSP_SUCCEEDED(status));
+ }
+ status = 0;
+ /* end lazy status checking */
+ nldr_obj->us_dsp_mau_size = pattrs->us_dsp_mau_size;
+ nldr_obj->us_dsp_word_size = pattrs->us_dsp_word_size;
+ nldr_obj->ldr_fxns = ldr_fxns;
+ if (!(nldr_obj->ldr_fxns.init_fxn()))
+ status = -ENOMEM;
+
+ } else {
+ status = -ENOMEM;
+ }
+ /* Create the DCD Manager */
+ if (DSP_SUCCEEDED(status))
+ status = dcd_create_manager(NULL, &nldr_obj->hdcd_mgr);
+
+ /* Get dynamic loading memory sections from base lib */
+ if (DSP_SUCCEEDED(status)) {
+ status =
+ nldr_obj->ldr_fxns.get_sect_fxn(nldr_obj->base_lib,
+ DYNMEMSECT, &ul_addr,
+ &ul_len);
+ if (DSP_SUCCEEDED(status)) {
+ psz_coff_buf =
+ kzalloc(ul_len * nldr_obj->us_dsp_mau_size,
+ GFP_KERNEL);
+ if (!psz_coff_buf)
+ status = -ENOMEM;
+ } else {
+ /* Ok to not have dynamic loading memory */
+ status = 0;
+ ul_len = 0;
+ dev_dbg(bridge, "%s: failed - no dynamic loading mem "
+ "segments: 0x%x\n", __func__, status);
+ }
+ }
+ if (DSP_SUCCEEDED(status) && ul_len > 0) {
+ /* Read section containing dynamic load mem segments */
+ status =
+ nldr_obj->ldr_fxns.read_sect_fxn(nldr_obj->base_lib,
+ DYNMEMSECT, psz_coff_buf,
+ ul_len);
+ }
+ if (DSP_SUCCEEDED(status) && ul_len > 0) {
+ /* Parse memory segment data */
+ dload_segs = (u16) (*((u32 *) psz_coff_buf));
+ if (dload_segs > MAXMEMSEGS)
+ status = -EBADF;
+ }
+ /* Parse dynamic load memory segments */
+ if (DSP_SUCCEEDED(status) && dload_segs > 0) {
+ rmm_segs = kzalloc(sizeof(struct rmm_segment) * dload_segs,
+ GFP_KERNEL);
+ nldr_obj->seg_table =
+ kzalloc(sizeof(u32) * dload_segs, GFP_KERNEL);
+ if (rmm_segs == NULL || nldr_obj->seg_table == NULL) {
+ status = -ENOMEM;
+ } else {
+ nldr_obj->dload_segs = dload_segs;
+ mem_info_obj = (struct mem_seg_info *)(psz_coff_buf +
+ sizeof(u32));
+ for (i = 0; i < dload_segs; i++) {
+ rmm_segs[i].base = (mem_info_obj + i)->base;
+ rmm_segs[i].length = (mem_info_obj + i)->len;
+ rmm_segs[i].space = 0;
+ nldr_obj->seg_table[i] =
+ (mem_info_obj + i)->type;
+ dev_dbg(bridge,
+ "(proc) DLL MEMSEGMENT: %d, "
+ "Base: 0x%x, Length: 0x%x\n", i,
+ rmm_segs[i].base, rmm_segs[i].length);
+ }
+ }
+ }
+ /* Create Remote memory manager */
+ if (DSP_SUCCEEDED(status))
+ status = rmm_create(&nldr_obj->rmm, rmm_segs, dload_segs);
+
+ if (DSP_SUCCEEDED(status)) {
+ /* set the alloc, free, write functions for loader */
+ nldr_obj->ldr_fxns.get_attrs_fxn(nldr_obj->dbll, &save_attrs);
+ new_attrs = save_attrs;
+ new_attrs.alloc = (dbll_alloc_fxn) remote_alloc;
+ new_attrs.free = (dbll_free_fxn) remote_free;
+ new_attrs.sym_lookup = (dbll_sym_lookup) get_symbol_value;
+ new_attrs.sym_handle = nldr_obj;
+ new_attrs.write = (dbll_write_fxn) pattrs->pfn_write;
+ nldr_obj->ovly_fxn = pattrs->pfn_ovly;
+ nldr_obj->write_fxn = pattrs->pfn_write;
+ nldr_obj->ldr_attrs = new_attrs;
+ }
+ kfree(rmm_segs);
+
+ kfree(psz_coff_buf);
+
+ /* Get overlay nodes */
+ if (DSP_SUCCEEDED(status)) {
+ status =
+ cod_get_base_name(cod_mgr, sz_zl_file, COD_MAXPATHLENGTH);
+ /* lazy check */
+ DBC_ASSERT(DSP_SUCCEEDED(status));
+ /* First count number of overlay nodes */
+ status =
+ dcd_get_objects(nldr_obj->hdcd_mgr, sz_zl_file,
+ add_ovly_node, (void *)nldr_obj);
+ /* Now build table of overlay nodes */
+ if (DSP_SUCCEEDED(status) && nldr_obj->ovly_nodes > 0) {
+ /* Allocate table for overlay nodes */
+ nldr_obj->ovly_table =
+ kzalloc(sizeof(struct ovly_node) *
+ nldr_obj->ovly_nodes, GFP_KERNEL);
+ /* Put overlay nodes in the table */
+ nldr_obj->ovly_nid = 0;
+ status = dcd_get_objects(nldr_obj->hdcd_mgr, sz_zl_file,
+ add_ovly_node,
+ (void *)nldr_obj);
+ }
+ }
+ /* Do a fake reload of the base image to get overlay section info */
+ if (DSP_SUCCEEDED(status) && nldr_obj->ovly_nodes > 0) {
+ save_attrs.write = fake_ovly_write;
+ save_attrs.log_write = add_ovly_info;
+ save_attrs.log_write_handle = nldr_obj;
+ flags = DBLL_CODE | DBLL_DATA | DBLL_SYMB;
+ status = nldr_obj->ldr_fxns.load_fxn(nldr_obj->base_lib, flags,
+ &save_attrs, &ul_entry);
+ }
+ if (DSP_SUCCEEDED(status)) {
+ *phNldr = (struct nldr_object *)nldr_obj;
+ } else {
+ if (nldr_obj)
+ nldr_delete((struct nldr_object *)nldr_obj);
+
+ *phNldr = NULL;
+ }
+	/* FIXME: temporary fix, must be removed */
+ DBC_ENSURE((DSP_SUCCEEDED(status) && *phNldr)
+ || (DSP_FAILED(status) && (*phNldr == NULL)));
+ return status;
+}
+
+/*
+ * ======== nldr_delete ========
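+ * Free the resources allocated in nldr_create: the remote memory
+ * manager, the segment table, the DCD manager and the overlay tables.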
+ */
+void nldr_delete(struct nldr_object *nldr_obj)
+{
+ struct ovly_sect *ovly_section;
+ struct ovly_sect *next;
+ u16 i;
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(nldr_obj);
+
+ nldr_obj->ldr_fxns.exit_fxn();
+ if (nldr_obj->rmm)
+ rmm_delete(nldr_obj->rmm);
+
+ kfree(nldr_obj->seg_table);
+
+ if (nldr_obj->hdcd_mgr)
+ dcd_destroy_manager(nldr_obj->hdcd_mgr);
+
+ /* Free overlay node information */
+ if (nldr_obj->ovly_table) {
+ for (i = 0; i < nldr_obj->ovly_nodes; i++) {
+ ovly_section =
+ nldr_obj->ovly_table[i].create_sects_list;
+ while (ovly_section) {
+ next = ovly_section->next_sect;
+ kfree(ovly_section);
+ ovly_section = next;
+ }
+ ovly_section =
+ nldr_obj->ovly_table[i].delete_sects_list;
+ while (ovly_section) {
+ next = ovly_section->next_sect;
+ kfree(ovly_section);
+ ovly_section = next;
+ }
+ ovly_section =
+ nldr_obj->ovly_table[i].execute_sects_list;
+ while (ovly_section) {
+ next = ovly_section->next_sect;
+ kfree(ovly_section);
+ ovly_section = next;
+ }
+ ovly_section = nldr_obj->ovly_table[i].other_sects_list;
+ while (ovly_section) {
+ next = ovly_section->next_sect;
+ kfree(ovly_section);
+ ovly_section = next;
+ }
+ }
+ kfree(nldr_obj->ovly_table);
+ }
+ kfree(nldr_obj);
+}
+
+/*
+ * ======== nldr_exit ========
+ * Discontinue usage of NLDR module.
+ */
+void nldr_exit(void)
+{
+ DBC_REQUIRE(refs > 0);
+
+ refs--;
+
+ if (refs == 0)
+ rmm_exit();
+
+ DBC_ENSURE(refs >= 0);
+}
+
+/*
+ * ======== nldr_get_fxn_addr ========
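+ * Look up a function symbol for the node's current phase: first in the
+ * phase (or root) library, then in its dependent libraries, and finally
+ * in any persistent libraries.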
+ */
+int nldr_get_fxn_addr(struct nldr_nodeobject *nldr_node_obj,
+ char *pstrFxn, u32 * pulAddr)
+{
+ struct dbll_sym_val *dbll_sym;
+ struct nldr_object *nldr_obj;
+ int status = 0;
+ bool status1 = false;
+ s32 i = 0;
+ struct lib_node root = { NULL, 0, NULL };
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(nldr_node_obj);
+ DBC_REQUIRE(pulAddr != NULL);
+ DBC_REQUIRE(pstrFxn != NULL);
+
+ nldr_obj = nldr_node_obj->nldr_obj;
+ /* Called from node_create(), node_delete(), or node_run(). */
+ if (nldr_node_obj->dynamic && *nldr_node_obj->pf_phase_split) {
+ switch (nldr_node_obj->phase) {
+ case NLDR_CREATE:
+ root = nldr_node_obj->create_lib;
+ break;
+ case NLDR_EXECUTE:
+ root = nldr_node_obj->execute_lib;
+ break;
+ case NLDR_DELETE:
+ root = nldr_node_obj->delete_lib;
+ break;
+ default:
+ DBC_ASSERT(false);
+ break;
+ }
+ } else {
+ /* for Overlay nodes or non-split Dynamic nodes */
+ root = nldr_node_obj->root;
+ }
+ status1 =
+ nldr_obj->ldr_fxns.get_c_addr_fxn(root.lib, pstrFxn, &dbll_sym);
+ if (!status1)
+ status1 =
+ nldr_obj->ldr_fxns.get_addr_fxn(root.lib, pstrFxn,
+ &dbll_sym);
+
+ /* If symbol not found, check dependent libraries */
+ if (!status1) {
+ for (i = 0; i < root.dep_libs; i++) {
+ status1 =
+ nldr_obj->ldr_fxns.get_addr_fxn(root.dep_libs_tree
+ [i].lib, pstrFxn,
+ &dbll_sym);
+ if (!status1) {
+ status1 =
+ nldr_obj->ldr_fxns.
+ get_c_addr_fxn(root.dep_libs_tree[i].lib,
+ pstrFxn, &dbll_sym);
+ }
+ if (status1) {
+ /* Symbol found */
+ break;
+ }
+ }
+ }
+ /* Check persistent libraries */
+ if (!status1) {
+ for (i = 0; i < nldr_node_obj->pers_libs; i++) {
+ status1 =
+ nldr_obj->ldr_fxns.
+ get_addr_fxn(nldr_node_obj->pers_lib_table[i].lib,
+ pstrFxn, &dbll_sym);
+ if (!status1) {
+ status1 =
+ nldr_obj->ldr_fxns.
+ get_c_addr_fxn(nldr_node_obj->pers_lib_table
+ [i].lib, pstrFxn, &dbll_sym);
+ }
+ if (status1) {
+ /* Symbol found */
+ break;
+ }
+ }
+ }
+
+ if (status1)
+ *pulAddr = dbll_sym->value;
+ else
+ status = -ESPIPE;
+
+ return status;
+}
+
+/*
+ * ======== nldr_get_rmm_manager ========
+ * Given a NLDR object, retrieve RMM Manager Handle
+ */
+int nldr_get_rmm_manager(struct nldr_object *hNldrObject,
+ OUT struct rmm_target_obj **phRmmMgr)
+{
+ int status = 0;
+ struct nldr_object *nldr_obj = hNldrObject;
+ DBC_REQUIRE(phRmmMgr != NULL);
+
+ if (hNldrObject) {
+ *phRmmMgr = nldr_obj->rmm;
+ } else {
+ *phRmmMgr = NULL;
+ status = -EFAULT;
+ }
+
+ DBC_ENSURE(DSP_SUCCEEDED(status) || ((phRmmMgr != NULL) &&
+ (*phRmmMgr == NULL)));
+
+ return status;
+}
+
+/*
+ * ======== nldr_init ========
+ * Initialize the NLDR module.
+ */
+bool nldr_init(void)
+{
+ DBC_REQUIRE(refs >= 0);
+
+ if (refs == 0)
+ rmm_init();
+
+ refs++;
+
+ DBC_ENSURE(refs > 0);
+ return true;
+}
+
+/*
+ * ======== nldr_load ========
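+ * Load the create, execute or delete phase library of a dynamic node,
+ * or the overlay sections of an overlay node, for the given phase.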
+ */
+int nldr_load(struct nldr_nodeobject *nldr_node_obj,
+ enum nldr_phase phase)
+{
+ struct nldr_object *nldr_obj;
+ struct dsp_uuid lib_uuid;
+ int status = 0;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(nldr_node_obj);
+
+ nldr_obj = nldr_node_obj->nldr_obj;
+
+ if (nldr_node_obj->dynamic) {
+ nldr_node_obj->phase = phase;
+
+ lib_uuid = nldr_node_obj->uuid;
+
+ /* At this point, we may not know if node is split into
+ * different libraries. So we'll go ahead and load the
+ * library, and then save the pointer to the appropriate
+ * location after we know. */
+
+ status =
+ load_lib(nldr_node_obj, &nldr_node_obj->root, lib_uuid,
+ false, nldr_node_obj->lib_path, phase, 0);
+
+ if (DSP_SUCCEEDED(status)) {
+ if (*nldr_node_obj->pf_phase_split) {
+ switch (phase) {
+ case NLDR_CREATE:
+ nldr_node_obj->create_lib =
+ nldr_node_obj->root;
+ break;
+
+ case NLDR_EXECUTE:
+ nldr_node_obj->execute_lib =
+ nldr_node_obj->root;
+ break;
+
+ case NLDR_DELETE:
+ nldr_node_obj->delete_lib =
+ nldr_node_obj->root;
+ break;
+
+ default:
+ DBC_ASSERT(false);
+ break;
+ }
+ }
+ }
+ } else {
+ if (nldr_node_obj->overlay)
+ status = load_ovly(nldr_node_obj, phase);
+
+ }
+
+ return status;
+}
+
+/*
+ * ======== nldr_unload ========
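+ * Unload the library loaded for the given phase (persistent libraries
+ * are unloaded in the delete phase), or the overlay sections of an
+ * overlay node.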
+ */
+int nldr_unload(struct nldr_nodeobject *nldr_node_obj,
+ enum nldr_phase phase)
+{
+ int status = 0;
+ struct lib_node *root_lib = NULL;
+ s32 i = 0;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(nldr_node_obj);
+
+ if (nldr_node_obj != NULL) {
+ if (nldr_node_obj->dynamic) {
+ if (*nldr_node_obj->pf_phase_split) {
+ switch (phase) {
+ case NLDR_CREATE:
+ root_lib = &nldr_node_obj->create_lib;
+ break;
+ case NLDR_EXECUTE:
+ root_lib = &nldr_node_obj->execute_lib;
+ break;
+ case NLDR_DELETE:
+ root_lib = &nldr_node_obj->delete_lib;
+ /* Unload persistent libraries */
+ for (i = 0;
+ i < nldr_node_obj->pers_libs;
+ i++) {
+ unload_lib(nldr_node_obj,
+ &nldr_node_obj->
+ pers_lib_table[i]);
+ }
+ nldr_node_obj->pers_libs = 0;
+ break;
+ default:
+ DBC_ASSERT(false);
+ break;
+ }
+ } else {
+ /* Unload main library */
+ root_lib = &nldr_node_obj->root;
+ }
+ if (root_lib)
+ unload_lib(nldr_node_obj, root_lib);
+ } else {
+ if (nldr_node_obj->overlay)
+ unload_ovly(nldr_node_obj, phase);
+
+ }
+ }
+ return status;
+}
+
+/*
+ * ======== add_ovly_info ========
+ */
+static int add_ovly_info(void *handle, struct dbll_sect_info *sect_info,
+ u32 addr, u32 bytes)
+{
+ char *node_name;
+ char *sect_name = (char *)sect_info->name;
+ bool sect_exists = false;
+ char seps = ':';
+ char *pch;
+ u16 i;
+ struct nldr_object *nldr_obj = (struct nldr_object *)handle;
+ int status = 0;
+
+ /* Is this an overlay section (load address != run address)? */
+ if (sect_info->sect_load_addr == sect_info->sect_run_addr)
+ goto func_end;
+
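+	/*
+	 * Overlay section names appear to follow the convention
+	 * "<sep><node_name>:<phase>"; skip the leading separator when
+	 * matching the node name, then parse the phase suffix below.
+	 */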
+ /* Find the node it belongs to */
+ for (i = 0; i < nldr_obj->ovly_nodes; i++) {
+ node_name = nldr_obj->ovly_table[i].node_name;
+ DBC_REQUIRE(node_name);
+ if (strncmp(node_name, sect_name + 1, strlen(node_name)) == 0) {
+ /* Found the node */
+ break;
+ }
+ }
+ if (!(i < nldr_obj->ovly_nodes))
+ goto func_end;
+
+ /* Determine which phase this section belongs to */
+ for (pch = sect_name + 1; *pch && *pch != seps; pch++)
+		;
+
+ if (*pch) {
+ pch++; /* Skip over the ':' */
+ if (strncmp(pch, PCREATE, strlen(PCREATE)) == 0) {
+ status =
+ add_ovly_sect(nldr_obj,
+ &nldr_obj->
+ ovly_table[i].create_sects_list,
+					  sect_info, &sect_exists, addr, bytes);
+ if (DSP_SUCCEEDED(status) && !sect_exists)
+ nldr_obj->ovly_table[i].create_sects++;
+
+ } else if (strncmp(pch, PDELETE, strlen(PDELETE)) == 0) {
+ status =
+ add_ovly_sect(nldr_obj,
+ &nldr_obj->
+ ovly_table[i].delete_sects_list,
+					  sect_info, &sect_exists, addr, bytes);
+ if (DSP_SUCCEEDED(status) && !sect_exists)
+ nldr_obj->ovly_table[i].delete_sects++;
+
+ } else if (strncmp(pch, PEXECUTE, strlen(PEXECUTE)) == 0) {
+ status =
+ add_ovly_sect(nldr_obj,
+ &nldr_obj->
+ ovly_table[i].execute_sects_list,
+					  sect_info, &sect_exists, addr, bytes);
+ if (DSP_SUCCEEDED(status) && !sect_exists)
+ nldr_obj->ovly_table[i].execute_sects++;
+
+ } else {
+			/* Put in "other" sections */
+ status =
+ add_ovly_sect(nldr_obj,
+ &nldr_obj->
+ ovly_table[i].other_sects_list,
+					  sect_info, &sect_exists, addr, bytes);
+ if (DSP_SUCCEEDED(status) && !sect_exists)
+ nldr_obj->ovly_table[i].other_sects++;
+
+ }
+ }
+func_end:
+ return status;
+}
+
+/*
+ * ======== add_ovly_node =========
+ * Callback function passed to dcd_get_objects.
+ */
+static int add_ovly_node(struct dsp_uuid *uuid_obj,
+ enum dsp_dcdobjtype obj_type, IN void *handle)
+{
+ struct nldr_object *nldr_obj = (struct nldr_object *)handle;
+ char *node_name = NULL;
+ char *pbuf = NULL;
+ u32 len;
+ struct dcd_genericobj obj_def;
+ int status = 0;
+
+ if (obj_type != DSP_DCDNODETYPE)
+ goto func_end;
+
+ status =
+ dcd_get_object_def(nldr_obj->hdcd_mgr, uuid_obj, obj_type,
+ &obj_def);
+ if (DSP_FAILED(status))
+ goto func_end;
+
+ /* If overlay node, add to the list */
+ if (obj_def.obj_data.node_obj.us_load_type == NLDR_OVLYLOAD) {
+ if (nldr_obj->ovly_table == NULL) {
+ nldr_obj->ovly_nodes++;
+ } else {
+ /* Add node to table */
+ nldr_obj->ovly_table[nldr_obj->ovly_nid].uuid =
+ *uuid_obj;
+ DBC_REQUIRE(obj_def.obj_data.node_obj.ndb_props.
+ ac_name);
+ len =
+ strlen(obj_def.obj_data.node_obj.ndb_props.ac_name);
+ node_name = obj_def.obj_data.node_obj.ndb_props.ac_name;
+ pbuf = kzalloc(len + 1, GFP_KERNEL);
+ if (pbuf == NULL) {
+ status = -ENOMEM;
+ } else {
+ strncpy(pbuf, node_name, len);
+ nldr_obj->ovly_table[nldr_obj->ovly_nid].
+ node_name = pbuf;
+ nldr_obj->ovly_nid++;
+ }
+ }
+ }
+ /* These were allocated in dcd_get_object_def */
+ kfree(obj_def.obj_data.node_obj.pstr_create_phase_fxn);
+
+ kfree(obj_def.obj_data.node_obj.pstr_execute_phase_fxn);
+
+ kfree(obj_def.obj_data.node_obj.pstr_delete_phase_fxn);
+
+ kfree(obj_def.obj_data.node_obj.pstr_i_alg_name);
+
+func_end:
+ return status;
+}
+
+/*
+ * ======== add_ovly_sect ========
+ */
+static int add_ovly_sect(struct nldr_object *nldr_obj,
+ struct ovly_sect **pList,
+ struct dbll_sect_info *pSectInfo,
+ bool *pExists, u32 addr, u32 bytes)
+{
+ struct ovly_sect *new_sect = NULL;
+ struct ovly_sect *last_sect;
+ struct ovly_sect *ovly_section;
+ int status = 0;
+
+ ovly_section = last_sect = *pList;
+ *pExists = false;
+ while (ovly_section) {
+ /*
+ * Make sure section has not already been added. Multiple
+ * 'write' calls may be made to load the section.
+ */
+ if (ovly_section->sect_load_addr == addr) {
+ /* Already added */
+ *pExists = true;
+ break;
+ }
+ last_sect = ovly_section;
+ ovly_section = ovly_section->next_sect;
+ }
+
+ if (!ovly_section) {
+ /* New section */
+ new_sect = kzalloc(sizeof(struct ovly_sect), GFP_KERNEL);
+ if (new_sect == NULL) {
+ status = -ENOMEM;
+ } else {
+ new_sect->sect_load_addr = addr;
+ new_sect->sect_run_addr = pSectInfo->sect_run_addr +
+ (addr - pSectInfo->sect_load_addr);
+ new_sect->size = bytes;
+ new_sect->page = pSectInfo->type;
+ }
+
+ /* Add to the list */
+ if (DSP_SUCCEEDED(status)) {
+ if (*pList == NULL) {
+ /* First in the list */
+ *pList = new_sect;
+ } else {
+ last_sect->next_sect = new_sect;
+ }
+ }
+ }
+
+ return status;
+}
+
+/*
+ * ======== fake_ovly_write ========
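+ * No-op write function used during the fake reload of the base image;
+ * it reports the full byte count as written so that only the section
+ * info callbacks (add_ovly_info) are exercised.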
+ */
+static s32 fake_ovly_write(void *handle, u32 dspAddr, void *buf, u32 bytes,
+ s32 mtype)
+{
+ return (s32) bytes;
+}
+
+/*
+ * ======== free_sects ========
+ */
+static void free_sects(struct nldr_object *nldr_obj,
+ struct ovly_sect *phase_sects, u16 alloc_num)
+{
+ struct ovly_sect *ovly_section = phase_sects;
+ u16 i = 0;
+ bool ret;
+
+ while (ovly_section && i < alloc_num) {
+		/* 'Deallocate' reserved memory (segid/page not supported yet) */
+ ret =
+ rmm_free(nldr_obj->rmm, 0, ovly_section->sect_run_addr,
+ ovly_section->size, true);
+ DBC_ASSERT(ret);
+ ovly_section = ovly_section->next_sect;
+ i++;
+ }
+}
+
+/*
+ * ======== get_symbol_value ========
+ * Find symbol in library's base image. If not there, check dependent
+ * libraries.
+ */
+static bool get_symbol_value(void *handle, void *parg, void *rmm_handle,
+ char *name, struct dbll_sym_val **sym)
+{
+ struct nldr_object *nldr_obj = (struct nldr_object *)handle;
+ struct nldr_nodeobject *nldr_node_obj =
+ (struct nldr_nodeobject *)rmm_handle;
+ struct lib_node *root = (struct lib_node *)parg;
+ u16 i;
+ bool status = false;
+
+ /* check the base image */
+ status = nldr_obj->ldr_fxns.get_addr_fxn(nldr_obj->base_lib, name, sym);
+ if (!status)
+ status =
+ nldr_obj->ldr_fxns.get_c_addr_fxn(nldr_obj->base_lib, name,
+ sym);
+
+ /*
+ * Check in root lib itself. If the library consists of
+ * multiple object files linked together, some symbols in the
+ * library may need to be resolved.
+ */
+ if (!status) {
+ status = nldr_obj->ldr_fxns.get_addr_fxn(root->lib, name, sym);
+ if (!status) {
+ status =
+ nldr_obj->ldr_fxns.get_c_addr_fxn(root->lib, name,
+ sym);
+ }
+ }
+
+ /*
+ * Check in root lib's dependent libraries, but not dependent
+ * libraries' dependents.
+ */
+ if (!status) {
+ for (i = 0; i < root->dep_libs; i++) {
+ status =
+ nldr_obj->ldr_fxns.get_addr_fxn(root->dep_libs_tree
+ [i].lib, name, sym);
+ if (!status) {
+ status =
+ nldr_obj->ldr_fxns.
+ get_c_addr_fxn(root->dep_libs_tree[i].lib,
+ name, sym);
+ }
+ if (status) {
+ /* Symbol found */
+ break;
+ }
+ }
+ }
+ /*
+ * Check in persistent libraries
+ */
+ if (!status) {
+ for (i = 0; i < nldr_node_obj->pers_libs; i++) {
+ status =
+ nldr_obj->ldr_fxns.
+ get_addr_fxn(nldr_node_obj->pers_lib_table[i].lib,
+ name, sym);
+ if (!status) {
+ status = nldr_obj->ldr_fxns.get_c_addr_fxn
+ (nldr_node_obj->pers_lib_table[i].lib, name,
+ sym);
+ }
+ if (status) {
+ /* Symbol found */
+ break;
+ }
+ }
+ }
+
+ return status;
+}
+
+/*
+ * ======== load_lib ========
+ * Recursively load library and all its dependent libraries. The library
+ * we're loading is specified by a uuid.
+ */
+static int load_lib(struct nldr_nodeobject *nldr_node_obj,
+ struct lib_node *root, struct dsp_uuid uuid,
+ bool rootPersistent,
+ struct dbll_library_obj **lib_path,
+ enum nldr_phase phase, u16 depth)
+{
+ struct nldr_object *nldr_obj = nldr_node_obj->nldr_obj;
+ u16 nd_libs = 0; /* Number of dependent libraries */
+ u16 np_libs = 0; /* Number of persistent libraries */
+ u16 nd_libs_loaded = 0; /* Number of dep. libraries loaded */
+ u16 i;
+ u32 entry;
+ u32 dw_buf_size = NLDR_MAXPATHLENGTH;
+ dbll_flags flags = DBLL_SYMB | DBLL_CODE | DBLL_DATA | DBLL_DYNAMIC;
+ struct dbll_attrs new_attrs;
+ char *psz_file_name = NULL;
+ struct dsp_uuid *dep_lib_uui_ds = NULL;
+ bool *persistent_dep_libs = NULL;
+ int status = 0;
+ bool lib_status = false;
+ struct lib_node *dep_lib;
+
+ if (depth > MAXDEPTH) {
+ /* Error */
+ DBC_ASSERT(false);
+ }
+ root->lib = NULL;
+	/* Allocate a buffer for library file name of size DBLL_MAXPATHLENGTH */
+ psz_file_name = kzalloc(DBLL_MAXPATHLENGTH, GFP_KERNEL);
+ if (psz_file_name == NULL)
+ status = -ENOMEM;
+
+ if (DSP_SUCCEEDED(status)) {
+ /* Get the name of the library */
+ if (depth == 0) {
+ status =
+ dcd_get_library_name(nldr_node_obj->nldr_obj->
+ hdcd_mgr, &uuid, psz_file_name,
+ &dw_buf_size, phase,
+ nldr_node_obj->pf_phase_split);
+ } else {
+ /* Dependent libraries are registered with a phase */
+ status =
+ dcd_get_library_name(nldr_node_obj->nldr_obj->
+ hdcd_mgr, &uuid, psz_file_name,
+ &dw_buf_size, NLDR_NOPHASE,
+ NULL);
+ }
+ }
+ if (DSP_SUCCEEDED(status)) {
+ /* Open the library, don't load symbols */
+ status =
+ nldr_obj->ldr_fxns.open_fxn(nldr_obj->dbll, psz_file_name,
+ DBLL_NOLOAD, &root->lib);
+ }
+ /* Done with file name */
+ kfree(psz_file_name);
+
+	/* Check whether the library has already been loaded (persistent) */
+ if (DSP_SUCCEEDED(status) && rootPersistent) {
+ lib_status =
+ find_in_persistent_lib_array(nldr_node_obj, root->lib);
+ /* Close library */
+ if (lib_status) {
+ nldr_obj->ldr_fxns.close_fxn(root->lib);
+ return 0;
+ }
+ }
+ if (DSP_SUCCEEDED(status)) {
+ /* Check for circular dependencies. */
+ for (i = 0; i < depth; i++) {
+ if (root->lib == lib_path[i]) {
+ /* This condition could be checked by a
+ * tool at build time. */
+ status = -EILSEQ;
+ }
+ }
+ }
+ if (DSP_SUCCEEDED(status)) {
+ /* Add library to current path in dependency tree */
+ lib_path[depth] = root->lib;
+ depth++;
+ /* Get number of dependent libraries */
+ status =
+ dcd_get_num_dep_libs(nldr_node_obj->nldr_obj->hdcd_mgr,
+ &uuid, &nd_libs, &np_libs, phase);
+ }
+ DBC_ASSERT(nd_libs >= np_libs);
+ if (DSP_SUCCEEDED(status)) {
+ if (!(*nldr_node_obj->pf_phase_split))
+ np_libs = 0;
+
+ /* nd_libs = #of dependent libraries */
+ root->dep_libs = nd_libs - np_libs;
+ if (nd_libs > 0) {
+ dep_lib_uui_ds = kzalloc(sizeof(struct dsp_uuid) *
+ nd_libs, GFP_KERNEL);
+ persistent_dep_libs =
+ kzalloc(sizeof(bool) * nd_libs, GFP_KERNEL);
+ if (!dep_lib_uui_ds || !persistent_dep_libs)
+ status = -ENOMEM;
+
+ if (root->dep_libs > 0) {
+ /* Allocate arrays for dependent lib UUIDs,
+ * lib nodes */
+ root->dep_libs_tree = kzalloc
+ (sizeof(struct lib_node) *
+ (root->dep_libs), GFP_KERNEL);
+ if (!(root->dep_libs_tree))
+ status = -ENOMEM;
+
+ }
+
+ if (DSP_SUCCEEDED(status)) {
+ /* Get the dependent library UUIDs */
+ status =
+ dcd_get_dep_libs(nldr_node_obj->
+ nldr_obj->hdcd_mgr, &uuid,
+ nd_libs, dep_lib_uui_ds,
+ persistent_dep_libs,
+ phase);
+ }
+ }
+ }
+
+ /*
+ * Recursively load dependent libraries.
+ */
+ if (DSP_SUCCEEDED(status)) {
+ for (i = 0; i < nd_libs; i++) {
+ /* If root library is NOT persistent, and dep library
+ * is, then record it. If root library IS persistent,
+ * the deplib is already included */
+ if (!rootPersistent && persistent_dep_libs[i] &&
+ *nldr_node_obj->pf_phase_split) {
+ if ((nldr_node_obj->pers_libs) >= MAXLIBS) {
+ status = -EILSEQ;
+ break;
+ }
+
+ /* Allocate library outside of phase */
+ dep_lib =
+ &nldr_node_obj->pers_lib_table
+ [nldr_node_obj->pers_libs];
+ } else {
+ if (rootPersistent)
+ persistent_dep_libs[i] = true;
+
+ /* Allocate library within phase */
+ dep_lib = &root->dep_libs_tree[nd_libs_loaded];
+ }
+
+ status = load_lib(nldr_node_obj, dep_lib,
+ dep_lib_uui_ds[i],
+ persistent_dep_libs[i], lib_path,
+ phase, depth);
+
+ if (DSP_SUCCEEDED(status)) {
+ if ((status != 0) &&
+ !rootPersistent && persistent_dep_libs[i] &&
+ *nldr_node_obj->pf_phase_split) {
+ (nldr_node_obj->pers_libs)++;
+ } else {
+ if (!persistent_dep_libs[i] ||
+ !(*nldr_node_obj->pf_phase_split)) {
+ nd_libs_loaded++;
+ }
+ }
+ } else {
+ break;
+ }
+ }
+ }
+
+ /* Now we can load the root library */
+ if (DSP_SUCCEEDED(status)) {
+ new_attrs = nldr_obj->ldr_attrs;
+ new_attrs.sym_arg = root;
+ new_attrs.rmm_handle = nldr_node_obj;
+ new_attrs.input_params = nldr_node_obj->priv_ref;
+ new_attrs.base_image = false;
+
+ status =
+ nldr_obj->ldr_fxns.load_fxn(root->lib, flags, &new_attrs,
+ &entry);
+ }
+
+ /*
+ * In case of failure, unload any dependent libraries that
+ * were loaded, and close the root library.
+ * (Persistent libraries are unloaded from the very top)
+ */
+ if (DSP_FAILED(status)) {
+ if (phase != NLDR_EXECUTE) {
+ for (i = 0; i < nldr_node_obj->pers_libs; i++)
+ unload_lib(nldr_node_obj,
+ &nldr_node_obj->pers_lib_table[i]);
+
+ nldr_node_obj->pers_libs = 0;
+ }
+ for (i = 0; i < nd_libs_loaded; i++)
+ unload_lib(nldr_node_obj, &root->dep_libs_tree[i]);
+
+ if (root->lib)
+ nldr_obj->ldr_fxns.close_fxn(root->lib);
+
+ }
+
+ /* Going up one node in the dependency tree */
+ depth--;
+
+ kfree(dep_lib_uui_ds);
+ dep_lib_uui_ds = NULL;
+
+ kfree(persistent_dep_libs);
+ persistent_dep_libs = NULL;
+
+ return status;
+}
+
+/*
+ * ======== load_ovly ========
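+ * Reserve DSP memory for, and load, the overlay sections of the given
+ * phase. Sections are only loaded on the first reference to the phase.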
+ */
+static int load_ovly(struct nldr_nodeobject *nldr_node_obj,
+ enum nldr_phase phase)
+{
+ struct nldr_object *nldr_obj = nldr_node_obj->nldr_obj;
+ struct ovly_node *po_node = NULL;
+ struct ovly_sect *phase_sects = NULL;
+ struct ovly_sect *other_sects_list = NULL;
+ u16 i;
+ u16 alloc_num = 0;
+ u16 other_alloc = 0;
+ u16 *ref_count = NULL;
+ u16 *other_ref = NULL;
+ u32 bytes;
+ struct ovly_sect *ovly_section;
+ int status = 0;
+
+ /* Find the node in the table */
+ for (i = 0; i < nldr_obj->ovly_nodes; i++) {
+ if (IS_EQUAL_UUID
+ (nldr_node_obj->uuid, nldr_obj->ovly_table[i].uuid)) {
+ /* Found it */
+ po_node = &(nldr_obj->ovly_table[i]);
+ break;
+ }
+ }
+
+ DBC_ASSERT(i < nldr_obj->ovly_nodes);
+
+ if (!po_node) {
+ status = -ENOENT;
+ goto func_end;
+ }
+
+ switch (phase) {
+ case NLDR_CREATE:
+ ref_count = &(po_node->create_ref);
+ other_ref = &(po_node->other_ref);
+ phase_sects = po_node->create_sects_list;
+ other_sects_list = po_node->other_sects_list;
+ break;
+
+ case NLDR_EXECUTE:
+ ref_count = &(po_node->execute_ref);
+ phase_sects = po_node->execute_sects_list;
+ break;
+
+ case NLDR_DELETE:
+ ref_count = &(po_node->delete_ref);
+ phase_sects = po_node->delete_sects_list;
+ break;
+
+ default:
+ DBC_ASSERT(false);
+ break;
+ }
+
+ if (ref_count == NULL)
+ goto func_end;
+
+ if (*ref_count != 0)
+ goto func_end;
+
+ /* 'Allocate' memory for overlay sections of this phase */
+ ovly_section = phase_sects;
+ while (ovly_section) {
+		/* Reserve memory; segment page and alignment not yet supported */
+ status = rmm_alloc(nldr_obj->rmm, 0, ovly_section->size, 0,
+ &(ovly_section->sect_run_addr), true);
+ if (DSP_SUCCEEDED(status)) {
+ ovly_section = ovly_section->next_sect;
+ alloc_num++;
+ } else {
+ break;
+ }
+ }
+ if (other_ref && *other_ref == 0) {
+ /* 'Allocate' memory for other overlay sections
+ * (create phase) */
+ if (DSP_SUCCEEDED(status)) {
+ ovly_section = other_sects_list;
+ while (ovly_section) {
+				/* Reserve memory; page and alignment not yet supported */
+ status =
+ rmm_alloc(nldr_obj->rmm, 0,
+ ovly_section->size, 0,
+ &(ovly_section->sect_run_addr),
+ true);
+ if (DSP_SUCCEEDED(status)) {
+ ovly_section = ovly_section->next_sect;
+ other_alloc++;
+ } else {
+ break;
+ }
+ }
+ }
+ }
+ if (*ref_count == 0) {
+ if (DSP_SUCCEEDED(status)) {
+ /* Load sections for this phase */
+ ovly_section = phase_sects;
+ while (ovly_section && DSP_SUCCEEDED(status)) {
+ bytes =
+ (*nldr_obj->ovly_fxn) (nldr_node_obj->
+ priv_ref,
+ ovly_section->
+ sect_run_addr,
+ ovly_section->
+ sect_load_addr,
+ ovly_section->size,
+ ovly_section->page);
+ if (bytes != ovly_section->size)
+ status = -EPERM;
+
+ ovly_section = ovly_section->next_sect;
+ }
+ }
+ }
+ if (other_ref && *other_ref == 0) {
+ if (DSP_SUCCEEDED(status)) {
+ /* Load other sections (create phase) */
+ ovly_section = other_sects_list;
+ while (ovly_section && DSP_SUCCEEDED(status)) {
+ bytes =
+ (*nldr_obj->ovly_fxn) (nldr_node_obj->
+ priv_ref,
+ ovly_section->
+ sect_run_addr,
+ ovly_section->
+ sect_load_addr,
+ ovly_section->size,
+ ovly_section->page);
+ if (bytes != ovly_section->size)
+ status = -EPERM;
+
+ ovly_section = ovly_section->next_sect;
+ }
+ }
+ }
+ if (DSP_FAILED(status)) {
+ /* 'Deallocate' memory */
+ free_sects(nldr_obj, phase_sects, alloc_num);
+ free_sects(nldr_obj, other_sects_list, other_alloc);
+ }
+func_end:
+ if (DSP_SUCCEEDED(status) && (ref_count != NULL)) {
+ *ref_count += 1;
+ if (other_ref)
+ *other_ref += 1;
+
+ }
+
+ return status;
+}
+
+/*
+ * ======== remote_alloc ========
+ */
+static int remote_alloc(void **pRef, u16 space, u32 size,
+ u32 align, u32 *dspAddr,
+ OPTIONAL s32 segmentId, OPTIONAL s32 req,
+ bool reserve)
+{
+ struct nldr_nodeobject *hnode = (struct nldr_nodeobject *)pRef;
+ struct nldr_object *nldr_obj;
+ struct rmm_target_obj *rmm;
+ u16 mem_phase_bit = MAXFLAGS;
+ u16 segid = 0;
+ u16 i;
+ u16 mem_sect_type;
+ u32 word_size;
+ struct rmm_addr *rmm_addr_obj = (struct rmm_addr *)dspAddr;
+ bool mem_load_req = false;
+ int status = -ENOMEM; /* Set to fail */
+ DBC_REQUIRE(hnode);
+ DBC_REQUIRE(space == DBLL_CODE || space == DBLL_DATA ||
+ space == DBLL_BSS);
+ nldr_obj = hnode->nldr_obj;
+ rmm = nldr_obj->rmm;
+ /* Convert size to DSP words */
+ word_size =
+ (size + nldr_obj->us_dsp_word_size -
+ 1) / nldr_obj->us_dsp_word_size;
+ /* Modify memory 'align' to account for DSP cache line size */
+ align = find_lcm(GEM_CACHE_LINE_SIZE, align);
+ dev_dbg(bridge, "%s: memory align to 0x%x\n", __func__, align);
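+	/*
+	 * If the loader passed an explicit segment id, use it; otherwise
+	 * pick the preferred segment recorded for this node's phase (one
+	 * flag bit per phase: data bit, followed by the code bit).
+	 */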
+ if (segmentId != -1) {
+ rmm_addr_obj->segid = segmentId;
+ segid = segmentId;
+ mem_load_req = req;
+ } else {
+ switch (hnode->phase) {
+ case NLDR_CREATE:
+ mem_phase_bit = CREATEDATAFLAGBIT;
+ break;
+ case NLDR_DELETE:
+ mem_phase_bit = DELETEDATAFLAGBIT;
+ break;
+ case NLDR_EXECUTE:
+ mem_phase_bit = EXECUTEDATAFLAGBIT;
+ break;
+ default:
+ DBC_ASSERT(false);
+ break;
+ }
+ if (space == DBLL_CODE)
+ mem_phase_bit++;
+
+ if (mem_phase_bit < MAXFLAGS)
+ segid = hnode->seg_id[mem_phase_bit];
+
+ /* Determine if there is a memory loading requirement */
+ if ((hnode->code_data_flag_mask >> mem_phase_bit) & 0x1)
+ mem_load_req = true;
+
+ }
+ mem_sect_type = (space == DBLL_CODE) ? DYNM_CODE : DYNM_DATA;
+
+ /* Find an appropriate segment based on space */
+ if (segid == NULLID) {
+		/* No memory requirements or preferences */
+ DBC_ASSERT(!mem_load_req);
+ goto func_cont;
+ }
+ if (segid <= MAXSEGID) {
+ DBC_ASSERT(segid < nldr_obj->dload_segs);
+ /* Attempt to allocate from segid first. */
+ rmm_addr_obj->segid = segid;
+ status =
+ rmm_alloc(rmm, segid, word_size, align, dspAddr, false);
+ if (DSP_FAILED(status)) {
+			dev_dbg(bridge, "%s: Unable to allocate from segment %d\n",
+ __func__, segid);
+ }
+ } else {
+ /* segid > MAXSEGID ==> Internal or external memory */
+ DBC_ASSERT(segid == MEMINTERNALID || segid == MEMEXTERNALID);
+ /* Check for any internal or external memory segment,
+ * depending on segid. */
+ mem_sect_type |= segid == MEMINTERNALID ?
+ DYNM_INTERNAL : DYNM_EXTERNAL;
+ for (i = 0; i < nldr_obj->dload_segs; i++) {
+ if ((nldr_obj->seg_table[i] & mem_sect_type) !=
+ mem_sect_type)
+ continue;
+
+ status = rmm_alloc(rmm, i, word_size, align, dspAddr,
+ false);
+ if (DSP_SUCCEEDED(status)) {
+ /* Save segid for freeing later */
+ rmm_addr_obj->segid = i;
+ break;
+ }
+ }
+ }
+func_cont:
+ /* Haven't found memory yet, attempt to find any segment that works */
+ if (status == -ENOMEM && !mem_load_req) {
+ dev_dbg(bridge, "%s: Preferred segment unavailable, trying "
+ "another\n", __func__);
+ for (i = 0; i < nldr_obj->dload_segs; i++) {
+ /* All bits of mem_sect_type must be set */
+ if ((nldr_obj->seg_table[i] & mem_sect_type) !=
+ mem_sect_type)
+ continue;
+
+ status = rmm_alloc(rmm, i, word_size, align, dspAddr,
+ false);
+ if (DSP_SUCCEEDED(status)) {
+ /* Save segid */
+ rmm_addr_obj->segid = i;
+ break;
+ }
+ }
+ }
+
+ return status;
+}
+
+static int remote_free(void **pRef, u16 space, u32 dspAddr,
+ u32 size, bool reserve)
+{
+ struct nldr_object *nldr_obj = (struct nldr_object *)pRef;
+ struct rmm_target_obj *rmm;
+ u32 word_size;
+ int status = -ENOMEM; /* Set to fail */
+
+ DBC_REQUIRE(nldr_obj);
+
+ rmm = nldr_obj->rmm;
+
+ /* Convert size to DSP words */
+ word_size =
+ (size + nldr_obj->us_dsp_word_size -
+ 1) / nldr_obj->us_dsp_word_size;
+
+ if (rmm_free(rmm, space, dspAddr, word_size, reserve))
+ status = 0;
+
+ return status;
+}
+
+/*
+ * ======== unload_lib ========
+ */
+static void unload_lib(struct nldr_nodeobject *nldr_node_obj,
+ struct lib_node *root)
+{
+ struct dbll_attrs new_attrs;
+ struct nldr_object *nldr_obj = nldr_node_obj->nldr_obj;
+ u16 i;
+
+ DBC_ASSERT(root != NULL);
+
+ /* Unload dependent libraries */
+ for (i = 0; i < root->dep_libs; i++)
+ unload_lib(nldr_node_obj, &root->dep_libs_tree[i]);
+
+ root->dep_libs = 0;
+
+ new_attrs = nldr_obj->ldr_attrs;
+ new_attrs.rmm_handle = nldr_obj->rmm;
+ new_attrs.input_params = nldr_node_obj->priv_ref;
+ new_attrs.base_image = false;
+ new_attrs.sym_arg = root;
+
+ if (root->lib) {
+ /* Unload the root library */
+ nldr_obj->ldr_fxns.unload_fxn(root->lib, &new_attrs);
+ nldr_obj->ldr_fxns.close_fxn(root->lib);
+ }
+
+ /* Free dependent library list */
+ kfree(root->dep_libs_tree);
+ root->dep_libs_tree = NULL;
+}
+
+/*
+ * ======== unload_ovly ========
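+ * Drop the reference count for the given phase and, when it reaches
+ * zero, release the DSP memory reserved for the phase's overlay
+ * sections. 'Other' sections are released in the delete phase.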
+ */
+static void unload_ovly(struct nldr_nodeobject *nldr_node_obj,
+ enum nldr_phase phase)
+{
+ struct nldr_object *nldr_obj = nldr_node_obj->nldr_obj;
+ struct ovly_node *po_node = NULL;
+ struct ovly_sect *phase_sects = NULL;
+ struct ovly_sect *other_sects_list = NULL;
+ u16 i;
+ u16 alloc_num = 0;
+ u16 other_alloc = 0;
+ u16 *ref_count = NULL;
+ u16 *other_ref = NULL;
+
+ /* Find the node in the table */
+ for (i = 0; i < nldr_obj->ovly_nodes; i++) {
+ if (IS_EQUAL_UUID
+ (nldr_node_obj->uuid, nldr_obj->ovly_table[i].uuid)) {
+ /* Found it */
+ po_node = &(nldr_obj->ovly_table[i]);
+ break;
+ }
+ }
+
+ DBC_ASSERT(i < nldr_obj->ovly_nodes);
+
+ if (!po_node)
+ /* TODO: Should we print warning here? */
+ return;
+
+ switch (phase) {
+ case NLDR_CREATE:
+ ref_count = &(po_node->create_ref);
+ phase_sects = po_node->create_sects_list;
+ alloc_num = po_node->create_sects;
+ break;
+ case NLDR_EXECUTE:
+ ref_count = &(po_node->execute_ref);
+ phase_sects = po_node->execute_sects_list;
+ alloc_num = po_node->execute_sects;
+ break;
+ case NLDR_DELETE:
+ ref_count = &(po_node->delete_ref);
+ other_ref = &(po_node->other_ref);
+ phase_sects = po_node->delete_sects_list;
+ /* 'Other' overlay sections are unloaded in the delete phase */
+ other_sects_list = po_node->other_sects_list;
+ alloc_num = po_node->delete_sects;
+ other_alloc = po_node->other_sects;
+ break;
+ default:
+ DBC_ASSERT(false);
+ break;
+ }
+ DBC_ASSERT(ref_count && (*ref_count > 0));
+ if (ref_count && (*ref_count > 0)) {
+ *ref_count -= 1;
+ if (other_ref) {
+ DBC_ASSERT(*other_ref > 0);
+ *other_ref -= 1;
+ }
+ }
+
+ if (ref_count && *ref_count == 0) {
+ /* 'Deallocate' memory */
+ free_sects(nldr_obj, phase_sects, alloc_num);
+ }
+ if (other_ref && *other_ref == 0)
+ free_sects(nldr_obj, other_sects_list, other_alloc);
+}
+
+/*
+ * ======== find_in_persistent_lib_array ========
+ */
+static bool find_in_persistent_lib_array(struct nldr_nodeobject *nldr_node_obj,
+ struct dbll_library_obj *lib)
+{
+ s32 i = 0;
+
+ for (i = 0; i < nldr_node_obj->pers_libs; i++) {
+ if (lib == nldr_node_obj->pers_lib_table[i].lib)
+ return true;
+
+ }
+
+ return false;
+}
+
+/*
+ * ================ Find LCM (Least Common Multiple) ===
+ */
+static u32 find_lcm(u32 a, u32 b)
+{
+ u32 ret;
+
+ ret = a * b / find_gcf(a, b);
+
+ return ret;
+}
+
+/*
+ * ================ Find GCF (Greatest Common Factor) ===
+ */
+static u32 find_gcf(u32 a, u32 b)
+{
+ u32 c;
+
+	/* Get the GCF (greatest common factor) of the two numbers
+	 * using the Euclidean algorithm */
+ while ((c = (a % b))) {
+ a = b;
+ b = c;
+ }
+ return b;
+}
+
+/**
+ * nldr_find_addr() - Find the closest symbol to the given address based on
+ * dynamic node object.
+ *
+ * @nldr_node: Dynamic node object
+ * @sym_addr: Given address to find the dsp symbol
+ * @offset_range: offset range to look for dsp symbol
+ * @offset_output: Symbol Output address
+ * @sym_name: String with the dsp symbol
+ *
+ * This function finds the node library for a given address and
+ * retrieves the dsp symbol by calling dbll_find_dsp_symbol.
+ */
+int nldr_find_addr(struct nldr_nodeobject *nldr_node, u32 sym_addr,
+ u32 offset_range, void *offset_output, char *sym_name)
+{
+ int status = 0;
+ bool status1 = false;
+ s32 i = 0;
+ struct lib_node root = { NULL, 0, NULL };
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(offset_output != NULL);
+ DBC_REQUIRE(sym_name != NULL);
+ pr_debug("%s(0x%x, 0x%x, 0x%x, 0x%x, %s)\n", __func__, (u32) nldr_node,
+ sym_addr, offset_range, (u32) offset_output, sym_name);
+
+ if (nldr_node->dynamic && *nldr_node->pf_phase_split) {
+ switch (nldr_node->phase) {
+ case NLDR_CREATE:
+ root = nldr_node->create_lib;
+ break;
+ case NLDR_EXECUTE:
+ root = nldr_node->execute_lib;
+ break;
+ case NLDR_DELETE:
+ root = nldr_node->delete_lib;
+ break;
+ default:
+ DBC_ASSERT(false);
+ break;
+ }
+ } else {
+ /* for Overlay nodes or non-split Dynamic nodes */
+ root = nldr_node->root;
+ }
+
+ status1 = dbll_find_dsp_symbol(root.lib, sym_addr,
+ offset_range, offset_output, sym_name);
+
+ /* If symbol not found, check dependent libraries */
+ if (!status1)
+ for (i = 0; i < root.dep_libs; i++) {
+ status1 = dbll_find_dsp_symbol(
+ root.dep_libs_tree[i].lib, sym_addr,
+ offset_range, offset_output, sym_name);
+ if (status1)
+ /* Symbol found */
+ break;
+ }
+ /* Check persistent libraries */
+ if (!status1)
+ for (i = 0; i < nldr_node->pers_libs; i++) {
+ status1 = dbll_find_dsp_symbol(
+ nldr_node->pers_lib_table[i].lib, sym_addr,
+ offset_range, offset_output, sym_name);
+ if (status1)
+ /* Symbol found */
+ break;
+ }
+
+ if (!status1) {
+ pr_debug("%s: Address 0x%x not found in range %d.\n",
+ __func__, sym_addr, offset_range);
+ status = -ESPIPE;
+ }
+
+ return status;
+}
diff --git a/drivers/staging/tidspbridge/rmgr/node.c b/drivers/staging/tidspbridge/rmgr/node.c
new file mode 100644
index 0000000..3d2cf96
--- /dev/null
+++ b/drivers/staging/tidspbridge/rmgr/node.c
@@ -0,0 +1,3231 @@
+/*
+ * node.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * DSP/BIOS Bridge Node Manager.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+/* ----------------------------------- Host OS */
+#include <dspbridge/host_os.h>
+
+/* ----------------------------------- DSP/BIOS Bridge */
+#include <dspbridge/std.h>
+#include <dspbridge/dbdefs.h>
+
+/* ----------------------------------- Trace & Debug */
+#include <dspbridge/dbc.h>
+
+/* ----------------------------------- OS Adaptation Layer */
+#include <dspbridge/cfg.h>
+#include <dspbridge/list.h>
+#include <dspbridge/memdefs.h>
+#include <dspbridge/proc.h>
+#include <dspbridge/strm.h>
+#include <dspbridge/sync.h>
+#include <dspbridge/ntfy.h>
+
+/* ----------------------------------- Platform Manager */
+#include <dspbridge/cmm.h>
+#include <dspbridge/cod.h>
+#include <dspbridge/dev.h>
+#include <dspbridge/msg.h>
+
+/* ----------------------------------- Resource Manager */
+#include <dspbridge/dbdcd.h>
+#include <dspbridge/disp.h>
+#include <dspbridge/rms_sh.h>
+
+/* ----------------------------------- Link Driver */
+#include <dspbridge/dspdefs.h>
+#include <dspbridge/dspioctl.h>
+
+/* ----------------------------------- Others */
+#include <dspbridge/gb.h>
+#include <dspbridge/uuidutil.h>
+
+/* ----------------------------------- This */
+#include <dspbridge/nodepriv.h>
+#include <dspbridge/node.h>
+#include <dspbridge/dmm.h>
+
+/* Static/Dynamic Loader includes */
+#include <dspbridge/dbll.h>
+#include <dspbridge/nldr.h>
+
+#include <dspbridge/drv.h>
+#include <dspbridge/drvdefs.h>
+#include <dspbridge/resourcecleanup.h>
+#include <_tiomap.h>
+
+#define HOSTPREFIX "/host"
+#define PIPEPREFIX "/dbpipe"
+
+#define MAX_INPUTS(h) \
+ ((h)->dcd_props.obj_data.node_obj.ndb_props.num_input_streams)
+#define MAX_OUTPUTS(h) \
+ ((h)->dcd_props.obj_data.node_obj.ndb_props.num_output_streams)
+
+#define NODE_GET_PRIORITY(h) ((h)->prio)
+#define NODE_SET_PRIORITY(hnode, prio) ((hnode)->prio = prio)
+#define NODE_SET_STATE(hnode, state) ((hnode)->node_state = state)
+
+#define MAXPIPES 100 /* Max # of /pipe connections (CSL limit) */
+#define MAXDEVSUFFIXLEN 2 /* Max(Log base 10 of MAXPIPES, MAXSTREAMS) */
+
+#define PIPENAMELEN (sizeof(PIPEPREFIX) + MAXDEVSUFFIXLEN)
+#define HOSTNAMELEN (sizeof(HOSTPREFIX) + MAXDEVSUFFIXLEN)
+
+#define MAXDEVNAMELEN 32 /* dsp_ndbprops.ac_name size */
+#define CREATEPHASE 1
+#define EXECUTEPHASE 2
+#define DELETEPHASE 3
+
+/* Define default STRM parameters */
+/*
+ * TBD: Put in header file, make global DSP_STRMATTRS with defaults,
+ * or make defaults configurable.
+ */
+#define DEFAULTBUFSIZE 32
+#define DEFAULTNBUFS 2
+#define DEFAULTSEGID 0
+#define DEFAULTALIGNMENT 0
+#define DEFAULTTIMEOUT 10000
+
+#define RMSQUERYSERVER 0
+#define RMSCONFIGURESERVER 1
+#define RMSCREATENODE 2
+#define RMSEXECUTENODE 3
+#define RMSDELETENODE 4
+#define RMSCHANGENODEPRIORITY 5
+#define RMSREADMEMORY 6
+#define RMSWRITEMEMORY 7
+#define RMSCOPY 8
+#define MAXTIMEOUT 2000
+
+#define NUMRMSFXNS 9
+
+#define PWR_TIMEOUT 500 /* default PWR timeout in msec */
+
+#define STACKSEGLABEL "L1DSRAM_HEAP" /* Label for DSP Stack Segment Addr */
+
+/*
+ * ======== node_mgr ========
+ */
+struct node_mgr {
+ struct dev_object *hdev_obj; /* Device object */
+ /* Function interface to Bridge driver */
+ struct bridge_drv_interface *intf_fxns;
+ struct dcd_manager *hdcd_mgr; /* Proc/Node data manager */
+ struct disp_object *disp_obj; /* Node dispatcher */
+ struct lst_list *node_list; /* List of all allocated nodes */
+ u32 num_nodes; /* Number of nodes in node_list */
+ u32 num_created; /* Number of nodes *created* on DSP */
+ struct gb_t_map *pipe_map; /* Pipe connection bit map */
+ struct gb_t_map *pipe_done_map; /* Pipes that are half free */
+ struct gb_t_map *chnl_map; /* Channel allocation bit map */
+ struct gb_t_map *dma_chnl_map; /* DMA Channel allocation bit map */
+ struct gb_t_map *zc_chnl_map; /* Zero-Copy Channel alloc bit map */
+ struct ntfy_object *ntfy_obj; /* Manages registered notifications */
+ struct mutex node_mgr_lock; /* For critical sections */
+ u32 ul_fxn_addrs[NUMRMSFXNS]; /* RMS function addresses */
+ struct msg_mgr *msg_mgr_obj;
+
+ /* Processor properties needed by Node Dispatcher */
+ u32 ul_num_chnls; /* Total number of channels */
+ u32 ul_chnl_offset; /* Offset of chnl ids rsvd for RMS */
+ u32 ul_chnl_buf_size; /* Buffer size for data to RMS */
+ int proc_family; /* eg, 5000 */
+ int proc_type; /* eg, 5510 */
+	u32 udsp_word_size;	/* Size of DSP word in host bytes */
+ u32 udsp_data_mau_size; /* Size of DSP data MAU */
+ u32 udsp_mau_size; /* Size of MAU */
+ s32 min_pri; /* Minimum runtime priority for node */
+ s32 max_pri; /* Maximum runtime priority for node */
+
+ struct strm_mgr *strm_mgr_obj; /* STRM manager */
+
+ /* Loader properties */
+ struct nldr_object *nldr_obj; /* Handle to loader */
+ struct node_ldr_fxns nldr_fxns; /* Handle to loader functions */
+ bool loader_init; /* Loader Init function succeeded? */
+};
+
+/*
+ * ======== connecttype ========
+ */
+enum connecttype {
+ NOTCONNECTED = 0,
+ NODECONNECT,
+ HOSTCONNECT,
+ DEVICECONNECT,
+};
+
+/*
+ * ======== stream_chnl ========
+ */
+struct stream_chnl {
+ enum connecttype type; /* Type of stream connection */
+ u32 dev_id; /* pipe or channel id */
+};
+
+/*
+ * ======== node_object ========
+ */
+struct node_object {
+ struct list_head list_elem;
+ struct node_mgr *hnode_mgr; /* The manager of this node */
+ struct proc_object *hprocessor; /* Back pointer to processor */
+ struct dsp_uuid node_uuid; /* Node's ID */
+ s32 prio; /* Node's current priority */
+ u32 utimeout; /* Timeout for blocking NODE calls */
+ u32 heap_size; /* Heap Size */
+	u32 udsp_heap_virt_addr;	/* Heap virtual address on the DSP side */
+	u32 ugpp_heap_virt_addr;	/* Heap virtual address on the GPP side */
+ enum node_type ntype; /* Type of node: message, task, etc */
+ enum node_state node_state; /* NODE_ALLOCATED, NODE_CREATED, ... */
+ u32 num_inputs; /* Current number of inputs */
+ u32 num_outputs; /* Current number of outputs */
+ u32 max_input_index; /* Current max input stream index */
+ u32 max_output_index; /* Current max output stream index */
+ struct stream_chnl *inputs; /* Node's input streams */
+ struct stream_chnl *outputs; /* Node's output streams */
+ struct node_createargs create_args; /* Args for node create func */
+ nodeenv node_env; /* Environment returned by RMS */
+ struct dcd_genericobj dcd_props; /* Node properties from DCD */
+ struct dsp_cbdata *pargs; /* Optional args to pass to node */
+ struct ntfy_object *ntfy_obj; /* Manages registered notifications */
+ char *pstr_dev_name; /* device name, if device node */
+ struct sync_object *sync_done; /* Synchronize node_terminate */
+ s32 exit_status; /* execute function return status */
+
+ /* Information needed for node_get_attr() */
+ void *device_owner; /* If dev node, task that owns it */
+ u32 num_gpp_inputs; /* Current # of from GPP streams */
+ u32 num_gpp_outputs; /* Current # of to GPP streams */
+ /* Current stream connections */
+ struct dsp_streamconnect *stream_connect;
+
+ /* Message queue */
+ struct msg_queue *msg_queue_obj;
+
+ /* These fields used for SM messaging */
+ struct cmm_xlatorobject *xlator; /* Node's SM addr translator */
+
+ /* Handle to pass to dynamic loader */
+ struct nldr_nodeobject *nldr_node_obj;
+ bool loaded; /* Code is (dynamically) loaded */
+ bool phase_split; /* Phases split in many libs or ovly */
+
+};
+
+/* Default buffer attributes */
+static struct dsp_bufferattr node_dfltbufattrs = {
+ 0, /* cb_struct */
+ 1, /* segment_id */
+ 0, /* buf_alignment */
+};
+
+static void delete_node(struct node_object *hnode,
+ struct process_context *pr_ctxt);
+static void delete_node_mgr(struct node_mgr *hnode_mgr);
+static void fill_stream_connect(struct node_object *hNode1,
+ struct node_object *hNode2, u32 uStream1,
+ u32 uStream2);
+static void fill_stream_def(struct node_object *hnode,
+ struct node_strmdef *pstrm_def,
+ struct dsp_strmattr *pattrs);
+static void free_stream(struct node_mgr *hnode_mgr, struct stream_chnl stream);
+static int get_fxn_address(struct node_object *hnode, u32 * pulFxnAddr,
+ u32 uPhase);
+static int get_node_props(struct dcd_manager *hdcd_mgr,
+ struct node_object *hnode,
+ CONST struct dsp_uuid *pNodeId,
+ struct dcd_genericobj *pdcdProps);
+static int get_proc_props(struct node_mgr *hnode_mgr,
+ struct dev_object *hdev_obj);
+static int get_rms_fxns(struct node_mgr *hnode_mgr);
+static u32 ovly(void *priv_ref, u32 ulDspRunAddr, u32 ulDspLoadAddr,
+ u32 ul_num_bytes, u32 nMemSpace);
+static u32 mem_write(void *priv_ref, u32 ulDspAddr, void *pbuf,
+ u32 ul_num_bytes, u32 nMemSpace);
+
+static u32 refs; /* module reference count */
+
+/* Dynamic loader functions. */
+static struct node_ldr_fxns nldr_fxns = {
+ nldr_allocate,
+ nldr_create,
+ nldr_delete,
+ nldr_exit,
+ nldr_get_fxn_addr,
+ nldr_init,
+ nldr_load,
+ nldr_unload,
+};
+
+enum node_state node_get_state(void *hnode)
+{
+ struct node_object *pnode = (struct node_object *)hnode;
+ if (!pnode)
+ return -1;
+ else
+ return pnode->node_state;
+}
+
+/*
+ * ======== node_allocate ========
+ * Purpose:
+ * Allocate GPP resources to manage a node on the DSP.
+ */
+int node_allocate(struct proc_object *hprocessor,
+ IN CONST struct dsp_uuid *pNodeId,
+ OPTIONAL IN CONST struct dsp_cbdata *pargs,
+ OPTIONAL IN CONST struct dsp_nodeattrin *attr_in,
+ OUT struct node_object **ph_node,
+ struct process_context *pr_ctxt)
+{
+ struct node_mgr *hnode_mgr;
+ struct dev_object *hdev_obj;
+ struct node_object *pnode = NULL;
+ enum node_type node_type = NODE_TASK;
+ struct node_msgargs *pmsg_args;
+ struct node_taskargs *ptask_args;
+ u32 num_streams;
+ struct bridge_drv_interface *intf_fxns;
+ int status = 0;
+ struct cmm_object *hcmm_mgr = NULL; /* Shared memory manager hndl */
+ u32 proc_id;
+ u32 pul_value;
+ u32 dynext_base;
+ u32 off_set = 0;
+ u32 ul_stack_seg_addr, ul_stack_seg_val;
+ u32 ul_gpp_mem_base;
+ struct cfg_hostres *host_res;
+ struct bridge_dev_context *pbridge_context;
+ u32 mapped_addr = 0;
+ u32 map_attrs = 0x0;
+ struct dsp_processorstate proc_state;
+#ifdef DSP_DMM_DEBUG
+ struct dmm_object *dmm_mgr;
+ struct proc_object *p_proc_object = (struct proc_object *)hprocessor;
+#endif
+
+ void *node_res;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(hprocessor != NULL);
+ DBC_REQUIRE(ph_node != NULL);
+ DBC_REQUIRE(pNodeId != NULL);
+
+ *ph_node = NULL;
+
+ status = proc_get_processor_id(hprocessor, &proc_id);
+
+ if (proc_id != DSP_UNIT)
+ goto func_end;
+
+ status = proc_get_dev_object(hprocessor, &hdev_obj);
+ if (DSP_SUCCEEDED(status)) {
+ status = dev_get_node_manager(hdev_obj, &hnode_mgr);
+ if (hnode_mgr == NULL)
+ status = -EPERM;
+
+ }
+
+ if (DSP_FAILED(status))
+ goto func_end;
+
+ status = dev_get_bridge_context(hdev_obj, &pbridge_context);
+ if (!pbridge_context) {
+ status = -EFAULT;
+ goto func_end;
+ }
+
+ status = proc_get_state(hprocessor, &proc_state,
+ sizeof(struct dsp_processorstate));
+ if (DSP_FAILED(status))
+ goto func_end;
+	/* If processor is in error state then don't attempt
+	   the allocation */
+ if (proc_state.proc_state == PROC_ERROR) {
+ status = -EPERM;
+ goto func_end;
+ }
+
+ /* Assuming that 0 is not a valid function address */
+ if (hnode_mgr->ul_fxn_addrs[0] == 0) {
+ /* No RMS on target - we currently can't handle this */
+ pr_err("%s: Failed, no RMS in base image\n", __func__);
+ status = -EPERM;
+ } else {
+ /* Validate attr_in fields, if non-NULL */
+ if (attr_in) {
+ /* Check if attr_in->prio is within range */
+ if (attr_in->prio < hnode_mgr->min_pri ||
+ attr_in->prio > hnode_mgr->max_pri)
+ status = -EDOM;
+ }
+ }
+ /* Allocate node object and fill in */
+ if (DSP_FAILED(status))
+ goto func_end;
+
+ pnode = kzalloc(sizeof(struct node_object), GFP_KERNEL);
+ if (pnode == NULL) {
+ status = -ENOMEM;
+ goto func_end;
+ }
+ pnode->hnode_mgr = hnode_mgr;
+ /* This critical section protects get_node_props */
+ mutex_lock(&hnode_mgr->node_mgr_lock);
+
+ /* Get dsp_ndbprops from node database */
+ status = get_node_props(hnode_mgr->hdcd_mgr, pnode, pNodeId,
+ &(pnode->dcd_props));
+ if (DSP_FAILED(status))
+ goto func_cont;
+
+ pnode->node_uuid = *pNodeId;
+ pnode->hprocessor = hprocessor;
+ pnode->ntype = pnode->dcd_props.obj_data.node_obj.ndb_props.ntype;
+ pnode->utimeout = pnode->dcd_props.obj_data.node_obj.ndb_props.utimeout;
+ pnode->prio = pnode->dcd_props.obj_data.node_obj.ndb_props.prio;
+
+	/* Currently only C64 DSP builds support node dynamic heaps */
+ /* Allocate memory for node heap */
+ pnode->create_args.asa.task_arg_obj.heap_size = 0;
+ pnode->create_args.asa.task_arg_obj.udsp_heap_addr = 0;
+ pnode->create_args.asa.task_arg_obj.udsp_heap_res_addr = 0;
+ pnode->create_args.asa.task_arg_obj.ugpp_heap_addr = 0;
+ if (!attr_in)
+ goto func_cont;
+
+ /* Check if we have a user allocated node heap */
+ if (!(attr_in->pgpp_virt_addr))
+ goto func_cont;
+
+ /* check for page aligned Heap size */
+ if (((attr_in->heap_size) & (PG_SIZE4K - 1))) {
+		pr_err("%s: node heap size not aligned to 4K, size = 0x%x\n",
+ __func__, attr_in->heap_size);
+ status = -EINVAL;
+ } else {
+ pnode->create_args.asa.task_arg_obj.heap_size =
+ attr_in->heap_size;
+ pnode->create_args.asa.task_arg_obj.ugpp_heap_addr =
+ (u32) attr_in->pgpp_virt_addr;
+ }
+ if (DSP_FAILED(status))
+ goto func_cont;
+
+ status = proc_reserve_memory(hprocessor,
+ pnode->create_args.asa.task_arg_obj.
+ heap_size + PAGE_SIZE,
+ (void **)&(pnode->create_args.asa.
+ task_arg_obj.udsp_heap_res_addr),
+ pr_ctxt);
+ if (DSP_FAILED(status)) {
+ pr_err("%s: Failed to reserve memory for heap: 0x%x\n",
+ __func__, status);
+ goto func_cont;
+ }
+#ifdef DSP_DMM_DEBUG
+ status = dmm_get_handle(p_proc_object, &dmm_mgr);
+ if (!dmm_mgr) {
+ status = DSP_EHANDLE;
+ goto func_cont;
+ }
+
+ dmm_mem_map_dump(dmm_mgr);
+#endif
+
+ map_attrs |= DSP_MAPLITTLEENDIAN;
+ map_attrs |= DSP_MAPELEMSIZE32;
+ map_attrs |= DSP_MAPVIRTUALADDR;
+ status = proc_map(hprocessor, (void *)attr_in->pgpp_virt_addr,
+ pnode->create_args.asa.task_arg_obj.heap_size,
+ (void *)pnode->create_args.asa.task_arg_obj.
+ udsp_heap_res_addr, (void **)&mapped_addr, map_attrs,
+ pr_ctxt);
+ if (DSP_FAILED(status))
+ pr_err("%s: Failed to map memory for Heap: 0x%x\n",
+ __func__, status);
+ else
+ pnode->create_args.asa.task_arg_obj.udsp_heap_addr =
+ (u32) mapped_addr;
+
+func_cont:
+ mutex_unlock(&hnode_mgr->node_mgr_lock);
+ if (attr_in != NULL) {
+ /* Overrides of NBD properties */
+ pnode->utimeout = attr_in->utimeout;
+ pnode->prio = attr_in->prio;
+ }
+ /* Create object to manage notifications */
+ if (DSP_SUCCEEDED(status)) {
+ pnode->ntfy_obj = kmalloc(sizeof(struct ntfy_object),
+ GFP_KERNEL);
+ if (pnode->ntfy_obj)
+ ntfy_init(pnode->ntfy_obj);
+ else
+ status = -ENOMEM;
+ }
+
+ if (DSP_SUCCEEDED(status)) {
+ node_type = node_get_type(pnode);
+ /* Allocate dsp_streamconnect array for device, task, and
+ * dais socket nodes. */
+ if (node_type != NODE_MESSAGE) {
+ num_streams = MAX_INPUTS(pnode) + MAX_OUTPUTS(pnode);
+ pnode->stream_connect = kzalloc(num_streams *
+ sizeof(struct dsp_streamconnect),
+ GFP_KERNEL);
+ if (num_streams > 0 && pnode->stream_connect == NULL)
+ status = -ENOMEM;
+
+ }
+ if (DSP_SUCCEEDED(status) && (node_type == NODE_TASK ||
+ node_type == NODE_DAISSOCKET)) {
+			/* Allocate arrays for maintaining stream connections */
+ pnode->inputs = kzalloc(MAX_INPUTS(pnode) *
+ sizeof(struct stream_chnl), GFP_KERNEL);
+ pnode->outputs = kzalloc(MAX_OUTPUTS(pnode) *
+ sizeof(struct stream_chnl), GFP_KERNEL);
+ ptask_args = &(pnode->create_args.asa.task_arg_obj);
+ ptask_args->strm_in_def = kzalloc(MAX_INPUTS(pnode) *
+ sizeof(struct node_strmdef),
+ GFP_KERNEL);
+ ptask_args->strm_out_def = kzalloc(MAX_OUTPUTS(pnode) *
+ sizeof(struct node_strmdef),
+ GFP_KERNEL);
+ if ((MAX_INPUTS(pnode) > 0 && (pnode->inputs == NULL ||
+ ptask_args->strm_in_def
+ == NULL))
+ || (MAX_OUTPUTS(pnode) > 0
+ && (pnode->outputs == NULL
+ || ptask_args->strm_out_def == NULL)))
+ status = -ENOMEM;
+ }
+ }
+ if (DSP_SUCCEEDED(status) && (node_type != NODE_DEVICE)) {
+ /* Create an event that will be posted when RMS_EXIT is
+ * received. */
+ pnode->sync_done = kzalloc(sizeof(struct sync_object),
+ GFP_KERNEL);
+ if (pnode->sync_done)
+ sync_init_event(pnode->sync_done);
+ else
+ status = -ENOMEM;
+
+ if (DSP_SUCCEEDED(status)) {
+			/* Get the shared mem mgr for this node's dev object */
+ status = cmm_get_handle(hprocessor, &hcmm_mgr);
+ if (DSP_SUCCEEDED(status)) {
+ /* Allocate a SM addr translator for this node
+ * w/ deflt attr */
+ status = cmm_xlator_create(&pnode->xlator,
+ hcmm_mgr, NULL);
+ }
+ }
+ if (DSP_SUCCEEDED(status)) {
+ /* Fill in message args */
+ if ((pargs != NULL) && (pargs->cb_data > 0)) {
+ pmsg_args =
+ &(pnode->create_args.asa.node_msg_args);
+ pmsg_args->pdata = kzalloc(pargs->cb_data,
+ GFP_KERNEL);
+ if (pmsg_args->pdata == NULL) {
+ status = -ENOMEM;
+ } else {
+ pmsg_args->arg_length = pargs->cb_data;
+ memcpy(pmsg_args->pdata,
+ pargs->node_data,
+ pargs->cb_data);
+ }
+ }
+ }
+ }
+
+ if (DSP_SUCCEEDED(status) && node_type != NODE_DEVICE) {
+ /* Create a message queue for this node */
+ intf_fxns = hnode_mgr->intf_fxns;
+ status =
+ (*intf_fxns->pfn_msg_create_queue) (hnode_mgr->msg_mgr_obj,
+ &pnode->msg_queue_obj,
+ 0,
+ pnode->create_args.asa.
+ node_msg_args.max_msgs,
+ pnode);
+ }
+
+ if (DSP_SUCCEEDED(status)) {
+ /* Create object for dynamic loading */
+
+ status = hnode_mgr->nldr_fxns.pfn_allocate(hnode_mgr->nldr_obj,
+ (void *)pnode,
+ &pnode->dcd_props.
+ obj_data.node_obj,
+ &pnode->
+ nldr_node_obj,
+ &pnode->phase_split);
+ }
+
+	/* If the stack segment name read from the node properties matches
+	 * STACKSEGLABEL, read the address of that label, calculate the
+	 * corresponding GPP address, read the value stored there and
+	 * override the stack_seg value in the task args */
+ if (DSP_SUCCEEDED(status) &&
+ (char *)pnode->dcd_props.obj_data.node_obj.ndb_props.
+ stack_seg_name != NULL) {
+ if (strcmp((char *)
+ pnode->dcd_props.obj_data.node_obj.ndb_props.
+ stack_seg_name, STACKSEGLABEL) == 0) {
+ status =
+ hnode_mgr->nldr_fxns.
+ pfn_get_fxn_addr(pnode->nldr_node_obj, "DYNEXT_BEG",
+ &dynext_base);
+ if (DSP_FAILED(status))
+ pr_err("%s: Failed to get addr for DYNEXT_BEG"
+ " status = 0x%x\n", __func__, status);
+
+ status =
+ hnode_mgr->nldr_fxns.
+ pfn_get_fxn_addr(pnode->nldr_node_obj,
+ "L1DSRAM_HEAP", &pul_value);
+
+ if (DSP_FAILED(status))
+ pr_err("%s: Failed to get addr for L1DSRAM_HEAP"
+ " status = 0x%x\n", __func__, status);
+
+ host_res = pbridge_context->resources;
+ if (!host_res)
+ status = -EPERM;
+
+ if (DSP_FAILED(status)) {
+ pr_err("%s: Failed to get host resource, status"
+ " = 0x%x\n", __func__, status);
+ goto func_end;
+ }
+
+ ul_gpp_mem_base = (u32) host_res->dw_mem_base[1];
+ off_set = pul_value - dynext_base;
+ ul_stack_seg_addr = ul_gpp_mem_base + off_set;
+ ul_stack_seg_val = (u32) *((reg_uword32 *)
+ ((u32)
+ (ul_stack_seg_addr)));
+
+ dev_dbg(bridge, "%s: StackSegVal = 0x%x, StackSegAddr ="
+ " 0x%x\n", __func__, ul_stack_seg_val,
+ ul_stack_seg_addr);
+
+ pnode->create_args.asa.task_arg_obj.stack_seg =
+ ul_stack_seg_val;
+
+ }
+ }
+
+ if (DSP_SUCCEEDED(status)) {
+ /* Add the node to the node manager's list of allocated
+ * nodes. */
+ lst_init_elem((struct list_head *)pnode);
+ NODE_SET_STATE(pnode, NODE_ALLOCATED);
+
+ mutex_lock(&hnode_mgr->node_mgr_lock);
+
+ lst_put_tail(hnode_mgr->node_list, (struct list_head *) pnode);
+ ++(hnode_mgr->num_nodes);
+
+ /* Exit critical section */
+ mutex_unlock(&hnode_mgr->node_mgr_lock);
+
+ /* Preset this to assume phases are split
+ * (for overlay and dll) */
+ pnode->phase_split = true;
+
+ if (DSP_SUCCEEDED(status))
+ *ph_node = pnode;
+
+ /* Notify all clients registered for DSP_NODESTATECHANGE. */
+ proc_notify_all_clients(hprocessor, DSP_NODESTATECHANGE);
+ } else {
+ /* Cleanup */
+ if (pnode)
+ delete_node(pnode, pr_ctxt);
+
+ }
+
+ if (DSP_SUCCEEDED(status)) {
+ drv_insert_node_res_element(*ph_node, &node_res, pr_ctxt);
+ drv_proc_node_update_heap_status(node_res, true);
+ drv_proc_node_update_status(node_res, true);
+ }
+ DBC_ENSURE((DSP_FAILED(status) && (*ph_node == NULL)) ||
+ (DSP_SUCCEEDED(status) && *ph_node));
+func_end:
+ dev_dbg(bridge, "%s: hprocessor: %p pNodeId: %p pargs: %p attr_in: %p "
+ "ph_node: %p status: 0x%x\n", __func__, hprocessor,
+ pNodeId, pargs, attr_in, ph_node, status);
+ return status;
+}
+
+/*
+ * ======== node_alloc_msg_buf ========
+ * Purpose:
+ * Allocates buffer for zero copy messaging.
+ */
+DBAPI node_alloc_msg_buf(struct node_object *hnode, u32 usize,
+ OPTIONAL IN OUT struct dsp_bufferattr *pattr,
+ OUT u8 **pbuffer)
+{
+ struct node_object *pnode = (struct node_object *)hnode;
+ int status = 0;
+ bool va_flag = false;
+ bool set_info;
+ u32 proc_id;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(pbuffer != NULL);
+
+ DBC_REQUIRE(usize > 0);
+
+ if (!pnode)
+ status = -EFAULT;
+ else if (node_get_type(pnode) == NODE_DEVICE)
+ status = -EPERM;
+
+ if (DSP_FAILED(status))
+ goto func_end;
+
+ if (pattr == NULL)
+ pattr = &node_dfltbufattrs; /* set defaults */
+
+ status = proc_get_processor_id(pnode->hprocessor, &proc_id);
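+ /* Zero-copy message buffers can only be allocated for nodes
+ * running on the DSP */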
+ if (proc_id != DSP_UNIT) {
+ DBC_ASSERT(NULL);
+ goto func_end;
+ }
+ /* If segment ID includes MEM_SETVIRTUALSEGID then pbuffer is a
+ * virt address, so set this info in this node's translator
+ * object for future ref. If MEM_GETVIRTUALSEGID then retrieve
+ * virtual address from node's translator. */
+ if ((pattr->segment_id & MEM_SETVIRTUALSEGID) ||
+ (pattr->segment_id & MEM_GETVIRTUALSEGID)) {
+ va_flag = true;
+ set_info = (pattr->segment_id & MEM_SETVIRTUALSEGID) ?
+ true : false;
+ /* Clear mask bits */
+ pattr->segment_id &= ~MEM_MASKVIRTUALSEGID;
+ /* Set/get this node's translators virtual address base/size */
+ status = cmm_xlator_info(pnode->xlator, pbuffer, usize,
+ pattr->segment_id, set_info);
+ }
+ if (DSP_SUCCEEDED(status) && (!va_flag)) {
+ if (pattr->segment_id != 1) {
+ /* Node supports single SM segment only. */
+ status = -EBADR;
+ }
+ /* Arbitrary SM buffer alignment not supported for host side
+ * allocs, but guaranteed for the following alignment
+ * values. */
+ switch (pattr->buf_alignment) {
+ case 0:
+ case 1:
+ case 2:
+ case 4:
+ break;
+ default:
+ /* alignment value not supported */
+ status = -EPERM;
+ break;
+ }
+ if (DSP_SUCCEEDED(status)) {
+ /* allocate physical buffer from seg_id in node's
+ * translator */
+ (void)cmm_xlator_alloc_buf(pnode->xlator, pbuffer,
+ usize);
+ if (*pbuffer == NULL) {
+ pr_err("%s: error - Out of shared memory\n",
+ __func__);
+ status = -ENOMEM;
+ }
+ }
+ }
+func_end:
+ return status;
+}
+
+/*
+ * ======== node_change_priority ========
+ * Purpose:
+ * Change the priority of a node in the allocated state, or that is
+ * currently running or paused on the target.
+ */
+int node_change_priority(struct node_object *hnode, s32 prio)
+{
+ struct node_object *pnode = (struct node_object *)hnode;
+ struct node_mgr *hnode_mgr = NULL;
+ enum node_type node_type;
+ enum node_state state;
+ int status = 0;
+ u32 proc_id;
+
+ DBC_REQUIRE(refs > 0);
+
+ if (!hnode || !hnode->hnode_mgr) {
+ status = -EFAULT;
+ } else {
+ hnode_mgr = hnode->hnode_mgr;
+ node_type = node_get_type(hnode);
+ if (node_type != NODE_TASK && node_type != NODE_DAISSOCKET)
+ status = -EPERM;
+ else if (prio < hnode_mgr->min_pri || prio > hnode_mgr->max_pri)
+ status = -EDOM;
+ }
+ if (DSP_FAILED(status))
+ goto func_end;
+
+ /* Enter critical section */
+ mutex_lock(&hnode_mgr->node_mgr_lock);
+
+ state = node_get_state(hnode);
+ if (state == NODE_ALLOCATED || state == NODE_PAUSED) {
+ NODE_SET_PRIORITY(hnode, prio);
+ } else {
+ if (state != NODE_RUNNING) {
+ status = -EBADR;
+ goto func_cont;
+ }
+ status = proc_get_processor_id(pnode->hprocessor, &proc_id);
+ if (proc_id == DSP_UNIT) {
+ status =
+ disp_node_change_priority(hnode_mgr->disp_obj,
+ hnode,
+ hnode_mgr->ul_fxn_addrs
+ [RMSCHANGENODEPRIORITY],
+ hnode->node_env, prio);
+ }
+ if (DSP_SUCCEEDED(status))
+ NODE_SET_PRIORITY(hnode, prio);
+
+ }
+func_cont:
+ /* Leave critical section */
+ mutex_unlock(&hnode_mgr->node_mgr_lock);
+func_end:
+ return status;
+}
+
+/*
+ * ======== node_connect ========
+ * Purpose:
+ * Connect two nodes on the DSP, or a node on the DSP to the GPP.
+ */
+int node_connect(struct node_object *hNode1, u32 uStream1,
+ struct node_object *hNode2,
+ u32 uStream2, OPTIONAL IN struct dsp_strmattr *pattrs,
+ OPTIONAL IN struct dsp_cbdata *conn_param)
+{
+ struct node_mgr *hnode_mgr;
+ char *pstr_dev_name = NULL;
+ enum node_type node1_type = NODE_TASK;
+ enum node_type node2_type = NODE_TASK;
+ struct node_strmdef *pstrm_def;
+ struct node_strmdef *input = NULL;
+ struct node_strmdef *output = NULL;
+ struct node_object *dev_node_obj;
+ struct node_object *hnode;
+ struct stream_chnl *pstream;
+ u32 pipe_id = GB_NOBITS;
+ u32 chnl_id = GB_NOBITS;
+ s8 chnl_mode;
+ u32 dw_length;
+ int status = 0;
+ DBC_REQUIRE(refs > 0);
+
+ if ((hNode1 != (struct node_object *)DSP_HGPPNODE && !hNode1) ||
+ (hNode2 != (struct node_object *)DSP_HGPPNODE && !hNode2))
+ status = -EFAULT;
+
+ if (DSP_SUCCEEDED(status)) {
+ /* The two nodes must be on the same processor */
+ if (hNode1 != (struct node_object *)DSP_HGPPNODE &&
+ hNode2 != (struct node_object *)DSP_HGPPNODE &&
+ hNode1->hnode_mgr != hNode2->hnode_mgr)
+ status = -EPERM;
+ /* Cannot connect a node to itself */
+ if (hNode1 == hNode2)
+ status = -EPERM;
+
+ }
+ if (DSP_SUCCEEDED(status)) {
+ /* node_get_type() will return NODE_GPP if hnode =
+ * DSP_HGPPNODE. */
+ node1_type = node_get_type(hNode1);
+ node2_type = node_get_type(hNode2);
+ /* Check stream indices ranges */
+ if ((node1_type != NODE_GPP && node1_type != NODE_DEVICE &&
+ uStream1 >= MAX_OUTPUTS(hNode1)) || (node2_type != NODE_GPP
+ && node2_type !=
+ NODE_DEVICE
+ && uStream2 >=
+ MAX_INPUTS(hNode2)))
+ status = -EINVAL;
+ }
+ if (DSP_SUCCEEDED(status)) {
+ /*
+ * Only the following types of connections are allowed:
+ * task/dais socket < == > task/dais socket
+ * task/dais socket < == > device
+ * task/dais socket < == > GPP
+ *
+ * ie, no message nodes, and at least one task or dais
+ * socket node.
+ */
+ if (node1_type == NODE_MESSAGE || node2_type == NODE_MESSAGE ||
+ (node1_type != NODE_TASK && node1_type != NODE_DAISSOCKET &&
+ node2_type != NODE_TASK && node2_type != NODE_DAISSOCKET))
+ status = -EPERM;
+ }
+ /*
+ * Check stream mode. Default is STRMMODE_PROCCOPY.
+ */
+ if (DSP_SUCCEEDED(status) && pattrs) {
+ if (pattrs->strm_mode != STRMMODE_PROCCOPY)
+ status = -EPERM; /* illegal stream mode */
+
+ }
+ if (DSP_FAILED(status))
+ goto func_end;
+
+ if (node1_type != NODE_GPP) {
+ hnode_mgr = hNode1->hnode_mgr;
+ } else {
+ DBC_ASSERT(hNode2 != (struct node_object *)DSP_HGPPNODE);
+ hnode_mgr = hNode2->hnode_mgr;
+ }
+ /* Enter critical section */
+ mutex_lock(&hnode_mgr->node_mgr_lock);
+
+ /* Nodes must be in the allocated state */
+ if (node1_type != NODE_GPP && node_get_state(hNode1) != NODE_ALLOCATED)
+ status = -EBADR;
+
+ if (node2_type != NODE_GPP && node_get_state(hNode2) != NODE_ALLOCATED)
+ status = -EBADR;
+
+ if (DSP_SUCCEEDED(status)) {
+ /* Check that stream indices for task and dais socket nodes
+ * are not already in use. (Device nodes are checked later) */
+ if (node1_type == NODE_TASK || node1_type == NODE_DAISSOCKET) {
+ output =
+ &(hNode1->create_args.asa.
+ task_arg_obj.strm_out_def[uStream1]);
+ if (output->sz_device != NULL)
+ status = -EISCONN;
+
+ }
+ if (node2_type == NODE_TASK || node2_type == NODE_DAISSOCKET) {
+ input =
+ &(hNode2->create_args.asa.
+ task_arg_obj.strm_in_def[uStream2]);
+ if (input->sz_device != NULL)
+ status = -EISCONN;
+
+ }
+ }
+ /* Connecting two task nodes? */
+ if (DSP_SUCCEEDED(status) && ((node1_type == NODE_TASK ||
+ node1_type == NODE_DAISSOCKET)
+ && (node2_type == NODE_TASK
+ || node2_type == NODE_DAISSOCKET))) {
+ /* Find available pipe */
+ pipe_id = gb_findandset(hnode_mgr->pipe_map);
+ if (pipe_id == GB_NOBITS) {
+ status = -ECONNREFUSED;
+ } else {
+ hNode1->outputs[uStream1].type = NODECONNECT;
+ hNode2->inputs[uStream2].type = NODECONNECT;
+ hNode1->outputs[uStream1].dev_id = pipe_id;
+ hNode2->inputs[uStream2].dev_id = pipe_id;
+ output->sz_device = kzalloc(PIPENAMELEN + 1,
+ GFP_KERNEL);
+ input->sz_device = kzalloc(PIPENAMELEN + 1, GFP_KERNEL);
+ if (output->sz_device == NULL ||
+ input->sz_device == NULL) {
+ /* Undo the connection */
+ kfree(output->sz_device);
+
+ kfree(input->sz_device);
+
+ output->sz_device = NULL;
+ input->sz_device = NULL;
+ gb_clear(hnode_mgr->pipe_map, pipe_id);
+ status = -ENOMEM;
+ } else {
+ /* Copy "/dbpipe<pipId>" name to device names */
+ sprintf(output->sz_device, "%s%d",
+ PIPEPREFIX, pipe_id);
+ strcpy(input->sz_device, output->sz_device);
+ }
+ }
+ }
+ /* Connecting task node to host? */
+ if (DSP_SUCCEEDED(status) && (node1_type == NODE_GPP ||
+ node2_type == NODE_GPP)) {
+ if (node1_type == NODE_GPP) {
+ chnl_mode = CHNL_MODETODSP;
+ } else {
+ DBC_ASSERT(node2_type == NODE_GPP);
+ chnl_mode = CHNL_MODEFROMDSP;
+ }
+ /* Reserve a channel id. We need to put the name "/host<id>"
+ * in the node's create_args, but the host
+ * side channel will not be opened until DSPStream_Open is
+ * called for this node. */
+ if (pattrs) {
+ if (pattrs->strm_mode == STRMMODE_RDMA) {
+ chnl_id =
+ gb_findandset(hnode_mgr->dma_chnl_map);
+ /* dma chans are 2nd transport chnl set
+ * ids (e.g. 16-31) */
+ if (chnl_id != GB_NOBITS)
+ chnl_id += hnode_mgr->ul_num_chnls;
+ } else if (pattrs->strm_mode == STRMMODE_ZEROCOPY) {
+ chnl_id = gb_findandset(hnode_mgr->zc_chnl_map);
+ /* zero-copy chans are 3rd transport set
+ * (e.g. 32-47) */
+ if (chnl_id != GB_NOBITS)
+ chnl_id += 2 * hnode_mgr->ul_num_chnls;
+ } else { /* must be PROCCOPY */
+ DBC_ASSERT(pattrs->strm_mode ==
+ STRMMODE_PROCCOPY);
+ chnl_id = gb_findandset(hnode_mgr->chnl_map);
+ /* e.g. 0-15 */
+ }
+ } else {
+ /* default to PROCCOPY */
+ chnl_id = gb_findandset(hnode_mgr->chnl_map);
+ }
+ if (chnl_id == GB_NOBITS) {
+ status = -ECONNREFUSED;
+ goto func_cont2;
+ }
+ pstr_dev_name = kzalloc(HOSTNAMELEN + 1, GFP_KERNEL);
+ if (pstr_dev_name != NULL)
+ goto func_cont2;
+
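+ /* Name allocation failed: release the channel id reserved
+ * above before returning -ENOMEM */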
+ if (pattrs) {
+ if (pattrs->strm_mode == STRMMODE_RDMA) {
+ gb_clear(hnode_mgr->dma_chnl_map, chnl_id -
+ hnode_mgr->ul_num_chnls);
+ } else if (pattrs->strm_mode == STRMMODE_ZEROCOPY) {
+ gb_clear(hnode_mgr->zc_chnl_map, chnl_id -
+ (2 * hnode_mgr->ul_num_chnls));
+ } else {
+ DBC_ASSERT(pattrs->strm_mode ==
+ STRMMODE_PROCCOPY);
+ gb_clear(hnode_mgr->chnl_map, chnl_id);
+ }
+ } else {
+ gb_clear(hnode_mgr->chnl_map, chnl_id);
+ }
+ status = -ENOMEM;
+func_cont2:
+ if (DSP_SUCCEEDED(status)) {
+ if (hNode1 == (struct node_object *)DSP_HGPPNODE) {
+ hNode2->inputs[uStream2].type = HOSTCONNECT;
+ hNode2->inputs[uStream2].dev_id = chnl_id;
+ input->sz_device = pstr_dev_name;
+ } else {
+ hNode1->outputs[uStream1].type = HOSTCONNECT;
+ hNode1->outputs[uStream1].dev_id = chnl_id;
+ output->sz_device = pstr_dev_name;
+ }
+ sprintf(pstr_dev_name, "%s%d", HOSTPREFIX, chnl_id);
+ }
+ }
+ /* Connecting task node to device node? */
+ if (DSP_SUCCEEDED(status) && ((node1_type == NODE_DEVICE) ||
+ (node2_type == NODE_DEVICE))) {
+ if (node2_type == NODE_DEVICE) {
+ /* node1 == > device */
+ dev_node_obj = hNode2;
+ hnode = hNode1;
+ pstream = &(hNode1->outputs[uStream1]);
+ pstrm_def = output;
+ } else {
+ /* device == > node2 */
+ dev_node_obj = hNode1;
+ hnode = hNode2;
+ pstream = &(hNode2->inputs[uStream2]);
+ pstrm_def = input;
+ }
+ /* Set up create args */
+ pstream->type = DEVICECONNECT;
+ dw_length = strlen(dev_node_obj->pstr_dev_name);
+ if (conn_param != NULL) {
+ pstrm_def->sz_device = kzalloc(dw_length + 1 +
+ conn_param->cb_data,
+ GFP_KERNEL);
+ } else {
+ pstrm_def->sz_device = kzalloc(dw_length + 1,
+ GFP_KERNEL);
+ }
+ if (pstrm_def->sz_device == NULL) {
+ status = -ENOMEM;
+ } else {
+ /* Copy device name */
+ strncpy(pstrm_def->sz_device,
+ dev_node_obj->pstr_dev_name, dw_length);
+ if (conn_param != NULL) {
+ strncat(pstrm_def->sz_device,
+ (char *)conn_param->node_data,
+ (u32) conn_param->cb_data);
+ }
+ dev_node_obj->device_owner = hnode;
+ }
+ }
+ if (DSP_SUCCEEDED(status)) {
+ /* Fill in create args */
+ if (node1_type == NODE_TASK || node1_type == NODE_DAISSOCKET) {
+ hNode1->create_args.asa.task_arg_obj.num_outputs++;
+ fill_stream_def(hNode1, output, pattrs);
+ }
+ if (node2_type == NODE_TASK || node2_type == NODE_DAISSOCKET) {
+ hNode2->create_args.asa.task_arg_obj.num_inputs++;
+ fill_stream_def(hNode2, input, pattrs);
+ }
+ /* Update hNode1 and hNode2 stream_connect */
+ if (node1_type != NODE_GPP && node1_type != NODE_DEVICE) {
+ hNode1->num_outputs++;
+ if (uStream1 > hNode1->max_output_index)
+ hNode1->max_output_index = uStream1;
+
+ }
+ if (node2_type != NODE_GPP && node2_type != NODE_DEVICE) {
+ hNode2->num_inputs++;
+ if (uStream2 > hNode2->max_input_index)
+ hNode2->max_input_index = uStream2;
+
+ }
+ fill_stream_connect(hNode1, hNode2, uStream1, uStream2);
+ }
+ /* end of sync_enter_cs */
+ /* Exit critical section */
+ mutex_unlock(&hnode_mgr->node_mgr_lock);
+func_end:
+ dev_dbg(bridge, "%s: hNode1: %p uStream1: %d hNode2: %p uStream2: %d"
+ "pattrs: %p status: 0x%x\n", __func__, hNode1,
+ uStream1, hNode2, uStream2, pattrs, status);
+ return status;
+}
+
+/*
+ * ======== node_create ========
+ * Purpose:
+ * Create a node on the DSP by remotely calling the node's create function.
+ */
+int node_create(struct node_object *hnode)
+{
+ struct node_object *pnode = (struct node_object *)hnode;
+ struct node_mgr *hnode_mgr;
+ struct bridge_drv_interface *intf_fxns;
+ u32 ul_create_fxn;
+ enum node_type node_type;
+ int status = 0;
+ int status1 = 0;
+ struct dsp_cbdata cb_data;
+ u32 proc_id = 255;
+ struct dsp_processorstate proc_state;
+ struct proc_object *hprocessor;
+#if defined(CONFIG_BRIDGE_DVFS) && !defined(CONFIG_CPU_FREQ)
+ struct dspbridge_platform_data *pdata =
+ omap_dspbridge_dev->dev.platform_data;
+#endif
+
+ DBC_REQUIRE(refs > 0);
+ if (!pnode) {
+ status = -EFAULT;
+ goto func_end;
+ }
+ hprocessor = hnode->hprocessor;
+ status = proc_get_state(hprocessor, &proc_state,
+ sizeof(struct dsp_processorstate));
+ if (DSP_FAILED(status))
+ goto func_end;
+ /* If processor is in error state then don't attempt to create
+ new node */
+ if (proc_state.proc_state == PROC_ERROR) {
+ status = -EPERM;
+ goto func_end;
+ }
+ /* create struct dsp_cbdata struct for PWR calls */
+ cb_data.cb_data = PWR_TIMEOUT;
+ node_type = node_get_type(hnode);
+ hnode_mgr = hnode->hnode_mgr;
+ intf_fxns = hnode_mgr->intf_fxns;
+ /* Get access to node dispatcher */
+ mutex_lock(&hnode_mgr->node_mgr_lock);
+
+ /* Check node state */
+ if (node_get_state(hnode) != NODE_ALLOCATED)
+ status = -EBADR;
+
+ if (DSP_SUCCEEDED(status))
+ status = proc_get_processor_id(pnode->hprocessor, &proc_id);
+
+ if (DSP_FAILED(status))
+ goto func_cont2;
+
+ if (proc_id != DSP_UNIT)
+ goto func_cont2;
+
+ /* Make sure streams are properly connected */
+ if ((hnode->num_inputs && hnode->max_input_index >
+ hnode->num_inputs - 1) ||
+ (hnode->num_outputs && hnode->max_output_index >
+ hnode->num_outputs - 1))
+ status = -ENOTCONN;
+
+ if (DSP_SUCCEEDED(status)) {
+ /* If node's create function is not loaded, load it */
+ /* Boost the OPP level to the maximum that can be requested for the DSP */
+#if defined(CONFIG_BRIDGE_DVFS) && !defined(CONFIG_CPU_FREQ)
+ if (pdata->cpu_set_freq)
+ (*pdata->cpu_set_freq) (pdata->mpu_speed[VDD1_OPP3]);
+#endif
+ status = hnode_mgr->nldr_fxns.pfn_load(hnode->nldr_node_obj,
+ NLDR_CREATE);
+ /* Get address of node's create function */
+ if (DSP_SUCCEEDED(status)) {
+ hnode->loaded = true;
+ if (node_type != NODE_DEVICE) {
+ status = get_fxn_address(hnode, &ul_create_fxn,
+ CREATEPHASE);
+ }
+ } else {
+ pr_err("%s: failed to load create code: 0x%x\n",
+ __func__, status);
+ }
+ /* Request the lowest OPP level */
+#if defined(CONFIG_BRIDGE_DVFS) && !defined(CONFIG_CPU_FREQ)
+ if (pdata->cpu_set_freq)
+ (*pdata->cpu_set_freq) (pdata->mpu_speed[VDD1_OPP1]);
+#endif
+ /* Get address of iAlg functions, if socket node */
+ if (DSP_SUCCEEDED(status)) {
+ if (node_type == NODE_DAISSOCKET) {
+ status = hnode_mgr->nldr_fxns.pfn_get_fxn_addr
+ (hnode->nldr_node_obj,
+ hnode->dcd_props.obj_data.node_obj.
+ pstr_i_alg_name,
+ &hnode->create_args.asa.
+ task_arg_obj.ul_dais_arg);
+ }
+ }
+ }
+ if (DSP_SUCCEEDED(status)) {
+ if (node_type != NODE_DEVICE) {
+ status = disp_node_create(hnode_mgr->disp_obj, hnode,
+ hnode_mgr->ul_fxn_addrs
+ [RMSCREATENODE],
+ ul_create_fxn,
+ &(hnode->create_args),
+ &(hnode->node_env));
+ if (DSP_SUCCEEDED(status)) {
+ /* Set the message queue id to the node env
+ * pointer */
+ intf_fxns = hnode_mgr->intf_fxns;
+ (*intf_fxns->pfn_msg_set_queue_id) (hnode->
+ msg_queue_obj,
+ hnode->node_env);
+ }
+ }
+ }
+ /* Phase II/Overlays: Create, execute, delete phases possibly in
+ * different files/sections. */
+ if (hnode->loaded && hnode->phase_split) {
+ /* If create code was dynamically loaded, we can now unload
+ * it. */
+ status1 = hnode_mgr->nldr_fxns.pfn_unload(hnode->nldr_node_obj,
+ NLDR_CREATE);
+ hnode->loaded = false;
+ }
+ if (DSP_FAILED(status1))
+ pr_err("%s: Failed to unload create code: 0x%x\n",
+ __func__, status1);
+func_cont2:
+ /* Update node state and node manager state */
+ if (DSP_SUCCEEDED(status)) {
+ NODE_SET_STATE(hnode, NODE_CREATED);
+ hnode_mgr->num_created++;
+ goto func_cont;
+ }
+ if (status != -EBADR) {
+ /* Put back in NODE_ALLOCATED state if error occurred */
+ NODE_SET_STATE(hnode, NODE_ALLOCATED);
+ }
+func_cont:
+ /* Free access to node dispatcher */
+ mutex_unlock(&hnode_mgr->node_mgr_lock);
+func_end:
+ if (DSP_SUCCEEDED(status)) {
+ proc_notify_clients(hnode->hprocessor, DSP_NODESTATECHANGE);
+ ntfy_notify(hnode->ntfy_obj, DSP_NODESTATECHANGE);
+ }
+
+ dev_dbg(bridge, "%s: hnode: %p status: 0x%x\n", __func__,
+ hnode, status);
+ return status;
+}
+
+/*
+ * ======== node_create_mgr ========
+ * Purpose:
+ * Create a NODE Manager object.
+ */
+int node_create_mgr(OUT struct node_mgr **phNodeMgr,
+ struct dev_object *hdev_obj)
+{
+ u32 i;
+ struct node_mgr *node_mgr_obj = NULL;
+ struct disp_attr disp_attr_obj;
+ char *sz_zl_file = "";
+ struct nldr_attrs nldr_attrs_obj;
+ int status = 0;
+ u8 dev_type;
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(phNodeMgr != NULL);
+ DBC_REQUIRE(hdev_obj != NULL);
+
+ *phNodeMgr = NULL;
+ /* Allocate Node manager object */
+ node_mgr_obj = kzalloc(sizeof(struct node_mgr), GFP_KERNEL);
+ if (node_mgr_obj) {
+ node_mgr_obj->hdev_obj = hdev_obj;
+ node_mgr_obj->node_list = kzalloc(sizeof(struct lst_list),
+ GFP_KERNEL);
+ node_mgr_obj->pipe_map = gb_create(MAXPIPES);
+ node_mgr_obj->pipe_done_map = gb_create(MAXPIPES);
+ if (node_mgr_obj->node_list == NULL
+ || node_mgr_obj->pipe_map == NULL
+ || node_mgr_obj->pipe_done_map == NULL) {
+ status = -ENOMEM;
+ } else {
+ INIT_LIST_HEAD(&node_mgr_obj->node_list->head);
+ node_mgr_obj->ntfy_obj = kmalloc(
+ sizeof(struct ntfy_object), GFP_KERNEL);
+ if (node_mgr_obj->ntfy_obj)
+ ntfy_init(node_mgr_obj->ntfy_obj);
+ else
+ status = -ENOMEM;
+ }
+ node_mgr_obj->num_created = 0;
+ } else {
+ status = -ENOMEM;
+ }
+ /* get devNodeType */
+ if (DSP_SUCCEEDED(status))
+ status = dev_get_dev_type(hdev_obj, &dev_type);
+
+ /* Create the DCD Manager */
+ if (DSP_SUCCEEDED(status)) {
+ status =
+ dcd_create_manager(sz_zl_file, &node_mgr_obj->hdcd_mgr);
+ if (DSP_SUCCEEDED(status))
+ status = get_proc_props(node_mgr_obj, hdev_obj);
+
+ }
+ /* Create NODE Dispatcher */
+ if (DSP_SUCCEEDED(status)) {
+ disp_attr_obj.ul_chnl_offset = node_mgr_obj->ul_chnl_offset;
+ disp_attr_obj.ul_chnl_buf_size = node_mgr_obj->ul_chnl_buf_size;
+ disp_attr_obj.proc_family = node_mgr_obj->proc_family;
+ disp_attr_obj.proc_type = node_mgr_obj->proc_type;
+ status =
+ disp_create(&node_mgr_obj->disp_obj, hdev_obj,
+ &disp_attr_obj);
+ }
+ /* Create a STRM Manager */
+ if (DSP_SUCCEEDED(status))
+ status = strm_create(&node_mgr_obj->strm_mgr_obj, hdev_obj);
+
+ if (DSP_SUCCEEDED(status)) {
+ dev_get_intf_fxns(hdev_obj, &node_mgr_obj->intf_fxns);
+ /* Get msg_ctrl queue manager */
+ dev_get_msg_mgr(hdev_obj, &node_mgr_obj->msg_mgr_obj);
+ mutex_init(&node_mgr_obj->node_mgr_lock);
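+ /* Create bitmaps for the three transport channel sets:
+ * proc-copy, dsp-dma and zero-copy */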
+ node_mgr_obj->chnl_map = gb_create(node_mgr_obj->ul_num_chnls);
+ /* dma chnl map. ul_num_chnls is # per transport */
+ node_mgr_obj->dma_chnl_map =
+ gb_create(node_mgr_obj->ul_num_chnls);
+ node_mgr_obj->zc_chnl_map =
+ gb_create(node_mgr_obj->ul_num_chnls);
+ if ((node_mgr_obj->chnl_map == NULL)
+ || (node_mgr_obj->dma_chnl_map == NULL)
+ || (node_mgr_obj->zc_chnl_map == NULL)) {
+ status = -ENOMEM;
+ } else {
+ /* Block out reserved channels */
+ for (i = 0; i < node_mgr_obj->ul_chnl_offset; i++)
+ gb_set(node_mgr_obj->chnl_map, i);
+
+ /* Block out channels reserved for RMS */
+ gb_set(node_mgr_obj->chnl_map,
+ node_mgr_obj->ul_chnl_offset);
+ gb_set(node_mgr_obj->chnl_map,
+ node_mgr_obj->ul_chnl_offset + 1);
+ }
+ }
+ if (DSP_SUCCEEDED(status)) {
+ /* NO RM Server on the IVA */
+ if (dev_type != IVA_UNIT) {
+ /* Get addresses of any RMS functions loaded */
+ status = get_rms_fxns(node_mgr_obj);
+ }
+ }
+
+ /* Get loader functions and create loader */
+ if (DSP_SUCCEEDED(status))
+ node_mgr_obj->nldr_fxns = nldr_fxns; /* Dyn loader funcs */
+
+ if (DSP_SUCCEEDED(status)) {
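+ /* Configure the dynamic loader with the overlay and memory
+ * write callbacks and the DSP word/MAU sizes before creating it */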
+ nldr_attrs_obj.pfn_ovly = ovly;
+ nldr_attrs_obj.pfn_write = mem_write;
+ nldr_attrs_obj.us_dsp_word_size = node_mgr_obj->udsp_word_size;
+ nldr_attrs_obj.us_dsp_mau_size = node_mgr_obj->udsp_mau_size;
+ node_mgr_obj->loader_init = node_mgr_obj->nldr_fxns.pfn_init();
+ status =
+ node_mgr_obj->nldr_fxns.pfn_create(&node_mgr_obj->nldr_obj,
+ hdev_obj,
+ &nldr_attrs_obj);
+ }
+ if (DSP_SUCCEEDED(status))
+ *phNodeMgr = node_mgr_obj;
+ else
+ delete_node_mgr(node_mgr_obj);
+
+ DBC_ENSURE((DSP_FAILED(status) && (*phNodeMgr == NULL)) ||
+ (DSP_SUCCEEDED(status) && *phNodeMgr));
+
+ return status;
+}
+
+/*
+ * ======== node_delete ========
+ * Purpose:
+ * Delete a node on the DSP by remotely calling the node's delete function.
+ * Loads the node's delete function if necessary. Frees GPP side resources
+ * after the node's delete function returns.
+ */
+int node_delete(struct node_object *hnode,
+ struct process_context *pr_ctxt)
+{
+ struct node_object *pnode = (struct node_object *)hnode;
+ struct node_mgr *hnode_mgr;
+ struct proc_object *hprocessor;
+ struct disp_object *disp_obj;
+ u32 ul_delete_fxn;
+ enum node_type node_type;
+ enum node_state state;
+ int status = 0;
+ int status1 = 0;
+ struct dsp_cbdata cb_data;
+ u32 proc_id;
+ struct bridge_drv_interface *intf_fxns;
+
+ void *node_res;
+
+ struct dsp_processorstate proc_state;
+ DBC_REQUIRE(refs > 0);
+
+ if (!hnode) {
+ status = -EFAULT;
+ goto func_end;
+ }
+ /* create struct dsp_cbdata struct for PWR call */
+ cb_data.cb_data = PWR_TIMEOUT;
+ hnode_mgr = hnode->hnode_mgr;
+ hprocessor = hnode->hprocessor;
+ disp_obj = hnode_mgr->disp_obj;
+ node_type = node_get_type(hnode);
+ intf_fxns = hnode_mgr->intf_fxns;
+ /* Enter critical section */
+ mutex_lock(&hnode_mgr->node_mgr_lock);
+
+ state = node_get_state(hnode);
+ /* Execute the delete phase code for a non-device node in all cases
+ * except when the node was only allocated. The delete phase must be
+ * executed even if the create phase ran but failed.
+ * If the node environment pointer is non-NULL, the delete phase
+ * code must be executed. */
+ if (!(state == NODE_ALLOCATED && hnode->node_env == (u32) NULL) &&
+ node_type != NODE_DEVICE) {
+ status = proc_get_processor_id(pnode->hprocessor, &proc_id);
+ if (DSP_FAILED(status))
+ goto func_cont1;
+
+ if (proc_id == DSP_UNIT || proc_id == IVA_UNIT) {
+ /* If node has terminated, execute phase code will
+ * have already been unloaded in node_on_exit(). If the
+ * node is PAUSED, the execute phase is loaded, and it
+ * is now ok to unload it. If the node is running, we
+ * will unload the execute phase only after deleting
+ * the node. */
+ if (state == NODE_PAUSED && hnode->loaded &&
+ hnode->phase_split) {
+ /* Ok to unload execute code as long as node
+ * is not running */
+ status1 =
+ hnode_mgr->nldr_fxns.
+ pfn_unload(hnode->nldr_node_obj,
+ NLDR_EXECUTE);
+ hnode->loaded = false;
+ NODE_SET_STATE(hnode, NODE_DONE);
+ }
+ /* Load delete phase code if not loaded or if the
+ * EXECUTE phase has not been unloaded */
+ if ((!(hnode->loaded) || (state == NODE_RUNNING)) &&
+ hnode->phase_split) {
+ status =
+ hnode_mgr->nldr_fxns.
+ pfn_load(hnode->nldr_node_obj, NLDR_DELETE);
+ if (DSP_SUCCEEDED(status))
+ hnode->loaded = true;
+ else
+ pr_err("%s: fail - load delete code:"
+ " 0x%x\n", __func__, status);
+ }
+ }
+func_cont1:
+ if (DSP_SUCCEEDED(status)) {
+ /* Unblock a thread trying to terminate the node */
+ (void)sync_set_event(hnode->sync_done);
+ if (proc_id == DSP_UNIT) {
+ /* ul_delete_fxn = address of node's delete
+ * function */
+ status = get_fxn_address(hnode, &ul_delete_fxn,
+ DELETEPHASE);
+ } else if (proc_id == IVA_UNIT)
+ ul_delete_fxn = (u32) hnode->node_env;
+ if (DSP_SUCCEEDED(status)) {
+ status = proc_get_state(hprocessor,
+ &proc_state,
+ sizeof(struct
+ dsp_processorstate));
+ if (proc_state.proc_state != PROC_ERROR) {
+ status =
+ disp_node_delete(disp_obj, hnode,
+ hnode_mgr->
+ ul_fxn_addrs
+ [RMSDELETENODE],
+ ul_delete_fxn,
+ hnode->node_env);
+ } else
+ NODE_SET_STATE(hnode, NODE_DONE);
+
+ /* Unload the execute phase (if not already unloaded)
+ * and the delete function */
+ if (state == NODE_RUNNING &&
+ hnode->phase_split) {
+ status1 =
+ hnode_mgr->nldr_fxns.
+ pfn_unload(hnode->nldr_node_obj,
+ NLDR_EXECUTE);
+ }
+ if (DSP_FAILED(status1))
+ pr_err("%s: fail - unload execute code:"
+ " 0x%x\n", __func__, status1);
+
+ status1 =
+ hnode_mgr->nldr_fxns.pfn_unload(hnode->
+ nldr_node_obj,
+ NLDR_DELETE);
+ hnode->loaded = false;
+ if (DSP_FAILED(status1))
+ pr_err("%s: fail - unload delete code: "
+ "0x%x\n", __func__, status1);
+ }
+ }
+ }
+ /* Free host side resources even if a failure occurred */
+ /* Remove node from hnode_mgr->node_list */
+ lst_remove_elem(hnode_mgr->node_list, (struct list_head *)hnode);
+ hnode_mgr->num_nodes--;
+ /* Decrement count of nodes created on DSP */
+ if ((state != NODE_ALLOCATED) || ((state == NODE_ALLOCATED) &&
+ (hnode->node_env != (u32) NULL)))
+ hnode_mgr->num_created--;
+ /* Free host-side resources allocated by node_create()
+ * delete_node() fails if SM buffers not freed by client! */
+ if (drv_get_node_res_element(hnode, &node_res, pr_ctxt) !=
+ -ENOENT)
+ drv_proc_node_update_status(node_res, false);
+ delete_node(hnode, pr_ctxt);
+
+ drv_remove_node_res_element(node_res, pr_ctxt);
+ /* Exit critical section */
+ mutex_unlock(&hnode_mgr->node_mgr_lock);
+ proc_notify_clients(hprocessor, DSP_NODESTATECHANGE);
+func_end:
+ dev_dbg(bridge, "%s: hnode: %p status 0x%x\n", __func__, hnode, status);
+ return status;
+}
+
+/*
+ * ======== node_delete_mgr ========
+ * Purpose:
+ * Delete the NODE Manager.
+ */
+int node_delete_mgr(struct node_mgr *hnode_mgr)
+{
+ int status = 0;
+
+ DBC_REQUIRE(refs > 0);
+
+ if (hnode_mgr)
+ delete_node_mgr(hnode_mgr);
+ else
+ status = -EFAULT;
+
+ return status;
+}
+
+/*
+ * ======== node_enum_nodes ========
+ * Purpose:
+ * Enumerate currently allocated nodes.
+ */
+int node_enum_nodes(struct node_mgr *hnode_mgr, void **node_tab,
+ u32 node_tab_size, OUT u32 *pu_num_nodes,
+ OUT u32 *pu_allocated)
+{
+ struct node_object *hnode;
+ u32 i;
+ int status = 0;
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(node_tab != NULL || node_tab_size == 0);
+ DBC_REQUIRE(pu_num_nodes != NULL);
+ DBC_REQUIRE(pu_allocated != NULL);
+
+ if (!hnode_mgr) {
+ status = -EFAULT;
+ goto func_end;
+ }
+ /* Enter critical section */
+ mutex_lock(&hnode_mgr->node_mgr_lock);
+
+ if (hnode_mgr->num_nodes > node_tab_size) {
+ *pu_allocated = hnode_mgr->num_nodes;
+ *pu_num_nodes = 0;
+ status = -EINVAL;
+ } else {
+ hnode = (struct node_object *)lst_first(hnode_mgr->
+ node_list);
+ for (i = 0; i < hnode_mgr->num_nodes; i++) {
+ DBC_ASSERT(hnode);
+ node_tab[i] = hnode;
+ hnode = (struct node_object *)lst_next
+ (hnode_mgr->node_list,
+ (struct list_head *)hnode);
+ }
+ *pu_allocated = *pu_num_nodes = hnode_mgr->num_nodes;
+ }
+ /* end of sync_enter_cs */
+ /* Exit critical section */
+ mutex_unlock(&hnode_mgr->node_mgr_lock);
+func_end:
+ return status;
+}
+
+/*
+ * ======== node_exit ========
+ * Purpose:
+ * Discontinue usage of NODE module.
+ */
+void node_exit(void)
+{
+ DBC_REQUIRE(refs > 0);
+
+ refs--;
+
+ DBC_ENSURE(refs >= 0);
+}
+
+/*
+ * ======== node_free_msg_buf ========
+ * Purpose:
+ * Frees the message buffer.
+ */
+int node_free_msg_buf(struct node_object *hnode, IN u8 * pbuffer,
+ OPTIONAL struct dsp_bufferattr *pattr)
+{
+ struct node_object *pnode = (struct node_object *)hnode;
+ int status = 0;
+ u32 proc_id;
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(pbuffer != NULL);
+ DBC_REQUIRE(pnode != NULL);
+ DBC_REQUIRE(pnode->xlator != NULL);
+
+ if (!hnode) {
+ status = -EFAULT;
+ goto func_end;
+ }
+ status = proc_get_processor_id(pnode->hprocessor, &proc_id);
+ if (proc_id == DSP_UNIT) {
+ if (DSP_SUCCEEDED(status)) {
+ if (pattr == NULL) {
+ /* set defaults */
+ pattr = &node_dfltbufattrs;
+ }
+ /* Node supports single SM segment only */
+ if (pattr->segment_id != 1)
+ status = -EBADR;
+
+ /* pbuffer is the client's VA. */
+ status = cmm_xlator_free_buf(pnode->xlator, pbuffer);
+ }
+ } else {
+ DBC_ASSERT(NULL); /* BUG */
+ }
+func_end:
+ return status;
+}
+
+/*
+ * ======== node_get_attr ========
+ * Purpose:
+ * Copy the current attributes of the specified node into a dsp_nodeattr
+ * structure.
+ */
+int node_get_attr(struct node_object *hnode,
+ OUT struct dsp_nodeattr *pattr, u32 attr_size)
+{
+ struct node_mgr *hnode_mgr;
+ int status = 0;
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(pattr != NULL);
+ DBC_REQUIRE(attr_size >= sizeof(struct dsp_nodeattr));
+
+ if (!hnode) {
+ status = -EFAULT;
+ } else {
+ hnode_mgr = hnode->hnode_mgr;
+ /* Enter hnode_mgr critical section (since we're accessing
+ * data that could be changed by node_change_priority() and
+ * node_connect()). */
+ mutex_lock(&hnode_mgr->node_mgr_lock);
+ pattr->cb_struct = sizeof(struct dsp_nodeattr);
+ /* dsp_nodeattrin */
+ pattr->in_node_attr_in.cb_struct =
+ sizeof(struct dsp_nodeattrin);
+ pattr->in_node_attr_in.prio = hnode->prio;
+ pattr->in_node_attr_in.utimeout = hnode->utimeout;
+ pattr->in_node_attr_in.heap_size =
+ hnode->create_args.asa.task_arg_obj.heap_size;
+ pattr->in_node_attr_in.pgpp_virt_addr = (void *)
+ hnode->create_args.asa.task_arg_obj.ugpp_heap_addr;
+ pattr->node_attr_inputs = hnode->num_gpp_inputs;
+ pattr->node_attr_outputs = hnode->num_gpp_outputs;
+ /* dsp_nodeinfo */
+ get_node_info(hnode, &(pattr->node_info));
+ /* end of sync_enter_cs */
+ /* Exit critical section */
+ mutex_unlock(&hnode_mgr->node_mgr_lock);
+ }
+ return status;
+}
+
+/*
+ * ======== node_get_channel_id ========
+ * Purpose:
+ * Get the channel index reserved for a stream connection between the
+ * host and a node.
+ */
+int node_get_channel_id(struct node_object *hnode, u32 dir, u32 index,
+ OUT u32 *pulId)
+{
+ enum node_type node_type;
+ int status = -EINVAL;
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(dir == DSP_TONODE || dir == DSP_FROMNODE);
+ DBC_REQUIRE(pulId != NULL);
+
+ if (!hnode) {
+ status = -EFAULT;
+ return status;
+ }
+ node_type = node_get_type(hnode);
+ if (node_type != NODE_TASK && node_type != NODE_DAISSOCKET) {
+ status = -EPERM;
+ return status;
+ }
+ if (dir == DSP_TONODE) {
+ if (index < MAX_INPUTS(hnode)) {
+ if (hnode->inputs[index].type == HOSTCONNECT) {
+ *pulId = hnode->inputs[index].dev_id;
+ status = 0;
+ }
+ }
+ } else {
+ DBC_ASSERT(dir == DSP_FROMNODE);
+ if (index < MAX_OUTPUTS(hnode)) {
+ if (hnode->outputs[index].type == HOSTCONNECT) {
+ *pulId = hnode->outputs[index].dev_id;
+ status = 0;
+ }
+ }
+ }
+ return status;
+}
+
+/*
+ * ======== node_get_message ========
+ * Purpose:
+ * Retrieve a message from a node on the DSP.
+ */
+int node_get_message(struct node_object *hnode,
+ OUT struct dsp_msg *pmsg, u32 utimeout)
+{
+ struct node_mgr *hnode_mgr;
+ enum node_type node_type;
+ struct bridge_drv_interface *intf_fxns;
+ int status = 0;
+ void *tmp_buf;
+ struct dsp_processorstate proc_state;
+ struct proc_object *hprocessor;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(pmsg != NULL);
+
+ if (!hnode) {
+ status = -EFAULT;
+ goto func_end;
+ }
+ hprocessor = hnode->hprocessor;
+ status = proc_get_state(hprocessor, &proc_state,
+ sizeof(struct dsp_processorstate));
+ if (DSP_FAILED(status))
+ goto func_end;
+ /* If processor is in error state then don't attempt to get the
+ message */
+ if (proc_state.proc_state == PROC_ERROR) {
+ status = -EPERM;
+ goto func_end;
+ }
+ hnode_mgr = hnode->hnode_mgr;
+ node_type = node_get_type(hnode);
+ if (node_type != NODE_MESSAGE && node_type != NODE_TASK &&
+ node_type != NODE_DAISSOCKET) {
+ status = -EPERM;
+ goto func_end;
+ }
+ /* This function will block unless a message is available. Since
+ * DSPNode_RegisterNotify() allows notification when a message
+ * is available, the system can be designed so that
+ * DSPNode_GetMessage() is only called when a message is
+ * available. */
+ intf_fxns = hnode_mgr->intf_fxns;
+ status =
+ (*intf_fxns->pfn_msg_get) (hnode->msg_queue_obj, pmsg, utimeout);
+ /* Check if message contains SM descriptor */
+ if (DSP_FAILED(status) || !(pmsg->dw_cmd & DSP_RMSBUFDESC))
+ goto func_end;
+
+ /* Translate DSP byte addr to GPP Va. */
+ tmp_buf = cmm_xlator_translate(hnode->xlator,
+ (void *)(pmsg->dw_arg1 *
+ hnode->hnode_mgr->
+ udsp_word_size), CMM_DSPPA2PA);
+ if (tmp_buf != NULL) {
+ /* now convert this GPP Pa to Va */
+ tmp_buf = cmm_xlator_translate(hnode->xlator, tmp_buf,
+ CMM_PA2VA);
+ if (tmp_buf != NULL) {
+ /* Adjust SM size in msg */
+ pmsg->dw_arg1 = (u32) tmp_buf;
+ pmsg->dw_arg2 *= hnode->hnode_mgr->udsp_word_size;
+ } else {
+ status = -ESRCH;
+ }
+ } else {
+ status = -ESRCH;
+ }
+func_end:
+ dev_dbg(bridge, "%s: hnode: %p pmsg: %p utimeout: 0x%x\n", __func__,
+ hnode, pmsg, utimeout);
+ return status;
+}
+
+/*
+ * ======== node_get_nldr_obj ========
+ */
+int node_get_nldr_obj(struct node_mgr *hnode_mgr,
+ struct nldr_object **phNldrObj)
+{
+ int status = 0;
+ struct node_mgr *node_mgr_obj = hnode_mgr;
+ DBC_REQUIRE(phNldrObj != NULL);
+
+ if (!hnode_mgr)
+ status = -EFAULT;
+ else
+ *phNldrObj = node_mgr_obj->nldr_obj;
+
+ DBC_ENSURE(DSP_SUCCEEDED(status) || ((phNldrObj != NULL) &&
+ (*phNldrObj == NULL)));
+ return status;
+}
+
+/*
+ * ======== node_get_strm_mgr ========
+ * Purpose:
+ * Returns the Stream manager.
+ */
+int node_get_strm_mgr(struct node_object *hnode,
+ struct strm_mgr **phStrmMgr)
+{
+ int status = 0;
+
+ DBC_REQUIRE(refs > 0);
+
+ if (!hnode)
+ status = -EFAULT;
+ else
+ *phStrmMgr = hnode->hnode_mgr->strm_mgr_obj;
+
+ return status;
+}
+
+/*
+ * ======== node_get_load_type ========
+ */
+enum nldr_loadtype node_get_load_type(struct node_object *hnode)
+{
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(hnode);
+ if (!hnode) {
+ dev_dbg(bridge, "%s: Failed. hnode: %p\n", __func__, hnode);
+ return -1;
+ } else {
+ return hnode->dcd_props.obj_data.node_obj.us_load_type;
+ }
+}
+
+/*
+ * ======== node_get_timeout ========
+ * Purpose:
+ * Returns the timeout value for this node.
+ */
+u32 node_get_timeout(struct node_object *hnode)
+{
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(hnode);
+ if (!hnode) {
+ dev_dbg(bridge, "%s: failed. hnode: %p\n", __func__, hnode);
+ return 0;
+ } else {
+ return hnode->utimeout;
+ }
+}
+
+/*
+ * ======== node_get_type ========
+ * Purpose:
+ * Returns the node type.
+ */
+enum node_type node_get_type(struct node_object *hnode)
+{
+ enum node_type node_type;
+
+ if (hnode == (struct node_object *)DSP_HGPPNODE)
+ node_type = NODE_GPP;
+ else {
+ if (!hnode)
+ node_type = -1;
+ else
+ node_type = hnode->ntype;
+ }
+ return node_type;
+}
+
+/*
+ * ======== node_init ========
+ * Purpose:
+ * Initialize the NODE module.
+ */
+bool node_init(void)
+{
+ DBC_REQUIRE(refs >= 0);
+
+ refs++;
+
+ return true;
+}
+
+/*
+ * ======== node_on_exit ========
+ * Purpose:
+ * Gets called when RMS_EXIT is received for a node.
+ */
+void node_on_exit(struct node_object *hnode, s32 nStatus)
+{
+ if (!hnode)
+ return;
+
+ /* Set node state to done */
+ NODE_SET_STATE(hnode, NODE_DONE);
+ hnode->exit_status = nStatus;
+ if (hnode->loaded && hnode->phase_split) {
+ (void)hnode->hnode_mgr->nldr_fxns.pfn_unload(hnode->
+ nldr_node_obj,
+ NLDR_EXECUTE);
+ hnode->loaded = false;
+ }
+ /* Unblock call to node_terminate */
+ (void)sync_set_event(hnode->sync_done);
+ /* Notify clients */
+ proc_notify_clients(hnode->hprocessor, DSP_NODESTATECHANGE);
+ ntfy_notify(hnode->ntfy_obj, DSP_NODESTATECHANGE);
+}
+
+/*
+ * ======== node_pause ========
+ * Purpose:
+ * Suspend execution of a node currently running on the DSP.
+ */
+int node_pause(struct node_object *hnode)
+{
+ struct node_object *pnode = (struct node_object *)hnode;
+ enum node_type node_type;
+ enum node_state state;
+ struct node_mgr *hnode_mgr;
+ int status = 0;
+ u32 proc_id;
+ struct dsp_processorstate proc_state;
+ struct proc_object *hprocessor;
+
+ DBC_REQUIRE(refs > 0);
+
+ if (!hnode) {
+ status = -EFAULT;
+ } else {
+ node_type = node_get_type(hnode);
+ if (node_type != NODE_TASK && node_type != NODE_DAISSOCKET)
+ status = -EPERM;
+ }
+ if (DSP_FAILED(status))
+ goto func_end;
+
+ status = proc_get_processor_id(pnode->hprocessor, &proc_id);
+
+ if (proc_id == IVA_UNIT)
+ status = -ENOSYS;
+
+ if (DSP_SUCCEEDED(status)) {
+ hnode_mgr = hnode->hnode_mgr;
+
+ /* Enter critical section */
+ mutex_lock(&hnode_mgr->node_mgr_lock);
+ state = node_get_state(hnode);
+ /* Check node state */
+ if (state != NODE_RUNNING)
+ status = -EBADR;
+
+ if (DSP_FAILED(status))
+ goto func_cont;
+ hprocessor = hnode->hprocessor;
+ status = proc_get_state(hprocessor, &proc_state,
+ sizeof(struct dsp_processorstate));
+ if (DSP_FAILED(status))
+ goto func_cont;
+ /* If processor is in error state then don't attempt
+ to send the message */
+ if (proc_state.proc_state == PROC_ERROR) {
+ status = -EPERM;
+ goto func_cont;
+ }
+
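+ /* A pause is implemented by dropping the node to the
+ * suspended priority on the DSP */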
+ status = disp_node_change_priority(hnode_mgr->disp_obj, hnode,
+ hnode_mgr->ul_fxn_addrs[RMSCHANGENODEPRIORITY],
+ hnode->node_env, NODE_SUSPENDEDPRI);
+
+ /* Update state */
+ if (DSP_SUCCEEDED(status))
+ NODE_SET_STATE(hnode, NODE_PAUSED);
+
+func_cont:
+ /* End of sync_enter_cs */
+ /* Leave critical section */
+ mutex_unlock(&hnode_mgr->node_mgr_lock);
+ if (DSP_SUCCEEDED(status)) {
+ proc_notify_clients(hnode->hprocessor,
+ DSP_NODESTATECHANGE);
+ ntfy_notify(hnode->ntfy_obj, DSP_NODESTATECHANGE);
+ }
+ }
+func_end:
+ dev_dbg(bridge, "%s: hnode: %p status 0x%x\n", __func__, hnode, status);
+ return status;
+}
+
+/*
+ * ======== node_put_message ========
+ * Purpose:
+ * Send a message to a message node, task node, or XDAIS socket node. This
+ * function will block until the message stream can accommodate the
+ * message, or a timeout occurs.
+ */
+int node_put_message(struct node_object *hnode,
+ IN CONST struct dsp_msg *pmsg, u32 utimeout)
+{
+ struct node_mgr *hnode_mgr = NULL;
+ enum node_type node_type;
+ struct bridge_drv_interface *intf_fxns;
+ enum node_state state;
+ int status = 0;
+ void *tmp_buf;
+ struct dsp_msg new_msg;
+ struct dsp_processorstate proc_state;
+ struct proc_object *hprocessor;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(pmsg != NULL);
+
+ if (!hnode) {
+ status = -EFAULT;
+ goto func_end;
+ }
+ hprocessor = hnode->hprocessor;
+ status = proc_get_state(hprocessor, &proc_state,
+ sizeof(struct dsp_processorstate));
+ if (DSP_FAILED(status))
+ goto func_end;
+ /* If processor is in bad state then don't attempt sending the
+ message */
+ if (proc_state.proc_state == PROC_ERROR) {
+ status = -EPERM;
+ goto func_end;
+ }
+ hnode_mgr = hnode->hnode_mgr;
+ node_type = node_get_type(hnode);
+ if (node_type != NODE_MESSAGE && node_type != NODE_TASK &&
+ node_type != NODE_DAISSOCKET)
+ status = -EPERM;
+
+ if (DSP_SUCCEEDED(status)) {
+ /* Check node state. Can't send messages to a node after
+ * we've sent the RMS_EXIT command. There is still the
+ * possibility that node_terminate can be called after we've
+ * checked the state. Could add another SYNC object to
+ * prevent this (can't use node_mgr_lock, since we don't
+ * want to block other NODE functions). However, the node may
+ * still exit on its own, before this message is sent. */
+ mutex_lock(&hnode_mgr->node_mgr_lock);
+ state = node_get_state(hnode);
+ if (state == NODE_TERMINATING || state == NODE_DONE)
+ status = -EBADR;
+
+ /* end of sync_enter_cs */
+ mutex_unlock(&hnode_mgr->node_mgr_lock);
+ }
+ if (DSP_FAILED(status))
+ goto func_end;
+
+ /* assign pmsg values to new msg */
+ new_msg = *pmsg;
+ /* Now, check if message contains a SM buffer descriptor */
+ if (pmsg->dw_cmd & DSP_RMSBUFDESC) {
+ /* Translate GPP Va to DSP physical buf Ptr. */
+ tmp_buf = cmm_xlator_translate(hnode->xlator,
+ (void *)new_msg.dw_arg1,
+ CMM_VA2DSPPA);
+ if (tmp_buf != NULL) {
+ /* got translation, convert to MAUs in msg */
+ if (hnode->hnode_mgr->udsp_word_size != 0) {
+ new_msg.dw_arg1 =
+ (u32) tmp_buf /
+ hnode->hnode_mgr->udsp_word_size;
+ /* MAUs */
+ new_msg.dw_arg2 /= hnode->hnode_mgr->
+ udsp_word_size;
+ } else {
+ pr_err("%s: udsp_word_size is zero!\n",
+ __func__);
+ status = -EPERM; /* bad DSPWordSize */
+ }
+ } else { /* failed to translate buffer address */
+ status = -ESRCH;
+ }
+ }
+ if (DSP_SUCCEEDED(status)) {
+ intf_fxns = hnode_mgr->intf_fxns;
+ status = (*intf_fxns->pfn_msg_put) (hnode->msg_queue_obj,
+ &new_msg, utimeout);
+ }
+func_end:
+ dev_dbg(bridge, "%s: hnode: %p pmsg: %p utimeout: 0x%x, "
+ "status 0x%x\n", __func__, hnode, pmsg, utimeout, status);
+ return status;
+}
+
+/*
+ * ======== node_register_notify ========
+ * Purpose:
+ * Register to be notified on specific events for this node.
+ */
+int node_register_notify(struct node_object *hnode, u32 event_mask,
+ u32 notify_type,
+ struct dsp_notification *hnotification)
+{
+ struct bridge_drv_interface *intf_fxns;
+ int status = 0;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(hnotification != NULL);
+
+ if (!hnode) {
+ status = -EFAULT;
+ } else {
+ /* Check if event mask is a valid node related event */
+ if (event_mask & ~(DSP_NODESTATECHANGE | DSP_NODEMESSAGEREADY))
+ status = -EINVAL;
+
+ /* Check if notify type is valid */
+ if (notify_type != DSP_SIGNALEVENT)
+ status = -EINVAL;
+
+ /* Only one Notification can be registered at a
+ * time - Limitation */
+ if (event_mask == (DSP_NODESTATECHANGE | DSP_NODEMESSAGEREADY))
+ status = -EINVAL;
+ }
+ if (DSP_SUCCEEDED(status)) {
+ if (event_mask == DSP_NODESTATECHANGE) {
+ status = ntfy_register(hnode->ntfy_obj, hnotification,
+ event_mask & DSP_NODESTATECHANGE,
+ notify_type);
+ } else {
+ /* Send Message part of event mask to msg_ctrl */
+ intf_fxns = hnode->hnode_mgr->intf_fxns;
+ status = (*intf_fxns->pfn_msg_register_notify)
+ (hnode->msg_queue_obj,
+ event_mask & DSP_NODEMESSAGEREADY, notify_type,
+ hnotification);
+ }
+
+ }
+ dev_dbg(bridge, "%s: hnode: %p event_mask: 0x%x notify_type: 0x%x "
+ "hnotification: %p status 0x%x\n", __func__, hnode,
+ event_mask, notify_type, hnotification, status);
+ return status;
+}
+
+/*
+ * ======== node_run ========
+ * Purpose:
+ * Start execution of a node's execute phase, or resume execution of a node
+ * that has been suspended (via node_pause()) on the DSP. Load the
+ * node's execute function if necessary.
+ */
+int node_run(struct node_object *hnode)
+{
+ struct node_object *pnode = (struct node_object *)hnode;
+ struct node_mgr *hnode_mgr;
+ enum node_type node_type;
+ enum node_state state;
+ u32 ul_execute_fxn;
+ u32 ul_fxn_addr;
+ int status = 0;
+ u32 proc_id;
+ struct bridge_drv_interface *intf_fxns;
+ struct dsp_processorstate proc_state;
+ struct proc_object *hprocessor;
+
+ DBC_REQUIRE(refs > 0);
+
+ if (!hnode) {
+ status = -EFAULT;
+ goto func_end;
+ }
+ hprocessor = hnode->hprocessor;
+ status = proc_get_state(hprocessor, &proc_state,
+ sizeof(struct dsp_processorstate));
+ if (DSP_FAILED(status))
+ goto func_end;
+ /* If processor is in error state then don't attempt to run the node */
+ if (proc_state.proc_state == PROC_ERROR) {
+ status = -EPERM;
+ goto func_end;
+ }
+ node_type = node_get_type(hnode);
+ if (node_type == NODE_DEVICE)
+ status = -EPERM;
+ if (DSP_FAILED(status))
+ goto func_end;
+
+ hnode_mgr = hnode->hnode_mgr;
+ if (!hnode_mgr) {
+ status = -EFAULT;
+ goto func_end;
+ }
+ intf_fxns = hnode_mgr->intf_fxns;
+ /* Enter critical section */
+ mutex_lock(&hnode_mgr->node_mgr_lock);
+
+ state = node_get_state(hnode);
+ if (state != NODE_CREATED && state != NODE_PAUSED)
+ status = -EBADR;
+
+ if (DSP_SUCCEEDED(status))
+ status = proc_get_processor_id(pnode->hprocessor, &proc_id);
+
+ if (DSP_FAILED(status))
+ goto func_cont1;
+
+ if ((proc_id != DSP_UNIT) && (proc_id != IVA_UNIT))
+ goto func_cont1;
+
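+ /* A newly created node has its execute phase dispatched to the DSP;
+ * a paused node is resumed by restoring its saved priority */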
+ if (state == NODE_CREATED) {
+ /* If node's execute function is not loaded, load it */
+ if (!(hnode->loaded) && hnode->phase_split) {
+ status =
+ hnode_mgr->nldr_fxns.pfn_load(hnode->nldr_node_obj,
+ NLDR_EXECUTE);
+ if (DSP_SUCCEEDED(status)) {
+ hnode->loaded = true;
+ } else {
+ pr_err("%s: fail - load execute code: 0x%x\n",
+ __func__, status);
+ }
+ }
+ if (DSP_SUCCEEDED(status)) {
+ /* Get address of node's execute function */
+ if (proc_id == IVA_UNIT)
+ ul_execute_fxn = (u32) hnode->node_env;
+ else {
+ status = get_fxn_address(hnode, &ul_execute_fxn,
+ EXECUTEPHASE);
+ }
+ }
+ if (DSP_SUCCEEDED(status)) {
+ ul_fxn_addr = hnode_mgr->ul_fxn_addrs[RMSEXECUTENODE];
+ status =
+ disp_node_run(hnode_mgr->disp_obj, hnode,
+ ul_fxn_addr, ul_execute_fxn,
+ hnode->node_env);
+ }
+ } else if (state == NODE_PAUSED) {
+ ul_fxn_addr = hnode_mgr->ul_fxn_addrs[RMSCHANGENODEPRIORITY];
+ status = disp_node_change_priority(hnode_mgr->disp_obj, hnode,
+ ul_fxn_addr, hnode->node_env,
+ NODE_GET_PRIORITY(hnode));
+ } else {
+ /* We should never get here */
+ DBC_ASSERT(false);
+ }
+func_cont1:
+ /* Update node state. */
+ if (DSP_SUCCEEDED(status))
+ NODE_SET_STATE(hnode, NODE_RUNNING);
+ else /* Set state back to previous value */
+ NODE_SET_STATE(hnode, state);
+ /*End of sync_enter_cs */
+ /* Exit critical section */
+ mutex_unlock(&hnode_mgr->node_mgr_lock);
+ if (DSP_SUCCEEDED(status)) {
+ proc_notify_clients(hnode->hprocessor, DSP_NODESTATECHANGE);
+ ntfy_notify(hnode->ntfy_obj, DSP_NODESTATECHANGE);
+ }
+func_end:
+ dev_dbg(bridge, "%s: hnode: %p status 0x%x\n", __func__, hnode, status);
+ return status;
+}
+
+/*
+ * ======== node_terminate ========
+ * Purpose:
+ * Signal a node running on the DSP that it should exit its execute phase
+ * function.
+ */
+int node_terminate(struct node_object *hnode, OUT int *pstatus)
+{
+ struct node_object *pnode = (struct node_object *)hnode;
+ struct node_mgr *hnode_mgr = NULL;
+ enum node_type node_type;
+ struct bridge_drv_interface *intf_fxns;
+ enum node_state state;
+ struct dsp_msg msg, killmsg;
+ int status = 0;
+ u32 proc_id, kill_time_out;
+ struct deh_mgr *hdeh_mgr;
+ struct dsp_processorstate proc_state;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(pstatus != NULL);
+
+ if (!hnode || !hnode->hnode_mgr) {
+ status = -EFAULT;
+ goto func_end;
+ }
+ if (pnode->hprocessor == NULL) {
+ status = -EFAULT;
+ goto func_end;
+ }
+ status = proc_get_processor_id(pnode->hprocessor, &proc_id);
+
+ if (DSP_SUCCEEDED(status)) {
+ hnode_mgr = hnode->hnode_mgr;
+ node_type = node_get_type(hnode);
+ if (node_type != NODE_TASK && node_type != NODE_DAISSOCKET)
+ status = -EPERM;
+ }
+ if (DSP_SUCCEEDED(status)) {
+ /* Check node state */
+ mutex_lock(&hnode_mgr->node_mgr_lock);
+ state = node_get_state(hnode);
+ if (state != NODE_RUNNING) {
+ status = -EBADR;
+ /* Set the exit status if node terminated on
+ * its own. */
+ if (state == NODE_DONE)
+ *pstatus = hnode->exit_status;
+
+ } else {
+ NODE_SET_STATE(hnode, NODE_TERMINATING);
+ }
+ /* end of sync_enter_cs */
+ mutex_unlock(&hnode_mgr->node_mgr_lock);
+ }
+ if (DSP_SUCCEEDED(status)) {
+ /*
+ * Send exit message. Do not change state to NODE_DONE
+ * here. That will be done in callback.
+ */
+ status = proc_get_state(pnode->hprocessor, &proc_state,
+ sizeof(struct dsp_processorstate));
+ if (DSP_FAILED(status))
+ goto func_cont;
+ /* If processor is in error state then don't attempt to send
+ * a kill task command */
+ if (proc_state.proc_state == PROC_ERROR) {
+ status = -EPERM;
+ goto func_cont;
+ }
+
+ msg.dw_cmd = RMS_EXIT;
+ msg.dw_arg1 = hnode->node_env;
+ killmsg.dw_cmd = RMS_KILLTASK;
+ killmsg.dw_arg1 = hnode->node_env;
+ intf_fxns = hnode_mgr->intf_fxns;
+
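+ /* Allow up to twice the node's own timeout (capped at MAXTIMEOUT)
+ * for the RMS_EXIT request before escalating to RMS_KILLTASK */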
+ if (hnode->utimeout > MAXTIMEOUT)
+ kill_time_out = MAXTIMEOUT;
+ else
+ kill_time_out = (hnode->utimeout) * 2;
+
+ status = (*intf_fxns->pfn_msg_put) (hnode->msg_queue_obj, &msg,
+ hnode->utimeout);
+ if (DSP_FAILED(status))
+ goto func_cont;
+
+ /*
+ * Wait on synchronization object that will be
+ * posted in the callback on receiving RMS_EXIT
+ * message, or by node_delete. Check for valid hnode,
+ * in case posted by node_delete().
+ */
+ status = sync_wait_on_event(hnode->sync_done,
+ kill_time_out / 2);
+ if (status != ETIME)
+ goto func_cont;
+
+ status = (*intf_fxns->pfn_msg_put)(hnode->msg_queue_obj,
+ &killmsg, hnode->utimeout);
+ if (DSP_FAILED(status))
+ goto func_cont;
+ status = sync_wait_on_event(hnode->sync_done,
+ kill_time_out / 2);
+ if (DSP_FAILED(status)) {
+ /*
+ * The node did not terminate in time; simulate a DSP
+ * exception so the DEH manager can handle the error.
+ */
+ dev_get_deh_mgr(hnode_mgr->hdev_obj, &hdeh_mgr);
+ if (!hdeh_mgr)
+ goto func_cont;
+
+ (*intf_fxns->pfn_deh_notify)(hdeh_mgr, DSP_SYSERROR,
+ DSP_EXCEPTIONABORT);
+ }
+ }
+func_cont:
+ if (DSP_SUCCEEDED(status)) {
+ /* Enter CS before getting exit status, in case node was
+ * deleted. */
+ mutex_lock(&hnode_mgr->node_mgr_lock);
+ /* Make sure node wasn't deleted while we blocked */
+ if (!hnode) {
+ status = -EPERM;
+ } else {
+ *pstatus = hnode->exit_status;
+ dev_dbg(bridge, "%s: hnode: %p env 0x%x status 0x%x\n",
+ __func__, hnode, hnode->node_env, status);
+ }
+ mutex_unlock(&hnode_mgr->node_mgr_lock);
+ } /*End of sync_enter_cs */
+func_end:
+ return status;
+}
+
+/*
+ * ======== delete_node ========
+ * Purpose:
+ * Free GPP resources allocated in node_allocate() or node_connect().
+ */
+static void delete_node(struct node_object *hnode,
+ struct process_context *pr_ctxt)
+{
+ struct node_mgr *hnode_mgr;
+ struct cmm_xlatorobject *xlator;
+ struct bridge_drv_interface *intf_fxns;
+ u32 i;
+ enum node_type node_type;
+ struct stream_chnl stream;
+ struct node_msgargs node_msg_args;
+ struct node_taskargs task_arg_obj;
+#ifdef DSP_DMM_DEBUG
+ struct dmm_object *dmm_mgr;
+ struct proc_object *p_proc_object =
+ (struct proc_object *)hnode->hprocessor;
+#endif
+ int status;
+ if (!hnode)
+ goto func_end;
+ hnode_mgr = hnode->hnode_mgr;
+ if (!hnode_mgr)
+ goto func_end;
+ xlator = hnode->xlator;
+ node_type = node_get_type(hnode);
+ if (node_type != NODE_DEVICE) {
+ node_msg_args = hnode->create_args.asa.node_msg_args;
+ kfree(node_msg_args.pdata);
+
+ /* Free msg_ctrl queue */
+ if (hnode->msg_queue_obj) {
+ intf_fxns = hnode_mgr->intf_fxns;
+ (*intf_fxns->pfn_msg_delete_queue) (hnode->
+ msg_queue_obj);
+ hnode->msg_queue_obj = NULL;
+ }
+
+ kfree(hnode->sync_done);
+
+ /* Free all stream info */
+ if (hnode->inputs) {
+ for (i = 0; i < MAX_INPUTS(hnode); i++) {
+ stream = hnode->inputs[i];
+ free_stream(hnode_mgr, stream);
+ }
+ kfree(hnode->inputs);
+ hnode->inputs = NULL;
+ }
+ if (hnode->outputs) {
+ for (i = 0; i < MAX_OUTPUTS(hnode); i++) {
+ stream = hnode->outputs[i];
+ free_stream(hnode_mgr, stream);
+ }
+ kfree(hnode->outputs);
+ hnode->outputs = NULL;
+ }
+ task_arg_obj = hnode->create_args.asa.task_arg_obj;
+ if (task_arg_obj.strm_in_def) {
+ for (i = 0; i < MAX_INPUTS(hnode); i++) {
+ kfree(task_arg_obj.strm_in_def[i].sz_device);
+ task_arg_obj.strm_in_def[i].sz_device = NULL;
+ }
+ kfree(task_arg_obj.strm_in_def);
+ task_arg_obj.strm_in_def = NULL;
+ }
+ if (task_arg_obj.strm_out_def) {
+ for (i = 0; i < MAX_OUTPUTS(hnode); i++) {
+ kfree(task_arg_obj.strm_out_def[i].sz_device);
+ task_arg_obj.strm_out_def[i].sz_device = NULL;
+ }
+ kfree(task_arg_obj.strm_out_def);
+ task_arg_obj.strm_out_def = NULL;
+ }
+ if (task_arg_obj.udsp_heap_res_addr) {
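+ /* Unmap and unreserve the DSP heap that was set up for
+ * this node */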
+ status = proc_un_map(hnode->hprocessor, (void *)
+ task_arg_obj.udsp_heap_addr,
+ pr_ctxt);
+
+ status = proc_un_reserve_memory(hnode->hprocessor,
+ (void *)
+ task_arg_obj.
+ udsp_heap_res_addr,
+ pr_ctxt);
+#ifdef DSP_DMM_DEBUG
+ status = dmm_get_handle(p_proc_object, &dmm_mgr);
+ if (dmm_mgr)
+ dmm_mem_map_dump(dmm_mgr);
+ else
+ status = DSP_EHANDLE;
+#endif
+ }
+ }
+ if (node_type != NODE_MESSAGE) {
+ kfree(hnode->stream_connect);
+ hnode->stream_connect = NULL;
+ }
+ kfree(hnode->pstr_dev_name);
+ hnode->pstr_dev_name = NULL;
+
+ if (hnode->ntfy_obj) {
+ ntfy_delete(hnode->ntfy_obj);
+ kfree(hnode->ntfy_obj);
+ hnode->ntfy_obj = NULL;
+ }
+
+ /* These were allocated in dcd_get_object_def (via node_allocate) */
+ kfree(hnode->dcd_props.obj_data.node_obj.pstr_create_phase_fxn);
+ hnode->dcd_props.obj_data.node_obj.pstr_create_phase_fxn = NULL;
+
+ kfree(hnode->dcd_props.obj_data.node_obj.pstr_execute_phase_fxn);
+ hnode->dcd_props.obj_data.node_obj.pstr_execute_phase_fxn = NULL;
+
+ kfree(hnode->dcd_props.obj_data.node_obj.pstr_delete_phase_fxn);
+ hnode->dcd_props.obj_data.node_obj.pstr_delete_phase_fxn = NULL;
+
+ kfree(hnode->dcd_props.obj_data.node_obj.pstr_i_alg_name);
+ hnode->dcd_props.obj_data.node_obj.pstr_i_alg_name = NULL;
+
+ /* Free all SM address translator resources */
+ if (xlator) {
+ (void)cmm_xlator_delete(xlator, TRUE); /* force free */
+ xlator = NULL;
+ }
+
+ kfree(hnode->nldr_node_obj);
+ hnode->nldr_node_obj = NULL;
+ hnode->hnode_mgr = NULL;
+ kfree(hnode);
+ hnode = NULL;
+func_end:
+ return;
+}
+
+/*
+ * ======== delete_node_mgr ========
+ * Purpose:
+ * Frees the node manager.
+ */
+static void delete_node_mgr(struct node_mgr *hnode_mgr)
+{
+ struct node_object *hnode;
+
+ if (hnode_mgr) {
+ /* Free resources */
+ if (hnode_mgr->hdcd_mgr)
+ dcd_destroy_manager(hnode_mgr->hdcd_mgr);
+
+ /* Remove any elements remaining in lists */
+ if (hnode_mgr->node_list) {
+ while ((hnode = (struct node_object *)
+ lst_get_head(hnode_mgr->node_list)))
+ delete_node(hnode, NULL);
+
+ DBC_ASSERT(LST_IS_EMPTY(hnode_mgr->node_list));
+ kfree(hnode_mgr->node_list);
+ }
+ mutex_destroy(&hnode_mgr->node_mgr_lock);
+ if (hnode_mgr->ntfy_obj) {
+ ntfy_delete(hnode_mgr->ntfy_obj);
+ kfree(hnode_mgr->ntfy_obj);
+ }
+
+ if (hnode_mgr->pipe_map)
+ gb_delete(hnode_mgr->pipe_map);
+
+ if (hnode_mgr->pipe_done_map)
+ gb_delete(hnode_mgr->pipe_done_map);
+
+ if (hnode_mgr->chnl_map)
+ gb_delete(hnode_mgr->chnl_map);
+
+ if (hnode_mgr->dma_chnl_map)
+ gb_delete(hnode_mgr->dma_chnl_map);
+
+ if (hnode_mgr->zc_chnl_map)
+ gb_delete(hnode_mgr->zc_chnl_map);
+
+ if (hnode_mgr->disp_obj)
+ disp_delete(hnode_mgr->disp_obj);
+
+ if (hnode_mgr->strm_mgr_obj)
+ strm_delete(hnode_mgr->strm_mgr_obj);
+
+ /* Delete the loader */
+ if (hnode_mgr->nldr_obj)
+ hnode_mgr->nldr_fxns.pfn_delete(hnode_mgr->nldr_obj);
+
+ if (hnode_mgr->loader_init)
+ hnode_mgr->nldr_fxns.pfn_exit();
+
+ kfree(hnode_mgr);
+ }
+}
+
+/*
+ * ======== fill_stream_connect ========
+ * Purpose:
+ * Fills stream information.
+ */
+static void fill_stream_connect(struct node_object *hNode1,
+ struct node_object *hNode2,
+ u32 uStream1, u32 uStream2)
+{
+ u32 strm_index;
+ struct dsp_streamconnect *strm1 = NULL;
+ struct dsp_streamconnect *strm2 = NULL;
+ enum node_type node1_type = NODE_TASK;
+ enum node_type node2_type = NODE_TASK;
+
+ node1_type = node_get_type(hNode1);
+ node2_type = node_get_type(hNode2);
+ if (hNode1 != (struct node_object *)DSP_HGPPNODE) {
+
+ if (node1_type != NODE_DEVICE) {
+ strm_index = hNode1->num_inputs +
+ hNode1->num_outputs - 1;
+ strm1 = &(hNode1->stream_connect[strm_index]);
+ strm1->cb_struct = sizeof(struct dsp_streamconnect);
+ strm1->this_node_stream_index = uStream1;
+ }
+
+ if (hNode2 != (struct node_object *)DSP_HGPPNODE) {
+ /* NODE == > NODE */
+ if (node1_type != NODE_DEVICE) {
+ strm1->connected_node = hNode2;
+ strm1->ui_connected_node_id = hNode2->node_uuid;
+ strm1->connected_node_stream_index = uStream2;
+ strm1->connect_type = CONNECTTYPE_NODEOUTPUT;
+ }
+ if (node2_type != NODE_DEVICE) {
+ strm_index = hNode2->num_inputs +
+ hNode2->num_outputs - 1;
+ strm2 = &(hNode2->stream_connect[strm_index]);
+ strm2->cb_struct =
+ sizeof(struct dsp_streamconnect);
+ strm2->this_node_stream_index = uStream2;
+ strm2->connected_node = hNode1;
+ strm2->ui_connected_node_id = hNode1->node_uuid;
+ strm2->connected_node_stream_index = uStream1;
+ strm2->connect_type = CONNECTTYPE_NODEINPUT;
+ }
+ } else if (node1_type != NODE_DEVICE)
+ strm1->connect_type = CONNECTTYPE_GPPOUTPUT;
+ } else {
+ /* GPP == > NODE */
+ DBC_ASSERT(hNode2 != (struct node_object *)DSP_HGPPNODE);
+ strm_index = hNode2->num_inputs + hNode2->num_outputs - 1;
+ strm2 = &(hNode2->stream_connect[strm_index]);
+ strm2->cb_struct = sizeof(struct dsp_streamconnect);
+ strm2->this_node_stream_index = uStream2;
+ strm2->connect_type = CONNECTTYPE_GPPINPUT;
+ }
+}
+
+/*
+ * ======== fill_stream_def ========
+ * Purpose:
+ * Fills Stream attributes.
+ */
+static void fill_stream_def(struct node_object *hnode,
+ struct node_strmdef *pstrm_def,
+ struct dsp_strmattr *pattrs)
+{
+ struct node_mgr *hnode_mgr = hnode->hnode_mgr;
+
+ if (pattrs != NULL) {
+ pstrm_def->num_bufs = pattrs->num_bufs;
+ pstrm_def->buf_size =
+ pattrs->buf_size / hnode_mgr->udsp_data_mau_size;
+ pstrm_def->seg_id = pattrs->seg_id;
+ pstrm_def->buf_alignment = pattrs->buf_alignment;
+ pstrm_def->utimeout = pattrs->utimeout;
+ } else {
+ pstrm_def->num_bufs = DEFAULTNBUFS;
+ pstrm_def->buf_size =
+ DEFAULTBUFSIZE / hnode_mgr->udsp_data_mau_size;
+ pstrm_def->seg_id = DEFAULTSEGID;
+ pstrm_def->buf_alignment = DEFAULTALIGNMENT;
+ pstrm_def->utimeout = DEFAULTTIMEOUT;
+ }
+}
+
+/*
+ * ======== free_stream ========
+ * Purpose:
+ * Updates the channel mask and frees the pipe id.
+ */
+static void free_stream(struct node_mgr *hnode_mgr, struct stream_chnl stream)
+{
+ /* Free the pipe id only if the other node has already been deleted. */
+ if (stream.type == NODECONNECT) {
+ if (gb_test(hnode_mgr->pipe_done_map, stream.dev_id)) {
+ /* The other node has already been deleted */
+ gb_clear(hnode_mgr->pipe_done_map, stream.dev_id);
+ gb_clear(hnode_mgr->pipe_map, stream.dev_id);
+ } else {
+ /* The other node has not been deleted yet */
+ gb_set(hnode_mgr->pipe_done_map, stream.dev_id);
+ }
+ } else if (stream.type == HOSTCONNECT) {
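+ /*
+ * Host channel ids are partitioned into three consecutive ranges of
+ * ul_num_chnls entries each: regular channels, DSP-DMA channels and
+ * zero-copy channels.
+ */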
+ if (stream.dev_id < hnode_mgr->ul_num_chnls) {
+ gb_clear(hnode_mgr->chnl_map, stream.dev_id);
+ } else if (stream.dev_id < (2 * hnode_mgr->ul_num_chnls)) {
+ /* dsp-dma */
+ gb_clear(hnode_mgr->dma_chnl_map, stream.dev_id -
+ (1 * hnode_mgr->ul_num_chnls));
+ } else if (stream.dev_id < (3 * hnode_mgr->ul_num_chnls)) {
+ /* zero-copy */
+ gb_clear(hnode_mgr->zc_chnl_map, stream.dev_id -
+ (2 * hnode_mgr->ul_num_chnls));
+ }
+ }
+}
+
+/*
+ * ======== get_fxn_address ========
+ * Purpose:
+ * Retrieves the address for create, execute or delete phase for a node.
+ */
+static int get_fxn_address(struct node_object *hnode, u32 * pulFxnAddr,
+ u32 uPhase)
+{
+ char *pstr_fxn_name = NULL;
+ struct node_mgr *hnode_mgr = hnode->hnode_mgr;
+ int status = 0;
+ DBC_REQUIRE(node_get_type(hnode) == NODE_TASK ||
+ node_get_type(hnode) == NODE_DAISSOCKET ||
+ node_get_type(hnode) == NODE_MESSAGE);
+
+ switch (uPhase) {
+ case CREATEPHASE:
+ pstr_fxn_name =
+ hnode->dcd_props.obj_data.node_obj.pstr_create_phase_fxn;
+ break;
+ case EXECUTEPHASE:
+ pstr_fxn_name =
+ hnode->dcd_props.obj_data.node_obj.pstr_execute_phase_fxn;
+ break;
+ case DELETEPHASE:
+ pstr_fxn_name =
+ hnode->dcd_props.obj_data.node_obj.pstr_delete_phase_fxn;
+ break;
+ default:
+ /* Should never get here */
+ DBC_ASSERT(false);
+ break;
+ }
+
+ status =
+ hnode_mgr->nldr_fxns.pfn_get_fxn_addr(hnode->nldr_node_obj,
+ pstr_fxn_name, pulFxnAddr);
+
+ return status;
+}
+
+/*
+ * ======== get_node_info ========
+ * Purpose:
+ * Retrieves the node information.
+ */
+void get_node_info(struct node_object *hnode, struct dsp_nodeinfo *pNodeInfo)
+{
+ u32 i;
+
+ DBC_REQUIRE(hnode);
+ DBC_REQUIRE(pNodeInfo != NULL);
+
+ pNodeInfo->cb_struct = sizeof(struct dsp_nodeinfo);
+ pNodeInfo->nb_node_database_props =
+ hnode->dcd_props.obj_data.node_obj.ndb_props;
+ pNodeInfo->execution_priority = hnode->prio;
+ pNodeInfo->device_owner = hnode->device_owner;
+ pNodeInfo->number_streams = hnode->num_inputs + hnode->num_outputs;
+ pNodeInfo->node_env = hnode->node_env;
+
+ pNodeInfo->ns_execution_state = node_get_state(hnode);
+
+ /* Copy stream connect data */
+ for (i = 0; i < hnode->num_inputs + hnode->num_outputs; i++)
+ pNodeInfo->sc_stream_connection[i] = hnode->stream_connect[i];
+
+}
+
+/*
+ * ======== get_node_props ========
+ * Purpose:
+ * Retrieve node properties.
+ */
+static int get_node_props(struct dcd_manager *hdcd_mgr,
+ struct node_object *hnode,
+ CONST struct dsp_uuid *pNodeId,
+ struct dcd_genericobj *pdcdProps)
+{
+ u32 len;
+ struct node_msgargs *pmsg_args;
+ struct node_taskargs *task_arg_obj;
+ enum node_type node_type = NODE_TASK;
+ struct dsp_ndbprops *pndb_props =
+ &(pdcdProps->obj_data.node_obj.ndb_props);
+ int status = 0;
+ char sz_uuid[MAXUUIDLEN];
+
+ status = dcd_get_object_def(hdcd_mgr, (struct dsp_uuid *)pNodeId,
+ DSP_DCDNODETYPE, pdcdProps);
+
+ if (DSP_SUCCEEDED(status)) {
+ hnode->ntype = node_type = pndb_props->ntype;
+
+ /* Create UUID value to set in registry. */
+ uuid_uuid_to_string((struct dsp_uuid *)pNodeId, sz_uuid,
+ MAXUUIDLEN);
+ dev_dbg(bridge, "(node) UUID: %s\n", sz_uuid);
+
+ /* Fill in message args that come from NDB */
+ if (node_type != NODE_DEVICE) {
+ pmsg_args = &(hnode->create_args.asa.node_msg_args);
+ pmsg_args->seg_id =
+ pdcdProps->obj_data.node_obj.msg_segid;
+ pmsg_args->notify_type =
+ pdcdProps->obj_data.node_obj.msg_notify_type;
+ pmsg_args->max_msgs = pndb_props->message_depth;
+ dev_dbg(bridge, "(node) Max Number of Messages: 0x%x\n",
+ pmsg_args->max_msgs);
+ } else {
+ /* Copy device name */
+ DBC_REQUIRE(pndb_props->ac_name);
+ len = strlen(pndb_props->ac_name);
+ DBC_ASSERT(len < MAXDEVNAMELEN);
+ hnode->pstr_dev_name = kzalloc(len + 1, GFP_KERNEL);
+ if (hnode->pstr_dev_name == NULL) {
+ status = -ENOMEM;
+ } else {
+ strncpy(hnode->pstr_dev_name,
+ pndb_props->ac_name, len);
+ }
+ }
+ }
+ if (DSP_SUCCEEDED(status)) {
+ /* Fill in create args that come from NDB */
+ if (node_type == NODE_TASK || node_type == NODE_DAISSOCKET) {
+ task_arg_obj = &(hnode->create_args.asa.task_arg_obj);
+ task_arg_obj->prio = pndb_props->prio;
+ task_arg_obj->stack_size = pndb_props->stack_size;
+ task_arg_obj->sys_stack_size =
+ pndb_props->sys_stack_size;
+ task_arg_obj->stack_seg = pndb_props->stack_seg;
+ dev_dbg(bridge, "(node) Priority: 0x%x Stack Size: "
+ "0x%x words System Stack Size: 0x%x words "
+ "Stack Segment: 0x%x profile count : 0x%x\n",
+ task_arg_obj->prio, task_arg_obj->stack_size,
+ task_arg_obj->sys_stack_size,
+ task_arg_obj->stack_seg,
+ pndb_props->count_profiles);
+ }
+ }
+
+ return status;
+}
+
+/*
+ * ======== get_proc_props ========
+ * Purpose:
+ * Retrieve the processor properties.
+ */
+static int get_proc_props(struct node_mgr *hnode_mgr,
+ struct dev_object *hdev_obj)
+{
+ struct cfg_hostres *host_res;
+ struct bridge_dev_context *pbridge_context;
+ int status = 0;
+
+ status = dev_get_bridge_context(hdev_obj, &pbridge_context);
+ if (!pbridge_context)
+ status = -EFAULT;
+
+ if (DSP_SUCCEEDED(status)) {
+ host_res = pbridge_context->resources;
+ if (!host_res)
+ return -EPERM;
+ hnode_mgr->ul_chnl_offset = host_res->dw_chnl_offset;
+ hnode_mgr->ul_chnl_buf_size = host_res->dw_chnl_buf_size;
+ hnode_mgr->ul_num_chnls = host_res->dw_num_chnls;
+
+ /*
+ * PROC will add an API to get dsp_processorinfo.
+ * Fill in default values for now.
+ */
+ /* TODO -- Instead of hard coding, take from registry */
+ hnode_mgr->proc_family = 6000;
+ hnode_mgr->proc_type = 6410;
+ hnode_mgr->min_pri = DSP_NODE_MIN_PRIORITY;
+ hnode_mgr->max_pri = DSP_NODE_MAX_PRIORITY;
+ hnode_mgr->udsp_word_size = DSPWORDSIZE;
+ hnode_mgr->udsp_data_mau_size = DSPWORDSIZE;
+ hnode_mgr->udsp_mau_size = 1;
+
+ }
+ return status;
+}
+
+/*
+ * ======== node_get_uuid_props ========
+ * Purpose:
+ * Fetch Node UUID properties from DCD/DOF file.
+ */
+int node_get_uuid_props(void *hprocessor,
+ IN CONST struct dsp_uuid *pNodeId,
+ OUT struct dsp_ndbprops *node_props)
+{
+ struct node_mgr *hnode_mgr = NULL;
+ struct dev_object *hdev_obj;
+ int status = 0;
+ struct dcd_nodeprops dcd_node_props;
+ struct dsp_processorstate proc_state;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(hprocessor != NULL);
+ DBC_REQUIRE(pNodeId != NULL);
+
+ if (hprocessor == NULL || pNodeId == NULL) {
+ status = -EFAULT;
+ goto func_end;
+ }
+ status = proc_get_state(hprocessor, &proc_state,
+ sizeof(struct dsp_processorstate));
+ if (DSP_FAILED(status))
+ goto func_end;
+ /* If processor is in error state then don't attempt
+ to send the message */
+ if (proc_state.proc_state == PROC_ERROR) {
+ status = -EPERM;
+ goto func_end;
+ }
+
+ status = proc_get_dev_object(hprocessor, &hdev_obj);
+ if (hdev_obj) {
+ status = dev_get_node_manager(hdev_obj, &hnode_mgr);
+ if (hnode_mgr == NULL) {
+ status = -EFAULT;
+ goto func_end;
+ }
+ }
+
+ /*
+ * Enter the critical section. This is needed because
+ * dcd_get_object_def will ultimately end up calling dbll_open/close,
+ * which needs to be protected in order to not corrupt the zlib manager
+ * (COD).
+ */
+ mutex_lock(&hnode_mgr->node_mgr_lock);
+
+ dcd_node_props.pstr_create_phase_fxn = NULL;
+ dcd_node_props.pstr_execute_phase_fxn = NULL;
+ dcd_node_props.pstr_delete_phase_fxn = NULL;
+ dcd_node_props.pstr_i_alg_name = NULL;
+
+ status = dcd_get_object_def(hnode_mgr->hdcd_mgr,
+ (struct dsp_uuid *)pNodeId, DSP_DCDNODETYPE,
+ (struct dcd_genericobj *)&dcd_node_props);
+
+ if (DSP_SUCCEEDED(status)) {
+ *node_props = dcd_node_props.ndb_props;
+ kfree(dcd_node_props.pstr_create_phase_fxn);
+
+ kfree(dcd_node_props.pstr_execute_phase_fxn);
+
+ kfree(dcd_node_props.pstr_delete_phase_fxn);
+
+ kfree(dcd_node_props.pstr_i_alg_name);
+ }
+ /* Leave the critical section, we're done. */
+ mutex_unlock(&hnode_mgr->node_mgr_lock);
+func_end:
+ return status;
+}
+
+/*
+ * ======== get_rms_fxns ========
+ * Purpose:
+ * Retrieve the RMS functions.
+ */
+static int get_rms_fxns(struct node_mgr *hnode_mgr)
+{
+ s32 i;
+ struct dev_object *dev_obj = hnode_mgr->hdev_obj;
+ int status = 0;
+
+ static char *psz_fxns[NUMRMSFXNS] = {
+ "RMS_queryServer", /* RMSQUERYSERVER */
+ "RMS_configureServer", /* RMSCONFIGURESERVER */
+ "RMS_createNode", /* RMSCREATENODE */
+ "RMS_executeNode", /* RMSEXECUTENODE */
+ "RMS_deleteNode", /* RMSDELETENODE */
+ "RMS_changeNodePriority", /* RMSCHANGENODEPRIORITY */
+ "RMS_readMemory", /* RMSREADMEMORY */
+ "RMS_writeMemory", /* RMSWRITEMEMORY */
+ "RMS_copy", /* RMSCOPY */
+ };
+
+ for (i = 0; i < NUMRMSFXNS; i++) {
+ status = dev_get_symbol(dev_obj, psz_fxns[i],
+ &(hnode_mgr->ul_fxn_addrs[i]));
+ if (DSP_FAILED(status)) {
+ if (status == -ESPIPE) {
+ /*
+ * May be loaded dynamically (in the future),
+ * but return an error for now.
+ */
+ dev_dbg(bridge, "%s: RMS function: %s currently"
+ " not loaded\n", __func__, psz_fxns[i]);
+ } else {
+ dev_dbg(bridge, "%s: Symbol not found: %s "
+ "status = 0x%x\n", __func__,
+ psz_fxns[i], status);
+ break;
+ }
+ }
+ }
+
+ return status;
+}
+
+/*
+ * ======== ovly ========
+ * Purpose:
+ * Called during overlay. Sends command to RMS to copy a block of data.
+ */
+static u32 ovly(void *priv_ref, u32 ulDspRunAddr, u32 ulDspLoadAddr,
+ u32 ul_num_bytes, u32 nMemSpace)
+{
+ struct node_object *hnode = (struct node_object *)priv_ref;
+ struct node_mgr *hnode_mgr;
+ u32 ul_bytes = 0;
+ u32 ul_size;
+ u32 ul_timeout;
+ int status = 0;
+ struct bridge_dev_context *hbridge_context;
+ /* Function interface to Bridge driver*/
+ struct bridge_drv_interface *intf_fxns;
+
+ DBC_REQUIRE(hnode);
+
+ hnode_mgr = hnode->hnode_mgr;
+
+ ul_size = ul_num_bytes / hnode_mgr->udsp_word_size;
+ ul_timeout = hnode->utimeout;
+
+ /* Call new MemCopy function */
+ intf_fxns = hnode_mgr->intf_fxns;
+ status = dev_get_bridge_context(hnode_mgr->hdev_obj, &hbridge_context);
+ if (DSP_SUCCEEDED(status)) {
+ status =
+ (*intf_fxns->pfn_brd_mem_copy) (hbridge_context,
+ ulDspRunAddr, ulDspLoadAddr,
+ ul_num_bytes, (u32) nMemSpace);
+ if (DSP_SUCCEEDED(status))
+ ul_bytes = ul_num_bytes;
+ else
+ pr_debug("%s: failed to copy brd memory, status 0x%x\n",
+ __func__, status);
+ } else {
+ pr_debug("%s: failed to get Bridge context, status 0x%x\n",
+ __func__, status);
+ }
+
+ return ul_bytes;
+}
+
+/*
+ * ======== mem_write ========
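+ * Purpose:
+ * Write a block of code or data to DSP memory through the Bridge
+ * driver's brd_mem_write interface.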
+ */
+static u32 mem_write(void *priv_ref, u32 ulDspAddr, void *pbuf,
+ u32 ul_num_bytes, u32 nMemSpace)
+{
+ struct node_object *hnode = (struct node_object *)priv_ref;
+ struct node_mgr *hnode_mgr;
+ u16 mem_sect_type;
+ u32 ul_timeout;
+ int status = 0;
+ struct bridge_dev_context *hbridge_context;
+ /* Function interface to Bridge driver */
+ struct bridge_drv_interface *intf_fxns;
+
+ DBC_REQUIRE(hnode);
+ DBC_REQUIRE(nMemSpace & DBLL_CODE || nMemSpace & DBLL_DATA);
+
+ hnode_mgr = hnode->hnode_mgr;
+
+ ul_timeout = hnode->utimeout;
+ mem_sect_type = (nMemSpace & DBLL_CODE) ? RMS_CODE : RMS_DATA;
+
+ /* Call new MemWrite function */
+ intf_fxns = hnode_mgr->intf_fxns;
+ status = dev_get_bridge_context(hnode_mgr->hdev_obj, &hbridge_context);
+ status = (*intf_fxns->pfn_brd_mem_write) (hbridge_context, pbuf,
+ ulDspAddr, ul_num_bytes, mem_sect_type);
+
+ return ul_num_bytes;
+}
+
+/*
+ * ======== node_find_addr ========
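+ * Purpose:
+ * Ask the node loader, for each node in the node list, whether its
+ * dynamically loaded code contains the given address within the
+ * given offset range.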
+ */
+int node_find_addr(struct node_mgr *node_mgr, u32 sym_addr,
+ u32 offset_range, void *sym_addr_output, char *sym_name)
+{
+ struct node_object *node_obj;
+ int status = -ENOENT;
+ u32 n;
+
+ pr_debug("%s(0x%x, 0x%x, 0x%x, 0x%x, %s)\n", __func__,
+ (unsigned int) node_mgr,
+ sym_addr, offset_range,
+ (unsigned int) sym_addr_output, sym_name);
+
+ node_obj = (struct node_object *)(node_mgr->node_list->head.next);
+
+ for (n = 0; n < node_mgr->num_nodes; n++) {
+ status = nldr_find_addr(node_obj->nldr_node_obj, sym_addr,
+ offset_range, sym_addr_output, sym_name);
+
+ if (DSP_SUCCEEDED(status))
+ break;
+
+ node_obj = (struct node_object *) (node_obj->list_elem.next);
+ }
+
+ return status;
+}
+
diff --git a/drivers/staging/tidspbridge/rmgr/proc.c b/drivers/staging/tidspbridge/rmgr/proc.c
new file mode 100644
index 0000000..c5a8b6b
--- /dev/null
+++ b/drivers/staging/tidspbridge/rmgr/proc.c
@@ -0,0 +1,1948 @@
+/*
+ * proc.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Processor interface at the driver level.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+/* ------------------------------------ Host OS */
+#include <linux/dma-mapping.h>
+#include <linux/scatterlist.h>
+#include <dspbridge/host_os.h>
+
+/* ----------------------------------- DSP/BIOS Bridge */
+#include <dspbridge/std.h>
+#include <dspbridge/dbdefs.h>
+
+/* ----------------------------------- Trace & Debug */
+#include <dspbridge/dbc.h>
+
+/* ----------------------------------- OS Adaptation Layer */
+#include <dspbridge/cfg.h>
+#include <dspbridge/list.h>
+#include <dspbridge/ntfy.h>
+#include <dspbridge/sync.h>
+/* ----------------------------------- Bridge Driver */
+#include <dspbridge/dspdefs.h>
+#include <dspbridge/dspdeh.h>
+/* ----------------------------------- Platform Manager */
+#include <dspbridge/cod.h>
+#include <dspbridge/dev.h>
+#include <dspbridge/procpriv.h>
+#include <dspbridge/dmm.h>
+
+/* ----------------------------------- Resource Manager */
+#include <dspbridge/mgr.h>
+#include <dspbridge/node.h>
+#include <dspbridge/nldr.h>
+#include <dspbridge/rmm.h>
+
+/* ----------------------------------- Others */
+#include <dspbridge/dbdcd.h>
+#include <dspbridge/msg.h>
+#include <dspbridge/dspioctl.h>
+#include <dspbridge/drv.h>
+
+/* ----------------------------------- This */
+#include <dspbridge/proc.h>
+#include <dspbridge/pwr.h>
+
+#include <dspbridge/resourcecleanup.h>
+/* ----------------------------------- Defines, Data Structures, Typedefs */
+#define MAXCMDLINELEN 255
+#define PROC_ENVPROCID "PROC_ID=%d"
+#define MAXPROCIDLEN (8 + 5)
+#define PROC_DFLT_TIMEOUT 10000 /* Time out in milliseconds */
+#define PWR_TIMEOUT 500 /* Sleep/wake timeout in msec */
+#define EXTEND "_EXT_END" /* Extmem end addr in DSP binary */
+
+#define DSP_CACHE_LINE 128
+
+#define BUFMODE_MASK (3 << 14)
+
+/* Buffer modes from DSP perspective */
+#define RBUF 0x4000 /* Input buffer */
+#define WBUF 0x8000 /* Output Buffer */
+
+extern struct device *bridge;
+
+/* ----------------------------------- Globals */
+
+/* The proc_object structure. */
+struct proc_object {
+ struct list_head link; /* Link to next proc_object */
+ struct dev_object *hdev_obj; /* Device this PROC represents */
+ u32 process; /* Process owning this Processor */
+ struct mgr_object *hmgr_obj; /* Manager Object Handle */
+ u32 attach_count; /* Processor attach count */
+ u32 processor_id; /* Processor number */
+ u32 utimeout; /* Time out count */
+ enum dsp_procstate proc_state; /* Processor state */
+ u32 ul_unit; /* DDSP unit number */
+ bool is_already_attached; /*
+ * True if the Device below has
+ * GPP Client attached
+ */
+ struct ntfy_object *ntfy_obj; /* Manages notifications */
+ /* Bridge Context Handle */
+ struct bridge_dev_context *hbridge_context;
+ /* Function interface to Bridge driver */
+ struct bridge_drv_interface *intf_fxns;
+ char *psz_last_coff;
+ struct list_head proc_list;
+};
+
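+/* Module reference count, maintained by proc_init()/proc_exit() */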
+static u32 refs;
+
+DEFINE_MUTEX(proc_lock); /* For critical sections */
+
+/* ----------------------------------- Function Prototypes */
+static int proc_monitor(struct proc_object *hprocessor);
+static s32 get_envp_count(char **envp);
+static char **prepend_envp(char **new_envp, char **envp, s32 envp_elems,
+ s32 cnew_envp, char *szVar);
+
+/* remember mapping information */
+static struct dmm_map_object *add_mapping_info(struct process_context *pr_ctxt,
+ u32 mpu_addr, u32 dsp_addr, u32 size)
+{
+ struct dmm_map_object *map_obj;
+
+ u32 num_usr_pgs = size / PG_SIZE4K;
+
+ pr_debug("%s: adding map info: mpu_addr 0x%x virt 0x%x size 0x%x\n",
+ __func__, mpu_addr,
+ dsp_addr, size);
+
+ map_obj = kzalloc(sizeof(struct dmm_map_object), GFP_KERNEL);
+ if (!map_obj) {
+ pr_err("%s: kzalloc failed\n", __func__);
+ return NULL;
+ }
+ INIT_LIST_HEAD(&map_obj->link);
+
+ map_obj->pages = kcalloc(num_usr_pgs, sizeof(struct page *),
+ GFP_KERNEL);
+ if (!map_obj->pages) {
+ pr_err("%s: kzalloc failed\n", __func__);
+ kfree(map_obj);
+ return NULL;
+ }
+
+ map_obj->mpu_addr = mpu_addr;
+ map_obj->dsp_addr = dsp_addr;
+ map_obj->size = size;
+ map_obj->num_usr_pgs = num_usr_pgs;
+
+ spin_lock(&pr_ctxt->dmm_map_lock);
+ list_add(&map_obj->link, &pr_ctxt->dmm_map_list);
+ spin_unlock(&pr_ctxt->dmm_map_lock);
+
+ return map_obj;
+}
+
+static int match_exact_map_obj(struct dmm_map_object *map_obj,
+ u32 dsp_addr, u32 size)
+{
+ if (map_obj->dsp_addr == dsp_addr && map_obj->size != size)
+ pr_err("%s: addr match (0x%x), size don't (0x%x != 0x%x)\n",
+ __func__, dsp_addr, map_obj->size, size);
+
+ return map_obj->dsp_addr == dsp_addr &&
+ map_obj->size == size;
+}
+
+static void remove_mapping_information(struct process_context *pr_ctxt,
+ u32 dsp_addr, u32 size)
+{
+ struct dmm_map_object *map_obj;
+
+ pr_debug("%s: looking for virt 0x%x size 0x%x\n", __func__,
+ dsp_addr, size);
+
+ spin_lock(&pr_ctxt->dmm_map_lock);
+ list_for_each_entry(map_obj, &pr_ctxt->dmm_map_list, link) {
+ pr_debug("%s: candidate: mpu_addr 0x%x virt 0x%x size 0x%x\n",
+ __func__,
+ map_obj->mpu_addr,
+ map_obj->dsp_addr,
+ map_obj->size);
+
+ if (match_exact_map_obj(map_obj, dsp_addr, size)) {
+ pr_debug("%s: match, deleting map info\n", __func__);
+ list_del(&map_obj->link);
+ kfree(map_obj->dma_info.sg);
+ kfree(map_obj->pages);
+ kfree(map_obj);
+ goto out;
+ }
+ pr_debug("%s: candidate didn't match\n", __func__);
+ }
+
+ pr_err("%s: failed to find given map info\n", __func__);
+out:
+ spin_unlock(&pr_ctxt->dmm_map_lock);
+}
+
+static int match_containing_map_obj(struct dmm_map_object *map_obj,
+ u32 mpu_addr, u32 size)
+{
+ u32 map_obj_end = map_obj->mpu_addr + map_obj->size;
+
+ return mpu_addr >= map_obj->mpu_addr &&
+ mpu_addr + size <= map_obj_end;
+}
+
+static struct dmm_map_object *find_containing_mapping(
+ struct process_context *pr_ctxt,
+ u32 mpu_addr, u32 size)
+{
+ struct dmm_map_object *map_obj;
+ pr_debug("%s: looking for mpu_addr 0x%x size 0x%x\n", __func__,
+ mpu_addr, size);
+
+ spin_lock(&pr_ctxt->dmm_map_lock);
+ list_for_each_entry(map_obj, &pr_ctxt->dmm_map_list, link) {
+ pr_debug("%s: candidate: mpu_addr 0x%x virt 0x%x size 0x%x\n",
+ __func__,
+ map_obj->mpu_addr,
+ map_obj->dsp_addr,
+ map_obj->size);
+ if (match_containing_map_obj(map_obj, mpu_addr, size)) {
+ pr_debug("%s: match!\n", __func__);
+ goto out;
+ }
+
+ pr_debug("%s: no match!\n", __func__);
+ }
+
+ map_obj = NULL;
+out:
+ spin_unlock(&pr_ctxt->dmm_map_lock);
+ return map_obj;
+}
+
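+/* Translate an MPU address into an index into the mapping's page array */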
+static int find_first_page_in_cache(struct dmm_map_object *map_obj,
+ unsigned long mpu_addr)
+{
+ u32 mapped_base_page = map_obj->mpu_addr >> PAGE_SHIFT;
+ u32 requested_base_page = mpu_addr >> PAGE_SHIFT;
+ int pg_index = requested_base_page - mapped_base_page;
+
+ if (pg_index < 0 || pg_index >= map_obj->num_usr_pgs) {
+ pr_err("%s: failed (got %d)\n", __func__, pg_index);
+ return -1;
+ }
+
+ pr_debug("%s: first page is %d\n", __func__, pg_index);
+ return pg_index;
+}
+
+static inline struct page *get_mapping_page(struct dmm_map_object *map_obj,
+ int pg_i)
+{
+ pr_debug("%s: looking for pg_i %d, num_usr_pgs: %d\n", __func__,
+ pg_i, map_obj->num_usr_pgs);
+
+ if (pg_i < 0 || pg_i >= map_obj->num_usr_pgs) {
+ pr_err("%s: requested pg_i %d is out of mapped range\n",
+ __func__, pg_i);
+ return NULL;
+ }
+
+ return map_obj->pages[pg_i];
+}
+
+/*
+ * ======== proc_attach ========
+ * Purpose:
+ * Prepare for communication with a particular DSP processor, and return
+ * a handle to the processor object.
+ */
+int
+proc_attach(u32 processor_id,
+ OPTIONAL CONST struct dsp_processorattrin *attr_in,
+ void **ph_processor, struct process_context *pr_ctxt)
+{
+ int status = 0;
+ struct dev_object *hdev_obj;
+ struct proc_object *p_proc_object = NULL;
+ struct mgr_object *hmgr_obj = NULL;
+ struct drv_object *hdrv_obj = NULL;
+ u8 dev_type;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(ph_processor != NULL);
+
+ if (pr_ctxt->hprocessor) {
+ *ph_processor = pr_ctxt->hprocessor;
+ return status;
+ }
+
+ /* Get the Driver and Manager Object Handles */
+ status = cfg_get_object((u32 *) &hdrv_obj, REG_DRV_OBJECT);
+ if (DSP_SUCCEEDED(status))
+ status = cfg_get_object((u32 *) &hmgr_obj, REG_MGR_OBJECT);
+
+ if (DSP_SUCCEEDED(status)) {
+ /* Get the Device Object */
+ status = drv_get_dev_object(processor_id, hdrv_obj, &hdev_obj);
+ }
+ if (DSP_SUCCEEDED(status))
+ status = dev_get_dev_type(hdev_obj, &dev_type);
+
+ if (DSP_FAILED(status))
+ goto func_end;
+
+ /* If we made it this far, create the Processor object: */
+ p_proc_object = kzalloc(sizeof(struct proc_object), GFP_KERNEL);
+ /* Fill out the Processor Object: */
+ if (p_proc_object == NULL) {
+ status = -ENOMEM;
+ goto func_end;
+ }
+ p_proc_object->hdev_obj = hdev_obj;
+ p_proc_object->hmgr_obj = hmgr_obj;
+ p_proc_object->processor_id = dev_type;
+ /* Store TGID instead of process handle */
+ p_proc_object->process = current->tgid;
+
+ INIT_LIST_HEAD(&p_proc_object->proc_list);
+
+ if (attr_in)
+ p_proc_object->utimeout = attr_in->utimeout;
+ else
+ p_proc_object->utimeout = PROC_DFLT_TIMEOUT;
+
+ status = dev_get_intf_fxns(hdev_obj, &p_proc_object->intf_fxns);
+ if (DSP_SUCCEEDED(status)) {
+ status = dev_get_bridge_context(hdev_obj,
+ &p_proc_object->hbridge_context);
+ if (DSP_FAILED(status))
+ kfree(p_proc_object);
+ } else
+ kfree(p_proc_object);
+
+ if (DSP_FAILED(status))
+ goto func_end;
+
+ /* Create the Notification Object */
+ /* This is created with no event mask, no notify mask
+ * and no valid handle to the notification. They all get
+ * filled in when proc_register_notify is called */
+ p_proc_object->ntfy_obj = kmalloc(sizeof(struct ntfy_object),
+ GFP_KERNEL);
+ if (p_proc_object->ntfy_obj)
+ ntfy_init(p_proc_object->ntfy_obj);
+ else
+ status = -ENOMEM;
+
+ if (DSP_SUCCEEDED(status)) {
+ /* Insert the Processor Object into the DEV List.
+ * Return handle to this Processor Object:
+ * Find out if the Device is already attached to a
+ * Processor. If so, return AlreadyAttached status */
+ lst_init_elem(&p_proc_object->link);
+ status = dev_insert_proc_object(p_proc_object->hdev_obj,
+ (u32) p_proc_object,
+ &p_proc_object->
+ is_already_attached);
+ if (DSP_SUCCEEDED(status)) {
+ if (p_proc_object->is_already_attached)
+ status = 0;
+ } else {
+ if (p_proc_object->ntfy_obj) {
+ ntfy_delete(p_proc_object->ntfy_obj);
+ kfree(p_proc_object->ntfy_obj);
+ }
+
+ kfree(p_proc_object);
+ }
+ if (DSP_SUCCEEDED(status)) {
+ *ph_processor = (void *)p_proc_object;
+ pr_ctxt->hprocessor = *ph_processor;
+ (void)proc_notify_clients(p_proc_object,
+ DSP_PROCESSORATTACH);
+ }
+ } else {
+ /* Don't leak memory if DSP_FAILED */
+ kfree(p_proc_object);
+ }
+func_end:
+ DBC_ENSURE((status == -EPERM && *ph_processor == NULL) ||
+ (DSP_SUCCEEDED(status) && p_proc_object) ||
+ (status == 0 && p_proc_object));
+
+ return status;
+}
+
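+/*
+ * ======== get_exec_file ========
+ * Purpose:
+ * Retrieve the name of the executable to load for the given device:
+ * the configured DSP image for a DSP unit, or iva_img for an IVA unit.
+ */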
+static int get_exec_file(struct cfg_devnode *dev_node_obj,
+ struct dev_object *hdev_obj,
+ u32 size, char *execFile)
+{
+ u8 dev_type;
+ s32 len;
+
+ dev_get_dev_type(hdev_obj, (u8 *) &dev_type);
+ if (dev_type == DSP_UNIT) {
+ return cfg_get_exec_file(dev_node_obj, size, execFile);
+ } else if (dev_type == IVA_UNIT) {
+ if (iva_img) {
+ len = strlen(iva_img);
+ strncpy(execFile, iva_img, len + 1);
+ return 0;
+ }
+ }
+ return -ENOENT;
+}
+
+/*
+ * ======== proc_auto_start ========
+ * Purpose:
+ * A Particular device gets loaded with the default image
+ * if the AutoStart flag is set.
+ * Parameters:
+ * hdev_obj: Handle to the Device
+ * Returns:
+ * 0: On Successful Loading
+ * -EPERM General Failure
+ * Requires:
+ * hdev_obj != NULL
+ * Ensures:
+ */
+int proc_auto_start(struct cfg_devnode *dev_node_obj,
+ struct dev_object *hdev_obj)
+{
+ int status = -EPERM;
+ struct proc_object *p_proc_object;
+ char sz_exec_file[MAXCMDLINELEN];
+ char *argv[2];
+ struct mgr_object *hmgr_obj = NULL;
+ u8 dev_type;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(dev_node_obj != NULL);
+ DBC_REQUIRE(hdev_obj != NULL);
+
+ /* Create a Dummy PROC Object */
+ status = cfg_get_object((u32 *) &hmgr_obj, REG_MGR_OBJECT);
+ if (DSP_FAILED(status))
+ goto func_end;
+
+ p_proc_object = kzalloc(sizeof(struct proc_object), GFP_KERNEL);
+ if (p_proc_object == NULL) {
+ status = -ENOMEM;
+ goto func_end;
+ }
+ p_proc_object->hdev_obj = hdev_obj;
+ p_proc_object->hmgr_obj = hmgr_obj;
+ status = dev_get_intf_fxns(hdev_obj, &p_proc_object->intf_fxns);
+ if (DSP_SUCCEEDED(status))
+ status = dev_get_bridge_context(hdev_obj,
+ &p_proc_object->hbridge_context);
+ if (DSP_FAILED(status))
+ goto func_cont;
+
+ /* Stop the Device, put it into standby mode */
+ status = proc_stop(p_proc_object);
+
+ if (DSP_FAILED(status))
+ goto func_cont;
+
+ /* Get the default executable for this board... */
+ dev_get_dev_type(hdev_obj, (u8 *) &dev_type);
+ p_proc_object->processor_id = dev_type;
+ status = get_exec_file(dev_node_obj, hdev_obj, sizeof(sz_exec_file),
+ sz_exec_file);
+ if (DSP_SUCCEEDED(status)) {
+ argv[0] = sz_exec_file;
+ argv[1] = NULL;
+ /* ...and try to load it: */
+ status = proc_load(p_proc_object, 1, (CONST char **)argv, NULL);
+ if (DSP_SUCCEEDED(status))
+ status = proc_start(p_proc_object);
+ }
+ kfree(p_proc_object->psz_last_coff);
+ p_proc_object->psz_last_coff = NULL;
+func_cont:
+ kfree(p_proc_object);
+func_end:
+ return status;
+}
+
+/*
+ * ======== proc_ctrl ========
+ * Purpose:
+ * Pass control information to the GPP device driver managing the
+ * DSP processor.
+ *
+ * This will be an OEM-only function, and not part of the DSP/BIOS Bridge
+ * application developer's API.
+ * Calls the bridge_dev_ctrl fxn with the argument. This is a synchronous
+ * operation. arg can be NULL.
+ */
+int proc_ctrl(void *hprocessor, u32 dw_cmd, IN struct dsp_cbdata * arg)
+{
+ int status = 0;
+ struct proc_object *p_proc_object = hprocessor;
+ u32 timeout = 0;
+
+ DBC_REQUIRE(refs > 0);
+
+ if (p_proc_object) {
+ /* intercept PWR deep sleep command */
+ if (dw_cmd == BRDIOCTL_DEEPSLEEP) {
+ timeout = arg->cb_data;
+ status = pwr_sleep_dsp(PWR_DEEPSLEEP, timeout);
+ }
+ /* intercept PWR emergency sleep command */
+ else if (dw_cmd == BRDIOCTL_EMERGENCYSLEEP) {
+ timeout = arg->cb_data;
+ status = pwr_sleep_dsp(PWR_EMERGENCYDEEPSLEEP, timeout);
+ } else if (dw_cmd == PWR_DEEPSLEEP) {
+ /* timeout = arg->cb_data; */
+ status = pwr_sleep_dsp(PWR_DEEPSLEEP, timeout);
+ }
+ /* intercept PWR wake commands */
+ else if (dw_cmd == BRDIOCTL_WAKEUP) {
+ timeout = arg->cb_data;
+ status = pwr_wake_dsp(timeout);
+ } else if (dw_cmd == PWR_WAKEUP) {
+ /* timeout = arg->cb_data; */
+ status = pwr_wake_dsp(timeout);
+ } else
+ if (DSP_SUCCEEDED((*p_proc_object->intf_fxns->pfn_dev_cntrl)
+ (p_proc_object->hbridge_context, dw_cmd,
+ arg))) {
+ status = 0;
+ } else {
+ status = -EPERM;
+ }
+ } else {
+ status = -EFAULT;
+ }
+
+ return status;
+}
+
+/*
+ * ======== proc_detach ========
+ * Purpose:
+ * Destroys the Processor Object. Removes the notification from the Dev
+ * List.
+ */
+int proc_detach(struct process_context *pr_ctxt)
+{
+ int status = 0;
+ struct proc_object *p_proc_object = NULL;
+
+ DBC_REQUIRE(refs > 0);
+
+ p_proc_object = (struct proc_object *)pr_ctxt->hprocessor;
+
+ if (p_proc_object) {
+ /* Notify the Client */
+ ntfy_notify(p_proc_object->ntfy_obj, DSP_PROCESSORDETACH);
+ /* Remove the notification memory */
+ if (p_proc_object->ntfy_obj) {
+ ntfy_delete(p_proc_object->ntfy_obj);
+ kfree(p_proc_object->ntfy_obj);
+ }
+
+ kfree(p_proc_object->psz_last_coff);
+ p_proc_object->psz_last_coff = NULL;
+ /* Remove the Proc from the DEV List */
+ (void)dev_remove_proc_object(p_proc_object->hdev_obj,
+ (u32) p_proc_object);
+ /* Free the Processor Object */
+ kfree(p_proc_object);
+ pr_ctxt->hprocessor = NULL;
+ } else {
+ status = -EFAULT;
+ }
+
+ return status;
+}
+
+/*
+ * ======== proc_enum_nodes ========
+ * Purpose:
+ * Enumerate and get configuration information about nodes allocated
+ * on a DSP processor.
+ */
+int proc_enum_nodes(void *hprocessor, void **node_tab,
+ IN u32 node_tab_size, OUT u32 *pu_num_nodes,
+ OUT u32 *pu_allocated)
+{
+ int status = -EPERM;
+ struct proc_object *p_proc_object = (struct proc_object *)hprocessor;
+ struct node_mgr *hnode_mgr = NULL;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(node_tab != NULL || node_tab_size == 0);
+ DBC_REQUIRE(pu_num_nodes != NULL);
+ DBC_REQUIRE(pu_allocated != NULL);
+
+ if (p_proc_object) {
+ if (DSP_SUCCEEDED(dev_get_node_manager(p_proc_object->hdev_obj,
+ &hnode_mgr))) {
+ if (hnode_mgr) {
+ status = node_enum_nodes(hnode_mgr, node_tab,
+ node_tab_size,
+ pu_num_nodes,
+ pu_allocated);
+ }
+ }
+ } else {
+ status = -EFAULT;
+ }
+
+ return status;
+}
+
+/* Cache operations are performed against kernel addresses, not user addresses */
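+/* Build an sg list covering [start, start + len) from the cached pages */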
+static int build_dma_sg(struct dmm_map_object *map_obj, unsigned long start,
+ ssize_t len, int pg_i)
+{
+ struct page *page;
+ unsigned long offset;
+ ssize_t rest;
+ int ret = 0, i = 0;
+ struct scatterlist *sg = map_obj->dma_info.sg;
+
+ while (len) {
+ page = get_mapping_page(map_obj, pg_i);
+ if (!page) {
+ pr_err("%s: no page for %08lx\n", __func__, start);
+ ret = -EINVAL;
+ goto out;
+ } else if (IS_ERR(page)) {
+ pr_err("%s: err page for %08lx(%lu)\n", __func__, start,
+ PTR_ERR(page));
+ ret = PTR_ERR(page);
+ goto out;
+ }
+
+ offset = start & ~PAGE_MASK;
+ rest = min_t(ssize_t, PAGE_SIZE - offset, len);
+
+ sg_set_page(&sg[i], page, rest, offset);
+
+ len -= rest;
+ start += rest;
+ pg_i++, i++;
+ }
+
+ if (i != map_obj->dma_info.num_pages) {
+ pr_err("%s: bad number of sg iterations\n", __func__);
+ ret = -EFAULT;
+ goto out;
+ }
+
+out:
+ return ret;
+}
+
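+/* Unmap the DMA sg list and return cache ownership of the buffer to the CPU */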
+static int memory_regain_ownership(struct dmm_map_object *map_obj,
+ unsigned long start, ssize_t len, enum dma_data_direction dir)
+{
+ int ret = 0;
+ unsigned long first_data_page = start >> PAGE_SHIFT;
+ unsigned long last_data_page = ((u32)(start + len - 1) >> PAGE_SHIFT);
+ /* calculating the number of pages this area spans */
+ unsigned long num_pages = last_data_page - first_data_page + 1;
+ struct bridge_dma_map_info *dma_info = &map_obj->dma_info;
+
+ if (!dma_info->sg)
+ goto out;
+
+ if (dma_info->dir != dir || dma_info->num_pages != num_pages) {
+ pr_err("%s: dma info doesn't match given params\n", __func__);
+ return -EINVAL;
+ }
+
+ dma_unmap_sg(bridge, dma_info->sg, num_pages, dma_info->dir);
+
+ pr_debug("%s: dma_map_sg unmapped\n", __func__);
+
+ kfree(dma_info->sg);
+
+ map_obj->dma_info.sg = NULL;
+
+out:
+ return ret;
+}
+
+/* Cache operations are performed against kernel addresses, not user addresses */
+static int memory_give_ownership(struct dmm_map_object *map_obj,
+ unsigned long start, ssize_t len, enum dma_data_direction dir)
+{
+ int pg_i, ret, sg_num;
+ struct scatterlist *sg;
+ unsigned long first_data_page = start >> PAGE_SHIFT;
+ unsigned long last_data_page = ((u32)(start + len - 1) >> PAGE_SHIFT);
+ /* calculating the number of pages this area spans */
+ unsigned long num_pages = last_data_page - first_data_page + 1;
+
+ pg_i = find_first_page_in_cache(map_obj, start);
+ if (pg_i < 0) {
+ pr_err("%s: failed to find first page in cache\n", __func__);
+ ret = -EINVAL;
+ goto out;
+ }
+
+ sg = kcalloc(num_pages, sizeof(*sg), GFP_KERNEL);
+ if (!sg) {
+ pr_err("%s: kcalloc failed\n", __func__);
+ ret = -ENOMEM;
+ goto out;
+ }
+
+ sg_init_table(sg, num_pages);
+
+ /* Clean up a previous sg allocation */
+ /* This may happen if the application didn't signal the end of a DMA */
+ kfree(map_obj->dma_info.sg);
+
+ map_obj->dma_info.sg = sg;
+ map_obj->dma_info.dir = dir;
+ map_obj->dma_info.num_pages = num_pages;
+
+ ret = build_dma_sg(map_obj, start, len, pg_i);
+ if (ret)
+ goto kfree_sg;
+
+ sg_num = dma_map_sg(bridge, sg, num_pages, dir);
+ if (sg_num < 1) {
+ pr_err("%s: dma_map_sg failed: %d\n", __func__, sg_num);
+ ret = -EFAULT;
+ goto kfree_sg;
+ }
+
+ pr_debug("%s: dma_map_sg mapped %d elements\n", __func__, sg_num);
+ map_obj->dma_info.sg_num = sg_num;
+
+ return 0;
+
+kfree_sg:
+ kfree(sg);
+ map_obj->dma_info.sg = NULL;
+out:
+ return ret;
+}
+
+int proc_begin_dma(void *hprocessor, void *pmpu_addr, u32 ul_size,
+ enum dma_data_direction dir)
+{
+ /* Keep STATUS here for future additions to this function */
+ int status = 0;
+ struct process_context *pr_ctxt = (struct process_context *) hprocessor;
+ struct dmm_map_object *map_obj;
+
+ DBC_REQUIRE(refs > 0);
+
+ if (!pr_ctxt) {
+ status = -EFAULT;
+ goto err_out;
+ }
+
+ pr_debug("%s: addr 0x%x, size 0x%x, type %d\n", __func__,
+ (u32)pmpu_addr,
+ ul_size, dir);
+
+ /* Find the requested memory area in the cached mapping information */
+ map_obj = find_containing_mapping(pr_ctxt, (u32) pmpu_addr, ul_size);
+ if (!map_obj) {
+ pr_err("%s: find_containing_mapping failed\n", __func__);
+ status = -EFAULT;
+ goto err_out;
+ }
+
+ if (memory_give_ownership(map_obj, (u32) pmpu_addr, ul_size, dir)) {
+ pr_err("%s: Invalid address parameters %p %x\n",
+ __func__, pmpu_addr, ul_size);
+ status = -EFAULT;
+ }
+
+err_out:
+
+ return status;
+}
+
+int proc_end_dma(void *hprocessor, void *pmpu_addr, u32 ul_size,
+ enum dma_data_direction dir)
+{
+ /* Keep STATUS here for future additions to this function */
+ int status = 0;
+ struct process_context *pr_ctxt = (struct process_context *) hprocessor;
+ struct dmm_map_object *map_obj;
+
+ DBC_REQUIRE(refs > 0);
+
+ if (!pr_ctxt) {
+ status = -EFAULT;
+ goto err_out;
+ }
+
+ pr_debug("%s: addr 0x%x, size 0x%x, type %d\n", __func__,
+ (u32)pmpu_addr,
+ ul_size, dir);
+
+ /* Find the requested memory area in the cached mapping information */
+ map_obj = find_containing_mapping(pr_ctxt, (u32) pmpu_addr, ul_size);
+ if (!map_obj) {
+ pr_err("%s: find_containing_mapping failed\n", __func__);
+ status = -EFAULT;
+ goto err_out;
+ }
+
+ if (memory_regain_ownership(map_obj, (u32) pmpu_addr, ul_size, dir)) {
+ pr_err("%s: Invalid address parameters %p %x\n",
+ __func__, pmpu_addr, ul_size);
+ status = -EFAULT;
+ goto err_out;
+ }
+
+err_out:
+ return status;
+}
+
+/*
+ * ======== proc_flush_memory ========
+ * Purpose:
+ * Flush cache
+ */
+int proc_flush_memory(void *hprocessor, void *pmpu_addr,
+ u32 ul_size, u32 ul_flags)
+{
+ enum dma_data_direction dir = DMA_BIDIRECTIONAL;
+
+ return proc_begin_dma(hprocessor, pmpu_addr, ul_size, dir);
+}
+
+/*
+ * ======== proc_invalidate_memory ========
+ * Purpose:
+ * Invalidates the memory specified
+ */
+int proc_invalidate_memory(void *hprocessor, void *pmpu_addr, u32 size)
+{
+ enum dma_data_direction dir = DMA_FROM_DEVICE;
+
+ return proc_begin_dma(hprocessor, pmpu_addr, size, dir);
+}
+
+/*
+ * ======== proc_get_resource_info ========
+ * Purpose:
+ * Enumerate the resources currently available on a processor.
+ */
+int proc_get_resource_info(void *hprocessor, u32 resource_type,
+ OUT struct dsp_resourceinfo *resource_info,
+ u32 resource_info_size)
+{
+ int status = -EPERM;
+ struct proc_object *p_proc_object = (struct proc_object *)hprocessor;
+ struct node_mgr *hnode_mgr = NULL;
+ struct nldr_object *nldr_obj = NULL;
+ struct rmm_target_obj *rmm = NULL;
+ struct io_mgr *hio_mgr = NULL; /* IO manager handle */
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(resource_info != NULL);
+ DBC_REQUIRE(resource_info_size >= sizeof(struct dsp_resourceinfo));
+
+ if (!p_proc_object) {
+ status = -EFAULT;
+ goto func_end;
+ }
+ switch (resource_type) {
+ case DSP_RESOURCE_DYNDARAM:
+ case DSP_RESOURCE_DYNSARAM:
+ case DSP_RESOURCE_DYNEXTERNAL:
+ case DSP_RESOURCE_DYNSRAM:
+ status = dev_get_node_manager(p_proc_object->hdev_obj,
+ &hnode_mgr);
+ if (!hnode_mgr) {
+ status = -EFAULT;
+ goto func_end;
+ }
+
+ status = node_get_nldr_obj(hnode_mgr, &nldr_obj);
+ if (DSP_SUCCEEDED(status)) {
+ status = nldr_get_rmm_manager(nldr_obj, &rmm);
+ if (rmm) {
+ if (!rmm_stat(rmm,
+ (enum dsp_memtype)resource_type,
+ (struct dsp_memstat *)
+ &(resource_info->result.
+ mem_stat)))
+ status = -EINVAL;
+ } else {
+ status = -EFAULT;
+ }
+ }
+ break;
+ case DSP_RESOURCE_PROCLOAD:
+ status = dev_get_io_mgr(p_proc_object->hdev_obj, &hio_mgr);
+ if (hio_mgr)
+ status =
+ p_proc_object->intf_fxns->
+ pfn_io_get_proc_load(hio_mgr,
+ (struct dsp_procloadstat *)
+ &(resource_info->result.
+ proc_load_stat));
+ else
+ status = -EFAULT;
+ break;
+ default:
+ status = -EPERM;
+ break;
+ }
+func_end:
+ return status;
+}
+
+/*
+ * ======== proc_exit ========
+ * Purpose:
+ * Decrement reference count, and free resources when reference count is
+ * 0.
+ */
+void proc_exit(void)
+{
+ DBC_REQUIRE(refs > 0);
+
+ refs--;
+
+ DBC_ENSURE(refs >= 0);
+}
+
+/*
+ * ======== proc_get_dev_object ========
+ * Purpose:
+ * Return the Dev Object handle for a given Processor.
+ *
+ */
+int proc_get_dev_object(void *hprocessor,
+ struct dev_object **phDevObject)
+{
+ int status = -EPERM;
+ struct proc_object *p_proc_object = (struct proc_object *)hprocessor;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(phDevObject != NULL);
+
+ if (p_proc_object) {
+ *phDevObject = p_proc_object->hdev_obj;
+ status = 0;
+ } else {
+ *phDevObject = NULL;
+ status = -EFAULT;
+ }
+
+ DBC_ENSURE((DSP_SUCCEEDED(status) && *phDevObject != NULL) ||
+ (DSP_FAILED(status) && *phDevObject == NULL));
+
+ return status;
+}
+
+/*
+ * ======== proc_get_state ========
+ * Purpose:
+ * Report the state of the specified DSP processor.
+ */
+int proc_get_state(void *hprocessor,
+ OUT struct dsp_processorstate *proc_state_obj,
+ u32 state_info_size)
+{
+ int status = 0;
+ struct proc_object *p_proc_object = (struct proc_object *)hprocessor;
+ int brd_status;
+ struct deh_mgr *hdeh_mgr;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(proc_state_obj != NULL);
+ DBC_REQUIRE(state_info_size >= sizeof(struct dsp_processorstate));
+
+ if (p_proc_object) {
+ /* First, retrieve BRD state information */
+ status = (*p_proc_object->intf_fxns->pfn_brd_status)
+ (p_proc_object->hbridge_context, &brd_status);
+ if (DSP_SUCCEEDED(status)) {
+ switch (brd_status) {
+ case BRD_STOPPED:
+ proc_state_obj->proc_state = PROC_STOPPED;
+ break;
+ case BRD_SLEEP_TRANSITION:
+ case BRD_DSP_HIBERNATION:
+ /* Fall through */
+ case BRD_RUNNING:
+ proc_state_obj->proc_state = PROC_RUNNING;
+ break;
+ case BRD_LOADED:
+ proc_state_obj->proc_state = PROC_LOADED;
+ break;
+ case BRD_ERROR:
+ proc_state_obj->proc_state = PROC_ERROR;
+ break;
+ default:
+ proc_state_obj->proc_state = 0xFF;
+ status = -EPERM;
+ break;
+ }
+ }
+ /* Next, retrieve error information, if any */
+ status = dev_get_deh_mgr(p_proc_object->hdev_obj, &hdeh_mgr);
+ if (DSP_SUCCEEDED(status) && hdeh_mgr)
+ status = (*p_proc_object->intf_fxns->pfn_deh_get_info)
+ (hdeh_mgr, &(proc_state_obj->err_info));
+ } else {
+ status = -EFAULT;
+ }
+ dev_dbg(bridge, "%s, results: status: 0x%x proc_state_obj: 0x%x\n",
+ __func__, status, proc_state_obj->proc_state);
+ return status;
+}
+
+/*
+ * ======== proc_get_trace ========
+ * Purpose:
+ * Retrieve the current contents of the trace buffer, located on the
+ * Processor. Predefined symbols for the trace buffer must have been
+ * configured into the DSP executable.
+ * Details:
+ * Only the symbols SYS_PUTCBEG and SYS_PUTCEND are supported for defining
+ * a trace buffer; treat this as an undocumented feature.
+ * This call is destructive, meaning the processor is placed in the monitor
+ * state as a result of this function.
+ */
+int proc_get_trace(void *hprocessor, u8 * pbuf, u32 max_size)
+{
+ int status;
+ status = -ENOSYS;
+ return status;
+}
+
+/*
+ * ======== proc_init ========
+ * Purpose:
+ * Initialize PROC's private state, keeping a reference count on each call
+ */
+bool proc_init(void)
+{
+ bool ret = true;
+
+ DBC_REQUIRE(refs >= 0);
+
+ if (ret)
+ refs++;
+
+ DBC_ENSURE((ret && (refs > 0)) || (!ret && (refs >= 0)));
+
+ return ret;
+}
+
+/*
+ * ======== proc_load ========
+ * Purpose:
+ * Reset a processor and load a new base program image.
+ * This will be an OEM-only function, and not part of the DSP/BIOS Bridge
+ * application developer's API.
+ */
+int proc_load(void *hprocessor, IN CONST s32 argc_index,
+ IN CONST char **user_args, IN CONST char **user_envp)
+{
+ int status = 0;
+ struct proc_object *p_proc_object = (struct proc_object *)hprocessor;
+ struct io_mgr *hio_mgr; /* IO manager handle */
+ struct msg_mgr *hmsg_mgr;
+ struct cod_manager *cod_mgr; /* Code manager handle */
+ char *pargv0; /* temp argv[0] ptr */
+ char **new_envp; /* Updated envp[] array. */
+ char sz_proc_id[MAXPROCIDLEN]; /* Size of "PROC_ID=<n>" */
+ s32 envp_elems; /* Num elements in envp[]. */
+ s32 cnew_envp; /* Num elements in new_envp[] */
+ s32 nproc_id = 0; /* Anticipate MP version. */
+ struct dcd_manager *hdcd_handle;
+ struct dmm_object *dmm_mgr;
+ u32 dw_ext_end;
+ u32 proc_id;
+ int brd_state;
+ struct drv_data *drv_datap = dev_get_drvdata(bridge);
+
+#ifdef OPT_LOAD_TIME_INSTRUMENTATION
+ struct timeval tv1;
+ struct timeval tv2;
+#endif
+
+#if defined(CONFIG_BRIDGE_DVFS) && !defined(CONFIG_CPU_FREQ)
+ struct dspbridge_platform_data *pdata =
+ omap_dspbridge_dev->dev.platform_data;
+#endif
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(argc_index > 0);
+ DBC_REQUIRE(user_args != NULL);
+
+#ifdef OPT_LOAD_TIME_INSTRUMENTATION
+ do_gettimeofday(&tv1);
+#endif
+ if (!p_proc_object) {
+ status = -EFAULT;
+ goto func_end;
+ }
+ dev_get_cod_mgr(p_proc_object->hdev_obj, &cod_mgr);
+ if (!cod_mgr) {
+ status = -EPERM;
+ goto func_end;
+ }
+ status = proc_stop(hprocessor);
+ if (DSP_FAILED(status))
+ goto func_end;
+
+ /* Place the board in the monitor state. */
+ status = proc_monitor(hprocessor);
+ if (DSP_FAILED(status))
+ goto func_end;
+
+ /* Save ptr to original argv[0]. */
+ pargv0 = (char *)user_args[0];
+ /* Prepend "PROC_ID=<nproc_id>" to envp array for target. */
+ envp_elems = get_envp_count((char **)user_envp);
+ cnew_envp = (envp_elems ? (envp_elems + 1) : (envp_elems + 2));
+ new_envp = kzalloc(cnew_envp * sizeof(char **), GFP_KERNEL);
+ if (new_envp) {
+ status = snprintf(sz_proc_id, MAXPROCIDLEN, PROC_ENVPROCID,
+ nproc_id);
+ if (status == -1) {
+ dev_dbg(bridge, "%s: Proc ID string overflow\n",
+ __func__);
+ status = -EPERM;
+ } else {
+ new_envp =
+ prepend_envp(new_envp, (char **)user_envp,
+ envp_elems, cnew_envp, sz_proc_id);
+ /* Get the DCD Handle */
+ status = mgr_get_dcd_handle(p_proc_object->hmgr_obj,
+ (u32 *) &hdcd_handle);
+ if (DSP_SUCCEEDED(status)) {
+ /* Before proceeding with new load,
+ * check if a previously registered COFF
+ * exists.
+ * If yes, unregister nodes in previously
+ * registered COFF. If any error occurred,
+ * set previously registered COFF to NULL. */
+ if (p_proc_object->psz_last_coff != NULL) {
+ status =
+ dcd_auto_unregister(hdcd_handle,
+ p_proc_object->
+ psz_last_coff);
+ /* Regardless of auto unregister status,
+ * free previously allocated
+ * memory. */
+ kfree(p_proc_object->psz_last_coff);
+ p_proc_object->psz_last_coff = NULL;
+ }
+ }
+ /* On success, do cod_open_base() */
+ status = cod_open_base(cod_mgr, (char *)user_args[0],
+ COD_SYMB);
+ }
+ } else {
+ status = -ENOMEM;
+ }
+ if (DSP_SUCCEEDED(status)) {
+ /* Auto-register data base */
+ /* Get the DCD Handle */
+ status = mgr_get_dcd_handle(p_proc_object->hmgr_obj,
+ (u32 *) &hdcd_handle);
+ if (DSP_SUCCEEDED(status)) {
+ /* Auto register nodes in specified COFF
+ * file. If registration did not fail,
+ * (status = 0 or -EACCES)
+ * save the name of the COFF file for
+ * de-registration in the future. */
+ status =
+ dcd_auto_register(hdcd_handle,
+ (char *)user_args[0]);
+ if (status == -EACCES)
+ status = 0;
+
+ if (DSP_FAILED(status)) {
+ status = -EPERM;
+ } else {
+ DBC_ASSERT(p_proc_object->psz_last_coff ==
+ NULL);
+ /* Allocate memory for pszLastCoff */
+ p_proc_object->psz_last_coff =
+ kzalloc((strlen(user_args[0]) +
+ 1), GFP_KERNEL);
+ /* If memory allocated, save COFF file name */
+ if (p_proc_object->psz_last_coff) {
+ strncpy(p_proc_object->psz_last_coff,
+ (char *)user_args[0],
+ (strlen((char *)user_args[0]) +
+ 1));
+ }
+ }
+ }
+ }
+ /* Update shared memory address and size */
+ if (DSP_SUCCEEDED(status)) {
+ /* Create the message manager. This must be done
+ * before calling the IOOnLoaded function. */
+ dev_get_msg_mgr(p_proc_object->hdev_obj, &hmsg_mgr);
+ if (!hmsg_mgr) {
+ status = msg_create(&hmsg_mgr, p_proc_object->hdev_obj,
+ (msg_onexit) node_on_exit);
+ DBC_ASSERT(DSP_SUCCEEDED(status));
+ dev_set_msg_mgr(p_proc_object->hdev_obj, hmsg_mgr);
+ }
+ }
+ if (DSP_SUCCEEDED(status)) {
+ /* Set the Device object's message manager */
+ status = dev_get_io_mgr(p_proc_object->hdev_obj, &hio_mgr);
+ if (hio_mgr)
+ status = (*p_proc_object->intf_fxns->pfn_io_on_loaded)
+ (hio_mgr);
+ else
+ status = -EFAULT;
+ }
+ if (DSP_SUCCEEDED(status)) {
+ /* Now, attempt to load an exec: */
+
+ /* Boost the OPP level to Maximum level supported by baseport */
+#if defined(CONFIG_BRIDGE_DVFS) && !defined(CONFIG_CPU_FREQ)
+ if (pdata->cpu_set_freq)
+ (*pdata->cpu_set_freq) (pdata->mpu_speed[VDD1_OPP5]);
+#endif
+ status = cod_load_base(cod_mgr, argc_index, (char **)user_args,
+ dev_brd_write_fxn,
+ p_proc_object->hdev_obj, NULL);
+ if (DSP_FAILED(status)) {
+ if (status == -EBADF) {
+ dev_dbg(bridge, "%s: Failure to Load the EXE\n",
+ __func__);
+ }
+ if (status == -ESPIPE) {
+ pr_err("%s: Couldn't parse the file\n",
+ __func__);
+ }
+ }
+ /* Requesting the lowest opp supported */
+#if defined(CONFIG_BRIDGE_DVFS) && !defined(CONFIG_CPU_FREQ)
+ if (pdata->cpu_set_freq)
+ (*pdata->cpu_set_freq) (pdata->mpu_speed[VDD1_OPP1]);
+#endif
+
+ }
+ if (DSP_SUCCEEDED(status)) {
+ /* Update the Processor status to loaded */
+ status = (*p_proc_object->intf_fxns->pfn_brd_set_state)
+ (p_proc_object->hbridge_context, BRD_LOADED);
+ if (DSP_SUCCEEDED(status)) {
+ p_proc_object->proc_state = PROC_LOADED;
+ if (p_proc_object->ntfy_obj)
+ proc_notify_clients(p_proc_object,
+ DSP_PROCESSORSTATECHANGE);
+ }
+ }
+ if (DSP_SUCCEEDED(status)) {
+ status = proc_get_processor_id(hprocessor, &proc_id);
+ if (proc_id == DSP_UNIT) {
+ /* Use all available DSP address space after EXTMEM
+ * for DMM */
+ if (DSP_SUCCEEDED(status))
+ status = cod_get_sym_value(cod_mgr, EXTEND,
+ &dw_ext_end);
+
+ /* Reset DMM structs and add an initial free chunk */
+ if (DSP_SUCCEEDED(status)) {
+ status =
+ dev_get_dmm_mgr(p_proc_object->hdev_obj,
+ &dmm_mgr);
+ if (dmm_mgr) {
+ /* Set dw_ext_end to DMM START u8
+ * address */
+ dw_ext_end =
+ (dw_ext_end + 1) * DSPWORDSIZE;
+ /* DMM memory is from EXT_END */
+ status = dmm_create_tables(dmm_mgr,
+ dw_ext_end,
+ DMMPOOLSIZE);
+ } else {
+ status = -EFAULT;
+ }
+ }
+ }
+ }
+ /* Restore the original argv[0] */
+ kfree(new_envp);
+ user_args[0] = pargv0;
+ if (DSP_SUCCEEDED(status)) {
+ if (DSP_SUCCEEDED((*p_proc_object->intf_fxns->pfn_brd_status)
+ (p_proc_object->hbridge_context, &brd_state))) {
+ pr_info("%s: Processor Loaded %s\n", __func__, pargv0);
+ kfree(drv_datap->base_img);
+ drv_datap->base_img = kmalloc(strlen(pargv0) + 1,
+ GFP_KERNEL);
+ if (drv_datap->base_img)
+ strncpy(drv_datap->base_img, pargv0,
+ strlen(pargv0) + 1);
+ else
+ status = -ENOMEM;
+ DBC_ASSERT(brd_state == BRD_LOADED);
+ }
+ }
+
+func_end:
+ if (DSP_FAILED(status))
+ pr_err("%s: Processor failed to load\n", __func__);
+
+ DBC_ENSURE((DSP_SUCCEEDED(status)
+ && p_proc_object->proc_state == PROC_LOADED)
+ || DSP_FAILED(status));
+#ifdef OPT_LOAD_TIME_INSTRUMENTATION
+ do_gettimeofday(&tv2);
+ if (tv2.tv_usec < tv1.tv_usec) {
+ tv2.tv_usec += 1000000;
+ tv2.tv_sec--;
+ }
+ dev_dbg(bridge, "%s: time to load %d sec and %d usec\n", __func__,
+ tv2.tv_sec - tv1.tv_sec, tv2.tv_usec - tv1.tv_usec);
+#endif
+ return status;
+}
+
+/*
+ * ======== proc_map ========
+ * Purpose:
+ * Maps an MPU buffer to DSP address space.
+ */
+int proc_map(void *hprocessor, void *pmpu_addr, u32 ul_size,
+ void *req_addr, void **pp_map_addr, u32 ul_map_attr,
+ struct process_context *pr_ctxt)
+{
+ u32 va_align;
+ u32 pa_align;
+ struct dmm_object *dmm_mgr;
+ u32 size_align;
+ int status = 0;
+ struct proc_object *p_proc_object = (struct proc_object *)hprocessor;
+ struct dmm_map_object *map_obj;
+ u32 tmp_addr = 0;
+
+#ifdef CONFIG_BRIDGE_CACHE_LINE_CHECK
+ if ((ul_map_attr & BUFMODE_MASK) != RBUF) {
+ if (!IS_ALIGNED((u32)pmpu_addr, DSP_CACHE_LINE) ||
+ !IS_ALIGNED(ul_size, DSP_CACHE_LINE)) {
+ pr_err("%s: not aligned: 0x%x (%d)\n", __func__,
+ (u32)pmpu_addr, ul_size);
+ return -EFAULT;
+ }
+ }
+#endif
+
+ /* Calculate the page-aligned PA, VA and size */
+ va_align = PG_ALIGN_LOW((u32) req_addr, PG_SIZE4K);
+ pa_align = PG_ALIGN_LOW((u32) pmpu_addr, PG_SIZE4K);
+ size_align = PG_ALIGN_HIGH(ul_size + (u32) pmpu_addr - pa_align,
+ PG_SIZE4K);
+
+ if (!p_proc_object) {
+ status = -EFAULT;
+ goto func_end;
+ }
+ /* Critical section */
+ mutex_lock(&proc_lock);
+ dmm_get_handle(p_proc_object, &dmm_mgr);
+ if (dmm_mgr)
+ status = dmm_map_memory(dmm_mgr, va_align, size_align);
+ else
+ status = -EFAULT;
+
+ /* Add mapping to the page tables. */
+ if (DSP_SUCCEEDED(status)) {
+
+ /* Mapped address = MSB of VA | LSB of PA */
+ tmp_addr = (va_align | ((u32) pmpu_addr & (PG_SIZE4K - 1)));
+ /* mapped memory resource tracking */
+ map_obj = add_mapping_info(pr_ctxt, pa_align, tmp_addr,
+ size_align);
+ if (!map_obj)
+ status = -ENOMEM;
+ else
+ status = (*p_proc_object->intf_fxns->pfn_brd_mem_map)
+ (p_proc_object->hbridge_context, pa_align, va_align,
+ size_align, ul_map_attr, map_obj->pages);
+ }
+ if (DSP_SUCCEEDED(status)) {
+ /* Mapped address = MSB of VA | LSB of PA */
+ *pp_map_addr = (void *) tmp_addr;
+ } else {
+ remove_mapping_information(pr_ctxt, tmp_addr, size_align);
+ dmm_un_map_memory(dmm_mgr, va_align, &size_align);
+ }
+ mutex_unlock(&proc_lock);
+
+func_end:
+ dev_dbg(bridge, "%s: hprocessor %p, pmpu_addr %p, ul_size %x, "
+ "req_addr %p, ul_map_attr %x, pp_map_addr %p, va_align %x, "
+ "pa_align %x, size_align %x status 0x%x\n", __func__,
+ hprocessor, pmpu_addr, ul_size, req_addr, ul_map_attr,
+ pp_map_addr, va_align, pa_align, size_align, status);
+
+ return status;
+}
+
+/*
+ * ======== proc_register_notify ========
+ * Purpose:
+ * Register to be notified of specific processor events.
+ */
+int proc_register_notify(void *hprocessor, u32 event_mask,
+ u32 notify_type, struct dsp_notification
+ * hnotification)
+{
+ int status = 0;
+ struct proc_object *p_proc_object = (struct proc_object *)hprocessor;
+ struct deh_mgr *hdeh_mgr;
+
+ DBC_REQUIRE(hnotification != NULL);
+ DBC_REQUIRE(refs > 0);
+
+ /* Check processor handle */
+ if (!p_proc_object) {
+ status = -EFAULT;
+ goto func_end;
+ }
+ /* Check if event mask is a valid processor related event */
+ if (event_mask & ~(DSP_PROCESSORSTATECHANGE | DSP_PROCESSORATTACH |
+ DSP_PROCESSORDETACH | DSP_PROCESSORRESTART |
+ DSP_MMUFAULT | DSP_SYSERROR | DSP_PWRERROR |
+ DSP_WDTOVERFLOW))
+ status = -EINVAL;
+
+ /* Check if notify type is valid */
+ if (notify_type != DSP_SIGNALEVENT)
+ status = -EINVAL;
+
+ if (DSP_SUCCEEDED(status)) {
+ /* If event mask is not DSP_SYSERROR, DSP_MMUFAULT,
+ * or DSP_PWRERROR then register event immediately. */
+ if (event_mask &
+ ~(DSP_SYSERROR | DSP_MMUFAULT | DSP_PWRERROR |
+ DSP_WDTOVERFLOW)) {
+ status = ntfy_register(p_proc_object->ntfy_obj,
+ hnotification, event_mask,
+ notify_type);
+ /* Special case alert, special case alert!
+ * If we're trying to *deregister* (i.e. event_mask
+ * is 0), a DSP_SYSERROR or DSP_MMUFAULT notification,
+ * we have to deregister with the DEH manager.
+ * There's no way to know, based on event_mask which
+ * manager the notification event was registered with,
+ * so if we're trying to deregister and ntfy_register
+ * failed, we'll give the deh manager a shot.
+ */
+ if ((event_mask == 0) && DSP_FAILED(status)) {
+ status =
+ dev_get_deh_mgr(p_proc_object->hdev_obj,
+ &hdeh_mgr);
+ DBC_ASSERT(p_proc_object->
+ intf_fxns->pfn_deh_register_notify);
+ status =
+ (*p_proc_object->
+ intf_fxns->pfn_deh_register_notify)
+ (hdeh_mgr, event_mask, notify_type,
+ hnotification);
+ }
+ } else {
+ status = dev_get_deh_mgr(p_proc_object->hdev_obj,
+ &hdeh_mgr);
+ DBC_ASSERT(p_proc_object->
+ intf_fxns->pfn_deh_register_notify);
+ status =
+ (*p_proc_object->intf_fxns->pfn_deh_register_notify)
+ (hdeh_mgr, event_mask, notify_type, hnotification);
+
+ }
+ }
+func_end:
+ return status;
+}
+
+/*
+ * ======== proc_reserve_memory ========
+ * Purpose:
+ * Reserve a virtually contiguous region of DSP address space.
+ */
+int proc_reserve_memory(void *hprocessor, u32 ul_size,
+ void **pp_rsv_addr,
+ struct process_context *pr_ctxt)
+{
+ struct dmm_object *dmm_mgr;
+ int status = 0;
+ struct proc_object *p_proc_object = (struct proc_object *)hprocessor;
+ struct dmm_rsv_object *rsv_obj;
+
+ if (!p_proc_object) {
+ status = -EFAULT;
+ goto func_end;
+ }
+
+ status = dmm_get_handle(p_proc_object, &dmm_mgr);
+ if (!dmm_mgr) {
+ status = -EFAULT;
+ goto func_end;
+ }
+
+ status = dmm_reserve_memory(dmm_mgr, ul_size, (u32 *) pp_rsv_addr);
+ if (status != 0)
+ goto func_end;
+
+ /*
+ * A successful reserve should be followed by insertion of rsv_obj
+ * into dmm_rsv_list, so that reserved memory resource tracking
+ * remains uptodate
+ */
+ rsv_obj = kmalloc(sizeof(struct dmm_rsv_object), GFP_KERNEL);
+ if (rsv_obj) {
+ rsv_obj->dsp_reserved_addr = (u32) *pp_rsv_addr;
+ spin_lock(&pr_ctxt->dmm_rsv_lock);
+ list_add(&rsv_obj->link, &pr_ctxt->dmm_rsv_list);
+ spin_unlock(&pr_ctxt->dmm_rsv_lock);
+ }
+
+func_end:
+ dev_dbg(bridge, "%s: hprocessor: 0x%p ul_size: 0x%x pp_rsv_addr: 0x%p "
+ "status 0x%x\n", __func__, hprocessor,
+ ul_size, pp_rsv_addr, status);
+ return status;
+}
+
+/*
+ * ======== proc_start ========
+ * Purpose:
+ * Start a processor running.
+ */
+int proc_start(void *hprocessor)
+{
+ int status = 0;
+ struct proc_object *p_proc_object = (struct proc_object *)hprocessor;
+ struct cod_manager *cod_mgr; /* Code manager handle */
+ u32 dw_dsp_addr; /* Loaded code's entry point. */
+ int brd_state;
+
+ DBC_REQUIRE(refs > 0);
+ if (!p_proc_object) {
+ status = -EFAULT;
+ goto func_end;
+ }
+ /* Call the bridge_brd_start */
+ if (p_proc_object->proc_state != PROC_LOADED) {
+ status = -EBADR;
+ goto func_end;
+ }
+ status = dev_get_cod_mgr(p_proc_object->hdev_obj, &cod_mgr);
+ if (!cod_mgr) {
+ status = -EFAULT;
+ goto func_cont;
+ }
+
+ status = cod_get_entry(cod_mgr, &dw_dsp_addr);
+ if (DSP_FAILED(status))
+ goto func_cont;
+
+ status = (*p_proc_object->intf_fxns->pfn_brd_start)
+ (p_proc_object->hbridge_context, dw_dsp_addr);
+ if (DSP_FAILED(status))
+ goto func_cont;
+
+ /* Call dev_create2 */
+ status = dev_create2(p_proc_object->hdev_obj);
+ if (DSP_SUCCEEDED(status)) {
+ p_proc_object->proc_state = PROC_RUNNING;
+ /* Deep sleep switches off the peripheral clocks.
+ * We just put the DSP CPU in idle in the idle loop,
+ * so there is no need to send a command to the DSP */
+
+ if (p_proc_object->ntfy_obj) {
+ proc_notify_clients(p_proc_object,
+ DSP_PROCESSORSTATECHANGE);
+ }
+ } else {
+ /* Failed to Create Node Manager and DISP Object
+ * Stop the Processor from running. Put it in STOPPED State */
+ (void)(*p_proc_object->intf_fxns->
+ pfn_brd_stop) (p_proc_object->hbridge_context);
+ p_proc_object->proc_state = PROC_STOPPED;
+ }
+func_cont:
+ if (DSP_SUCCEEDED(status)) {
+ if (DSP_SUCCEEDED((*p_proc_object->intf_fxns->pfn_brd_status)
+ (p_proc_object->hbridge_context, &brd_state))) {
+ pr_info("%s: dsp in running state\n", __func__);
+ DBC_ASSERT(brd_state != BRD_HIBERNATION);
+ }
+ } else {
+ pr_err("%s: Failed to start the dsp\n", __func__);
+ }
+
+func_end:
+ DBC_ENSURE((DSP_SUCCEEDED(status) && p_proc_object->proc_state ==
+ PROC_RUNNING) || DSP_FAILED(status));
+ return status;
+}
+
+/*
+ * ======== proc_stop ========
+ * Purpose:
+ * Stop a processor running.
+ */
+int proc_stop(void *hprocessor)
+{
+ int status = 0;
+ struct proc_object *p_proc_object = (struct proc_object *)hprocessor;
+ struct msg_mgr *hmsg_mgr;
+ struct node_mgr *hnode_mgr;
+ void *hnode;
+ u32 node_tab_size = 1;
+ u32 num_nodes = 0;
+ u32 nodes_allocated = 0;
+ int brd_state;
+
+ DBC_REQUIRE(refs > 0);
+ if (!p_proc_object) {
+ status = -EFAULT;
+ goto func_end;
+ }
+ if (DSP_SUCCEEDED((*p_proc_object->intf_fxns->pfn_brd_status)
+ (p_proc_object->hbridge_context, &brd_state))) {
+ if (brd_state == BRD_ERROR)
+ bridge_deh_release_dummy_mem();
+ }
+ /* check if there are any running nodes */
+ status = dev_get_node_manager(p_proc_object->hdev_obj, &hnode_mgr);
+ if (DSP_SUCCEEDED(status) && hnode_mgr) {
+ status = node_enum_nodes(hnode_mgr, &hnode, node_tab_size,
+ &num_nodes, &nodes_allocated);
+ if ((status == -EINVAL) || (nodes_allocated > 0)) {
+ pr_err("%s: Can't stop device, active nodes = %d \n",
+ __func__, nodes_allocated);
+ return -EBADR;
+ }
+ }
+ /* Call the bridge_brd_stop */
+ /* It is OK to stop a device that has no nodes or was never started */
+ status =
+ (*p_proc_object->intf_fxns->
+ pfn_brd_stop) (p_proc_object->hbridge_context);
+ if (DSP_SUCCEEDED(status)) {
+ dev_dbg(bridge, "%s: processor in standby mode\n", __func__);
+ p_proc_object->proc_state = PROC_STOPPED;
+ /* Destroy the Node Manager and msg_ctrl Manager */
+ if (DSP_SUCCEEDED(dev_destroy2(p_proc_object->hdev_obj))) {
+ /* Destroy the msg_ctrl by calling msg_delete */
+ dev_get_msg_mgr(p_proc_object->hdev_obj, &hmsg_mgr);
+ if (hmsg_mgr) {
+ msg_delete(hmsg_mgr);
+ dev_set_msg_mgr(p_proc_object->hdev_obj, NULL);
+ }
+ if (DSP_SUCCEEDED
+ ((*p_proc_object->
+ intf_fxns->pfn_brd_status) (p_proc_object->
+ hbridge_context,
+ &brd_state)))
+ DBC_ASSERT(brd_state == BRD_STOPPED);
+ }
+ } else {
+ pr_err("%s: Failed to stop the processor\n", __func__);
+ }
+func_end:
+
+ return status;
+}
+
+/*
+ * ======== proc_un_map ========
+ * Purpose:
+ * Removes an MPU buffer mapping from the DSP address space.
+ */
+int proc_un_map(void *hprocessor, void *map_addr,
+ struct process_context *pr_ctxt)
+{
+ int status = 0;
+ struct proc_object *p_proc_object = (struct proc_object *)hprocessor;
+ struct dmm_object *dmm_mgr;
+ u32 va_align;
+ u32 size_align;
+
+ va_align = PG_ALIGN_LOW((u32) map_addr, PG_SIZE4K);
+ if (!p_proc_object) {
+ status = -EFAULT;
+ goto func_end;
+ }
+
+ status = dmm_get_handle(hprocessor, &dmm_mgr);
+ if (!dmm_mgr) {
+ status = -EFAULT;
+ goto func_end;
+ }
+
+ /* Critical section */
+ mutex_lock(&proc_lock);
+ /*
+ * Update DMM structures. Get the size to unmap.
+ * This function returns error if the VA is not mapped
+ */
+ status = dmm_un_map_memory(dmm_mgr, (u32) va_align, &size_align);
+ /* Remove mapping from the page tables. */
+ if (DSP_SUCCEEDED(status)) {
+ status = (*p_proc_object->intf_fxns->pfn_brd_mem_un_map)
+ (p_proc_object->hbridge_context, va_align, size_align);
+ }
+
+ mutex_unlock(&proc_lock);
+ if (DSP_FAILED(status))
+ goto func_end;
+
+ /*
+ * A successful unmap should be followed by removal of map_obj
+ * from dmm_map_list, so that mapped memory resource tracking
+ * remains up to date
+ */
+ remove_mapping_information(pr_ctxt, (u32) map_addr, size_align);
+
+func_end:
+ dev_dbg(bridge, "%s: hprocessor: 0x%p map_addr: 0x%p status: 0x%x\n",
+ __func__, hprocessor, map_addr, status);
+ return status;
+}
+
+/*
+ * ======== proc_un_reserve_memory ========
+ * Purpose:
+ * Frees a previously reserved region of DSP address space.
+ */
+int proc_un_reserve_memory(void *hprocessor, void *prsv_addr,
+ struct process_context *pr_ctxt)
+{
+ struct dmm_object *dmm_mgr;
+ int status = 0;
+ struct proc_object *p_proc_object = (struct proc_object *)hprocessor;
+ struct dmm_rsv_object *rsv_obj;
+
+ if (!p_proc_object) {
+ status = -EFAULT;
+ goto func_end;
+ }
+
+ status = dmm_get_handle(p_proc_object, &dmm_mgr);
+ if (!dmm_mgr) {
+ status = -EFAULT;
+ goto func_end;
+ }
+
+ status = dmm_un_reserve_memory(dmm_mgr, (u32) prsv_addr);
+ if (status != 0)
+ goto func_end;
+
+ /*
+ * A successful unreserve should be followed by removal of rsv_obj
+ * from dmm_rsv_list, so that reserved memory resource tracking
+ * remains up to date
+ */
+ spin_lock(&pr_ctxt->dmm_rsv_lock);
+ list_for_each_entry(rsv_obj, &pr_ctxt->dmm_rsv_list, link) {
+ if (rsv_obj->dsp_reserved_addr == (u32) prsv_addr) {
+ list_del(&rsv_obj->link);
+ kfree(rsv_obj);
+ break;
+ }
+ }
+ spin_unlock(&pr_ctxt->dmm_rsv_lock);
+
+func_end:
+ dev_dbg(bridge, "%s: hprocessor: 0x%p prsv_addr: 0x%p status: 0x%x\n",
+ __func__, hprocessor, prsv_addr, status);
+ return status;
+}
+
+/*
+ * ======== proc_monitor ========
+ * Purpose:
+ * Place the Processor in Monitor State. This is an internal
+ * function and a requirement before Processor is loaded.
+ * This does a bridge_brd_stop, dev_destroy2 and bridge_brd_monitor.
+ * In dev_destroy2 we delete the node manager.
+ * Parameters:
+ * p_proc_object: Pointer to Processor Object
+ * Returns:
+ * 0: Processor placed in monitor mode.
+ * !0: Failed to place processor in monitor mode.
+ * Requires:
+ * Valid Processor Handle
+ * Ensures:
+ * Success: ProcObject state is PROC_IDLE
+ */
+static int proc_monitor(struct proc_object *p_proc_object)
+{
+ int status = -EPERM;
+ struct msg_mgr *hmsg_mgr;
+ int brd_state;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(p_proc_object);
+
+ /* This is needed only when the device is loaded while it is
+ * already 'ACTIVE' */
+ /* Destroy the Node Manager, msg_ctrl Manager */
+ if (DSP_SUCCEEDED(dev_destroy2(p_proc_object->hdev_obj))) {
+ /* Destroy the msg_ctrl by calling msg_delete */
+ dev_get_msg_mgr(p_proc_object->hdev_obj, &hmsg_mgr);
+ if (hmsg_mgr) {
+ msg_delete(hmsg_mgr);
+ dev_set_msg_mgr(p_proc_object->hdev_obj, NULL);
+ }
+ }
+ /* Place the Board in the Monitor State */
+ if (DSP_SUCCEEDED((*p_proc_object->intf_fxns->pfn_brd_monitor)
+ (p_proc_object->hbridge_context))) {
+ status = 0;
+ if (DSP_SUCCEEDED((*p_proc_object->intf_fxns->pfn_brd_status)
+ (p_proc_object->hbridge_context, &brd_state)))
+ DBC_ASSERT(brd_state == BRD_IDLE);
+ }
+
+ DBC_ENSURE((DSP_SUCCEEDED(status) && brd_state == BRD_IDLE) ||
+ DSP_FAILED(status));
+ return status;
+}
+
+/*
+ * ======== get_envp_count ========
+ * Purpose:
+ * Return the number of elements in the envp array, including the
+ * terminating NULL element.
+ */
+static s32 get_envp_count(char **envp)
+{
+ s32 ret = 0;
+ if (envp) {
+ while (*envp++)
+ ret++;
+
+ ret += 1; /* Include the terminating NULL in the count. */
+ }
+
+ return ret;
+}
+
+/*
+ * ======== prepend_envp ========
+ * Purpose:
+ * Prepend an environment variable=value pair to the new envp array, and
+ * copy in the existing var=value pairs in the old envp array.
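+ *
+ * Illustrative example (editor's note, not from the original sources):
+ * given envp = { "PATH=/bin", NULL } and szVar = "PROC_ID=0", the new
+ * array ends up as { "PROC_ID=0", "PATH=/bin", NULL }.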
+ */
+static char **prepend_envp(char **new_envp, char **envp, s32 envp_elems,
+ s32 cnew_envp, char *szVar)
+{
+ char **pp_envp = new_envp;
+
+ DBC_REQUIRE(new_envp);
+
+ /* Prepend new environ var=value string */
+ *new_envp++ = szVar;
+
+ /* Copy user's environment into our own. */
+ while (envp_elems--)
+ *new_envp++ = *envp++;
+
+ /* Ensure NULL terminates the new environment strings array. */
+ if (envp_elems == 0)
+ *new_envp = NULL;
+
+ return pp_envp;
+}
+
+/*
+ * ======== proc_notify_clients ========
+ * Purpose:
+ * Notify the processor's clients of events.
+ */
+int proc_notify_clients(void *hProc, u32 uEvents)
+{
+ int status = 0;
+ struct proc_object *p_proc_object = (struct proc_object *)hProc;
+
+ DBC_REQUIRE(p_proc_object);
+ DBC_REQUIRE(IS_VALID_PROC_EVENT(uEvents));
+ DBC_REQUIRE(refs > 0);
+ if (!p_proc_object) {
+ status = -EFAULT;
+ goto func_end;
+ }
+
+ ntfy_notify(p_proc_object->ntfy_obj, uEvents);
+func_end:
+ return status;
+}
+
+/*
+ * ======== proc_notify_all_clients ========
+ * Purpose:
+ * Notify the processor's clients of events. This includes notifying all
+ * clients attached to a particular DSP.
+ */
+int proc_notify_all_clients(void *hProc, u32 uEvents)
+{
+ int status = 0;
+ struct proc_object *p_proc_object = (struct proc_object *)hProc;
+
+ DBC_REQUIRE(IS_VALID_PROC_EVENT(uEvents));
+ DBC_REQUIRE(refs > 0);
+
+ if (!p_proc_object) {
+ status = -EFAULT;
+ goto func_end;
+ }
+
+ dev_notify_clients(p_proc_object->hdev_obj, uEvents);
+
+func_end:
+ return status;
+}
+
+/*
+ * ======== proc_get_processor_id ========
+ * Purpose:
+ * Retrieves the processor ID.
+ */
+int proc_get_processor_id(void *hProc, u32 * procID)
+{
+ int status = 0;
+ struct proc_object *p_proc_object = (struct proc_object *)hProc;
+
+ if (p_proc_object)
+ *procID = p_proc_object->processor_id;
+ else
+ status = -EFAULT;
+
+ return status;
+}
diff --git a/drivers/staging/tidspbridge/rmgr/pwr.c b/drivers/staging/tidspbridge/rmgr/pwr.c
new file mode 100644
index 0000000..ec6d181
--- /dev/null
+++ b/drivers/staging/tidspbridge/rmgr/pwr.c
@@ -0,0 +1,182 @@
+/*
+ * pwr.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * PWR API for controlling DSP power states.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+/* ----------------------------------- Host OS */
+#include <dspbridge/host_os.h>
+
+/* ----------------------------------- This */
+#include <dspbridge/pwr.h>
+
+/* ----------------------------------- Resource Manager */
+#include <dspbridge/devdefs.h>
+#include <dspbridge/drv.h>
+
+/* ----------------------------------- Platform Manager */
+#include <dspbridge/dev.h>
+
+/* ----------------------------------- Link Driver */
+#include <dspbridge/dspioctl.h>
+
+/*
+ * ======== pwr_sleep_dsp ========
+ * Send command to DSP to enter sleep state.
+ */
+int pwr_sleep_dsp(IN CONST u32 sleepCode, IN CONST u32 timeout)
+{
+ struct bridge_drv_interface *intf_fxns;
+ struct bridge_dev_context *dw_context;
+ int status = -EPERM;
+ struct dev_object *hdev_obj = NULL;
+ u32 ioctlcode = 0;
+ u32 arg = timeout;
+
+ for (hdev_obj = (struct dev_object *)drv_get_first_dev_object();
+ hdev_obj != NULL;
+ hdev_obj =
+ (struct dev_object *)drv_get_next_dev_object((u32) hdev_obj)) {
+ if (DSP_FAILED(dev_get_bridge_context(hdev_obj,
+ (struct bridge_dev_context **)
+ &dw_context))) {
+ continue;
+ }
+ if (DSP_FAILED(dev_get_intf_fxns(hdev_obj,
+ (struct bridge_drv_interface **)
+ &intf_fxns))) {
+ continue;
+ }
+ if (sleepCode == PWR_DEEPSLEEP)
+ ioctlcode = BRDIOCTL_DEEPSLEEP;
+ else if (sleepCode == PWR_EMERGENCYDEEPSLEEP)
+ ioctlcode = BRDIOCTL_EMERGENCYSLEEP;
+ else
+ status = -EINVAL;
+
+ if (status != -EINVAL) {
+ status = (*intf_fxns->pfn_dev_cntrl) (dw_context,
+ ioctlcode,
+ (void *)&arg);
+ }
+ }
+ return status;
+}
+
+/*
+ * ======== pwr_wake_dsp ========
+ * Send command to DSP to wake it from sleep.
+ */
+int pwr_wake_dsp(IN CONST u32 timeout)
+{
+ struct bridge_drv_interface *intf_fxns;
+ struct bridge_dev_context *dw_context;
+ int status = -EPERM;
+ struct dev_object *hdev_obj = NULL;
+ u32 arg = timeout;
+
+ for (hdev_obj = (struct dev_object *)drv_get_first_dev_object();
+ hdev_obj != NULL;
+ hdev_obj = (struct dev_object *)drv_get_next_dev_object
+ ((u32) hdev_obj)) {
+ if (DSP_SUCCEEDED(dev_get_bridge_context(hdev_obj,
+ (struct bridge_dev_context
+ **)&dw_context))) {
+ if (DSP_SUCCEEDED
+ (dev_get_intf_fxns
+ (hdev_obj,
+ (struct bridge_drv_interface **)&intf_fxns))) {
+ status =
+ (*intf_fxns->pfn_dev_cntrl) (dw_context,
+ BRDIOCTL_WAKEUP,
+ (void *)&arg);
+ }
+ }
+ }
+ return status;
+}
+
+/*
+ * ======== pwr_pm_pre_scale ========
+ * Sends pre-notification message to DSP.
+ */
+int pwr_pm_pre_scale(IN u16 voltage_domain, u32 level)
+{
+ struct bridge_drv_interface *intf_fxns;
+ struct bridge_dev_context *dw_context;
+ int status = -EPERM;
+ struct dev_object *hdev_obj = NULL;
+ u32 arg[2];
+
+ arg[0] = voltage_domain;
+ arg[1] = level;
+
+ for (hdev_obj = (struct dev_object *)drv_get_first_dev_object();
+ hdev_obj != NULL;
+ hdev_obj = (struct dev_object *)drv_get_next_dev_object
+ ((u32) hdev_obj)) {
+ if (DSP_SUCCEEDED(dev_get_bridge_context(hdev_obj,
+ (struct bridge_dev_context
+ **)&dw_context))) {
+ if (DSP_SUCCEEDED
+ (dev_get_intf_fxns
+ (hdev_obj,
+ (struct bridge_drv_interface **)&intf_fxns))) {
+ status =
+ (*intf_fxns->pfn_dev_cntrl) (dw_context,
+ BRDIOCTL_PRESCALE_NOTIFY,
+ (void *)&arg);
+ }
+ }
+ }
+ return status;
+}
+
+/*
+ * ======== pwr_pm_post_scale ========
+ * Sends post-notification message to DSP.
+ */
+int pwr_pm_post_scale(IN u16 voltage_domain, u32 level)
+{
+ struct bridge_drv_interface *intf_fxns;
+ struct bridge_dev_context *dw_context;
+ int status = -EPERM;
+ struct dev_object *hdev_obj = NULL;
+ u32 arg[2];
+
+ arg[0] = voltage_domain;
+ arg[1] = level;
+
+ for (hdev_obj = (struct dev_object *)drv_get_first_dev_object();
+ hdev_obj != NULL;
+ hdev_obj = (struct dev_object *)drv_get_next_dev_object
+ ((u32) hdev_obj)) {
+ if (DSP_SUCCEEDED(dev_get_bridge_context(hdev_obj,
+ (struct bridge_dev_context
+ **)&dw_context))) {
+ if (DSP_SUCCEEDED
+ (dev_get_intf_fxns
+ (hdev_obj,
+ (struct bridge_drv_interface **)&intf_fxns))) {
+ status =
+ (*intf_fxns->pfn_dev_cntrl) (dw_context,
+ BRDIOCTL_POSTSCALE_NOTIFY,
+ (void *)&arg);
+ }
+ }
+ }
+ return status;
+
+}
diff --git a/drivers/staging/tidspbridge/rmgr/rmm.c b/drivers/staging/tidspbridge/rmgr/rmm.c
new file mode 100644
index 0000000..ff33080
--- /dev/null
+++ b/drivers/staging/tidspbridge/rmgr/rmm.c
@@ -0,0 +1,535 @@
+/*
+ * rmm.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+/*
+ * This memory manager provides general heap management and arbitrary
+ * alignment for any number of memory segments.
+ *
+ * Notes:
+ *
+ * Memory blocks are allocated from the end of the first free memory
+ * block large enough to satisfy the request. Alignment requirements
+ * are satisfied by "sliding" the block forward until its base satisfies
+ * the alignment specification; if this is not possible then the next
+ * free block large enough to hold the request is tried.
+ *
+ * Since alignment can cause the creation of a new free block - the
+ * unused memory formed between the start of the original free block
+ * and the start of the allocated block - the memory manager must free
+ * this memory to prevent a memory leak.
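+ *
+ * Illustrative example (editor's note, not from the original sources):
+ * for an 8-MAU alignment request served from a free block based at DSP
+ * address 0x1003, the allocation is "slid" forward to 0x1008 and the
+ * 5-MAU hole at 0x1003..0x1007 is handed back to the free list.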
+ *
+ * Overlay memory is managed by reserving it through rmm_alloc and freeing
+ * it through rmm_free. The memory manager prevents DSP code/data that is
+ * overlaid from being overwritten as long as the memory it runs in has
+ * been allocated and not yet freed.
+ */
+
+/* ----------------------------------- DSP/BIOS Bridge */
+#include <dspbridge/std.h>
+#include <dspbridge/dbdefs.h>
+
+/* ----------------------------------- Trace & Debug */
+#include <dspbridge/dbc.h>
+
+/* ----------------------------------- OS Adaptation Layer */
+#include <dspbridge/list.h>
+
+/* ----------------------------------- This */
+#include <dspbridge/rmm.h>
+
+/*
+ * ======== rmm_header ========
+ * This header is used to maintain a list of free memory blocks.
+ */
+struct rmm_header {
+ struct rmm_header *next; /* forms a free memory linked list */
+ u32 size; /* size of the free memory */
+ u32 addr; /* DSP address of memory block */
+};
+
+/*
+ * ======== rmm_ovly_sect ========
+ * Keeps track of memory occupied by overlay section.
+ */
+struct rmm_ovly_sect {
+ struct list_head list_elem;
+ u32 addr; /* Start of memory section */
+ u32 size; /* Length (target MAUs) of section */
+ s32 page; /* Memory page */
+};
+
+/*
+ * ======== rmm_target_obj ========
+ */
+struct rmm_target_obj {
+ struct rmm_segment *seg_tab;
+ struct rmm_header **free_list;
+ u32 num_segs;
+ struct lst_list *ovly_list; /* List of overlay memory in use */
+};
+
+static u32 refs; /* module reference count */
+
+static bool alloc_block(struct rmm_target_obj *target, u32 segid, u32 size,
+ u32 align, u32 *dspAddr);
+static bool free_block(struct rmm_target_obj *target, u32 segid, u32 addr,
+ u32 size);
+
+/*
+ * ======== rmm_alloc ========
+ */
+int rmm_alloc(struct rmm_target_obj *target, u32 segid, u32 size,
+ u32 align, u32 *dspAddr, bool reserve)
+{
+ struct rmm_ovly_sect *sect;
+ struct rmm_ovly_sect *prev_sect = NULL;
+ struct rmm_ovly_sect *new_sect;
+ u32 addr;
+ int status = 0;
+
+ DBC_REQUIRE(target);
+ DBC_REQUIRE(dspAddr != NULL);
+ DBC_REQUIRE(size > 0);
+ DBC_REQUIRE(reserve || (target->num_segs > 0));
+ DBC_REQUIRE(refs > 0);
+
+ if (!reserve) {
+ if (!alloc_block(target, segid, size, align, dspAddr)) {
+ status = -ENOMEM;
+ } else {
+ /* Increment the number of allocated blocks in this
+ * segment */
+ target->seg_tab[segid].number++;
+ }
+ goto func_end;
+ }
+ /* An overlay section - see if the block is already in use. If not,
+ * insert it into the list in ascending address order. */
+ addr = *dspAddr;
+ sect = (struct rmm_ovly_sect *)lst_first(target->ovly_list);
+ /* Find place to insert new list element. List is sorted from
+ * smallest to largest address. */
+ while (sect != NULL) {
+ if (addr <= sect->addr) {
+ /* Check for overlap with sect */
+ if ((addr + size > sect->addr) || (prev_sect &&
+ (prev_sect->addr +
+ prev_sect->size >
+ addr))) {
+ status = -ENXIO;
+ }
+ break;
+ }
+ prev_sect = sect;
+ sect = (struct rmm_ovly_sect *)lst_next(target->ovly_list,
+ (struct list_head *)
+ sect);
+ }
+ if (DSP_SUCCEEDED(status)) {
+ /* No overlap - allocate list element for new section. */
+ new_sect = kzalloc(sizeof(struct rmm_ovly_sect), GFP_KERNEL);
+ if (new_sect == NULL) {
+ status = -ENOMEM;
+ } else {
+ lst_init_elem((struct list_head *)new_sect);
+ new_sect->addr = addr;
+ new_sect->size = size;
+ new_sect->page = segid;
+ if (sect == NULL) {
+ /* Put new section at the end of the list */
+ lst_put_tail(target->ovly_list,
+ (struct list_head *)new_sect);
+ } else {
+ /* Put new section just before sect */
+ lst_insert_before(target->ovly_list,
+ (struct list_head *)new_sect,
+ (struct list_head *)sect);
+ }
+ }
+ }
+func_end:
+ return status;
+}
+
+/*
+ * ======== rmm_create ========
+ */
+int rmm_create(struct rmm_target_obj **target_obj,
+ struct rmm_segment seg_tab[], u32 num_segs)
+{
+ struct rmm_header *hptr;
+ struct rmm_segment *sptr, *tmp;
+ struct rmm_target_obj *target;
+ s32 i;
+ int status = 0;
+
+ DBC_REQUIRE(target_obj != NULL);
+ DBC_REQUIRE(num_segs == 0 || seg_tab != NULL);
+
+ /* Allocate DBL target object */
+ target = kzalloc(sizeof(struct rmm_target_obj), GFP_KERNEL);
+
+ if (target == NULL)
+ status = -ENOMEM;
+
+ if (DSP_FAILED(status))
+ goto func_cont;
+
+ target->num_segs = num_segs;
+ if (!(num_segs > 0))
+ goto func_cont;
+
+ /* Allocate the memory for freelist from host's memory */
+ target->free_list = kzalloc(num_segs * sizeof(struct rmm_header *),
+ GFP_KERNEL);
+ if (target->free_list == NULL) {
+ status = -ENOMEM;
+ } else {
+ /* Allocate headers for each element on the free list */
+ for (i = 0; i < (s32) num_segs; i++) {
+ target->free_list[i] =
+ kzalloc(sizeof(struct rmm_header), GFP_KERNEL);
+ if (target->free_list[i] == NULL) {
+ status = -ENOMEM;
+ break;
+ }
+ }
+ /* Allocate memory for initial segment table */
+ target->seg_tab = kzalloc(num_segs * sizeof(struct rmm_segment),
+ GFP_KERNEL);
+ if (target->seg_tab == NULL) {
+ status = -ENOMEM;
+ } else {
+ /* Initialize segment table and free list */
+ sptr = target->seg_tab;
+ for (i = 0, tmp = seg_tab; num_segs > 0;
+ num_segs--, i++) {
+ *sptr = *tmp;
+ hptr = target->free_list[i];
+ hptr->addr = tmp->base;
+ hptr->size = tmp->length;
+ hptr->next = NULL;
+ tmp++;
+ sptr++;
+ }
+ }
+ }
+func_cont:
+ /* Initialize overlay memory list */
+ if (DSP_SUCCEEDED(status)) {
+ target->ovly_list = kzalloc(sizeof(struct lst_list),
+ GFP_KERNEL);
+ if (target->ovly_list == NULL)
+ status = -ENOMEM;
+ else
+ INIT_LIST_HEAD(&target->ovly_list->head);
+ }
+
+ if (DSP_SUCCEEDED(status)) {
+ *target_obj = target;
+ } else {
+ *target_obj = NULL;
+ if (target)
+ rmm_delete(target);
+
+ }
+
+ DBC_ENSURE((DSP_SUCCEEDED(status) && *target_obj)
+ || (DSP_FAILED(status) && *target_obj == NULL));
+
+ return status;
+}
+
+/*
+ * ======== rmm_delete ========
+ */
+void rmm_delete(struct rmm_target_obj *target)
+{
+ struct rmm_ovly_sect *ovly_section;
+ struct rmm_header *hptr;
+ struct rmm_header *next;
+ u32 i;
+
+ DBC_REQUIRE(target);
+
+ kfree(target->seg_tab);
+
+ if (target->ovly_list) {
+ while ((ovly_section = (struct rmm_ovly_sect *)lst_get_head
+ (target->ovly_list))) {
+ kfree(ovly_section);
+ }
+ DBC_ASSERT(LST_IS_EMPTY(target->ovly_list));
+ kfree(target->ovly_list);
+ }
+
+ if (target->free_list != NULL) {
+ /* Free elements on freelist */
+ for (i = 0; i < target->num_segs; i++) {
+ hptr = next = target->free_list[i];
+ while (next) {
+ hptr = next;
+ next = hptr->next;
+ kfree(hptr);
+ }
+ }
+ kfree(target->free_list);
+ }
+
+ kfree(target);
+}
+
+/*
+ * ======== rmm_exit ========
+ */
+void rmm_exit(void)
+{
+ DBC_REQUIRE(refs > 0);
+
+ refs--;
+
+ DBC_ENSURE(refs >= 0);
+}
+
+/*
+ * ======== rmm_free ========
+ */
+bool rmm_free(struct rmm_target_obj *target, u32 segid, u32 addr, u32 size,
+ bool reserved)
+{
+ struct rmm_ovly_sect *sect;
+ bool ret = true;
+
+ DBC_REQUIRE(target);
+
+ DBC_REQUIRE(reserved || segid < target->num_segs);
+ DBC_REQUIRE(reserved || (addr >= target->seg_tab[segid].base &&
+ (addr + size) <= (target->seg_tab[segid].base +
+ target->seg_tab[segid].
+ length)));
+
+ /*
+ * Free or unreserve memory.
+ */
+ if (!reserved) {
+ ret = free_block(target, segid, addr, size);
+ if (ret)
+ target->seg_tab[segid].number--;
+
+ } else {
+ /* Unreserve memory */
+ sect = (struct rmm_ovly_sect *)lst_first(target->ovly_list);
+ while (sect != NULL) {
+ if (addr == sect->addr) {
+ DBC_ASSERT(size == sect->size);
+ /* Remove from list */
+ lst_remove_elem(target->ovly_list,
+ (struct list_head *)sect);
+ kfree(sect);
+ break;
+ }
+ sect =
+ (struct rmm_ovly_sect *)lst_next(target->ovly_list,
+ (struct list_head
+ *)sect);
+ }
+ if (sect == NULL)
+ ret = false;
+
+ }
+ return ret;
+}
+
+/*
+ * ======== rmm_init ========
+ */
+bool rmm_init(void)
+{
+ DBC_REQUIRE(refs >= 0);
+
+ refs++;
+
+ return true;
+}
+
+/*
+ * ======== rmm_stat ========
+ */
+bool rmm_stat(struct rmm_target_obj *target, enum dsp_memtype segid,
+ struct dsp_memstat *pMemStatBuf)
+{
+ struct rmm_header *head;
+ bool ret = false;
+ u32 max_free_size = 0;
+ u32 total_free_size = 0;
+ u32 free_blocks = 0;
+
+ DBC_REQUIRE(pMemStatBuf != NULL);
+ DBC_ASSERT(target != NULL);
+
+ if ((u32) segid < target->num_segs) {
+ head = target->free_list[segid];
+
+ /* Collect data from free_list */
+ while (head != NULL) {
+ max_free_size = max(max_free_size, head->size);
+ total_free_size += head->size;
+ free_blocks++;
+ head = head->next;
+ }
+
+ /* ul_size */
+ pMemStatBuf->ul_size = target->seg_tab[segid].length;
+
+ /* ul_num_free_blocks */
+ pMemStatBuf->ul_num_free_blocks = free_blocks;
+
+ /* ul_total_free_size */
+ pMemStatBuf->ul_total_free_size = total_free_size;
+
+ /* ul_len_max_free_block */
+ pMemStatBuf->ul_len_max_free_block = max_free_size;
+
+ /* ul_num_alloc_blocks */
+ pMemStatBuf->ul_num_alloc_blocks =
+ target->seg_tab[segid].number;
+
+ ret = true;
+ }
+
+ return ret;
+}
+
+/*
+ * ======== alloc_block ========
+ * This allocation function allocates memory from the lowest addresses
+ * first.
+ */
+static bool alloc_block(struct rmm_target_obj *target, u32 segid, u32 size,
+ u32 align, u32 *dspAddr)
+{
+ struct rmm_header *head;
+ struct rmm_header *prevhead = NULL;
+ struct rmm_header *next;
+ u32 tmpalign;
+ u32 alignbytes;
+ u32 hsize;
+ u32 allocsize;
+ u32 addr;
+
+ alignbytes = (align == 0) ? 1 : align;
+ prevhead = NULL;
+ head = target->free_list[segid];
+
+ do {
+ hsize = head->size;
+ next = head->next;
+
+ addr = head->addr; /* alloc from the bottom */
+
+ /* align allocation */
+ tmpalign = (u32) addr % alignbytes;
+ if (tmpalign != 0)
+ tmpalign = alignbytes - tmpalign;
+
+ allocsize = size + tmpalign;
+
+ if (hsize >= allocsize) { /* big enough */
+ if (hsize == allocsize && prevhead != NULL) {
+ prevhead->next = next;
+ kfree(head);
+ } else {
+ head->size = hsize - allocsize;
+ head->addr += allocsize;
+ }
+
+ /* free up any hole created by alignment */
+ if (tmpalign)
+ free_block(target, segid, addr, tmpalign);
+
+ *dspAddr = addr + tmpalign;
+ return true;
+ }
+
+ prevhead = head;
+ head = next;
+
+ } while (head != NULL);
+
+ return false;
+}
+
+/*
+ * ======== free_block ========
+ * TODO: free_block() allocates memory, which could result in failure.
+ * Could allocate an rmm_header in rmm_alloc(), to be kept in a pool.
+ * free_block() could use an rmm_header from the pool, freeing as blocks
+ * are coalesced.
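+ *
+ * Illustrative coalescing example (editor's note, not from the original
+ * sources): freeing 0x40 MAUs at address 0x100 when one free block ends
+ * at 0x100 and another begins at 0x140 merges all three into a single
+ * free block covering the whole range.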
+ */
+static bool free_block(struct rmm_target_obj *target, u32 segid, u32 addr,
+ u32 size)
+{
+ struct rmm_header *head;
+ struct rmm_header *thead;
+ struct rmm_header *rhead;
+ bool ret = true;
+
+ /* Create a memory header to hold the newly freed block. */
+ rhead = kzalloc(sizeof(struct rmm_header), GFP_KERNEL);
+ if (rhead == NULL) {
+ ret = false;
+ } else {
+ /* search down the free list to find the right place for addr */
+ head = target->free_list[segid];
+
+ if (addr >= head->addr) {
+ while (head->next != NULL && addr > head->next->addr)
+ head = head->next;
+
+ thead = head->next;
+
+ head->next = rhead;
+ rhead->next = thead;
+ rhead->addr = addr;
+ rhead->size = size;
+ } else {
+ *rhead = *head;
+ head->next = rhead;
+ head->addr = addr;
+ head->size = size;
+ thead = rhead->next;
+ }
+
+ /* join with upper block, if possible */
+ if (thead != NULL && (rhead->addr + rhead->size) ==
+ thead->addr) {
+ head->next = rhead->next;
+ thead->size = size + thead->size;
+ thead->addr = addr;
+ kfree(rhead);
+ rhead = thead;
+ }
+
+ /* join with the lower block, if possible */
+ if ((head->addr + head->size) == rhead->addr) {
+ head->next = rhead->next;
+ head->size = head->size + rhead->size;
+ kfree(rhead);
+ }
+ }
+
+ return ret;
+}
diff --git a/drivers/staging/tidspbridge/rmgr/strm.c b/drivers/staging/tidspbridge/rmgr/strm.c
new file mode 100644
index 0000000..e537ee8
--- /dev/null
+++ b/drivers/staging/tidspbridge/rmgr/strm.c
@@ -0,0 +1,861 @@
+/*
+ * strm.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * DSP/BIOS Bridge Stream Manager.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+/* ----------------------------------- Host OS */
+#include <dspbridge/host_os.h>
+
+/* ----------------------------------- DSP/BIOS Bridge */
+#include <dspbridge/std.h>
+#include <dspbridge/dbdefs.h>
+
+/* ----------------------------------- Trace & Debug */
+#include <dspbridge/dbc.h>
+
+/* ----------------------------------- OS Adaptation Layer */
+#include <dspbridge/sync.h>
+
+/* ----------------------------------- Bridge Driver */
+#include <dspbridge/dspdefs.h>
+
+/* ----------------------------------- Resource Manager */
+#include <dspbridge/nodepriv.h>
+
+/* ----------------------------------- Others */
+#include <dspbridge/cmm.h>
+
+/* ----------------------------------- This */
+#include <dspbridge/strm.h>
+
+#include <dspbridge/cfg.h>
+#include <dspbridge/resourcecleanup.h>
+
+/* ----------------------------------- Defines, Data Structures, Typedefs */
+#define DEFAULTTIMEOUT 10000
+#define DEFAULTNUMBUFS 2
+
+/*
+ * ======== strm_mgr ========
+ * The strm_mgr contains device information needed to open the underlying
+ * channels of a stream.
+ */
+struct strm_mgr {
+ struct dev_object *dev_obj; /* Device for this processor */
+ struct chnl_mgr *hchnl_mgr; /* Channel manager */
+ /* Function interface to Bridge driver */
+ struct bridge_drv_interface *intf_fxns;
+};
+
+/*
+ * ======== strm_object ========
+ * This object is allocated in strm_open().
+ */
+struct strm_object {
+ struct strm_mgr *strm_mgr_obj;
+ struct chnl_object *chnl_obj;
+ u32 dir; /* DSP_TONODE or DSP_FROMNODE */
+ u32 utimeout;
+ u32 num_bufs; /* Max # of bufs allowed in stream */
+ u32 un_bufs_in_strm; /* Current # of bufs in stream */
+ u32 ul_n_bytes; /* bytes transferred since idled */
+ /* STREAM_IDLE, STREAM_READY, ... */
+ enum dsp_streamstate strm_state;
+ void *user_event; /* Saved for strm_get_info() */
+ enum dsp_strmmode strm_mode; /* STRMMODE_[PROCCOPY][ZEROCOPY]... */
+ u32 udma_chnl_id; /* DMA chnl id */
+ u32 udma_priority; /* DMA priority:DMAPRI_[LOW][HIGH] */
+ u32 segment_id; /* >0 is SM segment; 0 is local heap */
+ u32 buf_alignment; /* Alignment for stream bufs */
+ /* Stream's SM address translator */
+ struct cmm_xlatorobject *xlator;
+};
+
+/* ----------------------------------- Globals */
+static u32 refs; /* module reference count */
+
+/* ----------------------------------- Function Prototypes */
+static int delete_strm(struct strm_object *hStrm);
+static void delete_strm_mgr(struct strm_mgr *strm_mgr_obj);
+
+/*
+ * ======== strm_allocate_buffer ========
+ * Purpose:
+ * Allocates buffers for a stream.
+ */
+int strm_allocate_buffer(struct strm_object *hStrm, u32 usize,
+ OUT u8 **ap_buffer, u32 num_bufs,
+ struct process_context *pr_ctxt)
+{
+ int status = 0;
+ u32 alloc_cnt = 0;
+ u32 i;
+
+ void *hstrm_res;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(ap_buffer != NULL);
+
+ if (hStrm) {
+ /*
+ * Allocate from segment specified at time of stream open.
+ */
+ if (usize == 0)
+ status = -EINVAL;
+
+ } else {
+ status = -EFAULT;
+ }
+
+ if (DSP_FAILED(status))
+ goto func_end;
+
+ for (i = 0; i < num_bufs; i++) {
+ DBC_ASSERT(hStrm->xlator != NULL);
+ (void)cmm_xlator_alloc_buf(hStrm->xlator, &ap_buffer[i], usize);
+ if (ap_buffer[i] == NULL) {
+ status = -ENOMEM;
+ alloc_cnt = i;
+ break;
+ }
+ }
+ if (DSP_FAILED(status))
+ strm_free_buffer(hStrm, ap_buffer, alloc_cnt, pr_ctxt);
+
+ if (DSP_FAILED(status))
+ goto func_end;
+
+ if (drv_get_strm_res_element(hStrm, &hstrm_res, pr_ctxt) !=
+ -ENOENT)
+ drv_proc_update_strm_res(num_bufs, hstrm_res);
+
+func_end:
+ return status;
+}
+
+/*
+ * ======== strm_close ========
+ * Purpose:
+ * Close a stream opened with strm_open().
+ */
+int strm_close(struct strm_object *hStrm,
+ struct process_context *pr_ctxt)
+{
+ struct bridge_drv_interface *intf_fxns;
+ struct chnl_info chnl_info_obj;
+ int status = 0;
+
+ void *hstrm_res;
+
+ DBC_REQUIRE(refs > 0);
+
+ if (!hStrm) {
+ status = -EFAULT;
+ } else {
+ /* Have all buffers been reclaimed? If not, return
+ * -EPIPE */
+ intf_fxns = hStrm->strm_mgr_obj->intf_fxns;
+ status =
+ (*intf_fxns->pfn_chnl_get_info) (hStrm->chnl_obj,
+ &chnl_info_obj);
+ DBC_ASSERT(DSP_SUCCEEDED(status));
+
+ if (chnl_info_obj.cio_cs > 0 || chnl_info_obj.cio_reqs > 0)
+ status = -EPIPE;
+ else
+ status = delete_strm(hStrm);
+ }
+
+ if (DSP_FAILED(status))
+ goto func_end;
+
+ if (drv_get_strm_res_element(hStrm, &hstrm_res, pr_ctxt) !=
+ -ENOENT)
+ drv_proc_remove_strm_res_element(hstrm_res, pr_ctxt);
+func_end:
+ DBC_ENSURE(status == 0 || status == -EFAULT ||
+ status == -EPIPE || status == -EPERM);
+
+ dev_dbg(bridge, "%s: hStrm: %p, status 0x%x\n", __func__,
+ hStrm, status);
+ return status;
+}
+
+/*
+ * ======== strm_create ========
+ * Purpose:
+ * Create a STRM manager object.
+ */
+int strm_create(OUT struct strm_mgr **phStrmMgr,
+ struct dev_object *dev_obj)
+{
+ struct strm_mgr *strm_mgr_obj;
+ int status = 0;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(phStrmMgr != NULL);
+ DBC_REQUIRE(dev_obj != NULL);
+
+ *phStrmMgr = NULL;
+ /* Allocate STRM manager object */
+ strm_mgr_obj = kzalloc(sizeof(struct strm_mgr), GFP_KERNEL);
+ if (strm_mgr_obj == NULL)
+ status = -ENOMEM;
+ else
+ strm_mgr_obj->dev_obj = dev_obj;
+
+ /* Get Channel manager and Bridge function interface */
+ if (DSP_SUCCEEDED(status)) {
+ status = dev_get_chnl_mgr(dev_obj, &(strm_mgr_obj->hchnl_mgr));
+ if (DSP_SUCCEEDED(status)) {
+ (void)dev_get_intf_fxns(dev_obj,
+ &(strm_mgr_obj->intf_fxns));
+ DBC_ASSERT(strm_mgr_obj->intf_fxns != NULL);
+ }
+ }
+
+ if (DSP_SUCCEEDED(status))
+ *phStrmMgr = strm_mgr_obj;
+ else
+ delete_strm_mgr(strm_mgr_obj);
+
+ DBC_ENSURE((DSP_SUCCEEDED(status) && *phStrmMgr) ||
+ (DSP_FAILED(status) && *phStrmMgr == NULL));
+
+ return status;
+}
+
+/*
+ * ======== strm_delete ========
+ * Purpose:
+ * Delete the STRM Manager Object.
+ */
+void strm_delete(struct strm_mgr *strm_mgr_obj)
+{
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(strm_mgr_obj);
+
+ delete_strm_mgr(strm_mgr_obj);
+}
+
+/*
+ * ======== strm_exit ========
+ * Purpose:
+ * Discontinue usage of STRM module.
+ */
+void strm_exit(void)
+{
+ DBC_REQUIRE(refs > 0);
+
+ refs--;
+
+ DBC_ENSURE(refs >= 0);
+}
+
+/*
+ * ======== strm_free_buffer ========
+ * Purpose:
+ * Frees the buffers allocated for a stream.
+ */
+int strm_free_buffer(struct strm_object *hStrm, u8 ** ap_buffer,
+ u32 num_bufs, struct process_context *pr_ctxt)
+{
+ int status = 0;
+ u32 i = 0;
+
+ void *hstrm_res = NULL;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(ap_buffer != NULL);
+
+ if (!hStrm)
+ status = -EFAULT;
+
+ if (DSP_SUCCEEDED(status)) {
+ for (i = 0; i < num_bufs; i++) {
+ DBC_ASSERT(hStrm->xlator != NULL);
+ status =
+ cmm_xlator_free_buf(hStrm->xlator, ap_buffer[i]);
+ if (DSP_FAILED(status))
+ break;
+ ap_buffer[i] = NULL;
+ }
+ }
+ if (drv_get_strm_res_element(hStrm, hstrm_res, pr_ctxt) !=
+ -ENOENT)
+ drv_proc_update_strm_res(num_bufs - i, hstrm_res);
+
+ return status;
+}
+
+/*
+ * ======== strm_get_info ========
+ * Purpose:
+ * Retrieves information about a stream.
+ */
+int strm_get_info(struct strm_object *hStrm,
+ OUT struct stream_info *stream_info,
+ u32 stream_info_size)
+{
+ struct bridge_drv_interface *intf_fxns;
+ struct chnl_info chnl_info_obj;
+ int status = 0;
+ void *virt_base = NULL; /* NULL if no SM used */
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(stream_info != NULL);
+ DBC_REQUIRE(stream_info_size >= sizeof(struct stream_info));
+
+ if (!hStrm) {
+ status = -EFAULT;
+ } else {
+ if (stream_info_size < sizeof(struct stream_info)) {
+ /* size of users info */
+ status = -EINVAL;
+ }
+ }
+ if (DSP_FAILED(status))
+ goto func_end;
+
+ intf_fxns = hStrm->strm_mgr_obj->intf_fxns;
+ status =
+ (*intf_fxns->pfn_chnl_get_info) (hStrm->chnl_obj, &chnl_info_obj);
+ if (DSP_FAILED(status))
+ goto func_end;
+
+ if (hStrm->xlator) {
+ /* We have a translator */
+ DBC_ASSERT(hStrm->segment_id > 0);
+ cmm_xlator_info(hStrm->xlator, (u8 **) &virt_base, 0,
+ hStrm->segment_id, false);
+ }
+ stream_info->segment_id = hStrm->segment_id;
+ stream_info->strm_mode = hStrm->strm_mode;
+ stream_info->virt_base = virt_base;
+ stream_info->user_strm->number_bufs_allowed = hStrm->num_bufs;
+ stream_info->user_strm->number_bufs_in_stream = chnl_info_obj.cio_cs +
+ chnl_info_obj.cio_reqs;
+ /* # of bytes transferred since last call to DSPStream_Idle() */
+ stream_info->user_strm->ul_number_bytes = chnl_info_obj.bytes_tx;
+ stream_info->user_strm->sync_object_handle = chnl_info_obj.event_obj;
+ /* Determine stream state based on channel state and info */
+ if (chnl_info_obj.dw_state & CHNL_STATEEOS) {
+ stream_info->user_strm->ss_stream_state = STREAM_DONE;
+ } else {
+ if (chnl_info_obj.cio_cs > 0)
+ stream_info->user_strm->ss_stream_state = STREAM_READY;
+ else if (chnl_info_obj.cio_reqs > 0)
+ stream_info->user_strm->ss_stream_state =
+ STREAM_PENDING;
+ else
+ stream_info->user_strm->ss_stream_state = STREAM_IDLE;
+
+ }
+func_end:
+ return status;
+}
+
+/*
+ * ======== strm_idle ========
+ * Purpose:
+ * Idles a particular stream.
+ */
+int strm_idle(struct strm_object *hStrm, bool fFlush)
+{
+ struct bridge_drv_interface *intf_fxns;
+ int status = 0;
+
+ DBC_REQUIRE(refs > 0);
+
+ if (!hStrm) {
+ status = -EFAULT;
+ } else {
+ intf_fxns = hStrm->strm_mgr_obj->intf_fxns;
+
+ status = (*intf_fxns->pfn_chnl_idle) (hStrm->chnl_obj,
+ hStrm->utimeout, fFlush);
+ }
+
+ dev_dbg(bridge, "%s: hStrm: %p fFlush: 0x%x status: 0x%x\n",
+ __func__, hStrm, fFlush, status);
+ return status;
+}
+
+/*
+ * ======== strm_init ========
+ * Purpose:
+ * Initialize the STRM module.
+ */
+bool strm_init(void)
+{
+ bool ret = true;
+
+ DBC_REQUIRE(refs >= 0);
+
+ if (ret)
+ refs++;
+
+ DBC_ENSURE((ret && (refs > 0)) || (!ret && (refs >= 0)));
+
+ return ret;
+}
+
+/*
+ * ======== strm_issue ========
+ * Purpose:
+ * Issues a buffer on a stream
+ */
+int strm_issue(struct strm_object *hStrm, IN u8 *pbuf, u32 ul_bytes,
+ u32 ul_buf_size, u32 dw_arg)
+{
+ struct bridge_drv_interface *intf_fxns;
+ int status = 0;
+ void *tmp_buf = NULL;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(pbuf != NULL);
+
+ if (!hStrm) {
+ status = -EFAULT;
+ } else {
+ intf_fxns = hStrm->strm_mgr_obj->intf_fxns;
+
+ if (hStrm->segment_id != 0) {
+ tmp_buf = cmm_xlator_translate(hStrm->xlator,
+ (void *)pbuf,
+ CMM_VA2DSPPA);
+ if (tmp_buf == NULL)
+ status = -ESRCH;
+
+ }
+ if (DSP_SUCCEEDED(status)) {
+ status = (*intf_fxns->pfn_chnl_add_io_req)
+ (hStrm->chnl_obj, pbuf, ul_bytes, ul_buf_size,
+ (u32) tmp_buf, dw_arg);
+ }
+ if (status == -EIO)
+ status = -ENOSR;
+ }
+
+ dev_dbg(bridge, "%s: hStrm: %p pbuf: %p ul_bytes: 0x%x dw_arg: 0x%x "
+ "status: 0x%x\n", __func__, hStrm, pbuf,
+ ul_bytes, dw_arg, status);
+ return status;
+}
+
+/*
+ * ======== strm_open ========
+ * Purpose:
+ * Open a stream for sending/receiving data buffers to/from a task or
+ * XDAIS socket node on the DSP.
+ */
+int strm_open(struct node_object *hnode, u32 dir, u32 index,
+ IN struct strm_attr *pattr,
+ OUT struct strm_object **phStrm,
+ struct process_context *pr_ctxt)
+{
+ struct strm_mgr *strm_mgr_obj;
+ struct bridge_drv_interface *intf_fxns;
+ u32 ul_chnl_id;
+ struct strm_object *strm_obj = NULL;
+ s8 chnl_mode;
+ struct chnl_attr chnl_attr_obj;
+ int status = 0;
+ struct cmm_object *hcmm_mgr = NULL; /* Shared memory manager hndl */
+
+ void *hstrm_res;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(phStrm != NULL);
+ DBC_REQUIRE(pattr != NULL);
+ *phStrm = NULL;
+ if (dir != DSP_TONODE && dir != DSP_FROMNODE) {
+ status = -EPERM;
+ } else {
+ /* Get the channel id from the node (set in node_connect()) */
+ status = node_get_channel_id(hnode, dir, index, &ul_chnl_id);
+ }
+ if (DSP_SUCCEEDED(status))
+ status = node_get_strm_mgr(hnode, &strm_mgr_obj);
+
+ if (DSP_SUCCEEDED(status)) {
+ strm_obj = kzalloc(sizeof(struct strm_object), GFP_KERNEL);
+ if (strm_obj == NULL) {
+ status = -ENOMEM;
+ } else {
+ strm_obj->strm_mgr_obj = strm_mgr_obj;
+ strm_obj->dir = dir;
+ strm_obj->strm_state = STREAM_IDLE;
+ strm_obj->user_event = pattr->user_event;
+ if (pattr->stream_attr_in != NULL) {
+ strm_obj->utimeout =
+ pattr->stream_attr_in->utimeout;
+ strm_obj->num_bufs =
+ pattr->stream_attr_in->num_bufs;
+ strm_obj->strm_mode =
+ pattr->stream_attr_in->strm_mode;
+ strm_obj->segment_id =
+ pattr->stream_attr_in->segment_id;
+ strm_obj->buf_alignment =
+ pattr->stream_attr_in->buf_alignment;
+ strm_obj->udma_chnl_id =
+ pattr->stream_attr_in->udma_chnl_id;
+ strm_obj->udma_priority =
+ pattr->stream_attr_in->udma_priority;
+ chnl_attr_obj.uio_reqs =
+ pattr->stream_attr_in->num_bufs;
+ } else {
+ strm_obj->utimeout = DEFAULTTIMEOUT;
+ strm_obj->num_bufs = DEFAULTNUMBUFS;
+ strm_obj->strm_mode = STRMMODE_PROCCOPY;
+ strm_obj->segment_id = 0; /* local mem */
+ strm_obj->buf_alignment = 0;
+ strm_obj->udma_chnl_id = 0;
+ strm_obj->udma_priority = 0;
+ chnl_attr_obj.uio_reqs = DEFAULTNUMBUFS;
+ }
+ chnl_attr_obj.reserved1 = NULL;
+ /* DMA chnl flush timeout */
+ chnl_attr_obj.reserved2 = strm_obj->utimeout;
+ chnl_attr_obj.event_obj = NULL;
+ if (pattr->user_event != NULL)
+ chnl_attr_obj.event_obj = pattr->user_event;
+
+ }
+ }
+ if (DSP_FAILED(status))
+ goto func_cont;
+
+ if ((pattr->virt_base == NULL) || !(pattr->ul_virt_size > 0))
+ goto func_cont;
+
+ /* No System DMA */
+ DBC_ASSERT(strm_obj->strm_mode != STRMMODE_LDMA);
+ /* Get the shared mem mgr for this stream's dev object */
+ status = dev_get_cmm_mgr(strm_mgr_obj->dev_obj, &hcmm_mgr);
+ if (DSP_SUCCEEDED(status)) {
+ /* Allocate an SM addr translator for this strm. */
+ status = cmm_xlator_create(&strm_obj->xlator, hcmm_mgr, NULL);
+ if (DSP_SUCCEEDED(status)) {
+ DBC_ASSERT(strm_obj->segment_id > 0);
+ /* Set translators Virt Addr attributes */
+ status = cmm_xlator_info(strm_obj->xlator,
+ (u8 **) &pattr->virt_base,
+ pattr->ul_virt_size,
+ strm_obj->segment_id, true);
+ }
+ }
+func_cont:
+ if (DSP_SUCCEEDED(status)) {
+ /* Open channel */
+ chnl_mode = (dir == DSP_TONODE) ?
+ CHNL_MODETODSP : CHNL_MODEFROMDSP;
+ intf_fxns = strm_mgr_obj->intf_fxns;
+ status = (*intf_fxns->pfn_chnl_open) (&(strm_obj->chnl_obj),
+ strm_mgr_obj->hchnl_mgr,
+ chnl_mode, ul_chnl_id,
+ &chnl_attr_obj);
+ if (DSP_FAILED(status)) {
+ /*
+ * over-ride non-returnable status codes so we return
+ * something documented
+ */
+ if (status != -ENOMEM && status !=
+ -EINVAL && status != -EPERM) {
+ /*
+ * We got a status that's not return-able.
+ * Assert that we got something we were
+ * expecting (-EFAULT isn't acceptable,
+ * strm_mgr_obj->hchnl_mgr better be valid or we
+ * assert here), and then return -EPERM.
+ */
+ DBC_ASSERT(status == -ENOSR ||
+ status == -ECHRNG ||
+ status == -EALREADY ||
+ status == -EIO);
+ status = -EPERM;
+ }
+ }
+ }
+ if (DSP_SUCCEEDED(status)) {
+ *phStrm = strm_obj;
+ drv_proc_insert_strm_res_element(*phStrm, &hstrm_res, pr_ctxt);
+ } else {
+ (void)delete_strm(strm_obj);
+ }
+
+ /* ensure we return a documented error code */
+ DBC_ENSURE((DSP_SUCCEEDED(status) && *phStrm) ||
+ (*phStrm == NULL && (status == -EFAULT ||
+ status == -EPERM
+ || status == -EINVAL)));
+
+ dev_dbg(bridge, "%s: hnode: %p dir: 0x%x index: 0x%x pattr: %p "
+ "phStrm: %p status: 0x%x\n", __func__,
+ hnode, dir, index, pattr, phStrm, status);
+ return status;
+}
+
+/*
+ * ======== strm_reclaim ========
+ * Purpose:
+ * Reclaims a buffer from a stream.
+ */
+int strm_reclaim(struct strm_object *hStrm, OUT u8 ** buf_ptr,
+ u32 *pulBytes, u32 *pulBufSize, u32 *pdw_arg)
+{
+ struct bridge_drv_interface *intf_fxns;
+ struct chnl_ioc chnl_ioc_obj;
+ int status = 0;
+ void *tmp_buf = NULL;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(buf_ptr != NULL);
+ DBC_REQUIRE(pulBytes != NULL);
+ DBC_REQUIRE(pdw_arg != NULL);
+
+ if (!hStrm) {
+ status = -EFAULT;
+ goto func_end;
+ }
+ intf_fxns = hStrm->strm_mgr_obj->intf_fxns;
+
+ status =
+ (*intf_fxns->pfn_chnl_get_ioc) (hStrm->chnl_obj, hStrm->utimeout,
+ &chnl_ioc_obj);
+ if (DSP_SUCCEEDED(status)) {
+ *pulBytes = chnl_ioc_obj.byte_size;
+ if (pulBufSize)
+ *pulBufSize = chnl_ioc_obj.buf_size;
+
+ *pdw_arg = chnl_ioc_obj.dw_arg;
+ if (!CHNL_IS_IO_COMPLETE(chnl_ioc_obj)) {
+ if (CHNL_IS_TIMED_OUT(chnl_ioc_obj)) {
+ status = -ETIME;
+ } else {
+ /* Allow reclaims after idle to succeed */
+ if (!CHNL_IS_IO_CANCELLED(chnl_ioc_obj))
+ status = -EPERM;
+
+ }
+ }
+ /* Translate zerocopy buffer if channel not canceled. */
+ if (DSP_SUCCEEDED(status)
+ && (!CHNL_IS_IO_CANCELLED(chnl_ioc_obj))
+ && (hStrm->strm_mode == STRMMODE_ZEROCOPY)) {
+ /*
+ * This is a zero-copy channel so chnl_ioc_obj.pbuf
+ * contains the DSP address of SM. We need to
+ * translate it to a virtual address for the user
+ * thread to access.
+ * Note: Could add CMM_DSPPA2VA to CMM in the future.
+ */
+ tmp_buf = cmm_xlator_translate(hStrm->xlator,
+ chnl_ioc_obj.pbuf,
+ CMM_DSPPA2PA);
+ if (tmp_buf != NULL) {
+ /* now convert this GPP Pa to Va */
+ tmp_buf = cmm_xlator_translate(hStrm->xlator,
+ tmp_buf,
+ CMM_PA2VA);
+ }
+ if (tmp_buf == NULL)
+ status = -ESRCH;
+
+ chnl_ioc_obj.pbuf = tmp_buf;
+ }
+ *buf_ptr = chnl_ioc_obj.pbuf;
+ }
+func_end:
+ /* ensure we return a documented return code */
+ DBC_ENSURE(DSP_SUCCEEDED(status) || status == -EFAULT ||
+ status == -ETIME || status == -ESRCH ||
+ status == -EPERM);
+
+ dev_dbg(bridge, "%s: hStrm: %p buf_ptr: %p pulBytes: %p pdw_arg: %p "
+ "status 0x%x\n", __func__, hStrm,
+ buf_ptr, pulBytes, pdw_arg, status);
+ return status;
+}
+
+/*
+ * ======== strm_register_notify ========
+ * Purpose:
+ * Register to be notified on specific events for this stream.
+ */
+int strm_register_notify(struct strm_object *hStrm, u32 event_mask,
+ u32 notify_type, struct dsp_notification
+ * hnotification)
+{
+ struct bridge_drv_interface *intf_fxns;
+ int status = 0;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(hnotification != NULL);
+
+ if (!hStrm) {
+ status = -EFAULT;
+ } else if ((event_mask & ~((DSP_STREAMIOCOMPLETION) |
+ DSP_STREAMDONE)) != 0) {
+ status = -EINVAL;
+ } else {
+ if (notify_type != DSP_SIGNALEVENT)
+ status = -ENOSYS;
+
+ }
+ if (DSP_SUCCEEDED(status)) {
+ intf_fxns = hStrm->strm_mgr_obj->intf_fxns;
+
+ status =
+ (*intf_fxns->pfn_chnl_register_notify) (hStrm->chnl_obj,
+ event_mask,
+ notify_type,
+ hnotification);
+ }
+ /* ensure we return a documented return code */
+ DBC_ENSURE(DSP_SUCCEEDED(status) || status == -EFAULT ||
+ status == -ETIME || status == -ESRCH ||
+ status == -ENOSYS || status == -EPERM);
+ return status;
+}
+
+/*
+ * ======== strm_select ========
+ * Purpose:
+ * Selects a ready stream.
+ */
+int strm_select(IN struct strm_object **strm_tab, u32 nStrms,
+ OUT u32 *pmask, u32 utimeout)
+{
+ u32 index;
+ struct chnl_info chnl_info_obj;
+ struct bridge_drv_interface *intf_fxns;
+ struct sync_object **sync_events = NULL;
+ u32 i;
+ int status = 0;
+
+ DBC_REQUIRE(refs > 0);
+ DBC_REQUIRE(strm_tab != NULL);
+ DBC_REQUIRE(pmask != NULL);
+ DBC_REQUIRE(nStrms > 0);
+
+ *pmask = 0;
+ for (i = 0; i < nStrms; i++) {
+ if (!strm_tab[i]) {
+ status = -EFAULT;
+ break;
+ }
+ }
+ if (DSP_FAILED(status))
+ goto func_end;
+
+ /* Determine which channels have IO ready */
+ for (i = 0; i < nStrms; i++) {
+ intf_fxns = strm_tab[i]->strm_mgr_obj->intf_fxns;
+ status = (*intf_fxns->pfn_chnl_get_info) (strm_tab[i]->chnl_obj,
+ &chnl_info_obj);
+ if (DSP_FAILED(status)) {
+ break;
+ } else {
+ if (chnl_info_obj.cio_cs > 0)
+ *pmask |= (1 << i);
+
+ }
+ }
+ if (DSP_SUCCEEDED(status) && utimeout > 0 && *pmask == 0) {
+ /* Non-zero timeout */
+ sync_events = kmalloc(nStrms * sizeof(struct sync_object *),
+ GFP_KERNEL);
+
+ if (sync_events == NULL) {
+ status = -ENOMEM;
+ } else {
+ for (i = 0; i < nStrms; i++) {
+ intf_fxns =
+ strm_tab[i]->strm_mgr_obj->intf_fxns;
+ status = (*intf_fxns->pfn_chnl_get_info)
+ (strm_tab[i]->chnl_obj, &chnl_info_obj);
+ if (DSP_FAILED(status))
+ break;
+ else
+ sync_events[i] =
+ chnl_info_obj.sync_event;
+
+ }
+ }
+ if (DSP_SUCCEEDED(status)) {
+ status =
+ sync_wait_on_multiple_events(sync_events, nStrms,
+ utimeout, &index);
+ if (DSP_SUCCEEDED(status)) {
+ /* Since we waited on the event, we have to
+ * reset it */
+ sync_set_event(sync_events[index]);
+ *pmask = 1 << index;
+ }
+ }
+ }
+func_end:
+ kfree(sync_events);
+
+ DBC_ENSURE((DSP_SUCCEEDED(status) && (*pmask != 0 || utimeout == 0)) ||
+ (DSP_FAILED(status) && *pmask == 0));
+
+ return status;
+}
+
+/*
+ * ======== delete_strm ========
+ * Purpose:
+ * Frees the resources allocated for a stream.
+ */
+static int delete_strm(struct strm_object *hStrm)
+{
+ struct bridge_drv_interface *intf_fxns;
+ int status = 0;
+
+ if (hStrm) {
+ if (hStrm->chnl_obj) {
+ intf_fxns = hStrm->strm_mgr_obj->intf_fxns;
+ /* Channel close can fail only if the channel handle
+ * is invalid. */
+ status = (*intf_fxns->pfn_chnl_close) (hStrm->chnl_obj);
+ /* Free all SM address translator resources */
+ if (DSP_SUCCEEDED(status)) {
+ if (hStrm->xlator) {
+ /* force free */
+ (void)cmm_xlator_delete(hStrm->xlator,
+ true);
+ }
+ }
+ }
+ kfree(hStrm);
+ } else {
+ status = -EFAULT;
+ }
+ return status;
+}
+
+/*
+ * ======== delete_strm_mgr ========
+ * Purpose:
+ * Frees stream manager.
+ */
+static void delete_strm_mgr(struct strm_mgr *strm_mgr_obj)
+{
+ kfree(strm_mgr_obj);
+}
--
1.7.0.4
From: Omar Ramirez Luna <[email protected]>
Add TI's DSP Bridge generic utilities driver sources
Signed-off-by: Omar Ramirez Luna <[email protected]>
Signed-off-by: Kanigeri, Hari <[email protected]>
Signed-off-by: Ameya Palande <[email protected]>
Signed-off-by: Guzman Lugo, Fernando <[email protected]>
Signed-off-by: Hebbar, Shivananda <[email protected]>
Signed-off-by: Ramos Falcon, Ernesto <[email protected]>
Signed-off-by: Felipe Contreras <[email protected]>
Signed-off-by: Anna, Suman <[email protected]>
Signed-off-by: Gupta, Ramesh <[email protected]>
Signed-off-by: Gomez Castellanos, Ivan <[email protected]>
Signed-off-by: Andy Shevchenko <[email protected]>
Signed-off-by: Armando Uribe De Leon <[email protected]>
Signed-off-by: Deepak Chitriki <[email protected]>
Signed-off-by: Menon, Nishanth <[email protected]>
Signed-off-by: Phil Carmody <[email protected]>
Signed-off-by: Ohad Ben-Cohen <[email protected]>
---
drivers/staging/tidspbridge/gen/gb.c | 167 +++++++++++++++++++++
drivers/staging/tidspbridge/gen/gh.c | 213 ++++++++++++++++++++++++++
drivers/staging/tidspbridge/gen/gs.c | 89 +++++++++++
drivers/staging/tidspbridge/gen/uuidutil.c | 223 ++++++++++++++++++++++++++++
4 files changed, 692 insertions(+), 0 deletions(-)
create mode 100644 drivers/staging/tidspbridge/gen/gb.c
create mode 100644 drivers/staging/tidspbridge/gen/gh.c
create mode 100644 drivers/staging/tidspbridge/gen/gs.c
create mode 100644 drivers/staging/tidspbridge/gen/uuidutil.c
diff --git a/drivers/staging/tidspbridge/gen/gb.c b/drivers/staging/tidspbridge/gen/gb.c
new file mode 100644
index 0000000..f1a9dd3
--- /dev/null
+++ b/drivers/staging/tidspbridge/gen/gb.c
@@ -0,0 +1,167 @@
+/*
+ * gb.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Generic bitmap operations.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+/* ----------------------------------- DSP/BIOS Bridge */
+#include <dspbridge/std.h>
+#include <linux/types.h>
+/* ----------------------------------- This */
+#include <dspbridge/gs.h>
+#include <dspbridge/gb.h>
+
+struct gb_t_map {
+ u32 len;
+ u32 wcnt;
+ u32 *words;
+};
+
+/*
+ * ======== gb_clear ========
+ * purpose:
+ * Clears a bit in the bit map.
+ */
+
+void gb_clear(struct gb_t_map *map, u32 bitn)
+{
+ u32 mask;
+
+ mask = 1L << (bitn % BITS_PER_LONG);
+ map->words[bitn / BITS_PER_LONG] &= ~mask;
+}
+
+/*
+ * ======== gb_create ========
+ * purpose:
+ * Creates a bit map.
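+ *
+ * Illustrative example (editor's note, not from the original sources):
+ * with BITS_PER_LONG == 32, a map for len == 100 bits allocates
+ * wcnt == 100 / 32 + 1 == 4 words.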
+ */
+
+struct gb_t_map *gb_create(u32 len)
+{
+ struct gb_t_map *map;
+ u32 i;
+ map = (struct gb_t_map *)gs_alloc(sizeof(struct gb_t_map));
+ if (map != NULL) {
+ map->len = len;
+ map->wcnt = len / BITS_PER_LONG + 1;
+ map->words = (u32 *) gs_alloc(map->wcnt * sizeof(u32));
+ if (map->words != NULL) {
+ for (i = 0; i < map->wcnt; i++)
+ map->words[i] = 0L;
+
+ } else {
+ gs_frees(map, sizeof(struct gb_t_map));
+ map = NULL;
+ }
+ }
+
+ return map;
+}
+
+/*
+ * ======== gb_delete ========
+ * purpose:
+ * Frees a bit map.
+ */
+
+void gb_delete(struct gb_t_map *map)
+{
+ gs_frees(map->words, map->wcnt * sizeof(u32));
+ gs_frees(map, sizeof(struct gb_t_map));
+}
+
+/*
+ * ======== gb_findandset ========
+ * purpose:
+ * Finds a free bit and sets it.
+ */
+u32 gb_findandset(struct gb_t_map *map)
+{
+ u32 bitn;
+
+ bitn = gb_minclear(map);
+
+ if (bitn != GB_NOBITS)
+ gb_set(map, bitn);
+
+ return bitn;
+}
+
+/*
+ * ======== gb_minclear ========
+ * purpose:
+ * returns the location of the first unset bit in the bit map.
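+ *
+ * Illustrative example (editor's note, not from the original sources):
+ * with BITS_PER_LONG == 32 and words[0] == 0x0000000B (bits 0, 1 and 3
+ * set), gb_minclear() returns 2, the index of the first clear bit.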
+ */
+u32 gb_minclear(struct gb_t_map *map)
+{
+ u32 bit_location = 0;
+ u32 bit_acc = 0;
+ u32 i;
+ u32 bit;
+ u32 *word;
+
+ for (word = map->words, i = 0; i < map->wcnt; word++, i++) {
+ if (~*word) {
+ for (bit = 0; bit < BITS_PER_LONG; bit++, bit_acc++) {
+ if (bit_acc == map->len)
+ return GB_NOBITS;
+
+ if (~*word & (1L << bit)) {
+ bit_location = i * BITS_PER_LONG + bit;
+ return bit_location;
+ }
+
+ }
+ } else {
+ bit_acc += BITS_PER_LONG;
+ }
+ }
+
+ return GB_NOBITS;
+}
+
+/*
+ * ======== gb_set ========
+ * purpose:
+ * Sets a bit in the bit map.
+ */
+
+void gb_set(struct gb_t_map *map, u32 bitn)
+{
+ u32 mask;
+
+ mask = 1L << (bitn % BITS_PER_LONG);
+ map->words[bitn / BITS_PER_LONG] |= mask;
+}
+
+/*
+ * ======== gb_test ========
+ * purpose:
+ * Returns true if the bit is set in the specified location.
+ */
+
+bool gb_test(struct gb_t_map *map, u32 bitn)
+{
+ bool state;
+ u32 mask;
+ u32 word;
+
+ mask = 1L << (bitn % BITS_PER_LONG);
+ word = map->words[bitn / BITS_PER_LONG];
+ state = word & mask ? true : false;
+
+ return state;
+}
diff --git a/drivers/staging/tidspbridge/gen/gh.c b/drivers/staging/tidspbridge/gen/gh.c
new file mode 100644
index 0000000..d1e7b38
--- /dev/null
+++ b/drivers/staging/tidspbridge/gen/gh.c
@@ -0,0 +1,213 @@
+/*
+ * gh.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#include <dspbridge/std.h>
+
+#include <dspbridge/host_os.h>
+
+#include <dspbridge/gs.h>
+
+#include <dspbridge/gh.h>
+
+struct element {
+ struct element *next;
+ u8 data[1];
+};
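+
+/*
+ * Editor's note (illustrative, not from the original sources): data[1] is
+ * the classic "struct hack"; gh_insert() allocates
+ * sizeof(struct element) - 1 + val_size bytes so the stored value sits
+ * inline immediately after the next pointer.
+ */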
+
+struct gh_t_hash_tab {
+ u16 max_bucket;
+ u16 val_size;
+ struct element **buckets;
+ u16(*hash) (void *, u16);
+ bool(*match) (void *, void *);
+ void (*delete) (void *);
+};
+
+static void noop(void *p);
+static s32 cur_init;
+static void myfree(void *ptr, s32 size);
+
+/*
+ * ======== gh_create ========
+ */
+
+struct gh_t_hash_tab *gh_create(u16 max_bucket, u16 val_size,
+ u16(*hash) (void *, u16), bool(*match) (void *,
+ void *),
+ void (*delete) (void *))
+{
+ struct gh_t_hash_tab *hash_tab;
+ u16 i;
+ hash_tab =
+ (struct gh_t_hash_tab *)gs_alloc(sizeof(struct gh_t_hash_tab));
+ if (hash_tab == NULL)
+ return NULL;
+ hash_tab->max_bucket = max_bucket;
+ hash_tab->val_size = val_size;
+ hash_tab->hash = hash;
+ hash_tab->match = match;
+ hash_tab->delete = delete == NULL ? noop : delete;
+
+ hash_tab->buckets = (struct element **)
+ gs_alloc(sizeof(struct element *) * max_bucket);
+ if (hash_tab->buckets == NULL) {
+ gh_delete(hash_tab);
+ return NULL;
+ }
+
+ for (i = 0; i < max_bucket; i++)
+ hash_tab->buckets[i] = NULL;
+
+ return hash_tab;
+}
+
+/*
+ * ======== gh_delete ========
+ */
+void gh_delete(struct gh_t_hash_tab *hash_tab)
+{
+ struct element *elem, *next;
+ u16 i;
+
+ if (hash_tab != NULL) {
+ if (hash_tab->buckets != NULL) {
+ for (i = 0; i < hash_tab->max_bucket; i++) {
+ for (elem = hash_tab->buckets[i]; elem != NULL;
+ elem = next) {
+ next = elem->next;
+ (*hash_tab->delete) (elem->data);
+ myfree(elem,
+ sizeof(struct element) - 1 +
+ hash_tab->val_size);
+ }
+ }
+
+ myfree(hash_tab->buckets, sizeof(struct element *)
+ * hash_tab->max_bucket);
+ }
+
+ myfree(hash_tab, sizeof(struct gh_t_hash_tab));
+ }
+}
+
+/*
+ * ======== gh_exit ========
+ */
+
+void gh_exit(void)
+{
+ if (cur_init-- == 1)
+ gs_exit();
+
+}
+
+/*
+ * ======== gh_find ========
+ */
+
+void *gh_find(struct gh_t_hash_tab *hash_tab, void *key)
+{
+ struct element *elem;
+
+ elem = hash_tab->buckets[(*hash_tab->hash) (key, hash_tab->max_bucket)];
+
+ for (; elem; elem = elem->next) {
+ if ((*hash_tab->match) (key, elem->data))
+ return elem->data;
+ }
+
+ return NULL;
+}
+
+/*
+ * ======== gh_init ========
+ */
+
+void gh_init(void)
+{
+ if (cur_init++ == 0)
+ gs_init();
+}
+
+/*
+ * ======== gh_insert ========
+ */
+
+void *gh_insert(struct gh_t_hash_tab *hash_tab, void *key, void *value)
+{
+ struct element *elem;
+ u16 i;
+ char *src, *dst;
+
+ elem = (struct element *)gs_alloc(sizeof(struct element) - 1 +
+ hash_tab->val_size);
+ if (elem != NULL) {
+
+ dst = (char *)elem->data;
+ src = (char *)value;
+ for (i = 0; i < hash_tab->val_size; i++)
+ *dst++ = *src++;
+
+ i = (*hash_tab->hash) (key, hash_tab->max_bucket);
+ elem->next = hash_tab->buckets[i];
+ hash_tab->buckets[i] = elem;
+
+ return elem->data;
+ }
+
+ return NULL;
+}
+
+/*
+ * ======== noop ========
+ */
+/* ARGSUSED */
+static void noop(void *p)
+{
+ p = p; /* stifle compiler warning */
+}
+
+/*
+ * ======== myfree ========
+ */
+static void myfree(void *ptr, s32 size)
+{
+ gs_free(ptr);
+}
+
+/**
+ * gh_iterate() - Iterate over every element in the hash table, invoking the
+ * callback on each one (used when searching for DSP symbols).
+ * @hash_tab: Hash table
+ * @callback: pointer to callback function
+ * @user_data: User data, contains the find_symbol_context pointer
+ *
+ */
+void gh_iterate(struct gh_t_hash_tab *hash_tab,
+ void (*callback)(void *, void *), void *user_data)
+{
+ struct element *elem;
+ u32 i;
+
+ if (hash_tab && hash_tab->buckets)
+ for (i = 0; i < hash_tab->max_bucket; i++) {
+ elem = hash_tab->buckets[i];
+ while (elem) {
+ callback(&elem->data, user_data);
+ elem = elem->next;
+ }
+ }
+}
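A hedged usage sketch (illustrative only, not part of the applied diff): gh_find() passes the search key and the stored elem->data to the match() callback, so the stored value must embed whatever the callback needs to compare against. The record type and callbacks below are hypothetical:

struct sym_rec {
        u32 key;
        u32 value;
};

static u16 sym_hash(void *key, u16 max_bucket)
{
        return *(u32 *)key % max_bucket;
}

static bool sym_match(void *key, void *data)
{
        return *(u32 *)key == ((struct sym_rec *)data)->key;
}

static void gh_usage_sketch(void)
{
        struct gh_t_hash_tab *tab;
        struct sym_rec rec = { .key = 42, .value = 0xbeef };
        struct sym_rec *found;
        u32 key = 42;

        gh_init();
        tab = gh_create(16, sizeof(rec), sym_hash, sym_match, NULL);
        if (tab) {
                gh_insert(tab, &rec.key, &rec); /* copies val_size bytes */
                found = gh_find(tab, &key);     /* pointer into the table */
                if (found)
                        pr_info("value 0x%x\n", found->value);
                gh_delete(tab);
        }
        gh_exit();
}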
diff --git a/drivers/staging/tidspbridge/gen/gs.c b/drivers/staging/tidspbridge/gen/gs.c
new file mode 100644
index 0000000..3d091b9
--- /dev/null
+++ b/drivers/staging/tidspbridge/gen/gs.c
@@ -0,0 +1,89 @@
+/*
+ * gs.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * General storage memory allocator services.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+/* ----------------------------------- DSP/BIOS Bridge */
+#include <dspbridge/std.h>
+#include <dspbridge/dbdefs.h>
+#include <linux/types.h>
+
+/* ----------------------------------- This */
+#include <dspbridge/gs.h>
+
+#include <linux/slab.h>
+
+/* ----------------------------------- Globals */
+static u32 cumsize;
+
+/*
+ * ======== gs_alloc ========
+ * purpose:
+ * Allocates memory of the specified size.
+ */
+void *gs_alloc(u32 size)
+{
+ void *p;
+
+ p = kzalloc(size, GFP_KERNEL);
+ if (p == NULL)
+ return NULL;
+ cumsize += size;
+ return p;
+}
+
+/*
+ * ======== gs_exit ========
+ * purpose:
+ * Discontinue the usage of the GS module.
+ */
+void gs_exit(void)
+{
+ /* Do nothing */
+}
+
+/*
+ * ======== gs_free ========
+ * purpose:
+ * Frees the memory.
+ */
+void gs_free(void *ptr)
+{
+ kfree(ptr);
+ /* ack! no size info */
+ /* cumsize -= size; */
+}
+
+/*
+ * ======== gs_frees ========
+ * purpose:
+ * Frees the memory and decrements the cumulative allocation size.
+ */
+void gs_frees(void *ptr, u32 size)
+{
+ kfree(ptr);
+ cumsize -= size;
+}
+
+/*
+ * ======== gs_init ========
+ * purpose:
+ * Initializes the GS module.
+ */
+void gs_init(void)
+{
+ /* Do nothing */
+}
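One design note worth illustrating (sketch only, not part of the applied diff): gs_free() cannot keep the cumulative-size counter accurate because it is not told the block size, so callers that track sizes should pair gs_alloc() with gs_frees():

static void gs_usage_sketch(void)
{
        u32 size = 128;
        void *buf;

        buf = gs_alloc(size);   /* zeroed allocation, cumsize += size */
        if (!buf)
                return;
        /* ... use buf ... */
        gs_frees(buf, size);    /* cumsize -= size */
}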
diff --git a/drivers/staging/tidspbridge/gen/uuidutil.c b/drivers/staging/tidspbridge/gen/uuidutil.c
new file mode 100644
index 0000000..ce9319d
--- /dev/null
+++ b/drivers/staging/tidspbridge/gen/uuidutil.c
@@ -0,0 +1,223 @@
+/*
+ * uuidutil.c
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * This file contains the implementation of UUID helper functions.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+/* ----------------------------------- Host OS */
+#include <dspbridge/host_os.h>
+
+/* ----------------------------------- DSP/BIOS Bridge */
+#include <dspbridge/std.h>
+#include <dspbridge/dbdefs.h>
+
+/* ----------------------------------- Trace & Debug */
+#include <dspbridge/dbc.h>
+
+/* ----------------------------------- This */
+#include <dspbridge/uuidutil.h>
+
+/*
+ * ======== uuid_uuid_to_string ========
+ * Purpose:
+ * Converts a struct dsp_uuid to a string.
+ * Note: snprintf format specifier is:
+ * %[flags] [width] [.precision] [{h | l | I64 | L}]type
+ */
+void uuid_uuid_to_string(IN struct dsp_uuid *uuid_obj, OUT char *pszUuid,
+ IN s32 size)
+{
+ s32 i; /* return result from snprintf. */
+
+ DBC_REQUIRE(uuid_obj && pszUuid);
+
+ i = snprintf(pszUuid, size,
+ "%.8X_%.4X_%.4X_%.2X%.2X_%.2X%.2X%.2X%.2X%.2X%.2X",
+ uuid_obj->ul_data1, uuid_obj->us_data2, uuid_obj->us_data3,
+ uuid_obj->uc_data4, uuid_obj->uc_data5,
+ uuid_obj->uc_data6[0], uuid_obj->uc_data6[1],
+ uuid_obj->uc_data6[2], uuid_obj->uc_data6[3],
+ uuid_obj->uc_data6[4], uuid_obj->uc_data6[5]);
+
+ DBC_ENSURE(i != -1);
+}
+
+/*
+ * ======== htoi ========
+ * Purpose:
+ * Converts a hexadecimal character to its integer value.
+ */
+
+static int htoi(char c)
+{
+ switch (c) {
+ case '0':
+ return 0;
+ case '1':
+ return 1;
+ case '2':
+ return 2;
+ case '3':
+ return 3;
+ case '4':
+ return 4;
+ case '5':
+ return 5;
+ case '6':
+ return 6;
+ case '7':
+ return 7;
+ case '8':
+ return 8;
+ case '9':
+ return 9;
+ case 'A':
+ return 10;
+ case 'B':
+ return 11;
+ case 'C':
+ return 12;
+ case 'D':
+ return 13;
+ case 'E':
+ return 14;
+ case 'F':
+ return 15;
+ case 'a':
+ return 10;
+ case 'b':
+ return 11;
+ case 'c':
+ return 12;
+ case 'd':
+ return 13;
+ case 'e':
+ return 14;
+ case 'f':
+ return 15;
+ }
+ return 0;
+}
+
+/*
+ * ======== uuid_uuid_from_string ========
+ * Purpose:
+ * Converts a string to a struct dsp_uuid.
+ */
+void uuid_uuid_from_string(IN char *pszUuid, OUT struct dsp_uuid *uuid_obj)
+{
+ char c;
+ s32 i, j;
+ s32 result;
+ char *temp = pszUuid;
+
+ result = 0;
+ for (i = 0; i < 8; i++) {
+ /* Get first character in string */
+ c = *temp;
+
+ /* Increase the results by new value */
+ result *= 16;
+ result += htoi(c);
+
+ /* Go to next character in string */
+ temp++;
+ }
+ uuid_obj->ul_data1 = result;
+
+ /* Step over underscore */
+ temp++;
+
+ result = 0;
+ for (i = 0; i < 4; i++) {
+ /* Get first character in string */
+ c = *temp;
+
+ /* Increase the results by new value */
+ result *= 16;
+ result += htoi(c);
+
+ /* Go to next character in string */
+ temp++;
+ }
+ uuid_obj->us_data2 = (u16) result;
+
+ /* Step over underscore */
+ temp++;
+
+ result = 0;
+ for (i = 0; i < 4; i++) {
+ /* Get first character in string */
+ c = *temp;
+
+ /* Increase the results by new value */
+ result *= 16;
+ result += htoi(c);
+
+ /* Go to next character in string */
+ temp++;
+ }
+ uuid_obj->us_data3 = (u16) result;
+
+ /* Step over underscore */
+ temp++;
+
+ result = 0;
+ for (i = 0; i < 2; i++) {
+ /* Get first character in string */
+ c = *temp;
+
+ /* Increase the results by new value */
+ result *= 16;
+ result += htoi(c);
+
+ /* Go to next character in string */
+ temp++;
+ }
+ uuid_obj->uc_data4 = (u8) result;
+
+ result = 0;
+ for (i = 0; i < 2; i++) {
+ /* Get first character in string */
+ c = *temp;
+
+ /* Increase the results by new value */
+ result *= 16;
+ result += htoi(c);
+
+ /* Go to next character in string */
+ temp++;
+ }
+ uuid_obj->uc_data5 = (u8) result;
+
+ /* Step over underscore */
+ temp++;
+
+ for (j = 0; j < 6; j++) {
+ result = 0;
+ for (i = 0; i < 2; i++) {
+ /* Get first character in string */
+ c = *temp;
+
+ /* Increase the results by new value */
+ result *= 16;
+ result += htoi(c);
+
+ /* Go to next character in string */
+ temp++;
+ }
+ uuid_obj->uc_data6[j] = (u8) result;
+ }
+}
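For illustration only (not part of the applied diff), a round trip through the two helpers above. The string form produced is "XXXXXXXX_XXXX_XXXX_XXXX_XXXXXXXXXXXX" (36 upper-case hex characters and underscores), so the output buffer needs at least 37 bytes; the field names follow the struct dsp_uuid usage visible in this file:

static void uuid_usage_sketch(void)
{
        struct dsp_uuid uuid = {
                .ul_data1 = 0x12345678,
                .us_data2 = 0x9abc,
                .us_data3 = 0xdef0,
                .uc_data4 = 0x12,
                .uc_data5 = 0x34,
                .uc_data6 = { 0x56, 0x78, 0x9a, 0xbc, 0xde, 0xf0 },
        };
        struct dsp_uuid parsed;
        char buf[37];

        uuid_uuid_to_string(&uuid, buf, sizeof(buf));
        /* buf now holds "12345678_9ABC_DEF0_1234_56789ABCDEF0" */
        uuid_uuid_from_string(buf, &parsed);
        /* parsed now matches uuid field for field */
}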
--
1.7.0.4
From: Omar Ramirez Luna <[email protected]>
Add TI's DSP Bridge driver header files
Signed-off-by: Omar Ramirez Luna <[email protected]>
Signed-off-by: Kanigeri, Hari <[email protected]>
Signed-off-by: Ameya Palande <[email protected]>
Signed-off-by: Guzman Lugo, Fernando <[email protected]>
Signed-off-by: Hebbar, Shivananda <[email protected]>
Signed-off-by: Ramos Falcon, Ernesto <[email protected]>
Signed-off-by: Felipe Contreras <[email protected]>
Signed-off-by: Anna, Suman <[email protected]>
Signed-off-by: Gupta, Ramesh <[email protected]>
Signed-off-by: Gomez Castellanos, Ivan <[email protected]>
Signed-off-by: Andy Shevchenko <[email protected]>
Signed-off-by: Armando Uribe De Leon <[email protected]>
Signed-off-by: Deepak Chitriki <[email protected]>
Signed-off-by: Menon, Nishanth <[email protected]>
Signed-off-by: Phil Carmody <[email protected]>
Signed-off-by: Ohad Ben-Cohen <[email protected]>
---
.../tidspbridge/include/dspbridge/_chnl_sm.h | 181 ++++
.../tidspbridge/include/dspbridge/brddefs.h | 39 +
.../staging/tidspbridge/include/dspbridge/cfg.h | 222 ++++
.../tidspbridge/include/dspbridge/cfgdefs.h | 81 ++
.../staging/tidspbridge/include/dspbridge/chnl.h | 130 +++
.../tidspbridge/include/dspbridge/chnldefs.h | 67 ++
.../tidspbridge/include/dspbridge/chnlpriv.h | 101 ++
.../staging/tidspbridge/include/dspbridge/clk.h | 101 ++
.../staging/tidspbridge/include/dspbridge/cmm.h | 386 +++++++
.../tidspbridge/include/dspbridge/cmmdefs.h | 105 ++
.../staging/tidspbridge/include/dspbridge/cod.h | 369 +++++++
.../staging/tidspbridge/include/dspbridge/dbc.h | 46 +
.../staging/tidspbridge/include/dspbridge/dbdcd.h | 358 +++++++
.../tidspbridge/include/dspbridge/dbdcddef.h | 78 ++
.../staging/tidspbridge/include/dspbridge/dbdefs.h | 546 ++++++++++
.../tidspbridge/include/dspbridge/dbldefs.h | 140 +++
.../staging/tidspbridge/include/dspbridge/dbll.h | 59 +
.../tidspbridge/include/dspbridge/dblldefs.h | 496 +++++++++
.../staging/tidspbridge/include/dspbridge/dbtype.h | 88 ++
.../tidspbridge/include/dspbridge/dehdefs.h | 32 +
.../staging/tidspbridge/include/dspbridge/dev.h | 702 ++++++++++++
.../tidspbridge/include/dspbridge/devdefs.h | 26 +
.../staging/tidspbridge/include/dspbridge/disp.h | 204 ++++
.../tidspbridge/include/dspbridge/dispdefs.h | 35 +
.../staging/tidspbridge/include/dspbridge/dmm.h | 75 ++
.../staging/tidspbridge/include/dspbridge/drv.h | 522 +++++++++
.../tidspbridge/include/dspbridge/drvdefs.h | 25 +
.../tidspbridge/include/dspbridge/dspapi-ioctl.h | 475 ++++++++
.../staging/tidspbridge/include/dspbridge/dspapi.h | 167 +++
.../tidspbridge/include/dspbridge/dspchnl.h | 72 ++
.../tidspbridge/include/dspbridge/dspdefs.h | 1128 ++++++++++++++++++++
.../staging/tidspbridge/include/dspbridge/dspdeh.h | 47 +
.../staging/tidspbridge/include/dspbridge/dspdrv.h | 62 ++
.../staging/tidspbridge/include/dspbridge/dspio.h | 41 +
.../tidspbridge/include/dspbridge/dspioctl.h | 73 ++
.../staging/tidspbridge/include/dspbridge/dspmsg.h | 56 +
.../tidspbridge/include/dspbridge/dynamic_loader.h | 492 +++++++++
drivers/staging/tidspbridge/include/dspbridge/gb.h | 79 ++
.../tidspbridge/include/dspbridge/getsection.h | 108 ++
drivers/staging/tidspbridge/include/dspbridge/gh.h | 32 +
drivers/staging/tidspbridge/include/dspbridge/gs.h | 59 +
.../tidspbridge/include/dspbridge/host_os.h | 89 ++
drivers/staging/tidspbridge/include/dspbridge/io.h | 114 ++
.../staging/tidspbridge/include/dspbridge/io_sm.h | 309 ++++++
.../staging/tidspbridge/include/dspbridge/iodefs.h | 36 +
.../staging/tidspbridge/include/dspbridge/ldr.h | 29 +
.../staging/tidspbridge/include/dspbridge/list.h | 225 ++++
.../staging/tidspbridge/include/dspbridge/mbx_sh.h | 198 ++++
.../tidspbridge/include/dspbridge/memdefs.h | 30 +
.../staging/tidspbridge/include/dspbridge/mgr.h | 205 ++++
.../tidspbridge/include/dspbridge/mgrpriv.h | 45 +
.../staging/tidspbridge/include/dspbridge/msg.h | 86 ++
.../tidspbridge/include/dspbridge/msgdefs.h | 29 +
.../staging/tidspbridge/include/dspbridge/nldr.h | 55 +
.../tidspbridge/include/dspbridge/nldrdefs.h | 293 +++++
.../staging/tidspbridge/include/dspbridge/node.h | 579 ++++++++++
.../tidspbridge/include/dspbridge/nodedefs.h | 28 +
.../tidspbridge/include/dspbridge/nodepriv.h | 182 ++++
.../staging/tidspbridge/include/dspbridge/ntfy.h | 217 ++++
.../staging/tidspbridge/include/dspbridge/proc.h | 621 +++++++++++
.../tidspbridge/include/dspbridge/procpriv.h | 25 +
.../staging/tidspbridge/include/dspbridge/pwr.h | 107 ++
.../staging/tidspbridge/include/dspbridge/pwr_sh.h | 33 +
.../include/dspbridge/resourcecleanup.h | 63 ++
.../staging/tidspbridge/include/dspbridge/rmm.h | 181 ++++
.../staging/tidspbridge/include/dspbridge/rms_sh.h | 95 ++
.../tidspbridge/include/dspbridge/rmstypes.h | 28 +
.../tidspbridge/include/dspbridge/services.h | 50 +
.../staging/tidspbridge/include/dspbridge/std.h | 94 ++
.../staging/tidspbridge/include/dspbridge/strm.h | 404 +++++++
.../tidspbridge/include/dspbridge/strmdefs.h | 46 +
.../staging/tidspbridge/include/dspbridge/sync.h | 109 ++
.../tidspbridge/include/dspbridge/utildefs.h | 39 +
.../tidspbridge/include/dspbridge/uuidutil.h | 62 ++
.../staging/tidspbridge/include/dspbridge/wdt.h | 79 ++
75 files changed, 12890 insertions(+), 0 deletions(-)
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/_chnl_sm.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/brddefs.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/cfg.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/cfgdefs.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/chnl.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/chnldefs.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/chnlpriv.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/clk.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/cmm.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/cmmdefs.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/cod.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/dbc.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/dbdcd.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/dbdcddef.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/dbdefs.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/dbldefs.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/dbll.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/dblldefs.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/dbtype.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/dehdefs.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/dev.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/devdefs.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/disp.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/dispdefs.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/dmm.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/drv.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/drvdefs.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/dspapi-ioctl.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/dspapi.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/dspchnl.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/dspdefs.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/dspdeh.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/dspdrv.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/dspio.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/dspioctl.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/dspmsg.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/dynamic_loader.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/gb.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/getsection.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/gh.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/gs.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/host_os.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/io.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/io_sm.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/iodefs.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/ldr.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/list.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/mbx_sh.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/memdefs.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/mgr.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/mgrpriv.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/msg.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/msgdefs.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/nldr.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/nldrdefs.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/node.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/nodedefs.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/nodepriv.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/ntfy.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/proc.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/procpriv.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/pwr.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/pwr_sh.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/resourcecleanup.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/rmm.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/rms_sh.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/rmstypes.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/services.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/std.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/strm.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/strmdefs.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/sync.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/utildefs.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/uuidutil.h
create mode 100644 drivers/staging/tidspbridge/include/dspbridge/wdt.h
diff --git a/drivers/staging/tidspbridge/include/dspbridge/_chnl_sm.h b/drivers/staging/tidspbridge/include/dspbridge/_chnl_sm.h
new file mode 100644
index 0000000..cdca172
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/_chnl_sm.h
@@ -0,0 +1,181 @@
+/*
+ * _chnl_sm.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Private header file defining channel manager and channel objects for
+ * a shared memory channel driver.
+ *
+ * Shared between the modules implementing the shared memory channel class
+ * library.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef _CHNL_SM_
+#define _CHNL_SM_
+
+#include <dspbridge/dspapi.h>
+#include <dspbridge/dspdefs.h>
+
+#include <dspbridge/list.h>
+#include <dspbridge/ntfy.h>
+
+/*
+ * These target side symbols define the beginning and ending addresses
+ * of the shared memory buffer. They are defined in the *cfg.cmd file by
+ * cdb code.
+ */
+#define CHNL_SHARED_BUFFER_BASE_SYM "_SHM_BEG"
+#define CHNL_SHARED_BUFFER_LIMIT_SYM "_SHM_END"
+#define BRIDGEINIT_BIOSGPTIMER "_BRIDGEINIT_BIOSGPTIMER"
+#define BRIDGEINIT_LOADMON_GPTIMER "_BRIDGEINIT_LOADMON_GPTIMER"
+
+#ifndef _CHNL_WORDSIZE
+#define _CHNL_WORDSIZE 4 /* default _CHNL_WORDSIZE is 4 bytes/word */
+#endif
+
+#define MAXOPPS 16
+
+/* Shared memory config options */
+#define SHM_CURROPP 0 /* Set current OPP in shm */
+#define SHM_OPPINFO 1 /* Set dsp voltage and freq table values */
+#define SHM_GETOPP 2 /* Get opp requested by DSP */
+
+struct opp_table_entry {
+ u32 voltage;
+ u32 frequency;
+ u32 min_freq;
+ u32 max_freq;
+};
+
+struct opp_struct {
+ u32 curr_opp_pt;
+ u32 num_opp_pts;
+ struct opp_table_entry opp_point[MAXOPPS];
+};
+
+/* Request to MPU */
+struct opp_rqst_struct {
+ u32 rqst_dsp_freq;
+ u32 rqst_opp_pt;
+};
+
+/* Info to MPU */
+struct load_mon_struct {
+ u32 curr_dsp_load;
+ u32 curr_dsp_freq;
+ u32 pred_dsp_load;
+ u32 pred_dsp_freq;
+};
+
+/* Structure in shared memory between DSP and PC for communication. */
+struct shm {
+ u32 dsp_free_mask; /* Written by DSP, read by PC. */
+ u32 host_free_mask; /* Written by PC, read by DSP */
+
+ u32 input_full; /* Input channel has unread data. */
+ u32 input_id; /* Channel for which input is available. */
+ u32 input_size; /* Size of data block (in DSP words). */
+
+ u32 output_full; /* Output channel has unread data. */
+ u32 output_id; /* Channel for which output is available. */
+ u32 output_size; /* Size of data block (in DSP words). */
+
+ u32 arg; /* Arg for Issue/Reclaim (23 bits for 55x). */
+ u32 resvd; /* Keep structure size even for 32-bit DSPs */
+
+ /* Operating Point structure */
+ struct opp_struct opp_table_struct;
+ /* Operating Point Request structure */
+ struct opp_rqst_struct opp_request;
+ /* load monitor information structure */
+ struct load_mon_struct load_mon_info;
+#ifdef CONFIG_BRIDGE_WDT3
+ /* Flag for WDT enable/disable F/I clocks */
+ u32 wdt_setclocks;
+ u32 wdt_overflow; /* WDT overflow time */
+ char dummy[176]; /* padding to 256 byte boundary */
+#else
+ char dummy[184]; /* padding to 256 byte boundary */
+#endif
+ u32 shm_dbg_var[64]; /* shared memory debug variables */
+};
+
+ /* Channel Manager: only one created per board: */
+struct chnl_mgr {
+ /* Function interface to Bridge driver */
+ struct bridge_drv_interface *intf_fxns;
+ struct io_mgr *hio_mgr; /* IO manager */
+ /* Device this board represents */
+ struct dev_object *hdev_obj;
+
+ /* These fields initialized in bridge_chnl_create(): */
+ u32 dw_output_mask; /* Host output channels w/ full buffers */
+ u32 dw_last_output; /* Last output channel fired from DPC */
+ /* Critical section object handle */
+ spinlock_t chnl_mgr_lock;
+ u32 word_size; /* Size in bytes of DSP word */
+ u8 max_channels; /* Total number of channels */
+ u8 open_channels; /* Total number of open channels */
+ struct chnl_object **ap_channel; /* Array of channels */
+ u8 dw_type; /* Type of channel class library */
+ /* If no shm syms, return for CHNL_Open */
+ int chnl_open_status;
+};
+
+/*
+ * Channel: up to CHNL_MAXCHANNELS per board or if DSP-DMA supported then
+ * up to CHNL_MAXCHANNELS + CHNL_MAXDDMACHNLS per board.
+ */
+struct chnl_object {
+ /* Pointer back to channel manager */
+ struct chnl_mgr *chnl_mgr_obj;
+ u32 chnl_id; /* Channel id */
+ u8 dw_state; /* Current channel state */
+ s8 chnl_mode; /* Chnl mode and attributes */
+ /* Chnl I/O completion event (user mode) */
+ void *user_event;
+ /* Abstract synchronization object */
+ struct sync_object *sync_event;
+ u32 process; /* Process which created this channel */
+ u32 pcb_arg; /* Argument to use with callback */
+ struct lst_list *pio_requests; /* List of IOR's to driver */
+ s32 cio_cs; /* Number of IOC's in queue */
+ s32 cio_reqs; /* Number of IORequests in queue */
+ s32 chnl_packets; /* Initial number of free Irps */
+ /* List of IOC's from driver */
+ struct lst_list *pio_completions;
+ struct lst_list *free_packets_list; /* List of free Irps */
+ struct ntfy_object *ntfy_obj;
+ u32 bytes_moved; /* Total number of bytes transferred */
+
+ /* For DSP-DMA */
+
+ /* Type of chnl transport:CHNL_[PCPY][DDMA] */
+ u32 chnl_type;
+};
+
+/* I/O Request/completion packet: */
+struct chnl_irp {
+ struct list_head link; /* Link to next CHIRP in queue. */
+ /* Buffer to be filled/emptied. (User) */
+ u8 *host_user_buf;
+ /* Buffer to be filled/emptied. (System) */
+ u8 *host_sys_buf;
+ u32 dw_arg; /* Issue/Reclaim argument. */
+ u32 dsp_tx_addr; /* Transfer address on DSP side. */
+ u32 byte_size; /* Bytes transferred. */
+ u32 buf_size; /* Actual buffer size when allocated. */
+ u32 status; /* Status of IO completion. */
+};
+
+#endif /* _CHNL_SM_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/brddefs.h b/drivers/staging/tidspbridge/include/dspbridge/brddefs.h
new file mode 100644
index 0000000..f80d9a5
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/brddefs.h
@@ -0,0 +1,39 @@
+/*
+ * brddefs.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Global BRD constants and types, shared between DSP API and Bridge driver.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef BRDDEFS_
+#define BRDDEFS_
+
+/* platform status values */
+#define BRD_STOPPED 0x0 /* No Monitor Loaded, Not running. */
+#define BRD_IDLE 0x1 /* Monitor Loaded, but suspended. */
+#define BRD_RUNNING 0x2 /* Monitor loaded, and executing. */
+#define BRD_UNKNOWN 0x3 /* Board state is indeterminate. */
+#define BRD_SYNCINIT 0x4
+#define BRD_LOADED 0x5
+#define BRD_LASTSTATE BRD_LOADED /* Set to highest legal board state. */
+#define BRD_SLEEP_TRANSITION 0x6 /* Sleep transition in progress */
+#define BRD_HIBERNATION 0x7 /* MPU initiated hibernation */
+#define BRD_RETENTION 0x8 /* Retention mode */
+#define BRD_DSP_HIBERNATION 0x9 /* DSP initiated hibernation */
+#define BRD_ERROR 0xA /* Board state is Error */
+
+/* BRD Object */
+struct brd_object;
+
+#endif /* BRDDEFS_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/cfg.h b/drivers/staging/tidspbridge/include/dspbridge/cfg.h
new file mode 100644
index 0000000..a2580f0
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/cfg.h
@@ -0,0 +1,222 @@
+/*
+ * cfg.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * PM Configuration module.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef CFG_
+#define CFG_
+#include <dspbridge/host_os.h>
+#include <dspbridge/cfgdefs.h>
+
+/*
+ * ======== cfg_exit ========
+ * Purpose:
+ * Discontinue usage of the CFG module.
+ * Parameters:
+ * Returns:
+ * Requires:
+ * cfg_init(void) was previously called.
+ * Ensures:
+ * Resources acquired in cfg_init(void) are freed.
+ */
+extern void cfg_exit(void);
+
+/*
+ * ======== cfg_get_auto_start ========
+ * Purpose:
+ * Retrieve the autostart mask, if any, for this board.
+ * Parameters:
+ * dev_node_obj: Handle to the dev_node whose driver we are querying.
+ * pdwAutoStart: Ptr to location for 32 bit autostart mask.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: dev_node_obj is invalid.
+ * -ENODATA: Unable to retrieve resource.
+ * Requires:
+ * CFG initialized.
+ * Ensures:
+ * 0: *pdwAutoStart contains autostart mask for this devnode.
+ */
+extern int cfg_get_auto_start(IN struct cfg_devnode *dev_node_obj,
+ OUT u32 *pdwAutoStart);
+
+/*
+ * ======== cfg_get_cd_version ========
+ * Purpose:
+ * Retrieves the version of the PM Class Driver.
+ * Parameters:
+ * pdwVersion: Ptr to u32 to contain version number upon return.
+ * Returns:
+ * 0: Success. pdwVersion contains Class Driver version in
+ * the form: 0xAABBCCDD where AABB is Major version and
+ * CCDD is Minor.
+ * -EPERM: Failure.
+ * Requires:
+ * CFG initialized.
+ * Ensures:
+ * 0: Success.
+ * else: *pdwVersion is NULL.
+ */
+extern int cfg_get_cd_version(OUT u32 *pdwVersion);
+
+/*
+ * ======== cfg_get_dev_object ========
+ * Purpose:
+ * Retrieve the Device Object handle for a given devnode.
+ * Parameters:
+ * dev_node_obj: Platform's dev_node handle from which to retrieve
+ * value.
+ * pdwValue: Ptr to location to store the value.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: dev_node_obj is invalid or phDevObject is invalid.
+ * -ENODATA: The resource is not available.
+ * Requires:
+ * CFG initialized.
+ * Ensures:
+ * 0: *pdwValue is set to the retrieved u32.
+ * else: *pdwValue is set to 0L.
+ */
+extern int cfg_get_dev_object(IN struct cfg_devnode *dev_node_obj,
+ OUT u32 *pdwValue);
+
+/*
+ * ======== cfg_get_exec_file ========
+ * Purpose:
+ * Retrieve the default executable, if any, for this board.
+ * Parameters:
+ * dev_node_obj: Handle to the dev_node whose driver we are querying.
+ * buf_size: Size of buffer.
+ * pstrExecFile: Ptr to character buf to hold ExecFile.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: dev_node_obj is invalid or pstrExecFile is invalid.
+ * -ENODATA: The resource is not available.
+ * Requires:
+ * CFG initialized.
+ * Ensures:
+ * 0: Not more than buf_size bytes were copied into pstrExecFile,
+ * and *pstrExecFile contains default executable for this
+ * devnode.
+ */
+extern int cfg_get_exec_file(IN struct cfg_devnode *dev_node_obj,
+ IN u32 buf_size, OUT char *pstrExecFile);
+
+/*
+ * ======== cfg_get_object ========
+ * Purpose:
+ * Retrieve the Driver Object handle From the Registry
+ * Parameters:
+ * pdwValue: Ptr to location to store the value.
+ * dw_type Type of Object to Get
+ * Returns:
+ * 0: Success.
+ * Requires:
+ * CFG initialized.
+ * Ensures:
+ * 0: *pdwValue is set to the retrieved u32(non-Zero).
+ * else: *pdwValue is set to 0L.
+ */
+extern int cfg_get_object(OUT u32 *pdwValue, u8 dw_type);
+
+/*
+ * ======== cfg_get_perf_value ========
+ * Purpose:
+ * Retrieve a flag indicating whether PERF should log statistics for the
+ * PM class driver.
+ * Parameters:
+ * pfEnablePerf: Location to store flag. 0 indicates the key was
+ * not found, or had a zero value. A nonzero value
+ * means the key was found and had a nonzero value.
+ * Returns:
+ * Requires:
+ * pfEnablePerf != NULL;
+ * Ensures:
+ */
+extern void cfg_get_perf_value(OUT bool *pfEnablePerf);
+
+/*
+ * ======== cfg_get_zl_file ========
+ * Purpose:
+ * Retrieve the ZLFile, if any, for this board.
+ * Parameters:
+ * dev_node_obj: Handle to the dev_node whose driver we are querying.
+ * buf_size: Size of buffer.
+ * pstrZLFileName: Ptr to character buf to hold ZLFileName.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: pstrZLFileName is invalid or dev_node_obj is invalid.
+ * -ENODATA: couldn't find the ZLFileName.
+ * Requires:
+ * CFG initialized.
+ * Ensures:
+ * 0: Not more than buf_size bytes were copied into
+ * pstrZLFileName, and *pstrZLFileName contains ZLFileName
+ * for this devnode.
+ */
+extern int cfg_get_zl_file(IN struct cfg_devnode *dev_node_obj,
+ IN u32 buf_size, OUT char *pstrZLFileName);
+
+/*
+ * ======== cfg_init ========
+ * Purpose:
+ * Initialize the CFG module's private state.
+ * Parameters:
+ * Returns:
+ * TRUE if initialized; FALSE if error occurred.
+ * Requires:
+ * Ensures:
+ * A requirement for each of the other public CFG functions.
+ */
+extern bool cfg_init(void);
+
+/*
+ * ======== cfg_set_dev_object ========
+ * Purpose:
+ * Store the Device Object handle for a given devnode.
+ * Parameters:
+ * dev_node_obj: Platform's dev_node handle we are storing value with.
+ * dwValue: Arbitrary value to store.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: dev_node_obj is invalid.
+ * -EPERM: Internal Error.
+ * Requires:
+ * CFG initialized.
+ * Ensures:
+ * 0: The Private u32 was successfully set.
+ */
+extern int cfg_set_dev_object(IN struct cfg_devnode *dev_node_obj,
+ IN u32 dwValue);
+
+/*
+ * ======== cfg_set_object ========
+ * Purpose:
+ * Store the Driver Object handle.
+ * Parameters:
+ * dwValue: Arbitrary value to store.
+ * dw_type Type of Object to Store
+ * Returns:
+ * 0: Success.
+ * -EPERM: Internal Error.
+ * Requires:
+ * CFG initialized.
+ * Ensures:
+ * 0: The Private u32 was successfully set.
+ */
+extern int cfg_set_object(IN u32 dwValue, u8 dw_type);
+
+#endif /* CFG_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/cfgdefs.h b/drivers/staging/tidspbridge/include/dspbridge/cfgdefs.h
new file mode 100644
index 0000000..38122db
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/cfgdefs.h
@@ -0,0 +1,81 @@
+/*
+ * cfgdefs.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Global CFG constants and types, shared between DSP API and Bridge driver.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef CFGDEFS_
+#define CFGDEFS_
+
+/* Maximum length of module search path. */
+#define CFG_MAXSEARCHPATHLEN 255
+
+/* Maximum length of general paths. */
+#define CFG_MAXPATH 255
+
+/* Host Resources: */
+#define CFG_MAXMEMREGISTERS 9
+#define CFG_MAXIOPORTS 20
+#define CFG_MAXIRQS 7
+#define CFG_MAXDMACHANNELS 7
+
+/* IRQ flag */
+#define CFG_IRQSHARED 0x01 /* IRQ can be shared */
+
+/* DSP Resources: */
+#define CFG_DSPMAXMEMTYPES 10
+#define CFG_DEFAULT_NUM_WINDOWS 1 /* We support only one window. */
+
+/* A platform-related device handle: */
+struct cfg_devnode;
+
+/*
+ * Host resource structure.
+ */
+struct cfg_hostres {
+ u32 num_mem_windows; /* Set to default */
+ /* This is the base.memory */
+ u32 dw_mem_base[CFG_MAXMEMREGISTERS]; /* shm virtual address */
+ u32 dw_mem_length[CFG_MAXMEMREGISTERS]; /* Length of the Base */
+ u32 dw_mem_phys[CFG_MAXMEMREGISTERS]; /* shm Physical address */
+ u8 birq_registers; /* IRQ Number */
+ u8 birq_attrib; /* IRQ Attribute */
+ u32 dw_offset_for_monitor; /* The Shared memory starts from
+ * dw_mem_base + this offset */
+ /*
+ * Info needed by NODE for allocating channels to communicate with RMS:
+ * dw_chnl_offset: Offset of RMS channels. Lower channels are
+ * reserved.
+ * dw_chnl_buf_size: Size of channel buffer to send to RMS
+ * dw_num_chnls: Total number of channels
+ * (including reserved).
+ */
+ u32 dw_chnl_offset;
+ u32 dw_chnl_buf_size;
+ u32 dw_num_chnls;
+ void __iomem *dw_per_base;
+ u32 dw_per_pm_base;
+ u32 dw_core_pm_base;
+ void __iomem *dw_dmmu_base;
+ void __iomem *dw_sys_ctrl_base;
+};
+
+struct cfg_dspmemdesc {
+ u32 mem_type; /* Type of memory. */
+ u32 ul_min; /* Minimum amount of memory of this type. */
+ u32 ul_max; /* Maximum amount of memory of this type. */
+};
+
+#endif /* CFGDEFS_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/chnl.h b/drivers/staging/tidspbridge/include/dspbridge/chnl.h
new file mode 100644
index 0000000..89315dc
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/chnl.h
@@ -0,0 +1,130 @@
+/*
+ * chnl.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * DSP API channel interface: multiplexes data streams through the single
+ * physical link managed by a Bridge driver.
+ *
+ * See DSP API chnl.h for more details.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef CHNL_
+#define CHNL_
+
+#include <dspbridge/chnlpriv.h>
+
+/*
+ * ======== chnl_close ========
+ * Purpose:
+ * Ensures all pending I/O on this channel is cancelled, discards all
+ * queued I/O completion notifications, then frees the resources allocated
+ * for this channel, and makes the corresponding logical channel id
+ * available for subsequent use.
+ * Parameters:
+ * chnl_obj: Channel object handle.
+ * Returns:
+ * 0: Success;
+ * -EFAULT: Invalid chnl_obj.
+ * Requires:
+ * chnl_init(void) called.
+ * No thread must be blocked on this channel's I/O completion event.
+ * Ensures:
+ * 0: The I/O completion event for this channel is freed.
+ * chnl_obj is no longer valid.
+ */
+extern int chnl_close(struct chnl_object *chnl_obj);
+
+/*
+ * ======== chnl_create ========
+ * Purpose:
+ * Create a channel manager object, responsible for opening new channels
+ * and closing old ones for a given board.
+ * Parameters:
+ * phChnlMgr: Location to store a channel manager object on output.
+ * hdev_obj: Handle to a device object.
+ * pMgrAttrs: Channel manager attributes.
+ * pMgrAttrs->max_channels: Max channels
+ * pMgrAttrs->birq: Channel's I/O IRQ number.
+ * pMgrAttrs->irq_shared: TRUE if the IRQ is shareable.
+ * pMgrAttrs->word_size: DSP Word size in equivalent PC bytes.
+ * Returns:
+ * 0: Success;
+ * -EFAULT: hdev_obj is invalid.
+ * -EINVAL: max_channels is 0.
+ * Invalid DSP word size (must be > 0).
+ * Invalid base address for DSP communications.
+ * -ENOMEM: Insufficient memory for requested resources.
+ * -EIO: Unable to plug channel ISR for configured IRQ.
+ * -ECHRNG: This manager cannot handle this many channels.
+ * -EEXIST: Channel manager already exists for this device.
+ * Requires:
+ * chnl_init(void) called.
+ * phChnlMgr != NULL.
+ * pMgrAttrs != NULL.
+ * Ensures:
+ * 0: Subsequent calls to chnl_create() for the same
+ * board without an intervening call to
+ * chnl_destroy() will fail.
+ */
+extern int chnl_create(OUT struct chnl_mgr **phChnlMgr,
+ struct dev_object *hdev_obj,
+ IN CONST struct chnl_mgrattrs *pMgrAttrs);
+
+/*
+ * ======== chnl_destroy ========
+ * Purpose:
+ * Close all open channels, and destroy the channel manager.
+ * Parameters:
+ * hchnl_mgr: Channel manager object.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: hchnl_mgr was invalid.
+ * Requires:
+ * chnl_init(void) called.
+ * Ensures:
+ * 0: Cancels I/O on each open channel.
+ * Closes each open channel.
+ * chnl_create may subsequently be called for the
+ * same board.
+ */
+extern int chnl_destroy(struct chnl_mgr *hchnl_mgr);
+
+/*
+ * ======== chnl_exit ========
+ * Purpose:
+ * Discontinue usage of the CHNL module.
+ * Parameters:
+ * Returns:
+ * Requires:
+ * chnl_init(void) previously called.
+ * Ensures:
+ * Resources, if any acquired in chnl_init(void), are freed when the last
+ * client of CHNL calls chnl_exit(void).
+ */
+extern void chnl_exit(void);
+
+/*
+ * ======== chnl_init ========
+ * Purpose:
+ * Initialize the CHNL module's private state.
+ * Parameters:
+ * Returns:
+ * TRUE if initialized; FALSE if error occurred.
+ * Requires:
+ * Ensures:
+ * A requirement for each of the other public CHNL functions.
+ */
+extern bool chnl_init(void);
+
+#endif /* CHNL_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/chnldefs.h b/drivers/staging/tidspbridge/include/dspbridge/chnldefs.h
new file mode 100644
index 0000000..0fe3824
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/chnldefs.h
@@ -0,0 +1,67 @@
+/*
+ * chnldefs.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * System-wide channel objects and constants.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef CHNLDEFS_
+#define CHNLDEFS_
+
+/* Channel id option. */
+#define CHNL_PICKFREE (~0UL) /* Let manager pick a free channel. */
+
+/* Channel manager limits: */
+#define CHNL_INITIOREQS 4 /* Default # of I/O requests. */
+
+/* Channel modes */
+#define CHNL_MODETODSP 0 /* Data streaming to the DSP. */
+#define CHNL_MODEFROMDSP 1 /* Data streaming from the DSP. */
+
+/* GetIOCompletion flags */
+#define CHNL_IOCINFINITE 0xffffffff /* Wait forever for IO completion. */
+#define CHNL_IOCNOWAIT 0x0 /* Dequeue an IOC, if available. */
+
+/* IO Completion Record status: */
+#define CHNL_IOCSTATCOMPLETE 0x0000 /* IO Completed. */
+#define CHNL_IOCSTATCANCEL 0x0002 /* IO was cancelled */
+#define CHNL_IOCSTATTIMEOUT 0x0008 /* Wait for IOC timed out. */
+#define CHNL_IOCSTATEOS 0x8000 /* End Of Stream reached. */
+
+/* Macros for checking I/O Completion status: */
+#define CHNL_IS_EOS(ioc) (ioc.status & CHNL_IOCSTATEOS)
+#define CHNL_IS_IO_COMPLETE(ioc) (!(ioc.status & ~CHNL_IOCSTATEOS))
+#define CHNL_IS_IO_CANCELLED(ioc) (ioc.status & CHNL_IOCSTATCANCEL)
+#define CHNL_IS_TIMED_OUT(ioc) (ioc.status & CHNL_IOCSTATTIMEOUT)
+
+/* Channel attributes: */
+struct chnl_attr {
+ u32 uio_reqs; /* Max # of preallocated I/O requests. */
+ void *event_obj; /* User supplied auto-reset event object. */
+ char *pstr_event_name; /* Ptr to name of user event object. */
+ void *reserved1; /* Reserved for future use. */
+ u32 reserved2; /* Reserved for future use. */
+
+};
+
+/* I/O completion record: */
+struct chnl_ioc {
+ void *pbuf; /* Buffer to be filled/emptied. */
+ u32 byte_size; /* Bytes transferred. */
+ u32 buf_size; /* Actual buffer size in bytes */
+ u32 status; /* Status of IO completion. */
+ u32 dw_arg; /* User argument associated with pbuf. */
+};
+
+#endif /* CHNLDEFS_ */
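A hedged sketch (illustrative, not part of the applied diff) of checking an I/O completion record with the status macros above; note that the macros take the struct chnl_ioc by value, not through a pointer:

static void chnl_ioc_sketch(struct chnl_ioc ioc)
{
        if (CHNL_IS_TIMED_OUT(ioc))
                pr_warn("reclaim timed out\n");
        else if (CHNL_IS_IO_CANCELLED(ioc))
                pr_info("request cancelled\n");
        else if (CHNL_IS_IO_COMPLETE(ioc))
                pr_info("%u bytes moved%s\n", ioc.byte_size,
                        CHNL_IS_EOS(ioc) ? " (end of stream)" : "");
}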
diff --git a/drivers/staging/tidspbridge/include/dspbridge/chnlpriv.h b/drivers/staging/tidspbridge/include/dspbridge/chnlpriv.h
new file mode 100644
index 0000000..fce5ebd
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/chnlpriv.h
@@ -0,0 +1,101 @@
+/*
+ * chnlpriv.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Private channel header shared between DSPSYS, DSPAPI and
+ * Bridge driver modules.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef CHNLPRIV_
+#define CHNLPRIV_
+
+#include <dspbridge/chnldefs.h>
+#include <dspbridge/devdefs.h>
+#include <dspbridge/sync.h>
+
+/* Channel manager limits: */
+#define CHNL_MAXCHANNELS 32 /* Max channels available per transport */
+
+/*
+ * Transport channel Id definitions (must match dsp-side).
+ *
+ * For example, with CHNL_MAXCHANNELS = 16 per transport:
+ *
+ * ChnlIds:
+ * 0-15 (PCPY) - transport 0)
+ * 16-31 (DDMA) - transport 1)
+ * 32-47 (ZCPY) - transport 2)
+ */
+#define CHNL_PCPY 0 /* Proc-copy transport 0 */
+
+#define CHNL_MAXIRQ 0xff /* Arbitrarily large number. */
+
+/* The following modes are private: */
+#define CHNL_MODEUSEREVENT 0x1000 /* User provided the channel event. */
+#define CHNL_MODEMASK 0x1001
+
+/* Higher level channel states: */
+#define CHNL_STATEREADY 0 /* Channel ready for I/O. */
+#define CHNL_STATECANCEL 1 /* I/O was cancelled. */
+#define CHNL_STATEEOS 2 /* End Of Stream reached. */
+
+/* Determine if user supplied an event for this channel: */
+#define CHNL_IS_USER_EVENT(mode) (mode & CHNL_MODEUSEREVENT)
+
+/* Macros for checking mode: */
+#define CHNL_IS_INPUT(mode) (mode & CHNL_MODEFROMDSP)
+#define CHNL_IS_OUTPUT(mode) (!CHNL_IS_INPUT(mode))
+
+/* Types of channel class libraries: */
+#define CHNL_TYPESM 1 /* Shared memory driver. */
+#define CHNL_TYPEBM 2 /* Bus Mastering driver. */
+
+/* Max string length of channel I/O completion event name - change if needed */
+#define CHNL_MAXEVTNAMELEN 32
+
+/* Max memory pages lockable in CHNL_PrepareBuffer() - change if needed */
+#define CHNL_MAXLOCKPAGES 64
+
+/* Channel info. */
+struct chnl_info {
+ struct chnl_mgr *hchnl_mgr; /* Owning channel manager. */
+ u32 cnhl_id; /* Channel ID. */
+ void *event_obj; /* Channel I/O completion event. */
+ /*Abstraction of I/O completion event. */
+ struct sync_object *sync_event;
+ s8 dw_mode; /* Channel mode. */
+ u8 dw_state; /* Current channel state. */
+ u32 bytes_tx; /* Total bytes transferred. */
+ u32 cio_cs; /* Number of IOCs in queue. */
+ u32 cio_reqs; /* Number of IO Requests in queue. */
+ u32 process; /* Process owning this channel. */
+};
+
+/* Channel manager info: */
+struct chnl_mgrinfo {
+ u8 dw_type; /* Type of channel class library. */
+ /* Channel handle, given the channel id. */
+ struct chnl_object *chnl_obj;
+ u8 open_channels; /* Number of open channels. */
+ u8 max_channels; /* total # of chnls supported */
+};
+
+/* Channel Manager Attrs: */
+struct chnl_mgrattrs {
+ /* Max number of channels this manager can use. */
+ u8 max_channels;
+ u32 word_size; /* DSP Word size. */
+};
+
+#endif /* CHNLPRIV_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/clk.h b/drivers/staging/tidspbridge/include/dspbridge/clk.h
new file mode 100644
index 0000000..61474bc
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/clk.h
@@ -0,0 +1,101 @@
+/*
+ * clk.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Provides Clock functions.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef _CLK_H
+#define _CLK_H
+
+enum dsp_clk_id {
+ DSP_CLK_IVA2 = 0,
+ DSP_CLK_GPT5,
+ DSP_CLK_GPT6,
+ DSP_CLK_GPT7,
+ DSP_CLK_GPT8,
+ DSP_CLK_WDT3,
+ DSP_CLK_MCBSP1,
+ DSP_CLK_MCBSP2,
+ DSP_CLK_MCBSP3,
+ DSP_CLK_MCBSP4,
+ DSP_CLK_MCBSP5,
+ DSP_CLK_SSI,
+ DSP_CLK_NOT_DEFINED
+};
+
+/*
+ * ======== dsp_clk_exit ========
+ * Purpose:
+ * Discontinue usage of module; free resources when reference count
+ * reaches 0.
+ * Parameters:
+ * Returns:
+ * Requires:
+ * CLK initialized.
+ * Ensures:
+ * Resources used by module are freed when cRef reaches zero.
+ */
+extern void dsp_clk_exit(void);
+
+/*
+ * ======== dsp_clk_init ========
+ * Purpose:
+ * Initializes private state of CLK module.
+ * Parameters:
+ * Returns:
+ * TRUE if initialized; FALSE if error occurred.
+ * Requires:
+ * Ensures:
+ * CLK initialized.
+ */
+extern void dsp_clk_init(void);
+
+void dsp_gpt_wait_overflow(short int clk_id, unsigned int load);
+
+/*
+ * ======== dsp_clk_enable ========
+ * Purpose:
+ * Enables the clock requested.
+ * Parameters:
+ * Returns:
+ * 0: Success.
+ * -EPERM: Error occurred while enabling the clock.
+ * Requires:
+ * Ensures:
+ */
+extern int dsp_clk_enable(IN enum dsp_clk_id clk_id);
+
+u32 dsp_clock_enable_all(u32 dsp_per_clocks);
+
+/*
+ * ======== dsp_clk_disable ========
+ * Purpose:
+ * Disables the clock requested.
+ * Parameters:
+ * Returns:
+ * 0: Success.
+ * -EPERM: Error occurred while disabling the clock.
+ * Requires:
+ * Ensures:
+ */
+extern int dsp_clk_disable(IN enum dsp_clk_id clk_id);
+
+extern u32 dsp_clk_get_iva2_rate(void);
+
+u32 dsp_clock_disable_all(u32 dsp_per_clocks);
+
+extern void ssi_clk_prepare(bool FLAG);
+
+#endif /* _CLK_H */
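As a sketch (illustrative, not part of the applied diff), the interface is a plain enable/disable pair keyed by enum dsp_clk_id; a caller gating one of the DSP-side GP timers might look like:

static int gpt5_clock_sketch(void)
{
        int status;

        status = dsp_clk_enable(DSP_CLK_GPT5);
        if (status)
                return status;  /* -EPERM if the clock could not be enabled */

        /* ... program and use GPT5 ... */

        return dsp_clk_disable(DSP_CLK_GPT5);
}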
diff --git a/drivers/staging/tidspbridge/include/dspbridge/cmm.h b/drivers/staging/tidspbridge/include/dspbridge/cmm.h
new file mode 100644
index 0000000..3cf93aa
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/cmm.h
@@ -0,0 +1,386 @@
+/*
+ * cmm.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * The Communication Memory Management(CMM) module provides shared memory
+ * management services for DSP/BIOS Bridge data streaming and messaging.
+ * Multiple shared memory segments can be registered with CMM. Memory is
+ * coalesced back to the appropriate pool when a buffer is freed.
+ *
+ * The CMM_Xlator[xxx] functions are used for node messaging and data
+ * streaming address translation to perform zero-copy inter-processor
+ * data transfer(GPP<->DSP). A "translator" object is created for a node or
+ * stream object that contains per thread virtual address information. This
+ * translator info is used at runtime to perform SM address translation
+ * to/from the DSP address space.
+ *
+ * Notes:
+ * cmm_xlator_alloc_buf - Used by Node and Stream modules for SM address
+ * translation.
+ *
+ * Copyright (C) 2008 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef CMM_
+#define CMM_
+
+#include <dspbridge/devdefs.h>
+
+#include <dspbridge/cmmdefs.h>
+#include <dspbridge/host_os.h>
+
+/*
+ * ======== cmm_calloc_buf ========
+ * Purpose:
+ * Allocate memory buffers that can be used for data streaming or
+ * messaging.
+ * Parameters:
+ * hcmm_mgr: Cmm Mgr handle.
+ * usize: Number of bytes to allocate.
+ * pattr: Attributes of memory to allocate.
+ * pp_buf_va: Address of where to place VA.
+ * Returns:
+ * Pointer to a zero'd block of SM memory;
+ * NULL if memory couldn't be allocated,
+ * or if byte_size == 0,
+ * Requires:
+ * Valid hcmm_mgr.
+ * CMM initialized.
+ * Ensures:
+ * The returned pointer, if not NULL, points to a valid memory block of
+ * the size requested.
+ *
+ */
+extern void *cmm_calloc_buf(struct cmm_object *hcmm_mgr,
+ u32 usize, struct cmm_attrs *pattrs,
+ OUT void **pp_buf_va);
+
+/*
+ * ======== cmm_create ========
+ * Purpose:
+ * Create a communication memory manager object.
+ * Parameters:
+ * ph_cmm_mgr: Location to store a communication manager handle on
+ * output.
+ * hdev_obj: Handle to a device object.
+ * pMgrAttrs: Comm mem manager attributes.
+ * Returns:
+ * 0: Success;
+ * -ENOMEM: Insufficient memory for requested resources.
+ * -EPERM: Failed to initialize critical sect sync object.
+ *
+ * Requires:
+ * cmm_init(void) called.
+ * ph_cmm_mgr != NULL.
+ * pMgrAttrs->ul_min_block_size >= 4 bytes.
+ * Ensures:
+ *
+ */
+extern int cmm_create(OUT struct cmm_object **ph_cmm_mgr,
+ struct dev_object *hdev_obj,
+ IN CONST struct cmm_mgrattrs *pMgrAttrs);
+
+/*
+ * ======== cmm_destroy ========
+ * Purpose:
+ * Destroy the communication memory manager object.
+ * Parameters:
+ * hcmm_mgr: Cmm Mgr handle.
+ * bForce: Force deallocation of all cmm memory immediately if set TRUE.
+ * If FALSE, any outstanding allocations will return -EPERM
+ * status.
+ * Returns:
+ * 0: CMM object & resources deleted.
+ * -EPERM: Unable to free CMM object due to outstanding allocation.
+ * -EFAULT: Unable to free CMM due to bad handle.
+ * Requires:
+ * CMM is initialized.
+ * hcmm_mgr != NULL.
+ * Ensures:
+ * Memory resources used by Cmm Mgr are freed.
+ */
+extern int cmm_destroy(struct cmm_object *hcmm_mgr, bool bForce);
+
+/*
+ * ======== cmm_exit ========
+ * Purpose:
+ * Discontinue usage of module. Cleanup CMM module if CMM cRef reaches zero.
+ * Parameters:
+ * n/a
+ * Returns:
+ * n/a
+ * Requires:
+ * CMM is initialized.
+ * Ensures:
+ */
+extern void cmm_exit(void);
+
+/*
+ * ======== cmm_free_buf ========
+ * Purpose:
+ * Free the given buffer.
+ * Parameters:
+ * hcmm_mgr: Cmm Mgr handle.
+ * pbuf: Pointer to memory allocated by cmm_calloc_buf().
+ * ul_seg_id: SM segment Id used in CMM_Calloc() attrs.
+ * Set to 0 to use default segment.
+ * Returns:
+ * 0
+ * -EPERM
+ * Requires:
+ * CMM initialized.
+ * buf_pa != NULL
+ * Ensures:
+ *
+ */
+extern int cmm_free_buf(struct cmm_object *hcmm_mgr,
+ void *buf_pa, u32 ul_seg_id);
+
+/*
+ * ======== cmm_get_handle ========
+ * Purpose:
+ * Return the handle to the cmm mgr for the given device obj.
+ * Parameters:
+ * hprocessor: Handle to a Processor.
+ * ph_cmm_mgr: Location to store the shared memory mgr handle on
+ * output.
+ *
+ * Returns:
+ * 0: Cmm Mgr opaque handle returned.
+ * -EFAULT: Invalid handle.
+ * Requires:
+ * ph_cmm_mgr != NULL
+ * hdev_obj != NULL
+ * Ensures:
+ */
+extern int cmm_get_handle(void *hprocessor,
+ OUT struct cmm_object **ph_cmm_mgr);
+
+/*
+ * ======== cmm_get_info ========
+ * Purpose:
+ * Return the current SM and VM utilization information.
+ * Parameters:
+ * hcmm_mgr: Handle to a Cmm Mgr.
+ * cmm_info_obj: Location to store the Cmm information on output.
+ *
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid handle.
+ * -EINVAL Invalid input argument.
+ * Requires:
+ * Ensures:
+ *
+ */
+extern int cmm_get_info(struct cmm_object *hcmm_mgr,
+ OUT struct cmm_info *cmm_info_obj);
+
+/*
+ * ======== cmm_init ========
+ * Purpose:
+ * Initializes private state of CMM module.
+ * Parameters:
+ * Returns:
+ * TRUE if initialized; FALSE if error occurred.
+ * Requires:
+ * Ensures:
+ * CMM initialized.
+ */
+extern bool cmm_init(void);
+
+/*
+ * ======== cmm_register_gppsm_seg ========
+ * Purpose:
+ * Register a block of SM with the CMM.
+ * Parameters:
+ * hcmm_mgr: Handle to a Cmm Mgr.
+ * lpGPPBasePA: GPP Base Physical address.
+ * ul_size: Size in GPP bytes.
+ * dwDSPAddrOffset GPP PA to DSP PA Offset.
+ * c_factor: Add offset if CMM_ADDTODSPPA, sub if CMM_SUBFROMDSPPA.
+ * dw_dsp_base: DSP virtual base byte address.
+ * ul_dsp_size: Size of DSP segment in bytes.
+ * pulSegId: Address to store segment Id.
+ *
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid hcmm_mgr handle.
+ * -EINVAL: Invalid input argument.
+ * -EPERM: Unable to register.
+ * - On success *pulSegId is a valid SM segment ID.
+ * Requires:
+ * ul_size > 0
+ * pulSegId != NULL
+ * dw_gpp_base_pa != 0
+ * c_factor = CMM_ADDTODSPPA || c_factor = CMM_SUBFROMDSPPA
+ * Ensures:
+ *
+ */
+extern int cmm_register_gppsm_seg(struct cmm_object *hcmm_mgr,
+ unsigned int dw_gpp_base_pa,
+ u32 ul_size,
+ u32 dwDSPAddrOffset,
+ s8 c_factor,
+ unsigned int dw_dsp_base,
+ u32 ul_dsp_size,
+ u32 *pulSegId, u32 dwGPPBaseBA);
+
+/*
+ * ======== cmm_un_register_gppsm_seg ========
+ * Purpose:
+ * Unregister the given memory segment that was previously registered
+ * by cmm_register_gppsm_seg.
+ * Parameters:
+ * hcmm_mgr: Handle to a Cmm Mgr.
+ * ul_seg_id Segment identifier returned by cmm_register_gppsm_seg.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid handle.
+ * -EINVAL: Invalid ul_seg_id.
+ * -EPERM: Unable to unregister for unknown reason.
+ * Requires:
+ * Ensures:
+ *
+ */
+extern int cmm_un_register_gppsm_seg(struct cmm_object *hcmm_mgr,
+ u32 ul_seg_id);
+
+/*
+ * ======== cmm_xlator_alloc_buf ========
+ * Purpose:
+ * Allocate the specified SM buffer and create a local memory descriptor.
+ * Place the descriptor on the translator's HaQ (Host Alloc'd Queue).
+ * Parameters:
+ * xlator: Handle to a Xlator object.
+ * pVaBuf: Virtual address ptr(client context)
+ * uPaSize: Size of SM memory to allocate.
+ * Returns:
+ * Ptr to valid physical address(Pa) of uPaSize bytes, NULL if failed.
+ * Requires:
+ * pVaBuf != 0.
+ * uPaSize != 0.
+ * Ensures:
+ *
+ */
+extern void *cmm_xlator_alloc_buf(struct cmm_xlatorobject *xlator,
+ void *pVaBuf, u32 uPaSize);
+
+/*
+ * ======== cmm_xlator_create ========
+ * Purpose:
+ * Create a translator(xlator) object used for process specific Va<->Pa
+ * address translation. Node messaging and streams use this to perform
+ * inter-processor(GPP<->DSP) zero-copy data transfer.
+ * Parameters:
+ * phXlator: Address to place handle to a new Xlator handle.
+ * hcmm_mgr: Handle to Cmm Mgr associated with this translator.
+ * pXlatorAttrs: Translator attributes used for the client NODE or STREAM.
+ * Returns:
+ * 0: Success.
+ * -EINVAL: Bad input Attrs.
+ * -ENOMEM: Insufficient memory(local) for requested resources.
+ * Requires:
+ * phXlator != NULL
+ * hcmm_mgr != NULL
+ * pXlatorAttrs != NULL
+ * Ensures:
+ *
+ */
+extern int cmm_xlator_create(OUT struct cmm_xlatorobject **phXlator,
+ struct cmm_object *hcmm_mgr,
+ struct cmm_xlatorattrs *pXlatorAttrs);
+
+/*
+ * ======== cmm_xlator_delete ========
+ * Purpose:
+ * Delete translator resources
+ * Parameters:
+ * xlator: handle to translator.
+ * bForce: if TRUE, free the translator's SM buffers/descriptors.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Bad translator handle.
+ * -EPERM: Unable to free translator resources.
+ * Requires:
+ * refs > 0
+ * Ensures:
+ *
+ */
+extern int cmm_xlator_delete(struct cmm_xlatorobject *xlator,
+ bool bForce);
+
+/*
+ * ======== cmm_xlator_free_buf ========
+ * Purpose:
+ * Free SM buffer and descriptor.
+ * Does not free client process VM.
+ * Parameters:
+ * xlator: handle to translator.
+ * pBufVa: Virtual address of the SM buffer (PA) to free.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Bad translator handle.
+ * Requires:
+ * Ensures:
+ *
+ */
+extern int cmm_xlator_free_buf(struct cmm_xlatorobject *xlator,
+ void *pBufVa);
+
+/*
+ * ======== cmm_xlator_info ========
+ * Purpose:
+ * Set/Get process specific "translator" address info.
+ * This is used to perform fast virtual address translation
+ * for shared memory buffers between the GPP and DSP.
+ * Parameters:
+ * xlator: handle to translator.
+ * paddr: Virtual base address of segment.
+ * ul_size: Size in bytes.
+ * uSegId: Segment identifier of SM segment(s)
+ * set_info: Set xlator fields if TRUE, else return base addr
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Bad translator handle.
+ * Requires:
+ * (refs > 0)
+ * (paddr != NULL)
+ * (ul_size > 0)
+ * Ensures:
+ *
+ */
+extern int cmm_xlator_info(struct cmm_xlatorobject *xlator,
+ IN OUT u8 **paddr,
+ u32 ul_size, u32 uSegId, bool set_info);
+
+/*
+ * ======== cmm_xlator_translate ========
+ * Purpose:
+ * Perform address translation VA<->PA for the specified stream or
+ * message shared memory buffer.
+ * Parameters:
+ * xlator: handle to translator.
+ * paddr: address of buffer to translate.
+ * xType: Type of address xlation. CMM_PA2VA or CMM_VA2PA.
+ * Returns:
+ * Valid address on success, else NULL.
+ * Requires:
+ * refs > 0
+ * paddr != NULL
+ * (xType >= CMM_VA2PA) && (xType <= CMM_DSPPA2PA)
+ * Ensures:
+ *
+ */
+extern void *cmm_xlator_translate(struct cmm_xlatorobject *xlator,
+ void *paddr, enum cmm_xlatetype xType);
+
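+/*
+ * Illustrative usage sketch (not part of the API itself): translate a
+ * client virtual address of a shared-memory buffer to its DSP physical
+ * address before handing it to the DSP. Assumes "xlator" was created
+ * with cmm_xlator_create() and configured with cmm_xlator_info().
+ *
+ *	void *dsp_pa;
+ *
+ *	dsp_pa = cmm_xlator_translate(xlator, buf_va, CMM_VA2DSPPA);
+ *	if (!dsp_pa)
+ *		return -EFAULT;
+ */
+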
+#endif /* CMM_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/cmmdefs.h b/drivers/staging/tidspbridge/include/dspbridge/cmmdefs.h
new file mode 100644
index 0000000..fbff372
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/cmmdefs.h
@@ -0,0 +1,105 @@
+/*
+ * cmmdefs.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Global MEM constants and types.
+ *
+ * Copyright (C) 2008 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef CMMDEFS_
+#define CMMDEFS_
+
+#include <dspbridge/list.h>
+
+/* Cmm attributes used in cmm_create() */
+struct cmm_mgrattrs {
+ /* Minimum SM allocation; default 32 bytes. */
+ u32 ul_min_block_size;
+};
+
+/* Attributes for CMM_AllocBuf() & CMM_AllocDesc() */
+struct cmm_attrs {
+ u32 ul_seg_id; /* 1,2... are SM segments. 0 is not. */
+ u32 ul_alignment; /* 0,1,2,4....ul_min_block_size */
+};
+
+/*
+ * DSPPa to GPPPa Conversion Factor.
+ *
+ * For typical platforms:
+ * converted Address = PaDSP + ( c_factor * addressToConvert).
+ */
+#define CMM_SUBFROMDSPPA -1
+#define CMM_ADDTODSPPA 1
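+
+/*
+ * Illustrative example (hypothetical values): if the GPP and DSP views
+ * of the same SM segment differ by a fixed offset of 0x20000000, a
+ * segment registered with c_factor == CMM_ADDTODSPPA (+1) adds that
+ * offset during conversion while CMM_SUBFROMDSPPA (-1) subtracts it:
+ *
+ *	converted = addr + (c_factor * 0x20000000);
+ */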
+
+#define CMM_ALLSEGMENTS 0xFFFFFF /* All SegIds */
+#define CMM_MAXGPPSEGS 1 /* Maximum # of SM segs */
+
+/*
+ * SMSEGs are SM segments the DSP allocates from.
+ *
+ * This info is used by the GPP to xlate DSP allocated PAs.
+ */
+
+struct cmm_seginfo {
+ u32 dw_seg_base_pa; /* Start Phys address of SM segment */
+ /* Total size in bytes of segment: DSP+GPP */
+ u32 ul_total_seg_size;
+ u32 dw_gpp_base_pa; /* Start Phys addr of Gpp SM seg */
+ u32 ul_gpp_size; /* Size of Gpp SM seg in bytes */
+ u32 dw_dsp_base_va; /* DSP virt base byte address */
+ u32 ul_dsp_size; /* DSP seg size in bytes */
+ /* # of current GPP allocations from this segment */
+ u32 ul_in_use_cnt;
+ u32 dw_seg_base_va; /* Start Virt address of SM seg */
+
+};
+
+/* CMM useful information */
+struct cmm_info {
+ /* # of SM segments registered with this Cmm. */
+ u32 ul_num_gppsm_segs;
+ /* Total # of allocations outstanding for CMM */
+ u32 ul_total_in_use_cnt;
+ /* Min SM block size allocation from cmm_create() */
+ u32 ul_min_block_size;
+ /* Info per registered SM segment. */
+ struct cmm_seginfo seg_info[CMM_MAXGPPSEGS];
+};
+
+/* XlatorCreate attributes */
+struct cmm_xlatorattrs {
+ u32 ul_seg_id; /* segment Id used for SM allocations */
+ u32 dw_dsp_bufs; /* # of DSP-side bufs */
+ u32 dw_dsp_buf_size; /* size of DSP-side bufs in GPP bytes */
+ /* Vm base address alloc'd in client process context */
+ void *vm_base;
+ /* dw_vm_size must be >= (dwMaxNumBufs * dwMaxSize) */
+ u32 dw_vm_size;
+};
+
+/*
+ * Cmm translation types. Use to map SM addresses to process context.
+ */
+enum cmm_xlatetype {
+ CMM_VA2PA = 0, /* Virtual to GPP physical address xlation */
+ CMM_PA2VA = 1, /* GPP Physical to virtual */
+ CMM_VA2DSPPA = 2, /* Va to DSP Pa */
+ CMM_PA2DSPPA = 3, /* GPP Pa to DSP Pa */
+ CMM_DSPPA2PA = 4, /* DSP Pa to GPP Pa */
+};
+
+struct cmm_object;
+struct cmm_xlatorobject;
+
+#endif /* CMMDEFS_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/cod.h b/drivers/staging/tidspbridge/include/dspbridge/cod.h
new file mode 100644
index 0000000..c8e6098
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/cod.h
@@ -0,0 +1,369 @@
+/*
+ * cod.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Code management module for DSPs. This module provides an interface
+ * for loading both static and dynamic code objects onto DSP systems.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef COD_
+#define COD_
+
+#include <dspbridge/dblldefs.h>
+
+#define COD_MAXPATHLENGTH 255
+#define COD_TRACEBEG "SYS_PUTCBEG"
+#define COD_TRACEEND "SYS_PUTCEND"
+#define COD_TRACECURPOS "BRIDGE_SYS_PUTC_current"
+#define COD_TRACESECT "trace"
+#define COD_TRACEBEGOLD "PUTCBEG"
+#define COD_TRACEENDOLD "PUTCEND"
+
+#define COD_NOLOAD DBLL_NOLOAD
+#define COD_SYMB DBLL_SYMB
+
+/* COD code manager handle */
+struct cod_manager;
+
+/* COD library handle */
+struct cod_libraryobj;
+
+/* COD attributes */
+struct cod_attrs {
+ u32 ul_reserved;
+};
+
+/*
+ * Function prototypes for writing memory to a DSP system, allocating
+ * and freeing DSP memory.
+ */
+typedef u32(*cod_writefxn) (void *priv_ref, u32 ulDspAddr,
+ void *pbuf, u32 ul_num_bytes, u32 nMemSpace);
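+
+/*
+ * Sketch of a board-specific write function matching cod_writefxn
+ * (illustrative only; my_bridge_write_mem() is a hypothetical helper,
+ * not part of this driver):
+ *
+ *	static u32 my_write(void *priv_ref, u32 ulDspAddr, void *pbuf,
+ *			    u32 ul_num_bytes, u32 nMemSpace)
+ *	{
+ *		struct my_dev *dev = priv_ref;
+ *
+ *		return my_bridge_write_mem(dev, ulDspAddr, pbuf,
+ *					   ul_num_bytes, nMemSpace);
+ *	}
+ *
+ * Such a function is passed to cod_load_base() as write_fxn, and the
+ * pArb argument is forwarded to it as priv_ref.
+ */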
+
+/*
+ * ======== cod_close ========
+ * Purpose:
+ * Close a library opened with cod_open().
+ * Parameters:
+ * lib - Library handle returned by cod_open().
+ * Returns:
+ * None.
+ * Requires:
+ * COD module initialized.
+ * valid lib.
+ * Ensures:
+ *
+ */
+extern void cod_close(struct cod_libraryobj *lib);
+
+/*
+ * ======== cod_create ========
+ * Purpose:
+ * Create an object to manage code on a DSP system. This object can be
+ * used to load an initial program image with arguments that can later
+ * be expanded with dynamically loaded object files.
+ * Symbol table information is managed by this object and can be retrieved
+ * using the cod_get_sym_value() function.
+ * Parameters:
+ * phManager: created manager object
+ * pstrZLFile: ZL DLL filename, of length < COD_MAXPATHLENGTH.
+ * attrs: attributes to be used by this object. A NULL value
+ * will cause default attrs to be used.
+ * Returns:
+ * 0: Success.
+ * -ESPIPE: ZL_Create failed.
+ * -ENOSYS: attrs was not NULL. We don't yet support
+ * non default values of attrs.
+ * Requires:
+ * COD module initialized.
+ * pstrZLFile != NULL
+ * Ensures:
+ */
+extern int cod_create(OUT struct cod_manager **phManager,
+ char *pstrZLFile,
+ IN OPTIONAL CONST struct cod_attrs *attrs);
+
+/*
+ * ======== cod_delete ========
+ * Purpose:
+ * Delete a code manager object.
+ * Parameters:
+ * cod_mgr_obj: handle of manager to be deleted
+ * Returns:
+ * None.
+ * Requires:
+ * COD module initialized.
+ * valid cod_mgr_obj.
+ * Ensures:
+ */
+extern void cod_delete(struct cod_manager *cod_mgr_obj);
+
+/*
+ * ======== cod_exit ========
+ * Purpose:
+ * Discontinue usage of the COD module.
+ * Parameters:
+ * None.
+ * Returns:
+ * None.
+ * Requires:
+ * COD initialized.
+ * Ensures:
+ * Resources acquired in cod_init(void) are freed.
+ */
+extern void cod_exit(void);
+
+/*
+ * ======== cod_get_base_lib ========
+ * Purpose:
+ * Get handle to the base image DBL library.
+ * Parameters:
+ * cod_mgr_obj: handle of code manager
+ * plib: location to store library handle on output.
+ * Returns:
+ * 0: Success.
+ * Requires:
+ * COD module initialized.
+ * valid cod_mgr_obj.
+ * plib != NULL.
+ * Ensures:
+ */
+extern int cod_get_base_lib(struct cod_manager *cod_mgr_obj,
+ struct dbll_library_obj **plib);
+
+/*
+ * ======== cod_get_base_name ========
+ * Purpose:
+ * Get the name of the base image DBL library.
+ * Parameters:
+ * cod_mgr_obj: handle of code manager
+ * pszName: location to store library name on output.
+ * usize: size of name buffer.
+ * Returns:
+ * 0: Success.
+ * -EPERM: Buffer too small.
+ * Requires:
+ * COD module initialized.
+ * valid cod_mgr_obj.
+ * pszName != NULL.
+ * Ensures:
+ */
+extern int cod_get_base_name(struct cod_manager *cod_mgr_obj,
+ char *pszName, u32 usize);
+
+/*
+ * ======== cod_get_entry ========
+ * Purpose:
+ * Retrieve the entry point of a loaded DSP program image
+ * Parameters:
+ * cod_mgr_obj: handle of code manager
+ * pulEntry: pointer to location for entry point
+ * Returns:
+ * 0: Success.
+ * Requires:
+ * COD module initialized.
+ * valid cod_mgr_obj.
+ * pulEntry != NULL.
+ * Ensures:
+ */
+extern int cod_get_entry(struct cod_manager *cod_mgr_obj,
+ u32 *pulEntry);
+
+/*
+ * ======== cod_get_loader ========
+ * Purpose:
+ * Get handle to the DBL loader.
+ * Parameters:
+ * cod_mgr_obj: handle of code manager
+ * phLoader: location to store loader handle on output.
+ * Returns:
+ * 0: Success.
+ * Requires:
+ * COD module initialized.
+ * valid cod_mgr_obj.
+ * phLoader != NULL.
+ * Ensures:
+ */
+extern int cod_get_loader(struct cod_manager *cod_mgr_obj,
+ struct dbll_tar_obj **phLoader);
+
+/*
+ * ======== cod_get_section ========
+ * Purpose:
+ * Retrieve the starting address and length of a section in the COFF file
+ * given the section name.
+ * Parameters:
+ * lib Library handle returned from cod_open().
+ * pstrSect: name of the section, with or without leading "."
+ * puAddr: Location to store address.
+ * puLen: Location to store length.
+ * Returns:
+ * 0: Success
+ * -ESPIPE: Symbols could not be found or have not been loaded onto
+ * the board.
+ * Requires:
+ * COD module initialized.
+ * valid cod_mgr_obj.
+ * pstrSect != NULL;
+ * puAddr != NULL;
+ * puLen != NULL;
+ * Ensures:
+ * 0: *puAddr and *puLen contain the address and length of the
+ * section.
+ * else: *puAddr == 0 and *puLen == 0;
+ *
+ */
+extern int cod_get_section(struct cod_libraryobj *lib,
+ IN char *pstrSect,
+ OUT u32 *puAddr, OUT u32 *puLen);
+
+/*
+ * ======== cod_get_sym_value ========
+ * Purpose:
+ * Retrieve the value for the specified symbol. The symbol is first
+ * searched for literally and then, if not found, searched for as a
+ * C symbol.
+ * Parameters:
+ * cod_mgr_obj: handle of code manager
+ * pstrSym: name of the symbol
+ * pul_value: location to store the symbol value on output
+ * Returns:
+ * 0: Success.
+ * -ESPIPE: Symbols could not be found or have not been loaded onto
+ * the board.
+ * Requires:
+ * COD module initialized.
+ * Valid cod_mgr_obj.
+ * pstrSym != NULL.
+ * pul_value != NULL.
+ * Ensures:
+ */
+extern int cod_get_sym_value(struct cod_manager *cod_mgr_obj,
+ IN char *pstrSym, OUT u32 * pul_value);
+
+/*
+ * ======== cod_init ========
+ * Purpose:
+ * Initialize the COD module's private state.
+ * Parameters:
+ * None.
+ * Returns:
+ * TRUE if initialized; FALSE if error occurred.
+ * Requires:
+ * Ensures:
+ * A requirement for each of the other public COD functions.
+ */
+extern bool cod_init(void);
+
+/*
+ * ======== cod_load_base ========
+ * Purpose:
+ * Load the initial program image, optionally with command-line arguments,
+ * on the DSP system managed by the supplied handle. The program to be
+ * loaded must be the first element of the args array and must be a fully
+ * qualified pathname.
+ * Parameters:
+ * cod_mgr_obj: manager to load the code with
+ * nArgc: number of arguments in the aArgs array
+ * aArgs: array of strings for arguments to DSP program
+ * write_fxn: board-specific function to write data to DSP system
+ * pArb: arbitrary pointer to be passed as first arg to write_fxn
+ * envp: array of environment strings for DSP exec.
+ * Returns:
+ * 0: Success.
+ * -EBADF: Failed to open target code.
+ * Requires:
+ * COD module initialized.
+ * cod_mgr_obj is valid.
+ * nArgc > 0.
+ * aArgs != NULL.
+ * aArgs[0] != NULL.
+ * pfn_write != NULL.
+ * Ensures:
+ */
+extern int cod_load_base(struct cod_manager *cod_mgr_obj,
+ u32 nArgc, char *aArgs[],
+ cod_writefxn pfn_write, void *pArb,
+ char *envp[]);
+
+/*
+ * ======== cod_open ========
+ * Purpose:
+ * Open a library for reading sections. Does not load or set the base.
+ * Parameters:
+ * hmgr: manager to load the code with
+ * pszCoffPath: Coff file to open.
+ * flags: COD_NOLOAD (don't load symbols) or COD_SYMB (load
+ * symbols).
+ * pLib: Handle returned that can be used in calls to cod_close
+ * and cod_get_section.
+ * Returns:
+ * 0: Success.
+ * -EBADF: Failed to open target code.
+ * Requires:
+ * COD module initialized.
+ * hmgr is valid.
+ * flags == COD_NOLOAD || flags == COD_SYMB.
+ * pszCoffPath != NULL.
+ * Ensures:
+ */
+extern int cod_open(struct cod_manager *hmgr,
+ IN char *pszCoffPath,
+ u32 flags, OUT struct cod_libraryobj **pLib);
+
+/*
+ * ======== cod_open_base ========
+ * Purpose:
+ * Open base image for reading sections. Does not load the base.
+ * Parameters:
+ * hmgr: manager to load the code with
+ * pszCoffPath: Coff file to open.
+ * flags: Specifies whether to load symbols.
+ * Returns:
+ * 0: Success.
+ * -EBADF: Failed to open target code.
+ * Requires:
+ * COD module initialized.
+ * hmgr is valid.
+ * pszCoffPath != NULL.
+ * Ensures:
+ */
+extern int cod_open_base(struct cod_manager *hmgr, IN char *pszCoffPath,
+ dbll_flags flags);
+
+/*
+ * ======== cod_read_section ========
+ * Purpose:
+ * Retrieve the content of a code section given the section name.
+ * Parameters:
+ * lib - library handle returned from cod_open()
+ * pstrSect - name of the section, with or without leading "."
+ * pstrContent - buffer to store content of the section.
+ * cContentSize - size of the pstrContent buffer.
+ * Returns:
+ * 0: on success, error code on failure
+ * -ESPIPE: Symbols have not been loaded onto the board.
+ * Requires:
+ * COD module initialized.
+ * valid cod_mgr_obj.
+ * pstrSect != NULL;
+ * pstrContent != NULL;
+ * Ensures:
+ * 0: *pstrContent stores the content of the named section.
+ */
+extern int cod_read_section(struct cod_libraryobj *lib,
+ IN char *pstrSect,
+ OUT char *pstrContent, IN u32 cContentSize);
+
+#endif /* COD_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/dbc.h b/drivers/staging/tidspbridge/include/dspbridge/dbc.h
new file mode 100644
index 0000000..76f049e
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/dbc.h
@@ -0,0 +1,46 @@
+/*
+ * dbc.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * "Design by Contract" programming macros.
+ *
+ * Notes:
+ * Requires that the GT->ERROR function has been defaulted to a valid
+ * error handler for the given execution environment.
+ *
+ * Does not require that GT_init() be called.
+ *
+ * Copyright (C) 2008 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef DBC_
+#define DBC_
+
+/* Assertion Macros: */
+#ifdef CONFIG_BRIDGE_DEBUG
+
+#define DBC_ASSERT(exp) \
+ if (!(exp)) \
+ pr_err("%s, line %d: Assertion (" #exp ") failed.\n", \
+ __FILE__, __LINE__)
+#define DBC_REQUIRE DBC_ASSERT /* Function Precondition. */
+#define DBC_ENSURE DBC_ASSERT /* Function Postcondition. */
+
+#else
+
+#define DBC_ASSERT(exp) {}
+#define DBC_REQUIRE(exp) {}
+#define DBC_ENSURE(exp) {}
+
+#endif /* CONFIG_BRIDGE_DEBUG */
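+
+/*
+ * Typical usage (illustrative only): check preconditions on entry and
+ * postconditions before returning. With CONFIG_BRIDGE_DEBUG disabled
+ * these macros expand to empty statements.
+ *
+ *	DBC_REQUIRE(hnode != NULL);
+ *	DBC_REQUIRE(buf_size > 0);
+ *	... function body sets status ...
+ *	DBC_ENSURE(status == 0 || status < 0);
+ */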
+
+#endif /* DBC_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/dbdcd.h b/drivers/staging/tidspbridge/include/dspbridge/dbdcd.h
new file mode 100644
index 0000000..df172bc
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/dbdcd.h
@@ -0,0 +1,358 @@
+/*
+ * dbdcd.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Defines the DSP/BIOS Bridge Configuration Database (DCD) API.
+ *
+ * Copyright (C) 2008 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef DBDCD_
+#define DBDCD_
+
+#include <dspbridge/dbdcddef.h>
+#include <dspbridge/host_os.h>
+#include <dspbridge/nldrdefs.h>
+
+/*
+ * ======== dcd_auto_register ========
+ * Purpose:
+ * This function automatically registers DCD objects specified in a
+ * special COFF section called ".dcd_register"
+ * Parameters:
+ * hdcd_mgr: A DCD manager handle.
+ * pszCoffPath: Pointer to name of COFF file containing DCD
+ * objects to be registered.
+ * Returns:
+ * 0: Success.
+ * -EACCES: Unable to find auto-registration/read/load section.
+ * -EFAULT: Invalid DCD_HMANAGER handle.
+ * Requires:
+ * DCD initialized.
+ * Ensures:
+ * Note:
+ * Due to the DCD database construction, it is essential for a DCD-enabled
+ * COFF file to contain the right COFF sections, especially
+ * ".dcd_register", which is used for auto registration.
+ */
+extern int dcd_auto_register(IN struct dcd_manager *hdcd_mgr,
+ IN char *pszCoffPath);
+
+/*
+ * ======== dcd_auto_unregister ========
+ * Purpose:
+ * This function automatically unregisters DCD objects specified in a
+ * special COFF section called ".dcd_register"
+ * Parameters:
+ * hdcd_mgr: A DCD manager handle.
+ * pszCoffPath: Pointer to name of COFF file containing
+ * DCD objects to be unregistered.
+ * Returns:
+ * 0: Success.
+ * -EACCES: Unable to find auto-registration/read/load section.
+ * -EFAULT: Invalid DCD_HMANAGER handle.
+ * Requires:
+ * DCD initialized.
+ * Ensures:
+ * Note:
+ * Due to the DCD database construction, it is essential for a DCD-enabled
+ * COFF file to contain the right COFF sections, especially
+ * ".dcd_register", which is used for auto unregistration.
+ */
+extern int dcd_auto_unregister(IN struct dcd_manager *hdcd_mgr,
+ IN char *pszCoffPath);
+
+/*
+ * ======== dcd_create_manager ========
+ * Purpose:
+ * This function creates a DCD module manager.
+ * Parameters:
+ * pszZlDllName: Pointer to a DLL name string.
+ * phDcdMgr: A pointer to a DCD manager handle.
+ * Returns:
+ * 0: Success.
+ * -ENOMEM: Unable to allocate memory for DCD manager handle.
+ * -EPERM: General failure.
+ * Requires:
+ * DCD initialized.
+ * pszZlDllName is non-NULL.
+ * phDcdMgr is non-NULL.
+ * Ensures:
+ * A DCD manager handle is created.
+ */
+extern int dcd_create_manager(IN char *pszZlDllName,
+ OUT struct dcd_manager **phDcdMgr);
+
+/*
+ * ======== dcd_destroy_manager ========
+ * Purpose:
+ * This function destroys a DCD module manager.
+ * Parameters:
+ * hdcd_mgr: A DCD manager handle.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid DCD manager handle.
+ * Requires:
+ * DCD initialized.
+ * Ensures:
+ */
+extern int dcd_destroy_manager(IN struct dcd_manager *hdcd_mgr);
+
+/*
+ * ======== dcd_enumerate_object ========
+ * Purpose:
+ * This function enumerates currently visible DSP/BIOS Bridge objects
+ * and returns the UUID and type of each enumerated object.
+ * Parameters:
+ * cIndex: The object enumeration index.
+ * obj_type: Type of object to enumerate.
+ * uuid_obj: Pointer to a dsp_uuid object.
+ * Returns:
+ * 0: Success.
+ * -EPERM: Unable to enumerate through the DCD database.
+ * ENODATA: Enumeration completed. This is not an error code.
+ * Requires:
+ * DCD initialized.
+ * uuid_obj is a valid pointer.
+ * Ensures:
+ * Details:
+ * This function can be used in conjunction with dcd_get_object_def to
+ * retrieve object properties.
+ */
+extern int dcd_enumerate_object(IN s32 cIndex,
+ IN enum dsp_dcdobjtype obj_type,
+ OUT struct dsp_uuid *uuid_obj);
+
+/*
+ * ======== dcd_exit ========
+ * Purpose:
+ * This function cleans up the DCD module.
+ * Parameters:
+ * Returns:
+ * Requires:
+ * DCD initialized.
+ * Ensures:
+ */
+extern void dcd_exit(void);
+
+/*
+ * ======== dcd_get_dep_libs ========
+ * Purpose:
+ * Given the uuid of a library and size of array of uuids, this function
+ * fills the array with the uuids of all dependent libraries of the input
+ * library.
+ * Parameters:
+ * hdcd_mgr: A DCD manager handle.
+ * uuid_obj: Pointer to a dsp_uuid for a library.
+ * numLibs: Size of uuid array (number of library uuids).
+ * pDepLibUuids: Array of dependent library uuids to be filled in.
+ * pPersistentDepLibs: Array indicating if corresponding lib is persistent.
+ * phase: phase to obtain correct input library
+ * Returns:
+ * 0: Success.
+ * -ENOMEM: Memory allocation failure.
+ * -EACCES: Failure to read section containing library info.
+ * -EPERM: General failure.
+ * Requires:
+ * DCD initialized.
+ * Valid hdcd_mgr.
+ * uuid_obj != NULL
+ * pDepLibUuids != NULL.
+ * Ensures:
+ */
+extern int dcd_get_dep_libs(IN struct dcd_manager *hdcd_mgr,
+ IN struct dsp_uuid *uuid_obj,
+ u16 numLibs,
+ OUT struct dsp_uuid *pDepLibUuids,
+ OUT bool *pPersistentDepLibs,
+ IN enum nldr_phase phase);
+
+/*
+ * ======== dcd_get_num_dep_libs ========
+ * Purpose:
+ * Given the uuid of a library, determine its number of dependent
+ * libraries.
+ * Parameters:
+ * hdcd_mgr: A DCD manager handle.
+ * uuid_obj: Pointer to a dsp_uuid for a library.
+ * pNumLibs: Size of uuid array (number of library uuids).
+ * pNumPersLibs: number of persistent dependent library.
+ * phase: Phase to obtain correct input library
+ * Returns:
+ * 0: Success.
+ * -ENOMEM: Memory allocation failure.
+ * -EACCES: Failure to read section containing library info.
+ * -EPERM: General failure.
+ * Requires:
+ * DCD initialized.
+ * Valid hdcd_mgr.
+ * uuid_obj != NULL
+ * pNumLibs != NULL.
+ * Ensures:
+ */
+extern int dcd_get_num_dep_libs(IN struct dcd_manager *hdcd_mgr,
+ IN struct dsp_uuid *uuid_obj,
+ OUT u16 *pNumLibs,
+ OUT u16 *pNumPersLibs,
+ IN enum nldr_phase phase);
+
+/*
+ * ======== dcd_get_library_name ========
+ * Purpose:
+ * This function returns the name of a (dynamic) library for a given
+ * UUID.
+ * Parameters:
+ * hdcd_mgr: A DCD manager handle.
+ * uuid_obj: Pointer to a dsp_uuid that represents a unique DSP/BIOS
+ * Bridge object.
+ * pstrLibName: Buffer to hold library name.
+ * pdwSize: Contains buffer size. Set to string size on output.
+ * phase: Which phase to load
+ * phase_split: Are phases in multiple libraries
+ * Returns:
+ * 0: Success.
+ * -EPERM: General failure.
+ * Requires:
+ * DCD initialized.
+ * Valid hdcd_mgr.
+ * pstrLibName != NULL.
+ * uuid_obj != NULL
+ * pdwSize != NULL.
+ * Ensures:
+ */
+extern int dcd_get_library_name(IN struct dcd_manager *hdcd_mgr,
+ IN struct dsp_uuid *uuid_obj,
+ IN OUT char *pstrLibName,
+ IN OUT u32 *pdwSize,
+ IN enum nldr_phase phase,
+ OUT bool *phase_split);
+
+/*
+ * ======== dcd_get_object_def ========
+ * Purpose:
+ * This function returns the properties/attributes of a DSP/BIOS Bridge
+ * object.
+ * Parameters:
+ * hdcd_mgr: A DCD manager handle.
+ * uuid_obj: Pointer to a dsp_uuid that represents a unique
+ * DSP/BIOS Bridge object.
+ * obj_type: The type of DSP/BIOS Bridge object to be
+ * referenced (node, processor, etc).
+ * pObjDef: Pointer to an object definition structure. A
+ * union of various possible DCD object types.
+ * Returns:
+ * 0: Success.
+ * -EACCES: Unable to access/read/parse/load content of object code
+ * section.
+ * -EPERM: General failure.
+ * -EFAULT: Invalid DCD_HMANAGER handle.
+ * Requires:
+ * DCD initialized.
+ * pObjUuid is non-NULL.
+ * pObjDef is non-NULL.
+ * Ensures:
+ */
+extern int dcd_get_object_def(IN struct dcd_manager *hdcd_mgr,
+ IN struct dsp_uuid *pObjUuid,
+ IN enum dsp_dcdobjtype obj_type,
+ OUT struct dcd_genericobj *pObjDef);
+
+/*
+ * ======== dcd_get_objects ========
+ * Purpose:
+ * This function finds all DCD objects specified in a special
+ * COFF section called ".dcd_register", and for each object,
+ * call a "register" function. The "register" function may perform
+ * various actions, such as 1) register nodes in the node database, 2)
+ * unregister nodes from the node database, and 3) add overlay nodes.
+ * Parameters:
+ * hdcd_mgr: A DCD manager handle.
+ * pszCoffPath: Pointer to name of COFF file containing DCD
+ * objects.
+ * registerFxn: Callback fxn to be applied on each located
+ * DCD object.
+ * handle: Handle to pass to callback.
+ * Returns:
+ * 0: Success.
+ * -EACCES: Unable to access/read/parse/load content of object code
+ * section.
+ * -EFAULT: Invalid DCD_HMANAGER handle.
+ * Requires:
+ * DCD initialized.
+ * Ensures:
+ * Note:
+ * Due to the DCD database construction, it is essential for a DCD-enabled
+ * COFF file to contain the right COFF sections, especially
+ * ".dcd_register", which is used for auto registration.
+ */
+extern int dcd_get_objects(IN struct dcd_manager *hdcd_mgr,
+ IN char *pszCoffPath,
+ dcd_registerfxn registerFxn, void *handle);
+
+/*
+ * ======== dcd_init ========
+ * Purpose:
+ * This function initializes DCD.
+ * Parameters:
+ * Returns:
+ * FALSE: Initialization failed.
+ * TRUE: Initialization succeeded.
+ * Requires:
+ * Ensures:
+ * DCD initialized.
+ */
+extern bool dcd_init(void);
+
+/*
+ * ======== dcd_register_object ========
+ * Purpose:
+ * This function registers a DSP/BIOS Bridge object in the DCD database.
+ * Parameters:
+ * uuid_obj: Pointer to a dsp_uuid that identifies a DSP/BIOS
+ * Bridge object.
+ * obj_type: Type of object.
+ * psz_path_name: Path to the object's COFF file.
+ * Returns:
+ * 0: Success.
+ * -EPERM: Failed to register object.
+ * Requires:
+ * DCD initialized.
+ * uuid_obj and szPathName are non-NULL values.
+ * obj_type is a valid type value.
+ * Ensures:
+ */
+extern int dcd_register_object(IN struct dsp_uuid *uuid_obj,
+ IN enum dsp_dcdobjtype obj_type,
+ IN char *psz_path_name);
+
+/*
+ * ======== dcd_unregister_object ========
+ * Purpose:
+ * This function de-registers a valid DSP/BIOS Bridge object from the DCD
+ * database.
+ * Parameters:
+ * uuid_obj: Pointer to a dsp_uuid that identifies a DSP/BIOS Bridge
+ * object.
+ * obj_type: Type of object.
+ * Returns:
+ * 0: Success.
+ * -EPERM: Unable to de-register the specified object.
+ * Requires:
+ * DCD initialized.
+ * uuid_obj is a non-NULL value.
+ * obj_type is a valid type value.
+ * Ensures:
+ */
+extern int dcd_unregister_object(IN struct dsp_uuid *uuid_obj,
+ IN enum dsp_dcdobjtype obj_type);
+
+#endif /* DBDCD_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/dbdcddef.h b/drivers/staging/tidspbridge/include/dspbridge/dbdcddef.h
new file mode 100644
index 0000000..47afc82
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/dbdcddef.h
@@ -0,0 +1,78 @@
+/*
+ * dbdcddef.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * DCD (DSP/BIOS Bridge Configuration Database) constants and types.
+ *
+ * Copyright (C) 2008 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef DBDCDDEF_
+#define DBDCDDEF_
+
+#include <dspbridge/dbdefs.h>
+#include <dspbridge/mgrpriv.h> /* for mgr_processorextinfo */
+
+/*
+ * The following defines are critical elements for the DCD module:
+ *
+ * - DCD_REGKEY enables DCD functions to locate registered DCD objects.
+ * - DCD_REGISTER_SECTION identifies the COFF section where the UUID of
+ * registered DCD objects are stored.
+ */
+#define DCD_REGKEY "Software\\TexasInstruments\\DspBridge\\DCD"
+#define DCD_REGISTER_SECTION ".dcd_register"
+
+#define DCD_MAXPATHLENGTH 255
+
+/* DCD Manager Object */
+struct dcd_manager;
+
+struct dcd_key_elem {
+ struct list_head link; /* Make it linked to a list */
+ char name[DCD_MAXPATHLENGTH]; /* Name of a given value entry */
+ char *path; /* Pointer to the actual data */
+};
+
+/* DCD Node Properties */
+struct dcd_nodeprops {
+ struct dsp_ndbprops ndb_props;
+ u32 msg_segid;
+ u32 msg_notify_type;
+ char *pstr_create_phase_fxn;
+ char *pstr_delete_phase_fxn;
+ char *pstr_execute_phase_fxn;
+ char *pstr_i_alg_name;
+
+ /* Dynamic load properties */
+ u16 us_load_type; /* Static, dynamic, overlay */
+ u32 ul_data_mem_seg_mask; /* Data memory requirements */
+ u32 ul_code_mem_seg_mask; /* Code memory requirements */
+};
+
+/* DCD Generic Object Type */
+struct dcd_genericobj {
+ union dcdObjUnion {
+ struct dcd_nodeprops node_obj; /* node object. */
+ /* processor object. */
+ struct dsp_processorinfo proc_info;
+ /* extended proc object (private) */
+ struct mgr_processorextinfo ext_proc_obj;
+ } obj_data;
+};
+
+/* DCD Internal Callback Type */
+typedef int(*dcd_registerfxn) (IN struct dsp_uuid *uuid_obj,
+ IN enum dsp_dcdobjtype obj_type,
+ IN void *handle);
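+
+/*
+ * Sketch of a register callback matching dcd_registerfxn (illustrative
+ * only; my_register_node() is a hypothetical helper, not part of this
+ * driver):
+ *
+ *	static int my_register(IN struct dsp_uuid *uuid_obj,
+ *			       IN enum dsp_dcdobjtype obj_type,
+ *			       IN void *handle)
+ *	{
+ *		if (obj_type == DSP_DCDNODETYPE)
+ *			return my_register_node(handle, uuid_obj);
+ *		return 0;
+ *	}
+ *
+ * dcd_get_objects() invokes the supplied callback once for every DCD
+ * object found in the ".dcd_register" COFF section.
+ */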
+
+#endif /* DBDCDDEF_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/dbdefs.h b/drivers/staging/tidspbridge/include/dspbridge/dbdefs.h
new file mode 100644
index 0000000..aba8a86
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/dbdefs.h
@@ -0,0 +1,546 @@
+/*
+ * dbdefs.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Global definitions and constants for DSP/BIOS Bridge.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef DBDEFS_
+#define DBDEFS_
+
+#include <linux/types.h>
+
+#include <dspbridge/dbtype.h> /* GPP side type definitions */
+#include <dspbridge/std.h> /* DSP/BIOS type definitions */
+#include <dspbridge/rms_sh.h> /* Types shared between GPP and DSP */
+
+#define PG_SIZE4K 4096
+#define PG_MASK(pg_size) (~((pg_size)-1))
+#define PG_ALIGN_LOW(addr, pg_size) ((addr) & PG_MASK(pg_size))
+#define PG_ALIGN_HIGH(addr, pg_size) (((addr)+(pg_size)-1) & PG_MASK(pg_size))
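+
+/*
+ * For example, with the 4 KB page size used here:
+ *	PG_ALIGN_LOW(0x12345, PG_SIZE4K)  == 0x12000
+ *	PG_ALIGN_HIGH(0x12345, PG_SIZE4K) == 0x13000
+ */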
+
+/* API return value and calling convention */
+#define DBAPI int
+
+/* Infinite time value for the utimeout parameter to DSPStream_Select() */
+#define DSP_FOREVER (-1)
+
+/* Maximum length of node name, used in dsp_ndbprops */
+#define DSP_MAXNAMELEN 32
+
+/* notify_type values for the RegisterNotify() functions. */
+#define DSP_SIGNALEVENT 0x00000001
+
+/* Types of events for processors */
+#define DSP_PROCESSORSTATECHANGE 0x00000001
+#define DSP_PROCESSORATTACH 0x00000002
+#define DSP_PROCESSORDETACH 0x00000004
+#define DSP_PROCESSORRESTART 0x00000008
+
+/* DSP exception events (DSP/BIOS and DSP MMU fault) */
+#define DSP_MMUFAULT 0x00000010
+#define DSP_SYSERROR 0x00000020
+#define DSP_EXCEPTIONABORT 0x00000300
+#define DSP_PWRERROR 0x00000080
+#define DSP_WDTOVERFLOW 0x00000040
+
+/* IVA exception events (IVA MMU fault) */
+#define IVA_MMUFAULT 0x00000040
+/* Types of events for nodes */
+#define DSP_NODESTATECHANGE 0x00000100
+#define DSP_NODEMESSAGEREADY 0x00000200
+
+/* Types of events for streams */
+#define DSP_STREAMDONE 0x00001000
+#define DSP_STREAMIOCOMPLETION 0x00002000
+
+/* Handle definition representing the GPP node in DSPNode_Connect() calls */
+#define DSP_HGPPNODE 0xFFFFFFFF
+
+/* Node directions used in DSPNode_Connect() */
+#define DSP_TONODE 1
+#define DSP_FROMNODE 2
+
+/* Define Node Minimum and Maximum Priorities */
+#define DSP_NODE_MIN_PRIORITY 1
+#define DSP_NODE_MAX_PRIORITY 15
+
+/* Pre-Defined Message Command Codes available to user: */
+#define DSP_RMSUSERCODESTART RMS_USER /* Start of RMS user cmd codes */
+/* end of user codes */
+#define DSP_RMSUSERCODEEND (RMS_USER + RMS_MAXUSERCODES)
+/* msg_ctrl contains SM buffer description */
+#define DSP_RMSBUFDESC RMS_BUFDESC
+
+/* Shared memory identifier for MEM segment named "SHMSEG0" */
+#define DSP_SHMSEG0 (u32)(-1)
+
+/* Processor ID numbers */
+#define DSP_UNIT 0
+#define IVA_UNIT 1
+
+#define DSPWORD unsigned char
+#define DSPWORDSIZE sizeof(DSPWORD)
+
+/* Success & Failure macros */
+#define DSP_SUCCEEDED(Status) likely((s32)(Status) >= 0)
+#define DSP_FAILED(Status) unlikely((s32)(Status) < 0)
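+
+/*
+ * Example (illustrative; some_bridge_call() is a placeholder for any
+ * routine returning 0 or a negative -Exxx code):
+ *
+ *	status = some_bridge_call();
+ *	if (DSP_FAILED(status))
+ *		return status;
+ */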
+
+/* Power control enumerations */
+#define PROC_PWRCONTROL 0x8070
+
+#define PROC_PWRMGT_ENABLE (PROC_PWRCONTROL + 0x3)
+#define PROC_PWRMGT_DISABLE (PROC_PWRCONTROL + 0x4)
+
+/* Bridge Code Version */
+#define BRIDGE_VERSION_CODE 333
+
+#define MAX_PROFILES 16
+
+/* DSP chip type */
+#define DSPTYPE64 0x99
+
+/* Handy Macros */
+#define IS_VALID_PROC_EVENT(x) (((x) == 0) || (((x) & \
+ (DSP_PROCESSORSTATECHANGE | \
+ DSP_PROCESSORATTACH | \
+ DSP_PROCESSORDETACH | \
+ DSP_PROCESSORRESTART | \
+ DSP_NODESTATECHANGE | \
+ DSP_STREAMDONE | \
+ DSP_STREAMIOCOMPLETION | \
+ DSP_MMUFAULT | \
+ DSP_SYSERROR | \
+ DSP_WDTOVERFLOW | \
+ DSP_PWRERROR)) && \
+ !((x) & ~(DSP_PROCESSORSTATECHANGE | \
+ DSP_PROCESSORATTACH | \
+ DSP_PROCESSORDETACH | \
+ DSP_PROCESSORRESTART | \
+ DSP_NODESTATECHANGE | \
+ DSP_STREAMDONE | \
+ DSP_STREAMIOCOMPLETION | \
+ DSP_MMUFAULT | \
+ DSP_SYSERROR | \
+ DSP_WDTOVERFLOW | \
+ DSP_PWRERROR))))
+
+#define IS_VALID_NODE_EVENT(x) (((x) == 0) || \
+ (((x) & (DSP_NODESTATECHANGE | DSP_NODEMESSAGEREADY)) && \
+ !((x) & ~(DSP_NODESTATECHANGE | DSP_NODEMESSAGEREADY))))
+
+#define IS_VALID_STRM_EVENT(x) (((x) == 0) || (((x) & (DSP_STREAMDONE | \
+ DSP_STREAMIOCOMPLETION)) && \
+ !((x) & ~(DSP_STREAMDONE | \
+ DSP_STREAMIOCOMPLETION))))
+
+#define IS_VALID_NOTIFY_MASK(x) ((x) & DSP_SIGNALEVENT)
+
+/* The Node UUID structure */
+struct dsp_uuid {
+ u32 ul_data1;
+ u16 us_data2;
+ u16 us_data3;
+ u8 uc_data4;
+ u8 uc_data5;
+ u8 uc_data6[6];
+};
+
+/* DCD types */
+enum dsp_dcdobjtype {
+ DSP_DCDNODETYPE,
+ DSP_DCDPROCESSORTYPE,
+ DSP_DCDLIBRARYTYPE,
+ DSP_DCDCREATELIBTYPE,
+ DSP_DCDEXECUTELIBTYPE,
+ DSP_DCDDELETELIBTYPE,
+ /* DSP_DCDMAXOBJTYPE is meant to be the last DCD object type */
+ DSP_DCDMAXOBJTYPE
+};
+
+/* Processor states */
+enum dsp_procstate {
+ PROC_STOPPED,
+ PROC_LOADED,
+ PROC_RUNNING,
+ PROC_ERROR
+};
+
+/*
+ * Node types: Message node, task node, xDAIS socket node, and
+ * device node. _NODE_GPP is used when defining a stream connection
+ * between a task or socket node and the GPP.
+ *
+ */
+enum node_type {
+ NODE_DEVICE,
+ NODE_TASK,
+ NODE_DAISSOCKET,
+ NODE_MESSAGE,
+ NODE_GPP
+};
+
+/*
+ * ======== node_state ========
+ * Internal node states.
+ */
+enum node_state {
+ NODE_ALLOCATED,
+ NODE_CREATED,
+ NODE_RUNNING,
+ NODE_PAUSED,
+ NODE_DONE,
+ NODE_CREATING,
+ NODE_STARTING,
+ NODE_PAUSING,
+ NODE_TERMINATING,
+ NODE_DELETING,
+};
+
+/* Stream states */
+enum dsp_streamstate {
+ STREAM_IDLE,
+ STREAM_READY,
+ STREAM_PENDING,
+ STREAM_DONE
+};
+
+/* Stream connect types */
+enum dsp_connecttype {
+ CONNECTTYPE_NODEOUTPUT,
+ CONNECTTYPE_GPPOUTPUT,
+ CONNECTTYPE_NODEINPUT,
+ CONNECTTYPE_GPPINPUT
+};
+
+/* Stream mode types */
+enum dsp_strmmode {
+ STRMMODE_PROCCOPY, /* Processor(s) copy stream data payloads */
+ STRMMODE_ZEROCOPY, /* Strm buffer ptrs swapped no data copied */
+ STRMMODE_LDMA, /* Local DMA : OMAP's System-DMA device */
+ STRMMODE_RDMA /* Remote DMA: OMAP's DSP-DMA device */
+};
+
+/* Resource Types */
+enum dsp_resourceinfotype {
+ DSP_RESOURCE_DYNDARAM = 0,
+ DSP_RESOURCE_DYNSARAM,
+ DSP_RESOURCE_DYNEXTERNAL,
+ DSP_RESOURCE_DYNSRAM,
+ DSP_RESOURCE_PROCLOAD
+};
+
+/* Memory Segment Types */
+enum dsp_memtype {
+ DSP_DYNDARAM = 0,
+ DSP_DYNSARAM,
+ DSP_DYNEXTERNAL,
+ DSP_DYNSRAM
+};
+
+/* Memory Flush Types */
+enum dsp_flushtype {
+ PROC_INVALIDATE_MEM = 0,
+ PROC_WRITEBACK_MEM,
+ PROC_WRITEBACK_INVALIDATE_MEM,
+};
+
+/* Memory Segment Status Values */
+struct dsp_memstat {
+ u32 ul_size;
+ u32 ul_total_free_size;
+ u32 ul_len_max_free_block;
+ u32 ul_num_free_blocks;
+ u32 ul_num_alloc_blocks;
+};
+
+/* Processor Load information Values */
+struct dsp_procloadstat {
+ u32 curr_load;
+ u32 predicted_load;
+ u32 curr_dsp_freq;
+ u32 predicted_freq;
+};
+
+/* Attributes for STRM connections between nodes */
+struct dsp_strmattr {
+ u32 seg_id; /* Memory segment on DSP to allocate buffers */
+ u32 buf_size; /* Buffer size (DSP words) */
+ u32 num_bufs; /* Number of buffers */
+ u32 buf_alignment; /* Buffer alignment */
+ u32 utimeout; /* Timeout for blocking STRM calls */
+ enum dsp_strmmode strm_mode; /* mode of stream when opened */
+ /* DMA chnl id if dsp_strmmode is LDMA or RDMA */
+ u32 udma_chnl_id;
+ u32 udma_priority; /* DMA channel priority 0=lowest, >0=high */
+};
+
+/* The dsp_cbdata structure */
+struct dsp_cbdata {
+ u32 cb_data;
+ u8 node_data[1];
+};
+
+/* The dsp_msg structure */
+struct dsp_msg {
+ u32 dw_cmd;
+ u32 dw_arg1;
+ u32 dw_arg2;
+};
+
+/* The dsp_resourcereqmts structure for node's resource requirements */
+struct dsp_resourcereqmts {
+ u32 cb_struct;
+ u32 static_data_size;
+ u32 global_data_size;
+ u32 program_mem_size;
+ u32 uwc_execution_time;
+ u32 uwc_period;
+ u32 uwc_deadline;
+ u32 avg_exection_time;
+ u32 minimum_period;
+};
+
+/*
+ * The dsp_streamconnect structure describes a stream connection
+ * between two nodes, or between a node and the GPP
+ */
+struct dsp_streamconnect {
+ u32 cb_struct;
+ enum dsp_connecttype connect_type;
+ u32 this_node_stream_index;
+ void *connected_node;
+ struct dsp_uuid ui_connected_node_id;
+ u32 connected_node_stream_index;
+};
+
+struct dsp_nodeprofs {
+ u32 ul_heap_size;
+};
+
+/* The dsp_ndbprops structure reports the attributes of a node */
+struct dsp_ndbprops {
+ u32 cb_struct;
+ struct dsp_uuid ui_node_id;
+ char ac_name[DSP_MAXNAMELEN];
+ enum node_type ntype;
+ u32 cache_on_gpp;
+ struct dsp_resourcereqmts dsp_resource_reqmts;
+ s32 prio;
+ u32 stack_size;
+ u32 sys_stack_size;
+ u32 stack_seg;
+ u32 message_depth;
+ u32 num_input_streams;
+ u32 num_output_streams;
+ u32 utimeout;
+ u32 count_profiles; /* Number of supported profiles */
+ /* Array of profiles */
+ struct dsp_nodeprofs node_profiles[MAX_PROFILES];
+ u32 stack_seg_name; /* Stack Segment Name */
+};
+
+ /* The dsp_nodeattrin structure describes the attributes of a
+ * node client */
+struct dsp_nodeattrin {
+ u32 cb_struct;
+ s32 prio;
+ u32 utimeout;
+ u32 profile_id;
+ /* Reserved, for Bridge Internal use only */
+ u32 heap_size;
+ void *pgpp_virt_addr; /* Reserved, for Bridge Internal use only */
+};
+
+ /* The dsp_nodeinfo structure is used to retrieve information
+ * about a node */
+struct dsp_nodeinfo {
+ u32 cb_struct;
+ struct dsp_ndbprops nb_node_database_props;
+ u32 execution_priority;
+ enum node_state ns_execution_state;
+ void *device_owner;
+ u32 number_streams;
+ struct dsp_streamconnect sc_stream_connection[16];
+ u32 node_env;
+};
+
+ /* The dsp_nodeattr structure describes the attributes of a node */
+struct dsp_nodeattr {
+ u32 cb_struct;
+ struct dsp_nodeattrin in_node_attr_in;
+ u32 node_attr_inputs;
+ u32 node_attr_outputs;
+ struct dsp_nodeinfo node_info;
+};
+
+/*
+ * Notification type: either the name of an opened event, or an event or
+ * window handle.
+ */
+struct dsp_notification {
+ char *ps_name;
+ void *handle;
+};
+
+/* The dsp_processorattrin structure describes the attributes of a processor */
+struct dsp_processorattrin {
+ u32 cb_struct;
+ u32 utimeout;
+};
+/*
+ * The dsp_processorinfo structure describes basic capabilities of a
+ * DSP processor
+ */
+struct dsp_processorinfo {
+ u32 cb_struct;
+ int processor_family;
+ int processor_type;
+ u32 clock_rate;
+ u32 ul_internal_mem_size;
+ u32 ul_external_mem_size;
+ u32 processor_id;
+ int ty_running_rtos;
+ s32 node_min_priority;
+ s32 node_max_priority;
+};
+
+/* Error information of last DSP exception signalled to the GPP */
+struct dsp_errorinfo {
+ u32 dw_err_mask;
+ u32 dw_val1;
+ u32 dw_val2;
+ u32 dw_val3;
+};
+
+/* The dsp_processorstate structure describes the state of a DSP processor */
+struct dsp_processorstate {
+ u32 cb_struct;
+ enum dsp_procstate proc_state;
+ struct dsp_errorinfo err_info;
+};
+
+/*
+ * The dsp_resourceinfo structure is used to retrieve information about a
+ * processor's resources
+ */
+struct dsp_resourceinfo {
+ u32 cb_struct;
+ enum dsp_resourceinfotype resource_type;
+ union {
+ u32 ul_resource;
+ struct dsp_memstat mem_stat;
+ struct dsp_procloadstat proc_load_stat;
+ } result;
+};
+
+/*
+ * The dsp_streamattrin structure describes the attributes of a stream,
+ * including segment and alignment of data buffers allocated with
+ * DSPStream_AllocateBuffers(), if applicable
+ */
+struct dsp_streamattrin {
+ u32 cb_struct;
+ u32 utimeout;
+ u32 segment_id;
+ u32 buf_alignment;
+ u32 num_bufs;
+ enum dsp_strmmode strm_mode;
+ u32 udma_chnl_id;
+ u32 udma_priority;
+};
+
+/* The dsp_bufferattr structure describes the attributes of a data buffer */
+struct dsp_bufferattr {
+ u32 cb_struct;
+ u32 segment_id;
+ u32 buf_alignment;
+};
+
+/*
+ * The dsp_streaminfo structure is used to retrieve information
+ * about a stream.
+ */
+struct dsp_streaminfo {
+ u32 cb_struct;
+ u32 number_bufs_allowed;
+ u32 number_bufs_in_stream;
+ u32 ul_number_bytes;
+ void *sync_object_handle;
+ enum dsp_streamstate ss_stream_state;
+};
+
+/*
+ * DMM MAP attributes
+ * It is a bit mask with each bit value indicating a specific attribute
+ * bit 0 - GPP address type (user virtual=0, physical=1)
+ * bit 1 - MMU Endianism (Big Endian=1, Little Endian=0)
+ * bit 2 - MMU mixed page attribute (Mixed/CPUES=1, TLBES=0)
+ * bit 3 - MMU element size = 8bit (valid only for non mixed page entries)
+ * bit 4 - MMU element size = 16bit (valid only for non mixed page entries)
+ * bit 5 - MMU element size = 32bit (valid only for non mixed page entries)
+ * bit 6 - MMU element size = 64bit (valid only for non mixed page entries)
+ *
+ * bit 14 - Input (read only) buffer
+ * bit 15 - Output (writeable) buffer
+ */
+
+/* Types of mapping attributes */
+
+/* MPU address is virtual and needs to be translated to physical addr */
+#define DSP_MAPVIRTUALADDR 0x00000000
+#define DSP_MAPPHYSICALADDR 0x00000001
+
+/* Mapped data is big endian */
+#define DSP_MAPBIGENDIAN 0x00000002
+#define DSP_MAPLITTLEENDIAN 0x00000000
+
+/* Element size is based on DSP r/w access size */
+#define DSP_MAPMIXEDELEMSIZE 0x00000004
+
+/*
+ * Element size for MMU mapping (8, 16, 32, or 64 bit)
+ * Ignored if DSP_MAPMIXEDELEMSIZE enabled
+ */
+#define DSP_MAPELEMSIZE8 0x00000008
+#define DSP_MAPELEMSIZE16 0x00000010
+#define DSP_MAPELEMSIZE32 0x00000020
+#define DSP_MAPELEMSIZE64 0x00000040
+
+#define DSP_MAPVMALLOCADDR 0x00000080
+
+#define DSP_MAPDONOTLOCK 0x00000100
+
+#define DSP_MAP_DIR_MASK 0x3FFF
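+
+/*
+ * Example (illustrative): attributes for mapping a user virtual,
+ * little-endian buffer with a 32-bit element size would be composed as:
+ *
+ *	u32 map_attrs = DSP_MAPVIRTUALADDR | DSP_MAPLITTLEENDIAN |
+ *			DSP_MAPELEMSIZE32;
+ */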
+
+#define GEM_CACHE_LINE_SIZE 128
+#define GEM_L1P_PREFETCH_SIZE 128
+
+/*
+ * Definitions from dbreg.h
+ */
+
+#define DSPPROCTYPE_C64 6410
+#define IVAPROCTYPE_ARM7 470
+
+#define REG_MGR_OBJECT 1
+#define REG_DRV_OBJECT 2
+
+/* registry */
+#define DRVOBJECT "DrvObject"
+#define MGROBJECT "MgrObject"
+
+/* Max registry path length. Also the max registry value length. */
+#define MAXREGPATHLENGTH 255
+
+#endif /* DBDEFS_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/dbldefs.h b/drivers/staging/tidspbridge/include/dspbridge/dbldefs.h
new file mode 100644
index 0000000..a47e7b8
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/dbldefs.h
@@ -0,0 +1,140 @@
+/*
+ * dbldefs.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef DBLDEFS_
+#define DBLDEFS_
+
+/*
+ * Bit masks for dbl_flags.
+ */
+#define DBL_NOLOAD 0x0 /* Don't load symbols, code, or data */
+#define DBL_SYMB 0x1 /* load symbols */
+#define DBL_CODE 0x2 /* load code */
+#define DBL_DATA 0x4 /* load data */
+#define DBL_DYNAMIC 0x8 /* dynamic load */
+#define DBL_BSS 0x20 /* Uninitialized section */
+
+#define DBL_MAXPATHLENGTH 255
+
+/*
+ * ======== dbl_flags ========
+ * Specifies whether to load code, data, or symbols
+ */
+typedef s32 dbl_flags;
+
+/*
+ * ======== dbl_sect_info ========
+ * For collecting info on overlay sections
+ */
+struct dbl_sect_info {
+ const char *name; /* name of section */
+ u32 sect_run_addr; /* run address of section */
+ u32 sect_load_addr; /* load address of section */
+ u32 size; /* size of section (target MAUs) */
+ dbl_flags type; /* Code, data, or BSS */
+};
+
+/*
+ * ======== dbl_symbol ========
+ * (Needed for dynamic load library)
+ */
+struct dbl_symbol {
+ u32 value;
+};
+
+/*
+ * ======== dbl_alloc_fxn ========
+ * Allocate memory function. Allocate or reserve (if reserved == TRUE)
+ * "size" bytes of memory from segment "space" and return the address in
+ * *dspAddr (or starting at *dspAddr if reserved == TRUE). Returns 0 on
+ * success, or an error code on failure.
+ */
+typedef s32(*dbl_alloc_fxn) (void *hdl, s32 space, u32 size, u32 align,
+ u32 *dspAddr, s32 seg_id, s32 req, bool reserved);
+
+/*
+ * ======== dbl_free_fxn ========
+ * Free memory function. Free, or unreserve (if reserved == TRUE) "size"
+ * bytes of memory from segment "space"
+ */
+typedef bool(*dbl_free_fxn) (void *hdl, u32 addr, s32 space, u32 size,
+ bool reserved);
+
+/*
+ * ======== dbl_log_write_fxn ========
+ * Function to call when writing data from a section, to log the info.
+ * Can be NULL if no logging is required.
+ */
+typedef int(*dbl_log_write_fxn) (void *handle,
+ struct dbl_sect_info *sect, u32 addr,
+ u32 bytes);
+
+/*
+ * ======== dbl_sym_lookup ========
+ * Symbol lookup function - Find the symbol name and return its value.
+ *
+ * Parameters:
+ * handle - Opaque handle
+ * parg - Opaque argument.
+ * name - Name of symbol to lookup.
+ * sym - Location to store address of symbol structure.
+ *
+ * Returns:
+ * TRUE: Success (symbol was found).
+ * FALSE: Failed to find symbol.
+ */
+typedef bool(*dbl_sym_lookup) (void *handle, void *parg, void *rmm_handle,
+ const char *name, struct dbl_symbol ** sym);
+
+/*
+ * ======== dbl_write_fxn ========
+ * Write memory function. Write "n" HOST bytes of memory to segment "mtype"
+ * starting at address "dspAddr" from the buffer "buf". The buffer is
+ * formatted as an array of words appropriate for the DSP.
+ */
+typedef s32(*dbl_write_fxn) (void *hdl, u32 dspAddr, void *buf,
+ u32 n, s32 mtype);
+
+/*
+ * ======== dbl_attrs ========
+ */
+struct dbl_attrs {
+ dbl_alloc_fxn alloc;
+ dbl_free_fxn free;
+ void *rmm_handle; /* Handle to pass to alloc, free functions */
+ dbl_write_fxn write;
+ void *input_params; /* Handle to pass to write, cinit function */
+
+ dbl_log_write_fxn log_write;
+ void *log_write_handle;
+
+ /* Symbol matching function and handle to pass to it */
+ dbl_sym_lookup sym_lookup;
+ void *sym_handle;
+ void *sym_arg;
+
+ /*
+ * These file manipulation functions should be compatible with the
+ * "C" run time library functions of the same name.
+ */
+ s32(*fread) (void *, size_t, size_t, void *);
+ s32(*fseek) (void *, long, int);
+ s32(*ftell) (void *);
+ s32(*fclose) (void *);
+ void *(*fopen) (const char *, const char *);
+};
+
+#endif /* DBLDEFS_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/dbll.h b/drivers/staging/tidspbridge/include/dspbridge/dbll.h
new file mode 100644
index 0000000..54c6219
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/dbll.h
@@ -0,0 +1,59 @@
+/*
+ * dbll.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * DSP/BIOS Bridge Dynamic load library module interface. Function header
+ * comments are in the file dblldefs.h.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef DBLL_
+#define DBLL_
+
+#include <dspbridge/dbdefs.h>
+#include <dspbridge/dblldefs.h>
+
+extern bool symbols_reloaded;
+
+extern void dbll_close(struct dbll_library_obj *lib);
+extern int dbll_create(struct dbll_tar_obj **target_obj,
+ struct dbll_attrs *pattrs);
+extern void dbll_delete(struct dbll_tar_obj *target);
+extern void dbll_exit(void);
+extern bool dbll_get_addr(struct dbll_library_obj *lib, char *name,
+ struct dbll_sym_val **ppSym);
+extern void dbll_get_attrs(struct dbll_tar_obj *target,
+ struct dbll_attrs *pattrs);
+extern bool dbll_get_c_addr(struct dbll_library_obj *lib, char *name,
+ struct dbll_sym_val **ppSym);
+extern int dbll_get_sect(struct dbll_library_obj *lib, char *name,
+ u32 *paddr, u32 *psize);
+extern bool dbll_init(void);
+extern int dbll_load(struct dbll_library_obj *lib,
+ dbll_flags flags,
+ struct dbll_attrs *attrs, u32 * pEntry);
+extern int dbll_load_sect(struct dbll_library_obj *lib,
+ char *sectName, struct dbll_attrs *attrs);
+extern int dbll_open(struct dbll_tar_obj *target, char *file,
+ dbll_flags flags, struct dbll_library_obj **pLib);
+extern int dbll_read_sect(struct dbll_library_obj *lib,
+ char *name, char *pbuf, u32 size);
+extern void dbll_set_attrs(struct dbll_tar_obj *target,
+ struct dbll_attrs *pattrs);
+extern void dbll_unload(struct dbll_library_obj *lib, struct dbll_attrs *attrs);
+extern int dbll_unload_sect(struct dbll_library_obj *lib,
+ char *sectName, struct dbll_attrs *attrs);
+bool dbll_find_dsp_symbol(struct dbll_library_obj *zl_lib, u32 address,
+ u32 offset_range, u32 *sym_addr_output, char *name_output);
+
+#endif /* DBLL_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/dblldefs.h b/drivers/staging/tidspbridge/include/dspbridge/dblldefs.h
new file mode 100644
index 0000000..f587106
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/dblldefs.h
@@ -0,0 +1,496 @@
+/*
+ * dblldefs.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef DBLLDEFS_
+#define DBLLDEFS_
+
+/*
+ * Bit masks for dbl_flags.
+ */
+#define DBLL_NOLOAD 0x0 /* Don't load symbols, code, or data */
+#define DBLL_SYMB 0x1 /* load symbols */
+#define DBLL_CODE 0x2 /* load code */
+#define DBLL_DATA 0x4 /* load data */
+#define DBLL_DYNAMIC 0x8 /* dynamic load */
+#define DBLL_BSS 0x20 /* Uninitialized section */
+
+#define DBLL_MAXPATHLENGTH 255
+
+/*
+ * ======== DBLL_Target ========
+ *
+ */
+struct dbll_tar_obj;
+
+/*
+ * ======== dbll_flags ========
+ * Specifies whether to load code, data, or symbols
+ */
+typedef s32 dbll_flags;
+
+/*
+ * ======== DBLL_Library ========
+ *
+ */
+struct dbll_library_obj;
+
+/*
+ * ======== dbll_sect_info ========
+ * For collecting info on overlay sections
+ */
+struct dbll_sect_info {
+ const char *name; /* name of section */
+ u32 sect_run_addr; /* run address of section */
+ u32 sect_load_addr; /* load address of section */
+ u32 size; /* size of section (target MAUs) */
+ dbll_flags type; /* Code, data, or BSS */
+};
+
+/*
+ * ======== dbll_sym_val ========
+ * (Needed for dynamic load library)
+ */
+struct dbll_sym_val {
+ u32 value;
+};
+
+/*
+ * ======== dbll_alloc_fxn ========
+ * Allocate memory function. Allocate or reserve (if reserved == TRUE)
+ * "size" bytes of memory from segment "space" and return the address in
+ * *dspAddr (or starting at *dspAddr if reserved == TRUE). Returns 0 on
+ * success, or an error code on failure.
+ */
+typedef s32(*dbll_alloc_fxn) (void *hdl, s32 space, u32 size, u32 align,
+ u32 *dspAddr, s32 seg_id, s32 req,
+ bool reserved);
+
+/*
+ * ======== dbll_close_fxn ========
+ */
+typedef s32(*dbll_f_close_fxn) (void *);
+
+/*
+ * ======== dbll_free_fxn ========
+ * Free memory function. Free, or unreserve (if reserved == TRUE) "size"
+ * bytes of memory from segment "space"
+ */
+typedef bool(*dbll_free_fxn) (void *hdl, u32 addr, s32 space, u32 size,
+ bool reserved);
+
+/*
+ * ======== dbll_f_open_fxn ========
+ */
+typedef void *(*dbll_f_open_fxn) (const char *, const char *);
+
+/*
+ * ======== dbll_log_write_fxn ========
+ * Function to call when writing data from a section, to log the info.
+ * Can be NULL if no logging is required.
+ */
+typedef int(*dbll_log_write_fxn) (void *handle,
+ struct dbll_sect_info *sect, u32 addr,
+ u32 bytes);
+
+/*
+ * ======== dbll_read_fxn ========
+ */
+typedef s32(*dbll_read_fxn) (void *, size_t, size_t, void *);
+
+/*
+ * ======== dbll_seek_fxn ========
+ */
+typedef s32(*dbll_seek_fxn) (void *, long, int);
+
+/*
+ * ======== dbll_sym_lookup ========
+ * Symbol lookup function - Find the symbol name and return its value.
+ *
+ * Parameters:
+ * handle - Opaque handle
+ * parg - Opaque argument.
+ * name - Name of symbol to lookup.
+ * sym - Location to store address of symbol structure.
+ *
+ * Returns:
+ * TRUE: Success (symbol was found).
+ * FALSE: Failed to find symbol.
+ */
+typedef bool(*dbll_sym_lookup) (void *handle, void *parg, void *rmm_handle,
+ const char *name, struct dbll_sym_val ** sym);
+
+/*
+ * ======== dbll_tell_fxn ========
+ */
+typedef s32(*dbll_tell_fxn) (void *);
+
+/*
+ * ======== dbll_write_fxn ========
+ * Write memory function. Write "n" HOST bytes of memory to segment "mtype"
+ * starting at address "dspAddr" from the buffer "buf". The buffer is
+ * formatted as an array of words appropriate for the DSP.
+ */
+typedef s32(*dbll_write_fxn) (void *hdl, u32 dspAddr, void *buf,
+ u32 n, s32 mtype);
+
+/*
+ * ======== dbll_attrs ========
+ */
+struct dbll_attrs {
+ dbll_alloc_fxn alloc;
+ dbll_free_fxn free;
+ void *rmm_handle; /* Handle to pass to alloc, free functions */
+ dbll_write_fxn write;
+ void *input_params; /* Handle to pass to write, cinit function */
+ bool base_image;
+ dbll_log_write_fxn log_write;
+ void *log_write_handle;
+
+ /* Symbol matching function and handle to pass to it */
+ dbll_sym_lookup sym_lookup;
+ void *sym_handle;
+ void *sym_arg;
+
+ /*
+ * These file manipulation functions should be compatible with the
+ * "C" run time library functions of the same name.
+ */
+ s32(*fread) (void *, size_t, size_t, void *);
+ s32(*fseek) (void *, long, int);
+ s32(*ftell) (void *);
+ s32(*fclose) (void *);
+ void *(*fopen) (const char *, const char *);
+};
+
+/*
+ * ======== dbll_close ========
+ * Close library opened with dbll_open.
+ * Parameters:
+ * lib - Handle returned from dbll_open().
+ * Returns:
+ * Requires:
+ * DBL initialized.
+ * Valid lib.
+ * Ensures:
+ */
+typedef void (*dbll_close_fxn) (struct dbll_library_obj *library);
+
+/*
+ * ======== dbll_create ========
+ * Create a target object, specifying the alloc, free, and write functions.
+ * Parameters:
+ * target_obj - Location to store target handle on output.
+ * pattrs - Attributes.
+ * Returns:
+ * 0: Success.
+ * -ENOMEM: Memory allocation failed.
+ * Requires:
+ * DBL initialized.
+ * pattrs != NULL.
+ * target_obj != NULL;
+ * Ensures:
+ * Success: *target_obj != NULL.
+ * Failure: *target_obj == NULL.
+ */
+typedef int(*dbll_create_fxn) (struct dbll_tar_obj **target_obj,
+ struct dbll_attrs *attrs);
+
+/*
+ * ======== dbll_delete ========
+ * Delete target object and free resources for any loaded libraries.
+ * Parameters:
+ * target - Handle returned from DBLL_Create().
+ * Returns:
+ * Requires:
+ * DBL initialized.
+ * Valid target.
+ * Ensures:
+ */
+typedef void (*dbll_delete_fxn) (struct dbll_tar_obj *target);
+
+/*
+ * ======== dbll_exit ========
+ * Discontinue use of DBL module.
+ * Parameters:
+ * Returns:
+ * Requires:
+ * refs > 0.
+ * Ensures:
+ * refs >= 0.
+ */
+typedef void (*dbll_exit_fxn) (void);
+
+/*
+ * ======== dbll_get_addr ========
+ * Get address of name in the specified library.
+ * Parameters:
+ * lib - Handle returned from dbll_open().
+ * name - Name of symbol
+ * ppSym - Location to store symbol address on output.
+ * Returns:
+ * TRUE: Success.
+ * FALSE: Symbol not found.
+ * Requires:
+ * DBL initialized.
+ * Valid library.
+ * name != NULL.
+ * ppSym != NULL.
+ * Ensures:
+ */
+typedef bool(*dbll_get_addr_fxn) (struct dbll_library_obj *lib, char *name,
+ struct dbll_sym_val **ppSym);
+
+/*
+ * ======== dbll_get_attrs ========
+ * Retrieve the attributes of the target.
+ * Parameters:
+ * target - Handle returned from DBLL_Create().
+ * pattrs - Location to store attributes on output.
+ * Returns:
+ * Requires:
+ * DBL initialized.
+ * Valid target.
+ * pattrs != NULL.
+ * Ensures:
+ */
+typedef void (*dbll_get_attrs_fxn) (struct dbll_tar_obj *target,
+ struct dbll_attrs *attrs);
+
+/*
+ * ======== dbll_get_c_addr ========
+ * Get address of "C" name on the specified library.
+ * Parameters:
+ * lib - Handle returned from dbll_open().
+ * name - Name of symbol
+ * ppSym - Location to store symbol address on output.
+ * Returns:
+ * TRUE: Success.
+ * FALSE: Symbol not found.
+ * Requires:
+ * DBL initialized.
+ * Valid target.
+ * name != NULL.
+ * ppSym != NULL.
+ * Ensures:
+ */
+typedef bool(*dbll_get_c_addr_fxn) (struct dbll_library_obj *lib, char *name,
+ struct dbll_sym_val **ppSym);
+
+/*
+ * ======== dbll_get_sect ========
+ * Get address and size of a named section.
+ * Parameters:
+ * lib - Library handle returned from dbll_open().
+ * name - Name of section.
+ * paddr - Location to store section address on output.
+ * psize - Location to store section size on output.
+ * Returns:
+ * 0: Success.
+ * -ENXIO: Section not found.
+ * Requires:
+ * DBL initialized.
+ * Valid lib.
+ * name != NULL.
+ * paddr != NULL;
+ * psize != NULL.
+ * Ensures:
+ */
+typedef int(*dbll_get_sect_fxn) (struct dbll_library_obj *lib,
+ char *name, u32 * addr, u32 * size);
+
+/*
+ * ======== dbll_init ========
+ * Initialize DBL module.
+ * Parameters:
+ * Returns:
+ * TRUE: Success.
+ * FALSE: Failure.
+ * Requires:
+ * refs >= 0.
+ * Ensures:
+ * Success: refs > 0.
+ * Failure: refs >= 0.
+ */
+typedef bool(*dbll_init_fxn) (void);
+
+/*
+ * ======== dbll_load ========
+ * Load library onto the target.
+ *
+ * Parameters:
+ * lib - Library handle returned from dbll_open().
+ * flags - Load code, data and/or symbols.
+ * attrs - May contain alloc, free, and write function.
+ * pulEntry - Location to store program entry on output.
+ * Returns:
+ * 0: Success.
+ * -EBADF: File read failed.
+ * -EILSEQ: Failure in dynamic loader library.
+ * Requires:
+ * DBL initialized.
+ * Valid lib.
+ * pEntry != NULL.
+ * Ensures:
+ */
+typedef int(*dbll_load_fxn) (struct dbll_library_obj *lib,
+ dbll_flags flags,
+ struct dbll_attrs *attrs, u32 *entry);
+
+/*
+ * ======== dbll_load_sect ========
+ * Load a named section from a library (for overlay support).
+ * Parameters:
+ * lib - Handle returned from dbll_open().
+ * sectName - Name of section to load.
+ * attrs - Contains write function and handle to pass to it.
+ * Returns:
+ * 0: Success.
+ * -ENXIO: Section not found.
+ * -ENOSYS: Function not implemented.
+ * Requires:
+ * Valid lib.
+ * sectName != NULL.
+ * attrs != NULL.
+ * attrs->write != NULL.
+ * Ensures:
+ */
+typedef int(*dbll_load_sect_fxn) (struct dbll_library_obj *lib,
+ char *pszSectName,
+ struct dbll_attrs *attrs);
+
+/*
+ * ======== dbll_open ========
+ * dbll_open() returns a library handle that can be used to load/unload
+ * the symbols/code/data via dbll_load()/dbll_unload().
+ * Parameters:
+ * target - Handle returned from dbll_create().
+ * file - Name of file to open.
+ * flags - If flags & DBLL_SYMB, load symbols.
+ * pLib - Location to store library handle on output.
+ * Returns:
+ * 0: Success.
+ * -ENOMEM: Memory allocation failure.
+ * -EBADF: File open/read failure.
+ * Unable to determine target type.
+ * Requires:
+ * DBL initialized.
+ * Valid target.
+ * file != NULL.
+ * pLib != NULL.
+ * dbll_attrs fopen function non-NULL.
+ * Ensures:
+ * Success: Valid *pLib.
+ * Failure: *pLib == NULL.
+ */
+typedef int(*dbll_open_fxn) (struct dbll_tar_obj *target, char *file,
+ dbll_flags flags,
+ struct dbll_library_obj **pLib);
+
+/*
+ * ======== dbll_read_sect ========
+ * Read COFF section into a character buffer.
+ * Parameters:
+ * lib - Library handle returned from dbll_open().
+ * name - Name of section.
+ * pbuf - Buffer to write section contents into.
+ * size - Buffer size
+ * Returns:
+ * 0: Success.
+ * -ENXIO: Named section does not exist.
+ * Requires:
+ * DBL initialized.
+ * Valid lib.
+ * name != NULL.
+ * pbuf != NULL.
+ * size != 0.
+ * Ensures:
+ */
+typedef int(*dbll_read_sect_fxn) (struct dbll_library_obj *lib,
+ char *name, char *content,
+ u32 uContentSize);
+
+/*
+ * ======== dbll_set_attrs ========
+ * Set the attributes of the target.
+ * Parameters:
+ * target - Handle returned from dbll_create().
+ * pattrs - New attributes.
+ * Returns:
+ * Requires:
+ * DBL initialized.
+ * Valid target.
+ * pattrs != NULL.
+ * Ensures:
+ */
+typedef void (*dbll_set_attrs_fxn) (struct dbll_tar_obj *target,
+ struct dbll_attrs *attrs);
+
+/*
+ * ======== dbll_unload ========
+ * Unload library loaded with dbll_load().
+ * Parameters:
+ * lib - Handle returned from dbll_open().
+ * attrs - Contains free() function and handle to pass to it.
+ * Returns:
+ * Requires:
+ * DBL initialized.
+ * Valid lib.
+ * Ensures:
+ */
+typedef void (*dbll_unload_fxn) (struct dbll_library_obj *library,
+ struct dbll_attrs *attrs);
+
+/*
+ * ======== dbll_unload_sect ========
+ * Unload a named section from a library (for overlay support).
+ * Parameters:
+ * lib - Handle returned from dbll_open().
+ * sectName - Name of section to unload.
+ * attrs - Contains free() function and handle to pass to it.
+ * Returns:
+ * 0: Success.
+ * -ENXIO: Named section not found.
+ * -ENOSYS: Function not implemented.
+ * Requires:
+ * DBL initialized.
+ * Valid lib.
+ * sectName != NULL.
+ * Ensures:
+ */
+typedef int(*dbll_unload_sect_fxn) (struct dbll_library_obj *lib,
+ char *pszSectName,
+ struct dbll_attrs *attrs);
+
+struct dbll_fxns {
+ dbll_close_fxn close_fxn;
+ dbll_create_fxn create_fxn;
+ dbll_delete_fxn delete_fxn;
+ dbll_exit_fxn exit_fxn;
+ dbll_get_attrs_fxn get_attrs_fxn;
+ dbll_get_addr_fxn get_addr_fxn;
+ dbll_get_c_addr_fxn get_c_addr_fxn;
+ dbll_get_sect_fxn get_sect_fxn;
+ dbll_init_fxn init_fxn;
+ dbll_load_fxn load_fxn;
+ dbll_load_sect_fxn load_sect_fxn;
+ dbll_open_fxn open_fxn;
+ dbll_read_sect_fxn read_sect_fxn;
+ dbll_set_attrs_fxn set_attrs_fxn;
+ dbll_unload_fxn unload_fxn;
+ dbll_unload_sect_fxn unload_sect_fxn;
+};
+
+#endif /* DBLLDEFS_ */
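For illustration, a minimal sketch of a loader client driving these entry points through a populated struct dbll_fxns table; the fxns/attrs variables, the file name and the omitted error handling are assumptions, the rest comes from this header:

/* Hedged sketch: exercise the generic loader interface end to end. */
struct dbll_fxns *fxns;        /* assumed to point at a filled-in table */
struct dbll_attrs attrs;       /* assumed to carry alloc/free/write/fopen/... */
struct dbll_tar_obj *target = NULL;
struct dbll_library_obj *lib = NULL;
u32 entry = 0;

fxns->init_fxn();
fxns->create_fxn(&target, &attrs);
fxns->open_fxn(target, "base_image.dof",
               DBLL_SYMB | DBLL_CODE | DBLL_DATA, &lib);
fxns->load_fxn(lib, DBLL_CODE | DBLL_DATA | DBLL_SYMB, &attrs, &entry);
/* ... the loaded image's entry point is now in "entry" ... */
fxns->unload_fxn(lib, &attrs);
fxns->close_fxn(lib);
fxns->delete_fxn(target);
fxns->exit_fxn();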
diff --git a/drivers/staging/tidspbridge/include/dspbridge/dbtype.h b/drivers/staging/tidspbridge/include/dspbridge/dbtype.h
new file mode 100644
index 0000000..de65a82
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/dbtype.h
@@ -0,0 +1,88 @@
+/*
+ * dbtype.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * This header defines data types for DSP/BIOS Bridge APIs and device
+ * driver modules. It also defines the Hungarian prefix to use for each
+ * base type.
+ *
+ * Copyright (C) 2008 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef DBTYPE_
+#define DBTYPE_
+
+/*===========================================================================*/
+/* Argument specification syntax */
+/*===========================================================================*/
+
+#ifndef IN
+#define IN /* Following parameter is for input. */
+#endif
+
+#ifndef OUT
+#define OUT /* Following parameter is for output. */
+#endif
+
+#ifndef OPTIONAL
+#define OPTIONAL /* Function may optionally use previous parameter. */
+#endif
+
+#ifndef CONST
+#define CONST const
+#endif
+
+/*===========================================================================*/
+/* Boolean constants */
+/*===========================================================================*/
+
+#ifndef FALSE
+#define FALSE 0
+#endif
+#ifndef TRUE
+#define TRUE 1
+#endif
+
+/*===========================================================================*/
+/* NULL (Definition is language specific) */
+/*===========================================================================*/
+
+#ifndef NULL
+#define NULL ((void *)0) /* Null pointer. */
+#endif
+
+/*===========================================================================*/
+/* NULL character (normally used for string termination) */
+/*===========================================================================*/
+
+#ifndef NULL_CHAR
+#define NULL_CHAR '\0' /* Null character. */
+#endif
+
+/*===========================================================================*/
+/* Basic Type definitions (with Prefixes for Hungarian notation) */
+/*===========================================================================*/
+
+#ifndef OMAPBRIDGE_TYPES
+#define OMAPBRIDGE_TYPES
+typedef volatile unsigned short reg_uword16;
+#endif
+
+#define TEXT(x) x
+
+#define DLLIMPORT
+#define DLLEXPORT
+
+/* Define DSPAPIDLL correctly in dspapi.h */
+#define _DSPSYSDLL32_
+
+#endif /* DBTYPE_ */
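The IN/OUT/OPTIONAL annotations above expand to nothing (CONST expands to const) and only document call direction; a hypothetical prototype using them would read:

/* Hypothetical prototype; the function and parameter names are made up. */
extern int xyz_get_symbol_value(IN CONST char *name,
                                OUT u32 *value,
                                OPTIONAL void *user_arg);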
diff --git a/drivers/staging/tidspbridge/include/dspbridge/dehdefs.h b/drivers/staging/tidspbridge/include/dspbridge/dehdefs.h
new file mode 100644
index 0000000..09f8bf8
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/dehdefs.h
@@ -0,0 +1,32 @@
+/*
+ * dehdefs.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Definition for Bridge driver module DEH.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef DEHDEFS_
+#define DEHDEFS_
+
+#include <dspbridge/mbx_sh.h> /* shared mailbox codes */
+
+/* DEH object manager */
+struct deh_mgr;
+
+/* Magic code used to determine if DSP signaled exception. */
+#define DEH_BASE MBX_DEH_BASE
+#define DEH_USERS_BASE MBX_DEH_USERS_BASE
+#define DEH_LIMIT MBX_DEH_LIMIT
+
+#endif /* DEHDEFS_ */
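As a rough sketch of how the DEH window is meant to be consumed (the helper name and the inclusive upper bound are assumptions; the real check lives in the DEH/IO code):

/* Hedged sketch: classify a mailbox value as a DSP exception notification. */
static inline bool is_deh_notification(u32 mbx_val)
{
	return mbx_val >= DEH_BASE && mbx_val <= DEH_LIMIT;
}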
diff --git a/drivers/staging/tidspbridge/include/dspbridge/dev.h b/drivers/staging/tidspbridge/include/dspbridge/dev.h
new file mode 100644
index 0000000..434c128
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/dev.h
@@ -0,0 +1,702 @@
+/*
+ * dev.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Bridge driver device operations.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef DEV_
+#define DEV_
+
+/* ----------------------------------- Module Dependent Headers */
+#include <dspbridge/chnldefs.h>
+#include <dspbridge/cmm.h>
+#include <dspbridge/cod.h>
+#include <dspbridge/dehdefs.h>
+#include <dspbridge/nodedefs.h>
+#include <dspbridge/dispdefs.h>
+#include <dspbridge/dspdefs.h>
+#include <dspbridge/dmm.h>
+#include <dspbridge/host_os.h>
+
+/* ----------------------------------- This */
+#include <dspbridge/devdefs.h>
+
+/*
+ * ======== dev_brd_write_fxn ========
+ * Purpose:
+ * Exported function to be used as the COD write function. This function
+ * is passed a handle to a DEV_hObject by ZL in pArb, then calls the
+ * device's bridge_brd_write() function.
+ * Parameters:
+ * pArb: Handle to a Device Object.
+ * hDevContext: Handle to Bridge driver defined device info.
+ * dwDSPAddr: Address on DSP board (Destination).
+ * pHostBuf: Pointer to host buffer (Source).
+ * ul_num_bytes: Number of bytes to transfer.
+ * ulMemType: Memory space on DSP to which to transfer.
+ * Returns:
+ * Number of bytes written. Returns 0 if the DEV_hObject passed in via
+ * pArb is invalid.
+ * Requires:
+ * DEV Initialized.
+ * pHostBuf != NULL
+ * Ensures:
+ */
+extern u32 dev_brd_write_fxn(void *pArb,
+ u32 ulDspAddr,
+ void *pHostBuf, u32 ul_num_bytes, u32 nMemSpace);
+
+/*
+ * ======== dev_create_device ========
+ * Purpose:
+ * Called by the operating system to load the Bridge Driver for a
+ * 'Bridge device.
+ * Parameters:
+ * phDevObject: Ptr to location to receive the device object handle.
+ * driver_file_name: Name of Bridge driver PE DLL file to load. If the
+ * absolute path is not provided, the file is loaded
+ * through 'Bridge's module search path.
+ * pHostConfig: Host configuration information, to be passed down
+ * to the Bridge driver when bridge_dev_create() is called.
+ * pDspConfig: DSP resources, to be passed down to the Bridge driver
+ * when bridge_dev_create() is called.
+ * dev_node_obj: Platform specific device node.
+ * Returns:
+ * 0: Module is loaded, device object has been created
+ * -ENOMEM: Insufficient memory to create needed resources.
+ * -EPERM: Unable to find Bridge driver entry point function.
+ * -ESPIPE: Unable to load ZL DLL.
+ * Requires:
+ * DEV Initialized.
+ * phDevObject != NULL.
+ * driver_file_name != NULL.
+ * pHostConfig != NULL.
+ * pDspConfig != NULL.
+ * Ensures:
+ * 0: *phDevObject will contain handle to the new device object.
+ * Otherwise, does not create the device object, ensures the Bridge driver
+ * module is unloaded, and sets *phDevObject to NULL.
+ */
+extern int dev_create_device(OUT struct dev_object
+ **phDevObject,
+ IN CONST char *driver_file_name,
+ struct cfg_devnode *dev_node_obj);
+
+/*
+ * ======== dev_create_iva_device ========
+ * Purpose:
+ * Called by the operating system to load the Bridge Driver for IVA.
+ * Parameters:
+ * phDevObject: Ptr to location to receive the device object handle.
+ * driver_file_name: Name of Bridge driver PE DLL file to load. If the
+ * absolute path is not provided, the file is loaded
+ * through 'Bridge's module search path.
+ * pHostConfig: Host configuration information, to be passed down
+ * to the Bridge driver when bridge_dev_create() is called.
+ * pDspConfig: DSP resources, to be passed down to the Bridge driver
+ * when bridge_dev_create() is called.
+ * dev_node_obj: Platform specific device node.
+ * Returns:
+ * 0: Module is loaded, device object has been created
+ * -ENOMEM: Insufficient memory to create needed resources.
+ * -EPERM: Unable to find Bridge driver entry point function.
+ * -ESPIPE: Unable to load ZL DLL.
+ * Requires:
+ * DEV Initialized.
+ * phDevObject != NULL.
+ * driver_file_name != NULL.
+ * pHostConfig != NULL.
+ * pDspConfig != NULL.
+ * Ensures:
+ * 0: *phDevObject will contain handle to the new device object.
+ * Otherwise, does not create the device object, ensures the Bridge driver
+ * module is unloaded, and sets *phDevObject to NULL.
+ */
+extern int dev_create_iva_device(OUT struct dev_object
+ **phDevObject,
+ IN CONST char *driver_file_name,
+ IN CONST struct cfg_hostres
+ *pHostConfig,
+ struct cfg_devnode *dev_node_obj);
+
+/*
+ * ======== dev_create2 ========
+ * Purpose:
+ * After successful loading of the image from api_init_complete2
+ * (PROC Auto_Start) or proc_load this fxn is called. This creates
+ * the Node Manager and updates the DEV Object.
+ * Parameters:
+ * hdev_obj: Handle to device object created with dev_create_device().
+ * Returns:
+ * 0: Successful Creation of Node Manager
+ * -EPERM: Some Error Occurred.
+ * Requires:
+ * DEV Initialized
+ * Valid hdev_obj
+ * Ensures:
+ * 0 and hdev_obj->hnode_mgr != NULL
+ * else hdev_obj->hnode_mgr == NULL
+ */
+extern int dev_create2(IN struct dev_object *hdev_obj);
+
+/*
+ * ======== dev_destroy2 ========
+ * Purpose:
+ * Destroys the Node manager for this device.
+ * Parameters:
+ * hdev_obj: Handle to device object created with dev_create_device().
+ * Returns:
+ * 0: Successful destruction of the Node Manager
+ * -EPERM: Some Error Occurred.
+ * Requires:
+ * DEV Initialized
+ * Valid hdev_obj
+ * Ensures:
+ * 0 and hdev_obj->hnode_mgr == NULL
+ * else -EPERM.
+ */
+extern int dev_destroy2(IN struct dev_object *hdev_obj);
+
+/*
+ * ======== dev_destroy_device ========
+ * Purpose:
+ * Destroys the channel manager for this device, if any, calls
+ * bridge_dev_destroy(), and then attempts to unload the Bridge module.
+ * Parameters:
+ * hdev_obj: Handle to device object created with
+ * dev_create_device().
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid hdev_obj.
+ * -EPERM: The Bridge driver failed its bridge_dev_destroy() function.
+ * Requires:
+ * DEV Initialized.
+ * Ensures:
+ */
+extern int dev_destroy_device(struct dev_object
+ *hdev_obj);
+
+/*
+ * ======== dev_get_chnl_mgr ========
+ * Purpose:
+ * Retrieve the handle to the channel manager created for this device.
+ * Parameters:
+ * hdev_obj: Handle to device object created with
+ * dev_create_device().
+ * *phMgr: Ptr to location to store handle.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid hdev_obj.
+ * Requires:
+ * phMgr != NULL.
+ * DEV Initialized.
+ * Ensures:
+ * 0: *phMgr contains a handle to a channel manager object,
+ * or NULL.
+ * else: *phMgr is NULL.
+ */
+extern int dev_get_chnl_mgr(struct dev_object *hdev_obj,
+ OUT struct chnl_mgr **phMgr);
+
+/*
+ * ======== dev_get_cmm_mgr ========
+ * Purpose:
+ * Retrieve the handle to the shared memory manager created for this
+ * device.
+ * Parameters:
+ * hdev_obj: Handle to device object created with
+ * dev_create_device().
+ * *phMgr: Ptr to location to store handle.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid hdev_obj.
+ * Requires:
+ * phMgr != NULL.
+ * DEV Initialized.
+ * Ensures:
+ * 0: *phMgr contains a handle to a shared memory manager object,
+ * or NULL.
+ * else: *phMgr is NULL.
+ */
+extern int dev_get_cmm_mgr(struct dev_object *hdev_obj,
+ OUT struct cmm_object **phMgr);
+
+/*
+ * ======== dev_get_dmm_mgr ========
+ * Purpose:
+ * Retrieve the handle to the dynamic memory manager created for this
+ * device.
+ * Parameters:
+ * hdev_obj: Handle to device object created with
+ * dev_create_device().
+ * *phMgr: Ptr to location to store handle.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid hdev_obj.
+ * Requires:
+ * phMgr != NULL.
+ * DEV Initialized.
+ * Ensures:
+ * 0: *phMgr contains a handle to a dynamic memory manager object,
+ * or NULL.
+ * else: *phMgr is NULL.
+ */
+extern int dev_get_dmm_mgr(struct dev_object *hdev_obj,
+ OUT struct dmm_object **phMgr);
+
+/*
+ * ======== dev_get_cod_mgr ========
+ * Purpose:
+ * Retrieve the COD manager created for this device.
+ * Parameters:
+ * hdev_obj: Handle to device object created with
+ * dev_create_device().
+ * *phCodMgr: Ptr to location to store handle.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid hdev_obj.
+ * Requires:
+ * phCodMgr != NULL.
+ * DEV Initialized.
+ * Ensures:
+ * 0: *phCodMgr contains a handle to a COD manager object.
+ * else: *phCodMgr is NULL.
+ */
+extern int dev_get_cod_mgr(struct dev_object *hdev_obj,
+ OUT struct cod_manager **phCodMgr);
+
+/*
+ * ======== dev_get_deh_mgr ========
+ * Purpose:
+ * Retrieve the DEH manager created for this device.
+ * Parameters:
+ * hdev_obj: Handle to device object created with dev_create_device().
+ * *phDehMgr: Ptr to location to store handle.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid hdev_obj.
+ * Requires:
+ * phDehMgr != NULL.
+ * DEH Initialized.
+ * Ensures:
+ * 0: *phDehMgr contains a handle to a DEH manager object.
+ * else: *phDehMgr is NULL.
+ */
+extern int dev_get_deh_mgr(struct dev_object *hdev_obj,
+ OUT struct deh_mgr **phDehMgr);
+
+/*
+ * ======== dev_get_dev_node ========
+ * Purpose:
+ * Retrieve the platform specific device ID for this device.
+ * Parameters:
+ * hdev_obj: Handle to device object created with
+ * dev_create_device().
+ * phDevNode: Ptr to location to get the device node handle.
+ * Returns:
+ * 0: Returns a DEVNODE in *phDevNode.
+ * -EFAULT: Invalid hdev_obj.
+ * Requires:
+ * phDevNode != NULL.
+ * DEV Initialized.
+ * Ensures:
+ * 0: *phDevNode contains a platform specific device ID;
+ * else: *phDevNode is NULL.
+ */
+extern int dev_get_dev_node(struct dev_object *hdev_obj,
+ OUT struct cfg_devnode **phDevNode);
+
+/*
+ * ======== dev_get_dev_type ========
+ * Purpose:
+ * Retrieve the device type for this device.
+ * Parameters:
+ * hdev_obj: Handle to device object created with
+ * dev_create_device().
+ * dev_type: Ptr to location to store the device type.
+ * Returns:
+ * 0: Success
+ * -EFAULT: Invalid hdev_obj.
+ * Requires:
+ * dev_type != NULL.
+ * DEV Initialized.
+ * Ensures:
+ * 0: *dev_type contains the device type;
+ * else: *dev_type is undefined.
+ */
+extern int dev_get_dev_type(struct dev_object *hdevObject,
+ u8 *dev_type);
+
+/*
+ * ======== dev_get_first ========
+ * Purpose:
+ * Retrieve the first Device Object handle from an internal linked list
+ * of DEV_OBJECTs maintained by DEV.
+ * Parameters:
+ * Returns:
+ * NULL if there are no device objects stored; else
+ * a valid DEV_HOBJECT.
+ * Requires:
+ * No calls to dev_create_device or dev_destroy_device (which may modify the
+ * internal device object list) may occur between calls to dev_get_first
+ * and dev_get_next.
+ * Ensures:
+ * The DEV_HOBJECT returned is valid.
+ * A subsequent call to dev_get_next will return the next device object in
+ * the list.
+ */
+extern struct dev_object *dev_get_first(void);
+
+/*
+ * ======== dev_get_intf_fxns ========
+ * Purpose:
+ * Retrieve the Bridge driver interface function structure for the
+ * loaded Bridge driver.
+ * Parameters:
+ * hdev_obj: Handle to device object created with
+ * dev_create_device().
+ * *ppIntfFxns: Ptr to location to store fxn interface.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid hdev_obj.
+ * Requires:
+ * ppIntfFxns != NULL.
+ * DEV Initialized.
+ * Ensures:
+ * 0: *ppIntfFxns contains a pointer to the Bridge
+ * driver interface;
+ * else: *ppIntfFxns is NULL.
+ */
+extern int dev_get_intf_fxns(struct dev_object *hdev_obj,
+ OUT struct bridge_drv_interface **ppIntfFxns);
+
+/*
+ * ======== dev_get_io_mgr ========
+ * Purpose:
+ * Retrieve the handle to the IO manager created for this device.
+ * Parameters:
+ * hdev_obj: Handle to device object created with
+ * dev_create_device().
+ * *phMgr: Ptr to location to store handle.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid hdev_obj.
+ * Requires:
+ * phMgr != NULL.
+ * DEV Initialized.
+ * Ensures:
+ * 0: *phMgr contains a handle to an IO manager object.
+ * else: *phMgr is NULL.
+ */
+extern int dev_get_io_mgr(struct dev_object *hdev_obj,
+ OUT struct io_mgr **phMgr);
+
+/*
+ * ======== dev_get_next ========
+ * Purpose:
+ * Retrieve the next Device Object handle from an internal linked list
+ * of DEV_OBJECTs maintained by DEV, after having previously called
+ * dev_get_first() and zero or more dev_get_next
+ * Parameters:
+ * hdev_obj: Handle to the device object returned from a previous
+ * call to dev_get_first() or dev_get_next().
+ * Returns:
+ * NULL if there are no further device objects on the list or hdev_obj
+ * was invalid;
+ * else the next valid DEV_HOBJECT in the list.
+ * Requires:
+ * No calls to dev_create_device or dev_destroy_device (which may modify the
+ * internal device object list) may occur between calls to dev_get_first
+ * and dev_get_next.
+ * Ensures:
+ * The DEV_HOBJECT returned is valid.
+ * A subsequent call to dev_get_next will return the next device object in
+ * the list.
+ */
+extern struct dev_object *dev_get_next(struct dev_object
+ *hdev_obj);
+
+/*
+ * ========= dev_get_msg_mgr ========
+ * Purpose:
+ * Retrieve the msg_ctrl Manager Handle from the DevObject.
+ * Parameters:
+ * hdev_obj: Handle to the Dev Object
+ * phMsgMgr: Location where msg_ctrl Manager handle will be returned.
+ * Returns:
+ * Requires:
+ * DEV Initialized.
+ * Valid hdev_obj.
+ * phMsgMgr != NULL.
+ * Ensures:
+ */
+extern void dev_get_msg_mgr(struct dev_object *hdev_obj,
+ OUT struct msg_mgr **phMsgMgr);
+
+/*
+ * ========= dev_get_node_manager ========
+ * Purpose:
+ * Retrieve the Node Manager Handle from the DevObject. It is an
+ * accessor function
+ * Parameters:
+ * hdev_obj: Handle to the Dev Object
+ * phNodeMgr: Location where Handle to the Node Manager will be
+ * returned.
+ * Returns:
+ * 0: Success
+ * -EFAULT: Invalid Dev Object handle.
+ * Requires:
+ * DEV Initialized.
+ * phNodeMgr is not null
+ * Ensures:
+ * 0: *phNodeMgr contains a handle to a Node manager object.
+ * else: *phNodeMgr is NULL.
+ */
+extern int dev_get_node_manager(struct dev_object
+ *hdev_obj,
+ OUT struct node_mgr **phNodeMgr);
+
+/*
+ * ======== dev_get_symbol ========
+ * Purpose:
+ * Get the value of a symbol in the currently loaded program.
+ * Parameters:
+ * hdev_obj: Handle to device object created with
+ * dev_create_device().
+ * pstrSym: Name of symbol to look up.
+ * pul_value: Ptr to symbol value.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid hdev_obj.
+ * -ESPIPE: Symbols could not be found or have not been loaded onto
+ * the board.
+ * Requires:
+ * pstrSym != NULL.
+ * pul_value != NULL.
+ * DEV Initialized.
+ * Ensures:
+ * 0: *pul_value contains the symbol value;
+ */
+extern int dev_get_symbol(struct dev_object *hdev_obj,
+ IN CONST char *pstrSym, OUT u32 * pul_value);
+
+/*
+ * ======== dev_get_bridge_context ========
+ * Purpose:
+ * Retrieve the Bridge Context handle, as returned by the
+ * bridge_dev_create fxn.
+ * Parameters:
+ * hdev_obj: Handle to device object created with dev_create_device()
+ * *phbridge_context: Ptr to location to store context handle.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid hdev_obj.
+ * Requires:
+ * phbridge_context != NULL.
+ * DEV Initialized.
+ * Ensures:
+ * 0: *phbridge_context contains context handle;
+ * else: *phbridge_context is NULL;
+ */
+extern int dev_get_bridge_context(struct dev_object *hdev_obj,
+ OUT struct bridge_dev_context
+ **phbridge_context);
+
+/*
+ * ======== dev_exit ========
+ * Purpose:
+ * Decrement reference count, and free resources when reference count is
+ * 0.
+ * Parameters:
+ * Returns:
+ * Requires:
+ * DEV is initialized.
+ * Ensures:
+ * When reference count == 0, DEV's private resources are freed.
+ */
+extern void dev_exit(void);
+
+/*
+ * ======== dev_init ========
+ * Purpose:
+ * Initialize DEV's private state, keeping a reference count on each call.
+ * Parameters:
+ * Returns:
+ * TRUE if initialized; FALSE if an error occurred.
+ * Requires:
+ * Ensures:
+ * TRUE: A requirement for the other public DEV functions.
+ */
+extern bool dev_init(void);
+
+/*
+ * ======== dev_is_locked ========
+ * Purpose:
+ * Predicate function to determine if the device has been
+ * locked by a client for exclusive access.
+ * Parameters:
+ * hdev_obj: Handle to device object created with
+ * dev_create_device().
+ * Returns:
+ * 0: TRUE: device has been locked.
+ * 0: FALSE: device not locked.
+ * -EFAULT: hdev_obj was invalid.
+ * Requires:
+ * DEV Initialized.
+ * Ensures:
+ */
+extern int dev_is_locked(IN struct dev_object *hdev_obj);
+
+/*
+ * ======== dev_insert_proc_object ========
+ * Purpose:
+ * Inserts the Processor Object into the List of PROC Objects
+ * kept in the DEV Object
+ * Parameters:
+ * proc_obj: Handle to the Proc Object
+ * hdev_obj Handle to the Dev Object
+ * bAttachedNew Specifies if there are already processors attached
+ * Returns:
+ * 0: Successfully inserted into the list
+ * Requires:
+ * proc_obj is not NULL
+ * hdev_obj is a valid handle to the DEV.
+ * DEV Initialized.
+ * List (of Proc objects in Dev) exists.
+ * Ensures:
+ * 0 & the PROC Object is inserted and the list is not empty
+ * Details:
+ * If the list of Proc Objects is empty, bAttachedNew is TRUE, indicating
+ * this is the first processor attaching.
+ * If it is FALSE, there are already processors attached.
+ */
+extern int dev_insert_proc_object(IN struct dev_object
+ *hdev_obj,
+ IN u32 proc_obj,
+ OUT bool *pbAlreadyAttached);
+
+/*
+ * ======== dev_remove_proc_object ========
+ * Purpose:
+ * Search for and remove a Proc object from the given list maintained
+ * by the DEV
+ * Parameters:
+ * p_proc_object: Ptr to ProcObject to remove.
+ * dev_obj: Ptr to Dev Object where the list is.
+ * pbAlreadyAttached: Ptr to return the bool
+ * Returns:
+ * 0: If successful.
+ * -EPERM Failure to Remove the PROC Object from the list
+ * Requires:
+ * DevObject is Valid
+ * proc_obj != 0
+ * dev_obj->proc_list != NULL
+ * !LST_IS_EMPTY(dev_obj->proc_list)
+ * pbAlreadyAttached !=NULL
+ * Ensures:
+ * Details:
+ * List will be deleted when the DEV is destroyed.
+ *
+ */
+extern int dev_remove_proc_object(struct dev_object
+ *hdev_obj, u32 proc_obj);
+
+/*
+ * ======== dev_notify_clients ========
+ * Purpose:
+ * Notify all clients of this device of a change in device status.
+ * Clients may include multiple users of BRD, as well as CHNL.
+ * This function is asynchronous, and may be called by a timer event
+ * set up by a watchdog timer.
+ * Parameters:
+ * hdev_obj: Handle to device object created with dev_create_device().
+ * ulStatus: A status word, most likely a BRD_STATUS.
+ * Returns:
+ * 0: All registered clients were asynchronously notified.
+ * -EINVAL: Invalid hdev_obj.
+ * Requires:
+ * DEV Initialized.
+ * Ensures:
+ * 0: Notifications are queued by the operating system to be
+ * delivered to clients. This function does not ensure that
+ * the notifications will ever be delivered.
+ */
+extern int dev_notify_clients(struct dev_object *hdev_obj, u32 ulStatus);
+
+/*
+ * ======== dev_remove_device ========
+ * Purpose:
+ * Destroys the Device Object created by dev_start_device.
+ * Parameters:
+ * dev_node_obj: Device node as it is known to the OS.
+ * Returns:
+ * 0: If success;
+ * <error code> Otherwise.
+ * Requires:
+ * Ensures:
+ */
+extern int dev_remove_device(struct cfg_devnode *dev_node_obj);
+
+/*
+ * ======== dev_set_chnl_mgr ========
+ * Purpose:
+ * Set the channel manager for this device.
+ * Parameters:
+ * hdev_obj: Handle to device object created with
+ * dev_create_device().
+ * hmgr: Handle to a channel manager, or NULL.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid hdev_obj.
+ * Requires:
+ * DEV Initialized.
+ * Ensures:
+ */
+extern int dev_set_chnl_mgr(struct dev_object *hdev_obj,
+ struct chnl_mgr *hmgr);
+
+/*
+ * ======== dev_set_msg_mgr ========
+ * Purpose:
+ * Set the Message manager for this device.
+ * Parameters:
+ * hdev_obj: Handle to device object created with dev_create_device().
+ * hmgr: Handle to a message manager, or NULL.
+ * Returns:
+ * Requires:
+ * DEV Initialized.
+ * Ensures:
+ */
+extern void dev_set_msg_mgr(struct dev_object *hdev_obj, struct msg_mgr *hmgr);
+
+/*
+ * ======== dev_start_device ========
+ * Purpose:
+ * Initializes the new device with bridge environment. This involves
+ * querying CM for allocated resources, querying the registry for
+ * necessary dsp resources (requested in the INF file), and using this
+ * information to create a bridge device object.
+ * Parameters:
+ * dev_node_obj: Device node as it is known to the OS.
+ * Returns:
+ * 0: If success;
+ * <error code> Otherwise.
+ * Requires:
+ * DEV initialized.
+ * Ensures:
+ */
+extern int dev_start_device(struct cfg_devnode *dev_node_obj);
+
+#endif /* DEV_ */
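For illustration, the dev_get_first()/dev_get_next() contract documented above amounts to an iteration of the following shape; the loop body is a placeholder, and per the Requires clauses no dev_create_device()/dev_destroy_device() call may intervene:

struct dev_object *hdev_obj;
struct chnl_mgr *hchnl_mgr;

for (hdev_obj = dev_get_first(); hdev_obj != NULL;
     hdev_obj = dev_get_next(hdev_obj)) {
	/* Per-device work, e.g. look up the device's channel manager. */
	if (!dev_get_chnl_mgr(hdev_obj, &hchnl_mgr) && hchnl_mgr)
		; /* use hchnl_mgr */
}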
diff --git a/drivers/staging/tidspbridge/include/dspbridge/devdefs.h b/drivers/staging/tidspbridge/include/dspbridge/devdefs.h
new file mode 100644
index 0000000..a2f9241
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/devdefs.h
@@ -0,0 +1,26 @@
+/*
+ * devdefs.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Definition of common include typedef between dspdefs.h and dev.h. Required
+ * to break circular dependency between Bridge driver and DEV include files.
+ *
+ * Copyright (C) 2008 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef DEVDEFS_
+#define DEVDEFS_
+
+/* Bridge Device Object */
+struct dev_object;
+
+#endif /* DEVDEFS_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/disp.h b/drivers/staging/tidspbridge/include/dspbridge/disp.h
new file mode 100644
index 0000000..2fd14b0
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/disp.h
@@ -0,0 +1,204 @@
+/*
+ * disp.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * DSP/BIOS Bridge Node Dispatcher.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef DISP_
+#define DISP_
+
+#include <dspbridge/dbdefs.h>
+#include <dspbridge/nodedefs.h>
+#include <dspbridge/nodepriv.h>
+#include <dspbridge/dispdefs.h>
+
+/*
+ * ======== disp_create ========
+ * Create a NODE Dispatcher object. This object handles the creation,
+ * deletion, and execution of nodes on the DSP target, through communication
+ * with the Resource Manager Server running on the target. Each NODE
+ * Manager object should have exactly one NODE Dispatcher.
+ *
+ * Parameters:
+ * phDispObject: Location to store node dispatcher object on output.
+ * hdev_obj: Device for this processor.
+ * pDispAttrs: Node dispatcher attributes.
+ * Returns:
+ * 0: Success;
+ * -ENOMEM: Insufficient memory for requested resources.
+ * -EPERM: Unable to create dispatcher.
+ * Requires:
+ * disp_init(void) called.
+ * pDispAttrs != NULL.
+ * hdev_obj != NULL.
+ * phDispObject != NULL.
+ * Ensures:
+ * 0: IS_VALID(*phDispObject).
+ * error: *phDispObject == NULL.
+ */
+extern int disp_create(OUT struct disp_object **phDispObject,
+ struct dev_object *hdev_obj,
+ IN CONST struct disp_attr *pDispAttrs);
+
+/*
+ * ======== disp_delete ========
+ * Delete the NODE Dispatcher.
+ *
+ * Parameters:
+ * hDispObject: Node Dispatcher object.
+ * Returns:
+ * Requires:
+ * disp_init(void) called.
+ * Valid hDispObject.
+ * Ensures:
+ * hDispObject is invalid.
+ */
+extern void disp_delete(struct disp_object *hDispObject);
+
+/*
+ * ======== disp_exit ========
+ * Discontinue usage of DISP module.
+ *
+ * Parameters:
+ * Returns:
+ * Requires:
+ * disp_init(void) previously called.
+ * Ensures:
+ * Any resources acquired in disp_init(void) will be freed when last DISP
+ * client calls disp_exit(void).
+ */
+extern void disp_exit(void);
+
+/*
+ * ======== disp_init ========
+ * Initialize the DISP module.
+ *
+ * Parameters:
+ * Returns:
+ * TRUE if initialization succeeded, FALSE otherwise.
+ * Ensures:
+ */
+extern bool disp_init(void);
+
+/*
+ * ======== disp_node_change_priority ========
+ * Change the priority of a node currently running on the target.
+ *
+ * Parameters:
+ * hDispObject: Node Dispatcher object.
+ * hnode: Node object representing a node currently
+ * allocated or running on the DSP.
+ * ulFxnAddress: Address of RMS function for changing priority.
+ * node_env: Address of node's environment structure.
+ * prio: New priority level to set node's priority to.
+ * Returns:
+ * 0: Success.
+ * -ETIME: A timeout occurred before the DSP responded.
+ * Requires:
+ * disp_init(void) called.
+ * Valid hDispObject.
+ * hnode != NULL.
+ * Ensures:
+ */
+extern int disp_node_change_priority(struct disp_object
+ *hDispObject,
+ struct node_object *hnode,
+ u32 ul_fxn_addr,
+ nodeenv node_env, s32 prio);
+
+/*
+ * ======== disp_node_create ========
+ * Create a node on the DSP by remotely calling the node's create function.
+ *
+ * Parameters:
+ * hDispObject: Node Dispatcher object.
+ * hnode: Node handle obtained from node_allocate().
+ * ul_fxn_addr: Address of RMS create node function.
+ * ul_create_fxn: Address of node's create function.
+ * pargs: Arguments to pass to RMS node create function.
+ * pNodeEnv: Location to store node environment pointer on
+ * output.
+ * Returns:
+ * 0: Success.
+ * -ETIME: A timeout occurred before the DSP responded.
+ * -EPERM: A failure occurred, unable to create node.
+ * Requires:
+ * disp_init(void) called.
+ * Valid hDispObject.
+ * pargs != NULL.
+ * hnode != NULL.
+ * pNodeEnv != NULL.
+ * node_get_type(hnode) != NODE_DEVICE.
+ * Ensures:
+ */
+extern int disp_node_create(struct disp_object *hDispObject,
+ struct node_object *hnode,
+ u32 ul_fxn_addr,
+ u32 ul_create_fxn,
+ IN CONST struct node_createargs
+ *pargs, OUT nodeenv *pNodeEnv);
+
+/*
+ * ======== disp_node_delete ========
+ * Delete a node on the DSP by remotely calling the node's delete function.
+ *
+ * Parameters:
+ * hDispObject: Node Dispatcher object.
+ * hnode: Node object representing a node currently
+ * loaded on the DSP.
+ * ul_fxn_addr: Address of RMS delete node function.
+ * ul_delete_fxn: Address of node's delete function.
+ * node_env: Address of node's environment structure.
+ * Returns:
+ * 0: Success.
+ * -ETIME: A timeout occurred before the DSP responded.
+ * Requires:
+ * disp_init(void) called.
+ * Valid hDispObject.
+ * hnode != NULL.
+ * Ensures:
+ */
+extern int disp_node_delete(struct disp_object *hDispObject,
+ struct node_object *hnode,
+ u32 ul_fxn_addr,
+ u32 ul_delete_fxn, nodeenv node_env);
+
+/*
+ * ======== disp_node_run ========
+ * Start execution of a node's execute phase, or resume execution of a node
+ * that has been suspended (via DISP_NodePause()) on the DSP.
+ *
+ * Parameters:
+ * hDispObject: Node Dispatcher object.
+ * hnode: Node object representing a node to be executed
+ * on the DSP.
+ * ul_fxn_addr: Address of RMS node execute function.
+ * ul_execute_fxn: Address of node's execute function.
+ * node_env: Address of node's environment structure.
+ * Returns:
+ * 0: Success.
+ * -ETIME: A timeout occurred before the DSP responded.
+ * Requires:
+ * disp_init(void) called.
+ * Valid hDispObject.
+ * hnode != NULL.
+ * Ensures:
+ */
+extern int disp_node_run(struct disp_object *hDispObject,
+ struct node_object *hnode,
+ u32 ul_fxn_addr,
+ u32 ul_execute_fxn, nodeenv node_env);
+
+#endif /* DISP_ */
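Taken together, these functions describe the node dispatcher lifecycle; a sketch of the calling sequence follows (the device/node objects, attribute structure, RMS and node function addresses are assumed, and error handling is omitted):

struct disp_object *disp;
nodeenv env;

disp_create(&disp, hdev_obj, &disp_attrs);
disp_node_create(disp, hnode, rms_create_fxn, node_create_fxn,
                 &create_args, &env);
disp_node_run(disp, hnode, rms_execute_fxn, node_execute_fxn, env);
/* ... the node's execute phase runs on the DSP ... */
disp_node_delete(disp, hnode, rms_delete_fxn, node_delete_fxn, env);
disp_delete(disp);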
diff --git a/drivers/staging/tidspbridge/include/dspbridge/dispdefs.h b/drivers/staging/tidspbridge/include/dspbridge/dispdefs.h
new file mode 100644
index 0000000..946551a
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/dispdefs.h
@@ -0,0 +1,35 @@
+/*
+ * dispdefs.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Global DISP constants and types, shared by PROCESSOR, NODE, and DISP.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef DISPDEFS_
+#define DISPDEFS_
+
+struct disp_object;
+
+/* Node Dispatcher attributes */
+struct disp_attr {
+ u32 ul_chnl_offset; /* Offset of channel ids reserved for RMS */
+ /* Size of buffer for sending data to RMS */
+ u32 ul_chnl_buf_size;
+ int proc_family; /* eg, 5000 */
+ int proc_type; /* eg, 5510 */
+ void *reserved1; /* Reserved for future use. */
+ u32 reserved2; /* Reserved for future use. */
+};
+
+#endif /* DISPDEFS_ */
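A hypothetical initialization of these attributes, reusing the example family/type values from the field comments (the channel values are placeholders):

struct disp_attr attrs = {
	.ul_chnl_offset   = 0,		/* first channel id reserved for RMS */
	.ul_chnl_buf_size = 0x100,	/* bytes per RMS message buffer */
	.proc_family      = 5000,
	.proc_type        = 5510,
};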
diff --git a/drivers/staging/tidspbridge/include/dspbridge/dmm.h b/drivers/staging/tidspbridge/include/dspbridge/dmm.h
new file mode 100644
index 0000000..1ce1b65
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/dmm.h
@@ -0,0 +1,75 @@
+/*
+ * dmm.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * The Dynamic Memory Mapping (DMM) module manages the DSP virtual address
+ * space that can be directly mapped to any MPU buffer or memory region.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef DMM_
+#define DMM_
+
+#include <dspbridge/dbdefs.h>
+
+struct dmm_object;
+
+/* DMM attributes used in dmm_create() */
+struct dmm_mgrattrs {
+ u32 reserved;
+};
+
+#define DMMPOOLSIZE 0x4000000
+
+/*
+ * ======== dmm_get_handle ========
+ * Purpose:
+ * Return the dynamic memory manager object for this device.
+ * This is typically called from the client process.
+ */
+
+extern int dmm_get_handle(void *hprocessor,
+ OUT struct dmm_object **phDmmMgr);
+
+extern int dmm_reserve_memory(struct dmm_object *dmm_mgr,
+ u32 size, u32 *prsv_addr);
+
+extern int dmm_un_reserve_memory(struct dmm_object *dmm_mgr,
+ u32 rsv_addr);
+
+extern int dmm_map_memory(struct dmm_object *dmm_mgr, u32 addr,
+ u32 size);
+
+extern int dmm_un_map_memory(struct dmm_object *dmm_mgr,
+ u32 addr, u32 *psize);
+
+extern int dmm_destroy(struct dmm_object *dmm_mgr);
+
+extern int dmm_delete_tables(struct dmm_object *dmm_mgr);
+
+extern int dmm_create(OUT struct dmm_object **phDmmMgr,
+ struct dev_object *hdev_obj,
+ IN CONST struct dmm_mgrattrs *pMgrAttrs);
+
+extern bool dmm_init(void);
+
+extern void dmm_exit(void);
+
+extern int dmm_create_tables(struct dmm_object *dmm_mgr,
+ u32 addr, u32 size);
+
+#ifdef DSP_DMM_DEBUG
+u32 dmm_mem_map_dump(struct dmm_object *dmm_mgr);
+#endif
+
+#endif /* DMM_ */
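For illustration, a reserve/map/unmap/unreserve round trip over this interface looks roughly as follows; the processor handle, the sizes and the omitted error handling are assumptions:

struct dmm_object *dmm_mgr;
u32 rsv_addr, unmapped_size;

dmm_get_handle(hprocessor, &dmm_mgr);
dmm_reserve_memory(dmm_mgr, 0x10000, &rsv_addr);	/* carve out DSP VA space */
dmm_map_memory(dmm_mgr, rsv_addr, 0x10000);		/* mark the region mapped */
/* ... the backing MPU buffer is now reachable from the DSP ... */
dmm_un_map_memory(dmm_mgr, rsv_addr, &unmapped_size);
dmm_un_reserve_memory(dmm_mgr, rsv_addr);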
diff --git a/drivers/staging/tidspbridge/include/dspbridge/drv.h b/drivers/staging/tidspbridge/include/dspbridge/drv.h
new file mode 100644
index 0000000..66f12ef
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/drv.h
@@ -0,0 +1,522 @@
+/*
+ * drv.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * DRV Resource allocation module. Driver Object gets Created
+ * at the time of Loading. It holds the List of Device Objects
+ * in the system.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef DRV_
+#define DRV_
+
+#include <dspbridge/devdefs.h>
+
+#include <dspbridge/drvdefs.h>
+
+#define DRV_ASSIGN 1
+#define DRV_RELEASE 0
+
+/* Provide the DSP Internal memory windows that can be accessed from L3 address
+ * space */
+
+#define OMAP_GEM_BASE 0x107F8000
+#define OMAP_DSP_SIZE 0x00720000
+
+/* MEM1 is L2 RAM + L2 Cache space */
+#define OMAP_DSP_MEM1_BASE 0x5C7F8000
+#define OMAP_DSP_MEM1_SIZE 0x18000
+#define OMAP_DSP_GEM1_BASE 0x107F8000
+
+/* MEM2 is L1P RAM/CACHE space */
+#define OMAP_DSP_MEM2_BASE 0x5CE00000
+#define OMAP_DSP_MEM2_SIZE 0x8000
+#define OMAP_DSP_GEM2_BASE 0x10E00000
+
+/* MEM3 is L1D RAM/CACHE space */
+#define OMAP_DSP_MEM3_BASE 0x5CF04000
+#define OMAP_DSP_MEM3_SIZE 0x14000
+#define OMAP_DSP_GEM3_BASE 0x10F04000
+
+#define OMAP_IVA2_PRM_BASE 0x48306000
+#define OMAP_IVA2_PRM_SIZE 0x1000
+
+#define OMAP_IVA2_CM_BASE 0x48004000
+#define OMAP_IVA2_CM_SIZE 0x1000
+
+#define OMAP_PER_CM_BASE 0x48005000
+#define OMAP_PER_CM_SIZE 0x1000
+
+#define OMAP_PER_PRM_BASE 0x48307000
+#define OMAP_PER_PRM_SIZE 0x1000
+
+#define OMAP_CORE_PRM_BASE 0x48306A00
+#define OMAP_CORE_PRM_SIZE 0x1000
+
+#define OMAP_SYSC_BASE 0x48002000
+#define OMAP_SYSC_SIZE 0x1000
+
+#define OMAP_DMMU_BASE 0x5D000000
+#define OMAP_DMMU_SIZE 0x1000
+
+#define OMAP_PRCM_VDD1_DOMAIN 1
+#define OMAP_PRCM_VDD2_DOMAIN 2
+
+/* GPP PROCESS CLEANUP Data structures */
+
+/* New structure (member of process context) abstracts NODE resource info */
+struct node_res_object {
+ void *hnode;
+ s32 node_allocated; /* Node status */
+ s32 heap_allocated; /* Heap status */
+ s32 streams_allocated; /* Streams status */
+ struct node_res_object *next;
+};
+
+/* used to cache dma mapping information */
+struct bridge_dma_map_info {
+ /* direction of DMA in action, or DMA_NONE */
+ enum dma_data_direction dir;
+ /* number of elements requested by us */
+ int num_pages;
+ /* number of elements returned from dma_map_sg */
+ int sg_num;
+ /* list of buffers used in this DMA action */
+ struct scatterlist *sg;
+};
+
+/* Used for DMM mapped memory accounting */
+struct dmm_map_object {
+ struct list_head link;
+ u32 dsp_addr;
+ u32 mpu_addr;
+ u32 size;
+ u32 num_usr_pgs;
+ struct page **pages;
+ struct bridge_dma_map_info dma_info;
+};
+
+/* Used for DMM reserved memory accounting */
+struct dmm_rsv_object {
+ struct list_head link;
+ u32 dsp_reserved_addr;
+};
+
+/* New structure (member of process context) abstracts DMM resource info */
+struct dspheap_res_object {
+ s32 heap_allocated; /* DMM status */
+ u32 ul_mpu_addr;
+ u32 ul_dsp_addr;
+ u32 ul_dsp_res_addr;
+ u32 heap_size;
+ void *hprocessor;
+ struct dspheap_res_object *next;
+};
+
+/* New structure (member of process context) abstracts stream resource info */
+struct strm_res_object {
+ s32 stream_allocated; /* Stream status */
+ void *hstream;
+ u32 num_bufs;
+ u32 dir;
+ struct strm_res_object *next;
+};
+
+/* Overall Bridge process resource usage state */
+enum gpp_proc_res_state {
+ PROC_RES_ALLOCATED,
+ PROC_RES_FREED
+};
+
+/* Bridge Data */
+struct drv_data {
+ char *base_img;
+ s32 shm_size;
+ int tc_wordswapon;
+ void *drv_object;
+ void *dev_object;
+ void *mgr_object;
+};
+
+/* Process Context */
+struct process_context {
+ /* Process State */
+ enum gpp_proc_res_state res_state;
+
+ /* Handle to Processor */
+ void *hprocessor;
+
+ /* DSP Node resources */
+ struct node_res_object *node_list;
+ struct mutex node_mutex;
+
+ /* DMM mapped memory resources */
+ struct list_head dmm_map_list;
+ spinlock_t dmm_map_lock;
+
+ /* DMM reserved memory resources */
+ struct list_head dmm_rsv_list;
+ spinlock_t dmm_rsv_lock;
+
+ /* DSP Heap resources */
+ struct dspheap_res_object *pdspheap_list;
+
+ /* Stream resources */
+ struct strm_res_object *pstrm_list;
+ struct mutex strm_mutex;
+};
+
+/*
+ * ======== drv_create ========
+ * Purpose:
+ * Creates the Driver Object. This is done during the driver loading.
+ * There is only one Driver Object in the DSP/BIOS Bridge.
+ * Parameters:
+ * phDrvObject: Location to store created DRV Object handle.
+ * Returns:
+ * 0: Success
+ * -ENOMEM: Failed in Memory allocation
+ * -EPERM: General Failure
+ * Requires:
+ * DRV Initialized (refs > 0 )
+ * phDrvObject != NULL.
+ * Ensures:
+ * 0: - *phDrvObject is a valid DRV interface to the device.
+ * - List of DevObject Created and Initialized.
+ * - List of dev_node Strings created and initialized.
+ * - Registry is updated with the DRV Object.
+ * !0: DRV Object not created
+ * Details:
+ * There is one Driver Object for the Driver representing
+ * the driver itself. It contains the list of device
+ * Objects and the list of Device Extensions in the system.
+ * Also it can hold other necessary
+ * information in its storage area.
+ */
+extern int drv_create(struct drv_object **phDrvObject);
+
+/*
+ * ======== drv_destroy ========
+ * Purpose:
+ * Destroys the Dev Object list and the DrvExt list, and destroys the
+ * DRV object.
+ * Called upon driver unloading, or upon unsuccessful loading of the driver.
+ * Parameters:
+ * hdrv_obj: Handle to Driver object.
+ * Returns:
+ * 0: Success.
+ * -EPERM: Failed to destroy DRV Object
+ * Requires:
+ * DRV Initialized (refs > 0)
+ * hdrv_obj is not NULL and a valid DRV handle.
+ * List of DevObject is Empty.
+ * List of DrvExt is Empty
+ * Ensures:
+ * 0: - DRV Object destroyed and hdrv_obj is not a valid
+ * DRV handle.
+ * - Registry is updated with "0" as the DRV Object.
+ */
+extern int drv_destroy(struct drv_object *hdrv_obj);
+
+/*
+ * ======== drv_exit ========
+ * Purpose:
+ * Exit the DRV module, freeing any modules initialized in drv_init.
+ * Parameters:
+ * Returns:
+ * Requires:
+ * Ensures:
+ */
+extern void drv_exit(void);
+
+/*
+ * ======== drv_get_first_dev_object ========
+ * Purpose:
+ * Returns the Ptr to the FirstDev Object in the List
+ * Parameters:
+ * Requires:
+ * DRV Initialized
+ * Returns:
+ * dw_dev_object: Ptr to the First Dev Object as a u32
+ * 0 if it fails to retrieve the First Dev Object
+ * Ensures:
+ */
+extern u32 drv_get_first_dev_object(void);
+
+/*
+ * ======== drv_get_first_dev_extension ========
+ * Purpose:
+ * Returns the Ptr to the First Device Extension in the List
+ * Parameters:
+ * Requires:
+ * DRV Initialized
+ * Returns:
+ * dw_dev_extension: Ptr to the First Device Extension as a u32
+ * 0: Failed to Get the Device Extension
+ * Ensures:
+ */
+extern u32 drv_get_first_dev_extension(void);
+
+/*
+ * ======== drv_get_dev_object ========
+ * Purpose:
+ * Given an index, returns a handle to DevObject from the list
+ * Parameters:
+ * hdrv_obj: Handle to the Manager
+ * phDevObject: Location to store the Dev Handle
+ * Requires:
+ * DRV Initialized
+ * index >= 0
+ * hdrv_obj is not NULL and Valid DRV Object
+ * phDevObject is not NULL
+ * Device Object List not Empty
+ * Returns:
+ * 0: Success
+ * -EPERM: Failed to Get the Dev Object
+ * Ensures:
+ * 0: *phDevObject != NULL
+ * -EPERM: *phDevObject = NULL
+ */
+extern int drv_get_dev_object(u32 index,
+ struct drv_object *hdrv_obj,
+ struct dev_object **phDevObject);
+
+/*
+ * ======== drv_get_next_dev_object ========
+ * Purpose:
+ * Returns the Ptr to the Next Device Object from the List
+ * Parameters:
+ * hdev_obj: Handle to the Device Object
+ * Requires:
+ * DRV Initialized
+ * hdev_obj != 0
+ * Returns:
+ * dw_dev_object: Ptr to the Next Dev Object as a u32
+ * 0: If it fails to get the next Dev Object.
+ * Ensures:
+ */
+extern u32 drv_get_next_dev_object(u32 hdev_obj);
+
+/*
+ * ======== drv_get_next_dev_extension ========
+ * Purpose:
+ * Returns the Ptr to the Next Device Extension from the List
+ * Parameters:
+ * hDevExtension: Handle to the Device Extension
+ * Requires:
+ * DRV Initialized
+ * hDevExtension != 0.
+ * Returns:
+ * dw_dev_extension: Ptr to the Next Dev Extension
+ * 0: If it fails to get the next Dev Extension
+ * Ensures:
+ */
+extern u32 drv_get_next_dev_extension(u32 hDevExtension);
+
+/*
+ * ======== drv_init ========
+ * Purpose:
+ * Initialize the DRV module.
+ * Parameters:
+ * Returns:
+ * TRUE if success; FALSE otherwise.
+ * Requires:
+ * Ensures:
+ */
+extern int drv_init(void);
+
+/*
+ * ======== drv_insert_dev_object ========
+ * Purpose:
+ * Insert a DeviceObject into the list of Driver object.
+ * Parameters:
+ * hdrv_obj: Handle to DrvObject
+ * hdev_obj: Handle to DeviceObject to insert.
+ * Returns:
+ * 0: If successful.
+ * -EPERM: General Failure:
+ * Requires:
+ * hdrv_obj != NULL and Valid DRV Handle.
+ * hdev_obj != NULL.
+ * Ensures:
+ * 0: Device Object is inserted and the List is not empty.
+ */
+extern int drv_insert_dev_object(struct drv_object *hdrv_obj,
+ struct dev_object *hdev_obj);
+
+/*
+ * ======== drv_remove_dev_object ========
+ * Purpose:
+ * Search for and remove a Device object from the given list of Device
+ * objects.
+ * Parameters:
+ * hdrv_obj: Handle to DrvObject
+ * hdev_obj: Handle to DevObject to Remove
+ * Returns:
+ * 0: Success.
+ * -EPERM: Unable to find dev_obj.
+ * Requires:
+ * hdrv_obj != NULL and a Valid DRV Handle.
+ * hdev_obj != NULL.
+ * List exists and is not empty.
+ * Ensures:
+ * List either does not exist (NULL), or is not empty if it does exist.
+ */
+extern int drv_remove_dev_object(struct drv_object *hdrv_obj,
+ struct dev_object *hdev_obj);
+
+/*
+ * ======== drv_request_resources ========
+ * Purpose:
+ * Requests and assigns the resources for the driver.
+ * Parameters:
+ * dw_context: Path to the driver Registry Key.
+ * pDevNodeString: Ptr to dev_node String stored in the Device Ext.
+ * Returns:
+ * TRUE if success; FALSE otherwise.
+ * Requires:
+ * Ensures:
+ * The Resources are assigned based on Bus type.
+ * The hardware is initialized. Resource information is
+ * gathered from the Registry (ISA, PCMCIA) or scanned (PCI).
+ * Resource structure is stored in the registry which will be
+ * later used by the CFG module.
+ */
+extern int drv_request_resources(IN u32 dw_context,
+ OUT u32 *pDevNodeString);
+
+/*
+ * ======== drv_release_resources ========
+ * Purpose:
+ * Releases the resources assigned by drv_request_resources().
+ * Parameters:
+ * dw_context: Path to the driver Registry Key.
+ * hdrv_obj: Handle to the Driver Object.
+ * Returns:
+ * TRUE if success; FALSE otherwise.
+ * Requires:
+ * Ensures:
+ * The Resources are released based on Bus type.
+ * Resource structure is deleted from the registry
+ */
+extern int drv_release_resources(IN u32 dw_context,
+ struct drv_object *hdrv_obj);
+
+/**
+ * drv_request_bridge_res_dsp() - Reserves shared memory for bridge.
+ * @phost_resources: pointer to host resources.
+ */
+int drv_request_bridge_res_dsp(void **phost_resources);
+
+#ifdef CONFIG_BRIDGE_RECOVERY
+void bridge_recover_schedule(void);
+#endif
+
+/*
+ * ======== mem_ext_phys_pool_init ========
+ * Purpose:
+ * Uses the physical memory chunk passed for internal consistent memory
+ * allocations. The physical address is based on the page frame address.
+ * Parameters:
+ * poolPhysBase: Starting address of the physical memory pool.
+ * poolSize: Size of the physical memory pool.
+ * Returns:
+ * none.
+ * Requires:
+ * - MEM initialized.
+ * - valid physical address for the base and size > 0
+ */
+extern void mem_ext_phys_pool_init(IN u32 poolPhysBase, IN u32 poolSize);
+
+/*
+ * ======== mem_ext_phys_pool_release ========
+ */
+extern void mem_ext_phys_pool_release(void);
+
+/* ======== mem_alloc_phys_mem ========
+ * Purpose:
+ * Allocate physically contiguous, uncached memory
+ * Parameters:
+ * byte_size: Number of bytes to allocate.
+ * ulAlign: Alignment Mask.
+ * pPhysicalAddress: Physical address of allocated memory.
+ * Returns:
+ * Pointer to a block of memory;
+ * NULL if memory couldn't be allocated, or if byte_size == 0.
+ * Requires:
+ * MEM initialized.
+ * Ensures:
+ * The returned pointer, if not NULL, points to a valid memory block of
+ * the size requested. Returned physical address refers to physical
+ * location of memory.
+ */
+extern void *mem_alloc_phys_mem(IN u32 byte_size,
+ IN u32 ulAlign, OUT u32 *pPhysicalAddress);
+
+/*
+ * ======== mem_free_phys_mem ========
+ * Purpose:
+ * Free the given block of physically contiguous memory.
+ * Parameters:
+ * pVirtualAddress: Pointer to virtual memory region allocated
+ * by mem_alloc_phys_mem().
+ * pPhysicalAddress: Pointer to physical memory region allocated
+ * by mem_alloc_phys_mem().
+ * byte_size: Size of the memory region allocated by mem_alloc_phys_mem().
+ * Returns:
+ * Requires:
+ * MEM initialized.
+ * pVirtualAddress is a valid memory address returned by
+ * mem_alloc_phys_mem()
+ * Ensures:
+ * pVirtualAddress is no longer a valid pointer to memory.
+ */
+extern void mem_free_phys_mem(void *pVirtualAddress,
+ u32 pPhysicalAddress, u32 byte_size);
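/*
 * Illustrative sketch (not part of this patch): the allocate/free pairing
 * of the two helpers above.  The 4 KiB size, the zero alignment mask and
 * what the buffer is used for are assumptions made for the example.
 */
static void example_phys_buffer(void)
{
	u32 pa;
	void *va = mem_alloc_phys_mem(0x1000, 0, &pa);

	if (!va)
		return;

	/* va is uncached and physically contiguous; pa is its physical
	 * address, suitable for handing to the DSP or a DMA engine. */
	mem_free_phys_mem(va, pa, 0x1000);
}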
+
+/*
+ * ======== MEM_LINEAR_ADDRESS ========
+ * Purpose:
+ * Get the linear address corresponding to the given physical address.
+ * Parameters:
+ * pPhysAddr: Physical address to be mapped.
+ * byte_size: Number of bytes in physical range to map.
+ * Returns:
+ * The corresponding linear address, or NULL if unsuccessful.
+ * Requires:
+ * MEM initialized.
+ * Ensures:
+ * Notes:
+ * If valid linear address is returned, be sure to call
+ * MEM_UNMAP_LINEAR_ADDRESS().
+ */
+#define MEM_LINEAR_ADDRESS(pPhyAddr, byte_size) pPhyAddr
+
+/*
+ * ======== MEM_UNMAP_LINEAR_ADDRESS ========
+ * Purpose:
+ * Unmap the linear address mapped in MEM_LINEAR_ADDRESS.
+ * Parameters:
+ * pBaseAddr: Ptr to mapped memory (as returned by MEM_LINEAR_ADDRESS()).
+ * Returns:
+ * Requires:
+ * - MEM initialized.
+ * - pBaseAddr is a valid linear address mapped in MEM_LINEAR_ADDRESS.
+ * Ensures:
+ * - pBaseAddr no longer points to a valid linear address.
+ */
+#define MEM_UNMAP_LINEAR_ADDRESS(pBaseAddr) {}
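/*
 * Illustrative sketch (not part of this patch): the map/unmap pairing
 * documented above.  On this platform the macros are an identity mapping
 * and a no-op, but callers are still expected to balance them; "pa" and
 * "size" are example names only.
 */
static void example_linear_window(void *pa, u32 size)
{
	void *va = MEM_LINEAR_ADDRESS(pa, size);

	if (va) {
		/* ... access the region through va ... */
		MEM_UNMAP_LINEAR_ADDRESS(va);
	}
}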
+
+#endif /* DRV_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/drvdefs.h b/drivers/staging/tidspbridge/include/dspbridge/drvdefs.h
new file mode 100644
index 0000000..2920917
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/drvdefs.h
@@ -0,0 +1,25 @@
+/*
+ * drvdefs.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Definition of common struct between dspdefs.h and drv.h.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef DRVDEFS_
+#define DRVDEFS_
+
+/* Bridge Driver Object */
+struct drv_object;
+
+#endif /* DRVDEFS_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/dspapi-ioctl.h b/drivers/staging/tidspbridge/include/dspbridge/dspapi-ioctl.h
new file mode 100644
index 0000000..cc4e75b
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/dspapi-ioctl.h
@@ -0,0 +1,475 @@
+/*
+ * dspapi-ioctl.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Contains structures and commands that are used for interaction
+ * between the DDSP API and Bridge driver.
+ *
+ * Copyright (C) 2008 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef DSPAPIIOCTL_
+#define DSPAPIIOCTL_
+
+#include <dspbridge/cmm.h>
+#include <dspbridge/strmdefs.h>
+#include <dspbridge/dbdcd.h>
+
+union Trapped_Args {
+
+ /* MGR Module */
+ struct {
+ u32 node_id;
+ struct dsp_ndbprops __user *pndb_props;
+ u32 undb_props_size;
+ u32 __user *pu_num_nodes;
+ } args_mgr_enumnode_info;
+
+ struct {
+ u32 processor_id;
+ struct dsp_processorinfo __user *processor_info;
+ u32 processor_info_size;
+ u32 __user *pu_num_procs;
+ } args_mgr_enumproc_info;
+
+ struct {
+ struct dsp_uuid *uuid_obj;
+ enum dsp_dcdobjtype obj_type;
+ char *psz_path_name;
+ } args_mgr_registerobject;
+
+ struct {
+ struct dsp_uuid *uuid_obj;
+ enum dsp_dcdobjtype obj_type;
+ } args_mgr_unregisterobject;
+
+ struct {
+ struct dsp_notification __user *__user *anotifications;
+ u32 count;
+ u32 __user *pu_index;
+ u32 utimeout;
+ } args_mgr_wait;
+
+ /* PROC Module */
+ struct {
+ u32 processor_id;
+ struct dsp_processorattrin __user *attr_in;
+ void *__user *ph_processor;
+ } args_proc_attach;
+
+ struct {
+ void *hprocessor;
+ u32 dw_cmd;
+ struct dsp_cbdata __user *pargs;
+ } args_proc_ctrl;
+
+ struct {
+ void *hprocessor;
+ } args_proc_detach;
+
+ struct {
+ void *hprocessor;
+ void *__user *node_tab;
+ u32 node_tab_size;
+ u32 __user *pu_num_nodes;
+ u32 __user *pu_allocated;
+ } args_proc_enumnode_info;
+
+ struct {
+ void *hprocessor;
+ u32 resource_type;
+ struct dsp_resourceinfo *resource_info;
+ u32 resource_info_size;
+ } args_proc_enumresources;
+
+ struct {
+ void *hprocessor;
+ struct dsp_processorstate __user *proc_state_obj;
+ u32 state_info_size;
+ } args_proc_getstate;
+
+ struct {
+ void *hprocessor;
+ u8 __user *pbuf;
+ u8 __user *psize;
+ u32 max_size;
+ } args_proc_gettrace;
+
+ struct {
+ void *hprocessor;
+ s32 argc_index;
+ char __user *__user *user_args;
+ char *__user *user_envp;
+ } args_proc_load;
+
+ struct {
+ void *hprocessor;
+ u32 event_mask;
+ u32 notify_type;
+ struct dsp_notification __user *hnotification;
+ } args_proc_register_notify;
+
+ struct {
+ void *hprocessor;
+ } args_proc_start;
+
+ struct {
+ void *hprocessor;
+ u32 ul_size;
+ void *__user *pp_rsv_addr;
+ } args_proc_rsvmem;
+
+ struct {
+ void *hprocessor;
+ u32 ul_size;
+ void *prsv_addr;
+ } args_proc_unrsvmem;
+
+ struct {
+ void *hprocessor;
+ void *pmpu_addr;
+ u32 ul_size;
+ void *req_addr;
+ void *__user *pp_map_addr;
+ u32 ul_map_attr;
+ } args_proc_mapmem;
+
+ struct {
+ void *hprocessor;
+ u32 ul_size;
+ void *map_addr;
+ } args_proc_unmapmem;
+
+ struct {
+ void *hprocessor;
+ void *pmpu_addr;
+ u32 ul_size;
+ u32 dir;
+ } args_proc_dma;
+
+ struct {
+ void *hprocessor;
+ void *pmpu_addr;
+ u32 ul_size;
+ u32 ul_flags;
+ } args_proc_flushmemory;
+
+ struct {
+ void *hprocessor;
+ } args_proc_stop;
+
+ struct {
+ void *hprocessor;
+ void *pmpu_addr;
+ u32 ul_size;
+ } args_proc_invalidatememory;
+
+ /* NODE Module */
+ struct {
+ void *hprocessor;
+ struct dsp_uuid __user *node_id_ptr;
+ struct dsp_cbdata __user *pargs;
+ struct dsp_nodeattrin __user *attr_in;
+ void *__user *ph_node;
+ } args_node_allocate;
+
+ struct {
+ void *hnode;
+ u32 usize;
+ struct dsp_bufferattr __user *pattr;
+ u8 *__user *pbuffer;
+ } args_node_allocmsgbuf;
+
+ struct {
+ void *hnode;
+ s32 prio;
+ } args_node_changepriority;
+
+ struct {
+ void *hnode;
+ u32 stream_id;
+ void *other_node;
+ u32 other_stream;
+ struct dsp_strmattr __user *pattrs;
+ struct dsp_cbdata __user *conn_param;
+ } args_node_connect;
+
+ struct {
+ void *hnode;
+ } args_node_create;
+
+ struct {
+ void *hnode;
+ } args_node_delete;
+
+ struct {
+ void *hnode;
+ struct dsp_bufferattr __user *pattr;
+ u8 *pbuffer;
+ } args_node_freemsgbuf;
+
+ struct {
+ void *hnode;
+ struct dsp_nodeattr __user *pattr;
+ u32 attr_size;
+ } args_node_getattr;
+
+ struct {
+ void *hnode;
+ struct dsp_msg __user *message;
+ u32 utimeout;
+ } args_node_getmessage;
+
+ struct {
+ void *hnode;
+ } args_node_pause;
+
+ struct {
+ void *hnode;
+ struct dsp_msg __user *message;
+ u32 utimeout;
+ } args_node_putmessage;
+
+ struct {
+ void *hnode;
+ u32 event_mask;
+ u32 notify_type;
+ struct dsp_notification __user *hnotification;
+ } args_node_registernotify;
+
+ struct {
+ void *hnode;
+ } args_node_run;
+
+ struct {
+ void *hnode;
+ int __user *pstatus;
+ } args_node_terminate;
+
+ struct {
+ void *hprocessor;
+ struct dsp_uuid __user *node_id_ptr;
+ struct dsp_ndbprops __user *node_props;
+ } args_node_getuuidprops;
+
+ /* STRM module */
+
+ struct {
+ void *hstream;
+ u32 usize;
+ u8 *__user *ap_buffer;
+ u32 num_bufs;
+ } args_strm_allocatebuffer;
+
+ struct {
+ void *hstream;
+ } args_strm_close;
+
+ struct {
+ void *hstream;
+ u8 *__user *ap_buffer;
+ u32 num_bufs;
+ } args_strm_freebuffer;
+
+ struct {
+ void *hstream;
+ void **ph_event;
+ } args_strm_geteventhandle;
+
+ struct {
+ void *hstream;
+ struct stream_info __user *stream_info;
+ u32 stream_info_size;
+ } args_strm_getinfo;
+
+ struct {
+ void *hstream;
+ bool flush_flag;
+ } args_strm_idle;
+
+ struct {
+ void *hstream;
+ u8 *pbuffer;
+ u32 dw_bytes;
+ u32 dw_buf_size;
+ u32 dw_arg;
+ } args_strm_issue;
+
+ struct {
+ void *hnode;
+ u32 direction;
+ u32 index;
+ struct strm_attr __user *attr_in;
+ void *__user *ph_stream;
+ } args_strm_open;
+
+ struct {
+ void *hstream;
+ u8 *__user *buf_ptr;
+ u32 __user *bytes;
+ u32 __user *buf_size_ptr;
+ u32 __user *pdw_arg;
+ } args_strm_reclaim;
+
+ struct {
+ void *hstream;
+ u32 event_mask;
+ u32 notify_type;
+ struct dsp_notification __user *hnotification;
+ } args_strm_registernotify;
+
+ struct {
+ void *__user *stream_tab;
+ u32 strm_num;
+ u32 __user *pmask;
+ u32 utimeout;
+ } args_strm_select;
+
+ /* CMM Module */
+ struct {
+ struct cmm_object *hcmm_mgr;
+ u32 usize;
+ struct cmm_attrs *pattrs;
+ OUT void **pp_buf_va;
+ } args_cmm_allocbuf;
+
+ struct {
+ struct cmm_object *hcmm_mgr;
+ void *buf_pa;
+ u32 ul_seg_id;
+ } args_cmm_freebuf;
+
+ struct {
+ void *hprocessor;
+ struct cmm_object *__user *ph_cmm_mgr;
+ } args_cmm_gethandle;
+
+ struct {
+ struct cmm_object *hcmm_mgr;
+ struct cmm_info __user *cmm_info_obj;
+ } args_cmm_getinfo;
+
+ /* UTIL module */
+ struct {
+ s32 util_argc;
+ char **pp_argv;
+ } args_util_testdll;
+};
+
+/*
+ * Dspbridge Ioctl numbering scheme
+ *
+ *   7                             0
+ *   ---------------------------------
+ *   |  Module   |   Ioctl Number    |
+ *   ---------------------------------
+ *   | x | x | x | 0 | 0 | 0 | 0 | 0 |
+ *   ---------------------------------
+ */
+
+/* Ioctl driver identifier */
+#define DB 0xDB
+
+/*
+ * Following are used to distinguish between module ioctls, this is needed
+ * in case new ioctls are introduced.
+ */
+#define DB_MODULE_MASK 0xE0
+#define DB_IOC_MASK 0x1F
+
+/* Ioctl module masks */
+#define DB_MGR 0x0
+#define DB_PROC 0x20
+#define DB_NODE 0x40
+#define DB_STRM 0x60
+#define DB_CMM 0x80
+
+#define DB_MODULE_SHIFT 5
+
+/* Used to calculate the ioctl per dspbridge module */
+#define DB_IOC(module, num) \
+ (((module) & DB_MODULE_MASK) | ((num) & DB_IOC_MASK))
+/* Used to get dspbridge ioctl module */
+#define DB_GET_MODULE(cmd) ((cmd) & DB_MODULE_MASK)
+/* Used to get dspbridge ioctl number */
+#define DB_GET_IOC(cmd) ((cmd) & DB_IOC_MASK)
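/*
 * Worked example (not part of this patch) of the encoding above, assuming
 * the PROC module and per-module command 12:
 *
 *   DB_IOC(DB_PROC, 12)  == (0x20 & 0xE0) | (12 & 0x1F) == 0x2c
 *   DB_GET_MODULE(0x2c)  == 0x20  == DB_PROC
 *   DB_GET_IOC(0x2c)     == 0x0c  == 12
 *
 * i.e. the top three bits select the module and the low five bits select
 * the command within that module.
 */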
+
+/* TODO: Remove deprecated and not implemented */
+
+/* MGR Module */
+#define MGR_ENUMNODE_INFO _IOWR(DB, DB_IOC(DB_MGR, 0), unsigned long)
+#define MGR_ENUMPROC_INFO _IOWR(DB, DB_IOC(DB_MGR, 1), unsigned long)
+#define MGR_REGISTEROBJECT _IOWR(DB, DB_IOC(DB_MGR, 2), unsigned long)
+#define MGR_UNREGISTEROBJECT _IOWR(DB, DB_IOC(DB_MGR, 3), unsigned long)
+#define MGR_WAIT _IOWR(DB, DB_IOC(DB_MGR, 4), unsigned long)
+/* MGR_GET_PROC_RES Deprecated */
+#define MGR_GET_PROC_RES _IOR(DB, DB_IOC(DB_MGR, 5), unsigned long)
+
+/* PROC Module */
+#define PROC_ATTACH _IOWR(DB, DB_IOC(DB_PROC, 0), unsigned long)
+#define PROC_CTRL _IOR(DB, DB_IOC(DB_PROC, 1), unsigned long)
+/* PROC_DETACH Deprecated */
+#define PROC_DETACH _IOR(DB, DB_IOC(DB_PROC, 2), unsigned long)
+#define PROC_ENUMNODE _IOWR(DB, DB_IOC(DB_PROC, 3), unsigned long)
+#define PROC_ENUMRESOURCES _IOWR(DB, DB_IOC(DB_PROC, 4), unsigned long)
+#define PROC_GET_STATE _IOWR(DB, DB_IOC(DB_PROC, 5), unsigned long)
+#define PROC_GET_TRACE _IOWR(DB, DB_IOC(DB_PROC, 6), unsigned long)
+#define PROC_LOAD _IOW(DB, DB_IOC(DB_PROC, 7), unsigned long)
+#define PROC_REGISTERNOTIFY _IOWR(DB, DB_IOC(DB_PROC, 8), unsigned long)
+#define PROC_START _IOW(DB, DB_IOC(DB_PROC, 9), unsigned long)
+#define PROC_RSVMEM _IOWR(DB, DB_IOC(DB_PROC, 10), unsigned long)
+#define PROC_UNRSVMEM _IOW(DB, DB_IOC(DB_PROC, 11), unsigned long)
+#define PROC_MAPMEM _IOWR(DB, DB_IOC(DB_PROC, 12), unsigned long)
+#define PROC_UNMAPMEM _IOR(DB, DB_IOC(DB_PROC, 13), unsigned long)
+#define PROC_FLUSHMEMORY _IOW(DB, DB_IOC(DB_PROC, 14), unsigned long)
+#define PROC_STOP _IOWR(DB, DB_IOC(DB_PROC, 15), unsigned long)
+#define PROC_INVALIDATEMEMORY _IOW(DB, DB_IOC(DB_PROC, 16), unsigned long)
+#define PROC_BEGINDMA _IOW(DB, DB_IOC(DB_PROC, 17), unsigned long)
+#define PROC_ENDDMA _IOW(DB, DB_IOC(DB_PROC, 18), unsigned long)
+
+/* NODE Module */
+#define NODE_ALLOCATE _IOWR(DB, DB_IOC(DB_NODE, 0), unsigned long)
+#define NODE_ALLOCMSGBUF _IOWR(DB, DB_IOC(DB_NODE, 1), unsigned long)
+#define NODE_CHANGEPRIORITY _IOW(DB, DB_IOC(DB_NODE, 2), unsigned long)
+#define NODE_CONNECT _IOW(DB, DB_IOC(DB_NODE, 3), unsigned long)
+#define NODE_CREATE _IOW(DB, DB_IOC(DB_NODE, 4), unsigned long)
+#define NODE_DELETE _IOW(DB, DB_IOC(DB_NODE, 5), unsigned long)
+#define NODE_FREEMSGBUF _IOW(DB, DB_IOC(DB_NODE, 6), unsigned long)
+#define NODE_GETATTR _IOWR(DB, DB_IOC(DB_NODE, 7), unsigned long)
+#define NODE_GETMESSAGE _IOWR(DB, DB_IOC(DB_NODE, 8), unsigned long)
+#define NODE_PAUSE _IOW(DB, DB_IOC(DB_NODE, 9), unsigned long)
+#define NODE_PUTMESSAGE _IOW(DB, DB_IOC(DB_NODE, 10), unsigned long)
+#define NODE_REGISTERNOTIFY _IOWR(DB, DB_IOC(DB_NODE, 11), unsigned long)
+#define NODE_RUN _IOW(DB, DB_IOC(DB_NODE, 12), unsigned long)
+#define NODE_TERMINATE _IOWR(DB, DB_IOC(DB_NODE, 13), unsigned long)
+#define NODE_GETUUIDPROPS _IOWR(DB, DB_IOC(DB_NODE, 14), unsigned long)
+
+/* STRM Module */
+#define STRM_ALLOCATEBUFFER _IOWR(DB, DB_IOC(DB_STRM, 0), unsigned long)
+#define STRM_CLOSE _IOW(DB, DB_IOC(DB_STRM, 1), unsigned long)
+#define STRM_FREEBUFFER _IOWR(DB, DB_IOC(DB_STRM, 2), unsigned long)
+#define STRM_GETEVENTHANDLE _IO(DB, DB_IOC(DB_STRM, 3)) /* Not Impl'd */
+#define STRM_GETINFO _IOWR(DB, DB_IOC(DB_STRM, 4), unsigned long)
+#define STRM_IDLE _IOW(DB, DB_IOC(DB_STRM, 5), unsigned long)
+#define STRM_ISSUE _IOW(DB, DB_IOC(DB_STRM, 6), unsigned long)
+#define STRM_OPEN _IOWR(DB, DB_IOC(DB_STRM, 7), unsigned long)
+#define STRM_RECLAIM _IOWR(DB, DB_IOC(DB_STRM, 8), unsigned long)
+#define STRM_REGISTERNOTIFY _IOWR(DB, DB_IOC(DB_STRM, 9), unsigned long)
+#define STRM_SELECT _IOWR(DB, DB_IOC(DB_STRM, 10), unsigned long)
+
+/* CMM Module */
+#define CMM_ALLOCBUF _IO(DB, DB_IOC(DB_CMM, 0)) /* Not Impl'd */
+#define CMM_FREEBUF _IO(DB, DB_IOC(DB_CMM, 1)) /* Not Impl'd */
+#define CMM_GETHANDLE _IOR(DB, DB_IOC(DB_CMM, 2), unsigned long)
+#define CMM_GETINFO _IOR(DB, DB_IOC(DB_CMM, 3), unsigned long)
+
+#endif /* DSPAPIIOCTL_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/dspapi.h b/drivers/staging/tidspbridge/include/dspbridge/dspapi.h
new file mode 100644
index 0000000..f84ac69
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/dspapi.h
@@ -0,0 +1,167 @@
+/*
+ * dspapi.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Includes the wrapper functions called directly by the
+ * DeviceIOControl interface.
+ *
+ * Notes:
+ * Bridge services exported to Bridge driver are initialized by the DSPAPI on
+ * behalf of the Bridge driver. Bridge driver must not call module Init/Exit
+ * functions.
+ *
+ * To ensure Bridge driver binary compatibility across different platforms,
+ * for the same processor, a Bridge driver must restrict its usage of system
+ * services to those exported by the DSPAPI library.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef DSPAPI_
+#define DSPAPI_
+
+#include <dspbridge/dspapi-ioctl.h>
+
+/* This BRD API Library Version: */
+#define BRD_API_MAJOR_VERSION (u32)8 /* .8x - Alpha, .9x - Beta, 1.x FCS */
+#define BRD_API_MINOR_VERSION (u32)0
+
+/*
+ * ======== api_call_dev_ioctl ========
+ * Purpose:
+ * Call the (wrapper) function for the corresponding API IOCTL.
+ * Parameters:
+ * cmd: IOCTL id, base 0.
+ * args: Argument structure.
+ * pResult:
+ * Returns:
+ * 0 if command called; -EINVAL if command not in IOCTL
+ * table.
+ * Requires:
+ * Ensures:
+ */
+extern int api_call_dev_ioctl(unsigned int cmd,
+ union Trapped_Args *args,
+ u32 *pResult, void *pr_ctxt);
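/*
 * Illustrative sketch (not part of this patch): how a character-device
 * ioctl handler might hand a decoded command to api_call_dev_ioctl().
 * The copy_from_user() staging, the user pointer "arg" and the process
 * context "pr_ctxt" are assumptions made for the example (needs
 * <linux/uaccess.h> and <linux/ioctl.h> in a real caller).
 */
static long example_forward_ioctl(unsigned int cmd, unsigned long arg,
				  void *pr_ctxt)
{
	union Trapped_Args args;
	u32 retval = 0;
	int status;

	if (copy_from_user(&args, (void __user *)arg, sizeof(args)))
		return -EFAULT;

	/* _IOC_NR() recovers the base-0 command id expected by the API layer */
	status = api_call_dev_ioctl(_IOC_NR(cmd), &args, &retval, pr_ctxt);

	return status ? status : retval;
}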
+
+/*
+ * ======== api_init ========
+ * Purpose:
+ * Initialize modules used by Bridge API.
+ * This procedure is called when the driver is loaded.
+ * Parameters:
+ * Returns:
+ * TRUE if success; FALSE otherwise.
+ * Requires:
+ * Ensures:
+ */
+extern bool api_init(void);
+
+/*
+ * ======== api_init_complete2 ========
+ * Purpose:
+ * Perform any required bridge initialization which cannot
+ * be performed in api_init() or dev_start_device() because
+ * some services are not yet completely initialized.
+ * Parameters:
+ * Returns:
+ * 0: Allow this device to load
+ * -EPERM: Failure.
+ * Requires:
+ * Bridge API initialized.
+ * Ensures:
+ */
+extern int api_init_complete2(void);
+
+/*
+ * ======== api_exit ========
+ * Purpose:
+ * Exit all modules initialized in api_init(void).
+ * This procedure is called when the driver is unloaded.
+ * Parameters:
+ * Returns:
+ * Requires:
+ * api_init(void) was previously called.
+ * Ensures:
+ * Resources acquired in api_init(void) are freed.
+ */
+extern void api_exit(void);
+
+/* MGR wrapper functions */
+extern u32 mgrwrap_enum_node_info(union Trapped_Args *args, void *pr_ctxt);
+extern u32 mgrwrap_enum_proc_info(union Trapped_Args *args, void *pr_ctxt);
+extern u32 mgrwrap_register_object(union Trapped_Args *args, void *pr_ctxt);
+extern u32 mgrwrap_unregister_object(union Trapped_Args *args, void *pr_ctxt);
+extern u32 mgrwrap_wait_for_bridge_events(union Trapped_Args *args,
+ void *pr_ctxt);
+
+extern u32 mgrwrap_get_process_resources_info(union Trapped_Args *args,
+ void *pr_ctxt);
+
+/* CPRC (Processor) wrapper Functions */
+extern u32 procwrap_attach(union Trapped_Args *args, void *pr_ctxt);
+extern u32 procwrap_ctrl(union Trapped_Args *args, void *pr_ctxt);
+extern u32 procwrap_detach(union Trapped_Args *args, void *pr_ctxt);
+extern u32 procwrap_enum_node_info(union Trapped_Args *args, void *pr_ctxt);
+extern u32 procwrap_enum_resources(union Trapped_Args *args, void *pr_ctxt);
+extern u32 procwrap_get_state(union Trapped_Args *args, void *pr_ctxt);
+extern u32 procwrap_get_trace(union Trapped_Args *args, void *pr_ctxt);
+extern u32 procwrap_load(union Trapped_Args *args, void *pr_ctxt);
+extern u32 procwrap_register_notify(union Trapped_Args *args, void *pr_ctxt);
+extern u32 procwrap_start(union Trapped_Args *args, void *pr_ctxt);
+extern u32 procwrap_reserve_memory(union Trapped_Args *args, void *pr_ctxt);
+extern u32 procwrap_un_reserve_memory(union Trapped_Args *args, void *pr_ctxt);
+extern u32 procwrap_map(union Trapped_Args *args, void *pr_ctxt);
+extern u32 procwrap_un_map(union Trapped_Args *args, void *pr_ctxt);
+extern u32 procwrap_flush_memory(union Trapped_Args *args, void *pr_ctxt);
+extern u32 procwrap_stop(union Trapped_Args *args, void *pr_ctxt);
+extern u32 procwrap_invalidate_memory(union Trapped_Args *args, void *pr_ctxt);
+extern u32 procwrap_begin_dma(union Trapped_Args *args, void *pr_ctxt);
+extern u32 procwrap_end_dma(union Trapped_Args *args, void *pr_ctxt);
+
+/* NODE wrapper functions */
+extern u32 nodewrap_allocate(union Trapped_Args *args, void *pr_ctxt);
+extern u32 nodewrap_alloc_msg_buf(union Trapped_Args *args, void *pr_ctxt);
+extern u32 nodewrap_change_priority(union Trapped_Args *args, void *pr_ctxt);
+extern u32 nodewrap_connect(union Trapped_Args *args, void *pr_ctxt);
+extern u32 nodewrap_create(union Trapped_Args *args, void *pr_ctxt);
+extern u32 nodewrap_delete(union Trapped_Args *args, void *pr_ctxt);
+extern u32 nodewrap_free_msg_buf(union Trapped_Args *args, void *pr_ctxt);
+extern u32 nodewrap_get_attr(union Trapped_Args *args, void *pr_ctxt);
+extern u32 nodewrap_get_message(union Trapped_Args *args, void *pr_ctxt);
+extern u32 nodewrap_pause(union Trapped_Args *args, void *pr_ctxt);
+extern u32 nodewrap_put_message(union Trapped_Args *args, void *pr_ctxt);
+extern u32 nodewrap_register_notify(union Trapped_Args *args, void *pr_ctxt);
+extern u32 nodewrap_run(union Trapped_Args *args, void *pr_ctxt);
+extern u32 nodewrap_terminate(union Trapped_Args *args, void *pr_ctxt);
+extern u32 nodewrap_get_uuid_props(union Trapped_Args *args, void *pr_ctxt);
+
+/* STRM wrapper functions */
+extern u32 strmwrap_allocate_buffer(union Trapped_Args *args, void *pr_ctxt);
+extern u32 strmwrap_close(union Trapped_Args *args, void *pr_ctxt);
+extern u32 strmwrap_free_buffer(union Trapped_Args *args, void *pr_ctxt);
+extern u32 strmwrap_get_event_handle(union Trapped_Args *args, void *pr_ctxt);
+extern u32 strmwrap_get_info(union Trapped_Args *args, void *pr_ctxt);
+extern u32 strmwrap_idle(union Trapped_Args *args, void *pr_ctxt);
+extern u32 strmwrap_issue(union Trapped_Args *args, void *pr_ctxt);
+extern u32 strmwrap_open(union Trapped_Args *args, void *pr_ctxt);
+extern u32 strmwrap_reclaim(union Trapped_Args *args, void *pr_ctxt);
+extern u32 strmwrap_register_notify(union Trapped_Args *args, void *pr_ctxt);
+extern u32 strmwrap_select(union Trapped_Args *args, void *pr_ctxt);
+
+extern u32 cmmwrap_calloc_buf(union Trapped_Args *args, void *pr_ctxt);
+extern u32 cmmwrap_free_buf(union Trapped_Args *args, void *pr_ctxt);
+extern u32 cmmwrap_get_handle(union Trapped_Args *args, void *pr_ctxt);
+extern u32 cmmwrap_get_info(union Trapped_Args *args, void *pr_ctxt);
+
+#endif /* DSPAPI_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/dspchnl.h b/drivers/staging/tidspbridge/include/dspbridge/dspchnl.h
new file mode 100644
index 0000000..5661bca
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/dspchnl.h
@@ -0,0 +1,72 @@
+/*
+ * dspchnl.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Declares the upper edge channel class library functions required by
+ * all Bridge driver / DSP API driver interface tables. These functions are
+ * implemented by every class of Bridge channel library.
+ *
+ * Notes:
+ * The function comment headers reside in dspdefs.h.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef DSPCHNL_
+#define DSPCHNL_
+
+extern int bridge_chnl_create(OUT struct chnl_mgr **phChnlMgr,
+ struct dev_object *hdev_obj,
+ IN CONST struct chnl_mgrattrs
+ *pMgrAttrs);
+
+extern int bridge_chnl_destroy(struct chnl_mgr *hchnl_mgr);
+
+extern int bridge_chnl_open(OUT struct chnl_object **phChnl,
+ struct chnl_mgr *hchnl_mgr,
+ s8 chnl_mode,
+ u32 uChnlId,
+ CONST IN OPTIONAL struct chnl_attr
+ *pattrs);
+
+extern int bridge_chnl_close(struct chnl_object *chnl_obj);
+
+extern int bridge_chnl_add_io_req(struct chnl_object *chnl_obj,
+ void *pHostBuf,
+ u32 byte_size, u32 buf_size,
+ OPTIONAL u32 dw_dsp_addr, u32 dw_arg);
+
+extern int bridge_chnl_get_ioc(struct chnl_object *chnl_obj,
+ u32 dwTimeOut, OUT struct chnl_ioc *pIOC);
+
+extern int bridge_chnl_cancel_io(struct chnl_object *chnl_obj);
+
+extern int bridge_chnl_flush_io(struct chnl_object *chnl_obj,
+ u32 dwTimeOut);
+
+extern int bridge_chnl_get_info(struct chnl_object *chnl_obj,
+ OUT struct chnl_info *pInfo);
+
+extern int bridge_chnl_get_mgr_info(struct chnl_mgr *hchnl_mgr,
+ u32 uChnlID, OUT struct chnl_mgrinfo
+ *pMgrInfo);
+
+extern int bridge_chnl_idle(struct chnl_object *chnl_obj,
+ u32 dwTimeOut, bool fFlush);
+
+extern int bridge_chnl_register_notify(struct chnl_object *chnl_obj,
+ u32 event_mask,
+ u32 notify_type,
+ struct dsp_notification
+ *hnotification);
+
+#endif /* DSPCHNL_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/dspdefs.h b/drivers/staging/tidspbridge/include/dspbridge/dspdefs.h
new file mode 100644
index 0000000..493f62e
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/dspdefs.h
@@ -0,0 +1,1128 @@
+/*
+ * dspdefs.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Bridge driver entry point and interface function declarations.
+ *
+ * Notes:
+ * The DSP API obtains its function interface to
+ * the Bridge driver via a call to bridge_drv_entry().
+ *
+ * Bridge services exported to Bridge drivers are initialized by the
+ * DSP API on behalf of the Bridge driver.
+ *
+ * Bridge function DBC Requires and Ensures are also made by the DSP API on
+ * behalf of the Bridge driver, to simplify the Bridge driver code.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef DSPDEFS_
+#define DSPDEFS_
+
+#include <dspbridge/brddefs.h>
+#include <dspbridge/cfgdefs.h>
+#include <dspbridge/chnlpriv.h>
+#include <dspbridge/dehdefs.h>
+#include <dspbridge/devdefs.h>
+#include <dspbridge/iodefs.h>
+#include <dspbridge/msgdefs.h>
+
+/*
+ * Any IOCTLS at or above this value are reserved for standard Bridge driver
+ * interfaces.
+ */
+#define BRD_RESERVEDIOCTLBASE 0x8000
+
+/* Handle to Bridge driver's private device context. */
+struct bridge_dev_context;
+
+/*--------------------------------------------------------------------------- */
+/* BRIDGE DRIVER FUNCTION TYPES */
+/*--------------------------------------------------------------------------- */
+
+/*
+ * ======== bridge_brd_monitor ========
+ * Purpose:
+ * Bring the board to the BRD_IDLE (monitor) state.
+ * Parameters:
+ * hDevContext: Handle to Bridge driver defined device context.
+ * Returns:
+ * 0: Success.
+ * -ETIMEDOUT: Timeout occurred waiting for a response from hardware.
+ * -EPERM: Other, unspecified error.
+ * Requires:
+ * hDevContext != NULL
+ * Ensures:
+ * 0: Board is in BRD_IDLE state;
+ * else: Board state is indeterminate.
+ */
+typedef int(*fxn_brd_monitor) (struct bridge_dev_context *hDevContext);
+
+/*
+ * ======== fxn_brd_setstate ========
+ * Purpose:
+ * Sets the Bridge driver state
+ * Parameters:
+ * hDevContext: Handle to Bridge driver defined device info.
+ * ulBrdState: Board state
+ * Returns:
+ * 0: Success.
+ * -EPERM: Other, unspecified error.
+ * Requires:
+ * hDevContext != NULL;
+ * ulBrdState <= BRD_LASTSTATE.
+ * Ensures:
+ * ulBrdState <= BRD_LASTSTATE.
+ * Update the Board state to the specified state.
+ */
+typedef int(*fxn_brd_setstate) (struct bridge_dev_context
+ * hDevContext, u32 ulBrdState);
+
+/*
+ * ======== bridge_brd_start ========
+ * Purpose:
+ * Bring board to the BRD_RUNNING (start) state.
+ * Parameters:
+ * hDevContext: Handle to Bridge driver defined device context.
+ * dwDSPAddr: DSP address at which to start execution.
+ * Returns:
+ * 0: Success.
+ * -ETIMEDOUT: Timeout occurred waiting for a response from hardware.
+ * -EPERM: Other, unspecified error.
+ * Requires:
+ * hDevContext != NULL
+ * Board is in monitor (BRD_IDLE) state.
+ * Ensures:
+ * 0: Board is in BRD_RUNNING state.
+ * Interrupts to the PC are enabled.
+ * else: Board state is indeterminate.
+ */
+typedef int(*fxn_brd_start) (struct bridge_dev_context
+ * hDevContext, u32 dwDSPAddr);
+
+/*
+ * ======== bridge_brd_mem_copy ========
+ * Purpose:
+ * Copy memory from one DSP address to another
+ * Parameters:
+ * dev_context: Pointer to context handle
+ * ulDspDestAddr: DSP address to copy to
+ * ulDspSrcAddr: DSP address to copy from
+ * ul_num_bytes: Number of bytes to copy
+ * ulMemType: What section of memory to copy to
+ * Returns:
+ * 0: Success.
+ * -EPERM: Other, unspecified error.
+ * Requires:
+ * dev_context != NULL
+ * Ensures:
+ * 0: Board is in BRD_RUNNING state.
+ * Interrupts to the PC are enabled.
+ * else: Board state is indeterminate.
+ */
+typedef int(*fxn_brd_memcopy) (struct bridge_dev_context
+ * hDevContext,
+ u32 ulDspDestAddr,
+ u32 ulDspSrcAddr,
+ u32 ul_num_bytes, u32 ulMemType);
+/*
+ * ======== bridge_brd_mem_write ========
+ * Purpose:
+ * Write a block of host memory into a DSP address, into a given memory
+ * space. Unlike bridge_brd_write, this API does reset the DSP
+ * Parameters:
+ * hDevContext: Handle to Bridge driver defined device info.
+ * dwDSPAddr: Address on DSP board (Destination).
+ * pHostBuf: Pointer to host buffer (Source).
+ * ul_num_bytes: Number of bytes to transfer.
+ * ulMemType: Memory space on DSP to which to transfer.
+ * Returns:
+ * 0: Success.
+ * -ETIMEDOUT: Timeout occurred waiting for a response from hardware.
+ * -EPERM: Other, unspecified error.
+ * Requires:
+ * hDevContext != NULL;
+ * pHostBuf != NULL.
+ * Ensures:
+ */
+typedef int(*fxn_brd_memwrite) (struct bridge_dev_context
+ * hDevContext,
+ IN u8 *pHostBuf,
+ u32 dwDSPAddr, u32 ul_num_bytes,
+ u32 ulMemType);
+
+/*
+ * ======== bridge_brd_mem_map ========
+ * Purpose:
+ * Map a MPU memory region to a DSP/IVA memory space
+ * Parameters:
+ * hDevContext: Handle to Bridge driver defined device info.
+ * ul_mpu_addr: MPU memory region start address.
+ * ulVirtAddr: DSP/IVA memory region u8 address.
+ * ul_num_bytes: Number of bytes to map.
+ * map_attrs: Mapping attributes (e.g. endianness).
+ * Returns:
+ * 0: Success.
+ * -EPERM: Other, unspecified error.
+ * Requires:
+ * hDevContext != NULL;
+ * Ensures:
+ */
+typedef int(*fxn_brd_memmap) (struct bridge_dev_context
+ * hDevContext, u32 ul_mpu_addr,
+ u32 ulVirtAddr, u32 ul_num_bytes,
+ u32 ulMapAttrs,
+ struct page **mapped_pages);
+
+/*
+ * ======== bridge_brd_mem_un_map ========
+ * Purpose:
+ * UnMap an MPU memory region from DSP/IVA memory space
+ * Parameters:
+ * hDevContext: Handle to Bridge driver defined device info.
+ * ulVirtAddr: DSP/IVA memory region u8 address.
+ * ul_num_bytes: Number of bytes to unmap.
+ * Returns:
+ * 0: Success.
+ * -EPERM: Other, unspecified error.
+ * Requires:
+ * hDevContext != NULL;
+ * Ensures:
+ */
+typedef int(*fxn_brd_memunmap) (struct bridge_dev_context
+ * hDevContext,
+ u32 ulVirtAddr, u32 ul_num_bytes);
+
+/*
+ * ======== bridge_brd_stop ========
+ * Purpose:
+ * Bring board to the BRD_STOPPED state.
+ * Parameters:
+ * hDevContext: Handle to Bridge driver defined device context.
+ * Returns:
+ * 0: Success.
+ * -ETIMEDOUT: Timeout occurred waiting for a response from hardware.
+ * -EPERM: Other, unspecified error.
+ * Requires:
+ * hDevContext != NULL
+ * Ensures:
+ * 0: Board is in BRD_STOPPED (stop) state;
+ * Interrupts to the PC are disabled.
+ * else: Board state is indeterminate.
+ */
+typedef int(*fxn_brd_stop) (struct bridge_dev_context *hDevContext);
+
+/*
+ * ======== bridge_brd_status ========
+ * Purpose:
+ * Report the current state of the board.
+ * Parameters:
+ * hDevContext: Handle to Bridge driver defined device context.
+ * pdwState: Ptr to BRD status variable.
+ * Returns:
+ * 0:
+ * Requires:
+ * pdwState != NULL;
+ * hDevContext != NULL
+ * Ensures:
+ * *pdwState is one of {BRD_STOPPED, BRD_IDLE, BRD_RUNNING, BRD_UNKNOWN};
+ */
+typedef int(*fxn_brd_status) (struct bridge_dev_context *hDevContext,
+ int *pdwState);
+
+/*
+ * ======== bridge_brd_read ========
+ * Purpose:
+ * Read a block of DSP memory, from a given memory space, into a host
+ * buffer.
+ * Parameters:
+ * hDevContext: Handle to Bridge driver defined device info.
+ * pHostBuf: Pointer to host buffer (Destination).
+ * dwDSPAddr: Address on DSP board (Source).
+ * ul_num_bytes: Number of bytes to transfer.
+ * ulMemType: Memory space on DSP from which to transfer.
+ * Returns:
+ * 0: Success.
+ * -ETIMEDOUT: Timeout occurred waiting for a response from hardware.
+ * -EPERM: Other, unspecified error.
+ * Requires:
+ * hDevContext != NULL;
+ * pHostBuf != NULL.
+ * Ensures:
+ * Will not write more than ul_num_bytes bytes into pHostBuf.
+ */
+typedef int(*fxn_brd_read) (struct bridge_dev_context *hDevContext,
+ OUT u8 *pHostBuf,
+ u32 dwDSPAddr,
+ u32 ul_num_bytes, u32 ulMemType);
+
+/*
+ * ======== bridge_brd_write ========
+ * Purpose:
+ * Write a block of host memory into a DSP address, into a given memory
+ * space.
+ * Parameters:
+ * hDevContext: Handle to Bridge driver defined device info.
+ * dwDSPAddr: Address on DSP board (Destination).
+ * pHostBuf: Pointer to host buffer (Source).
+ * ul_num_bytes: Number of bytes to transfer.
+ * ulMemType: Memory space on DSP to which to transfer.
+ * Returns:
+ * 0: Success.
+ * -ETIMEDOUT: Timeout occurred waiting for a response from hardware.
+ * -EPERM: Other, unspecified error.
+ * Requires:
+ * hDevContext != NULL;
+ * pHostBuf != NULL.
+ * Ensures:
+ */
+typedef int(*fxn_brd_write) (struct bridge_dev_context *hDevContext,
+ IN u8 *pHostBuf,
+ u32 dwDSPAddr,
+ u32 ul_num_bytes, u32 ulMemType);
+
+/*
+ * ======== bridge_chnl_create ========
+ * Purpose:
+ * Create a channel manager object, responsible for opening new channels
+ * and closing old ones for a given 'Bridge board.
+ * Parameters:
+ * phChnlMgr: Location to store a channel manager object on output.
+ * hdev_obj: Handle to a device object.
+ * pMgrAttrs: Channel manager attributes.
+ * pMgrAttrs->max_channels: Max channels
+ * pMgrAttrs->birq: Channel's I/O IRQ number.
+ * pMgrAttrs->irq_shared: TRUE if the IRQ is shareable.
+ * pMgrAttrs->word_size: DSP Word size in equivalent PC bytes.
+ * pMgrAttrs->shm_base: Base physical address of shared memory, if any.
+ * pMgrAttrs->usm_length: Bytes of shared memory block.
+ * Returns:
+ * 0: Success;
+ * -ENOMEM: Insufficient memory for requested resources.
+ * -EIO: Unable to plug ISR for given IRQ.
+ * -EFAULT: Couldn't map physical address to a virtual one.
+ * Requires:
+ * phChnlMgr != NULL.
+ * pMgrAttrs != NULL
+ * pMgrAttrs fields are all valid:
+ * 0 < max_channels <= CHNL_MAXCHANNELS.
+ * birq <= 15.
+ * word_size > 0.
+ * hdev_obj != NULL
+ * No channel manager exists for this board.
+ * Ensures:
+ */
+typedef int(*fxn_chnl_create) (OUT struct chnl_mgr
+ **phChnlMgr,
+ struct dev_object
+ * hdev_obj,
+ IN CONST struct
+ chnl_mgrattrs * pMgrAttrs);
+
+/*
+ * ======== bridge_chnl_destroy ========
+ * Purpose:
+ * Close all open channels, and destroy the channel manager.
+ * Parameters:
+ * hchnl_mgr: Channel manager object.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: hchnl_mgr was invalid.
+ * Requires:
+ * Ensures:
+ * 0: Cancels I/O on each open channel. Closes each open channel.
+ * chnl_create may subsequently be called for the same device.
+ */
+typedef int(*fxn_chnl_destroy) (struct chnl_mgr *hchnl_mgr);
+/*
+ * ======== bridge_deh_notify ========
+ * Purpose:
+ * When notified of DSP error, take appropriate action.
+ * Parameters:
+ * hdeh_mgr: Handle to DEH manager object.
+ * ulEventMask: Indicate the type of exception
+ * dwErrInfo: Error information
+ * Returns:
+ *
+ * Requires:
+ * hdeh_mgr != NULL;
+ * ulEventMask with a valid exception
+ * Ensures:
+ */
+typedef void (*fxn_deh_notify) (struct deh_mgr *hdeh_mgr,
+ u32 ulEventMask, u32 dwErrInfo);
+
+/*
+ * ======== bridge_chnl_open ========
+ * Purpose:
+ * Open a new half-duplex channel to the DSP board.
+ * Parameters:
+ * phChnl: Location to store a channel object handle.
+ * hchnl_mgr: Handle to channel manager, as returned by
+ * CHNL_GetMgr().
+ * chnl_mode: One of {CHNL_MODETODSP, CHNL_MODEFROMDSP} specifies
+ * direction of data transfer.
+ * uChnlId: If CHNL_PICKFREE is specified, the channel manager will
+ * select a free channel id (default);
+ * otherwise this field specifies the id of the channel.
+ * pattrs: Channel attributes. Attribute fields are as follows:
+ * pattrs->uio_reqs: Specifies the maximum number of I/O requests which can
+ * be pending at any given time. All request packets are
+ * preallocated when the channel is opened.
+ * pattrs->event_obj: This field allows the user to supply an auto reset
+ * event object for channel I/O completion notifications.
+ * It is the responsibility of the user to destroy this
+ * object AFTER closing the channel.
+ * This channel event object can be retrieved using
+ * CHNL_GetEventHandle().
+ * pattrs->hReserved: The kernel mode handle of this event object.
+ *
+ * Returns:
+ * 0: Success.
+ * -EFAULT: hchnl_mgr is invalid.
+ * -ENOMEM: Insufficient memory for requested resources.
+ * -EINVAL: Invalid number of IOReqs.
+ * -ENOSR: No free channels available.
+ * -ECHRNG: Channel ID is out of range.
+ * -EALREADY: Channel is in use.
+ * -EIO: No free IO request packets available for
+ * queuing.
+ * Requires:
+ * phChnl != NULL.
+ * pattrs != NULL.
+ * pattrs->event_obj is a valid event handle.
+ * pattrs->hReserved is the kernel mode handle for pattrs->event_obj.
+ * Ensures:
+ * 0: *phChnl is a valid channel.
+ * else: *phChnl is set to NULL if (phChnl != NULL);
+ */
+typedef int(*fxn_chnl_open) (OUT struct chnl_object
+ **phChnl,
+ struct chnl_mgr *hchnl_mgr,
+ s8 chnl_mode,
+ u32 uChnlId,
+ CONST IN OPTIONAL struct
+ chnl_attr * pattrs);
+
+/*
+ * ======== bridge_chnl_close ========
+ * Purpose:
+ * Ensures all pending I/O on this channel is cancelled, discards all
+ * queued I/O completion notifications, then frees the resources allocated
+ * for this channel, and makes the corresponding logical channel id
+ * available for subsequent use.
+ * Parameters:
+ * chnl_obj: Handle to a channel object.
+ * Returns:
+ * 0: Success;
+ * -EFAULT: Invalid chnl_obj.
+ * Requires:
+ * No thread must be blocked on this channel's I/O completion event.
+ * Ensures:
+ * 0: chnl_obj is no longer valid.
+ */
+typedef int(*fxn_chnl_close) (struct chnl_object *chnl_obj);
+
+/*
+ * ======== bridge_chnl_add_io_req ========
+ * Purpose:
+ * Enqueue an I/O request for data transfer on a channel to the DSP.
+ * The direction (mode) is specified in the channel object. Note the DSP
+ * address is specified for channels opened in direct I/O mode.
+ * Parameters:
+ * chnl_obj: Channel object handle.
+ * pHostBuf: Host buffer address source.
+ * byte_size: Number of PC bytes to transfer. A zero value indicates
+ * that this buffer is the last in the output channel.
+ * A zero value is invalid for an input channel.
+ * buf_size: Actual buffer size in host bytes.
+ * dw_dsp_addr: DSP address for transfer. (Currently ignored).
+ * dw_arg: A user argument that travels with the buffer.
+ * Returns:
+ * 0: Success;
+ * -EFAULT: Invalid chnl_obj or pHostBuf.
+ * -EPERM: User cannot mark EOS on an input channel.
+ * -ECANCELED: I/O has been cancelled on this channel. No further
+ * I/O is allowed.
+ * -EPIPE: End of stream was already marked on a previous
+ * IORequest on this channel. No further I/O is expected.
+ * -EINVAL: Buffer submitted to this output channel is larger than
+ * the size of the physical shared memory output window.
+ * Requires:
+ * Ensures:
+ * 0: The buffer will be transferred if the channel is ready;
+ * otherwise, will be queued for transfer when the channel becomes
+ * ready. In any case, notifications of I/O completion are
+ * asynchronous.
+ * If byte_size is 0 for an output channel, subsequent CHNL_AddIOReq's
+ * on this channel will fail with error code -EPIPE. The
+ * corresponding IOC for this I/O request will have its status flag
+ * set to CHNL_IOCSTATEOS.
+ */
+typedef int(*fxn_chnl_addioreq) (struct chnl_object
+ * chnl_obj,
+ void *pHostBuf,
+ u32 byte_size,
+ u32 buf_size,
+ OPTIONAL u32 dw_dsp_addr, u32 dw_arg);
+
+/*
+ * ======== bridge_chnl_get_ioc ========
+ * Purpose:
+ * Dequeue an I/O completion record, which contains information about the
+ * completed I/O request.
+ * Parameters:
+ * chnl_obj: Channel object handle.
+ * dwTimeOut: A value of CHNL_IOCNOWAIT will simply dequeue the
+ * first available IOC.
+ * pIOC: On output, contains host buffer address, bytes
+ * transferred, and status of I/O completion.
+ * pIOC->status: See chnldefs.h.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid chnl_obj or pIOC.
+ * -EREMOTEIO: CHNL_IOCNOWAIT was specified as the dwTimeOut parameter
+ * yet no I/O completions were queued.
+ * Requires:
+ * dwTimeOut == CHNL_IOCNOWAIT.
+ * Ensures:
+ * 0: if there are any remaining IOC's queued before this call
+ * returns, the channel event object will be left in a signalled
+ * state.
+ */
+typedef int(*fxn_chnl_getioc) (struct chnl_object *chnl_obj,
+ u32 dwTimeOut,
+ OUT struct chnl_ioc *pIOC);
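/*
 * Illustrative sketch (not part of this patch): the queue-then-reap flow
 * implied by the two entry points above, shown with the concrete
 * bridge_chnl_* implementations declared in dspchnl.h.  The channel
 * handle, buffer and user argument are assumptions for the example.
 */
static int example_issue_and_poll(struct chnl_object *chnl, void *buf,
				  u32 len, u32 arg)
{
	struct chnl_ioc ioc;
	int status;

	/* queue one buffer; completion will be reported asynchronously */
	status = bridge_chnl_add_io_req(chnl, buf, len, len, 0, arg);
	if (status)
		return status;

	/* CHNL_IOCNOWAIT just dequeues a completion if one is already there */
	return bridge_chnl_get_ioc(chnl, CHNL_IOCNOWAIT, &ioc);
}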
+
+/*
+ * ======== bridge_chnl_cancel_io ========
+ * Purpose:
+ * Return all I/O requests to the client which have not yet been
+ * transferred. The channel's I/O completion object is
+ * signalled, and all the I/O requests are queued as IOC's, with the
+ * status field set to CHNL_IOCSTATCANCEL.
+ * This call is typically used in abort situations, and is a prelude to
+ * chnl_close();
+ * Parameters:
+ * chnl_obj: Channel object handle.
+ * Returns:
+ * 0: Success;
+ * -EFAULT: Invalid chnl_obj.
+ * Requires:
+ * Ensures:
+ * Subsequent I/O requests to this channel will not be accepted.
+ */
+typedef int(*fxn_chnl_cancelio) (struct chnl_object *chnl_obj);
+
+/*
+ * ======== bridge_chnl_flush_io ========
+ * Purpose:
+ * For an output stream (to the DSP), indicates if any IO requests are in
+ * the output request queue. For input streams (from the DSP), will
+ * cancel all pending IO requests.
+ * Parameters:
+ * chnl_obj: Channel object handle.
+ * dwTimeOut: Timeout value for flush operation.
+ * Returns:
+ * 0: Success;
+ * S_CHNLIOREQUEST: Returned if any IORequests are in the output queue.
+ * -EFAULT: Invalid chnl_obj.
+ * Requires:
+ * Ensures:
+ * 0: No I/O requests will be pending on this channel.
+ */
+typedef int(*fxn_chnl_flushio) (struct chnl_object *chnl_obj,
+ u32 dwTimeOut);
+
+/*
+ * ======== bridge_chnl_get_info ========
+ * Purpose:
+ * Retrieve information related to a channel.
+ * Parameters:
+ * chnl_obj: Handle to a valid channel object, or NULL.
+ * pInfo: Location to store channel info.
+ * Returns:
+ * 0: Success;
+ * -EFAULT: Invalid chnl_obj or pInfo.
+ * Requires:
+ * Ensures:
+ * 0: pInfo points to a filled in chnl_info struct,
+ * if (pInfo != NULL).
+ */
+typedef int(*fxn_chnl_getinfo) (struct chnl_object *chnl_obj,
+ OUT struct chnl_info *pChnlInfo);
+
+/*
+ * ======== bridge_chnl_get_mgr_info ========
+ * Purpose:
+ * Retrieve information related to the channel manager.
+ * Parameters:
+ * hchnl_mgr: Handle to a valid channel manager, or NULL.
+ * uChnlID: Channel ID.
+ * pMgrInfo: Location to store channel manager info.
+ * Returns:
+ * 0: Success;
+ * -EFAULT: Invalid hchnl_mgr or pMgrInfo.
+ * -ECHRNG: Invalid channel ID.
+ * Requires:
+ * Ensures:
+ * 0: pMgrInfo points to a filled in chnl_mgrinfo
+ * struct, if (pMgrInfo != NULL).
+ */
+typedef int(*fxn_chnl_getmgrinfo) (struct chnl_mgr
+ * hchnl_mgr,
+ u32 uChnlID,
+ OUT struct chnl_mgrinfo *pMgrInfo);
+
+/*
+ * ======== bridge_chnl_idle ========
+ * Purpose:
+ * Idle a channel. If this is an input channel, or if this is an output
+ * channel and fFlush is TRUE, all currently enqueued buffers will be
+ * dequeued (data discarded for output channel).
+ * If this is an output channel and fFlush is FALSE, this function
+ * will block until all currently buffered data is output, or the timeout
+ * specified has been reached.
+ *
+ * Parameters:
+ * chnl_obj: Channel object handle.
+ * dwTimeOut: If output channel and fFlush is FALSE, timeout value
+ * to wait for buffers to be output. (Not used for
+ * input channel).
+ * fFlush: If output channel and fFlush is TRUE, discard any
+ * currently buffered data. If FALSE, wait for currently
+ * buffered data to be output, or timeout, whichever
+ * occurs first. fFlush is ignored for input channel.
+ * Returns:
+ * 0: Success;
+ * -EFAULT: Invalid chnl_obj.
+ * -ETIMEDOUT: Timeout occurred before channel could be idled.
+ * Requires:
+ * Ensures:
+ */
+typedef int(*fxn_chnl_idle) (struct chnl_object *chnl_obj,
+ u32 dwTimeOut, bool fFlush);
+
+/*
+ * ======== bridge_chnl_register_notify ========
+ * Purpose:
+ * Register for notification of events on a channel.
+ * Parameters:
+ * chnl_obj: Channel object handle.
+ * event_mask: Type of events to be notified about: IO completion
+ * (DSP_STREAMIOCOMPLETION) or end of stream
+ * (DSP_STREAMDONE).
+ * notify_type: DSP_SIGNALEVENT.
+ * hnotification: Handle of a dsp_notification object.
+ * Returns:
+ * 0: Success.
+ * -ENOMEM: Insufficient memory.
+ * -EINVAL: event_mask is 0 and hnotification was not
+ * previously registered.
+ * -EFAULT: NULL hnotification, hnotification event name
+ * too long, or hnotification event name NULL.
+ * Requires:
+ * Valid chnl_obj.
+ * hnotification != NULL.
+ * (event_mask & ~(DSP_STREAMIOCOMPLETION | DSP_STREAMDONE)) == 0.
+ * notify_type == DSP_SIGNALEVENT.
+ * Ensures:
+ */
+typedef int(*fxn_chnl_registernotify)
+ (struct chnl_object *chnl_obj,
+ u32 event_mask, u32 notify_type, struct dsp_notification *hnotification);
+
+/*
+ * ======== bridge_dev_create ========
+ * Purpose:
+ * Complete creation of the device object for this board.
+ * Parameters:
+ * phDevContext: Ptr to location to store a Bridge device context.
+ * hdev_obj: Handle to a Device Object, created and managed by DSP API.
+ * pConfig: Ptr to configuration parameters provided by the
+ * Configuration Manager during device loading.
+ * pDspConfig: DSP resources, as specified in the registry key for this
+ * device.
+ * Returns:
+ * 0: Success.
+ * -ENOMEM: Unable to allocate memory for device context.
+ * Requires:
+ * phDevContext != NULL;
+ * hdev_obj != NULL;
+ * pConfig != NULL;
+ * pDspConfig != NULL;
+ * Fields in pConfig and pDspConfig contain valid values.
+ * Ensures:
+ * 0: All Bridge driver specific DSP resource and other
+ * board context has been allocated.
+ * -ENOMEM: Bridge failed to allocate resources.
+ * Any acquired resources have been freed. The DSP API
+ * will not call bridge_dev_destroy() if
+ * bridge_dev_create() fails.
+ * Details:
+ * Called during the CONFIGMG's Device_Init phase. Based on host and
+ * DSP configuration information, create a board context, a handle to
+ * which is passed into other Bridge BRD and CHNL functions. The
+ * board context contains state information for the device. Since the
+ * addresses of all IN pointer parameters may be invalid when this
+ * function returns, they must not be stored into the device context
+ * structure.
+ */
+typedef int(*fxn_dev_create) (OUT struct bridge_dev_context
+ **phDevContext,
+ struct dev_object
+ * hdev_obj,
+ IN struct cfg_hostres
+ * pConfig);
+
+/*
+ * ======== bridge_dev_ctrl ========
+ * Purpose:
+ * Bridge driver specific interface.
+ * Parameters:
+ * hDevContext: Handle to Bridge driver defined device info.
+ * dw_cmd: Bridge driver defined command code.
+ * pargs: Pointer to an arbitrary argument structure.
+ * Returns:
+ * 0 or -EPERM. Actual command error codes should be passed back in
+ * the pargs structure, and are defined by the Bridge driver implementor.
+ * Requires:
+ * All calls are currently assumed to be synchronous. There are no
+ * IOCTL completion routines provided.
+ * Ensures:
+ */
+typedef int(*fxn_dev_ctrl) (struct bridge_dev_context *hDevContext,
+ u32 dw_cmd, IN OUT void *pargs);
+
+/*
+ * ======== bridge_dev_destroy ========
+ * Purpose:
+ * Deallocate Bridge device extension structures and all other resources
+ * acquired by the Bridge driver.
+ * No calls to other Bridge driver functions may subsequently
+ * occur, except for bridge_dev_create().
+ * Parameters:
+ * hDevContext: Handle to Bridge driver defined device information.
+ * Returns:
+ * 0: Success.
+ * -EPERM: Failed to release a resource previously acquired.
+ * Requires:
+ * hDevContext != NULL;
+ * Ensures:
+ * 0: Device context is freed.
+ */
+typedef int(*fxn_dev_destroy) (struct bridge_dev_context *hDevContext);
+
+/*
+ * ======== bridge_deh_create ========
+ * Purpose:
+ * Create an object that manages DSP exceptions from the GPP.
+ * Parameters:
+ * phDehMgr: Location to store DEH manager on output.
+ * hdev_obj: Handle to DEV object.
+ * Returns:
+ * 0: Success.
+ * -ENOMEM: Memory allocation failure.
+ * -EPERM: Creation failed.
+ * Requires:
+ * hdev_obj != NULL;
+ * phDehMgr != NULL;
+ * Ensures:
+ */
+typedef int(*fxn_deh_create) (OUT struct deh_mgr
+ **phDehMgr, struct dev_object *hdev_obj);
+
+/*
+ * ======== bridge_deh_destroy ========
+ * Purpose:
+ * Destroy the DEH object.
+ * Parameters:
+ * hdeh_mgr: Handle to DEH manager object.
+ * Returns:
+ * 0: Success.
+ * -EPERM: Destroy failed.
+ * Requires:
+ * hdeh_mgr != NULL;
+ * Ensures:
+ */
+typedef int(*fxn_deh_destroy) (struct deh_mgr *hdeh_mgr);
+
+/*
+ * ======== bridge_deh_register_notify ========
+ * Purpose:
+ * Register for DEH event notification.
+ * Parameters:
+ * hdeh_mgr: Handle to DEH manager object.
+ * Returns:
+ * 0: Success.
+ * -EPERM: Destroy failed.
+ * Requires:
+ * hdeh_mgr != NULL;
+ * Ensures:
+ */
+typedef int(*fxn_deh_registernotify)
+ (struct deh_mgr *hdeh_mgr,
+ u32 event_mask, u32 notify_type, struct dsp_notification *hnotification);
+
+/*
+ * ======== bridge_deh_get_info ========
+ * Purpose:
+ * Get DSP exception info.
+ * Parameters:
+ * phDehMgr: Location to store DEH manager on output.
+ * pErrInfo: Ptr to error info structure.
+ * Returns:
+ * 0: Success.
+ * -EPERM: Creation failed.
+ * Requires:
+ * phDehMgr != NULL;
+ * pErrInfo != NULL;
+ * Ensures:
+ */
+typedef int(*fxn_deh_getinfo) (struct deh_mgr *phDehMgr,
+ struct dsp_errorinfo *pErrInfo);
+
+/*
+ * ======== bridge_io_create ========
+ * Purpose:
+ * Create an object that manages I/O between CHNL and msg_ctrl.
+ * Parameters:
+ * phIOMgr: Location to store IO manager on output.
+ * hchnl_mgr: Handle to channel manager.
+ * hmsg_mgr: Handle to message manager.
+ * Returns:
+ * 0: Success.
+ * -ENOMEM: Memory allocation failure.
+ * -EPERM: Creation failed.
+ * Requires:
+ * hdev_obj != NULL;
+ * Channel manager already created;
+ * Message manager already created;
+ * pMgrAttrs != NULL;
+ * phIOMgr != NULL;
+ * Ensures:
+ */
+typedef int(*fxn_io_create) (OUT struct io_mgr **phIOMgr,
+ struct dev_object *hdev_obj,
+ IN CONST struct io_attrs *pMgrAttrs);
+
+/*
+ * ======== bridge_io_destroy ========
+ * Purpose:
+ * Destroy object created in bridge_io_create.
+ * Parameters:
+ * hio_mgr: IO Manager.
+ * Returns:
+ * 0: Success.
+ * -ENOMEM: Memory allocation failure.
+ * -EPERM: Creation failed.
+ * Requires:
+ * Valid hio_mgr;
+ * Ensures:
+ */
+typedef int(*fxn_io_destroy) (struct io_mgr *hio_mgr);
+
+/*
+ * ======== bridge_io_on_loaded ========
+ * Purpose:
+ * Called whenever a program is loaded to update internal data. For
+ * example, if shared memory is used, this function would update the
+ * shared memory location and address.
+ * Parameters:
+ * hio_mgr: IO Manager.
+ * Returns:
+ * 0: Success.
+ * -EPERM: Internal failure occurred.
+ * Requires:
+ * Valid hio_mgr;
+ * Ensures:
+ */
+typedef int(*fxn_io_onloaded) (struct io_mgr *hio_mgr);
+
+/*
+ * ======== fxn_io_getprocload ========
+ * Purpose:
+ * Called to get the Processor's current and predicted load
+ * Parameters:
+ * hio_mgr: IO Manager.
+ * pProcLoadStat: Processor Load statistics
+ * Returns:
+ * 0: Success.
+ * -EPERM: Internal failure occurred.
+ * Requires:
+ * Valid hio_mgr;
+ * Ensures:
+ */
+typedef int(*fxn_io_getprocload) (struct io_mgr *hio_mgr,
+ struct dsp_procloadstat *
+ pProcLoadStat);
+
+/*
+ * ======== bridge_msg_create ========
+ * Purpose:
+ * Create an object to manage message queues. Only one of these objects
+ * can exist per device object.
+ * Parameters:
+ * phMsgMgr: Location to store msg_ctrl manager on output.
+ * hdev_obj: Handle to a device object.
+ * msgCallback: Called whenever an RMS_EXIT message is received.
+ * Returns:
+ * 0: Success.
+ * -ENOMEM: Insufficient memory.
+ * Requires:
+ * phMsgMgr != NULL.
+ * msgCallback != NULL.
+ * hdev_obj != NULL.
+ * Ensures:
+ */
+typedef int(*fxn_msg_create)
+ (OUT struct msg_mgr **phMsgMgr,
+ struct dev_object *hdev_obj, msg_onexit msgCallback);
+
+/*
+ * ======== bridge_msg_create_queue ========
+ * Purpose:
+ * Create a msg_ctrl queue for sending or receiving messages from a Message
+ * node on the DSP.
+ * Parameters:
+ * hmsg_mgr: msg_ctrl queue manager handle returned from
+ * bridge_msg_create.
+ * phMsgQueue: Location to store msg_ctrl queue on output.
+ * msgq_id: Identifier for messages (node environment pointer).
+ * max_msgs: Max number of simultaneous messages for the node.
+ * h: Handle passed to hmsg_mgr->msgCallback().
+ * Returns:
+ * 0: Success.
+ * -ENOMEM: Insufficient memory.
+ * Requires:
+ * phMsgQueue != NULL.
+ * h != NULL.
+ * max_msgs > 0.
+ * Ensures:
+ * phMsgQueue != NULL <==> 0.
+ */
+typedef int(*fxn_msg_createqueue)
+ (struct msg_mgr *hmsg_mgr,
+ OUT struct msg_queue **phMsgQueue, u32 msgq_id, u32 max_msgs, void *h);
+
+/*
+ * ======== bridge_msg_delete ========
+ * Purpose:
+ * Delete a msg_ctrl manager allocated in bridge_msg_create().
+ * Parameters:
+ * hmsg_mgr: Handle returned from bridge_msg_create().
+ * Returns:
+ * Requires:
+ * Valid hmsg_mgr.
+ * Ensures:
+ */
+typedef void (*fxn_msg_delete) (struct msg_mgr *hmsg_mgr);
+
+/*
+ * ======== bridge_msg_delete_queue ========
+ * Purpose:
+ * Delete a msg_ctrl queue allocated in bridge_msg_create_queue.
+ * Parameters:
+ * msg_queue_obj: Handle to msg_ctrl queue returned from
+ * bridge_msg_create_queue.
+ * Returns:
+ * Requires:
+ * Valid msg_queue_obj.
+ * Ensures:
+ */
+typedef void (*fxn_msg_deletequeue) (struct msg_queue *msg_queue_obj);
+
+/*
+ * ======== bridge_msg_get ========
+ * Purpose:
+ * Get a message from a msg_ctrl queue.
+ * Parameters:
+ * msg_queue_obj: Handle to msg_ctrl queue returned from
+ * bridge_msg_create_queue.
+ * pmsg: Location to copy message into.
+ * utimeout: Timeout to wait for a message.
+ * Returns:
+ * 0: Success.
+ * -ETIME: Timeout occurred.
+ * -EPERM: No frames available for message (max_msgs too
+ * small).
+ * Requires:
+ * Valid msg_queue_obj.
+ * pmsg != NULL.
+ * Ensures:
+ */
+typedef int(*fxn_msg_get) (struct msg_queue *msg_queue_obj,
+ struct dsp_msg *pmsg, u32 utimeout);
+
+/*
+ * ======== bridge_msg_put ========
+ * Purpose:
+ * Put a message onto a msg_ctrl queue.
+ * Parameters:
+ * msg_queue_obj: Handle to msg_ctrl queue returned from
+ * bridge_msg_create_queue.
+ * pmsg: Pointer to message.
+ * utimeout: Timeout to wait for a message.
+ * Returns:
+ * 0: Success.
+ * -ETIME: Timeout occurred.
+ * -EPERM: No frames available for message (max_msgs too
+ * small).
+ * Requires:
+ * Valid msg_queue_obj.
+ * pmsg != NULL.
+ * Ensures:
+ */
+typedef int(*fxn_msg_put) (struct msg_queue *msg_queue_obj,
+ IN CONST struct dsp_msg *pmsg, u32 utimeout);
+
+/*
+ * ======== bridge_msg_register_notify ========
+ * Purpose:
+ * Register notification for when a message is ready.
+ * Parameters:
+ * msg_queue_obj: Handle to msg_ctrl queue returned from
+ * bridge_msg_create_queue.
+ * event_mask: Type of events to be notified about: Must be
+ * DSP_NODEMESSAGEREADY, or 0 to unregister.
+ * notify_type: DSP_SIGNALEVENT.
+ * hnotification: Handle of notification object.
+ * Returns:
+ * 0: Success.
+ * -ENOMEM: Insufficient memory.
+ * Requires:
+ * Valid msg_queue_obj.
+ * hnotification != NULL.
+ * notify_type == DSP_SIGNALEVENT.
+ * event_mask == DSP_NODEMESSAGEREADY || event_mask == 0.
+ * Ensures:
+ */
+typedef int(*fxn_msg_registernotify)
+ (struct msg_queue *msg_queue_obj,
+ u32 event_mask, u32 notify_type, struct dsp_notification *hnotification);
+
+/*
+ * ======== bridge_msg_set_queue_id ========
+ * Purpose:
+ * Set message queue id to node environment. Allows bridge_msg_create_queue
+ * to be called in node_allocate, before the node environment is known.
+ * Parameters:
+ * msg_queue_obj: Handle to msg_ctrl queue returned from
+ * bridge_msg_create_queue.
+ * msgq_id: Node environment pointer.
+ * Returns:
+ * Requires:
+ * Valid msg_queue_obj.
+ * msgq_id != 0.
+ * Ensures:
+ */
+typedef void (*fxn_msg_setqueueid) (struct msg_queue *msg_queue_obj,
+ u32 msgq_id);
+
+/*
+ * Bridge Driver interface function table.
+ *
+ * The information in this table is filled in by the specific Bridge driver,
+ * and copied into the DSP API's own space. If any interface
+ * function field is set to a value of NULL, then the DSP API will
+ * consider that function not implemented, and return the error code
+ * -ENOSYS when a Bridge driver client attempts to call that function.
+ *
+ * This function table contains DSP API version numbers, which are used by the
+ * Bridge driver loader to help ensure backwards compatibility between older
+ * Bridge drivers and newer DSP API. These must be set to
+ * BRD_API_MAJOR_VERSION and BRD_API_MINOR_VERSION, respectively.
+ *
+ * A Bridge driver need not export a CHNL interface. In this case, *all* of
+ * the bridge_chnl_* entries must be set to NULL.
+ */
+struct bridge_drv_interface {
+ u32 brd_api_major_version; /* Set to BRD_API_MAJOR_VERSION. */
+ u32 brd_api_minor_version; /* Set to BRD_API_MINOR_VERSION. */
+ fxn_dev_create pfn_dev_create; /* Create device context */
+ fxn_dev_destroy pfn_dev_destroy; /* Destroy device context */
+ fxn_dev_ctrl pfn_dev_cntrl; /* Optional vendor interface */
+ fxn_brd_monitor pfn_brd_monitor; /* Load and/or start monitor */
+ fxn_brd_start pfn_brd_start; /* Start DSP program. */
+ fxn_brd_stop pfn_brd_stop; /* Stop/reset board. */
+ fxn_brd_status pfn_brd_status; /* Get current board status. */
+ fxn_brd_read pfn_brd_read; /* Read board memory */
+ fxn_brd_write pfn_brd_write; /* Write board memory. */
+ fxn_brd_setstate pfn_brd_set_state; /* Sets the Board State */
+ fxn_brd_memcopy pfn_brd_mem_copy; /* Copies DSP Memory */
+ fxn_brd_memwrite pfn_brd_mem_write; /* Write DSP Memory w/o halt */
+ fxn_brd_memmap pfn_brd_mem_map; /* Maps MPU mem to DSP mem */
+	fxn_brd_memunmap pfn_brd_mem_un_map;	/* Unmaps MPU mem from DSP mem */
+ fxn_chnl_create pfn_chnl_create; /* Create channel manager. */
+ fxn_chnl_destroy pfn_chnl_destroy; /* Destroy channel manager. */
+ fxn_chnl_open pfn_chnl_open; /* Create a new channel. */
+ fxn_chnl_close pfn_chnl_close; /* Close a channel. */
+ fxn_chnl_addioreq pfn_chnl_add_io_req; /* Req I/O on a channel. */
+ fxn_chnl_getioc pfn_chnl_get_ioc; /* Wait for I/O completion. */
+	fxn_chnl_cancelio pfn_chnl_cancel_io;	/* Cancel I/O on a channel. */
+ fxn_chnl_flushio pfn_chnl_flush_io; /* Flush I/O. */
+ fxn_chnl_getinfo pfn_chnl_get_info; /* Get channel specific info */
+ /* Get channel manager info. */
+ fxn_chnl_getmgrinfo pfn_chnl_get_mgr_info;
+ fxn_chnl_idle pfn_chnl_idle; /* Idle the channel */
+ /* Register for notif. */
+ fxn_chnl_registernotify pfn_chnl_register_notify;
+ fxn_deh_create pfn_deh_create; /* Create DEH manager */
+ fxn_deh_destroy pfn_deh_destroy; /* Destroy DEH manager */
+ fxn_deh_notify pfn_deh_notify; /* Notify of DSP error */
+ /* register for deh notif. */
+ fxn_deh_registernotify pfn_deh_register_notify;
+	fxn_deh_getinfo pfn_deh_get_info;	/* Get DEH error info */
+ fxn_io_create pfn_io_create; /* Create IO manager */
+ fxn_io_destroy pfn_io_destroy; /* Destroy IO manager */
+ fxn_io_onloaded pfn_io_on_loaded; /* Notify of program loaded */
+ /* Get Processor's current and predicted load */
+ fxn_io_getprocload pfn_io_get_proc_load;
+ fxn_msg_create pfn_msg_create; /* Create message manager */
+ /* Create message queue */
+ fxn_msg_createqueue pfn_msg_create_queue;
+ fxn_msg_delete pfn_msg_delete; /* Delete message manager */
+ /* Delete message queue */
+ fxn_msg_deletequeue pfn_msg_delete_queue;
+ fxn_msg_get pfn_msg_get; /* Get a message */
+ fxn_msg_put pfn_msg_put; /* Send a message */
+ /* Register for notif. */
+ fxn_msg_registernotify pfn_msg_register_notify;
+ /* Set message queue id */
+ fxn_msg_setqueueid pfn_msg_set_queue_id;
+};
+
+/*
+ * ======== bridge_drv_entry ========
+ * Purpose:
+ * Registers Bridge driver functions with the DSP API. Called only once
+ * by the DSP API. The caller will first check DSP API version
+ * compatibility, and then copy the interface functions into its own
+ * memory space.
+ * Parameters:
+ * ppDrvInterface Pointer to a location to receive a pointer to the
+ * Bridge driver interface.
+ * Returns:
+ * Requires:
+ * The code segment this function resides in must expect to be discarded
+ * after completion.
+ * Ensures:
+ * ppDrvInterface pointer initialized to Bridge driver's function
+ * interface. No system resources are acquired by this function.
+ * Details:
+ * Called during the Device_Init phase.
+ */
+void bridge_drv_entry(OUT struct bridge_drv_interface **ppDrvInterface,
+ IN CONST char *driver_file_name);
+
+#endif /* DSPDEFS_ */
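
To make the interface table concrete, here is a minimal sketch of how a Bridge
driver might publish its implementation through bridge_drv_entry(). It is
illustrative only: wmd_dev_create and wmd_dev_destroy are hypothetical
placeholders with the fxn_dev_create/fxn_dev_destroy signatures, while
bridge_msg_create and bridge_msg_put are the upper-edge functions declared
later in dspmsg.h. Fields left NULL are reported to clients as -ENOSYS.

    /* Sketch only -- wmd_dev_create/wmd_dev_destroy are hypothetical. */
    static struct bridge_drv_interface drv_interface_fxns = {
    	.brd_api_major_version = BRD_API_MAJOR_VERSION,
    	.brd_api_minor_version = BRD_API_MINOR_VERSION,
    	.pfn_dev_create        = wmd_dev_create,
    	.pfn_dev_destroy       = wmd_dev_destroy,
    	.pfn_msg_create        = bridge_msg_create,
    	.pfn_msg_put           = bridge_msg_put,
    	/* remaining entries stay NULL and are returned as -ENOSYS */
    };

    void bridge_drv_entry(OUT struct bridge_drv_interface **ppDrvInterface,
    		      IN CONST char *driver_file_name)
    {
    	/* The DSP API copies this table into its own space after the call. */
    	*ppDrvInterface = &drv_interface_fxns;
    }
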
diff --git a/drivers/staging/tidspbridge/include/dspbridge/dspdeh.h b/drivers/staging/tidspbridge/include/dspbridge/dspdeh.h
new file mode 100644
index 0000000..4394711
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/dspdeh.h
@@ -0,0 +1,47 @@
+/*
+ * dspdeh.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Defines upper edge DEH functions required by all Bridge driver/DSP API
+ * interface tables.
+ *
+ * Notes:
+ * Function comment headers reside with the function typedefs in dspdefs.h.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef DSPDEH_
+#define DSPDEH_
+
+#include <dspbridge/devdefs.h>
+
+#include <dspbridge/dehdefs.h>
+
+extern int bridge_deh_create(struct deh_mgr **ret_deh_mgr,
+ struct dev_object *hdev_obj);
+
+extern int bridge_deh_destroy(struct deh_mgr *deh_mgr);
+
+extern int bridge_deh_get_info(struct deh_mgr *deh_mgr,
+ struct dsp_errorinfo *pErrInfo);
+
+extern int bridge_deh_register_notify(struct deh_mgr *deh_mgr,
+ u32 event_mask,
+ u32 notify_type,
+ struct dsp_notification *hnotification);
+
+extern void bridge_deh_notify(struct deh_mgr *deh_mgr,
+ u32 ulEventMask, u32 dwErrInfo);
+
+extern void bridge_deh_release_dummy_mem(void);
+#endif /* DSPDEH_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/dspdrv.h b/drivers/staging/tidspbridge/include/dspbridge/dspdrv.h
new file mode 100644
index 0000000..2dd4f8b
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/dspdrv.h
@@ -0,0 +1,62 @@
+/*
+ * dspdrv.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * This is the Stream Interface for the DSP API.
+ * All Device operations are performed via DeviceIOControl.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#if !defined _DSPDRV_H_
+#define _DSPDRV_H_
+
+#define MAX_DEV 10 /* Max support of 10 devices */
+
+/*
+ * ======== dsp_deinit ========
+ * Purpose:
+ * This function is called by Device Manager to de-initialize a device.
+ * This function is not called by applications.
+ * Parameters:
+ *      dwDeviceContext: Handle to the device context. The XXX_Init function
+ * creates and returns this identifier.
+ * Returns:
+ * TRUE indicates the device successfully de-initialized. Otherwise it
+ * returns FALSE.
+ * Requires:
+ *      dwDeviceContext != NULL. For a built-in device this should never
+ * get called.
+ * Ensures:
+ */
+extern bool dsp_deinit(u32 dwDeviceContext);
+
+/*
+ * ======== dsp_init ========
+ * Purpose:
+ * This function is called by Device Manager to initialize a device.
+ *      This function is not called by applications.
+ * Parameters:
+ *      init_status:    Location in which the driver initialization status is
+ *                      returned on output.
+ * Returns:
+ *      Returns a handle to the device context created. This is our actual
+ *      device object representing the DSP device instance.
+ * Requires:
+ * Ensures:
+ * Succeeded: device context > 0
+ * Failed: device Context = 0
+ */
+extern u32 dsp_init(OUT u32 *init_status);
+
+#endif
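
A minimal sketch of the call pattern implied by the comments above, assuming
the caller treats a zero return from dsp_init() (or a non-zero *init_status)
as failure; the error handling shown is illustrative, not part of this
interface.

    u32 init_status = 0;
    u32 dev_ctx = dsp_init(&init_status);

    if (!dev_ctx || init_status) {
    	pr_err("dspbridge: device init failed (status 0x%x)\n", init_status);
    	return -ENODEV;
    }
    /* ... use the device ... */
    if (!dsp_deinit(dev_ctx))
    	pr_warn("dspbridge: device deinit failed\n");
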
diff --git a/drivers/staging/tidspbridge/include/dspbridge/dspio.h b/drivers/staging/tidspbridge/include/dspbridge/dspio.h
new file mode 100644
index 0000000..275697a
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/dspio.h
@@ -0,0 +1,41 @@
+/*
+ * dspio.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Declares the upper edge IO functions required by all Bridge driver /DSP API
+ * interface tables.
+ *
+ * Notes:
+ * Function comment headers reside in dspdefs.h.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef DSPIO_
+#define DSPIO_
+
+#include <dspbridge/devdefs.h>
+#include <dspbridge/iodefs.h>
+
+extern int bridge_io_create(OUT struct io_mgr **phIOMgr,
+ struct dev_object *hdev_obj,
+ IN CONST struct io_attrs *pMgrAttrs);
+
+extern int bridge_io_destroy(struct io_mgr *hio_mgr);
+
+extern int bridge_io_on_loaded(struct io_mgr *hio_mgr);
+
+extern int iva_io_on_loaded(struct io_mgr *hio_mgr);
+extern int bridge_io_get_proc_load(IN struct io_mgr *hio_mgr,
+ OUT struct dsp_procloadstat *pProcStat);
+
+#endif /* DSPIO_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/dspioctl.h b/drivers/staging/tidspbridge/include/dspbridge/dspioctl.h
new file mode 100644
index 0000000..41e0594
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/dspioctl.h
@@ -0,0 +1,73 @@
+/*
+ * dspioctl.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Bridge driver BRD_IOCtl reserved command definitions.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef DSPIOCTL_
+#define DSPIOCTL_
+
+/* ------------------------------------ Hardware Abstraction Layer */
+#include <hw_defs.h>
+#include <hw_mmu.h>
+
+/*
+ * Any IOCTLS at or above this value are reserved for standard Bridge driver
+ * interfaces.
+ */
+#define BRDIOCTL_RESERVEDBASE 0x8000
+
+#define BRDIOCTL_CHNLREAD (BRDIOCTL_RESERVEDBASE + 0x10)
+#define BRDIOCTL_CHNLWRITE (BRDIOCTL_RESERVEDBASE + 0x20)
+#define BRDIOCTL_GETINTRCOUNT (BRDIOCTL_RESERVEDBASE + 0x30)
+#define BRDIOCTL_RESETINTRCOUNT (BRDIOCTL_RESERVEDBASE + 0x40)
+#define BRDIOCTL_INTERRUPTDSP (BRDIOCTL_RESERVEDBASE + 0x50)
+/* DMMU */
+#define BRDIOCTL_SETMMUCONFIG (BRDIOCTL_RESERVEDBASE + 0x60)
+/* PWR */
+#define BRDIOCTL_PWRCONTROL (BRDIOCTL_RESERVEDBASE + 0x70)
+
+/* attention, modifiers:
+ * Some of these control enumerations are made visible to user for power
+ * control, so any changes to this list, should also be updated in the user
+ * header file 'dbdefs.h' ***/
+/* These ioctls are reserved for PWR power commands for the DSP */
+#define BRDIOCTL_DEEPSLEEP (BRDIOCTL_PWRCONTROL + 0x0)
+#define BRDIOCTL_EMERGENCYSLEEP (BRDIOCTL_PWRCONTROL + 0x1)
+#define BRDIOCTL_WAKEUP (BRDIOCTL_PWRCONTROL + 0x2)
+#define BRDIOCTL_PWRENABLE (BRDIOCTL_PWRCONTROL + 0x3)
+#define BRDIOCTL_PWRDISABLE (BRDIOCTL_PWRCONTROL + 0x4)
+#define BRDIOCTL_CLK_CTRL (BRDIOCTL_PWRCONTROL + 0x7)
+/* DSP Initiated Hibernate */
+#define BRDIOCTL_PWR_HIBERNATE (BRDIOCTL_PWRCONTROL + 0x8)
+#define BRDIOCTL_PRESCALE_NOTIFY (BRDIOCTL_PWRCONTROL + 0x9)
+#define BRDIOCTL_POSTSCALE_NOTIFY (BRDIOCTL_PWRCONTROL + 0xA)
+#define BRDIOCTL_CONSTRAINT_REQUEST (BRDIOCTL_PWRCONTROL + 0xB)
+
+/* Number of actual DSP-MMU TLB entries */
+#define BRDIOCTL_NUMOFMMUTLB 32
+
+struct bridge_ioctl_extproc {
+ u32 ul_dsp_va; /* DSP virtual address */
+ u32 ul_gpp_pa; /* GPP physical address */
+ /* GPP virtual address. __va does not work for ioremapped addresses */
+ u32 ul_gpp_va;
+ u32 ul_size; /* Size of the mapped memory in bytes */
+ enum hw_endianism_t endianism;
+ enum hw_mmu_mixed_size_t mixed_mode;
+ enum hw_element_size_t elem_size;
+};
+
+#endif /* DSPIOCTL_ */
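
Because the PWR commands are built on top of BRDIOCTL_PWRCONTROL, they resolve
to fixed values inside the reserved range, e.g. BRDIOCTL_PWRCONTROL = 0x8000 +
0x70 = 0x8070 and BRDIOCTL_PWR_HIBERNATE = 0x8070 + 0x8 = 0x8078. A dispatcher
can therefore split vendor commands from reserved ones with a single compare;
the helper names below are hypothetical.

    if (dw_cmd >= BRDIOCTL_RESERVEDBASE)
    	status = handle_reserved_brd_ioctl(dev_ctxt, dw_cmd, pargs);
    else
    	status = handle_vendor_brd_ioctl(dev_ctxt, dw_cmd, pargs);
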
diff --git a/drivers/staging/tidspbridge/include/dspbridge/dspmsg.h b/drivers/staging/tidspbridge/include/dspbridge/dspmsg.h
new file mode 100644
index 0000000..a10634e
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/dspmsg.h
@@ -0,0 +1,56 @@
+/*
+ * dspmsg.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Declares the upper edge message class library functions required by
+ * all Bridge driver / DSP API interface tables. These functions are
+ * implemented by every class of Bridge driver channel library.
+ *
+ * Notes:
+ * Function comment headers reside in dspdefs.h.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef DSPMSG_
+#define DSPMSG_
+
+#include <dspbridge/msgdefs.h>
+
+extern int bridge_msg_create(OUT struct msg_mgr **phMsgMgr,
+ struct dev_object *hdev_obj,
+ msg_onexit msgCallback);
+
+extern int bridge_msg_create_queue(struct msg_mgr *hmsg_mgr,
+ OUT struct msg_queue **phMsgQueue,
+ u32 msgq_id, u32 max_msgs, void *h);
+
+extern void bridge_msg_delete(struct msg_mgr *hmsg_mgr);
+
+extern void bridge_msg_delete_queue(struct msg_queue *msg_queue_obj);
+
+extern int bridge_msg_get(struct msg_queue *msg_queue_obj,
+ struct dsp_msg *pmsg, u32 utimeout);
+
+extern int bridge_msg_put(struct msg_queue *msg_queue_obj,
+ IN CONST struct dsp_msg *pmsg, u32 utimeout);
+
+extern int bridge_msg_register_notify(struct msg_queue *msg_queue_obj,
+ u32 event_mask,
+ u32 notify_type,
+ struct dsp_notification
+ *hnotification);
+
+extern void bridge_msg_set_queue_id(struct msg_queue *msg_queue_obj,
+ u32 msgq_id);
+
+#endif /* DSPMSG_ */
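
The typical life cycle of these upper-edge message functions, as documented in
dspdefs.h, is sketched below; dev_obj, node_exit_cb, node_env, node_handle and
the 16-message queue depth are placeholders chosen for illustration only.

    int status;
    struct msg_mgr *msg_mgr;
    struct msg_queue *msg_q;
    struct dsp_msg msg;

    status = bridge_msg_create(&msg_mgr, dev_obj, node_exit_cb);
    if (status)
    	return status;

    status = bridge_msg_create_queue(msg_mgr, &msg_q, node_env, 16, node_handle);
    if (!status) {
    	status = bridge_msg_get(msg_q, &msg, 100);	/* 100 ms timeout */
    	bridge_msg_delete_queue(msg_q);
    }
    bridge_msg_delete(msg_mgr);
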
diff --git a/drivers/staging/tidspbridge/include/dspbridge/dynamic_loader.h b/drivers/staging/tidspbridge/include/dspbridge/dynamic_loader.h
new file mode 100644
index 0000000..4b109d1
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/dynamic_loader.h
@@ -0,0 +1,492 @@
+/*
+ * dynamic_loader.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Copyright (C) 2008 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef _DYNAMIC_LOADER_H_
+#define _DYNAMIC_LOADER_H_
+#include <linux/kernel.h>
+#include <linux/types.h>
+
+/*
+ * Dynamic Loader
+ *
+ * The function of the dynamic loader is to load a "module" containing
+ * instructions for a "target" processor into that processor. In the process
+ * it assigns memory for the module, resolves symbol references made by the
+ * module, and remembers symbols defined by the module.
+ *
+ * The dynamic loader is parameterized for a particular system by 4 classes
+ * that supply the module- and system-specific functions it requires.
+ */
+ /* The read functions for the module image to be loaded */
+struct dynamic_loader_stream;
+
+ /* This class defines "host" symbol and support functions */
+struct dynamic_loader_sym;
+
+ /* This class defines the allocator for "target" memory */
+struct dynamic_loader_allocate;
+
+ /* This class defines the copy-into-target-memory functions */
+struct dynamic_loader_initialize;
+
+/*
+ * Option flags to modify the behavior of module loading
+ */
+#define DLOAD_INITBSS 0x1 /* initialize BSS sections to zero */
+#define DLOAD_BIGEND 0x2 /* require big-endian load module */
+#define DLOAD_LITTLE 0x4 /* require little-endian load module */
+
+/*****************************************************************************
+ * Procedure dynamic_load_module
+ *
+ * Parameters:
+ * module The input stream that supplies the module image
+ * syms Host-side symbol table and malloc/free functions
+ * alloc Target-side memory allocation
+ * init Target-side memory initialization, or NULL for symbol read only
+ * options Option flags DLOAD_*
+ * mhandle A module handle for use with Dynamic_Unload
+ *
+ * Effect:
+ * The module image is read using *module. Target storage for the new image is
+ * obtained from *alloc. Symbols defined and referenced by the module are
+ * managed using *syms. The image is then relocated and references resolved
+ * as necessary, and the resulting executable bits are placed into target memory
+ * using *init.
+ *
+ * Returns:
+ * On a successful load, a module handle is placed in *mhandle, and zero is
+ * returned. On error, the number of errors detected is returned. Individual
+ * errors are reported during the load process using syms->error_report().
+ **************************************************************************** */
+extern int dynamic_load_module(
+ /* the source for the module image */
+ struct dynamic_loader_stream *module,
+ /* host support for symbols and storage */
+ struct dynamic_loader_sym *syms,
+ /* the target memory allocator */
+ struct dynamic_loader_allocate *alloc,
+ /* the target memory initializer */
+ struct dynamic_loader_initialize *init,
+ unsigned options, /* option flags */
+ /* the returned module handle */
+ void **mhandle);
+
+/*****************************************************************************
+ * Procedure dynamic_open_module
+ *
+ * Parameters:
+ * module The input stream that supplies the module image
+ * syms Host-side symbol table and malloc/free functions
+ * alloc Target-side memory allocation
+ * init Target-side memory initialization, or NULL for symbol read only
+ * options Option flags DLOAD_*
+ * mhandle A module handle for use with Dynamic_Unload
+ *
+ * Effect:
+ * The module image is read using *module. Target storage for the new image is
+ * obtained from *alloc. Symbols defined and referenced by the module are
+ * managed using *syms. The image is then relocated and references resolved
+ * as necessary, and the resulting executable bits are placed into target memory
+ * using *init.
+ *
+ * Returns:
+ * On a successful load, a module handle is placed in *mhandle, and zero is
+ * returned. On error, the number of errors detected is returned. Individual
+ * errors are reported during the load process using syms->error_report().
+ **************************************************************************** */
+extern int dynamic_open_module(
+ /* the source for the module image */
+ struct dynamic_loader_stream *module,
+ /* host support for symbols and storage */
+ struct dynamic_loader_sym *syms,
+ /* the target memory allocator */
+ struct dynamic_loader_allocate *alloc,
+ /* the target memory initializer */
+ struct dynamic_loader_initialize *init,
+ unsigned options, /* option flags */
+ /* the returned module handle */
+ void **mhandle);
+
+/*****************************************************************************
+ * Procedure dynamic_unload_module
+ *
+ * Parameters:
+ * mhandle A module handle from dynamic_load_module
+ * syms Host-side symbol table and malloc/free functions
+ *  alloc   Target-side memory allocation
+ *  init    Target-side memory initialization
+ *
+ * Effect:
+ * The module specified by mhandle is unloaded. Unloading causes all
+ * target memory to be deallocated, all symbols defined by the module to
+ * be purged, and any host-side storage used by the dynamic loader for
+ * this module to be released.
+ *
+ * Returns:
+ * Zero for success. On error, the number of errors detected is returned.
+ * Individual errors are reported using syms->error_report().
+ **************************************************************************** */
+extern int dynamic_unload_module(void *mhandle, /* the module
+ * handle */
+ /* host support for symbols and
+ * storage */
+ struct dynamic_loader_sym *syms,
+ /* the target memory allocator */
+ struct dynamic_loader_allocate *alloc,
+ /* the target memory initializer */
+ struct dynamic_loader_initialize *init);
+
+/*****************************************************************************
+ *****************************************************************************
+ * A class used by the dynamic loader for input of the module image
+ *****************************************************************************
+ **************************************************************************** */
+struct dynamic_loader_stream {
+/* public: */
+ /*************************************************************************
+ * read_buffer
+ *
+ * PARAMETERS :
+ * buffer Pointer to the buffer to fill
+ * bufsiz Amount of data desired in sizeof() units
+ *
+ * EFFECT :
+ * Reads the specified amount of data from the module input stream
+ * into the specified buffer. Returns the amount of data read in sizeof()
+    * units (which, if less than the specification, represents an error).
+ *
+ * NOTES:
+ * In release 1 increments the file position by the number of bytes read
+ *
+ ************************************************************************ */
+ int (*read_buffer) (struct dynamic_loader_stream *thisptr,
+ void *buffer, unsigned bufsiz);
+
+ /*************************************************************************
+ * set_file_posn (release 1 only)
+ *
+ * PARAMETERS :
+ * posn Desired file position relative to start of file in sizeof() units.
+ *
+ * EFFECT :
+ * Adjusts the internal state of the stream object so that the next
+ * read_buffer call will begin to read at the specified offset from
+ * the beginning of the input module. Returns 0 for success, non-zero
+ * for failure.
+ *
+ ************************************************************************ */
+ int (*set_file_posn) (struct dynamic_loader_stream *thisptr,
+ /* to be eliminated in release 2 */
+ unsigned int posn);
+
+};
+
+/*****************************************************************************
+ *****************************************************************************
+ * A class used by the dynamic loader for symbol table support and
+ * miscellaneous host-side functions
+ *****************************************************************************
+ **************************************************************************** */
+
+typedef u32 ldr_addr;
+
+/*
+ * the structure of a symbol known to the dynamic loader
+ */
+struct dynload_symbol {
+ ldr_addr value;
+};
+
+struct dynamic_loader_sym {
+/* public: */
+ /*************************************************************************
+ * find_matching_symbol
+ *
+ * PARAMETERS :
+ * name The name of the desired symbol
+ *
+ * EFFECT :
+ * Locates a symbol matching the name specified. A pointer to the
+ * symbol is returned if it exists; 0 is returned if no such symbol is
+ * found.
+ *
+ ************************************************************************ */
+ struct dynload_symbol *(*find_matching_symbol)
+ (struct dynamic_loader_sym *thisptr, const char *name);
+
+ /*************************************************************************
+ * add_to_symbol_table
+ *
+ * PARAMETERS :
+ * nname Pointer to the name of the new symbol
+ * moduleid An opaque module id assigned by the dynamic loader
+ *
+ * EFFECT :
+ * The new symbol is added to the table. A pointer to the symbol is
+ * returned, or NULL is returned for failure.
+ *
+ * NOTES:
+ * It is permissible for this function to return NULL; the effect is that
+ * the named symbol will not be available to resolve references in
+ * subsequent loads. Returning NULL will not cause the current load
+ * to fail.
+ ************************************************************************ */
+ struct dynload_symbol *(*add_to_symbol_table)
+ (struct dynamic_loader_sym *
+ thisptr, const char *nname, unsigned moduleid);
+
+ /*************************************************************************
+ * purge_symbol_table
+ *
+ * PARAMETERS :
+ * moduleid An opaque module id assigned by the dynamic loader
+ *
+ * EFFECT :
+ * Each symbol in the symbol table whose moduleid matches the argument
+ * is removed from the table.
+ ************************************************************************ */
+ void (*purge_symbol_table) (struct dynamic_loader_sym *thisptr,
+ unsigned moduleid);
+
+ /*************************************************************************
+ * dload_allocate
+ *
+ * PARAMETERS :
+ * memsiz size of desired memory in sizeof() units
+ *
+ * EFFECT :
+ * Returns a pointer to some "host" memory for use by the dynamic
+ * loader, or NULL for failure.
+    * This function serves as a replaceable form of "malloc" to
+ * allow the user to configure the memory usage of the dynamic loader.
+ ************************************************************************ */
+ void *(*dload_allocate) (struct dynamic_loader_sym *thisptr,
+ unsigned memsiz);
+
+ /*************************************************************************
+ * dload_deallocate
+ *
+ * PARAMETERS :
+ * memptr pointer to previously allocated memory
+ *
+ * EFFECT :
+ * Releases the previously allocated "host" memory.
+ ************************************************************************ */
+ void (*dload_deallocate) (struct dynamic_loader_sym *thisptr,
+ void *memptr);
+
+ /*************************************************************************
+ * error_report
+ *
+ * PARAMETERS :
+ * errstr pointer to an error string
+ * args additional arguments
+ *
+ * EFFECT :
+ * This function provides an error reporting interface for the dynamic
+ * loader. The error string and arguments are designed as for the
+ * library function vprintf.
+ ************************************************************************ */
+ void (*error_report) (struct dynamic_loader_sym *thisptr,
+ const char *errstr, va_list args);
+
+}; /* class dynamic_loader_sym */
+
+/*****************************************************************************
+ *****************************************************************************
+ * A class used by the dynamic loader to allocate and deallocate target memory.
+ *****************************************************************************
+ **************************************************************************** */
+
+struct ldr_section_info {
+ /* Name of the memory section assigned at build time */
+ const char *name;
+ ldr_addr run_addr; /* execution address of the section */
+ ldr_addr load_addr; /* load address of the section */
+ ldr_addr size; /* size of the section in addressable units */
+#ifndef _BIG_ENDIAN
+ u16 page; /* memory page or view */
+ u16 type; /* one of the section types below */
+#else
+ u16 type; /* one of the section types below */
+ u16 page; /* memory page or view */
+#endif
+ /* a context field for use by dynamic_loader_allocate;
+ * ignored but maintained by the dynamic loader */
+ u32 context;
+};
+
+/* use this macro to extract type of section from ldr_section_info.type field */
+#define DLOAD_SECTION_TYPE(typeinfo) (typeinfo & 0xF)
+
+/* type of section to be allocated */
+#define DLOAD_TEXT 0
+#define DLOAD_DATA 1
+#define DLOAD_BSS 2
+ /* internal use only, run-time cinit will be of type DLOAD_DATA */
+#define DLOAD_CINIT 3
+
+struct dynamic_loader_allocate {
+/* public: */
+
+ /*************************************************************************
+ * Function allocate
+ *
+ * Parameters:
+ * info A pointer to an information block for the section
+ * align The alignment of the storage in target AUs
+ *
+ * Effect:
+ * Allocates target memory for the specified section and fills in the
+ * load_addr and run_addr fields of the section info structure. Returns TRUE
+ * for success, FALSE for failure.
+ *
+ * Notes:
+    * Frequently load_addr and run_addr are the same, but if they are not,
+ * load_addr is used with dynamic_loader_initialize, and run_addr is
+ * used for almost all relocations. This function should always initialize
+ * both fields.
+ ************************************************************************ */
+ int (*dload_allocate) (struct dynamic_loader_allocate *thisptr,
+ struct ldr_section_info *info, unsigned align);
+
+ /*************************************************************************
+ * Function deallocate
+ *
+ * Parameters:
+ * info A pointer to an information block for the section
+ *
+ * Effect:
+ * Releases the target memory previously allocated.
+ *
+ * Notes:
+ * The content of the info->name field is undefined on call to this function.
+ ************************************************************************ */
+ void (*dload_deallocate) (struct dynamic_loader_allocate *thisptr,
+ struct ldr_section_info *info);
+
+}; /* class dynamic_loader_allocate */
+
+/*****************************************************************************
+ *****************************************************************************
+ * A class used by the dynamic loader to load data into a target. This class
+ * provides the interface-specific functions needed to load data.
+ *****************************************************************************
+ **************************************************************************** */
+
+struct dynamic_loader_initialize {
+/* public: */
+ /*************************************************************************
+ * Function connect
+ *
+ * Parameters:
+ * none
+ *
+ * Effect:
+ * Connect to the initialization interface. Returns TRUE for success,
+ * FALSE for failure.
+ *
+ * Notes:
+ * This function is called prior to use of any other functions in
+ * this interface.
+ ************************************************************************ */
+ int (*connect) (struct dynamic_loader_initialize *thisptr);
+
+ /*************************************************************************
+ * Function readmem
+ *
+ * Parameters:
+ * bufr Pointer to a word-aligned buffer for the result
+ * locn Target address of first data element
+ * info Section info for the section in which the address resides
+ * bytsiz Size of the data to be read in sizeof() units
+ *
+ * Effect:
+ * Fills the specified buffer with data from the target. Returns TRUE for
+ * success, FALSE for failure.
+ ************************************************************************ */
+ int (*readmem) (struct dynamic_loader_initialize *thisptr,
+ void *bufr,
+ ldr_addr locn,
+ struct ldr_section_info *info, unsigned bytsiz);
+
+ /*************************************************************************
+ * Function writemem
+ *
+ * Parameters:
+ * bufr Pointer to a word-aligned buffer of data
+ * locn Target address of first data element to be written
+ * info Section info for the section in which the address resides
+ * bytsiz Size of the data to be written in sizeof() units
+ *
+ * Effect:
+ * Writes the specified buffer to the target. Returns TRUE for success,
+ * FALSE for failure.
+ ************************************************************************ */
+ int (*writemem) (struct dynamic_loader_initialize *thisptr,
+ void *bufr,
+ ldr_addr locn,
+ struct ldr_section_info *info, unsigned bytsiz);
+
+ /*************************************************************************
+ * Function fillmem
+ *
+ * Parameters:
+ * locn Target address of first data element to be written
+ * info Section info for the section in which the address resides
+ * bytsiz Size of the data to be written in sizeof() units
+ * val Value to be written in each byte
+ * Effect:
+ * Fills the specified area of target memory. Returns TRUE for success,
+ * FALSE for failure.
+ ************************************************************************ */
+ int (*fillmem) (struct dynamic_loader_initialize *thisptr,
+ ldr_addr locn, struct ldr_section_info *info,
+ unsigned bytsiz, unsigned val);
+
+ /*************************************************************************
+ * Function execute
+ *
+ * Parameters:
+ * start Starting address
+ *
+ * Effect:
+ * The target code at the specified starting address is executed.
+ *
+ * Notes:
+ * This function is called at the end of the dynamic load process
+ * if the input module has specified a starting address.
+ ************************************************************************ */
+ int (*execute) (struct dynamic_loader_initialize *thisptr,
+ ldr_addr start);
+
+ /*************************************************************************
+ * Function release
+ *
+ * Parameters:
+ * none
+ *
+ * Effect:
+ * Releases the connection to the load interface.
+ *
+ * Notes:
+ * This function is called at the end of the dynamic load process.
+ ************************************************************************ */
+ void (*release) (struct dynamic_loader_initialize *thisptr);
+
+}; /* class dynamic_loader_initialize */
+
+#endif /* _DYNAMIC_LOADER_H_ */
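
As a concrete illustration of the stream class above, the sketch below backs
dynamic_loader_stream with an in-memory image of the module. It assumes the
embedding structure places the dynamic_loader_stream as its first member so
the thisptr cast is valid, and it follows the release-1 semantics
(read_buffer advances the position, set_file_posn rewinds it).

    #include <linux/string.h>

    struct membuf_stream {
    	struct dynamic_loader_stream strm;	/* must stay first */
    	const u8 *image;
    	unsigned size;
    	unsigned posn;
    };

    static int membuf_read_buffer(struct dynamic_loader_stream *thisptr,
    			      void *buffer, unsigned bufsiz)
    {
    	struct membuf_stream *s = (struct membuf_stream *)thisptr;
    	unsigned avail = s->size - s->posn;
    	unsigned count = bufsiz < avail ? bufsiz : avail;

    	memcpy(buffer, s->image + s->posn, count);
    	s->posn += count;
    	return count;		/* a short read is treated as an error */
    }

    static int membuf_set_file_posn(struct dynamic_loader_stream *thisptr,
    				unsigned int posn)
    {
    	struct membuf_stream *s = (struct membuf_stream *)thisptr;

    	if (posn > s->size)
    		return 1;	/* non-zero means failure */
    	s->posn = posn;
    	return 0;
    }
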
diff --git a/drivers/staging/tidspbridge/include/dspbridge/gb.h b/drivers/staging/tidspbridge/include/dspbridge/gb.h
new file mode 100644
index 0000000..fda783a
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/gb.h
@@ -0,0 +1,79 @@
+/*
+ * gb.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Generic bitmap manager.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef GB_
+#define GB_
+
+#define GB_NOBITS (~0)
+#include <dspbridge/host_os.h>
+
+struct gb_t_map;
+
+/*
+ * ======== gb_clear ========
+ * Clear the bit in position bitn in the bitmap map. Bit positions are
+ * zero based.
+ */
+
+extern void gb_clear(struct gb_t_map *map, u32 bitn);
+
+/*
+ * ======== gb_create ========
+ * Create a bit map with len bits. Initially all bits are cleared.
+ */
+
+extern struct gb_t_map *gb_create(u32 len);
+
+/*
+ * ======== gb_delete ========
+ * Delete previously created bit map
+ */
+
+extern void gb_delete(struct gb_t_map *map);
+
+/*
+ * ======== gb_findandset ========
+ * Finds a clear bit, sets it, and returns the position
+ */
+
+extern u32 gb_findandset(struct gb_t_map *map);
+
+/*
+ * ======== gb_minclear ========
+ * gb_minclear returns the minimum clear bit position. If no bit is
+ * clear, gb_minclear returns -1.
+ */
+extern u32 gb_minclear(struct gb_t_map *map);
+
+/*
+ * ======== gb_set ========
+ * Set the bit in position bitn in the bitmap map. Bit positions are
+ * zero based.
+ */
+
+extern void gb_set(struct gb_t_map *map, u32 bitn);
+
+/*
+ * ======== gb_test ========
+ * Returns TRUE if the bit in position bitn is set in map; otherwise
+ * gb_test returns FALSE. Bit positions are zero based.
+ */
+
+extern bool gb_test(struct gb_t_map *map, u32 bitn);
+
+#endif /*GB_ */
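
A short usage sketch for the bitmap manager: allocate a map, grab the lowest
clear bit, and release it again. The check against GB_NOBITS assumes that is
what gb_findandset() returns once every bit is already set.

    struct gb_t_map *map = gb_create(64);	/* 64-bit map, all bits clear */
    u32 bit;

    if (!map)
    	return -ENOMEM;

    bit = gb_findandset(map);		/* first call returns bit 0 */
    if (bit == GB_NOBITS)
    	pr_warn("bitmap full\n");
    else if (gb_test(map, bit))
    	gb_clear(map, bit);		/* give the bit back */

    gb_delete(map);
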
diff --git a/drivers/staging/tidspbridge/include/dspbridge/getsection.h b/drivers/staging/tidspbridge/include/dspbridge/getsection.h
new file mode 100644
index 0000000..bdd8e20
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/getsection.h
@@ -0,0 +1,108 @@
+/*
+ * getsection.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * This file provides an API add-on to the dynamic loader that allows the user
+ * to query section information and extract section data from dynamic load
+ * modules.
+ *
+ * Notes:
+ * Functions in this API assume that the supplied dynamic_loader_stream
+ * object supports the set_file_posn method.
+ *
+ * Copyright (C) 2008 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef _GETSECTION_H_
+#define _GETSECTION_H_
+
+#include "dynamic_loader.h"
+
+/*
+ * Procedure dload_module_open
+ *
+ * Parameters:
+ * module The input stream that supplies the module image
+ * syms Host-side malloc/free and error reporting functions.
+ * Other methods are unused.
+ *
+ * Effect:
+ * Reads header information from a dynamic loader module using the specified
+ * stream object, and returns a handle for the module information. This
+ * handle may be used in subsequent query calls to obtain information
+ * contained in the module.
+ *
+ * Returns:
+ * NULL if an error is encountered, otherwise a module handle for use
+ * in subsequent operations.
+ */
+extern void *dload_module_open(struct dynamic_loader_stream
+ *module, struct dynamic_loader_sym
+ *syms);
+
+/*
+ * Procedure dload_get_section_info
+ *
+ * Parameters:
+ * minfo Handle from dload_module_open for this module
+ * sectionName Pointer to the string name of the section desired
+ * sectionInfo Address of a section info structure pointer to be initialized
+ *
+ * Effect:
+ * Finds the specified section in the module information, and fills in
+ * the provided ldr_section_info structure.
+ *
+ * Returns:
+ * TRUE for success, FALSE for section not found
+ */
+extern int dload_get_section_info(void *minfo,
+ const char *sectionName,
+ const struct ldr_section_info
+ **const sectionInfo);
+
+/*
+ * Procedure dload_get_section
+ *
+ * Parameters:
+ * minfo Handle from dload_module_open for this module
+ * sectionInfo Pointer to a section info structure for the desired section
+ * sectionData Buffer to contain the section initialized data
+ *
+ * Effect:
+ * Copies the initialized data for the specified section into the
+ * supplied buffer.
+ *
+ * Returns:
+ * TRUE for success, FALSE for section not found
+ */
+extern int dload_get_section(void *minfo,
+ const struct ldr_section_info *sectionInfo,
+ void *sectionData);
+
+/*
+ * Procedure dload_module_close
+ *
+ * Parameters:
+ * minfo Handle from dload_module_open for this module
+ *
+ * Effect:
+ * Releases any storage associated with the module handle. On return,
+ * the module handle is invalid.
+ *
+ * Returns:
+ *  None. Errors, if any, are reported using syms->error_report(), where syms
+ *  was an argument to dload_module_open.
+ */
+extern void dload_module_close(void *minfo);
+
+#endif /* _GETSECTION_H_ */
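
Putting the four calls above together, a caller might extract one named
section as follows. The stream and host_syms objects are caller-provided, the
section name is hypothetical, and the buffer sizing assumes info->size is
expressed in bytes on this target, which is worth verifying against the
loader's addressable-unit convention.

    #include <linux/slab.h>

    const struct ldr_section_info *info;
    void *handle, *buf = NULL;

    handle = dload_module_open(&stream->strm, &host_syms);
    if (!handle)
    	return -EINVAL;

    if (dload_get_section_info(handle, ".dspbridge_sect", &info)) {
    	buf = kmalloc(info->size, GFP_KERNEL);
    	if (buf && !dload_get_section(handle, info, buf))
    		pr_err("failed to read section data\n");
    }
    dload_module_close(handle);
    kfree(buf);	/* kfree(NULL) is a no-op */
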
diff --git a/drivers/staging/tidspbridge/include/dspbridge/gh.h b/drivers/staging/tidspbridge/include/dspbridge/gh.h
new file mode 100644
index 0000000..55c0489
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/gh.h
@@ -0,0 +1,32 @@
+/*
+ * gh.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef GH_
+#define GH_
+#include <dspbridge/host_os.h>
+
+extern struct gh_t_hash_tab *gh_create(u16 max_bucket, u16 val_size,
+ u16(*hash) (void *, u16),
+ bool(*match) (void *, void *),
+ void (*delete) (void *));
+extern void gh_delete(struct gh_t_hash_tab *hash_tab);
+extern void gh_exit(void);
+extern void *gh_find(struct gh_t_hash_tab *hash_tab, void *key);
+extern void gh_init(void);
+extern void *gh_insert(struct gh_t_hash_tab *hash_tab, void *key, void *value);
+void gh_iterate(struct gh_t_hash_tab *hash_tab,
+ void (*callback)(void *, void *), void *user_data);
+#endif /* GH_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/gs.h b/drivers/staging/tidspbridge/include/dspbridge/gs.h
new file mode 100644
index 0000000..f32d8d9
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/gs.h
@@ -0,0 +1,59 @@
+/*
+ * gs.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Memory allocation/release wrappers. This module allows clients to
+ * avoid OS specific issues related to memory allocation. It also provides
+ * simple diagnostic capabilities to assist in the detection of memory
+ * leaks.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef GS_
+#define GS_
+
+/*
+ * ======== gs_alloc ========
+ * Alloc size bytes of space. Returns pointer to space
+ * allocated, otherwise NULL.
+ */
+extern void *gs_alloc(u32 size);
+
+/*
+ * ======== gs_exit ========
+ *  Module exit.  Do not change to "#define gs_exit()"; in
+ * some environments this operation must actually do some work!
+ */
+extern void gs_exit(void);
+
+/*
+ * ======== gs_free ========
+ * Free space allocated by gs_alloc() or GS_calloc().
+ */
+extern void gs_free(void *ptr);
+
+/*
+ * ======== gs_frees ========
+ * Free space allocated by gs_alloc() or GS_calloc() and assert that
+ * the size of the allocation is size bytes.
+ */
+extern void gs_frees(void *ptr, u32 size);
+
+/*
+ * ======== gs_init ========
+ * Module initialization. Do not change to "#define gs_init()"; in
+ * some environments this operation must actually do some work!
+ */
+extern void gs_init(void);
+
+#endif /*GS_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/host_os.h b/drivers/staging/tidspbridge/include/dspbridge/host_os.h
new file mode 100644
index 0000000..a91c136
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/host_os.h
@@ -0,0 +1,89 @@
+/*
+ * host_os.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Copyright (C) 2008 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef _HOST_OS_H_
+#define _HOST_OS_H_
+
+#include <asm/system.h>
+#include <asm/atomic.h>
+#include <linux/semaphore.h>
+#include <linux/uaccess.h>
+#include <linux/irq.h>
+#include <linux/io.h>
+#include <linux/syscalls.h>
+#include <linux/version.h>
+#include <linux/kernel.h>
+#include <linux/string.h>
+#include <linux/stddef.h>
+#include <linux/types.h>
+#include <linux/interrupt.h>
+#include <linux/spinlock.h>
+#include <linux/sched.h>
+#include <linux/fs.h>
+#include <linux/file.h>
+#include <linux/slab.h>
+#include <linux/delay.h>
+#include <linux/ctype.h>
+#include <linux/mm.h>
+#include <linux/device.h>
+#include <linux/vmalloc.h>
+#include <linux/ioport.h>
+#include <linux/platform_device.h>
+#include <dspbridge/dbtype.h>
+#include <plat/clock.h>
+#include <linux/clk.h>
+#include <plat/mailbox.h>
+#include <linux/pagemap.h>
+#include <asm/cacheflush.h>
+#include <linux/dma-mapping.h>
+
+/* TODO -- Remove, once BP defines them */
+#define INT_DSP_MMU_IRQ 28
+
+struct dspbridge_platform_data {
+ void (*dsp_set_min_opp) (u8 opp_id);
+ u8(*dsp_get_opp) (void);
+ void (*cpu_set_freq) (unsigned long f);
+ unsigned long (*cpu_get_freq) (void);
+ unsigned long mpu_speed[6];
+
+ /* functions to write and read PRCM registers */
+	void (*dsp_prm_write)(u32, s16, u16);
+	u32 (*dsp_prm_read)(s16, u16);
+	u32 (*dsp_prm_rmw_bits)(u32, u32, s16, s16);
+	void (*dsp_cm_write)(u32, s16, u16);
+	u32 (*dsp_cm_read)(s16, u16);
+	u32 (*dsp_cm_rmw_bits)(u32, u32, s16, s16);
+
+ u32 phys_mempool_base;
+ u32 phys_mempool_size;
+};
+
+#define PRCM_VDD1 1
+
+extern struct platform_device *omap_dspbridge_dev;
+extern struct device *bridge;
+
+#if defined(CONFIG_TIDSPBRIDGE) || defined(CONFIG_TIDSPBRIDGE_MODULE)
+extern void dspbridge_reserve_sdram(void);
+#else
+static inline void dspbridge_reserve_sdram(void)
+{
+}
+#endif
+
+extern unsigned long dspbridge_get_mempool_base(void);
+#endif
diff --git a/drivers/staging/tidspbridge/include/dspbridge/io.h b/drivers/staging/tidspbridge/include/dspbridge/io.h
new file mode 100644
index 0000000..e1610f1
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/io.h
@@ -0,0 +1,114 @@
+/*
+ * io.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * The io module manages IO between CHNL and msg_ctrl.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef IO_
+#define IO_
+
+#include <dspbridge/cfgdefs.h>
+#include <dspbridge/devdefs.h>
+
+#include <dspbridge/iodefs.h>
+
+/*
+ * ======== io_create ========
+ * Purpose:
+ * Create an IO manager object, responsible for managing IO between
+ * CHNL and msg_ctrl.
+ * Parameters:
+ *      phIOMgr:                Location to store an IO manager object on
+ * output.
+ * hdev_obj: Handle to a device object.
+ * pMgrAttrs: IO manager attributes.
+ * pMgrAttrs->birq: I/O IRQ number.
+ * pMgrAttrs->irq_shared: TRUE if the IRQ is shareable.
+ *      pMgrAttrs->word_size:   DSP word size in equivalent PC bytes.
+ * Returns:
+ * 0: Success;
+ * -ENOMEM: Insufficient memory for requested resources.
+ * -EIO: Unable to plug channel ISR for configured IRQ.
+ * -EINVAL: Invalid DSP word size (must be > 0).
+ * Invalid base address for DSP communications.
+ * Requires:
+ * io_init(void) called.
+ * phIOMgr != NULL.
+ * pMgrAttrs != NULL.
+ * Ensures:
+ */
+extern int io_create(OUT struct io_mgr **phIOMgr,
+ struct dev_object *hdev_obj,
+ IN CONST struct io_attrs *pMgrAttrs);
+
+/*
+ * ======== io_destroy ========
+ * Purpose:
+ * Destroy the IO manager.
+ * Parameters:
+ *      hio_mgr:        IO manager object.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: hio_mgr was invalid.
+ * Requires:
+ * io_init(void) called.
+ * Ensures:
+ */
+extern int io_destroy(struct io_mgr *hio_mgr);
+
+/*
+ * ======== io_exit ========
+ * Purpose:
+ * Discontinue usage of the IO module.
+ * Parameters:
+ * Returns:
+ * Requires:
+ * io_init(void) previously called.
+ * Ensures:
+ * Resources, if any acquired in io_init(void), are freed when the last
+ * client of IO calls io_exit(void).
+ */
+extern void io_exit(void);
+
+/*
+ * ======== io_init ========
+ * Purpose:
+ * Initialize the IO module's private state.
+ * Parameters:
+ * Returns:
+ * TRUE if initialized; FALSE if error occurred.
+ * Requires:
+ * Ensures:
+ * A requirement for each of the other public CHNL functions.
+ */
+extern bool io_init(void);
+
+/*
+ * ======== io_on_loaded ========
+ * Purpose:
+ * Called when a program is loaded so IO manager can update its
+ * internal state.
+ * Parameters:
+ *      hio_mgr:        IO manager object.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: hio_mgr was invalid.
+ * Requires:
+ * io_init(void) called.
+ * Ensures:
+ */
+extern int io_on_loaded(struct io_mgr *hio_mgr);
+
+#endif /* IO_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/io_sm.h b/drivers/staging/tidspbridge/include/dspbridge/io_sm.h
new file mode 100644
index 0000000..3ffd542
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/io_sm.h
@@ -0,0 +1,309 @@
+/*
+ * io_sm.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * IO dispatcher for a shared memory channel driver.
+ * Also includes macros to simulate shared memory via port I/O calls.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef IOSM_
+#define IOSM_
+
+#include <dspbridge/_chnl_sm.h>
+#include <dspbridge/host_os.h>
+
+#include <dspbridge/iodefs.h>
+
+#define IO_INPUT 0
+#define IO_OUTPUT 1
+#define IO_SERVICE 2
+#define IO_MAXSERVICE IO_SERVICE
+
+#define DSP_FIELD_ADDR(type, field, base, wordsize) \
+ ((((s32)&(((type *)0)->field)) / wordsize) + (u32)base)
+
+/* Access can be different SM access word size (e.g. 16/32 bit words) */
+#define IO_SET_VALUE(pContext, type, base, field, value) (base->field = value)
+#define IO_GET_VALUE(pContext, type, base, field) (base->field)
+#define IO_OR_VALUE(pContext, type, base, field, value) (base->field |= value)
+#define IO_AND_VALUE(pContext, type, base, field, value) (base->field &= value)
+#define IO_SET_LONG(pContext, type, base, field, value) (base->field = value)
+#define IO_GET_LONG(pContext, type, base, field) (base->field)
+
+#ifdef CONFIG_BRIDGE_DVFS
+/* The maximum number of OPPs that are supported */
+extern s32 dsp_max_opps;
+/* The Vdd1 opp table information */
+extern u32 vdd1_dsp_freq[6][4];
+#endif
+
+/*
+ * ======== io_cancel_chnl ========
+ * Purpose:
+ * Cancel IO on a given channel.
+ * Parameters:
+ * hio_mgr: IO Manager.
+ * ulChnl: Index of channel to cancel IO on.
+ * Returns:
+ * Requires:
+ * Valid hio_mgr.
+ * Ensures:
+ */
+extern void io_cancel_chnl(struct io_mgr *hio_mgr, u32 ulChnl);
+
+/*
+ * ======== io_dpc ========
+ * Purpose:
+ * Deferred procedure call for shared memory channel driver ISR. Carries
+ * out the dispatch of I/O.
+ * Parameters:
+ * pRefData: Pointer to reference data registered via a call to
+ * DPC_Create().
+ * Returns:
+ * Requires:
+ * Must not block.
+ * Must not acquire resources.
+ * All data touched must be locked in memory if running in kernel mode.
+ * Ensures:
+ * Non-preemptible (but interruptible).
+ */
+extern void io_dpc(IN OUT unsigned long pRefData);
+
+/*
+ * ======== io_mbox_msg ========
+ * Purpose:
+ * Main interrupt handler for the shared memory Bridge channel manager.
+ * Calls the Bridge's chnlsm_isr to determine if this interrupt is ours,
+ * then schedules a DPC to dispatch I/O.
+ * Parameters:
+ *      msg:            Mailbox message value received from the DSP.
+ * Returns:
+ * Requires:
+ * Must be in locked memory if executing in kernel mode.
+ * Must only call functions which are in locked memory if Kernel mode.
+ * Must only call asynchronous services.
+ * Interrupts are disabled and EOI for this interrupt has been sent.
+ * Ensures:
+ */
+void io_mbox_msg(u32 msg);
+
+/*
+ * ======== io_request_chnl ========
+ * Purpose:
+ * Request I/O from the DSP. Sets flags in shared memory, then interrupts
+ * the DSP.
+ * Parameters:
+ * hio_mgr: IO manager handle.
+ * pchnl: Ptr to the channel requesting I/O.
+ * iMode: Mode of channel: {IO_INPUT | IO_OUTPUT}.
+ * Returns:
+ * Requires:
+ * pchnl != NULL
+ * Ensures:
+ */
+extern void io_request_chnl(struct io_mgr *hio_mgr,
+ struct chnl_object *pchnl,
+ u8 iMode, OUT u16 *pwMbVal);
+
+/*
+ * ======== iosm_schedule ========
+ * Purpose:
+ * Schedule DPC for IO.
+ * Parameters:
+ *      hio_mgr:        Ptr to an I/O manager.
+ * Returns:
+ * Requires:
+ *      hio_mgr != NULL
+ * Ensures:
+ */
+extern void iosm_schedule(struct io_mgr *hio_mgr);
+
+/*
+ * DSP-DMA IO functions
+ */
+
+/*
+ * ======== io_ddma_init_chnl_desc ========
+ * Purpose:
+ * Initialize DSP DMA channel descriptor.
+ * Parameters:
+ *      hio_mgr:        Handle to an I/O manager.
+ *      uDDMAChnlId:    DDMA channel identifier.
+ *      uNumDesc:       Number of buffer descriptors (equals # of IOReqs &
+ *                      Chirps)
+ *      pDsp:           DSP address.
+ * Returns:
+ * Requires:
+ * uDDMAChnlId < DDMA_MAXDDMACHNLS
+ * uNumDesc > 0
+ * pVa != NULL
+ * pDspPa != NULL
+ *
+ * Ensures:
+ */
+extern void io_ddma_init_chnl_desc(struct io_mgr *hio_mgr, u32 uDDMAChnlId,
+ u32 uNumDesc, void *pDsp);
+
+/*
+ * ======== io_ddma_clear_chnl_desc ========
+ * Purpose:
+ * Clear DSP DMA channel descriptor.
+ * Parameters:
+ *      hio_mgr:        Handle to an I/O manager.
+ * uDDMAChnlId: DDMA channel identifier.
+ * Returns:
+ * Requires:
+ * uDDMAChnlId < DDMA_MAXDDMACHNLS
+ * Ensures:
+ */
+extern void io_ddma_clear_chnl_desc(struct io_mgr *hio_mgr, u32 uDDMAChnlId);
+
+/*
+ * ======== io_ddma_request_chnl ========
+ * Purpose:
+ *      Request a DSP-DMA channel transfer from the DSP. Sets up SM descriptors
+ * control fields in shared memory.
+ * Parameters:
+ *      hio_mgr:        Handle to an I/O manager.
+ * pchnl: Ptr to channel object
+ * chnl_packet_obj: Ptr to channel i/o request packet.
+ * Returns:
+ * Requires:
+ * pchnl != NULL
+ * pchnl->cio_reqs > 0
+ * chnl_packet_obj != NULL
+ * Ensures:
+ */
+extern void io_ddma_request_chnl(struct io_mgr *hio_mgr,
+ struct chnl_object *pchnl,
+ struct chnl_irp *chnl_packet_obj,
+ OUT u16 *pwMbVal);
+
+/*
+ * Zero-copy IO functions
+ */
+
+/*
+ * ======== io_ddzc_init_chnl_desc ========
+ * Purpose:
+ * Initialize ZCPY channel descriptor.
+ * Parameters:
+ * hio_mgr: Handle to an I/O manager.
+ * uZId: Zero-copy channel identifier.
+ * Returns:
+ * Requires:
+ * uZId < DDMA_MAXZCPYCHNLS
+ * hio_mgr != NULL
+ * Ensures:
+ */
+extern void io_ddzc_init_chnl_desc(struct io_mgr *hio_mgr, u32 uZId);
+
+/*
+ * ======== io_ddzc_clear_chnl_desc ========
+ * Purpose:
+ * Clear DSP ZC channel descriptor.
+ * Parameters:
+ * hio_mgr: Handle to an I/O manager.
+ * uChnlId: ZC channel identifier.
+ * Returns:
+ * Requires:
+ * hio_mgr is valid
+ * uChnlId < DDMA_MAXZCPYCHNLS
+ * Ensures:
+ */
+extern void io_ddzc_clear_chnl_desc(struct io_mgr *hio_mgr, u32 uChnlId);
+
+/*
+ * ======== io_ddzc_request_chnl ========
+ * Purpose:
+ * Request zero-copy channel transfer. Sets up SM descriptors and
+ * control fields in shared memory.
+ * Parameters:
+ * hio_mgr: Handle to an I/O manager.
+ * pchnl: Ptr to channel object
+ * chnl_packet_obj: Ptr to channel i/o request packet.
+ * Returns:
+ * Requires:
+ * pchnl != NULL
+ * pchnl->cio_reqs > 0
+ * chnl_packet_obj != NULL
+ * Ensures:
+ */
+extern void io_ddzc_request_chnl(struct io_mgr *hio_mgr,
+ struct chnl_object *pchnl,
+ struct chnl_irp *chnl_packet_obj,
+ OUT u16 *pwMbVal);
+
+/*
+ * ======== io_sh_msetting ========
+ * Purpose:
+ * Sets the shared memory setting.
+ * Parameters:
+ * hio_mgr: Handle to an I/O manager.
+ * desc: Shared memory type
+ * pargs: Ptr to shm setting
+ * Returns:
+ * Requires:
+ * hio_mgr != NULL
+ * pargs != NULL
+ * Ensures:
+ */
+extern int io_sh_msetting(struct io_mgr *hio_mgr, u8 desc, void *pargs);
+
+/*
+ * Misc functions for the CHNL_IO shared memory library:
+ */
+
+/* Maximum channel bufsize that can be used. */
+extern u32 io_buf_size(struct io_mgr *hio_mgr);
+
+extern u32 io_read_value(struct bridge_dev_context *hDevContext, u32 dwDSPAddr);
+
+extern void io_write_value(struct bridge_dev_context *hDevContext,
+ u32 dwDSPAddr, u32 dwValue);
+
+extern u32 io_read_value_long(struct bridge_dev_context *hDevContext,
+ u32 dwDSPAddr);
+
+extern void io_write_value_long(struct bridge_dev_context *hDevContext,
+ u32 dwDSPAddr, u32 dwValue);
+
+extern void io_or_set_value(struct bridge_dev_context *hDevContext,
+ u32 dwDSPAddr, u32 dwValue);
+
+extern void io_and_set_value(struct bridge_dev_context *hDevContext,
+ u32 dwDSPAddr, u32 dwValue);
+
+extern void io_intr_dsp2(IN struct io_mgr *pio_mgr, IN u16 mb_val);
+
+extern void io_sm_init(void);
+
+/*
+ * ========print_dsp_trace_buffer ========
+ * Print DSP tracebuffer.
+ */
+extern int print_dsp_trace_buffer(struct bridge_dev_context
+ *hbridge_context);
+
+int dump_dsp_stack(struct bridge_dev_context *bridge_context);
+
+void dump_dl_modules(struct bridge_dev_context *bridge_context);
+
+#ifndef DSP_TRACEBUF_DISABLED
+void print_dsp_debug_trace(struct io_mgr *hio_mgr);
+#endif
+
+#endif /* IOSM_ */
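The io_mbox_msg and io_dpc comments above describe the usual top-half/bottom-half split: the mailbox ISR only claims the interrupt and schedules a deferred procedure call, and the DPC later walks shared memory and dispatches I/O outside interrupt context. A minimal sketch of that pattern, assuming nothing beyond the generic tasklet API; fake_isr and fake_dpc are illustrative names, not part of the driver:

	#include <linux/interrupt.h>

	/* Bottom half: runs in softirq context, must not block. */
	static void fake_dpc(unsigned long data)
	{
		/* walk shared memory and dispatch completed I/O here */
	}
	static DECLARE_TASKLET(fake_tasklet, fake_dpc, 0);

	/* Top half: acknowledge the mailbox interrupt, defer the work. */
	static irqreturn_t fake_isr(int irq, void *dev_id)
	{
		tasklet_schedule(&fake_tasklet);
		return IRQ_HANDLED;
	}

The driver's own deferral goes through iosm_schedule()/io_dpc() declared above, subject to the ISR restrictions listed there (locked memory, asynchronous services only).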
diff --git a/drivers/staging/tidspbridge/include/dspbridge/iodefs.h b/drivers/staging/tidspbridge/include/dspbridge/iodefs.h
new file mode 100644
index 0000000..8bd10a0
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/iodefs.h
@@ -0,0 +1,36 @@
+/*
+ * iodefs.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * System-wide channel objects and constants.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef IODEFS_
+#define IODEFS_
+
+#define IO_MAXIRQ 0xff /* Arbitrarily large number. */
+
+/* IO Objects: */
+struct io_mgr;
+
+/* IO manager attributes: */
+struct io_attrs {
+ u8 birq; /* Channel's I/O IRQ number. */
+ bool irq_shared; /* TRUE if the IRQ is shareable. */
+ u32 word_size; /* DSP Word size. */
+ u32 shm_base; /* Physical base address of shared memory. */
+ u32 usm_length; /* Size (in bytes) of shared memory. */
+};
+
+#endif /* IODEFS_ */
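struct io_attrs above is the configuration block handed to the I/O manager when it is created. A hypothetical initialization, only to show what each field is for; the numeric values are placeholders, not real OMAP3 addresses:

	struct io_attrs attrs = {
		.birq       = 26,         /* mailbox IRQ line (placeholder) */
		.irq_shared = false,      /* IRQ is not shared */
		.word_size  = 2,          /* DSP word size */
		.shm_base   = 0x87000000, /* physical base of shared memory */
		.usm_length = 0x100000,   /* 1 MiB of shared memory */
	};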
diff --git a/drivers/staging/tidspbridge/include/dspbridge/ldr.h b/drivers/staging/tidspbridge/include/dspbridge/ldr.h
new file mode 100644
index 0000000..6a0269c
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/ldr.h
@@ -0,0 +1,29 @@
+/*
+ * ldr.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Provide module loading services and symbol export services.
+ *
+ * Notes:
+ * This service is meant to be used by modules of the DSP/BIOS Bridge
+ * driver.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef LDR_
+#define LDR_
+
+/* Loader objects: */
+struct ldr_module;
+
+#endif /* LDR_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/list.h b/drivers/staging/tidspbridge/include/dspbridge/list.h
new file mode 100644
index 0000000..dc8ae09
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/list.h
@@ -0,0 +1,225 @@
+/*
+ * list.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Declarations of list management control structures and definitions
+ * of inline list management functions.
+ *
+ * Copyright (C) 2008 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef LIST_
+#define LIST_
+
+#include <dspbridge/host_os.h>
+#include <linux/list.h>
+
+#define LST_IS_EMPTY(l) list_empty(&(l)->head)
+
+struct lst_list {
+ struct list_head head;
+};
+
+/*
+ * ======== lst_first ========
+ * Purpose:
+ * Returns a pointer to the first element of the list, or NULL if the list
+ * is empty.
+ * Parameters:
+ * pList: Pointer to list control structure.
+ * Returns:
+ * Pointer to first list element, or NULL.
+ * Requires:
+ * - LST initialized.
+ * - pList != NULL.
+ * Ensures:
+ */
+static inline struct list_head *lst_first(struct lst_list *pList)
+{
+ if (pList && !list_empty(&pList->head))
+ return pList->head.next;
+ return NULL;
+}
+
+/*
+ * ======== lst_get_head ========
+ * Purpose:
+ * Pops the head off the list and returns a pointer to it.
+ * Details:
+ * If the list is empty, returns NULL.
+ * Else, removes the element at the head of the list, making the next
+ * element the head of the list.
+ * The head is removed by making the tail element of the list point its
+ * "next" pointer at the next element after the head, and by making the
+ * "prev" pointer of the next element after the head point at the tail
+ * element. So the next element after the head becomes the new head of
+ * the list.
+ * Parameters:
+ * pList: Pointer to list control structure of list whose head
+ * element is to be removed
+ * Returns:
+ * Pointer to element that was at the head of the list (success)
+ * NULL No elements in list
+ * Requires:
+ * - LST initialized.
+ * - pList != NULL.
+ * Ensures:
+ * Notes:
+ * Because the tail of the list points forward (its "next" pointer) to
+ * the head of the list, and the head of the list points backward (its
+ * "prev" pointer) to the tail of the list, this list is circular.
+ */
+static inline struct list_head *lst_get_head(struct lst_list *pList)
+{
+ struct list_head *elem_list;
+
+ if (!pList || list_empty(&pList->head))
+ return NULL;
+
+ elem_list = pList->head.next;
+ pList->head.next = elem_list->next;
+ elem_list->next->prev = &pList->head;
+
+ return elem_list;
+}
+
+/*
+ * ======== lst_init_elem ========
+ * Purpose:
+ * Initializes a list element to default (cleared) values
+ * Details:
+ * Parameters:
+ * elem_list: Pointer to list element to be reset
+ * Returns:
+ * Requires:
+ * LST initialized.
+ * Ensures:
+ * Notes:
+ * This function must not be called to "reset" an element in the middle
+ * of a list chain -- that would break the chain.
+ *
+ */
+static inline void lst_init_elem(struct list_head *elem_list)
+{
+ if (elem_list) {
+ elem_list->next = NULL;
+ elem_list->prev = NULL;
+ }
+}
+
+/*
+ * ======== lst_insert_before ========
+ * Purpose:
+ * Insert the element before the existing element.
+ * Parameters:
+ * pList: Pointer to list control structure.
+ * elem_list: Pointer to element in list to insert.
+ * pElemExisting: Pointer to existing list element.
+ * Returns:
+ * Requires:
+ * - LST initialized.
+ * - pList != NULL.
+ * - elem_list != NULL.
+ * - pElemExisting != NULL.
+ * Ensures:
+ */
+static inline void lst_insert_before(struct lst_list *pList,
+ struct list_head *elem_list,
+ struct list_head *pElemExisting)
+{
+ if (pList && elem_list && pElemExisting)
+ list_add_tail(elem_list, pElemExisting);
+}
+
+/*
+ * ======== lst_next ========
+ * Purpose:
+ * Returns a pointer to the next element of the list, or NULL if the next
+ * element is the head of the list or the list is empty.
+ * Parameters:
+ * pList: Pointer to list control structure.
+ * cur_elem: Pointer to element in list to remove.
+ * Returns:
+ * Pointer to list element, or NULL.
+ * Requires:
+ * - LST initialized.
+ * - pList != NULL.
+ * - cur_elem != NULL.
+ * Ensures:
+ */
+static inline struct list_head *lst_next(struct lst_list *pList,
+ struct list_head *cur_elem)
+{
+ if (pList && !list_empty(&pList->head) && cur_elem &&
+ (cur_elem->next != &pList->head))
+ return cur_elem->next;
+ return NULL;
+}
+
+/*
+ * ======== lst_put_tail ========
+ * Purpose:
+ * Adds the specified element to the tail of the list
+ * Details:
+ * Sets new element's "prev" pointer to the address previously held by
+ * the head element's prev pointer. This is the previous tail member of
+ * the list.
+ * Sets the new head's prev pointer to the address of the element.
+ * Sets next pointer of the previous tail member of the list to point to
+ * the new element (rather than the head, which it had been pointing at).
+ * Sets new element's next pointer to the address of the head element.
+ * Sets head's prev pointer to the address of the new element.
+ * Parameters:
+ * pList: Pointer to list control structure to which *elem_list will be
+ * added
+ * elem_list: Pointer to list element to be added
+ * Returns:
+ * Void
+ * Requires:
+ * *elem_list and *pList must both exist.
+ * LST initialized.
+ * Ensures:
+ * Notes:
+ * Because the tail is always "just before" the head of the list (the
+ * tail's "next" pointer points at the head of the list, and the head's
+ * "prev" pointer points at the tail of the list), the list is circular.
+ */
+static inline void lst_put_tail(struct lst_list *pList,
+ struct list_head *elem_list)
+{
+ if (pList && elem_list)
+ list_add_tail(elem_list, &pList->head);
+}
+
+/*
+ * ======== lst_remove_elem ========
+ * Purpose:
+ * Removes (unlinks) the given element from the list, if the list is not
+ * empty. Does not free the list element.
+ * Parameters:
+ * pList: Pointer to list control structure.
+ * cur_elem: Pointer to element in list to remove.
+ * Returns:
+ * Requires:
+ * - LST initialized.
+ * - pList != NULL.
+ * - cur_elem != NULL.
+ * Ensures:
+ */
+static inline void lst_remove_elem(struct lst_list *pList,
+ struct list_head *cur_elem)
+{
+ if (pList && !list_empty(&pList->head) && cur_elem)
+ list_del_init(cur_elem);
+}
+
+#endif /* LIST_ */
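The lst_* helpers above are thin, NULL-checked wrappers over <linux/list.h> that treat the list as a FIFO: elements embed a struct list_head, lst_put_tail() enqueues, lst_get_head() dequeues. A short usage sketch; struct work_item and the surrounding function are illustrative only:

	struct work_item {
		struct list_head link;	/* element's hook into the list */
		int payload;
	};

	static struct lst_list queue;

	static void fifo_example(void)
	{
		struct work_item item = { .payload = 42 };
		struct list_head *head;

		INIT_LIST_HEAD(&queue.head);		/* empty list */
		lst_init_elem(&item.link);		/* clear element links */
		lst_put_tail(&queue, &item.link);	/* enqueue at tail */

		head = lst_get_head(&queue);		/* dequeue from head */
		if (head) {
			struct work_item *w =
				container_of(head, struct work_item, link);
			/* w->payload == 42 */
		}
	}

Beyond the FIFO naming, the wrappers mainly add NULL checks on top of list_add_tail()/list_del_init().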
diff --git a/drivers/staging/tidspbridge/include/dspbridge/mbx_sh.h b/drivers/staging/tidspbridge/include/dspbridge/mbx_sh.h
new file mode 100644
index 0000000..289f6f3
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/mbx_sh.h
@@ -0,0 +1,198 @@
+/*
+ * mbx_sh.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Definitions for shared mailbox cmd/data values (used on both
+ * the GPP and DSP sides).
+ *
+ * Copyright (C) 2008 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+/*
+ * Bridge usage of OMAP mailbox 1 is determined by the "class" of the
+ * mailbox interrupt's cmd value received. The class value are defined
+ * as a bit (10 thru 15) being set.
+ *
+ * Note: Only 16 bits of each is used. The other 16-bit data reg is available.
+ *
+ * 16 bit Mbx bit defns:
+ *
+ * A: Exception/Error handling (Module DEH) : class = 0.
+ *
+ * 15 10 0
+ * ---------------------------------
+ * |0|0|0|0|0|0|x|x|x|x|x|x|x|x|x|x|
+ * ---------------------------------
+ * | (class) | (module specific) |
+ *
+ *
+ * B: DSP-DMA link driver channels (DDMA) : class = 1.
+ *
+ * 15 10 0
+ * ---------------------------------
+ * |0|0|0|0|0|1|b|b|b|b|b|c|c|c|c|c|
+ * ---------------------------------
+ * | (class) | (module specific) |
+ *
+ * where b -> buffer index (32 DDMA buffers/chnl max)
+ * c -> channel Id (32 DDMA chnls max)
+ *
+ *
+ * C: Proc-copy link driver channels (PCPY) : class = 2.
+ *
+ * 15 10 0
+ * ---------------------------------
+ * |0|0|0|0|1|0|x|x|x|x|x|x|x|x|x|x|
+ * ---------------------------------
+ * | (class) | (module specific) |
+ *
+ *
+ * D: Zero-copy link driver channels (DDZC) : class = 4.
+ *
+ * 15 10 0
+ * ---------------------------------
+ * |0|0|0|1|0|0|x|x|x|x|x|c|c|c|c|c|
+ * ---------------------------------
+ * | (class) | (module specific) |
+ *
+ * where x -> not used
+ * c -> channel Id (32 ZCPY chnls max)
+ *
+ *
+ * E: Power management : class = 8.
+ *
+ * 15 10 0
+ * ---------------------------------
+ * |0|0|1|0|0|0|x|x|x|x|x|c|c|c|c|c|
+ * ---------------------------------
+ * | (class) | (module specific) |
+ *
+ * 0010 00xx xxxc cccc
+ * 0010 00nn pppp qqqq
+ * nn:
+ * 00 = reserved
+ * 01 = pwr state change
+ * 10 = opp pre-change
+ * 11 = opp post-change
+ *
+ * if nn = pwr state change:
+ * pppp = don't care
+ * qqqq:
+ * 0010 = hibernate (mbx value 0010 0001 0000 0010)
+ * 0110 = retention (mbx value 0010 0001 0000 0110)
+ * others reserved
+ *
+ * if nn = opp pre-change:
+ * pppp = current opp
+ * qqqq = next opp
+ *
+ * if nn = opp post-change:
+ * pppp = prev opp
+ * qqqq = current opp
+ *
+ * where x -> not used
+ * c -> Power management command
+ *
+ */
+
+#ifndef _MBX_SH_H
+#define _MBX_SH_H
+
+#define MBX_CLASS_MSK 0xFC00 /* Class bits are 10 thru 15 */
+#define MBX_VALUE_MSK 0x03FF /* Value bits are 0 thru 9 */
+
+#define MBX_DEH_CLASS 0x0000 /* DEH owns Mbx INTR */
+#define MBX_DDMA_CLASS 0x0400 /* DSP-DMA link drvr chnls owns INTR */
+#define MBX_PCPY_CLASS 0x0800 /* PROC-COPY " */
+#define MBX_ZCPY_CLASS 0x1000 /* ZERO-COPY " */
+#define MBX_PM_CLASS 0x2000 /* Power Management */
+#define MBX_DBG_CLASS 0x4000 /* For debugging purpose */
+
+/*
+ * Exception Handler codes
+ * Magic code used to determine if DSP signaled exception.
+ */
+#define MBX_DEH_BASE 0x0
+#define MBX_DEH_USERS_BASE 0x100 /* 256 */
+#define MBX_DEH_LIMIT 0x3FF /* 1023 */
+#define MBX_DEH_RESET 0x101 /* DSP RESET (DEH) */
+#define MBX_DEH_EMMU 0x103 /* DSP MMU FAULT RECOVERY */
+
+/*
+ * Link driver command/status codes.
+ */
+/* DSP-DMA */
+#define MBX_DDMA_NUMCHNLBITS 5 /* # chnl Id: # bits available */
+#define MBX_DDMA_CHNLSHIFT 0 /* # of bits to shift */
+#define MBX_DDMA_CHNLMSK 0x01F /* bits 0 thru 4 */
+
+#define MBX_DDMA_NUMBUFBITS 5 /* buffer index: # of bits avail */
+#define MBX_DDMA_BUFSHIFT (MBX_DDMA_NUMCHNLBITS + MBX_DDMA_CHNLSHIFT)
+#define MBX_DDMA_BUFMSK 0x3E0 /* bits 5 thru 9 */
+
+/* Zero-Copy */
+#define MBX_ZCPY_NUMCHNLBITS 5 /* # chnl Id: # bits available */
+#define MBX_ZCPY_CHNLSHIFT 0 /* # of bits to shift */
+#define MBX_ZCPY_CHNLMSK 0x01F /* bits 0 thru 4 */
+
+/* Power Management Commands */
+#define MBX_PM_DSPIDLE (MBX_PM_CLASS + 0x0)
+#define MBX_PM_DSPWAKEUP (MBX_PM_CLASS + 0x1)
+#define MBX_PM_EMERGENCYSLEEP (MBX_PM_CLASS + 0x2)
+#define MBX_PM_SLEEPUNTILRESTART (MBX_PM_CLASS + 0x3)
+#define MBX_PM_DSPGLOBALIDLE_OFF (MBX_PM_CLASS + 0x4)
+#define MBX_PM_DSPGLOBALIDLE_ON (MBX_PM_CLASS + 0x5)
+#define MBX_PM_SETPOINT_PRENOTIFY (MBX_PM_CLASS + 0x6)
+#define MBX_PM_SETPOINT_POSTNOTIFY (MBX_PM_CLASS + 0x7)
+#define MBX_PM_DSPRETN (MBX_PM_CLASS + 0x8)
+#define MBX_PM_DSPRETENTION (MBX_PM_CLASS + 0x8)
+#define MBX_PM_DSPHIBERNATE (MBX_PM_CLASS + 0x9)
+#define MBX_PM_HIBERNATE_EN (MBX_PM_CLASS + 0xA)
+#define MBX_PM_OPP_REQ (MBX_PM_CLASS + 0xB)
+#define MBX_PM_OPP_CHG (MBX_PM_CLASS + 0xC)
+
+#define MBX_PM_TYPE_MASK 0x0300
+#define MBX_PM_TYPE_PWR_CHNG 0x0100
+#define MBX_PM_TYPE_OPP_PRECHNG 0x0200
+#define MBX_PM_TYPE_OPP_POSTCHNG 0x0300
+#define MBX_PM_TYPE_OPP_MASK 0x0300
+#define MBX_PM_OPP_PRECHNG (MBX_PM_CLASS | MBX_PM_TYPE_OPP_PRECHNG)
+/* DSP to MPU */
+#define MBX_PM_OPP_CHNG(OPP) (MBX_PM_CLASS | MBX_PM_TYPE_OPP_PRECHNG | (OPP))
+#define MBX_PM_RET (MBX_PM_CLASS | MBX_PM_TYPE_PWR_CHNG | 0x0006)
+#define MBX_PM_HIB (MBX_PM_CLASS | MBX_PM_TYPE_PWR_CHNG | 0x0002)
+#define MBX_PM_OPP1 0
+#define MBX_PM_OPP2 1
+#define MBX_PM_OPP3 2
+#define MBX_PM_OPP4 3
+#define MBX_OLDOPP_EXTRACT(OPPMSG) ((0x00F0 & (OPPMSG)) >> 4)
+#define MBX_NEWOPP_EXTRACT(OPPMSG) (0x000F & (OPPMSG))
+#define MBX_PREVOPP_EXTRACT(OPPMSG) ((0x00F0 & (OPPMSG)) >> 4)
+#define MBX_CUROPP_EXTRACT(OPPMSG) (0x000F & (OPPMSG))
+
+/* Bridge Debug Commands */
+#define MBX_DBG_SYSPRINTF (MBX_DBG_CLASS + 0x0)
+
+/*
+ * Useful macros
+ */
+/* DSP-DMA channel */
+#define MBX_SETDDMAVAL(x, y) (MBX_DDMA_CLASS | ((x) << MBX_DDMA_BUFSHIFT) | \
+ ((y) << MBX_DDMA_CHNLSHIFT))
+
+/* Zero-Copy channel */
+#define MBX_SETZCPYVAL(x) (MBX_ZCPY_CLASS | ((x) << MBX_ZCPY_CHNLSHIFT))
+
+#endif /* _MBX_SH_H */
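The class/value split documented at the top of this header can be made concrete with a small sketch: a zero-copy request is composed with MBX_SETZCPYVAL(), and an incoming command is routed by masking out the class bits. dispatch_mbx() is a hypothetical helper, not part of the driver:

	/* GPP -> DSP: request service on zero-copy channel 5.
	 * MBX_SETZCPYVAL(5) == MBX_ZCPY_CLASS | 5 == 0x1005
	 */

	/* DSP -> GPP: route an incoming command by its class bits. */
	static void dispatch_mbx(u16 msg)
	{
		switch (msg & MBX_CLASS_MSK) {
		case MBX_DEH_CLASS:	/* exception/error handling */
			break;
		case MBX_ZCPY_CLASS:	/* zero-copy channel event */
			/* channel id: msg & MBX_ZCPY_CHNLMSK */
			break;
		case MBX_PM_CLASS:	/* power management command */
			break;
		default:
			break;
		}
	}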
diff --git a/drivers/staging/tidspbridge/include/dspbridge/memdefs.h b/drivers/staging/tidspbridge/include/dspbridge/memdefs.h
new file mode 100644
index 0000000..78d2c5d
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/memdefs.h
@@ -0,0 +1,30 @@
+/*
+ * memdefs.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Global MEM constants and types, shared between Bridge driver and DSP API.
+ *
+ * Copyright (C) 2008 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef MEMDEFS_
+#define MEMDEFS_
+
+/*
+ * MEM_VIRTUALSEGID is used by Node & Strm to access virtual address space in
+ * the correct client process context.
+ */
+#define MEM_SETVIRTUALSEGID 0x10000000
+#define MEM_GETVIRTUALSEGID 0x20000000
+#define MEM_MASKVIRTUALSEGID (MEM_SETVIRTUALSEGID | MEM_GETVIRTUALSEGID)
+
+#endif /* MEMDEFS_ */
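The two flags above appear to be tag bits OR'ed into a segment identifier so that NODE and STRM can tell a client-process virtual-space allocation apart from a plain segment number. A tiny sketch of tagging and testing an ID, assuming that usage; segid is a hypothetical variable:

	u32 segid = 3;				/* plain segment number */

	segid |= MEM_SETVIRTUALSEGID;		/* mark as virtual-space id */

	if (segid & MEM_MASKVIRTUALSEGID) {
		/* strip the tag bits to recover the segment number */
		u32 plain = segid & ~MEM_MASKVIRTUALSEGID;	/* == 3 */
	}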
diff --git a/drivers/staging/tidspbridge/include/dspbridge/mgr.h b/drivers/staging/tidspbridge/include/dspbridge/mgr.h
new file mode 100644
index 0000000..ce418ae
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/mgr.h
@@ -0,0 +1,205 @@
+/*
+ * mgr.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * This is the DSP API RM module interface.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef MGR_
+#define MGR_
+
+#include <dspbridge/mgrpriv.h>
+
+#define MAX_EVENTS 32
+
+/*
+ * ======== mgr_wait_for_bridge_events ========
+ * Purpose:
+ * Block on any Bridge event(s)
+ * Parameters:
+ * anotifications: array of pointers to notification objects.
+ * count: number of elements in above array
+ * pu_index: index of signaled event object
+ * utimeout: timeout interval in milliseconds
+ * Returns:
+ * 0 : Success.
+ * -ETIME : Wait timed out. *pu_index is undetermined.
+ * Details:
+ */
+
+int mgr_wait_for_bridge_events(struct dsp_notification
+ **anotifications,
+ u32 count, OUT u32 *pu_index,
+ u32 utimeout);
+
+/*
+ * ======== mgr_create ========
+ * Purpose:
+ * Creates the Manager Object. This is done during the driver loading.
+ * There is only one Manager Object in the DSP/BIOS Bridge.
+ * Parameters:
+ * phMgrObject: Location to store created MGR Object handle.
+ * dev_node_obj: Device object as known to the system.
+ * Returns:
+ * 0: Success
+ * -ENOMEM: Failed to Create the Object
+ * -EPERM: General Failure
+ * Requires:
+ * MGR Initialized (refs > 0 )
+ * phMgrObject != NULL.
+ * Ensures:
+ * 0: *phMgrObject is a valid MGR interface to the device.
+ * MGR Object stores the DCD Manager Handle.
+ * MGR Object stored in the Registry.
+ * !0: MGR Object not created
+ * Details:
+ * DCD Dll is loaded and MGR Object stores the handle of the DLL.
+ */
+extern int mgr_create(OUT struct mgr_object **hmgr_obj,
+ struct cfg_devnode *dev_node_obj);
+
+/*
+ * ======== mgr_destroy ========
+ * Purpose:
+ * Destroys the MGR object. Called upon driver unloading.
+ * Parameters:
+ * hmgr_obj: Handle to Manager object.
+ * Returns:
+ * 0: Success.
+ * DCD Manager freed; MGR Object destroyed;
+ * MGR Object deleted from the Registry.
+ * -EPERM: Failed to destroy MGR Object
+ * Requires:
+ * MGR Initialized (refs > 0 )
+ * hmgr_obj is a valid MGR handle.
+ * Ensures:
+ * 0: MGR Object destroyed and hmgr_obj is Invalid MGR
+ * Handle.
+ */
+extern int mgr_destroy(struct mgr_object *hmgr_obj);
+
+/*
+ * ======== mgr_enum_node_info ========
+ * Purpose:
+ * Enumerate and get configuration information about nodes configured
+ * in the node database.
+ * Parameters:
+ * node_id: The node index (base 0).
+ * pndb_props: Ptr to the dsp_ndbprops structure for output.
+ * undb_props_size: Size of the dsp_ndbprops structure.
+ * pu_num_nodes: Location where the number of nodes configured
+ * in the database will be returned.
+ * Returns:
+ * 0: Success.
+ * -EINVAL: Parameter node_id is greater than the number of
+ * nodes configured in the system.
+ * -EIDRM: During enumeration there has been a change in
+ * the number of nodes configured or in the
+ * properties of the enumerated nodes.
+ * -EPERM: Failed to query the Node Data Base.
+ * Requires:
+ * pNDBPROPS is not null
+ * undb_props_size >= sizeof(dsp_ndbprops)
+ * pu_num_nodes is not null
+ * MGR Initialized (refs > 0 )
+ * Ensures:
+ * SUCCESS on successful retrieval of data and *pu_num_nodes > 0 OR
+ * DSP_FAILED && *pu_num_nodes == 0.
+ * Details:
+ */
+extern int mgr_enum_node_info(u32 node_id,
+ OUT struct dsp_ndbprops *pndb_props,
+ u32 undb_props_size,
+ OUT u32 *pu_num_nodes);
+
+/*
+ * ======== mgr_enum_processor_info ========
+ * Purpose:
+ * Enumerate and get configuration information about available DSP
+ * processors
+ * Parameters:
+ * processor_id: The processor index (zero-based).
+ * processor_info: Ptr to the dsp_processorinfo structure .
+ * processor_info_size: Size of dsp_processorinfo structure.
+ * pu_num_procs: Location where the number of DSPs configured
+ * in the database will be returned
+ * Returns:
+ * 0: Success.
+ * -EINVAL: Parameter processor_id is greater than the number of
+ * DSP processors in the system.
+ * -EPERM: Failed to query the Node Data Base.
+ * Requires:
+ * processor_info is not null
+ * pu_num_procs is not null
+ * processor_info_size >= sizeof(dsp_processorinfo)
+ * MGR Initialized (refs > 0 )
+ * Ensures:
+ * SUCCESS on successful retrieval of data and *pu_num_procs > 0 OR
+ * DSP_FAILED && *pu_num_procs == 0.
+ * Details:
+ */
+extern int mgr_enum_processor_info(u32 processor_id,
+ OUT struct dsp_processorinfo
+ *processor_info,
+ u32 processor_info_size,
+ OUT u8 *pu_num_procs);
+/*
+ * ======== mgr_exit ========
+ * Purpose:
+ * Decrement reference count, and free resources when reference count is
+ * 0.
+ * Parameters:
+ * Returns:
+ * Requires:
+ * MGR is initialized.
+ * Ensures:
+ * When reference count == 0, MGR's private resources are freed.
+ */
+extern void mgr_exit(void);
+
+/*
+ * ======== mgr_get_dcd_handle ========
+ * Purpose:
+ * Retrieves the MGR handle. Accessor Function
+ * Parameters:
+ * hMGRHandle: Handle to the Manager Object
+ * phDCDHandle: Ptr to receive the DCD Handle.
+ * Returns:
+ * 0: Success
+ * -EPERM: Failure to get the Handle
+ * Requires:
+ * MGR is initialized.
+ * phDCDHandle != NULL
+ * Ensures:
+ * 0 and *phDCDHandle != NULL ||
+ * -EPERM and *phDCDHandle == NULL
+ */
+extern int mgr_get_dcd_handle(IN struct mgr_object
+ *hMGRHandle, OUT u32 *phDCDHandle);
+
+/*
+ * ======== mgr_init ========
+ * Purpose:
+ * Initialize MGR's private state, keeping a reference count on each
+ * call. Initializes the DCD.
+ * Parameters:
+ * Returns:
+ * TRUE if initialized; FALSE if error occurred.
+ * Requires:
+ * Ensures:
+ * TRUE: A requirement for the other public MGR functions.
+ */
+extern bool mgr_init(void);
+
+#endif /* MGR_ */
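mgr_enum_node_info() above is indexed from zero and stops with -EINVAL once node_id passes the number of configured nodes, with -EIDRM signalling that the database changed mid-enumeration. A sketch of the enumeration loop a caller might use, with error handling trimmed:

	struct dsp_ndbprops props;
	u32 total = 0;
	u32 i;
	int status;

	for (i = 0; ; i++) {
		status = mgr_enum_node_info(i, &props, sizeof(props), &total);
		if (status)
			break;	/* -EINVAL past the last node, -EIDRM on change */
		/* ... inspect props for node i ... */
		if (i + 1 >= total)
			break;
	}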
diff --git a/drivers/staging/tidspbridge/include/dspbridge/mgrpriv.h b/drivers/staging/tidspbridge/include/dspbridge/mgrpriv.h
new file mode 100644
index 0000000..bca4e10
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/mgrpriv.h
@@ -0,0 +1,45 @@
+/*
+ * mgrpriv.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Global MGR constants and types, shared by PROC, MGR, and DSP API.
+ *
+ * Copyright (C) 2008 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef MGRPRIV_
+#define MGRPRIV_
+
+/*
+ * OMAP1510 specific
+ */
+#define MGR_MAXTLBENTRIES 32
+
+/* RM MGR Object */
+struct mgr_object;
+
+struct mgr_tlbentry {
+ u32 ul_dsp_virt; /* DSP virtual address */
+ u32 ul_gpp_phys; /* GPP physical address */
+};
+
+/*
+ * The DSP_PROCESSOREXTINFO structure describes additional extended
+ * capabilities of a DSP processor not exposed to user.
+ */
+struct mgr_processorextinfo {
+ struct dsp_processorinfo ty_basic; /* user processor info */
+ /* private dsp mmu entries */
+ struct mgr_tlbentry ty_tlb[MGR_MAXTLBENTRIES];
+};
+
+#endif /* MGRPRIV_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/msg.h b/drivers/staging/tidspbridge/include/dspbridge/msg.h
new file mode 100644
index 0000000..baac5f3
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/msg.h
@@ -0,0 +1,86 @@
+/*
+ * msg.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * DSP/BIOS Bridge msg_ctrl Module.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef MSG_
+#define MSG_
+
+#include <dspbridge/devdefs.h>
+#include <dspbridge/msgdefs.h>
+
+/*
+ * ======== msg_create ========
+ * Purpose:
+ * Create an object to manage message queues. Only one of these objects
+ * can exist per device object. The msg_ctrl manager must be created before
+ * the IO Manager.
+ * Parameters:
+ * phMsgMgr: Location to store msg_ctrl manager handle on output.
+ * hdev_obj: The device object.
+ * msgCallback: Called whenever an RMS_EXIT message is received.
+ * Returns:
+ * Requires:
+ * msg_mod_init(void) called.
+ * phMsgMgr != NULL.
+ * hdev_obj != NULL.
+ * msgCallback != NULL.
+ * Ensures:
+ */
+extern int msg_create(OUT struct msg_mgr **phMsgMgr,
+ struct dev_object *hdev_obj,
+ msg_onexit msgCallback);
+
+/*
+ * ======== msg_delete ========
+ * Purpose:
+ * Delete a msg_ctrl manager allocated in msg_create().
+ * Parameters:
+ * hmsg_mgr: Handle returned from msg_create().
+ * Returns:
+ * Requires:
+ * msg_mod_init(void) called.
+ * Valid hmsg_mgr.
+ * Ensures:
+ */
+extern void msg_delete(struct msg_mgr *hmsg_mgr);
+
+/*
+ * ======== msg_exit ========
+ * Purpose:
+ * Discontinue usage of msg_ctrl module.
+ * Parameters:
+ * Returns:
+ * Requires:
+ * msg_mod_init(void) successfully called before.
+ * Ensures:
+ * Any resources acquired in msg_mod_init(void) will be freed when last
+ * msg_ctrl client calls msg_exit(void).
+ */
+extern void msg_exit(void);
+
+/*
+ * ======== msg_mod_init ========
+ * Purpose:
+ * Initialize the msg_ctrl module.
+ * Parameters:
+ * Returns:
+ * TRUE if initialization succeeded, FALSE otherwise.
+ * Ensures:
+ */
+extern bool msg_mod_init(void);
+
+#endif /* MSG_ */
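The ordering constraints spelled out above (msg_mod_init() before anything else, msg_create() before the I/O manager, msg_delete() and msg_exit() on teardown) suggest the lifecycle below. This is a sketch only; on_rms_exit() and example_bringup() are hypothetical, and in the driver the equivalent wiring lives elsewhere:

	static void on_rms_exit(void *h, s32 node_status)
	{
		/* a node's execute phase ended; h identifies the node */
	}

	static int example_bringup(struct dev_object *hdev)
	{
		struct msg_mgr *mgr;
		int status;

		if (!msg_mod_init())
			return -EPERM;

		status = msg_create(&mgr, hdev, on_rms_exit);
		if (status) {
			msg_exit();
			return status;
		}

		/* ... create the I/O manager, run nodes, etc. ... */

		msg_delete(mgr);
		msg_exit();
		return 0;
	}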
diff --git a/drivers/staging/tidspbridge/include/dspbridge/msgdefs.h b/drivers/staging/tidspbridge/include/dspbridge/msgdefs.h
new file mode 100644
index 0000000..fe24656
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/msgdefs.h
@@ -0,0 +1,29 @@
+/*
+ * msgdefs.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Global msg_ctrl constants and types.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef MSGDEFS_
+#define MSGDEFS_
+
+/* msg_ctrl Objects: */
+struct msg_mgr;
+struct msg_queue;
+
+/* Function prototype for callback to be called on RMS_EXIT message received */
+typedef void (*msg_onexit) (void *h, s32 nStatus);
+
+#endif /* MSGDEFS_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/nldr.h b/drivers/staging/tidspbridge/include/dspbridge/nldr.h
new file mode 100644
index 0000000..073aa9f
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/nldr.h
@@ -0,0 +1,55 @@
+/*
+ * nldr.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * DSP/BIOS Bridge dynamic loader interface.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#include <dspbridge/dbdefs.h>
+#include <dspbridge/dbdcddef.h>
+#include <dspbridge/dev.h>
+#include <dspbridge/rmm.h>
+#include <dspbridge/nldrdefs.h>
+
+#ifndef NLDR_
+#define NLDR_
+
+extern int nldr_allocate(struct nldr_object *nldr_obj,
+ void *priv_ref, IN CONST struct dcd_nodeprops
+ *node_props,
+ OUT struct nldr_nodeobject **phNldrNode,
+ IN bool *pf_phase_split);
+
+extern int nldr_create(OUT struct nldr_object **phNldr,
+ struct dev_object *hdev_obj,
+ IN CONST struct nldr_attrs *pattrs);
+
+extern void nldr_delete(struct nldr_object *nldr_obj);
+extern void nldr_exit(void);
+
+extern int nldr_get_fxn_addr(struct nldr_nodeobject *nldr_node_obj,
+ char *pstrFxn, u32 * pulAddr);
+
+extern int nldr_get_rmm_manager(struct nldr_object *hNldrObject,
+ OUT struct rmm_target_obj **phRmmMgr);
+
+extern bool nldr_init(void);
+extern int nldr_load(struct nldr_nodeobject *nldr_node_obj,
+ enum nldr_phase phase);
+extern int nldr_unload(struct nldr_nodeobject *nldr_node_obj,
+ enum nldr_phase phase);
+int nldr_find_addr(struct nldr_nodeobject *nldr_node, u32 sym_addr,
+ u32 offset_range, void *offset_output, char *sym_name);
+
+#endif /* NLDR_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/nldrdefs.h b/drivers/staging/tidspbridge/include/dspbridge/nldrdefs.h
new file mode 100644
index 0000000..9be0483
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/nldrdefs.h
@@ -0,0 +1,293 @@
+/*
+ * nldrdefs.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Global Dynamic + static/overlay Node loader (NLDR) constants and types.
+ *
+ * Copyright (C) 2008 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef NLDRDEFS_
+#define NLDRDEFS_
+
+#include <dspbridge/dbdcddef.h>
+#include <dspbridge/devdefs.h>
+
+#define NLDR_MAXPATHLENGTH 255
+/* NLDR Objects: */
+struct nldr_object;
+struct nldr_nodeobject;
+
+/*
+ * ======== nldr_loadtype ========
+ * Load types for a node. Must match values in node.h55.
+ */
+enum nldr_loadtype {
+ NLDR_STATICLOAD, /* Linked in base image, not overlay */
+ NLDR_DYNAMICLOAD, /* Dynamically loaded node */
+ NLDR_OVLYLOAD /* Linked in base image, overlay node */
+};
+
+/*
+ * ======== nldr_ovlyfxn ========
+ * Causes code or data to be copied from load address to run address. This
+ * is the "cod_writefxn" that gets passed to the DBLL_Library and is used as
+ * the ZL write function.
+ *
+ * Parameters:
+ * priv_ref: Handle to identify the node.
+ * ulDspRunAddr: Run address of code or data.
+ * ulDspLoadAddr: Load address of code or data.
+ * ul_num_bytes: Number of (GPP) bytes to copy.
+ * nMemSpace: RMS_CODE or RMS_DATA.
+ * Returns:
+ * ul_num_bytes: Success.
+ * 0: Failure.
+ * Requires:
+ * Ensures:
+ */
+typedef u32(*nldr_ovlyfxn) (void *priv_ref, u32 ulDspRunAddr,
+ u32 ulDspLoadAddr, u32 ul_num_bytes, u32 nMemSpace);
+
+/*
+ * ======== nldr_writefxn ========
+ * Write memory function. Used for dynamic load writes.
+ * Parameters:
+ * priv_ref: Handle to identify the node.
+ * ulDspAddr: Address of code or data.
+ * pbuf: Code or data to be written
+ * ul_num_bytes: Number of (GPP) bytes to write.
+ * nMemSpace: DBLL_DATA or DBLL_CODE.
+ * Returns:
+ * ul_num_bytes: Success.
+ * 0: Failure.
+ * Requires:
+ * Ensures:
+ */
+typedef u32(*nldr_writefxn) (void *priv_ref,
+ u32 ulDspAddr, void *pbuf,
+ u32 ul_num_bytes, u32 nMemSpace);
+
+/*
+ * ======== nldr_attrs ========
+ * Attributes passed to nldr_create function.
+ */
+struct nldr_attrs {
+ nldr_ovlyfxn pfn_ovly;
+ nldr_writefxn pfn_write;
+ u16 us_dsp_word_size;
+ u16 us_dsp_mau_size;
+};
+
+/*
+ * ======== nldr_phase ========
+ * Indicates node create, delete, or execute phase function.
+ */
+enum nldr_phase {
+ NLDR_CREATE,
+ NLDR_DELETE,
+ NLDR_EXECUTE,
+ NLDR_NOPHASE
+};
+
+/*
+ * Typedefs of loader functions imported from a DLL, or defined in a
+ * function table.
+ */
+
+/*
+ * ======== nldr_allocate ========
+ * Allocate resources to manage the loading of a node on the DSP.
+ *
+ * Parameters:
+ * nldr_obj: Handle of loader that will load the node.
+ * priv_ref: Handle to identify the node.
+ * node_props: Pointer to a dcd_nodeprops for the node.
+ * phNldrNode: Location to store node handle on output. This handle
+ * will be passed to nldr_load/nldr_unload.
+ * pf_phase_split: pointer to int variable referenced in node.c
+ * Returns:
+ * 0: Success.
+ * -ENOMEM: Insufficient memory on GPP.
+ * Requires:
+ * nldr_init(void) called.
+ * Valid nldr_obj.
+ * node_props != NULL.
+ * phNldrNode != NULL.
+ * Ensures:
+ * 0: IsValidNode(*phNldrNode).
+ * error: *phNldrNode == NULL.
+ */
+typedef int(*nldr_allocatefxn) (struct nldr_object *nldr_obj,
+ void *priv_ref,
+ IN CONST struct dcd_nodeprops
+ * node_props,
+ OUT struct nldr_nodeobject
+ **phNldrNode,
+ OUT bool *pf_phase_split);
+
+/*
+ * ======== nldr_create ========
+ * Create a loader object. This object handles the loading and unloading of
+ * create, delete, and execute phase functions of nodes on the DSP target.
+ *
+ * Parameters:
+ * phNldr: Location to store loader handle on output.
+ * hdev_obj: Device for this processor.
+ * pattrs: Loader attributes.
+ * Returns:
+ * 0: Success;
+ * -ENOMEM: Insufficient memory for requested resources.
+ * Requires:
+ * nldr_init(void) called.
+ * phNldr != NULL.
+ * hdev_obj != NULL.
+ * pattrs != NULL.
+ * Ensures:
+ * 0: Valid *phNldr.
+ * error: *phNldr == NULL.
+ */
+typedef int(*nldr_createfxn) (OUT struct nldr_object **phNldr,
+ struct dev_object *hdev_obj,
+ IN CONST struct nldr_attrs *pattrs);
+
+/*
+ * ======== nldr_delete ========
+ * Delete the NLDR loader.
+ *
+ * Parameters:
+ * nldr_obj: Node manager object.
+ * Returns:
+ * Requires:
+ * nldr_init(void) called.
+ * Valid nldr_obj.
+ * Ensures:
+ * nldr_obj invalid
+ */
+typedef void (*nldr_deletefxn) (struct nldr_object *nldr_obj);
+
+/*
+ * ======== nldr_exit ========
+ * Discontinue usage of NLDR module.
+ *
+ * Parameters:
+ * Returns:
+ * Requires:
+ * nldr_init(void) successfully called before.
+ * Ensures:
+ * Any resources acquired in nldr_init(void) will be freed when last NLDR
+ * client calls nldr_exit(void).
+ */
+typedef void (*nldr_exitfxn) (void);
+
+/*
+ * ======== NLDR_Free ========
+ * Free resources allocated in nldr_allocate.
+ *
+ * Parameters:
+ * nldr_node_obj: Handle returned from nldr_allocate().
+ * Returns:
+ * Requires:
+ * nldr_init(void) called.
+ * Valid nldr_node_obj.
+ * Ensures:
+ */
+typedef void (*nldr_freefxn) (struct nldr_nodeobject *nldr_node_obj);
+
+/*
+ * ======== nldr_get_fxn_addr ========
+ * Get address of create, delete, or execute phase function of a node on
+ * the DSP.
+ *
+ * Parameters:
+ * nldr_node_obj: Handle returned from nldr_allocate().
+ * pstrFxn: Name of function.
+ * pulAddr: Location to store function address.
+ * Returns:
+ * 0: Success.
+ * -ESPIPE: Address of function not found.
+ * Requires:
+ * nldr_init(void) called.
+ * Valid nldr_node_obj.
+ * pulAddr != NULL;
+ * pstrFxn != NULL;
+ * Ensures:
+ */
+typedef int(*nldr_getfxnaddrfxn) (struct nldr_nodeobject
+ * nldr_node_obj,
+ char *pstrFxn, u32 * pulAddr);
+
+/*
+ * ======== nldr_init ========
+ * Initialize the NLDR module.
+ *
+ * Parameters:
+ * Returns:
+ * TRUE if initialization succeeded, FALSE otherwise.
+ * Ensures:
+ */
+typedef bool(*nldr_initfxn) (void);
+
+/*
+ * ======== nldr_load ========
+ * Load create, delete, or execute phase function of a node on the DSP.
+ *
+ * Parameters:
+ * nldr_node_obj: Handle returned from nldr_allocate().
+ * phase: Type of function to load (create, delete, or execute).
+ * Returns:
+ * 0: Success.
+ * -ENOMEM: Insufficient memory on GPP.
+ * -ENXIO: Can't overlay phase because overlay memory
+ * is already in use.
+ * -EILSEQ: Failure in dynamic loader library.
+ * Requires:
+ * nldr_init(void) called.
+ * Valid nldr_node_obj.
+ * Ensures:
+ */
+typedef int(*nldr_loadfxn) (struct nldr_nodeobject *nldr_node_obj,
+ enum nldr_phase phase);
+
+/*
+ * ======== nldr_unload ========
+ * Unload create, delete, or execute phase function of a node on the DSP.
+ *
+ * Parameters:
+ * nldr_node_obj: Handle returned from nldr_allocate().
+ * phase: Node function to unload (create, delete, or execute).
+ * Returns:
+ * 0: Success.
+ * -ENOMEM: Insufficient memory on GPP.
+ * Requires:
+ * nldr_init(void) called.
+ * Valid nldr_node_obj.
+ * Ensures:
+ */
+typedef int(*nldr_unloadfxn) (struct nldr_nodeobject *nldr_node_obj,
+ enum nldr_phase phase);
+
+/*
+ * ======== node_ldr_fxns ========
+ */
+struct node_ldr_fxns {
+ nldr_allocatefxn pfn_allocate;
+ nldr_createfxn pfn_create;
+ nldr_deletefxn pfn_delete;
+ nldr_exitfxn pfn_exit;
+ nldr_getfxnaddrfxn pfn_get_fxn_addr;
+ nldr_initfxn pfn_init;
+ nldr_loadfxn pfn_load;
+ nldr_unloadfxn pfn_unload;
+};
+
+#endif /* NLDRDEFS_ */
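struct node_ldr_fxns above is a function table: the node manager calls the loader only through these pointers, so the loader implementation can be swapped without touching callers. A minimal sketch of filling in and calling through such a table, assuming the nldr_* entry points declared in nldr.h as the implementation:

	/* The loader exposes its entry points through the table... */
	static const struct node_ldr_fxns example_ldr_fxns = {
		.pfn_allocate     = nldr_allocate,
		.pfn_create       = nldr_create,
		.pfn_delete       = nldr_delete,
		.pfn_exit         = nldr_exit,
		.pfn_get_fxn_addr = nldr_get_fxn_addr,
		.pfn_init         = nldr_init,
		.pfn_load         = nldr_load,
		.pfn_unload       = nldr_unload,
	};

	/* ...and a client loads a phase without knowing which loader it is. */
	static int load_create_phase(const struct node_ldr_fxns *fxns,
				     struct nldr_nodeobject *node)
	{
		return fxns->pfn_load(node, NLDR_CREATE);
	}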
diff --git a/drivers/staging/tidspbridge/include/dspbridge/node.h b/drivers/staging/tidspbridge/include/dspbridge/node.h
new file mode 100644
index 0000000..7587213
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/node.h
@@ -0,0 +1,579 @@
+/*
+ * node.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * DSP/BIOS Bridge Node Manager.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef NODE_
+#define NODE_
+
+#include <dspbridge/procpriv.h>
+
+#include <dspbridge/nodedefs.h>
+#include <dspbridge/dispdefs.h>
+#include <dspbridge/nldrdefs.h>
+#include <dspbridge/drv.h>
+
+/*
+ * ======== node_allocate ========
+ * Purpose:
+ * Allocate GPP resources to manage a node on the DSP.
+ * Parameters:
+ * hprocessor: Handle of processor that is allocating the node.
+ * pNodeId: Pointer to a dsp_uuid for the node.
+ * pargs: Optional arguments to be passed to the node.
+ * attr_in: Optional pointer to node attributes (priority,
+ * timeout...)
+ * ph_node: Location to store node handle on output.
+ * Returns:
+ * 0: Success.
+ * -ENOMEM: Insufficient memory on GPP.
+ * -ENOKEY: Node UUID has not been registered.
+ * -ESPIPE: iAlg functions not found for a DAIS node.
+ * -EDOM: attr_in != NULL and attr_in->prio out of
+ * range.
+ * -EPERM: A failure occurred, unable to allocate node.
+ * -EBADR: Processor is not in the running state.
+ * Requires:
+ * node_init(void) called.
+ * hprocessor != NULL.
+ * pNodeId != NULL.
+ * ph_node != NULL.
+ * Ensures:
+ * 0: IsValidNode(*ph_node).
+ * error: *ph_node == NULL.
+ */
+extern int node_allocate(struct proc_object *hprocessor,
+ IN CONST struct dsp_uuid *pNodeId,
+ OPTIONAL IN CONST struct dsp_cbdata
+ *pargs, OPTIONAL IN CONST struct dsp_nodeattrin
+ *attr_in,
+ OUT struct node_object **ph_node,
+ struct process_context *pr_ctxt);
+
+/*
+ * ======== node_alloc_msg_buf ========
+ * Purpose:
+ * Allocate and Prepare a buffer whose descriptor will be passed to a
+ * Node within a (dsp_msg)message
+ * Parameters:
+ * hnode: The node handle.
+ * usize: The size of the buffer to be allocated.
+ * pattr: Pointer to a dsp_bufferattr structure.
+ * pbuffer: Location to store the address of the allocated
+ * buffer on output.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid node handle.
+ * -ENOMEM: Insufficient memory.
+ * -EPERM: General Failure.
+ * -EINVAL: Invalid Size.
+ * Requires:
+ * node_init(void) called.
+ * pbuffer != NULL.
+ * Ensures:
+ */
+extern int node_alloc_msg_buf(struct node_object *hnode,
+ u32 usize, OPTIONAL struct dsp_bufferattr
+ *pattr, OUT u8 **pbuffer);
+
+/*
+ * ======== node_change_priority ========
+ * Purpose:
+ * Change the priority of an allocated node.
+ * Parameters:
+ * hnode: Node handle returned from node_allocate.
+ * prio: New priority level to set node's priority to.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid hnode.
+ * -EDOM: prio is out of range.
+ * -EPERM: The specified node is not a task node.
+ * Unable to change node's runtime priority level.
+ * -EBADR: Node is not in the NODE_ALLOCATED, NODE_PAUSED,
+ * or NODE_RUNNING state.
+ * -ETIME: A timeout occurred before the DSP responded.
+ * Requires:
+ * node_init(void) called.
+ * Ensures:
+ * 0 && (Node's current priority == prio)
+ */
+extern int node_change_priority(struct node_object *hnode, s32 prio);
+
+/*
+ * ======== node_close_orphans ========
+ * Purpose:
+ * Delete all nodes whose owning processor is being destroyed.
+ * Parameters:
+ * hnode_mgr: Node manager object.
+ * hProc: Handle to processor object being destroyed.
+ * Returns:
+ * 0: Success.
+ * -EPERM: Unable to delete all nodes belonging to hProc.
+ * Requires:
+ * Valid hnode_mgr.
+ * hProc != NULL.
+ * Ensures:
+ */
+extern int node_close_orphans(struct node_mgr *hnode_mgr,
+ struct proc_object *hProc);
+
+/*
+ * ======== node_connect ========
+ * Purpose:
+ * Connect two nodes on the DSP, or a node on the DSP to the GPP. In the
+ * case that the connection is being made between a node on the DSP and
+ * the GPP, one of the node handles (either hNode1 or hNode2) must be
+ * the constant NODE_HGPPNODE.
+ * Parameters:
+ * hNode1: Handle of first node to connect to second node. If
+ * this is a connection from the GPP to hNode2, hNode1
+ * must be the constant NODE_HGPPNODE. Otherwise, hNode1
+ * must be a node handle returned from a successful call
+ * to Node_Allocate().
+ * hNode2: Handle of second node. Must be either NODE_HGPPNODE
+ * if this is a connection from DSP node to GPP, or a
+ * node handle returned from a successful call to
+ * node_allocate().
+ * uStream1: Output stream index on first node, to be connected
+ * to second node's input stream. Value must range from
+ * 0 <= uStream1 < number of output streams.
+ * uStream2: Input stream index on second node. Value must range
+ * from 0 <= uStream2 < number of input streams.
+ * pattrs: Stream attributes (NULL ==> use defaults).
+ * conn_param: A pointer to a dsp_cbdata structure that defines
+ * connection parameter for device nodes to pass to DSP
+ * side.
+ * If the value of this parameter is NULL, then this API
+ * behaves like DSPNode_Connect. This parameter will have
+ * length of the string and the null terminated string in
+ * dsp_cbdata struct. This can be extended in future tp
+ * pass binary data.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid hNode1 or hNode2.
+ * -ENOMEM: Insufficient host memory.
+ * -EINVAL: A stream index parameter is invalid.
+ * -EISCONN: A connection already exists for one of the
+ * indices uStream1 or uStream2.
+ * -EBADR: Either hNode1 or hNode2 is not in the
+ * NODE_ALLOCATED state.
+ * -ECONNREFUSED: No more connections available.
+ * -EPERM: Attempt to make an illegal connection (e.g.,
+ * device node to device node, or device node to
+ * GPP), or the two nodes are on different DSPs.
+ * Requires:
+ * node_init(void) called.
+ * Ensures:
+ */
+extern int node_connect(struct node_object *hNode1,
+ u32 uStream1,
+ struct node_object *hNode2,
+ u32 uStream2,
+ OPTIONAL IN struct dsp_strmattr *pattrs,
+ OPTIONAL IN struct dsp_cbdata
+ *conn_param);
+
+/*
+ * ======== node_create ========
+ * Purpose:
+ * Create a node on the DSP by remotely calling the node's create
+ * function. If necessary, load code that contains the node's create
+ * function.
+ * Parameters:
+ * hnode: Node handle returned from node_allocate().
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid hnode.
+ * -ESPIPE: Create function not found in the COFF file.
+ * -EBADR: Node is not in the NODE_ALLOCATED state.
+ * -ENOMEM: Memory allocation failure on the DSP.
+ * -ETIME: A timeout occurred before the DSP responded.
+ * -EPERM: A failure occurred, unable to create node.
+ * Requires:
+ * node_init(void) called.
+ * Ensures:
+ */
+extern int node_create(struct node_object *hnode);
+
+/*
+ * ======== node_create_mgr ========
+ * Purpose:
+ * Create a NODE Manager object. This object handles the creation,
+ * deletion, and execution of nodes on the DSP target. The NODE Manager
+ * also maintains a pipe map of used and available node connections.
+ * Each DEV object should have exactly one NODE Manager object.
+ *
+ * Parameters:
+ * phNodeMgr: Location to store node manager handle on output.
+ * hdev_obj: Device for this processor.
+ * Returns:
+ * 0: Success;
+ * -ENOMEM: Insufficient memory for requested resources.
+ * -EPERM: General failure.
+ * Requires:
+ * node_init(void) called.
+ * phNodeMgr != NULL.
+ * hdev_obj != NULL.
+ * Ensures:
+ * 0: Valid *phNodeMgr.
+ * error: *phNodeMgr == NULL.
+ */
+extern int node_create_mgr(OUT struct node_mgr **phNodeMgr,
+ struct dev_object *hdev_obj);
+
+/*
+ * ======== node_delete ========
+ * Purpose:
+ * Delete resources allocated in node_allocate(). If the node was
+ * created, delete the node on the DSP by remotely calling the node's
+ * delete function. Loads the node's delete function if necessary.
+ * GPP side resources are freed after node's delete function returns.
+ * Parameters:
+ * hnode: Node handle returned from node_allocate().
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid hnode.
+ * -ETIME: A timeout occurred before the DSP responded.
+ * -EPERM: A failure occurred in deleting the node.
+ * -ESPIPE: Delete function not found in the COFF file.
+ * Requires:
+ * node_init(void) called.
+ * Ensures:
+ * 0: hnode is invalid.
+ */
+extern int node_delete(struct node_object *hnode,
+ struct process_context *pr_ctxt);
+
+/*
+ * ======== node_delete_mgr ========
+ * Purpose:
+ * Delete the NODE Manager.
+ * Parameters:
+ * hnode_mgr: Node manager object.
+ * Returns:
+ * 0: Success.
+ * Requires:
+ * node_init(void) called.
+ * Valid hnode_mgr.
+ * Ensures:
+ */
+extern int node_delete_mgr(struct node_mgr *hnode_mgr);
+
+/*
+ * ======== node_enum_nodes ========
+ * Purpose:
+ * Enumerate the nodes currently allocated for the DSP.
+ * Parameters:
+ * hnode_mgr: Node manager returned from node_create_mgr().
+ * node_tab: Array to copy node handles into.
+ * node_tab_size: Number of handles that can be written to node_tab.
+ * pu_num_nodes: Location where number of node handles written to
+ * node_tab will be written.
+ * pu_allocated: Location to write total number of allocated nodes.
+ * Returns:
+ * 0: Success.
+ * -EINVAL: node_tab is too small to hold all node handles.
+ * Requires:
+ * Valid hnode_mgr.
+ * node_tab != NULL || node_tab_size == 0.
+ * pu_num_nodes != NULL.
+ * pu_allocated != NULL.
+ * Ensures:
+ * - (-EINVAL && *pu_num_nodes == 0)
+ * - || (0 && *pu_num_nodes <= node_tab_size) &&
+ * (*pu_allocated == *pu_num_nodes)
+ */
+extern int node_enum_nodes(struct node_mgr *hnode_mgr,
+ void **node_tab,
+ u32 node_tab_size,
+ OUT u32 *pu_num_nodes,
+ OUT u32 *pu_allocated);
+
+/*
+ * ======== node_exit ========
+ * Purpose:
+ * Discontinue usage of NODE module.
+ * Parameters:
+ * Returns:
+ * Requires:
+ * node_init(void) successfully called before.
+ * Ensures:
+ * Any resources acquired in node_init(void) will be freed when last NODE
+ * client calls node_exit(void).
+ */
+extern void node_exit(void);
+
+/*
+ * ======== node_free_msg_buf ========
+ * Purpose:
+ * Free a message buffer previously allocated with node_alloc_msg_buf.
+ * Parameters:
+ * hnode: The node handle.
+ * pbuffer: (Address) Buffer allocated by node_alloc_msg_buf.
+ * pattr: Same buffer attributes passed to node_alloc_msg_buf.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid node handle.
+ * -EPERM: Failure to free the buffer.
+ * Requires:
+ * node_init(void) called.
+ * pbuffer != NULL.
+ * Ensures:
+ */
+extern int node_free_msg_buf(struct node_object *hnode,
+ IN u8 *pbuffer,
+ OPTIONAL struct dsp_bufferattr
+ *pattr);
+
+/*
+ * ======== node_get_attr ========
+ * Purpose:
+ * Copy the current attributes of the specified node into a dsp_nodeattr
+ * structure.
+ * Parameters:
+ * hnode: Node object allocated from node_allocate().
+ * pattr: Pointer to dsp_nodeattr structure to copy node's
+ * attributes.
+ * attr_size: Size of pattr.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid hnode.
+ * Requires:
+ * node_init(void) called.
+ * pattr != NULL.
+ * Ensures:
+ * 0: *pattrs contains the node's current attributes.
+ */
+extern int node_get_attr(struct node_object *hnode,
+ OUT struct dsp_nodeattr *pattr, u32 attr_size);
+
+/*
+ * ======== node_get_message ========
+ * Purpose:
+ * Retrieve a message from a node on the DSP. The node must be either a
+ * message node, task node, or XDAIS socket node.
+ * If a message is not available, this function will block until a
+ * message is available, or the node's timeout value is reached.
+ * Parameters:
+ * hnode: Node handle returned from node_allocate().
+ * message: Pointer to dsp_msg structure to copy the
+ * message into.
+ * utimeout: Timeout in milliseconds to wait for message.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid hnode.
+ * -EPERM: Cannot retrieve messages from this type of node.
+ * Error occurred while trying to retrieve a message.
+ * -ETIME: Timeout occurred and no message is available.
+ * Requires:
+ * node_init(void) called.
+ * message != NULL.
+ * Ensures:
+ */
+extern int node_get_message(struct node_object *hnode,
+ OUT struct dsp_msg *message, u32 utimeout);
+
+/*
+ * ======== node_get_nldr_obj ========
+ * Purpose:
+ * Retrieve the Nldr manager
+ * Parameters:
+ * hnode_mgr: Node Manager
+ * phNldrObj: Pointer to a Nldr manager handle
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid hnode.
+ * Ensures:
+ */
+extern int node_get_nldr_obj(struct node_mgr *hnode_mgr,
+ OUT struct nldr_object **phNldrObj);
+
+/*
+ * ======== node_init ========
+ * Purpose:
+ * Initialize the NODE module.
+ * Parameters:
+ * Returns:
+ * TRUE if initialization succeeded, FALSE otherwise.
+ * Ensures:
+ */
+extern bool node_init(void);
+
+/*
+ * ======== node_on_exit ========
+ * Purpose:
+ * Gets called when RMS_EXIT is received for a node. PROC needs to pass
+ * this function as a parameter to msg_create(). This function then gets
+ * called by the Bridge driver when an exit message for a node is received.
+ * Parameters:
+ * hnode: Handle of the node that the exit message is for.
+ * nStatus: Return status of the node's execute phase.
+ * Returns:
+ * Ensures:
+ */
+void node_on_exit(struct node_object *hnode, s32 nStatus);
+
+/*
+ * ======== node_pause ========
+ * Purpose:
+ * Suspend execution of a node currently running on the DSP.
+ * Parameters:
+ * hnode: Node object representing a node currently
+ * running on the DSP.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid hnode.
+ * -EPERM: Node is not a task or socket node.
+ * Failed to pause node.
+ * -ETIME: A timeout occurred before the DSP responded.
+ * DSP_EWRONGSTSATE: Node is not in NODE_RUNNING state.
+ * Requires:
+ * node_init(void) called.
+ * Ensures:
+ */
+extern int node_pause(struct node_object *hnode);
+
+/*
+ * ======== node_put_message ========
+ * Purpose:
+ * Send a message to a message node, task node, or XDAIS socket node.
+ * This function will block until the message stream can accommodate
+ * the message, or a timeout occurs. The message will be copied, so Msg
+ * can be re-used immediately after return.
+ * Parameters:
+ * hnode: Node handle returned by node_allocate().
+ * pmsg: Location of message to be sent to the node.
+ * utimeout: Timeout in msecs to wait.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid hnode.
+ * -EPERM: Messages can't be sent to this type of node.
+ * Unable to send message.
+ * -ETIME: Timeout occurred before message could be set.
+ * -EBADR: Node is in invalid state for sending messages.
+ * Requires:
+ * node_init(void) called.
+ * pmsg != NULL.
+ * Ensures:
+ */
+extern int node_put_message(struct node_object *hnode,
+ IN CONST struct dsp_msg *pmsg, u32 utimeout);
+
+/*
+ * ======== node_register_notify ========
+ * Purpose:
+ * Register to be notified on specific events for this node.
+ * Parameters:
+ * hnode: Node handle returned by node_allocate().
+ * event_mask: Mask of types of events to be notified about.
+ * notify_type: Type of notification to be sent.
+ * hnotification: Handle to be used for notification.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid hnode.
+ * -ENOMEM: Insufficient memory on GPP.
+ * -EINVAL: event_mask is invalid.
+ * -ENOSYS: Notification type specified by notify_type is not
+ * supported.
+ * Requires:
+ * node_init(void) called.
+ * hnotification != NULL.
+ * Ensures:
+ */
+extern int node_register_notify(struct node_object *hnode,
+ u32 event_mask, u32 notify_type,
+ struct dsp_notification
+ *hnotification);
+
+/*
+ * ======== node_run ========
+ * Purpose:
+ * Start execution of a node's execute phase, or resume execution of
+ * a node that has been suspended (via node_pause()) on the DSP. Load
+ * the node's execute function if necessary.
+ * Parameters:
+ * hnode: Node object representing a node currently
+ * running on the DSP.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid hnode.
+ * -EPERM: hnode doesn't represent a message, task, or DAIS socket node.
+ * Unable to start or resume execution.
+ * -ETIME: A timeout occurred before the DSP responded.
+ * DSP_EWRONGSTATE: Node is not in NODE_PAUSED or NODE_CREATED state.
+ * -ESPIPE: Execute function not found in the COFF file.
+ * Requires:
+ * node_init(void) called.
+ * Ensures:
+ */
+extern int node_run(struct node_object *hnode);
+
+/*
+ * ======== node_terminate ========
+ * Purpose:
+ * Signal a node running on the DSP that it should exit its execute
+ * phase function.
+ * Parameters:
+ * hnode: Node object representing a node currently
+ * running on the DSP.
+ * pstatus: Location to store execute-phase function return
+ * value.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid hnode.
+ * -ETIME: A timeout occurred before the DSP responded.
+ * -EPERM: Type of node specified cannot be terminated.
+ * Unable to terminate the node.
+ * -EBADR: Operation not valid for the current node state.
+ * Requires:
+ * node_init(void) called.
+ * pstatus != NULL.
+ * Ensures:
+ */
+extern int node_terminate(struct node_object *hnode,
+ OUT int *pstatus);
+
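+/*
+ * Illustrative usage sketch (not part of the driver): driving a task
+ * node through its execute phase. Error handling is abbreviated and
+ * hnode is assumed to be a node whose create phase has completed.
+ *
+ *	int exit_status;
+ *
+ *	if (!node_run(hnode)) {		// NODE_CREATED -> NODE_RUNNING
+ *		node_pause(hnode);	// NODE_RUNNING -> NODE_PAUSED
+ *		node_run(hnode);	// resume execution
+ *		node_terminate(hnode, &exit_status);
+ *	}
+ */
+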
+/*
+ * ======== node_get_uuid_props ========
+ * Purpose:
+ * Fetch Node properties given the UUID
+ * Parameters:
+ *
+ */
+extern int node_get_uuid_props(void *hprocessor,
+ IN CONST struct dsp_uuid *pNodeId,
+ OUT struct dsp_ndbprops
+ *node_props);
+
+/**
+ * node_find_addr() - Find the closest symbol to the given address.
+ *
+ * @node_mgr: Node manager handle
+ * @sym_addr: Given address to find the closest symbol
+ * @offset_range: Offset range in which to look for the closest symbol
+ * @sym_addr_output: Symbol output address
+ * @sym_name: String with the symbol name of the closest symbol
+ *
+ * This function finds the closest symbol to the address where an MMU
+ * fault occurred on the DSP side.
+ */
+int node_find_addr(struct node_mgr *node_mgr, u32 sym_addr,
+ u32 offset_range, void *sym_addr_output,
+ char *sym_name);
+
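+/*
+ * Illustrative usage sketch (not part of the driver): resolving the
+ * symbol nearest to an MMU fault address for a diagnostic print. The
+ * search range and buffer size are arbitrary values for the example.
+ *
+ *	u32 sym_addr = 0;
+ *	char sym_name[80] = "";
+ *
+ *	node_find_addr(node_mgr, fault_addr, 0x1000, &sym_addr, sym_name);
+ */
+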
+enum node_state node_get_state(void *hnode);
+
+#endif /* NODE_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/nodedefs.h b/drivers/staging/tidspbridge/include/dspbridge/nodedefs.h
new file mode 100644
index 0000000..fb9623d
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/nodedefs.h
@@ -0,0 +1,28 @@
+/*
+ * nodedefs.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Global NODE constants and types, shared by PROCESSOR, NODE, and DISP.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef NODEDEFS_
+#define NODEDEFS_
+
+#define NODE_SUSPENDEDPRI -1
+
+/* NODE Objects: */
+struct node_mgr;
+struct node_object;
+
+#endif /* NODEDEFS_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/nodepriv.h b/drivers/staging/tidspbridge/include/dspbridge/nodepriv.h
new file mode 100644
index 0000000..42e1a94
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/nodepriv.h
@@ -0,0 +1,182 @@
+/*
+ * nodepriv.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Private node header shared by NODE and DISP.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef NODEPRIV_
+#define NODEPRIV_
+
+#include <dspbridge/strmdefs.h>
+#include <dspbridge/nodedefs.h>
+#include <dspbridge/nldrdefs.h>
+
+/* DSP address of node environment structure */
+typedef u32 nodeenv;
+
+/*
+ * Node create structures
+ */
+
+/* Message node */
+struct node_msgargs {
+ u32 max_msgs; /* Max # of simultaneous messages for node */
+ u32 seg_id; /* Segment for allocating message buffers */
+ u32 notify_type; /* Notify type (SEM_post, SWI_post, etc.) */
+ u32 arg_length; /* Length in 32-bit words of arg data block */
+ u8 *pdata; /* Argument data for node */
+};
+
+struct node_strmdef {
+ u32 buf_size; /* Size of buffers for SIO stream */
+ u32 num_bufs; /* max # of buffers in SIO stream at once */
+ u32 seg_id; /* Memory segment id to allocate buffers */
+ u32 utimeout; /* Timeout for blocking SIO calls */
+ u32 buf_alignment; /* Buffer alignment */
+ char *sz_device; /* Device name for stream */
+};
+
+/* Task node */
+struct node_taskargs {
+ struct node_msgargs node_msg_args;
+ s32 prio;
+ u32 stack_size;
+ u32 sys_stack_size;
+ u32 stack_seg;
+ u32 udsp_heap_res_addr; /* DSP virtual heap address */
+ u32 udsp_heap_addr; /* DSP virtual heap address */
+ u32 heap_size; /* Heap size */
+ u32 ugpp_heap_addr; /* GPP virtual heap address */
+ u32 profile_id; /* Profile ID */
+ u32 num_inputs;
+ u32 num_outputs;
+ u32 ul_dais_arg; /* Address of iAlg object */
+ struct node_strmdef *strm_in_def;
+ struct node_strmdef *strm_out_def;
+};
+
+/*
+ * ======== node_createargs ========
+ */
+struct node_createargs {
+ union {
+ struct node_msgargs node_msg_args;
+ struct node_taskargs task_arg_obj;
+ } asa;
+};
+
+/*
+ * ======== node_get_channel_id ========
+ * Purpose:
+ * Get the channel index reserved for a stream connection between the
+ * host and a node. This index is reserved when node_connect() is called
+ * to connect the node with the host. This index should be passed to
+ * the CHNL_Open function when the stream is actually opened.
+ * Parameters:
+ * hnode: Node object allocated from node_allocate().
+ * dir: Input (DSP_TONODE) or output (DSP_FROMNODE).
+ * index: Stream index.
+ * pulId: Location to store channel index.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid hnode.
+ * -EPERM: Not a task or DAIS socket node.
+ * -EINVAL: The node's stream corresponding to index and dir
+ * is not a stream to or from the host.
+ * Requires:
+ * node_init(void) called.
+ * Valid dir.
+ * pulId != NULL.
+ * Ensures:
+ */
+extern int node_get_channel_id(struct node_object *hnode,
+ u32 dir, u32 index, OUT u32 *pulId);
+
+/*
+ * ======== node_get_strm_mgr ========
+ * Purpose:
+ * Get the STRM manager for a node.
+ * Parameters:
+ * hnode: Node allocated with node_allocate().
+ * phStrmMgr: Location to store STRM manager on output.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid hnode.
+ * Requires:
+ * phStrmMgr != NULL.
+ * Ensures:
+ */
+extern int node_get_strm_mgr(struct node_object *hnode,
+ struct strm_mgr **phStrmMgr);
+
+/*
+ * ======== node_get_timeout ========
+ * Purpose:
+ * Get the timeout value of a node.
+ * Parameters:
+ * hnode: Node allocated with node_allocate(), or DSP_HGPPNODE.
+ * Returns:
+ * Node's timeout value.
+ * Requires:
+ * Valid hnode.
+ * Ensures:
+ */
+extern u32 node_get_timeout(struct node_object *hnode);
+
+/*
+ * ======== node_get_type ========
+ * Purpose:
+ * Get the type (device, message, task, or XDAIS socket) of a node.
+ * Parameters:
+ * hnode: Node allocated with node_allocate(), or DSP_HGPPNODE.
+ * Returns:
+ * Node type: NODE_DEVICE, NODE_TASK, NODE_XDAIS, or NODE_GPP.
+ * Requires:
+ * Valid hnode.
+ * Ensures:
+ */
+extern enum node_type node_get_type(struct node_object *hnode);
+
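+/*
+ * Illustrative usage sketch (not part of the driver): the type and
+ * timeout getters are typically combined when deciding how long to
+ * wait on a node, e.g.:
+ *
+ *	if (node_get_type(hnode) == NODE_TASK)
+ *		timeout = node_get_timeout(hnode);
+ */
+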
+/*
+ * ======== get_node_info ========
+ * Purpose:
+ * Get node information without holding semaphore.
+ * Parameters:
+ * hnode: Node allocated with node_allocate(), or DSP_HGPPNODE.
+ * Returns:
+ * Node info: priority, device owner, no. of streams, execution state,
+ * NDB properties.
+ * Requires:
+ * Valid hnode.
+ * Ensures:
+ */
+extern void get_node_info(struct node_object *hnode,
+ struct dsp_nodeinfo *pNodeInfo);
+
+/*
+ * ======== node_get_load_type ========
+ * Purpose:
+ * Get the load type (dynamic, overlay, static) of a node.
+ * Parameters:
+ * hnode: Node allocated with node_allocate(), or DSP_HGPPNODE.
+ * Returns:
+ * Node load type: NLDR_DYNAMICLOAD, NLDR_OVLYLOAD, or NLDR_STATICLOAD.
+ * Requires:
+ * Valid hnode.
+ * Ensures:
+ */
+extern enum nldr_loadtype node_get_load_type(struct node_object *hnode);
+
+#endif /* NODEPRIV_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/ntfy.h b/drivers/staging/tidspbridge/include/dspbridge/ntfy.h
new file mode 100644
index 0000000..cbc8819
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/ntfy.h
@@ -0,0 +1,217 @@
+/*
+ * ntfy.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Manage lists of notification events.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef NTFY_
+#define NTFY_
+
+#include <dspbridge/host_os.h>
+#include <dspbridge/dbdefs.h>
+#include <dspbridge/sync.h>
+
+/**
+ * ntfy_object - head structure used to notify dspbridge events
+ * @head: List of notify objects
+ * @ntfy_lock: lock for list access.
+ *
+ */
+struct ntfy_object {
+ struct raw_notifier_head head;/* List of notifier objects */
+ spinlock_t ntfy_lock; /* For critical sections */
+};
+
+/**
+ * ntfy_event - structure storing a specific event to be notified
+ * @noti_block: Notifier block for this event
+ * @event: event that it responds to
+ * @type: event type (only DSP_SIGNALEVENT supported)
+ * @sync_obj: sync_event used to set the event
+ *
+ */
+struct ntfy_event {
+ struct notifier_block noti_block;
+ u32 event; /* Events to be notified about */
+ u32 type; /* Type of notification to be sent */
+ struct sync_object sync_obj;
+};
+
+
+/**
+ * dsp_notifier_event() - callback function to notify events
+ * @this: pointer to itself struct notifier_block
+ * @event: event to be notified.
+ * @data: Currently not used.
+ *
+ */
+int dsp_notifier_event(struct notifier_block *this, unsigned long event,
+ void *data);
+
+/**
+ * ntfy_init() - Set the initial state of the ntfy_object structure.
+ * @no: pointer to ntfy_object structure.
+ *
+ * This function sets the initial state of the ntfy_object so that it
+ * can be used by the other ntfy functions.
+ */
+
+static inline void ntfy_init(struct ntfy_object *no)
+{
+ spin_lock_init(&no->ntfy_lock);
+ RAW_INIT_NOTIFIER_HEAD(&no->head);
+}
+
+/**
+ * ntfy_delete() - delete the list of registered ntfy events.
+ * @ntfy_obj: Pointer to the ntfy object structure.
+ *
+ * This function removes and frees all the registered notify events.
+ * It does not unregister events one by one; to unregister a single
+ * ntfy_event, see ntfy_unregister().
+ *
+ */
+static inline void ntfy_delete(struct ntfy_object *ntfy_obj)
+{
+ struct ntfy_event *ne;
+ struct notifier_block *nb;
+
+ spin_lock_bh(&ntfy_obj->ntfy_lock);
+ nb = ntfy_obj->head.head;
+ while (nb) {
+ ne = container_of(nb, struct ntfy_event, noti_block);
+ nb = nb->next;
+ kfree(ne);
+ }
+ spin_unlock_bh(&ntfy_obj->ntfy_lock);
+}
+
+/**
+ * ntfy_notify() - notify all events registered for a specific event.
+ * @ntfy_obj: Pointer to the ntfy_object structure.
+ * @event: event to be notified.
+ *
+ * This function traverses all the registered ntfy events and
+ * signals those that match @event.
+ */
+static inline void ntfy_notify(struct ntfy_object *ntfy_obj, u32 event)
+{
+ spin_lock_bh(&ntfy_obj->ntfy_lock);
+ raw_notifier_call_chain(&ntfy_obj->head, event, NULL);
+ spin_unlock_bh(&ntfy_obj->ntfy_lock);
+}
+
+
+
+/**
+ * ntfy_event_create() - Create and initialize a ntfy_event structure.
+ * @event: event that the ntfy event will respond to
+ * @type: event type (only DSP_SIGNALEVENT supported)
+ *
+ * This function creates a ntfy_event element and sets the event it will
+ * respond to, so that it can be used by the other ntfy functions.
+ * On success it returns a pointer to the newly created ntfy_event
+ * struct; otherwise it returns NULL.
+ */
+
+static inline struct ntfy_event *ntfy_event_create(u32 event, u32 type)
+{
+ struct ntfy_event *ne;
+ ne = kmalloc(sizeof(struct ntfy_event), GFP_KERNEL);
+ if (ne) {
+ sync_init_event(&ne->sync_obj);
+ ne->noti_block.notifier_call = dsp_notifier_event;
+ ne->event = event;
+ ne->type = type;
+ }
+ return ne;
+}
+
+/**
+ * ntfy_register() - register new ntfy_event into a given ntfy_object
+ * @ntfy_obj: Pointer to the ntfy_object structure.
+ * @noti: Pointer to the handle to be returned to the user space.
+ * @event: event that the ntfy event will respond to
+ * @type: event type (only DSP_SIGNALEVENT supported)
+ *
+ * This function registers a new ntfy_event into the ntfy_object list,
+ * which will respond to the @event passed.
+ * It returns 0 on success, -EFAULT in case of bad pointers,
+ * -EINVAL if @event is zero, and -ENOMEM if there is no memory
+ * to create the ntfy_event.
+ */
+static inline int ntfy_register(struct ntfy_object *ntfy_obj,
+ struct dsp_notification *noti,
+ u32 event, u32 type)
+{
+ struct ntfy_event *ne;
+ int status = 0;
+
+ if (!noti || !ntfy_obj) {
+ status = -EFAULT;
+ goto func_end;
+ }
+ if (!event) {
+ status = -EINVAL;
+ goto func_end;
+ }
+ ne = ntfy_event_create(event, type);
+ if (!ne) {
+ status = -ENOMEM;
+ goto func_end;
+ }
+ noti->handle = &ne->sync_obj;
+
+ spin_lock_bh(&ntfy_obj->ntfy_lock);
+ raw_notifier_chain_register(&ntfy_obj->head, &ne->noti_block);
+ spin_unlock_bh(&ntfy_obj->ntfy_lock);
+func_end:
+ return status;
+}
+
+/**
+ * ntfy_unregister() - unregister a ntfy_event from a given ntfy_object
+ * @ntfy_obj: Pointer to the ntfy_object structure.
+ * @noti: Pointer to the event that will be removed.
+ *
+ * This function unregisters a ntfy_event from the ntfy_object list;
+ * @noti identifies the event to be removed.
+ * It returns 0 on success and -EFAULT in case of bad pointers.
+ */
+static inline int ntfy_unregister(struct ntfy_object *ntfy_obj,
+ struct dsp_notification *noti)
+{
+ int status = 0;
+ struct ntfy_event *ne;
+
+ if (!noti || !ntfy_obj) {
+ status = -EFAULT;
+ goto func_end;
+ }
+
+ ne = container_of((struct sync_object *)noti, struct ntfy_event,
+ sync_obj);
+ spin_lock_bh(&ntfy_obj->ntfy_lock);
+ raw_notifier_chain_unregister(&ntfy_obj->head,
+ &ne->noti_block);
+ kfree(ne);
+ spin_unlock_bh(&ntfy_obj->ntfy_lock);
+func_end:
+ return status;
+}
+
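+/*
+ * Illustrative usage sketch (not part of the driver): typical life
+ * cycle of a notification list. DSP_NODESTATECHANGE is used only as a
+ * representative event mask; any event value understood by the caller
+ * works the same way.
+ *
+ *	struct ntfy_object ntfy;
+ *	struct dsp_notification noti = { 0 };
+ *
+ *	ntfy_init(&ntfy);
+ *	ntfy_register(&ntfy, &noti, DSP_NODESTATECHANGE, DSP_SIGNALEVENT);
+ *	...
+ *	ntfy_notify(&ntfy, DSP_NODESTATECHANGE);	// signal registered clients
+ *	...
+ *	ntfy_unregister(&ntfy, &noti);
+ *	ntfy_delete(&ntfy);		// frees anything still registered
+ */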
+#endif /* NTFY_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/proc.h b/drivers/staging/tidspbridge/include/dspbridge/proc.h
new file mode 100644
index 0000000..230828c
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/proc.h
@@ -0,0 +1,621 @@
+/*
+ * proc.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * This is the DSP API RM module interface.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef PROC_
+#define PROC_
+
+#include <dspbridge/cfgdefs.h>
+#include <dspbridge/devdefs.h>
+#include <dspbridge/drv.h>
+
+extern char *iva_img;
+
+/*
+ * ======== proc_attach ========
+ * Purpose:
+ * Prepare for communication with a particular DSP processor, and return
+ * a handle to the processor object. The PROC object gets created.
+ * Parameters:
+ * processor_id : The processor index (zero-based).
+ * hmgr_obj : Handle to the Manager Object
+ * attr_in : Ptr to the dsp_processorattrin structure.
+ * A NULL value means use default values.
+ * ph_processor : Ptr to location to store processor handle.
+ * Returns:
+ * 0 : Success.
+ * -EPERM : General failure.
+ * -EFAULT : Invalid processor handle.
+ * 0: Success; Processor already attached.
+ * Requires:
+ * ph_processor != NULL.
+ * PROC Initialized.
+ * Ensures:
+ * -EPERM, and *ph_processor == NULL, OR
+ * Success and *ph_processor is a Valid Processor handle OR
+ * 0 and *ph_processor is a Valid Processor.
+ * Details:
+ * When attr_in is NULL, the default timeout value is 10 seconds.
+ */
+extern int proc_attach(u32 processor_id,
+ OPTIONAL CONST struct dsp_processorattrin
+ *attr_in, void **ph_processor,
+ struct process_context *pr_ctxt);
+
+/*
+ * ======== proc_auto_start =========
+ * Purpose:
+ * A Particular device gets loaded with the default image
+ * if the AutoStart flag is set.
+ * Parameters:
+ * hdev_obj : Handle to the Device
+ * Returns:
+ * 0 : On Successful Loading
+ * -ENOENT : No DSP exec file found.
+ * -EPERM : General Failure
+ * Requires:
+ * hdev_obj != NULL.
+ * dev_node_obj != NULL.
+ * PROC Initialized.
+ * Ensures:
+ */
+extern int proc_auto_start(struct cfg_devnode *dev_node_obj,
+ struct dev_object *hdev_obj);
+
+/*
+ * ======== proc_ctrl ========
+ * Purpose:
+ * Pass control information to the GPP device driver managing the DSP
+ * processor. This will be an OEM-only function, and not part of the
+ * 'Bridge application developer's API.
+ * Parameters:
+ * hprocessor : The processor handle.
+ * dw_cmd : Private driver IOCTL cmd ID.
+ * pargs : Ptr to a driver-defined argument structure.
+ * Returns:
+ * 0 : SUCCESS
+ * -EFAULT : Invalid processor handle.
+ * -ETIME: A timeout occurred before the control information
+ * could be sent.
+ * -EPERM : General Failure.
+ * Requires:
+ * PROC Initialized.
+ * Ensures
+ * Details:
+ * This function Calls bridge_dev_ctrl.
+ */
+extern int proc_ctrl(void *hprocessor,
+ u32 dw_cmd, IN struct dsp_cbdata *pargs);
+
+/*
+ * ======== proc_detach ========
+ * Purpose:
+ * Close a DSP processor and de-allocate all (GPP) resources reserved
+ * for it. The Processor Object is deleted.
+ * Parameters:
+ * pr_ctxt : The processor handle.
+ * Returns:
+ * 0 : Success.
+ * -EFAULT : Invalid handle.
+ * -EPERM : General failure.
+ * Requires:
+ * PROC Initialized.
+ * Ensures:
+ * PROC Object is destroyed.
+ */
+extern int proc_detach(struct process_context *pr_ctxt);
+
+/*
+ * ======== proc_enum_nodes ========
+ * Purpose:
+ * Enumerate the nodes currently allocated on a processor.
+ * Parameters:
+ * hprocessor : The processor handle.
+ * node_tab : The first Location of an array allocated for node
+ * handles.
+ * node_tab_size: The number of (DSP_HNODE) handles that can be held
+ * in the memory the client has allocated for node_tab.
+ * pu_num_nodes : Location where DSPProcessor_EnumNodes will return
+ * the number of valid handles written to node_tab
+ * pu_allocated : Location where DSPProcessor_EnumNodes will return
+ * the number of nodes that are allocated on the DSP.
+ * Returns:
+ * 0 : Success.
+ * -EFAULT : Invalid processor handle.
+ * -EINVAL : The amount of memory allocated for node_tab is
+ * insufficient. That is, the number of nodes actually
+ * allocated on the DSP is greater than the value
+ * specified for node_tab_size.
+ * -EPERM : Unable to get Resource Information.
+ * Requires:
+ * pu_num_nodes is not NULL.
+ * pu_allocated is not NULL.
+ * node_tab is not NULL.
+ * PROC Initialized.
+ * Ensures:
+ * Details:
+ */
+extern int proc_enum_nodes(void *hprocessor,
+ void **node_tab,
+ IN u32 node_tab_size,
+ OUT u32 *pu_num_nodes,
+ OUT u32 *pu_allocated);
+
+/*
+ * ======== proc_get_resource_info ========
+ * Purpose:
+ * Enumerate the resources currently available on a processor.
+ * Parameters:
+ * hprocessor : The processor handle.
+ * resource_type: Type of resource.
+ * resource_info: Ptr to the dsp_resourceinfo structure.
+ * resource_info_size: Size of the structure.
+ * Returns:
+ * 0 : Success.
+ * -EFAULT : Invalid processor handle.
+ * -EBADR: The processor is not in the PROC_RUNNING state.
+ * -ETIME: A timeout occurred before the DSP responded to the
+ * query.
+ * -EPERM : Unable to get Resource Information
+ * Requires:
+ * resource_info is not NULL.
+ * Parameter resource_type is Valid.[TBD]
+ * resource_info_size is >= sizeof dsp_resourceinfo struct.
+ * PROC Initialized.
+ * Ensures:
+ * Details:
+ * This function currently returns
+ * -ENOSYS, and does not write any data to the resource_info struct.
+ */
+extern int proc_get_resource_info(void *hprocessor,
+ u32 resource_type,
+ OUT struct dsp_resourceinfo
+ *resource_info,
+ u32 resource_info_size);
+
+/*
+ * ======== proc_exit ========
+ * Purpose:
+ * Decrement reference count, and free resources when reference count is
+ * 0.
+ * Parameters:
+ * Returns:
+ * Requires:
+ * PROC is initialized.
+ * Ensures:
+ * When reference count == 0, PROC's private resources are freed.
+ */
+extern void proc_exit(void);
+
+/*
+ * ======== proc_get_dev_object =========
+ * Purpose:
+ * Returns the DEV handle for a given processor handle.
+ * Parameters:
+ * hprocessor : Processor Handle
+ * phDevObject : Location to store the DEV Handle.
+ * Returns:
+ * 0 : Success; *phDevObject has Dev handle
+ * -EPERM : Failure; *phDevObject is zero.
+ * Requires:
+ * phDevObject is not NULL
+ * PROC Initialized.
+ * Ensures:
+ * 0 : *phDevObject is not NULL
+ * -EPERM : *phDevObject is NULL.
+ */
+extern int proc_get_dev_object(void *hprocessor,
+ struct dev_object **phDevObject);
+
+/*
+ * ======== proc_init ========
+ * Purpose:
+ * Initialize PROC's private state, keeping a reference count on each
+ * call.
+ * Parameters:
+ * Returns:
+ * TRUE if initialized; FALSE if error occurred.
+ * Requires:
+ * Ensures:
+ * TRUE: A requirement for the other public PROC functions.
+ */
+extern bool proc_init(void);
+
+/*
+ * ======== proc_get_state ========
+ * Purpose:
+ * Report the state of the specified DSP processor.
+ * Parameters:
+ * hprocessor : The processor handle.
+ * proc_state_obj : Ptr to location to store the dsp_processorstate
+ * structure.
+ * state_info_size: Size of dsp_processorstate.
+ * Returns:
+ * 0 : Success.
+ * -EFAULT : Invalid processor handle.
+ * -EPERM : General failure while querying processor state.
+ * Requires:
+ * proc_state_obj is not NULL
+ * state_info_size is >= than the size of dsp_processorstate structure.
+ * PROC Initialized.
+ * Ensures:
+ * Details:
+ */
+extern int proc_get_state(void *hprocessor, OUT struct dsp_processorstate
+ *proc_state_obj, u32 state_info_size);
+
+/*
+ * ======== proc_get_processor_id ========
+ * Purpose:
+ * Report the ID of the specified DSP processor.
+ * Parameters:
+ * hprocessor : The processor handle.
+ * procID : Processor ID
+ *
+ * Returns:
+ * 0 : Success.
+ * -EFAULT : Invalid processor handle.
+ * -EPERM : General failure while querying the processor ID.
+ * Requires:
+ * procID is not NULL
+ * PROC Initialized.
+ * Ensures:
+ * Details:
+ */
+extern int proc_get_processor_id(void *hprocessor, u32 * procID);
+
+/*
+ * ======== proc_get_trace ========
+ * Purpose:
+ * Retrieve the trace buffer from the specified DSP processor.
+ * Parameters:
+ * hprocessor : The processor handle.
+ * pbuf : Ptr to buffer to hold trace output.
+ * max_size : Maximum size of the output buffer.
+ * Returns:
+ * 0 : Success.
+ * -EFAULT : Invalid processor handle.
+ * -EPERM : General failure while retrieving the processor trace
+ * buffer.
+ * Requires:
+ * pbuf is not NULL
+ * max_size is > 0.
+ * PROC Initialized.
+ * Ensures:
+ * Details:
+ */
+extern int proc_get_trace(void *hprocessor, u8 * pbuf, u32 max_size);
+
+/*
+ * ======== proc_load ========
+ * Purpose:
+ * Reset a processor and load a new base program image.
+ * This will be an OEM-only function.
+ * Parameters:
+ * hprocessor: The processor handle.
+ * argc_index: The number of arguments (strings) in user_args[].
+ * user_args: An array of arguments (Unicode strings).
+ * user_envp: An array of environment settings (Unicode strings).
+ * Returns:
+ * 0: Success.
+ * -ENOENT: The DSP executable was not found.
+ * -EFAULT: Invalid processor handle.
+ * -EPERM : Unable to Load the Processor
+ * Requires:
+ * user_args is not NULL
+ * argc_index is > 0
+ * PROC Initialized.
+ * Ensures:
+ * Success and ProcState == PROC_LOADED
+ * or DSP_FAILED status.
+ * Details:
+ * Does not implement access rights to control which GPP application
+ * can load the processor.
+ */
+extern int proc_load(void *hprocessor,
+ IN CONST s32 argc_index, IN CONST char **user_args,
+ IN CONST char **user_envp);
+
+/*
+ * ======== proc_register_notify ========
+ * Purpose:
+ * Register to be notified of specific processor events
+ * Parameters:
+ * hprocessor : The processor handle.
+ * event_mask : Mask of types of events to be notified about.
+ * notify_type : Type of notification to be sent.
+ * hnotification: Handle to be used for notification.
+ * Returns:
+ * 0 : Success.
+ * -EFAULT : Invalid processor handle or hnotification.
+ * -EINVAL : Parameter event_mask is Invalid
+ * DSP_ENOTIMP : The notification type specified in notify_type
+ * is not supported.
+ * -EPERM : Unable to register for notification.
+ * Requires:
+ * hnotification is not NULL
+ * PROC Initialized.
+ * Ensures:
+ * Details:
+ */
+extern int proc_register_notify(void *hprocessor,
+ u32 event_mask, u32 notify_type,
+ struct dsp_notification
+ *hnotification);
+
+/*
+ * ======== proc_notify_clients ========
+ * Purpose:
+ * Notify the Processor Clients
+ * Parameters:
+ * hProc : The processor handle.
+ * uEvents : Event to be notified about.
+ * Returns:
+ * 0 : Success.
+ * -EFAULT : Invalid processor handle.
+ * -EPERM : Failure to Set or Reset the Event
+ * Requires:
+ * uEvents is Supported or Valid type of Event
+ * hProc is a valid handle
+ * PROC Initialized.
+ * Ensures:
+ */
+extern int proc_notify_clients(void *hProc, u32 uEvents);
+
+/*
+ * ======== proc_notify_all_clients ========
+ * Purpose:
+ * Notify the Processor Clients
+ * Parameters:
+ * hProc : The processor handle.
+ * uEvents : Event to be notified about.
+ * Returns:
+ * 0 : Success.
+ * -EFAULT : Invalid processor handle.
+ * -EPERM : Failure to Set or Reset the Event
+ * Requires:
+ * uEvents is Supported or Valid type of Event
+ * hProc is a valid handle
+ * PROC Initialized.
+ * Ensures:
+ * Details:
+ * NODE And STRM would use this function to notify their clients
+ * about the state changes in NODE or STRM.
+ */
+extern int proc_notify_all_clients(void *hProc, u32 uEvents);
+
+/*
+ * ======== proc_start ========
+ * Purpose:
+ * Start a processor running.
+ * Processor must be in PROC_LOADED state.
+ * This will be an OEM-only function, and not part of the 'Bridge
+ * application developer's API.
+ * Parameters:
+ * hprocessor : The processor handle.
+ * Returns:
+ * 0 : Success.
+ * -EFAULT : Invalid processor handle.
+ * -EBADR: Processor is not in PROC_LOADED state.
+ * -EPERM : Unable to start the processor.
+ * Requires:
+ * PROC Initialized.
+ * Ensures:
+ * Success and ProcState == PROC_RUNNING or DSP_FAILED status.
+ * Details:
+ */
+extern int proc_start(void *hprocessor);
+
+/*
+ * ======== proc_stop ========
+ * Purpose:
+ * Stop a running processor.
+ * Processor must be in PROC_LOADED state.
+ * This will be an OEM-only function, and not part of the 'Bridge
+ * application developer's API.
+ * Parameters:
+ * hprocessor : The processor handle.
+ * Returns:
+ * 0 : Success.
+ * -EFAULT : Invalid processor handle.
+ * -EBADR: Processor is not in PROC_LOADED state.
+ * -EPERM : Unable to start the processor.
+ * Requires:
+ * PROC Initialized.
+ * Ensures:
+ * Success and ProcState == PROC_RUNNING or DSP_FAILED status.
+ * Details:
+ */
+extern int proc_stop(void *hprocessor);
+
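+/*
+ * Illustrative usage sketch (not part of the driver): the usual
+ * attach/load/start sequence. The base image path is a placeholder and
+ * pr_ctxt is assumed to be a valid process context.
+ *
+ *	void *proc;
+ *	const char *argv[] = { "/lib/dsp/baseimage.dof", NULL };
+ *
+ *	if (!proc_attach(0, NULL, &proc, pr_ctxt)) {
+ *		proc_load(proc, 1, argv, NULL);
+ *		proc_start(proc);	// PROC_LOADED -> PROC_RUNNING
+ *		...
+ *		proc_stop(proc);
+ *		proc_detach(pr_ctxt);
+ *	}
+ */
+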
+/*
+ * ======== proc_end_dma ========
+ * Purpose:
+ * End a DMA transfer
+ * Parameters:
+ * hprocessor : The processor handle.
+ * pmpu_addr : Buffer start address
+ * ul_size : Buffer size
+ * dir : The direction of the transfer
+ * Requires:
+ * Memory was previously mapped.
+ */
+extern int proc_end_dma(void *hprocessor, void *pmpu_addr, u32 ul_size,
+ enum dma_data_direction dir);
+/*
+ * ======== proc_begin_dma ========
+ * Purpose:
+ * Begin a DMA transfer
+ * Parameters:
+ * hprocessor : The processor handle.
+ * pmpu_addr : Buffer start address
+ * ul_size : Buffer size
+ * dir : The direction of the transfer
+ * Requires:
+ * Memory was previously mapped.
+ */
+extern int proc_begin_dma(void *hprocessor, void *pmpu_addr, u32 ul_size,
+ enum dma_data_direction dir);
+
+/*
+ * ======== proc_flush_memory ========
+ * Purpose:
+ * Flushes a buffer from the MPU data cache.
+ * Parameters:
+ * hprocessor : The processor handle.
+ * pmpu_addr : Buffer start address
+ * ul_size : Buffer size
+ * ul_flags : Reserved.
+ * Returns:
+ * 0 : Success.
+ * -EFAULT : Invalid processor handle.
+ * -EPERM : General failure.
+ * Requires:
+ * PROC Initialized.
+ * Ensures:
+ * Details:
+ * All the arguments are currently ignored.
+ */
+extern int proc_flush_memory(void *hprocessor,
+ void *pmpu_addr, u32 ul_size, u32 ul_flags);
+
+/*
+ * ======== proc_invalidate_memory ========
+ * Purpose:
+ * Invalidates a buffer from the MPU data cache.
+ * Parameters:
+ * hprocessor : The processor handle.
+ * pmpu_addr : Buffer start address
+ * ul_size : Buffer size
+ * Returns:
+ * 0 : Success.
+ * -EFAULT : Invalid processor handle.
+ * -EPERM : General failure.
+ * Requires:
+ * PROC Initialized.
+ * Ensures:
+ * Details:
+ * All the arguments are currently ignored.
+ */
+extern int proc_invalidate_memory(void *hprocessor,
+ void *pmpu_addr, u32 ul_size);
+
+/*
+ * ======== proc_map ========
+ * Purpose:
+ * Maps a MPU buffer to DSP address space.
+ * Parameters:
+ * hprocessor : The processor handle.
+ * pmpu_addr : Starting address of the memory region to map.
+ * ul_size : Size of the memory region to map.
+ * req_addr : Requested DSP start address. Offset-adjusted actual
+ * mapped address is in the last argument.
+ * pp_map_addr : Ptr to DSP side mapped u8 address.
+ * ul_map_attr : Optional endianness attributes, virt to phys flag.
+ * Returns:
+ * 0 : Success.
+ * -EFAULT : Invalid processor handle.
+ * -EPERM : General failure.
+ * -ENOMEM : MPU side memory allocation error.
+ * -ENOENT : Cannot find a reserved region starting with this
+ * : address.
+ * Requires:
+ * pmpu_addr is not NULL
+ * ul_size is not zero
+ * pp_map_addr is not NULL
+ * PROC Initialized.
+ * Ensures:
+ * Details:
+ */
+extern int proc_map(void *hprocessor,
+ void *pmpu_addr,
+ u32 ul_size,
+ void *req_addr,
+ void **pp_map_addr, u32 ul_map_attr,
+ struct process_context *pr_ctxt);
+
+/*
+ * ======== proc_reserve_memory ========
+ * Purpose:
+ * Reserve a virtually contiguous region of DSP address space.
+ * Parameters:
+ * hprocessor : The processor handle.
+ * ul_size : Size of the address space to reserve.
+ * pp_rsv_addr : Ptr to DSP side reserved u8 address.
+ * Returns:
+ * 0 : Success.
+ * -EFAULT : Invalid processor handle.
+ * -EPERM : General failure.
+ * -ENOMEM : Cannot reserve chunk of this size.
+ * Requires:
+ * pp_rsv_addr is not NULL
+ * PROC Initialized.
+ * Ensures:
+ * Details:
+ */
+extern int proc_reserve_memory(void *hprocessor,
+ u32 ul_size, void **pp_rsv_addr,
+ struct process_context *pr_ctxt);
+
+/*
+ * ======== proc_un_map ========
+ * Purpose:
+ * Removes a MPU buffer mapping from the DSP address space.
+ * Parameters:
+ * hprocessor : The processor handle.
+ * map_addr : Starting address of the mapped memory region.
+ * Returns:
+ * 0 : Success.
+ * -EFAULT : Invalid processor handle.
+ * -EPERM : General failure.
+ * -ENOENT : Cannot find a mapped region starting with this
+ * : address.
+ * Requires:
+ * map_addr is not NULL
+ * PROC Initialized.
+ * Ensures:
+ * Details:
+ */
+extern int proc_un_map(void *hprocessor, void *map_addr,
+ struct process_context *pr_ctxt);
+
+/*
+ * ======== proc_un_reserve_memory ========
+ * Purpose:
+ * Frees a previously reserved region of DSP address space.
+ * Parameters:
+ * hprocessor : The processor handle.
+ * prsv_addr : Ptr to DSP side reserved u8 address.
+ * Returns:
+ * 0 : Success.
+ * -EFAULT : Invalid processor handle.
+ * -EPERM : General failure.
+ * -ENOENT : Cannot find a reserved region starting with this
+ * : address.
+ * Requires:
+ * prsv_addr is not NULL
+ * PROC Initialized.
+ * Ensures:
+ * Details:
+ */
+extern int proc_un_reserve_memory(void *hprocessor,
+ void *prsv_addr,
+ struct process_context *pr_ctxt);
+
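+/*
+ * Illustrative usage sketch (not part of the driver): reserving DSP
+ * virtual space, mapping an MPU buffer into it and tearing the mapping
+ * down again. buf and size come from the caller; a map attribute of 0
+ * is assumed here to select the default attributes.
+ *
+ *	void *rsv_addr, *map_addr;
+ *
+ *	if (!proc_reserve_memory(proc, size, &rsv_addr, pr_ctxt)) {
+ *		proc_map(proc, buf, size, rsv_addr, &map_addr, 0, pr_ctxt);
+ *		...
+ *		proc_un_map(proc, map_addr, pr_ctxt);
+ *		proc_un_reserve_memory(proc, rsv_addr, pr_ctxt);
+ *	}
+ */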
+#endif /* PROC_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/procpriv.h b/drivers/staging/tidspbridge/include/dspbridge/procpriv.h
new file mode 100644
index 0000000..77d1f0e
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/procpriv.h
@@ -0,0 +1,25 @@
+/*
+ * procpriv.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Global PROC constants and types, shared by PROC, MGR and DSP API.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef PROCPRIV_
+#define PROCPRIV_
+
+/* RM PROC Object */
+struct proc_object;
+
+#endif /* PROCPRIV_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/pwr.h b/drivers/staging/tidspbridge/include/dspbridge/pwr.h
new file mode 100644
index 0000000..63ccf8c
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/pwr.h
@@ -0,0 +1,107 @@
+/*
+ * pwr.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef PWR_
+#define PWR_
+
+#include <dspbridge/dbdefs.h>
+#include <dspbridge/pwr_sh.h>
+
+/*
+ * ======== pwr_sleep_dsp ========
+ * Signal the DSP to go to sleep.
+ *
+ * Parameters:
+ * sleepCode: New sleep state for DSP. (Initially, valid codes
+ * are PWR_DEEPSLEEP or PWR_EMERGENCYDEEPSLEEP; both of
+ * these codes will simply put the DSP in deep sleep.)
+ *
+ * timeout: Maximum time (msec) that PWR should wait for
+ * confirmation that the DSP sleep state has been
+ * reached. If PWR should simply send the command to
+ * the DSP to go to sleep and then return (i.e.,
+ * asynchronous sleep), the timeout should be
+ * specified as zero.
+ *
+ * Returns:
+ * 0: Success.
+ * 0: Success, but the DSP was already asleep.
+ * -EINVAL: The specified sleepCode is not supported.
+ * -ETIME: A timeout occurred while waiting for DSP sleep
+ * confirmation.
+ * -EPERM: General failure, unable to send sleep command to
+ * the DSP.
+ */
+extern int pwr_sleep_dsp(IN CONST u32 sleepCode, IN CONST u32 timeout);
+
+/*
+ * ======== pwr_wake_dsp ========
+ * Signal the DSP to wake from sleep.
+ *
+ * Parameters:
+ * timeout: Maximum time (msec) that PWR should wait for
+ * confirmation that the DSP is awake. If PWR should
+ * simply send a command to the DSP to wake and then
+ * return (i.e., asynchronous wake), timeout should
+ * be specified as zero.
+ *
+ * Returns:
+ * 0: Success.
+ * 0: Success, but the DSP was already awake.
+ * -ETIME: A timeout occurred while waiting for wake
+ * confirmation.
+ * -EPERM: General failure, unable to send wake command to
+ * the DSP.
+ */
+extern int pwr_wake_dsp(IN CONST u32 timeout);
+
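+/*
+ * Illustrative usage sketch (not part of the driver): putting the DSP
+ * into deep sleep and waking it again, waiting up to 500 ms for each
+ * transition. PWR_DEEPSLEEP comes from pwr_sh.h.
+ *
+ *	if (!pwr_sleep_dsp(PWR_DEEPSLEEP, 500)) {
+ *		...	// DSP is in deep sleep
+ *		pwr_wake_dsp(500);
+ *	}
+ */
+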
+/*
+ * ======== pwr_pm_pre_scale ========
+ * Prescale notification to DSP.
+ *
+ * Parameters:
+ * voltage_domain: The voltage domain for which notification is sent
+ * level: The level of voltage domain
+ *
+ * Returns:
+ * 0: Success.
+ * 0: Success, but the DSP was already awake.
+ * -ETIME: A timeout occurred while waiting for wake
+ * confirmation.
+ * -EPERM: General failure, unable to send wake command to
+ * the DSP.
+ */
+extern int pwr_pm_pre_scale(IN u16 voltage_domain, u32 level);
+
+/*
+ * ======== pwr_pm_post_scale ========
+ * PostScale notification to DSP.
+ *
+ * Parameters:
+ * voltage_domain: The voltage domain for which notification is sent
+ * level: The level of voltage domain
+ *
+ * Returns:
+ * 0: Success.
+ * 0: Success, but the DSP was already awake.
+ * -ETIME: A timeout occurred while waiting for wake
+ * confirmation.
+ * -EPERM: General failure, unable to send wake command to
+ * the DSP.
+ */
+extern int pwr_pm_post_scale(IN u16 voltage_domain, u32 level);
+
+#endif /* PWR_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/pwr_sh.h b/drivers/staging/tidspbridge/include/dspbridge/pwr_sh.h
new file mode 100644
index 0000000..1b4a090
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/pwr_sh.h
@@ -0,0 +1,33 @@
+/*
+ * pwr_sh.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Power Manager shared definitions (used on both GPP and DSP sides).
+ *
+ * Copyright (C) 2008 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef PWR_SH_
+#define PWR_SH_
+
+#include <dspbridge/mbx_sh.h>
+
+/* valid sleep command codes that can be sent by GPP via mailbox: */
+#define PWR_DEEPSLEEP MBX_PM_DSPIDLE
+#define PWR_EMERGENCYDEEPSLEEP MBX_PM_EMERGENCYSLEEP
+#define PWR_SLEEPUNTILRESTART MBX_PM_SLEEPUNTILRESTART
+#define PWR_WAKEUP MBX_PM_DSPWAKEUP
+#define PWR_AUTOENABLE MBX_PM_PWRENABLE
+#define PWR_AUTODISABLE MBX_PM_PWRDISABLE
+#define PWR_RETENTION MBX_PM_DSPRETN
+
+#endif /* PWR_SH_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/resourcecleanup.h b/drivers/staging/tidspbridge/include/dspbridge/resourcecleanup.h
new file mode 100644
index 0000000..b452a71
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/resourcecleanup.h
@@ -0,0 +1,63 @@
+/*
+ * resourcecleanup.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#include <dspbridge/nodepriv.h>
+#include <dspbridge/drv.h>
+
+extern int drv_get_proc_ctxt_list(struct process_context **pPctxt,
+ struct drv_object *hdrv_obj);
+
+extern int drv_insert_proc_context(struct drv_object *hDrVObject,
+ void *hPCtxt);
+
+extern int drv_remove_all_dmm_res_elements(void *ctxt);
+
+extern int drv_remove_all_node_res_elements(void *ctxt);
+
+extern int drv_proc_set_pid(void *ctxt, s32 process);
+
+extern int drv_remove_all_resources(void *pPctxt);
+
+extern int drv_remove_proc_context(struct drv_object *hDRVObject,
+ void *pr_ctxt);
+
+extern int drv_get_node_res_element(void *hnode, void *node_res,
+ void *ctxt);
+
+extern int drv_insert_node_res_element(void *hnode, void *node_res,
+ void *ctxt);
+
+extern void drv_proc_node_update_heap_status(void *hNodeRes, s32 status);
+
+extern int drv_remove_node_res_element(void *node_res, void *status);
+
+extern void drv_proc_node_update_status(void *hNodeRes, s32 status);
+
+extern int drv_proc_update_strm_res(u32 num_bufs, void *strm_res);
+
+extern int drv_proc_insert_strm_res_element(void *hStrm,
+ void *strm_res,
+ void *pPctxt);
+
+extern int drv_get_strm_res_element(void *hStrm, void *strm_res,
+ void *ctxt);
+
+extern int drv_proc_remove_strm_res_element(void *strm_res,
+ void *ctxt);
+
+extern int drv_remove_all_strm_res_elements(void *ctxt);
+
+extern enum node_state node_get_state(void *hnode);
diff --git a/drivers/staging/tidspbridge/include/dspbridge/rmm.h b/drivers/staging/tidspbridge/include/dspbridge/rmm.h
new file mode 100644
index 0000000..d36a8c3
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/rmm.h
@@ -0,0 +1,181 @@
+/*
+ * rmm.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * This memory manager provides general heap management and arbitrary
+ * alignment for any number of memory segments, and management of overlay
+ * memory.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef RMM_
+#define RMM_
+
+/*
+ * ======== rmm_addr ========
+ * DSP address + segid
+ */
+struct rmm_addr {
+ u32 addr;
+ s32 segid;
+};
+
+/*
+ * ======== rmm_segment ========
+ * Memory segment on the DSP available for remote allocations.
+ */
+struct rmm_segment {
+ u32 base; /* Base of the segment */
+ u32 length; /* Size of the segment (target MAUs) */
+ s32 space; /* Code or data */
+ u32 number; /* Number of Allocated Blocks */
+};
+
+/*
+ * ======== RMM_Target ========
+ */
+struct rmm_target_obj;
+
+/*
+ * ======== rmm_alloc ========
+ *
+ * rmm_alloc is used to remotely allocate or reserve memory on the DSP.
+ *
+ * Parameters:
+ * target - Target returned from rmm_create().
+ * segid - Memory segment to allocate from.
+ * size - Size (target MAUS) to allocate.
+ * align - alignment.
+ * dspAddr - If reserve is FALSE, the location to store allocated
+ * address on output, otherwise, the DSP address to
+ * reserve.
+ * reserve - If TRUE, reserve the memory specified by dspAddr.
+ * Returns:
+ * 0: Success.
+ * -ENOMEM: Memory allocation on GPP failed.
+ * -ENXIO: Cannot "allocate" overlay memory because it's
+ * already in use.
+ * Requires:
+ * RMM initialized.
+ * Valid target.
+ * dspAddr != NULL.
+ * size > 0
+ * reserve || target->num_segs > 0.
+ * Ensures:
+ */
+extern int rmm_alloc(struct rmm_target_obj *target, u32 segid, u32 size,
+ u32 align, u32 *dspAdr, bool reserve);
+
+/*
+ * ======== rmm_create ========
+ * Create a target object with memory segments for remote allocation. If
+ * seg_tab == NULL or num_segs == 0, memory can only be reserved through
+ * rmm_alloc().
+ *
+ * Parameters:
+ * target_obj: - Location to store target on output.
+ * seg_tab: - Table of memory segments.
+ * num_segs: - Number of memory segments.
+ * Returns:
+ * 0: Success.
+ * -ENOMEM: Memory allocation failed.
+ * Requires:
+ * RMM initialized.
+ * target_obj != NULL.
+ * num_segs == 0 || seg_tab != NULL.
+ * Ensures:
+ * Success: Valid *target_obj.
+ * Failure: *target_obj == NULL.
+ */
+extern int rmm_create(struct rmm_target_obj **target_obj,
+ struct rmm_segment seg_tab[], u32 num_segs);
+
+/*
+ * ======== rmm_delete ========
+ * Delete target allocated in rmm_create().
+ *
+ * Parameters:
+ * target - Target returned from rmm_create().
+ * Returns:
+ * Requires:
+ * RMM initialized.
+ * Valid target.
+ * Ensures:
+ */
+extern void rmm_delete(struct rmm_target_obj *target);
+
+/*
+ * ======== rmm_exit ========
+ * Exit the RMM module
+ *
+ * Parameters:
+ * Returns:
+ * Requires:
+ * rmm_init successfully called.
+ * Ensures:
+ */
+extern void rmm_exit(void);
+
+/*
+ * ======== rmm_free ========
+ * Free or unreserve memory allocated through rmm_alloc().
+ *
+ * Parameters:
+ * target: - Target returned from rmm_create().
+ * segid: - Segment of memory to free.
+ * dspAddr: - Address to free or unreserve.
+ * size: - Size of memory to free or unreserve.
+ * reserved: - TRUE if memory was reserved only, otherwise FALSE.
+ * Returns:
+ * Requires:
+ * RMM initialized.
+ * Valid target.
+ * reserved || segid < target->num_segs.
+ * reserve || [dspAddr, dspAddr + size] is a valid memory range.
+ * Ensures:
+ */
+extern bool rmm_free(struct rmm_target_obj *target, u32 segid, u32 dspAddr,
+ u32 size, bool reserved);
+
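+/*
+ * Illustrative usage sketch (not part of the driver): a single remote
+ * segment managed by RMM. The base/length values, the space encoding,
+ * and the allocation size/alignment are arbitrary example values;
+ * segid 0 refers to the one segment handed to rmm_create().
+ *
+ *	struct rmm_segment seg = {
+ *		.base = 0x20000000,	// example DSP address
+ *		.length = 0x10000,	// size in target MAUs
+ *		.space = 0,		// code space (assumed encoding)
+ *		.number = 0,
+ *	};
+ *	struct rmm_target_obj *target;
+ *	u32 dsp_addr;
+ *
+ *	if (!rmm_create(&target, &seg, 1)) {
+ *		rmm_alloc(target, 0, 0x400, 0, &dsp_addr, false);
+ *		...
+ *		rmm_free(target, 0, dsp_addr, 0x400, false);
+ *		rmm_delete(target);
+ *	}
+ */
+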
+/*
+ * ======== rmm_init ========
+ * Initialize the RMM module
+ *
+ * Parameters:
+ * Returns:
+ * TRUE: Success.
+ * FALSE: Failure.
+ * Requires:
+ * Ensures:
+ */
+extern bool rmm_init(void);
+
+/*
+ * ======== rmm_stat ========
+ * Obtain memory segment status
+ *
+ * Parameters:
+ * segid: Segment ID of the dynamic loading segment.
+ * pMemStatBuf: Pointer to allocated buffer into which memory stats are
+ * placed.
+ * Returns:
+ * TRUE: Success.
+ * FALSE: Failure.
+ * Requires:
+ * segid < target->num_segs
+ * Ensures:
+ */
+extern bool rmm_stat(struct rmm_target_obj *target, enum dsp_memtype segid,
+ struct dsp_memstat *pMemStatBuf);
+
+#endif /* RMM_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/rms_sh.h b/drivers/staging/tidspbridge/include/dspbridge/rms_sh.h
new file mode 100644
index 0000000..7bc5574
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/rms_sh.h
@@ -0,0 +1,95 @@
+/*
+ * rms_sh.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * DSP/BIOS Bridge Resource Manager Server shared definitions (used on both
+ * GPP and DSP sides).
+ *
+ * Copyright (C) 2008 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef RMS_SH_
+#define RMS_SH_
+
+#include <dspbridge/rmstypes.h>
+
+/* Node Types: */
+#define RMS_TASK 1 /* Task node */
+#define RMS_DAIS 2 /* xDAIS socket node */
+#define RMS_MSG 3 /* Message node */
+
+/* Memory Types: */
+#define RMS_CODE 0 /* Program space */
+#define RMS_DATA 1 /* Data space */
+#define RMS_IO 2 /* I/O space */
+
+/* RM Server Command and Response Buffer Sizes: */
+#define RMS_COMMANDBUFSIZE 256 /* Size of command buffer */
+#define RMS_RESPONSEBUFSIZE 16 /* Size of response buffer */
+
+/* Pre-Defined Command/Response Codes: */
+#define RMS_EXIT 0x80000000 /* GPP->Node: shutdown */
+#define RMS_EXITACK 0x40000000 /* Node->GPP: ack shutdown */
+#define RMS_BUFDESC 0x20000000 /* Arg1 SM buf, Arg2 SM size */
+#define RMS_KILLTASK 0x10000000 /* GPP->Node: Kill Task */
+#define RMS_USER 0x0 /* Start of user-defined msg codes */
+#define RMS_MAXUSERCODES 0xfff /* Maximum user defined C/R Codes */
+
+/* RM Server RPC Command Structure: */
+struct rms_command {
+ rms_word fxn; /* Server function address */
+ rms_word arg1; /* First argument */
+ rms_word arg2; /* Second argument */
+ rms_word data; /* Function-specific data array */
+};
+
+/*
+ * The rms_strm_def structure defines the parameters for both input and output
+ * streams, and is passed to a node's create function.
+ */
+struct rms_strm_def {
+ rms_word bufsize; /* Buffer size (in DSP words) */
+ rms_word nbufs; /* Max number of bufs in stream */
+ rms_word segid; /* Segment to allocate buffers */
+ rms_word align; /* Alignment for allocated buffers */
+ rms_word timeout; /* Timeout (msec) for blocking calls */
+ char name[1]; /* Device Name (terminated by '\0') */
+};
+
+/* Message node create args structure: */
+struct rms_msg_args {
+ rms_word max_msgs; /* Max # simultaneous msgs to node */
+ rms_word segid; /* Mem segment for NODE_allocMsgBuf */
+ rms_word notify_type; /* Type of message notification */
+ rms_word arg_length; /* Length (in DSP chars) of arg data */
+ rms_word arg_data; /* Arg data for node */
+};
+
+/* Partial task create args structure */
+struct rms_more_task_args {
+ rms_word priority; /* Task's runtime priority level */
+ rms_word stack_size; /* Task's stack size */
+ rms_word sysstack_size; /* Task's system stack size (55x) */
+ rms_word stack_seg; /* Memory segment for task's stack */
+ rms_word heap_addr; /* base address of the node memory heap in
+ * external memory (DSP virtual address) */
+ rms_word heap_size; /* size in MAUs of the node memory heap in
+ * external memory */
+ rms_word misc; /* Misc field. Not used for 'normal'
+ * task nodes; for xDAIS socket nodes
+ * specifies the IALG_Fxn pointer.
+ */
+ /* # input STRM definition structures */
+ rms_word num_input_streams;
+};
+
+#endif /* RMS_SH_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/rmstypes.h b/drivers/staging/tidspbridge/include/dspbridge/rmstypes.h
new file mode 100644
index 0000000..3c31f5e
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/rmstypes.h
@@ -0,0 +1,28 @@
+/*
+ * rmstypes.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * DSP/BIOS Bridge Resource Manager Server shared data type definitions.
+ *
+ * Copyright (C) 2008 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef RMSTYPES_
+#define RMSTYPES_
+#include <linux/types.h>
+/*
+ * DSP-side definitions.
+ */
+#include <dspbridge/std.h>
+typedef u32 rms_word;
+
+#endif /* RMSTYPES_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/services.h b/drivers/staging/tidspbridge/include/dspbridge/services.h
new file mode 100644
index 0000000..eb26c86
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/services.h
@@ -0,0 +1,50 @@
+/*
+ * services.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Provide loading and unloading of SERVICES modules.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef SERVICES_
+#define SERVICES_
+
+#include <dspbridge/host_os.h>
+/*
+ * ======== services_exit ========
+ * Purpose:
+ * Discontinue usage of module; free resources when reference count
+ * reaches 0.
+ * Parameters:
+ * Returns:
+ * Requires:
+ * SERVICES initialized.
+ * Ensures:
+ * Resources used by module are freed when cRef reaches zero.
+ */
+extern void services_exit(void);
+
+/*
+ * ======== services_init ========
+ * Purpose:
+ * Initializes SERVICES modules.
+ * Parameters:
+ * Returns:
+ * TRUE if all modules initialized; otherwise FALSE.
+ * Requires:
+ * Ensures:
+ * SERVICES modules initialized.
+ */
+extern bool services_init(void);
+
+#endif /* SERVICES_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/std.h b/drivers/staging/tidspbridge/include/dspbridge/std.h
new file mode 100644
index 0000000..7e09fec
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/std.h
@@ -0,0 +1,94 @@
+/*
+ * std.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Copyright (C) 2008 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef STD_
+#define STD_
+
+#include <linux/types.h>
+
+/*
+ * ======== _TI_ ========
+ * _TI_ is defined for all TI targets
+ */
+#if defined(_29_) || defined(_30_) || defined(_40_) || defined(_50_) || \
+ defined(_54_) || defined(_55_) || defined(_6x_) || defined(_80_) || \
+ defined(_28_) || defined(_24_)
+#define _TI_ 1
+#endif
+
+/*
+ * ======== _FLOAT_ ========
+ * _FLOAT_ is defined for all targets that natively support floating point
+ */
+#if defined(_SUN_) || defined(_30_) || defined(_40_) || defined(_67_) || \
+ defined(_80_)
+#define _FLOAT_ 1
+#endif
+
+/*
+ * ======== _FIXED_ ========
+ * _FIXED_ is defined for all fixed point target architectures
+ */
+#if defined(_29_) || defined(_50_) || defined(_54_) || defined(_55_) || \
+ defined(_62_) || defined(_64_) || defined(_28_)
+#define _FIXED_ 1
+#endif
+
+/*
+ * ======== _TARGET_ ========
+ * _TARGET_ is defined for all target architectures (as opposed to
+ * host-side software)
+ */
+#if defined(_FIXED_) || defined(_FLOAT_)
+#define _TARGET_ 1
+#endif
+
+/*
+ * 8, 16, 32-bit type definitions
+ *
+ * Sm* - 8-bit type
+ * Md* - 16-bit type
+ * Lg* - 32-bit type
+ *
+ * *s32 - signed type
+ * *u32 - unsigned type
+ * *Bits - unsigned type (bit-maps)
+ */
+
+/*
+ * Aliases for standard C types
+ */
+
+typedef s32(*fxn) (void); /* generic function type */
+
+#ifndef NULL
+#define NULL 0
+#endif
+
+/*
+ * These macros are used to cast 'Arg' types to 's32' or 'Ptr'.
+ * These macros were added for the 55x since Arg is not the same
+ * size as s32 and Ptr in 55x large model.
+ */
+#if defined(_28l_) || defined(_55l_)
+#define ARG_TO_INT(A) ((s32)((long)(A) & 0xffff))
+#define ARG_TO_PTR(A) ((Ptr)(A))
+#else
+#define ARG_TO_INT(A) ((s32)(A))
+#define ARG_TO_PTR(A) ((Ptr)(A))
+#endif
+
+#endif /* STD_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/strm.h b/drivers/staging/tidspbridge/include/dspbridge/strm.h
new file mode 100644
index 0000000..b85a460
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/strm.h
@@ -0,0 +1,404 @@
+/*
+ * strm.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * DSPBridge Stream Manager.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef STRM_
+#define STRM_
+
+#include <dspbridge/dev.h>
+
+#include <dspbridge/strmdefs.h>
+#include <dspbridge/proc.h>
+
+/*
+ * ======== strm_allocate_buffer ========
+ * Purpose:
+ * Allocate data buffer(s) for use with a stream.
+ * Parameter:
+ * hStrm: Stream handle returned from strm_open().
+ * usize: Size (GPP bytes) of the buffer(s).
+ * num_bufs: Number of buffers to allocate.
+ * ap_buffer: Array to hold buffer addresses.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid hStrm.
+ * -ENOMEM: Insufficient memory.
+ * -EPERM: Failure occurred, unable to allocate buffers.
+ * -EINVAL: usize must be > 0 bytes.
+ * Requires:
+ * strm_init(void) called.
+ * ap_buffer != NULL.
+ * Ensures:
+ */
+extern int strm_allocate_buffer(struct strm_object *hStrm,
+ u32 usize,
+ OUT u8 **ap_buffer,
+ u32 num_bufs,
+ struct process_context *pr_ctxt);
+
+/*
+ * ======== strm_close ========
+ * Purpose:
+ * Close a stream opened with strm_open().
+ * Parameter:
+ * hStrm: Stream handle returned from strm_open().
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid hStrm.
+ * -EPIPE: Some data buffers issued to the stream have not
+ * been reclaimed.
+ * -EPERM: Failure to close stream.
+ * Requires:
+ * strm_init(void) called.
+ * Ensures:
+ */
+extern int strm_close(struct strm_object *hStrm,
+ struct process_context *pr_ctxt);
+
+/*
+ * ======== strm_create ========
+ * Purpose:
+ * Create a STRM manager object. This object holds information about the
+ * device needed to open streams.
+ * Parameters:
+ * phStrmMgr: Location to store handle to STRM manager object on
+ * output.
+ * dev_obj: Device for this processor.
+ * Returns:
+ * 0: Success;
+ * -ENOMEM: Insufficient memory for requested resources.
+ * -EPERM: General failure.
+ * Requires:
+ * strm_init(void) called.
+ * phStrmMgr != NULL.
+ * dev_obj != NULL.
+ * Ensures:
+ * 0: Valid *phStrmMgr.
+ * error: *phStrmMgr == NULL.
+ */
+extern int strm_create(OUT struct strm_mgr **phStrmMgr,
+ struct dev_object *dev_obj);
+
+/*
+ * ======== strm_delete ========
+ * Purpose:
+ * Delete the STRM Object.
+ * Parameters:
+ * strm_mgr_obj: Handle to STRM manager object from strm_create.
+ * Returns:
+ * Requires:
+ * strm_init(void) called.
+ * Valid strm_mgr_obj.
+ * Ensures:
+ * strm_mgr_obj is not valid.
+ */
+extern void strm_delete(struct strm_mgr *strm_mgr_obj);
+
+/*
+ * ======== strm_exit ========
+ * Purpose:
+ * Discontinue usage of STRM module.
+ * Parameters:
+ * Returns:
+ * Requires:
+ * strm_init(void) successfully called before.
+ * Ensures:
+ */
+extern void strm_exit(void);
+
+/*
+ * ======== strm_free_buffer ========
+ * Purpose:
+ * Free buffer(s) allocated with strm_allocate_buffer.
+ * Parameter:
+ * hStrm: Stream handle returned from strm_open().
+ * ap_buffer: Array containing buffer addresses.
+ * num_bufs: Number of buffers to be freed.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid stream handle.
+ * -EPERM: Failure occurred, unable to free buffers.
+ * Requires:
+ * strm_init(void) called.
+ * ap_buffer != NULL.
+ * Ensures:
+ */
+extern int strm_free_buffer(struct strm_object *hStrm,
+ u8 **ap_buffer, u32 num_bufs,
+ struct process_context *pr_ctxt);
+
+/*
+ * ======== strm_get_event_handle ========
+ * Purpose:
+ * Get stream's user event handle. This function is used when closing
+ * a stream, so the event can be closed.
+ * Parameter:
+ * hStrm: Stream handle returned from strm_open().
+ * ph_event: Location to store event handle on output.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid hStrm.
+ * Requires:
+ * strm_init(void) called.
+ * ph_event != NULL.
+ * Ensures:
+ */
+extern int strm_get_event_handle(struct strm_object *hStrm,
+ OUT void **ph_event);
+
+/*
+ * ======== strm_get_info ========
+ * Purpose:
+ * Get information about a stream. User's dsp_streaminfo is contained
+ * in stream_info struct. stream_info also contains Bridge private info.
+ * Parameters:
+ * hStrm: Stream handle returned from strm_open().
+ * stream_info: Location to store stream info on output.
+ * stream_info_size: Size of user's dsp_streaminfo structure.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid hStrm.
+ * -EINVAL: stream_info_size < sizeof(dsp_streaminfo).
+ * -EPERM: Unable to get stream info.
+ * Requires:
+ * strm_init(void) called.
+ * stream_info != NULL.
+ * Ensures:
+ */
+extern int strm_get_info(struct strm_object *hStrm,
+ OUT struct stream_info *stream_info,
+ u32 stream_info_size);
+
+/*
+ * ======== strm_idle ========
+ * Purpose:
+ * Idle a stream and optionally flush output data buffers.
+ * If this is an output stream and fFlush is TRUE, all data currently
+ * enqueued will be discarded.
+ * If this is an output stream and fFlush is FALSE, this function
+ * will block until all currently buffered data is output, or the timeout
+ * specified has been reached.
+ * After a successful call to strm_idle(), all buffers can immediately
+ * be reclaimed.
+ * Parameters:
+ * hStrm: Stream handle returned from strm_open().
+ * fFlush: If TRUE, discard output buffers.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid hStrm.
+ * -ETIME: A timeout occurred before the stream could be idled.
+ * -EPERM: Unable to idle stream.
+ * Requires:
+ * strm_init(void) called.
+ * Ensures:
+ */
+extern int strm_idle(struct strm_object *hStrm, bool fFlush);
+
+/*
+ * ======== strm_init ========
+ * Purpose:
+ * Initialize the STRM module.
+ * Parameters:
+ * Returns:
+ * TRUE if initialization succeeded, FALSE otherwise.
+ * Requires:
+ * Ensures:
+ */
+extern bool strm_init(void);
+
+/*
+ * ======== strm_issue ========
+ * Purpose:
+ * Send a buffer of data to a stream.
+ * Parameters:
+ * hStrm: Stream handle returned from strm_open().
+ * pbuf: Pointer to buffer of data to be sent to the stream.
+ * ul_bytes: Number of bytes of data in the buffer.
+ * ul_buf_size: Actual buffer size in bytes.
+ * dw_arg: A user argument that travels with the buffer.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid hStrm.
+ * -ENOSR: The stream is full.
+ * -EPERM: Failure occurred, unable to issue buffer.
+ * Requires:
+ * strm_init(void) called.
+ * pbuf != NULL.
+ * Ensures:
+ */
+extern int strm_issue(struct strm_object *hStrm, IN u8 * pbuf,
+ u32 ul_bytes, u32 ul_buf_size, IN u32 dw_arg);
+
+/*
+ * ======== strm_open ========
+ * Purpose:
+ * Open a stream for sending/receiving data buffers to/from a task of
+ * DAIS socket node on the DSP.
+ * Parameters:
+ * hnode: Node handle returned from node_allocate().
+ * dir: DSP_TONODE or DSP_FROMNODE.
+ * index: Stream index.
+ * pattr: Pointer to structure containing attributes to be
+ * applied to stream. Cannot be NULL.
+ * phStrm: Location to store stream handle on output.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid hnode.
+ * -EPERM: Invalid direction.
+ * hnode is not a task or DAIS socket node.
+ * Unable to open stream.
+ * -EINVAL: Invalid index.
+ * Requires:
+ * strm_init(void) called.
+ * phStrm != NULL.
+ * pattr != NULL.
+ * Ensures:
+ * 0: *phStrm is valid.
+ * error: *phStrm == NULL.
+ */
+extern int strm_open(struct node_object *hnode, u32 dir,
+ u32 index, IN struct strm_attr *pattr,
+ OUT struct strm_object **phStrm,
+ struct process_context *pr_ctxt);
+
+/*
+ * ======== strm_prepare_buffer ========
+ * Purpose:
+ * Prepare a data buffer not allocated by DSPStream_AllocateBuffers()
+ * for use with a stream.
+ * Parameter:
+ * hStrm: Stream handle returned from strm_open().
+ * usize: Size (GPP bytes) of the buffer.
+ * pbuffer: Buffer address.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid hStrm.
+ * -EPERM: Failure occurred, unable to prepare buffer.
+ * Requires:
+ * strm_init(void) called.
+ * pbuffer != NULL.
+ * Ensures:
+ */
+extern int strm_prepare_buffer(struct strm_object *hStrm,
+ u32 usize, u8 *pbuffer);
+
+/*
+ * ======== strm_reclaim ========
+ * Purpose:
+ * Request a buffer back from a stream.
+ * Parameters:
+ * hStrm: Stream handle returned from strm_open().
+ * buf_ptr: Location to store pointer to reclaimed buffer.
+ * pulBytes: Location where number of bytes of data in the
+ * buffer will be written.
+ * pulBufSize: Location where actual buffer size will be written.
+ * pdw_arg: Location where user argument that travels with
+ * the buffer will be written.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid hStrm.
+ * -ETIME: A timeout occurred before a buffer could be
+ * retrieved.
+ * -EPERM: Failure occurred, unable to reclaim buffer.
+ * Requires:
+ * strm_init(void) called.
+ * buf_ptr != NULL.
+ * pulBytes != NULL.
+ * pdw_arg != NULL.
+ * Ensures:
+ */
+extern int strm_reclaim(struct strm_object *hStrm,
+ OUT u8 **buf_ptr, u32 * pulBytes,
+ u32 *pulBufSize, u32 *pdw_arg);
+
+/*
+ * ======== strm_register_notify ========
+ * Purpose:
+ * Register to be notified on specific events for this stream.
+ * Parameters:
+ * hStrm: Stream handle returned by strm_open().
+ * event_mask: Mask of types of events to be notified about.
+ * notify_type: Type of notification to be sent.
+ * hnotification: Handle to be used for notification.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid hStrm.
+ * -ENOMEM: Insufficient memory on GPP.
+ * -EINVAL: event_mask is invalid.
+ * -ENOSYS: Notification type specified by notify_type is not
+ * supported.
+ * Requires:
+ * strm_init(void) called.
+ * hnotification != NULL.
+ * Ensures:
+ */
+extern int strm_register_notify(struct strm_object *hStrm,
+ u32 event_mask, u32 notify_type,
+ struct dsp_notification
+ *hnotification);
+
+/*
+ * ======== strm_select ========
+ * Purpose:
+ * Select a ready stream.
+ * Parameters:
+ * strm_tab: Array of stream handles returned from strm_open().
+ * nStrms: Number of stream handles in array.
+ * pmask: Location to store mask of ready streams on output.
+ * utimeout: Timeout value (milliseconds).
+ * Returns:
+ * 0: Success.
+ * -EDOM: nStrms out of range.
+ * -EFAULT: Invalid stream handle in array.
+ * -ETIME: A timeout occurred before a stream became ready.
+ * -EPERM: Failure occurred, unable to select a stream.
+ * Requires:
+ * strm_init(void) called.
+ * strm_tab != NULL.
+ * nStrms > 0.
+ * pmask != NULL.
+ * Ensures:
+ * 0: *pmask != 0 || utimeout == 0.
+ * Error: *pmask == 0.
+ */
+extern int strm_select(IN struct strm_object **strm_tab,
+ u32 nStrms, OUT u32 *pmask, u32 utimeout);
+
+/*
+ * ======== strm_unprepare_buffer ========
+ * Purpose:
+ * Unprepare a data buffer that was previously prepared for a stream
+ * with DSPStream_PrepareBuffer(), and that will no longer be used with
+ * the stream.
+ * Parameter:
+ * hStrm: Stream handle returned from strm_open().
+ * usize: Size (GPP bytes) of the buffer.
+ * pbuffer: Buffer address.
+ * Returns:
+ * 0: Success.
+ * -EFAULT: Invalid hStrm.
+ * -EPERM: Failure occurred, unable to unprepare buffer.
+ * Requires:
+ * strm_init(void) called.
+ * pbuffer != NULL.
+ * Ensures:
+ */
+extern int strm_unprepare_buffer(struct strm_object *hStrm,
+ u32 usize, u8 *pbuffer);
+
+#endif /* STRM_ */
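As a rough illustration of how the calls above are meant to fit together, here is a minimal, hypothetical GPP-side sequence (error unwinding trimmed; hnode, attrs and pr_ctxt are assumed to come from the usual node_allocate()/driver-open paths, and strm_init() is assumed to have already succeeded):

	struct strm_object *strm;
	u8 *bufs[1];
	u8 *buf;
	u32 bytes, buf_size, arg;
	int status;

	/* open stream 0 towards the node, give it one 4 KiB buffer */
	status = strm_open(hnode, DSP_TONODE, 0, &attrs, &strm, pr_ctxt);
	if (!status)
		status = strm_allocate_buffer(strm, 0x1000, bufs, 1, pr_ctxt);
	if (!status)
		status = strm_issue(strm, bufs[0], 0x800, 0x1000, 0);
	if (!status)
		status = strm_reclaim(strm, &buf, &bytes, &buf_size, &arg);
	if (!status)
		status = strm_idle(strm, false);
	if (!status) {
		strm_free_buffer(strm, bufs, 1, pr_ctxt);
		strm_close(strm, pr_ctxt);
	}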
diff --git a/drivers/staging/tidspbridge/include/dspbridge/strmdefs.h b/drivers/staging/tidspbridge/include/dspbridge/strmdefs.h
new file mode 100644
index 0000000..b363f79
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/strmdefs.h
@@ -0,0 +1,46 @@
+/*
+ * strmdefs.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Global STRM constants and types.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef STRMDEFS_
+#define STRMDEFS_
+
+#define STRM_MAXEVTNAMELEN 32
+
+struct strm_mgr;
+
+struct strm_object;
+
+struct strm_attr {
+ void *user_event;
+ char *pstr_event_name;
+ void *virt_base; /* Process virtual base address of
+ * mapped SM */
+ u32 ul_virt_size; /* Size of virtual space in bytes */
+ struct dsp_streamattrin *stream_attr_in;
+};
+
+struct stream_info {
+ enum dsp_strmmode strm_mode; /* transport mode of
+ * stream(DMA, ZEROCOPY..) */
+ u32 segment_id; /* Segment strm allocs from. 0 is local mem */
+ void *virt_base; /* Stream's process virtual base address */
+ struct dsp_streaminfo *user_strm; /* User's stream information
+ * returned */
+};
+
+#endif /* STRMDEFS_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/sync.h b/drivers/staging/tidspbridge/include/dspbridge/sync.h
new file mode 100644
index 0000000..e2651e7
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/sync.h
@@ -0,0 +1,109 @@
+/*
+ * sync.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Provide synchronization services.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef _SYNC_H
+#define _SYNC_H
+
+#include <dspbridge/dbdefs.h>
+
+
+/* Special timeout value indicating an infinite wait: */
+#define SYNC_INFINITE 0xffffffff
+
+/**
+ * struct sync_object - the basic sync_object structure
+ * @comp: use to signal events
+ * @multi_comp: use to signal multiple events.
+ *
+ */
+struct sync_object{
+ struct completion comp;
+ struct completion *multi_comp;
+};
+
+/**
+ * sync_init_event() - set initial state for a sync_event element
+ * @event: event to be initialized.
+ *
+ * Set the initial state for a sync_event element.
+ */
+
+static inline void sync_init_event(struct sync_object *event)
+{
+ init_completion(&event->comp);
+ event->multi_comp = NULL;
+}
+
+/**
+ * sync_reset_event() - reset a sync_event element
+ * @event: event to be reset.
+ *
+ * This function resets @event to its initial state.
+ */
+
+static inline void sync_reset_event(struct sync_object *event)
+{
+ INIT_COMPLETION(event->comp);
+ event->multi_comp = NULL;
+}
+
+/**
+ * sync_set_event() - set or signal the specified event
+ * @event: Event to be set.
+ *
+ * Set @event; if there is a thread waiting for the event,
+ * it will be woken up. This function wakes only one thread.
+ */
+
+void sync_set_event(struct sync_object *event);
+
+/**
+ * sync_wait_on_event() - wait for an event to be set.
+ * @event: event to wait for.
+ * @timeout: timeout, in milliseconds, for the wait.
+ *
+ * This function will wait until @event is set or until the timeout
+ * expires. It returns 0 on success and -ETIME on timeout.
+ */
+
+static inline int sync_wait_on_event(struct sync_object *event,
+ unsigned timeout)
+{
+ return wait_for_completion_timeout(&event->comp,
+ msecs_to_jiffies(timeout)) ? 0 : -ETIME;
+}
+
+/**
+ * sync_wait_on_multiple_events() - wait for multiple events to be set.
+ * @events: Array of events to wait on.
+ * @count: number of elements in the array.
+ * @timeout: timeout for the wait.
+ * @index: location to store the index of the event that was set.
+ *
+ * This function will wait until any of the array elements is set or until
+ * the timeout expires. On success it returns 0 and stores the index of the
+ * set element in @index; on timeout it returns -ETIME.
+ */
+
+int sync_wait_on_multiple_events(struct sync_object **events,
+ unsigned count, unsigned timeout,
+ unsigned *index);
+
+#endif /* _SYNC_H */
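A minimal sketch of the intended signal/wait pairing, assuming the sync_object lives in some driver context and the 100 ms timeout is arbitrary:

	struct sync_object event;
	int status;

	sync_init_event(&event);

	/* signalling context */
	sync_set_event(&event);

	/* waiting context: 0 once the event is set, -ETIME after 100 ms */
	status = sync_wait_on_event(&event, 100);
	if (!status)
		sync_reset_event(&event);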
diff --git a/drivers/staging/tidspbridge/include/dspbridge/utildefs.h b/drivers/staging/tidspbridge/include/dspbridge/utildefs.h
new file mode 100644
index 0000000..8fe5414
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/utildefs.h
@@ -0,0 +1,39 @@
+/*
+ * utildefs.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * Global UTIL constants and types, shared between DSP API and DSPSYS.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef UTILDEFS_
+#define UTILDEFS_
+
+/* constants taken from configmg.h */
+#define UTIL_MAXMEMREGS 9
+#define UTIL_MAXIOPORTS 20
+#define UTIL_MAXIRQS 7
+#define UTIL_MAXDMACHNLS 7
+
+/* misc. constants */
+#define UTIL_MAXARGVS 10
+
+/* Platform specific important info */
+struct util_sysinfo {
+ /* Granularity of page protection; usually 1k or 4k */
+ u32 dw_page_size;
+ u32 dw_allocation_granularity; /* VM granularity, usually 64K */
+ u32 dw_number_of_processors; /* Used as sanity check */
+};
+
+#endif /* UTILDEFS_ */
diff --git a/drivers/staging/tidspbridge/include/dspbridge/uuidutil.h b/drivers/staging/tidspbridge/include/dspbridge/uuidutil.h
new file mode 100644
index 0000000..d7d0962
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/uuidutil.h
@@ -0,0 +1,62 @@
+/*
+ * uuidutil.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * This file contains the specification of UUID helper functions.
+ *
+ * Copyright (C) 2005-2006 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+#ifndef UUIDUTIL_
+#define UUIDUTIL_
+
+#define MAXUUIDLEN 37
+
+/*
+ * ======== uuid_uuid_to_string ========
+ * Purpose:
+ * Converts a dsp_uuid to an ANSI string.
+ * Parameters:
+ * uuid_obj: Pointer to a dsp_uuid object.
+ * pszUuid: Pointer to a buffer to receive a NULL-terminated UUID
+ * string.
+ * size: Maximum size of the pszUuid string.
+ * Returns:
+ * Requires:
+ * uuid_obj & pszUuid are non-NULL values.
+ * Ensures:
+ * Length of pszUuid is less than MAXUUIDLEN.
+ * Details:
+ * UUID string limit currently set at MAXUUIDLEN.
+ */
+void uuid_uuid_to_string(IN struct dsp_uuid *uuid_obj, OUT char *pszUuid,
+ s32 size);
+
+/*
+ * ======== uuid_uuid_from_string ========
+ * Purpose:
+ * Converts an ANSI string to a dsp_uuid.
+ * Parameters:
+ * pszUuid: Pointer to a string that represents a dsp_uuid object.
+ * uuid_obj: Pointer to a dsp_uuid object.
+ * Returns:
+ * Requires:
+ * uuid_obj & pszUuid are non-NULL values.
+ * Ensures:
+ * Details:
+ * We assume the string representation of a UUID has the following format:
+ * "12345678_1234_1234_1234_123456789abc".
+ */
+extern void uuid_uuid_from_string(IN char *pszUuid,
+ OUT struct dsp_uuid *uuid_obj);
+
+#endif /* UUIDUTIL_ */
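For reference, a hypothetical round trip using the string format documented above (struct dsp_uuid itself is defined elsewhere in the dspbridge headers):

	struct dsp_uuid uuid;
	char str[MAXUUIDLEN];
	char src[] = "12345678_1234_1234_1234_123456789abc";

	uuid_uuid_from_string(src, &uuid);
	uuid_uuid_to_string(&uuid, str, MAXUUIDLEN);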
diff --git a/drivers/staging/tidspbridge/include/dspbridge/wdt.h b/drivers/staging/tidspbridge/include/dspbridge/wdt.h
new file mode 100644
index 0000000..4c00ba5
--- /dev/null
+++ b/drivers/staging/tidspbridge/include/dspbridge/wdt.h
@@ -0,0 +1,79 @@
+/*
+ * wdt.h
+ *
+ * DSP-BIOS Bridge driver support functions for TI OMAP processors.
+ *
+ * IO dispatcher for a shared memory channel driver.
+ *
+ * Copyright (C) 2010 Texas Instruments, Inc.
+ *
+ * This package is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
+ * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+#ifndef __DSP_WDT3_H_
+#define __DSP_WDT3_H_
+
+/* WDT defines */
+#define OMAP3_WDT3_ISR_OFFSET 0x0018
+
+
+/**
+ * struct dsp_wdt_setting - the basic dsp_wdt_setting structure
+ * @reg_base: pointer to the base of the wdt registers
+ * @sm_wdt: pointer to flags in shared memory
+ * @wdt3_tasklet: tasklet to manage wdt events
+ * @fclk: handle to the wdt3 functional clock
+ * @iclk: handle to the wdt3 interface clock
+ *
+ * This struct holds the state used to manage wdt3.
+ */
+
+struct dsp_wdt_setting {
+ void __iomem *reg_base;
+ struct shm *sm_wdt;
+ struct tasklet_struct wdt3_tasklet;
+ struct clk *fclk;
+ struct clk *iclk;
+};
+
+/**
+ * dsp_wdt_init() - initialize wdt3 module.
+ *
+ * This function initializes the wdt3 module so that
+ * the other wdt3 functions can be used.
+ */
+int dsp_wdt_init(void);
+
+/**
+ * dsp_wdt_exit() - shut down the wdt3 module.
+ *
+ * This function frees all resources allocated for wdt3 module.
+ */
+void dsp_wdt_exit(void);
+
+/**
+ * dsp_wdt_enable() - enable/disable wdt3
+ * @enable: bool value to enable/disable wdt3
+ *
+ * This function enables or disables wdt3 based on the @enable value.
+ *
+ */
+void dsp_wdt_enable(bool enable);
+
+/**
+ * dsp_wdt_sm_set() - store a pointer to the shared memory
+ * @data: pointer to dspbridge shared memory
+ *
+ * This function is used to pass a valid pointer to shared memory,
+ * so that the flags can be set where the DSP side can read them.
+ *
+ */
+void dsp_wdt_sm_set(void *data);
+
+#endif
+
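The expected call order, as a hedged sketch based on the kernel-doc above (shm_base stands in for a pointer to the bridge shared memory area):

	int ret = dsp_wdt_init();

	if (!ret) {
		dsp_wdt_sm_set(shm_base);	/* let the DSP side see the wdt flags */
		dsp_wdt_enable(true);
		/* ... later, on shutdown ... */
		dsp_wdt_enable(false);
		dsp_wdt_exit();
	}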
--
1.7.0.4
Add a general cleaning roadmap TODO file to TI's DSP Bridge
staging driver.
Signed-off-by: Ohad Ben-Cohen <[email protected]>
---
I can also be reached at < ohadb at ti dot com >
drivers/staging/tidspbridge/TODO | 18 ++++++++++++++++++
1 files changed, 18 insertions(+), 0 deletions(-)
create mode 100644 drivers/staging/tidspbridge/TODO
diff --git a/drivers/staging/tidspbridge/TODO b/drivers/staging/tidspbridge/TODO
new file mode 100644
index 0000000..54f4a29
--- /dev/null
+++ b/drivers/staging/tidspbridge/TODO
@@ -0,0 +1,18 @@
+* Migrate to (and if necessary, extend) existing upstream code such as
+ iommu, wdt, mcbsp, gptimers
+* Decouple hardware-specific code (e.g. bridge_brd_start/stop/delete/monitor)
+* DOFF binary loader: consider pushing to user space. At the very least,
+ eliminate the direct filesystem access
+* Eliminate general services and libraries - use or extend existing kernel
+ libraries instead (e.g. gcf/lcm in nldr.c, global helpers in gen/)
+* Eliminate direct manipulation of OMAP_SYSC_BASE
+* Eliminate list.h: it seems like a redundant wrapper around existing kernel lists
+* Eliminate DSP_SUCCEEDED macros and their imposed redundant indentations
+ (adopt the kernel way of checking for return values)
+* Audit interfaces exposed to user space
+* Audit and clean up header files folder
+* Use kernel coding style
+* checkpatch.pl fixes
+
+Please send any patches to Greg Kroah-Hartman <[email protected]>
+and Omar Ramirez Luna <[email protected]>.
--
1.7.0.4
From: Omar Ramirez Luna <[email protected]>
Add Kconfig + Makefile for TI's DSP Bridge driver
and expose it to the staging menu.
Signed-off-by: Omar Ramirez Luna <[email protected]>
Signed-off-by: Kanigeri, Hari <[email protected]>
Signed-off-by: Ameya Palande <[email protected]>
Signed-off-by: Guzman Lugo, Fernando <[email protected]>
Signed-off-by: Hebbar, Shivananda <[email protected]>
Signed-off-by: Ramos Falcon, Ernesto <[email protected]>
Signed-off-by: Felipe Contreras <[email protected]>
Signed-off-by: Anna, Suman <[email protected]>
Signed-off-by: Gupta, Ramesh <[email protected]>
Signed-off-by: Gomez Castellanos, Ivan <[email protected]>
Signed-off-by: Andy Shevchenko <[email protected]>
Signed-off-by: Armando Uribe De Leon <[email protected]>
Signed-off-by: Deepak Chitriki <[email protected]>
Signed-off-by: Menon, Nishanth <[email protected]>
Signed-off-by: Phil Carmody <[email protected]>
Signed-off-by: Ohad Ben-Cohen <[email protected]>
---
drivers/staging/Kconfig | 2 +
drivers/staging/Makefile | 1 +
drivers/staging/tidspbridge/Kconfig | 88 ++++++++++++++++++++++++++++++++++
drivers/staging/tidspbridge/Makefile | 34 +++++++++++++
4 files changed, 125 insertions(+), 0 deletions(-)
create mode 100644 drivers/staging/tidspbridge/Kconfig
create mode 100644 drivers/staging/tidspbridge/Makefile
diff --git a/drivers/staging/Kconfig b/drivers/staging/Kconfig
index cdd3ea3..ce1dfa8 100644
--- a/drivers/staging/Kconfig
+++ b/drivers/staging/Kconfig
@@ -151,5 +151,7 @@ source "drivers/staging/msm/Kconfig"
source "drivers/staging/easycap/Kconfig"
+source "drivers/staging/tidspbridge/Kconfig"
+
endif # !STAGING_EXCLUDE_BUILD
endif # STAGING
diff --git a/drivers/staging/Makefile b/drivers/staging/Makefile
index beceaff..7849818 100644
--- a/drivers/staging/Makefile
+++ b/drivers/staging/Makefile
@@ -56,3 +56,4 @@ obj-$(CONFIG_FB_XGI) += xgifb/
obj-$(CONFIG_TOUCHSCREEN_MRSTOUCH) += mrst-touchscreen/
obj-$(CONFIG_MSM_STAGING) += msm/
obj-$(CONFIG_EASYCAP) += easycap/
+obj-$(CONFIG_TIDSPBRIDGE) += tidspbridge/
diff --git a/drivers/staging/tidspbridge/Kconfig b/drivers/staging/tidspbridge/Kconfig
new file mode 100644
index 0000000..37fa2b1
--- /dev/null
+++ b/drivers/staging/tidspbridge/Kconfig
@@ -0,0 +1,88 @@
+#
+# DSP Bridge Driver Support
+#
+
+menuconfig TIDSPBRIDGE
+ tristate "DSP Bridge driver"
+ default n
+ select OMAP_MBOX_FWK
+ help
+ DSP/BIOS Bridge is designed for platforms that contain a GPP and
+ one or more attached DSPs. The GPP is considered the master or
+ "host" processor, and the attached DSPs are processing resources
+ that can be utilized by applications and drivers running on the GPP.
+
+ This driver depends on OMAP Mailbox (OMAP_MBOX_FWK).
+
+config BRIDGE_DVFS
+ bool "Enable Bridge Dynamic Voltage and Frequency Scaling (DVFS)"
+ depends on TIDSPBRIDGE && OMAP_PM_SRF && CPU_FREQ
+ default n
+ help
+ DVFS allows DSP Bridge to initiate the operating point change to
+ scale the chip voltage and frequency in order to match the
+ performance and power consumption to the current processing
+ requirements.
+
+config BRIDGE_MEMPOOL_SIZE
+ hex "Physical memory pool size (Byte)"
+ depends on TIDSPBRIDGE
+ default 0x600000
+ help
+ Allocate specified size of memory at booting time to avoid allocation
+ failure under heavy memory fragmentation after some use time.
+
+config BRIDGE_DEBUG
+ bool "DSP Bridge Debug Support"
+ depends on TIDSPBRIDGE
+ help
+ Say Y to enable Bridge debugging capabilities
+
+config BRIDGE_RECOVERY
+ bool "DSP Recovery Support"
+ depends on TIDSPBRIDGE
+ help
+ In case of DSP fatal error, BRIDGE driver will try to
+ recover itself.
+
+config BRIDGE_CACHE_LINE_CHECK
+ bool "Check buffers to be 128 byte aligned"
+ depends on TIDSPBRIDGE
+ default n
+ help
+ When the DSP processes data, the DSP cache controller loads 128-Byte
+ chunks (lines) from SDRAM and writes the data back in 128-Byte chunks.
+ If a DMM buffer does not start and end on a 128-Byte boundary, the data
+ preceding the start address (SA) from the 128-Byte boundary to the SA
+ and the data at addresses trailing the end address (EA) from the EA to
+ the next 128-Byte boundary will be loaded and written back as well.
+ This can lead to heap corruption. Say Y to enforce the check for
+ 128-byte alignment; buffers failing this check will be rejected.
+
+config BRIDGE_WDT3
+ bool "Enable WDT3 interruptions"
+ depends on TIDSPBRIDGE
+ default n
+ help
+ WDT3 is managed by the DSP, and once it is enabled the DSP-side bridge
+ is in charge of refreshing the timer before it overflows. If the DSP
+ hangs, the MPU will catch the interrupt and try to recover the DSP.
+
+config WDT_TIMEOUT
+ int "DSP watchdog timer timeout (in secs)"
+ depends on BRIDGE_WDT3
+ default 5
+ help
+ Watchdog timer timeout value. If the watchdog timer counter is not
+ reset within this time, the wdt overflow interrupt will be triggered.
+
+comment "Bridge Notifications"
+ depends on TIDSPBRIDGE
+
+config BRIDGE_NTFY_PWRERR
+ bool "Notify DSP Power Error"
+ depends on TIDSPBRIDGE
+ help
+ Enable notifications to registered clients on the event of a power
+ error while trying to suspend the bridge driver. Say Y to signal this
+ event as a fatal error; this will require a bridge restart to recover.
diff --git a/drivers/staging/tidspbridge/Makefile b/drivers/staging/tidspbridge/Makefile
new file mode 100644
index 0000000..6082ef0
--- /dev/null
+++ b/drivers/staging/tidspbridge/Makefile
@@ -0,0 +1,34 @@
+obj-$(CONFIG_TIDSPBRIDGE) += bridgedriver.o
+
+libgen = gen/gb.o gen/gs.o gen/gh.o gen/uuidutil.o
+libservices = services/sync.o services/cfg.o \
+ services/ntfy.o services/services.o
+libcore = core/chnl_sm.o core/msg_sm.o core/io_sm.o core/tiomap3430.o \
+ core/tiomap3430_pwr.o core/tiomap_io.o \
+ core/mmu_fault.o core/ue_deh.o core/wdt.o core/dsp-clock.o
+libpmgr = pmgr/chnl.o pmgr/io.o pmgr/msg.o pmgr/cod.o pmgr/dev.o pmgr/dspapi.o \
+ pmgr/dmm.o pmgr/cmm.o pmgr/dbll.o
+librmgr = rmgr/dbdcd.o rmgr/disp.o rmgr/drv.o rmgr/mgr.o rmgr/node.o \
+ rmgr/proc.o rmgr/pwr.o rmgr/rmm.o rmgr/strm.o rmgr/dspdrv.o \
+ rmgr/nldr.o rmgr/drv_interface.o
+libdload = dynload/cload.o dynload/getsection.o dynload/reloc.o \
+ dynload/tramp.o
+libhw = hw/hw_mmu.o
+
+bridgedriver-objs = $(libgen) $(libservices) $(libcore) $(libpmgr) $(librmgr) \
+ $(libdload) $(libhw)
+
+#Machine dependent
+ccflags-y += -D_TI_ -D_DB_TIOMAP -DTMS32060 \
+ -DTICFG_PROC_VER -DTICFG_EVM_TYPE -DCHNL_SMCLASS \
+ -DCHNL_MESSAGES -DUSE_LEVEL_1_MACROS
+
+ccflags-y += -Idrivers/staging/tidspbridge/include
+ccflags-y += -Idrivers/staging/tidspbridge/services
+ccflags-y += -Idrivers/staging/tidspbridge/core
+ccflags-y += -Idrivers/staging/tidspbridge/pmgr
+ccflags-y += -Idrivers/staging/tidspbridge/rmgr
+ccflags-y += -Idrivers/staging/tidspbridge/dynload
+ccflags-y += -Idrivers/staging/tidspbridge/hw
+ccflags-y += -Iarch/arm
+
--
1.7.0.4
On Wed, Jun 23, 2010 at 4:02 PM, Ohad Ben-Cohen <[email protected]> wrote:
> Add TI's DSP Bridge generic utilities driver sources
> Signed-off-by: Andy Shevchenko <[email protected]>
> +++ b/drivers/staging/tidspbridge/gen/uuidutil.c
Following part could be significantly simplified
> +/*
> + * ======== htoi ========
> + * Purpose:
> + * Converts a hex value to a decimal integer.
> + */
> +/*
> + * ======== uuid_uuid_from_string ========
> + * Purpose:
> + * Converts a string to a struct dsp_uuid.
> + */
Here is the code (since I am already on the s-o-b list I just put the
excerpts here; however, I could prepare a patch in standard form if you
want).
static s32 uuid_hex_to_bin(char *buf, s32 len)
{
	s32 i;
	s32 value;
	s32 result = 0;

	for (i = 0; i < len; i++) {
		value = hex_to_bin(*buf++);
		result *= 16;
		if (value > 0)
			result += value;
	}
	return result;
}
void uuid_uuid_from_string(IN char *pszUuid, OUT struct dsp_uuid *uuid_obj)
{
s32 j;
uuid_obj->ul_data1 = uuid_hex_to_bin(pszUuid, 8);
pszUuid += 8;
/* Step over underscore */
pszUuid++;
uuid_obj->us_data2 = (u16) uuid_hex_to_bin(pszUuid, 4);
pszUuid += 4;
/* Step over underscore */
pszUuid++;
uuid_obj->us_data3 = (u16) uuid_hex_to_bin(pszUuid, 4);
pszUuid += 4;
/* Step over underscore */
pszUuid++;
uuid_obj->uc_data4 = (u8) uuid_hex_to_bin(pszUuid, 2);
pszUuid += 2;
uuid_obj->uc_data5 = (u8) uuid_hex_to_bin(pszUuid, 2);
pszUuid += 2;
/* Step over underscore */
pszUuid++;
for (j = 0; j < 6; j++) {
uuid_obj->uc_data6[j] = (u8) uuid_hex_to_bin(pszUuid, 2);
pszUuid += 2;
}
}
--
With Best Regards,
Andy Shevchenko
On Wed, Jun 23, 2010 at 04:14:00PM +0300, Ohad Ben-Cohen wrote:
> From: Omar Ramirez Luna <[email protected]>
>
> Add Kconfig + Makefile for TI's DSP Bridge driver
> and expose it to the staging menu.
>
> Signed-off-by: Omar Ramirez Luna <[email protected]>
> Signed-off-by: Kanigeri, Hari <[email protected]>
> Signed-off-by: Ameya Palande <[email protected]>
> Signed-off-by: Guzman Lugo, Fernando <[email protected]>
> Signed-off-by: Hebbar, Shivananda <[email protected]>
> Signed-off-by: Ramos Falcon, Ernesto <[email protected]>
> Signed-off-by: Felipe Contreras <[email protected]>
> Signed-off-by: Anna, Suman <[email protected]>
> Signed-off-by: Gupta, Ramesh <[email protected]>
> Signed-off-by: Gomez Castellanos, Ivan <[email protected]>
> Signed-off-by: Andy Shevchenko <[email protected]>
> Signed-off-by: Armando Uribe De Leon <[email protected]>
> Signed-off-by: Deepak Chitriki <[email protected]>
> Signed-off-by: Menon, Nishanth <[email protected]>
> Signed-off-by: Phil Carmody <[email protected]>
> Signed-off-by: Ohad Ben-Cohen <[email protected]>
> ---
> drivers/staging/Kconfig | 2 +
> drivers/staging/Makefile | 1 +
> drivers/staging/tidspbridge/Kconfig | 88 ++++++++++++++++++++++++++++++++++
> drivers/staging/tidspbridge/Makefile | 34 +++++++++++++
> 4 files changed, 125 insertions(+), 0 deletions(-)
> create mode 100644 drivers/staging/tidspbridge/Kconfig
> create mode 100644 drivers/staging/tidspbridge/Makefile
>
>
> diff --git a/drivers/staging/Kconfig b/drivers/staging/Kconfig
> index 3de4bca..c9e8215 100644
> --- a/drivers/staging/Kconfig
> +++ b/drivers/staging/Kconfig
> @@ -153,5 +153,9 @@ source "drivers/staging/easycap/Kconfig"
>
> source "drivers/staging/solo6x10/Kconfig"
>
> +source "drivers/staging/tidspbridge/Kconfig"
> +
> +source "drivers/staging/tidspbridge/Kconfig"
> +
> endif # !STAGING_EXCLUDE_BUILD
> endif # STAGING
> diff --git a/drivers/staging/Makefile b/drivers/staging/Makefile
> index b28d820..952b64e 100644
> --- a/drivers/staging/Makefile
> +++ b/drivers/staging/Makefile
> @@ -57,3 +57,4 @@ obj-$(CONFIG_TOUCHSCREEN_MRSTOUCH) += mrst-touchscreen/
> obj-$(CONFIG_MSM_STAGING) += msm/
> obj-$(CONFIG_EASYCAP) += easycap/
> obj-$(CONFIG_SOLO6X10) += solo6x10/
> +obj-$(CONFIG_TIDSPBRIDGE) += tidspbridge/
> diff --git a/drivers/staging/tidspbridge/Kconfig b/drivers/staging/tidspbridge/Kconfig
> new file mode 100644
> index 0000000..37fa2b1
> --- /dev/null
> +++ b/drivers/staging/tidspbridge/Kconfig
> @@ -0,0 +1,88 @@
> +#
> +# DSP Bridge Driver Support
> +#
> +
> +menuconfig TIDSPBRIDGE
> + tristate "DSP Bridge driver"
> + default n
The default is always 'n' so you don't need this.
Also, this enables the driver to be built on x86, which fails horribly,
and I don't think is what you really want to have happen :)
So I need some more Kconfig changes here, care to redo just this one
patch? I've applied all the others and they will show up in linux-next
tomorrow.
thanks,
greg k-h
On Wed, Jun 23, 2010 at 6:43 PM, Andy Shevchenko
<[email protected]> wrote:
> ... I could prepare patch in standard form, if you want to
That could be great, thanks !
From: Omar Ramirez Luna <[email protected]>
Add Kconfig + Makefile for TI's DSP Bridge driver
and expose it to the staging menu.
For now, have tidspbridge depend on ARCH_OMAP3.
That dependency should be relaxed as soon as required cleanups are applied.
Signed-off-by: Omar Ramirez Luna <[email protected]>
Signed-off-by: Kanigeri, Hari <[email protected]>
Signed-off-by: Ameya Palande <[email protected]>
Signed-off-by: Guzman Lugo, Fernando <[email protected]>
Signed-off-by: Hebbar, Shivananda <[email protected]>
Signed-off-by: Ramos Falcon, Ernesto <[email protected]>
Signed-off-by: Felipe Contreras <[email protected]>
Signed-off-by: Anna, Suman <[email protected]>
Signed-off-by: Gupta, Ramesh <[email protected]>
Signed-off-by: Gomez Castellanos, Ivan <[email protected]>
Signed-off-by: Andy Shevchenko <[email protected]>
Signed-off-by: Armando Uribe De Leon <[email protected]>
Signed-off-by: Deepak Chitriki <[email protected]>
Signed-off-by: Menon, Nishanth <[email protected]>
Signed-off-by: Phil Carmody <[email protected]>
Signed-off-by: Ohad Ben-Cohen <[email protected]>
---
drivers/staging/Kconfig | 2 +
drivers/staging/Makefile | 1 +
drivers/staging/tidspbridge/Kconfig | 88 ++++++++++++++++++++++++++++++++++
drivers/staging/tidspbridge/Makefile | 34 +++++++++++++
4 files changed, 125 insertions(+), 0 deletions(-)
create mode 100644 drivers/staging/tidspbridge/Kconfig
create mode 100644 drivers/staging/tidspbridge/Makefile
diff --git a/drivers/staging/Kconfig b/drivers/staging/Kconfig
index cdd3ea3..ce1dfa8 100644
--- a/drivers/staging/Kconfig
+++ b/drivers/staging/Kconfig
@@ -151,5 +151,7 @@ source "drivers/staging/msm/Kconfig"
source "drivers/staging/easycap/Kconfig"
+source "drivers/staging/tidspbridge/Kconfig"
+
endif # !STAGING_EXCLUDE_BUILD
endif # STAGING
diff --git a/drivers/staging/Makefile b/drivers/staging/Makefile
index beceaff..7849818 100644
--- a/drivers/staging/Makefile
+++ b/drivers/staging/Makefile
@@ -56,3 +56,4 @@ obj-$(CONFIG_FB_XGI) += xgifb/
obj-$(CONFIG_TOUCHSCREEN_MRSTOUCH) += mrst-touchscreen/
obj-$(CONFIG_MSM_STAGING) += msm/
obj-$(CONFIG_EASYCAP) += easycap/
+obj-$(CONFIG_TIDSPBRIDGE) += tidspbridge/
diff --git a/drivers/staging/tidspbridge/Kconfig b/drivers/staging/tidspbridge/Kconfig
new file mode 100644
index 0000000..45372cd
--- /dev/null
+++ b/drivers/staging/tidspbridge/Kconfig
@@ -0,0 +1,88 @@
+#
+# DSP Bridge Driver Support
+#
+
+menuconfig TIDSPBRIDGE
+ tristate "DSP Bridge driver"
+ depends on ARCH_OMAP3
+ select OMAP_MBOX_FWK
+ help
+ DSP/BIOS Bridge is designed for platforms that contain a GPP and
+ one or more attached DSPs. The GPP is considered the master or
+ "host" processor, and the attached DSPs are processing resources
+ that can be utilized by applications and drivers running on the GPP.
+
+ This driver depends on OMAP Mailbox (OMAP_MBOX_FWK).
+
+config BRIDGE_DVFS
+ bool "Enable Bridge Dynamic Voltage and Frequency Scaling (DVFS)"
+ depends on TIDSPBRIDGE && OMAP_PM_SRF && CPU_FREQ
+ default n
+ help
+ DVFS allows DSP Bridge to initiate the operating point change to
+ scale the chip voltage and frequency in order to match the
+ performance and power consumption to the current processing
+ requirements.
+
+config BRIDGE_MEMPOOL_SIZE
+ hex "Physical memory pool size (Byte)"
+ depends on TIDSPBRIDGE
+ default 0x600000
+ help
+ Allocate specified size of memory at booting time to avoid allocation
+ failure under heavy memory fragmentation after some use time.
+
+config BRIDGE_DEBUG
+ bool "DSP Bridge Debug Support"
+ depends on TIDSPBRIDGE
+ help
+ Say Y to enable Bridge debugging capabilities
+
+config BRIDGE_RECOVERY
+ bool "DSP Recovery Support"
+ depends on TIDSPBRIDGE
+ help
+ In case of DSP fatal error, BRIDGE driver will try to
+ recover itself.
+
+config BRIDGE_CACHE_LINE_CHECK
+ bool "Check buffers to be 128 byte aligned"
+ depends on TIDSPBRIDGE
+ default n
+ help
+ When the DSP processes data, the DSP cache controller loads 128-Byte
+ chunks (lines) from SDRAM and writes the data back in 128-Byte chunks.
+ If a DMM buffer does not start and end on a 128-Byte boundary, the data
+ preceding the start address (SA) from the 128-Byte boundary to the SA
+ and the data at addresses trailing the end address (EA) from the EA to
+ the next 128-Byte boundary will be loaded and written back as well.
+ This can lead to heap corruption. Say Y to enforce the check for
+ 128-byte alignment; buffers failing this check will be rejected.
+
+config BRIDGE_WDT3
+ bool "Enable WDT3 interruptions"
+ depends on TIDSPBRIDGE
+ default n
+ help
+ WDT3 is managed by the DSP, and once it is enabled the DSP-side bridge
+ is in charge of refreshing the timer before it overflows. If the DSP
+ hangs, the MPU will catch the interrupt and try to recover the DSP.
+
+config WDT_TIMEOUT
+ int "DSP watchdog timer timeout (in secs)"
+ depends on BRIDGE_WDT3
+ default 5
+ help
+ Watchdog timer timeout value. If the watchdog timer counter is not
+ reset within this time, the wdt overflow interrupt will be triggered.
+
+comment "Bridge Notifications"
+ depends on TIDSPBRIDGE
+
+config BRIDGE_NTFY_PWRERR
+ bool "Notify DSP Power Error"
+ depends on TIDSPBRIDGE
+ help
+ Enable notifications to registered clients on the event of a power
+ error while trying to suspend the bridge driver. Say Y to signal this
+ event as a fatal error; this will require a bridge restart to recover.
diff --git a/drivers/staging/tidspbridge/Makefile b/drivers/staging/tidspbridge/Makefile
new file mode 100644
index 0000000..6082ef0
--- /dev/null
+++ b/drivers/staging/tidspbridge/Makefile
@@ -0,0 +1,34 @@
+obj-$(CONFIG_TIDSPBRIDGE) += bridgedriver.o
+
+libgen = gen/gb.o gen/gs.o gen/gh.o gen/uuidutil.o
+libservices = services/sync.o services/cfg.o \
+ services/ntfy.o services/services.o
+libcore = core/chnl_sm.o core/msg_sm.o core/io_sm.o core/tiomap3430.o \
+ core/tiomap3430_pwr.o core/tiomap_io.o \
+ core/mmu_fault.o core/ue_deh.o core/wdt.o core/dsp-clock.o
+libpmgr = pmgr/chnl.o pmgr/io.o pmgr/msg.o pmgr/cod.o pmgr/dev.o pmgr/dspapi.o \
+ pmgr/dmm.o pmgr/cmm.o pmgr/dbll.o
+librmgr = rmgr/dbdcd.o rmgr/disp.o rmgr/drv.o rmgr/mgr.o rmgr/node.o \
+ rmgr/proc.o rmgr/pwr.o rmgr/rmm.o rmgr/strm.o rmgr/dspdrv.o \
+ rmgr/nldr.o rmgr/drv_interface.o
+libdload = dynload/cload.o dynload/getsection.o dynload/reloc.o \
+ dynload/tramp.o
+libhw = hw/hw_mmu.o
+
+bridgedriver-objs = $(libgen) $(libservices) $(libcore) $(libpmgr) $(librmgr) \
+ $(libdload) $(libhw)
+
+#Machine dependent
+ccflags-y += -D_TI_ -D_DB_TIOMAP -DTMS32060 \
+ -DTICFG_PROC_VER -DTICFG_EVM_TYPE -DCHNL_SMCLASS \
+ -DCHNL_MESSAGES -DUSE_LEVEL_1_MACROS
+
+ccflags-y += -Idrivers/staging/tidspbridge/include
+ccflags-y += -Idrivers/staging/tidspbridge/services
+ccflags-y += -Idrivers/staging/tidspbridge/core
+ccflags-y += -Idrivers/staging/tidspbridge/pmgr
+ccflags-y += -Idrivers/staging/tidspbridge/rmgr
+ccflags-y += -Idrivers/staging/tidspbridge/dynload
+ccflags-y += -Idrivers/staging/tidspbridge/hw
+ccflags-y += -Iarch/arm
+
--
1.7.0.4
On Thu, Jun 24, 2010 at 1:41 AM, Greg KH <[email protected]> wrote:
> The default is always 'n' so you don't need this.
>
> Also, this enables the driver to be built on x86, which fails horribly,
> and I don't think is what you really want to have happen :)
>
> So I need some more Kconfig changes here, care to redo just this one
> patch? I've applied all the others and they will show up in linux-next
> tomorrow.
I fixed all that stuff some time ago:
http://article.gmane.org/gmane.linux.ports.arm.omap/36065
But the patches were ignored.
I might rebase them if nobody beats me to it.
--
Felipe Contreras
There is recently added hex_to_bin() kernel's method which we could use
instead of custom long function.
Signed-off-by: Andy Shevchenko <[email protected]>
Cc: Ohad Ben-Cohen <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: [email protected]
---
drivers/staging/tidspbridge/gen/uuidutil.c | 167 +++++-----------------------
1 files changed, 28 insertions(+), 139 deletions(-)
diff --git a/drivers/staging/tidspbridge/gen/uuidutil.c b/drivers/staging/tidspbridge/gen/uuidutil.c
index ce9319d..eb09bc3 100644
--- a/drivers/staging/tidspbridge/gen/uuidutil.c
+++ b/drivers/staging/tidspbridge/gen/uuidutil.c
@@ -54,61 +54,19 @@ void uuid_uuid_to_string(IN struct dsp_uuid *uuid_obj, OUT char *pszUuid,
DBC_ENSURE(i != -1);
}
-/*
- * ======== htoi ========
- * Purpose:
- * Converts a hex value to a decimal integer.
- */
-
-static int htoi(char c)
+static s32 uuid_hex_to_bin(char *buf, s32 len)
{
- switch (c) {
- case '0':
- return 0;
- case '1':
- return 1;
- case '2':
- return 2;
- case '3':
- return 3;
- case '4':
- return 4;
- case '5':
- return 5;
- case '6':
- return 6;
- case '7':
- return 7;
- case '8':
- return 8;
- case '9':
- return 9;
- case 'A':
- return 10;
- case 'B':
- return 11;
- case 'C':
- return 12;
- case 'D':
- return 13;
- case 'E':
- return 14;
- case 'F':
- return 15;
- case 'a':
- return 10;
- case 'b':
- return 11;
- case 'c':
- return 12;
- case 'd':
- return 13;
- case 'e':
- return 14;
- case 'f':
- return 15;
+ s32 i;
+ s32 value;
+ s32 result = 0;
+
+ for (i = 0; i < len; i++) {
+ value = hex_to_bin(*buf++);
+ result *= 16;
+ if (value > 0)
+ result += value;
}
- return 0;
+
+ return result;
}
/*
@@ -118,106 +76,37 @@ static int htoi(char c)
*/
void uuid_uuid_from_string(IN char *pszUuid, OUT struct dsp_uuid *uuid_obj)
{
- char c;
- s32 i, j;
- s32 result;
- char *temp = pszUuid;
+ s32 j;
- result = 0;
- for (i = 0; i < 8; i++) {
- /* Get first character in string */
- c = *temp;
-
- /* Increase the results by new value */
- result *= 16;
- result += htoi(c);
-
- /* Go to next character in string */
- temp++;
- }
- uuid_obj->ul_data1 = result;
+ uuid_obj->ul_data1 = uuid_hex_to_bin(pszUuid, 8);
+ pszUuid += 8;
/* Step over underscore */
- temp++;
+ pszUuid++;
- result = 0;
- for (i = 0; i < 4; i++) {
- /* Get first character in string */
- c = *temp;
-
- /* Increase the results by new value */
- result *= 16;
- result += htoi(c);
-
- /* Go to next character in string */
- temp++;
- }
- uuid_obj->us_data2 = (u16) result;
+ uuid_obj->us_data2 = (u16) uuid_hex_to_bin(pszUuid, 4);
+ pszUuid += 4;
/* Step over underscore */
- temp++;
-
- result = 0;
- for (i = 0; i < 4; i++) {
- /* Get first character in string */
- c = *temp;
+ pszUuid++;
- /* Increase the results by new value */
- result *= 16;
- result += htoi(c);
-
- /* Go to next character in string */
- temp++;
- }
- uuid_obj->us_data3 = (u16) result;
+ uuid_obj->us_data3 = (u16) uuid_hex_to_bin(pszUuid, 4);
+ pszUuid += 4;
/* Step over underscore */
- temp++;
-
- result = 0;
- for (i = 0; i < 2; i++) {
- /* Get first character in string */
- c = *temp;
+ pszUuid++;
- /* Increase the results by new value */
- result *= 16;
- result += htoi(c);
+ uuid_obj->uc_data4 = (u8) uuid_hex_to_bin(pszUuid, 2);
+ pszUuid += 2;
- /* Go to next character in string */
- temp++;
- }
- uuid_obj->uc_data4 = (u8) result;
-
- result = 0;
- for (i = 0; i < 2; i++) {
- /* Get first character in string */
- c = *temp;
-
- /* Increase the results by new value */
- result *= 16;
- result += htoi(c);
-
- /* Go to next character in string */
- temp++;
- }
- uuid_obj->uc_data5 = (u8) result;
+ uuid_obj->uc_data5 = (u8) uuid_hex_to_bin(pszUuid, 2);
+ pszUuid += 2;
/* Step over underscore */
- temp++;
+ pszUuid++;
for (j = 0; j < 6; j++) {
- result = 0;
- for (i = 0; i < 2; i++) {
- /* Get first character in string */
- c = *temp;
-
- /* Increase the results by new value */
- result *= 16;
- result += htoi(c);
-
- /* Go to next character in string */
- temp++;
- }
- uuid_obj->uc_data6[j] = (u8) result;
+ uuid_obj->uc_data6[j] = (u8) uuid_hex_to_bin(pszUuid, 2);
+ pszUuid += 2;
}
}
--
1.6.6.1
On 7/4/2010 5:53 AM, Felipe Contreras wrote:
> On Thu, Jun 24, 2010 at 1:41 AM, Greg KH<[email protected]> wrote:
>> The default is always 'n' so you don't need this.
>>
>> Also, this enables the driver to be built on x86, which fails horribly,
>> and I don't think is what you really want to have happen :)
>>
>> So I need some more Kconfig changes here, care to redo just this one
>> patch? I've applied all the others and they will show up in linux-next
>> tomorrow.
>
> I fixed all that stuff some time ago:
> http://article.gmane.org/gmane.linux.ports.arm.omap/36065
>
> But the patches were ignored.
Patches were not ignored; the discussion was held privately (you and me).
Patch 13 was not accepted because changing indentation doesn't deserve a
copyright assignment (IMHO), and at that point *you* wanted your patches
not to be included if the last one wasn't merged in.
- omar
I'm removing many people from the Cc which I think don't care about
this. Is this even the right place for discussing about it?
On Tue, Jul 6, 2010 at 6:52 PM, Omar Ramirez Luna <[email protected]> wrote:
> On 7/4/2010 5:53 AM, Felipe Contreras wrote:
>> On Thu, Jun 24, 2010 at 1:41 AM, Greg KH<[email protected]> wrote:
>>> So I need some more Kconfig changes here, care to redo just this one
>>> patch? I've applied all the others and they will show up in linux-next
>>> tomorrow.
>>
>> I fixed all that stuff some time ago:
>> http://article.gmane.org/gmane.linux.ports.arm.omap/36065
>>
>> But the patches were ignored.
>
> Patches were not ignored, discussion was held privately (you and me),
That was for the deh reorganization. Not the Kconfig ones.
Regarding the deh reorganization...
> patch
> 13 was not accepted because changing indentation doesn't deserve a copyright
> assignment (IMHO),
You didn't want to add a copyright without giving any valid reason, so
you started a private thread. You never mentioned any rejection of the
patches on any grounds, either publicly or privately.
If the patch series is only changing indentation then the lines
removed would match the lines added, which is not the case. Take a
look:
15 files changed, 184 insertions(+), 509 deletions(-)
In my book removing 300 lines of code while keeping all the
functionality is a good thing. Without even considering that the rest
of the insertions are cleaning up the code.
> at that point *you* wanted your patches not to be
> included if the last one wasn't merged in.
Not without the copyright update patch.
Maybe you are forgetting that I made many changes before those
patches. Here are some stats for ue_deh and mmu_fault:
Me:
22 commits, 487 insertions(+), 742 deletions(-)
Others:
60 commits, 394 insertions(+), 617 deletions(-)
(I didn't count the automated camel case removal)
218 insertions(+), 209 deletions(-)
And 'git blame' shows me on 70% of ue_deh (which doesn't take into
consideration code removals which is the main thing I did).
While the vast majority of the changes are cleanups (much needed);
there are also functional changes, mostly fixing memory corruptions,
both reproduced and theoretical.
If somebody writes a piece of code that's 10,000 lines, and another
person reorganizes the code to make it 1,000 lines, IMO the usefulness
of the code relies on both persons' contributions. Depending on
whether you care about having something functional, or
maintainability/readability; you might assign more value to one, or
the other, but I think both are important.
So. Would you care to give a reason why my contributions don't deserve
a copyright?
--
Felipe Contreras
On Wed, Jul 7, 2010 at 12:31 PM, Felipe Contreras
<[email protected]> wrote:
> On Tue, Jul 6, 2010 at 6:52 PM, Omar Ramirez Luna <[email protected]> wrote:
>> at that point *you* wanted your patches not to be
>> included if the last one wasn't merged in.
>
> Not without the copyright update patch.
...
> So. Would you care to give a reason why my contributions don't deserve
> a copyright?
Disclaimer: I am not a lawyer, I speak only for myself in this
post, and I do not represent TI in any way.
AFAICT, you get copyright for every kernel change you submit that is
accepted. Even if you just contribute whitespace cleanups, you get the
copyright to those cleanups (not to suggest this was the sole
contribution here).
The copyright assignment is per the actual git commit itself,
obviously, and it doesn't apply for the rest of the code in those
files you edited.
There are some exceptions, but they are not applicable here:
- Usually when you get paid for the work, the employer keeps the
copyright of the patch, not the author.
- There are some projects where you have to relinquish the copyright
in order for the patch to be accepted. This is how FSF (Free Software
Foundation) projects work (e.g. gcc), but not the Linux kernel (which
is not a FSF project).
As I mentioned, I don't think these exceptions apply in this case, and
AFAICT, Felipe - you inherently get the copyright for the changes that
your accepted patches introduce.
So it all boils down to the semantic question whether to edit the
header file, adding new copyright lines, or not.
Felipe, I think your contributions are important and helpful, and I
would personally be happy if you continue to do them. I personally
don't think that adding an explicit copyright line to the header
should be important, because you get your copyright anyway. The exact
change, to which you get copyright on, is kept in the git history, and
will not likely to go away. I think this is pretty satisfying, and as
a result, you don't see people(/companies) changing copyright headers
when they submit kernel patches that edit existing files.
The only thing I am not sure about, and that may be a concern to TI,
is whether adding a copyright line in the header might actually give
copyright ownership for the complete file. If this is true, I can
understand why TI might not be so keen in adding copyright owners to
the file header, without explicitly specifying what is the copyright
about (not to suggest any opinion of TI on the matter, I speak only
for myself).
Again: I am not a lawyer, I speak only for myself in this post,
and I do not represent TI in any way.
Thanks,
Ohad.
>
> --
> Felipe Contreras
>
>From: Felipe Contreras [mailto:[email protected]]
>
>I'm removing many people from the Cc which I think don't care about
>this. Is this even the right place for discussing about it?
>
>On Tue, Jul 6, 2010 at 6:52 PM, Omar Ramirez Luna <[email protected]> wrote:
>> On 7/4/2010 5:53 AM, Felipe Contreras wrote:
>>> On Thu, Jun 24, 2010 at 1:41 AM, Greg KH <[email protected]> wrote:
>>>> So I need some more Kconfig changes here, care to redo just this one
>>>> patch? I've applied all the others and they will show up in linux-next
>>>> tomorrow.
>>>
>>> I fixed all that stuff some time ago:
>>> http://article.gmane.org/gmane.linux.ports.arm.omap/36065
>>>
>>> But the patches were ignored.
>>
>> Patches were not ignored, discussion was held privately (you and me),
>
>That was for the deh reorganization. Not the Kconfig ones.
Yes, you are right, somehow opening this link showed the deh reorganization patches, my bad.
But then again, the patches were not ignored; I sent you a notification that I was wrongly tracking the first patch for the dspbridge branch, in case you wanted to resend it.
Discussion, if any, about copyrights can be done in the patch itself.
- omar