2009-09-11 14:28:08

by Mark Hounschell

Subject: problems doing direct dma from a pci device to pci-e device

00:00.0 Host bridge: ATI Technologies Inc RD790 Northbridge only dual slot PCI-e_GFX and HT3 K8 part
Subsystem: ATI Technologies Inc RD790 Northbridge only dual slot PCI-e_GFX and HT3 K8 part
Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
Status: Cap+ 66MHz+ UDF- FastB2B- ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort+ >SERR- <PERR- INTx-
Latency: 64
Region 3: Memory at <ignored> (64-bit, non-prefetchable)
Capabilities: [c4] HyperTransport: Slave or Primary Interface
Command: BaseUnitID=0 UnitCnt=12 MastHost- DefDir- DUL-
Link Control 0: CFlE- CST- CFE- <LkFail- Init+ EOC- TXO- <CRCErr=0 IsocEn- LSEn- ExtCTL- 64b-
Link Config 0: MLWI=16bit DwFcIn- MLWO=16bit DwFcOut- LWI=16bit DwFcInEn- LWO=16bit DwFcOutEn-
Link Control 1: CFlE- CST- CFE- <LkFail+ Init- EOC+ TXO+ <CRCErr=0 IsocEn- LSEn- ExtCTL- 64b-
Link Config 1: MLWI=8bit DwFcIn- MLWO=8bit DwFcOut- LWI=8bit DwFcInEn- LWO=8bit DwFcOutEn-
Revision ID: 3.00
Link Frequency 0: [b]
Link Error 0: <Prot- <Ovfl- <EOC- CTLTm-
Link Frequency Capability 0: 200MHz+ 300MHz- 400MHz+ 500MHz- 600MHz+ 800MHz+ 1.0GHz+ 1.2GHz+ 1.4GHz- 1.6GHz- Vend-
Feature Capability: IsocFC- LDTSTOP+ CRCTM- ECTLT- 64bA- UIDRD-
Link Frequency 1: 200MHz
Link Error 1: <Prot- <Ovfl- <EOC- CTLTm-
Link Frequency Capability 1: 200MHz- 300MHz- 400MHz- 500MHz- 600MHz- 800MHz- 1.0GHz- 1.2GHz- 1.4GHz- 1.6GHz- Vend-
Error Handling: PFlE- OFlE- PFE- OFE- EOCFE- RFE- CRCFE- SERRFE- CF- RE- PNFE- ONFE- EOCNFE- RNFE- CRCNFE- SERRNFE-
Prefetchable memory behind bridge Upper: 00-00
Bus Number: 00
Capabilities: [40] HyperTransport: Retry Mode
Capabilities: [54] HyperTransport: UnitID Clumping
Capabilities: [9c] HyperTransport: #1a
Kernel modules: ati-agp

00:02.0 PCI bridge: ATI Technologies Inc RD790 PCI to PCI bridge (external gfx0 port A) (prog-if 00 [Normal decode])
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0, Cache Line Size: 32 bytes
Bus: primary=00, secondary=01, subordinate=02, sec-latency=0
I/O behind bridge: 0000c000-0000cfff
Memory behind bridge: b0000000-cfffffff
Prefetchable memory behind bridge: 00000000fde00000-00000000fdefffff
Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
BridgeCtl: Parity- SERR- NoISA- VGA- MAbort- >Reset- FastB2B-
PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
Capabilities: [50] Power Management version 3
Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
Status: D0 PME-Enable- DSel=0 DScale=0 PME-
Capabilities: [58] Express (v2) Root Port (Slot-), MSI 00
DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s <64ns, L1 <1us
ExtTag+ RBE+ FLReset-
DevCtl: Report errors: Correctable+ Non-Fatal+ Fatal+ Unsupported+
RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+
MaxPayload 128 bytes, MaxReadReq 128 bytes
DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
LnkCap: Port #0, Speed 5GT/s, Width x16, ASPM L0s L1, Latency L0 <64ns, L1 <1us
ClockPM- Suprise- LLActRep+ BwNot+
LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- Retrain- CommClk+
ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
LnkSta: Speed 2.5GT/s, Width x4, TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna- CRSVisible-
RootCap: CRSVisible-
RootSta: PME ReqID 0000, PMEStatus- PMEPending-
Capabilities: [a0] Message Signalled Interrupts: Mask- 64bit- Queue=0/0 Enable+
Address: fee0300c Data: 41c9
Capabilities: [b0] Subsystem: ATI Technologies Inc Device 5956
Capabilities: [b8] HyperTransport: MSI Mapping Enable+ Fixed+
Capabilities: [100] Vendor Specific Information <?>
Capabilities: [110] Virtual Channel <?>
Kernel driver in use: pcieport-driver
Kernel modules: shpchp

00:04.0 PCI bridge: ATI Technologies Inc RD790 PCI to PCI bridge (PCI express gpp port A) (prog-if 00 [Normal decode])
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0, Cache Line Size: 32 bytes
Bus: primary=00, secondary=03, subordinate=04, sec-latency=0
I/O behind bridge: 0000b000-0000bfff
Memory behind bridge: fdd00000-fddfffff
Prefetchable memory behind bridge: 00000000fdc00000-00000000fdcfffff
Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
BridgeCtl: Parity- SERR- NoISA- VGA- MAbort- >Reset- FastB2B-
PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
Capabilities: [50] Power Management version 3
Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
Status: D0 PME-Enable- DSel=0 DScale=0 PME-
Capabilities: [58] Express (v2) Root Port (Slot-), MSI 00
DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s <64ns, L1 <1us
ExtTag+ RBE+ FLReset-
DevCtl: Report errors: Correctable+ Non-Fatal+ Fatal+ Unsupported+
RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+
MaxPayload 128 bytes, MaxReadReq 128 bytes
DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
LnkCap: Port #0, Speed 5GT/s, Width x4, ASPM L0s L1, Latency L0 <64ns, L1 <1us
ClockPM- Suprise- LLActRep+ BwNot+
LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- Retrain- CommClk+
ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
LnkSta: Speed 2.5GT/s, Width x1, TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna- CRSVisible-
RootCap: CRSVisible-
RootSta: PME ReqID 0000, PMEStatus- PMEPending-
Capabilities: [a0] Message Signalled Interrupts: Mask- 64bit- Queue=0/0 Enable+
Address: fee0300c Data: 41d1
Capabilities: [b0] Subsystem: ATI Technologies Inc Device 5956
Capabilities: [b8] HyperTransport: MSI Mapping Enable+ Fixed+
Capabilities: [100] Vendor Specific Information <?>
Capabilities: [110] Virtual Channel <?>
Kernel driver in use: pcieport-driver
Kernel modules: shpchp

00:09.0 PCI bridge: ATI Technologies Inc RD790 PCI to PCI bridge (PCI express gpp port E) (prog-if 00 [Normal decode])
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0, Cache Line Size: 32 bytes
Bus: primary=00, secondary=05, subordinate=05, sec-latency=0
I/O behind bridge: 0000e000-0000efff
Memory behind bridge: fda00000-fdafffff
Prefetchable memory behind bridge: 00000000fdf00000-00000000fdffffff
Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
BridgeCtl: Parity- SERR- NoISA- VGA- MAbort- >Reset- FastB2B-
PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
Capabilities: [50] Power Management version 3
Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
Status: D0 PME-Enable- DSel=0 DScale=0 PME-
Capabilities: [58] Express (v2) Root Port (Slot-), MSI 00
DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s <64ns, L1 <1us
ExtTag+ RBE+ FLReset-
DevCtl: Report errors: Correctable+ Non-Fatal+ Fatal+ Unsupported+
RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+
MaxPayload 128 bytes, MaxReadReq 128 bytes
DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
LnkCap: Port #1, Speed 5GT/s, Width x2, ASPM L0s L1, Latency L0 <64ns, L1 <1us
ClockPM- Suprise- LLActRep+ BwNot+
LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- Retrain- CommClk+
ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
LnkSta: Speed 2.5GT/s, Width x1, TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna- CRSVisible-
RootCap: CRSVisible-
RootSta: PME ReqID 0000, PMEStatus- PMEPending-
Capabilities: [a0] Message Signalled Interrupts: Mask- 64bit- Queue=0/0 Enable+
Address: fee0300c Data: 41d9
Capabilities: [b0] Subsystem: ATI Technologies Inc Device 5956
Capabilities: [b8] HyperTransport: MSI Mapping Enable+ Fixed+
Capabilities: [100] Vendor Specific Information <?>
Capabilities: [110] Virtual Channel <?>
Kernel driver in use: pcieport-driver
Kernel modules: shpchp

00:0b.0 PCI bridge: ATI Technologies Inc RD790 PCI to PCI bridge (external gfx1 port A) (prog-if 00 [Normal decode])
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0, Cache Line Size: 32 bytes
Bus: primary=00, secondary=06, subordinate=06, sec-latency=0
I/O behind bridge: 0000d000-0000dfff
Memory behind bridge: f9000000-fbffffff
Prefetchable memory behind bridge: 00000000d0000000-00000000dfffffff
Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
BridgeCtl: Parity- SERR- NoISA- VGA+ MAbort- >Reset- FastB2B-
PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
Capabilities: [50] Power Management version 3
Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
Status: D0 PME-Enable- DSel=0 DScale=0 PME-
Capabilities: [58] Express (v2) Root Port (Slot-), MSI 00
DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s <64ns, L1 <1us
ExtTag+ RBE+ FLReset-
DevCtl: Report errors: Correctable+ Non-Fatal+ Fatal+ Unsupported+
RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+
MaxPayload 128 bytes, MaxReadReq 128 bytes
DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
LnkCap: Port #0, Speed 5GT/s, Width x16, ASPM L0s L1, Latency L0 <64ns, L1 <1us
ClockPM- Suprise- LLActRep+ BwNot+
LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- Retrain- CommClk+
ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
LnkSta: Speed 2.5GT/s, Width x16, TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna- CRSVisible-
RootCap: CRSVisible-
RootSta: PME ReqID 0000, PMEStatus- PMEPending-
Capabilities: [a0] Message Signalled Interrupts: Mask- 64bit- Queue=0/0 Enable+
Address: fee0300c Data: 41e1
Capabilities: [b0] Subsystem: ATI Technologies Inc Device 5956
Capabilities: [b8] HyperTransport: MSI Mapping Enable+ Fixed+
Capabilities: [100] Vendor Specific Information <?>
Capabilities: [110] Virtual Channel <?>
Kernel driver in use: pcieport-driver
Kernel modules: shpchp

00:11.0 SATA controller: ATI Technologies Inc SB700/SB800 SATA Controller [IDE mode] (prog-if 01 [AHCI 1.0])
Subsystem: ATI Technologies Inc SB700/SB800 SATA Controller [IDE mode]
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
Status: Cap+ 66MHz+ UDF- FastB2B- ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 64
Interrupt: pin A routed to IRQ 22
Region 0: I/O ports at ff00 [size=8]
Region 1: I/O ports at fe00 [size=4]
Region 2: I/O ports at fd00 [size=8]
Region 3: I/O ports at fc00 [size=4]
Region 4: I/O ports at fb00 [size=16]
Region 5: Memory at fe02f000 (32-bit, non-prefetchable) [size=1K]
Capabilities: [60] Power Management version 2
Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
Status: D0 PME-Enable- DSel=0 DScale=0 PME-
Capabilities: [70] SATA HBA <?>
Kernel driver in use: ahci
Kernel modules: ahci

00:14.0 SMBus: ATI Technologies Inc SBx00 SMBus Controller (rev 3a)
Subsystem: ATI Technologies Inc SBx00 SMBus Controller
Control: I/O+ Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
Status: Cap+ 66MHz+ UDF- FastB2B- ParErr- DEVSEL=medium >TAbort- <TAbort+ <MAbort- >SERR+ <PERR- INTx-
Capabilities: [b0] HyperTransport: MSI Mapping Enable- Fixed+
Kernel driver in use: piix4_smbus
Kernel modules: i2c-piix4

00:14.1 IDE interface: ATI Technologies Inc SB700/SB800 IDE Controller (prog-if 8a [Master SecP PriP])
Subsystem: ATI Technologies Inc SB700/SB800 IDE Controller
Control: I/O+ Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
Status: Cap+ 66MHz+ UDF- FastB2B- ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 64
Interrupt: pin A routed to IRQ 16
Region 0: I/O ports at 01f0 [size=8]
Region 1: I/O ports at 03f4 [size=1]
Region 2: I/O ports at 0170 [size=8]
Region 3: I/O ports at 0374 [size=1]
Region 4: I/O ports at fa00 [size=16]
Capabilities: [70] Message Signalled Interrupts: Mask- 64bit- Queue=0/1 Enable-
Address: 00000000 Data: 0000
Kernel driver in use: pata_atiixp
Kernel modules: atiixp, pata_atiixp

00:14.2 Audio device: ATI Technologies Inc SBx00 Azalia
Subsystem: ATI Technologies Inc Device 4384
Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=slow >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 64, Cache Line Size: 32 bytes
Interrupt: pin ? routed to IRQ 16
Region 0: Memory at fe024000 (64-bit, non-prefetchable) [size=16K]
Capabilities: [50] Power Management version 2
Flags: PMEClk- DSI- D1- D2- AuxCurrent=55mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
Status: D0 PME-Enable- DSel=0 DScale=0 PME-
Kernel driver in use: HDA Intel
Kernel modules: snd-hda-intel

00:14.3 ISA bridge: ATI Technologies Inc SB700/SB800 LPC host controller
Subsystem: ATI Technologies Inc Device 4383
Control: I/O+ Mem+ BusMaster+ SpecCycle+ MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
Status: Cap- 66MHz+ UDF- FastB2B- ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0

00:14.4 PCI bridge: ATI Technologies Inc SBx00 PCI to PCI Bridge (prog-if 01 [Subtractive decode])
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop+ ParErr- Stepping- SERR- FastB2B- DisINTx-
Status: Cap- 66MHz+ UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 64
Bus: primary=00, secondary=07, subordinate=07, sec-latency=64
I/O behind bridge: 0000a000-0000afff
Memory behind bridge: fc800000-fd7fffff
Prefetchable memory behind bridge: fdb00000-fdbfffff
Secondary status: 66MHz- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort+ <SERR- <PERR-
BridgeCtl: Parity- SERR- NoISA- VGA- MAbort- >Reset- FastB2B-
PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-

00:18.0 Host bridge: Advanced Micro Devices [AMD] Family 10h [Opteron, Athlon64, Sempron] HyperTransport Configuration
Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Capabilities: [80] HyperTransport: Host or Secondary Interface
Command: WarmRst+ DblEnd- DevNum=0 ChainSide- HostHide+ Slave- <EOCErr- DUL-
Link Control: CFlE- CST- CFE- <LkFail- Init+ EOC- TXO- <CRCErr=0 IsocEn- LSEn+ ExtCTL- 64b-
Link Config: MLWI=16bit DwFcIn- MLWO=16bit DwFcOut- LWI=16bit DwFcInEn- LWO=16bit DwFcOutEn-
Revision ID: 3.00
Link Frequency: [b]
Link Error: <Prot- <Ovfl- <EOC- CTLTm-
Link Frequency Capability: 200MHz+ 300MHz- 400MHz+ 500MHz- 600MHz+ 800MHz+ 1.0GHz+ 1.2GHz+ 1.4GHz- 1.6GHz- Vend-
Feature Capability: IsocFC+ LDTSTOP+ CRCTM- ECTLT- 64bA+ UIDRD- ExtRS- UCnfE-

00:18.1 Host bridge: Advanced Micro Devices [AMD] Family 10h [Opteron, Athlon64, Sempron] Address Map
Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-

00:18.2 Host bridge: Advanced Micro Devices [AMD] Family 10h [Opteron, Athlon64, Sempron] DRAM Controller
Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-

00:18.3 Host bridge: Advanced Micro Devices [AMD] Family 10h [Opteron, Athlon64, Sempron] Miscellaneous Control
Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Capabilities: [f0] Secure device <?>

00:18.4 Host bridge: Advanced Micro Devices [AMD] Family 10h [Opteron, Athlon64, Sempron] Link Control
Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-

01:00.0 PCI bridge: Intel Corporation Device 032d (rev 09) (prog-if 00 [Normal decode])
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0, Cache Line Size: 32 bytes
Bus: primary=01, secondary=02, subordinate=02, sec-latency=64
I/O behind bridge: 0000c000-0000cfff
Memory behind bridge: b0000000-cfffffff
Prefetchable memory behind bridge: 00000000fde00000-00000000fdefffff
Secondary status: 66MHz+ FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- <SERR- <PERR-
BridgeCtl: Parity- SERR- NoISA- VGA- MAbort- >Reset- FastB2B-
PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
Capabilities: [44] Express (v1) PCI/PCI-X Bridge, MSI 00
DevCap: MaxPayload 256 bytes, PhantFunc 0, Latency L0s <64ns, L1 <1us
ExtTag- AttnBtn- AttnInd- PwrInd- RBE- FLReset-
DevCtl: Report errors: Correctable- Non-Fatal- Fatal+ Unsupported-
RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop- BrConfRtry-
MaxPayload 128 bytes, MaxReadReq 512 bytes
DevSta: CorrErr- UncorrErr+ FatalErr- UnsuppReq+ AuxPwr- TransPend-
LnkCap: Port #0, Speed 2.5GT/s, Width x8, ASPM L0s, Latency L0 <256ns, L1 unlimited
ClockPM- Suprise- LLActRep- BwNot-
LnkCtl: ASPM Disabled; Disabled- Retrain- CommClk+
ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
LnkSta: Speed 2.5GT/s, Width x4, TrErr- Train- SlotClk- DLActive- BWMgmt- ABWMgmt-
Capabilities: [5c] Message Signalled Interrupts: Mask- 64bit+ Queue=0/0 Enable-
Address: 0000000000000000 Data: 0000
Capabilities: [6c] Power Management version 2
Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
Status: D0 PME-Enable- DSel=0 DScale=0 PME-
Capabilities: [d8] PCI-X bridge device
Secondary Status: 64bit+ 133MHz+ SCD- USC- SCO- SRD- Freq=conv
Status: Dev=01:00.0 64bit- 133MHz- SCD- USC- SCO- SRD-
Upstream: Capacity=65535 CommitmentLimit=65535
Downstream: Capacity=65535 CommitmentLimit=65535
Capabilities: [100] Advanced Error Reporting <?>
Capabilities: [300] Power Budgeting <?>
Kernel modules: shpchp

02:0c.0 Network controller: VMIC Device 5565
Subsystem: PLD APPLICATIONS Device 0080
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
Status: Cap+ 66MHz+ UDF- FastB2B- ParErr- DEVSEL=slow >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 64
Interrupt: pin A routed to IRQ 18
Region 0: Memory at cffff000 (32-bit, non-prefetchable) [size=512]
Region 1: I/O ports at ce00 [size=256]
Region 2: Memory at cfffe000 (32-bit, non-prefetchable) [size=64]
Region 3: Memory at b0000000 (32-bit, non-prefetchable) [size=256M]
Capabilities: [78] Power Management version 3
Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
Status: D0 PME-Enable- DSel=0 DScale=0 PME-

03:00.0 PCI bridge: Texas Instruments XIO2000(A)/XIO2200(A) PCI Express-to-PCI Bridge (rev 03) (prog-if 00 [Normal decode])
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0, Cache Line Size: 32 bytes
Bus: primary=03, secondary=04, subordinate=04, sec-latency=64
I/O behind bridge: 0000b000-0000bfff
Memory behind bridge: fdd00000-fddfffff
Prefetchable memory behind bridge: 00000000fdc00000-00000000fdcfffff
Secondary status: 66MHz- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- <SERR- <PERR-
BridgeCtl: Parity- SERR- NoISA- VGA- MAbort- >Reset- FastB2B-
PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
Capabilities: [50] Power Management version 2
Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
Status: D0 PME-Enable- DSel=0 DScale=0 PME-
Bridge: PM- B3+
Capabilities: [60] Message Signalled Interrupts: Mask- 64bit+ Queue=0/4 Enable-
Address: 0000000000000000 Data: 0000
Capabilities: [80] Subsystem: Gammagraphx, Inc. Device 0000
Capabilities: [90] Express (v1) PCI/PCI-X Bridge, MSI 00
DevCap: MaxPayload 512 bytes, PhantFunc 0, Latency L0s <4us, L1 <64us
ExtTag- AttnBtn- AttnInd- PwrInd- RBE- FLReset-
DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop+ BrConfRtry-
MaxPayload 128 bytes, MaxReadReq 512 bytes
DevSta: CorrErr- UncorrErr+ FatalErr- UnsuppReq+ AuxPwr+ TransPend-
LnkCap: Port #0, Speed 2.5GT/s, Width x1, ASPM L0s L1, Latency L0 <512ns, L1 <16us
ClockPM- Suprise- LLActRep- BwNot-
LnkCtl: ASPM Disabled; Disabled- Retrain- CommClk+
ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
LnkSta: Speed 2.5GT/s, Width x1, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
Capabilities: [100] Advanced Error Reporting <?>
Kernel modules: shpchp

05:00.0 Ethernet controller: Marvell Technology Group Ltd. 88E8056 PCI-E Gigabit Ethernet Controller (rev 12)
Subsystem: DFI Inc Device 1102
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0, Cache Line Size: 32 bytes
Interrupt: pin A routed to IRQ 219
Region 0: Memory at fdafc000 (64-bit, non-prefetchable) [size=16K]
Region 2: I/O ports at ee00 [size=256]
[virtual] Expansion ROM at fdf00000 [disabled] [size=128K]
Capabilities: [48] Power Management version 3
Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=0mA PME(D0+,D1+,D2+,D3hot+,D3cold+)
Status: D0 PME-Enable- DSel=0 DScale=1 PME-
Capabilities: [50] Vital Product Data <?>
Capabilities: [5c] Message Signalled Interrupts: Mask- 64bit+ Queue=0/0 Enable+
Address: 00000000fee0f00c Data: 4132
Capabilities: [e0] Express (v1) Legacy Endpoint, MSI 00
DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s unlimited, L1 unlimited
ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset-
DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop-
MaxPayload 128 bytes, MaxReadReq 512 bytes
DevSta: CorrErr+ UncorrErr- FatalErr- UnsuppReq+ AuxPwr+ TransPend-
LnkCap: Port #1, Speed 2.5GT/s, Width x1, ASPM L0s L1, Latency L0 <256ns, L1 unlimited
ClockPM+ Suprise- LLActRep- BwNot-
LnkCtl: ASPM Disabled; RCB 128 bytes Disabled- Retrain- CommClk+
ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
LnkSta: Speed 2.5GT/s, Width x1, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
Capabilities: [100] Advanced Error Reporting <?>
Kernel driver in use: sky2
Kernel modules: sky2

06:00.0 VGA compatible controller: nVidia Corporation G70 [GeForce 7800 GT] (rev a1) (prog-if 00 [VGA controller])
Subsystem: eVga.com. Corp. Device c517
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0
Interrupt: pin A routed to IRQ 19
Region 0: Memory at f9000000 (32-bit, non-prefetchable) [size=16M]
Region 1: Memory at d0000000 (64-bit, prefetchable) [size=256M]
Region 3: Memory at fa000000 (64-bit, non-prefetchable) [size=16M]
Region 5: I/O ports at df00 [size=128]
[virtual] Expansion ROM at fb000000 [disabled] [size=128K]
Capabilities: [60] Power Management version 2
Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
Status: D0 PME-Enable- DSel=0 DScale=0 PME-
Capabilities: [68] Message Signalled Interrupts: Mask- 64bit+ Queue=0/0 Enable-
Address: 0000000000000000 Data: 0000
Capabilities: [78] Express (v1) Endpoint, MSI 00
DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s <512ns, L1 <4us
ExtTag- AttnBtn- AttnInd- PwrInd- RBE- FLReset-
DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+
MaxPayload 128 bytes, MaxReadReq 512 bytes
DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
LnkCap: Port #0, Speed 2.5GT/s, Width x16, ASPM L0s L1, Latency L0 <256ns, L1 <4us
ClockPM- Suprise- LLActRep- BwNot-
LnkCtl: ASPM Disabled; RCB 128 bytes Disabled- Retrain- CommClk+
ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
LnkSta: Speed 2.5GT/s, Width x16, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
Capabilities: [100] Virtual Channel <?>
Capabilities: [128] Power Budgeting <?>
Kernel driver in use: nvidia
Kernel modules: nvidiafb, nvidia

07:05.0 Class ff00: Compro Computer Services, Inc. Device 4610 (rev 03)
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 64
Interrupt: pin A routed to IRQ 20
Region 0: I/O ports at af00 [size=64]
Region 1: Memory at fd7ff000 (32-bit, non-prefetchable) [size=4K]

07:07.0 Intelligent controller [0e80]: PLX Technology, Inc. Device 0480 (rev 52)
Subsystem: Compro Computer Services, Inc. Device 4620
Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 64, Cache Line Size: 32 bytes
Interrupt: pin A routed to IRQ 22
BIST result: 00
Region 0: Memory at fd7fe000 (32-bit, non-prefetchable) [size=1K]
Region 1: Memory at fc800000 (32-bit, non-prefetchable) [size=8M]
Region 2: Memory at fd7fd000 (32-bit, non-prefetchable) [size=16]
Capabilities: [40] #00 [0000]
Capabilities: [54] #00 [0080]
Capabilities: [58] Vital Product Data <?>



Attachments:
d.lspci-vvv (25.90 kB)

2009-09-11 14:59:46

by Richard B. Johnson

Subject: Re: problems doing direct dma from a pci device to pci-e device

----- Original Message -----
From: "Mark Hounschell" <[email protected]>
To: <[email protected]>
Cc: "Mark Hounschell" <[email protected]>; <[email protected]>
Sent: Friday, September 11, 2009 10:13 AM
Subject: problems doing direct dma from a pci device to pci-e device


>I know this is really just a PCI issue but have CC'd LKML just in case.
>Sorry LKML for the noise.
>
> I have a situation where a card on a regular PCI bus (Compro gpiohsd)
> does DMAs directly into another PCI card's memory (VMIC-5565 reflective
> memory) living on another PCI bus. These two cards are sometimes
> separated by many bridges, expansion racks, etc. We've been doing this
> forever. No problem (mostly).
>
> I now have an AM3 based DFI DK 790FXB-M3H5 motherboard. This board has
> 3 regular PCI slots and 3 PCI-E (x16) slots. I also have a PCI-E (x4)
> version of the VMIC-5565 reflective memory card in one of the PCI-E
> slots and our gpiohsd card in one of the regular PCI slots. All on the
> motherboard. No expansion slots being used. However, I cannot get data
> from our gpiohsd into the PCI-E VMIC-5565 card's memory. I can
> certainly get the data there from a userland buffer, no problem. Just
> not from one card to the other directly. Oh, and when I put the regular
> PCI version of the VMIC into one of the regular PCI slots, everything
> works as expected. They are then both on the same PCI bus and no
> bridges are involved, though.
>

[SNIPPED...]

Please check your software to make sure that after the DMA is presumed to
have completed, you READ something from the destination PCI bus address!
With so many PCI writes queued, they may all be stuck in the various
hardware FIFOs, and the sure way to force the writes to complete is to
perform a read.
Cheers,
Richard B. Johnson
http://Route495Software.com/
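
For reference, the read-back Richard describes typically looks like this in
a Linux driver. This is a minimal sketch, assuming the driver has already
ioremap()ed the destination card's BAR; "bar_base" and the function name are
hypothetical, not the real VMIC or gpiohsd layout.

    #include <linux/io.h>

    /*
     * PCI ordering rules guarantee that a read completion pushes any
     * posted writes queued ahead of it out of the bridge FIFOs, so a
     * single read from the destination window flushes the DMA data.
     */
    static void flush_posted_writes(void __iomem *bar_base)
    {
        u32 val;

        val = readl(bar_base);  /* read-back flushes posted writes */
        (void)val;              /* the value itself is not needed */
    }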

2009-09-11 14:45:57

by Alan

Subject: Re: problems doing direct dma from a pci device to pci-e device


> I now have an AM3 based DFI DK 790FXB-M3H5 motherboard. This board has 3 regular
> PCI slots and 3 PCI-E (x16) slots. I also have a PCI-E (x4) version of the VMIC-5565
> reflective memory card in one of the PCI-E slots and our gpiohsd card in one of the regular
> PCI slots. All on the motherboard. No expansion slots being used. However, I cannot get
> data from our gpiohsd into the PCI-E VMIC-5565 card's memory. I can certainly get the data there
> from a userland buffer, no problem. Just not from one card to the other directly. Oh, and when
> I put the regular PCI version of the VMIC into one of the regular PCI slots, everything works
> as expected. They are then both on the same PCI bus and no bridges are involved, though.

Have you verified with the vendor that such DMA works properly? There is
a long history of boards where some device-to-device DMA exploded or
vanished. The arrival of PCI capture cards doing direct-to-video DMA
(e.g. the BT848) cleaned the world up, but I wouldn't be surprised if
this has recurred somewhere since then, and nobody really noticed
because nobody runs such an unusual config.

Also, does the board have a true IOMMU on the PCI-E side of the system?
It's not a chipset I know.

2009-09-11 14:50:13

by Mark Hounschell

Subject: Re: problems doing direct dma from a pci device to pci-e device

Richard B. Johnson wrote:
> ----- Original Message ----- From: "Mark Hounschell" <[email protected]>
> To: <[email protected]>
> Cc: "Mark Hounschell" <[email protected]>; <[email protected]>
> Sent: Friday, September 11, 2009 10:13 AM
> Subject: problems doing direct dma from a pci device to pci-e device
>
>
>> I know this is really just a PCI issue but have CC'd LKML just in
>> case. Sorry LKML for the noise.
>>
>> I have a situation where a card on a regular PCI bus (Compro gpiohsd)
>> does DMAs directly into another PCI card's memory (VMIC-5565
>> reflective memory) living on another PCI bus. These two cards are
>> sometimes separated by many bridges, expansion racks, etc. We've been
>> doing this forever. No problem (mostly).
>>
>> I now have an AM3 based DFI DK 790FXB-M3H5 motherboard. This board
>> has 3 regular PCI slots and 3 PCI-E (x16) slots. I also have a PCI-E
>> (x4) version of the VMIC-5565 reflective memory card in one of the
>> PCI-E slots and our gpiohsd card in one of the regular PCI slots. All
>> on the motherboard. No expansion slots being used. However, I cannot
>> get data from our gpiohsd into the PCI-E VMIC-5565 card's memory. I
>> can certainly get the data there from a userland buffer, no problem.
>> Just not from one card to the other directly. Oh, and when I put the
>> regular PCI version of the VMIC into one of the regular PCI slots,
>> everything works as expected. They are then both on the same PCI bus
>> and no bridges are involved, though.
>>
>
> [SNIPPED...]
>
> Please check your software to make sure that after the DMA is presumed
> to have completed, you READ something from the destination PCI bus
> address! With so many PCI writes queued, they may all be stuck in the
> various hardware FIFOs, and the sure way to force the writes to
> complete is to perform a read.
> Cheers,
> Richard B. Johnson
> http://Route495Software.com/
>

Thanks, that's actually how I know the data never gets to its destination: I read it.
The data gets lost somewhere.

Regards
Mark

2009-09-11 15:25:26

by Mark Hounschell

Subject: Re: problems doing direct dma from a pci device to pci-e device

Alan Cox wrote:
>> I now have an AM3 based DFI DK 790FXB-M3H5 motherboard. This board has 3 regular
>> PCI slots and 3 PCI-E (x16) slots. I also have a PCI-E (x4) version of the VMIC-5565
>> reflective memory card in one of the PCI-E slots and our gpiohsd card in one of the regular
>> PCI slots. All on the motherboard. No expansion slots being used. However, I cannot get
>> data from our gpiohsd into the PCI-E VMIC-5565 card's memory. I can certainly get the data there
>> from a userland buffer, no problem. Just not from one card to the other directly. Oh, and when
>> I put the regular PCI version of the VMIC into one of the regular PCI slots, everything works
>> as expected. They are then both on the same PCI bus and no bridges are involved, though.
>
> Have you verified with the vendor that such DMA works properly? There is
> a long history of boards where some device-to-device DMA exploded or
> vanished. The arrival of PCI capture cards doing direct-to-video DMA
> (e.g. the BT848) cleaned the world up, but I wouldn't be surprised if
> this has recurred somewhere since then, and nobody really noticed
> because nobody runs such an unusual config.
>

I have not made an inquiry with the vendor yet. I'm pretty sure I have done
this with this very card in another motherboard. I guess I need to dig that
one up and verify it before I go much further.

But it's much like you describe above. The data seems to just vanish. I can't
see any way to use my VMetro analyzer to find out where it's going, either.

> Also, does the board have a true IOMMU on the PCI-E side of the system?
> It's not a chipset I know.
>

That I don't know for sure, but I thought they all did.

I'll verify that the VMIC card is in fact capable of this in that other MB
before I do anything else.

Thanks
Mark

2009-09-11 15:59:54

by Richard B. Johnson

Subject: Re: problems doing direct dma from a pci device to pci-e device

----- Original Message -----
From: "Mark Hounschell" <[email protected]>
To: "Alan Cox" <[email protected]>
Cc: <[email protected]>; "Mark Hounschell" <[email protected]>;
<[email protected]>
Sent: Friday, September 11, 2009 11:25 AM
Subject: Re: problems doing direct dma from a pci device to pci-e device


> Alan Cox wrote:
>>> I now have an AM3 based DFI DK 790FXB-M3H5 motherboard. This board
>>> has 3 regular PCI slots and 3 PCI-E (x16) slots. I also have a PCI-E
>>> (x4) version of the VMIC-5565 reflective memory card in one of the
>>> PCI-E slots and our gpiohsd card in one of the regular PCI slots.
>>> All on the motherboard. No expansion slots being used. However, I
>>> cannot get data from our gpiohsd into the PCI-E VMIC-5565 card's
>>> memory. I can certainly get the data there from a userland buffer,
>>> no problem. Just not from one card to the other directly. Oh, and
>>> when I put the regular PCI version of the VMIC into one of the
>>> regular PCI slots, everything works as expected. They are then both
>>> on the same PCI bus and no bridges are involved, though.

The read I mentioned was a read immediately following the DMA operation.
Some PCI (PCI-X using a HyperTransport serial bus) implementations
deliberately time out to prevent a hung system. The result that I've seen
was no data anywhere. It just vanished!

Reading the destination a few milliseconds (or seconds) later won't prevent
this timeout!

Cheers,
Richard B. Johnson
http://Route495Software.com/

2009-09-11 19:42:13

by Mark Hounschell

Subject: Re: problems doing direct dma from a pci device to pci-e device

Richard B. Johnson wrote:
> ----- Original Message ----- From: "Mark Hounschell" <[email protected]>
> To: "Alan Cox" <[email protected]>
> Cc: <[email protected]>; "Mark Hounschell" <[email protected]>;
> <[email protected]>
> Sent: Friday, September 11, 2009 11:25 AM
> Subject: Re: problems doing direct dma from a pci device to pci-e device
>
>
>> Alan Cox wrote:
>>>> I now have an AM3 based DFI DK 790FXB-M3H5 motherboard. This board
>>>> has 3 regular PCI slots and 3 PCI-E (x16) slots. I also have a
>>>> PCI-E (x4) version of the VMIC-5565 reflective memory card in one
>>>> of the PCI-E slots and our gpiohsd card in one of the regular PCI
>>>> slots. All on the motherboard. No expansion slots being used.
>>>> However, I cannot get data from our gpiohsd into the PCI-E
>>>> VMIC-5565 card's memory. I can certainly get the data there from a
>>>> userland buffer, no problem. Just not from one card to the other
>>>> directly. Oh, and when I put the regular PCI version of the VMIC
>>>> into one of the regular PCI slots, everything works as expected.
>>>> They are then both on the same PCI bus and no bridges are involved,
>>>> though.
>
> The read I mentioned was a read immediately following the DMA operation.
> Some PCI (PCI-X using a HyperTransport serial bus) implementations
> deliberately time out to prevent a hung system. The result that I've
> seen was no data anywhere. It just vanished!
>
> Reading the destination a few milliseconds (or seconds) later won't
> prevent this timeout!
>

How about a couple hundred usecs? Along that line, I can make it do DMA reads
from that VMIC card also. All I get is ones. It's strange that in this case my
gpiohsd isn't reporting any kind of PCI bus error in its status register.

Thanks
Mark
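
For what it's worth, an all-ones read is what a PCI master conventionally
sees when a transaction dies with a master abort, and the abort flags live
in the standard configuration status register. A minimal sketch of checking
them from a Linux driver, assuming "pdev" is the gpiohsd's struct pci_dev:

    #include <linux/pci.h>

    static void check_abort_bits(struct pci_dev *pdev)
    {
        u16 status;

        /* Standard PCI status register; bit names from <linux/pci_regs.h> */
        pci_read_config_word(pdev, PCI_STATUS, &status);
        if (status & PCI_STATUS_REC_MASTER_ABORT)
            dev_info(&pdev->dev, "received master abort\n");
        if (status & PCI_STATUS_REC_TARGET_ABORT)
            dev_info(&pdev->dev, "received target abort\n");
    }

Note these bits only latch on the function that issued the transaction; an
abort taken by an intermediate bridge shows up in that bridge's secondary
status register instead, which may be why the card's own status looks clean.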

2009-09-11 20:04:00

by Mark Hounschell

Subject: Re: problems doing direct dma from a pci device to pci-e device

Mark Hounschell wrote:
> Alan Cox wrote:
>>> I now have an AM3 based DFI DK 790FXB-M3H5 motherboard. This board has 3 regular
>>> PCI slots and 3 PCI-E (x16) slots. I also have a PCI-E (x4) version of the VMIC-5565
>>> reflective memory card in one of the PCI-E slots and our gpiohsd card in one of the regular
>>> PCI slots. All on the motherboard. No expansion slots being used. However, I cannot get
>>> data from our gpiohsd into the PCI-E VMIC-5565 card's memory. I can certainly get the data there
>>> from a userland buffer, no problem. Just not from one card to the other directly. Oh, and when
>>> I put the regular PCI version of the VMIC into one of the regular PCI slots, everything works
>>> as expected. They are then both on the same PCI bus and no bridges are involved, though.
>> Have you verified with the vendor that such DMA works properly? There is
>> a long history of boards where some device-to-device DMA exploded or
>> vanished. The arrival of PCI capture cards doing direct-to-video DMA
>> (e.g. the BT848) cleaned the world up, but I wouldn't be surprised if
>> this has recurred somewhere since then, and nobody really noticed
>> because nobody runs such an unusual config.
>>
>
> I have not made an inquiry with the vendor yet. I'm pretty sure I have done
> this with this very card in another motherboard. I guess I need to dig that
> one up and verify it before I go much further.
>
> But it's much like you describe above. The data seems to just vanish. I can't
> see any way to use my VMetro analyzer to find out where it's going, either.
>
>> Also, does the board have a true IOMMU on the PCI-E side of the system?
>> It's not a chipset I know.
>>
>
> That I don't know for sure, but I thought they all did.
>

A quick read-up on the IOMMU makes me think that if this board does use one,
that could certainly cause my problem. I'm using bus addresses, obtained from
virt2bus, plugged into page tables that live on the gpiohsd card. They do
match what lspci says they are. But from what I read, an IOMMU could still be
causing me grief if one is being used.

> I'll verify that the VMIC card is in fact capable of this in that other MB
> before I do anything else.

I still haven't done this yet. Maybe the MB this all worked on didn't have an
IOMMU, though?

I'll investigate further.

Regards
Mark
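
For device-to-device transfers like this, the address programmed into the
source card is normally the bus address of the target card's BAR, which on
x86 without an IOMMU is the same address lspci reports. A minimal sketch of
how a driver would fetch it, assuming "vmic_dev" is the hypothetical
struct pci_dev of the PCI-E VMIC card:

    #include <linux/pci.h>

    static resource_size_t vmic_window_bus_addr(struct pci_dev *vmic_dev)
    {
        /*
         * BAR 3 is the 256 MB memory window in the lspci dump above.
         * With no IOMMU, this resource address is also the bus address
         * a peer device can DMA to; with an IOMMU translating
         * peer-to-peer traffic, that equivalence breaks down.
         */
        return pci_resource_start(vmic_dev, 3);
    }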

2009-09-14 05:36:10

by Grant Grundler

Subject: Re: problems doing direct dma from a pci device to pci-e device

On Fri, Sep 11, 2009 at 04:04:00PM -0400, Mark Hounschell wrote:
> A quick readup on the IOMMU makes me think that if it does use one, that
> could certainly cause my problem. I'm using bus addresses plugged into
> page tables that live in the gpiohsd card obtained from virt2bus.

If you haven't yet, take a look at Documentation/DMA-API.txt and
DMA-mapping.txt. virt2bus has been deprecated for DMA since the 2.4 releases.

grant
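
The DMA-API documents Grant cites boil down to mapping buffers through the
device rather than translating virtual addresses by hand. A minimal sketch,
assuming a hypothetical host-memory buffer "buf" that the gpiohsd will DMA
into or out of:

    #include <linux/pci.h>
    #include <linux/dma-mapping.h>

    static dma_addr_t map_buffer(struct pci_dev *gpio_dev, void *buf, size_t len)
    {
        dma_addr_t handle;

        /* Returns an address valid for this device, IOMMU or not */
        handle = dma_map_single(&gpio_dev->dev, buf, len, DMA_BIDIRECTIONAL);
        if (dma_mapping_error(&gpio_dev->dev, handle))
            return 0;  /* 0 used as an error sentinel in this sketch */

        return handle;
    }

Note this covers host memory only; it does not by itself produce a usable
peer BAR address for card-to-card DMA, which is the part the API (as of this
2009 thread) leaves to the driver.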

2009-09-14 08:03:17

by Mark Hounschell

Subject: Re: problems doing direct dma from a pci device to pci-e device

Grant Grundler wrote:
> On Fri, Sep 11, 2009 at 04:04:00PM -0400, Mark Hounschell wrote:
>> A quick readup on the IOMMU makes me think that if it does use one, that
>> could certainly cause my problem. I'm using bus addresses plugged into
>> page tables that live in the gpiohsd card obtained from virt2bus.
>
> If you haven't yet, take a look at Documentation/DMA-API.txt and
> DMA-mapping.txt. virt2bus has been deprecated for DMA since the 2.4 releases.
>
> grant

It still gives me a bus address just fine. Actually, when the gpiohsd is
DMAing into another card I don't use it; I misspoke. The VMIC card's library
API gives me its memory's bus address.

Mark

2009-09-15 11:23:53

by Mark Hounschell

Subject: Re: problems doing direct dma from a pci device to pci-e device

Alan Cox wrote:
>> I now have an AM3 based DFI DK 790FXB-M3H5 motherboard. This board has 3 regular
>> PCI slots and 3 PCI-E (x16) slots. I also have a PCI-E (x4) version of the VMIC-5565
>> reflective memory card in one of the PCI-E slots and our gpiohsd card in one of the regular
>> PCI slots. All on the motherboard. No expansion slots being used. However, I cannot get
>> data from our gpiohsd into the PCI-E VMIC-5565 card's memory. I can certainly get the data there
>> from a userland buffer, no problem. Just not from one card to the other directly. Oh, and when
>> I put the regular PCI version of the VMIC into one of the regular PCI slots, everything works
>> as expected. They are then both on the same PCI bus and no bridges are involved, though.
>
> Have you verified with the vendor that such DMA works properly? There is
> a long history of boards where some device-to-device DMA exploded or
> vanished. The arrival of PCI capture cards doing direct-to-video DMA
> (e.g. the BT848) cleaned the world up, but I wouldn't be surprised if
> this has recurred somewhere since then, and nobody really noticed
> because nobody runs such an unusual config.
>

I have now verified that the VMIC card does support this type of I/O. I dug
up the MB I was using during my original testing. It has almost the same slot
configuration but an nVidia chipset as opposed to AMD. It does in fact work
fine using that MB with the nVidia chipset. I have three different boards
with AMD chipsets, and none of them work.

> Also, does the board have a true IOMMU on the PCI-E side of the system?
> It's not a chipset I know.
>

I'm not sure what you mean by "a true" IOMMU. AFAICT the IOMMU is now called
"Virtualization Technology", or works in conjunction with it, and almost any
new MB has it. "Virtualization" can be enabled or disabled via a BIOS
setting, so I assume that when "Virtualization" is disabled the IOMMU is
disabled. Would that be a correct assumption? I also assume, from the AMD
spec, that when the IOMMU is disabled addresses pass through it unaltered?

The MB on which all this works, works with the BIOS "Virtualization" setting
either enabled or disabled. That was unexpected and I don't really understand
it. Likewise, the MBs it doesn't work on don't work with it either enabled or
disabled. So I am still at a complete loss as to what is really happening.

I suppose the BIOS setting in each of the MBs could be broken: the one that
works is actually disabled even when enabled in the BIOS, and the ones that
don't work are actually enabled even when disabled in the BIOS? Just a guess.

I guess with some time I might be able to figure out how to probe the IOMMU
off the bridges in question to see whether they are on or off.

I was hoping the lspci data I sent would show something obvious to the experts
unrelated to the IOMMU.

Thanks again in advance for any additional pointers.

Regards,
Mark
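
On the probing question: the AMD IOMMU shows up as a PCI capability on the
northbridge (likely the "Secure device" capability at offset 0xf0 on 00:18.3
in the dump above), and per the AMD IOMMU spec the dword at capability
offset +4 (IOMMU Base Address Low) carries the enable bit in bit 0. A
userspace sketch along those lines, with both the offset and the bit
position treated as assumptions to check against the chipset docs:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        const char *path = "/sys/bus/pci/devices/0000:00:18.3/config";
        uint32_t base_lo = 0;
        FILE *f = fopen(path, "rb");

        if (!f) {
            perror(path);
            return 1;
        }
        /* dword at capability offset 0xf0 + 4: IOMMU Base Address Low */
        if (fseek(f, 0xf0 + 4, SEEK_SET) != 0 ||
            fread(&base_lo, sizeof(base_lo), 1, f) != 1) {
            fprintf(stderr, "short read of config space\n");
            fclose(f);
            return 1;
        }
        fclose(f);

        printf("IOMMU enable bit: %u\n", base_lo & 1);
        return 0;
    }

(Reading config space past the first 64 bytes through sysfs generally
requires root.)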