2001-02-21 17:13:00

by Christoph Baumann

Subject: Problem with DMA buffer (in 2.2.15)

Hello!

I have the following problem.
A user process wants to talk to a PCI board via DMA. The first step I did was
to resolv the physical addresses of the data in user space. This works fine
when writing to the device. But when reading the buffer isn't allocated
and the physical addresses are resolved to zero. I fixed this with initializing
the buffer. My question is: Is there a faster method to get the kernel to
map all the virtual addresses at once and not each by each? This would
increase the performance enormously (from 33MB/s to [hopefully] 100MB/s).
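
(To illustrate what I mean by "resolving": the address lookup boils down to
something like the sketch below. This is not my actual driver code; the
helper name user_virt_to_phys is made up, it is x86-only and written against
the 2.2 headers.)

#include <linux/sched.h>
#include <linux/mm.h>
#include <asm/pgtable.h>
#include <asm/page.h>

/*
 * Sketch only: resolve one user-space virtual address of the current
 * process to a physical address by walking the page tables.  Returns 0
 * if the page is not present, which is exactly what happens with a
 * freshly allocated, never-touched read buffer.
 */
static unsigned long user_virt_to_phys(unsigned long vaddr)
{
        pgd_t *pgd;
        pmd_t *pmd;
        pte_t *pte;

        pgd = pgd_offset(current->mm, vaddr);
        if (pgd_none(*pgd))
                return 0;
        pmd = pmd_offset(pgd, vaddr);
        if (pmd_none(*pmd))
                return 0;
        pte = pte_offset(pmd, vaddr);
        if (!pte_present(*pte))
                return 0;

        /* page frame from the pte plus the offset within the page */
        return (pte_val(*pte) & PAGE_MASK) | (vaddr & ~PAGE_MASK);
}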

Christoph

--
**********************************************************
* Christoph Baumann *
* Kirchhoff-Institut für Physik - Uni Heidelberg *
* Mail: [email protected] *
* Phone: ++49-6221-54-4329 *
**********************************************************


2001-02-22 16:25:16

by Christoph Baumann

Subject: Re: Problem with DMA buffer (in 2.2.15)

I messed up my subscription to this list yesterday, so any answers to my
question didn't reach me (I looked through the archive up to 20:50 yesterday).
Rereading my question now, I decided to add some more details.

> But when reading, the buffer isn't allocated
This is of course rubbish. I meant the buffer isn't initialized.

Some more details about what I'm doing:
I have a PCI board with 1M of RAM and a PLX9080 on it (and some more chips
which don't matter here). So far I have established normal communication with
the board (to program the PLX etc.). Now I want to write larger chunks of data
to the RAM, so I started to resolve the physical addresses of the user space
data to generate a chain list. Everything works fine for writing to the RAM.
The problem is reading: if I allocate a new buffer for reading back, it is of
course not yet mapped to physical memory. The "quick and dirty"(tm) hack was
writing a zero to each buffer element to get it mapped. The better version is
to do this in steps of PAGE_SIZE. What I'm looking for is a kernel routine to
force the mapping of previously unmapped pages. Browsing through the source in
mm/ I found make_pages_present(). Could this be the solution? I haven't had
time to try it out yet.
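
In user space, the page-granular touch looks roughly like this (just a
sketch; touch_pages, buf and len are made-up names):

#include <unistd.h>
#include <stddef.h>

/* Sketch of the page-granular touch: one write per page is enough to
 * fault the whole buffer in before it is handed to the driver. */
static void touch_pages(char *buf, size_t len)
{
        size_t page = (size_t) getpagesize();
        size_t i;

        for (i = 0; i < len; i += page)
                buf[i] = 0;             /* fault this page in */
        if (len)
                buf[len - 1] = 0;       /* make sure the last page is hit */
}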


Christoph

--
**********************************************************
* Christoph Baumann *
* Kirchhoff-Institut für Physik - Uni Heidelberg *
* Mail: [email protected] *
* Phone: ++49-6221-54-4329 *
**********************************************************

2001-02-22 16:41:10

by Norbert Roos

Subject: Re: Problem with DMA buffer (in 2.2.15)

Christoph Baumann wrote:

> is to do this in steps of PAGE_SIZE. What I'm looking for is a kernel routine
> to force the mapping of previously unmapped pages. Browsing through the source
> in mm/ I found make_pages_present(). Could this be the solution? I hadn't the

Have you already looked at mlock(2)?
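
One mlock(2) call faults in and pins the whole range, roughly like this
(only a sketch; pin_buffer, buf and len are made-up names):

#include <stdio.h>
#include <stddef.h>
#include <sys/mman.h>

/* Sketch: lock the DMA buffer; as a side effect every page is faulted
 * in and stays resident, so the driver sees valid mappings. */
static int pin_buffer(void *buf, size_t len)
{
        if (mlock(buf, len) != 0) {
                perror("mlock");
                return -1;
        }
        return 0;       /* munlock(buf, len) when the buffer is done */
}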

Norbert

2001-02-22 17:21:13

by Christoph Baumann

Subject: Re: Problem with DMA buffer (in 2.2.15)

On Thu, Feb 22, 2001 at 05:38:53PM +0100, Norbert Roos wrote:
> Christoph Baumann wrote:
>
> > is to do this in steps of PAGE_SIZE. What I'm looking for is a kernel routine
> > to force the mapping of previously unmapped pages. Browsing through the source
> > in mm/ I found make_pages_present(). Could this be the solution? I hadn't the
>
> Have you already looked at mlock(2)?

Well, I would have done all of this (mlock, allocating the buffer in kernel
space and mapping it to user space, etc.). But the critical issue is that
everything should stay code-compatible with Windows (ducking away...). The
API under Windows allows chained DMA from user space, so the program, which
was initially developed under Windows, uses this feature. My module needs to
jump in to support this. Another problem is that the buffer is often
reallocated.
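
If make_pages_present() turns out to be usable from a module, I imagine
something along these lines (pure sketch: map_user_range is a made-up name,
uaddr/len would come from the ioctl, and I'm assuming the (start, end)
signature I saw in mm/ and that the symbol is actually reachable from module
code):

#include <linux/sched.h>
#include <linux/mm.h>
#include <asm/page.h>

/* Pure sketch: force the whole user range to be mapped before the
 * module resolves physical addresses to build the chain list. */
static void map_user_range(unsigned long uaddr, unsigned long len)
{
        unsigned long start = uaddr & PAGE_MASK;
        unsigned long end   = PAGE_ALIGN(uaddr + len);

        down(&current->mm->mmap_sem);      /* serialize against other mappers */
        make_pages_present(start, end);    /* fault every page in */
        up(&current->mm->mmap_sem);
}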

Christoph

--
**********************************************************
* Christoph Baumann *
* Kirchhoff-Institut für Physik - Uni Heidelberg *
* Mail: [email protected] *
* Phone: ++49-6221-54-4329 *
**********************************************************