On 17 Sep 03 at 19:36, Stevie-O wrote:
>
> My thinking is this: I want to use __get_free_pages(1) 80 times to get the
> 160 pages, then pass those 80 pieces to the card (it's known the card can
> handle requests with that many pieces). Then I want to create a *virtually*
> contiguous 160-page mapping, so the postprocessing code in the driver can
> view the 80 2-page sub-buffers as one big consecutive 160-page buffer.
> Doing this would (a) make for more efficient use of memory, and (b) leave
> the larger piles of contiguous pages to the drivers of cards that actually
> require them.
If you use __get_free_pages(0) 160 times, you should be able to use
vmap() in 2.[456].x.
I must say that I do not understand why it checks for
size > (max_mapnr << PAGE_SHIFT) in 2.4.x, or for count > num_physpages
in 2.6.x (there is nothing wrong with mapping the same page several
thousand times, or is there? With a 32MB host you have plenty of
unused VA space in the kernel...), but it should not hurt you, as you
need distinct physical pages.
On the other hand, maybe using SG even for driver operations is not
that complicated. Do not forget that on bigmem boxes you have only a
128MB area for vmalloc/vmap/ioremap, so you can quickly find that
there is no 640KB contiguous area available.
Best regards,
Petr Vandrovec
[email protected]
Petr Vandrovec wrote:
> On 17 Sep 03 at 19:36, Stevie-O wrote:
>
>>My thinking is this: I want to use __get_free_pages(1) 80 times to get the
>>160 pages, then pass those 80 pieces to the card (it's known the card can
>>handle requests with that many pieces). Then I want to create a *virtually*
>>contiguous 160-page mapping, so the postprocessing code in the driver can
>>view the 80 2-page sub-buffers as one big consecutive 160-page buffer.
>>Doing this would (a) make for more efficient use of memory, and (b) leave
>>the larger piles of contiguous pages to the drivers of cards that actually
>>require them.
>
>
> If you use __get_free_pages(0) 160 times, you should be able to use
> vmap() in 2.[456].x.
Actually, I specified __get_free_pages(1) 80 times because I don't know if the
card's SG can actually support 160 separate buffers (I'm certain it can do at
least 80 though).
I grepped my 2.4 kernel source for 'vmap' and the only results that seemed
meaningful were vmap_pte_range or vmap_pmd_range in mips/mm/umap.c and
mips64/mm/umap.c. Is this documented somewhere? I suffer from the 'I'm new at
this, but this looks possible' syndrome. I don't actually know how anything is
accomplished.
Btw, am I right about kmalloc(35000) effectively grabbing 64K?
>
> I must say that I do not understand why it checks for
> size > (max_mapnr << PAGE_SHIFT) in 2.4.x, or for count > num_physpages
> in 2.6.x (there is nothing wrong with mapping the same page several
> thousand times, or is there? With a 32MB host you have plenty of
> unused VA space in the kernel...), but it should not hurt you, as you
> need distinct physical pages.
>
> On the other hand, maybe using SG even for driver operations is not
> that complicated. Do not forget that on bigmem boxes you have only a
> 128MB area for vmalloc/vmap/ioremap, so you can quickly find that
> there is no 640KB contiguous area available.
--
- Stevie-O
Real Programmers use COPY CON PROGRAM.EXE
Stevie-O wrote:
>
> I grepped my 2.4 kernel source for 'vmap' and the only results that
> seemed meaningful were vmap_pte_range or vmap_pmd_range in
> mips/mm/umap.c and mips64/mm/umap.c. Is this documented somewhere? I
> suffer from the 'I'm new at this, but this looks possible' syndrome. I
> don't actually know how anything is accomplished.
>
> Btw, am I right about kmalloc(35000) effectively grabbing 64K?
I did a freetext search of the LXR for 'remap' and came up with this function:
/*
 * maps a range of physical memory into the requested pages. the old
 * mappings are removed. any references to nonexistent pages results
 * in null mappings (currently treated as "copy-on-access")
 */
static inline void remap_pte_range(pte_t * pte, unsigned long address,
        unsigned long size, unsigned long phys_addr, pgprot_t prot)
This looks promising, although it's a bit ambiguous to me. Treating the
physical pages as distinct from virtual pages:
[1] maps a range of physical memory onto the requested pages
I assume 'the requested pages' here refer to 'the specified virtual pages',
since the statement makes no sense if it's referring to 'the specified physical
pages'.
[2] the old mappings are removed.
Looking at the source, it seems this means 'if these virtual pages were
mapped to any physical pages before, they aren't anymore'.
[3] any references to nonexistent pages results in null mappings ('copy-on-access')
I have no idea what this means, and I can't figure it out reading the code.
Obviously it refers to something not really being there (thus 'nonexistent'). So
copy-on-access? How do you copy something that doesn't exist?
--
- Stevie-O
Real Programmers use COPY CON PROGRAM.EXE
On Wed, Sep 17, 2003 at 09:10:22PM -0400, Stevie-O wrote:
> Stevie-O wrote:
>
> >
> >I grepped my 2.4 kernel source for 'vmap' and the only results that
> >seemed meaningful were vmap_pte_range or vmap_pmd_range in
2.4.22-ac1 has it, in mm/vmalloc.c. You cannot (well, I believe) use
any functions which take a page array if the pages were not allocated
one by one. Maybe you can try using page and page+1, but I'm under the
impression that it will not work as expected, and that you'll hit
some BUG() somewhere.
> >mips/mm/umap.c and mips64/mm/umap.c. Is this documented somewhere? I
> >suffer from the 'I'm new at this, but this looks possible' syndrome. I
> >don't actually know how anything is accomplished.
> >
> >Btw, am I right about kmalloc(35000) effectively grabbing 64K?
Yes.
> I did a freetext search of the LXR for 'remap' and came up with this
> function:
>
> /*
>  * maps a range of physical memory into the requested pages. the old
>  * mappings are removed. any references to nonexistent pages results
>  * in null mappings (currently treated as "copy-on-access")
>  */
> static inline void remap_pte_range(pte_t * pte, unsigned long address,
>         unsigned long size, unsigned long phys_addr, pgprot_t prot)
Unavailable outside of mm. You must use remap_page_range. And this function
can only remap memory which does not have a 'struct page' (e.g. MMIO on
PCI buses) or pages marked as Reserved. So if you want to use it on
regular memory, you must mark the pages reserved... And then you have to do
black magic to clear the 'PageReserved' bit at exactly the right time - if
a process forks and you clear this bit too early, you'll get
page_count < 0 and a BUG(). If you do it too late, you'll leak memory.
vmmon did this in the past, but it was impossible to get it right under all
possible circumstances.
The other problem is that this function is targeted at remapping userspace
addresses, not kernel space, and I would not trust it with a 'from' address
in kernel space. 2.6.x with the 4G/4G patch will definitely do bad
things.
Best regards,
Petr Vandrovec
[email protected]