Hello Guys!
I want to change the pgprot (unset L_PTE_CACHEABLE and L_PTE_BUFFERABLE) for
several pages allocated by alloc_pages in the kernel VM, because this memory
is used for sharing data between kernel space and userland (via .mmap).
For the userland VM, I did that successfully as follows:
vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
remap_pfn_range(...);
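In case it helps, the whole handler looks roughly like this (MY_BUF_SIZE and
my_pages are placeholders for my driver's buffer from alloc_pages):

/* assumes <linux/mm.h> and <asm/pgtable.h>; MY_BUF_SIZE and my_pages
 * are placeholders for the driver's buffer.
 */
static int my_mmap(struct file *file, struct vm_area_struct *vma)
{
	unsigned long size = vma->vm_end - vma->vm_start;

	if (size > MY_BUF_SIZE)
		return -EINVAL;

	/* make the userland mapping uncached and unbuffered */
	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);

	if (remap_pfn_range(vma, vma->vm_start, page_to_pfn(my_pages),
			    size, vma->vm_page_prot))
		return -EAGAIN;

	return 0;
}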
For the kernel-space VM, however, I am having some trouble.
I tried to directly unset the L_PTE_CACHEABLE | L_PTE_BUFFERABLE flags in
the pte (see below), but it doesn't work, and I have a cache coherency
problem between userland and kernel space:
unsigned long addr;
pgd_t *pgd;
pmd_t *pmd;
pte_t *pte;

/* find the pte for the page's kernel virtual address */
addr = (unsigned long)page_address(my_page);
pgd = pgd_offset_k(addr & PAGE_MASK);
pmd = pmd_offset(pgd, addr & PAGE_MASK);
pte = pte_offset_kernel(pmd, addr & PAGE_MASK);

/* ok, we have the pte, now unset the L_PTE_CACHEABLE and
 * L_PTE_BUFFERABLE flags
 */
pte_val(*pte) &= ~(L_PTE_CACHEABLE | L_PTE_BUFFERABLE);
The problem description is simple: at some point the kernel doesn't see
the data that userland writes to the shared buffer, unless I insert
flush_cache_all() after each operation on the shared buffer from the
kernel side.
thanks,
Nickolay.
On Wed, Mar 15, 2006 at 12:31:56AM +0300, Nickolay wrote:
> I tried to directly unset the L_PTE_CACHEABLE | L_PTE_BUFFERABLE flags in
> the pte (see below), but it doesn't work, and I have a cache coherency
> problem between userland and kernel space:
>
> unsigned long addr;
> pgd_t *pgd;
> pmd_t *pmd;
> pte_t *pte;
>
> /* find the pte for the page's kernel virtual address */
> addr = (unsigned long)page_address(my_page);
> pgd = pgd_offset_k(addr & PAGE_MASK);
> pmd = pmd_offset(pgd, addr & PAGE_MASK);
> pte = pte_offset_kernel(pmd, addr & PAGE_MASK);
>
> /* ok, we have the pte, now unset the L_PTE_CACHEABLE and
>  * L_PTE_BUFFERABLE flags
>  */
> pte_val(*pte) &= ~(L_PTE_CACHEABLE | L_PTE_BUFFERABLE);
>
> The problem description is simple: at some point the kernel doesn't see
> the data that userland writes to the shared buffer, unless I insert
> flush_cache_all() after each operation on the shared buffer from the
> kernel side.
Rather than just assuming that there are page tables there, if you
added the usual checks which the kernel typically does, you might
get a clue as to what's going on.
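Something along these lines (an untested sketch, reusing your my_page)
would show you where the walk falls over:

/* untested: walk the kernel page tables for my_page with the usual checks */
static void check_kernel_pte(struct page *my_page)
{
	unsigned long addr = (unsigned long)page_address(my_page);
	pgd_t *pgd;
	pmd_t *pmd;
	pte_t *pte;

	pgd = pgd_offset_k(addr);
	if (pgd_none(*pgd) || pgd_bad(*pgd)) {
		printk(KERN_DEBUG "no pgd for %08lx\n", addr);
		return;
	}

	pmd = pmd_offset(pgd, addr);
	if (pmd_none(*pmd) || pmd_bad(*pmd)) {
		/* this is where the walk stops for the kernel direct mapping */
		printk(KERN_DEBUG "no usable pmd for %08lx\n", addr);
		return;
	}

	pte = pte_offset_kernel(pmd, addr);
	if (!pte_present(*pte)) {
		printk(KERN_DEBUG "no pte for %08lx\n", addr);
		return;
	}

	printk(KERN_DEBUG "pte for %08lx: %08lx\n", addr, pte_val(*pte));
}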
You'll find that the PMD is not valid as far as the Linux page table
walking goes - that's because we don't use a set of individual page
mappings to set up the kernel mapping of the page. In turn, that
means that there is no individual control over the status of pages
allocated by alloc_pages.
To solve your problem, you need to look at the ARM DMA mmap extension
and use that instead.
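In other words, something like the sketch below, using dma_alloc_coherent()
together with the ARM-specific dma_mmap_coherent() helper (untested; my_dev,
my_buf, my_handle and MY_BUF_SIZE stand in for your driver's own state):

/* untested sketch; needs <linux/dma-mapping.h> */
#define MY_BUF_SIZE PAGE_SIZE

static struct device *my_dev;	/* set when the device is probed */
static void *my_buf;
static dma_addr_t my_handle;

static int my_alloc_buffer(void)
{
	my_buf = dma_alloc_coherent(my_dev, MY_BUF_SIZE, &my_handle,
				    GFP_KERNEL);
	return my_buf ? 0 : -ENOMEM;
}

static int my_mmap(struct file *file, struct vm_area_struct *vma)
{
	/* hands userland an uncached view of the same buffer; the
	 * kernel-side my_buf mapping is uncached as well, so no
	 * flush_cache_all() is needed.
	 */
	return dma_mmap_coherent(my_dev, vma, my_buf, my_handle,
				 vma->vm_end - vma->vm_start);
}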
--
Russell King
Linux kernel 2.6 ARM Linux - http://www.arm.linux.org.uk/
maintainer of: 2.6 Serial core