Nobody currently maintains cramfs, and I don't know who maintains romfs, so
I am sending this patch to linux-mm, lkml, and linux-dev.

I am not sure my reasoning is right.
When readpage is called, the page passed in has just been allocated,
because the kernel could not find that page in the page cache.

At that point, no user process can have the page mapped into its address
space, so I think a D-cache aliasing problem can never occur.

Does that make sense?
---
diff --git a/arch/arm/mach-integrator/clock.h b/arch/arm/mach-integrator/clock.h
deleted file mode 100644
index e69de29..0000000
diff --git a/fs/cramfs/inode.c b/fs/cramfs/inode.c
index a07338d..40c8b84 100644
--- a/fs/cramfs/inode.c
+++ b/fs/cramfs/inode.c
@@ -492,7 +492,6 @@ static int cramfs_readpage(struct file *file, struct page * page)
pgdata = kmap(page);
memset(pgdata + bytes_filled, 0, PAGE_CACHE_SIZE - bytes_filled);
kunmap(page);
- flush_dcache_page(page);
SetPageUptodate(page);
unlock_page(page);
return 0;
diff --git a/fs/romfs/inode.c b/fs/romfs/inode.c
index 98a232f..d008262 100644
--- a/fs/romfs/inode.c
+++ b/fs/romfs/inode.c
@@ -454,7 +454,6 @@ romfs_readpage(struct file *file, struct page * page)
if (!result)
SetPageUptodate(page);
- flush_dcache_page(page);
unlock_page(page);
On Fri, Jan 16, 2009 at 02:28:04PM +0900, MinChan Kim wrote:
> Nobody currently maintains cramfs, and I don't know who maintains romfs,
> so I am sending this patch to linux-mm, lkml, and linux-dev.
>
> I am not sure my reasoning is right.
>
> When readpage is called, the page passed in has just been allocated,
> because the kernel could not find that page in the page cache.
>
> At that point, no user process can have the page mapped into its address
> space, so I think a D-cache aliasing problem can never occur.
>
> Does that make sense?
Sorry, no. You have to call flush_dcache_page() in two situations: when
the kernel is going to read some data that userspace wrote, *and* when
userspace is going to read some data that the kernel wrote. From a quick
look at the patch, this is the second case: the kernel wrote data to a
pagecache page, and userspace should be able to read it.

To understand why this is necessary, consider a processor whose data cache
is virtually indexed and writeback. The kernel writes to a page, then a
user process reads from the same page through a different virtual address.
The cache doesn't find the data the kernel wrote, because it sits under a
different virtual index, so userspace reads stale data.
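
To make that second case concrete, here is a minimal sketch (the function
name is hypothetical and the code is not taken from the patch) of a
->readpage that fills a pagecache page with the CPU through a kernel
mapping, which is the same shape the cramfs/romfs readpage code above has:

	/*
	 * Illustrative sketch only: the CPU writes the data through a
	 * kernel virtual address, so the dirty lines may still sit in the
	 * cache under the kernel's virtual index when userspace later maps
	 * and reads the page through a differently-indexed address.
	 */
	#include <linux/fs.h>
	#include <linux/mm.h>
	#include <linux/highmem.h>
	#include <linux/pagemap.h>
	#include <linux/string.h>

	static int example_readpage(struct file *file, struct page *page)
	{
		void *kaddr = kmap(page);

		/* Fill the page via the kernel mapping. */
		memset(kaddr, 0, PAGE_CACHE_SIZE);
		kunmap(page);

		/* Push the kernel's writes out of the virtually indexed
		 * cache so a user mapping of the same page sees them. */
		flush_dcache_page(page);

		SetPageUptodate(page);
		unlock_page(page);
		return 0;
	}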
--
Matthew Wilcox Intel Open Source Technology Centre
"Bill, look, we understand that you're interested in selling us this
operating system, but compare it to ours. We can't possibly take such
a retrograde step."
On Thu, Jan 15, 2009 at 10:33:38PM -0700, Matthew Wilcox wrote:
> On Fri, Jan 16, 2009 at 02:28:04PM +0900, MinChan Kim wrote:
> > Nobody currently maintains cramfs, and I don't know who maintains romfs,
> > so I am sending this patch to linux-mm, lkml, and linux-dev.
> >
> > I am not sure my reasoning is right.
> >
> > When readpage is called, the page passed in has just been allocated,
> > because the kernel could not find that page in the page cache.
> >
> > At that point, no user process can have the page mapped into its address
> > space, so I think a D-cache aliasing problem can never occur.
> >
> > Does that make sense?
>
> Sorry, no. You have to call flush_dcache_page() in two situations: when
> the kernel is going to read some data that userspace wrote, *and* when
> userspace is going to read some data that the kernel wrote. From a quick
> look at the patch, this is the second case: the kernel wrote data to a
> pagecache page, and userspace should be able to read it.
>
> To understand why this is necessary, consider a processor whose data cache
> is virtually indexed and writeback. The kernel writes to a page, then a
> user process reads from the same page through a different virtual address.
> The cache doesn't find the data the kernel wrote, because it sits under a
> different virtual index, so userspace reads stale data.
I see. :)

Thanks for the quick response and the good explanation.
Hmm... one more question.

I can't find a flush_dcache_page() call in mpage_readpage(), which is the
generic read path; the ext filesystems use mpage_readpage() as their
->readpage.

Who calls flush_dcache_page(), and where, in the mpage_readpage() call path?
>
> --
> Matthew Wilcox Intel Open Source Technology Centre
> "Bill, look, we understand that you're interested in selling us this
> operating system, but compare it to ours. We can't possibly take such
> a retrograde step."
On Fri, Jan 16, 2009 at 02:51:19PM +0900, MinChan Kim wrote:
> On Thu, Jan 15, 2009 at 10:33:38PM -0700, Matthew Wilcox wrote:
> > Sorry, no. You have to call flush_dcache_page() in two situations: when
> > the kernel is going to read some data that userspace wrote, *and* when
> > userspace is going to read some data that the kernel wrote. From a quick
> > look at the patch, this is the second case: the kernel wrote data to a
> > pagecache page, and userspace should be able to read it.
> >
> > To understand why this is necessary, consider a processor whose data cache
> > is virtually indexed and writeback. The kernel writes to a page, then a
> > user process reads from the same page through a different virtual address.
> > The cache doesn't find the data the kernel wrote, because it sits under a
> > different virtual index, so userspace reads stale data.
>
> I see. :)
>
> Thanks for the quick response and the good explanation.
> Hmm... one more question.
>
> I can't find a flush_dcache_page() call in mpage_readpage(), which is the
> generic read path; the ext filesystems use mpage_readpage() as their
> ->readpage.
>
> Who calls flush_dcache_page(), and where, in the mpage_readpage() call path?
Most I/O devices will do DMA to the page in question and thus the kernel
hasn't written to it and the CPU won't have the data in cache. For the
few devices which can't do DMA, it's the responsibility of the device
driver to call flush_dcache_page() (or some other flushing primitive).
See the gdth driver for an example:
address = kmap_atomic(sg_page(sl), KM_BIO_SRC_IRQ) + sl->offset;
memcpy(address, buffer, cpnow);
flush_dcache_page(sg_page(sl));
kunmap_atomic(address, KM_BIO_SRC_IRQ);
--
Matthew Wilcox Intel Open Source Technology Centre
"Bill, look, we understand that you're interested in selling us this
operating system, but compare it to ours. We can't possibly take such
a retrograde step."
On Fri, Jan 16, 2009 at 02:51:19PM +0900, MinChan Kim wrote:
> On Thu, Jan 15, 2009 at 10:33:38PM -0700, Matthew Wilcox wrote:
> > On Fri, Jan 16, 2009 at 02:28:04PM +0900, MinChan Kim wrote:
> > > Nobody currently maintains cramfs, and I don't know who maintains romfs,
> > > so I am sending this patch to linux-mm, lkml, and linux-dev.
> > >
> > > I am not sure my reasoning is right.
> > >
> > > When readpage is called, the page passed in has just been allocated,
> > > because the kernel could not find that page in the page cache.
> > >
> > > At that point, no user process can have the page mapped into its address
> > > space, so I think a D-cache aliasing problem can never occur.
> > >
> > > Does that make sense?
> >
> > Sorry, no. You have to call flush_dcache_page() in two situations: when
> > the kernel is going to read some data that userspace wrote, *and* when
> > userspace is going to read some data that the kernel wrote. From a quick
> > look at the patch, this is the second case: the kernel wrote data to a
> > pagecache page, and userspace should be able to read it.
> >
> > To understand why this is necessary, consider a processor whose data cache
> > is virtually indexed and writeback. The kernel writes to a page, then a
> > user process reads from the same page through a different virtual address.
> > The cache doesn't find the data the kernel wrote, because it sits under a
> > different virtual index, so userspace reads stale data.
>
> I see. :)
>
> Thanks for the quick response and the good explanation.
> Hmm... one more question.
>
> I can't find a flush_dcache_page() call in mpage_readpage(), which is the
> generic read path; the ext filesystems use mpage_readpage() as their
> ->readpage.
>
> Who calls flush_dcache_page(), and where, in the mpage_readpage() call path?
I think if the page is populated via I/O, then it is the responsibility of
the I/O layers (e.g. the DMA API) to ensure the caches are consistent.
Presumably this would include calling flush_dcache_page() if the CPU is
being used for the copies (e.g. see drivers/block/brd.c).

But there are quite possibly holes around here, because not as much testing
is done on CPUs with these kinds of caches. E.g. brd probably should be
doing a flush_dcache_page() in the rw == WRITE direction AFAIKS, so that it
picks up user aliases there.
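
As a hedged illustration of that rw == WRITE case (a made-up helper, not
the actual brd code): when a driver copies data out of a pagecache page
with the CPU instead of DMA, userspace may have written that page through
its own mapping, so the flush has to happen before the CPU read:

	/*
	 * Illustrative sketch, not drivers/block/brd.c itself: a CPU
	 * (non-DMA) copy out of a pagecache page that userspace may have
	 * written through a user mapping.
	 */
	#include <linux/mm.h>
	#include <linux/highmem.h>
	#include <linux/string.h>

	static void example_copy_from_page(struct page *page, unsigned int offset,
					   void *dst, size_t len)
	{
		void *src;

		/* Pick up any data written through user-space aliases. */
		flush_dcache_page(page);

		src = kmap_atomic(page, KM_USER0);
		memcpy(dst, src + offset, len);
		kunmap_atomic(src, KM_USER0);
	}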
On Fri, Jan 16, 2009 at 06:59:27AM +0100, Nick Piggin wrote:
> But there are quite possibly holes around here, because not as much testing
> is done on CPUs with these kinds of caches. E.g. brd probably should be
> doing a flush_dcache_page() in the rw == WRITE direction AFAIKS, so that it
> picks up user aliases there.
Nick, if you wanted me to schlep a parisc machine to LCA for you, you
needed to ask me *before* I got as far as Vancouver ;-)
--
Matthew Wilcox Intel Open Source Technology Centre
"Bill, look, we understand that you're interested in selling us this
operating system, but compare it to ours. We can't possibly take such
a retrograde step."
On Thu, Jan 15, 2009 at 10:57:30PM -0700, Matthew Wilcox wrote:
> On Fri, Jan 16, 2009 at 02:51:19PM +0900, MinChan Kim wrote:
> > On Thu, Jan 15, 2009 at 10:33:38PM -0700, Matthew Wilcox wrote:
> > > Sorry, no. You have to call flush_dcache_page() in two situations: when
> > > the kernel is going to read some data that userspace wrote, *and* when
> > > userspace is going to read some data that the kernel wrote. From a quick
> > > look at the patch, this is the second case: the kernel wrote data to a
> > > pagecache page, and userspace should be able to read it.
> > >
> > > To understand why this is necessary, consider a processor whose data cache
> > > is virtually indexed and writeback. The kernel writes to a page, then a
> > > user process reads from the same page through a different virtual address.
> > > The cache doesn't find the data the kernel wrote, because it sits under a
> > > different virtual index, so userspace reads stale data.
> >
> > I see. :)
> >
> > Thanks for the quick response and the good explanation.
> > Hmm... one more question.
> >
> > I can't find a flush_dcache_page() call in mpage_readpage(), which is the
> > generic read path; the ext filesystems use mpage_readpage() as their
> > ->readpage.
> >
> > Who calls flush_dcache_page(), and where, in the mpage_readpage() call path?
>
> Most I/O devices will do DMA to the page in question and thus the kernel
> hasn't written to it and the CPU won't have the data in cache. For the
> few devices which can't do DMA, it's the responsibility of the device
> driver to call flush_dcache_page() (or some other flushing primitive).
Hmm... now I am confused.
If the device driver or the DMA layer ensures cache consistency,
why does the filesystem code have to handle it?
> See the gdth driver for an example:
>
> address = kmap_atomic(sg_page(sl), KM_BIO_SRC_IRQ) + sl->offset;
> memcpy(address, buffer, cpnow);
> flush_dcache_page(sg_page(sl));
> kunmap_atomic(address, KM_BIO_SRC_IRQ);
>
> --
> Matthew Wilcox Intel Open Source Technology Centre
> "Bill, look, we understand that you're interested in selling us this
> operating system, but compare it to ours. We can't possibly take such
> a retrograde step."
--
Kind regards,
MinChan Kim
On Fri, Jan 16, 2009 at 03:08:30PM +0900, MinChan Kim wrote:
> On Thu, Jan 15, 2009 at 10:57:30PM -0700, Matthew Wilcox wrote:
> > Most I/O devices will do DMA to the page in question and thus the kernel
> > hasn't written to it and the CPU won't have the data in cache. For the
> > few devices which can't do DMA, it's the responsibility of the device
> > driver to call flush_dcache_page() (or some other flushing primitive).
>
> Hmm... now I am confused.
> If the device driver or the DMA layer ensures cache consistency,
> why does the filesystem code have to handle it?
Because the filesystem is accessing the page directly rather than going
through I/O.

Basically, whoever reads or writes the page is responsible for avoiding user
aliases. You see these calls in the VM for anonymous pages, in the bounce
buffer layers, and in filesystems that read from or write to pages that are
exposed to userspace (i.e. metadata generally need not be flushed, because
it will not be mmapped by userspace).
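
As a hedged sketch of that last parenthetical (hypothetical helpers, not
real kernel functions): a CPU write into a page that userspace can mmap
needs the flush, while a write into purely in-kernel metadata does not,
because no user alias of it can exist:

	/* Hypothetical helpers illustrating the data/metadata distinction. */
	#include <linux/mm.h>
	#include <linux/highmem.h>
	#include <linux/string.h>

	/* File data: the page may be mmapped, so flush after the CPU write. */
	static void fill_data_page(struct page *page, const void *src, size_t len)
	{
		void *kaddr = kmap(page);

		memcpy(kaddr, src, len);
		kunmap(page);
		flush_dcache_page(page);
	}

	/* In-kernel metadata buffer: never mmapped by userspace, so there is
	 * no user alias and no flush_dcache_page() is needed. */
	static void fill_metadata(void *metadata_buf, const void *src, size_t len)
	{
		memcpy(metadata_buf, src, len);
	}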
On Fri, Jan 16, 2009 at 07:13:41AM +0100, Nick Piggin wrote:
> On Fri, Jan 16, 2009 at 03:08:30PM +0900, MinChan Kim wrote:
> > On Thu, Jan 15, 2009 at 10:57:30PM -0700, Matthew Wilcox wrote:
> > > Most I/O devices will do DMA to the page in question and thus the kernel
> > > hasn't written to it and the CPU won't have the data in cache. For the
> > > few devices which can't do DMA, it's the responsibility of the device
> > > driver to call flush_dcache_page() (or some other flushing primitive).
> >
> > Hmm... now I am confused.
> > If the device driver or the DMA layer ensures cache consistency,
> > why does the filesystem code have to handle it?
>
> Because the filesystem is accessing the page directly rather than going
> through I/O.
>
> Basically, whoever reads or writes the page is responsible for avoiding user
> aliases. You see these calls in the VM for anonymous pages, in the bounce
> buffer layers, and in filesystems that read from or write to pages that are
> exposed to userspace (i.e. metadata generally need not be flushed, because
> it will not be mmapped by userspace).
Now I totally understand.
Thanks for patiently answering my questions.