This patch implements flush_cache_vmap and flush_cache_vunmap for
RISC-V, since these functions might modify PTE. Without this patch,
SFENCE.VMA won't be added to the related code, which might introduce a bug
in some out-of-order micro-architecture implementations.
Signed-off-by: Jiuyang Liu <[email protected]>
---
arch/riscv/include/asm/cacheflush.h | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/arch/riscv/include/asm/cacheflush.h b/arch/riscv/include/asm/cacheflush.h
index 23ff70350992..4adf25248c43 100644
--- a/arch/riscv/include/asm/cacheflush.h
+++ b/arch/riscv/include/asm/cacheflush.h
@@ -8,6 +8,14 @@
#include <linux/mm.h>
+/*
+ * flush_cache_vmap and flush_cache_vunmap might modify PTE, needs SFENCE.VMA.
+ * - flush_cache_vmap is invoked after map_kernel_range() has installed the page table entries.
+ * - flush_cache_vunmap is invoked before unmap_kernel_range() deletes the page table entries
+ */
+#define flush_cache_vmap(start, end) flush_tlb_all()
+#define flush_cache_vunmap(start, end) flush_tlb_all()
+
static inline void local_flush_icache_all(void)
{
asm volatile ("fence.i" ::: "memory");
--
2.31.1
Hi Jiuyang,
On 3/28/21 at 9:55 PM, Jiuyang Liu wrote:
> This patch implements flush_cache_vmap and flush_cache_vunmap for
> RISC-V, since these functions might modify PTE. Without this patch,
> SFENCE.VMA won't be added to the related code, which might introduce a bug
> in some out-of-order micro-architecture implementations.
>
> Signed-off-by: Jiuyang Liu <[email protected]>
> ---
> arch/riscv/include/asm/cacheflush.h | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> diff --git a/arch/riscv/include/asm/cacheflush.h b/arch/riscv/include/asm/cacheflush.h
> index 23ff70350992..4adf25248c43 100644
> --- a/arch/riscv/include/asm/cacheflush.h
> +++ b/arch/riscv/include/asm/cacheflush.h
> @@ -8,6 +8,14 @@
>
> #include <linux/mm.h>
>
> +/*
> + * flush_cache_vmap and flush_cache_vunmap might modify PTE, needs SFENCE.VMA.
"might modify PTE" is not entirely true I think, this is what happens
before using this function that might modify PTE, those functions ensure
those modifications are made visible.
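For instance (illustrative ordering for the vmap case, not actual kernel code):

	set_pte(ptep, pte);            /* the PTE is modified here */
	flush_cache_vmap(start, end);  /* then the hook makes it visible */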
> + * - flush_cache_vmap is invoked after map_kernel_range() has installed the page table entries.
> + * - flush_cache_vunmap is invoked before unmap_kernel_range() deletes the page table entries
> + */
> +#define flush_cache_vmap(start, end) flush_tlb_all()
> +#define flush_cache_vunmap(start, end) flush_tlb_all()
> +
> static inline void local_flush_icache_all(void)
> {
> asm volatile ("fence.i" ::: "memory");
>
FWIW, you can add:
Reviewed-by: Alexandre Ghiti <[email protected]>
Thanks,
Alex
On Mon, Mar 29, 2021 at 01:55:09AM +0000, Jiuyang Liu wrote:
> +/*
> + * flush_cache_vmap and flush_cache_vunmap might modify PTE, needs SFENCE.VMA.
> + * - flush_cache_vmap is invoked after map_kernel_range() has installed the page table entries.
> + * - flush_cache_vunmap is invoked before unmap_kernel_range() deletes the page table entries
> + */
Please never ever write comments > 80 chars. And please read the
coding style document.
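For reference, wrapped to 80 columns that comment would read something like:

/*
 * flush_cache_vmap and flush_cache_vunmap might modify PTE, needs
 * SFENCE.VMA.
 * - flush_cache_vmap is invoked after map_kernel_range() has installed
 *   the page table entries.
 * - flush_cache_vunmap is invoked before unmap_kernel_range() deletes
 *   the page table entries.
 */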
On Sun, 28 Mar 2021 18:55:09 PDT (-0700), [email protected] wrote:
> This patch implements flush_cache_vmap and flush_cache_vunmap for
> RISC-V, since these functions might modify PTE. Without this patch,
> SFENCE.VMA won't be added to the related code, which might introduce a bug
> in some out-of-order micro-architecture implementations.
>
> Signed-off-by: Jiuyang Liu <[email protected]>
> ---
> arch/riscv/include/asm/cacheflush.h | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> diff --git a/arch/riscv/include/asm/cacheflush.h b/arch/riscv/include/asm/cacheflush.h
> index 23ff70350992..4adf25248c43 100644
> --- a/arch/riscv/include/asm/cacheflush.h
> +++ b/arch/riscv/include/asm/cacheflush.h
> @@ -8,6 +8,14 @@
>
> #include <linux/mm.h>
>
> +/*
> + * flush_cache_vmap and flush_cache_vunmap might modify PTE, needs SFENCE.VMA.
> + * - flush_cache_vmap is invoked after map_kernel_range() has installed the page table entries.
> + * - flush_cache_vunmap is invoked before unmap_kernel_range() deletes the page table entries
These should have line breaks.
> + */
> +#define flush_cache_vmap(start, end) flush_tlb_all()
We shouldn't need cache flushes for permission upgrades: the ISA allows
the old mappings to be visible until a fence, but the theory is that
window will be short for reasonable architectures, so the overhead of
flushing the entire TLB will overwhelm the extra faults. There are a
handful of places where we preemptively flush, but those are generally
because we can't handle the faults correctly.
If you have some benchmark that demonstrates a performance issue on real
hardware here then I'm happy to talk about this further, but this
assumption is all over arch/riscv so I'd prefer to keep things
consistent for now.
> +#define flush_cache_vunmap(start, end) flush_tlb_all()
This one does seem necessary.
> +
> static inline void local_flush_icache_all(void)
> {
> asm volatile ("fence.i" ::: "memory");
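For illustration, the reduced hunk keeping only the vunmap side would then
be something like:

+/*
+ * flush_cache_vunmap is invoked before unmap_kernel_range() deletes the
+ * page table entries.
+ */
+#define flush_cache_vunmap(start, end) flush_tlb_all()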
On Sunday, April 11, 2021 9:41:07 PM UTC you wrote:
> On Sun, 28 Mar 2021 18:55:09 PDT (-0700), [email protected] wrote:
> > This patch implements flush_cache_vmap and flush_cache_vunmap for
> > RISC-V, since these functions might modify PTE. Without this patch,
> > SFENCE.VMA won't be added to the related code, which might introduce a bug
> > in some out-of-order micro-architecture implementations.
> >
> > Signed-off-by: Jiuyang Liu <[email protected]>
> > ---
> >
> > arch/riscv/include/asm/cacheflush.h | 8 ++++++++
> > 1 file changed, 8 insertions(+)
> >
> > diff --git a/arch/riscv/include/asm/cacheflush.h b/arch/riscv/include/asm/cacheflush.h
> > index 23ff70350992..4adf25248c43 100644
> > --- a/arch/riscv/include/asm/cacheflush.h
> > +++ b/arch/riscv/include/asm/cacheflush.h
> > @@ -8,6 +8,14 @@
> >
> > #include <linux/mm.h>
> >
> > +/*
> > + * flush_cache_vmap and flush_cache_vunmap might modify PTE, needs SFENCE.VMA.
> > + * - flush_cache_vmap is invoked after map_kernel_range() has installed the page table entries.
> > + * - flush_cache_vunmap is invoked before unmap_kernel_range() deletes the page table entries
> These should have line breaks.
Fixed in the newest patch, thanks for pointing out.
>
> > + */
> > +#define flush_cache_vmap(start, end) flush_tlb_all()
>
> We shouldn't need cache flushes for permission upgrades: the ISA allows
> the old mappings to be visible until a fence, but the theory is that
> window will be short for reasonable architectures, so the overhead of
> flushing the entire TLB will overwhelm the extra faults. There are a
> handful of places where we preemptively flush, but those are generally
> because we can't handle the faults correctly.
Got it, I removed this.
> If you have some benchmark that demonstrates a performance issue on real
> hardware here then I'm happy to talk about this further, but this
> assumption is all over arch/riscv so I'd prefer to keep things
> consistent for now.
We are using riscv-boom + FireSim to set up a benchmark environment; I can
try it once that is set up.
> > +#define flush_cache_vunmap(start, end) flush_tlb_all()
>
> This one does seem necessary.
>
> > +
> >
> > static inline void local_flush_icache_all(void)
> > {
> >
> > asm volatile ("fence.i" ::: "memory");
On Sun, 11 Apr 2021 14:41:07 -0700 (PDT)
Palmer Dabbelt <[email protected]> wrote:
>
>
> On Sun, 28 Mar 2021 18:55:09 PDT (-0700), [email protected] wrote:
> > This patch implements flush_cache_vmap and flush_cache_vunmap for
> > RISC-V, since these functions might modify PTE. Without this patch,
> > SFENCE.VMA won't be added to the related code, which might introduce a bug
> > in some out-of-order micro-architecture implementations.
> >
> > Signed-off-by: Jiuyang Liu <[email protected]>
> > ---
> > arch/riscv/include/asm/cacheflush.h | 8 ++++++++
> > 1 file changed, 8 insertions(+)
> >
> > diff --git a/arch/riscv/include/asm/cacheflush.h b/arch/riscv/include/asm/cacheflush.h
> > index 23ff70350992..4adf25248c43 100644
> > --- a/arch/riscv/include/asm/cacheflush.h
> > +++ b/arch/riscv/include/asm/cacheflush.h
> > @@ -8,6 +8,14 @@
> >
> > #include <linux/mm.h>
> >
> > +/*
> > + * flush_cache_vmap and flush_cache_vunmap might modify PTE, needs SFENCE.VMA.
> > + * - flush_cache_vmap is invoked after map_kernel_range() has installed the page table entries.
> > + * - flush_cache_vunmap is invoked before unmap_kernel_range() deletes the page table entries
>
> These should have line breaks.
>
> > + */
> > +#define flush_cache_vmap(start, end) flush_tlb_all()
>
> We shouldn't need cache flushes for permission upgrades: the ISA allows
> the old mappings to be visible until a fence, but the theory is that
> > window will be short for reasonable architectures, so the overhead of
> flushing the entire TLB will overwhelm the extra faults. There are a
> handful of places where we preemptively flush, but those are generally
> because we can't handle the faults correctly.
>
> If you have some benchmark that demonstrates a performance issue on real
> hardware here then I'm happy to talk about this further, but this
> assumption is all over arch/riscv so I'd prefer to keep things
> consistent for now.
IMHO flush_cache_vmap() isn't necessary. From the previous discussion, it
seems the reason to implement flush_cache_vmap() is that we missed sfence.vma
in the vmalloc-related code path. But...
The RISC-V privileged spec says: "In particular, if a leaf PTE is modified but
a subsuming SFENCE.VMA is not executed, either the old translation or the
new translation will be used, but the choice is unpredictable. The behavior
is otherwise well-defined"
* If the old translation is used, we do get a page fault, but vmalloc_fault()
will take care of it, and local_flush_tlb_page() will then issue sfence.vma
properly.
* If the new translation is used, we don't need to do anything.
In both cases, we don't need to implement flush_cache_vmap().
From another angle, even if we insert sfence.vma in advance rather than
relying on vmalloc_fault(), we still can't ensure other harts use the
new translation. Take the small window below for example:
cpu0                                  cpu1
map_kernel_range()
  map_kernel_range_noflush()
                                      access the new vmalloced space
  flush_cache_vmap()
That is to say, we still rely on vmalloc_fault().
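A rough sketch of that fallback (not the exact arch/riscv code; current_pgd()
is a hypothetical stand-in for reading the root page table from SATP):

static void vmalloc_fault_sketch(unsigned long addr)
{
	/* Sync this hart's top-level entry from the reference page table. */
	pgd_t *pgd   = current_pgd() + pgd_index(addr);
	pgd_t *pgd_k = init_mm.pgd + pgd_index(addr);

	set_pgd(pgd, *pgd_k);
	/* local_flush_tlb_page() issues sfence.vma for this address. */
	local_flush_tlb_page(addr);
}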
>
> > +#define flush_cache_vunmap(start, end) flush_tlb_all()
>
In the flush_cache_vunmap() caller's code path, the translation is modified
*after* the flush_cache_vunmap(), for example:
unmap_kernel_range()
  flush_cache_vunmap()
  vunmap_page_range()
  flush_tlb_kernel_range()
IOW, when we call flush_cache_vunmap(), the translation has not changed.
Instead, I believe it is flush_tlb_kernel_range() that flushes the
translations after we change them in vunmap_page_range().
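For reference, the mm/vmalloc.c caller of that era looks roughly like this
(simplified sketch):

void unmap_kernel_range(unsigned long addr, unsigned long size)
{
	unsigned long end = addr + size;

	flush_cache_vunmap(addr, end);          /* translation still unchanged here */
	unmap_kernel_range_noflush(addr, size); /* clears the PTEs via vunmap_page_range() */
	flush_tlb_kernel_range(addr, end);      /* flushes the stale translations */
}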
Regards