2007-08-24 05:52:17

by Zachary Amsden

Subject: [PATCH] Fix preemptible lazy mode bug

I recently sent off a fix for lazy vmalloc faults which can happen under
paravirt when lazy mode is enabled. Unfortunately, I jumped the gun a
bit on fixing this. I neglected to notice that since the new call to
flush the MMU update queue is called from the page fault handler, it can
be pre-empted. Both VMI and Xen use per-cpu variables to track lazy
mode state, as all previous calls to set, disable, or flush lazy mode
happened from a non-preemptable state.
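
Concretely, the hazard looks something like this (an illustrative
sketch with made-up names, not the actual VMI/Xen code):

	/* lazy_mode and flush_update_queue() are illustrative names */
	static void flush_lazy_updates(void)
	{
		/* With preemption enabled, we can migrate to another CPU
		 * between reading the per-cpu state and acting on it, so
		 * we may test CPU A's mode but flush CPU B's queue. */
		if (__get_cpu_var(lazy_mode) != PARAVIRT_LAZY_NONE)
			flush_update_queue();
	}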

I have no idea how to convincingly produce the problem, as generating a
kernel pre-emption at the required point is, um, difficult, but it is
most certainly a real possibility, and potentially more likely than the
bug I fixed originally.

Rusty, you may have to modify lguest code if you use lazy mode and rely
on per-cpu variables during the callout for paravirt_ops.set_lazy_mode.

I have tested as best I can, and am trying to write a suite destined
for LTP which will help catch and debug these issues.

Zach


Attachments:
i386-paravirt-preempt-fix.patch (2.20 kB)

2007-08-24 06:53:35

by Jeremy Fitzhardinge

Subject: Re: [PATCH] Fix preemptible lazy mode bug

Zachary Amsden wrote:
> I recently sent off a fix for lazy vmalloc faults which can happen
> under paravirt when lazy mode is enabled. Unfortunately, I jumped the
> gun a bit on fixing this. I neglected to notice that since the new
> call to flush the MMU update queue is called from the page fault
> handler, it can be pre-empted. Both VMI and Xen use per-cpu variables
> to track lazy mode state, as all previous calls to set, disable, or
> flush lazy mode happened from a non-preemptable state.

Hm. Doing any kind of lazy-state operation with preemption enabled is
fundamentally meaningless. How does it get into a preemptable state
with a lazy mode enabled now? If a sequence of code with preempt
disabled touches a missing vmalloc mapping, it gets a fault to fix up
the mapping, and the fault handler can end up preempting the thread?
That sounds like a larger bug than just paravirt lazy mode problems.

J

2007-08-24 07:05:12

by Zachary Amsden

Subject: Re: [PATCH] Fix preemptible lazy mode bug

Jeremy Fitzhardinge wrote:
> Hm. Doing any kind of lazy-state operation with preemption enabled is
> fundamentally meaningless. How does it get into a preemptable state
>

Agree 100%. It is the lazy mode flush that might happen when preempt is
enabled, but lazy mode is disabled. In that case, the code relies on
per-cpu variables, which is a bad thing to do in preemptible code. This
can happen in the current code path.

Thinking slightly deeper about it, it might be the case that there is no
bug, because the local lazy mode variables are only _modified_ in the
non-preemptible state, and guaranteed to be zero in the preemptible
state; but it was not clear to me that this is always the case, and I
got very nervous about reading per-cpu variables with preempt enabled.
It would, in any case, fire a BUG_ON in the Xen code, which I did fix.

Do you agree it is better to be safe than sorry in this case? The kind
of bugs introduced by getting this wrong are really hard to find, and I
would rather err on the side of an extra increment and decrement of
preempt_count than causing a regression.
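
The shape of such a fix, roughly (the attached patch isn't quoted
here, so take this as a sketch with illustrative names):

	/* lazy_mode and flush_update_queue() are illustrative names */
	static void paravirt_flush_lazy_mmu(void)
	{
		preempt_disable();	/* the extra increment... */
		if (__get_cpu_var(lazy_mode) == PARAVIRT_LAZY_MMU)
			flush_update_queue();
		preempt_enable();	/* ...and decrement of preempt_count */
	}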

Zach

2007-08-25 11:58:14

by Rusty Russell

Subject: Re: [PATCH] Fix preemptible lazy mode bug

On Thu, 2007-08-23 at 23:59 -0700, Zachary Amsden wrote:
> Jeremy Fitzhardinge wrote:
> > Hm. Doing any kind of lazy-state operation with preemption enabled is
> > fundamentally meaningless. How does it get into a preemptable state
> >
>
> Agree 100%. It is the lazy mode flush that might happen when preempt is
> enabled, but lazy mode is disabled. In that case, the code relies on
> per-cpu variables, which is a bad thing to do in preemptible code. This
> can happen in the current code path.

Frankly, we should hoist the per-cpu state into generic paravirt code,
get rid of the FLUSH "state" and only call the lazy_mode hooks when
actually entering or exiting a lazy mode.

The only reason lguest doesn't use a per-cpu var is that guests are
currently UP only. If that were fixed, we'd have identical VMI, Xen and
lguest lazy state handling.

Cheers,
Rusty.


2007-09-01 21:09:57

by Jeremy Fitzhardinge

Subject: Re: [PATCH] Fix preemptible lazy mode bug

Zachary Amsden wrote:
> Do you agree it is better to be safe than sorry in this case? The
> kind of bugs introduced by getting this wrong are really hard to find,
> and I would rather err on the side of an extra increment and decrement
> of preempt_count than causing a regression.

I think this patch is the direction we should go. I think this would
work equally well for the other pv implementations; it would probably go
into the common lazy mode logic when we get around to doing it.

J

diff -r b3fcc228c531 arch/i386/xen/enlighten.c
--- a/arch/i386/xen/enlighten.c Mon Aug 20 14:20:15 2007 -0700
+++ b/arch/i386/xen/enlighten.c Mon Aug 27 13:40:24 2007 -0700
@@ -250,6 +250,9 @@ static void xen_halt(void)

static void xen_set_lazy_mode(enum paravirt_lazy_mode mode)
{
+ if (preemptible() && mode == PARAVIRT_LAZY_FLUSH)
+ return; /* nothing to flush with preempt on */
+
BUG_ON(preemptible());

switch (mode) {


2007-09-03 20:15:06

by Rusty Russell

Subject: Re: [PATCH] Fix preemptible lazy mode bug

On Sat, 2007-09-01 at 14:09 -0700, Jeremy Fitzhardinge wrote:
> Zachary Amsden wrote:
> > Do you agree it is better to be safe than sorry in this case? The
> > kind of bugs introduced by getting this wrong are really hard to find,
> > and I would rather err on the side of an extra increment and decrement
> > of preempt_count than causing a regression.
>
> I think this patch is the direction we should go. I think this would
> work equally well for the other pv implementations; it would probably go
> into the common lazy mode logic when we get around to doing it.

Well, here's the (untested) hoisting patch...
===
Hoist per-cpu lazy_mode variable up into common code.

VMI, Xen and lguest all track paravirt_lazy_mode individually, meaning
we hand a "magic" value for set_lazy_mode() to say "flush if currently
active".

We can simplify the logic by hoisting this variable into common code.

Signed-off-by: Rusty Russell <[email protected]>

diff -r 072a0b3924fb arch/i386/kernel/vmi.c
--- a/arch/i386/kernel/vmi.c Thu Aug 30 04:47:43 2007 +1000
+++ b/arch/i386/kernel/vmi.c Tue Sep 04 05:36:11 2007 +1000
@@ -554,22 +554,14 @@ vmi_startup_ipi_hook(int phys_apicid, un

static void vmi_set_lazy_mode(enum paravirt_lazy_mode mode)
{
- static DEFINE_PER_CPU(enum paravirt_lazy_mode, lazy_mode);
-
if (!vmi_ops.set_lazy_mode)
return;

/* Modes should never nest or overlap */
- BUG_ON(__get_cpu_var(lazy_mode) && !(mode == PARAVIRT_LAZY_NONE ||
- mode == PARAVIRT_LAZY_FLUSH));
-
- if (mode == PARAVIRT_LAZY_FLUSH) {
- vmi_ops.set_lazy_mode(0);
- vmi_ops.set_lazy_mode(__get_cpu_var(lazy_mode));
- } else {
- vmi_ops.set_lazy_mode(mode);
- __get_cpu_var(lazy_mode) = mode;
- }
+ BUG_ON(get_lazy_mode() && !(mode == PARAVIRT_LAZY_NONE |
+ | mode == PARAVIRT_LAZY_FLUSH));
+
+ vmi_ops.set_lazy_mode(mode);
}

static inline int __init check_vmi_rom(struct vrom_header *rom)
diff -r 072a0b3924fb arch/i386/xen/enlighten.c
--- a/arch/i386/xen/enlighten.c Thu Aug 30 04:47:43 2007 +1000
+++ b/arch/i386/xen/enlighten.c Tue Sep 04 05:37:14 2007 +1000
@@ -52,8 +52,6 @@

EXPORT_SYMBOL_GPL(hypercall_page);

-DEFINE_PER_CPU(enum paravirt_lazy_mode, xen_lazy_mode);
-
DEFINE_PER_CPU(struct vcpu_info *, xen_vcpu);
DEFINE_PER_CPU(struct vcpu_info, xen_vcpu_info);
DEFINE_PER_CPU(unsigned long, xen_cr3);
@@ -255,23 +253,16 @@ static void xen_set_lazy_mode(enum parav

switch (mode) {
case PARAVIRT_LAZY_NONE:
- BUG_ON(x86_read_percpu(xen_lazy_mode) == PARAVIRT_LAZY_NONE);
+ BUG_ON(get_lazy_mode() == PARAVIRT_LAZY_NONE);
break;

case PARAVIRT_LAZY_MMU:
case PARAVIRT_LAZY_CPU:
- BUG_ON(x86_read_percpu(xen_lazy_mode) != PARAVIRT_LAZY_NONE);
+ BUG_ON(get_lazy_mode() != PARAVIRT_LAZY_NONE);
break;
-
- case PARAVIRT_LAZY_FLUSH:
- /* flush if necessary, but don't change state */
- if (x86_read_percpu(xen_lazy_mode) != PARAVIRT_LAZY_NONE)
- xen_mc_flush();
- return;
}

xen_mc_flush();
- x86_write_percpu(xen_lazy_mode, mode);
}

static unsigned long xen_store_tr(void)
@@ -358,7 +349,7 @@ static void xen_load_tls(struct thread_s
* loaded properly. This will go away as soon as Xen has been
* modified to not save/restore %gs for normal hypercalls.
*/
- if (xen_get_lazy_mode() == PARAVIRT_LAZY_CPU)
+ if (x86_read_percpu(paravirt_lazy_mode) == PARAVIRT_LAZY_CPU)
loadsegment(gs, 0);
}

diff -r 072a0b3924fb arch/i386/xen/multicalls.h
--- a/arch/i386/xen/multicalls.h Thu Aug 30 04:47:43 2007 +1000
+++ b/arch/i386/xen/multicalls.h Tue Sep 04 05:37:52 2007 +1000
@@ -35,7 +35,7 @@ void xen_mc_flush(void);
/* Issue a multicall if we're not in a lazy mode */
static inline void xen_mc_issue(unsigned mode)
{
- if ((xen_get_lazy_mode() & mode) == 0)
+ if ((get_lazy_mode() & mode) == 0)
xen_mc_flush();

/* restore flags saved in xen_mc_batch */
diff -r 072a0b3924fb arch/i386/xen/xen-ops.h
--- a/arch/i386/xen/xen-ops.h Thu Aug 30 04:47:43 2007 +1000
+++ b/arch/i386/xen/xen-ops.h Tue Sep 04 05:29:20 2007 +1000
@@ -29,13 +29,6 @@ unsigned long long xen_sched_clock(void)

void xen_mark_init_mm_pinned(void);

-DECLARE_PER_CPU(enum paravirt_lazy_mode, xen_lazy_mode);
-
-static inline unsigned xen_get_lazy_mode(void)
-{
- return x86_read_percpu(xen_lazy_mode);
-}
-
void __init xen_fill_possible_map(void);

void __init xen_setup_vcpu_info_placement(void);
diff -r 072a0b3924fb drivers/lguest/lguest.c
--- a/drivers/lguest/lguest.c Thu Aug 30 04:47:43 2007 +1000
+++ b/drivers/lguest/lguest.c Tue Sep 04 05:32:24 2007 +1000
@@ -93,8 +93,8 @@ static cycle_t clock_base;
/*G:035 Notice the lazy_hcall() above, rather than hcall(). This is our first
* real optimization trick!
*
- * When lazy_mode is set, it means we're allowed to defer all hypercalls and do
- * them as a batch when lazy_mode is eventually turned off. Because hypercalls
+ * When lazy mode is set, it means we're allowed to defer all hypercalls and do
+ * them as a batch when lazy mode is eventually turned off. Because hypercalls
* are reasonably expensive, batching them up makes sense. For example, a
* large mmap might update dozens of page table entries: that code calls
* lguest_lazy_mode(PARAVIRT_LAZY_MMU), does the dozen updates, then calls
@@ -102,24 +102,11 @@ static cycle_t clock_base;
*
* So, when we're in lazy mode, we call async_hypercall() to store the call for
* future processing. When lazy mode is turned off we issue a hypercall to
- * flush the stored calls.
- *
- * There's also a hack where "mode" is set to "PARAVIRT_LAZY_FLUSH" which
- * indicates we're to flush any outstanding calls immediately. This is used
- * when an interrupt handler does a kmap_atomic(): the page table changes must
- * happen immediately even if we're in the middle of a batch. Usually we're
- * not, though, so there's nothing to do. */
-static enum paravirt_lazy_mode lazy_mode; /* Note: not SMP-safe! */
+ * flush the stored calls. */
static void lguest_lazy_mode(enum paravirt_lazy_mode mode)
{
- if (mode == PARAVIRT_LAZY_FLUSH) {
- if (unlikely(lazy_mode != PARAVIRT_LAZY_NONE))
- hcall(LHCALL_FLUSH_ASYNC, 0, 0, 0);
- } else {
- lazy_mode = mode;
- if (mode == PARAVIRT_LAZY_NONE)
- hcall(LHCALL_FLUSH_ASYNC, 0, 0, 0);
- }
+ if (mode == PARAVIRT_LAZY_NONE)
+ hcall(LHCALL_FLUSH_ASYNC, 0, 0, 0);
}

static void lazy_hcall(unsigned long call,
@@ -127,7 +114,7 @@ static void lazy_hcall(unsigned long cal
unsigned long arg2,
unsigned long arg3)
{
- if (lazy_mode == PARAVIRT_LAZY_NONE)
+ if (get_lazy_mode() == PARAVIRT_LAZY_NONE)
hcall(call, arg1, arg2, arg3);
else
async_hcall(call, arg1, arg2, arg3);
diff -r 072a0b3924fb include/asm-i386/paravirt.h
--- a/include/asm-i386/paravirt.h Thu Aug 30 04:47:43 2007 +1000
+++ b/include/asm-i386/paravirt.h Tue Sep 04 05:31:47 2007 +1000
@@ -30,7 +30,6 @@ enum paravirt_lazy_mode {
PARAVIRT_LAZY_NONE = 0,
PARAVIRT_LAZY_MMU = 1,
PARAVIRT_LAZY_CPU = 2,
- PARAVIRT_LAZY_FLUSH = 3,
};

struct paravirt_ops
@@ -903,36 +902,47 @@ static inline void set_pmd(pmd_t *pmdp,
#endif /* CONFIG_X86_PAE */

#define __HAVE_ARCH_ENTER_LAZY_CPU_MODE
+DECLARE_PER_CPU(enum paravirt_lazy_mode, paravirt_lazy_mode);
+static inline enum paravirt_lazy_mode get_lazy_mode(void)
+{
+ return x86_read_percpu(paravirt_lazy_mode);
+}
+
static inline void arch_enter_lazy_cpu_mode(void)
{
PVOP_VCALL1(set_lazy_mode, PARAVIRT_LAZY_CPU);
+ x86_write_percpu(paravirt_lazy_mode, PARAVIRT_LAZY_CPU);
}

static inline void arch_leave_lazy_cpu_mode(void)
{
PVOP_VCALL1(set_lazy_mode, PARAVIRT_LAZY_NONE);
+ x86_write_percpu(paravirt_lazy_mode, PARAVIRT_LAZY_NONE);
}

static inline void arch_flush_lazy_cpu_mode(void)
{
- PVOP_VCALL1(set_lazy_mode, PARAVIRT_LAZY_FLUSH);
-}
-
+ if (unlikely(__get_cpu_var(paravirt_lazy_mode) == PARAVIRT_LAZY_CPU))
+ arch_leave_lazy_cpu_mode();
+}

#define __HAVE_ARCH_ENTER_LAZY_MMU_MODE
static inline void arch_enter_lazy_mmu_mode(void)
{
PVOP_VCALL1(set_lazy_mode, PARAVIRT_LAZY_MMU);
+ x86_write_percpu(paravirt_lazy_mode, PARAVIRT_LAZY_MMU);
}

static inline void arch_leave_lazy_mmu_mode(void)
{
PVOP_VCALL1(set_lazy_mode, PARAVIRT_LAZY_NONE);
+ x86_write_percpu(paravirt_lazy_mode, PARAVIRT_LAZY_NONE);
}

static inline void arch_flush_lazy_mmu_mode(void)
{
- PVOP_VCALL1(set_lazy_mode, PARAVIRT_LAZY_FLUSH);
+ if (unlikely(__get_cpu_var(paravirt_lazy_mode) == PARAVIRT_LAZY_MMU))
+ arch_leave_lazy_mmu_mode();
}

void _paravirt_nop(void);


2007-09-04 13:42:40

by Jeremy Fitzhardinge

Subject: Re: [PATCH] Fix preemptible lazy mode bug

Rusty Russell wrote:
> static inline void arch_flush_lazy_mmu_mode(void)
> {
> - PVOP_VCALL1(set_lazy_mode, PARAVIRT_LAZY_FLUSH);
> + if (unlikely(__get_cpu_var(paravirt_lazy_mode) == PARAVIRT_LAZY_MMU))
> + arch_leave_lazy_mmu_mode();
> }
>

This changes the semantics a bit; previously "flush" would flush
anything pending but leave us in lazy mode. This just drops lazy mode
altogether?

I guess if we assume that flushing is a rare event then it's OK, but I
think the name's a bit misleading. How does it differ from plain
arch_leave_lazy_mmu_mode()?

J

2007-09-05 16:34:36

by Rusty Russell

Subject: Re: [PATCH] Fix preemptible lazy mode bug

On Tue, 2007-09-04 at 14:42 +0100, Jeremy Fitzhardinge wrote:
> Rusty Russell wrote:
> > static inline void arch_flush_lazy_mmu_mode(void)
> > {
> > - PVOP_VCALL1(set_lazy_mode, PARAVIRT_LAZY_FLUSH);
> > + if (unlikely(__get_cpu_var(paravirt_lazy_mode) == PARAVIRT_LAZY_MMU))
> > + arch_leave_lazy_mmu_mode();
> > }
> >
>
> This changes the semantics a bit; previously "flush" would flush
> anything pending but leave us in lazy mode. This just drops lazy mode
> altogether?
>
> I guess if we assume that flushing is a rare event then it's OK, but I
> think the name's a bit misleading. How does it differ from plain
> arch_leave_lazy_mmu_mode()?

Whether it's likely or unlikely to be in lazy mode, basically. But
you're right, this should be folded, since we don't want to "leave" lazy
mode twice.

===
Hoist per-cpu lazy_mode variable up into common code.

VMI, Xen and lguest all track paravirt_lazy_mode individually, meaning
we hand a "magic" value for set_lazy_mode() to say "flush if currently
active".

We can simplify the logic by hoisting this variable into common code.

Signed-off-by: Rusty Russell <[email protected]>

---
arch/i386/kernel/vmi.c | 16 ++++------------
arch/i386/xen/enlighten.c | 15 +++------------
arch/i386/xen/multicalls.h | 2 +-
arch/i386/xen/xen-ops.h | 7 -------
drivers/lguest/lguest.c | 25 ++++++-------------------
include/asm-i386/paravirt.h | 29 ++++++++++++++++-------------
6 files changed, 30 insertions(+), 64 deletions(-)

diff -r 072a0b3924fb arch/i386/kernel/vmi.c
--- a/arch/i386/kernel/vmi.c Thu Aug 30 04:47:43 2007 +1000
+++ b/arch/i386/kernel/vmi.c Wed Sep 05 04:06:20 2007 +1000
@@ -554,22 +554,14 @@ vmi_startup_ipi_hook(int phys_apicid, un

static void vmi_set_lazy_mode(enum paravirt_lazy_mode mode)
{
- static DEFINE_PER_CPU(enum paravirt_lazy_mode, lazy_mode);
-
if (!vmi_ops.set_lazy_mode)
return;

/* Modes should never nest or overlap */
- BUG_ON(__get_cpu_var(lazy_mode) && !(mode == PARAVIRT_LAZY_NONE ||
- mode == PARAVIRT_LAZY_FLUSH));
-
- if (mode == PARAVIRT_LAZY_FLUSH) {
- vmi_ops.set_lazy_mode(0);
- vmi_ops.set_lazy_mode(__get_cpu_var(lazy_mode));
- } else {
- vmi_ops.set_lazy_mode(mode);
- __get_cpu_var(lazy_mode) = mode;
- }
+ BUG_ON(get_lazy_mode() && !(mode == PARAVIRT_LAZY_NONE |
+ | mode == PARAVIRT_LAZY_FLUSH));
+
+ vmi_ops.set_lazy_mode(mode);
}

static inline int __init check_vmi_rom(struct vrom_header *rom)
diff -r 072a0b3924fb arch/i386/mm/fault.c
--- a/arch/i386/mm/fault.c Thu Aug 30 04:47:43 2007 +1000
+++ b/arch/i386/mm/fault.c Wed Sep 05 04:22:33 2007 +1000
@@ -251,7 +251,7 @@ static inline pmd_t *vmalloc_sync_one(pg
return NULL;
if (!pmd_present(*pmd)) {
set_pmd(pmd, *pmd_k);
- arch_flush_lazy_mmu_mode();
+ arch_leave_lazy_mmu_mode();
} else
BUG_ON(pmd_page(*pmd) != pmd_page(*pmd_k));
return pmd_k;
diff -r 072a0b3924fb arch/i386/mm/highmem.c
--- a/arch/i386/mm/highmem.c Thu Aug 30 04:47:43 2007 +1000
+++ b/arch/i386/mm/highmem.c Wed Sep 05 04:22:48 2007 +1000
@@ -42,7 +42,7 @@ void *kmap_atomic_prot(struct page *page

vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
set_pte(kmap_pte-idx, mk_pte(page, prot));
- arch_flush_lazy_mmu_mode();
+ arch_leave_lazy_mmu_mode();

return (void*) vaddr;
}
@@ -72,7 +72,7 @@ void kunmap_atomic(void *kvaddr, enum km
#endif
}

- arch_flush_lazy_mmu_mode();
+ arch_leave_lazy_mmu_mode();
pagefault_enable();
}

@@ -89,7 +89,7 @@ void *kmap_atomic_pfn(unsigned long pfn,
idx = type + KM_TYPE_NR*smp_processor_id();
vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
set_pte(kmap_pte-idx, pfn_pte(pfn, kmap_prot));
- arch_flush_lazy_mmu_mode();
+ arch_leave_lazy_mmu_mode();

return (void*) vaddr;
}
diff -r 072a0b3924fb arch/i386/xen/enlighten.c
--- a/arch/i386/xen/enlighten.c Thu Aug 30 04:47:43 2007 +1000
+++ b/arch/i386/xen/enlighten.c Wed Sep 05 04:06:20 2007 +1000
@@ -52,8 +52,6 @@

EXPORT_SYMBOL_GPL(hypercall_page);

-DEFINE_PER_CPU(enum paravirt_lazy_mode, xen_lazy_mode);
-
DEFINE_PER_CPU(struct vcpu_info *, xen_vcpu);
DEFINE_PER_CPU(struct vcpu_info, xen_vcpu_info);
DEFINE_PER_CPU(unsigned long, xen_cr3);
@@ -255,23 +253,16 @@ static void xen_set_lazy_mode(enum parav

switch (mode) {
case PARAVIRT_LAZY_NONE:
- BUG_ON(x86_read_percpu(xen_lazy_mode) == PARAVIRT_LAZY_NONE);
+ BUG_ON(get_lazy_mode() == PARAVIRT_LAZY_NONE);
break;

case PARAVIRT_LAZY_MMU:
case PARAVIRT_LAZY_CPU:
- BUG_ON(x86_read_percpu(xen_lazy_mode) != PARAVIRT_LAZY_NONE);
+ BUG_ON(get_lazy_mode() != PARAVIRT_LAZY_NONE);
break;
-
- case PARAVIRT_LAZY_FLUSH:
- /* flush if necessary, but don't change state */
- if (x86_read_percpu(xen_lazy_mode) != PARAVIRT_LAZY_NONE)
- xen_mc_flush();
- return;
}

xen_mc_flush();
- x86_write_percpu(xen_lazy_mode, mode);
}

static unsigned long xen_store_tr(void)
@@ -358,7 +349,7 @@ static void xen_load_tls(struct thread_s
* loaded properly. This will go away as soon as Xen has been
* modified to not save/restore %gs for normal hypercalls.
*/
- if (xen_get_lazy_mode() == PARAVIRT_LAZY_CPU)
+ if (x86_read_percpu(paravirt_lazy_mode) == PARAVIRT_LAZY_CPU)
loadsegment(gs, 0);
}

diff -r 072a0b3924fb arch/i386/xen/multicalls.h
--- a/arch/i386/xen/multicalls.h Thu Aug 30 04:47:43 2007 +1000
+++ b/arch/i386/xen/multicalls.h Wed Sep 05 04:06:20 2007 +1000
@@ -35,7 +35,7 @@ void xen_mc_flush(void);
/* Issue a multicall if we're not in a lazy mode */
static inline void xen_mc_issue(unsigned mode)
{
- if ((xen_get_lazy_mode() & mode) == 0)
+ if ((get_lazy_mode() & mode) == 0)
xen_mc_flush();

/* restore flags saved in xen_mc_batch */
diff -r 072a0b3924fb arch/i386/xen/xen-ops.h
--- a/arch/i386/xen/xen-ops.h Thu Aug 30 04:47:43 2007 +1000
+++ b/arch/i386/xen/xen-ops.h Wed Sep 05 04:06:20 2007 +1000
@@ -29,13 +29,6 @@ unsigned long long xen_sched_clock(void)

void xen_mark_init_mm_pinned(void);

-DECLARE_PER_CPU(enum paravirt_lazy_mode, xen_lazy_mode);
-
-static inline unsigned xen_get_lazy_mode(void)
-{
- return x86_read_percpu(xen_lazy_mode);
-}
-
void __init xen_fill_possible_map(void);

void __init xen_setup_vcpu_info_placement(void);
diff -r 072a0b3924fb drivers/lguest/lguest.c
--- a/drivers/lguest/lguest.c Thu Aug 30 04:47:43 2007 +1000
+++ b/drivers/lguest/lguest.c Wed Sep 05 04:06:20 2007 +1000
@@ -93,8 +93,8 @@ static cycle_t clock_base;
/*G:035 Notice the lazy_hcall() above, rather than hcall(). This is our first
* real optimization trick!
*
- * When lazy_mode is set, it means we're allowed to defer all hypercalls and do
- * them as a batch when lazy_mode is eventually turned off. Because hypercalls
+ * When lazy mode is set, it means we're allowed to defer all hypercalls and do
+ * them as a batch when lazy mode is eventually turned off. Because hypercalls
* are reasonably expensive, batching them up makes sense. For example, a
* large mmap might update dozens of page table entries: that code calls
* lguest_lazy_mode(PARAVIRT_LAZY_MMU), does the dozen updates, then calls
@@ -102,24 +102,11 @@ static cycle_t clock_base;
*
* So, when we're in lazy mode, we call async_hypercall() to store the call for
* future processing. When lazy mode is turned off we issue a hypercall to
- * flush the stored calls.
- *
- * There's also a hack where "mode" is set to "PARAVIRT_LAZY_FLUSH" which
- * indicates we're to flush any outstanding calls immediately. This is used
- * when an interrupt handler does a kmap_atomic(): the page table changes must
- * happen immediately even if we're in the middle of a batch. Usually we're
- * not, though, so there's nothing to do. */
-static enum paravirt_lazy_mode lazy_mode; /* Note: not SMP-safe! */
+ * flush the stored calls. */
static void lguest_lazy_mode(enum paravirt_lazy_mode mode)
{
- if (mode == PARAVIRT_LAZY_FLUSH) {
- if (unlikely(lazy_mode != PARAVIRT_LAZY_NONE))
- hcall(LHCALL_FLUSH_ASYNC, 0, 0, 0);
- } else {
- lazy_mode = mode;
- if (mode == PARAVIRT_LAZY_NONE)
- hcall(LHCALL_FLUSH_ASYNC, 0, 0, 0);
- }
+ if (mode == PARAVIRT_LAZY_NONE)
+ hcall(LHCALL_FLUSH_ASYNC, 0, 0, 0);
}

static void lazy_hcall(unsigned long call,
@@ -127,7 +114,7 @@ static void lazy_hcall(unsigned long cal
unsigned long arg2,
unsigned long arg3)
{
- if (lazy_mode == PARAVIRT_LAZY_NONE)
+ if (get_lazy_mode() == PARAVIRT_LAZY_NONE)
hcall(call, arg1, arg2, arg3);
else
async_hcall(call, arg1, arg2, arg3);
diff -r 072a0b3924fb include/asm-i386/paravirt.h
--- a/include/asm-i386/paravirt.h Thu Aug 30 04:47:43 2007 +1000
+++ b/include/asm-i386/paravirt.h Wed Sep 05 04:20:06 2007 +1000
@@ -30,7 +30,6 @@ enum paravirt_lazy_mode {
PARAVIRT_LAZY_NONE = 0,
PARAVIRT_LAZY_MMU = 1,
PARAVIRT_LAZY_CPU = 2,
- PARAVIRT_LAZY_FLUSH = 3,
};

struct paravirt_ops
@@ -903,19 +902,24 @@ static inline void set_pmd(pmd_t *pmdp,
#endif /* CONFIG_X86_PAE */

#define __HAVE_ARCH_ENTER_LAZY_CPU_MODE
+DECLARE_PER_CPU(enum paravirt_lazy_mode, paravirt_lazy_mode);
+static inline enum paravirt_lazy_mode get_lazy_mode(void)
+{
+ return x86_read_percpu(paravirt_lazy_mode);
+}
+
static inline void arch_enter_lazy_cpu_mode(void)
{
PVOP_VCALL1(set_lazy_mode, PARAVIRT_LAZY_CPU);
+ x86_write_percpu(paravirt_lazy_mode, PARAVIRT_LAZY_CPU);
}

static inline void arch_leave_lazy_cpu_mode(void)
{
- PVOP_VCALL1(set_lazy_mode, PARAVIRT_LAZY_NONE);
-}
-
-static inline void arch_flush_lazy_cpu_mode(void)
-{
- PVOP_VCALL1(set_lazy_mode, PARAVIRT_LAZY_FLUSH);
+ if (get_lazy_mode() == PARAVIRT_LAZY_CPU) {
+ PVOP_VCALL1(set_lazy_mode, PARAVIRT_LAZY_NONE);
+ x86_write_percpu(paravirt_lazy_mode, PARAVIRT_LAZY_NONE);
+ }
}


@@ -923,16 +927,15 @@ static inline void arch_enter_lazy_mmu_m
static inline void arch_enter_lazy_mmu_mode(void)
{
PVOP_VCALL1(set_lazy_mode, PARAVIRT_LAZY_MMU);
+ x86_write_percpu(paravirt_lazy_mode, PARAVIRT_LAZY_MMU);
}

static inline void arch_leave_lazy_mmu_mode(void)
{
- PVOP_VCALL1(set_lazy_mode, PARAVIRT_LAZY_NONE);
-}
-
-static inline void arch_flush_lazy_mmu_mode(void)
-{
- PVOP_VCALL1(set_lazy_mode, PARAVIRT_LAZY_FLUSH);
+ if (get_lazy_mode() == PARAVIRT_LAZY_MMU) {
+ PVOP_VCALL1(set_lazy_mode, PARAVIRT_LAZY_NONE);
+ x86_write_percpu(paravirt_lazy_mode, PARAVIRT_LAZY_NONE);
+ }
}

void _paravirt_nop(void);


2007-09-05 17:05:31

by Zachary Amsden

Subject: Re: [PATCH] Fix preemptible lazy mode bug

On Thu, 2007-09-06 at 02:33 +1000, Rusty Russell wrote:
> On Tue, 2007-09-04 at 14:42 +0100, Jeremy Fitzhardinge wrote:
> > Rusty Russell wrote:
> > > static inline void arch_flush_lazy_mmu_mode(void)
> > > {
> > > - PVOP_VCALL1(set_lazy_mode, PARAVIRT_LAZY_FLUSH);
> > > + if (unlikely(__get_cpu_var(paravirt_lazy_mode) == PARAVIRT_LAZY_MMU))
> > > + arch_leave_lazy_mmu_mode();
> > > }
> > >
> >
> > This changes the semantics a bit; previously "flush" would flush
> > anything pending but leave us in lazy mode. This just drops lazy mode
> > altogether?
> >
> > I guess if we assume that flushing is a rare event then it's OK, but I
> > think the name's a bit misleading. How does it differ from plain
> > arch_leave_lazy_mmu_mode()?
>
> Whether it's likely or unlikely to be in lazy mode, basically. But
> you're right, this should be folded, since we don't want to "leave" lazy
> mode twice.
>
> ===
> Hoist per-cpu lazy_mode variable up into common code.
>
> VMI, Xen and lguest all track paravirt_lazy_mode individually, meaning
> we hand a "magic" value for set_lazy_mode() to say "flush if currently
> active".
>
> We can simplify the logic by hoisting this variable into common code.
>
> Signed-off-by: Rusty Russell <[email protected]>
>
> ---
> arch/i386/kernel/vmi.c | 16 ++++------------
> arch/i386/xen/enlighten.c | 15 +++------------
> arch/i386/xen/multicalls.h | 2 +-
> arch/i386/xen/xen-ops.h | 7 -------
> drivers/lguest/lguest.c | 25 ++++++-------------------
> include/asm-i386/paravirt.h | 29 ++++++++++++++++-------------
> 6 files changed, 30 insertions(+), 64 deletions(-)
>
> diff -r 072a0b3924fb arch/i386/kernel/vmi.c
> --- a/arch/i386/kernel/vmi.c Thu Aug 30 04:47:43 2007 +1000
> +++ b/arch/i386/kernel/vmi.c Wed Sep 05 04:06:20 2007 +1000
> @@ -554,22 +554,14 @@ vmi_startup_ipi_hook(int phys_apicid, un
>
> static void vmi_set_lazy_mode(enum paravirt_lazy_mode mode)
> {
> - static DEFINE_PER_CPU(enum paravirt_lazy_mode, lazy_mode);
> -
> if (!vmi_ops.set_lazy_mode)
> return;
>
> /* Modes should never nest or overlap */
> - BUG_ON(__get_cpu_var(lazy_mode) && !(mode == PARAVIRT_LAZY_NONE ||
> - mode == PARAVIRT_LAZY_FLUSH));
> -
> - if (mode == PARAVIRT_LAZY_FLUSH) {
> - vmi_ops.set_lazy_mode(0);
> - vmi_ops.set_lazy_mode(__get_cpu_var(lazy_mode));
> - } else {
> - vmi_ops.set_lazy_mode(mode);
> - __get_cpu_var(lazy_mode) = mode;
> - }
> + BUG_ON(get_lazy_mode() && !(mode == PARAVIRT_LAZY_NONE |
> + | mode == PARAVIRT_LAZY_FLUSH));

That's a pretty strange line break.



2007-09-05 17:49:57

by Rusty Russell

Subject: Re: [PATCH] Fix preemptible lazy mode bug

On Wed, 2007-09-05 at 10:05 -0700, Zachary Amsden wrote:
> On Thu, 2007-09-06 at 02:33 +1000, Rusty Russell wrote:

> > + BUG_ON(get_lazy_mode() && !(mode == PARAVIRT_LAZY_NONE |
> > + | mode == PARAVIRT_LAZY_FLUSH));
>
> That's a pretty strange line break.

Sorry, should have mentioned it's completely untested: too far from my
test machine 8(

Thanks,
Rusty.

2007-09-05 20:11:37

by Jeremy Fitzhardinge

Subject: Re: [PATCH] Fix preemptible lazy mode bug

Rusty Russell wrote:
> On Tue, 2007-09-04 at 14:42 +0100, Jeremy Fitzhardinge wrote:
>
>> Rusty Russell wrote:
>>
>>> static inline void arch_flush_lazy_mmu_mode(void)
>>> {
>>> - PVOP_VCALL1(set_lazy_mode, PARAVIRT_LAZY_FLUSH);
>>> + if (unlikely(__get_cpu_var(paravirt_lazy_mode) == PARAVIRT_LAZY_MMU))
>>> + arch_leave_lazy_mmu_mode();
>>> }
>>>
>>>
>> This changes the semantics a bit; previously "flush" would flush
>> anything pending but leave us in lazy mode. This just drops lazy mode
>> altogether?
>>
>> I guess if we assume that flushing is a rare event then it's OK, but I
>> think the name's a bit misleading. How does it differ from plain
>> arch_leave_lazy_mmu_mode()?
>>
>
> Whether it's likely or unlikely to be in lazy mode, basically. But
> you're right, this should be folded, since we don't want to "leave" lazy
> mode twice.
>

Hm, I think there's still a problem here. In the current code, you can
legitimately flush lazy mode with preemption enabled (ie, there's no
lazy mode currently active), but it's always a bug to enable/disable
lazy mode with preemption enabled. Certainly enabling lazy mode with
preemption enabled is always a bug, but you could make disabling it
preempt-safe (and the bug checking should be in the common code).
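
As a sketch of what a preempt-safe "leave" could look like in the
common code (names as in Rusty's patch; untested):

	static inline void arch_leave_lazy_mmu_mode(void)
	{
		/* Pin the CPU across the check-and-clear rather than
		 * assuming the caller disabled preemption; entering
		 * would still BUG_ON(preemptible()) in common code. */
		preempt_disable();
		if (__get_cpu_var(paravirt_lazy_mode) == PARAVIRT_LAZY_MMU) {
			PVOP_VCALL1(set_lazy_mode, PARAVIRT_LAZY_NONE);
			x86_write_percpu(paravirt_lazy_mode, PARAVIRT_LAZY_NONE);
		}
		preempt_enable();
	}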

J

2007-09-05 20:38:21

by Rusty Russell

Subject: Re: [PATCH] Fix preemptible lazy mode bug

On Thu, 2007-08-23 at 22:46 -0700, Zachary Amsden wrote:
> I recently sent off a fix for lazy vmalloc faults which can happen under
> paravirt when lazy mode is enabled. Unfortunately, I jumped the gun a
> bit on fixing this. I neglected to notice that since the new call to
> flush the MMU update queue is called from the page fault handler, it can
> be pre-empted. Both VMI and Xen use per-cpu variables to track lazy
> mode state, as all previous calls to set, disable, or flush lazy mode
> happened from a non-preemptable state.

Hi Zach,

I don't think this patch does anything. The flush is because we want
the just-completed "set_pte" to have immediate effect, so if preempt is
enabled we're already screwed because we can be moved between set_pte
and the arch_flush_lazy_mmu_mode() call.
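
That is, schematically:

	set_pte(ptep, pte);		/* queued if this CPU is in lazy mode */
	/* a preemption point here would already lose: we could resume
	 * on another CPU and flush the wrong queue */
	arch_flush_lazy_mmu_mode();	/* flush meant for the set_pte above */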

Now, where's the problem caller? By my reading of rc4, vmalloc faults
are fixed up before enabling interrupts.

Confused,
Rusty.


2007-09-05 23:49:55

by Zachary Amsden

Subject: Re: [PATCH] Fix preemptible lazy mode bug

On Thu, 2007-09-06 at 06:37 +1000, Rusty Russell wrote:
> On Thu, 2007-08-23 at 22:46 -0700, Zachary Amsden wrote:
> > I recently sent off a fix for lazy vmalloc faults which can happen under
> > paravirt when lazy mode is enabled. Unfortunately, I jumped the gun a
> > bit on fixing this. I neglected to notice that since the new call to
> > flush the MMU update queue is called from the page fault handler, it can
> > be pre-empted. Both VMI and Xen use per-cpu variables to track lazy
> > mode state, as all previous calls to set, disable, or flush lazy mode
> > happened from a non-preemptable state.
>
> Hi Zach,
>
> I don't think this patch does anything. The flush is because we want
> the just-completed "set_pte" to have immediate effect, so if preempt is
> enabled we're already screwed because we can be moved between set_pte
> and the arch_flush_lazy_mmu_mode() call.
>
> Now, where's the problem caller? By my reading of rc4, vmalloc faults
> are fixed up before enabling interrupts.

I agree. The patch is a nop. I just got overly paranoid. The whole
thing is just very prone to bugs.

So let's go over it carefully:

1) Lazy mode can't be entered unless preempt is disabled
2) Flushes are needed in two known cases: kmap_atomic and vmalloc sync
3) kmap_atomic can only be used when preempt is disabled
4) vmalloc sync happens under protection of interrupts disabled

Good logically. What can break this logic?

#1 is defined by us
#2 is our currently believed complete list of flush scenarios
#3 is impossible to change by design
#4 seems very unlikely to change anytime soon

Seeing that #2 appears weak, let us elaborate:

A) Lazy mode is used in a couple of controlled paths for user page table
updates which require no immediately updated mapping; further, they are
protected under spinlocks, thus never preempted
B) Thus only kernel mapping updates require explicit flushes
C) Any interrupt / fault during lazy mode can only use kmap_atomic or a
set_pmd to sync a vmalloc region, thus proving my point by circular
logic (for interrupt / fault cases).
D) Or better, other kernel mapping changes (kmap, the pageattr code,
boot_ioremap, vmap, vm86 mark_screen_rdonly) are not usable by
interrupt / fault handlers, thus proving C by exclusion.
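
For reference on #3: the i386 highmem code of this era starts every
atomic kmap by bumping the preempt count, so any flush it triggers
runs non-preemptible. Paraphrased rather than quoted verbatim:

	void *kmap_atomic_prot(struct page *page, enum km_type type,
			       pgprot_t prot)
	{
		pagefault_disable();	/* inc_preempt_count() under the hood */
		/* ... install the fixmap pte and return its vaddr ... */
	}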

So I'm fairly certain there are no further issues with interrupt handlers
or faults, where update semantics are heavily constrained. What of the
actual lazy mode code itself doing kernel remapping?

Most of these are clearly bogus (pageattr, boot_ioremap, vm86 stuff) for
the mm code to use inside a spinlock protected lazy mode region; it does
seem perfectly acceptable though for the mm code to use kmap or vmap
(not kmap_atomic) internally somewhere in the pagetable code.

We can exclude the trivial lazy mode regions (zeromap, unmap, and
remap) easily by inspection. The PTE copy routine gets deep enough
that not all paths are immediately obvious, though, so we should keep
it in mind for bug checking.

Zach

2007-09-06 05:42:53

by Andi Kleen

Subject: Re: [PATCH] Fix preemptible lazy mode bug

> I agree. The patch is a nop. I just got overly paranoid. The whole
> thing is just very prone to bugs.
So do we need a patch for .23 or not?

> ; it does
> seem perfectly acceptable though for the mm code to use kmap or vmap
> (not kmap_atomic) internally somewhere in the pagetable code.

i386 does it all the time for highmem pagetables in fact.
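
What Andi is referring to, roughly (paraphrased from the era's i386
pgtable.h, CONFIG_HIGHPTE case, so treat the exact form as a sketch):
page table pages may live in highmem, so walking them maps the pte
page with an atomic kmap:

	/* paraphrase of the CONFIG_HIGHPTE pte_offset_map() */
	#define pte_offset_map(dir, address)				\
		((pte_t *)kmap_atomic_pte(pmd_page(*(dir)), KM_PTE0) +	\
		 pte_index(address))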

-Andi

2007-09-06 09:57:10

by Jeremy Fitzhardinge

Subject: Re: [PATCH] Fix preemptible lazy mode bug

Andi Kleen wrote:
>> ; it does
>> seem perfectly acceptable though for the mm code to use kmap or vmap
>> (not kmap_atomic) internally somewhere in the pagetable code.
>>
>
> i386 does it all the time for highmem pagetables in fact.
>

Yes, it uses kmap_atomic(_pte) all the time. Is that what you meant?

J

2007-09-06 09:57:59

by Jeremy Fitzhardinge

Subject: Re: [PATCH] Fix preemptible lazy mode bug

Andi Kleen wrote:
>> I agree. The patch is a nop. I just got overly paranoid. The whole
>> thing is just very prone to bugs.
>>
> So do we need a patch for .23 or not?
>

Forgot this bit. No, I think the upshot is that it isn't necessary (nor
Rusty's).

J