2019-09-27 23:42:23

by Leonardo Bras

Subject: [PATCH v4 01/11] powerpc/mm: Adds counting method to monitor lockless pgtable walks

It's necessary to monitor lockless pagetable walks in order to avoid doing
THP splitting/collapsing during them.

Some methods rely on local_irq_{save,restore}, but that can be slow in
cases where a lot of cpus are used by the process.

In order to speed up some cases, I propose a refcount-based approach that
counts the number of lockless pagetable walks happening in the process.

This method does not exclude the current irq-oriented method. It works as a
complement to skip unnecessary waiting.

start_lockless_pgtbl_walk(mm)
        Insert before starting any lockless pgtable walk
end_lockless_pgtbl_walk(mm)
        Insert after the end of any lockless pgtable walk
        (Mostly after the ptep is last used)
running_lockless_pgtbl_walk(mm)
        Returns the number of lockless pgtable walks running
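
As an illustration (not part of this patch, and the helper name below is
made up), a walker brackets the walk like this, using __find_linux_pte()
as the example walker:

static pte_t lookup_pte_lockless(struct mm_struct *mm, unsigned long addr)
{
        pte_t *ptep, pte = __pte(0);
        unsigned int shift;

        start_lockless_pgtbl_walk(mm);
        ptep = __find_linux_pte(mm->pgd, addr, NULL, &shift);
        if (ptep) {
                /* Read-only use of the ptep; no page table locks held. */
                pte = READ_ONCE(*ptep);
        }
        end_lockless_pgtbl_walk(mm);

        return pte;
}

The THP split/collapse side can then check the counter before falling back
to the more expensive serialization (shape only, the actual users come in
later patches of this series):

        if (running_lockless_pgtbl_walk(mm))
                serialize_against_pte_lookup(mm);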

Signed-off-by: Leonardo Bras <[email protected]>
---
arch/powerpc/include/asm/book3s/64/mmu.h | 3 ++
arch/powerpc/mm/book3s64/mmu_context.c | 1 +
arch/powerpc/mm/book3s64/pgtable.c | 45 ++++++++++++++++++++++++
3 files changed, 49 insertions(+)

diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h b/arch/powerpc/include/asm/book3s/64/mmu.h
index 23b83d3593e2..13b006e7dde4 100644
--- a/arch/powerpc/include/asm/book3s/64/mmu.h
+++ b/arch/powerpc/include/asm/book3s/64/mmu.h
@@ -116,6 +116,9 @@ typedef struct {
/* Number of users of the external (Nest) MMU */
atomic_t copros;

+ /* Number of running instances of lockless pagetable walk */
+ atomic_t lockless_pgtbl_walk_count;
+
struct hash_mm_context *hash_context;

unsigned long vdso_base;
diff --git a/arch/powerpc/mm/book3s64/mmu_context.c b/arch/powerpc/mm/book3s64/mmu_context.c
index 2d0cb5ba9a47..3dd01c0ca5be 100644
--- a/arch/powerpc/mm/book3s64/mmu_context.c
+++ b/arch/powerpc/mm/book3s64/mmu_context.c
@@ -200,6 +200,7 @@ int init_new_context(struct task_struct *tsk, struct mm_struct *mm)
#endif
atomic_set(&mm->context.active_cpus, 0);
atomic_set(&mm->context.copros, 0);
+ atomic_set(&mm->context.lockless_pgtbl_walk_count, 0);

return 0;
}
diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
index 7d0e0d0d22c4..6ba6195bff1b 100644
--- a/arch/powerpc/mm/book3s64/pgtable.c
+++ b/arch/powerpc/mm/book3s64/pgtable.c
@@ -98,6 +98,51 @@ void serialize_against_pte_lookup(struct mm_struct *mm)
smp_call_function_many(mm_cpumask(mm), do_nothing, NULL, 1);
}

+/*
+ * Counting method to monitor lockless pagetable walks:
+ * Uses start_lockless_pgtbl_walk and end_lockless_pgtbl_walk to track the
+ * number of lockless pgtable walks happening, and
+ * running_lockless_pgtbl_walk to return this value.
+ */
+
+/* start_lockless_pgtbl_walk: Must be inserted before a function call that does
+ * lockless pagetable walks, such as __find_linux_pte()
+ */
+void start_lockless_pgtbl_walk(struct mm_struct *mm)
+{
+ atomic_inc(&mm->context.lockless_pgtbl_walk_count);
+ /* Avoid reorder to guarantee that the increment will happen before any
+ * part of the lockless pagetable walk after it.
+ */
+ smp_mb();
+}
+EXPORT_SYMBOL(start_lockless_pgtbl_walk);
+
+/*
+ * end_lockless_pgtbl_walk: Must be inserted after the last use of a pointer
+ * returned by a lockless pagetable walk, such as __find_linux_pte()
+ */
+void end_lockless_pgtbl_walk(struct mm_struct *mm)
+{
+ /* Avoid reorder to guarantee that it will only decrement after the last
+ * use of the returned ptep from the lockless pagetable walk.
+ */
+ smp_mb();
+ atomic_dec(&mm->context.lockless_pgtbl_walk_count);
+}
+EXPORT_SYMBOL(end_lockless_pgtbl_walk);
+
+/*
+ * running_lockless_pgtbl_walk: Returns the number of lockless pagetable walks
+ * currently running. If it returns 0, there is no running pagetable walk, and
+ * THP split/collapse can be safely done. This can be used to avoid more
+ * expensive approaches like serialize_against_pte_lookup()
+ */
+int running_lockless_pgtbl_walk(struct mm_struct *mm)
+{
+ return atomic_read(&mm->context.lockless_pgtbl_walk_count);
+}
+
/*
* We use this to invalidate a pmdp entry before switching from a
* hugepte to regular pmd entry.
--
2.20.1


2019-09-30 15:18:32

by Leonardo Bras

Subject: Re: [PATCH v4 01/11] powerpc/mm: Adds counting method to monitor lockless pgtable walks

On Sun, 2019-09-29 at 15:40 -0700, John Hubbard wrote:
> Hi, Leonardo,

Hello John, thanks for the feedback.

> Can we please do it as shown below, instead (compile-tested only)?
>
> This addresses all of the comments that I was going to make about structure
> of this patch, which are:
>
> * The lockless synch is tricky, so it should be encapsulated in function
> calls if possible.

As I said before, there are cases where this function is called from
'real mode' in powerpc, which doesn't disable irqs and may behave in a
tricky way if we do. So, encapsulating the irq disable in this
function can be a bad choice.

Of course, if we really need that, we can add a bool parameter to the
function to choose whether to disable/enable irqs.
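
Something along these lines (rough sketch only; the extra flags parameter
is just one possible shape, not something from this series):

void start_lockless_pgtbl_walk(struct mm_struct *mm, bool disable_irq,
                               unsigned long *irq_flags)
{
        if (disable_irq)
                local_irq_save(*irq_flags);

        atomic_inc(&mm->context.lockless_pgtbl_walk_count);
        /* Pairs with the barrier on the THP split/collapse side. */
        smp_mb();
}

Real-mode callers would pass disable_irq == false, since irqs are already
off there.
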
>
> * This is really a core mm function, so don't hide it away in arch layers.
> (If you're changing mm/ files, that's a big hint.)

My idea here is to let the arch decide how this 'register' is going
to work, as archs may have different needs (in powerpc, for example, we
can't always disable irqs, since we may be in real mode).

Maybe we can create a generic function instead of a dummy, and let it
be replaced if the arch needs to do so.

> * Other things need parts of this: gup.c needs the memory barriers; IMHO you'll
> be fixing a pre-existing, theoretical (we've never seen bug reports) problem.

Humm, you are right. Here I would suggest adding the barrier to the
generic function.

> * The documentation needs to accurately explain what's going on here.

Yes, my documentation was probably not good enough due to my lack of
experience with memory barriers (I learnt about using them last week
and tried to come up with the best solution).

> (Not shown: one or more of the PPC Kconfig files should select
> LOCKLESS_PAGE_TABLE_WALK_TRACKING.)

The way it works today is by defining it in the platform pgtable.h. I
agree that using Kconfig may be a better solution, as it would make this
config more visible to enable/disable.

Thanks for the feedback,

Leonardo Bras



2019-09-30 20:41:45

by Leonardo Bras

Subject: Re: [PATCH v4 01/11] powerpc/mm: Adds counting method to monitor lockless pgtable walks

On Mon, 2019-09-30 at 10:57 -0700, John Hubbard wrote:
> > As I said before, there are cases where this function is called from
> > 'real mode' in powerpc, which doesn't disable irqs and may behave in a
> > tricky way if we do. So, encapsulating the irq disable in this
> > function can be a bad choice.
>
> You still haven't explained how this works in that case. So far, the
> synchronization we've discussed has depended upon interrupt disabling
> as part of the solution, in order to hold off page splitting and page
> table freeing.

The irqs are already disabled by another mechanism (hw): MSR_EE=0.
So, serialize will work as expected.

> Simply skipping that means that an additional mechanism is required...which
> btw might involve a new, ppc-specific routine, so maybe this is going to end
> up pretty close to what I pasted in after all...
> > Of course, if we really need that, we can add a bool parameter to the
> > function to choose whether to disable/enable irqs.
> > > * This is really a core mm function, so don't hide it away in arch layers.
> > > (If you're changing mm/ files, that's a big hint.)
> >
> > My idea here is to let the arch decide how this 'register' is going
> > to work, as archs may have different needs (in powerpc, for example, we
> > can't always disable irqs, since we may be in real mode).
> >
> > Maybe we can create a generic function instead of a dummy, and let it
> > be replaced if the arch needs to do so.
>
> Yes, that might be what we need, if it turns out that ppc can't use this
> approach (although let's see about that).
>

I initially used the dummy approach because I did not see anything like
serialize in other archs.

I mean, even if I put some generic function here, if there is no
function to use the 'lockless_pgtbl_walk_count', it becomes only overhead.

>
> thanks,

Thank you!



2019-10-01 18:41:36

by Leonardo Bras

Subject: Re: [PATCH v4 01/11] powerpc/mm: Adds counting method to monitor lockless pgtable walks

On Mon, 2019-09-30 at 14:47 -0700, John Hubbard wrote:
> On 9/30/19 11:42 AM, Leonardo Bras wrote:
> > On Mon, 2019-09-30 at 10:57 -0700, John Hubbard wrote:
> > > > As I said before, there are cases where this function is called from
> > > > 'real mode' in powerpc, which doesn't disable irqs and may behave in a
> > > > tricky way if we do. So, encapsulating the irq disable in this
> > > > function can be a bad choice.
> > >
> > > You still haven't explained how this works in that case. So far, the
> > > synchronization we've discussed has depended upon interrupt disabling
> > > as part of the solution, in order to hold off page splitting and page
> > > table freeing.
> >
> > The irqs are already disabled by another mechanism (hw): MSR_EE=0.
> > So, serialize will work as expected.
>
> I get that they're disabled. But will this interlock with the code that
> issues IPIs?? Because it's not just disabling interrupts that matters, but
> rather, synchronizing with the code (TLB flushing) that *happens* to
> require issuing IPIs, which in turn interact with disabling interrupts.
>
> So I'm still not seeing how that could work here, unless there is something
> interesting about the smp_call_function_many() on ppc with MSR_EE=0 mode...?
>

I am failing to understand the issue.
I mean, smp_call_function_many() will issue an IPI to each CPU in the
cpumask and wait for it to run before returning.
If interrupts are disabled (either by MSR_EE=0 or local_irq_disable),
the IPI will not run on that CPU, and the wait part will make sure to
block the caller until interrupts are enabled again.

Could you please point out the issue there?
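
To spell out the interlock I have in mind, here is a rough sketch of the
two sides (declarations of mm and addr omitted):

/* Walker side (e.g. a gup_fast-style walk): irqs off on this CPU. */
unsigned long flags;
unsigned int shift;
pte_t *ptep;

local_irq_save(flags);
ptep = __find_linux_pte(mm->pgd, addr, NULL, &shift);
/* ... read-only use of ptep ... */
local_irq_restore(flags);  /* only now can the IPI below run here */

/* Split/collapse side (another CPU): serialize_against_pte_lookup()
 * waits for every CPU in mm_cpumask(mm) to take the do_nothing IPI,
 * i.e. until no CPU is mid-walk with interrupts (or MSR_EE) off.
 */
smp_call_function_many(mm_cpumask(mm), do_nothing, NULL, 1);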

> > > Simply skipping that means that an additional mechanism is required...which
> > > btw might involve a new, ppc-specific routine, so maybe this is going to end
> > > up pretty close to what I pasted in after all...
> > > > Of course, if we really need that, we can add a bool parameter to the
> > > > function to choose whether to disable/enable irqs.
> > > > > * This is really a core mm function, so don't hide it away in arch layers.
> > > > > (If you're changing mm/ files, that's a big hint.)
> > > >
> > > > My idea here is to let the arch decide how this 'register' is going
> > > > to work, as archs may have different needs (in powerpc, for example, we
> > > > can't always disable irqs, since we may be in real mode).
>
> Yes, the tension there is that a) some things are per-arch, and b) it's easy
> to get it wrong. The commit below (d9101bfa6adc) is IMHO a perfect example of
> that.
>
> So, I would like core mm/ functions that guide the way, but the interrupt
> behavior complicates it. I think your original passing of just struct_mm
> is probably the right balance, assuming that I'm wrong about interrupts.
>

I think, for the generic function, that including {en,dis}abling the
interrupt is fine. I mean, if disabling the interrupt is the generic
behavior, it's ok.
I will just make sure to explain that the interrupt {en,dis}abling is
part of the sync process. If an arch doesn't like it, it can write a
specific function that does the sync in a better way (and define
__HAVE_ARCH_LOCKLESS_PGTBL_WALK_COUNTER to skip the generic function).

In this case, the generic function would also include the ifdef'ed
atomic inc and the memory barrier.
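
Roughly what I have in mind for the generic version (sketch only; the
field name, the CONFIG_ symbol and returning the irq mask are all
tentative):

#ifndef __HAVE_ARCH_LOCKLESS_PGTBL_WALK_COUNTER
static inline unsigned long start_lockless_pgtbl_walk(struct mm_struct *mm)
{
        unsigned long irq_mask;

#ifdef CONFIG_LOCKLESS_PAGE_TABLE_WALK_TRACKING
        atomic_inc(&mm->lockless_pgtbl_walk_count);
#endif
        /*
         * Interrupt disabling is part of the sync here: it holds off the
         * IPIs used by THP split/collapse and pagetable freeing.
         */
        local_irq_save(irq_mask);

        /* Pairs with the barrier on the split/collapse side. */
        smp_mb();

        return irq_mask;
}
#endif /* __HAVE_ARCH_LOCKLESS_PGTBL_WALK_COUNTER */

end_lockless_pgtbl_walk(mm, irq_mask) would then do the matching barrier,
decrement and irq restore. powerpc would define
__HAVE_ARCH_LOCKLESS_PGTBL_WALK_COUNTER and keep its own version, which
can skip the irq part when running in real mode.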

>
> > > > Maybe we can create a generic function instead of a dummy, and let it
> > > > be replaced if the arch needs to do so.
> > >
> > > Yes, that might be what we need, if it turns out that ppc can't use this
> > > approach (although let's see about that).
> > >
> >
> > I initially used the dummy approach because I did not see anything like
> > serialize in other archs.
> >
> > I mean, even if I put some generic function here, if there is no
> > function to use the 'lockless_pgtbl_walk_count', it becomes only overhead.
> >
>
> Not really: the memory barrier is required in all cases, and this code
> would be good I think:
>
> +void register_lockless_pgtable_walker(struct mm_struct *mm)
> +{
> +#ifdef LOCKLESS_PAGE_TABLE_WALK_TRACKING
> + atomic_inc(&mm->lockless_pgtbl_nr_walkers);
> +#endif
> + /*
> + * This memory barrier pairs with any code that is either trying to
> + * delete page tables, or split huge pages.
> + */
> + smp_mb();
> +}
> +EXPORT_SYMBOL_GPL(register_lockless_pgtable_walker);
>
> And this is the same as your original patch, with just a minor name change:
>
> @@ -2341,9 +2395,11 @@ int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
>
> if (IS_ENABLED(CONFIG_HAVE_FAST_GUP) &&
> gup_fast_permitted(start, end)) {
> + register_lockless_pgtable_walker(current->mm);
> local_irq_save(flags);
> gup_pgd_range(start, end, write ? FOLL_WRITE : 0, pages, &nr);
> local_irq_restore(flags);
> + deregister_lockless_pgtable_walker(current->mm);
>
>
> Btw, hopefully minor note: it also looks like there's a number of changes in the same
> area that conflict, for example:
>
> commit d9101bfa6adc ("powerpc/mm/mce: Keep irqs disabled during lockless
> page table walk") <Aneesh Kumar K.V> (Thu, 19 Sep 2019)
>
> ...so it would be good to rebase this onto 5.4-rc1, now that that's here.
>

Yeap, agree. Already rebased on top of v5.4-rc1.

>
> thanks,

Thank you!

