2007-06-29 14:14:15

by Martin Schwidefsky

Subject: [patch 1/5] avoid tlb gather restarts.

From: Martin Schwidefsky <[email protected]>

If need_resched() is false it is unnecessary to call tlb_finish_mmu()
and tlb_gather_mmu() for each vma in unmap_vmas(). Moving the tlb gather
restart under the if that contains the cond_resched() will avoid
unnecessary tlb flush operations that are triggered by tlb_finish_mmu()
and tlb_gather_mmu().

Signed-off-by: Martin Schwidefsky <[email protected]>
---

mm/memory.c | 7 +++----
1 files changed, 3 insertions(+), 4 deletions(-)

diff -urpN linux-2.6/mm/memory.c linux-2.6-patched/mm/memory.c
--- linux-2.6/mm/memory.c 2007-06-29 15:44:08.000000000 +0200
+++ linux-2.6-patched/mm/memory.c 2007-06-29 15:44:08.000000000 +0200
@@ -851,19 +851,18 @@ unsigned long unmap_vmas(struct mmu_gath
break;
}

- tlb_finish_mmu(*tlbp, tlb_start, start);
-
if (need_resched() ||
(i_mmap_lock && need_lockbreak(i_mmap_lock))) {
+ tlb_finish_mmu(*tlbp, tlb_start, start);
if (i_mmap_lock) {
*tlbp = NULL;
goto out;
}
cond_resched();
+ *tlbp = tlb_gather_mmu(vma->vm_mm, fullmm);
+ tlb_start_valid = 0;
}

- *tlbp = tlb_gather_mmu(vma->vm_mm, fullmm);
- tlb_start_valid = 0;
zap_work = ZAP_BLOCK_SIZE;
}
}

--
blue skies,
Martin.

"Reality continues to ruin my life." - Calvin.


2007-06-29 18:57:31

by Hugh Dickins

Subject: Re: [patch 1/5] avoid tlb gather restarts.

I don't dare comment on your page_mkclean_one patch (5/5);
that dirty page business has grown too subtle for me.

Your cleanups 2-4 look good, especially the mm_types.h one (how
confident are you that everything builds?), and I'm glad we can
now lay ptep_establish to rest. Though I think you may have
missed removing a __HAVE_ARCH_PTEP... from frv at least?

But this one...

On Fri, 29 Jun 2007, Martin Schwidefsky wrote:

> If need_resched() is false it is unnecessary to call tlb_finish_mmu()
> and tlb_gather_mmu() for each vma in unmap_vmas(). Moving the tlb gather
> restart under the if that contains the cond_resched() will avoid
> unnecessary tlb flush operations that are triggered by tlb_finish_mmu()
> and tlb_gather_mmu().
>
> Signed-off-by: Martin Schwidefsky <[email protected]>

Sorry, no. It looks reasonable, but unmap_vmas is treading a delicate
and uncomfortable line between hi-performance and lo-latency: you've
chosen to improve performance at the expense of latency.

You think you're just moving the finish/gather to where they're
actually necessary; but the thing is, that per-cpu struct mmu_gather
is liable to accumulate a lot of unpreemptible work for the future
tlb_finish_mmu, particularly when anon pages are associated with swap.
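
For reference, the gather looks roughly like this (abridged from my
memory of the 2.6-era include/asm-generic/tlb.h; per-arch versions
differ, so treat the details as approximate):

#ifdef CONFIG_SMP
  #define FREE_PTE_NR	506
  #define tlb_fast_mode(tlb) ((tlb)->nr == ~0U)
#else
  #define FREE_PTE_NR	1
  #define tlb_fast_mode(tlb) 1
#endif

struct mmu_gather {
	struct mm_struct	*mm;
	unsigned int		nr;		/* set to ~0U means fast mode */
	unsigned int		need_flush;	/* really unmapped some ptes? */
	unsigned int		fullmm;		/* non-zero means full mm flush */
	struct page		*pages[FREE_PTE_NR];
};

/*
 * get_cpu_var() disables preemption, and it stays disabled until
 * tlb_finish_mmu() does the matching put_cpu_var() -- which is why
 * everything gathered in between is unpreemptible work.
 */
static inline struct mmu_gather *
tlb_gather_mmu(struct mm_struct *mm, unsigned int full_mm_flush)
{
	struct mmu_gather *tlb = &get_cpu_var(mmu_gathers);

	tlb->mm = mm;
	/* Use fast mode only if just one CPU is online */
	tlb->nr = num_online_cpus() > 1 ? 0U : ~0U;
	tlb->fullmm = full_mm_flush;
	return tlb;
}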

So although there may be no need to resched right now, if we keep on
gathering more and more without flushing, we'll be very unresponsive
when a resched is needed later on. Hence Ingo's ZAP_BLOCK_SIZE to
split it up, small when CONFIG_PREEMPT, more reasonable but still
limited when not.
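
(The sizing in question, from 2.6-era mm/memory.c; treat the exact
numbers as approximate:)

#ifdef CONFIG_PREEMPT
# define ZAP_BLOCK_SIZE	(8 * PAGE_SIZE)
#else
/* No preempt: go for improved straight-line efficiency */
# define ZAP_BLOCK_SIZE	(1024 * PAGE_SIZE)
#endif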

I expect there is some tinkering which could be done to improve it a
little; but my ambition has always been to eliminate ZAP_BLOCK_SIZE,
get away from the per-cpu'ness of the mmu_gather, and make unmap_vmas
preemptible. But the i_mmap_lock case, and the per-arch variations
in TLB flushing, have forever stalled me.

Hugh

> ---
>
> mm/memory.c | 7 +++----
> 1 files changed, 3 insertions(+), 4 deletions(-)
>
> diff -urpN linux-2.6/mm/memory.c linux-2.6-patched/mm/memory.c
> --- linux-2.6/mm/memory.c 2007-06-29 15:44:08.000000000 +0200
> +++ linux-2.6-patched/mm/memory.c 2007-06-29 15:44:08.000000000 +0200
> @@ -851,19 +851,18 @@ unsigned long unmap_vmas(struct mmu_gath
> break;
> }
>
> - tlb_finish_mmu(*tlbp, tlb_start, start);
> -
> if (need_resched() ||
> (i_mmap_lock && need_lockbreak(i_mmap_lock))) {
> + tlb_finish_mmu(*tlbp, tlb_start, start);
> if (i_mmap_lock) {
> *tlbp = NULL;
> goto out;
> }
> cond_resched();
> + *tlbp = tlb_gather_mmu(vma->vm_mm, fullmm);
> + tlb_start_valid = 0;
> }
>
> - *tlbp = tlb_gather_mmu(vma->vm_mm, fullmm);
> - tlb_start_valid = 0;
> zap_work = ZAP_BLOCK_SIZE;
> }
> }

2007-06-29 21:17:50

by Martin Schwidefsky

Subject: Re: [patch 1/5] avoid tlb gather restarts.

On Fri, 2007-06-29 at 19:56 +0100, Hugh Dickins wrote:
> I don't dare comment on your page_mkclean_one patch (5/5);
> that dirty page business has grown too subtle for me.

Oh yes, the dirty handling is tricky. I had to fix a really nasty bug
with it lately. As for page_mkclean_one, the difference is that it
doesn't claim a page is dirty merely because the write-protect bit has
not been set. If we manage to lose dirty bits from ptes and have to
rely on the write-protect bit to take over the job, then we have a
different problem altogether, no?

> Your cleanups 2-4 look good, especially the mm_types.h one (how
> confident are you that everything builds?), and I'm glad we can
> now lay ptep_establish to rest. Though I think you may have
> missed removing a __HAVE_ARCH_PTEP... from frv at least?

Ok, thanks for the review. I'll take a look at frv to see if I missed
something.

> But this one...
>
> On Fri, 29 Jun 2007, Martin Schwidefsky wrote:
>
> > If need_resched() is false it is unnecessary to call tlb_finish_mmu()
> > and tlb_gather_mmu() for each vma in unmap_vmas(). Moving the tlb gather
> > restart under the if that contains the cond_resched() will avoid
> > unnecessary tlb flush operations that are triggered by tlb_finish_mmu()
> > and tlb_gather_mmu().
> >
> > Signed-off-by: Martin Schwidefsky <[email protected]>
>
> Sorry, no. It looks reasonable, but unmap_vmas is treading a delicate
> and uncomfortable line between hi-performance and lo-latency: you've
> chosen to improve performance at the expense of latency.

That is true; my only concern had been performance. You likely have a
point here.

> You think you're just moving the finish/gather to where they're
> actually necessary; but the thing is, that per-cpu struct mmu_gather
> is liable to accumulate a lot of unpreemptible work for the future
> tlb_finish_mmu, particularly when anon pages are associated with swap.

Hmm, ok, so you are saying that we should do a flush at the end of each
vma.

> So although there may be no need to resched right now, if we keep on
> gathering more and more without flushing, we'll be very unresponsive
> when a resched is needed later on. Hence Ingo's ZAP_BLOCK_SIZE to
> split it up, small when CONFIG_PREEMPT, more reasonable but still
> limited when not.

Would it be acceptable to call tlb_flush_mmu instead of the
tlb_finish_mmu / tlb_gather_mmu pair if the condition around
cond_resched evaluates to false?
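
Concretely, I mean something like this (untested sketch on top of the
patch above, assuming tlb_flush_mmu has the asm-generic signature):

	if (need_resched() ||
	    (i_mmap_lock && need_lockbreak(i_mmap_lock))) {
		tlb_finish_mmu(*tlbp, tlb_start, start);
		if (i_mmap_lock) {
			*tlbp = NULL;
			goto out;
		}
		cond_resched();
		*tlbp = tlb_gather_mmu(vma->vm_mm, fullmm);
		tlb_start_valid = 0;
	} else {
		/* flush the gathered work, but keep the gather open */
		tlb_flush_mmu(*tlbp, tlb_start, start);
		tlb_start_valid = 0;
	}
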
The background for this change is that I'm working on another patch that
will change the tlb flushing for s390 quite a bit. We won't have
anything to flush with tlb_finish_mmu because we will either flush all
tlbs with tlb_gather_mmu or flush each pte separately. The pages will
always be freed immediately. If we are forced to restart the tlb gather,
then we'll do multiple flush_tlb_mm calls, because the information that
we have already flushed everything is lost with tlb_finish_mmu.

--
blue skies,
Martin.

"Reality continues to ruin my life." - Calvin.


2007-06-30 13:17:37

by Hugh Dickins

Subject: Re: [patch 1/5] avoid tlb gather restarts.

On Fri, 29 Jun 2007, Martin Schwidefsky wrote:
> On Fri, 2007-06-29 at 19:56 +0100, Hugh Dickins wrote:
> > I don't dare comment on your page_mkclean_one patch (5/5);
> > that dirty page business has grown too subtle for me.
>
> Oh yes, the dirty handling is tricky....

I'll move that discussion over to 5/5 and Cc Peter
(sorry I was too lazy to do so in the first place).

> > On Fri, 29 Jun 2007, Martin Schwidefsky wrote:
> > You think you're just moving the finish/gather to where they're
> > actually necessary; but the thing is, that per-cpu struct mmu_gather
> > is liable to accumulate a lot of unpreemptible work for the future
> > tlb_finish_mmu, particularly when anon pages are associated with swap.
>
> Hmm, ok, so you are saying that we should do a flush at the end of each
> vma.

I think of it as doing a flush every ZAP_BLOCK_SIZE, with the imperfect
structure of the loop forcing perhaps an early flush at the end of each
vma: I seem to assume large vmas, and you to assume small ones.

IIRC, the common case for doing multiple vmas here is exit, when it
ends up that the TLB flush can often be skipped because already done
by the switch from exiting task; so the premature flush per vma doesn't
matter much. But treat that claim with maximum scepticism: I've not
rechecked it, several aspects may be wrong. What I do remember is
that (at least on i386) there's a lot less actual TLB flushing done
here than it appears from the outside.

> > So although there may be no need to resched right now, if we keep on
> > gathering more and more without flushing, we'll be very unresponsive
> > when a resched is needed later on. Hence Ingo's ZAP_BLOCK_SIZE to
> > split it up, small when CONFIG_PREEMPT, more reasonable but still
> > limited when not.
>
> Would it be acceptable to call tlb_flush_mmu instead of the
> tlb_finish_mmu / tlb_gather_mmu pair if the condition around
> cond_resched evaluates to false?

That sounds a good idea, yes, that should be fine. But beware:
tlb_flush_mmu is an internal detail of the asm-generic/tlb.h method
and perhaps some others; it currently doesn't exist on some arches.
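
For reference, the asm-generic one is roughly this (again from memory
of the 2.6-era header):

static inline void
tlb_flush_mmu(struct mmu_gather *tlb, unsigned long start, unsigned long end)
{
	if (!tlb->need_flush)
		return;
	tlb->need_flush = 0;
	tlb_flush(tlb);			/* arch-specific TLB flush */
	if (!tlb_fast_mode(tlb)) {
		free_pages_and_swap_cache(tlb->pages, tlb->nr);
		tlb->nr = 0;
	}
	/* note: no put_cpu_var() here -- the gather stays live,
	 * unlike tlb_finish_mmu() */
}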

I think you just need to add a simple one to arm & arm26, and take
the "ia64_" off the ia64 one. powerpc and sparc64 go about it all
a bit differently, but it should be easy to give them one too.
There may be some others missing.

> The background for this change is that I'm working on another patch that
> will change the tlb flushing for s390 quite a bit. We won't have
> anything to flush with tlb_finish_mmu because we will either flush all
> tlbs with tlb_gather_mmu or flush each pte separately. The pages will
> always be freed immediately. If we are forced to restart the tlb gather,
> then we'll do multiple flush_tlb_mm calls, because the information that
> we have already flushed everything is lost with tlb_finish_mmu.

Thanks for the info. Sounds like we may have trouble ahead when
rearranging this stuff; it's easy to forget s390 in our assumptions:
keep watch!

Hugh