2013-05-31 10:55:29

by Vineet Gupta

Subject: [PATCH 0/2] mm: fixlets

Hi Andrew,

Max Filippov reported a generic MM issue with PTE/TLB coherency
@ http://www.spinics.net/lists/linux-arch/msg21736.html

While the fix for that issue is still being discussed, I'm sending over a
bunch of mm fixlets we found in the process.

In fact, 1/2 looks like stable material, as the original code was flushing
the wrong range from the TLB wherever it was used.

Please consider applying.

Thx,
-Vineet


Vineet Gupta (2):
mm: Fix the TLB range flushed when __tlb_remove_page() runs out of
slots
mm: tlb_fast_mode check missing in tlb_finish_mmu()

mm/memory.c | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)

--
1.7.10.4


2013-05-31 10:56:17

by Vineet Gupta

Subject: [PATCH 1/2] mm: Fix the TLB range flushed when __tlb_remove_page() runs out of slots

zap_pte_range() loops from @addr to @end. If it runs out of batching
slots in the middle, the TLB entries need to be flushed for the part of
the range already unmapped, i.e. @start to @interim, NOT @interim to
@end.

Since the ARC port doesn't use page-free batching I can't test this
myself, but it seems like the right thing to do.

I observed this while working on a fix for the issue in this thread:
http://www.spinics.net/lists/linux-arch/msg21736.html
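
To illustrate, here's a standalone userspace sketch of the batching
loop (not the kernel code; BATCH_SLOTS and the addresses are made up
for the example) showing which range has actually been unmapped when
the slots run out:

#include <stdio.h>

#define PAGE_SIZE	4096UL
#define BATCH_SLOTS	4	/* tiny batch so the "out of slots" path triggers */

int main(void)
{
	unsigned long addr = 0x1000, end = 0xa000;
	unsigned long range_start = addr;	/* start of not-yet-flushed range */
	int slots = BATCH_SLOTS;

	while (addr != end) {
		addr += PAGE_SIZE;		/* one PTE zapped */
		if (--slots == 0) {
			/* out of slots: what needs flushing is
			 * [range_start, addr), NOT [addr, end) */
			printf("flush [0x%lx, 0x%lx)\n", range_start, addr);
			range_start = addr;
			slots = BATCH_SLOTS;
		}
	}
	if (range_start != addr)		/* final partial batch */
		printf("flush [0x%lx, 0x%lx)\n", range_start, addr);

	return 0;
}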

Signed-off-by: Vineet Gupta <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: Catalin Marinas <[email protected]>
Cc: Max Filippov <[email protected]>
Cc: Alex Shi <[email protected]>
Acked-by: Catalin Marinas <[email protected]>
---
mm/memory.c | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 6dc1882..d9d5fd9 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1110,6 +1110,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 	spinlock_t *ptl;
 	pte_t *start_pte;
 	pte_t *pte;
+	unsigned long range_start = addr;

 again:
 	init_rss_vec(rss);
@@ -1215,12 +1216,14 @@ again:
 		force_flush = 0;

 #ifdef HAVE_GENERIC_MMU_GATHER
-		tlb->start = addr;
-		tlb->end = end;
+		tlb->start = range_start;
+		tlb->end = addr;
 #endif
 		tlb_flush_mmu(tlb);
-		if (addr != end)
+		if (addr != end) {
+			range_start = addr;
 			goto again;
+		}
 	}

 	return addr;
--
1.7.10.4

2013-05-31 10:57:18

by Vineet Gupta

Subject: [PATCH 2/2] mm: tlb_fast_mode check missing in tlb_finish_mmu()

When tlb_fast_mode() is true, __tlb_remove_page() frees pages
immediately and never allocates extra batch pages, so the batch-freeing
loop in tlb_finish_mmu() is dead code. Add the missing check so the
loop is skipped; where tlb_fast_mode() is compile-time true (i.e. on
!CONFIG_SMP), this removes some unused generated code.
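
To illustrate (a standalone userspace sketch, not the kernel code;
fast_mode() is a stand-in for tlb_fast_mode(), which is compile-time
true on !CONFIG_SMP):

#include <stdlib.h>

struct batch  { struct batch *next; };
struct gather { struct batch local; };

static inline int fast_mode(void)
{
	return 1;	/* UP case: pages are freed immediately, never batched */
}

static void finish(struct gather *g)
{
	struct batch *b, *next;

	if (fast_mode())	/* the check this patch adds */
		return;

	/* provably dead when fast_mode() is constant 1, so the
	 * compiler can drop it from the generated code */
	for (b = g->local.next; b; b = next) {
		next = b->next;
		free(b);
	}
	g->local.next = NULL;
}

int main(void)
{
	struct gather g = { .local = { NULL } };

	finish(&g);
	return 0;
}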

Signed-off-by: Vineet Gupta <[email protected]>
Acked-by: Peter Zijlstra <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: [email protected]
Cc: [email protected]
---
mm/memory.c | 3 +++
1 file changed, 3 insertions(+)

diff --git a/mm/memory.c b/mm/memory.c
index d9d5fd9..569ffe1 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -269,6 +269,9 @@ void tlb_finish_mmu(struct mmu_gather *tlb, unsigned long start, unsigned long e
 	/* keep the page table cache within bounds */
 	check_pgt_cache();

+	if (tlb_fast_mode(tlb))
+		return;
+
 	for (batch = tlb->local.next; batch; batch = next) {
 		next = batch->next;
 		free_pages((unsigned long)batch, 0);
--
1.7.10.4