2014-01-06 03:55:38

by Allen

Subject: [PATCH 0/4] PREEMPT_RT support for sparc64

This series has been tested on an UltraSPARC T4 (Niagara 4).

Allen Pais (4):
sparc64: use generic rwsem spinlocks rt
sparc64: allow forced irq threading
sparc64: convert spinlock_t to raw_spinlock_t in mmu_context_t
sparc64: convert ctx_alloc_lock to raw_spinlock_t

arch/sparc/Kconfig | 7 +++----
arch/sparc/include/asm/mmu_64.h | 2 +-
arch/sparc/include/asm/mmu_context_64.h | 10 +++++-----
arch/sparc/kernel/smp_64.c | 4 ++--
arch/sparc/mm/init_64.c | 14 +++++++-------
arch/sparc/mm/tsb.c | 20 ++++++++++----------
6 files changed, 28 insertions(+), 29 deletions(-)

--
1.7.10.4


2014-01-06 03:55:40

by Allen

Subject: [PATCH 1/4] sparc64: use generic rwsem spinlocks rt

This patch enables the use of the generic spinlock-based
rwsem implementation to support RT on sparc64.
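
With this change exactly one of the two symbols is selected: RWSEM_GENERIC_SPINLOCK when PREEMPT_RT_FULL=y, and RWSEM_XCHGADD_ALGORITHM otherwise. As a rough conceptual sketch of the difference (illustrative structures only, not the kernel's actual implementation):

/*
 * RWSEM_XCHGADD_ALGORITHM: the fast path is a single atomic
 * add/exchange on a counter word; contended cases fall into
 * a slow path.
 */
struct rwsem_xchgadd_sketch {
	long count;	/* readers add 1, a writer adds a large bias */
};

/*
 * RWSEM_GENERIC_SPINLOCK: a plain counter protected by a spinlock,
 * so every acquire and release takes the lock. This simpler variant
 * is the one the RT patch set bases its lock substitution on.
 */
struct rwsem_spinlock_sketch {
	spinlock_t	lock;
	int		activity;	/* > 0: readers; -1: one writer */
};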

Acked-by: David S. Miller <[email protected]>
Signed-off-by: Allen Pais <[email protected]>
---
arch/sparc/Kconfig | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index 6787bd3..554995d 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -179,12 +179,10 @@ config NR_CPUS
source kernel/Kconfig.hz

config RWSEM_GENERIC_SPINLOCK
- bool
- default y if SPARC32
+ def_bool PREEMPT_RT_FULL

config RWSEM_XCHGADD_ALGORITHM
- bool
- default y if SPARC64
+ def_bool !RWSEM_GENERIC_SPINLOCK && !PREEMPT_RT_FULL

config GENERIC_HWEIGHT
bool
--
1.7.10.4

2014-01-06 03:55:48

by Allen

Subject: [PATCH 2/4] sparc64: allow forced irq threading

Forced irq threading is a prerequisite for RT.
The following patch enables this on the sparc architecture.
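
With IRQ_FORCED_THREADING selected, booting with the threadirqs command-line option (which the RT patch set turns on by default) runs most interrupt handlers in kernel threads; a handler that must stay in hard-irq context opts out with IRQF_NO_THREAD. A minimal driver-side sketch (my_handler, my_request and the "my-device" name are illustrative only, not part of this series):

#include <linux/interrupt.h>

/* Stays in hard-irq context even under forced threading, because it
 * is registered with IRQF_NO_THREAD below. */
static irqreturn_t my_handler(int irq, void *dev_id)
{
	return IRQ_HANDLED;
}

static int my_request(unsigned int irq)
{
	/* Without IRQF_NO_THREAD this handler would run in a
	 * dedicated irq thread when threading is forced. */
	return request_irq(irq, my_handler, IRQF_NO_THREAD,
			   "my-device", NULL);
}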

Acked-by: David S. Miller <[email protected]>
Signed-off-by: Allen Pais <[email protected]>
---
arch/sparc/Kconfig | 1 +
1 file changed, 1 insertion(+)

diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index 554995d..aae5aa9 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -27,6 +27,7 @@ config SPARC
select HAVE_DMA_API_DEBUG
select HAVE_ARCH_JUMP_LABEL
select HAVE_GENERIC_HARDIRQS
+ select IRQ_FORCED_THREADING
select GENERIC_IRQ_SHOW
select ARCH_WANT_IPC_PARSE_VERSION
select USE_GENERIC_SMP_HELPERS if SMP
--
1.7.10.4

2014-01-06 03:55:54

by Allen

Subject: [PATCH 3/4] sparc64: convert spinlock_t to raw_spinlock_t in mmu_context_t

In an attempt to get PREEMPT_RT working on sparc64 using
linux-stable-rt version 3.10.22-rt19+, the kernel crashed
with the following trace:

[ 1487.027884] I7: <rt_mutex_setprio+0x3c/0x2c0>
[ 1487.027885] Call Trace:
[ 1487.027887] [00000000004967dc] rt_mutex_setprio+0x3c/0x2c0
[ 1487.027892] [00000000004afe20] task_blocks_on_rt_mutex+0x180/0x200
[ 1487.027895] [0000000000819114] rt_spin_lock_slowlock+0x94/0x300
[ 1487.027897] [0000000000817ebc] __schedule+0x39c/0x53c
[ 1487.027899] [00000000008185fc] schedule+0x1c/0xc0
[ 1487.027908] [000000000048fff4] smpboot_thread_fn+0x154/0x2e0
[ 1487.027913] [000000000048753c] kthread+0x7c/0xa0
[ 1487.027920] [00000000004060c4] ret_from_syscall+0x1c/0x2c
[ 1487.027922] [0000000000000000] (null)

Thomas debugged this issue and pointed to switch_mm():

spin_lock_irqsave(&mm->context.lock, flags);

context.lock needs to be a raw_spinlock.
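
The underlying rule: under PREEMPT_RT a spinlock_t is substituted by a sleeping rt_mutex-based lock, while a raw_spinlock_t remains a true busy-wait spinlock. switch_mm() is called from __schedule() with the runqueue locked, so taking a sleeping lock there re-enters the scheduler, which is exactly what the trace above shows. Schematically (annotated from the trace, not kernel source):

__schedule()				/* atomic: runqueue lock held */
  -> switch_mm()
       spin_lock_irqsave(&mm->context.lock, flags)
	 -> rt_spin_lock_slowlock()	/* spinlock_t sleeps on RT */
	      -> task_blocks_on_rt_mutex()
		   -> rt_mutex_setprio()	/* blocks inside the scheduler */

With raw_spin_lock_irqsave() the lock stays a real spinlock, which is legal in this atomic context.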

Acked-by: David S. Miller <[email protected]>
Signed-off-by: Allen Pais <[email protected]>
---
arch/sparc/include/asm/mmu_64.h | 2 +-
arch/sparc/include/asm/mmu_context_64.h | 8 ++++----
arch/sparc/kernel/smp_64.c | 4 ++--
arch/sparc/mm/init_64.c | 4 ++--
arch/sparc/mm/tsb.c | 16 ++++++++--------
5 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/arch/sparc/include/asm/mmu_64.h b/arch/sparc/include/asm/mmu_64.h
index 76092c4..e945ddb 100644
--- a/arch/sparc/include/asm/mmu_64.h
+++ b/arch/sparc/include/asm/mmu_64.h
@@ -90,7 +90,7 @@ struct tsb_config {
#endif

typedef struct {
- spinlock_t lock;
+ raw_spinlock_t lock;
unsigned long sparc64_ctx_val;
unsigned long huge_pte_count;
struct page *pgtable_page;
diff --git a/arch/sparc/include/asm/mmu_context_64.h b/arch/sparc/include/asm/mmu_context_64.h
index 3d528f0..3a85624 100644
--- a/arch/sparc/include/asm/mmu_context_64.h
+++ b/arch/sparc/include/asm/mmu_context_64.h
@@ -77,7 +77,7 @@ static inline void switch_mm(struct mm_struct *old_mm, struct mm_struct *mm, str
if (unlikely(mm == &init_mm))
return;

- spin_lock_irqsave(&mm->context.lock, flags);
+ raw_spin_lock_irqsave(&mm->context.lock, flags);
ctx_valid = CTX_VALID(mm->context);
if (!ctx_valid)
get_new_mmu_context(mm);
@@ -125,7 +125,7 @@ static inline void switch_mm(struct mm_struct *old_mm, struct mm_struct *mm, str
__flush_tlb_mm(CTX_HWBITS(mm->context),
SECONDARY_CONTEXT);
}
- spin_unlock_irqrestore(&mm->context.lock, flags);
+ raw_spin_unlock_irqrestore(&mm->context.lock, flags);
}

#define deactivate_mm(tsk,mm) do { } while (0)
@@ -136,7 +136,7 @@ static inline void activate_mm(struct mm_struct *active_mm, struct mm_struct *mm
unsigned long flags;
int cpu;

- spin_lock_irqsave(&mm->context.lock, flags);
+ raw_spin_lock_irqsave(&mm->context.lock, flags);
if (!CTX_VALID(mm->context))
get_new_mmu_context(mm);
cpu = smp_processor_id();
@@ -146,7 +146,7 @@ static inline void activate_mm(struct mm_struct *active_mm, struct mm_struct *mm
load_secondary_context(mm);
__flush_tlb_mm(CTX_HWBITS(mm->context), SECONDARY_CONTEXT);
tsb_context_switch(mm);
- spin_unlock_irqrestore(&mm->context.lock, flags);
+ raw_spin_unlock_irqrestore(&mm->context.lock, flags);
}

#endif /* !(__ASSEMBLY__) */
diff --git a/arch/sparc/kernel/smp_64.c b/arch/sparc/kernel/smp_64.c
index 77539ed..f42e1a7 100644
--- a/arch/sparc/kernel/smp_64.c
+++ b/arch/sparc/kernel/smp_64.c
@@ -975,12 +975,12 @@ void __irq_entry smp_new_mmu_context_version_client(int irq, struct pt_regs *reg
if (unlikely(!mm || (mm == &init_mm)))
return;

- spin_lock_irqsave(&mm->context.lock, flags);
+ raw_spin_lock_irqsave(&mm->context.lock, flags);

if (unlikely(!CTX_VALID(mm->context)))
get_new_mmu_context(mm);

- spin_unlock_irqrestore(&mm->context.lock, flags);
+ raw_spin_unlock_irqrestore(&mm->context.lock, flags);

load_secondary_context(mm);
__flush_tlb_mm(CTX_HWBITS(mm->context),
diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index 04fd55a..bd5253d 100644
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -350,7 +350,7 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *

mm = vma->vm_mm;

- spin_lock_irqsave(&mm->context.lock, flags);
+ raw_spin_lock_irqsave(&mm->context.lock, flags);

#if defined(CONFIG_HUGETLB_PAGE) || defined(CONFIG_TRANSPARENT_HUGEPAGE)
if (mm->context.huge_pte_count && is_hugetlb_pte(pte))
@@ -361,7 +361,7 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *
__update_mmu_tsb_insert(mm, MM_TSB_BASE, PAGE_SHIFT,
address, pte_val(pte));

- spin_unlock_irqrestore(&mm->context.lock, flags);
+ raw_spin_unlock_irqrestore(&mm->context.lock, flags);
}

void flush_dcache_page(struct page *page)
diff --git a/arch/sparc/mm/tsb.c b/arch/sparc/mm/tsb.c
index 2cc3bce..d84d4ea 100644
--- a/arch/sparc/mm/tsb.c
+++ b/arch/sparc/mm/tsb.c
@@ -73,7 +73,7 @@ void flush_tsb_user(struct tlb_batch *tb)
struct mm_struct *mm = tb->mm;
unsigned long nentries, base, flags;

- spin_lock_irqsave(&mm->context.lock, flags);
+ raw_spin_lock_irqsave(&mm->context.lock, flags);

base = (unsigned long) mm->context.tsb_block[MM_TSB_BASE].tsb;
nentries = mm->context.tsb_block[MM_TSB_BASE].tsb_nentries;
@@ -90,14 +90,14 @@ void flush_tsb_user(struct tlb_batch *tb)
__flush_tsb_one(tb, HPAGE_SHIFT, base, nentries);
}
#endif
- spin_unlock_irqrestore(&mm->context.lock, flags);
+ raw_spin_unlock_irqrestore(&mm->context.lock, flags);
}

void flush_tsb_user_page(struct mm_struct *mm, unsigned long vaddr)
{
unsigned long nentries, base, flags;

- spin_lock_irqsave(&mm->context.lock, flags);
+ raw_spin_lock_irqsave(&mm->context.lock, flags);

base = (unsigned long) mm->context.tsb_block[MM_TSB_BASE].tsb;
nentries = mm->context.tsb_block[MM_TSB_BASE].tsb_nentries;
@@ -114,7 +114,7 @@ void flush_tsb_user_page(struct mm_struct *mm, unsigned long vaddr)
__flush_tsb_one_entry(base, vaddr, HPAGE_SHIFT, nentries);
}
#endif
- spin_unlock_irqrestore(&mm->context.lock, flags);
+ raw_spin_unlock_irqrestore(&mm->context.lock, flags);
}

#define HV_PGSZ_IDX_BASE HV_PGSZ_IDX_8K
@@ -392,7 +392,7 @@ retry_tsb_alloc:
* the lock and ask all other cpus running this address space
* to run tsb_context_switch() to see the new TSB table.
*/
- spin_lock_irqsave(&mm->context.lock, flags);
+ raw_spin_lock_irqsave(&mm->context.lock, flags);

old_tsb = mm->context.tsb_block[tsb_index].tsb;
old_cache_index =
@@ -407,7 +407,7 @@ retry_tsb_alloc:
*/
if (unlikely(old_tsb &&
(rss < mm->context.tsb_block[tsb_index].tsb_rss_limit))) {
- spin_unlock_irqrestore(&mm->context.lock, flags);
+ raw_spin_unlock_irqrestore(&mm->context.lock, flags);

kmem_cache_free(tsb_caches[new_cache_index], new_tsb);
return;
@@ -433,7 +433,7 @@ retry_tsb_alloc:
mm->context.tsb_block[tsb_index].tsb = new_tsb;
setup_tsb_params(mm, tsb_index, new_size);

- spin_unlock_irqrestore(&mm->context.lock, flags);
+ raw_spin_unlock_irqrestore(&mm->context.lock, flags);

/* If old_tsb is NULL, we're being invoked for the first time
* from init_new_context().
@@ -459,7 +459,7 @@ int init_new_context(struct task_struct *tsk, struct mm_struct *mm)
#endif
unsigned int i;

- spin_lock_init(&mm->context.lock);
+ raw_spin_lock_init(&mm->context.lock);

mm->context.sparc64_ctx_val = 0UL;

--
1.7.10.4

2014-01-06 03:55:59

by Allen

Subject: [PATCH 4/4] sparc64: convert ctx_alloc_lock to raw_spinlock_t

This patch fixes a kernel crash seen while running
linux-stable-rt v3.10.22-rt19 on sparc64:

[ 2317.606015] [00000000008072f4] rt_spin_lock_slowlock+0x94/0x300
[ 2317.606020] [0000000000451d74] get_new_mmu_context+0x14/0x160
[ 2317.606026] [0000000000806394] switch_to_pc+0xd4/0x2a0
[ 2317.606029] [00000000008067dc] schedule+0x1c/0xc0
[ 2317.606031] [0000000000807364] rt_spin_lock_slowlock+0x104/0x300
[ 2317.606033] [0000000000450284] destroy_context+0x84/0x120
[ 2317.606036] [000000000045c788] __mmdrop+0x28/0xe0
[ 2317.606045] [00000000004bf290] rcu_process_callbacks+0x450/0x760
[ 2317.606049] [0000000000466d48] do_current_softirqs+0x208/0x3c0
[ 2317.606051] [0000000000466f14] run_ksoftirqd+0x14/0x40
[ 2317.606057] [000000000048c64c] smpboot_thread_fn+0x18c/0x2e0
[ 2317.606061] [0000000000483b5c] kthread+0x7c/0xa0
[ 2317.606069] [00000000004060c4] ret_from_syscall+0x1c/0x2c
[ 2317.606070] [0000000000000000] (null)
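
ctx_alloc_lock has to follow context.lock: get_new_mmu_context() takes it while switch_mm() already holds the (now raw) mm->context.lock with interrupts disabled, and destroy_context() takes it from softirq context via the RCU callback shown above; a sleeping lock is illegal in both places on RT. Simplified from switch_mm() after the previous patch (a sketch, not a verbatim excerpt):

	raw_spin_lock_irqsave(&mm->context.lock, flags);
	if (!CTX_VALID(mm->context))
		get_new_mmu_context(mm);	/* takes ctx_alloc_lock inside a
						 * raw-locked, irqs-off region,
						 * so it cannot be a sleeping
						 * spinlock_t on RT */
	/* ... */
	raw_spin_unlock_irqrestore(&mm->context.lock, flags);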

Acked-by: David S. Miller <[email protected]>
Signed-off-by: Allen Pais <[email protected]>
---
arch/sparc/include/asm/mmu_context_64.h | 2 +-
arch/sparc/mm/init_64.c | 10 +++++-----
arch/sparc/mm/tsb.c | 4 ++--
3 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/sparc/include/asm/mmu_context_64.h b/arch/sparc/include/asm/mmu_context_64.h
index 3a85624..44e393b 100644
--- a/arch/sparc/include/asm/mmu_context_64.h
+++ b/arch/sparc/include/asm/mmu_context_64.h
@@ -13,7 +13,7 @@ static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
{
}

-extern spinlock_t ctx_alloc_lock;
+extern raw_spinlock_t ctx_alloc_lock;
extern unsigned long tlb_context_cache;
extern unsigned long mmu_context_bmap[];

diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index bd5253d..ac5ae7a 100644
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -661,7 +661,7 @@ void __flush_dcache_range(unsigned long start, unsigned long end)
EXPORT_SYMBOL(__flush_dcache_range);

/* get_new_mmu_context() uses "cache + 1". */
-DEFINE_SPINLOCK(ctx_alloc_lock);
+DEFINE_RAW_SPINLOCK(ctx_alloc_lock);
unsigned long tlb_context_cache = CTX_FIRST_VERSION - 1;
#define MAX_CTX_NR (1UL << CTX_NR_BITS)
#define CTX_BMAP_SLOTS BITS_TO_LONGS(MAX_CTX_NR)
@@ -683,7 +683,7 @@ void get_new_mmu_context(struct mm_struct *mm)
unsigned long orig_pgsz_bits;
int new_version;

- spin_lock(&ctx_alloc_lock);
+ raw_spin_lock(&ctx_alloc_lock);
orig_pgsz_bits = (mm->context.sparc64_ctx_val & CTX_PGSZ_MASK);
ctx = (tlb_context_cache + 1) & CTX_NR_MASK;
new_ctx = find_next_zero_bit(mmu_context_bmap, 1 << CTX_NR_BITS, ctx);
@@ -719,7 +719,7 @@ void get_new_mmu_context(struct mm_struct *mm)
out:
tlb_context_cache = new_ctx;
mm->context.sparc64_ctx_val = new_ctx | orig_pgsz_bits;
- spin_unlock(&ctx_alloc_lock);
+ raw_spin_unlock(&ctx_alloc_lock);

if (unlikely(new_version))
smp_new_mmu_context_version();
@@ -2739,7 +2739,7 @@ void hugetlb_setup(struct pt_regs *regs)
if (tlb_type == cheetah_plus) {
unsigned long ctx;

- spin_lock(&ctx_alloc_lock);
+ raw_spin_lock(&ctx_alloc_lock);
ctx = mm->context.sparc64_ctx_val;
ctx &= ~CTX_PGSZ_MASK;
ctx |= CTX_PGSZ_BASE << CTX_PGSZ0_SHIFT;
@@ -2760,7 +2760,7 @@ void hugetlb_setup(struct pt_regs *regs)
mm->context.sparc64_ctx_val = ctx;
on_each_cpu(context_reload, mm, 0);
}
- spin_unlock(&ctx_alloc_lock);
+ raw_spin_unlock(&ctx_alloc_lock);
}
}
#endif
diff --git a/arch/sparc/mm/tsb.c b/arch/sparc/mm/tsb.c
index d84d4ea..9eb10b4 100644
--- a/arch/sparc/mm/tsb.c
+++ b/arch/sparc/mm/tsb.c
@@ -523,12 +523,12 @@ void destroy_context(struct mm_struct *mm)
free_hot_cold_page(page, 0);
}

- spin_lock_irqsave(&ctx_alloc_lock, flags);
+ raw_spin_lock_irqsave(&ctx_alloc_lock, flags);

if (CTX_VALID(mm->context)) {
unsigned long nr = CTX_NRBITS(mm->context);
mmu_context_bmap[nr>>6] &= ~(1UL << (nr & 63));
}

- spin_unlock_irqrestore(&ctx_alloc_lock, flags);
+ raw_spin_unlock_irqrestore(&ctx_alloc_lock, flags);
}
--
1.7.10.4

2014-02-11 21:24:20

by Kirill Tkhai

Subject: Re: [PATCH 3/4] sparc64: convert spinlock_t to raw_spinlock_t in mmu_context_t



06.01.2014, 07:56, "Allen Pais" <[email protected]>:
> In an attempt to get PREEMPT_RT working on sparc64 using
> linux-stable-rt version 3.10.22-rt19+, the kernel crashed
> with the following trace:
>
> [ 1487.027884] I7: <rt_mutex_setprio+0x3c/0x2c0>
> [ 1487.027885] Call Trace:
> [ 1487.027887] [00000000004967dc] rt_mutex_setprio+0x3c/0x2c0
> [ 1487.027892] [00000000004afe20] task_blocks_on_rt_mutex+0x180/0x200
> [ 1487.027895] [0000000000819114] rt_spin_lock_slowlock+0x94/0x300
> [ 1487.027897] [0000000000817ebc] __schedule+0x39c/0x53c
> [ 1487.027899] [00000000008185fc] schedule+0x1c/0xc0
> [ 1487.027908] [000000000048fff4] smpboot_thread_fn+0x154/0x2e0
> [ 1487.027913] [000000000048753c] kthread+0x7c/0xa0
> [ 1487.027920] [00000000004060c4] ret_from_syscall+0x1c/0x2c
> [ 1487.027922] [0000000000000000] (null)
>
> Thomas debugged this issue and pointed to switch_mm():
>
> spin_lock_irqsave(&mm->context.lock, flags);
>
> context.lock needs to be a raw_spinlock.
>
> Acked-by: David S. Miller <[email protected]>
> Signed-off-by: Allen Pais <[email protected]>
> ---
> arch/sparc/include/asm/mmu_64.h | 2 +-
> arch/sparc/include/asm/mmu_context_64.h | 8 ++++----
> arch/sparc/kernel/smp_64.c | 4 ++--
> arch/sparc/mm/init_64.c | 4 ++--
> arch/sparc/mm/tsb.c | 16 ++++++++--------
> 5 files changed, 17 insertions(+), 17 deletions(-)
>
> diff --git a/arch/sparc/include/asm/mmu_64.h b/arch/sparc/include/asm/mmu_64.h
> index 76092c4..e945ddb 100644
> --- a/arch/sparc/include/asm/mmu_64.h
> +++ b/arch/sparc/include/asm/mmu_64.h
> @@ -90,7 +90,7 @@ struct tsb_config {
> #endif
>
> typedef struct {
> - spinlock_t lock;
> + raw_spinlock_t lock;
> unsigned long sparc64_ctx_val;
> unsigned long huge_pte_count;
> struct page *pgtable_page;
> diff --git a/arch/sparc/include/asm/mmu_context_64.h b/arch/sparc/include/asm/mmu_context_64.h
> index 3d528f0..3a85624 100644
> --- a/arch/sparc/include/asm/mmu_context_64.h
> +++ b/arch/sparc/include/asm/mmu_context_64.h
> @@ -77,7 +77,7 @@ static inline void switch_mm(struct mm_struct *old_mm, struct mm_struct *mm, str
> if (unlikely(mm == &init_mm))
> return;
>
> - spin_lock_irqsave(&mm->context.lock, flags);
> + raw_spin_lock_irqsave(&mm->context.lock, flags);
> ctx_valid = CTX_VALID(mm->context);
> if (!ctx_valid)
> get_new_mmu_context(mm);
> @@ -125,7 +125,7 @@ static inline void switch_mm(struct mm_struct *old_mm, struct mm_struct *mm, str
> __flush_tlb_mm(CTX_HWBITS(mm->context),
> SECONDARY_CONTEXT);
> }
> - spin_unlock_irqrestore(&mm->context.lock, flags);
> + raw_spin_unlock_irqrestore(&mm->context.lock, flags);
> }
>
> #define deactivate_mm(tsk,mm) do { } while (0)
> @@ -136,7 +136,7 @@ static inline void activate_mm(struct mm_struct *active_mm, struct mm_struct *mm
> unsigned long flags;
> int cpu;
>
> - spin_lock_irqsave(&mm->context.lock, flags);
> + raw_spin_lock_irqsave(&mm->context.lock, flags);
> if (!CTX_VALID(mm->context))
> get_new_mmu_context(mm);
> cpu = smp_processor_id();
> @@ -146,7 +146,7 @@ static inline void activate_mm(struct mm_struct *active_mm, struct mm_struct *mm
> load_secondary_context(mm);
> __flush_tlb_mm(CTX_HWBITS(mm->context), SECONDARY_CONTEXT);
> tsb_context_switch(mm);
> - spin_unlock_irqrestore(&mm->context.lock, flags);
> + raw_spin_unlock_irqrestore(&mm->context.lock, flags);
> }
>
> #endif /* !(__ASSEMBLY__) */
> diff --git a/arch/sparc/kernel/smp_64.c b/arch/sparc/kernel/smp_64.c
> index 77539ed..f42e1a7 100644
> --- a/arch/sparc/kernel/smp_64.c
> +++ b/arch/sparc/kernel/smp_64.c
> @@ -975,12 +975,12 @@ void __irq_entry smp_new_mmu_context_version_client(int irq, struct pt_regs *reg
> if (unlikely(!mm || (mm == &init_mm)))
> return;
>
> - spin_lock_irqsave(&mm->context.lock, flags);
> + raw_spin_lock_irqsave(&mm->context.lock, flags);
>
> if (unlikely(!CTX_VALID(mm->context)))
> get_new_mmu_context(mm);
>
> - spin_unlock_irqrestore(&mm->context.lock, flags);
> + raw_spin_unlock_irqrestore(&mm->context.lock, flags);
>
> load_secondary_context(mm);
> __flush_tlb_mm(CTX_HWBITS(mm->context),
> diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
> index 04fd55a..bd5253d 100644
> --- a/arch/sparc/mm/init_64.c
> +++ b/arch/sparc/mm/init_64.c
> @@ -350,7 +350,7 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *
>
> mm = vma->vm_mm;
>
> - spin_lock_irqsave(&mm->context.lock, flags);
> + raw_spin_lock_irqsave(&mm->context.lock, flags);
>
> #if defined(CONFIG_HUGETLB_PAGE) || defined(CONFIG_TRANSPARENT_HUGEPAGE)
> if (mm->context.huge_pte_count && is_hugetlb_pte(pte))
> @@ -361,7 +361,7 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *
> __update_mmu_tsb_insert(mm, MM_TSB_BASE, PAGE_SHIFT,
> address, pte_val(pte));
>
> - spin_unlock_irqrestore(&mm->context.lock, flags);
> + raw_spin_unlock_irqrestore(&mm->context.lock, flags);
> }

We also should do the same in update_mmu_cache_pmd().
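
A sketch of that additional conversion, assuming the 3.10-era update_mmu_cache_pmd() in arch/sparc/mm/init_64.c takes mm->context.lock the same way update_mmu_cache() does (hypothetical; not part of the posted series):

@@ void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr,
- spin_lock_irqsave(&mm->context.lock, flags);
+ raw_spin_lock_irqsave(&mm->context.lock, flags);
  /* ... TSB insert under the lock, unchanged ... */
- spin_unlock_irqrestore(&mm->context.lock, flags);
+ raw_spin_unlock_irqrestore(&mm->context.lock, flags);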


2014-02-12 07:32:21

by Allen

Subject: Re: [PATCH 3/4] sparc64: convert spinlock_t to raw_spinlock_t in mmu_context_t

On Wednesday 12 February 2014 02:43 AM, Kirill Tkhai wrote:
>
>
> 06.01.2014, 07:56, "Allen Pais" <[email protected]>:
>> In an attempt to get PREEMPT_RT working on sparc64 using
>> linux-stable-rt version 3.10.22-rt19+, the kernel crashed
>> with the following trace:
>>
>> [ 1487.027884] I7: <rt_mutex_setprio+0x3c/0x2c0>
>> [ 1487.027885] Call Trace:
>> [ 1487.027887] [00000000004967dc] rt_mutex_setprio+0x3c/0x2c0
>> [ 1487.027892] [00000000004afe20] task_blocks_on_rt_mutex+0x180/0x200
>> [ 1487.027895] [0000000000819114] rt_spin_lock_slowlock+0x94/0x300
>> [ 1487.027897] [0000000000817ebc] __schedule+0x39c/0x53c
>> [ 1487.027899] [00000000008185fc] schedule+0x1c/0xc0
>> [ 1487.027908] [000000000048fff4] smpboot_thread_fn+0x154/0x2e0
>> [ 1487.027913] [000000000048753c] kthread+0x7c/0xa0
>> [ 1487.027920] [00000000004060c4] ret_from_syscall+0x1c/0x2c
>> [ 1487.027922] [0000000000000000] (null)
>> - spin_unlock_irqrestore(&mm->context.lock, flags);
>> + raw_spin_unlock_irqrestore(&mm->context.lock, flags);
>> }
>
> We also should do the same in update_mmu_cache_pmd().
>

I have already done this; I should have updated the patch. The issue
still persists, though.

- Allen