2011-02-12 10:44:28

by Russell King - ARM Linux

Subject: Re: [PATCH v4 12/19] ARM: LPAE: Add context switching support

On Mon, Jan 24, 2011 at 05:55:54PM +0000, Catalin Marinas wrote:
> +#ifdef CONFIG_ARM_LPAE
> +#define cpu_set_asid(asid) { \
> + unsigned long ttbl, ttbh; \
> + asm(" mrrc p15, 0, %0, %1, c2 @ read TTBR0\n" \
> + " mov %1, %1, lsl #(48 - 32) @ set ASID\n" \
> + " mcrr p15, 0, %0, %1, c2 @ set TTBR0\n" \
> + : "=r" (ttbl), "=r" (ttbh) \
> + : "r" (asid & ~ASID_MASK)); \

This is wrong:
1. It does nothing with %2 (the new asid)
2. It shifts the high address bits of TTBR0 left 16 places each time it's
called.

> +}
> +#else
> +#define cpu_set_asid(asid) \
> + asm(" mcr p15, 0, %0, c13, c0, 1\n" : : "r" (asid))
> +#endif
> +
> /*
> * We fork()ed a process, and we need a new context for the child
> * to run in. We reserve version 0 for initial tasks so we will
> @@ -37,7 +51,7 @@ void __init_new_context(struct task_struct *tsk, struct mm_struct *mm)
> static void flush_context(void)
> {
> /* set the reserved ASID before flushing the TLB */
> - asm("mcr p15, 0, %0, c13, c0, 1\n" : : "r" (0));
> + cpu_set_asid(0);
> isb();
> local_flush_tlb_all();
> if (icache_is_vivt_asid_tagged()) {
> @@ -99,7 +113,7 @@ static void reset_context(void *info)
> set_mm_context(mm, asid);
>
> /* set the new ASID */
> - asm("mcr p15, 0, %0, c13, c0, 1\n" : : "r" (mm->context.id));
> + cpu_set_asid(mm->context.id);
> isb();
> }
>
> diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S
> index a22b89f..ed4f3cb 100644
> --- a/arch/arm/mm/proc-v7.S
> +++ b/arch/arm/mm/proc-v7.S
> @@ -117,6 +117,11 @@ ENTRY(cpu_v7_switch_mm)
> #ifdef CONFIG_MMU
> mov r2, #0
> ldr r1, [r1, #MM_CONTEXT_ID] @ get mm->context.id

How about swapping the order here to avoid r1 being referenced in the very
next instruction?
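
I.e. something like:

	ldr	r1, [r1, #MM_CONTEXT_ID]	@ get mm->context.id
	mov	r2, #0

so the mov sits between the load of r1 and the "and r3, r1, #0xff"
which consumes it in the LPAE case.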

> +#ifdef CONFIG_ARM_LPAE
> + and r3, r1, #0xff
> + mov r3, r3, lsl #(48 - 32) @ ASID
> + mcrr p15, 0, r0, r3, c2 @ set TTB 0
> +#else /* !CONFIG_ARM_LPAE */
> ALT_SMP(orr r0, r0, #TTB_FLAGS_SMP)
> ALT_UP(orr r0, r0, #TTB_FLAGS_UP)
> #ifdef CONFIG_ARM_ERRATA_430973
> @@ -124,9 +129,10 @@ ENTRY(cpu_v7_switch_mm)
> #endif
> mcr p15, 0, r2, c13, c0, 1 @ set reserved context ID
> isb
> -1: mcr p15, 0, r0, c2, c0, 0 @ set TTB 0
> + mcr p15, 0, r0, c2, c0, 0 @ set TTB 0
> isb
> mcr p15, 0, r1, c13, c0, 1 @ set context ID
> +#endif /* CONFIG_ARM_LPAE */
> isb
> #endif
> mov pc, lr
>
>
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel


2011-02-14 13:24:19

by Catalin Marinas

Subject: Re: [PATCH v4 12/19] ARM: LPAE: Add context switching support

On Sat, 2011-02-12 at 10:44 +0000, Russell King - ARM Linux wrote:
> On Mon, Jan 24, 2011 at 05:55:54PM +0000, Catalin Marinas wrote:
> > +#ifdef CONFIG_ARM_LPAE
> > +#define cpu_set_asid(asid) { \
> > + unsigned long ttbl, ttbh; \
> > + asm(" mrrc p15, 0, %0, %1, c2 @ read TTBR0\n" \
> > + " mov %1, %1, lsl #(48 - 32) @ set ASID\n" \
> > + " mcrr p15, 0, %0, %1, c2 @ set TTBR0\n" \
> > + : "=r" (ttbl), "=r" (ttbh) \
> > + : "r" (asid & ~ASID_MASK)); \
>
> This is wrong:
> 1. It does nothing with %2 (the new asid)
> 2. It shifts the high address bits of TTBR0 left 16 places each time it's
> called.

It was actually worse: the asm wasn't even emitted, because it had
output arguments but wasn't marked volatile, so the compiler optimised
it away. An early clobber on the outputs is also needed. What about
this:

#define cpu_set_asid(asid) { \
unsigned long ttbl, ttbh; \
asm volatile( \
" mrrc p15, 0, %0, %1, c2 @ read TTBR0\n" \
" mov %1, %2, lsl #(48 - 32) @ set ASID\n" \
" mcrr p15, 0, %0, %1, c2 @ set TTBR0\n" \
: "=&r" (ttbl), "=&r" (ttbh) \
: "r" (asid & ~ASID_MASK)); \
}

--
Catalin

2011-02-19 18:30:51

by Russell King - ARM Linux

Subject: Re: [PATCH v4 12/19] ARM: LPAE: Add context switching support

On Mon, Feb 14, 2011 at 01:24:06PM +0000, Catalin Marinas wrote:
> On Sat, 2011-02-12 at 10:44 +0000, Russell King - ARM Linux wrote:
> > On Mon, Jan 24, 2011 at 05:55:54PM +0000, Catalin Marinas wrote:
> > > +#ifdef CONFIG_ARM_LPAE
> > > +#define cpu_set_asid(asid) { \
> > > + unsigned long ttbl, ttbh; \
> > > + asm(" mrrc p15, 0, %0, %1, c2 @ read TTBR0\n" \
> > > + " mov %1, %1, lsl #(48 - 32) @ set ASID\n" \
> > > + " mcrr p15, 0, %0, %1, c2 @ set TTBR0\n" \
> > > + : "=r" (ttbl), "=r" (ttbh) \
> > > + : "r" (asid & ~ASID_MASK)); \
> >
> > This is wrong:
> > 1. It does nothing with %2 (the new asid)
> > 2. It shifts the high address bits of TTBR0 left 16 places each time it's
> > called.
>
> It was actually worse: the asm wasn't even emitted, because it had
> output arguments but wasn't marked volatile, so the compiler optimised
> it away. An early clobber on the outputs is also needed. What about
> this:
>
> #define cpu_set_asid(asid) { \
> unsigned long ttbl, ttbh; \
> asm volatile( \
> " mrrc p15, 0, %0, %1, c2 @ read TTBR0\n" \
> " mov %1, %2, lsl #(48 - 32) @ set ASID\n" \
> " mcrr p15, 0, %0, %1, c2 @ set TTBR0\n" \
> : "=&r" (ttbl), "=&r" (ttbh) \
> : "r" (asid & ~ASID_MASK)); \
> }

So we don't care about the low 16 bits of ttbh which can be simply zeroed?

2011-02-19 23:16:58

by Catalin Marinas

Subject: Re: [PATCH v4 12/19] ARM: LPAE: Add context switching support

On Saturday, 19 February 2011, Russell King - ARM Linux
<[email protected]> wrote:
> On Mon, Feb 14, 2011 at 01:24:06PM +0000, Catalin Marinas wrote:
>> On Sat, 2011-02-12 at 10:44 +0000, Russell King - ARM Linux wrote:
>> > On Mon, Jan 24, 2011 at 05:55:54PM +0000, Catalin Marinas wrote:
>> > > +#ifdef CONFIG_ARM_LPAE
>> > > +#define cpu_set_asid(asid) {                                         \
>> > > +     unsigned long ttbl, ttbh;                                       \
>> > > +     asm("   mrrc    p15, 0, %0, %1, c2              @ read TTBR0\n" \
>> > > +         "   mov     %1, %1, lsl #(48 - 32)          @ set ASID\n"   \
>> > > +         "   mcrr    p15, 0, %0, %1, c2              @ set TTBR0\n"  \
>> > > +         : "=r" (ttbl), "=r" (ttbh)                                  \
>> > > +         : "r" (asid & ~ASID_MASK));                                 \
>> >
>> > This is wrong:
>> > 1. It does nothing with %2 (the new asid)
>> > 2. It shifts the high address bits of TTBR0 left 16 places each time it's
>> >    called.
>>
>> It was actually worse: the asm wasn't even emitted, because it had
>> output arguments but wasn't marked volatile, so the compiler optimised
>> it away. An early clobber on the outputs is also needed. What about
>> this:
>>
>> #define cpu_set_asid(asid) {                                          \
>>       unsigned long ttbl, ttbh;                                       \
>>       asm volatile(                                                   \
>>       "       mrrc    p15, 0, %0, %1, c2              @ read TTBR0\n" \
>>       "       mov     %1, %2, lsl #(48 - 32)          @ set ASID\n"   \
>>       "       mcrr    p15, 0, %0, %1, c2              @ set TTBR0\n"  \
>>       : "=&r" (ttbl), "=&r" (ttbh)                                    \
>>       : "r" (asid & ~ASID_MASK));                                     \
>> }
>
> So we don't care about the low 16 bits of ttbh which can be simply zeroed?

Since the pgd is always allocated from lowmem, its physical address
fits within 32 bits, so we can safely ignore the value read into ttbh
and just write its low 16 bits as zero. I could add a comment here to
that effect.
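
Something along these lines (the comment wording is only a first
attempt, on top of the macro from my earlier mail):

/*
 * The pgd is always allocated from lowmem, so bits 39:32 of its
 * physical address are zero and the low half of ttbh can simply be
 * written as 0; only the ASID (TTBR0 bits 55:48) needs to be set.
 */
#define cpu_set_asid(asid) { \
	unsigned long ttbl, ttbh; \
	asm volatile( \
	"	mrrc	p15, 0, %0, %1, c2		@ read TTBR0\n" \
	"	mov	%1, %2, lsl #(48 - 32)		@ set ASID\n" \
	"	mcrr	p15, 0, %0, %1, c2		@ set TTBR0\n" \
	: "=&r" (ttbl), "=&r" (ttbh) \
	: "r" (asid & ~ASID_MASK)); \
}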

Catalin

--
Catalin