2018-07-24 06:06:05

by Bharat Bhushan

Subject: [PATCH] powerpc/e200: Skip tlb1 entries used for kernel mapping

E200 has only TLB1; it does not have TLB0, so TLB1 entries are used
for both kernel and user-space mappings. The TLB miss handler for E200
does not skip the TLB1 entries used for the kernel mapping. This patch
ensures that we skip the tlb1 entries reserved for the kernel mapping
(entries below tlbcam_index).

Signed-off-by: Bharat Bhushan <[email protected]>
---
arch/powerpc/kernel/head_fsl_booke.S | 20 ++++++++++++++++++--
1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
index bf4c602..951fb96 100644
--- a/arch/powerpc/kernel/head_fsl_booke.S
+++ b/arch/powerpc/kernel/head_fsl_booke.S
@@ -801,12 +801,28 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_BIG_PHYS)
/* Round robin TLB1 entries assignment */
mfspr r12, SPRN_MAS0

+ /* Get first free tlbcam entry */
+ lis r11, tlbcam_index@ha
+ lwz r11, tlbcam_index@l(r11)
+
+ /* Extract MAS0(NV) */
+ andi. r13, r12, 0xfff
+ cmpw 0, r13, r11
+ blt 0, 5f
+ b 6f
+5:
+ /* When NV is less than first free tlbcam entry, use first free
+ * tlbcam entry for ESEL and set NV */
+ rlwimi r12, r11, 16, 4, 15
+ addi r11, r11, 1
+ rlwimi r12, r11, 0, 20, 31
+ b 7f
+6:
/* Extract TLB1CFG(NENTRY) */
mfspr r11, SPRN_TLB1CFG
andi. r11, r11, 0xfff

- /* Extract MAS0(NV) */
- andi. r13, r12, 0xfff
+ /* Set MAS0(NV) for next TLB miss exception */
addi r13, r13, 1
cmpw 0, r13, r11
addi r12, r12, 1
--
1.9.3
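As a plain-C model of what the patch's assembly does (this is an
illustration, not code from the kernel: the function name pick_victim,
the explicit nentry parameter, and the wrap-back-to-tlbcam_index
behavior of the pre-existing code are assumptions based on the handler
context above):

```c
#include <assert.h>

/* Hypothetical model of next-victim (ESEL/NV) selection for a TLB1-only
 * core.  nv_in is MAS0[NV] from the previous miss, tlbcam_index is the
 * first entry not used by the kernel mapping, nentry is TLB1CFG[NENTRY].
 * Returns the ESEL to replace and stores the next NV via nv_out. */
static unsigned pick_victim(unsigned nv_in, unsigned *nv_out,
                            unsigned tlbcam_index, unsigned nentry)
{
    unsigned esel, nv;

    if (nv_in < tlbcam_index) {
        /* NV points into the kernel (tlbcam) range: skip past it, as
         * the patch does with the two rlwimi instructions. */
        esel = tlbcam_index;
        nv = tlbcam_index + 1;
    } else {
        /* Normal round-robin path. */
        esel = nv_in;
        nv = nv_in + 1;
    }

    if (nv >= nentry)       /* wrap back to the first free entry */
        nv = tlbcam_index;

    *nv_out = nv;
    return esel;
}
```

For example, with tlbcam_index = 4 and 16 entries, an incoming NV of 0
(the reset value, inside the kernel range) yields ESEL 4 and next NV 5,
while an incoming NV of 15 yields ESEL 15 and wraps NV back to 4.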



2018-08-07 00:46:53

by Crystal Wood

Subject: Re: powerpc/e200: Skip tlb1 entries used for kernel mapping

On Tue, Jul 24, 2018 at 11:29:45AM +0530, Bharat Bhushan wrote:
> E200 has only TLB1; it does not have TLB0, so TLB1 entries are used
> for both kernel and user-space mappings. The TLB miss handler for E200
> does not skip the TLB1 entries used for the kernel mapping. This patch
> ensures that we skip the tlb1 entries reserved for the kernel mapping
> (entries below tlbcam_index).

How much more is needed to get e200 working? What was this tested on?

> Signed-off-by: Bharat Bhushan <[email protected]>
> ---
> arch/powerpc/kernel/head_fsl_booke.S | 20 ++++++++++++++++++--
> 1 file changed, 18 insertions(+), 2 deletions(-)
>
> diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
> index bf4c602..951fb96 100644
> --- a/arch/powerpc/kernel/head_fsl_booke.S
> +++ b/arch/powerpc/kernel/head_fsl_booke.S
> @@ -801,12 +801,28 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_BIG_PHYS)
> /* Round robin TLB1 entries assignment */
> mfspr r12, SPRN_MAS0
>
> + /* Get first free tlbcam entry */
> + lis r11, tlbcam_index@ha
> + lwz r11, tlbcam_index@l(r11)

The existing handler already loads tlbcam_index and uses that when
wrapping. What specifically is causing that to not work (perhaps it's
just a matter of initializing NV when tlbcam_index changes?), and why
does this patch leave that code in place?

> +
> + /* Extract MAS0(NV) */
> + andi. r13, r12, 0xfff
> + cmpw 0, r13, r11
> + blt 0, 5f
> + b 6f
> +5:

Why these two instructions instead of "bge 6f"? If it's for branch
prediction, does e200 pay attention to static hints? If it doesn't,
you could move the wrap code out-of-line.

> + /* When NV is less than first free tlbcam entry, use first free
> + * tlbcam entry for ESEL and set NV */
> + rlwimi r12, r11, 16, 4, 15
> + addi r11, r11, 1
> + rlwimi r12, r11, 0, 20, 31
> + b 7f

The 4-argument form of rlwimi is easier to read.
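For reference, rlwimi's semantics can be modeled in C (a sketch using
the Power ISA's big-endian bit numbering, where bit 0 is the MSB; the
function name is ours):

```c
#include <assert.h>
#include <stdint.h>

/* Model of rlwimi rA,rS,SH,MB,ME: rotate rS left by SH, then insert
 * the bits selected by the mask MB..ME into rA, leaving the other
 * bits of rA unchanged.  Assumes MB <= ME (the case used here). */
static uint32_t rlwimi(uint32_t ra, uint32_t rs, int sh, int mb, int me)
{
    /* Guard sh == 0: shifting a 32-bit value by 32 is undefined in C. */
    uint32_t rot = sh ? ((rs << sh) | (rs >> (32 - sh))) : rs;
    /* Big-endian bit mask covering bits mb..me. */
    uint32_t mask = (0xFFFFFFFFu >> mb) & (0xFFFFFFFFu << (31 - me));

    return (ra & ~mask) | (rot & mask);
}
```

So `rlwimi r12, r11, 16, 4, 15` rotates tlbcam_index left 16 and
inserts it into MAS0 bits 4-15 (the ESEL field, mask 0x0FFF0000), and
`rlwimi r12, r11, 0, 20, 31` inserts it unrotated into bits 20-31
(the NV field, mask 0x00000FFF).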

BTW, the TLB miss handler would be simpler/faster if you reserved the
upper entries rather than the lower entries. Then you would have just
one value to check (instead of also reading TLB1CFG[NENTRY]) to see
whether to wrap back to zero.
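In the same C-model style as above (a sketch of the suggested
alternative, not existing code; user_limit would be NENTRY minus the
number of reserved kernel entries):

```c
#include <assert.h>

/* Hypothetical victim selection when kernel mappings occupy the TOP of
 * TLB1: user entries are 0 .. user_limit-1, so the handler needs only
 * one bound and wraps back to zero. */
static unsigned pick_victim_top(unsigned nv_in, unsigned *nv_out,
                                unsigned user_limit)
{
    unsigned esel = nv_in;
    unsigned nv = nv_in + 1;

    if (nv >= user_limit)   /* single comparison, wrap to 0 */
        nv = 0;

    *nv_out = nv;
    return esel;
}
```

With, say, 2 of 16 entries reserved at the top (user_limit = 14), NV
runs 0..13 and wraps: one compare against a constant, no TLB1CFG read
and no skip-over-kernel branch in the miss path.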

-Scott