The value of t0sz has already been shifted by TCR_T0SZ_OFFSET in the
TCR_T0SZ() macro, so the TCR_T0SZ_OFFSET shift here should be removed.
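
For reference, the relevant definitions look roughly like this (abridged
from arch/arm64/include/asm/pgtable-hwdef.h; surrounding context omitted):

	#define TCR_T0SZ_OFFSET	0
	#define TCR_T0SZ(x)	((UL(64) - (x)) << TCR_T0SZ_OFFSET)
	#define TCR_TxSZ_WIDTH	6
	#define TCR_T0SZ_MASK	(((UL(1) << TCR_TxSZ_WIDTH) - 1) << TCR_T0SZ_OFFSET)

A t0sz built with TCR_T0SZ() is therefore already in the TCR_EL1 field
position by the time it reaches __cpu_set_tcr_t0sz().
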
Co-developed-by: Leem ChaeHoon <[email protected]>
Signed-off-by: Leem ChaeHoon <[email protected]>
Co-developed-by: Gyeonggeon Choi <[email protected]>
Signed-off-by: Gyeonggeon Choi <[email protected]>
Co-developed-by: Soomin Cho <[email protected]>
Signed-off-by: Soomin Cho <[email protected]>
Co-developed-by: DaeRo Lee <[email protected]>
Signed-off-by: DaeRo Lee <[email protected]>
Co-developed-by: kmasta <[email protected]>
Signed-off-by: kmasta <[email protected]>
Signed-off-by: Seongsu Park <[email protected]>
---
arch/arm64/include/asm/mmu_context.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index c768d16b81a4..58de99836d2e 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -76,7 +76,7 @@ static inline void __cpu_set_tcr_t0sz(unsigned long t0sz)
 		return;
 
 	tcr &= ~TCR_T0SZ_MASK;
-	tcr |= t0sz << TCR_T0SZ_OFFSET;
+	tcr |= t0sz;
 	write_sysreg(tcr, tcr_el1);
 	isb();
 }
--
2.34.1
On Tue, Apr 02, 2024 at 07:49:50PM +0900, Seongsu Park wrote:
> The value of t0sz has already been shifted by TCR_T0SZ_OFFSET in the
> TCR_T0SZ() macro, so the TCR_T0SZ_OFFSET shift here should be removed.
>
> Co-developed-by: Leem ChaeHoon <[email protected]>
> Signed-off-by: Leem ChaeHoon <[email protected]>
> Co-developed-by: Gyeonggeon Choi <[email protected]>
> Signed-off-by: Gyeonggeon Choi <[email protected]>
> Co-developed-by: Soomin Cho <[email protected]>
> Signed-off-by: Soomin Cho <[email protected]>
> Co-developed-by: DaeRo Lee <[email protected]>
> Signed-off-by: DaeRo Lee <[email protected]>
> Co-developed-by: kmasta <[email protected]>
> Signed-off-by: kmasta <[email protected]>
> Signed-off-by: Seongsu Park <[email protected]>
heh, that's quite a lot of people. Did you remove three chars each? :p
> ---
> arch/arm64/include/asm/mmu_context.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
> index c768d16b81a4..58de99836d2e 100644
> --- a/arch/arm64/include/asm/mmu_context.h
> +++ b/arch/arm64/include/asm/mmu_context.h
> @@ -76,7 +76,7 @@ static inline void __cpu_set_tcr_t0sz(unsigned long t0sz)
>  		return;
> 
>  	tcr &= ~TCR_T0SZ_MASK;
> -	tcr |= t0sz << TCR_T0SZ_OFFSET;
> +	tcr |= t0sz;
Thankfully, TCR_T0SZ_OFFSET is 0 so this isn't as alarming as it looks.
Even so, if we're going to make the code consistent, then shouldn't the
earlier conditional be updated too?

	if ((tcr & TCR_T0SZ_MASK) >> TCR_T0SZ_OFFSET == t0sz)
		return;

seems to assume that t0sz is unshifted.
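
Something like the below, I would guess (only a sketch, assuming t0sz is
always passed in pre-shifted; the read_sysreg() line is reconstructed
from the surrounding function rather than quoted from the patch):

	static inline void __cpu_set_tcr_t0sz(unsigned long t0sz)
	{
		unsigned long tcr = read_sysreg(tcr_el1);

		/* t0sz is already in field position, so no shift on either side */
		if ((tcr & TCR_T0SZ_MASK) == t0sz)
			return;

		tcr &= ~TCR_T0SZ_MASK;
		tcr |= t0sz;
		write_sysreg(tcr, tcr_el1);
		isb();
	}
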
Will
> On Tue, Apr 02, 2024 at 07:49:50PM +0900, Seongsu Park wrote:
> > The value of t0sz has already been shifted by TCR_T0SZ_OFFSET in the
> > TCR_T0SZ() macro, so the TCR_T0SZ_OFFSET shift here should be removed.
> >
> > Co-developed-by: Leem ChaeHoon <[email protected]>
> > Signed-off-by: Leem ChaeHoon <[email protected]>
> > Co-developed-by: Gyeonggeon Choi <[email protected]>
> > Signed-off-by: Gyeonggeon Choi <[email protected]>
> > Co-developed-by: Soomin Cho <[email protected]>
> > Signed-off-by: Soomin Cho <[email protected]>
> > Co-developed-by: DaeRo Lee <[email protected]>
> > Signed-off-by: DaeRo Lee <[email protected]>
> > Co-developed-by: kmasta <[email protected]>
> > Signed-off-by: kmasta <[email protected]>
> > Signed-off-by: Seongsu Park <[email protected]>
>
> heh, that's quite a lot of people. Did you remove three chars each? :p
We are studying the arm64 Linux kernel together for 7 hours every
Saturday! :)
>
> > ---
> > arch/arm64/include/asm/mmu_context.h | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
> > index c768d16b81a4..58de99836d2e 100644
> > --- a/arch/arm64/include/asm/mmu_context.h
> > +++ b/arch/arm64/include/asm/mmu_context.h
> > @@ -76,7 +76,7 @@ static inline void __cpu_set_tcr_t0sz(unsigned long t0sz)
> >  		return;
> >
> >  	tcr &= ~TCR_T0SZ_MASK;
> > -	tcr |= t0sz << TCR_T0SZ_OFFSET;
> > +	tcr |= t0sz;
>
> Thankfully, TCR_T0SZ_OFFSET is 0 so this isn't as alarming as it looks.
> Even so, if we're going to make the code consistent, then shouldn't the
> earlier conditional be updated too?
>
> 	if ((tcr & TCR_T0SZ_MASK) >> TCR_T0SZ_OFFSET == t0sz)
> 		return;
>
> seems to assume that t0sz is unshifted.
>
> Will
Thank you for the feedback. I'll send a v2 patch.