The MIPS R6 version of memcpy has a bug: when the length to copy is zero
and the addresses are not aligned, it can overwrite memory far beyond the
destination buffer.
Signed-off-by: Leonid Yegoshin <[email protected]>
---
arch/mips/lib/memcpy.S | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/mips/lib/memcpy.S b/arch/mips/lib/memcpy.S
index 9245e1705e69..7e0250f3aec8 100644
--- a/arch/mips/lib/memcpy.S
+++ b/arch/mips/lib/memcpy.S
@@ -514,6 +514,8 @@
#ifdef CONFIG_CPU_MIPSR6
.Lcopy_unaligned_bytes\@:
+ beqz len, .Ldone\@
+ nop
1:
COPY_BYTE(0)
COPY_BYTE(1)
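For readers less comfortable with the asm, here is a rough C-level sketch of
the failure mode the commit message describes (a hypothetical model, not the
kernel code; as the follow-up below argues, this path is not actually
reachable with len == 0):

/* Hypothetical C model of the scenario in the commit message (not the
 * kernel asm): a byte-copy tail that only tests len after copying would
 * wrap the counter when entered with len == 0 and keep writing far past
 * the destination buffer.  The added "beqz len, .Ldone\@" corresponds to
 * the explicit early return below (the nop presumably filling the branch
 * delay slot). */
#include <stddef.h>

void copy_unaligned_tail(unsigned char *dst, const unsigned char *src,
                         size_t len)
{
        if (len == 0)            /* the guard the patch adds */
                return;
        do {
                *dst++ = *src++;
        } while (--len != 0);    /* without the guard, len == 0 would wrap */
}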
Hi,
On Tue, Apr 28, 2015 at 1:35 AM, Leonid Yegoshin
<[email protected]> wrote:
> The MIPS R6 version of memcpy has a bug: when the length to copy is zero
> and the addresses are not aligned, it can overwrite memory far beyond the
> destination buffer.
>
> Signed-off-by: Leonid Yegoshin <[email protected]>
> ---
> arch/mips/lib/memcpy.S | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/arch/mips/lib/memcpy.S b/arch/mips/lib/memcpy.S
> index 9245e1705e69..7e0250f3aec8 100644
> --- a/arch/mips/lib/memcpy.S
> +++ b/arch/mips/lib/memcpy.S
> @@ -514,6 +514,8 @@
>
> #ifdef CONFIG_CPU_MIPSR6
> .Lcopy_unaligned_bytes\@:
> + beqz len, .Ldone\@
> + nop
> 1:
> COPY_BYTE(0)
> COPY_BYTE(1)
AFAICT it should never reach that label if the amount to copy is zero bytes,
so the check seems to be superfluous:
    sltu    t2, len, NBYTES                  <- check for len < NBYTES (4/8 bytes depending on 32/64 bit)
    and     t1, dst, ADDRMASK
    PREFS(  0, 1*32(src) )
    PREFD(  1, 1*32(dst) )
    bnez    t2, .Lcopy_bytes_checklen\@      <- skip to copy_bytes_checklen if len < NBYTES
    and     t0, src, ADDRMASK
    PREFS(  0, 2*32(src) )
    PREFD(  1, 2*32(dst) )
#ifndef CONFIG_CPU_MIPSR6
    bnez    t1, .Ldst_unaligned\@
    nop
    bnez    t0, .Lsrc_unaligned_dst_aligned\@
#else
    or      t0, t0, t1
    bnez    t0, .Lcopy_unaligned_bytes\@     <- the only outside branch to it, and only reachable if len >= NBYTES
#endif
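The same argument as a rough C sketch (a hedged model of the dispatch above,
not the kernel code; the ADDRMASK == NBYTES - 1 simplification and the names
are assumptions):

/* Hedged C model of the dispatch above (assuming ADDRMASK == NBYTES - 1):
 * the unaligned byte-copy path is only entered through the final bnez,
 * and the len < NBYTES case has already been diverted to
 * copy_bytes_checklen, so len >= NBYTES holds on entry. */
#include <stddef.h>
#include <stdint.h>

#define NBYTES   sizeof(long)                /* 4 or 8 depending on 32/64 bit */
#define ADDRMASK (NBYTES - 1)

enum path { COPY_BYTES_CHECKLEN, COPY_UNALIGNED_BYTES, ALIGNED_COPY };

enum path dispatch(const void *src, void *dst, size_t len)
{
        if (len < NBYTES)                                   /* sltu + bnez t2 */
                return COPY_BYTES_CHECKLEN;
        if (((uintptr_t)src | (uintptr_t)dst) & ADDRMASK)   /* or  + bnez t0 */
                return COPY_UNALIGNED_BYTES;                /* len >= NBYTES here */
        return ALIGNED_COPY;
}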
And in the loop itself, each COPY_BYTE() will already break out if len
becomes zero, so the unconditional b 1b should also never be reached
with len == 0 in that case.
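And that part too, as a hedged sketch (the unroll factor and the helper name
are assumptions, not the actual COPY_BYTE macro):

/* Assumed reading of the unrolled COPY_BYTE() sequence: each byte copy
 * decrements len and bails out to "done" the moment it hits zero, so the
 * backward branch at the end of the block only runs with len > 0.
 * Callers reach this path only with len >= NBYTES (see the dispatch
 * above), so len > 0 holds on entry as well. */
#include <stddef.h>

#define UNROLL 4   /* assumed unroll factor; the exact value doesn't matter */

void copy_unaligned_bytes(unsigned char *dst, const unsigned char *src,
                          size_t len)
{
        for (;;) {
                for (int i = 0; i < UNROLL; i++) {  /* COPY_BYTE(0..UNROLL-1) */
                        *dst++ = *src++;
                        if (--len == 0)             /* beqz len, .Ldone\@ */
                                return;
                }
                /* unconditional "b 1b": never reached with len == 0 */
        }
}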
But maybe I overlooked something.
Regards
Jonas
You're right,
I am debugging a new core and got a wrong backtrace.
Please cancel it, sorry for the noise.
- Leonid.
Jonas Gorski <[email protected]> wrote:
Hi,
On Tue, Apr 28, 2015 at 1:35 AM, Leonid Yegoshin
<[email protected]> wrote:
> MIPS R6 version of memcpy has bug - then length to copy is zero
> and addresses are not aligned then it can overwrite a whole memory.
>
> Signed-off-by: Leonid Yegoshin <[email protected]>
> ---
> arch/mips/lib/memcpy.S | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/arch/mips/lib/memcpy.S b/arch/mips/lib/memcpy.S
> index 9245e1705e69..7e0250f3aec8 100644
> --- a/arch/mips/lib/memcpy.S
> +++ b/arch/mips/lib/memcpy.S
> @@ -514,6 +514,8 @@
>
> #ifdef CONFIG_CPU_MIPSR6
> .Lcopy_unaligned_bytes\@:
> + beqz len, .Ldone\@
> + nop
> 1:
> COPY_BYTE(0)
> COPY_BYTE(1)
AFAICT it should never reach that if the amount to copy is zero bytes,
so the check seems to be superfluous:
sltu t2, len, NBYTES <- check for < NBYTES (4/8 bit
depending on 32/64 bit)
and t1, dst, ADDRMASK
PREFS( 0, 1*32(src) )
PREFD( 1, 1*32(dst) )
bnez t2, .Lcopy_bytes_checklen\@ <- skip to
copy_bytes_checklen if < NBYTES
and t0, src, ADDRMASK
PREFS( 0, 2*32(src) )
PREFD( 1, 2*32(dst) )
#ifndef CONFIG_CPU_MIPSR6
bnez t1, .Ldst_unaligned\@
nop
bnez t0, .Lsrc_unaligned_dst_aligned\@
#else
or t0, t0, t1
bnez t0, .Lcopy_unaligned_bytes\@ <- only outside place to
branch to it, and only reachable if len >= NBYTES bytes.
#endif
And in the loop itself each COPY_BYTE() will already break out if len
becomes zero, so the unconditional b 1b should also never be reached
with len == 0 in that case..
But maybe I overlooked something.
Regards
Jonas