2022-12-15 10:50:00

by Geert Uytterhoeven

Subject: Re: [PATCH] fbcon: Use kzalloc() in fbcon_prepare_logo()

Hi Handa-san,

On Thu, Nov 17, 2022 at 4:32 PM Tetsuo Handa
<[email protected]> wrote:
> A kernel built with syzbot's config file reported that
>
> scr_memcpyw(q, save, array3_size(logo_lines, new_cols, 2))
>
> causes uninitialized "save" to be copied.

> Signed-off-by: Tetsuo Handa <[email protected]>

Thanks for your patch, which is now commit a6a00d7e8ffd78d1
("fbcon: Use kzalloc() in fbcon_prepare_logo()") in v6.1-rc7,
and which is being backported to stable.

> --- a/drivers/video/fbdev/core/fbcon.c
> +++ b/drivers/video/fbdev/core/fbcon.c
> @@ -577,7 +577,7 @@ static void fbcon_prepare_logo(struct vc_data *vc, struct fb_info *info,
>  		if (scr_readw(r) != vc->vc_video_erase_char)
>  			break;
>  	if (r != q && new_rows >= rows + logo_lines) {
> -		save = kmalloc(array3_size(logo_lines, new_cols, 2),
> +		save = kzalloc(array3_size(logo_lines, new_cols, 2),
>  			       GFP_KERNEL);
>  		if (save) {
>  			int i = min(cols, new_cols);

The next line is:

		scr_memsetw(save, erase,
			    array3_size(logo_lines, new_cols, 2));

So how can this turn out to be uninitialized later below?

		scr_memcpyw(q, save, array3_size(logo_lines, new_cols, 2));

What am I missing?

Gr{oetje,eeting}s,

Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- [email protected]

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
-- Linus Torvalds


2022-12-16 14:59:08

by Tetsuo Handa

Subject: Re: [PATCH] fbcon: Use kzalloc() in fbcon_prepare_logo()

On 2022/12/15 18:36, Geert Uytterhoeven wrote:
> The next line is:
>
> scr_memsetw(save, erase, array3_size(logo_lines, new_cols, 2));
>
> So how can this turn out to be uninitialized later below?
>
> scr_memcpyw(q, save, array3_size(logo_lines, new_cols, 2));
>
> What am I missing?

Good catch. It turned out that this was a KMSAN problem (i.e. a false positive report).

On x86_64, scr_memsetw() is implemented as:

static inline void scr_memsetw(u16 *s, u16 c, unsigned int count)
{
	memset16(s, c, count / 2);
}

and memset16() is implemented as

static inline void *memset16(uint16_t *s, uint16_t v, size_t n)
{
	long d0, d1;
	asm volatile("rep\n\t"
		     "stosw"
		     : "=&c" (d0), "=&D" (d1)
		     : "a" (v), "1" (s), "0" (n)
		     : "memory");
	return s;
}

Plain memset() in arch/x86/include/asm/string_64.h is redirected to __msan_memset(),
but the memsetXX() helpers are not redirected to __msan_memsetXX(). That is, memory
initialized via memsetXX() does not get its KMSAN shadow memory updated, so KMSAN
still considers it uninitialized.
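
For reference, the memset() side of arch/x86/include/asm/string_64.h under KMSAN
looks roughly like this (simplified sketch from memory, not the exact upstream code):

#define __HAVE_ARCH_MEMSET
#if defined(__SANITIZE_MEMORY__) && defined(__NO_FORTIFY)
/* KMSAN build: every memset() call goes through the instrumented runtime
 * helper, which also updates the shadow memory of the destination. */
extern void *__msan_memset(void *s, int c, size_t n);
#undef memset
#define memset __msan_memset
#else
void *memset(void *s, int c, size_t n);
#endif

memset16() and friends get no such #define, and their "rep stos" inline asm is
opaque to the compiler instrumentation.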

KMSAN folks, how should we fix this problem?
Redirect assembly-implemented memset16(size) to memset(size*2) if KMSAN is enabled?

2022-12-16 16:06:56

by Alexander Potapenko

Subject: Re: [PATCH] fbcon: Use kzalloc() in fbcon_prepare_logo()

On Fri, Dec 16, 2022 at 3:03 PM Tetsuo Handa
<[email protected]> wrote:
>
> On 2022/12/15 18:36, Geert Uytterhoeven wrote:
> > The next line is:
> >
> > scr_memsetw(save, erase, array3_size(logo_lines, new_cols, 2));
> >
> > So how can this turn out to be uninitialized later below?
> >
> > scr_memcpyw(q, save, array3_size(logo_lines, new_cols, 2));
> >
> > What am I missing?
>
> Good catch. It turned out that this was a KMSAN problem (i.e. a false positive report).
>
> On x86_64, scr_memsetw() is implemented as
>
> static inline void scr_memsetw(u16 *s, u16 c, unsigned int count)
> {
> memset16(s, c, count / 2);
> }
>
> and memset16() is implemented as
>
> static inline void *memset16(uint16_t *s, uint16_t v, size_t n)
> {
> long d0, d1;
> asm volatile("rep\n\t"
> "stosw"
> : "=&c" (d0), "=&D" (d1)
> : "a" (v), "1" (s), "0" (n)
> : "memory");
> return s;
> }
>
> . Plain memset() in arch/x86/include/asm/string_64.h is redirected to __msan_memset()
> but memsetXX() are not redirected to __msan_memsetXX(). That is, memory initialization
> via memsetXX() results in KMSAN's shadow memory being not updated.
>
> KMSAN folks, how should we fix this problem?
> Redirect assembly-implemented memset16(size) to memset(size*2) if KMSAN is enabled?
>

I think the easiest way to fix it would be to disable the memsetXX asm
implementations with something like:

-------------------------------------------------------------------------------------------------
diff --git a/arch/x86/include/asm/string_64.h b/arch/x86/include/asm/string_64.h
index 888731ccf1f67..5fb330150a7d1 100644
--- a/arch/x86/include/asm/string_64.h
+++ b/arch/x86/include/asm/string_64.h
@@ -33,6 +33,7 @@ void *memset(void *s, int c, size_t n);
 #endif
 void *__memset(void *s, int c, size_t n);
 
+#if !defined(__SANITIZE_MEMORY__)
 #define __HAVE_ARCH_MEMSET16
 static inline void *memset16(uint16_t *s, uint16_t v, size_t n)
 {
@@ -68,6 +69,7 @@ static inline void *memset64(uint64_t *s, uint64_t v, size_t n)
 		     : "memory");
 	return s;
 }
+#endif
 
 #define __HAVE_ARCH_MEMMOVE
 #if defined(__SANITIZE_MEMORY__) && defined(__NO_FORTIFY)
-------------------------------------------------------------------------------------------------

This way we'll just pick the existing C implementations instead of
reinventing them.
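
For reference, the generic fallback that a KMSAN build would then pick up is the
plain C loop in lib/string.c, roughly (quoting from memory):

#ifndef __HAVE_ARCH_MEMSET16
/* Plain C stores: the compiler instrumentation sees every write and marks
 * the destination's shadow memory as initialized. */
void *memset16(uint16_t *s, uint16_t v, size_t count)
{
	uint16_t *xs = s;

	while (count--)
		*xs++ = v;
	return s;
}
EXPORT_SYMBOL(memset16);
#endif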


--
Alexander Potapenko
Software Engineer

Google Germany GmbH
Erika-Mann-Straße, 33
80636 München

Geschäftsführer: Paul Manicle, Liana Sebastian
Registergericht und -nummer: Hamburg, HRB 86891
Sitz der Gesellschaft: Hamburg

2023-01-05 12:25:48

by Daniel Vetter

Subject: Re: [PATCH] fbcon: Use kzalloc() in fbcon_prepare_logo()

On Fri, Dec 16, 2022 at 04:52:14PM +0100, Alexander Potapenko wrote:
> On Fri, Dec 16, 2022 at 3:03 PM Tetsuo Handa
> <[email protected]> wrote:
> >
> > On 2022/12/15 18:36, Geert Uytterhoeven wrote:
> > > The next line is:
> > >
> > > scr_memsetw(save, erase, array3_size(logo_lines, new_cols, 2));
> > >
> > > So how can this turn out to be uninitialized later below?
> > >
> > > scr_memcpyw(q, save, array3_size(logo_lines, new_cols, 2));
> > >
> > > What am I missing?
> >
> > Good catch. It turned out that this was a KMSAN problem (i.e. a false positive report).
> >
> > On x86_64, scr_memsetw() is implemented as
> >
> > static inline void scr_memsetw(u16 *s, u16 c, unsigned int count)
> > {
> > memset16(s, c, count / 2);
> > }
> >
> > and memset16() is implemented as
> >
> > static inline void *memset16(uint16_t *s, uint16_t v, size_t n)
> > {
> > long d0, d1;
> > asm volatile("rep\n\t"
> > "stosw"
> > : "=&c" (d0), "=&D" (d1)
> > : "a" (v), "1" (s), "0" (n)
> > : "memory");
> > return s;
> > }
> >
> > . Plain memset() in arch/x86/include/asm/string_64.h is redirected to __msan_memset()
> > but memsetXX() are not redirected to __msan_memsetXX(). That is, memory initialization
> > via memsetXX() results in KMSAN's shadow memory being not updated.
> >
> > KMSAN folks, how should we fix this problem?
> > Redirect assembly-implemented memset16(size) to memset(size*2) if KMSAN is enabled?
> >
>
> I think the easiest way to fix it would be disable memsetXX asm
> implementations by something like:
>
> -------------------------------------------------------------------------------------------------
> diff --git a/arch/x86/include/asm/string_64.h b/arch/x86/include/asm/string_64.h
> index 888731ccf1f67..5fb330150a7d1 100644
> --- a/arch/x86/include/asm/string_64.h
> +++ b/arch/x86/include/asm/string_64.h
> @@ -33,6 +33,7 @@ void *memset(void *s, int c, size_t n);
> #endif
> void *__memset(void *s, int c, size_t n);
>
> +#if !defined(__SANITIZE_MEMORY__)
> #define __HAVE_ARCH_MEMSET16
> static inline void *memset16(uint16_t *s, uint16_t v, size_t n)
> {
> @@ -68,6 +69,7 @@ static inline void *memset64(uint64_t *s, uint64_t
> v, size_t n)
> : "memory");
> return s;
> }
> +#endif

So ... what should I do here? Can someone please send me a revert or patch
to apply. I don't think I should do this, since I already tossed my credit
for not looking at stuff carefully enough into the wind :-)
-Daniel

>
> #define __HAVE_ARCH_MEMMOVE
> #if defined(__SANITIZE_MEMORY__) && defined(__NO_FORTIFY)
> -------------------------------------------------------------------------------------------------
>
> This way we'll just pick the existing C implementations instead of
> reinventing them.
>
>
> --
> Alexander Potapenko
> Software Engineer
>
> Google Germany GmbH
> Erika-Mann-Straße, 33
> 80636 München
>
> Geschäftsführer: Paul Manicle, Liana Sebastian
> Registergericht und -nummer: Hamburg, HRB 86891
> Sitz der Gesellschaft: Hamburg

--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

2023-01-05 13:47:55

by Tetsuo Handa

Subject: Re: [PATCH] fbcon: Use kzalloc() in fbcon_prepare_logo()

On 2023/01/05 20:54, Daniel Vetter wrote:
>>> . Plain memset() in arch/x86/include/asm/string_64.h is redirected to __msan_memset()
>>> but memsetXX() are not redirected to __msan_memsetXX(). That is, memory initialization
>>> via memsetXX() results in KMSAN's shadow memory being not updated.
>>>
>>> KMSAN folks, how should we fix this problem?
>>> Redirect assembly-implemented memset16(size) to memset(size*2) if KMSAN is enabled?
>>>
>>
>> I think the easiest way to fix it would be disable memsetXX asm
>> implementations by something like:
>>
>> -------------------------------------------------------------------------------------------------
>> diff --git a/arch/x86/include/asm/string_64.h b/arch/x86/include/asm/string_64.h
>> index 888731ccf1f67..5fb330150a7d1 100644
>> --- a/arch/x86/include/asm/string_64.h
>> +++ b/arch/x86/include/asm/string_64.h
>> @@ -33,6 +33,7 @@ void *memset(void *s, int c, size_t n);
>> #endif
>> void *__memset(void *s, int c, size_t n);
>>
>> +#if !defined(__SANITIZE_MEMORY__)
>> #define __HAVE_ARCH_MEMSET16
>> static inline void *memset16(uint16_t *s, uint16_t v, size_t n)
>> {
>> @@ -68,6 +69,7 @@ static inline void *memset64(uint64_t *s, uint64_t
>> v, size_t n)
>> : "memory");
>> return s;
>> }
>> +#endif
>
> So ... what should I do here? Can someone please send me a revert or patch
> to apply. I don't think I should do this, since I already tossed my credit
> for not looking at stuff carefully enough into the wind :-)
> -Daniel
>
>>
>> #define __HAVE_ARCH_MEMMOVE
>> #if defined(__SANITIZE_MEMORY__) && defined(__NO_FORTIFY)
>> -------------------------------------------------------------------------------------------------
>>
>> This way we'll just pick the existing C implementations instead of
>> reinventing them.
>>

I'd like to avoid touching per-arch asm/string.h files if possible.

Can't we do something like the below (i.e. keep the asm implementations as-is,
but automatically redirect them to __msan_memset())? If yes, we could move all
the __msan_*() redirection from the per-arch asm/string.h files to the common
linux/string.h file.

diff --git a/include/linux/string.h b/include/linux/string.h
index c062c581a98b..403813b04e00 100644
--- a/include/linux/string.h
+++ b/include/linux/string.h
@@ -360,4 +360,15 @@ static __always_inline size_t str_has_prefix(const char *str, const char *prefix)
 	return strncmp(str, prefix, len) == 0 ? len : 0;
 }
 
+#if defined(__SANITIZE_MEMORY__) && defined(__NO_FORTIFY)
+#undef memset
+#define memset(dest, src, count) __msan_memset((dest), (src), (count))
+#undef memset16
+#define memset16(dest, src, count) __msan_memset((dest), (src), (count) << 1)
+#undef memset32
+#define memset32(dest, src, count) __msan_memset((dest), (src), (count) << 2)
+#undef memset64
+#define memset64(dest, src, count) __msan_memset((dest), (src), (count) << 3)
+#endif
+
 #endif /* _LINUX_STRING_H_ */
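
With these macros, the fbcon call discussed above would expand roughly as
follows (illustration only):

	/*
	 * scr_memsetw(save, erase, array3_size(logo_lines, new_cols, 2))
	 * calls memset16(save, erase, array3_size(logo_lines, new_cols, 2) / 2),
	 * which the memset16 macro above turns into:
	 */
	__msan_memset((save), (erase), (array3_size(logo_lines, new_cols, 2) / 2) << 1);

so the KMSAN runtime marks every byte of "save" as initialized and the later
scr_memcpyw(q, save, ...) no longer triggers a report.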


2023-01-05 13:47:58

by Daniel Vetter

Subject: Re: [PATCH] fbcon: Use kzalloc() in fbcon_prepare_logo()

On Thu, Jan 05, 2023 at 10:17:24PM +0900, Tetsuo Handa wrote:
> On 2023/01/05 20:54, Daniel Vetter wrote:
> >>> . Plain memset() in arch/x86/include/asm/string_64.h is redirected to __msan_memset()
> >>> but memsetXX() are not redirected to __msan_memsetXX(). That is, memory initialization
> >>> via memsetXX() results in KMSAN's shadow memory being not updated.
> >>>
> >>> KMSAN folks, how should we fix this problem?
> >>> Redirect assembly-implemented memset16(size) to memset(size*2) if KMSAN is enabled?
> >>>
> >>
> >> I think the easiest way to fix it would be disable memsetXX asm
> >> implementations by something like:
> >>
> >> -------------------------------------------------------------------------------------------------
> >> diff --git a/arch/x86/include/asm/string_64.h b/arch/x86/include/asm/string_64.h
> >> index 888731ccf1f67..5fb330150a7d1 100644
> >> --- a/arch/x86/include/asm/string_64.h
> >> +++ b/arch/x86/include/asm/string_64.h
> >> @@ -33,6 +33,7 @@ void *memset(void *s, int c, size_t n);
> >> #endif
> >> void *__memset(void *s, int c, size_t n);
> >>
> >> +#if !defined(__SANITIZE_MEMORY__)
> >> #define __HAVE_ARCH_MEMSET16
> >> static inline void *memset16(uint16_t *s, uint16_t v, size_t n)
> >> {
> >> @@ -68,6 +69,7 @@ static inline void *memset64(uint64_t *s, uint64_t
> >> v, size_t n)
> >> : "memory");
> >> return s;
> >> }
> >> +#endif
> >
> > So ... what should I do here? Can someone please send me a revert or patch
> > to apply. I don't think I should do this, since I already tossed my credit
> > for not looking at stuff carefully enough into the wind :-)
> > -Daniel
> >
> >>
> >> #define __HAVE_ARCH_MEMMOVE
> >> #if defined(__SANITIZE_MEMORY__) && defined(__NO_FORTIFY)
> >> -------------------------------------------------------------------------------------------------
> >>
> >> This way we'll just pick the existing C implementations instead of
> >> reinventing them.
> >>
>
> I'd like to avoid touching per-arch asm/string.h files if possible.
>
> Can't we do like below (i.e. keep asm implementations as-is, but
> automatically redirect to __msan_memset()) ? If yes, we could move all
> __msan_*() redirection from per-arch asm/string.h files to the common
> linux/string.h file?

Oh, I was more asking about the fbdev patch. This here sounds a lot more like
something that needs to be discussed with kmsan people; that's definitely
not my area.
-Daniel

>
> diff --git a/include/linux/string.h b/include/linux/string.h
> index c062c581a98b..403813b04e00 100644
> --- a/include/linux/string.h
> +++ b/include/linux/string.h
> @@ -360,4 +360,15 @@ static __always_inline size_t str_has_prefix(const char *str, const char *prefix
> return strncmp(str, prefix, len) == 0 ? len : 0;
> }
>
> +#if defined(__SANITIZE_MEMORY__) && defined(__NO_FORTIFY)
> +#undef memset
> +#define memset(dest, src, count) __msan_memset((dest), (src), (count))
> +#undef memset16
> +#define memset16(dest, src, count) __msan_memset((dest), (src), (count) << 1)
> +#undef memset32
> +#define memset32(dest, src, count) __msan_memset((dest), (src), (count) << 2)
> +#undef memset64
> +#define memset64(dest, src, count) __msan_memset((dest), (src), (count) << 3)
> +#endif
> +
> #endif /* _LINUX_STRING_H_ */
>
>

--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

2023-01-05 14:16:59

by Tetsuo Handa

Subject: Re: [PATCH] fbcon: Use kzalloc() in fbcon_prepare_logo()

On 2023/01/05 22:22, Daniel Vetter wrote:
> Oh I was more asking about the fbdev patch. This here sounds a lot more
> something that needs to be discussed with kmsan people, that's definitely
> not my area.
> -Daniel

Commit a6a00d7e8ffd ("fbcon: Use kzalloc() in fbcon_prepare_logo()") was
redundant, but leaving it in place (i.e. not reverting it) is harmless. You
don't need to worry about this problem; it is one for the KMSAN people.

2023-03-01 13:41:47

by Alexander Potapenko

Subject: Re: [PATCH] fbcon: Use kzalloc() in fbcon_prepare_logo()

>
> I'd like to avoid touching per-arch asm/string.h files if possible.
>
> Can't we do like below (i.e. keep asm implementations as-is, but
> automatically redirect to __msan_memset()) ? If yes, we could move all
> __msan_*() redirection from per-arch asm/string.h files to the common
> linux/string.h file?
>
> diff --git a/include/linux/string.h b/include/linux/string.h
> index c062c581a98b..403813b04e00 100644
> --- a/include/linux/string.h
> +++ b/include/linux/string.h
> @@ -360,4 +360,15 @@ static __always_inline size_t str_has_prefix(const char *str, const char *prefix
> return strncmp(str, prefix, len) == 0 ? len : 0;
> }
>
> +#if defined(__SANITIZE_MEMORY__) && defined(__NO_FORTIFY)
> +#undef memset
> +#define memset(dest, src, count) __msan_memset((dest), (src), (count))
> +#undef memset16
> +#define memset16(dest, src, count) __msan_memset((dest), (src), (count) << 1)
> +#undef memset32
> +#define memset32(dest, src, count) __msan_memset((dest), (src), (count) << 2)
> +#undef memset64
> +#define memset64(dest, src, count) __msan_memset((dest), (src), (count) << 3)
> +#endif

The problem with this approach is that it only works for memset/memcpy/memmove,
whereas any other function implemented in lib/string.c may require undefining
the respective __HAVE_ARCH_FNAME macro so that KMSAN can instrument it.
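
(For context, as far as I remember the KMSAN runtime only provides string
interceptors for the three plain routines, declared roughly as below -- the
exact parameter types may differ:

	/* mm/kmsan/instrumentation.c (from memory): the only mem*() helpers
	 * the runtime intercepts and updates shadow/origin state for. */
	void *__msan_memcpy(void *dst, const void *src, uintptr_t n);
	void *__msan_memmove(void *dst, const void *src, uintptr_t n);
	void *__msan_memset(void *dst, int c, uintptr_t n);

so there is simply nothing to #define the other lib/string.c helpers to; they
need the compiler-instrumented C implementations instead.)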