2022-02-14 21:22:49

by syzbot

Subject: [syzbot] KASAN: vmalloc-out-of-bounds Read in bpf_jit_free

Hello,

syzbot found the following issue on:

HEAD commit: e5313968c41b Merge branch 'Split bpf_sk_lookup remote_port..
git tree: bpf-next
console output: https://syzkaller.appspot.com/x/log.txt?x=10baced8700000
kernel config: https://syzkaller.appspot.com/x/.config?x=c40b67275bfe2a58
dashboard link: https://syzkaller.appspot.com/bug?extid=2f649ec6d2eea1495a8f
compiler: gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2

Unfortunately, I don't have any reproducer for this issue yet.

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: [email protected]

==================================================================
BUG: KASAN: vmalloc-out-of-bounds in bpf_jit_binary_pack_free kernel/bpf/core.c:1120 [inline]
BUG: KASAN: vmalloc-out-of-bounds in bpf_jit_free+0x2b5/0x2e0 kernel/bpf/core.c:1151
Read of size 4 at addr ffffffffa0001a80 by task kworker/0:18/13642

CPU: 0 PID: 13642 Comm: kworker/0:18 Not tainted 5.16.0-syzkaller-11655-ge5313968c41b #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Workqueue: events bpf_prog_free_deferred
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106
print_address_description.constprop.0.cold+0xf/0x336 mm/kasan/report.c:255
__kasan_report mm/kasan/report.c:442 [inline]
kasan_report.cold+0x83/0xdf mm/kasan/report.c:459
bpf_jit_binary_pack_free kernel/bpf/core.c:1120 [inline]
bpf_jit_free+0x2b5/0x2e0 kernel/bpf/core.c:1151
bpf_prog_free_deferred+0x5c1/0x790 kernel/bpf/core.c:2524
process_one_work+0x9ac/0x1650 kernel/workqueue.c:2307
worker_thread+0x657/0x1110 kernel/workqueue.c:2454
kthread+0x2e9/0x3a0 kernel/kthread.c:377
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
</TASK>


Memory state around the buggy address:
ffffffffa0001980: f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8
ffffffffa0001a00: f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8
>ffffffffa0001a80: f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8
^
ffffffffa0001b00: f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8
ffffffffa0001b80: f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8
==================================================================


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at [email protected].

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.


2022-02-15 00:52:23

by Daniel Borkmann

Subject: Re: [syzbot] KASAN: vmalloc-out-of-bounds Read in bpf_jit_free

Song, ptal.

On 2/14/22 7:45 PM, syzbot wrote:
> Hello,
>
> syzbot found the following issue on:
>
> HEAD commit: e5313968c41b Merge branch 'Split bpf_sk_lookup remote_port..
> git tree: bpf-next
> console output: https://syzkaller.appspot.com/x/log.txt?x=10baced8700000
> kernel config: https://syzkaller.appspot.com/x/.config?x=c40b67275bfe2a58
> dashboard link: https://syzkaller.appspot.com/bug?extid=2f649ec6d2eea1495a8f
> compiler: gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2
>
> Unfortunately, I don't have any reproducer for this issue yet.
>
> IMPORTANT: if you fix the issue, please add the following tag to the commit:
> Reported-by: [email protected]
>
> ==================================================================
> BUG: KASAN: vmalloc-out-of-bounds in bpf_jit_binary_pack_free kernel/bpf/core.c:1120 [inline]
> BUG: KASAN: vmalloc-out-of-bounds in bpf_jit_free+0x2b5/0x2e0 kernel/bpf/core.c:1151
> Read of size 4 at addr ffffffffa0001a80 by task kworker/0:18/13642
>
> CPU: 0 PID: 13642 Comm: kworker/0:18 Not tainted 5.16.0-syzkaller-11655-ge5313968c41b #0
> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
> Workqueue: events bpf_prog_free_deferred
> Call Trace:
> <TASK>
> __dump_stack lib/dump_stack.c:88 [inline]
> dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106
> print_address_description.constprop.0.cold+0xf/0x336 mm/kasan/report.c:255
> __kasan_report mm/kasan/report.c:442 [inline]
> kasan_report.cold+0x83/0xdf mm/kasan/report.c:459
> bpf_jit_binary_pack_free kernel/bpf/core.c:1120 [inline]
> bpf_jit_free+0x2b5/0x2e0 kernel/bpf/core.c:1151
> bpf_prog_free_deferred+0x5c1/0x790 kernel/bpf/core.c:2524
> process_one_work+0x9ac/0x1650 kernel/workqueue.c:2307
> worker_thread+0x657/0x1110 kernel/workqueue.c:2454
> kthread+0x2e9/0x3a0 kernel/kthread.c:377
> ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
> </TASK>
>
>
> Memory state around the buggy address:
> ffffffffa0001980: f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8
> ffffffffa0001a00: f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8
>> ffffffffa0001a80: f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8
> ^
> ffffffffa0001b00: f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8
> ffffffffa0001b80: f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8
> ==================================================================

2022-02-15 13:15:01

by Song Liu

Subject: Re: [syzbot] KASAN: vmalloc-out-of-bounds Read in bpf_jit_free

On Mon, Feb 14, 2022 at 3:52 PM Daniel Borkmann <[email protected]> wrote:
>
> Song, ptal.
>
> On 2/14/22 7:45 PM, syzbot wrote:
> > Hello,
> >
> > syzbot found the following issue on:
> >
> > HEAD commit: e5313968c41b Merge branch 'Split bpf_sk_lookup remote_port..
> > git tree: bpf-next
> > console output: https://syzkaller.appspot.com/x/log.txt?x=10baced8700000
> > kernel config: https://syzkaller.appspot.com/x/.config?x=c40b67275bfe2a58
> > dashboard link: https://syzkaller.appspot.com/bug?extid=2f649ec6d2eea1495a8f
> > compiler: gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2
> >
> > Unfortunately, I don't have any reproducer for this issue yet.
> >
> > IMPORTANT: if you fix the issue, please add the following tag to the commit:
> > Reported-by: [email protected]
> >
> > ==================================================================
> > BUG: KASAN: vmalloc-out-of-bounds in bpf_jit_binary_pack_free kernel/bpf/core.c:1120 [inline]
> > BUG: KASAN: vmalloc-out-of-bounds in bpf_jit_free+0x2b5/0x2e0 kernel/bpf/core.c:1151
> > Read of size 4 at addr ffffffffa0001a80 by task kworker/0:18/13642
> >
> > CPU: 0 PID: 13642 Comm: kworker/0:18 Not tainted 5.16.0-syzkaller-11655-ge5313968c41b #0
> > Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
> > Workqueue: events bpf_prog_free_deferred
> > Call Trace:
> > <TASK>
> > __dump_stack lib/dump_stack.c:88 [inline]
> > dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106
> > print_address_description.constprop.0.cold+0xf/0x336 mm/kasan/report.c:255
> > __kasan_report mm/kasan/report.c:442 [inline]
> > kasan_report.cold+0x83/0xdf mm/kasan/report.c:459
> > bpf_jit_binary_pack_free kernel/bpf/core.c:1120 [inline]
> > bpf_jit_free+0x2b5/0x2e0 kernel/bpf/core.c:1151
> > bpf_prog_free_deferred+0x5c1/0x790 kernel/bpf/core.c:2524
> > process_one_work+0x9ac/0x1650 kernel/workqueue.c:2307
> > worker_thread+0x657/0x1110 kernel/workqueue.c:2454
> > kthread+0x2e9/0x3a0 kernel/kthread.c:377
> > ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
> > </TASK>

I think this is the same issue as [1]: the 2MB page somehow got freed
while still in use. I couldn't spot any bug in the bpf_prog_pack
allocate/free logic, and I haven't had any luck reproducing it either.
Will continue tomorrow.


[1] https://lore.kernel.org/netdev/[email protected]/t/

> >
> >
> > Memory state around the buggy address:
> > ffffffffa0001980: f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8
> > ffffffffa0001a00: f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8
> >> ffffffffa0001a80: f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8
> > ^
> > ffffffffa0001b00: f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8
> > ffffffffa0001b80: f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8 f8
> > ==================================================================
>

2022-02-16 06:38:47

by Song Liu

Subject: Re: [syzbot] KASAN: vmalloc-out-of-bounds Read in bpf_jit_free

On Mon, Feb 14, 2022 at 10:41 PM Song Liu <[email protected]> wrote:
>
> On Mon, Feb 14, 2022 at 3:52 PM Daniel Borkmann <[email protected]> wrote:
> >
> > Song, ptal.
> >
> > On 2/14/22 7:45 PM, syzbot wrote:
> > > Hello,
> > >
> > > syzbot found the following issue on:
> > >
> > > HEAD commit: e5313968c41b Merge branch 'Split bpf_sk_lookup remote_port..
> > > git tree: bpf-next
> > > console output: https://syzkaller.appspot.com/x/log.txt?x=10baced8700000
> > > kernel config: https://syzkaller.appspot.com/x/.config?x=c40b67275bfe2a58
> > > dashboard link: https://syzkaller.appspot.com/bug?extid=2f649ec6d2eea1495a8f

How do I run the exact same syzkaller? I am doing something like

./bin/syz-manager -config qemu.cfg

with the cfg file like:

{
	"target": "linux/amd64",
	"http": ":56741",
	"workdir": "workdir",
	"kernel_obj": "linux",
	"image": "./pkg/mgrconfig/testdata/stretch.img",
	"syzkaller": ".",
	"disable_syscalls": ["keyctl", "add_key", "request_key"],
	"suppressions": ["some known bug"],
	"procs": 8,
	"type": "qemu",
	"vm": {
		"count": 16,
		"cpu": 2,
		"mem": 2048,
		"kernel": "linux/arch/x86/boot/bzImage"
	}
}

Is this correct? I am using stretch.img from the syzkaller site, and
the .config from the link above.

Thanks,
Song

2022-02-16 09:44:45

by Aleksandr Nogikh

Subject: Re: [syzbot] KASAN: vmalloc-out-of-bounds Read in bpf_jit_free

Hi Song,

Is syzkaller not doing something you expect it to do with this config?

On Wed, Feb 16, 2022 at 2:38 AM Song Liu <[email protected]> wrote:
>
> On Mon, Feb 14, 2022 at 10:41 PM Song Liu <[email protected]> wrote:
> >
> > On Mon, Feb 14, 2022 at 3:52 PM Daniel Borkmann <[email protected]> wrote:
> > >
> > > Song, ptal.
> > >
> > > On 2/14/22 7:45 PM, syzbot wrote:
> > > > Hello,
> > > >
> > > > syzbot found the following issue on:
> > > >
> > > > HEAD commit: e5313968c41b Merge branch 'Split bpf_sk_lookup remote_port..
> > > > git tree: bpf-next
> > > > console output: https://syzkaller.appspot.com/x/log.txt?x=10baced8700000
> > > > kernel config: https://syzkaller.appspot.com/x/.config?x=c40b67275bfe2a58
> > > > dashboard link: https://syzkaller.appspot.com/bug?extid=2f649ec6d2eea1495a8f
>
> How do I run the exact same syzkaller? I am doing something like
>
> ./bin/syz-manager -config qemu.cfg
>
> with the cfg file like:
>
> {
> "target": "linux/amd64",
> "http": ":56741",
> "workdir": "workdir",
> "kernel_obj": "linux",
> "image": "./pkg/mgrconfig/testdata/stretch.img",

This image location looks suspicious: we store some dummy data for
tests in that folder. Instances now run on buildroot-based images,
generated with
https://github.com/google/syzkaller/blob/master/tools/create-buildroot-image.sh

> "syzkaller": ".",
> "disable_syscalls": ["keyctl", "add_key", "request_key"],

For our bpf instances, instead of disable_syscalls we use enable_syscalls:

"enable_syscalls": [
"bpf", "mkdir", "mount$bpf", "unlink", "close",
"perf_event_open*", "ioctl$PERF*", "getpid", "gettid",
"socketpair", "sendmsg", "recvmsg", "setsockopt$sock_attach_bpf",
"socket$kcm", "ioctl$sock_kcm*", "syz_clone",
"mkdirat$cgroup*", "openat$cgroup*", "write$cgroup*",
"openat$tun", "write$tun", "ioctl$TUN*", "ioctl$SIOCSIFHWADDR",
"openat$ppp", "syz_open_procfs$namespace"
]

> "suppressions": ["some known bug"],
> "procs": 8,

We usually run with "procs": 6, but it's not that important.

> "type": "qemu",
> "vm": {
> "count": 16,
> "cpu": 2,
> "mem": 2048,
> "kernel": "linux/arch/x86/boot/bzImage"
> }
> }

Otherwise I don't see any really significant differences.

--
Best Regards
Aleksandr

>
> Is this correct? I am using stretch.img from syzkaller site, and the
> .config from
> the link above.
>
> Thanks,
> Song
>

2022-02-16 21:26:14

by Song Liu

Subject: Re: [syzbot] KASAN: vmalloc-out-of-bounds Read in bpf_jit_free

Hi Aleksandr,

Thanks for your kind reply!

On Wed, Feb 16, 2022 at 1:38 AM Aleksandr Nogikh <[email protected]> wrote:
>
> Hi Song,
>
> Is syzkaller not doing something you expect it to do with this config?

I fixed the sshkey in the config and added a suppression for hsr_node_get_first.
However, I haven't gotten a repro overnight.

>
> On Wed, Feb 16, 2022 at 2:38 AM Song Liu <[email protected]> wrote:
> >
> > On Mon, Feb 14, 2022 at 10:41 PM Song Liu <[email protected]> wrote:
> > >
> > > On Mon, Feb 14, 2022 at 3:52 PM Daniel Borkmann <[email protected]> wrote:
> > > >
> > > > Song, ptal.
> > > >
> > > > On 2/14/22 7:45 PM, syzbot wrote:
> > > > > Hello,
> > > > >
> > > > > syzbot found the following issue on:
> > > > >
> > > > > HEAD commit: e5313968c41b Merge branch 'Split bpf_sk_lookup remote_port..
> > > > > git tree: bpf-next
> > > > > console output: https://syzkaller.appspot.com/x/log.txt?x=10baced8700000
> > > > > kernel config: https://syzkaller.appspot.com/x/.config?x=c40b67275bfe2a58
> > > > > dashboard link: https://syzkaller.appspot.com/bug?extid=2f649ec6d2eea1495a8f
> >
> > How do I run the exact same syzkaller? I am doing something like
> >
> > ./bin/syz-manager -config qemu.cfg
> >
> > with the cfg file like:
> >
> > {
> > "target": "linux/amd64",
> > "http": ":56741",
> > "workdir": "workdir",
> > "kernel_obj": "linux",
> > "image": "./pkg/mgrconfig/testdata/stretch.img",
>
> This image location looks suspicious - we store some dummy data for
> tests in that folder.
> Instances now run on buildroot-based images, generated with
> https://github.com/google/syzkaller/blob/master/tools/create-buildroot-image.sh

Thanks for the information. I will give it a try.

>
> > "syzkaller": ".",
> > "disable_syscalls": ["keyctl", "add_key", "request_key"],
>
> For our bpf instances, instead of disable_syscalls we use enable_syscalls:
>
> "enable_syscalls": [
> "bpf", "mkdir", "mount$bpf", "unlink", "close",
> "perf_event_open*", "ioctl$PERF*", "getpid", "gettid",
> "socketpair", "sendmsg", "recvmsg", "setsockopt$sock_attach_bpf",
> "socket$kcm", "ioctl$sock_kcm*", "syz_clone",
> "mkdirat$cgroup*", "openat$cgroup*", "write$cgroup*",
> "openat$tun", "write$tun", "ioctl$TUN*", "ioctl$SIOCSIFHWADDR",
> "openat$ppp", "syz_open_procfs$namespace"
> ]

I will try with the same list. Thanks!

Song

>
> > "suppressions": ["some known bug"],
> > "procs": 8,
>
> We usually run with "procs": 6, but it's not that important.
>
> > "type": "qemu",
> > "vm": {
> > "count": 16,
> > "cpu": 2,
> > "mem": 2048,
> > "kernel": "linux/arch/x86/boot/bzImage"
> > }
> > }
>
> Otherwise I don't see any really significant differences.
>
> --
> Best Regards
> Aleksandr
>
> >
> > Is this correct? I am using stretch.img from syzkaller site, and the
> > .config from
> > the link above.
> >
> > Thanks,
> > Song
> >

2022-02-17 18:40:25

by Aleksandr Nogikh

Subject: Re: [syzbot] KASAN: vmalloc-out-of-bounds Read in bpf_jit_free

Hi Song,

On Wed, Feb 16, 2022 at 5:27 PM Song Liu <[email protected]> wrote:
>
> Hi Aleksandr,
>
> Thanks for your kind reply!
>
> On Wed, Feb 16, 2022 at 1:38 AM Aleksandr Nogikh <[email protected]> wrote:
> >
> > Hi Song,
> >
> > Is syzkaller not doing something you expect it to do with this config?
>
> I fixed sshkey in the config, and added a suppression for hsr_node_get_first.
> However, I haven't got a repro overnight.

Oh, that's unfortunately not a very reliable thing. The bug has so far
happened only once on syzbot, so it must be pretty rare. Maybe you'll
have more luck with your local setup :)

You can try to run syz-repro on the log file that is available on the
syzbot dashboard:
https://github.com/google/syzkaller/blob/master/tools/syz-repro/repro.go
Syzbot has already done this and apparently did not succeed, but
reproduction is also somewhat probabilistic, especially when the bug is
due to some rare race condition, so trying it several times might help.
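
Concretely, an invocation would look something like the following (assuming qemu.cfg is the same manager config you fuzz with, and log.txt is the console log downloaded from the dashboard):

```
./bin/syz-repro -config qemu.cfg log.txt
```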

Also you might want to hack your local syzkaller copy a bit:
https://github.com/google/syzkaller/blob/master/syz-manager/manager.go#L804
Here you can drop the limit on the maximum number of repro attempts
and make needLocalRepro only return true if crash.Title matches the
title of this particular bug. With this change your local syzkaller
instance won't waste time reproducing other bugs.
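
The hack described above might look roughly like this; it is only a sketch, with the Crash type and function shape simplified from what syz-manager/manager.go actually has:

```go
package main

import (
	"fmt"
	"strings"
)

// Crash is a simplified stand-in for syz-manager's crash record
// (the real struct lives in syz-manager/manager.go).
type Crash struct {
	Title string
}

// needLocalRepro is the decision point Aleksandr mentions: return true
// only when the crash title matches the one bug we care about, so the
// local instance does not spend VM time reproducing unrelated crashes.
// The stock code also caps the number of repro attempts; that limit
// would be dropped as well.
func needLocalRepro(crash *Crash) bool {
	return strings.Contains(crash.Title, "vmalloc-out-of-bounds Read in bpf_jit_free")
}

func main() {
	fmt.Println(needLocalRepro(&Crash{Title: "KASAN: vmalloc-out-of-bounds Read in bpf_jit_free"})) // true
	fmt.Println(needLocalRepro(&Crash{Title: "KASAN: use-after-free in hsr_node_get_first"}))       // false
}
```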

There's also a way to focus syzkaller on some specific kernel
functions/source files:
https://github.com/google/syzkaller/blob/master/pkg/mgrconfig/config.go#L125
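
For illustration, such a filter in the manager config might look like this; the key and field names here are an assumption based on the linked config.go, so check the struct tags there before using it:

```json
"cover_filter": {
	"functions": ["bpf_jit_*", "bpf_prog_pack_*"],
	"files": ["kernel/bpf/core.c"]
}
```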

--
Best Regards,
Aleksandr

>
> >
> > On Wed, Feb 16, 2022 at 2:38 AM Song Liu <[email protected]> wrote:
> > >
> > > On Mon, Feb 14, 2022 at 10:41 PM Song Liu <[email protected]> wrote:
> > > >
> > > > On Mon, Feb 14, 2022 at 3:52 PM Daniel Borkmann <[email protected]> wrote:
> > > > >
> > > > > Song, ptal.
> > > > >
> > > > > On 2/14/22 7:45 PM, syzbot wrote:
> > > > > > Hello,
> > > > > >
> > > > > > syzbot found the following issue on:
> > > > > >
> > > > > > HEAD commit: e5313968c41b Merge branch 'Split bpf_sk_lookup remote_port..
> > > > > > git tree: bpf-next
> > > > > > console output: https://syzkaller.appspot.com/x/log.txt?x=10baced8700000
> > > > > > kernel config: https://syzkaller.appspot.com/x/.config?x=c40b67275bfe2a58
> > > > > > dashboard link: https://syzkaller.appspot.com/bug?extid=2f649ec6d2eea1495a8f
> > >
> > > How do I run the exact same syzkaller? I am doing something like
> > >
> > > ./bin/syz-manager -config qemu.cfg
> > >
> > > with the cfg file like:
> > >
> > > {
> > > "target": "linux/amd64",
> > > "http": ":56741",
> > > "workdir": "workdir",
> > > "kernel_obj": "linux",
> > > "image": "./pkg/mgrconfig/testdata/stretch.img",
> >
> > This image location looks suspicious - we store some dummy data for
> > tests in that folder.
> > Instances now run on buildroot-based images, generated with
> > https://github.com/google/syzkaller/blob/master/tools/create-buildroot-image.sh
>
> Thanks for the information. I will give it a try.
>
> >
> > > "syzkaller": ".",
> > > "disable_syscalls": ["keyctl", "add_key", "request_key"],
> >
> > For our bpf instances, instead of disable_syscalls we use enable_syscalls:
> >
> > "enable_syscalls": [
> > "bpf", "mkdir", "mount$bpf", "unlink", "close",
> > "perf_event_open*", "ioctl$PERF*", "getpid", "gettid",
> > "socketpair", "sendmsg", "recvmsg", "setsockopt$sock_attach_bpf",
> > "socket$kcm", "ioctl$sock_kcm*", "syz_clone",
> > "mkdirat$cgroup*", "openat$cgroup*", "write$cgroup*",
> > "openat$tun", "write$tun", "ioctl$TUN*", "ioctl$SIOCSIFHWADDR",
> > "openat$ppp", "syz_open_procfs$namespace"
> > ]
>
> I will try with the same list. Thanks!
>
> Song
>
> >
> > > "suppressions": ["some known bug"],
> > > "procs": 8,
> >
> > We usually run with "procs": 6, but it's not that important.
> >
> > > "type": "qemu",
> > > "vm": {
> > > "count": 16,
> > > "cpu": 2,
> > > "mem": 2048,
> > > "kernel": "linux/arch/x86/boot/bzImage"
> > > }
> > > }
> >
> > Otherwise I don't see any really significant differences.
> >
> > --
> > Best Regards
> > Aleksandr
> >
> > >
> > > Is this correct? I am using stretch.img from syzkaller site, and the
> > > .config from
> > > the link above.
> > >
> > > Thanks,
> > > Song
> > >

2022-02-18 00:18:32

by Song Liu

Subject: Re: [syzbot] KASAN: vmalloc-out-of-bounds Read in bpf_jit_free

Hi Aleksandr,

> On Feb 17, 2022, at 10:32 AM, Aleksandr Nogikh <[email protected]> wrote:
>
> Hi Song,
>
> On Wed, Feb 16, 2022 at 5:27 PM Song Liu <[email protected]> wrote:
>>
>> Hi Aleksandr,
>>
>> Thanks for your kind reply!
>>
>> On Wed, Feb 16, 2022 at 1:38 AM Aleksandr Nogikh <[email protected]> wrote:
>>>
>>> Hi Song,
>>>
>>> Is syzkaller not doing something you expect it to do with this config?
>>
>> I fixed sshkey in the config, and added a suppression for hsr_node_get_first.
>> However, I haven't got a repro overnight.
>
> Oh, that's unfortunately not a very reliable thing. The bug has so far
> happened only once on syzbot, so it must be pretty rare. Maybe you'll
> have more luck with your local setup :)
>
> You can try to run syz-repro on the log file that is available on the
> syzbot dashboard:
> https://github.com/google/syzkaller/blob/master/tools/syz-repro/repro.go
> Syzbot has already done it and apparently failed to succeed, but this
> is also somewhat probabilistic, especially when the bug is due to some
> rare race condition. So trying it several times might help.
>
> Also you might want to hack your local syzkaller copy a bit:
> https://github.com/google/syzkaller/blob/master/syz-manager/manager.go#L804
> Here you can drop the limit on the maximum number of repro attempts
> and make needLocalRepro only return true if crash.Title matches the
> title of this particular bug. With this change your local syzkaller
> instance won't waste time reproducing other bugs.
>
> There's also a way to focus syzkaller on some specific kernel
> functions/source files:
> https://github.com/google/syzkaller/blob/master/pkg/mgrconfig/config.go#L125

Thanks for these tips!

After fixing some other things, I was able to reproduce one of the three
failure modes overnight, plus some related issues from fault injection.
These errors gave me a clue for fixing the bug (or at least one of the bugs).

I have a suggestion about bug dashboard pages like:

https://syzkaller.appspot.com/bug?id=86fa0212fb895a0d41fd1f1eecbeaee67191a4c9

It isn't obvious to me which image was used in the test. Maybe we can add
a link to the image or instructions to build the image? In this case, I
think the bug only triggers on some images, so testing with the exact image
is important.

Thanks again,
Song

2022-02-18 21:33:35

by Aleksandr Nogikh

Subject: Re: [syzbot] KASAN: vmalloc-out-of-bounds Read in bpf_jit_free

Hi Song,

On Thu, Feb 17, 2022 at 9:05 PM Song Liu <[email protected]> wrote:
>
> Hi Aleksandr,
>
> > On Feb 17, 2022, at 10:32 AM, Aleksandr Nogikh <[email protected]> wrote:
> >
> > Hi Song,
> >
> > On Wed, Feb 16, 2022 at 5:27 PM Song Liu <[email protected]> wrote:
> >>
> >> Hi Aleksandr,
> >>
> >> Thanks for your kind reply!
> >>
> >> On Wed, Feb 16, 2022 at 1:38 AM Aleksandr Nogikh <[email protected]> wrote:
> >>>
> >>> Hi Song,
> >>>
> >>> Is syzkaller not doing something you expect it to do with this config?
> >>
> >> I fixed sshkey in the config, and added a suppression for hsr_node_get_first.
> >> However, I haven't got a repro overnight.
> >
> > Oh, that's unfortunately not a very reliable thing. The bug has so far
> > happened only once on syzbot, so it must be pretty rare. Maybe you'll
> > have more luck with your local setup :)
> >
> > You can try to run syz-repro on the log file that is available on the
> > syzbot dashboard:
> > https://github.com/google/syzkaller/blob/master/tools/syz-repro/repro.go
> > Syzbot has already done it and apparently failed to succeed, but this
> > is also somewhat probabilistic, especially when the bug is due to some
> > rare race condition. So trying it several times might help.
> >
> > Also you might want to hack your local syzkaller copy a bit:
> > https://github.com/google/syzkaller/blob/master/syz-manager/manager.go#L804
> > Here you can drop the limit on the maximum number of repro attempts
> > and make needLocalRepro only return true if crash.Title matches the
> > title of this particular bug. With this change your local syzkaller
> > instance won't waste time reproducing other bugs.
> >
> > There's also a way to focus syzkaller on some specific kernel
> > functions/source files:
> > https://github.com/google/syzkaller/blob/master/pkg/mgrconfig/config.go#L125
>
> Thanks for these tips!
>
> After fixing some other things. I was able to reproduce one of the three
> failures modes overnight and some related issues from fault injection.
> These errors gave me clue to fix the bug (or at least one of the bugs).
>
> I have a suggestions on the bug dashboard, like:
>
> https://syzkaller.appspot.com/bug?id=86fa0212fb895a0d41fd1f1eecbeaee67191a4c9
>
> It isn't obvious to me which image was used in the test. Maybe we can add
> a link to the image or instructions to build the image? In this case, I
> think the bug only triggers on some images, so testing with the exact image
> is important.

Hmm, that's interesting. If the exact image can really make a
difference, I think we could e.g. remember the images syzbot used for
the last 1-2 months and make them downloadable from the bug details
page. I'll check if there are any obstacles; at first sight this
should not be a problem.

Thanks for the suggestion!

--
Best Regards,
Aleksandr

>
> Thanks again,
> Song

2022-07-03 08:32:21

by syzbot

Subject: Re: [syzbot] KASAN: vmalloc-out-of-bounds Read in bpf_jit_free

syzbot has found a reproducer for the following issue on:

HEAD commit: b0d93b44641a selftests/bpf: Skip lsm_cgroup when we don't ..
git tree: bpf-next
console output: https://syzkaller.appspot.com/x/log.txt?x=10c495e0080000
kernel config: https://syzkaller.appspot.com/x/.config?x=70e1a4d352a3c6ae
dashboard link: https://syzkaller.appspot.com/bug?extid=2f649ec6d2eea1495a8f
compiler: gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=11a10a58080000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=16ab8cb8080000

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: [email protected]

==================================================================
BUG: KASAN: vmalloc-out-of-bounds in bpf_jit_binary_free kernel/bpf/core.c:1081 [inline]
BUG: KASAN: vmalloc-out-of-bounds in bpf_jit_free+0x26c/0x2b0 kernel/bpf/core.c:1206
Read of size 4 at addr ffffffffa0000000 by task syz-executor334/3608

CPU: 0 PID: 3608 Comm: syz-executor334 Not tainted 5.19.0-rc2-syzkaller-00498-gb0d93b44641a #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 06/29/2022
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106
print_address_description.constprop.0.cold+0xf/0x495 mm/kasan/report.c:313
print_report mm/kasan/report.c:429 [inline]
kasan_report.cold+0xf4/0x1c6 mm/kasan/report.c:491
bpf_jit_binary_free kernel/bpf/core.c:1081 [inline]
bpf_jit_free+0x26c/0x2b0 kernel/bpf/core.c:1206
jit_subprogs kernel/bpf/verifier.c:13767 [inline]
fixup_call_args kernel/bpf/verifier.c:13796 [inline]
bpf_check+0x7035/0xb040 kernel/bpf/verifier.c:15287
bpf_prog_load+0xfb2/0x2250 kernel/bpf/syscall.c:2575
__sys_bpf+0x11a1/0x5790 kernel/bpf/syscall.c:4934
__do_sys_bpf kernel/bpf/syscall.c:5038 [inline]
__se_sys_bpf kernel/bpf/syscall.c:5036 [inline]
__x64_sys_bpf+0x75/0xb0 kernel/bpf/syscall.c:5036
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x46/0xb0
RIP: 0033:0x7fe5b823e209
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 b1 14 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 c0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007ffc68d718c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000141
RAX: ffffffffffffffda RBX: 0000000000000002 RCX: 00007fe5b823e209
RDX: 0000000000000070 RSI: 0000000020000440 RDI: 0000000000000005
RBP: 00007ffc68d718e0 R08: 0000000000000002 R09: 0000000000000001
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000003
R13: 431bde82d7b634db R14: 0000000000000000 R15: 0000000000000000
</TASK>

Memory state around the buggy address:
BUG: unable to handle page fault for address: fffffbfff3ffffe0
#PF: supervisor read access in kernel mode
#PF: error_code(0x0000) - not-present page
PGD 23ffe4067 P4D 23ffe4067 PUD 23ffe3067 PMD 0
Oops: 0000 [#1] PREEMPT SMP KASAN
CPU: 0 PID: 3608 Comm: syz-executor334 Not tainted 5.19.0-rc2-syzkaller-00498-gb0d93b44641a #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 06/29/2022
RIP: 0010:memcpy_erms+0x6/0x10 arch/x86/lib/memcpy_64.S:55
Code: cc cc cc cc eb 1e 0f 1f 00 48 89 f8 48 89 d1 48 c1 e9 03 83 e2 07 f3 48 a5 89 d1 f3 a4 c3 66 0f 1f 44 00 00 48 89 f8 48 89 d1 <f3> a4 c3 0f 1f 80 00 00 00 00 48 89 f8 48 83 fa 20 72 7e 40 38 fe
RSP: 0018:ffffc9000215f7b8 EFLAGS: 00010082
RAX: ffffc9000215f7c4 RBX: ffffffff9fffff00 RCX: 0000000000000010
RDX: 0000000000000010 RSI: fffffbfff3ffffe0 RDI: ffffc9000215f7c4
RBP: ffffffffa0000000 R08: 0000000000000007 R09: 0000000000000000
R10: 0000000000000014 R11: 0000000000000001 R12: 00000000fffffffe
R13: ffffffff9fffff80 R14: ffff888025745880 R15: 0000000000000282
FS: 0000555555ac7300(0000) GS:ffff8880b9a00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: fffffbfff3ffffe0 CR3: 000000007dc79000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<TASK>
print_memory_metadata+0x5a/0xdf mm/kasan/report.c:404
print_report mm/kasan/report.c:430 [inline]
kasan_report.cold+0xfe/0x1c6 mm/kasan/report.c:491
bpf_jit_binary_free kernel/bpf/core.c:1081 [inline]
bpf_jit_free+0x26c/0x2b0 kernel/bpf/core.c:1206
jit_subprogs kernel/bpf/verifier.c:13767 [inline]
fixup_call_args kernel/bpf/verifier.c:13796 [inline]
bpf_check+0x7035/0xb040 kernel/bpf/verifier.c:15287
bpf_prog_load+0xfb2/0x2250 kernel/bpf/syscall.c:2575
__sys_bpf+0x11a1/0x5790 kernel/bpf/syscall.c:4934
__do_sys_bpf kernel/bpf/syscall.c:5038 [inline]
__se_sys_bpf kernel/bpf/syscall.c:5036 [inline]
__x64_sys_bpf+0x75/0xb0 kernel/bpf/syscall.c:5036
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x46/0xb0
RIP: 0033:0x7fe5b823e209
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 b1 14 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 c0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007ffc68d718c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000141
RAX: ffffffffffffffda RBX: 0000000000000002 RCX: 00007fe5b823e209
RDX: 0000000000000070 RSI: 0000000020000440 RDI: 0000000000000005
RBP: 00007ffc68d718e0 R08: 0000000000000002 R09: 0000000000000001
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000003
R13: 431bde82d7b634db R14: 0000000000000000 R15: 0000000000000000
</TASK>
Modules linked in:
CR2: fffffbfff3ffffe0
---[ end trace 0000000000000000 ]---
RIP: 0010:memcpy_erms+0x6/0x10 arch/x86/lib/memcpy_64.S:55
Code: cc cc cc cc eb 1e 0f 1f 00 48 89 f8 48 89 d1 48 c1 e9 03 83 e2 07 f3 48 a5 89 d1 f3 a4 c3 66 0f 1f 44 00 00 48 89 f8 48 89 d1 <f3> a4 c3 0f 1f 80 00 00 00 00 48 89 f8 48 83 fa 20 72 7e 40 38 fe
RSP: 0018:ffffc9000215f7b8 EFLAGS: 00010082
RAX: ffffc9000215f7c4 RBX: ffffffff9fffff00 RCX: 0000000000000010
RDX: 0000000000000010 RSI: fffffbfff3ffffe0 RDI: ffffc9000215f7c4
RBP: ffffffffa0000000 R08: 0000000000000007 R09: 0000000000000000
R10: 0000000000000014 R11: 0000000000000001 R12: 00000000fffffffe
R13: ffffffff9fffff80 R14: ffff888025745880 R15: 0000000000000282
FS: 0000555555ac7300(0000) GS:ffff8880b9a00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: fffffbfff3ffffe0 CR3: 000000007dc79000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
----------------
Code disassembly (best guess):
0: cc int3
1: cc int3
2: cc int3
3: cc int3
4: eb 1e jmp 0x24
6: 0f 1f 00 nopl (%rax)
9: 48 89 f8 mov %rdi,%rax
c: 48 89 d1 mov %rdx,%rcx
f: 48 c1 e9 03 shr $0x3,%rcx
13: 83 e2 07 and $0x7,%edx
16: f3 48 a5 rep movsq %ds:(%rsi),%es:(%rdi)
19: 89 d1 mov %edx,%ecx
1b: f3 a4 rep movsb %ds:(%rsi),%es:(%rdi)
1d: c3 retq
1e: 66 0f 1f 44 00 00 nopw 0x0(%rax,%rax,1)
24: 48 89 f8 mov %rdi,%rax
27: 48 89 d1 mov %rdx,%rcx
* 2a: f3 a4 rep movsb %ds:(%rsi),%es:(%rdi) <-- trapping instruction
2c: c3 retq
2d: 0f 1f 80 00 00 00 00 nopl 0x0(%rax)
34: 48 89 f8 mov %rdi,%rax
37: 48 83 fa 20 cmp $0x20,%rdx
3b: 72 7e jb 0xbb
3d: 40 38 fe cmp %dil,%sil

2022-07-04 09:40:58

by Daniel Borkmann

[permalink] [raw]
Subject: Re: [syzbot] KASAN: vmalloc-out-of-bounds Read in bpf_jit_free

On 7/3/22 9:57 AM, syzbot wrote:
> syzbot has found a reproducer for the following issue on:

Song, ptal, thanks.

> HEAD commit: b0d93b44641a selftests/bpf: Skip lsm_cgroup when we don't ..
> git tree: bpf-next
> console output: https://syzkaller.appspot.com/x/log.txt?x=10c495e0080000
> kernel config: https://syzkaller.appspot.com/x/.config?x=70e1a4d352a3c6ae
> dashboard link: https://syzkaller.appspot.com/bug?extid=2f649ec6d2eea1495a8f
> compiler: gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2
> syz repro: https://syzkaller.appspot.com/x/repro.syz?x=11a10a58080000
> C reproducer: https://syzkaller.appspot.com/x/repro.c?x=16ab8cb8080000

Looks like syzbot found a reproducer this time at least, so this should help make progress.

https://lore.kernel.org/bpf/[email protected]/T/#t

> IMPORTANT: if you fix the issue, please add the following tag to the commit:
> Reported-by: [email protected]
>
> ==================================================================
> BUG: KASAN: vmalloc-out-of-bounds in bpf_jit_binary_free kernel/bpf/core.c:1081 [inline]
> BUG: KASAN: vmalloc-out-of-bounds in bpf_jit_free+0x26c/0x2b0 kernel/bpf/core.c:1206
> Read of size 4 at addr ffffffffa0000000 by task syz-executor334/3608
>
> CPU: 0 PID: 3608 Comm: syz-executor334 Not tainted 5.19.0-rc2-syzkaller-00498-gb0d93b44641a #0
> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 06/29/2022
> Call Trace:
> <TASK>
> __dump_stack lib/dump_stack.c:88 [inline]
> dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106
> print_address_description.constprop.0.cold+0xf/0x495 mm/kasan/report.c:313
> print_report mm/kasan/report.c:429 [inline]
> kasan_report.cold+0xf4/0x1c6 mm/kasan/report.c:491
> bpf_jit_binary_free kernel/bpf/core.c:1081 [inline]
> bpf_jit_free+0x26c/0x2b0 kernel/bpf/core.c:1206
> jit_subprogs kernel/bpf/verifier.c:13767 [inline]
> fixup_call_args kernel/bpf/verifier.c:13796 [inline]
> bpf_check+0x7035/0xb040 kernel/bpf/verifier.c:15287
> bpf_prog_load+0xfb2/0x2250 kernel/bpf/syscall.c:2575
> __sys_bpf+0x11a1/0x5790 kernel/bpf/syscall.c:4934
> __do_sys_bpf kernel/bpf/syscall.c:5038 [inline]
> __se_sys_bpf kernel/bpf/syscall.c:5036 [inline]
> __x64_sys_bpf+0x75/0xb0 kernel/bpf/syscall.c:5036
> do_syscall_x64 arch/x86/entry/common.c:50 [inline]
> do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
> entry_SYSCALL_64_after_hwframe+0x46/0xb0
> RIP: 0033:0x7fe5b823e209
> Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 b1 14 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 c0 ff ff ff f7 d8 64 89 01 48
> RSP: 002b:00007ffc68d718c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000141
> RAX: ffffffffffffffda RBX: 0000000000000002 RCX: 00007fe5b823e209
> RDX: 0000000000000070 RSI: 0000000020000440 RDI: 0000000000000005
> RBP: 00007ffc68d718e0 R08: 0000000000000002 R09: 0000000000000001
> R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000003
> R13: 431bde82d7b634db R14: 0000000000000000 R15: 0000000000000000
> </TASK>
>
> Memory state around the buggy address:
> BUG: unable to handle page fault for address: fffffbfff3ffffe0
> #PF: supervisor read access in kernel mode
> #PF: error_code(0x0000) - not-present page
> PGD 23ffe4067 P4D 23ffe4067 PUD 23ffe3067 PMD 0
> Oops: 0000 [#1] PREEMPT SMP KASAN
> CPU: 0 PID: 3608 Comm: syz-executor334 Not tainted 5.19.0-rc2-syzkaller-00498-gb0d93b44641a #0
> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 06/29/2022
> RIP: 0010:memcpy_erms+0x6/0x10 arch/x86/lib/memcpy_64.S:55
> Code: cc cc cc cc eb 1e 0f 1f 00 48 89 f8 48 89 d1 48 c1 e9 03 83 e2 07 f3 48 a5 89 d1 f3 a4 c3 66 0f 1f 44 00 00 48 89 f8 48 89 d1 <f3> a4 c3 0f 1f 80 00 00 00 00 48 89 f8 48 83 fa 20 72 7e 40 38 fe
> RSP: 0018:ffffc9000215f7b8 EFLAGS: 00010082
> RAX: ffffc9000215f7c4 RBX: ffffffff9fffff00 RCX: 0000000000000010
> RDX: 0000000000000010 RSI: fffffbfff3ffffe0 RDI: ffffc9000215f7c4
> RBP: ffffffffa0000000 R08: 0000000000000007 R09: 0000000000000000
> R10: 0000000000000014 R11: 0000000000000001 R12: 00000000fffffffe
> R13: ffffffff9fffff80 R14: ffff888025745880 R15: 0000000000000282
> FS: 0000555555ac7300(0000) GS:ffff8880b9a00000(0000) knlGS:0000000000000000
> CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: fffffbfff3ffffe0 CR3: 000000007dc79000 CR4: 00000000003506f0
> DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
> Call Trace:
> <TASK>
> print_memory_metadata+0x5a/0xdf mm/kasan/report.c:404
> print_report mm/kasan/report.c:430 [inline]
> kasan_report.cold+0xfe/0x1c6 mm/kasan/report.c:491
> bpf_jit_binary_free kernel/bpf/core.c:1081 [inline]
> bpf_jit_free+0x26c/0x2b0 kernel/bpf/core.c:1206
> jit_subprogs kernel/bpf/verifier.c:13767 [inline]
> fixup_call_args kernel/bpf/verifier.c:13796 [inline]
> bpf_check+0x7035/0xb040 kernel/bpf/verifier.c:15287
> bpf_prog_load+0xfb2/0x2250 kernel/bpf/syscall.c:2575
> __sys_bpf+0x11a1/0x5790 kernel/bpf/syscall.c:4934
> __do_sys_bpf kernel/bpf/syscall.c:5038 [inline]
> __se_sys_bpf kernel/bpf/syscall.c:5036 [inline]
> __x64_sys_bpf+0x75/0xb0 kernel/bpf/syscall.c:5036
> do_syscall_x64 arch/x86/entry/common.c:50 [inline]
> do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
> entry_SYSCALL_64_after_hwframe+0x46/0xb0
> RIP: 0033:0x7fe5b823e209
> Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 b1 14 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 c0 ff ff ff f7 d8 64 89 01 48
> RSP: 002b:00007ffc68d718c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000141
> RAX: ffffffffffffffda RBX: 0000000000000002 RCX: 00007fe5b823e209
> RDX: 0000000000000070 RSI: 0000000020000440 RDI: 0000000000000005
> RBP: 00007ffc68d718e0 R08: 0000000000000002 R09: 0000000000000001
> R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000003
> R13: 431bde82d7b634db R14: 0000000000000000 R15: 0000000000000000
> </TASK>
> Modules linked in:
> CR2: fffffbfff3ffffe0
> ---[ end trace 0000000000000000 ]---
> RIP: 0010:memcpy_erms+0x6/0x10 arch/x86/lib/memcpy_64.S:55
> Code: cc cc cc cc eb 1e 0f 1f 00 48 89 f8 48 89 d1 48 c1 e9 03 83 e2 07 f3 48 a5 89 d1 f3 a4 c3 66 0f 1f 44 00 00 48 89 f8 48 89 d1 <f3> a4 c3 0f 1f 80 00 00 00 00 48 89 f8 48 83 fa 20 72 7e 40 38 fe
> RSP: 0018:ffffc9000215f7b8 EFLAGS: 00010082
> RAX: ffffc9000215f7c4 RBX: ffffffff9fffff00 RCX: 0000000000000010
> RDX: 0000000000000010 RSI: fffffbfff3ffffe0 RDI: ffffc9000215f7c4
> RBP: ffffffffa0000000 R08: 0000000000000007 R09: 0000000000000000
> R10: 0000000000000014 R11: 0000000000000001 R12: 00000000fffffffe
> R13: ffffffff9fffff80 R14: ffff888025745880 R15: 0000000000000282
> FS: 0000555555ac7300(0000) GS:ffff8880b9a00000(0000) knlGS:0000000000000000
> CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: fffffbfff3ffffe0 CR3: 000000007dc79000 CR4: 00000000003506f0
> DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
> ----------------
> Code disassembly (best guess):
> 0: cc int3
> 1: cc int3
> 2: cc int3
> 3: cc int3
> 4: eb 1e jmp 0x24
> 6: 0f 1f 00 nopl (%rax)
> 9: 48 89 f8 mov %rdi,%rax
> c: 48 89 d1 mov %rdx,%rcx
> f: 48 c1 e9 03 shr $0x3,%rcx
> 13: 83 e2 07 and $0x7,%edx
> 16: f3 48 a5 rep movsq %ds:(%rsi),%es:(%rdi)
> 19: 89 d1 mov %edx,%ecx
> 1b: f3 a4 rep movsb %ds:(%rsi),%es:(%rdi)
> 1d: c3 retq
> 1e: 66 0f 1f 44 00 00 nopw 0x0(%rax,%rax,1)
> 24: 48 89 f8 mov %rdi,%rax
> 27: 48 89 d1 mov %rdx,%rcx
> * 2a: f3 a4 rep movsb %ds:(%rsi),%es:(%rdi) <-- trapping instruction
> 2c: c3 retq
> 2d: 0f 1f 80 00 00 00 00 nopl 0x0(%rax)
> 34: 48 89 f8 mov %rdi,%rax
> 37: 48 83 fa 20 cmp $0x20,%rdx
> 3b: 72 7e jb 0xbb
> 3d: 40 38 fe cmp %dil,%sil
>