2022-11-23 14:44:36

by Hao Sun

Subject: [PATCH bpf-next 0/3] bpf: Add LDX/STX/ST sanitize in jited BPF progs

The verifier sometimes makes mistakes[1][2] that may be exploited to
achieve arbitrary read/write. Currently, syzbot is continuously testing
bpf and can find memory issues in bpf syscalls, but it can hardly find
mischecking bugs in the verifier. We need runtime checks like KASAN in
BPF programs for this. This patch series implements address sanitizing
in jited BPF progs for testing purposes, so that tools like syzbot can
automatically find interesting bugs in the verifier by, if possible,
generating and executing BPF programs that bypass the verifier but have
memory issues, and then triggering this sanitizing.

The idea is to dispatch the read/write addresses of a BPF program to
kernel functions that are instrumented by KASAN, achieving indirect
checking. Indirect checking is adopted because it is much simpler;
instrumenting direct checks the way compilers do would make the JIT
much more complex. The main steps are: back up R0 and R1 and store the
addr in R1, insert the checking function before load/store insns during
do_misc_fixups(), and finally, in the JIT stage, back up R1~R5 so that
the checking funcs won't corrupt the regs' states. An extra Kconfig
option is used to enable this, so normal use cases won't be impacted
at all.
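
For reference, the checking functions themselves are ordinary kernel
functions compiled with KASAN instrumentation; they only need to touch
the target address so that KASAN validates it. A minimal sketch of the
store variant (the full macro-generated version is in patch 1):

notrace u64 bpf_asan_store8(u8 *addr)
{
	/* Both the load and the store below are instrumented by
	 * KASAN, so an OOB/UAF addr is reported here; the stored
	 * value itself is left unchanged.
	 */
	u8 ret = *addr;

	*addr = ret;
	return ret;
}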

Also, not all ldx/stx/st insns are instrumented. Insns rewritten by
other fixup or conversion passes that use BPF_REG_AX are skipped,
because that conflicts with us; insns whose access addr is given by R10
are also skipped because they are trivial to verify.

Patch 1 sanitizes st/stx insns, Patch 2 sanitizes ldx insns, and Patch 3
adds selftests for the instrumentation in each possible case; all
new/existing selftests for the verifier pass. Also, a BPF prog that
exploits CVE-2022-23222 to achieve an OOB read is provided[3]; it is
perfectly captured by this patch series.

I haven't found a better way to back up the regs before executing the
checking functions, so they are stored on the stack. Comments and
advice are very welcome.

[1] http://bit.do/CVE-2021-3490
[2] http://bit.do/CVE-2022-23222
[3] OOB-read: https://pastebin.com/raw/Ee1Cw492

Hao Sun (3):
bpf: Sanitize STX/ST in jited BPF progs with KASAN
bpf: Sanitize LDX in jited BPF progs with KASAN
selftests/bpf: Add tests for LDX/STX/ST sanitize

arch/x86/net/bpf_jit_comp.c | 34 ++
include/linux/bpf.h | 14 +
kernel/bpf/Kconfig | 14 +
kernel/bpf/verifier.c | 190 +++++++++++
.../selftests/bpf/verifier/sanitize_st_ldx.c | 323 ++++++++++++++++++
5 files changed, 575 insertions(+)
create mode 100644 tools/testing/selftests/bpf/verifier/sanitize_st_ldx.c


base-commit: 8a2162a9227dda936a21fe72014a9931a3853a7b
--
2.38.1


2022-11-23 14:47:50

by Hao Sun

Subject: [PATCH bpf-next 1/3] bpf: Sanitize STX/ST in jited BPF progs with KASAN

Make the verifier sanitize STX/ST insns in jited BPF programs by
dispatching the addr to kernel functions that are instrumented by
KASAN.

Only STX/ST insns that are not in patches added by other passes using
REG_AX, and whose dst_reg isn't R10, are sanitized. The former conflict
with us, and the latter are trivial for the verifier to check; skipping
them reduces the footprint.

The instrumentation is conducted in two places: fixup and JIT. During
fixup, R0 and R1 are backed up or exchanged with dst_reg, the address
to check is stored into R1, and a call to the corresponding
bpf_asan_storeN() is inserted. In the JIT, R1~R5 are pushed onto the
stack before calling the sanitize function. The sanitize functions are
instrumented with KASAN; they simply write the given number of bytes
to the target addr, and KASAN conducts the actual checking. An extra
Kconfig option is used to enable this.
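
For example, with this enabled, a store like
BPF_STX_MEM(BPF_W, BPF_REG_3, BPF_REG_5, 8) is conceptually rewritten
into the sequence below (illustrative only; here dst_reg is neither R0
nor R1, so the general branch of the fixup applies):

BPF_MOV64_REG(BPF_REG_AX, BPF_REG_1),        /* save R1 in AX */
BPF_MOV64_REG(BPF_REG_1, BPF_REG_3),         /* R1 = dst_reg */
BPF_MOV64_REG(BPF_REG_3, BPF_REG_0),         /* park R0 in dst_reg */
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),        /* R1 = dst_reg + off */
BPF_EMIT_CALL(bpf_asan_store32),             /* KASAN-checked access */
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),       /* undo the offset */
BPF_MOV64_REG(BPF_REG_0, BPF_REG_3),         /* restore R0 */
BPF_MOV64_REG(BPF_REG_3, BPF_REG_1),         /* restore dst_reg */
BPF_MOV64_REG(BPF_REG_1, BPF_REG_AX),        /* restore R1 */
BPF_STX_MEM(BPF_W, BPF_REG_3, BPF_REG_5, 8), /* the original insn */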

Signed-off-by: Hao Sun <[email protected]>
---
arch/x86/net/bpf_jit_comp.c | 32 +++++++++++
include/linux/bpf.h | 9 ++++
kernel/bpf/Kconfig | 14 +++++
kernel/bpf/verifier.c | 102 ++++++++++++++++++++++++++++++++++++
4 files changed, 157 insertions(+)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index cec5195602bc..ceaef69adc49 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -338,7 +338,39 @@ static int emit_patch(u8 **pprog, void *func, void *ip, u8 opcode)

static int emit_call(u8 **pprog, void *func, void *ip)
{
+#ifdef CONFIG_BPF_PROG_KASAN
+ s64 offset;
+ u8 *prog = *pprog;
+ bool is_sanitize =
+ func == bpf_asan_store8 || func == bpf_asan_store16 ||
+ func == bpf_asan_store32 || func == bpf_asan_store64;
+
+ if (!is_sanitize)
+ return emit_patch(pprog, func, ip, 0xE8);
+
+ /* Six extra bytes from push insns */
+ offset = func - (ip + X86_PATCH_SIZE + 6);
+ BUG_ON(!is_simm32(offset));
+
+ /* R1 has the addr to check; back up R1~R5 here, as we don't
+ * have free regs during the fixup.
+ */
+ EMIT1(0x57); /* push rdi */
+ EMIT1(0x56); /* push rsi */
+ EMIT1(0x52); /* push rdx */
+ EMIT1(0x51); /* push rcx */
+ EMIT2(0x41, 0x50); /* push r8 */
+ EMIT1_off32(0xE8, offset);
+ EMIT2(0x41, 0x58); /* pop r8 */
+ EMIT1(0x59); /* pop rcx */
+ EMIT1(0x5a); /* pop rdx */
+ EMIT1(0x5e); /* pop rsi */
+ EMIT1(0x5f); /* pop rdi */
+ *pprog = prog;
+ return 0;
+#else
return emit_patch(pprog, func, ip, 0xE8);
+#endif
}

static int emit_jump(u8 **pprog, void *func, void *ip)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index c9eafa67f2a2..a7eb99928fee 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -2835,4 +2835,13 @@ static inline bool type_is_alloc(u32 type)
return type & MEM_ALLOC;
}

+#ifdef CONFIG_BPF_PROG_KASAN
+
+u64 bpf_asan_store8(u8 *addr);
+u64 bpf_asan_store16(u16 *addr);
+u64 bpf_asan_store32(u32 *addr);
+u64 bpf_asan_store64(u64 *addr);
+
+#endif /* CONFIG_BPF_PROG_KASAN */
+
#endif /* _LINUX_BPF_H */
diff --git a/kernel/bpf/Kconfig b/kernel/bpf/Kconfig
index 2dfe1079f772..aeba6059b9e2 100644
--- a/kernel/bpf/Kconfig
+++ b/kernel/bpf/Kconfig
@@ -99,4 +99,18 @@ config BPF_LSM

If you are unsure how to answer this question, answer N.

+config BPF_PROG_KASAN
+ bool "Enable BPF program address sanitizing"
+ depends on BPF_JIT
+ depends on KASAN
+ depends on X86_64
+ help
+ Enables instrumentation of LDX/STX/ST insns to capture memory
+ access errors in BPF programs that the verifier missed.
+
+ The actual check is conducted by KASAN; this feature incurs
+ certain overhead and should be used mainly for testing purposes.
+
+ If you are unsure how to answer this question, answer N.
+
endmenu # "BPF subsystem"
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 9528a066cfa5..af214f0191e0 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -15221,6 +15221,25 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
return 0;
}

+#ifdef CONFIG_BPF_PROG_KASAN
+
+/* These functions are instrumented with KASAN and do the actual sanitizing. */
+
+#define BPF_ASAN_STORE(n) \
+ notrace u64 bpf_asan_store##n(u##n *addr) \
+ { \
+ u##n ret = *addr; \
+ *addr = ret; \
+ return ret; \
+ }
+
+BPF_ASAN_STORE(8);
+BPF_ASAN_STORE(16);
+BPF_ASAN_STORE(32);
+BPF_ASAN_STORE(64);
+
+#endif
+
/* Do various post-verification rewrites in a single program pass.
* These rewrites simplify JIT and interpreter implementations.
*/
@@ -15238,6 +15257,9 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
struct bpf_prog *new_prog;
struct bpf_map *map_ptr;
int i, ret, cnt, delta = 0;
+#ifdef CONFIG_BPF_PROG_KASAN
+ bool in_patch_use_ax = false;
+#endif

for (i = 0; i < insn_cnt; i++, insn++) {
/* Make divide-by-zero exceptions impossible. */
@@ -15354,6 +15376,86 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
continue;
}

+#ifdef CONFIG_BPF_PROG_KASAN
+ /* Patches that use REG_AX conflict with us; skip them.
+ * This starts with the first use of REG_AX and stops only
+ * when we see the next ldx/stx/st insn with valid aux information.
+ */
+ aux = &env->insn_aux_data[i + delta];
+ if (in_patch_use_ax && (int)aux->ptr_type != 0)
+ in_patch_use_ax = false;
+ if (insn->dst_reg == BPF_REG_AX || insn->src_reg == BPF_REG_AX)
+ in_patch_use_ax = true;
+
+ /* Sanitize ST/STX operation. */
+ if (BPF_CLASS(insn->code) == BPF_ST ||
+ BPF_CLASS(insn->code) == BPF_STX) {
+ struct bpf_insn sanitize_fn;
+ struct bpf_insn *patch = &insn_buf[0];
+
+ /* Skip st/stx to R10; they're trivial to check. */
+ if (in_patch_use_ax || insn->dst_reg == BPF_REG_10 ||
+ BPF_MODE(insn->code) == BPF_NOSPEC)
+ continue;
+
+ switch (BPF_SIZE(insn->code)) {
+ case BPF_B:
+ sanitize_fn = BPF_EMIT_CALL(bpf_asan_store8);
+ break;
+ case BPF_H:
+ sanitize_fn = BPF_EMIT_CALL(bpf_asan_store16);
+ break;
+ case BPF_W:
+ sanitize_fn = BPF_EMIT_CALL(bpf_asan_store32);
+ break;
+ case BPF_DW:
+ sanitize_fn = BPF_EMIT_CALL(bpf_asan_store64);
+ break;
+ }
+
+ /* Back up R0 and R1, store `dst_reg + off` in R1, invoke the
+ * sanitize fn, and then restore each reg.
+ */
+ if (insn->dst_reg == BPF_REG_1) {
+ *patch++ = BPF_MOV64_REG(BPF_REG_AX, BPF_REG_0);
+ } else if (insn->dst_reg == BPF_REG_0) {
+ *patch++ = BPF_MOV64_REG(BPF_REG_AX, BPF_REG_1);
+ *patch++ = BPF_MOV64_REG(BPF_REG_1, BPF_REG_0);
+ } else {
+ *patch++ = BPF_MOV64_REG(BPF_REG_AX, BPF_REG_1);
+ *patch++ = BPF_MOV64_REG(BPF_REG_1, insn->dst_reg);
+ *patch++ = BPF_MOV64_REG(insn->dst_reg, BPF_REG_0);
+ }
+ if (insn->off != 0)
+ *patch++ = BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, insn->off);
+ /* Call sanitize fn, R1~R5 are saved to stack during jit. */
+ *patch++ = sanitize_fn;
+ if (insn->off != 0)
+ *patch++ = BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -insn->off);
+ if (insn->dst_reg == BPF_REG_1) {
+ *patch++ = BPF_MOV64_REG(BPF_REG_0, BPF_REG_AX);
+ } else if (insn->dst_reg == BPF_REG_0) {
+ *patch++ = BPF_MOV64_REG(BPF_REG_0, BPF_REG_1);
+ *patch++ = BPF_MOV64_REG(BPF_REG_1, BPF_REG_AX);
+ } else {
+ *patch++ = BPF_MOV64_REG(BPF_REG_0, insn->dst_reg);
+ *patch++ = BPF_MOV64_REG(insn->dst_reg, BPF_REG_1);
+ *patch++ = BPF_MOV64_REG(BPF_REG_1, BPF_REG_AX);
+ }
+ *patch++ = *insn;
+ cnt = patch - insn_buf;
+
+ new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
+ if (!new_prog)
+ return -ENOMEM;
+
+ delta += cnt - 1;
+ env->prog = prog = new_prog;
+ insn = new_prog->insnsi + i + delta;
+ continue;
+ }
+#endif
+
if (insn->code != (BPF_JMP | BPF_CALL))
continue;
if (insn->src_reg == BPF_PSEUDO_CALL)
--
2.38.1

2022-11-24 00:35:01

by Daniel Borkmann

Subject: Re: [PATCH bpf-next 0/3] bpf: Add LDX/STX/ST sanitize in jited BPF progs

On 11/23/22 3:15 PM, Hao Sun wrote:
> The verifier sometimes makes mistakes[1][2] that may be exploited to
> achieve arbitrary read/write. Currently, syzbot is continuously testing
> bpf and can find memory issues in bpf syscalls, but it can hardly find
> mischecking bugs in the verifier. We need runtime checks like KASAN in
> BPF programs for this. This patch series implements address sanitizing
> in jited BPF progs for testing purposes, so that tools like syzbot can
> automatically find interesting bugs in the verifier by, if possible,
> generating and executing BPF programs that bypass the verifier but have
> memory issues, and then triggering this sanitizing.
>
> The idea is to dispatch the read/write addresses of a BPF program to
> kernel functions that are instrumented by KASAN, achieving indirect
> checking. Indirect checking is adopted because it is much simpler;
> instrumenting direct checks the way compilers do would make the JIT
> much more complex. The main steps are: back up R0 and R1 and store the
> addr in R1, insert the checking function before load/store insns during
> do_misc_fixups(), and finally, in the JIT stage, back up R1~R5 so that
> the checking funcs won't corrupt the regs' states. An extra Kconfig
> option is used to enable this, so normal use cases won't be impacted
> at all.

Thanks for looking into this! It's a bit unfortunate that this will need
changes in every BPF JIT. Have you thought about a generic solution that
would not require changes in the JITs? Given this is for debugging and
finding mischecking bugs in the verifier, can't we reuse the interpreter
for this and only implement it there? I would be curious whether we could
achieve the same result as in [3] with such an approach.
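
(To make the thought concrete, a purely hypothetical sketch, not actual
core.c code: the interpreter's LDX/STX handlers could call a small
KASAN-instrumented probe before dereferencing, e.g.:)

/* Hypothetical helper; the name and placement are made up. */
static noinline void bpf_interp_asan_check(const void *addr, u32 size)
{
	u64 scratch;

	/* This memcpy is KASAN-instrumented, so an OOB/UAF addr is
	 * reported here without altering the interpreted state.
	 */
	memcpy(&scratch, addr, min_t(u32, size, sizeof(scratch)));
}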

> Also, not all ldx/stx/st insns are instrumented. Insns rewritten by
> other fixup or conversion passes that use BPF_REG_AX are skipped,
> because that conflicts with us; insns whose access addr is given by R10
> are also skipped because they are trivial to verify.
>
> Patch 1 sanitizes st/stx insns, Patch 2 sanitizes ldx insns, and Patch 3
> adds selftests for the instrumentation in each possible case; all
> new/existing selftests for the verifier pass. Also, a BPF prog that
> exploits CVE-2022-23222 to achieve an OOB read is provided[3]; it is
> perfectly captured by this patch series.
>
> I haven't found a better way to back up the regs before executing the
> checking functions, so they are stored on the stack. Comments and
> advice are very welcome.
>
> [1] http://bit.do/CVE-2021-3490
> [2] http://bit.do/CVE-2022-23222
> [3] OOB-read: https://pastebin.com/raw/Ee1Cw492
>
> Hao Sun (3):
> bpf: Sanitize STX/ST in jited BPF progs with KASAN
> bpf: Sanitize LDX in jited BPF progs with KASAN
> selftests/bpf: Add tests for LDX/STX/ST sanitize
>
> arch/x86/net/bpf_jit_comp.c | 34 ++
> include/linux/bpf.h | 14 +
> kernel/bpf/Kconfig | 14 +
> kernel/bpf/verifier.c | 190 +++++++++++
> .../selftests/bpf/verifier/sanitize_st_ldx.c | 323 ++++++++++++++++++
> 5 files changed, 575 insertions(+)
> create mode 100644 tools/testing/selftests/bpf/verifier/sanitize_st_ldx.c
>
>
> base-commit: 8a2162a9227dda936a21fe72014a9931a3853a7b
>

Thanks,
Daniel

2022-11-24 03:17:34

by Hao Sun

Subject: Re: [PATCH bpf-next 0/3] bpf: Add LDX/STX/ST sanitize in jited BPF progs

Daniel Borkmann <[email protected]> wrote on Thu, Nov 24, 2022 at 07:41:
>
> On 11/23/22 3:15 PM, Hao Sun wrote:
> > The verifier sometimes makes mistakes[1][2] that may be exploited to
> > achieve arbitrary read/write. Currently, syzbot is continuously testing
> > bpf and can find memory issues in bpf syscalls, but it can hardly find
> > mischecking bugs in the verifier. We need runtime checks like KASAN in
> > BPF programs for this. This patch series implements address sanitizing
> > in jited BPF progs for testing purposes, so that tools like syzbot can
> > automatically find interesting bugs in the verifier by, if possible,
> > generating and executing BPF programs that bypass the verifier but have
> > memory issues, and then triggering this sanitizing.
> >
> > The idea is to dispatch the read/write addresses of a BPF program to
> > kernel functions that are instrumented by KASAN, achieving indirect
> > checking. Indirect checking is adopted because it is much simpler;
> > instrumenting direct checks the way compilers do would make the JIT
> > much more complex. The main steps are: back up R0 and R1 and store the
> > addr in R1, insert the checking function before load/store insns during
> > do_misc_fixups(), and finally, in the JIT stage, back up R1~R5 so that
> > the checking funcs won't corrupt the regs' states. An extra Kconfig
> > option is used to enable this, so normal use cases won't be impacted
> > at all.
>
> Thanks for looking into this! It's a bit unfortunate that this will need
> changes in every BPF JIT. Have you thought about a generic solution that
> would not require changes in the JITs? Given this is for debugging and
> finding mischecking bugs in the verifier, can't we reuse the interpreter
> for this and only implement it there? I would be curious whether we could
> achieve the same result as in [3] with such an approach.
>

Hi Daniel,

Thanks for taking a look. The reason I chose to do this in jited progs
is that JIT is used in most real cases, as is testing/fuzzing, e.g.,
syzbot tests BPF with JIT_ALWAYS_ON=y. Also, a BPF program generated by
fuzzers or other tools likely needs to be run hundreds of times with
random inputs to trigger the potential issues in it and have them
captured by the sanitizing, so JIT makes this much faster.

I believe we don't need changes in every BPF JIT; supporting X86_64 and
Arm64 would be enough, and the only thing that needs to be done there
is backing up regs on the stack before calling the checking functions.
Also, I'm wondering whether anyone knows a better way to make sure the
checking functions won't corrupt the scratch regs' states, e.g., a flag
that forces the compiler to push scratch regs before using them when
generating code for those funcs. If this is feasible, the changes to
the JIT can be removed completely, and the fixup in the verifier would
be enough.
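
(One concrete candidate, though untested: on x86, GCC and Clang support
__attribute__((no_caller_saved_registers)), which makes the callee save
and restore every register it clobbers. Assuming it composes with KASAN
instrumentation, the checking funcs could be declared roughly like this
and the JIT-side push/pop dropped:)

/* Untested sketch: let the compiler preserve all clobbered regs. */
notrace u64 __attribute__((no_caller_saved_registers))
bpf_asan_store8(u8 *addr)
{
	u8 ret = *addr;

	*addr = ret;
	return ret;
}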

Regards
Hao

> > Also, not all ldx/stx/st are instrumented. Insns rewrote by other fixup
> > or conversion passes that use BPF_REG_AX are skipped, because that
> > conflicts with us; insns whose access addr is specified by R10 are also
> > skipped because they are trivial to verify.
> >
> > Patch1 sanitizes st/stx insns, and Patch2 sanitizes ldx insns, Patch3 adds
> > selftests for instrumentation in each possible case, and all new/existing
> > selftests for the verifier can pass. Also, a BPF prog that also exploits
> > CVE-2022-23222 to achieve OOB read is provided[3], this can be perfertly
> > captured with this patch series.
> >
> > I haven't found a better way to back up the regs before executing the
> > checking functions, and have to store them on the stack. Comments and
> > advice are surely welcome.
> >
> > [1] http://bit.do/CVE-2021-3490
> > [2] http://bit.do/CVE-2022-23222
> > [3] OOB-read: https://pastebin.com/raw/Ee1Cw492
> >
> > Hao Sun (3):
> > bpf: Sanitize STX/ST in jited BPF progs with KASAN
> > bpf: Sanitize LDX in jited BPF progs with KASAN
> > selftests/bpf: Add tests for LDX/STX/ST sanitize
> >
> > arch/x86/net/bpf_jit_comp.c | 34 ++
> > include/linux/bpf.h | 14 +
> > kernel/bpf/Kconfig | 14 +
> > kernel/bpf/verifier.c | 190 +++++++++++
> > .../selftests/bpf/verifier/sanitize_st_ldx.c | 323 ++++++++++++++++++
> > 5 files changed, 575 insertions(+)
> > create mode 100644 tools/testing/selftests/bpf/verifier/sanitize_st_ldx.c
> >
> >
> > base-commit: 8a2162a9227dda936a21fe72014a9931a3853a7b
> >
>
> Thanks,
> Daniel

2022-11-25 06:06:52

by Hao Sun

Subject: Re: [PATCH bpf-next 0/3] bpf: Add LDX/STX/ST sanitize in jited BPF progs

Hao Sun <[email protected]> wrote on Thu, Nov 24, 2022 at 11:05:
>
> Daniel Borkmann <[email protected]> wrote on Thu, Nov 24, 2022 at 07:41:
> >
> > On 11/23/22 3:15 PM, Hao Sun wrote:
> > > The verifier sometimes makes mistakes[1][2] that may be exploited to
> > > achieve arbitrary read/write. Currently, syzbot is continuously testing
> > > bpf and can find memory issues in bpf syscalls, but it can hardly find
> > > mischecking bugs in the verifier. We need runtime checks like KASAN in
> > > BPF programs for this. This patch series implements address sanitizing
> > > in jited BPF progs for testing purposes, so that tools like syzbot can
> > > automatically find interesting bugs in the verifier by, if possible,
> > > generating and executing BPF programs that bypass the verifier but have
> > > memory issues, and then triggering this sanitizing.
> > >
> > > The idea is to dispatch the read/write addresses of a BPF program to
> > > kernel functions that are instrumented by KASAN, achieving indirect
> > > checking. Indirect checking is adopted because it is much simpler;
> > > instrumenting direct checks the way compilers do would make the JIT
> > > much more complex. The main steps are: back up R0 and R1 and store the
> > > addr in R1, insert the checking function before load/store insns during
> > > do_misc_fixups(), and finally, in the JIT stage, back up R1~R5 so that
> > > the checking funcs won't corrupt the regs' states. An extra Kconfig
> > > option is used to enable this, so normal use cases won't be impacted
> > > at all.
> >
> > Thanks for looking into this! It's a bit unfortunate that this will need
> > changes in every BPF JIT. Have you thought about a generic solution that
> > would not require changes in the JITs? Given this is for debugging and
> > finding mischecking bugs in the verifier, can't we reuse the interpreter
> > for this and only implement it there? I would be curious whether we could
> > achieve the same result as in [3] with such an approach.
> >
>
> Hi Daniel,
>
> Thanks for taking a look. The reason I chose to do this in jited progs
> is that JIT is used in most real cases, as is testing/fuzzing, e.g.,
> syzbot tests BPF with JIT_ALWAYS_ON=y. Also, a BPF program generated by
> fuzzers or other tools likely needs to be run hundreds of times with
> random inputs to trigger the potential issues in it and have them
> captured by the sanitizing, so JIT makes this much faster.
>
> I believe we don't need changes in every BPF JIT; supporting X86_64 and
> Arm64 would be enough, and the only thing that needs to be done there
> is backing up regs on the stack before calling the checking functions.
> Also, I'm wondering whether anyone knows a better way to make sure the
> checking functions won't corrupt the scratch regs' states, e.g., a flag
> that forces the compiler to push scratch regs before using them when
> generating code for those funcs. If this is feasible, the changes to
> the JIT can be removed completely, and the fixup in the verifier would
> be enough.
>

I think we can extend the BPF prog's stack size in this mode and then
back up all the scratch regs to that free space. This way, everything
happens at the BPF insn level, and we don't need to change the JIT at
all.

I will send a patch v2 for this.
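
(Roughly, the fixup could then emit something like the following around
the call, with `extra` being hypothetical scratch space reserved below
the prog's normal stack depth; the offsets are illustrative:)

/* Spill the remaining caller-saved regs to reserved stack slots so
 * the helper call clobbers nothing, all at the BPF insn level.
 */
*patch++ = BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_2, -(extra + 8));
*patch++ = BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_3, -(extra + 16));
*patch++ = BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_4, -(extra + 24));
*patch++ = BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_5, -(extra + 32));
*patch++ = sanitize_fn;
*patch++ = BPF_LDX_MEM(BPF_DW, BPF_REG_5, BPF_REG_10, -(extra + 32));
*patch++ = BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_10, -(extra + 24));
*patch++ = BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_10, -(extra + 16));
*patch++ = BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_10, -(extra + 8));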

> Regards
> Hao
>
> > > Also, not all ldx/stx/st insns are instrumented. Insns rewritten by
> > > other fixup or conversion passes that use BPF_REG_AX are skipped,
> > > because that conflicts with us; insns whose access addr is given by R10
> > > are also skipped because they are trivial to verify.
> > >
> > > Patch 1 sanitizes st/stx insns, Patch 2 sanitizes ldx insns, and Patch 3
> > > adds selftests for the instrumentation in each possible case; all
> > > new/existing selftests for the verifier pass. Also, a BPF prog that
> > > exploits CVE-2022-23222 to achieve an OOB read is provided[3]; it is
> > > perfectly captured by this patch series.
> > >
> > > I haven't found a better way to back up the regs before executing the
> > > checking functions, so they are stored on the stack. Comments and
> > > advice are very welcome.
> > >
> > > [1] http://bit.do/CVE-2021-3490
> > > [2] http://bit.do/CVE-2022-23222
> > > [3] OOB-read: https://pastebin.com/raw/Ee1Cw492
> > >
> > > Hao Sun (3):
> > > bpf: Sanitize STX/ST in jited BPF progs with KASAN
> > > bpf: Sanitize LDX in jited BPF progs with KASAN
> > > selftests/bpf: Add tests for LDX/STX/ST sanitize
> > >
> > > arch/x86/net/bpf_jit_comp.c | 34 ++
> > > include/linux/bpf.h | 14 +
> > > kernel/bpf/Kconfig | 14 +
> > > kernel/bpf/verifier.c | 190 +++++++++++
> > > .../selftests/bpf/verifier/sanitize_st_ldx.c | 323 ++++++++++++++++++
> > > 5 files changed, 575 insertions(+)
> > > create mode 100644 tools/testing/selftests/bpf/verifier/sanitize_st_ldx.c
> > >
> > >
> > > base-commit: 8a2162a9227dda936a21fe72014a9931a3853a7b
> > >
> >
> > Thanks,
> > Daniel