2024-01-27 05:31:38

by Jinghao Jia

Subject: [RFC PATCH 2/2] x86/kprobes: boost more instructions from grp2/3/4/5

With the instruction decoder, we are now able to decode and recognize
instructions with opcode extensions. There are more instructions in
these groups that can be boosted:

Group 2: ROL, ROR, RCL, RCR, SHL/SAL, SHR, SAR
Group 3: TEST, NOT, NEG, MUL, IMUL, DIV, IDIV
Group 4: INC, DEC (byte operation)
Group 5: INC, DEC (word/doubleword/quadword operation)

These instructions were not boosted previously because there are reserved
opcodes within the groups, e.g., Group 2 with ModR/M.nnn == 110 is
unmapped. As a result, kprobes attached to them require two int3 traps,
since being non-boostable also prevents jump optimization.
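
For reference, the opcode extension is the reg/nnn field, bits 5:3 of
the ModR/M byte. A minimal sketch of the decode, mirroring what the
X86_MODRM_REG() macro in arch/x86/include/asm/insn.h computes (the
MODRM_NNN name here is only illustrative):

/* ModR/M byte layout: mod[7:6] | reg-or-nnn[5:3] | rm[2:0] */
#define MODRM_NNN(modrm)        (((modrm) & 0x38) >> 3)

/* Example: bytes f7 f1 are Grp3 with nnn == 6, i.e. DIV ecx */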

Some simple tests on QEMU show that, after boosting and jump
optimization, a single kprobe on these instructions with an empty
pre-handler runs 10x faster (~1000 cycles vs. ~100 cycles).
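
A sketch of the kind of module used for the measurement (kernel_clone
is only an illustrative probe target; the actual tests placed the
probe on the instructions listed above):

#include <linux/module.h>
#include <linux/kprobes.h>

static int empty_pre(struct kprobe *p, struct pt_regs *regs)
{
        return 0;       /* do nothing; measure pure probe overhead */
}

static struct kprobe kp = {
        .symbol_name = "kernel_clone",  /* illustrative target */
        .pre_handler = empty_pre,
};

static int __init bench_init(void)
{
        return register_kprobe(&kp);
}

static void __exit bench_exit(void)
{
        unregister_kprobe(&kp);
}

module_init(bench_init);
module_exit(bench_exit);
MODULE_LICENSE("GPL");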

Since these instructions are mostly ALU operations and do not touch
special registers like RIP, let's boost them so that we get the
performance benefit.

Signed-off-by: Jinghao Jia <[email protected]>
---
arch/x86/kernel/kprobes/core.c | 21 +++++++++++++++------
1 file changed, 15 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index 792b38d22126..f847bd9cc91b 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -169,22 +169,31 @@ int can_boost(struct insn *insn, void *addr)
case 0x62: /* bound */
case 0x70 ... 0x7f: /* Conditional jumps */
case 0x9a: /* Call far */
- case 0xc0 ... 0xc1: /* Grp2 */
case 0xcc ... 0xce: /* software exceptions */
- case 0xd0 ... 0xd3: /* Grp2 */
case 0xd6: /* (UD) */
case 0xd8 ... 0xdf: /* ESC */
case 0xe0 ... 0xe3: /* LOOP*, JCXZ */
case 0xe8 ... 0xe9: /* near Call, JMP */
case 0xeb: /* Short JMP */
case 0xf0 ... 0xf4: /* LOCK/REP, HLT */
- case 0xf6 ... 0xf7: /* Grp3 */
- case 0xfe: /* Grp4 */
/* ... are not boostable */
return 0;
+ case 0xc0 ... 0xc1: /* Grp2 */
+ case 0xd0 ... 0xd3: /* Grp2 */
+ /* ModR/M nnn == 110 is reserved */
+ return X86_MODRM_REG(insn->modrm.bytes[0]) != 6;
+ case 0xf6 ... 0xf7: /* Grp3 */
+ /* ModR/M nnn == 001 is reserved */
+ return X86_MODRM_REG(insn->modrm.bytes[0]) != 1;
+ case 0xfe: /* Grp4 */
+ /* Only inc and dec are boostable */
+ return X86_MODRM_REG(insn->modrm.bytes[0]) == 0 ||
+ X86_MODRM_REG(insn->modrm.bytes[0]) == 1;
case 0xff: /* Grp5 */
- /* Only indirect jmp is boostable */
- return X86_MODRM_REG(insn->modrm.bytes[0]) == 4;
+ /* Only inc, dec, and indirect jmp are boostable */
+ return X86_MODRM_REG(insn->modrm.bytes[0]) == 0 ||
+ X86_MODRM_REG(insn->modrm.bytes[0]) == 1 ||
+ X86_MODRM_REG(insn->modrm.bytes[0]) == 4;
default:
return 1;
}
--
2.43.0



2024-01-28 02:22:31

by Masami Hiramatsu

Subject: Re: [RFC PATCH 2/2] x86/kprobes: boost more instructions from grp2/3/4/5

On Fri, 26 Jan 2024 22:41:24 -0600
Jinghao Jia <[email protected]> wrote:

> With the instruction decoder, we are now able to decode and recognize
> instructions with opcode extensions. There are more instructions in
> these groups that can be boosted:
>
> Group 2: ROL, ROR, RCL, RCR, SHL/SAL, SHR, SAR
> Group 3: TEST, NOT, NEG, MUL, IMUL, DIV, IDIV
> Group 4: INC, DEC (byte operation)
> Group 5: INC, DEC (word/doubleword/quadword operation)
>
> These instructions were not boosted previously because there are reserved
> opcodes within the groups, e.g., Group 2 with ModR/M.nnn == 110 is
> unmapped. As a result, kprobes attached to them require two int3 traps,
> since being non-boostable also prevents jump optimization.
>
> Some simple tests on QEMU show that, after boosting and jump
> optimization, a single kprobe on these instructions with an empty
> pre-handler runs 10x faster (~1000 cycles vs. ~100 cycles).
>
> Since these instructions are mostly ALU operations and do not touch
> special registers like RIP, let's boost them so that we get the
> performance benefit.
>

As long as we check the ModR/M byte, I think we can safely run these
instructions on the trampoline buffer without adjusting the results
(this means they can be "boosted").
I just have a minor comment, but basically this looks good to me.
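
For context, a boosted probe executes the copied instruction on the
buffer and then jumps straight back, instead of taking a second trap
to resume. A conceptual sketch of the slot layout (not the exact code
in arch/x86/kernel/kprobes/core.c):

/*
 * probed address:  int3             <- the only trap taken
 * insn slot:       <copied insn>    <- runs natively on the buffer
 *                  jmp addr + len   <- relative jump back appended;
 *                                      no second trap needed
 */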

Reviewed-by: Masami Hiramatsu (Google) <[email protected]>

> Signed-off-by: Jinghao Jia <[email protected]>
> ---
> arch/x86/kernel/kprobes/core.c | 21 +++++++++++++++------
> 1 file changed, 15 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
> index 792b38d22126..f847bd9cc91b 100644
> --- a/arch/x86/kernel/kprobes/core.c
> +++ b/arch/x86/kernel/kprobes/core.c
> @@ -169,22 +169,31 @@ int can_boost(struct insn *insn, void *addr)
> case 0x62: /* bound */
> case 0x70 ... 0x7f: /* Conditional jumps */
> case 0x9a: /* Call far */
> - case 0xc0 ... 0xc1: /* Grp2 */
> case 0xcc ... 0xce: /* software exceptions */
> - case 0xd0 ... 0xd3: /* Grp2 */
> case 0xd6: /* (UD) */
> case 0xd8 ... 0xdf: /* ESC */
> case 0xe0 ... 0xe3: /* LOOP*, JCXZ */
> case 0xe8 ... 0xe9: /* near Call, JMP */
> case 0xeb: /* Short JMP */
> case 0xf0 ... 0xf4: /* LOCK/REP, HLT */
> - case 0xf6 ... 0xf7: /* Grp3 */
> - case 0xfe: /* Grp4 */
> /* ... are not boostable */
> return 0;
> + case 0xc0 ... 0xc1: /* Grp2 */
> + case 0xd0 ... 0xd3: /* Grp2 */
> + /* ModR/M nnn == 110 is reserved */
> + return X86_MODRM_REG(insn->modrm.bytes[0]) != 6;
> + case 0xf6 ... 0xf7: /* Grp3 */
> + /* ModR/M nnn == 001 is reserved */

/* AMD uses nnn == 001 as TEST, but Intel makes it reserved. */

> + return X86_MODRM_REG(insn->modrm.bytes[0]) != 1;
> + case 0xfe: /* Grp4 */
> + /* Only inc and dec are boostable */
> + return X86_MODRM_REG(insn->modrm.bytes[0]) == 0 ||
> + X86_MODRM_REG(insn->modrm.bytes[0]) == 1;
> case 0xff: /* Grp5 */
> - /* Only indirect jmp is boostable */
> - return X86_MODRM_REG(insn->modrm.bytes[0]) == 4;
> + /* Only inc, dec, and indirect jmp are boostable */
> + return X86_MODRM_REG(insn->modrm.bytes[0]) == 0 ||
> + X86_MODRM_REG(insn->modrm.bytes[0]) == 1 ||
> + X86_MODRM_REG(insn->modrm.bytes[0]) == 4;
> default:
> return 1;
> }
> --
> 2.43.0
>

Thank you,

--
Masami Hiramatsu (Google) <[email protected]>

2024-01-28 21:31:24

by Jinghao Jia

Subject: Re: [RFC PATCH 2/2] x86/kprobes: boost more instructions from grp2/3/4/5



On 1/27/24 20:22, Masami Hiramatsu (Google) wrote:
> On Fri, 26 Jan 2024 22:41:24 -0600
> Jinghao Jia <[email protected]> wrote:
>
>> With the instruction decoder, we are now able to decode and recognize
>> instructions with opcode extensions. There are more instructions in
>> these groups that can be boosted:
>>
>> Group 2: ROL, ROR, RCL, RCR, SHL/SAL, SHR, SAR
>> Group 3: TEST, NOT, NEG, MUL, IMUL, DIV, IDIV
>> Group 4: INC, DEC (byte operation)
>> Group 5: INC, DEC (word/doubleword/quadword operation)
>>
>> These instructions were not boosted previously because there are reserved
>> opcodes within the groups, e.g., Group 2 with ModR/M.nnn == 110 is
>> unmapped. As a result, kprobes attached to them require two int3 traps,
>> since being non-boostable also prevents jump optimization.
>>
>> Some simple tests on QEMU show that, after boosting and jump
>> optimization, a single kprobe on these instructions with an empty
>> pre-handler runs 10x faster (~1000 cycles vs. ~100 cycles).
>>
>> Since these instructions are mostly ALU operations and do not touch
>> special registers like RIP, let's boost them so that we get the
>> performance benefit.
>>
>
> As long as we check the ModR/M byte, I think we can safely run these
> instructions on the trampoline buffer without adjusting the results
> (this means they can be "boosted").
> I just have a minor comment, but basically this looks good to me.
>
> Reviewed-by: Masami Hiramatsu (Google) <[email protected]>
>
>> Signed-off-by: Jinghao Jia <[email protected]>
>> ---
>> arch/x86/kernel/kprobes/core.c | 21 +++++++++++++++------
>> 1 file changed, 15 insertions(+), 6 deletions(-)
>>
>> diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
>> index 792b38d22126..f847bd9cc91b 100644
>> --- a/arch/x86/kernel/kprobes/core.c
>> +++ b/arch/x86/kernel/kprobes/core.c
>> @@ -169,22 +169,31 @@ int can_boost(struct insn *insn, void *addr)
>> case 0x62: /* bound */
>> case 0x70 ... 0x7f: /* Conditional jumps */
>> case 0x9a: /* Call far */
>> - case 0xc0 ... 0xc1: /* Grp2 */
>> case 0xcc ... 0xce: /* software exceptions */
>> - case 0xd0 ... 0xd3: /* Grp2 */
>> case 0xd6: /* (UD) */
>> case 0xd8 ... 0xdf: /* ESC */
>> case 0xe0 ... 0xe3: /* LOOP*, JCXZ */
>> case 0xe8 ... 0xe9: /* near Call, JMP */
>> case 0xeb: /* Short JMP */
>> case 0xf0 ... 0xf4: /* LOCK/REP, HLT */
>> - case 0xf6 ... 0xf7: /* Grp3 */
>> - case 0xfe: /* Grp4 */
>> /* ... are not boostable */
>> return 0;
>> + case 0xc0 ... 0xc1: /* Grp2 */
>> + case 0xd0 ... 0xd3: /* Grp2 */
>> + /* ModR/M nnn == 110 is reserved */
>> + return X86_MODRM_REG(insn->modrm.bytes[0]) != 6;
>> + case 0xf6 ... 0xf7: /* Grp3 */
>> + /* ModR/M nnn == 001 is reserved */
>
> /* AMD uses nnn == 001 as TEST, but Intel makes it reserved. */
>

I will incorporate this into the v2. Since nnn == 001 is still considered
reserved by Intel, we still need to prevent it from being boosted, don't
we?

--Jinghao

>> + return X86_MODRM_REG(insn->modrm.bytes[0]) != 1;
>> + case 0xfe: /* Grp4 */
>> + /* Only inc and dec are boostable */
>> + return X86_MODRM_REG(insn->modrm.bytes[0]) == 0 ||
>> + X86_MODRM_REG(insn->modrm.bytes[0]) == 1;
>> case 0xff: /* Grp5 */
>> - /* Only indirect jmp is boostable */
>> - return X86_MODRM_REG(insn->modrm.bytes[0]) == 4;
>> + /* Only inc, dec, and indirect jmp are boostable */
>> + return X86_MODRM_REG(insn->modrm.bytes[0]) == 0 ||
>> + X86_MODRM_REG(insn->modrm.bytes[0]) == 1 ||
>> + X86_MODRM_REG(insn->modrm.bytes[0]) == 4;
>> default:
>> return 1;
>> }
>> --
>> 2.43.0
>>
>
> Thank you,
>



2024-01-30 01:47:52

by Masami Hiramatsu

Subject: Re: [RFC PATCH 2/2] x86/kprobes: boost more instructions from grp2/3/4/5

On Sun, 28 Jan 2024 15:30:50 -0600
Jinghao Jia <[email protected]> wrote:

>
>
> On 1/27/24 20:22, Masami Hiramatsu (Google) wrote:
> > On Fri, 26 Jan 2024 22:41:24 -0600
> > Jinghao Jia <[email protected]> wrote:
> >
> >> With the instruction decoder, we are now able to decode and recognize
> >> instructions with opcode extensions. There are more instructions in
> >> these groups that can be boosted:
> >>
> >> Group 2: ROL, ROR, RCL, RCR, SHL/SAL, SHR, SAR
> >> Group 3: TEST, NOT, NEG, MUL, IMUL, DIV, IDIV
> >> Group 4: INC, DEC (byte operation)
> >> Group 5: INC, DEC (word/doubleword/quadword operation)
> >>
> >> These instructions were not boosted previously because there are reserved
> >> opcodes within the groups, e.g., Group 2 with ModR/M.nnn == 110 is
> >> unmapped. As a result, kprobes attached to them require two int3 traps,
> >> since being non-boostable also prevents jump optimization.
> >>
> >> Some simple tests on QEMU show that, after boosting and jump
> >> optimization, a single kprobe on these instructions with an empty
> >> pre-handler runs 10x faster (~1000 cycles vs. ~100 cycles).
> >>
> >> Since these instructions are mostly ALU operations and do not touch
> >> special registers like RIP, let's boost them so that we get the
> >> performance benefit.
> >>
> >
> > As long as we check the ModR/M byte, I think we can safely run these
> > instructions on the trampoline buffer without adjusting the results
> > (this means they can be "boosted").
> > I just have a minor comment, but basically this looks good to me.
> >
> > Reviewed-by: Masami Hiramatsu (Google) <[email protected]>
> >
> >> Signed-off-by: Jinghao Jia <[email protected]>
> >> ---
> >> arch/x86/kernel/kprobes/core.c | 21 +++++++++++++++------
> >> 1 file changed, 15 insertions(+), 6 deletions(-)
> >>
> >> diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
> >> index 792b38d22126..f847bd9cc91b 100644
> >> --- a/arch/x86/kernel/kprobes/core.c
> >> +++ b/arch/x86/kernel/kprobes/core.c
> >> @@ -169,22 +169,31 @@ int can_boost(struct insn *insn, void *addr)
> >> case 0x62: /* bound */
> >> case 0x70 ... 0x7f: /* Conditional jumps */
> >> case 0x9a: /* Call far */
> >> - case 0xc0 ... 0xc1: /* Grp2 */
> >> case 0xcc ... 0xce: /* software exceptions */
> >> - case 0xd0 ... 0xd3: /* Grp2 */
> >> case 0xd6: /* (UD) */
> >> case 0xd8 ... 0xdf: /* ESC */
> >> case 0xe0 ... 0xe3: /* LOOP*, JCXZ */
> >> case 0xe8 ... 0xe9: /* near Call, JMP */
> >> case 0xeb: /* Short JMP */
> >> case 0xf0 ... 0xf4: /* LOCK/REP, HLT */
> >> - case 0xf6 ... 0xf7: /* Grp3 */
> >> - case 0xfe: /* Grp4 */
> >> /* ... are not boostable */
> >> return 0;
> >> + case 0xc0 ... 0xc1: /* Grp2 */
> >> + case 0xd0 ... 0xd3: /* Grp2 */
> >> + /* ModR/M nnn == 110 is reserved */
> >> + return X86_MODRM_REG(insn->modrm.bytes[0]) != 6;
> >> + case 0xf6 ... 0xf7: /* Grp3 */
> >> + /* ModR/M nnn == 001 is reserved */
> >
> > /* AMD uses nnn == 001 as TEST, but Intel makes it reserved. */
> >
>
> I will incorporate this into the v2. Since nnn == 001 is still considered
> reserved by Intel, we still need to prevent it from being boosted, don't
> we?
>
> --Jinghao
>
> >> + return X86_MODRM_REG(insn->modrm.bytes[0]) != 1;
> >> + case 0xfe: /* Grp4 */
> >> + /* Only inc and dec are boostable */
> >> + return X86_MODRM_REG(insn->modrm.bytes[0]) == 0 ||
> >> + X86_MODRM_REG(insn->modrm.bytes[0]) == 1;
> >> case 0xff: /* Grp5 */
> >> - /* Only indirect jmp is boostable */
> >> - return X86_MODRM_REG(insn->modrm.bytes[0]) == 4;
> >> + /* Only inc, dec, and indirect jmp are boostable */
> >> + return X86_MODRM_REG(insn->modrm.bytes[0]) == 0 ||
> >> + X86_MODRM_REG(insn->modrm.bytes[0]) == 1 ||
> >> + X86_MODRM_REG(insn->modrm.bytes[0]) == 4;
> >> default:
> >> return 1;
> >> }
> >> --
> >> 2.43.0
> >>
> >
> > Thank you,
> >


--
Masami Hiramatsu (Google) <[email protected]>