2022-02-24 15:34:56

by Peter Zijlstra

Subject: [PATCH v2 12/39] x86/ibt,ftrace: Search for __fentry__ location

Have ftrace_location() search the symbol for the __fentry__ location
when it isn't at func+0 and use this for {,un}register_ftrace_direct().

This avoids a whole bunch of assumptions about __fentry__ being at
func+0.

Suggested-by: Steven Rostedt <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
---
kernel/trace/ftrace.c | 30 ++++++++++++++++++++++++++++--
1 file changed, 28 insertions(+), 2 deletions(-)

--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -1578,7 +1578,24 @@ unsigned long ftrace_location_range(unsi
*/
unsigned long ftrace_location(unsigned long ip)
{
- return ftrace_location_range(ip, ip);
+ struct dyn_ftrace *rec;
+ unsigned long offset;
+ unsigned long size;
+
+ rec = lookup_rec(ip, ip);
+ if (!rec) {
+ if (!kallsyms_lookup_size_offset(ip, &size, &offset))
+ goto out;
+
+ if (!offset)
+ rec = lookup_rec(ip - offset, (ip - offset) + size);
+ }
+
+ if (rec)
+ return rec->ip;
+
+out:
+ return 0;
}

/**
@@ -5110,11 +5127,16 @@ int register_ftrace_direct(unsigned long
struct ftrace_func_entry *entry;
struct ftrace_hash *free_hash = NULL;
struct dyn_ftrace *rec;
- int ret = -EBUSY;
+ int ret = -ENODEV;

mutex_lock(&direct_mutex);

+ ip = ftrace_location(ip);
+ if (!ip)
+ goto out_unlock;
+
/* See if there's a direct function at @ip already */
+ ret = -EBUSY;
if (ftrace_find_rec_direct(ip))
goto out_unlock;

@@ -5222,6 +5244,10 @@ int unregister_ftrace_direct(unsigned lo

mutex_lock(&direct_mutex);

+ ip = ftrace_location(ip);
+ if (!ip)
+ goto out_unlock;
+
entry = find_direct_entry(&ip, NULL);
if (!entry)
goto out_unlock;
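
For illustration only, here is a compressed userspace mock of the new entry check in register_ftrace_direct() above. The mock_* helpers, the single-slot "hash" and all addresses are made up for this sketch and are not the kernel implementation; the point is just that an ip which does not resolve to an __fentry__ record now fails with -ENODEV, while a location that already has a direct caller still fails with -EBUSY.

#include <errno.h>
#include <stdio.h>

/* Mock: "foo" at 0x1000 has its __fentry__ at 0x1004; nothing else is traced */
static unsigned long mock_ftrace_location(unsigned long ip)
{
	if (ip == 0x1000 || ip == 0x1004)
		return 0x1004;
	return 0;
}

static unsigned long direct_ip;	/* at most one direct attachment in this mock */

static int mock_register_ftrace_direct(unsigned long ip, unsigned long addr)
{
	/* resolve func+0 (or the exact record address) to the record ip */
	ip = mock_ftrace_location(ip);
	if (!ip)
		return -ENODEV;		/* not a traceable location */

	if (direct_ip == ip)
		return -EBUSY;		/* already has a direct caller */

	direct_ip = ip;			/* here the kernel would attach @addr */
	(void)addr;
	return 0;
}

int main(void)
{
	printf("%d\n", mock_register_ftrace_direct(0x1000, 0xd1c7));	/* 0: foo+0 resolves to foo+4 */
	printf("%d\n", mock_register_ftrace_direct(0x1004, 0xd1c7));	/* -EBUSY: same record        */
	printf("%d\n", mock_register_ftrace_direct(0x2000, 0xd1c7));	/* -ENODEV: not traced        */
	return 0;
}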



2022-02-24 16:40:39

by Masami Hiramatsu

Subject: Re: [PATCH v2 12/39] x86/ibt,ftrace: Search for __fentry__ location

Hi Peter,

On Thu, 24 Feb 2022 15:51:50 +0100
Peter Zijlstra <[email protected]> wrote:

> Have ftrace_location() search the symbol for the __fentry__ location
> when it isn't at func+0 and use this for {,un}register_ftrace_direct().
>
> This avoids a whole bunch of assumptions about __fentry__ being at
> func+0.
>
> Suggested-by: Steven Rostedt <[email protected]>
> Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
> ---
> kernel/trace/ftrace.c | 30 ++++++++++++++++++++++++++++--
> 1 file changed, 28 insertions(+), 2 deletions(-)
>
> --- a/kernel/trace/ftrace.c
> +++ b/kernel/trace/ftrace.c
> @@ -1578,7 +1578,24 @@ unsigned long ftrace_location_range(unsi
> */
> unsigned long ftrace_location(unsigned long ip)
> {
> - return ftrace_location_range(ip, ip);
> + struct dyn_ftrace *rec;
> + unsigned long offset;
> + unsigned long size;
> +
> + rec = lookup_rec(ip, ip);
> + if (!rec) {
> + if (!kallsyms_lookup_size_offset(ip, &size, &offset))
> + goto out;
> +
> + if (!offset)

Isn't this 'if (offset)' ?

> + rec = lookup_rec(ip - offset, (ip - offset) + size);
> + }
> +
> + if (rec)
> + return rec->ip;
> +
> +out:
> + return 0;
> }

Thank you,

>
> /**
> @@ -5110,11 +5127,16 @@ int register_ftrace_direct(unsigned long
> struct ftrace_func_entry *entry;
> struct ftrace_hash *free_hash = NULL;
> struct dyn_ftrace *rec;
> - int ret = -EBUSY;
> + int ret = -ENODEV;
>
> mutex_lock(&direct_mutex);
>
> + ip = ftrace_location(ip);
> + if (!ip)
> + goto out_unlock;
> +
> /* See if there's a direct function at @ip already */
> + ret = -EBUSY;
> if (ftrace_find_rec_direct(ip))
> goto out_unlock;
>
> @@ -5222,6 +5244,10 @@ int unregister_ftrace_direct(unsigned lo
>
> mutex_lock(&direct_mutex);
>
> + ip = ftrace_location(ip);
> + if (!ip)
> + goto out_unlock;
> +
> entry = find_direct_entry(&ip, NULL);
> if (!entry)
> goto out_unlock;
>
>


--
Masami Hiramatsu <[email protected]>

2022-02-24 16:43:28

by Steven Rostedt

Subject: Re: [PATCH v2 12/39] x86/ibt,ftrace: Search for __fentry__ location

On Fri, 25 Feb 2022 00:55:20 +0900
Masami Hiramatsu <[email protected]> wrote:

> > unsigned long ftrace_location(unsigned long ip)
> > {
> > - return ftrace_location_range(ip, ip);
> > + struct dyn_ftrace *rec;
> > + unsigned long offset;
> > + unsigned long size;
> > +
> > + rec = lookup_rec(ip, ip);
> > + if (!rec) {
> > + if (!kallsyms_lookup_size_offset(ip, &size, &offset))
> > + goto out;
> > +
> > + if (!offset)
>
> Isn't this 'if (offset)' ?

No, the point to only look for the fentry location if the ip passed in
points to the start of the function. IOW, +0 offset.

-- Steve


>
> > + rec = lookup_rec(ip - offset, (ip - offset) + size);
> > + }
> > +
> > + if (rec)
> > + return rec->ip;
> > +
> > +out:
> > + return 0;
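
As a side note for readers following along, a minimal userspace sketch of the gate Steve describes, with a made-up kallsyms_lookup_size_offset() stand-in and hypothetical addresses: only an ip that resolves to offset 0 (func+0) would widen the lookup to the whole symbol.

#include <stdio.h>

/*
 * Mock of kallsyms_lookup_size_offset(): pretend a single symbol "foo"
 * lives at 0x1000 and is 0x40 bytes long.  Returns nonzero on success
 * and reports the symbol size and the offset of @ip into the symbol.
 */
static int mock_kallsyms_lookup_size_offset(unsigned long ip,
					    unsigned long *size,
					    unsigned long *offset)
{
	if (ip < 0x1000 || ip >= 0x1040)
		return 0;
	*size = 0x40;
	*offset = ip - 0x1000;
	return 1;
}

int main(void)
{
	unsigned long ips[] = { 0x1000, 0x1004, 0x1010 };
	unsigned long size, offset;
	int i;

	for (i = 0; i < 3; i++) {
		if (!mock_kallsyms_lookup_size_offset(ips[i], &size, &offset))
			continue;
		/* only offset == 0 (func+0) widens the search to the whole symbol */
		printf("ip=%#lx offset=%#lx -> %s\n", ips[i], offset,
		       !offset ? "search whole symbol for __fentry__"
			       : "no symbol-wide search");
	}
	return 0;
}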

2022-02-24 16:44:42

by Steven Rostedt

Subject: Re: [PATCH v2 12/39] x86/ibt,ftrace: Search for __fentry__ location

On Thu, 24 Feb 2022 10:58:47 -0500
Steven Rostedt <[email protected]> wrote:

> No, the point to only look for the fentry location if the ip passed in

"the point is to only look"

-- Steve

> points to the start of the function. IOW, +0 offset.

2022-02-24 16:44:50

by Steven Rostedt

Subject: Re: [PATCH v2 12/39] x86/ibt,ftrace: Search for __fentry__ location

On Thu, 24 Feb 2022 10:58:47 -0500
Steven Rostedt <[email protected]> wrote:

> On Fri, 25 Feb 2022 00:55:20 +0900
> Masami Hiramatsu <[email protected]> wrote:
>
> > > unsigned long ftrace_location(unsigned long ip)
> > > {
> > > - return ftrace_location_range(ip, ip);
> > > + struct dyn_ftrace *rec;
> > > + unsigned long offset;
> > > + unsigned long size;
> > > +
> > > + rec = lookup_rec(ip, ip);
> > > + if (!rec) {
> > > + if (!kallsyms_lookup_size_offset(ip, &size, &offset))
> > > + goto out;
> > > +
> > > + if (!offset)
> >
> > Isn't this 'if (offset)' ?
>
> No, the point to only look for the fentry location if the ip passed in
> points to the start of the function. IOW, +0 offset.
>

I do agree with Masami that it is confusing. Please add a comment:

/* Search the entire function if ip is the start of the function */
if (!offset)
[..]

-- Steve

>
>
> >
> > > + rec = lookup_rec(ip - offset, (ip - offset) + size);
> > > + }
> > > +
> > > + if (rec)
> > > + return rec->ip;
> > > +
> > > +out:
> > > + return 0;

2022-02-24 22:49:12

by Josh Poimboeuf

Subject: Re: [PATCH v2 12/39] x86/ibt,ftrace: Search for __fentry__ location

On Thu, Feb 24, 2022 at 11:01:30AM -0500, Steven Rostedt wrote:
> On Thu, 24 Feb 2022 10:58:47 -0500
> Steven Rostedt <[email protected]> wrote:
>
> > On Fri, 25 Feb 2022 00:55:20 +0900
> > Masami Hiramatsu <[email protected]> wrote:
> >
> > > > unsigned long ftrace_location(unsigned long ip)
> > > > {
> > > > - return ftrace_location_range(ip, ip);
> > > > + struct dyn_ftrace *rec;
> > > > + unsigned long offset;
> > > > + unsigned long size;
> > > > +
> > > > + rec = lookup_rec(ip, ip);
> > > > + if (!rec) {
> > > > + if (!kallsyms_lookup_size_offset(ip, &size, &offset))
> > > > + goto out;
> > > > +
> > > > + if (!offset)
> > >
> > > Isn't this 'if (offset)' ?
> >
> > No, the point to only look for the fentry location if the ip passed in
> > points to the start of the function. IOW, +0 offset.
> >
>
> I do agree with Masami that it is confusing. Please add a comment:
>
> /* Search the entire function if ip is the start of the function */
> if (!offset)
> [..]
>
> -- Steve
>
> >
> >
> > >
> > > > + rec = lookup_rec(ip - offset, (ip - offset) + size);

If 'offset' is zero then why the math here? ^^^^^^^^^^^ ^^^^^^^^^^^
--
Josh

2022-02-25 01:38:38

by Kees Cook

Subject: Re: [PATCH v2 12/39] x86/ibt,ftrace: Search for __fentry__ location

On Thu, Feb 24, 2022 at 03:51:50PM +0100, Peter Zijlstra wrote:
> Have ftrace_location() search the symbol for the __fentry__ location
> when it isn't at func+0 and use this for {,un}register_ftrace_direct().
>
> This avoids a whole bunch of assumptions about __fentry__ being at
> func+0.
>
> Suggested-by: Steven Rostedt <[email protected]>
> Signed-off-by: Peter Zijlstra (Intel) <[email protected]>

Cool. This should help with anything using __fentry__ tricks (i.e.
future CFI...), yes?

Reviewed-by: Kees Cook <[email protected]>

--
Kees Cook

2022-02-25 03:38:54

by Steven Rostedt

Subject: Re: [PATCH v2 12/39] x86/ibt,ftrace: Search for __fentry__ location

On Thu, 24 Feb 2022 14:46:31 -0800
Josh Poimboeuf <[email protected]> wrote:

> > > > > + rec = lookup_rec(ip - offset, (ip - offset) + size);
>
> If 'offset' is zero then why the math here? ^^^^^^^^^^^ ^^^^^^^^^^^

Because it didn't check for offset being zero when we wrote that line. ;-)

Yes, checking for !offset makes that logic irrelevant.

Good catch.

-- Steve

2022-02-25 03:38:59

by Masami Hiramatsu

Subject: Re: [PATCH v2 12/39] x86/ibt,ftrace: Search for __fentry__ location

On Thu, 24 Feb 2022 10:58:47 -0500
Steven Rostedt <[email protected]> wrote:

> On Fri, 25 Feb 2022 00:55:20 +0900
> Masami Hiramatsu <[email protected]> wrote:
>
> > > unsigned long ftrace_location(unsigned long ip)
> > > {
> > > - return ftrace_location_range(ip, ip);
> > > + struct dyn_ftrace *rec;
> > > + unsigned long offset;
> > > + unsigned long size;
> > > +
> > > + rec = lookup_rec(ip, ip);
> > > + if (!rec) {
> > > + if (!kallsyms_lookup_size_offset(ip, &size, &offset))
> > > + goto out;
> > > +
> > > + if (!offset)
> >
> > Isn't this 'if (offset)' ?
>
> No, the point to only look for the fentry location if the ip passed in
> points to the start of the function. IOW, +0 offset.

OK, so this means ftrace_location() will be same as
ftrace_location_range(sym, sym_end) ?

Thank you,

>
> -- Steve
>
>
> >
> > > + rec = lookup_rec(ip - offset, (ip - offset) + size);
> > > + }
> > > +
> > > + if (rec)
> > > + return rec->ip;
> > > +
> > > +out:
> > > + return 0;


--
Masami Hiramatsu <[email protected]>

2022-02-25 07:28:02

by Steven Rostedt

Subject: Re: [PATCH v2 12/39] x86/ibt,ftrace: Search for __fentry__ location

On Fri, 25 Feb 2022 10:34:49 +0900
Masami Hiramatsu <[email protected]> wrote:

> > > > + if (!rec) {
> > > > + if (!kallsyms_lookup_size_offset(ip, &size, &offset))
> > > > + goto out;
> > > > +
> > > > + if (!offset)
> > >
> > > Isn't this 'if (offset)' ?
> >
> > No, the point to only look for the fentry location if the ip passed in
> > points to the start of the function. IOW, +0 offset.
>
> OK, so this means ftrace_location() will be same as
> ftrace_location_range(sym, sym_end) ?

No. It only acts like ftrace_location_range(sym, sym_end) if the passed
in argument is the ip of the function (kallsyms returns offset = 0)

-- Steve

2022-02-25 14:19:58

by Steven Rostedt

Subject: Re: [PATCH v2 12/39] x86/ibt,ftrace: Search for __fentry__ location

On Fri, 25 Feb 2022 19:20:08 +0900
Masami Hiramatsu <[email protected]> wrote:

> > No. It only acts like ftrace_location_range(sym, sym_end) if the passed
> > in argument is the ip of the function (kallsyms returns offset = 0)
>
> Got it. So now ftrace_location() will return the ftrace address
> when the ip == func or ip == mcount-call.

Exactly! :-)

Are you OK with that?

-- Steve

2022-02-25 14:55:27

by Masami Hiramatsu

Subject: Re: [PATCH v2 12/39] x86/ibt,ftrace: Search for __fentry__ location

On Thu, 24 Feb 2022 21:19:19 -0500
Steven Rostedt <[email protected]> wrote:

> On Fri, 25 Feb 2022 10:34:49 +0900
> Masami Hiramatsu <[email protected]> wrote:
>
> > > > > + if (!rec) {
> > > > > + if (!kallsyms_lookup_size_offset(ip, &size, &offset))
> > > > > + goto out;
> > > > > +
> > > > > + if (!offset)
> > > >
> > > > Isn't this 'if (offset)' ?
> > >
> > > No, the point to only look for the fentry location if the ip passed in
> > > points to the start of the function. IOW, +0 offset.
> >
> > OK, so this means ftrace_location() will be same as
> > ftrace_location_range(sym, sym_end) ?
>
> No. It only acts like ftrace_location_range(sym, sym_end) if the passed
> in argument is the ip of the function (kallsyms returns offset = 0)

Got it. So now ftrace_location() will return the ftrace address
when the ip == func or ip == mcount-call.

Thank you,

--
Masami Hiramatsu <[email protected]>

2022-03-01 19:13:13

by Naveen N. Rao

Subject: Re: [PATCH v2 12/39] x86/ibt,ftrace: Search for __fentry__ location

Steven Rostedt wrote:
> On Fri, 25 Feb 2022 19:20:08 +0900
> Masami Hiramatsu <[email protected]> wrote:
>
>> > No. It only acts like ftrace_location_range(sym, sym_end) if the passed
>> > in argument is the ip of the function (kallsyms returns offset = 0)
>>
>> Got it. So now ftrace_location() will return the ftrace address
>> when the ip == func or ip == mcount-call.

Won't this cause issues with ftrace_set_filter_ip() and others? If the
passed-in ip points to func+0 when the actual ftrace location is at some
offset, the ftrace location check in ftrace_match_addr() will now pass,
resulting in adding func+0 to the hash. Should we also update
ftrace_match_addr() to use the ip returned by ftrace_location()?


- Naveen

2022-03-01 20:24:47

by Steven Rostedt

Subject: Re: [PATCH v2 12/39] x86/ibt,ftrace: Search for __fentry__ location

On Wed, 02 Mar 2022 00:27:51 +0530
"Naveen N. Rao" <[email protected]> wrote:

> Won't this cause issues with ftrace_set_filter_ip() and others? If the
> passed-in ip points to func+0 when the actual ftrace location is at some
> offset, the ftrace location check in ftrace_match_addr() will now pass,
> resulting in adding func+0 to the hash. Should we also update
> ftrace_match_addr() to use the ip returned by ftrace_location()?
>

Yes, ftrace_match_addr() would need to be updated, or at least
ftrace_set_filter_ip() which is the only user of ftrace_match_addr(), and is
currently only used by kprobes, live kernel patching and the direct
trampoline example code.

-- Steve
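
To make the concern concrete, a small userspace sketch (mock hash and resolver, made-up addresses, nothing from the real ftrace code) of what goes wrong if the raw func+0 address is added to the filter hash while the dyn_ftrace record sits at func+4:

#include <stdbool.h>
#include <stdio.h>

#define HASH_SIZE 8

/* Trivial stand-in for an ftrace filter hash keyed on ip */
static unsigned long hash_slots[HASH_SIZE];

static void hash_add(unsigned long ip)
{
	hash_slots[ip % HASH_SIZE] = ip;
}

static bool hash_contains(unsigned long ip)
{
	return hash_slots[ip % HASH_SIZE] == ip;
}

/* Mock resolver: the __fentry__ for the function at 0x1000 lives at 0x1004 */
static unsigned long mock_ftrace_location(unsigned long ip)
{
	if (ip == 0x1000 || ip == 0x1004)
		return 0x1004;
	return 0;
}

int main(void)
{
	unsigned long func = 0x1000;				/* what the caller passes in  */
	unsigned long rec_ip = mock_ftrace_location(func);	/* where the record really is */

	/* Unfixed: the ftrace_location() check passes, but raw func+0 is hashed */
	hash_add(func);
	printf("filter matches the record ip? %s\n",
	       hash_contains(rec_ip) ? "yes" : "no (func+0 was hashed)");

	/* Fixed: hash the resolved ip instead, as suggested above */
	hash_add(rec_ip);
	printf("after resolving first:       %s\n",
	       hash_contains(rec_ip) ? "yes" : "no");
	return 0;
}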

2022-03-02 17:28:46

by Naveen N. Rao

Subject: Re: [PATCH v2 12/39] x86/ibt,ftrace: Search for __fentry__ location

Peter Zijlstra wrote:
> Have ftrace_location() search the symbol for the __fentry__ location
> when it isn't at func+0 and use this for {,un}register_ftrace_direct().
>
> This avoids a whole bunch of assumptions about __fentry__ being at
> func+0.
>
> Suggested-by: Steven Rostedt <[email protected]>
> Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
> ---
> kernel/trace/ftrace.c | 30 ++++++++++++++++++++++++++++--
> 1 file changed, 28 insertions(+), 2 deletions(-)
>
> --- a/kernel/trace/ftrace.c
> +++ b/kernel/trace/ftrace.c
> @@ -1578,7 +1578,24 @@ unsigned long ftrace_location_range(unsi
> */
> unsigned long ftrace_location(unsigned long ip)
> {
> - return ftrace_location_range(ip, ip);
> + struct dyn_ftrace *rec;
> + unsigned long offset;
> + unsigned long size;
> +
> + rec = lookup_rec(ip, ip);
> + if (!rec) {
> + if (!kallsyms_lookup_size_offset(ip, &size, &offset))
> + goto out;
> +
> + if (!offset)
> + rec = lookup_rec(ip - offset, (ip - offset) + size);
> + }
> +
> + if (rec)
> + return rec->ip;
> +
> +out:
> + return 0;
> }
>
> /**
> @@ -5110,11 +5127,16 @@ int register_ftrace_direct(unsigned long
> struct ftrace_func_entry *entry;
> struct ftrace_hash *free_hash = NULL;
> struct dyn_ftrace *rec;
> - int ret = -EBUSY;
> + int ret = -ENODEV;
>
> mutex_lock(&direct_mutex);
>
> + ip = ftrace_location(ip);
> + if (!ip)
> + goto out_unlock;
> +
> /* See if there's a direct function at @ip already */
> + ret = -EBUSY;
> if (ftrace_find_rec_direct(ip))
> goto out_unlock;

I think some of the validation at this point can be removed (diff below).

>
> @@ -5222,6 +5244,10 @@ int unregister_ftrace_direct(unsigned lo
>
> mutex_lock(&direct_mutex);
>
> + ip = ftrace_location(ip);
> + if (!ip)
> + goto out_unlock;
> +
> entry = find_direct_entry(&ip, NULL);
> if (!entry)
> goto out_unlock;

We should also update modify_ftrace_direct(). An incremental diff below.


- Naveen


---
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 65d7553668ca3d..17ce4751a2051a 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -5126,7 +5126,6 @@ int register_ftrace_direct(unsigned long ip, unsigned long addr)
struct ftrace_direct_func *direct;
struct ftrace_func_entry *entry;
struct ftrace_hash *free_hash = NULL;
- struct dyn_ftrace *rec;
int ret = -ENODEV;

mutex_lock(&direct_mutex);
@@ -5140,26 +5139,6 @@ int register_ftrace_direct(unsigned long ip, unsigned long addr)
if (ftrace_find_rec_direct(ip))
goto out_unlock;

- ret = -ENODEV;
- rec = lookup_rec(ip, ip);
- if (!rec)
- goto out_unlock;
-
- /*
- * Check if the rec says it has a direct call but we didn't
- * find one earlier?
- */
- if (WARN_ON(rec->flags & FTRACE_FL_DIRECT))
- goto out_unlock;
-
- /* Make sure the ip points to the exact record */
- if (ip != rec->ip) {
- ip = rec->ip;
- /* Need to check this ip for a direct. */
- if (ftrace_find_rec_direct(ip))
- goto out_unlock;
- }
-
ret = -ENOMEM;
direct = ftrace_find_direct_func(addr);
if (!direct) {
@@ -5380,6 +5359,10 @@ int modify_ftrace_direct(unsigned long ip,
mutex_lock(&direct_mutex);

mutex_lock(&ftrace_lock);
+ ip = ftrace_location(ip);
+ if (!ip)
+ goto out_unlock;
+
entry = find_direct_entry(&ip, &rec);
if (!entry)
goto out_unlock;
--
2.35.1

2022-03-02 21:30:56

by Steven Rostedt

Subject: Re: [PATCH v2 12/39] x86/ibt,ftrace: Search for __fentry__ location

On Wed, 2 Mar 2022 11:01:38 -0500
Steven Rostedt <[email protected]> wrote:

> On Wed, 2 Mar 2022 14:20:23 +0100
> Peter Zijlstra <[email protected]> wrote:
>
> > Like so, or is something else needed?
> >
> > diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
> > index 68ecd3e35342..d1b30b5c5c23 100644
> > --- a/kernel/trace/ftrace.c
> > +++ b/kernel/trace/ftrace.c
> > @@ -4980,7 +4980,8 @@ ftrace_match_addr(struct ftrace_hash *hash, unsigned long ip, int remove)
> > {
> > struct ftrace_func_entry *entry;
> >
> > - if (!ftrace_location(ip))
> > + ip = ftrace_location(ip);
> > + if (!ip)
> > return -EINVAL;
>
> This could possibly work. I'd have to test all this though.
>
> I probably could just take this patch and try it out. You can remove the
> "x86/ibt" from the subject, as this patch may be a requirement for that
> (include that in the commit log), but it has no changes to x86/ibt
> specifically.
>

Note, I just pulled this patch, and I hit this warning:

WARNING: CPU: 0 PID: 6965 at arch/x86/kernel/kprobes/core.c:205 recover_probed_instruction+0x8f/0xa0

static unsigned long
__recover_probed_insn(kprobe_opcode_t *buf, unsigned long addr)
{
struct kprobe *kp;
unsigned long faddr;

kp = get_kprobe((void *)addr);
faddr = ftrace_location(addr);
/*
* Addresses inside the ftrace location are refused by
* arch_check_ftrace_location(). Something went terribly wrong
* if such an address is checked here.
*/
if (WARN_ON(faddr && faddr != addr)) <<---- HERE
return 0UL;
/*
* Use the current code if it is not modified by Kprobe
* and it cannot be modified by ftrace.
*/
if (!kp && !faddr)
return addr;

I guess this patch needs kprobe updates.

I'll pull down the latest tip and test that code.

-- Steve

2022-03-02 22:39:06

by Steven Rostedt

Subject: Re: [PATCH v2 12/39] x86/ibt,ftrace: Search for __fentry__ location

On Wed, 2 Mar 2022 14:20:23 +0100
Peter Zijlstra <[email protected]> wrote:

> Like so, or is something else needed?
>
> diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
> index 68ecd3e35342..d1b30b5c5c23 100644
> --- a/kernel/trace/ftrace.c
> +++ b/kernel/trace/ftrace.c
> @@ -4980,7 +4980,8 @@ ftrace_match_addr(struct ftrace_hash *hash, unsigned long ip, int remove)
> {
> struct ftrace_func_entry *entry;
>
> - if (!ftrace_location(ip))
> + ip = ftrace_location(ip);
> + if (!ip)
> return -EINVAL;

This could possibly work. I'd have to test all this though.

I probably could just take this patch and try it out. You can remove the
"x86/ibt" from the subject, as this patch may be a requirement for that
(include that in the commit log), but it has no changes to x86/ibt
specifically.

-- Steve


>
> if (remove) {

2022-03-02 22:58:08

by Steven Rostedt

Subject: Re: [PATCH v2 12/39] x86/ibt,ftrace: Search for __fentry__ location

On Wed, 2 Mar 2022 14:47:16 -0500
Steven Rostedt <[email protected]> wrote:

> Note, I just pulled this patch, and I hit this warning:
>
> WARNING: CPU: 0 PID: 6965 at arch/x86/kernel/kprobes/core.c:205 recover_probed_instruction+0x8f/0xa0

As we discussed on IRC (but I want an email record of this), it appears that
with some debugging, the ftrace_location() could return the function right
after the current function because lookup_rec() has an inclusive "end"
argument.

Testing:

rec = lookup_rec(ip - offset, (ip - offset) + size - 1);

Appears to fix the issue.

-- Steve
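
A tiny userspace sketch of the off-by-one (mock record table, made-up addresses): with an inclusive end argument, searching [sym, sym + size] can pick up the record of the function that starts right after sym, while [sym, sym + size - 1] cannot.

#include <stdio.h>

/*
 * Mock record table: only "bar", which starts at 0x1040, has a
 * __fentry__ record (at bar+0).  "foo" at 0x1000 (0x40 bytes) has
 * none, e.g. because it is notrace.  All addresses are made up.
 */
static const unsigned long recs[] = { 0x1040 };

/* Mock lookup_rec(): both @start and @end are inclusive, as in ftrace */
static unsigned long lookup_rec(unsigned long start, unsigned long end)
{
	unsigned int i;

	for (i = 0; i < sizeof(recs) / sizeof(recs[0]); i++)
		if (recs[i] >= start && recs[i] <= end)
			return recs[i];
	return 0;
}

int main(void)
{
	unsigned long foo = 0x1000, size = 0x40;

	/* end == foo + size spills onto bar+0: wrong function's record */
	printf("lookup_rec(foo, foo + size)     = %#lx\n",
	       lookup_rec(foo, foo + size));
	/* end == foo + size - 1 stays inside foo: correctly no match */
	printf("lookup_rec(foo, foo + size - 1) = %#lx\n",
	       lookup_rec(foo, foo + size - 1));
	return 0;
}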

2022-03-02 23:35:56

by Peter Zijlstra

Subject: Re: [PATCH v2 12/39] x86/ibt,ftrace: Search for __fentry__ location

On Tue, Mar 01, 2022 at 02:20:16PM -0500, Steven Rostedt wrote:
> On Wed, 02 Mar 2022 00:27:51 +0530
> "Naveen N. Rao" <[email protected]> wrote:
>
> > Won't this cause issues with ftrace_set_filter_ip() and others? If the
> > passed-in ip points to func+0 when the actual ftrace location is at some
> > offset, the ftrace location check in ftrace_match_addr() will now pass,
> > resulting in adding func+0 to the hash. Should we also update
> > ftrace_match_addr() to use the ip returned by ftrace_location()?
> >
>
> Yes, ftrace_match_addr() would need to be updated, or at least
> ftrace_set_filter_ip() which is the only user of ftrace_match_addr(), and is
> currently only used by kprobes, live kernel patching and the direct
> trampoline example code.

Like so, or is something else needed?

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 68ecd3e35342..d1b30b5c5c23 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -4980,7 +4980,8 @@ ftrace_match_addr(struct ftrace_hash *hash, unsigned long ip, int remove)
{
struct ftrace_func_entry *entry;

- if (!ftrace_location(ip))
+ ip = ftrace_location(ip);
+ if (!ip)
return -EINVAL;

if (remove) {

2022-03-03 00:21:59

by Peter Zijlstra

Subject: Re: [PATCH v2 12/39] x86/ibt,ftrace: Search for __fentry__ location

On Wed, Mar 02, 2022 at 02:47:16PM -0500, Steven Rostedt wrote:
> Note, I just pulled this patch, and I hit this warning:
>
> WARNING: CPU: 0 PID: 6965 at arch/x86/kernel/kprobes/core.c:205 recover_probed_instruction+0x8f/0xa0
>
> static unsigned long
> __recover_probed_insn(kprobe_opcode_t *buf, unsigned long addr)
> {
> struct kprobe *kp;
> unsigned long faddr;
>
> kp = get_kprobe((void *)addr);
> faddr = ftrace_location(addr);
> /*
> * Addresses inside the ftrace location are refused by
> * arch_check_ftrace_location(). Something went terribly wrong
> * if such an address is checked here.
> */
> if (WARN_ON(faddr && faddr != addr)) <<---- HERE
> return 0UL;

Ha! so a bunch of IRC later I figured out how it is possible you hit
this with just the patch on and how I legitimately hit this and what to
do about it.

Your problem seems to be that we got ftrace_location() wrong in that
lookup_rec()'s end argument is inclusive, hence we need:

lookup_rec(ip, ip + size - 1)

Now, the above thing asserts that:

ftrace_location(x) == {0, x}

and that is genuinely false in my case, I get x+4 as additional possible
output. So I think I need the below change to go on top of all I have:


diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index 7f0ce42f8ff9..4c13406e0bc4 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -198,13 +198,14 @@ __recover_probed_insn(kprobe_opcode_t *buf, unsigned long addr)

kp = get_kprobe((void *)addr);
faddr = ftrace_location(addr);
+
/*
- * Addresses inside the ftrace location are refused by
- * arch_check_ftrace_location(). Something went terribly wrong
- * if such an address is checked here.
+ * In case addr maps to sym+0 ftrace_location() might return something
+ * other than faddr. In that case consider it the same as !faddr.
*/
- if (WARN_ON(faddr && faddr != addr))
- return 0UL;
+ if (faddr && faddr != addr)
+ faddr = 0;
+
/*
* Use the current code if it is not modified by Kprobe
* and it cannot be modified by ftrace.

2022-03-03 11:33:17

by Naveen N. Rao

Subject: Re: [PATCH v2 12/39] x86/ibt,ftrace: Search for __fentry__ location

Peter Zijlstra wrote:
> On Wed, Mar 02, 2022 at 02:47:16PM -0500, Steven Rostedt wrote:
>> Note, I just pulled this patch, and I hit this warning:
>>
>> WARNING: CPU: 0 PID: 6965 at arch/x86/kernel/kprobes/core.c:205 recover_probed_instruction+0x8f/0xa0
>>
>> static unsigned long
>> __recover_probed_insn(kprobe_opcode_t *buf, unsigned long addr)
>> {
>> struct kprobe *kp;
>> unsigned long faddr;
>>
>> kp = get_kprobe((void *)addr);
>> faddr = ftrace_location(addr);
>> /*
>> * Addresses inside the ftrace location are refused by
>> * arch_check_ftrace_location(). Something went terribly wrong
>> * if such an address is checked here.
>> */
>> if (WARN_ON(faddr && faddr != addr)) <<---- HERE
>> return 0UL;
>
> Ha! so a bunch of IRC later I figured out how it is possible you hit
> this with just the patch on and how I legitimately hit this and what to
> do about it.
>
> Your problem seems to be that we got ftrace_location() wrong in that
> lookup_rec()'s end argument is inclusive, hence we need:
>
> lookup_rec(ip, ip + size - 1)
>
> Now, the above thing asserts that:
>
> ftrace_location(x) == {0, x}
>
> and that is genuinely false in my case, I get x+4 as additional possible
> output. So I think I need the below change to go on top of all I have:
>
>
> diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
> index 7f0ce42f8ff9..4c13406e0bc4 100644
> --- a/arch/x86/kernel/kprobes/core.c
> +++ b/arch/x86/kernel/kprobes/core.c
> @@ -198,13 +198,14 @@ __recover_probed_insn(kprobe_opcode_t *buf, unsigned long addr)
>
> kp = get_kprobe((void *)addr);
> faddr = ftrace_location(addr);
> +
> /*
> - * Addresses inside the ftrace location are refused by
> - * arch_check_ftrace_location(). Something went terribly wrong
> - * if such an address is checked here.
> + * In case addr maps to sym+0 ftrace_location() might return something
> + * other than faddr. In that case consider it the same as !faddr.
> */
> - if (WARN_ON(faddr && faddr != addr))
> - return 0UL;
> + if (faddr && faddr != addr)
> + faddr = 0;
> +
> /*
> * Use the current code if it is not modified by Kprobe
> * and it cannot be modified by ftrace.

I hit this issue yesterday in kprobe generic code in check_ftrace_location().
In both these scenarios, we just want to check if a particular instruction is
reserved by ftrace. ftrace_location_range() should still work for that
purpose, so that may be the easier fix:

diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 066fa644e9dfa3..ee3cd035403ca2 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -1596,7 +1596,7 @@ static int check_ftrace_location(struct kprobe *p)
{
unsigned long ftrace_addr;

- ftrace_addr = ftrace_location((unsigned long)p->addr);
+ ftrace_addr = ftrace_location_range((unsigned long)p->addr, (unsigned long)p->addr);
if (ftrace_addr) {
#ifdef CONFIG_KPROBES_ON_FTRACE
/* Given address is not on the instruction boundary */



- Naveen

2022-03-03 15:15:32

by Peter Zijlstra

Subject: Re: [PATCH v2 12/39] x86/ibt,ftrace: Search for __fentry__ location

On Thu, Mar 03, 2022 at 03:15:14PM +0530, Naveen N. Rao wrote:

> > diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
> > index 7f0ce42f8ff9..4c13406e0bc4 100644
> > --- a/arch/x86/kernel/kprobes/core.c
> > +++ b/arch/x86/kernel/kprobes/core.c
> > @@ -198,13 +198,14 @@ __recover_probed_insn(kprobe_opcode_t *buf, unsigned long addr)
> >
> > kp = get_kprobe((void *)addr);
> > faddr = ftrace_location(addr);
> > +
> > /*
> > - * Addresses inside the ftrace location are refused by
> > - * arch_check_ftrace_location(). Something went terribly wrong
> > - * if such an address is checked here.
> > + * In case addr maps to sym+0 ftrace_location() might return something
> > + * other than faddr. In that case consider it the same as !faddr.
> > */
> > - if (WARN_ON(faddr && faddr != addr))
> > - return 0UL;
> > + if (faddr && faddr != addr)
> > + faddr = 0;
> > +
> > /*
> > * Use the current code if it is not modified by Kprobe
> > * and it cannot be modified by ftrace.
>
> I hit this issue yesterday in kprobe generic code in
> check_ftrace_location().

What exactly were you running to trigger this? (so that I can extend my
test coverage etc..)

> In both these scenarios, we just want to check if a
> particular instruction is reserved by ftrace. ftrace_location_range()
> should still work for that purpose, so that may be the easier fix:
>
> diff --git a/kernel/kprobes.c b/kernel/kprobes.c
> index 066fa644e9dfa3..ee3cd035403ca2 100644
> --- a/kernel/kprobes.c
> +++ b/kernel/kprobes.c
> @@ -1596,7 +1596,7 @@ static int check_ftrace_location(struct kprobe *p)
> {
> unsigned long ftrace_addr;
>
> - ftrace_addr = ftrace_location((unsigned long)p->addr);
> + ftrace_addr = ftrace_location_range((unsigned long)p->addr, (unsigned long)p->addr);

Yes, although perhaps a new helper. I'll go ponder during lunch.

PS. I posted v3 but forgot to Cc you:

https://lkml.kernel.org/r/[email protected]

I think the above hunk ended up in the kprobe patch, but on second
thought I should've put it in the ftrace one. I'll go amend and add
this other site you found.

2022-03-03 15:32:16

by Naveen N. Rao

Subject: Re: [PATCH v2 12/39] x86/ibt,ftrace: Search for __fentry__ location

Peter Zijlstra wrote:
> On Thu, Mar 03, 2022 at 03:15:14PM +0530, Naveen N. Rao wrote:
>
>> > diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
>> > index 7f0ce42f8ff9..4c13406e0bc4 100644
>> > --- a/arch/x86/kernel/kprobes/core.c
>> > +++ b/arch/x86/kernel/kprobes/core.c
>> > @@ -198,13 +198,14 @@ __recover_probed_insn(kprobe_opcode_t *buf, unsigned long addr)
>> >
>> > kp = get_kprobe((void *)addr);
>> > faddr = ftrace_location(addr);
>> > +
>> > /*
>> > - * Addresses inside the ftrace location are refused by
>> > - * arch_check_ftrace_location(). Something went terribly wrong
>> > - * if such an address is checked here.
>> > + * In case addr maps to sym+0 ftrace_location() might return something
>> > + * other than faddr. In that case consider it the same as !faddr.
>> > */
>> > - if (WARN_ON(faddr && faddr != addr))
>> > - return 0UL;
>> > + if (faddr && faddr != addr)
>> > + faddr = 0;
>> > +
>> > /*
>> > * Use the current code if it is not modified by Kprobe
>> > * and it cannot be modified by ftrace.
>>
>> I hit this issue yesterday in kprobe generic code in
>> check_ftrace_location().
>
> What exactly were you running to trigger this? (so that I can extend my
> test coverage etc..)

With the changes here, we always promote probes at +0 offset to the LEP
(local entry point) at +8. So, we get the old behavior of ftrace_location().
But, we also have functions that do not have a GEP (global entry point). For
those, we allow probing at an offset of +0, resulting in ftrace_location()
giving us the actual ftrace site, which on ppc64le will be at +4.

On ppc32, this will affect all probes since the ftrace site is usually
at an offset of +8.

I don't think x86 has this issue since you will always have ftrace site
at +0 if a function does not have the endbr instruction. And if you do
have the endbr instruction, then you adjust the probe address so it
won't be at an offset of 0 into the function.

>
>> In both these scenarios, we just want to check if a
>> particular instruction is reserved by ftrace. ftrace_location_range()
>> should still work for that purpose, so that may be the easier fix:
>>
>> diff --git a/kernel/kprobes.c b/kernel/kprobes.c
>> index 066fa644e9dfa3..ee3cd035403ca2 100644
>> --- a/kernel/kprobes.c
>> +++ b/kernel/kprobes.c
>> @@ -1596,7 +1596,7 @@ static int check_ftrace_location(struct kprobe *p)
>> {
>> unsigned long ftrace_addr;
>>
>> - ftrace_addr = ftrace_location((unsigned long)p->addr);
>> + ftrace_addr = ftrace_location_range((unsigned long)p->addr, (unsigned long)p->addr);
>
> Yes, although perhaps a new helper. I'll go ponder during lunch.

Sure. We do have ftrace_text_reserved(), but that only returns a
boolean. Masami wanted to ensure that the probe is at an instruction
boundary, hence this check.

>
> PS. I posted v3 but forgot to Cc you:
>
> https://lkml.kernel.org/r/[email protected]
>
> I think the above hunk ended up in the kprobe patch, but on second
> thought I should've put it in the ftrace one. I'll go amend and add
> this other site you found.

Thanks, I'll take a look.

- Naveen

2022-03-03 16:06:32

by Steven Rostedt

Subject: Re: [PATCH v2 12/39] x86/ibt,ftrace: Search for __fentry__ location

On Thu, 3 Mar 2022 14:04:52 +0100
Peter Zijlstra <[email protected]> wrote:

> > @@ -1596,7 +1596,7 @@ static int check_ftrace_location(struct kprobe *p)
> > {
> > unsigned long ftrace_addr;
> >
> > - ftrace_addr = ftrace_location((unsigned long)p->addr);
> > + ftrace_addr = ftrace_location_range((unsigned long)p->addr, (unsigned long)p->addr);
>
> Yes, although perhaps a new helper. I'll go ponder during lunch.

Is there more places to add that to make it worth creating a helper?

If not, I would just keep using the ftrace_location_range().

If there is to be a helper function, then we should not have touched
ftrace_location() in the first place, and instead created a new function
that does the offset check.

Because thinking about this more, ftrace_location() is supposed to act just
like ftrace_location_range() and now it does not.

I'd rather keep ftrace_location() the same as ftrace_location_range() if
there's going to be another API. Maybe create a function ftrace_addr() that
does the new ftrace_location() that you have, and leave ftrace_location()
as is?

This is actually what I suggested in the beginning.

-- Steve

2022-03-03 17:26:31

by Peter Zijlstra

Subject: Re: [PATCH v2 12/39] x86/ibt,ftrace: Search for __fentry__ location

On Thu, Mar 03, 2022 at 09:34:13AM -0500, Steven Rostedt wrote:
> On Thu, 3 Mar 2022 14:04:52 +0100
> Peter Zijlstra <[email protected]> wrote:
>
> > > @@ -1596,7 +1596,7 @@ static int check_ftrace_location(struct kprobe *p)
> > > {
> > > unsigned long ftrace_addr;
> > >
> > > - ftrace_addr = ftrace_location((unsigned long)p->addr);
> > > + ftrace_addr = ftrace_location_range((unsigned long)p->addr, (unsigned long)p->addr);
> >
> > Yes, although perhaps a new helper. I'll go ponder during lunch.
>
> Is there more places to add that to make it worth creating a helper?

This is what I ended up with, I've looked at all ftrace_location() sites
there are, seems to work too, both the built-in boot time ftrace tests
and the selftests work splat-less.

I should update the Changelog some though.

Naveen also mentioned register_ftrace_direct() could be further cleaned
up, but I didn't want to do too much in one go.

---

Subject: x86/ibt,ftrace: Search for __fentry__ location
From: Peter Zijlstra <[email protected]>
Date: Wed Feb 23 10:01:38 CET 2022

Have ftrace_location() search the symbol for the __fentry__ location
when it isn't at func+0 and use this for {,un}register_ftrace_direct().

This avoids a whole bunch of assumptions about __fentry__ being at
func+0.

Suggested-by: Steven Rostedt <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
---
arch/x86/kernel/kprobes/core.c | 11 +---------
kernel/bpf/trampoline.c | 20 +++----------------
kernel/kprobes.c | 8 +------
kernel/trace/ftrace.c | 43 +++++++++++++++++++++++++++++++++--------
4 files changed, 43 insertions(+), 39 deletions(-)

--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -193,17 +193,10 @@ static unsigned long
__recover_probed_insn(kprobe_opcode_t *buf, unsigned long addr)
{
struct kprobe *kp;
- unsigned long faddr;
+ bool faddr;

kp = get_kprobe((void *)addr);
- faddr = ftrace_location(addr);
- /*
- * Addresses inside the ftrace location are refused by
- * arch_check_ftrace_location(). Something went terribly wrong
- * if such an address is checked here.
- */
- if (WARN_ON(faddr && faddr != addr))
- return 0UL;
+ faddr = ftrace_location(addr) == addr;
/*
* Use the current code if it is not modified by Kprobe
* and it cannot be modified by ftrace.
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -117,18 +117,6 @@ static void bpf_trampoline_module_put(st
tr->mod = NULL;
}

-static int is_ftrace_location(void *ip)
-{
- long addr;
-
- addr = ftrace_location((long)ip);
- if (!addr)
- return 0;
- if (WARN_ON_ONCE(addr != (long)ip))
- return -EFAULT;
- return 1;
-}
-
static int unregister_fentry(struct bpf_trampoline *tr, void *old_addr)
{
void *ip = tr->func.addr;
@@ -160,12 +148,12 @@ static int modify_fentry(struct bpf_tram
static int register_fentry(struct bpf_trampoline *tr, void *new_addr)
{
void *ip = tr->func.addr;
+ unsigned long faddr;
int ret;

- ret = is_ftrace_location(ip);
- if (ret < 0)
- return ret;
- tr->func.ftrace_managed = ret;
+ faddr = ftrace_location((unsigned long)ip);
+ if (faddr)
+ tr->func.ftrace_managed = true;

if (bpf_trampoline_module_get(tr))
return -ENOENT;
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -1562,14 +1562,10 @@ static inline int warn_kprobe_rereg(stru

static int check_ftrace_location(struct kprobe *p)
{
- unsigned long ftrace_addr;
+ unsigned long addr = (unsigned long)p->addr;

- ftrace_addr = ftrace_location((unsigned long)p->addr);
- if (ftrace_addr) {
+ if (ftrace_location(addr) == addr) {
#ifdef CONFIG_KPROBES_ON_FTRACE
- /* Given address is not on the instruction boundary */
- if ((unsigned long)p->addr != ftrace_addr)
- return -EILSEQ;
p->flags |= KPROBE_FLAG_FTRACE;
#else /* !CONFIG_KPROBES_ON_FTRACE */
return -EINVAL;
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -1568,17 +1568,34 @@ unsigned long ftrace_location_range(unsi
}

/**
- * ftrace_location - return true if the ip giving is a traced location
+ * ftrace_location - return the ftrace location
* @ip: the instruction pointer to check
*
- * Returns rec->ip if @ip given is a pointer to a ftrace location.
- * That is, the instruction that is either a NOP or call to
- * the function tracer. It checks the ftrace internal tables to
- * determine if the address belongs or not.
+ * If @ip matches the ftrace location, return @ip.
+ * If @ip matches sym+0, return sym's ftrace location.
+ * Otherwise, return 0.
*/
unsigned long ftrace_location(unsigned long ip)
{
- return ftrace_location_range(ip, ip);
+ struct dyn_ftrace *rec;
+ unsigned long offset;
+ unsigned long size;
+
+ rec = lookup_rec(ip, ip);
+ if (!rec) {
+ if (!kallsyms_lookup_size_offset(ip, &size, &offset))
+ goto out;
+
+ /* map sym+0 to __fentry__ */
+ if (!offset)
+ rec = lookup_rec(ip, ip + size - 1);
+ }
+
+ if (rec)
+ return rec->ip;
+
+out:
+ return 0;
}

/**
@@ -4962,7 +4979,8 @@ ftrace_match_addr(struct ftrace_hash *ha
{
struct ftrace_func_entry *entry;

- if (!ftrace_location(ip))
+ ip = ftrace_location(ip);
+ if (!ip)
return -EINVAL;

if (remove) {
@@ -5110,11 +5128,16 @@ int register_ftrace_direct(unsigned long
struct ftrace_func_entry *entry;
struct ftrace_hash *free_hash = NULL;
struct dyn_ftrace *rec;
- int ret = -EBUSY;
+ int ret = -ENODEV;

mutex_lock(&direct_mutex);

+ ip = ftrace_location(ip);
+ if (!ip)
+ goto out_unlock;
+
/* See if there's a direct function at @ip already */
+ ret = -EBUSY;
if (ftrace_find_rec_direct(ip))
goto out_unlock;

@@ -5222,6 +5245,10 @@ int unregister_ftrace_direct(unsigned lo

mutex_lock(&direct_mutex);

+ ip = ftrace_location(ip);
+ if (!ip)
+ goto out_unlock;
+
entry = find_direct_entry(&ip, NULL);
if (!entry)
goto out_unlock;
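
As a compile-and-run illustration of the ftrace_location() semantics above, a userspace mock-up with stand-in lookup_rec() and kallsyms_lookup_size_offset() helpers and hypothetical addresses: sym+0 maps to the symbol's __fentry__ record, the record address maps to itself, and any other offset into the function returns 0.

#include <stdio.h>

struct dyn_ftrace { unsigned long ip; };

/*
 * One mock function "foo": starts at 0x1000, is 0x40 bytes, and has an
 * ENDBR at +0, so its __fentry__ call (and dyn_ftrace record) is at foo+4.
 */
static struct dyn_ftrace foo_rec = { 0x1004 };

/* Mock lookup_rec(): inclusive on both ends, like the real one */
static struct dyn_ftrace *lookup_rec(unsigned long start, unsigned long end)
{
	if (foo_rec.ip >= start && foo_rec.ip <= end)
		return &foo_rec;
	return NULL;
}

/* Mock kallsyms lookup, covering foo only */
static int kallsyms_lookup_size_offset(unsigned long ip, unsigned long *size,
				       unsigned long *offset)
{
	if (ip < 0x1000 || ip >= 0x1040)
		return 0;
	*size = 0x40;
	*offset = ip - 0x1000;
	return 1;
}

/* Same shape as the kernel version in the patch above */
static unsigned long ftrace_location(unsigned long ip)
{
	struct dyn_ftrace *rec;
	unsigned long offset;
	unsigned long size;

	rec = lookup_rec(ip, ip);
	if (!rec) {
		if (!kallsyms_lookup_size_offset(ip, &size, &offset))
			return 0;

		/* map sym+0 to __fentry__ */
		if (!offset)
			rec = lookup_rec(ip, ip + size - 1);
	}

	return rec ? rec->ip : 0;
}

int main(void)
{
	printf("foo+0    -> %#lx\n", ftrace_location(0x1000));	/* 0x1004 */
	printf("foo+4    -> %#lx\n", ftrace_location(0x1004));	/* 0x1004 */
	printf("foo+0x10 -> %#lx\n", ftrace_location(0x1010));	/* 0      */
	return 0;
}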

2022-03-06 06:47:18

by Masami Hiramatsu

Subject: Re: [PATCH v2 12/39] x86/ibt,ftrace: Search for __fentry__ location

On Thu, 3 Mar 2022 16:59:03 +0100
Peter Zijlstra <[email protected]> wrote:

> On Thu, Mar 03, 2022 at 09:34:13AM -0500, Steven Rostedt wrote:
> > On Thu, 3 Mar 2022 14:04:52 +0100
> > Peter Zijlstra <[email protected]> wrote:
> >
> > > > @@ -1596,7 +1596,7 @@ static int check_ftrace_location(struct kprobe *p)
> > > > {
> > > > unsigned long ftrace_addr;
> > > >
> > > > - ftrace_addr = ftrace_location((unsigned long)p->addr);
> > > > + ftrace_addr = ftrace_location_range((unsigned long)p->addr, (unsigned long)p->addr);
> > >
> > > Yes, although perhaps a new helper. I'll go ponder during lunch.
> >
> > Is there more places to add that to make it worth creating a helper?
>
> This is what I ended up with, I've looked at all ftrace_location() sites
> there are, seems to work too, both the built-in boot time ftrace tests
> and the selftests work splat-less.
>
> I should update the Changelog some though.
>
> Naveen also mentioned register_ftrace_direct() could be further cleaned
> up, but I didn't want to do too much in one go.
>
> ---
>
> Subject: x86/ibt,ftrace: Search for __fentry__ location
> From: Peter Zijlstra <[email protected]>
> Date: Wed Feb 23 10:01:38 CET 2022
>
> Have ftrace_location() search the symbol for the __fentry__ location
> when it isn't at func+0 and use this for {,un}register_ftrace_direct().
>
> This avoids a whole bunch of assumptions about __fentry__ being at
> func+0.
>
> Suggested-by: Steven Rostedt <[email protected]>
> Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
> ---
> arch/x86/kernel/kprobes/core.c | 11 +---------
> kernel/bpf/trampoline.c | 20 +++----------------
> kernel/kprobes.c | 8 +------
> kernel/trace/ftrace.c | 43 +++++++++++++++++++++++++++++++++--------
> 4 files changed, 43 insertions(+), 39 deletions(-)
>
> --- a/arch/x86/kernel/kprobes/core.c
> +++ b/arch/x86/kernel/kprobes/core.c
> @@ -193,17 +193,10 @@ static unsigned long
> __recover_probed_insn(kprobe_opcode_t *buf, unsigned long addr)
> {
> struct kprobe *kp;
> - unsigned long faddr;
> + bool faddr;
>
> kp = get_kprobe((void *)addr);
> - faddr = ftrace_location(addr);
> - /*
> - * Addresses inside the ftrace location are refused by
> - * arch_check_ftrace_location(). Something went terribly wrong
> - * if such an address is checked here.
> - */
> - if (WARN_ON(faddr && faddr != addr))
> - return 0UL;
> + faddr = ftrace_location(addr) == addr;

OK, this looks good to me.

> /*
> * Use the current code if it is not modified by Kprobe
> * and it cannot be modified by ftrace.
> --- a/kernel/bpf/trampoline.c
> +++ b/kernel/bpf/trampoline.c
> @@ -117,18 +117,6 @@ static void bpf_trampoline_module_put(st
> tr->mod = NULL;
> }
>
> -static int is_ftrace_location(void *ip)
> -{
> - long addr;
> -
> - addr = ftrace_location((long)ip);
> - if (!addr)
> - return 0;
> - if (WARN_ON_ONCE(addr != (long)ip))
> - return -EFAULT;
> - return 1;
> -}
> -
> static int unregister_fentry(struct bpf_trampoline *tr, void *old_addr)
> {
> void *ip = tr->func.addr;
> @@ -160,12 +148,12 @@ static int modify_fentry(struct bpf_tram
> static int register_fentry(struct bpf_trampoline *tr, void *new_addr)
> {
> void *ip = tr->func.addr;
> + unsigned long faddr;
> int ret;
>
> - ret = is_ftrace_location(ip);
> - if (ret < 0)
> - return ret;
> - tr->func.ftrace_managed = ret;
> + faddr = ftrace_location((unsigned long)ip);
> + if (faddr)
> + tr->func.ftrace_managed = true;
>
> if (bpf_trampoline_module_get(tr))
> return -ENOENT;
> --- a/kernel/kprobes.c
> +++ b/kernel/kprobes.c
> @@ -1562,14 +1562,10 @@ static inline int warn_kprobe_rereg(stru
>
> static int check_ftrace_location(struct kprobe *p)
> {
> - unsigned long ftrace_addr;
> + unsigned long addr = (unsigned long)p->addr;
>
> - ftrace_addr = ftrace_location((unsigned long)p->addr);
> - if (ftrace_addr) {
> + if (ftrace_location(addr) == addr) {
> #ifdef CONFIG_KPROBES_ON_FTRACE
> - /* Given address is not on the instruction boundary */
> - if ((unsigned long)p->addr != ftrace_addr)
> - return -EILSEQ;

OK, so this means we only use ftrace if the kprobe is placed at
sym+ftrace-offset. Thus if there is an ENDBR at the first instruction,
kprobes will use int3, right?
I agree with this, but later I will have to add another patch to use ftrace
for kprobes on symbol+0. But anyway, that is another issue.

So this looks good to me.

Acked-by: Masami Hiramatsu <[email protected]>

Thank you!

> p->flags |= KPROBE_FLAG_FTRACE;
> #else /* !CONFIG_KPROBES_ON_FTRACE */
> return -EINVAL;
> --- a/kernel/trace/ftrace.c
> +++ b/kernel/trace/ftrace.c
> @@ -1568,17 +1568,34 @@ unsigned long ftrace_location_range(unsi
> }
>
> /**
> - * ftrace_location - return true if the ip giving is a traced location
> + * ftrace_location - return the ftrace location
> * @ip: the instruction pointer to check
> *
> - * Returns rec->ip if @ip given is a pointer to a ftrace location.
> - * That is, the instruction that is either a NOP or call to
> - * the function tracer. It checks the ftrace internal tables to
> - * determine if the address belongs or not.
> + * If @ip matches the ftrace location, return @ip.
> + * If @ip matches sym+0, return sym's ftrace location.
> + * Otherwise, return 0.
> */
> unsigned long ftrace_location(unsigned long ip)
> {
> - return ftrace_location_range(ip, ip);
> + struct dyn_ftrace *rec;
> + unsigned long offset;
> + unsigned long size;
> +
> + rec = lookup_rec(ip, ip);
> + if (!rec) {
> + if (!kallsyms_lookup_size_offset(ip, &size, &offset))
> + goto out;
> +
> + /* map sym+0 to __fentry__ */
> + if (!offset)
> + rec = lookup_rec(ip, ip + size - 1);
> + }
> +
> + if (rec)
> + return rec->ip;
> +
> +out:
> + return 0;
> }
>
> /**
> @@ -4962,7 +4979,8 @@ ftrace_match_addr(struct ftrace_hash *ha
> {
> struct ftrace_func_entry *entry;
>
> - if (!ftrace_location(ip))
> + ip = ftrace_location(ip);
> + if (!ip)
> return -EINVAL;
>
> if (remove) {
> @@ -5110,11 +5128,16 @@ int register_ftrace_direct(unsigned long
> struct ftrace_func_entry *entry;
> struct ftrace_hash *free_hash = NULL;
> struct dyn_ftrace *rec;
> - int ret = -EBUSY;
> + int ret = -ENODEV;
>
> mutex_lock(&direct_mutex);
>
> + ip = ftrace_location(ip);
> + if (!ip)
> + goto out_unlock;
> +
> /* See if there's a direct function at @ip already */
> + ret = -EBUSY;
> if (ftrace_find_rec_direct(ip))
> goto out_unlock;
>
> @@ -5222,6 +5245,10 @@ int unregister_ftrace_direct(unsigned lo
>
> mutex_lock(&direct_mutex);
>
> + ip = ftrace_location(ip);
> + if (!ip)
> + goto out_unlock;
> +
> entry = find_direct_entry(&ip, NULL);
> if (!entry)
> goto out_unlock;


--
Masami Hiramatsu <[email protected]>
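
To illustrate Masami's point about sym+0 versus the __fentry__ address, a small userspace sketch of the decision check_ftrace_location() now makes (mock resolver, hypothetical addresses): with ENDBR at +0 a kprobe at sym+0 falls back to int3, while a probe at the __fentry__ address at sym+4 is ftrace-managed.

#include <stdio.h>

/* Mock: "foo" at 0x1000 has ENDBR at +0 and its __fentry__ at +4 */
static unsigned long mock_ftrace_location(unsigned long ip)
{
	if (ip == 0x1000 || ip == 0x1004)
		return 0x1004;
	return 0;
}

/* Mirrors the new check: only an exact __fentry__ address is ftrace-managed */
static const char *probe_method(unsigned long addr)
{
	return mock_ftrace_location(addr) == addr ? "ftrace" : "int3";
}

int main(void)
{
	printf("kprobe at foo+0 -> %s\n", probe_method(0x1000));	/* int3   */
	printf("kprobe at foo+4 -> %s\n", probe_method(0x1004));	/* ftrace */
	return 0;
}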

2022-03-09 12:36:18

by Naveen N. Rao

Subject: Re: [PATCH v2 12/39] x86/ibt,ftrace: Search for __fentry__ location

Peter Zijlstra wrote:
> On Thu, Mar 03, 2022 at 09:34:13AM -0500, Steven Rostedt wrote:
>> On Thu, 3 Mar 2022 14:04:52 +0100
>> Peter Zijlstra <[email protected]> wrote:
>>
>> > > @@ -1596,7 +1596,7 @@ static int check_ftrace_location(struct kprobe *p)
>> > > {
>> > > unsigned long ftrace_addr;
>> > >
>> > > - ftrace_addr = ftrace_location((unsigned long)p->addr);
>> > > + ftrace_addr = ftrace_location_range((unsigned long)p->addr, (unsigned long)p->addr);
>> >
>> > Yes, although perhaps a new helper. I'll go ponder during lunch.
>>
>> Is there more places to add that to make it worth creating a helper?
>
> This is what I ended up with, I've looked at all ftrace_location() sites
> there are, seems to work too, both the built-in boot time ftrace tests
> and the selftests work splat-less.
>
> I should update the Changelog some though.
>
> Naveen also mentioned register_ftrace_direct() could be further cleaned
> up, but I didn't want to do too much in one go.

Not a problem, I can send those as cleanups atop this series.

>
> ---
>
> Subject: x86/ibt,ftrace: Search for __fentry__ location
> From: Peter Zijlstra <[email protected]>
> Date: Wed Feb 23 10:01:38 CET 2022
>
> Have ftrace_location() search the symbol for the __fentry__ location
> when it isn't at func+0 and use this for {,un}register_ftrace_direct().
>
> This avoids a whole bunch of assumptions about __fentry__ being at
> func+0.
>
> Suggested-by: Steven Rostedt <[email protected]>
> Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
> ---

This version looks good to me.
Acked-by: Naveen N. Rao <[email protected]>


Thanks,
Naveen