2022-02-19 18:09:35

by Josh Poimboeuf

Subject: Re: [PATCH 04/29] x86/livepatch: Validate __fentry__ location

On Fri, Feb 18, 2022 at 05:49:06PM +0100, Peter Zijlstra wrote:
> Currently livepatch assumes __fentry__ lives at func+0, which is most
> likely untrue with IBT on. Override the weak klp_get_ftrace_location()
> function with an arch specific version that's IBT aware.
>
> Also make the weak fallback verify the location is an actual ftrace
> location as a sanity check.
>
> Suggested-by: Miroslav Benes <[email protected]>
> Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
> ---
> arch/x86/include/asm/livepatch.h | 9 +++++++++
> kernel/livepatch/patch.c | 2 +-
> 2 files changed, 10 insertions(+), 1 deletion(-)
>
> --- a/arch/x86/include/asm/livepatch.h
> +++ b/arch/x86/include/asm/livepatch.h
> @@ -17,4 +17,13 @@ static inline void klp_arch_set_pc(struc
> ftrace_instruction_pointer_set(fregs, ip);
> }
>
> +#define klp_get_ftrace_location klp_get_ftrace_location
> +static inline unsigned long klp_get_ftrace_location(unsigned long faddr)
> +{
> + unsigned long addr = ftrace_location(faddr);
> + if (!addr && IS_ENABLED(CONFIG_X86_IBT))
> + addr = ftrace_location(faddr + 4);
> + return addr;

I'm kind of surprised this logic doesn't exist in ftrace itself. Is
livepatch really the only user that needs to find the fentry for a given
function?

I had to do a double take for the ftrace_location() semantics, as I
originally assumed that's what it did, based on its name and signature.

Instead it apparently functions like a bool but returns its argument on
success.

Though the function comment tells a different story:

/**
* ftrace_location - return true if the ip giving is a traced location

So it's all kinds of confusing...
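
Put differently, today's behaviour boils down to something like this
(illustrative sketch only, not the actual source):

	/* behaves like a boolean test, but returns its argument on success */
	unsigned long ftrace_location(unsigned long ip)
	{
		/* equivalent to ftrace_location_range(ip, ip) */
		return lookup_rec(ip, ip) ? ip : 0;
	}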

--
Josh


2022-02-23 10:18:17

by Peter Zijlstra

Subject: Re: [PATCH 04/29] x86/livepatch: Validate __fentry__ location

On Fri, Feb 18, 2022 at 01:08:31PM -0800, Josh Poimboeuf wrote:
> On Fri, Feb 18, 2022 at 05:49:06PM +0100, Peter Zijlstra wrote:
> > Currently livepatch assumes __fentry__ lives at func+0, which is most
> > likely untrue with IBT on. Override the weak klp_get_ftrace_location()
> > function with an arch specific version that's IBT aware.
> >
> > Also make the weak fallback verify the location is an actual ftrace
> > location as a sanity check.
> >
> > Suggested-by: Miroslav Benes <[email protected]>
> > Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
> > ---
> > arch/x86/include/asm/livepatch.h | 9 +++++++++
> > kernel/livepatch/patch.c | 2 +-
> > 2 files changed, 10 insertions(+), 1 deletion(-)
> >
> > --- a/arch/x86/include/asm/livepatch.h
> > +++ b/arch/x86/include/asm/livepatch.h
> > @@ -17,4 +17,13 @@ static inline void klp_arch_set_pc(struc
> > ftrace_instruction_pointer_set(fregs, ip);
> > }
> >
> > +#define klp_get_ftrace_location klp_get_ftrace_location
> > +static inline unsigned long klp_get_ftrace_location(unsigned long faddr)
> > +{
> > + unsigned long addr = ftrace_location(faddr);
> > + if (!addr && IS_ENABLED(CONFIG_X86_IBT))
> > + addr = ftrace_location(faddr + 4);
> > + return addr;
>
> I'm kind of surprised this logic doesn't exist in ftrace itself. Is
> livepatch really the only user that needs to find the fentry for a given
> function?
>
> I had to do a double take for the ftrace_location() semantics, as I
> originally assumed that's what it did, based on its name and signature.
>
> Instead it apparently functions like a bool but returns its argument on
> success.
>
> Though the function comment tells a different story:
>
> /**
> * ftrace_location - return true if the ip giving is a traced location
>
> So it's all kinds of confusing...

Yes.. so yesterday, when making function-graph tracing not explode, I
ran into a similar issue. Steve suggested something along the lines of
.... this.

(modified from his actual suggestion to also cover this case)

Let me go try this...

--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -1578,7 +1578,23 @@ unsigned long ftrace_location_range(unsi
  */
 unsigned long ftrace_location(unsigned long ip)
 {
-	return ftrace_location_range(ip, ip);
+	struct dyn_ftrace *rec;
+	unsigned long offset;
+	unsigned long size;
+
+	rec = lookup_rec(ip, ip);
+	if (!rec) {
+		if (!kallsyms_lookup(ip, &size, &offset, NULL, NULL))
+			goto out;
+
+		rec = lookup_rec(ip - offset, (ip - offset) + size);
+	}
+
+	if (rec)
+		return rec->ip;
+
+out:
+	return 0;
 }
 
 /**
@@ -5110,11 +5126,16 @@ int register_ftrace_direct(unsigned long
 	struct ftrace_func_entry *entry;
 	struct ftrace_hash *free_hash = NULL;
 	struct dyn_ftrace *rec;
-	int ret = -EBUSY;
+	int ret = -ENODEV;
 
 	mutex_lock(&direct_mutex);
 
+	ip = ftrace_location(ip);
+	if (!ip)
+		goto out_unlock;
+
 	/* See if there's a direct function at @ip already */
+	ret = -EBUSY;
 	if (ftrace_find_rec_direct(ip))
 		goto out_unlock;
 
@@ -5222,6 +5243,10 @@ int unregister_ftrace_direct(unsigned lo
 
 	mutex_lock(&direct_mutex);
 
+	ip = ftrace_location(ip);
+	if (!ip)
+		goto out_unlock;
+
 	entry = find_direct_entry(&ip, NULL);
 	if (!entry)
 		goto out_unlock;

2022-02-23 12:05:24

by Peter Zijlstra

Subject: Re: [PATCH 04/29] x86/livepatch: Validate __fentry__ location

On Wed, Feb 23, 2022 at 11:09:44AM +0100, Peter Zijlstra wrote:
> Yes.. so yesterday, when making function-graph tracing not explode, I
> ran into a similar issue. Steve suggested something along the lines of
> .... this.
>
> (modified from his actual suggestion to also cover this case)
>
> Let me go try this...

This one actually works...

---
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -1578,7 +1578,23 @@ unsigned long ftrace_location_range(unsi
  */
 unsigned long ftrace_location(unsigned long ip)
 {
-	return ftrace_location_range(ip, ip);
+	struct dyn_ftrace *rec;
+	unsigned long offset;
+	unsigned long size;
+
+	rec = lookup_rec(ip, ip);
+	if (!rec) {
+		if (!kallsyms_lookup_size_offset(ip, &size, &offset))
+			goto out;
+
+		rec = lookup_rec(ip - offset, (ip - offset) + size);
+	}
+
+	if (rec)
+		return rec->ip;
+
+out:
+	return 0;
 }
 
 /**
@@ -5110,11 +5126,16 @@ int register_ftrace_direct(unsigned long
 	struct ftrace_func_entry *entry;
 	struct ftrace_hash *free_hash = NULL;
 	struct dyn_ftrace *rec;
-	int ret = -EBUSY;
+	int ret = -ENODEV;
 
 	mutex_lock(&direct_mutex);
 
+	ip = ftrace_location(ip);
+	if (!ip)
+		goto out_unlock;
+
 	/* See if there's a direct function at @ip already */
+	ret = -EBUSY;
 	if (ftrace_find_rec_direct(ip))
 		goto out_unlock;
 
@@ -5222,6 +5243,10 @@ int unregister_ftrace_direct(unsigned lo
 
 	mutex_lock(&direct_mutex);
 
+	ip = ftrace_location(ip);
+	if (!ip)
+		goto out_unlock;
+
 	entry = find_direct_entry(&ip, NULL);
 	if (!entry)
 		goto out_unlock;
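
With ftrace_location() resolving any address inside the function, the
generic livepatch fallback should be enough on its own and x86 no longer
needs an IBT-aware override; roughly what the weak function in
kernel/livepatch/patch.c amounts to (sketch, not part of the diff above):

	#ifndef klp_get_ftrace_location
	static unsigned long klp_get_ftrace_location(unsigned long faddr)
	{
		/* faddr == func+0 now resolves even with ENDBR before __fentry__ */
		return ftrace_location(faddr);
	}
	#endif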

2022-02-23 13:16:55

by Steven Rostedt

Subject: Re: [PATCH 04/29] x86/livepatch: Validate __fentry__ location

On Wed, 23 Feb 2022 11:57:26 +0100
Peter Zijlstra <[email protected]> wrote:

> --- a/kernel/trace/ftrace.c
> +++ b/kernel/trace/ftrace.c
> @@ -1578,7 +1578,23 @@ unsigned long ftrace_location_range(unsi
> */
> unsigned long ftrace_location(unsigned long ip)
> {
> - return ftrace_location_range(ip, ip);
> + struct dyn_ftrace *rec;
> + unsigned long offset;
> + unsigned long size;
> +
> + rec = lookup_rec(ip, ip);
> + if (!rec) {
> + if (!kallsyms_lookup_size_offset(ip, &size, &offset))
> + goto out;
> +
> + rec = lookup_rec(ip - offset, (ip - offset) + size);
> + }
> +

Please create a new function for this. Perhaps find_ftrace_location().

ftrace_location() is used to see if the address given is a ftrace
nop or not. This change will make it always return true.
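
Something along these lines, keeping ftrace_location() as the exact check
(name and shape are only a suggestion, untested):

	/* widen the lookup to the enclosing function */
	unsigned long find_ftrace_location(unsigned long ip)
	{
		unsigned long offset, size;
		struct dyn_ftrace *rec;

		rec = lookup_rec(ip, ip);
		if (!rec && kallsyms_lookup_size_offset(ip, &size, &offset))
			rec = lookup_rec(ip - offset, (ip - offset) + size);

		return rec ? rec->ip : 0;
	}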

-- Steve

2022-02-24 00:38:13

by Miroslav Benes

Subject: Re: [PATCH 04/29] x86/livepatch: Validate __fentry__ location

On Wed, 23 Feb 2022, Peter Zijlstra wrote:

> On Fri, Feb 18, 2022 at 01:08:31PM -0800, Josh Poimboeuf wrote:
> > On Fri, Feb 18, 2022 at 05:49:06PM +0100, Peter Zijlstra wrote:
> > > Currently livepatch assumes __fentry__ lives at func+0, which is most
> > > likely untrue with IBT on. Override the weak klp_get_ftrace_location()
> > > function with an arch specific version that's IBT aware.
> > >
> > > Also make the weak fallback verify the location is an actual ftrace
> > > location as a sanity check.
> > >
> > > Suggested-by: Miroslav Benes <[email protected]>
> > > Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
> > > ---
> > > arch/x86/include/asm/livepatch.h | 9 +++++++++
> > > kernel/livepatch/patch.c | 2 +-
> > > 2 files changed, 10 insertions(+), 1 deletion(-)
> > >
> > > --- a/arch/x86/include/asm/livepatch.h
> > > +++ b/arch/x86/include/asm/livepatch.h
> > > @@ -17,4 +17,13 @@ static inline void klp_arch_set_pc(struc
> > > ftrace_instruction_pointer_set(fregs, ip);
> > > }
> > >
> > > +#define klp_get_ftrace_location klp_get_ftrace_location
> > > +static inline unsigned long klp_get_ftrace_location(unsigned long faddr)
> > > +{
> > > + unsigned long addr = ftrace_location(faddr);
> > > + if (!addr && IS_ENABLED(CONFIG_X86_IBT))
> > > + addr = ftrace_location(faddr + 4);
> > > + return addr;
> >
> > I'm kind of surprised this logic doesn't exist in ftrace itself. Is
> > livepatch really the only user that needs to find the fentry for a given
> > function?
> >
> > I had to do a double take for the ftrace_location() semantics, as I
> > originally assumed that's what it did, based on its name and signature.
> >
> > Instead it apparently functions like a bool but returns its argument on
> > success.
> >
> > Though the function comment tells a different story:
> >
> > /**
> > * ftrace_location - return true if the ip giving is a traced location
> >
> > So it's all kinds of confusing...
>
> Yes.. so yesterday, when making function-graph tracing not explode, I
> ran into a similar issue. Steve suggested something along the lines of
> .... this.
>
> (modified from his actual suggestion to also cover this case)
>
> Let me go try this...

Yes, this looks good.

> --- a/kernel/trace/ftrace.c
> +++ b/kernel/trace/ftrace.c
> @@ -1578,7 +1578,23 @@ unsigned long ftrace_location_range(unsi
> */
> unsigned long ftrace_location(unsigned long ip)
> {
> - return ftrace_location_range(ip, ip);
> + struct dyn_ftrace *rec;
> + unsigned long offset;
> + unsigned long size;
> +
> + rec = lookup_rec(ip, ip);
> + if (!rec) {
> + if (!kallsyms_lookup(ip, &size, &offset, NULL, NULL))

Since we do not care about a symbol name, kallsyms_lookup_size_offset()
would be better I think.

Miroslav

2022-02-24 00:39:10

by Peter Zijlstra

Subject: Re: [PATCH 04/29] x86/livepatch: Validate __fentry__ location

On Wed, Feb 23, 2022 at 07:41:39AM -0500, Steven Rostedt wrote:
> On Wed, 23 Feb 2022 11:57:26 +0100
> Peter Zijlstra <[email protected]> wrote:
>
> > --- a/kernel/trace/ftrace.c
> > +++ b/kernel/trace/ftrace.c
> > @@ -1578,7 +1578,23 @@ unsigned long ftrace_location_range(unsi
> > */
> > unsigned long ftrace_location(unsigned long ip)
> > {
> > - return ftrace_location_range(ip, ip);
> > + struct dyn_ftrace *rec;
> > + unsigned long offset;
> > + unsigned long size;
> > +
> > + rec = lookup_rec(ip, ip);
> > + if (!rec) {
> > + if (!kallsyms_lookup_size_offset(ip, &size, &offset))
> > + goto out;
> > +
> > + rec = lookup_rec(ip - offset, (ip - offset) + size);
> > + }
> > +
>
> Please create a new function for this. Perhaps find_ftrace_location().
>
> ftrace_location() is used to see if the address given is a ftrace
> nop or not. This change will make it always return true.
>

# git grep ftrace_location
arch/powerpc/include/asm/livepatch.h:#define klp_get_ftrace_location klp_get_ftrace_location
arch/powerpc/include/asm/livepatch.h:static inline unsigned long klp_get_ftrace_location(unsigned long faddr)
arch/powerpc/include/asm/livepatch.h: return ftrace_location_range(faddr, faddr + 16);
arch/powerpc/kernel/kprobes.c: faddr = ftrace_location_range((unsigned long)addr,
arch/x86/kernel/kprobes/core.c: faddr = ftrace_location(addr);
arch/x86/kernel/kprobes/core.c: * arch_check_ftrace_location(). Something went terribly wrong
include/linux/ftrace.h:unsigned long ftrace_location(unsigned long ip);
include/linux/ftrace.h:unsigned long ftrace_location_range(unsigned long start, unsigned long end);
include/linux/ftrace.h:static inline unsigned long ftrace_location(unsigned long ip)
kernel/bpf/trampoline.c:static int is_ftrace_location(void *ip)
kernel/bpf/trampoline.c: addr = ftrace_location((long)ip);
kernel/bpf/trampoline.c: ret = is_ftrace_location(ip);
kernel/kprobes.c: unsigned long faddr = ftrace_location((unsigned long)addr);
kernel/kprobes.c:static int check_ftrace_location(struct kprobe *p)
kernel/kprobes.c: ftrace_addr = ftrace_location((unsigned long)p->addr);
kernel/kprobes.c: ret = check_ftrace_location(p);
kernel/livepatch/patch.c:#ifndef klp_get_ftrace_location
kernel/livepatch/patch.c:static unsigned long klp_get_ftrace_location(unsigned long faddr)
kernel/livepatch/patch.c: return ftrace_location(faddr);
kernel/livepatch/patch.c: klp_get_ftrace_location((unsigned long)func->old_func);
kernel/livepatch/patch.c: klp_get_ftrace_location((unsigned long)func->old_func);
kernel/trace/ftrace.c: * ftrace_location_range - return the first address of a traced location
kernel/trace/ftrace.c:unsigned long ftrace_location_range(unsigned long start, unsigned long end)
kernel/trace/ftrace.c: * ftrace_location - return true if the ip giving is a traced location
kernel/trace/ftrace.c:unsigned long ftrace_location(unsigned long ip)
kernel/trace/ftrace.c: ret = ftrace_location_range((unsigned long)start,
kernel/trace/ftrace.c: if (!ftrace_location(ip))
kernel/trace/ftrace.c: ip = ftrace_location(ip);
kernel/trace/ftrace.c: ip = ftrace_location(ip);
kernel/trace/trace_kprobe.c: * Since ftrace_location_range() does inclusive range check, we need
kernel/trace/trace_kprobe.c: return !ftrace_location_range(addr, addr + size - 1);

and yet almost every caller takes the address it returns...

2022-02-24 01:43:24

by Steven Rostedt

Subject: Re: [PATCH 04/29] x86/livepatch: Validate __fentry__ location

On Wed, 23 Feb 2022 07:41:39 -0500
Steven Rostedt <[email protected]> wrote:

> > --- a/kernel/trace/ftrace.c
> > +++ b/kernel/trace/ftrace.c
> > @@ -1578,7 +1578,23 @@ unsigned long ftrace_location_range(unsi
> > */
> > unsigned long ftrace_location(unsigned long ip)
> > {
> > - return ftrace_location_range(ip, ip);
> > + struct dyn_ftrace *rec;
> > + unsigned long offset;
> > + unsigned long size;
> > +
> > + rec = lookup_rec(ip, ip);
> > + if (!rec) {
> > + if (!kallsyms_lookup_size_offset(ip, &size, &offset))
> > + goto out;
> > +
> > + rec = lookup_rec(ip - offset, (ip - offset) + size);
> > + }
> > +
>
> Please create a new function for this. Perhaps find_ftrace_location().
>
> ftrace_location() is used to see if the address given is a ftrace
> nop or not. This change will make it always return true.

Now we could do:

return ip <= (rec->ip + MCOUNT_INSN_SIZE) ? rec->ip : 0;

Since we would want rec->ip if the pointer is before the ftrace
instruction. But we would need to audit all use cases and make sure this is
not called from any hot paths (in a callback).
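
That is, roughly, on top of the earlier hunk (sketch, untested):

	/* resolve anything at or before the ftrace insn to rec->ip, but
	 * keep returning 0 for addresses deeper inside the function */
	if (rec)
		return ip <= rec->ip + MCOUNT_INSN_SIZE ? rec->ip : 0;

out:
	return 0;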

This will affect kprobes and BPF as they both use ftrace_location() as well.

-- Steve