2014-01-17 09:00:29

by Stephane Eranian

Subject: [BUG] perf stat: corrupts memory when using PMU cpumask

Hi,

I have been debugging a NULL pointer issue with the perf stat unit/scale code,
and in the process I ran into what appeared to be a double-free issue reported
by glibc. It took me a while to realize that it was because of memory corruption
caused by a recent change in how evsels are freed.

My test case is simple. I used RAPL, but I think any event from a PMU that
advertises a cpumask in /sys/devices/XXX/cpumask will do:

# perf stat -a -e power/energy-cores/ ls
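
For context, perf picks that mask up from sysfs when it sets up the event.
A rough sketch of the lookup (helper names approximate the perf sources, so
treat them as assumptions rather than the exact code):

/* Sketch: read the PMU's cpumask file and turn it into a cpu map.
 * On my box the RAPL ("power") PMU lists a single CPU, so the
 * resulting map has 1 entry even though the machine has 8 CPUs. */
static struct cpu_map *pmu_cpumask(const char *pmu_name)
{
	char path[PATH_MAX];
	struct cpu_map *cpus = NULL;
	FILE *file;

	snprintf(path, sizeof(path), "/sys/devices/%s/cpumask", pmu_name);

	file = fopen(path, "r");
	if (file) {
		cpus = cpu_map__read(file);	/* parses e.g. "0" or "0,4" */
		fclose(file);
	}
	return cpus;
}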

The issue boils down to the fact that evsels have their file descriptors closed
twice nowadays: once in __run_perf_stat() via perf_evsel__close_fd(), and a
second time in perf_evlist__close().

Now, calling close() twice is okay, and the fd being set to -1 afterwards is
still okay as far as close() is concerned. The problem is elsewhere.

It comes from the ncpus argument passed to perf_evsel__close(). It is
DIFFERENT between the evsel and the evlist when a cpumask is used.

Take my case: an 8-CPU machine but a 1-CPU cpumask. The evsel allocates
its xyarray for 1 CPU, 1 thread. The fds are first closed with 1 CPU, 1 thread.
But then perf_evlist__close() comes in and STILL thinks the events were using
8 CPUs, 1 thread, and thus an xyarray of that size. This causes writes to
entries beyond the end of the xyarray when the fds are set to -1, thereby
causing memory corruption which I was lucky to catch via glibc.
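
To make the out-of-bounds write concrete, here is roughly the loop in question,
simplified from the perf sources (FD() indexes the evsel's fd xyarray):

void perf_evsel__close_fd(struct perf_evsel *evsel, int ncpus, int nthreads)
{
	int cpu, thread;

	for (cpu = 0; cpu < ncpus; cpu++)
		for (thread = 0; thread < nthreads; ++thread) {
			close(FD(evsel, cpu, thread));
			/* If ncpus is larger than the cpu dimension the
			 * xyarray was allocated with, this store lands
			 * past the end of the allocation. */
			FD(evsel, cpu, thread) = -1;
		}
}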

First, why are we closing the descriptors twice?

Second, I have a fix that seems to work for me. It uses evsel->cpus if
evsel->cpus exists, otherwise it falls back to evlist->cpus. That looks like
a reasonable thing to do to me, but is it? I would rather avoid the double
close altogether.
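
Roughly, the idea is something like this in perf_evlist__close() (a sketch of
the approach, not the exact patch):

void perf_evlist__close(struct perf_evlist *evlist)
{
	struct perf_evsel *evsel;
	int nthreads = thread_map__nr(evlist->threads);
	int ncpus;

	list_for_each_entry_reverse(evsel, &evlist->entries, node) {
		/* Prefer the evsel's own cpu map (set from the PMU
		 * cpumask) over the evlist-wide one, so the dimensions
		 * match the xyarray allocated at open time. */
		ncpus = evsel->cpus ? cpu_map__nr(evsel->cpus)
				    : cpu_map__nr(evlist->cpus);
		perf_evsel__close(evsel, ncpus, nthreads);
	}
}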


Opinion?


2014-01-17 14:06:13

by Arnaldo Carvalho de Melo

Subject: Re: [BUG] perf stat: corrupts memory when using PMU cpumask

On Fri, Jan 17, 2014 at 10:00:20AM +0100, Stephane Eranian wrote:
> Hi,
>
> I have been debugging a NULL pointer issue with the perf stat unit/scale code,
> and in the process I ran into what appeared to be a double-free issue reported
> by glibc. It took me a while to realize that it was because of memory corruption
> caused by a recent change in how evsels are freed.
>
> My test case is simple. I used RAPL, but I think any event from a PMU that
> advertises a cpumask in /sys/devices/XXX/cpumask will do:
>
> # perf stat -a -e power/energy-cores/ ls
>
> The issue boils down to the fact that evsels have their file descriptors closed
> twice nowadays: once in __run_perf_stat() via perf_evsel__close_fd(), and a
> second time in perf_evlist__close().
>
> Now, calling close() twice is okay, and the fd being set to -1 afterwards is
> still okay as far as close() is concerned. The problem is elsewhere.
>
> It comes from the ncpus argument passed to perf_evsel__close(). It is
> DIFFERENT between the evsel and the evlist when a cpumask is used.

Oops, I knew that set of globals and the mixup of evlists in builtin-stat
would bite us at some point :-\

I guess it was introduced in:

commit 7ae92e744e3fb389afb1e24920ecda331d360c61
Author: Yan, Zheng <[email protected]>
Date: Mon Sep 10 15:53:50 2012 +0800

perf stat: Check PMU cpumask file

I need to untangle that direct usage of the target and the global evlist to
properly fix this, but in the meantime I'll take a look at your patch. Thanks
for doing this work.


> Take my case: an 8-CPU machine but a 1-CPU cpumask. The evsel allocates
> its xyarray for 1 CPU, 1 thread. The fds are first closed with 1 CPU, 1 thread.
> But then perf_evlist__close() comes in and STILL thinks the events were using
> 8 CPUs, 1 thread, and thus an xyarray of that size. This causes writes to
> entries beyond the end of the xyarray when the fds are set to -1, thereby
> causing memory corruption which I was lucky to catch via glibc.
>
> First, why are we closing the descriptors twice?
>
> Second, I have a fix that seems to work for me. It uses evsel->cpus if
> evsel->cpus exists, otherwise it falls back to evlist->cpus. That looks like
> a reasonable thing to do to me, but is it? I would rather avoid the double
> close altogether.
>
>
> Opinion?

2014-01-17 14:09:31

by Arnaldo Carvalho de Melo

Subject: Re: [BUG] perf stat: corrupts memory when using PMU cpumask

On Fri, Jan 17, 2014 at 10:00:20AM +0100, Stephane Eranian wrote:
> The issue boils down to the fact that evsels have their file descriptors closed
> twice nowadays: once in __run_perf_stat() via perf_evsel__close_fd(), and a
> second time in perf_evlist__close().

> Now, calling close() twice is okay, and the fd being set to -1 afterwards is
> still okay as far as close() is concerned. The problem is elsewhere.

> It comes from the ncpus argument passed to perf_evsel__close(). It is
> DIFFERENT between the evsel and the evlist when a cpumask is used.

> Take my case: an 8-CPU machine but a 1-CPU cpumask. The evsel allocates
> its xyarray for 1 CPU, 1 thread. The fds are first closed with 1 CPU, 1 thread.
> But then perf_evlist__close() comes in and STILL thinks the events were using
> 8 CPUs, 1 thread, and thus an xyarray of that size. This causes writes to
> entries beyond the end of the xyarray when the fds are set to -1, thereby
> causing memory corruption which I was lucky to catch via glibc.

> First, why are we closing the descriptors twice?

The idea here was to reduce the boilerplate that tools need to do when
they are done dealing with evlists, so evlist__delete would do what the
kernel does with resources allocated to a thread when it exits without
explicitly deallocating them: release them all.
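
I.e., the destructor should end up doing roughly this (just a sketch of the
intent; the exact sequence in evlist.c may differ):

void perf_evlist__delete(struct perf_evlist *evlist)
{
	perf_evlist__munmap(evlist);	/* release the mmap rings */
	perf_evlist__close(evlist);	/* close any still-open counter fds */
	cpu_map__delete(evlist->cpus);
	thread_map__delete(evlist->threads);
	perf_evlist__purge(evlist);	/* delete each evsel on the list */
	free(evlist);
}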

So it seems, from your analysis, that bugs were left that need to be
hammered out so that this works as intended. Can you share your patch?

> Second, I have a fix that seems to work for me. It uses evsel->cpus if
> evsel->cpus exists, otherwise it falls back to evlist->cpus. That looks like
> a reasonable thing to do to me, but is it? I would rather avoid the double
> close altogether.
>
>
> Opinion?

2014-01-17 15:36:38

by Stephane Eranian

Subject: Re: [BUG] perf stat: corrupts memory when using PMU cpumask

Arnaldo,

I just sent out the patches I wrote to fix the bugs I have run into since yesterday.


On Fri, Jan 17, 2014 at 3:09 PM, Arnaldo Carvalho de Melo
<[email protected]> wrote:
> On Fri, Jan 17, 2014 at 10:00:20AM +0100, Stephane Eranian wrote:
>> The issue boils down to the fact that evsels have their file descriptors closed
>> twice nowadays: once in __run_perf_stat() via perf_evsel__close_fd(), and a
>> second time in perf_evlist__close().
>
>> Now, calling close() twice is okay, and the fd being set to -1 afterwards is
>> still okay as far as close() is concerned. The problem is elsewhere.
>
>> It comes from the ncpus argument passed to perf_evsel__close(). It is
>> DIFFERENT between the evsel and the evlist when a cpumask is used.
>
>> Take my case: an 8-CPU machine but a 1-CPU cpumask. The evsel allocates
>> its xyarray for 1 CPU, 1 thread. The fds are first closed with 1 CPU, 1 thread.
>> But then perf_evlist__close() comes in and STILL thinks the events were using
>> 8 CPUs, 1 thread, and thus an xyarray of that size. This causes writes to
>> entries beyond the end of the xyarray when the fds are set to -1, thereby
>> causing memory corruption which I was lucky to catch via glibc.
>
>> First, why are we closing the descriptors twice?
>
> The idea here was to reduce the boilerplate that tools need to do when
> they are done dealing with evlists, so evlist__delete would do what the
> kernel does with resources allocated to a thread when it exits without
> explicitly deallocating them: release them all.
>
> So it seems, from your analysis, that bugs were left that need to be
> hammered out so that this works as intended. Can you share your patch?
>
>> Second, I have a fix that seems to work for me. It uses evsel->cpus if
>> evsel->cpus exists, otherwise it falls back to evlist->cpus. That looks like
>> a reasonable thing to do to me, but is it? I would rather avoid the double
>> close altogether.
>>
>>
>> Opinion?