2020-01-17 23:46:36

by Song Liu

Subject: [PATCH] perf/core: fix mlock accounting in perf_mmap()

sysctl_perf_event_mlock and user->locked_vm can change value
independently, so we can't guarantee:

user->locked_vm <= user_lock_limit

When user->locked_vm is larger than user_lock_limit, we cannot simply
update extra and user_extra as:

extra = user_locked - user_lock_limit;
user_extra -= extra;

Otherwise, user_extra becomes negative. In extreme cases, this may lead to
a negative user->locked_vm (until this perf mmap is closed), which breaks
the locked_vm accounting badly.

Fix this with two separate conditions, which ensure that user_extra is
always non-negative.
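
For example, with purely illustrative numbers: suppose user_lock_limit
ends up at 512 pages after the sysctl is lowered, user->locked_vm already
holds 800 pages from earlier mmaps, and the new mapping needs
user_extra = 16 pages. Then:

    user_locked = 800 + 16  = 816
    extra       = 816 - 512 = 304
    user_extra  = 16  - 304 = -288

Charging this negative user_extra to user->locked_vm corrupts the
accounting, and repeated mmaps like this can eventually push locked_vm
negative.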

Fixes: c4b75479741c ("perf/core: Make the mlock accounting simple again")
Signed-off-by: Song Liu <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Peter Zijlstra <[email protected]>
---
kernel/events/core.c | 28 ++++++++++++++++++++++++----
1 file changed, 24 insertions(+), 4 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index a1f8bde19b56..89acdd1574ef 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -5920,11 +5920,31 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)

if (user_locked > user_lock_limit) {
/*
- * charge locked_vm until it hits user_lock_limit;
- * charge the rest from pinned_vm
+ * sysctl_perf_event_mlock and user->locked_vm can change
+ * value independently, so we can't guarantee:
+ *
+ * user->locked_vm <= user_lock_limit
+ *
+ * We need to be careful to make sure user_extra >= 0.
+ *
+ * Using "user_locked - user_extra" to avoid calling
+ * atomic_long_read() again.
*/
- extra = user_locked - user_lock_limit;
- user_extra -= extra;
+ if (user_locked - user_extra >= user_lock_limit) {
+ /*
+ * already used all user_lock_limit, charge all
+ * to pinned_vm
+ */
+ extra = user_extra;
+ user_extra = 0;
+ } else {
+ /*
+ * charge locked_vm until it hits user_lock_limit;
+ * charge the rest from pinned_vm
+ */
+ extra = user_locked - user_lock_limit;
+ user_extra -= extra;
+ }
}

lock_limit = rlimit(RLIMIT_MEMLOCK);
--
2.17.1


2020-01-20 08:25:14

by Alexander Shishkin

Subject: Re: [PATCH] perf/core: fix mlock accounting in perf_mmap()

Song Liu <[email protected]> writes:

> sysctl_perf_event_mlock and user->locked_vm can change value
> independently, so we can't guarantee:
>
> user->locked_vm <= user_lock_limit

This means: if the sysctl got sufficiently decreased, so that the
existing locked_vm exceeds it, we need to deal with the overflow, right?

> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index a1f8bde19b56..89acdd1574ef 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -5920,11 +5920,31 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
>
> if (user_locked > user_lock_limit) {
> /*
> - * charge locked_vm until it hits user_lock_limit;
> - * charge the rest from pinned_vm
> + * sysctl_perf_event_mlock and user->locked_vm can change
> + * value independently, so we can't guarantee:
> + *
> + * user->locked_vm <= user_lock_limit
> + *
> + * We need to be careful to make sure user_extra >= 0.
> + *
> + * Using "user_locked - user_extra" to avoid calling
> + * atomic_long_read() again.
> */
> - extra = user_locked - user_lock_limit;
> - user_extra -= extra;
> + if (user_locked - user_extra >= user_lock_limit) {
> + /*
> + * already used all user_lock_limit, charge all
> + * to pinned_vm
> + */
> + extra = user_extra;
> + user_extra = 0;
> + } else {
> + /*
> + * charge locked_vm until it hits user_lock_limit;
> + * charge the rest from pinned_vm
> + */
> + extra = user_locked - user_lock_limit;
> + user_extra -= extra;
> + }

How about the below for the sake of brevity?

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 763cf34b5a63..632505ce6c12 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -5917,7 +5917,14 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
*/
user_lock_limit *= num_online_cpus();

- user_locked = atomic_long_read(&user->locked_vm) + user_extra;
+ user_locked = atomic_long_read(&user->locked_vm);
+ /*
+ * If perf_event_mlock has changed since earlier mmaps, so that
+ * it's smaller than user->locked_vm, discard the overflow.
+ */
+ if (user_locked > user_lock_limit)
+ user_locked = user_lock_limit;
+ user_locked += user_extra;

if (user_locked > user_lock_limit) {
/*
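
With illustrative numbers (say the limit works out to 512 pages,
locked_vm is already at 800 from earlier mmaps, and user_extra is 16),
this would behave as:

    user_locked = min(800, 512) + 16 = 528
    extra       = 528 - 512 = 16
    user_extra  = 16 - 16 = 0

i.e. everything beyond the limit is charged to pinned_vm, and locked_vm
can no longer be pushed negative.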

Regards,
--
Alex

2020-01-21 18:58:45

by Song Liu

Subject: Re: [PATCH] perf/core: fix mlock accounting in perf_mmap()



> On Jan 20, 2020, at 12:24 AM, Alexander Shishkin <[email protected]> wrote:
>
> Song Liu <[email protected]> writes:
>
>> sysctl_perf_event_mlock and user->locked_vm can change value
>> independently, so we can't guarantee:
>>
>> user->locked_vm <= user_lock_limit
>
> This means: if the sysctl got sufficiently decreased, so that the
> existing locked_vm exceeds it, we need to deal with the overflow, right?

Reducing sysctl is one way to generate the overflow. Another way is to
call setrlimit() from user space to allow bigger user->locked_vm.

>
>> diff --git a/kernel/events/core.c b/kernel/events/core.c
>> index a1f8bde19b56..89acdd1574ef 100644
>> --- a/kernel/events/core.c
>> +++ b/kernel/events/core.c
>> @@ -5920,11 +5920,31 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
>>
>> if (user_locked > user_lock_limit) {
>> /*
>> - * charge locked_vm until it hits user_lock_limit;
>> - * charge the rest from pinned_vm
>> + * sysctl_perf_event_mlock and user->locked_vm can change
>> + * value independently, so we can't guarantee:
>> + *
>> + * user->locked_vm <= user_lock_limit
>> + *
>> + * We need to be careful to make sure user_extra >= 0.
>> + *
>> + * Using "user_locked - user_extra" to avoid calling
>> + * atomic_long_read() again.
>> */
>> - extra = user_locked - user_lock_limit;
>> - user_extra -= extra;
>> + if (user_locked - user_extra >= user_lock_limit) {
>> + /*
>> + * already used all user_lock_limit, charge all
>> + * to pinned_vm
>> + */
>> + extra = user_extra;
>> + user_extra = 0;
>> + } else {
>> + /*
>> + * charge locked_vm until it hits user_lock_limit;
>> + * charge the rest from pinned_vm
>> + */
>> + extra = user_locked - user_lock_limit;
>> + user_extra -= extra;
>> + }
>
> How about the below for the sake of brevity?
>
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 763cf34b5a63..632505ce6c12 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -5917,7 +5917,14 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
> */
> user_lock_limit *= num_online_cpus();
>
> - user_locked = atomic_long_read(&user->locked_vm) + user_extra;
> + user_locked = atomic_long_read(&user->locked_vm);
> + /*
> + * If perf_event_mlock has changed since earlier mmaps, so that
> + * it's smaller than user->locked_vm, discard the overflow.
> + */

Since a change to perf_event_mlock is not the only possible cause of the
overflow, we need to revise this comment.

> + if (user_locked > user_lock_limit)
> + user_locked = user_lock_limit;
> + user_locked += user_extra;
>
> if (user_locked > user_lock_limit) {
> /*

I think this is logically correct, and probably easier to follow. Let me
respin v2 based on this version.

Thanks,
Song

2020-01-21 19:37:01

by Song Liu

Subject: Re: [PATCH] perf/core: fix mlock accounting in perf_mmap()



> On Jan 20, 2020, at 12:24 AM, Alexander Shishkin <[email protected]> wrote:
>
> Song Liu <[email protected]> writes:
>
>> sysctl_perf_event_mlock and user->locked_vm can change value
>> independently, so we can't guarantee:
>>
>> user->locked_vm <= user_lock_limit
>
> This means: if the sysctl got sufficiently decreased, so that the
> existing locked_vm exceeds it, we need to deal with the overflow, right?
>
>> diff --git a/kernel/events/core.c b/kernel/events/core.c
>> index a1f8bde19b56..89acdd1574ef 100644
>> --- a/kernel/events/core.c
>> +++ b/kernel/events/core.c
>> @@ -5920,11 +5920,31 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
>>
>> if (user_locked > user_lock_limit) {
>> /*
>> - * charge locked_vm until it hits user_lock_limit;
>> - * charge the rest from pinned_vm
>> + * sysctl_perf_event_mlock and user->locked_vm can change
>> + * value independently, so we can't guarantee:
>> + *
>> + * user->locked_vm <= user_lock_limit
>> + *
>> + * We need to be careful to make sure user_extra >= 0.
>> + *
>> + * Using "user_locked - user_extra" to avoid calling
>> + * atomic_long_read() again.
>> */
>> - extra = user_locked - user_lock_limit;
>> - user_extra -= extra;
>> + if (user_locked - user_extra >= user_lock_limit) {
>> + /*
>> + * already used all user_lock_limit, charge all
>> + * to pinned_vm
>> + */
>> + extra = user_extra;
>> + user_extra = 0;
>> + } else {
>> + /*
>> + * charge locked_vm until it hits user_lock_limit;
>> + * charge the rest from pinned_vm
>> + */
>> + extra = user_locked - user_lock_limit;
>> + user_extra -= extra;
>> + }
>
> How about the below for the sake of brevity?
>
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 763cf34b5a63..632505ce6c12 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -5917,7 +5917,14 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
> */
> user_lock_limit *= num_online_cpus();
>
> - user_locked = atomic_long_read(&user->locked_vm) + user_extra;
> + user_locked = atomic_long_read(&user->locked_vm);
> + /*
> + * If perf_event_mlock has changed since earlier mmaps, so that
> + * it's smaller than user->locked_vm, discard the overflow.
> + */
> + if (user_locked > user_lock_limit)
> + user_locked = user_lock_limit;
> + user_locked += user_extra;
>
> if (user_locked > user_lock_limit) {
> /*

Actually, I think this is cleaner.

diff --git i/kernel/events/core.c w/kernel/events/core.c
index 2173c23c25b4..debd84fcf9cc 100644
--- i/kernel/events/core.c
+++ w/kernel/events/core.c
@@ -5916,14 +5916,18 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
*/
user_lock_limit *= num_online_cpus();

- user_locked = atomic_long_read(&user->locked_vm) + user_extra;
+ user_locked = atomic_long_read(&user->locked_vm);

if (user_locked > user_lock_limit) {
+ /* charge all to pinned_vm */
+ extra = user_extra;
+ user_extra = 0;
+ } else if (user_lock + user_extra > user_lock_limit)
/*
* charge locked_vm until it hits user_lock_limit;
* charge the rest from pinned_vm
*/
- extra = user_locked - user_lock_limit;
+ extra = user_locked + user_extra - user_lock_limit;
user_extra -= extra;
}

Alexander, does this look good to you?

Thanks,
Song

2020-01-22 08:52:28

by Alexander Shishkin

Subject: Re: [PATCH] perf/core: fix mlock accounting in perf_mmap()

Song Liu <[email protected]> writes:

> Actually, I think this is cleaner.

I don't think multiple conditional blocks are cleaner, at least in this
case.

> diff --git i/kernel/events/core.c w/kernel/events/core.c
> index 2173c23c25b4..debd84fcf9cc 100644
> --- i/kernel/events/core.c
> +++ w/kernel/events/core.c
> @@ -5916,14 +5916,18 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
> */
> user_lock_limit *= num_online_cpus();
>
> - user_locked = atomic_long_read(&user->locked_vm) + user_extra;
> + user_locked = atomic_long_read(&user->locked_vm);
>
> if (user_locked > user_lock_limit) {
> + /* charge all to pinned_vm */
> + extra = user_extra;
> + user_extra = 0;
> + } else if (user_lock + user_extra > user_lock_limit)

You probably mean "user_locked" here.

> /*
> * charge locked_vm until it hits user_lock_limit;
> * charge the rest from pinned_vm
> */
> - extra = user_locked - user_lock_limit;
> + extra = user_locked + user_extra - user_lock_limit;

To me, this is a bit harder to read.

> user_extra -= extra;
> }
>
> Alexander, does this look good to you?

I like to think of this as: we charge the pages to locked_vm until we
exhaust user_lock_limit, and the rest we charge to pinned_vm. Everything
else is just corner cases, and they fit into the same general case. When
we start calculating each corner case in its own block, we only multiply
the potential for errors, and there have been errors in this particular
path before. So the shorter the code, and the fewer the "if ... else if
..." statements, the better it looks to me. Otherwise, it's a matter of
preference.

Thanks,
--
Alex

2020-01-23 09:21:58

by Alexander Shishkin

Subject: Re: [PATCH] perf/core: fix mlock accounting in perf_mmap()

Song Liu <[email protected]> writes:

>> On Jan 20, 2020, at 12:24 AM, Alexander Shishkin <[email protected]> wrote:
>>
>> Song Liu <[email protected]> writes:
>>
>>> sysctl_perf_event_mlock and user->locked_vm can change value
>>> independently, so we can't guarantee:
>>>
>>> user->locked_vm <= user_lock_limit
>>
>> This means: if the sysctl got sufficiently decreased, so that the
>> existing locked_vm exceeds it, we need to deal with the overflow, right?
>
> Reducing sysctl is one way to generate the overflow. Another way is to
> call setrlimit() from user space to allow bigger user->locked_vm.

You mean RLIMIT_MEMLOCK? That's a limit on mm->pinned_vm. Doesn't affect
user->locked_vm.

Regards,
--
Alex

2020-01-23 17:26:09

by Song Liu

Subject: Re: [PATCH] perf/core: fix mlock accounting in perf_mmap()



> On Jan 23, 2020, at 1:19 AM, Alexander Shishkin <[email protected]> wrote:
>
> Song Liu <[email protected]> writes:
>
>>> On Jan 20, 2020, at 12:24 AM, Alexander Shishkin <[email protected]> wrote:
>>>
>>> Song Liu <[email protected]> writes:
>>>
>>>> sysctl_perf_event_mlock and user->locked_vm can change value
>>>> independently, so we can't guarantee:
>>>>
>>>> user->locked_vm <= user_lock_limit
>>>
>>> This means: if the sysctl got sufficiently decreased, so that the
>>> existing locked_vm exceeds it, we need to deal with the overflow, right?
>>
>> Reducing sysctl is one way to generate the overflow. Another way is to
>> call setrlimit() from user space to allow bigger user->locked_vm.
>
> You mean RLIMIT_MEMLOCK? That's a limit on mm->pinned_vm. Doesn't affect
> user->locked_vm.

This depends. For example, bpf_charge_memlock() uses RLIMIT_MEMLOCK as the
limit for user->locked_vm. This makes sense, because a bpf map created by
a process may outlive that process.
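
Roughly, the pattern there is something like the following (a simplified
sketch from memory, not the verbatim kernel code):

	/* charge 'pages' to user->locked_vm, capped by RLIMIT_MEMLOCK */
	unsigned long limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;

	if (atomic_long_add_return(pages, &user->locked_vm) > limit) {
		atomic_long_sub(pages, &user->locked_vm);
		return -EPERM;
	}
	return 0;

So with a large enough RLIMIT_MEMLOCK, user->locked_vm can legitimately
grow past the limit derived from sysctl_perf_event_mlock before
perf_mmap() ever runs.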

Thanks,
Song