2020-01-22 19:28:04

by Song Liu

Subject: [PATCH v2] perf/core: fix mlock accounting in perf_mmap()

sysctl_perf_event_mlock and user->locked_vm can change values
independently, so we can't guarantee:

user->locked_vm <= user_lock_limit

When user->locked_vm is larger than user_lock_limit, we cannot simply
update extra and user_extra as:

extra = user_locked - user_lock_limit;
user_extra -= extra;

Otherwise, user_extra will be negative. In extreme cases, this may lead to
a negative user->locked_vm (until this perf-mmap is closed), which breaks
locked_vm accounting badly.

Fix this by adjusting user_locked before calculating extra and user_extra.
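
For illustration, here is a standalone userspace sketch of the pre-patch
arithmetic with made-up page counts (not kernel code; the names mirror
perf_mmap(), the numbers are hypothetical):

	#include <stdio.h>

	int main(void)
	{
		/* pages already charged to user->locked_vm by an earlier mmap() */
		long locked_vm       = 150;
		/* limit after sysctl_perf_event_mlock has been lowered */
		long user_lock_limit = 100;
		/* pages requested by the new mapping */
		long user_extra      = 20;
		long extra           = 0;

		long user_locked = locked_vm + user_extra;	/* 170 */

		if (user_locked > user_lock_limit) {
			extra = user_locked - user_lock_limit;	/* 70 */
			user_extra -= extra;			/* 20 - 70 = -50 */
		}

		printf("extra=%ld user_extra=%ld\n", extra, user_extra);
		return 0;
	}

Charging that negative user_extra to user->locked_vm is what drives the
counter negative until the mapping is closed.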

Fixes: c4b75479741c ("perf/core: Make the mlock accounting simple again")
Signed-off-by: Song Liu <[email protected]>
Suggested-by: Alexander Shishkin <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Peter Zijlstra <[email protected]>
---
kernel/events/core.c | 13 ++++++++++++-
1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 2173c23c25b4..d25f2de45996 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -5916,8 +5916,19 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
*/
user_lock_limit *= num_online_cpus();

- user_locked = atomic_long_read(&user->locked_vm) + user_extra;
+ user_locked = atomic_long_read(&user->locked_vm);

+ /*
+ * sysctl_perf_event_mlock and user->locked_vm can change values
+ * independently, so we can't guarantee:
+ * user->locked_vm <= user_lock_limit
+ *
+ * Adjust user_locked to be <= user_lock_limit so we can calculate
+ * correct extra and user_extra.
+ */
+ user_locked = min_t(unsigned long, user_locked, user_lock_limit);
+
+ user_locked += user_extra;
if (user_locked > user_lock_limit) {
/*
* charge locked_vm until it hits user_lock_limit;
--
2.17.1


2020-01-23 09:35:16

by Alexander Shishkin

Subject: Re: [PATCH v2] perf/core: fix mlock accounting in perf_mmap()

Song Liu <[email protected]> writes:

> sysctl_perf_event_mlock and user->locked_vm can change values
> independently, so we can't guarantee:

Looks good, but I still have some suggestions below.

>
> user->locked_vm <= user_lock_limit
>
> When user->locked_vm is larger than user_lock_limit, we cannot simply
> update extra and user_extra as:
>
> extra = user_locked - user_lock_limit;
> user_extra -= extra;
>
> Otherwise, user_extra will be negative. In extreme cases, this may lead to
> a negative user->locked_vm (until this perf-mmap is closed), which breaks
> locked_vm accounting badly.
>
> Fix this by adjusting user_locked before calculating extra and user_extra.

The commit message is just talking about the code. We can see the code
when we scroll down to the diff. What it could say instead is:

1. Problem statement: decreasing sysctl_perf_event_mlock between two
consecutive mmap()s of a perf ring buffer may lead to an integer
underflow in locked memory accounting. This can result in the following
undesired behavior: <an example of bad behavior as opposed to expected
behavior>.

2. Fix description: address this by adjusting the accounting logic to
take into account the possibility that the amount of already locked
memory may exceed the current limit.

> Fixes: c4b75479741c ("perf/core: Make the mlock accounting simple again")
> Signed-off-by: Song Liu <[email protected]>
> Suggested-by: Alexander Shishkin <[email protected]>
> Cc: Alexander Shishkin <[email protected]>
> Cc: Arnaldo Carvalho de Melo <[email protected]>
> Cc: Jiri Olsa <[email protected]>
> Cc: Peter Zijlstra <[email protected]>
> ---
> kernel/events/core.c | 13 ++++++++++++-
> 1 file changed, 12 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 2173c23c25b4..d25f2de45996 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -5916,8 +5916,19 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
> */
> user_lock_limit *= num_online_cpus();
>
> - user_locked = atomic_long_read(&user->locked_vm) + user_extra;
> + user_locked = atomic_long_read(&user->locked_vm);
>
> + /*
> + * sysctl_perf_event_mlock and user->locked_vm can change values
> + * independently, so we can't guarantee:
> + * user->locked_vm <= user_lock_limit

"sysctl_perf_event_mlock may have changed, so that user->locked_vm >
user_lock_limit".

> + *
> + * Adjust user_locked to be <= user_lock_limit so we can calculate
> + * correct extra and user_extra.

This comment is also verbalizing the C code that follows. I don't think
it's necessary.

> + */
> + user_locked = min_t(unsigned long, user_locked, user_lock_limit);

A matter of preference, but to me the "if (user_locked >=
user_lock_limit)" is easier to read.

> +
> + user_locked += user_extra;
> if (user_locked > user_lock_limit) {
> /*
> * charge locked_vm until it hits user_lock_limit;

Thanks,
--
Alex