smaps_rollup will try to grab mmap_lock and walk through the whole vma
list until it finishes iterating. For large processes, the mmap_lock is
held for a long time, which may block other write requests, such as mmap
and munmap, from progressing smoothly.
There are upcoming mmap_lock optimizations like range-based locks, but
the lock applied to smaps_rollup would still be the coarse-grained type,
which does not avoid this unpleasant contention.
To solve the aforementioned issue, add a check which detects whether
anyone wants to grab mmap_lock for a write attempt. If so, temporarily
release the lock and reacquire it before continuing the walk.
Changes since v1:
- If the current VMA is freed after dropping the lock, an incomplete
  result would be returned. To fix this issue, refine the code flow as
  suggested by Steve. [1]
[1] https://lore.kernel.org/lkml/[email protected]/
Signed-off-by: Chinwen Chang <[email protected]>
---
fs/proc/task_mmu.c | 56 +++++++++++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 55 insertions(+), 1 deletion(-)
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index dbda449..23b3a447 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -853,9 +853,63 @@ static int show_smaps_rollup(struct seq_file *m, void *v)
 
         hold_task_mempolicy(priv);
 
-        for (vma = priv->mm->mmap; vma; vma = vma->vm_next) {
+        for (vma = priv->mm->mmap; vma;) {
                 smap_gather_stats(vma, &mss);
                 last_vma_end = vma->vm_end;
+
+                /*
+                 * Release mmap_lock temporarily if someone wants to
+                 * access it for write request.
+                 */
+                if (mmap_lock_is_contended(mm)) {
+                        mmap_read_unlock(mm);
+                        ret = mmap_read_lock_killable(mm);
+                        if (ret) {
+                                release_task_mempolicy(priv);
+                                goto out_put_mm;
+                        }
+
+                        /*
+                         * After dropping the lock, there are three cases to
+                         * consider. See the following example for explanation.
+                         *
+                         *   +------+------+-----------+
+                         *   | VMA1 | VMA2 |    VMA3   |
+                         *   +------+------+-----------+
+                         *   |      |      |           |
+                         *  4k     8k     16k         400k
+                         *
+                         * Suppose we drop the lock after reading VMA2 due to
+                         * contention, then we get:
+                         *
+                         *    last_vma_end = 16k
+                         *
+                         * 1) VMA2 is freed, but VMA3 exists:
+                         *
+                         *    find_vma(mm, 16k - 1) will return VMA3.
+                         *    In this case, just continue from VMA3.
+                         *
+                         * 2) VMA2 still exists:
+                         *
+                         *    find_vma(mm, 16k - 1) will return VMA2.
+                         *    Iterate the loop like the original one.
+                         *
+                         * 3) No more VMAs can be found:
+                         *
+                         *    find_vma(mm, 16k - 1) will return NULL.
+                         *    No more things to do, just break.
+                         */
+                        vma = find_vma(mm, last_vma_end - 1);
+                        /* Case 3 above */
+                        if (!vma)
+                                break;
+
+                        /* Case 1 above */
+                        if (vma->vm_start >= last_vma_end)
+                                continue;
+                }
+                /* Case 2 above */
+                vma = vma->vm_next;
         }
 
         show_vma_header_prefix(m, priv->mm->mmap->vm_start,
--
1.9.1
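
For reference, the mmap_lock_is_contended() check used in the loop above
does not block or drop anything by itself; it is expected to be a thin
helper that only reports whether another task is already queued waiting
for the mm's lock. A minimal sketch of that idea, assuming the mm_struct
field is the rwsem named mmap_lock (the _sketch suffix only marks this as
illustrative, not the in-tree definition):

#include <linux/mm_types.h>
#include <linux/rwsem.h>

/*
 * Sketch: report whether some other task is waiting on the mm's mmap_lock.
 * Assumed to reduce to rwsem_is_contended(), which only peeks at the rwsem
 * wait list, so it is cheap enough to call once per VMA in the rollup loop.
 */
static inline int mmap_lock_is_contended_sketch(struct mm_struct *mm)
{
        return rwsem_is_contended(&mm->mmap_lock);
}

Because this is only a hint, the loop still has to re-validate the VMA
list after reacquiring the lock, which is what the three-case comment in
the diff handles.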
On 13/08/2020 03:13, Chinwen Chang wrote:
> smaps_rollup will try to grab mmap_lock and walk through the whole vma
> list until it finishes iterating. For large processes, the mmap_lock is
> held for a long time, which may block other write requests, such as mmap
> and munmap, from progressing smoothly.
>
> There are upcoming mmap_lock optimizations like range-based locks, but
> the lock applied to smaps_rollup would still be the coarse-grained type,
> which does not avoid this unpleasant contention.
>
> To solve the aforementioned issue, add a check which detects whether
> anyone wants to grab mmap_lock for a write attempt. If so, temporarily
> release the lock and reacquire it before continuing the walk.
>
> Changes since v1:
> - If the current VMA is freed after dropping the lock, an incomplete
>   result would be returned. To fix this issue, refine the code flow as
>   suggested by Steve. [1]
>
> [1] https://lore.kernel.org/lkml/[email protected]/
>
> Signed-off-by: Chinwen Chang <[email protected]>
Reviewed-by: Steven Price <[email protected]>
On Wed, Aug 12, 2020 at 7:13 PM Chinwen Chang
<[email protected]> wrote:
> smaps_rollup will try to grab mmap_lock and walk through the whole vma
> list until it finishes iterating. For large processes, the mmap_lock is
> held for a long time, which may block other write requests, such as mmap
> and munmap, from progressing smoothly.
>
> There are upcoming mmap_lock optimizations like range-based locks, but
> the lock applied to smaps_rollup would still be the coarse-grained type,
> which does not avoid this unpleasant contention.
>
> To solve the aforementioned issue, add a check which detects whether
> anyone wants to grab mmap_lock for a write attempt. If so, temporarily
> release the lock and reacquire it before continuing the walk.

I think your retry mechanism still doesn't handle all cases. When you
get back the mmap lock, the address where you stopped last time could
now be in the middle of a vma. I think the consistent thing to do in
that case would be to retry scanning from the address you stopped at,
even if it's not on a vma boundary anymore. You may have to change
smap_gather_stats to support that, though.
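
One possible shape for that change, sketched here only to illustrate the
suggestion above; the extra start parameter, the helper name and the
walk_page_vma()/walk_page_range() split are assumptions rather than a
posted patch:

/*
 * Hypothetical resumable variant of smap_gather_stats(): start == 0 keeps
 * the original behaviour of scanning the whole VMA, while a non-zero start
 * only accounts the not-yet-gathered range [start, vma->vm_end).
 */
static void smap_gather_stats_from(struct vm_area_struct *vma,
                struct mem_size_stats *mss, unsigned long start)
{
        const struct mm_walk_ops *ops = &smaps_walk_ops;

        /* Nothing left of this VMA to scan */
        if (start >= vma->vm_end)
                return;

        if (!start)
                /* Whole VMA, as the existing code does */
                walk_page_vma(vma, ops, mss);
        else
                /* Resume from the address where the previous scan stopped */
                walk_page_range(vma->vm_mm, start, vma->vm_end, ops, mss);
}

A real change would also have to cover the CONFIG_SHMEM shortcut that the
existing smap_gather_stats() takes for shmem mappings, which is omitted
here for brevity.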
On Fri, 2020-08-14 at 01:35 -0700, Michel Lespinasse wrote:
> On Wed, Aug 12, 2020 at 7:13 PM Chinwen Chang
> <[email protected]> wrote:
> I think your retry mechanism still doesn't handle all cases. When you
> get back the mmap lock, the address where you stopped last time could
> now be in the middle of a vma. I think the consistent thing to do in
> that case would be to retry scanning from the address you stopped at,
> even if it's not on a vma boundary anymore. You may have to change
> smap_gather_stats to support that, though.
Hi Michel,
I think I got your point. Let me try to prepare a new patch series for
further review.
Thank you for your suggestion :)
Chinwen
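
With a resumable gather helper along the lines sketched after Michel's
mail, the retry branch in show_smaps_rollup() could treat the mid-VMA
situation as a fourth case next to the three documented in v2. A rough
sketch of how that branch might look, reusing the hypothetical
smap_gather_stats_from() from above (again an assumption, not the
eventual series):

        /* Inside show_smaps_rollup(), after reacquiring the read lock
         * on contention:
         */
        vma = find_vma(mm, last_vma_end - 1);
        /* Case 3: nothing at or after last_vma_end, stop */
        if (!vma)
                break;

        /* Case 1: the VMA we stopped in is gone and the next one starts
         * at or after last_vma_end, so nothing was missed; continue with
         * that VMA.
         */
        if (vma->vm_start >= last_vma_end)
                continue;

        /* Case 4: last_vma_end now falls inside this VMA (it was merged
         * or expanded while the lock was dropped); account only the part
         * that has not been gathered yet, then fall through to the next
         * VMA as in case 2.
         */
        if (vma->vm_end > last_vma_end)
                smap_gather_stats_from(vma, &mss, last_vma_end);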