2018-10-16 17:44:36

by Kuo-Hsin Yang

Subject: [PATCH 0/2] shmem, drm/i915: Mark pinned shmemfs pages as unevictable

When a graphics-heavy application is running, the i915 driver may pin a
lot of shmemfs pages, and vmscan slows down significantly while
scanning these pinned pages. This series is an alternative to the patch
by Chris Wilson [1]. As the i915 driver pins all pages in an address
space, marking the address space as unevictable is sufficient to solve
this issue.

[1]: https://patchwork.kernel.org/patch/9768741/

Kuo-Hsin Yang (2):
shmem: export shmem_unlock_mapping
drm/i915: Mark pinned shmemfs pages as unevictable

Documentation/vm/unevictable-lru.rst | 4 +++-
drivers/gpu/drm/i915/i915_gem.c | 8 ++++++++
mm/shmem.c | 2 ++
3 files changed, 13 insertions(+), 1 deletion(-)

--
2.19.1.331.ge82ca0e54c-goog



2018-10-16 17:44:52

by Kuo-Hsin Yang

Subject: [PATCH 1/2] shmem: export shmem_unlock_mapping

By exporting this function, drivers can mark and unmark a shmemfs
address space as unevictable in the following way:

1. Mark an address space as unevictable with mapping_set_unevictable();
   pages in the address space will then be moved to the unevictable
   list by vmscan.

2. Mark the address space as evictable again with
   mapping_clear_unevictable(), and move its pages back to the
   evictable list with shmem_unlock_mapping().
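
A minimal sketch of this calling pattern from a driver's point of view
(`file' stands for whatever shmemfs-backed file the driver owns, and
the pin/unpin sites are placeholders, not code from this series):

	struct address_space *mapping = file_inode(file)->i_mapping;

	/* 1. before pinning: keep vmscan from rescanning these pages */
	mapping_set_unevictable(mapping);

	/* ... driver holds a reference on every page of the mapping ... */

	/* 2. after unpinning: allow reclaim again, and drain the pages
	 *    back from the unevictable list to the evictable lists
	 */
	mapping_clear_unevictable(mapping);
	shmem_unlock_mapping(mapping);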

Signed-off-by: Kuo-Hsin Yang <[email protected]>
---
Documentation/vm/unevictable-lru.rst | 4 +++-
mm/shmem.c | 2 ++
2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/Documentation/vm/unevictable-lru.rst b/Documentation/vm/unevictable-lru.rst
index fdd84cb8d511..a812fb55136d 100644
--- a/Documentation/vm/unevictable-lru.rst
+++ b/Documentation/vm/unevictable-lru.rst
@@ -143,7 +143,7 @@ using a number of wrapper functions:
Query the address space, and return true if it is completely
unevictable.

-These are currently used in two places in the kernel:
+These are currently used in three places in the kernel:

(1) By ramfs to mark the address spaces of its inodes when they are created,
and this mark remains for the life of the inode.
@@ -154,6 +154,8 @@ These are currently used in two places in the kernel:
swapped out; the application must touch the pages manually if it wants to
ensure they're in memory.

+ (3) By the i915 driver to mark pinned address space until it's unpinned.
+

Detecting Unevictable Pages
---------------------------
diff --git a/mm/shmem.c b/mm/shmem.c
index 446942677cd4..d1ce34c09df6 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -786,6 +786,7 @@ void shmem_unlock_mapping(struct address_space *mapping)
cond_resched();
}
}
+EXPORT_SYMBOL_GPL(shmem_unlock_mapping);

/*
* Remove range of pages and swap entries from radix tree, and free them.
@@ -3874,6 +3875,7 @@ int shmem_lock(struct file *file, int lock, struct user_struct *user)
void shmem_unlock_mapping(struct address_space *mapping)
{
}
+EXPORT_SYMBOL_GPL(shmem_unlock_mapping);

#ifdef CONFIG_MMU
unsigned long shmem_get_unmapped_area(struct file *file,
--
2.19.1.331.ge82ca0e54c-goog


2018-10-16 17:44:58

by Kuo-Hsin Yang

Subject: [PATCH 2/2] drm/i915: Mark pinned shmemfs pages as unevictable

The i915 driver uses shmemfs to allocate backing storage for gem objects.
These shmemfs pages can be pinned (increased ref count) by
shmem_read_mapping_page_gfp(). When a lot of pages are pinned, vmscan
wastes a lot of time scanning these pinned pages. Mark these pinned
pages as unevictable to speed up vmscan.
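
A sketch of the pinning path referred to above (simplified; the real
loop in i915_gem_object_get_pages_gtt() walks every page of the object
and handles gfp fallbacks; `mapping', `i' and `gfp' stand in for the
surrounding code):

	struct page *page;

	page = shmem_read_mapping_page_gfp(mapping, i, gfp);
	if (IS_ERR(page))
		return PTR_ERR(page);
	/* page is returned with an elevated refcount, so vmscan cannot
	 * reclaim it until the driver drops the pin with put_page()
	 */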

Signed-off-by: Kuo-Hsin Yang <[email protected]>
---
drivers/gpu/drm/i915/i915_gem.c | 8 ++++++++
1 file changed, 8 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index fcc73a6ab503..e0ff5b736128 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2390,6 +2390,7 @@ i915_gem_object_put_pages_gtt(struct drm_i915_gem_object *obj,
{
struct sgt_iter sgt_iter;
struct page *page;
+ struct address_space *mapping;

__i915_gem_object_release_shmem(obj, pages, true);

@@ -2409,6 +2410,10 @@ i915_gem_object_put_pages_gtt(struct drm_i915_gem_object *obj,
}
obj->mm.dirty = false;

+ mapping = file_inode(obj->base.filp)->i_mapping;
+ mapping_clear_unevictable(mapping);
+ shmem_unlock_mapping(mapping);
+
sg_free_table(pages);
kfree(pages);
}
@@ -2551,6 +2556,7 @@ static int i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
* Fail silently without starting the shrinker
*/
mapping = obj->base.filp->f_mapping;
+ mapping_set_unevictable(mapping);
noreclaim = mapping_gfp_constraint(mapping, ~__GFP_RECLAIM);
noreclaim |= __GFP_NORETRY | __GFP_NOWARN;

@@ -2664,6 +2670,8 @@ static int i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
err_pages:
for_each_sgt_page(page, sgt_iter, st)
put_page(page);
+ mapping_clear_unevictable(mapping);
+ shmem_unlock_mapping(mapping);
sg_free_table(st);
kfree(st);

--
2.19.1.331.ge82ca0e54c-goog


2018-10-16 18:22:57

by Michal Hocko

Subject: Re: [PATCH 2/2] drm/i915: Mark pinned shmemfs pages as unevictable

On Wed 17-10-18 01:43:00, Kuo-Hsin Yang wrote:
> The i915 driver uses shmemfs to allocate backing storage for gem objects.
> These shmemfs pages can be pinned (increased ref count) by
> shmem_read_mapping_page_gfp(). When a lot of pages are pinned, vmscan
> wastes a lot of time scanning these pinned pages. Mark these pinned
> pages as unevictable to speed up vmscan.

I would squash the two patches into a single one. One more thing to be
careful about here, though. Unless I am missing something, such a page
is not migratable, so it shouldn't be allocated from a movable zone.
Does mapping_gfp_constraint contain __GFP_MOVABLE? If yes, we want to
drop it as well. Other than that the patch makes sense, with my very
limited knowledge of the i915 code of course.
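
A minimal sketch of that check-and-drop, using the existing pagemap
helpers (hypothetical placement, wherever the mapping's gfp mask is
initialized):

	gfp_t gfp = mapping_gfp_mask(mapping);

	/* pinned, unevictable pages must not come from ZONE_MOVABLE */
	if (gfp & __GFP_MOVABLE)
		mapping_set_gfp_mask(mapping, gfp & ~__GFP_MOVABLE);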
--
Michal Hocko
SUSE Labs

2018-10-16 18:34:06

by Chris Wilson

Subject: Re: [PATCH 2/2] drm/i915: Mark pinned shmemfs pages as unevictable

Quoting Michal Hocko (2018-10-16 19:21:55)
> On Wed 17-10-18 01:43:00, Kuo-Hsin Yang wrote:
> > The i915 driver uses shmemfs to allocate backing storage for gem objects.
> > These shmemfs pages can be pinned (increased ref count) by
> > shmem_read_mapping_page_gfp(). When a lot of pages are pinned, vmscan
> > wastes a lot of time scanning these pinned pages. Mark these pinned
> > pages as unevictable to speed up vmscan.
>
> I would squash the two patches into a single one. One more thing to be
> careful about here, though. Unless I am missing something, such a page
> is not migratable, so it shouldn't be allocated from a movable zone.
> Does mapping_gfp_constraint contain __GFP_MOVABLE? If yes, we want to
> drop it as well. Other than that the patch makes sense, with my very
> limited knowledge of the i915 code of course.

They are not migratable today. But we have proposed hooking up
.migratepage and setting __GFP_MOVABLE, which would then include
unlocking the mapping at migrate time.

Fwiw, the shmem_unlock_mapping() call feels quite expensive, almost
nullifying the advantage gained from not walking the lists in reclaim.
I'll have better numbers in a couple of days.
-Chris

2018-10-16 19:16:23

by Michal Hocko

Subject: Re: [PATCH 2/2] drm/i915: Mark pinned shmemfs pages as unevictable

On Tue 16-10-18 19:31:06, Chris Wilson wrote:
> Quoting Michal Hocko (2018-10-16 19:21:55)
> > On Wed 17-10-18 01:43:00, Kuo-Hsin Yang wrote:
> > > The i915 driver uses shmemfs to allocate backing storage for gem objects.
> > > These shmemfs pages can be pinned (increased ref count) by
> > > shmem_read_mapping_page_gfp(). When a lot of pages are pinned, vmscan
> > > wastes a lot of time scanning these pinned pages. Mark these pinned
> > > pages as unevictable to speed up vmscan.
> >
> > I would squash the two patches into a single one. One more thing to be
> > careful about here, though. Unless I am missing something, such a page
> > is not migratable, so it shouldn't be allocated from a movable zone.
> > Does mapping_gfp_constraint contain __GFP_MOVABLE? If yes, we want to
> > drop it as well. Other than that the patch makes sense, with my very
> > limited knowledge of the i915 code of course.
>
> They are not migratable today. But we have proposed hooking up
> .migratepage and setting __GFP_MOVABLE, which would then include
> unlocking the mapping at migrate time.

If the mapping_gfp doesn't include __GFP_MOVABLE today, then there is
no issue of the kind I had in mind.
--
Michal Hocko
SUSE Labs

2018-10-17 09:00:00

by Kuo-Hsin Yang

Subject: [PATCH v2] shmem, drm/i915: mark pinned shmemfs pages as unevictable

The i915 driver uses shmemfs to allocate backing storage for gem
objects. These shmemfs pages can be pinned (increased ref count) by
shmem_read_mapping_page_gfp(). When a lot of pages are pinned, vmscan
wastes a lot of time scanning these pinned pages. In some extreme
cases, all pages in the inactive anon LRU are pinned; as only the
inactive anon LRU is scanned due to inactive_ratio, the system cannot
swap and invokes the oom-killer. Mark these pinned pages as unevictable
to speed up vmscan.

By exporting shmem_unlock_mapping, drivers can:

1. Mark a shmemfs address space as unevictable with
   mapping_set_unevictable(); pages in the address space will then be
   moved to the unevictable list by vmscan.

2. Mark the address space as evictable again with
   mapping_clear_unevictable(), and move its pages back to the
   evictable list with shmem_unlock_mapping().

This patch was inspired by Chris Wilson's change [1].

[1]: https://patchwork.kernel.org/patch/9768741/

Signed-off-by: Kuo-Hsin Yang <[email protected]>
---
Changes for v2:
Squashed the two patches.

Documentation/vm/unevictable-lru.rst | 4 +++-
drivers/gpu/drm/i915/i915_gem.c | 8 ++++++++
mm/shmem.c | 2 ++
3 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/Documentation/vm/unevictable-lru.rst b/Documentation/vm/unevictable-lru.rst
index fdd84cb8d511..a812fb55136d 100644
--- a/Documentation/vm/unevictable-lru.rst
+++ b/Documentation/vm/unevictable-lru.rst
@@ -143,7 +143,7 @@ using a number of wrapper functions:
Query the address space, and return true if it is completely
unevictable.

-These are currently used in two places in the kernel:
+These are currently used in three places in the kernel:

(1) By ramfs to mark the address spaces of its inodes when they are created,
and this mark remains for the life of the inode.
@@ -154,6 +154,8 @@ These are currently used in two places in the kernel:
swapped out; the application must touch the pages manually if it wants to
ensure they're in memory.

+ (3) By the i915 driver to mark pinned address space until it's unpinned.
+

Detecting Unevictable Pages
---------------------------
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index fcc73a6ab503..e0ff5b736128 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2390,6 +2390,7 @@ i915_gem_object_put_pages_gtt(struct drm_i915_gem_object *obj,
{
struct sgt_iter sgt_iter;
struct page *page;
+ struct address_space *mapping;

__i915_gem_object_release_shmem(obj, pages, true);

@@ -2409,6 +2410,10 @@ i915_gem_object_put_pages_gtt(struct drm_i915_gem_object *obj,
}
obj->mm.dirty = false;

+ mapping = file_inode(obj->base.filp)->i_mapping;
+ mapping_clear_unevictable(mapping);
+ shmem_unlock_mapping(mapping);
+
sg_free_table(pages);
kfree(pages);
}
@@ -2551,6 +2556,7 @@ static int i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
* Fail silently without starting the shrinker
*/
mapping = obj->base.filp->f_mapping;
+ mapping_set_unevictable(mapping);
noreclaim = mapping_gfp_constraint(mapping, ~__GFP_RECLAIM);
noreclaim |= __GFP_NORETRY | __GFP_NOWARN;

@@ -2664,6 +2670,8 @@ static int i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
err_pages:
for_each_sgt_page(page, sgt_iter, st)
put_page(page);
+ mapping_clear_unevictable(mapping);
+ shmem_unlock_mapping(mapping);
sg_free_table(st);
kfree(st);

diff --git a/mm/shmem.c b/mm/shmem.c
index 446942677cd4..d1ce34c09df6 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -786,6 +786,7 @@ void shmem_unlock_mapping(struct address_space *mapping)
cond_resched();
}
}
+EXPORT_SYMBOL_GPL(shmem_unlock_mapping);

/*
* Remove range of pages and swap entries from radix tree, and free them.
@@ -3874,6 +3875,7 @@ int shmem_lock(struct file *file, int lock, struct user_struct *user)
void shmem_unlock_mapping(struct address_space *mapping)
{
}
+EXPORT_SYMBOL_GPL(shmem_unlock_mapping);

#ifdef CONFIG_MMU
unsigned long shmem_get_unmapped_area(struct file *file,
--
2.19.1.331.ge82ca0e54c-goog


2018-10-18 06:59:18

by Chris Wilson

Subject: Re: [PATCH 2/2] drm/i915: Mark pinned shmemfs pages as unevictable

Quoting Chris Wilson (2018-10-16 19:31:06)
> Fwiw, the shmem_unlock_mapping() call feels quite expensive, almost
> nullifying the advantage gained from not walking the lists in reclaim.
> I'll have better numbers in a couple of days.

Using a test ("igt/benchmarks/gem_syslatency -t 120 -b -m" on kbl)
consisting of cycletest with a background load of trying to allocate +
populate 2MiB (to hit thp) while catting all files to /dev/null, the
result of using mapping_set_unevictable is mixed.

Each test run consists of running cycletest for 120s measuring the mean
and maximum wakeup latency and then repeating that 120 times.

x baseline-mean.txt # no i915 activity
+ tip-mean.txt # current stock i915 with a continuous load
+------------------------------------------------------------------------+
| x + |
| x + |
|xx + |
|xx + |
|xx + |
|xx ++ |
|xx +++ |
|xx +++ |
|xx +++ |
|xx +++ |
|xx +++ |
|xx ++++ |
|xx +++++ |
|xx ++++++ |
|xx ++++++ |
|xx ++++++ |
|xx ++++++ |
|xx ++++++ |
|xx +++++++ + |
|xx ++++++++ + |
|xx ++++++++++ |
|xx+++++++++++ + + |
|xx+++++++++++ + + + + + ++ +|
| A |
||______M_A_________| |
+------------------------------------------------------------------------+
N Min Max Median Avg Stddev
x 120 359.153 876.915 863.548 778.80319 186.15875
+ 120 2475.318 73172.303 7666.812 9579.4671 9552.865

Our target then is 863us, but currently i915 adds 7ms of
uninterruptible delay on hitting the shrinker.

x baseline-mean.txt
+ mapping-mean.txt # applying the mapping_set_unevictable patch
* tip-mean.txt
+------------------------------------------------------------------------+
| x * + |
| x * + |
|xx * + |
|xx * + |
|xx * + |
|xx ** + |
|xx *** ++ |
|xx *** ++ |
|xx *** ++ |
|xx *** ++ |
|xx *** ++ |
|xx **** + ++ |
|xx *****+ ++ ++ |
|xx ******+ ++ ++ |
|xx ******+ ++ + ++ |
|xx ******+ ++ + ++ |
|xx ******+ ++ ++++ |
|xx ******+ ++ ++++ |
|xx ******* *+ ++++ |
|xx ******** *+ +++++ |
|xx **********+ +++++ |
|xx***********+*+++++* |
|xx***********+*+++++* * + * * ** *|
| A |
| |___AM___| |
||______M_A_________| |
+------------------------------------------------------------------------+
N Min Max Median Avg Stddev
x 120 359.153 876.915 863.548 778.80319 186.15875
+ 120 3291.633 26644.894 15829.186 14654.781 4466.6997
* 120 2475.318 73172.303 7666.812 9579.4671 9552.865

This shows that if we use the mapping_set_unevictable() +
shmem_unlock_mapping() approach, we add a further 8ms of uninterruptible
delay to the system... That's the opposite of our goal! ;)

x baseline-mean.txt
+ lock_vma-mean.txt # the old approach of pinning each page
* tip-mean.txt
+------------------------------------------------------------------------+
| *+ * |
| *+ * * |
| *+ * * |
| *+ * * |
| *+ *** |
| *+ *** |
| *+ *** |
| *+ *** |
| *+ *** |
| *+ *** |
| *+ *** |
| *+ **** |
| *+ ***** |
| *+ ****** |
| *+ ****** * |
| *+ ****** * |
| *+ ******* * |
| *+******** * |
| *+******** * |
| *+******** * |
| *+******** * * * |
| *+******** * * + * * * * * * *|
| A |
||MA| |
||_______M_A________| |
+------------------------------------------------------------------------+
N Min Max Median Avg Stddev
x 120 359.153 876.915 863.548 778.80319 186.15875
+ 120 511.415 18757.367 1276.302 1416.0016 1679.3965
* 120 2475.318 73172.303 7666.812 9579.4671 9552.865

By contrast, the previous approach of using mlock_page_vma() does
dramatically reduce the uninterruptible delay -- which suggests that
mapping_set_unevictable() isn't keeping our unshrinkable pages off the
reclaim LRU.

However, if instead of looking at the average uninterruptible delay
during the 120s of cycletest we look at the worst case, things get a
little more interesting. Currently i915 is terrible.

x baseline-max.txt
+ tip-max.txt
+------------------------------------------------------------------------+
| * |
[snip 100 lines]
| * |
| * |
| * |
| * |
| * |
| * |
| * |
| * |
| * +++ ++ + + + + +|
| A |
||_____M_A_______| |
+------------------------------------------------------------------------+
N Min Max Median Avg Stddev
x 120 7391 58543 51953 51564.033 5044.6375
+ 120 2284928 6.752085e+08 3385097 20825362 80352645

Worst case with no i915 is 52ms, but as soon as we load up i915 with
some work, the worst-case uninterruptible delay is on average 20s!!! As
suggested by the median, the data is severely skewed by a few outliers.
(The worst worst case is so bad that khungtaskd often makes an
appearance.)

x baseline-max.txt
+ mapping-max.txt
* tip-max.txt
+------------------------------------------------------------------------+
| * |
[snip 100 lines]
| * |
| * |
| * |
| * |
| * |
| * |
| * |
| *+ |
| *+*** ** * * +* * *|
| A |
| |_A__| |
||_____M_A_______| |
+------------------------------------------------------------------------+
N Min Max Median Avg Stddev
x 120 7391 58543 51953 51564.033 5044.6375
+ 120 3088140 2.9181602e+08 4022581 6528993.3 26278426
* 120 2284928 6.752085e+08 3385097 20825362 80352645

So while the mapping_set_unevictable patch did reduce the maximum
observed delay within the 4 hour sample, on average (median, to exclude
those worst worst-case outliers) it still fares worse than stock i915.
The mlock_page_vma() approach has no impact on the worst case wrt
stock.

My conclusion is that the mapping_set_unevictable patch makes both the
average and worst-case uninterruptible latency (as observed by other
users of the system) significantly worse. (Although the maximum latency
is not stable enough to draw a real conclusion, other than that i915 is
shockingly terrible.)
-Chris

2018-10-18 08:17:49

by Michal Hocko

Subject: Re: [PATCH 2/2] drm/i915: Mark pinned shmemfs pages as unevictable

On Thu 18-10-18 07:56:45, Chris Wilson wrote:
> Quoting Chris Wilson (2018-10-16 19:31:06)
> > Fwiw, the shmem_unlock_mapping() call feels quite expensive, almost
> > nullifying the advantage gained from not walking the lists in reclaim.
> > I'll have better numbers in a couple of days.
>
> Using a test ("igt/benchmarks/gem_syslatency -t 120 -b -m" on kbl)
> consisting of cycletest with a background load of trying to allocate +
> populate 2MiB (to hit thp) while catting all files to /dev/null, the
> result of using mapping_set_unevictable is mixed.

I haven't really read through your report completely yet, but I wanted
to point out that the above test scenario is unlikely to show the real
effect of the LRU scanning overhead, because shmem pages live on the
anonymous LRU list. With plenty of file page cache available we do not
even scan the anonymous LRU lists. You would have to generate a swapout
workload to test this properly.

On the other hand, if mapping_set_unevictable really has a measurably
bad performance impact, then this is probably not worth much, because
most workloads are swap-modest.
--
Michal Hocko
SUSE Labs