2020-02-05 16:36:27

by David Hildenbrand

Subject: [PATCH v1 0/3] virtio-balloon: Fixes + switch back to OOM handler

Two fixes for issues I stumbled over while working on patch #3.

Switch back to the good ol' OOM handler for VIRTIO_BALLOON_F_DEFLATE_ON_OOM
as the switch to the shrinker introduced some undesired side effects. Keep
the shrinker in place to handle VIRTIO_BALLOON_F_FREE_PAGE_HINT.
Lengthy discussion under [1].

I tested with QEMU and "deflate-on-oom=on". Works as expected. Did not
test the shrinker for VIRTIO_BALLOON_F_FREE_PAGE_HINT, as it is
hard to trigger (only when migrating a VM, and even then, it might not
trigger).

[1] https://www.spinics.net/lists/linux-virtualization/msg40863.html

David Hildenbrand (3):
virtio-balloon: Fix memory leak when unloading while hinting is in
progress
virtio_balloon: Fix memory leaks on errors in virtballoon_probe()
virtio-balloon: Switch back to OOM handler for
VIRTIO_BALLOON_F_DEFLATE_ON_OOM

drivers/virtio/virtio_balloon.c | 124 +++++++++++++++-----------------
1 file changed, 57 insertions(+), 67 deletions(-)

--
2.24.1


2020-02-05 16:36:27

by David Hildenbrand

Subject: [PATCH v1 3/3] virtio-balloon: Switch back to OOM handler for VIRTIO_BALLOON_F_DEFLATE_ON_OOM

Commit 71994620bb25 ("virtio_balloon: replace oom notifier with shrinker")
changed the behavior when deflation happens automatically. Instead of
deflating when called by the OOM handler, the shrinker is used.

However, the balloon is not simply some slab cache that should be
shrunk when under memory pressure. The shrinker does not have a concept of
priorities, so this behavior cannot be configured.

There was a report that this results in undesired side effects when
inflating the balloon to shrink the page cache. [1]
"When inflating the balloon against page cache (i.e. no free memory
remains) vmscan.c will both shrink page cache, but also invoke the
shrinkers -- including the balloon's shrinker. So the balloon
driver allocates memory which requires reclaim, vmscan gets this
memory by shrinking the balloon, and then the driver adds the
memory back to the balloon. Basically a busy no-op."

The name "deflate on OOM" makes it pretty clear when deflation should
happen - after other approaches to reclaim memory failed, not while
reclaiming. This allows us to minimize the footprint of a guest - memory
will only be taken out of the balloon when really needed.

In particular, a drop_slab() will result in the whole balloon getting
deflated - undesired. While handling it via the OOM handler might not be
perfect, it keeps existing behavior. If we want a different behavior, then
we need a new feature bit and document it properly (although, there should
be a clear use case and the intended effects should be well described).

Keep using the shrinker for VIRTIO_BALLOON_F_FREE_PAGE_HINT, because
this has no such side effects. Always register the shrinker with
VIRTIO_BALLOON_F_FREE_PAGE_HINT now. We are always allowed to reuse free
pages that are still to be processed by the guest. The hypervisor takes
care of identifying and resolving possible races between processing a
hinting request and the guest reusing a page.

In contrast to pre commit 71994620bb25 ("virtio_balloon: replace oom
notifier with shrinker"), don't add a module parameter to configure the
number of pages to deflate on OOM. Can be re-added if really needed.
Also, pay attention that leak_balloon() returns the number of 4k pages -
convert it properly in virtio_balloon_oom_notify().

Note1: using the OOM handler is frowned upon, but it really is what we
need for this feature.

Note2: without VIRTIO_BALLOON_F_MUST_TELL_HOST (iow, always with QEMU) we
could actually skip sending deflation requests to our hypervisor,
making the OOM path *very* simple. Basically freeing pages and
updating the balloon. If the communication with the host ever
becomes a problem on this call path.

[1] https://www.spinics.net/lists/linux-virtualization/msg40863.html

Reported-by: Tyler Sanderson <[email protected]>
Cc: Michael S. Tsirkin <[email protected]>
Cc: Wei Wang <[email protected]>
Cc: Alexander Duyck <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Nadav Amit <[email protected]>
Cc: Michal Hocko <[email protected]>
Signed-off-by: David Hildenbrand <[email protected]>
---
drivers/virtio/virtio_balloon.c | 107 +++++++++++++-------------------
1 file changed, 44 insertions(+), 63 deletions(-)

diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index 7e5d84caeb94..e7b18f556c5e 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -14,6 +14,7 @@
#include <linux/slab.h>
#include <linux/module.h>
#include <linux/balloon_compaction.h>
+#include <linux/oom.h>
#include <linux/wait.h>
#include <linux/mm.h>
#include <linux/mount.h>
@@ -27,7 +28,9 @@
*/
#define VIRTIO_BALLOON_PAGES_PER_PAGE (unsigned)(PAGE_SIZE >> VIRTIO_BALLOON_PFN_SHIFT)
#define VIRTIO_BALLOON_ARRAY_PFNS_MAX 256
-#define VIRTBALLOON_OOM_NOTIFY_PRIORITY 80
+/* Maximum number of (4k) pages to deflate on OOM notifications. */
+#define VIRTIO_BALLOON_OOM_NR_PAGES 256
+#define VIRTIO_BALLOON_OOM_NOTIFY_PRIORITY 80

#define VIRTIO_BALLOON_FREE_PAGE_ALLOC_FLAG (__GFP_NORETRY | __GFP_NOWARN | \
__GFP_NOMEMALLOC)
@@ -112,8 +115,11 @@ struct virtio_balloon {
/* Memory statistics */
struct virtio_balloon_stat stats[VIRTIO_BALLOON_S_NR];

- /* To register a shrinker to shrink memory upon memory pressure */
+ /* Shrinker to return free pages - VIRTIO_BALLOON_F_FREE_PAGE_HINT */
struct shrinker shrinker;
+
+ /* OOM notifier to deflate on OOM - VIRTIO_BALLOON_F_DEFLATE_ON_OOM */
+ struct notifier_block oom_nb;
};

static struct virtio_device_id id_table[] = {
@@ -786,50 +792,13 @@ static unsigned long shrink_free_pages(struct virtio_balloon *vb,
return blocks_freed * VIRTIO_BALLOON_HINT_BLOCK_PAGES;
}

-static unsigned long leak_balloon_pages(struct virtio_balloon *vb,
- unsigned long pages_to_free)
-{
- return leak_balloon(vb, pages_to_free * VIRTIO_BALLOON_PAGES_PER_PAGE) /
- VIRTIO_BALLOON_PAGES_PER_PAGE;
-}
-
-static unsigned long shrink_balloon_pages(struct virtio_balloon *vb,
- unsigned long pages_to_free)
-{
- unsigned long pages_freed = 0;
-
- /*
- * One invocation of leak_balloon can deflate at most
- * VIRTIO_BALLOON_ARRAY_PFNS_MAX balloon pages, so we call it
- * multiple times to deflate pages till reaching pages_to_free.
- */
- while (vb->num_pages && pages_freed < pages_to_free)
- pages_freed += leak_balloon_pages(vb,
- pages_to_free - pages_freed);
-
- update_balloon_size(vb);
-
- return pages_freed;
-}
-
static unsigned long virtio_balloon_shrinker_scan(struct shrinker *shrinker,
struct shrink_control *sc)
{
- unsigned long pages_to_free, pages_freed = 0;
struct virtio_balloon *vb = container_of(shrinker,
struct virtio_balloon, shrinker);

- pages_to_free = sc->nr_to_scan;
-
- if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT))
- pages_freed = shrink_free_pages(vb, pages_to_free);
-
- if (pages_freed >= pages_to_free)
- return pages_freed;
-
- pages_freed += shrink_balloon_pages(vb, pages_to_free - pages_freed);
-
- return pages_freed;
+ return shrink_free_pages(vb, sc->nr_to_scan);
}

static unsigned long virtio_balloon_shrinker_count(struct shrinker *shrinker,
@@ -837,26 +806,22 @@ static unsigned long virtio_balloon_shrinker_count(struct shrinker *shrinker,
{
struct virtio_balloon *vb = container_of(shrinker,
struct virtio_balloon, shrinker);
- unsigned long count;
-
- count = vb->num_pages / VIRTIO_BALLOON_PAGES_PER_PAGE;
- count += vb->num_free_page_blocks * VIRTIO_BALLOON_HINT_BLOCK_PAGES;

- return count;
+ return vb->num_free_page_blocks * VIRTIO_BALLOON_HINT_BLOCK_PAGES;
}

-static void virtio_balloon_unregister_shrinker(struct virtio_balloon *vb)
+static int virtio_balloon_oom_notify(struct notifier_block *nb,
+ unsigned long dummy, void *parm)
{
- unregister_shrinker(&vb->shrinker);
-}
+ struct virtio_balloon *vb = container_of(nb,
+ struct virtio_balloon, oom_nb);
+ unsigned long *freed = parm;

-static int virtio_balloon_register_shrinker(struct virtio_balloon *vb)
-{
- vb->shrinker.scan_objects = virtio_balloon_shrinker_scan;
- vb->shrinker.count_objects = virtio_balloon_shrinker_count;
- vb->shrinker.seeks = DEFAULT_SEEKS;
+ *freed += leak_balloon(vb, VIRTIO_BALLOON_OOM_NR_PAGES) /
+ VIRTIO_BALLOON_PAGES_PER_PAGE;
+ update_balloon_size(vb);

- return register_shrinker(&vb->shrinker);
+ return NOTIFY_OK;
}

static int virtballoon_probe(struct virtio_device *vdev)
@@ -933,22 +898,35 @@ static int virtballoon_probe(struct virtio_device *vdev)
virtio_cwrite(vb->vdev, struct virtio_balloon_config,
poison_val, &poison_val);
}
- }
- /*
- * We continue to use VIRTIO_BALLOON_F_DEFLATE_ON_OOM to decide if a
- * shrinker needs to be registered to relieve memory pressure.
- */
- if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM)) {
- err = virtio_balloon_register_shrinker(vb);
+
+ /*
+ * We're allowed to reuse any free pages, even if they are
+ * still to be processed by the host.
+ */
+ vb->shrinker.scan_objects = virtio_balloon_shrinker_scan;
+ vb->shrinker.count_objects = virtio_balloon_shrinker_count;
+ vb->shrinker.seeks = DEFAULT_SEEKS;
+ err = register_shrinker(&vb->shrinker);
if (err)
goto out_del_balloon_wq;
}
+ if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM)) {
+ vb->oom_nb.notifier_call = virtio_balloon_oom_notify;
+ vb->oom_nb.priority = VIRTIO_BALLOON_OOM_NOTIFY_PRIORITY;
+ err = register_oom_notifier(&vb->oom_nb);
+ if (err < 0)
+ goto out_unregister_shrinker;
+ }
+
virtio_device_ready(vdev);

if (towards_target(vb))
virtballoon_changed(vdev);
return 0;

+out_unregister_shrinker:
+ if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT))
+ unregister_shrinker(&vb->shrinker);
out_del_balloon_wq:
if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT))
destroy_workqueue(vb->balloon_wq);
@@ -987,8 +965,11 @@ static void virtballoon_remove(struct virtio_device *vdev)
{
struct virtio_balloon *vb = vdev->priv;

- if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM))
- virtio_balloon_unregister_shrinker(vb);
+ if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM))
+ unregister_oom_notifier(&vb->oom_nb);
+ if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT))
+ unregister_shrinker(&vb->shrinker);
+
spin_lock_irq(&vb->stop_update_lock);
vb->stop_update = true;
spin_unlock_irq(&vb->stop_update_lock);
--
2.24.1

2020-02-06 07:47:17

by Michael S. Tsirkin

Subject: Re: [PATCH v1 3/3] virtio-balloon: Switch back to OOM handler for VIRTIO_BALLOON_F_DEFLATE_ON_OOM

On Wed, Feb 05, 2020 at 05:34:02PM +0100, David Hildenbrand wrote:
> Commit 71994620bb25 ("virtio_balloon: replace oom notifier with shrinker")
> changed the behavior when deflation happens automatically. Instead of
> deflating when called by the OOM handler, the shrinker is used.
>
> However, the balloon is not simply some slab cache that should be
> shrunk when under memory pressure. The shrinker does not have a concept of
> priorities, so this behavior cannot be configured.
>
> There was a report that this results in undesired side effects when
> inflating the balloon to shrink the page cache. [1]
> "When inflating the balloon against page cache (i.e. no free memory
> remains) vmscan.c will both shrink page cache, but also invoke the
> shrinkers -- including the balloon's shrinker. So the balloon
> driver allocates memory which requires reclaim, vmscan gets this
> memory by shrinking the balloon, and then the driver adds the
> memory back to the balloon. Basically a busy no-op."
>
> The name "deflate on OOM" makes it pretty clear when deflation should
> happen - after other approaches to reclaim memory failed, not while
> reclaiming. This allows to minimize the footprint of a guest - memory
> will only be taken out of the balloon when really needed.
>
> Especially, a drop_slab() will result in the whole balloon getting
> deflated - undesired. While handling it via the OOM handler might not be
> perfect, it keeps existing behavior. If we want a different behavior, then
> we need a new feature bit and document it properly (although, there should
> be a clear use case and the intended effects should be well described).
>
> Keep using the shrinker for VIRTIO_BALLOON_F_FREE_PAGE_HINT, because
> this has no such side effects. Always register the shrinker with
> VIRTIO_BALLOON_F_FREE_PAGE_HINT now. We are always allowed to reuse free
> pages that are still to be processed by the guest. The hypervisor takes
> care of identifying and resolving possible races between processing a
> hinting request and the guest reusing a page.
>
> In contrast to pre commit 71994620bb25 ("virtio_balloon: replace oom
> notifier with shrinker"), don't add a module parameter to configure the
> number of pages to deflate on OOM. Can be re-added if really needed.

I agree. And to make this case even stronger:

The oom_pages module parameter was known to be broken: whatever its
value, we return at most VIRTIO_BALLOON_ARRAY_PFNS_MAX. So module
parameter values > 256 never worked, and it seems highly unlikely that
freeing 1Mbyte on OOM is too aggressive.
There was a patch
virtio-balloon: deflate up to oom_pages on OOM
by Wei Wang to try to fix it:
https://lore.kernel.org/r/[email protected]
but this was dropped.

> Also, pay attention that leak_balloon() returns the number of 4k pages -
> convert it properly in virtio_balloon_oom_notify().

Oh. So it was returning a wrong value originally (before 71994620bb25).
However what really matters for notifiers is whether the value is 0 -
whether we made progress. So it's cosmetic.

> Note1: using the OOM handler is frowned upon, but it really is what we
> need for this feature.

Quite. However, I went back researching why we dropped the OOM notifier,
and found this:

https://lore.kernel.org/r/[email protected]

To quote from there:

The balloon_lock was used to synchronize the access demand to elements
of struct virtio_balloon and its queue operations (please see commit
e22504296d). This prevents the concurrent run of the leak_balloon and
fill_balloon functions, thereby resulting in a deadlock issue on OOM:

fill_balloon: take balloon_lock and wait for OOM to get some memory;
oom_notify: release some inflated memory via leak_balloon();
leak_balloon: wait for balloon_lock to be released by fill_balloon.





> Note2: without VIRTIO_BALLOON_F_MUST_TELL_HOST (iow, always with QEMU) we
> could actually skip sending deflation requests to our hypervisor,
> making the OOM path *very* simple. Basically freeing pages and
> updating the balloon.

Well not exactly. !VIRTIO_BALLOON_F_MUST_TELL_HOST does not actually
mean "never tell host". It means "host will not discard pages in the
balloon, you can defer host notification until after use".

This was the original implementation:

+ if (vb->tell_host_first) {
+ tell_host(vb, vb->deflate_vq);
+ release_pages_by_pfn(vb->pfns, vb->num_pfns);
+ } else {
+ release_pages_by_pfn(vb->pfns, vb->num_pfns);
+ tell_host(vb, vb->deflate_vq);
+ }
+}

I don't know whether completely skipping host notifications
when !VIRTIO_BALLOON_F_MUST_TELL_HOST will break any hosts.

> If the communication with the host ever
> becomes a problem on this call path.
>
> [1] https://www.spinics.net/lists/linux-virtualization/msg40863.html
>
> Reported-by: Tyler Sanderson <[email protected]>
> Cc: Michael S. Tsirkin <[email protected]>
> Cc: Wei Wang <[email protected]>
> Cc: Alexander Duyck <[email protected]>
> Cc: David Rientjes <[email protected]>
> Cc: Nadav Amit <[email protected]>
> Cc: Michal Hocko <[email protected]>
> Signed-off-by: David Hildenbrand <[email protected]>
> ---
> drivers/virtio/virtio_balloon.c | 107 +++++++++++++-------------------
> 1 file changed, 44 insertions(+), 63 deletions(-)
>
> diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
> index 7e5d84caeb94..e7b18f556c5e 100644
> --- a/drivers/virtio/virtio_balloon.c
> +++ b/drivers/virtio/virtio_balloon.c
> @@ -14,6 +14,7 @@
> #include <linux/slab.h>
> #include <linux/module.h>
> #include <linux/balloon_compaction.h>
> +#include <linux/oom.h>
> #include <linux/wait.h>
> #include <linux/mm.h>
> #include <linux/mount.h>
> @@ -27,7 +28,9 @@
> */
> #define VIRTIO_BALLOON_PAGES_PER_PAGE (unsigned)(PAGE_SIZE >> VIRTIO_BALLOON_PFN_SHIFT)
> #define VIRTIO_BALLOON_ARRAY_PFNS_MAX 256
> -#define VIRTBALLOON_OOM_NOTIFY_PRIORITY 80
> +/* Maximum number of (4k) pages to deflate on OOM notifications. */
> +#define VIRTIO_BALLOON_OOM_NR_PAGES 256
> +#define VIRTIO_BALLOON_OOM_NOTIFY_PRIORITY 80
>
> #define VIRTIO_BALLOON_FREE_PAGE_ALLOC_FLAG (__GFP_NORETRY | __GFP_NOWARN | \
> __GFP_NOMEMALLOC)
> @@ -112,8 +115,11 @@ struct virtio_balloon {
> /* Memory statistics */
> struct virtio_balloon_stat stats[VIRTIO_BALLOON_S_NR];
>
> - /* To register a shrinker to shrink memory upon memory pressure */
> + /* Shrinker to return free pages - VIRTIO_BALLOON_F_FREE_PAGE_HINT */
> struct shrinker shrinker;
> +
> + /* OOM notifier to deflate on OOM - VIRTIO_BALLOON_F_DEFLATE_ON_OOM */
> + struct notifier_block oom_nb;
> };
>
> static struct virtio_device_id id_table[] = {
> @@ -786,50 +792,13 @@ static unsigned long shrink_free_pages(struct virtio_balloon *vb,
> return blocks_freed * VIRTIO_BALLOON_HINT_BLOCK_PAGES;
> }
>
> -static unsigned long leak_balloon_pages(struct virtio_balloon *vb,
> - unsigned long pages_to_free)
> -{
> - return leak_balloon(vb, pages_to_free * VIRTIO_BALLOON_PAGES_PER_PAGE) /
> - VIRTIO_BALLOON_PAGES_PER_PAGE;
> -}
> -
> -static unsigned long shrink_balloon_pages(struct virtio_balloon *vb,
> - unsigned long pages_to_free)
> -{
> - unsigned long pages_freed = 0;
> -
> - /*
> - * One invocation of leak_balloon can deflate at most
> - * VIRTIO_BALLOON_ARRAY_PFNS_MAX balloon pages, so we call it
> - * multiple times to deflate pages till reaching pages_to_free.
> - */
> - while (vb->num_pages && pages_freed < pages_to_free)
> - pages_freed += leak_balloon_pages(vb,
> - pages_to_free - pages_freed);
> -
> - update_balloon_size(vb);
> -
> - return pages_freed;
> -}
> -
> static unsigned long virtio_balloon_shrinker_scan(struct shrinker *shrinker,
> struct shrink_control *sc)
> {
> - unsigned long pages_to_free, pages_freed = 0;
> struct virtio_balloon *vb = container_of(shrinker,
> struct virtio_balloon, shrinker);
>
> - pages_to_free = sc->nr_to_scan;
> -
> - if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT))
> - pages_freed = shrink_free_pages(vb, pages_to_free);
> -
> - if (pages_freed >= pages_to_free)
> - return pages_freed;
> -
> - pages_freed += shrink_balloon_pages(vb, pages_to_free - pages_freed);
> -
> - return pages_freed;
> + return shrink_free_pages(vb, sc->nr_to_scan);
> }
>
> static unsigned long virtio_balloon_shrinker_count(struct shrinker *shrinker,
> @@ -837,26 +806,22 @@ static unsigned long virtio_balloon_shrinker_count(struct shrinker *shrinker,
> {
> struct virtio_balloon *vb = container_of(shrinker,
> struct virtio_balloon, shrinker);
> - unsigned long count;
> -
> - count = vb->num_pages / VIRTIO_BALLOON_PAGES_PER_PAGE;
> - count += vb->num_free_page_blocks * VIRTIO_BALLOON_HINT_BLOCK_PAGES;
>
> - return count;
> + return vb->num_free_page_blocks * VIRTIO_BALLOON_HINT_BLOCK_PAGES;
> }
>
> -static void virtio_balloon_unregister_shrinker(struct virtio_balloon *vb)
> +static int virtio_balloon_oom_notify(struct notifier_block *nb,
> + unsigned long dummy, void *parm)
> {
> - unregister_shrinker(&vb->shrinker);
> -}
> + struct virtio_balloon *vb = container_of(nb,
> + struct virtio_balloon, oom_nb);
> + unsigned long *freed = parm;
>
> -static int virtio_balloon_register_shrinker(struct virtio_balloon *vb)
> -{
> - vb->shrinker.scan_objects = virtio_balloon_shrinker_scan;
> - vb->shrinker.count_objects = virtio_balloon_shrinker_count;
> - vb->shrinker.seeks = DEFAULT_SEEKS;
> + *freed += leak_balloon(vb, VIRTIO_BALLOON_OOM_NR_PAGES) /
> + VIRTIO_BALLOON_PAGES_PER_PAGE;
> + update_balloon_size(vb);
>
> - return register_shrinker(&vb->shrinker);
> + return NOTIFY_OK;
> }
>
> static int virtballoon_probe(struct virtio_device *vdev)
> @@ -933,22 +898,35 @@ static int virtballoon_probe(struct virtio_device *vdev)
> virtio_cwrite(vb->vdev, struct virtio_balloon_config,
> poison_val, &poison_val);
> }
> - }
> - /*
> - * We continue to use VIRTIO_BALLOON_F_DEFLATE_ON_OOM to decide if a
> - * shrinker needs to be registered to relieve memory pressure.
> - */
> - if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM)) {
> - err = virtio_balloon_register_shrinker(vb);
> +
> + /*
> + * We're allowed to reuse any free pages, even if they are
> + * still to be processed by the host.
> + */
> + vb->shrinker.scan_objects = virtio_balloon_shrinker_scan;
> + vb->shrinker.count_objects = virtio_balloon_shrinker_count;
> + vb->shrinker.seeks = DEFAULT_SEEKS;
> + err = register_shrinker(&vb->shrinker);
> if (err)
> goto out_del_balloon_wq;
> }
> + if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM)) {
> + vb->oom_nb.notifier_call = virtio_balloon_oom_notify;
> + vb->oom_nb.priority = VIRTIO_BALLOON_OOM_NOTIFY_PRIORITY;
> + err = register_oom_notifier(&vb->oom_nb);
> + if (err < 0)
> + goto out_unregister_shrinker;
> + }
> +
> virtio_device_ready(vdev);
>
> if (towards_target(vb))
> virtballoon_changed(vdev);
> return 0;
>
> +out_unregister_shrinker:
> + if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT))
> + unregister_shrinker(&vb->shrinker);
> out_del_balloon_wq:
> if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT))
> destroy_workqueue(vb->balloon_wq);
> @@ -987,8 +965,11 @@ static void virtballoon_remove(struct virtio_device *vdev)
> {
> struct virtio_balloon *vb = vdev->priv;
>
> - if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM))
> - virtio_balloon_unregister_shrinker(vb);
> + if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM))
> + unregister_oom_notifier(&vb->oom_nb);
> + if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT))
> + unregister_shrinker(&vb->shrinker);
> +
> spin_lock_irq(&vb->stop_update_lock);
> vb->stop_update = true;
> spin_unlock_irq(&vb->stop_update_lock);
> --
> 2.24.1

2020-02-06 08:44:22

by David Hildenbrand

Subject: Re: [PATCH v1 3/3] virtio-balloon: Switch back to OOM handler for VIRTIO_BALLOON_F_DEFLATE_ON_OOM

On 06.02.20 08:40, Michael S. Tsirkin wrote:
> On Wed, Feb 05, 2020 at 05:34:02PM +0100, David Hildenbrand wrote:
>> Commit 71994620bb25 ("virtio_balloon: replace oom notifier with shrinker")
>> changed the behavior when deflation happens automatically. Instead of
>> deflating when called by the OOM handler, the shrinker is used.
>>
>> However, the balloon is not simply some slab cache that should be
>> shrunk when under memory pressure. The shrinker does not have a concept of
>> priorities, so this behavior cannot be configured.
>>
>> There was a report that this results in undesired side effects when
>> inflating the balloon to shrink the page cache. [1]
>> "When inflating the balloon against page cache (i.e. no free memory
>> remains) vmscan.c will both shrink page cache, but also invoke the
>> shrinkers -- including the balloon's shrinker. So the balloon
>> driver allocates memory which requires reclaim, vmscan gets this
>> memory by shrinking the balloon, and then the driver adds the
>> memory back to the balloon. Basically a busy no-op."
>>
>> The name "deflate on OOM" makes it pretty clear when deflation should
>> happen - after other approaches to reclaim memory failed, not while
>> reclaiming. This allows to minimize the footprint of a guest - memory
>> will only be taken out of the balloon when really needed.
>>
>> Especially, a drop_slab() will result in the whole balloon getting
>> deflated - undesired. While handling it via the OOM handler might not be
>> perfect, it keeps existing behavior. If we want a different behavior, then
>> we need a new feature bit and document it properly (although, there should
>> be a clear use case and the intended effects should be well described).
>>
>> Keep using the shrinker for VIRTIO_BALLOON_F_FREE_PAGE_HINT, because
>> this has no such side effects. Always register the shrinker with
>> VIRTIO_BALLOON_F_FREE_PAGE_HINT now. We are always allowed to reuse free
>> pages that are still to be processed by the guest. The hypervisor takes
>> care of identifying and resolving possible races between processing a
>> hinting request and the guest reusing a page.
>>
>> In contrast to pre commit 71994620bb25 ("virtio_balloon: replace oom
>> notifier with shrinker"), don't add a module parameter to configure the
>> number of pages to deflate on OOM. Can be re-added if really needed.
>
> I agree. And to make this case even stronger:
>
> The oom_pages module parameter was known to be broken: whatever its
> value, we return at most VIRTIO_BALLOON_ARRAY_PFNS_MAX. So module
> parameter values > 256 never worked, and it seems highly unlikely that
> freeing 1Mbyte on OOM is too aggressive.
> There was a patch
> virtio-balloon: deflate up to oom_pages on OOM
> by Wei Wang to try to fix it:
> https://lore.kernel.org/r/[email protected]
> but this was dropped.

Makes sense. 1MB is usually good enough.

>
>> Also, pay attention that leak_balloon() returns the number of 4k pages -
>> convert it properly in virtio_balloon_oom_notify().
>
> Oh. So it was returning a wrong value originally (before 71994620bb25).
> However what really matters for notifiers is whether the value is 0 -
> whether we made progress. So it's cosmetic.

Yes, that's also my understanding.

>
>> Note1: using the OOM handler is frowned upon, but it really is what we
>> need for this feature.
>
> Quite. However, I went back researching why we dropped the OOM notifier,
> and found this:
>
> https://lore.kernel.org/r/[email protected]
>
> To quote from there:
>
> The balloon_lock was used to synchronize the access demand to elements
> of struct virtio_balloon and its queue operations (please see commit
> e22504296d). This prevents the concurrent run of the leak_balloon and
> fill_balloon functions, thereby resulting in a deadlock issue on OOM:
>
> fill_balloon: take balloon_lock and wait for OOM to get some memory;
> oom_notify: release some inflated memory via leak_balloon();
> leak_balloon: wait for balloon_lock to be released by fill_balloon.

fill_balloon does the allocation *before* taking the lock. tell_host()
should not allocate memory AFAIR. So how could this ever happen?

Anyhow, we could simply work around this by doing a trylock in
fill_balloon() and retrying in the caller. That should be easy. But I
want to understand first, how something like that would even be possible.

>> Note2: without VIRTIO_BALLOON_F_MUST_TELL_HOST (iow, always with QEMU) we
>> could actually skip sending deflation requests to our hypervisor,
>> making the OOM path *very* simple. Basically freeing pages and
>> updating the balloon.
>
> Well not exactly. !VIRTIO_BALLOON_F_MUST_TELL_HOST does not actually
> mean "never tell host". It means "host will not discard pages in the
> balloon, you can defer host notification until after use".
>
> This was the original implementation:
>
> + if (vb->tell_host_first) {
> + tell_host(vb, vb->deflate_vq);
> + release_pages_by_pfn(vb->pfns, vb->num_pfns);
> + } else {
> + release_pages_by_pfn(vb->pfns, vb->num_pfns);
> + tell_host(vb, vb->deflate_vq);
> + }
> +}
>
> I don't know whether completely skipping host notifications
> when !VIRTIO_BALLOON_F_MUST_TELL_HOST will break any hosts.

We discussed this already somewhere else, but here is again what I found.

commit bf50e69f63d21091e525185c3ae761412be0ba72
Author: Dave Hansen <[email protected]>
Date: Thu Apr 7 10:43:25 2011 -0700

virtio balloon: kill tell-host-first logic

The virtio balloon driver has a VIRTIO_BALLOON_F_MUST_TELL_HOST
feature bit. Whenever the bit is set, the guest kernel must
always tell the host before we free pages back to the allocator.
Without this feature, we might free a page (and have another
user touch it) while the hypervisor is unprepared for it.

But, if the bit is _not_ set, we are under no obligation to
reverse the order; we're under no obligation to do _anything_.
As of now, qemu-kvm defines the bit, but doesn't set it.

MUST_TELL_HOST really means "no need to deflate, just reuse a page". We
should finally document this somewhere.

--
Thanks,

David / dhildenb

2020-02-06 09:31:37

by Wang, Wei W

Subject: RE: [PATCH v1 3/3] virtio-balloon: Switch back to OOM handler for VIRTIO_BALLOON_F_DEFLATE_ON_OOM

On Thursday, February 6, 2020 12:34 AM, David Hildenbrand wrote:
> Commit 71994620bb25 ("virtio_balloon: replace oom notifier with shrinker")
> changed the behavior when deflation happens automatically. Instead of
> deflating when called by the OOM handler, the shrinker is used.
>
> However, the balloon is not simply some slab cache that should be shrunk
> when under memory pressure. The shrinker does not have a concept of
> priorities, so this behavior cannot be configured.
>
> There was a report that this results in undesired side effects when inflating
> the balloon to shrink the page cache. [1]
> "When inflating the balloon against page cache (i.e. no free memory
> remains) vmscan.c will both shrink page cache, but also invoke the
> shrinkers -- including the balloon's shrinker. So the balloon
> driver allocates memory which requires reclaim, vmscan gets this
> memory by shrinking the balloon, and then the driver adds the
> memory back to the balloon. Basically a busy no-op."

Not sure if we need to go back to OOM, which has many drawbacks as we discussed.
Just posted out another approach, which is simple.

Best,
Wei

2020-02-06 09:37:51

by Michael S. Tsirkin

Subject: Re: [PATCH v1 3/3] virtio-balloon: Switch back to OOM handler for VIRTIO_BALLOON_F_DEFLATE_ON_OOM

On Thu, Feb 06, 2020 at 10:05:43AM +0100, David Hildenbrand wrote:
> >> commit bf50e69f63d21091e525185c3ae761412be0ba72
> >> Author: Dave Hansen <[email protected]>
> >> Date: Thu Apr 7 10:43:25 2011 -0700
> >>
> >> virtio balloon: kill tell-host-first logic
> >>
> >> The virtio balloon driver has a VIRTIO_BALLOON_F_MUST_TELL_HOST
> >> feature bit. Whenever the bit is set, the guest kernel must
> >> always tell the host before we free pages back to the allocator.
> >> Without this feature, we might free a page (and have another
> >> user touch it) while the hypervisor is unprepared for it.
> >>
> >> But, if the bit is _not_ set, we are under no obligation to
> >> reverse the order; we're under no obligation to do _anything_.
> >> As of now, qemu-kvm defines the bit, but doesn't set it.
> >
> > Well this is not what the spec says in the end.
>
> I didn't check the spec, maybe I should do that :)
>
> > To continue that commit message:
> >
> > This patch makes the "tell host first" logic the only case. This
> > should make everybody happy, and reduce the amount of untested or
> > untestable code in the kernel.
>
> Yeah, but this comment explains that the current deflate is only in
> place, because it makes the code simpler (to support both cases). Of
> course, doing the deflate might result in performance improvements.
> (e.g., MADV_WILLNEED)
>
> >
> > you can try proposing the change to the virtio TC, see what do others
> > think.
>
> We can just drop the comment from this patch for now. The tell_host
> should not be an issue AFAIKS.

I guess it's a good idea.


> --
> Thanks,
>
> David / dhildenb

2020-02-06 09:39:45

by Michael S. Tsirkin

Subject: Re: [PATCH v1 3/3] virtio-balloon: Switch back to OOM handler for VIRTIO_BALLOON_F_DEFLATE_ON_OOM

On Wed, Feb 05, 2020 at 05:34:02PM +0100, David Hildenbrand wrote:
> Commit 71994620bb25 ("virtio_balloon: replace oom notifier with shrinker")
> changed the behavior when deflation happens automatically. Instead of
> deflating when called by the OOM handler, the shrinker is used.
>
> However, the balloon is not simply some slab cache that should be
> shrunk when under memory pressure. The shrinker does not have a concept of
> priorities, so this behavior cannot be configured.
>
> There was a report that this results in undesired side effects when
> inflating the balloon to shrink the page cache. [1]
> "When inflating the balloon against page cache (i.e. no free memory
> remains) vmscan.c will both shrink page cache, but also invoke the
> shrinkers -- including the balloon's shrinker. So the balloon
> driver allocates memory which requires reclaim, vmscan gets this
> memory by shrinking the balloon, and then the driver adds the
> memory back to the balloon. Basically a busy no-op."
>
> The name "deflate on OOM" makes it pretty clear when deflation should
> happen - after other approaches to reclaim memory failed, not while
> reclaiming. This allows minimizing the footprint of a guest - memory
> will only be taken out of the balloon when really needed.
>
> Especially, a drop_slab() will result in the whole balloon getting
> deflated - undesired. While handling it via the OOM handler might not be
> perfect, it keeps existing behavior. If we want a different behavior, then
> we need a new feature bit and document it properly (although, there should
> be a clear use case and the intended effects should be well described).
>
> Keep using the shrinker for VIRTIO_BALLOON_F_FREE_PAGE_HINT, because
> this has no such side effects. Always register the shrinker with
> VIRTIO_BALLOON_F_FREE_PAGE_HINT now. We are always allowed to reuse free
> pages that are still to be processed by the guest. The hypervisor takes
> care of identifying and resolving possible races between processing a
> hinting request and the guest reusing a page.
>
> In contrast to pre commit 71994620bb25 ("virtio_balloon: replace oom
> notifier with shrinker"), don't add a module parameter to configure the
> number of pages to deflate on OOM. Can be re-added if really needed.
> Also, pay attention that leak_balloon() returns the number of 4k pages -
> convert it properly in virtio_balloon_oom_notify().
>
> Note1: using the OOM handler is frowned upon, but it really is what we
> need for this feature.
>
> Note2: without VIRTIO_BALLOON_F_MUST_TELL_HOST (iow, always with QEMU) we
> could actually skip sending deflation requests to our hypervisor,
> making the OOM path *very* simple. Basically freeing pages and
> updating the balloon. If the communication with the host ever
> becomes a problem on this call path.
>
> [1] https://www.spinics.net/lists/linux-virtualization/msg40863.html
>
> Reported-by: Tyler Sanderson <[email protected]>
> Cc: Michael S. Tsirkin <[email protected]>
> Cc: Wei Wang <[email protected]>
> Cc: Alexander Duyck <[email protected]>
> Cc: David Rientjes <[email protected]>
> Cc: Nadav Amit <[email protected]>
> Cc: Michal Hocko <[email protected]>
> Signed-off-by: David Hildenbrand <[email protected]>

So the revert looks ok, from that POV and with commit log changes

Acked-by: Michael S. Tsirkin <[email protected]>

however, let's see what others think, and whether Wei can come
up with a fixup for the shrinker.


> ---
> drivers/virtio/virtio_balloon.c | 107 +++++++++++++-------------------
> 1 file changed, 44 insertions(+), 63 deletions(-)
>
> diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
> index 7e5d84caeb94..e7b18f556c5e 100644
> --- a/drivers/virtio/virtio_balloon.c
> +++ b/drivers/virtio/virtio_balloon.c
> @@ -14,6 +14,7 @@
> #include <linux/slab.h>
> #include <linux/module.h>
> #include <linux/balloon_compaction.h>
> +#include <linux/oom.h>
> #include <linux/wait.h>
> #include <linux/mm.h>
> #include <linux/mount.h>
> @@ -27,7 +28,9 @@
> */
> #define VIRTIO_BALLOON_PAGES_PER_PAGE (unsigned)(PAGE_SIZE >> VIRTIO_BALLOON_PFN_SHIFT)
> #define VIRTIO_BALLOON_ARRAY_PFNS_MAX 256
> -#define VIRTBALLOON_OOM_NOTIFY_PRIORITY 80
> +/* Maximum number of (4k) pages to deflate on OOM notifications. */
> +#define VIRTIO_BALLOON_OOM_NR_PAGES 256
> +#define VIRTIO_BALLOON_OOM_NOTIFY_PRIORITY 80
>
> #define VIRTIO_BALLOON_FREE_PAGE_ALLOC_FLAG (__GFP_NORETRY | __GFP_NOWARN | \
> __GFP_NOMEMALLOC)
> @@ -112,8 +115,11 @@ struct virtio_balloon {
> /* Memory statistics */
> struct virtio_balloon_stat stats[VIRTIO_BALLOON_S_NR];
>
> - /* To register a shrinker to shrink memory upon memory pressure */
> + /* Shrinker to return free pages - VIRTIO_BALLOON_F_FREE_PAGE_HINT */
> struct shrinker shrinker;
> +
> + /* OOM notifier to deflate on OOM - VIRTIO_BALLOON_F_DEFLATE_ON_OOM */
> + struct notifier_block oom_nb;
> };
>
> static struct virtio_device_id id_table[] = {
> @@ -786,50 +792,13 @@ static unsigned long shrink_free_pages(struct virtio_balloon *vb,
> return blocks_freed * VIRTIO_BALLOON_HINT_BLOCK_PAGES;
> }
>
> -static unsigned long leak_balloon_pages(struct virtio_balloon *vb,
> - unsigned long pages_to_free)
> -{
> - return leak_balloon(vb, pages_to_free * VIRTIO_BALLOON_PAGES_PER_PAGE) /
> - VIRTIO_BALLOON_PAGES_PER_PAGE;
> -}
> -
> -static unsigned long shrink_balloon_pages(struct virtio_balloon *vb,
> - unsigned long pages_to_free)
> -{
> - unsigned long pages_freed = 0;
> -
> - /*
> - * One invocation of leak_balloon can deflate at most
> - * VIRTIO_BALLOON_ARRAY_PFNS_MAX balloon pages, so we call it
> - * multiple times to deflate pages till reaching pages_to_free.
> - */
> - while (vb->num_pages && pages_freed < pages_to_free)
> - pages_freed += leak_balloon_pages(vb,
> - pages_to_free - pages_freed);
> -
> - update_balloon_size(vb);
> -
> - return pages_freed;
> -}
> -
> static unsigned long virtio_balloon_shrinker_scan(struct shrinker *shrinker,
> struct shrink_control *sc)
> {
> - unsigned long pages_to_free, pages_freed = 0;
> struct virtio_balloon *vb = container_of(shrinker,
> struct virtio_balloon, shrinker);
>
> - pages_to_free = sc->nr_to_scan;
> -
> - if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT))
> - pages_freed = shrink_free_pages(vb, pages_to_free);
> -
> - if (pages_freed >= pages_to_free)
> - return pages_freed;
> -
> - pages_freed += shrink_balloon_pages(vb, pages_to_free - pages_freed);
> -
> - return pages_freed;
> + return shrink_free_pages(vb, sc->nr_to_scan);
> }
>
> static unsigned long virtio_balloon_shrinker_count(struct shrinker *shrinker,
> @@ -837,26 +806,22 @@ static unsigned long virtio_balloon_shrinker_count(struct shrinker *shrinker,
> {
> struct virtio_balloon *vb = container_of(shrinker,
> struct virtio_balloon, shrinker);
> - unsigned long count;
> -
> - count = vb->num_pages / VIRTIO_BALLOON_PAGES_PER_PAGE;
> - count += vb->num_free_page_blocks * VIRTIO_BALLOON_HINT_BLOCK_PAGES;
>
> - return count;
> + return vb->num_free_page_blocks * VIRTIO_BALLOON_HINT_BLOCK_PAGES;
> }
>
> -static void virtio_balloon_unregister_shrinker(struct virtio_balloon *vb)
> +static int virtio_balloon_oom_notify(struct notifier_block *nb,
> + unsigned long dummy, void *parm)
> {
> - unregister_shrinker(&vb->shrinker);
> -}
> + struct virtio_balloon *vb = container_of(nb,
> + struct virtio_balloon, oom_nb);
> + unsigned long *freed = parm;
>
> -static int virtio_balloon_register_shrinker(struct virtio_balloon *vb)
> -{
> - vb->shrinker.scan_objects = virtio_balloon_shrinker_scan;
> - vb->shrinker.count_objects = virtio_balloon_shrinker_count;
> - vb->shrinker.seeks = DEFAULT_SEEKS;
> + *freed += leak_balloon(vb, VIRTIO_BALLOON_OOM_NR_PAGES) /
> + VIRTIO_BALLOON_PAGES_PER_PAGE;
> + update_balloon_size(vb);
>
> - return register_shrinker(&vb->shrinker);
> + return NOTIFY_OK;
> }
>
> static int virtballoon_probe(struct virtio_device *vdev)
> @@ -933,22 +898,35 @@ static int virtballoon_probe(struct virtio_device *vdev)
> virtio_cwrite(vb->vdev, struct virtio_balloon_config,
> poison_val, &poison_val);
> }
> - }
> - /*
> - * We continue to use VIRTIO_BALLOON_F_DEFLATE_ON_OOM to decide if a
> - * shrinker needs to be registered to relieve memory pressure.
> - */
> - if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM)) {
> - err = virtio_balloon_register_shrinker(vb);
> +
> + /*
> + * We're allowed to reuse any free pages, even if they are
> + * still to be processed by the host.
> + */
> + vb->shrinker.scan_objects = virtio_balloon_shrinker_scan;
> + vb->shrinker.count_objects = virtio_balloon_shrinker_count;
> + vb->shrinker.seeks = DEFAULT_SEEKS;
> + err = register_shrinker(&vb->shrinker);
> if (err)
> goto out_del_balloon_wq;
> }
> + if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM)) {
> + vb->oom_nb.notifier_call = virtio_balloon_oom_notify;
> + vb->oom_nb.priority = VIRTIO_BALLOON_OOM_NOTIFY_PRIORITY;
> + err = register_oom_notifier(&vb->oom_nb);
> + if (err < 0)
> + goto out_unregister_shrinker;
> + }
> +
> virtio_device_ready(vdev);
>
> if (towards_target(vb))
> virtballoon_changed(vdev);
> return 0;
>
> +out_unregister_shrinker:
> + if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT))
> + unregister_shrinker(&vb->shrinker);
> out_del_balloon_wq:
> if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT))
> destroy_workqueue(vb->balloon_wq);
> @@ -987,8 +965,11 @@ static void virtballoon_remove(struct virtio_device *vdev)
> {
> struct virtio_balloon *vb = vdev->priv;
>
> - if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM))
> - virtio_balloon_unregister_shrinker(vb);
> + if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM))
> + unregister_oom_notifier(&vb->oom_nb);
> + if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT))
> + unregister_shrinker(&vb->shrinker);
> +
> spin_lock_irq(&vb->stop_update_lock);
> vb->stop_update = true;
> spin_unlock_irq(&vb->stop_update_lock);
> --
> 2.24.1

2020-02-06 09:44:24

by David Hildenbrand

Subject: Re: [PATCH v1 3/3] virtio-balloon: Switch back to OOM handler for VIRTIO_BALLOON_F_DEFLATE_ON_OOM

On 06.02.20 10:12, Michael S. Tsirkin wrote:
> On Wed, Feb 05, 2020 at 05:34:02PM +0100, David Hildenbrand wrote:
>> Commit 71994620bb25 ("virtio_balloon: replace oom notifier with shrinker")
>> changed the behavior when deflation happens automatically. Instead of
>> deflating when called by the OOM handler, the shrinker is used.
>>
>> However, the balloon is not simply some slab cache that should be
>> shrunk when under memory pressure. The shrinker does not have a concept of
>> priorities, so this behavior cannot be configured.
>>
>> There was a report that this results in undesired side effects when
>> inflating the balloon to shrink the page cache. [1]
>> "When inflating the balloon against page cache (i.e. no free memory
>> remains) vmscan.c will both shrink page cache, but also invoke the
>> shrinkers -- including the balloon's shrinker. So the balloon
>> driver allocates memory which requires reclaim, vmscan gets this
>> memory by shrinking the balloon, and then the driver adds the
>> memory back to the balloon. Basically a busy no-op."
>>
>> The name "deflate on OOM" makes it pretty clear when deflation should
>> happen - after other approaches to reclaim memory failed, not while
>> reclaiming. This allows minimizing the footprint of a guest - memory
>> will only be taken out of the balloon when really needed.
>>
>> Especially, a drop_slab() will result in the whole balloon getting
>> deflated - undesired. While handling it via the OOM handler might not be
>> perfect, it keeps existing behavior. If we want a different behavior, then
>> we need a new feature bit and document it properly (although, there should
>> be a clear use case and the intended effects should be well described).
>>
>> Keep using the shrinker for VIRTIO_BALLOON_F_FREE_PAGE_HINT, because
>> this has no such side effects. Always register the shrinker with
>> VIRTIO_BALLOON_F_FREE_PAGE_HINT now. We are always allowed to reuse free
>> pages that are still to be processed by the guest. The hypervisor takes
>> care of identifying and resolving possible races between processing a
>> hinting request and the guest reusing a page.
>>
>> In contrast to pre commit 71994620bb25 ("virtio_balloon: replace oom
>> notifier with shrinker"), don't add a module parameter to configure the
>> number of pages to deflate on OOM. Can be re-added if really needed.
>> Also, pay attention that leak_balloon() returns the number of 4k pages -
>> convert it properly in virtio_balloon_oom_notify().
>>
>> Note1: using the OOM handler is frowned upon, but it really is what we
>> need for this feature.
>>
>> Note2: without VIRTIO_BALLOON_F_MUST_TELL_HOST (iow, always with QEMU) we
>> could actually skip sending deflation requests to our hypervisor,
>> making the OOM path *very* simple. Basically freeing pages and
>> updating the balloon. If the communication with the host ever
>> becomes a problem on this call path.
>>
>> [1] https://www.spinics.net/lists/linux-virtualization/msg40863.html
>>
>> Reported-by: Tyler Sanderson <[email protected]>
>> Cc: Michael S. Tsirkin <[email protected]>
>> Cc: Wei Wang <[email protected]>
>> Cc: Alexander Duyck <[email protected]>
>> Cc: David Rientjes <[email protected]>
>> Cc: Nadav Amit <[email protected]>
>> Cc: Michal Hocko <[email protected]>
>> Signed-off-by: David Hildenbrand <[email protected]>
>
>
> I guess we should add a Fixes tag to the patch it's reverting;
> this way it gets backported and hypervisors will be able to rely on
> the OOM behaviour.

Makes sense,

Fixes: 71994620bb25 ("virtio_balloon: replace oom notifier with shrinker")

--
Thanks,

David / dhildenb

2020-02-06 11:02:48

by Michael S. Tsirkin

Subject: Re: [PATCH v1 3/3] virtio-balloon: Switch back to OOM handler for VIRTIO_BALLOON_F_DEFLATE_ON_OOM

On Thu, Feb 06, 2020 at 09:42:34AM +0100, David Hildenbrand wrote:
> On 06.02.20 08:40, Michael S. Tsirkin wrote:
> > On Wed, Feb 05, 2020 at 05:34:02PM +0100, David Hildenbrand wrote:
> >> Commit 71994620bb25 ("virtio_balloon: replace oom notifier with shrinker")
> >> changed the behavior when deflation happens automatically. Instead of
> >> deflating when called by the OOM handler, the shrinker is used.
> >>
> >> However, the balloon is not simply some slab cache that should be
> >> shrunk when under memory pressure. The shrinker does not have a concept of
> >> priorities, so this behavior cannot be configured.
> >>
> >> There was a report that this results in undesired side effects when
> >> inflating the balloon to shrink the page cache. [1]
> >> "When inflating the balloon against page cache (i.e. no free memory
> >> remains) vmscan.c will both shrink page cache, but also invoke the
> >> shrinkers -- including the balloon's shrinker. So the balloon
> >> driver allocates memory which requires reclaim, vmscan gets this
> >> memory by shrinking the balloon, and then the driver adds the
> >> memory back to the balloon. Basically a busy no-op."
> >>
> >> The name "deflate on OOM" makes it pretty clear when deflation should
> >> happen - after other approaches to reclaim memory failed, not while
> >> reclaiming. This allows minimizing the footprint of a guest - memory
> >> will only be taken out of the balloon when really needed.
> >>
> >> Especially, a drop_slab() will result in the whole balloon getting
> >> deflated - undesired. While handling it via the OOM handler might not be
> >> perfect, it keeps existing behavior. If we want a different behavior, then
> >> we need a new feature bit and document it properly (although, there should
> >> be a clear use case and the intended effects should be well described).
> >>
> >> Keep using the shrinker for VIRTIO_BALLOON_F_FREE_PAGE_HINT, because
> >> this has no such side effects. Always register the shrinker with
> >> VIRTIO_BALLOON_F_FREE_PAGE_HINT now. We are always allowed to reuse free
> >> pages that are still to be processed by the guest. The hypervisor takes
> >> care of identifying and resolving possible races between processing a
> >> hinting request and the guest reusing a page.
> >>
> >> In contrast to pre commit 71994620bb25 ("virtio_balloon: replace oom
> >> notifier with shrinker"), don't add a module parameter to configure the
> >> number of pages to deflate on OOM. Can be re-added if really needed.
> >
> > I agree. And to make this case even stronger:
> >
> > The oom_pages module parameter was known to be broken: whatever its
> > value, we return at most VIRTIO_BALLOON_ARRAY_PFNS_MAX. So module
> > parameter values > 256 never worked, and it seems highly unlikely that
> > freeing 1Mbyte on OOM is too aggressive.
> > There was a patch
> > virtio-balloon: deflate up to oom_pages on OOM
> > by Wei Wang to try to fix it:
> > https://lore.kernel.org/r/[email protected]
> > but this was dropped.
>
> Makes sense. 1MB is usually good enough.
>
> >
> >> Also, pay attention that leak_balloon() returns the number of 4k pages -
> >> convert it properly in virtio_balloon_oom_notify().
> >
> > Oh. So it was returning a wrong value originally (before 71994620bb25).
> > However what really matters for notifiers is whether the value is 0 -
> > whether we made progress. So it's cosmetic.
>
> Yes, that's also my understanding.
>
> >
> >> Note1: using the OOM handler is frowned upon, but it really is what we
> >> need for this feature.
> >
> > Quite. However, I went back researching why we dropped the OOM notifier,
> > and found this:
> >
> > https://lore.kernel.org/r/[email protected]
> >
> > To quote from there:
> >
> > The balloon_lock was used to synchronize the access demand to elements
> > of struct virtio_balloon and its queue operations (please see commit
> > e22504296d). This prevents the concurrent run of the leak_balloon and
> > fill_balloon functions, thereby resulting in a deadlock issue on OOM:
> >
> > fill_balloon: take balloon_lock and wait for OOM to get some memory;
> > oom_notify: release some inflated memory via leak_balloon();
> > leak_balloon: wait for balloon_lock to be released by fill_balloon.
>
> fill_balloon does the allocation *before* taking the lock. tell_host()
> should not allocate memory AFAIR. So how could this ever happen?
>
> Anyhow, we could simply work around this by doing a trylock in
> fill_balloon() and retrying in the caller. That should be easy. But I
> want to understand first, how something like that would even be possible.

Hmm it looks like you are right. Sorry!


> >> Note2: without VIRTIO_BALLOON_F_MUST_TELL_HOST (iow, always with QEMU) we
> >> could actually skip sending deflation requests to our hypervisor,
> >> making the OOM path *very* simple. Basically freeing pages and
> >> updating the balloon.
> >
> > Well not exactly. !VIRTIO_BALLOON_F_MUST_TELL_HOST does not actually
> > mean "never tell host". It means "host will not discard pages in the
> > balloon, you can defer host notification until after use".
> >
> > This was the original implementation:
> >
> > + if (vb->tell_host_first) {
> > + tell_host(vb, vb->deflate_vq);
> > + release_pages_by_pfn(vb->pfns, vb->num_pfns);
> > + } else {
> > + release_pages_by_pfn(vb->pfns, vb->num_pfns);
> > + tell_host(vb, vb->deflate_vq);
> > + }
> > +}
> >
> > I don't know whether completely skipping host notifications
> > when !VIRTIO_BALLOON_F_MUST_TELL_HOST will break any hosts.
>
> We discussed this already somewhere else, but here is again what I found.
>
> commit bf50e69f63d21091e525185c3ae761412be0ba72
> Author: Dave Hansen <[email protected]>
> Date: Thu Apr 7 10:43:25 2011 -0700
>
> virtio balloon: kill tell-host-first logic
>
> The virtio balloon driver has a VIRTIO_BALLOON_F_MUST_TELL_HOST
> feature bit. Whenever the bit is set, the guest kernel must
> always tell the host before we free pages back to the allocator.
> Without this feature, we might free a page (and have another
> user touch it) while the hypervisor is unprepared for it.
>
> But, if the bit is _not_ set, we are under no obligation to
> reverse the order; we're under no obligation to do _anything_.
> As of now, qemu-kvm defines the bit, but doesn't set it.

Well this is not what the spec says in the end.
To continue that commit message:

This patch makes the "tell host first" logic the only case. This
should make everybody happy, and reduce the amount of untested or
untestable code in the kernel.

you can try proposing the change to the virtio TC, see what others
think.


> !MUST_TELL_HOST really means "no need to deflate, just reuse a page". We
> should finally document this somewhere.

I'm not sure it's not too late to change what that flag means. If not
sending deflate messages at all is a useful optimization, it seems
safer to add a feature flag for that.

> --
> Thanks,
>
> David / dhildenb

2020-02-06 11:03:45

by Michael S. Tsirkin

Subject: Re: [PATCH v1 3/3] virtio-balloon: Switch back to OOM handler for VIRTIO_BALLOON_F_DEFLATE_ON_OOM

On Wed, Feb 05, 2020 at 05:34:02PM +0100, David Hildenbrand wrote:
> Commit 71994620bb25 ("virtio_balloon: replace oom notifier with shrinker")
> changed the behavior when deflation happens automatically. Instead of
> deflating when called by the OOM handler, the shrinker is used.
>
> However, the balloon is not simply some slab cache that should be
> shrunk when under memory pressure. The shrinker does not have a concept of
> priorities, so this behavior cannot be configured.
>
> There was a report that this results in undesired side effects when
> inflating the balloon to shrink the page cache. [1]
> "When inflating the balloon against page cache (i.e. no free memory
> remains) vmscan.c will both shrink page cache, but also invoke the
> shrinkers -- including the balloon's shrinker. So the balloon
> driver allocates memory which requires reclaim, vmscan gets this
> memory by shrinking the balloon, and then the driver adds the
> memory back to the balloon. Basically a busy no-op."
>
> The name "deflate on OOM" makes it pretty clear when deflation should
> happen - after other approaches to reclaim memory failed, not while
> reclaiming. This allows minimizing the footprint of a guest - memory
> will only be taken out of the balloon when really needed.
>
> Especially, a drop_slab() will result in the whole balloon getting
> deflated - undesired. While handling it via the OOM handler might not be
> perfect, it keeps existing behavior. If we want a different behavior, then
> we need a new feature bit and document it properly (although, there should
> be a clear use case and the intended effects should be well described).
>
> Keep using the shrinker for VIRTIO_BALLOON_F_FREE_PAGE_HINT, because
> this has no such side effects. Always register the shrinker with
> VIRTIO_BALLOON_F_FREE_PAGE_HINT now. We are always allowed to reuse free
> pages that are still to be processed by the guest. The hypervisor takes
> care of identifying and resolving possible races between processing a
> hinting request and the guest reusing a page.
>
> In contrast to pre commit 71994620bb25 ("virtio_balloon: replace oom
> notifier with shrinker"), don't add a module parameter to configure the
> number of pages to deflate on OOM. Can be re-added if really needed.
> Also, pay attention that leak_balloon() returns the number of 4k pages -
> convert it properly in virtio_balloon_oom_notify().
>
> Note1: using the OOM handler is frowned upon, but it really is what we
> need for this feature.
>
> Note2: without VIRTIO_BALLOON_F_MUST_TELL_HOST (iow, always with QEMU) we
> could actually skip sending deflation requests to our hypervisor,
> making the OOM path *very* simple. Basically freeing pages and
> updating the balloon. If the communication with the host ever
> becomes a problem on this call path.
>
> [1] https://www.spinics.net/lists/linux-virtualization/msg40863.html
>
> Reported-by: Tyler Sanderson <[email protected]>
> Cc: Michael S. Tsirkin <[email protected]>
> Cc: Wei Wang <[email protected]>
> Cc: Alexander Duyck <[email protected]>
> Cc: David Rientjes <[email protected]>
> Cc: Nadav Amit <[email protected]>
> Cc: Michal Hocko <[email protected]>
> Signed-off-by: David Hildenbrand <[email protected]>


I guess we should add a Fixes tag to the patch it's reverting;
this way it gets backported and hypervisors will be able to rely on
the OOM behaviour.

> ---
> drivers/virtio/virtio_balloon.c | 107 +++++++++++++-------------------
> 1 file changed, 44 insertions(+), 63 deletions(-)
>
> diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
> index 7e5d84caeb94..e7b18f556c5e 100644
> --- a/drivers/virtio/virtio_balloon.c
> +++ b/drivers/virtio/virtio_balloon.c
> @@ -14,6 +14,7 @@
> #include <linux/slab.h>
> #include <linux/module.h>
> #include <linux/balloon_compaction.h>
> +#include <linux/oom.h>
> #include <linux/wait.h>
> #include <linux/mm.h>
> #include <linux/mount.h>
> @@ -27,7 +28,9 @@
> */
> #define VIRTIO_BALLOON_PAGES_PER_PAGE (unsigned)(PAGE_SIZE >> VIRTIO_BALLOON_PFN_SHIFT)
> #define VIRTIO_BALLOON_ARRAY_PFNS_MAX 256
> -#define VIRTBALLOON_OOM_NOTIFY_PRIORITY 80
> +/* Maximum number of (4k) pages to deflate on OOM notifications. */
> +#define VIRTIO_BALLOON_OOM_NR_PAGES 256
> +#define VIRTIO_BALLOON_OOM_NOTIFY_PRIORITY 80
>
> #define VIRTIO_BALLOON_FREE_PAGE_ALLOC_FLAG (__GFP_NORETRY | __GFP_NOWARN | \
> __GFP_NOMEMALLOC)
> @@ -112,8 +115,11 @@ struct virtio_balloon {
> /* Memory statistics */
> struct virtio_balloon_stat stats[VIRTIO_BALLOON_S_NR];
>
> - /* To register a shrinker to shrink memory upon memory pressure */
> + /* Shrinker to return free pages - VIRTIO_BALLOON_F_FREE_PAGE_HINT */
> struct shrinker shrinker;
> +
> + /* OOM notifier to deflate on OOM - VIRTIO_BALLOON_F_DEFLATE_ON_OOM */
> + struct notifier_block oom_nb;
> };
>
> static struct virtio_device_id id_table[] = {
> @@ -786,50 +792,13 @@ static unsigned long shrink_free_pages(struct virtio_balloon *vb,
> return blocks_freed * VIRTIO_BALLOON_HINT_BLOCK_PAGES;
> }
>
> -static unsigned long leak_balloon_pages(struct virtio_balloon *vb,
> - unsigned long pages_to_free)
> -{
> - return leak_balloon(vb, pages_to_free * VIRTIO_BALLOON_PAGES_PER_PAGE) /
> - VIRTIO_BALLOON_PAGES_PER_PAGE;
> -}
> -
> -static unsigned long shrink_balloon_pages(struct virtio_balloon *vb,
> - unsigned long pages_to_free)
> -{
> - unsigned long pages_freed = 0;
> -
> - /*
> - * One invocation of leak_balloon can deflate at most
> - * VIRTIO_BALLOON_ARRAY_PFNS_MAX balloon pages, so we call it
> - * multiple times to deflate pages till reaching pages_to_free.
> - */
> - while (vb->num_pages && pages_freed < pages_to_free)
> - pages_freed += leak_balloon_pages(vb,
> - pages_to_free - pages_freed);
> -
> - update_balloon_size(vb);
> -
> - return pages_freed;
> -}
> -
> static unsigned long virtio_balloon_shrinker_scan(struct shrinker *shrinker,
> struct shrink_control *sc)
> {
> - unsigned long pages_to_free, pages_freed = 0;
> struct virtio_balloon *vb = container_of(shrinker,
> struct virtio_balloon, shrinker);
>
> - pages_to_free = sc->nr_to_scan;
> -
> - if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT))
> - pages_freed = shrink_free_pages(vb, pages_to_free);
> -
> - if (pages_freed >= pages_to_free)
> - return pages_freed;
> -
> - pages_freed += shrink_balloon_pages(vb, pages_to_free - pages_freed);
> -
> - return pages_freed;
> + return shrink_free_pages(vb, sc->nr_to_scan);
> }
>
> static unsigned long virtio_balloon_shrinker_count(struct shrinker *shrinker,
> @@ -837,26 +806,22 @@ static unsigned long virtio_balloon_shrinker_count(struct shrinker *shrinker,
> {
> struct virtio_balloon *vb = container_of(shrinker,
> struct virtio_balloon, shrinker);
> - unsigned long count;
> -
> - count = vb->num_pages / VIRTIO_BALLOON_PAGES_PER_PAGE;
> - count += vb->num_free_page_blocks * VIRTIO_BALLOON_HINT_BLOCK_PAGES;
>
> - return count;
> + return vb->num_free_page_blocks * VIRTIO_BALLOON_HINT_BLOCK_PAGES;
> }
>
> -static void virtio_balloon_unregister_shrinker(struct virtio_balloon *vb)
> +static int virtio_balloon_oom_notify(struct notifier_block *nb,
> + unsigned long dummy, void *parm)
> {
> - unregister_shrinker(&vb->shrinker);
> -}
> + struct virtio_balloon *vb = container_of(nb,
> + struct virtio_balloon, oom_nb);
> + unsigned long *freed = parm;
>
> -static int virtio_balloon_register_shrinker(struct virtio_balloon *vb)
> -{
> - vb->shrinker.scan_objects = virtio_balloon_shrinker_scan;
> - vb->shrinker.count_objects = virtio_balloon_shrinker_count;
> - vb->shrinker.seeks = DEFAULT_SEEKS;
> + *freed += leak_balloon(vb, VIRTIO_BALLOON_OOM_NR_PAGES) /
> + VIRTIO_BALLOON_PAGES_PER_PAGE;
> + update_balloon_size(vb);
>
> - return register_shrinker(&vb->shrinker);
> + return NOTIFY_OK;
> }
>
> static int virtballoon_probe(struct virtio_device *vdev)
> @@ -933,22 +898,35 @@ static int virtballoon_probe(struct virtio_device *vdev)
> virtio_cwrite(vb->vdev, struct virtio_balloon_config,
> poison_val, &poison_val);
> }
> - }
> - /*
> - * We continue to use VIRTIO_BALLOON_F_DEFLATE_ON_OOM to decide if a
> - * shrinker needs to be registered to relieve memory pressure.
> - */
> - if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM)) {
> - err = virtio_balloon_register_shrinker(vb);
> +
> + /*
> + * We're allowed to reuse any free pages, even if they are
> + * still to be processed by the host.
> + */
> + vb->shrinker.scan_objects = virtio_balloon_shrinker_scan;
> + vb->shrinker.count_objects = virtio_balloon_shrinker_count;
> + vb->shrinker.seeks = DEFAULT_SEEKS;
> + err = register_shrinker(&vb->shrinker);
> if (err)
> goto out_del_balloon_wq;
> }
> + if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM)) {
> + vb->oom_nb.notifier_call = virtio_balloon_oom_notify;
> + vb->oom_nb.priority = VIRTIO_BALLOON_OOM_NOTIFY_PRIORITY;
> + err = register_oom_notifier(&vb->oom_nb);
> + if (err < 0)
> + goto out_unregister_shrinker;
> + }
> +
> virtio_device_ready(vdev);
>
> if (towards_target(vb))
> virtballoon_changed(vdev);
> return 0;
>
> +out_unregister_shrinker:
> + if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT))
> + unregister_shrinker(&vb->shrinker);
> out_del_balloon_wq:
> if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT))
> destroy_workqueue(vb->balloon_wq);
> @@ -987,8 +965,11 @@ static void virtballoon_remove(struct virtio_device *vdev)
> {
> struct virtio_balloon *vb = vdev->priv;
>
> - if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM))
> - virtio_balloon_unregister_shrinker(vb);
> + if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM))
> + unregister_oom_notifier(&vb->oom_nb);
> + if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT))
> + unregister_shrinker(&vb->shrinker);
> +
> spin_lock_irq(&vb->stop_update_lock);
> vb->stop_update = true;
> spin_unlock_irq(&vb->stop_update_lock);
> --
> 2.24.1

2020-02-06 11:08:52

by David Hildenbrand

Subject: Re: [PATCH v1 3/3] virtio-balloon: Switch back to OOM handler for VIRTIO_BALLOON_F_DEFLATE_ON_OOM

>> commit bf50e69f63d21091e525185c3ae761412be0ba72
>> Author: Dave Hansen <[email protected]>
>> Date: Thu Apr 7 10:43:25 2011 -0700
>>
>> virtio balloon: kill tell-host-first logic
>>
>> The virtio balloon driver has a VIRTIO_BALLOON_F_MUST_TELL_HOST
>> feature bit. Whenever the bit is set, the guest kernel must
>> always tell the host before we free pages back to the allocator.
>> Without this feature, we might free a page (and have another
>> user touch it) while the hypervisor is unprepared for it.
>>
>> But, if the bit is _not_ set, we are under no obligation to
>> reverse the order; we're under no obligation to do _anything_.
>> As of now, qemu-kvm defines the bit, but doesn't set it.
>
> Well this is not what the spec says in the end.

I didn't check the spec, maybe I should do that :)

> To continue that commit message:
>
> This patch makes the "tell host first" logic the only case. This
> should make everybody happy, and reduce the amount of untested or
> untestable code in the kernel.

Yeah, but this comment explains that the current deflate is only in
place because it makes the code simpler (to support both cases). Of
course, doing the deflate might result in performance improvements.
(e.g., MADV_WILLNEED)

>
> you can try proposing the change to the virtio TC, see what do others
> think.

We can just drop the comment from this patch for now. The tell_host
should not be an issue AFAICS.

--
Thanks,

David / dhildenb

2020-02-14 09:53:20

by David Hildenbrand

[permalink] [raw]
Subject: Re: [PATCH v1 3/3] virtio-balloon: Switch back to OOM handler for VIRTIO_BALLOON_F_DEFLATE_ON_OOM

On 05.02.20 17:34, David Hildenbrand wrote:
> Commit 71994620bb25 ("virtio_balloon: replace oom notifier with shrinker")
> changed the behavior when deflation happens automatically. Instead of
> deflating when called by the OOM handler, the shrinker is used.
>
> However, the balloon is not simply some slab cache that should be
> shrunk when under memory pressure. The shrinker does not have a concept of
> priorities, so this behavior cannot be configured.
>
> There was a report that this results in undesired side effects when
> inflating the balloon to shrink the page cache. [1]
> "When inflating the balloon against page cache (i.e. no free memory
> remains) vmscan.c will both shrink page cache, but also invoke the
> shrinkers -- including the balloon's shrinker. So the balloon
> driver allocates memory which requires reclaim, vmscan gets this
> memory by shrinking the balloon, and then the driver adds the
> memory back to the balloon. Basically a busy no-op."
>
> The name "deflate on OOM" makes it pretty clear when deflation should
> happen - after other approaches to reclaim memory failed, not while
> reclaiming. This allows minimizing the footprint of a guest - memory
> will only be taken out of the balloon when really needed.
>
> Especially, a drop_slab() will result in the whole balloon getting
> deflated - undesired. While handling it via the OOM handler might not be
> perfect, it keeps existing behavior. If we want a different behavior, then
> we need a new feature bit and document it properly (although, there should
> be a clear use case and the intended effects should be well described).
>
> Keep using the shrinker for VIRTIO_BALLOON_F_FREE_PAGE_HINT, because
> this has no such side effects. Always register the shrinker with
> VIRTIO_BALLOON_F_FREE_PAGE_HINT now. We are always allowed to reuse free
> pages that are still to be processed by the guest. The hypervisor takes
> care of identifying and resolving possible races between processing a
> hinting request and the guest reusing a page.
>
> In contrast to pre commit 71994620bb25 ("virtio_balloon: replace oom
> notifier with shrinker"), don't add a module parameter to configure the
> number of pages to deflate on OOM. Can be re-added if really needed.
> Also, pay attention that leak_balloon() returns the number of 4k pages -
> convert it properly in virtio_balloon_oom_notify().
>
> Note1: using the OOM handler is frowned upon, but it really is what we
> need for this feature.
>
> Note2: without VIRTIO_BALLOON_F_MUST_TELL_HOST (iow, always with QEMU) we
> could actually skip sending deflation requests to our hypervisor,
> making the OOM path *very* simple - basically freeing pages and
> updating the balloon - if the communication with the host ever
> becomes a problem on this call path.
>

@Michael, how to proceed with this?


--
Thanks,

David / dhildenb

2020-02-14 13:32:24

by Wang, Wei W

[permalink] [raw]
Subject: RE: [PATCH v1 3/3] virtio-balloon: Switch back to OOM handler for VIRTIO_BALLOON_F_DEFLATE_ON_OOM

On Friday, February 14, 2020 5:52 PM, David Hildenbrand wrote:
> > Commit 71994620bb25 ("virtio_balloon: replace oom notifier with
> > shrinker") changed the behavior when deflation happens automatically.
> > Instead of deflating when called by the OOM handler, the shrinker is used.
> >
> > However, the balloon is not simply some slab cache that should be
> > shrunk when under memory pressure. The shrinker does not have a
> > concept of priorities, so this behavior cannot be configured.
> >
> > There was a report that this results in undesired side effects when
> > inflating the balloon to shrink the page cache. [1]
> > "When inflating the balloon against page cache (i.e. no free memory
> > remains) vmscan.c will both shrink page cache, but also invoke the
> > shrinkers -- including the balloon's shrinker. So the balloon
> > driver allocates memory which requires reclaim, vmscan gets this
> > memory by shrinking the balloon, and then the driver adds the
> > memory back to the balloon. Basically a busy no-op."
> >
> > The name "deflate on OOM" makes it pretty clear when deflation should
> > happen - after other approaches to reclaim memory failed, not while
> > reclaiming. This allows minimizing the footprint of a guest - memory
> > will only be taken out of the balloon when really needed.
> >
> > Especially, a drop_slab() will result in the whole balloon getting
> > deflated - undesired. While handling it via the OOM handler might not
> > be perfect, it keeps existing behavior. If we want a different
> > behavior, then we need a new feature bit and document it properly
> > (although, there should be a clear use case and the intended effects should
> be well described).
> >
> > Keep using the shrinker for VIRTIO_BALLOON_F_FREE_PAGE_HINT,
> because
> > this has no such side effects. Always register the shrinker with
> > VIRTIO_BALLOON_F_FREE_PAGE_HINT now. We are always allowed to
> reuse
> > free pages that are still to be processed by the guest. The hypervisor
> > takes care of identifying and resolving possible races between
> > processing a hinting request and the guest reusing a page.
> >
> > In contrast to pre commit 71994620bb25 ("virtio_balloon: replace oom
> > notifier with shrinker"), don't add a module parameter to configure
> > the number of pages to deflate on OOM. Can be re-added if really needed.
> > Also, pay attention that leak_balloon() returns the number of 4k pages
> > - convert it properly in virtio_balloon_oom_notify().
> >
> > Note1: using the OOM handler is frowned upon, but it really is what we
> > need for this feature.
> >
> > Note2: without VIRTIO_BALLOON_F_MUST_TELL_HOST (iow, always with
> QEMU) we
> > could actually skip sending deflation requests to our hypervisor,
> > making the OOM path *very* simple. Basically freeing pages and
> > updating the balloon. If the communication with the host ever
> > becomes a problem on this call path.
> >
>
> @Michael, how to proceed with this?
>

I vote for not going back. When there are solid requests and strong reasons in the future, we could reopen this discussion.

Best,
Wei

2020-02-14 14:07:34

by Michal Hocko

[permalink] [raw]
Subject: Re: [PATCH v1 3/3] virtio-balloon: Switch back to OOM handler for VIRTIO_BALLOON_F_DEFLATE_ON_OOM

On Wed 05-02-20 17:34:02, David Hildenbrand wrote:
> Commit 71994620bb25 ("virtio_balloon: replace oom notifier with shrinker")
> changed the behavior when deflation happens automatically. Instead of
> deflating when called by the OOM handler, the shrinker is used.
>
> However, the balloon is not simply some slab cache that should be
> shrunk when under memory pressure. The shrinker does not have a concept of
> priorities, so this behavior cannot be configured.

Adding a priority to the shrinker doesn't sound like a big problem to
me. Shrinkers already get shrink_control data structure already and
priority could be added there.

> There was a report that this results in undesired side effects when
> inflating the balloon to shrink the page cache. [1]
> "When inflating the balloon against page cache (i.e. no free memory
> remains) vmscan.c will both shrink page cache, but also invoke the
> shrinkers -- including the balloon's shrinker. So the balloon
> driver allocates memory which requires reclaim, vmscan gets this
> memory by shrinking the balloon, and then the driver adds the
> memory back to the balloon. Basically a busy no-op."
>
> The name "deflate on OOM" makes it pretty clear when deflation should
> happen - after other approaches to reclaim memory failed, not while
> reclaiming. This allows minimizing the footprint of a guest - memory
> will only be taken out of the balloon when really needed.
>
> Especially, a drop_slab() will result in the whole balloon getting
> deflated - undesired.

Could you explain why some more? drop_caches shouldn't be really used in
any production workloads and if somebody really wants all the cache to
be dropped then why is balloon any different?

--
Michal Hocko
SUSE Labs

2020-02-14 14:19:31

by David Hildenbrand

[permalink] [raw]
Subject: Re: [PATCH v1 3/3] virtio-balloon: Switch back to OOM handler for VIRTIO_BALLOON_F_DEFLATE_ON_OOM

>> There was a report that this results in undesired side effects when
>> inflating the balloon to shrink the page cache. [1]
>> "When inflating the balloon against page cache (i.e. no free memory
>> remains) vmscan.c will both shrink page cache, but also invoke the
>> shrinkers -- including the balloon's shrinker. So the balloon
>> driver allocates memory which requires reclaim, vmscan gets this
>> memory by shrinking the balloon, and then the driver adds the
>> memory back to the balloon. Basically a busy no-op."
>>
>> The name "deflate on OOM" makes it pretty clear when deflation should
>> happen - after other approaches to reclaim memory failed, not while
>> reclaiming. This allows minimizing the footprint of a guest - memory
>> will only be taken out of the balloon when really needed.
>>
>> Especially, a drop_slab() will result in the whole balloon getting
>> deflated - undesired.
>
> Could you explain why some more? drop_caches shouldn't be really used in
> any production workloads and if somebody really wants all the cache to
> be dropped then why is balloon any different?
>

Deflation should happen when the guest is out of memory, not when
somebody thinks it's time to reclaim some memory. That's what the
feature promised from the beginning: Only give the guest more memory in
case it *really* needs more memory.

Deflate on oom, not deflate on reclaim/memory pressure. (that's what the
report was all about)

A priority for shrinkers might be a step into the right direction.

--
Thanks,

David / dhildenb

2020-02-16 09:47:39

by Michael S. Tsirkin

[permalink] [raw]
Subject: Re: [PATCH v1 3/3] virtio-balloon: Switch back to OOM handler for VIRTIO_BALLOON_F_DEFLATE_ON_OOM

On Fri, Feb 14, 2020 at 12:48:42PM -0800, Tyler Sanderson wrote:
> Regarding Wei's patch that modifies the shrinker implementation, versus this
> patch which reverts to OOM notifier:
> I am in favor of both patches. But I do want to make sure a fix gets
> backported to 4.19 where the performance regression was first introduced.
> My concern with reverting to the OOM notifier is, as mst@ put it (in the other
> thread):
> "when linux hits OOM all kinds of error paths are being hit, latent bugs start
> triggering, latency goes up drastically."
> The guest could be in a lot of pain before the OOM notifier is invoked, and it
> seems like the shrinker API might allow more fine-grained control of when we
> deflate.
>
> On the other hand, I'm not totally convinced that Wei's patch is an expected
> use of the shrinker/page-cache APIs, and maybe it is fragile. Needs more
> testing and scrutiny.
>
> It seems to me like the shrinker API is the right API in the long run, perhaps
> with some fixes and modifications. But maybe reverting to OOM notifier is the
> best patch to back port?

In that case can I see some Tested-by reports pls?


> On Fri, Feb 14, 2020 at 6:19 AM David Hildenbrand <[email protected]> wrote:
>
> >> There was a report that this results in undesired side effects when
> >> inflating the balloon to shrink the page cache. [1]
> >>      "When inflating the balloon against page cache (i.e. no free memory
> >>       remains) vmscan.c will both shrink page cache, but also invoke the
> >>       shrinkers -- including the balloon's shrinker. So the balloon
> >>       driver allocates memory which requires reclaim, vmscan gets this
> >>       memory by shrinking the balloon, and then the driver adds the
> >>       memory back to the balloon. Basically a busy no-op."
> >>
> >> The name "deflate on OOM" makes it pretty clear when deflation should
> >> happen - after other approaches to reclaim memory failed, not while
> >> reclaiming. This allows minimizing the footprint of a guest - memory
> >> will only be taken out of the balloon when really needed.
> >>
> >> Especially, a drop_slab() will result in the whole balloon getting
> >> deflated - undesired.
> >
> > Could you explain why some more? drop_caches shouldn't be really used in
> > any production workloads and if somebody really wants all the cache to
> > be dropped then why is balloon any different?
> >
>
> Deflation should happen when the guest is out of memory, not when
> somebody thinks it's time to reclaim some memory. That's what the
> feature promised from the beginning: Only give the guest more memory in
> case it *really* needs more memory.
>
> Deflate on oom, not deflate on reclaim/memory pressure. (that's what the
> report was all about)
>
> A priority for shrinkers might be a step into the right direction.
>
> --
> Thanks,
>
> David / dhildenb
>
>

2020-02-16 09:48:32

by Michael S. Tsirkin

[permalink] [raw]
Subject: Re: [PATCH v1 3/3] virtio-balloon: Switch back to OOM handler for VIRTIO_BALLOON_F_DEFLATE_ON_OOM

On Fri, Feb 14, 2020 at 10:51:43AM +0100, David Hildenbrand wrote:
> On 05.02.20 17:34, David Hildenbrand wrote:
> > Commit 71994620bb25 ("virtio_balloon: replace oom notifier with shrinker")
> > changed the behavior when deflation happens automatically. Instead of
> > deflating when called by the OOM handler, the shrinker is used.
> >
> > However, the balloon is not simply some slab cache that should be
> > shrunk when under memory pressure. The shrinker does not have a concept of
> > priorities, so this behavior cannot be configured.
> >
> > There was a report that this results in undesired side effects when
> > inflating the balloon to shrink the page cache. [1]
> > "When inflating the balloon against page cache (i.e. no free memory
> > remains) vmscan.c will both shrink page cache, but also invoke the
> > shrinkers -- including the balloon's shrinker. So the balloon
> > driver allocates memory which requires reclaim, vmscan gets this
> > memory by shrinking the balloon, and then the driver adds the
> > memory back to the balloon. Basically a busy no-op."
> >
> > The name "deflate on OOM" makes it pretty clear when deflation should
> > happen - after other approaches to reclaim memory failed, not while
> > reclaiming. This allows minimizing the footprint of a guest - memory
> > will only be taken out of the balloon when really needed.
> >
> > Especially, a drop_slab() will result in the whole balloon getting
> > deflated - undesired. While handling it via the OOM handler might not be
> > perfect, it keeps existing behavior. If we want a different behavior, then
> > we need a new feature bit and document it properly (although, there should
> > be a clear use case and the intended effects should be well described).
> >
> > Keep using the shrinker for VIRTIO_BALLOON_F_FREE_PAGE_HINT, because
> > this has no such side effects. Always register the shrinker with
> > VIRTIO_BALLOON_F_FREE_PAGE_HINT now. We are always allowed to reuse free
> > pages that are still to be processed by the guest. The hypervisor takes
> > care of identifying and resolving possible races between processing a
> > hinting request and the guest reusing a page.
> >
> > In contrast to pre commit 71994620bb25 ("virtio_balloon: replace oom
> > notifier with shrinker"), don't add a module parameter to configure the
> > number of pages to deflate on OOM. Can be re-added if really needed.
> > Also, pay attention that leak_balloon() returns the number of 4k pages -
> > convert it properly in virtio_balloon_oom_notify().
> >
> > Note1: using the OOM handler is frowned upon, but it really is what we
> > need for this feature.
> >
> > Note2: without VIRTIO_BALLOON_F_MUST_TELL_HOST (iow, always with QEMU) we
> > could actually skip sending deflation requests to our hypervisor,
> > making the OOM path *very* simple. Basically freeing pages and
> > updating the balloon. If the communication with the host ever
> > becomes a problem on this call path.
> >
>
> @Michael, how to proceed with this?
>

I'd like to see some reports that this helps people.
e.g. a tested-by tag.

> --
> Thanks,
>
> David / dhildenb

2020-03-08 04:48:18

by Tyler Sanderson

[permalink] [raw]
Subject: Re: [PATCH v1 3/3] virtio-balloon: Switch back to OOM handler for VIRTIO_BALLOON_F_DEFLATE_ON_OOM

Tested-by: Tyler Sanderson <[email protected]>

Test setup: VM with 16 CPU, 64GB RAM. Running Debian 10. We have a 42
GB file full of random bytes that we continually cat to /dev/null.
This fills the page cache as the file is read. Meanwhile we trigger
the balloon to inflate, with a target size of 53 GB. This setup causes
the balloon inflation to pressure the page cache as the page cache is
also trying to grow. Afterwards we shrink the balloon back to zero (so
total deflate = total inflate).

Without patch (kernel 4.19.0-5):
Inflation never reaches the target until we stop the "cat file >
/dev/null" process. Total inflation time was 542 seconds. The longest
period that made no net forward progress was 315 seconds (see attached
graph).
Result of "grep balloon /proc/vmstat" after the test:
balloon_inflate 154828377
balloon_deflate 154828377

With patch (kernel 5.6.0-rc4+):
Total inflation duration was 63 seconds. No deflate-queue activity
occurs when pressuring the page-cache.
Result of "grep balloon /proc/vmstat" after the test:
balloon_inflate 12968539
balloon_deflate 12968539

Conclusion: This patch fixes the issue. In the test it reduced
inflate/deflate activity by 12x, and reduced inflation time by 8.6x.
But more importantly, if we hadn't killed the "cat file >
/dev/null" process then, without the patch, the inflation process
would never reach the target.

Attached is a png of a graph showing the problematic behavior without
this patch. It shows deflate-queue activity increasing linearly while
balloon size stays constant over the course of more than 8 minutes of
the test.


On Thu, Feb 20, 2020 at 7:29 PM Tyler Sanderson <[email protected]> wrote:
>
> Testing this patch is on my short-term TODO list, but I wasn't able to get to it this week. It is prioritized.
>
> In the meantime, I can anecdotally vouch that kernels before 4.19, the ones using the OOM notifier callback, have roughly 10x faster balloon inflation when pressuring the cache. So I anticipate this patch will return to that state and help my use case.
>
> I will try to post official measurements of this patch next week.
>
> On Sun, Feb 16, 2020 at 1:47 AM Michael S. Tsirkin <[email protected]> wrote:
>>
>> On Fri, Feb 14, 2020 at 10:51:43AM +0100, David Hildenbrand wrote:
>> > On 05.02.20 17:34, David Hildenbrand wrote:
>> > > Commit 71994620bb25 ("virtio_balloon: replace oom notifier with shrinker")
>> > > changed the behavior when deflation happens automatically. Instead of
>> > > deflating when called by the OOM handler, the shrinker is used.
>> > >
>> > > However, the balloon is not simply some slab cache that should be
>> > > shrunk when under memory pressure. The shrinker does not have a concept of
>> > > priorities, so this behavior cannot be configured.
>> > >
>> > > There was a report that this results in undesired side effects when
>> > > inflating the balloon to shrink the page cache. [1]
>> > > "When inflating the balloon against page cache (i.e. no free memory
>> > > remains) vmscan.c will both shrink page cache, but also invoke the
>> > > shrinkers -- including the balloon's shrinker. So the balloon
>> > > driver allocates memory which requires reclaim, vmscan gets this
>> > > memory by shrinking the balloon, and then the driver adds the
>> > > memory back to the balloon. Basically a busy no-op."
>> > >
>> > > The name "deflate on OOM" makes it pretty clear when deflation should
>> > > happen - after other approaches to reclaim memory failed, not while
>> > > reclaiming. This allows minimizing the footprint of a guest - memory
>> > > will only be taken out of the balloon when really needed.
>> > >
>> > > Especially, a drop_slab() will result in the whole balloon getting
>> > > deflated - undesired. While handling it via the OOM handler might not be
>> > > perfect, it keeps existing behavior. If we want a different behavior, then
>> > > we need a new feature bit and document it properly (although, there should
>> > > be a clear use case and the intended effects should be well described).
>> > >
>> > > Keep using the shrinker for VIRTIO_BALLOON_F_FREE_PAGE_HINT, because
>> > > this has no such side effects. Always register the shrinker with
>> > > VIRTIO_BALLOON_F_FREE_PAGE_HINT now. We are always allowed to reuse free
>> > > pages that are still to be processed by the guest. The hypervisor takes
>> > > care of identifying and resolving possible races between processing a
>> > > hinting request and the guest reusing a page.
>> > >
>> > > In contrast to pre commit 71994620bb25 ("virtio_balloon: replace oom
>> > > notifier with shrinker"), don't add a module parameter to configure the
>> > > number of pages to deflate on OOM. Can be re-added if really needed.
>> > > Also, pay attention that leak_balloon() returns the number of 4k pages -
>> > > convert it properly in virtio_balloon_oom_notify().
>> > >
>> > > Note1: using the OOM handler is frowned upon, but it really is what we
>> > > need for this feature.
>> > >
>> > > Note2: without VIRTIO_BALLOON_F_MUST_TELL_HOST (iow, always with QEMU) we
>> > > could actually skip sending deflation requests to our hypervisor,
>> > > making the OOM path *very* simple. Basically freeing pages and
>> > > updating the balloon. If the communication with the host ever
>> > > becomes a problem on this call path.
>> > >
>> >
>> > @Michael, how to proceed with this?
>> >
>>
>> I'd like to see some reports that this helps people.
>> e.g. a tested-by tag.
>>
>> > --
>> > Thanks,
>> >
>> > David / dhildenb
>>


Attachments:
without_patch.png (13.19 kB)

2020-03-09 09:04:08

by David Hildenbrand

[permalink] [raw]
Subject: Re: [PATCH v1 3/3] virtio-balloon: Switch back to OOM handler for VIRTIO_BALLOON_F_DEFLATE_ON_OOM

On 08.03.20 05:47, Tyler Sanderson wrote:
> Tested-by: Tyler Sanderson <[email protected]>
>
> Test setup: VM with 16 CPU, 64GB RAM. Running Debian 10. We have a 42
> GB file full of random bytes that we continually cat to /dev/null.
> This fills the page cache as the file is read. Meanwhile we trigger
> the balloon to inflate, with a target size of 53 GB. This setup causes
> the balloon inflation to pressure the page cache as the page cache is
> also trying to grow. Afterwards we shrink the balloon back to zero (so
> total deflate = total inflate).
>
> Without patch (kernel 4.19.0-5):
> Inflation never reaches the target until we stop the "cat file >
> /dev/null" process. Total inflation time was 542 seconds. The longest
> period that made no net forward progress was 315 seconds (see attached
> graph).
> Result of "grep balloon /proc/vmstat" after the test:
> balloon_inflate 154828377
> balloon_deflate 154828377
>
> With patch (kernel 5.6.0-rc4+):
> Total inflation duration was 63 seconds. No deflate-queue activity
> occurs when pressuring the page-cache.
> Result of "grep balloon /proc/vmstat" after the test:
> balloon_inflate 12968539
> balloon_deflate 12968539
>
> Conclusion: This patch fixes the issue. In the test it reduced
> inflate/deflate activity by 12x, and reduced inflation time by 8.6x.
> But more importantly, if we hadn't killed the "cat file >
> /dev/null" process then, without the patch, the inflation process
> would never reach the target.
>
> Attached is a png of a graph showing the problematic behavior without
> this patch. It shows deflate-queue activity increasing linearly while
> balloon size stays constant over the course of more than 8 minutes of
> the test.

Thanks a lot for the extended test!

--
Thanks,

David / dhildenb

2020-03-09 10:15:55

by Michael S. Tsirkin

[permalink] [raw]
Subject: Re: [PATCH v1 3/3] virtio-balloon: Switch back to OOM handler for VIRTIO_BALLOON_F_DEFLATE_ON_OOM

On Mon, Mar 09, 2020 at 10:03:14AM +0100, David Hildenbrand wrote:
> On 08.03.20 05:47, Tyler Sanderson wrote:
> > Tested-by: Tyler Sanderson <[email protected]>
> >
> > Test setup: VM with 16 CPU, 64GB RAM. Running Debian 10. We have a 42
> > GB file full of random bytes that we continually cat to /dev/null.
> > This fills the page cache as the file is read. Meanwhile we trigger
> > the balloon to inflate, with a target size of 53 GB. This setup causes
> > the balloon inflation to pressure the page cache as the page cache is
> > also trying to grow. Afterwards we shrink the balloon back to zero (so
> > total deflate = total inflate).
> >
> > Without patch (kernel 4.19.0-5):
> > Inflation never reaches the target until we stop the "cat file >
> > /dev/null" process. Total inflation time was 542 seconds. The longest
> > period that made no net forward progress was 315 seconds (see attached
> > graph).
> > Result of "grep balloon /proc/vmstat" after the test:
> > balloon_inflate 154828377
> > balloon_deflate 154828377
> >
> > With patch (kernel 5.6.0-rc4+):
> > Total inflation duration was 63 seconds. No deflate-queue activity
> > occurs when pressuring the page-cache.
> > Result of "grep balloon /proc/vmstat" after the test:
> > balloon_inflate 12968539
> > balloon_deflate 12968539
> >
> > Conclusion: This patch fixes the issue. In the test it reduced
> > inflate/deflate activity by 12x, and reduced inflation time by 8.6x.
> > But more importantly, if we hadn't killed the "cat file >
> > /dev/null" process then, without the patch, the inflation process
> > would never reach the target.
> >
> > Attached is a png of a graph showing the problematic behavior without
> > this patch. It shows deflate-queue activity increasing linearly while
> > balloon size stays constant over the course of more than 8 minutes of
> > the test.
>
> Thanks a lot for the extended test!


Given we shipped this for a long time, I think the best way
to make progress is to merge 1/3, 2/3 right now, and 3/3
in the next release.

> --
> Thanks,
>
> David / dhildenb

2020-03-09 10:25:17

by Michael S. Tsirkin

[permalink] [raw]
Subject: Re: [PATCH v1 3/3] virtio-balloon: Switch back to OOM handler for VIRTIO_BALLOON_F_DEFLATE_ON_OOM

On Sat, Mar 07, 2020 at 08:47:25PM -0800, Tyler Sanderson wrote:
> Tested-by: Tyler Sanderson <[email protected]>
>
> Test setup: VM with 16 CPU, 64GB RAM. Running Debian 10. We have a 42
> GB file full of random bytes that we continually cat to /dev/null.
> This fills the page cache as the file is read. Meanwhile we trigger
> the balloon to inflate, with a target size of 53 GB. This setup causes
> the balloon inflation to pressure the page cache as the page cache is
> also trying to grow. Afterwards we shrink the balloon back to zero (so
> total deflate = total inflate).
>
> Without patch (kernel 4.19.0-5):
> Inflation never reaches the target until we stop the "cat file >
> /dev/null" process. Total inflation time was 542 seconds. The longest
> period that made no net forward progress was 315 seconds (see attached
> graph).
> Result of "grep balloon /proc/vmstat" after the test:
> balloon_inflate 154828377
> balloon_deflate 154828377
>
> With patch (kernel 5.6.0-rc4+):
> Total inflation duration was 63 seconds. No deflate-queue activity
> occurs when pressuring the page-cache.
> Result of "grep balloon /proc/vmstat" after the test:
> balloon_inflate 12968539
> balloon_deflate 12968539
>
> Conclusion: This patch fixes the issue. In the test it reduced
> inflate/deflate activity by 12x, and reduced inflation time by 8.6x.
> But more importantly, if we hadn't killed the "cat file >
> /dev/null" process then, without the patch, the inflation process
> would never reach the target.
>
> Attached is a png of a graph showing the problematic behavior without
> this patch. It shows deflate-queue activity increasing linearly while
> balloon size stays constant over the course of more than 8 minutes of
> the test.

OK this is now queued for -next. Tyler thanks a lot for the detailed
test report - it's really awesome! I included it in the commit log in
full so that if we need to come back to this it's easy to reproduce the
testing.

--
MST

2020-03-09 11:00:50

by David Hildenbrand

[permalink] [raw]
Subject: Re: [PATCH v1 3/3] virtio-balloon: Switch back to OOM handler for VIRTIO_BALLOON_F_DEFLATE_ON_OOM

On 09.03.20 11:14, Michael S. Tsirkin wrote:
> On Mon, Mar 09, 2020 at 10:03:14AM +0100, David Hildenbrand wrote:
>> On 08.03.20 05:47, Tyler Sanderson wrote:
>>> Tested-by: Tyler Sanderson <[email protected]>
>>>
>>> Test setup: VM with 16 CPU, 64GB RAM. Running Debian 10. We have a 42
>>> GB file full of random bytes that we continually cat to /dev/null.
>>> This fills the page cache as the file is read. Meanwhile we trigger
>>> the balloon to inflate, with a target size of 53 GB. This setup causes
>>> the balloon inflation to pressure the page cache as the page cache is
>>> also trying to grow. Afterwards we shrink the balloon back to zero (so
>>> total deflate = total inflate).
>>>
>>> Without patch (kernel 4.19.0-5):
>>> Inflation never reaches the target until we stop the "cat file >
>>> /dev/null" process. Total inflation time was 542 seconds. The longest
>>> period that made no net forward progress was 315 seconds (see attached
>>> graph).
>>> Result of "grep balloon /proc/vmstat" after the test:
>>> balloon_inflate 154828377
>>> balloon_deflate 154828377
>>>
>>> With patch (kernel 5.6.0-rc4+):
>>> Total inflation duration was 63 seconds. No deflate-queue activity
>>> occurs when pressuring the page-cache.
>>> Result of "grep balloon /proc/vmstat" after the test:
>>> balloon_inflate 12968539
>>> balloon_deflate 12968539
>>>
>>> Conclusion: This patch fixes the issue. In the test it reduced
>>> inflate/deflate activity by 12x, and reduced inflation time by 8.6x.
>>> But more importantly, if we hadn't killed the "cat file >
>>> /dev/null" process then, without the patch, the inflation process
>>> would never reach the target.
>>>
>>> Attached is a png of a graph showing the problematic behavior without
>>> this patch. It shows deflate-queue activity increasing linearly while
>>> balloon size stays constant over the course of more than 8 minutes of
>>> the test.
>>
>> Thanks a lot for the extended test!
>
>
> Given we shipped this for a long time, I think the best way
> to make progress is to merge 1/3, 2/3 right now, and 3/3
> in the next release.

Agreed.

--
Thanks,

David / dhildenb