2017-08-03 06:48:52

by Wang, Wei W

Subject: [PATCH v13 0/5] Virtio-balloon Enhancement

This patch series enhances the existing virtio-balloon with the following
new features:
1) fast ballooning: transfer ballooned pages between the guest and host in
chunks using sgs, instead of one by one; and
2) free_page_vq: a new virtqueue to report guest free pages to the host.

The second feature can be used to accelerate live migration of VMs. Here
are some details:

Live migration needs to transfer the VM's memory from the source machine
to the destination round by round. For the 1st round, all the VM's memory
is transferred. From the 2nd round, only the pieces of memory that were
written by the guest (after the 1st round) are transferred. A common
method the hypervisor uses to track which parts of memory are written
is to write-protect all the guest memory.

The second feature enables the optimization of the 1st round memory
transfer - the hypervisor can skip the transfer of guest free pages in the
1st round. It does not matter if a page is used after it has been
reported as a free page hint, because the hypervisor tracks guest
writes and will transfer any such page in a later round once it is
written.
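For illustration only, the interplay of the two rounds can be sketched in
plain userspace C. The bitmaps, page counts, and function names below are
hypothetical, not part of the patch; the sketch only shows why a stale
free-page hint is harmless:

```c
#include <assert.h>
#include <stdbool.h>

#define NPAGES 8

/* Round 1: transfer everything except pages the guest hinted as free. */
static int round1_pages_sent(const bool free_hint[NPAGES])
{
    int sent = 0;

    for (int i = 0; i < NPAGES; i++)
        if (!free_hint[i])          /* skip hinted free pages */
            sent++;
    return sent;
}

/*
 * Round 2: transfer only pages dirtied after round 1. A page that was
 * hinted free but then reused shows up here, so nothing is lost.
 */
static int round2_pages_sent(const bool dirty[NPAGES])
{
    int sent = 0;

    for (int i = 0; i < NPAGES; i++)
        if (dirty[i])
            sent++;
    return sent;
}
```

A page hinted free in round 1 and written afterwards is simply counted
(and sent) in round 2 by the dirty-tracking pass.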

Change Log:
v12->v13:
1) mm: use a callback function to handle the free page blocks from the
report function. This avoids exposing zone internals to kernel modules.
2) virtio-balloon: send balloon pages or a free page block using a single sg
each time. This has the benefit of a simpler implementation with no new APIs.
3) virtio-balloon: the free_page_vq is used to report free pages only (no
interleaving of multiple uses)
4) virtio-balloon: Balloon pages and free page blocks are sent via input sgs,
and the completion signal to the host is sent via an output sg.

v11->v12:
1) xbitmap: use the xbitmap from Matthew Wilcox to record ballooned pages.
2) virtio-ring: enable the driver to build up a desc chain using vring desc.
3) virtio-ring: Add locking to the existing START_USE() and END_USE() macro
to lock/unlock the vq when a vq operation starts/ends.
4) virtio-ring: add virtqueue_kick_sync() and virtqueue_kick_async()
5) virtio-balloon: describe chunks of ballooned pages and free page blocks
directly using one or more chains of descs from the vq.

v10->v11:
1) virtio_balloon: use vring_desc to describe a chunk;
2) virtio_ring: support to add an indirect desc table to virtqueue;
3) virtio_balloon: use cmdq to report guest memory statistics.

v9->v10:
1) mm: put report_unused_page_block() under CONFIG_VIRTIO_BALLOON;
2) virtio-balloon: add virtballoon_validate();
3) virtio-balloon: msg format change;
4) virtio-balloon: move miscq handling to a task on system_freezable_wq;
5) virtio-balloon: code cleanup.

v8->v9:
1) Split the two new features, VIRTIO_BALLOON_F_BALLOON_CHUNKS and
VIRTIO_BALLOON_F_MISC_VQ, which were mixed together in the previous
implementation;
2) Simpler function to get the free page block.

v7->v8:
1) Use only one chunk format, instead of two.
2) re-write the virtio-balloon implementation patch.
3) commit changes
4) patch re-org

Matthew Wilcox (1):
Introduce xbitmap

Wei Wang (4):
xbitmap: add xb_find_next_bit() and xb_zero()
virtio-balloon: VIRTIO_BALLOON_F_SG
mm: support reporting free page blocks
virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_VQ

drivers/virtio/virtio_balloon.c | 302 +++++++++++++++++++++++++++++++-----
include/linux/mm.h | 7 +
include/linux/mmzone.h | 5 +
include/linux/radix-tree.h | 2 +
include/linux/xbitmap.h | 53 +++++++
include/uapi/linux/virtio_balloon.h | 2 +
lib/radix-tree.c | 167 +++++++++++++++++++-
mm/page_alloc.c | 109 +++++++++++++
8 files changed, 609 insertions(+), 38 deletions(-)
create mode 100644 include/linux/xbitmap.h

--
2.7.4


2017-08-03 06:48:54

by Wang, Wei W

Subject: [PATCH v13 2/5] xbitmap: add xb_find_next_bit() and xb_zero()

xb_find_next_bit() finds the next "1" or "0" bit in a given range.
xb_zero() zeroes a given range of bits.
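For illustration, the semantics of the two helpers can be sketched in
userspace C over a single flat word instead of the kernel's sparse xbitmap.
The `flat_` names are hypothetical; only the return convention (end + 1 on
no match) mirrors the patch:

```c
#include <assert.h>
#include <stdbool.h>

/* Find the next bit equal to @set in [start, end]; end + 1 if none. */
static unsigned long flat_find_next_bit(unsigned long map,
                                        unsigned long start,
                                        unsigned long end, bool set)
{
    unsigned long i;

    for (i = start; i <= end; i++)
        if (((map >> i) & 1UL) == (unsigned long)set)
            return i;
    return end + 1;             /* not found: end + 1, as documented */
}

/* Clear all bits in [start, end]. */
static void flat_zero(unsigned long *map, unsigned long start,
                      unsigned long end)
{
    for (unsigned long i = start; i <= end; i++)
        *map &= ~(1UL << i);
}
```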

Signed-off-by: Wei Wang <[email protected]>
---
include/linux/xbitmap.h | 4 ++++
lib/radix-tree.c | 28 ++++++++++++++++++++++++++++
2 files changed, 32 insertions(+)

diff --git a/include/linux/xbitmap.h b/include/linux/xbitmap.h
index 0b93a46..88c2045 100644
--- a/include/linux/xbitmap.h
+++ b/include/linux/xbitmap.h
@@ -36,6 +36,10 @@ int xb_set_bit(struct xb *xb, unsigned long bit);
bool xb_test_bit(const struct xb *xb, unsigned long bit);
int xb_clear_bit(struct xb *xb, unsigned long bit);

+void xb_zero(struct xb *xb, unsigned long start, unsigned long end);
+unsigned long xb_find_next_bit(struct xb *xb, unsigned long start,
+ unsigned long end, bool set);
+
static inline bool xb_empty(const struct xb *xb)
{
return radix_tree_empty(&xb->xbrt);
diff --git a/lib/radix-tree.c b/lib/radix-tree.c
index d8c3c18..84842a3 100644
--- a/lib/radix-tree.c
+++ b/lib/radix-tree.c
@@ -2272,6 +2272,34 @@ bool xb_test_bit(const struct xb *xb, unsigned long bit)
return test_bit(bit, bitmap->bitmap);
}

+void xb_zero(struct xb *xb, unsigned long start, unsigned long end)
+{
+ unsigned long i;
+
+ for (i = start; i <= end; i++)
+ xb_clear_bit(xb, i);
+}
+EXPORT_SYMBOL(xb_zero);
+
+/*
+ * Find the next one (@set = 1) or zero (@set = 0) bit within the bit range
+ * from @start to @end in @xb. If no such bit is found in the given range,
+ * bit end + 1 will be returned.
+ */
+unsigned long xb_find_next_bit(struct xb *xb, unsigned long start,
+ unsigned long end, bool set)
+{
+ unsigned long i;
+
+ for (i = start; i <= end; i++) {
+ if (xb_test_bit(xb, i) == set)
+ break;
+ }
+
+ return i;
+}
+EXPORT_SYMBOL(xb_find_next_bit);
+
void __rcu **idr_get_free(struct radix_tree_root *root,
struct radix_tree_iter *iter, gfp_t gfp, int end)
{
--
2.7.4

2017-08-03 06:48:53

by Wang, Wei W

Subject: [PATCH v13 1/5] Introduce xbitmap

From: Matthew Wilcox <[email protected]>

The eXtensible Bitmap is a sparse bitmap representation which is
efficient for set bits which tend to cluster. It supports up to
'unsigned long' worth of bits, and this commit adds the bare bones --
xb_set_bit(), xb_clear_bit() and xb_test_bit().
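As a rough illustration of why clustered bits are cheap in such a
structure, here is a minimal userspace sketch of a lazily-allocated
chunked bitmap. It is not the radix-tree-backed kernel implementation;
the chunk size, `sb_` names, and fixed top-level array are assumptions
for the example (a 64-bit `uint64_t` chunk is used for portability):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

#define CHUNK_BITS 64
#define NCHUNKS    16           /* covers bits 0 .. 1023 in this sketch */

struct sparse_bitmap {
    uint64_t *chunks[NCHUNKS];  /* NULL chunk means "all zero" */
};

static int sb_set_bit(struct sparse_bitmap *sb, unsigned long bit)
{
    unsigned long idx = bit / CHUNK_BITS;

    if (!sb->chunks[idx]) {
        /* Allocate a chunk only when a bit in it is first set. */
        sb->chunks[idx] = calloc(1, sizeof(uint64_t));
        if (!sb->chunks[idx])
            return -1;          /* kernel code would return -ENOMEM */
    }
    *sb->chunks[idx] |= 1ULL << (bit % CHUNK_BITS);
    return 0;
}

static bool sb_test_bit(const struct sparse_bitmap *sb, unsigned long bit)
{
    unsigned long idx = bit / CHUNK_BITS;

    if (!sb->chunks[idx])
        return false;
    return *sb->chunks[idx] & (1ULL << (bit % CHUNK_BITS));
}
```

Bits that cluster share a chunk, so sparse but clustered sets touch only a
few allocations; the real xbitmap extends this idea with a radix tree and
an inline "exceptional entry" fast path for small indices.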

Signed-off-by: Matthew Wilcox <[email protected]>
Signed-off-by: Wei Wang <[email protected]>
---
include/linux/radix-tree.h | 2 +
include/linux/xbitmap.h | 49 ++++++++++++++++
lib/radix-tree.c | 139 ++++++++++++++++++++++++++++++++++++++++++++-
3 files changed, 188 insertions(+), 2 deletions(-)
create mode 100644 include/linux/xbitmap.h

diff --git a/include/linux/radix-tree.h b/include/linux/radix-tree.h
index 3e57350..428ccc9 100644
--- a/include/linux/radix-tree.h
+++ b/include/linux/radix-tree.h
@@ -317,6 +317,8 @@ void radix_tree_iter_delete(struct radix_tree_root *,
struct radix_tree_iter *iter, void __rcu **slot);
void *radix_tree_delete_item(struct radix_tree_root *, unsigned long, void *);
void *radix_tree_delete(struct radix_tree_root *, unsigned long);
+bool __radix_tree_delete(struct radix_tree_root *root,
+ struct radix_tree_node *node, void __rcu **slot);
void radix_tree_clear_tags(struct radix_tree_root *, struct radix_tree_node *,
void __rcu **slot);
unsigned int radix_tree_gang_lookup(const struct radix_tree_root *,
diff --git a/include/linux/xbitmap.h b/include/linux/xbitmap.h
new file mode 100644
index 0000000..0b93a46
--- /dev/null
+++ b/include/linux/xbitmap.h
@@ -0,0 +1,49 @@
+/*
+ * eXtensible Bitmaps
+ * Copyright (c) 2017 Microsoft Corporation <[email protected]>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * eXtensible Bitmaps provide an unlimited-size sparse bitmap facility.
+ * All bits are initially zero.
+ */
+
+#include <linux/idr.h>
+
+struct xb {
+ struct radix_tree_root xbrt;
+};
+
+#define XB_INIT { \
+ .xbrt = RADIX_TREE_INIT(IDR_RT_MARKER | GFP_NOWAIT), \
+}
+#define DEFINE_XB(name) struct xb name = XB_INIT
+
+static inline void xb_init(struct xb *xb)
+{
+ INIT_RADIX_TREE(&xb->xbrt, IDR_RT_MARKER | GFP_NOWAIT);
+}
+
+int xb_set_bit(struct xb *xb, unsigned long bit);
+bool xb_test_bit(const struct xb *xb, unsigned long bit);
+int xb_clear_bit(struct xb *xb, unsigned long bit);
+
+static inline bool xb_empty(const struct xb *xb)
+{
+ return radix_tree_empty(&xb->xbrt);
+}
+
+void xb_preload(gfp_t gfp);
+
+static inline void xb_preload_end(void)
+{
+ preempt_enable();
+}
diff --git a/lib/radix-tree.c b/lib/radix-tree.c
index 898e879..d8c3c18 100644
--- a/lib/radix-tree.c
+++ b/lib/radix-tree.c
@@ -37,6 +37,7 @@
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/string.h>
+#include <linux/xbitmap.h>


/* Number of nodes in fully populated tree of given height */
@@ -78,6 +79,14 @@ static struct kmem_cache *radix_tree_node_cachep;
#define IDA_PRELOAD_SIZE (IDA_MAX_PATH * 2 - 1)

/*
+ * The XB can go up to unsigned long, but also uses a bitmap.
+ */
+#define XB_INDEX_BITS (BITS_PER_LONG - ilog2(IDA_BITMAP_BITS))
+#define XB_MAX_PATH (DIV_ROUND_UP(XB_INDEX_BITS, \
+ RADIX_TREE_MAP_SHIFT))
+#define XB_PRELOAD_SIZE (XB_MAX_PATH * 2 - 1)
+
+/*
* Per-cpu pool of preloaded nodes
*/
struct radix_tree_preload {
@@ -840,6 +849,8 @@ int __radix_tree_create(struct radix_tree_root *root, unsigned long index,
offset, 0, 0);
if (!child)
return -ENOMEM;
+ if (is_idr(root))
+ all_tag_set(child, IDR_FREE);
rcu_assign_pointer(*slot, node_to_entry(child));
if (node)
node->count++;
@@ -1986,8 +1997,8 @@ void __radix_tree_delete_node(struct radix_tree_root *root,
delete_node(root, node, update_node, private);
}

-static bool __radix_tree_delete(struct radix_tree_root *root,
- struct radix_tree_node *node, void __rcu **slot)
+bool __radix_tree_delete(struct radix_tree_root *root,
+ struct radix_tree_node *node, void __rcu **slot)
{
void *old = rcu_dereference_raw(*slot);
int exceptional = radix_tree_exceptional_entry(old) ? -1 : 0;
@@ -2137,6 +2148,130 @@ int ida_pre_get(struct ida *ida, gfp_t gfp)
}
EXPORT_SYMBOL(ida_pre_get);

+void xb_preload(gfp_t gfp)
+{
+ __radix_tree_preload(gfp, XB_PRELOAD_SIZE);
+ if (!this_cpu_read(ida_bitmap)) {
+ struct ida_bitmap *bitmap = kmalloc(sizeof(*bitmap), gfp);
+
+ if (!bitmap)
+ return;
+ bitmap = this_cpu_cmpxchg(ida_bitmap, NULL, bitmap);
+ kfree(bitmap);
+ }
+}
+EXPORT_SYMBOL(xb_preload);
+
+int xb_set_bit(struct xb *xb, unsigned long bit)
+{
+ int err;
+ unsigned long index = bit / IDA_BITMAP_BITS;
+ struct radix_tree_root *root = &xb->xbrt;
+ struct radix_tree_node *node;
+ void **slot;
+ struct ida_bitmap *bitmap;
+ unsigned long ebit;
+
+ bit %= IDA_BITMAP_BITS;
+ ebit = bit + 2;
+
+ err = __radix_tree_create(root, index, 0, &node, &slot);
+ if (err)
+ return err;
+ bitmap = rcu_dereference_raw(*slot);
+ if (radix_tree_exception(bitmap)) {
+ unsigned long tmp = (unsigned long)bitmap;
+
+ if (ebit < BITS_PER_LONG) {
+ tmp |= 1UL << ebit;
+ rcu_assign_pointer(*slot, (void *)tmp);
+ return 0;
+ }
+ bitmap = this_cpu_xchg(ida_bitmap, NULL);
+ if (!bitmap)
+ return -EAGAIN;
+ memset(bitmap, 0, sizeof(*bitmap));
+ bitmap->bitmap[0] = tmp >> RADIX_TREE_EXCEPTIONAL_SHIFT;
+ rcu_assign_pointer(*slot, bitmap);
+ }
+
+ if (!bitmap) {
+ if (ebit < BITS_PER_LONG) {
+ bitmap = (void *)((1UL << ebit) |
+ RADIX_TREE_EXCEPTIONAL_ENTRY);
+ __radix_tree_replace(root, node, slot, bitmap, NULL,
+ NULL);
+ return 0;
+ }
+ bitmap = this_cpu_xchg(ida_bitmap, NULL);
+ if (!bitmap)
+ return -EAGAIN;
+ memset(bitmap, 0, sizeof(*bitmap));
+ __radix_tree_replace(root, node, slot, bitmap, NULL, NULL);
+ }
+
+ __set_bit(bit, bitmap->bitmap);
+ return 0;
+}
+EXPORT_SYMBOL(xb_set_bit);
+
+int xb_clear_bit(struct xb *xb, unsigned long bit)
+{
+ unsigned long index = bit / IDA_BITMAP_BITS;
+ struct radix_tree_root *root = &xb->xbrt;
+ struct radix_tree_node *node;
+ void **slot;
+ struct ida_bitmap *bitmap;
+ unsigned long ebit;
+
+ bit %= IDA_BITMAP_BITS;
+ ebit = bit + 2;
+
+ bitmap = __radix_tree_lookup(root, index, &node, &slot);
+ if (radix_tree_exception(bitmap)) {
+ unsigned long tmp = (unsigned long)bitmap;
+
+ if (ebit >= BITS_PER_LONG)
+ return 0;
+ tmp &= ~(1UL << ebit);
+ if (tmp == RADIX_TREE_EXCEPTIONAL_ENTRY)
+ __radix_tree_delete(root, node, slot);
+ else
+ rcu_assign_pointer(*slot, (void *)tmp);
+ return 0;
+ }
+
+ if (!bitmap)
+ return 0;
+
+ __clear_bit(bit, bitmap->bitmap);
+ if (bitmap_empty(bitmap->bitmap, IDA_BITMAP_BITS)) {
+ kfree(bitmap);
+ __radix_tree_delete(root, node, slot);
+ }
+
+ return 0;
+}
+
+bool xb_test_bit(const struct xb *xb, unsigned long bit)
+{
+ unsigned long index = bit / IDA_BITMAP_BITS;
+ const struct radix_tree_root *root = &xb->xbrt;
+ struct ida_bitmap *bitmap = radix_tree_lookup(root, index);
+
+ bit %= IDA_BITMAP_BITS;
+
+ if (!bitmap)
+ return false;
+ if (radix_tree_exception(bitmap)) {
+ bit += RADIX_TREE_EXCEPTIONAL_SHIFT;
+ if (bit > BITS_PER_LONG)
+ return false;
+ return (unsigned long)bitmap & (1UL << bit);
+ }
+ return test_bit(bit, bitmap->bitmap);
+}
+
void __rcu **idr_get_free(struct radix_tree_root *root,
struct radix_tree_iter *iter, gfp_t gfp, int end)
{
--
2.7.4

2017-08-03 06:49:35

by Wang, Wei W

Subject: [PATCH v13 5/5] virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_VQ

Add a new vq to report hints of guest free pages to the host.

Signed-off-by: Wei Wang <[email protected]>
Signed-off-by: Liang Li <[email protected]>
---
drivers/virtio/virtio_balloon.c | 164 ++++++++++++++++++++++++++++++------
include/uapi/linux/virtio_balloon.h | 1 +
2 files changed, 140 insertions(+), 25 deletions(-)

diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index 29aca0c..29c4a61 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -54,11 +54,12 @@ static struct vfsmount *balloon_mnt;

struct virtio_balloon {
struct virtio_device *vdev;
- struct virtqueue *inflate_vq, *deflate_vq, *stats_vq;
+ struct virtqueue *inflate_vq, *deflate_vq, *stats_vq, *free_page_vq;

/* The balloon servicing is delegated to a freezable workqueue. */
struct work_struct update_balloon_stats_work;
struct work_struct update_balloon_size_work;
+ struct work_struct report_free_page_work;

/* Prevent updating balloon when it is being canceled. */
spinlock_t stop_update_lock;
@@ -90,6 +91,13 @@ struct virtio_balloon {
/* Memory statistics */
struct virtio_balloon_stat stats[VIRTIO_BALLOON_S_NR];

+ /*
+ * Used by the device and driver to signal each other.
+ * device->driver: start the free page report.
+ * driver->device: end the free page report.
+ */
+ __virtio32 report_free_page_signal;
+
/* To register callback in oom notifier call chain */
struct notifier_block nb;
};
@@ -146,7 +154,7 @@ static void set_page_pfns(struct virtio_balloon *vb,
}

static void send_one_sg(struct virtio_balloon *vb, struct virtqueue *vq,
- void *addr, uint32_t size)
+ void *addr, uint32_t size, bool busywait)
{
struct scatterlist sg;
unsigned int len;
@@ -165,7 +173,12 @@ static void send_one_sg(struct virtio_balloon *vb, struct virtqueue *vq,
cpu_relax();
}
virtqueue_kick(vq);
- wait_event(vb->acked, virtqueue_get_buf(vq, &len));
+ if (busywait)
+ while (!virtqueue_get_buf(vq, &len) &&
+ !virtqueue_is_broken(vq))
+ cpu_relax();
+ else
+ wait_event(vb->acked, virtqueue_get_buf(vq, &len));
}

/*
@@ -197,11 +210,11 @@ static void tell_host_sgs(struct virtio_balloon *vb,
sg_addr = pfn_to_kaddr(sg_pfn_start);
sg_len = (sg_pfn_end - sg_pfn_start) << PAGE_SHIFT;
while (sg_len > sg_max_len) {
- send_one_sg(vb, vq, sg_addr, sg_max_len);
+ send_one_sg(vb, vq, sg_addr, sg_max_len, 0);
sg_addr += sg_max_len;
sg_len -= sg_max_len;
}
- send_one_sg(vb, vq, sg_addr, sg_len);
+ send_one_sg(vb, vq, sg_addr, sg_len, 0);
xb_zero(&vb->page_xb, sg_pfn_start, sg_pfn_end);
sg_pfn_start = sg_pfn_end + 1;
}
@@ -503,42 +516,138 @@ static void update_balloon_size_func(struct work_struct *work)
queue_work(system_freezable_wq, work);
}

+static void virtio_balloon_send_free_pages(void *opaque, unsigned long pfn,
+ unsigned long nr_pages)
+{
+ struct virtio_balloon *vb = (struct virtio_balloon *)opaque;
+ void *addr = pfn_to_kaddr(pfn);
+ uint32_t len = nr_pages << PAGE_SHIFT;
+
+ send_one_sg(vb, vb->free_page_vq, addr, len, 1);
+}
+
+static void report_free_page_completion(struct virtio_balloon *vb)
+{
+ struct virtqueue *vq = vb->free_page_vq;
+ struct scatterlist sg;
+ unsigned int len;
+
+ sg_init_one(&sg, &vb->report_free_page_signal, sizeof(__virtio32));
+ while (unlikely(virtqueue_add_outbuf(vq, &sg, 1, vb, GFP_KERNEL)
+ == -ENOSPC)) {
+ virtqueue_kick(vq);
+ while (!virtqueue_get_buf(vq, &len) &&
+ !virtqueue_is_broken(vq))
+ cpu_relax();
+ }
+ virtqueue_kick(vq);
+}
+
+static void report_free_page(struct work_struct *work)
+{
+ struct virtio_balloon *vb;
+
+ vb = container_of(work, struct virtio_balloon, report_free_page_work);
+ walk_free_mem_block(vb, 1, &virtio_balloon_send_free_pages);
+ report_free_page_completion(vb);
+}
+
+static void free_page_request(struct virtqueue *vq)
+{
+ struct virtio_balloon *vb = vq->vdev->priv;
+
+ queue_work(system_freezable_wq, &vb->report_free_page_work);
+}
+
static int init_vqs(struct virtio_balloon *vb)
{
- struct virtqueue *vqs[3];
- vq_callback_t *callbacks[] = { balloon_ack, balloon_ack, stats_request };
- static const char * const names[] = { "inflate", "deflate", "stats" };
- int err, nvqs;
+ struct virtqueue **vqs;
+ vq_callback_t **callbacks;
+ const char **names;
+ struct scatterlist sg;
+ int i, nvqs, err = -ENOMEM;
+
+ /* Inflateq and deflateq are used unconditionally */
+ nvqs = 2;
+ if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ))
+ nvqs++;
+ if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_FREE_PAGE_VQ))
+ nvqs++;
+
+ /* Allocate space for find_vqs parameters */
+ vqs = kcalloc(nvqs, sizeof(*vqs), GFP_KERNEL);
+ if (!vqs)
+ goto err_vq;
+ callbacks = kmalloc_array(nvqs, sizeof(*callbacks), GFP_KERNEL);
+ if (!callbacks)
+ goto err_callback;
+ names = kmalloc_array(nvqs, sizeof(*names), GFP_KERNEL);
+ if (!names)
+ goto err_names;
+
+ callbacks[0] = balloon_ack;
+ names[0] = "inflate";
+ callbacks[1] = balloon_ack;
+ names[1] = "deflate";
+
+ i = 2;
+ if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ)) {
+ callbacks[i] = stats_request;
+ names[i] = "stats";
+ i++;
+ }

- /*
- * We expect two virtqueues: inflate and deflate, and
- * optionally stat.
- */
- nvqs = virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ) ? 3 : 2;
- err = virtio_find_vqs(vb->vdev, nvqs, vqs, callbacks, names, NULL);
+ if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_FREE_PAGE_VQ)) {
+ callbacks[i] = free_page_request;
+ names[i] = "free_page_vq";
+ }
+
+ err = vb->vdev->config->find_vqs(vb->vdev, nvqs, vqs, callbacks, names,
+ NULL, NULL);
if (err)
- return err;
+ goto err_find;

vb->inflate_vq = vqs[0];
vb->deflate_vq = vqs[1];
+ i = 2;
if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ)) {
- struct scatterlist sg;
- unsigned int num_stats;
- vb->stats_vq = vqs[2];
-
+ vb->stats_vq = vqs[i++];
/*
* Prime this virtqueue with one buffer so the hypervisor can
* use it to signal us later (it can't be broken yet!).
*/
- num_stats = update_balloon_stats(vb);
-
- sg_init_one(&sg, vb->stats, sizeof(vb->stats[0]) * num_stats);
+ sg_init_one(&sg, vb->stats, sizeof(vb->stats));
if (virtqueue_add_outbuf(vb->stats_vq, &sg, 1, vb, GFP_KERNEL)
< 0)
BUG();
virtqueue_kick(vb->stats_vq);
}
+
+ if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_FREE_PAGE_VQ)) {
+ vb->free_page_vq = vqs[i];
+ vb->report_free_page_signal = 0;
+ sg_init_one(&sg, &vb->report_free_page_signal,
+ sizeof(__virtio32));
+ if (virtqueue_add_outbuf(vb->free_page_vq, &sg, 1, vb,
+ GFP_KERNEL) < 0)
+ dev_warn(&vb->vdev->dev, "%s: add signal buf fail\n",
+ __func__);
+ virtqueue_kick(vb->free_page_vq);
+ }
+
+ kfree(names);
+ kfree(callbacks);
+ kfree(vqs);
return 0;
+
+err_find:
+ kfree(names);
+err_names:
+ kfree(callbacks);
+err_callback:
+ kfree(vqs);
+err_vq:
+ return err;
}

#ifdef CONFIG_BALLOON_COMPACTION
@@ -590,7 +699,7 @@ static int virtballoon_migratepage(struct balloon_dev_info *vb_dev_info,
spin_unlock_irqrestore(&vb_dev_info->pages_lock, flags);
if (use_sg) {
send_one_sg(vb, vb->inflate_vq, page_address(newpage),
- PAGE_SIZE);
+ PAGE_SIZE, 0);
} else {
vb->num_pfns = VIRTIO_BALLOON_PAGES_PER_PAGE;
set_page_pfns(vb, vb->pfns, newpage);
@@ -600,7 +709,7 @@ static int virtballoon_migratepage(struct balloon_dev_info *vb_dev_info,
balloon_page_delete(page);
if (use_sg) {
send_one_sg(vb, vb->deflate_vq, page_address(page),
- PAGE_SIZE);
+ PAGE_SIZE, 0);
} else {
vb->num_pfns = VIRTIO_BALLOON_PAGES_PER_PAGE;
set_page_pfns(vb, vb->pfns, page);
@@ -667,6 +776,9 @@ static int virtballoon_probe(struct virtio_device *vdev)
if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_SG))
xb_init(&vb->page_xb);

+ if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_VQ))
+ INIT_WORK(&vb->report_free_page_work, report_free_page);
+
vb->nb.notifier_call = virtballoon_oom_notify;
vb->nb.priority = VIRTBALLOON_OOM_NOTIFY_PRIORITY;
err = register_oom_notifier(&vb->nb);
@@ -731,6 +843,7 @@ static void virtballoon_remove(struct virtio_device *vdev)
spin_unlock_irq(&vb->stop_update_lock);
cancel_work_sync(&vb->update_balloon_size_work);
cancel_work_sync(&vb->update_balloon_stats_work);
+ cancel_work_sync(&vb->report_free_page_work);

xb_empty(&vb->page_xb);
remove_common(vb);
@@ -785,6 +898,7 @@ static unsigned int features[] = {
VIRTIO_BALLOON_F_STATS_VQ,
VIRTIO_BALLOON_F_DEFLATE_ON_OOM,
VIRTIO_BALLOON_F_SG,
+ VIRTIO_BALLOON_F_FREE_PAGE_VQ,
};

static struct virtio_driver virtio_balloon_driver = {
diff --git a/include/uapi/linux/virtio_balloon.h b/include/uapi/linux/virtio_balloon.h
index 37780a7..8214f84 100644
--- a/include/uapi/linux/virtio_balloon.h
+++ b/include/uapi/linux/virtio_balloon.h
@@ -35,6 +35,7 @@
#define VIRTIO_BALLOON_F_STATS_VQ 1 /* Memory Stats virtqueue */
#define VIRTIO_BALLOON_F_DEFLATE_ON_OOM 2 /* Deflate balloon on OOM */
#define VIRTIO_BALLOON_F_SG 3 /* Use sg instead of PFN lists */
+#define VIRTIO_BALLOON_F_FREE_PAGE_VQ 4 /* Virtqueue to report free pages */

/* Size of a PFN in the balloon interface. */
#define VIRTIO_BALLOON_PFN_SHIFT 12
--
2.7.4

2017-08-03 06:50:05

by Wang, Wei W

Subject: [PATCH v13 3/5] virtio-balloon: VIRTIO_BALLOON_F_SG

Add a new feature, VIRTIO_BALLOON_F_SG, which enables the transfer
of balloon (i.e. inflated/deflated) pages to the host using
scatter-gather lists.

The previous virtio-balloon implementation is not very efficient,
because balloon pages are transferred to the host one by one. Here
is the percentage breakdown of the time spent on each step of the
balloon inflating process (inflating 7GB of an 8GB idle guest).

1) allocating pages (6.5%)
2) sending PFNs to host (68.3%)
3) address translation (6.1%)
4) madvise (19%)

It takes about 4126ms for the inflating process to complete.
The above profiling shows that the bottlenecks are steps 2)
and 4).

This patch optimizes step 2) by transferring pages to the host in
sgs. An sg describes a chunk of physically contiguous guest pages.
With this mechanism, step 4) can also be optimized by doing address
translation and madvise() in chunks rather than page by page.
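For illustration, the chunk-splitting arithmetic can be sketched in
userspace C. The function name, `PAGE_SHIFT_SIM`, and returning a count
instead of issuing virtqueue adds are all assumptions for the example;
only the capping loop mirrors the one in tell_host_sgs():

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT_SIM 12       /* 4K pages, as the balloon assumes */

/*
 * How many sg descriptors does a run of nr_pages physically contiguous
 * pages need, when each sg is capped at sg_max_len bytes?
 */
static unsigned int count_sgs(uint64_t nr_pages, uint64_t sg_max_len)
{
    uint64_t len = nr_pages << PAGE_SHIFT_SIM;
    unsigned int sgs = 0;

    /* Peel off full-size sgs while the run exceeds the cap... */
    while (len > sg_max_len) {
        sgs++;
        len -= sg_max_len;
    }
    /* ...and one final (possibly shorter) sg for the remainder. */
    return sgs + 1;
}
```

A run that fits under the cap costs exactly one sg, which is why
contiguous runs found in the xbitmap make the transfer so much cheaper
than one descriptor per page.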

With this new feature, the ballooning process above takes ~541ms,
an improvement of ~87%.

TODO: optimize step 1) by allocating/freeing a chunk of pages
instead of a single page each time.

Signed-off-by: Wei Wang <[email protected]>
Signed-off-by: Liang Li <[email protected]>
Suggested-by: Michael S. Tsirkin <[email protected]>
---
drivers/virtio/virtio_balloon.c | 150 ++++++++++++++++++++++++++++++++----
include/uapi/linux/virtio_balloon.h | 1 +
2 files changed, 134 insertions(+), 17 deletions(-)

diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index f0b3a0b..29aca0c 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -32,6 +32,7 @@
#include <linux/mm.h>
#include <linux/mount.h>
#include <linux/magic.h>
+#include <linux/xbitmap.h>

/*
* Balloon device works in 4K page units. So each page is pointed to by
@@ -79,6 +80,9 @@ struct virtio_balloon {
/* Synchronize access/update to this struct virtio_balloon elements */
struct mutex balloon_lock;

+ /* The xbitmap used to record ballooned pages */
+ struct xb page_xb;
+
/* The array of pfns we tell the Host about. */
unsigned int num_pfns;
__virtio32 pfns[VIRTIO_BALLOON_ARRAY_PFNS_MAX];
@@ -141,13 +145,90 @@ static void set_page_pfns(struct virtio_balloon *vb,
page_to_balloon_pfn(page) + i);
}

+static void send_one_sg(struct virtio_balloon *vb, struct virtqueue *vq,
+ void *addr, uint32_t size)
+{
+ struct scatterlist sg;
+ unsigned int len;
+
+ sg_init_one(&sg, addr, size);
+ while (unlikely(virtqueue_add_inbuf(vq, &sg, 1, vb, GFP_KERNEL)
+ == -ENOSPC)) {
+ /*
+ * It is uncommon for the vq to be full, because sgs are sent one
+ * by one and the device normally handles them in time. If it does
+ * happen, kick the device and wait until an entry is released.
+ */
+ virtqueue_kick(vq);
+ while (!virtqueue_get_buf(vq, &len) &&
+ !virtqueue_is_broken(vq))
+ cpu_relax();
+ }
+ virtqueue_kick(vq);
+ wait_event(vb->acked, virtqueue_get_buf(vq, &len));
+}
+
+/*
+ * Send balloon pages in sgs to host. The balloon pages are recorded in the
+ * page xbitmap. Each bit in the bitmap corresponds to a page of PAGE_SIZE.
+ * The page xbitmap is searched for continuous "1" bits, which correspond
+ * to continuous pages, to chunk into sgs.
+ *
+ * @page_xb_start and @page_xb_end form the range of bits in the xbitmap that
+ * need to be searched.
+ */
+static void tell_host_sgs(struct virtio_balloon *vb,
+ struct virtqueue *vq,
+ unsigned long page_xb_start,
+ unsigned long page_xb_end)
+{
+ unsigned long sg_pfn_start, sg_pfn_end;
+ void *sg_addr;
+ uint32_t sg_len, sg_max_len = round_down(UINT_MAX, PAGE_SIZE);
+
+ sg_pfn_start = page_xb_start;
+ while (sg_pfn_start < page_xb_end) {
+ sg_pfn_start = xb_find_next_bit(&vb->page_xb, sg_pfn_start,
+ page_xb_end, 1);
+ if (sg_pfn_start == page_xb_end + 1)
+ break;
+ sg_pfn_end = xb_find_next_bit(&vb->page_xb, sg_pfn_start + 1,
+ page_xb_end, 0);
+ sg_addr = pfn_to_kaddr(sg_pfn_start);
+ sg_len = (sg_pfn_end - sg_pfn_start) << PAGE_SHIFT;
+ while (sg_len > sg_max_len) {
+ send_one_sg(vb, vq, sg_addr, sg_max_len);
+ sg_addr += sg_max_len;
+ sg_len -= sg_max_len;
+ }
+ send_one_sg(vb, vq, sg_addr, sg_len);
+ xb_zero(&vb->page_xb, sg_pfn_start, sg_pfn_end);
+ sg_pfn_start = sg_pfn_end + 1;
+ }
+}
+
+static inline void xb_set_page(struct virtio_balloon *vb,
+ struct page *page,
+ unsigned long *pfn_min,
+ unsigned long *pfn_max)
+{
+ unsigned long pfn = page_to_pfn(page);
+
+ *pfn_min = min(pfn, *pfn_min);
+ *pfn_max = max(pfn, *pfn_max);
+ xb_set_bit(&vb->page_xb, pfn);
+}
+
static unsigned fill_balloon(struct virtio_balloon *vb, size_t num)
{
struct balloon_dev_info *vb_dev_info = &vb->vb_dev_info;
unsigned num_allocated_pages;
+ bool use_sg = virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_SG);
+ unsigned long pfn_max = 0, pfn_min = ULONG_MAX;

/* We can only do one array worth at a time. */
- num = min(num, ARRAY_SIZE(vb->pfns));
+ if (!use_sg)
+ num = min(num, ARRAY_SIZE(vb->pfns));

mutex_lock(&vb->balloon_lock);
for (vb->num_pfns = 0; vb->num_pfns < num;
@@ -162,7 +243,12 @@ static unsigned fill_balloon(struct virtio_balloon *vb, size_t num)
msleep(200);
break;
}
- set_page_pfns(vb, vb->pfns + vb->num_pfns, page);
+
+ if (use_sg)
+ xb_set_page(vb, page, &pfn_min, &pfn_max);
+ else
+ set_page_pfns(vb, vb->pfns + vb->num_pfns, page);
+
vb->num_pages += VIRTIO_BALLOON_PAGES_PER_PAGE;
if (!virtio_has_feature(vb->vdev,
VIRTIO_BALLOON_F_DEFLATE_ON_OOM))
@@ -171,8 +257,12 @@ static unsigned fill_balloon(struct virtio_balloon *vb, size_t num)

num_allocated_pages = vb->num_pfns;
/* Did we get any? */
- if (vb->num_pfns != 0)
- tell_host(vb, vb->inflate_vq);
+ if (vb->num_pfns) {
+ if (use_sg)
+ tell_host_sgs(vb, vb->inflate_vq, pfn_min, pfn_max);
+ else
+ tell_host(vb, vb->inflate_vq);
+ }
mutex_unlock(&vb->balloon_lock);

return num_allocated_pages;
@@ -198,9 +288,12 @@ static unsigned leak_balloon(struct virtio_balloon *vb, size_t num)
struct page *page;
struct balloon_dev_info *vb_dev_info = &vb->vb_dev_info;
LIST_HEAD(pages);
+ bool use_sg = virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_SG);
+ unsigned long pfn_max = 0, pfn_min = ULONG_MAX;

- /* We can only do one array worth at a time. */
- num = min(num, ARRAY_SIZE(vb->pfns));
+ /* Traditionally, we can only do one array worth at a time. */
+ if (!use_sg)
+ num = min(num, ARRAY_SIZE(vb->pfns));

mutex_lock(&vb->balloon_lock);
/* We can't release more pages than taken */
@@ -210,7 +303,11 @@ static unsigned leak_balloon(struct virtio_balloon *vb, size_t num)
page = balloon_page_dequeue(vb_dev_info);
if (!page)
break;
- set_page_pfns(vb, vb->pfns + vb->num_pfns, page);
+ if (use_sg)
+ xb_set_page(vb, page, &pfn_min, &pfn_max);
+ else
+ set_page_pfns(vb, vb->pfns + vb->num_pfns, page);
+
list_add(&page->lru, &pages);
vb->num_pages -= VIRTIO_BALLOON_PAGES_PER_PAGE;
}
@@ -221,8 +318,12 @@ static unsigned leak_balloon(struct virtio_balloon *vb, size_t num)
* virtio_has_feature(vdev, VIRTIO_BALLOON_F_MUST_TELL_HOST);
* is true, we *have* to do it in this order
*/
- if (vb->num_pfns != 0)
- tell_host(vb, vb->deflate_vq);
+ if (vb->num_pfns) {
+ if (use_sg)
+ tell_host_sgs(vb, vb->deflate_vq, pfn_min, pfn_max);
+ else
+ tell_host(vb, vb->deflate_vq);
+ }
release_pages_balloon(vb, &pages);
mutex_unlock(&vb->balloon_lock);
return num_freed_pages;
@@ -441,6 +542,7 @@ static int init_vqs(struct virtio_balloon *vb)
}

#ifdef CONFIG_BALLOON_COMPACTION
+
/*
* virtballoon_migratepage - perform the balloon page migration on behalf of
* a compation thread. (called under page lock)
@@ -464,6 +566,7 @@ static int virtballoon_migratepage(struct balloon_dev_info *vb_dev_info,
{
struct virtio_balloon *vb = container_of(vb_dev_info,
struct virtio_balloon, vb_dev_info);
+ bool use_sg = virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_SG);
unsigned long flags;

/*
@@ -485,16 +588,24 @@ static int virtballoon_migratepage(struct balloon_dev_info *vb_dev_info,
vb_dev_info->isolated_pages--;
__count_vm_event(BALLOON_MIGRATE);
spin_unlock_irqrestore(&vb_dev_info->pages_lock, flags);
- vb->num_pfns = VIRTIO_BALLOON_PAGES_PER_PAGE;
- set_page_pfns(vb, vb->pfns, newpage);
- tell_host(vb, vb->inflate_vq);
-
+ if (use_sg) {
+ send_one_sg(vb, vb->inflate_vq, page_address(newpage),
+ PAGE_SIZE);
+ } else {
+ vb->num_pfns = VIRTIO_BALLOON_PAGES_PER_PAGE;
+ set_page_pfns(vb, vb->pfns, newpage);
+ tell_host(vb, vb->inflate_vq);
+ }
/* balloon's page migration 2nd step -- deflate "page" */
balloon_page_delete(page);
- vb->num_pfns = VIRTIO_BALLOON_PAGES_PER_PAGE;
- set_page_pfns(vb, vb->pfns, page);
- tell_host(vb, vb->deflate_vq);
-
+ if (use_sg) {
+ send_one_sg(vb, vb->deflate_vq, page_address(page),
+ PAGE_SIZE);
+ } else {
+ vb->num_pfns = VIRTIO_BALLOON_PAGES_PER_PAGE;
+ set_page_pfns(vb, vb->pfns, page);
+ tell_host(vb, vb->deflate_vq);
+ }
mutex_unlock(&vb->balloon_lock);

put_page(page); /* balloon reference */
@@ -553,6 +664,9 @@ static int virtballoon_probe(struct virtio_device *vdev)
if (err)
goto out_free_vb;

+ if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_SG))
+ xb_init(&vb->page_xb);
+
vb->nb.notifier_call = virtballoon_oom_notify;
vb->nb.priority = VIRTBALLOON_OOM_NOTIFY_PRIORITY;
err = register_oom_notifier(&vb->nb);
@@ -618,6 +732,7 @@ static void virtballoon_remove(struct virtio_device *vdev)
cancel_work_sync(&vb->update_balloon_size_work);
cancel_work_sync(&vb->update_balloon_stats_work);

+ xb_empty(&vb->page_xb);
remove_common(vb);
#ifdef CONFIG_BALLOON_COMPACTION
if (vb->vb_dev_info.inode)
@@ -669,6 +784,7 @@ static unsigned int features[] = {
VIRTIO_BALLOON_F_MUST_TELL_HOST,
VIRTIO_BALLOON_F_STATS_VQ,
VIRTIO_BALLOON_F_DEFLATE_ON_OOM,
+ VIRTIO_BALLOON_F_SG,
};

static struct virtio_driver virtio_balloon_driver = {
diff --git a/include/uapi/linux/virtio_balloon.h b/include/uapi/linux/virtio_balloon.h
index 343d7dd..37780a7 100644
--- a/include/uapi/linux/virtio_balloon.h
+++ b/include/uapi/linux/virtio_balloon.h
@@ -34,6 +34,7 @@
#define VIRTIO_BALLOON_F_MUST_TELL_HOST 0 /* Tell before reclaiming pages */
#define VIRTIO_BALLOON_F_STATS_VQ 1 /* Memory Stats virtqueue */
#define VIRTIO_BALLOON_F_DEFLATE_ON_OOM 2 /* Deflate balloon on OOM */
+#define VIRTIO_BALLOON_F_SG 3 /* Use sg instead of PFN lists */

/* Size of a PFN in the balloon interface. */
#define VIRTIO_BALLOON_PFN_SHIFT 12
--
2.7.4
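
The VIRTIO_BALLOON_F_SG path above batches contiguous pages into (address,
length) scatter-gather entries instead of sending one PFN per page, which is
what tell_host_sgs() does when it scans the xbitmap for pfn runs. A minimal
userspace sketch of that coalescing idea (hypothetical names, not the kernel
code):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical userspace sketch: coalesce a sorted list of page frame
 * numbers into contiguous runs, the way the F_SG path batches pages into
 * (address, length) sg entries instead of sending one PFN per page. */
struct sg_run {
	unsigned long start_pfn;	/* first pfn of the run */
	unsigned long nr_pages;		/* length of the run in pages */
};

/* Returns the number of runs written to 'runs' (capacity 'max_runs'). */
size_t coalesce_pfns(const unsigned long *pfns, size_t nr,
		     struct sg_run *runs, size_t max_runs)
{
	size_t nruns = 0;

	for (size_t i = 0; i < nr; i++) {
		if (nruns && runs[nruns - 1].start_pfn +
			     runs[nruns - 1].nr_pages == pfns[i]) {
			/* pfn extends the current contiguous run */
			runs[nruns - 1].nr_pages++;
		} else if (nruns < max_runs) {
			/* gap in the pfns: start a new run */
			runs[nruns].start_pfn = pfns[i];
			runs[nruns].nr_pages = 1;
			nruns++;
		}
	}
	return nruns;
}
```

For 256 contiguous ballooned pages this produces one sg entry where the PFN
list interface would need 256 entries, which is the source of the "fast
ballooning" speedup claimed in the cover letter.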

2017-08-03 06:50:04

by Wang, Wei W

Subject: [PATCH v13 4/5] mm: support reporting free page blocks

This patch adds support to walk through the free page blocks in the
system and report them via a callback function. Some page blocks may
leave the free list after the report function returns, so it is the
caller's responsibility to either detect or prevent the use of such
pages.

Signed-off-by: Wei Wang <[email protected]>
Signed-off-by: Liang Li <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Michael S. Tsirkin <[email protected]>
---
include/linux/mm.h | 7 ++++
include/linux/mmzone.h | 5 +++
mm/page_alloc.c | 109 +++++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 121 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 46b9ac5..24481e3 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1835,6 +1835,13 @@ extern void free_area_init_node(int nid, unsigned long * zones_size,
unsigned long zone_start_pfn, unsigned long *zholes_size);
extern void free_initmem(void);

+#if IS_ENABLED(CONFIG_VIRTIO_BALLOON)
+extern void walk_free_mem_block(void *opaque1,
+ unsigned int min_order,
+ void (*visit)(void *opaque2,
+ unsigned long pfn,
+ unsigned long nr_pages));
+#endif
/*
* Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK)
* into the buddy system. The freed pages will be poisoned with pattern
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index fc14b8b..59eacf2 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -83,6 +83,11 @@ static inline bool is_migrate_movable(int mt)
for (order = 0; order < MAX_ORDER; order++) \
for (type = 0; type < MIGRATE_TYPES; type++)

+#define for_each_migratetype_order_decend(min_order, order, type) \
+ for (order = MAX_ORDER - 1; order < MAX_ORDER && order >= min_order; \
+ order--) \
+ for (type = 0; type < MIGRATE_TYPES; type++)
+
extern int page_group_by_mobility_disabled;

#define NR_MIGRATETYPE_BITS (PB_migrate_end - PB_migrate + 1)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6d30e91..b90b513 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4761,6 +4761,115 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
show_swap_cache_info();
}

+#if IS_ENABLED(CONFIG_VIRTIO_BALLOON)
+
+/*
+ * Heuristically get a free page block in the system.
+ *
+ * It is possible that pages from the page block are used immediately after
+ * report_free_page_block() returns. It is the caller's responsibility to
+ * either detect or prevent the use of such pages.
+ *
+ * The input parameters specify the free list to check for a free page block:
+ * zone->free_area[order].free_list[migratetype]
+ *
+ * If the caller supplied page block (i.e. **page) is on the free list, offer
+ * the next page block on the list to the caller. Otherwise, offer the first
+ * page block on the list.
+ *
+ * Return 0 when a page block is found on the caller specified free list.
+ * Otherwise, return -EAGAIN and set *page to NULL.
+ */
+static int report_free_page_block(struct zone *zone, unsigned int order,
+ unsigned int migratetype, struct page **page)
+{
+ struct list_head *free_list;
+ int ret = 0;
+ unsigned long flags;
+
+ spin_lock_irqsave(&zone->lock, flags);
+
+ free_list = &zone->free_area[order].free_list[migratetype];
+ if (list_empty(free_list)) {
+ *page = NULL;
+ ret = -EAGAIN;
+ goto out;
+ }
+
+ /* The caller is asking for the first free page block on the list */
+ if (!(*page)) {
+ *page = list_first_entry(free_list, struct page, lru);
+ ret = 0;
+ goto out;
+ }
+
+ /*
+ * The page block passed from the caller is not on this free list
+ * anymore (e.g. a 1MB free page block has been split). In this case,
+ * offer the first page block on the free list that the caller is
+ * asking for.
+ */
+ if (PageBuddy(*page) && order != page_order(*page)) {
+ *page = list_first_entry(free_list, struct page, lru);
+ ret = 0;
+ goto out;
+ }
+
+ /*
+ * The page block passed from the caller has been the last page block
+ * on the list.
+ */
+ if ((*page)->lru.next == free_list) {
+ *page = NULL;
+ ret = -EAGAIN;
+ goto out;
+ }
+
+ /*
+ * Finally, fall into the regular case: the page block passed from the
+ * caller is still on the free list. Offer the next one.
+ */
+ *page = list_next_entry((*page), lru);
+out:
+ spin_unlock_irqrestore(&zone->lock, flags);
+ return ret;
+}
+
+/*
+ * Walk through the free page blocks in the system. The @visit callback is
+ * invoked to handle each free page block.
+ *
+ * Note: some page blocks may be used after the report function returns, so it
+ * is not safe for the callback to use any pages or discard data on such page
+ * blocks.
+ */
+void walk_free_mem_block(void *opaque1,
+ unsigned int min_order,
+ void (*visit)(void *opaque2,
+ unsigned long pfn,
+ unsigned long nr_pages))
+{
+ struct zone *zone = NULL;
+ struct page *page = NULL;
+ unsigned int order;
+ unsigned long pfn, nr_pages;
+ int type;
+
+ for_each_populated_zone(zone) {
+ for_each_migratetype_order_decend(min_order, order, type) {
+ while (!report_free_page_block(zone, order, type,
+ &page)) {
+ pfn = page_to_pfn(page);
+ nr_pages = 1 << order;
+ visit(opaque1, pfn, nr_pages);
+ }
+ }
+ }
+}
+EXPORT_SYMBOL_GPL(walk_free_mem_block);
+
+#endif
+
static void zoneref_set_zone(struct zone *zone, struct zoneref *zoneref)
{
zoneref->zone = zone;
--
2.7.4
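
The report_free_page_block() above uses the last-returned page block as a
cursor, so the zone lock can be dropped between calls: pass NULL to get the
first block, pass the previous block to get the next one, restart from the
head if the previous block has left the list, and get -EAGAIN at the end.
That protocol can be modeled in userspace roughly as follows (illustrative
names and a plain linked list, not the kernel structures):

```c
#include <assert.h>
#include <stddef.h>

#define EAGAIN_ERR (-11)	/* stand-in for the kernel's -EAGAIN */

/* Hypothetical model of the cursor protocol. 'on_list' stands in for the
 * PageBuddy()/page_order() validity check in the patch. */
struct block {
	struct block *next;
	int on_list;
};

static int report_next(struct block *head, struct block **cursor)
{
	if (!head) {			/* free list is empty */
		*cursor = NULL;
		return EAGAIN_ERR;
	}
	if (!*cursor || !(*cursor)->on_list) {
		/* first call, or the old block was split/allocated:
		 * (re)start from the head of the list */
		*cursor = head;
		return 0;
	}
	if (!(*cursor)->next) {		/* cursor was the last block */
		*cursor = NULL;
		return EAGAIN_ERR;
	}
	*cursor = (*cursor)->next;	/* common case: advance */
	return 0;
}
```

This is the pattern Michal Hocko objects to below: the cursor block can be
reused, or restarting from the head can revisit blocks, once the lock is
dropped between calls.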

2017-08-03 08:13:52

by Pankaj Gupta

Subject: Re: [PATCH v13 5/5] virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_VQ


>
> Add a new vq to report hints of guest free pages to the host.
>
> Signed-off-by: Wei Wang <[email protected]>
> Signed-off-by: Liang Li <[email protected]>
> ---
> drivers/virtio/virtio_balloon.c | 164
> ++++++++++++++++++++++++++++++------
> include/uapi/linux/virtio_balloon.h | 1 +
> 2 files changed, 140 insertions(+), 25 deletions(-)
>
> diff --git a/drivers/virtio/virtio_balloon.c
> b/drivers/virtio/virtio_balloon.c
> index 29aca0c..29c4a61 100644
> --- a/drivers/virtio/virtio_balloon.c
> +++ b/drivers/virtio/virtio_balloon.c
> @@ -54,11 +54,12 @@ static struct vfsmount *balloon_mnt;
>
> struct virtio_balloon {
> struct virtio_device *vdev;
> - struct virtqueue *inflate_vq, *deflate_vq, *stats_vq;
> + struct virtqueue *inflate_vq, *deflate_vq, *stats_vq, *free_page_vq;
>
> /* The balloon servicing is delegated to a freezable workqueue. */
> struct work_struct update_balloon_stats_work;
> struct work_struct update_balloon_size_work;
> + struct work_struct report_free_page_work;
>
> /* Prevent updating balloon when it is being canceled. */
> spinlock_t stop_update_lock;
> @@ -90,6 +91,13 @@ struct virtio_balloon {
> /* Memory statistics */
> struct virtio_balloon_stat stats[VIRTIO_BALLOON_S_NR];
>
> + /*
> + * Used by the device and driver to signal each other.
> + * device->driver: start the free page report.
> + * driver->device: end the free page report.
> + */
> + __virtio32 report_free_page_signal;
> +
> /* To register callback in oom notifier call chain */
> struct notifier_block nb;
> };
> @@ -146,7 +154,7 @@ static void set_page_pfns(struct virtio_balloon *vb,
> }
>
> static void send_one_sg(struct virtio_balloon *vb, struct virtqueue *vq,
> - void *addr, uint32_t size)
> + void *addr, uint32_t size, bool busywait)
> {
> struct scatterlist sg;
> unsigned int len;
> @@ -165,7 +173,12 @@ static void send_one_sg(struct virtio_balloon *vb,
> struct virtqueue *vq,
> cpu_relax();
> }
> virtqueue_kick(vq);
> - wait_event(vb->acked, virtqueue_get_buf(vq, &len));
> + if (busywait)
> + while (!virtqueue_get_buf(vq, &len) &&
> + !virtqueue_is_broken(vq))
> + cpu_relax();
> + else
> + wait_event(vb->acked, virtqueue_get_buf(vq, &len));
> }
>
> /*
> @@ -197,11 +210,11 @@ static void tell_host_sgs(struct virtio_balloon *vb,
> sg_addr = pfn_to_kaddr(sg_pfn_start);
> sg_len = (sg_pfn_end - sg_pfn_start) << PAGE_SHIFT;
> while (sg_len > sg_max_len) {
> - send_one_sg(vb, vq, sg_addr, sg_max_len);
> + send_one_sg(vb, vq, sg_addr, sg_max_len, 0);
> sg_addr += sg_max_len;
> sg_len -= sg_max_len;
> }
> - send_one_sg(vb, vq, sg_addr, sg_len);
> + send_one_sg(vb, vq, sg_addr, sg_len, 0);
> xb_zero(&vb->page_xb, sg_pfn_start, sg_pfn_end);
> sg_pfn_start = sg_pfn_end + 1;
> }
> @@ -503,42 +516,138 @@ static void update_balloon_size_func(struct
> work_struct *work)
> queue_work(system_freezable_wq, work);
> }
>
> +static void virtio_balloon_send_free_pages(void *opaque, unsigned long pfn,
> + unsigned long nr_pages)
> +{
> + struct virtio_balloon *vb = (struct virtio_balloon *)opaque;
> + void *addr = pfn_to_kaddr(pfn);
> + uint32_t len = nr_pages << PAGE_SHIFT;
> +
> + send_one_sg(vb, vb->free_page_vq, addr, len, 1);
> +}
> +
> +static void report_free_page_completion(struct virtio_balloon *vb)
> +{
> + struct virtqueue *vq = vb->free_page_vq;
> + struct scatterlist sg;
> + unsigned int len;
> +
> + sg_init_one(&sg, &vb->report_free_page_signal, sizeof(__virtio32));
> + while (unlikely(virtqueue_add_outbuf(vq, &sg, 1, vb, GFP_KERNEL)
> + == -ENOSPC)) {
> + virtqueue_kick(vq);
> + while (!virtqueue_get_buf(vq, &len) &&
> + !virtqueue_is_broken(vq))
> + cpu_relax();
> + }
> + virtqueue_kick(vq);
> +}
> +
> +static void report_free_page(struct work_struct *work)
> +{
> + struct virtio_balloon *vb;
> +
> + vb = container_of(work, struct virtio_balloon, report_free_page_work);
> + walk_free_mem_block(vb, 1, &virtio_balloon_send_free_pages);
> + report_free_page_completion(vb);
> +}
> +
> +static void free_page_request(struct virtqueue *vq)
> +{
> + struct virtio_balloon *vb = vq->vdev->priv;
> +
> + queue_work(system_freezable_wq, &vb->report_free_page_work);
> +}
> +
> static int init_vqs(struct virtio_balloon *vb)
> {
> - struct virtqueue *vqs[3];
> - vq_callback_t *callbacks[] = { balloon_ack, balloon_ack, stats_request };
> - static const char * const names[] = { "inflate", "deflate", "stats" };
> - int err, nvqs;
> + struct virtqueue **vqs;
> + vq_callback_t **callbacks;
> + const char **names;
> + struct scatterlist sg;
> + int i, nvqs, err = -ENOMEM;
> +
> + /* Inflateq and deflateq are used unconditionally */
> + nvqs = 2;
> + if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ))
> + nvqs++;
> + if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_FREE_PAGE_VQ))
> + nvqs++;
> +
> + /* Allocate space for find_vqs parameters */
> + vqs = kcalloc(nvqs, sizeof(*vqs), GFP_KERNEL);
> + if (!vqs)
> + goto err_vq;
> + callbacks = kmalloc_array(nvqs, sizeof(*callbacks), GFP_KERNEL);
> + if (!callbacks)
> + goto err_callback;
> + names = kmalloc_array(nvqs, sizeof(*names), GFP_KERNEL);

is size here (integer) intentional?

> + if (!names)
> + goto err_names;
> +
> + callbacks[0] = balloon_ack;
> + names[0] = "inflate";
> + callbacks[1] = balloon_ack;
> + names[1] = "deflate";
> +
> + i = 2;
> + if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ)) {
> + callbacks[i] = stats_request;

just thinking if memory for callbacks[3] & names[3] is allocated?

> + names[i] = "stats";
> + i++;
> + }
>
> - /*
> - * We expect two virtqueues: inflate and deflate, and
> - * optionally stat.
> - */
> - nvqs = virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ) ? 3 : 2;
> - err = virtio_find_vqs(vb->vdev, nvqs, vqs, callbacks, names, NULL);
> + if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_FREE_PAGE_VQ)) {
> + callbacks[i] = free_page_request;
> + names[i] = "free_page_vq";
> + }
> +
> + err = vb->vdev->config->find_vqs(vb->vdev, nvqs, vqs, callbacks, names,
> + NULL, NULL);
> if (err)
> - return err;
> + goto err_find;
>
> vb->inflate_vq = vqs[0];
> vb->deflate_vq = vqs[1];
> + i = 2;
> if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ)) {
> - struct scatterlist sg;
> - unsigned int num_stats;
> - vb->stats_vq = vqs[2];
> -
> + vb->stats_vq = vqs[i++];
> /*
> * Prime this virtqueue with one buffer so the hypervisor can
> * use it to signal us later (it can't be broken yet!).
> */
> - num_stats = update_balloon_stats(vb);
> -
> - sg_init_one(&sg, vb->stats, sizeof(vb->stats[0]) * num_stats);
> + sg_init_one(&sg, vb->stats, sizeof(vb->stats));
> if (virtqueue_add_outbuf(vb->stats_vq, &sg, 1, vb, GFP_KERNEL)
> < 0)
> BUG();
> virtqueue_kick(vb->stats_vq);
> }
> +
> + if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_FREE_PAGE_VQ)) {
> + vb->free_page_vq = vqs[i];
> + vb->report_free_page_signal = 0;
> + sg_init_one(&sg, &vb->report_free_page_signal,
> + sizeof(__virtio32));
> + if (virtqueue_add_outbuf(vb->free_page_vq, &sg, 1, vb,
> + GFP_KERNEL) < 0)
> + dev_warn(&vb->vdev->dev, "%s: add signal buf fail\n",
> + __func__);
> + virtqueue_kick(vb->free_page_vq);
> + }
> +
> + kfree(names);
> + kfree(callbacks);
> + kfree(vqs);
> return 0;
> +
> +err_find:
> + kfree(names);
> +err_names:
> + kfree(callbacks);
> +err_callback:
> + kfree(vqs);
> +err_vq:
> + return err;
> }
>
> #ifdef CONFIG_BALLOON_COMPACTION
> @@ -590,7 +699,7 @@ static int virtballoon_migratepage(struct
> balloon_dev_info *vb_dev_info,
> spin_unlock_irqrestore(&vb_dev_info->pages_lock, flags);
> if (use_sg) {
> send_one_sg(vb, vb->inflate_vq, page_address(newpage),
> - PAGE_SIZE);
> + PAGE_SIZE, 0);
> } else {
> vb->num_pfns = VIRTIO_BALLOON_PAGES_PER_PAGE;
> set_page_pfns(vb, vb->pfns, newpage);
> @@ -600,7 +709,7 @@ static int virtballoon_migratepage(struct
> balloon_dev_info *vb_dev_info,
> balloon_page_delete(page);
> if (use_sg) {
> send_one_sg(vb, vb->deflate_vq, page_address(page),
> - PAGE_SIZE);
> + PAGE_SIZE, 0);
> } else {
> vb->num_pfns = VIRTIO_BALLOON_PAGES_PER_PAGE;
> set_page_pfns(vb, vb->pfns, page);
> @@ -667,6 +776,9 @@ static int virtballoon_probe(struct virtio_device *vdev)
> if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_SG))
> xb_init(&vb->page_xb);
>
> + if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_VQ))
> + INIT_WORK(&vb->report_free_page_work, report_free_page);
> +
> vb->nb.notifier_call = virtballoon_oom_notify;
> vb->nb.priority = VIRTBALLOON_OOM_NOTIFY_PRIORITY;
> err = register_oom_notifier(&vb->nb);
> @@ -731,6 +843,7 @@ static void virtballoon_remove(struct virtio_device
> *vdev)
> spin_unlock_irq(&vb->stop_update_lock);
> cancel_work_sync(&vb->update_balloon_size_work);
> cancel_work_sync(&vb->update_balloon_stats_work);
> + cancel_work_sync(&vb->report_free_page_work);
>
> xb_empty(&vb->page_xb);
> remove_common(vb);
> @@ -785,6 +898,7 @@ static unsigned int features[] = {
> VIRTIO_BALLOON_F_STATS_VQ,
> VIRTIO_BALLOON_F_DEFLATE_ON_OOM,
> VIRTIO_BALLOON_F_SG,
> + VIRTIO_BALLOON_F_FREE_PAGE_VQ,
> };
>
> static struct virtio_driver virtio_balloon_driver = {
> diff --git a/include/uapi/linux/virtio_balloon.h
> b/include/uapi/linux/virtio_balloon.h
> index 37780a7..8214f84 100644
> --- a/include/uapi/linux/virtio_balloon.h
> +++ b/include/uapi/linux/virtio_balloon.h
> @@ -35,6 +35,7 @@
> #define VIRTIO_BALLOON_F_STATS_VQ 1 /* Memory Stats virtqueue */
> #define VIRTIO_BALLOON_F_DEFLATE_ON_OOM 2 /* Deflate balloon on OOM */
> #define VIRTIO_BALLOON_F_SG 3 /* Use sg instead of PFN lists */
> +#define VIRTIO_BALLOON_F_FREE_PAGE_VQ 4 /* Virtqueue to report free pages */
>
> /* Size of a PFN in the balloon interface. */
> #define VIRTIO_BALLOON_PFN_SHIFT 12
> --
> 2.7.4
>
>
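
The reworked init_vqs() quoted above sizes nvqs up front from the negotiated
features, allocates the vqs/callbacks/names arrays dynamically, and unwinds
them with a goto ladder on failure. A minimal userspace sketch of that
unwinding pattern (hypothetical helper with an injected failure point, not
the driver code):

```c
#include <assert.h>
#include <stdlib.h>

#define ENOMEM_ERR (-12)	/* stand-in for the kernel's -ENOMEM */

/* Hypothetical sketch of the goto-ladder unwinding in init_vqs(): each
 * failing allocation jumps to a label that frees only what was already
 * allocated, so no path leaks or double-frees.
 * 'fail_at' injects a failure at the Nth allocation (0 = no failure). */
static int setup_three_arrays(int nvqs, int fail_at)
{
	void **vqs, **callbacks;
	const char **names;
	int err = ENOMEM_ERR;

	vqs = (fail_at == 1) ? NULL : calloc(nvqs, sizeof(*vqs));
	if (!vqs)
		goto err_vq;
	callbacks = (fail_at == 2) ? NULL : calloc(nvqs, sizeof(*callbacks));
	if (!callbacks)
		goto err_callback;
	names = (fail_at == 3) ? NULL : calloc(nvqs, sizeof(*names));
	if (!names)
		goto err_names;

	/* ... find_vqs() and per-vq setup would go here ... */

	/* success: the arrays are only needed during setup */
	free(names);
	free(callbacks);
	free(vqs);
	return 0;

err_names:
	free(callbacks);
err_callback:
	free(vqs);
err_vq:
	return err;
}
```

Since all three arrays are sized by nvqs (which already counts the stats and
free-page vqs when those features are negotiated), callbacks[3] and names[3]
are allocated whenever they are written, which addresses the question raised
in the review above.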

2017-08-03 09:11:56

by Michal Hocko

Subject: Re: [PATCH v13 4/5] mm: support reporting free page blocks

On Thu 03-08-17 14:38:18, Wei Wang wrote:
> This patch adds support to walk through the free page blocks in the
> system and report them via a callback function. Some page blocks may
> leave the free list after the report function returns, so it is the
> caller's responsibility to either detect or prevent the use of such
> pages.
>
> Signed-off-by: Wei Wang <[email protected]>
> Signed-off-by: Liang Li <[email protected]>
> Cc: Michal Hocko <[email protected]>
> Cc: Michael S. Tsirkin <[email protected]>
> ---
> include/linux/mm.h | 7 ++++
> include/linux/mmzone.h | 5 +++
> mm/page_alloc.c | 109 +++++++++++++++++++++++++++++++++++++++++++++++++
> 3 files changed, 121 insertions(+)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 46b9ac5..24481e3 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1835,6 +1835,13 @@ extern void free_area_init_node(int nid, unsigned long * zones_size,
> unsigned long zone_start_pfn, unsigned long *zholes_size);
> extern void free_initmem(void);
>
> +#if IS_ENABLED(CONFIG_VIRTIO_BALLOON)
> +extern void walk_free_mem_block(void *opaque1,
> + unsigned int min_order,
> + void (*visit)(void *opaque2,
> + unsigned long pfn,
> + unsigned long nr_pages));
> +#endif

Is the ifdef necessary. Sure only virtio balloon driver will use this
currently but this looks like a generic functionality not specific to
virtio at all so the ifdef is rather confusing.

> /*
> * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK)
> * into the buddy system. The freed pages will be poisoned with pattern
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index fc14b8b..59eacf2 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -83,6 +83,11 @@ static inline bool is_migrate_movable(int mt)
> for (order = 0; order < MAX_ORDER; order++) \
> for (type = 0; type < MIGRATE_TYPES; type++)
>
> +#define for_each_migratetype_order_decend(min_order, order, type) \
> + for (order = MAX_ORDER - 1; order < MAX_ORDER && order >= min_order; \
> + order--) \
> + for (type = 0; type < MIGRATE_TYPES; type++)
> +

Is there going to be any other user outside of mm/page_alloc.c? If not
then do not export this.

> extern int page_group_by_mobility_disabled;
>
> #define NR_MIGRATETYPE_BITS (PB_migrate_end - PB_migrate + 1)
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 6d30e91..b90b513 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4761,6 +4761,115 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
> show_swap_cache_info();
> }
>
> +#if IS_ENABLED(CONFIG_VIRTIO_BALLOON)
> +
> +/*
> + * Heuristically get a free page block in the system.
> + *
> + * It is possible that pages from the page block are used immediately after
> + * report_free_page_block() returns. It is the caller's responsibility to
> + * either detect or prevent the use of such pages.
> + *
> + * The input parameters specify the free list to check for a free page block:
> + * zone->free_area[order].free_list[migratetype]
> + *
> + * If the caller supplied page block (i.e. **page) is on the free list, offer
> + * the next page block on the list to the caller. Otherwise, offer the first
> + * page block on the list.
> + *
> + * Return 0 when a page block is found on the caller specified free list.
> + * Otherwise, return -EAGAIN and set *page to NULL.
> + */
> +static int report_free_page_block(struct zone *zone, unsigned int order,
> + unsigned int migratetype, struct page **page)

This is just too ugly and wrong actually. Never provide struct page
pointers outside of the zone->lock. What I've had in mind was to simply
walk free lists of the suitable order and call the callback for each one.
Something as simple as

for (i = 0; i < MAX_NR_ZONES; i++) {
struct zone *zone = &pgdat->node_zones[i];

if (!populated_zone(zone))
continue;
spin_lock_irqsave(&zone->lock, flags);
for (order = min_order; order < MAX_ORDER; ++order) {
struct free_area *free_area = &zone->free_area[order];
enum migratetype mt;
struct page *page;

if (!free_area->nr_pages)
continue;

for_each_migratetype_order(order, mt) {
list_for_each_entry(page,
&free_area->free_list[mt], lru) {

pfn = page_to_pfn(page);
visit(opaque2, pfn, 1<<order);
}
}
}

spin_unlock_irqrestore(&zone->lock, flags);
}

[...]

> +/*
> + * Walk through the free page blocks in the system. The @visit callback is
> + * invoked to handle each free page block.
> + *
> + * Note: some page blocks may be used after the report function returns, so it
> + * is not safe for the callback to use any pages or discard data on such page
> + * blocks.
> + */
> +void walk_free_mem_block(void *opaque1,
> + unsigned int min_order,
> + void (*visit)(void *opaque2,
> + unsigned long pfn,
> + unsigned long nr_pages))

Is there any reason why there is no node id? I guess you just do not
care for your particular use case. Not that I care too much either. If
somebody wants this per node then it would be trivial to extend I was
just wondering whether this is a deliberate decision or an omission.

> +{
> + struct zone *zone = NULL;
> + struct page *page = NULL;
> + unsigned int order;
> + unsigned long pfn, nr_pages;
> + int type;
> +
> + for_each_populated_zone(zone) {
> + for_each_migratetype_order_decend(min_order, order, type) {
> + while (!report_free_page_block(zone, order, type,
> + &page)) {
> + pfn = page_to_pfn(page);
> + nr_pages = 1 << order;
> + visit(opaque1, pfn, nr_pages);
> + }
> + }
> + }
> +}
> +EXPORT_SYMBOL_GPL(walk_free_mem_block);
> +
> +#endif
> +
> static void zoneref_set_zone(struct zone *zone, struct zoneref *zoneref)
> {
> zoneref->zone = zone;
> --
> 2.7.4

--
Michal Hocko
SUSE Labs
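
The walk sketched in the review above visits every free block of a suitable
order in a single pass while the zone lock is held, instead of re-taking the
lock per block. A rough userspace mock of that control flow (mock structures,
with the lock and migratetypes left out):

```c
#include <assert.h>
#include <stddef.h>

#define MOCK_MAX_ORDER 4
#define MOCK_BLOCKS_PER_LIST 4

/* Hypothetical mock: free_pfns[order][i] is the start pfn of a free block
 * of that order, or -1UL for an unused slot (a stand-in for the per-order
 * free lists in struct zone). */
struct mock_zone {
	unsigned long free_pfns[MOCK_MAX_ORDER][MOCK_BLOCKS_PER_LIST];
};

/* Single pass over all zones and orders >= min_order, reporting each free
 * block to the visit callback, as in the sketch quoted above. */
static void walk_mock_free_mem(struct mock_zone *zones, size_t nr_zones,
			       unsigned int min_order,
			       void (*visit)(void *opaque, unsigned long pfn,
					     unsigned long nr_pages),
			       void *opaque)
{
	for (size_t z = 0; z < nr_zones; z++)
		for (unsigned int order = min_order;
		     order < MOCK_MAX_ORDER; order++)
			for (size_t i = 0; i < MOCK_BLOCKS_PER_LIST; i++)
				if (zones[z].free_pfns[order][i] !=
				    (unsigned long)-1)
					visit(opaque,
					      zones[z].free_pfns[order][i],
					      1UL << order);
}

/* Example visitor: accumulate the total number of reported free pages. */
static void count_pages(void *opaque, unsigned long pfn,
			unsigned long nr_pages)
{
	(void)pfn;
	*(unsigned long *)opaque += nr_pages;
}
```

The trade-off debated below is that this keeps the (real) zone lock held for
the whole traversal, while the cursor approach in the patch drops it after
every block.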

2017-08-03 10:39:35

by Wang, Wei W

Subject: Re: [PATCH v13 4/5] mm: support reporting free page blocks

On 08/03/2017 05:11 PM, Michal Hocko wrote:
> On Thu 03-08-17 14:38:18, Wei Wang wrote:
>> This patch adds support to walk through the free page blocks in the
>> system and report them via a callback function. Some page blocks may
>> leave the free list after the report function returns, so it is the
>> caller's responsibility to either detect or prevent the use of such
>> pages.
>>
>> Signed-off-by: Wei Wang <[email protected]>
>> Signed-off-by: Liang Li <[email protected]>
>> Cc: Michal Hocko <[email protected]>
>> Cc: Michael S. Tsirkin <[email protected]>
>> ---
>> include/linux/mm.h | 7 ++++
>> include/linux/mmzone.h | 5 +++
>> mm/page_alloc.c | 109 +++++++++++++++++++++++++++++++++++++++++++++++++
>> 3 files changed, 121 insertions(+)
>>
>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>> index 46b9ac5..24481e3 100644
>> --- a/include/linux/mm.h
>> +++ b/include/linux/mm.h
>> @@ -1835,6 +1835,13 @@ extern void free_area_init_node(int nid, unsigned long * zones_size,
>> unsigned long zone_start_pfn, unsigned long *zholes_size);
>> extern void free_initmem(void);
>>
>> +#if IS_ENABLED(CONFIG_VIRTIO_BALLOON)
>> +extern void walk_free_mem_block(void *opaque1,
>> + unsigned int min_order,
>> + void (*visit)(void *opaque2,
>> + unsigned long pfn,
>> + unsigned long nr_pages));
>> +#endif
> Is the ifdef necessary. Sure only virtio balloon driver will use this
> currently but this looks like a generic functionality not specific to
> virtio at all so the ifdef is rather confusing.

OK. We can remove the condition if no objection from others.


>
>> extern int page_group_by_mobility_disabled;
>>
>> #define NR_MIGRATETYPE_BITS (PB_migrate_end - PB_migrate + 1)
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 6d30e91..b90b513 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -4761,6 +4761,115 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
>> show_swap_cache_info();
>> }
>>
>> +#if IS_ENABLED(CONFIG_VIRTIO_BALLOON)
>> +
>> +/*
>> + * Heuristically get a free page block in the system.
>> + *
>> + * It is possible that pages from the page block are used immediately after
>> + * report_free_page_block() returns. It is the caller's responsibility to
>> + * either detect or prevent the use of such pages.
>> + *
>> + * The input parameters specify the free list to check for a free page block:
>> + * zone->free_area[order].free_list[migratetype]
>> + *
>> + * If the caller supplied page block (i.e. **page) is on the free list, offer
>> + * the next page block on the list to the caller. Otherwise, offer the first
>> + * page block on the list.
>> + *
>> + * Return 0 when a page block is found on the caller specified free list.
>> + * Otherwise, return -EAGAIN and set *page to NULL.
>> + */
>> +static int report_free_page_block(struct zone *zone, unsigned int order,
>> + unsigned int migratetype, struct page **page)
> This is just too ugly and wrong actually. Never provide struct page
> pointers outside of the zone->lock. What I've had in mind was to simply
> walk free lists of the suitable order and call the callback for each one.
> Something as simple as
>
> for (i = 0; i < MAX_NR_ZONES; i++) {
> struct zone *zone = &pgdat->node_zones[i];
>
> if (!populated_zone(zone))
> continue;
> spin_lock_irqsave(&zone->lock, flags);
> for (order = min_order; order < MAX_ORDER; ++order) {
> struct free_area *free_area = &zone->free_area[order];
> enum migratetype mt;
> struct page *page;
>
> if (!free_area->nr_pages)
> continue;
>
> for_each_migratetype_order(order, mt) {
> list_for_each_entry(page,
> &free_area->free_list[mt], lru) {
>
> pfn = page_to_pfn(page);
> visit(opaque2, pfn, 1<<order);
> }
> }
> }
>
> spin_unlock_irqrestore(&zone->lock, flags);
> }
>
> [...]


I think the above would hold the lock for too long. That's why we prefer
to take one free page block each time; taking them one by one also doesn't
make a difference in terms of the performance that we need.

The struct page is used as a "state" to get the next free page block. It is
only passed to an internal implementation of a function in mm (not seen by
the outside caller). Would this be OK?
If not, how about a pfn - we can also pass in a pfn to the function, do
pfn_to_page each time the function starts, and then do page_to_pfn when it
returns.


>> +/*
>> + * Walk through the free page blocks in the system. The @visit callback is
>> + * invoked to handle each free page block.
>> + *
>> + * Note: some page blocks may be used after the report function returns, so it
>> + * is not safe for the callback to use any pages or discard data on such page
>> + * blocks.
>> + */
>> +void walk_free_mem_block(void *opaque1,
>> + unsigned int min_order,
>> + void (*visit)(void *opaque2,
>> + unsigned long pfn,
>> + unsigned long nr_pages))
> Is there any reason why there is no node id? I guess you just do not
> care for your particular use case. Not that I care too much either. If
> somebody wants this per node then it would be trivial to extend I was
> just wondering whether this is a deliberate decision or an omission.
>

Right, we don't care about the node id. Live migration transfers all of the
guest system memory, so we just want to get a hint of all the free page
blocks in the system.


Best,
Wei

2017-08-03 10:44:23

by Michal Hocko

Subject: Re: [PATCH v13 4/5] mm: support reporting free page blocks

On Thu 03-08-17 18:42:15, Wei Wang wrote:
> On 08/03/2017 05:11 PM, Michal Hocko wrote:
> >On Thu 03-08-17 14:38:18, Wei Wang wrote:
[...]
> >>+static int report_free_page_block(struct zone *zone, unsigned int order,
> >>+ unsigned int migratetype, struct page **page)
> >This is just too ugly and wrong actually. Never provide struct page
> >pointers outside of the zone->lock. What I've had in mind was to simply
> >walk free lists of the suitable order and call the callback for each one.
> >Something as simple as
> >
> > for (i = 0; i < MAX_NR_ZONES; i++) {
> > struct zone *zone = &pgdat->node_zones[i];
> >
> > if (!populated_zone(zone))
> > continue;
> > spin_lock_irqsave(&zone->lock, flags);
> > for (order = min_order; order < MAX_ORDER; ++order) {
> > struct free_area *free_area = &zone->free_area[order];
> > enum migratetype mt;
> > struct page *page;
> >
> > if (!free_area->nr_pages)
> > continue;
> >
> > for_each_migratetype_order(order, mt) {
> > list_for_each_entry(page,
> > &free_area->free_list[mt], lru) {
> >
> > pfn = page_to_pfn(page);
> > visit(opaque2, pfn, 1<<order);
> > }
> > }
> > }
> >
> > spin_unlock_irqrestore(&zone->lock, flags);
> > }
> >
> >[...]
>
>
> I think the above would hold the lock for too long. That's why we
> prefer to take one free page block each time; taking them one by one
> also doesn't make a difference in terms of the performance that we
> need.

I think you should start with a simple approach and improve incrementally
if this turns out to be not optimal. I really detest taking struct pages
outside of the lock. You never know what might happen after the lock is
dropped. E.g. can you race with the memory hotremove?

> The struct page is used as a "state" to get the next free page block. It is
> only
> given for an internal implementation of a function in mm ( not seen by the
> outside caller). Would this be OK?
> If not, how about pfn - we can also pass in pfn to the function, and do
> pfn_to_page each time the function starts, and then do page_to_pfn when
> returns.

No, just do not try to play tricks with struct pages which might have
gone away.
--
Michal Hocko
SUSE Labs

2017-08-03 11:24:38

by Wang, Wei W

Subject: Re: [PATCH v13 4/5] mm: support reporting free page blocks

On 08/03/2017 06:44 PM, Michal Hocko wrote:
> On Thu 03-08-17 18:42:15, Wei Wang wrote:
>> On 08/03/2017 05:11 PM, Michal Hocko wrote:
>>> On Thu 03-08-17 14:38:18, Wei Wang wrote:
> [...]
>>>> +static int report_free_page_block(struct zone *zone, unsigned int order,
>>>> + unsigned int migratetype, struct page **page)
>>> This is just too ugly and wrong actually. Never provide struct page
>>> pointers outside of the zone->lock. What I've had in mind was to simply
>>> walk free lists of the suitable order and call the callback for each one.
>>> Something as simple as
>>>
>>> for (i = 0; i < MAX_NR_ZONES; i++) {
>>> struct zone *zone = &pgdat->node_zones[i];
>>>
>>> if (!populated_zone(zone))
>>> continue;
>>> spin_lock_irqsave(&zone->lock, flags);
>>> for (order = min_order; order < MAX_ORDER; ++order) {
>>> struct free_area *free_area = &zone->free_area[order];
>>> enum migratetype mt;
>>> struct page *page;
>>>
>>> if (!free_area->nr_pages)
>>> continue;
>>>
>>> for_each_migratetype_order(order, mt) {
>>> list_for_each_entry(page,
>>> &free_area->free_list[mt], lru) {
>>>
>>> pfn = page_to_pfn(page);
>>> visit(opaque2, pfn, 1<<order);
>>> }
>>> }
>>> }
>>>
>>> spin_unlock_irqrestore(&zone->lock, flags);
>>> }
>>>
>>> [...]
>>
>> I think the above would hold the lock for too long. That's why we
>> prefer to take one free page block each time; taking them one by one
>> also doesn't make a difference in terms of the performance that we
>> need.
> I think you should start with a simple approach and improve incrementally
> if this turns out to be not optimal. I really detest taking struct pages
> outside of the lock. You never know what might happen after the lock is
> dropped. E.g. can you race with the memory hotremove?


The caller won't use pages returned from the function, so I think there
shouldn't be an issue or race if the returned pages are used (i.e. not free
anymore) or simply gone due to hotremove.


Best,
Wei

2017-08-03 11:28:35

by Michal Hocko

Subject: Re: [PATCH v13 4/5] mm: support reporting free page blocks

On Thu 03-08-17 19:27:19, Wei Wang wrote:
> On 08/03/2017 06:44 PM, Michal Hocko wrote:
> >On Thu 03-08-17 18:42:15, Wei Wang wrote:
> >>On 08/03/2017 05:11 PM, Michal Hocko wrote:
> >>>On Thu 03-08-17 14:38:18, Wei Wang wrote:
> >[...]
> >>>>+static int report_free_page_block(struct zone *zone, unsigned int order,
> >>>>+ unsigned int migratetype, struct page **page)
> >>>This is just too ugly and wrong actually. Never provide struct page
> >>>pointers outside of the zone->lock. What I've had in mind was to simply
> >>>walk free lists of the suitable order and call the callback for each one.
> >>>Something as simple as
> >>>
> >>> for (i = 0; i < MAX_NR_ZONES; i++) {
> >>> struct zone *zone = &pgdat->node_zones[i];
> >>>
> >>> if (!populated_zone(zone))
> >>> continue;
> >>> spin_lock_irqsave(&zone->lock, flags);
> >>> for (order = min_order; order < MAX_ORDER; ++order) {
> >>> struct free_area *free_area = &zone->free_area[order];
> >>> enum migratetype mt;
> >>> struct page *page;
> >>>
> >>> if (!free_area->nr_pages)
> >>> continue;
> >>>
> >>> for_each_migratetype_order(order, mt) {
> >>> list_for_each_entry(page,
> >>> &free_area->free_list[mt], lru) {
> >>>
> >>> pfn = page_to_pfn(page);
> >>> visit(opaque2, prn, 1<<order);
> >>> }
> >>> }
> >>> }
> >>>
> >>> spin_unlock_irqrestore(&zone->lock, flags);
> >>> }
> >>>
> >>>[...]
> >>
> >>I think the above would take the lock for too long time. That's why we
> >>prefer to take one free page block each time, and taking it one by one
> >>also doesn't make a difference, in terms of the performance that we
> >>need.
> >I think you should start with simple approach and impove incrementally
> >if this turns out to be not optimal. I really detest taking struct pages
> >outside of the lock. You never know what might happen after the lock is
> >dropped. E.g. can you race with the memory hotremove?
>
>
> The caller won't use pages returned from the function, so I think there
> shouldn't be an issue or race if the returned pages are used (i.e. not free
> anymore) or simply gone due to hotremove.

No, this is just too error prone. Consider that the struct page pointer
itself could become invalid in the meantime. Please always keep robustness
in mind first. Optimizations are nice, but it is not even clear whether
the simple variant will cause any problems.
--
Michal Hocko
SUSE Labs

2017-08-03 12:09:20

by Wang, Wei W

Subject: Re: [PATCH v13 4/5] mm: support reporting free page blocks

On 08/03/2017 07:28 PM, Michal Hocko wrote:
> On Thu 03-08-17 19:27:19, Wei Wang wrote:
>> On 08/03/2017 06:44 PM, Michal Hocko wrote:
>>> On Thu 03-08-17 18:42:15, Wei Wang wrote:
>>>> On 08/03/2017 05:11 PM, Michal Hocko wrote:
>>>>> On Thu 03-08-17 14:38:18, Wei Wang wrote:
>>> [...]
>>>>>> +static int report_free_page_block(struct zone *zone, unsigned int order,
>>>>>> + unsigned int migratetype, struct page **page)
>>>>> This is just too ugly and wrong actually. Never provide struct page
>>>>> pointers outside of the zone->lock. What I've had in mind was to simply
>>>>> walk free lists of the suitable order and call the callback for each one.
>>>>> Something as simple as
>>>>>
>>>>> for (i = 0; i < MAX_NR_ZONES; i++) {
>>>>> struct zone *zone = &pgdat->node_zones[i];
>>>>>
>>>>> if (!populated_zone(zone))
>>>>> continue;
>>>>> spin_lock_irqsave(&zone->lock, flags);
>>>>> for (order = min_order; order < MAX_ORDER; ++order) {
>>>>> struct free_area *free_area = &zone->free_area[order];
>>>>> enum migratetype mt;
>>>>> struct page *page;
>>>>>
>>>>> if (!free_area->nr_pages)
>>>>> continue;
>>>>>
>>>>> for_each_migratetype_order(order, mt) {
>>>>> list_for_each_entry(page,
>>>>> &free_area->free_list[mt], lru) {
>>>>>
>>>>> pfn = page_to_pfn(page);
>>>>> visit(opaque2, prn, 1<<order);
>>>>> }
>>>>> }
>>>>> }
>>>>>
>>>>> spin_unlock_irqrestore(&zone->lock, flags);
>>>>> }
>>>>>
>>>>> [...]
>>>> I think the above would take the lock for too long time. That's why we
>>>> prefer to take one free page block each time, and taking it one by one
>>>> also doesn't make a difference, in terms of the performance that we
>>>> need.
>>> I think you should start with simple approach and impove incrementally
>>> if this turns out to be not optimal. I really detest taking struct pages
>>> outside of the lock. You never know what might happen after the lock is
>>> dropped. E.g. can you race with the memory hotremove?
>>
>> The caller won't use pages returned from the function, so I think there
>> shouldn't be an issue or race if the returned pages are used (i.e. not free
>> anymore) or simply gone due to hotremove.
> No, this is just too error prone. Consider that struct page pointer
> itself could get invalid in the meantime. Please always keep robustness
> in mind first. Optimizations are nice but it is even not clear whether
> the simple variant will cause any problems.


How about this:

for_each_populated_zone(zone) {
	for_each_migratetype_order_decend(min_order, order, type) {
		do {
=>			spin_lock_irqsave(&zone->lock, flags);
			ret = report_free_page_block(zone, order, type,
						     &page);
			if (!ret) {
				pfn = page_to_pfn(page);
				nr_pages = 1 << order;
				visit(opaque1, pfn, nr_pages);
			}
=>			spin_unlock_irqrestore(&zone->lock, flags);
		} while (!ret);
	}
}

In this way, we can still keep the lock granularity at one free page block
while ensuring the struct page is only accessed under the lock.
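To make the intended invariant explicit, here is a toy userspace model of the per-block lock granularity sketched above (zone_lock(), zone_unlock() and report_all() are hypothetical stand-ins, not kernel code): the lock is taken and released once per reported block, and the "struct page" is only touched while it is held.

```c
#include <assert.h>

/*
 * Toy model: the zone lock is acquired once per reported free page
 * block, so the lock is never held across more than one block, yet
 * the page is only accessed under the lock.
 */
static int lock_held;
static int lock_acquisitions;

static void zone_lock(void)
{
    assert(!lock_held);     /* never recursively taken */
    lock_held = 1;
    lock_acquisitions++;
}

static void zone_unlock(void)
{
    assert(lock_held);
    lock_held = 0;
}

/* Report 'nr' free blocks one at a time, taking the lock per block. */
static int report_all(const int *blocks, int nr, int *out)
{
    int i;

    for (i = 0; i < nr; i++) {
        zone_lock();
        out[i] = blocks[i]; /* "struct page" accessed under the lock */
        zone_unlock();
    }
    return nr;
}
```

Reporting four blocks acquires the lock exactly four times, one hold per block.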



Best,
Wei

2017-08-03 12:25:31

by Wang, Wei W

Subject: Re: [PATCH v13 5/5] virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_VQ

On 08/03/2017 04:13 PM, Pankaj Gupta wrote:
>>
>> + /* Allocate space for find_vqs parameters */
>> + vqs = kcalloc(nvqs, sizeof(*vqs), GFP_KERNEL);
>> + if (!vqs)
>> + goto err_vq;
>> + callbacks = kmalloc_array(nvqs, sizeof(*callbacks), GFP_KERNEL);
>> + if (!callbacks)
>> + goto err_callback;
>> + names = kmalloc_array(nvqs, sizeof(*names), GFP_KERNEL);
>
> is size here (integer) intentional?


Sorry, I didn't get it. Could you please elaborate more?


>
>> + if (!names)
>> + goto err_names;
>> +
>> + callbacks[0] = balloon_ack;
>> + names[0] = "inflate";
>> + callbacks[1] = balloon_ack;
>> + names[1] = "deflate";
>> +
>> + i = 2;
>> + if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ)) {
>> + callbacks[i] = stats_request;
> just thinking if memory for callbacks[3] & names[3] is allocated?


Yes, the above kmalloc_array allocated them.


Best,
Wei

2017-08-03 12:34:03

by Michael S. Tsirkin

Subject: Re: [PATCH v13 5/5] virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_VQ

On Thu, Aug 03, 2017 at 02:38:19PM +0800, Wei Wang wrote:
> Add a new vq to report hints of guest free pages to the host.
>
> Signed-off-by: Wei Wang <[email protected]>
> Signed-off-by: Liang Li <[email protected]>
> ---
> drivers/virtio/virtio_balloon.c | 164 ++++++++++++++++++++++++++++++------
> include/uapi/linux/virtio_balloon.h | 1 +
> 2 files changed, 140 insertions(+), 25 deletions(-)
>
> diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
> index 29aca0c..29c4a61 100644
> --- a/drivers/virtio/virtio_balloon.c
> +++ b/drivers/virtio/virtio_balloon.c
> @@ -54,11 +54,12 @@ static struct vfsmount *balloon_mnt;
>
> struct virtio_balloon {
> struct virtio_device *vdev;
> - struct virtqueue *inflate_vq, *deflate_vq, *stats_vq;
> + struct virtqueue *inflate_vq, *deflate_vq, *stats_vq, *free_page_vq;
>
> /* The balloon servicing is delegated to a freezable workqueue. */
> struct work_struct update_balloon_stats_work;
> struct work_struct update_balloon_size_work;
> + struct work_struct report_free_page_work;
>
> /* Prevent updating balloon when it is being canceled. */
> spinlock_t stop_update_lock;
> @@ -90,6 +91,13 @@ struct virtio_balloon {
> /* Memory statistics */
> struct virtio_balloon_stat stats[VIRTIO_BALLOON_S_NR];
>
> + /*
> + * Used by the device and driver to signal each other.
> + * device->driver: start the free page report.
> + * driver->device: end the free page report.
> + */
> + __virtio32 report_free_page_signal;
> +
> /* To register callback in oom notifier call chain */
> struct notifier_block nb;
> };
> @@ -146,7 +154,7 @@ static void set_page_pfns(struct virtio_balloon *vb,
> }
>
> static void send_one_sg(struct virtio_balloon *vb, struct virtqueue *vq,
> - void *addr, uint32_t size)
> + void *addr, uint32_t size, bool busywait)
> {
> struct scatterlist sg;
> unsigned int len;
> @@ -165,7 +173,12 @@ static void send_one_sg(struct virtio_balloon *vb, struct virtqueue *vq,
> cpu_relax();
> }
> virtqueue_kick(vq);
> - wait_event(vb->acked, virtqueue_get_buf(vq, &len));
> + if (busywait)
> + while (!virtqueue_get_buf(vq, &len) &&
> + !virtqueue_is_broken(vq))
> + cpu_relax();
> + else
> + wait_event(vb->acked, virtqueue_get_buf(vq, &len));
> }
>
> /*
> @@ -197,11 +210,11 @@ static void tell_host_sgs(struct virtio_balloon *vb,
> sg_addr = pfn_to_kaddr(sg_pfn_start);
> sg_len = (sg_pfn_end - sg_pfn_start) << PAGE_SHIFT;
> while (sg_len > sg_max_len) {
> - send_one_sg(vb, vq, sg_addr, sg_max_len);
> + send_one_sg(vb, vq, sg_addr, sg_max_len, 0);
> sg_addr += sg_max_len;
> sg_len -= sg_max_len;
> }
> - send_one_sg(vb, vq, sg_addr, sg_len);
> + send_one_sg(vb, vq, sg_addr, sg_len, 0);
> xb_zero(&vb->page_xb, sg_pfn_start, sg_pfn_end);
> sg_pfn_start = sg_pfn_end + 1;
> }
> @@ -503,42 +516,138 @@ static void update_balloon_size_func(struct work_struct *work)
> queue_work(system_freezable_wq, work);
> }
>
> +static void virtio_balloon_send_free_pages(void *opaque, unsigned long pfn,
> + unsigned long nr_pages)
> +{
> + struct virtio_balloon *vb = (struct virtio_balloon *)opaque;
> + void *addr = pfn_to_kaddr(pfn);
> + uint32_t len = nr_pages << PAGE_SHIFT;
> +
> + send_one_sg(vb, vb->free_page_vq, addr, len, 1);
> +}
> +
> +static void report_free_page_completion(struct virtio_balloon *vb)
> +{
> + struct virtqueue *vq = vb->free_page_vq;
> + struct scatterlist sg;
> + unsigned int len;
> +
> + sg_init_one(&sg, &vb->report_free_page_signal, sizeof(__virtio32));
> + while (unlikely(virtqueue_add_outbuf(vq, &sg, 1, vb, GFP_KERNEL)
> + == -ENOSPC)) {
> + virtqueue_kick(vq);
> + while (!virtqueue_get_buf(vq, &len) &&
> + !virtqueue_is_broken(vq))
> + cpu_relax();
> + }
> + virtqueue_kick(vq);
> +}

This unlimited busy waiting needs to go away. A bit of polling might be
OK, though even then it would be better off as a separate driver. You do
not want to peg the CPU for unlimited periods of time for something
that is an optimization.
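For illustration, the bounded-polling fallback being asked for could look like the following userspace C sketch. Here get_buf() and the poll budget are hypothetical stand-ins for virtqueue_get_buf() and a driver-chosen limit; a real driver would fall back to wait_event() rather than spinning forever.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical device model: the buffer becomes ready after N polls. */
static int polls_until_ready;

static bool get_buf(void)
{
    return --polls_until_ready <= 0;
}

/*
 * Poll at most 'limit' times. Returns true if the buffer arrived
 * within the budget; on false the caller should sleep (wait_event()
 * in the kernel) instead of pegging the CPU indefinitely.
 */
static bool poll_bounded(int limit)
{
    int i;

    for (i = 0; i < limit; i++) {
        if (get_buf())
            return true;
        /* cpu_relax() would go here in kernel code */
    }
    return false;
}
```

With a budget of 10 polls, a buffer that arrives after 3 polls is picked up without sleeping, while one that never arrives within the budget forces the caller onto the blocking path.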

> +
> +static void report_free_page(struct work_struct *work)
> +{
> + struct virtio_balloon *vb;
> +
> + vb = container_of(work, struct virtio_balloon, report_free_page_work);
> + walk_free_mem_block(vb, 1, &virtio_balloon_send_free_pages);
> + report_free_page_completion(vb);
> +}
> +
> +static void free_page_request(struct virtqueue *vq)
> +{
> + struct virtio_balloon *vb = vq->vdev->priv;
> +
> + queue_work(system_freezable_wq, &vb->report_free_page_work);
> +}
> +
> static int init_vqs(struct virtio_balloon *vb)
> {
> - struct virtqueue *vqs[3];
> - vq_callback_t *callbacks[] = { balloon_ack, balloon_ack, stats_request };
> - static const char * const names[] = { "inflate", "deflate", "stats" };
> - int err, nvqs;
> + struct virtqueue **vqs;
> + vq_callback_t **callbacks;
> + const char **names;
> + struct scatterlist sg;
> + int i, nvqs, err = -ENOMEM;
> +
> + /* Inflateq and deflateq are used unconditionally */
> + nvqs = 2;
> + if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ))
> + nvqs++;
> + if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_FREE_PAGE_VQ))
> + nvqs++;
> +
> + /* Allocate space for find_vqs parameters */
> + vqs = kcalloc(nvqs, sizeof(*vqs), GFP_KERNEL);
> + if (!vqs)
> + goto err_vq;
> + callbacks = kmalloc_array(nvqs, sizeof(*callbacks), GFP_KERNEL);
> + if (!callbacks)
> + goto err_callback;
> + names = kmalloc_array(nvqs, sizeof(*names), GFP_KERNEL);
> + if (!names)
> + goto err_names;
> +
> + callbacks[0] = balloon_ack;
> + names[0] = "inflate";
> + callbacks[1] = balloon_ack;
> + names[1] = "deflate";
> +
> + i = 2;
> + if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ)) {
> + callbacks[i] = stats_request;
> + names[i] = "stats";
> + i++;
> + }
>
> - /*
> - * We expect two virtqueues: inflate and deflate, and
> - * optionally stat.
> - */
> - nvqs = virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ) ? 3 : 2;
> - err = virtio_find_vqs(vb->vdev, nvqs, vqs, callbacks, names, NULL);
> + if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_FREE_PAGE_VQ)) {
> + callbacks[i] = free_page_request;
> + names[i] = "free_page_vq";
> + }
> +
> + err = vb->vdev->config->find_vqs(vb->vdev, nvqs, vqs, callbacks, names,
> + NULL, NULL);
> if (err)
> - return err;
> + goto err_find;
>
> vb->inflate_vq = vqs[0];
> vb->deflate_vq = vqs[1];
> + i = 2;
> if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ)) {
> - struct scatterlist sg;
> - unsigned int num_stats;
> - vb->stats_vq = vqs[2];
> -
> + vb->stats_vq = vqs[i++];
> /*
> * Prime this virtqueue with one buffer so the hypervisor can
> * use it to signal us later (it can't be broken yet!).
> */
> - num_stats = update_balloon_stats(vb);
> -
> - sg_init_one(&sg, vb->stats, sizeof(vb->stats[0]) * num_stats);
> + sg_init_one(&sg, vb->stats, sizeof(vb->stats));
> if (virtqueue_add_outbuf(vb->stats_vq, &sg, 1, vb, GFP_KERNEL)
> < 0)
> BUG();
> virtqueue_kick(vb->stats_vq);
> }
> +
> + if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_FREE_PAGE_VQ)) {
> + vb->free_page_vq = vqs[i];
> + vb->report_free_page_signal = 0;
> + sg_init_one(&sg, &vb->report_free_page_signal,
> + sizeof(__virtio32));
> + if (virtqueue_add_outbuf(vb->free_page_vq, &sg, 1, vb,
> + GFP_KERNEL) < 0)
> + dev_warn(&vb->vdev->dev, "%s: add signal buf fail\n",

failed.

And we likely want to fail probe here.

> + __func__);
> + virtqueue_kick(vb->free_page_vq);
> + }
> +
> + kfree(names);
> + kfree(callbacks);
> + kfree(vqs);
> return 0;
> +
> +err_find:
> + kfree(names);
> +err_names:
> + kfree(callbacks);
> +err_callback:
> + kfree(vqs);
> +err_vq:
> + return err;
> }
>
> #ifdef CONFIG_BALLOON_COMPACTION
> @@ -590,7 +699,7 @@ static int virtballoon_migratepage(struct balloon_dev_info *vb_dev_info,
> spin_unlock_irqrestore(&vb_dev_info->pages_lock, flags);
> if (use_sg) {
> send_one_sg(vb, vb->inflate_vq, page_address(newpage),
> - PAGE_SIZE);
> + PAGE_SIZE, 0);
> } else {
> vb->num_pfns = VIRTIO_BALLOON_PAGES_PER_PAGE;
> set_page_pfns(vb, vb->pfns, newpage);
> @@ -600,7 +709,7 @@ static int virtballoon_migratepage(struct balloon_dev_info *vb_dev_info,
> balloon_page_delete(page);
> if (use_sg) {
> send_one_sg(vb, vb->deflate_vq, page_address(page),
> - PAGE_SIZE);
> + PAGE_SIZE, 0);
> } else {
> vb->num_pfns = VIRTIO_BALLOON_PAGES_PER_PAGE;
> set_page_pfns(vb, vb->pfns, page);
> @@ -667,6 +776,9 @@ static int virtballoon_probe(struct virtio_device *vdev)
> if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_SG))
> xb_init(&vb->page_xb);
>
> + if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_VQ))
> + INIT_WORK(&vb->report_free_page_work, report_free_page);
> +
> vb->nb.notifier_call = virtballoon_oom_notify;
> vb->nb.priority = VIRTBALLOON_OOM_NOTIFY_PRIORITY;
> err = register_oom_notifier(&vb->nb);
> @@ -731,6 +843,7 @@ static void virtballoon_remove(struct virtio_device *vdev)
> spin_unlock_irq(&vb->stop_update_lock);
> cancel_work_sync(&vb->update_balloon_size_work);
> cancel_work_sync(&vb->update_balloon_stats_work);
> + cancel_work_sync(&vb->report_free_page_work);
>
> xb_empty(&vb->page_xb);
> remove_common(vb);
> @@ -785,6 +898,7 @@ static unsigned int features[] = {
> VIRTIO_BALLOON_F_STATS_VQ,
> VIRTIO_BALLOON_F_DEFLATE_ON_OOM,
> VIRTIO_BALLOON_F_SG,
> + VIRTIO_BALLOON_F_FREE_PAGE_VQ,
> };
>
> static struct virtio_driver virtio_balloon_driver = {
> diff --git a/include/uapi/linux/virtio_balloon.h b/include/uapi/linux/virtio_balloon.h
> index 37780a7..8214f84 100644
> --- a/include/uapi/linux/virtio_balloon.h
> +++ b/include/uapi/linux/virtio_balloon.h
> @@ -35,6 +35,7 @@
> #define VIRTIO_BALLOON_F_STATS_VQ 1 /* Memory Stats virtqueue */
> #define VIRTIO_BALLOON_F_DEFLATE_ON_OOM 2 /* Deflate balloon on OOM */
> #define VIRTIO_BALLOON_F_SG 3 /* Use sg instead of PFN lists */
> +#define VIRTIO_BALLOON_F_FREE_PAGE_VQ 4 /* Virtqueue to report free pages */
>
> /* Size of a PFN in the balloon interface. */
> #define VIRTIO_BALLOON_PFN_SHIFT 12
> --
> 2.7.4

2017-08-03 12:41:11

by Michal Hocko

Subject: Re: [PATCH v13 4/5] mm: support reporting free page blocks

On Thu 03-08-17 20:11:58, Wei Wang wrote:
> On 08/03/2017 07:28 PM, Michal Hocko wrote:
> >On Thu 03-08-17 19:27:19, Wei Wang wrote:
> >>On 08/03/2017 06:44 PM, Michal Hocko wrote:
> >>>On Thu 03-08-17 18:42:15, Wei Wang wrote:
> >>>>On 08/03/2017 05:11 PM, Michal Hocko wrote:
> >>>>>On Thu 03-08-17 14:38:18, Wei Wang wrote:
> >>>[...]
> >>>>>>+static int report_free_page_block(struct zone *zone, unsigned int order,
> >>>>>>+ unsigned int migratetype, struct page **page)
> >>>>>This is just too ugly and wrong actually. Never provide struct page
> >>>>>pointers outside of the zone->lock. What I've had in mind was to simply
> >>>>>walk free lists of the suitable order and call the callback for each one.
> >>>>>Something as simple as
> >>>>>
> >>>>> for (i = 0; i < MAX_NR_ZONES; i++) {
> >>>>> struct zone *zone = &pgdat->node_zones[i];
> >>>>>
> >>>>> if (!populated_zone(zone))
> >>>>> continue;
> >>>>> spin_lock_irqsave(&zone->lock, flags);
> >>>>> for (order = min_order; order < MAX_ORDER; ++order) {
> >>>>> struct free_area *free_area = &zone->free_area[order];
> >>>>> enum migratetype mt;
> >>>>> struct page *page;
> >>>>>
> >>>>> if (!free_area->nr_pages)
> >>>>> continue;
> >>>>>
> >>>>> for_each_migratetype_order(order, mt) {
> >>>>> list_for_each_entry(page,
> >>>>> &free_area->free_list[mt], lru) {
> >>>>>
> >>>>> pfn = page_to_pfn(page);
> >>>>> visit(opaque2, prn, 1<<order);
> >>>>> }
> >>>>> }
> >>>>> }
> >>>>>
> >>>>> spin_unlock_irqrestore(&zone->lock, flags);
> >>>>> }
> >>>>>
> >>>>>[...]
> >>>>I think the above would take the lock for too long time. That's why we
> >>>>prefer to take one free page block each time, and taking it one by one
> >>>>also doesn't make a difference, in terms of the performance that we
> >>>>need.
> >>>I think you should start with simple approach and impove incrementally
> >>>if this turns out to be not optimal. I really detest taking struct pages
> >>>outside of the lock. You never know what might happen after the lock is
> >>>dropped. E.g. can you race with the memory hotremove?
> >>
> >>The caller won't use pages returned from the function, so I think there
> >>shouldn't be an issue or race if the returned pages are used (i.e. not free
> >>anymore) or simply gone due to hotremove.
> >No, this is just too error prone. Consider that struct page pointer
> >itself could get invalid in the meantime. Please always keep robustness
> >in mind first. Optimizations are nice but it is even not clear whether
> >the simple variant will cause any problems.
>
>
> how about this:
>
> for_each_populated_zone(zone) {
> for_each_migratetype_order_decend(min_order, order, type) {
> do {
> => spin_lock_irqsave(&zone->lock, flags);
> ret = report_free_page_block(zone, order, type,
> &page)) {
> pfn = page_to_pfn(page);
> nr_pages = 1 << order;
> visit(opaque1, pfn, nr_pages);
> }
> => spin_unlock_irqrestore(&zone->lock, flags);
> } while (!ret)
> }
>
> In this way, we can still keep the lock granularity at one free page block
> while having the struct page operated under the lock.

How can you continue the iteration of the free_list after the lock has
been dropped? If you want to keep the lock held for each migrate type,
then fine: just push the lock inside the for_each_migratetype_order loop
from my example.

--
Michal Hocko
SUSE Labs

2017-08-03 13:05:23

by Pankaj Gupta

Subject: Re: [PATCH v13 5/5] virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_VQ


>
> On 08/03/2017 04:13 PM, Pankaj Gupta wrote:
> >>
> >> + /* Allocate space for find_vqs parameters */
> >> + vqs = kcalloc(nvqs, sizeof(*vqs), GFP_KERNEL);
> >> + if (!vqs)
> >> + goto err_vq;
> >> + callbacks = kmalloc_array(nvqs, sizeof(*callbacks), GFP_KERNEL);
> >> + if (!callbacks)
> >> + goto err_callback;
> >> + names = kmalloc_array(nvqs, sizeof(*names), GFP_KERNEL);
> >
> > is size here (integer) intentional?
>
>
> Sorry, I didn't get it. Could you please elaborate more?

This is okay

>
>
> >
> >> + if (!names)
> >> + goto err_names;
> >> +
> >> + callbacks[0] = balloon_ack;
> >> + names[0] = "inflate";
> >> + callbacks[1] = balloon_ack;
> >> + names[1] = "deflate";
> >> +
> >> + i = 2;
> >> + if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ)) {
> >> + callbacks[i] = stats_request;
> > just thinking if memory for callbacks[3] & names[3] is allocated?
>
>
> Yes, the above kmalloc_array allocated them.

I mean, haven't we created the callbacks array with only two entries, 0 and 1?

callbacks = kmalloc_array(nvqs, sizeof(*callbacks), GFP_KERNEL);

But we are trying to access location '2' which is third:

i = 2;
+ if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ)) {
+ callbacks[i] = stats_request; <---- callbacks[2]
+ names[i] = "stats"; <----- names[2]
+ i++;
+ }

Am I missing anything obvious here?

>
>
> Best,
> Wei
>

2017-08-03 13:14:44

by Wang, Wei W

Subject: Re: [PATCH v13 4/5] mm: support reporting free page blocks

On 08/03/2017 08:41 PM, Michal Hocko wrote:
> On Thu 03-08-17 20:11:58, Wei Wang wrote:
>> On 08/03/2017 07:28 PM, Michal Hocko wrote:
>>> On Thu 03-08-17 19:27:19, Wei Wang wrote:
>>>> On 08/03/2017 06:44 PM, Michal Hocko wrote:
>>>>> On Thu 03-08-17 18:42:15, Wei Wang wrote:
>>>>>> On 08/03/2017 05:11 PM, Michal Hocko wrote:
>>>>>>> On Thu 03-08-17 14:38:18, Wei Wang wrote:
>>>>> [...]
>>>>>>>> +static int report_free_page_block(struct zone *zone, unsigned int order,
>>>>>>>> + unsigned int migratetype, struct page **page)
>>>>>>> This is just too ugly and wrong actually. Never provide struct page
>>>>>>> pointers outside of the zone->lock. What I've had in mind was to simply
>>>>>>> walk free lists of the suitable order and call the callback for each one.
>>>>>>> Something as simple as
>>>>>>>
>>>>>>> for (i = 0; i < MAX_NR_ZONES; i++) {
>>>>>>> struct zone *zone = &pgdat->node_zones[i];
>>>>>>>
>>>>>>> if (!populated_zone(zone))
>>>>>>> continue;
>>>>>>> spin_lock_irqsave(&zone->lock, flags);
>>>>>>> for (order = min_order; order < MAX_ORDER; ++order) {
>>>>>>> struct free_area *free_area = &zone->free_area[order];
>>>>>>> enum migratetype mt;
>>>>>>> struct page *page;
>>>>>>>
>>>>>>> if (!free_area->nr_pages)
>>>>>>> continue;
>>>>>>>
>>>>>>> for_each_migratetype_order(order, mt) {
>>>>>>> list_for_each_entry(page,
>>>>>>> &free_area->free_list[mt], lru) {
>>>>>>>
>>>>>>> pfn = page_to_pfn(page);
>>>>>>> visit(opaque2, prn, 1<<order);
>>>>>>> }
>>>>>>> }
>>>>>>> }
>>>>>>>
>>>>>>> spin_unlock_irqrestore(&zone->lock, flags);
>>>>>>> }
>>>>>>>
>>>>>>> [...]
>>>>>> I think the above would take the lock for too long time. That's why we
>>>>>> prefer to take one free page block each time, and taking it one by one
>>>>>> also doesn't make a difference, in terms of the performance that we
>>>>>> need.
>>>>> I think you should start with simple approach and impove incrementally
>>>>> if this turns out to be not optimal. I really detest taking struct pages
>>>>> outside of the lock. You never know what might happen after the lock is
>>>>> dropped. E.g. can you race with the memory hotremove?
>>>> The caller won't use pages returned from the function, so I think there
>>>> shouldn't be an issue or race if the returned pages are used (i.e. not free
>>>> anymore) or simply gone due to hotremove.
>>> No, this is just too error prone. Consider that struct page pointer
>>> itself could get invalid in the meantime. Please always keep robustness
>>> in mind first. Optimizations are nice but it is even not clear whether
>>> the simple variant will cause any problems.
>>
>> how about this:
>>
>> for_each_populated_zone(zone) {
>> for_each_migratetype_order_decend(min_order, order, type) {
>> do {
>> => spin_lock_irqsave(&zone->lock, flags);
>> ret = report_free_page_block(zone, order, type,
>> &page)) {
>> pfn = page_to_pfn(page);
>> nr_pages = 1 << order;
>> visit(opaque1, pfn, nr_pages);
>> }
>> => spin_unlock_irqrestore(&zone->lock, flags);
>> } while (!ret)
>> }
>>
>> In this way, we can still keep the lock granularity at one free page block
>> while having the struct page operated under the lock.
> How can you continue iteration of free_list after the lock has been
> dropped?

report_free_page_block() handles all the possible cases after the lock is
dropped. For example, if the previously reported page is no longer on the
free list, the first node of the free list for this order is returned
instead. This works because page allocation takes page blocks from the
head of the list toward the tail. For example, given the free list

1, 2, 3, 4, 5, 6

if the previously reported free block is 2, and when we pass 2 to the
report function to get the next block we find that 1, 2 and 3 have all
been allocated, it reports 4, which is now the head of the free list.
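The resume behaviour described above can be modelled in a few lines of userspace C. Note that struct fnode and next_free_block() are hypothetical stand-ins for illustration, not the kernel implementation:

```c
#include <assert.h>
#include <stddef.h>

/* A toy free list node, standing in for struct page on a free_list. */
struct fnode {
    int id;
    struct fnode *next;
};

/*
 * Simplified model of the resume logic: if the previously reported
 * block 'prev' is still on the free list, report the node after it;
 * if it has been allocated in the meantime, restart from the (new)
 * head of the list. Returns NULL when the walk is complete.
 */
static struct fnode *next_free_block(struct fnode *head, int prev)
{
    struct fnode *n;

    for (n = head; n; n = n->next)
        if (n->id == prev)
            return n->next;
    return head; /* prev was allocated; resume from the head */
}
```

With the list 1..6, reporting 2 and then finding 1, 2 and 3 allocated makes the next call return 4, the new list head, matching the example above.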

> If you want to keep the lock held for each migrate type then
> why not. Just push the lock inside for_each_migratetype_order loop from
> my example.
>

The lock above is held per free page block rather than per migrate type,
since the report function reports only one page block at a time.


Best,
Wei

2017-08-03 13:18:44

by Wang, Wei W

Subject: Re: [PATCH v13 5/5] virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_VQ

On 08/03/2017 09:05 PM, Pankaj Gupta wrote:
>> On 08/03/2017 04:13 PM, Pankaj Gupta wrote:
>>>> + /* Allocate space for find_vqs parameters */
>>>> + vqs = kcalloc(nvqs, sizeof(*vqs), GFP_KERNEL);
>>>> + if (!vqs)
>>>> + goto err_vq;
>>>> + callbacks = kmalloc_array(nvqs, sizeof(*callbacks), GFP_KERNEL);
>>>> + if (!callbacks)
>>>> + goto err_callback;
>>>> + names = kmalloc_array(nvqs, sizeof(*names), GFP_KERNEL);
>>>
>>> is size here (integer) intentional?
>>
>> Sorry, I didn't get it. Could you please elaborate more?
> This is okay
>
>>
>>>> + if (!names)
>>>> + goto err_names;
>>>> +
>>>> + callbacks[0] = balloon_ack;
>>>> + names[0] = "inflate";
>>>> + callbacks[1] = balloon_ack;
>>>> + names[1] = "deflate";
>>>> +
>>>> + i = 2;
>>>> + if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ)) {
>>>> + callbacks[i] = stats_request;
>>> just thinking if memory for callbacks[3] & names[3] is allocated?
>>
>> Yes, the above kmalloc_array allocated them.
> I mean we have created callbacks array for two entries 0,1?
>
> callbacks = kmalloc_array(nvqs, sizeof(*callbacks), GFP_KERNEL);
>
> But we are trying to access location '2' which is third:
>
> i = 2;
> + if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ)) {
> + callbacks[i] = stats_request; <---- callbacks[2]
> + names[i] = "stats"; <----- names[2]
> + i++;
> + }
>
> I am missing anything obvious here?


Yes. If virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ) is true,
nvqs will be 3, that is, callbacks[2] is allocated.
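To make the sizing concrete, here is a minimal sketch of the counting logic; count_vqs() is a hypothetical helper mirroring the nvqs arithmetic in init_vqs(), not a function from the patch:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical helper mirroring the nvqs computation in init_vqs(). */
static int count_vqs(bool has_stats_vq, bool has_free_page_vq)
{
    int nvqs = 2;   /* inflate and deflate are always present */

    if (has_stats_vq)
        nvqs++;     /* "stats" occupies index 2 */
    if (has_free_page_vq)
        nvqs++;     /* "free_page_vq" takes the next index */
    return nvqs;
}
```

kmalloc_array(nvqs, ...) therefore allocates three slots when the stats feature is negotiated, so callbacks[2] and names[2] are within bounds.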

Best,
Wei

2017-08-03 13:50:53

by Michal Hocko

Subject: Re: [PATCH v13 4/5] mm: support reporting free page blocks

On Thu 03-08-17 21:17:25, Wei Wang wrote:
> On 08/03/2017 08:41 PM, Michal Hocko wrote:
> >On Thu 03-08-17 20:11:58, Wei Wang wrote:
> >>On 08/03/2017 07:28 PM, Michal Hocko wrote:
> >>>On Thu 03-08-17 19:27:19, Wei Wang wrote:
> >>>>On 08/03/2017 06:44 PM, Michal Hocko wrote:
> >>>>>On Thu 03-08-17 18:42:15, Wei Wang wrote:
> >>>>>>On 08/03/2017 05:11 PM, Michal Hocko wrote:
> >>>>>>>On Thu 03-08-17 14:38:18, Wei Wang wrote:
> >>>>>[...]
> >>>>>>>>+static int report_free_page_block(struct zone *zone, unsigned int order,
> >>>>>>>>+ unsigned int migratetype, struct page **page)
> >>>>>>>This is just too ugly and wrong actually. Never provide struct page
> >>>>>>>pointers outside of the zone->lock. What I've had in mind was to simply
> >>>>>>>walk free lists of the suitable order and call the callback for each one.
> >>>>>>>Something as simple as
> >>>>>>>
> >>>>>>> for (i = 0; i < MAX_NR_ZONES; i++) {
> >>>>>>> struct zone *zone = &pgdat->node_zones[i];
> >>>>>>>
> >>>>>>> if (!populated_zone(zone))
> >>>>>>> continue;
> >>>>>>> spin_lock_irqsave(&zone->lock, flags);
> >>>>>>> for (order = min_order; order < MAX_ORDER; ++order) {
> >>>>>>> struct free_area *free_area = &zone->free_area[order];
> >>>>>>> enum migratetype mt;
> >>>>>>> struct page *page;
> >>>>>>>
> >>>>>>> if (!free_area->nr_pages)
> >>>>>>> continue;
> >>>>>>>
> >>>>>>> for_each_migratetype_order(order, mt) {
> >>>>>>> list_for_each_entry(page,
> >>>>>>> &free_area->free_list[mt], lru) {
> >>>>>>>
> >>>>>>> pfn = page_to_pfn(page);
> >>>>>>> visit(opaque2, prn, 1<<order);
> >>>>>>> }
> >>>>>>> }
> >>>>>>> }
> >>>>>>>
> >>>>>>> spin_unlock_irqrestore(&zone->lock, flags);
> >>>>>>> }
> >>>>>>>
> >>>>>>>[...]
> >>>>>>I think the above would take the lock for too long time. That's why we
> >>>>>>prefer to take one free page block each time, and taking it one by one
> >>>>>>also doesn't make a difference, in terms of the performance that we
> >>>>>>need.
> >>>>>I think you should start with simple approach and impove incrementally
> >>>>>if this turns out to be not optimal. I really detest taking struct pages
> >>>>>outside of the lock. You never know what might happen after the lock is
> >>>>>dropped. E.g. can you race with the memory hotremove?
> >>>>The caller won't use pages returned from the function, so I think there
> >>>>shouldn't be an issue or race if the returned pages are used (i.e. not free
> >>>>anymore) or simply gone due to hotremove.
> >>>No, this is just too error prone. Consider that struct page pointer
> >>>itself could get invalid in the meantime. Please always keep robustness
> >>>in mind first. Optimizations are nice but it is even not clear whether
> >>>the simple variant will cause any problems.
> >>
> >>how about this:
> >>
> >>for_each_populated_zone(zone) {
> >> for_each_migratetype_order_decend(min_order, order, type) {
> >> do {
> >> => spin_lock_irqsave(&zone->lock, flags);
> >> ret = report_free_page_block(zone, order, type,
> >> &page)) {
> >> pfn = page_to_pfn(page);
> >> nr_pages = 1 << order;
> >> visit(opaque1, pfn, nr_pages);
> >> }
> >> => spin_unlock_irqrestore(&zone->lock, flags);
> >> } while (!ret)
> >>}
> >>
> >>In this way, we can still keep the lock granularity at one free page block
> >>while having the struct page operated under the lock.
> >How can you continue iteration of free_list after the lock has been
> >dropped?
>
> report_free_page_block() has handled all the possible cases after the lock
> is
> dropped. For example, if the previous reported page has not been on the free
> list, then the first node from the list of this order will be given. This is
> because
> page allocation takes page blocks from the head to end, for example:
>
> 1,2,3,4,5,6
> if the previous reported free block is 2, when we give 2 to the report
> function
> to get the next page block, and find 1,2,3 have all gone, it will report 4,
> which
> is the head of the free list.

As I've said earlier: start simple and optimize incrementally, with some
numbers to justify the more subtle code.
--
Michal Hocko
SUSE Labs

2017-08-03 14:22:46

by Michael S. Tsirkin

Subject: Re: [PATCH v13 3/5] virtio-balloon: VIRTIO_BALLOON_F_SG

On Thu, Aug 03, 2017 at 02:38:17PM +0800, Wei Wang wrote:
> Add a new feature, VIRTIO_BALLOON_F_SG, which enables the transfer
> of balloon (i.e. inflated/deflated) pages using scatter-gather lists
> to the host.
>
> The implementation of the previous virtio-balloon is not very
> efficient, because the balloon pages are transferred to the
> host one by one. Here is the breakdown of the time in percentage
> spent on each step of the balloon inflating process (inflating
> 7GB of an 8GB idle guest).
>
> 1) allocating pages (6.5%)
> 2) sending PFNs to host (68.3%)
> 3) address translation (6.1%)
> 4) madvise (19%)
>
> It takes about 4126ms for the inflating process to complete.
> The above profiling shows that the bottlenecks are stage 2)
> and stage 4).
>
> This patch optimizes step 2) by transferring pages to the host in
> sgs. An sg describes a chunk of guest physically continuous pages.
> With this mechanism, step 4) can also be optimized by doing address
> translation and madvise() in chunks rather than page by page.
>
> With this new feature, the above ballooning process takes ~541ms
> resulting in an improvement of ~87%.
>
> TODO: optimize stage 1) by allocating/freeing a chunk of pages
> instead of a single page each time.
>
> Signed-off-by: Wei Wang <[email protected]>
> Signed-off-by: Liang Li <[email protected]>
> Suggested-by: Michael S. Tsirkin <[email protected]>
> ---
> drivers/virtio/virtio_balloon.c | 150 ++++++++++++++++++++++++++++++++----
> include/uapi/linux/virtio_balloon.h | 1 +
> 2 files changed, 134 insertions(+), 17 deletions(-)
>
> diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
> index f0b3a0b..29aca0c 100644
> --- a/drivers/virtio/virtio_balloon.c
> +++ b/drivers/virtio/virtio_balloon.c
> @@ -32,6 +32,7 @@
> #include <linux/mm.h>
> #include <linux/mount.h>
> #include <linux/magic.h>
> +#include <linux/xbitmap.h>
>
> /*
> * Balloon device works in 4K page units. So each page is pointed to by
> @@ -79,6 +80,9 @@ struct virtio_balloon {
> /* Synchronize access/update to this struct virtio_balloon elements */
> struct mutex balloon_lock;
>
> + /* The xbitmap used to record ballooned pages */
> + struct xb page_xb;
> +
> /* The array of pfns we tell the Host about. */
> unsigned int num_pfns;
> __virtio32 pfns[VIRTIO_BALLOON_ARRAY_PFNS_MAX];
> @@ -141,13 +145,90 @@ static void set_page_pfns(struct virtio_balloon *vb,
> page_to_balloon_pfn(page) + i);
> }
>
> +static void send_one_sg(struct virtio_balloon *vb, struct virtqueue *vq,
> + void *addr, uint32_t size)
> +{
> + struct scatterlist sg;
> + unsigned int len;
> +
> + sg_init_one(&sg, addr, size);
> + while (unlikely(virtqueue_add_inbuf(vq, &sg, 1, vb, GFP_KERNEL)
> + == -ENOSPC)) {
> + /*
> + * It is uncommon to see the vq is full, because the sg is sent
> + * one by one and the device is able to handle it in time. But
> + * if that happens, we kick and wait for an entry is released.

is released -> to get used.

> + */
> + virtqueue_kick(vq);
> + while (!virtqueue_get_buf(vq, &len) &&
> + !virtqueue_is_broken(vq))
> + cpu_relax();

Please rework to use wait_event in that case too.


> + }
> + virtqueue_kick(vq);
> + wait_event(vb->acked, virtqueue_get_buf(vq, &len));
> +}
> +
> +/*
> + * Send balloon pages in sgs to host. The balloon pages are recorded in the
> + * page xbitmap. Each bit in the bitmap corresponds to a page of PAGE_SIZE.
> + * The page xbitmap is searched for continuous "1" bits, which correspond
> + * to continuous pages, to chunk into sgs.
> + *
> + * @page_xb_start and @page_xb_end form the range of bits in the xbitmap that
> + * need to be searched.
> + */
> +static void tell_host_sgs(struct virtio_balloon *vb,
> + struct virtqueue *vq,
> + unsigned long page_xb_start,
> + unsigned long page_xb_end)
> +{
> + unsigned long sg_pfn_start, sg_pfn_end;
> + void *sg_addr;
> + uint32_t sg_len, sg_max_len = round_down(UINT_MAX, PAGE_SIZE);
> +
> + sg_pfn_start = page_xb_start;
> + while (sg_pfn_start < page_xb_end) {
> + sg_pfn_start = xb_find_next_bit(&vb->page_xb, sg_pfn_start,
> + page_xb_end, 1);
> + if (sg_pfn_start == page_xb_end + 1)
> + break;
> + sg_pfn_end = xb_find_next_bit(&vb->page_xb, sg_pfn_start + 1,
> + page_xb_end, 0);
> + sg_addr = pfn_to_kaddr(sg_pfn_start);
> + sg_len = (sg_pfn_end - sg_pfn_start) << PAGE_SHIFT;
> + while (sg_len > sg_max_len) {
> + send_one_sg(vb, vq, sg_addr, sg_max_len);
> + sg_addr += sg_max_len;
> + sg_len -= sg_max_len;
> + }
> + send_one_sg(vb, vq, sg_addr, sg_len);
> + xb_zero(&vb->page_xb, sg_pfn_start, sg_pfn_end);
> + sg_pfn_start = sg_pfn_end + 1;
> + }
> +}
> +
> +static inline void xb_set_page(struct virtio_balloon *vb,
> + struct page *page,
> + unsigned long *pfn_min,
> + unsigned long *pfn_max)
> +{
> + unsigned long pfn = page_to_pfn(page);
> +
> + *pfn_min = min(pfn, *pfn_min);
> + *pfn_max = max(pfn, *pfn_max);
> + xb_set_bit(&vb->page_xb, pfn);
> +}
> +
> static unsigned fill_balloon(struct virtio_balloon *vb, size_t num)
> {
> struct balloon_dev_info *vb_dev_info = &vb->vb_dev_info;
> unsigned num_allocated_pages;
> + bool use_sg = virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_SG);
> + unsigned long pfn_max = 0, pfn_min = ULONG_MAX;
>
> /* We can only do one array worth at a time. */
> - num = min(num, ARRAY_SIZE(vb->pfns));
> + if (!use_sg)
> + num = min(num, ARRAY_SIZE(vb->pfns));
>
> mutex_lock(&vb->balloon_lock);
> for (vb->num_pfns = 0; vb->num_pfns < num;
> @@ -162,7 +243,12 @@ static unsigned fill_balloon(struct virtio_balloon *vb, size_t num)
> msleep(200);
> break;
> }
> - set_page_pfns(vb, vb->pfns + vb->num_pfns, page);
> +
> + if (use_sg)
> + xb_set_page(vb, page, &pfn_min, &pfn_max);
> + else
> + set_page_pfns(vb, vb->pfns + vb->num_pfns, page);
> +
> vb->num_pages += VIRTIO_BALLOON_PAGES_PER_PAGE;
> if (!virtio_has_feature(vb->vdev,
> VIRTIO_BALLOON_F_DEFLATE_ON_OOM))
> @@ -171,8 +257,12 @@ static unsigned fill_balloon(struct virtio_balloon *vb, size_t num)
>
> num_allocated_pages = vb->num_pfns;
> /* Did we get any? */
> - if (vb->num_pfns != 0)
> - tell_host(vb, vb->inflate_vq);
> + if (vb->num_pfns) {
> + if (use_sg)
> + tell_host_sgs(vb, vb->inflate_vq, pfn_min, pfn_max);
> + else
> + tell_host(vb, vb->inflate_vq);
> + }
> mutex_unlock(&vb->balloon_lock);
>
> return num_allocated_pages;
> @@ -198,9 +288,12 @@ static unsigned leak_balloon(struct virtio_balloon *vb, size_t num)
> struct page *page;
> struct balloon_dev_info *vb_dev_info = &vb->vb_dev_info;
> LIST_HEAD(pages);
> + bool use_sg = virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_SG);
> + unsigned long pfn_max = 0, pfn_min = ULONG_MAX;
>
> - /* We can only do one array worth at a time. */
> - num = min(num, ARRAY_SIZE(vb->pfns));
> + /* Traditionally, we can only do one array worth at a time. */
> + if (!use_sg)
> + num = min(num, ARRAY_SIZE(vb->pfns));
>
> mutex_lock(&vb->balloon_lock);
> /* We can't release more pages than taken */
> @@ -210,7 +303,11 @@ static unsigned leak_balloon(struct virtio_balloon *vb, size_t num)
> page = balloon_page_dequeue(vb_dev_info);
> if (!page)
> break;
> - set_page_pfns(vb, vb->pfns + vb->num_pfns, page);
> + if (use_sg)
> + xb_set_page(vb, page, &pfn_min, &pfn_max);
> + else
> + set_page_pfns(vb, vb->pfns + vb->num_pfns, page);
> +
> list_add(&page->lru, &pages);
> vb->num_pages -= VIRTIO_BALLOON_PAGES_PER_PAGE;
> }
> @@ -221,8 +318,12 @@ static unsigned leak_balloon(struct virtio_balloon *vb, size_t num)
> * virtio_has_feature(vdev, VIRTIO_BALLOON_F_MUST_TELL_HOST);
> * is true, we *have* to do it in this order
> */
> - if (vb->num_pfns != 0)
> - tell_host(vb, vb->deflate_vq);
> + if (vb->num_pfns) {
> + if (use_sg)
> + tell_host_sgs(vb, vb->deflate_vq, pfn_min, pfn_max);
> + else
> + tell_host(vb, vb->deflate_vq);
> + }
> release_pages_balloon(vb, &pages);
> mutex_unlock(&vb->balloon_lock);
> return num_freed_pages;
> @@ -441,6 +542,7 @@ static int init_vqs(struct virtio_balloon *vb)
> }
>
> #ifdef CONFIG_BALLOON_COMPACTION
> +
> /*
> * virtballoon_migratepage - perform the balloon page migration on behalf of
> * a compation thread. (called under page lock)
> @@ -464,6 +566,7 @@ static int virtballoon_migratepage(struct balloon_dev_info *vb_dev_info,
> {
> struct virtio_balloon *vb = container_of(vb_dev_info,
> struct virtio_balloon, vb_dev_info);
> + bool use_sg = virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_SG);
> unsigned long flags;
>
> /*
> @@ -485,16 +588,24 @@ static int virtballoon_migratepage(struct balloon_dev_info *vb_dev_info,
> vb_dev_info->isolated_pages--;
> __count_vm_event(BALLOON_MIGRATE);
> spin_unlock_irqrestore(&vb_dev_info->pages_lock, flags);
> - vb->num_pfns = VIRTIO_BALLOON_PAGES_PER_PAGE;
> - set_page_pfns(vb, vb->pfns, newpage);
> - tell_host(vb, vb->inflate_vq);
> -
> + if (use_sg) {
> + send_one_sg(vb, vb->inflate_vq, page_address(newpage),
> + PAGE_SIZE);
> + } else {
> + vb->num_pfns = VIRTIO_BALLOON_PAGES_PER_PAGE;
> + set_page_pfns(vb, vb->pfns, newpage);
> + tell_host(vb, vb->inflate_vq);
> + }
> /* balloon's page migration 2nd step -- deflate "page" */
> balloon_page_delete(page);
> - vb->num_pfns = VIRTIO_BALLOON_PAGES_PER_PAGE;
> - set_page_pfns(vb, vb->pfns, page);
> - tell_host(vb, vb->deflate_vq);
> -
> + if (use_sg) {
> + send_one_sg(vb, vb->deflate_vq, page_address(page),
> + PAGE_SIZE);
> + } else {
> + vb->num_pfns = VIRTIO_BALLOON_PAGES_PER_PAGE;
> + set_page_pfns(vb, vb->pfns, page);
> + tell_host(vb, vb->deflate_vq);
> + }
> mutex_unlock(&vb->balloon_lock);
>
> put_page(page); /* balloon reference */
> @@ -553,6 +664,9 @@ static int virtballoon_probe(struct virtio_device *vdev)
> if (err)
> goto out_free_vb;
>
> + if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_SG))
> + xb_init(&vb->page_xb);
> +
> vb->nb.notifier_call = virtballoon_oom_notify;
> vb->nb.priority = VIRTBALLOON_OOM_NOTIFY_PRIORITY;
> err = register_oom_notifier(&vb->nb);
> @@ -618,6 +732,7 @@ static void virtballoon_remove(struct virtio_device *vdev)
> cancel_work_sync(&vb->update_balloon_size_work);
> cancel_work_sync(&vb->update_balloon_stats_work);
>
> + xb_empty(&vb->page_xb);
> remove_common(vb);
> #ifdef CONFIG_BALLOON_COMPACTION
> if (vb->vb_dev_info.inode)
> @@ -669,6 +784,7 @@ static unsigned int features[] = {
> VIRTIO_BALLOON_F_MUST_TELL_HOST,
> VIRTIO_BALLOON_F_STATS_VQ,
> VIRTIO_BALLOON_F_DEFLATE_ON_OOM,
> + VIRTIO_BALLOON_F_SG,
> };
>
> static struct virtio_driver virtio_balloon_driver = {
> diff --git a/include/uapi/linux/virtio_balloon.h b/include/uapi/linux/virtio_balloon.h
> index 343d7dd..37780a7 100644
> --- a/include/uapi/linux/virtio_balloon.h
> +++ b/include/uapi/linux/virtio_balloon.h
> @@ -34,6 +34,7 @@
> #define VIRTIO_BALLOON_F_MUST_TELL_HOST 0 /* Tell before reclaiming pages */
> #define VIRTIO_BALLOON_F_STATS_VQ 1 /* Memory Stats virtqueue */
> #define VIRTIO_BALLOON_F_DEFLATE_ON_OOM 2 /* Deflate balloon on OOM */
> +#define VIRTIO_BALLOON_F_SG 3 /* Use sg instead of PFN lists */
>
> /* Size of a PFN in the balloon interface. */
> #define VIRTIO_BALLOON_PFN_SHIFT 12
> --
> 2.7.4

2017-08-03 15:18:06

by Wang, Wei W

[permalink] [raw]
Subject: RE: [PATCH v13 3/5] virtio-balloon: VIRTIO_BALLOON_F_SG

On Thursday, August 3, 2017 10:23 PM, Michael S. Tsirkin wrote:
> On Thu, Aug 03, 2017 at 02:38:17PM +0800, Wei Wang wrote:
> > +static void send_one_sg(struct virtio_balloon *vb, struct virtqueue *vq,
> > + void *addr, uint32_t size)
> > +{
> > + struct scatterlist sg;
> > + unsigned int len;
> > +
> > + sg_init_one(&sg, addr, size);
> > + while (unlikely(virtqueue_add_inbuf(vq, &sg, 1, vb, GFP_KERNEL)
> > + == -ENOSPC)) {
> > + /*
> > + * It is uncommon to see the vq is full, because the sg is sent
> > + * one by one and the device is able to handle it in time. But
> > + * if that happens, we kick and wait for an entry is released.
>
> is released -> to get used.
>
> > + */
> > + virtqueue_kick(vq);
> > + while (!virtqueue_get_buf(vq, &len) &&
> > + !virtqueue_is_broken(vq))
> > + cpu_relax();
>
> Please rework to use wait_event in that case too.

For the balloon page case here, it is fine to use wait_event. But for the free page
case, I think it might not be suitable because the mm lock is being held.

Best,
Wei

2017-08-03 15:20:18

by Wang, Wei W

[permalink] [raw]
Subject: RE: [PATCH v13 4/5] mm: support reporting free page blocks

On Thursday, August 3, 2017 9:51 PM, Michal Hocko:
> As I've said earlier. Start simple optimize incrementally with some numbers to
> justify a more subtle code.
> --

OK. Let's start with the simple implementation as you suggested.

Best,
Wei

2017-08-03 15:56:02

by Michael S. Tsirkin

[permalink] [raw]
Subject: Re: [PATCH v13 3/5] virtio-balloon: VIRTIO_BALLOON_F_SG

On Thu, Aug 03, 2017 at 03:17:59PM +0000, Wang, Wei W wrote:
> On Thursday, August 3, 2017 10:23 PM, Michael S. Tsirkin wrote:
> > On Thu, Aug 03, 2017 at 02:38:17PM +0800, Wei Wang wrote:
> > > +static void send_one_sg(struct virtio_balloon *vb, struct virtqueue *vq,
> > > + void *addr, uint32_t size)
> > > +{
> > > + struct scatterlist sg;
> > > + unsigned int len;
> > > +
> > > + sg_init_one(&sg, addr, size);
> > > + while (unlikely(virtqueue_add_inbuf(vq, &sg, 1, vb, GFP_KERNEL)
> > > + == -ENOSPC)) {
> > > + /*
> > > + * It is uncommon to see the vq is full, because the sg is sent
> > > + * one by one and the device is able to handle it in time. But
> > > + * if that happens, we kick and wait for an entry is released.
> >
> > is released -> to get used.
> >
> > > + */
> > > + virtqueue_kick(vq);
> > > + while (!virtqueue_get_buf(vq, &len) &&
> > > + !virtqueue_is_broken(vq))
> > > + cpu_relax();
> >
> > Please rework to use wait_event in that case too.
>
> For the balloon page case here, it is fine to use wait_event. But for the free page
> case, I think it might not be suitable because the mm lock is being held.
>
> Best,
> Wei

You will have to find a way to drop the lock and restart from where you
stopped then.

--
MST

2017-08-03 16:12:25

by kernel test robot

[permalink] [raw]
Subject: Re: [PATCH v13 5/5] virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_VQ

Hi Wei,

[auto build test WARNING on v4.13-rc3]
[also build test WARNING on next-20170803]
[cannot apply to linus/master linux/master mmotm/master]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url: https://github.com/0day-ci/linux/commits/Wei-Wang/Virtio-balloon-Enhancement/20170803-223740
config: xtensa-allmodconfig (attached as .config)
compiler: xtensa-linux-gcc (GCC) 4.9.0
reproduce:
wget https://raw.githubusercontent.com/01org/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# save the attached .config to linux build tree
make.cross ARCH=xtensa

All warnings (new ones prefixed by >>):

drivers//virtio/virtio_balloon.c: In function 'tell_host_sgs':
drivers//virtio/virtio_balloon.c:210:3: error: implicit declaration of function 'pfn_to_kaddr' [-Werror=implicit-function-declaration]
sg_addr = pfn_to_kaddr(sg_pfn_start);
^
drivers//virtio/virtio_balloon.c:210:11: warning: assignment makes pointer from integer without a cast
sg_addr = pfn_to_kaddr(sg_pfn_start);
^
drivers//virtio/virtio_balloon.c: In function 'virtio_balloon_send_free_pages':
>> drivers//virtio/virtio_balloon.c:523:15: warning: initialization makes pointer from integer without a cast
void *addr = pfn_to_kaddr(pfn);
^
cc1: some warnings being treated as errors

vim +523 drivers//virtio/virtio_balloon.c

518
519 static void virtio_balloon_send_free_pages(void *opaque, unsigned long pfn,
520 unsigned long nr_pages)
521 {
522 struct virtio_balloon *vb = (struct virtio_balloon *)opaque;
> 523 void *addr = pfn_to_kaddr(pfn);
524 uint32_t len = nr_pages << PAGE_SHIFT;
525
526 send_one_sg(vb, vb->free_page_vq, addr, len, 1);
527 }
528

---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all Intel Corporation



2017-08-03 21:02:13

by Michael S. Tsirkin

[permalink] [raw]
Subject: Re: [PATCH v13 4/5] mm: support reporting free page blocks

On Thu, Aug 03, 2017 at 03:20:09PM +0000, Wang, Wei W wrote:
> On Thursday, August 3, 2017 9:51 PM, Michal Hocko:
> > As I've said earlier. Start simple optimize incrementally with some numbers to
> > justify a more subtle code.
> > --
>
> OK. Let's start with the simple implementation as you suggested.
>
> Best,
> Wei

The tricky part is when you need to drop the lock and
then restart because the device is busy. Would it maybe
make sense to rotate the list so that new head
will consist of pages not yet sent to device?

--
MST

2017-08-04 07:53:42

by Michal Hocko

[permalink] [raw]
Subject: Re: [PATCH v13 4/5] mm: support reporting free page blocks

On Fri 04-08-17 00:02:01, Michael S. Tsirkin wrote:
> On Thu, Aug 03, 2017 at 03:20:09PM +0000, Wang, Wei W wrote:
> > On Thursday, August 3, 2017 9:51 PM, Michal Hocko:
> > > As I've said earlier. Start simple optimize incrementally with some numbers to
> > > justify a more subtle code.
> > > --
> >
> > OK. Let's start with the simple implementation as you suggested.
> >
> > Best,
> > Wei
>
> The tricky part is when you need to drop the lock and
> then restart because the device is busy. Would it maybe
> make sense to rotate the list so that new head
> will consist of pages not yet sent to device?

No, I think this should be a strictly non-modifying API.
--
Michal Hocko
SUSE Labs

2017-08-04 08:13:01

by Wang, Wei W

[permalink] [raw]
Subject: Re: [PATCH v13 4/5] mm: support reporting free page blocks

On 08/04/2017 03:53 PM, Michal Hocko wrote:
> On Fri 04-08-17 00:02:01, Michael S. Tsirkin wrote:
>> On Thu, Aug 03, 2017 at 03:20:09PM +0000, Wang, Wei W wrote:
>>> On Thursday, August 3, 2017 9:51 PM, Michal Hocko:
>>>> As I've said earlier. Start simple optimize incrementally with some numbers to
>>>> justify a more subtle code.
>>>> --
>>> OK. Let's start with the simple implementation as you suggested.
>>>
>>> Best,
>>> Wei
>> The tricky part is when you need to drop the lock and
>> then restart because the device is busy. Would it maybe
>> make sense to rotate the list so that new head
>> will consist of pages not yet sent to device?
> No, I think this should be a strictly non-modifying API.


Just to get the context here for discussion:

spin_lock_irqsave(&zone->lock, flags);
...
visit(opaque2, pfn, 1<<order);
spin_unlock_irqrestore(&zone->lock, flags);

The concern is that the callback may cause the lock to be
held too long.


I think here we can have two options:
- Option 1: Add a note for the callback: the callback function
should not block and it should finish as soon as possible
(when implementing an interrupt handler, we also have
similar rules in mind, right?).

For our use case, the callback just puts the reported page
block on the ring, then returns. If the ring is full because the host
is busy, then I think it should skip this one and just return.
Because:
A. This is an optimization feature, so losing a couple of free
pages to report isn't that important;
B. In reality, I think it's uncommon to see this ring getting
full (I didn't observe a full ring in the tests), since the host
(consumer) is notified to take out the page block right
after it is added.

- Option 2: Put the callback function outside the lock
What's passed into the callback is just a pfn, and the callback
won't access the corresponding pages. So, I still think it won't
be an issue no matter what the status of the pages is after they
are reported (even if they no longer exist due to hot-remove).


What would you guys think?

Best,
Wei

2017-08-04 08:24:28

by Michal Hocko

[permalink] [raw]
Subject: Re: [PATCH v13 4/5] mm: support reporting free page blocks

On Fri 04-08-17 16:15:24, Wei Wang wrote:
> On 08/04/2017 03:53 PM, Michal Hocko wrote:
> >On Fri 04-08-17 00:02:01, Michael S. Tsirkin wrote:
> >>On Thu, Aug 03, 2017 at 03:20:09PM +0000, Wang, Wei W wrote:
> >>>On Thursday, August 3, 2017 9:51 PM, Michal Hocko:
> >>>>As I've said earlier. Start simple optimize incrementally with some numbers to
> >>>>justify a more subtle code.
> >>>>--
> >>>OK. Let's start with the simple implementation as you suggested.
> >>>
> >>>Best,
> >>>Wei
> >>The tricky part is when you need to drop the lock and
> >>then restart because the device is busy. Would it maybe
> >>make sense to rotate the list so that new head
> >>will consist of pages not yet sent to device?
> >No, I this should be strictly non-modifying API.
>
>
> Just get the context here for discussion:
>
> spin_lock_irqsave(&zone->lock, flags);
> ...
> visit(opaque2, pfn, 1<<order);
> spin_unlock_irqrestore(&zone->lock, flags);
>
> The concern is that the callback may cause the lock be
> taken too long.
>
>
> I think here we can have two options:
> - Option 1: Put a Note for the callback: the callback function
> should not block and it should finish as soon as possible.
> (when implementing an interrupt handler, we also have
> such similar rules in mind, right?).

absolutely

> For our use case, the callback just puts the reported page
> block to the ring, then returns. If the ring is full as the host
> is busy, then I think it should skip this one, and just return.
> Because:
> A. This is an optimization feature, losing a couple of free
> pages to report isn't that important;
> B. In reality, I think it's uncommon to see this ring getting
> full (I didn't observe ring full in the tests), since the host
> (consumer) is notified to take out the page block right
> after it is added.

I thought you only updated a pre-allocated bitmap... Anyway, I cannot
comment on this part much as I am not familiar with your use case.

> - Option 2: Put the callback function outside the lock
> What's input into the callback is just a pfn, and the callback
> won't access the corresponding pages. So, I still think it won't
> be an issue no matter what status of the pages is after they
> are reported (even if they no longer exist due to hot-remove).

This would make the API implementation more complex and I am not yet
convinced we really need that.

--
Michal Hocko
SUSE Labs

2017-08-04 08:52:42

by Wang, Wei W

[permalink] [raw]
Subject: Re: [PATCH v13 4/5] mm: support reporting free page blocks

On 08/04/2017 04:24 PM, Michal Hocko wrote:
>
>> For our use case, the callback just puts the reported page
>> block to the ring, then returns. If the ring is full as the host
>> is busy, then I think it should skip this one, and just return.
>> Because:
>> A. This is an optimization feature, losing a couple of free
>> pages to report isn't that important;
>> B. In reality, I think it's uncommon to see this ring getting
>> full (I didn't observe ring full in the tests), since the host
>> (consumer) is notified to take out the page block right
>> after it is added.
> I thought you only updated a pre-allocated bitmap... Anyway, I cannot
> comment on this part much as I am not familiar with your usecase.
>

Actually the bitmap is in the hypervisor (host). The callback puts the
(pfn,size) on a ring which is shared with the hypervisor, then the
hypervisor takes that info from the ring and updates that bitmap.


Best,
Wei

2017-08-07 06:56:02

by Wang, Wei W

[permalink] [raw]
Subject: Re: [PATCH v13 1/5] Introduce xbitmap

On 08/03/2017 02:38 PM, Wei Wang wrote:
> From: Matthew Wilcox <[email protected]>
>
> The eXtensible Bitmap is a sparse bitmap representation which is
> efficient for set bits which tend to cluster. It supports up to
> 'unsigned long' worth of bits, and this commit adds the bare bones --
> xb_set_bit(), xb_clear_bit() and xb_test_bit().
>
> Signed-off-by: Matthew Wilcox <[email protected]>
> Signed-off-by: Wei Wang <[email protected]>
> ---
> include/linux/radix-tree.h | 2 +
> include/linux/xbitmap.h | 49 ++++++++++++++++
> lib/radix-tree.c | 139 ++++++++++++++++++++++++++++++++++++++++++++-
> 3 files changed, 188 insertions(+), 2 deletions(-)
> create mode 100644 include/linux/xbitmap.h
>
> diff --git a/include/linux/radix-tree.h b/include/linux/radix-tree.h
> index 3e57350..428ccc9 100644
> --- a/include/linux/radix-tree.h
> +++ b/include/linux/radix-tree.h


Hi Matthew,

Could you please help to upstream this patch?


Best,
Wei

2017-08-08 06:10:13

by Wang, Wei W

[permalink] [raw]
Subject: Re: [PATCH v13 4/5] mm: support reporting free page blocks

On 08/03/2017 05:11 PM, Michal Hocko wrote:
> On Thu 03-08-17 14:38:18, Wei Wang wrote:
> This is just too ugly and wrong actually. Never provide struct page
> pointers outside of the zone->lock. What I've had in mind was to simply
> walk free lists of the suitable order and call the callback for each one.
> Something as simple as
>
> for (i = 0; i < MAX_NR_ZONES; i++) {
> struct zone *zone = &pgdat->node_zones[i];
>
> if (!populated_zone(zone))
> continue;

Can we directly use for_each_populated_zone(zone) here?


> spin_lock_irqsave(&zone->lock, flags);
> for (order = min_order; order < MAX_ORDER; ++order) {


This appears to be covered by for_each_migratetype_order(order, mt) below.


> struct free_area *free_area = &zone->free_area[order];
> enum migratetype mt;
> struct page *page;
>
> if (!free_area->nr_pages)
> continue;
>
> for_each_migratetype_order(order, mt) {
> list_for_each_entry(page,
> &free_area->free_list[mt], lru) {
>
> pfn = page_to_pfn(page);
> visit(opaque2, prn, 1<<order);
> }
> }
> }
>
> spin_unlock_irqrestore(&zone->lock, flags);
> }
>
> [...]
>

What do you think if we further simplify the above implementation like this:

for_each_populated_zone(zone) {
for_each_migratetype_order_decend(1, order, mt) {
spin_lock_irqsave(&zone->lock, flags);
list_for_each_entry(page,
&zone->free_area[order].free_list[mt], lru) {
pfn = page_to_pfn(page);
visit(opaque1, pfn, 1 << order);
}
spin_unlock_irqrestore(&zone->lock, flags);
}
}


Best,
Wei

2017-08-08 06:31:48

by Wang, Wei W

[permalink] [raw]
Subject: Re: [virtio-dev] Re: [PATCH v13 4/5] mm: support reporting free page blocks

On 08/08/2017 02:12 PM, Wei Wang wrote:
> On 08/03/2017 05:11 PM, Michal Hocko wrote:
>> On Thu 03-08-17 14:38:18, Wei Wang wrote:
>> This is just too ugly and wrong actually. Never provide struct page
>> pointers outside of the zone->lock. What I've had in mind was to simply
>> walk free lists of the suitable order and call the callback for each
>> one.
>> Something as simple as
>>
>> for (i = 0; i < MAX_NR_ZONES; i++) {
>> struct zone *zone = &pgdat->node_zones[i];
>>
>> if (!populated_zone(zone))
>> continue;
>
> Can we directly use for_each_populated_zone(zone) here?
>
>
>> spin_lock_irqsave(&zone->lock, flags);
>> for (order = min_order; order < MAX_ORDER; ++order) {
>
>
> This appears to be covered by for_each_migratetype_order(order, mt)
> below.
>
>
>> struct free_area *free_area = &zone->free_area[order];
>> enum migratetype mt;
>> struct page *page;
>>
>> if (!free_area->nr_pages)
>> continue;
>>
>> for_each_migratetype_order(order, mt) {
>> list_for_each_entry(page,
>> &free_area->free_list[mt], lru) {
>>
>> pfn = page_to_pfn(page);
>> visit(opaque2, prn, 1<<order);
>> }
>> }
>> }
>>
>> spin_unlock_irqrestore(&zone->lock, flags);
>> }
>>
>> [...]
>>
>
> What do you think if we further simplify the above implementation like
> this:
>
> for_each_populated_zone(zone) {
> for_each_migratetype_order_decend(1, order, mt) {

here it will be min_order (passed by the caller), instead of "1",
that is, for_each_migratetype_order_decend(min_order, order, mt)


> spin_lock_irqsave(&zone->lock, flags);
> list_for_each_entry(page,
> &zone->free_area[order].free_list[mt], lru) {
> pfn = page_to_pfn(page);
> visit(opaque1, pfn, 1 << order);
> }
> spin_unlock_irqrestore(&zone->lock, flags);
> }
> }
>
>


Best,
Wei

2017-08-09 21:36:43

by Andrew Morton

[permalink] [raw]
Subject: Re: [PATCH v13 1/5] Introduce xbitmap

On Thu, 3 Aug 2017 14:38:15 +0800 Wei Wang <[email protected]> wrote:

> From: Matthew Wilcox <[email protected]>
>
> The eXtensible Bitmap is a sparse bitmap representation which is
> efficient for set bits which tend to cluster. It supports up to
> 'unsigned long' worth of bits, and this commit adds the bare bones --
> xb_set_bit(), xb_clear_bit() and xb_test_bit().

Would like to see some additional details here justifying the change.
The sole user is virtio-balloon, yes? What alternatives were examined
and what are the benefits of this approach?

Have you identified any other subsystems which could utilize this?

>
> ...
>
> --- a/lib/radix-tree.c
> +++ b/lib/radix-tree.c
> @@ -37,6 +37,7 @@
> #include <linux/rcupdate.h>
> #include <linux/slab.h>
> #include <linux/string.h>
> +#include <linux/xbitmap.h>
>
>
> /* Number of nodes in fully populated tree of given height */
> @@ -78,6 +79,14 @@ static struct kmem_cache *radix_tree_node_cachep;
> #define IDA_PRELOAD_SIZE (IDA_MAX_PATH * 2 - 1)
>
> /*
> + * The XB can go up to unsigned long, but also uses a bitmap.

This comment is hard to understand.

> + */
> +#define XB_INDEX_BITS (BITS_PER_LONG - ilog2(IDA_BITMAP_BITS))
> +#define XB_MAX_PATH (DIV_ROUND_UP(XB_INDEX_BITS, \
> + RADIX_TREE_MAP_SHIFT))
> +#define XB_PRELOAD_SIZE (XB_MAX_PATH * 2 - 1)
> +
>
> ...
>
> +void xb_preload(gfp_t gfp)
> +{
> + __radix_tree_preload(gfp, XB_PRELOAD_SIZE);
> + if (!this_cpu_read(ida_bitmap)) {
> + struct ida_bitmap *bitmap = kmalloc(sizeof(*bitmap), gfp);
> +
> + if (!bitmap)
> + return;
> + bitmap = this_cpu_cmpxchg(ida_bitmap, NULL, bitmap);
> + kfree(bitmap);
> + }
> +}
> +EXPORT_SYMBOL(xb_preload);

Please document the exported API. It's conventional to do this in
kerneldoc but for some reason kerneldoc makes people write
uninteresting and unuseful documentation. Be sure to cover the
*useful* stuff: what it does, why it does it, under which circumstances
it should be used, what the caller-provided locking should look like,
what the return values mean, etc. Stuff which programmers actually
will benefit from knowing.

> +int xb_set_bit(struct xb *xb, unsigned long bit)
>
> ...
>
> +int xb_clear_bit(struct xb *xb, unsigned long bit)

There's quite a lot of common code here. Did you investigate factoring
that out in some fashion?

> +bool xb_test_bit(const struct xb *xb, unsigned long bit)
> +{
> + unsigned long index = bit / IDA_BITMAP_BITS;
> + const struct radix_tree_root *root = &xb->xbrt;
> + struct ida_bitmap *bitmap = radix_tree_lookup(root, index);
> +
> + bit %= IDA_BITMAP_BITS;
> +
> + if (!bitmap)
> + return false;
> + if (radix_tree_exception(bitmap)) {
> + bit += RADIX_TREE_EXCEPTIONAL_SHIFT;
> + if (bit > BITS_PER_LONG)
> + return false;
> + return (unsigned long)bitmap & (1UL << bit);
> + }
> + return test_bit(bit, bitmap->bitmap);
> +}
> +

Missing EXPORT_SYMBOL?


Perhaps all this code should go into a new lib/xbitmap.c.

2017-08-10 06:01:07

by Wang, Wei W

[permalink] [raw]
Subject: Re: [PATCH v13 1/5] Introduce xbitmap

On 08/10/2017 05:36 AM, Andrew Morton wrote:
> On Thu, 3 Aug 2017 14:38:15 +0800 Wei Wang <[email protected]> wrote:
>
>> From: Matthew Wilcox <[email protected]>
>>
>> The eXtensible Bitmap is a sparse bitmap representation which is
>> efficient for set bits which tend to cluster. It supports up to
>> 'unsigned long' worth of bits, and this commit adds the bare bones --
>> xb_set_bit(), xb_clear_bit() and xb_test_bit().
> Would like to see some additional details here justifying the change.
> The sole user is virtio-balloon, yes? What alternatives were examined
> and what are the benefits of this approach?
>
> Have you identified any other subsystems which could utilize this?


The idea and implementation come from Matthew, but I can share
my thoughts here (mostly from a user's perspective):

This seems to be the first structure that uses radix-tree-based
bitmaps for id recording purposes. The id is given by the user,
which is different from ida (ida is used for id allocation). A
bitmap is allocated on demand when the user provides an id to
record that is beyond the range the already allocated bitmaps
can cover. The benefits come from the radix implementation -
efficient storage and quick lookup.

We use it in virtio-balloon to record the pfns of balloon pages, that is,
a pfn is an id recorded into the bitmap. The bitmaps are later
searched for continuous "1" bits, which correspond to continuous pfns.

Virtio-balloon is the first user of it. I'm not sure about other subsystems,
but other developers may notice it and use it once it's available there.


>> ...
>>
>> --- a/lib/radix-tree.c
>> +++ b/lib/radix-tree.c
>> @@ -37,6 +37,7 @@
>> #include <linux/rcupdate.h>
>> #include <linux/slab.h>
>> #include <linux/string.h>
>> +#include <linux/xbitmap.h>
>>
>>
>> /* Number of nodes in fully populated tree of given height */
>> @@ -78,6 +79,14 @@ static struct kmem_cache *radix_tree_node_cachep;
>> #define IDA_PRELOAD_SIZE (IDA_MAX_PATH * 2 - 1)
>>
>> /*
>> + * The XB can go up to unsigned long, but also uses a bitmap.
> This comment is hard to understand.

I'm also not sure about it.

>
>> + */
>> +#define XB_INDEX_BITS (BITS_PER_LONG - ilog2(IDA_BITMAP_BITS))
>> +#define XB_MAX_PATH (DIV_ROUND_UP(XB_INDEX_BITS, \
>> + RADIX_TREE_MAP_SHIFT))
>> +#define XB_PRELOAD_SIZE (XB_MAX_PATH * 2 - 1)
>> +
>>
>> ...
>>
>> +void xb_preload(gfp_t gfp)
>> +{
>> + __radix_tree_preload(gfp, XB_PRELOAD_SIZE);
>> + if (!this_cpu_read(ida_bitmap)) {
>> + struct ida_bitmap *bitmap = kmalloc(sizeof(*bitmap), gfp);
>> +
>> + if (!bitmap)
>> + return;
>> + bitmap = this_cpu_cmpxchg(ida_bitmap, NULL, bitmap);
>> + kfree(bitmap);
>> + }
>> +}
>> +EXPORT_SYMBOL(xb_preload);
> Please document the exported API. It's conventional to do this in
> kerneldoc but for some reason kerneldoc makes people write
> uninteresting and unuseful documentation. Be sure to cover the
> *useful* stuff: what it does, why it does it, under which circumstances
> it should be used, what the caller-provided locking should look like,
> what the return values mean, etc. Stuff which programmers actually
> will benefit from knowing.

OK.

>
>> +int xb_set_bit(struct xb *xb, unsigned long bit)
>>
>> ...
>>
>> +int xb_clear_bit(struct xb *xb, unsigned long bit)
> There's quite a lot of common code here. Did you investigate factoring
> that out in some fashion?


If we combine the functions into one
xb_bit_ops(struct xb *xb, unsigned long bit, enum xb_ops ops),
it would become a big function with a number of
if (ops == set/clear/test)-else branches; I'm not sure that would look good.
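An alternative to a single big ops function would be to factor only the common lookup into a shared helper and keep set/clear/test as thin wrappers. A hypothetical userspace sketch of the pattern (the sketch_* names are made up, and a single static bitmap stands in for the radix-tree lookup):

```c
#include <stdbool.h>

#define SKETCH_BITS 256
#define WORD_BITS   (8 * sizeof(unsigned long))

static unsigned long sketch_map[SKETCH_BITS / WORD_BITS];

/* Shared helper: resolve a bit number to its word and mask.
 * In the real code this is where the radix-tree lookup would live. */
static unsigned long *sketch_word(unsigned long bit, unsigned long *mask)
{
	*mask = 1UL << (bit % WORD_BITS);
	return &sketch_map[bit / WORD_BITS];
}

static void sketch_set_bit(unsigned long bit)
{
	unsigned long mask, *word = sketch_word(bit, &mask);

	*word |= mask;
}

static void sketch_clear_bit(unsigned long bit)
{
	unsigned long mask, *word = sketch_word(bit, &mask);

	*word &= ~mask;
}

static bool sketch_test_bit(unsigned long bit)
{
	unsigned long mask, *word = sketch_word(bit, &mask);

	return *word & mask;
}
```

Each wrapper stays a few lines long, so there is no if/else ladder, while the duplicated lookup logic lives in one place.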


>
>> +bool xb_test_bit(const struct xb *xb, unsigned long bit)
>> +{
>> + unsigned long index = bit / IDA_BITMAP_BITS;
>> + const struct radix_tree_root *root = &xb->xbrt;
>> + struct ida_bitmap *bitmap = radix_tree_lookup(root, index);
>> +
>> + bit %= IDA_BITMAP_BITS;
>> +
>> + if (!bitmap)
>> + return false;
>> + if (radix_tree_exception(bitmap)) {
>> + bit += RADIX_TREE_EXCEPTIONAL_SHIFT;
>> + if (bit > BITS_PER_LONG)
>> + return false;
>> + return (unsigned long)bitmap & (1UL << bit);
>> + }
>> + return test_bit(bit, bitmap->bitmap);
>> +}
>> +
> Missing EXPORT_SYMBOL?

Yes, will add that, thanks.

>
> Perhaps all this code should go into a new lib/xbitmap.c.

Ok, will relocate.


Best,
Wei


2017-08-10 07:05:22

by Michal Hocko

Subject: Re: [virtio-dev] Re: [PATCH v13 4/5] mm: support reporting free page blocks

On Tue 08-08-17 14:34:25, Wei Wang wrote:
> On 08/08/2017 02:12 PM, Wei Wang wrote:
> >On 08/03/2017 05:11 PM, Michal Hocko wrote:
> >>On Thu 03-08-17 14:38:18, Wei Wang wrote:
> >>This is just too ugly and wrong actually. Never provide struct page
> >>pointers outside of the zone->lock. What I've had in mind was to simply
> >>walk free lists of the suitable order and call the callback for each
> >>one.
> >>Something as simple as
> >>
> >> for (i = 0; i < MAX_NR_ZONES; i++) {
> >> struct zone *zone = &pgdat->node_zones[i];
> >>
> >> if (!populated_zone(zone))
> >> continue;
> >
> >Can we directly use for_each_populated_zone(zone) here?

yes, my example couldn't because I was still assuming per-node API

> >>spin_lock_irqsave(&zone->lock, flags);
> >> for (order = min_order; order < MAX_ORDER; ++order) {
> >
> >
> >This appears to be covered by for_each_migratetype_order(order, mt) below.

yes but
#define for_each_migratetype_order(order, type) \
for (order = 0; order < MAX_ORDER; order++) \
for (type = 0; type < MIGRATE_TYPES; type++)

so you would have to skip orders < min_order
--
Michal Hocko
SUSE Labs

2017-08-10 07:36:25

by Wang, Wei W

Subject: Re: [virtio-dev] Re: [PATCH v13 4/5] mm: support reporting free page blocks

On 08/10/2017 03:05 PM, Michal Hocko wrote:
> On Tue 08-08-17 14:34:25, Wei Wang wrote:
>> On 08/08/2017 02:12 PM, Wei Wang wrote:
>>> On 08/03/2017 05:11 PM, Michal Hocko wrote:
>>>> On Thu 03-08-17 14:38:18, Wei Wang wrote:
>>>> This is just too ugly and wrong actually. Never provide struct page
>>>> pointers outside of the zone->lock. What I've had in mind was to simply
>>>> walk free lists of the suitable order and call the callback for each
>>>> one.
>>>> Something as simple as
>>>>
>>>> for (i = 0; i < MAX_NR_ZONES; i++) {
>>>> struct zone *zone = &pgdat->node_zones[i];
>>>>
>>>> if (!populated_zone(zone))
>>>> continue;
>>> Can we directly use for_each_populated_zone(zone) here?
> yes, my example couldn't because I was still assuming per-node API
>
>>>> spin_lock_irqsave(&zone->lock, flags);
>>>> for (order = min_order; order < MAX_ORDER; ++order) {
>>>
>>> This appears to be covered by for_each_migratetype_order(order, mt) below.
> yes but
> #define for_each_migratetype_order(order, type) \
> for (order = 0; order < MAX_ORDER; order++) \
> for (type = 0; type < MIGRATE_TYPES; type++)
>
> so you would have to skip orders < min_order

Yes, that's why we have a new macro

#define for_each_migratetype_order_decend(min_order, order, type) \
for (order = MAX_ORDER - 1; order < MAX_ORDER && order >= min_order; \
order--) \
for (type = 0; type < MIGRATE_TYPES; type++)

If you don't like the macro, we can also open-code it.
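The loop bounds in the proposed macro can be illustrated with a small userspace sketch (hypothetical code, just to show the termination condition): since order is unsigned, "order < MAX_ORDER" is what stops the loop after order-- wraps around below zero, so min_order == 0 is handled correctly while still visiting the largest order first.

```c
#define MAX_ORDER 11

/* Count how many orders the descending loop visits, and record the
 * first (largest) one; mirrors the bounds used by the proposed
 * for_each_migratetype_order_decend() macro. */
static unsigned int count_orders_desc(unsigned int min_order,
				      unsigned int *first)
{
	unsigned int order, n = 0;

	for (order = MAX_ORDER - 1; order < MAX_ORDER && order >= min_order;
	     order--) {
		if (n == 0)
			*first = order; /* largest order is visited first */
		n++;
	}
	/* when order-- wraps to UINT_MAX, "order < MAX_ORDER" fails */
	return n;
}
```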

I think it would be better to report the larger free page blocks first, since
the callback has an opportunity (though just a theoretical possibility, good
to take into consideration if possible) to skip reporting a given free page
block to the hypervisor when the ring gets full. Losing a small block is
better than losing a large one, in terms of the optimization work.


Best,
Wei



2017-08-10 07:53:54

by Michal Hocko

Subject: Re: [virtio-dev] Re: [PATCH v13 4/5] mm: support reporting free page blocks

On Thu 10-08-17 15:38:34, Wei Wang wrote:
> On 08/10/2017 03:05 PM, Michal Hocko wrote:
> >On Tue 08-08-17 14:34:25, Wei Wang wrote:
> >>On 08/08/2017 02:12 PM, Wei Wang wrote:
> >>>On 08/03/2017 05:11 PM, Michal Hocko wrote:
> >>>>On Thu 03-08-17 14:38:18, Wei Wang wrote:
> >>>>This is just too ugly and wrong actually. Never provide struct page
> >>>>pointers outside of the zone->lock. What I've had in mind was to simply
> >>>>walk free lists of the suitable order and call the callback for each
> >>>>one.
> >>>>Something as simple as
> >>>>
> >>>> for (i = 0; i < MAX_NR_ZONES; i++) {
> >>>> struct zone *zone = &pgdat->node_zones[i];
> >>>>
> >>>> if (!populated_zone(zone))
> >>>> continue;
> >>>Can we directly use for_each_populated_zone(zone) here?
> >yes, my example couldn't because I was still assuming per-node API
> >
> >>>>spin_lock_irqsave(&zone->lock, flags);
> >>>> for (order = min_order; order < MAX_ORDER; ++order) {
> >>>
> >>>This appears to be covered by for_each_migratetype_order(order, mt) below.
> >yes but
> >#define for_each_migratetype_order(order, type) \
> > for (order = 0; order < MAX_ORDER; order++) \
> > for (type = 0; type < MIGRATE_TYPES; type++)
> >
> >so you would have to skip orders < min_order
>
> Yes, that's why we have a new macro
>
> #define for_each_migratetype_order_decend(min_order, order, type) \
> for (order = MAX_ORDER - 1; order < MAX_ORDER && order >= min_order; \
> order--) \
> for (type = 0; type < MIGRATE_TYPES; type++)
>
> If you don't like the macro, we can also directly use it in the code.
>
> I think it would be better to report the larger free page blocks first, since
> the callback has an opportunity (though just a theoretical possibility, good
> to take into consideration if possible) to skip reporting a given free page
> block to the hypervisor when the ring gets full. Losing a small block is
> better than losing a large one, in terms of the optimization work.

I see. But I think this is so specialized that open-coding the macro
would be easier to read.

--
Michal Hocko
SUSE Labs

2017-08-16 05:57:47

by Adam Tao

Subject: Re: [virtio-dev] [PATCH v13 0/5] Virtio-balloon Enhancement

On Thu, Aug 03, 2017 at 02:38:14PM +0800, Wei Wang wrote:
> This patch series enhances the existing virtio-balloon with the following
> new features:
> 1) fast ballooning: transfer ballooned pages between the guest and host in
> chunks using sgs, instead of one by one; and
> 2) free_page_vq: a new virtqueue to report guest free pages to the host.
>
Hi Wei,
What is the rationale for adding the new vq for the migration feature
(as opposed to the original design based on the inflate and deflate vqs)?
I am wondering, if we add new features in the future, will we still need to
add new types of vqs?
Do we need to add one command queue for common purposes (handling the
different types of requests other than the inflate/deflate ones)?
Thanks
Adam
> The second feature can be used to accelerate live migration of VMs. Here
> are some details:
>
> Live migration needs to transfer the VM's memory from the source machine
> to the destination round by round. For the 1st round, all the VM's memory
> is transferred. From the 2nd round, only the pieces of memory that were
> written by the guest (after the 1st round) are transferred. One method
> that is popularly used by the hypervisor to track which part of memory is
> written is to write-protect all the guest memory.
>
> The second feature enables the optimization of the 1st round memory
> transfer - the hypervisor can skip the transfer of guest free pages in the
> 1st round. It is not concerned that the memory pages are used after they
> are given to the hypervisor as a hint of the free pages, because they will
> be tracked by the hypervisor and transferred in the next round if they are
> used and written.
>
> Change Log:
> v12->v13:
> 1) mm: use a callback function to handle the the free page blocks from the
> report function. This avoids exposing the zone internal to a kernel module.
> 2) virtio-balloon: send balloon pages or a free page block using a single sg
> each time. This has the benefits of simpler implementation with no new APIs.
> 3) virtio-balloon: the free_page_vq is used to report free pages only (no
> multiple usages interleaving)
> 4) virtio-balloon: Balloon pages and free page blocks are sent via input sgs,
> and the completion signal to the host is sent via an output sg.
>
> v11->v12:
> 1) xbitmap: use the xbitmap from Matthew Wilcox to record ballooned pages.
> 2) virtio-ring: enable the driver to build up a desc chain using vring desc.
> 3) virtio-ring: Add locking to the existing START_USE() and END_USE() macro
> to lock/unlock the vq when a vq operation starts/ends.
> 4) virtio-ring: add virtqueue_kick_sync() and virtqueue_kick_async()
> 5) virtio-balloon: describe chunks of ballooned pages and free pages blocks
> directly using one or more chains of desc from the vq.
>
> v10->v11:
> 1) virtio_balloon: use vring_desc to describe a chunk;
> 2) virtio_ring: support to add an indirect desc table to virtqueue;
> 3) virtio_balloon: use cmdq to report guest memory statistics.
>
> v9->v10:
> 1) mm: put report_unused_page_block() under CONFIG_VIRTIO_BALLOON;
> 2) virtio-balloon: add virtballoon_validate();
> 3) virtio-balloon: msg format change;
> 4) virtio-balloon: move miscq handling to a task on system_freezable_wq;
> 5) virtio-balloon: code cleanup.
>
> v8->v9:
> 1) Split the two new features, VIRTIO_BALLOON_F_BALLOON_CHUNKS and
> VIRTIO_BALLOON_F_MISC_VQ, which were mixed together in the previous
> implementation;
> 2) Simpler function to get the free page block.
>
> v7->v8:
> 1) Use only one chunk format, instead of two.
> 2) re-write the virtio-balloon implementation patch.
> 3) commit changes
> 4) patch re-org
>
> Matthew Wilcox (1):
> Introduce xbitmap
>
> Wei Wang (4):
> xbitmap: add xb_find_next_bit() and xb_zero()
> virtio-balloon: VIRTIO_BALLOON_F_SG
> mm: support reporting free page blocks
> virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_VQ
>
> drivers/virtio/virtio_balloon.c | 302 +++++++++++++++++++++++++++++++-----
> include/linux/mm.h | 7 +
> include/linux/mmzone.h | 5 +
> include/linux/radix-tree.h | 2 +
> include/linux/xbitmap.h | 53 +++++++
> include/uapi/linux/virtio_balloon.h | 2 +
> lib/radix-tree.c | 167 +++++++++++++++++++-
> mm/page_alloc.c | 109 +++++++++++++
> 8 files changed, 609 insertions(+), 38 deletions(-)
> create mode 100644 include/linux/xbitmap.h
>
> --
> 2.7.4
>
>

2017-08-16 09:30:34

by Wang, Wei W

Subject: Re: [virtio-dev] [PATCH v13 0/5] Virtio-balloon Enhancement

On 08/16/2017 01:57 PM, Adam Tao wrote:
> On Thu, Aug 03, 2017 at 02:38:14PM +0800, Wei Wang wrote:
>> This patch series enhances the existing virtio-balloon with the following
>> new features:
>> 1) fast ballooning: transfer ballooned pages between the guest and host in
>> chunks using sgs, instead of one by one; and
>> 2) free_page_vq: a new virtqueue to report guest free pages to the host.
>>
> Hi Wei,
> What is the rationale for adding the new vq for the migration feature
> (as opposed to the original design based on the inflate and deflate vqs)?
> I am wondering, if we add new features in the future, will we still need to
> add new types of vqs?
> Do we need to add one command queue for common purposes (handling the
> different types of requests other than the inflate/deflate ones)?
> Thanks
> Adam

Hi Adam,

The free_page_vq is added to report free pages to the hypervisor.
Neither the inflate nor the deflate vq was designed for this purpose.

Based on the current implementation, a vq dedicated to one usage (i.e.
reporting free pages) is better. Mixing usages, e.g. a command vq handling
multiple types of commands at the same time, would have some issues (e.g. one
command being delayed by another due to resource control), and it would also
result in more complex interfaces between the driver and the device.

For future usages, which are still unknown at present, I think we can discuss
them case by case when they come up.

Best,
Wei