2018-08-13 14:04:32

by Jürgen Groß

Subject: [PATCH v3 0/5] xen/blk: persistent grant rework

Persistent grants are used in Xen's blkfront/blkback drivers to
avoid mapping/unmapping of I/O buffers in the backend for each I/O.

While this speeds up processing quite a bit there are problems related
to persistent grants in some configurations: domains with multiple
block devices making use of persistent grants might suffer from a lack
of grants if each of the block devices experienced a high I/O load at
some time. This is because the number of persistent grants per device
is only limited by a rather high maximum value, and the grants are
never released even after long periods without any I/O.

This series modifies xen-blkback to unmap any domU page mapped via a
persistent grant after a timeout (default: 60 seconds). The timeout
is set to its default value again when a persistent grant has been
used for an I/O.
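
For illustration only, a minimal userspace sketch of that aging rule
follows. The real backend code in patch 1 works on jiffies via a
persistent_gnt_timeout() helper; the names and the use of wall-clock
seconds below are made up for the example.

#include <stdbool.h>
#include <stdio.h>
#include <time.h>

static unsigned int pgrant_timeout = 60;	/* seconds, 0 == never expire */

struct pgrant {
	time_t last_used;	/* refreshed whenever the grant is used for an I/O */
};

/* A grant expires once it has been idle longer than the timeout. */
static bool pgrant_expired(const struct pgrant *gnt, time_t now)
{
	return pgrant_timeout && now - gnt->last_used >= (time_t)pgrant_timeout;
}

int main(void)
{
	/* Pretend the grant was last used 90 seconds ago. */
	struct pgrant gnt = { .last_used = time(NULL) - 90 };

	printf("expired: %s\n", pgrant_expired(&gnt, time(NULL)) ? "yes" : "no");
	return 0;
}

On a running system the cutoff corresponds to the new xen_blkback module
parameter persistent_grant_unused_seconds added in patch 1 (default 60,
0 disables the timeout).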

xen-blkfront is modified to scan every 10 seconds for persistent grants
not in use by blkback any more and to remove such grants.
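
Purely as a sketch of the frontend side (again not the kernel code):
the periodic scan walks each ring's list of granted pages and drops the
entries blkback no longer has mapped. In the actual patch that test is
gnttab_query_foreign_access() taken under the ring lock; here a plain
flag stands in for it and all names are invented.

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct grant {
	int gref;
	bool mapped_by_backend;	/* stand-in for gnttab_query_foreign_access() */
	struct grant *next;
};

/* Unlink and free every grant the backend no longer has mapped. */
static unsigned int purge_stale(struct grant **head)
{
	unsigned int purged = 0;
	struct grant **pp = head;

	while (*pp) {
		struct grant *gnt = *pp;

		if (gnt->mapped_by_backend) {
			pp = &gnt->next;	/* still in use by blkback, keep it */
			continue;
		}
		*pp = gnt->next;		/* stale: unlink and release it */
		free(gnt);
		purged++;
	}
	return purged;
}

int main(void)
{
	struct grant *g2 = malloc(sizeof(*g2));
	struct grant *g1 = malloc(sizeof(*g1));

	*g2 = (struct grant){ .gref = 2, .mapped_by_backend = false, .next = NULL };
	*g1 = (struct grant){ .gref = 1, .mapped_by_backend = true, .next = g2 };

	printf("purged %u stale grant(s)\n", purge_stale(&g1));	/* prints 1 */

	free(g1);
	return 0;
}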

The last 3 patches are small cleanups of blkfront and blkback drivers.

V3:
- patch 1: make timeout parameter static

V2:
- patch 1: added new module parameter doc
- patch 1: removed PERSISTENT_GNT_WAS_ACTIVE flag
- patch 2: removed global worker active flag
- added new patch 4

Juergen Gross (5):
xen/blkback: don't keep persistent grants too long
xen/blkfront: cleanup stale persistent grants
xen/blkfront: reorder tests in xlblk_init()
xen/blkback: move persistent grants flags to bool
xen/blkback: remove unused pers_gnts_lock from struct xen_blkif_ring

Documentation/ABI/testing/sysfs-driver-xen-blkback | 10 ++
drivers/block/xen-blkback/blkback.c | 99 ++++++++++---------
drivers/block/xen-blkback/common.h | 14 +--
drivers/block/xen-blkfront.c | 110 ++++++++++++++++++---
4 files changed, 163 insertions(+), 70 deletions(-)

--
2.13.7



2018-08-13 14:02:36

by Jürgen Groß

Subject: [PATCH v3 1/5] xen/blkback: don't keep persistent grants too long

Persistent grants are allocated until a per-ring threshold is reached.
Those grants won't be freed until the ring is destroyed, meaning
resources are kept busy even though they might no longer be used.

Instead of freeing persistent grants only when the threshold is
reached, add a timestamp and remove all persistent grants that have
not been in use for a minute.

Signed-off-by: Juergen Gross <[email protected]>
Reviewed-by: Roger Pau Monné <[email protected]>
---
V3: make timeout parameter static
---
Documentation/ABI/testing/sysfs-driver-xen-blkback | 10 +++
drivers/block/xen-blkback/blkback.c | 88 ++++++++++++----------
drivers/block/xen-blkback/common.h | 8 +-
3 files changed, 60 insertions(+), 46 deletions(-)

diff --git a/Documentation/ABI/testing/sysfs-driver-xen-blkback b/Documentation/ABI/testing/sysfs-driver-xen-blkback
index 8bb43b66eb55..4e7babb3ba1f 100644
--- a/Documentation/ABI/testing/sysfs-driver-xen-blkback
+++ b/Documentation/ABI/testing/sysfs-driver-xen-blkback
@@ -15,3 +15,13 @@ Description:
blkback. If the frontend tries to use more than
max_persistent_grants, the LRU kicks in and starts
removing 5% of max_persistent_grants every 100ms.
+
+What: /sys/module/xen_blkback/parameters/persistent_grant_unused_seconds
+Date: August 2018
+KernelVersion: 4.19
+Contact: Roger Pau Monné <[email protected]>
+Description:
+ How long a persistent grant is allowed to remain
+ allocated without being in use. The time is in
+ seconds, 0 means indefinitely long.
+ The default is 60 seconds.
diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index b55b245e8052..9eae7b243f68 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -84,6 +84,18 @@ MODULE_PARM_DESC(max_persistent_grants,
"Maximum number of grants to map persistently");

/*
+ * How long a persistent grant is allowed to remain allocated without being in
+ * use. The time is in seconds, 0 means indefinitely long.
+ */
+
+static unsigned int xen_blkif_pgrant_timeout = 60;
+module_param_named(persistent_grant_unused_seconds, xen_blkif_pgrant_timeout,
+ uint, 0644);
+MODULE_PARM_DESC(persistent_grant_unused_seconds,
+ "Time in seconds an unused persistent grant is allowed to "
+ "remain allocated. Default is 60, 0 means unlimited.");
+
+/*
* Maximum number of rings/queues blkback supports, allow as many queues as there
* are CPUs if user has not specified a value.
*/
@@ -123,6 +135,13 @@ module_param(log_stats, int, 0644);
/* Number of free pages to remove on each call to gnttab_free_pages */
#define NUM_BATCH_FREE_PAGES 10

+static inline bool persistent_gnt_timeout(struct persistent_gnt *persistent_gnt)
+{
+ return xen_blkif_pgrant_timeout &&
+ (jiffies - persistent_gnt->last_used >=
+ HZ * xen_blkif_pgrant_timeout);
+}
+
static inline int get_free_page(struct xen_blkif_ring *ring, struct page **page)
{
unsigned long flags;
@@ -278,7 +297,7 @@ static void put_persistent_gnt(struct xen_blkif_ring *ring,
{
if(!test_bit(PERSISTENT_GNT_ACTIVE, persistent_gnt->flags))
pr_alert_ratelimited("freeing a grant already unused\n");
- set_bit(PERSISTENT_GNT_WAS_ACTIVE, persistent_gnt->flags);
+ persistent_gnt->last_used = jiffies;
clear_bit(PERSISTENT_GNT_ACTIVE, persistent_gnt->flags);
atomic_dec(&ring->persistent_gnt_in_use);
}
@@ -371,26 +390,26 @@ static void purge_persistent_gnt(struct xen_blkif_ring *ring)
struct persistent_gnt *persistent_gnt;
struct rb_node *n;
unsigned int num_clean, total;
- bool scan_used = false, clean_used = false;
+ bool scan_used = false;
struct rb_root *root;

- if (ring->persistent_gnt_c < xen_blkif_max_pgrants ||
- (ring->persistent_gnt_c == xen_blkif_max_pgrants &&
- !ring->blkif->vbd.overflow_max_grants)) {
- goto out;
- }
-
if (work_busy(&ring->persistent_purge_work)) {
pr_alert_ratelimited("Scheduled work from previous purge is still busy, cannot purge list\n");
goto out;
}

- num_clean = (xen_blkif_max_pgrants / 100) * LRU_PERCENT_CLEAN;
- num_clean = ring->persistent_gnt_c - xen_blkif_max_pgrants + num_clean;
- num_clean = min(ring->persistent_gnt_c, num_clean);
- if ((num_clean == 0) ||
- (num_clean > (ring->persistent_gnt_c - atomic_read(&ring->persistent_gnt_in_use))))
- goto out;
+ if (ring->persistent_gnt_c < xen_blkif_max_pgrants ||
+ (ring->persistent_gnt_c == xen_blkif_max_pgrants &&
+ !ring->blkif->vbd.overflow_max_grants)) {
+ num_clean = 0;
+ } else {
+ num_clean = (xen_blkif_max_pgrants / 100) * LRU_PERCENT_CLEAN;
+ num_clean = ring->persistent_gnt_c - xen_blkif_max_pgrants +
+ num_clean;
+ num_clean = min(ring->persistent_gnt_c, num_clean);
+ pr_debug("Going to purge at least %u persistent grants\n",
+ num_clean);
+ }

/*
* At this point, we can assure that there will be no calls
@@ -401,9 +420,7 @@ static void purge_persistent_gnt(struct xen_blkif_ring *ring)
* number of grants.
*/

- total = num_clean;
-
- pr_debug("Going to purge %u persistent grants\n", num_clean);
+ total = 0;

BUG_ON(!list_empty(&ring->persistent_purge_list));
root = &ring->persistent_gnts;
@@ -412,46 +429,37 @@ static void purge_persistent_gnt(struct xen_blkif_ring *ring)
BUG_ON(persistent_gnt->handle ==
BLKBACK_INVALID_HANDLE);

- if (clean_used) {
- clear_bit(PERSISTENT_GNT_WAS_ACTIVE, persistent_gnt->flags);
- continue;
- }
-
if (test_bit(PERSISTENT_GNT_ACTIVE, persistent_gnt->flags))
continue;
- if (!scan_used &&
- (test_bit(PERSISTENT_GNT_WAS_ACTIVE, persistent_gnt->flags)))
+ if (!scan_used && !persistent_gnt_timeout(persistent_gnt))
+ continue;
+ if (scan_used && total >= num_clean)
continue;

rb_erase(&persistent_gnt->node, root);
list_add(&persistent_gnt->remove_node,
&ring->persistent_purge_list);
- if (--num_clean == 0)
- goto finished;
+ total++;
}
/*
- * If we get here it means we also need to start cleaning
+ * Check whether we also need to start cleaning
* grants that were used since last purge in order to cope
* with the requested num
*/
- if (!scan_used && !clean_used) {
- pr_debug("Still missing %u purged frames\n", num_clean);
+ if (!scan_used && total < num_clean) {
+ pr_debug("Still missing %u purged frames\n", num_clean - total);
scan_used = true;
goto purge_list;
}
-finished:
- if (!clean_used) {
- pr_debug("Finished scanning for grants to clean, removing used flag\n");
- clean_used = true;
- goto purge_list;
- }

- ring->persistent_gnt_c -= (total - num_clean);
- ring->blkif->vbd.overflow_max_grants = 0;
+ if (total) {
+ ring->persistent_gnt_c -= total;
+ ring->blkif->vbd.overflow_max_grants = 0;

- /* We can defer this work */
- schedule_work(&ring->persistent_purge_work);
- pr_debug("Purged %u/%u\n", (total - num_clean), total);
+ /* We can defer this work */
+ schedule_work(&ring->persistent_purge_work);
+ pr_debug("Purged %u/%u\n", num_clean, total);
+ }

out:
return;
diff --git a/drivers/block/xen-blkback/common.h b/drivers/block/xen-blkback/common.h
index ecb35fe8ca8d..7bff72db3b7e 100644
--- a/drivers/block/xen-blkback/common.h
+++ b/drivers/block/xen-blkback/common.h
@@ -234,14 +234,9 @@ struct xen_vbd {
struct backend_info;

/* Number of available flags */
-#define PERSISTENT_GNT_FLAGS_SIZE 2
+#define PERSISTENT_GNT_FLAGS_SIZE 1
/* This persistent grant is currently in use */
#define PERSISTENT_GNT_ACTIVE 0
-/*
- * This persistent grant has been used, this flag is set when we remove the
- * PERSISTENT_GNT_ACTIVE, to know that this grant has been used recently.
- */
-#define PERSISTENT_GNT_WAS_ACTIVE 1

/* Number of requests that we can fit in a ring */
#define XEN_BLKIF_REQS_PER_PAGE 32
@@ -250,6 +245,7 @@ struct persistent_gnt {
struct page *page;
grant_ref_t gnt;
grant_handle_t handle;
+ unsigned long last_used;
DECLARE_BITMAP(flags, PERSISTENT_GNT_FLAGS_SIZE);
struct rb_node node;
struct list_head remove_node;
--
2.13.7


2018-08-13 14:02:50

by Jürgen Groß

Subject: [PATCH v3 5/5] xen/blkback: remove unused pers_gnts_lock from struct xen_blkif_ring

pers_gnts_lock isn't being used anywhere. Remove it.

Signed-off-by: Juergen Gross <[email protected]>
Reviewed-by: Roger Pau Monné <[email protected]>
---
drivers/block/xen-blkback/common.h | 1 -
1 file changed, 1 deletion(-)

diff --git a/drivers/block/xen-blkback/common.h b/drivers/block/xen-blkback/common.h
index 2339b8d39c5e..1d3002d773f7 100644
--- a/drivers/block/xen-blkback/common.h
+++ b/drivers/block/xen-blkback/common.h
@@ -269,7 +269,6 @@ struct xen_blkif_ring {
wait_queue_head_t pending_free_wq;

/* Tree to store persistent grants. */
- spinlock_t pers_gnts_lock;
struct rb_root persistent_gnts;
unsigned int persistent_gnt_c;
atomic_t persistent_gnt_in_use;
--
2.13.7


2018-08-13 14:02:53

by Jürgen Groß

Subject: [PATCH v3 4/5] xen/blkback: move persistent grants flags to bool

The struct persistent_gnt flags member is meant to be a bitfield of
different flags. Only the PERSISTENT_GNT_ACTIVE flag is left, so
convert the member to a bool named "active".

Signed-off-by: Juergen Gross <[email protected]>
Reviewed-by: Roger Pau Monné <[email protected]>
---
drivers/block/xen-blkback/blkback.c | 13 ++++++-------
drivers/block/xen-blkback/common.h | 7 +------
2 files changed, 7 insertions(+), 13 deletions(-)

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 9eae7b243f68..fd1e19f1a49f 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -255,8 +255,7 @@ static int add_persistent_gnt(struct xen_blkif_ring *ring,
}
}

- bitmap_zero(persistent_gnt->flags, PERSISTENT_GNT_FLAGS_SIZE);
- set_bit(PERSISTENT_GNT_ACTIVE, persistent_gnt->flags);
+ persistent_gnt->active = true;
/* Add new node and rebalance tree. */
rb_link_node(&(persistent_gnt->node), parent, new);
rb_insert_color(&(persistent_gnt->node), &ring->persistent_gnts);
@@ -280,11 +279,11 @@ static struct persistent_gnt *get_persistent_gnt(struct xen_blkif_ring *ring,
else if (gref > data->gnt)
node = node->rb_right;
else {
- if(test_bit(PERSISTENT_GNT_ACTIVE, data->flags)) {
+ if (data->active) {
pr_alert_ratelimited("requesting a grant already in use\n");
return NULL;
}
- set_bit(PERSISTENT_GNT_ACTIVE, data->flags);
+ data->active = true;
atomic_inc(&ring->persistent_gnt_in_use);
return data;
}
@@ -295,10 +294,10 @@ static struct persistent_gnt *get_persistent_gnt(struct xen_blkif_ring *ring,
static void put_persistent_gnt(struct xen_blkif_ring *ring,
struct persistent_gnt *persistent_gnt)
{
- if(!test_bit(PERSISTENT_GNT_ACTIVE, persistent_gnt->flags))
+ if (!persistent_gnt->active)
pr_alert_ratelimited("freeing a grant already unused\n");
persistent_gnt->last_used = jiffies;
- clear_bit(PERSISTENT_GNT_ACTIVE, persistent_gnt->flags);
+ persistent_gnt->active = false;
atomic_dec(&ring->persistent_gnt_in_use);
}

@@ -429,7 +428,7 @@ static void purge_persistent_gnt(struct xen_blkif_ring *ring)
BUG_ON(persistent_gnt->handle ==
BLKBACK_INVALID_HANDLE);

- if (test_bit(PERSISTENT_GNT_ACTIVE, persistent_gnt->flags))
+ if (persistent_gnt->active)
continue;
if (!scan_used && !persistent_gnt_timeout(persistent_gnt))
continue;
diff --git a/drivers/block/xen-blkback/common.h b/drivers/block/xen-blkback/common.h
index 7bff72db3b7e..2339b8d39c5e 100644
--- a/drivers/block/xen-blkback/common.h
+++ b/drivers/block/xen-blkback/common.h
@@ -233,11 +233,6 @@ struct xen_vbd {

struct backend_info;

-/* Number of available flags */
-#define PERSISTENT_GNT_FLAGS_SIZE 1
-/* This persistent grant is currently in use */
-#define PERSISTENT_GNT_ACTIVE 0
-
/* Number of requests that we can fit in a ring */
#define XEN_BLKIF_REQS_PER_PAGE 32

@@ -246,7 +241,7 @@ struct persistent_gnt {
grant_ref_t gnt;
grant_handle_t handle;
unsigned long last_used;
- DECLARE_BITMAP(flags, PERSISTENT_GNT_FLAGS_SIZE);
+ bool active;
struct rb_node node;
struct list_head remove_node;
};
--
2.13.7


2018-08-13 14:03:17

by Jürgen Groß

Subject: [PATCH v3 2/5] xen/blkfront: cleanup stale persistent grants

Add a periodic cleanup function to remove old persistent grants which
are no longer in use on the backend side. This avoids grant starvation
in case there are lots of persistent grants for a device which is no
longer doing any I/O.

Signed-off-by: Juergen Gross <[email protected]>
Reviewed-by: Roger Pau Monné <[email protected]>
---
drivers/block/xen-blkfront.c | 94 ++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 90 insertions(+), 4 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index b5cedccb5d7d..7c4b1b9d3971 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -46,6 +46,7 @@
#include <linux/scatterlist.h>
#include <linux/bitmap.h>
#include <linux/list.h>
+#include <linux/workqueue.h>

#include <xen/xen.h>
#include <xen/xenbus.h>
@@ -121,6 +122,8 @@ static inline struct blkif_req *blkif_req(struct request *rq)

static DEFINE_MUTEX(blkfront_mutex);
static const struct block_device_operations xlvbd_block_fops;
+static struct delayed_work blkfront_work;
+static LIST_HEAD(info_list);

/*
* Maximum number of segments in indirect requests, the actual value used by
@@ -216,6 +219,7 @@ struct blkfront_info
/* Save uncomplete reqs and bios for migration. */
struct list_head requests;
struct bio_list bio_list;
+ struct list_head info_list;
};

static unsigned int nr_minors;
@@ -1764,6 +1768,12 @@ static int write_per_ring_nodes(struct xenbus_transaction xbt,
return err;
}

+static void free_info(struct blkfront_info *info)
+{
+ list_del(&info->info_list);
+ kfree(info);
+}
+
/* Common code used when first setting up, and when resuming. */
static int talk_to_blkback(struct xenbus_device *dev,
struct blkfront_info *info)
@@ -1885,7 +1895,10 @@ static int talk_to_blkback(struct xenbus_device *dev,
destroy_blkring:
blkif_free(info, 0);

- kfree(info);
+ mutex_lock(&blkfront_mutex);
+ free_info(info);
+ mutex_unlock(&blkfront_mutex);
+
dev_set_drvdata(&dev->dev, NULL);

return err;
@@ -1996,6 +2009,10 @@ static int blkfront_probe(struct xenbus_device *dev,
info->handle = simple_strtoul(strrchr(dev->nodename, '/')+1, NULL, 0);
dev_set_drvdata(&dev->dev, info);

+ mutex_lock(&blkfront_mutex);
+ list_add(&info->info_list, &info_list);
+ mutex_unlock(&blkfront_mutex);
+
return 0;
}

@@ -2306,6 +2323,12 @@ static void blkfront_gather_backend_features(struct blkfront_info *info)
if (indirect_segments <= BLKIF_MAX_SEGMENTS_PER_REQUEST)
indirect_segments = 0;
info->max_indirect_segments = indirect_segments;
+
+ if (info->feature_persistent) {
+ mutex_lock(&blkfront_mutex);
+ schedule_delayed_work(&blkfront_work, HZ * 10);
+ mutex_unlock(&blkfront_mutex);
+ }
}

/*
@@ -2487,7 +2510,9 @@ static int blkfront_remove(struct xenbus_device *xbdev)
mutex_unlock(&info->mutex);

if (!bdev) {
- kfree(info);
+ mutex_lock(&blkfront_mutex);
+ free_info(info);
+ mutex_unlock(&blkfront_mutex);
return 0;
}

@@ -2507,7 +2532,9 @@ static int blkfront_remove(struct xenbus_device *xbdev)
if (info && !bdev->bd_openers) {
xlvbd_release_gendisk(info);
disk->private_data = NULL;
- kfree(info);
+ mutex_lock(&blkfront_mutex);
+ free_info(info);
+ mutex_unlock(&blkfront_mutex);
}

mutex_unlock(&bdev->bd_mutex);
@@ -2590,7 +2617,7 @@ static void blkif_release(struct gendisk *disk, fmode_t mode)
dev_info(disk_to_dev(bdev->bd_disk), "releasing disk\n");
xlvbd_release_gendisk(info);
disk->private_data = NULL;
- kfree(info);
+ free_info(info);
}

out:
@@ -2623,6 +2650,61 @@ static struct xenbus_driver blkfront_driver = {
.is_ready = blkfront_is_ready,
};

+static void purge_persistent_grants(struct blkfront_info *info)
+{
+ unsigned int i;
+ unsigned long flags;
+
+ for (i = 0; i < info->nr_rings; i++) {
+ struct blkfront_ring_info *rinfo = &info->rinfo[i];
+ struct grant *gnt_list_entry, *tmp;
+
+ spin_lock_irqsave(&rinfo->ring_lock, flags);
+
+ if (rinfo->persistent_gnts_c == 0) {
+ spin_unlock_irqrestore(&rinfo->ring_lock, flags);
+ continue;
+ }
+
+ list_for_each_entry_safe(gnt_list_entry, tmp, &rinfo->grants,
+ node) {
+ if (gnt_list_entry->gref == GRANT_INVALID_REF ||
+ gnttab_query_foreign_access(gnt_list_entry->gref))
+ continue;
+
+ list_del(&gnt_list_entry->node);
+ gnttab_end_foreign_access(gnt_list_entry->gref, 0, 0UL);
+ rinfo->persistent_gnts_c--;
+ __free_page(gnt_list_entry->page);
+ kfree(gnt_list_entry);
+ }
+
+ spin_unlock_irqrestore(&rinfo->ring_lock, flags);
+ }
+}
+
+static void blkfront_delay_work(struct work_struct *work)
+{
+ struct blkfront_info *info;
+ bool need_schedule_work = false;
+
+ mutex_lock(&blkfront_mutex);
+
+ list_for_each_entry(info, &info_list, info_list) {
+ if (info->feature_persistent) {
+ need_schedule_work = true;
+ mutex_lock(&info->mutex);
+ purge_persistent_grants(info);
+ mutex_unlock(&info->mutex);
+ }
+ }
+
+ if (need_schedule_work)
+ schedule_delayed_work(&blkfront_work, HZ * 10);
+
+ mutex_unlock(&blkfront_mutex);
+}
+
static int __init xlblk_init(void)
{
int ret;
@@ -2655,6 +2737,8 @@ static int __init xlblk_init(void)
return -ENODEV;
}

+ INIT_DELAYED_WORK(&blkfront_work, blkfront_delay_work);
+
ret = xenbus_register_frontend(&blkfront_driver);
if (ret) {
unregister_blkdev(XENVBD_MAJOR, DEV_NAME);
@@ -2668,6 +2752,8 @@ module_init(xlblk_init);

static void __exit xlblk_exit(void)
{
+ cancel_delayed_work_sync(&blkfront_work);
+
xenbus_unregister_driver(&blkfront_driver);
unregister_blkdev(XENVBD_MAJOR, DEV_NAME);
kfree(minors);
--
2.13.7


2018-08-13 14:04:19

by Jürgen Groß

Subject: [PATCH v3 3/5] xen/blkfront: reorder tests in xlblk_init()

In case we don't want pv block devices we should not test parameters
for sanity and possibly print out error messages. So test the
precluding conditions before checking the parameters.

Signed-off-by: Juergen Gross <[email protected]>
Reviewed-by: Roger Pau Monné <[email protected]>
---
drivers/block/xen-blkfront.c | 18 +++++++++---------
1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 7c4b1b9d3971..36e4ca41e765 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -2713,6 +2713,15 @@ static int __init xlblk_init(void)
if (!xen_domain())
return -ENODEV;

+ if (!xen_has_pv_disk_devices())
+ return -ENODEV;
+
+ if (register_blkdev(XENVBD_MAJOR, DEV_NAME)) {
+ pr_warn("xen_blk: can't get major %d with name %s\n",
+ XENVBD_MAJOR, DEV_NAME);
+ return -ENODEV;
+ }
+
if (xen_blkif_max_segments < BLKIF_MAX_SEGMENTS_PER_REQUEST)
xen_blkif_max_segments = BLKIF_MAX_SEGMENTS_PER_REQUEST;

@@ -2728,15 +2737,6 @@ static int __init xlblk_init(void)
xen_blkif_max_queues = nr_cpus;
}

- if (!xen_has_pv_disk_devices())
- return -ENODEV;
-
- if (register_blkdev(XENVBD_MAJOR, DEV_NAME)) {
- printk(KERN_WARNING "xen_blk: can't get major %d with name %s\n",
- XENVBD_MAJOR, DEV_NAME);
- return -ENODEV;
- }
-
INIT_DELAYED_WORK(&blkfront_work, blkfront_delay_work);

ret = xenbus_register_frontend(&blkfront_driver);
--
2.13.7


2018-08-17 16:00:38

by Roger Pau Monné

Subject: Re: [PATCH v3 0/5] xen/blk: persistent grant rework

On Mon, Aug 13, 2018 at 04:01:09PM +0200, Juergen Gross wrote:
> Persistent grants are used in Xen's blkfront/blkback drivers to
> avoid mapping/unmapping of I/O buffers in the backend for each I/O.
>
> While this speeds up processing quite a bit there are problems related
> to persistent grants in some configurations: domains with multiple
> block devices making use of persistent grants might suffer from a lack
> of grants if each of the block devices experienced a high I/O load at
> some time. This is because the number of persistent grants per device
> is only limited by a rather high maximum value, and the grants are
> never released even after long periods without any I/O.
>
> This series modifies xen-blkback to unmap any domU page mapped via a
> persistent grant after a timeout (default: 60 seconds). The timeout
> is set to its default value again when a persistent grant has been
> used for an I/O.
>
> xen-blkfront is modified to scan every 10 seconds for persistent grants
> not in use by blkback any more and to remove such grants.
>
> The last 3 patches are small cleanups of blkfront and blkback drivers.
>
> V3:
> - patch 1: make timeout parameter static

Konrad if you are OK with this series, could you please send a pull
request to Jens?

Thanks, Roger.

2018-08-24 13:54:26

by Jürgen Groß

Subject: Re: [PATCH v3 0/5] xen/blk: persistent grant rework

On 17/08/18 17:59, Roger Pau Monné wrote:
> On Mon, Aug 13, 2018 at 04:01:09PM +0200, Juergen Gross wrote:
>> Persistent grants are used in Xen's blkfront/blkback drivers to
>> avoid mapping/unmapping of I/O buffers in the backend for each I/O.
>>
>> While this speeds up processing quite a bit there are problems related
>> to persistent grants in some configurations: domains with multiple
>> block devices making use of persistent grants might suffer from a lack
>> of grants if each of the block devices experienced a high I/O load at
>> some time. This is because the number of persistent grants per device
>> is only limited by a rather high maximum value, and the grants are
>> never released even after long periods without any I/O.
>>
>> This series modifies xen-blkback to unmap any domU page mapped via a
>> persistent grant after a timeout (default: 60 seconds). The timeout
>> is set to its default value again when a persistent grant has been
>> used for an I/O.
>>
>> xen-blkfront is modified to scan every 10 seconds for persistent grants
>> not in use by blkback any more and to remove such grants.
>>
>> The last 3 patches are small cleanups of blkfront and blkback drivers.
>>
>> V3:
>> - patch 1: make timeout parameter static
>
> Konrad if you are OK with this series, could you please send a pull
> request to Jens?

Ping?


Juergen

2018-08-27 16:12:13

by Konrad Rzeszutek Wilk

Subject: Re: [PATCH v3 0/5] xen/blk: persistent grant rework

On Fri, Aug 24, 2018 at 03:52:23PM +0200, Juergen Gross wrote:
> On 17/08/18 17:59, Roger Pau Monné wrote:
> > On Mon, Aug 13, 2018 at 04:01:09PM +0200, Juergen Gross wrote:
> >> Persistent grants are used in Xen's blkfront/blkback drivers to
> >> avoid mapping/unmapping of I/O buffers in the backend for each I/O.
> >>
> >> While this speeds up processing quite a bit there are problems related
> >> to persistent grants in some configurations: domains with multiple
> >> block devices making use of persistent grants might suffer from a lack
> >> of grants if each of the block devices experienced a high I/O load at
> >> some time. This is because the number of persistent grants per device
> >> is only limited by a rather high maximum value, and the grants are
> >> never released even after long periods without any I/O.
> >>
> >> This series modifies xen-blkback to unmap any domU page mapped via a
> >> persistent grant after a timeout (default: 60 seconds). The timeout
> >> is set to its default value again when a persistent grant has been
> >> used for an I/O.
> >>
> >> xen-blkfront is modified to scan every 10 seconds for persistent grants
> >> not in use by blkback any more and to remove such grants.
> >>
> >> The last 3 patches are small cleanups of blkfront and blkback drivers.
> >>
> >> V3:
> >> - patch 1: make timeout parameter static
> >
> > Konrad if you are OK with this series, could you please send a pull
> > request to Jens?
>
> Ping?

Yes, let me do that now.
>
>
> Juergen