2019-03-17 12:23:54

by Nikos Tsironis

Subject: [PATCH v3 0/6] dm snapshot: Improve performance using a more fine-grained locking scheme

dm-snapshot uses a single mutex to serialize every access to the
snapshot state, including accesses to the exception hash tables. This
mutex is a bottleneck that prevents dm-snapshot from scaling as the
number of threads doing IO increases.

The major contention points are __origin_write()/snapshot_map() and
pending_complete(), i.e., the submission and completion of pending
exceptions.

This patchset substitutes the single mutex with:

* A read-write semaphore, which protects the mostly read fields of the
snapshot structure.

* Per-bucket bit spinlocks, which protect accesses to the exception
hash tables.
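
As a rough sketch, the resulting synchronization layout looks as follows
(illustrative only, with most fields elided; see the individual patches
for the actual code):

struct dm_snapshot {
        struct rw_semaphore lock;       /* mostly-read snapshot state */
        ...
};

struct dm_exception_table {
        ...
        struct hlist_bl_head *table;    /* one bit spinlock per bucket */
};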

fio benchmarks using the null_blk device show significant performance
improvements as the number of worker processes increases. Write latency
is almost halved and write IOPS are nearly doubled.

The relevant patch provides detailed benchmark results.

A summary of the patchset follows:

1. The first patch removes an unnecessary use of WRITE_ONCE() in
hlist_add_behind().

2. The second patch adds two helper functions to linux/list_bl.h,
which is used to implement the per-bucket bit spinlocks in
dm-snapshot.

3. The third patch removes the need to sleep holding the snapshot lock
in pending_complete(), thus allowing us to replace the mutex with
the per-bucket bit spinlocks.

4. Patches 4, 5 and 6 change the locking scheme, as described
previously.

Changes in v3:
- Don't use WRITE_ONCE() in hlist_bl_add_behind(), as it's not needed.
- Fix hlist_add_behind() to also not use WRITE_ONCE().
- Use uintptr_t instead of unsigned long in hlist_bl_add_before().

v2: https://www.redhat.com/archives/dm-devel/2019-March/msg00007.html

Changes in v2:
- Split third patch of v1 into three patches: 3/5, 4/5, 5/5.

v1: https://www.redhat.com/archives/dm-devel/2018-December/msg00161.html

Nikos Tsironis (6):
list: Don't use WRITE_ONCE() in hlist_add_behind()
list_bl: Add hlist_bl_add_before/behind helpers
dm snapshot: Don't sleep holding the snapshot lock
dm snapshot: Replace mutex with rw semaphore
dm snapshot: Make exception tables scalable
dm snapshot: Use fine-grained locking scheme

drivers/md/dm-exception-store.h | 3 +-
drivers/md/dm-snap.c | 359 +++++++++++++++++++++++++++-------------
include/linux/list.h | 2 +-
include/linux/list_bl.h | 26 +++
4 files changed, 269 insertions(+), 121 deletions(-)

--
2.11.0



2019-03-17 12:23:56

by Nikos Tsironis

Subject: [PATCH v3 2/6] list_bl: Add hlist_bl_add_before/behind helpers

Add hlist_bl_add_before/behind helpers to add an element before/after an
existing element in a bl_list.
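
For illustration, here is a minimal, hypothetical usage sketch that keeps
a bit-locked hash bucket sorted by key. Everything except the list_bl API
is made up for the example:

struct my_node {
        unsigned long key;
        struct hlist_bl_node list;
};

/* Insert 'new' into 'head', keeping the bucket sorted by 'key'. */
static void my_insert_sorted(struct hlist_bl_head *head, struct my_node *new)
{
        struct hlist_bl_node *pos;
        struct my_node *cur, *last = NULL;

        hlist_bl_lock(head);
        hlist_bl_for_each_entry(cur, pos, head, list) {
                if (new->key < cur->key) {
                        /* Insert in front of the first larger key. */
                        hlist_bl_add_before(&new->list, &cur->list);
                        goto out;
                }
                last = cur;
        }
        if (last)
                /* Every existing key is smaller: append at the tail. */
                hlist_bl_add_behind(&new->list, &last->list);
        else
                /* Empty bucket. */
                hlist_bl_add_head(&new->list, head);
out:
        hlist_bl_unlock(head);
}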

Signed-off-by: Nikos Tsironis <[email protected]>
Signed-off-by: Ilias Tsitsimpis <[email protected]>
---
include/linux/list_bl.h | 26 ++++++++++++++++++++++++++
1 file changed, 26 insertions(+)

diff --git a/include/linux/list_bl.h b/include/linux/list_bl.h
index 3fc2cc57ba1b..ae1b541446c9 100644
--- a/include/linux/list_bl.h
+++ b/include/linux/list_bl.h
@@ -86,6 +86,32 @@ static inline void hlist_bl_add_head(struct hlist_bl_node *n,
hlist_bl_set_first(h, n);
}

+static inline void hlist_bl_add_before(struct hlist_bl_node *n,
+ struct hlist_bl_node *next)
+{
+ struct hlist_bl_node **pprev = next->pprev;
+
+ n->pprev = pprev;
+ n->next = next;
+ next->pprev = &n->next;
+
+ /* pprev may be `first`, so be careful not to lose the lock bit */
+ WRITE_ONCE(*pprev,
+ (struct hlist_bl_node *)
+ ((uintptr_t)n | ((uintptr_t)*pprev & LIST_BL_LOCKMASK)));
+}
+
+static inline void hlist_bl_add_behind(struct hlist_bl_node *n,
+ struct hlist_bl_node *prev)
+{
+ n->next = prev->next;
+ n->pprev = &prev->next;
+ prev->next = n;
+
+ if (n->next)
+ n->next->pprev = &n->next;
+}
+
static inline void __hlist_bl_del(struct hlist_bl_node *n)
{
struct hlist_bl_node *next = n->next;
--
2.11.0


2019-03-17 12:24:06

by Nikos Tsironis

Subject: [PATCH v3 4/6] dm snapshot: Replace mutex with rw semaphore

dm-snapshot uses a single mutex to serialize every access to the
snapshot state. This includes all accesses to the complete and pending
exception tables, which occur at every origin write, every snapshot
read/write and every exception completion.

The lock statistics indicate that this mutex is a bottleneck (average
wait time ~480 usecs for 8 processes doing random 4K writes to the
origin device), preventing dm-snapshot from scaling as the number of
threads doing IO increases.

The major contention points are __origin_write()/snapshot_map() and
pending_complete(), i.e., the submission and completion of pending
exceptions.

Replace this mutex with a rw semaphore.

We essentially revert commit ae1093be5a0ef9 ("dm snapshot: use mutex
instead of rw_semaphore"). Together with the next two patches, this
substitutes the single mutex with a fine-grained locking scheme: a
read-write semaphore protects the mostly-read fields of the snapshot
structure (e.g., valid and active), and per-bucket bit spinlocks protect
accesses to the complete and pending exception tables.
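
As a minimal sketch of the intended pattern (not the actual dm-snapshot
code): readers of mostly-read state take the semaphore shared, while
state changes take it exclusively.

#include <linux/rwsem.h>
#include <linux/types.h>

struct example_snap {
        struct rw_semaphore lock;       /* init_rwsem() at creation time */
        int valid;
        int active;
};

static bool example_is_usable(struct example_snap *s)
{
        bool usable;

        down_read(&s->lock);            /* shared: readers run concurrently */
        usable = s->valid && s->active;
        up_read(&s->lock);

        return usable;
}

static void example_invalidate(struct example_snap *s)
{
        down_write(&s->lock);           /* exclusive: waits for all readers */
        s->valid = 0;
        up_write(&s->lock);
}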

Signed-off-by: Nikos Tsironis <[email protected]>
Signed-off-by: Ilias Tsitsimpis <[email protected]>
---
drivers/md/dm-snap.c | 88 +++++++++++++++++++++++++---------------------------
1 file changed, 43 insertions(+), 45 deletions(-)

diff --git a/drivers/md/dm-snap.c b/drivers/md/dm-snap.c
index 4b34bfa0900a..209da5dd0ba6 100644
--- a/drivers/md/dm-snap.c
+++ b/drivers/md/dm-snap.c
@@ -48,7 +48,7 @@ struct dm_exception_table {
};

struct dm_snapshot {
- struct mutex lock;
+ struct rw_semaphore lock;

struct dm_dev *origin;
struct dm_dev *cow;
@@ -457,9 +457,9 @@ static int __find_snapshots_sharing_cow(struct dm_snapshot *snap,
if (!bdev_equal(s->cow->bdev, snap->cow->bdev))
continue;

- mutex_lock(&s->lock);
+ down_read(&s->lock);
active = s->active;
- mutex_unlock(&s->lock);
+ up_read(&s->lock);

if (active) {
if (snap_src)
@@ -927,7 +927,7 @@ static int remove_single_exception_chunk(struct dm_snapshot *s)
int r;
chunk_t old_chunk = s->first_merging_chunk + s->num_merging_chunks - 1;

- mutex_lock(&s->lock);
+ down_write(&s->lock);

/*
* Process chunks (and associated exceptions) in reverse order
@@ -942,7 +942,7 @@ static int remove_single_exception_chunk(struct dm_snapshot *s)
b = __release_queued_bios_after_merge(s);

out:
- mutex_unlock(&s->lock);
+ up_write(&s->lock);
if (b)
flush_bios(b);

@@ -1001,9 +1001,9 @@ static void snapshot_merge_next_chunks(struct dm_snapshot *s)
if (linear_chunks < 0) {
DMERR("Read error in exception store: "
"shutting down merge");
- mutex_lock(&s->lock);
+ down_write(&s->lock);
s->merge_failed = 1;
- mutex_unlock(&s->lock);
+ up_write(&s->lock);
}
goto shut;
}
@@ -1044,10 +1044,10 @@ static void snapshot_merge_next_chunks(struct dm_snapshot *s)
previous_count = read_pending_exceptions_done_count();
}

- mutex_lock(&s->lock);
+ down_write(&s->lock);
s->first_merging_chunk = old_chunk;
s->num_merging_chunks = linear_chunks;
- mutex_unlock(&s->lock);
+ up_write(&s->lock);

/* Wait until writes to all 'linear_chunks' drain */
for (i = 0; i < linear_chunks; i++)
@@ -1089,10 +1089,10 @@ static void merge_callback(int read_err, unsigned long write_err, void *context)
return;

shut:
- mutex_lock(&s->lock);
+ down_write(&s->lock);
s->merge_failed = 1;
b = __release_queued_bios_after_merge(s);
- mutex_unlock(&s->lock);
+ up_write(&s->lock);
error_bios(b);

merge_shutdown(s);
@@ -1191,7 +1191,7 @@ static int snapshot_ctr(struct dm_target *ti, unsigned int argc, char **argv)
s->exception_start_sequence = 0;
s->exception_complete_sequence = 0;
s->out_of_order_tree = RB_ROOT;
- mutex_init(&s->lock);
+ init_rwsem(&s->lock);
INIT_LIST_HEAD(&s->list);
spin_lock_init(&s->pe_lock);
s->state_bits = 0;
@@ -1357,9 +1357,9 @@ static void snapshot_dtr(struct dm_target *ti)
/* Check whether exception handover must be cancelled */
(void) __find_snapshots_sharing_cow(s, &snap_src, &snap_dest, NULL);
if (snap_src && snap_dest && (s == snap_src)) {
- mutex_lock(&snap_dest->lock);
+ down_write(&snap_dest->lock);
snap_dest->valid = 0;
- mutex_unlock(&snap_dest->lock);
+ up_write(&snap_dest->lock);
DMERR("Cancelling snapshot handover.");
}
up_read(&_origins_lock);
@@ -1390,8 +1390,6 @@ static void snapshot_dtr(struct dm_target *ti)

dm_exception_store_destroy(s->store);

- mutex_destroy(&s->lock);
-
dm_put_device(ti, s->cow);

dm_put_device(ti, s->origin);
@@ -1479,7 +1477,7 @@ static void pending_complete(void *context, int success)

if (!success) {
/* Read/write error - snapshot is unusable */
- mutex_lock(&s->lock);
+ down_write(&s->lock);
__invalidate_snapshot(s, -EIO);
error = 1;
goto out;
@@ -1487,14 +1485,14 @@ static void pending_complete(void *context, int success)

e = alloc_completed_exception(GFP_NOIO);
if (!e) {
- mutex_lock(&s->lock);
+ down_write(&s->lock);
__invalidate_snapshot(s, -ENOMEM);
error = 1;
goto out;
}
*e = pe->e;

- mutex_lock(&s->lock);
+ down_write(&s->lock);
if (!s->valid) {
free_completed_exception(e);
error = 1;
@@ -1512,9 +1510,9 @@ static void pending_complete(void *context, int success)

/* Wait for conflicting reads to drain */
if (__chunk_is_tracked(s, pe->e.old_chunk)) {
- mutex_unlock(&s->lock);
+ up_write(&s->lock);
__check_for_conflicting_io(s, pe->e.old_chunk);
- mutex_lock(&s->lock);
+ down_write(&s->lock);
}

out:
@@ -1527,7 +1525,7 @@ static void pending_complete(void *context, int success)
full_bio->bi_end_io = pe->full_bio_end_io;
increment_pending_exceptions_done_count();

- mutex_unlock(&s->lock);
+ up_write(&s->lock);

/* Submit any pending write bios */
if (error) {
@@ -1750,7 +1748,7 @@ static int snapshot_map(struct dm_target *ti, struct bio *bio)
if (!s->valid)
return DM_MAPIO_KILL;

- mutex_lock(&s->lock);
+ down_write(&s->lock);

if (!s->valid || (unlikely(s->snapshot_overflowed) &&
bio_data_dir(bio) == WRITE)) {
@@ -1773,9 +1771,9 @@ static int snapshot_map(struct dm_target *ti, struct bio *bio)
if (bio_data_dir(bio) == WRITE) {
pe = __lookup_pending_exception(s, chunk);
if (!pe) {
- mutex_unlock(&s->lock);
+ up_write(&s->lock);
pe = alloc_pending_exception(s);
- mutex_lock(&s->lock);
+ down_write(&s->lock);

if (!s->valid || s->snapshot_overflowed) {
free_pending_exception(pe);
@@ -1810,7 +1808,7 @@ static int snapshot_map(struct dm_target *ti, struct bio *bio)
bio->bi_iter.bi_size ==
(s->store->chunk_size << SECTOR_SHIFT)) {
pe->started = 1;
- mutex_unlock(&s->lock);
+ up_write(&s->lock);
start_full_bio(pe, bio);
goto out;
}
@@ -1820,7 +1818,7 @@ static int snapshot_map(struct dm_target *ti, struct bio *bio)
if (!pe->started) {
/* this is protected by snap->lock */
pe->started = 1;
- mutex_unlock(&s->lock);
+ up_write(&s->lock);
start_copy(pe);
goto out;
}
@@ -1830,7 +1828,7 @@ static int snapshot_map(struct dm_target *ti, struct bio *bio)
}

out_unlock:
- mutex_unlock(&s->lock);
+ up_write(&s->lock);
out:
return r;
}
@@ -1866,7 +1864,7 @@ static int snapshot_merge_map(struct dm_target *ti, struct bio *bio)

chunk = sector_to_chunk(s->store, bio->bi_iter.bi_sector);

- mutex_lock(&s->lock);
+ down_write(&s->lock);

/* Full merging snapshots are redirected to the origin */
if (!s->valid)
@@ -1897,12 +1895,12 @@ static int snapshot_merge_map(struct dm_target *ti, struct bio *bio)
bio_set_dev(bio, s->origin->bdev);

if (bio_data_dir(bio) == WRITE) {
- mutex_unlock(&s->lock);
+ up_write(&s->lock);
return do_origin(s->origin, bio);
}

out_unlock:
- mutex_unlock(&s->lock);
+ up_write(&s->lock);

return r;
}
@@ -1934,7 +1932,7 @@ static int snapshot_preresume(struct dm_target *ti)
down_read(&_origins_lock);
(void) __find_snapshots_sharing_cow(s, &snap_src, &snap_dest, NULL);
if (snap_src && snap_dest) {
- mutex_lock(&snap_src->lock);
+ down_read(&snap_src->lock);
if (s == snap_src) {
DMERR("Unable to resume snapshot source until "
"handover completes.");
@@ -1944,7 +1942,7 @@ static int snapshot_preresume(struct dm_target *ti)
"source is suspended.");
r = -EINVAL;
}
- mutex_unlock(&snap_src->lock);
+ up_read(&snap_src->lock);
}
up_read(&_origins_lock);

@@ -1990,11 +1988,11 @@ static void snapshot_resume(struct dm_target *ti)

(void) __find_snapshots_sharing_cow(s, &snap_src, &snap_dest, NULL);
if (snap_src && snap_dest) {
- mutex_lock(&snap_src->lock);
- mutex_lock_nested(&snap_dest->lock, SINGLE_DEPTH_NESTING);
+ down_write(&snap_src->lock);
+ down_write_nested(&snap_dest->lock, SINGLE_DEPTH_NESTING);
__handover_exceptions(snap_src, snap_dest);
- mutex_unlock(&snap_dest->lock);
- mutex_unlock(&snap_src->lock);
+ up_write(&snap_dest->lock);
+ up_write(&snap_src->lock);
}

up_read(&_origins_lock);
@@ -2009,9 +2007,9 @@ static void snapshot_resume(struct dm_target *ti)
/* Now we have correct chunk size, reregister */
reregister_snapshot(s);

- mutex_lock(&s->lock);
+ down_write(&s->lock);
s->active = 1;
- mutex_unlock(&s->lock);
+ up_write(&s->lock);
}

static uint32_t get_origin_minimum_chunksize(struct block_device *bdev)
@@ -2051,7 +2049,7 @@ static void snapshot_status(struct dm_target *ti, status_type_t type,
switch (type) {
case STATUSTYPE_INFO:

- mutex_lock(&snap->lock);
+ down_write(&snap->lock);

if (!snap->valid)
DMEMIT("Invalid");
@@ -2076,7 +2074,7 @@ static void snapshot_status(struct dm_target *ti, status_type_t type,
DMEMIT("Unknown");
}

- mutex_unlock(&snap->lock);
+ up_write(&snap->lock);

break;

@@ -2142,7 +2140,7 @@ static int __origin_write(struct list_head *snapshots, sector_t sector,
if (dm_target_is_snapshot_merge(snap->ti))
continue;

- mutex_lock(&snap->lock);
+ down_write(&snap->lock);

/* Only deal with valid and active snapshots */
if (!snap->valid || !snap->active)
@@ -2169,9 +2167,9 @@ static int __origin_write(struct list_head *snapshots, sector_t sector,
if (e)
goto next_snapshot;

- mutex_unlock(&snap->lock);
+ up_write(&snap->lock);
pe = alloc_pending_exception(snap);
- mutex_lock(&snap->lock);
+ down_write(&snap->lock);

if (!snap->valid) {
free_pending_exception(pe);
@@ -2221,7 +2219,7 @@ static int __origin_write(struct list_head *snapshots, sector_t sector,
}

next_snapshot:
- mutex_unlock(&snap->lock);
+ up_write(&snap->lock);

if (pe_to_start_now) {
start_copy(pe_to_start_now);
--
2.11.0


2019-03-17 12:24:15

by Nikos Tsironis

Subject: [PATCH v3 6/6] dm snapshot: Use fine-grained locking scheme

Substitute the global locking scheme with a fine-grained one, employing
the read-write semaphore and the scalable exception tables with
per-bucket locks introduced by the previous two commits.

In summary, we now use a read-write semaphore to protect the mostly-read
fields of the snapshot structure (e.g., valid and active) and per-bucket
bit spinlocks to protect accesses to the complete and pending exception
tables.

Finally, we use an extra spinlock (pe_allocation_lock) to serialize the
allocation of new exceptions by the exception store. This allocation is
very fast, so the extra spinlock doesn't hurt performance.

This scheme allows dm-snapshot to scale better, resulting in increased
IOPS and reduced latency.
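
For reference, the fast-path lock ordering after this patch looks roughly
as follows, mirroring the snapshot_map() changes below:

        down_read(&s->lock);                    /* snapshot-wide, shared */
        dm_exception_table_lock(&lock);         /* per-bucket bit spinlocks */

        /* ... lookups/inserts in the complete and pending tables ... */

        dm_exception_table_unlock(&lock);
        up_read(&s->lock);

pe_allocation_lock is taken only inside __insert_pending_exception(),
around prepare_exception() and the exception_start_sequence increment, so
exception allocation stays serialized without taking the snapshot lock
for writing.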

Following are some benchmark results using the null_blk device:

modprobe null_blk gb=1024 bs=512 submit_queues=8 hw_queue_depth=4096 \
queue_mode=2 irqmode=1 completion_nsec=1 nr_devices=1

* Benchmark fio_origin_randwrite_throughput_N, from the device mapper
test suite [1] (direct IO, random 4K writes to origin device, IO
engine libaio):

+--------------+-------------+------------+
| # of workers | IOPS Before | IOPS After |
+--------------+-------------+------------+
| 1 | 57708 | 66421 |
| 2 | 63415 | 77589 |
| 4 | 67276 | 98839 |
| 8 | 60564 | 109258 |
+--------------+-------------+------------+

* Benchmark fio_origin_randwrite_latency_N, from the device mapper test
suite [1] (direct IO, random 4K writes to origin device, IO engine
psync):

+--------------+-----------------------+----------------------+
| # of workers | Latency (usec) Before | Latency (usec) After |
+--------------+-----------------------+----------------------+
| 1 | 16.25 | 13.27 |
| 2 | 31.65 | 25.08 |
| 4 | 55.28 | 41.08 |
| 8 | 121.47 | 74.44 |
+--------------+-----------------------+----------------------+

* Benchmark fio_snapshot_randwrite_throughput_N, from the device mapper
test suite [1] (direct IO, random 4K writes to snapshot device, IO
engine libaio):

+--------------+-------------+------------+
| # of workers | IOPS Before | IOPS After |
+--------------+-------------+------------+
| 1 | 72593 | 84938 |
| 2 | 97379 | 134973 |
| 4 | 90610 | 143077 |
| 8 | 90537 | 180085 |
+--------------+-------------+------------+

* Benchmark fio_snapshot_randwrite_latency_N, from the device mapper
test suite [1] (direct IO, random 4K writes to snapshot device, IO
engine psync):

+--------------+-----------------------+----------------------+
| # of workers | Latency (usec) Before | Latency (usec) After |
+--------------+-----------------------+----------------------+
| 1 | 12.53 | 10.6 |
| 2 | 19.78 | 14.89 |
| 4 | 40.37 | 23.47 |
| 8 | 89.32 | 48.48 |
+--------------+-----------------------+----------------------+

[1] https://github.com/jthornber/device-mapper-test-suite

Signed-off-by: Nikos Tsironis <[email protected]>
Signed-off-by: Ilias Tsitsimpis <[email protected]>
---
drivers/md/dm-snap.c | 84 +++++++++++++++++++++++++++-------------------------
1 file changed, 44 insertions(+), 40 deletions(-)

diff --git a/drivers/md/dm-snap.c b/drivers/md/dm-snap.c
index ab75857309aa..4530d0620a72 100644
--- a/drivers/md/dm-snap.c
+++ b/drivers/md/dm-snap.c
@@ -77,7 +77,9 @@ struct dm_snapshot {

atomic_t pending_exceptions_count;

- /* Protected by "lock" */
+ spinlock_t pe_allocation_lock;
+
+ /* Protected by "pe_allocation_lock" */
sector_t exception_start_sequence;

/* Protected by kcopyd single-threaded callback */
@@ -1245,6 +1247,7 @@ static int snapshot_ctr(struct dm_target *ti, unsigned int argc, char **argv)
s->snapshot_overflowed = 0;
s->active = 0;
atomic_set(&s->pending_exceptions_count, 0);
+ spin_lock_init(&s->pe_allocation_lock);
s->exception_start_sequence = 0;
s->exception_complete_sequence = 0;
s->out_of_order_tree = RB_ROOT;
@@ -1522,6 +1525,13 @@ static void __invalidate_snapshot(struct dm_snapshot *s, int err)
dm_table_event(s->ti->table);
}

+static void invalidate_snapshot(struct dm_snapshot *s, int err)
+{
+ down_write(&s->lock);
+ __invalidate_snapshot(s, err);
+ up_write(&s->lock);
+}
+
static void pending_complete(void *context, int success)
{
struct dm_snap_pending_exception *pe = context;
@@ -1537,8 +1547,7 @@ static void pending_complete(void *context, int success)

if (!success) {
/* Read/write error - snapshot is unusable */
- down_write(&s->lock);
- __invalidate_snapshot(s, -EIO);
+ invalidate_snapshot(s, -EIO);
error = 1;

dm_exception_table_lock(&lock);
@@ -1547,8 +1556,7 @@ static void pending_complete(void *context, int success)

e = alloc_completed_exception(GFP_NOIO);
if (!e) {
- down_write(&s->lock);
- __invalidate_snapshot(s, -ENOMEM);
+ invalidate_snapshot(s, -ENOMEM);
error = 1;

dm_exception_table_lock(&lock);
@@ -1556,11 +1564,13 @@ static void pending_complete(void *context, int success)
}
*e = pe->e;

- down_write(&s->lock);
+ down_read(&s->lock);
dm_exception_table_lock(&lock);
if (!s->valid) {
+ up_read(&s->lock);
free_completed_exception(e);
error = 1;
+
goto out;
}

@@ -1572,13 +1582,12 @@ static void pending_complete(void *context, int success)
* merging can overwrite the chunk in origin.
*/
dm_insert_exception(&s->complete, e);
+ up_read(&s->lock);

/* Wait for conflicting reads to drain */
if (__chunk_is_tracked(s, pe->e.old_chunk)) {
dm_exception_table_unlock(&lock);
- up_write(&s->lock);
__check_for_conflicting_io(s, pe->e.old_chunk);
- down_write(&s->lock);
dm_exception_table_lock(&lock);
}

@@ -1595,8 +1604,6 @@ static void pending_complete(void *context, int success)
full_bio->bi_end_io = pe->full_bio_end_io;
increment_pending_exceptions_done_count();

- up_write(&s->lock);
-
/* Submit any pending write bios */
if (error) {
if (full_bio)
@@ -1738,8 +1745,8 @@ __lookup_pending_exception(struct dm_snapshot *s, chunk_t chunk)
/*
* Inserts a pending exception into the pending table.
*
- * NOTE: a write lock must be held on snap->lock before calling
- * this.
+ * NOTE: a write lock must be held on the chunk's pending exception table slot
+ * before calling this.
*/
static struct dm_snap_pending_exception *
__insert_pending_exception(struct dm_snapshot *s,
@@ -1751,12 +1758,15 @@ __insert_pending_exception(struct dm_snapshot *s,
pe->started = 0;
pe->full_bio = NULL;

+ spin_lock(&s->pe_allocation_lock);
if (s->store->type->prepare_exception(s->store, &pe->e)) {
+ spin_unlock(&s->pe_allocation_lock);
free_pending_exception(pe);
return NULL;
}

pe->exception_sequence = s->exception_start_sequence++;
+ spin_unlock(&s->pe_allocation_lock);

dm_insert_exception(&s->pending, &pe->e);

@@ -1768,8 +1778,8 @@ __insert_pending_exception(struct dm_snapshot *s,
* for this chunk, otherwise it allocates a new one and inserts
* it into the pending table.
*
- * NOTE: a write lock must be held on snap->lock before calling
- * this.
+ * NOTE: a write lock must be held on the chunk's pending exception table slot
+ * before calling this.
*/
static struct dm_snap_pending_exception *
__find_pending_exception(struct dm_snapshot *s,
@@ -1820,7 +1830,7 @@ static int snapshot_map(struct dm_target *ti, struct bio *bio)
if (!s->valid)
return DM_MAPIO_KILL;

- down_write(&s->lock);
+ down_read(&s->lock);
dm_exception_table_lock(&lock);

if (!s->valid || (unlikely(s->snapshot_overflowed) &&
@@ -1845,17 +1855,9 @@ static int snapshot_map(struct dm_target *ti, struct bio *bio)
pe = __lookup_pending_exception(s, chunk);
if (!pe) {
dm_exception_table_unlock(&lock);
- up_write(&s->lock);
pe = alloc_pending_exception(s);
- down_write(&s->lock);
dm_exception_table_lock(&lock);

- if (!s->valid || s->snapshot_overflowed) {
- free_pending_exception(pe);
- r = DM_MAPIO_KILL;
- goto out_unlock;
- }
-
e = dm_lookup_exception(&s->complete, chunk);
if (e) {
free_pending_exception(pe);
@@ -1866,10 +1868,15 @@ static int snapshot_map(struct dm_target *ti, struct bio *bio)
pe = __find_pending_exception(s, pe, chunk);
if (!pe) {
dm_exception_table_unlock(&lock);
+ up_read(&s->lock);
+
+ down_write(&s->lock);

if (s->store->userspace_supports_overflow) {
- s->snapshot_overflowed = 1;
- DMERR("Snapshot overflowed: Unable to allocate exception.");
+ if (s->valid && !s->snapshot_overflowed) {
+ s->snapshot_overflowed = 1;
+ DMERR("Snapshot overflowed: Unable to allocate exception.");
+ }
} else
__invalidate_snapshot(s, -ENOMEM);
up_write(&s->lock);
@@ -1887,8 +1894,10 @@ static int snapshot_map(struct dm_target *ti, struct bio *bio)
bio->bi_iter.bi_size ==
(s->store->chunk_size << SECTOR_SHIFT)) {
pe->started = 1;
+
dm_exception_table_unlock(&lock);
- up_write(&s->lock);
+ up_read(&s->lock);
+
start_full_bio(pe, bio);
goto out;
}
@@ -1896,10 +1905,12 @@ static int snapshot_map(struct dm_target *ti, struct bio *bio)
bio_list_add(&pe->snapshot_bios, bio);

if (!pe->started) {
- /* this is protected by snap->lock */
+ /* this is protected by the exception table lock */
pe->started = 1;
+
dm_exception_table_unlock(&lock);
- up_write(&s->lock);
+ up_read(&s->lock);
+
start_copy(pe);
goto out;
}
@@ -1910,7 +1921,7 @@ static int snapshot_map(struct dm_target *ti, struct bio *bio)

out_unlock:
dm_exception_table_unlock(&lock);
- up_write(&s->lock);
+ up_read(&s->lock);
out:
return r;
}
@@ -2234,7 +2245,7 @@ static int __origin_write(struct list_head *snapshots, sector_t sector,
chunk = sector_to_chunk(snap->store, sector);
dm_exception_table_lock_init(snap, chunk, &lock);

- down_write(&snap->lock);
+ down_read(&snap->lock);
dm_exception_table_lock(&lock);

/* Only deal with valid and active snapshots */
@@ -2253,16 +2264,9 @@ static int __origin_write(struct list_head *snapshots, sector_t sector,
goto next_snapshot;

dm_exception_table_unlock(&lock);
- up_write(&snap->lock);
pe = alloc_pending_exception(snap);
- down_write(&snap->lock);
dm_exception_table_lock(&lock);

- if (!snap->valid) {
- free_pending_exception(pe);
- goto next_snapshot;
- }
-
pe2 = __lookup_pending_exception(snap, chunk);

if (!pe2) {
@@ -2275,9 +2279,9 @@ static int __origin_write(struct list_head *snapshots, sector_t sector,
pe = __insert_pending_exception(snap, pe, chunk);
if (!pe) {
dm_exception_table_unlock(&lock);
- __invalidate_snapshot(snap, -ENOMEM);
- up_write(&snap->lock);
+ up_read(&snap->lock);

+ invalidate_snapshot(snap, -ENOMEM);
continue;
}
} else {
@@ -2310,7 +2314,7 @@ static int __origin_write(struct list_head *snapshots, sector_t sector,

next_snapshot:
dm_exception_table_unlock(&lock);
- up_write(&snap->lock);
+ up_read(&snap->lock);

if (pe_to_start_now) {
start_copy(pe_to_start_now);
--
2.11.0


2019-03-17 12:24:29

by Nikos Tsironis

Subject: [PATCH v3 5/6] dm snapshot: Make exception tables scalable

Use list_bl to implement the exception hash tables' buckets. This change
permits concurrent access to distinct buckets by multiple threads.

Also, implement helper functions to lock and unlock the exception tables
based on the chunk number of the exception at hand.

We retain the global locking, by means of down_write(), which the next
commit replaces.

Still, we must acquire the per-bucket spinlocks when accessing the hash
tables, since list_bl does not allow modification on unlocked lists.
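
The helpers are used roughly as in the dm_add_exception() hunk below: the
lock is initialized from the chunk number, and the complete and pending
bucket locks for that chunk are then taken and released together:

        struct dm_exception_table_lock lock;

        dm_exception_table_lock_init(s, chunk, &lock);

        dm_exception_table_lock(&lock);
        dm_insert_exception(&s->complete, e);
        dm_exception_table_unlock(&lock);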

Signed-off-by: Nikos Tsironis <[email protected]>
Signed-off-by: Ilias Tsitsimpis <[email protected]>
---
drivers/md/dm-exception-store.h | 3 +-
drivers/md/dm-snap.c | 137 +++++++++++++++++++++++++++++++++-------
2 files changed, 116 insertions(+), 24 deletions(-)

diff --git a/drivers/md/dm-exception-store.h b/drivers/md/dm-exception-store.h
index 12b5216c2cfe..5a3c696c057f 100644
--- a/drivers/md/dm-exception-store.h
+++ b/drivers/md/dm-exception-store.h
@@ -11,6 +11,7 @@
#define _LINUX_DM_EXCEPTION_STORE

#include <linux/blkdev.h>
+#include <linux/list_bl.h>
#include <linux/device-mapper.h>

/*
@@ -27,7 +28,7 @@ typedef sector_t chunk_t;
* chunk within the device.
*/
struct dm_exception {
- struct list_head hash_list;
+ struct hlist_bl_node hash_list;

chunk_t old_chunk;
chunk_t new_chunk;
diff --git a/drivers/md/dm-snap.c b/drivers/md/dm-snap.c
index 209da5dd0ba6..ab75857309aa 100644
--- a/drivers/md/dm-snap.c
+++ b/drivers/md/dm-snap.c
@@ -13,6 +13,7 @@
#include <linux/init.h>
#include <linux/kdev_t.h>
#include <linux/list.h>
+#include <linux/list_bl.h>
#include <linux/mempool.h>
#include <linux/module.h>
#include <linux/slab.h>
@@ -44,7 +45,7 @@ static const char dm_snapshot_merge_target_name[] = "snapshot-merge";
struct dm_exception_table {
uint32_t hash_mask;
unsigned hash_shift;
- struct list_head *table;
+ struct hlist_bl_head *table;
};

struct dm_snapshot {
@@ -618,6 +619,36 @@ static void unregister_snapshot(struct dm_snapshot *s)
* The lowest hash_shift bits of the chunk number are ignored, allowing
* some consecutive chunks to be grouped together.
*/
+static uint32_t exception_hash(struct dm_exception_table *et, chunk_t chunk);
+
+/* Lock to protect access to the completed and pending exception hash tables. */
+struct dm_exception_table_lock {
+ struct hlist_bl_head *complete_slot;
+ struct hlist_bl_head *pending_slot;
+};
+
+static void dm_exception_table_lock_init(struct dm_snapshot *s, chunk_t chunk,
+ struct dm_exception_table_lock *lock)
+{
+ struct dm_exception_table *complete = &s->complete;
+ struct dm_exception_table *pending = &s->pending;
+
+ lock->complete_slot = &complete->table[exception_hash(complete, chunk)];
+ lock->pending_slot = &pending->table[exception_hash(pending, chunk)];
+}
+
+static void dm_exception_table_lock(struct dm_exception_table_lock *lock)
+{
+ hlist_bl_lock(lock->complete_slot);
+ hlist_bl_lock(lock->pending_slot);
+}
+
+static void dm_exception_table_unlock(struct dm_exception_table_lock *lock)
+{
+ hlist_bl_unlock(lock->pending_slot);
+ hlist_bl_unlock(lock->complete_slot);
+}
+
static int dm_exception_table_init(struct dm_exception_table *et,
uint32_t size, unsigned hash_shift)
{
@@ -625,12 +656,12 @@ static int dm_exception_table_init(struct dm_exception_table *et,

et->hash_shift = hash_shift;
et->hash_mask = size - 1;
- et->table = dm_vcalloc(size, sizeof(struct list_head));
+ et->table = dm_vcalloc(size, sizeof(struct hlist_bl_head));
if (!et->table)
return -ENOMEM;

for (i = 0; i < size; i++)
- INIT_LIST_HEAD(et->table + i);
+ INIT_HLIST_BL_HEAD(et->table + i);

return 0;
}
@@ -638,15 +669,16 @@ static int dm_exception_table_init(struct dm_exception_table *et,
static void dm_exception_table_exit(struct dm_exception_table *et,
struct kmem_cache *mem)
{
- struct list_head *slot;
- struct dm_exception *ex, *next;
+ struct hlist_bl_head *slot;
+ struct dm_exception *ex;
+ struct hlist_bl_node *pos, *n;
int i, size;

size = et->hash_mask + 1;
for (i = 0; i < size; i++) {
slot = et->table + i;

- list_for_each_entry_safe (ex, next, slot, hash_list)
+ hlist_bl_for_each_entry_safe(ex, pos, n, slot, hash_list)
kmem_cache_free(mem, ex);
}

@@ -660,7 +692,7 @@ static uint32_t exception_hash(struct dm_exception_table *et, chunk_t chunk)

static void dm_remove_exception(struct dm_exception *e)
{
- list_del(&e->hash_list);
+ hlist_bl_del(&e->hash_list);
}

/*
@@ -670,11 +702,12 @@ static void dm_remove_exception(struct dm_exception *e)
static struct dm_exception *dm_lookup_exception(struct dm_exception_table *et,
chunk_t chunk)
{
- struct list_head *slot;
+ struct hlist_bl_head *slot;
+ struct hlist_bl_node *pos;
struct dm_exception *e;

slot = &et->table[exception_hash(et, chunk)];
- list_for_each_entry (e, slot, hash_list)
+ hlist_bl_for_each_entry(e, pos, slot, hash_list)
if (chunk >= e->old_chunk &&
chunk <= e->old_chunk + dm_consecutive_chunk_count(e))
return e;
@@ -721,7 +754,8 @@ static void free_pending_exception(struct dm_snap_pending_exception *pe)
static void dm_insert_exception(struct dm_exception_table *eh,
struct dm_exception *new_e)
{
- struct list_head *l;
+ struct hlist_bl_head *l;
+ struct hlist_bl_node *pos;
struct dm_exception *e = NULL;

l = &eh->table[exception_hash(eh, new_e->old_chunk)];
@@ -731,7 +765,7 @@ static void dm_insert_exception(struct dm_exception_table *eh,
goto out;

/* List is ordered by old_chunk */
- list_for_each_entry_reverse(e, l, hash_list) {
+ hlist_bl_for_each_entry(e, pos, l, hash_list) {
/* Insert after an existing chunk? */
if (new_e->old_chunk == (e->old_chunk +
dm_consecutive_chunk_count(e) + 1) &&
@@ -752,12 +786,24 @@ static void dm_insert_exception(struct dm_exception_table *eh,
return;
}

- if (new_e->old_chunk > e->old_chunk)
+ if (new_e->old_chunk < e->old_chunk)
break;
}

out:
- list_add(&new_e->hash_list, e ? &e->hash_list : l);
+ if (!e) {
+ /*
+ * Either the table doesn't support consecutive chunks or slot
+ * l is empty.
+ */
+ hlist_bl_add_head(&new_e->hash_list, l);
+ } else if (new_e->old_chunk < e->old_chunk) {
+ /* Add before an existing exception */
+ hlist_bl_add_before(&new_e->hash_list, &e->hash_list);
+ } else {
+ /* Add to l's tail: e is the last exception in this slot */
+ hlist_bl_add_behind(&new_e->hash_list, &e->hash_list);
+ }
}

/*
@@ -766,6 +812,7 @@ static void dm_insert_exception(struct dm_exception_table *eh,
*/
static int dm_add_exception(void *context, chunk_t old, chunk_t new)
{
+ struct dm_exception_table_lock lock;
struct dm_snapshot *s = context;
struct dm_exception *e;

@@ -778,7 +825,17 @@ static int dm_add_exception(void *context, chunk_t old, chunk_t new)
/* Consecutive_count is implicitly initialised to zero */
e->new_chunk = new;

+ /*
+ * Although there is no need to lock access to the exception tables
+ * here, if we don't then hlist_bl_add_head(), called by
+ * dm_insert_exception(), will complain about accessing the
+ * corresponding list without locking it first.
+ */
+ dm_exception_table_lock_init(s, old, &lock);
+
+ dm_exception_table_lock(&lock);
dm_insert_exception(&s->complete, e);
+ dm_exception_table_unlock(&lock);

return 0;
}
@@ -807,7 +864,7 @@ static int calc_max_buckets(void)
{
/* use a fixed size of 2MB */
unsigned long mem = 2 * 1024 * 1024;
- mem /= sizeof(struct list_head);
+ mem /= sizeof(struct hlist_bl_head);

return mem;
}
@@ -1473,13 +1530,18 @@ static void pending_complete(void *context, int success)
struct bio *origin_bios = NULL;
struct bio *snapshot_bios = NULL;
struct bio *full_bio = NULL;
+ struct dm_exception_table_lock lock;
int error = 0;

+ dm_exception_table_lock_init(s, pe->e.old_chunk, &lock);
+
if (!success) {
/* Read/write error - snapshot is unusable */
down_write(&s->lock);
__invalidate_snapshot(s, -EIO);
error = 1;
+
+ dm_exception_table_lock(&lock);
goto out;
}

@@ -1488,11 +1550,14 @@ static void pending_complete(void *context, int success)
down_write(&s->lock);
__invalidate_snapshot(s, -ENOMEM);
error = 1;
+
+ dm_exception_table_lock(&lock);
goto out;
}
*e = pe->e;

down_write(&s->lock);
+ dm_exception_table_lock(&lock);
if (!s->valid) {
free_completed_exception(e);
error = 1;
@@ -1510,14 +1575,19 @@ static void pending_complete(void *context, int success)

/* Wait for conflicting reads to drain */
if (__chunk_is_tracked(s, pe->e.old_chunk)) {
+ dm_exception_table_unlock(&lock);
up_write(&s->lock);
__check_for_conflicting_io(s, pe->e.old_chunk);
down_write(&s->lock);
+ dm_exception_table_lock(&lock);
}

out:
/* Remove the in-flight exception from the list */
dm_remove_exception(&pe->e);
+
+ dm_exception_table_unlock(&lock);
+
snapshot_bios = bio_list_get(&pe->snapshot_bios);
origin_bios = bio_list_get(&pe->origin_bios);
full_bio = pe->full_bio;
@@ -1733,6 +1803,7 @@ static int snapshot_map(struct dm_target *ti, struct bio *bio)
int r = DM_MAPIO_REMAPPED;
chunk_t chunk;
struct dm_snap_pending_exception *pe = NULL;
+ struct dm_exception_table_lock lock;

init_tracked_chunk(bio);

@@ -1742,6 +1813,7 @@ static int snapshot_map(struct dm_target *ti, struct bio *bio)
}

chunk = sector_to_chunk(s->store, bio->bi_iter.bi_sector);
+ dm_exception_table_lock_init(s, chunk, &lock);

/* Full snapshots are not usable */
/* To get here the table must be live so s->active is always set. */
@@ -1749,6 +1821,7 @@ static int snapshot_map(struct dm_target *ti, struct bio *bio)
return DM_MAPIO_KILL;

down_write(&s->lock);
+ dm_exception_table_lock(&lock);

if (!s->valid || (unlikely(s->snapshot_overflowed) &&
bio_data_dir(bio) == WRITE)) {
@@ -1771,9 +1844,11 @@ static int snapshot_map(struct dm_target *ti, struct bio *bio)
if (bio_data_dir(bio) == WRITE) {
pe = __lookup_pending_exception(s, chunk);
if (!pe) {
+ dm_exception_table_unlock(&lock);
up_write(&s->lock);
pe = alloc_pending_exception(s);
down_write(&s->lock);
+ dm_exception_table_lock(&lock);

if (!s->valid || s->snapshot_overflowed) {
free_pending_exception(pe);
@@ -1790,13 +1865,17 @@ static int snapshot_map(struct dm_target *ti, struct bio *bio)

pe = __find_pending_exception(s, pe, chunk);
if (!pe) {
+ dm_exception_table_unlock(&lock);
+
if (s->store->userspace_supports_overflow) {
s->snapshot_overflowed = 1;
DMERR("Snapshot overflowed: Unable to allocate exception.");
} else
__invalidate_snapshot(s, -ENOMEM);
+ up_write(&s->lock);
+
r = DM_MAPIO_KILL;
- goto out_unlock;
+ goto out;
}
}

@@ -1808,6 +1887,7 @@ static int snapshot_map(struct dm_target *ti, struct bio *bio)
bio->bi_iter.bi_size ==
(s->store->chunk_size << SECTOR_SHIFT)) {
pe->started = 1;
+ dm_exception_table_unlock(&lock);
up_write(&s->lock);
start_full_bio(pe, bio);
goto out;
@@ -1818,6 +1898,7 @@ static int snapshot_map(struct dm_target *ti, struct bio *bio)
if (!pe->started) {
/* this is protected by snap->lock */
pe->started = 1;
+ dm_exception_table_unlock(&lock);
up_write(&s->lock);
start_copy(pe);
goto out;
@@ -1828,6 +1909,7 @@ static int snapshot_map(struct dm_target *ti, struct bio *bio)
}

out_unlock:
+ dm_exception_table_unlock(&lock);
up_write(&s->lock);
out:
return r;
@@ -2129,6 +2211,7 @@ static int __origin_write(struct list_head *snapshots, sector_t sector,
struct dm_snap_pending_exception *pe, *pe2;
struct dm_snap_pending_exception *pe_to_start_now = NULL;
struct dm_snap_pending_exception *pe_to_start_last = NULL;
+ struct dm_exception_table_lock lock;
chunk_t chunk;

/* Do all the snapshots on this origin */
@@ -2140,21 +2223,23 @@ static int __origin_write(struct list_head *snapshots, sector_t sector,
if (dm_target_is_snapshot_merge(snap->ti))
continue;

- down_write(&snap->lock);
-
- /* Only deal with valid and active snapshots */
- if (!snap->valid || !snap->active)
- goto next_snapshot;
-
/* Nothing to do if writing beyond end of snapshot */
if (sector >= dm_table_get_size(snap->ti->table))
- goto next_snapshot;
+ continue;

/*
* Remember, different snapshots can have
* different chunk sizes.
*/
chunk = sector_to_chunk(snap->store, sector);
+ dm_exception_table_lock_init(snap, chunk, &lock);
+
+ down_write(&snap->lock);
+ dm_exception_table_lock(&lock);
+
+ /* Only deal with valid and active snapshots */
+ if (!snap->valid || !snap->active)
+ goto next_snapshot;

pe = __lookup_pending_exception(snap, chunk);
if (!pe) {
@@ -2167,9 +2252,11 @@ static int __origin_write(struct list_head *snapshots, sector_t sector,
if (e)
goto next_snapshot;

+ dm_exception_table_unlock(&lock);
up_write(&snap->lock);
pe = alloc_pending_exception(snap);
down_write(&snap->lock);
+ dm_exception_table_lock(&lock);

if (!snap->valid) {
free_pending_exception(pe);
@@ -2187,8 +2274,11 @@ static int __origin_write(struct list_head *snapshots, sector_t sector,

pe = __insert_pending_exception(snap, pe, chunk);
if (!pe) {
+ dm_exception_table_unlock(&lock);
__invalidate_snapshot(snap, -ENOMEM);
- goto next_snapshot;
+ up_write(&snap->lock);
+
+ continue;
}
} else {
free_pending_exception(pe);
@@ -2219,6 +2309,7 @@ static int __origin_write(struct list_head *snapshots, sector_t sector,
}

next_snapshot:
+ dm_exception_table_unlock(&lock);
up_write(&snap->lock);

if (pe_to_start_now) {
--
2.11.0


2019-03-17 12:24:52

by Nikos Tsironis

Subject: [PATCH v3 3/6] dm snapshot: Don't sleep holding the snapshot lock

When completing a pending exception, pending_complete() waits for all
conflicting reads to drain before inserting the final, completed
exception. Conflicting reads are snapshot reads redirected to the
origin because the relevant chunk is not yet remapped to the COW device
at the moment we receive the read.

The completed exception must be inserted into the exception table after
all conflicting reads drain to ensure snapshot reads don't return
corrupted data. This is required because inserting the completed
exception into the exception table signals that the relevant chunk is
remapped and both origin writes and snapshot merging will now overwrite
the chunk in origin.

This wait is done holding the snapshot lock to ensure that
pending_complete() doesn't starve if new snapshot reads keep coming for
this chunk.

In preparation for the next commit, where we use a spinlock instead of a
mutex to protect the exception tables, we remove the need for holding
the lock while waiting for conflicting reads to drain.

We achieve this in two steps:

1. pending_complete() inserts the completed exception before waiting for
conflicting reads to drain and removes the pending exception after
all conflicting reads drain.

This ensures that new snapshot reads will be redirected to the COW
device, instead of the origin, and thus pending_complete() will not
starve. Moreover, we use the existence of both a completed and
a pending exception to signify that the COW is done but there are
conflicting reads in flight.

2. In __origin_write() we first check if there is a pending exception
and then if there is a completed exception. If there is a pending
exception, any submitted BIO is delayed on the pe->origin_bios list and
DM_MAPIO_SUBMITTED is returned. This ensures that neither origin writes
nor snapshot merging can overwrite the origin chunk until all
conflicting reads drain, and thus snapshot reads will not return
corrupted data.

Summarizing, we now have the following possible combinations of pending
and completed exceptions for a chunk, along with their meaning:

A. No exceptions exist: The chunk has not been remapped yet.
B. Only a pending exception exists: The chunk is currently being copied
to the COW device.
C. Both a pending and a completed exception exist: COW for this chunk
has completed but there are snapshot reads in flight which had been
redirected to the origin before the chunk was remapped.
D. Only the completed exception exists: COW has been completed and there
are no conflicting reads in flight.
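
In code, the reordered checks in __origin_write() (see the hunk below)
boil down to the following sketch:

        pe = __lookup_pending_exception(snap, chunk);
        if (!pe) {
                /* No COW in flight: is the chunk already remapped? */
                e = dm_lookup_exception(&snap->complete, chunk);
                if (e)
                        goto next_snapshot;     /* case D: nothing to do */

                /* Case A: allocate and insert a new pending exception. */
                ...
        }
        /*
         * Cases B and C: a pending exception exists, so the bio is delayed
         * on pe->origin_bios and DM_MAPIO_SUBMITTED is returned; the origin
         * chunk cannot be overwritten while conflicting reads are in flight.
         */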

Signed-off-by: Nikos Tsironis <[email protected]>
Signed-off-by: Ilias Tsitsimpis <[email protected]>
---
drivers/md/dm-snap.c | 102 ++++++++++++++++++++++++++++++++-------------------
1 file changed, 65 insertions(+), 37 deletions(-)

diff --git a/drivers/md/dm-snap.c b/drivers/md/dm-snap.c
index 36805b12661e..4b34bfa0900a 100644
--- a/drivers/md/dm-snap.c
+++ b/drivers/md/dm-snap.c
@@ -1501,16 +1501,24 @@ static void pending_complete(void *context, int success)
goto out;
}

- /* Check for conflicting reads */
- __check_for_conflicting_io(s, pe->e.old_chunk);
-
/*
- * Add a proper exception, and remove the
- * in-flight exception from the list.
+ * Add a proper exception. After inserting the completed exception all
+ * subsequent snapshot reads to this chunk will be redirected to the
+ * COW device. This ensures that we do not starve. Moreover, as long
+ * as the pending exception exists, neither origin writes nor snapshot
+ * merging can overwrite the chunk in origin.
*/
dm_insert_exception(&s->complete, e);

+ /* Wait for conflicting reads to drain */
+ if (__chunk_is_tracked(s, pe->e.old_chunk)) {
+ mutex_unlock(&s->lock);
+ __check_for_conflicting_io(s, pe->e.old_chunk);
+ mutex_lock(&s->lock);
+ }
+
out:
+ /* Remove the in-flight exception from the list */
dm_remove_exception(&pe->e);
snapshot_bios = bio_list_get(&pe->snapshot_bios);
origin_bios = bio_list_get(&pe->origin_bios);
@@ -1660,25 +1668,15 @@ __lookup_pending_exception(struct dm_snapshot *s, chunk_t chunk)
}

/*
- * Looks to see if this snapshot already has a pending exception
- * for this chunk, otherwise it allocates a new one and inserts
- * it into the pending table.
+ * Inserts a pending exception into the pending table.
*
* NOTE: a write lock must be held on snap->lock before calling
* this.
*/
static struct dm_snap_pending_exception *
-__find_pending_exception(struct dm_snapshot *s,
- struct dm_snap_pending_exception *pe, chunk_t chunk)
+__insert_pending_exception(struct dm_snapshot *s,
+ struct dm_snap_pending_exception *pe, chunk_t chunk)
{
- struct dm_snap_pending_exception *pe2;
-
- pe2 = __lookup_pending_exception(s, chunk);
- if (pe2) {
- free_pending_exception(pe);
- return pe2;
- }
-
pe->e.old_chunk = chunk;
bio_list_init(&pe->origin_bios);
bio_list_init(&pe->snapshot_bios);
@@ -1697,6 +1695,29 @@ __find_pending_exception(struct dm_snapshot *s,
return pe;
}

+/*
+ * Looks to see if this snapshot already has a pending exception
+ * for this chunk, otherwise it allocates a new one and inserts
+ * it into the pending table.
+ *
+ * NOTE: a write lock must be held on snap->lock before calling
+ * this.
+ */
+static struct dm_snap_pending_exception *
+__find_pending_exception(struct dm_snapshot *s,
+ struct dm_snap_pending_exception *pe, chunk_t chunk)
+{
+ struct dm_snap_pending_exception *pe2;
+
+ pe2 = __lookup_pending_exception(s, chunk);
+ if (pe2) {
+ free_pending_exception(pe);
+ return pe2;
+ }
+
+ return __insert_pending_exception(s, pe, chunk);
+}
+
static void remap_exception(struct dm_snapshot *s, struct dm_exception *e,
struct bio *bio, chunk_t chunk)
{
@@ -2107,7 +2128,7 @@ static int __origin_write(struct list_head *snapshots, sector_t sector,
int r = DM_MAPIO_REMAPPED;
struct dm_snapshot *snap;
struct dm_exception *e;
- struct dm_snap_pending_exception *pe;
+ struct dm_snap_pending_exception *pe, *pe2;
struct dm_snap_pending_exception *pe_to_start_now = NULL;
struct dm_snap_pending_exception *pe_to_start_last = NULL;
chunk_t chunk;
@@ -2137,17 +2158,17 @@ static int __origin_write(struct list_head *snapshots, sector_t sector,
*/
chunk = sector_to_chunk(snap->store, sector);

- /*
- * Check exception table to see if block
- * is already remapped in this snapshot
- * and trigger an exception if not.
- */
- e = dm_lookup_exception(&snap->complete, chunk);
- if (e)
- goto next_snapshot;
-
pe = __lookup_pending_exception(snap, chunk);
if (!pe) {
+ /*
+ * Check exception table to see if block is already
+ * remapped in this snapshot and trigger an exception
+ * if not.
+ */
+ e = dm_lookup_exception(&snap->complete, chunk);
+ if (e)
+ goto next_snapshot;
+
mutex_unlock(&snap->lock);
pe = alloc_pending_exception(snap);
mutex_lock(&snap->lock);
@@ -2157,16 +2178,23 @@ static int __origin_write(struct list_head *snapshots, sector_t sector,
goto next_snapshot;
}

- e = dm_lookup_exception(&snap->complete, chunk);
- if (e) {
+ pe2 = __lookup_pending_exception(snap, chunk);
+
+ if (!pe2) {
+ e = dm_lookup_exception(&snap->complete, chunk);
+ if (e) {
+ free_pending_exception(pe);
+ goto next_snapshot;
+ }
+
+ pe = __insert_pending_exception(snap, pe, chunk);
+ if (!pe) {
+ __invalidate_snapshot(snap, -ENOMEM);
+ goto next_snapshot;
+ }
+ } else {
free_pending_exception(pe);
- goto next_snapshot;
- }
-
- pe = __find_pending_exception(snap, pe, chunk);
- if (!pe) {
- __invalidate_snapshot(snap, -ENOMEM);
- goto next_snapshot;
+ pe = pe2;
}
}

--
2.11.0


2019-03-17 12:25:11

by Nikos Tsironis

Subject: [PATCH v3 1/6] list: Don't use WRITE_ONCE() in hlist_add_behind()

Commit 1c97be677f72b3 ("list: Use WRITE_ONCE() when adding to lists and
hlists") introduced the use of WRITE_ONCE() to atomically write the list
head's ->next pointer.

hlist_add_behind() doesn't touch the hlist head's ->first pointer so
there is no reason to use WRITE_ONCE() in this case.

Signed-off-by: Nikos Tsironis <[email protected]>
Signed-off-by: Ilias Tsitsimpis <[email protected]>
---
include/linux/list.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/list.h b/include/linux/list.h
index edb7628e46ed..b68d2a85859b 100644
--- a/include/linux/list.h
+++ b/include/linux/list.h
@@ -743,7 +743,7 @@ static inline void hlist_add_behind(struct hlist_node *n,
struct hlist_node *prev)
{
n->next = prev->next;
- WRITE_ONCE(prev->next, n);
+ prev->next = n;
n->pprev = &prev->next;

if (n->next)
--
2.11.0


2019-03-18 15:41:44

by Paul E. McKenney

Subject: Re: [PATCH v3 1/6] list: Don't use WRITE_ONCE() in hlist_add_behind()

On Sun, Mar 17, 2019 at 02:22:53PM +0200, Nikos Tsironis wrote:
> Commit 1c97be677f72b3 ("list: Use WRITE_ONCE() when adding to lists and
> hlists") introduced the use of WRITE_ONCE() to atomically write the list
> head's ->next pointer.
>
> hlist_add_behind() doesn't touch the hlist head's ->first pointer so
> there is no reason to use WRITE_ONCE() in this case.
>
> Signed-off-by: Nikos Tsironis <[email protected]>
> Signed-off-by: Ilias Tsitsimpis <[email protected]>

Reviewed-by: Paul E. McKenney <[email protected]>

> ---
> include/linux/list.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/include/linux/list.h b/include/linux/list.h
> index edb7628e46ed..b68d2a85859b 100644
> --- a/include/linux/list.h
> +++ b/include/linux/list.h
> @@ -743,7 +743,7 @@ static inline void hlist_add_behind(struct hlist_node *n,
> struct hlist_node *prev)
> {
> n->next = prev->next;
> - WRITE_ONCE(prev->next, n);
> + prev->next = n;
> n->pprev = &prev->next;
>
> if (n->next)
> --
> 2.11.0
>


2019-03-18 15:41:59

by Paul E. McKenney

Subject: Re: [PATCH v3 2/6] list_bl: Add hlist_bl_add_before/behind helpers

On Sun, Mar 17, 2019 at 02:22:54PM +0200, Nikos Tsironis wrote:
> Add hlist_bl_add_before/behind helpers to add an element before/after an
> existing element in a bl_list.
>
> Signed-off-by: Nikos Tsironis <[email protected]>
> Signed-off-by: Ilias Tsitsimpis <[email protected]>

On both this and the previous patch, the double signed-off-by lines
are a bit strange. You might be wanting Co-developed-by, but please
see Documentation/process/submitting-patches.rst.

Other than that:

Reviewed-by: Paul E. McKenney <[email protected]>

> ---
> include/linux/list_bl.h | 26 ++++++++++++++++++++++++++
> 1 file changed, 26 insertions(+)
>
> diff --git a/include/linux/list_bl.h b/include/linux/list_bl.h
> index 3fc2cc57ba1b..ae1b541446c9 100644
> --- a/include/linux/list_bl.h
> +++ b/include/linux/list_bl.h
> @@ -86,6 +86,32 @@ static inline void hlist_bl_add_head(struct hlist_bl_node *n,
> hlist_bl_set_first(h, n);
> }
>
> +static inline void hlist_bl_add_before(struct hlist_bl_node *n,
> + struct hlist_bl_node *next)
> +{
> + struct hlist_bl_node **pprev = next->pprev;
> +
> + n->pprev = pprev;
> + n->next = next;
> + next->pprev = &n->next;
> +
> + /* pprev may be `first`, so be careful not to lose the lock bit */
> + WRITE_ONCE(*pprev,
> + (struct hlist_bl_node *)
> + ((uintptr_t)n | ((uintptr_t)*pprev & LIST_BL_LOCKMASK)));
> +}
> +
> +static inline void hlist_bl_add_behind(struct hlist_bl_node *n,
> + struct hlist_bl_node *prev)
> +{
> + n->next = prev->next;
> + n->pprev = &prev->next;
> + prev->next = n;
> +
> + if (n->next)
> + n->next->pprev = &n->next;
> +}
> +
> static inline void __hlist_bl_del(struct hlist_bl_node *n)
> {
> struct hlist_bl_node *next = n->next;
> --
> 2.11.0
>


2019-03-20 17:45:30

by Nikos Tsironis

Subject: Re: [PATCH v3 2/6] list_bl: Add hlist_bl_add_before/behind helpers

On 3/18/19 5:41 PM, Paul E. McKenney wrote:
> On Sun, Mar 17, 2019 at 02:22:54PM +0200, Nikos Tsironis wrote:
>> Add hlist_bl_add_before/behind helpers to add an element before/after an
>> existing element in a bl_list.
>>
>> Signed-off-by: Nikos Tsironis <[email protected]>
>> Signed-off-by: Ilias Tsitsimpis <[email protected]>
>
> On both this and the previous patch, the double signed-off-by lines
> are a bit strange. You might be wanting Co-developed-by, but please
> see Documentation/process/submitting-patches.rst.

Hi Paul,

Thanks for your suggestion. I will make sure to reread the Documentation
regarding patch submission.

Thanks,
Nikos

>
> Other than that:
>
> Reviewed-by: Paul E. McKenney <[email protected]>
>
>> ---
>> include/linux/list_bl.h | 26 ++++++++++++++++++++++++++
>> 1 file changed, 26 insertions(+)
>>
>> diff --git a/include/linux/list_bl.h b/include/linux/list_bl.h
>> index 3fc2cc57ba1b..ae1b541446c9 100644
>> --- a/include/linux/list_bl.h
>> +++ b/include/linux/list_bl.h
>> @@ -86,6 +86,32 @@ static inline void hlist_bl_add_head(struct hlist_bl_node *n,
>> hlist_bl_set_first(h, n);
>> }
>>
>> +static inline void hlist_bl_add_before(struct hlist_bl_node *n,
>> + struct hlist_bl_node *next)
>> +{
>> + struct hlist_bl_node **pprev = next->pprev;
>> +
>> + n->pprev = pprev;
>> + n->next = next;
>> + next->pprev = &n->next;
>> +
>> + /* pprev may be `first`, so be careful not to lose the lock bit */
>> + WRITE_ONCE(*pprev,
>> + (struct hlist_bl_node *)
>> + ((uintptr_t)n | ((uintptr_t)*pprev & LIST_BL_LOCKMASK)));
>> +}
>> +
>> +static inline void hlist_bl_add_behind(struct hlist_bl_node *n,
>> + struct hlist_bl_node *prev)
>> +{
>> + n->next = prev->next;
>> + n->pprev = &prev->next;
>> + prev->next = n;
>> +
>> + if (n->next)
>> + n->next->pprev = &n->next;
>> +}
>> +
>> static inline void __hlist_bl_del(struct hlist_bl_node *n)
>> {
>> struct hlist_bl_node *next = n->next;
>> --
>> 2.11.0
>>
>

2019-03-20 19:08:05

by Mikulas Patocka

Subject: Re: [PATCH v3 0/6] dm snapshot: Improve performance using a more fine-grained locking scheme


Acked-by: Mikulas Patocka <[email protected]>



On Sun, 17 Mar 2019, Nikos Tsironis wrote:

> dm-snapshot uses a single mutex to serialize every access to the
> snapshot state, including accesses to the exception hash tables. This
> mutex is a bottleneck that prevents dm-snapshot from scaling as the
> number of threads doing IO increases.
>
> The major contention points are __origin_write()/snapshot_map() and
> pending_complete(), i.e., the submission and completion of pending
> exceptions.
>
> This patchset substitutes the single mutex with:
>
> * A read-write semaphore, which protects the mostly read fields of the
> snapshot structure.
>
> * Per-bucket bit spinlocks, which protect accesses to the exception
> hash tables.
>
> fio benchmarks using the null_blk device show significant performance
> improvements as the number of worker processes increases. Write latency
> is almost halved and write IOPS are nearly doubled.
>
> The relevant patch provides detailed benchmark results.
>
> A summary of the patchset follows:
>
> 1. The first patch removes an unnecessary use of WRITE_ONCE() in
> hlist_add_behind().
>
> 2. The second patch adds two helper functions to linux/list_bl.h,
> which is used to implement the per-bucket bit spinlocks in
> dm-snapshot.
>
> 3. The third patch removes the need to sleep holding the snapshot lock
> in pending_complete(), thus allowing us to replace the mutex with
> the per-bucket bit spinlocks.
>
> 4. Patches 4, 5 and 6 change the locking scheme, as described
> previously.
>
> Changes in v3:
> - Don't use WRITE_ONCE() in hlist_bl_add_behind(), as it's not needed.
> - Fix hlist_add_behind() to also not use WRITE_ONCE().
> - Use uintptr_t instead of unsigned long in hlist_bl_add_before().
>
> v2: https://www.redhat.com/archives/dm-devel/2019-March/msg00007.html
>
> Changes in v2:
> - Split third patch of v1 into three patches: 3/5, 4/5, 5/5.
>
> v1: https://www.redhat.com/archives/dm-devel/2018-December/msg00161.html
>
> Nikos Tsironis (6):
> list: Don't use WRITE_ONCE() in hlist_add_behind()
> list_bl: Add hlist_bl_add_before/behind helpers
> dm snapshot: Don't sleep holding the snapshot lock
> dm snapshot: Replace mutex with rw semaphore
> dm snapshot: Make exception tables scalable
> dm snapshot: Use fine-grained locking scheme
>
> drivers/md/dm-exception-store.h | 3 +-
> drivers/md/dm-snap.c | 359 +++++++++++++++++++++++++++-------------
> include/linux/list.h | 2 +-
> include/linux/list_bl.h | 26 +++
> 4 files changed, 269 insertions(+), 121 deletions(-)
>
> --
> 2.11.0
>