Hi,
This is a repost of DRBD to keep you updated on the ongoing
cleanups.
Description
DRBD is a shared-nothing, synchronously replicated block device. It
is designed to serve as a building block for high availability
clusters and in this context, is a "drop-in" replacement for shared
storage. Simplistically, you could see it as a network RAID 1.
Each minor device has a role, which can be 'primary' or 'secondary'.
On the node with the primary device the application is supposed to
run and to access the device (/dev/drbdX). Every write is sent to
the local 'lower level block device' and, across the network, to the
node with the device in 'secondary' state. The secondary device
simply writes the data to its lower level block device.
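In pseudo code the write path looks roughly like this (a conceptual
sketch only; the helper names are made up for illustration and are not
DRBD's actual request code):

	/* conceptual sketch of a mirrored write, not actual DRBD code */
	static void mirrored_write(struct bio *bio)
	{
		submit_to_lower_level_device(bio);	/* local disk */
		send_to_peer(bio);			/* network, to the Secondary */
		/* in synchronous mode, completion is signalled to the
		 * upper layers only after both the local disk and the
		 * peer have acknowledged the write */
		wait_for_local_and_peer_ack(bio);
		complete_to_upper_layers(bio);
	}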
DRBD can also be used in dual-Primary mode (device writable on both
nodes), which means it can exhibit shared disk semantics in a
shared-nothing cluster. Needless to say, on top of dual-Primary
DRBD, a cluster file system is necessary to maintain cache
coherency.
This is one of the areas where DRBD differs notably from RAID1 (say
md) stacked on top of NBD or iSCSI. DRBD solves the issue of
concurrent writes to the same on-disk location. Such writes are an
error of the layer above us -- they usually indicate a broken lock
manager in a cluster file system -- but DRBD still has to ensure
that both sides agree on which write came last, and therefore
overwrites the other write.
More background on this can be found in this paper:
http://www.drbd.org/fileadmin/drbd/publications/drbd8.pdf
Beyond that, DRBD addresses various issues of cluster partitioning,
which the MD/NBD stack, to the best of our knowledge, does not
solve. The above-mentioned paper goes into some detail about that as
well.
DRBD can operate in synchronous mode or in asynchronous mode. I want
to point out that we guarantee not to violate a single possible
write-after-write dependency when writing on the standby node (for
example, a file system's journal commit record never reaches the
standby's disk before the journal entries it commits). More on that
can be found in this paper:
http://www.drbd.org/fileadmin/drbd/publications/drbd_lk9.pdf
Last but not least, DRBD offers background resynchronisation and
keeps an on-disk representation of the dirty bitmap up to date. A
reasonable tradeoff between the number of meta data updates and
resyncing more than needed is implemented with the activity log.
More on that:
http://www.drbd.org/fileadmin/drbd/publications/drbd-activity-logging_v6.pdf
Changes since the last post from DRBD upstream
* Updated to the final drbd-8.3.1 code
* Optionally run-length encode bitmap transfers
Changes triggered by reviews
* Now using the latest proc_create()
* Moved the allocation of md_io_tmpp to attach/detach out of drbd_md_sync_page_io()
* Removed the mode selection comments for emacs
* Removed DRBD_ratelimit()
cheers,
Phil
The lru_cache is a fixed-size cache of equally sized objects. It allows its
users to do arbitrary transactions in case an element in the cache needs to
be replaced. Its replacement policy is LRU.
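To make the intended usage pattern concrete, here is a minimal sketch
against the interface below (a sketch only: error handling is shortened,
and the cache name, size, and element number are arbitrary):

	struct lru_cache *lc;
	struct lc_element *e;

	lc = lc_alloc("example", 61, sizeof(struct lc_element), NULL);
	if (!lc)
		return -ENOMEM;

	e = lc_get(lc, enr);	/* find element, or start to replace one */
	if (e && e->lc_number != enr) {
		/* miss: an unused element is being recycled for enr.
		 * record the change on stable storage here, then commit: */
		lc_changed(lc, e);
	}
	if (e) {
		/* ... use the element ... */
		lc_put(lc, e);	/* drop the reference again */
	}
	lc_free(lc);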
Signed-off-by: Philipp Reisner <[email protected]>
Signed-off-by: Lars Ellenberg <[email protected]>
---
diff -uNrp linux-2.6.29/drivers/block/drbd/lru_cache.h linux-2.6.29-drbd/drivers/block/drbd/lru_cache.h
--- linux-2.6.29/drivers/block/drbd/lru_cache.h 1970-01-01 01:00:00.000000000 +0100
+++ linux-2.6.29-drbd/drivers/block/drbd/lru_cache.h 2009-03-26 15:55:39.595134000 +0100
@@ -0,0 +1,116 @@
+/*
+ lru_cache.c
+
+ This file is part of DRBD by Philipp Reisner and Lars Ellenberg.
+
+ Copyright (C) 2003-2008, LINBIT Information Technologies GmbH.
+ Copyright (C) 2003-2008, Philipp Reisner <[email protected]>.
+ Copyright (C) 2003-2008, Lars Ellenberg <[email protected]>.
+
+ drbd is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2, or (at your option)
+ any later version.
+
+ drbd is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with drbd; see the file COPYING. If not, write to
+ the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA.
+
+ */
+
+#ifndef LRU_CACHE_H
+#define LRU_CACHE_H
+
+#include <linux/list.h>
+
+struct lc_element {
+ struct hlist_node colision;
+ struct list_head list; /* LRU list or free list */
+ unsigned int refcnt;
+ unsigned int lc_number;
+};
+
+struct lru_cache {
+ struct list_head lru;
+ struct list_head free;
+ struct list_head in_use;
+ size_t element_size;
+ unsigned int nr_elements;
+ unsigned int new_number;
+
+ unsigned int used;
+ unsigned long flags;
+ unsigned long hits, misses, starving, dirty, changed;
+ struct lc_element *changing_element; /* just for paranoia */
+
+ void *lc_private;
+ const char *name;
+
+ struct hlist_head slot[0];
+	/* hash collision chains here, then element storage. */
+};
+
+
+/* flag-bits for lru_cache */
+enum {
+ __LC_PARANOIA,
+ __LC_DIRTY,
+ __LC_STARVING,
+};
+#define LC_PARANOIA (1<<__LC_PARANOIA)
+#define LC_DIRTY (1<<__LC_DIRTY)
+#define LC_STARVING (1<<__LC_STARVING)
+
+extern struct lru_cache *lc_alloc(const char *name, unsigned int e_count,
+ size_t e_size, void *private_p);
+extern void lc_reset(struct lru_cache *lc);
+extern void lc_free(struct lru_cache *lc);
+extern void lc_set(struct lru_cache *lc, unsigned int enr, int index);
+extern void lc_del(struct lru_cache *lc, struct lc_element *element);
+
+extern struct lc_element *lc_try_get(struct lru_cache *lc, unsigned int enr);
+extern struct lc_element *lc_find(struct lru_cache *lc, unsigned int enr);
+extern struct lc_element *lc_get(struct lru_cache *lc, unsigned int enr);
+extern unsigned int lc_put(struct lru_cache *lc, struct lc_element *e);
+extern void lc_changed(struct lru_cache *lc, struct lc_element *e);
+
+struct seq_file;
+extern size_t lc_printf_stats(struct seq_file *seq, struct lru_cache *lc);
+
+void lc_dump(struct lru_cache *lc, struct seq_file *seq, char *utext,
+ void (*detail) (struct seq_file *, struct lc_element *));
+
+/* This can be used to stop lc_get() from changing the set of active elements.
+ * Note that the reference counts and order on the lru list may still change.
+ * Returns true if we acquired the lock.
+ */
+static inline int lc_try_lock(struct lru_cache *lc)
+{
+ return !test_and_set_bit(__LC_DIRTY, &lc->flags);
+}
+
+static inline void lc_unlock(struct lru_cache *lc)
+{
+ clear_bit(__LC_DIRTY, &lc->flags);
+ smp_mb__after_clear_bit();
+}
+
+static inline int lc_is_used(struct lru_cache *lc, unsigned int enr)
+{
+ struct lc_element *e = lc_find(lc, enr);
+ return e && e->refcnt;
+}
+
+#define LC_FREE (-1U)
+
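+/* Memory layout of an lru_cache object, as allocated by lc_alloc():
+ * the struct lru_cache itself, whose flexible slot[] array holds the
+ * nr_elements hash collision chain heads, directly followed by the
+ * storage for nr_elements objects of element_size bytes each.
+ * The macros below translate between element index and pointer. */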
+#define lc_e_base(lc) ((char *)((lc)->slot + (lc)->nr_elements))
+#define lc_entry(lc, i) ((struct lc_element *) \
+ (lc_e_base(lc) + (i)*(lc)->element_size))
+#define lc_index_of(lc, e) (((char *)(e) - lc_e_base(lc))/(lc)->element_size)
+
+#endif
diff -uNrp linux-2.6.29/drivers/block/drbd/lru_cache.c linux-2.6.29-drbd/drivers/block/drbd/lru_cache.c
--- linux-2.6.29/drivers/block/drbd/lru_cache.c 1970-01-01 01:00:00.000000000 +0100
+++ linux-2.6.29-drbd/drivers/block/drbd/lru_cache.c 2009-03-26 15:55:39.591134000 +0100
@@ -0,0 +1,397 @@
+/*
+ lru_cache.c
+
+ This file is part of DRBD by Philipp Reisner and Lars Ellenberg.
+
+ Copyright (C) 2003-2008, LINBIT Information Technologies GmbH.
+ Copyright (C) 2003-2008, Philipp Reisner <[email protected]>.
+ Copyright (C) 2003-2008, Lars Ellenberg <[email protected]>.
+
+ drbd is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2, or (at your option)
+ any later version.
+
+ drbd is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with drbd; see the file COPYING. If not, write to
+ the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA.
+
+ */
+
+#include <linux/bitops.h>
+#include <linux/vmalloc.h>
+#include <linux/string.h> /* for memset */
+#include <linux/seq_file.h> /* for seq_printf */
+#include "lru_cache.h"
+
+/* this is a developer's aid only! */
+#define PARANOIA_ENTRY() BUG_ON(test_and_set_bit(__LC_PARANOIA, &lc->flags))
+#define PARANOIA_LEAVE() do { clear_bit(__LC_PARANOIA, &lc->flags); smp_mb__after_clear_bit(); } while (0)
+#define RETURN(x...) do { PARANOIA_LEAVE(); return x ; } while (0)
+
+static inline size_t size_of_lc(unsigned int e_count, size_t e_size)
+{
+ return sizeof(struct lru_cache)
+ + e_count * (e_size + sizeof(struct hlist_head));
+}
+
+static inline void lc_init(struct lru_cache *lc,
+ const size_t bytes, const char *name,
+ const unsigned int e_count, const size_t e_size,
+ void *private_p)
+{
+ struct lc_element *e;
+ unsigned int i;
+
+ memset(lc, 0, bytes);
+ INIT_LIST_HEAD(&lc->in_use);
+ INIT_LIST_HEAD(&lc->lru);
+ INIT_LIST_HEAD(&lc->free);
+ lc->element_size = e_size;
+ lc->nr_elements = e_count;
+ lc->new_number = -1;
+ lc->lc_private = private_p;
+ lc->name = name;
+ for (i = 0; i < e_count; i++) {
+ e = lc_entry(lc, i);
+ e->lc_number = LC_FREE;
+ list_add(&e->list, &lc->free);
+ /* memset(,0,) did the rest of init for us */
+ }
+}
+
+/**
+ * lc_alloc: allocates memory for @e_count objects of @e_size bytes plus the
+ * struct lru_cache, and the hash table slots.
+ * Returns a pointer to a newly initialized lru_cache object with said parameters.
+ */
+struct lru_cache *lc_alloc(const char *name, unsigned int e_count,
+ size_t e_size, void *private_p)
+{
+ struct lru_cache *lc;
+ size_t bytes;
+
+ BUG_ON(!e_count);
+ e_size = max(sizeof(struct lc_element), e_size);
+ bytes = size_of_lc(e_count, e_size);
+ lc = vmalloc(bytes);
+ if (lc)
+ lc_init(lc, bytes, name, e_count, e_size, private_p);
+ return lc;
+}
+
+/**
+ * lc_free: Frees memory allocated by lc_alloc.
+ * @lc: The lru_cache object
+ */
+void lc_free(struct lru_cache *lc)
+{
+ vfree(lc);
+}
+
+/**
+ * lc_reset: does a full reset for @lc and the hash table slots.
+ * It is roughly the equivalent of re-allocating a fresh lru_cache object,
+ * basically a shortcut for lc_free(lc); lc = lc_alloc(...);
+ */
+void lc_reset(struct lru_cache *lc)
+{
+ lc_init(lc, size_of_lc(lc->nr_elements, lc->element_size), lc->name,
+ lc->nr_elements, lc->element_size, lc->lc_private);
+}
+
+size_t lc_printf_stats(struct seq_file *seq, struct lru_cache *lc)
+{
+ /* NOTE:
+ * total calls to lc_get are
+ * (starving + hits + misses)
+	 * misses include the "dirty" count (an update from another thread was
+	 * in progress) and "changed", when this in fact led to a successful
+	 * update of the cache.
+ */
+ return seq_printf(seq, "\t%s: used:%u/%u "
+ "hits:%lu misses:%lu starving:%lu dirty:%lu changed:%lu\n",
+ lc->name, lc->used, lc->nr_elements,
+ lc->hits, lc->misses, lc->starving, lc->dirty, lc->changed);
+}
+
+static unsigned int lc_hash_fn(struct lru_cache *lc, unsigned int enr)
+{
+ return enr % lc->nr_elements;
+}
+
+
+/**
+ * lc_find: Returns a pointer to the element, if it is present in the hash
+ * table; otherwise returns NULL.
+ * @lc: The lru_cache object
+ * @enr: element number
+ */
+struct lc_element *lc_find(struct lru_cache *lc, unsigned int enr)
+{
+ struct hlist_node *n;
+ struct lc_element *e;
+
+ BUG_ON(!lc);
+ BUG_ON(!lc->nr_elements);
+ hlist_for_each_entry(e, n, lc->slot + lc_hash_fn(lc, enr), colision) {
+ if (e->lc_number == enr)
+ return e;
+ }
+ return NULL;
+}
+
+static struct lc_element *lc_evict(struct lru_cache *lc)
+{
+ struct list_head *n;
+ struct lc_element *e;
+
+ if (list_empty(&lc->lru))
+ return NULL;
+
+ n = lc->lru.prev;
+ e = list_entry(n, struct lc_element, list);
+
+ list_del(&e->list);
+ hlist_del(&e->colision);
+ return e;
+}
+
+/**
+ * lc_del: Removes an element from the cache (and therefore adds the
+ * element's storage to the free list)
+ *
+ * @lc: The lru_cache object
+ * @e: The element to remove
+ */
+void lc_del(struct lru_cache *lc, struct lc_element *e)
+{
+ PARANOIA_ENTRY();
+ BUG_ON(e->refcnt);
+ list_del(&e->list);
+ hlist_del_init(&e->colision);
+ e->lc_number = LC_FREE;
+ e->refcnt = 0;
+ list_add(&e->list, &lc->free);
+ RETURN();
+}
+
+static struct lc_element *lc_get_unused_element(struct lru_cache *lc)
+{
+ struct list_head *n;
+
+ if (list_empty(&lc->free))
+ return lc_evict(lc);
+
+ n = lc->free.next;
+ list_del(n);
+ return list_entry(n, struct lc_element, list);
+}
+
+static int lc_unused_element_available(struct lru_cache *lc)
+{
+ if (!list_empty(&lc->free))
+ return 1; /* something on the free list */
+ if (!list_empty(&lc->lru))
+ return 1; /* something to evict */
+
+ return 0;
+}
+
+
+/**
+ * lc_get: Finds an element in the cache, increases its usage count,
+ * "touches" and returns it.
+ * In case the requested number is not present, it needs to be added to the
+ * cache. Therefore it is possible that another element gets evicted from
+ * the cache. In either case, the user is notified so he is able to e.g. keep
+ * a persistent log of the cache changes, and therefore the objects in use.
+ *
+ * Return values:
+ * NULL if the requested element number was not in the cache, and no unused
+ * element could be recycled
+ * pointer to the element with the REQUESTED element number
+ * In this case, it can be used right away
+ *
+ * pointer to an UNUSED element with some different element number.
+ * In this case, the cache is marked dirty, and the returned element
+ * pointer is removed from the lru list and hash collision chains.
+ *          The user now should do whatever housekeeping is necessary. Then he
+ *          needs to call lc_changed(lc, element_pointer), to finish the
+ * change.
+ *
+ * NOTE: The user needs to check the lc_number on EACH use, so he recognizes
+ * any cache set change.
+ *
+ * @lc: The lru_cache object
+ * @enr: element number
+ */
+struct lc_element *lc_get(struct lru_cache *lc, unsigned int enr)
+{
+ struct lc_element *e;
+
+ BUG_ON(!lc);
+ BUG_ON(!lc->nr_elements);
+
+ PARANOIA_ENTRY();
+ if (lc->flags & LC_STARVING) {
+ ++lc->starving;
+ RETURN(NULL);
+ }
+
+ e = lc_find(lc, enr);
+ if (e) {
+ ++lc->hits;
+ if (e->refcnt++ == 0)
+ lc->used++;
+ list_move(&e->list, &lc->in_use); /* Not evictable... */
+ RETURN(e);
+ }
+
+ ++lc->misses;
+
+ /* In case there is nothing available and we can not kick out
+ * the LRU element, we have to wait ...
+ */
+ if (!lc_unused_element_available(lc)) {
+ __set_bit(__LC_STARVING, &lc->flags);
+ RETURN(NULL);
+ }
+
+ /* it was not present in the cache, find an unused element,
+ * which then is replaced.
+ * we need to update the cache; serialize on lc->flags & LC_DIRTY
+ */
+ if (test_and_set_bit(__LC_DIRTY, &lc->flags)) {
+ ++lc->dirty;
+ RETURN(NULL);
+ }
+
+ e = lc_get_unused_element(lc);
+ BUG_ON(!e);
+
+ clear_bit(__LC_STARVING, &lc->flags);
+ BUG_ON(++e->refcnt != 1);
+ lc->used++;
+
+ lc->changing_element = e;
+ lc->new_number = enr;
+
+ RETURN(e);
+}
+
+/* similar to lc_get,
+ * but only gets a new reference on an existing element.
+ * you either get the requested element, or NULL.
+ */
+struct lc_element *lc_try_get(struct lru_cache *lc, unsigned int enr)
+{
+ struct lc_element *e;
+
+ BUG_ON(!lc);
+ BUG_ON(!lc->nr_elements);
+
+ PARANOIA_ENTRY();
+ if (lc->flags & LC_STARVING) {
+ ++lc->starving;
+ RETURN(NULL);
+ }
+
+ e = lc_find(lc, enr);
+ if (e) {
+ ++lc->hits;
+ if (e->refcnt++ == 0)
+ lc->used++;
+ list_move(&e->list, &lc->in_use); /* Not evictable... */
+ }
+ RETURN(e);
+}
+
+void lc_changed(struct lru_cache *lc, struct lc_element *e)
+{
+ PARANOIA_ENTRY();
+ BUG_ON(e != lc->changing_element);
+ ++lc->changed;
+ e->lc_number = lc->new_number;
+ list_add(&e->list, &lc->in_use);
+ hlist_add_head(&e->colision,
+ lc->slot + lc_hash_fn(lc, lc->new_number));
+ lc->changing_element = NULL;
+ lc->new_number = -1;
+ clear_bit(__LC_DIRTY, &lc->flags);
+ smp_mb__after_clear_bit();
+ PARANOIA_LEAVE();
+}
+
+
+unsigned int lc_put(struct lru_cache *lc, struct lc_element *e)
+{
+ BUG_ON(!lc);
+ BUG_ON(!lc->nr_elements);
+ BUG_ON(!e);
+
+ PARANOIA_ENTRY();
+ BUG_ON(e->refcnt == 0);
+ BUG_ON(e == lc->changing_element);
+ if (--e->refcnt == 0) {
+ /* move it to the front of LRU. */
+ list_move(&e->list, &lc->lru);
+ lc->used--;
+ clear_bit(__LC_STARVING, &lc->flags);
+ smp_mb__after_clear_bit();
+ }
+ RETURN(e->refcnt);
+}
+
+
+/**
+ * lc_set: Sets an element in the cache. You might use this function to
+ * set up the cache. It is expected that the elements are properly initialized.
+ * @lc: The lru_cache object
+ * @enr: element number
+ * @index: The element's position in the cache
+ */
+void lc_set(struct lru_cache *lc, unsigned int enr, int index)
+{
+ struct lc_element *e;
+
+ if (index < 0 || index >= lc->nr_elements)
+ return;
+
+ e = lc_entry(lc, index);
+ e->lc_number = enr;
+
+ hlist_del_init(&e->colision);
+ hlist_add_head(&e->colision, lc->slot + lc_hash_fn(lc, enr));
+ list_move(&e->list, e->refcnt ? &lc->in_use : &lc->lru);
+}
+
+/**
+ * lc_dump: Dump a complete LRU cache to seq in textual form.
+ */
+void lc_dump(struct lru_cache *lc, struct seq_file *seq, char *utext,
+ void (*detail) (struct seq_file *, struct lc_element *))
+{
+ unsigned int nr_elements = lc->nr_elements;
+ struct lc_element *e;
+ int i;
+
+ seq_printf(seq, "\tnn: lc_number refcnt %s\n ", utext);
+ for (i = 0; i < nr_elements; i++) {
+ e = lc_entry(lc, i);
+ if (e->lc_number == LC_FREE) {
+ seq_printf(seq, "\t%2d: FREE\n", i);
+ } else {
+ seq_printf(seq, "\t%2d: %4u %4u ", i,
+ e->lc_number,
+ e->refcnt);
+ detail(seq, e);
+ }
+ }
+}
+
DRBD maintains a dirty bitmap in case it has to run without a peer node or
without its local disk. Writes to the on-disk dirty bitmap are minimized by the
activity log (=AL). Each time an extent is evicted from the AL, the part of
the bitmap no longer covered by the AL is written to disk.
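With the granularity used here this works out as follows (a worked
example based on the constants used in the code below):

	1 bit           = 4 KB of device data   (BM_BLOCK_SIZE)
	1 AL extent     = 4 MB  = 1024 bits = 128 bytes of bitmap
	1 bitmap sector = 512 B = 4096 bits = 16 MB of device data
	                = 4 AL extents          (AL_EXT_PER_BM_SECT)

so evicting a single extent from the AL dirties at most one 512-byte
sector of the on-disk bitmap.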
Signed-off-by: Philipp Reisner <[email protected]>
Signed-off-by: Lars Ellenberg <[email protected]>
---
diff -uNrp linux-2.6.29/drivers/block/drbd/drbd_bitmap.c linux-2.6.29-drbd/drivers/block/drbd/drbd_bitmap.c
--- linux-2.6.29/drivers/block/drbd/drbd_bitmap.c 1970-01-01 01:00:00.000000000 +0100
+++ linux-2.6.29-drbd/drivers/block/drbd/drbd_bitmap.c 2009-03-30 16:49:08.215133000 +0200
@@ -0,0 +1,1307 @@
+/*
+ drbd_bitmap.c
+
+ This file is part of DRBD by Philipp Reisner and Lars Ellenberg.
+
+ Copyright (C) 2004-2008, LINBIT Information Technologies GmbH.
+ Copyright (C) 2004-2008, Philipp Reisner <[email protected]>.
+ Copyright (C) 2004-2008, Lars Ellenberg <[email protected]>.
+
+ drbd is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2, or (at your option)
+ any later version.
+
+ drbd is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with drbd; see the file COPYING. If not, write to
+ the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA.
+ */
+
+#include <linux/bitops.h>
+#include <linux/vmalloc.h>
+#include <linux/string.h>
+#include <linux/drbd.h>
+#include "drbd_int.h"
+
+/* OPAQUE outside this file!
+ * interface defined in drbd_int.h
+
+ * convention:
+ * function name drbd_bm_... => used elsewhere, "public".
+ * function name bm_... => internal to implementation, "private".
+
+ * Note that since find_first_bit returns int, at the current granularity of
+ * the bitmap (one bit per 4KB), this implementation "only" supports up to
+ * 1<<(32+12) == 16 TB...
+ */
+
+/*
+ * NOTE
+ * Access to the *bm_pages is protected by bm_lock.
+ * It is safe to read the other members within the lock.
+ *
+ * drbd_bm_set_bits is called from bio_endio callbacks,
+ * We may be called with irq already disabled,
+ * so we need spin_lock_irqsave().
+ * And we need the kmap_atomic.
+ */
+struct drbd_bitmap {
+ struct page **bm_pages;
+ spinlock_t bm_lock;
+ /* WARNING unsigned long bm_*:
+	 * a 32bit bit offset is just enough for a 512 MB bitmap.
+ * it will blow up if we make the bitmap bigger...
+ * not that it makes much sense to have a bitmap that large,
+ * rather change the granularity to 16k or 64k or something.
+ * (that implies other problems, however...)
+ */
+ unsigned long bm_set; /* nr of set bits; THINK maybe atomic_t? */
+ unsigned long bm_bits;
+ size_t bm_words;
+ size_t bm_number_of_pages;
+ sector_t bm_dev_capacity;
+ struct semaphore bm_change; /* serializes resize operations */
+
+ atomic_t bm_async_io;
+ wait_queue_head_t bm_io_wait;
+
+ unsigned long bm_flags;
+
+ /* debugging aid, in case we are still racy somewhere */
+ char *bm_why;
+ struct task_struct *bm_task;
+};
+
+/* definition of bits in bm_flags */
+#define BM_LOCKED 0
+#define BM_MD_IO_ERROR (BITS_PER_LONG-1) /* 31? 63? */
+
+static inline int bm_is_locked(struct drbd_bitmap *b)
+{
+ return test_bit(BM_LOCKED, &b->bm_flags);
+}
+
+#define bm_print_lock_info(m) __bm_print_lock_info(m, __func__)
+static void __bm_print_lock_info(struct drbd_conf *mdev, const char *func)
+{
+ struct drbd_bitmap *b = mdev->bitmap;
+ if (!__ratelimit(&drbd_ratelimit_state))
+ return;
+ ERR("FIXME %s in %s, bitmap locked for '%s' by %s\n",
+ current == mdev->receiver.task ? "receiver" :
+ current == mdev->asender.task ? "asender" :
+ current == mdev->worker.task ? "worker" : current->comm,
+ func, b->bm_why ?: "?",
+ b->bm_task == mdev->receiver.task ? "receiver" :
+ b->bm_task == mdev->asender.task ? "asender" :
+ b->bm_task == mdev->worker.task ? "worker" : "?");
+}
+
+void drbd_bm_lock(struct drbd_conf *mdev, char *why)
+{
+ struct drbd_bitmap *b = mdev->bitmap;
+ int trylock_failed;
+
+ if (!b) {
+ ERR("FIXME no bitmap in drbd_bm_lock!?\n");
+ return;
+ }
+
+ trylock_failed = down_trylock(&b->bm_change);
+
+ if (trylock_failed) {
+ DBG("%s going to '%s' but bitmap already locked for '%s' by %s\n",
+ current == mdev->receiver.task ? "receiver" :
+ current == mdev->asender.task ? "asender" :
+ current == mdev->worker.task ? "worker" : "?",
+ why, b->bm_why ?: "?",
+ b->bm_task == mdev->receiver.task ? "receiver" :
+ b->bm_task == mdev->asender.task ? "asender" :
+ b->bm_task == mdev->worker.task ? "worker" : "?");
+ down(&b->bm_change);
+ }
+ if (__test_and_set_bit(BM_LOCKED, &b->bm_flags))
+ ERR("FIXME bitmap already locked in bm_lock\n");
+
+ b->bm_why = why;
+ b->bm_task = current;
+}
+
+void drbd_bm_unlock(struct drbd_conf *mdev)
+{
+ struct drbd_bitmap *b = mdev->bitmap;
+ if (!b) {
+ ERR("FIXME no bitmap in drbd_bm_unlock!?\n");
+ return;
+ }
+
+ if (!__test_and_clear_bit(BM_LOCKED, &mdev->bitmap->bm_flags))
+ ERR("FIXME bitmap not locked in bm_unlock\n");
+
+ b->bm_why = NULL;
+ b->bm_task = NULL;
+ up(&b->bm_change);
+}
+
+#define bm_end_info(ignored...) ((void)(0))
+
+#if 0
+#define catch_oob_access_start() do { \
+ do { \
+ if ((bm-p_addr) >= PAGE_SIZE/sizeof(long)) { \
+ printk(KERN_ALERT "drbd_bitmap.c:%u %s: p_addr:%p bm:%p %d\n", \
+ __LINE__ , __func__ , p_addr, bm, (bm-p_addr)); \
+ break; \
+ }
+#define catch_oob_access_end() \
+ } while (0); } while (0)
+#else
+#define catch_oob_access_start() do {
+#define catch_oob_access_end() } while (0)
+#endif
+
+/* word offset to long pointer */
+STATIC unsigned long *__bm_map_paddr(struct drbd_bitmap *b, unsigned long offset, const enum km_type km)
+{
+ struct page *page;
+ unsigned long page_nr;
+
+ /* page_nr = (word*sizeof(long)) >> PAGE_SHIFT; */
+ page_nr = offset >> (PAGE_SHIFT - LN2_BPL + 3);
+ BUG_ON(page_nr >= b->bm_number_of_pages);
+ page = b->bm_pages[page_nr];
+
+ return (unsigned long *) kmap_atomic(page, km);
+}
+
+unsigned long *bm_map_paddr(struct drbd_bitmap *b, unsigned long offset)
+{
+ return __bm_map_paddr(b, offset, KM_IRQ1);
+}
+
+void __bm_unmap(unsigned long *p_addr, const enum km_type km)
+{
+ kunmap_atomic(p_addr, km);
+}
+
+void bm_unmap(unsigned long *p_addr)
+{
+ return __bm_unmap(p_addr, KM_IRQ1);
+}
+
+/* long word offset of _bitmap_ sector */
+#define S2W(s) ((s)<<(BM_EXT_SIZE_B-BM_BLOCK_SIZE_B-LN2_BPL))
+/* word offset from start of bitmap to word number _in_page_
+ * modulo longs per page
+#define MLPP(X) ((X) % (PAGE_SIZE/sizeof(long))
+   hm, well, Philipp thinks gcc might not optimize the % into & (... - 1)
+ so do it explicitly:
+ */
+#define MLPP(X) ((X) & ((PAGE_SIZE/sizeof(long))-1))
+
+/* Long words per page */
+#define LWPP (PAGE_SIZE/sizeof(long))
+
+/*
+ * actually most functions herein should take a struct drbd_bitmap*, not a
+ * struct drbd_conf*, but for the debug macros I like to have the mdev around
+ * to be able to report device-specific messages.
+ */
+
+STATIC void bm_free_pages(struct page **pages, unsigned long number)
+{
+ unsigned long i;
+ if (!pages)
+ return;
+
+ for (i = 0; i < number; i++) {
+ if (!pages[i]) {
+ printk(KERN_ALERT "drbd: bm_free_pages tried to free "
+ "a NULL pointer; i=%lu n=%lu\n",
+ i, number);
+ continue;
+ }
+ __free_page(pages[i]);
+ pages[i] = NULL;
+ }
+}
+
+/*
+ * "have" and "want" are NUMBER OF PAGES.
+ */
+STATIC struct page **bm_realloc_pages(struct page **old_pages,
+ unsigned long have,
+ unsigned long want)
+{
+ struct page **new_pages, *page;
+ unsigned int i, bytes;
+
+ BUG_ON(have == 0 && old_pages != NULL);
+ BUG_ON(have != 0 && old_pages == NULL);
+
+ if (have == want)
+ return old_pages;
+
+ /* To use kmalloc here is ok, as long as we support 4TB at max...
+ * otherwise this might become bigger than 128KB, which is
+ * the maximum for kmalloc.
+ *
+ * no, it is not: on 64bit boxes, sizeof(void*) == 8,
+ * 128MB bitmap @ 4K pages -> 256K of page pointers.
+ * ==> use vmalloc for now again.
+ * then again, we could do something like
+ * if (nr_pages > watermark) vmalloc else kmalloc :*> ...
+ * or do cascading page arrays:
+ * one page for the page array of the page array,
+ * those pages for the real bitmap pages.
+ * there we could even add some optimization members,
+ * so we won't need to kmap_atomic in bm_find_next_bit just to see
+ * that the page has no bits set ...
+ * or we can try a "huge" page ;-)
+ */
+ bytes = sizeof(struct page *)*want;
+ new_pages = vmalloc(bytes);
+ if (!new_pages)
+ return NULL;
+
+ memset(new_pages, 0, bytes);
+ if (want >= have) {
+ for (i = 0; i < have; i++)
+ new_pages[i] = old_pages[i];
+ for (; i < want; i++) {
+ page = alloc_page(GFP_HIGHUSER);
+ if (!page) {
+ bm_free_pages(new_pages + have, i - have);
+ vfree(new_pages);
+ return NULL;
+ }
+ new_pages[i] = page;
+ }
+ } else {
+ for (i = 0; i < want; i++)
+ new_pages[i] = old_pages[i];
+ /* NOT HERE, we are outside the spinlock!
+ bm_free_pages(old_pages + want, have - want);
+ */
+ }
+
+ return new_pages;
+}
+
+/*
+ * called on driver init only. TODO call when a device is created.
+ * allocates the drbd_bitmap, and stores it in mdev->bitmap.
+ */
+int drbd_bm_init(struct drbd_conf *mdev)
+{
+ struct drbd_bitmap *b = mdev->bitmap;
+ WARN_ON(b != NULL);
+ b = kzalloc(sizeof(struct drbd_bitmap), GFP_KERNEL);
+ if (!b)
+ return -ENOMEM;
+ spin_lock_init(&b->bm_lock);
+ init_MUTEX(&b->bm_change);
+ init_waitqueue_head(&b->bm_io_wait);
+
+ mdev->bitmap = b;
+
+ return 0;
+}
+
+sector_t drbd_bm_capacity(struct drbd_conf *mdev)
+{
+ ERR_IF(!mdev->bitmap) return 0;
+ return mdev->bitmap->bm_dev_capacity;
+}
+
+/* called on driver unload. TODO: call when a device is destroyed.
+ */
+void drbd_bm_cleanup(struct drbd_conf *mdev)
+{
+ ERR_IF (!mdev->bitmap) return;
+ bm_free_pages(mdev->bitmap->bm_pages, mdev->bitmap->bm_number_of_pages);
+ vfree(mdev->bitmap->bm_pages);
+ kfree(mdev->bitmap);
+ mdev->bitmap = NULL;
+}
+
+/*
+ * if (b->bm_bits % BITS_PER_LONG) != 0,
+ * this masks out the remaining bits.
+ * Returns the number of bits cleared.
+ */
+STATIC int bm_clear_surplus(struct drbd_bitmap *b)
+{
+ const unsigned long mask = (1UL << (b->bm_bits & (BITS_PER_LONG-1))) - 1;
+ size_t w = b->bm_bits >> LN2_BPL;
+ int cleared = 0;
+ unsigned long *p_addr, *bm;
+
+ p_addr = bm_map_paddr(b, w);
+ bm = p_addr + MLPP(w);
+ if (w < b->bm_words) {
+ catch_oob_access_start();
+ cleared = hweight_long(*bm & ~mask);
+ *bm &= mask;
+ catch_oob_access_end();
+ w++; bm++;
+ }
+
+ if (w < b->bm_words) {
+ catch_oob_access_start();
+ cleared += hweight_long(*bm);
+ *bm = 0;
+ catch_oob_access_end();
+ }
+ bm_unmap(p_addr);
+ return cleared;
+}
+
+STATIC void bm_set_surplus(struct drbd_bitmap *b)
+{
+ const unsigned long mask = (1UL << (b->bm_bits & (BITS_PER_LONG-1))) - 1;
+ size_t w = b->bm_bits >> LN2_BPL;
+ unsigned long *p_addr, *bm;
+
+ p_addr = bm_map_paddr(b, w);
+ bm = p_addr + MLPP(w);
+ if (w < b->bm_words) {
+ catch_oob_access_start();
+ *bm |= ~mask;
+ bm++; w++;
+ catch_oob_access_end();
+ }
+
+ if (w < b->bm_words) {
+ catch_oob_access_start();
+ *bm = ~(0UL);
+ catch_oob_access_end();
+ }
+ bm_unmap(p_addr);
+}
+
+STATIC unsigned long __bm_count_bits(struct drbd_bitmap *b, const int swap_endian)
+{
+ unsigned long *p_addr, *bm, offset = 0;
+ unsigned long bits = 0;
+ unsigned long i, do_now;
+
+ while (offset < b->bm_words) {
+ i = do_now = min_t(size_t, b->bm_words-offset, LWPP);
+ p_addr = bm_map_paddr(b, offset);
+ bm = p_addr + MLPP(offset);
+ while (i--) {
+ catch_oob_access_start();
+#ifndef __LITTLE_ENDIAN
+ if (swap_endian)
+ *bm = lel_to_cpu(*bm);
+#endif
+ bits += hweight_long(*bm++);
+ catch_oob_access_end();
+ }
+ bm_unmap(p_addr);
+ offset += do_now;
+ }
+
+ return bits;
+}
+
+static inline unsigned long bm_count_bits(struct drbd_bitmap *b)
+{
+ return __bm_count_bits(b, 0);
+}
+
+static inline unsigned long bm_count_bits_swap_endian(struct drbd_bitmap *b)
+{
+ return __bm_count_bits(b, 1);
+}
+
+void _drbd_bm_recount_bits(struct drbd_conf *mdev, char *file, int line)
+{
+ struct drbd_bitmap *b = mdev->bitmap;
+ unsigned long flags, bits;
+
+ ERR_IF(!b) return;
+
+ /* IMO this should be inside drbd_bm_lock/unlock.
+ * Unfortunately it is used outside of the locks.
+ * And I'm not yet sure where we need to place the
+ * lock/unlock correctly.
+ */
+
+ spin_lock_irqsave(&b->bm_lock, flags);
+ bits = bm_count_bits(b);
+ if (bits != b->bm_set) {
+ ERR("bm_set was %lu, corrected to %lu. %s:%d\n",
+ b->bm_set, bits, file, line);
+ b->bm_set = bits;
+ }
+ spin_unlock_irqrestore(&b->bm_lock, flags);
+}
+
+/* offset and len in long words.*/
+STATIC void bm_memset(struct drbd_bitmap *b, size_t offset, int c, size_t len)
+{
+ unsigned long *p_addr, *bm;
+ size_t do_now, end;
+
+#define BM_SECTORS_PER_BIT (BM_BLOCK_SIZE/512)
+
+ end = offset + len;
+
+ if (end > b->bm_words) {
+ printk(KERN_ALERT "drbd: bm_memset end > bm_words\n");
+ return;
+ }
+
+ while (offset < end) {
+ do_now = min_t(size_t, ALIGN(offset + 1, LWPP), end) - offset;
+ p_addr = bm_map_paddr(b, offset);
+ bm = p_addr + MLPP(offset);
+ catch_oob_access_start();
+ if (bm+do_now > p_addr + LWPP) {
+ printk(KERN_ALERT "drbd: BUG BUG BUG! p_addr:%p bm:%p do_now:%d\n",
+ p_addr, bm, (int)do_now);
+ break; /* breaks to after catch_oob_access_end() only! */
+ }
+ memset(bm, c, do_now * sizeof(long));
+ catch_oob_access_end();
+ bm_unmap(p_addr);
+ offset += do_now;
+ }
+}
+
+/*
+ * make sure the bitmap has enough room for the attached storage,
+ * if necessary, resize.
+ * called whenever we may have changed the device size.
+ * returns -ENOMEM if we could not allocate enough memory, 0 on success.
+ * In case this is actually a resize, we copy the old bitmap into the new one.
+ * Otherwise, the bitmap is initialized to all bits set.
+ */
+int drbd_bm_resize(struct drbd_conf *mdev, sector_t capacity)
+{
+ struct drbd_bitmap *b = mdev->bitmap;
+ unsigned long bits, words, owords, obits, *p_addr, *bm;
+ unsigned long want, have, onpages; /* number of pages */
+ struct page **npages, **opages = NULL;
+ int err = 0, growing;
+
+ ERR_IF(!b) return -ENOMEM;
+
+ drbd_bm_lock(mdev, "resize");
+
+ INFO("drbd_bm_resize called with capacity == %llu\n",
+ (unsigned long long)capacity);
+
+ if (capacity == b->bm_dev_capacity)
+ goto out;
+
+ if (capacity == 0) {
+ spin_lock_irq(&b->bm_lock);
+ opages = b->bm_pages;
+ onpages = b->bm_number_of_pages;
+ owords = b->bm_words;
+ b->bm_pages = NULL;
+ b->bm_number_of_pages =
+ b->bm_set =
+ b->bm_bits =
+ b->bm_words =
+ b->bm_dev_capacity = 0;
+ spin_unlock_irq(&b->bm_lock);
+ bm_free_pages(opages, onpages);
+ vfree(opages);
+ goto out;
+ }
+ bits = BM_SECT_TO_BIT(ALIGN(capacity, BM_SECT_PER_BIT));
+
+ /* if we would use
+ words = ALIGN(bits,BITS_PER_LONG) >> LN2_BPL;
+ a 32bit host could present the wrong number of words
+ to a 64bit host.
+ */
+ words = ALIGN(bits, 64) >> LN2_BPL;
+
+ if (inc_local(mdev)) {
+ D_ASSERT((u64)bits <= (((u64)mdev->bc->md.md_size_sect-MD_BM_OFFSET) << 12));
+ dec_local(mdev);
+ }
+
+ /* one extra long to catch off by one errors */
+ want = ALIGN((words+1)*sizeof(long), PAGE_SIZE) >> PAGE_SHIFT;
+ have = b->bm_number_of_pages;
+ if (want == have) {
+ D_ASSERT(b->bm_pages != NULL);
+ npages = b->bm_pages;
+ } else
+ npages = bm_realloc_pages(b->bm_pages, have, want);
+
+ if (!npages) {
+ err = -ENOMEM;
+ goto out;
+ }
+
+ spin_lock_irq(&b->bm_lock);
+ opages = b->bm_pages;
+ owords = b->bm_words;
+ obits = b->bm_bits;
+
+ growing = bits > obits;
+ if (opages)
+ bm_set_surplus(b);
+
+ b->bm_pages = npages;
+ b->bm_number_of_pages = want;
+ b->bm_bits = bits;
+ b->bm_words = words;
+ b->bm_dev_capacity = capacity;
+
+ if (growing) {
+ bm_memset(b, owords, 0xff, words-owords);
+ b->bm_set += bits - obits;
+ }
+
+ if (want < have) {
+ /* implicit: (opages != NULL) && (opages != npages) */
+ bm_free_pages(opages + want, have - want);
+ }
+
+ p_addr = bm_map_paddr(b, words);
+ bm = p_addr + MLPP(words);
+ catch_oob_access_start();
+ *bm = DRBD_MAGIC;
+ catch_oob_access_end();
+ bm_unmap(p_addr);
+
+ (void)bm_clear_surplus(b);
+ if (!growing)
+ b->bm_set = bm_count_bits(b);
+
+ bm_end_info(mdev, __func__);
+ spin_unlock_irq(&b->bm_lock);
+ if (opages != npages)
+ vfree(opages);
+ INFO("resync bitmap: bits=%lu words=%lu\n", bits, words);
+
+ out:
+ drbd_bm_unlock(mdev);
+ return err;
+}
+
+/* inherently racy:
+ * if not protected by other means, return value may be out of date when
+ * leaving this function...
+ * we still need to lock it, since it is important that this returns
+ * bm_set == 0 precisely.
+ *
+ * maybe bm_set should be atomic_t ?
+ */
+unsigned long drbd_bm_total_weight(struct drbd_conf *mdev)
+{
+ struct drbd_bitmap *b = mdev->bitmap;
+ unsigned long s;
+ unsigned long flags;
+
+ /* if I don't have a disk, I don't know about out-of-sync status */
+ if (!inc_local_if_state(mdev, Negotiating))
+ return 0;
+
+ ERR_IF(!b) return 0;
+ ERR_IF(!b->bm_pages) return 0;
+
+ spin_lock_irqsave(&b->bm_lock, flags);
+ s = b->bm_set;
+ spin_unlock_irqrestore(&b->bm_lock, flags);
+
+ dec_local(mdev);
+
+ return s;
+}
+
+size_t drbd_bm_words(struct drbd_conf *mdev)
+{
+ struct drbd_bitmap *b = mdev->bitmap;
+ ERR_IF(!b) return 0;
+ ERR_IF(!b->bm_pages) return 0;
+
+ return b->bm_words;
+}
+
+unsigned long drbd_bm_bits(struct drbd_conf *mdev)
+{
+ struct drbd_bitmap *b = mdev->bitmap;
+ ERR_IF(!b) return 0;
+
+ return b->bm_bits;
+}
+
+/* merge number words from buffer into the bitmap starting at offset.
+ * buffer[i] is expected to be little endian unsigned long.
+ * bitmap must be locked by drbd_bm_lock.
+ * currently only used from receive_bitmap.
+ */
+void drbd_bm_merge_lel(struct drbd_conf *mdev, size_t offset, size_t number,
+ unsigned long *buffer)
+{
+ struct drbd_bitmap *b = mdev->bitmap;
+ unsigned long *p_addr, *bm;
+ unsigned long word, bits;
+ size_t end, do_now;
+
+ end = offset + number;
+
+ ERR_IF(!b) return;
+ ERR_IF(!b->bm_pages) return;
+ if (number == 0)
+ return;
+ WARN_ON(offset >= b->bm_words);
+ WARN_ON(end > b->bm_words);
+
+ spin_lock_irq(&b->bm_lock);
+ while (offset < end) {
+ do_now = min_t(size_t, ALIGN(offset+1, LWPP), end) - offset;
+ p_addr = bm_map_paddr(b, offset);
+ bm = p_addr + MLPP(offset);
+ offset += do_now;
+ while (do_now--) {
+ catch_oob_access_start();
+ bits = hweight_long(*bm);
+ word = *bm | lel_to_cpu(*buffer++);
+ *bm++ = word;
+ b->bm_set += hweight_long(word) - bits;
+ catch_oob_access_end();
+ }
+ bm_unmap(p_addr);
+ }
+ /* with 32bit <-> 64bit cross-platform connect
+ * this is only correct for current usage,
+ * where we _know_ that we are 64 bit aligned,
+ * and know that this function is used in this way, too...
+ */
+ if (end == b->bm_words) {
+ b->bm_set -= bm_clear_surplus(b);
+ bm_end_info(mdev, __func__);
+ }
+ spin_unlock_irq(&b->bm_lock);
+}
+
+/* copy number words from the bitmap starting at offset into the buffer.
+ * buffer[i] will be little endian unsigned long.
+ */
+void drbd_bm_get_lel(struct drbd_conf *mdev, size_t offset, size_t number,
+ unsigned long *buffer)
+{
+ struct drbd_bitmap *b = mdev->bitmap;
+ unsigned long *p_addr, *bm;
+ size_t end, do_now;
+
+ end = offset + number;
+
+ ERR_IF(!b) return;
+ ERR_IF(!b->bm_pages) return;
+
+ spin_lock_irq(&b->bm_lock);
+ if ((offset >= b->bm_words) ||
+ (end > b->bm_words) ||
+ (number <= 0))
+ ERR("offset=%lu number=%lu bm_words=%lu\n",
+ (unsigned long) offset,
+ (unsigned long) number,
+ (unsigned long) b->bm_words);
+ else {
+ while (offset < end) {
+ do_now = min_t(size_t, ALIGN(offset+1, LWPP), end) - offset;
+ p_addr = bm_map_paddr(b, offset);
+ bm = p_addr + MLPP(offset);
+ offset += do_now;
+ while (do_now--) {
+ catch_oob_access_start();
+ *buffer++ = cpu_to_lel(*bm++);
+ catch_oob_access_end();
+ }
+ bm_unmap(p_addr);
+ }
+ }
+ spin_unlock_irq(&b->bm_lock);
+}
+
+/* set all bits in the bitmap */
+void drbd_bm_set_all(struct drbd_conf *mdev)
+{
+ struct drbd_bitmap *b = mdev->bitmap;
+ ERR_IF(!b) return;
+ ERR_IF(!b->bm_pages) return;
+
+ spin_lock_irq(&b->bm_lock);
+ bm_memset(b, 0, 0xff, b->bm_words);
+ (void)bm_clear_surplus(b);
+ b->bm_set = b->bm_bits;
+ spin_unlock_irq(&b->bm_lock);
+}
+
+/* clear all bits in the bitmap */
+void drbd_bm_clear_all(struct drbd_conf *mdev)
+{
+ struct drbd_bitmap *b = mdev->bitmap;
+ ERR_IF(!b) return;
+ ERR_IF(!b->bm_pages) return;
+
+ spin_lock_irq(&b->bm_lock);
+ bm_memset(b, 0, 0, b->bm_words);
+ b->bm_set = 0;
+ spin_unlock_irq(&b->bm_lock);
+}
+
+static void bm_async_io_complete(struct bio *bio, int error)
+{
+ struct drbd_bitmap *b = bio->bi_private;
+ int uptodate = bio_flagged(bio, BIO_UPTODATE);
+
+
+ /* strange behaviour of some lower level drivers...
+ * fail the request by clearing the uptodate flag,
+ * but do not return any error?!
+ * do we want to WARN() on this? */
+ if (!error && !uptodate)
+ error = -EIO;
+
+ if (error) {
+ /* doh. what now?
+ * for now, set all bits, and flag MD_IO_ERROR */
+ __set_bit(BM_MD_IO_ERROR, &b->bm_flags);
+ }
+ if (atomic_dec_and_test(&b->bm_async_io))
+ wake_up(&b->bm_io_wait);
+
+ bio_put(bio);
+}
+
+STATIC void bm_page_io_async(struct drbd_conf *mdev, struct drbd_bitmap *b, int page_nr, int rw) __must_hold(local)
+{
+ /* we are process context. we always get a bio */
+ struct bio *bio = bio_alloc(GFP_KERNEL, 1);
+ unsigned int len;
+ sector_t on_disk_sector =
+ mdev->bc->md.md_offset + mdev->bc->md.bm_offset;
+ on_disk_sector += ((sector_t)page_nr) << (PAGE_SHIFT-9);
+
+ /* this might happen with very small
+ * flexible external meta data device */
+ len = min_t(unsigned int, PAGE_SIZE,
+ (drbd_md_last_sector(mdev->bc) - on_disk_sector + 1)<<9);
+
+ bio->bi_bdev = mdev->bc->md_bdev;
+ bio->bi_sector = on_disk_sector;
+ bio_add_page(bio, b->bm_pages[page_nr], len, 0);
+ bio->bi_private = b;
+ bio->bi_end_io = bm_async_io_complete;
+
+ if (FAULT_ACTIVE(mdev, (rw & WRITE) ? DRBD_FAULT_MD_WR : DRBD_FAULT_MD_RD)) {
+ bio->bi_rw |= rw;
+ bio_endio(bio, -EIO);
+ } else {
+ submit_bio(rw, bio);
+ }
+}
+
+# if defined(__LITTLE_ENDIAN)
+ /* nothing to do, on disk == in memory */
+# define bm_cpu_to_lel(x) ((void)0)
+# else
+void bm_cpu_to_lel(struct drbd_bitmap *b)
+{
+	/* need to cpu_to_lel all the pages ...
+	 * this may be optimized by using
+	 * cpu_to_lel(-1) == -1 and cpu_to_lel(0) == 0;
+	 * the following is still not optimal, but better than nothing */
+	unsigned int i;
+	unsigned long *p_addr;
+
+ if (b->bm_set == 0) {
+ /* no page at all; avoid swap if all is 0 */
+ i = b->bm_number_of_pages;
+ } else if (b->bm_set == b->bm_bits) {
+ /* only the last page */
+ i = b->bm_number_of_pages - 1;
+ } else {
+ /* all pages */
+ i = 0;
+ }
+ for (; i < b->bm_number_of_pages; i++) {
+ unsigned long *bm;
+ /* if you'd want to use kmap_atomic, you'd have to disable irq! */
+ p_addr = kmap(b->bm_pages[i]);
+ for (bm = p_addr; bm < p_addr + PAGE_SIZE/sizeof(long); bm++)
+ *bm = cpu_to_lel(*bm);
+		kunmap(b->bm_pages[i]);	/* kunmap() takes the page, not the mapping */
+ }
+}
+# endif
+/* lel_to_cpu == cpu_to_lel */
+# define bm_lel_to_cpu(x) bm_cpu_to_lel(x)
+
+/*
+ * bm_rw: read/write the whole bitmap from/to its on disk location.
+ */
+STATIC int bm_rw(struct drbd_conf *mdev, int rw) __must_hold(local)
+{
+ struct drbd_bitmap *b = mdev->bitmap;
+ /* sector_t sector; */
+ int bm_words, num_pages, i;
+ unsigned long now;
+ char ppb[10];
+ int err = 0;
+
+ WARN_ON(!bm_is_locked(b));
+
+ /* no spinlock here, the drbd_bm_lock should be enough! */
+
+ bm_words = drbd_bm_words(mdev);
+ num_pages = (bm_words*sizeof(long) + PAGE_SIZE-1) >> PAGE_SHIFT;
+
+ /* on disk bitmap is little endian */
+ if (rw == WRITE)
+ bm_cpu_to_lel(b);
+
+ now = jiffies;
+ atomic_set(&b->bm_async_io, num_pages);
+ __clear_bit(BM_MD_IO_ERROR, &b->bm_flags);
+
+ /* let the layers below us try to merge these bios... */
+ for (i = 0; i < num_pages; i++)
+ bm_page_io_async(mdev, b, i, rw);
+
+ drbd_blk_run_queue(bdev_get_queue(mdev->bc->md_bdev));
+ wait_event(b->bm_io_wait, atomic_read(&b->bm_async_io) == 0);
+
+ MTRACE(TraceTypeMDIO, TraceLvlSummary,
+ INFO("%s of bitmap took %lu jiffies\n",
+ rw == READ ? "reading" : "writing", jiffies - now);
+ );
+
+ if (test_bit(BM_MD_IO_ERROR, &b->bm_flags)) {
+ ALERT("we had at least one MD IO ERROR during bitmap IO\n");
+ drbd_chk_io_error(mdev, 1, TRUE);
+ drbd_io_error(mdev, TRUE);
+ err = -EIO;
+ }
+
+ now = jiffies;
+ if (rw == WRITE) {
+ /* swap back endianness */
+ bm_lel_to_cpu(b);
+ /* flush bitmap to stable storage */
+ drbd_md_flush(mdev);
+ } else /* rw == READ */ {
+		/* just read, if necessary adjust endianness */
+ b->bm_set = bm_count_bits_swap_endian(b);
+ INFO("recounting of set bits took additional %lu jiffies\n",
+ jiffies - now);
+ }
+ now = b->bm_set;
+
+ INFO("%s (%lu bits) marked out-of-sync by on disk bit-map.\n",
+ ppsize(ppb, now << (BM_BLOCK_SIZE_B-10)), now);
+
+ return err;
+}
+
+/**
+ * drbd_bm_read: Read the whole bitmap from its on disk location.
+ *
+ * currently only called from "drbd_nl_disk_conf"
+ */
+int drbd_bm_read(struct drbd_conf *mdev) __must_hold(local)
+{
+ return bm_rw(mdev, READ);
+}
+
+/**
+ * drbd_bm_write: Write the whole bitmap to its on disk location.
+ *
+ * called at various occasions.
+ */
+int drbd_bm_write(struct drbd_conf *mdev) __must_hold(local)
+{
+ return bm_rw(mdev, WRITE);
+}
+
+/**
+ * drbd_bm_write_sect: Writes a 512 byte piece of the bitmap to its
+ * on disk location. On disk bitmap is little endian.
+ *
+ * @enr: The _sector_ offset from the start of the bitmap.
+ *
+ */
+int drbd_bm_write_sect(struct drbd_conf *mdev, unsigned long enr) __must_hold(local)
+{
+ sector_t on_disk_sector = enr + mdev->bc->md.md_offset
+ + mdev->bc->md.bm_offset;
+ int bm_words, num_words, offset;
+ int err = 0;
+
+ mutex_lock(&mdev->md_io_mutex);
+ bm_words = drbd_bm_words(mdev);
+ offset = S2W(enr); /* word offset into bitmap */
+ num_words = min(S2W(1), bm_words - offset);
+ if (num_words < S2W(1))
+ memset(page_address(mdev->md_io_page), 0, MD_HARDSECT);
+ drbd_bm_get_lel(mdev, offset, num_words,
+ page_address(mdev->md_io_page));
+ if (!drbd_md_sync_page_io(mdev, mdev->bc, on_disk_sector, WRITE)) {
+ int i;
+ err = -EIO;
+ ERR("IO ERROR writing bitmap sector %lu "
+ "(meta-disk sector %llus)\n",
+ enr, (unsigned long long)on_disk_sector);
+ drbd_chk_io_error(mdev, 1, TRUE);
+ drbd_io_error(mdev, TRUE);
+ for (i = 0; i < AL_EXT_PER_BM_SECT; i++)
+ drbd_bm_ALe_set_all(mdev, enr*AL_EXT_PER_BM_SECT+i);
+ }
+ mdev->bm_writ_cnt++;
+ mutex_unlock(&mdev->md_io_mutex);
+ return err;
+}
+
+/* NOTE
+ * find_first_bit returns int, we return unsigned long.
+ * should not make much difference anyways, but ...
+ *
+ * this returns a bit number, NOT a sector!
+ */
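+/* bit offset within one page: a page holds PAGE_SIZE * 8 == 1<<(PAGE_SHIFT+3) bitmap bits */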
+#define BPP_MASK ((1UL << (PAGE_SHIFT+3)) - 1)
+static unsigned long __bm_find_next(struct drbd_conf *mdev, unsigned long bm_fo,
+ const int find_zero_bit, const enum km_type km)
+{
+ struct drbd_bitmap *b = mdev->bitmap;
+ unsigned long i = -1UL;
+ unsigned long *p_addr;
+ unsigned long bit_offset; /* bit offset of the mapped page. */
+
+ if (bm_fo > b->bm_bits) {
+ ERR("bm_fo=%lu bm_bits=%lu\n", bm_fo, b->bm_bits);
+ } else {
+ while (bm_fo < b->bm_bits) {
+ unsigned long offset;
+ bit_offset = bm_fo & ~BPP_MASK; /* bit offset of the page */
+ offset = bit_offset >> LN2_BPL; /* word offset of the page */
+ p_addr = __bm_map_paddr(b, offset, km);
+
+ if (find_zero_bit)
+ i = find_next_zero_bit(p_addr, PAGE_SIZE*8, bm_fo & BPP_MASK);
+ else
+ i = find_next_bit(p_addr, PAGE_SIZE*8, bm_fo & BPP_MASK);
+
+ __bm_unmap(p_addr, km);
+ if (i < PAGE_SIZE*8) {
+ i = bit_offset + i;
+ if (i >= b->bm_bits)
+ break;
+ goto found;
+ }
+ bm_fo = bit_offset + PAGE_SIZE*8;
+ }
+ i = -1UL;
+ }
+ found:
+ return i;
+}
+
+static unsigned long bm_find_next(struct drbd_conf *mdev,
+ unsigned long bm_fo, const int find_zero_bit)
+{
+ struct drbd_bitmap *b = mdev->bitmap;
+ unsigned long i = -1UL;
+
+ ERR_IF(!b) return i;
+ ERR_IF(!b->bm_pages) return i;
+
+ spin_lock_irq(&b->bm_lock);
+ if (bm_is_locked(b))
+ bm_print_lock_info(mdev);
+
+ i = __bm_find_next(mdev, bm_fo, find_zero_bit, KM_IRQ1);
+
+ spin_unlock_irq(&b->bm_lock);
+ return i;
+}
+
+unsigned long drbd_bm_find_next(struct drbd_conf *mdev, unsigned long bm_fo)
+{
+ return bm_find_next(mdev, bm_fo, 0);
+}
+
+#if 0
+/* not yet needed for anything. */
+unsigned long drbd_bm_find_next_zero(struct drbd_conf *mdev, unsigned long bm_fo)
+{
+ return bm_find_next(mdev, bm_fo, 1);
+}
+#endif
+
+/* does not spin_lock_irqsave.
+ * you must take drbd_bm_lock() first */
+unsigned long _drbd_bm_find_next(struct drbd_conf *mdev, unsigned long bm_fo)
+{
+ /* WARN_ON(!bm_is_locked(mdev)); */
+ return __bm_find_next(mdev, bm_fo, 0, KM_USER1);
+}
+
+unsigned long _drbd_bm_find_next_zero(struct drbd_conf *mdev, unsigned long bm_fo)
+{
+ /* WARN_ON(!bm_is_locked(mdev)); */
+ return __bm_find_next(mdev, bm_fo, 1, KM_USER1);
+}
+
+/* returns number of bits actually changed.
+ * for val != 0, we change 0 -> 1, return code positive
+ * for val == 0, we change 1 -> 0, return code negative
+ * wants bitnr, not sector.
+ * Must hold bitmap lock already. */
+
+int __bm_change_bits_to(struct drbd_conf *mdev, const unsigned long s,
+ const unsigned long e, int val, const enum km_type km)
+{
+ struct drbd_bitmap *b = mdev->bitmap;
+ unsigned long *p_addr = NULL;
+ unsigned long bitnr;
+ unsigned long last_page_nr = -1UL;
+ int c = 0;
+
+ for (bitnr = s; bitnr <= e; bitnr++) {
+ ERR_IF (bitnr >= b->bm_bits) {
+ ERR("bitnr=%lu bm_bits=%lu\n", bitnr, b->bm_bits);
+ } else {
+ unsigned long offset = bitnr>>LN2_BPL;
+ unsigned long page_nr = offset >> (PAGE_SHIFT - LN2_BPL + 3);
+ if (page_nr != last_page_nr) {
+ if (p_addr)
+ __bm_unmap(p_addr, km);
+ p_addr = __bm_map_paddr(b, offset, km);
+ last_page_nr = page_nr;
+ }
+ if (val)
+ c += (0 == __test_and_set_bit(bitnr & BPP_MASK, p_addr));
+ else
+ c -= (0 != __test_and_clear_bit(bitnr & BPP_MASK, p_addr));
+ }
+ }
+ if (p_addr)
+ __bm_unmap(p_addr, km);
+ b->bm_set += c;
+ return c;
+}
+
+/* returns number of bits actually changed.
+ * for val != 0, we change 0 -> 1, return code positive
+ * for val == 0, we change 1 -> 0, return code negative
+ * wants bitnr, not sector */
+int bm_change_bits_to(struct drbd_conf *mdev, const unsigned long s,
+ const unsigned long e, int val)
+{
+ unsigned long flags;
+ struct drbd_bitmap *b = mdev->bitmap;
+ int c = 0;
+
+ ERR_IF(!b) return 1;
+ ERR_IF(!b->bm_pages) return 0;
+
+ spin_lock_irqsave(&b->bm_lock, flags);
+ if (bm_is_locked(b))
+ bm_print_lock_info(mdev);
+
+ c = __bm_change_bits_to(mdev, s, e, val, KM_IRQ1);
+
+ spin_unlock_irqrestore(&b->bm_lock, flags);
+ return c;
+}
+
+/* returns number of bits changed 0 -> 1 */
+int drbd_bm_set_bits(struct drbd_conf *mdev, const unsigned long s, const unsigned long e)
+{
+ return bm_change_bits_to(mdev, s, e, 1);
+}
+
+/* returns number of bits changed 1 -> 0 */
+int drbd_bm_clear_bits(struct drbd_conf *mdev, const unsigned long s, const unsigned long e)
+{
+ return -bm_change_bits_to(mdev, s, e, 0);
+}
+
+/* the same thing, but without taking the spin_lock_irqsave.
+ * you must first drbd_bm_lock(). */
+int _drbd_bm_set_bits(struct drbd_conf *mdev, const unsigned long s, const unsigned long e)
+{
+ /* WARN_ON(!bm_is_locked(b)); */
+ return __bm_change_bits_to(mdev, s, e, 1, KM_USER0);
+}
+
+/* returns bit state
+ * wants bitnr, NOT sector.
+ * inherently racy... area needs to be locked by means of {al,rs}_lru
+ * 1 ... bit set
+ * 0 ... bit not set
+ * -1 ... first out of bounds access, stop testing for bits!
+ */
+int drbd_bm_test_bit(struct drbd_conf *mdev, const unsigned long bitnr)
+{
+ unsigned long flags;
+ struct drbd_bitmap *b = mdev->bitmap;
+ unsigned long *p_addr;
+ int i;
+
+ ERR_IF(!b) return 0;
+ ERR_IF(!b->bm_pages) return 0;
+
+ spin_lock_irqsave(&b->bm_lock, flags);
+ if (bm_is_locked(b))
+ bm_print_lock_info(mdev);
+ if (bitnr < b->bm_bits) {
+ unsigned long offset = bitnr>>LN2_BPL;
+ p_addr = bm_map_paddr(b, offset);
+ i = test_bit(bitnr & BPP_MASK, p_addr) ? 1 : 0;
+ bm_unmap(p_addr);
+ } else if (bitnr == b->bm_bits) {
+ i = -1;
+ } else { /* (bitnr > b->bm_bits) */
+ ERR("bitnr=%lu > bm_bits=%lu\n", bitnr, b->bm_bits);
+ i = 0;
+ }
+
+ spin_unlock_irqrestore(&b->bm_lock, flags);
+ return i;
+}
+
+/* returns number of bits set */
+int drbd_bm_count_bits(struct drbd_conf *mdev, const unsigned long s, const unsigned long e)
+{
+ unsigned long flags;
+ struct drbd_bitmap *b = mdev->bitmap;
+ unsigned long *p_addr = NULL, page_nr = -1;
+ unsigned long bitnr;
+ int c = 0;
+ size_t w;
+
+ /* If this is called without a bitmap, that is a bug. But just to be
+ * robust in case we screwed up elsewhere, in that case pretend there
+ * was one dirty bit in the requested area, so we won't try to do a
+ * local read there (no bitmap probably implies no disk) */
+ ERR_IF(!b) return 1;
+ ERR_IF(!b->bm_pages) return 1;
+
+ spin_lock_irqsave(&b->bm_lock, flags);
+ for (bitnr = s; bitnr <= e; bitnr++) {
+ w = bitnr >> LN2_BPL;
+ if (page_nr != w >> (PAGE_SHIFT - LN2_BPL + 3)) {
+ page_nr = w >> (PAGE_SHIFT - LN2_BPL + 3);
+ if (p_addr)
+ bm_unmap(p_addr);
+ p_addr = bm_map_paddr(b, w);
+ }
+ ERR_IF (bitnr >= b->bm_bits) {
+ ERR("bitnr=%lu bm_bits=%lu\n", bitnr, b->bm_bits);
+ } else {
+ c += (0 != test_bit(bitnr - (page_nr << (PAGE_SHIFT+3)), p_addr));
+ }
+ }
+ if (p_addr)
+ bm_unmap(p_addr);
+ spin_unlock_irqrestore(&b->bm_lock, flags);
+ return c;
+}
+
+
+/* inherently racy...
+ * return value may be already out-of-date when this function returns.
+ * but the general usage is that it is only used during a cstate when bits are
+ * only cleared, not set, and we typically only care about the case when the
+ * return value is zero, or we already "locked" this "bitmap extent" by other
+ * means.
+ *
+ * enr is bm-extent number, since we chose to name one sector (512 bytes)
+ * worth of the bitmap a "bitmap extent".
+ *
+ * TODO
+ * I think since we use it like a reference count, we should use the real
+ * reference count of some bitmap extent element from some lru instead...
+ *
+ */
+int drbd_bm_e_weight(struct drbd_conf *mdev, unsigned long enr)
+{
+ struct drbd_bitmap *b = mdev->bitmap;
+ int count, s, e;
+ unsigned long flags;
+ unsigned long *p_addr, *bm;
+
+ ERR_IF(!b) return 0;
+ ERR_IF(!b->bm_pages) return 0;
+
+ spin_lock_irqsave(&b->bm_lock, flags);
+ if (bm_is_locked(b))
+ bm_print_lock_info(mdev);
+
+ s = S2W(enr);
+ e = min((size_t)S2W(enr+1), b->bm_words);
+ count = 0;
+ if (s < b->bm_words) {
+ int n = e-s;
+ p_addr = bm_map_paddr(b, s);
+ bm = p_addr + MLPP(s);
+ while (n--) {
+ catch_oob_access_start();
+ count += hweight_long(*bm++);
+ catch_oob_access_end();
+ }
+ bm_unmap(p_addr);
+ } else {
+ ERR("start offset (%d) too large in drbd_bm_e_weight\n", s);
+ }
+ spin_unlock_irqrestore(&b->bm_lock, flags);
+ return count;
+}
+
+/* set all bits covered by the AL-extent al_enr */
+unsigned long drbd_bm_ALe_set_all(struct drbd_conf *mdev, unsigned long al_enr)
+{
+ struct drbd_bitmap *b = mdev->bitmap;
+ unsigned long *p_addr, *bm;
+ unsigned long weight;
+ int count, s, e, i, do_now;
+ ERR_IF(!b) return 0;
+ ERR_IF(!b->bm_pages) return 0;
+
+ spin_lock_irq(&b->bm_lock);
+ if (bm_is_locked(b))
+ bm_print_lock_info(mdev);
+ weight = b->bm_set;
+
+ s = al_enr * BM_WORDS_PER_AL_EXT;
+ e = min_t(size_t, s + BM_WORDS_PER_AL_EXT, b->bm_words);
+ /* assert that s and e are on the same page */
+ D_ASSERT((e-1) >> (PAGE_SHIFT - LN2_BPL + 3)
+ == s >> (PAGE_SHIFT - LN2_BPL + 3));
+ count = 0;
+ if (s < b->bm_words) {
+ i = do_now = e-s;
+ p_addr = bm_map_paddr(b, s);
+ bm = p_addr + MLPP(s);
+ while (i--) {
+ catch_oob_access_start();
+ count += hweight_long(*bm);
+ *bm = -1UL;
+ catch_oob_access_end();
+ bm++;
+ }
+ bm_unmap(p_addr);
+ b->bm_set += do_now*BITS_PER_LONG - count;
+ if (e == b->bm_words)
+ b->bm_set -= bm_clear_surplus(b);
+ } else {
+ ERR("start offset (%d) too large in drbd_bm_ALe_set_all\n", s);
+ }
+ weight = b->bm_set - weight;
+ spin_unlock_irq(&b->bm_lock);
+ return weight;
+}
Within DRBD the activity log is used to track extents (4MB each) in which IO
happens (or happened recently). It is based on the lru_cache. Each change of
the activity log causes a meta data update (a single sector write). The size
of the activity log is configured by the user, and is a tradeoff between
minimizing updates to the meta data and the resync time after the crash of a
primary node.
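As a worked example (numbers for illustration only): with an activity
log of 127 extents, at most 127 * 4 MB ~= 508 MB have to be resynced
after the crash of a primary node; enlarging the AL to 1024 extents
raises that worst case to 4 GB, but causes correspondingly fewer
single-sector meta data writes for workloads with a larger working set.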
Signed-off-by: Philipp Reisner <[email protected]>
Signed-off-by: Lars Ellenberg <[email protected]>
---
diff -uNrp linux-2.6.29/drivers/block/drbd/drbd_actlog.c linux-2.6.29-drbd/drivers/block/drbd/drbd_actlog.c
--- linux-2.6.29/drivers/block/drbd/drbd_actlog.c 1970-01-01 01:00:00.000000000 +0100
+++ linux-2.6.29-drbd/drivers/block/drbd/drbd_actlog.c 2009-03-30 16:51:50.123214000 +0200
@@ -0,0 +1,1473 @@
+/*
+ drbd_actlog.c
+
+ This file is part of DRBD by Philipp Reisner and Lars Ellenberg.
+
+ Copyright (C) 2003-2008, LINBIT Information Technologies GmbH.
+ Copyright (C) 2003-2008, Philipp Reisner <[email protected]>.
+ Copyright (C) 2003-2008, Lars Ellenberg <[email protected]>.
+
+ drbd is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2, or (at your option)
+ any later version.
+
+ drbd is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with drbd; see the file COPYING. If not, write to
+ the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA.
+
+ */
+
+#include <linux/slab.h>
+#include <linux/drbd.h>
+#include "drbd_int.h"
+#include "drbd_wrappers.h"
+
+/* I do not believe that all storage media can guarantee atomic
+ * 512-byte write operations. When the journal is read, only
+ * transactions with correct xor_sums are considered.
+ * sizeof() = 512 byte */
+struct __attribute__((packed)) al_transaction {
+ u32 magic;
+ u32 tr_number;
+ struct __attribute__((packed)) {
+ u32 pos;
+ u32 extent; } updates[1 + AL_EXTENTS_PT];
+ u32 xor_sum;
+};
+
+struct update_odbm_work {
+ struct drbd_work w;
+ unsigned int enr;
+};
+
+struct update_al_work {
+ struct drbd_work w;
+ struct lc_element *al_ext;
+ struct completion event;
+ unsigned int enr;
+ /* if old_enr != LC_FREE, write corresponding bitmap sector, too */
+ unsigned int old_enr;
+};
+
+struct drbd_atodb_wait {
+ atomic_t count;
+ struct completion io_done;
+ struct drbd_conf *mdev;
+ int error;
+};
+
+
+int w_al_write_transaction(struct drbd_conf *, struct drbd_work *, int);
+
+STATIC int _drbd_md_sync_page_io(struct drbd_conf *mdev,
+ struct drbd_backing_dev *bdev,
+ struct page *page, sector_t sector,
+ int rw, int size)
+{
+ struct bio *bio;
+ struct drbd_md_io md_io;
+ int ok;
+
+ md_io.mdev = mdev;
+ init_completion(&md_io.event);
+ md_io.error = 0;
+
+ if (rw == WRITE && !test_bit(MD_NO_BARRIER, &mdev->flags))
+ rw |= (1<<BIO_RW_BARRIER);
+ rw |= ((1<<BIO_RW_UNPLUG) | (1<<BIO_RW_SYNCIO));
+
+ retry:
+ bio = bio_alloc(GFP_NOIO, 1);
+ bio->bi_bdev = bdev->md_bdev;
+ bio->bi_sector = sector;
+ ok = (bio_add_page(bio, page, size, 0) == size);
+ if (!ok)
+ goto out;
+ bio->bi_private = &md_io;
+ bio->bi_end_io = drbd_md_io_complete;
+ bio->bi_rw = rw;
+
+ dump_internal_bio("Md", mdev, bio, 0);
+
+ if (FAULT_ACTIVE(mdev, (rw & WRITE) ? DRBD_FAULT_MD_WR : DRBD_FAULT_MD_RD))
+ bio_endio(bio, -EIO);
+ else
+ submit_bio(rw, bio);
+ wait_for_completion(&md_io.event);
+ ok = bio_flagged(bio, BIO_UPTODATE) && md_io.error == 0;
+
+ /* check for unsupported barrier op.
+ * would rather check on EOPNOTSUPP, but that is not reliable.
+ * don't try again for ANY return value != 0 */
+ if (unlikely(bio_barrier(bio) && !ok)) {
+ /* Try again with no barrier */
+ drbd_WARN("Barriers not supported on meta data device - disabling\n");
+ set_bit(MD_NO_BARRIER, &mdev->flags);
+ rw &= ~(1 << BIO_RW_BARRIER);
+ bio_put(bio);
+ goto retry;
+ }
+ out:
+ bio_put(bio);
+ return ok;
+}
+
+int drbd_md_sync_page_io(struct drbd_conf *mdev, struct drbd_backing_dev *bdev,
+ sector_t sector, int rw)
+{
+ int hardsect, mask, ok;
+ int offset = 0;
+ struct page *iop = mdev->md_io_page;
+
+ D_ASSERT(mutex_is_locked(&mdev->md_io_mutex));
+
+ BUG_ON(!bdev->md_bdev);
+
+ hardsect = drbd_get_hardsect(bdev->md_bdev);
+ if (hardsect == 0)
+ hardsect = MD_HARDSECT;
+
+ /* in case hardsect != 512 [ s390 only? ] */
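+	/* e.g. hardsect == 4096: a write to sector 13 first reads the
+	 * full 4K block at sector 8 into md_io_tmpp, patches the 512
+	 * byte at sector offset 5, then writes the whole block back */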
+ if (hardsect != MD_HARDSECT) {
+ mask = (hardsect / MD_HARDSECT) - 1;
+ D_ASSERT(mask == 1 || mask == 3 || mask == 7);
+ D_ASSERT(hardsect == (mask+1) * MD_HARDSECT);
+ offset = sector & mask;
+ sector = sector & ~mask;
+ iop = mdev->md_io_tmpp;
+
+ if (rw == WRITE) {
+ void *p = page_address(mdev->md_io_page);
+ void *hp = page_address(mdev->md_io_tmpp);
+
+ ok = _drbd_md_sync_page_io(mdev, bdev, iop,
+ sector, READ, hardsect);
+
+ if (unlikely(!ok)) {
+ ERR("drbd_md_sync_page_io(,%llus,"
+ "READ [hardsect!=512]) failed!\n",
+ (unsigned long long)sector);
+ return 0;
+ }
+
+			memcpy(hp + offset*MD_HARDSECT, p, MD_HARDSECT);
+ }
+ }
+
+ if (sector < drbd_md_first_sector(bdev) ||
+ sector > drbd_md_last_sector(bdev))
+ ALERT("%s [%d]:%s(,%llus,%s) out of range md access!\n",
+ current->comm, current->pid, __func__,
+ (unsigned long long)sector, rw ? "WRITE" : "READ");
+
+ ok = _drbd_md_sync_page_io(mdev, bdev, iop, sector, rw, hardsect);
+ if (unlikely(!ok)) {
+ ERR("drbd_md_sync_page_io(,%llus,%s) failed!\n",
+ (unsigned long long)sector, rw ? "WRITE" : "READ");
+ return 0;
+ }
+
+ if (hardsect != MD_HARDSECT && rw == READ) {
+ void *p = page_address(mdev->md_io_page);
+ void *hp = page_address(mdev->md_io_tmpp);
+
+ memcpy(p, hp + offset*MD_HARDSECT, MD_HARDSECT);
+ }
+
+ return ok;
+}
+
+static inline
+struct lc_element *_al_get(struct drbd_conf *mdev, unsigned int enr)
+{
+ struct lc_element *al_ext;
+ struct bm_extent *bm_ext;
+ unsigned long al_flags = 0;
+
+ spin_lock_irq(&mdev->al_lock);
+ bm_ext = (struct bm_extent *)
+ lc_find(mdev->resync, enr/AL_EXT_PER_BM_SECT);
+ if (unlikely(bm_ext != NULL)) {
+ if (test_bit(BME_NO_WRITES, &bm_ext->flags)) {
+ spin_unlock_irq(&mdev->al_lock);
+ return NULL;
+ }
+ }
+ al_ext = lc_get(mdev->act_log, enr);
+ al_flags = mdev->act_log->flags;
+ spin_unlock_irq(&mdev->al_lock);
+
+ /*
+ if (!al_ext) {
+ if (al_flags & LC_STARVING)
+ drbd_WARN("Have to wait for LRU element (AL too small?)\n");
+ if (al_flags & LC_DIRTY)
+ drbd_WARN("Ongoing AL update (AL device too slow?)\n");
+ }
+ */
+
+ return al_ext;
+}
+
+void drbd_al_begin_io(struct drbd_conf *mdev, sector_t sector)
+{
+ unsigned int enr = (sector >> (AL_EXTENT_SIZE_B-9));
+ struct lc_element *al_ext;
+ struct update_al_work al_work;
+
+ D_ASSERT(atomic_read(&mdev->local_cnt) > 0);
+
+ MTRACE(TraceTypeALExts, TraceLvlMetrics,
+ INFO("al_begin_io( sec=%llus (al_enr=%u) (rs_enr=%d) )\n",
+ (unsigned long long) sector, enr,
+ (int)BM_SECT_TO_EXT(sector));
+ );
+
+ wait_event(mdev->al_wait, (al_ext = _al_get(mdev, enr)));
+
+ if (al_ext->lc_number != enr) {
+		/* drbd_al_write_transaction(mdev,al_ext,enr);
+		   bios submitted via generic_make_request() are
+		   serialized on the current->bio_tail list now.
+		   Therefore we have to delegate writing to the AL
+		   to the worker thread. */
+ init_completion(&al_work.event);
+ al_work.al_ext = al_ext;
+ al_work.enr = enr;
+ al_work.old_enr = al_ext->lc_number;
+ al_work.w.cb = w_al_write_transaction;
+ drbd_queue_work_front(&mdev->data.work, &al_work.w);
+ wait_for_completion(&al_work.event);
+
+ mdev->al_writ_cnt++;
+
+ spin_lock_irq(&mdev->al_lock);
+ lc_changed(mdev->act_log, al_ext);
+ spin_unlock_irq(&mdev->al_lock);
+ wake_up(&mdev->al_wait);
+ }
+}
+
+void drbd_al_complete_io(struct drbd_conf *mdev, sector_t sector)
+{
+ unsigned int enr = (sector >> (AL_EXTENT_SIZE_B-9));
+ struct lc_element *extent;
+ unsigned long flags;
+
+ MTRACE(TraceTypeALExts, TraceLvlMetrics,
+ INFO("al_complete_io( sec=%llus (al_enr=%u) (rs_enr=%d) )\n",
+ (unsigned long long) sector, enr,
+ (int)BM_SECT_TO_EXT(sector));
+ );
+
+ spin_lock_irqsave(&mdev->al_lock, flags);
+
+ extent = lc_find(mdev->act_log, enr);
+
+ if (!extent) {
+ spin_unlock_irqrestore(&mdev->al_lock, flags);
+ ERR("al_complete_io() called on inactive extent %u\n", enr);
+ return;
+ }
+
+ if (lc_put(mdev->act_log, extent) == 0)
+ wake_up(&mdev->al_wait);
+
+ spin_unlock_irqrestore(&mdev->al_lock, flags);
+}
+
+int
+w_al_write_transaction(struct drbd_conf *mdev, struct drbd_work *w, int unused)
+{
+ struct update_al_work *aw = (struct update_al_work *)w;
+ struct lc_element *updated = aw->al_ext;
+ const unsigned int new_enr = aw->enr;
+ const unsigned int evicted = aw->old_enr;
+
+ struct al_transaction *buffer;
+ sector_t sector;
+ int i, n, mx;
+ unsigned int extent_nr;
+ u32 xor_sum = 0;
+
+ if (!inc_local(mdev)) {
+ ERR("inc_local() failed in w_al_write_transaction\n");
+		complete(&aw->event);
+ return 1;
+ }
+ /* do we have to do a bitmap write, first?
+ * TODO reduce maximum latency:
+ * submit both bios, then wait for both,
+ * instead of doing two synchronous sector writes. */
+ if (mdev->state.conn < Connected && evicted != LC_FREE)
+ drbd_bm_write_sect(mdev, evicted/AL_EXT_PER_BM_SECT);
+
+ mutex_lock(&mdev->md_io_mutex); /* protects md_io_page, al_tr_cycle, ... */
+ buffer = (struct al_transaction *)page_address(mdev->md_io_page);
+
+ buffer->magic = __constant_cpu_to_be32(DRBD_MAGIC);
+ buffer->tr_number = cpu_to_be32(mdev->al_tr_number);
+
+ n = lc_index_of(mdev->act_log, updated);
+
+ buffer->updates[0].pos = cpu_to_be32(n);
+ buffer->updates[0].extent = cpu_to_be32(new_enr);
+
+ xor_sum ^= new_enr;
+
+ mx = min_t(int, AL_EXTENTS_PT,
+ mdev->act_log->nr_elements - mdev->al_tr_cycle);
+ for (i = 0; i < mx; i++) {
+ extent_nr = lc_entry(mdev->act_log,
+ mdev->al_tr_cycle+i)->lc_number;
+ buffer->updates[i+1].pos = cpu_to_be32(mdev->al_tr_cycle+i);
+ buffer->updates[i+1].extent = cpu_to_be32(extent_nr);
+ xor_sum ^= extent_nr;
+ }
+ for (; i < AL_EXTENTS_PT; i++) {
+ buffer->updates[i+1].pos = __constant_cpu_to_be32(-1);
+ buffer->updates[i+1].extent = __constant_cpu_to_be32(LC_FREE);
+ xor_sum ^= LC_FREE;
+ }
+ mdev->al_tr_cycle += AL_EXTENTS_PT;
+ if (mdev->al_tr_cycle >= mdev->act_log->nr_elements)
+ mdev->al_tr_cycle = 0;
+
+ buffer->xor_sum = cpu_to_be32(xor_sum);
+
+ sector = mdev->bc->md.md_offset
+ + mdev->bc->md.al_offset + mdev->al_tr_pos;
+
+ if (!drbd_md_sync_page_io(mdev, mdev->bc, sector, WRITE)) {
+ drbd_chk_io_error(mdev, 1, TRUE);
+ drbd_io_error(mdev, TRUE);
+ }
+
+ if (++mdev->al_tr_pos >
+ div_ceil(mdev->act_log->nr_elements, AL_EXTENTS_PT))
+ mdev->al_tr_pos = 0;
+
+ D_ASSERT(mdev->al_tr_pos < MD_AL_MAX_SIZE);
+ mdev->al_tr_number++;
+
+ mutex_unlock(&mdev->md_io_mutex);
+
+	complete(&aw->event);
+ dec_local(mdev);
+
+ return 1;
+}
+
+/**
+ * drbd_al_read_tr: Reads a single transaction record from the
+ * on disk activity log.
+ * Returns -1 on IO error, 0 on checksum error and 1 if it is a valid
+ * record.
+ */
+STATIC int drbd_al_read_tr(struct drbd_conf *mdev,
+ struct drbd_backing_dev *bdev,
+ struct al_transaction *b,
+ int index)
+{
+ sector_t sector;
+ int rv, i;
+ u32 xor_sum = 0;
+
+ sector = bdev->md.md_offset + bdev->md.al_offset + index;
+
+	/* Don't process errors normally,
+	 * as this is done before the disk is attached! */
+ if (!drbd_md_sync_page_io(mdev, bdev, sector, READ))
+ return -1;
+
+ rv = (be32_to_cpu(b->magic) == DRBD_MAGIC);
+
+ for (i = 0; i < AL_EXTENTS_PT + 1; i++)
+ xor_sum ^= be32_to_cpu(b->updates[i].extent);
+ rv &= (xor_sum == be32_to_cpu(b->xor_sum));
+
+ return rv;
+}
+
+/**
+ * drbd_al_read_log: Restores the activity log from its on disk
+ * representation. Returns 1 on success, returns 0 when
+ * reading the log failed due to IO errors.
+ */
+int drbd_al_read_log(struct drbd_conf *mdev, struct drbd_backing_dev *bdev)
+{
+ struct al_transaction *buffer;
+ int i;
+ int rv;
+ int mx;
+ int cnr;
+ int active_extents = 0;
+ int transactions = 0;
+ int overflow = 0;
+ int from = -1;
+ int to = -1;
+ u32 from_tnr = -1;
+ u32 to_tnr = 0;
+
+ mx = div_ceil(mdev->act_log->nr_elements, AL_EXTENTS_PT);
+
+ /* lock out all other meta data io for now,
+ * and make sure the page is mapped.
+ */
+ mutex_lock(&mdev->md_io_mutex);
+ buffer = page_address(mdev->md_io_page);
+
+ /* Find the valid transaction in the log */
+ for (i = 0; i <= mx; i++) {
+ rv = drbd_al_read_tr(mdev, bdev, buffer, i);
+ if (rv == 0)
+ continue;
+ if (rv == -1) {
+ mutex_unlock(&mdev->md_io_mutex);
+ return 0;
+ }
+ cnr = be32_to_cpu(buffer->tr_number);
+
+ if (cnr == -1)
+ overflow = 1;
+
+ if (cnr < from_tnr && !overflow) {
+ from = i;
+ from_tnr = cnr;
+ }
+ if (cnr > to_tnr) {
+ to = i;
+ to_tnr = cnr;
+ }
+ }
+
+ if (from == -1 || to == -1) {
+ drbd_WARN("No usable activity log found.\n");
+
+ mutex_unlock(&mdev->md_io_mutex);
+ return 1;
+ }
+
+ /* Read the valid transactions.
+ * INFO("Reading from %d to %d.\n",from,to); */
+ i = from;
+ while (1) {
+ int j, pos;
+ unsigned int extent_nr;
+ unsigned int trn;
+
+ rv = drbd_al_read_tr(mdev, bdev, buffer, i);
+ ERR_IF(rv == 0) goto cancel;
+ if (rv == -1) {
+ mutex_unlock(&mdev->md_io_mutex);
+ return 0;
+ }
+
+ trn = be32_to_cpu(buffer->tr_number);
+
+ spin_lock_irq(&mdev->al_lock);
+
+ /* This loop runs backwards because in the cyclic
+ elements there might be an old version of the
+ updated element (in slot 0). So the element in slot 0
+ can overwrite old versions. */
+ for (j = AL_EXTENTS_PT; j >= 0; j--) {
+ pos = be32_to_cpu(buffer->updates[j].pos);
+ extent_nr = be32_to_cpu(buffer->updates[j].extent);
+
+ if (extent_nr == LC_FREE)
+ continue;
+
+ lc_set(mdev->act_log, extent_nr, pos);
+ active_extents++;
+ }
+ spin_unlock_irq(&mdev->al_lock);
+
+ transactions++;
+
+cancel:
+ if (i == to)
+ break;
+ i++;
+ if (i > mx)
+ i = 0;
+ }
+
+ mdev->al_tr_number = to_tnr+1;
+ mdev->al_tr_pos = to;
+ if (++mdev->al_tr_pos >
+ div_ceil(mdev->act_log->nr_elements, AL_EXTENTS_PT))
+ mdev->al_tr_pos = 0;
+
+ /* ok, we are done with it */
+ mutex_unlock(&mdev->md_io_mutex);
+
+ INFO("Found %d transactions (%d active extents) in activity log.\n",
+ transactions, active_extents);
+
+ return 1;
+}
+
+STATIC void atodb_endio(struct bio *bio, int error)
+{
+ struct drbd_atodb_wait *wc = bio->bi_private;
+ struct drbd_conf *mdev = wc->mdev;
+ struct page *page;
+ int uptodate = bio_flagged(bio, BIO_UPTODATE);
+
+ /* strange behaviour of some lower level drivers...
+ * fail the request by clearing the uptodate flag,
+ * but do not return any error?! */
+ if (!error && !uptodate)
+ error = -EIO;
+
+ /* corresponding drbd_io_error is in drbd_al_to_on_disk_bm */
+ drbd_chk_io_error(mdev, error, TRUE);
+ if (error && wc->error == 0)
+ wc->error = error;
+
+ if (atomic_dec_and_test(&wc->count))
+ complete(&wc->io_done);
+
+ page = bio->bi_io_vec[0].bv_page;
+ put_page(page);
+ bio_put(bio);
+ mdev->bm_writ_cnt++;
+ dec_local(mdev);
+}
+
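+/* S2W(enr): word offset in the in-core bitmap at which the bits
+ * covered by on-disk bitmap sector 'enr' start; one 512 byte
+ * bitmap sector covers S2W(1) words */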
+#define S2W(s) ((s)<<(BM_EXT_SIZE_B-BM_BLOCK_SIZE_B-LN2_BPL))
+/* activity log to on disk bitmap -- prepare bio unless that sector
+ * is already covered by previously prepared bios */
+STATIC int atodb_prepare_unless_covered(struct drbd_conf *mdev,
+ struct bio **bios,
+ unsigned int enr,
+ struct drbd_atodb_wait *wc) __must_hold(local)
+{
+ struct bio *bio;
+ struct page *page;
+ sector_t on_disk_sector = enr + mdev->bc->md.md_offset
+ + mdev->bc->md.bm_offset;
+ unsigned int page_offset = PAGE_SIZE;
+ int offset;
+ int i = 0;
+ int err = -ENOMEM;
+
+ /* Check if that enr is already covered by an already created bio.
+ * Caution, bios[] is not NULL terminated,
+ * but only initialized to all NULL.
+ * For completely scattered activity log,
+ * the last invocation iterates over all bios,
+ * and finds the last NULL entry.
+ */
+ while ((bio = bios[i])) {
+ if (bio->bi_sector == on_disk_sector)
+ return 0;
+ i++;
+ }
+ /* bios[i] == NULL, the next not yet used slot */
+
+ bio = bio_alloc(GFP_KERNEL, 1);
+ if (bio == NULL)
+ return -ENOMEM;
+
+ if (i > 0) {
+ const struct bio_vec *prev_bv = bios[i-1]->bi_io_vec;
+ page_offset = prev_bv->bv_offset + prev_bv->bv_len;
+ page = prev_bv->bv_page;
+ }
+ if (page_offset == PAGE_SIZE) {
+ page = alloc_page(__GFP_HIGHMEM);
+ if (page == NULL)
+ goto out_bio_put;
+ page_offset = 0;
+ } else {
+ get_page(page);
+ }
+
+ offset = S2W(enr);
+ drbd_bm_get_lel(mdev, offset,
+ min_t(size_t, S2W(1), drbd_bm_words(mdev) - offset),
+ kmap(page) + page_offset);
+ kunmap(page);
+
+ bio->bi_private = wc;
+ bio->bi_end_io = atodb_endio;
+ bio->bi_bdev = mdev->bc->md_bdev;
+ bio->bi_sector = on_disk_sector;
+
+ if (bio_add_page(bio, page, MD_HARDSECT, page_offset) != MD_HARDSECT)
+ goto out_put_page;
+
+ atomic_inc(&wc->count);
+	/* we already know that we may do this...
+	 * inc_local_if_state(mdev,Attaching);
+	 * just get the extra reference, so that the local_cnt reflects
+	 * the number of IO requests DRBD has pending at its backing device.
+	 */
+ atomic_inc(&mdev->local_cnt);
+
+ bios[i] = bio;
+
+ return 0;
+
+out_put_page:
+ err = -EINVAL;
+ put_page(page);
+out_bio_put:
+ bio_put(bio);
+ return err;
+}
+
+/**
+ * drbd_al_to_on_disk_bm:
+ * Writes the areas of the bitmap which are covered by the AL.
+ * called when we detach (unconfigure) local storage,
+ * or when we go from Primary to Secondary state.
+ */
+void drbd_al_to_on_disk_bm(struct drbd_conf *mdev)
+{
+ int i, nr_elements;
+ unsigned int enr;
+ struct bio **bios;
+ struct drbd_atodb_wait wc;
+
+ ERR_IF (!inc_local_if_state(mdev, Attaching))
+ return; /* sorry, I don't have any act_log etc... */
+
+ wait_event(mdev->al_wait, lc_try_lock(mdev->act_log));
+
+ nr_elements = mdev->act_log->nr_elements;
+
+ bios = kzalloc(sizeof(struct bio *) * nr_elements, GFP_KERNEL);
+ if (!bios)
+ goto submit_one_by_one;
+
+ atomic_set(&wc.count, 0);
+ init_completion(&wc.io_done);
+ wc.mdev = mdev;
+ wc.error = 0;
+
+ for (i = 0; i < nr_elements; i++) {
+ enr = lc_entry(mdev->act_log, i)->lc_number;
+ if (enr == LC_FREE)
+ continue;
+ /* next statement also does atomic_inc wc.count and local_cnt */
+ if (atodb_prepare_unless_covered(mdev, bios,
+ enr/AL_EXT_PER_BM_SECT,
+ &wc))
+ goto free_bios_submit_one_by_one;
+ }
+
+	/* unnecessary optimization? */
+ lc_unlock(mdev->act_log);
+ wake_up(&mdev->al_wait);
+
+ /* all prepared, submit them */
+ for (i = 0; i < nr_elements; i++) {
+ if (bios[i] == NULL)
+ break;
+ if (FAULT_ACTIVE(mdev, DRBD_FAULT_MD_WR)) {
+ bios[i]->bi_rw = WRITE;
+ bio_endio(bios[i], -EIO);
+ } else {
+ submit_bio(WRITE, bios[i]);
+ }
+ }
+
+ drbd_blk_run_queue(bdev_get_queue(mdev->bc->md_bdev));
+
+ /* always (try to) flush bitmap to stable storage */
+ drbd_md_flush(mdev);
+
+	/* In case we did not submit a single IO, do not wait for
+	 * them to complete. (Because we would wait forever here.)
+	 *
+	 * In case we had IOs and they are already complete, there
+	 * is no point in waiting anyway.
+	 * Therefore this if () ... */
+ if (atomic_read(&wc.count))
+ wait_for_completion(&wc.io_done);
+
+ dec_local(mdev);
+
+ if (wc.error)
+ drbd_io_error(mdev, TRUE);
+ kfree(bios);
+ return;
+
+ free_bios_submit_one_by_one:
+ /* free everything by calling the endio callback directly. */
+ for (i = 0; i < nr_elements && bios[i]; i++)
+ bio_endio(bios[i], 0);
+
+ kfree(bios);
+
+ submit_one_by_one:
+ drbd_WARN("Using the slow drbd_al_to_on_disk_bm()\n");
+
+ for (i = 0; i < mdev->act_log->nr_elements; i++) {
+ enr = lc_entry(mdev->act_log, i)->lc_number;
+ if (enr == LC_FREE)
+ continue;
+ /* Really slow: if we have al-extents 16..19 active,
+ * sector 4 will be written four times! Synchronous! */
+ drbd_bm_write_sect(mdev, enr/AL_EXT_PER_BM_SECT);
+ }
+
+ lc_unlock(mdev->act_log);
+ wake_up(&mdev->al_wait);
+ dec_local(mdev);
+}
+
+/**
+ * drbd_al_apply_to_bm: Sets the bits in the bitmap that are described
+ * by the active extents of the AL.
+ */
+void drbd_al_apply_to_bm(struct drbd_conf *mdev)
+{
+ unsigned int enr;
+ unsigned long add = 0;
+ char ppb[10];
+ int i;
+
+ wait_event(mdev->al_wait, lc_try_lock(mdev->act_log));
+
+ for (i = 0; i < mdev->act_log->nr_elements; i++) {
+ enr = lc_entry(mdev->act_log, i)->lc_number;
+ if (enr == LC_FREE)
+ continue;
+ add += drbd_bm_ALe_set_all(mdev, enr);
+ }
+
+ lc_unlock(mdev->act_log);
+ wake_up(&mdev->al_wait);
+
+ INFO("Marked additional %s as out-of-sync based on AL.\n",
+ ppsize(ppb, Bit2KB(add)));
+}
+
+static inline int _try_lc_del(struct drbd_conf *mdev, struct lc_element *al_ext)
+{
+ int rv;
+
+ spin_lock_irq(&mdev->al_lock);
+ rv = (al_ext->refcnt == 0);
+ if (likely(rv))
+ lc_del(mdev->act_log, al_ext);
+ spin_unlock_irq(&mdev->al_lock);
+
+ MTRACE(TraceTypeALExts, TraceLvlMetrics,
+ if (unlikely(!rv))
+ INFO("Waiting for extent in drbd_al_shrink()\n");
+ );
+
+ return rv;
+}
+
+/**
+ * drbd_al_shrink: Removes all active extents from the AL. (but does not
+ * write any transactions)
+ * You need to lock mdev->act_log with lc_try_lock() / lc_unlock()
+ */
+void drbd_al_shrink(struct drbd_conf *mdev)
+{
+ struct lc_element *al_ext;
+ int i;
+
+ D_ASSERT(test_bit(__LC_DIRTY, &mdev->act_log->flags));
+
+ for (i = 0; i < mdev->act_log->nr_elements; i++) {
+ al_ext = lc_entry(mdev->act_log, i);
+ if (al_ext->lc_number == LC_FREE)
+ continue;
+ wait_event(mdev->al_wait, _try_lc_del(mdev, al_ext));
+ }
+
+ wake_up(&mdev->al_wait);
+}
+
+STATIC int w_update_odbm(struct drbd_conf *mdev, struct drbd_work *w, int unused)
+{
+ struct update_odbm_work *udw = (struct update_odbm_work *)w;
+
+ if (!inc_local(mdev)) {
+ if (__ratelimit(&drbd_ratelimit_state))
+ drbd_WARN("Can not update on disk bitmap, local IO disabled.\n");
+ return 1;
+ }
+
+ drbd_bm_write_sect(mdev, udw->enr);
+ dec_local(mdev);
+
+ kfree(udw);
+
+ if (drbd_bm_total_weight(mdev) <= mdev->rs_failed) {
+ switch (mdev->state.conn) {
+ case SyncSource: case SyncTarget:
+ case PausedSyncS: case PausedSyncT:
+ drbd_resync_finished(mdev);
+ default:
+ /* nothing to do */
+ break;
+ }
+ }
+ drbd_bcast_sync_progress(mdev);
+
+ return 1;
+}
+
+
+/* ATTENTION. The AL's extents are 4MB each, while the extents in the
+ * resync LRU-cache are 16MB each.
+ * The caller of this function has to hold an inc_local() reference.
+ *
+ * TODO will be obsoleted once we have a caching lru of the on disk bitmap
+ */
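+/* illustration: with 4MB AL extents and 16MB resync extents, the AL
+ * extents 16..19 all map to the single resync extent number 4 */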
+STATIC void drbd_try_clear_on_disk_bm(struct drbd_conf *mdev, sector_t sector,
+ int count, int success)
+{
+ struct bm_extent *ext;
+ struct update_odbm_work *udw;
+
+ unsigned int enr;
+
+ D_ASSERT(atomic_read(&mdev->local_cnt));
+
+ /* I simply assume that a sector/size pair never crosses
+ * a 16 MB extent border. (Currently this is true...) */
+ enr = BM_SECT_TO_EXT(sector);
+
+ ext = (struct bm_extent *) lc_get(mdev->resync, enr);
+ if (ext) {
+ if (ext->lce.lc_number == enr) {
+ if (success)
+ ext->rs_left -= count;
+ else
+ ext->rs_failed += count;
+ if (ext->rs_left < ext->rs_failed) {
+ ERR("BAD! sector=%llus enr=%u rs_left=%d "
+ "rs_failed=%d count=%d\n",
+ (unsigned long long)sector,
+ ext->lce.lc_number, ext->rs_left,
+ ext->rs_failed, count);
+ dump_stack();
+
+ lc_put(mdev->resync, &ext->lce);
+ drbd_force_state(mdev, NS(conn, Disconnecting));
+ return;
+ }
+ } else {
+ /* Normally this element should be in the cache,
+			 * since drbd_rs_begin_io() already pulled it in.
+ *
+ * But maybe an application write finished, and we set
+ * something outside the resync lru_cache in sync.
+ */
+ int rs_left = drbd_bm_e_weight(mdev, enr);
+ if (ext->flags != 0) {
+ drbd_WARN("changing resync lce: %d[%u;%02lx]"
+ " -> %d[%u;00]\n",
+ ext->lce.lc_number, ext->rs_left,
+ ext->flags, enr, rs_left);
+ ext->flags = 0;
+ }
+ if (ext->rs_failed) {
+ drbd_WARN("Kicking resync_lru element enr=%u "
+ "out with rs_failed=%d\n",
+ ext->lce.lc_number, ext->rs_failed);
+ set_bit(WRITE_BM_AFTER_RESYNC, &mdev->flags);
+ }
+ ext->rs_left = rs_left;
+ ext->rs_failed = success ? 0 : count;
+ lc_changed(mdev->resync, &ext->lce);
+ }
+ lc_put(mdev->resync, &ext->lce);
+ /* no race, we are within the al_lock! */
+
+ if (ext->rs_left == ext->rs_failed) {
+ ext->rs_failed = 0;
+
+ udw = kmalloc(sizeof(*udw), GFP_ATOMIC);
+ if (udw) {
+ udw->enr = ext->lce.lc_number;
+ udw->w.cb = w_update_odbm;
+ drbd_queue_work_front(&mdev->data.work, &udw->w);
+ } else {
+ drbd_WARN("Could not kmalloc an udw\n");
+ set_bit(WRITE_BM_AFTER_RESYNC, &mdev->flags);
+ }
+ }
+ } else {
+ ERR("lc_get() failed! locked=%d/%d flags=%lu\n",
+ mdev->resync_locked,
+ mdev->resync->nr_elements,
+ mdev->resync->flags);
+ }
+}
+
+/* clear the bit corresponding to the piece of storage in question:
+ * size bytes of data starting from sector. Only clear the bits of the
+ * affected one or more _aligned_ BM_BLOCK_SIZE blocks.
+ *
+ * called by worker on SyncTarget and receiver on SyncSource.
+ *
+ */
+void __drbd_set_in_sync(struct drbd_conf *mdev, sector_t sector, int size,
+ const char *file, const unsigned int line)
+{
+ /* Is called from worker and receiver context _only_ */
+ unsigned long sbnr, ebnr, lbnr;
+ unsigned long count = 0;
+ sector_t esector, nr_sectors;
+ int wake_up = 0;
+ unsigned long flags;
+
+ if (size <= 0 || (size & 0x1ff) != 0 || size > DRBD_MAX_SEGMENT_SIZE) {
+ ERR("drbd_set_in_sync: sector=%llus size=%d nonsense!\n",
+ (unsigned long long)sector, size);
+ return;
+ }
+ nr_sectors = drbd_get_capacity(mdev->this_bdev);
+ esector = sector + (size >> 9) - 1;
+
+ ERR_IF(sector >= nr_sectors) return;
+ ERR_IF(esector >= nr_sectors) esector = (nr_sectors-1);
+
+ lbnr = BM_SECT_TO_BIT(nr_sectors-1);
+
+ /* we clear it (in sync).
+ * round up start sector, round down end sector. we make sure we only
+	 * clear full, aligned, BM_BLOCK_SIZE (4K) blocks */
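+	/* example, assuming BM_SECT_PER_BIT == 8 (one bit per 4K block):
+	 * sector=4, size=8192 covers sectors 4..19, but only bit 1
+	 * (sectors 8..15) is fully covered and may be cleared */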
+ if (unlikely(esector < BM_SECT_PER_BIT-1))
+ return;
+ if (unlikely(esector == (nr_sectors-1)))
+ ebnr = lbnr;
+ else
+ ebnr = BM_SECT_TO_BIT(esector - (BM_SECT_PER_BIT-1));
+ sbnr = BM_SECT_TO_BIT(sector + BM_SECT_PER_BIT-1);
+
+ MTRACE(TraceTypeResync, TraceLvlMetrics,
+ INFO("drbd_set_in_sync: sector=%llus size=%u sbnr=%lu ebnr=%lu\n",
+ (unsigned long long)sector, size, sbnr, ebnr);
+ );
+
+ if (sbnr > ebnr)
+ return;
+
+ /*
+ * ok, (capacity & 7) != 0 sometimes, but who cares...
+ * we count rs_{total,left} in bits, not sectors.
+ */
+ spin_lock_irqsave(&mdev->al_lock, flags);
+ count = drbd_bm_clear_bits(mdev, sbnr, ebnr);
+ if (count) {
+ /* we need the lock for drbd_try_clear_on_disk_bm */
+ if (jiffies - mdev->rs_mark_time > HZ*10) {
+			/* should be rolling marks,
+			 * but we only estimate anyway. */
+ if (mdev->rs_mark_left != drbd_bm_total_weight(mdev) &&
+ mdev->state.conn != PausedSyncT &&
+ mdev->state.conn != PausedSyncS) {
+ mdev->rs_mark_time = jiffies;
+ mdev->rs_mark_left = drbd_bm_total_weight(mdev);
+ }
+ }
+ if (inc_local(mdev)) {
+ drbd_try_clear_on_disk_bm(mdev, sector, count, TRUE);
+ dec_local(mdev);
+ }
+		/* just wake_up unconditionally now; various lc_changed(),
+		 * lc_put() in drbd_try_clear_on_disk_bm(). */
+ wake_up = 1;
+ }
+ spin_unlock_irqrestore(&mdev->al_lock, flags);
+ if (wake_up)
+ wake_up(&mdev->al_wait);
+}
+
+/*
+ * this is intended to set one request worth of data out of sync.
+ * affects at least 1 bit,
+ * and at most 1+DRBD_MAX_SEGMENT_SIZE/BM_BLOCK_SIZE bits.
+ *
+ * called by tl_clear and drbd_send_dblock (==drbd_make_request).
+ * so this can be _any_ process.
+ */
+void __drbd_set_out_of_sync(struct drbd_conf *mdev, sector_t sector, int size,
+ const char *file, const unsigned int line)
+{
+ unsigned long sbnr, ebnr, lbnr, flags;
+ sector_t esector, nr_sectors;
+ unsigned int enr, count;
+ struct bm_extent *ext;
+
+ if (size <= 0 || (size & 0x1ff) != 0 || size > DRBD_MAX_SEGMENT_SIZE) {
+ ERR("sector: %llus, size: %d\n",
+ (unsigned long long)sector, size);
+ return;
+ }
+
+ if (!inc_local(mdev))
+ return; /* no disk, no metadata, no bitmap to set bits in */
+
+ nr_sectors = drbd_get_capacity(mdev->this_bdev);
+ esector = sector + (size >> 9) - 1;
+
+ ERR_IF(sector >= nr_sectors)
+ goto out;
+ ERR_IF(esector >= nr_sectors)
+ esector = (nr_sectors-1);
+
+ lbnr = BM_SECT_TO_BIT(nr_sectors-1);
+
+ /* we set it out of sync,
+ * we do not need to round anything here */
+ sbnr = BM_SECT_TO_BIT(sector);
+ ebnr = BM_SECT_TO_BIT(esector);
+
+ MTRACE(TraceTypeResync, TraceLvlMetrics,
+ INFO("drbd_set_out_of_sync: sector=%llus size=%u "
+ "sbnr=%lu ebnr=%lu\n",
+ (unsigned long long)sector, size, sbnr, ebnr);
+ );
+
+ /* ok, (capacity & 7) != 0 sometimes, but who cares...
+ * we count rs_{total,left} in bits, not sectors. */
+ spin_lock_irqsave(&mdev->al_lock, flags);
+ count = drbd_bm_set_bits(mdev, sbnr, ebnr);
+
+ enr = BM_SECT_TO_EXT(sector);
+ ext = (struct bm_extent *) lc_find(mdev->resync, enr);
+ if (ext)
+ ext->rs_left += count;
+ spin_unlock_irqrestore(&mdev->al_lock, flags);
+
+out:
+ dec_local(mdev);
+}
+
+static inline
+struct bm_extent *_bme_get(struct drbd_conf *mdev, unsigned int enr)
+{
+ struct bm_extent *bm_ext;
+ int wakeup = 0;
+ unsigned long rs_flags;
+
+ spin_lock_irq(&mdev->al_lock);
+ if (mdev->resync_locked > mdev->resync->nr_elements/2) {
+ spin_unlock_irq(&mdev->al_lock);
+ return NULL;
+ }
+ bm_ext = (struct bm_extent *) lc_get(mdev->resync, enr);
+ if (bm_ext) {
+ if (bm_ext->lce.lc_number != enr) {
+ bm_ext->rs_left = drbd_bm_e_weight(mdev, enr);
+ bm_ext->rs_failed = 0;
+ lc_changed(mdev->resync, (struct lc_element *)bm_ext);
+ wakeup = 1;
+ }
+ if (bm_ext->lce.refcnt == 1)
+ mdev->resync_locked++;
+ set_bit(BME_NO_WRITES, &bm_ext->flags);
+ }
+ rs_flags = mdev->resync->flags;
+ spin_unlock_irq(&mdev->al_lock);
+ if (wakeup)
+ wake_up(&mdev->al_wait);
+
+ if (!bm_ext) {
+ if (rs_flags & LC_STARVING)
+ drbd_WARN("Have to wait for element"
+ " (resync LRU too small?)\n");
+ BUG_ON(rs_flags & LC_DIRTY);
+ }
+
+ return bm_ext;
+}
+
+static inline int _is_in_al(struct drbd_conf *mdev, unsigned int enr)
+{
+ struct lc_element *al_ext;
+ int rv = 0;
+
+ spin_lock_irq(&mdev->al_lock);
+ if (unlikely(enr == mdev->act_log->new_number))
+ rv = 1;
+ else {
+ al_ext = lc_find(mdev->act_log, enr);
+ if (al_ext) {
+ if (al_ext->refcnt)
+ rv = 1;
+ }
+ }
+ spin_unlock_irq(&mdev->al_lock);
+
+ /*
+ if (unlikely(rv)) {
+ INFO("Delaying sync read until app's write is done\n");
+ }
+ */
+ return rv;
+}
+
+/**
+ * drbd_rs_begin_io: Gets an extent in the resync LRU cache and sets it
+ * to BME_LOCKED.
+ *
+ * @sector: The sector number
+ *
+ * sleeps on al_wait.
+ * returns 1 if successful.
+ * returns 0 if interrupted.
+ */
+int drbd_rs_begin_io(struct drbd_conf *mdev, sector_t sector)
+{
+ unsigned int enr = BM_SECT_TO_EXT(sector);
+ struct bm_extent *bm_ext;
+ int i, sig;
+
+ MTRACE(TraceTypeResync, TraceLvlAll,
+ INFO("drbd_rs_begin_io: sector=%llus (rs_end=%d)\n",
+ (unsigned long long)sector, enr);
+ );
+
+ sig = wait_event_interruptible(mdev->al_wait,
+ (bm_ext = _bme_get(mdev, enr)));
+ if (sig)
+ return 0;
+
+ if (test_bit(BME_LOCKED, &bm_ext->flags))
+ return 1;
+
+ for (i = 0; i < AL_EXT_PER_BM_SECT; i++) {
+ sig = wait_event_interruptible(mdev->al_wait,
+ !_is_in_al(mdev, enr * AL_EXT_PER_BM_SECT + i));
+ if (sig) {
+ spin_lock_irq(&mdev->al_lock);
+ if (lc_put(mdev->resync, &bm_ext->lce) == 0) {
+ clear_bit(BME_NO_WRITES, &bm_ext->flags);
+ mdev->resync_locked--;
+ wake_up(&mdev->al_wait);
+ }
+ spin_unlock_irq(&mdev->al_lock);
+ return 0;
+ }
+ }
+
+ set_bit(BME_LOCKED, &bm_ext->flags);
+
+ return 1;
+}
+
+/**
+ * drbd_try_rs_begin_io: Gets an extent in the resync LRU cache, sets it
+ * to BME_NO_WRITES, then tries to set it to BME_LOCKED.
+ *
+ * @sector: The sector number
+ *
+ * does not sleep.
+ * returns zero if we could set BME_LOCKED and can proceed,
+ * -EAGAIN if we need to try again.
+ */
+int drbd_try_rs_begin_io(struct drbd_conf *mdev, sector_t sector)
+{
+ unsigned int enr = BM_SECT_TO_EXT(sector);
+ const unsigned int al_enr = enr*AL_EXT_PER_BM_SECT;
+ struct bm_extent *bm_ext;
+ int i;
+
+ MTRACE(TraceTypeResync, TraceLvlAll,
+ INFO("drbd_try_rs_begin_io: sector=%llus\n",
+ (unsigned long long)sector);
+ );
+
+ spin_lock_irq(&mdev->al_lock);
+ if (mdev->resync_wenr != LC_FREE && mdev->resync_wenr != enr) {
+		/* in case you have very heavy scattered io, it may
+		 * stall the syncer indefinitely if we give up the refcount
+		 * when we try again and requeue.
+		 *
+		 * if we don't give up the refcount, but the next time
+		 * we are scheduled this extent has been "synced" by new
+		 * application writes, we'd miss the lc_put on the
+		 * extent we kept the refcount on.
+		 * so we remember which extent we had to try again, and
+		 * if the next requested one is something else, we do
+		 * the lc_put here...
+		 * we also have to wake_up
+		 */
+ MTRACE(TraceTypeResync, TraceLvlAll,
+ INFO("dropping %u, aparently got 'synced' "
+ "by application io\n", mdev->resync_wenr);
+ );
+ bm_ext = (struct bm_extent *)
+ lc_find(mdev->resync, mdev->resync_wenr);
+ if (bm_ext) {
+ D_ASSERT(!test_bit(BME_LOCKED, &bm_ext->flags));
+ D_ASSERT(test_bit(BME_NO_WRITES, &bm_ext->flags));
+ clear_bit(BME_NO_WRITES, &bm_ext->flags);
+ mdev->resync_wenr = LC_FREE;
+ if (lc_put(mdev->resync, &bm_ext->lce) == 0)
+ mdev->resync_locked--;
+ wake_up(&mdev->al_wait);
+ } else {
+ ALERT("LOGIC BUG\n");
+ }
+ }
+ bm_ext = (struct bm_extent *)lc_try_get(mdev->resync, enr);
+ if (bm_ext) {
+ if (test_bit(BME_LOCKED, &bm_ext->flags))
+ goto proceed;
+ if (!test_and_set_bit(BME_NO_WRITES, &bm_ext->flags)) {
+ mdev->resync_locked++;
+ } else {
+ /* we did set the BME_NO_WRITES,
+ * but then could not set BME_LOCKED,
+ * so we tried again.
+ * drop the extra reference. */
+ MTRACE(TraceTypeResync, TraceLvlAll,
+ INFO("dropping extra reference on %u\n", enr);
+ );
+ bm_ext->lce.refcnt--;
+ D_ASSERT(bm_ext->lce.refcnt > 0);
+ }
+ goto check_al;
+ } else {
+ if (mdev->resync_locked > mdev->resync->nr_elements-3) {
+ MTRACE(TraceTypeResync, TraceLvlAll,
+ INFO("resync_locked = %u!\n", mdev->resync_locked);
+ );
+ goto try_again;
+ }
+ bm_ext = (struct bm_extent *)lc_get(mdev->resync, enr);
+ if (!bm_ext) {
+ const unsigned long rs_flags = mdev->resync->flags;
+ if (rs_flags & LC_STARVING)
+ drbd_WARN("Have to wait for element"
+ " (resync LRU too small?)\n");
+ BUG_ON(rs_flags & LC_DIRTY);
+ goto try_again;
+ }
+ if (bm_ext->lce.lc_number != enr) {
+ bm_ext->rs_left = drbd_bm_e_weight(mdev, enr);
+ bm_ext->rs_failed = 0;
+ lc_changed(mdev->resync, (struct lc_element *)bm_ext);
+ wake_up(&mdev->al_wait);
+ D_ASSERT(test_bit(BME_LOCKED, &bm_ext->flags) == 0);
+ }
+ set_bit(BME_NO_WRITES, &bm_ext->flags);
+ D_ASSERT(bm_ext->lce.refcnt == 1);
+ mdev->resync_locked++;
+ goto check_al;
+ }
+check_al:
+ MTRACE(TraceTypeResync, TraceLvlAll,
+ INFO("checking al for %u\n", enr);
+ );
+ for (i = 0; i < AL_EXT_PER_BM_SECT; i++) {
+ if (unlikely(al_enr+i == mdev->act_log->new_number))
+ goto try_again;
+ if (lc_is_used(mdev->act_log, al_enr+i))
+ goto try_again;
+ }
+ set_bit(BME_LOCKED, &bm_ext->flags);
+proceed:
+ mdev->resync_wenr = LC_FREE;
+ spin_unlock_irq(&mdev->al_lock);
+ return 0;
+
+try_again:
+ MTRACE(TraceTypeResync, TraceLvlAll,
+ INFO("need to try again for %u\n", enr);
+ );
+ if (bm_ext)
+ mdev->resync_wenr = enr;
+ spin_unlock_irq(&mdev->al_lock);
+ return -EAGAIN;
+}
+
+void drbd_rs_complete_io(struct drbd_conf *mdev, sector_t sector)
+{
+ unsigned int enr = BM_SECT_TO_EXT(sector);
+ struct bm_extent *bm_ext;
+ unsigned long flags;
+
+ MTRACE(TraceTypeResync, TraceLvlAll,
+ INFO("drbd_rs_complete_io: sector=%llus (rs_enr=%d)\n",
+ (long long)sector, enr);
+ );
+
+ spin_lock_irqsave(&mdev->al_lock, flags);
+ bm_ext = (struct bm_extent *) lc_find(mdev->resync, enr);
+ if (!bm_ext) {
+ spin_unlock_irqrestore(&mdev->al_lock, flags);
+ ERR("drbd_rs_complete_io() called, but extent not found\n");
+ return;
+ }
+
+ if (bm_ext->lce.refcnt == 0) {
+ spin_unlock_irqrestore(&mdev->al_lock, flags);
+ ERR("drbd_rs_complete_io(,%llu [=%u]) called, "
+ "but refcnt is 0!?\n",
+ (unsigned long long)sector, enr);
+ return;
+ }
+
+ if (lc_put(mdev->resync, (struct lc_element *)bm_ext) == 0) {
+ clear_bit(BME_LOCKED, &bm_ext->flags);
+ clear_bit(BME_NO_WRITES, &bm_ext->flags);
+ mdev->resync_locked--;
+ wake_up(&mdev->al_wait);
+ }
+
+ spin_unlock_irqrestore(&mdev->al_lock, flags);
+}
+
+/**
+ * drbd_rs_cancel_all: Removes extents from the resync LRU. Even
+ * if they are BME_LOCKED.
+ */
+void drbd_rs_cancel_all(struct drbd_conf *mdev)
+{
+ MTRACE(TraceTypeResync, TraceLvlMetrics,
+ INFO("drbd_rs_cancel_all\n");
+ );
+
+ spin_lock_irq(&mdev->al_lock);
+
+ if (inc_local_if_state(mdev, Failed)) { /* Makes sure ->resync is there. */
+ lc_reset(mdev->resync);
+ dec_local(mdev);
+ }
+ mdev->resync_locked = 0;
+ mdev->resync_wenr = LC_FREE;
+ spin_unlock_irq(&mdev->al_lock);
+ wake_up(&mdev->al_wait);
+}
+
+/**
+ * drbd_rs_del_all: Gracefully remove all extents from the resync LRU.
+ * There may still be a reference held by someone. In that case this function
+ * returns -EAGAIN.
+ * In case all elements got removed it returns zero.
+ */
+int drbd_rs_del_all(struct drbd_conf *mdev)
+{
+ struct bm_extent *bm_ext;
+ int i;
+
+ MTRACE(TraceTypeResync, TraceLvlMetrics,
+ INFO("drbd_rs_del_all\n");
+ );
+
+ spin_lock_irq(&mdev->al_lock);
+
+ if (inc_local_if_state(mdev, Failed)) {
+ /* ok, ->resync is there. */
+ for (i = 0; i < mdev->resync->nr_elements; i++) {
+ bm_ext = (struct bm_extent *) lc_entry(mdev->resync, i);
+ if (bm_ext->lce.lc_number == LC_FREE)
+ continue;
+ if (bm_ext->lce.lc_number == mdev->resync_wenr) {
+ INFO("dropping %u in drbd_rs_del_all, apparently"
+ " got 'synced' by application io\n",
+ mdev->resync_wenr);
+ D_ASSERT(!test_bit(BME_LOCKED, &bm_ext->flags));
+ D_ASSERT(test_bit(BME_NO_WRITES, &bm_ext->flags));
+ clear_bit(BME_NO_WRITES, &bm_ext->flags);
+ mdev->resync_wenr = LC_FREE;
+ lc_put(mdev->resync, &bm_ext->lce);
+ }
+ if (bm_ext->lce.refcnt != 0) {
+ INFO("Retrying drbd_rs_del_all() later. "
+ "refcnt=%d\n", bm_ext->lce.refcnt);
+ dec_local(mdev);
+ spin_unlock_irq(&mdev->al_lock);
+ return -EAGAIN;
+ }
+ D_ASSERT(!test_bit(BME_LOCKED, &bm_ext->flags));
+ D_ASSERT(!test_bit(BME_NO_WRITES, &bm_ext->flags));
+ lc_del(mdev->resync, &bm_ext->lce);
+ }
+ D_ASSERT(mdev->resync->used == 0);
+ dec_local(mdev);
+ }
+ spin_unlock_irq(&mdev->al_lock);
+
+ return 0;
+}
+
+/* Record information on a failure to resync the specified blocks
+ *
+ * called on SyncTarget when resync write fails or NegRSDReply received
+ *
+ */
+void drbd_rs_failed_io(struct drbd_conf *mdev, sector_t sector, int size)
+{
+ /* Is called from worker and receiver context _only_ */
+ unsigned long sbnr, ebnr, lbnr;
+ unsigned long count;
+ sector_t esector, nr_sectors;
+ int wake_up = 0;
+
+ MTRACE(TraceTypeResync, TraceLvlSummary,
+ INFO("drbd_rs_failed_io: sector=%llus, size=%u\n",
+ (unsigned long long)sector, size);
+ );
+
+ if (size <= 0 || (size & 0x1ff) != 0 || size > DRBD_MAX_SEGMENT_SIZE) {
+ ERR("drbd_rs_failed_io: sector=%llus size=%d nonsense!\n",
+ (unsigned long long)sector, size);
+ return;
+ }
+ nr_sectors = drbd_get_capacity(mdev->this_bdev);
+ esector = sector + (size >> 9) - 1;
+
+ ERR_IF(sector >= nr_sectors) return;
+ ERR_IF(esector >= nr_sectors) esector = (nr_sectors-1);
+
+ lbnr = BM_SECT_TO_BIT(nr_sectors-1);
+
+ /*
+ * round up start sector, round down end sector. we make sure we only
+	 * handle full, aligned, BM_BLOCK_SIZE (4K) blocks */
+ if (unlikely(esector < BM_SECT_PER_BIT-1))
+ return;
+ if (unlikely(esector == (nr_sectors-1)))
+ ebnr = lbnr;
+ else
+ ebnr = BM_SECT_TO_BIT(esector - (BM_SECT_PER_BIT-1));
+ sbnr = BM_SECT_TO_BIT(sector + BM_SECT_PER_BIT-1);
+
+ if (sbnr > ebnr)
+ return;
+
+ /*
+ * ok, (capacity & 7) != 0 sometimes, but who cares...
+ * we count rs_{total,left} in bits, not sectors.
+ */
+ spin_lock_irq(&mdev->al_lock);
+ count = drbd_bm_count_bits(mdev, sbnr, ebnr);
+ if (count) {
+ mdev->rs_failed += count;
+
+ if (inc_local(mdev)) {
+ drbd_try_clear_on_disk_bm(mdev, sector, count, FALSE);
+ dec_local(mdev);
+ }
+
+		/* just wake_up unconditionally now; various lc_changed(),
+		 * lc_put() in drbd_try_clear_on_disk_bm(). */
+ wake_up = 1;
+ }
+ spin_unlock_irq(&mdev->al_lock);
+ if (wake_up)
+ wake_up(&mdev->al_wait);
+}
The request state engine.
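As a rough illustration (condensed from the state encoding comment in
drbd_req.h below), the network state bits of a successful protocol C
write evolve as follows; the bit patterns use the "76543" notation of
that comment:

	to_be_send              -> 00001  (RQ_NET_PENDING)
	queue_for_net_write     -> 00011  (+ RQ_NET_QUEUED)
	handed_over_to_network  -> 00101  (+ RQ_NET_SENT, - RQ_NET_QUEUED)
	write_acked_by_peer     -> 11100  (+ RQ_NET_OK, + RQ_NET_DONE,
	                                   - RQ_NET_PENDING; may be freed)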
Signed-off-by: Philipp Reisner <[email protected]>
Signed-off-by: Lars Ellenberg <[email protected]>
---
diff -uNrp linux-2.6.29/drivers/block/drbd/drbd_req.h linux-2.6.29-drbd/drivers/block/drbd/drbd_req.h
--- linux-2.6.29/drivers/block/drbd/drbd_req.h 1970-01-01 01:00:00.000000000 +0100
+++ linux-2.6.29-drbd/drivers/block/drbd/drbd_req.h 2009-03-30 15:41:59.655275000 +0200
@@ -0,0 +1,327 @@
+/*
+ drbd_req.h
+
+ This file is part of DRBD by Philipp Reisner and Lars Ellenberg.
+
+ Copyright (C) 2006-2008, LINBIT Information Technologies GmbH.
+ Copyright (C) 2006-2008, Lars Ellenberg <[email protected]>.
+ Copyright (C) 2006-2008, Philipp Reisner <[email protected]>.
+
+ DRBD is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2, or (at your option)
+ any later version.
+
+ DRBD is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with drbd; see the file COPYING. If not, write to
+ the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA.
+ */
+
+#ifndef _DRBD_REQ_H
+#define _DRBD_REQ_H
+
+#include <linux/autoconf.h>
+#include <linux/module.h>
+
+#include <linux/slab.h>
+#include <linux/drbd.h>
+#include "drbd_int.h"
+#include "drbd_wrappers.h"
+
+/* The request callbacks will be called in irq context by the IDE drivers,
+ and in Softirqs/Tasklets/BH context by the SCSI drivers,
+ and by the receiver and worker in kernel-thread context.
+ Try to get the locking right :) */
+
+/*
+ * Objects of type struct drbd_request only exist on a Primary node, and are
+ * associated with IO requests originating from the block layer above us.
+ *
+ * There are quite a few things that may happen to a drbd request
+ * during its lifetime.
+ *
+ * It will be created.
+ * It will be marked with the intention to be
+ * submitted to local disk and/or
+ * sent via the network.
+ *
+ * It has to be placed on the transfer log and other housekeeping lists,
+ * in case we have a network connection.
+ *
+ * It may be identified as a concurrent (write) request
+ * and be handled accordingly.
+ *
+ * It may be handed over to the local disk subsystem.
+ * It may be completed by the local disk subsystem,
+ * either successfully or with io-error.
+ * In case it is a READ request, and it failed locally,
+ * it may be retried remotely.
+ *
+ * It may be queued for sending.
+ * It may be handed over to the network stack,
+ * which may fail.
+ * It may be acknowledged by the "peer" according to the wire_protocol in use.
+ * this may be a negative ack.
+ * It may receive a faked ack when the network connection is lost and the
+ * transfer log is cleaned up.
+ * Sending may be canceled due to network connection loss.
+ * When it finally has outlived its time,
+ * corresponding dirty bits in the resync-bitmap may be cleared or set,
+ * it will be destroyed,
+ * and completion will be signalled to the originator,
+ * with or without "success".
+ *
+ * See also documentation/drbd-request-state-overview.dot
+ * (dot -Tps2 documentation/drbd-request-state-overview.dot | display -)
+ */
+
+enum drbd_req_event {
+ created,
+ to_be_send,
+ to_be_submitted,
+
+ /* XXX yes, now I am inconsistent...
+ * these two are not "events" but "actions"
+ * oh, well... */
+ queue_for_net_write,
+ queue_for_net_read,
+
+ send_canceled,
+ send_failed,
+ handed_over_to_network,
+ connection_lost_while_pending,
+ recv_acked_by_peer,
+ write_acked_by_peer,
+ write_acked_by_peer_and_sis, /* and set_in_sync */
+ conflict_discarded_by_peer,
+ neg_acked,
+ barrier_acked, /* in protocol A and B */
+ data_received, /* (remote read) */
+
+ read_completed_with_error,
+ write_completed_with_error,
+ completed_ok,
+};
+
+/* encoding of request states for now. we don't actually need that many bits.
+ * we don't need to do atomic bit operations either, since most of the time we
+ * need to look at the connection state and/or manipulate some lists at the
+ * same time, so we should hold the request lock anyways.
+ */
+enum drbd_req_state_bits {
+ /* 210
+ * 000: no local possible
+ * 001: to be submitted
+ * UNUSED, we could map: 011: submitted, completion still pending
+ * 110: completed ok
+ * 010: completed with error
+ */
+ __RQ_LOCAL_PENDING,
+ __RQ_LOCAL_COMPLETED,
+ __RQ_LOCAL_OK,
+
+ /* 76543
+ * 00000: no network possible
+	 * 00001: to be sent
+	 * 00011: to be sent, on worker queue
+ * 00101: sent, expecting recv_ack (B) or write_ack (C)
+ * 11101: sent,
+ * recv_ack (B) or implicit "ack" (A),
+ * still waiting for the barrier ack.
+ * master_bio may already be completed and invalidated.
+ * 11100: write_acked (C),
+ * data_received (for remote read, any protocol)
+ * or finally the barrier ack has arrived (B,A)...
+ * request can be freed
+ * 01100: neg-acked (write, protocol C)
+ * or neg-d-acked (read, any protocol)
+ * or killed from the transfer log
+ * during cleanup after connection loss
+ * request can be freed
+ * 01000: canceled or send failed...
+ * request can be freed
+ */
+
+ /* if "SENT" is not set, yet, this can still fail or be canceled.
+ * if "SENT" is set already, we still wait for an Ack packet.
+ * when cleared, the master_bio may be completed.
+ * in (B,A) the request object may still linger on the transaction log
+ * until the corresponding barrier ack comes in */
+ __RQ_NET_PENDING,
+
+ /* If it is QUEUED, and it is a WRITE, it is also registered in the
+ * transfer log. Currently we need this flag to avoid conflicts between
+ * worker canceling the request and tl_clear_barrier killing it from
+ * transfer log. We should restructure the code so this conflict does
+ * no longer occur. */
+ __RQ_NET_QUEUED,
+
+ /* well, actually only "handed over to the network stack".
+ *
+ * TODO can potentially be dropped because of the similar meaning
+ * of RQ_NET_SENT and ~RQ_NET_QUEUED.
+ * however it is not exactly the same. before we drop it
+ * we must ensure that we can tell a request with network part
+ * from a request without, regardless of what happens to it. */
+ __RQ_NET_SENT,
+
+ /* when set, the request may be freed (if RQ_NET_QUEUED is clear).
+ * basically this means the corresponding BarrierAck was received */
+ __RQ_NET_DONE,
+
+ /* whether or not we know (C) or pretend (B,A) that the write
+ * was successfully written on the peer.
+ */
+ __RQ_NET_OK,
+
+ /* peer called drbd_set_in_sync() for this write */
+ __RQ_NET_SIS,
+
+ /* keep this last, its for the RQ_NET_MASK */
+ __RQ_NET_MAX,
+};
+
+#define RQ_LOCAL_PENDING (1UL << __RQ_LOCAL_PENDING)
+#define RQ_LOCAL_COMPLETED (1UL << __RQ_LOCAL_COMPLETED)
+#define RQ_LOCAL_OK (1UL << __RQ_LOCAL_OK)
+
+#define RQ_LOCAL_MASK ((RQ_LOCAL_OK << 1)-1) /* 0x07 */
+
+#define RQ_NET_PENDING (1UL << __RQ_NET_PENDING)
+#define RQ_NET_QUEUED (1UL << __RQ_NET_QUEUED)
+#define RQ_NET_SENT (1UL << __RQ_NET_SENT)
+#define RQ_NET_DONE (1UL << __RQ_NET_DONE)
+#define RQ_NET_OK (1UL << __RQ_NET_OK)
+#define RQ_NET_SIS (1UL << __RQ_NET_SIS)
+
+/* 0x1f8 */
+#define RQ_NET_MASK (((1UL << __RQ_NET_MAX)-1) & ~RQ_LOCAL_MASK)
+
+/* epoch entries */
+static inline
+struct hlist_head *ee_hash_slot(struct drbd_conf *mdev, sector_t sector)
+{
+ BUG_ON(mdev->ee_hash_s == 0);
+ return mdev->ee_hash +
+ ((unsigned int)(sector>>HT_SHIFT) % mdev->ee_hash_s);
+}
+
+/* transfer log (drbd_request objects) */
+static inline
+struct hlist_head *tl_hash_slot(struct drbd_conf *mdev, sector_t sector)
+{
+ BUG_ON(mdev->tl_hash_s == 0);
+ return mdev->tl_hash +
+ ((unsigned int)(sector>>HT_SHIFT) % mdev->tl_hash_s);
+}
+
+/* when we receive the ACK for a write request,
+ * verify that we actually know about it */
+static inline struct drbd_request *_ack_id_to_req(struct drbd_conf *mdev,
+ u64 id, sector_t sector)
+{
+ struct hlist_head *slot = tl_hash_slot(mdev, sector);
+ struct hlist_node *n;
+ struct drbd_request *req;
+
+ hlist_for_each_entry(req, n, slot, colision) {
+ if ((unsigned long)req == (unsigned long)id) {
+ if (req->sector != sector) {
+ ERR("_ack_id_to_req: found req %p but it has "
+ "wrong sector (%llus versus %llus)\n", req,
+ (unsigned long long)req->sector,
+ (unsigned long long)sector);
+ break;
+ }
+ return req;
+ }
+ }
+ ERR("_ack_id_to_req: failed to find req %p, sector %llus in list\n",
+ (void *)(unsigned long)id, (unsigned long long)sector);
+ return NULL;
+}
+
+/* application reads (drbd_request objects) */
+static struct hlist_head *ar_hash_slot(struct drbd_conf *mdev, sector_t sector)
+{
+ return mdev->app_reads_hash
+ + ((unsigned int)(sector) % APP_R_HSIZE);
+}
+
+/* when we receive the answer for a read request,
+ * verify that we actually know about it */
+static inline struct drbd_request *_ar_id_to_req(struct drbd_conf *mdev,
+ u64 id, sector_t sector)
+{
+ struct hlist_head *slot = ar_hash_slot(mdev, sector);
+ struct hlist_node *n;
+ struct drbd_request *req;
+
+ hlist_for_each_entry(req, n, slot, colision) {
+ if ((unsigned long)req == (unsigned long)id) {
+ D_ASSERT(req->sector == sector);
+ return req;
+ }
+ }
+ return NULL;
+}
+
+static inline struct drbd_request *drbd_req_new(struct drbd_conf *mdev,
+ struct bio *bio_src)
+{
+ struct bio *bio;
+ struct drbd_request *req =
+ mempool_alloc(drbd_request_mempool, GFP_NOIO);
+ if (likely(req)) {
+ bio = bio_clone(bio_src, GFP_NOIO); /* XXX cannot fail?? */
+
+ req->rq_state = 0;
+ req->mdev = mdev;
+ req->master_bio = bio_src;
+ req->private_bio = bio;
+ req->epoch = 0;
+ req->sector = bio->bi_sector;
+ req->size = bio->bi_size;
+ req->start_time = jiffies;
+ INIT_HLIST_NODE(&req->colision);
+ INIT_LIST_HEAD(&req->tl_requests);
+ INIT_LIST_HEAD(&req->w.list);
+
+ bio->bi_private = req;
+ bio->bi_end_io = drbd_endio_pri;
+ bio->bi_next = NULL;
+ }
+ return req;
+}
+
+static inline void drbd_req_free(struct drbd_request *req)
+{
+ mempool_free(req, drbd_request_mempool);
+}
+
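+/* s1/s2 are start sectors, l1/l2 are lengths in byte; the two ranges
+ * overlap unless one of them ends (at s + (l>>9)) at or before the
+ * sector where the other one starts */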
+static inline int overlaps(sector_t s1, int l1, sector_t s2, int l2)
+{
+ return !((s1 + (l1>>9) <= s2) || (s1 >= s2 + (l2>>9)));
+}
+
+/* apparently too large to be inlined...
+ * moved to drbd_req.c */
+extern void _req_may_be_done(struct drbd_request *req, int error);
+extern void _req_mod(struct drbd_request *req,
+ enum drbd_req_event what, int error);
+
+/* If you need it irqsave, do it yourself! */
+static inline void req_mod(struct drbd_request *req,
+ enum drbd_req_event what, int error)
+{
+ struct drbd_conf *mdev = req->mdev;
+ spin_lock_irq(&mdev->req_lock);
+ _req_mod(req, what, error);
+ spin_unlock_irq(&mdev->req_lock);
+}
+#endif
diff -uNrp linux-2.6.29/drivers/block/drbd/drbd_req.c linux-2.6.29-drbd/drivers/block/drbd/drbd_req.c
--- linux-2.6.29/drivers/block/drbd/drbd_req.c 1970-01-01 01:00:00.000000000 +0100
+++ linux-2.6.29-drbd/drivers/block/drbd/drbd_req.c 2009-03-30 16:53:20.872601000 +0200
@@ -0,0 +1,1206 @@
+/*
+ drbd_req.c
+
+ This file is part of DRBD by Philipp Reisner and Lars Ellenberg.
+
+ Copyright (C) 2001-2008, LINBIT Information Technologies GmbH.
+ Copyright (C) 1999-2008, Philipp Reisner <[email protected]>.
+ Copyright (C) 2002-2008, Lars Ellenberg <[email protected]>.
+
+ drbd is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2, or (at your option)
+ any later version.
+
+ drbd is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with drbd; see the file COPYING. If not, write to
+ the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA.
+
+ */
+
+#include <linux/autoconf.h>
+#include <linux/module.h>
+
+#include <linux/slab.h>
+#include <linux/drbd.h>
+#include "drbd_int.h"
+#include "drbd_req.h"
+
+/* outside of the ifdef
+ * because of the _print_rq_state(,FIXME) in barrier_acked */
+STATIC void _print_rq_state(struct drbd_request *req, const char *txt)
+{
+ const unsigned long s = req->rq_state;
+ struct drbd_conf *mdev = req->mdev;
+ const int rw = (req->master_bio == NULL ||
+ bio_data_dir(req->master_bio) == WRITE) ?
+ 'W' : 'R';
+
+ INFO("%s %p %c L%c%c%cN%c%c%c%c%c %u (%llus +%u) %s\n",
+ txt, req, rw,
+ s & RQ_LOCAL_PENDING ? 'p' : '-',
+ s & RQ_LOCAL_COMPLETED ? 'c' : '-',
+ s & RQ_LOCAL_OK ? 'o' : '-',
+ s & RQ_NET_PENDING ? 'p' : '-',
+ s & RQ_NET_QUEUED ? 'q' : '-',
+ s & RQ_NET_SENT ? 's' : '-',
+ s & RQ_NET_DONE ? 'd' : '-',
+ s & RQ_NET_OK ? 'o' : '-',
+ req->epoch,
+ (unsigned long long)req->sector,
+ req->size,
+ conns_to_name(mdev->state.conn));
+}
+
+/* #define VERBOSE_REQUEST_CODE */
+#if defined(VERBOSE_REQUEST_CODE) || defined(ENABLE_DYNAMIC_TRACE)
+STATIC void _print_req_mod(struct drbd_request *req, enum drbd_req_event what)
+{
+ struct drbd_conf *mdev = req->mdev;
+ const int rw = (req->master_bio == NULL ||
+ bio_data_dir(req->master_bio) == WRITE) ?
+ 'W' : 'R';
+
+ static const char *rq_event_names[] = {
+ [created] = "created",
+ [to_be_send] = "to_be_send",
+ [to_be_submitted] = "to_be_submitted",
+ [queue_for_net_write] = "queue_for_net_write",
+ [queue_for_net_read] = "queue_for_net_read",
+ [send_canceled] = "send_canceled",
+ [send_failed] = "send_failed",
+ [handed_over_to_network] = "handed_over_to_network",
+ [connection_lost_while_pending] =
+ "connection_lost_while_pending",
+ [recv_acked_by_peer] = "recv_acked_by_peer",
+ [write_acked_by_peer] = "write_acked_by_peer",
+ [neg_acked] = "neg_acked",
+ [conflict_discarded_by_peer] = "conflict_discarded_by_peer",
+ [barrier_acked] = "barrier_acked",
+ [data_received] = "data_received",
+ [read_completed_with_error] = "read_completed_with_error",
+ [write_completed_with_error] = "write_completed_with_error",
+ [completed_ok] = "completed_ok",
+ };
+
+ INFO("_req_mod(%p %c ,%s)\n", req, rw, rq_event_names[what]);
+}
+
+# ifdef ENABLE_DYNAMIC_TRACE
+# define print_rq_state(R, T) \
+ MTRACE(TraceTypeRq, TraceLvlMetrics, _print_rq_state(R, T);)
+# define print_req_mod(T, W) \
+ MTRACE(TraceTypeRq, TraceLvlMetrics, _print_req_mod(T, W);)
+# else
+# define print_rq_state(R, T) _print_rq_state(R, T)
+# define print_req_mod(T, W) _print_req_mod(T, W)
+# endif
+
+#else
+#define print_rq_state(R, T)
+#define print_req_mod(T, W)
+#endif
+
+/* Update disk stats at start of I/O request */
+static inline void _drbd_start_io_acct(struct drbd_conf *mdev, struct drbd_request *req, struct bio *bio)
+{
+ const int rw = bio_data_dir(bio);
+ int cpu;
+ cpu = part_stat_lock();
+ part_stat_inc(cpu, &mdev->vdisk->part0, ios[rw]);
+ part_stat_add(cpu, &mdev->vdisk->part0, sectors[rw], bio_sectors(bio));
+ part_stat_unlock();
+ mdev->vdisk->part0.in_flight++;
+}
+
+/* Update disk stats when completing request upwards */
+static inline void _drbd_end_io_acct(struct drbd_conf *mdev, struct drbd_request *req)
+{
+ int rw = bio_data_dir(req->master_bio);
+ unsigned long duration = jiffies - req->start_time;
+ int cpu;
+ cpu = part_stat_lock();
+ part_stat_add(cpu, &mdev->vdisk->part0, ticks[rw], duration);
+ part_round_stats(cpu, &mdev->vdisk->part0);
+ part_stat_unlock();
+ mdev->vdisk->part0.in_flight--;
+}
+
+static void _req_is_done(struct drbd_conf *mdev, struct drbd_request *req, const int rw)
+{
+ const unsigned long s = req->rq_state;
+ /* if it was a write, we may have to set the corresponding
+ * bit(s) out-of-sync first. If it had a local part, we need to
+ * release the reference to the activity log. */
+ if (rw == WRITE) {
+ /* remove it from the transfer log.
+ * well, only if it had been there in the first
+ * place... if it had not (local only or conflicting
+ * and never sent), it should still be "empty" as
+ * initialised in drbd_req_new(), so we can list_del() it
+ * here unconditionally */
+ list_del(&req->tl_requests);
+ /* Set out-of-sync unless both OK flags are set
+ * (local only or remote failed).
+ * Other places where we set out-of-sync:
+ * READ with local io-error */
+ if (!(s & RQ_NET_OK) || !(s & RQ_LOCAL_OK))
+ drbd_set_out_of_sync(mdev, req->sector, req->size);
+
+ if ((s & RQ_NET_OK) && (s & RQ_LOCAL_OK) && (s & RQ_NET_SIS))
+ drbd_set_in_sync(mdev, req->sector, req->size);
+
+ /* one might be tempted to move the drbd_al_complete_io
+ * to the local io completion callback drbd_endio_pri.
+ * but, if this was a mirror write, we may only
+ * drbd_al_complete_io after this is RQ_NET_DONE,
+ * otherwise the extent could be dropped from the al
+ * before it has actually been written on the peer.
+ * if we crash before our peer knows about the request,
+ * but after the extent has been dropped from the al,
+ * we would forget to resync the corresponding extent.
+ */
+ if (s & RQ_LOCAL_MASK) {
+ if (inc_local_if_state(mdev, Failed)) {
+ drbd_al_complete_io(mdev, req->sector);
+ dec_local(mdev);
+ } else if (__ratelimit(&drbd_ratelimit_state)) {
+ drbd_WARN("Should have called drbd_al_complete_io(, %llu), "
+ "but my Disk seems to have failed :(\n",
+ (unsigned long long) req->sector);
+ }
+ }
+ }
+
+ /* if it was a local io error, we want to notify our
+ * peer about that, and see if we need to
+ * detach the disk and stuff.
+ * to avoid allocating some special work
+ * struct, reuse the request. */
+
+ /* THINK
+ * why do we do this not when we detect the error,
+ * but delay it until it is "done", i.e. possibly
+ * until the next barrier ack? */
+
+ if (rw == WRITE &&
+ ((s & RQ_LOCAL_MASK) && !(s & RQ_LOCAL_OK))) {
+ if (!(req->w.list.next == LIST_POISON1 ||
+ list_empty(&req->w.list))) {
+ /* DEBUG ASSERT only; if this triggers, we
+ * probably corrupt the worker list here */
+ DUMPP(req->w.list.next);
+ DUMPP(req->w.list.prev);
+ }
+ req->w.cb = w_io_error;
+ drbd_queue_work(&mdev->data.work, &req->w);
+ /* drbd_req_free() is done in w_io_error */
+ } else {
+ drbd_req_free(req);
+ }
+}
+
+static void queue_barrier(struct drbd_conf *mdev)
+{
+ struct drbd_barrier *b;
+
+ /* We are within the req_lock. Once we queued the barrier for sending,
+ * we set the CREATE_BARRIER bit. It is cleared as soon as a new
+ * barrier/epoch object is added. This is the only place this bit is
+ * set. It indicates that the barrier for this epoch is already queued,
+ * and no new epoch has been created yet. */
+ if (test_bit(CREATE_BARRIER, &mdev->flags))
+ return;
+
+ b = mdev->newest_barrier;
+ b->w.cb = w_send_barrier;
+ /* inc_ap_pending done here, so we won't
+ * get imbalanced on connection loss.
+ * dec_ap_pending will be done in got_BarrierAck
+ * or (on connection loss) in tl_clear. */
+ inc_ap_pending(mdev);
+ drbd_queue_work(&mdev->data.work, &b->w);
+ set_bit(CREATE_BARRIER, &mdev->flags);
+}
+
+static void _about_to_complete_local_write(struct drbd_conf *mdev,
+ struct drbd_request *req)
+{
+ const unsigned long s = req->rq_state;
+ struct drbd_request *i;
+ struct Tl_epoch_entry *e;
+ struct hlist_node *n;
+ struct hlist_head *slot;
+
+ /* before we can signal completion to the upper layers,
+ * we may need to close the current epoch */
+ if (mdev->state.conn >= Connected &&
+ req->epoch == mdev->newest_barrier->br_number)
+ queue_barrier(mdev);
+
+ /* we need to do the conflict detection stuff,
+ * if we have the ee_hash (two_primaries) and
+ * this has been on the network */
+ if ((s & RQ_NET_DONE) && mdev->ee_hash != NULL) {
+ const sector_t sector = req->sector;
+ const int size = req->size;
+
+ /* ASSERT:
+ * there must be no conflicting requests, since
+ * they must have been failed on the spot */
+#define OVERLAPS overlaps(sector, size, i->sector, i->size)
+ slot = tl_hash_slot(mdev, sector);
+ hlist_for_each_entry(i, n, slot, colision) {
+ if (OVERLAPS) {
+ ALERT("LOGIC BUG: completed: %p %llus +%u; "
+ "other: %p %llus +%u\n",
+ req, (unsigned long long)sector, size,
+ i, (unsigned long long)i->sector, i->size);
+ }
+ }
+
+ /* maybe "wake" those conflicting epoch entries
+ * that wait for this request to finish.
+ *
+ * currently, there can be only _one_ such ee
+ * (well, or some more, which would be pending
+ * DiscardAck not yet sent by the asender...),
+ * since we block the receiver thread upon the
+ * first conflict detection, which will wait on
+ * misc_wait. maybe we want to assert that?
+ *
+	 * anyway, if we found one,
+ * we just have to do a wake_up. */
+#undef OVERLAPS
+#define OVERLAPS overlaps(sector, size, e->sector, e->size)
+ slot = ee_hash_slot(mdev, req->sector);
+ hlist_for_each_entry(e, n, slot, colision) {
+ if (OVERLAPS) {
+ wake_up(&mdev->misc_wait);
+ break;
+ }
+ }
+ }
+#undef OVERLAPS
+}
+
+static void _complete_master_bio(struct drbd_conf *mdev,
+ struct drbd_request *req, int error)
+{
+ dump_bio(mdev, req->master_bio, 1, req);
+ bio_endio(req->master_bio, error);
+ req->master_bio = NULL;
+ dec_ap_bio(mdev);
+}
+
+void _req_may_be_done(struct drbd_request *req, int error)
+{
+ const unsigned long s = req->rq_state;
+ struct drbd_conf *mdev = req->mdev;
+ int rw;
+
+ print_rq_state(req, "_req_may_be_done");
+
+ /* we must not complete the master bio, while it is
+ * still being processed by _drbd_send_zc_bio (drbd_send_dblock)
+ * not yet acknowledged by the peer
+ * not yet completed by the local io subsystem
+ * these flags may get cleared in any order by
+ * the worker,
+ * the receiver,
+ * the bio_endio completion callbacks.
+ */
+ if (s & RQ_NET_QUEUED)
+ return;
+ if (s & RQ_NET_PENDING)
+ return;
+ if (s & RQ_LOCAL_PENDING)
+ return;
+
+ if (req->master_bio) {
+ /* this is data_received (remote read)
+ * or protocol C WriteAck
+ * or protocol B RecvAck
+ * or protocol A "handed_over_to_network" (SendAck)
+ * or canceled or failed,
+ * or killed from the transfer log due to connection loss.
+ */
+
+ /*
+ * figure out whether to report success or failure.
+ *
+	 * report success when at least one of the operations succeeded.
+	 * or, to put it the other way,
+ * only report failure, when both operations failed.
+ *
+ * what to do about the failures is handled elsewhere.
+ * what we need to do here is just: complete the master_bio.
+ */
+ int ok = (s & RQ_LOCAL_OK) || (s & RQ_NET_OK);
+ rw = bio_data_dir(req->master_bio);
+
+ /* remove the request from the conflict detection
+ * respective block_id verification hash */
+ if (!hlist_unhashed(&req->colision))
+ hlist_del(&req->colision);
+ else
+ D_ASSERT((s & RQ_NET_MASK) == 0);
+
+ /* for writes we need to do some extra housekeeping */
+ if (rw == WRITE)
+ _about_to_complete_local_write(mdev, req);
+
+ /* Update disk stats */
+ _drbd_end_io_acct(mdev, req);
+
+ _complete_master_bio(mdev, req,
+ ok ? 0 : (error ? error : -EIO));
+ } else {
+ /* only WRITE requests can end up here without a master_bio */
+ rw = WRITE;
+ }
+
+ if ((s & RQ_NET_MASK) == 0 || (s & RQ_NET_DONE)) {
+ /* this is disconnected (local only) operation,
+ * or protocol C WriteAck,
+ * or protocol A or B BarrierAck,
+ * or killed from the transfer log due to connection loss. */
+ _req_is_done(mdev, req, rw);
+ }
+ /* else: network part and not DONE yet. that is
+ * protocol A or B, barrier ack still pending... */
+}
+
+/*
+ * checks whether there was an overlapping request
+ * or ee already registered.
+ *
+ * if so, return 1, in which case this request is completed on the spot,
+ * without ever being submitted or sent.
+ *
+ * return 0 if it is ok to submit this request.
+ *
+ * NOTE:
+ * paranoia: assume something above us is broken, and issues different write
+ * requests for the same block simultaneously...
+ *
+ * To ensure these won't be reordered differently on both nodes, resulting in
+ * diverging data sets, we discard the later one(s). Not that this is supposed
+ * to happen, but this is the rationale why we also have to check for
+ * conflicting requests with local origin, and why we have to do so regardless
+ * of whether we allowed multiple primaries.
+ *
+ * BTW, in case we only have one primary, the ee_hash is empty anyway, and the
+ * second hlist_for_each_entry becomes a noop. This is even simpler than to
+ * grab a reference on the net_conf, and check for the two_primaries flag...
+ */
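+/* A minimal illustration, assuming overlaps() compares the half-open
+ * sector intervals [sector, sector + (size>>9)):
+ *	overlaps(0, 4096, 8, 4096) -> no conflict: [0,8) vs [8,16)
+ *	overlaps(0, 4096, 4, 4096) -> conflict:    [0,8) vs [4,12)
+ */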
+STATIC int _req_conflicts(struct drbd_request *req)
+{
+ struct drbd_conf *mdev = req->mdev;
+ const sector_t sector = req->sector;
+ const int size = req->size;
+ struct drbd_request *i;
+ struct Tl_epoch_entry *e;
+ struct hlist_node *n;
+ struct hlist_head *slot;
+
+ D_ASSERT(hlist_unhashed(&req->colision));
+
+ if (!inc_net(mdev))
+ return 0;
+
+ /* BUG_ON */
+ ERR_IF (mdev->tl_hash_s == 0)
+ goto out_no_conflict;
+ BUG_ON(mdev->tl_hash == NULL);
+
+#define OVERLAPS overlaps(i->sector, i->size, sector, size)
+ slot = tl_hash_slot(mdev, sector);
+ hlist_for_each_entry(i, n, slot, colision) {
+ if (OVERLAPS) {
+ ALERT("%s[%u] Concurrent local write detected! "
+ "[DISCARD L] new: %llus +%u; "
+ "pending: %llus +%u\n",
+ current->comm, current->pid,
+ (unsigned long long)sector, size,
+ (unsigned long long)i->sector, i->size);
+ goto out_conflict;
+ }
+ }
+
+ if (mdev->ee_hash_s) {
+ /* now, check for overlapping requests with remote origin */
+ BUG_ON(mdev->ee_hash == NULL);
+#undef OVERLAPS
+#define OVERLAPS overlaps(e->sector, e->size, sector, size)
+ slot = ee_hash_slot(mdev, sector);
+ hlist_for_each_entry(e, n, slot, colision) {
+ if (OVERLAPS) {
+ ALERT("%s[%u] Concurrent remote write detected!"
+ " [DISCARD L] new: %llus +%u; "
+ "pending: %llus +%u\n",
+ current->comm, current->pid,
+ (unsigned long long)sector, size,
+ (unsigned long long)e->sector, e->size);
+ goto out_conflict;
+ }
+ }
+ }
+#undef OVERLAPS
+
+out_no_conflict:
+ /* this is like it should be, and what we expected.
+ * our users do behave after all... */
+ dec_net(mdev);
+ return 0;
+
+out_conflict:
+ dec_net(mdev);
+ return 1;
+}
+
+/* obviously this could be coded as many single functions
+ * instead of one huge switch,
+ * or by putting the code directly in the respective locations
+ * (as it has been before).
+ *
+ * but having it this way
+ * enforces that it is all in this one place, where it is easier to audit,
+ * it makes it obvious that whatever "event" "happens" to a request should
+ * happen "atomically" within the req_lock,
+ * and it enforces that we have to think in a very structured manner
+ * about the "events" that may happen to a request during its life time ...
+ *
+ * Though I think it is likely that we will break this up again into many
+ * static inline void _req_mod_ ## what (req) ...
+ */
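+/* A minimal sketch (not part of this patch) of what such a split could
+ * look like, using the to_be_send transition below as template:
+ *
+ *	static inline void _req_mod_to_be_send(struct drbd_request *req)
+ *	{
+ *		req->rq_state |= RQ_NET_PENDING;
+ *		inc_ap_pending(req->mdev);
+ *	}
+ *
+ * For now, the single switch keeps every transition in one auditable
+ * place. */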
+void _req_mod(struct drbd_request *req, enum drbd_req_event what, int error)
+{
+ struct drbd_conf *mdev = req->mdev;
+
+ if (error && (bio_rw(req->master_bio) != READA))
+ ERR("got an _req_mod() errno of %d\n", error);
+
+ print_req_mod(req, what);
+
+ switch (what) {
+ default:
+ ERR("LOGIC BUG in %s:%u\n", __FILE__ , __LINE__);
+ return;
+
+ /* does not happen...
+ * initialization done in drbd_req_new
+ case created:
+ break;
+ */
+
+ case to_be_send: /* via network */
+ /* reached via drbd_make_request_common
+ * and from w_read_retry_remote */
+ D_ASSERT(!(req->rq_state & RQ_NET_MASK));
+ req->rq_state |= RQ_NET_PENDING;
+ inc_ap_pending(mdev);
+ break;
+
+ case to_be_submitted: /* locally */
+ /* reached via drbd_make_request_common */
+ D_ASSERT(!(req->rq_state & RQ_LOCAL_MASK));
+ req->rq_state |= RQ_LOCAL_PENDING;
+ break;
+
+ case completed_ok:
+ if (bio_data_dir(req->private_bio) == WRITE)
+ mdev->writ_cnt += req->size>>9;
+ else
+ mdev->read_cnt += req->size>>9;
+
+ bio_put(req->private_bio);
+ req->private_bio = NULL;
+
+ req->rq_state |= (RQ_LOCAL_COMPLETED|RQ_LOCAL_OK);
+ req->rq_state &= ~RQ_LOCAL_PENDING;
+
+ _req_may_be_done(req, error);
+ dec_local(mdev);
+ break;
+
+ case write_completed_with_error:
+ req->rq_state |= RQ_LOCAL_COMPLETED;
+ req->rq_state &= ~RQ_LOCAL_PENDING;
+
+ bio_put(req->private_bio);
+ req->private_bio = NULL;
+ ALERT("Local WRITE failed sec=%llus size=%u\n",
+ (unsigned long long)req->sector, req->size);
+ /* and now: check how to handle local io error. */
+ __drbd_chk_io_error(mdev, FALSE);
+ _req_may_be_done(req, error);
+ dec_local(mdev);
+ break;
+
+ case read_completed_with_error:
+ if (bio_rw(req->master_bio) != READA)
+ drbd_set_out_of_sync(mdev, req->sector, req->size);
+
+ req->rq_state |= RQ_LOCAL_COMPLETED;
+ req->rq_state &= ~RQ_LOCAL_PENDING;
+
+ bio_put(req->private_bio);
+ req->private_bio = NULL;
+ if (bio_rw(req->master_bio) == READA) {
+ /* it is legal to fail READA */
+ _req_may_be_done(req, error);
+ dec_local(mdev);
+ break;
+ }
+ /* else */
+ ALERT("Local READ failed sec=%llus size=%u\n",
+ (unsigned long long)req->sector, req->size);
+ /* _req_mod(req,to_be_send); oops, recursion in static inline */
+ D_ASSERT(!(req->rq_state & RQ_NET_MASK));
+ req->rq_state |= RQ_NET_PENDING;
+ inc_ap_pending(mdev);
+
+ __drbd_chk_io_error(mdev, FALSE);
+ dec_local(mdev);
+ /* NOTE: if we have no connection,
+ * or know the peer has no good data either,
+ * then we don't actually need to "queue_for_net_read",
+	 * but we do so anyway, since the drbd_io_error()
+ * and the potential state change to "Diskless"
+ * needs to be done from process context */
+
+ /* fall through: _req_mod(req,queue_for_net_read); */
+
+ case queue_for_net_read:
+ /* READ or READA, and
+ * no local disk,
+ * or target area marked as invalid,
+ * or just got an io-error. */
+ /* from drbd_make_request_common
+ * or from bio_endio during read io-error recovery */
+
+ /* so we can verify the handle in the answer packet
+ * corresponding hlist_del is in _req_may_be_done() */
+ hlist_add_head(&req->colision, ar_hash_slot(mdev, req->sector));
+
+ set_bit(UNPLUG_REMOTE, &mdev->flags); /* why? */
+
+ D_ASSERT(req->rq_state & RQ_NET_PENDING);
+ req->rq_state |= RQ_NET_QUEUED;
+ req->w.cb = (req->rq_state & RQ_LOCAL_MASK)
+ ? w_read_retry_remote
+ : w_send_read_req;
+ drbd_queue_work(&mdev->data.work, &req->w);
+ break;
+
+ case queue_for_net_write:
+ /* assert something? */
+ /* from drbd_make_request_common only */
+
+ hlist_add_head(&req->colision, tl_hash_slot(mdev, req->sector));
+ /* corresponding hlist_del is in _req_may_be_done() */
+
+ /* NOTE
+ * In case the req ended up on the transfer log before being
+ * queued on the worker, it could lead to this request being
+ * missed during cleanup after connection loss.
+ * So we have to do both operations here,
+ * within the same lock that protects the transfer log.
+ *
+ * _req_add_to_epoch(req); this has to be after the
+ * _maybe_start_new_epoch(req); which happened in
+ * drbd_make_request_common, because we now may set the bit
+ * again ourselves to close the current epoch.
+ *
+ * Add req to the (now) current epoch (barrier). */
+
+ /* see drbd_make_request_common,
+ * just after it grabs the req_lock */
+ D_ASSERT(test_bit(CREATE_BARRIER, &mdev->flags) == 0);
+
+ req->epoch = mdev->newest_barrier->br_number;
+ list_add_tail(&req->tl_requests,
+ &mdev->newest_barrier->requests);
+
+ /* increment size of current epoch */
+ mdev->newest_barrier->n_req++;
+
+ /* queue work item to send data */
+ D_ASSERT(req->rq_state & RQ_NET_PENDING);
+ req->rq_state |= RQ_NET_QUEUED;
+ req->w.cb = w_send_dblock;
+ drbd_queue_work(&mdev->data.work, &req->w);
+
+ /* close the epoch, in case it outgrew the limit */
+ if (mdev->newest_barrier->n_req >= mdev->net_conf->max_epoch_size)
+ queue_barrier(mdev);
+
+ break;
+
+ case send_canceled:
+ /* treat it the same */
+ case send_failed:
+ /* real cleanup will be done from tl_clear. just update flags
+ * so it is no longer marked as on the worker queue */
+ req->rq_state &= ~RQ_NET_QUEUED;
+ /* if we did it right, tl_clear should be scheduled only after
+ * this, so this should not be necessary! */
+ _req_may_be_done(req, error);
+ break;
+
+ case handed_over_to_network:
+ /* assert something? */
+ if (bio_data_dir(req->master_bio) == WRITE &&
+ mdev->net_conf->wire_protocol == DRBD_PROT_A) {
+ /* this is what is dangerous about protocol A:
+			 * pretend it was successfully written on the peer. */
+ if (req->rq_state & RQ_NET_PENDING) {
+ dec_ap_pending(mdev);
+ req->rq_state &= ~RQ_NET_PENDING;
+ req->rq_state |= RQ_NET_OK;
+ } /* else: neg-ack was faster... */
+ /* it is still not yet RQ_NET_DONE until the
+ * corresponding epoch barrier got acked as well,
+ * so we know what to dirty on connection loss */
+ }
+ req->rq_state &= ~RQ_NET_QUEUED;
+ req->rq_state |= RQ_NET_SENT;
+ /* because _drbd_send_zc_bio could sleep, and may want to
+ * dereference the bio even after the "write_acked_by_peer" and
+ * "completed_ok" events came in, once we return from
+ * _drbd_send_zc_bio (drbd_send_dblock), we have to check
+ * whether it is done already, and end it. */
+ _req_may_be_done(req, error);
+ break;
+
+ case connection_lost_while_pending:
+ /* transfer log cleanup after connection loss */
+ /* assert something? */
+ if (req->rq_state & RQ_NET_PENDING)
+ dec_ap_pending(mdev);
+ req->rq_state &= ~(RQ_NET_OK|RQ_NET_PENDING);
+ req->rq_state |= RQ_NET_DONE;
+ /* if it is still queued, we may not complete it here.
+ * it will be canceled soon. */
+ if (!(req->rq_state & RQ_NET_QUEUED))
+ _req_may_be_done(req, error);
+ break;
+
+ case write_acked_by_peer_and_sis:
+ req->rq_state |= RQ_NET_SIS;
+ case conflict_discarded_by_peer:
+		/* for discarded conflicting writes of multiple primaries,
+ * there is no need to keep anything in the tl, potential
+ * node crashes are covered by the activity log. */
+ req->rq_state |= RQ_NET_DONE;
+ /* fall through */
+ case write_acked_by_peer:
+ /* protocol C; successfully written on peer.
+ * Nothing to do here.
+ * We want to keep the tl in place for all protocols, to cater
+ * for volatile write-back caches on lower level devices.
+ *
+ * A barrier request is expected to have forced all prior
+ * requests onto stable storage, so completion of a barrier
+ * request could set NET_DONE right here, and not wait for the
+		 * BarrierAck, but that is an unnecessary optimisation. */
+
+ /* this makes it effectively the same as for: */
+ case recv_acked_by_peer:
+		/* protocol B; pretends to be successfully written on peer.
+ * see also notes above in handed_over_to_network about
+ * protocol != C */
+ req->rq_state |= RQ_NET_OK;
+ D_ASSERT(req->rq_state & RQ_NET_PENDING);
+ dec_ap_pending(mdev);
+ req->rq_state &= ~RQ_NET_PENDING;
+ _req_may_be_done(req, error);
+ break;
+
+ case neg_acked:
+ /* assert something? */
+ if (req->rq_state & RQ_NET_PENDING)
+ dec_ap_pending(mdev);
+ req->rq_state &= ~(RQ_NET_OK|RQ_NET_PENDING);
+
+ req->rq_state |= RQ_NET_DONE;
+ _req_may_be_done(req, error);
+ /* else: done by handed_over_to_network */
+ break;
+
+ case barrier_acked:
+ if (req->rq_state & RQ_NET_PENDING) {
+ /* barrier came in before all requests have been acked.
+ * this is bad, because if the connection is lost now,
+ * we won't be able to clean them up... */
+ _print_rq_state(req,
+ "FIXME (barrier_acked but pending)");
+ list_move(&req->tl_requests, &mdev->out_of_sequence_requests);
+ }
+ D_ASSERT(req->rq_state & RQ_NET_SENT);
+ req->rq_state |= RQ_NET_DONE;
+ _req_may_be_done(req, error);
+ break;
+
+ case data_received:
+ D_ASSERT(req->rq_state & RQ_NET_PENDING);
+ dec_ap_pending(mdev);
+ req->rq_state &= ~RQ_NET_PENDING;
+ req->rq_state |= (RQ_NET_OK|RQ_NET_DONE);
+ _req_may_be_done(req, error);
+ break;
+	}
+}
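+
+/* For orientation, a typical error-free protocol C write passes through:
+ * to_be_send, to_be_submitted, queue_for_net_write, then
+ * handed_over_to_network and completed_ok (in either order),
+ * write_acked_by_peer, and finally barrier_acked, which sets RQ_NET_DONE
+ * and lets _req_may_be_done() destroy the request. */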
+
+/* we may do a local read if:
+ * - we are consistent (of course),
+ * - or we are generally inconsistent,
+ * BUT we are still/already IN SYNC for this area.
+ * since size may be bigger than BM_BLOCK_SIZE,
+ * we may need to check several bits.
+ */
+STATIC int drbd_may_do_local_read(struct drbd_conf *mdev, sector_t sector, int size)
+{
+ unsigned long sbnr, ebnr;
+ sector_t esector, nr_sectors;
+
+ if (mdev->state.disk == UpToDate)
+ return 1;
+ if (mdev->state.disk >= Outdated)
+ return 0;
+ if (mdev->state.disk < Inconsistent)
+ return 0;
+	/* state.disk == Inconsistent: we will have a look at the bitmap */
+ nr_sectors = drbd_get_capacity(mdev->this_bdev);
+ esector = sector + (size >> 9) - 1;
+
+ D_ASSERT(sector < nr_sectors);
+ D_ASSERT(esector < nr_sectors);
+
+ sbnr = BM_SECT_TO_BIT(sector);
+ ebnr = BM_SECT_TO_BIT(esector);
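+	/* e.g., assuming the usual 4k bitmap granularity (8 sectors per
+	 * bit): a 16k read at sector 24 spans sectors 24..55, so bits
+	 * 3..6 have to be clean for us to read locally. */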
+
+ return 0 == drbd_bm_count_bits(mdev, sbnr, ebnr);
+}
+
+STATIC int drbd_make_request_common(struct drbd_conf *mdev, struct bio *bio)
+{
+ const int rw = bio_rw(bio);
+ const int size = bio->bi_size;
+ const sector_t sector = bio->bi_sector;
+ struct drbd_barrier *b = NULL;
+ struct drbd_request *req;
+ int local, remote;
+ int err = -EIO;
+
+ /* allocate outside of all locks; */
+ req = drbd_req_new(mdev, bio);
+ if (!req) {
+ dec_ap_bio(mdev);
+ /* only pass the error to the upper layers.
+		 * if the user cannot handle io errors, that's not our business. */
+ ERR("could not kmalloc() req\n");
+ bio_endio(bio, -ENOMEM);
+ return 0;
+ }
+
+ dump_bio(mdev, bio, 0, req);
+
+ local = inc_local(mdev);
+ if (!local) {
+ bio_put(req->private_bio); /* or we get a bio leak */
+ req->private_bio = NULL;
+ }
+ if (rw == WRITE) {
+ remote = 1;
+ } else {
+ /* READ || READA */
+ if (local) {
+ if (!drbd_may_do_local_read(mdev, sector, size)) {
+ /* we could kick the syncer to
+ * sync this extent asap, wait for
+ * it, then continue locally.
+ * Or just issue the request remotely.
+ */
+ local = 0;
+ bio_put(req->private_bio);
+ req->private_bio = NULL;
+ dec_local(mdev);
+ }
+ }
+ remote = !local && mdev->state.pdsk >= UpToDate;
+ }
+
+ /* If we have a disk, but a READA request is mapped to remote,
+ * we are Primary, Inconsistent, SyncTarget.
+ * Just fail that READA request right here.
+ *
+ * THINK: maybe fail all READA when not local?
+ * or make this configurable...
+ * if network is slow, READA won't do any good.
+ */
+ if (rw == READA && mdev->state.disk >= Inconsistent && !local) {
+ err = -EWOULDBLOCK;
+ goto fail_and_free_req;
+ }
+
+ /* For WRITES going to the local disk, grab a reference on the target
+ * extent. This waits for any resync activity in the corresponding
+ * resync extent to finish, and, if necessary, pulls in the target
+ * extent into the activity log, which involves further disk io because
+ * of transactional on-disk meta data updates. */
+ if (rw == WRITE && local)
+ drbd_al_begin_io(mdev, sector);
+
+ remote = remote && (mdev->state.pdsk == UpToDate ||
+ (mdev->state.pdsk == Inconsistent &&
+ mdev->state.conn >= Connected));
+
+ if (!(local || remote)) {
+ ERR("IO ERROR: neither local nor remote disk\n");
+ goto fail_free_complete;
+ }
+
+ /* For WRITE request, we have to make sure that we have an
+ * unused_spare_barrier, in case we need to start a new epoch.
+	 * I try to be smart and avoid always pre-allocating "just in case",
+ * but there is a race between testing the bit and pointer outside the
+ * spinlock, and grabbing the spinlock.
+ * if we lost that race, we retry. */
+ if (rw == WRITE && remote &&
+ mdev->unused_spare_barrier == NULL &&
+ test_bit(CREATE_BARRIER, &mdev->flags)) {
+allocate_barrier:
+ b = kmalloc(sizeof(struct drbd_barrier), GFP_NOIO);
+ if (!b) {
+ ERR("Failed to alloc barrier.\n");
+ err = -ENOMEM;
+ goto fail_free_complete;
+ }
+ }
+
+ /* GOOD, everything prepared, grab the spin_lock */
+ spin_lock_irq(&mdev->req_lock);
+
+ if (remote) {
+ remote = (mdev->state.pdsk == UpToDate ||
+ (mdev->state.pdsk == Inconsistent &&
+ mdev->state.conn >= Connected));
+ if (!remote)
+ drbd_WARN("lost connection while grabbing the req_lock!\n");
+ if (!(local || remote)) {
+ ERR("IO ERROR: neither local nor remote disk\n");
+ spin_unlock_irq(&mdev->req_lock);
+ goto fail_free_complete;
+ }
+ }
+
+ if (b && mdev->unused_spare_barrier == NULL) {
+ mdev->unused_spare_barrier = b;
+ b = NULL;
+ }
+ if (rw == WRITE && remote &&
+ mdev->unused_spare_barrier == NULL &&
+ test_bit(CREATE_BARRIER, &mdev->flags)) {
+ /* someone closed the current epoch
+ * while we were grabbing the spinlock */
+ spin_unlock_irq(&mdev->req_lock);
+ goto allocate_barrier;
+ }
+
+
+ /* Update disk stats */
+ _drbd_start_io_acct(mdev, req, bio);
+
+ /* _maybe_start_new_epoch(mdev);
+ * If we need to generate a write barrier packet, we have to add the
+ * new epoch (barrier) object, and queue the barrier packet for sending,
+ * and queue the req's data after it _within the same lock_, otherwise
+	 * we have race conditions where the reorder domains could be mixed up.
+ *
+ * Even read requests may start a new epoch and queue the corresponding
+ * barrier packet. To get the write ordering right, we only have to
+ * make sure that, if this is a write request and it triggered a
+ * barrier packet, this request is queued within the same spinlock. */
+ if (remote && mdev->unused_spare_barrier &&
+ test_and_clear_bit(CREATE_BARRIER, &mdev->flags)) {
+ _tl_add_barrier(mdev, mdev->unused_spare_barrier);
+ mdev->unused_spare_barrier = NULL;
+ } else {
+ D_ASSERT(!(remote && rw == WRITE &&
+ test_bit(CREATE_BARRIER, &mdev->flags)));
+ }
+
+ /* NOTE
+ * Actually, 'local' may be wrong here already, since we may have failed
+ * to write to the meta data, and may become wrong anytime because of
+ * local io-error for some other request, which would lead to us
+ * "detaching" the local disk.
+ *
+ * 'remote' may become wrong any time because the network could fail.
+ *
+ * This is a harmless race condition, though, since it is handled
+	 * correctly at the appropriate places; so it just defers the failure
+ * of the respective operation.
+ */
+
+ /* mark them early for readability.
+ * this just sets some state flags. */
+ if (remote)
+ _req_mod(req, to_be_send, 0);
+ if (local)
+ _req_mod(req, to_be_submitted, 0);
+
+	/* check this request on the collision detection hash tables.
+ * if we have a conflict, just complete it here.
+ * THINK do we want to check reads, too? (I don't think so...) */
+ if (rw == WRITE && _req_conflicts(req)) {
+ /* this is a conflicting request.
+ * even though it may have been only _partially_
+ * overlapping with one of the currently pending requests,
+ * without even submitting or sending it, we will
+ * pretend that it was successfully served right now.
+ */
+ if (local) {
+ bio_put(req->private_bio);
+ req->private_bio = NULL;
+ drbd_al_complete_io(mdev, req->sector);
+ dec_local(mdev);
+ local = 0;
+ }
+ if (remote)
+ dec_ap_pending(mdev);
+ _drbd_end_io_acct(mdev, req);
+ /* THINK: do we want to fail it (-EIO), or pretend success? */
+ bio_endio(req->master_bio, 0);
+ req->master_bio = NULL;
+ dec_ap_bio(mdev);
+ drbd_req_free(req);
+ remote = 0;
+ }
+
+ /* NOTE remote first: to get the concurrent write detection right,
+ * we must register the request before start of local IO. */
+ if (remote) {
+ /* either WRITE and Connected,
+ * or READ, and no local disk,
+ * or READ, but not in sync.
+ */
+ if (rw == WRITE)
+ _req_mod(req, queue_for_net_write, 0);
+ else
+ _req_mod(req, queue_for_net_read, 0);
+ }
+ spin_unlock_irq(&mdev->req_lock);
+ kfree(b); /* if someone else has beaten us to it... */
+
+ if (local) {
+ req->private_bio->bi_bdev = mdev->bc->backing_bdev;
+
+ dump_internal_bio("Pri", mdev, req->private_bio, 0);
+
+ if (FAULT_ACTIVE(mdev, rw == WRITE ? DRBD_FAULT_DT_WR
+ : rw == READ ? DRBD_FAULT_DT_RD
+ : DRBD_FAULT_DT_RA))
+ bio_endio(req->private_bio, -EIO);
+ else
+ generic_make_request(req->private_bio);
+ }
+
+ /* we need to plug ALWAYS since we possibly need to kick lo_dev.
+ * we plug after submit, so we won't miss an unplug event */
+ drbd_plug_device(mdev);
+
+ return 0;
+
+fail_free_complete:
+ if (rw == WRITE && local)
+ drbd_al_complete_io(mdev, sector);
+fail_and_free_req:
+ if (local) {
+ bio_put(req->private_bio);
+ req->private_bio = NULL;
+ dec_local(mdev);
+ }
+ bio_endio(bio, err);
+ drbd_req_free(req);
+ dec_ap_bio(mdev);
+ kfree(b);
+
+ return 0;
+}
+
+/* helper function for drbd_make_request
+ * if we can determine just by the mdev (state) that this request will fail,
+ * return 1; otherwise return 0.
+ */
+static int drbd_fail_request_early(struct drbd_conf *mdev, int is_write)
+{
+ /* Unconfigured */
+ if (mdev->state.conn == Disconnecting &&
+ mdev->state.disk == Diskless)
+ return 1;
+
+ if (mdev->state.role != Primary &&
+ (!allow_oos || is_write)) {
+ if (__ratelimit(&drbd_ratelimit_state)) {
+ ERR("Process %s[%u] tried to %s; "
+ "since we are not in Primary state, "
+ "we cannot allow this\n",
+ current->comm, current->pid,
+ is_write ? "WRITE" : "READ");
+ }
+ return 1;
+ }
+
+ /*
+ * Paranoia: we might have been primary, but sync target, or
+ * even diskless, then lost the connection.
+	 * This should have been handled (panic? suspend?) somewhere
+ * else. But maybe it was not, so check again here.
+ * Caution: as long as we do not have a read/write lock on mdev,
+ * to serialize state changes, this is racy, since we may lose
+ * the connection *after* we test for the cstate.
+ */
+ if (mdev->state.disk < UpToDate && mdev->state.pdsk < UpToDate) {
+ if (__ratelimit(&drbd_ratelimit_state))
+ ERR("Sorry, I have no access to good data anymore.\n");
+ return 1;
+ }
+
+ return 0;
+}
+
+int drbd_make_request_26(struct request_queue *q, struct bio *bio)
+{
+ unsigned int s_enr, e_enr;
+ struct drbd_conf *mdev = (struct drbd_conf *) q->queuedata;
+
+ if (drbd_fail_request_early(mdev, bio_data_dir(bio) & WRITE)) {
+ bio_endio(bio, -EPERM);
+ return 0;
+ }
+
+ /* Reject barrier requests if we know the underlying device does
+ * not support them.
+	 * XXX: Need to get this info from peer as well somehow so we
+ * XXX: reject if EITHER side/data/metadata area does not support them.
+ *
+ * because of those XXX, this is not yet enabled,
+ * i.e. in drbd_init_set_defaults we set the NO_BARRIER_SUPP bit.
+ */
+ if (unlikely(bio_barrier(bio) && test_bit(NO_BARRIER_SUPP, &mdev->flags))) {
+		/* drbd_WARN("Rejecting barrier request as underlying device does not support them\n"); */
+ bio_endio(bio, -EOPNOTSUPP);
+ return 0;
+ }
+
+ /*
+ * what we "blindly" assume:
+ */
+ D_ASSERT(bio->bi_size > 0);
+ D_ASSERT((bio->bi_size & 0x1ff) == 0);
+ D_ASSERT(bio->bi_idx == 0);
+
+	/* to make some things easier, force alignment of requests within the
+ * granularity of our hash tables */
+ s_enr = bio->bi_sector >> HT_SHIFT;
+ e_enr = (bio->bi_sector+(bio->bi_size>>9)-1) >> HT_SHIFT;
+
+ if (likely(s_enr == e_enr)) {
+ inc_ap_bio(mdev, 1);
+ return drbd_make_request_common(mdev, bio);
+ }
+
+ /* can this bio be split generically?
+ * Maybe add our own split-arbitrary-bios function. */
+ if (bio->bi_vcnt != 1 || bio->bi_idx != 0 || bio->bi_size > DRBD_MAX_SEGMENT_SIZE) {
+ /* rather error out here than BUG in bio_split */
+ ERR("bio would need to, but cannot, be split: "
+ "(vcnt=%u,idx=%u,size=%u,sector=%llu)\n",
+ bio->bi_vcnt, bio->bi_idx, bio->bi_size,
+ (unsigned long long)bio->bi_sector);
+ bio_endio(bio, -EINVAL);
+ } else {
+ /* This bio crosses some boundary, so we have to split it. */
+ struct bio_pair *bp;
+ /* works for the "do not cross hash slot boundaries" case
+ * e.g. sector 262269, size 4096
+ * s_enr = 262269 >> 6 = 4097
+ * e_enr = (262269+8-1) >> 6 = 4098
+ * HT_SHIFT = 6
+ * sps = 64, mask = 63
+ * first_sectors = 64 - (262269 & 63) = 3
+ */
+ const sector_t sect = bio->bi_sector;
+ const int sps = 1 << HT_SHIFT; /* sectors per slot */
+ const int mask = sps - 1;
+ const sector_t first_sectors = sps - (sect & mask);
+ bp = bio_split(bio,
+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,28)
+ bio_split_pool,
+#endif
+ first_sectors);
+
+ /* we need to get a "reference count" (ap_bio_cnt)
+ * to avoid races with the disconnect/reconnect/suspend code.
+ * In case we need to split the bio here, we need to get two references
+ * atomically, otherwise we might deadlock when trying to submit the
+ * second one! */
+ inc_ap_bio(mdev, 2);
+
+ D_ASSERT(e_enr == s_enr + 1);
+
+ drbd_make_request_common(mdev, &bp->bio1);
+ drbd_make_request_common(mdev, &bp->bio2);
+ bio_pair_release(bp);
+ }
+ return 0;
+}
+
+/* This is called by bio_add_page(). With this function we reduce
+ * the number of BIOs that span over multiple DRBD_MAX_SEGMENT_SIZE
+ * units (formerly AL_EXTENTs).
+ *
+ * we do the calculation within the lower 32bit of the byte offsets,
+ * since we don't care for actual offset, but only check whether it
+ * would cross "activity log extent" boundaries.
+ *
+ * As long as the BIO is empty we have to allow at least one bvec,
+ * regardless of size and offset. so the resulting bio may still
+ * cross extent boundaries. those are dealt with (bio_split) in
+ * drbd_make_request_26.
+ */
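+/* Worked example, assuming a 32KiB DRBD_MAX_SEGMENT_SIZE: a bio at byte
+ * offset 28KiB into such a unit with bi_size 2KiB may grow by at most
+ * 32k - (28k + 2k) = 2KiB before it would cross the boundary. */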
+int drbd_merge_bvec(struct request_queue *q, struct bvec_merge_data *bvm, struct bio_vec *bvec)
+{
+ struct drbd_conf *mdev = (struct drbd_conf *) q->queuedata;
+ unsigned int bio_offset =
+ (unsigned int)bvm->bi_sector << 9; /* 32 bit */
+ unsigned int bio_size = bvm->bi_size;
+ int limit, backing_limit;
+
+ limit = DRBD_MAX_SEGMENT_SIZE
+ - ((bio_offset & (DRBD_MAX_SEGMENT_SIZE-1)) + bio_size);
+ if (limit < 0)
+ limit = 0;
+ if (bio_size == 0) {
+ if (limit <= bvec->bv_len)
+ limit = bvec->bv_len;
+ } else if (limit && inc_local(mdev)) {
+ struct request_queue * const b =
+ mdev->bc->backing_bdev->bd_disk->queue;
+ if (b->merge_bvec_fn && mdev->bc->dc.use_bmbv) {
+ backing_limit = b->merge_bvec_fn(b, bvm, bvec);
+ limit = min(limit, backing_limit);
+ }
+ dec_local(mdev);
+ }
+ return limit;
+}
The /proc/drbd interface.
Signed-off-by: Philipp Reisner <[email protected]>
Signed-off-by: Lars Ellenberg <[email protected]>
---
diff -uNrp linux-2.6.29/drivers/block/drbd/drbd_proc.c linux-2.6.29-drbd/drivers/block/drbd/drbd_proc.c
--- linux-2.6.29/drivers/block/drbd/drbd_proc.c 1970-01-01 01:00:00.000000000 +0100
+++ linux-2.6.29-drbd/drivers/block/drbd/drbd_proc.c 2009-03-26 15:55:39.571133000 +0100
@@ -0,0 +1,271 @@
+/*
+ drbd_proc.c
+
+ This file is part of DRBD by Philipp Reisner and Lars Ellenberg.
+
+ Copyright (C) 2001-2008, LINBIT Information Technologies GmbH.
+ Copyright (C) 1999-2008, Philipp Reisner <[email protected]>.
+ Copyright (C) 2002-2008, Lars Ellenberg <[email protected]>.
+
+ drbd is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2, or (at your option)
+ any later version.
+
+ drbd is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with drbd; see the file COPYING. If not, write to
+ the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA.
+
+ */
+
+#include <linux/autoconf.h>
+#include <linux/module.h>
+
+#include <asm/uaccess.h>
+#include <linux/fs.h>
+#include <linux/file.h>
+#include <linux/slab.h>
+#include <linux/proc_fs.h>
+#include <linux/seq_file.h>
+#include <linux/drbd.h>
+#include "drbd_int.h"
+#include "lru_cache.h" /* for lc_sprintf_stats */
+
+STATIC int drbd_proc_open(struct inode *inode, struct file *file);
+
+
+struct proc_dir_entry *drbd_proc;
+struct file_operations drbd_proc_fops = {
+ .owner = THIS_MODULE,
+ .open = drbd_proc_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = single_release,
+};
+
+
+/*
+ * progress bars shamelessly adapted from drivers/md/md.c
+ * output looks like
+ * [=====>..............] 33.5% (23456/123456)
+ * finish: 2:20:20 speed: 6,345 (6,456) K/sec
+ */
+STATIC void drbd_syncer_progress(struct drbd_conf *mdev, struct seq_file *seq)
+{
+ unsigned long db, dt, dbdt, rt, rs_left;
+ unsigned int res;
+ int i, x, y;
+
+ drbd_get_syncer_progress(mdev, &rs_left, &res);
+
+ x = res/50;
+ y = 20-x;
+ seq_printf(seq, "\t[");
+ for (i = 1; i < x; i++)
+ seq_printf(seq, "=");
+ seq_printf(seq, ">");
+ for (i = 0; i < y; i++)
+ seq_printf(seq, ".");
+ seq_printf(seq, "] ");
+
+ seq_printf(seq, "sync'ed:%3u.%u%% ", res / 10, res % 10);
+ /* if more than 1 GB display in MB */
+ if (mdev->rs_total > 0x100000L)
+ seq_printf(seq, "(%lu/%lu)M\n\t",
+ (unsigned long) Bit2KB(rs_left >> 10),
+ (unsigned long) Bit2KB(mdev->rs_total >> 10));
+ else
+ seq_printf(seq, "(%lu/%lu)K\n\t",
+ (unsigned long) Bit2KB(rs_left),
+ (unsigned long) Bit2KB(mdev->rs_total));
+
+ /* see drivers/md/md.c
+ * We do not want to overflow, so the order of operands and
+ * the * 100 / 100 trick are important. We do a +1 to be
+ * safe against division by zero. We only estimate anyway.
+ *
+ * dt: time from mark until now
+ * db: blocks written from mark until now
+ * rt: remaining time
+	 * rt: remaining time
+	 */
+ dt = (jiffies - mdev->rs_mark_time) / HZ;
+
+ if (dt > 20) {
+ /* if we made no update to rs_mark_time for too long,
+ * we are stalled. show that. */
+ seq_printf(seq, "stalled\n");
+ return;
+ }
+
+ if (!dt)
+ dt++;
+ db = mdev->rs_mark_left - rs_left;
+ rt = (dt * (rs_left / (db/100+1)))/100; /* seconds */
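+	/* worked example (illustrative numbers): dt = 10 s, db = 2000
+	 * bits, rs_left = 100000 bits:
+	 * rt = (10 * (100000 / (2000/100 + 1))) / 100 = 476 s,
+	 * close to the exact 100000 / (2000/10) = 500 s. */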
+
+ seq_printf(seq, "finish: %lu:%02lu:%02lu",
+ rt / 3600, (rt % 3600) / 60, rt % 60);
+
+ /* current speed average over (SYNC_MARKS * SYNC_MARK_STEP) jiffies */
+ dbdt = Bit2KB(db/dt);
+ if (dbdt > 1000)
+ seq_printf(seq, " speed: %ld,%03ld",
+ dbdt/1000, dbdt % 1000);
+ else
+ seq_printf(seq, " speed: %ld", dbdt);
+
+ /* mean speed since syncer started
+ * we do account for PausedSync periods */
+ dt = (jiffies - mdev->rs_start - mdev->rs_paused) / HZ;
+ if (dt <= 0)
+ dt = 1;
+ db = mdev->rs_total - rs_left;
+ dbdt = Bit2KB(db/dt);
+ if (dbdt > 1000)
+ seq_printf(seq, " (%ld,%03ld)",
+ dbdt/1000, dbdt % 1000);
+ else
+ seq_printf(seq, " (%ld)", dbdt);
+
+ seq_printf(seq, " K/sec\n");
+}
+
+#ifdef ENABLE_DYNAMIC_TRACE
+STATIC void resync_dump_detail(struct seq_file *seq, struct lc_element *e)
+{
+ struct bm_extent *bme = (struct bm_extent *)e;
+
+ seq_printf(seq, "%5d %s %s\n", bme->rs_left,
+ bme->flags & BME_NO_WRITES ? "NO_WRITES" : "---------",
+ bme->flags & BME_LOCKED ? "LOCKED" : "------"
+ );
+}
+#endif
+
+STATIC int drbd_seq_show(struct seq_file *seq, void *v)
+{
+ int i, hole = 0;
+ const char *sn;
+ struct drbd_conf *mdev;
+
+ static char write_ordering_chars[] = {
+ [WO_none] = 'n',
+ [WO_drain_io] = 'd',
+ [WO_bdev_flush] = 'f',
+ [WO_bio_barrier] = 'b',
+ };
+
+ seq_printf(seq, "version: " REL_VERSION " (api:%d/proto:%d-%d)\n%s\n",
+ API_VERSION, PRO_VERSION_MIN, PRO_VERSION_MAX, drbd_buildtag());
+
+ /*
+ cs .. connection state
+ ro .. node role (local/remote)
+ ds .. disk state (local/remote)
+ protocol
+ various flags
+ ns .. network send
+ nr .. network receive
+ dw .. disk write
+ dr .. disk read
+ al .. activity log write count
+ bm .. bitmap update write count
+ pe .. pending (waiting for ack or data reply)
+ ua .. unack'd (still need to send ack or data reply)
+ ap .. application requests accepted, but not yet completed
+ ep .. number of epochs currently "on the fly", BarrierAck pending
+ wo .. write ordering mode currently in use
+ oos .. known out-of-sync kB
+ */
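+	/* a typical device line then looks like this (illustrative):
+	 *  0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r----
+	 *     ns:1002 nr:0 dw:1002 dr:245 al:5 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
+	 */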
+
+ for (i = 0; i < minor_count; i++) {
+ mdev = minor_to_mdev(i);
+ if (!mdev) {
+ hole = 1;
+ continue;
+ }
+ if (hole) {
+ hole = 0;
+ seq_printf(seq, "\n");
+ }
+
+ sn = conns_to_name(mdev->state.conn);
+
+ if (mdev->state.conn == StandAlone &&
+ mdev->state.disk == Diskless &&
+ mdev->state.role == Secondary) {
+ seq_printf(seq, "%2d: cs:Unconfigured\n", i);
+ } else {
+ seq_printf(seq,
+ "%2d: cs:%s ro:%s/%s ds:%s/%s %c %c%c%c%c%c\n"
+ " ns:%u nr:%u dw:%u dr:%u al:%u bm:%u "
+ "lo:%d pe:%d ua:%d ap:%d ep:%d wo:%c",
+ i, sn,
+ roles_to_name(mdev->state.role),
+ roles_to_name(mdev->state.peer),
+ disks_to_name(mdev->state.disk),
+ disks_to_name(mdev->state.pdsk),
+ (mdev->net_conf == NULL ? ' ' :
+ (mdev->net_conf->wire_protocol - DRBD_PROT_A+'A')),
+ mdev->state.susp ? 's' : 'r',
+ mdev->state.aftr_isp ? 'a' : '-',
+ mdev->state.peer_isp ? 'p' : '-',
+ mdev->state.user_isp ? 'u' : '-',
+ mdev->congestion_reason,
+ mdev->send_cnt/2,
+ mdev->recv_cnt/2,
+ mdev->writ_cnt/2,
+ mdev->read_cnt/2,
+ mdev->al_writ_cnt,
+ mdev->bm_writ_cnt,
+ atomic_read(&mdev->local_cnt),
+ atomic_read(&mdev->ap_pending_cnt) +
+ atomic_read(&mdev->rs_pending_cnt),
+ atomic_read(&mdev->unacked_cnt),
+ atomic_read(&mdev->ap_bio_cnt),
+ mdev->epochs,
+ write_ordering_chars[mdev->write_ordering]
+ );
+ seq_printf(seq, " oos:%lu\n",
+ Bit2KB(drbd_bm_total_weight(mdev)));
+ }
+ if (mdev->state.conn == SyncSource ||
+ mdev->state.conn == SyncTarget)
+ drbd_syncer_progress(mdev, seq);
+
+ if (mdev->state.conn == VerifyS || mdev->state.conn == VerifyT)
+ seq_printf(seq, "\t%3d%% %lu/%lu\n",
+ (int)((mdev->rs_total-mdev->ov_left) /
+ (mdev->rs_total/100+1)),
+ mdev->rs_total - mdev->ov_left,
+ mdev->rs_total);
+
+#ifdef ENABLE_DYNAMIC_TRACE
+ if (proc_details >= 1 && inc_local_if_state(mdev, Failed)) {
+ lc_printf_stats(seq, mdev->resync);
+ lc_printf_stats(seq, mdev->act_log);
+ dec_local(mdev);
+ }
+
+ if (proc_details >= 2) {
+ if (mdev->resync) {
+ lc_dump(mdev->resync, seq, "rs_left",
+ resync_dump_detail);
+ }
+ }
+#endif
+ }
+
+ return 0;
+}
+
+STATIC int drbd_proc_open(struct inode *inode, struct file *file)
+{
+ return single_open(file, drbd_seq_show, PDE(inode)->data);
+}
+
+/* PROC FS stuff end */
Our generic worker thread. It does the actual sending of data via the
network link and handles the after-state-change activities that have to
be done without holding the req_lock spinlock, among other things.
Signed-off-by: Philipp Reisner <[email protected]>
Signed-off-by: Lars Ellenberg <[email protected]>
---
diff -uNrp linux-2.6.29/drivers/block/drbd/drbd_worker.c linux-2.6.29-drbd/drivers/block/drbd/drbd_worker.c
--- linux-2.6.29/drivers/block/drbd/drbd_worker.c 1970-01-01 01:00:00.000000000 +0100
+++ linux-2.6.29-drbd/drivers/block/drbd/drbd_worker.c 2009-03-30 16:51:48.331706000 +0200
@@ -0,0 +1,1463 @@
+/*
+ drbd_worker.c
+
+ This file is part of DRBD by Philipp Reisner and Lars Ellenberg.
+
+ Copyright (C) 2001-2008, LINBIT Information Technologies GmbH.
+ Copyright (C) 1999-2008, Philipp Reisner <[email protected]>.
+ Copyright (C) 2002-2008, Lars Ellenberg <[email protected]>.
+
+ drbd is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2, or (at your option)
+ any later version.
+
+ drbd is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with drbd; see the file COPYING. If not, write to
+ the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA.
+
+ */
+
+#include <linux/autoconf.h>
+#include <linux/module.h>
+#include <linux/version.h>
+
+#include <linux/sched.h>
+#include <linux/smp_lock.h>
+#include <linux/wait.h>
+#include <linux/mm.h>
+#include <linux/drbd_config.h>
+#include <linux/memcontrol.h>
+#include <linux/mm_inline.h>
+#include <linux/slab.h>
+#include <linux/random.h>
+#include <linux/mm.h>
+#include <linux/string.h>
+#include <linux/scatterlist.h>
+
+#include <linux/drbd.h>
+#include "drbd_int.h"
+#include "drbd_req.h"
+
+#define SLEEP_TIME (HZ/10)
+
+STATIC int w_make_ov_request(struct drbd_conf *mdev, struct drbd_work *w, int cancel);
+
+
+
+/* defined here:
+ drbd_md_io_complete
+ drbd_endio_write_sec
+ drbd_endio_read_sec
+ drbd_endio_pri
+
+ * more endio handlers:
+ atodb_endio in drbd_actlog.c
+ drbd_bm_async_io_complete in drbd_bitmap.c
+
+ * For all these callbacks, note the following:
+ * The callbacks will be called in irq context by the IDE drivers,
+ * and in Softirqs/Tasklets/BH context by the SCSI drivers.
+ * Try to get the locking right :)
+ *
+ */
+
+
+/* About the global_state_lock
+   Each state transition on a device holds a read lock. In case we have
+ to evaluate the sync after dependencies, we grab a write lock, because
+ we need stable states on all devices for that. */
+rwlock_t global_state_lock;
+
+/* used for synchronous meta data and bitmap IO
+ * submitted by drbd_md_sync_page_io()
+ */
+void drbd_md_io_complete(struct bio *bio, int error)
+{
+ struct drbd_md_io *md_io;
+
+ /* error parameter ignored:
+ * drbd_md_sync_page_io explicitly tests bio_uptodate(bio); */
+
+ md_io = (struct drbd_md_io *)bio->bi_private;
+
+ md_io->error = error;
+
+ dump_internal_bio("Md", md_io->mdev, bio, 1);
+
+ complete(&md_io->event);
+}
+
+/* reads on behalf of the partner,
+ * "submitted" by the receiver
+ */
+void drbd_endio_read_sec(struct bio *bio, int error) __releases(local)
+{
+ unsigned long flags = 0;
+ struct Tl_epoch_entry *e = NULL;
+ struct drbd_conf *mdev;
+ int uptodate = bio_flagged(bio, BIO_UPTODATE);
+
+ e = bio->bi_private;
+ mdev = e->mdev;
+
+ if (!error && !uptodate) {
+ /* strange behaviour of some lower level drivers...
+ * fail the request by clearing the uptodate flag,
+ * but do not return any error?!
+ * do we want to drbd_WARN() on this? */
+ error = -EIO;
+ }
+
+ D_ASSERT(e->block_id != ID_VACANT);
+
+ dump_internal_bio("Sec", mdev, bio, 1);
+
+ spin_lock_irqsave(&mdev->req_lock, flags);
+ mdev->read_cnt += e->size >> 9;
+ list_del(&e->w.list);
+ if (list_empty(&mdev->read_ee))
+ wake_up(&mdev->ee_wait);
+ spin_unlock_irqrestore(&mdev->req_lock, flags);
+
+ drbd_chk_io_error(mdev, error, FALSE);
+ drbd_queue_work(&mdev->data.work, &e->w);
+ dec_local(mdev);
+
+ MTRACE(TraceTypeEE, TraceLvlAll,
+ INFO("Moved EE (READ) to worker sec=%llus size=%u ee=%p\n",
+ (unsigned long long)e->sector, e->size, e);
+ );
+}
+
+/* writes on behalf of the partner, or resync writes,
+ * "submitted" by the receiver.
+ */
+void drbd_endio_write_sec(struct bio *bio, int error) __releases(local)
+{
+ unsigned long flags = 0;
+ struct Tl_epoch_entry *e = NULL;
+ struct drbd_conf *mdev;
+ sector_t e_sector;
+ int do_wake;
+ int is_syncer_req;
+ int do_al_complete_io;
+ int uptodate = bio_flagged(bio, BIO_UPTODATE);
+
+ e = bio->bi_private;
+ mdev = e->mdev;
+
+ if (!error && !uptodate) {
+ /* strange behaviour of some lower level drivers...
+ * fail the request by clearing the uptodate flag,
+ * but do not return any error?!
+ * do we want to drbd_WARN() on this? */
+ error = -EIO;
+ }
+
+ /* error == -ENOTSUPP would be a better test,
+ * alas it is not reliable */
+ if (error && e->flags & EE_IS_BARRIER) {
+ drbd_bump_write_ordering(mdev, WO_bdev_flush);
+ spin_lock_irqsave(&mdev->req_lock, flags);
+ list_del(&e->w.list);
+ e->w.cb = w_e_reissue;
+ __release(local); /* Actually happens in w_e_reissue. */
+ spin_unlock_irqrestore(&mdev->req_lock, flags);
+ drbd_queue_work(&mdev->data.work, &e->w);
+ return;
+ }
+
+ D_ASSERT(e->block_id != ID_VACANT);
+
+ dump_internal_bio("Sec", mdev, bio, 1);
+
+ spin_lock_irqsave(&mdev->req_lock, flags);
+ mdev->writ_cnt += e->size >> 9;
+ is_syncer_req = is_syncer_block_id(e->block_id);
+
+ /* after we moved e to done_ee,
+ * we may no longer access it,
+ * it may be freed/reused already!
+ * (as soon as we release the req_lock) */
+ e_sector = e->sector;
+ do_al_complete_io = e->flags & EE_CALL_AL_COMPLETE_IO;
+
+ list_del(&e->w.list); /* has been on active_ee or sync_ee */
+ list_add_tail(&e->w.list, &mdev->done_ee);
+
+ MTRACE(TraceTypeEE, TraceLvlAll,
+ INFO("Moved EE (WRITE) to done_ee sec=%llus size=%u ee=%p\n",
+ (unsigned long long)e->sector, e->size, e);
+ );
+
+ /* No hlist_del_init(&e->colision) here, we did not send the Ack yet,
+ * neither did we wake possibly waiting conflicting requests.
+ * done from "drbd_process_done_ee" within the appropriate w.cb
+ * (e_end_block/e_end_resync_block) or from _drbd_clear_done_ee */
+
+ do_wake = is_syncer_req
+ ? list_empty(&mdev->sync_ee)
+ : list_empty(&mdev->active_ee);
+
+ if (error)
+ __drbd_chk_io_error(mdev, FALSE);
+ spin_unlock_irqrestore(&mdev->req_lock, flags);
+
+ if (is_syncer_req)
+ drbd_rs_complete_io(mdev, e_sector);
+
+ if (do_wake)
+ wake_up(&mdev->ee_wait);
+
+ if (do_al_complete_io)
+ drbd_al_complete_io(mdev, e_sector);
+
+ wake_asender(mdev);
+ dec_local(mdev);
+
+}
+
+/* read, readA or write requests on Primary coming from drbd_make_request
+ */
+void drbd_endio_pri(struct bio *bio, int error)
+{
+ unsigned long flags;
+ struct drbd_request *req = bio->bi_private;
+ struct drbd_conf *mdev = req->mdev;
+ enum drbd_req_event what;
+ int uptodate = bio_flagged(bio, BIO_UPTODATE);
+
+ if (!error && !uptodate) {
+ /* strange behaviour of some lower level drivers...
+ * fail the request by clearing the uptodate flag,
+ * but do not return any error?!
+ * do we want to drbd_WARN() on this? */
+ error = -EIO;
+ }
+
+ dump_internal_bio("Pri", mdev, bio, 1);
+
+ /* to avoid recursion in _req_mod */
+ what = error
+ ? (bio_data_dir(bio) == WRITE)
+ ? write_completed_with_error
+ : read_completed_with_error
+ : completed_ok;
+ spin_lock_irqsave(&mdev->req_lock, flags);
+ _req_mod(req, what, error);
+ spin_unlock_irqrestore(&mdev->req_lock, flags);
+}
+
+int w_io_error(struct drbd_conf *mdev, struct drbd_work *w, int cancel)
+{
+ struct drbd_request *req = (struct drbd_request *)w;
+ int ok;
+
+ /* NOTE: mdev->bc can be NULL by the time we get here! */
+ /* D_ASSERT(mdev->bc->dc.on_io_error != PassOn); */
+
+ /* the only way this callback is scheduled is from _req_may_be_done,
+ * when it is done and had a local write error, see comments there */
+ drbd_req_free(req);
+
+ ok = drbd_io_error(mdev, FALSE);
+ if (unlikely(!ok))
+ ERR("Sending in w_io_error() failed\n");
+ return ok;
+}
+
+int w_read_retry_remote(struct drbd_conf *mdev, struct drbd_work *w, int cancel)
+{
+ struct drbd_request *req = (struct drbd_request *)w;
+
+ /* We should not detach for read io-error,
+ * but try to WRITE the DataReply to the failed location,
+ * to give the disk the chance to relocate that block */
+ drbd_io_error(mdev, FALSE); /* tries to schedule a detach and notifies peer */
+
+ spin_lock_irq(&mdev->req_lock);
+ if (cancel ||
+ mdev->state.conn < Connected ||
+ mdev->state.pdsk <= Inconsistent) {
+ _req_mod(req, send_canceled, 0);
+ spin_unlock_irq(&mdev->req_lock);
+ ALERT("WE ARE LOST. Local IO failure, no peer.\n");
+ return 1;
+ }
+ spin_unlock_irq(&mdev->req_lock);
+
+ return w_send_read_req(mdev, w, 0);
+}
+
+int w_resync_inactive(struct drbd_conf *mdev, struct drbd_work *w, int cancel)
+{
+ ERR_IF(cancel) return 1;
+ ERR("resync inactive, but callback triggered??\n");
+ return 1; /* Simply ignore this! */
+}
+
+STATIC void drbd_csum(struct drbd_conf *mdev, struct crypto_hash *tfm, struct bio *bio, void *digest)
+{
+ struct hash_desc desc;
+ struct scatterlist sg;
+ struct bio_vec *bvec;
+ int i;
+
+ desc.tfm = tfm;
+ desc.flags = 0;
+
+ sg_init_table(&sg, 1);
+ crypto_hash_init(&desc);
+
+ __bio_for_each_segment(bvec, bio, i, 0) {
+ sg_set_page(&sg, bvec->bv_page, bvec->bv_len, bvec->bv_offset);
+ crypto_hash_update(&desc, &sg, sg.length);
+ }
+ crypto_hash_final(&desc, digest);
+}
+
+STATIC int w_e_send_csum(struct drbd_conf *mdev, struct drbd_work *w, int cancel)
+{
+ struct Tl_epoch_entry *e = (struct Tl_epoch_entry *)w;
+ int digest_size;
+ void *digest;
+ int ok;
+
+ D_ASSERT(e->block_id == DRBD_MAGIC + 0xbeef);
+
+ if (unlikely(cancel)) {
+ drbd_free_ee(mdev, e);
+ return 1;
+ }
+
+ if (likely(drbd_bio_uptodate(e->private_bio))) {
+ digest_size = crypto_hash_digestsize(mdev->csums_tfm);
+ digest = kmalloc(digest_size, GFP_KERNEL);
+ if (digest) {
+ drbd_csum(mdev, mdev->csums_tfm, e->private_bio, digest);
+
+ inc_rs_pending(mdev);
+ ok = drbd_send_drequest_csum(mdev,
+ e->sector,
+ e->size,
+ digest,
+ digest_size,
+ CsumRSRequest);
+ kfree(digest);
+ } else {
+ ERR("kmalloc() of digest failed.\n");
+ ok = 0;
+ }
+ } else {
+ drbd_io_error(mdev, FALSE);
+ ok = 1;
+ }
+
+ drbd_free_ee(mdev, e);
+
+ if (unlikely(!ok))
+ ERR("drbd_send_drequest(..., csum) failed\n");
+ return ok;
+}
+
+#define GFP_TRY (__GFP_HIGHMEM | __GFP_NOWARN)
+
+STATIC int read_for_csum(struct drbd_conf *mdev, sector_t sector, int size)
+{
+ struct Tl_epoch_entry *e;
+
+ if (!inc_local(mdev))
+ return 0;
+
+ if (FAULT_ACTIVE(mdev, DRBD_FAULT_AL_EE))
+ return 2;
+
+ e = drbd_alloc_ee(mdev, DRBD_MAGIC+0xbeef, sector, size, GFP_TRY);
+ if (!e) {
+ dec_local(mdev);
+ return 2;
+ }
+
+ spin_lock_irq(&mdev->req_lock);
+ list_add(&e->w.list, &mdev->read_ee);
+ spin_unlock_irq(&mdev->req_lock);
+
+ e->private_bio->bi_end_io = drbd_endio_read_sec;
+ e->private_bio->bi_rw = READ;
+ e->w.cb = w_e_send_csum;
+
+ mdev->read_cnt += size >> 9;
+ drbd_generic_make_request(mdev, DRBD_FAULT_RS_RD, e->private_bio);
+
+ return 1;
+}
+
+void resync_timer_fn(unsigned long data)
+{
+ unsigned long flags;
+ struct drbd_conf *mdev = (struct drbd_conf *) data;
+ int queue;
+
+ spin_lock_irqsave(&mdev->req_lock, flags);
+
+ if (likely(!test_and_clear_bit(STOP_SYNC_TIMER, &mdev->flags))) {
+ queue = 1;
+ if (mdev->state.conn == VerifyS)
+ mdev->resync_work.cb = w_make_ov_request;
+ else
+ mdev->resync_work.cb = w_make_resync_request;
+ } else {
+ queue = 0;
+ mdev->resync_work.cb = w_resync_inactive;
+ }
+
+ spin_unlock_irqrestore(&mdev->req_lock, flags);
+
+ /* harmless race: list_empty outside data.work.q_lock */
+ if (list_empty(&mdev->resync_work.list) && queue)
+ drbd_queue_work(&mdev->data.work, &mdev->resync_work);
+}
+
+int w_make_resync_request(struct drbd_conf *mdev,
+ struct drbd_work *w, int cancel)
+{
+ unsigned long bit;
+ sector_t sector;
+ const sector_t capacity = drbd_get_capacity(mdev->this_bdev);
+ int max_segment_size = mdev->rq_queue->max_segment_size;
+ int number, i, size;
+ int align;
+
+ if (unlikely(cancel))
+ return 1;
+
+ if (unlikely(mdev->state.conn < Connected)) {
+ ERR("Confused in w_make_resync_request()! cstate < Connected");
+ return 0;
+ }
+
+ if (mdev->state.conn != SyncTarget)
+ ERR("%s in w_make_resync_request\n",
+ conns_to_name(mdev->state.conn));
+
+ if (!inc_local(mdev)) {
+		/* Since we only need to access mdev->resync, an
+		   inc_local_if_state(mdev, Failed) would be sufficient, but
+		   continuing the resync with a broken disk makes no sense at
+		   all */
+ ERR("Disk broke down during resync!\n");
+ mdev->resync_work.cb = w_resync_inactive;
+ return 1;
+ }
+	/* All goto requeue jumps have to happen after this block: inc_local() */
+
+ number = SLEEP_TIME*mdev->sync_conf.rate / ((BM_BLOCK_SIZE/1024)*HZ);
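+	/* e.g., assuming 4k BM_BLOCK_SIZE and SLEEP_TIME = HZ/10, this is
+	 * rate/40 requests per wakeup: sync_conf.rate = 10000 KB/s yields
+	 * 250 requests of 4k each per 100ms, i.e. 10000 KB/s. */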
+
+ if (atomic_read(&mdev->rs_pending_cnt) > number)
+ goto requeue;
+ number -= atomic_read(&mdev->rs_pending_cnt);
+
+ for (i = 0; i < number; i++) {
+next_sector:
+ size = BM_BLOCK_SIZE;
+ bit = drbd_bm_find_next(mdev, mdev->bm_resync_fo);
+
+ if (bit == -1UL) {
+ mdev->bm_resync_fo = drbd_bm_bits(mdev);
+ mdev->resync_work.cb = w_resync_inactive;
+ dec_local(mdev);
+ return 1;
+ }
+
+ sector = BM_BIT_TO_SECT(bit);
+
+ if (drbd_try_rs_begin_io(mdev, sector)) {
+ mdev->bm_resync_fo = bit;
+ goto requeue;
+ }
+ mdev->bm_resync_fo = bit + 1;
+
+ if (unlikely(drbd_bm_test_bit(mdev, bit) == 0)) {
+ drbd_rs_complete_io(mdev, sector);
+ goto next_sector;
+ }
+
+#if DRBD_MAX_SEGMENT_SIZE > BM_BLOCK_SIZE
+ /* try to find some adjacent bits.
+ * we stop if we have already the maximum req size.
+ *
+		 * Additionally always align bigger requests, in order to
+ * be prepared for all stripe sizes of software RAIDs.
+ *
+		 * we _do_ care about the agreed-upon q->max_segment_size
+ * here, as splitting up the requests on the other side is more
+ * difficult. the consequence is, that on lvm and md and other
+ * "indirect" devices, this is dead code, since
+ * q->max_segment_size will be PAGE_SIZE.
+ */
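+		/* illustrative: with 4k BM_BLOCK_SIZE and a 32k
+		 * max_segment_size, a run of dirty bits starting at a
+		 * 32k-aligned sector merges into a single 32k (8 bit)
+		 * request, provided it stays within one extent. */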
+ align = 1;
+ for (;;) {
+ if (size + BM_BLOCK_SIZE > max_segment_size)
+ break;
+
+ /* Be always aligned */
+ if (sector & ((1<<(align+3))-1))
+ break;
+
+ /* do not cross extent boundaries */
+ if (((bit+1) & BM_BLOCKS_PER_BM_EXT_MASK) == 0)
+ break;
+ /* now, is it actually dirty, after all?
+ * caution, drbd_bm_test_bit is tri-state for some
+ * obscure reason; ( b == 0 ) would get the out-of-band
+ * only accidentally right because of the "oddly sized"
+ * adjustment below */
+ if (drbd_bm_test_bit(mdev, bit+1) != 1)
+ break;
+ bit++;
+ size += BM_BLOCK_SIZE;
+ if ((BM_BLOCK_SIZE << align) <= size)
+ align++;
+ i++;
+ }
+ /* if we merged some,
+ * reset the offset to start the next drbd_bm_find_next from */
+ if (size > BM_BLOCK_SIZE)
+ mdev->bm_resync_fo = bit + 1;
+#endif
+
+ /* adjust very last sectors, in case we are oddly sized */
+ if (sector + (size>>9) > capacity)
+ size = (capacity-sector)<<9;
+ if (mdev->agreed_pro_version >= 89 && mdev->csums_tfm) {
+ switch (read_for_csum(mdev, sector, size)) {
+ case 0: /* Disk failure*/
+ dec_local(mdev);
+ return 0;
+ case 2: /* Allocation failed */
+ drbd_rs_complete_io(mdev, sector);
+ mdev->bm_resync_fo = BM_SECT_TO_BIT(sector);
+ goto requeue;
+ /* case 1: everything ok */
+ }
+ } else {
+ inc_rs_pending(mdev);
+ if (!drbd_send_drequest(mdev, RSDataRequest,
+ sector, size, ID_SYNCER)) {
+ ERR("drbd_send_drequest() failed, aborting...\n");
+ dec_rs_pending(mdev);
+ dec_local(mdev);
+ return 0;
+ }
+ }
+ }
+
+ if (mdev->bm_resync_fo >= drbd_bm_bits(mdev)) {
+ /* last syncer _request_ was sent,
+ * but the RSDataReply not yet received. sync will end (and
+ * next sync group will resume), as soon as we receive the last
+ * resync data block, and the last bit is cleared.
+ * until then resync "work" is "inactive" ...
+ */
+ mdev->resync_work.cb = w_resync_inactive;
+ dec_local(mdev);
+ return 1;
+ }
+
+ requeue:
+ mod_timer(&mdev->resync_timer, jiffies + SLEEP_TIME);
+ dec_local(mdev);
+ return 1;
+}
+
+int w_make_ov_request(struct drbd_conf *mdev, struct drbd_work *w, int cancel)
+{
+ int number, i, size;
+ sector_t sector;
+ const sector_t capacity = drbd_get_capacity(mdev->this_bdev);
+
+ if (unlikely(cancel))
+ return 1;
+
+ if (unlikely(mdev->state.conn < Connected)) {
+ ERR("Confused in w_make_ov_request()! cstate < Connected");
+ return 0;
+ }
+
+ number = SLEEP_TIME*mdev->sync_conf.rate / ((BM_BLOCK_SIZE/1024)*HZ);
+ if (atomic_read(&mdev->rs_pending_cnt) > number)
+ goto requeue;
+
+ number -= atomic_read(&mdev->rs_pending_cnt);
+
+ sector = mdev->ov_position;
+ for (i = 0; i < number; i++) {
+ size = BM_BLOCK_SIZE;
+
+ if (drbd_try_rs_begin_io(mdev, sector)) {
+ mdev->ov_position = sector;
+ goto requeue;
+ }
+
+ if (sector + (size>>9) > capacity)
+ size = (capacity-sector)<<9;
+
+ inc_rs_pending(mdev);
+ if (!drbd_send_ov_request(mdev, sector, size)) {
+ dec_rs_pending(mdev);
+ return 0;
+ }
+ sector += BM_SECT_PER_BIT;
+ if (sector >= capacity) {
+ mdev->resync_work.cb = w_resync_inactive;
+
+ return 1;
+ }
+ }
+ mdev->ov_position = sector;
+
+ requeue:
+ mod_timer(&mdev->resync_timer, jiffies + SLEEP_TIME);
+ return 1;
+}
+
+
+int w_ov_finished(struct drbd_conf *mdev, struct drbd_work *w, int cancel)
+{
+ kfree(w);
+ ov_oos_print(mdev);
+ drbd_resync_finished(mdev);
+
+ return 1;
+}
+
+STATIC int w_resync_finished(struct drbd_conf *mdev, struct drbd_work *w, int cancel)
+{
+ kfree(w);
+
+ drbd_resync_finished(mdev);
+
+ return 1;
+}
+
+int drbd_resync_finished(struct drbd_conf *mdev)
+{
+ unsigned long db, dt, dbdt;
+ unsigned long n_oos;
+ union drbd_state_t os, ns;
+ struct drbd_work *w;
+ char *khelper_cmd = NULL;
+
+ /* Remove all elements from the resync LRU. Since future actions
+ * might set bits in the (main) bitmap, then the entries in the
+ * resync LRU would be wrong. */
+ if (drbd_rs_del_all(mdev)) {
+		/* In case this is not possible now, most probably because
+		 * there are RSDataReply packets lingering on the worker's
+		 * queue (or even the read operations for those packets
+		 * are not finished by now). Retry in 100ms. */
+
+ drbd_kick_lo(mdev);
+ __set_current_state(TASK_INTERRUPTIBLE);
+ schedule_timeout(HZ / 10);
+ w = kmalloc(sizeof(struct drbd_work), GFP_ATOMIC);
+ if (w) {
+ w->cb = w_resync_finished;
+ drbd_queue_work(&mdev->data.work, w);
+ return 1;
+ }
+		ERR("Warning: failed to drbd_rs_del_all() and to kmalloc(w).\n");
+ }
+
+ dt = (jiffies - mdev->rs_start - mdev->rs_paused) / HZ;
+ if (dt <= 0)
+ dt = 1;
+ db = mdev->rs_total;
+ dbdt = Bit2KB(db/dt);
+ mdev->rs_paused /= HZ;
+
+ if (!inc_local(mdev))
+ goto out;
+
+ spin_lock_irq(&mdev->req_lock);
+ os = mdev->state;
+
+ /* This protects us against multiple calls (that can happen in the presence
+ of application IO), and against connectivity loss just before we arrive here. */
+ if (os.conn <= Connected)
+ goto out_unlock;
+
+ ns = os;
+ ns.conn = Connected;
+
+ INFO("%s done (total %lu sec; paused %lu sec; %lu K/sec)\n",
+ (os.conn == VerifyS || os.conn == VerifyT) ?
+ "Online verify " : "Resync",
+ dt + mdev->rs_paused, mdev->rs_paused, dbdt);
+
+ n_oos = drbd_bm_total_weight(mdev);
+
+ if (os.conn == VerifyS || os.conn == VerifyT) {
+ if (n_oos) {
+ ALERT("Online verify found %lu %dk block out of sync!\n",
+ n_oos, Bit2KB(1));
+ khelper_cmd = "out-of-sync";
+ }
+ } else {
+ D_ASSERT((n_oos - mdev->rs_failed) == 0);
+
+ if (os.conn == SyncTarget || os.conn == PausedSyncT)
+ khelper_cmd = "after-resync-target";
+
+ if (mdev->csums_tfm && mdev->rs_total) {
+ const unsigned long s = mdev->rs_same_csum;
+ const unsigned long t = mdev->rs_total;
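+ /* compute s/t in percent without 32bit overflow:
+ * multiply first only while t is still small */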
+ const int ratio =
+ (t == 0) ? 0 :
+ (t < 100000) ? ((s*100)/t) : (s/(t/100));
+ INFO("%u %% had equal check sums, eliminated: %luK; "
+ "transferred %luK total %luK\n",
+ ratio,
+ Bit2KB(mdev->rs_same_csum),
+ Bit2KB(mdev->rs_total - mdev->rs_same_csum),
+ Bit2KB(mdev->rs_total));
+ }
+ }
+
+ if (mdev->rs_failed) {
+ INFO(" %lu failed blocks\n", mdev->rs_failed);
+
+ if (os.conn == SyncTarget || os.conn == PausedSyncT) {
+ ns.disk = Inconsistent;
+ ns.pdsk = UpToDate;
+ } else {
+ ns.disk = UpToDate;
+ ns.pdsk = Inconsistent;
+ }
+ } else {
+ ns.disk = UpToDate;
+ ns.pdsk = UpToDate;
+
+ if (os.conn == SyncTarget || os.conn == PausedSyncT) {
+ if (mdev->p_uuid) {
+ int i;
+ for (i = Bitmap ; i <= History_end ; i++)
+ _drbd_uuid_set(mdev, i, mdev->p_uuid[i]);
+ drbd_uuid_set(mdev, Bitmap, mdev->bc->md.uuid[Current]);
+ _drbd_uuid_set(mdev, Current, mdev->p_uuid[Current]);
+ } else {
+ ERR("mdev->p_uuid is NULL! BUG\n");
+ }
+ }
+
+ drbd_uuid_set_bm(mdev, 0UL);
+
+ if (mdev->p_uuid) {
+ /* Now the two UUID sets are equal, update what we
+ * know of the peer. */
+ int i;
+ for (i = Current ; i <= History_end ; i++)
+ mdev->p_uuid[i] = mdev->bc->md.uuid[i];
+ }
+ }
+
+ _drbd_set_state(mdev, ns, ChgStateVerbose, NULL);
+out_unlock:
+ spin_unlock_irq(&mdev->req_lock);
+ dec_local(mdev);
+out:
+ mdev->rs_total = 0;
+ mdev->rs_failed = 0;
+ mdev->rs_paused = 0;
+
+ if (test_and_clear_bit(WRITE_BM_AFTER_RESYNC, &mdev->flags)) {
+ drbd_WARN("Writing the whole bitmap, due to failed kmalloc\n");
+ drbd_queue_bitmap_io(mdev, &drbd_bm_write, NULL, "write from resync_finished");
+ }
+
+ drbd_bm_recount_bits(mdev);
+
+ if (khelper_cmd)
+ drbd_khelper(mdev, khelper_cmd);
+
+ return 1;
+}
+
+/**
+ * w_e_end_data_req: Send the answer (DataReply) in response to a DataRequest.
+ */
+int w_e_end_data_req(struct drbd_conf *mdev, struct drbd_work *w, int cancel)
+{
+ struct Tl_epoch_entry *e = (struct Tl_epoch_entry *)w;
+ int ok;
+
+ if (unlikely(cancel)) {
+ drbd_free_ee(mdev, e);
+ dec_unacked(mdev);
+ return 1;
+ }
+
+ if (likely(drbd_bio_uptodate(e->private_bio))) {
+ ok = drbd_send_block(mdev, DataReply, e);
+ } else {
+ if (__ratelimit(&drbd_ratelimit_state))
+ ERR("Sending NegDReply. sector=%llus.\n",
+ (unsigned long long)e->sector);
+
+ ok = drbd_send_ack(mdev, NegDReply, e);
+
+ drbd_io_error(mdev, FALSE);
+ }
+
+ dec_unacked(mdev);
+
+ spin_lock_irq(&mdev->req_lock);
+ if (drbd_bio_has_active_page(e->private_bio)) {
+ /* This might happen if sendpage() has not finished */
+ list_add_tail(&e->w.list, &mdev->net_ee);
+ } else {
+ drbd_free_ee(mdev, e);
+ }
+ spin_unlock_irq(&mdev->req_lock);
+
+ if (unlikely(!ok))
+ ERR("drbd_send_block() failed\n");
+ return ok;
+}
+
+/**
+ * w_e_end_rsdata_req: Send the answer (RSDataReply) to an RSDataRequest.
+ */
+int w_e_end_rsdata_req(struct drbd_conf *mdev, struct drbd_work *w, int cancel)
+{
+ struct Tl_epoch_entry *e = (struct Tl_epoch_entry *)w;
+ int ok;
+
+ if (unlikely(cancel)) {
+ drbd_free_ee(mdev, e);
+ dec_unacked(mdev);
+ return 1;
+ }
+
+ if (inc_local_if_state(mdev, Failed)) {
+ drbd_rs_complete_io(mdev, e->sector);
+ dec_local(mdev);
+ }
+
+ if (likely(drbd_bio_uptodate(e->private_bio))) {
+ if (likely(mdev->state.pdsk >= Inconsistent)) {
+ inc_rs_pending(mdev);
+ ok = drbd_send_block(mdev, RSDataReply, e);
+ } else {
+ if (__ratelimit(&drbd_ratelimit_state))
+ ERR("Not sending RSDataReply, "
+ "partner DISKLESS!\n");
+ ok = 1;
+ }
+ } else {
+ if (__ratelimit(&drbd_ratelimit_state))
+ ERR("Sending NegRSDReply. sector %llus.\n",
+ (unsigned long long)e->sector);
+
+ ok = drbd_send_ack(mdev, NegRSDReply, e);
+
+ drbd_io_error(mdev, FALSE);
+
+ /* update resync data with failure */
+ drbd_rs_failed_io(mdev, e->sector, e->size);
+ }
+
+ dec_unacked(mdev);
+
+ spin_lock_irq(&mdev->req_lock);
+ if (drbd_bio_has_active_page(e->private_bio)) {
+ /* This might happen if sendpage() has not finished */
+ list_add_tail(&e->w.list, &mdev->net_ee);
+ } else {
+ drbd_free_ee(mdev, e);
+ }
+ spin_unlock_irq(&mdev->req_lock);
+
+ if (unlikely(!ok))
+ ERR("drbd_send_block() failed\n");
+ return ok;
+}
+
+int w_e_end_csum_rs_req(struct drbd_conf *mdev, struct drbd_work *w, int cancel)
+{
+ struct Tl_epoch_entry *e = (struct Tl_epoch_entry *)w;
+ struct digest_info *di;
+ int digest_size;
+ void *digest = NULL;
+ int ok, eq = 0;
+
+ if (unlikely(cancel)) {
+ drbd_free_ee(mdev, e);
+ dec_unacked(mdev);
+ return 1;
+ }
+
+ drbd_rs_complete_io(mdev, e->sector);
+
+ di = (struct digest_info *)(unsigned long)e->block_id;
+
+ if (likely(drbd_bio_uptodate(e->private_bio))) {
+ /* quick hack to try to avoid a race against reconfiguration.
+ * a real fix would be much more involved,
+ * introducing more locking mechanisms */
+ if (mdev->csums_tfm) {
+ digest_size = crypto_hash_digestsize(mdev->csums_tfm);
+ D_ASSERT(digest_size == di->digest_size);
+ digest = kmalloc(digest_size, GFP_KERNEL);
+ }
+ if (digest) {
+ drbd_csum(mdev, mdev->csums_tfm, e->private_bio, digest);
+ eq = !memcmp(digest, di->digest, digest_size);
+ kfree(digest);
+ }
+
+ if (eq) {
+ drbd_set_in_sync(mdev, e->sector, e->size);
+ mdev->rs_same_csum++;
+ ok = drbd_send_ack(mdev, RSIsInSync, e);
+ } else {
+ inc_rs_pending(mdev);
+ e->block_id = ID_SYNCER;
+ ok = drbd_send_block(mdev, RSDataReply, e);
+ }
+ } else {
+ ok = drbd_send_ack(mdev, NegRSDReply, e);
+ if (__ratelimit(&drbd_ratelimit_state))
+ ERR("Sending NegDReply. I guess it gets messy.\n");
+ drbd_io_error(mdev, FALSE);
+ }
+
+ dec_unacked(mdev);
+
+ kfree(di);
+
+ spin_lock_irq(&mdev->req_lock);
+ if (drbd_bio_has_active_page(e->private_bio)) {
+ /* This might happen if sendpage() has not finished */
+ list_add_tail(&e->w.list, &mdev->net_ee);
+ } else {
+ drbd_free_ee(mdev, e);
+ }
+ spin_unlock_irq(&mdev->req_lock);
+
+ if (unlikely(!ok))
+ ERR("drbd_send_block/ack() failed\n");
+ return ok;
+}
+
+int w_e_end_ov_req(struct drbd_conf *mdev, struct drbd_work *w, int cancel)
+{
+ struct Tl_epoch_entry *e = (struct Tl_epoch_entry *)w;
+ int digest_size;
+ void *digest;
+ int ok = 1;
+
+ if (unlikely(cancel)) {
+ drbd_free_ee(mdev, e);
+ dec_unacked(mdev);
+ return 1;
+ }
+
+ if (likely(drbd_bio_uptodate(e->private_bio))) {
+ digest_size = crypto_hash_digestsize(mdev->verify_tfm);
+ digest = kmalloc(digest_size, GFP_KERNEL);
+ if (digest) {
+ drbd_csum(mdev, mdev->verify_tfm, e->private_bio, digest);
+ ok = drbd_send_drequest_csum(mdev, e->sector, e->size,
+ digest, digest_size, OVReply);
+ if (ok)
+ inc_rs_pending(mdev);
+ kfree(digest);
+ }
+ }
+
+ dec_unacked(mdev);
+
+ spin_lock_irq(&mdev->req_lock);
+ drbd_free_ee(mdev, e);
+ spin_unlock_irq(&mdev->req_lock);
+
+ return ok;
+}
+
+void drbd_ov_oos_found(struct drbd_conf *mdev, sector_t sector, int size)
+{
+ if (mdev->ov_last_oos_start + mdev->ov_last_oos_size == sector) {
+ mdev->ov_last_oos_size += size>>9;
+ } else {
+ mdev->ov_last_oos_start = sector;
+ mdev->ov_last_oos_size = size>>9;
+ }
+ drbd_set_out_of_sync(mdev, sector, size);
+ set_bit(WRITE_BM_AFTER_RESYNC, &mdev->flags);
+}
+
+int w_e_end_ov_reply(struct drbd_conf *mdev, struct drbd_work *w, int cancel)
+{
+ struct Tl_epoch_entry *e = (struct Tl_epoch_entry *)w;
+ struct digest_info *di;
+ int digest_size;
+ void *digest;
+ int ok, eq = 0;
+
+ if (unlikely(cancel)) {
+ drbd_free_ee(mdev, e);
+ dec_unacked(mdev);
+ return 1;
+ }
+
+ /* after "cancel", because after drbd_disconnect/drbd_rs_cancel_all
+ * the resync lru has been cleaned up already */
+ drbd_rs_complete_io(mdev, e->sector);
+
+ di = (struct digest_info *)(unsigned long)e->block_id;
+
+ if (likely(drbd_bio_uptodate(e->private_bio))) {
+ digest_size = crypto_hash_digestsize(mdev->verify_tfm);
+ digest = kmalloc(digest_size, GFP_KERNEL);
+ if (digest) {
+ drbd_csum(mdev, mdev->verify_tfm, e->private_bio, digest);
+
+ D_ASSERT(digest_size == di->digest_size);
+ eq = !memcmp(digest, di->digest, digest_size);
+ kfree(digest);
+ }
+ } else {
+ ok = drbd_send_ack(mdev, NegRSDReply, e);
+ if (__ratelimit(&drbd_ratelimit_state))
+ ERR("Sending NegDReply. I guess it gets messy.\n");
+ drbd_io_error(mdev, FALSE);
+ }
+
+ dec_unacked(mdev);
+
+ kfree(di);
+
+ if (!eq)
+ drbd_ov_oos_found(mdev, e->sector, e->size);
+ else
+ ov_oos_print(mdev);
+
+ ok = drbd_send_ack_ex(mdev, OVResult, e->sector, e->size,
+ eq ? ID_IN_SYNC : ID_OUT_OF_SYNC);
+
+ spin_lock_irq(&mdev->req_lock);
+ drbd_free_ee(mdev, e);
+ spin_unlock_irq(&mdev->req_lock);
+
+ if (--mdev->ov_left == 0) {
+ ov_oos_print(mdev);
+ drbd_resync_finished(mdev);
+ }
+
+ return ok;
+}
+
+int w_prev_work_done(struct drbd_conf *mdev, struct drbd_work *w, int cancel)
+{
+ clear_bit(WORK_PENDING, &mdev->flags);
+ wake_up(&mdev->misc_wait);
+ return 1;
+}
+
+int w_send_barrier(struct drbd_conf *mdev, struct drbd_work *w, int cancel)
+{
+ struct drbd_barrier *b = (struct drbd_barrier *)w;
+ struct Drbd_Barrier_Packet *p = &mdev->data.sbuf.Barrier;
+ int ok = 1;
+
+ /* really avoid racing with tl_clear. w.cb may have been referenced
+ * just before it was reassigned and requeued, so double check that.
+ * actually, this race was harmless, since we only try to send the
+ * barrier packet here, and otherwise do nothing with the object.
+ * but compare with the head of w_clear_epoch */
+ spin_lock_irq(&mdev->req_lock);
+ if (w->cb != w_send_barrier || mdev->state.conn < Connected)
+ cancel = 1;
+ spin_unlock_irq(&mdev->req_lock);
+ if (cancel)
+ return 1;
+
+ if (!drbd_get_data_sock(mdev))
+ return 0;
+ p->barrier = b->br_number;
+ /* inc_ap_pending was done where this was queued.
+ * dec_ap_pending will be done in got_BarrierAck
+ * or (on connection loss) in w_clear_epoch. */
+ ok = _drbd_send_cmd(mdev, mdev->data.socket, Barrier,
+ (struct Drbd_Header *)p, sizeof(*p), 0);
+ drbd_put_data_sock(mdev);
+
+ return ok;
+}
+
+int w_send_write_hint(struct drbd_conf *mdev, struct drbd_work *w, int cancel)
+{
+ if (cancel)
+ return 1;
+ return drbd_send_short_cmd(mdev, UnplugRemote);
+}
+
+/**
+ * w_send_dblock: Send a mirrored write request.
+ */
+int w_send_dblock(struct drbd_conf *mdev, struct drbd_work *w, int cancel)
+{
+ struct drbd_request *req = (struct drbd_request *)w;
+ int ok;
+
+ if (unlikely(cancel)) {
+ req_mod(req, send_canceled, 0);
+ return 1;
+ }
+
+ ok = drbd_send_dblock(mdev, req);
+ req_mod(req, ok ? handed_over_to_network : send_failed, 0);
+
+ return ok;
+}
+
+/**
+ * w_send_read_req: Send a read request.
+ */
+int w_send_read_req(struct drbd_conf *mdev, struct drbd_work *w, int cancel)
+{
+ struct drbd_request *req = (struct drbd_request *)w;
+ int ok;
+
+ if (unlikely(cancel)) {
+ req_mod(req, send_canceled, 0);
+ return 1;
+ }
+
+ ok = drbd_send_drequest(mdev, DataRequest, req->sector, req->size,
+ (unsigned long)req);
+
+ if (!ok) {
+ /* ?? we set Timeout or BrokenPipe in drbd_send();
+ * so this is probably redundant */
+ if (mdev->state.conn >= Connected)
+ drbd_force_state(mdev, NS(conn, NetworkFailure));
+ }
+ req_mod(req, ok ? handed_over_to_network : send_failed, 0);
+
+ return ok;
+}
+
+STATIC int _drbd_may_sync_now(struct drbd_conf *mdev)
+{
+ struct drbd_conf *odev = mdev;
+
+ while (1) {
+ if (odev->sync_conf.after == -1)
+ return 1;
+ odev = minor_to_mdev(odev->sync_conf.after);
+ ERR_IF(!odev) return 1;
+ if ((odev->state.conn >= SyncSource &&
+ odev->state.conn <= PausedSyncT) ||
+ odev->state.aftr_isp || odev->state.peer_isp ||
+ odev->state.user_isp)
+ return 0;
+ }
+}
+
+/**
+ * _drbd_pause_after:
+ * Finds all devices that may not resync now, and causes them to
+ * pause their resynchronisation.
+ * Called from process context only (admin command and after_state_ch).
+ */
+STATIC int _drbd_pause_after(struct drbd_conf *mdev)
+{
+ struct drbd_conf *odev;
+ int i, rv = 0;
+
+ for (i = 0; i < minor_count; i++) {
+ odev = minor_to_mdev(i);
+ if (!odev)
+ continue;
+ if (odev->state.conn == StandAlone && odev->state.disk == Diskless)
+ continue;
+ if (!_drbd_may_sync_now(odev))
+ rv |= (__drbd_set_state(_NS(odev, aftr_isp, 1), ChgStateHard, NULL)
+ != SS_NothingToDo);
+ }
+
+ return rv;
+}
+
+/**
+ * _drbd_resume_next:
+ * Finds all devices that can resume resynchronisation
+ * process, and causes them to resume.
+ * Called from process context only (admin command and worker).
+ */
+STATIC int _drbd_resume_next(struct drbd_conf *mdev)
+{
+ struct drbd_conf *odev;
+ int i, rv = 0;
+
+ for (i = 0; i < minor_count; i++) {
+ odev = minor_to_mdev(i);
+ if (!odev)
+ continue;
+ if (odev->state.aftr_isp) {
+ if (_drbd_may_sync_now(odev))
+ rv |= (__drbd_set_state(_NS(odev, aftr_isp, 0),
+ ChgStateHard, NULL)
+ != SS_NothingToDo);
+ }
+ }
+ return rv;
+}
+
+void resume_next_sg(struct drbd_conf *mdev)
+{
+ write_lock_irq(&global_state_lock);
+ _drbd_resume_next(mdev);
+ write_unlock_irq(&global_state_lock);
+}
+
+void suspend_other_sg(struct drbd_conf *mdev)
+{
+ write_lock_irq(&global_state_lock);
+ _drbd_pause_after(mdev);
+ write_unlock_irq(&global_state_lock);
+}
+
+void drbd_alter_sa(struct drbd_conf *mdev, int na)
+{
+ int changes;
+
+ write_lock_irq(&global_state_lock);
+ mdev->sync_conf.after = na;
+
+ do {
+ changes = _drbd_pause_after(mdev);
+ changes |= _drbd_resume_next(mdev);
+ } while (changes);
+
+ write_unlock_irq(&global_state_lock);
+}
+
+/**
+ * drbd_start_resync:
+ * @side: Either SyncSource or SyncTarget
+ * Start the resync process. Called from process context only,
+ * either admin command or drbd_receiver.
+ * Note, this function might bring you directly into one of the
+ * PausedSync* states.
+ */
+void drbd_start_resync(struct drbd_conf *mdev, enum drbd_conns side)
+{
+ union drbd_state_t ns;
+ int r;
+
+ MTRACE(TraceTypeResync, TraceLvlSummary,
+ INFO("Resync starting: side=%s\n",
+ side == SyncTarget ? "SyncTarget" : "SyncSource");
+ );
+
+ drbd_bm_recount_bits(mdev);
+
+ /* In case a previous resync run was aborted by an IO error... */
+ drbd_rs_cancel_all(mdev);
+
+ if (side == SyncTarget) {
+ /* Since application IO was locked out during WFBitMapT and
+ WFSyncUUID, we are still unmodified. Before going to SyncTarget,
+ which makes our data inconsistent, give the before-resync-target
+ handler a chance to veto. */
+ r = drbd_khelper(mdev, "before-resync-target");
+ r = (r >> 8) & 0xff;
+ if (r > 0) {
+ INFO("before-resync-target handler returned %d, "
+ "dropping connection.\n", r);
+ drbd_force_state(mdev, NS(conn, Disconnecting));
+ return;
+ }
+ }
+
+ drbd_state_lock(mdev);
+
+ if (!inc_local_if_state(mdev, Negotiating)) {
+ drbd_state_unlock(mdev);
+ return;
+ }
+
+ if (side == SyncTarget) {
+ mdev->bm_resync_fo = 0;
+ } else /* side == SyncSource */ {
+ u64 uuid;
+
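+ /* as SyncSource, generate a fresh sync UUID so both nodes
+ * can later identify this resync run */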
+ get_random_bytes(&uuid, sizeof(u64));
+ drbd_uuid_set(mdev, Bitmap, uuid);
+ drbd_send_sync_uuid(mdev, uuid);
+
+ D_ASSERT(mdev->state.disk == UpToDate);
+ }
+
+ write_lock_irq(&global_state_lock);
+ ns = mdev->state;
+
+ ns.aftr_isp = !_drbd_may_sync_now(mdev);
+
+ ns.conn = side;
+
+ if (side == SyncTarget)
+ ns.disk = Inconsistent;
+ else /* side == SyncSource */
+ ns.pdsk = Inconsistent;
+
+ r = __drbd_set_state(mdev, ns, ChgStateVerbose, NULL);
+ ns = mdev->state;
+
+ if (ns.conn < Connected)
+ r = SS_UnknownError;
+
+ if (r == SS_Success) {
+ mdev->rs_total =
+ mdev->rs_mark_left = drbd_bm_total_weight(mdev);
+ mdev->rs_failed = 0;
+ mdev->rs_paused = 0;
+ mdev->rs_start =
+ mdev->rs_mark_time = jiffies;
+ mdev->rs_same_csum = 0;
+ _drbd_pause_after(mdev);
+ }
+ write_unlock_irq(&global_state_lock);
+ drbd_state_unlock(mdev);
+ dec_local(mdev);
+
+ if (r == SS_Success) {
+ INFO("Began resync as %s (will sync %lu KB [%lu bits set]).\n",
+ conns_to_name(ns.conn),
+ (unsigned long) mdev->rs_total << (BM_BLOCK_SIZE_B-10),
+ (unsigned long) mdev->rs_total);
+
+ if (mdev->rs_total == 0) {
+ drbd_resync_finished(mdev);
+ return;
+ }
+
+ if (ns.conn == SyncTarget) {
+ D_ASSERT(!test_bit(STOP_SYNC_TIMER, &mdev->flags));
+ mod_timer(&mdev->resync_timer, jiffies);
+ }
+
+ drbd_md_sync(mdev);
+ }
+}
+
+int drbd_worker(struct Drbd_thread *thi)
+{
+ struct drbd_conf *mdev = thi->mdev;
+ struct drbd_work *w = NULL;
+ LIST_HEAD(work_list);
+ int intr = 0, i;
+
+ sprintf(current->comm, "drbd%d_worker", mdev_to_minor(mdev));
+
+ while (get_t_state(thi) == Running) {
+ drbd_thread_current_set_cpu(mdev);
+
+ if (down_trylock(&mdev->data.work.s)) {
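+ /* no work queued right now: uncork the data socket so queued
+ * packets are flushed while we sleep, and cork it again
+ * before we go on sending */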
+ mutex_lock(&mdev->data.mutex);
+ if (mdev->data.socket && !mdev->net_conf->no_cork)
+ drbd_tcp_uncork(mdev->data.socket);
+ mutex_unlock(&mdev->data.mutex);
+
+ intr = down_interruptible(&mdev->data.work.s);
+
+ mutex_lock(&mdev->data.mutex);
+ if (mdev->data.socket && !mdev->net_conf->no_cork)
+ drbd_tcp_cork(mdev->data.socket);
+ mutex_unlock(&mdev->data.mutex);
+ }
+
+ if (intr) {
+ D_ASSERT(intr == -EINTR);
+ flush_signals(current);
+ ERR_IF (get_t_state(thi) == Running)
+ continue;
+ break;
+ }
+
+ if (get_t_state(thi) != Running)
+ break;
+ /* With this break, we have done a down() but not consumed
+ the entry from the list. The cleanup code takes care of
+ this... */
+
+ w = NULL;
+ spin_lock_irq(&mdev->data.work.q_lock);
+ ERR_IF(list_empty(&mdev->data.work.q)) {
+ /* something terribly wrong in our logic.
+ * we were able to down() the semaphore,
+ * but the list is empty... doh.
+ *
+ * what is the best thing to do now?
+ * try again from scratch, restarting the receiver,
+ * asender, whatnot? could break even more ugly,
+ * e.g. when we are primary, but no good local data.
+ *
+ * I'll try to get away just starting over this loop.
+ */
+ spin_unlock_irq(&mdev->data.work.q_lock);
+ continue;
+ }
+ w = list_entry(mdev->data.work.q.next, struct drbd_work, list);
+ list_del_init(&w->list);
+ spin_unlock_irq(&mdev->data.work.q_lock);
+
+ if (!w->cb(mdev, w, mdev->state.conn < Connected)) {
+ /* drbd_WARN("worker: a callback failed! \n"); */
+ if (mdev->state.conn >= Connected)
+ drbd_force_state(mdev,
+ NS(conn, NetworkFailure));
+ }
+ }
+
+ spin_lock_irq(&mdev->data.work.q_lock);
+ i = 0;
+ while (!list_empty(&mdev->data.work.q)) {
+ list_splice_init(&mdev->data.work.q, &work_list);
+ spin_unlock_irq(&mdev->data.work.q_lock);
+
+ while (!list_empty(&work_list)) {
+ w = list_entry(work_list.next, struct drbd_work, list);
+ list_del_init(&w->list);
+ w->cb(mdev, w, 1);
+ i++; /* dead debugging code */
+ }
+
+ spin_lock_irq(&mdev->data.work.q_lock);
+ }
+ sema_init(&mdev->data.work.s, 0);
+ /* DANGEROUS race: if someone did queue their work within the spinlock,
+ * but up()ed outside the spinlock, we could get an up() on the
+ * semaphore without a corresponding list entry.
+ * So don't do that.
+ */
+ spin_unlock_irq(&mdev->data.work.q_lock);
+
+ D_ASSERT(mdev->state.disk == Diskless && mdev->state.conn == StandAlone);
+ /* _drbd_set_state only uses stop_nowait.
+ * wait here for the Exiting receiver. */
+ drbd_thread_stop(&mdev->receiver);
+ drbd_mdev_cleanup(mdev);
+
+ INFO("worker terminated\n");
+
+ return 0;
+}
The big "struct drbd_conf". It actually describes one DRBD device.
Signed-off-by: Philipp Reisner <[email protected]>
Signed-off-by: Lars Ellenberg <[email protected]>
---
diff -uNrp linux-2.6.29/drivers/block/drbd/drbd_int.h linux-2.6.29-drbd/drivers/block/drbd/drbd_int.h
--- linux-2.6.29/drivers/block/drbd/drbd_int.h 1970-01-01 01:00:00.000000000 +0100
+++ linux-2.6.29-drbd/drivers/block/drbd/drbd_int.h 2009-03-30 18:46:14.164468432 +0200
@@ -0,0 +1,2320 @@
+/*
+ drbd_int.h
+
+ This file is part of DRBD by Philipp Reisner and Lars Ellenberg.
+
+ Copyright (C) 2001-2008, LINBIT Information Technologies GmbH.
+ Copyright (C) 1999-2008, Philipp Reisner <[email protected]>.
+ Copyright (C) 2002-2008, Lars Ellenberg <[email protected]>.
+
+ drbd is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2, or (at your option)
+ any later version.
+
+ drbd is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with drbd; see the file COPYING. If not, write to
+ the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA.
+
+*/
+
+#ifndef _DRBD_INT_H
+#define _DRBD_INT_H
+
+#include <linux/compiler.h>
+#include <linux/types.h>
+#include <linux/version.h>
+#include <linux/list.h>
+#include <linux/sched.h>
+#include <linux/bitops.h>
+#include <linux/slab.h>
+#include <linux/crypto.h>
+#include <linux/tcp.h>
+#include <linux/mutex.h>
+#include <linux/major.h>
+#include <linux/blkdev.h>
+#include <linux/bio.h>
+#include <net/tcp.h>
+#include "lru_cache.h"
+
+#ifdef __CHECKER__
+# define __protected_by(x) __attribute__((require_context(x,1,999,"rdwr")))
+# define __protected_read_by(x) __attribute__((require_context(x,1,999,"read")))
+# define __protected_write_by(x) __attribute__((require_context(x,1,999,"write")))
+# define __must_hold(x) __attribute__((context(x,1,1), require_context(x,1,999,"call")))
+#else
+# define __protected_by(x)
+# define __protected_read_by(x)
+# define __protected_write_by(x)
+# define __must_hold(x)
+#endif
+
+#define __no_warn(lock, stmt) do { __acquire(lock); stmt; __release(lock); } while (0)
+
+/* module parameter, defined in drbd_main.c */
+extern unsigned int minor_count;
+extern int allow_oos;
+extern unsigned int cn_idx;
+
+#ifdef DRBD_ENABLE_FAULTS
+extern int enable_faults;
+extern int fault_rate;
+extern int fault_devs;
+#endif
+
+extern char usermode_helper[];
+
+
+#ifndef TRUE
+#define TRUE 1
+#endif
+#ifndef FALSE
+#define FALSE 0
+#endif
+
+/* I don't remember why XCPU ...
+ * This is used to wake the asender,
+ * and to interrupt the send operation of the sending task
+ * on disconnect.
+ */
+#define DRBD_SIG SIGXCPU
+
+/* This is used to stop/restart our threads.
+ * Cannot use SIGTERM nor SIGKILL, since these
+ * are sent out by init on runlevel changes.
+ * I chose SIGHUP for now.
+ */
+#define DRBD_SIGKILL SIGHUP
+
+/* All EEs on the free list should have ID_VACANT (== 0)
+ * freshly allocated EEs get !ID_VACANT (== 1)
+ * so if it says "cannot dereference null pointer at address 0x00000001",
+ * it is most likely one of these :( */
+
+#define ID_IN_SYNC (4711ULL)
+#define ID_OUT_OF_SYNC (4712ULL)
+
+#define ID_SYNCER (-1ULL)
+#define ID_VACANT 0
+#define is_syncer_block_id(id) ((id) == ID_SYNCER)
+
+struct drbd_conf;
+
+#ifdef DBG_ALL_SYMBOLS
+# define STATIC
+#else
+# define STATIC static
+#endif
+
+/*
+ * Some Message Macros
+ *************************/
+
+#define DUMPP(A) ERR(#A " = %p in %s:%d\n", (A), __FILE__, __LINE__);
+#define DUMPLU(A) ERR(#A " = %lu in %s:%d\n", (unsigned long)(A), __FILE__, __LINE__);
+#define DUMPLLU(A) ERR(#A " = %llu in %s:%d\n", (unsigned long long)(A), __FILE__, __LINE__);
+#define DUMPLX(A) ERR(#A " = %lx in %s:%d\n", (A), __FILE__, __LINE__);
+#define DUMPI(A) ERR(#A " = %d in %s:%d\n", (int)(A), __FILE__, __LINE__);
+
+
+#define PRINTK(level, fmt, args...) \
+ printk(level "drbd%d: " fmt, \
+ mdev->minor , ##args)
+
+#define ALERT(fmt, args...) PRINTK(KERN_ALERT, fmt , ##args)
+#define ERR(fmt, args...) PRINTK(KERN_ERR, fmt , ##args)
+/* nowadays, WARN() is defined as BUG() without crash in bug.h */
+#define drbd_WARN(fmt, args...) PRINTK(KERN_WARNING, fmt , ##args)
+#define INFO(fmt, args...) PRINTK(KERN_INFO, fmt , ##args)
+#define DBG(fmt, args...) PRINTK(KERN_DEBUG, fmt , ##args)
+
+#define D_ASSERT(exp) if (!(exp)) \
+ ERR("ASSERT( " #exp " ) in %s:%d\n", __FILE__, __LINE__)
+
+#define ERR_IF(exp) if (({ \
+ int _b = (exp) != 0; \
+ if (_b) ERR("%s: (%s) in %s:%d\n", \
+ __func__, #exp, __FILE__, __LINE__); \
+ _b; \
+ }))
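+/* typical use: ERR_IF(!odev) return 1;
+ * logs "<function>: (!odev) in <file>:<line>" and then
+ * executes the guarded statement */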
+
+/* Defines to control fault insertion */
+enum {
+ DRBD_FAULT_MD_WR = 0, /* meta data write */
+ DRBD_FAULT_MD_RD, /* read */
+ DRBD_FAULT_RS_WR, /* resync */
+ DRBD_FAULT_RS_RD,
+ DRBD_FAULT_DT_WR, /* data */
+ DRBD_FAULT_DT_RD,
+ DRBD_FAULT_DT_RA, /* data read ahead */
+ DRBD_FAULT_AL_EE, /* alloc ee */
+
+ DRBD_FAULT_MAX,
+};
+
+#ifdef DRBD_ENABLE_FAULTS
+extern unsigned int
+_drbd_insert_fault(struct drbd_conf *mdev, unsigned int type);
+static inline int
+drbd_insert_fault(struct drbd_conf *mdev, unsigned int type) {
+ return fault_rate &&
+ (enable_faults & (1<<type)) &&
+ _drbd_insert_fault(mdev, type);
+}
+#define FAULT_ACTIVE(_m, _t) (drbd_insert_fault((_m), (_t)))
+
+#else
+#define FAULT_ACTIVE(_m, _t) (0)
+#endif
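+/* illustration only (not an actual call site): a data write path
+ * could fail a bio on purpose with
+ * if (FAULT_ACTIVE(mdev, DRBD_FAULT_DT_WR))
+ * bio_endio(bio, -EIO);
+ */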
+
+/* integer division, round _UP_ to the next integer */
+#define div_ceil(A, B) ((A)/(B) + ((A)%(B) ? 1 : 0))
+/* usual integer division */
+#define div_floor(A, B) ((A)/(B))
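+/* e.g. div_ceil(7, 4) == 2, while div_floor(7, 4) == 1 */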
+
+/* drbd_meta-data.c (still in drbd_main.c) */
+/* 4th incarnation of the disk layout. */
+#define DRBD_MD_MAGIC (DRBD_MAGIC+4)
+
+extern struct drbd_conf **minor_table;
+extern struct ratelimit_state drbd_ratelimit_state;
+
+/***
+ * on the wire
+ *********************************************************************/
+
+enum Drbd_Packet_Cmd {
+ /* receiver (data socket) */
+ Data = 0x00,
+ DataReply = 0x01, /* Response to DataRequest */
+ RSDataReply = 0x02, /* Response to RSDataRequest */
+ Barrier = 0x03,
+ ReportBitMap = 0x04,
+ BecomeSyncTarget = 0x05,
+ BecomeSyncSource = 0x06,
+ UnplugRemote = 0x07, /* Used at various times to hint the peer */
+ DataRequest = 0x08, /* Used to ask for a data block */
+ RSDataRequest = 0x09, /* Used to ask for a data block for resync */
+ SyncParam = 0x0a,
+ ReportProtocol = 0x0b,
+ ReportUUIDs = 0x0c,
+ ReportSizes = 0x0d,
+ ReportState = 0x0e,
+ ReportSyncUUID = 0x0f,
+ AuthChallenge = 0x10,
+ AuthResponse = 0x11,
+ StateChgRequest = 0x12,
+
+ /* asender (meta socket) */
+ Ping = 0x13,
+ PingAck = 0x14,
+ RecvAck = 0x15, /* Used in protocol B */
+ WriteAck = 0x16, /* Used in protocol C */
+ RSWriteAck = 0x17, /* Is a WriteAck, additionally call set_in_sync(). */
+ DiscardAck = 0x18, /* Used in proto C, two-primaries conflict detection */
+ NegAck = 0x19, /* Sent if local disk is unusable */
+ NegDReply = 0x1a, /* Local disk is broken... */
+ NegRSDReply = 0x1b, /* Local disk is broken... */
+ BarrierAck = 0x1c,
+ StateChgReply = 0x1d,
+
+ /* "new" commands, no longer fitting into the ordering scheme above */
+
+ OVRequest = 0x1e, /* data socket */
+ OVReply = 0x1f,
+ OVResult = 0x20, /* meta socket */
+ CsumRSRequest = 0x21, /* data socket */
+ RSIsInSync = 0x22, /* meta socket */
+ SyncParam89 = 0x23, /* data socket, protocol version 89 replacement for SyncParam */
+ ReportCBitMap = 0x24, /* compressed or otherwise encoded bitmap transfer */
+
+ MAX_CMD = 0x25,
+ MayIgnore = 0x100, /* Flag to test if (cmd > MayIgnore) ... */
+ MAX_OPT_CMD = 0x101,
+
+ /* special command ids for handshake */
+
+ HandShakeM = 0xfff1, /* First Packet on the MetaSock */
+ HandShakeS = 0xfff2, /* First Packet on the Socket */
+
+ HandShake = 0xfffe /* FIXED for the next century! */
+};
+
+static inline const char *cmdname(enum Drbd_Packet_Cmd cmd)
+{
+ /* THINK may need to become several global tables
+ * when we want to support more than
+ * one PRO_VERSION */
+ static const char *cmdnames[] = {
+ [Data] = "Data",
+ [DataReply] = "DataReply",
+ [RSDataReply] = "RSDataReply",
+ [Barrier] = "Barrier",
+ [ReportBitMap] = "ReportBitMap",
+ [BecomeSyncTarget] = "BecomeSyncTarget",
+ [BecomeSyncSource] = "BecomeSyncSource",
+ [UnplugRemote] = "UnplugRemote",
+ [DataRequest] = "DataRequest",
+ [RSDataRequest] = "RSDataRequest",
+ [SyncParam] = "SyncParam",
+ [SyncParam89] = "SyncParam89",
+ [ReportProtocol] = "ReportProtocol",
+ [ReportUUIDs] = "ReportUUIDs",
+ [ReportSizes] = "ReportSizes",
+ [ReportState] = "ReportState",
+ [ReportSyncUUID] = "ReportSyncUUID",
+ [AuthChallenge] = "AuthChallenge",
+ [AuthResponse] = "AuthResponse",
+ [Ping] = "Ping",
+ [PingAck] = "PingAck",
+ [RecvAck] = "RecvAck",
+ [WriteAck] = "WriteAck",
+ [RSWriteAck] = "RSWriteAck",
+ [DiscardAck] = "DiscardAck",
+ [NegAck] = "NegAck",
+ [NegDReply] = "NegDReply",
+ [NegRSDReply] = "NegRSDReply",
+ [BarrierAck] = "BarrierAck",
+ [StateChgRequest] = "StateChgRequest",
+ [StateChgReply] = "StateChgReply",
+ [OVRequest] = "OVRequest",
+ [OVReply] = "OVReply",
+ [OVResult] = "OVResult",
+ [CsumRSRequest] = "CsumRSRequest",
+ [RSIsInSync] = "RSIsInSync",
+ [ReportCBitMap] = "ReportCBitMap",
+ [MAX_CMD] = NULL,
+ };
+
+ if (cmd == HandShakeM)
+ return "HandShakeM";
+ if (cmd == HandShakeS)
+ return "HandShakeS";
+ if (cmd == HandShake)
+ return "HandShake";
+ if (cmd >= MAX_CMD)
+ return "Unknown";
+ return cmdnames[cmd];
+}
+
+/* for sending/receiving the bitmap,
+ * possibly in some encoding scheme */
+struct bm_xfer_ctx {
+ /* "const"
+ * stores total bits and long words
+ * of the bitmap, so we don't need to
+ * call the accessor functions over and again. */
+ unsigned long bm_bits;
+ unsigned long bm_words;
+ /* during xfer, current position within the bitmap */
+ unsigned long bit_offset;
+ unsigned long word_offset;
+
+ /* statistics; index: (h->command == ReportBitMap) */
+ unsigned packets[2];
+ unsigned bytes[2];
+};
+
+extern void INFO_bm_xfer_stats(struct drbd_conf *mdev,
+ const char *direction, struct bm_xfer_ctx *c);
+
+static inline void bm_xfer_ctx_bit_to_word_offset(struct bm_xfer_ctx *c)
+{
+ /* word_offset counts "native long words" (32 or 64 bit),
+ * aligned at 64 bit.
+ * Encoded packet may end at an unaligned bit offset.
+ * In case a fallback clear text packet is transmitted in
+ * between, we adjust this offset back to the last 64bit
+ * aligned "native long word", which makes coding and decoding
+ * the plain text bitmap much more convenient. */
+#if BITS_PER_LONG == 64
+ c->word_offset = c->bit_offset >> 6;
+#elif BITS_PER_LONG == 32
+ c->word_offset = c->bit_offset >> 5;
+ c->word_offset &= ~(1UL);
+#else
+# error "unsupported BITS_PER_LONG"
+#endif
+}
+
+/* This is the layout for a packet on the wire.
+ * The byteorder is the network byte order.
+ * (except block_id and barrier fields.
+ * these are pointers to local structs
+ * and have no relevance for the partner,
+ * which just echoes them as received.)
+ *
+ * NOTE that the payload starts at a long aligned offset,
+ * regardless of 32 or 64 bit arch!
+ */
+struct Drbd_Header {
+ u32 magic;
+ u16 command;
+ u16 length; /* bytes of data after this header */
+ u8 payload[0];
+} __attribute((packed));
+/* 8 bytes. packet FIXED for the next century! */
+
+/*
+ * short commands, packets without payload, plain Drbd_Header:
+ * Ping
+ * PingAck
+ * BecomeSyncTarget
+ * BecomeSyncSource
+ * UnplugRemote
+ */
+
+/*
+ * commands with out-of-struct payload:
+ * ReportBitMap (no additional fields)
+ * Data, DataReply (see Drbd_Data_Packet)
+ * ReportCBitMap (see receive_compressed_bitmap)
+ */
+
+/* these defines must not be changed without changing the protocol version */
+#define DP_HARDBARRIER 1
+#define DP_RW_SYNC 2
+#define DP_MAY_SET_IN_SYNC 4
+
+struct Drbd_Data_Packet {
+ struct Drbd_Header head;
+ u64 sector; /* 64 bits sector number */
+ u64 block_id; /* to identify the request in protocol B&C */
+ u32 seq_num;
+ u32 dp_flags;
+} __attribute((packed));
+
+/*
+ * commands which share a struct:
+ * Drbd_BlockAck_Packet:
+ * RecvAck (proto B), WriteAck (proto C),
+ * DiscardAck (proto C, two-primaries conflict detection)
+ * Drbd_BlockRequest_Packet:
+ * DataRequest, RSDataRequest
+ */
+struct Drbd_BlockAck_Packet {
+ struct Drbd_Header head;
+ u64 sector;
+ u64 block_id;
+ u32 blksize;
+ u32 seq_num;
+} __attribute((packed));
+
+
+struct Drbd_BlockRequest_Packet {
+ struct Drbd_Header head;
+ u64 sector;
+ u64 block_id;
+ u32 blksize;
+ u32 pad; /* pad to a multiple of 8 bytes */
+} __attribute((packed));
+
+/*
+ * commands with their own struct for additional fields:
+ * HandShake
+ * Barrier
+ * BarrierAck
+ * SyncParam
+ * ReportParams
+ */
+
+struct Drbd_HandShake_Packet {
+ struct Drbd_Header head; /* 8 bytes */
+ u32 protocol_min;
+ u32 feature_flags;
+ u32 protocol_max;
+
+ /* should be more than enough for future enhancements.
+ * For now, feature_flags and the reserverd[] array shall be zero.
+ */
+
+ u32 _pad;
+ u64 reserverd[7];
+} __attribute((packed));
+/* 80 bytes, FIXED for the next century */
+
+struct Drbd_Barrier_Packet {
+ struct Drbd_Header head;
+ u32 barrier; /* barrier number _handle_ only */
+ u32 pad; /* pad to a multiple of 8 bytes */
+} __attribute((packed));
+
+struct Drbd_BarrierAck_Packet {
+ struct Drbd_Header head;
+ u32 barrier;
+ u32 set_size;
+} __attribute((packed));
+
+struct Drbd_SyncParam_Packet {
+ struct Drbd_Header head;
+ u32 rate;
+
+ /* Since protocol version 88 and higher. */
+ char verify_alg[0];
+} __attribute((packed));
+
+struct Drbd_SyncParam89_Packet {
+ struct Drbd_Header head;
+ u32 rate;
+ /* protocol version 89: */
+ char verify_alg[SHARED_SECRET_MAX];
+ char csums_alg[SHARED_SECRET_MAX];
+} __attribute((packed));
+
+struct Drbd_Protocol_Packet {
+ struct Drbd_Header head;
+ u32 protocol;
+ u32 after_sb_0p;
+ u32 after_sb_1p;
+ u32 after_sb_2p;
+ u32 want_lose;
+ u32 two_primaries;
+
+ /* Since protocol version 87 and higher. */
+ char integrity_alg[0];
+
+} __attribute((packed));
+
+struct Drbd_GenCnt_Packet {
+ struct Drbd_Header head;
+ u64 uuid[EXT_UUID_SIZE];
+} __attribute((packed));
+
+struct Drbd_SyncUUID_Packet {
+ struct Drbd_Header head;
+ u64 uuid;
+} __attribute((packed));
+
+struct Drbd_Sizes_Packet {
+ struct Drbd_Header head;
+ u64 d_size; /* size of disk */
+ u64 u_size; /* user requested size */
+ u64 c_size; /* current exported size */
+ u32 max_segment_size; /* Maximal size of a BIO */
+ u32 queue_order_type;
+} __attribute((packed));
+
+struct Drbd_State_Packet {
+ struct Drbd_Header head;
+ u32 state;
+} __attribute((packed));
+
+struct Drbd_Req_State_Packet {
+ struct Drbd_Header head;
+ u32 mask;
+ u32 val;
+} __attribute((packed));
+
+struct Drbd_RqS_Reply_Packet {
+ struct Drbd_Header head;
+ u32 retcode;
+} __attribute((packed));
+
+struct Drbd06_Parameter_P {
+ u64 size;
+ u32 state;
+ u32 blksize;
+ u32 protocol;
+ u32 version;
+ u32 gen_cnt[5];
+ u32 bit_map_gen[5];
+} __attribute((packed));
+
+struct Drbd_Discard_Packet {
+ struct Drbd_Header head;
+ u64 block_id;
+ u32 seq_num;
+ u32 pad;
+} __attribute((packed));
+
+/* Valid values for the encoding field.
+ * Bump proto version when changing this. */
+enum Drbd_bitmap_code {
+ RLE_VLI_Bytes = 0,
+ RLE_VLI_BitsFibD_0_1 = 1,
+ RLE_VLI_BitsFibD_1_1 = 2,
+ RLE_VLI_BitsFibD_1_2 = 3,
+ RLE_VLI_BitsFibD_2_3 = 4,
+ RLE_VLI_BitsFibD_3_5 = 5,
+};
+
+struct Drbd_Compressed_Bitmap_Packet {
+ struct Drbd_Header head;
+ /* (encoding & 0x0f): actual encoding, see enum Drbd_bitmap_code
+ * (encoding & 0x80): polarity (set/unset) of first runlength
+ * ((encoding >> 4) & 0x07): pad_bits, number of trailing zero bits
+ * used to pad up to head.length bytes
+ */
+ u8 encoding;
+
+ u8 code[0];
+} __attribute((packed));
+
+static inline enum Drbd_bitmap_code
+DCBP_get_code(struct Drbd_Compressed_Bitmap_Packet *p)
+{
+ return (enum Drbd_bitmap_code)(p->encoding & 0x0f);
+}
+
+static inline void
+DCBP_set_code(struct Drbd_Compressed_Bitmap_Packet *p, enum Drbd_bitmap_code code)
+{
+ BUG_ON(code & ~0xf);
+ p->encoding = (p->encoding & ~0xf) | code;
+}
+
+static inline int
+DCBP_get_start(struct Drbd_Compressed_Bitmap_Packet *p)
+{
+ return (p->encoding & 0x80) != 0;
+}
+
+static inline void
+DCBP_set_start(struct Drbd_Compressed_Bitmap_Packet *p, int set)
+{
+ p->encoding = (p->encoding & ~0x80) | (set ? 0x80 : 0);
+}
+
+static inline int
+DCBP_get_pad_bits(struct Drbd_Compressed_Bitmap_Packet *p)
+{
+ return (p->encoding >> 4) & 0x7;
+}
+
+static inline void
+DCBP_set_pad_bits(struct Drbd_Compressed_Bitmap_Packet *p, int n)
+{
+ BUG_ON(n & ~0x7);
+ p->encoding = (p->encoding & (~0x7 << 4)) | (n << 4);
+}
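+/* example: encoding byte 0x93 decodes as code RLE_VLI_BitsFibD_1_2
+ * (0x93 & 0x0f), first run-length polarity "set" (bit 7 set),
+ * and 1 trailing pad bit ((0x93 >> 4) & 0x7) */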
+
+/* one bitmap packet, including the Drbd_Header,
+ * should fit within one _architecture independent_ page.
+ * So we need to use the fixed size 4KiB page size
+ * most architectures have used for a long time.
+ */
+#define BM_PACKET_PAYLOAD_BYTES (4096 - sizeof(struct Drbd_Header))
+#define BM_PACKET_WORDS (BM_PACKET_PAYLOAD_BYTES/sizeof(long))
+#define BM_PACKET_VLI_BYTES_MAX (4096 - sizeof(struct Drbd_Compressed_Bitmap_Packet))
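+/* with the 8 byte Drbd_Header that is 4088 payload bytes per packet,
+ * i.e. 511 long words on 64-bit, 1022 on 32-bit */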
+#if (PAGE_SIZE < 4096)
+/* drbd_send_bitmap / receive_bitmap would break horribly */
+#error "PAGE_SIZE too small"
+#endif
+
+union Drbd_Polymorph_Packet {
+ struct Drbd_Header head;
+ struct Drbd_HandShake_Packet HandShake;
+ struct Drbd_Data_Packet Data;
+ struct Drbd_BlockAck_Packet BlockAck;
+ struct Drbd_Barrier_Packet Barrier;
+ struct Drbd_BarrierAck_Packet BarrierAck;
+ struct Drbd_SyncParam89_Packet SyncParam89;
+ struct Drbd_Protocol_Packet Protocol;
+ struct Drbd_Sizes_Packet Sizes;
+ struct Drbd_GenCnt_Packet GenCnt;
+ struct Drbd_State_Packet State;
+ struct Drbd_Req_State_Packet ReqState;
+ struct Drbd_RqS_Reply_Packet RqSReply;
+ struct Drbd_BlockRequest_Packet BlockRequest;
+} __attribute((packed));
+
+/**********************************************************************/
+enum Drbd_thread_state {
+ None,
+ Running,
+ Exiting,
+ Restarting
+};
+
+struct Drbd_thread {
+ spinlock_t t_lock;
+ struct task_struct *task;
+ struct completion stop;
+ enum Drbd_thread_state t_state;
+ int (*function) (struct Drbd_thread *);
+ struct drbd_conf *mdev;
+ int reset_cpu_mask;
+};
+
+static inline enum Drbd_thread_state get_t_state(struct Drbd_thread *thi)
+{
+ /* THINK testing the t_state seems to be uncritical in all cases
+ * (but thread_{start,stop}), so we can read it *without* the lock.
+ * --lge */
+
+ smp_rmb();
+ return thi->t_state;
+}
+
+
+/*
+ * Having this as the first member of a struct provides sort of "inheritance".
+ * "derived" structs can be "drbd_queue_work()"ed.
+ * The callback should know and cast back to the descendant struct.
+ * drbd_request and Tl_epoch_entry are descendants of drbd_work.
+ */
+struct drbd_work;
+typedef int (*drbd_work_cb)(struct drbd_conf *, struct drbd_work *, int cancel);
+struct drbd_work {
+ struct list_head list;
+ drbd_work_cb cb;
+};
+
+struct drbd_barrier;
+struct drbd_request {
+ struct drbd_work w;
+ struct drbd_conf *mdev;
+ struct bio *private_bio;
+ struct hlist_node colision;
+ sector_t sector;
+ unsigned int size;
+ unsigned int epoch; /* barrier_nr */
+
+ /* barrier_nr: used to check on "completion" whether this req was in
+ * the current epoch, and we therefore have to close it,
+ * starting a new epoch...
+ */
+
+ /* up to here, the struct layout is identical to Tl_epoch_entry;
+ * we might be able to use that to our advantage... */
+
+ struct list_head tl_requests; /* ring list in the transfer log */
+ struct bio *master_bio; /* master bio pointer */
+ unsigned long rq_state; /* see comments above _req_mod() */
+ int seq_num;
+ unsigned long start_time;
+};
+
+struct drbd_barrier {
+ struct drbd_work w;
+ struct list_head requests; /* requests before */
+ struct drbd_barrier *next; /* pointer to the next barrier */
+ unsigned int br_number; /* the barriers identifier. */
+ int n_req; /* number of requests attached before this barrier */
+};
+
+struct drbd_request;
+
+/* These Tl_epoch_entries may be in one of 6 lists:
+ active_ee .. data packet being written
+ sync_ee .. syncer block being written
+ done_ee .. block written, need to send WriteAck
+ read_ee .. [RS]DataRequest being read
+*/
+
+struct drbd_epoch {
+ struct list_head list;
+ unsigned int barrier_nr;
+ atomic_t epoch_size; /* increased on every request added. */
+ atomic_t active; /* increased on every req. added, and dec on every finished. */
+ unsigned long flags;
+};
+
+/* drbd_epoch flag bits */
+enum {
+ DE_BARRIER_IN_NEXT_EPOCH_ISSUED,
+ DE_BARRIER_IN_NEXT_EPOCH_DONE,
+ DE_CONTAINS_A_BARRIER,
+ DE_HAVE_BARRIER_NUMBER,
+ DE_IS_FINISHING,
+};
+
+struct Tl_epoch_entry {
+ struct drbd_work w;
+ struct drbd_conf *mdev;
+ struct bio *private_bio;
+ struct hlist_node colision;
+ sector_t sector;
+ unsigned int size;
+ struct drbd_epoch *epoch;
+
+ /* up to here, the struct layout is identical to drbd_request;
+ * we might be able to use that to our advantage... */
+
+ unsigned int flags;
+ u64 block_id;
+};
+
+struct digest_info {
+ int digest_size;
+ void *digest;
+};
+
+/* ee flag bits */
+enum {
+ __EE_CALL_AL_COMPLETE_IO,
+ __EE_CONFLICT_PENDING,
+ __EE_MAY_SET_IN_SYNC,
+ __EE_IS_BARRIER,
+};
+#define EE_CALL_AL_COMPLETE_IO (1<<__EE_CALL_AL_COMPLETE_IO)
+#define EE_CONFLICT_PENDING (1<<__EE_CONFLICT_PENDING)
+#define EE_MAY_SET_IN_SYNC (1<<__EE_MAY_SET_IN_SYNC)
+#define EE_IS_BARRIER (1<<__EE_IS_BARRIER)
+
+/* global flag bits */
+enum {
+ CREATE_BARRIER, /* next Data is preceded by a Barrier */
+ SIGNAL_ASENDER, /* whether asender wants to be interrupted */
+ SEND_PING, /* whether asender should send a ping asap */
+ WORK_PENDING, /* completion flag for drbd_disconnect */
+ STOP_SYNC_TIMER, /* tell timer to cancel itself */
+ UNPLUG_QUEUED, /* only relevant with kernel 2.4 */
+ UNPLUG_REMOTE, /* sending a "UnplugRemote" could help */
+ MD_DIRTY, /* current uuids and flags not yet on disk */
+ DISCARD_CONCURRENT, /* Set on one node, cleared on the peer! */
+ USE_DEGR_WFC_T, /* degr-wfc-timeout instead of wfc-timeout. */
+ CLUSTER_ST_CHANGE, /* Cluster wide state change going on... */
+ CL_ST_CHG_SUCCESS,
+ CL_ST_CHG_FAIL,
+ CRASHED_PRIMARY, /* This node was a crashed primary.
+ * Gets cleared when the state.conn
+ * goes into Connected state. */
+ WRITE_BM_AFTER_RESYNC, /* A kmalloc() during resync failed */
+ NO_BARRIER_SUPP, /* underlying block device doesn't implement barriers */
+ CONSIDER_RESYNC,
+
+ MD_NO_BARRIER, /* meta data device does not support barriers,
+ so don't even try */
+ SUSPEND_IO, /* suspend application io */
+ BITMAP_IO, /* suspend application io;
+ once no more io in flight, start bitmap io */
+ BITMAP_IO_QUEUED, /* Started bitmap IO */
+ RESYNC_AFTER_NEG, /* Resync after online grow after the attach&negotiate finished. */
+ NET_CONGESTED, /* The data socket is congested */
+};
+
+struct drbd_bitmap; /* opaque for drbd_conf */
+
+/* TODO sort members for performance
+ * MAYBE group them further */
+
+/* THINK maybe we actually want to use the default "event/%s" worker threads
+ * or similar in linux 2.6, which uses per cpu data and threads.
+ *
+ * To be general, this might need a spin_lock member.
+ * For now, please use the mdev->req_lock to protect list_head,
+ * see drbd_queue_work below.
+ */
+struct drbd_work_queue {
+ struct list_head q;
+ struct semaphore s; /* producers up it, worker down()s it */
+ spinlock_t q_lock; /* to protect the list. */
+};
+
+struct drbd_socket {
+ struct drbd_work_queue work;
+ struct mutex mutex;
+ struct socket *socket;
+ /* this way we get our
+ * send/receive buffers off the stack */
+ union Drbd_Polymorph_Packet sbuf;
+ union Drbd_Polymorph_Packet rbuf;
+};
+
+struct drbd_md {
+ u64 md_offset; /* sector offset to 'super' block */
+
+ u64 la_size_sect; /* last agreed size, unit sectors */
+ u64 uuid[UUID_SIZE];
+ u64 device_uuid;
+ u32 flags;
+ u32 md_size_sect;
+
+ s32 al_offset; /* signed relative sector offset to al area */
+ s32 bm_offset; /* signed relative sector offset to bitmap */
+
+ /* u32 al_nr_extents; -- important for restoring the AL --
+ * is stored in sync_conf.al_extents, which in turn
+ * gets applied to act_log->nr_elements
+ */
+};
+
+/* for sync_conf and other types... */
+#define NL_PACKET(name, number, fields) struct name { fields };
+#define NL_INTEGER(pn,pr,member) int member;
+#define NL_INT64(pn,pr,member) __u64 member;
+#define NL_BIT(pn,pr,member) unsigned member:1;
+#define NL_STRING(pn,pr,member,len) unsigned char member[len]; int member ## _len;
+#include "linux/drbd_nl.h"
+
+struct drbd_backing_dev {
+ struct block_device *backing_bdev;
+ struct block_device *md_bdev;
+ struct file *lo_file;
+ struct file *md_file;
+ struct drbd_md md;
+ struct disk_conf dc; /* The user provided config... */
+ sector_t known_size; /* last known size of that backing device */
+};
+
+struct drbd_md_io {
+ struct drbd_conf *mdev;
+ struct completion event;
+ int error;
+};
+
+struct bm_io_work {
+ struct drbd_work w;
+ char *why;
+ int (*io_fn)(struct drbd_conf *mdev);
+ void (*done)(struct drbd_conf *mdev, int rv);
+};
+
+enum write_ordering_e {
+ WO_none,
+ WO_drain_io,
+ WO_bdev_flush,
+ WO_bio_barrier
+};
+
+struct drbd_conf {
+ /* things that are stored as / read from meta data on disk */
+ unsigned long flags;
+
+ /* configured by drbdsetup */
+ struct net_conf *net_conf; /* protected by inc_net() and dec_net() */
+ struct syncer_conf sync_conf;
+ struct drbd_backing_dev *bc __protected_by(local);
+
+ sector_t p_size; /* partner's disk size */
+ struct request_queue *rq_queue;
+ struct block_device *this_bdev;
+ struct gendisk *vdisk;
+
+ struct drbd_socket data; /* data/barrier/cstate/parameter packets */
+ struct drbd_socket meta; /* ping/ack (metadata) packets */
+ int agreed_pro_version; /* actually used protocol version */
+ unsigned long last_received; /* in jiffies, either socket */
+ unsigned int ko_count;
+ struct drbd_work resync_work,
+ unplug_work,
+ md_sync_work;
+ struct timer_list resync_timer;
+ struct timer_list md_sync_timer;
+
+ /* Used after attach while negotiating new disk state. */
+ union drbd_state_t new_state_tmp;
+
+ union drbd_state_t state;
+ wait_queue_head_t misc_wait;
+ wait_queue_head_t state_wait; /* upon each state change. */
+ unsigned int send_cnt;
+ unsigned int recv_cnt;
+ unsigned int read_cnt;
+ unsigned int writ_cnt;
+ unsigned int al_writ_cnt;
+ unsigned int bm_writ_cnt;
+ atomic_t ap_bio_cnt; /* Requests we need to complete */
+ atomic_t ap_pending_cnt; /* AP data packets on the wire, ack expected */
+ atomic_t rs_pending_cnt; /* RS request/data packets on the wire */
+ atomic_t unacked_cnt; /* Need to send replies for */
+ atomic_t local_cnt; /* Waiting for local completion */
+ atomic_t net_cnt; /* Users of net_conf */
+ spinlock_t req_lock;
+ struct drbd_barrier *unused_spare_barrier; /* for pre-allocation */
+ struct drbd_barrier *newest_barrier;
+ struct drbd_barrier *oldest_barrier;
+ struct list_head out_of_sequence_requests;
+ struct hlist_head *tl_hash;
+ unsigned int tl_hash_s;
+
+ /* blocks to sync in this run [unit BM_BLOCK_SIZE] */
+ unsigned long rs_total;
+ /* number of sync IOs that failed in this run */
+ unsigned long rs_failed;
+ /* Syncer's start time [unit jiffies] */
+ unsigned long rs_start;
+ /* cumulated time in PausedSyncX state [unit jiffies] */
+ unsigned long rs_paused;
+ /* block not up-to-date at mark [unit BM_BLOCK_SIZE] */
+ unsigned long rs_mark_left;
+ /* mark's time [unit jiffies] */
+ unsigned long rs_mark_time;
+ /* skipped because csum was equal [unit BM_BLOCK_SIZE] */
+ unsigned long rs_same_csum;
+ sector_t ov_position;
+ /* Start sector of out of sync range. */
+ sector_t ov_last_oos_start;
+ /* size of out-of-sync range in sectors. */
+ sector_t ov_last_oos_size;
+ unsigned long ov_left;
+ struct crypto_hash *csums_tfm;
+ struct crypto_hash *verify_tfm;
+
+ struct Drbd_thread receiver;
+ struct Drbd_thread worker;
+ struct Drbd_thread asender;
+ struct drbd_bitmap *bitmap;
+ unsigned long bm_resync_fo; /* bit offset for drbd_bm_find_next */
+
+ /* Used to track operations of resync... */
+ struct lru_cache *resync;
+ /* Number of locked elements in resync LRU */
+ unsigned int resync_locked;
+ /* resync extent number waiting for application requests */
+ unsigned int resync_wenr;
+
+ int open_cnt;
+ u64 *p_uuid;
+ struct drbd_epoch *current_epoch;
+ spinlock_t epoch_lock;
+ unsigned int epochs;
+ enum write_ordering_e write_ordering;
+ struct list_head active_ee; /* IO in progress */
+ struct list_head sync_ee; /* IO in progress */
+ struct list_head done_ee; /* send ack */
+ struct list_head read_ee; /* IO in progress */
+ struct list_head net_ee; /* zero-copy network send in progress */
+ struct hlist_head *ee_hash; /* is protected by req_lock! */
+ unsigned int ee_hash_s;
+
+ /* this one is protected by ee_lock, single thread */
+ struct Tl_epoch_entry *last_write_w_barrier;
+
+ int next_barrier_nr;
+ struct hlist_head *app_reads_hash; /* is protected by req_lock */
+ struct list_head resync_reads;
+ atomic_t pp_in_use;
+ wait_queue_head_t ee_wait;
+ struct page *md_io_page; /* one page buffer for md_io */
+ struct page *md_io_tmpp; /* for hardsect != 512 [s390 only?] */
+ struct mutex md_io_mutex; /* protects the md_io_buffer */
+ spinlock_t al_lock;
+ wait_queue_head_t al_wait;
+ struct lru_cache *act_log; /* activity log */
+ unsigned int al_tr_number;
+ int al_tr_cycle;
+ int al_tr_pos; /* position of the next transaction in the journal */
+ struct crypto_hash *cram_hmac_tfm;
+ struct crypto_hash *integrity_w_tfm; /* to be used by the worker thread */
+ struct crypto_hash *integrity_r_tfm; /* to be used by the receiver thread */
+ void *int_dig_out;
+ void *int_dig_in;
+ void *int_dig_vv;
+ wait_queue_head_t seq_wait;
+ atomic_t packet_seq;
+ unsigned int peer_seq;
+ spinlock_t peer_seq_lock;
+ unsigned int minor;
+ unsigned long comm_bm_set; /* communicated number of set bits. */
+ cpumask_t cpu_mask;
+ struct bm_io_work bm_io_work;
+ u64 ed_uuid; /* UUID of the exposed data */
+ struct mutex state_mutex;
+ char congestion_reason; /* Why we were congested... */
+};
+
+static inline struct drbd_conf *minor_to_mdev(unsigned int minor)
+{
+ struct drbd_conf *mdev;
+
+ mdev = minor < minor_count ? minor_table[minor] : NULL;
+
+ return mdev;
+}
+
+static inline unsigned int mdev_to_minor(struct drbd_conf *mdev)
+{
+ return mdev->minor;
+}
+
+/* returns 1 if it was successful,
+ * returns 0 if there was no data socket.
+ * so wherever you are going to use the data.socket, e.g. do
+ * if (!drbd_get_data_sock(mdev))
+ * return 0;
+ * CODE();
+ * drbd_put_data_sock(mdev);
+ */
+static inline int drbd_get_data_sock(struct drbd_conf *mdev)
+{
+ mutex_lock(&mdev->data.mutex);
+ /* drbd_disconnect() could have called drbd_free_sock()
+ * while we were waiting for the mutex... */
+ if (unlikely(mdev->data.socket == NULL)) {
+ mutex_unlock(&mdev->data.mutex);
+ return 0;
+ }
+ return 1;
+}
+
+static inline void drbd_put_data_sock(struct drbd_conf *mdev)
+{
+ mutex_unlock(&mdev->data.mutex);
+}
+
+/*
+ * function declarations
+ *************************/
+
+/* drbd_main.c */
+
+enum chg_state_flags {
+ ChgStateHard = 1,
+ ChgStateVerbose = 2,
+ ChgWaitComplete = 4,
+ ChgSerialize = 8,
+ ChgOrdered = ChgWaitComplete + ChgSerialize,
+};
+
+extern void drbd_init_set_defaults(struct drbd_conf *mdev);
+extern int drbd_change_state(struct drbd_conf *mdev, enum chg_state_flags f,
+ union drbd_state_t mask, union drbd_state_t val);
+extern void drbd_force_state(struct drbd_conf *, union drbd_state_t,
+ union drbd_state_t);
+extern int _drbd_request_state(struct drbd_conf *, union drbd_state_t,
+ union drbd_state_t, enum chg_state_flags);
+extern int __drbd_set_state(struct drbd_conf *, union drbd_state_t,
+ enum chg_state_flags, struct completion *done);
+extern void print_st_err(struct drbd_conf *, union drbd_state_t,
+ union drbd_state_t, int);
+extern int drbd_thread_start(struct Drbd_thread *thi);
+extern void _drbd_thread_stop(struct Drbd_thread *thi, int restart, int wait);
+#ifdef CONFIG_SMP
+extern void drbd_thread_current_set_cpu(struct drbd_conf *mdev);
+extern cpumask_t drbd_calc_cpu_mask(struct drbd_conf *mdev);
+#else
+#define drbd_thread_current_set_cpu(A) ({})
+#define drbd_calc_cpu_mask(A) CPU_MASK_ALL
+#endif
+extern void drbd_free_resources(struct drbd_conf *mdev);
+extern void tl_release(struct drbd_conf *mdev, unsigned int barrier_nr,
+ unsigned int set_size);
+extern void tl_clear(struct drbd_conf *mdev);
+extern void _tl_add_barrier(struct drbd_conf *, struct drbd_barrier *);
+extern void drbd_free_sock(struct drbd_conf *mdev);
+extern int drbd_send(struct drbd_conf *mdev, struct socket *sock,
+ void *buf, size_t size, unsigned msg_flags);
+extern int drbd_send_protocol(struct drbd_conf *mdev);
+extern int _drbd_send_uuids(struct drbd_conf *mdev);
+extern int drbd_send_uuids(struct drbd_conf *mdev);
+extern int drbd_send_sync_uuid(struct drbd_conf *mdev, u64 val);
+extern int drbd_send_sizes(struct drbd_conf *mdev);
+extern int _drbd_send_state(struct drbd_conf *mdev);
+extern int drbd_send_state(struct drbd_conf *mdev);
+extern int _drbd_send_cmd(struct drbd_conf *mdev, struct socket *sock,
+ enum Drbd_Packet_Cmd cmd, struct Drbd_Header *h,
+ size_t size, unsigned msg_flags);
+#define USE_DATA_SOCKET 1
+#define USE_META_SOCKET 0
+extern int drbd_send_cmd(struct drbd_conf *mdev, int use_data_socket,
+ enum Drbd_Packet_Cmd cmd, struct Drbd_Header *h,
+ size_t size);
+extern int drbd_send_cmd2(struct drbd_conf *mdev, enum Drbd_Packet_Cmd cmd,
+ char *data, size_t size);
+extern int drbd_send_sync_param(struct drbd_conf *mdev, struct syncer_conf *sc);
+extern int drbd_send_b_ack(struct drbd_conf *mdev, u32 barrier_nr,
+ u32 set_size);
+extern int drbd_send_ack(struct drbd_conf *mdev, enum Drbd_Packet_Cmd cmd,
+ struct Tl_epoch_entry *e);
+extern int drbd_send_ack_rp(struct drbd_conf *mdev, enum Drbd_Packet_Cmd cmd,
+ struct Drbd_BlockRequest_Packet *rp);
+extern int drbd_send_ack_dp(struct drbd_conf *mdev, enum Drbd_Packet_Cmd cmd,
+ struct Drbd_Data_Packet *dp);
+extern int drbd_send_ack_ex(struct drbd_conf *mdev, enum Drbd_Packet_Cmd cmd,
+ sector_t sector, int blksize, u64 block_id);
+extern int _drbd_send_page(struct drbd_conf *mdev, struct page *page,
+ int offset, size_t size);
+extern int drbd_send_block(struct drbd_conf *mdev, enum Drbd_Packet_Cmd cmd,
+ struct Tl_epoch_entry *e);
+extern int drbd_send_dblock(struct drbd_conf *mdev, struct drbd_request *req);
+extern int _drbd_send_barrier(struct drbd_conf *mdev,
+ struct drbd_barrier *barrier);
+extern int drbd_send_drequest(struct drbd_conf *mdev, int cmd,
+ sector_t sector, int size, u64 block_id);
+extern int drbd_send_drequest_csum(struct drbd_conf *mdev,
+ sector_t sector,int size,
+ void *digest, int digest_size,
+ enum Drbd_Packet_Cmd cmd);
+extern int drbd_send_ov_request(struct drbd_conf *mdev,sector_t sector,int size);
+
+extern int drbd_send_bitmap(struct drbd_conf *mdev);
+extern int _drbd_send_bitmap(struct drbd_conf *mdev);
+extern int drbd_send_sr_reply(struct drbd_conf *mdev, int retcode);
+extern void drbd_free_bc(struct drbd_backing_dev *bc);
+extern int drbd_io_error(struct drbd_conf *mdev, int forcedetach);
+extern void drbd_mdev_cleanup(struct drbd_conf *mdev);
+
+/* drbd_meta-data.c (still in drbd_main.c) */
+extern void drbd_md_sync(struct drbd_conf *mdev);
+extern int drbd_md_read(struct drbd_conf *mdev, struct drbd_backing_dev *bdev);
+/* maybe define them below as inline? */
+extern void drbd_uuid_set(struct drbd_conf *mdev, int idx, u64 val) __must_hold(local);
+extern void _drbd_uuid_set(struct drbd_conf *mdev, int idx, u64 val) __must_hold(local);
+extern void drbd_uuid_new_current(struct drbd_conf *mdev) __must_hold(local);
+extern void _drbd_uuid_new_current(struct drbd_conf *mdev) __must_hold(local);
+extern void drbd_uuid_set_bm(struct drbd_conf *mdev, u64 val) __must_hold(local);
+extern void drbd_md_set_flag(struct drbd_conf *mdev, int flags) __must_hold(local);
+extern void drbd_md_clear_flag(struct drbd_conf *mdev, int flags)__must_hold(local);
+extern int drbd_md_test_flag(struct drbd_backing_dev *, int);
+extern void drbd_md_mark_dirty(struct drbd_conf *mdev);
+extern void drbd_queue_bitmap_io(struct drbd_conf *mdev,
+ int (*io_fn)(struct drbd_conf *),
+ void (*done)(struct drbd_conf *, int),
+ char *why);
+extern int drbd_bmio_set_n_write(struct drbd_conf *mdev);
+extern int drbd_bmio_clear_n_write(struct drbd_conf *mdev);
+extern int drbd_bitmap_io(struct drbd_conf *mdev, int (*io_fn)(struct drbd_conf *), char *why);
+
+
+/* Meta data layout
+ We reserve a 128MB block (4k aligned)
+ * either at the end of the backing device
+ * or on a separate meta data device. */
+
+#define MD_RESERVED_SECT (128LU << 11) /* 128 MB, unit sectors */
+/* The following numbers are sectors */
+#define MD_AL_OFFSET 8 /* 8 Sectors after start of meta area */
+#define MD_AL_MAX_SIZE 64 /* = 32 kb LOG ~ 3776 extents ~ 14 GB Storage */
+/* Allows up to about 3.8TB */
+#define MD_BM_OFFSET (MD_AL_OFFSET + MD_AL_MAX_SIZE)
+
+/* Since the smallest IO unit is usually 512 bytes */
+#define MD_HARDSECT_B 9
+#define MD_HARDSECT (1<<MD_HARDSECT_B)
+
+/* activity log */
+#define AL_EXTENTS_PT ((MD_HARDSECT-12)/8-1) /* 61 ; Extents per 512B sector */
+#define AL_EXTENT_SIZE_B 22 /* One extent represents 4M Storage */
+#define AL_EXTENT_SIZE (1<<AL_EXTENT_SIZE_B)
+
+#if BITS_PER_LONG == 32
+#define LN2_BPL 5
+#define cpu_to_lel(A) cpu_to_le32(A)
+#define lel_to_cpu(A) le32_to_cpu(A)
+#elif BITS_PER_LONG == 64
+#define LN2_BPL 6
+#define cpu_to_lel(A) cpu_to_le64(A)
+#define lel_to_cpu(A) le64_to_cpu(A)
+#else
+#error "LN2 of BITS_PER_LONG unknown!"
+#endif
+
+/* resync bitmap */
+/* 16MB sized 'bitmap extent' to track syncer usage */
+struct bm_extent {
+ struct lc_element lce;
+ int rs_left; /* number of bits set (out of sync) in this extent. */
+ int rs_failed; /* number of failed resync requests in this extent. */
+ unsigned long flags;
+};
+
+#define BME_NO_WRITES 0 /* bm_extent.flags: no more requests on this one! */
+#define BME_LOCKED 1 /* bm_extent.flags: syncer active on this one. */
+
+/* drbd_bitmap.c */
+/*
+ * We need to store one bit for a block.
+ * Example: 1GB disk @ 4096 byte blocks ==> we need 32 KB bitmap.
+ * Bit 0 ==> local node thinks this block is binary identical on both nodes
+ * Bit 1 ==> local node thinks this block needs to be synced.
+ */
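+/* Worked example for the numbers above: 1 GiB = 2^30 bytes; at one bit
+ * per 4 KiB (2^12 byte) block that is 2^18 bits = 2^15 bytes = 32 KiB
+ * of bitmap. */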
+
+#define BM_BLOCK_SIZE_B 12 /* 4k per bit */
+#define BM_BLOCK_SIZE (1<<BM_BLOCK_SIZE_B)
+/* (9+3) : 512 bytes @ 8 bits; representing 16M storage
+ * per sector of on disk bitmap */
+#define BM_EXT_SIZE_B (BM_BLOCK_SIZE_B + MD_HARDSECT_B + 3) /* = 24 */
+#define BM_EXT_SIZE (1<<BM_EXT_SIZE_B)
+
+#if (BM_EXT_SIZE_B != 24) || (BM_BLOCK_SIZE_B != 12)
+#error "HAVE YOU FIXED drbdmeta AS WELL??"
+#endif
+
+/* this many _storage_ sectors are described by one bit */
+#define BM_SECT_TO_BIT(x) ((x)>>(BM_BLOCK_SIZE_B-9))
+#define BM_BIT_TO_SECT(x) ((sector_t)(x)<<(BM_BLOCK_SIZE_B-9))
+#define BM_SECT_PER_BIT BM_BIT_TO_SECT(1)
+
+/* conversion from bits to kilobytes of represented storage */
+#define Bit2KB(bits) ((bits)<<(BM_BLOCK_SIZE_B-10))
+
+/* which _bitmap_ extent (i.e. sector) the bit for a certain
+ * _storage_ sector is located in */
+#define BM_SECT_TO_EXT(x) ((x)>>(BM_EXT_SIZE_B-9))
+
+/* how many _storage_ sectors we have per bitmap sector */
+#define BM_EXT_TO_SECT(x) ((sector_t)(x) << (BM_EXT_SIZE_B-9))
+#define BM_SECT_PER_EXT BM_EXT_TO_SECT(1)
+
+/* in one sector of the bitmap, we have this many activity_log extents. */
+#define AL_EXT_PER_BM_SECT (1 << (BM_EXT_SIZE_B - AL_EXTENT_SIZE_B))
+#define BM_WORDS_PER_AL_EXT (1 << (AL_EXTENT_SIZE_B-BM_BLOCK_SIZE_B-LN2_BPL))
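+/* Plugging in the current constants (illustrative): one on disk bitmap
+ * sector covers AL_EXT_PER_BM_SECT = 1 << (24-22) = 4 activity log
+ * extents, and one 4M AL extent needs 2^22/2^12 = 1024 bitmap bits,
+ * i.e. BM_WORDS_PER_AL_EXT = 16 longs on 64bit resp. 32 longs on 32bit. */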
+
+#define BM_BLOCKS_PER_BM_EXT_B (BM_EXT_SIZE_B - BM_BLOCK_SIZE_B)
+#define BM_BLOCKS_PER_BM_EXT_MASK ((1<<BM_BLOCKS_PER_BM_EXT_B) - 1)
+
+/* the extent in "PER_EXTENT" below is an activity log extent
+ * we need that many (long words/bytes) to store the bitmap
+ * of one AL_EXTENT_SIZE chunk of storage.
+ * we can store the bitmap for that many AL_EXTENTS within
+ * one sector of the _on_disk_ bitmap:
+ * bit 0 bit 37 bit 38 bit (512*8)-1
+ * ...|........|........|.. // ..|........|
+ * sect. 0 `296 `304 ^(512*8*8)-1
+ *
+#define BM_WORDS_PER_EXT ( (AL_EXT_SIZE/BM_BLOCK_SIZE) / BITS_PER_LONG )
+#define BM_BYTES_PER_EXT ( (AL_EXT_SIZE/BM_BLOCK_SIZE) / 8 ) // 128
+#define BM_EXT_PER_SECT ( 512 / BM_BYTES_PER_EXTENT ) // 4
+ */
+
+#define DRBD_MAX_SECTORS_32 (0xffffffffLU)
+#define DRBD_MAX_SECTORS_BM \
+ ((MD_RESERVED_SECT - MD_BM_OFFSET) * (1LL<<(BM_EXT_SIZE_B-9)))
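+/* Back of the envelope (illustrative): (262144 - 72) bitmap sectors,
+ * each describing 1<<15 storage sectors, give roughly 2^33 sectors,
+ * i.e. about 4 TiB addressable with the fixed size meta data area. */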
+#if DRBD_MAX_SECTORS_BM < DRBD_MAX_SECTORS_32
+#define DRBD_MAX_SECTORS DRBD_MAX_SECTORS_BM
+#define DRBD_MAX_SECTORS_FLEX DRBD_MAX_SECTORS_BM
+#elif !defined(CONFIG_LBD) && BITS_PER_LONG == 32
+#define DRBD_MAX_SECTORS DRBD_MAX_SECTORS_32
+#define DRBD_MAX_SECTORS_FLEX DRBD_MAX_SECTORS_32
+#else
+#define DRBD_MAX_SECTORS DRBD_MAX_SECTORS_BM
+/* 16 TB in units of sectors */
+#if BITS_PER_LONG == 32
+/* adjust by one page worth of bitmap,
+ * so we won't wrap around in drbd_bm_find_next_bit.
+ * you should use a 64bit OS for that much storage, anyway. */
+#define DRBD_MAX_SECTORS_FLEX BM_BIT_TO_SECT(0xffff7fff)
+#else
+#define DRBD_MAX_SECTORS_FLEX BM_BIT_TO_SECT(0x1LU << 32)
+#endif
+#endif
+
+/* Sector shift value for the "hash" functions of tl_hash and ee_hash tables.
+ * With a value of 6, all IO in one 32K block makes it to the same slot of the
+ * hash table. */
+#define HT_SHIFT 6
+#define DRBD_MAX_SEGMENT_SIZE (1U<<(9+HT_SHIFT))
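+/* Presumably the hash functions shift the sector right by HT_SHIFT
+ * before reducing it modulo the table size, so e.g. sectors 0..63
+ * (one 32K chunk) would all end up in the same slot, as noted above. */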
+
+/* Number of elements in the app_reads_hash */
+#define APP_R_HSIZE 15
+
+extern int drbd_bm_init(struct drbd_conf *mdev);
+extern int drbd_bm_resize(struct drbd_conf *mdev, sector_t sectors);
+extern void drbd_bm_cleanup(struct drbd_conf *mdev);
+extern void drbd_bm_set_all(struct drbd_conf *mdev);
+extern void drbd_bm_clear_all(struct drbd_conf *mdev);
+extern int drbd_bm_set_bits(
+ struct drbd_conf *mdev, unsigned long s, unsigned long e);
+extern int drbd_bm_clear_bits(
+ struct drbd_conf *mdev, unsigned long s, unsigned long e);
+/* bm_set_bits variant for use while holding drbd_bm_lock */
+extern int _drbd_bm_set_bits(struct drbd_conf *mdev,
+ const unsigned long s, const unsigned long e);
+extern int drbd_bm_test_bit(struct drbd_conf *mdev, unsigned long bitnr);
+extern int drbd_bm_e_weight(struct drbd_conf *mdev, unsigned long enr);
+extern int drbd_bm_write_sect(struct drbd_conf *mdev, unsigned long enr) __must_hold(local);
+extern int drbd_bm_read(struct drbd_conf *mdev) __must_hold(local);
+extern int drbd_bm_write(struct drbd_conf *mdev) __must_hold(local);
+extern unsigned long drbd_bm_ALe_set_all(struct drbd_conf *mdev,
+ unsigned long al_enr);
+extern size_t drbd_bm_words(struct drbd_conf *mdev);
+extern unsigned long drbd_bm_bits(struct drbd_conf *mdev);
+extern sector_t drbd_bm_capacity(struct drbd_conf *mdev);
+extern unsigned long drbd_bm_find_next(struct drbd_conf *mdev, unsigned long bm_fo);
+/* bm_find_next variants for use while you hold drbd_bm_lock() */
+extern unsigned long _drbd_bm_find_next(struct drbd_conf *mdev, unsigned long bm_fo);
+extern unsigned long _drbd_bm_find_next_zero(struct drbd_conf *mdev, unsigned long bm_fo);
+extern unsigned long drbd_bm_total_weight(struct drbd_conf *mdev);
+extern int drbd_bm_rs_done(struct drbd_conf *mdev);
+/* for receive_bitmap */
+extern void drbd_bm_merge_lel(struct drbd_conf *mdev, size_t offset,
+ size_t number, unsigned long *buffer);
+/* for _drbd_send_bitmap and drbd_bm_write_sect */
+extern void drbd_bm_get_lel(struct drbd_conf *mdev, size_t offset,
+ size_t number, unsigned long *buffer);
+
+extern void drbd_bm_lock(struct drbd_conf *mdev, char *why);
+extern void drbd_bm_unlock(struct drbd_conf *mdev);
+
+extern void _drbd_bm_recount_bits(struct drbd_conf *mdev, char *file, int line);
+#define drbd_bm_recount_bits(mdev) \
+ _drbd_bm_recount_bits(mdev, __FILE__, __LINE__)
+extern int drbd_bm_count_bits(struct drbd_conf *mdev, const unsigned long s, const unsigned long e);
+/* drbd_main.c */
+
+extern struct kmem_cache *drbd_request_cache;
+extern struct kmem_cache *drbd_ee_cache;
+extern mempool_t *drbd_request_mempool;
+extern mempool_t *drbd_ee_mempool;
+
+extern struct page *drbd_pp_pool; /* drbd's page pool */
+extern spinlock_t drbd_pp_lock;
+extern int drbd_pp_vacant;
+extern wait_queue_head_t drbd_pp_wait;
+
+extern rwlock_t global_state_lock;
+
+extern struct drbd_conf *drbd_new_device(unsigned int minor);
+extern void drbd_free_mdev(struct drbd_conf *mdev);
+
+/* Dynamic tracing framework */
+#ifdef ENABLE_DYNAMIC_TRACE
+
+extern int proc_details;
+extern int trace_type;
+extern int trace_devs;
+extern int trace_level;
+
+enum {
+ TraceLvlAlways = 0,
+ TraceLvlSummary,
+ TraceLvlMetrics,
+ TraceLvlAll,
+ TraceLvlMax
+};
+
+enum {
+ TraceTypePacket = 0x00000001,
+ TraceTypeRq = 0x00000002,
+ TraceTypeUuid = 0x00000004,
+ TraceTypeResync = 0x00000008,
+ TraceTypeEE = 0x00000010,
+ TraceTypeUnplug = 0x00000020,
+ TraceTypeNl = 0x00000040,
+ TraceTypeALExts = 0x00000080,
+ TraceTypeIntRq = 0x00000100,
+ TraceTypeMDIO = 0x00000200,
+ TraceTypeEpochs = 0x00000400,
+};
+
+static inline int
+is_trace(unsigned int type, unsigned int level) {
+ return (trace_level >= level) && (type & trace_type);
+}
+static inline int
+is_mdev_trace(struct drbd_conf *mdev, unsigned int type, unsigned int level) {
+ return is_trace(type, level) &&
+ ((1 << mdev_to_minor(mdev)) & trace_devs);
+}
+
+#define MTRACE(type, lvl, code...) \
+do { \
+ if (unlikely(is_mdev_trace(mdev, type, lvl))) { \
+ code \
+ } \
+} while (0)
+
+#define TRACE(type, lvl, code...) \
+do { \
+ if (unlikely(is_trace(type, lvl))) { \
+ code \
+ } \
+} while (0)
+
+/* Buffer printing support
+ * dbg_print_flags: used for Flags arg to drbd_print_buffer
+ * - DBGPRINT_BUFFADDR: if set, each line starts with the
+ * virtual address of the line being output. If clear,
+ * each line starts with the offset from the beginning
+ * of the buffer. */
+enum dbg_print_flags {
+ DBGPRINT_BUFFADDR = 0x0001,
+};
+
+extern void drbd_print_uuid(struct drbd_conf *mdev, unsigned int idx);
+
+extern void drbd_print_buffer(const char *prefix, unsigned int flags, int size,
+ const void *buffer, const void *buffer_va,
+ unsigned int length);
+
+/* Bio printing support */
+extern void _dump_bio(const char *pfx, struct drbd_conf *mdev, struct bio *bio, int complete, struct drbd_request *r);
+
+static inline void dump_bio(struct drbd_conf *mdev,
+ struct bio *bio, int complete, struct drbd_request *r)
+{
+ MTRACE(TraceTypeRq, TraceLvlSummary,
+ _dump_bio("Rq", mdev, bio, complete, r);
+ );
+}
+
+static inline void dump_internal_bio(const char *pfx, struct drbd_conf *mdev, struct bio *bio, int complete)
+{
+ MTRACE(TraceTypeIntRq, TraceLvlSummary,
+ _dump_bio(pfx, mdev, bio, complete, NULL);
+ );
+}
+
+/* Packet dumping support */
+extern void _dump_packet(struct drbd_conf *mdev, struct socket *sock,
+ int recv, union Drbd_Polymorph_Packet *p,
+ char *file, int line);
+
+static inline void
+dump_packet(struct drbd_conf *mdev, struct socket *sock,
+ int recv, union Drbd_Polymorph_Packet *p, char *file, int line)
+{
+ MTRACE(TraceTypePacket, TraceLvlSummary,
+ _dump_packet(mdev, sock, recv, p, file, line);
+ );
+}
+
+#else
+
+#define MTRACE(ignored...) ((void)0)
+#define TRACE(ignored...) ((void)0)
+
+#define dump_bio(ignored...) ((void)0)
+#define dump_internal_bio(ignored...) ((void)0)
+#define dump_packet(ignored...) ((void)0)
+#endif
+
+/* drbd_req */
+extern int drbd_make_request_26(struct request_queue *q, struct bio *bio);
+extern int drbd_read_remote(struct drbd_conf *mdev, struct drbd_request *req);
+extern int drbd_merge_bvec(struct request_queue *q, struct bvec_merge_data *bvm, struct bio_vec *bvec);
+extern int is_valid_ar_handle(struct drbd_request *, sector_t);
+
+
+/* drbd_nl.c */
+extern void drbd_suspend_io(struct drbd_conf *mdev);
+extern void drbd_resume_io(struct drbd_conf *mdev);
+extern char *ppsize(char *buf, unsigned long long size);
+extern sector_t drbd_new_dev_size(struct drbd_conf *,
+ struct drbd_backing_dev *);
+enum determin_dev_size_enum { dev_size_error = -1, unchanged = 0, shrunk = 1, grew = 2 };
+extern enum determin_dev_size_enum drbd_determin_dev_size(struct drbd_conf *) __must_hold(local);
+extern void resync_after_online_grow(struct drbd_conf *);
+extern void drbd_setup_queue_param(struct drbd_conf *mdev, unsigned int) __must_hold(local);
+extern int drbd_set_role(struct drbd_conf *mdev, enum drbd_role new_role,
+ int force);
+enum drbd_disk_state drbd_try_outdate_peer(struct drbd_conf *mdev);
+extern int drbd_khelper(struct drbd_conf *mdev, char *cmd);
+
+/* drbd_worker.c */
+extern int drbd_worker(struct Drbd_thread *thi);
+extern void drbd_alter_sa(struct drbd_conf *mdev, int na);
+extern void drbd_start_resync(struct drbd_conf *mdev, enum drbd_conns side);
+extern void resume_next_sg(struct drbd_conf *mdev);
+extern void suspend_other_sg(struct drbd_conf *mdev);
+extern int drbd_resync_finished(struct drbd_conf *mdev);
+/* maybe rather drbd_main.c ? */
+extern int drbd_md_sync_page_io(struct drbd_conf *mdev,
+ struct drbd_backing_dev *bdev, sector_t sector, int rw);
+extern void drbd_ov_oos_found(struct drbd_conf*, sector_t, int);
+
+static inline void ov_oos_print(struct drbd_conf *mdev)
+{
+ if (mdev->ov_last_oos_size) {
+ ERR("Out of sync: start=%llu, size=%lu (sectors)\n",
+ (unsigned long long)mdev->ov_last_oos_start,
+ (unsigned long)mdev->ov_last_oos_size);
+ }
+ mdev->ov_last_oos_size = 0;
+}
+
+
+void drbd_csum(struct drbd_conf *, struct crypto_hash *, struct bio *, void *);
+/* worker callbacks */
+extern int w_req_cancel_conflict(struct drbd_conf *, struct drbd_work *, int);
+extern int w_read_retry_remote(struct drbd_conf *, struct drbd_work *, int);
+extern int w_e_end_data_req(struct drbd_conf *, struct drbd_work *, int);
+extern int w_e_end_rsdata_req(struct drbd_conf *, struct drbd_work *, int);
+extern int w_e_end_csum_rs_req(struct drbd_conf *, struct drbd_work *, int);
+extern int w_e_end_ov_reply(struct drbd_conf *, struct drbd_work *, int);
+extern int w_e_end_ov_req(struct drbd_conf *, struct drbd_work *, int);
+extern int w_ov_finished(struct drbd_conf *, struct drbd_work *, int);
+extern int w_resync_inactive(struct drbd_conf *, struct drbd_work *, int);
+extern int w_resume_next_sg(struct drbd_conf *, struct drbd_work *, int);
+extern int w_io_error(struct drbd_conf *, struct drbd_work *, int);
+extern int w_send_write_hint(struct drbd_conf *, struct drbd_work *, int);
+extern int w_make_resync_request(struct drbd_conf *, struct drbd_work *, int);
+extern int w_send_dblock(struct drbd_conf *, struct drbd_work *, int);
+extern int w_send_barrier(struct drbd_conf *, struct drbd_work *, int);
+extern int w_send_read_req(struct drbd_conf *, struct drbd_work *, int);
+extern int w_prev_work_done(struct drbd_conf *, struct drbd_work *, int);
+extern int w_e_reissue(struct drbd_conf *, struct drbd_work *, int);
+
+extern void resync_timer_fn(unsigned long data);
+
+/* drbd_receiver.c */
+extern int drbd_release_ee(struct drbd_conf *mdev, struct list_head *list);
+extern struct Tl_epoch_entry *drbd_alloc_ee(struct drbd_conf *mdev,
+ u64 id,
+ sector_t sector,
+ unsigned int data_size,
+ gfp_t gfp_mask) __must_hold(local);
+extern void drbd_free_ee(struct drbd_conf *mdev, struct Tl_epoch_entry *e);
+extern void drbd_wait_ee_list_empty(struct drbd_conf *mdev,
+ struct list_head *head);
+extern void _drbd_wait_ee_list_empty(struct drbd_conf *mdev,
+ struct list_head *head);
+extern void drbd_set_recv_tcq(struct drbd_conf *mdev, int tcq_enabled);
+extern void _drbd_clear_done_ee(struct drbd_conf *mdev);
+
+/* yes, there is kernel_setsockopt, but only since 2.6.18. we don't need to
+ * mess with get_fs/set_fs, we know we are KERNEL_DS always. */
+static inline int drbd_setsockopt(struct socket *sock, int level, int optname,
+ char __user *optval, int optlen)
+{
+ int err;
+ if (level == SOL_SOCKET)
+ err = sock_setsockopt(sock, level, optname, optval, optlen);
+ else
+ err = sock->ops->setsockopt(sock, level, optname, optval,
+ optlen);
+ return err;
+}
+
+static inline void drbd_tcp_cork(struct socket *sock)
+{
+ int __user val = 1;
+ (void) drbd_setsockopt(sock, SOL_TCP, TCP_CORK,
+ (char __user *)&val, sizeof(val));
+}
+
+static inline void drbd_tcp_uncork(struct socket *sock)
+{
+ int __user val = 0;
+ (void) drbd_setsockopt(sock, SOL_TCP, TCP_CORK,
+ (char __user *)&val, sizeof(val));
+}
+
+static inline void drbd_tcp_nodelay(struct socket *sock)
+{
+ int __user val = 1;
+ (void) drbd_setsockopt(sock, SOL_TCP, TCP_NODELAY,
+ (char __user *)&val, sizeof(val));
+}
+
+static inline void drbd_tcp_quickack(struct socket *sock)
+{
+ int __user val = 1;
+ (void) drbd_setsockopt(sock, SOL_TCP, TCP_QUICKACK,
+ (char __user *)&val, sizeof(val));
+}
+
+void drbd_bump_write_ordering(struct drbd_conf *mdev, enum write_ordering_e wo);
+
+/* drbd_proc.c */
+extern struct proc_dir_entry *drbd_proc;
+extern struct file_operations drbd_proc_fops;
+extern const char *conns_to_name(enum drbd_conns s);
+extern const char *roles_to_name(enum drbd_role s);
+
+/* drbd_actlog.c */
+extern void drbd_al_begin_io(struct drbd_conf *mdev, sector_t sector);
+extern void drbd_al_complete_io(struct drbd_conf *mdev, sector_t sector);
+extern void drbd_rs_complete_io(struct drbd_conf *mdev, sector_t sector);
+extern int drbd_rs_begin_io(struct drbd_conf *mdev, sector_t sector);
+extern int drbd_try_rs_begin_io(struct drbd_conf *mdev, sector_t sector);
+extern void drbd_rs_cancel_all(struct drbd_conf *mdev);
+extern int drbd_rs_del_all(struct drbd_conf *mdev);
+extern void drbd_rs_failed_io(struct drbd_conf *mdev,
+ sector_t sector, int size);
+extern int drbd_al_read_log(struct drbd_conf *mdev, struct drbd_backing_dev *);
+extern void __drbd_set_in_sync(struct drbd_conf *mdev, sector_t sector,
+ int size, const char *file, const unsigned int line);
+#define drbd_set_in_sync(mdev, sector, size) \
+ __drbd_set_in_sync(mdev, sector, size, __FILE__, __LINE__)
+extern void __drbd_set_out_of_sync(struct drbd_conf *mdev, sector_t sector,
+ int size, const char *file, const unsigned int line);
+#define drbd_set_out_of_sync(mdev, sector, size) \
+ __drbd_set_out_of_sync(mdev, sector, size, __FILE__, __LINE__)
+extern void drbd_al_apply_to_bm(struct drbd_conf *mdev);
+extern void drbd_al_to_on_disk_bm(struct drbd_conf *mdev);
+extern void drbd_al_shrink(struct drbd_conf *mdev);
+
+
+/* drbd_nl.c */
+
+void drbd_nl_cleanup(void);
+int __init drbd_nl_init(void);
+void drbd_bcast_state(struct drbd_conf *mdev, union drbd_state_t);
+void drbd_bcast_sync_progress(struct drbd_conf *mdev);
+void drbd_bcast_ee(struct drbd_conf *mdev,
+ const char *reason, const int dgs,
+ const char* seen_hash, const char* calc_hash,
+ const struct Tl_epoch_entry* e);
+
+
+/** DRBD State macros:
+ * These macros are used to express state changes in easily readable form.
+ *
+ * The NS macros expand to a mask and a value that can be bit-or'ed onto the
+ * current state as soon as the spinlock (req_lock) has been taken.
+ *
+ * The _NS macros are used for state functions that get called with the
+ * spinlock held. These macros expand directly to the new state value.
+ *
+ * Besides the basic forms NS() and _NS(), additional _?NS[23] are defined
+ * to express state changes that affect more than one aspect of the state.
+ *
+ * E.g. NS2(conn, Connected, peer, Secondary)
+ * means that the network connection was established and that the peer
+ * is in secondary role.
+ */
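+/* Illustrative use (mirroring drbd_connect() in drbd_receiver.c):
+ * drbd_request_state(mdev, NS(conn, WFConnection));
+ * Here NS() expands into a mask that selects only the conn field,
+ * and a value that carries the new conn state. */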
+#define peer_mask role_mask
+#define pdsk_mask disk_mask
+#define susp_mask 1
+#define user_isp_mask 1
+#define aftr_isp_mask 1
+
+#define NS(T, S) \
+ ({ union drbd_state_t mask; mask.i = 0; mask.T = T##_mask; mask; }), \
+ ({ union drbd_state_t val; val.i = 0; val.T = (S); val; })
+#define NS2(T1, S1, T2, S2) \
+ ({ union drbd_state_t mask; mask.i = 0; mask.T1 = T1##_mask; \
+ mask.T2 = T2##_mask; mask; }), \
+ ({ union drbd_state_t val; val.i = 0; val.T1 = (S1); \
+ val.T2 = (S2); val; })
+#define NS3(T1, S1, T2, S2, T3, S3) \
+ ({ union drbd_state_t mask; mask.i = 0; mask.T1 = T1##_mask; \
+ mask.T2 = T2##_mask; mask.T3 = T3##_mask; mask; }), \
+ ({ union drbd_state_t val; val.i = 0; val.T1 = (S1); \
+ val.T2 = (S2); val.T3 = (S3); val; })
+
+#define _NS(D, T, S) \
+ D, ({ union drbd_state_t __ns; __ns.i = D->state.i; __ns.T = (S); __ns; })
+#define _NS2(D, T1, S1, T2, S2) \
+ D, ({ union drbd_state_t __ns; __ns.i = D->state.i; __ns.T1 = (S1); \
+ __ns.T2 = (S2); __ns; })
+#define _NS3(D, T1, S1, T2, S2, T3, S3) \
+ D, ({ union drbd_state_t __ns; __ns.i = D->state.i; __ns.T1 = (S1); \
+ __ns.T2 = (S2); __ns.T3 = (S3); __ns; })
+
+/*
+ * inline helper functions
+ *************************/
+
+static inline void drbd_state_lock(struct drbd_conf *mdev)
+{
+ wait_event(mdev->misc_wait,
+ !test_and_set_bit(CLUSTER_ST_CHANGE, &mdev->flags));
+}
+
+static inline void drbd_state_unlock(struct drbd_conf *mdev)
+{
+ clear_bit(CLUSTER_ST_CHANGE, &mdev->flags);
+ wake_up(&mdev->misc_wait);
+}
+
+static inline int _drbd_set_state(struct drbd_conf *mdev,
+ union drbd_state_t ns, enum chg_state_flags flags,
+ struct completion *done)
+{
+ int rv;
+
+ read_lock(&global_state_lock);
+ rv = __drbd_set_state(mdev, ns, flags, done);
+ read_unlock(&global_state_lock);
+
+ return rv;
+}
+
+static inline int drbd_request_state(struct drbd_conf *mdev,
+ union drbd_state_t mask,
+ union drbd_state_t val)
+{
+ return _drbd_request_state(mdev, mask, val, ChgStateVerbose + ChgOrdered);
+}
+
+/**
+ * drbd_chk_io_error: Handles the on_io_error setting; it should be called
+ * from all IO completion handlers. See also drbd_io_error().
+ */
+static inline void __drbd_chk_io_error(struct drbd_conf *mdev, int forcedetach)
+{
+ switch (mdev->bc->dc.on_io_error) {
+ case PassOn:
+ if (!forcedetach) {
+ if (printk_ratelimit())
+ ERR("Local IO failed. Passing error on...\n");
+ break;
+ }
+ /* NOTE fall through to detach case if forcedetach set */
+ case Detach:
+ case CallIOEHelper:
+ if (mdev->state.disk > Failed) {
+ _drbd_set_state(_NS(mdev, disk, Failed), ChgStateHard, NULL);
+ ERR("Local IO failed. Detaching...\n");
+ }
+ break;
+ }
+}
+
+static inline void drbd_chk_io_error(struct drbd_conf *mdev,
+ int error, int forcedetach)
+{
+ if (error) {
+ unsigned long flags;
+ spin_lock_irqsave(&mdev->req_lock, flags);
+ __drbd_chk_io_error(mdev, forcedetach);
+ spin_unlock_irqrestore(&mdev->req_lock, flags);
+ }
+}
+
+/* Returns the first sector number of our meta data,
+ * which, for internal meta data, happens to be the maximum capacity
+ * we could agree upon with our peer
+ */
+static inline sector_t drbd_md_first_sector(struct drbd_backing_dev *bdev)
+{
+ switch (bdev->dc.meta_dev_idx) {
+ case DRBD_MD_INDEX_INTERNAL:
+ case DRBD_MD_INDEX_FLEX_INT:
+ return bdev->md.md_offset + bdev->md.bm_offset;
+ case DRBD_MD_INDEX_FLEX_EXT:
+ default:
+ return bdev->md.md_offset;
+ }
+}
+
+/* returns the last sector number of our meta data,
+ * to be able to catch out of band md access */
+static inline sector_t drbd_md_last_sector(struct drbd_backing_dev *bdev)
+{
+ switch (bdev->dc.meta_dev_idx) {
+ case DRBD_MD_INDEX_INTERNAL:
+ case DRBD_MD_INDEX_FLEX_INT:
+ return bdev->md.md_offset + MD_AL_OFFSET - 1;
+ case DRBD_MD_INDEX_FLEX_EXT:
+ default:
+ return bdev->md.md_offset + bdev->md.md_size_sect;
+ }
+}
+
+/* Returns the number of 512 byte sectors of the device */
+static inline sector_t drbd_get_capacity(struct block_device *bdev)
+{
+ /* return bdev ? get_capacity(bdev->bd_disk) : 0; */
+ return bdev ? bdev->bd_inode->i_size >> 9 : 0;
+}
+
+/* returns the capacity we announce to our peer.
+ * we clip ourselves at the various MAX_SECTORS, because if we don't,
+ * the current implementation will oops sooner or later */
+static inline sector_t drbd_get_max_capacity(struct drbd_backing_dev *bdev)
+{
+ sector_t s;
+ switch (bdev->dc.meta_dev_idx) {
+ case DRBD_MD_INDEX_INTERNAL:
+ case DRBD_MD_INDEX_FLEX_INT:
+ s = drbd_get_capacity(bdev->backing_bdev)
+ ? min_t(sector_t, DRBD_MAX_SECTORS_FLEX,
+ drbd_md_first_sector(bdev))
+ : 0;
+ break;
+ case DRBD_MD_INDEX_FLEX_EXT:
+ s = min_t(sector_t, DRBD_MAX_SECTORS_FLEX,
+ drbd_get_capacity(bdev->backing_bdev));
+ /* clip at maximum size the meta device can support */
+ s = min_t(sector_t, s,
+ BM_EXT_TO_SECT(bdev->md.md_size_sect
+ - bdev->md.bm_offset));
+ break;
+ default:
+ s = min_t(sector_t, DRBD_MAX_SECTORS,
+ drbd_get_capacity(bdev->backing_bdev));
+ }
+ return s;
+}
+
+/* returns the sector number of our meta data 'super' block */
+static inline sector_t drbd_md_ss__(struct drbd_conf *mdev,
+ struct drbd_backing_dev *bdev)
+{
+ switch (bdev->dc.meta_dev_idx) {
+ default: /* external, some index */
+ return MD_RESERVED_SECT * bdev->dc.meta_dev_idx;
+ case DRBD_MD_INDEX_INTERNAL:
+ /* with drbd08, internal meta data is always "flexible" */
+ case DRBD_MD_INDEX_FLEX_INT:
+ /* sizeof(struct md_on_disk_07) == 4k
+ * position: last 4k aligned block of 4k size */
+ if (!bdev->backing_bdev) {
+ if (__ratelimit(&drbd_ratelimit_state)) {
+ ERR("bdev->backing_bdev==NULL\n");
+ dump_stack();
+ }
+ return 0;
+ }
+ return (drbd_get_capacity(bdev->backing_bdev) & ~7ULL)
+ - MD_AL_OFFSET;
+ case DRBD_MD_INDEX_FLEX_EXT:
+ return 0;
+ }
+}
+
+static inline void
+_drbd_queue_work(struct drbd_work_queue *q, struct drbd_work *w)
+{
+ list_add_tail(&w->list, &q->q);
+ up(&q->s);
+}
+
+static inline void
+drbd_queue_work_front(struct drbd_work_queue *q, struct drbd_work *w)
+{
+ unsigned long flags;
+ spin_lock_irqsave(&q->q_lock, flags);
+ list_add(&w->list, &q->q);
+ up(&q->s); /* within the spinlock,
+ see comment near end of drbd_worker() */
+ spin_unlock_irqrestore(&q->q_lock, flags);
+}
+
+static inline void
+drbd_queue_work(struct drbd_work_queue *q, struct drbd_work *w)
+{
+ unsigned long flags;
+ spin_lock_irqsave(&q->q_lock, flags);
+ list_add_tail(&w->list, &q->q);
+ up(&q->s); /* within the spinlock,
+ see comment near end of drbd_worker() */
+ spin_unlock_irqrestore(&q->q_lock, flags);
+}
+
+static inline void wake_asender(struct drbd_conf *mdev)
+{
+ if (test_bit(SIGNAL_ASENDER, &mdev->flags))
+ force_sig(DRBD_SIG, mdev->asender.task);
+}
+
+static inline void request_ping(struct drbd_conf *mdev)
+{
+ set_bit(SEND_PING, &mdev->flags);
+ wake_asender(mdev);
+}
+
+static inline int drbd_send_short_cmd(struct drbd_conf *mdev,
+ enum Drbd_Packet_Cmd cmd)
+{
+ struct Drbd_Header h;
+ return drbd_send_cmd(mdev, USE_DATA_SOCKET, cmd, &h, sizeof(h));
+}
+
+static inline int drbd_send_ping(struct drbd_conf *mdev)
+{
+ struct Drbd_Header h;
+ return drbd_send_cmd(mdev, USE_META_SOCKET, Ping, &h, sizeof(h));
+}
+
+static inline int drbd_send_ping_ack(struct drbd_conf *mdev)
+{
+ struct Drbd_Header h;
+ return drbd_send_cmd(mdev, USE_META_SOCKET, PingAck, &h, sizeof(h));
+}
+
+static inline void drbd_thread_stop(struct Drbd_thread *thi)
+{
+ _drbd_thread_stop(thi, FALSE, TRUE);
+}
+
+static inline void drbd_thread_stop_nowait(struct Drbd_thread *thi)
+{
+ _drbd_thread_stop(thi, FALSE, FALSE);
+}
+
+static inline void drbd_thread_restart_nowait(struct Drbd_thread *thi)
+{
+ _drbd_thread_stop(thi, TRUE, FALSE);
+}
+
+/* counts how many answer packets we expect from our peer,
+ * for either explicit application requests,
+ * or implicit barrier packets as necessary.
+ * increased:
+ * w_send_barrier
+ * _req_mod(req, queue_for_net_write or queue_for_net_read);
+ * it is much easier and equally valid to count what we queue for the
+ * worker, even before it actually was queued or sent.
+ * (drbd_make_request_common; recovery path on read io-error)
+ * decreased:
+ * got_BarrierAck (respective tl_clear, tl_clear_barrier)
+ * _req_mod(req, data_received)
+ * [from receive_DataReply]
+ * _req_mod(req, write_acked_by_peer or recv_acked_by_peer or neg_acked)
+ * [from got_BlockAck (WriteAck, RecvAck)]
+ * for some reason it is NOT decreased in got_NegAck,
+ * but in the resulting cleanup code from report_params.
+ * we should try to remember the reason for that...
+ * _req_mod(req, send_failed or send_canceled)
+ * _req_mod(req, connection_lost_while_pending)
+ * [from tl_clear_barrier]
+ */
+static inline void inc_ap_pending(struct drbd_conf *mdev)
+{
+ atomic_inc(&mdev->ap_pending_cnt);
+}
+
+#define ERR_IF_CNT_IS_NEGATIVE(which) \
+ if (atomic_read(&mdev->which) < 0) \
+ ERR("in %s:%d: " #which " = %d < 0 !\n", \
+ __func__ , __LINE__ , \
+ atomic_read(&mdev->which))
+
+#define dec_ap_pending(mdev) do { \
+ typecheck(struct drbd_conf *, mdev); \
+ if (atomic_dec_and_test(&mdev->ap_pending_cnt)) \
+ wake_up(&mdev->misc_wait); \
+ ERR_IF_CNT_IS_NEGATIVE(ap_pending_cnt); } while (0)
+
+/* counts how many resync-related answers we still expect from the peer
+ * increase decrease
+ * SyncTarget sends RSDataRequest (and expects RSDataReply)
+ * SyncSource sends RSDataReply (and expects WriteAck with ID_SYNCER)
+ * (or NegAck with ID_SYNCER)
+ */
+static inline void inc_rs_pending(struct drbd_conf *mdev)
+{
+ atomic_inc(&mdev->rs_pending_cnt);
+}
+
+#define dec_rs_pending(mdev) do { \
+ typecheck(struct drbd_conf *, mdev); \
+ atomic_dec(&mdev->rs_pending_cnt); \
+ ERR_IF_CNT_IS_NEGATIVE(rs_pending_cnt); } while (0)
+
+/* counts how many answers we still need to send to the peer.
+ * increased on
+ * receive_Data unless protocol A;
+ * we need to send a RecvAck (proto B)
+ * or WriteAck (proto C)
+ * receive_RSDataReply (recv_resync_read) we need to send a WriteAck
+ * receive_DataRequest (receive_RSDataRequest) we need to send back Data
+ * receive_Barrier_* we need to send a BarrierAck
+ */
+static inline void inc_unacked(struct drbd_conf *mdev)
+{
+ atomic_inc(&mdev->unacked_cnt);
+}
+
+#define dec_unacked(mdev) do { \
+ typecheck(struct drbd_conf *, mdev); \
+ atomic_dec(&mdev->unacked_cnt); \
+ ERR_IF_CNT_IS_NEGATIVE(unacked_cnt); } while (0)
+
+#define sub_unacked(mdev, n) do { \
+ typecheck(struct drbd_conf *, mdev); \
+ atomic_sub(n, &mdev->unacked_cnt); \
+ ERR_IF_CNT_IS_NEGATIVE(unacked_cnt); } while (0)
+
+
+static inline void dec_net(struct drbd_conf *mdev)
+{
+ if (atomic_dec_and_test(&mdev->net_cnt))
+ wake_up(&mdev->misc_wait);
+}
+
+/**
+ * inc_net: Returns TRUE when it is ok to access mdev->net_conf. You
+ * should call dec_net() when finished looking at mdev->net_conf.
+ */
+static inline int inc_net(struct drbd_conf *mdev)
+{
+ int have_net_conf;
+
+ atomic_inc(&mdev->net_cnt);
+ have_net_conf = mdev->state.conn >= Unconnected;
+ if (!have_net_conf)
+ dec_net(mdev);
+ return have_net_conf;
+}
+
+/**
+ * inc_local: Returns TRUE when local IO is possible. If it returns
+ * TRUE you should call dec_local() after IO is completed.
+ */
+#define inc_local_if_state(M,MINS) __cond_lock(local, _inc_local_if_state(M,MINS))
+#define inc_local(M) __cond_lock(local, _inc_local_if_state(M,Inconsistent))
+
+static inline void dec_local(struct drbd_conf *mdev)
+{
+ __release(local);
+ if (atomic_dec_and_test(&mdev->local_cnt))
+ wake_up(&mdev->misc_wait);
+ D_ASSERT(atomic_read(&mdev->local_cnt) >= 0);
+}
+
+#ifndef __CHECKER__
+static inline int _inc_local_if_state(struct drbd_conf *mdev, enum drbd_disk_state mins)
+{
+ int io_allowed;
+
+ atomic_inc(&mdev->local_cnt);
+ io_allowed = (mdev->state.disk >= mins);
+ if (!io_allowed)
+ dec_local(mdev);
+ return io_allowed;
+}
+#else
+extern int _inc_local_if_state(struct drbd_conf *mdev, enum drbd_disk_state mins);
+#endif
+
+/* you must have an "inc_local" reference */
+static inline void drbd_get_syncer_progress(struct drbd_conf *mdev,
+ unsigned long *bits_left, unsigned int *per_mil_done)
+{
+ /*
+ * this is to break it at compile time when we change that
+ * (we may feel 4TB maximum storage per drbd is not enough)
+ */
+ typecheck(unsigned long, mdev->rs_total);
+
+ /* note: both rs_total and rs_left are in bits, i.e. in
+ * units of BM_BLOCK_SIZE.
+ * for the percentage, we don't care. */
+
+ *bits_left = drbd_bm_total_weight(mdev) - mdev->rs_failed;
+ /* >> 10 to prevent overflow,
+ * +1 to prevent division by zero */
+ if (*bits_left > mdev->rs_total) {
+ /* doh. maybe a logic bug somewhere.
+ * may also be just a race condition
+ * between this and a disconnect during sync.
+ * for now, just prevent in-kernel buffer overflow.
+ */
+ smp_rmb();
+ drbd_WARN("cs:%s rs_left=%lu > rs_total=%lu (rs_failed %lu)\n",
+ conns_to_name(mdev->state.conn),
+ *bits_left, mdev->rs_total, mdev->rs_failed);
+ *per_mil_done = 0;
+ } else {
+ /* make sure the calculation happens in long context */
+ unsigned long tmp = 1000UL -
+ (*bits_left >> 10)*1000UL
+ / ((mdev->rs_total >> 10) + 1UL);
+ *per_mil_done = tmp;
+ }
+}
+
+
+/* this throttles on-the-fly application requests
+ * according to max_buffers settings;
+ * maybe re-implement using semaphores? */
+static inline int drbd_get_max_buffers(struct drbd_conf *mdev)
+{
+ int mxb = 1000000; /* arbitrary limit on open requests */
+ if (inc_net(mdev)) {
+ mxb = mdev->net_conf->max_buffers;
+ dec_net(mdev);
+ }
+ return mxb;
+}
+
+static inline int drbd_state_is_stable(union drbd_state_t s)
+{
+
+ /* DO NOT add a default clause, we want the compiler to warn us
+ * for any newly introduced state we may have forgotten to add here */
+
+ switch ((enum drbd_conns)s.conn) {
+ /* new io only accepted when there is no connection, ... */
+ case StandAlone:
+ case WFConnection:
+ /* ... or there is a well established connection. */
+ case Connected:
+ case SyncSource:
+ case SyncTarget:
+ case VerifyS:
+ case VerifyT:
+ case PausedSyncS:
+ case PausedSyncT:
+ /* maybe stable, look at the disk state */
+ break;
+
+ /* no new io accepted during transitional states
+ * like handshake or teardown */
+ case Disconnecting:
+ case Unconnected:
+ case Timeout:
+ case BrokenPipe:
+ case NetworkFailure:
+ case ProtocolError:
+ case TearDown:
+ case WFReportParams:
+ case StartingSyncS:
+ case StartingSyncT:
+ case WFBitMapS:
+ case WFBitMapT:
+ case WFSyncUUID:
+ case conn_mask:
+ /* not "stable" */
+ return 0;
+ }
+
+ switch ((enum drbd_disk_state)s.disk) {
+ case Diskless:
+ case Inconsistent:
+ case Outdated:
+ case Consistent:
+ case UpToDate:
+ /* disk state is stable as well. */
+ break;
+
+ /* no new io accepted during transitional states */
+ case Attaching:
+ case Failed:
+ case Negotiating:
+ case DUnknown:
+ case disk_mask:
+ /* not "stable" */
+ return 0;
+ }
+
+ return 1;
+}
+
+static inline int __inc_ap_bio_cond(struct drbd_conf *mdev)
+{
+ int mxb = drbd_get_max_buffers(mdev);
+
+ if (mdev->state.susp)
+ return 0;
+ if (test_bit(SUSPEND_IO, &mdev->flags))
+ return 0;
+
+ /* to avoid potential deadlock or bitmap corruption,
+ * in various places, we only allow new application io
+ * to start during "stable" states. */
+
+ /* no new io accepted when attaching or detaching the disk */
+ if (!drbd_state_is_stable(mdev->state))
+ return 0;
+
+ /* since some older kernels don't have atomic_add_unless,
+ * and we are within the spinlock anyway, we have this workaround. */
+ if (atomic_read(&mdev->ap_bio_cnt) > mxb)
+ return 0;
+ if (test_bit(BITMAP_IO, &mdev->flags))
+ return 0;
+ return 1;
+}
+
+/* I'd like to use wait_event_lock_irq,
+ * but I'm not sure when it got introduced,
+ * and not sure in which kernel versions it takes 3 or 4 arguments */
+static inline void inc_ap_bio(struct drbd_conf *mdev, int one_or_two)
+{
+ /* compare with after_state_ch,
+ * os.conn != WFBitMapS && ns.conn == WFBitMapS */
+ DEFINE_WAIT(wait);
+
+ /* we wait here
+ * as long as the device is suspended,
+ * until the bitmap is no longer on the fly during connection
+ * handshake, and as long as we would exceed the max_buffer limit.
+ *
+ * to avoid races with the reconnect code,
+ * we need to atomic_inc within the spinlock. */
+
+ spin_lock_irq(&mdev->req_lock);
+ while (!__inc_ap_bio_cond(mdev)) {
+ prepare_to_wait(&mdev->misc_wait, &wait, TASK_UNINTERRUPTIBLE);
+ spin_unlock_irq(&mdev->req_lock);
+ schedule();
+ finish_wait(&mdev->misc_wait, &wait);
+ spin_lock_irq(&mdev->req_lock);
+ }
+ atomic_add(one_or_two, &mdev->ap_bio_cnt);
+ spin_unlock_irq(&mdev->req_lock);
+}
+
+static inline void dec_ap_bio(struct drbd_conf *mdev)
+{
+ int mxb = drbd_get_max_buffers(mdev);
+ int ap_bio = atomic_dec_return(&mdev->ap_bio_cnt);
+
+ D_ASSERT(ap_bio >= 0);
+ /* this currently does wake_up for every dec_ap_bio!
+ * maybe rather introduce some type of hysteresis?
+ * e.g. (ap_bio == mxb/2 || ap_bio == 0) ? */
+ if (ap_bio < mxb)
+ wake_up(&mdev->misc_wait);
+ if (ap_bio == 0 && test_bit(BITMAP_IO, &mdev->flags)) {
+ if (!test_and_set_bit(BITMAP_IO_QUEUED, &mdev->flags))
+ drbd_queue_work(&mdev->data.work, &mdev->bm_io_work.w);
+ }
+}
+
+static inline void drbd_set_ed_uuid(struct drbd_conf *mdev, u64 val)
+{
+ mdev->ed_uuid = val;
+
+ MTRACE(TraceTypeUuid, TraceLvlMetrics,
+ INFO(" exposed data uuid now %016llX\n",
+ (unsigned long long)val);
+ );
+}
+
+static inline int seq_cmp(u32 a, u32 b)
+{
+ /* we assume wrap around at 32bit.
+ * for wrap around at 24bit (old atomic_t),
+ * we'd have to
+ * a <<= 8; b <<= 8;
+ */
+ return (s32)(a) - (s32)(b);
+}
+#define seq_lt(a, b) (seq_cmp((a), (b)) < 0)
+#define seq_gt(a, b) (seq_cmp((a), (b)) > 0)
+#define seq_ge(a, b) (seq_cmp((a), (b)) >= 0)
+#define seq_le(a, b) (seq_cmp((a), (b)) <= 0)
+/* CAUTION: please no side effects in arguments! */
+#define seq_max(a, b) ((u32)(seq_gt((a), (b)) ? (a) : (b)))
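+/* Worked example (illustrative): seq_cmp(1, 0xfffffffe) = 1 - (-2) = 3 > 0,
+ * so sequence number 1 counts as newer than 0xfffffffe across the
+ * 32bit wrap. */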
+
+static inline void update_peer_seq(struct drbd_conf *mdev, unsigned int new_seq)
+{
+ unsigned int m;
+ spin_lock(&mdev->peer_seq_lock);
+ m = seq_max(mdev->peer_seq, new_seq);
+ mdev->peer_seq = m;
+ spin_unlock(&mdev->peer_seq_lock);
+ if (m == new_seq)
+ wake_up(&mdev->seq_wait);
+}
+
+static inline void drbd_update_congested(struct drbd_conf *mdev)
+{
+ struct sock *sk = mdev->data.socket->sk;
+ if (sk->sk_wmem_queued > sk->sk_sndbuf * 4 / 5)
+ set_bit(NET_CONGESTED, &mdev->flags);
+}
+
+static inline int drbd_queue_order_type(struct drbd_conf *mdev)
+{
+ /* sorry, we currently have no working implementation
+ * of distributed TCQ stuff */
+#ifndef QUEUE_ORDERED_NONE
+#define QUEUE_ORDERED_NONE 0
+#endif
+ return QUEUE_ORDERED_NONE;
+}
+
+static inline void drbd_blk_run_queue(struct request_queue *q)
+{
+ if (q && q->unplug_fn)
+ q->unplug_fn(q);
+}
+
+static inline void drbd_kick_lo(struct drbd_conf *mdev)
+{
+ if (inc_local(mdev)) {
+ drbd_blk_run_queue(bdev_get_queue(mdev->bc->backing_bdev));
+ dec_local(mdev);
+ }
+}
+
+static inline void drbd_md_flush(struct drbd_conf *mdev)
+{
+ int r;
+
+ if (test_bit(MD_NO_BARRIER, &mdev->flags))
+ return;
+
+ r = blkdev_issue_flush(mdev->bc->md_bdev, NULL);
+ if (r) {
+ set_bit(MD_NO_BARRIER, &mdev->flags);
+ ERR("meta data flush failed with status %d, disabling md-flushes\n", r);
+ }
+}
+
+#endif
diff -uNrp linux-2.6.29/drivers/block/drbd/drbd_wrappers.h linux-2.6.29-drbd/drivers/block/drbd/drbd_wrappers.h
--- linux-2.6.29/drivers/block/drbd/drbd_wrappers.h 1970-01-01 01:00:00.000000000 +0100
+++ linux-2.6.29-drbd/drivers/block/drbd/drbd_wrappers.h 2009-03-26 15:55:39.587133000 +0100
@@ -0,0 +1,117 @@
+#include <linux/ctype.h>
+#include <linux/mm.h>
+
+
+/* see get_sb_bdev and bd_claim */
+extern char *drbd_sec_holder;
+
+static inline sector_t drbd_get_hardsect(struct block_device *bdev)
+{
+ return bdev->bd_disk->queue->hardsect_size;
+}
+
+/* sets the number of 512 byte sectors of our virtual device */
+static inline void drbd_set_my_capacity(struct drbd_conf *mdev,
+ sector_t size)
+{
+ /* set_capacity(mdev->this_bdev->bd_disk, size); */
+ set_capacity(mdev->vdisk, size);
+ mdev->this_bdev->bd_inode->i_size = (loff_t)size << 9;
+}
+
+#define drbd_bio_uptodate(bio) bio_flagged(bio, BIO_UPTODATE)
+
+static inline int drbd_bio_has_active_page(struct bio *bio)
+{
+ struct bio_vec *bvec;
+ int i;
+
+ __bio_for_each_segment(bvec, bio, i, 0) {
+ if (page_count(bvec->bv_page) > 1)
+ return 1;
+ }
+
+ return 0;
+}
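+/* (A page count > 1 presumably means the network stack still holds a
+ * reference from a sendpage style transmit; reclaim_net_ee() in
+ * drbd_receiver.c relies on this to delay freeing such EEs.) */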
+
+/* bi_end_io handlers */
+extern void drbd_md_io_complete(struct bio *bio, int error);
+extern void drbd_endio_read_sec(struct bio *bio, int error);
+extern void drbd_endio_write_sec(struct bio *bio, int error);
+extern void drbd_endio_pri(struct bio *bio, int error);
+
+/* how to get to the kobj of a gendisk.
+ * see also upstream commits
+ * edfaa7c36574f1bf09c65ad602412db9da5f96bf
+ * ed9e1982347b36573cd622ee5f4e2a7ccd79b3fd
+ * 548b10eb2959c96cef6fc29fc96e0931eeb53bc5
+ */
+#ifndef dev_to_disk
+# define disk_to_kobj(disk) (&(disk)->kobj)
+#else
+# ifndef disk_to_dev
+# define disk_to_dev(disk) (&(disk)->dev)
+# endif
+# define disk_to_kobj(disk) (&disk_to_dev(disk)->kobj)
+#endif
+static inline void drbd_kobject_uevent(struct drbd_conf *mdev)
+{
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,10)
+#if LINUX_VERSION_CODE <= KERNEL_VERSION(2,6,15)
+ kobject_uevent(disk_to_kobj(mdev->vdisk), KOBJ_CHANGE, NULL);
+#else
+ kobject_uevent(disk_to_kobj(mdev->vdisk), KOBJ_CHANGE);
+ /* rhel4 / sles9 and older don't have this at all,
+ * which means user space (udev) won't get events about possible changes of
+ * corresponding resource + disk names after the initial drbd minor creation.
+ */
+#endif
+#endif
+}
+
+
+/*
+ * used to submit our private bio
+ */
+static inline void drbd_generic_make_request(struct drbd_conf *mdev,
+ int fault_type, struct bio *bio)
+{
+ __release(local);
+ if (!bio->bi_bdev) {
+ printk(KERN_ERR "drbd%d: drbd_generic_make_request: "
+ "bio->bi_bdev == NULL\n",
+ mdev_to_minor(mdev));
+ dump_stack();
+ bio_endio(bio, -ENODEV);
+ return;
+ }
+
+ if (FAULT_ACTIVE(mdev, fault_type))
+ bio_endio(bio, -EIO);
+ else
+ generic_make_request(bio);
+}
+
+static inline void drbd_plug_device(struct drbd_conf *mdev)
+{
+ struct request_queue *q;
+ q = bdev_get_queue(mdev->this_bdev);
+
+ spin_lock_irq(q->queue_lock);
+
+/* XXX the check on !blk_queue_plugged is redundant,
+ * implicitly checked in blk_plug_device */
+
+ if (!blk_queue_plugged(q)) {
+ blk_plug_device(q);
+ del_timer(&q->unplug_timer);
+ /* unplugging should not happen automatically... */
+ }
+ spin_unlock_irq(q->queue_lock);
+}
+
+#ifndef __CHECKER__
+# undef __cond_lock
+# define __cond_lock(x,c) (c)
+#endif
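+/* Outside of sparse ("__CHECKER__") builds, __cond_lock(x, c) is thus
+ * redefined to just evaluate the condition, so the inc_local*() macros
+ * in drbd_int.h compile down to plain _inc_local_if_state() calls. */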
+
Nearly everything of the "receiver" and the "asender" is in this file. The
receiver is the thread that processes all data packets. The receiver might
get blocked while waiting for memory, or be slowed down while submitting IO.
The asender, on the other hand, is used to send out acknowledgements and
to receive them. It only blocks while waiting on its socket.
Signed-off-by: Philipp Reisner <[email protected]>
Signed-off-by: Lars Ellenberg <[email protected]>
---
diff -uNrp linux-2.6.29/drivers/block/drbd/drbd_receiver.c linux-2.6.29-drbd/drivers/block/drbd/drbd_receiver.c
--- linux-2.6.29/drivers/block/drbd/drbd_receiver.c 1970-01-01 01:00:00.000000000 +0100
+++ linux-2.6.29-drbd/drivers/block/drbd/drbd_receiver.c 2009-03-30 16:51:49.739133000 +0200
@@ -0,0 +1,4375 @@
+/*
+ drbd_receiver.c
+
+ This file is part of DRBD by Philipp Reisner and Lars Ellenberg.
+
+ Copyright (C) 2001-2008, LINBIT Information Technologies GmbH.
+ Copyright (C) 1999-2008, Philipp Reisner <[email protected]>.
+ Copyright (C) 2002-2008, Lars Ellenberg <[email protected]>.
+
+ drbd is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2, or (at your option)
+ any later version.
+
+ drbd is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with drbd; see the file COPYING. If not, write to
+ the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA.
+ */
+
+
+#include <linux/autoconf.h>
+#include <linux/module.h>
+
+#include <asm/uaccess.h>
+#include <net/sock.h>
+
+#include <linux/version.h>
+#include <linux/fs.h>
+#include <linux/file.h>
+#include <linux/in.h>
+#include <linux/mm.h>
+#include <linux/drbd_config.h>
+#include <linux/memcontrol.h>
+#include <linux/mm_inline.h>
+#include <linux/slab.h>
+#include <linux/smp_lock.h>
+#include <linux/pkt_sched.h>
+#define __KERNEL_SYSCALLS__
+#include <linux/unistd.h>
+#include <linux/vmalloc.h>
+#include <linux/random.h>
+#include <linux/mm.h>
+#include <linux/string.h>
+#include <linux/scatterlist.h>
+#include <linux/drbd.h>
+#include "drbd_int.h"
+#include "drbd_req.h"
+
+#include "drbd_vli.h"
+
+struct flush_work {
+ struct drbd_work w;
+ struct drbd_epoch *epoch;
+};
+
+enum epoch_event {
+ EV_put,
+ EV_got_barrier_nr,
+ EV_barrier_done,
+ EV_became_last,
+ EV_cleanup = 32, /* used as flag */
+};
+
+enum finish_epoch {
+ FE_still_live,
+ FE_destroyed,
+ FE_recycled,
+};
+
+STATIC int drbd_do_handshake(struct drbd_conf *mdev);
+STATIC int drbd_do_auth(struct drbd_conf *mdev);
+
+STATIC enum finish_epoch drbd_may_finish_epoch(struct drbd_conf *, struct drbd_epoch *, enum epoch_event);
+STATIC int e_end_block(struct drbd_conf *, struct drbd_work *, int);
+static inline struct drbd_epoch *previous_epoch(struct drbd_conf *mdev, struct drbd_epoch *epoch)
+{
+ struct drbd_epoch *prev;
+ spin_lock(&mdev->epoch_lock);
+ prev = list_entry(epoch->list.prev, struct drbd_epoch, list);
+ if (prev == epoch || prev == mdev->current_epoch)
+ prev = NULL;
+ spin_unlock(&mdev->epoch_lock);
+ return prev;
+}
+
+#define GFP_TRY (__GFP_HIGHMEM | __GFP_NOWARN)
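+/* The page pool is a singly linked list of free pages, chained through
+ * the otherwise unused page_private() field; see drbd_pp_free() below. */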
+
+/**
+ * drbd_pp_alloc: Returns a page. Fails only if a signal comes in.
+ */
+STATIC struct page *drbd_pp_alloc(struct drbd_conf *mdev, gfp_t gfp_mask)
+{
+ unsigned long flags = 0;
+ struct page *page;
+ DEFINE_WAIT(wait);
+
+ spin_lock_irqsave(&drbd_pp_lock, flags);
+ page = drbd_pp_pool;
+ if (page) {
+ drbd_pp_pool = (struct page *)page_private(page);
+ set_page_private(page, 0); /* just to be polite */
+ drbd_pp_vacant--;
+ }
+ spin_unlock_irqrestore(&drbd_pp_lock, flags);
+ if (page)
+ goto got_page;
+
+ drbd_kick_lo(mdev);
+
+ for (;;) {
+ prepare_to_wait(&drbd_pp_wait, &wait, TASK_INTERRUPTIBLE);
+
+ /* try the pool again, maybe drbd_kick_lo freed some pages */
+ spin_lock_irqsave(&drbd_pp_lock, flags);
+ page = drbd_pp_pool;
+ if (page) {
+ drbd_pp_pool = (struct page *)page_private(page);
+ drbd_pp_vacant--;
+ }
+ spin_unlock_irqrestore(&drbd_pp_lock, flags);
+
+ if (page)
+ break;
+
+ /* hm. pool was empty. try to allocate from kernel.
+ * don't wait if none is available, though.
+ */
+ if (atomic_read(&mdev->pp_in_use)
+ < mdev->net_conf->max_buffers) {
+ page = alloc_page(GFP_TRY);
+ if (page)
+ break;
+ }
+
+ /* doh. still no page.
+ * either used up the configured maximum number,
+ * or we are low on memory.
+ * wait for someone to return a page into the pool.
+ * unless, of course, someone signalled us.
+ */
+ if (signal_pending(current)) {
+ drbd_WARN("drbd_pp_alloc interrupted!\n");
+ finish_wait(&drbd_pp_wait, &wait);
+ return NULL;
+ }
+ drbd_kick_lo(mdev);
+ if (!(gfp_mask & __GFP_WAIT)) {
+ finish_wait(&drbd_pp_wait, &wait);
+ return NULL;
+ }
+ schedule();
+ }
+ finish_wait(&drbd_pp_wait, &wait);
+
+ got_page:
+ atomic_inc(&mdev->pp_in_use);
+ return page;
+}
+
+STATIC void drbd_pp_free(struct drbd_conf *mdev, struct page *page)
+{
+ unsigned long flags = 0;
+ int free_it;
+
+ spin_lock_irqsave(&drbd_pp_lock, flags);
+ if (drbd_pp_vacant > (DRBD_MAX_SEGMENT_SIZE/PAGE_SIZE)*minor_count) {
+ free_it = 1;
+ } else {
+ set_page_private(page, (unsigned long)drbd_pp_pool);
+ drbd_pp_pool = page;
+ drbd_pp_vacant++;
+ free_it = 0;
+ }
+ spin_unlock_irqrestore(&drbd_pp_lock, flags);
+
+ atomic_dec(&mdev->pp_in_use);
+
+ if (free_it)
+ __free_page(page);
+
+ wake_up(&drbd_pp_wait);
+}
+
+/*
+You need to hold the req_lock:
+ drbd_free_ee()
+ _drbd_wait_ee_list_empty()
+
+You must not have the req_lock:
+ drbd_alloc_ee()
+ drbd_init_ee()
+ drbd_release_ee()
+ drbd_ee_fix_bhs()
+ drbd_process_done_ee()
+ drbd_clear_done_ee()
+ drbd_wait_ee_list_empty()
+*/
+
+struct Tl_epoch_entry *drbd_alloc_ee(struct drbd_conf *mdev,
+ u64 id,
+ sector_t sector,
+ unsigned int data_size,
+ gfp_t gfp_mask) __must_hold(local)
+{
+ struct request_queue *q;
+ struct Tl_epoch_entry *e;
+ struct bio_vec *bvec;
+ struct page *page;
+ struct bio *bio;
+ unsigned int ds;
+ int i;
+
+ e = mempool_alloc(drbd_ee_mempool, gfp_mask & ~__GFP_HIGHMEM);
+ if (!e) {
+ if (!(gfp_mask & __GFP_NOWARN))
+ ERR("alloc_ee: Allocation of an EE failed\n");
+ return NULL;
+ }
+
+ bio = bio_alloc(gfp_mask & ~__GFP_HIGHMEM, div_ceil(data_size, PAGE_SIZE));
+ if (!bio) {
+ if (!(gfp_mask & __GFP_NOWARN))
+ ERR("alloc_ee: Allocation of a bio failed\n");
+ goto fail1;
+ }
+
+ bio->bi_bdev = mdev->bc->backing_bdev;
+ bio->bi_sector = sector;
+
+ ds = data_size;
+ while (ds) {
+ page = drbd_pp_alloc(mdev, gfp_mask);
+ if (!page) {
+ if (!(gfp_mask & __GFP_NOWARN))
+ ERR("alloc_ee: Allocation of a page failed\n");
+ goto fail2;
+ }
+ if (!bio_add_page(bio, page, min_t(int, ds, PAGE_SIZE), 0)) {
+ drbd_pp_free(mdev, page);
+ ERR("alloc_ee: bio_add_page(s=%llu,"
+ "data_size=%u,ds=%u) failed\n",
+ (unsigned long long)sector, data_size, ds);
+
+ q = bdev_get_queue(bio->bi_bdev);
+ if (q->merge_bvec_fn) {
+ struct bvec_merge_data bvm = {
+ .bi_bdev = bio->bi_bdev,
+ .bi_sector = bio->bi_sector,
+ .bi_size = bio->bi_size,
+ .bi_rw = bio->bi_rw,
+ };
+ int l = q->merge_bvec_fn(q, &bvm,
+ &bio->bi_io_vec[bio->bi_vcnt]);
+ ERR("merge_bvec_fn() = %d\n", l);
+ }
+
+ /* dump more of the bio. */
+ DUMPI(bio->bi_max_vecs);
+ DUMPI(bio->bi_vcnt);
+ DUMPI(bio->bi_size);
+ DUMPI(bio->bi_phys_segments);
+
+ goto fail2;
+ }
+ ds -= min_t(int, ds, PAGE_SIZE);
+ }
+
+ D_ASSERT(data_size == bio->bi_size);
+
+ bio->bi_private = e;
+ e->mdev = mdev;
+ e->sector = sector;
+ e->size = bio->bi_size;
+
+ e->private_bio = bio;
+ e->block_id = id;
+ INIT_HLIST_NODE(&e->colision);
+ e->epoch = NULL;
+ e->flags = 0;
+
+ MTRACE(TraceTypeEE, TraceLvlAll,
+ INFO("allocated EE sec=%llus size=%u ee=%p\n",
+ (unsigned long long)sector, data_size, e);
+ );
+
+ return e;
+
+ fail2:
+ __bio_for_each_segment(bvec, bio, i, 0) {
+ drbd_pp_free(mdev, bvec->bv_page);
+ }
+ bio_put(bio);
+ fail1:
+ mempool_free(e, drbd_ee_mempool);
+
+ return NULL;
+}
+
+void drbd_free_ee(struct drbd_conf *mdev, struct Tl_epoch_entry *e)
+{
+ struct bio *bio = e->private_bio;
+ struct bio_vec *bvec;
+ int i;
+
+ MTRACE(TraceTypeEE, TraceLvlAll,
+ INFO("Free EE sec=%llus size=%u ee=%p\n",
+ (unsigned long long)e->sector, e->size, e);
+ );
+
+ __bio_for_each_segment(bvec, bio, i, 0) {
+ drbd_pp_free(mdev, bvec->bv_page);
+ }
+
+ bio_put(bio);
+
+ D_ASSERT(hlist_unhashed(&e->colision));
+
+ mempool_free(e, drbd_ee_mempool);
+}
+
+/* currently on module unload only */
+int drbd_release_ee(struct drbd_conf *mdev, struct list_head *list)
+{
+ int count = 0;
+ struct Tl_epoch_entry *e;
+ struct list_head *le;
+
+ spin_lock_irq(&mdev->req_lock);
+ while (!list_empty(list)) {
+ le = list->next;
+ list_del(le);
+ e = list_entry(le, struct Tl_epoch_entry, w.list);
+ drbd_free_ee(mdev, e);
+ count++;
+ }
+ spin_unlock_irq(&mdev->req_lock);
+
+ return count;
+}
+
+
+STATIC void reclaim_net_ee(struct drbd_conf *mdev)
+{
+ struct Tl_epoch_entry *e;
+ struct list_head *le, *tle;
+
+ /* The EEs are always appended to the end of the list. Since
+ they are sent in order over the wire, they have to finish
+ in order. As soon as we see the first unfinished one, we can
+ stop examining the list... */
+
+ list_for_each_safe(le, tle, &mdev->net_ee) {
+ e = list_entry(le, struct Tl_epoch_entry, w.list);
+ if (drbd_bio_has_active_page(e->private_bio))
+ break;
+ list_del(le);
+ drbd_free_ee(mdev, e);
+ }
+}
+
+
+/*
+ * This function is called from _asender only_
+ * but see also comments in _req_mod(,barrier_acked)
+ * and receive_Barrier.
+ *
+ * Move entries from net_ee to done_ee, if ready.
+ * Grab done_ee, call all callbacks, free the entries.
+ * The callbacks typically send out ACKs.
+ */
+STATIC int drbd_process_done_ee(struct drbd_conf *mdev)
+{
+ LIST_HEAD(work_list);
+ struct Tl_epoch_entry *e, *t;
+ int ok = 1;
+
+ spin_lock_irq(&mdev->req_lock);
+ reclaim_net_ee(mdev);
+ list_splice_init(&mdev->done_ee, &work_list);
+ spin_unlock_irq(&mdev->req_lock);
+
+ /* possible callbacks here:
+ * e_end_block, e_end_resync_block, and e_send_discard_ack;
+ * all ignore the last argument.
+ */
+ list_for_each_entry_safe(e, t, &work_list, w.list) {
+ MTRACE(TraceTypeEE, TraceLvlAll,
+ INFO("Process EE on done_ee sec=%llus size=%u ee=%p\n",
+ (unsigned long long)e->sector, e->size, e);
+ );
+ /* list_del not necessary, next/prev members not touched */
+ if (e->w.cb(mdev, &e->w, 0) == 0)
+ ok = 0;
+ drbd_free_ee(mdev, e);
+ }
+ wake_up(&mdev->ee_wait);
+
+ return ok;
+}
+
+
+
+/* clean-up helper for drbd_disconnect */
+void _drbd_clear_done_ee(struct drbd_conf *mdev)
+{
+ struct list_head *le;
+ struct Tl_epoch_entry *e;
+ struct drbd_epoch *epoch;
+ int n = 0;
+
+
+ reclaim_net_ee(mdev);
+
+ while (!list_empty(&mdev->done_ee)) {
+ le = mdev->done_ee.next;
+ list_del(le);
+ e = list_entry(le, struct Tl_epoch_entry, w.list);
+ if (mdev->net_conf->wire_protocol == DRBD_PROT_C
+ || is_syncer_block_id(e->block_id))
+ ++n;
+
+ if (!hlist_unhashed(&e->colision))
+ hlist_del_init(&e->colision);
+
+ if (e->epoch) {
+ if (e->flags & EE_IS_BARRIER) {
+ epoch = previous_epoch(mdev, e->epoch);
+ if (epoch)
+ drbd_may_finish_epoch(mdev, epoch, EV_barrier_done + EV_cleanup);
+ }
+ drbd_may_finish_epoch(mdev, e->epoch, EV_put + EV_cleanup);
+ }
+ drbd_free_ee(mdev, e);
+ }
+
+ sub_unacked(mdev, n);
+}
+
+void _drbd_wait_ee_list_empty(struct drbd_conf *mdev, struct list_head *head)
+{
+ DEFINE_WAIT(wait);
+
+ /* avoids spin_lock/unlock
+ * and calling prepare_to_wait in the fast path */
+ while (!list_empty(head)) {
+ prepare_to_wait(&mdev->ee_wait, &wait, TASK_UNINTERRUPTIBLE);
+ spin_unlock_irq(&mdev->req_lock);
+ drbd_kick_lo(mdev);
+ schedule();
+ finish_wait(&mdev->ee_wait, &wait);
+ spin_lock_irq(&mdev->req_lock);
+ }
+}
+
+void drbd_wait_ee_list_empty(struct drbd_conf *mdev, struct list_head *head)
+{
+ spin_lock_irq(&mdev->req_lock);
+ _drbd_wait_ee_list_empty(mdev, head);
+ spin_unlock_irq(&mdev->req_lock);
+}
+
+/* see also kernel_accept(), which is only present since 2.6.18.
+ * we also want to log exactly which part of it failed */
+STATIC int drbd_accept(struct drbd_conf *mdev, const char **what,
+ struct socket *sock, struct socket **newsock)
+{
+ struct sock *sk = sock->sk;
+ int err = 0;
+
+ *what = "listen";
+ err = sock->ops->listen(sock, 5);
+ if (err < 0)
+ goto out;
+
+ *what = "sock_create_lite";
+ err = sock_create_lite(sk->sk_family, sk->sk_type, sk->sk_protocol,
+ newsock);
+ if (err < 0)
+ goto out;
+
+ *what = "accept";
+ err = sock->ops->accept(sock, *newsock, 0);
+ if (err < 0) {
+ sock_release(*newsock);
+ *newsock = NULL;
+ goto out;
+ }
+ (*newsock)->ops = sock->ops;
+
+out:
+ return err;
+}
+
+STATIC int drbd_recv_short(struct drbd_conf *mdev, struct socket *sock,
+ void *buf, size_t size, int flags)
+{
+ mm_segment_t oldfs;
+ struct kvec iov = {
+ .iov_base = buf,
+ .iov_len = size,
+ };
+ struct msghdr msg = {
+ .msg_iovlen = 1,
+ .msg_iov = (struct iovec *)&iov,
+ .msg_flags = (flags ? flags : MSG_WAITALL | MSG_NOSIGNAL)
+ };
+ int rv;
+
+ oldfs = get_fs();
+ set_fs(KERNEL_DS);
+ rv = sock_recvmsg(sock, &msg, size, msg.msg_flags);
+ set_fs(oldfs);
+
+ return rv;
+}
+
+STATIC int drbd_recv(struct drbd_conf *mdev, void *buf, size_t size)
+{
+ mm_segment_t oldfs;
+ struct kvec iov = {
+ .iov_base = buf,
+ .iov_len = size,
+ };
+ struct msghdr msg = {
+ .msg_iovlen = 1,
+ .msg_iov = (struct iovec *)&iov,
+ .msg_flags = MSG_WAITALL | MSG_NOSIGNAL
+ };
+ int rv;
+
+ oldfs = get_fs();
+ set_fs(KERNEL_DS);
+
+ for (;;) {
+ rv = sock_recvmsg(mdev->data.socket, &msg, size, msg.msg_flags);
+ if (rv == size)
+ break;
+
+ /* Note:
+ * ECONNRESET other side closed the connection
+ * ERESTARTSYS (on sock) we got a signal
+ */
+
+ if (rv < 0) {
+ if (rv == -ECONNRESET)
+ INFO("sock was reset by peer\n");
+ else if (rv != -ERESTARTSYS)
+ ERR("sock_recvmsg returned %d\n", rv);
+ break;
+ } else if (rv == 0) {
+ INFO("sock was shut down by peer\n");
+ break;
+ } else {
+ /* signal came in, or peer/link went down,
+ * after we read a partial message
+ */
+ /* D_ASSERT(signal_pending(current)); */
+ break;
+ }
+ }
+
+ set_fs(oldfs);
+
+ if (rv != size)
+ drbd_force_state(mdev, NS(conn, BrokenPipe));
+
+ return rv;
+}
+
+STATIC struct socket *drbd_try_connect(struct drbd_conf *mdev)
+{
+ const char *what;
+ struct socket *sock;
+ struct sockaddr_in6 src_in6;
+ int err;
+ int disconnect_on_error = 1;
+
+ if (!inc_net(mdev))
+ return NULL;
+
+ what = "sock_create_kern";
+ err = sock_create_kern(((struct sockaddr *)mdev->net_conf->my_addr)->sa_family,
+ SOCK_STREAM, IPPROTO_TCP, &sock);
+ if (err < 0) {
+ sock = NULL;
+ goto out;
+ }
+
+ sock->sk->sk_rcvtimeo =
+ sock->sk->sk_sndtimeo = mdev->net_conf->try_connect_int*HZ;
+
+ /* explicitly bind to the configured IP as source IP
+ * for the outgoing connections.
+ * This is needed for multihomed hosts and to be
+ * able to use lo: interfaces for drbd.
+ * Make sure to use 0 as the port number, so Linux selects
+ * a free one dynamically.
+ */
+ memcpy(&src_in6, mdev->net_conf->my_addr,
+ min_t(int, mdev->net_conf->my_addr_len, sizeof(src_in6)));
+ if (((struct sockaddr *)mdev->net_conf->my_addr)->sa_family == AF_INET6)
+ src_in6.sin6_port = 0;
+ else
+ ((struct sockaddr_in *)&src_in6)->sin_port = 0; /* AF_INET & AF_SCI */
+
+ what = "bind before connect";
+ err = sock->ops->bind(sock,
+ (struct sockaddr *) &src_in6,
+ mdev->net_conf->my_addr_len);
+ if (err < 0)
+ goto out;
+
+ /* connect may fail, peer not yet available.
+ * stay WFConnection, don't go Disconnecting! */
+ disconnect_on_error = 0;
+ what = "connect";
+ err = sock->ops->connect(sock,
+ (struct sockaddr *)mdev->net_conf->peer_addr,
+ mdev->net_conf->peer_addr_len, 0);
+
+out:
+ if (err < 0) {
+ if (sock) {
+ sock_release(sock);
+ sock = NULL;
+ }
+ switch (-err) {
+ /* timeout, busy, signal pending */
+ case ETIMEDOUT: case EAGAIN: case EINPROGRESS:
+ case EINTR: case ERESTARTSYS:
+ /* peer not (yet) available, network problem */
+ case ECONNREFUSED: case ENETUNREACH:
+ case EHOSTDOWN: case EHOSTUNREACH:
+ disconnect_on_error = 0;
+ break;
+ default:
+ ERR("%s failed, err = %d\n", what, err);
+ }
+ if (disconnect_on_error)
+ drbd_force_state(mdev, NS(conn, Disconnecting));
+ }
+ dec_net(mdev);
+ return sock;
+}
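
Binding to the configured address with port 0 before connect() is the
standard way to pin the source IP of an outgoing TCP connection while
letting the kernel pick a free source port. A user-space sketch of the same
idea, assuming IPv4 (connect_from() is illustrative, not part of this patch):

	#include <netinet/in.h>
	#include <sys/socket.h>
	#include <unistd.h>

	static int connect_from(const struct sockaddr_in *local,
				const struct sockaddr_in *peer)
	{
		struct sockaddr_in src = *local;
		int fd = socket(AF_INET, SOCK_STREAM, 0);

		if (fd < 0)
			return -1;
		src.sin_port = 0;	/* kernel chooses a free source port */
		if (bind(fd, (struct sockaddr *)&src, sizeof(src)) < 0 ||
		    connect(fd, (const struct sockaddr *)peer, sizeof(*peer)) < 0) {
			close(fd);
			return -1;
		}
		return fd;
	}
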
+
+STATIC struct socket *drbd_wait_for_connect(struct drbd_conf *mdev)
+{
+ int timeo, err;
+ struct socket *s_estab = NULL, *s_listen;
+ const char *what;
+
+ if (!inc_net(mdev))
+ return NULL;
+
+ what = "sock_create_kern";
+ err = sock_create_kern(((struct sockaddr *)mdev->net_conf->my_addr)->sa_family,
+ SOCK_STREAM, IPPROTO_TCP, &s_listen);
+ if (err) {
+ s_listen = NULL;
+ goto out;
+ }
+
+ timeo = mdev->net_conf->try_connect_int * HZ;
+ timeo += (random32() & 1) ? timeo / 7 : -timeo / 7; /* 28.5% random jitter */
+
+ s_listen->sk->sk_reuse = 1; /* SO_REUSEADDR */
+ s_listen->sk->sk_rcvtimeo = timeo;
+ s_listen->sk->sk_sndtimeo = timeo;
+
+ what = "bind before listen";
+ err = s_listen->ops->bind(s_listen,
+ (struct sockaddr *) mdev->net_conf->my_addr,
+ mdev->net_conf->my_addr_len);
+ if (err < 0)
+ goto out;
+
+ err = drbd_accept(mdev, &what, s_listen, &s_estab);
+
+out:
+ if (s_listen)
+ sock_release(s_listen);
+ if (err < 0) {
+ if (err != -EAGAIN && err != -EINTR && err != -ERESTARTSYS) {
+ ERR("%s failed, err = %d\n", what, err);
+ drbd_force_state(mdev, NS(conn, Disconnecting));
+ }
+ }
+ dec_net(mdev);
+
+ return s_estab;
+}
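
The listen timeout above is jittered so that two nodes starting at the same
moment do not keep connecting at exactly the same instants: timeo/7 is about
14.3% either way, i.e. a total spread of roughly 28.5%. The same computation
in isolation (rand_bit stands in for random32() & 1; a sketch only):

	/* timeo +/- timeo/7,
	 * e.g. jitter_timeout(70, 1) == 80, jitter_timeout(70, 0) == 60 */
	static long jitter_timeout(long timeo, int rand_bit)
	{
		return timeo + (rand_bit ? timeo / 7 : -(timeo / 7));
	}
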
+
+STATIC int drbd_send_fp(struct drbd_conf *mdev,
+ struct socket *sock, enum Drbd_Packet_Cmd cmd)
+{
+ struct Drbd_Header *h = (struct Drbd_Header *) &mdev->data.sbuf.head;
+
+ return _drbd_send_cmd(mdev, sock, cmd, h, sizeof(*h), 0);
+}
+
+STATIC enum Drbd_Packet_Cmd drbd_recv_fp(struct drbd_conf *mdev, struct socket *sock)
+{
+ struct Drbd_Header *h = (struct Drbd_Header *) &mdev->data.sbuf.head;
+ int rr;
+
+ rr = drbd_recv_short(mdev, sock, h, sizeof(*h), 0);
+
+ if (rr == sizeof(*h) && h->magic == BE_DRBD_MAGIC)
+ return be16_to_cpu(h->command);
+
+ return 0xffff;
+}
+
+/**
+ * drbd_socket_okay:
+ * Tests if the connection behind the socket still exists. If not, it frees
+ * the socket.
+ */
+static int drbd_socket_okay(struct drbd_conf *mdev, struct socket **sock)
+{
+ int rr;
+ char tb[4];
+
+ if (!*sock)
+ return FALSE;
+
+ rr = drbd_recv_short(mdev, *sock, tb, 4, MSG_DONTWAIT | MSG_PEEK);
+
+ if (rr > 0 || rr == -EAGAIN) {
+ return TRUE;
+ } else {
+ sock_release(*sock);
+ *sock = NULL;
+ return FALSE;
+ }
+}
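
Peeking with MSG_DONTWAIT | MSG_PEEK is a cheap, non-destructive liveness
probe: buffered data or EAGAIN means the connection is still up; zero
(orderly shutdown) or any other error means it is gone. A user-space sketch
of the same test (socket_okay() here is illustrative, not the patch's code):

	#include <errno.h>
	#include <sys/types.h>
	#include <sys/socket.h>

	/* 1: connection looks alive, 0: peer gone or socket errored */
	static int socket_okay(int fd)
	{
		char tb[4];
		ssize_t rr = recv(fd, tb, sizeof(tb), MSG_DONTWAIT | MSG_PEEK);

		return rr > 0 || (rr < 0 && errno == EAGAIN);
	}
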
+
+/*
+ * return values:
+ *  1 yes, we have a valid connection
+ * 0 oops, did not work out, please try again
+ * -1 peer talks different language,
+ * no point in trying again, please go standalone.
+ * -2 We do not have a network config...
+ */
+STATIC int drbd_connect(struct drbd_conf *mdev)
+{
+ struct socket *s, *sock, *msock;
+ int try, h, ok;
+
+ D_ASSERT(!mdev->data.socket);
+
+ if (test_and_clear_bit(CREATE_BARRIER, &mdev->flags))
+ ERR("CREATE_BARRIER flag was set in drbd_connect - now cleared!\n");
+
+ if (drbd_request_state(mdev, NS(conn, WFConnection)) < SS_Success)
+ return -2;
+
+ clear_bit(DISCARD_CONCURRENT, &mdev->flags);
+
+ sock = NULL;
+ msock = NULL;
+
+ do {
+ for (try = 0;;) {
+ /* 3 tries, this should take less than a second! */
+ s = drbd_try_connect(mdev);
+ if (s || ++try >= 3)
+ break;
+ /* give the other side time to call bind() & listen() */
+ __set_current_state(TASK_INTERRUPTIBLE);
+ schedule_timeout(HZ / 10);
+ }
+
+ if (s) {
+ if (!sock) {
+ drbd_send_fp(mdev, s, HandShakeS);
+ sock = s;
+ s = NULL;
+ } else if (!msock) {
+ drbd_send_fp(mdev, s, HandShakeM);
+ msock = s;
+ s = NULL;
+ } else {
+ ERR("Logic error in drbd_connect()\n");
+ return -1;
+ }
+ }
+
+ if (sock && msock) {
+ __set_current_state(TASK_INTERRUPTIBLE);
+ schedule_timeout(HZ / 10);
+ ok = drbd_socket_okay(mdev, &sock);
+ ok = drbd_socket_okay(mdev, &msock) && ok;
+ if (ok)
+ break;
+ }
+
+retry:
+ s = drbd_wait_for_connect(mdev);
+ if (s) {
+ try = drbd_recv_fp(mdev, s);
+ drbd_socket_okay(mdev, &sock);
+ drbd_socket_okay(mdev, &msock);
+ switch (try) {
+ case HandShakeS:
+ if (sock) {
+ drbd_WARN("initial packet S crossed\n");
+ sock_release(sock);
+ }
+ sock = s;
+ break;
+ case HandShakeM:
+ if (msock) {
+ drbd_WARN("initial packet M crossed\n");
+ sock_release(msock);
+ }
+ msock = s;
+ set_bit(DISCARD_CONCURRENT, &mdev->flags);
+ break;
+ default:
+ drbd_WARN("Error receiving initial packet\n");
+ sock_release(s);
+ if (random32() & 1)
+ goto retry;
+ }
+ }
+
+ if (mdev->state.conn <= Disconnecting)
+ return -1;
+ if (signal_pending(current)) {
+ flush_signals(current);
+ smp_rmb();
+ if (get_t_state(&mdev->receiver) == Exiting) {
+ if (sock)
+ sock_release(sock);
+ if (msock)
+ sock_release(msock);
+ return -1;
+ }
+ }
+
+ if (sock && msock) {
+ ok = drbd_socket_okay(mdev, &sock);
+ ok = drbd_socket_okay(mdev, &msock) && ok;
+ if (ok)
+ break;
+ }
+ } while (1);
+
+ msock->sk->sk_reuse = 1; /* SO_REUSEADDR */
+ sock->sk->sk_reuse = 1; /* SO_REUSEADDR */
+
+ sock->sk->sk_allocation = GFP_NOIO;
+ msock->sk->sk_allocation = GFP_NOIO;
+
+ sock->sk->sk_priority = TC_PRIO_INTERACTIVE_BULK;
+ msock->sk->sk_priority = TC_PRIO_INTERACTIVE;
+
+ if (mdev->net_conf->sndbuf_size) {
+ sock->sk->sk_sndbuf = mdev->net_conf->sndbuf_size;
+ sock->sk->sk_rcvbuf = mdev->net_conf->sndbuf_size;
+ sock->sk->sk_userlocks |= SOCK_SNDBUF_LOCK | SOCK_RCVBUF_LOCK;
+ }
+
+ /* NOT YET ...
+ * sock->sk->sk_sndtimeo = mdev->net_conf->timeout*HZ/10;
+ * sock->sk->sk_rcvtimeo = MAX_SCHEDULE_TIMEOUT;
+	 * first set it to the HandShake timeout, which is hardcoded for now: */
+ sock->sk->sk_sndtimeo =
+ sock->sk->sk_rcvtimeo = 2*HZ;
+
+ msock->sk->sk_sndtimeo = mdev->net_conf->timeout*HZ/10;
+ msock->sk->sk_rcvtimeo = mdev->net_conf->ping_int*HZ;
+
+ /* we don't want delays.
+	 * we use TCP_CORK where appropriate, though */
+ drbd_tcp_nodelay(sock);
+ drbd_tcp_nodelay(msock);
+
+ mdev->data.socket = sock;
+ mdev->meta.socket = msock;
+ mdev->last_received = jiffies;
+
+ D_ASSERT(mdev->asender.task == NULL);
+
+ h = drbd_do_handshake(mdev);
+ if (h <= 0)
+ return h;
+
+ if (mdev->cram_hmac_tfm) {
+ /* drbd_request_state(mdev, NS(conn, WFAuth)); */
+ if (!drbd_do_auth(mdev)) {
+ ERR("Authentication of peer failed\n");
+ return -1;
+ }
+ }
+
+ if (drbd_request_state(mdev, NS(conn, WFReportParams)) < SS_Success)
+ return 0;
+
+ sock->sk->sk_sndtimeo = mdev->net_conf->timeout*HZ/10;
+ sock->sk->sk_rcvtimeo = MAX_SCHEDULE_TIMEOUT;
+
+ atomic_set(&mdev->packet_seq, 0);
+ mdev->peer_seq = 0;
+
+ drbd_thread_start(&mdev->asender);
+
+ drbd_send_protocol(mdev);
+ drbd_send_sync_param(mdev, &mdev->sync_conf);
+ drbd_send_sizes(mdev);
+ drbd_send_uuids(mdev);
+ drbd_send_state(mdev);
+ clear_bit(USE_DEGR_WFC_T, &mdev->flags);
+
+ return 1;
+}
+
+STATIC int drbd_recv_header(struct drbd_conf *mdev, struct Drbd_Header *h)
+{
+ int r;
+
+ r = drbd_recv(mdev, h, sizeof(*h));
+
+ if (unlikely(r != sizeof(*h))) {
+ ERR("short read expecting header on sock: r=%d\n", r);
+ return FALSE;
+	}
+ h->command = be16_to_cpu(h->command);
+ h->length = be16_to_cpu(h->length);
+ if (unlikely(h->magic != BE_DRBD_MAGIC)) {
+ ERR("magic?? on data m: 0x%lx c: %d l: %d\n",
+ (long)be32_to_cpu(h->magic),
+ h->command, h->length);
+ return FALSE;
+ }
+ mdev->last_received = jiffies;
+
+ return TRUE;
+}
+
+STATIC enum finish_epoch drbd_flush_after_epoch(struct drbd_conf *mdev, struct drbd_epoch *epoch)
+{
+ int rv;
+
+ if (mdev->write_ordering >= WO_bdev_flush && inc_local(mdev)) {
+ rv = blkdev_issue_flush(mdev->bc->backing_bdev, NULL);
+ if (rv) {
+ ERR("local disk flush failed with status %d\n", rv);
+ /* would rather check on EOPNOTSUPP, but that is not reliable.
+ * don't try again for ANY return value != 0
+ * if (rv == -EOPNOTSUPP) */
+ drbd_bump_write_ordering(mdev, WO_drain_io);
+ }
+ dec_local(mdev);
+ }
+
+ return drbd_may_finish_epoch(mdev, epoch, EV_barrier_done);
+}
+
+/**
+ * w_flush: Issues the flush for an epoch (unless one was already
+ * issued) and then checks whether the epoch can be finished.
+ */
+STATIC int w_flush(struct drbd_conf *mdev, struct drbd_work *w, int cancel)
+{
+ struct flush_work *fw = (struct flush_work *)w;
+ struct drbd_epoch *epoch = fw->epoch;
+
+ kfree(w);
+
+ if (!test_and_set_bit(DE_BARRIER_IN_NEXT_EPOCH_ISSUED, &epoch->flags))
+ drbd_flush_after_epoch(mdev, epoch);
+
+ drbd_may_finish_epoch(mdev, epoch, EV_put |
+ (mdev->state.conn < Connected ? EV_cleanup : 0));
+
+ return 1;
+}
+
+/**
+ * drbd_may_finish_epoch: Checks if an epoch can be closed and therefore might
+ * close and/or free the epoch object.
+ */
+STATIC enum finish_epoch drbd_may_finish_epoch(struct drbd_conf *mdev,
+ struct drbd_epoch *epoch,
+ enum epoch_event ev)
+{
+ int finish, epoch_size;
+ struct drbd_epoch *next_epoch;
+ int schedule_flush = 0;
+ enum finish_epoch rv = FE_still_live;
+
+ static char *epoch_event_str[] = {
+ [EV_put] = "put",
+ [EV_got_barrier_nr] = "got_barrier_nr",
+ [EV_barrier_done] = "barrier_done",
+ [EV_became_last] = "became_last",
+ };
+
+ spin_lock(&mdev->epoch_lock);
+ do {
+ next_epoch = NULL;
+ finish = 0;
+
+ epoch_size = atomic_read(&epoch->epoch_size);
+
+ switch (ev & ~EV_cleanup) {
+ case EV_put:
+ atomic_dec(&epoch->active);
+ break;
+ case EV_got_barrier_nr:
+ set_bit(DE_HAVE_BARRIER_NUMBER, &epoch->flags);
+
+ /* Special case: If we just switched from WO_bio_barrier to
+ WO_bdev_flush we should not finish the current epoch */
+ if (test_bit(DE_CONTAINS_A_BARRIER, &epoch->flags) && epoch_size == 1 &&
+ mdev->write_ordering != WO_bio_barrier &&
+ epoch == mdev->current_epoch)
+ clear_bit(DE_CONTAINS_A_BARRIER, &epoch->flags);
+ break;
+ case EV_barrier_done:
+ set_bit(DE_BARRIER_IN_NEXT_EPOCH_DONE, &epoch->flags);
+ break;
+ case EV_became_last:
+		/* nothing to do */
+ break;
+ }
+
+ MTRACE(TraceTypeEpochs, TraceLvlAll,
+ INFO("Update epoch %p/%d { size=%d active=%d %c%c n%c%c } ev=%s\n",
+ epoch, epoch->barrier_nr, epoch_size, atomic_read(&epoch->active),
+ test_bit(DE_HAVE_BARRIER_NUMBER, &epoch->flags) ? 'n' : '-',
+ test_bit(DE_CONTAINS_A_BARRIER, &epoch->flags) ? 'b' : '-',
+ test_bit(DE_BARRIER_IN_NEXT_EPOCH_ISSUED, &epoch->flags) ? 'i' : '-',
+ test_bit(DE_BARRIER_IN_NEXT_EPOCH_DONE, &epoch->flags) ? 'd' : '-',
+ epoch_event_str[ev]);
+ );
+
+ if (epoch_size != 0 &&
+ atomic_read(&epoch->active) == 0 &&
+ test_bit(DE_HAVE_BARRIER_NUMBER, &epoch->flags) &&
+ epoch->list.prev == &mdev->current_epoch->list &&
+ !test_bit(DE_IS_FINISHING, &epoch->flags)) {
+ /* Nearly all conditions are met to finish that epoch... */
+ if (test_bit(DE_BARRIER_IN_NEXT_EPOCH_DONE, &epoch->flags) ||
+ mdev->write_ordering == WO_none ||
+ (epoch_size == 1 && test_bit(DE_CONTAINS_A_BARRIER, &epoch->flags)) ||
+ ev & EV_cleanup) {
+ finish = 1;
+ set_bit(DE_IS_FINISHING, &epoch->flags);
+ } else if (!test_bit(DE_BARRIER_IN_NEXT_EPOCH_ISSUED, &epoch->flags) &&
+ mdev->write_ordering == WO_bio_barrier) {
+ atomic_inc(&epoch->active);
+ schedule_flush = 1;
+ }
+ }
+ if (finish) {
+ if (!(ev & EV_cleanup)) {
+ spin_unlock(&mdev->epoch_lock);
+ drbd_send_b_ack(mdev, epoch->barrier_nr, epoch_size);
+ spin_lock(&mdev->epoch_lock);
+ }
+ dec_unacked(mdev);
+
+ if (mdev->current_epoch != epoch) {
+ next_epoch = list_entry(epoch->list.next, struct drbd_epoch, list);
+ list_del(&epoch->list);
+ ev = EV_became_last | (ev & EV_cleanup);
+ mdev->epochs--;
+ MTRACE(TraceTypeEpochs, TraceLvlSummary,
+ INFO("Freeing epoch %p/%d { size=%d } nr_epochs=%d\n",
+ epoch, epoch->barrier_nr, epoch_size, mdev->epochs);
+ );
+ kfree(epoch);
+
+ if (rv == FE_still_live)
+ rv = FE_destroyed;
+ } else {
+ epoch->flags = 0;
+ atomic_set(&epoch->epoch_size, 0);
+			/* atomic_set(&epoch->active, 0); is already zero */
+ if (rv == FE_still_live)
+ rv = FE_recycled;
+ }
+ }
+
+ if (!next_epoch)
+ break;
+
+ epoch = next_epoch;
+ } while (1);
+
+ spin_unlock(&mdev->epoch_lock);
+
+ if (schedule_flush) {
+ struct flush_work *fw;
+ fw = kmalloc(sizeof(*fw), GFP_ATOMIC);
+ if (fw) {
+ MTRACE(TraceTypeEpochs, TraceLvlMetrics,
+			INFO("Schedule flush %p/%d { size=%d } nr_epochs=%d\n",
+ epoch, epoch->barrier_nr, epoch_size, mdev->epochs);
+ );
+ fw->w.cb = w_flush;
+ fw->epoch = epoch;
+ drbd_queue_work(&mdev->data.work, &fw->w);
+ } else {
+ drbd_WARN("Could not kmalloc a flush_work obj\n");
+ set_bit(DE_BARRIER_IN_NEXT_EPOCH_ISSUED, &epoch->flags);
+ /* That is not a recursion, only one level */
+ drbd_may_finish_epoch(mdev, epoch, EV_barrier_done);
+ drbd_may_finish_epoch(mdev, epoch, EV_put);
+ }
+ }
+
+ return rv;
+}
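
Condensed, the finishing test above requires that the epoch saw at least one
write, none are still in flight, a barrier number has arrived, it is the
oldest epoch on the ring, and it is not already being finished; on top of
that, one of the write-ordering conditions must hold. A pure-logic
restatement over pre-evaluated flags (illustrative only; the parameter names
are made up, not taken from this patch):

	/* Restates the condition checked in drbd_may_finish_epoch();
	 * all arguments are the already-evaluated counters/flag bits. */
	static int epoch_may_finish(int epoch_size, int active,
				    int have_barrier_nr, int is_oldest,
				    int is_finishing,
				    int barrier_in_next_epoch_done,
				    int wo_is_none, int contains_a_barrier,
				    int is_cleanup)
	{
		if (epoch_size == 0 || active != 0 || !have_barrier_nr ||
		    !is_oldest || is_finishing)
			return 0;
		return barrier_in_next_epoch_done || wo_is_none ||
		       (epoch_size == 1 && contains_a_barrier) || is_cleanup;
	}
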
+
+/**
+ * drbd_bump_write_ordering: It turned out that the current mdev->write_ordering
+ * method does not work on the backing block device. Try the next allowed method.
+ */
+void drbd_bump_write_ordering(struct drbd_conf *mdev, enum write_ordering_e wo) __must_hold(local)
+{
+ enum write_ordering_e pwo;
+ static char *write_ordering_str[] = {
+ [WO_none] = "none",
+ [WO_drain_io] = "drain",
+ [WO_bdev_flush] = "flush",
+ [WO_bio_barrier] = "barrier",
+ };
+
+ pwo = mdev->write_ordering;
+ wo = min(pwo, wo);
+ if (wo == WO_bio_barrier && mdev->bc->dc.no_disk_barrier)
+ wo = WO_bdev_flush;
+ if (wo == WO_bdev_flush && mdev->bc->dc.no_disk_flush)
+ wo = WO_drain_io;
+ if (wo == WO_drain_io && mdev->bc->dc.no_disk_drain)
+ wo = WO_none;
+ mdev->write_ordering = wo;
+ if (pwo != mdev->write_ordering || wo == WO_bio_barrier)
+ INFO("Method to ensure write ordering: %s\n", write_ordering_str[mdev->write_ordering]);
+}
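
The degradation chain is strictly one-way, barrier -> flush -> drain -> none,
and also honors the per-device disable flags. The same decision as a
standalone function over a plain enum (a sketch; the no_* flags correspond to
the no_disk_* settings used above, and the enum names are stand-ins):

	enum wo_method { WO_NONE_, WO_DRAIN_, WO_FLUSH_, WO_BARRIER_ };

	/* never upgrade; skip methods the backing device cannot provide */
	static enum wo_method bump_write_ordering(enum wo_method cur,
						  enum wo_method wanted,
						  int no_barrier,
						  int no_flush, int no_drain)
	{
		enum wo_method wo = wanted < cur ? wanted : cur;

		if (wo == WO_BARRIER_ && no_barrier)
			wo = WO_FLUSH_;
		if (wo == WO_FLUSH_ && no_flush)
			wo = WO_DRAIN_;
		if (wo == WO_DRAIN_ && no_drain)
			wo = WO_NONE_;
		return wo;
	}
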
+
+/**
+ * w_e_reissue: In case the IO subsystem delivered an error for a BIO with the
+ * BIO_RW_BARRIER flag set, retry that bio without the barrier flag set.
+ */
+int w_e_reissue(struct drbd_conf *mdev, struct drbd_work *w, int cancel) __releases(local)
+{
+ struct Tl_epoch_entry *e = (struct Tl_epoch_entry *)w;
+ struct bio *bio = e->private_bio;
+
+ /* We leave DE_CONTAINS_A_BARRIER and EE_IS_BARRIER in place,
+ (and DE_BARRIER_IN_NEXT_EPOCH_ISSUED in the previous Epoch)
+ so that we can finish that epoch in drbd_may_finish_epoch().
+ That is necessary if we already have a long chain of Epochs, before
+ we realize that BIO_RW_BARRIER is actually not supported */
+
+ /* As long as the -ENOTSUPP on the barrier is reported immediately
+   that will never trigger. If it is reported late, we will just
+   print that warning and continue correctly for all future requests
+ with WO_bdev_flush */
+ if (previous_epoch(mdev, e->epoch))
+ drbd_WARN("Write ordering was not enforced (one time event)\n");
+
+ /* prepare bio for re-submit,
+ * re-init volatile members */
+ /* we still have a local reference,
+ * inc_local was done in receive_Data. */
+ bio->bi_bdev = mdev->bc->backing_bdev;
+ bio->bi_sector = e->sector;
+ bio->bi_size = e->size;
+ bio->bi_idx = 0;
+
+ bio->bi_flags &= ~(BIO_POOL_MASK - 1);
+ bio->bi_flags |= 1 << BIO_UPTODATE;
+
+ /* don't know whether this is necessary: */
+ bio->bi_phys_segments = 0;
+ bio->bi_next = NULL;
+
+ /* these should be unchanged: */
+ /* bio->bi_end_io = drbd_endio_write_sec; */
+ /* bio->bi_vcnt = whatever; */
+
+ e->w.cb = e_end_block;
+
+ /* This is no longer a barrier request. */
+ bio->bi_rw &= ~(1UL << BIO_RW_BARRIER);
+
+ drbd_generic_make_request(mdev, DRBD_FAULT_DT_WR, bio);
+
+ return 1;
+}
+
+STATIC int receive_Barrier(struct drbd_conf *mdev, struct Drbd_Header *h)
+{
+ int rv, issue_flush;
+ struct Drbd_Barrier_Packet *p = (struct Drbd_Barrier_Packet *)h;
+ struct drbd_epoch *epoch;
+
+ ERR_IF(h->length != (sizeof(*p)-sizeof(*h))) return FALSE;
+
+ rv = drbd_recv(mdev, h->payload, h->length);
+ ERR_IF(rv != h->length) return FALSE;
+
+ inc_unacked(mdev);
+
+ if (mdev->net_conf->wire_protocol != DRBD_PROT_C)
+ drbd_kick_lo(mdev);
+
+ mdev->current_epoch->barrier_nr = p->barrier;
+ rv = drbd_may_finish_epoch(mdev, mdev->current_epoch, EV_got_barrier_nr);
+
+ /* BarrierAck may imply that the corresponding extent is dropped from
+ * the activity log, which means it would not be resynced in case the
+ * Primary crashes now.
+ * Therefore we must send the barrier_ack after the barrier request was
+ * completed. */
+ switch (mdev->write_ordering) {
+ case WO_bio_barrier:
+ case WO_none:
+ if (rv == FE_recycled)
+ return TRUE;
+ break;
+
+ case WO_bdev_flush:
+ case WO_drain_io:
+ D_ASSERT(rv == FE_still_live);
+ set_bit(DE_BARRIER_IN_NEXT_EPOCH_ISSUED, &mdev->current_epoch->flags);
+ drbd_wait_ee_list_empty(mdev, &mdev->active_ee);
+ rv = drbd_flush_after_epoch(mdev, mdev->current_epoch);
+ if (rv == FE_recycled)
+ return TRUE;
+
+ /* The asender will send all the ACKs and barrier ACKs out, since
+ all EEs moved from the active_ee to the done_ee. We need to
+ provide a new epoch object for the EEs that come in soon */
+ break;
+ }
+
+ epoch = kmalloc(sizeof(struct drbd_epoch), GFP_KERNEL);
+ if (!epoch) {
+ drbd_WARN("Allocation of an epoch failed, slowing down\n");
+		issue_flush = !test_and_set_bit(DE_BARRIER_IN_NEXT_EPOCH_ISSUED, &mdev->current_epoch->flags);
+ drbd_wait_ee_list_empty(mdev, &mdev->active_ee);
+ if (issue_flush) {
+ rv = drbd_flush_after_epoch(mdev, mdev->current_epoch);
+ if (rv == FE_recycled)
+ return TRUE;
+ }
+
+ drbd_wait_ee_list_empty(mdev, &mdev->done_ee);
+
+ return TRUE;
+ }
+
+ epoch->flags = 0;
+ atomic_set(&epoch->epoch_size, 0);
+ atomic_set(&epoch->active, 0);
+
+ spin_lock(&mdev->epoch_lock);
+ if (atomic_read(&mdev->current_epoch->epoch_size)) {
+ list_add(&epoch->list, &mdev->current_epoch->list);
+ mdev->current_epoch = epoch;
+ mdev->epochs++;
+ MTRACE(TraceTypeEpochs, TraceLvlMetrics,
+			INFO("Allocate epoch %p/xxxx { } nr_epochs=%d\n", epoch, mdev->epochs);
+ );
+ } else {
+ /* The current_epoch got recycled while we allocated this one... */
+ kfree(epoch);
+ }
+ spin_unlock(&mdev->epoch_lock);
+
+ return TRUE;
+}
+
+/* used from receive_RSDataReply (recv_resync_read)
+ * and from receive_Data */
+STATIC struct Tl_epoch_entry *
+read_in_block(struct drbd_conf *mdev, u64 id, sector_t sector, int data_size) __must_hold(local)
+{
+ struct Tl_epoch_entry *e;
+ struct bio_vec *bvec;
+ struct page *page;
+ struct bio *bio;
+ int dgs, ds, i, rr;
+ void *dig_in = mdev->int_dig_in;
+ void *dig_vv = mdev->int_dig_vv;
+
+ dgs = (mdev->agreed_pro_version >= 87 && mdev->integrity_r_tfm) ?
+ crypto_hash_digestsize(mdev->integrity_r_tfm) : 0;
+
+ if (dgs) {
+ rr = drbd_recv(mdev, dig_in, dgs);
+ if (rr != dgs) {
+ drbd_WARN("short read receiving data digest: read %d expected %d\n",
+ rr, dgs);
+ return NULL;
+ }
+ }
+
+ data_size -= dgs;
+
+ ERR_IF(data_size & 0x1ff) return NULL;
+ ERR_IF(data_size > DRBD_MAX_SEGMENT_SIZE) return NULL;
+
+ e = drbd_alloc_ee(mdev, id, sector, data_size, GFP_KERNEL);
+ if (!e)
+ return NULL;
+ bio = e->private_bio;
+ ds = data_size;
+ bio_for_each_segment(bvec, bio, i) {
+ page = bvec->bv_page;
+ rr = drbd_recv(mdev, kmap(page), min_t(int, ds, PAGE_SIZE));
+ kunmap(page);
+ if (rr != min_t(int, ds, PAGE_SIZE)) {
+ drbd_free_ee(mdev, e);
+ drbd_WARN("short read receiving data: read %d expected %d\n",
+ rr, min_t(int, ds, PAGE_SIZE));
+ return NULL;
+ }
+ ds -= rr;
+ }
+
+ if (dgs) {
+ drbd_csum(mdev, mdev->integrity_r_tfm, bio, dig_vv);
+ if (memcmp(dig_in, dig_vv, dgs)) {
+ ERR("Digest integrity check FAILED.\n");
+ drbd_bcast_ee(mdev, "digest failed",
+ dgs, dig_in, dig_vv, e);
+ drbd_free_ee(mdev, e);
+ return NULL;
+ }
+ }
+ mdev->recv_cnt += data_size>>9;
+ return e;
+}
+
+/* drbd_drain_block() just takes a data block
+ * out of the socket input buffer, and discards it.
+ */
+STATIC int drbd_drain_block(struct drbd_conf *mdev, int data_size)
+{
+ struct page *page;
+ int rr, rv = 1;
+ void *data;
+
+ page = drbd_pp_alloc(mdev, GFP_KERNEL);
+
+ data = kmap(page);
+ while (data_size) {
+ rr = drbd_recv(mdev, data, min_t(int, data_size, PAGE_SIZE));
+ if (rr != min_t(int, data_size, PAGE_SIZE)) {
+ rv = 0;
+ drbd_WARN("short read receiving data: read %d expected %d\n",
+ rr, min_t(int, data_size, PAGE_SIZE));
+ break;
+ }
+ data_size -= rr;
+ }
+ kunmap(page);
+ drbd_pp_free(mdev, page);
+ return rv;
+}
+
+/* kick lower level device, if we have more than (arbitrary number)
+ * reference counts on it, which typically are locally submitted io
+ * requests. don't use unacked_cnt, so we speed up proto A and B, too. */
+static void maybe_kick_lo(struct drbd_conf *mdev)
+{
+ if (atomic_read(&mdev->local_cnt) >= mdev->net_conf->unplug_watermark)
+ drbd_kick_lo(mdev);
+}
+
+STATIC int recv_dless_read(struct drbd_conf *mdev, struct drbd_request *req,
+ sector_t sector, int data_size)
+{
+ struct bio_vec *bvec;
+ struct bio *bio;
+ int dgs, rr, i, expect;
+ void *dig_in = mdev->int_dig_in;
+ void *dig_vv = mdev->int_dig_vv;
+
+ dgs = (mdev->agreed_pro_version >= 87 && mdev->integrity_r_tfm) ?
+ crypto_hash_digestsize(mdev->integrity_r_tfm) : 0;
+
+ if (dgs) {
+ rr = drbd_recv(mdev, dig_in, dgs);
+ if (rr != dgs) {
+ drbd_WARN("short read receiving data reply digest: read %d expected %d\n",
+ rr, dgs);
+ return 0;
+ }
+ }
+
+ data_size -= dgs;
+
+ bio = req->master_bio;
+ D_ASSERT(sector == bio->bi_sector);
+
+ bio_for_each_segment(bvec, bio, i) {
+ expect = min_t(int, data_size, bvec->bv_len);
+ rr = drbd_recv(mdev,
+ kmap(bvec->bv_page)+bvec->bv_offset,
+ expect);
+ kunmap(bvec->bv_page);
+ if (rr != expect) {
+ drbd_WARN("short read receiving data reply: "
+ "read %d expected %d\n",
+ rr, expect);
+ return 0;
+ }
+ data_size -= rr;
+ }
+
+ if (dgs) {
+ drbd_csum(mdev, mdev->integrity_r_tfm, bio, dig_vv);
+ if (memcmp(dig_in, dig_vv, dgs)) {
+ ERR("Digest integrity check FAILED. Broken NICs?\n");
+ return 0;
+ }
+ }
+
+ D_ASSERT(data_size == 0);
+ return 1;
+}
+
+/* e_end_resync_block() is called via
+ * drbd_process_done_ee() by asender only */
+STATIC int e_end_resync_block(struct drbd_conf *mdev, struct drbd_work *w, int unused)
+{
+ struct Tl_epoch_entry *e = (struct Tl_epoch_entry *)w;
+ sector_t sector = e->sector;
+ int ok;
+
+ D_ASSERT(hlist_unhashed(&e->colision));
+
+ if (likely(drbd_bio_uptodate(e->private_bio))) {
+ drbd_set_in_sync(mdev, sector, e->size);
+ ok = drbd_send_ack(mdev, RSWriteAck, e);
+ } else {
+ /* Record failure to sync */
+ drbd_rs_failed_io(mdev, sector, e->size);
+
+ ok = drbd_send_ack(mdev, NegAck, e);
+ ok &= drbd_io_error(mdev, FALSE);
+ }
+ dec_unacked(mdev);
+
+ return ok;
+}
+
+STATIC int recv_resync_read(struct drbd_conf *mdev, sector_t sector, int data_size) __releases(local)
+{
+ struct Tl_epoch_entry *e;
+
+ e = read_in_block(mdev, ID_SYNCER, sector, data_size);
+ if (!e) {
+ dec_local(mdev);
+ return FALSE;
+ }
+
+ dec_rs_pending(mdev);
+
+ e->private_bio->bi_end_io = drbd_endio_write_sec;
+ e->private_bio->bi_rw = WRITE;
+ e->w.cb = e_end_resync_block;
+
+ inc_unacked(mdev);
+ /* corresponding dec_unacked() in e_end_resync_block()
+ * respective _drbd_clear_done_ee */
+
+ spin_lock_irq(&mdev->req_lock);
+ list_add(&e->w.list, &mdev->sync_ee);
+ spin_unlock_irq(&mdev->req_lock);
+
+ MTRACE(TraceTypeEE, TraceLvlAll,
+ INFO("submit EE (RS)WRITE sec=%llus size=%u ee=%p\n",
+ (unsigned long long)e->sector, e->size, e);
+ );
+ dump_internal_bio("Sec", mdev, e->private_bio, 0);
+ drbd_generic_make_request(mdev, DRBD_FAULT_RS_WR, e->private_bio);
+ /* accounting done in endio */
+
+ maybe_kick_lo(mdev);
+ return TRUE;
+}
+
+STATIC int receive_DataReply(struct drbd_conf *mdev, struct Drbd_Header *h)
+{
+ struct drbd_request *req;
+ sector_t sector;
+ unsigned int header_size, data_size;
+ int ok;
+ struct Drbd_Data_Packet *p = (struct Drbd_Data_Packet *)h;
+
+ header_size = sizeof(*p) - sizeof(*h);
+ data_size = h->length - header_size;
+
+ ERR_IF(data_size == 0) return FALSE;
+
+ if (drbd_recv(mdev, h->payload, header_size) != header_size)
+ return FALSE;
+
+ sector = be64_to_cpu(p->sector);
+
+ spin_lock_irq(&mdev->req_lock);
+ req = _ar_id_to_req(mdev, p->block_id, sector);
+ spin_unlock_irq(&mdev->req_lock);
+ if (unlikely(!req)) {
+ ERR("Got a corrupt block_id/sector pair(1).\n");
+ return FALSE;
+ }
+
+ /* hlist_del(&req->colision) is done in _req_may_be_done, to avoid
+ * special casing it there for the various failure cases.
+ * still no race with drbd_fail_pending_reads */
+ ok = recv_dless_read(mdev, req, sector, data_size);
+
+ if (ok)
+ req_mod(req, data_received, 0);
+ /* else: nothing. handled from drbd_disconnect...
+ * I don't think we may complete this just yet
+ * in case we are "on-disconnect: freeze" */
+
+ return ok;
+}
+
+STATIC int receive_RSDataReply(struct drbd_conf *mdev, struct Drbd_Header *h)
+{
+ sector_t sector;
+ unsigned int header_size, data_size;
+ int ok;
+ struct Drbd_Data_Packet *p = (struct Drbd_Data_Packet *)h;
+
+ header_size = sizeof(*p) - sizeof(*h);
+ data_size = h->length - header_size;
+
+ ERR_IF(data_size == 0) return FALSE;
+
+ if (drbd_recv(mdev, h->payload, header_size) != header_size)
+ return FALSE;
+
+ sector = be64_to_cpu(p->sector);
+ D_ASSERT(p->block_id == ID_SYNCER);
+
+ if (inc_local(mdev)) {
+ /* data is submitted to disk within recv_resync_read.
+ * corresponding dec_local done below on error,
+ * or in drbd_endio_write_sec. */
+ ok = recv_resync_read(mdev, sector, data_size);
+ } else {
+ if (__ratelimit(&drbd_ratelimit_state))
+ ERR("Can not write resync data to local disk.\n");
+
+ ok = drbd_drain_block(mdev, data_size);
+
+ drbd_send_ack_dp(mdev, NegAck, p);
+ }
+
+ return ok;
+}
+
+/* e_end_block() is called via drbd_process_done_ee().
+ * this means this function only runs in the asender thread
+ */
+STATIC int e_end_block(struct drbd_conf *mdev, struct drbd_work *w, int unused)
+{
+ struct Tl_epoch_entry *e = (struct Tl_epoch_entry *)w;
+ sector_t sector = e->sector;
+ struct drbd_epoch *epoch;
+ int ok = 1, pcmd;
+
+ if (e->flags & EE_IS_BARRIER) {
+ epoch = previous_epoch(mdev, e->epoch);
+ if (epoch)
+ drbd_may_finish_epoch(mdev, epoch, EV_barrier_done);
+ }
+
+ if (mdev->net_conf->wire_protocol == DRBD_PROT_C) {
+ if (likely(drbd_bio_uptodate(e->private_bio))) {
+ pcmd = (mdev->state.conn >= SyncSource &&
+ mdev->state.conn <= PausedSyncT &&
+ e->flags & EE_MAY_SET_IN_SYNC) ?
+ RSWriteAck : WriteAck;
+ ok &= drbd_send_ack(mdev, pcmd, e);
+ if (pcmd == RSWriteAck)
+ drbd_set_in_sync(mdev, sector, e->size);
+ } else {
+ ok = drbd_send_ack(mdev, NegAck, e);
+ ok &= drbd_io_error(mdev, FALSE);
+ /* we expect it to be marked out of sync anyways...
+ * maybe assert this? */
+ }
+ dec_unacked(mdev);
+ } else if (unlikely(!drbd_bio_uptodate(e->private_bio))) {
+ ok = drbd_io_error(mdev, FALSE);
+ }
+
+ /* we delete from the conflict detection hash _after_ we sent out the
+ * WriteAck / NegAck, to get the sequence number right. */
+ if (mdev->net_conf->two_primaries) {
+ spin_lock_irq(&mdev->req_lock);
+ D_ASSERT(!hlist_unhashed(&e->colision));
+ hlist_del_init(&e->colision);
+ spin_unlock_irq(&mdev->req_lock);
+ } else {
+ D_ASSERT(hlist_unhashed(&e->colision));
+ }
+
+ drbd_may_finish_epoch(mdev, e->epoch, EV_put);
+
+ return ok;
+}
+
+STATIC int e_send_discard_ack(struct drbd_conf *mdev, struct drbd_work *w, int unused)
+{
+ struct Tl_epoch_entry *e = (struct Tl_epoch_entry *)w;
+ int ok = 1;
+
+ D_ASSERT(mdev->net_conf->wire_protocol == DRBD_PROT_C);
+ ok = drbd_send_ack(mdev, DiscardAck, e);
+
+ spin_lock_irq(&mdev->req_lock);
+ D_ASSERT(!hlist_unhashed(&e->colision));
+ hlist_del_init(&e->colision);
+ spin_unlock_irq(&mdev->req_lock);
+
+ dec_unacked(mdev);
+
+ return ok;
+}
+
+/* Called from receive_Data.
+ * Synchronize packets on sock with packets on msock.
+ *
+ * This is here so even when a Data packet traveling via sock overtook an Ack
+ * packet traveling on msock, they are still processed in the order they have
+ * been sent.
+ *
+ * Note: we don't care for Ack packets overtaking Data packets.
+ *
+ * In case packet_seq is larger than mdev->peer_seq number, there are
+ * outstanding packets on the msock. We wait for them to arrive.
+ * In case we are the logically next packet, we update mdev->peer_seq
+ * ourselves. Correctly handles 32bit wrap around.
+ *
+ * Assume we have a 10 GBit connection, that is about 1<<30 byte per second,
+ * about 1<<21 sectors per second. So "worst" case, we have 1<<3 == 8 seconds
+ * for the 24bit wrap (historical atomic_t guarantee on some archs), and we have
+ * 1<<9 == 512 seconds aka ages for the 32bit wrap around...
+ *
+ * returns 0 if we may process the packet,
+ * -ERESTARTSYS if we were interrupted (by disconnect signal),
+ * -ETIMEDOUT if the peer sequence number did not advance within 30 seconds. */
+static int drbd_wait_peer_seq(struct drbd_conf *mdev, const u32 packet_seq)
+{
+ DEFINE_WAIT(wait);
+ unsigned int p_seq;
+ long timeout;
+ int ret = 0;
+ spin_lock(&mdev->peer_seq_lock);
+ for (;;) {
+ prepare_to_wait(&mdev->seq_wait, &wait, TASK_INTERRUPTIBLE);
+ if (seq_le(packet_seq, mdev->peer_seq+1))
+ break;
+ if (signal_pending(current)) {
+ ret = -ERESTARTSYS;
+ break;
+ }
+ p_seq = mdev->peer_seq;
+ spin_unlock(&mdev->peer_seq_lock);
+ timeout = schedule_timeout(30*HZ);
+ spin_lock(&mdev->peer_seq_lock);
+ if (timeout == 0 && p_seq == mdev->peer_seq) {
+ ret = -ETIMEDOUT;
+ ERR("ASSERT FAILED waited 30 seconds for sequence update, forcing reconnect\n");
+ break;
+ }
+ }
+ finish_wait(&mdev->seq_wait, &wait);
+ if (mdev->peer_seq+1 == packet_seq)
+ mdev->peer_seq++;
+ spin_unlock(&mdev->peer_seq_lock);
+ return ret;
+}
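
The wrap-around safe comparison used above (seq_le) is presumably the usual
signed-difference trick for serial numbers, which covers the 24bit and full
32bit sequence spaces mentioned in the comment. A sketch with a wrap example
(not copied from this patch):

	#include <stdint.h>

	/* a <= b in sequence-number arithmetic, robust across 32bit wrap */
	static int seq_le(uint32_t a, uint32_t b)
	{
		return (int32_t)(a - b) <= 0;
	}

	/* seq_le(0xfffffffeU, 2) == 1: 2 comes "after" 0xfffffffe across
	 * the wrap, since (int32_t)(0xfffffffeU - 2) is a large negative
	 * value. */
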
+
+/* mirrored write */
+STATIC int receive_Data(struct drbd_conf *mdev, struct Drbd_Header *h)
+{
+ sector_t sector;
+ struct Tl_epoch_entry *e;
+ struct Drbd_Data_Packet *p = (struct Drbd_Data_Packet *)h;
+ int header_size, data_size;
+ int rw = WRITE;
+ u32 dp_flags;
+
+ header_size = sizeof(*p) - sizeof(*h);
+ data_size = h->length - header_size;
+
+ ERR_IF(data_size == 0) return FALSE;
+
+ if (drbd_recv(mdev, h->payload, header_size) != header_size)
+ return FALSE;
+
+	/* data is submitted to disk at the end of this function.
+	 * corresponding dec_local done either below (on error),
+	 * or in drbd_endio_write_sec. */
+	if (!inc_local(mdev)) {
+ if (__ratelimit(&drbd_ratelimit_state))
+ ERR("Can not write mirrored data block "
+ "to local disk.\n");
+ spin_lock(&mdev->peer_seq_lock);
+ if (mdev->peer_seq+1 == be32_to_cpu(p->seq_num))
+ mdev->peer_seq++;
+ spin_unlock(&mdev->peer_seq_lock);
+
+ drbd_send_ack_dp(mdev, NegAck, p);
+ atomic_inc(&mdev->current_epoch->epoch_size);
+ return drbd_drain_block(mdev, data_size);
+ }
+
+ sector = be64_to_cpu(p->sector);
+ e = read_in_block(mdev, p->block_id, sector, data_size);
+ if (!e) {
+ dec_local(mdev);
+ return FALSE;
+ }
+
+ e->private_bio->bi_end_io = drbd_endio_write_sec;
+ e->w.cb = e_end_block;
+
+ spin_lock(&mdev->epoch_lock);
+ e->epoch = mdev->current_epoch;
+ atomic_inc(&e->epoch->epoch_size);
+ atomic_inc(&e->epoch->active);
+
+ if (mdev->write_ordering == WO_bio_barrier && atomic_read(&e->epoch->epoch_size) == 1) {
+ struct drbd_epoch *epoch;
+ /* Issue a barrier if we start a new epoch, and the previous epoch
+		   was not an epoch containing a single request which already was
+ a Barrier. */
+ epoch = list_entry(e->epoch->list.prev, struct drbd_epoch, list);
+ if (epoch == e->epoch) {
+ MTRACE(TraceTypeEpochs, TraceLvlMetrics,
+ INFO("Add barrier %p/%d\n",
+ epoch, epoch->barrier_nr);
+ );
+ set_bit(DE_CONTAINS_A_BARRIER, &e->epoch->flags);
+ rw |= (1<<BIO_RW_BARRIER);
+ e->flags |= EE_IS_BARRIER;
+ } else {
+ if (atomic_read(&epoch->epoch_size) > 1 ||
+ !test_bit(DE_CONTAINS_A_BARRIER, &epoch->flags)) {
+ MTRACE(TraceTypeEpochs, TraceLvlMetrics,
+ INFO("Add barrier %p/%d, setting bi in %p/%d\n",
+ e->epoch, e->epoch->barrier_nr,
+ epoch, epoch->barrier_nr);
+ );
+ set_bit(DE_BARRIER_IN_NEXT_EPOCH_ISSUED, &epoch->flags);
+ set_bit(DE_CONTAINS_A_BARRIER, &e->epoch->flags);
+ rw |= (1<<BIO_RW_BARRIER);
+ e->flags |= EE_IS_BARRIER;
+ }
+ }
+ }
+ spin_unlock(&mdev->epoch_lock);
+
+ dp_flags = be32_to_cpu(p->dp_flags);
+ if (dp_flags & DP_HARDBARRIER)
+ rw |= (1<<BIO_RW_BARRIER);
+ if (dp_flags & DP_RW_SYNC)
+ rw |= (1<<BIO_RW_SYNCIO) | (1<<BIO_RW_UNPLUG);
+ if (dp_flags & DP_MAY_SET_IN_SYNC)
+ e->flags |= EE_MAY_SET_IN_SYNC;
+
+ /* I'm the receiver, I do hold a net_cnt reference. */
+ if (!mdev->net_conf->two_primaries) {
+ spin_lock_irq(&mdev->req_lock);
+ } else {
+ /* don't get the req_lock yet,
+ * we may sleep in drbd_wait_peer_seq */
+ const int size = e->size;
+ const int discard = test_bit(DISCARD_CONCURRENT, &mdev->flags);
+ DEFINE_WAIT(wait);
+ struct drbd_request *i;
+ struct hlist_node *n;
+ struct hlist_head *slot;
+ int first;
+
+ D_ASSERT(mdev->net_conf->wire_protocol == DRBD_PROT_C);
+ BUG_ON(mdev->ee_hash == NULL);
+ BUG_ON(mdev->tl_hash == NULL);
+
+ /* conflict detection and handling:
+ * 1. wait on the sequence number,
+ * in case this data packet overtook ACK packets.
+ * 2. check our hash tables for conflicting requests.
+ * we only need to walk the tl_hash, since an ee can not
+ * have a conflict with an other ee: on the submitting
+ * node, the corresponding req had already been conflicting,
+ * and a conflicting req is never sent.
+ *
+ * Note: for two_primaries, we are protocol C,
+ * so there cannot be any request that is DONE
+ * but still on the transfer log.
+ *
+ * unconditionally add to the ee_hash.
+ *
+ * if no conflicting request is found:
+ * submit.
+ *
+ * if any conflicting request is found
+ * that has not yet been acked,
+ * AND I have the "discard concurrent writes" flag:
+ * queue (via done_ee) the DiscardAck; OUT.
+ *
+ * if any conflicting request is found:
+ * block the receiver, waiting on misc_wait
+ * until no more conflicting requests are there,
+ * or we get interrupted (disconnect).
+ *
+ * we do not just write after local io completion of those
+ * requests, but only after req is done completely, i.e.
+ * we wait for the DiscardAck to arrive!
+ *
+ * then proceed normally, i.e. submit.
+ */
+ if (drbd_wait_peer_seq(mdev, be32_to_cpu(p->seq_num)))
+ goto out_interrupted;
+
+ spin_lock_irq(&mdev->req_lock);
+
+ hlist_add_head(&e->colision, ee_hash_slot(mdev, sector));
+
+#define OVERLAPS overlaps(i->sector, i->size, sector, size)
+ slot = tl_hash_slot(mdev, sector);
+ first = 1;
+ for (;;) {
+ int have_unacked = 0;
+ int have_conflict = 0;
+ prepare_to_wait(&mdev->misc_wait, &wait,
+ TASK_INTERRUPTIBLE);
+ hlist_for_each_entry(i, n, slot, colision) {
+ if (OVERLAPS) {
+ /* only ALERT on first iteration,
+ * we may be woken up early... */
+ if (first)
+ ALERT("%s[%u] Concurrent local write detected!"
+ " new: %llus +%u; pending: %llus +%u\n",
+ current->comm, current->pid,
+ (unsigned long long)sector, size,
+ (unsigned long long)i->sector, i->size);
+ if (i->rq_state & RQ_NET_PENDING)
+ ++have_unacked;
+ ++have_conflict;
+ }
+ }
+#undef OVERLAPS
+ if (!have_conflict)
+ break;
+
+ /* Discard Ack only for the _first_ iteration */
+ if (first && discard && have_unacked) {
+ ALERT("Concurrent write! [DISCARD BY FLAG] sec=%llus\n",
+ (unsigned long long)sector);
+ inc_unacked(mdev);
+ e->w.cb = e_send_discard_ack;
+ list_add_tail(&e->w.list, &mdev->done_ee);
+
+ spin_unlock_irq(&mdev->req_lock);
+
+ /* we could probably send that DiscardAck ourselves,
+ * but I don't like the receiver using the msock */
+
+ dec_local(mdev);
+ wake_asender(mdev);
+ finish_wait(&mdev->misc_wait, &wait);
+ return TRUE;
+ }
+
+ if (signal_pending(current)) {
+ hlist_del_init(&e->colision);
+
+ spin_unlock_irq(&mdev->req_lock);
+
+ finish_wait(&mdev->misc_wait, &wait);
+ goto out_interrupted;
+ }
+
+ spin_unlock_irq(&mdev->req_lock);
+ if (first) {
+ first = 0;
+ ALERT("Concurrent write! [W AFTERWARDS] "
+ "sec=%llus\n", (unsigned long long)sector);
+ } else if (discard) {
+ /* we had none on the first iteration.
+ * there must be none now. */
+ D_ASSERT(have_unacked == 0);
+ }
+ schedule();
+ spin_lock_irq(&mdev->req_lock);
+ }
+ finish_wait(&mdev->misc_wait, &wait);
+ }
+
+ list_add(&e->w.list, &mdev->active_ee);
+ spin_unlock_irq(&mdev->req_lock);
+
+ switch (mdev->net_conf->wire_protocol) {
+ case DRBD_PROT_C:
+ inc_unacked(mdev);
+ /* corresponding dec_unacked() in e_end_block()
+ * respective _drbd_clear_done_ee */
+ break;
+ case DRBD_PROT_B:
+ /* I really don't like it that the receiver thread
+ * sends on the msock, but anyways */
+ drbd_send_ack(mdev, RecvAck, e);
+ break;
+ case DRBD_PROT_A:
+ /* nothing to do */
+ break;
+ }
+
+ if (mdev->state.pdsk == Diskless) {
+		/* In case we have the only disk of the cluster, mark the
+		 * written block out of sync for the diskless peer. */
+ drbd_set_out_of_sync(mdev, e->sector, e->size);
+ e->flags |= EE_CALL_AL_COMPLETE_IO;
+ drbd_al_begin_io(mdev, e->sector);
+ }
+
+ MTRACE(TraceTypeEE, TraceLvlAll,
+ INFO("submit EE (DATA)WRITE sec=%llus size=%u ee=%p\n",
+ (unsigned long long)e->sector, e->size, e);
+ );
+
+ e->private_bio->bi_rw = rw;
+ dump_internal_bio("Sec", mdev, e->private_bio, 0);
+ drbd_generic_make_request(mdev, DRBD_FAULT_DT_WR, e->private_bio);
+ /* accounting done in endio */
+
+ maybe_kick_lo(mdev);
+ return TRUE;
+
+out_interrupted:
+ /* yes, the epoch_size now is imbalanced.
+ * but we drop the connection anyways, so we don't have a chance to
+ * receive a barrier... atomic_inc(&mdev->epoch_size); */
+ dec_local(mdev);
+ drbd_free_ee(mdev, e);
+ return FALSE;
+}
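
The OVERLAPS macro used in the conflict scan reduces to ordinary half-open
interval overlap, with request starts in 512-byte sectors and sizes in bytes.
A plausible shape of the helper it expands to (a sketch; the actual
overlaps() definition lives elsewhere in the patch and its boundary
convention may differ):

	#include <stdint.h>

	/* do [s1, s1 + bytes1/512) and [s2, s2 + bytes2/512) intersect? */
	static int req_overlaps(uint64_t s1, unsigned int bytes1,
				uint64_t s2, unsigned int bytes2)
	{
		return s1 < s2 + (bytes2 >> 9) && s2 < s1 + (bytes1 >> 9);
	}
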
+
+STATIC int receive_DataRequest(struct drbd_conf *mdev, struct Drbd_Header *h)
+{
+ sector_t sector;
+ const sector_t capacity = drbd_get_capacity(mdev->this_bdev);
+ struct Tl_epoch_entry *e;
+ struct digest_info *di;
+ int size, digest_size;
+ unsigned int fault_type;
+ struct Drbd_BlockRequest_Packet *p =
+ (struct Drbd_BlockRequest_Packet *)h;
+ const int brps = sizeof(*p)-sizeof(*h);
+
+ if (drbd_recv(mdev, h->payload, brps) != brps)
+ return FALSE;
+
+ sector = be64_to_cpu(p->sector);
+ size = be32_to_cpu(p->blksize);
+
+ if (size <= 0 || (size & 0x1ff) != 0 || size > DRBD_MAX_SEGMENT_SIZE) {
+ ERR("%s:%d: sector: %llus, size: %u\n", __FILE__, __LINE__,
+ (unsigned long long)sector, size);
+ return FALSE;
+ }
+ if (sector + (size>>9) > capacity) {
+ ERR("%s:%d: sector: %llus, size: %u\n", __FILE__, __LINE__,
+ (unsigned long long)sector, size);
+ return FALSE;
+ }
+
+ if (!inc_local_if_state(mdev, UpToDate)) {
+ if (__ratelimit(&drbd_ratelimit_state))
+ ERR("Can not satisfy peer's read request, "
+ "no local data.\n");
+ drbd_send_ack_rp(mdev, h->command == DataRequest ? NegDReply :
+			NegRSDReply, p);
+ return TRUE;
+ }
+
+ e = drbd_alloc_ee(mdev, p->block_id, sector, size, GFP_KERNEL);
+ if (!e) {
+ dec_local(mdev);
+ return FALSE;
+ }
+
+ e->private_bio->bi_rw = READ;
+ e->private_bio->bi_end_io = drbd_endio_read_sec;
+
+ switch (h->command) {
+ case DataRequest:
+ e->w.cb = w_e_end_data_req;
+ fault_type = DRBD_FAULT_DT_RD;
+ break;
+ case RSDataRequest:
+ e->w.cb = w_e_end_rsdata_req;
+ fault_type = DRBD_FAULT_RS_RD;
+		/* Eventually this should become asynchronous. Currently it
+ * blocks the whole receiver just to delay the reading of a
+ * resync data block.
+ * the drbd_work_queue mechanism is made for this...
+ */
+ if (!drbd_rs_begin_io(mdev, sector)) {
+ /* we have been interrupted,
+ * probably connection lost! */
+ D_ASSERT(signal_pending(current));
+ dec_local(mdev);
+ drbd_free_ee(mdev, e);
+ return 0;
+ }
+ break;
+
+ case OVReply:
+ case CsumRSRequest:
+ fault_type = DRBD_FAULT_RS_RD;
+		digest_size = h->length - brps;
+ di = kmalloc(sizeof(*di) + digest_size, GFP_KERNEL);
+ if (!di) {
+ dec_local(mdev);
+ drbd_free_ee(mdev, e);
+ return 0;
+ }
+
+ di->digest_size = digest_size;
+ di->digest = (((char *)di)+sizeof(struct digest_info));
+
+ if (drbd_recv(mdev, di->digest, digest_size) != digest_size) {
+ dec_local(mdev);
+ drbd_free_ee(mdev, e);
+ kfree(di);
+ return FALSE;
+ }
+
+ e->block_id = (u64)(unsigned long)di;
+ if (h->command == CsumRSRequest) {
+ D_ASSERT(mdev->agreed_pro_version >= 89);
+ e->w.cb = w_e_end_csum_rs_req;
+ } else if (h->command == OVReply) {
+ e->w.cb = w_e_end_ov_reply;
+ dec_rs_pending(mdev);
+ break;
+ }
+
+ if (!drbd_rs_begin_io(mdev, sector)) {
+ /* we have been interrupted, probably connection lost! */
+ D_ASSERT(signal_pending(current));
+ drbd_free_ee(mdev, e);
+ kfree(di);
+ dec_local(mdev);
+ return FALSE;
+ }
+ break;
+
+ case OVRequest:
+ e->w.cb = w_e_end_ov_req;
+ fault_type = DRBD_FAULT_RS_RD;
+		/* Eventually this should become asynchronous. Currently it
+ * blocks the whole receiver just to delay the reading of a
+ * resync data block.
+ * the drbd_work_queue mechanism is made for this...
+ */
+ if (!drbd_rs_begin_io(mdev, sector)) {
+ /* we have been interrupted,
+ * probably connection lost! */
+ D_ASSERT(signal_pending(current));
+ dec_local(mdev);
+ drbd_free_ee(mdev, e);
+ return 0;
+ }
+ break;
+
+
+ default:
+ ERR("unexpected command (%s) in receive_DataRequest\n",
+ cmdname(h->command));
+ fault_type = DRBD_FAULT_MAX;
+ }
+
+ spin_lock_irq(&mdev->req_lock);
+ list_add(&e->w.list, &mdev->read_ee);
+ spin_unlock_irq(&mdev->req_lock);
+
+ inc_unacked(mdev);
+
+ MTRACE(TraceTypeEE, TraceLvlAll,
+ INFO("submit EE READ sec=%llus size=%u ee=%p\n",
+ (unsigned long long)e->sector, e->size, e);
+ );
+
+ dump_internal_bio("Sec", mdev, e->private_bio, 0);
+ drbd_generic_make_request(mdev, fault_type, e->private_bio);
+ maybe_kick_lo(mdev);
+
+ return TRUE;
+}
+
+STATIC int drbd_asb_recover_0p(struct drbd_conf *mdev) __must_hold(local)
+{
+ int self, peer, rv = -100;
+ unsigned long ch_self, ch_peer;
+
+ self = mdev->bc->md.uuid[Bitmap] & 1;
+ peer = mdev->p_uuid[Bitmap] & 1;
+
+ ch_peer = mdev->p_uuid[UUID_SIZE];
+ ch_self = mdev->comm_bm_set;
+
+ switch (mdev->net_conf->after_sb_0p) {
+ case Consensus:
+ case DiscardSecondary:
+ case CallHelper:
+ ERR("Configuration error.\n");
+ break;
+ case Disconnect:
+ break;
+ case DiscardYoungerPri:
+ if (self == 0 && peer == 1) {
+ rv = -1;
+ break;
+ }
+ if (self == 1 && peer == 0) {
+ rv = 1;
+ break;
+ }
+ /* Else fall through to one of the other strategies... */
+ case DiscardOlderPri:
+ if (self == 0 && peer == 1) {
+ rv = 1;
+ break;
+ }
+ if (self == 1 && peer == 0) {
+ rv = -1;
+ break;
+ }
+ /* Else fall through to one of the other strategies... */
+		drbd_WARN("Discard younger/older primary did not find a decision\n"
+ "Using discard-least-changes instead\n");
+ case DiscardZeroChg:
+ if (ch_peer == 0 && ch_self == 0) {
+ rv = test_bit(DISCARD_CONCURRENT, &mdev->flags)
+ ? -1 : 1;
+ break;
+ } else {
+ if (ch_peer == 0) { rv = 1; break; }
+ if (ch_self == 0) { rv = -1; break; }
+ }
+ if (mdev->net_conf->after_sb_0p == DiscardZeroChg)
+ break;
+ case DiscardLeastChg:
+ if (ch_self < ch_peer)
+ rv = -1;
+ else if (ch_self > ch_peer)
+ rv = 1;
+ else /* ( ch_self == ch_peer ) */
+ /* Well, then use something else. */
+ rv = test_bit(DISCARD_CONCURRENT, &mdev->flags)
+ ? -1 : 1;
+ break;
+ case DiscardLocal:
+ rv = -1;
+ break;
+ case DiscardRemote:
+ rv = 1;
+ }
+
+ return rv;
+}
+
+STATIC int drbd_asb_recover_1p(struct drbd_conf *mdev) __must_hold(local)
+{
+ int self, peer, hg, rv = -100;
+
+ self = mdev->bc->md.uuid[Bitmap] & 1;
+ peer = mdev->p_uuid[Bitmap] & 1;
+
+ switch (mdev->net_conf->after_sb_1p) {
+ case DiscardYoungerPri:
+ case DiscardOlderPri:
+ case DiscardLeastChg:
+ case DiscardLocal:
+ case DiscardRemote:
+ ERR("Configuration error.\n");
+ break;
+ case Disconnect:
+ break;
+ case Consensus:
+ hg = drbd_asb_recover_0p(mdev);
+ if (hg == -1 && mdev->state.role == Secondary)
+ rv = hg;
+ if (hg == 1 && mdev->state.role == Primary)
+ rv = hg;
+ break;
+ case Violently:
+ rv = drbd_asb_recover_0p(mdev);
+ break;
+ case DiscardSecondary:
+ return mdev->state.role == Primary ? 1 : -1;
+ case CallHelper:
+ hg = drbd_asb_recover_0p(mdev);
+ if (hg == -1 && mdev->state.role == Primary) {
+ self = drbd_set_role(mdev, Secondary, 0);
+ if (self != SS_Success) {
+ drbd_khelper(mdev, "pri-lost-after-sb");
+ } else {
+				drbd_WARN("Successfully gave up primary role.\n");
+ rv = hg;
+ }
+ } else
+ rv = hg;
+ }
+
+ return rv;
+}
+
+STATIC int drbd_asb_recover_2p(struct drbd_conf *mdev) __must_hold(local)
+{
+ int self, peer, hg, rv = -100;
+
+ self = mdev->bc->md.uuid[Bitmap] & 1;
+ peer = mdev->p_uuid[Bitmap] & 1;
+
+ switch (mdev->net_conf->after_sb_2p) {
+ case DiscardYoungerPri:
+ case DiscardOlderPri:
+ case DiscardLeastChg:
+ case DiscardLocal:
+ case DiscardRemote:
+ case Consensus:
+ case DiscardSecondary:
+ ERR("Configuration error.\n");
+ break;
+ case Violently:
+ rv = drbd_asb_recover_0p(mdev);
+ break;
+ case Disconnect:
+ break;
+ case CallHelper:
+ hg = drbd_asb_recover_0p(mdev);
+ if (hg == -1) {
+ self = drbd_set_role(mdev, Secondary, 0);
+ if (self != SS_Success) {
+ drbd_khelper(mdev, "pri-lost-after-sb");
+ } else {
+				drbd_WARN("Successfully gave up primary role.\n");
+ rv = hg;
+ }
+ } else
+ rv = hg;
+ }
+
+ return rv;
+}
+
+STATIC void drbd_uuid_dump(struct drbd_conf *mdev, char *text, u64 *uuid,
+ u64 bits, u64 flags)
+{
+ if (!uuid) {
+ INFO("%s uuid info vanished while I was looking!\n", text);
+ return;
+ }
+ INFO("%s %016llX:%016llX:%016llX:%016llX bits:%llu flags:%llX\n",
+ text,
+ (unsigned long long)uuid[Current],
+ (unsigned long long)uuid[Bitmap],
+ (unsigned long long)uuid[History_start],
+ (unsigned long long)uuid[History_end],
+ (unsigned long long)bits,
+ (unsigned long long)flags);
+}
+
+/*
+ 100 after split brain try auto recover
+ 2 SyncSource set BitMap
+ 1 SyncSource use BitMap
+ 0 no Sync
+ -1 SyncTarget use BitMap
+ -2 SyncTarget set BitMap
+ -100 after split brain, disconnect
+-1000 unrelated data
+ */
+STATIC int drbd_uuid_compare(struct drbd_conf *mdev, int *rule_nr) __must_hold(local)
+{
+ u64 self, peer;
+ int i, j;
+
+ self = mdev->bc->md.uuid[Current] & ~((u64)1);
+ peer = mdev->p_uuid[Current] & ~((u64)1);
+
+ *rule_nr = 1;
+ if (self == UUID_JUST_CREATED && peer == UUID_JUST_CREATED)
+ return 0;
+
+ *rule_nr = 2;
+ if ((self == UUID_JUST_CREATED || self == (u64)0) &&
+ peer != UUID_JUST_CREATED)
+ return -2;
+
+ *rule_nr = 3;
+ if (self != UUID_JUST_CREATED &&
+ (peer == UUID_JUST_CREATED || peer == (u64)0))
+ return 2;
+
+ *rule_nr = 4;
+ if (self == peer) { /* Common power [off|failure] */
+ int rct, dc; /* roles at crash time */
+
+ rct = (test_bit(CRASHED_PRIMARY, &mdev->flags) ? 1 : 0) +
+ (mdev->p_uuid[UUID_FLAGS] & 2);
+ /* lowest bit is set when we were primary,
+ * next bit (weight 2) is set when peer was primary */
+
+ MTRACE(TraceTypeUuid, TraceLvlMetrics, DUMPI(rct););
+
+ switch (rct) {
+ case 0: /* !self_pri && !peer_pri */ return 0;
+ case 1: /* self_pri && !peer_pri */ return 1;
+ case 2: /* !self_pri && peer_pri */ return -1;
+ case 3: /* self_pri && peer_pri */
+ dc = test_bit(DISCARD_CONCURRENT, &mdev->flags);
+ MTRACE(TraceTypeUuid, TraceLvlMetrics, DUMPI(dc););
+ return dc ? -1 : 1;
+ }
+ }
+
+ *rule_nr = 5;
+ peer = mdev->p_uuid[Bitmap] & ~((u64)1);
+ if (self == peer)
+ return -1;
+
+ *rule_nr = 6;
+ for (i = History_start; i <= History_end; i++) {
+ peer = mdev->p_uuid[i] & ~((u64)1);
+ if (self == peer)
+ return -2;
+ }
+
+ *rule_nr = 7;
+ self = mdev->bc->md.uuid[Bitmap] & ~((u64)1);
+ peer = mdev->p_uuid[Current] & ~((u64)1);
+ if (self == peer)
+ return 1;
+
+ *rule_nr = 8;
+ for (i = History_start; i <= History_end; i++) {
+ self = mdev->bc->md.uuid[i] & ~((u64)1);
+ if (self == peer)
+ return 2;
+ }
+
+ *rule_nr = 9;
+ self = mdev->bc->md.uuid[Bitmap] & ~((u64)1);
+ peer = mdev->p_uuid[Bitmap] & ~((u64)1);
+ if (self == peer && self != ((u64)0))
+ return 100;
+
+ *rule_nr = 10;
+ for (i = History_start; i <= History_end; i++) {
+		self = mdev->bc->md.uuid[i] & ~((u64)1);
+ for (j = History_start; j <= History_end; j++) {
+ peer = mdev->p_uuid[j] & ~((u64)1);
+ if (self == peer)
+ return -100;
+ }
+ }
+
+ return -1000;
+}
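
Note that every comparison above first masks off bit 0 of the UUID, which
(per the rule 4 comment) records "node was primary while this UUID was
current" rather than data identity, so a bare role change can never make two
identical data generations look unrelated. A two-line sketch of that masking
(the helper name is made up):

	#include <stdint.h>

	/* strip the primary-role flag before comparing data generations */
	static uint64_t uuid_data_gen(uint64_t uuid)
	{
		return uuid & ~UINT64_C(1);
	}
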
+
+/* drbd_sync_handshake() returns the new conn state on success, or
+ conn_mask (-1) on failure.
+ */
+STATIC enum drbd_conns drbd_sync_handshake(struct drbd_conf *mdev, enum drbd_role peer_role,
+ enum drbd_disk_state peer_disk) __must_hold(local)
+{
+ int hg, rule_nr;
+ enum drbd_conns rv = conn_mask;
+ enum drbd_disk_state mydisk;
+
+ mydisk = mdev->state.disk;
+ if (mydisk == Negotiating)
+ mydisk = mdev->new_state_tmp.disk;
+
+ hg = drbd_uuid_compare(mdev, &rule_nr);
+
+ INFO("drbd_sync_handshake:\n");
+ drbd_uuid_dump(mdev, "self", mdev->bc->md.uuid,
+ mdev->state.disk >= Negotiating ? drbd_bm_total_weight(mdev) : 0, 0);
+ drbd_uuid_dump(mdev, "peer", mdev->p_uuid,
+ mdev->p_uuid[UUID_SIZE], mdev->p_uuid[UUID_FLAGS]);
+ INFO("uuid_compare()=%d by rule %d\n", hg, rule_nr);
+
+ if (hg == -1000) {
+ ALERT("Unrelated data, aborting!\n");
+ return conn_mask;
+ }
+
+ if ((mydisk == Inconsistent && peer_disk > Inconsistent) ||
+ (peer_disk == Inconsistent && mydisk > Inconsistent)) {
+ int f = (hg == -100) || abs(hg) == 2;
+ hg = mydisk > Inconsistent ? 1 : -1;
+ if (f)
+ hg = hg*2;
+ INFO("Becoming sync %s due to disk states.\n",
+ hg > 0 ? "source" : "target");
+ }
+
+ if (hg == 100 || (hg == -100 && mdev->net_conf->always_asbp)) {
+ int pcount = (mdev->state.role == Primary)
+ + (peer_role == Primary);
+ int forced = (hg == -100);
+
+ switch (pcount) {
+ case 0:
+ hg = drbd_asb_recover_0p(mdev);
+ break;
+ case 1:
+ hg = drbd_asb_recover_1p(mdev);
+ break;
+ case 2:
+ hg = drbd_asb_recover_2p(mdev);
+ break;
+ }
+ if (abs(hg) < 100) {
+ drbd_WARN("Split-Brain detected, %d primaries, "
+ "automatically solved. Sync from %s node\n",
+ pcount, (hg < 0) ? "peer" : "this");
+ if (forced) {
+ drbd_WARN("Doing a full sync, since"
+					" UUIDs were ambiguous.\n");
+ hg = hg*2;
+ }
+ }
+ }
+
+ if (hg == -100) {
+ if (mdev->net_conf->want_lose && !(mdev->p_uuid[UUID_FLAGS]&1))
+ hg = -1;
+ if (!mdev->net_conf->want_lose && (mdev->p_uuid[UUID_FLAGS]&1))
+ hg = 1;
+
+ if (abs(hg) < 100)
+ drbd_WARN("Split-Brain detected, manually solved. "
+ "Sync from %s node\n",
+ (hg < 0) ? "peer" : "this");
+ }
+
+ if (hg == -100) {
+ ALERT("Split-Brain detected, dropping connection!\n");
+ drbd_khelper(mdev, "split-brain");
+ return conn_mask;
+ }
+
+ if (hg > 0 && mydisk <= Inconsistent) {
+ ERR("I shall become SyncSource, but I am inconsistent!\n");
+ return conn_mask;
+ }
+
+ if (hg < 0 && /* by intention we do not use mydisk here. */
+ mdev->state.role == Primary && mdev->state.disk >= Consistent) {
+ switch (mdev->net_conf->rr_conflict) {
+ case CallHelper:
+ drbd_khelper(mdev, "pri-lost");
+ /* fall through */
+ case Disconnect:
+ ERR("I shall become SyncTarget, but I am primary!\n");
+ return conn_mask;
+ case Violently:
+			drbd_WARN("Becoming SyncTarget, violating the stable-data "
+				"assumption\n");
+ }
+ }
+
+ if (abs(hg) >= 2) {
+ INFO("Writing the whole bitmap, full sync required after drbd_sync_handshake.\n");
+ if (drbd_bitmap_io(mdev, &drbd_bmio_set_n_write, "set_n_write from sync_handshake"))
+ return conn_mask;
+ }
+
+ if (hg > 0) { /* become sync source. */
+ rv = WFBitMapS;
+ } else if (hg < 0) { /* become sync target */
+ rv = WFBitMapT;
+ } else {
+ rv = Connected;
+ if (drbd_bm_total_weight(mdev)) {
+ INFO("No resync, but %lu bits in bitmap!\n",
+ drbd_bm_total_weight(mdev));
+ }
+ }
+
+ drbd_bm_recount_bits(mdev);
+
+ return rv;
+}
+
+/* returns 1 if invalid */
+STATIC int cmp_after_sb(enum after_sb_handler peer, enum after_sb_handler self)
+{
+ /* DiscardRemote - DiscardLocal is valid */
+ if ((peer == DiscardRemote && self == DiscardLocal) ||
+ (self == DiscardRemote && peer == DiscardLocal))
+ return 0;
+
+ /* any other things with DiscardRemote or DiscardLocal are invalid */
+ if (peer == DiscardRemote || peer == DiscardLocal ||
+ self == DiscardRemote || self == DiscardLocal)
+ return 1;
+
+ /* everything else is valid if they are equal on both sides. */
+ if (peer == self)
+ return 0;
+
+	/* everything else is invalid. */
+ return 1;
+}
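
The validity rules are easiest to verify from a small truth table. A
self-contained sketch with stand-in enum values (not the real
after_sb_handler constants) and a few spot checks:

	#include <stdio.h>

	enum asb { ASB_DISCONNECT, ASB_DISCARD_LOCAL,
		   ASB_DISCARD_REMOTE, ASB_CONSENSUS };

	/* 1 if the peer/self pairing is invalid; mirrors cmp_after_sb() */
	static int asb_invalid(enum asb peer, enum asb self)
	{
		if ((peer == ASB_DISCARD_REMOTE && self == ASB_DISCARD_LOCAL) ||
		    (self == ASB_DISCARD_REMOTE && peer == ASB_DISCARD_LOCAL))
			return 0;	/* the one allowed crossed pairing */
		if (peer == ASB_DISCARD_REMOTE || peer == ASB_DISCARD_LOCAL ||
		    self == ASB_DISCARD_REMOTE || self == ASB_DISCARD_LOCAL)
			return 1;	/* other uses of local/remote discard */
		return peer != self;	/* else both sides must match */
	}

	int main(void)
	{
		printf("%d\n", asb_invalid(ASB_DISCARD_REMOTE, ASB_DISCARD_LOCAL)); /* 0 */
		printf("%d\n", asb_invalid(ASB_DISCARD_LOCAL, ASB_DISCARD_LOCAL));  /* 1 */
		printf("%d\n", asb_invalid(ASB_CONSENSUS, ASB_CONSENSUS));          /* 0 */
		printf("%d\n", asb_invalid(ASB_CONSENSUS, ASB_DISCONNECT));         /* 1 */
		return 0;
	}
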
+
+STATIC int receive_protocol(struct drbd_conf *mdev, struct Drbd_Header *h)
+{
+ struct Drbd_Protocol_Packet *p = (struct Drbd_Protocol_Packet *)h;
+ int header_size, data_size;
+ int p_proto, p_after_sb_0p, p_after_sb_1p, p_after_sb_2p;
+ int p_want_lose, p_two_primaries;
+ char p_integrity_alg[SHARED_SECRET_MAX] = "";
+
+ header_size = sizeof(*p) - sizeof(*h);
+ data_size = h->length - header_size;
+
+ if (drbd_recv(mdev, h->payload, header_size) != header_size)
+ return FALSE;
+
+ p_proto = be32_to_cpu(p->protocol);
+ p_after_sb_0p = be32_to_cpu(p->after_sb_0p);
+ p_after_sb_1p = be32_to_cpu(p->after_sb_1p);
+ p_after_sb_2p = be32_to_cpu(p->after_sb_2p);
+ p_want_lose = be32_to_cpu(p->want_lose);
+ p_two_primaries = be32_to_cpu(p->two_primaries);
+
+ if (p_proto != mdev->net_conf->wire_protocol) {
+ ERR("incompatible communication protocols\n");
+ goto disconnect;
+ }
+
+ if (cmp_after_sb(p_after_sb_0p, mdev->net_conf->after_sb_0p)) {
+ ERR("incompatible after-sb-0pri settings\n");
+ goto disconnect;
+ }
+
+ if (cmp_after_sb(p_after_sb_1p, mdev->net_conf->after_sb_1p)) {
+ ERR("incompatible after-sb-1pri settings\n");
+ goto disconnect;
+ }
+
+ if (cmp_after_sb(p_after_sb_2p, mdev->net_conf->after_sb_2p)) {
+ ERR("incompatible after-sb-2pri settings\n");
+ goto disconnect;
+ }
+
+ if (p_want_lose && mdev->net_conf->want_lose) {
+ ERR("both sides have the 'want_lose' flag set\n");
+ goto disconnect;
+ }
+
+ if (p_two_primaries != mdev->net_conf->two_primaries) {
+ ERR("incompatible setting of the two-primaries options\n");
+ goto disconnect;
+ }
+
+ if (mdev->agreed_pro_version >= 87) {
+ unsigned char *my_alg = mdev->net_conf->integrity_alg;
+
+ if (drbd_recv(mdev, p_integrity_alg, data_size) != data_size)
+ return FALSE;
+
+ p_integrity_alg[SHARED_SECRET_MAX-1] = 0;
+ if (strcmp(p_integrity_alg, my_alg)) {
+ ERR("incompatible setting of the data-integrity-alg\n");
+ goto disconnect;
+ }
+ INFO("data-integrity-alg: %s\n",
+ my_alg[0] ? my_alg : (unsigned char *)"<not-used>");
+ }
+
+ return TRUE;
+
+disconnect:
+ drbd_force_state(mdev, NS(conn, Disconnecting));
+ return FALSE;
+}
+
+/* helper function
+ * input: alg name, feature name
+ * return: NULL (alg name was "")
+ * ERR_PTR(error) if something goes wrong
+ * or the crypto hash ptr, if it worked out ok. */
+struct crypto_hash *drbd_crypto_alloc_digest_safe(const struct drbd_conf *mdev,
+ const char *alg, const char *name)
+{
+ struct crypto_hash *tfm;
+
+ if (!alg[0])
+ return NULL;
+
+ tfm = crypto_alloc_hash(alg, 0, CRYPTO_ALG_ASYNC);
+ if (IS_ERR(tfm)) {
+ ERR("Can not allocate \"%s\" as %s (reason: %ld)\n",
+ alg, name, PTR_ERR(tfm));
+ return tfm;
+ }
+ if (crypto_tfm_alg_type(crypto_hash_tfm(tfm)) != CRYPTO_ALG_TYPE_DIGEST) {
+ crypto_free_hash(tfm);
+ ERR("\"%s\" is not a digest (%s)\n", alg, name);
+ return ERR_PTR(-EINVAL);
+ }
+ return tfm;
+}
+
+STATIC int receive_SyncParam(struct drbd_conf *mdev, struct Drbd_Header *h)
+{
+ int ok = TRUE;
+ struct Drbd_SyncParam89_Packet *p = (struct Drbd_SyncParam89_Packet *)h;
+ unsigned int header_size, data_size, exp_max_sz;
+ struct crypto_hash *verify_tfm = NULL;
+ struct crypto_hash *csums_tfm = NULL;
+ const int apv = mdev->agreed_pro_version;
+
+ exp_max_sz = apv <= 87 ? sizeof(struct Drbd_SyncParam_Packet)
+ : apv == 88 ? sizeof(struct Drbd_SyncParam_Packet)
+ + SHARED_SECRET_MAX
+ : /* 89 */ sizeof(struct Drbd_SyncParam89_Packet);
+
+ if (h->length > exp_max_sz) {
+ ERR("SyncParam packet too long: received %u, expected <= %u bytes\n",
+ h->length, exp_max_sz);
+ return FALSE;
+ }
+
+ if (apv <= 88) {
+ header_size = sizeof(struct Drbd_SyncParam_Packet) - sizeof(*h);
+ data_size = h->length - header_size;
+ } else /* apv >= 89 */ {
+ header_size = sizeof(struct Drbd_SyncParam89_Packet) - sizeof(*h);
+ data_size = h->length - header_size;
+ D_ASSERT(data_size == 0);
+ }
+
+ /* initialize verify_alg and csums_alg */
+ memset(p->verify_alg, 0, 2 * SHARED_SECRET_MAX);
+
+ if (drbd_recv(mdev, h->payload, header_size) != header_size)
+ return FALSE;
+
+ mdev->sync_conf.rate = be32_to_cpu(p->rate);
+
+ if (apv >= 88) {
+ if (apv == 88) {
+ if (data_size > SHARED_SECRET_MAX) {
+ ERR("verify-alg too long, "
+				"peer wants %u, accepting only %u bytes\n",
+ data_size, SHARED_SECRET_MAX);
+ return FALSE;
+ }
+
+ if (drbd_recv(mdev, p->verify_alg, data_size) != data_size)
+ return FALSE;
+
+ /* we expect NUL terminated string */
+ /* but just in case someone tries to be evil */
+ D_ASSERT(p->verify_alg[data_size-1] == 0);
+ p->verify_alg[data_size-1] = 0;
+
+ } else /* apv >= 89 */ {
+ /* we still expect NUL terminated strings */
+ /* but just in case someone tries to be evil */
+ D_ASSERT(p->verify_alg[SHARED_SECRET_MAX-1] == 0);
+ D_ASSERT(p->csums_alg[SHARED_SECRET_MAX-1] == 0);
+ p->verify_alg[SHARED_SECRET_MAX-1] = 0;
+ p->csums_alg[SHARED_SECRET_MAX-1] = 0;
+ }
+
+ if (strcmp(mdev->sync_conf.verify_alg, p->verify_alg)) {
+ if (mdev->state.conn == WFReportParams) {
+ ERR("Different verify-alg settings. me=\"%s\" peer=\"%s\"\n",
+ mdev->sync_conf.verify_alg, p->verify_alg);
+ goto disconnect;
+ }
+ verify_tfm = drbd_crypto_alloc_digest_safe(mdev,
+ p->verify_alg, "verify-alg");
+ if (IS_ERR(verify_tfm))
+ goto disconnect;
+ }
+
+ if (apv >= 89 && strcmp(mdev->sync_conf.csums_alg, p->csums_alg)) {
+ if (mdev->state.conn == WFReportParams) {
+ ERR("Different csums-alg settings. me=\"%s\" peer=\"%s\"\n",
+ mdev->sync_conf.csums_alg, p->csums_alg);
+ goto disconnect;
+ }
+ csums_tfm = drbd_crypto_alloc_digest_safe(mdev,
+ p->csums_alg, "csums-alg");
+ if (IS_ERR(csums_tfm))
+ goto disconnect;
+ }
+
+
+ spin_lock(&mdev->peer_seq_lock);
+ /* lock against drbd_nl_syncer_conf() */
+ if (verify_tfm) {
+ strcpy(mdev->sync_conf.verify_alg, p->verify_alg);
+ mdev->sync_conf.verify_alg_len = strlen(p->verify_alg) + 1;
+ crypto_free_hash(mdev->verify_tfm);
+ mdev->verify_tfm = verify_tfm;
+ INFO("using verify-alg: \"%s\"\n", p->verify_alg);
+ }
+ if (csums_tfm) {
+ strcpy(mdev->sync_conf.csums_alg, p->csums_alg);
+ mdev->sync_conf.csums_alg_len = strlen(p->csums_alg) + 1;
+ crypto_free_hash(mdev->csums_tfm);
+ mdev->csums_tfm = csums_tfm;
+ INFO("using csums-alg: \"%s\"\n", p->csums_alg);
+ }
+ spin_unlock(&mdev->peer_seq_lock);
+ }
+
+ return ok;
+disconnect:
+ crypto_free_hash(verify_tfm);
+ drbd_force_state(mdev, NS(conn, Disconnecting));
+ return FALSE;
+}
+
+STATIC void drbd_setup_order_type(struct drbd_conf *mdev, int peer)
+{
+ /* sorry, we currently have no working implementation
+ * of distributed TCQ */
+}
+
+/* warn if the arguments differ by more than 12.5% */
+static void warn_if_differ_considerably(struct drbd_conf *mdev,
+ const char *s, sector_t a, sector_t b)
+{
+ sector_t d;
+ if (a == 0 || b == 0)
+ return;
+ d = (a > b) ? (a - b) : (b - a);
+ if (d > (a>>3) || d > (b>>3))
+ drbd_WARN("Considerable difference in %s: %llus vs. %llus\n", s,
+ (unsigned long long)a, (unsigned long long)b);
+}
+
+STATIC int receive_sizes(struct drbd_conf *mdev, struct Drbd_Header *h)
+{
+ struct Drbd_Sizes_Packet *p = (struct Drbd_Sizes_Packet *)h;
+ enum determin_dev_size_enum dd = unchanged;
+ unsigned int max_seg_s;
+ sector_t p_size, p_usize, my_usize;
+ int ldsc = 0; /* local disk size changed */
+ enum drbd_conns nconn;
+
+ ERR_IF(h->length != (sizeof(*p)-sizeof(*h))) return FALSE;
+ if (drbd_recv(mdev, h->payload, h->length) != h->length)
+ return FALSE;
+
+ p_size = be64_to_cpu(p->d_size);
+ p_usize = be64_to_cpu(p->u_size);
+
+ if (p_size == 0 && mdev->state.disk == Diskless) {
+ ERR("some backing storage is needed\n");
+ drbd_force_state(mdev, NS(conn, Disconnecting));
+ return FALSE;
+ }
+
+	/* just store the peer's disk size for now.
+	 * we still need to figure out whether we accept that. */
+ mdev->p_size = p_size;
+
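+	/* pick the smaller of two sizes, where 0 means "not configured"
+	 * and therefore loses against any configured value */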
+#define min_not_zero(l, r) ((l) == 0 ? (r) : ((r) == 0 ? (l) : min((l), (r))))
+ if (inc_local(mdev)) {
+ warn_if_differ_considerably(mdev, "lower level device sizes",
+ p_size, drbd_get_max_capacity(mdev->bc));
+ warn_if_differ_considerably(mdev, "user requested size",
+ p_usize, mdev->bc->dc.disk_size);
+
+ /* if this is the first connect, or an otherwise expected
+ * param exchange, choose the minimum */
+ if (mdev->state.conn == WFReportParams)
+ p_usize = min_not_zero((sector_t)mdev->bc->dc.disk_size,
+ p_usize);
+
+ my_usize = mdev->bc->dc.disk_size;
+
+ if (mdev->bc->dc.disk_size != p_usize) {
+ mdev->bc->dc.disk_size = p_usize;
+ INFO("Peer sets u_size to %lu sectors\n",
+ (unsigned long)mdev->bc->dc.disk_size);
+ }
+
+ /* Never shrink a device with usable data during connect.
+ But allow online shrinking if we are connected. */
+ if (drbd_new_dev_size(mdev, mdev->bc) <
+ drbd_get_capacity(mdev->this_bdev) &&
+ mdev->state.disk >= Outdated &&
+ mdev->state.conn < Connected) {
+ ERR("The peer's disk size is too small!\n");
+ drbd_force_state(mdev, NS(conn, Disconnecting));
+ mdev->bc->dc.disk_size = my_usize;
+ dec_local(mdev);
+ return FALSE;
+ }
+ dec_local(mdev);
+ }
+#undef min_not_zero
+
+ if (inc_local(mdev)) {
+ dd = drbd_determin_dev_size(mdev);
+ dec_local(mdev);
+ if (dd == dev_size_error)
+ return FALSE;
+ drbd_md_sync(mdev);
+ } else {
+ /* I am diskless, need to accept the peer's size. */
+ drbd_set_my_capacity(mdev, p_size);
+ }
+
+ if (mdev->p_uuid && mdev->state.conn <= Connected && inc_local(mdev)) {
+ nconn = drbd_sync_handshake(mdev,
+ mdev->state.peer, mdev->state.pdsk);
+ dec_local(mdev);
+
+ if (nconn == conn_mask) {
+ drbd_force_state(mdev, NS(conn, Disconnecting));
+ return FALSE;
+ }
+
+ if (drbd_request_state(mdev, NS(conn, nconn)) < SS_Success) {
+ drbd_force_state(mdev, NS(conn, Disconnecting));
+ return FALSE;
+ }
+ }
+
+ if (inc_local(mdev)) {
+ if (mdev->bc->known_size != drbd_get_capacity(mdev->bc->backing_bdev)) {
+ mdev->bc->known_size = drbd_get_capacity(mdev->bc->backing_bdev);
+ ldsc = 1;
+ }
+
+ max_seg_s = be32_to_cpu(p->max_segment_size);
+ if (max_seg_s != mdev->rq_queue->max_segment_size)
+ drbd_setup_queue_param(mdev, max_seg_s);
+
+ drbd_setup_order_type(mdev, be32_to_cpu(p->queue_order_type));
+ dec_local(mdev);
+ }
+
+ if (mdev->state.conn > WFReportParams) {
+ if (be64_to_cpu(p->c_size) !=
+ drbd_get_capacity(mdev->this_bdev) || ldsc) {
+			/* we have different sizes, probably the peer
+			 * needs to know my new size... */
+ drbd_send_sizes(mdev);
+ }
+ if (dd == grew && mdev->state.conn == Connected) {
+ if (mdev->state.pdsk >= Inconsistent &&
+ mdev->state.disk >= Inconsistent)
+ resync_after_online_grow(mdev);
+ else
+ set_bit(RESYNC_AFTER_NEG, &mdev->flags);
+ }
+ }
+
+ return TRUE;
+}
+
+STATIC int receive_uuids(struct drbd_conf *mdev, struct Drbd_Header *h)
+{
+ struct Drbd_GenCnt_Packet *p = (struct Drbd_GenCnt_Packet *)h;
+ u64 *p_uuid;
+ int i;
+
+ ERR_IF(h->length != (sizeof(*p)-sizeof(*h))) return FALSE;
+ if (drbd_recv(mdev, h->payload, h->length) != h->length)
+ return FALSE;
+
+	p_uuid = kmalloc(sizeof(u64)*EXT_UUID_SIZE, GFP_KERNEL);
+	if (!p_uuid)
+		return FALSE;
+
+ for (i = Current; i < EXT_UUID_SIZE; i++)
+ p_uuid[i] = be64_to_cpu(p->uuid[i]);
+
+ kfree(mdev->p_uuid);
+ mdev->p_uuid = p_uuid;
+
+ if (mdev->state.conn < Connected &&
+ mdev->state.disk < Inconsistent &&
+ mdev->state.role == Primary &&
+ (mdev->ed_uuid & ~((u64)1)) != (p_uuid[Current] & ~((u64)1))) {
+ ERR("Can only connect to data with current UUID=%016llX\n",
+ (unsigned long long)mdev->ed_uuid);
+ drbd_force_state(mdev, NS(conn, Disconnecting));
+ return FALSE;
+ }
+
+	/* Before we test for the disk state, we should wait until a possibly
+	   ongoing cluster-wide state change is finished. That is important if
+	   we are primary and are detaching from our disk. We need to see the
+	   new disk state... */
+ wait_event(mdev->misc_wait, !test_bit(CLUSTER_ST_CHANGE, &mdev->flags));
+ if (mdev->state.conn >= Connected && mdev->state.disk < Inconsistent)
+ drbd_set_ed_uuid(mdev, p_uuid[Current]);
+
+ return TRUE;
+}
+
+/**
+ * convert_state:
+ * Translate the peer's view of the state into our own:
+ * swap role/peer and disk/pdsk, and map the connection
+ * states onto their mirror-image counterparts.
+ */
+STATIC union drbd_state_t convert_state(union drbd_state_t ps)
+{
+ union drbd_state_t ms;
+
+ static enum drbd_conns c_tab[] = {
+ [Connected] = Connected,
+
+ [StartingSyncS] = StartingSyncT,
+ [StartingSyncT] = StartingSyncS,
+ [Disconnecting] = TearDown, /* NetworkFailure, */
+ [VerifyS] = VerifyT,
+ [conn_mask] = conn_mask,
+ };
+
+ ms.i = ps.i;
+
+ ms.conn = c_tab[ps.conn];
+ ms.peer = ps.role;
+ ms.role = ps.peer;
+ ms.pdsk = ps.disk;
+ ms.disk = ps.pdsk;
+ ms.peer_isp = (ps.aftr_isp | ps.user_isp);
+
+ return ms;
+}
+
+STATIC int receive_req_state(struct drbd_conf *mdev, struct Drbd_Header *h)
+{
+ struct Drbd_Req_State_Packet *p = (struct Drbd_Req_State_Packet *)h;
+ union drbd_state_t mask, val;
+ int rv;
+
+ ERR_IF(h->length != (sizeof(*p)-sizeof(*h))) return FALSE;
+ if (drbd_recv(mdev, h->payload, h->length) != h->length)
+ return FALSE;
+
+ mask.i = be32_to_cpu(p->mask);
+ val.i = be32_to_cpu(p->val);
+
+ if (test_bit(DISCARD_CONCURRENT, &mdev->flags) &&
+ test_bit(CLUSTER_ST_CHANGE, &mdev->flags)) {
+ drbd_send_sr_reply(mdev, SS_ConcurrentStChg);
+ return TRUE;
+ }
+
+ mask = convert_state(mask);
+ val = convert_state(val);
+
+ rv = drbd_change_state(mdev, ChgStateVerbose, mask, val);
+
+ drbd_send_sr_reply(mdev, rv);
+ drbd_md_sync(mdev);
+
+ return TRUE;
+}
+
+STATIC int receive_state(struct drbd_conf *mdev, struct Drbd_Header *h)
+{
+ struct Drbd_State_Packet *p = (struct Drbd_State_Packet *)h;
+ enum drbd_conns nconn, oconn;
+ union drbd_state_t ns, peer_state;
+ enum drbd_disk_state real_peer_disk;
+ int rv;
+
+ ERR_IF(h->length != (sizeof(*p)-sizeof(*h)))
+ return FALSE;
+
+ if (drbd_recv(mdev, h->payload, h->length) != h->length)
+ return FALSE;
+
+ peer_state.i = be32_to_cpu(p->state);
+
+ real_peer_disk = peer_state.disk;
+ if (peer_state.disk == Negotiating) {
+ real_peer_disk = mdev->p_uuid[UUID_FLAGS] & 4 ? Inconsistent : Consistent;
+ INFO("real peer disk state = %s\n", disks_to_name(real_peer_disk));
+ }
+
+ spin_lock_irq(&mdev->req_lock);
+ retry:
+ oconn = nconn = mdev->state.conn;
+ spin_unlock_irq(&mdev->req_lock);
+
+ if (nconn == WFReportParams)
+ nconn = Connected;
+
+ if (mdev->p_uuid && peer_state.disk >= Negotiating &&
+ inc_local_if_state(mdev, Negotiating)) {
+ int cr; /* consider resync */
+
+ cr = (oconn < Connected);
+ cr |= (oconn == Connected &&
+ (peer_state.disk == Negotiating ||
+ mdev->state.disk == Negotiating));
+ cr |= test_bit(CONSIDER_RESYNC, &mdev->flags); /* peer forced */
+ cr |= (oconn == Connected && peer_state.conn > Connected);
+
+ if (cr)
+ nconn = drbd_sync_handshake(mdev, peer_state.role, real_peer_disk);
+
+ dec_local(mdev);
+ if (nconn == conn_mask) {
+ if (mdev->state.disk == Negotiating) {
+ drbd_force_state(mdev, NS(disk, Diskless));
+ nconn = Connected;
+ } else if (peer_state.disk == Negotiating) {
+ ERR("Disk attach process on the peer node was aborted.\n");
+ peer_state.disk = Diskless;
+ } else {
+ D_ASSERT(oconn == WFReportParams);
+ drbd_force_state(mdev, NS(conn, Disconnecting));
+ return FALSE;
+ }
+ }
+ }
+
+ spin_lock_irq(&mdev->req_lock);
+ if (mdev->state.conn != oconn)
+ goto retry;
+ clear_bit(CONSIDER_RESYNC, &mdev->flags);
+ ns.i = mdev->state.i;
+ ns.conn = nconn;
+ ns.peer = peer_state.role;
+ ns.pdsk = real_peer_disk;
+ ns.peer_isp = (peer_state.aftr_isp | peer_state.user_isp);
+ if ((nconn == Connected || nconn == WFBitMapS) && ns.disk == Negotiating)
+ ns.disk = mdev->new_state_tmp.disk;
+
+ rv = _drbd_set_state(mdev, ns, ChgStateVerbose | ChgStateHard, NULL);
+ ns = mdev->state;
+ spin_unlock_irq(&mdev->req_lock);
+
+ if (rv < SS_Success) {
+ drbd_force_state(mdev, NS(conn, Disconnecting));
+ return FALSE;
+ }
+
+ if (oconn > WFReportParams) {
+ if (nconn > Connected && peer_state.conn <= Connected &&
+ peer_state.disk != Negotiating ) {
+ /* we want resync, peer has not yet decided to sync... */
+			/* Nowadays only used when forcing a node into primary role and
+			   setting its disk to UpToDate with that */
+ drbd_send_uuids(mdev);
+ drbd_send_state(mdev);
+ }
+ }
+
+ mdev->net_conf->want_lose = 0;
+
+ drbd_md_sync(mdev); /* update connected indicator, la_size, ... */
+
+ return TRUE;
+}
+
+STATIC int receive_sync_uuid(struct drbd_conf *mdev, struct Drbd_Header *h)
+{
+ struct Drbd_SyncUUID_Packet *p = (struct Drbd_SyncUUID_Packet *)h;
+
+ wait_event(mdev->misc_wait,
+ mdev->state.conn < Connected ||
+ mdev->state.conn == WFSyncUUID);
+
+ /* D_ASSERT( mdev->state.conn == WFSyncUUID ); */
+
+ ERR_IF(h->length != (sizeof(*p)-sizeof(*h))) return FALSE;
+ if (drbd_recv(mdev, h->payload, h->length) != h->length)
+ return FALSE;
+
+ /* Here the _drbd_uuid_ functions are right, current should
+ _not_ be rotated into the history */
+ if (inc_local_if_state(mdev, Negotiating)) {
+ _drbd_uuid_set(mdev, Current, be64_to_cpu(p->uuid));
+ _drbd_uuid_set(mdev, Bitmap, 0UL);
+
+ drbd_start_resync(mdev, SyncTarget);
+
+ dec_local(mdev);
+ } else
+ ERR("Ignoring SyncUUID packet!\n");
+
+ return TRUE;
+}
+
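+/* OK: packet consumed, expect more; DONE: bitmap transfer complete;
+ * FAILED: protocol or decoding error, abort */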
+enum receive_bitmap_ret { OK, DONE, FAILED };
+
+static enum receive_bitmap_ret
+receive_bitmap_plain(struct drbd_conf *mdev, struct Drbd_Header *h,
+ unsigned long *buffer, struct bm_xfer_ctx *c)
+{
+ unsigned num_words = min_t(size_t, BM_PACKET_WORDS, c->bm_words - c->word_offset);
+ unsigned want = num_words * sizeof(long);
+
+ if (want != h->length) {
+ ERR("%s:want (%u) != h->length (%u)\n", __func__, want, h->length);
+ return FAILED;
+ }
+ if (want == 0)
+ return DONE;
+ if (drbd_recv(mdev, buffer, want) != want)
+ return FAILED;
+
+ drbd_bm_merge_lel(mdev, c->word_offset, num_words, buffer);
+
+ c->word_offset += num_words;
+ c->bit_offset = c->word_offset * BITS_PER_LONG;
+ if (c->bit_offset > c->bm_bits)
+ c->bit_offset = c->bm_bits;
+
+ return OK;
+}
+
+static enum receive_bitmap_ret
+recv_bm_rle_bits(struct drbd_conf *mdev,
+ struct Drbd_Compressed_Bitmap_Packet *p,
+ struct bm_xfer_ctx *c)
+{
+ struct bitstream bs;
+ u64 look_ahead;
+ u64 rl;
+ u64 tmp;
+ unsigned long s = c->bit_offset;
+ unsigned long e;
+ int len = p->head.length - (sizeof(*p) - sizeof(p->head));
+ int toggle = DCBP_get_start(p);
+ int have;
+ int bits;
+
+ bitstream_init(&bs, p->code, len, DCBP_get_pad_bits(p));
+
+ bits = bitstream_get_bits(&bs, &look_ahead, 64);
+ if (bits < 0)
+ return FAILED;
+
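+	/* invariant: the low "have" bits of look_ahead hold not yet
+	 * decoded bitstream data.  Each pass decodes one VLI run length,
+	 * applies the run if it describes set bits, then refills
+	 * look_ahead from the bitstream. */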
+ for (have = bits; have > 0; s += rl, toggle = !toggle) {
+ bits = vli_decode_bits(&rl, look_ahead);
+ if (bits <= 0)
+ return FAILED;
+
+ if (toggle) {
+			e = s + rl - 1;
+ if (e >= c->bm_bits) {
+ ERR("bitmap overflow (e:%lu) while decoding bm RLE packet\n", e);
+ return FAILED;
+ }
+ _drbd_bm_set_bits(mdev, s, e);
+ }
+
+ if (have < bits) {
+ ERR("bitmap decoding error: h:%d b:%d la:0x%08llx l:%u/%u\n", have, bits, look_ahead,
+ bs.cur.b - p->code, bs.buf_len);
+ return FAILED;
+ }
+ look_ahead >>= bits;
+ have -= bits;
+
+ bits = bitstream_get_bits(&bs, &tmp, 64 - have);
+ if (bits < 0)
+ return FAILED;
+ look_ahead |= tmp << have;
+ have += bits;
+ }
+
+ c->bit_offset = s;
+ bm_xfer_ctx_bit_to_word_offset(c);
+
+ return (s == c->bm_bits) ? DONE : OK;
+}
+
+
+static enum receive_bitmap_ret
+recv_bm_rle_bytes(struct drbd_conf *mdev,
+ struct Drbd_Compressed_Bitmap_Packet *p,
+ struct bm_xfer_ctx *c)
+{
+ u64 rl;
+ unsigned char *buf = p->code;
+ unsigned long s;
+ unsigned long e;
+ int len = p->head.length - (p->code - p->head.payload);
+ int toggle;
+ int n;
+
+ s = c->bit_offset;
+
+ /* decoding. the payload of bitmap rle packets is VLI encoded
+ * runlength of set and unset bits, starting with set/unset as defined
+ * in p->encoding & 0x80. */
+ for (toggle = DCBP_get_start(p); len; s += rl, toggle = !toggle) {
+ if (s >= c->bm_bits) {
+ ERR("bitmap overflow (s:%lu) while decoding bitmap RLE packet\n", s);
+ return FAILED;
+ }
+
+ n = vli_decode_bytes(&rl, buf, len);
+ if (n == 0) /* incomplete buffer! */
+ return FAILED;
+ buf += n;
+ len -= n;
+
+ if (rl == 0) {
+ ERR("unexpected zero runlength while decoding bitmap RLE packet\n");
+ return FAILED;
+ }
+
+ /* unset bits: ignore, because of x | 0 == x. */
+ if (!toggle)
+ continue;
+
+ /* set bits: merge into bitmap. */
+		e = s + rl - 1;
+ if (e >= c->bm_bits) {
+ ERR("bitmap overflow (e:%lu) while decoding bitmap RLE packet\n", e);
+ return FAILED;
+ }
+ _drbd_bm_set_bits(mdev, s, e);
+ }
+
+ c->bit_offset = s;
+ bm_xfer_ctx_bit_to_word_offset(c);
+
+ return (s == c->bm_bits) ? DONE : OK;
+}
+
+static enum receive_bitmap_ret
+decode_bitmap_c(struct drbd_conf *mdev,
+ struct Drbd_Compressed_Bitmap_Packet *p,
+ struct bm_xfer_ctx *c)
+{
+ switch (DCBP_get_code(p)) {
+ /* no default! I want the compiler to warn me! */
+ case RLE_VLI_BitsFibD_0_1:
+ case RLE_VLI_BitsFibD_1_1:
+ case RLE_VLI_BitsFibD_1_2:
+ case RLE_VLI_BitsFibD_2_3:
+ break; /* TODO */
+ case RLE_VLI_BitsFibD_3_5:
+ return recv_bm_rle_bits(mdev, p, c);
+ case RLE_VLI_Bytes:
+ return recv_bm_rle_bytes(mdev, p, c);
+ }
+	ERR("decode_bitmap_c: unknown encoding %u\n", p->encoding);
+ return FAILED;
+}
+
+void INFO_bm_xfer_stats(struct drbd_conf *mdev,
+ const char *direction, struct bm_xfer_ctx *c)
+{
+ unsigned plain_would_take = sizeof(struct Drbd_Header) *
+ ((c->bm_words+BM_PACKET_WORDS-1)/BM_PACKET_WORDS+1)
+ + c->bm_words * sizeof(long);
+ unsigned total = c->bytes[0] + c->bytes[1];
+ unsigned q, r;
+
+ /* total can not be zero. but just in case: */
+ if (total == 0)
+ return;
+
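+	/* compression factor printed as "q.rr": q is the integer part,
+	 * r the fractional part in hundredths.  If 100 * r would
+	 * overflow, divide r by total/100 (rounded up) instead. */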
+ q = plain_would_take / total;
+ r = plain_would_take % total;
+	r = (r > UINT_MAX/100) ? (r / ((total+99)/100)) : (100 * r / total);
+
+ INFO("%s bitmap stats [Bytes(packets)]: plain %u(%u), RLE %u(%u), "
+ "total %u; compression factor: %u.%02u\n",
+ direction,
+ c->bytes[1], c->packets[1],
+ c->bytes[0], c->packets[0],
+ total, q, r);
+}
+
+/* Since we are processing the bitfield from lower addresses to higher,
+   it does not matter whether we process it in 32 bit or 64 bit chunks,
+   as long as it is little endian. (Understand it as a byte stream,
+   beginning with the lowest byte...) If we used big endian
+   we would need to process it from the highest address to the lowest,
+   in order to be agnostic to the 32 vs 64 bit issue.
+
+   returns 0 on failure, 1 if we successfully received it. */
+STATIC int receive_bitmap(struct drbd_conf *mdev, struct Drbd_Header *h)
+{
+ struct bm_xfer_ctx c;
+ void *buffer;
+ enum receive_bitmap_ret ret;
+ int ok = FALSE;
+
+ wait_event(mdev->misc_wait, !atomic_read(&mdev->ap_bio_cnt));
+
+ drbd_bm_lock(mdev, "receive bitmap");
+
+ /* maybe we should use some per thread scratch page,
+ * and allocate that during initial device creation? */
+ buffer = (unsigned long *) __get_free_page(GFP_NOIO);
+ if (!buffer) {
+ ERR("failed to allocate one page buffer in %s\n", __func__);
+ goto out;
+ }
+
+ c = (struct bm_xfer_ctx) {
+ .bm_bits = drbd_bm_bits(mdev),
+ .bm_words = drbd_bm_words(mdev),
+ };
+
+ do {
+ if (h->command == ReportBitMap) {
+ ret = receive_bitmap_plain(mdev, h, buffer, &c);
+ } else if (h->command == ReportCBitMap) {
+ /* MAYBE: sanity check that we speak proto >= 90,
+ * and the feature is enabled! */
+ struct Drbd_Compressed_Bitmap_Packet *p;
+
+ if (h->length > BM_PACKET_PAYLOAD_BYTES) {
+ ERR("ReportCBitmap packet too large\n");
+ goto out;
+ }
+ /* use the page buff */
+ p = buffer;
+ memcpy(p, h, sizeof(*h));
+ if (drbd_recv(mdev, p->head.payload, h->length) != h->length)
+ goto out;
+ if (p->head.length <= (sizeof(*p) - sizeof(p->head))) {
+ ERR("ReportCBitmap packet too small (l:%u)\n", p->head.length);
+				goto out;
+ }
+ ret = decode_bitmap_c(mdev, p, &c);
+ } else {
+ drbd_WARN("receive_bitmap: h->command neither ReportBitMap nor ReportCBitMap (is 0x%x)", h->command);
+ goto out;
+ }
+
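+		/* index 1 counts plain (ReportBitMap), index 0 compressed
+		 * (ReportCBitMap) packets and bytes */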
+ c.packets[h->command == ReportBitMap]++;
+ c.bytes[h->command == ReportBitMap] += sizeof(struct Drbd_Header) + h->length;
+
+ if (ret != OK)
+ break;
+
+ if (!drbd_recv_header(mdev, h))
+ goto out;
+ } while (ret == OK);
+ if (ret == FAILED)
+ goto out;
+
+ INFO_bm_xfer_stats(mdev, "receive", &c);
+
+ if (mdev->state.conn == WFBitMapT) {
+ ok = !drbd_send_bitmap(mdev);
+ if (!ok)
+ goto out;
+ /* Omit ChgOrdered with this state transition to avoid deadlocks. */
+ ok = _drbd_request_state(mdev, NS(conn, WFSyncUUID), ChgStateVerbose);
+ D_ASSERT(ok == SS_Success);
+ } else if (mdev->state.conn != WFBitMapS) {
+ /* admin may have requested Disconnecting,
+ * other threads may have noticed network errors */
+ INFO("unexpected cstate (%s) in receive_bitmap\n",
+ conns_to_name(mdev->state.conn));
+ }
+
+ ok = TRUE;
+ out:
+ drbd_bm_unlock(mdev);
+ if (ok && mdev->state.conn == WFBitMapS)
+ drbd_start_resync(mdev, SyncSource);
+ free_page((unsigned long) buffer);
+ return ok;
+}
+
+STATIC int receive_skip(struct drbd_conf *mdev, struct Drbd_Header *h)
+{
+ /* TODO zero copy sink :) */
+ static char sink[128];
+ int size, want, r;
+
+ drbd_WARN("skipping unknown optional packet type %d, l: %d!\n",
+ h->command, h->length);
+
+ size = h->length;
+ while (size > 0) {
+ want = min_t(int, size, sizeof(sink));
+ r = drbd_recv(mdev, sink, want);
+ ERR_IF(r <= 0) break;
+ size -= r;
+ }
+ return size == 0;
+}
+
+STATIC int receive_UnplugRemote(struct drbd_conf *mdev, struct Drbd_Header *h)
+{
+ if (mdev->state.disk >= Inconsistent)
+ drbd_kick_lo(mdev);
+
+ /* Make sure we've acked all the TCP data associated
+ * with the data requests being unplugged */
+ drbd_tcp_quickack(mdev->data.socket);
+
+ return TRUE;
+}
+
+typedef int (*drbd_cmd_handler_f)(struct drbd_conf *, struct Drbd_Header *);
+
+static drbd_cmd_handler_f drbd_default_handler[] = {
+ [Data] = receive_Data,
+ [DataReply] = receive_DataReply,
+ [RSDataReply] = receive_RSDataReply,
+ [Barrier] = receive_Barrier,
+ [ReportBitMap] = receive_bitmap,
+ [ReportCBitMap] = receive_bitmap,
+ [UnplugRemote] = receive_UnplugRemote,
+ [DataRequest] = receive_DataRequest,
+ [RSDataRequest] = receive_DataRequest,
+ [SyncParam] = receive_SyncParam,
+ [SyncParam89] = receive_SyncParam,
+ [ReportProtocol] = receive_protocol,
+ [ReportUUIDs] = receive_uuids,
+ [ReportSizes] = receive_sizes,
+ [ReportState] = receive_state,
+ [StateChgRequest] = receive_req_state,
+ [ReportSyncUUID] = receive_sync_uuid,
+ [OVRequest] = receive_DataRequest,
+ [OVReply] = receive_DataRequest,
+ [CsumRSRequest] = receive_DataRequest,
+ /* anything missing from this table is in
+ * the asender_tbl, see get_asender_cmd */
+ [MAX_CMD] = NULL,
+};
+
+static drbd_cmd_handler_f *drbd_cmd_handler = drbd_default_handler;
+static drbd_cmd_handler_f *drbd_opt_cmd_handler;
+
+STATIC void drbdd(struct drbd_conf *mdev)
+{
+ drbd_cmd_handler_f handler;
+ struct Drbd_Header *header = &mdev->data.rbuf.head;
+
+ while (get_t_state(&mdev->receiver) == Running) {
+ drbd_thread_current_set_cpu(mdev);
+ if (!drbd_recv_header(mdev, header))
+ break;
+
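+		/* dispatch: normal commands go through the handler table;
+		 * optional commands (MayIgnore..MAX_OPT_CMD) through the
+		 * optional table; anything above that range gets drained
+		 * and skipped by receive_skip */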
+ if (header->command < MAX_CMD)
+ handler = drbd_cmd_handler[header->command];
+		else if (MayIgnore < header->command
+			 && header->command < MAX_OPT_CMD)
+			handler = drbd_opt_cmd_handler
+				? drbd_opt_cmd_handler[header->command-MayIgnore]
+				: NULL;
+ else if (header->command > MAX_OPT_CMD)
+ handler = receive_skip;
+ else
+ handler = NULL;
+
+ if (unlikely(!handler)) {
+ ERR("unknown packet type %d, l: %d!\n",
+ header->command, header->length);
+ drbd_force_state(mdev, NS(conn, ProtocolError));
+ break;
+ }
+ if (unlikely(!handler(mdev, header))) {
+ ERR("error receiving %s, l: %d!\n",
+ cmdname(header->command), header->length);
+ drbd_force_state(mdev, NS(conn, ProtocolError));
+ break;
+ }
+
+ dump_packet(mdev, mdev->data.socket, 2, &mdev->data.rbuf,
+ __FILE__, __LINE__);
+ }
+}
+
+STATIC void drbd_fail_pending_reads(struct drbd_conf *mdev)
+{
+ struct hlist_head *slot;
+ struct hlist_node *pos;
+ struct hlist_node *tmp;
+ struct drbd_request *req;
+ int i;
+
+ /*
+ * Application READ requests
+ */
+ spin_lock_irq(&mdev->req_lock);
+ for (i = 0; i < APP_R_HSIZE; i++) {
+ slot = mdev->app_reads_hash+i;
+ hlist_for_each_entry_safe(req, pos, tmp, slot, colision) {
+ /* it may (but should not any longer!)
+ * be on the work queue; if that assert triggers,
+ * we need to also grab the
+ * spin_lock_irq(&mdev->data.work.q_lock);
+ * and list_del_init here. */
+ D_ASSERT(list_empty(&req->w.list));
+ _req_mod(req, connection_lost_while_pending, 0);
+ }
+ }
+ for (i = 0; i < APP_R_HSIZE; i++)
+ if (!hlist_empty(mdev->app_reads_hash+i))
+ drbd_WARN("ASSERT FAILED: app_reads_hash[%d].first: "
+ "%p, should be NULL\n", i, mdev->app_reads_hash[i].first);
+
+ memset(mdev->app_reads_hash, 0, APP_R_HSIZE*sizeof(void *));
+ spin_unlock_irq(&mdev->req_lock);
+}
+
+STATIC void drbd_disconnect(struct drbd_conf *mdev)
+{
+ struct drbd_work prev_work_done;
+ enum fencing_policy fp;
+ union drbd_state_t os, ns;
+ int rv = SS_UnknownError;
+ unsigned int i;
+
+ if (mdev->state.conn == StandAlone)
+ return;
+ if (mdev->state.conn >= WFConnection)
+ ERR("ASSERT FAILED cstate = %s, expected < WFConnection\n",
+ conns_to_name(mdev->state.conn));
+
+ /* asender does not clean up anything. it must not interfere, either */
+ drbd_thread_stop(&mdev->asender);
+
+ mutex_lock(&mdev->data.mutex);
+ drbd_free_sock(mdev);
+ mutex_unlock(&mdev->data.mutex);
+
+ spin_lock_irq(&mdev->req_lock);
+ _drbd_wait_ee_list_empty(mdev, &mdev->active_ee);
+ _drbd_wait_ee_list_empty(mdev, &mdev->sync_ee);
+ _drbd_clear_done_ee(mdev);
+ _drbd_wait_ee_list_empty(mdev, &mdev->read_ee);
+ reclaim_net_ee(mdev);
+ spin_unlock_irq(&mdev->req_lock);
+
+ /* We do not have data structures that would allow us to
+ * get the rs_pending_cnt down to 0 again.
+ * * On SyncTarget we do not have any data structures describing
+ * the pending RSDataRequest's we have sent.
+ * * On SyncSource there is no data structure that tracks
+ * the RSDataReply blocks that we sent to the SyncTarget.
+ * And no, it is not the sum of the reference counts in the
+ * resync_LRU. The resync_LRU tracks the whole operation including
+ * the disk-IO, while the rs_pending_cnt only tracks the blocks
+ * on the fly. */
+ drbd_rs_cancel_all(mdev);
+ mdev->rs_total = 0;
+ mdev->rs_failed = 0;
+ atomic_set(&mdev->rs_pending_cnt, 0);
+ wake_up(&mdev->misc_wait);
+
+ /* make sure syncer is stopped and w_resume_next_sg queued */
+ del_timer_sync(&mdev->resync_timer);
+ set_bit(STOP_SYNC_TIMER, &mdev->flags);
+ resync_timer_fn((unsigned long)mdev);
+
+ /* wait for all w_e_end_data_req, w_e_end_rsdata_req, w_send_barrier,
+ * w_make_resync_request etc. which may still be on the worker queue
+ * to be "canceled" */
+ set_bit(WORK_PENDING, &mdev->flags);
+ prev_work_done.cb = w_prev_work_done;
+ drbd_queue_work(&mdev->data.work, &prev_work_done);
+ wait_event(mdev->misc_wait, !test_bit(WORK_PENDING, &mdev->flags));
+
+ kfree(mdev->p_uuid);
+ mdev->p_uuid = NULL;
+
+ if (!mdev->state.susp)
+ tl_clear(mdev);
+
+ drbd_fail_pending_reads(mdev);
+
+ INFO("Connection closed\n");
+
+ drbd_md_sync(mdev);
+
+ fp = DontCare;
+ if (inc_local(mdev)) {
+ fp = mdev->bc->dc.fencing;
+ dec_local(mdev);
+ }
+
+ if (mdev->state.role == Primary) {
+ if (fp >= Resource && mdev->state.pdsk >= DUnknown) {
+ enum drbd_disk_state nps = drbd_try_outdate_peer(mdev);
+ drbd_request_state(mdev, NS(pdsk, nps));
+ }
+ }
+
+ spin_lock_irq(&mdev->req_lock);
+ os = mdev->state;
+ if (os.conn >= Unconnected) {
+ /* Do not restart in case we are Disconnecting */
+ ns = os;
+ ns.conn = Unconnected;
+ rv = _drbd_set_state(mdev, ns, ChgStateVerbose, NULL);
+ }
+ spin_unlock_irq(&mdev->req_lock);
+
+ if (os.conn == Disconnecting) {
+ struct hlist_head *h;
+ wait_event(mdev->misc_wait, atomic_read(&mdev->net_cnt) == 0);
+
+ /* we must not free the tl_hash
+ * while application io is still on the fly */
+ wait_event(mdev->misc_wait, atomic_read(&mdev->ap_bio_cnt) == 0);
+
+ spin_lock_irq(&mdev->req_lock);
+ /* paranoia code */
+ for (h = mdev->ee_hash; h < mdev->ee_hash + mdev->ee_hash_s; h++)
+ if (h->first)
+ ERR("ASSERT FAILED ee_hash[%u].first == %p, expected NULL\n",
+ (int)(h - mdev->ee_hash), h->first);
+ kfree(mdev->ee_hash);
+ mdev->ee_hash = NULL;
+ mdev->ee_hash_s = 0;
+
+ /* paranoia code */
+ for (h = mdev->tl_hash; h < mdev->tl_hash + mdev->tl_hash_s; h++)
+ if (h->first)
+ ERR("ASSERT FAILED tl_hash[%u] == %p, expected NULL\n",
+ (int)(h - mdev->tl_hash), h->first);
+ kfree(mdev->tl_hash);
+ mdev->tl_hash = NULL;
+ mdev->tl_hash_s = 0;
+ spin_unlock_irq(&mdev->req_lock);
+
+ crypto_free_hash(mdev->cram_hmac_tfm);
+ mdev->cram_hmac_tfm = NULL;
+
+ kfree(mdev->net_conf);
+ mdev->net_conf = NULL;
+ drbd_request_state(mdev, NS(conn, StandAlone));
+ }
+
+ /* they do trigger all the time.
+ * hm. why won't tcp release the page references,
+ * we already released the socket!? */
+ i = atomic_read(&mdev->pp_in_use);
+ if (i)
+ DBG("pp_in_use = %u, expected 0\n", i);
+ if (!list_empty(&mdev->net_ee))
+ DBG("net_ee not empty!\n");
+
+ D_ASSERT(list_empty(&mdev->read_ee));
+ D_ASSERT(list_empty(&mdev->active_ee));
+ D_ASSERT(list_empty(&mdev->sync_ee));
+ D_ASSERT(list_empty(&mdev->done_ee));
+
+ /* ok, no more ee's on the fly, it is safe to reset the epoch_size */
+ atomic_set(&mdev->current_epoch->epoch_size, 0);
+ D_ASSERT(list_empty(&mdev->current_epoch->list));
+}
+
+/*
+ * We support PRO_VERSION_MIN to PRO_VERSION_MAX. The protocol version
+ * we can agree on is stored in agreed_pro_version.
+ *
+ * feature flags and the reserved array should be enough room for future
+ * enhancements of the handshake protocol, and possible plugins...
+ *
+ * for now, they are expected to be zero, but ignored.
+ */
+STATIC int drbd_send_handshake(struct drbd_conf *mdev)
+{
+ /* ASSERT current == mdev->receiver ... */
+ struct Drbd_HandShake_Packet *p = &mdev->data.sbuf.HandShake;
+ int ok;
+
+ if (mutex_lock_interruptible(&mdev->data.mutex)) {
+ ERR("interrupted during initial handshake\n");
+ return 0; /* interrupted. not ok. */
+ }
+
+ if (mdev->data.socket == NULL) {
+ mutex_unlock(&mdev->data.mutex);
+ return 0;
+ }
+
+ memset(p, 0, sizeof(*p));
+ p->protocol_min = cpu_to_be32(PRO_VERSION_MIN);
+ p->protocol_max = cpu_to_be32(PRO_VERSION_MAX);
+ ok = _drbd_send_cmd( mdev, mdev->data.socket, HandShake,
+ (struct Drbd_Header *)p, sizeof(*p), 0 );
+ mutex_unlock(&mdev->data.mutex);
+ return ok;
+}
+
+/*
+ * return values:
+ *  1 yes, we have a valid connection
+ *  0 oops, did not work out, please try again
+ * -1 peer speaks a different language,
+ *    no point in trying again; go standalone.
+ */
+int drbd_do_handshake(struct drbd_conf *mdev)
+{
+ /* ASSERT current == mdev->receiver ... */
+ struct Drbd_HandShake_Packet *p = &mdev->data.rbuf.HandShake;
+ const int expect = sizeof(struct Drbd_HandShake_Packet)
+ -sizeof(struct Drbd_Header);
+ int rv;
+
+ rv = drbd_send_handshake(mdev);
+ if (!rv)
+ return 0;
+
+ rv = drbd_recv_header(mdev, &p->head);
+ if (!rv)
+ return 0;
+
+ if (p->head.command != HandShake) {
+ ERR("expected HandShake packet, received: %s (0x%04x)\n",
+ cmdname(p->head.command), p->head.command);
+ return -1;
+ }
+
+ if (p->head.length != expect) {
+ ERR("expected HandShake length: %u, received: %u\n",
+ expect, p->head.length);
+ return -1;
+ }
+
+ rv = drbd_recv(mdev, &p->head.payload, expect);
+
+ if (rv != expect) {
+ ERR("short read receiving handshake packet: l=%u\n", rv);
+ return 0;
+ }
+
+ dump_packet(mdev, mdev->data.socket, 2, &mdev->data.rbuf,
+ __FILE__, __LINE__);
+
+ p->protocol_min = be32_to_cpu(p->protocol_min);
+ p->protocol_max = be32_to_cpu(p->protocol_max);
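+	/* a protocol_max of 0 presumably comes from a peer that predates
+	 * the max field; treat it as "speaks exactly protocol_min" */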
+ if (p->protocol_max == 0)
+ p->protocol_max = p->protocol_min;
+
+ if (PRO_VERSION_MAX < p->protocol_min ||
+ PRO_VERSION_MIN > p->protocol_max)
+ goto incompat;
+
+ mdev->agreed_pro_version = min_t(int, PRO_VERSION_MAX, p->protocol_max);
+
+ INFO("Handshake successful: "
+ "Agreed network protocol version %d\n", mdev->agreed_pro_version);
+
+ return 1;
+
+ incompat:
+ ERR("incompatible DRBD dialects: "
+ "I support %d-%d, peer supports %d-%d\n",
+ PRO_VERSION_MIN, PRO_VERSION_MAX,
+ p->protocol_min, p->protocol_max);
+ return -1;
+}
+
+#if !defined(CONFIG_CRYPTO_HMAC) && !defined(CONFIG_CRYPTO_HMAC_MODULE)
+int drbd_do_auth(struct drbd_conf *mdev)
+{
+	ERR("This kernel was built without CONFIG_CRYPTO_HMAC.\n");
+ ERR("You need to disable 'cram-hmac-alg' in drbd.conf.\n");
+ return 0;
+}
+#else
+#define CHALLENGE_LEN 64
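+/* challenge/response authentication using the shared secret:
+ * each side sends a random challenge and proves knowledge of the
+ * secret by returning HMAC(secret, peer's challenge).  Below we
+ * verify the peer's response against our own computation over
+ * my_challenge. */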
+int drbd_do_auth(struct drbd_conf *mdev)
+{
+ char my_challenge[CHALLENGE_LEN]; /* 64 Bytes... */
+ struct scatterlist sg;
+ char *response = NULL;
+ char *right_response = NULL;
+ char *peers_ch = NULL;
+ struct Drbd_Header p;
+ unsigned int key_len = strlen(mdev->net_conf->shared_secret);
+ unsigned int resp_size;
+ struct hash_desc desc;
+ int rv;
+
+ desc.tfm = mdev->cram_hmac_tfm;
+ desc.flags = 0;
+
+ rv = crypto_hash_setkey(mdev->cram_hmac_tfm,
+ (u8 *)mdev->net_conf->shared_secret, key_len);
+ if (rv) {
+ ERR("crypto_hash_setkey() failed with %d\n", rv);
+ rv = 0;
+ goto fail;
+ }
+
+ get_random_bytes(my_challenge, CHALLENGE_LEN);
+
+ rv = drbd_send_cmd2(mdev, AuthChallenge, my_challenge, CHALLENGE_LEN);
+ if (!rv)
+ goto fail;
+
+ rv = drbd_recv_header(mdev, &p);
+ if (!rv)
+ goto fail;
+
+ if (p.command != AuthChallenge) {
+ ERR("expected AuthChallenge packet, received: %s (0x%04x)\n",
+ cmdname(p.command), p.command);
+ rv = 0;
+ goto fail;
+ }
+
+ if (p.length > CHALLENGE_LEN*2) {
+		ERR("AuthChallenge payload too big.\n");
+ rv = 0;
+ goto fail;
+ }
+
+ peers_ch = kmalloc(p.length, GFP_KERNEL);
+ if (peers_ch == NULL) {
+ ERR("kmalloc of peers_ch failed\n");
+ rv = 0;
+ goto fail;
+ }
+
+ rv = drbd_recv(mdev, peers_ch, p.length);
+
+ if (rv != p.length) {
+ ERR("short read AuthChallenge: l=%u\n", rv);
+ rv = 0;
+ goto fail;
+ }
+
+ resp_size = crypto_hash_digestsize(mdev->cram_hmac_tfm);
+ response = kmalloc(resp_size, GFP_KERNEL);
+ if (response == NULL) {
+ ERR("kmalloc of response failed\n");
+ rv = 0;
+ goto fail;
+ }
+
+ sg_init_table(&sg, 1);
+ sg_set_buf(&sg, peers_ch, p.length);
+
+ rv = crypto_hash_digest(&desc, &sg, sg.length, response);
+ if (rv) {
+ ERR("crypto_hash_digest() failed with %d\n", rv);
+ rv = 0;
+ goto fail;
+ }
+
+ rv = drbd_send_cmd2(mdev, AuthResponse, response, resp_size);
+ if (!rv)
+ goto fail;
+
+ rv = drbd_recv_header(mdev, &p);
+ if (!rv)
+ goto fail;
+
+ if (p.command != AuthResponse) {
+ ERR("expected AuthResponse packet, received: %s (0x%04x)\n",
+ cmdname(p.command), p.command);
+ rv = 0;
+ goto fail;
+ }
+
+ if (p.length != resp_size) {
+		ERR("AuthResponse payload has wrong size\n");
+ rv = 0;
+ goto fail;
+ }
+
+ rv = drbd_recv(mdev, response , resp_size);
+
+ if (rv != resp_size) {
+ ERR("short read receiving AuthResponse: l=%u\n", rv);
+ rv = 0;
+ goto fail;
+ }
+
+ right_response = kmalloc(resp_size, GFP_KERNEL);
+	if (right_response == NULL) {
+ ERR("kmalloc of right_response failed\n");
+ rv = 0;
+ goto fail;
+ }
+
+ sg_set_buf(&sg, my_challenge, CHALLENGE_LEN);
+
+ rv = crypto_hash_digest(&desc, &sg, sg.length, right_response);
+ if (rv) {
+ ERR("crypto_hash_digest() failed with %d\n", rv);
+ rv = 0;
+ goto fail;
+ }
+
+ rv = !memcmp(response, right_response, resp_size);
+
+ if (rv)
+ INFO("Peer authenticated using %d bytes of '%s' HMAC\n",
+ resp_size, mdev->net_conf->cram_hmac_alg);
+
+ fail:
+ kfree(peers_ch);
+ kfree(response);
+ kfree(right_response);
+
+ return rv;
+}
+#endif
+
+STATIC int drbdd_init(struct Drbd_thread *thi)
+{
+ struct drbd_conf *mdev = thi->mdev;
+ unsigned int minor = mdev_to_minor(mdev);
+ int h;
+
+ sprintf(current->comm, "drbd%d_receiver", minor);
+
+ INFO("receiver (re)started\n");
+
+ do {
+ h = drbd_connect(mdev);
+ if (h == 0) {
+ drbd_disconnect(mdev);
+ __set_current_state(TASK_INTERRUPTIBLE);
+ schedule_timeout(HZ);
+ }
+ if (h == -1) {
+ drbd_WARN("Discarding network configuration.\n");
+ drbd_force_state(mdev, NS(conn, Disconnecting));
+ }
+ } while (h == 0);
+
+ if (h > 0) {
+ if (inc_net(mdev)) {
+ drbdd(mdev);
+ dec_net(mdev);
+ }
+ }
+
+ drbd_disconnect(mdev);
+
+ INFO("receiver terminated\n");
+ return 0;
+}
+
+/* ********* acknowledge sender ******** */
+
+STATIC int got_RqSReply(struct drbd_conf *mdev, struct Drbd_Header *h)
+{
+ struct Drbd_RqS_Reply_Packet *p = (struct Drbd_RqS_Reply_Packet *)h;
+
+ int retcode = be32_to_cpu(p->retcode);
+
+ if (retcode >= SS_Success) {
+ set_bit(CL_ST_CHG_SUCCESS, &mdev->flags);
+ } else {
+ set_bit(CL_ST_CHG_FAIL, &mdev->flags);
+ ERR("Requested state change failed by peer: %s (%d)\n",
+ set_st_err_name(retcode), retcode);
+ }
+ wake_up(&mdev->state_wait);
+
+ return TRUE;
+}
+
+STATIC int got_Ping(struct drbd_conf *mdev, struct Drbd_Header *h)
+{
+ return drbd_send_ping_ack(mdev);
+
+}
+
+STATIC int got_PingAck(struct drbd_conf *mdev, struct Drbd_Header *h)
+{
+ /* restore idle timeout */
+ mdev->meta.socket->sk->sk_rcvtimeo = mdev->net_conf->ping_int*HZ;
+
+ return TRUE;
+}
+
+STATIC int got_IsInSync(struct drbd_conf *mdev, struct Drbd_Header *h)
+{
+ struct Drbd_BlockAck_Packet *p = (struct Drbd_BlockAck_Packet *)h;
+ sector_t sector = be64_to_cpu(p->sector);
+ int blksize = be32_to_cpu(p->blksize);
+
+ D_ASSERT(mdev->agreed_pro_version >= 89);
+
+ update_peer_seq(mdev, be32_to_cpu(p->seq_num));
+
+ drbd_rs_complete_io(mdev, sector);
+ drbd_set_in_sync(mdev, sector, blksize);
+ /* rs_same_csums is supposed to count in units of BM_BLOCK_SIZE */
+ mdev->rs_same_csum += (blksize >> BM_BLOCK_SIZE_B);
+ dec_rs_pending(mdev);
+
+ return TRUE;
+}
+
+STATIC int got_BlockAck(struct drbd_conf *mdev, struct Drbd_Header *h)
+{
+ struct drbd_request *req;
+ struct Drbd_BlockAck_Packet *p = (struct Drbd_BlockAck_Packet *)h;
+ sector_t sector = be64_to_cpu(p->sector);
+ int blksize = be32_to_cpu(p->blksize);
+
+ update_peer_seq(mdev, be32_to_cpu(p->seq_num));
+
+ if (is_syncer_block_id(p->block_id)) {
+ drbd_set_in_sync(mdev, sector, blksize);
+ dec_rs_pending(mdev);
+ } else {
+ spin_lock_irq(&mdev->req_lock);
+ req = _ack_id_to_req(mdev, p->block_id, sector);
+
+ if (unlikely(!req)) {
+ spin_unlock_irq(&mdev->req_lock);
+ ERR("Got a corrupt block_id/sector pair(2).\n");
+ return FALSE;
+ }
+
+ switch (be16_to_cpu(h->command)) {
+ case RSWriteAck:
+ D_ASSERT(mdev->net_conf->wire_protocol == DRBD_PROT_C);
+ _req_mod(req, write_acked_by_peer_and_sis, 0);
+ break;
+ case WriteAck:
+ D_ASSERT(mdev->net_conf->wire_protocol == DRBD_PROT_C);
+ _req_mod(req, write_acked_by_peer, 0);
+ break;
+ case RecvAck:
+ D_ASSERT(mdev->net_conf->wire_protocol == DRBD_PROT_B);
+ _req_mod(req, recv_acked_by_peer, 0);
+ break;
+ case DiscardAck:
+ D_ASSERT(mdev->net_conf->wire_protocol == DRBD_PROT_C);
+ ALERT("Got DiscardAck packet %llus +%u!"
+ " DRBD is not a random data generator!\n",
+ (unsigned long long)req->sector, req->size);
+ _req_mod(req, conflict_discarded_by_peer, 0);
+ break;
+ default:
+ D_ASSERT(0);
+ }
+ spin_unlock_irq(&mdev->req_lock);
+ }
+ /* dec_ap_pending is handled within _req_mod */
+
+ return TRUE;
+}
+
+STATIC int got_NegAck(struct drbd_conf *mdev, struct Drbd_Header *h)
+{
+ struct Drbd_BlockAck_Packet *p = (struct Drbd_BlockAck_Packet *)h;
+ sector_t sector = be64_to_cpu(p->sector);
+ struct drbd_request *req;
+
+ if (__ratelimit(&drbd_ratelimit_state))
+		drbd_WARN("Got NegAck packet. Peer is in trouble?\n");
+
+ update_peer_seq(mdev, be32_to_cpu(p->seq_num));
+
+ if (is_syncer_block_id(p->block_id)) {
+ int size = be32_to_cpu(p->blksize);
+
+ dec_rs_pending(mdev);
+
+ drbd_rs_failed_io(mdev, sector, size);
+ } else {
+ spin_lock_irq(&mdev->req_lock);
+ req = _ack_id_to_req(mdev, p->block_id, sector);
+
+ if (unlikely(!req)) {
+ spin_unlock_irq(&mdev->req_lock);
+ ERR("Got a corrupt block_id/sector pair(2).\n");
+ return FALSE;
+ }
+
+ _req_mod(req, neg_acked, 0);
+ spin_unlock_irq(&mdev->req_lock);
+ }
+
+ return TRUE;
+}
+
+STATIC int got_NegDReply(struct drbd_conf *mdev, struct Drbd_Header *h)
+{
+ struct drbd_request *req;
+ struct Drbd_BlockAck_Packet *p = (struct Drbd_BlockAck_Packet *)h;
+ sector_t sector = be64_to_cpu(p->sector);
+
+ spin_lock_irq(&mdev->req_lock);
+ req = _ar_id_to_req(mdev, p->block_id, sector);
+ if (unlikely(!req)) {
+ spin_unlock_irq(&mdev->req_lock);
+ ERR("Got a corrupt block_id/sector pair(3).\n");
+ return FALSE;
+ }
+
+ _req_mod(req, neg_acked, 0);
+ spin_unlock_irq(&mdev->req_lock);
+
+ update_peer_seq(mdev, be32_to_cpu(p->seq_num));
+
+ ERR("Got NegDReply; Sector %llus, len %u; Fail original request.\n",
+ (unsigned long long)sector, be32_to_cpu(p->blksize));
+
+ return TRUE;
+}
+
+STATIC int got_NegRSDReply(struct drbd_conf *mdev, struct Drbd_Header *h)
+{
+ sector_t sector;
+ int size;
+ struct Drbd_BlockAck_Packet *p = (struct Drbd_BlockAck_Packet *)h;
+
+ sector = be64_to_cpu(p->sector);
+ size = be32_to_cpu(p->blksize);
+ D_ASSERT(p->block_id == ID_SYNCER);
+
+ update_peer_seq(mdev, be32_to_cpu(p->seq_num));
+
+ dec_rs_pending(mdev);
+
+ if (inc_local_if_state(mdev, Failed)) {
+ drbd_rs_complete_io(mdev, sector);
+ drbd_rs_failed_io(mdev, sector, size);
+ dec_local(mdev);
+ }
+
+ return TRUE;
+}
+
+STATIC int got_BarrierAck(struct drbd_conf *mdev, struct Drbd_Header *h)
+{
+ struct Drbd_BarrierAck_Packet *p = (struct Drbd_BarrierAck_Packet *)h;
+
+ tl_release(mdev, p->barrier, be32_to_cpu(p->set_size));
+
+ return TRUE;
+}
+
+STATIC int got_OVResult(struct drbd_conf *mdev, struct Drbd_Header *h)
+{
+ struct Drbd_BlockAck_Packet *p = (struct Drbd_BlockAck_Packet *)h;
+ struct drbd_work *w;
+ sector_t sector;
+ int size;
+
+ sector = be64_to_cpu(p->sector);
+ size = be32_to_cpu(p->blksize);
+
+ update_peer_seq(mdev, be32_to_cpu(p->seq_num));
+
+ if (be64_to_cpu(p->block_id) == ID_OUT_OF_SYNC)
+ drbd_ov_oos_found(mdev, sector, size);
+ else
+ ov_oos_print(mdev);
+
+ drbd_rs_complete_io(mdev, sector);
+ dec_rs_pending(mdev);
+
+ if (--mdev->ov_left == 0) {
+ w = kmalloc(sizeof(*w), GFP_KERNEL);
+ if (w) {
+ w->cb = w_ov_finished;
+ drbd_queue_work_front(&mdev->data.work, w);
+ } else {
+ ERR("kmalloc(w) failed.");
+ drbd_resync_finished(mdev);
+ }
+ }
+ return TRUE;
+}
+
+struct asender_cmd {
+ size_t pkt_size;
+ int (*process)(struct drbd_conf *mdev, struct Drbd_Header *h);
+};
+
+static struct asender_cmd *get_asender_cmd(int cmd)
+{
+ static struct asender_cmd asender_tbl[] = {
+ /* anything missing from this table is in
+ * the drbd_cmd_handler (drbd_default_handler) table,
+ * see the beginning of drbdd() */
+ [Ping] = { sizeof(struct Drbd_Header), got_Ping },
+ [PingAck] = { sizeof(struct Drbd_Header), got_PingAck },
+ [RecvAck] = { sizeof(struct Drbd_BlockAck_Packet), got_BlockAck },
+ [WriteAck] = { sizeof(struct Drbd_BlockAck_Packet), got_BlockAck },
+ [RSWriteAck] = { sizeof(struct Drbd_BlockAck_Packet), got_BlockAck },
+ [DiscardAck] = { sizeof(struct Drbd_BlockAck_Packet), got_BlockAck },
+ [NegAck] = { sizeof(struct Drbd_BlockAck_Packet), got_NegAck },
+ [NegDReply] = { sizeof(struct Drbd_BlockAck_Packet), got_NegDReply },
+ [NegRSDReply] = { sizeof(struct Drbd_BlockAck_Packet), got_NegRSDReply},
+ [OVResult] = { sizeof(struct Drbd_BlockAck_Packet), got_OVResult },
+ [BarrierAck] = { sizeof(struct Drbd_BarrierAck_Packet), got_BarrierAck },
+ [StateChgReply] = { sizeof(struct Drbd_RqS_Reply_Packet), got_RqSReply },
+ [RSIsInSync] = { sizeof(struct Drbd_BlockAck_Packet), got_IsInSync },
+ [MAX_CMD] = { 0, NULL },
+ };
+ if (cmd > MAX_CMD)
+ return NULL;
+ return &asender_tbl[cmd];
+}
+
+STATIC int drbd_asender(struct Drbd_thread *thi)
+{
+ struct drbd_conf *mdev = thi->mdev;
+ struct Drbd_Header *h = &mdev->meta.rbuf.head;
+ struct asender_cmd *cmd = NULL;
+
+ int rv, len;
+ void *buf = h;
+ int received = 0;
+ int expect = sizeof(struct Drbd_Header);
+ int empty;
+
+ sprintf(current->comm, "drbd%d_asender", mdev_to_minor(mdev));
+
+ current->policy = SCHED_RR; /* Make this a realtime task! */
+ current->rt_priority = 2; /* more important than all other tasks */
+
+ while (get_t_state(thi) == Running) {
+ drbd_thread_current_set_cpu(mdev);
+ if (test_and_clear_bit(SEND_PING, &mdev->flags)) {
+ ERR_IF(!drbd_send_ping(mdev)) goto reconnect;
+ mdev->meta.socket->sk->sk_rcvtimeo =
+ mdev->net_conf->ping_timeo*HZ/10;
+ }
+
+ /* conditionally cork;
+ * it may hurt latency if we cork without much to send */
+ if (!mdev->net_conf->no_cork &&
+ 3 < atomic_read(&mdev->unacked_cnt))
+ drbd_tcp_cork(mdev->meta.socket);
+ while (1) {
+ clear_bit(SIGNAL_ASENDER, &mdev->flags);
+ flush_signals(current);
+ if (!drbd_process_done_ee(mdev)) {
+ ERR("process_done_ee() = NOT_OK\n");
+ goto reconnect;
+ }
+ /* to avoid race with newly queued ACKs */
+ set_bit(SIGNAL_ASENDER, &mdev->flags);
+ spin_lock_irq(&mdev->req_lock);
+ empty = list_empty(&mdev->done_ee);
+ spin_unlock_irq(&mdev->req_lock);
+ /* new ack may have been queued right here,
+ * but then there is also a signal pending,
+ * and we start over... */
+ if (empty)
+ break;
+ }
+ /* but unconditionally uncork unless disabled */
+ if (!mdev->net_conf->no_cork)
+ drbd_tcp_uncork(mdev->meta.socket);
+
+ /* short circuit, recv_msg would return EINTR anyways. */
+ if (signal_pending(current))
+ continue;
+
+ rv = drbd_recv_short(mdev, mdev->meta.socket,
+ buf, expect-received, 0);
+ clear_bit(SIGNAL_ASENDER, &mdev->flags);
+
+ flush_signals(current);
+
+ /* Note:
+ * -EINTR (on meta) we got a signal
+ * -EAGAIN (on meta) rcvtimeo expired
+ * -ECONNRESET other side closed the connection
+ * -ERESTARTSYS (on data) we got a signal
+ * rv < 0 other than above: unexpected error!
+ * rv == expected: full header or command
+ * rv < expected: "woken" by signal during receive
+ * rv == 0 : "connection shut down by peer"
+ */
+ if (likely(rv > 0)) {
+ received += rv;
+ buf += rv;
+ } else if (rv == 0) {
+ ERR("meta connection shut down by peer.\n");
+ goto reconnect;
+ } else if (rv == -EAGAIN) {
+ if (mdev->meta.socket->sk->sk_rcvtimeo ==
+ mdev->net_conf->ping_timeo*HZ/10) {
+ ERR("PingAck did not arrive in time.\n");
+ goto reconnect;
+ }
+ set_bit(SEND_PING, &mdev->flags);
+ continue;
+ } else if (rv == -EINTR) {
+ continue;
+ } else {
+ ERR("sock_recvmsg returned %d\n", rv);
+ goto reconnect;
+ }
+
+ if (received == expect && cmd == NULL) {
+ if (unlikely(h->magic != BE_DRBD_MAGIC)) {
+ ERR("magic?? on meta m: 0x%lx c: %d l: %d\n",
+ (long)be32_to_cpu(h->magic),
+ h->command, h->length);
+ goto reconnect;
+ }
+ cmd = get_asender_cmd(be16_to_cpu(h->command));
+ len = be16_to_cpu(h->length);
+ if (unlikely(cmd == NULL)) {
+ ERR("unknown command?? on meta m: 0x%lx c: %d l: %d\n",
+ (long)be32_to_cpu(h->magic),
+ h->command, h->length);
+ goto disconnect;
+ }
+ expect = cmd->pkt_size;
+ ERR_IF(len != expect-sizeof(struct Drbd_Header)) {
+ dump_packet(mdev, mdev->meta.socket, 1, (void *)h, __FILE__, __LINE__);
+ DUMPI(expect);
+ goto reconnect;
+ }
+ }
+ if (received == expect) {
+ D_ASSERT(cmd != NULL);
+ dump_packet(mdev, mdev->meta.socket, 1, (void *)h, __FILE__, __LINE__);
+ if (!cmd->process(mdev, h))
+ goto reconnect;
+
+ buf = h;
+ received = 0;
+ expect = sizeof(struct Drbd_Header);
+ cmd = NULL;
+ }
+ }
+
+ if (0) {
+reconnect:
+ drbd_force_state(mdev, NS(conn, NetworkFailure));
+ }
+ if (0) {
+disconnect:
+ drbd_force_state(mdev, NS(conn, Disconnecting));
+ }
+ clear_bit(SIGNAL_ASENDER, &mdev->flags);
+
+ D_ASSERT(mdev->state.conn < Connected);
+ INFO("asender terminated\n");
+
+ return 0;
+}
DRBD uses netlink via connector. The packets are composed of extensible tag
lists. That interface can be extended over time without breaking old
userspace programs.
The nice part of the interface to userspace: drbd.h. The ugly part is for
sure drbd_tag_magic.h. I realize that macros are generally frowned upon, but
this way it is easier to maintain. The code that gets generated by
repeatedly including drbd_nl.h is hard to maintain over time if it is open
coded. (BTW, did you know that the Samba 4 people are proud to have more
than 50% of their code auto-generated? :)
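To make the tag-list idea concrete, here is a minimal userspace-style
sketch of walking such a list. The names and the exact field layout are
illustrative only -- the real encoding in drbd_tag_magic.h additionally
packs a type and a "mandatory" flag into the tag value -- but the point
survives: a reader simply skips tags it does not know, which is what
keeps old userspace programs working with newer kernels.

	#include <stdio.h>

	#define TAG_END 0	/* assumed end-of-list marker */

	static void walk_tag_list(const unsigned short *tl)
	{
		unsigned short tag, len;

		while ((tag = *tl++) != TAG_END) {
			len = *tl++;		/* payload length in bytes */
			printf("tag %u, %u byte(s)\n", tag, len);
			/* unknown tags are skipped, not treated as errors */
			tl += (len + 1) / 2;	/* advance over payload, in shorts */
		}
	}

	int main(void)
	{
		/* two items: (tag 1, 2 bytes) and (tag 2, 4 bytes), then end */
		unsigned short tl[] = { 1, 2, 0xbeef, 2, 4, 0x1234, 0x5678, TAG_END };

		walk_tag_list(tl);
		return 0;
	}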
Signed-off-by: Philipp Reisner <[email protected]>
Signed-off-by: Lars Ellenberg <[email protected]>
---
diff -uNrp linux-2.6.29/include/linux/drbd.h linux-2.6.29-drbd/include/linux/drbd.h
--- linux-2.6.29/include/linux/drbd.h 1970-01-01 01:00:00.000000000 +0100
+++ linux-2.6.29-drbd/include/linux/drbd.h 2009-03-26 15:53:46.520275000 +0100
@@ -0,0 +1,372 @@
+/*
+ drbd.h
+ Kernel module for 2.6.x Kernels
+
+ This file is part of DRBD by Philipp Reisner and Lars Ellenberg.
+
+ Copyright (C) 2001-2008, LINBIT Information Technologies GmbH.
+ Copyright (C) 2001-2008, Philipp Reisner <[email protected]>.
+ Copyright (C) 2001-2008, Lars Ellenberg <[email protected]>.
+
+ drbd is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2, or (at your option)
+ any later version.
+
+ drbd is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with drbd; see the file COPYING. If not, write to
+ the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA.
+
+*/
+#ifndef DRBD_H
+#define DRBD_H
+#include <linux/drbd_config.h>
+#include <linux/connector.h>
+
+#include <asm/types.h>
+
+#ifdef __KERNEL__
+#include <linux/types.h>
+#include <asm/byteorder.h>
+#else
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <limits.h>
+
+/* Although the Linux source code distinguishes between
+ generic endianness and the bitfields' endianness, there is no
+ architecture as of Linux-2.6.24-rc4 where the bitfields' endianness
+ does not match the generic endianness. */
+
+#if __BYTE_ORDER == __LITTLE_ENDIAN
+#define __LITTLE_ENDIAN_BITFIELD
+#elif __BYTE_ORDER == __BIG_ENDIAN
+#define __BIG_ENDIAN_BITFIELD
+#else
+# error "sorry, weird endianness on this box"
+#endif
+
+#endif
+
+
+enum io_error_handler {
+	PassOn, /* FIXME: wouldn't this better be named "Ignore"? */
+ CallIOEHelper,
+ Detach
+};
+
+enum fencing_policy {
+ DontCare,
+ Resource,
+ Stonith
+};
+
+enum disconnect_handler {
+ Reconnect,
+ DropNetConf,
+ FreezeIO
+};
+
+enum after_sb_handler {
+ Disconnect,
+ DiscardYoungerPri,
+ DiscardOlderPri,
+ DiscardZeroChg,
+ DiscardLeastChg,
+ DiscardLocal,
+ DiscardRemote,
+ Consensus,
+ DiscardSecondary,
+ CallHelper,
+ Violently
+};
+
+/* KEEP the order, do not delete or insert!
+ * Or change the API_VERSION, too. */
+enum ret_codes {
+ RetCodeBase = 100,
+ NoError, /* 101 ... */
+ LAAlreadyInUse,
+ OAAlreadyInUse,
+ LDNameInvalid,
+ MDNameInvalid,
+ LDAlreadyInUse,
+ LDNoBlockDev,
+ MDNoBlockDev,
+ LDOpenFailed,
+ MDOpenFailed,
+ LDDeviceTooSmall,
+ MDDeviceTooSmall,
+ LDNoConfig,
+ LDMounted,
+ MDMounted,
+ LDMDInvalid,
+ LDDeviceTooLarge,
+ MDIOError,
+ MDInvalid,
+ CRAMAlgNotAvail,
+ CRAMAlgNotDigest,
+ KMallocFailed,
+ DiscardNotAllowed,
+ HaveDiskConfig,
+ HaveNetConfig,
+ UnknownMandatoryTag,
+ MinorNotKnown,
+ StateNotAllowed,
+ GotSignal, /* EINTR */
+ NoResizeDuringResync,
+ APrimaryNodeNeeded,
+ SyncAfterInvalid,
+ SyncAfterCycle,
+ PauseFlagAlreadySet,
+ PauseFlagAlreadyClear,
+ DiskLowerThanOutdated, /* obsolete, now SS_LowerThanOutdated */
+ UnknownNetLinkPacket,
+ HaveNoDiskConfig,
+ ProtocolCRequired,
+ VMallocFailed,
+ IntegrityAlgNotAvail,
+ IntegrityAlgNotDigest,
+ CPUMaskParseFailed,
+ CSUMSAlgNotAvail,
+ CSUMSAlgNotDigest,
+ VERIFYAlgNotAvail,
+ VERIFYAlgNotDigest,
+ CSUMSResyncRunning,
+ VERIFYIsRunning,
+ DataOfWrongCurrent,
+ MayNotBeConnected,
+
+ /* insert new ones above this line */
+ AfterLastRetCode,
+};
+
+#define DRBD_PROT_A 1
+#define DRBD_PROT_B 2
+#define DRBD_PROT_C 3
+
+enum drbd_role {
+ Unknown = 0,
+ Primary = 1, /* role */
+ Secondary = 2, /* role */
+ role_mask = 3,
+};
+
+/* The order of these constants is important.
+ * The lower ones (<WFReportParams) indicate
+ * that there is no socket!
+ * >=WFReportParams ==> There is a socket
+ */
+enum drbd_conns {
+ StandAlone,
+	Disconnecting, /* Transient state on the way to StandAlone. */
+ Unconnected, /* >= Unconnected -> inc_net() succeeds */
+
+	/* These transient states are all used on the way
+	 * from >= Connected to Unconnected.
+	 * They are the 'disconnect reason' states;
+	 * we do not allow transitions between them. */
+ Timeout,
+ BrokenPipe,
+ NetworkFailure,
+ ProtocolError,
+ TearDown,
+
+ WFConnection,
+ WFReportParams, /* we have a socket */
+ Connected, /* we have introduced each other */
+ StartingSyncS, /* starting full sync by IOCTL. */
+	StartingSyncT, /* starting full sync by IOCTL. */
+ WFBitMapS,
+ WFBitMapT,
+ WFSyncUUID,
+
+ /* All SyncStates are tested with this comparison
+ * xx >= SyncSource && xx <= PausedSyncT */
+ SyncSource,
+ SyncTarget,
+ VerifyS,
+ VerifyT,
+ PausedSyncS,
+ PausedSyncT,
+ conn_mask = 31
+};
+
+enum drbd_disk_state {
+ Diskless,
+ Attaching, /* In the process of reading the meta-data */
+ Failed, /* Becomes Diskless as soon as we told it the peer */
+ /* when >= Failed it is legal to access mdev->bc */
+ Negotiating, /* Late attaching state, we need to talk to the peer */
+ Inconsistent,
+ Outdated,
+ DUnknown, /* Only used for the peer, never for myself */
+ Consistent, /* Might be Outdated, might be UpToDate ... */
+ UpToDate, /* Only this disk state allows applications' IO ! */
+ disk_mask = 15
+};
+
+union drbd_state_t {
+/* According to gcc's docs, the order of allocation of bit-fields
+ * within a unit (C90 6.5.2.1, C99 6.7.2.1) is determined by the ABI.
+ * Pointed out by Maxim Uvarov <[email protected]>.
+ * Even though we transmit as "cpu_to_be32(state)",
+ * the offsets of the bitfields still need to be swapped
+ * on different endianness.
+ */
+ struct {
+#if defined(__LITTLE_ENDIAN_BITFIELD)
+ unsigned role:2 ; /* 3/4 primary/secondary/unknown */
+ unsigned peer:2 ; /* 3/4 primary/secondary/unknown */
+ unsigned conn:5 ; /* 17/32 cstates */
+ unsigned disk:4 ; /* 8/16 from Diskless to UpToDate */
+ unsigned pdsk:4 ; /* 8/16 from Diskless to UpToDate */
+ unsigned susp:1 ; /* 2/2 IO suspended no/yes */
+ unsigned aftr_isp:1 ; /* isp .. imposed sync pause */
+ unsigned peer_isp:1 ;
+ unsigned user_isp:1 ;
+ unsigned _pad:11; /* 0 unused */
+#elif defined(__BIG_ENDIAN_BITFIELD)
+ unsigned _pad:11; /* 0 unused */
+ unsigned user_isp:1 ;
+ unsigned peer_isp:1 ;
+ unsigned aftr_isp:1 ; /* isp .. imposed sync pause */
+ unsigned susp:1 ; /* 2/2 IO suspended no/yes */
+ unsigned pdsk:4 ; /* 8/16 from Diskless to UpToDate */
+ unsigned disk:4 ; /* 8/16 from Diskless to UpToDate */
+ unsigned conn:5 ; /* 17/32 cstates */
+ unsigned peer:2 ; /* 3/4 primary/secondary/unknown */
+ unsigned role:2 ; /* 3/4 primary/secondary/unknown */
+#else
+# error "this endianness is not supported"
+#endif
+#ifndef DRBD_DEBUG_STATE_CHANGES
+#define DRBD_DEBUG_STATE_CHANGES 0
+#endif
+#if DRBD_DEBUG_STATE_CHANGES
+ unsigned int line;
+ const char *func;
+#endif
+ };
+ unsigned int i;
+};
+
+enum set_st_err {
+ SS_CW_NoNeed = 4,
+ SS_CW_Success = 3,
+ SS_NothingToDo = 2,
+ SS_Success = 1,
+ SS_UnknownError = 0, /* Used to sleep longer in _drbd_request_state */
+ SS_TwoPrimaries = -1,
+ SS_NoUpToDateDisk = -2,
+ SS_BothInconsistent = -4,
+ SS_SyncingDiskless = -5,
+ SS_ConnectedOutdates = -6,
+ SS_PrimaryNOP = -7,
+ SS_ResyncRunning = -8,
+ SS_AlreadyStandAlone = -9,
+ SS_CW_FailedByPeer = -10,
+ SS_IsDiskLess = -11,
+ SS_DeviceInUse = -12,
+ SS_NoNetConfig = -13,
+ SS_NoVerifyAlg = -14, /* drbd-8.2 only */
+ SS_NeedConnection = -15, /* drbd-8.2 only */
+ SS_LowerThanOutdated = -16,
+ SS_NotSupported = -17, /* drbd-8.2 only */
+ SS_InTransientState = -18, /* Retry after the next state change */
+ SS_ConcurrentStChg = -19, /* Concurrent cluster side state change! */
+ SS_AfterLastError = -20, /* Keep this at bottom */
+};
+
+/* from drbd_strings.c */
+extern const char *conns_to_name(enum drbd_conns);
+extern const char *roles_to_name(enum drbd_role);
+extern const char *disks_to_name(enum drbd_disk_state);
+extern const char *set_st_err_name(enum set_st_err);
+
+#ifndef BDEVNAME_SIZE
+# define BDEVNAME_SIZE 32
+#endif
+
+#define SHARED_SECRET_MAX 64
+
+enum MetaDataFlags {
+ __MDF_Consistent,
+ __MDF_PrimaryInd,
+ __MDF_ConnectedInd,
+ __MDF_FullSync,
+ __MDF_WasUpToDate,
+ __MDF_PeerOutDated, /* or worse (e.g. invalid). */
+ __MDF_CrashedPrimary,
+};
+#define MDF_Consistent (1<<__MDF_Consistent)
+#define MDF_PrimaryInd (1<<__MDF_PrimaryInd)
+#define MDF_ConnectedInd (1<<__MDF_ConnectedInd)
+#define MDF_FullSync (1<<__MDF_FullSync)
+#define MDF_WasUpToDate (1<<__MDF_WasUpToDate)
+#define MDF_PeerOutDated (1<<__MDF_PeerOutDated)
+#define MDF_CrashedPrimary (1<<__MDF_CrashedPrimary)
+
+enum UuidIndex {
+ Current,
+ Bitmap,
+ History_start,
+ History_end,
+ UUID_SIZE, /* nl-packet: number of dirty bits */
+ UUID_FLAGS, /* nl-packet: flags */
+ EXT_UUID_SIZE /* Everything. */
+};
+
+enum UseTimeout {
+ UT_Default = 0,
+ UT_Degraded = 1,
+ UT_PeerOutdated = 2,
+};
+
+#define UUID_JUST_CREATED ((__u64)4)
+
+#define DRBD_MAGIC 0x83740267
+#define BE_DRBD_MAGIC __constant_cpu_to_be32(DRBD_MAGIC)
+
+/* these are of type "int" */
+#define DRBD_MD_INDEX_INTERNAL -1
+#define DRBD_MD_INDEX_FLEX_EXT -2
+#define DRBD_MD_INDEX_FLEX_INT -3
+
+/* Start of the new netlink/connector stuff */
+
+#define DRBD_NL_CREATE_DEVICE 0x01
+#define DRBD_NL_SET_DEFAULTS 0x02
+
+/* The following line should be moved over to linux/connector.h
+ * when the time comes */
+#ifndef CN_IDX_DRBD
+# define CN_IDX_DRBD 0x4
+/* Ubuntu "intrepid ibex" release defined CN_IDX_DRBD as 0x6 */
+#endif
+#define CN_VAL_DRBD 0x1
+
+/* For searching a vacant cn_idx value */
+#define CN_IDX_STEP 6977
+
+struct drbd_nl_cfg_req {
+ int packet_type;
+ unsigned int drbd_minor;
+ int flags;
+ unsigned short tag_list[];
+};
+
+struct drbd_nl_cfg_reply {
+ int packet_type;
+ unsigned int minor;
+ int ret_code; /* enum ret_code or set_st_err_t */
+ unsigned short tag_list[]; /* only used with get_* calls */
+};
+
+#endif
diff -uNrp linux-2.6.29/include/linux/drbd_config.h linux-2.6.29-drbd/include/linux/drbd_config.h
--- linux-2.6.29/include/linux/drbd_config.h 1970-01-01 01:00:00.000000000 +0100
+++ linux-2.6.29-drbd/include/linux/drbd_config.h 2009-03-30 15:48:09.935190000 +0200
@@ -0,0 +1,43 @@
+/*
+ drbd_config.h
+ DRBD's compile time configuration.
+
+ drbd is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2, or (at your option)
+ any later version.
+
+ drbd is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with drbd; see the file COPYING. If not, write to
+ the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA.
+*/
+
+#ifndef DRBD_CONFIG_H
+#define DRBD_CONFIG_H
+
+extern const char *drbd_buildtag(void);
+
+#define REL_VERSION "8.3.1"
+#define API_VERSION 88
+#define PRO_VERSION_MIN 86
+#define PRO_VERSION_MAX 90
+
+#ifndef __CHECKER__ /* for a sparse run, we need all STATICs */
+#define DBG_ALL_SYMBOLS /* no static functs, improves quality of OOPS traces */
+#endif
+
+
+/* Define this to enable dynamic tracing controlled by module parameters
+ * at run time. This enables ALL use of dynamic tracing including packet
+ * and bio dumping, etc */
+#define ENABLE_DYNAMIC_TRACE
+
+/* Enable fault insertion code */
+#define DRBD_ENABLE_FAULTS
+
+#endif
diff -uNrp linux-2.6.29/include/linux/drbd_limits.h linux-2.6.29-drbd/include/linux/drbd_limits.h
--- linux-2.6.29/include/linux/drbd_limits.h 1970-01-01 01:00:00.000000000 +0100
+++ linux-2.6.29-drbd/include/linux/drbd_limits.h 2009-03-17 14:37:57.263132000 +0100
@@ -0,0 +1,133 @@
+/*
+ drbd_limits.h
+ This file is part of DRBD by Philipp Reisner and Lars Ellenberg.
+*/
+
+/*
+ * Our current limitations.
+ * Some of them are hard limits,
+ * some of them are arbitrary range limits that make it easier to provide
+ * feedback about nonsense settings for certain configurable values.
+ */
+
+#ifndef DRBD_LIMITS_H
+#define DRBD_LIMITS_H 1
+
+#define DEBUG_RANGE_CHECK 0
+
+#define DRBD_MINOR_COUNT_MIN 1
+#define DRBD_MINOR_COUNT_MAX 255
+
+#define DRBD_DIALOG_REFRESH_MIN 0
+#define DRBD_DIALOG_REFRESH_MAX 600
+
+/* valid port number */
+#define DRBD_PORT_MIN 1
+#define DRBD_PORT_MAX 0xffff
+
+/* startup { */
+ /* if you want more than 3.4 days, disable */
+#define DRBD_WFC_TIMEOUT_MIN 0
+#define DRBD_WFC_TIMEOUT_MAX 300000
+#define DRBD_WFC_TIMEOUT_DEF 0
+
+#define DRBD_DEGR_WFC_TIMEOUT_MIN 0
+#define DRBD_DEGR_WFC_TIMEOUT_MAX 300000
+#define DRBD_DEGR_WFC_TIMEOUT_DEF 0
+
+#define DRBD_OUTDATED_WFC_TIMEOUT_MIN 0
+#define DRBD_OUTDATED_WFC_TIMEOUT_MAX 300000
+#define DRBD_OUTDATED_WFC_TIMEOUT_DEF 0
+/* }*/
+
+/* net { */
+ /* timeout, unit: centiseconds
+ * a timeout of more than one minute is not useful */
+#define DRBD_TIMEOUT_MIN 1
+#define DRBD_TIMEOUT_MAX 600
+#define DRBD_TIMEOUT_DEF 60 /* 6 seconds */
+
+ /* active connection retries when WFConnection */
+#define DRBD_CONNECT_INT_MIN 1
+#define DRBD_CONNECT_INT_MAX 120
+#define DRBD_CONNECT_INT_DEF 10 /* seconds */
+
+ /* keep-alive probes when idle */
+#define DRBD_PING_INT_MIN 1
+#define DRBD_PING_INT_MAX 120
+#define DRBD_PING_INT_DEF 10
+
+ /* timeout for the ping packets. */
+#define DRBD_PING_TIMEO_MIN 1
+#define DRBD_PING_TIMEO_MAX 100
+#define DRBD_PING_TIMEO_DEF 5
+
+ /* max number of write requests between write barriers */
+#define DRBD_MAX_EPOCH_SIZE_MIN 1
+#define DRBD_MAX_EPOCH_SIZE_MAX 20000
+#define DRBD_MAX_EPOCH_SIZE_DEF 2048
+
+ /* I don't think that a TCP send buffer of more than 10M is useful */
+#define DRBD_SNDBUF_SIZE_MIN 0
+#define DRBD_SNDBUF_SIZE_MAX (10<<20)
+#define DRBD_SNDBUF_SIZE_DEF (2*65535)
+
+ /* @4k PageSize -> 128kB - 512MB */
+#define DRBD_MAX_BUFFERS_MIN 32
+#define DRBD_MAX_BUFFERS_MAX 131072
+#define DRBD_MAX_BUFFERS_DEF 2048
+
+ /* @4k PageSize -> 4kB - 512MB */
+#define DRBD_UNPLUG_WATERMARK_MIN 1
+#define DRBD_UNPLUG_WATERMARK_MAX 131072
+#define DRBD_UNPLUG_WATERMARK_DEF (DRBD_MAX_BUFFERS_DEF/16)
+
+ /* 0 is disabled.
+ * 200 should be more than enough even for very short timeouts */
+#define DRBD_KO_COUNT_MIN 0
+#define DRBD_KO_COUNT_MAX 200
+#define DRBD_KO_COUNT_DEF 0
+/* } */
+
+/* syncer { */
+ /* FIXME allow rate to be zero? */
+#define DRBD_RATE_MIN 1
+/* channel bonding 10 GbE, or other hardware */
+#define DRBD_RATE_MAX (4 << 20)
+#define DRBD_RATE_DEF 250 /* kb/second */
+
+ /* less than 7 would hit performance unnecessarily.
+ * 3833 is the largest prime that still fits
+ * into 64 sectors of the activity log */
+#define DRBD_AL_EXTENTS_MIN 7
+#define DRBD_AL_EXTENTS_MAX 3833
+#define DRBD_AL_EXTENTS_DEF 127
+
+#define DRBD_AFTER_MIN -1
+#define DRBD_AFTER_MAX 255
+#define DRBD_AFTER_DEF -1
+
+/* } */
+
+/* drbdsetup XY resize -d Z
+ * you are free to reduce the device size to nothing, if you want to.
+ * the upper limit with 64bit kernel, enough ram and flexible meta data
+ * is 16 TB, currently. */
+/* DRBD_MAX_SECTORS */
+#define DRBD_DISK_SIZE_SECT_MIN 0
+#define DRBD_DISK_SIZE_SECT_MAX (16 * (2LLU << 30))
+#define DRBD_DISK_SIZE_SECT_DEF 0 /* = disabled = no user size... */
+
+#define DRBD_ON_IO_ERROR_DEF PassOn
+#define DRBD_FENCING_DEF DontCare
+#define DRBD_AFTER_SB_0P_DEF Disconnect
+#define DRBD_AFTER_SB_1P_DEF Disconnect
+#define DRBD_AFTER_SB_2P_DEF Disconnect
+#define DRBD_RR_CONFLICT_DEF Disconnect
+
+#define DRBD_MAX_BIO_BVECS_MIN 0
+#define DRBD_MAX_BIO_BVECS_MAX 128
+#define DRBD_MAX_BIO_BVECS_DEF 0
+
+#undef RANGE
+#endif
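
All of the MIN/MAX/DEF triples above follow the same convention: DEF is used
when an option is not transmitted, MIN/MAX bound what the configuration code
accepts. A minimal sketch of the consuming side under that assumption
(drbd_range_ok is a made-up name, not part of this patch):

	static inline int drbd_range_ok(long long v, long long min, long long max)
	{
		return v >= min && v <= max;
	}

	/* e.g. drbd_range_ok(timeout, DRBD_TIMEOUT_MIN, DRBD_TIMEOUT_MAX),
	 * falling back to DRBD_TIMEOUT_DEF when the tag was absent */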
diff -uNrp linux-2.6.29/include/linux/drbd_nl.h linux-2.6.29-drbd/include/linux/drbd_nl.h
--- linux-2.6.29/include/linux/drbd_nl.h 1970-01-01 01:00:00.000000000 +0100
+++ linux-2.6.29-drbd/include/linux/drbd_nl.h 2009-03-30 15:41:58.567154000 +0200
@@ -0,0 +1,135 @@
+/*
+ PACKET( name,
+ TYPE ( pn, pr, member )
+ ...
+ )
+
+ You may never reuse one of the pn (tag number) arguments
+*/
+
+#if !defined(NL_PACKET) || !defined(NL_STRING) || !defined(NL_INTEGER) || !defined(NL_BIT) || !defined(NL_INT64)
+#error "The macros NL_PACKET, NL_STRING, NL_INTEGER, NL_INT64 and NL_BIT needs to be defined"
+#endif
+
+NL_PACKET(primary, 1,
+ NL_BIT( 1, T_MAY_IGNORE, overwrite_peer)
+)
+
+NL_PACKET(secondary, 2, )
+
+NL_PACKET(disk_conf, 3,
+ NL_INT64( 2, T_MAY_IGNORE, disk_size)
+ NL_STRING( 3, T_MANDATORY, backing_dev, 128)
+ NL_STRING( 4, T_MANDATORY, meta_dev, 128)
+ NL_INTEGER( 5, T_MANDATORY, meta_dev_idx)
+ NL_INTEGER( 6, T_MAY_IGNORE, on_io_error)
+ NL_INTEGER( 7, T_MAY_IGNORE, fencing)
+ NL_BIT( 37, T_MAY_IGNORE, use_bmbv)
+ NL_BIT( 53, T_MAY_IGNORE, no_disk_flush)
+ NL_BIT( 54, T_MAY_IGNORE, no_md_flush)
+ /* 55 max_bio_size was available in 8.2.6rc2 */
+ NL_INTEGER( 56, T_MAY_IGNORE, max_bio_bvecs)
+ NL_BIT( 57, T_MAY_IGNORE, no_disk_barrier)
+ NL_BIT( 58, T_MAY_IGNORE, no_disk_drain)
+)
+
+NL_PACKET(detach, 4, )
+
+NL_PACKET(net_conf, 5,
+ NL_STRING( 8, T_MANDATORY, my_addr, 128)
+ NL_STRING( 9, T_MANDATORY, peer_addr, 128)
+ NL_STRING( 10, T_MAY_IGNORE, shared_secret, SHARED_SECRET_MAX)
+ NL_STRING( 11, T_MAY_IGNORE, cram_hmac_alg, SHARED_SECRET_MAX)
+ NL_STRING( 44, T_MAY_IGNORE, integrity_alg, SHARED_SECRET_MAX)
+ NL_INTEGER( 14, T_MAY_IGNORE, timeout)
+ NL_INTEGER( 15, T_MANDATORY, wire_protocol)
+ NL_INTEGER( 16, T_MAY_IGNORE, try_connect_int)
+ NL_INTEGER( 17, T_MAY_IGNORE, ping_int)
+ NL_INTEGER( 18, T_MAY_IGNORE, max_epoch_size)
+ NL_INTEGER( 19, T_MAY_IGNORE, max_buffers)
+ NL_INTEGER( 20, T_MAY_IGNORE, unplug_watermark)
+ NL_INTEGER( 21, T_MAY_IGNORE, sndbuf_size)
+ NL_INTEGER( 22, T_MAY_IGNORE, ko_count)
+ NL_INTEGER( 24, T_MAY_IGNORE, after_sb_0p)
+ NL_INTEGER( 25, T_MAY_IGNORE, after_sb_1p)
+ NL_INTEGER( 26, T_MAY_IGNORE, after_sb_2p)
+ NL_INTEGER( 39, T_MAY_IGNORE, rr_conflict)
+ NL_INTEGER( 40, T_MAY_IGNORE, ping_timeo)
+ /* 59 addr_family was available in GIT, never released */
+ NL_BIT( 60, T_MANDATORY, mind_af)
+ NL_BIT( 27, T_MAY_IGNORE, want_lose)
+ NL_BIT( 28, T_MAY_IGNORE, two_primaries)
+ NL_BIT( 41, T_MAY_IGNORE, always_asbp)
+ NL_BIT( 61, T_MAY_IGNORE, no_cork)
+ NL_BIT( 62, T_MANDATORY, auto_sndbuf_size)
+)
+
+NL_PACKET(disconnect, 6, )
+
+NL_PACKET(resize, 7,
+ NL_INT64( 29, T_MAY_IGNORE, resize_size)
+)
+
+NL_PACKET(syncer_conf, 8,
+ NL_INTEGER( 30, T_MAY_IGNORE, rate)
+ NL_INTEGER( 31, T_MAY_IGNORE, after)
+ NL_INTEGER( 32, T_MAY_IGNORE, al_extents)
+ NL_STRING( 52, T_MAY_IGNORE, verify_alg, SHARED_SECRET_MAX)
+ NL_STRING( 51, T_MAY_IGNORE, cpu_mask, 32)
+ NL_STRING( 64, T_MAY_IGNORE, csums_alg, SHARED_SECRET_MAX)
+ NL_BIT( 65, T_MAY_IGNORE, use_rle_encoding)
+)
+
+NL_PACKET(invalidate, 9, )
+NL_PACKET(invalidate_peer, 10, )
+NL_PACKET(pause_sync, 11, )
+NL_PACKET(resume_sync, 12, )
+NL_PACKET(suspend_io, 13, )
+NL_PACKET(resume_io, 14, )
+NL_PACKET(outdate, 15, )
+NL_PACKET(get_config, 16, )
+NL_PACKET(get_state, 17,
+ NL_INTEGER( 33, T_MAY_IGNORE, state_i)
+)
+
+NL_PACKET(get_uuids, 18,
+ NL_STRING( 34, T_MAY_IGNORE, uuids, (UUID_SIZE*sizeof(__u64)))
+ NL_INTEGER( 35, T_MAY_IGNORE, uuids_flags)
+)
+
+NL_PACKET(get_timeout_flag, 19,
+ NL_BIT( 36, T_MAY_IGNORE, use_degraded)
+)
+
+NL_PACKET(call_helper, 20,
+ NL_STRING( 38, T_MAY_IGNORE, helper, 32)
+)
+
+/* Tag nr 42 already allocated in drbd-8.1 development. */
+
+NL_PACKET(sync_progress, 23,
+ NL_INTEGER( 43, T_MAY_IGNORE, sync_progress)
+)
+
+NL_PACKET(dump_ee, 24,
+ NL_STRING( 45, T_MAY_IGNORE, dump_ee_reason, 32)
+ NL_STRING( 46, T_MAY_IGNORE, seen_digest, SHARED_SECRET_MAX)
+ NL_STRING( 47, T_MAY_IGNORE, calc_digest, SHARED_SECRET_MAX)
+ NL_INT64( 48, T_MAY_IGNORE, ee_sector)
+ NL_INT64( 49, T_MAY_IGNORE, ee_block_id)
+ NL_STRING( 50, T_MAY_IGNORE, ee_data, 32 << 10)
+)
+
+NL_PACKET(start_ov, 25,
+)
+
+NL_PACKET(new_c_uuid, 26,
+ NL_BIT( 63, T_MANDATORY, clear_bm)
+)
+
+#undef NL_PACKET
+#undef NL_INTEGER
+#undef NL_INT64
+#undef NL_BIT
+#undef NL_STRING
+
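
Note that drbd_nl.h ends by #undef'ing its own macros: it is an "X-macro"
file, included several times with different NL_* definitions, so the single
packet description above expands into enums, size tables and parser functions
(see drbd_tag_magic.h and drbd_nl.c below). A tiny standalone illustration of
the pattern, with made-up names:

	#define COLORS C(red) C(green) C(blue)

	#define C(name) color_ ## name,	/* 1st expansion: an enum */
	enum color { COLORS color_count };
	#undef C

	#define C(name) #name,			/* 2nd expansion: strings */
	static const char *color_name[] = { COLORS };
	#undef C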
diff -uNrp linux-2.6.29/include/linux/drbd_tag_magic.h linux-2.6.29-drbd/include/linux/drbd_tag_magic.h
--- linux-2.6.29/include/linux/drbd_tag_magic.h 1970-01-01 01:00:00.000000000 +0100
+++ linux-2.6.29-drbd/include/linux/drbd_tag_magic.h 2009-03-16 18:36:50.707173000 +0100
@@ -0,0 +1,83 @@
+#ifndef DRBD_TAG_MAGIC_H
+#define DRBD_TAG_MAGIC_H
+
+#define TT_END 0
+#define TT_REMOVED 0xE000
+
+/* declare packet_type enums */
+enum packet_types {
+#define NL_PACKET(name, number, fields) P_ ## name = number,
+#define NL_INTEGER(pn, pr, member)
+#define NL_INT64(pn, pr, member)
+#define NL_BIT(pn, pr, member)
+#define NL_STRING(pn, pr, member, len)
+#include "drbd_nl.h"
+ P_nl_after_last_packet,
+};
+
+/* These structs are used to deduce the size of the tag lists: */
+#define NL_PACKET(name, number, fields) \
+ struct name ## _tag_len_struct { fields };
+#define NL_INTEGER(pn, pr, member) \
+ int member; int tag_and_len ## member;
+#define NL_INT64(pn, pr, member) \
+ __u64 member; int tag_and_len ## member;
+#define NL_BIT(pn, pr, member) \
+ unsigned char member:1; int tag_and_len ## member;
+#define NL_STRING(pn, pr, member, len) \
+ unsigned char member[len]; int member ## _len; \
+ int tag_and_len ## member;
+#include "linux/drbd_nl.h"
+
+/* declare tag-list sizes */
+static const int tag_list_sizes[] = {
+#define NL_PACKET(name, number, fields) 2 fields ,
+#define NL_INTEGER(pn, pr, member) + 4 + 4
+#define NL_INT64(pn, pr, member) + 4 + 8
+#define NL_BIT(pn, pr, member) + 4 + 1
+#define NL_STRING(pn, pr, member, len) + 4 + (len)
+#include "drbd_nl.h"
+};
+
+/* The two highest bits are used for the tag type */
+#define TT_MASK 0xC000
+#define TT_INTEGER 0x0000
+#define TT_INT64 0x4000
+#define TT_BIT 0x8000
+#define TT_STRING 0xC000
+/* The next bit indicates if processing of the tag is mandatory */
+#define T_MANDATORY 0x2000
+#define T_MAY_IGNORE 0x0000
+#define TN_MASK 0x1fff
+/* The remaining 13 bits are used to enumerate the tags */
+
+#define tag_type(T) ((T) & TT_MASK)
+#define tag_number(T) ((T) & TN_MASK)
+
+/* declare tag enums */
+#define NL_PACKET(name, number, fields) fields
+enum drbd_tags {
+#define NL_INTEGER(pn, pr, member) T_ ## member = pn | TT_INTEGER | pr ,
+#define NL_INT64(pn, pr, member) T_ ## member = pn | TT_INT64 | pr ,
+#define NL_BIT(pn, pr, member) T_ ## member = pn | TT_BIT | pr ,
+#define NL_STRING(pn, pr, member, len) T_ ## member = pn | TT_STRING | pr ,
+#include "drbd_nl.h"
+};
+
+struct tag {
+ const char *name;
+ int type_n_flags;
+ int max_len;
+};
+
+/* declare tag names */
+#define NL_PACKET(name, number, fields) fields
+static const struct tag tag_descriptions[] = {
+#define NL_INTEGER(pn, pr, member) [ pn ] = { #member, TT_INTEGER | pr, sizeof(int) },
+#define NL_INT64(pn, pr, member) [ pn ] = { #member, TT_INT64 | pr, sizeof(__u64) },
+#define NL_BIT(pn, pr, member) [ pn ] = { #member, TT_BIT | pr, sizeof(int) },
+#define NL_STRING(pn, pr, member, len) [ pn ] = { #member, TT_STRING | pr, (len) },
+#include "drbd_nl.h"
+};
+
+#endif
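
To make the tag bit layout concrete, a worked example: the net_conf field
declared as NL_INTEGER(15, T_MANDATORY, wire_protocol) in drbd_nl.h expands
to

	T_wire_protocol = 15 | TT_INTEGER | T_MANDATORY = 0x200f

so tag_type(0x200f) == TT_INTEGER, tag_number(0x200f) == 15, and the set
T_MANDATORY bit tells a receiver that does not know this tag to reject the
whole request instead of silently skipping it (see the default case of the
generated *_from_tags() parsers in drbd_nl.c below).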
diff -uNrp linux-2.6.29/drivers/block/drbd/drbd_nl.c linux-2.6.29-drbd/drivers/block/drbd/drbd_nl.c
--- linux-2.6.29/drivers/block/drbd/drbd_nl.c 1970-01-01 01:00:00.000000000 +0100
+++ linux-2.6.29-drbd/drivers/block/drbd/drbd_nl.c 2009-03-30 15:41:59.607152000 +0200
@@ -0,0 +1,2426 @@
+/*
+ drbd_nl.c
+
+ This file is part of DRBD by Philipp Reisner and Lars Ellenberg.
+
+ Copyright (C) 2001-2008, LINBIT Information Technologies GmbH.
+ Copyright (C) 1999-2008, Philipp Reisner <[email protected]>.
+ Copyright (C) 2002-2008, Lars Ellenberg <[email protected]>.
+
+ drbd is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2, or (at your option)
+ any later version.
+
+ drbd is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with drbd; see the file COPYING. If not, write to
+ the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA.
+
+ */
+
+#include <linux/autoconf.h>
+#include <linux/module.h>
+#include <linux/in.h>
+#include <linux/fs.h>
+#include <linux/buffer_head.h> /* for fsync_bdev */
+#include <linux/file.h>
+#include <linux/slab.h>
+#include <linux/connector.h>
+#include <linux/drbd.h>
+#include <linux/blkpg.h>
+#include <linux/cpumask.h>
+
+#include "drbd_int.h"
+#include "drbd_wrappers.h"
+#include <linux/drbd_tag_magic.h>
+#include <linux/drbd_limits.h>
+
+/* see get_sb_bdev and bd_claim */
+static char *drbd_m_holder = "Hands off! this is DRBD's meta data device.";
+
+/* Generate the tag_list to struct functions */
+#define NL_PACKET(name, number, fields) \
+STATIC int name ## _from_tags(struct drbd_conf *mdev, \
+ unsigned short *tags, struct name *arg) \
+{ \
+ int tag; \
+ int dlen; \
+ \
+ while ((tag = *tags++) != TT_END) { \
+ dlen = *tags++; \
+ switch (tag_number(tag)) { \
+ fields \
+ default: \
+ if (tag & T_MANDATORY) { \
+ ERR("Unknown tag: %d\n", tag_number(tag)); \
+ return 0; \
+ } \
+ } \
+ tags = (unsigned short *)((char *)tags + dlen); \
+ } \
+ return 1; \
+}
+#define NL_INTEGER(pn, pr, member) \
+ case pn: /* D_ASSERT( tag_type(tag) == TT_INTEGER ); */ \
+ arg->member = *(int *)(tags); \
+ break;
+#define NL_INT64(pn, pr, member) \
+ case pn: /* D_ASSERT( tag_type(tag) == TT_INT64 ); */ \
+ arg->member = *(u64 *)(tags); \
+ break;
+#define NL_BIT(pn, pr, member) \
+ case pn: /* D_ASSERT( tag_type(tag) == TT_BIT ); */ \
+ arg->member = *(char *)(tags) ? 1 : 0; \
+ break;
+#define NL_STRING(pn, pr, member, len) \
+ case pn: /* D_ASSERT( tag_type(tag) == TT_STRING ); */ \
+ if (dlen > len) { \
+ ERR("arg too long: %s (%u wanted, max len: %u bytes)\n", \
+ #member, dlen, (unsigned int)len); \
+ return 0; \
+ } \
+ arg->member ## _len = dlen; \
+ memcpy(arg->member, tags, min_t(size_t, dlen, len)); \
+ break;
+#include "linux/drbd_nl.h"
+
+/* Generate the struct to tag_list functions */
+#define NL_PACKET(name, number, fields) \
+STATIC unsigned short* \
+name ## _to_tags(struct drbd_conf *mdev, \
+ struct name *arg, unsigned short *tags) \
+{ \
+ fields \
+ return tags; \
+}
+
+#define NL_INTEGER(pn, pr, member) \
+ *tags++ = pn | pr | TT_INTEGER; \
+ *tags++ = sizeof(int); \
+ *(int *)tags = arg->member; \
+ tags = (unsigned short *)((char *)tags+sizeof(int));
+#define NL_INT64(pn, pr, member) \
+ *tags++ = pn | pr | TT_INT64; \
+ *tags++ = sizeof(u64); \
+ *(u64 *)tags = arg->member; \
+ tags = (unsigned short *)((char *)tags+sizeof(u64));
+#define NL_BIT(pn, pr, member) \
+ *tags++ = pn | pr | TT_BIT; \
+ *tags++ = sizeof(char); \
+ *(char *)tags = arg->member; \
+ tags = (unsigned short *)((char *)tags+sizeof(char));
+#define NL_STRING(pn, pr, member, len) \
+ *tags++ = pn | pr | TT_STRING; \
+ *tags++ = arg->member ## _len; \
+ memcpy(tags, arg->member, arg->member ## _len); \
+ tags = (unsigned short *)((char *)tags + arg->member ## _len);
+#include "linux/drbd_nl.h"
+
+void drbd_bcast_ev_helper(struct drbd_conf *mdev, char *helper_name);
+void drbd_nl_send_reply(struct cn_msg *, int);
+
+STATIC char *nl_packet_name(int packet_type)
+{
+/* Generate packet type strings */
+#define NL_PACKET(name, number, fields) \
+ [P_ ## name] = # name,
+#define NL_INTEGER Argh!
+#define NL_BIT Argh!
+#define NL_INT64 Argh!
+#define NL_STRING Argh!
+
+ static char *nl_tag_name[P_nl_after_last_packet] = {
+#include "linux/drbd_nl.h"
+ };
+
+ return (packet_type < sizeof(nl_tag_name)/sizeof(nl_tag_name[0])) ?
+ nl_tag_name[packet_type] : "*Unknown*";
+}
+
+STATIC void nl_trace_packet(void *data)
+{
+ struct cn_msg *req = data;
+ struct drbd_nl_cfg_req *nlp = (struct drbd_nl_cfg_req *)req->data;
+
+ printk(KERN_INFO "drbd%d: "
+ "Netlink: << %s (%d) - seq: %x, ack: %x, len: %x\n",
+ nlp->drbd_minor,
+ nl_packet_name(nlp->packet_type),
+ nlp->packet_type,
+ req->seq, req->ack, req->len);
+}
+
+STATIC void nl_trace_reply(void *data)
+{
+ struct cn_msg *req = data;
+ struct drbd_nl_cfg_reply *nlp = (struct drbd_nl_cfg_reply *)req->data;
+
+ printk(KERN_INFO "drbd%d: "
+ "Netlink: >> %s (%d) - seq: %x, ack: %x, len: %x\n",
+ nlp->minor,
+ nlp->packet_type == P_nl_after_last_packet ?
+ "Empty-Reply" : nl_packet_name(nlp->packet_type),
+ nlp->packet_type,
+ req->seq, req->ack, req->len);
+}
+
+int drbd_khelper(struct drbd_conf *mdev, char *cmd)
+{
+ char mb[12];
+ char *argv[] = {usermode_helper, cmd, mb, NULL };
+ int ret;
+ static char *envp[] = { "HOME=/",
+ "TERM=linux",
+ "PATH=/sbin:/usr/sbin:/bin:/usr/bin",
+ NULL };
+
+ snprintf(mb, 12, "minor-%d", mdev_to_minor(mdev));
+
+ INFO("helper command: %s %s %s\n", usermode_helper, cmd, mb);
+
+ drbd_bcast_ev_helper(mdev, cmd);
+ ret = call_usermodehelper(usermode_helper, argv, envp, 1);
+ if (ret)
+ drbd_WARN("helper command: %s %s %s exit code %u (0x%x)\n",
+ usermode_helper, cmd, mb,
+ (ret >> 8) & 0xff, ret);
+ else
+ INFO("helper command: %s %s %s exit code %u (0x%x)\n",
+ usermode_helper, cmd, mb,
+ (ret >> 8) & 0xff, ret);
+
+ if (ret < 0) /* Ignore any ERRNOs we got. */
+ ret = 0;
+
+ return ret;
+}
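
/* For illustration (not part of the patch): call_usermodehelper()
 * returns a wait()-style status word, hence the "(ret >> 8) & 0xff"
 * above -- the same computation userspace knows as WEXITSTATUS():
 *
 *	static inline int helper_exit_code(int wait_status)
 *	{
 *		return (wait_status >> 8) & 0xff;
 *	}
 */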
+
+enum drbd_disk_state drbd_try_outdate_peer(struct drbd_conf *mdev)
+{
+ char *ex_to_string;
+ int r;
+ enum drbd_disk_state nps;
+ enum fencing_policy fp;
+
+ D_ASSERT(mdev->state.pdsk == DUnknown);
+
+ if (inc_local_if_state(mdev, Consistent)) {
+ fp = mdev->bc->dc.fencing;
+ dec_local(mdev);
+ } else {
+ drbd_WARN("Not fencing peer, I'm not even Consistent myself.\n");
+ return mdev->state.pdsk;
+ }
+
+ if (fp == Stonith)
+ _drbd_request_state(mdev, NS(susp, 1), ChgWaitComplete);
+
+ r = drbd_khelper(mdev, "fence-peer");
+
+ switch ((r>>8) & 0xff) {
+ case 3: /* peer is inconsistent */
+ ex_to_string = "peer is inconsistent or worse";
+ nps = Inconsistent;
+ break;
+ case 4:
+ ex_to_string = "peer is outdated";
+ nps = Outdated;
+ break;
+ case 5: /* peer was down, we will(have) create(d) a new UUID anyway... */
+ /* If we were more strict, we would return DUnknown here. */
+ ex_to_string = "peer is unreachable, assumed to be dead";
+ nps = Outdated;
+ break;
+ case 6: /* Peer is primary, voluntarily outdate myself.
+ * This is useful when an unconnected Secondary is asked to
+ * become Primary, but finds the other peer still active. */
+ ex_to_string = "peer is active";
+ drbd_WARN("Peer is primary, outdating myself.\n");
+ nps = DUnknown;
+ _drbd_request_state(mdev, NS(disk, Outdated), ChgWaitComplete);
+ break;
+ case 7:
+ if (fp != Stonith)
+ ERR("fence-peer() = 7 && fencing != Stonith !!!\n");
+ ex_to_string = "peer was stonithed";
+ nps = Outdated;
+ break;
+ default:
+ /* The script is broken ... */
+ nps = DUnknown;
+ ERR("fence-peer helper broken, returned %d\n", (r>>8)&0xff);
+ return nps;
+ }
+
+ INFO("fence-peer helper returned %d (%s)\n",
+ (r>>8) & 0xff, ex_to_string);
+ return nps;
+}
+
+
+int drbd_set_role(struct drbd_conf *mdev, enum drbd_role new_role, int force)
+{
+ const int max_tries = 4;
+ int r = 0;
+ int try = 0;
+ int forced = 0;
+ union drbd_state_t mask, val;
+ enum drbd_disk_state nps;
+
+ if (new_role == Primary)
+ request_ping(mdev); /* Detect a dead peer ASAP */
+
+ mutex_lock(&mdev->state_mutex);
+
+ mask.i = 0; mask.role = role_mask;
+ val.i = 0; val.role = new_role;
+
+ while (try++ < max_tries) {
+ r = _drbd_request_state(mdev, mask, val, ChgWaitComplete);
+
+ /* in case we first succeeded to outdate,
+ * but now suddenly could establish a connection */
+ if (r == SS_CW_FailedByPeer && mask.pdsk != 0) {
+ val.pdsk = 0;
+ mask.pdsk = 0;
+ continue;
+ }
+
+ if (r == SS_NoUpToDateDisk && force &&
+ (mdev->state.disk == Inconsistent ||
+ mdev->state.disk == Outdated)) {
+ mask.disk = disk_mask;
+ val.disk = UpToDate;
+ forced = 1;
+ continue;
+ }
+
+ if (r == SS_NoUpToDateDisk &&
+ mdev->state.disk == Consistent) {
+ D_ASSERT(mdev->state.pdsk == DUnknown);
+ nps = drbd_try_outdate_peer(mdev);
+
+ if (nps == Outdated) {
+ val.disk = UpToDate;
+ mask.disk = disk_mask;
+ }
+
+ val.pdsk = nps;
+ mask.pdsk = disk_mask;
+
+ continue;
+ }
+
+ if (r == SS_NothingToDo)
+ goto fail;
+ if (r == SS_PrimaryNOP) {
+ nps = drbd_try_outdate_peer(mdev);
+
+ if (force && nps > Outdated) {
+ drbd_WARN("Forced into split brain situation!\n");
+ nps = Outdated;
+ }
+
+ mask.pdsk = disk_mask;
+ val.pdsk = nps;
+
+ continue;
+ }
+ if (r == SS_TwoPrimaries) {
+ /* Maybe the peer is detected as dead very soon...
+ retry at most once more in this case. */
+ __set_current_state(TASK_INTERRUPTIBLE);
+ schedule_timeout((mdev->net_conf->ping_timeo+1)*HZ/10);
+ if (try < max_tries)
+ try = max_tries - 1;
+ continue;
+ }
+ if (r < SS_Success) {
+ r = _drbd_request_state(mdev, mask, val,
+ ChgStateVerbose + ChgWaitComplete);
+ if (r < SS_Success)
+ goto fail;
+ }
+ break;
+ }
+
+ if (forced)
+ drbd_WARN("Forced to consider local data as UpToDate!\n");
+
+ fsync_bdev(mdev->this_bdev);
+
+ /* Wait until nothing is on the fly :) */
+ wait_event(mdev->misc_wait, atomic_read(&mdev->ap_pending_cnt) == 0);
+
+ if (new_role == Secondary) {
+ set_disk_ro(mdev->vdisk, TRUE);
+ if (inc_local(mdev)) {
+ mdev->bc->md.uuid[Current] &= ~(u64)1;
+ dec_local(mdev);
+ }
+ } else {
+ if (inc_net(mdev)) {
+ mdev->net_conf->want_lose = 0;
+ dec_net(mdev);
+ }
+ set_disk_ro(mdev->vdisk, FALSE);
+ if (inc_local(mdev)) {
+ if (((mdev->state.conn < Connected ||
+ mdev->state.pdsk <= Failed)
+ && mdev->bc->md.uuid[Bitmap] == 0) || forced)
+ drbd_uuid_new_current(mdev);
+
+ mdev->bc->md.uuid[Current] |= (u64)1;
+ dec_local(mdev);
+ }
+ }
+
+ if ((new_role == Secondary) && inc_local(mdev)) {
+ drbd_al_to_on_disk_bm(mdev);
+ dec_local(mdev);
+ }
+
+ if (mdev->state.conn >= WFReportParams) {
+ /* if this was forced, we should consider sync */
+ if (forced)
+ drbd_send_uuids(mdev);
+ drbd_send_state(mdev);
+ }
+
+ drbd_md_sync(mdev);
+
+ drbd_kobject_uevent(mdev);
+ fail:
+ mutex_unlock(&mdev->state_mutex);
+ return r;
+}
+
+
+STATIC int drbd_nl_primary(struct drbd_conf *mdev, struct drbd_nl_cfg_req *nlp,
+ struct drbd_nl_cfg_reply *reply)
+{
+ struct primary primary_args;
+
+ memset(&primary_args, 0, sizeof(struct primary));
+ if (!primary_from_tags(mdev, nlp->tag_list, &primary_args)) {
+ reply->ret_code = UnknownMandatoryTag;
+ return 0;
+ }
+
+ reply->ret_code =
+ drbd_set_role(mdev, Primary, primary_args.overwrite_peer);
+
+ return 0;
+}
+
+STATIC int drbd_nl_secondary(struct drbd_conf *mdev, struct drbd_nl_cfg_req *nlp,
+ struct drbd_nl_cfg_reply *reply)
+{
+ reply->ret_code = drbd_set_role(mdev, Secondary, 0);
+
+ return 0;
+}
+
+/* initializes the md.*_offset members, so we are able to find
+ * the on disk meta data */
+STATIC void drbd_md_set_sector_offsets(struct drbd_conf *mdev,
+ struct drbd_backing_dev *bdev)
+{
+ sector_t md_size_sect = 0;
+ switch (bdev->dc.meta_dev_idx) {
+ default:
+ /* v07 style fixed size indexed meta data */
+ bdev->md.md_size_sect = MD_RESERVED_SECT;
+ bdev->md.md_offset = drbd_md_ss__(mdev, bdev);
+ bdev->md.al_offset = MD_AL_OFFSET;
+ bdev->md.bm_offset = MD_BM_OFFSET;
+ break;
+ case DRBD_MD_INDEX_FLEX_EXT:
+ /* just occupy the full device; unit: sectors */
+ bdev->md.md_size_sect = drbd_get_capacity(bdev->md_bdev);
+ bdev->md.md_offset = 0;
+ bdev->md.al_offset = MD_AL_OFFSET;
+ bdev->md.bm_offset = MD_BM_OFFSET;
+ break;
+ case DRBD_MD_INDEX_INTERNAL:
+ case DRBD_MD_INDEX_FLEX_INT:
+ bdev->md.md_offset = drbd_md_ss__(mdev, bdev);
+ /* al size is still fixed */
+ bdev->md.al_offset = -MD_AL_MAX_SIZE;
+ /* we need (slightly less than) ~ this many bitmap sectors: */
+ md_size_sect = drbd_get_capacity(bdev->backing_bdev);
+ md_size_sect = ALIGN(md_size_sect, BM_SECT_PER_EXT);
+ md_size_sect = BM_SECT_TO_EXT(md_size_sect);
+ md_size_sect = ALIGN(md_size_sect, 8);
+
+ /* plus the "drbd meta data super block",
+ * and the activity log; */
+ md_size_sect += MD_BM_OFFSET;
+
+ bdev->md.md_size_sect = md_size_sect;
+ /* bitmap offset is adjusted by 'super' block size */
+ bdev->md.bm_offset = -md_size_sect + MD_AL_OFFSET;
+ break;
+ }
+}
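
/* For scale (a rough sketch, assuming drbd 8.3's bitmap granularity of
 * one bit per 4 KiB of storage): a 1 TiB backing device has 2^28 4-KiB
 * blocks and thus needs 2^28 bits = 32 MiB of on-disk bitmap, i.e. about
 * 32 KiB of bitmap per GiB, plus the fixed meta data "super block" and
 * activity log accounted for via MD_BM_OFFSET above. */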
+
+char *ppsize(char *buf, unsigned long long size)
+{
+ /* Needs 9 bytes at max. */
+ static char units[] = { 'K', 'M', 'G', 'T', 'P', 'E' };
+ int base = 0;
+ while (size >= 10000) {
+ /* shift + round */
+ size = (size >> 10) + !!(size & (1<<9));
+ base++;
+ }
+ sprintf(buf, "%lu %cB", (long)size, units[base]);
+
+ return buf;
+}
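
/* For illustration: the callers pass a size in KiB (sectors >> 1), and
 * the loop above divides by 1024 (with rounding) while the value still
 * has five or more digits, e.g.
 *	ppsize(buf, 2048)    -> "2048 KB"
 *	ppsize(buf, 10240)   -> "10 MB"
 *	ppsize(buf, 1048576) -> "1024 MB"
 */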
+
+/* there is still a theoretical deadlock when called from receiver
+ * on an Inconsistent Primary:
+ * remote READ does inc_ap_bio, receiver would need to receive answer
+ * packet from remote to dec_ap_bio again.
+ * receiver receive_sizes(), comes here,
+ * waits for ap_bio_cnt == 0. -> deadlock.
+ * but this cannot happen, actually, because:
+ * Primary Inconsistent, and peer's disk is unreachable
+ * (not connected, or bad/no disk on peer):
+ * see drbd_fail_request_early, ap_bio_cnt is zero.
+ * Primary Inconsistent, and SyncTarget:
+ * peer may not initiate a resize.
+ */
+void drbd_suspend_io(struct drbd_conf *mdev)
+{
+ int in_flight;
+ set_bit(SUSPEND_IO, &mdev->flags);
+ in_flight = atomic_read(&mdev->ap_bio_cnt);
+ if (in_flight)
+ wait_event(mdev->misc_wait, !atomic_read(&mdev->ap_bio_cnt));
+}
+
+void drbd_resume_io(struct drbd_conf *mdev)
+{
+ clear_bit(SUSPEND_IO, &mdev->flags);
+ wake_up(&mdev->misc_wait);
+}
+
+/**
+ * drbd_determin_dev_size:
+ * Evaluates all constraints and sets our correct device size.
+ * Negative return values indicate errors. 0 and positive values
+ * indicate success.
+ * You should call drbd_md_sync() after calling this function.
+ */
+enum determin_dev_size_enum drbd_determin_dev_size(struct drbd_conf *mdev) __must_hold(local)
+{
+ sector_t prev_first_sect, prev_size; /* previous meta location */
+ sector_t la_size;
+ sector_t size;
+ char ppb[10];
+
+ int md_moved, la_size_changed;
+ enum determin_dev_size_enum rv = unchanged;
+
+ /* race:
+ * application request passes inc_ap_bio,
+ * but then cannot get an AL-reference.
+ * this function later may wait on ap_bio_cnt == 0. -> deadlock.
+ *
+ * to avoid that:
+ * Suspend IO right here.
+ * still lock the act_log to not trigger ASSERTs there.
+ */
+ drbd_suspend_io(mdev);
+
+ /* no wait necessary anymore, actually we could assert that */
+ wait_event(mdev->al_wait, lc_try_lock(mdev->act_log));
+
+ prev_first_sect = drbd_md_first_sector(mdev->bc);
+ prev_size = mdev->bc->md.md_size_sect;
+ la_size = mdev->bc->md.la_size_sect;
+
+ /* TODO: should only be some assert here, not (re)init... */
+ drbd_md_set_sector_offsets(mdev, mdev->bc);
+
+ size = drbd_new_dev_size(mdev, mdev->bc);
+
+ if (drbd_get_capacity(mdev->this_bdev) != size ||
+ drbd_bm_capacity(mdev) != size) {
+ int err;
+ err = drbd_bm_resize(mdev, size);
+ if (unlikely(err)) {
+ /* currently there is only one error: ENOMEM! */
+ size = drbd_bm_capacity(mdev)>>1;
+ if (size == 0) {
+ ERR("OUT OF MEMORY! "
+ "Could not allocate bitmap! ");
+ } else {
+ ERR("BM resizing failed. "
+ "Leaving size unchanged at size = %lu KB\n",
+ (unsigned long)size);
+ }
+ rv = dev_size_error;
+ }
+ /* racy, see comments above. */
+ drbd_set_my_capacity(mdev, size);
+ mdev->bc->md.la_size_sect = size;
+ INFO("size = %s (%llu KB)\n", ppsize(ppb, size>>1),
+ (unsigned long long)size>>1);
+ }
+ if (rv == dev_size_error)
+ goto out;
+
+ la_size_changed = (la_size != mdev->bc->md.la_size_sect);
+
+ md_moved = prev_first_sect != drbd_md_first_sector(mdev->bc)
+ || prev_size != mdev->bc->md.md_size_sect;
+
+ if (md_moved) {
+ drbd_WARN("Moving meta-data.\n");
+ /* assert: (flexible) internal meta data */
+ }
+
+ if (la_size_changed || md_moved) {
+ drbd_al_shrink(mdev); /* All extents inactive. */
+ INFO("Writing the whole bitmap, size changed\n");
+ rv = drbd_bitmap_io(mdev, &drbd_bm_write, "size changed");
+ drbd_md_mark_dirty(mdev);
+ }
+
+ if (size > la_size)
+ rv = grew;
+ if (size < la_size)
+ rv = shrunk;
+out:
+ lc_unlock(mdev->act_log);
+ wake_up(&mdev->al_wait);
+ drbd_resume_io(mdev);
+
+ return rv;
+}
+
+sector_t
+drbd_new_dev_size(struct drbd_conf *mdev, struct drbd_backing_dev *bdev)
+{
+ sector_t p_size = mdev->p_size; /* partner's disk size. */
+ sector_t la_size = bdev->md.la_size_sect; /* last agreed size. */
+ sector_t m_size; /* my size */
+ sector_t u_size = bdev->dc.disk_size; /* size requested by user. */
+ sector_t size = 0;
+
+ m_size = drbd_get_max_capacity(bdev);
+
+ if (p_size && m_size) {
+ size = min_t(sector_t, p_size, m_size);
+ } else {
+ if (la_size) {
+ size = la_size;
+ if (m_size && m_size < size)
+ size = m_size;
+ if (p_size && p_size < size)
+ size = p_size;
+ } else {
+ if (m_size)
+ size = m_size;
+ if (p_size)
+ size = p_size;
+ }
+ }
+
+ if (size == 0)
+ ERR("Both nodes diskless!\n");
+
+ if (u_size) {
+ if (u_size > size)
+ ERR("Requested disk size is too big (%lu > %lu)\n",
+ (unsigned long)u_size>>1, (unsigned long)size>>1);
+ else
+ size = u_size;
+ }
+
+ return size;
+}
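
/* Decision summary of the above, for illustration:
 *	peer size and my size known  -> min(p_size, m_size)
 *	only one (or neither) known  -> last agreed size if we have one,
 *	                                capped by whatever size is known
 *	user requested size (u_size) -> taken, unless it exceeds the result
 */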
+
+/**
+ * drbd_check_al_size:
+ * checks that the al lru is of the requested size, and if necessary tries to
+ * allocate a new one. returns -EBUSY if the current al lru is still used,
+ * -ENOMEM when allocation failed, and 0 on success. You should call
+ * drbd_md_sync() after calling this function.
+ */
+STATIC int drbd_check_al_size(struct drbd_conf *mdev)
+{
+ struct lru_cache *n, *t;
+ struct lc_element *e;
+ unsigned int in_use;
+ int i;
+
+ ERR_IF(mdev->sync_conf.al_extents < 7)
+ mdev->sync_conf.al_extents = 127;
+
+ if (mdev->act_log &&
+ mdev->act_log->nr_elements == mdev->sync_conf.al_extents)
+ return 0;
+
+ in_use = 0;
+ t = mdev->act_log;
+ n = lc_alloc("act_log", mdev->sync_conf.al_extents,
+ sizeof(struct lc_element), mdev);
+
+ if (n == NULL) {
+ ERR("Cannot allocate act_log lru!\n");
+ return -ENOMEM;
+ }
+ spin_lock_irq(&mdev->al_lock);
+ if (t) {
+ for (i = 0; i < t->nr_elements; i++) {
+ e = lc_entry(t, i);
+ if (e->refcnt)
+ ERR("refcnt(%d)==%d\n",
+ e->lc_number, e->refcnt);
+ in_use += e->refcnt;
+ }
+ }
+ if (!in_use)
+ mdev->act_log = n;
+ spin_unlock_irq(&mdev->al_lock);
+ if (in_use) {
+ ERR("Activity log still in use!\n");
+ lc_free(n);
+ return -EBUSY;
+ } else {
+ if (t)
+ lc_free(t);
+ }
+ drbd_md_mark_dirty(mdev); /* we changed mdev->act_log->nr_elements */
+ return 0;
+}
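
/* For scale (assuming drbd 8.3's 4 MiB of storage covered per AL
 * extent): the default of 127 al_extents means at most 127 * 4 MiB =
 * 508 MiB may be "hot" (in the activity log) at any time;
 * DRBD_AL_EXTENTS_MAX (3833) allows roughly 15 GiB. */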
+
+void drbd_setup_queue_param(struct drbd_conf *mdev, unsigned int max_seg_s) __must_hold(local)
+{
+ struct request_queue * const q = mdev->rq_queue;
+ struct request_queue * const b = mdev->bc->backing_bdev->bd_disk->queue;
+ /* unsigned int old_max_seg_s = q->max_segment_size; */
+ int max_segments = mdev->bc->dc.max_bio_bvecs;
+
+ if (b->merge_bvec_fn && !mdev->bc->dc.use_bmbv)
+ max_seg_s = PAGE_SIZE;
+
+ max_seg_s = min(b->max_sectors * b->hardsect_size, max_seg_s);
+
+ MTRACE(TraceTypeRq, TraceLvlSummary,
+ DUMPI(b->max_sectors);
+ DUMPI(b->max_phys_segments);
+ DUMPI(b->max_hw_segments);
+ DUMPI(b->max_segment_size);
+ DUMPI(b->hardsect_size);
+ DUMPI(b->seg_boundary_mask);
+ );
+
+ q->max_sectors = max_seg_s >> 9;
+ if (max_segments) {
+ q->max_phys_segments = max_segments;
+ q->max_hw_segments = max_segments;
+ } else {
+ q->max_phys_segments = MAX_PHYS_SEGMENTS;
+ q->max_hw_segments = MAX_HW_SEGMENTS;
+ }
+ q->max_segment_size = max_seg_s;
+ q->hardsect_size = 512;
+ q->seg_boundary_mask = PAGE_SIZE-1;
+ blk_queue_stack_limits(q, b);
+
+ /* KERNEL BUG. in ll_rw_blk.c ??
+ * t->max_segment_size = min(t->max_segment_size,b->max_segment_size);
+ * should be
+ * t->max_segment_size = min_not_zero(...,...)
+ * workaround here: */
+ if (q->max_segment_size == 0)
+ q->max_segment_size = max_seg_s;
+
+ MTRACE(TraceTypeRq, TraceLvlSummary,
+ DUMPI(q->max_sectors);
+ DUMPI(q->max_phys_segments);
+ DUMPI(q->max_hw_segments);
+ DUMPI(q->max_segment_size);
+ DUMPI(q->hardsect_size);
+ DUMPI(q->seg_boundary_mask);
+ );
+
+ if (b->merge_bvec_fn)
+ drbd_WARN("Backing device's merge_bvec_fn() = %p\n",
+ b->merge_bvec_fn);
+ INFO("max_segment_size ( = BIO size ) = %u\n", q->max_segment_size);
+
+ if (q->backing_dev_info.ra_pages != b->backing_dev_info.ra_pages) {
+ INFO("Adjusting my ra_pages to backing device's (%lu -> %lu)\n",
+ q->backing_dev_info.ra_pages,
+ b->backing_dev_info.ra_pages);
+ q->backing_dev_info.ra_pages = b->backing_dev_info.ra_pages;
+ }
+}
+
+/* always returns 0;
+ * the interesting return code is in reply->ret_code */
+STATIC int drbd_nl_disk_conf(struct drbd_conf *mdev, struct drbd_nl_cfg_req *nlp,
+ struct drbd_nl_cfg_reply *reply)
+{
+ enum ret_codes retcode;
+ enum determin_dev_size_enum dd;
+ sector_t max_possible_sectors;
+ sector_t min_md_device_sectors;
+ struct drbd_backing_dev *nbc = NULL; /* new_backing_conf */
+ struct inode *inode, *inode2;
+ struct lru_cache *resync_lru = NULL;
+ union drbd_state_t ns, os;
+ int rv, ntries = 0;
+ int cp_discovered = 0;
+ int hardsect;
+
+ /* if you want to reconfigure, please tear down first */
+ if (mdev->state.disk > Diskless) {
+ retcode = HaveDiskConfig;
+ goto fail;
+ }
+
+ /*
+ * We may have gotten here very quickly from a detach. Wait for a bit
+ * then fail.
+ */
+ while (1) {
+ __no_warn(local, nbc = mdev->bc;);
+ if (nbc == NULL)
+ break;
+ if (ntries++ >= 5) {
+ drbd_WARN("drbd_nl_disk_conf: mdev->bc not NULL.\n");
+ retcode = HaveDiskConfig;
+ goto fail;
+ }
+ __set_current_state(TASK_INTERRUPTIBLE);
+ schedule_timeout(HZ/10);
+ }
+
+ nbc = kmalloc(sizeof(struct drbd_backing_dev), GFP_KERNEL);
+ if (!nbc) {
+ retcode = KMallocFailed;
+ goto fail;
+ }
+
+ memset(&nbc->md, 0, sizeof(struct drbd_md));
+
+ if (!(nlp->flags & DRBD_NL_SET_DEFAULTS) && inc_local(mdev)) {
+ memcpy(&nbc->dc, &mdev->bc->dc, sizeof(struct disk_conf));
+ dec_local(mdev);
+ } else {
+ memset(&nbc->dc, 0, sizeof(struct disk_conf));
+ nbc->dc.disk_size = DRBD_DISK_SIZE_SECT_DEF;
+ nbc->dc.on_io_error = DRBD_ON_IO_ERROR_DEF;
+ nbc->dc.fencing = DRBD_FENCING_DEF;
+ nbc->dc.max_bio_bvecs = DRBD_MAX_BIO_BVECS_DEF;
+ }
+
+ if (!disk_conf_from_tags(mdev, nlp->tag_list, &nbc->dc)) {
+ retcode = UnknownMandatoryTag;
+ goto fail;
+ }
+
+ nbc->lo_file = NULL;
+ nbc->md_file = NULL;
+
+ if (nbc->dc.meta_dev_idx < DRBD_MD_INDEX_FLEX_INT) {
+ retcode = LDMDInvalid;
+ goto fail;
+ }
+
+ nbc->lo_file = filp_open(nbc->dc.backing_dev, O_RDWR, 0);
+ if (IS_ERR(nbc->lo_file)) {
+ ERR("open(\"%s\") failed with %ld\n", nbc->dc.backing_dev,
+ PTR_ERR(nbc->lo_file));
+ nbc->lo_file = NULL;
+ retcode = LDNameInvalid;
+ goto fail;
+ }
+
+ inode = nbc->lo_file->f_dentry->d_inode;
+
+ if (!S_ISBLK(inode->i_mode)) {
+ retcode = LDNoBlockDev;
+ goto fail;
+ }
+
+ nbc->md_file = filp_open(nbc->dc.meta_dev, O_RDWR, 0);
+ if (IS_ERR(nbc->md_file)) {
+ ERR("open(\"%s\") failed with %ld\n", nbc->dc.meta_dev,
+ PTR_ERR(nbc->md_file));
+ nbc->md_file = NULL;
+ retcode = MDNameInvalid;
+ goto fail;
+ }
+
+ inode2 = nbc->md_file->f_dentry->d_inode;
+
+ if (!S_ISBLK(inode2->i_mode)) {
+ retcode = MDNoBlockDev;
+ goto fail;
+ }
+
+ nbc->backing_bdev = inode->i_bdev;
+ if (bd_claim(nbc->backing_bdev, mdev)) {
+ printk(KERN_ERR "drbd: bd_claim(%p,%p); failed [%p;%p;%u]\n",
+ nbc->backing_bdev, mdev,
+ nbc->backing_bdev->bd_holder,
+ nbc->backing_bdev->bd_contains->bd_holder,
+ nbc->backing_bdev->bd_holders);
+ retcode = LDMounted;
+ goto fail;
+ }
+
+ resync_lru = lc_alloc("resync", 61, sizeof(struct bm_extent), mdev);
+ if (!resync_lru) {
+ retcode = KMallocFailed;
+ goto fail;
+ }
+
+ if (!mdev->bitmap) {
+ if (drbd_bm_init(mdev)) {
+ retcode = KMallocFailed;
+ goto fail;
+ }
+ }
+
+ nbc->md_bdev = inode2->i_bdev;
+ if (bd_claim(nbc->md_bdev,
+ (nbc->dc.meta_dev_idx == DRBD_MD_INDEX_INTERNAL ||
+ nbc->dc.meta_dev_idx == DRBD_MD_INDEX_FLEX_INT) ?
+ (void *)mdev : (void *) drbd_m_holder)) {
+ retcode = MDMounted;
+ goto release_bdev_fail;
+ }
+
+ if ((nbc->backing_bdev == nbc->md_bdev) !=
+ (nbc->dc.meta_dev_idx == DRBD_MD_INDEX_INTERNAL ||
+ nbc->dc.meta_dev_idx == DRBD_MD_INDEX_FLEX_INT)) {
+ retcode = LDMDInvalid;
+ goto release_bdev2_fail;
+ }
+
+ /* RT - for drbd_get_max_capacity() DRBD_MD_INDEX_FLEX_INT */
+ drbd_md_set_sector_offsets(mdev, nbc);
+
+ if (drbd_get_max_capacity(nbc) < nbc->dc.disk_size) {
+ ERR("max capacity %llu smaller than disk size %llu\n",
+ (unsigned long long) drbd_get_max_capacity(nbc),
+ (unsigned long long) nbc->dc.disk_size);
+ retcode = LDDeviceTooSmall;
+ goto release_bdev2_fail;
+ }
+
+ if (nbc->dc.meta_dev_idx < 0) {
+ max_possible_sectors = DRBD_MAX_SECTORS_FLEX;
+ /* at least one MB, otherwise it does not make sense */
+ min_md_device_sectors = (2<<10);
+ } else {
+ max_possible_sectors = DRBD_MAX_SECTORS;
+ min_md_device_sectors = MD_RESERVED_SECT * (nbc->dc.meta_dev_idx + 1);
+ }
+
+ if (drbd_get_capacity(nbc->md_bdev) > max_possible_sectors)
+ drbd_WARN("truncating very big lower level device "
+ "to currently maximum possible %llu sectors\n",
+ (unsigned long long) max_possible_sectors);
+
+ if (drbd_get_capacity(nbc->md_bdev) < min_md_device_sectors) {
+ retcode = MDDeviceTooSmall;
+ drbd_WARN("refusing attach: md-device too small, "
+ "at least %llu sectors needed for this meta-disk type\n",
+ (unsigned long long) min_md_device_sectors);
+ goto release_bdev2_fail;
+ }
+
+ /* Make sure the new disk is big enough
+ * (we may currently be Primary with no local disk...) */
+ if (drbd_get_max_capacity(nbc) <
+ drbd_get_capacity(mdev->this_bdev)) {
+ retcode = LDDeviceTooSmall;
+ goto release_bdev2_fail;
+ }
+
+ nbc->known_size = drbd_get_capacity(nbc->backing_bdev);
+
+ drbd_suspend_io(mdev);
+ wait_event(mdev->misc_wait, !atomic_read(&mdev->ap_pending_cnt));
+ retcode = _drbd_request_state(mdev, NS(disk, Attaching), ChgStateVerbose);
+ drbd_resume_io(mdev);
+ if (retcode < SS_Success)
+ goto release_bdev2_fail;
+
+ if (!inc_local_if_state(mdev, Attaching))
+ goto force_diskless;
+
+ drbd_thread_start(&mdev->worker);
+ drbd_md_set_sector_offsets(mdev, nbc);
+
+ retcode = drbd_md_read(mdev, nbc);
+ if (retcode != NoError)
+ goto force_diskless_dec;
+
+ if (mdev->state.conn < Connected &&
+ mdev->state.role == Primary &&
+ (mdev->ed_uuid & ~((u64)1)) != (nbc->md.uuid[Current] & ~((u64)1))) {
+ ERR("Can only attach to data with current UUID=%016llX\n",
+ (unsigned long long)mdev->ed_uuid);
+ retcode = DataOfWrongCurrent;
+ goto force_diskless_dec;
+ }
+
+ /* Since we are diskless, fix the AL first... */
+ if (drbd_check_al_size(mdev)) {
+ retcode = KMallocFailed;
+ goto force_diskless_dec;
+ }
+
+ /* Prevent shrinking of consistent devices ! */
+ if (drbd_md_test_flag(nbc, MDF_Consistent) &&
+ drbd_new_dev_size(mdev, nbc) < nbc->md.la_size_sect) {
+ drbd_WARN("refusing to truncate a consistent device\n");
+ retcode = LDDeviceTooSmall;
+ goto force_diskless_dec;
+ }
+
+ if (!drbd_al_read_log(mdev, nbc)) {
+ retcode = MDIOError;
+ goto force_diskless_dec;
+ }
+
+ /* allocate a second IO page if hardsect != 512 */
+ hardsect = drbd_get_hardsect(nbc->md_bdev);
+ if (hardsect == 0)
+ hardsect = MD_HARDSECT;
+
+ if (hardsect != MD_HARDSECT) {
+ if (!mdev->md_io_tmpp) {
+ struct page *page = alloc_page(GFP_NOIO);
+ if (!page)
+ goto force_diskless_dec;
+
+ drbd_WARN("Meta data's bdev hardsect = %d != %d\n",
+ hardsect, MD_HARDSECT);
+ drbd_WARN("Workaround engaged (has performace impact).\n");
+
+ mdev->md_io_tmpp = page;
+ }
+ }
+
+ /* Reset the "barriers don't work" bits here, then force meta data to
+ * be written, to ensure we determine if barriers are supported. */
+ if (nbc->dc.no_md_flush)
+ set_bit(MD_NO_BARRIER, &mdev->flags);
+ else
+ clear_bit(MD_NO_BARRIER, &mdev->flags);
+
+ /* Point of no return reached.
+ * Devices and memory are no longer released by error cleanup below.
+ * now mdev takes over responsibility, and the state engine should
+ * clean it up somewhere. */
+ D_ASSERT(mdev->bc == NULL);
+ mdev->bc = nbc;
+ mdev->resync = resync_lru;
+ nbc = NULL;
+ resync_lru = NULL;
+
+ mdev->write_ordering = WO_bio_barrier;
+ drbd_bump_write_ordering(mdev, WO_bio_barrier);
+
+ if (drbd_md_test_flag(mdev->bc, MDF_CrashedPrimary))
+ set_bit(CRASHED_PRIMARY, &mdev->flags);
+ else
+ clear_bit(CRASHED_PRIMARY, &mdev->flags);
+
+ if (drbd_md_test_flag(mdev->bc, MDF_PrimaryInd)) {
+ set_bit(CRASHED_PRIMARY, &mdev->flags);
+ cp_discovered = 1;
+ }
+
+ mdev->send_cnt = 0;
+ mdev->recv_cnt = 0;
+ mdev->read_cnt = 0;
+ mdev->writ_cnt = 0;
+
+ drbd_setup_queue_param(mdev, DRBD_MAX_SEGMENT_SIZE);
+
+ /* If I am currently not Primary,
+ * but meta data primary indicator is set,
+ * I just now recover from a hard crash,
+ * and have been Primary before that crash.
+ *
+ * Now, if I had no connection before that crash
+ * (have been degraded Primary), chances are that
+ * I won't find my peer now either.
+ *
+ * In that case, and _only_ in that case,
+ * we use the degr-wfc-timeout instead of the default,
+ * so we can automatically recover from a crash of a
+ * degraded but active "cluster" after a certain timeout.
+ */
+ clear_bit(USE_DEGR_WFC_T, &mdev->flags);
+ if (mdev->state.role != Primary &&
+ drbd_md_test_flag(mdev->bc, MDF_PrimaryInd) &&
+ !drbd_md_test_flag(mdev->bc, MDF_ConnectedInd))
+ set_bit(USE_DEGR_WFC_T, &mdev->flags);
+
+ dd = drbd_determin_dev_size(mdev);
+ if (dd == dev_size_error) {
+ retcode = VMallocFailed;
+ goto force_diskless_dec;
+ } else if (dd == grew)
+ set_bit(RESYNC_AFTER_NEG, &mdev->flags);
+
+ if (drbd_md_test_flag(mdev->bc, MDF_FullSync)) {
+ INFO("Assuming that all blocks are out of sync "
+ "(aka FullSync)\n");
+ if (drbd_bitmap_io(mdev, &drbd_bmio_set_n_write, "set_n_write from attaching")) {
+ retcode = MDIOError;
+ goto force_diskless_dec;
+ }
+ } else {
+ if (drbd_bitmap_io(mdev, &drbd_bm_read, "read from attaching") < 0) {
+ retcode = MDIOError;
+ goto force_diskless_dec;
+ }
+ }
+
+ if (cp_discovered) {
+ drbd_al_apply_to_bm(mdev);
+ drbd_al_to_on_disk_bm(mdev);
+ }
+
+ spin_lock_irq(&mdev->req_lock);
+ os = mdev->state;
+ ns.i = os.i;
+ /* If MDF_Consistent is not set go into inconsistent state,
+ otherwise investigate MDF_WasUpToDate...
+ If MDF_WasUpToDate is not set go into Outdated disk state,
+ otherwise into Consistent state.
+ */
+ if (drbd_md_test_flag(mdev->bc, MDF_Consistent)) {
+ if (drbd_md_test_flag(mdev->bc, MDF_WasUpToDate))
+ ns.disk = Consistent;
+ else
+ ns.disk = Outdated;
+ } else {
+ ns.disk = Inconsistent;
+ }
+
+ if (drbd_md_test_flag(mdev->bc, MDF_PeerOutDated))
+ ns.pdsk = Outdated;
+
+ if ( ns.disk == Consistent &&
+ (ns.pdsk == Outdated || mdev->bc->dc.fencing == DontCare))
+ ns.disk = UpToDate;
+
+ /* All tests on MDF_PrimaryInd, MDF_ConnectedInd,
+ MDF_Consistent and MDF_WasUpToDate must happen before
+ this point, because drbd_request_state() modifies these
+ flags. */
+
+ /* In case we are Connected, postpone any decision on the new disk
+ state until after the negotiation phase. */
+ if (mdev->state.conn == Connected) {
+ mdev->new_state_tmp.i = ns.i;
+ ns.i = os.i;
+ ns.disk = Negotiating;
+ }
+
+ rv = _drbd_set_state(mdev, ns, ChgStateVerbose, NULL);
+ ns = mdev->state;
+ spin_unlock_irq(&mdev->req_lock);
+
+ if (rv < SS_Success)
+ goto force_diskless_dec;
+
+ if (mdev->state.role == Primary)
+ mdev->bc->md.uuid[Current] |= (u64)1;
+ else
+ mdev->bc->md.uuid[Current] &= ~(u64)1;
+
+ drbd_md_mark_dirty(mdev);
+ drbd_md_sync(mdev);
+
+ drbd_kobject_uevent(mdev);
+ dec_local(mdev);
+ reply->ret_code = retcode;
+ return 0;
+
+ force_diskless_dec:
+ dec_local(mdev);
+ force_diskless:
+ drbd_force_state(mdev, NS(disk, Diskless));
+ drbd_md_sync(mdev);
+ release_bdev2_fail:
+ if (nbc)
+ bd_release(nbc->md_bdev);
+ release_bdev_fail:
+ if (nbc)
+ bd_release(nbc->backing_bdev);
+ fail:
+ if (nbc) {
+ if (nbc->lo_file)
+ fput(nbc->lo_file);
+ if (nbc->md_file)
+ fput(nbc->md_file);
+ kfree(nbc);
+ }
+ if (resync_lru)
+ lc_free(resync_lru);
+
+ reply->ret_code = retcode;
+ return 0;
+}
+
+STATIC int drbd_nl_detach(struct drbd_conf *mdev, struct drbd_nl_cfg_req *nlp,
+ struct drbd_nl_cfg_reply *reply)
+{
+ fsync_bdev(mdev->this_bdev);
+ reply->ret_code = drbd_request_state(mdev, NS(disk, Diskless));
+
+ __set_current_state(TASK_INTERRUPTIBLE);
+ schedule_timeout(HZ/20); /* 50ms; Time for worker to finally terminate */
+
+ return 0;
+}
+
+#define HMAC_NAME_L 20
+
+STATIC int drbd_nl_net_conf(struct drbd_conf *mdev, struct drbd_nl_cfg_req *nlp,
+ struct drbd_nl_cfg_reply *reply)
+{
+ int i, ns;
+ enum ret_codes retcode;
+ struct net_conf *new_conf = NULL;
+ struct crypto_hash *tfm = NULL;
+ struct crypto_hash *integrity_w_tfm = NULL;
+ struct crypto_hash *integrity_r_tfm = NULL;
+ struct hlist_head *new_tl_hash = NULL;
+ struct hlist_head *new_ee_hash = NULL;
+ struct drbd_conf *odev;
+ char hmac_name[HMAC_NAME_L];
+ void *int_dig_out = NULL;
+ void *int_dig_in = NULL;
+ void *int_dig_vv = NULL;
+
+ if (mdev->state.conn > StandAlone) {
+ retcode = HaveNetConfig;
+ goto fail;
+ }
+
+ new_conf = kmalloc(sizeof(struct net_conf), GFP_KERNEL);
+ if (!new_conf) {
+ retcode = KMallocFailed;
+ goto fail;
+ }
+
+ if (!(nlp->flags & DRBD_NL_SET_DEFAULTS) && inc_net(mdev)) {
+ memcpy(new_conf, mdev->net_conf, sizeof(struct net_conf));
+ dec_net(mdev);
+ } else {
+ memset(new_conf, 0, sizeof(struct net_conf));
+ new_conf->timeout = DRBD_TIMEOUT_DEF;
+ new_conf->try_connect_int = DRBD_CONNECT_INT_DEF;
+ new_conf->ping_int = DRBD_PING_INT_DEF;
+ new_conf->max_epoch_size = DRBD_MAX_EPOCH_SIZE_DEF;
+ new_conf->max_buffers = DRBD_MAX_BUFFERS_DEF;
+ new_conf->unplug_watermark = DRBD_UNPLUG_WATERMARK_DEF;
+ new_conf->sndbuf_size = DRBD_SNDBUF_SIZE_DEF;
+ new_conf->ko_count = DRBD_KO_COUNT_DEF;
+ new_conf->after_sb_0p = DRBD_AFTER_SB_0P_DEF;
+ new_conf->after_sb_1p = DRBD_AFTER_SB_1P_DEF;
+ new_conf->after_sb_2p = DRBD_AFTER_SB_2P_DEF;
+ new_conf->want_lose = 0;
+ new_conf->two_primaries = 0;
+ new_conf->wire_protocol = DRBD_PROT_C;
+ new_conf->ping_timeo = DRBD_PING_TIMEO_DEF;
+ new_conf->rr_conflict = DRBD_RR_CONFLICT_DEF;
+ }
+
+ if (!net_conf_from_tags(mdev, nlp->tag_list, new_conf)) {
+ retcode = UnknownMandatoryTag;
+ goto fail;
+ }
+
+ if (new_conf->two_primaries
+ && (new_conf->wire_protocol != DRBD_PROT_C)) {
+ retcode = ProtocolCRequired;
+ goto fail;
+ };
+
+ if (mdev->state.role == Primary && new_conf->want_lose) {
+ retcode = DiscardNotAllowed;
+ goto fail;
+ }
+
+#define M_ADDR(A) (((struct sockaddr_in *)&A->my_addr)->sin_addr.s_addr)
+#define M_PORT(A) (((struct sockaddr_in *)&A->my_addr)->sin_port)
+#define O_ADDR(A) (((struct sockaddr_in *)&A->peer_addr)->sin_addr.s_addr)
+#define O_PORT(A) (((struct sockaddr_in *)&A->peer_addr)->sin_port)
+ retcode = NoError;
+ for (i = 0; i < minor_count; i++) {
+ odev = minor_to_mdev(i);
+ if (!odev || odev == mdev)
+ continue;
+ if (inc_net(odev)) {
+ if (M_ADDR(new_conf) == M_ADDR(odev->net_conf) &&
+ M_PORT(new_conf) == M_PORT(odev->net_conf))
+ retcode = LAAlreadyInUse;
+
+ if (O_ADDR(new_conf) == O_ADDR(odev->net_conf) &&
+ O_PORT(new_conf) == O_PORT(odev->net_conf))
+ retcode = OAAlreadyInUse;
+
+ dec_net(odev);
+ if (retcode != NoError)
+ goto fail;
+ }
+ }
+#undef M_ADDR
+#undef M_PORT
+#undef O_ADDR
+#undef O_PORT
+
+ if (new_conf->cram_hmac_alg[0] != 0) {
+ snprintf(hmac_name, HMAC_NAME_L, "hmac(%s)",
+ new_conf->cram_hmac_alg);
+ tfm = crypto_alloc_hash(hmac_name, 0, CRYPTO_ALG_ASYNC);
+ if (IS_ERR(tfm)) {
+ tfm = NULL;
+ retcode = CRAMAlgNotAvail;
+ goto fail;
+ }
+
+ if (crypto_tfm_alg_type(crypto_hash_tfm(tfm))
+ != CRYPTO_ALG_TYPE_HASH) {
+ retcode = CRAMAlgNotDigest;
+ goto fail;
+ }
+ }
+
+ if (new_conf->integrity_alg[0]) {
+ integrity_w_tfm = crypto_alloc_hash(new_conf->integrity_alg, 0, CRYPTO_ALG_ASYNC);
+ if (IS_ERR(integrity_w_tfm)) {
+ integrity_w_tfm = NULL;
+ retcode=IntegrityAlgNotAvail;
+ goto fail;
+ }
+
+ if (crypto_tfm_alg_type(crypto_hash_tfm(integrity_w_tfm)) != CRYPTO_ALG_TYPE_DIGEST) {
+ retcode=IntegrityAlgNotDigest;
+ goto fail;
+ }
+
+ integrity_r_tfm = crypto_alloc_hash(new_conf->integrity_alg, 0, CRYPTO_ALG_ASYNC);
+ if (IS_ERR(integrity_r_tfm)) {
+ integrity_r_tfm = NULL;
+ retcode=IntegrityAlgNotAvail;
+ goto fail;
+ }
+ }
+
+ ns = new_conf->max_epoch_size/8;
+ if (mdev->tl_hash_s != ns) {
+ new_tl_hash = kzalloc(ns*sizeof(void *), GFP_KERNEL);
+ if (!new_tl_hash) {
+ retcode = KMallocFailed;
+ goto fail;
+ }
+ }
+
+ ns = new_conf->max_buffers/8;
+ if (new_conf->two_primaries && (mdev->ee_hash_s != ns)) {
+ new_ee_hash = kzalloc(ns*sizeof(void *), GFP_KERNEL);
+ if (!new_ee_hash) {
+ retcode = KMallocFailed;
+ goto fail;
+ }
+ }
+
+ ((char *)new_conf->shared_secret)[SHARED_SECRET_MAX-1] = 0;
+
+#if 0
+ /* for the connection loss logic in drbd_recv
+ * I _need_ the resulting timeo in jiffies to be
+ * non-zero and different
+ *
+ * XXX maybe rather store the value scaled to jiffies?
+ * Note: MAX_SCHEDULE_TIMEOUT/HZ*HZ != MAX_SCHEDULE_TIMEOUT
+ * and HZ > 10; which is unlikely to change...
+ * Thus, if interrupted by a signal,
+ * sock_{send,recv}msg returns -EINTR,
+ * if the timeout expires, -EAGAIN.
+ */
+ /* unlikely: someone disabled the timeouts ...
+ * just put some huge values in there. */
+ if (!new_conf->ping_int)
+ new_conf->ping_int = MAX_SCHEDULE_TIMEOUT/HZ;
+ if (!new_conf->timeout)
+ new_conf->timeout = MAX_SCHEDULE_TIMEOUT/HZ*10;
+ if (new_conf->ping_int*10 < new_conf->timeout)
+ new_conf->timeout = new_conf->ping_int*10/6;
+ if (new_conf->ping_int*10 == new_conf->timeout)
+ new_conf->ping_int = new_conf->ping_int+1;
+#endif
+
+ if (integrity_w_tfm) {
+ i = crypto_hash_digestsize(integrity_w_tfm);
+ int_dig_out = kmalloc(i, GFP_KERNEL);
+ if (!int_dig_out) {
+ retcode = KMallocFailed;
+ goto fail;
+ }
+ int_dig_in = kmalloc(i, GFP_KERNEL);
+ if (!int_dig_in) {
+ retcode = KMallocFailed;
+ goto fail;
+ }
+ int_dig_vv = kmalloc(i, GFP_KERNEL);
+ if (!int_dig_vv) {
+ retcode = KMallocFailed;
+ goto fail;
+ }
+ }
+
+ if (!mdev->bitmap) {
+ if(drbd_bm_init(mdev)) {
+ retcode = KMallocFailed;
+ goto fail;
+ }
+ }
+
+ D_ASSERT(mdev->net_conf == NULL);
+ mdev->net_conf = new_conf;
+
+ mdev->send_cnt = 0;
+ mdev->recv_cnt = 0;
+
+ if (new_tl_hash) {
+ kfree(mdev->tl_hash);
+ mdev->tl_hash_s = mdev->net_conf->max_epoch_size/8;
+ mdev->tl_hash = new_tl_hash;
+ }
+
+ if (new_ee_hash) {
+ kfree(mdev->ee_hash);
+ mdev->ee_hash_s = mdev->net_conf->max_buffers/8;
+ mdev->ee_hash = new_ee_hash;
+ }
+
+ crypto_free_hash(mdev->cram_hmac_tfm);
+ mdev->cram_hmac_tfm = tfm;
+
+ crypto_free_hash(mdev->integrity_w_tfm);
+ mdev->integrity_w_tfm = integrity_w_tfm;
+
+ crypto_free_hash(mdev->integrity_r_tfm);
+ mdev->integrity_r_tfm = integrity_r_tfm;
+
+ kfree(mdev->int_dig_out);
+ kfree(mdev->int_dig_in);
+ kfree(mdev->int_dig_vv);
+ mdev->int_dig_out=int_dig_out;
+ mdev->int_dig_in=int_dig_in;
+ mdev->int_dig_vv=int_dig_vv;
+
+ retcode = _drbd_request_state(mdev, NS(conn, Unconnected), ChgStateVerbose);
+ if (retcode >= SS_Success)
+ drbd_thread_start(&mdev->worker);
+
+ drbd_kobject_uevent(mdev);
+ reply->ret_code = retcode;
+ return 0;
+
+fail:
+ kfree(int_dig_out);
+ kfree(int_dig_in);
+ kfree(int_dig_vv);
+ crypto_free_hash(tfm);
+ crypto_free_hash(integrity_w_tfm);
+ crypto_free_hash(integrity_r_tfm);
+ kfree(new_tl_hash);
+ kfree(new_ee_hash);
+ kfree(new_conf);
+
+ reply->ret_code = retcode;
+ return 0;
+}
+
+STATIC int drbd_nl_disconnect(struct drbd_conf *mdev, struct drbd_nl_cfg_req *nlp,
+ struct drbd_nl_cfg_reply *reply)
+{
+ int retcode;
+
+ retcode = _drbd_request_state(mdev, NS(conn, Disconnecting), ChgOrdered);
+
+ if (retcode == SS_NothingToDo)
+ goto done;
+ else if (retcode == SS_AlreadyStandAlone)
+ goto done;
+ else if (retcode == SS_PrimaryNOP) {
+ /* Our state checking code wants to see the peer outdated. */
+ retcode = drbd_request_state(mdev, NS2(conn, Disconnecting,
+ pdsk, Outdated));
+ } else if (retcode == SS_CW_FailedByPeer) {
+ /* The peer probably wants to see us outdated. */
+ retcode = _drbd_request_state(mdev, NS2(conn, Disconnecting,
+ disk, Outdated),
+ ChgOrdered);
+ if (retcode == SS_IsDiskLess || retcode == SS_LowerThanOutdated) {
+ drbd_force_state(mdev, NS(conn, Disconnecting));
+ retcode = SS_Success;
+ }
+ }
+
+ if (retcode < SS_Success)
+ goto fail;
+
+ if (wait_event_interruptible(mdev->state_wait,
+ mdev->state.conn != Disconnecting)) {
+ /* Do not test for mdev->state.conn == StandAlone, since
+ someone else might connect us in the meantime! */
+ retcode = GotSignal;
+ goto fail;
+ }
+
+ done:
+ retcode = NoError;
+ fail:
+ drbd_md_sync(mdev);
+ reply->ret_code = retcode;
+ return 0;
+}
+
+void resync_after_online_grow(struct drbd_conf *mdev)
+{
+ int iass; /* I am sync source */
+
+ INFO("Resync of new storage after online grow\n");
+ if (mdev->state.role != mdev->state.peer)
+ iass = (mdev->state.role == Primary);
+ else
+ iass = test_bit(DISCARD_CONCURRENT, &mdev->flags);
+
+ if (iass)
+ drbd_start_resync(mdev, SyncSource);
+ else
+ _drbd_request_state(mdev, NS(conn, WFSyncUUID), ChgStateVerbose + ChgSerialize);
+}
+
+STATIC int drbd_nl_resize(struct drbd_conf *mdev, struct drbd_nl_cfg_req *nlp,
+ struct drbd_nl_cfg_reply *reply)
+{
+ struct resize rs;
+ int retcode = NoError;
+ int ldsc = 0; /* local disk size changed */
+ enum determin_dev_size_enum dd;
+
+ memset(&rs, 0, sizeof(struct resize));
+ if (!resize_from_tags(mdev, nlp->tag_list, &rs)) {
+ retcode = UnknownMandatoryTag;
+ goto fail;
+ }
+
+ if (mdev->state.conn > Connected) {
+ retcode = NoResizeDuringResync;
+ goto fail;
+ }
+
+ if (mdev->state.role == Secondary &&
+ mdev->state.peer == Secondary) {
+ retcode = APrimaryNodeNeeded;
+ goto fail;
+ }
+
+ if (!inc_local(mdev)) {
+ retcode = HaveNoDiskConfig;
+ goto fail;
+ }
+
+ if (mdev->bc->known_size != drbd_get_capacity(mdev->bc->backing_bdev)) {
+ mdev->bc->known_size = drbd_get_capacity(mdev->bc->backing_bdev);
+ ldsc = 1;
+ }
+
+ mdev->bc->dc.disk_size = (sector_t)rs.resize_size;
+ dd = drbd_determin_dev_size(mdev);
+ drbd_md_sync(mdev);
+ dec_local(mdev);
+ if (dd == dev_size_error) {
+ retcode = VMallocFailed;
+ goto fail;
+ }
+
+ if (mdev->state.conn == Connected && (dd != unchanged || ldsc)) {
+ drbd_send_uuids(mdev);
+ drbd_send_sizes(mdev);
+ if (dd == grew)
+ resync_after_online_grow(mdev);
+ }
+
+ fail:
+ reply->ret_code = retcode;
+ return 0;
+}
+
+STATIC int drbd_nl_syncer_conf(struct drbd_conf *mdev, struct drbd_nl_cfg_req *nlp,
+ struct drbd_nl_cfg_reply *reply)
+{
+ int retcode = NoError;
+ int err;
+ int ovr; /* online verify running */
+ int rsr; /* re-sync running */
+ struct drbd_conf *odev;
+ struct crypto_hash *verify_tfm = NULL;
+ struct crypto_hash *csums_tfm = NULL;
+ struct syncer_conf sc;
+ cpumask_t n_cpu_mask = CPU_MASK_NONE;
+
+ memcpy(&sc, &mdev->sync_conf, sizeof(struct syncer_conf));
+
+ if (nlp->flags & DRBD_NL_SET_DEFAULTS) {
+ memset(&sc, 0, sizeof(struct syncer_conf));
+ sc.rate = DRBD_RATE_DEF;
+ sc.after = DRBD_AFTER_DEF;
+ sc.al_extents = DRBD_AL_EXTENTS_DEF;
+ }
+
+ if (!syncer_conf_from_tags(mdev, nlp->tag_list, &sc)) {
+ retcode = UnknownMandatoryTag;
+ goto fail;
+ }
+
+ if (sc.after != -1) {
+ if (sc.after < -1 || minor_to_mdev(sc.after) == NULL) {
+ retcode = SyncAfterInvalid;
+ goto fail;
+ }
+ odev = minor_to_mdev(sc.after); /* check against loops in */
+ while (1) {
+ if (odev == mdev) {
+ retcode = SyncAfterCycle;
+ goto fail;
+ }
+ if (odev->sync_conf.after == -1)
+ break; /* no cycles. */
+ odev = minor_to_mdev(odev->sync_conf.after);
+ }
+ }
+
+ /* re-sync running */
+ rsr = ( mdev->state.conn == SyncSource ||
+ mdev->state.conn == SyncTarget ||
+ mdev->state.conn == PausedSyncS ||
+ mdev->state.conn == PausedSyncT );
+
+ if (rsr && strcmp(sc.csums_alg, mdev->sync_conf.csums_alg)) {
+ retcode = CSUMSResyncRunning;
+ goto fail;
+ }
+
+ if (!rsr && sc.csums_alg[0]) {
+ csums_tfm = crypto_alloc_hash(sc.csums_alg, 0, CRYPTO_ALG_ASYNC);
+ if (IS_ERR(csums_tfm)) {
+ csums_tfm = NULL;
+ retcode = CSUMSAlgNotAvail;
+ goto fail;
+ }
+
+ if (crypto_tfm_alg_type(crypto_hash_tfm(csums_tfm)) != CRYPTO_ALG_TYPE_DIGEST) {
+ retcode = CSUMSAlgNotDigest;
+ goto fail;
+ }
+ }
+
+ /* online verify running */
+ ovr = (mdev->state.conn == VerifyS || mdev->state.conn == VerifyT);
+
+ if (ovr) {
+ if (strcmp(sc.verify_alg, mdev->sync_conf.verify_alg)) {
+ retcode = VERIFYIsRunning;
+ goto fail;
+ }
+ }
+
+ if (!ovr && sc.verify_alg[0]) {
+ verify_tfm = crypto_alloc_hash(sc.verify_alg, 0, CRYPTO_ALG_ASYNC);
+ if (IS_ERR(verify_tfm)) {
+ verify_tfm = NULL;
+ retcode = VERIFYAlgNotAvail;
+ goto fail;
+ }
+
+ if (crypto_tfm_alg_type(crypto_hash_tfm(verify_tfm)) != CRYPTO_ALG_TYPE_DIGEST) {
+ retcode = VERIFYAlgNotDigest;
+ goto fail;
+ }
+ }
+
+ if (sc.cpu_mask[0] != 0) {
+ err = __bitmap_parse(sc.cpu_mask, 32, 0, (unsigned long *)&n_cpu_mask, NR_CPUS);
+ if (err) {
+ drbd_WARN("__bitmap_parse() failed with %d\n", err);
+ retcode = CPUMaskParseFailed;
+ goto fail;
+ }
+ }
+
+ ERR_IF (sc.rate < 1) sc.rate = 1;
+ ERR_IF (sc.al_extents < 7) sc.al_extents = 127; /* arbitrary minimum */
+#define AL_MAX ((MD_AL_MAX_SIZE-1) * AL_EXTENTS_PT)
+ if (sc.al_extents > AL_MAX) {
+ ERR("sc.al_extents > %d\n", AL_MAX);
+ sc.al_extents = AL_MAX;
+ }
+#undef AL_MAX
+
+ spin_lock(&mdev->peer_seq_lock);
+ /* lock against receive_SyncParam() */
+ mdev->sync_conf = sc;
+
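+	/* Hand the newly allocated transforms over to mdev and NULL the
+	 * local pointers, so the shared fail path below does not free what
+	 * mdev now owns. While a resync/verify is running, the old
+	 * transforms stay in place (no new ones were allocated above). */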
+ if (!rsr) {
+ crypto_free_hash(mdev->csums_tfm);
+ mdev->csums_tfm = csums_tfm;
+ csums_tfm = NULL;
+ }
+
+ if (!ovr) {
+ crypto_free_hash(mdev->verify_tfm);
+ mdev->verify_tfm = verify_tfm;
+ verify_tfm = NULL;
+ }
+ spin_unlock(&mdev->peer_seq_lock);
+
+ if (inc_local(mdev)) {
+ wait_event(mdev->al_wait, lc_try_lock(mdev->act_log));
+ drbd_al_shrink(mdev);
+ err = drbd_check_al_size(mdev);
+ lc_unlock(mdev->act_log);
+ wake_up(&mdev->al_wait);
+
+ dec_local(mdev);
+ drbd_md_sync(mdev);
+
+ if (err) {
+ retcode = KMallocFailed;
+ goto fail;
+ }
+ }
+
+ if (mdev->state.conn >= Connected)
+ drbd_send_sync_param(mdev, &sc);
+
+ drbd_alter_sa(mdev, sc.after);
+
+ if (!cpus_equal(mdev->cpu_mask, n_cpu_mask)) {
+ mdev->cpu_mask = n_cpu_mask;
+ mdev->cpu_mask = drbd_calc_cpu_mask(mdev);
+ mdev->receiver.reset_cpu_mask = 1;
+ mdev->asender.reset_cpu_mask = 1;
+ mdev->worker.reset_cpu_mask = 1;
+ }
+
+ drbd_kobject_uevent(mdev);
+fail:
+ crypto_free_hash(csums_tfm);
+ crypto_free_hash(verify_tfm);
+ reply->ret_code = retcode;
+ return 0;
+}
+
+STATIC int drbd_nl_invalidate(struct drbd_conf *mdev, struct drbd_nl_cfg_req *nlp,
+ struct drbd_nl_cfg_reply *reply)
+{
+ int retcode;
+
+ retcode = _drbd_request_state(mdev, NS(conn, StartingSyncT), ChgOrdered);
+
+ if (retcode < SS_Success && retcode != SS_NeedConnection)
+ retcode = drbd_request_state(mdev, NS(conn, StartingSyncT));
+
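+	/* StartingSyncT needs a connection. Without one, fall back to just
+	 * marking the local disk Inconsistent; retry the state request in
+	 * case a connection was established concurrently. */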
+ while (retcode == SS_NeedConnection) {
+ spin_lock_irq(&mdev->req_lock);
+ if (mdev->state.conn < Connected)
+ retcode = _drbd_set_state(_NS(mdev, disk, Inconsistent), ChgStateVerbose, NULL);
+ spin_unlock_irq(&mdev->req_lock);
+
+ if (retcode != SS_NeedConnection)
+ break;
+
+ retcode = drbd_request_state(mdev, NS(conn, StartingSyncT));
+ }
+
+ reply->ret_code = retcode;
+ return 0;
+}
+
+STATIC int drbd_nl_invalidate_peer(struct drbd_conf *mdev, struct drbd_nl_cfg_req *nlp,
+ struct drbd_nl_cfg_reply *reply)
+{
+
+ reply->ret_code = drbd_request_state(mdev, NS(conn, StartingSyncS));
+
+ return 0;
+}
+
+STATIC int drbd_nl_pause_sync(struct drbd_conf *mdev, struct drbd_nl_cfg_req *nlp,
+ struct drbd_nl_cfg_reply *reply)
+{
+ int retcode = NoError;
+
+ if (drbd_request_state(mdev, NS(user_isp, 1)) == SS_NothingToDo)
+ retcode = PauseFlagAlreadySet;
+
+ reply->ret_code = retcode;
+ return 0;
+}
+
+STATIC int drbd_nl_resume_sync(struct drbd_conf *mdev, struct drbd_nl_cfg_req *nlp,
+ struct drbd_nl_cfg_reply *reply)
+{
+ int retcode = NoError;
+
+ if (drbd_request_state(mdev, NS(user_isp, 0)) == SS_NothingToDo)
+ retcode = PauseFlagAlreadyClear;
+
+ reply->ret_code = retcode;
+ return 0;
+}
+
+STATIC int drbd_nl_suspend_io(struct drbd_conf *mdev, struct drbd_nl_cfg_req *nlp,
+ struct drbd_nl_cfg_reply *reply)
+{
+ reply->ret_code = drbd_request_state(mdev, NS(susp, 1));
+
+ return 0;
+}
+
+STATIC int drbd_nl_resume_io(struct drbd_conf *mdev, struct drbd_nl_cfg_req *nlp,
+ struct drbd_nl_cfg_reply *reply)
+{
+ reply->ret_code = drbd_request_state(mdev, NS(susp, 0));
+ return 0;
+}
+
+STATIC int drbd_nl_outdate(struct drbd_conf *mdev, struct drbd_nl_cfg_req *nlp,
+ struct drbd_nl_cfg_reply *reply)
+{
+ reply->ret_code = drbd_request_state(mdev, NS(disk, Outdated));
+ return 0;
+}
+
+STATIC int drbd_nl_get_config(struct drbd_conf *mdev, struct drbd_nl_cfg_req *nlp,
+ struct drbd_nl_cfg_reply *reply)
+{
+ unsigned short *tl;
+
+ tl = reply->tag_list;
+
+ if (inc_local(mdev)) {
+ tl = disk_conf_to_tags(mdev, &mdev->bc->dc, tl);
+ dec_local(mdev);
+ }
+
+ if (inc_net(mdev)) {
+ tl = net_conf_to_tags(mdev, mdev->net_conf, tl);
+ dec_net(mdev);
+ }
+ tl = syncer_conf_to_tags(mdev, &mdev->sync_conf, tl);
+
+ *tl++ = TT_END; /* Close the tag list */
+
+ return (int)((char *)tl - (char *)reply->tag_list);
+}
+
+STATIC int drbd_nl_get_state(struct drbd_conf *mdev, struct drbd_nl_cfg_req *nlp,
+ struct drbd_nl_cfg_reply *reply)
+{
+ unsigned short *tl = reply->tag_list;
+ union drbd_state_t s = mdev->state;
+ unsigned long rs_left;
+ unsigned int res;
+
+ tl = get_state_to_tags(mdev, (struct get_state *)&s, tl);
+
+ /* no local ref, no bitmap, no syncer progress. */
+ if (s.conn >= SyncSource && s.conn <= PausedSyncT) {
+ if (inc_local(mdev)) {
+ drbd_get_syncer_progress(mdev, &rs_left, &res);
+ *tl++ = T_sync_progress;
+ *tl++ = sizeof(int);
+ memcpy(tl, &res, sizeof(int));
+ tl = (unsigned short *)((char *)tl + sizeof(int));
+ dec_local(mdev);
+ }
+ }
+ *tl++ = TT_END; /* Close the tag list */
+
+ return (int)((char *)tl - (char *)reply->tag_list);
+}
+
+STATIC int drbd_nl_get_uuids(struct drbd_conf *mdev, struct drbd_nl_cfg_req *nlp,
+ struct drbd_nl_cfg_reply *reply)
+{
+ unsigned short *tl;
+
+ tl = reply->tag_list;
+
+ if (inc_local(mdev)) {
+ /* This is a hand crafted add tag ;) */
+ *tl++ = T_uuids;
+ *tl++ = UUID_SIZE*sizeof(u64);
+ memcpy(tl, mdev->bc->md.uuid, UUID_SIZE*sizeof(u64));
+ tl = (unsigned short *)((char *)tl + UUID_SIZE*sizeof(u64));
+ *tl++ = T_uuids_flags;
+ *tl++ = sizeof(int);
+ memcpy(tl, &mdev->bc->md.flags, sizeof(int));
+ tl = (unsigned short *)((char *)tl + sizeof(int));
+ dec_local(mdev);
+ }
+ *tl++ = TT_END; /* Close the tag list */
+
+ return (int)((char *)tl - (char *)reply->tag_list);
+}
+
+/**
+ * drbd_nl_get_timeout_flag:
+ * Used by drbdsetup to find out which timeout value to use.
+ */
+STATIC int drbd_nl_get_timeout_flag(struct drbd_conf *mdev, struct drbd_nl_cfg_req *nlp,
+ struct drbd_nl_cfg_reply *reply)
+{
+ unsigned short *tl;
+ char rv;
+
+ tl = reply->tag_list;
+
+ rv = mdev->state.pdsk == Outdated ? UT_PeerOutdated :
+ test_bit(USE_DEGR_WFC_T, &mdev->flags) ? UT_Degraded : UT_Default;
+
+ /* This is a hand crafted add tag ;) */
+ *tl++ = T_use_degraded;
+ *tl++ = sizeof(char);
+ *((char *)tl) = rv;
+ tl = (unsigned short *)((char *)tl + sizeof(char));
+ *tl++ = TT_END;
+
+ return (int)((char *)tl - (char *)reply->tag_list);
+}
+
+STATIC int drbd_nl_start_ov(struct drbd_conf *mdev, struct drbd_nl_cfg_req *nlp,
+ struct drbd_nl_cfg_reply *reply)
+{
+ reply->ret_code = drbd_request_state(mdev,NS(conn,VerifyS));
+
+ return 0;
+}
+
+
+STATIC int drbd_nl_new_c_uuid(struct drbd_conf *mdev, struct drbd_nl_cfg_req *nlp,
+ struct drbd_nl_cfg_reply *reply)
+{
+ int retcode = NoError;
+ int err;
+
+ struct new_c_uuid args;
+
+ memset(&args, 0, sizeof(struct new_c_uuid));
+ if (!new_c_uuid_from_tags(mdev, nlp->tag_list, &args)) {
+ reply->ret_code = UnknownMandatoryTag;
+ return 0;
+ }
+
+ mutex_lock(&mdev->state_mutex); /* Protects us against serialized state changes. */
+
+ if (mdev->state.conn >= Connected) {
+ retcode = MayNotBeConnected;
+ goto out;
+ }
+
+ if (!inc_local(mdev)) {
+ retcode = HaveNoDiskConfig;
+ goto out;
+ }
+
+ drbd_uuid_set(mdev, Bitmap, 0); /* Rotate Bitmap to History 1, etc... */
+ drbd_uuid_new_current(mdev); /* New current, previous to Bitmap */
+
+ if (args.clear_bm) {
+ err = drbd_bitmap_io(mdev, &drbd_bmio_clear_n_write, "clear_n_write from new_c_uuid");
+ if (err) {
+ ERR("Writing bitmap failed with %d\n",err);
+ retcode = MDIOError;
+ }
+ }
+
+ drbd_md_sync(mdev);
+ dec_local(mdev);
+out:
+ mutex_unlock(&mdev->state_mutex);
+
+ reply->ret_code = retcode;
+ return 0;
+}
+
+STATIC struct drbd_conf *ensure_mdev(struct drbd_nl_cfg_req *nlp)
+{
+ struct drbd_conf *mdev;
+
+ if (nlp->drbd_minor >= minor_count)
+ return NULL;
+
+ mdev = minor_to_mdev(nlp->drbd_minor);
+
+ if (!mdev && (nlp->flags & DRBD_NL_CREATE_DEVICE)) {
+ struct gendisk *disk = NULL;
+ mdev = drbd_new_device(nlp->drbd_minor);
+
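+		/* Two configuration requests may race to create the same
+		 * minor. The one that installs its mdev into minor_table
+		 * under the lock wins and registers the gendisk; the loser
+		 * frees its allocation and re-reads the table below. */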
+ spin_lock_irq(&drbd_pp_lock);
+ if (minor_table[nlp->drbd_minor] == NULL) {
+ minor_table[nlp->drbd_minor] = mdev;
+ disk = mdev->vdisk;
+ mdev = NULL;
+ } /* else: we lost the race */
+ spin_unlock_irq(&drbd_pp_lock);
+
+ if (disk) /* we won the race above */
+ /* in case we ever add a drbd_delete_device(),
+ * don't forget the del_gendisk! */
+ add_disk(disk);
+ else /* we lost the race above */
+ drbd_free_mdev(mdev);
+
+ mdev = minor_to_mdev(nlp->drbd_minor);
+ }
+
+ return mdev;
+}
+
+struct cn_handler_struct {
+ int (*function)(struct drbd_conf *,
+ struct drbd_nl_cfg_req *,
+ struct drbd_nl_cfg_reply *);
+ int reply_body_size;
+};
+
+static struct cn_handler_struct cnd_table[] = {
+ [ P_primary ] = { &drbd_nl_primary, 0 },
+ [ P_secondary ] = { &drbd_nl_secondary, 0 },
+ [ P_disk_conf ] = { &drbd_nl_disk_conf, 0 },
+ [ P_detach ] = { &drbd_nl_detach, 0 },
+ [ P_net_conf ] = { &drbd_nl_net_conf, 0 },
+ [ P_disconnect ] = { &drbd_nl_disconnect, 0 },
+ [ P_resize ] = { &drbd_nl_resize, 0 },
+ [ P_syncer_conf ] = { &drbd_nl_syncer_conf, 0 },
+ [ P_invalidate ] = { &drbd_nl_invalidate, 0 },
+ [ P_invalidate_peer ] = { &drbd_nl_invalidate_peer, 0 },
+ [ P_pause_sync ] = { &drbd_nl_pause_sync, 0 },
+ [ P_resume_sync ] = { &drbd_nl_resume_sync, 0 },
+ [ P_suspend_io ] = { &drbd_nl_suspend_io, 0 },
+ [ P_resume_io ] = { &drbd_nl_resume_io, 0 },
+ [ P_outdate ] = { &drbd_nl_outdate, 0 },
+ [ P_get_config ] = { &drbd_nl_get_config,
+ sizeof(struct syncer_conf_tag_len_struct) +
+ sizeof(struct disk_conf_tag_len_struct) +
+ sizeof(struct net_conf_tag_len_struct) },
+ [ P_get_state ] = { &drbd_nl_get_state,
+ sizeof(struct get_state_tag_len_struct) +
+ sizeof(struct sync_progress_tag_len_struct) },
+ [ P_get_uuids ] = { &drbd_nl_get_uuids,
+ sizeof(struct get_uuids_tag_len_struct) },
+ [ P_get_timeout_flag ] = { &drbd_nl_get_timeout_flag,
+ sizeof(struct get_timeout_flag_tag_len_struct)},
+ [ P_start_ov ] = { &drbd_nl_start_ov, 0 },
+ [ P_new_c_uuid ] = { &drbd_nl_new_c_uuid, 0 },
+};
+
+STATIC void drbd_connector_callback(void *data)
+{
+ struct cn_msg *req = data;
+ struct drbd_nl_cfg_req *nlp = (struct drbd_nl_cfg_req *)req->data;
+ struct cn_handler_struct *cm;
+ struct cn_msg *cn_reply;
+ struct drbd_nl_cfg_reply *reply;
+ struct drbd_conf *mdev;
+ int retcode, rr;
+ int reply_size = sizeof(struct cn_msg)
+ + sizeof(struct drbd_nl_cfg_reply)
+ + sizeof(short int);
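+	/* room for the connector header, the fixed reply and the trailing
+	 * TT_END short; the per-packet reply body size is added below */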
+
+ if (!try_module_get(THIS_MODULE)) {
+ printk(KERN_ERR "drbd: try_module_get() failed!\n");
+ return;
+ }
+
+ mdev = ensure_mdev(nlp);
+ if (!mdev) {
+ retcode = MinorNotKnown;
+ goto fail;
+ }
+
+ TRACE(TraceTypeNl, TraceLvlSummary, nl_trace_packet(data););
+
+ if (nlp->packet_type >= P_nl_after_last_packet) {
+ retcode = UnknownNetLinkPacket;
+ goto fail;
+ }
+
+ cm = cnd_table + nlp->packet_type;
+
+ /* This may happen if packet number is 0: */
+ if (cm->function == NULL) {
+ retcode = UnknownNetLinkPacket;
+ goto fail;
+ }
+
+ reply_size += cm->reply_body_size;
+
+ cn_reply = kmalloc(reply_size, GFP_KERNEL);
+ if (!cn_reply) {
+ retcode = KMallocFailed;
+ goto fail;
+ }
+ reply = (struct drbd_nl_cfg_reply *) cn_reply->data;
+
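+	/* Handlers that return a tag payload echo the request's packet
+	 * type; plain commands are marked with the out-of-range type, as
+	 * they carry only a return code. */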
+ reply->packet_type =
+ cm->reply_body_size ? nlp->packet_type : P_nl_after_last_packet;
+ reply->minor = nlp->drbd_minor;
+	reply->ret_code = NoError; /* Might be modified by cm->function. */
+	/* reply->tag_list: might be modified by cm->function. */
+
+ rr = cm->function(mdev, nlp, reply);
+
+ cn_reply->id = req->id;
+ cn_reply->seq = req->seq;
+ cn_reply->ack = req->ack + 1;
+ cn_reply->len = sizeof(struct drbd_nl_cfg_reply) + rr;
+ cn_reply->flags = 0;
+
+ TRACE(TraceTypeNl, TraceLvlSummary, nl_trace_reply(cn_reply););
+
+ rr = cn_netlink_send(cn_reply, CN_IDX_DRBD, GFP_KERNEL);
+ if (rr && rr != -ESRCH)
+ printk(KERN_INFO "drbd: cn_netlink_send()=%d\n", rr);
+
+ kfree(cn_reply);
+ module_put(THIS_MODULE);
+ return;
+ fail:
+ drbd_nl_send_reply(req, retcode);
+ module_put(THIS_MODULE);
+}
+
+static atomic_t drbd_nl_seq = ATOMIC_INIT(2); /* two. */
+
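+/* The tag list is a flat TLV stream of unsigned shorts:
+ *   [tag][length][<length> bytes of payload] ... [TT_END]
+ * The helpers below append one entry each and return the advanced
+ * write position. */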
+static inline unsigned short *
+__tl_add_blob(unsigned short *tl, enum drbd_tags tag, const void *data,
+ int len, int nul_terminated)
+{
+ int l = tag_descriptions[tag_number(tag)].max_len;
+ l = (len < l) ? len : l;
+ *tl++ = tag;
+ *tl++ = len;
+ memcpy(tl, data, len);
+ /* TODO
+ * maybe we need to add some padding to the data stream.
+ * otherwise we may get strange effects on architectures
+ * that require certain data types to be strictly aligned,
+ * because now the next "unsigned short" may be misaligned. */
+ tl = (unsigned short*)((char*)tl + len);
+ if (nul_terminated)
+ *((char*)tl - 1) = 0;
+ return tl;
+}
+
+static inline unsigned short *
+tl_add_blob(unsigned short *tl, enum drbd_tags tag, const void *data, int len)
+{
+ return __tl_add_blob(tl, tag, data, len, 0);
+}
+
+static inline unsigned short *
+tl_add_str(unsigned short *tl, enum drbd_tags tag, const char *str)
+{
+ return __tl_add_blob(tl, tag, str, strlen(str)+1, 0);
+}
+
+static inline unsigned short *
+tl_add_int(unsigned short *tl, enum drbd_tags tag, const void *val)
+{
+ switch(tag_type(tag)) {
+ case TT_INTEGER:
+ *tl++ = tag;
+ *tl++ = sizeof(int);
+ *(int*)tl = *(int*)val;
+ tl = (unsigned short*)((char*)tl+sizeof(int));
+ break;
+ case TT_INT64:
+ *tl++ = tag;
+ *tl++ = sizeof(u64);
+ *(u64*)tl = *(u64*)val;
+ tl = (unsigned short*)((char*)tl+sizeof(u64));
+ break;
+ default:
+ /* someone did something stupid. */
+ ;
+ }
+ return tl;
+}
+
+void drbd_bcast_state(struct drbd_conf *mdev, union drbd_state_t state)
+{
+ char buffer[sizeof(struct cn_msg)+
+ sizeof(struct drbd_nl_cfg_reply)+
+ sizeof(struct get_state_tag_len_struct)+
+ sizeof(short int)];
+ struct cn_msg *cn_reply = (struct cn_msg *) buffer;
+ struct drbd_nl_cfg_reply *reply =
+ (struct drbd_nl_cfg_reply *)cn_reply->data;
+ unsigned short *tl = reply->tag_list;
+
+ /* drbd_WARN("drbd_bcast_state() got called\n"); */
+
+ tl = get_state_to_tags(mdev, (struct get_state *)&state, tl);
+ *tl++ = TT_END; /* Close the tag list */
+
+ cn_reply->id.idx = CN_IDX_DRBD;
+ cn_reply->id.val = CN_VAL_DRBD;
+
+ cn_reply->seq = atomic_add_return(1, &drbd_nl_seq);
+ cn_reply->ack = 0; /* not used here. */
+ cn_reply->len = sizeof(struct drbd_nl_cfg_reply) +
+ (int)((char *)tl - (char *)reply->tag_list);
+ cn_reply->flags = 0;
+
+ reply->packet_type = P_get_state;
+ reply->minor = mdev_to_minor(mdev);
+ reply->ret_code = NoError;
+
+ TRACE(TraceTypeNl, TraceLvlSummary, nl_trace_reply(cn_reply););
+
+ cn_netlink_send(cn_reply, CN_IDX_DRBD, GFP_KERNEL);
+}
+
+void drbd_bcast_ev_helper(struct drbd_conf *mdev, char *helper_name)
+{
+ char buffer[sizeof(struct cn_msg)+
+ sizeof(struct drbd_nl_cfg_reply)+
+ sizeof(struct call_helper_tag_len_struct)+
+ sizeof(short int)];
+ struct cn_msg *cn_reply = (struct cn_msg *) buffer;
+ struct drbd_nl_cfg_reply *reply =
+ (struct drbd_nl_cfg_reply *)cn_reply->data;
+ unsigned short *tl = reply->tag_list;
+ int str_len;
+
+ /* drbd_WARN("drbd_bcast_state() got called\n"); */
+
+ str_len = strlen(helper_name)+1;
+ *tl++ = T_helper;
+ *tl++ = str_len;
+ memcpy(tl, helper_name, str_len);
+ tl = (unsigned short *)((char *)tl + str_len);
+ *tl++ = TT_END; /* Close the tag list */
+
+ cn_reply->id.idx = CN_IDX_DRBD;
+ cn_reply->id.val = CN_VAL_DRBD;
+
+ cn_reply->seq = atomic_add_return(1, &drbd_nl_seq);
+ cn_reply->ack = 0; /* not used here. */
+ cn_reply->len = sizeof(struct drbd_nl_cfg_reply) +
+ (int)((char *)tl - (char *)reply->tag_list);
+ cn_reply->flags = 0;
+
+ reply->packet_type = P_call_helper;
+ reply->minor = mdev_to_minor(mdev);
+ reply->ret_code = NoError;
+
+ TRACE(TraceTypeNl, TraceLvlSummary, nl_trace_reply(cn_reply););
+
+ cn_netlink_send(cn_reply, CN_IDX_DRBD, GFP_KERNEL);
+}
+
+void drbd_bcast_ee(struct drbd_conf *mdev,
+ const char *reason, const int dgs,
+ const char* seen_hash, const char* calc_hash,
+ const struct Tl_epoch_entry* e)
+{
+ struct cn_msg *cn_reply;
+ struct drbd_nl_cfg_reply *reply;
+ struct bio_vec *bvec;
+ unsigned short *tl;
+ int i;
+
+ if (!e)
+ return;
+ if (!reason || !reason[0])
+ return;
+
+	/* apparently we have to memcpy twice, first to prepare the data for the
+ * struct cn_msg, then within cn_netlink_send from the cn_msg to the
+ * netlink skb. */
+ cn_reply = kmalloc(
+ sizeof(struct cn_msg)+
+ sizeof(struct drbd_nl_cfg_reply)+
+ sizeof(struct dump_ee_tag_len_struct)+
+ sizeof(short int)
+ , GFP_KERNEL);
+
+ if (!cn_reply) {
+ ERR("could not kmalloc buffer for drbd_bcast_ee, sector %llu, size %u\n",
+ (unsigned long long)e->sector, e->size);
+ return;
+ }
+
+ reply = (struct drbd_nl_cfg_reply*)cn_reply->data;
+ tl = reply->tag_list;
+
+ tl = tl_add_str(tl, T_dump_ee_reason, reason);
+ tl = tl_add_blob(tl, T_seen_digest, seen_hash, dgs);
+ tl = tl_add_blob(tl, T_calc_digest, calc_hash, dgs);
+ tl = tl_add_int(tl, T_ee_sector, &e->sector);
+ tl = tl_add_int(tl, T_ee_block_id, &e->block_id);
+
+ *tl++ = T_ee_data;
+ *tl++ = e->size;
+
+ __bio_for_each_segment(bvec, e->private_bio, i, 0) {
+ void *d = kmap(bvec->bv_page);
+ memcpy(tl, d + bvec->bv_offset, bvec->bv_len);
+ kunmap(bvec->bv_page);
+		tl = (unsigned short *)((char *)tl + bvec->bv_len);
+ }
+ *tl++ = TT_END; /* Close the tag list */
+
+ cn_reply->id.idx = CN_IDX_DRBD;
+ cn_reply->id.val = CN_VAL_DRBD;
+
+ cn_reply->seq = atomic_add_return(1,&drbd_nl_seq);
+	cn_reply->ack = 0; /* not used here. */
+ cn_reply->len = sizeof(struct drbd_nl_cfg_reply) +
+ (int)((char*)tl - (char*)reply->tag_list);
+ cn_reply->flags = 0;
+
+ reply->packet_type = P_dump_ee;
+ reply->minor = mdev_to_minor(mdev);
+ reply->ret_code = NoError;
+
+ TRACE(TraceTypeNl, TraceLvlSummary, nl_trace_reply(cn_reply););
+
+ cn_netlink_send(cn_reply, CN_IDX_DRBD, GFP_KERNEL);
+ kfree(cn_reply);
+}
+
+void drbd_bcast_sync_progress(struct drbd_conf *mdev)
+{
+ char buffer[sizeof(struct cn_msg)+
+ sizeof(struct drbd_nl_cfg_reply)+
+ sizeof(struct sync_progress_tag_len_struct)+
+ sizeof(short int)];
+ struct cn_msg *cn_reply = (struct cn_msg *) buffer;
+ struct drbd_nl_cfg_reply *reply =
+ (struct drbd_nl_cfg_reply *)cn_reply->data;
+ unsigned short *tl = reply->tag_list;
+ unsigned long rs_left;
+ unsigned int res;
+
+ /* no local ref, no bitmap, no syncer progress, no broadcast. */
+ if (!inc_local(mdev))
+ return;
+ drbd_get_syncer_progress(mdev, &rs_left, &res);
+ dec_local(mdev);
+
+ *tl++ = T_sync_progress;
+ *tl++ = sizeof(int);
+ memcpy(tl, &res, sizeof(int));
+ tl = (unsigned short *)((char *)tl + sizeof(int));
+ *tl++ = TT_END; /* Close the tag list */
+
+ cn_reply->id.idx = CN_IDX_DRBD;
+ cn_reply->id.val = CN_VAL_DRBD;
+
+ cn_reply->seq = atomic_add_return(1, &drbd_nl_seq);
+ cn_reply->ack = 0; /* not used here. */
+ cn_reply->len = sizeof(struct drbd_nl_cfg_reply) +
+ (int)((char *)tl - (char *)reply->tag_list);
+ cn_reply->flags = 0;
+
+ reply->packet_type = P_sync_progress;
+ reply->minor = mdev_to_minor(mdev);
+ reply->ret_code = NoError;
+
+ TRACE(TraceTypeNl, TraceLvlSummary, nl_trace_reply(cn_reply););
+
+ cn_netlink_send(cn_reply, CN_IDX_DRBD, GFP_KERNEL);
+}
+
+int __init drbd_nl_init(void)
+{
+ static struct cb_id cn_id_drbd;
+ int err, try=10;
+
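+	/* the chosen connector index may already be in use; step cn_idx
+	 * and retry a few times before giving up */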
+ cn_id_drbd.val = CN_VAL_DRBD;
+ do {
+ cn_id_drbd.idx = cn_idx;
+ err = cn_add_callback(&cn_id_drbd, "cn_drbd", &drbd_connector_callback);
+ if (!err)
+ break;
+ cn_idx = (cn_idx + CN_IDX_STEP);
+ } while (try--);
+
+ if (err) {
+ printk(KERN_ERR "drbd: cn_drbd failed to register\n");
+ return err;
+ }
+
+ return 0;
+}
+
+void drbd_nl_cleanup(void)
+{
+ static struct cb_id cn_id_drbd;
+
+ cn_id_drbd.idx = cn_idx;
+ cn_id_drbd.val = CN_VAL_DRBD;
+
+ cn_del_callback(&cn_id_drbd);
+}
+
+void drbd_nl_send_reply(struct cn_msg *req, int ret_code)
+{
+ char buffer[sizeof(struct cn_msg)+sizeof(struct drbd_nl_cfg_reply)];
+ struct cn_msg *cn_reply = (struct cn_msg *) buffer;
+ struct drbd_nl_cfg_reply *reply =
+ (struct drbd_nl_cfg_reply *)cn_reply->data;
+ int rr;
+
+ cn_reply->id = req->id;
+
+ cn_reply->seq = req->seq;
+ cn_reply->ack = req->ack + 1;
+ cn_reply->len = sizeof(struct drbd_nl_cfg_reply);
+ cn_reply->flags = 0;
+
+ reply->minor = ((struct drbd_nl_cfg_req *)req->data)->drbd_minor;
+ reply->ret_code = ret_code;
+
+ TRACE(TraceTypeNl, TraceLvlSummary, nl_trace_reply(cn_reply););
+
+ rr = cn_netlink_send(cn_reply, CN_IDX_DRBD, GFP_KERNEL);
+ if (rr && rr != -ESRCH)
+ printk(KERN_INFO "drbd: cn_netlink_send()=%d\n", rr);
+}
+
The DRBD state engine, and lots of other stuff that does not have its own
source file.
Signed-off-by: Philipp Reisner <[email protected]>
Signed-off-by: Lars Ellenberg <[email protected]>
---
diff -uNrp linux-2.6.29/drivers/block/drbd/drbd_main.c linux-2.6.29-drbd/drivers/block/drbd/drbd_main.c
--- linux-2.6.29/drivers/block/drbd/drbd_main.c 1970-01-01 01:00:00.000000000 +0100
+++ linux-2.6.29-drbd/drivers/block/drbd/drbd_main.c 2009-03-30 16:34:28.351153000 +0200
@@ -0,0 +1,4034 @@
+/*
+ drbd.c
+
+ This file is part of DRBD by Philipp Reisner and Lars Ellenberg.
+
+ Copyright (C) 2001-2008, LINBIT Information Technologies GmbH.
+ Copyright (C) 1999-2008, Philipp Reisner <[email protected]>.
+ Copyright (C) 2002-2008, Lars Ellenberg <[email protected]>.
+
+ drbd is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2, or (at your option)
+ any later version.
+
+ drbd is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with drbd; see the file COPYING. If not, write to
+ the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA.
+
+ */
+
+#include <linux/autoconf.h>
+#include <linux/module.h>
+#include <linux/version.h>
+
+#include <asm/uaccess.h>
+#include <asm/types.h>
+#include <net/sock.h>
+#include <linux/ctype.h>
+#include <linux/smp_lock.h>
+#include <linux/fs.h>
+#include <linux/file.h>
+#include <linux/proc_fs.h>
+#include <linux/init.h>
+#include <linux/mm.h>
+#include <linux/drbd_config.h>
+#include <linux/memcontrol.h>
+#include <linux/mm_inline.h>
+#include <linux/slab.h>
+#include <linux/random.h>
+#include <linux/reboot.h>
+#include <linux/notifier.h>
+#include <linux/kthread.h>
+
+#define __KERNEL_SYSCALLS__
+#include <linux/unistd.h>
+#include <linux/vmalloc.h>
+
+#include <linux/drbd.h>
+#include <linux/drbd_limits.h>
+#include "drbd_int.h"
+#include "drbd_req.h" /* only for _req_mod in tl_release and tl_clear */
+
+#include "drbd_vli.h"
+
+struct after_state_chg_work {
+ struct drbd_work w;
+ union drbd_state_t os;
+ union drbd_state_t ns;
+ enum chg_state_flags flags;
+ struct completion *done;
+};
+
+int drbdd_init(struct Drbd_thread *);
+int drbd_worker(struct Drbd_thread *);
+int drbd_asender(struct Drbd_thread *);
+
+int drbd_init(void);
+static int drbd_open(struct block_device *bdev, fmode_t mode);
+static int drbd_release(struct gendisk *gd, fmode_t mode);
+STATIC int w_after_state_ch(struct drbd_conf *mdev, struct drbd_work *w, int unused);
+STATIC void after_state_ch(struct drbd_conf *mdev, union drbd_state_t os,
+ union drbd_state_t ns, enum chg_state_flags flags);
+STATIC int w_md_sync(struct drbd_conf *mdev, struct drbd_work *w, int unused);
+STATIC void md_sync_timer_fn(unsigned long data);
+STATIC int w_bitmap_io(struct drbd_conf *mdev, struct drbd_work *w, int unused);
+
+MODULE_AUTHOR("Philipp Reisner <[email protected]>, "
+ "Lars Ellenberg <[email protected]>");
+MODULE_DESCRIPTION("drbd - Distributed Replicated Block Device v" REL_VERSION);
+MODULE_LICENSE("GPL");
+MODULE_PARM_DESC(minor_count, "Maximum number of drbd devices (1-255)");
+MODULE_ALIAS_BLOCKDEV_MAJOR(DRBD_MAJOR);
+
+#include <linux/moduleparam.h>
+/* allow_open_on_secondary */
+MODULE_PARM_DESC(allow_oos, "DONT USE!");
+/* thanks to these macros, if compiled into the kernel (not-module),
+ * this becomes the boot parameter drbd.minor_count */
+module_param(minor_count, uint, 0444);
+module_param(allow_oos, bool, 0);
+module_param(cn_idx, uint, 0444);
+
+#ifdef DRBD_ENABLE_FAULTS
+int enable_faults;
+int fault_rate;
+static int fault_count;
+int fault_devs;
+/* bitmap of enabled faults */
+module_param(enable_faults, int, 0664);
+/* fault rate % value - applies to all enabled faults */
+module_param(fault_rate, int, 0664);
+/* count of faults inserted */
+module_param(fault_count, int, 0664);
+/* bitmap of devices to insert faults on */
+module_param(fault_devs, int, 0644);
+#endif
+
+/* module parameter, defined */
+unsigned int minor_count = 32;
+int allow_oos;
+unsigned int cn_idx = CN_IDX_DRBD;
+
+#ifdef ENABLE_DYNAMIC_TRACE
+int trace_type; /* Bitmap of trace types to enable */
+int trace_level; /* Current trace level */
+int trace_devs; /* Bitmap of devices to trace */
+int proc_details; /* Detail level in proc drbd*/
+
+module_param(trace_level, int, 0644);
+module_param(trace_type, int, 0644);
+module_param(trace_devs, int, 0644);
+module_param(proc_details, int, 0644);
+#endif
+
+/* Module parameter for setting the user mode helper program
+ * to run. Default is /sbin/drbdadm */
+char usermode_helper[80] = "/sbin/drbdadm";
+
+module_param_string(usermode_helper, usermode_helper, sizeof(usermode_helper), 0644);
+
+/* in 2.6.x, our device mapping and config info contains our virtual gendisks
+ * as member "struct gendisk *vdisk;"
+ */
+struct drbd_conf **minor_table;
+
+struct kmem_cache *drbd_request_cache;
+struct kmem_cache *drbd_ee_cache;
+mempool_t *drbd_request_mempool;
+mempool_t *drbd_ee_mempool;
+
+/* I do not use a standard mempool, because:
+ 1) I want to hand out the preallocated objects first.
+ 2) I want to be able to interrupt sleeping allocation with a signal.
+   Note: This is a singly linked list; the next pointer is the private
+ member of struct page.
+ */
+struct page *drbd_pp_pool;
+spinlock_t drbd_pp_lock;
+int drbd_pp_vacant;
+wait_queue_head_t drbd_pp_wait;
+
+DEFINE_RATELIMIT_STATE(drbd_ratelimit_state, 5 * HZ, 5);
+
+STATIC struct block_device_operations drbd_ops = {
+ .owner = THIS_MODULE,
+ .open = drbd_open,
+ .release = drbd_release,
+};
+
+#define ARRY_SIZE(A) (sizeof(A)/sizeof(A[0]))
+
+#ifdef __CHECKER__
+/* When checking with sparse, and this is an inline function, sparse will
+   give tons of false positives. When this is a real function, sparse works.
+ */
+int _inc_local_if_state(struct drbd_conf *mdev, enum drbd_disk_state mins)
+{
+ int io_allowed;
+
+ atomic_inc(&mdev->local_cnt);
+ io_allowed = (mdev->state.disk >= mins);
+ if (!io_allowed) {
+ if (atomic_dec_and_test(&mdev->local_cnt))
+ wake_up(&mdev->misc_wait);
+ }
+ return io_allowed;
+}
+
+#endif
+
+/************************* The transfer log start */
+STATIC int tl_init(struct drbd_conf *mdev)
+{
+ struct drbd_barrier *b;
+
+ b = kmalloc(sizeof(struct drbd_barrier), GFP_KERNEL);
+ if (!b)
+ return 0;
+ INIT_LIST_HEAD(&b->requests);
+ INIT_LIST_HEAD(&b->w.list);
+ b->next = NULL;
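+	/* arbitrary nonzero initial barrier number; 0 is reserved,
+	 * see _tl_add_barrier() */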
+ b->br_number = 4711;
+ b->n_req = 0;
+ b->w.cb = NULL; /* if this is != NULL, we need to dec_ap_pending in tl_clear */
+
+ mdev->oldest_barrier = b;
+ mdev->newest_barrier = b;
+ INIT_LIST_HEAD(&mdev->out_of_sequence_requests);
+
+ mdev->tl_hash = NULL;
+ mdev->tl_hash_s = 0;
+
+ return 1;
+}
+
+STATIC void tl_cleanup(struct drbd_conf *mdev)
+{
+ D_ASSERT(mdev->oldest_barrier == mdev->newest_barrier);
+ D_ASSERT(list_empty(&mdev->out_of_sequence_requests));
+ kfree(mdev->oldest_barrier);
+ mdev->oldest_barrier = NULL;
+ kfree(mdev->unused_spare_barrier);
+ mdev->unused_spare_barrier = NULL;
+ kfree(mdev->tl_hash);
+ mdev->tl_hash = NULL;
+ mdev->tl_hash_s = 0;
+}
+
+/**
+ * _tl_add_barrier: Adds a barrier to the TL.
+ */
+void _tl_add_barrier(struct drbd_conf *mdev, struct drbd_barrier *new)
+{
+ struct drbd_barrier *newest_before;
+
+ INIT_LIST_HEAD(&new->requests);
+ INIT_LIST_HEAD(&new->w.list);
+ new->w.cb = NULL; /* if this is != NULL, we need to dec_ap_pending in tl_clear */
+ new->next = NULL;
+ new->n_req = 0;
+
+ newest_before = mdev->newest_barrier;
+ /* never send a barrier number == 0, because that is special-cased
+ * when using TCQ for our write ordering code */
+ new->br_number = (newest_before->br_number+1) ?: 1;
+ if (mdev->newest_barrier != new) {
+ mdev->newest_barrier->next = new;
+ mdev->newest_barrier = new;
+ }
+}
+
+/* when we receive a barrier ack */
+void tl_release(struct drbd_conf *mdev, unsigned int barrier_nr,
+ unsigned int set_size)
+{
+ struct drbd_barrier *b, *nob; /* next old barrier */
+ struct list_head *le, *tle;
+ struct drbd_request *r;
+
+ spin_lock_irq(&mdev->req_lock);
+
+ b = mdev->oldest_barrier;
+
+ /* first some paranoia code */
+ if (b == NULL) {
+ ERR("BAD! BarrierAck #%u received, but no epoch in tl!?\n",
+ barrier_nr);
+ goto bail;
+ }
+ if (b->br_number != barrier_nr) {
+ ERR("BAD! BarrierAck #%u received, expected #%u!\n",
+ barrier_nr, b->br_number);
+ goto bail;
+ }
+ if (b->n_req != set_size) {
+ ERR("BAD! BarrierAck #%u received with n_req=%u, expected n_req=%u!\n",
+ barrier_nr, set_size, b->n_req);
+ goto bail;
+ }
+
+ /* Clean up list of requests processed during current epoch */
+ list_for_each_safe(le, tle, &b->requests) {
+ r = list_entry(le, struct drbd_request, tl_requests);
+ _req_mod(r, barrier_acked, 0);
+ }
+ /* There could be requests on the list waiting for completion
+ of the write to the local disk. To avoid corruptions of
+	   slab's data structures we have to remove the list's head.
+
+ Also there could have been a barrier ack out of sequence, overtaking
+	   the write acks - which would be a bug and violate write ordering.
+ To not deadlock in case we lose connection while such requests are
+ still pending, we need some way to find them for the
+	   _req_mod(connection_lost_while_pending).
+
+ These have been list_move'd to the out_of_sequence_requests list in
+ _req_mod(, barrier_acked,) above.
+ */
+ list_del_init(&b->requests);
+
+ nob = b->next;
+ if (test_and_clear_bit(CREATE_BARRIER, &mdev->flags)) {
+ _tl_add_barrier(mdev, b);
+ if (nob)
+ mdev->oldest_barrier = nob;
+ /* if nob == NULL b was the only barrier, and becomes the new
+		   barrier. Therefore mdev->oldest_barrier already points to b */
+ } else {
+ D_ASSERT(nob != NULL);
+ mdev->oldest_barrier = nob;
+ kfree(b);
+ }
+
+ spin_unlock_irq(&mdev->req_lock);
+ dec_ap_pending(mdev);
+
+ return;
+
+bail:
+ spin_unlock_irq(&mdev->req_lock);
+ drbd_force_state(mdev, NS(conn, ProtocolError));
+}
+
+
+/* called by drbd_disconnect (exiting receiver thread)
+ * or from some after_state_ch */
+void tl_clear(struct drbd_conf *mdev)
+{
+ struct drbd_barrier *b, *tmp;
+ struct list_head *le, *tle;
+ struct drbd_request *r;
+ int new_initial_bnr = net_random();
+
+ spin_lock_irq(&mdev->req_lock);
+
+ b = mdev->oldest_barrier;
+ while (b) {
+ list_for_each_safe(le, tle, &b->requests) {
+ r = list_entry(le, struct drbd_request, tl_requests);
+ _req_mod(r, connection_lost_while_pending, 0);
+ }
+ tmp = b->next;
+
+ /* there could still be requests on that ring list,
+ * in case local io is still pending */
+ list_del(&b->requests);
+
+ /* dec_ap_pending corresponding to queue_barrier.
+ * the newest barrier may not have been queued yet,
+ * in which case w.cb is still NULL. */
+ if (b->w.cb != NULL)
+ dec_ap_pending(mdev);
+
+ if (b == mdev->newest_barrier) {
+ /* recycle, but reinit! */
+ D_ASSERT(tmp == NULL);
+ INIT_LIST_HEAD(&b->requests);
+ INIT_LIST_HEAD(&b->w.list);
+ b->w.cb = NULL;
+ b->br_number = new_initial_bnr;
+ b->n_req = 0;
+
+ mdev->oldest_barrier = b;
+ break;
+ }
+ kfree(b);
+ b = tmp;
+ }
+
+ /* we expect this list to be empty. */
+ D_ASSERT(list_empty(&mdev->out_of_sequence_requests));
+
+ /* but just in case, clean it up anyways! */
+ list_for_each_safe(le, tle, &mdev->out_of_sequence_requests) {
+ r = list_entry(le, struct drbd_request, tl_requests);
+ _req_mod(r, connection_lost_while_pending, 0);
+ }
+
+ /* ensure bit indicating barrier is required is clear */
+ clear_bit(CREATE_BARRIER, &mdev->flags);
+
+ spin_unlock_irq(&mdev->req_lock);
+}
+
+/**
+ * drbd_io_error: Handles the on_io_error setting; should be called in the
+ * unlikely(!drbd_bio_uptodate(e->bio)) case from kernel thread context.
+ * See also drbd_chk_io_error
+ *
+ * NOTE: we set ourselves FAILED here if on_io_error is Detach or Panic OR
+ * if the forcedetach flag is set. This flag is set when failures
+ * occur writing the meta data portion of the disk as they are
+ * not recoverable.
+ */
+int drbd_io_error(struct drbd_conf *mdev, int forcedetach)
+{
+ enum io_error_handler eh;
+ unsigned long flags;
+ int send;
+ int ok = 1;
+
+ eh = PassOn;
+ if (inc_local_if_state(mdev, Failed)) {
+ eh = mdev->bc->dc.on_io_error;
+ dec_local(mdev);
+ }
+
+ if (!forcedetach && eh == PassOn)
+ return 1;
+
+ spin_lock_irqsave(&mdev->req_lock, flags);
+ send = (mdev->state.disk == Failed);
+ if (send)
+ _drbd_set_state(_NS(mdev, disk, Diskless), ChgStateHard, NULL);
+ spin_unlock_irqrestore(&mdev->req_lock, flags);
+
+ if (!send)
+ return ok;
+
+ if (mdev->state.conn >= Connected) {
+ ok = drbd_send_state(mdev);
+ if (ok)
+ drbd_WARN("Notified peer that my disk is broken.\n");
+ else
+ ERR("Sending state in drbd_io_error() failed\n");
+ }
+
+ /* Make sure we try to flush meta-data to disk - we come
+ * in here because of a local disk error so it might fail
+ * but we still need to try -- both because the error might
+ * be in the data portion of the disk and because we need
+ * to ensure the md-sync-timer is stopped if running. */
+ drbd_md_sync(mdev);
+
+ /* Releasing the backing device is done in after_state_ch() */
+
+ if (eh == CallIOEHelper)
+ drbd_khelper(mdev, "local-io-error");
+
+ return ok;
+}
+
+/**
+ * cl_wide_st_chg:
+ * Returns TRUE if this state change should be performed as a cluster-wide
+ * transaction (e.g. promoting to Primary while connected). Of course it
+ * returns 0 as soon as the connection is lost.
+ */
+STATIC int cl_wide_st_chg(struct drbd_conf *mdev,
+ union drbd_state_t os, union drbd_state_t ns)
+{
+ return (os.conn >= Connected && ns.conn >= Connected &&
+ ((os.role != Primary && ns.role == Primary) ||
+ (os.conn != StartingSyncT && ns.conn == StartingSyncT) ||
+ (os.conn != StartingSyncS && ns.conn == StartingSyncS) ||
+ (os.disk != Diskless && ns.disk == Diskless))) ||
+ (os.conn >= Connected && ns.conn == Disconnecting) ||
+ (os.conn == Connected && ns.conn == VerifyS);
+}
+
+int drbd_change_state(struct drbd_conf *mdev, enum chg_state_flags f,
+ union drbd_state_t mask, union drbd_state_t val)
+{
+ unsigned long flags;
+ union drbd_state_t os, ns;
+ int rv;
+
+ spin_lock_irqsave(&mdev->req_lock, flags);
+ os = mdev->state;
+ ns.i = (os.i & ~mask.i) | val.i;
+ rv = _drbd_set_state(mdev, ns, f, NULL);
+ ns = mdev->state;
+ spin_unlock_irqrestore(&mdev->req_lock, flags);
+
+ return rv;
+}
+
+void drbd_force_state(struct drbd_conf *mdev,
+ union drbd_state_t mask, union drbd_state_t val)
+{
+ drbd_change_state(mdev, ChgStateHard, mask, val);
+}
+
+int is_valid_state(struct drbd_conf *mdev, union drbd_state_t ns);
+int is_valid_state_transition(struct drbd_conf *,
+ union drbd_state_t, union drbd_state_t);
+int drbd_send_state_req(struct drbd_conf *,
+ union drbd_state_t, union drbd_state_t);
+
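+/* wait_event() condition for drbd_req_state(): nonzero once the peer
+ * acked or rejected the cluster-wide state change, or once it turns out
+ * that no cluster-wide transaction is needed after all. */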
+STATIC enum set_st_err _req_st_cond(struct drbd_conf *mdev,
+ union drbd_state_t mask, union drbd_state_t val)
+{
+ union drbd_state_t os, ns;
+ unsigned long flags;
+ int rv;
+
+ if (test_and_clear_bit(CL_ST_CHG_SUCCESS, &mdev->flags))
+ return SS_CW_Success;
+
+ if (test_and_clear_bit(CL_ST_CHG_FAIL, &mdev->flags))
+ return SS_CW_FailedByPeer;
+
+ rv = 0;
+ spin_lock_irqsave(&mdev->req_lock, flags);
+ os = mdev->state;
+ ns.i = (os.i & ~mask.i) | val.i;
+ if (!cl_wide_st_chg(mdev, os, ns))
+ rv = SS_CW_NoNeed;
+ if (!rv) {
+ rv = is_valid_state(mdev, ns);
+ if (rv == SS_Success) {
+ rv = is_valid_state_transition(mdev, ns, os);
+ if (rv == SS_Success)
+ rv = 0; /* cont waiting, otherwise fail. */
+ }
+ }
+ spin_unlock_irqrestore(&mdev->req_lock, flags);
+
+ return rv;
+}
+
+/**
+ * drbd_req_state:
+ * This function is the most graceful way to change state. For some state
+ * transitions this function even does a cluster wide transaction.
+ * It has a cousin named drbd_request_state(), which is always verbose.
+ */
+STATIC int drbd_req_state(struct drbd_conf *mdev,
+ union drbd_state_t mask, union drbd_state_t val,
+ enum chg_state_flags f)
+{
+ struct completion done;
+ unsigned long flags;
+ union drbd_state_t os, ns;
+ int rv;
+
+ init_completion(&done);
+
+ if (f & ChgSerialize)
+ mutex_lock(&mdev->state_mutex);
+
+ spin_lock_irqsave(&mdev->req_lock, flags);
+ os = mdev->state;
+ ns.i = (os.i & ~mask.i) | val.i;
+
+ if (cl_wide_st_chg(mdev, os, ns)) {
+ rv = is_valid_state(mdev, ns);
+ if (rv == SS_Success)
+ rv = is_valid_state_transition(mdev, ns, os);
+ spin_unlock_irqrestore(&mdev->req_lock, flags);
+
+ if (rv < SS_Success) {
+ if (f & ChgStateVerbose)
+ print_st_err(mdev, os, ns, rv);
+ goto abort;
+ }
+
+ drbd_state_lock(mdev);
+ if (!drbd_send_state_req(mdev, mask, val)) {
+ drbd_state_unlock(mdev);
+ rv = SS_CW_FailedByPeer;
+ if (f & ChgStateVerbose)
+ print_st_err(mdev, os, ns, rv);
+ goto abort;
+ }
+
+ wait_event(mdev->state_wait,
+ (rv = _req_st_cond(mdev, mask, val)));
+
+ if (rv < SS_Success) {
+ /* nearly dead code. */
+ drbd_state_unlock(mdev);
+ if (f & ChgStateVerbose)
+ print_st_err(mdev, os, ns, rv);
+ goto abort;
+ }
+ spin_lock_irqsave(&mdev->req_lock, flags);
+ os = mdev->state;
+ ns.i = (os.i & ~mask.i) | val.i;
+ rv = _drbd_set_state(mdev, ns, f, &done);
+ drbd_state_unlock(mdev);
+ } else {
+ rv = _drbd_set_state(mdev, ns, f, &done);
+ }
+
+ spin_unlock_irqrestore(&mdev->req_lock, flags);
+
+ if (f & ChgWaitComplete && rv == SS_Success) {
+ D_ASSERT(current != mdev->worker.task);
+ wait_for_completion(&done);
+ }
+
+abort:
+ if (f & ChgSerialize)
+ mutex_unlock(&mdev->state_mutex);
+
+ return rv;
+}
+
+/**
+ * _drbd_request_state:
+ * This function is the most graceful way to change state. For some state
+ * transitions this function even does a cluster wide transaction.
+ * It has a cousin named drbd_request_state(), which is always verbose.
+ */
+int _drbd_request_state(struct drbd_conf *mdev, union drbd_state_t mask,
+ union drbd_state_t val, enum chg_state_flags f)
+{
+ int rv;
+
+ wait_event(mdev->state_wait,
+ (rv = drbd_req_state(mdev, mask, val, f)) != SS_InTransientState);
+
+ return rv;
+}
+
+STATIC void print_st(struct drbd_conf *mdev, char *name, union drbd_state_t ns)
+{
+ ERR(" %s = { cs:%s ro:%s/%s ds:%s/%s %c%c%c%c }\n",
+ name,
+ conns_to_name(ns.conn),
+ roles_to_name(ns.role),
+ roles_to_name(ns.peer),
+ disks_to_name(ns.disk),
+ disks_to_name(ns.pdsk),
+ ns.susp ? 's' : 'r',
+ ns.aftr_isp ? 'a' : '-',
+ ns.peer_isp ? 'p' : '-',
+ ns.user_isp ? 'u' : '-'
+ );
+}
+
+void print_st_err(struct drbd_conf *mdev,
+ union drbd_state_t os, union drbd_state_t ns, int err)
+{
+ if (err == SS_InTransientState)
+ return;
+ ERR("State change failed: %s\n", set_st_err_name(err));
+ print_st(mdev, " state", os);
+ print_st(mdev, "wanted", ns);
+}
+
+
+#define peers_to_name roles_to_name
+#define pdsks_to_name disks_to_name
+
+#define susps_to_name(A) ((A) ? "1" : "0")
+#define aftr_isps_to_name(A) ((A) ? "1" : "0")
+#define peer_isps_to_name(A) ((A) ? "1" : "0")
+#define user_isps_to_name(A) ((A) ? "1" : "0")
+
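+/* Print one state component if it changed, e.g. PSC(conn) appends
+ * "conn( Connected -> SyncSource ) " to the buffer. */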
+#define PSC(A) \
+ ({ if (ns.A != os.A) { \
+ pbp += sprintf(pbp, #A "( %s -> %s ) ", \
+ A##s_to_name(os.A), \
+ A##s_to_name(ns.A)); \
+ } })
+
+int is_valid_state(struct drbd_conf *mdev, union drbd_state_t ns)
+{
+ /* See drbd_state_sw_errors in drbd_strings.c */
+
+ enum fencing_policy fp;
+ int rv = SS_Success;
+
+ fp = DontCare;
+ if (inc_local(mdev)) {
+ fp = mdev->bc->dc.fencing;
+ dec_local(mdev);
+ }
+
+ if (inc_net(mdev)) {
+ if (!mdev->net_conf->two_primaries &&
+ ns.role == Primary && ns.peer == Primary)
+ rv = SS_TwoPrimaries;
+ dec_net(mdev);
+ }
+
+ if (rv <= 0)
+ /* already found a reason to abort */;
+ else if (ns.role == Secondary && mdev->open_cnt)
+ rv = SS_DeviceInUse;
+
+ else if (ns.role == Primary && ns.conn < Connected && ns.disk < UpToDate)
+ rv = SS_NoUpToDateDisk;
+
+ else if (fp >= Resource &&
+ ns.role == Primary && ns.conn < Connected && ns.pdsk >= DUnknown)
+ rv = SS_PrimaryNOP;
+
+ else if (ns.role == Primary && ns.disk <= Inconsistent && ns.pdsk <= Inconsistent)
+ rv = SS_NoUpToDateDisk;
+
+ else if (ns.conn > Connected && ns.disk < UpToDate && ns.pdsk < UpToDate)
+ rv = SS_BothInconsistent;
+
+ else if (ns.conn > Connected && (ns.disk == Diskless || ns.pdsk == Diskless))
+ rv = SS_SyncingDiskless;
+
+ else if ((ns.conn == Connected ||
+ ns.conn == WFBitMapS ||
+ ns.conn == SyncSource ||
+ ns.conn == PausedSyncS) &&
+ ns.disk == Outdated)
+ rv = SS_ConnectedOutdates;
+
+ else if ((ns.conn == VerifyS || ns.conn == VerifyT) &&
+ (mdev->sync_conf.verify_alg[0] == 0))
+ rv = SS_NoVerifyAlg;
+
+ else if ((ns.conn == VerifyS || ns.conn == VerifyT) &&
+ mdev->agreed_pro_version < 88)
+ rv = SS_NotSupported;
+
+ return rv;
+}
+
+int is_valid_state_transition(struct drbd_conf *mdev,
+ union drbd_state_t ns, union drbd_state_t os)
+{
+ int rv = SS_Success;
+
+ if ((ns.conn == StartingSyncT || ns.conn == StartingSyncS) &&
+ os.conn > Connected)
+ rv = SS_ResyncRunning;
+
+ if (ns.conn == Disconnecting && os.conn == StandAlone)
+ rv = SS_AlreadyStandAlone;
+
+ if (ns.disk > Attaching && os.disk == Diskless)
+ rv = SS_IsDiskLess;
+
+ if (ns.conn == WFConnection && os.conn < Unconnected)
+ rv = SS_NoNetConfig;
+
+ if (ns.disk == Outdated && os.disk < Outdated && os.disk != Attaching)
+ rv = SS_LowerThanOutdated;
+
+ if (ns.conn == Disconnecting && os.conn == Unconnected)
+ rv = SS_InTransientState;
+
+ if (ns.conn == os.conn && ns.conn == WFReportParams)
+ rv = SS_InTransientState;
+
+ if ((ns.conn == VerifyS || ns.conn == VerifyT) && os.conn < Connected)
+ rv = SS_NeedConnection;
+
+ if ((ns.conn == VerifyS || ns.conn == VerifyT) &&
+ ns.conn != os.conn && os.conn > Connected)
+ rv = SS_ResyncRunning;
+
+ if ((ns.conn == StartingSyncS || ns.conn == StartingSyncT) &&
+ os.conn < Connected)
+ rv = SS_NeedConnection;
+
+ return rv;
+}
+
+int __drbd_set_state(struct drbd_conf *mdev,
+ union drbd_state_t ns, enum chg_state_flags flags,
+ struct completion *done)
+{
+ union drbd_state_t os;
+ int rv = SS_Success;
+ int warn_sync_abort = 0;
+ enum fencing_policy fp;
+ struct after_state_chg_work *ascw;
+
+
+ os = mdev->state;
+
+ fp = DontCare;
+ if (inc_local(mdev)) {
+ fp = mdev->bc->dc.fencing;
+ dec_local(mdev);
+ }
+
+ /* Early state sanitising. */
+
+	/* Disallow network errors from configuring a device's network part */
+ if ((ns.conn >= Timeout && ns.conn <= TearDown) &&
+ os.conn <= Disconnecting)
+ ns.conn = os.conn;
+
+ /* After a network error (+TearDown) only Unconnected or Disconnecting can follow */
+ if (os.conn >= Timeout && os.conn <= TearDown &&
+ ns.conn != Unconnected && ns.conn != Disconnecting)
+ ns.conn = os.conn;
+
+ /* After Disconnecting only StandAlone may follow */
+ if (os.conn == Disconnecting && ns.conn != StandAlone)
+ ns.conn = os.conn;
+
+ if (ns.conn < Connected) {
+ ns.peer_isp = 0;
+ ns.peer = Unknown;
+ if (ns.pdsk > DUnknown || ns.pdsk < Inconsistent)
+ ns.pdsk = DUnknown;
+ }
+
+ if (ns.conn <= Disconnecting && ns.disk == Diskless)
+ ns.pdsk = DUnknown;
+
+ if (os.conn > Connected && ns.conn > Connected &&
+ (ns.disk <= Failed || ns.pdsk <= Failed)) {
+ warn_sync_abort = 1;
+ ns.conn = Connected;
+ }
+
+ if (ns.conn >= Connected &&
+ ((ns.disk == Consistent || ns.disk == Outdated) ||
+ (ns.disk == Negotiating && ns.conn == WFBitMapT))) {
+ switch (ns.conn) {
+ case WFBitMapT:
+ case PausedSyncT:
+ ns.disk = Outdated;
+ break;
+ case Connected:
+ case WFBitMapS:
+ case SyncSource:
+ case PausedSyncS:
+ ns.disk = UpToDate;
+ break;
+ case SyncTarget:
+ ns.disk = Inconsistent;
+ drbd_WARN("Implicit set disk state Inconsistent!\n");
+ break;
+ }
+ if (os.disk == Outdated && ns.disk == UpToDate)
+			drbd_WARN("Implicit set disk from Outdated to UpToDate\n");
+ }
+
+ if (ns.conn >= Connected &&
+ (ns.pdsk == Consistent || ns.pdsk == Outdated)) {
+ switch (ns.conn) {
+ case Connected:
+ case WFBitMapT:
+ case PausedSyncT:
+ case SyncTarget:
+ ns.pdsk = UpToDate;
+ break;
+ case WFBitMapS:
+ case PausedSyncS:
+ ns.pdsk = Outdated;
+ break;
+ case SyncSource:
+ ns.pdsk = Inconsistent;
+ drbd_WARN("Implicit set pdsk Inconsistent!\n");
+ break;
+ }
+ if (os.pdsk == Outdated && ns.pdsk == UpToDate)
+			drbd_WARN("Implicit set pdsk from Outdated to UpToDate\n");
+ }
+
+ /* Connection breaks down before we finished "Negotiating" */
+ if (ns.conn < Connected && ns.disk == Negotiating &&
+ inc_local_if_state(mdev, Negotiating)) {
+ if (mdev->ed_uuid == mdev->bc->md.uuid[Current]) {
+ ns.disk = mdev->new_state_tmp.disk;
+ ns.pdsk = mdev->new_state_tmp.pdsk;
+ } else {
+ ALERT("Connection lost while negotiating, no data!\n");
+ ns.disk = Diskless;
+ ns.pdsk = DUnknown;
+ }
+ dec_local(mdev);
+ }
+
+ if (fp == Stonith &&
+ (ns.role == Primary &&
+ ns.conn < Connected &&
+ ns.pdsk > Outdated))
+ ns.susp = 1;
+
+ if (ns.aftr_isp || ns.peer_isp || ns.user_isp) {
+ if (ns.conn == SyncSource)
+ ns.conn = PausedSyncS;
+ if (ns.conn == SyncTarget)
+ ns.conn = PausedSyncT;
+ } else {
+ if (ns.conn == PausedSyncS)
+ ns.conn = SyncSource;
+ if (ns.conn == PausedSyncT)
+ ns.conn = SyncTarget;
+ }
+
+ if (ns.i == os.i)
+ return SS_NothingToDo;
+
+ if (!(flags & ChgStateHard)) {
+ /* pre-state-change checks ; only look at ns */
+ /* See drbd_state_sw_errors in drbd_strings.c */
+
+ rv = is_valid_state(mdev, ns);
+ if (rv < SS_Success) {
+ /* If the old state was illegal as well, then let
+ this happen...*/
+
+ if (is_valid_state(mdev, os) == rv) {
+ ERR("Considering state change from bad state. "
+ "Error would be: '%s'\n",
+ set_st_err_name(rv));
+ print_st(mdev, "old", os);
+ print_st(mdev, "new", ns);
+ rv = is_valid_state_transition(mdev, ns, os);
+ }
+ } else
+ rv = is_valid_state_transition(mdev, ns, os);
+ }
+
+ if (rv < SS_Success) {
+ if (flags & ChgStateVerbose)
+ print_st_err(mdev, os, ns, rv);
+ return rv;
+ }
+
+ if (warn_sync_abort)
+ drbd_WARN("Resync aborted.\n");
+
+ {
+ char *pbp, pb[300];
+ pbp = pb;
+ *pbp = 0;
+ PSC(role);
+ PSC(peer);
+ PSC(conn);
+ PSC(disk);
+ PSC(pdsk);
+ PSC(susp);
+ PSC(aftr_isp);
+ PSC(peer_isp);
+ PSC(user_isp);
+ INFO("%s\n", pb);
+ }
+
+ mdev->state.i = ns.i;
+ wake_up(&mdev->misc_wait);
+ wake_up(&mdev->state_wait);
+
+ /** post-state-change actions **/
+ if (os.conn >= SyncSource && ns.conn <= Connected) {
+ set_bit(STOP_SYNC_TIMER, &mdev->flags);
+ mod_timer(&mdev->resync_timer, jiffies);
+ }
+
+ if ((os.conn == PausedSyncT || os.conn == PausedSyncS) &&
+ (ns.conn == SyncTarget || ns.conn == SyncSource)) {
+ INFO("Syncer continues.\n");
+ mdev->rs_paused += (long)jiffies-(long)mdev->rs_mark_time;
+ if (ns.conn == SyncTarget) {
+ if (!test_and_clear_bit(STOP_SYNC_TIMER, &mdev->flags))
+ mod_timer(&mdev->resync_timer, jiffies);
+ /* This if (!test_bit) is only needed for the case
+			   that a device that has ceased to use its timer,
+ i.e. it is already in drbd_resync_finished() gets
+ paused and resumed. */
+ }
+ }
+
+ if ((os.conn == SyncTarget || os.conn == SyncSource) &&
+ (ns.conn == PausedSyncT || ns.conn == PausedSyncS)) {
+ INFO("Resync suspended\n");
+ mdev->rs_mark_time = jiffies;
+ if (ns.conn == PausedSyncT)
+ set_bit(STOP_SYNC_TIMER, &mdev->flags);
+ }
+
+ if (os.conn == Connected &&
+ (ns.conn == VerifyS || ns.conn == VerifyT)) {
+ mdev->ov_position = 0;
+ mdev->ov_left =
+ mdev->rs_total =
+ mdev->rs_mark_left = drbd_bm_bits(mdev);
+ mdev->rs_start =
+ mdev->rs_mark_time = jiffies;
+ mdev->ov_last_oos_size = 0;
+ mdev->ov_last_oos_start = 0;
+
+ if (ns.conn == VerifyS)
+ mod_timer(&mdev->resync_timer, jiffies);
+ }
+
+ if (inc_local(mdev)) {
+ u32 mdf = mdev->bc->md.flags & ~(MDF_Consistent|MDF_PrimaryInd|
+ MDF_ConnectedInd|MDF_WasUpToDate|
+ MDF_PeerOutDated|MDF_CrashedPrimary);
+
+ if (test_bit(CRASHED_PRIMARY, &mdev->flags))
+ mdf |= MDF_CrashedPrimary;
+ if (mdev->state.role == Primary ||
+ (mdev->state.pdsk < Inconsistent && mdev->state.peer == Primary))
+ mdf |= MDF_PrimaryInd;
+ if (mdev->state.conn > WFReportParams)
+ mdf |= MDF_ConnectedInd;
+ if (mdev->state.disk > Inconsistent)
+ mdf |= MDF_Consistent;
+ if (mdev->state.disk > Outdated)
+ mdf |= MDF_WasUpToDate;
+ if (mdev->state.pdsk <= Outdated && mdev->state.pdsk >= Inconsistent)
+ mdf |= MDF_PeerOutDated;
+ if (mdf != mdev->bc->md.flags) {
+ mdev->bc->md.flags = mdf;
+ drbd_md_mark_dirty(mdev);
+ }
+ if (os.disk < Consistent && ns.disk >= Consistent)
+ drbd_set_ed_uuid(mdev, mdev->bc->md.uuid[Current]);
+ dec_local(mdev);
+ }
+
+ /* Peer was forced UpToDate & Primary, consider to resync */
+ if (os.disk == Inconsistent && os.pdsk == Inconsistent &&
+ os.peer == Secondary && ns.peer == Primary)
+ set_bit(CONSIDER_RESYNC, &mdev->flags);
+
+ /* Receiver should clean up itself */
+ if (os.conn != Disconnecting && ns.conn == Disconnecting)
+ drbd_thread_stop_nowait(&mdev->receiver);
+
+ /* Now the receiver finished cleaning up itself, it should die */
+ if (os.conn != StandAlone && ns.conn == StandAlone)
+ drbd_thread_stop_nowait(&mdev->receiver);
+
+ /* Upon network failure, we need to restart the receiver. */
+ if (os.conn > TearDown &&
+ ns.conn <= TearDown && ns.conn >= Timeout)
+ drbd_thread_restart_nowait(&mdev->receiver);
+
+ ascw = kmalloc(sizeof(*ascw), GFP_ATOMIC);
+ if (ascw) {
+ ascw->os = os;
+ ascw->ns = ns;
+ ascw->flags = flags;
+ ascw->w.cb = w_after_state_ch;
+ ascw->done = done;
+ drbd_queue_work(&mdev->data.work, &ascw->w);
+ } else {
+ drbd_WARN("Could not kmalloc an ascw\n");
+ }
+
+ return rv;
+}
+
+STATIC int w_after_state_ch(struct drbd_conf *mdev, struct drbd_work *w, int unused)
+{
+ struct after_state_chg_work *ascw;
+
+ ascw = (struct after_state_chg_work *) w;
+ after_state_ch(mdev, ascw->os, ascw->ns, ascw->flags);
+ if (ascw->flags & ChgWaitComplete) {
+ D_ASSERT(ascw->done != NULL);
+ complete(ascw->done);
+ }
+ kfree(ascw);
+
+ return 1;
+}
+
+static void abw_start_sync(struct drbd_conf *mdev, int rv)
+{
+ if (rv) {
+		ERR("Writing the bitmap failed, not starting resync.\n");
+ _drbd_request_state(mdev, NS(conn, Connected), ChgStateVerbose);
+ return;
+ }
+
+ switch (mdev->state.conn) {
+ case StartingSyncT:
+ _drbd_request_state(mdev, NS(conn, WFSyncUUID), ChgStateVerbose);
+ break;
+ case StartingSyncS:
+ drbd_start_resync(mdev, SyncSource);
+ break;
+ }
+}
+
+STATIC void after_state_ch(struct drbd_conf *mdev, union drbd_state_t os,
+ union drbd_state_t ns, enum chg_state_flags flags)
+{
+ enum fencing_policy fp;
+
+ if (os.conn != Connected && ns.conn == Connected) {
+ clear_bit(CRASHED_PRIMARY, &mdev->flags);
+ if (mdev->p_uuid)
+ mdev->p_uuid[UUID_FLAGS] &= ~((u64)2);
+ }
+
+ fp = DontCare;
+ if (inc_local(mdev)) {
+ fp = mdev->bc->dc.fencing;
+ dec_local(mdev);
+ }
+
+ /* Inform userspace about the change... */
+ drbd_bcast_state(mdev, ns);
+
+ if (!(os.role == Primary && os.disk < UpToDate && os.pdsk < UpToDate) &&
+ (ns.role == Primary && ns.disk < UpToDate && ns.pdsk < UpToDate))
+ drbd_khelper(mdev, "pri-on-incon-degr");
+
+ /* Here we have the actions that are performed after a
+ state change. This function might sleep */
+
+ if (fp == Stonith && ns.susp) {
+		/* case1: The outdate peer handler is successful:
+ * case2: The connection was established again: */
+ if ((os.pdsk > Outdated && ns.pdsk <= Outdated) ||
+ (os.conn < Connected && ns.conn >= Connected)) {
+ tl_clear(mdev);
+ spin_lock_irq(&mdev->req_lock);
+ _drbd_set_state(_NS(mdev, susp, 0), ChgStateVerbose, NULL);
+ spin_unlock_irq(&mdev->req_lock);
+ }
+ }
+ /* Do not change the order of the if above and the two below... */
+ if (os.pdsk == Diskless && ns.pdsk > Diskless) { /* attach on the peer */
+ drbd_send_uuids(mdev);
+ drbd_send_state(mdev);
+ }
+ if (os.conn != WFBitMapS && ns.conn == WFBitMapS)
+ drbd_queue_bitmap_io(mdev, &drbd_send_bitmap, NULL, "send_bitmap (WFBitMapS)");
+
+ /* Lost contact to peer's copy of the data */
+ if ((os.pdsk >= Inconsistent &&
+ os.pdsk != DUnknown &&
+ os.pdsk != Outdated)
+ && (ns.pdsk < Inconsistent ||
+ ns.pdsk == DUnknown ||
+ ns.pdsk == Outdated)) {
+ kfree(mdev->p_uuid);
+ mdev->p_uuid = NULL;
+ if (inc_local(mdev)) {
+ if ((ns.role == Primary || ns.peer == Primary) &&
+ mdev->bc->md.uuid[Bitmap] == 0 && ns.disk >= UpToDate) {
+ drbd_uuid_new_current(mdev);
+ drbd_send_uuids(mdev);
+ }
+ dec_local(mdev);
+ }
+ }
+
+ if (ns.pdsk < Inconsistent && inc_local(mdev)) {
+ if (ns.peer == Primary && mdev->bc->md.uuid[Bitmap] == 0)
+ drbd_uuid_new_current(mdev);
+
+ /* Diskless Peer becomes secondary */
+ if (os.peer == Primary && ns.peer == Secondary)
+ drbd_al_to_on_disk_bm(mdev);
+ dec_local(mdev);
+ }
+
+ /* Last part of the attaching process ... */
+ if (ns.conn >= Connected &&
+ os.disk == Attaching && ns.disk == Negotiating) {
+ kfree(mdev->p_uuid); /* We expect to receive up-to-date UUIDs soon. */
+ mdev->p_uuid = NULL; /* ...to not use the old ones in the mean time */
+ drbd_send_sizes(mdev); /* to start sync... */
+ drbd_send_uuids(mdev);
+ drbd_send_state(mdev);
+ }
+
+ /* We want to pause/continue resync, tell peer. */
+ if (ns.conn >= Connected &&
+ ((os.aftr_isp != ns.aftr_isp) ||
+ (os.user_isp != ns.user_isp)))
+ drbd_send_state(mdev);
+
+ /* In case one of the isp bits got set, suspend other devices. */
+ if ((!os.aftr_isp && !os.peer_isp && !os.user_isp) &&
+ (ns.aftr_isp || ns.peer_isp || ns.user_isp))
+ suspend_other_sg(mdev);
+
+	/* Make sure the peer gets informed about possible state
+	   changes (ISP bits) that happened while we were in WFReportParams. */
+ if (os.conn == WFReportParams && ns.conn >= Connected)
+ drbd_send_state(mdev);
+
+	/* We are in the process of starting a full sync... */
+ if ((os.conn != StartingSyncT && ns.conn == StartingSyncT) ||
+ (os.conn != StartingSyncS && ns.conn == StartingSyncS))
+ drbd_queue_bitmap_io(mdev, &drbd_bmio_set_n_write, &abw_start_sync, "set_n_write from StartingSync");
+
+	/* We are invalidating ourselves... */
+ if (os.conn < Connected && ns.conn < Connected &&
+ os.disk > Inconsistent && ns.disk == Inconsistent)
+ drbd_queue_bitmap_io(mdev, &drbd_bmio_set_n_write, NULL, "set_n_write from invalidate");
+
+ if (os.disk > Diskless && ns.disk == Diskless) {
+ /* since inc_local() only works as long as disk>=Inconsistent,
+ and it is Diskless here, local_cnt can only go down, it can
+ not increase... It will reach zero */
+ wait_event(mdev->misc_wait, !atomic_read(&mdev->local_cnt));
+
+ lc_free(mdev->resync);
+ mdev->resync = NULL;
+ lc_free(mdev->act_log);
+ mdev->act_log = NULL;
+ __no_warn(local, drbd_free_bc(mdev->bc););
+ wmb(); /* see begin of drbd_nl_disk_conf() */
+ __no_warn(local, mdev->bc = NULL;);
+
+ if (mdev->md_io_tmpp)
+ __free_page(mdev->md_io_tmpp);
+ }
+
+ /* Disks got bigger while they were detached */
+ if (ns.disk > Negotiating && ns.pdsk > Negotiating &&
+ test_and_clear_bit(RESYNC_AFTER_NEG, &mdev->flags)) {
+ if (ns.conn == Connected)
+ resync_after_online_grow(mdev);
+ }
+
+ /* A resync finished or aborted, wake paused devices... */
+ if ((os.conn > Connected && ns.conn <= Connected) ||
+ (os.peer_isp && !ns.peer_isp) ||
+ (os.user_isp && !ns.user_isp))
+ resume_next_sg(mdev);
+
+	/* Upon network connection, we need to start the receiver */
+ if (os.conn == StandAlone && ns.conn == Unconnected)
+ drbd_thread_start(&mdev->receiver);
+
+ /* Terminate worker thread if we are unconfigured - it will be
+ restarted as needed... */
+ if (ns.disk == Diskless && ns.conn == StandAlone && ns.role == Secondary)
+ drbd_thread_stop_nowait(&mdev->worker);
+
+ drbd_md_sync(mdev);
+}
+
+
+STATIC int drbd_thread_setup(void *arg)
+{
+ struct Drbd_thread *thi = (struct Drbd_thread *) arg;
+ struct drbd_conf *mdev = thi->mdev;
+ int retval;
+
+restart:
+ retval = thi->function(thi);
+
+ spin_lock(&thi->t_lock);
+
+ /* if the receiver has been "Exiting", the last thing it did
+ * was set the conn state to "StandAlone",
+ * if now a re-connect request comes in, conn state goes Unconnected,
+ * and receiver thread will be "started".
+ * drbd_thread_start needs to set "Restarting" in that case.
+	 * t_state check and assignment need to be within the same spinlock,
+	 * so either thread_start sees Exiting, and can remap to Restarting,
+	 * or thread_start sees None, and can proceed as normal.
+ */
+
+ if (thi->t_state == Restarting) {
+ INFO("Restarting %s\n", current->comm);
+ thi->t_state = Running;
+ spin_unlock(&thi->t_lock);
+ goto restart;
+ }
+
+ thi->task = NULL;
+ thi->t_state = None;
+ smp_mb();
+ complete(&thi->stop);
+ spin_unlock(&thi->t_lock);
+
+ INFO("Terminating %s\n", current->comm);
+
+ /* Release mod reference taken when thread was started */
+ module_put(THIS_MODULE);
+ return retval;
+}
+
+STATIC void drbd_thread_init(struct drbd_conf *mdev, struct Drbd_thread *thi,
+ int (*func) (struct Drbd_thread *))
+{
+ spin_lock_init(&thi->t_lock);
+ thi->task = NULL;
+ thi->t_state = None;
+ thi->function = func;
+ thi->mdev = mdev;
+}
+
+int drbd_thread_start(struct Drbd_thread *thi)
+{
+ struct drbd_conf *mdev = thi->mdev;
+ struct task_struct *nt;
+ const char *me =
+ thi == &mdev->receiver ? "receiver" :
+ thi == &mdev->asender ? "asender" :
+ thi == &mdev->worker ? "worker" : "NONSENSE";
+
+ spin_lock(&thi->t_lock);
+ switch (thi->t_state) {
+ case None:
+ INFO("Starting %s thread (from %s [%d])\n",
+ me, current->comm, current->pid);
+
+ /* Get ref on module for thread - this is released when thread exits */
+ if (!try_module_get(THIS_MODULE)) {
+ ERR("Failed to get module reference in drbd_thread_start\n");
+ spin_unlock(&thi->t_lock);
+ return FALSE;
+ }
+
+ D_ASSERT(thi->task == NULL);
+ thi->reset_cpu_mask = 1;
+ thi->t_state = Running;
+ spin_unlock(&thi->t_lock);
+		flush_signals(current); /* otherwise we may get -ERESTARTNOINTR */
+
+ nt = kthread_create(drbd_thread_setup, (void *) thi,
+ "drbd%d_%s", mdev_to_minor(mdev), me);
+
+ if (IS_ERR(nt)) {
+ ERR("Couldn't start thread\n");
+
+ module_put(THIS_MODULE);
+ return FALSE;
+ }
+ spin_lock(&thi->t_lock);
+ thi->task = nt;
+ thi->t_state = Running;
+ spin_unlock(&thi->t_lock);
+ wake_up_process(nt);
+ break;
+ case Exiting:
+ thi->t_state = Restarting;
+ INFO("Restarting %s thread (from %s [%d])\n",
+ me, current->comm, current->pid);
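+		/* fall through */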
+ case Running:
+ case Restarting:
+ default:
+ spin_unlock(&thi->t_lock);
+ break;
+ }
+
+ return TRUE;
+}
+
+
+void _drbd_thread_stop(struct Drbd_thread *thi, int restart, int wait)
+{
+ enum Drbd_thread_state ns = restart ? Restarting : Exiting;
+
+ spin_lock(&thi->t_lock);
+
+ if (thi->t_state == None) {
+ spin_unlock(&thi->t_lock);
+ if (restart)
+ drbd_thread_start(thi);
+ return;
+ }
+
+ if (thi->t_state != ns) {
+ if (thi->task == NULL) {
+ spin_unlock(&thi->t_lock);
+ return;
+ }
+
+ thi->t_state = ns;
+ smp_mb();
+ init_completion(&thi->stop);
+ if (thi->task != current)
+ force_sig(DRBD_SIGKILL, thi->task);
+
+ }
+
+ spin_unlock(&thi->t_lock);
+
+ if (wait) {
+ wait_for_completion(&thi->stop);
+ }
+}
+
+#ifdef CONFIG_SMP
+/**
+ * drbd_calc_cpu_mask: Generates CPU masks, spread over all CPUs.
+ * Forces all threads of a device onto the same CPU. This is beneficial for
+ * DRBD's performance. May be overridden by the user's configuration.
+ */
+cpumask_t drbd_calc_cpu_mask(struct drbd_conf *mdev)
+{
+ int sv, cpu;
+ cpumask_t av_cpu_m;
+
+ if (cpus_weight(mdev->cpu_mask))
+ return mdev->cpu_mask;
+
+ av_cpu_m = cpu_online_map;
+ sv = mdev_to_minor(mdev) % cpus_weight(av_cpu_m);
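+	/* e.g. with 4 online CPUs, minors 0,1,2,3,4,... end up on online
+	 * CPU 0,1,2,3,0,... respectively */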
+
+ for_each_cpu_mask(cpu, av_cpu_m) {
+ if (sv-- == 0)
+ return cpumask_of_cpu(cpu);
+ }
+
+ /* some kernel versions "forget" to add the (cpumask_t) typecast
+ * to that macro, which results in "parse error before '{'" ;-> */
+ return (cpumask_t) CPU_MASK_ALL; /* Never reached. */
+}
+
+/* modifies the cpu mask of the _current_ thread,
+ * call in the "main loop" of _all_ threads.
+ * no need for any mutex, current won't die prematurely.
+ */
+void drbd_thread_current_set_cpu(struct drbd_conf *mdev)
+{
+ struct task_struct *p = current;
+ struct Drbd_thread *thi =
+ p == mdev->asender.task ? &mdev->asender :
+ p == mdev->receiver.task ? &mdev->receiver :
+ p == mdev->worker.task ? &mdev->worker :
+ NULL;
+ ERR_IF(thi == NULL)
+ return;
+ if (!thi->reset_cpu_mask)
+ return;
+ thi->reset_cpu_mask = 0;
+ /* preempt_disable();
+	   There was a kernel that warned about a call to smp_processor_id()
+	   while preemption was not disabled. It seems that this was fixed in
+	   mainline. */
+ set_cpus_allowed(p, mdev->cpu_mask);
+ /* preempt_enable(); */
+}
+#endif
+
+/* the appropriate socket mutex must be held already */
+int _drbd_send_cmd(struct drbd_conf *mdev, struct socket *sock,
+ enum Drbd_Packet_Cmd cmd, struct Drbd_Header *h,
+ size_t size, unsigned msg_flags)
+{
+ int sent, ok;
+
+ ERR_IF(!h) return FALSE;
+ ERR_IF(!size) return FALSE;
+
+ h->magic = BE_DRBD_MAGIC;
+ h->command = cpu_to_be16(cmd);
+ h->length = cpu_to_be16(size-sizeof(struct Drbd_Header));
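+	/* note: h->length counts payload bytes only,
+	 * the header itself is not included */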
+
+ dump_packet(mdev, sock, 0, (void *)h, __FILE__, __LINE__);
+ sent = drbd_send(mdev, sock, h, size, msg_flags);
+
+ ok = (sent == size);
+ if (!ok)
+ ERR("short sent %s size=%d sent=%d\n",
+ cmdname(cmd), (int)size, sent);
+ return ok;
+}
+
+/* don't pass the socket. we may only look at it
+ * when we hold the appropriate socket mutex.
+ */
+int drbd_send_cmd(struct drbd_conf *mdev, int use_data_socket,
+ enum Drbd_Packet_Cmd cmd, struct Drbd_Header *h, size_t size)
+{
+ int ok = 0;
+ struct socket *sock;
+
+ if (use_data_socket) {
+ mutex_lock(&mdev->data.mutex);
+ sock = mdev->data.socket;
+ } else {
+ mutex_lock(&mdev->meta.mutex);
+ sock = mdev->meta.socket;
+ }
+
+ /* drbd_disconnect() could have called drbd_free_sock()
+ * while we were waiting in down()... */
+ if (likely(sock != NULL))
+ ok = _drbd_send_cmd(mdev, sock, cmd, h, size, 0);
+
+ if (use_data_socket)
+ mutex_unlock(&mdev->data.mutex);
+ else
+ mutex_unlock(&mdev->meta.mutex);
+ return ok;
+}
+
+int drbd_send_cmd2(struct drbd_conf *mdev, enum Drbd_Packet_Cmd cmd, char *data,
+ size_t size)
+{
+ struct Drbd_Header h;
+ int ok;
+
+ h.magic = BE_DRBD_MAGIC;
+ h.command = cpu_to_be16(cmd);
+ h.length = cpu_to_be16(size);
+
+ if (!drbd_get_data_sock(mdev))
+ return 0;
+
+ dump_packet(mdev, mdev->data.socket, 0, (void *)&h, __FILE__, __LINE__);
+
+ ok = (sizeof(h) ==
+ drbd_send(mdev, mdev->data.socket, &h, sizeof(h), 0));
+ ok = ok && (size ==
+ drbd_send(mdev, mdev->data.socket, data, size, 0));
+
+ drbd_put_data_sock(mdev);
+
+ return ok;
+}
+
+int drbd_send_sync_param(struct drbd_conf *mdev, struct syncer_conf *sc)
+{
+ struct Drbd_SyncParam89_Packet *p;
+ struct socket *sock;
+ int size, rv;
+ const int apv = mdev->agreed_pro_version;
+
+ size = apv <= 87 ? sizeof(struct Drbd_SyncParam_Packet)
+ : apv == 88 ? sizeof(struct Drbd_SyncParam_Packet)
+ + strlen(mdev->sync_conf.verify_alg) + 1
+ : /* 89 */ sizeof(struct Drbd_SyncParam89_Packet);
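+	/* apv <= 87: fixed-size legacy packet; 88 appends the verify_alg
+	 * string; >= 89 uses the full SyncParam89 layout incl. csums_alg */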
+
+ /* used from admin command context and receiver/worker context.
+ * to avoid kmalloc, grab the socket right here,
+ * then use the pre-allocated sbuf there */
+ mutex_lock(&mdev->data.mutex);
+ sock = mdev->data.socket;
+
+ if (likely(sock != NULL)) {
+ enum Drbd_Packet_Cmd cmd = apv >= 89 ? SyncParam89 : SyncParam;
+
+ p = &mdev->data.sbuf.SyncParam89;
+
+ /* initialize verify_alg and csums_alg */
+ memset(p->verify_alg, 0, 2 * SHARED_SECRET_MAX);
+
+ p->rate = cpu_to_be32(sc->rate);
+
+ if (apv >= 88)
+ strcpy(p->verify_alg, mdev->sync_conf.verify_alg);
+ if (apv >= 89)
+ strcpy(p->csums_alg, mdev->sync_conf.csums_alg);
+
+ rv = _drbd_send_cmd(mdev, sock, cmd, &p->head, size, 0);
+ } else
+ rv = 0; /* not ok */
+
+ mutex_unlock(&mdev->data.mutex);
+
+ return rv;
+}
+
+int drbd_send_protocol(struct drbd_conf *mdev)
+{
+ struct Drbd_Protocol_Packet *p;
+ int size, rv;
+
+ size = sizeof(struct Drbd_Protocol_Packet);
+
+ if (mdev->agreed_pro_version >= 87)
+ size += strlen(mdev->net_conf->integrity_alg) + 1;
+
+ p = kmalloc(size, GFP_KERNEL);
+ if (p == NULL)
+ return 0;
+
+ p->protocol = cpu_to_be32(mdev->net_conf->wire_protocol);
+ p->after_sb_0p = cpu_to_be32(mdev->net_conf->after_sb_0p);
+ p->after_sb_1p = cpu_to_be32(mdev->net_conf->after_sb_1p);
+ p->after_sb_2p = cpu_to_be32(mdev->net_conf->after_sb_2p);
+ p->want_lose = cpu_to_be32(mdev->net_conf->want_lose);
+ p->two_primaries = cpu_to_be32(mdev->net_conf->two_primaries);
+
+ if (mdev->agreed_pro_version >= 87)
+ strcpy(p->integrity_alg, mdev->net_conf->integrity_alg);
+
+ rv = drbd_send_cmd(mdev, USE_DATA_SOCKET, ReportProtocol,
+ (struct Drbd_Header *)p, size);
+ kfree(p);
+ return rv;
+}
+
+int drbd_send_uuids(struct drbd_conf *mdev)
+{
+ struct Drbd_GenCnt_Packet p;
+ int i;
+
+ u64 uuid_flags = 0;
+
+ if (!inc_local_if_state(mdev, Negotiating))
+ return 1;
+
+ for (i = Current; i < UUID_SIZE; i++)
+ p.uuid[i] = mdev->bc ? cpu_to_be64(mdev->bc->md.uuid[i]) : 0;
+
+ mdev->comm_bm_set = drbd_bm_total_weight(mdev);
+ p.uuid[UUID_SIZE] = cpu_to_be64(mdev->comm_bm_set);
+ uuid_flags |= mdev->net_conf->want_lose ? 1 : 0;
+ uuid_flags |= test_bit(CRASHED_PRIMARY, &mdev->flags) ? 2 : 0;
+ uuid_flags |= mdev->new_state_tmp.disk == Inconsistent ? 4 : 0;
+ p.uuid[UUID_FLAGS] = cpu_to_be64(uuid_flags);
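+	/* uuid_flags bits: 1 = want_lose, 2 = crashed primary,
+	 * 4 = new disk state will be Inconsistent */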
+
+ dec_local(mdev);
+
+ return drbd_send_cmd(mdev, USE_DATA_SOCKET, ReportUUIDs,
+ (struct Drbd_Header *)&p, sizeof(p));
+}
+
+int drbd_send_sync_uuid(struct drbd_conf *mdev, u64 val)
+{
+ struct Drbd_SyncUUID_Packet p;
+
+ p.uuid = cpu_to_be64(val);
+
+ return drbd_send_cmd(mdev, USE_DATA_SOCKET, ReportSyncUUID,
+ (struct Drbd_Header *)&p, sizeof(p));
+}
+
+int drbd_send_sizes(struct drbd_conf *mdev)
+{
+ struct Drbd_Sizes_Packet p;
+ sector_t d_size, u_size;
+ int q_order_type;
+ int ok;
+
+ if (inc_local_if_state(mdev, Negotiating)) {
+ D_ASSERT(mdev->bc->backing_bdev);
+ d_size = drbd_get_max_capacity(mdev->bc);
+ u_size = mdev->bc->dc.disk_size;
+ q_order_type = drbd_queue_order_type(mdev);
+ p.queue_order_type = cpu_to_be32(drbd_queue_order_type(mdev));
+ dec_local(mdev);
+ } else {
+ d_size = 0;
+ u_size = 0;
+ q_order_type = QUEUE_ORDERED_NONE;
+ }
+
+ p.d_size = cpu_to_be64(d_size);
+ p.u_size = cpu_to_be64(u_size);
+ p.c_size = cpu_to_be64(drbd_get_capacity(mdev->this_bdev));
+ p.max_segment_size = cpu_to_be32(mdev->rq_queue->max_segment_size);
+ p.queue_order_type = cpu_to_be32(q_order_type);
+
+ ok = drbd_send_cmd(mdev, USE_DATA_SOCKET, ReportSizes,
+ (struct Drbd_Header *)&p, sizeof(p));
+ return ok;
+}
+
+/**
+ * drbd_send_state:
+ * Informs the peer about our state. Only call it when
+ * mdev->state.conn >= Connected (i.e. you may not call it while in
+ * WFReportParams). There is one valid and necessary exception though:
+ * drbd_connect() calls drbd_send_state() while it is in WFReportParams.
+ */
+int drbd_send_state(struct drbd_conf *mdev)
+{
+ struct socket *sock;
+ struct Drbd_State_Packet p;
+ int ok = 0;
+
+	/* Grab state lock so we won't send state if we're in the middle
+ * of a cluster wide state change on another thread */
+ drbd_state_lock(mdev);
+
+ mutex_lock(&mdev->data.mutex);
+
+ p.state = cpu_to_be32(mdev->state.i); /* Within the send mutex */
+ sock = mdev->data.socket;
+
+ if (likely(sock != NULL)) {
+ ok = _drbd_send_cmd(mdev, sock, ReportState,
+ (struct Drbd_Header *)&p, sizeof(p), 0);
+ }
+
+ mutex_unlock(&mdev->data.mutex);
+
+ drbd_state_unlock(mdev);
+ return ok;
+}
+
+int drbd_send_state_req(struct drbd_conf *mdev,
+ union drbd_state_t mask, union drbd_state_t val)
+{
+ struct Drbd_Req_State_Packet p;
+
+ p.mask = cpu_to_be32(mask.i);
+ p.val = cpu_to_be32(val.i);
+
+ return drbd_send_cmd(mdev, USE_DATA_SOCKET, StateChgRequest,
+ (struct Drbd_Header *)&p, sizeof(p));
+}
+
+int drbd_send_sr_reply(struct drbd_conf *mdev, int retcode)
+{
+ struct Drbd_RqS_Reply_Packet p;
+
+ p.retcode = cpu_to_be32(retcode);
+
+ return drbd_send_cmd(mdev, USE_META_SOCKET, StateChgReply,
+ (struct Drbd_Header *)&p, sizeof(p));
+}
+
+/* returns
+ * positive: number of payload bytes needed in this packet.
+ * zero: incompressible with this encoding.
+ * negative: error. */
+int fill_bitmap_rle_bytes(struct drbd_conf *mdev,
+ struct Drbd_Compressed_Bitmap_Packet *p,
+ struct bm_xfer_ctx *c)
+{
+ unsigned long plain_bits;
+ unsigned long tmp;
+ unsigned long rl;
+ void *buffer;
+ unsigned n;
+ unsigned len;
+ unsigned toggle;
+
+ /* may we use this feature? */
+ if ((mdev->sync_conf.use_rle_encoding == 0) ||
+ (mdev->agreed_pro_version < 90))
+ return 0;
+
+ if (c->bit_offset >= c->bm_bits)
+ return 0; /* nothing to do. */
+
+	/* use at most this many bytes */
+ len = BM_PACKET_VLI_BYTES_MAX;
+ buffer = p->code;
+ /* plain bits covered in this code string */
+ plain_bits = 0;
+
+	/* p->encoding & 0x80 stores whether the first
+	 * run is a run of set bits.
+	 * bit offset is implicit.
+	 * start with toggle == 2 to be able to tell the first iteration apart */
+ toggle = 2;
+
+	/* see how many plain bits we can stuff into one packet
+ * using RLE and VLI. */
+ do {
+ tmp = (toggle == 0) ? _drbd_bm_find_next_zero(mdev, c->bit_offset)
+ : _drbd_bm_find_next(mdev, c->bit_offset);
+ if (tmp == -1UL)
+ tmp = c->bm_bits;
+ rl = tmp - c->bit_offset;
+
+ if (toggle == 2) { /* first iteration */
+ if (rl == 0) {
+ /* the first checked bit was set,
+ * store start value, */
+ DCBP_set_start(p, 1);
+ /* but skip encoding of zero run length */
+ toggle = !toggle;
+ continue;
+ }
+ DCBP_set_start(p, 0);
+ }
+
+ /* paranoia: catch zero runlength.
+ * can only happen if bitmap is modified while we scan it. */
+ if (rl == 0) {
+ ERR("unexpected zero runlength while encoding bitmap "
+ "t:%u bo:%lu\n", toggle, c->bit_offset);
+ return -1;
+ }
+
+ n = vli_encode_bytes(buffer, rl, len);
+ if (n == 0) /* buffer full */
+ break;
+
+ toggle = !toggle;
+ buffer += n;
+ len -= n;
+ plain_bits += rl;
+ c->bit_offset = tmp;
+ } while (len && c->bit_offset < c->bm_bits);
+
+ len = BM_PACKET_VLI_BYTES_MAX - len;
+
+ if (plain_bits < (len << 3)) {
+ /* incompressible with this method.
+ * we need to rewind both word and bit position. */
+ c->bit_offset -= plain_bits;
+ bm_xfer_ctx_bit_to_word_offset(c);
+ c->bit_offset = c->word_offset * BITS_PER_LONG;
+ return 0;
+ }
+
+ /* RLE + VLI was able to compress it just fine.
+ * update c->word_offset. */
+ bm_xfer_ctx_bit_to_word_offset(c);
+
+ /* store pad_bits */
+ DCBP_set_pad_bits(p, 0);
+
+ return len;
+}
+
+int fill_bitmap_rle_bits(struct drbd_conf *mdev,
+ struct Drbd_Compressed_Bitmap_Packet *p,
+ struct bm_xfer_ctx *c)
+{
+ struct bitstream bs;
+ unsigned long plain_bits;
+ unsigned long tmp;
+ unsigned long rl;
+ unsigned len;
+ unsigned toggle;
+ int bits;
+
+ /* may we use this feature? */
+ if ((mdev->sync_conf.use_rle_encoding == 0) ||
+ (mdev->agreed_pro_version < 90))
+ return 0;
+
+ if (c->bit_offset >= c->bm_bits)
+ return 0; /* nothing to do. */
+
+	/* use at most this many bytes */
+ bitstream_init(&bs, p->code, BM_PACKET_VLI_BYTES_MAX, 0);
+ memset(p->code, 0, BM_PACKET_VLI_BYTES_MAX);
+ /* plain bits covered in this code string */
+ plain_bits = 0;
+
+	/* p->encoding & 0x80 stores whether the first
+	 * run is a run of set bits.
+	 * bit offset is implicit.
+	 * start with toggle == 2 to be able to tell the first iteration apart */
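+	/* Illustration only: a chunk 0000111101... that starts with cleared
+	 * bits encodes as start=0 followed by the VLI coded run lengths
+	 * 4, 4, 1, ... */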
+ toggle = 2;
+
+	/* see how many plain bits we can stuff into one packet
+ * using RLE and VLI. */
+ do {
+ tmp = (toggle == 0) ? _drbd_bm_find_next_zero(mdev, c->bit_offset)
+ : _drbd_bm_find_next(mdev, c->bit_offset);
+ if (tmp == -1UL)
+ tmp = c->bm_bits;
+ rl = tmp - c->bit_offset;
+
+ if (toggle == 2) { /* first iteration */
+ if (rl == 0) {
+ /* the first checked bit was set,
+ * store start value, */
+ DCBP_set_start(p, 1);
+ /* but skip encoding of zero run length */
+ toggle = !toggle;
+ continue;
+ }
+ DCBP_set_start(p, 0);
+ }
+
+ /* paranoia: catch zero runlength.
+ * can only happen if bitmap is modified while we scan it. */
+ if (rl == 0) {
+ ERR("unexpected zero runlength while encoding bitmap "
+ "t:%u bo:%lu\n", toggle, c->bit_offset);
+ return -1;
+ }
+
+ bits = vli_encode_bits(&bs, rl);
+ if (bits == -ENOBUFS) /* buffer full */
+ break;
+ if (bits <= 0) {
+ ERR("error while encoding bitmap: %d\n", bits);
+ return 0;
+ }
+
+ toggle = !toggle;
+ plain_bits += rl;
+ c->bit_offset = tmp;
+ } while (c->bit_offset < c->bm_bits);
+
+ len = bs.cur.b - p->code + !!bs.cur.bit;
+
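+	/* the compressed encoding only pays off if the code string
+	 * (len bytes == len << 3 bits) covers at least that many plain
+	 * bits; otherwise fall back to plain transfer */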
+ if (plain_bits < (len << 3)) {
+ /* incompressible with this method.
+ * we need to rewind both word and bit position. */
+ c->bit_offset -= plain_bits;
+ bm_xfer_ctx_bit_to_word_offset(c);
+ c->bit_offset = c->word_offset * BITS_PER_LONG;
+ return 0;
+ }
+
+ /* RLE + VLI was able to compress it just fine.
+ * update c->word_offset. */
+ bm_xfer_ctx_bit_to_word_offset(c);
+
+ /* store pad_bits */
+ DCBP_set_pad_bits(p, (8 - bs.cur.bit) & 0x7);
+
+ return len;
+}
+
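+/* Sends one bitmap packet, compressed if possible.
+ * Returns OK to continue with the next packet,
+ * DONE once the whole bitmap has been transferred,
+ * FAILED on a send or encoding error. */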
+enum { OK, FAILED, DONE }
+send_bitmap_rle_or_plain(struct drbd_conf *mdev,
+ struct Drbd_Header *h, struct bm_xfer_ctx *c)
+{
+ struct Drbd_Compressed_Bitmap_Packet *p = (void*)h;
+ unsigned long num_words;
+ int len;
+ int ok;
+
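+	/* the byte based RLE_VLI variant is compiled out (if (0) below);
+	 * only the bit based Fibonacci VLI encoding is used. Keep this
+	 * in sync with the DCBP_set_code() selection further down. */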
+ if (0)
+ len = fill_bitmap_rle_bytes(mdev, p, c);
+ else
+ len = fill_bitmap_rle_bits(mdev, p, c);
+
+ if (len < 0)
+ return FAILED;
+ if (len) {
+ DCBP_set_code(p, 0 ? RLE_VLI_Bytes : RLE_VLI_BitsFibD_3_5);
+ ok = _drbd_send_cmd(mdev, mdev->data.socket, ReportCBitMap, h,
+ sizeof(*p) + len, 0);
+
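+		/* index 0 counts compressed packets/bytes,
+		 * index 1 (plain case below) the uncompressed ones */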
+ c->packets[0]++;
+ c->bytes[0] += sizeof(*p) + len;
+
+ if (c->bit_offset >= c->bm_bits)
+ len = 0; /* DONE */
+ } else {
+ /* was not compressible.
+ * send a buffer full of plain text bits instead. */
+ num_words = min_t(size_t, BM_PACKET_WORDS, c->bm_words - c->word_offset);
+ len = num_words * sizeof(long);
+ if (len)
+ drbd_bm_get_lel(mdev, c->word_offset, num_words, (unsigned long*)h->payload);
+ ok = _drbd_send_cmd(mdev, mdev->data.socket, ReportBitMap,
+ h, sizeof(struct Drbd_Header) + len, 0);
+ c->word_offset += num_words;
+ c->bit_offset = c->word_offset * BITS_PER_LONG;
+
+ c->packets[1]++;
+ c->bytes[1] += sizeof(struct Drbd_Header) + len;
+
+ if (c->bit_offset > c->bm_bits)
+ c->bit_offset = c->bm_bits;
+ }
+ ok = ok ? ((len == 0) ? DONE : OK) : FAILED;
+
+ if (ok == DONE)
+ INFO_bm_xfer_stats(mdev, "send", c);
+ return ok;
+}
+
+/* See the comment at receive_bitmap() */
+int _drbd_send_bitmap(struct drbd_conf *mdev)
+{
+ struct bm_xfer_ctx c;
+ struct Drbd_Header *p;
+ int ret;
+
+ ERR_IF(!mdev->bitmap) return FALSE;
+
+ /* maybe we should use some per thread scratch page,
+ * and allocate that during initial device creation? */
+ p = (struct Drbd_Header *) __get_free_page(GFP_NOIO);
+ if (!p) {
+ ERR("failed to allocate one page buffer in %s\n", __func__);
+ return FALSE;
+ }
+
+ if (inc_local(mdev)) {
+ if (drbd_md_test_flag(mdev->bc, MDF_FullSync)) {
+ INFO("Writing the whole bitmap, MDF_FullSync was set.\n");
+ drbd_bm_set_all(mdev);
+ if (drbd_bm_write(mdev)) {
+ /* write_bm did fail! Leave full sync flag set in Meta Data
+ * but otherwise process as per normal - need to tell other
+ * side that a full resync is required! */
+ ERR("Failed to write bitmap to disk!\n");
+ } else {
+ drbd_md_clear_flag(mdev, MDF_FullSync);
+ drbd_md_sync(mdev);
+ }
+ }
+ dec_local(mdev);
+ }
+
+ c = (struct bm_xfer_ctx) {
+ .bm_bits = drbd_bm_bits(mdev),
+ .bm_words = drbd_bm_words(mdev),
+ };
+
+ do {
+ ret = send_bitmap_rle_or_plain(mdev, p, &c);
+ } while (ret == OK);
+
+ free_page((unsigned long) p);
+ return (ret == DONE);
+}
+
+int drbd_send_bitmap(struct drbd_conf *mdev)
+{
+ int err;
+
+ if (!drbd_get_data_sock(mdev))
+ return -1;
+ err = !_drbd_send_bitmap(mdev);
+ drbd_put_data_sock(mdev);
+ return err;
+}
+
+int drbd_send_b_ack(struct drbd_conf *mdev, u32 barrier_nr, u32 set_size)
+{
+ int ok;
+ struct Drbd_BarrierAck_Packet p;
+
+ p.barrier = barrier_nr;
+ p.set_size = cpu_to_be32(set_size);
+
+ if (mdev->state.conn < Connected)
+ return FALSE;
+ ok = drbd_send_cmd(mdev, USE_META_SOCKET, BarrierAck,
+ (struct Drbd_Header *)&p, sizeof(p));
+ return ok;
+}
+
+/**
+ * _drbd_send_ack:
+ * This helper function expects the sector and block_id parameter already
+ * in big endian!
+ */
+STATIC int _drbd_send_ack(struct drbd_conf *mdev, enum Drbd_Packet_Cmd cmd,
+ u64 sector,
+ u32 blksize,
+ u64 block_id)
+{
+ int ok;
+ struct Drbd_BlockAck_Packet p;
+
+ p.sector = sector;
+ p.block_id = block_id;
+ p.blksize = blksize;
+ p.seq_num = cpu_to_be32(atomic_add_return(1, &mdev->packet_seq));
+
+ if (!mdev->meta.socket || mdev->state.conn < Connected)
+ return FALSE;
+ ok = drbd_send_cmd(mdev, USE_META_SOCKET, cmd,
+ (struct Drbd_Header *)&p, sizeof(p));
+ return ok;
+}
+
+int drbd_send_ack_dp(struct drbd_conf *mdev, enum Drbd_Packet_Cmd cmd,
+ struct Drbd_Data_Packet *dp)
+{
+ const int header_size = sizeof(struct Drbd_Data_Packet)
+ - sizeof(struct Drbd_Header);
+ int data_size = ((struct Drbd_Header *)dp)->length - header_size;
+
+ return _drbd_send_ack(mdev, cmd, dp->sector, cpu_to_be32(data_size),
+ dp->block_id);
+}
+
+int drbd_send_ack_rp(struct drbd_conf *mdev, enum Drbd_Packet_Cmd cmd,
+ struct Drbd_BlockRequest_Packet *rp)
+{
+ return _drbd_send_ack(mdev, cmd, rp->sector, rp->blksize, rp->block_id);
+}
+
+int drbd_send_ack(struct drbd_conf *mdev,
+ enum Drbd_Packet_Cmd cmd, struct Tl_epoch_entry *e)
+{
+ return _drbd_send_ack(mdev, cmd,
+ cpu_to_be64(e->sector),
+ cpu_to_be32(e->size),
+ e->block_id);
+}
+
+/* This function misuses the block_id field to signal if the blocks
+ * are in sync or not. */
+int drbd_send_ack_ex(struct drbd_conf *mdev, enum Drbd_Packet_Cmd cmd,
+ sector_t sector, int blksize, u64 block_id)
+{
+ return _drbd_send_ack(mdev, cmd,
+ cpu_to_be64(sector),
+ cpu_to_be32(blksize),
+ cpu_to_be64(block_id));
+}
+
+int drbd_send_drequest(struct drbd_conf *mdev, int cmd,
+ sector_t sector, int size, u64 block_id)
+{
+ int ok;
+ struct Drbd_BlockRequest_Packet p;
+
+ p.sector = cpu_to_be64(sector);
+ p.block_id = block_id;
+ p.blksize = cpu_to_be32(size);
+
+ ok = drbd_send_cmd(mdev, USE_DATA_SOCKET, cmd,
+ (struct Drbd_Header *)&p, sizeof(p));
+ return ok;
+}
+
+int drbd_send_drequest_csum(struct drbd_conf *mdev,
+ sector_t sector, int size,
+ void *digest, int digest_size,
+ enum Drbd_Packet_Cmd cmd)
+{
+ int ok;
+ struct Drbd_BlockRequest_Packet p;
+
+ p.sector = cpu_to_be64(sector);
+ p.block_id = BE_DRBD_MAGIC + 0xbeef;
+ p.blksize = cpu_to_be32(size);
+
+ p.head.magic = BE_DRBD_MAGIC;
+ p.head.command = cpu_to_be16(cmd);
+ p.head.length = cpu_to_be16(sizeof(p) - sizeof(struct Drbd_Header) + digest_size);
+
+ mutex_lock(&mdev->data.mutex);
+
+ ok = (sizeof(p) == drbd_send(mdev, mdev->data.socket, &p, sizeof(p), 0));
+ ok = ok && (digest_size == drbd_send(mdev, mdev->data.socket, digest, digest_size, 0));
+
+ mutex_unlock(&mdev->data.mutex);
+
+ return ok;
+}
+
+int drbd_send_ov_request(struct drbd_conf *mdev, sector_t sector, int size)
+{
+ int ok;
+ struct Drbd_BlockRequest_Packet p;
+
+ p.sector = cpu_to_be64(sector);
+ p.block_id = BE_DRBD_MAGIC + 0xbabe;
+ p.blksize = cpu_to_be32(size);
+
+ ok = drbd_send_cmd(mdev, USE_DATA_SOCKET, OVRequest,
+ (struct Drbd_Header *)&p, sizeof(p));
+ return ok;
+}
+
+/* called on sndtimeo
+ * returns FALSE if we should retry,
+ * TRUE if we think connection is dead
+ */
+STATIC int we_should_drop_the_connection(struct drbd_conf *mdev, struct socket *sock)
+{
+ int drop_it;
+ /* long elapsed = (long)(jiffies - mdev->last_received); */
+
+ drop_it = mdev->meta.socket == sock
+ || !mdev->asender.task
+ || get_t_state(&mdev->asender) != Running
+ || mdev->state.conn < Connected;
+
+ if (drop_it)
+ return TRUE;
+
+ drop_it = !--mdev->ko_count;
+ if (!drop_it) {
+ ERR("[%s/%d] sock_sendmsg time expired, ko = %u\n",
+ current->comm, current->pid, mdev->ko_count);
+ request_ping(mdev);
+ }
+
+	return drop_it; /* && (mdev->state == Primary) */
+}
+
+/* The idea of sendpage seems to be to put some kind of reference
+ * to the page into the skb, and to hand it over to the NIC. In
+ * this process get_page() gets called.
+ *
+ * As soon as the page was really sent over the network put_page()
+ * gets called by some part of the network layer. [ NIC driver? ]
+ *
+ * [ get_page() / put_page() increment/decrement the count. If count
+ * reaches 0 the page will be freed. ]
+ *
+ * This works nicely with pages from FSs.
+ * But this means that in protocol A we might signal IO completion too early!
+ *
+ * In order not to corrupt data during a resync we must make sure
+ * that we do not reuse our own buffer pages (EEs) too early, therefore
+ * we have the net_ee list.
+ *
+ * XFS seems to have problems, still, it submits pages with page_count == 0!
+ * As a workaround, we disable sendpage on pages
+ * with page_count == 0 or PageSlab.
+ */
+STATIC int _drbd_no_send_page(struct drbd_conf *mdev, struct page *page,
+ int offset, size_t size)
+{
+ int ret;
+ ret = drbd_send(mdev, mdev->data.socket, kmap(page) + offset, size, 0);
+ kunmap(page);
+ return ret;
+}
+
+int _drbd_send_page(struct drbd_conf *mdev, struct page *page,
+ int offset, size_t size)
+{
+ mm_segment_t oldfs = get_fs();
+ int sent, ok;
+ int len = size;
+
+ /* PARANOIA. if this ever triggers,
+ * something in the layers above us is really kaputt.
+	 * one roundtrip later:
+ * doh. it triggered. so XFS _IS_ really kaputt ...
+ * oh well...
+ */
+ if ((page_count(page) < 1) || PageSlab(page)) {
+ /* e.g. XFS meta- & log-data is in slab pages, which have a
+ * page_count of 0 and/or have PageSlab() set...
+ */
+ sent = _drbd_no_send_page(mdev, page, offset, size);
+ if (likely(sent > 0))
+ len -= sent;
+ goto out;
+ }
+
+ drbd_update_congested(mdev);
+ set_fs(KERNEL_DS);
+ do {
+ sent = mdev->data.socket->ops->sendpage(mdev->data.socket, page,
+ offset, len,
+ MSG_NOSIGNAL);
+ if (sent == -EAGAIN) {
+ if (we_should_drop_the_connection(mdev,
+ mdev->data.socket))
+ break;
+ else
+ continue;
+ }
+ if (sent <= 0) {
+ drbd_WARN("%s: size=%d len=%d sent=%d\n",
+ __func__, (int)size, len, sent);
+ break;
+ }
+ len -= sent;
+ offset += sent;
+ } while (len > 0 /* THINK && mdev->cstate >= Connected*/);
+ set_fs(oldfs);
+ clear_bit(NET_CONGESTED, &mdev->flags);
+
+out:
+ ok = (len == 0);
+ if (likely(ok))
+ mdev->send_cnt += size>>9;
+ return ok;
+}
+
+static inline int _drbd_send_bio(struct drbd_conf *mdev, struct bio *bio)
+{
+ struct bio_vec *bvec;
+ int i;
+ __bio_for_each_segment(bvec, bio, i, 0) {
+ if (!_drbd_no_send_page(mdev, bvec->bv_page,
+ bvec->bv_offset, bvec->bv_len))
+ return 0;
+ }
+ return 1;
+}
+
+static inline int _drbd_send_zc_bio(struct drbd_conf *mdev, struct bio *bio)
+{
+ struct bio_vec *bvec;
+ int i;
+ __bio_for_each_segment(bvec, bio, i, 0) {
+ if (!_drbd_send_page(mdev, bvec->bv_page,
+ bvec->bv_offset, bvec->bv_len))
+ return 0;
+ }
+
+ return 1;
+}
+
+/* Used to send write requests
+ * Primary -> Peer (Data)
+ */
+int drbd_send_dblock(struct drbd_conf *mdev, struct drbd_request *req)
+{
+ int ok = 1;
+ struct Drbd_Data_Packet p;
+ unsigned int dp_flags = 0;
+ void *dgb;
+ int dgs;
+
+ if (!drbd_get_data_sock(mdev))
+ return 0;
+
+ dgs = (mdev->agreed_pro_version >= 87 && mdev->integrity_w_tfm) ?
+ crypto_hash_digestsize(mdev->integrity_w_tfm) : 0;
+
+ p.head.magic = BE_DRBD_MAGIC;
+ p.head.command = cpu_to_be16(Data);
+ p.head.length =
+ cpu_to_be16(sizeof(p) - sizeof(struct Drbd_Header) + dgs + req->size);
+
+ p.sector = cpu_to_be64(req->sector);
+ p.block_id = (unsigned long)req;
+ p.seq_num = cpu_to_be32(req->seq_num =
+ atomic_add_return(1, &mdev->packet_seq));
+ dp_flags = 0;
+
+ /* NOTE: no need to check if barriers supported here as we would
+ * not pass the test in make_request_common in that case
+ */
+ if (bio_barrier(req->master_bio))
+ dp_flags |= DP_HARDBARRIER;
+ if (bio_sync(req->master_bio))
+ dp_flags |= DP_RW_SYNC;
+ if (mdev->state.conn >= SyncSource &&
+ mdev->state.conn <= PausedSyncT)
+ dp_flags |= DP_MAY_SET_IN_SYNC;
+
+ p.dp_flags = cpu_to_be32(dp_flags);
+ dump_packet(mdev, mdev->data.socket, 0, (void *)&p, __FILE__, __LINE__);
+ set_bit(UNPLUG_REMOTE, &mdev->flags);
+ ok = (sizeof(p) ==
+ drbd_send(mdev, mdev->data.socket, &p, sizeof(p), MSG_MORE));
+ if (ok && dgs) {
+ dgb = mdev->int_dig_out;
+ drbd_csum(mdev, mdev->integrity_w_tfm, req->master_bio, dgb);
+ ok = drbd_send(mdev, mdev->data.socket, dgb, dgs, MSG_MORE);
+ }
+ if (ok) {
+ if (mdev->net_conf->wire_protocol == DRBD_PROT_A)
+ ok = _drbd_send_bio(mdev, req->master_bio);
+ else
+ ok = _drbd_send_zc_bio(mdev, req->master_bio);
+ }
+
+ drbd_put_data_sock(mdev);
+ return ok;
+}
+
+/* answer packet, used to send data back for read requests:
+ * Peer -> (diskless) Primary (DataReply)
+ * SyncSource -> SyncTarget (RSDataReply)
+ */
+int drbd_send_block(struct drbd_conf *mdev, enum Drbd_Packet_Cmd cmd,
+ struct Tl_epoch_entry *e)
+{
+ int ok;
+ struct Drbd_Data_Packet p;
+ void *dgb;
+ int dgs;
+
+ dgs = (mdev->agreed_pro_version >= 87 && mdev->integrity_w_tfm) ?
+ crypto_hash_digestsize(mdev->integrity_w_tfm) : 0;
+
+ p.head.magic = BE_DRBD_MAGIC;
+ p.head.command = cpu_to_be16(cmd);
+ p.head.length =
+ cpu_to_be16(sizeof(p) - sizeof(struct Drbd_Header) + dgs + e->size);
+
+ p.sector = cpu_to_be64(e->sector);
+ p.block_id = e->block_id;
+ /* p.seq_num = 0; No sequence numbers here.. */
+
+ /* Only called by our kernel thread.
+	 * This one may be interrupted by DRBD_SIG and/or DRBD_SIGKILL
+ * in response to admin command or module unload.
+ */
+ if (!drbd_get_data_sock(mdev))
+ return 0;
+
+ dump_packet(mdev, mdev->data.socket, 0, (void *)&p, __FILE__, __LINE__);
+ ok = sizeof(p) == drbd_send(mdev, mdev->data.socket, &p,
+ sizeof(p), MSG_MORE);
+ if (ok && dgs) {
+ dgb = mdev->int_dig_out;
+ drbd_csum(mdev, mdev->integrity_w_tfm, e->private_bio, dgb);
+ ok = drbd_send(mdev, mdev->data.socket, dgb, dgs, MSG_MORE);
+ }
+ if (ok)
+ ok = _drbd_send_zc_bio(mdev, e->private_bio);
+
+ drbd_put_data_sock(mdev);
+ return ok;
+}
+
+/*
+ drbd_send distinguishes two cases:
+
+ Packets sent via the data socket "sock"
+ and packets sent via the meta data socket "msock"
+
+ sock msock
+ -----------------+-------------------------+------------------------------
+ timeout conf.timeout / 2 conf.timeout / 2
+ timeout action send a ping via msock Abort communication
+ and close all sockets
+*/
+
+/*
+ * you must have down()ed the appropriate [m]sock_mutex elsewhere!
+ */
+int drbd_send(struct drbd_conf *mdev, struct socket *sock,
+ void *buf, size_t size, unsigned msg_flags)
+{
+ struct kvec iov;
+ struct msghdr msg;
+ int rv, sent = 0;
+
+ if (!sock)
+ return -1000;
+
+ /* THINK if (signal_pending) return ... ? */
+
+ iov.iov_base = buf;
+ iov.iov_len = size;
+
+ msg.msg_name = NULL;
+ msg.msg_namelen = 0;
+ msg.msg_control = NULL;
+ msg.msg_controllen = 0;
+ msg.msg_flags = msg_flags | MSG_NOSIGNAL;
+
+ if (sock == mdev->data.socket) {
+ mdev->ko_count = mdev->net_conf->ko_count;
+ drbd_update_congested(mdev);
+ }
+ do {
+ /* STRANGE
+ * tcp_sendmsg does _not_ use its size parameter at all ?
+ *
+ * -EAGAIN on timeout, -EINTR on signal.
+ */
+/* THINK
+ * do we need to block DRBD_SIG if sock == &meta.socket ??
+ * otherwise wake_asender() might interrupt some send_*Ack !
+ */
+ rv = kernel_sendmsg(sock, &msg, &iov, 1, size);
+ if (rv == -EAGAIN) {
+ if (we_should_drop_the_connection(mdev, sock))
+ break;
+ else
+ continue;
+ }
+ D_ASSERT(rv != 0);
+ if (rv == -EINTR) {
+ flush_signals(current);
+ rv = 0;
+ }
+ if (rv < 0)
+ break;
+ sent += rv;
+ iov.iov_base += rv;
+ iov.iov_len -= rv;
+ } while (sent < size);
+
+ if (sock == mdev->data.socket)
+ clear_bit(NET_CONGESTED, &mdev->flags);
+
+ if (rv <= 0) {
+ if (rv != -EAGAIN) {
+ ERR("%s_sendmsg returned %d\n",
+ sock == mdev->meta.socket ? "msock" : "sock",
+ rv);
+ drbd_force_state(mdev, NS(conn, BrokenPipe));
+ } else
+ drbd_force_state(mdev, NS(conn, Timeout));
+ }
+
+ return sent;
+}
+
+static int drbd_open(struct block_device *bdev, fmode_t mode)
+{
+ struct drbd_conf *mdev = bdev->bd_disk->private_data;
+ unsigned long flags;
+ int rv = 0;
+
+ spin_lock_irqsave(&mdev->req_lock, flags);
+ /* to have a stable mdev->state.role
+ * and no race with updating open_cnt */
+
+ if (mdev->state.role != Primary) {
+ if (mode & FMODE_WRITE)
+ rv = -EROFS;
+ else if (!allow_oos)
+ rv = -EMEDIUMTYPE;
+ }
+
+ if (!rv)
+ mdev->open_cnt++;
+ spin_unlock_irqrestore(&mdev->req_lock, flags);
+
+ return rv;
+}
+
+static int drbd_release(struct gendisk *gd, fmode_t mode)
+{
+ struct drbd_conf *mdev = gd->private_data;
+ mdev->open_cnt--;
+ return 0;
+}
+
+STATIC void drbd_unplug_fn(struct request_queue *q)
+{
+ struct drbd_conf *mdev = q->queuedata;
+
+ MTRACE(TraceTypeUnplug, TraceLvlSummary,
+ INFO("got unplugged ap_bio_count=%d\n",
+ atomic_read(&mdev->ap_bio_cnt));
+ );
+
+ /* unplug FIRST */
+ spin_lock_irq(q->queue_lock);
+ blk_remove_plug(q);
+ spin_unlock_irq(q->queue_lock);
+
+ /* only if connected */
+ spin_lock_irq(&mdev->req_lock);
+ if (mdev->state.pdsk >= Inconsistent && mdev->state.conn >= Connected) {
+ D_ASSERT(mdev->state.role == Primary);
+ if (test_and_clear_bit(UNPLUG_REMOTE, &mdev->flags)) {
+ /* add to the data.work queue,
+ * unless already queued.
+ * XXX this might be a good addition to drbd_queue_work
+ * anyways, to detect "double queuing" ... */
+ if (list_empty(&mdev->unplug_work.list))
+ drbd_queue_work(&mdev->data.work,
+ &mdev->unplug_work);
+ }
+ }
+ spin_unlock_irq(&mdev->req_lock);
+
+ if (mdev->state.disk >= Inconsistent)
+ drbd_kick_lo(mdev);
+}
+
+STATIC void drbd_set_defaults(struct drbd_conf *mdev)
+{
+ mdev->sync_conf.after = DRBD_AFTER_DEF;
+ mdev->sync_conf.rate = DRBD_RATE_DEF;
+ mdev->sync_conf.al_extents = DRBD_AL_EXTENTS_DEF;
+ mdev->state = (union drbd_state_t) {
+ { .role = Secondary,
+ .peer = Unknown,
+ .conn = StandAlone,
+ .disk = Diskless,
+ .pdsk = DUnknown,
+ .susp = 0
+ } };
+}
+
+void drbd_init_set_defaults(struct drbd_conf *mdev)
+{
+ /* the memset(,0,) did most of this.
+ * note: only assignments, no allocation in here */
+
+ drbd_set_defaults(mdev);
+
+ /* for now, we do NOT yet support it,
+ * even though we start some framework
+ * to eventually support barriers */
+ set_bit(NO_BARRIER_SUPP, &mdev->flags);
+
+ atomic_set(&mdev->ap_bio_cnt, 0);
+ atomic_set(&mdev->ap_pending_cnt, 0);
+ atomic_set(&mdev->rs_pending_cnt, 0);
+ atomic_set(&mdev->unacked_cnt, 0);
+ atomic_set(&mdev->local_cnt, 0);
+ atomic_set(&mdev->net_cnt, 0);
+ atomic_set(&mdev->packet_seq, 0);
+ atomic_set(&mdev->pp_in_use, 0);
+
+ mutex_init(&mdev->md_io_mutex);
+ mutex_init(&mdev->data.mutex);
+ mutex_init(&mdev->meta.mutex);
+ sema_init(&mdev->data.work.s, 0);
+ sema_init(&mdev->meta.work.s, 0);
+ mutex_init(&mdev->state_mutex);
+
+ spin_lock_init(&mdev->data.work.q_lock);
+ spin_lock_init(&mdev->meta.work.q_lock);
+
+ spin_lock_init(&mdev->al_lock);
+ spin_lock_init(&mdev->req_lock);
+ spin_lock_init(&mdev->peer_seq_lock);
+ spin_lock_init(&mdev->epoch_lock);
+
+ INIT_LIST_HEAD(&mdev->active_ee);
+ INIT_LIST_HEAD(&mdev->sync_ee);
+ INIT_LIST_HEAD(&mdev->done_ee);
+ INIT_LIST_HEAD(&mdev->read_ee);
+ INIT_LIST_HEAD(&mdev->net_ee);
+ INIT_LIST_HEAD(&mdev->resync_reads);
+ INIT_LIST_HEAD(&mdev->data.work.q);
+ INIT_LIST_HEAD(&mdev->meta.work.q);
+ INIT_LIST_HEAD(&mdev->resync_work.list);
+ INIT_LIST_HEAD(&mdev->unplug_work.list);
+ INIT_LIST_HEAD(&mdev->md_sync_work.list);
+ INIT_LIST_HEAD(&mdev->bm_io_work.w.list);
+ mdev->resync_work.cb = w_resync_inactive;
+ mdev->unplug_work.cb = w_send_write_hint;
+ mdev->md_sync_work.cb = w_md_sync;
+ mdev->bm_io_work.w.cb = w_bitmap_io;
+ init_timer(&mdev->resync_timer);
+ init_timer(&mdev->md_sync_timer);
+ mdev->resync_timer.function = resync_timer_fn;
+ mdev->resync_timer.data = (unsigned long) mdev;
+ mdev->md_sync_timer.function = md_sync_timer_fn;
+ mdev->md_sync_timer.data = (unsigned long) mdev;
+
+ init_waitqueue_head(&mdev->misc_wait);
+ init_waitqueue_head(&mdev->state_wait);
+ init_waitqueue_head(&mdev->ee_wait);
+ init_waitqueue_head(&mdev->al_wait);
+ init_waitqueue_head(&mdev->seq_wait);
+
+ drbd_thread_init(mdev, &mdev->receiver, drbdd_init);
+ drbd_thread_init(mdev, &mdev->worker, drbd_worker);
+ drbd_thread_init(mdev, &mdev->asender, drbd_asender);
+
+ mdev->agreed_pro_version = PRO_VERSION_MAX;
+ mdev->write_ordering = WO_bio_barrier;
+ mdev->resync_wenr = LC_FREE;
+}
+
+void drbd_mdev_cleanup(struct drbd_conf *mdev)
+{
+ if (mdev->receiver.t_state != None)
+ ERR("ASSERT FAILED: receiver t_state == %d expected 0.\n",
+ mdev->receiver.t_state);
+
+ /* no need to lock it, I'm the only thread alive */
+ if (atomic_read(&mdev->current_epoch->epoch_size) != 0)
+ ERR("epoch_size:%d\n", atomic_read(&mdev->current_epoch->epoch_size));
+ mdev->al_writ_cnt =
+ mdev->bm_writ_cnt =
+ mdev->read_cnt =
+ mdev->recv_cnt =
+ mdev->send_cnt =
+ mdev->writ_cnt =
+ mdev->p_size =
+ mdev->rs_start =
+ mdev->rs_total =
+ mdev->rs_failed =
+ mdev->rs_mark_left =
+ mdev->rs_mark_time = 0;
+ D_ASSERT(mdev->net_conf == NULL);
+
+ drbd_set_my_capacity(mdev, 0);
+ drbd_bm_resize(mdev, 0);
+ drbd_bm_cleanup(mdev);
+
+ drbd_free_resources(mdev);
+
+ /*
+	 * currently we call drbd_init_ee only on module load, so
+	 * we may call drbd_release_ee only on module unload!
+ */
+ D_ASSERT(list_empty(&mdev->active_ee));
+ D_ASSERT(list_empty(&mdev->sync_ee));
+ D_ASSERT(list_empty(&mdev->done_ee));
+ D_ASSERT(list_empty(&mdev->read_ee));
+ D_ASSERT(list_empty(&mdev->net_ee));
+ D_ASSERT(list_empty(&mdev->resync_reads));
+ D_ASSERT(list_empty(&mdev->data.work.q));
+ D_ASSERT(list_empty(&mdev->meta.work.q));
+ D_ASSERT(list_empty(&mdev->resync_work.list));
+ D_ASSERT(list_empty(&mdev->unplug_work.list));
+
+}
+
+
+STATIC void drbd_destroy_mempools(void)
+{
+ struct page *page;
+
+ while (drbd_pp_pool) {
+ page = drbd_pp_pool;
+ drbd_pp_pool = (struct page *)page_private(page);
+ __free_page(page);
+ drbd_pp_vacant--;
+ }
+
+ /* D_ASSERT(atomic_read(&drbd_pp_vacant)==0); */
+
+ if (drbd_ee_mempool)
+ mempool_destroy(drbd_ee_mempool);
+ if (drbd_request_mempool)
+ mempool_destroy(drbd_request_mempool);
+ if (drbd_ee_cache)
+ kmem_cache_destroy(drbd_ee_cache);
+ if (drbd_request_cache)
+ kmem_cache_destroy(drbd_request_cache);
+
+ drbd_ee_mempool = NULL;
+ drbd_request_mempool = NULL;
+ drbd_ee_cache = NULL;
+ drbd_request_cache = NULL;
+
+ return;
+}
+
+STATIC int drbd_create_mempools(void)
+{
+ struct page *page;
+ const int number = (DRBD_MAX_SEGMENT_SIZE/PAGE_SIZE) * minor_count;
+ int i;
+
+ /* prepare our caches and mempools */
+ drbd_request_mempool = NULL;
+ drbd_ee_cache = NULL;
+ drbd_request_cache = NULL;
+ drbd_pp_pool = NULL;
+
+ /* caches */
+ drbd_request_cache = kmem_cache_create(
+ "drbd_req_cache", sizeof(struct drbd_request), 0, 0, NULL);
+ if (drbd_request_cache == NULL)
+ goto Enomem;
+
+ drbd_ee_cache = kmem_cache_create(
+ "drbd_ee_cache", sizeof(struct Tl_epoch_entry), 0, 0, NULL);
+ if (drbd_ee_cache == NULL)
+ goto Enomem;
+
+ /* mempools */
+ drbd_request_mempool = mempool_create(number,
+ mempool_alloc_slab, mempool_free_slab, drbd_request_cache);
+ if (drbd_request_mempool == NULL)
+ goto Enomem;
+
+ drbd_ee_mempool = mempool_create(number,
+ mempool_alloc_slab, mempool_free_slab, drbd_ee_cache);
+	if (drbd_ee_mempool == NULL)
+ goto Enomem;
+
+ /* drbd's page pool */
+ spin_lock_init(&drbd_pp_lock);
+
+ for (i = 0; i < number; i++) {
+ page = alloc_page(GFP_HIGHUSER);
+ if (!page)
+ goto Enomem;
+ set_page_private(page, (unsigned long)drbd_pp_pool);
+ drbd_pp_pool = page;
+ }
+ drbd_pp_vacant = number;
+
+ return 0;
+
+Enomem:
+ drbd_destroy_mempools(); /* in case we allocated some */
+ return -ENOMEM;
+}
+
+STATIC int drbd_notify_sys(struct notifier_block *this, unsigned long code,
+ void *unused)
+{
+	/* just so we have it. you never know what interesting things we
+ * might want to do here some day...
+ */
+
+ return NOTIFY_DONE;
+}
+
+STATIC struct notifier_block drbd_notifier = {
+ .notifier_call = drbd_notify_sys,
+};
+
+static void drbd_release_ee_lists(struct drbd_conf *mdev)
+{
+ int rr;
+
+ rr = drbd_release_ee(mdev, &mdev->active_ee);
+ if (rr)
+ ERR("%d EEs in active list found!\n", rr);
+
+ rr = drbd_release_ee(mdev, &mdev->sync_ee);
+ if (rr)
+ ERR("%d EEs in sync list found!\n", rr);
+
+ rr = drbd_release_ee(mdev, &mdev->read_ee);
+ if (rr)
+ ERR("%d EEs in read list found!\n", rr);
+
+ rr = drbd_release_ee(mdev, &mdev->done_ee);
+ if (rr)
+ ERR("%d EEs in done list found!\n", rr);
+
+ rr = drbd_release_ee(mdev, &mdev->net_ee);
+ if (rr)
+ ERR("%d EEs in net list found!\n", rr);
+}
+
+/* caution. no locking.
+ * currently only used from module cleanup code. */
+static void drbd_delete_device(unsigned int minor)
+{
+ struct drbd_conf *mdev = minor_to_mdev(minor);
+
+ if (!mdev)
+ return;
+
+ /* paranoia asserts */
+ if (mdev->open_cnt != 0)
+ ERR("open_cnt = %d in %s:%u", mdev->open_cnt,
+ __FILE__ , __LINE__);
+
+ ERR_IF (!list_empty(&mdev->data.work.q)) {
+ struct list_head *lp;
+ list_for_each(lp, &mdev->data.work.q) {
+ DUMPP(lp);
+ }
+ };
+ /* end paranoia asserts */
+
+ del_gendisk(mdev->vdisk);
+
+ /* cleanup stuff that may have been allocated during
+ * device (re-)configuration or state changes */
+
+ if (mdev->this_bdev)
+ bdput(mdev->this_bdev);
+
+ drbd_free_resources(mdev);
+
+ drbd_release_ee_lists(mdev);
+
+ /* should be free'd on disconnect? */
+ kfree(mdev->ee_hash);
+ /*
+ mdev->ee_hash_s = 0;
+ mdev->ee_hash = NULL;
+ */
+
+ if (mdev->act_log)
+ lc_free(mdev->act_log);
+ if (mdev->resync)
+ lc_free(mdev->resync);
+
+ kfree(mdev->p_uuid);
+ /* mdev->p_uuid = NULL; */
+
+ kfree(mdev->int_dig_out);
+ kfree(mdev->int_dig_in);
+ kfree(mdev->int_dig_vv);
+
+ /* cleanup the rest that has been
+ * allocated from drbd_new_device
+ * and actually free the mdev itself */
+ drbd_free_mdev(mdev);
+}
+
+STATIC void drbd_cleanup(void)
+{
+ unsigned int i;
+
+ unregister_reboot_notifier(&drbd_notifier);
+
+ drbd_nl_cleanup();
+
+ if (minor_table) {
+ if (drbd_proc)
+ remove_proc_entry("drbd", NULL);
+ i = minor_count;
+ while (i--)
+ drbd_delete_device(i);
+ drbd_destroy_mempools();
+ }
+
+ kfree(minor_table);
+
+ unregister_blkdev(DRBD_MAJOR, "drbd");
+
+ printk(KERN_INFO "drbd: module cleanup done.\n");
+}
+
+/**
+ * drbd_congested: Returns 1<<BDI_write_congested and/or
+ * 1<<BDI_read_congested if we are congested. This interface is known
+ * to be used by pdflush.
+ */
+static int drbd_congested(void *congested_data, int bdi_bits)
+{
+ struct drbd_conf *mdev = congested_data;
+ struct request_queue *q;
+ char reason = '-';
+ int r = 0;
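+	/* reason codes: 'd' = IO frozen by DRBD, 'b' = backing device
+	 * congested, 'n' = network congested; remembered in
+	 * mdev->congestion_reason for diagnostics */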
+
+ if (!__inc_ap_bio_cond(mdev)) {
+ /* DRBD has frozen IO */
+ r = bdi_bits;
+ reason = 'd';
+ goto out;
+ }
+
+ if (inc_local(mdev)) {
+ q = bdev_get_queue(mdev->bc->backing_bdev);
+ r = bdi_congested(&q->backing_dev_info, bdi_bits);
+ dec_local(mdev);
+ if (r) {
+ reason = 'b';
+ goto out;
+ }
+ }
+
+ if (bdi_bits & (1 << BDI_write_congested) && test_bit(NET_CONGESTED, &mdev->flags)) {
+ r = (1 << BDI_write_congested);
+ reason = 'n';
+ }
+
+out:
+ mdev->congestion_reason = reason;
+ return r;
+}
+
+struct drbd_conf *drbd_new_device(unsigned int minor)
+{
+ struct drbd_conf *mdev;
+ struct gendisk *disk;
+ struct request_queue *q;
+
+ mdev = kzalloc(sizeof(struct drbd_conf), GFP_KERNEL);
+ if (!mdev)
+ return NULL;
+
+ mdev->minor = minor;
+
+ drbd_init_set_defaults(mdev);
+
+ q = blk_alloc_queue(GFP_KERNEL);
+ if (!q)
+ goto out_no_q;
+ mdev->rq_queue = q;
+ q->queuedata = mdev;
+ q->max_segment_size = DRBD_MAX_SEGMENT_SIZE;
+
+ disk = alloc_disk(1);
+ if (!disk)
+ goto out_no_disk;
+ mdev->vdisk = disk;
+
+ set_disk_ro(disk, TRUE);
+
+ disk->queue = q;
+ disk->major = DRBD_MAJOR;
+ disk->first_minor = minor;
+ disk->fops = &drbd_ops;
+ sprintf(disk->disk_name, "drbd%d", minor);
+ disk->private_data = mdev;
+
+ mdev->this_bdev = bdget(MKDEV(DRBD_MAJOR, minor));
+ /* we have no partitions. we contain only ourselves. */
+ mdev->this_bdev->bd_contains = mdev->this_bdev;
+
+ q->backing_dev_info.congested_fn = drbd_congested;
+ q->backing_dev_info.congested_data = mdev;
+
+ blk_queue_make_request(q, drbd_make_request_26);
+ blk_queue_bounce_limit(q, BLK_BOUNCE_ANY);
+ blk_queue_merge_bvec(q, drbd_merge_bvec);
+ q->queue_lock = &mdev->req_lock; /* needed since we use */
+	/* plugging on a queue that actually has no requests! */
+ q->unplug_fn = drbd_unplug_fn;
+
+ mdev->md_io_page = alloc_page(GFP_KERNEL);
+ if (!mdev->md_io_page)
+ goto out_no_io_page;
+
+ if (drbd_bm_init(mdev))
+ goto out_no_bitmap;
+ /* no need to lock access, we are still initializing the module. */
+ if (!tl_init(mdev))
+ goto out_no_tl;
+
+ mdev->app_reads_hash = kzalloc(APP_R_HSIZE*sizeof(void *), GFP_KERNEL);
+ if (!mdev->app_reads_hash)
+ goto out_no_app_reads;
+
+ mdev->current_epoch = kzalloc(sizeof(struct drbd_epoch), GFP_KERNEL);
+ if (!mdev->current_epoch)
+ goto out_no_epoch;
+
+ INIT_LIST_HEAD(&mdev->current_epoch->list);
+ mdev->epochs = 1;
+
+ return mdev;
+
+/* out_whatever_else:
+ kfree(mdev->current_epoch); */
+out_no_epoch:
+ kfree(mdev->app_reads_hash);
+out_no_app_reads:
+ tl_cleanup(mdev);
+out_no_tl:
+ drbd_bm_cleanup(mdev);
+out_no_bitmap:
+ __free_page(mdev->md_io_page);
+out_no_io_page:
+ put_disk(disk);
+out_no_disk:
+ blk_cleanup_queue(q);
+out_no_q:
+ kfree(mdev);
+ return NULL;
+}
+
+/* counterpart of drbd_new_device.
+ * last part of drbd_delete_device. */
+void drbd_free_mdev(struct drbd_conf *mdev)
+{
+ kfree(mdev->current_epoch);
+ kfree(mdev->app_reads_hash);
+ tl_cleanup(mdev);
+ if (mdev->bitmap) /* should no longer be there. */
+ drbd_bm_cleanup(mdev);
+ __free_page(mdev->md_io_page);
+ put_disk(mdev->vdisk);
+ blk_cleanup_queue(mdev->rq_queue);
+ kfree(mdev);
+}
+
+
+int __init drbd_init(void)
+{
+ int err;
+
+ if (sizeof(struct Drbd_HandShake_Packet) != 80) {
+ printk(KERN_ERR
+ "drbd: never change the size or layout "
+ "of the HandShake packet.\n");
+ return -EINVAL;
+ }
+
+ if (1 > minor_count || minor_count > 255) {
+ printk(KERN_ERR
+ "drbd: invalid minor_count (%d)\n", minor_count);
+#ifdef MODULE
+ return -EINVAL;
+#else
+ minor_count = 8;
+#endif
+ }
+
+ err = drbd_nl_init();
+ if (err)
+ return err;
+
+ err = register_blkdev(DRBD_MAJOR, "drbd");
+ if (err) {
+ printk(KERN_ERR
+ "drbd: unable to register block device major %d\n",
+ DRBD_MAJOR);
+ return err;
+ }
+
+ register_reboot_notifier(&drbd_notifier);
+
+ /*
+ * allocate all necessary structs
+ */
+ err = -ENOMEM;
+
+ init_waitqueue_head(&drbd_pp_wait);
+
+ drbd_proc = NULL; /* play safe for drbd_cleanup */
+ minor_table = kzalloc(sizeof(struct drbd_conf *)*minor_count,
+ GFP_KERNEL);
+ if (!minor_table)
+ goto Enomem;
+
+ err = drbd_create_mempools();
+ if (err)
+ goto Enomem;
+
+ drbd_proc = proc_create("drbd", S_IFREG | S_IRUGO , NULL, &drbd_proc_fops);
+ if (!drbd_proc) {
+ printk(KERN_ERR "drbd: unable to register proc file\n");
+ goto Enomem;
+ }
+
+ rwlock_init(&global_state_lock);
+
+ printk(KERN_INFO "drbd: initialised. "
+ "Version: " REL_VERSION " (api:%d/proto:%d-%d)\n",
+ API_VERSION, PRO_VERSION_MIN, PRO_VERSION_MAX);
+ printk(KERN_INFO "drbd: %s\n", drbd_buildtag());
+ printk(KERN_INFO "drbd: registered as block device major %d\n",
+ DRBD_MAJOR);
+ printk(KERN_INFO "drbd: minor_table @ 0x%p\n", minor_table);
+
+ return 0; /* Success! */
+
+Enomem:
+ drbd_cleanup();
+ if (err == -ENOMEM)
+ /* currently always the case */
+ printk(KERN_ERR "drbd: ran out of memory\n");
+ else
+ printk(KERN_ERR "drbd: initialization failure\n");
+ return err;
+}
+
+void drbd_free_bc(struct drbd_backing_dev *bc)
+{
+ if (bc == NULL)
+ return;
+
+ bd_release(bc->backing_bdev);
+ bd_release(bc->md_bdev);
+
+ fput(bc->lo_file);
+ fput(bc->md_file);
+
+ kfree(bc);
+}
+
+void drbd_free_sock(struct drbd_conf *mdev)
+{
+ if (mdev->data.socket) {
+ sock_release(mdev->data.socket);
+ mdev->data.socket = NULL;
+ }
+ if (mdev->meta.socket) {
+ sock_release(mdev->meta.socket);
+ mdev->meta.socket = NULL;
+ }
+}
+
+
+void drbd_free_resources(struct drbd_conf *mdev)
+{
+ crypto_free_hash(mdev->csums_tfm);
+ mdev->csums_tfm = NULL;
+ crypto_free_hash(mdev->verify_tfm);
+ mdev->verify_tfm = NULL;
+ crypto_free_hash(mdev->cram_hmac_tfm);
+ mdev->cram_hmac_tfm = NULL;
+ crypto_free_hash(mdev->integrity_w_tfm);
+ mdev->integrity_w_tfm = NULL;
+ crypto_free_hash(mdev->integrity_r_tfm);
+ mdev->integrity_r_tfm = NULL;
+
+ drbd_free_sock(mdev);
+
+ __no_warn(local,
+ drbd_free_bc(mdev->bc);
+ mdev->bc = NULL;);
+}
+
+/*********************************/
+/* meta data management */
+
+struct meta_data_on_disk {
+ u64 la_size; /* last agreed size. */
+ u64 uuid[UUID_SIZE]; /* UUIDs. */
+ u64 device_uuid;
+ u64 reserved_u64_1;
+ u32 flags; /* MDF */
+ u32 magic;
+ u32 md_size_sect;
+ u32 al_offset; /* offset to this block */
+ u32 al_nr_extents; /* important for restoring the AL */
+ /* `-- act_log->nr_elements <-- sync_conf.al_extents */
+ u32 bm_offset; /* offset to the bitmap, from here */
+ u32 bm_bytes_per_bit; /* BM_BLOCK_SIZE */
+ u32 reserved_u32[4];
+
+} __attribute((packed));
+
+/**
+ * drbd_md_sync:
+ * Writes the meta data super block if the MD_DIRTY flag bit is set.
+ */
+void drbd_md_sync(struct drbd_conf *mdev)
+{
+ struct meta_data_on_disk *buffer;
+ sector_t sector;
+ int i;
+
+ if (!test_and_clear_bit(MD_DIRTY, &mdev->flags))
+ return;
+ del_timer(&mdev->md_sync_timer);
+
+	/* We use Failed here, and not Attaching, because we try to write
+ * metadata even if we detach due to a disk failure! */
+ if (!inc_local_if_state(mdev, Failed))
+ return;
+
+ MTRACE(TraceTypeMDIO, TraceLvlSummary,
+ INFO("Writing meta data super block now.\n");
+ );
+
+ mutex_lock(&mdev->md_io_mutex);
+ buffer = (struct meta_data_on_disk *)page_address(mdev->md_io_page);
+ memset(buffer, 0, 512);
+
+ buffer->la_size = cpu_to_be64(drbd_get_capacity(mdev->this_bdev));
+ for (i = Current; i < UUID_SIZE; i++)
+ buffer->uuid[i] = cpu_to_be64(mdev->bc->md.uuid[i]);
+ buffer->flags = cpu_to_be32(mdev->bc->md.flags);
+ buffer->magic = cpu_to_be32(DRBD_MD_MAGIC);
+
+ buffer->md_size_sect = cpu_to_be32(mdev->bc->md.md_size_sect);
+ buffer->al_offset = cpu_to_be32(mdev->bc->md.al_offset);
+ buffer->al_nr_extents = cpu_to_be32(mdev->act_log->nr_elements);
+ buffer->bm_bytes_per_bit = cpu_to_be32(BM_BLOCK_SIZE);
+ buffer->device_uuid = cpu_to_be64(mdev->bc->md.device_uuid);
+
+ buffer->bm_offset = cpu_to_be32(mdev->bc->md.bm_offset);
+
+ D_ASSERT(drbd_md_ss__(mdev, mdev->bc) == mdev->bc->md.md_offset);
+ sector = mdev->bc->md.md_offset;
+
+ if (drbd_md_sync_page_io(mdev, mdev->bc, sector, WRITE)) {
+ clear_bit(MD_DIRTY, &mdev->flags);
+ } else {
+ /* this was a try anyways ... */
+ ERR("meta data update failed!\n");
+
+ drbd_chk_io_error(mdev, 1, TRUE);
+ drbd_io_error(mdev, TRUE);
+ }
+
+ /* Update mdev->bc->md.la_size_sect,
+ * since we updated it on metadata. */
+ mdev->bc->md.la_size_sect = drbd_get_capacity(mdev->this_bdev);
+
+ mutex_unlock(&mdev->md_io_mutex);
+ dec_local(mdev);
+}
+
+/**
+ * drbd_md_read:
+ * @bdev: describes the backing storage and the meta-data storage
+ * Reads the meta data from bdev. Return 0 (NoError) on success, and an
+ * enum ret_codes in case something goes wrong.
+ * Currently only: MDIOError, MDInvalid.
+ */
+int drbd_md_read(struct drbd_conf *mdev, struct drbd_backing_dev *bdev)
+{
+ struct meta_data_on_disk *buffer;
+ int i, rv = NoError;
+
+ if (!inc_local_if_state(mdev, Attaching))
+ return MDIOError;
+
+ mutex_lock(&mdev->md_io_mutex);
+ buffer = (struct meta_data_on_disk *)page_address(mdev->md_io_page);
+
+ if (!drbd_md_sync_page_io(mdev, bdev, bdev->md.md_offset, READ)) {
+		/* NOTE: can't do normal error processing here as this is
+ called BEFORE disk is attached */
+ ERR("Error while reading metadata.\n");
+ rv = MDIOError;
+ goto err;
+ }
+
+ if (be32_to_cpu(buffer->magic) != DRBD_MD_MAGIC) {
+ ERR("Error while reading metadata, magic not found.\n");
+ rv = MDInvalid;
+ goto err;
+ }
+ if (be32_to_cpu(buffer->al_offset) != bdev->md.al_offset) {
+ ERR("unexpected al_offset: %d (expected %d)\n",
+ be32_to_cpu(buffer->al_offset), bdev->md.al_offset);
+ rv = MDInvalid;
+ goto err;
+ }
+ if (be32_to_cpu(buffer->bm_offset) != bdev->md.bm_offset) {
+ ERR("unexpected bm_offset: %d (expected %d)\n",
+ be32_to_cpu(buffer->bm_offset), bdev->md.bm_offset);
+ rv = MDInvalid;
+ goto err;
+ }
+ if (be32_to_cpu(buffer->md_size_sect) != bdev->md.md_size_sect) {
+ ERR("unexpected md_size: %u (expected %u)\n",
+ be32_to_cpu(buffer->md_size_sect), bdev->md.md_size_sect);
+ rv = MDInvalid;
+ goto err;
+ }
+
+ if (be32_to_cpu(buffer->bm_bytes_per_bit) != BM_BLOCK_SIZE) {
+ ERR("unexpected bm_bytes_per_bit: %u (expected %u)\n",
+ be32_to_cpu(buffer->bm_bytes_per_bit), BM_BLOCK_SIZE);
+ rv = MDInvalid;
+ goto err;
+ }
+
+ bdev->md.la_size_sect = be64_to_cpu(buffer->la_size);
+ for (i = Current; i < UUID_SIZE; i++)
+ bdev->md.uuid[i] = be64_to_cpu(buffer->uuid[i]);
+ bdev->md.flags = be32_to_cpu(buffer->flags);
+ mdev->sync_conf.al_extents = be32_to_cpu(buffer->al_nr_extents);
+ bdev->md.device_uuid = be64_to_cpu(buffer->device_uuid);
+
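+	/* sanity clamp: 7 is the minimum number of AL extents,
+	 * smaller on-disk values fall back to the default of 127 */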
+ if (mdev->sync_conf.al_extents < 7)
+ mdev->sync_conf.al_extents = 127;
+
+ err:
+ mutex_unlock(&mdev->md_io_mutex);
+ dec_local(mdev);
+
+ return rv;
+}
+
+/**
+ * drbd_md_mark_dirty:
+ * Call this function if you change anything that should be written to
+ * the meta-data super block. This function sets MD_DIRTY, and arms a
+ * timer that ensures drbd_md_sync() gets called within five seconds.
+ */
+void drbd_md_mark_dirty(struct drbd_conf *mdev)
+{
+ set_bit(MD_DIRTY, &mdev->flags);
+ mod_timer(&mdev->md_sync_timer, jiffies + 5*HZ);
+}
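+/* Typical usage: the drbd_uuid_* and drbd_md_*_flag helpers below call
+ * drbd_md_mark_dirty(); the actual write happens in drbd_md_sync(), at
+ * the latest when md_sync_timer fires and the worker runs w_md_sync(). */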
+
+
+STATIC void drbd_uuid_move_history(struct drbd_conf *mdev) __must_hold(local)
+{
+ int i;
+
+ for (i = History_start; i < History_end; i++) {
+ mdev->bc->md.uuid[i+1] = mdev->bc->md.uuid[i];
+
+ MTRACE(TraceTypeUuid, TraceLvlAll,
+ drbd_print_uuid(mdev, i+1);
+ );
+ }
+}
+
+void _drbd_uuid_set(struct drbd_conf *mdev, int idx, u64 val) __must_hold(local)
+{
+ if (idx == Current) {
+ if (mdev->state.role == Primary)
+ val |= 1;
+ else
+ val &= ~((u64)1);
+
+ drbd_set_ed_uuid(mdev, val);
+ }
+
+ mdev->bc->md.uuid[idx] = val;
+
+ MTRACE(TraceTypeUuid, TraceLvlSummary,
+ drbd_print_uuid(mdev, idx);
+ );
+
+ drbd_md_mark_dirty(mdev);
+}
+
+
+void drbd_uuid_set(struct drbd_conf *mdev, int idx, u64 val) __must_hold(local)
+{
+ if (mdev->bc->md.uuid[idx]) {
+ drbd_uuid_move_history(mdev);
+ mdev->bc->md.uuid[History_start] = mdev->bc->md.uuid[idx];
+ MTRACE(TraceTypeUuid, TraceLvlMetrics,
+ drbd_print_uuid(mdev, History_start);
+ );
+ }
+ _drbd_uuid_set(mdev, idx, val);
+}
+
+/**
+ * drbd_uuid_new_current:
+ * Creates a new current UUID, and rotates the old current UUID into
+ * the bitmap slot. Causes an incremental resync upon next connect.
+ */
+void drbd_uuid_new_current(struct drbd_conf *mdev) __must_hold(local)
+{
+ u64 val;
+
+ INFO("Creating new current UUID\n");
+ D_ASSERT(mdev->bc->md.uuid[Bitmap] == 0);
+ mdev->bc->md.uuid[Bitmap] = mdev->bc->md.uuid[Current];
+ MTRACE(TraceTypeUuid, TraceLvlMetrics,
+ drbd_print_uuid(mdev, Bitmap);
+ );
+
+ get_random_bytes(&val, sizeof(u64));
+ _drbd_uuid_set(mdev, Current, val);
+}
+
+void drbd_uuid_set_bm(struct drbd_conf *mdev, u64 val) __must_hold(local)
+{
+ if (mdev->bc->md.uuid[Bitmap] == 0 && val == 0)
+ return;
+
+ if (val == 0) {
+ drbd_uuid_move_history(mdev);
+ mdev->bc->md.uuid[History_start] = mdev->bc->md.uuid[Bitmap];
+ mdev->bc->md.uuid[Bitmap] = 0;
+
+ MTRACE(TraceTypeUuid, TraceLvlMetrics,
+ drbd_print_uuid(mdev, History_start);
+ drbd_print_uuid(mdev, Bitmap);
+ );
+ } else {
+ if (mdev->bc->md.uuid[Bitmap])
+ drbd_WARN("bm UUID already set");
+
+ mdev->bc->md.uuid[Bitmap] = val;
+ mdev->bc->md.uuid[Bitmap] &= ~((u64)1);
+
+ MTRACE(TraceTypeUuid, TraceLvlMetrics,
+ drbd_print_uuid(mdev, Bitmap);
+ );
+ }
+ drbd_md_mark_dirty(mdev);
+}
+
+/**
+ * drbd_bmio_set_n_write:
+ * Is an io_fn for drbd_queue_bitmap_io() or drbd_bitmap_io() that sets
+ * all bits in the bitmap and writes the whole bitmap to stable storage.
+ */
+int drbd_bmio_set_n_write(struct drbd_conf *mdev)
+{
+ int rv = -EIO;
+
+ if (inc_local_if_state(mdev, Attaching)) {
+ drbd_md_set_flag(mdev, MDF_FullSync);
+ drbd_md_sync(mdev);
+ drbd_bm_set_all(mdev);
+
+ rv = drbd_bm_write(mdev);
+
+ if (!rv) {
+ drbd_md_clear_flag(mdev, MDF_FullSync);
+ drbd_md_sync(mdev);
+ }
+
+ dec_local(mdev);
+ }
+
+ return rv;
+}
+
+/**
+ * drbd_bmio_clear_n_write:
+ * Is an io_fn for drbd_queue_bitmap_io() or drbd_bitmap_io() that clears
+ * all bits in the bitmap and writes the whole bitmap to stable storage.
+ */
+int drbd_bmio_clear_n_write(struct drbd_conf *mdev)
+{
+ int rv = -EIO;
+
+ if (inc_local_if_state(mdev, Attaching)) {
+ drbd_bm_clear_all(mdev);
+ rv = drbd_bm_write(mdev);
+ dec_local(mdev);
+ }
+
+ return rv;
+}
+
+STATIC int w_bitmap_io(struct drbd_conf *mdev, struct drbd_work *w, int unused)
+{
+ struct bm_io_work *work = (struct bm_io_work *)w;
+ int rv;
+
+ D_ASSERT(atomic_read(&mdev->ap_bio_cnt) == 0);
+
+ drbd_bm_lock(mdev, work->why);
+ rv = work->io_fn(mdev);
+ drbd_bm_unlock(mdev);
+
+ clear_bit(BITMAP_IO, &mdev->flags);
+ wake_up(&mdev->misc_wait);
+
+ if (work->done)
+ work->done(mdev, rv);
+
+ clear_bit(BITMAP_IO_QUEUED, &mdev->flags);
+ work->why = NULL;
+
+ return 1;
+}
+
+/**
+ * drbd_queue_bitmap_io:
+ * Queues an IO operation on the whole bitmap.
+ * While IO on the bitmap is in progress, we freeze application IO, thus ensuring
+ * that drbd_set_out_of_sync() can not be called.
+ * This function MUST ONLY be called from worker context.
+ * BAD API ALERT!
+ * It MUST NOT be used while a previous such work is still pending!
+ */
+void drbd_queue_bitmap_io(struct drbd_conf *mdev,
+ int (*io_fn)(struct drbd_conf *),
+ void (*done)(struct drbd_conf *, int),
+ char *why)
+{
+ D_ASSERT(current == mdev->worker.task);
+
+ D_ASSERT(!test_bit(BITMAP_IO_QUEUED, &mdev->flags));
+ D_ASSERT(!test_bit(BITMAP_IO, &mdev->flags));
+ D_ASSERT(list_empty(&mdev->bm_io_work.w.list));
+ if (mdev->bm_io_work.why)
+ ERR("FIXME going to queue '%s' but '%s' still pending?\n",
+ why, mdev->bm_io_work.why);
+
+ mdev->bm_io_work.io_fn = io_fn;
+ mdev->bm_io_work.done = done;
+ mdev->bm_io_work.why = why;
+
+ set_bit(BITMAP_IO, &mdev->flags);
+ if (atomic_read(&mdev->ap_bio_cnt) == 0) {
+ if (list_empty(&mdev->bm_io_work.w.list)) {
+ set_bit(BITMAP_IO_QUEUED, &mdev->flags);
+ drbd_queue_work(&mdev->data.work, &mdev->bm_io_work.w);
+ } else
+ ERR("FIXME avoided double queuing bm_io_work\n");
+ }
+}
+
+/**
+ * drbd_bitmap_io:
+ * Does an IO operation on the bitmap, freezing application IO while that
+ * IO operation runs. This function MUST NOT be called from worker context.
+ */
+int drbd_bitmap_io(struct drbd_conf *mdev, int (*io_fn)(struct drbd_conf *), char *why)
+{
+ int rv;
+
+ D_ASSERT(current != mdev->worker.task);
+
+ drbd_suspend_io(mdev);
+
+ drbd_bm_lock(mdev, why);
+ rv = io_fn(mdev);
+ drbd_bm_unlock(mdev);
+
+ drbd_resume_io(mdev);
+
+ return rv;
+}
+
+void drbd_md_set_flag(struct drbd_conf *mdev, int flag) __must_hold(local)
+{
+ if ((mdev->bc->md.flags & flag) != flag) {
+ drbd_md_mark_dirty(mdev);
+ mdev->bc->md.flags |= flag;
+ }
+}
+
+void drbd_md_clear_flag(struct drbd_conf *mdev, int flag) __must_hold(local)
+{
+ if ((mdev->bc->md.flags & flag) != 0) {
+ drbd_md_mark_dirty(mdev);
+ mdev->bc->md.flags &= ~flag;
+ }
+}
+int drbd_md_test_flag(struct drbd_backing_dev *bdev, int flag)
+{
+ return (bdev->md.flags & flag) != 0;
+}
+
+STATIC void md_sync_timer_fn(unsigned long data)
+{
+ struct drbd_conf *mdev = (struct drbd_conf *) data;
+
+ drbd_queue_work_front(&mdev->data.work, &mdev->md_sync_work);
+}
+
+STATIC int w_md_sync(struct drbd_conf *mdev, struct drbd_work *w, int unused)
+{
+ drbd_WARN("md_sync_timer expired! Worker calls drbd_md_sync().\n");
+ drbd_md_sync(mdev);
+
+ return 1;
+}
+
+#ifdef DRBD_ENABLE_FAULTS
+/* Fault insertion support including random number generator shamelessly
+ * stolen from kernel/rcutorture.c */
+struct fault_random_state {
+ unsigned long state;
+ unsigned long count;
+};
+
+#define FAULT_RANDOM_MULT 39916801 /* prime */
+#define FAULT_RANDOM_ADD 479001701 /* prime */
+#define FAULT_RANDOM_REFRESH 10000
+
+/*
+ * Crude but fast random-number generator. Uses a linear congruential
+ * generator, with occasional help from get_random_bytes().
+ */
+STATIC unsigned long
+_drbd_fault_random(struct fault_random_state *rsp)
+{
+ long refresh;
+
+ if (--rsp->count < 0) {
+ get_random_bytes(&refresh, sizeof(refresh));
+ rsp->state += refresh;
+ rsp->count = FAULT_RANDOM_REFRESH;
+ }
+ rsp->state = rsp->state * FAULT_RANDOM_MULT + FAULT_RANDOM_ADD;
+ return swahw32(rsp->state);
+}
+
+STATIC char *
+_drbd_fault_str(unsigned int type) {
+ static char *_faults[] = {
+ "Meta-data write",
+ "Meta-data read",
+ "Resync write",
+ "Resync read",
+ "Data write",
+ "Data read",
+ "Data read ahead",
+ };
+
+ return (type < DRBD_FAULT_MAX) ? _faults[type] : "**Unknown**";
+}
+
+unsigned int
+_drbd_insert_fault(struct drbd_conf *mdev, unsigned int type)
+{
+ static struct fault_random_state rrs = {0, 0};
+
+ unsigned int ret = (
+ (fault_devs == 0 ||
+ ((1 << mdev_to_minor(mdev)) & fault_devs) != 0) &&
+ (((_drbd_fault_random(&rrs) % 100) + 1) <= fault_rate));
+
+ if (ret) {
+ fault_count++;
+
+ if (printk_ratelimit())
+ drbd_WARN("***Simulating %s failure\n",
+ _drbd_fault_str(type));
+ }
+
+ return ret;
+}
+#endif
+
+#ifdef ENABLE_DYNAMIC_TRACE
+
+STATIC char *_drbd_uuid_str(unsigned int idx)
+{
+ static char *uuid_str[] = {
+ "Current",
+ "Bitmap",
+ "History_start",
+ "History_end",
+ "UUID_SIZE",
+ "UUID_FLAGS",
+ };
+
+ return (idx < EXT_UUID_SIZE) ? uuid_str[idx] : "*Unknown UUID index*";
+}
+
+/* Pretty print a UUID value */
+void drbd_print_uuid(struct drbd_conf *mdev, unsigned int idx) __must_hold(local)
+{
+ INFO(" uuid[%s] now %016llX\n",
+ _drbd_uuid_str(idx), (unsigned long long)mdev->bc->md.uuid[idx]);
+}
+
+
+/*
+ *
+ * drbd_print_buffer
+ *
+ * This routine dumps binary data to the debugging output. Can be
+ * called at interrupt level.
+ *
+ * Arguments:
+ *
+ * prefix - String is output at the beginning of each line output
+ * flags - Control operation of the routine. Currently defined
+ * Flags are:
+ * DBGPRINT_BUFFADDR; if set, each line starts with the
+ * virtual address of the line being output. If clear,
+ * each line starts with the offset from the beginning
+ * of the buffer.
+ * size - Indicates the size of each entry in the buffer. Supported
+ * values are sizeof(char), sizeof(short) and sizeof(int)
+ * buffer - Start address of buffer
+ * buffer_va - Virtual address of start of buffer (normally the same
+ * as Buffer, but having it separate allows it to hold
+ * file address for example)
+ * length - length of buffer
+ *
+ */
+void
+drbd_print_buffer(const char *prefix, unsigned int flags, int size,
+ const void *buffer, const void *buffer_va,
+ unsigned int length)
+
+#define LINE_SIZE 16
+#define LINE_ENTRIES (int)(LINE_SIZE/size)
+{
+ const unsigned char *pstart;
+ const unsigned char *pstart_va;
+ const unsigned char *pend;
+ char bytes_str[LINE_SIZE*3+8], ascii_str[LINE_SIZE+8];
+ char *pbytes = bytes_str, *pascii = ascii_str;
+ int offset = 0;
+ long sizemask;
+ int field_width;
+ int index;
+ const unsigned char *pend_str;
+ const unsigned char *p;
+ int count;
+
+ /* verify size parameter */
+ if (size != sizeof(char) &&
+ size != sizeof(short) &&
+ size != sizeof(int)) {
+ printk(KERN_DEBUG "drbd_print_buffer: "
+ "ERROR invalid size %d\n", size);
+ return;
+ }
+
+ sizemask = size-1;
+ field_width = size*2;
+
+ /* Adjust start/end to be on appropriate boundary for size */
+ buffer = (const char *)((long)buffer & ~sizemask);
+ pend = (const unsigned char *)
+ (((long)buffer + length + sizemask) & ~sizemask);
+
+ if (flags & DBGPRINT_BUFFADDR) {
+ /* Move start back to nearest multiple of line size,
+ * if printing address. This results in nicely formatted output
+ * with addresses being on line size (16) byte boundaries */
+ pstart = (const unsigned char *)((long)buffer & ~(LINE_SIZE-1));
+ } else {
+ pstart = (const unsigned char *)buffer;
+ }
+
+ /* Set value of start VA to print if addresses asked for */
+ pstart_va = (const unsigned char *)buffer_va
+ - ((const unsigned char *)buffer-pstart);
+
+ /* Calculate end position to nicely align right hand side */
+ pend_str = pstart + (((pend-pstart) + LINE_SIZE-1) & ~(LINE_SIZE-1));
+
+ /* Init strings */
+ *pbytes = *pascii = '\0';
+
+ /* Start at beginning of first line */
+ p = pstart;
+ count = 0;
+
+ while (p < pend_str) {
+ if (p < (const unsigned char *)buffer || p >= pend) {
+ /* Before start of buffer or after end - print spaces */
+ pbytes += sprintf(pbytes, "%*c ", field_width, ' ');
+ pascii += sprintf(pascii, "%*c", size, ' ');
+ p += size;
+ } else {
+ /* Add hex and ascii to strings */
+ int val;
+ switch (size) {
+ default:
+ case 1:
+ val = *(unsigned char *)p;
+ break;
+ case 2:
+ val = *(unsigned short *)p;
+ break;
+ case 4:
+ val = *(unsigned int *)p;
+ break;
+ }
+
+ pbytes += sprintf(pbytes, "%0*x ", field_width, val);
+
+ for (index = size; index; index--) {
+ *pascii++ = isprint(*p) ? *p : '.';
+ p++;
+ }
+ }
+
+ count++;
+
+ if (count == LINE_ENTRIES || p >= pend_str) {
+ /* Null terminate and print record */
+ *pascii = '\0';
+ printk(KERN_DEBUG "%s%8.8lx: %*s|%*s|\n",
+ prefix,
+ (flags & DBGPRINT_BUFFADDR)
+ ? (long)pstart_va:(long)offset,
+ LINE_ENTRIES*(field_width+1), bytes_str,
+ LINE_SIZE, ascii_str);
+
+ /* Move onto next line */
+ pstart_va += (p-pstart);
+ pstart = p;
+ count = 0;
+ offset += LINE_SIZE;
+
+ /* Re-init strings */
+ pbytes = bytes_str;
+ pascii = ascii_str;
+ *pbytes = *pascii = '\0';
+ }
+ }
+}
+
+#define PSM(A) \
+do { \
+ if (mask.A) { \
+ int i = snprintf(p, len, " " #A "( %s )", \
+ A##s_to_name(val.A)); \
+ if (i >= len) \
+ return op; \
+ p += i; \
+ len -= i; \
+ } \
+} while (0)
+
+STATIC char *dump_st(char *p, int len, union drbd_state_t mask, union drbd_state_t val)
+{
+ char *op = p;
+ *p = '\0';
+ PSM(role);
+ PSM(peer);
+ PSM(conn);
+ PSM(disk);
+ PSM(pdsk);
+
+ return op;
+}
+
+#define INFOP(fmt, args...) \
+do { \
+ if (trace_level >= TraceLvlAll) { \
+ INFO("%s:%d: %s [%d] %s %s " fmt , \
+ file, line, current->comm, current->pid, \
+ sockname, recv ? "<<<" : ">>>" , \
+ ## args); \
+ } else { \
+ INFO("%s %s " fmt, sockname, \
+ recv ? "<<<" : ">>>" , \
+ ## args); \
+ } \
+} while (0)
+
+STATIC char *_dump_block_id(u64 block_id, char *buff)
+{
+ if (is_syncer_block_id(block_id))
+ strcpy(buff, "SyncerId");
+ else
+ sprintf(buff, "%llx", (unsigned long long)block_id);
+
+ return buff;
+}
+
+void
+_dump_packet(struct drbd_conf *mdev, struct socket *sock,
+ int recv, union Drbd_Polymorph_Packet *p, char *file, int line)
+{
+ char *sockname = sock == mdev->meta.socket ? "meta" : "data";
+ int cmd = (recv == 2) ? p->head.command : be16_to_cpu(p->head.command);
+ char tmp[300];
+ union drbd_state_t m, v;
+
+ switch (cmd) {
+ case HandShake:
+ INFOP("%s (protocol %u-%u)\n", cmdname(cmd),
+ be32_to_cpu(p->HandShake.protocol_min),
+ be32_to_cpu(p->HandShake.protocol_max));
+ break;
+
+ case ReportBitMap: /* don't report this */
+ case ReportCBitMap: /* don't report this */
+ break;
+
+ case Data:
+ INFOP("%s (sector %llus, id %s, seq %u, f %x)\n", cmdname(cmd),
+ (unsigned long long)be64_to_cpu(p->Data.sector),
+ _dump_block_id(p->Data.block_id, tmp),
+ be32_to_cpu(p->Data.seq_num),
+ be32_to_cpu(p->Data.dp_flags)
+ );
+ break;
+
+ case DataReply:
+ case RSDataReply:
+ INFOP("%s (sector %llus, id %s)\n", cmdname(cmd),
+ (unsigned long long)be64_to_cpu(p->Data.sector),
+ _dump_block_id(p->Data.block_id, tmp)
+ );
+ break;
+
+ case RecvAck:
+ case WriteAck:
+ case RSWriteAck:
+ case DiscardAck:
+ case NegAck:
+ case NegRSDReply:
+ INFOP("%s (sector %llus, size %u, id %s, seq %u)\n",
+ cmdname(cmd),
+ (long long)be64_to_cpu(p->BlockAck.sector),
+ be32_to_cpu(p->BlockAck.blksize),
+ _dump_block_id(p->BlockAck.block_id, tmp),
+ be32_to_cpu(p->BlockAck.seq_num)
+ );
+ break;
+
+ case DataRequest:
+ case RSDataRequest:
+ INFOP("%s (sector %llus, size %u, id %s)\n", cmdname(cmd),
+ (long long)be64_to_cpu(p->BlockRequest.sector),
+ be32_to_cpu(p->BlockRequest.blksize),
+ _dump_block_id(p->BlockRequest.block_id, tmp)
+ );
+ break;
+
+ case Barrier:
+ case BarrierAck:
+ INFOP("%s (barrier %u)\n", cmdname(cmd), p->Barrier.barrier);
+ break;
+
+ case SyncParam:
+ case SyncParam89:
+ INFOP("%s (rate %u, verify-alg \"%.64s\", csums-alg \"%.64s\")\n",
+ cmdname(cmd), be32_to_cpu(p->SyncParam89.rate),
+ p->SyncParam89.verify_alg, p->SyncParam89.csums_alg);
+ break;
+
+ case ReportUUIDs:
+ INFOP("%s Curr:%016llX, Bitmap:%016llX, "
+ "HisSt:%016llX, HisEnd:%016llX\n",
+ cmdname(cmd),
+ (unsigned long long)be64_to_cpu(p->GenCnt.uuid[Current]),
+ (unsigned long long)be64_to_cpu(p->GenCnt.uuid[Bitmap]),
+ (unsigned long long)be64_to_cpu(p->GenCnt.uuid[History_start]),
+ (unsigned long long)be64_to_cpu(p->GenCnt.uuid[History_end]));
+ break;
+
+ case ReportSizes:
+ INFOP("%s (d %lluMiB, u %lluMiB, c %lldMiB, "
+ "max bio %x, q order %x)\n",
+ cmdname(cmd),
+ (long long)(be64_to_cpu(p->Sizes.d_size)>>(20-9)),
+ (long long)(be64_to_cpu(p->Sizes.u_size)>>(20-9)),
+ (long long)(be64_to_cpu(p->Sizes.c_size)>>(20-9)),
+ be32_to_cpu(p->Sizes.max_segment_size),
+ be32_to_cpu(p->Sizes.queue_order_type));
+ break;
+
+ case ReportState:
+ v.i = be32_to_cpu(p->State.state);
+ m.i = 0xffffffff;
+ dump_st(tmp, sizeof(tmp), m, v);
+ INFOP("%s (s %x {%s})\n", cmdname(cmd), v.i, tmp);
+ break;
+
+ case StateChgRequest:
+ m.i = be32_to_cpu(p->ReqState.mask);
+ v.i = be32_to_cpu(p->ReqState.val);
+ dump_st(tmp, sizeof(tmp), m, v);
+ INFOP("%s (m %x v %x {%s})\n", cmdname(cmd), m.i, v.i, tmp);
+ break;
+
+ case StateChgReply:
+ INFOP("%s (ret %x)\n", cmdname(cmd),
+ be32_to_cpu(p->RqSReply.retcode));
+ break;
+
+ case Ping:
+ case PingAck:
+ /*
+ * Don't trace pings at summary level
+ */
+ if (trace_level < TraceLvlAll)
+ break;
+ /* fall through... */
+ default:
+ INFOP("%s (%u)\n", cmdname(cmd), cmd);
+ break;
+ }
+}
+
+/* Debug routine to dump info about bio */
+
+void _dump_bio(const char *pfx, struct drbd_conf *mdev, struct bio *bio, int complete, struct drbd_request *r)
+{
+#ifdef CONFIG_LBD
+#define SECTOR_FORMAT "%Lx"
+#else
+#define SECTOR_FORMAT "%lx"
+#endif
+#define SECTOR_SHIFT 9
+
+ unsigned long lowaddr = (unsigned long)(bio->bi_sector << SECTOR_SHIFT);
+ char *faddr = (char *)(lowaddr);
+ char rb[sizeof(void *)*2+6] = { 0, };
+ struct bio_vec *bvec;
+ int segno;
+
+ const int rw = bio->bi_rw;
+ const int biorw = (rw & (RW_MASK|RWA_MASK));
+ const int biobarrier = (rw & (1<<BIO_RW_BARRIER));
+ const int biosync = (rw & ((1<<BIO_RW_UNPLUG) | (1<<BIO_RW_SYNCIO)));
+
+ if (r)
+ sprintf(rb, "Req:%p ", r);
+
+ INFO("%s %s:%s%s%s Bio:%p %s- %soffset " SECTOR_FORMAT ", size %x\n",
+ complete ? "<<<" : ">>>",
+ pfx,
+ biorw == WRITE ? "Write" : "Read",
+ biobarrier ? " : B" : "",
+ biosync ? " : S" : "",
+ bio,
+ rb,
+ complete ? (drbd_bio_uptodate(bio) ? "Success, " : "Failed, ") : "",
+ bio->bi_sector << SECTOR_SHIFT,
+ bio->bi_size);
+
+ if (trace_level >= TraceLvlMetrics &&
+ ((biorw == WRITE) ^ complete)) {
+ printk(KERN_DEBUG " ind page offset length\n");
+ __bio_for_each_segment(bvec, bio, segno, 0) {
+ printk(KERN_DEBUG " [%d] %p %8.8x %8.8x\n", segno,
+ bvec->bv_page, bvec->bv_offset, bvec->bv_len);
+
+ if (trace_level >= TraceLvlAll) {
+ char *bvec_buf;
+ unsigned long flags;
+
+ bvec_buf = bvec_kmap_irq(bvec, &flags);
+
+ drbd_print_buffer(" ", DBGPRINT_BUFFADDR, 1,
+ bvec_buf,
+ faddr,
+ (bvec->bv_len <= 0x80)
+ ? bvec->bv_len : 0x80);
+
+ bvec_kunmap_irq(bvec_buf, &flags);
+
+ if (bvec->bv_len > 0x40)
+ printk(KERN_DEBUG " ....\n");
+
+ faddr += bvec->bv_len;
+ }
+ }
+ }
+}
+#endif
+
+module_init(drbd_init)
+module_exit(drbd_cleanup)
Encoding of our simple RLE compression scheme. It is very effective since
large parts of our bitmap are sparse.
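For illustration (this example is not part of the patch): a bitmap stretch
consisting of 0 set bits, then 64 clear bits, 3 set bits and 1000000 clear
bits is transmitted as the runlength sequence 0, 64, 3, 1000000 -- by
convention the first runlength counts set bits, and polarity alternates
from there. Each runlength is then VLI-encoded, so the long runs of a
sparse bitmap collapse into a few bits each.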
Signed-off-by: Philipp Reisner <[email protected]>
Signed-off-by: Lars Ellenberg <[email protected]>
---
diff -uNrp linux-2.6.29/drivers/block/drbd/drbd_vli.h linux-2.6.29-drbd/drivers/block/drbd/drbd_vli.h
--- linux-2.6.29/drivers/block/drbd/drbd_vli.h 1970-01-01 01:00:00.000000000 +0100
+++ linux-2.6.29-drbd/drivers/block/drbd/drbd_vli.h 2009-03-30 15:41:58.419134000 +0200
@@ -0,0 +1,474 @@
+/*
+ drbd_vli.h
+ This file is part of DRBD by Philipp Reisner and Lars Ellenberg.
+
+ Copyright (C) 2001-2008, LINBIT Information Technologies GmbH.
+ Copyright (C) 1999-2008, Philipp Reisner <[email protected]>.
+ Copyright (C) 2002-2008, Lars Ellenberg <[email protected]>.
+
+ drbd is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2, or (at your option)
+ any later version.
+
+ drbd is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with drbd; see the file COPYING. If not, write to
+ the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA.
+ */
+
+#ifndef _DRBD_VLI_H
+#define _DRBD_VLI_H
+
+/*
+ * At a granularity of 4KiB storage represented per bit,
+ * and storage sizes of several TiB,
+ * and possibly small-bandwidth replication,
+ * the bitmap transfer time can take much too long,
+ * if transmitted in plain text.
+ *
+ * We try to reduce the transferred bitmap information
+ * by encoding runlengths of bit polarity.
+ *
+ * We never actually need to encode a "zero" (runlengths are positive).
+ * But then we have to store the value of the first bit.
+ * So we can as well have the "zero" be a valid runlength,
+ * and start encoding/decoding by "number of _set_ bits" by convention.
+ *
+ * We assume that large areas are either completely set or unset,
+ * which gives good compression with any runlength method,
+ * even when encoding the runlength as fixed size 32bit/64bit integers.
+ *
+ * Still, there may be areas where the polarity flips every few bits,
+ * and encoding the runlength sequence of those areas with fixed size
+ * integers would be much worse than plaintext.
+ *
+ * We want to encode small runlength values with minimum code length,
+ * while still being able to encode a Huge run of all zeros.
+ *
+ * Thus we need a Variable Length Integer encoding, VLI.
+ *
+ * For runlength < 8, we produce more code bits than plaintext input.
+ * So we need to send incompressible chunks as plaintext, skip over them
+ * and then see if the next chunk compresses better.
+ *
+ * We don't care too much about "excellent" compression ratio
+ * for large runlengths, 249 bit/24 bit still gives a factor of > 10.
+ *
+ * We care for cpu time needed to actually encode/decode
+ * into the transmitted byte stream.
+ *
+ * There are endless variants of VLI.
+ * For this special purpose, we just need something that is "good enough",
+ * and easy to understand and code, fast to encode and decode,
+ * and does not consume memory.
+ */
+
+/*
+ * buf points to the current position in the transferred byte stream.
+ * stream is by definition little endian.
+ * *buf_len gives the remaining number of bytes at that position.
+ * *out will receive the decoded value.
+ * returns number of bytes consumed,
+ * or 0 if not enough bytes left in buffer (which would be invalid input).
+ */
+static inline int vli_decode_bytes(u64 *out, unsigned char *buf, unsigned buf_len)
+{
+ u64 tmp = 0;
+ unsigned bytes; /* extra bytes after code byte */
+
+ if (buf_len == 0)
+ return 0;
+
+ switch(*buf) {
+ case 0xff: bytes = 8; break;
+ case 0xfe: bytes = 7; break;
+ case 0xfd: bytes = 6; break;
+ case 0xfc: bytes = 5; break;
+ case 0xfb: bytes = 4; break;
+ case 0xfa: bytes = 3; break;
+ case 0xf9: bytes = 2; break;
+ default:
+ *out = *buf;
+ return 1;
+ }
+
+ if (buf_len <= bytes)
+ return 0;
+
+ /* no pointer cast assignment, there may be funny alignment
+ * requirements on certain architectures */
+ memcpy(&tmp, buf+1, bytes);
+ *out = le64_to_cpu(tmp);
+ return bytes+1;
+}
+
+/*
+ * similarly, encode n into buf.
+ * returns consumed bytes,
+ * or zero if not enough room left in buffer
+ * (in which case the buf is left unchanged).
+ *
+ * encoding is little endian, first byte codes how many bytes follow.
+ * first byte <= 0xf8 means just this byte, value = code byte.
+ * first byte == 0xf9 .. 0xff: (code byte - 0xf7) data bytes follow.
+ */
+static inline int vli_encode_bytes(unsigned char *buf, u64 n, unsigned buf_len)
+{
+ unsigned bytes; /* _extra_ bytes after code byte */
+
+ if (buf_len == 0)
+ return 0;
+
+ if (n <= 0xf8) {
+ *buf = (unsigned char)n;
+ return 1;
+ }
+
+ bytes = (n < (1ULL << 32))
+ ? (n < (1ULL << 16)) ? 2
+ : (n < (1ULL << 24)) ? 3 : 4
+ : (n < (1ULL << 48)) ?
+ (n < (1ULL << 40)) ? 5 : 6
+ : (n < (1ULL << 56)) ? 7 : 8;
+
+ if (buf_len <= bytes)
+ return 0;
+
+ /* no pointer cast assignment, there may be funny alignment
+ * requirements on certain architectures */
+ *buf++ = 0xf7 + bytes; /* code, 0xf9 .. 0xff */
+ n = cpu_to_le64(n);
+ memcpy(buf, &n, bytes); /* plain */
+ return bytes+1;
+}
+
+/* ================================================================== */
+
+/* And here the more involved variants of VLI.
+ *
+ * Code length is determined by some unique (e.g. unary) prefix.
+ * This encodes arbitrary bit length, not whole bytes: we have a bit-stream,
+ * not a byte stream.
+ */
+
+/* for the bitstream, we need a cursor */
+struct bitstream_cursor {
+ /* the current byte */
+ u8 *b;
+ /* the current bit within *b, normalized: 0..7 */
+ unsigned int bit;
+};
+
+/* initialize cursor to point to first bit of stream */
+static inline void bitstream_cursor_reset(struct bitstream_cursor *cur, void *s)
+{
+ cur->b = s;
+ cur->bit = 0;
+}
+
+/* advance cursor by that many bits; maximum expected input value: 64,
+ * but depending on VLI implementation, it may be more. */
+static inline void bitstream_cursor_advance(struct bitstream_cursor *cur, unsigned int bits)
+{
+ bits += cur->bit;
+ cur->b = cur->b + (bits >> 3);
+ cur->bit = bits & 7;
+}
+
+/* the bitstream itself knows its length */
+struct bitstream {
+ struct bitstream_cursor cur;
+ unsigned char *buf;
+ size_t buf_len; /* in bytes */
+
+ /* for input stream:
+ * number of trailing 0 bits for padding
+ * total number of valid bits in stream: buf_len * 8 - pad_bits */
+ unsigned int pad_bits;
+};
+
+static inline void bitstream_init(struct bitstream *bs, void *s, size_t len, unsigned int pad_bits)
+{
+ bs->buf = s;
+ bs->buf_len = len;
+ bs->pad_bits = pad_bits;
+ bitstream_cursor_reset(&bs->cur, bs->buf);
+}
+
+static inline void bitstream_rewind(struct bitstream *bs)
+{
+ bitstream_cursor_reset(&bs->cur, bs->buf);
+ memset(bs->buf, 0, bs->buf_len);
+}
+
+/* Put (at most 64) least significant bits of val into bitstream, and advance cursor.
+ * Ignores "pad_bits".
+ * Returns zero if bits == 0 (nothing to do).
+ * Returns number of bits used if successful.
+ *
+ * If there is not enough room left in bitstream,
+ * leaves bitstream unchanged and returns -ENOBUFS.
+ */
+static inline int bitstream_put_bits(struct bitstream *bs, u64 val, const unsigned int bits)
+{
+ unsigned char *b = bs->cur.b;
+ unsigned int tmp;
+
+ if (bits == 0)
+ return 0;
+
+ if ((bs->cur.b + ((bs->cur.bit + bits -1) >> 3)) - bs->buf >= bs->buf_len)
+ return -ENOBUFS;
+
+ /* paranoia: strip off hi bits; they should not be set anyways. */
+ if (bits < 64)
+ val &= ~0ULL >> (64 - bits);
+
+ *b++ |= (val & 0xff) << bs->cur.bit;
+
+ for (tmp = 8 - bs->cur.bit; tmp < bits; tmp += 8)
+ *b++ |= (val >> tmp) & 0xff;
+
+ bitstream_cursor_advance(&bs->cur, bits);
+ return bits;
+}
+
+/* Fetch (at most 64) bits from bitstream into *out, and advance cursor.
+ *
+ * If more than 64 bits are requested, returns -EINVAL and leaves *out unchanged.
+ *
+ * If there are fewer than the requested number of valid bits left in the
+ * bitstream, still fetches all available bits.
+ *
+ * Returns number of actually fetched bits.
+ */
+static inline int bitstream_get_bits(struct bitstream *bs, u64 *out, int bits)
+{
+ u64 val;
+ unsigned int n;
+
+ if (bits > 64)
+ return -EINVAL;
+
+ if (bs->cur.b + ((bs->cur.bit + bs->pad_bits + bits -1) >> 3) - bs->buf >= bs->buf_len)
+ bits = ((bs->buf_len - (bs->cur.b - bs->buf)) << 3)
+ - bs->cur.bit - bs->pad_bits;
+
+ if (bits == 0) {
+ *out = 0;
+ return 0;
+ }
+
+ /* get the high bits */
+ val = 0;
+ n = (bs->cur.bit + bits + 7) >> 3;
+ /* n may be at most 9, if cur.bit + bits > 64 */
+ /* which means this copies at most 8 byte */
+ if (n) {
+ memcpy(&val, bs->cur.b+1, n - 1);
+ val = le64_to_cpu(val) << (8 - bs->cur.bit);
+ }
+
+ /* we still need the low bits */
+ val |= bs->cur.b[0] >> bs->cur.bit;
+
+ /* and mask out bits we don't want */
+ val &= ~0ULL >> (64 - bits);
+
+ bitstream_cursor_advance(&bs->cur, bits);
+ *out = val;
+
+ return bits;
+}
+
+/* we still need to actually define the code. */
+
+/*
+ * encoding is "visualised" as
+ * __little endian__ bitstream, least significant bit first (left most)
+ *
+ * this particular encoding is chosen so that the prefix code
+ * starts as unary encoding the level, then modified so that
+ * 11 levels can be described in 8bit, with minimal overhead
+ * for the smaller levels.
+ *
+ * Number of data bits follow fibonacci sequence, with the exception of the
+ * last level (+1 data bit, so it makes 64bit total). The only worse code when
+ * encoding bit polarity runlength is 2 plain bits => 3 code bits.
+prefix data bits max val Nº data bits
+0 0x1 0
+10 x 0x3 1
+110 x 0x5 1
+1110 xx 0x9 2
+11110 xxx 0x11 3
+1111100 x xxxx 0x31 5
+1111101 x xxxxxxx 0x131 8
+11111100 xxxxxxxx xxxxx 0x2131 13
+11111110 xxxxxxxx xxxxxxxx xxxxx 0x202131 21
+11111101 xxxxxxxx xxxxxxxx xxxxxxxx xxxxxxxx xx 0x400202131 34
+11111111 xxxxxxxx xxxxxxxx xxxxxxxx xxxxxxxx xxxxxxxx xxxxxxxx xxxxxxxx 56
+ * maximum encodable value: 0x100000400202131 == 2**56 + some */
+
+/* LEVEL: (total bits, prefix bits, prefix value),
+ * sorted ascending by number of total bits.
+ * The rest of the code table is calculated at compiletime from this. */
+
+/* fibonacci data 0, 1, ... */
+#define VLI_L_0_1() do { \
+ LEVEL( 1, 1, 0x00); \
+ LEVEL( 3, 2, 0x01); \
+ LEVEL( 4, 3, 0x03); \
+ LEVEL( 6, 4, 0x07); \
+ LEVEL( 8, 5, 0x0f); \
+ LEVEL(12, 7, 0x1f); \
+ LEVEL(15, 7, 0x5f); \
+ LEVEL(21, 8, 0x3f); \
+ LEVEL(29, 8, 0x7f); \
+ LEVEL(42, 8, 0xbf); \
+ LEVEL(64, 8, 0xff); \
+ } while (0)
+
+/* Some variants, differing in number of levels, prefix value, and number of
+ * databits in each level. I tried a lot of variants. Those where the number
+ * of data bits follows the fibonacci sequence (with a certain offset) simply
+ * "look best" ;-)
+ * All of these can encode at least "2 ** 56". */
+
+/* fibonacci data 1, 1, ... */
+#define VLI_L_1_1() do { \
+ LEVEL( 2, 1, 0x00); \
+ LEVEL( 3, 2, 0x01); \
+ LEVEL( 5, 3, 0x03); \
+ LEVEL( 7, 4, 0x07); \
+ LEVEL(10, 5, 0x0f); \
+ LEVEL(14, 6, 0x1f); \
+ LEVEL(21, 8, 0x3f); \
+ LEVEL(29, 8, 0x7f); \
+ LEVEL(42, 8, 0xbf); \
+ LEVEL(64, 8, 0xff); \
+ } while (0)
+
+/* fibonacci data 1, 2, ... */
+#define VLI_L_1_2() do { \
+ LEVEL( 2, 1, 0x00); \
+ LEVEL( 4, 2, 0x01); \
+ LEVEL( 6, 3, 0x03); \
+ LEVEL( 9, 4, 0x07); \
+ LEVEL(13, 5, 0x0f); \
+ LEVEL(19, 6, 0x1f); \
+ LEVEL(28, 7, 0x3f); \
+ LEVEL(42, 8, 0x7f); \
+ LEVEL(64, 8, 0xff); \
+ } while (0)
+
+/* fibonacci data 2, 3, ... */
+#define VLI_L_2_3() do { \
+ LEVEL( 3, 1, 0x00); \
+ LEVEL( 5, 2, 0x01); \
+ LEVEL( 8, 3, 0x03); \
+ LEVEL(12, 4, 0x07); \
+ LEVEL(18, 5, 0x0f); \
+ LEVEL(27, 6, 0x1f); \
+ LEVEL(41, 7, 0x3f); \
+ LEVEL(64, 7, 0x5f); \
+ } while (0)
+
+/* fibonacci data 3, 5, ... */
+#define VLI_L_3_5() do { \
+ LEVEL( 4, 1, 0x00); \
+ LEVEL( 7, 2, 0x01); \
+ LEVEL(11, 3, 0x03); \
+ LEVEL(17, 4, 0x07); \
+ LEVEL(26, 5, 0x0f); \
+ LEVEL(40, 6, 0x1f); \
+ LEVEL(64, 6, 0x3f); \
+ } while (0)
+
+/* CONFIG */
+#ifndef VLI_LEVELS
+#define VLI_LEVELS() VLI_L_3_5()
+#endif
+
+/* finds a suitable level to decode the least significant part of in.
+ * returns number of bits consumed.
+ *
+ * BUG() for bad input, as that would mean a buggy code table. */
+static inline int vli_decode_bits(u64 *out, const u64 in)
+{
+ u64 adj = 1;
+
+#define LEVEL(t,b,v) \
+ do { \
+ if ((in & ((1 << b) -1)) == v) { \
+ *out = ((in & ((~0ULL) >> (64-t))) >> b) + adj; \
+ return t; \
+ } \
+ adj += 1ULL << (t - b); \
+ } while (0)
+
+ VLI_LEVELS();
+
+ /* NOT REACHED, if VLI_LEVELS code table is defined properly */
+ BUG();
+#undef LEVEL
+}
+
+/* return number of code bits needed,
+ * or negative error number */
+static inline int __vli_encode_bits(u64 *out, const u64 in)
+{
+ u64 max = 0;
+ u64 adj = 1;
+
+ if (in == 0)
+ return -EINVAL;
+
+#define LEVEL(t,b,v) do { \
+ max += 1ULL << (t - b); \
+ if (in <= max) { \
+ if (out) \
+ *out = ((in - adj) << b) | v; \
+ return t; \
+ } \
+ adj = max + 1; \
+ } while (0)
+
+ VLI_LEVELS();
+
+ return -EOVERFLOW;
+#undef LEVEL
+}
+
+/* encodes @in as vli into @bs;
+ *
+ * return values
+ * > 0: number of bits successfully stored in bitstream
+ * -ENOBUFS @bs is full
+ * -EINVAL input zero (invalid)
+ * -EOVERFLOW input too large for this vli code (invalid)
+ */
+static inline int vli_encode_bits(struct bitstream *bs, u64 in)
+{
+ u64 code = code;
+ int bits = __vli_encode_bits(&code, in);
+
+ if (bits <= 0)
+ return bits;
+
+ return bitstream_put_bits(bs, code, bits);
+}
+
+#undef VLI_L_0_1
+#undef VLI_L_1_1
+#undef VLI_L_1_2
+#undef VLI_L_2_3
+#undef VLI_L_3_5
+
+#undef VLI_LEVELS
+#endif
Kconfig integration, Makefile and major.h
Signed-off-by: Philipp Reisner <[email protected]>
Signed-off-by: Lars Ellenberg <[email protected]>
---
diff -uNrp linux-2.6.29/include/linux/major.h linux-2.6.29-drbd/include/linux/major.h
--- linux-2.6.29/include/linux/major.h 2009-03-24 00:12:14.000000000 +0100
+++ linux-2.6.29-drbd/include/linux/major.h 2009-03-30 18:46:14.227968597 +0200
@@ -145,6 +145,7 @@
#define UNIX98_PTY_MAJOR_COUNT 8
#define UNIX98_PTY_SLAVE_MAJOR (UNIX98_PTY_MASTER_MAJOR+UNIX98_PTY_MAJOR_COUNT)
+#define DRBD_MAJOR 147
#define RTF_MAJOR 150
#define RAW_MAJOR 162
diff -uNrp linux-2.6.29/drivers/block/Kconfig linux-2.6.29-drbd/drivers/block/Kconfig
--- linux-2.6.29/drivers/block/Kconfig 2009-03-24 00:12:14.000000000 +0100
+++ linux-2.6.29-drbd/drivers/block/Kconfig 2009-03-30 18:46:14.223968559 +0200
@@ -264,6 +264,8 @@ config BLK_DEV_CRYPTOLOOP
instead, which can be configured to be on-disk compatible with the
cryptoloop device.
+source "drivers/block/drbd/Kconfig"
+
config BLK_DEV_NBD
tristate "Network block device support"
depends on NET
diff -uNrp linux-2.6.29/drivers/block/Makefile linux-2.6.29-drbd/drivers/block/Makefile
--- linux-2.6.29/drivers/block/Makefile 2009-03-24 00:12:14.000000000 +0100
+++ linux-2.6.29-drbd/drivers/block/Makefile 2009-03-30 18:46:14.211968152 +0200
@@ -33,3 +33,4 @@ obj-$(CONFIG_BLK_DEV_UB) += ub.o
obj-$(CONFIG_BLK_DEV_HD) += hd.o
obj-$(CONFIG_XEN_BLKDEV_FRONTEND) += xen-blkfront.o
+obj-$(CONFIG_BLK_DEV_DRBD) += drbd/
diff -uNrp linux-2.6.29/drivers/block/drbd/Kconfig linux-2.6.29-drbd/drivers/block/drbd/Kconfig
--- linux-2.6.29/drivers/block/drbd/Kconfig 1970-01-01 01:00:00.000000000 +0100
+++ linux-2.6.29-drbd/drivers/block/drbd/Kconfig 2007-11-09 16:07:31.952864000 +0100
@@ -0,0 +1,32 @@
+#
+# DRBD device driver configuration
+#
+config BLK_DEV_DRBD
+ tristate "DRBD Distributed Replicated Block Device support"
+ select INET
+ select PROC_FS
+ select CONNECTOR
+ select CRYPTO
+ select CRYPTO_HMAC
+ ---help---
+ DRBD is a block device which is designed to build high availability
+ clusters. This is done by mirroring a whole block device via (a
+ dedicated) network. You could see it as a network RAID 1.
+
+ Each minor device has a state, which can be 'primary' or 'secondary'.
+ On the node with the primary device the application is supposed to
+ run and to access the device (/dev/drbdX). Every write is sent to the
+ local 'lower level block device' and via network to the node with the
+ device in 'secondary' state.
+ The secondary device simply writes the data to its lower level block
+ device. Currently no read-balancing via the network is done.
+
+ DRBD can also be used with "shared-disk semantics" (primary-primary),
+ even though it is a "shared-nothing cluster". You'd need to use a
+ cluster file system on top of that for cache coherency.
+
+ DRBD management is done through user-space tools.
+ For automatic failover you need a cluster manager (e.g. heartbeat).
+ See also: http://www.drbd.org/, http://www.linux-ha.org
+
+ If unsure, say N.
diff -uNrp linux-2.6.29/drivers/block/drbd/Makefile linux-2.6.29-drbd/drivers/block/drbd/Makefile
--- linux-2.6.29/drivers/block/drbd/Makefile 1970-01-01 01:00:00.000000000 +0100
+++ linux-2.6.29-drbd/drivers/block/drbd/Makefile 2009-03-30 18:46:14.155968408 +0200
@@ -0,0 +1,7 @@
+#CFLAGS_drbd_sizeof_sanity_check.o = -Wpadded # -Werror
+
+drbd-objs := drbd_buildtag.o drbd_bitmap.o drbd_proc.o \
+ drbd_worker.o drbd_receiver.o drbd_req.o drbd_actlog.o \
+ lru_cache.o drbd_main.o drbd_strings.o drbd_nl.o
+
+obj-$(CONFIG_BLK_DEV_DRBD) += drbd.o
The buildtag.c tag will go away once we are no longer an external module.
Signed-off-by: Philipp Reisner <[email protected]>
Signed-off-by: Lars Ellenberg <[email protected]>
---
diff -uNrp linux-2.6.29/drivers/block/drbd/drbd_buildtag.c linux-2.6.29-drbd/drivers/block/drbd/drbd_buildtag.c
--- linux-2.6.29/drivers/block/drbd/drbd_buildtag.c 1970-01-01 01:00:00.000000000 +0100
+++ linux-2.6.29-drbd/drivers/block/drbd/drbd_buildtag.c 2009-03-30 16:54:38.511135000 +0200
@@ -0,0 +1,7 @@
+/* automatically generated. DO NOT EDIT. */
+#include <linux/drbd_config.h>
+const char *drbd_buildtag(void)
+{
+ return "GIT-hash: c74771beb9598144d31b861e7ea966f914914c4f drbd/drbd_actlog.c drbd/drbd_bitmap.c drbd/drbd_int.h drbd/drbd_main.c drbd/drbd_receiver.c drbd/drbd_req.c drbd/drbd_worker.c"
+ " build by phil@fat-tyre, 2009-03-30 16:54:38";
+}
diff -uNrp linux-2.6.29/drivers/block/drbd/drbd_strings.c linux-2.6.29-drbd/drivers/block/drbd/drbd_strings.c
--- linux-2.6.29/drivers/block/drbd/drbd_strings.c 1970-01-01 01:00:00.000000000 +0100
+++ linux-2.6.29-drbd/drivers/block/drbd/drbd_strings.c 2009-03-26 15:55:39.583134000 +0100
@@ -0,0 +1,115 @@
+/*
+ drbd_strings.c
+
+ This file is part of DRBD by Philipp Reisner and Lars Ellenberg.
+
+ Copyright (C) 2003-2008, LINBIT Information Technologies GmbH.
+ Copyright (C) 2003-2008, Philipp Reisner <[email protected]>.
+ Copyright (C) 2003-2008, Lars Ellenberg <[email protected]>.
+
+ drbd is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2, or (at your option)
+ any later version.
+
+ drbd is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with drbd; see the file COPYING. If not, write to
+ the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA.
+
+*/
+
+#include <linux/drbd.h>
+
+static const char *drbd_conn_s_names[] = {
+ [StandAlone] = "StandAlone",
+ [Disconnecting] = "Disconnecting",
+ [Unconnected] = "Unconnected",
+ [Timeout] = "Timeout",
+ [BrokenPipe] = "BrokenPipe",
+ [NetworkFailure] = "NetworkFailure",
+ [ProtocolError] = "ProtocolError",
+ [WFConnection] = "WFConnection",
+ [WFReportParams] = "WFReportParams",
+ [TearDown] = "TearDown",
+ [Connected] = "Connected",
+ [StartingSyncS] = "StartingSyncS",
+ [StartingSyncT] = "StartingSyncT",
+ [WFBitMapS] = "WFBitMapS",
+ [WFBitMapT] = "WFBitMapT",
+ [WFSyncUUID] = "WFSyncUUID",
+ [SyncSource] = "SyncSource",
+ [SyncTarget] = "SyncTarget",
+ [VerifyS] = "VerifyS",
+ [VerifyT] = "VerifyT",
+ [PausedSyncS] = "PausedSyncS",
+ [PausedSyncT] = "PausedSyncT"
+};
+
+static const char *drbd_role_s_names[] = {
+ [Primary] = "Primary",
+ [Secondary] = "Secondary",
+ [Unknown] = "Unknown"
+};
+
+static const char *drbd_disk_s_names[] = {
+ [Diskless] = "Diskless",
+ [Attaching] = "Attaching",
+ [Failed] = "Failed",
+ [Negotiating] = "Negotiating",
+ [Inconsistent] = "Inconsistent",
+ [Outdated] = "Outdated",
+ [DUnknown] = "DUnknown",
+ [Consistent] = "Consistent",
+ [UpToDate] = "UpToDate",
+};
+
+static const char *drbd_state_sw_errors[] = {
+ [-SS_TwoPrimaries] = "Multiple primaries not allowed by config",
+ [-SS_NoUpToDateDisk] =
+ "Refusing to be Primary without at least one UpToDate disk",
+ [-SS_BothInconsistent] = "Refusing to be inconsistent on both nodes",
+ [-SS_SyncingDiskless] = "Refusing to be syncing and diskless",
+ [-SS_ConnectedOutdates] = "Refusing to be Outdated while Connected",
+ [-SS_PrimaryNOP] = "Refusing to be Primary while peer is not outdated",
+ [-SS_ResyncRunning] = "Can not start OV/resync since it is already active",
+ [-SS_AlreadyStandAlone] = "Can not disconnect a StandAlone device",
+ [-SS_CW_FailedByPeer] = "State change was refused by peer node",
+ [-SS_IsDiskLess] =
+ "Device is diskless, the requesed operation requires a disk",
+ [-SS_DeviceInUse] = "Device is held open by someone",
+ [-SS_NoNetConfig] = "Have no net/connection configuration",
+ [-SS_NoVerifyAlg] = "Need a verify algorithm to start online verify",
+ [-SS_NeedConnection] = "Need a connection to start verify or resync",
+ [-SS_NotSupported] = "Peer does not support protocol",
+ [-SS_LowerThanOutdated] = "Disk state is lower than outdated",
+ [-SS_InTransientState] = "In transient state, retry after next state change",
+ [-SS_ConcurrentStChg] = "Concurrent state changes detected and aborted",
+};
+
+const char *conns_to_name(enum drbd_conns s)
+{
+ /* enums are unsigned... */
+ return s > PausedSyncT ? "TOO_LARGE" : drbd_conn_s_names[s];
+}
+
+const char *roles_to_name(enum drbd_role s)
+{
+ return s > Secondary ? "TOO_LARGE" : drbd_role_s_names[s];
+}
+
+const char *disks_to_name(enum drbd_disk_state s)
+{
+ return s > UpToDate ? "TOO_LARGE" : drbd_disk_s_names[s];
+}
+
+const char *set_st_err_name(enum set_st_err err)
+{
+ return err <= SS_AfterLastError ? "TOO_SMALL" :
+ err > SS_TwoPrimaries ? "TOO_LARGE"
+ : drbd_state_sw_errors[-err];
+}
> +#
> +config BLK_DEV_DRBD
> + tristate "DRBD Distributed Replicated Block Device support"
> + select INET
> + select PROC_FS
> + select CONNECTOR
> + select CRYPTO
> + select CRYPTO_HMAC
Have you double checked that these symbols are supposed to be 'selected'?
If they:
- have dependencies
- have a prompt
then they most likely are not.
> @@ -0,0 +1,7 @@
> +#CFLAGS_drbd_sizeof_sanity_check.o = -Wpadded # -Werror
Commented out?
> +
> +drbd-objs := drbd_buildtag.o drbd_bitmap.o drbd_proc.o \
> + drbd_worker.o drbd_receiver.o drbd_req.o drbd_actlog.o \
> + lru_cache.o drbd_main.o drbd_strings.o drbd_nl.o
Please use:
drbd-y := drbd_buildtag.o drbd_bitmap.o drbd_proc.o
...
And my personal taste favours:
drbd-y := ...
drbd-y += ...
over all the escaping.
Sam
On Monday 30 March 2009 21:05:30 Sam Ravnborg wrote:
> > +#
> > +config BLK_DEV_DRBD
> > + tristate "DRBD Distributed Replicated Block Device support"
> > + select INET
> > + select PROC_FS
> > + select CONNECTOR
> > + select CRYPTO
> > + select CRYPTO_HMAC
>
> Have you double checked that these symbols are supposed to be 'selected'?
> If they:
> - have dependencies
> - have a prompt
> then they most likely are not.
>
Right! Reading kconfig-language.txt makes one wiser ;)
I have changed them into dependencies.
> > @@ -0,0 +1,7 @@
> > +#CFLAGS_drbd_sizeof_sanity_check.o = -Wpadded # -Werror
>
> Commented out?
>
Removed.
> > +
> > +drbd-objs := drbd_buildtag.o drbd_bitmap.o drbd_proc.o \
> > + drbd_worker.o drbd_receiver.o drbd_req.o drbd_actlog.o \
> > + lru_cache.o drbd_main.o drbd_strings.o drbd_nl.o
>
> Please use:
> drbd-y := drbd_buildtag.o drbd_bitmap.o drbd_proc.o
> ...
>
> And my personal taste favours:
> drbd-y := ...
> drbd-y += ...
>
Ok and ok, following your taste.
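For illustration, the rule in the rewritten Makefile then reads:

  drbd-y := drbd_buildtag.o drbd_bitmap.o drbd_proc.o
  drbd-y += drbd_worker.o drbd_receiver.o drbd_req.o drbd_actlog.o
  drbd-y += lru_cache.o drbd_main.o drbd_strings.o drbd_nl.o

  obj-$(CONFIG_BLK_DEV_DRBD) += drbd.o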
Thanks for those helpful hints!
-Phil
--
: Dipl-Ing Philipp Reisner
: LINBIT | Your Way to High Availability
: Tel: +43-1-8178292-50, Fax: +43-1-8178292-82
: http://www.linbit.com
DRBD(R) and LINBIT(R) are registered trademarks of LINBIT, Austria.
On 2009-03-30T18:47:08, Philipp Reisner <[email protected]> wrote:
> Hi,
>
> This is a repost of DRBD, to keep you updated about the ongoing
> cleanups.
Hi Philipp,
thanks for the submission!
On reading the code, I think it is in pretty good shape to be merged for
linux-next or Andrew's tree, at the very least.
(Ultimately, of course it'd be very nice if we could reduce the number
of raid engines in the kernel, but that should not necessarily delay the
merge here. Like Greg likes to say, the kernel community also merges
tons of hardware drivers in much worse states.)
Maybe you could also provide a git repository of a kernel tree with your
patches that testers could pull from?
Regards,
Lars
--
Teamlead Kernel, SuSE Labs, Research and Development
SUSE LINUX Products GmbH, GF: Markus Rex, HRB 16746 (AG Nürnberg)
"Experience is the name everyone gives to their mistakes." -- Oscar Wilde
Hi Philipp,
On Mon, Mar 30, 2009 at 10:17 PM, Philipp Reisner
<[email protected]> wrote:
> Hi,
>
> This is a repost of DRBD, to keep you updated about the ongoing
> cleanups.
>
> Description
>
> DRBD is a shared-nothing, synchronously replicated block device. It
> is designed to serve as a building block for high availability
> clusters and in this context, is a "drop-in" replacement for shared
> storage. Simplistically, you could see it as a network RAID 1.
>
> Each minor device has a role, which can be 'primary' or 'secondary'.
> On the node with the primary device the application is supposed to
> run and to access the device (/dev/drbdX). Every write is sent to
> the local 'lower level block device' and, across the network, to the
> node with the device in 'secondary' state. The secondary device
> simply writes the data to its lower level block device.
>
> DRBD can also be used in dual-Primary mode (device writable on both
> nodes), which means it can exhibit shared disk semantics in a
> shared-nothing cluster. Needless to say, on top of dual-Primary
> DRBD utilizing a cluster file system is necessary to maintain for
> cache coherency.
>
> This is one of the areas where DRBD differs notably from RAID1 (say
> md) stacked on top of NBD or iSCSI. DRBD solves the issue of
> concurrent writes to the same on disk location. That is an error of
> the layer above us -- it usually indicates a broken lock manager in
> a cluster file system --, but DRBD has to ensure that both sides
> agree on which write came last, and therefore overwrites the other
> write.
>
So this difference to RAID1+NBD is required only if the DLM of the
clustered fs is buggy?
> More background on this can be found in this paper:
> http://www.drbd.org/fileadmin/drbd/publications/drbd8.pdf
>
> Beyond that, DRBD addresses various issues of cluster partitioning,
> which the MD/NBD stack, to the best of our knowledge, does not
> solve. The above-mentioned paper goes into some detail about that as
> well.
>
It would be nice if you could list those limitations of NBD/RAID here.
Thanks
Nikanth
On Tuesday 07 April 2009 14:23:14 Nikanth K wrote:
> Hi Philipp,
>
> On Mon, Mar 30, 2009 at 10:17 PM, Philipp Reisner
>
> <[email protected]> wrote:
> > Hi,
> >
> > This is a repost of DRBD, to keep you updated about the ongoing
> > cleanups.
> >
> > Description
> >
> > DRBD is a shared-nothing, synchronously replicated block device. It
> > is designed to serve as a building block for high availability
> > clusters and in this context, is a "drop-in" replacement for shared
> > storage. Simplistically, you could see it as a network RAID 1.
> >
> > Each minor device has a role, which can be 'primary' or 'secondary'.
> > On the node with the primary device the application is supposed to
> > run and to access the device (/dev/drbdX). Every write is sent to
> > the local 'lower level block device' and, across the network, to the
> > node with the device in 'secondary' state. The secondary device
> > simply writes the data to its lower level block device.
> >
> > DRBD can also be used in dual-Primary mode (device writable on both
> > nodes), which means it can exhibit shared disk semantics in a
> > shared-nothing cluster. Needless to say, on top of dual-Primary
> > DRBD utilizing a cluster file system is necessary to maintain for
> > cache coherency.
> >
> > This is one of the areas where DRBD differs notably from RAID1 (say
> > md) stacked on top of NBD or iSCSI. DRBD solves the issue of
> > concurrent writes to the same on disk location. That is an error of
> > the layer above us -- it usually indicates a broken lock manager in
> > a cluster file system --, but DRBD has to ensure that both sides
> > agree on which write came last, and therefore overwrites the other
> > write.
>
> So this difference to RAID1+NBD is required only if the DLM of the
> clustered fs is buggy?
>
No, DRBD is much more than RAID1+NBD. I had the impression that by writing
"RAID1+NBD" I could quickly communicate the big picture of what DRBD is.
> > More background on this can be found in this paper:
> > http://www.drbd.org/fileadmin/drbd/publications/drbd8.pdf
> >
> > Beyond that, DRBD addresses various issues of cluster partitioning,
> > which the MD/NBD stack, to the best of our knowledge, does not
> > solve. The above-mentioned paper goes into some detail about that as
> > well.
>
> It would be nice, if you can list those limitations of NBD/RAID here.
>
Ok. I will give you two simple examples:
1)
Think of a two node HA cluster. Node A is active ('primary' in DRBD speak),
has the filesystem mounted and the application running. Node B is
in standby mode ('secondary' in DRBD speak).
We lose network connectivity; the primary node continues to run, while the
secondary no longer gets updates.
Then we have a complete power failure; both nodes are down. Later the
data center gets powered up again, but at first only the power circuit
of node B is up and running again.
Should node B offer the service right now?
(DRBD has configurable policies for that.)
Later on they manage to get node A up and running again; now let's assume
node B was chosen to be the new primary node. What needs to be done?
Modifications on B since it became primary need to be resynced to A.
Modifications on A since it lost contact with B need to be rolled back.
DRBD does that.
How do you fit that into a RAID1+NBD model? NBD is just a block transport;
it does not offer the ability to exchange dirty bitmaps or data generation
identifiers, nor does the RAID1 code have a concept of that.
2)
When using DRBD over small bandwidth links, one has to run a resync, DRBD
offers the option to do a "checksum based resync". Similar to rsync it
at first only exchanges a checksum, and transmits the whole data block only
if the checksums differ.
That again is something that does not fit into the concepts of NBD or RAID1.
I will write down more examples if you think that you need more justification
for yet another implementation of RAID in the kernel. DRBD does more, but DRBD
is not suitable for RAID1 on a local box.
PS: Lars Marowsky-Bree requested a GIT tree of the DRBD-for-mainline kernel
patch. I will set that up by Friday, and maintain the code there
for the merging process.
Best,
Philipp
--
: Dipl-Ing Philipp Reisner
: LINBIT | Your Way to High Availability
: Tel: +43-1-8178292-50, Fax: +43-1-8178292-82
: http://www.linbit.com
DRBD(R) and LINBIT(R) are registered trademarks of LINBIT, Austria.