2015-11-02 17:22:19

by James Simmons

Subject: [PATCH v2 0/7] staging: lustre: second series for libcfs hash code cleanup

This patch series covers more style cleanups for the libcfs
hash code, mostly removal of white space and resolution of the
checkpatch issues in libcfs_hash.h.

James Simmons (7):
staging: lustre: remove white space in libcfs_hash.h
staging: lustre: remove obsolete comment in libcfs_hash.h
staging: lustre: move linux hash.h header to start of libcfs_hash.h
staging: lustre: use proper comment blocks for libcfs_hash.h
staging: lustre: handle NULL comparisons correctly for libcfs_hash.h
staging: lustre: remove white space in hash.c
staging: lustre: place linux header first in hash.c

.../lustre/include/linux/libcfs/libcfs_hash.h | 170 +++++-----
drivers/staging/lustre/lustre/libcfs/hash.c | 344 ++++++++++----------
2 files changed, 266 insertions(+), 248 deletions(-)
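
For reference, the warnings addressed in this series come from the
kernel's checkpatch script; a typical invocation against the touched
header, run from the top of the kernel tree, would be:

  scripts/checkpatch.pl -f drivers/staging/lustre/include/linux/libcfs/libcfs_hash.h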


2015-11-02 17:23:56

by James Simmons

Subject: [PATCH v2 1/7] staging: lustre: remove white space in libcfs_hash.h

Clean up all the unneeded white space in libcfs_hash.h.

Signed-off-by: James Simmons <[email protected]>
---
.../lustre/include/linux/libcfs/libcfs_hash.h | 135 ++++++++++----------
1 files changed, 70 insertions(+), 65 deletions(-)

diff --git a/drivers/staging/lustre/include/linux/libcfs/libcfs_hash.h b/drivers/staging/lustre/include/linux/libcfs/libcfs_hash.h
index 70b8b29..4d73f8a 100644
--- a/drivers/staging/lustre/include/linux/libcfs/libcfs_hash.h
+++ b/drivers/staging/lustre/include/linux/libcfs/libcfs_hash.h
@@ -66,12 +66,12 @@
#include <linux/hash.h>

/** disable debug */
-#define CFS_HASH_DEBUG_NONE 0
+#define CFS_HASH_DEBUG_NONE 0
/** record hash depth and output to console when it's too deep,
* computing overhead is low but consume more memory */
-#define CFS_HASH_DEBUG_1 1
+#define CFS_HASH_DEBUG_1 1
/** expensive, check key validation */
-#define CFS_HASH_DEBUG_2 2
+#define CFS_HASH_DEBUG_2 2

#define CFS_HASH_DEBUG_LEVEL CFS_HASH_DEBUG_NONE

@@ -108,16 +108,18 @@ struct cfs_hash_bucket {
* cfs_hash bucket descriptor, it's normally in stack of caller
*/
struct cfs_hash_bd {
- struct cfs_hash_bucket *bd_bucket; /**< address of bucket */
- unsigned int bd_offset; /**< offset in bucket */
+ /* address of bucket */
+ struct cfs_hash_bucket *bd_bucket;
+ /* offset in bucket */
+ unsigned int bd_offset;
};

-#define CFS_HASH_NAME_LEN 16 /**< default name length */
-#define CFS_HASH_BIGNAME_LEN 64 /**< bigname for param tree */
+#define CFS_HASH_NAME_LEN 16 /**< default name length */
+#define CFS_HASH_BIGNAME_LEN 64 /**< bigname for param tree */

-#define CFS_HASH_BKT_BITS 3 /**< default bits of bucket */
-#define CFS_HASH_BITS_MAX 30 /**< max bits of bucket */
-#define CFS_HASH_BITS_MIN CFS_HASH_BKT_BITS
+#define CFS_HASH_BKT_BITS 3 /**< default bits of bucket */
+#define CFS_HASH_BITS_MAX 30 /**< max bits of bucket */
+#define CFS_HASH_BITS_MIN CFS_HASH_BKT_BITS

/**
* common hash attributes.
@@ -133,41 +135,41 @@ enum cfs_hash_tag {
*/
CFS_HASH_NO_LOCK = 1 << 0,
/** no bucket lock, use one spinlock to protect the whole hash */
- CFS_HASH_NO_BKTLOCK = 1 << 1,
+ CFS_HASH_NO_BKTLOCK = 1 << 1,
/** rwlock to protect bucket */
- CFS_HASH_RW_BKTLOCK = 1 << 2,
+ CFS_HASH_RW_BKTLOCK = 1 << 2,
/** spinlock to protect bucket */
- CFS_HASH_SPIN_BKTLOCK = 1 << 3,
+ CFS_HASH_SPIN_BKTLOCK = 1 << 3,
/** always add new item to tail */
- CFS_HASH_ADD_TAIL = 1 << 4,
+ CFS_HASH_ADD_TAIL = 1 << 4,
/** hash-table doesn't have refcount on item */
- CFS_HASH_NO_ITEMREF = 1 << 5,
+ CFS_HASH_NO_ITEMREF = 1 << 5,
/** big name for param-tree */
CFS_HASH_BIGNAME = 1 << 6,
/** track global count */
CFS_HASH_COUNTER = 1 << 7,
/** rehash item by new key */
- CFS_HASH_REHASH_KEY = 1 << 8,
+ CFS_HASH_REHASH_KEY = 1 << 8,
/** Enable dynamic hash resizing */
- CFS_HASH_REHASH = 1 << 9,
+ CFS_HASH_REHASH = 1 << 9,
/** can shrink hash-size */
- CFS_HASH_SHRINK = 1 << 10,
+ CFS_HASH_SHRINK = 1 << 10,
/** assert hash is empty on exit */
- CFS_HASH_ASSERT_EMPTY = 1 << 11,
+ CFS_HASH_ASSERT_EMPTY = 1 << 11,
/** record hlist depth */
- CFS_HASH_DEPTH = 1 << 12,
+ CFS_HASH_DEPTH = 1 << 12,
/**
* rehash is always scheduled in a different thread, so current
* change on hash table is non-blocking
*/
- CFS_HASH_NBLK_CHANGE = 1 << 13,
+ CFS_HASH_NBLK_CHANGE = 1 << 13,
/** NB, we typed hs_flags as __u16, please change it
* if you need to extend >=16 flags */
};

/** most used attributes */
-#define CFS_HASH_DEFAULT (CFS_HASH_RW_BKTLOCK | \
- CFS_HASH_COUNTER | CFS_HASH_REHASH)
+#define CFS_HASH_DEFAULT (CFS_HASH_RW_BKTLOCK | \
+ CFS_HASH_COUNTER | CFS_HASH_REHASH)

/**
* cfs_hash is a hash-table implementation for general purpose, it can support:
@@ -211,7 +213,7 @@ enum cfs_hash_tag {
struct cfs_hash {
/** serialize with rehash, or serialize all operations if
* the hash-table has CFS_HASH_NO_BKTLOCK */
- union cfs_hash_lock hs_lock;
+ union cfs_hash_lock hs_lock;
/** hash operations */
struct cfs_hash_ops *hs_ops;
/** hash lock operations */
@@ -219,57 +221,57 @@ struct cfs_hash {
/** hash list operations */
struct cfs_hash_hlist_ops *hs_hops;
/** hash buckets-table */
- struct cfs_hash_bucket **hs_buckets;
+ struct cfs_hash_bucket **hs_buckets;
/** total number of items on this hash-table */
- atomic_t hs_count;
+ atomic_t hs_count;
/** hash flags, see cfs_hash_tag for detail */
- __u16 hs_flags;
+ __u16 hs_flags;
/** # of extra-bytes for bucket, for user saving extended attributes */
- __u16 hs_extra_bytes;
+ __u16 hs_extra_bytes;
/** wants to iterate */
- __u8 hs_iterating;
+ __u8 hs_iterating;
/** hash-table is dying */
- __u8 hs_exiting;
+ __u8 hs_exiting;
/** current hash bits */
- __u8 hs_cur_bits;
+ __u8 hs_cur_bits;
/** min hash bits */
- __u8 hs_min_bits;
+ __u8 hs_min_bits;
/** max hash bits */
- __u8 hs_max_bits;
+ __u8 hs_max_bits;
/** bits for rehash */
- __u8 hs_rehash_bits;
+ __u8 hs_rehash_bits;
/** bits for each bucket */
- __u8 hs_bkt_bits;
+ __u8 hs_bkt_bits;
/** resize min threshold */
- __u16 hs_min_theta;
+ __u16 hs_min_theta;
/** resize max threshold */
- __u16 hs_max_theta;
+ __u16 hs_max_theta;
/** resize count */
- __u32 hs_rehash_count;
+ __u32 hs_rehash_count;
/** # of iterators (caller of cfs_hash_for_each_*) */
- __u32 hs_iterators;
+ __u32 hs_iterators;
/** rehash workitem */
- cfs_workitem_t hs_rehash_wi;
+ cfs_workitem_t hs_rehash_wi;
/** refcount on this hash table */
- atomic_t hs_refcount;
+ atomic_t hs_refcount;
/** rehash buckets-table */
- struct cfs_hash_bucket **hs_rehash_buckets;
+ struct cfs_hash_bucket **hs_rehash_buckets;
#if CFS_HASH_DEBUG_LEVEL >= CFS_HASH_DEBUG_1
/** serialize debug members */
spinlock_t hs_dep_lock;
/** max depth */
- unsigned int hs_dep_max;
+ unsigned int hs_dep_max;
/** id of the deepest bucket */
- unsigned int hs_dep_bkt;
+ unsigned int hs_dep_bkt;
/** offset in the deepest bucket */
- unsigned int hs_dep_off;
+ unsigned int hs_dep_off;
/** bits when we found the max depth */
- unsigned int hs_dep_bits;
+ unsigned int hs_dep_bits;
/** workitem to output max depth */
- cfs_workitem_t hs_dep_wi;
+ cfs_workitem_t hs_dep_wi;
#endif
/** name of htable */
- char hs_name[0];
+ char hs_name[0];
};

struct cfs_hash_lock_ops {
@@ -324,11 +326,11 @@ struct cfs_hash_ops {
};

/** total number of buckets in @hs */
-#define CFS_HASH_NBKT(hs) \
+#define CFS_HASH_NBKT(hs) \
(1U << ((hs)->hs_cur_bits - (hs)->hs_bkt_bits))

/** total number of buckets in @hs while rehashing */
-#define CFS_HASH_RH_NBKT(hs) \
+#define CFS_HASH_RH_NBKT(hs) \
(1U << ((hs)->hs_rehash_bits - (hs)->hs_bkt_bits))

/** number of hlist for in bucket */
@@ -433,19 +435,22 @@ cfs_hash_with_nblk_change(struct cfs_hash *hs)

static inline int
cfs_hash_is_exiting(struct cfs_hash *hs)
-{ /* cfs_hash_destroy is called */
+{
+ /* cfs_hash_destroy is called */
return hs->hs_exiting;
}

static inline int
cfs_hash_is_rehashing(struct cfs_hash *hs)
-{ /* rehash is launched */
+{
+ /* rehash is launched */
return hs->hs_rehash_bits != 0;
}

static inline int
cfs_hash_is_iterating(struct cfs_hash *hs)
-{ /* someone is calling cfs_hash_for_each_* */
+{
+ /* someone is calling cfs_hash_for_each_* */
return hs->hs_iterating || hs->hs_iterators != 0;
}

@@ -758,7 +763,7 @@ static inline void
cfs_hash_bucket_validate(struct cfs_hash *hs, struct cfs_hash_bd *bd,
struct hlist_node *hnode)
{
- struct cfs_hash_bd bds[2];
+ struct cfs_hash_bd bds[2];

cfs_hash_dual_bd_get(hs, cfs_hash_key(hs, hnode), bds);
LASSERT(bds[0].bd_bucket == bd->bd_bucket ||
@@ -777,9 +782,9 @@ cfs_hash_bucket_validate(struct cfs_hash *hs, struct cfs_hash_bd *bd,

#endif /* CFS_HASH_DEBUG_LEVEL */

-#define CFS_HASH_THETA_BITS 10
-#define CFS_HASH_MIN_THETA (1U << (CFS_HASH_THETA_BITS - 1))
-#define CFS_HASH_MAX_THETA (1U << (CFS_HASH_THETA_BITS + 1))
+#define CFS_HASH_THETA_BITS 10
+#define CFS_HASH_MIN_THETA (1U << (CFS_HASH_THETA_BITS - 1))
+#define CFS_HASH_MAX_THETA (1U << (CFS_HASH_THETA_BITS + 1))

/* Return integer component of theta */
static inline int __cfs_hash_theta_int(int theta)
@@ -848,20 +853,20 @@ cfs_hash_u64_hash(const __u64 key, unsigned mask)
}

/** iterate over all buckets in @bds (array of struct cfs_hash_bd) */
-#define cfs_hash_for_each_bd(bds, n, i) \
+#define cfs_hash_for_each_bd(bds, n, i) \
for (i = 0; i < n && (bds)[i].bd_bucket != NULL; i++)

/** iterate over all buckets of @hs */
-#define cfs_hash_for_each_bucket(hs, bd, pos) \
- for (pos = 0; \
- pos < CFS_HASH_NBKT(hs) && \
+#define cfs_hash_for_each_bucket(hs, bd, pos) \
+ for (pos = 0; \
+ pos < CFS_HASH_NBKT(hs) && \
((bd)->bd_bucket = (hs)->hs_buckets[pos]) != NULL; pos++)

/** iterate over all hlist of bucket @bd */
-#define cfs_hash_bd_for_each_hlist(hs, bd, hlist) \
- for ((bd)->bd_offset = 0; \
- (bd)->bd_offset < CFS_HASH_BKT_NHLIST(hs) && \
- (hlist = cfs_hash_bd_hhead(hs, bd)) != NULL; \
+#define cfs_hash_bd_for_each_hlist(hs, bd, hlist) \
+ for ((bd)->bd_offset = 0; \
+ (bd)->bd_offset < CFS_HASH_BKT_NHLIST(hs) && \
+ (hlist = cfs_hash_bd_hhead(hs, bd)) != NULL; \
(bd)->bd_offset++)

/* !__LIBCFS__HASH_H__ */
--
1.7.1

2015-11-02 17:22:23

by James Simmons

Subject: [PATCH v2 2/7] staging: lustre: remove obsolete comment in libcfs_hash.h

Remove the comment about hash_long(), which was removed long ago.

Signed-off-by: James Simmons <[email protected]>
---
.../lustre/include/linux/libcfs/libcfs_hash.h | 7 -------
1 files changed, 0 insertions(+), 7 deletions(-)

diff --git a/drivers/staging/lustre/include/linux/libcfs/libcfs_hash.h b/drivers/staging/lustre/include/linux/libcfs/libcfs_hash.h
index 4d73f8a..4a78e6d 100644
--- a/drivers/staging/lustre/include/linux/libcfs/libcfs_hash.h
+++ b/drivers/staging/lustre/include/linux/libcfs/libcfs_hash.h
@@ -56,13 +56,6 @@
/* 2^63 + 2^61 - 2^57 + 2^54 - 2^51 - 2^18 + 1 */
#define CFS_GOLDEN_RATIO_PRIME_64 0x9e37fffffffc0001ULL

-/*
- * Ideally we would use HAVE_HASH_LONG for this, but on linux we configure
- * the linux kernel and user space at the same time, so we need to differentiate
- * between them explicitly. If this is not needed on other architectures, then
- * we'll need to move the functions to architecture specific headers.
- */
-
#include <linux/hash.h>

/** disable debug */
--
1.7.1

2015-11-02 17:22:27

by James Simmons

Subject: [PATCH v2 3/7] staging: lustre: move linux hash.h header to start of libcfs_hash.h

Minor style cleanup to move the hash.h header to the top of the
libcfs_hash.h file.

Signed-off-by: James Simmons <[email protected]>
---
.../lustre/include/linux/libcfs/libcfs_hash.h | 5 +++--
1 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/staging/lustre/include/linux/libcfs/libcfs_hash.h b/drivers/staging/lustre/include/linux/libcfs/libcfs_hash.h
index 4a78e6d..2e0c892 100644
--- a/drivers/staging/lustre/include/linux/libcfs/libcfs_hash.h
+++ b/drivers/staging/lustre/include/linux/libcfs/libcfs_hash.h
@@ -41,6 +41,9 @@

#ifndef __LIBCFS_HASH_H__
#define __LIBCFS_HASH_H__
+
+#include <linux/hash.h>
+
/*
* Knuth recommends primes in approximately golden ratio to the maximum
* integer representable by a machine word for multiplicative hashing.
@@ -56,8 +59,6 @@
/* 2^63 + 2^61 - 2^57 + 2^54 - 2^51 - 2^18 + 1 */
#define CFS_GOLDEN_RATIO_PRIME_64 0x9e37fffffffc0001ULL

-#include <linux/hash.h>
-
/** disable debug */
#define CFS_HASH_DEBUG_NONE 0
/** record hash depth and output to console when it's too deep,
--
1.7.1

2015-11-02 17:23:17

by James Simmons

Subject: [PATCH v2 4/7] staging: lustre: use proper comment blocks for libcfs_hash.h

The script checkpatch.pl reported problems with the style of
the comment blocks. This patch resolves those problems.
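
A minimal sketch of the comment shape checkpatch prefers for most of
the kernel (an illustration, not text taken from this patch):

/*
 * A multi-line comment opens with the bare marker above and
 * closes with the marker below on its own line.
 */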

Signed-off-by: James Simmons <[email protected]>
---
.../lustre/include/linux/libcfs/libcfs_hash.h | 21 +++++++++++++------
1 files changed, 14 insertions(+), 7 deletions(-)

diff --git a/drivers/staging/lustre/include/linux/libcfs/libcfs_hash.h b/drivers/staging/lustre/include/linux/libcfs/libcfs_hash.h
index 2e0c892..87dee16 100644
--- a/drivers/staging/lustre/include/linux/libcfs/libcfs_hash.h
+++ b/drivers/staging/lustre/include/linux/libcfs/libcfs_hash.h
@@ -61,8 +61,10 @@

/** disable debug */
#define CFS_HASH_DEBUG_NONE 0
-/** record hash depth and output to console when it's too deep,
- * computing overhead is low but consume more memory */
+/*
+ * record hash depth and output to console when it's too deep,
+ * computing overhead is low but consume more memory
+ */
#define CFS_HASH_DEBUG_1 1
/** expensive, check key validation */
#define CFS_HASH_DEBUG_2 2
@@ -158,7 +160,8 @@ enum cfs_hash_tag {
*/
CFS_HASH_NBLK_CHANGE = 1 << 13,
/** NB, we typed hs_flags as __u16, please change it
- * if you need to extend >=16 flags */
+ * if you need to extend >=16 flags
+ */
};

/** most used attributes */
@@ -205,8 +208,10 @@ enum cfs_hash_tag {
*/

struct cfs_hash {
- /** serialize with rehash, or serialize all operations if
- * the hash-table has CFS_HASH_NO_BKTLOCK */
+ /*
+ * serialize with rehash, or serialize all operations if
+ * the hash-table has CFS_HASH_NO_BKTLOCK
+ */
union cfs_hash_lock hs_lock;
/** hash operations */
struct cfs_hash_ops *hs_ops;
@@ -373,9 +378,11 @@ cfs_hash_with_add_tail(struct cfs_hash *hs)
static inline int
cfs_hash_with_no_itemref(struct cfs_hash *hs)
{
- /* hash-table doesn't keep refcount on item,
+ /*
+ * hash-table doesn't keep refcount on item,
* item can't be removed from hash unless it's
- * ZERO refcount */
+ * ZERO refcount.
+ */
return (hs->hs_flags & CFS_HASH_NO_ITEMREF) != 0;
}

--
1.7.1

2015-11-02 17:23:14

by James Simmons

Subject: [PATCH v2 5/7] staging: lustre: handle NULL comparisons correctly for libcfs_hash.h

Remove all direct NULL comparisons in libcfs_hash.h.
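
The idiom, shown as a minimal before/after sketch using names from the
patch (LASSERT is the libcfs assertion macro):

	/* before: explicit NULL comparisons */
	LASSERT(key != NULL);
	if (bd->bd_bucket == NULL)
		return;

	/* after: test the pointer directly */
	LASSERT(key);
	if (!bd->bd_bucket)
		return;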

Signed-off-by: James Simmons <[email protected]>
---
.../lustre/include/linux/libcfs/libcfs_hash.h | 4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/staging/lustre/include/linux/libcfs/libcfs_hash.h b/drivers/staging/lustre/include/linux/libcfs/libcfs_hash.h
index 87dee16..6bd2012 100644
--- a/drivers/staging/lustre/include/linux/libcfs/libcfs_hash.h
+++ b/drivers/staging/lustre/include/linux/libcfs/libcfs_hash.h
@@ -827,7 +827,7 @@ cfs_hash_djb2_hash(const void *key, size_t size, unsigned mask)
{
unsigned i, hash = 5381;

- LASSERT(key != NULL);
+ LASSERT(key);

for (i = 0; i < size; i++)
hash = hash * 33 + ((char *)key)[i];
@@ -855,7 +855,7 @@ cfs_hash_u64_hash(const __u64 key, unsigned mask)

/** iterate over all buckets in @bds (array of struct cfs_hash_bd) */
#define cfs_hash_for_each_bd(bds, n, i) \
- for (i = 0; i < n && (bds)[i].bd_bucket != NULL; i++)
+ for (i = 0; i < n && (bds)[i].bd_bucket; i++)

/** iterate over all buckets of @hs */
#define cfs_hash_for_each_bucket(hs, bd, pos) \
--
1.7.1

2015-11-02 17:22:33

by James Simmons

Subject: [PATCH v2 6/7] staging: lustre: remove white space in hash.c

Clean up all the unneeded white space in hash.c.

Signed-off-by: James Simmons <[email protected]>
---
drivers/staging/lustre/lustre/libcfs/hash.c | 342 ++++++++++++++-------------
1 files changed, 177 insertions(+), 165 deletions(-)

diff --git a/drivers/staging/lustre/lustre/libcfs/hash.c b/drivers/staging/lustre/lustre/libcfs/hash.c
index 0308744..ed4e1f1 100644
--- a/drivers/staging/lustre/lustre/libcfs/hash.c
+++ b/drivers/staging/lustre/lustre/libcfs/hash.c
@@ -161,49 +161,49 @@ cfs_hash_rw_unlock(union cfs_hash_lock *lock, int exclusive)
/** No lock hash */
static struct cfs_hash_lock_ops cfs_hash_nl_lops = {
.hs_lock = cfs_hash_nl_lock,
- .hs_unlock = cfs_hash_nl_unlock,
- .hs_bkt_lock = cfs_hash_nl_lock,
- .hs_bkt_unlock = cfs_hash_nl_unlock,
+ .hs_unlock = cfs_hash_nl_unlock,
+ .hs_bkt_lock = cfs_hash_nl_lock,
+ .hs_bkt_unlock = cfs_hash_nl_unlock,
};

/** no bucket lock, one spinlock to protect everything */
static struct cfs_hash_lock_ops cfs_hash_nbl_lops = {
.hs_lock = cfs_hash_spin_lock,
- .hs_unlock = cfs_hash_spin_unlock,
- .hs_bkt_lock = cfs_hash_nl_lock,
- .hs_bkt_unlock = cfs_hash_nl_unlock,
+ .hs_unlock = cfs_hash_spin_unlock,
+ .hs_bkt_lock = cfs_hash_nl_lock,
+ .hs_bkt_unlock = cfs_hash_nl_unlock,
};

/** spin bucket lock, rehash is enabled */
static struct cfs_hash_lock_ops cfs_hash_bkt_spin_lops = {
.hs_lock = cfs_hash_rw_lock,
- .hs_unlock = cfs_hash_rw_unlock,
- .hs_bkt_lock = cfs_hash_spin_lock,
- .hs_bkt_unlock = cfs_hash_spin_unlock,
+ .hs_unlock = cfs_hash_rw_unlock,
+ .hs_bkt_lock = cfs_hash_spin_lock,
+ .hs_bkt_unlock = cfs_hash_spin_unlock,
};

/** rw bucket lock, rehash is enabled */
static struct cfs_hash_lock_ops cfs_hash_bkt_rw_lops = {
.hs_lock = cfs_hash_rw_lock,
- .hs_unlock = cfs_hash_rw_unlock,
- .hs_bkt_lock = cfs_hash_rw_lock,
- .hs_bkt_unlock = cfs_hash_rw_unlock,
+ .hs_unlock = cfs_hash_rw_unlock,
+ .hs_bkt_lock = cfs_hash_rw_lock,
+ .hs_bkt_unlock = cfs_hash_rw_unlock,
};

/** spin bucket lock, rehash is disabled */
static struct cfs_hash_lock_ops cfs_hash_nr_bkt_spin_lops = {
.hs_lock = cfs_hash_nl_lock,
- .hs_unlock = cfs_hash_nl_unlock,
- .hs_bkt_lock = cfs_hash_spin_lock,
- .hs_bkt_unlock = cfs_hash_spin_unlock,
+ .hs_unlock = cfs_hash_nl_unlock,
+ .hs_bkt_lock = cfs_hash_spin_lock,
+ .hs_bkt_unlock = cfs_hash_spin_unlock,
};

/** rw bucket lock, rehash is disabled */
static struct cfs_hash_lock_ops cfs_hash_nr_bkt_rw_lops = {
.hs_lock = cfs_hash_nl_lock,
- .hs_unlock = cfs_hash_nl_unlock,
- .hs_bkt_lock = cfs_hash_rw_lock,
- .hs_bkt_unlock = cfs_hash_rw_unlock,
+ .hs_unlock = cfs_hash_nl_unlock,
+ .hs_bkt_lock = cfs_hash_rw_lock,
+ .hs_bkt_unlock = cfs_hash_rw_unlock,
};

static void
@@ -280,7 +280,7 @@ cfs_hash_hh_hnode_del(struct cfs_hash *hs, struct cfs_hash_bd *bd,
*/
struct cfs_hash_head_dep {
struct hlist_head hd_head; /**< entries list */
- unsigned int hd_depth; /**< list length */
+ unsigned int hd_depth; /**< list length */
};

static int
@@ -328,7 +328,7 @@ cfs_hash_hd_hnode_del(struct cfs_hash *hs, struct cfs_hash_bd *bd,
*/
struct cfs_hash_dhead {
struct hlist_head dh_head; /**< entries list */
- struct hlist_node *dh_tail; /**< the last entry */
+ struct hlist_node *dh_tail; /**< the last entry */
};

static int
@@ -384,8 +384,8 @@ cfs_hash_dh_hnode_del(struct cfs_hash *hs, struct cfs_hash_bd *bd,
*/
struct cfs_hash_dhead_dep {
struct hlist_head dd_head; /**< entries list */
- struct hlist_node *dd_tail; /**< the last entry */
- unsigned int dd_depth; /**< list length */
+ struct hlist_node *dd_tail; /**< the last entry */
+ unsigned int dd_depth; /**< list length */
};

static int
@@ -436,31 +436,31 @@ cfs_hash_dd_hnode_del(struct cfs_hash *hs, struct cfs_hash_bd *bd,
}

static struct cfs_hash_hlist_ops cfs_hash_hh_hops = {
- .hop_hhead = cfs_hash_hh_hhead,
- .hop_hhead_size = cfs_hash_hh_hhead_size,
- .hop_hnode_add = cfs_hash_hh_hnode_add,
- .hop_hnode_del = cfs_hash_hh_hnode_del,
+ .hop_hhead = cfs_hash_hh_hhead,
+ .hop_hhead_size = cfs_hash_hh_hhead_size,
+ .hop_hnode_add = cfs_hash_hh_hnode_add,
+ .hop_hnode_del = cfs_hash_hh_hnode_del,
};

static struct cfs_hash_hlist_ops cfs_hash_hd_hops = {
- .hop_hhead = cfs_hash_hd_hhead,
- .hop_hhead_size = cfs_hash_hd_hhead_size,
- .hop_hnode_add = cfs_hash_hd_hnode_add,
- .hop_hnode_del = cfs_hash_hd_hnode_del,
+ .hop_hhead = cfs_hash_hd_hhead,
+ .hop_hhead_size = cfs_hash_hd_hhead_size,
+ .hop_hnode_add = cfs_hash_hd_hnode_add,
+ .hop_hnode_del = cfs_hash_hd_hnode_del,
};

static struct cfs_hash_hlist_ops cfs_hash_dh_hops = {
- .hop_hhead = cfs_hash_dh_hhead,
- .hop_hhead_size = cfs_hash_dh_hhead_size,
- .hop_hnode_add = cfs_hash_dh_hnode_add,
- .hop_hnode_del = cfs_hash_dh_hnode_del,
+ .hop_hhead = cfs_hash_dh_hhead,
+ .hop_hhead_size = cfs_hash_dh_hhead_size,
+ .hop_hnode_add = cfs_hash_dh_hnode_add,
+ .hop_hnode_del = cfs_hash_dh_hnode_del,
};

static struct cfs_hash_hlist_ops cfs_hash_dd_hops = {
- .hop_hhead = cfs_hash_dd_hhead,
- .hop_hhead_size = cfs_hash_dd_hhead_size,
- .hop_hnode_add = cfs_hash_dd_hnode_add,
- .hop_hnode_del = cfs_hash_dd_hnode_del,
+ .hop_hhead = cfs_hash_dd_hhead,
+ .hop_hhead_size = cfs_hash_dd_hhead_size,
+ .hop_hnode_add = cfs_hash_dd_hnode_add,
+ .hop_hnode_del = cfs_hash_dd_hnode_del,
};

static void
@@ -529,7 +529,7 @@ void
cfs_hash_bd_add_locked(struct cfs_hash *hs, struct cfs_hash_bd *bd,
struct hlist_node *hnode)
{
- int rc;
+ int rc;

rc = hs->hs_hops->hop_hnode_add(hs, bd, hnode);
cfs_hash_bd_dep_record(hs, bd, rc);
@@ -572,7 +572,7 @@ cfs_hash_bd_move_locked(struct cfs_hash *hs, struct cfs_hash_bd *bd_old,
{
struct cfs_hash_bucket *obkt = bd_old->bd_bucket;
struct cfs_hash_bucket *nbkt = bd_new->bd_bucket;
- int rc;
+ int rc;

if (cfs_hash_bd_compare(bd_old, bd_new) == 0)
return;
@@ -597,30 +597,30 @@ EXPORT_SYMBOL(cfs_hash_bd_move_locked);

enum {
/** always set, for sanity (avoid ZERO intent) */
- CFS_HS_LOOKUP_MASK_FIND = BIT(0),
+ CFS_HS_LOOKUP_MASK_FIND = BIT(0),
/** return entry with a ref */
- CFS_HS_LOOKUP_MASK_REF = BIT(1),
+ CFS_HS_LOOKUP_MASK_REF = BIT(1),
/** add entry if not existing */
- CFS_HS_LOOKUP_MASK_ADD = BIT(2),
+ CFS_HS_LOOKUP_MASK_ADD = BIT(2),
/** delete entry, ignore other masks */
- CFS_HS_LOOKUP_MASK_DEL = BIT(3),
+ CFS_HS_LOOKUP_MASK_DEL = BIT(3),
};

enum cfs_hash_lookup_intent {
/** return item w/o refcount */
- CFS_HS_LOOKUP_IT_PEEK = CFS_HS_LOOKUP_MASK_FIND,
+ CFS_HS_LOOKUP_IT_PEEK = CFS_HS_LOOKUP_MASK_FIND,
/** return item with refcount */
- CFS_HS_LOOKUP_IT_FIND = (CFS_HS_LOOKUP_MASK_FIND |
- CFS_HS_LOOKUP_MASK_REF),
+ CFS_HS_LOOKUP_IT_FIND = (CFS_HS_LOOKUP_MASK_FIND |
+ CFS_HS_LOOKUP_MASK_REF),
/** return item w/o refcount if existed, otherwise add */
- CFS_HS_LOOKUP_IT_ADD = (CFS_HS_LOOKUP_MASK_FIND |
- CFS_HS_LOOKUP_MASK_ADD),
+ CFS_HS_LOOKUP_IT_ADD = (CFS_HS_LOOKUP_MASK_FIND |
+ CFS_HS_LOOKUP_MASK_ADD),
/** return item with refcount if existed, otherwise add */
- CFS_HS_LOOKUP_IT_FINDADD = (CFS_HS_LOOKUP_IT_FIND |
- CFS_HS_LOOKUP_MASK_ADD),
+ CFS_HS_LOOKUP_IT_FINDADD = (CFS_HS_LOOKUP_IT_FIND |
+ CFS_HS_LOOKUP_MASK_ADD),
/** delete if existed */
- CFS_HS_LOOKUP_IT_FINDDEL = (CFS_HS_LOOKUP_MASK_FIND |
- CFS_HS_LOOKUP_MASK_DEL)
+ CFS_HS_LOOKUP_IT_FINDDEL = (CFS_HS_LOOKUP_MASK_FIND |
+ CFS_HS_LOOKUP_MASK_DEL)
};

static struct hlist_node *
@@ -629,10 +629,10 @@ cfs_hash_bd_lookup_intent(struct cfs_hash *hs, struct cfs_hash_bd *bd,
enum cfs_hash_lookup_intent intent)

{
- struct hlist_head *hhead = cfs_hash_bd_hhead(hs, bd);
- struct hlist_node *ehnode;
- struct hlist_node *match;
- int intent_add = (intent & CFS_HS_LOOKUP_MASK_ADD) != 0;
+ struct hlist_head *hhead = cfs_hash_bd_hhead(hs, bd);
+ struct hlist_node *ehnode;
+ struct hlist_node *match;
+ int intent_add = (intent & CFS_HS_LOOKUP_MASK_ADD) != 0;

/* with this function, we can avoid a lot of useless refcount ops,
* which are expensive atomic operations most time. */
@@ -665,7 +665,8 @@ cfs_hash_bd_lookup_intent(struct cfs_hash *hs, struct cfs_hash_bd *bd,
}

struct hlist_node *
-cfs_hash_bd_lookup_locked(struct cfs_hash *hs, struct cfs_hash_bd *bd, const void *key)
+cfs_hash_bd_lookup_locked(struct cfs_hash *hs, struct cfs_hash_bd *bd,
+ const void *key)
{
return cfs_hash_bd_lookup_intent(hs, bd, key, NULL,
CFS_HS_LOOKUP_IT_FIND);
@@ -673,7 +674,8 @@ cfs_hash_bd_lookup_locked(struct cfs_hash *hs, struct cfs_hash_bd *bd, const voi
EXPORT_SYMBOL(cfs_hash_bd_lookup_locked);

struct hlist_node *
-cfs_hash_bd_peek_locked(struct cfs_hash *hs, struct cfs_hash_bd *bd, const void *key)
+cfs_hash_bd_peek_locked(struct cfs_hash *hs, struct cfs_hash_bd *bd,
+ const void *key)
{
return cfs_hash_bd_lookup_intent(hs, bd, key, NULL,
CFS_HS_LOOKUP_IT_PEEK);
@@ -706,7 +708,7 @@ cfs_hash_multi_bd_lock(struct cfs_hash *hs, struct cfs_hash_bd *bds,
unsigned n, int excl)
{
struct cfs_hash_bucket *prev = NULL;
- int i;
+ int i;

/**
* bds must be ascendantly ordered by bd->bd_bucket->hsb_index.
@@ -729,7 +731,7 @@ cfs_hash_multi_bd_unlock(struct cfs_hash *hs, struct cfs_hash_bd *bds,
unsigned n, int excl)
{
struct cfs_hash_bucket *prev = NULL;
- int i;
+ int i;

cfs_hash_for_each_bd(bds, n, i) {
if (prev != bds[i].bd_bucket) {
@@ -743,8 +745,8 @@ static struct hlist_node *
cfs_hash_multi_bd_lookup_locked(struct cfs_hash *hs, struct cfs_hash_bd *bds,
unsigned n, const void *key)
{
- struct hlist_node *ehnode;
- unsigned i;
+ struct hlist_node *ehnode;
+ unsigned i;

cfs_hash_for_each_bd(bds, n, i) {
ehnode = cfs_hash_bd_lookup_intent(hs, &bds[i], key, NULL,
@@ -756,13 +758,13 @@ cfs_hash_multi_bd_lookup_locked(struct cfs_hash *hs, struct cfs_hash_bd *bds,
}

static struct hlist_node *
-cfs_hash_multi_bd_findadd_locked(struct cfs_hash *hs,
- struct cfs_hash_bd *bds, unsigned n, const void *key,
+cfs_hash_multi_bd_findadd_locked(struct cfs_hash *hs, struct cfs_hash_bd *bds,
+ unsigned n, const void *key,
struct hlist_node *hnode, int noref)
{
- struct hlist_node *ehnode;
- int intent;
- unsigned i;
+ struct hlist_node *ehnode;
+ int intent;
+ unsigned i;

LASSERT(hnode != NULL);
intent = (!noref * CFS_HS_LOOKUP_MASK_REF) | CFS_HS_LOOKUP_IT_PEEK;
@@ -777,7 +779,7 @@ cfs_hash_multi_bd_findadd_locked(struct cfs_hash *hs,
if (i == 1) { /* only one bucket */
cfs_hash_bd_add_locked(hs, &bds[0], hnode);
} else {
- struct cfs_hash_bd mybd;
+ struct cfs_hash_bd mybd;

cfs_hash_bd_get(hs, key, &mybd);
cfs_hash_bd_add_locked(hs, &mybd, hnode);
@@ -791,8 +793,8 @@ cfs_hash_multi_bd_finddel_locked(struct cfs_hash *hs, struct cfs_hash_bd *bds,
unsigned n, const void *key,
struct hlist_node *hnode)
{
- struct hlist_node *ehnode;
- unsigned i;
+ struct hlist_node *ehnode;
+ unsigned int i;

cfs_hash_for_each_bd(bds, n, i) {
ehnode = cfs_hash_bd_lookup_intent(hs, &bds[i], key, hnode,
@@ -806,7 +808,7 @@ cfs_hash_multi_bd_finddel_locked(struct cfs_hash *hs, struct cfs_hash_bd *bds,
static void
cfs_hash_bd_order(struct cfs_hash_bd *bd1, struct cfs_hash_bd *bd2)
{
- int rc;
+ int rc;

if (bd2->bd_bucket == NULL)
return;
@@ -831,7 +833,8 @@ cfs_hash_bd_order(struct cfs_hash_bd *bd1, struct cfs_hash_bd *bd2)
}

void
-cfs_hash_dual_bd_get(struct cfs_hash *hs, const void *key, struct cfs_hash_bd *bds)
+cfs_hash_dual_bd_get(struct cfs_hash *hs, const void *key,
+ struct cfs_hash_bd *bds)
{
/* NB: caller should hold hs_lock.rw if REHASH is set */
cfs_hash_bd_from_key(hs, hs->hs_buckets,
@@ -894,7 +897,7 @@ static void
cfs_hash_buckets_free(struct cfs_hash_bucket **buckets,
int bkt_size, int prev_size, int size)
{
- int i;
+ int i;

for (i = prev_size; i < size; i++) {
if (buckets[i] != NULL)
@@ -914,7 +917,7 @@ cfs_hash_buckets_realloc(struct cfs_hash *hs, struct cfs_hash_bucket **old_bkts,
unsigned int old_size, unsigned int new_size)
{
struct cfs_hash_bucket **new_bkts;
- int i;
+ int i;

LASSERT(old_size == 0 || old_bkts != NULL);

@@ -932,7 +935,7 @@ cfs_hash_buckets_realloc(struct cfs_hash *hs, struct cfs_hash_bucket **old_bkts,

for (i = old_size; i < new_size; i++) {
struct hlist_head *hhead;
- struct cfs_hash_bd bd;
+ struct cfs_hash_bd bd;

LIBCFS_ALLOC(new_bkts[i], cfs_hash_bkt_size(hs));
if (new_bkts[i] == NULL) {
@@ -969,7 +972,7 @@ cfs_hash_buckets_realloc(struct cfs_hash *hs, struct cfs_hash_bucket **old_bkts,
* @max_bits - Maximum allowed hash table resize, in bits
* @ops - Registered hash table operations
* @flags - CFS_HASH_REHASH enable synamic hash resizing
- * - CFS_HASH_SORT enable chained hash sort
+ * - CFS_HASH_SORT enable chained hash sort
*/
static int cfs_hash_rehash_worker(cfs_workitem_t *wi);

@@ -977,10 +980,10 @@ static int cfs_hash_rehash_worker(cfs_workitem_t *wi);
static int cfs_hash_dep_print(cfs_workitem_t *wi)
{
struct cfs_hash *hs = container_of(wi, struct cfs_hash, hs_dep_wi);
- int dep;
- int bkt;
- int off;
- int bits;
+ int dep;
+ int bkt;
+ int off;
+ int bits;

spin_lock(&hs->hs_dep_lock);
dep = hs->hs_dep_max;
@@ -1031,7 +1034,7 @@ cfs_hash_create(char *name, unsigned cur_bits, unsigned max_bits,
struct cfs_hash_ops *ops, unsigned flags)
{
struct cfs_hash *hs;
- int len;
+ int len;

CLASSERT(CFS_HASH_THETA_BITS < 15);

@@ -1077,7 +1080,7 @@ cfs_hash_create(char *name, unsigned cur_bits, unsigned max_bits,
hs->hs_max_bits = (__u8)max_bits;
hs->hs_bkt_bits = (__u8)bkt_bits;

- hs->hs_ops = ops;
+ hs->hs_ops = ops;
hs->hs_extra_bytes = extra_bytes;
hs->hs_rehash_bits = 0;
cfs_wi_init(&hs->hs_rehash_wi, hs, cfs_hash_rehash_worker);
@@ -1102,10 +1105,10 @@ EXPORT_SYMBOL(cfs_hash_create);
static void
cfs_hash_destroy(struct cfs_hash *hs)
{
- struct hlist_node *hnode;
- struct hlist_node *pos;
- struct cfs_hash_bd bd;
- int i;
+ struct hlist_node *hnode;
+ struct hlist_node *pos;
+ struct cfs_hash_bd bd;
+ int i;

LASSERT(hs != NULL);
LASSERT(!cfs_hash_is_exiting(hs) &&
@@ -1223,8 +1226,8 @@ cfs_hash_rehash_inline(struct cfs_hash *hs)
void
cfs_hash_add(struct cfs_hash *hs, const void *key, struct hlist_node *hnode)
{
- struct cfs_hash_bd bd;
- int bits;
+ struct cfs_hash_bd bd;
+ int bits;

LASSERT(hlist_unhashed(hnode));

@@ -1248,8 +1251,8 @@ cfs_hash_find_or_add(struct cfs_hash *hs, const void *key,
struct hlist_node *hnode, int noref)
{
struct hlist_node *ehnode;
- struct cfs_hash_bd bds[2];
- int bits = 0;
+ struct cfs_hash_bd bds[2];
+ int bits = 0;

LASSERT(hlist_unhashed(hnode));

@@ -1261,7 +1264,7 @@ cfs_hash_find_or_add(struct cfs_hash *hs, const void *key,
hnode, noref);
cfs_hash_dual_bd_unlock(hs, bds, 1);

- if (ehnode == hnode) /* new item added */
+ if (ehnode == hnode) /* new item added */
bits = cfs_hash_rehash_bits(hs);
cfs_hash_unlock(hs, 0);
if (bits > 0)
@@ -1276,7 +1279,8 @@ cfs_hash_find_or_add(struct cfs_hash *hs, const void *key,
* Returns 0 on success or -EALREADY on key collisions.
*/
int
-cfs_hash_add_unique(struct cfs_hash *hs, const void *key, struct hlist_node *hnode)
+cfs_hash_add_unique(struct cfs_hash *hs, const void *key,
+ struct hlist_node *hnode)
{
return cfs_hash_find_or_add(hs, key, hnode, 1) != hnode ?
-EALREADY : 0;
@@ -1309,9 +1313,9 @@ EXPORT_SYMBOL(cfs_hash_findadd_unique);
void *
cfs_hash_del(struct cfs_hash *hs, const void *key, struct hlist_node *hnode)
{
- void *obj = NULL;
- int bits = 0;
- struct cfs_hash_bd bds[2];
+ void *obj = NULL;
+ int bits = 0;
+ struct cfs_hash_bd bds[2];

cfs_hash_lock(hs, 0);
cfs_hash_dual_bd_get_and_lock(hs, key, bds, 1);
@@ -1364,9 +1368,9 @@ EXPORT_SYMBOL(cfs_hash_del_key);
void *
cfs_hash_lookup(struct cfs_hash *hs, const void *key)
{
- void *obj = NULL;
- struct hlist_node *hnode;
- struct cfs_hash_bd bds[2];
+ void *obj = NULL;
+ struct hlist_node *hnode;
+ struct cfs_hash_bd bds[2];

cfs_hash_lock(hs, 0);
cfs_hash_dual_bd_get_and_lock(hs, key, bds, 0);
@@ -1383,7 +1387,8 @@ cfs_hash_lookup(struct cfs_hash *hs, const void *key)
EXPORT_SYMBOL(cfs_hash_lookup);

static void
-cfs_hash_for_each_enter(struct cfs_hash *hs) {
+cfs_hash_for_each_enter(struct cfs_hash *hs)
+{
LASSERT(!cfs_hash_is_exiting(hs));

if (!cfs_hash_with_rehash(hs))
@@ -1408,7 +1413,8 @@ cfs_hash_for_each_enter(struct cfs_hash *hs) {
}

static void
-cfs_hash_for_each_exit(struct cfs_hash *hs) {
+cfs_hash_for_each_exit(struct cfs_hash *hs)
+{
int remained;
int bits;

@@ -1439,14 +1445,15 @@ cfs_hash_for_each_exit(struct cfs_hash *hs) {
*/
static __u64
cfs_hash_for_each_tight(struct cfs_hash *hs, cfs_hash_for_each_cb_t func,
- void *data, int remove_safe) {
- struct hlist_node *hnode;
- struct hlist_node *pos;
- struct cfs_hash_bd bd;
- __u64 count = 0;
- int excl = !!remove_safe;
- int loop = 0;
- int i;
+ void *data, int remove_safe)
+{
+ struct hlist_node *hnode;
+ struct hlist_node *pos;
+ struct cfs_hash_bd bd;
+ __u64 count = 0;
+ int excl = !!remove_safe;
+ int loop = 0;
+ int i;

cfs_hash_for_each_enter(hs);

@@ -1514,8 +1521,8 @@ void
cfs_hash_cond_del(struct cfs_hash *hs, cfs_hash_cond_opt_cb_t func, void *data)
{
struct cfs_hash_cond_arg arg = {
- .func = func,
- .arg = data,
+ .func = func,
+ .arg = data,
};

cfs_hash_for_each_tight(hs, cfs_hash_cond_del_locked, &arg, 1);
@@ -1523,16 +1530,17 @@ cfs_hash_cond_del(struct cfs_hash *hs, cfs_hash_cond_opt_cb_t func, void *data)
EXPORT_SYMBOL(cfs_hash_cond_del);

void
-cfs_hash_for_each(struct cfs_hash *hs,
- cfs_hash_for_each_cb_t func, void *data)
+cfs_hash_for_each(struct cfs_hash *hs, cfs_hash_for_each_cb_t func,
+ void *data)
{
cfs_hash_for_each_tight(hs, func, data, 0);
}
EXPORT_SYMBOL(cfs_hash_for_each);

void
-cfs_hash_for_each_safe(struct cfs_hash *hs,
- cfs_hash_for_each_cb_t func, void *data) {
+cfs_hash_for_each_safe(struct cfs_hash *hs, cfs_hash_for_each_cb_t func,
+ void *data)
+{
cfs_hash_for_each_tight(hs, func, data, 1);
}
EXPORT_SYMBOL(cfs_hash_for_each_safe);
@@ -1581,15 +1589,16 @@ EXPORT_SYMBOL(cfs_hash_size_get);
*/
static int
cfs_hash_for_each_relax(struct cfs_hash *hs, cfs_hash_for_each_cb_t func,
- void *data) {
+ void *data)
+{
struct hlist_node *hnode;
struct hlist_node *tmp;
- struct cfs_hash_bd bd;
- __u32 version;
- int count = 0;
- int stop_on_change;
- int rc;
- int i;
+ struct cfs_hash_bd bd;
+ __u32 version;
+ int count = 0;
+ int stop_on_change;
+ int rc;
+ int i;

stop_on_change = cfs_hash_with_rehash_key(hs) ||
!cfs_hash_with_no_itemref(hs) ||
@@ -1645,8 +1654,9 @@ cfs_hash_for_each_relax(struct cfs_hash *hs, cfs_hash_for_each_cb_t func,
}

int
-cfs_hash_for_each_nolock(struct cfs_hash *hs,
- cfs_hash_for_each_cb_t func, void *data) {
+cfs_hash_for_each_nolock(struct cfs_hash *hs, cfs_hash_for_each_cb_t func,
+ void *data)
+{
if (cfs_hash_with_no_lock(hs) ||
cfs_hash_with_rehash_key(hs) ||
!cfs_hash_with_no_itemref(hs))
@@ -1677,9 +1687,10 @@ EXPORT_SYMBOL(cfs_hash_for_each_nolock);
* the required locking is in place to prevent concurrent insertions.
*/
int
-cfs_hash_for_each_empty(struct cfs_hash *hs,
- cfs_hash_for_each_cb_t func, void *data) {
- unsigned i = 0;
+cfs_hash_for_each_empty(struct cfs_hash *hs, cfs_hash_for_each_cb_t func,
+ void *data)
+{
+ unsigned i = 0;

if (cfs_hash_with_no_lock(hs))
return -EOPNOTSUPP;
@@ -1703,9 +1714,9 @@ void
cfs_hash_hlist_for_each(struct cfs_hash *hs, unsigned hindex,
cfs_hash_for_each_cb_t func, void *data)
{
- struct hlist_head *hhead;
- struct hlist_node *hnode;
- struct cfs_hash_bd bd;
+ struct hlist_head *hhead;
+ struct hlist_node *hnode;
+ struct cfs_hash_bd bd;

cfs_hash_for_each_enter(hs);
cfs_hash_lock(hs, 0);
@@ -1721,7 +1732,7 @@ cfs_hash_hlist_for_each(struct cfs_hash *hs, unsigned hindex,
break;
}
cfs_hash_bd_unlock(hs, &bd, 0);
- out:
+out:
cfs_hash_unlock(hs, 0);
cfs_hash_for_each_exit(hs);
}
@@ -1736,10 +1747,11 @@ EXPORT_SYMBOL(cfs_hash_hlist_for_each);
*/
void
cfs_hash_for_each_key(struct cfs_hash *hs, const void *key,
- cfs_hash_for_each_cb_t func, void *data) {
- struct hlist_node *hnode;
- struct cfs_hash_bd bds[2];
- unsigned i;
+ cfs_hash_for_each_cb_t func, void *data)
+{
+ struct hlist_node *hnode;
+ struct cfs_hash_bd bds[2];
+ unsigned int i;

cfs_hash_lock(hs, 0);

@@ -1777,7 +1789,7 @@ EXPORT_SYMBOL(cfs_hash_for_each_key);
void
cfs_hash_rehash_cancel_locked(struct cfs_hash *hs)
{
- int i;
+ int i;

/* need hold cfs_hash_lock(hs, 1) */
LASSERT(cfs_hash_with_rehash(hs) &&
@@ -1815,7 +1827,7 @@ EXPORT_SYMBOL(cfs_hash_rehash_cancel);
int
cfs_hash_rehash(struct cfs_hash *hs, int do_rehash)
{
- int rc;
+ int rc;

LASSERT(cfs_hash_with_rehash(hs) && !cfs_hash_with_no_lock(hs));

@@ -1845,12 +1857,12 @@ EXPORT_SYMBOL(cfs_hash_rehash);
static int
cfs_hash_rehash_bd(struct cfs_hash *hs, struct cfs_hash_bd *old)
{
- struct cfs_hash_bd new;
- struct hlist_head *hhead;
- struct hlist_node *hnode;
- struct hlist_node *pos;
- void *key;
- int c = 0;
+ struct cfs_hash_bd new;
+ struct hlist_head *hhead;
+ struct hlist_node *hnode;
+ struct hlist_node *pos;
+ void *key;
+ int c = 0;

/* hold cfs_hash_lock(hs, 1), so don't need any bucket lock */
cfs_hash_bd_for_each_hlist(hs, old, hhead) {
@@ -1876,17 +1888,17 @@ cfs_hash_rehash_bd(struct cfs_hash *hs, struct cfs_hash_bd *old)
static int
cfs_hash_rehash_worker(cfs_workitem_t *wi)
{
- struct cfs_hash *hs = container_of(wi, struct cfs_hash, hs_rehash_wi);
+ struct cfs_hash *hs = container_of(wi, struct cfs_hash, hs_rehash_wi);
struct cfs_hash_bucket **bkts;
- struct cfs_hash_bd bd;
- unsigned int old_size;
- unsigned int new_size;
- int bsize;
- int count = 0;
- int rc = 0;
- int i;
+ struct cfs_hash_bd bd;
+ unsigned int old_size;
+ unsigned int new_size;
+ int bsize;
+ int count = 0;
+ int rc = 0;
+ int i;

- LASSERT (hs != NULL && cfs_hash_with_rehash(hs));
+ LASSERT(hs != NULL && cfs_hash_with_rehash(hs));

cfs_hash_lock(hs, 0);
LASSERT(cfs_hash_is_rehashing(hs));
@@ -1958,7 +1970,7 @@ cfs_hash_rehash_worker(cfs_workitem_t *wi)
hs->hs_rehash_buckets = NULL;

hs->hs_cur_bits = hs->hs_rehash_bits;
- out:
+out:
hs->hs_rehash_bits = 0;
if (rc == -ESRCH) /* never be scheduled again */
cfs_wi_exit(cfs_sched_rehash, wi);
@@ -1986,9 +1998,9 @@ cfs_hash_rehash_worker(cfs_workitem_t *wi)
void cfs_hash_rehash_key(struct cfs_hash *hs, const void *old_key,
void *new_key, struct hlist_node *hnode)
{
- struct cfs_hash_bd bds[3];
- struct cfs_hash_bd old_bds[2];
- struct cfs_hash_bd new_bd;
+ struct cfs_hash_bd bds[3];
+ struct cfs_hash_bd old_bds[2];
+ struct cfs_hash_bd new_bd;

LASSERT(!hlist_unhashed(hnode));

@@ -2054,12 +2066,12 @@ cfs_hash_full_nbkt(struct cfs_hash *hs)

void cfs_hash_debug_str(struct cfs_hash *hs, struct seq_file *m)
{
- int dist[8] = { 0, };
- int maxdep = -1;
- int maxdepb = -1;
- int total = 0;
- int theta;
- int i;
+ int dist[8] = { 0, };
+ int maxdep = -1;
+ int maxdepb = -1;
+ int total = 0;
+ int theta;
+ int i;

cfs_hash_lock(hs, 0);
theta = __cfs_hash_theta(hs);
@@ -2085,11 +2097,11 @@ void cfs_hash_debug_str(struct cfs_hash *hs, struct seq_file *m)
* If you hash function results in a non-uniform hash the will
* be observable by outlier bucks in the distribution histogram.
*
- * Uniform hash distribution: 128/128/0/0/0/0/0/0
- * Non-Uniform hash distribution: 128/125/0/0/0/0/2/1
+ * Uniform hash distribution: 128/128/0/0/0/0/0/0
+ * Non-Uniform hash distribution: 128/125/0/0/0/0/2/1
*/
for (i = 0; i < cfs_hash_full_nbkt(hs); i++) {
- struct cfs_hash_bd bd;
+ struct cfs_hash_bd bd;

bd.bd_bucket = cfs_hash_full_bkts(hs)[i];
cfs_hash_bd_lock(hs, &bd, 0);
--
1.7.1

2015-11-02 17:22:31

by James Simmons

Subject: [PATCH v2 7/7] staging: lustre: place linux header first in hash.c

Always place linux headers first in libcfs source files. This
avoids potential build issues if any change to a libcfs header
lands that starts using a linux header definition.

Signed-off-by: James Simmons <[email protected]>
---
drivers/staging/lustre/lustre/libcfs/hash.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/drivers/staging/lustre/lustre/libcfs/hash.c b/drivers/staging/lustre/lustre/libcfs/hash.c
index ed4e1f1..4cd8776 100644
--- a/drivers/staging/lustre/lustre/libcfs/hash.c
+++ b/drivers/staging/lustre/lustre/libcfs/hash.c
@@ -106,9 +106,9 @@
* Now we support both locked iteration & lockless iteration of hash
* table. Also, user can break the iteration by return 1 in callback.
*/
+#include <linux/seq_file.h>

#include "../../include/linux/libcfs/libcfs.h"
-#include <linux/seq_file.h>

#if CFS_HASH_DEBUG_LEVEL >= CFS_HASH_DEBUG_1
static unsigned int warn_on_depth = 8;
--
1.7.1

2015-11-04 05:56:20

by Greg Kroah-Hartman

Subject: Re: [PATCH v2 1/7] staging: lustre: remove white space in libcfs_hash.h

On Mon, Nov 02, 2015 at 12:22:07PM -0500, James Simmons wrote:
> Cleanup all the unneeded white space in libcfs_hash.h.
>
> Signed-off-by: James Simmons <[email protected]>
> ---
> .../lustre/include/linux/libcfs/libcfs_hash.h | 135 ++++++++++----------
> 1 files changed, 70 insertions(+), 65 deletions(-)

Doesn't apply, did I already queue up this series?

confused,

greg k-h

2015-11-04 11:41:46

by Sudip Mukherjee

Subject: Re: [PATCH v2 1/7] staging: lustre: remove white space in libcfs_hash.h

On Tue, Nov 03, 2015 at 07:46:07PM -0800, Greg Kroah-Hartman wrote:
> On Mon, Nov 02, 2015 at 12:22:07PM -0500, James Simmons wrote:
> > Cleanup all the unneeded white space in libcfs_hash.h.
> >
> > Signed-off-by: James Simmons <[email protected]>
> > ---
> > .../lustre/include/linux/libcfs/libcfs_hash.h | 135 ++++++++++----------
> > 1 files changed, 70 insertions(+), 65 deletions(-)
>
> Doesn't apply, did I already queue up this series?

No. This did not apply because of:
c7fdf4a3959f ("staging: lustre: fix remaining checkpatch issues for libcfs_hash.h")

regards
sudip

2015-11-04 15:05:44

by Simmons, James A.

Subject: RE: [PATCH v2 1/7] staging: lustre: remove white space in libcfs_hash.h

>On Mon, Nov 02, 2015 at 12:22:07PM -0500, James Simmons wrote:
>> Cleanup all the unneeded white space in libcfs_hash.h.
>>
>> Signed-off-by: James Simmons <[email protected]>
>> ---
>> .../lustre/include/linux/libcfs/libcfs_hash.h | 135 ++++++++++----------
>> 1 files changed, 70 insertions(+), 65 deletions(-)
>
>Doesn't apply, did I already queue up this series?

I did a second version of those patches; you will notice [PATCH v2] in one batch.
The reason I did a second batch was to break up the "fix checkpatch issues" patch
from the first series. I was trying to behave :-)

2015-11-04 15:07:18

by Simmons, James A.

Subject: RE: [lustre-devel] [PATCH v2 1/7] staging: lustre: remove white space in libcfs_hash.h

>On Tue, Nov 03, 2015 at 07:46:07PM -0800, Greg Kroah-Hartman wrote:
>> On Mon, Nov 02, 2015 at 12:22:07PM -0500, James Simmons wrote:
>> > Cleanup all the unneeded white space in libcfs_hash.h.
>> >
>> > Signed-off-by: James Simmons <[email protected]>
>> > ---
>> > .../lustre/include/linux/libcfs/libcfs_hash.h | 135 ++++++++++----------
>> > 1 files changed, 70 insertions(+), 65 deletions(-)
>>
>> Doesn't apply, did I already queue up this series?
>
>No. This did not apply because of:
>c7fdf4a3959f ("staging: lustre: fix remaining checkpatch issues for libcfs_hash.h")

Surprised this was merged, which is why I did a second series for this.

2015-11-04 17:20:22

by Greg Kroah-Hartman

Subject: Re: [lustre-devel] [PATCH v2 1/7] staging: lustre: remove white space in libcfs_hash.h

On Wed, Nov 04, 2015 at 03:07:13PM +0000, Simmons, James A. wrote:
> >On Tue, Nov 03, 2015 at 07:46:07PM -0800, Greg Kroah-Hartman wrote:
> >> On Mon, Nov 02, 2015 at 12:22:07PM -0500, James Simmons wrote:
> >> > Cleanup all the unneeded white space in libcfs_hash.h.
> >> >
> >> > Signed-off-by: James Simmons <[email protected]>
> >> > ---
> >> > .../lustre/include/linux/libcfs/libcfs_hash.h | 135 ++++++++++----------
> >> > 1 files changed, 70 insertions(+), 65 deletions(-)
> >>
> >> Doesn't apply, did I already queue up this series?
> >
> >No. This did not apply because of:
> >c7fdf4a3959f ("staging: lustre: fix remaining checkpatch issues for libcfs_hash.h")
>
> Surprise this was merged which is why I did a second series for this.

Oops, my mistake, I've now dropped this patch from my tree. Please
resend this series so I can try to sync up properly.

thanks,

greg k-h