2013-06-03 18:57:36

by Jörn Engel

Subject: [PATCH 0/2] introduce list_for_each_entry_del

A purely janitorial patchset. A fairly common pattern is to take a
list, remove every object from it and do something with this object -
usually kfree() some variant. A stupid grep identified roughly 300
instances, with many more hidden behind more complicated patterns to
achieve the same end results.

This patchset moves the boilerplate code into list.h and uses it in a
few example places. Diffstat is pretty clear and imo the code is
improved as well.
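
To make the shape of the conversion concrete, here is a sketch with a
hypothetical struct foo and foo_list (not taken from the patches
themselves) of the loop being replaced and what it turns into:

    /* before: open-coded drain-and-free */
    while (!list_empty(&foo_list)) {
        struct foo *f = list_first_entry(&foo_list, struct foo, list);

        list_del(&f->list);
        kfree(f);
    }

    /* after: the boilerplate lives in list.h */
    struct foo *f;

    list_for_each_entry_del(f, &foo_list, list)
        kfree(f);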

Drawback is that object size is growing. I think an ideal compiler
should be able to optimize all the overhead away, but 4.7 just isn't
there yet. Or maybe I just messed up - patches are only
compile-tested after all. Comments/ideas are welcome.

Joern Engel (2):
list: add list_for_each_entry_del
btrfs: use list_for_each_entry_del

fs/btrfs/backref.c | 15 +++------------
fs/btrfs/compression.c | 4 +---
fs/btrfs/disk-io.c | 6 +-----
fs/btrfs/extent-tree.c | 17 +++--------------
fs/btrfs/extent_io.c | 8 ++------
fs/btrfs/inode.c | 16 +++-------------
fs/btrfs/ordered-data.c | 7 +------
fs/btrfs/qgroup.c | 22 ++++------------------
fs/btrfs/relocation.c | 6 +-----
fs/btrfs/scrub.c | 9 +++------
fs/btrfs/transaction.c | 5 +----
fs/btrfs/volumes.c | 11 ++---------
include/linux/list.h | 33 +++++++++++++++++++++++++++++++++
13 files changed, 58 insertions(+), 101 deletions(-)

--
1.7.10.4


2013-06-03 18:57:46

by Jörn Engel

Subject: [PATCH 1/2] list: add list_for_each_entry_del

I have seen a lot of boilerplate code that either follows the pattern of
    while (!list_empty(head)) {
        pos = list_entry(head->next, struct foo, list);
        list_del(&pos->list);
        ...
    }
or some variant thereof.

With this patch in, people can use
    list_for_each_entry_del(pos, head, list) {
        ...
    }

The patch also adds a list_for_each_del variant, even though I have
only found a single user for that one so far.

Signed-off-by: Joern Engel <[email protected]>
---
include/linux/list.h | 33 +++++++++++++++++++++++++++++++++
1 file changed, 33 insertions(+)

diff --git a/include/linux/list.h b/include/linux/list.h
index 6a1f8df..e09fe10 100644
--- a/include/linux/list.h
+++ b/include/linux/list.h
@@ -361,6 +361,17 @@ static inline void list_splice_tail_init(struct list_head *list,
#define list_first_entry(ptr, type, member) \
list_entry((ptr)->next, type, member)

+static inline struct list_head *list_first_del(struct list_head *head)
+{
+ struct list_head *item;
+
+ if (list_empty(head))
+ return NULL;
+ item = head->next;
+ list_del(item);
+ return item;
+}
+
/**
* list_for_each - iterate over a list
* @pos: the &struct list_head to use as a loop cursor.
@@ -483,6 +494,28 @@ static inline void list_splice_tail_init(struct list_head *list,
pos = list_entry(pos->member.next, typeof(*pos), member))

/**
+ * list_for_each_del - iterate over a list, deleting each entry
+ * @pos: the &struct list_head to use as a loop cursor.
+ * @head: the head of your list.
+ *
+ * Calls list_del() on pos on each iteration
+ */
+#define list_for_each_del(pos, head) \
+ while ((pos = list_first_del(head)))
+
+/**
+ * list_for_each_entry_del - iterate over a list of given type, deleting each entry
+ * @pos: the type * to use as loop cursor.
+ * @head: the head of your list.
+ * @member: the name of the list_struct within the struct
+ *
+ * Calls list_del() on pos on each iteration
+ */
+#define list_for_each_entry_del(pos, head, member) \
+ while (pos = list_entry(list_first_del(head), typeof(*pos), member), \
+ &pos->member)
+
+/**
* list_for_each_entry_safe - iterate over list of given type safe against removal of list entry
* @pos: the type * to use as a loop cursor.
* @n: another type * to use as temporary storage
--
1.7.10.4

2013-06-03 18:58:02

by Jörn Engel

Subject: [PATCH 2/2] btrfs: use list_for_each_entry_del

Signed-off-by: Joern Engel <[email protected]>
---
fs/btrfs/backref.c | 15 +++------------
fs/btrfs/compression.c | 4 +---
fs/btrfs/disk-io.c | 6 +-----
fs/btrfs/extent-tree.c | 17 +++--------------
fs/btrfs/extent_io.c | 8 ++------
fs/btrfs/inode.c | 16 +++-------------
fs/btrfs/ordered-data.c | 7 +------
fs/btrfs/qgroup.c | 22 ++++------------------
fs/btrfs/relocation.c | 6 +-----
fs/btrfs/scrub.c | 9 +++------
fs/btrfs/transaction.c | 5 +----
fs/btrfs/volumes.c | 11 ++---------
12 files changed, 25 insertions(+), 101 deletions(-)

diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
index bd605c8..ab51655 100644
--- a/fs/btrfs/backref.c
+++ b/fs/btrfs/backref.c
@@ -893,9 +893,7 @@ again:
if (ret)
goto out;

- while (!list_empty(&prefs)) {
- ref = list_first_entry(&prefs, struct __prelim_ref, list);
- list_del(&ref->list);
+ list_for_each_entry_del(ref, &prefs, list) {
WARN_ON(ref->count < 0);
if (ref->count && ref->root_id && ref->parent == 0) {
/* no parent == root of tree */
@@ -937,17 +935,10 @@ again:

out:
btrfs_free_path(path);
- while (!list_empty(&prefs)) {
- ref = list_first_entry(&prefs, struct __prelim_ref, list);
- list_del(&ref->list);
+ list_for_each_entry_del(ref, &prefs, list)
kfree(ref);
- }
- while (!list_empty(&prefs_delayed)) {
- ref = list_first_entry(&prefs_delayed, struct __prelim_ref,
- list);
- list_del(&ref->list);
+ list_for_each_entry_del(ref, &prefs_delayed, list)
kfree(ref);
- }

return ret;
}
diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 15b9408..c8a890b 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -841,9 +841,7 @@ static void free_workspaces(void)
int i;

for (i = 0; i < BTRFS_COMPRESS_TYPES; i++) {
- while (!list_empty(&comp_idle_workspace[i])) {
- workspace = comp_idle_workspace[i].next;
- list_del(workspace);
+ list_for_each_del(workspace, &comp_idle_workspace[i]) {
btrfs_compress_op[i]->free_workspace(workspace);
atomic_dec(&comp_alloc_workspace[i]);
}
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 6d19a0a..66d99c9 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -3289,11 +3289,7 @@ static void del_fs_roots(struct btrfs_fs_info *fs_info)
struct btrfs_root *gang[8];
int i;

- while (!list_empty(&fs_info->dead_roots)) {
- gang[0] = list_entry(fs_info->dead_roots.next,
- struct btrfs_root, root_list);
- list_del(&gang[0]->root_list);
-
+ list_for_each_entry_del(gang[0], &fs_info->dead_roots, root_list) {
if (gang[0]->in_radix) {
btrfs_free_fs_root(fs_info, gang[0]);
} else {
diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index 3d55123..42de094 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -2435,10 +2435,7 @@ int btrfs_delayed_refs_qgroup_accounting(struct btrfs_trans_handle *trans,
if (!trans->delayed_ref_elem.seq)
return 0;

- while (!list_empty(&trans->qgroup_ref_list)) {
- qgroup_update = list_first_entry(&trans->qgroup_ref_list,
- struct qgroup_update, list);
- list_del(&qgroup_update->list);
+ list_for_each_entry_del(qgroup_update, &trans->qgroup_ref_list, list) {
if (!ret)
ret = btrfs_qgroup_account_ref(
trans, fs_info, qgroup_update->node,
@@ -7821,12 +7818,8 @@ int btrfs_free_block_groups(struct btrfs_fs_info *info)
struct rb_node *n;

down_write(&info->extent_commit_sem);
- while (!list_empty(&info->caching_block_groups)) {
- caching_ctl = list_entry(info->caching_block_groups.next,
- struct btrfs_caching_control, list);
- list_del(&caching_ctl->list);
+ list_for_each_entry_del(caching_ctl, &info->caching_block_groups, list)
put_caching_control(caching_ctl);
- }
up_write(&info->extent_commit_sem);

spin_lock(&info->block_group_cache_lock);
@@ -7868,10 +7861,7 @@ int btrfs_free_block_groups(struct btrfs_fs_info *info)

release_global_block_rsv(info);

- while(!list_empty(&info->space_info)) {
- space_info = list_entry(info->space_info.next,
- struct btrfs_space_info,
- list);
+ list_for_each_entry_del(space_info, &info->space_info, list) {
if (btrfs_test_opt(info->tree_root, ENOSPC_DEBUG)) {
if (space_info->bytes_pinned > 0 ||
space_info->bytes_reserved > 0 ||
@@ -7880,7 +7870,6 @@ int btrfs_free_block_groups(struct btrfs_fs_info *info)
dump_space_info(space_info, 0, 0);
}
}
- list_del(&space_info->list);
kfree(space_info);
}
return 0;
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index cdee391..2b226dd 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -87,24 +87,20 @@ void extent_io_exit(void)
struct extent_state *state;
struct extent_buffer *eb;

- while (!list_empty(&states)) {
- state = list_entry(states.next, struct extent_state, leak_list);
+ list_for_each_entry_del(state, &states, leak_list) {
printk(KERN_ERR "btrfs state leak: start %llu end %llu "
"state %lu in tree %p refs %d\n",
(unsigned long long)state->start,
(unsigned long long)state->end,
state->state, state->tree, atomic_read(&state->refs));
- list_del(&state->leak_list);
kmem_cache_free(extent_state_cache, state);

}

- while (!list_empty(&buffers)) {
- eb = list_entry(buffers.next, struct extent_buffer, leak_list);
+ list_for_each_entry_del(eb, &buffers, leak_list) {
printk(KERN_ERR "btrfs buffer leak start %llu len %lu "
"refs %d\n", (unsigned long long)eb->start,
eb->len, atomic_read(&eb->refs));
- list_del(&eb->leak_list);
kmem_cache_free(extent_buffer_cache, eb);
}

diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 09c58a3..0230755 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -623,13 +623,8 @@ static noinline int submit_compressed_extents(struct inode *inode,
return 0;

again:
- while (!list_empty(&async_cow->extents)) {
- async_extent = list_entry(async_cow->extents.next,
- struct async_extent, list);
- list_del(&async_extent->list);
-
+ list_for_each_entry_del(async_extent, &async_cow->extents, list) {
io_tree = &BTRFS_I(inode)->io_tree;
-
retry:
/* did the compression code fall back to uncompressed IO? */
if (!async_extent->pages) {
@@ -1161,11 +1156,8 @@ static noinline int csum_exist_in_range(struct btrfs_root *root,
if (ret == 0 && list_empty(&list))
return 0;

- while (!list_empty(&list)) {
- sums = list_entry(list.next, struct btrfs_ordered_sum, list);
- list_del(&sums->list);
+ list_for_each_entry_del(sums, &list, list)
kfree(sums);
- }
return 1;
}

@@ -2882,9 +2874,7 @@ void btrfs_run_delayed_iputs(struct btrfs_root *root)
list_splice_init(&fs_info->delayed_iputs, &list);
spin_unlock(&fs_info->delayed_iput_lock);

- while (!list_empty(&list)) {
- delayed = list_entry(list.next, struct delayed_iput, list);
- list_del(&delayed->list);
+ list_for_each_entry_del(delayed, &list, list) {
iput(delayed->inode);
kfree(delayed);
}
diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c
index 005c45d..f2a2f19 100644
--- a/fs/btrfs/ordered-data.c
+++ b/fs/btrfs/ordered-data.c
@@ -479,7 +479,6 @@ void btrfs_free_logged_extents(struct btrfs_root *log, u64 transid)
*/
void btrfs_put_ordered_extent(struct btrfs_ordered_extent *entry)
{
- struct list_head *cur;
struct btrfs_ordered_sum *sum;

trace_btrfs_ordered_extent_put(entry->inode, entry);
@@ -487,12 +486,8 @@ void btrfs_put_ordered_extent(struct btrfs_ordered_extent *entry)
if (atomic_dec_and_test(&entry->refs)) {
if (entry->inode)
btrfs_add_delayed_iput(entry->inode);
- while (!list_empty(&entry->list)) {
- cur = entry->list.next;
- sum = list_entry(cur, struct btrfs_ordered_sum, list);
- list_del(&sum->list);
+ list_for_each_entry_del(sum, &entry->list, list)
kfree(sum);
- }
kmem_cache_free(btrfs_ordered_extent_cache, entry);
}
}
diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
index b44124d..3228737 100644
--- a/fs/btrfs/qgroup.c
+++ b/fs/btrfs/qgroup.c
@@ -164,19 +164,13 @@ static int del_qgroup_rb(struct btrfs_fs_info *fs_info, u64 qgroupid)
rb_erase(&qgroup->node, &fs_info->qgroup_tree);
list_del(&qgroup->dirty);

- while (!list_empty(&qgroup->groups)) {
- list = list_first_entry(&qgroup->groups,
- struct btrfs_qgroup_list, next_group);
- list_del(&list->next_group);
+ list_for_each_entry_del(list, &qgroup->groups, next_group) {
list_del(&list->next_member);
kfree(list);
}

- while (!list_empty(&qgroup->members)) {
- list = list_first_entry(&qgroup->members,
- struct btrfs_qgroup_list, next_member);
+ list_for_each_entry_del(list, &qgroup->members, next_member) {
list_del(&list->next_group);
- list_del(&list->next_member);
kfree(list);
}
kfree(qgroup);
@@ -422,21 +416,13 @@ void btrfs_free_qgroup_config(struct btrfs_fs_info *fs_info)

WARN_ON(!list_empty(&qgroup->dirty));

- while (!list_empty(&qgroup->groups)) {
- list = list_first_entry(&qgroup->groups,
- struct btrfs_qgroup_list,
- next_group);
- list_del(&list->next_group);
+ list_for_each_entry_del(list, &qgroup->groups, next_group) {
list_del(&list->next_member);
kfree(list);
}

- while (!list_empty(&qgroup->members)) {
- list = list_first_entry(&qgroup->members,
- struct btrfs_qgroup_list,
- next_member);
+ list_for_each_entry_del(list, &qgroup->members, next_member) {
list_del(&list->next_group);
- list_del(&list->next_member);
kfree(list);
}
kfree(qgroup);
diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
index b67171e..385105e 100644
--- a/fs/btrfs/relocation.c
+++ b/fs/btrfs/relocation.c
@@ -4270,11 +4270,7 @@ int btrfs_recover_relocation(struct btrfs_root *root)

rc->merge_reloc_tree = 1;

- while (!list_empty(&reloc_roots)) {
- reloc_root = list_entry(reloc_roots.next,
- struct btrfs_root, root_list);
- list_del(&reloc_root->root_list);
-
+ list_for_each_entry_del(reloc_root, &reloc_roots, root_list) {
if (btrfs_root_refs(&reloc_root->root_item) == 0) {
list_add_tail(&reloc_root->root_list,
&rc->reloc_roots);
diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index 85e072b..b8954312 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -306,13 +306,10 @@ static void scrub_pending_trans_workers_dec(struct scrub_ctx *sctx)

static void scrub_free_csums(struct scrub_ctx *sctx)
{
- while (!list_empty(&sctx->csum_list)) {
- struct btrfs_ordered_sum *sum;
- sum = list_first_entry(&sctx->csum_list,
- struct btrfs_ordered_sum, list);
- list_del(&sum->list);
+ struct btrfs_ordered_sum *sum;
+
+ list_for_each_entry_del(sum, &sctx->csum_list, list)
kfree(sum);
- }
}

static noinline_for_stack void scrub_free_ctx(struct scrub_ctx *sctx)
diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
index 50767bb..70b092b 100644
--- a/fs/btrfs/transaction.c
+++ b/fs/btrfs/transaction.c
@@ -1885,12 +1885,9 @@ int btrfs_clean_old_snapshots(struct btrfs_root *root)
list_splice_init(&fs_info->dead_roots, &list);
spin_unlock(&fs_info->trans_lock);

- while (!list_empty(&list)) {
+ list_for_each_entry_del(root, &list, root_list) {
int ret;

- root = list_entry(list.next, struct btrfs_root, root_list);
- list_del(&root->root_list);
-
btrfs_kill_all_delayed_nodes(root);

if (btrfs_header_backref_rev(root->node) <
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 2854c82..1117d36 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -65,10 +65,7 @@ static void free_fs_devices(struct btrfs_fs_devices *fs_devices)
{
struct btrfs_device *device;
WARN_ON(fs_devices->opened);
- while (!list_empty(&fs_devices->devices)) {
- device = list_entry(fs_devices->devices.next,
- struct btrfs_device, dev_list);
- list_del(&device->dev_list);
+ list_for_each_entry_del(device, &fs_devices->devices, dev_list) {
rcu_string_free(device->name);
kfree(device);
}
@@ -92,12 +89,8 @@ void btrfs_cleanup_fs_uuids(void)
{
struct btrfs_fs_devices *fs_devices;

- while (!list_empty(&fs_uuids)) {
- fs_devices = list_entry(fs_uuids.next,
- struct btrfs_fs_devices, list);
- list_del(&fs_devices->list);
+ list_for_each_entry_del(fs_devices, &fs_uuids, list)
free_fs_devices(fs_devices);
- }
}

static noinline struct btrfs_device *__find_device(struct list_head *head,
--
1.7.10.4

2013-06-03 19:36:08

by Jörn Engel

Subject: Re: [PATCH 0/2] introduce list_for_each_entry_del

On Mon, 3 June 2013 13:28:03 -0400, Joern Engel wrote:
>
> Drawback is that object size is growing. I think an ideal compiler
> should be able to optimize all the overhead away, but 4.7 just isn't
> there yet. Or maybe I just messed up - patches are only
> compile-tested after all. Comments/ideas are welcome.

Actually, replacing the inline function with a bit of macro hell seems
to yield identical object size again. And in the case of
compression.o, it even removes 528 bytes or so of object code.

New 1/2 below.

Jörn

--
A defeated army first battles and then seeks victory.
-- Sun Tzu

[PATCH 1/2] list: add list_for_each_entry_del

I have seen a lot of boilerplate code that either follows the pattern of
    while (!list_empty(head)) {
        pos = list_entry(head->next, struct foo, list);
        list_del(&pos->list);
        ...
    }
or some variant thereof.

With this patch in, people can use
    list_for_each_entry_del(pos, head, list) {
        ...
    }

The patch also adds a list_for_each_del variant, even though I have
only found a single user for that one so far.

Signed-off-by: Joern Engel <[email protected]>
---
include/linux/list.h | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)

diff --git a/include/linux/list.h b/include/linux/list.h
index 6a1f8df..49325ca 100644
--- a/include/linux/list.h
+++ b/include/linux/list.h
@@ -483,6 +483,27 @@ static inline void list_splice_tail_init(struct list_head *list,
pos = list_entry(pos->member.next, typeof(*pos), member))

/**
+ * list_for_each_del - iterate over a list, deleting each entry
+ * @pos: the &struct list_head to use as a loop cursor.
+ * @head: the head of your list.
+ *
+ * Calls list_del() on pos on each iteration
+ */
+#define list_for_each_del(pos, head) \
+ while (list_empty(head) ? 0 : (pos = (head)->next, list_del(pos), 1))
+
+/**
+ * list_for_each_entry_del - iterate over a list of given type, deleting each entry
+ * @pos: the type * to use as loop cursor.
+ * @head: the head of your list.
+ * @member: the name of the list_struct within the struct
+ *
+ * Calls list_del() on pos on each iteration
+ */
+#define list_for_each_entry_del(pos, head, member) \
+ while (list_empty(head) ? 0 : (pos = list_first_entry((head), typeof(*pos), member), list_del(&pos->member), 1))
+
+/**
* list_for_each_entry_safe - iterate over list of given type safe against removal of list entry
* @pos: the type * to use as a loop cursor.
* @n: another type * to use as temporary storage
--
1.7.10.4

2013-06-03 20:49:33

by Christoph Hellwig

Subject: Re: [PATCH 0/2] introduce list_for_each_entry_del

I can't say I like the structure.

A list_pop that removes an entry from the head or returns NULL if the
list is empty would lead to nice while loops that are obviously
readable instead.
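
For illustration, the kind of while loop meant here, assuming a list_pop()
that takes the list head and returns the removed entry or NULL (struct foo
and foo_list are made up):

    struct list_head *item;

    while ((item = list_pop(&foo_list)))
        kfree(list_entry(item, struct foo, list));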

2013-06-03 21:05:53

by Jörn Engel

Subject: Re: [PATCH 0/2] introduce list_for_each_entry_del

On Mon, 3 June 2013 13:49:30 -0700, Christoph Hellwig wrote:
>
> I can't say I like the structure.
>
> A list_pop that removes an entry from the head or returns NULL if the
> list is empty would lead to nice while loops that are obviously
> readable instead.

Something like this?

#define list_pop(head) \
({ struct list_head *____pos; \
list_empty(head) ? NULL : (____pos = (head)->next, \
list_del(____pos), ____pos) \
})

#define list_pop_entry(head, type, member) \
({ struct list_head *____pos; \
list_empty(head) ? NULL : (____pos = (head)->next, \
list_del(____pos), list_entry(____pos, type, member) \
})

Would be fine with me as well.
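
As posted, the sketch above seems to be missing a closing parenthesis in
list_pop_entry() and the trailing semicolons a GCC statement expression
wants, so a compiling version of the same idea might look like this
(equally untested, in the spirit of the sign-off below):

    #define list_pop(head) \
    ({ \
        struct list_head *____pos; \
        list_empty(head) ? NULL : \
            (____pos = (head)->next, list_del(____pos), ____pos); \
    })

    #define list_pop_entry(head, type, member) \
    ({ \
        struct list_head *____pos; \
        list_empty(head) ? NULL : \
            (____pos = (head)->next, list_del(____pos), \
             list_entry(____pos, type, member)); \
    })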

Jörn

--
Beware of bugs in the above code; I have only proved it correct, but
not tried it.
-- Donald Knuth

2013-06-03 21:25:00

by Jörn Engel

Subject: Re: [PATCH 0/2] introduce list_for_each_entry_del

On Mon, 3 June 2013 15:36:47 -0400, Jörn Engel wrote:
> On Mon, 3 June 2013 13:49:30 -0700, Christoph Hellwig wrote:
> >
> > I can't say I like the structure.
> >
> > A list_pop that removes an entry from the head or returns NULL if the
> > list is empty would lead to nice while loops that are obviously
> > readable instead.
>
> Something like this?
>
> #define list_pop(head) \
> ({ struct list_head *____pos; \
> list_empty(head) ? NULL : (____pos = (head)->next, \
> list_del(____pos), ____pos) \
> })
>
> #define list_pop_entry(head, type, member) \
> ({ struct list_head *____pos; \
> list_empty(head) ? NULL : (____pos = (head)->next, \
> list_del(____pos), list_entry(____pos, type, member) \
> })
>
> Would be fine with me as well.

Actually, when I compare the two invocations, I prefer the
list_for_each_entry_del() variant over list_pop_entry().

while ((ref = list_pop_entry(&prefs, struct __prelim_ref, list))) {
list_for_each_entry_del(ref, &prefs, list) {

Christoph?

Jörn

--
"Translations are and will always be problematic. They inflict violence
upon two languages." (translation from German)

2013-06-04 14:48:59

by Christoph Hellwig

Subject: Re: [PATCH 0/2] introduce list_for_each_entry_del

On Mon, Jun 03, 2013 at 03:55:55PM -0400, Jörn Engel wrote:
> Actually, when I compare the two invocations, I prefer the
> list_for_each_entry_del() variant over list_pop_entry().
>
> while ((ref = list_pop_entry(&prefs, struct __prelim_ref, list))) {
> list_for_each_entry_del(ref, &prefs, list) {
>
> Christoph?

I really don't like something that looks like an iterator (*for_each*)
to modify a list. Maybe it's just me, so I'd love to hear others chime
in.

2013-06-04 14:53:29

by Chris Mason

Subject: Re: [PATCH 0/2] introduce list_for_each_entry_del

Quoting Christoph Hellwig (2013-06-04 10:48:56)
> On Mon, Jun 03, 2013 at 03:55:55PM -0400, Jörn Engel wrote:
> > Actually, when I compare the two invocations, I prefer the
> > list_for_each_entry_del() variant over list_pop_entry().
> >
> > while ((ref = list_pop_entry(&prefs, struct __prelim_ref, list))) {
> > list_for_each_entry_del(ref, &prefs, list) {
> >
> > Christoph?
>
> I really don't like something that looks like an iterator (*for_each*)
> to modify a list. Maybe it's just me, so I'd love to hear others chime
> in.

Have to agree with Christoph. I just couldn't put my finger on why I
didn't like it until I saw the list_pop_entry suggestion.

-chris

2013-06-04 20:07:37

by Arne Jansen

Subject: Re: [PATCH 0/2] introduce list_for_each_entry_del

On 06/04/13 16:53, Chris Mason wrote:
> Quoting Christoph Hellwig (2013-06-04 10:48:56)
>> On Mon, Jun 03, 2013 at 03:55:55PM -0400, Jörn Engel wrote:
>>> Actually, when I compare the two invocations, I prefer the
>>> list_for_each_entry_del() variant over list_pop_entry().
>>>
>>> while ((ref = list_pop_entry(&prefs, struct __prelim_ref, list))) {
>>> list_for_each_entry_del(ref, &prefs, list) {
>>>
>>> Christoph?
>>
>> I really don't like something that looks like an iterator (*for_each*)
>> to modify a list. Maybe it's just me, so I'd love to hear others chime
>> in.
>
> Have to agree with Christoph. I just couldn't put my finger on why I
> didn't like it until I saw the list_pop_entry suggestion.

list_pop_each_entry?

>
> -chris
>

2013-06-04 20:13:49

by Jörn Engel

Subject: Re: [PATCH 0/2] introduce list_for_each_entry_del

On Tue, 4 June 2013 22:09:13 +0200, Arne Jansen wrote:
> On 06/04/13 16:53, Chris Mason wrote:
> > Quoting Christoph Hellwig (2013-06-04 10:48:56)
> >> On Mon, Jun 03, 2013 at 03:55:55PM -0400, Jörn Engel wrote:
> >>> Actually, when I compare the two invocations, I prefer the
> >>> list_for_each_entry_del() variant over list_pop_entry().
> >>>
> >>> while ((ref = list_pop_entry(&prefs, struct __prelim_ref, list))) {
> >>> list_for_each_entry_del(ref, &prefs, list) {
> >>>
> >>> Christoph?
> >>
> >> I really don't like something that looks like an iterator (*for_each*)
> >> to modify a list. Maybe it's just me, so I'd love to hear others chime
> >> in.
> >
> > Have to agree with Christoph. I just couldn't put my finger on why I
> > didn't like it until I saw the list_pop_entry suggestion.
>
> list_pop_each_entry?

Or while_list_drain?

I agree the *for_each* cover didn't exactly match the content. But if
we find a better name and you are not opposed to the concept, ...

Jörn

--
tglx1 thinks that joern should get a (TM) for "Thinking Is Hard"
-- Thomas Gleixner

2013-06-05 03:32:57

by Jörn Engel

Subject: [PATCH 1/2] list: add while_list_drain_entry

I have seen a lot of boilerplate code that either follows the pattern of
    while (!list_empty(head)) {
        pos = list_entry(head->next, struct foo, list);
        list_del(&pos->list);
        ...
    }
or some variant thereof.

With this patch in, people can use
    while_list_drain_entry(pos, head, list) {
        ...
    }

The patch also adds a while_list_drain variant, even though I have
only found a single user for that one so far.

Signed-off-by: Joern Engel <[email protected]>
---
include/linux/list.h | 18 ++++++++++++++++++
1 file changed, 18 insertions(+)

diff --git a/include/linux/list.h b/include/linux/list.h
index 6a1f8df..ab39c7d 100644
--- a/include/linux/list.h
+++ b/include/linux/list.h
@@ -557,6 +557,24 @@ static inline void list_splice_tail_init(struct list_head *list,
#define list_safe_reset_next(pos, n, member) \
n = list_entry(pos->member.next, typeof(*pos), member)

+/**
+ * while_list_drain - removes an entry from the list until it is empty
+ * @pos: the &struct list_head to use as a loop cursor.
+ * @head: the head of your list.
+ */
+#define while_list_drain(pos, head) \
+ while (list_empty(head) ? 0 : (pos = (head)->next, list_del(pos), 1))
+
+/**
+ * while_list_drain_entry - removes an entry from the list until it is empty
+ * @pos: the type * to use as loop cursor.
+ * @head: the head of your list.
+ * @member: the name of the list_struct within the struct
+ */
+#define while_list_drain_entry(pos, head, member) \
+ while (!list_empty(head) && (pos = list_first_entry((head), \
+ typeof(*pos), member), list_del((head)->next), 1))
+
/*
* Double linked lists with a single pointer list head.
* Mostly useful for hash tables where the two pointer list head is
--
1.7.10.4

2013-06-05 03:33:14

by Jörn Engel

Subject: [PATCH 2/2] btrfs: use while_list_drain_entry

Signed-off-by: Joern Engel <[email protected]>
---
fs/btrfs/backref.c | 15 +++------------
fs/btrfs/compression.c | 4 +---
fs/btrfs/disk-io.c | 6 +-----
fs/btrfs/extent-tree.c | 17 +++--------------
fs/btrfs/extent_io.c | 8 ++------
fs/btrfs/inode.c | 16 +++-------------
fs/btrfs/ordered-data.c | 7 +------
fs/btrfs/qgroup.c | 22 ++++------------------
fs/btrfs/relocation.c | 6 +-----
fs/btrfs/scrub.c | 9 +++------
fs/btrfs/transaction.c | 5 +----
fs/btrfs/volumes.c | 11 ++---------
12 files changed, 25 insertions(+), 101 deletions(-)

diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
index bd605c8..3a45e75 100644
--- a/fs/btrfs/backref.c
+++ b/fs/btrfs/backref.c
@@ -893,9 +893,7 @@ again:
if (ret)
goto out;

- while (!list_empty(&prefs)) {
- ref = list_first_entry(&prefs, struct __prelim_ref, list);
- list_del(&ref->list);
+ while_list_drain_entry(ref, &prefs, list) {
WARN_ON(ref->count < 0);
if (ref->count && ref->root_id && ref->parent == 0) {
/* no parent == root of tree */
@@ -937,17 +935,10 @@ again:

out:
btrfs_free_path(path);
- while (!list_empty(&prefs)) {
- ref = list_first_entry(&prefs, struct __prelim_ref, list);
- list_del(&ref->list);
+ while_list_drain_entry(ref, &prefs, list)
kfree(ref);
- }
- while (!list_empty(&prefs_delayed)) {
- ref = list_first_entry(&prefs_delayed, struct __prelim_ref,
- list);
- list_del(&ref->list);
+ while_list_drain_entry(ref, &prefs_delayed, list)
kfree(ref);
- }

return ret;
}
diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 15b9408..e5a7475 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -841,9 +841,7 @@ static void free_workspaces(void)
int i;

for (i = 0; i < BTRFS_COMPRESS_TYPES; i++) {
- while (!list_empty(&comp_idle_workspace[i])) {
- workspace = comp_idle_workspace[i].next;
- list_del(workspace);
+ while_list_drain(workspace, &comp_idle_workspace[i]) {
btrfs_compress_op[i]->free_workspace(workspace);
atomic_dec(&comp_alloc_workspace[i]);
}
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 6d19a0a..2767b18 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -3289,11 +3289,7 @@ static void del_fs_roots(struct btrfs_fs_info *fs_info)
struct btrfs_root *gang[8];
int i;

- while (!list_empty(&fs_info->dead_roots)) {
- gang[0] = list_entry(fs_info->dead_roots.next,
- struct btrfs_root, root_list);
- list_del(&gang[0]->root_list);
-
+ while_list_drain_entry(gang[0], &fs_info->dead_roots, root_list) {
if (gang[0]->in_radix) {
btrfs_free_fs_root(fs_info, gang[0]);
} else {
diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index 3d55123..f7afb9e 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -2435,10 +2435,7 @@ int btrfs_delayed_refs_qgroup_accounting(struct btrfs_trans_handle *trans,
if (!trans->delayed_ref_elem.seq)
return 0;

- while (!list_empty(&trans->qgroup_ref_list)) {
- qgroup_update = list_first_entry(&trans->qgroup_ref_list,
- struct qgroup_update, list);
- list_del(&qgroup_update->list);
+ while_list_drain_entry(qgroup_update, &trans->qgroup_ref_list, list) {
if (!ret)
ret = btrfs_qgroup_account_ref(
trans, fs_info, qgroup_update->node,
@@ -7821,12 +7818,8 @@ int btrfs_free_block_groups(struct btrfs_fs_info *info)
struct rb_node *n;

down_write(&info->extent_commit_sem);
- while (!list_empty(&info->caching_block_groups)) {
- caching_ctl = list_entry(info->caching_block_groups.next,
- struct btrfs_caching_control, list);
- list_del(&caching_ctl->list);
+ while_list_drain_entry(caching_ctl, &info->caching_block_groups, list)
put_caching_control(caching_ctl);
- }
up_write(&info->extent_commit_sem);

spin_lock(&info->block_group_cache_lock);
@@ -7868,10 +7861,7 @@ int btrfs_free_block_groups(struct btrfs_fs_info *info)

release_global_block_rsv(info);

- while(!list_empty(&info->space_info)) {
- space_info = list_entry(info->space_info.next,
- struct btrfs_space_info,
- list);
+ while_list_drain_entry(space_info, &info->space_info, list) {
if (btrfs_test_opt(info->tree_root, ENOSPC_DEBUG)) {
if (space_info->bytes_pinned > 0 ||
space_info->bytes_reserved > 0 ||
@@ -7880,7 +7870,6 @@ int btrfs_free_block_groups(struct btrfs_fs_info *info)
dump_space_info(space_info, 0, 0);
}
}
- list_del(&space_info->list);
kfree(space_info);
}
return 0;
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index cdee391..b158145 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -87,24 +87,20 @@ void extent_io_exit(void)
struct extent_state *state;
struct extent_buffer *eb;

- while (!list_empty(&states)) {
- state = list_entry(states.next, struct extent_state, leak_list);
+ while_list_drain_entry(state, &states, leak_list) {
printk(KERN_ERR "btrfs state leak: start %llu end %llu "
"state %lu in tree %p refs %d\n",
(unsigned long long)state->start,
(unsigned long long)state->end,
state->state, state->tree, atomic_read(&state->refs));
- list_del(&state->leak_list);
kmem_cache_free(extent_state_cache, state);

}

- while (!list_empty(&buffers)) {
- eb = list_entry(buffers.next, struct extent_buffer, leak_list);
+ while_list_drain_entry(eb, &buffers, leak_list) {
printk(KERN_ERR "btrfs buffer leak start %llu len %lu "
"refs %d\n", (unsigned long long)eb->start,
eb->len, atomic_read(&eb->refs));
- list_del(&eb->leak_list);
kmem_cache_free(extent_buffer_cache, eb);
}

diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 09c58a3..b402f00 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -623,13 +623,8 @@ static noinline int submit_compressed_extents(struct inode *inode,
return 0;

again:
- while (!list_empty(&async_cow->extents)) {
- async_extent = list_entry(async_cow->extents.next,
- struct async_extent, list);
- list_del(&async_extent->list);
-
+ while_list_drain_entry(async_extent, &async_cow->extents, list) {
io_tree = &BTRFS_I(inode)->io_tree;
-
retry:
/* did the compression code fall back to uncompressed IO? */
if (!async_extent->pages) {
@@ -1161,11 +1156,8 @@ static noinline int csum_exist_in_range(struct btrfs_root *root,
if (ret == 0 && list_empty(&list))
return 0;

- while (!list_empty(&list)) {
- sums = list_entry(list.next, struct btrfs_ordered_sum, list);
- list_del(&sums->list);
+ while_list_drain_entry(sums, &list, list)
kfree(sums);
- }
return 1;
}

@@ -2882,9 +2874,7 @@ void btrfs_run_delayed_iputs(struct btrfs_root *root)
list_splice_init(&fs_info->delayed_iputs, &list);
spin_unlock(&fs_info->delayed_iput_lock);

- while (!list_empty(&list)) {
- delayed = list_entry(list.next, struct delayed_iput, list);
- list_del(&delayed->list);
+ while_list_drain_entry(delayed, &list, list) {
iput(delayed->inode);
kfree(delayed);
}
diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c
index 005c45d..61a502c 100644
--- a/fs/btrfs/ordered-data.c
+++ b/fs/btrfs/ordered-data.c
@@ -479,7 +479,6 @@ void btrfs_free_logged_extents(struct btrfs_root *log, u64 transid)
*/
void btrfs_put_ordered_extent(struct btrfs_ordered_extent *entry)
{
- struct list_head *cur;
struct btrfs_ordered_sum *sum;

trace_btrfs_ordered_extent_put(entry->inode, entry);
@@ -487,12 +486,8 @@ void btrfs_put_ordered_extent(struct btrfs_ordered_extent *entry)
if (atomic_dec_and_test(&entry->refs)) {
if (entry->inode)
btrfs_add_delayed_iput(entry->inode);
- while (!list_empty(&entry->list)) {
- cur = entry->list.next;
- sum = list_entry(cur, struct btrfs_ordered_sum, list);
- list_del(&sum->list);
+ while_list_drain_entry(sum, &entry->list, list)
kfree(sum);
- }
kmem_cache_free(btrfs_ordered_extent_cache, entry);
}
}
diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
index b44124d..677c524 100644
--- a/fs/btrfs/qgroup.c
+++ b/fs/btrfs/qgroup.c
@@ -164,19 +164,13 @@ static int del_qgroup_rb(struct btrfs_fs_info *fs_info, u64 qgroupid)
rb_erase(&qgroup->node, &fs_info->qgroup_tree);
list_del(&qgroup->dirty);

- while (!list_empty(&qgroup->groups)) {
- list = list_first_entry(&qgroup->groups,
- struct btrfs_qgroup_list, next_group);
- list_del(&list->next_group);
+ while_list_drain_entry(list, &qgroup->groups, next_group) {
list_del(&list->next_member);
kfree(list);
}

- while (!list_empty(&qgroup->members)) {
- list = list_first_entry(&qgroup->members,
- struct btrfs_qgroup_list, next_member);
+ while_list_drain_entry(list, &qgroup->members, next_member) {
list_del(&list->next_group);
- list_del(&list->next_member);
kfree(list);
}
kfree(qgroup);
@@ -422,21 +416,13 @@ void btrfs_free_qgroup_config(struct btrfs_fs_info *fs_info)

WARN_ON(!list_empty(&qgroup->dirty));

- while (!list_empty(&qgroup->groups)) {
- list = list_first_entry(&qgroup->groups,
- struct btrfs_qgroup_list,
- next_group);
- list_del(&list->next_group);
+ while_list_drain_entry(list, &qgroup->groups, next_group) {
list_del(&list->next_member);
kfree(list);
}

- while (!list_empty(&qgroup->members)) {
- list = list_first_entry(&qgroup->members,
- struct btrfs_qgroup_list,
- next_member);
+ while_list_drain_entry(list, &qgroup->members, next_member) {
list_del(&list->next_group);
- list_del(&list->next_member);
kfree(list);
}
kfree(qgroup);
diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
index b67171e..8b9d03b 100644
--- a/fs/btrfs/relocation.c
+++ b/fs/btrfs/relocation.c
@@ -4270,11 +4270,7 @@ int btrfs_recover_relocation(struct btrfs_root *root)

rc->merge_reloc_tree = 1;

- while (!list_empty(&reloc_roots)) {
- reloc_root = list_entry(reloc_roots.next,
- struct btrfs_root, root_list);
- list_del(&reloc_root->root_list);
-
+ while_list_drain_entry(reloc_root, &reloc_roots, root_list) {
if (btrfs_root_refs(&reloc_root->root_item) == 0) {
list_add_tail(&reloc_root->root_list,
&rc->reloc_roots);
diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index 85e072b..50652c6 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -306,13 +306,10 @@ static void scrub_pending_trans_workers_dec(struct scrub_ctx *sctx)

static void scrub_free_csums(struct scrub_ctx *sctx)
{
- while (!list_empty(&sctx->csum_list)) {
- struct btrfs_ordered_sum *sum;
- sum = list_first_entry(&sctx->csum_list,
- struct btrfs_ordered_sum, list);
- list_del(&sum->list);
+ struct btrfs_ordered_sum *sum;
+
+ while_list_drain_entry(sum, &sctx->csum_list, list)
kfree(sum);
- }
}

static noinline_for_stack void scrub_free_ctx(struct scrub_ctx *sctx)
diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
index 50767bb..3d13a43 100644
--- a/fs/btrfs/transaction.c
+++ b/fs/btrfs/transaction.c
@@ -1885,12 +1885,9 @@ int btrfs_clean_old_snapshots(struct btrfs_root *root)
list_splice_init(&fs_info->dead_roots, &list);
spin_unlock(&fs_info->trans_lock);

- while (!list_empty(&list)) {
+ while_list_drain_entry(root, &list, root_list) {
int ret;

- root = list_entry(list.next, struct btrfs_root, root_list);
- list_del(&root->root_list);
-
btrfs_kill_all_delayed_nodes(root);

if (btrfs_header_backref_rev(root->node) <
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 2854c82..5b6dcf3 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -65,10 +65,7 @@ static void free_fs_devices(struct btrfs_fs_devices *fs_devices)
{
struct btrfs_device *device;
WARN_ON(fs_devices->opened);
- while (!list_empty(&fs_devices->devices)) {
- device = list_entry(fs_devices->devices.next,
- struct btrfs_device, dev_list);
- list_del(&device->dev_list);
+ while_list_drain_entry(device, &fs_devices->devices, dev_list) {
rcu_string_free(device->name);
kfree(device);
}
@@ -92,12 +89,8 @@ void btrfs_cleanup_fs_uuids(void)
{
struct btrfs_fs_devices *fs_devices;

- while (!list_empty(&fs_uuids)) {
- fs_devices = list_entry(fs_uuids.next,
- struct btrfs_fs_devices, list);
- list_del(&fs_devices->list);
+ while_list_drain_entry(fs_devices, &fs_uuids, list)
free_fs_devices(fs_devices);
- }
}

static noinline struct btrfs_device *__find_device(struct list_head *head,
--
1.7.10.4

2013-06-05 03:38:51

by Jörn Engel

Subject: Re: [PATCH 0/2] introduce list_for_each_entry_del

On Tue, 4 June 2013 14:44:35 -0400, Jörn Engel wrote:
>
> Or while_list_drain?

Not sure if the silence is approval or lack of interest, but a new set
of patches is posted. By playing around with the implementation a
bit, I have actually found a variant that makes the object code
shrink. Not one variant gave same-size object code. There's compiler
optimization for you.

Jörn

--
Money can buy bandwidth, but latency is forever.
-- John R. Mashey

2013-06-05 06:53:56

by Arne Jansen

Subject: Re: [PATCH 0/2] introduce list_for_each_entry_del

On 05.06.2013 04:09, Jörn Engel wrote:
> On Tue, 4 June 2013 14:44:35 -0400, Jörn Engel wrote:
>>
>> Or while_list_drain?

I'm fine with while_list_drain, although a name starting with list_
like all other list macros would be nice. How about just list_drain?
The next question is where to put it in the header so that anyone
doing list cleanup stumbles upon it. Maybe directly below list_del?

-Arne

>
> Not sure if the silence is approval or lack of interest, but a new set
> of patches is posted. By playing around with the implementation a
> bit, I have actually found a variant that makes the object code
> shrink. Not one variant gave same-size object code. There's compiler
> optimization for you.
>
> Jörn
>
> --
> Money can buy bandwidth, but latency is forever.
> -- John R. Mashey

2013-06-05 14:25:52

by David Sterba

Subject: Re: [PATCH 0/2] introduce list_for_each_entry_del

On Wed, Jun 05, 2013 at 08:53:50AM +0200, Arne Jansen wrote:
> On 05.06.2013 04:09, Jörn Engel wrote:
> > On Tue, 4 June 2013 14:44:35 -0400, Jörn Engel wrote:
> >>
> >> Or while_list_drain?
>
> I'm fine with while_list_drain, although a name starting with list_
> like all other list macros would be nice. How about just list_drain?

The list_ prefix makes it imho more readable; we know that list_* are
macros, but 'while_list_drain' does not fit the group. I'd go for

list_drain or list_drain_entry


david

2013-06-05 14:32:47

by David Sterba

Subject: Re: [PATCH 1/2] list: add while_list_drain_entry

On Tue, Jun 04, 2013 at 10:03:41PM -0400, Jörn Engel wrote:
> I have seen a lot of boilerplate code that either follows the pattern of
> while (!list_empty(head)) {
> pos = list_entry(head->next, struct foo, list);
> list_del(&pos->list);
> ...
> }
> or some variant thereof.
>
> With this patch in, people can use
> while_list_drain_entry(pos, head, list) {
> ...
> }

The while_list_drain_entry way changes the semantics: the list link is
deleted before the {...} body starts, so it's not possible to access the
list through the entry anymore. None of the code you've converted relies
on this, so from that point of view it's safe, but if this is going to be
a public interface, it should be (at least) documented.

Macro trickery to postpone list_del until after the {...} body finishes
does not work if kfree/kmem_cache_free/rcu_string_free/... is used.
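
To make that concrete, a small illustration with a hypothetical struct foo,
head and do_something() (not code from the patchset):

    struct foo *pos;

    /* open-coded: the caller decides when to unlink, so the body can
     * still use the entry's list linkage before list_del() */
    while (!list_empty(head)) {
        pos = list_first_entry(head, struct foo, list);
        do_something(pos);  /* pos is still on the list here */
        list_del(&pos->list);
        kfree(pos);
    }

    /* while_list_drain_entry(): pos is already unlinked when the body
     * runs, so the body can no longer reach the list through pos */
    while_list_drain_entry(pos, head, list)
        kfree(pos);
    /* and a variant that ran list_del() only after the body would touch
     * freed memory, because the body has already kfree()d pos */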


david

2013-06-06 19:32:57

by Andy Shevchenko

Subject: Re: [PATCH 1/2] list: add list_for_each_entry_del

On Mon, Jun 3, 2013 at 8:28 PM, Joern Engel <[email protected]> wrote:
> I have seen a lot of boilerplate code that either follows the pattern of
> while (!list_empty(head)) {
> pos = list_entry(head->next, struct foo, list);
> list_del(&pos->list);
> ...
> }
> or some variant thereof.

What is the problem with using list_for_each_safe()?


--
With Best Regards,
Andy Shevchenko

2013-06-06 19:42:04

by Jörn Engel

Subject: Re: [PATCH 1/2] list: add list_for_each_entry_del

On Thu, 6 June 2013 22:32:55 +0300, Andy Shevchenko wrote:
> On Mon, Jun 3, 2013 at 8:28 PM, Joern Engel <[email protected]> wrote:
> > I have seen a lot of boilerplate code that either follows the pattern of
> > while (!list_empty(head)) {
> > pos = list_entry(head->next, struct foo, list);
> > list_del(&pos->list);
> > ...
> > }
> > or some variant thereof.
>
> What is the problem with using list_for_each_safe()?

The loop may terminate with elements left on the list. There is more,
but I would consider this the main problem.

Jörn

--
If you're willing to restrict the flexibility of your approach,
you can almost always do something better.
-- John Carmack

2013-06-06 19:49:24

by Andy Shevchenko

Subject: Re: [PATCH 1/2] list: add list_for_each_entry_del

On Thu, Jun 6, 2013 at 9:12 PM, Jörn Engel <[email protected]> wrote:
> On Thu, 6 June 2013 22:32:55 +0300, Andy Shevchenko wrote:
>> On Mon, Jun 3, 2013 at 8:28 PM, Joern Engel <[email protected]> wrote:
>> > I have seen a lot of boilerplate code that either follows the pattern of
>> > while (!list_empty(head)) {
>> > pos = list_entry(head->next, struct foo, list);
>> > list_del(&pos->list);
>> > ...
>> > }
>> > or some variant thereof.
>>
>> What is the problem with using list_for_each_safe()?
>
> The loop may terminate with elements left on the list. There is more,
> but I would consider this the main problem.

I didn't quite get what you mean.

list_for_each_safe() and list_for_each_entry_safe() traverse across the
list. The user decides when is the proper time to remove an element
from the list.
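
For illustration, the usual shape of such a loop (hypothetical struct foo):

    struct foo *pos, *n;

    list_for_each_entry_safe(pos, n, head, list) {
        list_del(&pos->list);  /* the caller picks the moment to unlink */
        kfree(pos);
    }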

I'm sorry, I don't see any advantage in list_for_each_del() (or the
entry variant).

--
With Best Regards,
Andy Shevchenko

2013-06-07 18:06:15

by Jörn Engel

Subject: Re: [PATCH 1/2] list: add list_for_each_entry_del

On Thu, 6 June 2013 22:49:22 +0300, Andy Shevchenko wrote:
> On Thu, Jun 6, 2013 at 9:12 PM, Jörn Engel <[email protected]> wrote:
> > On Thu, 6 June 2013 22:32:55 +0300, Andy Shevchenko wrote:
> >> On Mon, Jun 3, 2013 at 8:28 PM, Joern Engel <[email protected]> wrote:
> >> > I have seen a lot of boilerplate code that either follows the pattern of
> >> > while (!list_empty(head)) {
> >> > pos = list_entry(head->next, struct foo, list);
> >> > list_del(&pos->list);
> >> > ...
> >> > }
> >> > or some variant thereof.
> >>
> >> What is the problem with using list_for_each_safe()?
> >
> > The loop may terminate with elements left on the list. There is more,
> > but I would consider this the main problem.
>
> I didn't quite get what you mean.

Take two threads, one doing a list_for_each_entry_safe loop and
dropping the lock after list_del, the other doing list_add. Result is
that you finish the list_for_each_entry_safe loop with something
remaining on the list.

spin_lock
list_for_each_entry_safe
list_del
spin_unlock
spin_lock
list_add
spin_unlock

If you search for this pattern in the kernel, you won't find too many
examples. Quite likely that is because a) people realized this and
used a while (!list_empty()) loop to begin with or b) they started out
wrong and fixed the bug later. Not sure how many examples of b) there
are.
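
Spelled out as code, the difference looks roughly like this (hypothetical
struct foo, lock, head and do_slow_cleanup(); only a sketch of the two
shapes being compared):

    struct foo *pos, *n;

    /* safe-iterator drain: the next entry is cached before the lock is
     * dropped, so an entry added by another thread in that window can be
     * missed and survive the loop */
    spin_lock(&lock);
    list_for_each_entry_safe(pos, n, &head, list) {
        list_del(&pos->list);
        spin_unlock(&lock);
        do_slow_cleanup(pos);  /* the other thread may list_add() now */
        spin_lock(&lock);
    }
    spin_unlock(&lock);

    /* while (!list_empty()) drain: the head is rechecked on every
     * iteration, so the loop only ends once the list is really empty */
    spin_lock(&lock);
    while (!list_empty(&head)) {
        pos = list_first_entry(&head, struct foo, list);
        list_del(&pos->list);
        spin_unlock(&lock);
        do_slow_cleanup(pos);
        spin_lock(&lock);
    }
    spin_unlock(&lock);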

At any rate, this is a purely janitorial patch. It is almost by
definition of moderate utility and if there is significant opposition
or the patch actually causes harm in some way, we should drop it.

Jörn

--
Time? What's that? Time is only worth what you do with it.
-- Theo de Raadt

2013-06-07 18:30:18

by Andy Shevchenko

Subject: Re: [PATCH 1/2] list: add list_for_each_entry_del

On Fri, Jun 7, 2013 at 7:36 PM, Jörn Engel <[email protected]> wrote:
> On Thu, 6 June 2013 22:49:22 +0300, Andy Shevchenko wrote:
>> On Thu, Jun 6, 2013 at 9:12 PM, Jörn Engel <[email protected]> wrote:
>> > On Thu, 6 June 2013 22:32:55 +0300, Andy Shevchenko wrote:
>> >> What is the problem with using list_for_each_safe()?
>> >
>> > The loop may terminate with elements left on the list. There is more,
>> > but I would consider this the main problem.
>>
>> I didn't quite get what you mean.
>
> Take two threads, one doing a list_for_each_entry_safe loop and
> dropping the lock after list_del, the other doing list_add. Result is
> that you finish the list_for_each_entry_safe loop with something
> remaining on the list.
>
> spin_lock
> list_for_each_entry_safe
> list_del
> spin_unlock

Who is doing such a thing?

Usually if you unlock, you exit from the function, or you have already
finished iterating through the list.

Like:

    lock()
    list_for_each_safe() {
        if (condition) {
            list_del()
            unlock()
            return;
        }
    }
    unlock()
    return;

In case you have to do an unlock()/lock() dance inside the iteration, you
can always do an additional check at the end:

    list_for_each_safe() {
        unlock();
        lock();
    }
    if (!list_empty()) {
        do_smth()
    }
    unlock();

Thus, I don't see how list*del will help.

> If you search for this pattern in the kernel, you won't find too many
> examples. Quite likely that is because a) people realized this and
> used a while (!list_empty()) loop to begin with or b) they started out
> wrong and fixed the bug later. Not sure how many examples of b) there
> are.

--
With Best Regards,
Andy Shevchenko

2013-06-07 20:18:02

by Jörn Engel

Subject: Re: [PATCH 1/2] list: add list_for_each_entry_del

On Fri, 7 June 2013 21:30:16 +0300, Andy Shevchenko wrote:
> >
> > spin_lock
> > list_for_each_entry_safe
> > list_del
> > spin_unlock
>
> Who is doing such a thing?

Replace list_for_each_entry_safe with 'while (!list_empty(...))' and
just grep. My patch is about 'while (!list_empty(...))', not about
list_for_each_entry_safe.

Jörn

--
There are three principal ways to lose money: wine, women, and engineers.
While the first two are more pleasant, the third is by far the more certain.
-- Baron Rothschild

2013-06-08 00:03:26

by Andy Shevchenko

Subject: Re: [PATCH 1/2] list: add list_for_each_entry_del

On Fri, Jun 7, 2013 at 9:48 PM, Jörn Engel <[email protected]> wrote:
> On Fri, 7 June 2013 21:30:16 +0300, Andy Shevchenko wrote:
>> >
>> > spin_lock
>> > list_for_each_entry_safe
>> > list_del
>> > spin_unlock
>>
>> Who is doing such a thing?
>
> Replace list_for_each_entry_safe with 'while (!list_empty(...))' and
> just grep. My patch is about 'while (!list_empty(...))', not about
> list_for_each_entry_safe.

I saw your patch against btrfs.
I didn't see locking there.

Any excerpt like

    while (!list_empty(&prefs)) {
        ref = list_first_entry(&prefs, struct __prelim_ref, list);
        list_del(&ref->list);

could be transformed to

    struct __prelim_ref *_ref;
    list_for_each_entry_safe(ref, _ref, &prefs, list) {
        list_del(&ref->list);
        ...
    }

but is it worth doing? (The same question applies to your approach.)

I see two potential issues with while_list_drain_entry() or whatever
name you like:
- you delete the entry from the list as the first operation - you limit
the user to thinking that way; if deletion were instead the last
operation, who would then free the resources (call kfree(), for example)?
- you always use the same ordering (the list_first_entry() call); someone
may not be satisfied by that

So, the approach with while (!list_empty()) { e = list_entry();
list_del(&e->list); ... }
is actually not bad. If there are any bugs in the code, better to fix
those bugs explicitly.

What do I not understand?

--
With Best Regards,
Andy Shevchenko

2013-07-05 22:11:50

by Jörn Engel

Subject: Re: [PATCH 0/2] introduce list_for_each_entry_del

On Mon, 3 June 2013 13:28:03 -0400, Joern Engel wrote:
>
> A purely janitorial patchset. A fairly common pattern is to take a
> list, remove every object from it and do something with this object -
> usually kfree() some variant. A stupid grep identified roughly 300
> instances, with many more hidden behind more complicated patterns to
> achieve the same end results.

Next version of the same patchset. Object size is shrinking now, at
least for the one compiler I tested. And a few kernel hackers met on
a frozen lake in hell with pigs flying overhead and could actually
agree on a name. While I am sure almost every reader will still
disagree and have one or two better suggestions, I would like to use
this historical moment.

list_del_each and list_del_each_entry it shall be!

Jörn

--
It's just what we asked for, but not what we want!
-- anonymous

2013-07-05 22:12:22

by Jörn Engel

Subject: [PATCH 1/2] list: add list_del_each_entry

I have seen a lot of boilerplate code that either follows the pattern of
    while (!list_empty(head)) {
        pos = list_entry(head->next, struct foo, list);
        list_del(&pos->list);
        ...
    }
or some variant thereof.

With this patch in, people can use
    list_del_each_entry(pos, head, list) {
        ...
    }

The patch also adds a list_del_each variant, even though I have
only found a single user for that one so far.

Signed-off-by: Joern Engel <[email protected]>
---
include/linux/list.h | 18 ++++++++++++++++++
1 file changed, 18 insertions(+)

diff --git a/include/linux/list.h b/include/linux/list.h
index 6a1f8df..ab39c7d 100644
--- a/include/linux/list.h
+++ b/include/linux/list.h
@@ -557,6 +557,24 @@ static inline void list_splice_tail_init(struct list_head *list,
#define list_safe_reset_next(pos, n, member) \
n = list_entry(pos->member.next, typeof(*pos), member)

+/**
+ * list_del_each - removes an entry from the list until it is empty
+ * @pos: the &struct list_head to use as a loop cursor.
+ * @head: the head of your list.
+ */
+#define list_del_each(pos, head) \
+ while (list_empty(head) ? 0 : (pos = (head)->next, list_del(pos), 1))
+
+/**
+ * list_del_each_entry - removes an entry from the list until it is empty
+ * @pos: the type * to use as loop cursor.
+ * @head: the head of your list.
+ * @member: the name of the list_struct within the struct
+ */
+#define list_del_each_entry(pos, head, member) \
+ while (!list_empty(head) && (pos = list_first_entry((head), \
+ typeof(*pos), member), list_del((head)->next), 1))
+
/*
* Double linked lists with a single pointer list head.
* Mostly useful for hash tables where the two pointer list head is
--
1.7.10.4

2013-07-05 22:12:41

by Jörn Engel

Subject: [PATCH 2/2] btrfs: use list_del_each_entry

Signed-off-by: Joern Engel <[email protected]>
---
fs/btrfs/backref.c | 15 +++------------
fs/btrfs/compression.c | 4 +---
fs/btrfs/disk-io.c | 6 +-----
fs/btrfs/extent-tree.c | 17 +++--------------
fs/btrfs/extent_io.c | 8 ++------
fs/btrfs/inode.c | 16 +++-------------
fs/btrfs/ordered-data.c | 7 +------
fs/btrfs/qgroup.c | 22 ++++------------------
fs/btrfs/relocation.c | 6 +-----
fs/btrfs/scrub.c | 9 +++------
fs/btrfs/transaction.c | 5 +----
fs/btrfs/volumes.c | 11 ++---------
12 files changed, 25 insertions(+), 101 deletions(-)

diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
index bd605c8..3a45e75 100644
--- a/fs/btrfs/backref.c
+++ b/fs/btrfs/backref.c
@@ -893,9 +893,7 @@ again:
if (ret)
goto out;

- while (!list_empty(&prefs)) {
- ref = list_first_entry(&prefs, struct __prelim_ref, list);
- list_del(&ref->list);
+ list_del_each_entry(ref, &prefs, list) {
WARN_ON(ref->count < 0);
if (ref->count && ref->root_id && ref->parent == 0) {
/* no parent == root of tree */
@@ -937,17 +935,10 @@ again:

out:
btrfs_free_path(path);
- while (!list_empty(&prefs)) {
- ref = list_first_entry(&prefs, struct __prelim_ref, list);
- list_del(&ref->list);
+ list_del_each_entry(ref, &prefs, list)
kfree(ref);
- }
- while (!list_empty(&prefs_delayed)) {
- ref = list_first_entry(&prefs_delayed, struct __prelim_ref,
- list);
- list_del(&ref->list);
+ list_del_each_entry(ref, &prefs_delayed, list)
kfree(ref);
- }

return ret;
}
diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 15b9408..e5a7475 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -841,9 +841,7 @@ static void free_workspaces(void)
int i;

for (i = 0; i < BTRFS_COMPRESS_TYPES; i++) {
- while (!list_empty(&comp_idle_workspace[i])) {
- workspace = comp_idle_workspace[i].next;
- list_del(workspace);
+ list_del_each(workspace, &comp_idle_workspace[i]) {
btrfs_compress_op[i]->free_workspace(workspace);
atomic_dec(&comp_alloc_workspace[i]);
}
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 6d19a0a..2767b18 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -3289,11 +3289,7 @@ static void del_fs_roots(struct btrfs_fs_info *fs_info)
struct btrfs_root *gang[8];
int i;

- while (!list_empty(&fs_info->dead_roots)) {
- gang[0] = list_entry(fs_info->dead_roots.next,
- struct btrfs_root, root_list);
- list_del(&gang[0]->root_list);
-
+ list_del_each_entry(gang[0], &fs_info->dead_roots, root_list) {
if (gang[0]->in_radix) {
btrfs_free_fs_root(fs_info, gang[0]);
} else {
diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index 3d55123..f7afb9e 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -2435,10 +2435,7 @@ int btrfs_delayed_refs_qgroup_accounting(struct btrfs_trans_handle *trans,
if (!trans->delayed_ref_elem.seq)
return 0;

- while (!list_empty(&trans->qgroup_ref_list)) {
- qgroup_update = list_first_entry(&trans->qgroup_ref_list,
- struct qgroup_update, list);
- list_del(&qgroup_update->list);
+ list_del_each_entry(qgroup_update, &trans->qgroup_ref_list, list) {
if (!ret)
ret = btrfs_qgroup_account_ref(
trans, fs_info, qgroup_update->node,
@@ -7821,12 +7818,8 @@ int btrfs_free_block_groups(struct btrfs_fs_info *info)
struct rb_node *n;

down_write(&info->extent_commit_sem);
- while (!list_empty(&info->caching_block_groups)) {
- caching_ctl = list_entry(info->caching_block_groups.next,
- struct btrfs_caching_control, list);
- list_del(&caching_ctl->list);
+ list_del_each_entry(caching_ctl, &info->caching_block_groups, list)
put_caching_control(caching_ctl);
- }
up_write(&info->extent_commit_sem);

spin_lock(&info->block_group_cache_lock);
@@ -7868,10 +7861,7 @@ int btrfs_free_block_groups(struct btrfs_fs_info *info)

release_global_block_rsv(info);

- while(!list_empty(&info->space_info)) {
- space_info = list_entry(info->space_info.next,
- struct btrfs_space_info,
- list);
+ list_del_each_entry(space_info, &info->space_info, list) {
if (btrfs_test_opt(info->tree_root, ENOSPC_DEBUG)) {
if (space_info->bytes_pinned > 0 ||
space_info->bytes_reserved > 0 ||
@@ -7880,7 +7870,6 @@ int btrfs_free_block_groups(struct btrfs_fs_info *info)
dump_space_info(space_info, 0, 0);
}
}
- list_del(&space_info->list);
kfree(space_info);
}
return 0;
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index cdee391..b158145 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -87,24 +87,20 @@ void extent_io_exit(void)
struct extent_state *state;
struct extent_buffer *eb;

- while (!list_empty(&states)) {
- state = list_entry(states.next, struct extent_state, leak_list);
+ list_del_each_entry(state, &states, leak_list) {
printk(KERN_ERR "btrfs state leak: start %llu end %llu "
"state %lu in tree %p refs %d\n",
(unsigned long long)state->start,
(unsigned long long)state->end,
state->state, state->tree, atomic_read(&state->refs));
- list_del(&state->leak_list);
kmem_cache_free(extent_state_cache, state);

}

- while (!list_empty(&buffers)) {
- eb = list_entry(buffers.next, struct extent_buffer, leak_list);
+ list_del_each_entry(eb, &buffers, leak_list) {
printk(KERN_ERR "btrfs buffer leak start %llu len %lu "
"refs %d\n", (unsigned long long)eb->start,
eb->len, atomic_read(&eb->refs));
- list_del(&eb->leak_list);
kmem_cache_free(extent_buffer_cache, eb);
}

diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 09c58a3..b402f00 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -623,13 +623,8 @@ static noinline int submit_compressed_extents(struct inode *inode,
return 0;

again:
- while (!list_empty(&async_cow->extents)) {
- async_extent = list_entry(async_cow->extents.next,
- struct async_extent, list);
- list_del(&async_extent->list);
-
+ list_del_each_entry(async_extent, &async_cow->extents, list) {
io_tree = &BTRFS_I(inode)->io_tree;
-
retry:
/* did the compression code fall back to uncompressed IO? */
if (!async_extent->pages) {
@@ -1161,11 +1156,8 @@ static noinline int csum_exist_in_range(struct btrfs_root *root,
if (ret == 0 && list_empty(&list))
return 0;

- while (!list_empty(&list)) {
- sums = list_entry(list.next, struct btrfs_ordered_sum, list);
- list_del(&sums->list);
+ list_del_each_entry(sums, &list, list)
kfree(sums);
- }
return 1;
}

@@ -2882,9 +2874,7 @@ void btrfs_run_delayed_iputs(struct btrfs_root *root)
list_splice_init(&fs_info->delayed_iputs, &list);
spin_unlock(&fs_info->delayed_iput_lock);

- while (!list_empty(&list)) {
- delayed = list_entry(list.next, struct delayed_iput, list);
- list_del(&delayed->list);
+ list_del_each_entry(delayed, &list, list) {
iput(delayed->inode);
kfree(delayed);
}
diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c
index 005c45d..61a502c 100644
--- a/fs/btrfs/ordered-data.c
+++ b/fs/btrfs/ordered-data.c
@@ -479,7 +479,6 @@ void btrfs_free_logged_extents(struct btrfs_root *log, u64 transid)
*/
void btrfs_put_ordered_extent(struct btrfs_ordered_extent *entry)
{
- struct list_head *cur;
struct btrfs_ordered_sum *sum;

trace_btrfs_ordered_extent_put(entry->inode, entry);
@@ -487,12 +486,8 @@ void btrfs_put_ordered_extent(struct btrfs_ordered_extent *entry)
if (atomic_dec_and_test(&entry->refs)) {
if (entry->inode)
btrfs_add_delayed_iput(entry->inode);
- while (!list_empty(&entry->list)) {
- cur = entry->list.next;
- sum = list_entry(cur, struct btrfs_ordered_sum, list);
- list_del(&sum->list);
+ list_del_each_entry(sum, &entry->list, list)
kfree(sum);
- }
kmem_cache_free(btrfs_ordered_extent_cache, entry);
}
}
diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
index b44124d..677c524 100644
--- a/fs/btrfs/qgroup.c
+++ b/fs/btrfs/qgroup.c
@@ -164,19 +164,13 @@ static int del_qgroup_rb(struct btrfs_fs_info *fs_info, u64 qgroupid)
rb_erase(&qgroup->node, &fs_info->qgroup_tree);
list_del(&qgroup->dirty);

- while (!list_empty(&qgroup->groups)) {
- list = list_first_entry(&qgroup->groups,
- struct btrfs_qgroup_list, next_group);
- list_del(&list->next_group);
+ list_del_each_entry(list, &qgroup->groups, next_group) {
list_del(&list->next_member);
kfree(list);
}

- while (!list_empty(&qgroup->members)) {
- list = list_first_entry(&qgroup->members,
- struct btrfs_qgroup_list, next_member);
+ list_del_each_entry(list, &qgroup->members, next_member) {
list_del(&list->next_group);
- list_del(&list->next_member);
kfree(list);
}
kfree(qgroup);
@@ -422,21 +416,13 @@ void btrfs_free_qgroup_config(struct btrfs_fs_info *fs_info)

WARN_ON(!list_empty(&qgroup->dirty));

- while (!list_empty(&qgroup->groups)) {
- list = list_first_entry(&qgroup->groups,
- struct btrfs_qgroup_list,
- next_group);
- list_del(&list->next_group);
+ list_del_each_entry(list, &qgroup->groups, next_group) {
list_del(&list->next_member);
kfree(list);
}

- while (!list_empty(&qgroup->members)) {
- list = list_first_entry(&qgroup->members,
- struct btrfs_qgroup_list,
- next_member);
+ list_del_each_entry(list, &qgroup->members, next_member) {
list_del(&list->next_group);
- list_del(&list->next_member);
kfree(list);
}
kfree(qgroup);
diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
index b67171e..8b9d03b 100644
--- a/fs/btrfs/relocation.c
+++ b/fs/btrfs/relocation.c
@@ -4270,11 +4270,7 @@ int btrfs_recover_relocation(struct btrfs_root *root)

rc->merge_reloc_tree = 1;

- while (!list_empty(&reloc_roots)) {
- reloc_root = list_entry(reloc_roots.next,
- struct btrfs_root, root_list);
- list_del(&reloc_root->root_list);
-
+ list_del_each_entry(reloc_root, &reloc_roots, root_list) {
if (btrfs_root_refs(&reloc_root->root_item) == 0) {
list_add_tail(&reloc_root->root_list,
&rc->reloc_roots);
diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index 85e072b..50652c6 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -306,13 +306,10 @@ static void scrub_pending_trans_workers_dec(struct scrub_ctx *sctx)

static void scrub_free_csums(struct scrub_ctx *sctx)
{
- while (!list_empty(&sctx->csum_list)) {
- struct btrfs_ordered_sum *sum;
- sum = list_first_entry(&sctx->csum_list,
- struct btrfs_ordered_sum, list);
- list_del(&sum->list);
+ struct btrfs_ordered_sum *sum;
+
+ list_del_each_entry(sum, &sctx->csum_list, list)
kfree(sum);
- }
}

static noinline_for_stack void scrub_free_ctx(struct scrub_ctx *sctx)
diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
index 50767bb..3d13a43 100644
--- a/fs/btrfs/transaction.c
+++ b/fs/btrfs/transaction.c
@@ -1885,12 +1885,9 @@ int btrfs_clean_old_snapshots(struct btrfs_root *root)
list_splice_init(&fs_info->dead_roots, &list);
spin_unlock(&fs_info->trans_lock);

- while (!list_empty(&list)) {
+ list_del_each_entry(root, &list, root_list) {
int ret;

- root = list_entry(list.next, struct btrfs_root, root_list);
- list_del(&root->root_list);
-
btrfs_kill_all_delayed_nodes(root);

if (btrfs_header_backref_rev(root->node) <
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 2854c82..5b6dcf3 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -65,10 +65,7 @@ static void free_fs_devices(struct btrfs_fs_devices *fs_devices)
{
struct btrfs_device *device;
WARN_ON(fs_devices->opened);
- while (!list_empty(&fs_devices->devices)) {
- device = list_entry(fs_devices->devices.next,
- struct btrfs_device, dev_list);
- list_del(&device->dev_list);
+ list_del_each_entry(device, &fs_devices->devices, dev_list) {
rcu_string_free(device->name);
kfree(device);
}
@@ -92,12 +89,8 @@ void btrfs_cleanup_fs_uuids(void)
{
struct btrfs_fs_devices *fs_devices;

- while (!list_empty(&fs_uuids)) {
- fs_devices = list_entry(fs_uuids.next,
- struct btrfs_fs_devices, list);
- list_del(&fs_devices->list);
+ list_del_each_entry(fs_devices, &fs_uuids, list)
free_fs_devices(fs_devices);
- }
}

static noinline struct btrfs_device *__find_device(struct list_head *head,
--
1.7.10.4

2013-07-05 22:38:05

by Filipe Manana

[permalink] [raw]
Subject: Re: [PATCH 1/2] list: add list_del_each_entry

On Fri, Jul 5, 2013 at 9:41 PM, Jörn Engel <[email protected]> wrote:
> I have seen a lot of boilerplate code that either follows the pattern of
> while (!list_empty(head)) {
> pos = list_entry(head->next, struct foo, list);
> list_del(pos->list);
> ...
> }
> or some variant thereof.
>
> With this patch in, people can use
> list_del_each_entry(pos, head, list) {
> ...
> }
>
> The patch also adds a list_del_each variant, even though I have
> only found a single user for that one so far.
>
> Signed-off-by: Joern Engel <[email protected]>
> ---
> include/linux/list.h | 18 ++++++++++++++++++
> 1 file changed, 18 insertions(+)
>
> diff --git a/include/linux/list.h b/include/linux/list.h
> index 6a1f8df..ab39c7d 100644
> --- a/include/linux/list.h
> +++ b/include/linux/list.h
> @@ -557,6 +557,24 @@ static inline void list_splice_tail_init(struct list_head *list,
> #define list_safe_reset_next(pos, n, member) \
> n = list_entry(pos->member.next, typeof(*pos), member)
>
> +/**
> + * list_del_each - removes an entry from the list until it is empty
> + * @pos: the &struct list_head to use as a loop cursor.
> + * @head: the head of your list.
> + */
> +#define list_del_each(pos, head) \
> + while (list_empty(head) ? 0 : (pos = (head)->next, list_del(pos), 1))
> +
> +/**
> + * list_del_each_entry - removes an entry from the list until it is empty
> + * @pos: the type * to use as loop cursor.
> + * @head: the head of your list.
> + * @member: the name of the list_struct within the struct
> + */
> +#define list_del_each_entry(pos, head, member) \
> + while (list_empty(head) && (pos = list_first_entry((head), \
> + typeof(*pos), member), list_del((head)->next), 1))
> +

Shouldn't it be while (!list_empty(head) ... ?
(i.e., adding the missing not operator)
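
A corrected version would presumably read roughly as follows (an editorial sketch assuming only the missing negation is added; not text from the posted patch):

/* sketch: loop while the list is non-empty, unlinking the first entry each pass */
#define list_del_each_entry(pos, head, member) \
	while (!list_empty(head) && (pos = list_first_entry((head), \
	       typeof(*pos), member), list_del((head)->next), 1))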

thanks

> /*
> * Double linked lists with a single pointer list head.
> * Mostly useful for hash tables where the two pointer list head is
> --
> 1.7.10.4
>



--
Filipe David Manana,

"Reasonable men adapt themselves to the world.
Unreasonable men adapt the world to themselves.
That's why all progress depends on unreasonable men."

2013-07-08 04:37:21

by Dave Chinner

[permalink] [raw]
Subject: Re: [PATCH 0/2] introduce list_for_each_entry_del

On Fri, Jul 05, 2013 at 04:41:00PM -0400, Jörn Engel wrote:
> On Mon, 3 June 2013 13:28:03 -0400, Joern Engel wrote:
> >
> > A purely janitorial patchset. A fairly common pattern is to take a
> > list, remove every object from it and do something with this object -
> > usually kfree() some variant. A stupid grep identified roughly 300
> > instances, with many more hidden behind more complicated patterns to
> > achieve the same end results.
>
> Next version of the same patchset. Object size is shrinking now, at
> least for the one compiler I tested. And a few kernel hackers met on
> a frozen lake in hell with pigs flying overhead and could actually
> agree on a name. While I am sure almost every reader will still
> disagree and have one or two better suggestions, I would like to use
> this historical moment.
>
> list_del_each and list_del_each_entry it shall be!

Can you add _init variants to this? There are many loops that
actually require list_del_init() rather than list_del()...
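
For illustration, such _init variants might look roughly like the sketch below (hypothetical names, built on top of the corrected !list_empty() test; the actual resend may differ):

/* sketch: like list_del_each, but reinitialize each removed list_head */
#define list_del_each_init(pos, head) \
	while (!list_empty(head) && (pos = (head)->next, \
	       list_del_init(pos), 1))

/* sketch: entry variant, reinitializing the removed entry's member */
#define list_del_each_entry_init(pos, head, member) \
	while (!list_empty(head) && (pos = list_first_entry((head), \
	       typeof(*pos), member), list_del_init(&pos->member), 1))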

Cheers,

Dave.
--
Dave Chinner
[email protected]

2013-07-15 19:06:31

by Jörn Engel

[permalink] [raw]
Subject: Re: [PATCH 1/2] list: add list_del_each_entry

On Fri, 5 July 2013 23:38:01 +0100, Filipe David Manana wrote:
>
> > +#define list_del_each_entry(pos, head, member) \
> > + while (list_empty(head) && (pos = list_first_entry((head), \
> > + typeof(*pos), member), list_del((head)->next), 1))
> > +
>
> Shouldn't it be while (!list_empty(head) ... ?
> (not operator addition)

Obviously! Now where is that brown paperbag...

Will resend in a few days, including the _init variants dchinner asked
for.

Jörn

--
"Computer system analysis is like child-rearing; you can do grievous damage,
but you cannot ensure success."
-- Tom DeMarco