2016-04-12 22:55:20

by Waiman Long

Subject: [PATCH v7 0/4] vfs: Use per-cpu list for SB's s_inodes list

v6->v7:
- Fix the race condition in __pcpu_list_next_cpu() as reported by
Jan Kara.
- No changes in patches 2-4.

v5->v6:
- Remove patch 5 which can increase the kernel testing matrix.
- Disable preemption in pcpu_list_add() as the 0-day test complained
about it, even though it is not technically necessary.
- Add a PERCPU_LIST_WARN_ON() macro to simplify code.
- No changes in patches 2-4.

v4->v5:
- Fix the UP panic problem reported by the 0-day test by unifying the
SMP and UP code.
- Add patch 5 to add a new kernel config parameter to allow disabling
per-cpu list for small systems that won't benefit much from this
feature.

v3->v4:
- Fix some race conditions in the code.
- Add another patch from Jan to replace list_for_each_entry_safe()
by list_for_each_entry().
- Add lockdep annotation.

v2->v3:
- Replace list_for_each_entry() and list_for_each_entry_safe()
directly with pcpu_list_iterate() and pcpu_list_iterate_safe()
respectively. Those two functions provide a stateful per-cpu list
iteration interface.
- Include Jan Kara's patch to clean up the fsnotify_unmount_inodes()
function.

v1->v2:
- Use separate structures for list head and nodes & provide a
cleaner interface.
- Use the existing list_for_each_entry() or list_for_each_entry_safe()
macros for each of the sb's s_inodes iteration functions instead
of using list_for_each_entry_safe() for all of them, which may not
be safe in some cases.
- Use an iterator interface to access all the nodes of a group of
per-cpu lists. This approach is cleaner than the previous double-for
macro, which was kind of hacky. However, it does require more
extensive code changes.
- Add a preparatory patch 2 to extract the per-inode code from
the superblock s_inodes list iteration functions to minimize the
code changes needed in patch 3.

This patchset is a replacement for my previous list batching patch -
https://lwn.net/Articles/674105/. Compared with the previous patch,
this one provides better performance and fairness. However, it also
requires a few more changes in the VFS layer.

This patchset is a derivative of Andi Kleen's patch on "Initial per
cpu list for the per sb inode list"

https://git.kernel.org/cgit/linux/kernel/git/ak/linux-misc.git/commit/?h=hle315/
combined&id=f1cf9e715a40f44086662ae3b29f123cf059cbf4

Patch 1 introduces the per-cpu list.

Patch 2 cleans up the fsnotify_unmount_inodes() function by making
the code simpler and more standard.

Patch 3 replaces the use of list_for_each_entry_safe() in
evict_inodes() and invalidate_inodes() by list_for_each_entry().

Patch 4 modifies the superblock and inode structures to use the per-cpu
list. The corresponding functions that reference those structures
are modified.

Jan Kara (2):
fsnotify: Simplify inode iteration on umount
vfs: Remove unnecessary list_for_each_entry_safe() variants

Waiman Long (2):
lib/percpu-list: Per-cpu list with associated per-cpu locks
vfs: Use per-cpu list for superblock's inode list

fs/block_dev.c | 13 ++-
fs/drop_caches.c | 10 +-
fs/fs-writeback.c | 13 ++-
fs/inode.c | 40 +++-----
fs/notify/inode_mark.c | 53 +++--------
fs/quota/dquot.c | 16 ++--
fs/super.c | 7 +-
include/linux/fs.h | 8 +-
include/linux/percpu-list.h | 231 +++++++++++++++++++++++++++++++++++++++++++
lib/Makefile | 2 +-
lib/percpu-list.c | 100 +++++++++++++++++++
11 files changed, 397 insertions(+), 96 deletions(-)
create mode 100644 include/linux/percpu-list.h
create mode 100644 lib/percpu-list.c


2016-04-12 22:55:24

by Waiman Long

Subject: [PATCH v7 2/4] fsnotify: Simplify inode iteration on umount

From: Jan Kara <[email protected]>

fsnotify_unmount_inodes() played complex tricks to pin the next inode in
the sb->s_inodes list when iterating over all inodes. If we switch to
keeping the current inode pinned somewhat longer, we can make the code
much simpler and more standard.

Signed-off-by: Jan Kara <[email protected]>
Signed-off-by: Waiman Long <[email protected]>
---
fs/notify/inode_mark.c | 45 +++++++++------------------------------------
1 files changed, 9 insertions(+), 36 deletions(-)

diff --git a/fs/notify/inode_mark.c b/fs/notify/inode_mark.c
index 741077d..a364524 100644
--- a/fs/notify/inode_mark.c
+++ b/fs/notify/inode_mark.c
@@ -150,12 +150,10 @@ int fsnotify_add_inode_mark(struct fsnotify_mark *mark,
*/
void fsnotify_unmount_inodes(struct super_block *sb)
{
- struct inode *inode, *next_i, *need_iput = NULL;
+ struct inode *inode, *iput_inode = NULL;

spin_lock(&sb->s_inode_list_lock);
- list_for_each_entry_safe(inode, next_i, &sb->s_inodes, i_sb_list) {
- struct inode *need_iput_tmp;
-
+ list_for_each_entry(inode, &sb->s_inodes, i_sb_list) {
/*
* We cannot __iget() an inode in state I_FREEING,
* I_WILL_FREE, or I_NEW which is fine because by that point
@@ -178,49 +176,24 @@ void fsnotify_unmount_inodes(struct super_block *sb)
continue;
}

- need_iput_tmp = need_iput;
- need_iput = NULL;
-
- /* In case fsnotify_inode_delete() drops a reference. */
- if (inode != need_iput_tmp)
- __iget(inode);
- else
- need_iput_tmp = NULL;
+ __iget(inode);
spin_unlock(&inode->i_lock);
-
- /* In case the dropping of a reference would nuke next_i. */
- while (&next_i->i_sb_list != &sb->s_inodes) {
- spin_lock(&next_i->i_lock);
- if (!(next_i->i_state & (I_FREEING | I_WILL_FREE)) &&
- atomic_read(&next_i->i_count)) {
- __iget(next_i);
- need_iput = next_i;
- spin_unlock(&next_i->i_lock);
- break;
- }
- spin_unlock(&next_i->i_lock);
- next_i = list_next_entry(next_i, i_sb_list);
- }
-
- /*
- * We can safely drop s_inode_list_lock here because either
- * we actually hold references on both inode and next_i or
- * end of list. Also no new inodes will be added since the
- * umount has begun.
- */
spin_unlock(&sb->s_inode_list_lock);

- if (need_iput_tmp)
- iput(need_iput_tmp);
+ if (iput_inode)
+ iput(iput_inode);

/* for each watch, send FS_UNMOUNT and then remove it */
fsnotify(inode, FS_UNMOUNT, inode, FSNOTIFY_EVENT_INODE, NULL, 0);

fsnotify_inode_delete(inode);

- iput(inode);
+ iput_inode = inode;

spin_lock(&sb->s_inode_list_lock);
}
spin_unlock(&sb->s_inode_list_lock);
+
+ if (iput_inode)
+ iput(iput_inode);
}
--
1.7.1

2016-04-12 22:55:35

by Waiman Long

Subject: [PATCH v7 3/4] vfs: Remove unnecessary list_for_each_entry_safe() variants

From: Jan Kara <[email protected]>

evict_inodes() and invalidate_inodes() use list_for_each_entry_safe()
to iterate sb->s_inodes list. However, since we use i_lru list entry for
our local temporary list of inodes to destroy, the inode is guaranteed
to stay in sb->s_inodes list while we hold sb->s_inode_list_lock. So
there is no real need for safe iteration variant and we can use
list_for_each_entry() just fine.

Signed-off-by: Jan Kara <[email protected]>
Signed-off-by: Waiman Long <[email protected]>
---
fs/inode.c | 8 ++++----
1 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/fs/inode.c b/fs/inode.c
index 69b8b52..c9cbea8 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -596,12 +596,12 @@ static void dispose_list(struct list_head *head)
*/
void evict_inodes(struct super_block *sb)
{
- struct inode *inode, *next;
+ struct inode *inode;
LIST_HEAD(dispose);

again:
spin_lock(&sb->s_inode_list_lock);
- list_for_each_entry_safe(inode, next, &sb->s_inodes, i_sb_list) {
+ list_for_each_entry(inode, &sb->s_inodes, i_sb_list) {
if (atomic_read(&inode->i_count))
continue;

@@ -646,11 +646,11 @@ again:
int invalidate_inodes(struct super_block *sb, bool kill_dirty)
{
int busy = 0;
- struct inode *inode, *next;
+ struct inode *inode;
LIST_HEAD(dispose);

spin_lock(&sb->s_inode_list_lock);
- list_for_each_entry_safe(inode, next, &sb->s_inodes, i_sb_list) {
+ list_for_each_entry(inode, &sb->s_inodes, i_sb_list) {
spin_lock(&inode->i_lock);
if (inode->i_state & (I_NEW | I_FREEING | I_WILL_FREE)) {
spin_unlock(&inode->i_lock);
--
1.7.1

2016-04-12 22:59:30

by Waiman Long

Subject: [PATCH v7 4/4] vfs: Use per-cpu list for superblock's inode list

When many threads are trying to add or delete inode to or from
a superblock's s_inodes list, spinlock contention on the list can
become a performance bottleneck.

This patch changes the s_inodes field to become a per-cpu list with
per-cpu spinlocks. As a result, the following superblock inode list
(sb->s_inodes) iteration functions in the VFS are modified accordingly,
as sketched after the list:

1. iterate_bdevs()
2. drop_pagecache_sb()
3. wait_sb_inodes()
4. evict_inodes()
5. invalidate_inodes()
6. fsnotify_unmount_inodes()
7. add_dquot_ref()
8. remove_dquot_ref()
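
For illustration, a sketch of the typical conversion (not one of the
actual hunks below): the global s_inode_list_lock plus
list_for_each_entry() becomes a stateful while loop, where state.lock
is the per-cpu lock of the list currently being walked:

  /* Before */
  spin_lock(&sb->s_inode_list_lock);
  list_for_each_entry(inode, &sb->s_inodes, i_sb_list) {
          /* ... process each inode ... */
  }
  spin_unlock(&sb->s_inode_list_lock);

  /* After */
  DEFINE_PCPU_LIST_STATE(state);

  while (pcpu_list_iterate(sb->s_inodes, &state)) {
          inode = list_entry(state.curr, struct inode, i_sb_list);
          /* ... process each inode; state.lock is held here ... */
  }
  /* no final unlock: the iterator drops state.lock when it finishes */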

An exit microbenchmark was used that creates a large number of threads,
attaches many inodes to them and then exits. The runtimes of that
microbenchmark with 1000 threads before and after the patch on a
4-socket Intel E7-4820 v3 system (40 cores, 80 threads) were as
follows:

Kernel              Elapsed Time    System Time
------              ------------    -----------
Vanilla 4.5-rc4        65.29s         82m14s
Patched 4.5-rc4        22.81s         23m03s

Before the patch, spinlock contention in the inode_sb_list_add()
function at the startup phase and the inode_sb_list_del() function at
the exit phase accounted for about 79% and 93% of total CPU time
respectively (as measured by perf). After the patch, the pcpu_list_add()
function consumed only about 0.04% of CPU time in the startup phase,
and the pcpu_list_del() function consumed about 0.4% of CPU time in
the exit phase. There was still some spinlock contention, but it
happened elsewhere.

Signed-off-by: Waiman Long <[email protected]>
---
fs/block_dev.c | 13 +++++++------
fs/drop_caches.c | 10 +++++-----
fs/fs-writeback.c | 13 +++++++------
fs/inode.c | 36 +++++++++++++++---------------------
fs/notify/inode_mark.c | 10 +++++-----
fs/quota/dquot.c | 16 ++++++++--------
fs/super.c | 7 ++++---
include/linux/fs.h | 8 ++++----
8 files changed, 55 insertions(+), 58 deletions(-)

diff --git a/fs/block_dev.c b/fs/block_dev.c
index 20a2c02..fd893c0 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -1884,11 +1884,13 @@ EXPORT_SYMBOL(__invalidate_device);
void iterate_bdevs(void (*func)(struct block_device *, void *), void *arg)
{
struct inode *inode, *old_inode = NULL;
+ DEFINE_PCPU_LIST_STATE(state);

- spin_lock(&blockdev_superblock->s_inode_list_lock);
- list_for_each_entry(inode, &blockdev_superblock->s_inodes, i_sb_list) {
- struct address_space *mapping = inode->i_mapping;
+ while (pcpu_list_iterate(blockdev_superblock->s_inodes, &state)) {
+ struct address_space *mapping;

+ inode = list_entry(state.curr, struct inode, i_sb_list);
+ mapping = inode->i_mapping;
spin_lock(&inode->i_lock);
if (inode->i_state & (I_FREEING|I_WILL_FREE|I_NEW) ||
mapping->nrpages == 0) {
@@ -1897,7 +1899,7 @@ void iterate_bdevs(void (*func)(struct block_device *, void *), void *arg)
}
__iget(inode);
spin_unlock(&inode->i_lock);
- spin_unlock(&blockdev_superblock->s_inode_list_lock);
+ spin_unlock(state.lock);
/*
* We hold a reference to 'inode' so it couldn't have been
* removed from s_inodes list while we dropped the
@@ -1911,8 +1913,7 @@ void iterate_bdevs(void (*func)(struct block_device *, void *), void *arg)

func(I_BDEV(inode), arg);

- spin_lock(&blockdev_superblock->s_inode_list_lock);
+ spin_lock(state.lock);
}
- spin_unlock(&blockdev_superblock->s_inode_list_lock);
iput(old_inode);
}
diff --git a/fs/drop_caches.c b/fs/drop_caches.c
index d72d52b..ec272ed 100644
--- a/fs/drop_caches.c
+++ b/fs/drop_caches.c
@@ -16,9 +16,10 @@ int sysctl_drop_caches;
static void drop_pagecache_sb(struct super_block *sb, void *unused)
{
struct inode *inode, *toput_inode = NULL;
+ DEFINE_PCPU_LIST_STATE(state);

- spin_lock(&sb->s_inode_list_lock);
- list_for_each_entry(inode, &sb->s_inodes, i_sb_list) {
+ while (pcpu_list_iterate(sb->s_inodes, &state)) {
+ inode = list_entry(state.curr, struct inode, i_sb_list);
spin_lock(&inode->i_lock);
if ((inode->i_state & (I_FREEING|I_WILL_FREE|I_NEW)) ||
(inode->i_mapping->nrpages == 0)) {
@@ -27,15 +28,14 @@ static void drop_pagecache_sb(struct super_block *sb, void *unused)
}
__iget(inode);
spin_unlock(&inode->i_lock);
- spin_unlock(&sb->s_inode_list_lock);
+ spin_unlock(state.lock);

invalidate_mapping_pages(inode->i_mapping, 0, -1);
iput(toput_inode);
toput_inode = inode;

- spin_lock(&sb->s_inode_list_lock);
+ spin_lock(state.lock);
}
- spin_unlock(&sb->s_inode_list_lock);
iput(toput_inode);
}

diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 592cea5..f97f6d6 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -2154,6 +2154,7 @@ EXPORT_SYMBOL(__mark_inode_dirty);
static void wait_sb_inodes(struct super_block *sb)
{
struct inode *inode, *old_inode = NULL;
+ DEFINE_PCPU_LIST_STATE(state);

/*
* We need to be protected against the filesystem going from
@@ -2162,7 +2163,6 @@ static void wait_sb_inodes(struct super_block *sb)
WARN_ON(!rwsem_is_locked(&sb->s_umount));

mutex_lock(&sb->s_sync_lock);
- spin_lock(&sb->s_inode_list_lock);

/*
* Data integrity sync. Must wait for all pages under writeback,
@@ -2171,9 +2171,11 @@ static void wait_sb_inodes(struct super_block *sb)
* In which case, the inode may not be on the dirty list, but
* we still have to wait for that writeout.
*/
- list_for_each_entry(inode, &sb->s_inodes, i_sb_list) {
- struct address_space *mapping = inode->i_mapping;
+ while (pcpu_list_iterate(sb->s_inodes, &state)) {
+ struct address_space *mapping;

+ inode = list_entry(state.curr, struct inode, i_sb_list);
+ mapping = inode->i_mapping;
spin_lock(&inode->i_lock);
if ((inode->i_state & (I_FREEING|I_WILL_FREE|I_NEW)) ||
(mapping->nrpages == 0)) {
@@ -2182,7 +2184,7 @@ static void wait_sb_inodes(struct super_block *sb)
}
__iget(inode);
spin_unlock(&inode->i_lock);
- spin_unlock(&sb->s_inode_list_lock);
+ spin_unlock(state.lock);

/*
* We hold a reference to 'inode' so it couldn't have been
@@ -2204,9 +2206,8 @@ static void wait_sb_inodes(struct super_block *sb)

cond_resched();

- spin_lock(&sb->s_inode_list_lock);
+ spin_lock(state.lock);
}
- spin_unlock(&sb->s_inode_list_lock);
iput(old_inode);
mutex_unlock(&sb->s_sync_lock);
}
diff --git a/fs/inode.c b/fs/inode.c
index c9cbea8..58d1a13 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -28,7 +28,7 @@
* inode->i_state, inode->i_hash, __iget()
* Inode LRU list locks protect:
* inode->i_sb->s_inode_lru, inode->i_lru
- * inode->i_sb->s_inode_list_lock protects:
+ * inode->i_sb->s_inodes->lock protects:
* inode->i_sb->s_inodes, inode->i_sb_list
* bdi->wb.list_lock protects:
* bdi->wb.b_{dirty,io,more_io,dirty_time}, inode->i_io_list
@@ -37,7 +37,7 @@
*
* Lock ordering:
*
- * inode->i_sb->s_inode_list_lock
+ * inode->i_sb->s_inodes->lock
* inode->i_lock
* Inode LRU list locks
*
@@ -45,7 +45,7 @@
* inode->i_lock
*
* inode_hash_lock
- * inode->i_sb->s_inode_list_lock
+ * inode->i_sb->s_inodes->lock
* inode->i_lock
*
* iunique_lock
@@ -430,19 +430,14 @@ static void inode_lru_list_del(struct inode *inode)
*/
void inode_sb_list_add(struct inode *inode)
{
- spin_lock(&inode->i_sb->s_inode_list_lock);
- list_add(&inode->i_sb_list, &inode->i_sb->s_inodes);
- spin_unlock(&inode->i_sb->s_inode_list_lock);
+ pcpu_list_add(&inode->i_sb_list, inode->i_sb->s_inodes);
}
EXPORT_SYMBOL_GPL(inode_sb_list_add);

static inline void inode_sb_list_del(struct inode *inode)
{
- if (!list_empty(&inode->i_sb_list)) {
- spin_lock(&inode->i_sb->s_inode_list_lock);
- list_del_init(&inode->i_sb_list);
- spin_unlock(&inode->i_sb->s_inode_list_lock);
- }
+ if (!list_empty(&inode->i_sb_list.list))
+ pcpu_list_del(&inode->i_sb_list);
}

static unsigned long hash(struct super_block *sb, unsigned long hashval)
@@ -597,11 +592,13 @@ static void dispose_list(struct list_head *head)
void evict_inodes(struct super_block *sb)
{
struct inode *inode;
+ struct pcpu_list_state state;
LIST_HEAD(dispose);

again:
- spin_lock(&sb->s_inode_list_lock);
- list_for_each_entry(inode, &sb->s_inodes, i_sb_list) {
+ init_pcpu_list_state(&state);
+ while (pcpu_list_iterate(sb->s_inodes, &state)) {
+ inode = list_entry(state.curr, struct inode, i_sb_list);
if (atomic_read(&inode->i_count))
continue;

@@ -622,13 +619,12 @@ again:
* bit so we don't livelock.
*/
if (need_resched()) {
- spin_unlock(&sb->s_inode_list_lock);
+ spin_unlock(state.lock);
cond_resched();
dispose_list(&dispose);
goto again;
}
}
- spin_unlock(&sb->s_inode_list_lock);

dispose_list(&dispose);
}
@@ -648,9 +644,10 @@ int invalidate_inodes(struct super_block *sb, bool kill_dirty)
int busy = 0;
struct inode *inode;
LIST_HEAD(dispose);
+ DEFINE_PCPU_LIST_STATE(state);

- spin_lock(&sb->s_inode_list_lock);
- list_for_each_entry(inode, &sb->s_inodes, i_sb_list) {
+ while (pcpu_list_iterate(sb->s_inodes, &state)) {
+ inode = list_entry(state.curr, struct inode, i_sb_list);
spin_lock(&inode->i_lock);
if (inode->i_state & (I_NEW | I_FREEING | I_WILL_FREE)) {
spin_unlock(&inode->i_lock);
@@ -672,7 +669,6 @@ int invalidate_inodes(struct super_block *sb, bool kill_dirty)
spin_unlock(&inode->i_lock);
list_add(&inode->i_lru, &dispose);
}
- spin_unlock(&sb->s_inode_list_lock);

dispose_list(&dispose);

@@ -887,7 +883,7 @@ struct inode *new_inode_pseudo(struct super_block *sb)
spin_lock(&inode->i_lock);
inode->i_state = 0;
spin_unlock(&inode->i_lock);
- INIT_LIST_HEAD(&inode->i_sb_list);
+ init_pcpu_list_node(&inode->i_sb_list);
}
return inode;
}
@@ -908,8 +904,6 @@ struct inode *new_inode(struct super_block *sb)
{
struct inode *inode;

- spin_lock_prefetch(&sb->s_inode_list_lock);
-
inode = new_inode_pseudo(sb);
if (inode)
inode_sb_list_add(inode);
diff --git a/fs/notify/inode_mark.c b/fs/notify/inode_mark.c
index a364524..12515a4 100644
--- a/fs/notify/inode_mark.c
+++ b/fs/notify/inode_mark.c
@@ -151,14 +151,15 @@ int fsnotify_add_inode_mark(struct fsnotify_mark *mark,
void fsnotify_unmount_inodes(struct super_block *sb)
{
struct inode *inode, *iput_inode = NULL;
+ DEFINE_PCPU_LIST_STATE(state);

- spin_lock(&sb->s_inode_list_lock);
- list_for_each_entry(inode, &sb->s_inodes, i_sb_list) {
+ while (pcpu_list_iterate(sb->s_inodes, &state)) {
/*
* We cannot __iget() an inode in state I_FREEING,
* I_WILL_FREE, or I_NEW which is fine because by that point
* the inode cannot have any associated watches.
*/
+ inode = list_entry(state.curr, struct inode, i_sb_list);
spin_lock(&inode->i_lock);
if (inode->i_state & (I_FREEING|I_WILL_FREE|I_NEW)) {
spin_unlock(&inode->i_lock);
@@ -178,7 +179,7 @@ void fsnotify_unmount_inodes(struct super_block *sb)

__iget(inode);
spin_unlock(&inode->i_lock);
- spin_unlock(&sb->s_inode_list_lock);
+ spin_unlock(state.lock);

if (iput_inode)
iput(iput_inode);
@@ -190,9 +191,8 @@ void fsnotify_unmount_inodes(struct super_block *sb)

iput_inode = inode;

- spin_lock(&sb->s_inode_list_lock);
+ spin_lock(state.lock);
}
- spin_unlock(&sb->s_inode_list_lock);

if (iput_inode)
iput(iput_inode);
diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
index ff21980..389c7dc 100644
--- a/fs/quota/dquot.c
+++ b/fs/quota/dquot.c
@@ -936,12 +936,13 @@ static int dqinit_needed(struct inode *inode, int type)
static void add_dquot_ref(struct super_block *sb, int type)
{
struct inode *inode, *old_inode = NULL;
+ DEFINE_PCPU_LIST_STATE(state);
#ifdef CONFIG_QUOTA_DEBUG
int reserved = 0;
#endif

- spin_lock(&sb->s_inode_list_lock);
- list_for_each_entry(inode, &sb->s_inodes, i_sb_list) {
+ while (pcpu_list_iterate(sb->s_inodes, &state)) {
+ inode = list_entry(state.curr, struct inode, i_sb_list);
spin_lock(&inode->i_lock);
if ((inode->i_state & (I_FREEING|I_WILL_FREE|I_NEW)) ||
!atomic_read(&inode->i_writecount) ||
@@ -951,7 +952,7 @@ static void add_dquot_ref(struct super_block *sb, int type)
}
__iget(inode);
spin_unlock(&inode->i_lock);
- spin_unlock(&sb->s_inode_list_lock);
+ spin_unlock(state.lock);

#ifdef CONFIG_QUOTA_DEBUG
if (unlikely(inode_get_rsv_space(inode) > 0))
@@ -969,9 +970,8 @@ static void add_dquot_ref(struct super_block *sb, int type)
* later.
*/
old_inode = inode;
- spin_lock(&sb->s_inode_list_lock);
+ spin_lock(state.lock);
}
- spin_unlock(&sb->s_inode_list_lock);
iput(old_inode);

#ifdef CONFIG_QUOTA_DEBUG
@@ -1039,15 +1039,16 @@ static void remove_dquot_ref(struct super_block *sb, int type,
{
struct inode *inode;
int reserved = 0;
+ DEFINE_PCPU_LIST_STATE(state);

- spin_lock(&sb->s_inode_list_lock);
- list_for_each_entry(inode, &sb->s_inodes, i_sb_list) {
+ while (pcpu_list_iterate(sb->s_inodes, &state)) {
/*
* We have to scan also I_NEW inodes because they can already
* have quota pointer initialized. Luckily, we need to touch
* only quota pointers and these have separate locking
* (dq_data_lock).
*/
+ inode = list_entry(state.curr, struct inode, i_sb_list);
spin_lock(&dq_data_lock);
if (!IS_NOQUOTA(inode)) {
if (unlikely(inode_get_rsv_space(inode) > 0))
@@ -1056,7 +1057,6 @@ static void remove_dquot_ref(struct super_block *sb, int type,
}
spin_unlock(&dq_data_lock);
}
- spin_unlock(&sb->s_inode_list_lock);
#ifdef CONFIG_QUOTA_DEBUG
if (reserved) {
printk(KERN_WARNING "VFS (%s): Writes happened after quota"
diff --git a/fs/super.c b/fs/super.c
index 74914b1..774193d 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -163,6 +163,7 @@ static void destroy_super(struct super_block *s)
{
list_lru_destroy(&s->s_dentry_lru);
list_lru_destroy(&s->s_inode_lru);
+ free_pcpu_list_head(&s->s_inodes);
security_sb_free(s);
WARN_ON(!list_empty(&s->s_mounts));
kfree(s->s_subtype);
@@ -204,9 +205,9 @@ static struct super_block *alloc_super(struct file_system_type *type, int flags)
INIT_HLIST_NODE(&s->s_instances);
INIT_HLIST_BL_HEAD(&s->s_anon);
mutex_init(&s->s_sync_lock);
- INIT_LIST_HEAD(&s->s_inodes);
- spin_lock_init(&s->s_inode_list_lock);

+ if (init_pcpu_list_head(&s->s_inodes))
+ goto fail;
if (list_lru_init_memcg(&s->s_dentry_lru))
goto fail;
if (list_lru_init_memcg(&s->s_inode_lru))
@@ -427,7 +428,7 @@ void generic_shutdown_super(struct super_block *sb)
if (sop->put_super)
sop->put_super(sb);

- if (!list_empty(&sb->s_inodes)) {
+ if (!pcpu_list_empty(sb->s_inodes)) {
printk("VFS: Busy inodes after unmount of %s. "
"Self-destruct in 5 seconds. Have a nice day...\n",
sb->s_id);
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 70e61b5..afc2a6c 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -28,6 +28,7 @@
#include <linux/uidgid.h>
#include <linux/lockdep.h>
#include <linux/percpu-rwsem.h>
+#include <linux/percpu-list.h>
#include <linux/blk_types.h>
#include <linux/workqueue.h>
#include <linux/percpu-rwsem.h>
@@ -651,7 +652,7 @@ struct inode {
u16 i_wb_frn_history;
#endif
struct list_head i_lru; /* inode LRU list */
- struct list_head i_sb_list;
+ struct pcpu_list_node i_sb_list;
union {
struct hlist_head i_dentry;
struct rcu_head i_rcu;
@@ -1416,9 +1417,8 @@ struct super_block {
*/
int s_stack_depth;

- /* s_inode_list_lock protects s_inodes */
- spinlock_t s_inode_list_lock ____cacheline_aligned_in_smp;
- struct list_head s_inodes; /* all inodes */
+ /* The percpu locks protect s_inodes */
+ struct pcpu_list_head __percpu *s_inodes; /* all inodes */
};

extern struct timespec current_fs_time(struct super_block *sb);
--
1.7.1

2016-04-12 23:00:51

by Waiman Long

Subject: [PATCH v7 1/4] lib/percpu-list: Per-cpu list with associated per-cpu locks

Linked lists are used everywhere in the Linux kernel. However, if many
threads are trying to add entries to or delete entries from the same
linked list, it can create a performance bottleneck.

This patch introduces a new per-cpu list subsystem with associated
per-cpu locks for protecting each of the lists individually. This
allows list entry insertion and deletion operations to happen in
parallel instead of being serialized with a global list and lock.

List entry insertion is strictly per cpu. List deletion, however, can
happen on a cpu other than the one that did the insertion. So we still
need a lock to protect the list. Because of that, there may still be
a small amount of contention when deletion is being done.

A new header file include/linux/percpu-list.h will be added with the
associated pcpu_list_head and pcpu_list_node structures. The following
functions are provided to manage the per-cpu list:

1. int init_pcpu_list_head(struct pcpu_list_head **ppcpu_head)
2. void pcpu_list_add(struct pcpu_list_node *node,
struct pcpu_list_head *head)
3. void pcpu_list_del(struct pcpu_list_node *node)

Iteration of all the list entries within a group of per-cpu
lists is done by calling either the pcpu_list_iterate() or
pcpu_list_iterate_safe() functions in a while loop. They correspond
to the list_for_each_entry() and list_for_each_entry_safe() macros
respectively. The iteration state is kept in a pcpu_list_state
structure that is passed to the iteration functions.
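
For illustration only, a minimal usage sketch of the interface above
(struct foo and the helper names are hypothetical; the real call sites
are converted in patch 4):

  #include <linux/percpu-list.h>

  struct foo {
          int data;
          struct pcpu_list_node link;
  };

  static struct pcpu_list_head *foo_list;       /* group of per-cpu lists */

  static int foo_init(void)
  {
          return init_pcpu_list_head(&foo_list); /* allocates per-cpu heads */
  }

  static void foo_add(struct foo *item)
  {
          init_pcpu_list_node(&item->link);
          pcpu_list_add(&item->link, foo_list); /* goes on this CPU's list */
  }

  static void foo_walk_all(void)
  {
          DEFINE_PCPU_LIST_STATE(state);
          struct foo *item;

          while (pcpu_list_iterate(foo_list, &state)) {
                  /* state.lock (the current per-cpu lock) is held here */
                  item = list_entry(state.curr, struct foo, link);
                  /* ... examine item ... */
          }
          /* all per-cpu locks are released once iteration completes */
  }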

Signed-off-by: Waiman Long <[email protected]>
---
include/linux/percpu-list.h | 231 +++++++++++++++++++++++++++++++++++++++++++
lib/Makefile | 2 +-
lib/percpu-list.c | 100 +++++++++++++++++++
3 files changed, 332 insertions(+), 1 deletions(-)
create mode 100644 include/linux/percpu-list.h
create mode 100644 lib/percpu-list.c

diff --git a/include/linux/percpu-list.h b/include/linux/percpu-list.h
new file mode 100644
index 0000000..ce8238a
--- /dev/null
+++ b/include/linux/percpu-list.h
@@ -0,0 +1,231 @@
+/*
+ * Per-cpu list
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * (C) Copyright 2016 Hewlett-Packard Enterprise Development LP
+ *
+ * Authors: Waiman Long <[email protected]>
+ */
+#ifndef __LINUX_PERCPU_LIST_H
+#define __LINUX_PERCPU_LIST_H
+
+#include <linux/spinlock.h>
+#include <linux/list.h>
+#include <linux/percpu.h>
+
+/*
+ * include/linux/percpu-list.h
+ *
+ * A per-cpu list protected by a per-cpu spinlock.
+ *
+ * The pcpu_list_head structure contains the spinlock, the other
+ * pcpu_list_node structures only contains a pointer to the spinlock in
+ * pcpu_list_head.
+ */
+struct pcpu_list_head {
+ struct list_head list;
+ spinlock_t lock;
+};
+
+#define PCPU_LIST_HEAD_INIT(name) \
+ { \
+ .list.prev = &name.list, \
+ .list.next = &name.list, \
+ .lock = __SPIN_LOCK_UNLOCKED(name), \
+ }
+
+/*
+ * Per-cpu list iteration state
+ */
+struct pcpu_list_state {
+ int cpu;
+ spinlock_t *lock;
+ struct list_head *head; /* List head of current per-cpu list */
+ struct pcpu_list_node *curr;
+ struct pcpu_list_node *next;
+};
+
+#define PCPU_LIST_STATE_INIT() \
+ { \
+ .cpu = -1, \
+ .lock = NULL, \
+ .head = NULL, \
+ .curr = NULL, \
+ .next = NULL, \
+ }
+
+#define DEFINE_PCPU_LIST_STATE(s) \
+ struct pcpu_list_state s = PCPU_LIST_STATE_INIT()
+
+static inline void init_pcpu_list_state(struct pcpu_list_state *state)
+{
+ state->cpu = -1;
+ state->lock = NULL;
+ state->head = NULL;
+ state->curr = NULL;
+ state->next = NULL;
+}
+
+#ifdef CONFIG_DEBUG_SPINLOCK
+#define PERCPU_LIST_WARN_ON(x) WARN_ON(x)
+#else
+#define PERCPU_LIST_WARN_ON(x)
+#endif
+
+/*
+ * Next per-cpu list entry
+ */
+#define pcpu_list_next_entry(pos, member) list_next_entry(pos, member.list)
+
+/*
+ * Per-cpu node data structure
+ */
+struct pcpu_list_node {
+ struct list_head list;
+ spinlock_t *lockptr;
+};
+
+#define PCPU_LIST_NODE_INIT(name) \
+ { \
+ .list.prev = &name.list, \
+ .list.next = &name.list, \
+ .lockptr = NULL \
+ }
+
+static inline void init_pcpu_list_node(struct pcpu_list_node *node)
+{
+ INIT_LIST_HEAD(&node->list);
+ node->lockptr = NULL;
+}
+
+static inline void free_pcpu_list_head(struct pcpu_list_head **ppcpu_head)
+{
+ free_percpu(*ppcpu_head);
+ *ppcpu_head = NULL;
+}
+
+/*
+ * Check if all the per-cpu lists are empty
+ */
+static inline bool pcpu_list_empty(struct pcpu_list_head *pcpu_head)
+{
+ int cpu;
+
+ for_each_possible_cpu(cpu)
+ if (!list_empty(&per_cpu_ptr(pcpu_head, cpu)->list))
+ return false;
+ return true;
+}
+
+/*
+ * Helper function to find the first entry of the next per-cpu list
+ * It works somewhat like for_each_possible_cpu(cpu).
+ *
+ * Return: true if the entry is found, false if all the lists exhausted
+ */
+static __always_inline bool
+__pcpu_list_next_cpu(struct pcpu_list_head *head, struct pcpu_list_state *state)
+{
+ if (state->lock)
+ spin_unlock(state->lock);
+next_cpu:
+ /*
+ * for_each_possible_cpu(cpu)
+ */
+ state->cpu = cpumask_next(state->cpu, cpu_possible_mask);
+ if (state->cpu >= nr_cpu_ids)
+ return false; /* All the per-cpu lists iterated */
+
+ state->head = &per_cpu_ptr(head, state->cpu)->list;
+ if (list_empty(state->head))
+ goto next_cpu;
+
+ state->lock = &per_cpu_ptr(head, state->cpu)->lock;
+ spin_lock(state->lock);
+ /*
+ * There is a slight chance that the list may become empty just
+ * before the lock is acquired. So an additional check is
+ * needed to make sure that state->curr points to a valid entry.
+ */
+ if (list_empty(state->head)) {
+ spin_unlock(state->lock);
+ goto next_cpu;
+ }
+ state->curr = list_entry(state->head->next,
+ struct pcpu_list_node, list);
+ return true;
+}
+
+/*
+ * Iterate to the next entry of the group of per-cpu lists
+ *
+ * Return: true if the next entry is found, false if all the entries iterated
+ */
+static inline bool pcpu_list_iterate(struct pcpu_list_head *head,
+ struct pcpu_list_state *state)
+{
+ /*
+ * Find next entry
+ */
+ if (state->curr)
+ state->curr = list_next_entry(state->curr, list);
+
+ if (!state->curr || (&state->curr->list == state->head)) {
+ /*
+ * The current per-cpu list has been exhausted, try the next
+ * per-cpu list.
+ */
+ if (!__pcpu_list_next_cpu(head, state))
+ return false;
+ }
+
+ PERCPU_LIST_WARN_ON(state->curr->lockptr != state->lock);
+ return true; /* Continue the iteration */
+}
+
+/*
+ * Iterate to the next entry of the group of per-cpu lists and safe
+ * against removal of list_entry
+ *
+ * Return: true if the next entry is found, false if all the entries iterated
+ */
+static inline bool pcpu_list_iterate_safe(struct pcpu_list_head *head,
+ struct pcpu_list_state *state)
+{
+ /*
+ * Find next entry
+ */
+ if (state->curr) {
+ state->curr = state->next;
+ state->next = list_next_entry(state->next, list);
+ }
+
+ if (!state->curr || (&state->curr->list == state->head)) {
+ /*
+ * The current per-cpu list has been exhausted, try the next
+ * per-cpu list.
+ */
+ if (!__pcpu_list_next_cpu(head, state))
+ return false;
+ state->next = list_next_entry(state->curr, list);
+ }
+
+ PERCPU_LIST_WARN_ON(state->curr->lockptr != state->lock);
+ return true; /* Continue the iteration */
+}
+
+extern void pcpu_list_add(struct pcpu_list_node *node,
+ struct pcpu_list_head *head);
+extern void pcpu_list_del(struct pcpu_list_node *node);
+extern int init_pcpu_list_head(struct pcpu_list_head **ppcpu_head);
+
+#endif /* __LINUX_PERCPU_LIST_H */
diff --git a/lib/Makefile b/lib/Makefile
index a65e9a8..387ed2b 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -40,7 +40,7 @@ obj-y += bcd.o div64.o sort.o parser.o halfmd4.o debug_locks.o random32.o \
gcd.o lcm.o list_sort.o uuid.o flex_array.o iov_iter.o clz_ctz.o \
bsearch.o find_bit.o llist.o memweight.o kfifo.o \
percpu-refcount.o percpu_ida.o rhashtable.o reciprocal_div.o \
- once.o
+ once.o percpu-list.o
obj-y += string_helpers.o
obj-$(CONFIG_TEST_STRING_HELPERS) += test-string_helpers.o
obj-y += hexdump.o
diff --git a/lib/percpu-list.c b/lib/percpu-list.c
new file mode 100644
index 0000000..8a96001
--- /dev/null
+++ b/lib/percpu-list.c
@@ -0,0 +1,100 @@
+/*
+ * Per-cpu list
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * (C) Copyright 2016 Hewlett-Packard Enterprise Development LP
+ *
+ * Authors: Waiman Long <[email protected]>
+ */
+#include <linux/percpu-list.h>
+#include <linux/lockdep.h>
+
+/*
+ * The per-cpu list lock needs its own class to avoid warning and stack
+ * trace when lockdep is enabled.
+ */
+static struct lock_class_key percpu_list_key;
+
+/*
+ * Initialize the per-cpu list head
+ */
+int init_pcpu_list_head(struct pcpu_list_head **ppcpu_head)
+{
+ struct pcpu_list_head *pcpu_head = alloc_percpu(struct pcpu_list_head);
+ int cpu;
+
+ if (!pcpu_head)
+ return -ENOMEM;
+
+ for_each_possible_cpu(cpu) {
+ struct pcpu_list_head *head = per_cpu_ptr(pcpu_head, cpu);
+
+ INIT_LIST_HEAD(&head->list);
+ head->lock = __SPIN_LOCK_UNLOCKED(&head->lock);
+ lockdep_set_class(&head->lock, &percpu_list_key);
+ }
+
+ *ppcpu_head = pcpu_head;
+ return 0;
+}
+
+/*
+ * List selection is based on the CPU being used when the pcpu_list_add()
+ * function is called. However, deletion may be done by a different CPU.
+ * So we still need to use a lock to protect the content of the list.
+ */
+void pcpu_list_add(struct pcpu_list_node *node, struct pcpu_list_head *head)
+{
+ struct pcpu_list_head *myhead;
+
+ /*
+ * Disable preemption to make sure that CPU won't gets changed.
+ */
+ preempt_disable();
+ myhead = this_cpu_ptr(head);
+ spin_lock(&myhead->lock);
+ node->lockptr = &myhead->lock;
+ list_add(&node->list, &myhead->list);
+ spin_unlock(&myhead->lock);
+ preempt_enable();
+}
+
+/*
+ * Delete a node from a percpu list
+ *
+ * We need to check the lock pointer again after taking the lock to guard
+ * against concurrent delete of the same node. If the lock pointer changes
+ * (becomes NULL or to a different one), we assume that the deletion was done
+ * elsewhere.
+ */
+void pcpu_list_del(struct pcpu_list_node *node)
+{
+ spinlock_t *lock = READ_ONCE(node->lockptr);
+
+ if (unlikely(!lock)) {
+ WARN(1, "pcpu_list_del: node 0x%lx has no associated lock\n",
+ (unsigned long)node);
+ return;
+ }
+
+ spin_lock(lock);
+ if (likely(lock == node->lockptr)) {
+ list_del_init(&node->list);
+ node->lockptr = NULL;
+ } else {
+ /*
+ * This path should never be executed.
+ */
+ WARN_ON(1);
+ }
+ spin_unlock(lock);
+}
--
1.7.1

2016-04-13 02:10:21

by Boqun Feng

Subject: Re: [PATCH v7 1/4] lib/percpu-list: Per-cpu list with associated per-cpu locks

Hi Waiman,

On Tue, Apr 12, 2016 at 06:54:43PM -0400, Waiman Long wrote:
[...]
> +
> +/*
> + * Initialize the per-cpu list head
> + */
> +int init_pcpu_list_head(struct pcpu_list_head **ppcpu_head)
> +{
> + struct pcpu_list_head *pcpu_head = alloc_percpu(struct pcpu_list_head);
> + int cpu;
> +
> + if (!pcpu_head)
> + return -ENOMEM;
> +
> + for_each_possible_cpu(cpu) {
> + struct pcpu_list_head *head = per_cpu_ptr(pcpu_head, cpu);
> +
> + INIT_LIST_HEAD(&head->list);
> + head->lock = __SPIN_LOCK_UNLOCKED(&head->lock);
> + lockdep_set_class(&head->lock, &percpu_list_key);
> + }
> +
> + *ppcpu_head = pcpu_head;
> + return 0;
> +}

The first time I looked at this patch, I had a hard time figuring out
which "struct pcpu_list_head" pointer is pointing to percpu data (the
pointer could be the parameter for per/this_cpu_ptr()), and which
pointer is pointing to an actual structure. For example, 'pcpu_head' and
'head' above are different types of pointers.
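
That is, roughly (using the names above):

  struct pcpu_list_head __percpu *pcpu_head; /* token for per_cpu_ptr()/this_cpu_ptr() */
  struct pcpu_list_head *head;               /* one concrete per-cpu instance */

  head = per_cpu_ptr(pcpu_head, cpu);        /* percpu token -> real structure */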

So besides improving my code reading skills, I think the following patch
helps ;-) Also it resolves several sparse splats when running
'make C=1 lib/'.

Thoughts?

Regards,
Boqun

-------------------------------------------->8
From: Boqun Feng <[email protected]>
Date: Wed, 13 Apr 2016 09:49:13 +0800
Subject: [PATCH] lib/percpu-list: Add __percpu modifier for parameters

Add the __percpu modifier properly to help:

1. Distinguish pointers to actual structures from those to percpu
structures, which could improve readability.

2. Prevent sparse from complaining about "different address spaces"

Signed-off-by: Boqun Feng <[email protected]>
---
include/linux/percpu-list.h | 16 +++++++++-------
lib/percpu-list.c | 8 +++++---
2 files changed, 14 insertions(+), 10 deletions(-)

diff --git a/include/linux/percpu-list.h b/include/linux/percpu-list.h
index ce8238a78198..4c8496004dc2 100644
--- a/include/linux/percpu-list.h
+++ b/include/linux/percpu-list.h
@@ -107,7 +107,8 @@ static inline void init_pcpu_list_node(struct pcpu_list_node *node)
node->lockptr = NULL;
}

-static inline void free_pcpu_list_head(struct pcpu_list_head **ppcpu_head)
+static inline void
+free_pcpu_list_head(struct pcpu_list_head __percpu **ppcpu_head)
{
free_percpu(*ppcpu_head);
*ppcpu_head = NULL;
@@ -116,7 +117,7 @@ static inline void free_pcpu_list_head(struct pcpu_list_head **ppcpu_head)
/*
* Check if all the per-cpu lists are empty
*/
-static inline bool pcpu_list_empty(struct pcpu_list_head *pcpu_head)
+static inline bool pcpu_list_empty(struct pcpu_list_head __percpu *pcpu_head)
{
int cpu;

@@ -133,7 +134,8 @@ static inline bool pcpu_list_empty(struct pcpu_list_head *pcpu_head)
* Return: true if the entry is found, false if all the lists exhausted
*/
static __always_inline bool
-__pcpu_list_next_cpu(struct pcpu_list_head *head, struct pcpu_list_state *state)
+__pcpu_list_next_cpu(struct pcpu_list_head __percpu *head,
+ struct pcpu_list_state *state)
{
if (state->lock)
spin_unlock(state->lock);
@@ -170,7 +172,7 @@ next_cpu:
*
* Return: true if the next entry is found, false if all the entries iterated
*/
-static inline bool pcpu_list_iterate(struct pcpu_list_head *head,
+static inline bool pcpu_list_iterate(struct pcpu_list_head __percpu *head,
struct pcpu_list_state *state)
{
/*
@@ -198,7 +200,7 @@ static inline bool pcpu_list_iterate(struct pcpu_list_head *head,
*
* Return: true if the next entry is found, false if all the entries iterated
*/
-static inline bool pcpu_list_iterate_safe(struct pcpu_list_head *head,
+static inline bool pcpu_list_iterate_safe(struct pcpu_list_head __percpu *head,
struct pcpu_list_state *state)
{
/*
@@ -224,8 +226,8 @@ static inline bool pcpu_list_iterate_safe(struct pcpu_list_head *head,
}

extern void pcpu_list_add(struct pcpu_list_node *node,
- struct pcpu_list_head *head);
+ struct pcpu_list_head __percpu *head);
extern void pcpu_list_del(struct pcpu_list_node *node);
-extern int init_pcpu_list_head(struct pcpu_list_head **ppcpu_head);
+extern int init_pcpu_list_head(struct pcpu_list_head __percpu **ppcpu_head);

#endif /* __LINUX_PERCPU_LIST_H */
diff --git a/lib/percpu-list.c b/lib/percpu-list.c
index 8a9600169966..ef2bcb8e5a1b 100644
--- a/lib/percpu-list.c
+++ b/lib/percpu-list.c
@@ -27,9 +27,10 @@ static struct lock_class_key percpu_list_key;
/*
* Initialize the per-cpu list head
*/
-int init_pcpu_list_head(struct pcpu_list_head **ppcpu_head)
+int init_pcpu_list_head(struct pcpu_list_head __percpu **ppcpu_head)
{
- struct pcpu_list_head *pcpu_head = alloc_percpu(struct pcpu_list_head);
+ struct pcpu_list_head __percpu *pcpu_head =
+ alloc_percpu(struct pcpu_list_head);
int cpu;

if (!pcpu_head)
@@ -52,7 +53,8 @@ int init_pcpu_list_head(struct pcpu_list_head **ppcpu_head)
* function is called. However, deletion may be done by a different CPU.
* So we still need to use a lock to protect the content of the list.
*/
-void pcpu_list_add(struct pcpu_list_node *node, struct pcpu_list_head *head)
+void pcpu_list_add(struct pcpu_list_node *node,
+ struct pcpu_list_head __percpu *head)
{
struct pcpu_list_head *myhead;

--
2.8.0

by Christoph Lameter

Subject: Re: [PATCH v7 1/4] lib/percpu-list: Per-cpu list with associated per-cpu locks

On Tue, 12 Apr 2016, Waiman Long wrote:

> List entry insertion is strictly per cpu. List deletion, however, can
> happen in a cpu other than the one that did the insertion. So we still
> need lock to protect the list. Because of that, there may still be
> a small amount of contention when deletion is being done.

Ok then the list is not per cpu anymore. Can we call this something else
please to avoid confusion? Spinlocks in per cpu structures are a bit
confusing otherwise. Seems that there is no requirement that the list can
only be accessed from a single cpu so its not per cpu per se anymore.

Maybe lock-list instead of percpu-list?

2016-04-13 17:47:30

by Waiman Long

Subject: Re: [PATCH v7 1/4] lib/percpu-list: Per-cpu list with associated per-cpu locks

On 04/13/2016 11:03 AM, Christoph Lameter wrote:
> On Tue, 12 Apr 2016, Waiman Long wrote:
>
>> List entry insertion is strictly per cpu. List deletion, however, can
>> happen in a cpu other than the one that did the insertion. So we still
>> need lock to protect the list. Because of that, there may still be
>> a small amount of contention when deletion is being done.
> Ok then the list is not per cpu anymore. Can we call this something else
> please to avoid confusion? Spinlocks in per cpu structures are a bit
> confusing otherwise. Seems that there is no requirement that the list can
> only be accessed from a single cpu so its not per cpu per se anymore.
>
> Maybe lock-list instead of percpu-list?
>

I am fine with a name change. I am not that good at naming stuff. How
about a distributed and locked list, or dlock_list for short?

Cheers,
Longman

2016-04-13 17:53:40

by Waiman Long

Subject: Re: [PATCH v7 1/4] lib/percpu-list: Per-cpu list with associated per-cpu locks

On 04/12/2016 10:09 PM, Boqun Feng wrote:
> Hi Waiman,
>
> On Tue, Apr 12, 2016 at 06:54:43PM -0400, Waiman Long wrote:
> [...]
>> +
>> +/*
>> + * Initialize the per-cpu list head
>> + */
>> +int init_pcpu_list_head(struct pcpu_list_head **ppcpu_head)
>> +{
>> + struct pcpu_list_head *pcpu_head = alloc_percpu(struct pcpu_list_head);
>> + int cpu;
>> +
>> + if (!pcpu_head)
>> + return -ENOMEM;
>> +
>> + for_each_possible_cpu(cpu) {
>> + struct pcpu_list_head *head = per_cpu_ptr(pcpu_head, cpu);
>> +
>> + INIT_LIST_HEAD(&head->list);
>> + head->lock = __SPIN_LOCK_UNLOCKED(&head->lock);
>> + lockdep_set_class(&head->lock,&percpu_list_key);
>> + }
>> +
>> + *ppcpu_head = pcpu_head;
>> + return 0;
>> +}
> The first time I looked at this patch, I had a hard time to figure out
> which "struct pcpu_list_head" pointer is pointing to percpu data(the
> pointer could be the parameter for per/this_cpu_ptr()), and which
> pointer is pointing to actual structure. For example, 'pcpu_head' and
> 'head' above are different types of pointers.
>
> So besides improving my code reading skills, I think the following patch
> helps ;-) Also it can resolve several splats of sparse when running
> 'make C=1 lib/'.
>
> Thoughts?

Yes, I think your patch is helpful. I will include your patch in my
patchset.

Thanks,
Longman

by Christoph Lameter

Subject: Re: [PATCH v7 1/4] lib/percpu-list: Per-cpu list with associated per-cpu locks

On Wed, 13 Apr 2016, Waiman Long wrote:

> I am fine with a name change. I am not that good in naming stuff. How about
> distributed and locked list, or dlock_list in short?

dlock_list sounds better.

2016-04-14 14:10:41

by Jan Kara

Subject: Re: [PATCH v7 1/4] lib/percpu-list: Per-cpu list with associated per-cpu locks

On Tue 12-04-16 18:54:43, Waiman Long wrote:
> Linked list is used everywhere in the Linux kernel. However, if many
> threads are trying to add or delete entries into the same linked list,
> it can create a performance bottleneck.
>
> This patch introduces a new per-cpu list subsystem with associated
> per-cpu locks for protecting each of the lists individually. This
> allows list entries insertion and deletion operations to happen in
> parallel instead of being serialized with a global list and lock.
>
> List entry insertion is strictly per cpu. List deletion, however, can
> happen in a cpu other than the one that did the insertion. So we still
> need lock to protect the list. Because of that, there may still be
> a small amount of contention when deletion is being done.
>
> A new header file include/linux/percpu-list.h will be added with the
> associated pcpu_list_head and pcpu_list_node structures. The following
> functions are provided to manage the per-cpu list:
>
> 1. int init_pcpu_list_head(struct pcpu_list_head **ppcpu_head)
> 2. void pcpu_list_add(struct pcpu_list_node *node,
> struct pcpu_list_head *head)
> 3. void pcpu_list_del(struct pcpu_list_node *node)
>
> Iteration of all the list entries within a group of per-cpu
> lists is done by calling either the pcpu_list_iterate() or
> pcpu_list_iterate_safe() functions in a while loop. They correspond
> to the list_for_each_entry() and list_for_each_entry_safe() macros
> respectively. The iteration state is kept in a pcpu_list_state
> structure that is passed to the iteration functions.
>
> Signed-off-by: Waiman Long <[email protected]>

The patch looks good to me now. So you can add:

Reviewed-by: Jan Kara <[email protected]>

Honza
--
Jan Kara <[email protected]>
SUSE Labs, CR

2016-04-14 14:21:08

by Jan Kara

Subject: Re: [PATCH v7 4/4] vfs: Use per-cpu list for superblock's inode list

On Tue 12-04-16 18:54:46, Waiman Long wrote:
> When many threads are trying to add or delete inode to or from
> a superblock's s_inodes list, spinlock contention on the list can
> become a performance bottleneck.
>
> This patch changes the s_inodes field to become a per-cpu list with
> per-cpu spinlocks. As a result, the following superblock inode list
> (sb->s_inodes) iteration functions in vfs are also being modified:
>
> 1. iterate_bdevs()
> 2. drop_pagecache_sb()
> 3. wait_sb_inodes()
> 4. evict_inodes()
> 5. invalidate_inodes()
> 6. fsnotify_unmount_inodes()
> 7. add_dquot_ref()
> 8. remove_dquot_ref()
>
> With an exit microbenchmark that creates a large number of threads,
> attaches many inodes to them and then exits. The runtimes of that
> microbenchmark with 1000 threads before and after the patch on a
> 4-socket Intel E7-4820 v3 system (40 cores, 80 threads) were as
> follows:
>
> Kernel Elapsed Time System Time
> ------ ------------ -----------
> Vanilla 4.5-rc4 65.29s 82m14s
> Patched 4.5-rc4 22.81s 23m03s
>
> Before the patch, spinlock contention at the inode_sb_list_add()
> function at the startup phase and the inode_sb_list_del() function at
> the exit phase were about 79% and 93% of total CPU time respectively
> (as measured by perf). After the patch, the percpu_list_add()
> function consumed only about 0.04% of CPU time at startup phase. The
> percpu_list_del() function consumed about 0.4% of CPU time at exit
> phase. There were still some spinlock contention, but they happened
> elsewhere.
>
> Signed-off-by: Waiman Long <[email protected]>

The patch looks good to me. You can add:

Reviewed-by: Jan Kara <[email protected]>

Honza
--
Jan Kara <[email protected]>
SUSE Labs, CR

2016-04-14 18:48:24

by Waiman Long

Subject: Re: [PATCH v7 1/4] lib/percpu-list: Per-cpu list with associated per-cpu locks

On 04/14/2016 10:10 AM, Jan Kara wrote:
> On Tue 12-04-16 18:54:43, Waiman Long wrote:
>> Linked list is used everywhere in the Linux kernel. However, if many
>> threads are trying to add or delete entries into the same linked list,
>> it can create a performance bottleneck.
>>
>> This patch introduces a new per-cpu list subsystem with associated
>> per-cpu locks for protecting each of the lists individually. This
>> allows list entries insertion and deletion operations to happen in
>> parallel instead of being serialized with a global list and lock.
>>
>> List entry insertion is strictly per cpu. List deletion, however, can
>> happen in a cpu other than the one that did the insertion. So we still
>> need lock to protect the list. Because of that, there may still be
>> a small amount of contention when deletion is being done.
>>
>> A new header file include/linux/percpu-list.h will be added with the
>> associated pcpu_list_head and pcpu_list_node structures. The following
>> functions are provided to manage the per-cpu list:
>>
>> 1. int init_pcpu_list_head(struct pcpu_list_head **ppcpu_head)
>> 2. void pcpu_list_add(struct pcpu_list_node *node,
>> struct pcpu_list_head *head)
>> 3. void pcpu_list_del(struct pcpu_list_node *node)
>>
>> Iteration of all the list entries within a group of per-cpu
>> lists is done by calling either the pcpu_list_iterate() or
>> pcpu_list_iterate_safe() functions in a while loop. They correspond
>> to the list_for_each_entry() and list_for_each_entry_safe() macros
>> respectively. The iteration state is kept in a pcpu_list_state
>> structure that is passed to the iteration functions.
>>
>> Signed-off-by: Waiman Long<[email protected]>
> The patch looks good to me now. So you can add:
>
> Reviewed-by: Jan Kara<[email protected]>
>
> Honza

Thanks for the review.

Cheers,
Longman

2016-04-14 23:34:22

by Boqun Feng

Subject: Re: [PATCH v7 1/4] lib/percpu-list: Per-cpu list with associated per-cpu locks

On Wed, Apr 13, 2016 at 01:38:33PM -0400, Waiman Long wrote:
> On 04/12/2016 10:09 PM, Boqun Feng wrote:
> > Hi Waiman,
> >
> > On Tue, Apr 12, 2016 at 06:54:43PM -0400, Waiman Long wrote:
> > [...]
> > > +
> > > +/*
> > > + * Initialize the per-cpu list head
> > > + */
> > > +int init_pcpu_list_head(struct pcpu_list_head **ppcpu_head)
> > > +{
> > > + struct pcpu_list_head *pcpu_head = alloc_percpu(struct pcpu_list_head);
> > > + int cpu;
> > > +
> > > + if (!pcpu_head)
> > > + return -ENOMEM;
> > > +
> > > + for_each_possible_cpu(cpu) {
> > > + struct pcpu_list_head *head = per_cpu_ptr(pcpu_head, cpu);
> > > +
> > > + INIT_LIST_HEAD(&head->list);
> > > + head->lock = __SPIN_LOCK_UNLOCKED(&head->lock);
> > > + lockdep_set_class(&head->lock,&percpu_list_key);
> > > + }
> > > +
> > > + *ppcpu_head = pcpu_head;
> > > + return 0;
> > > +}
> > The first time I looked at this patch, I had a hard time to figure out
> > which "struct pcpu_list_head" pointer is pointing to percpu data(the
> > pointer could be the parameter for per/this_cpu_ptr()), and which
> > pointer is pointing to actual structure. For example, 'pcpu_head' and
> > 'head' above are different types of pointers.
> >
> > So besides improving my code reading skills, I think the following patch
> > helps ;-) Also it can resolve several splats of sparse when running
> > 'make C=1 lib/'.
> >
> > Thoughts?
>
> Yes, I think your patch is helpful. I will include your patch in my
> patchset.
>

Given that a renaming will happen in the next version, carrying this as
a standalone patch will be a pain, I think. So feel free to squash this
into patch #1, if that could make your job easier ;-)

Regards,
Boqun

> Thanks,
> Longman
>



2016-04-15 17:14:54

by Waiman Long

Subject: Re: [PATCH v7 1/4] lib/percpu-list: Per-cpu list with associated per-cpu locks

On 04/14/2016 07:33 PM, Boqun Feng wrote:
> On Wed, Apr 13, 2016 at 01:38:33PM -0400, Waiman Long wrote:
>> On 04/12/2016 10:09 PM, Boqun Feng wrote:
>>> Hi Waiman,
>>>
>>> On Tue, Apr 12, 2016 at 06:54:43PM -0400, Waiman Long wrote:
>>> [...]
>>>> +
>>>> +/*
>>>> + * Initialize the per-cpu list head
>>>> + */
>>>> +int init_pcpu_list_head(struct pcpu_list_head **ppcpu_head)
>>>> +{
>>>> + struct pcpu_list_head *pcpu_head = alloc_percpu(struct pcpu_list_head);
>>>> + int cpu;
>>>> +
>>>> + if (!pcpu_head)
>>>> + return -ENOMEM;
>>>> +
>>>> + for_each_possible_cpu(cpu) {
>>>> + struct pcpu_list_head *head = per_cpu_ptr(pcpu_head, cpu);
>>>> +
>>>> + INIT_LIST_HEAD(&head->list);
>>>> + head->lock = __SPIN_LOCK_UNLOCKED(&head->lock);
>>>> + lockdep_set_class(&head->lock,&percpu_list_key);
>>>> + }
>>>> +
>>>> + *ppcpu_head = pcpu_head;
>>>> + return 0;
>>>> +}
>>> The first time I looked at this patch, I had a hard time to figure out
>>> which "struct pcpu_list_head" pointer is pointing to percpu data(the
>>> pointer could be the parameter for per/this_cpu_ptr()), and which
>>> pointer is pointing to actual structure. For example, 'pcpu_head' and
>>> 'head' above are different types of pointers.
>>>
>>> So besides improving my code reading skills, I think the following patch
>>> helps ;-) Also it can resolve several splats of sparse when running
>>> 'make C=1 lib/'.
>>>
>>> Thoughts?
>> Yes, I think your patch is helpful. I will include your patch in my
>> patchset.
>>
> Given that a renaming will happen in the next version, carrying this as
> a standalone patch will be a pain, I think. So feel free to squash this
> into the patch #1, if that could make your job easier ;-)
>
> Regards,
> Boqun
>
>

That is not a problem. I do want to acknowledge your contribution to
this patchset.

Cheers,
Longman