2023-12-06 06:07:22

by Dave Chinner

Subject: [PATCH 04/11] lib/dlock-list: Make sibling CPUs share the same linked list

From: Waiman Long <[email protected]>

The dlock list needs one list for each available CPU. However, sibling
CPUs share the L2 and probably the L1 caches too. As a result, there is
not much to gain in terms of avoiding cacheline contention, while
keeping separate per-sibling lists increases the cacheline footprint of
the L1/L2 caches, since each of those lists may need to be cached.

This patch makes all the sibling CPUs share the same list, thus
reducing the number of lists that need to be maintained in each
dlock list without having any noticeable impact on performance. It
also improves dlock list iteration performance as fewer lists need
to be iterated.
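
For reference, a minimal usage sketch of a dlock list (API names as
introduced earlier in this series; the exact signatures here are
approximate, not copied from the headers):

#include <linux/dlock-list.h>
#include <linux/printk.h>
#include <linux/slab.h>

struct foo {
	int value;
	struct dlock_list_node dnode;	/* links this foo into the dlock list */
};

static struct dlock_list_heads foo_list;

static int __init foo_init(void)
{
	/* One lock + list head per CPU core (after this patch) */
	return alloc_dlock_list_heads(&foo_list);
}

static struct foo *foo_create(int value)
{
	struct foo *f = kzalloc(sizeof(*f), GFP_KERNEL);

	if (!f)
		return NULL;
	f->value = value;
	init_dlock_list_node(&f->dnode);
	/* Takes only the lock of this CPU core's sublist */
	dlock_lists_add(&f->dnode, &foo_list);
	return f;
}

static void foo_walk_all(void)
{
	struct foo *f;
	DEFINE_DLOCK_LIST_ITER(iter, &foo_list);

	/* Walks every per-core sublist, locking one sublist at a time */
	dlist_for_each_entry(f, &iter, dnode)
		pr_info("foo: %d\n", f->value);
}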

Signed-off-by: Waiman Long <[email protected]>
Reviewed-by: Jan Kara <[email protected]>
---
lib/dlock-list.c | 74 ++++++++++++++++++++++++++++++++++++++----------
1 file changed, 59 insertions(+), 15 deletions(-)

diff --git a/lib/dlock-list.c b/lib/dlock-list.c
index f64ea4cc5e79..e2860944ec9f 100644
--- a/lib/dlock-list.c
+++ b/lib/dlock-list.c
@@ -25,31 +25,65 @@
  * The distributed and locked list is a distributed set of lists each of
  * which is protected by its own spinlock, but acts like a single
  * consolidated list to the callers. For scaling purpose, the number of
- * lists used is equal to the number of possible CPUs in the system to
- * minimize contention.
+ * lists used is equal to the number of possible cores in the system to
+ * minimize contention. All threads of the same CPU core will share the
+ * same list.
  *
- * However, it is possible that individual CPU numbers may be equal to
- * or greater than the number of possible CPUs when there are holes in
- * the CPU number list. As a result, we need to map the CPU number to a
- * list index.
+ * We need to map each CPU number to a list index.
  */
 static DEFINE_PER_CPU_READ_MOSTLY(int, cpu2idx);
+static int nr_dlock_lists __read_mostly;
 
 /*
- * Initialize cpu2idx mapping table
+ * Initialize cpu2idx mapping table & nr_dlock_lists.
  *
  * It is possible that a dlock-list can be allocated before the cpu2idx is
  * initialized. In this case, all the cpus are mapped to the first entry
  * before initialization.
  *
+ * All the sibling CPUs of a sibling group will map to the same dlock list so
+ * as to reduce the number of dlock lists to be maintained while minimizing
+ * cacheline contention.
+ *
+ * As the sibling masks are set up in the core initcall phase, this function
+ * has to be done in the postcore phase to get the right data.
  */
 static int __init cpu2idx_init(void)
 {
 	int idx, cpu;
+	struct cpumask *sibling_mask;
+	static struct cpumask mask __initdata;
 
+	cpumask_clear(&mask);
 	idx = 0;
-	for_each_possible_cpu(cpu)
-		per_cpu(cpu2idx, cpu) = idx++;
+	for_each_possible_cpu(cpu) {
+		int scpu;
+
+		if (cpumask_test_cpu(cpu, &mask))
+			continue;
+		per_cpu(cpu2idx, cpu) = idx;
+		cpumask_set_cpu(cpu, &mask);
+
+		sibling_mask = topology_sibling_cpumask(cpu);
+		if (sibling_mask) {
+			for_each_cpu(scpu, sibling_mask) {
+				per_cpu(cpu2idx, scpu) = idx;
+				cpumask_set_cpu(scpu, &mask);
+			}
+		}
+		idx++;
+	}
+
+	/*
+	 * nr_dlock_lists can only be set after cpu2idx is properly
+	 * initialized.
+	 */
+	smp_mb();
+	nr_dlock_lists = idx;
+	WARN_ON(nr_dlock_lists > nr_cpu_ids);
+
+	pr_info("dlock-list: %d head entries per dlock list.\n",
+		nr_dlock_lists);
 	return 0;
 }
 postcore_initcall(cpu2idx_init);
@@ -67,19 +101,23 @@ postcore_initcall(cpu2idx_init);
  *
  * Dynamically allocated locks need to have their own special lock class
  * to avoid lockdep warning.
+ *
+ * Since nr_dlock_lists will always be <= nr_cpu_ids, having more lists
+ * than necessary allocated is not a problem other than some wasted memory.
+ * The extra lists will not be ever used as all the cpu2idx entries will be
+ * 0 before initialization.
  */
 int __alloc_dlock_list_heads(struct dlock_list_heads *dlist,
			     struct lock_class_key *key)
 {
-	int idx;
+	int idx, cnt = nr_dlock_lists ? nr_dlock_lists : nr_cpu_ids;
 
-	dlist->heads = kcalloc(nr_cpu_ids, sizeof(struct dlock_list_head),
-			       GFP_KERNEL);
+	dlist->heads = kcalloc(cnt, sizeof(struct dlock_list_head), GFP_KERNEL);
 
 	if (!dlist->heads)
 		return -ENOMEM;
 
-	for (idx = 0; idx < nr_cpu_ids; idx++) {
+	for (idx = 0; idx < cnt; idx++) {
 		struct dlock_list_head *head = &dlist->heads[idx];
 
 		INIT_LIST_HEAD(&head->list);
@@ -117,7 +155,10 @@ bool dlock_lists_empty(struct dlock_list_heads *dlist)
 {
 	int idx;
 
-	for (idx = 0; idx < nr_cpu_ids; idx++)
+	/* Shouldn't be called before nr_dlock_lists is initialized */
+	WARN_ON_ONCE(!nr_dlock_lists);
+
+	for (idx = 0; idx < nr_dlock_lists; idx++)
 		if (!list_empty(&dlist->heads[idx].list))
 			return false;
 	return true;
@@ -199,6 +240,9 @@ struct dlock_list_node *__dlock_list_next_list(struct dlock_list_iter *iter)
 	struct dlock_list_node *next;
 	struct dlock_list_head *head;
 
+	/* Shouldn't be called before nr_dlock_lists is initialized */
+	WARN_ON_ONCE(!nr_dlock_lists);
+
 restart:
 	if (iter->entry) {
 		spin_unlock(&iter->entry->lock);
@@ -209,7 +253,7 @@ struct dlock_list_node *__dlock_list_next_list(struct dlock_list_iter *iter)
 	/*
 	 * Try next list
 	 */
-	if (++iter->index >= nr_cpu_ids)
+	if (++iter->index >= nr_dlock_lists)
 		return NULL;	/* All the entries iterated */
 
 	if (list_empty(&iter->head[iter->index].list))
--
2.42.0


2023-12-07 04:32:12

by Kent Overstreet

Subject: Re: [PATCH 04/11] lib/dlock-list: Make sibling CPUs share the same linked list

On Wed, Dec 06, 2023 at 05:05:33PM +1100, Dave Chinner wrote:
> From: Waiman Long <[email protected]>
>
> The dlock list needs one list for each available CPU. However, sibling
> CPUs share the L2 and probably the L1 caches too. As a result, there is
> not much to gain in terms of avoiding cacheline contention, while
> keeping separate per-sibling lists increases the cacheline footprint of
> the L1/L2 caches, since each of those lists may need to be cached.
>
> This patch makes all the sibling CPUs share the same list, thus
> reducing the number of lists that need to be maintained in each
> dlock list without having any noticeable impact on performance. It
> also improves dlock list iteration performance as fewer lists need
> to be iterated.
>
> Signed-off-by: Waiman Long <[email protected]>
> Reviewed-by: Jan Kara <[email protected]>

We badly need this done in a more generic way. Besides shared caches,
I've done a bunch of percpu algorithms where "amount of x stranded on
percpu lists" is a major consideration and this would be preferable over
percpu lists (including in fs/aio.c).

2023-12-07 05:43:21

by Kent Overstreet

Subject: Re: [PATCH 04/11] lib/dlock-list: Make sibling CPUs share the same linked list

On Wed, Dec 06, 2023 at 05:05:33PM +1100, Dave Chinner wrote:
> From: Waiman Long <[email protected]>
>
> The dlock list needs one list for each available CPU. However, sibling
> CPUs share the L2 and probably the L1 caches too. As a result, there is
> not much to gain in terms of avoiding cacheline contention, while
> keeping separate per-sibling lists increases the cacheline footprint of
> the L1/L2 caches, since each of those lists may need to be cached.
>
> This patch makes all the sibling CPUs share the same list, thus
> reducing the number of lists that need to be maintained in each
> dlock list without having any noticeable impact on performance. It
> also improves dlock list iteration performance as fewer lists need
> to be iterated.

Seems Waiman was missed on the CC

it looks like there's some duplication of this with list_lru
functionality - similar list-sharded-by-node idea.

list_lru does the sharding by page_to_nid() of the item, which saves a
pointer and allows just using a list_head in the item. OTOH, it's less
granular than what dlock-list is doing?
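
To make the comparison concrete, the two insert paths look roughly like
this (written from memory and heavily simplified - no memcg awareness,
no error handling - so don't treat it as the actual code):

/* list_lru: shard by the NUMA node the object's memory sits on */
bool list_lru_add(struct list_lru *lru, struct list_head *item)
{
	int nid = page_to_nid(virt_to_page(item));
	struct list_lru_node *nlru = &lru->node[nid];

	spin_lock(&nlru->lock);
	if (list_empty(item)) {
		list_add_tail(item, &nlru->lru.list);
		nlru->lru.nr_items++;
		spin_unlock(&nlru->lock);
		return true;
	}
	spin_unlock(&nlru->lock);
	return false;
}

/* dlock-list: shard by the core of the CPU doing the insert, and
 * remember which sublist the node went on */
void dlock_lists_add(struct dlock_list_node *node,
		     struct dlock_list_heads *dlist)
{
	struct dlock_list_head *head = &dlist->heads[this_cpu_read(cpu2idx)];

	spin_lock(&head->lock);
	WRITE_ONCE(node->head, head);
	list_add(&node->list, &head->list);
	spin_unlock(&head->lock);
}

The first needs nothing but a list_head in the object; the second costs
an extra pointer per object, but deletion never has to recompute which
sublist the object lives on.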

I think some attempt ought to be made to factor out the common ideas
here; perhaps reworking list_lru to use this thing, and I hope someone
has looked at the page_nid idea vs. dlock_list using the current core.

But it's nice and small, and I'd like to use it elsewhere.

Reviewed-by: Kent Overstreet <[email protected]>

2023-12-07 06:25:57

by Dave Chinner

Subject: Re: [PATCH 04/11] lib/dlock-list: Make sibling CPUs share the same linked list

On Thu, Dec 07, 2023 at 12:42:59AM -0500, Kent Overstreet wrote:
> On Wed, Dec 06, 2023 at 05:05:33PM +1100, Dave Chinner wrote:
> > From: Waiman Long <[email protected]>
> >
> > The dlock list needs one list for each available CPU. However, sibling
> > CPUs share the L2 and probably the L1 caches too. As a result, there is
> > not much to gain in terms of avoiding cacheline contention, while
> > keeping separate per-sibling lists increases the cacheline footprint of
> > the L1/L2 caches, since each of those lists may need to be cached.
> >
> > This patch makes all the sibling CPUs share the same list, thus
> > reducing the number of lists that need to be maintained in each
> > dlock list without having any noticeable impact on performance. It
> > also improves dlock list iteration performance as fewer lists need
> > to be iterated.
>
> Seems Waiman was missed on the CC

Oops, I knew I missed someone important....

> it looks like there's some duplication of this with list_lru
> functionality - similar list-sharded-by-node idea.

For completely different reasons. The list_lru is aligned to the mm
zone architecture which only partitions memory management accounting
and scanning actions down into NUMA nodes. It's also a per-node
ordered list (LRU), and it has intricate locking semantics that
expose internal list locks to external isolation functions that can
be called whilst a lock protected traversal is in progress.

Further, we have to consider that list-lru is tightly tied to
memcgs. For a single NUMA- and memcg-aware list-lru, there are
actually nr_memcgs * nr_nodes LRUs per list. The memory footprint of
a superblock list_lru gets quite gigantic when we start talking
about machines with hundreds of nodes running tens of thousands of
containers, each with tens of superblocks.

That's the biggest problem with using a more memory-expensive
structure for the list_lru - we're talking gigabytes of memory just
for the superblock shrinker tracking structure overhead on large
machines. This was one of the reasons why we haven't tried to make
list_lrus any more fine-grained than they absolutely need to be to
provide acceptable scalability.
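
Just to put an illustrative number on that (the figures are made up,
but in the right ballpark): assume ~32 bytes for each per-node,
per-memcg list head, 100 nodes and 10,000 memcgs. That's
32 * 100 * 10000, roughly 32MB per list_lru, and every superblock
carries two of them (dentries and inodes), so ~64MB per superblock.
A hundred superblocks is already ~6GB of tracking structures before
a single object has been put on any list - and going to per-CPU
granularity on a 512-CPU machine would multiply that by another ~5x.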

> list_lru does the sharding by page_to_nid() of the item, which
> saves a pointer and allows just using a list_head in the item.
> OTOH, it's less granular than what dlock-list is doing?

Sure, but there's a lot more to list_lrus than it being a "per-node
list". OTOH, dlock_list is really nothing more than a "per-cpu
list"....

> I think some attempt ought to be made to factor out the common
> ideas hear; perhaps reworking list_lru to use this thing, and I
> hope someone has looked at the page_nid idea vs. dlock_list using
> the current core.

I certainly have, and I haven't been able to justify the additional
memory footprint of a dlock_list over the existing per-node lists.
That may change given that XFS appears to be on the threshold of
per-node list-lru lock breakdown at 64 threads, but there's a lot
more to consider from a system perspective here than just
inode/dentry cache scalability....

-Dave.
--
Dave Chinner
[email protected]

2023-12-07 06:51:00

by Al Viro

Subject: Re: [PATCH 04/11] lib/dlock-list: Make sibling CPUs share the same linked list

On Wed, Dec 06, 2023 at 05:05:33PM +1100, Dave Chinner wrote:
> From: Waiman Long <[email protected]>
>
> The dlock list needs one list for each available CPU. However, sibling
> CPUs share the L2 and probably the L1 caches too. As a result, there is
> not much to gain in terms of avoiding cacheline contention, while
> keeping separate per-sibling lists increases the cacheline footprint of
> the L1/L2 caches, since each of those lists may need to be cached.
>
> This patch makes all the sibling CPUs share the same list, thus
> reducing the number of lists that need to be maintained in each
> dlock list without having any noticeable impact on performance. It
> also improves dlock list iteration performance as fewer lists need
> to be iterated.

Probably a dumb question, but... "available" != "possible"; the code
actually goes for the latter, which avoids nasty questions about
CPU hotplug interactions. Is the sibling relation on CPUs unchanging
on CPU hotplug?