2015-11-12 17:11:26

by Nathan Zimmer

Subject: [RFC] mempolicy: convert the shared_policy lock to a rwlock

When running the SPECint_rate gcc benchmark on some very large boxes it
was noticed that the system was spending lots of time in
mpol_shared_policy_lookup. The gamess benchmark can also show it and is
what I mostly used to chase down the issue, since I found its setup easier.

To be clear, the binaries were on tmpfs because of disk I/O requirements.
We then used text replication to avoid icache misses and to keep all the
copies from banging on the memory where the instruction code resides.
This results in us hitting a bottleneck in mpol_shared_policy_lookup,
since lookup is serialised by the shared_policy lock.

I have only reproduced this on very large (3k+ cores) boxes. The problem
starts showing up at just a few hundred ranks and gets worse until it
threatens to livelock once it gets large enough. For example, on the
gamess benchmark at 128 ranks this area consumes only ~1% of time, at
512 ranks it consumes nearly 13%, and at 2k ranks it is over 90%.

To alleviate the contention in this area I converted the spinlock to a
rwlock. This allows the large number of lookups to happen simultaneously.
The results were quite good, reducing consumption at max ranks to around
2%.
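
The locking change itself is mechanical. In sketch form the pattern is
the following (illustrative only, with a hypothetical example_lock
standing in for the real sp->lock; the actual change is in the diff
below):

	static DEFINE_RWLOCK(example_lock);

	/* lookup side: any number of CPUs can hold the lock at once */
	read_lock(&example_lock);
	/* ... walk the rb tree and take a reference on the policy found ... */
	read_unlock(&example_lock);

	/* update side: exclusive against both readers and other writers */
	write_lock(&example_lock);
	/* ... insert, replace, or delete sp_node entries ... */
	write_unlock(&example_lock);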

Cc: Andrew Morton <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: "Aneesh Kumar K.V" <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Nathan Zimmer <[email protected]>
---
 include/linux/mempolicy.h |  2 +-
 mm/mempolicy.c            | 16 ++++++++--------
 2 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index 3d385c8..2696c1f 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -122,7 +122,7 @@ struct sp_node {
 
 struct shared_policy {
 	struct rb_root root;
-	spinlock_t lock;
+	rwlock_t lock;
 };
 
 int vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst);
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 87a1779..ebf82a3 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2211,13 +2211,13 @@ mpol_shared_policy_lookup(struct shared_policy *sp, unsigned long idx)
 
 	if (!sp->root.rb_node)
 		return NULL;
-	spin_lock(&sp->lock);
+	read_lock(&sp->lock);
 	sn = sp_lookup(sp, idx, idx+1);
 	if (sn) {
 		mpol_get(sn->policy);
 		pol = sn->policy;
 	}
-	spin_unlock(&sp->lock);
+	read_unlock(&sp->lock);
 	return pol;
 }
 
@@ -2360,7 +2360,7 @@ static int shared_policy_replace(struct shared_policy *sp, unsigned long start,
 	int ret = 0;
 
 restart:
-	spin_lock(&sp->lock);
+	write_lock(&sp->lock);
 	n = sp_lookup(sp, start, end);
 	/* Take care of old policies in the same range. */
 	while (n && n->start < end) {
@@ -2393,7 +2393,7 @@ restart:
 	}
 	if (new)
 		sp_insert(sp, new);
-	spin_unlock(&sp->lock);
+	write_unlock(&sp->lock);
 	ret = 0;
 
 err_out:
@@ -2405,7 +2405,7 @@ err_out:
 	return ret;
 
 alloc_new:
-	spin_unlock(&sp->lock);
+	write_unlock(&sp->lock);
 	ret = -ENOMEM;
 	n_new = kmem_cache_alloc(sn_cache, GFP_KERNEL);
 	if (!n_new)
@@ -2431,7 +2431,7 @@ void mpol_shared_policy_init(struct shared_policy *sp, struct mempolicy *mpol)
 	int ret;
 
 	sp->root = RB_ROOT;		/* empty tree == default mempolicy */
-	spin_lock_init(&sp->lock);
+	rwlock_init(&sp->lock);
 
 	if (mpol) {
 		struct vm_area_struct pvma;
@@ -2497,14 +2497,14 @@ void mpol_free_shared_policy(struct shared_policy *p)
 
 	if (!p->root.rb_node)
 		return;
-	spin_lock(&p->lock);
+	write_lock(&p->lock);
 	next = rb_first(&p->root);
 	while (next) {
 		n = rb_entry(next, struct sp_node, nd);
 		next = rb_next(&n->nd);
 		sp_delete(p, n);
 	}
-	spin_unlock(&p->lock);
+	write_unlock(&p->lock);
 }
 
 #ifdef CONFIG_NUMA_BALANCING
--
1.8.2.1


2015-11-12 21:10:31

by David Rientjes

Subject: Re: [RFC] mempolicy: convert the shared_policy lock to a rwlock

On Thu, 12 Nov 2015, Nathan Zimmer wrote:

> When running the SPECint_rate gcc benchmark on some very large boxes it
> was noticed that the system was spending lots of time in
> mpol_shared_policy_lookup. The gamess benchmark can also show it and is
> what I mostly used to chase down the issue, since I found its setup
> easier.
>
> To be clear, the binaries were on tmpfs because of disk I/O requirements.
> We then used text replication to avoid icache misses and to keep all the
> copies from banging on the memory where the instruction code resides.
> This results in us hitting a bottleneck in mpol_shared_policy_lookup,
> since lookup is serialised by the shared_policy lock.
>
> I have only reproduced this on very large (3k+ cores) boxes. The problem
> starts showing up at just a few hundred ranks and gets worse until it
> threatens to livelock once it gets large enough. For example, on the
> gamess benchmark at 128 ranks this area consumes only ~1% of time, at
> 512 ranks it consumes nearly 13%, and at 2k ranks it is over 90%.
>
> To alleviate the contention in this area I converted the spinlock to a
> rwlock. This allows the large number of lookups to happen simultaneously.
> The results were quite good, reducing consumption at max ranks to around
> 2%.
>

There are a couple of places in the sp_lookup() comment that would need to
be fixed to note that this is no longer a spinlock and that the caller
must hold the read lock. The comment for sp_insert() would have to be
fixed to specify that the caller must hold the write lock. When that's
fixed, feel free to add

Acked-by: David Rientjes <[email protected]>

2015-11-17 16:18:16

by Nathan Zimmer

Subject: [PATCH] mempolicy: convert the shared_policy lock to a rwlock

When running the SPECint_rate gcc benchmark on some very large boxes it
was noticed that the system was spending lots of time in
mpol_shared_policy_lookup. The gamess benchmark can also show it and is
what I mostly used to chase down the issue, since I found its setup easier.

To be clear, the binaries were on tmpfs because of disk I/O requirements.
We then used text replication to avoid icache misses and to keep all the
copies from banging on the memory where the instruction code resides.
This results in us hitting a bottleneck in mpol_shared_policy_lookup,
since lookup is serialised by the shared_policy lock.

I have only reproduced this on very large (3k+ cores) boxes. The problem
starts showing up at just a few hundred ranks and gets worse until it
threatens to livelock once it gets large enough. For example, on the
gamess benchmark at 128 ranks this area consumes only ~1% of time, at
512 ranks it consumes nearly 13%, and at 2k ranks it is over 90%.

To alleviate the contention in this area I converted the spinlock to a
rwlock. This allows the large number of lookups to happen simultaneously.
The results were quite good, reducing consumption at max ranks to around
2%.

Acked-by: David Rientjes <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Nadia Yvette Chambers <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: "Aneesh Kumar K.V" <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Nathan Zimmer <[email protected]>
---
 fs/hugetlbfs/inode.c      |  2 +-
 include/linux/mempolicy.h |  2 +-
 mm/mempolicy.c            | 20 ++++++++++----------
 3 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 316adb9..ab7b155 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -739,7 +739,7 @@ static struct inode *hugetlbfs_get_inode(struct super_block *sb,
 		/*
 		 * The policy is initialized here even if we are creating a
 		 * private inode because initialization simply creates an
-		 * an empty rb tree and calls spin_lock_init(), later when we
+		 * an empty rb tree and calls rwlock_init(), later when we
 		 * call mpol_free_shared_policy() it will just return because
 		 * the rb tree will still be empty.
 		 */
diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index 3d385c8..2696c1f 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -122,7 +122,7 @@ struct sp_node {
 
 struct shared_policy {
 	struct rb_root root;
-	spinlock_t lock;
+	rwlock_t lock;
 };
 
 int vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst);
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 87a1779..197d917 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2142,7 +2142,7 @@ bool __mpol_equal(struct mempolicy *a, struct mempolicy *b)
  *
  * Remember policies even when nobody has shared memory mapped.
  * The policies are kept in Red-Black tree linked from the inode.
- * They are protected by the sp->lock spinlock, which should be held
+ * They are protected by the sp->lock rwlock, which should be held
  * for any accesses to the tree.
  */
 
@@ -2179,7 +2179,7 @@ sp_lookup(struct shared_policy *sp, unsigned long start, unsigned long end)
 }
 
 /* Insert a new shared policy into the list. */
-/* Caller holds sp->lock */
+/* Caller holds a write lock on sp->lock */
 static void sp_insert(struct shared_policy *sp, struct sp_node *new)
 {
 	struct rb_node **p = &sp->root.rb_node;
@@ -2211,13 +2211,13 @@ mpol_shared_policy_lookup(struct shared_policy *sp, unsigned long idx)
 
 	if (!sp->root.rb_node)
 		return NULL;
-	spin_lock(&sp->lock);
+	read_lock(&sp->lock);
 	sn = sp_lookup(sp, idx, idx+1);
 	if (sn) {
 		mpol_get(sn->policy);
 		pol = sn->policy;
 	}
-	spin_unlock(&sp->lock);
+	read_unlock(&sp->lock);
 	return pol;
 }
 
@@ -2360,7 +2360,7 @@ static int shared_policy_replace(struct shared_policy *sp, unsigned long start,
 	int ret = 0;
 
 restart:
-	spin_lock(&sp->lock);
+	write_lock(&sp->lock);
 	n = sp_lookup(sp, start, end);
 	/* Take care of old policies in the same range. */
 	while (n && n->start < end) {
@@ -2393,7 +2393,7 @@ restart:
 	}
 	if (new)
 		sp_insert(sp, new);
-	spin_unlock(&sp->lock);
+	write_unlock(&sp->lock);
 	ret = 0;
 
 err_out:
@@ -2405,7 +2405,7 @@ err_out:
 	return ret;
 
 alloc_new:
-	spin_unlock(&sp->lock);
+	write_unlock(&sp->lock);
 	ret = -ENOMEM;
 	n_new = kmem_cache_alloc(sn_cache, GFP_KERNEL);
 	if (!n_new)
@@ -2431,7 +2431,7 @@ void mpol_shared_policy_init(struct shared_policy *sp, struct mempolicy *mpol)
 	int ret;
 
 	sp->root = RB_ROOT;		/* empty tree == default mempolicy */
-	spin_lock_init(&sp->lock);
+	rwlock_init(&sp->lock);
 
 	if (mpol) {
 		struct vm_area_struct pvma;
@@ -2497,14 +2497,14 @@ void mpol_free_shared_policy(struct shared_policy *p)
 
 	if (!p->root.rb_node)
 		return;
-	spin_lock(&p->lock);
+	write_lock(&p->lock);
 	next = rb_first(&p->root);
 	while (next) {
 		n = rb_entry(next, struct sp_node, nd);
 		next = rb_next(&n->nd);
 		sp_delete(p, n);
 	}
-	spin_unlock(&p->lock);
+	write_unlock(&p->lock);
 }
 
 #ifdef CONFIG_NUMA_BALANCING
--
1.8.2.1

2015-11-18 13:50:10

by Vlastimil Babka

Subject: Re: [PATCH] mempolicy: convert the shared_policy lock to a rwlock

On 11/17/2015 05:17 PM, Nathan Zimmer wrote:
> When running the SPECint_rate gcc benchmark on some very large boxes it
> was noticed that the system was spending lots of time in
> mpol_shared_policy_lookup. The gamess benchmark can also show it and is
> what I mostly used to chase down the issue, since I found its setup
> easier.
>
> To be clear, the binaries were on tmpfs because of disk I/O requirements.
> We then used text replication to avoid icache misses and to keep all the
> copies from banging on the memory where the instruction code resides.
> This results in us hitting a bottleneck in mpol_shared_policy_lookup,
> since lookup is serialised by the shared_policy lock.
>
> I have only reproduced this on very large (3k+ cores) boxes. The problem
> starts showing up at just a few hundred ranks and gets worse until it
> threatens to livelock once it gets large enough. For example, on the
> gamess benchmark at 128 ranks this area consumes only ~1% of time, at
> 512 ranks it consumes nearly 13%, and at 2k ranks it is over 90%.
>
> To alleviate the contention in this area I converted the spinlock to a
> rwlock. This allows the large number of lookups to happen simultaneously.
> The results were quite good, reducing consumption at max ranks to around
> 2%.

At first glance it seems that RCU would be a good fit here and would
achieve even better lookup scalability. Have you considered it?
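
For reference, a minimal sketch of what an RCU-protected lookup could
look like (hypothetical: it assumes the rb tree walk is made RCU-safe
and that the free side is deferred with synchronize_rcu() or kfree_rcu(),
which is the hard part here):

	struct mempolicy *pol = NULL;
	struct sp_node *sn;

	rcu_read_lock();
	sn = sp_lookup(sp, idx, idx + 1);	/* would need an RCU-safe walk */
	if (sn && atomic_inc_not_zero(&sn->policy->refcnt))
		pol = sn->policy;	/* the reference outlives the read side */
	rcu_read_unlock();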

2015-11-19 10:50:48

by Vlastimil Babka

Subject: Re: [PATCH] mempolicy: convert the shared_policy lock to a rwlock

On 11/18/2015 09:05 PM, Nathan Zimmer wrote:
>
>
> On 11/18/2015 07:50 AM, Vlastimil Babka wrote:
>> At first glance it seems that RCU would be a good fit here and achieve even
>> better lookup scalability, have you considered it?
>>
>
> Originally that was my plan, but when I saw how good the results were
> with the rwlock, I chickened out and took the route less prone to
> mistakes.
>
> I should also note that the 2% of time left in the system is not from
> this lookup but from another area.

Ah, I see, thanks!
Vlastimil

> Nate
>

2015-12-21 13:15:14

by Vlastimil Babka

Subject: Re: [PATCH] mempolicy: convert the shared_policy lock to a rwlock

On 11/17/2015 05:17 PM, Nathan Zimmer wrote:
> When running the SPECint_rate gcc benchmark on some very large boxes it
> was noticed that the system was spending lots of time in
> mpol_shared_policy_lookup. The gamess benchmark can also show it and is
> what I mostly used to chase down the issue, since I found its setup
> easier.
>
> To be clear, the binaries were on tmpfs because of disk I/O requirements.
> We then used text replication to avoid icache misses and to keep all the
> copies from banging on the memory where the instruction code resides.
> This results in us hitting a bottleneck in mpol_shared_policy_lookup,
> since lookup is serialised by the shared_policy lock.
>
> I have only reproduced this on very large (3k+ cores) boxes. The problem
> starts showing up at just a few hundred ranks and gets worse until it
> threatens to livelock once it gets large enough. For example, on the
> gamess benchmark at 128 ranks this area consumes only ~1% of time, at
> 512 ranks it consumes nearly 13%, and at 2k ranks it is over 90%.
>
> To alleviate the contention in this area I converted the spinlock to a
> rwlock. This allows the large number of lookups to happen simultaneously.
> The results were quite good, reducing consumption at max ranks to around
> 2%.
>
> Acked-by: David Rientjes <[email protected]>

Acked-by: Vlastimil Babka <[email protected]>