2017-09-07 10:58:57

by Cihangir Akturk

Subject: [PATCH] staging: lustre: avoid going through unlock/lock overhead

Unlocking a spinlock and then immediately re-locking it, without doing
anything useful in between, buys us nothing except wasted CPU cycles.
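
For illustration only, a stand-alone user-space sketch of the shape of the
change (the names are made up and a pthread spinlock stands in for the
kernel spin_lock; this is not the o2iblnd code):

#include <pthread.h>
#include <stdbool.h>

static pthread_spinlock_t lock;
static int attempts;

static bool work_succeeded(void)
{
	/* pretend the operation fails a couple of times before it works */
	return ++attempts % 3 == 0;
}

/* Old shape: the retry path drops the lock and immediately re-takes it. */
static void retry_old(void)
{
again:
	pthread_spin_lock(&lock);
	if (!work_succeeded()) {
		pthread_spin_unlock(&lock);
		goto again;		/* the lock is re-acquired right away */
	}
	pthread_spin_unlock(&lock);
}

/* New shape: the retry jumps past the lock acquisition and keeps it held. */
static void retry_new(void)
{
	pthread_spin_lock(&lock);
again_locked:
	if (!work_succeeded())
		goto again_locked;	/* the lock is still held */
	pthread_spin_unlock(&lock);
}

int main(void)
{
	pthread_spin_init(&lock, PTHREAD_PROCESS_PRIVATE);
	retry_old();
	retry_new();
	pthread_spin_destroy(&lock);
	return 0;
}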

Also, the code size gets smaller.

Before:

text data bss dec hex filename
70415 2356 4108 76879 12c4f drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.o

After:

text data bss dec hex filename
70095 2356 4108 76559 12b0f drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.o

Signed-off-by: Cihangir Akturk <[email protected]>
---
drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c | 18 ++++++++----------
1 file changed, 8 insertions(+), 10 deletions(-)

diff --git a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c
index 64763aa..5d9cd33 100644
--- a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c
+++ b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c
@@ -1624,8 +1624,9 @@ int kiblnd_fmr_pool_map(struct kib_fmr_poolset *fps, struct kib_tx *tx,
__u64 version;
int rc;

- again:
+again:
spin_lock(&fps->fps_lock);
+again_locked:
version = fps->fps_version;
list_for_each_entry(fpo, &fps->fps_pool_list, fpo_list) {
fpo->fpo_deadline = cfs_time_shift(IBLND_POOL_DEADLINE);
@@ -1722,10 +1723,8 @@ int kiblnd_fmr_pool_map(struct kib_fmr_poolset *fps, struct kib_tx *tx,
}

/* EAGAIN and ... */
- if (version != fps->fps_version) {
- spin_unlock(&fps->fps_lock);
- goto again;
- }
+ if (version != fps->fps_version)
+ goto again_locked;
}

if (fps->fps_increasing) {
@@ -1754,9 +1753,8 @@ int kiblnd_fmr_pool_map(struct kib_fmr_poolset *fps, struct kib_tx *tx,
} else {
fps->fps_next_retry = cfs_time_shift(IBLND_POOL_RETRY);
}
- spin_unlock(&fps->fps_lock);

- goto again;
+ goto again_locked;
}

static void kiblnd_fini_pool(struct kib_pool *pool)
@@ -1901,8 +1899,9 @@ struct list_head *kiblnd_pool_alloc_node(struct kib_poolset *ps)
unsigned int trips = 0;
int rc;

- again:
+again:
spin_lock(&ps->ps_lock);
+again_locked:
list_for_each_entry(pool, &ps->ps_pool_list, po_list) {
if (list_empty(&pool->po_free_list))
continue;
@@ -1960,9 +1959,8 @@ struct list_head *kiblnd_pool_alloc_node(struct kib_poolset *ps)
CERROR("Can't allocate new %s pool because out of memory\n",
ps->ps_name);
}
- spin_unlock(&ps->ps_lock);

- goto again;
+ goto again_locked;
}

static void kiblnd_destroy_tx_pool(struct kib_pool *pool)
--
2.7.4


2017-09-07 12:33:53

by Greg Kroah-Hartman

Subject: Re: [PATCH] staging: lustre: avoid going through unlock/lock overhead

On Thu, Sep 07, 2017 at 01:57:42PM +0300, Cihangir Akturk wrote:
> Unlocking a spinlock and then immediately re-locking it, without doing
> anything useful in between, buys us nothing except wasted CPU cycles.

Not always; it can be a "gate" for other users of the lock.

Are you sure that is not what is going on here? Did you test this out
on a lustre system? The locks here are anything but trivial...
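
To make that concrete, here is a minimal user-space sketch of the "gate"
idea (pthreads, made-up names, not the o2iblnd code): a retry loop whose
exit condition can only be set by another thread, so the apparently
useless unlock/lock pair is the only window in which that thread can take
the lock.

#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int pool_ready;		/* set by the helper thread, under lock */

static void *helper(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock);	/* only succeeds while main has dropped the lock */
	pool_ready = 1;
	pthread_mutex_unlock(&lock);
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_mutex_lock(&lock);
	pthread_create(&t, NULL, helper, NULL);

	while (!pool_ready) {
		/*
		 * Drop the lock and re-take it.  Nothing useful happens in
		 * between, but this is the window in which helper() can
		 * acquire the lock and set pool_ready.  (The sched_yield()
		 * is only here so the sketch finishes promptly with a
		 * sleeping mutex; a kernel spinlock waiter is already
		 * spinning on another CPU and can grab the lock the moment
		 * it is released.)
		 */
		pthread_mutex_unlock(&lock);
		sched_yield();
		pthread_mutex_lock(&lock);
	}
	pthread_mutex_unlock(&lock);

	pthread_join(t, NULL);
	printf("helper got through the gate\n");
	return 0;
}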

>
> Also, the code size gets smaller.
>
> Before:
>
> text data bss dec hex filename
> 70415 2356 4108 76879 12c4f drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.o
>
> After:
>
> text data bss dec hex filename
> 70095 2356 4108 76559 12b0f drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.o
>
> Signed-off-by: Cihangir Akturk <[email protected]>
> ---
> drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c | 18 ++++++++----------
> 1 file changed, 8 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c
> index 64763aa..5d9cd33 100644
> --- a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c
> +++ b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c
> @@ -1624,8 +1624,9 @@ int kiblnd_fmr_pool_map(struct kib_fmr_poolset *fps, struct kib_tx *tx,
> __u64 version;
> int rc;
>
> - again:
> +again:
> spin_lock(&fps->fps_lock);
> +again_locked:
> version = fps->fps_version;
> list_for_each_entry(fpo, &fps->fps_pool_list, fpo_list) {
> fpo->fpo_deadline = cfs_time_shift(IBLND_POOL_DEADLINE);
> @@ -1722,10 +1723,8 @@ int kiblnd_fmr_pool_map(struct kib_fmr_poolset *fps, struct kib_tx *tx,
> }
>
> /* EAGAIN and ... */
> - if (version != fps->fps_version) {
> - spin_unlock(&fps->fps_lock);
> - goto again;
> - }
> + if (version != fps->fps_version)
> + goto again_locked;
> }
>
> if (fps->fps_increasing) {
> @@ -1754,9 +1753,8 @@ int kiblnd_fmr_pool_map(struct kib_fmr_poolset *fps, struct kib_tx *tx,
> } else {
> fps->fps_next_retry = cfs_time_shift(IBLND_POOL_RETRY);
> }
> - spin_unlock(&fps->fps_lock);
>
> - goto again;
> + goto again_locked;

Really, gotos backwards? Ick, that's horrid as well, so maybe this is
better? I hate this whole codebase...

I'll let the Lustre maintainers decide about this one...

greg k-h

2017-09-07 13:29:54

by Cihangir Akturk

Subject: Re: [PATCH] staging: lustre: avoid going through unlock/lock overhead

On Thu, Sep 07, 2017 at 02:33:49PM +0200, Greg KH wrote:
> On Thu, Sep 07, 2017 at 01:57:42PM +0300, Cihangir Akturk wrote:
> > Unlocking a spinlock and then immediately re-locking it, without doing
> > anything useful in between, buys us nothing except wasted CPU cycles.
>
> Not always; it can be a "gate" for other users of the lock.

OK, I get it.


> Are you sure that is not what is going on here?

No, I'm not sure. But yes, it's possible that it is being used to let
other users of the lock in.

> Did you test this out on a lustre system? The locks here are
> anything but trivial...

Unfortunately I haven't tested this change on a lustre system. Just
compile-tested.

> >
> > Also, the code size gets smaller.
> >
> > Before:
> >
> > text data bss dec hex filename
> > 70415 2356 4108 76879 12c4f drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.o
> >
> > After:
> >
> > text data bss dec hex filename
> > 70095 2356 4108 76559 12b0f drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.o
> >
> > Signed-off-by: Cihangir Akturk <[email protected]>
> > ---
> > drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c | 18 ++++++++----------
> > 1 file changed, 8 insertions(+), 10 deletions(-)
> >
> > diff --git a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c
> > index 64763aa..5d9cd33 100644
> > --- a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c
> > +++ b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c
> > @@ -1624,8 +1624,9 @@ int kiblnd_fmr_pool_map(struct kib_fmr_poolset *fps, struct kib_tx *tx,
> > __u64 version;
> > int rc;
> >
> > - again:
> > +again:
> > spin_lock(&fps->fps_lock);
> > +again_locked:
> > version = fps->fps_version;
> > list_for_each_entry(fpo, &fps->fps_pool_list, fpo_list) {
> > fpo->fpo_deadline = cfs_time_shift(IBLND_POOL_DEADLINE);
> > @@ -1722,10 +1723,8 @@ int kiblnd_fmr_pool_map(struct kib_fmr_poolset *fps, struct kib_tx *tx,
> > }
> >
> > /* EAGAIN and ... */
> > - if (version != fps->fps_version) {
> > - spin_unlock(&fps->fps_lock);
> > - goto again;
> > - }
> > + if (version != fps->fps_version)
> > + goto again_locked;
> > }
> >
> > if (fps->fps_increasing) {
> > @@ -1754,9 +1753,8 @@ int kiblnd_fmr_pool_map(struct kib_fmr_poolset *fps, struct kib_tx *tx,
> > } else {
> > fps->fps_next_retry = cfs_time_shift(IBLND_POOL_RETRY);
> > }
> > - spin_unlock(&fps->fps_lock);
> >
> > - goto again;
> > + goto again_locked;
>
> Really, gotos backwards? Ick, that's horrid as well, so maybe this is
> better? I hate this whole codebase...
>
> I'll let the Lustre maintainers decide about this one...
>
> greg k-h