Hi,

First of all, thank you for the report, and please follow the email
writing rules. :)

Anyway, I agree with the issue below.
One thing I can think of is that we don't need the spin_lock, since we
don't care about the exact lock number; we just need any non-colliding
number.

So, how about removing the spin_lock?
And how about using a random number?
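
For illustration, a minimal sketch of what the random pick could look
like inside mutex_lock_op() (my sketch only, not tested code;
get_random_bytes() comes from <linux/random.h>):

        /* pick an arbitrary fallback lock; occasional collisions are
         * tolerable here, since any lock works and we only want rough
         * spreading across fs_lock[] */
        unsigned int rand;

        get_random_bytes(&rand, sizeof(rand));
        next_lock = rand % NR_GLOBAL_LOCKS;
        mutex_lock(&sbi->fs_lock[next_lock]);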
Thanks,
2013-09-06 (금), 09:48 +0000, Chao Yu:
> Hi Kim:
>
> I think there is a performance problem: when all sbi->fs_lock mutexes
> are held, all other threads may get the same next_lock value from
> sbi->next_lock_num in mutex_lock_op and then wait for the same lock,
> fs_lock[next_lock], which unbalances fs_lock usage.
> This can cost performance in multithreaded tests.
>
> Here is a patch to fix this problem:
>
> Signed-off-by: Yu Chao <[email protected]>
>
> diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
> old mode 100644
> new mode 100755
> index 467d42d..983bb45
> --- a/fs/f2fs/f2fs.h
> +++ b/fs/f2fs/f2fs.h
> @@ -371,6 +371,7 @@ struct f2fs_sb_info {
>         struct mutex fs_lock[NR_GLOBAL_LOCKS]; /* blocking FS operations */
>         struct mutex node_write;                /* locking node writes */
>         struct mutex writepages;                /* mutex for writepages() */
> +       spinlock_t spin_lock;                   /* lock for next_lock_num */
>         unsigned char next_lock_num;            /* round-robin global locks */
>         int por_doing;                          /* recovery is doing or not */
>         int on_build_free_nids;                 /* build_free_nids is doing */
> @@ -533,15 +534,19 @@ static inline void mutex_unlock_all(struct f2fs_sb_info *sbi)
>
>  static inline int mutex_lock_op(struct f2fs_sb_info *sbi)
>  {
> -       unsigned char next_lock = sbi->next_lock_num % NR_GLOBAL_LOCKS;
> +       unsigned char next_lock;
>         int i = 0;
>
>         for (; i < NR_GLOBAL_LOCKS; i++)
>                 if (mutex_trylock(&sbi->fs_lock[i]))
>                         return i;
>
> -       mutex_lock(&sbi->fs_lock[next_lock]);
> +       spin_lock(&sbi->spin_lock);
> +       next_lock = sbi->next_lock_num % NR_GLOBAL_LOCKS;
>         sbi->next_lock_num++;
> +       spin_unlock(&sbi->spin_lock);
> +
> +       mutex_lock(&sbi->fs_lock[next_lock]);
>         return next_lock;
>  }
>
> diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
> old mode 100644
> new mode 100755
> index 75c7dc3..4f27596
> --- a/fs/f2fs/super.c
> +++ b/fs/f2fs/super.c
> @@ -657,6 +657,7 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
>         mutex_init(&sbi->cp_mutex);
>         for (i = 0; i < NR_GLOBAL_LOCKS; i++)
>                 mutex_init(&sbi->fs_lock[i]);
> +       spin_lock_init(&sbi->spin_lock);
>         mutex_init(&sbi->node_write);
>         sbi->por_doing = 0;
>         spin_lock_init(&sbi->stat_lock);
--
Jaegeuk Kim
Samsung
Hi Kim,

I did some tests as you mentioned, using a random number instead of the
spin_lock. The test model is as follows:
eight threads race to grab one of eight locks one thousand times,
and I used four methods to generate the lock number:

1. atomic_add_return(1, &sbi->next_lock_num) % NR_GLOBAL_LOCKS;
2. spin_lock(); next_lock_num++ % NR_GLOBAL_LOCKS; spin_unlock();
3. ktime_get().tv64 % NR_GLOBAL_LOCKS;
4. get_random_bytes(&next_lock, sizeof(unsigned int));

The results indicate:
max count of consecutive collisions: 4 > 3 > 2 = 1
max-min count of times each lock is grabbed: 4 > 3 > 2 = 1
elapsed time of generation: 3 > 2 > 4 > 1

So I think it is better to use atomic_add_return in the round-robin
method, as it costs less time and reduces collisions.
What's your opinion?
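
For reference, a rough userspace analogue of that test model (a sketch
under my assumptions, not the in-kernel harness actually used),
exercising method 1, the atomic round-robin:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NR_LOCKS        8
#define NR_THREADS      8
#define NR_ROUNDS       1000

static pthread_mutex_t locks[NR_LOCKS];
static atomic_uint next_lock_num;
static unsigned long grab_count[NR_LOCKS];      /* protected by locks[i] */

static void *worker(void *unused)
{
        int i;

        for (i = 0; i < NR_ROUNDS; i++) {
                /* method 1: atomic round-robin, like atomic_add_return() */
                unsigned int n = atomic_fetch_add(&next_lock_num, 1) % NR_LOCKS;

                pthread_mutex_lock(&locks[n]);
                grab_count[n]++;                /* per-lock balance statistic */
                pthread_mutex_unlock(&locks[n]);
        }
        return NULL;
}

int main(void)
{
        pthread_t tid[NR_THREADS];
        int i;

        for (i = 0; i < NR_LOCKS; i++)
                pthread_mutex_init(&locks[i], NULL);
        for (i = 0; i < NR_THREADS; i++)
                pthread_create(&tid[i], NULL, worker, NULL);
        for (i = 0; i < NR_THREADS; i++)
                pthread_join(tid[i], NULL);
        for (i = 0; i < NR_LOCKS; i++)
                printf("lock %d grabbed %lu times\n", i, grab_count[i]);
        return 0;
}

(build with gcc -pthread; the max-min of the printed counts is the
balance figure quoted above)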
Thanks,
------- Original Message -------
Sender : ???<[email protected]> S5(??)/??/?????????(???)/????
Date : ???? 10, 2013 09:52 (GMT+09:00)
Title : Re: [f2fs-dev][PATCH] f2fs: optimize fs_lock for better performance
Hi,

First of all, thank you for the report, and please follow the email
writing rules. :)

Anyway, I agree with the issue below.
One thing I can think of is that we don't need the spin_lock, since we
don't care about the exact lock number; we just need any non-colliding
number.

So, how about removing the spin_lock?
And how about using a random number?
Thanks,
2013-09-06 (금), 09:48 +0000, Chao Yu:
> Hi Kim:
>
> I think there is a performance problem: when all sbi->fs_lock mutexes
> are held, all other threads may get the same next_lock value from
> sbi->next_lock_num in mutex_lock_op and then wait for the same lock,
> fs_lock[next_lock], which unbalances fs_lock usage.
> This can cost performance in multithreaded tests.
>
> Here is a patch to fix this problem:
>
> Signed-off-by: Yu Chao <[email protected]>
>
> diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
> old mode 100644
> new mode 100755
> index 467d42d..983bb45
> --- a/fs/f2fs/f2fs.h
> +++ b/fs/f2fs/f2fs.h
> @@ -371,6 +371,7 @@ struct f2fs_sb_info {
>         struct mutex fs_lock[NR_GLOBAL_LOCKS]; /* blocking FS operations */
>         struct mutex node_write;                /* locking node writes */
>         struct mutex writepages;                /* mutex for writepages() */
> +       spinlock_t spin_lock;                   /* lock for next_lock_num */
>         unsigned char next_lock_num;            /* round-robin global locks */
>         int por_doing;                          /* recovery is doing or not */
>         int on_build_free_nids;                 /* build_free_nids is doing */
> @@ -533,15 +534,19 @@ static inline void mutex_unlock_all(struct f2fs_sb_info *sbi)
>
>  static inline int mutex_lock_op(struct f2fs_sb_info *sbi)
>  {
> -       unsigned char next_lock = sbi->next_lock_num % NR_GLOBAL_LOCKS;
> +       unsigned char next_lock;
>         int i = 0;
>
>         for (; i < NR_GLOBAL_LOCKS; i++)
>                 if (mutex_trylock(&sbi->fs_lock[i]))
>                         return i;
>
> -       mutex_lock(&sbi->fs_lock[next_lock]);
> +       spin_lock(&sbi->spin_lock);
> +       next_lock = sbi->next_lock_num % NR_GLOBAL_LOCKS;
>         sbi->next_lock_num++;
> +       spin_unlock(&sbi->spin_lock);
> +
> +       mutex_lock(&sbi->fs_lock[next_lock]);
>         return next_lock;
>  }
>
> diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
> old mode 100644
> new mode 100755
> index 75c7dc3..4f27596
> --- a/fs/f2fs/super.c
> +++ b/fs/f2fs/super.c
> @@ -657,6 +657,7 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
>         mutex_init(&sbi->cp_mutex);
>         for (i = 0; i < NR_GLOBAL_LOCKS; i++)
>                 mutex_init(&sbi->fs_lock[i]);
> +       spin_lock_init(&sbi->spin_lock);
>         mutex_init(&sbi->node_write);
>         sbi->por_doing = 0;
>         spin_lock_init(&sbi->stat_lock);
--
Jaegeuk Kim
Samsung
Hi Jaegeuk,
On 09/10/2013 08:52 AM, Jaegeuk Kim wrote:
> Hi,
>
> First of all, thank you for the report, and please follow the email
> writing rules. :)
>
> Anyway, I agree with the issue below.
> One thing I can think of is that we don't need the spin_lock, since we
> don't care about the exact lock number; we just need any non-colliding
> number.
Agreed, but if all the locks are held, IMO we still need to balance the
following threads across the different locks, even though perfect
balance is unreachable.
>
> So, how about removing the spin_lock?
Yeah, in this case the spin_lock is a bit costly.
> And how about using a random number?
NR_GLOBAL_LOCKS is currently 8, and it seems a random number cannot
give the balance we expect.
Regards,
Gu
> Thanks,
>
> 2013-09-06 (금), 09:48 +0000, Chao Yu:
>> Hi Kim:
>>
>> I think there is a performance problem: when all sbi->fs_lock mutexes
>> are held, all other threads may get the same next_lock value from
>> sbi->next_lock_num in mutex_lock_op and then wait for the same lock,
>> fs_lock[next_lock], which unbalances fs_lock usage.
>> This can cost performance in multithreaded tests.
>>
>> Here is a patch to fix this problem:
>>
>> Signed-off-by: Yu Chao <[email protected]>
>>
>> diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
>> old mode 100644
>> new mode 100755
>> index 467d42d..983bb45
>> --- a/fs/f2fs/f2fs.h
>> +++ b/fs/f2fs/f2fs.h
>> @@ -371,6 +371,7 @@ struct f2fs_sb_info {
>>         struct mutex fs_lock[NR_GLOBAL_LOCKS]; /* blocking FS operations */
>>         struct mutex node_write;                /* locking node writes */
>>         struct mutex writepages;                /* mutex for writepages() */
>> +       spinlock_t spin_lock;                   /* lock for next_lock_num */
>>         unsigned char next_lock_num;            /* round-robin global locks */
>>         int por_doing;                          /* recovery is doing or not */
>>         int on_build_free_nids;                 /* build_free_nids is doing */
>> @@ -533,15 +534,19 @@ static inline void mutex_unlock_all(struct f2fs_sb_info *sbi)
>>
>>  static inline int mutex_lock_op(struct f2fs_sb_info *sbi)
>>  {
>> -       unsigned char next_lock = sbi->next_lock_num % NR_GLOBAL_LOCKS;
>> +       unsigned char next_lock;
>>         int i = 0;
>>
>>         for (; i < NR_GLOBAL_LOCKS; i++)
>>                 if (mutex_trylock(&sbi->fs_lock[i]))
>>                         return i;
>>
>> -       mutex_lock(&sbi->fs_lock[next_lock]);
>> +       spin_lock(&sbi->spin_lock);
>> +       next_lock = sbi->next_lock_num % NR_GLOBAL_LOCKS;
>>         sbi->next_lock_num++;
>> +       spin_unlock(&sbi->spin_lock);
>> +
>> +       mutex_lock(&sbi->fs_lock[next_lock]);
>>         return next_lock;
>>  }
>>
>> diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
>> old mode 100644
>> new mode 100755
>> index 75c7dc3..4f27596
>> --- a/fs/f2fs/super.c
>> +++ b/fs/f2fs/super.c
>> @@ -657,6 +657,7 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
>>         mutex_init(&sbi->cp_mutex);
>>         for (i = 0; i < NR_GLOBAL_LOCKS; i++)
>>                 mutex_init(&sbi->fs_lock[i]);
>> +       spin_lock_init(&sbi->spin_lock);
>>         mutex_init(&sbi->node_write);
>>         sbi->por_doing = 0;
>>         spin_lock_init(&sbi->stat_lock);
Hi Jaegeuk, Chao,
On 09/10/2013 08:52 AM, Jaegeuk Kim wrote:
> Hi,
>
> First of all, thank you for the report, and please follow the email
> writing rules. :)
>
> Anyway, I agree with the issue below.
> One thing I can think of is that we don't need the spin_lock, since we
> don't care about the exact lock number; we just need any non-colliding
> number.
IMHO, just moving sbi->next_lock_num++ before
mutex_lock(&sbi->fs_lock[next_lock]) can mostly avoid the imbalance.
IMO, the case where two or more threads increase sbi->next_lock_num at
the same time is really very rare. If you think that is not rigorous
enough, changing next_lock_num to an atomic one can fix it.
What's your opinion?
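
For illustration, the atomic variant could look like this (a sketch
only, assuming next_lock_num is changed from unsigned char to atomic_t
and initialized in f2fs_fill_super; not a tested patch):

static inline int mutex_lock_op(struct f2fs_sb_info *sbi)
{
        unsigned char next_lock;
        int i;

        for (i = 0; i < NR_GLOBAL_LOCKS; i++)
                if (mutex_trylock(&sbi->fs_lock[i]))
                        return i;

        /* atomic_inc_return() hands racing threads distinct values, so
         * they fall back to different locks even without a spin_lock */
        next_lock = atomic_inc_return(&sbi->next_lock_num) % NR_GLOBAL_LOCKS;
        mutex_lock(&sbi->fs_lock[next_lock]);
        return next_lock;
}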
Regards,
Gu
>
> So, how about removing the spin_lock?
> And how about using a random number?
> Thanks,
>
> 2013-09-06 (금), 09:48 +0000, Chao Yu:
>> Hi Kim:
>>
>> I think there is a performance problem: when all sbi->fs_lock mutexes
>> are held, all other threads may get the same next_lock value from
>> sbi->next_lock_num in mutex_lock_op and then wait for the same lock,
>> fs_lock[next_lock], which unbalances fs_lock usage.
>> This can cost performance in multithreaded tests.
>>
>> Here is a patch to fix this problem:
>>
>> Signed-off-by: Yu Chao <[email protected]>
>>
>> diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
>> old mode 100644
>> new mode 100755
>> index 467d42d..983bb45
>> --- a/fs/f2fs/f2fs.h
>> +++ b/fs/f2fs/f2fs.h
>> @@ -371,6 +371,7 @@ struct f2fs_sb_info {
>>         struct mutex fs_lock[NR_GLOBAL_LOCKS]; /* blocking FS operations */
>>         struct mutex node_write;                /* locking node writes */
>>         struct mutex writepages;                /* mutex for writepages() */
>> +       spinlock_t spin_lock;                   /* lock for next_lock_num */
>>         unsigned char next_lock_num;            /* round-robin global locks */
>>         int por_doing;                          /* recovery is doing or not */
>>         int on_build_free_nids;                 /* build_free_nids is doing */
>> @@ -533,15 +534,19 @@ static inline void mutex_unlock_all(struct f2fs_sb_info *sbi)
>>
>>  static inline int mutex_lock_op(struct f2fs_sb_info *sbi)
>>  {
>> -       unsigned char next_lock = sbi->next_lock_num % NR_GLOBAL_LOCKS;
>> +       unsigned char next_lock;
>>         int i = 0;
>>
>>         for (; i < NR_GLOBAL_LOCKS; i++)
>>                 if (mutex_trylock(&sbi->fs_lock[i]))
>>                         return i;
>>
>> -       mutex_lock(&sbi->fs_lock[next_lock]);
>> +       spin_lock(&sbi->spin_lock);
>> +       next_lock = sbi->next_lock_num % NR_GLOBAL_LOCKS;
>>         sbi->next_lock_num++;
>> +       spin_unlock(&sbi->spin_lock);
>> +
>> +       mutex_lock(&sbi->fs_lock[next_lock]);
>>         return next_lock;
>>  }
>>
>> diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
>> old mode 100644
>> new mode 100755
>> index 75c7dc3..4f27596
>> --- a/fs/f2fs/super.c
>> +++ b/fs/f2fs/super.c
>> @@ -657,6 +657,7 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
>>         mutex_init(&sbi->cp_mutex);
>>         for (i = 0; i < NR_GLOBAL_LOCKS; i++)
>>                 mutex_init(&sbi->fs_lock[i]);
>> +       spin_lock_init(&sbi->spin_lock);
>>         mutex_init(&sbi->node_write);
>>         sbi->por_doing = 0;
>>         spin_lock_init(&sbi->stat_lock);
Hi,
2013/9/11 Chao Yu <[email protected]>
>
> Hi Kim,
>
> I did some tests as you mentioned, using a random number instead of
> the spin_lock. The test model is as follows:
> eight threads race to grab one of eight locks one thousand times,
> and I used four methods to generate the lock number:
>
> 1. atomic_add_return(1, &sbi->next_lock_num) % NR_GLOBAL_LOCKS;
> 2. spin_lock(); next_lock_num++ % NR_GLOBAL_LOCKS; spin_unlock();
> 3. ktime_get().tv64 % NR_GLOBAL_LOCKS;
> 4. get_random_bytes(&next_lock, sizeof(unsigned int));
>
> The results indicate:
> max count of consecutive collisions: 4 > 3 > 2 = 1
> max-min count of times each lock is grabbed: 4 > 3 > 2 = 1
> elapsed time of generation: 3 > 2 > 4 > 1
>
> So I think it is better to use atomic_add_return in the round-robin
> method, as it costs less time and reduces collisions.
> What's your opinion?
Could you test with sbi->next_lock_num++ only, instead of using
atomic_add_return?
IMO, this is just an integer value, and I still don't think it should
be covered by any kind of lock.
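
Concretely, that reduces to a plain, unlocked increment (a sketch; it
matches the v2 patch that appears later in this thread):

        next_lock = sbi->next_lock_num++ % NR_GLOBAL_LOCKS;
        mutex_lock(&sbi->fs_lock[next_lock]);

A lost update here at worst sends two waiters to the same mutex, which
costs a little balance but never correctness.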
Thanks,
>
> Thanks,
>
> ------- Original Message -------
> Sender : Jaegeuk Kim <[email protected]>
> Date : Sep 10, 2013 09:52 (GMT+09:00)
> Title : Re: [f2fs-dev][PATCH] f2fs: optimize fs_lock for better performance
>
> Hi,
>
> First of all, thank you for the report, and please follow the email
> writing rules. :)
>
> Anyway, I agree with the issue below.
> One thing I can think of is that we don't need the spin_lock, since we
> don't care about the exact lock number; we just need any non-colliding
> number.
>
> So, how about removing the spin_lock?
> And how about using a random number?
> Thanks,
>
> 2013-09-06 (금), 09:48 +0000, Chao Yu:
> > Hi Kim:
> >
> > I think there is a performance problem: when all sbi->fs_lock mutexes
> > are held, all other threads may get the same next_lock value from
> > sbi->next_lock_num in mutex_lock_op and then wait for the same lock,
> > fs_lock[next_lock], which unbalances fs_lock usage.
> > This can cost performance in multithreaded tests.
> >
> > Here is a patch to fix this problem:
> >
> > Signed-off-by: Yu Chao <[email protected]>
> >
> > diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
> > old mode 100644
> > new mode 100755
> > index 467d42d..983bb45
> > --- a/fs/f2fs/f2fs.h
> > +++ b/fs/f2fs/f2fs.h
> > @@ -371,6 +371,7 @@ struct f2fs_sb_info {
> >         struct mutex fs_lock[NR_GLOBAL_LOCKS]; /* blocking FS operations */
> >         struct mutex node_write;                /* locking node writes */
> >         struct mutex writepages;                /* mutex for writepages() */
> > +       spinlock_t spin_lock;                   /* lock for next_lock_num */
> >         unsigned char next_lock_num;            /* round-robin global locks */
> >         int por_doing;                          /* recovery is doing or not */
> >         int on_build_free_nids;                 /* build_free_nids is doing */
> > @@ -533,15 +534,19 @@ static inline void mutex_unlock_all(struct f2fs_sb_info *sbi)
> >
> >  static inline int mutex_lock_op(struct f2fs_sb_info *sbi)
> >  {
> > -       unsigned char next_lock = sbi->next_lock_num % NR_GLOBAL_LOCKS;
> > +       unsigned char next_lock;
> >         int i = 0;
> >
> >         for (; i < NR_GLOBAL_LOCKS; i++)
> >                 if (mutex_trylock(&sbi->fs_lock[i]))
> >                         return i;
> >
> > -       mutex_lock(&sbi->fs_lock[next_lock]);
> > +       spin_lock(&sbi->spin_lock);
> > +       next_lock = sbi->next_lock_num % NR_GLOBAL_LOCKS;
> >         sbi->next_lock_num++;
> > +       spin_unlock(&sbi->spin_lock);
> > +
> > +       mutex_lock(&sbi->fs_lock[next_lock]);
> >         return next_lock;
> >  }
> >
> > diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
> > old mode 100644
> > new mode 100755
> > index 75c7dc3..4f27596
> > --- a/fs/f2fs/super.c
> > +++ b/fs/f2fs/super.c
> > @@ -657,6 +657,7 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
> >         mutex_init(&sbi->cp_mutex);
> >         for (i = 0; i < NR_GLOBAL_LOCKS; i++)
> >                 mutex_init(&sbi->fs_lock[i]);
> > +       spin_lock_init(&sbi->spin_lock);
> >         mutex_init(&sbi->node_write);
> >         sbi->por_doing = 0;
> >         spin_lock_init(&sbi->stat_lock);
>
> --
> Jaegeuk Kim
> Samsung
Hi Gu,
2013/9/11 Gu Zheng <[email protected]>:
> Hi Jaegeuk, Chao,
>
> On 09/10/2013 08:52 AM, Jaegeuk Kim wrote:
>
>> Hi,
>>
>> First of all, thank you for the report, and please follow the email
>> writing rules. :)
>>
>> Anyway, I agree with the issue below.
>> One thing I can think of is that we don't need the spin_lock, since we
>> don't care about the exact lock number; we just need any non-colliding
>> number.
>
> IMHO, just moving sbi->next_lock_num++ before
> mutex_lock(&sbi->fs_lock[next_lock]) can mostly avoid the imbalance.
> IMO, the case where two or more threads increase sbi->next_lock_num at
> the same time is really very rare. If you think that is not rigorous
> enough, changing next_lock_num to an atomic one can fix it.
> What's your opinion?
As you suggest, I think it is enough to replace it with a simple
sbi->next_lock_num++.
Thanks,
>
> Regards,
> Gu
>
>>
>> So, how about removing the spin_lock?
>> And how about using a random number?
>>
>> Thanks,
>>
>> 2013-09-06 (금), 09:48 +0000, Chao Yu:
>>> Hi Kim:
>>>
>>> I think there is a performance problem: when all sbi->fs_lock mutexes
>>> are held, all other threads may get the same next_lock value from
>>> sbi->next_lock_num in mutex_lock_op and then wait for the same lock,
>>> fs_lock[next_lock], which unbalances fs_lock usage.
>>> This can cost performance in multithreaded tests.
>>>
>>> Here is a patch to fix this problem:
>>>
>>> Signed-off-by: Yu Chao <[email protected]>
>>>
>>> diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
>>> old mode 100644
>>> new mode 100755
>>> index 467d42d..983bb45
>>> --- a/fs/f2fs/f2fs.h
>>> +++ b/fs/f2fs/f2fs.h
>>> @@ -371,6 +371,7 @@ struct f2fs_sb_info {
>>>         struct mutex fs_lock[NR_GLOBAL_LOCKS]; /* blocking FS operations */
>>>         struct mutex node_write;                /* locking node writes */
>>>         struct mutex writepages;                /* mutex for writepages() */
>>> +       spinlock_t spin_lock;                   /* lock for next_lock_num */
>>>         unsigned char next_lock_num;            /* round-robin global locks */
>>>         int por_doing;                          /* recovery is doing or not */
>>>         int on_build_free_nids;                 /* build_free_nids is doing */
>>> @@ -533,15 +534,19 @@ static inline void mutex_unlock_all(struct f2fs_sb_info *sbi)
>>>
>>>  static inline int mutex_lock_op(struct f2fs_sb_info *sbi)
>>>  {
>>> -       unsigned char next_lock = sbi->next_lock_num % NR_GLOBAL_LOCKS;
>>> +       unsigned char next_lock;
>>>         int i = 0;
>>>
>>>         for (; i < NR_GLOBAL_LOCKS; i++)
>>>                 if (mutex_trylock(&sbi->fs_lock[i]))
>>>                         return i;
>>>
>>> -       mutex_lock(&sbi->fs_lock[next_lock]);
>>> +       spin_lock(&sbi->spin_lock);
>>> +       next_lock = sbi->next_lock_num % NR_GLOBAL_LOCKS;
>>>         sbi->next_lock_num++;
>>> +       spin_unlock(&sbi->spin_lock);
>>> +
>>> +       mutex_lock(&sbi->fs_lock[next_lock]);
>>>         return next_lock;
>>>  }
>>>
>>> diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
>>> old mode 100644
>>> new mode 100755
>>> index 75c7dc3..4f27596
>>> --- a/fs/f2fs/super.c
>>> +++ b/fs/f2fs/super.c
>>> @@ -657,6 +657,7 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
>>>         mutex_init(&sbi->cp_mutex);
>>>         for (i = 0; i < NR_GLOBAL_LOCKS; i++)
>>>                 mutex_init(&sbi->fs_lock[i]);
>>> +       spin_lock_init(&sbi->spin_lock);
>>>         mutex_init(&sbi->node_write);
>>>         sbi->por_doing = 0;
>>>         spin_lock_init(&sbi->stat_lock);
Hi Kim,
> -----Original Message-----
> From: Kim Jaegeuk [mailto:[email protected]]
> Sent: Wednesday, September 11, 2013 9:15 PM
> To: [email protected]
> Cc: [email protected]; [email protected];
> [email protected]
> Subject: Re: Re: [f2fs-dev][PATCH] f2fs: optimize fs_lock for better performance
>
> Hi,
>
> 2013/9/11 Chao Yu <[email protected]>
> >
> > Hi Kim,
> >
> > I did some tests as you mentioned, using a random number instead of
> > the spin_lock. The test model is as follows:
> > eight threads race to grab one of eight locks one thousand times,
> > and I used four methods to generate the lock number:
> >
> > 1. atomic_add_return(1, &sbi->next_lock_num) % NR_GLOBAL_LOCKS;
> > 2. spin_lock(); next_lock_num++ % NR_GLOBAL_LOCKS; spin_unlock();
> > 3. ktime_get().tv64 % NR_GLOBAL_LOCKS;
> > 4. get_random_bytes(&next_lock, sizeof(unsigned int));
> >
> > The results indicate:
> > max count of consecutive collisions: 4 > 3 > 2 = 1
> > max-min count of times each lock is grabbed: 4 > 3 > 2 = 1
> > elapsed time of generation: 3 > 2 > 4 > 1
> >
> > So I think it is better to use atomic_add_return in the round-robin
> > method, as it costs less time and reduces collisions.
> > What's your opinion?
>
> Could you test with sbi->next_lock_num++ only, instead of using
> atomic_add_return?
> IMO, this is just an integer value, and I still don't think it should
> be covered by any kind of lock.
> Thanks,
Thanks for the advice. I have tested sbi->next_lock_num++; its time
cost is a little lower than the atomic one's when running 8 threads
for 1,000,000 iterations, and its balance and collision behavior are
better than I expected.
Can we modify the patch as follows?
root@virtaulmachine:/home/yuchao/git/linux-next/fs/f2fs# git diff --stat
fs/f2fs/f2fs.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
root@virtaulmachine:/home/yuchao/git/linux-next/fs/f2fs# git diff
diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index 608f0df..7fd99d8 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -544,15 +544,15 @@ static inline void mutex_unlock_all(struct f2fs_sb_info *sbi)
 
 static inline int mutex_lock_op(struct f2fs_sb_info *sbi)
 {
-        unsigned char next_lock = sbi->next_lock_num % NR_GLOBAL_LOCKS;
+        unsigned char next_lock;
         int i = 0;
 
         for (; i < NR_GLOBAL_LOCKS; i++)
                 if (mutex_trylock(&sbi->fs_lock[i]))
                         return i;
 
+        next_lock = sbi->next_lock_num++ % NR_GLOBAL_LOCKS;
         mutex_lock(&sbi->fs_lock[next_lock]);
-        sbi->next_lock_num++;
         return next_lock;
 }
>
> >
> > Thanks,
> >
> > ------- Original Message -------
> > Sender : Jaegeuk Kim <[email protected]>
> > Date : Sep 10, 2013 09:52 (GMT+09:00)
> > Title : Re: [f2fs-dev][PATCH] f2fs: optimize fs_lock for better performance
> >
> > Hi,
> >
> > First of all, thank you for the report, and please follow the email
> > writing rules. :)
> >
> > Anyway, I agree with the issue below.
> > One thing I can think of is that we don't need the spin_lock, since we
> > don't care about the exact lock number; we just need any non-colliding
> > number.
> >
> > So, how about removing the spin_lock?
> > And how about using a random number?
> > Thanks,
> >
> > 2013-09-06 (금), 09:48 +0000, Chao Yu:
> > > Hi Kim:
> > >
> > > I think there is a performance problem: when all sbi->fs_lock mutexes
> > > are held, all other threads may get the same next_lock value from
> > > sbi->next_lock_num in mutex_lock_op and then wait for the same lock,
> > > fs_lock[next_lock], which unbalances fs_lock usage.
> > > This can cost performance in multithreaded tests.
> > >
> > > Here is a patch to fix this problem:
> > >
> > > Signed-off-by: Yu Chao <[email protected]>
> > >
> > > diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
> > > old mode 100644
> > > new mode 100755
> > > index 467d42d..983bb45
> > > --- a/fs/f2fs/f2fs.h
> > > +++ b/fs/f2fs/f2fs.h
> > > @@ -371,6 +371,7 @@ struct f2fs_sb_info {
> > >         struct mutex fs_lock[NR_GLOBAL_LOCKS]; /* blocking FS operations */
> > >         struct mutex node_write;                /* locking node writes */
> > >         struct mutex writepages;                /* mutex for writepages() */
> > > +       spinlock_t spin_lock;                   /* lock for next_lock_num */
> > >         unsigned char next_lock_num;            /* round-robin global locks */
> > >         int por_doing;                          /* recovery is doing or not */
> > >         int on_build_free_nids;                 /* build_free_nids is doing */
> > > @@ -533,15 +534,19 @@ static inline void mutex_unlock_all(struct f2fs_sb_info *sbi)
> > >
> > >  static inline int mutex_lock_op(struct f2fs_sb_info *sbi)
> > >  {
> > > -       unsigned char next_lock = sbi->next_lock_num % NR_GLOBAL_LOCKS;
> > > +       unsigned char next_lock;
> > >         int i = 0;
> > >
> > >         for (; i < NR_GLOBAL_LOCKS; i++)
> > >                 if (mutex_trylock(&sbi->fs_lock[i]))
> > >                         return i;
> > >
> > > -       mutex_lock(&sbi->fs_lock[next_lock]);
> > > +       spin_lock(&sbi->spin_lock);
> > > +       next_lock = sbi->next_lock_num % NR_GLOBAL_LOCKS;
> > >         sbi->next_lock_num++;
> > > +       spin_unlock(&sbi->spin_lock);
> > > +
> > > +       mutex_lock(&sbi->fs_lock[next_lock]);
> > >         return next_lock;
> > >  }
> > >
> > > diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
> > > old mode 100644
> > > new mode 100755
> > > index 75c7dc3..4f27596
> > > --- a/fs/f2fs/super.c
> > > +++ b/fs/f2fs/super.c
> > > @@ -657,6 +657,7 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
> > >         mutex_init(&sbi->cp_mutex);
> > >         for (i = 0; i < NR_GLOBAL_LOCKS; i++)
> > >                 mutex_init(&sbi->fs_lock[i]);
> > > +       spin_lock_init(&sbi->spin_lock);
> > >         mutex_init(&sbi->node_write);
> > >         sbi->por_doing = 0;
> > >         spin_lock_init(&sbi->stat_lock);
> >
> > --
> > Jaegeuk Kim
> > Samsung
Hi Gu,
> -----Original Message-----
> From: Gu Zheng [mailto:[email protected]]
> Sent: Wednesday, September 11, 2013 1:38 PM
> To: [email protected]
> Cc: [email protected]; [email protected];
> [email protected]; [email protected];
> [email protected]
> Subject: Re: [f2fs-dev][PATCH] f2fs: optimize fs_lock for better performance
>
> Hi Jaegeuk, Chao,
>
> On 09/10/2013 08:52 AM, Jaegeuk Kim wrote:
>
> > Hi,
> >
> > First of all, thank you for the report, and please follow the email
> > writing rules. :)
> >
> > Anyway, I agree with the issue below.
> > One thing I can think of is that we don't need the spin_lock, since we
> > don't care about the exact lock number; we just need any non-colliding
> > number.
>
> IMHO, just moving sbi->next_lock_num++ before
> mutex_lock(&sbi->fs_lock[next_lock]) can mostly avoid the imbalance.
> IMO, the case where two or more threads increase sbi->next_lock_num at
> the same time is really very rare. If you think that is not rigorous
> enough, changing next_lock_num to an atomic one can fix it.
> What's your opinion?
>
> Regards,
> Gu
I tested sbi->next_lock_num++ against the atomic one, and I found their
performance is almost the same under low thread contention.
So, per your and Kim's opinion, it is enough to use "sbi->next_lock_num++"
to fix this issue.
Thanks for the advice.
>
> >
> > So, how about removing the spin_lock?
> > And how about using a random number?
> >
> > Thanks,
> >
> > 2013-09-06 (금), 09:48 +0000, Chao Yu:
> >> Hi Kim:
> >>
> >> I think there is a performance problem: when all sbi->fs_lock mutexes
> >> are held, all other threads may get the same next_lock value from
> >> sbi->next_lock_num in mutex_lock_op and then wait for the same lock,
> >> fs_lock[next_lock], which unbalances fs_lock usage.
> >> This can cost performance in multithreaded tests.
> >>
> >> Here is a patch to fix this problem:
> >>
> >> Signed-off-by: Yu Chao <[email protected]>
> >>
> >> diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
> >> old mode 100644
> >> new mode 100755
> >> index 467d42d..983bb45
> >> --- a/fs/f2fs/f2fs.h
> >> +++ b/fs/f2fs/f2fs.h
> >> @@ -371,6 +371,7 @@ struct f2fs_sb_info {
> >>         struct mutex fs_lock[NR_GLOBAL_LOCKS]; /* blocking FS operations */
> >>         struct mutex node_write;                /* locking node writes */
> >>         struct mutex writepages;                /* mutex for writepages() */
> >> +       spinlock_t spin_lock;                   /* lock for next_lock_num */
> >>         unsigned char next_lock_num;            /* round-robin global locks */
> >>         int por_doing;                          /* recovery is doing or not */
> >>         int on_build_free_nids;                 /* build_free_nids is doing */
> >> @@ -533,15 +534,19 @@ static inline void mutex_unlock_all(struct f2fs_sb_info *sbi)
> >>
> >>  static inline int mutex_lock_op(struct f2fs_sb_info *sbi)
> >>  {
> >> -       unsigned char next_lock = sbi->next_lock_num % NR_GLOBAL_LOCKS;
> >> +       unsigned char next_lock;
> >>         int i = 0;
> >>
> >>         for (; i < NR_GLOBAL_LOCKS; i++)
> >>                 if (mutex_trylock(&sbi->fs_lock[i]))
> >>                         return i;
> >>
> >> -       mutex_lock(&sbi->fs_lock[next_lock]);
> >> +       spin_lock(&sbi->spin_lock);
> >> +       next_lock = sbi->next_lock_num % NR_GLOBAL_LOCKS;
> >>         sbi->next_lock_num++;
> >> +       spin_unlock(&sbi->spin_lock);
> >> +
> >> +       mutex_lock(&sbi->fs_lock[next_lock]);
> >>         return next_lock;
> >>  }
> >>
> >> diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
> >> old mode 100644
> >> new mode 100755
> >> index 75c7dc3..4f27596
> >> --- a/fs/f2fs/super.c
> >> +++ b/fs/f2fs/super.c
> >> @@ -657,6 +657,7 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
> >>         mutex_init(&sbi->cp_mutex);
> >>         for (i = 0; i < NR_GLOBAL_LOCKS; i++)
> >>                 mutex_init(&sbi->fs_lock[i]);
> >> +       spin_lock_init(&sbi->spin_lock);
> >>         mutex_init(&sbi->node_write);
> >>         sbi->por_doing = 0;
> >>         spin_lock_init(&sbi->stat_lock);
From: Yu Chao <[email protected]>
There is a performance problem: when all sbi->fs_lock mutexes are held,
all following threads may get the same next_lock value from
sbi->next_lock_num in mutex_lock_op and wait for the same lock
(fs_lock[next_lock]), which can degrade performance.
So we move the sbi->next_lock_num++ before taking the lock; this spreads
the following threads across the locks when all sbi->fs_lock are held.
v1-->v2:
 Drop the needless spin_lock, as Jaegeuk suggested.
Suggested-by: Jaegeuk Kim <[email protected]>
Signed-off-by: Yu Chao <[email protected]>
Signed-off-by: Gu Zheng <[email protected]>
---
fs/f2fs/f2fs.h | 4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index 608f0df..7fd99d8 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -544,15 +544,15 @@ static inline void mutex_unlock_all(struct f2fs_sb_info *sbi)
 
 static inline int mutex_lock_op(struct f2fs_sb_info *sbi)
 {
-        unsigned char next_lock = sbi->next_lock_num % NR_GLOBAL_LOCKS;
+        unsigned char next_lock;
         int i = 0;
 
         for (; i < NR_GLOBAL_LOCKS; i++)
                 if (mutex_trylock(&sbi->fs_lock[i]))
                         return i;
 
+        next_lock = sbi->next_lock_num++ % NR_GLOBAL_LOCKS;
         mutex_lock(&sbi->fs_lock[next_lock]);
-        sbi->next_lock_num++;
         return next_lock;
 }
--
1.7.7
Hi Chao,
On 09/12/2013 10:40 AM, 俞超 wrote:
> Hi Gu
>
>> -----Original Message-----
>> From: Gu Zheng [mailto:[email protected]]
>> Sent: Wednesday, September 11, 2013 1:38 PM
>> To: [email protected]
>> Cc: [email protected]; [email protected];
>> [email protected]; [email protected];
>> [email protected]
>> Subject: Re: [f2fs-dev][PATCH] f2fs: optimize fs_lock for better performance
>>
>> Hi Jaegeuk, Chao,
>>
>> On 09/10/2013 08:52 AM, Jaegeuk Kim wrote:
>>
>>> Hi,
>>>
>>> First of all, thank you for the report, and please follow the email
>>> writing rules. :)
>>>
>>> Anyway, I agree with the issue below.
>>> One thing I can think of is that we don't need the spin_lock, since we
>>> don't care about the exact lock number; we just need any non-colliding
>>> number.
>>
>> IMHO, just moving sbi->next_lock_num++ before
>> mutex_lock(&sbi->fs_lock[next_lock]) can mostly avoid the imbalance.
>> IMO, the case where two or more threads increase sbi->next_lock_num at
>> the same time is really very rare. If you think that is not rigorous
>> enough, changing next_lock_num to an atomic one can fix it.
>> What's your opinion?
>>
>> Regards,
>> Gu
>
> I tested sbi->next_lock_num++ against the atomic one, and I found their
> performance is almost the same under low thread contention.
> So, per your and Kim's opinion, it is enough to use "sbi->next_lock_num++"
> to fix this issue.
Good, but it seems your reply patch is malformed, which makes it hard
for Jaegeuk to merge. I'll reformat it; see the following thread.
Thanks,
Gu
>
> Thanks for the advice.
>>
>>>
>>> So, how about removing the spin_lock?
>>> And how about using a random number?
>>>
>>> Thanks,
>>>
>>> 2013-09-06 (금), 09:48 +0000, Chao Yu:
>>>> Hi Kim:
>>>>
>>>> I think there is a performance problem: when all sbi->fs_lock mutexes
>>>> are held, all other threads may get the same next_lock value from
>>>> sbi->next_lock_num in mutex_lock_op and then wait for the same lock,
>>>> fs_lock[next_lock], which unbalances fs_lock usage.
>>>> This can cost performance in multithreaded tests.
>>>>
>>>> Here is a patch to fix this problem:
>>>>
>>>> Signed-off-by: Yu Chao <[email protected]>
>>>>
>>>> diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
>>>> old mode 100644
>>>> new mode 100755
>>>> index 467d42d..983bb45
>>>> --- a/fs/f2fs/f2fs.h
>>>> +++ b/fs/f2fs/f2fs.h
>>>> @@ -371,6 +371,7 @@ struct f2fs_sb_info {
>>>>         struct mutex fs_lock[NR_GLOBAL_LOCKS]; /* blocking FS operations */
>>>>         struct mutex node_write;                /* locking node writes */
>>>>         struct mutex writepages;                /* mutex for writepages() */
>>>> +       spinlock_t spin_lock;                   /* lock for next_lock_num */
>>>>         unsigned char next_lock_num;            /* round-robin global locks */
>>>>         int por_doing;                          /* recovery is doing or not */
>>>>         int on_build_free_nids;                 /* build_free_nids is doing */
>>>> @@ -533,15 +534,19 @@ static inline void mutex_unlock_all(struct f2fs_sb_info *sbi)
>>>>
>>>>  static inline int mutex_lock_op(struct f2fs_sb_info *sbi)
>>>>  {
>>>> -       unsigned char next_lock = sbi->next_lock_num % NR_GLOBAL_LOCKS;
>>>> +       unsigned char next_lock;
>>>>         int i = 0;
>>>>
>>>>         for (; i < NR_GLOBAL_LOCKS; i++)
>>>>                 if (mutex_trylock(&sbi->fs_lock[i]))
>>>>                         return i;
>>>>
>>>> -       mutex_lock(&sbi->fs_lock[next_lock]);
>>>> +       spin_lock(&sbi->spin_lock);
>>>> +       next_lock = sbi->next_lock_num % NR_GLOBAL_LOCKS;
>>>>         sbi->next_lock_num++;
>>>> +       spin_unlock(&sbi->spin_lock);
>>>> +
>>>> +       mutex_lock(&sbi->fs_lock[next_lock]);
>>>>         return next_lock;
>>>>  }
>>>>
>>>> diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
>>>> old mode 100644
>>>> new mode 100755
>>>> index 75c7dc3..4f27596
>>>> --- a/fs/f2fs/super.c
>>>> +++ b/fs/f2fs/super.c
>>>> @@ -657,6 +657,7 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
>>>>         mutex_init(&sbi->cp_mutex);
>>>>         for (i = 0; i < NR_GLOBAL_LOCKS; i++)
>>>>                 mutex_init(&sbi->fs_lock[i]);
>>>> +       spin_lock_init(&sbi->spin_lock);
>>>>         mutex_init(&sbi->node_write);
>>>>         sbi->por_doing = 0;
>>>>         spin_lock_init(&sbi->stat_lock);