2010-07-18 13:57:04

by shenghui

Subject: [PATCH 1/2 RESEND] fix return value for mb_cache_shrink_fn when nr_to_scan > 0

Sorry to resend this patch. Since the 2nd patch should
be applied after this one, I am sending them together.

The following is the explanation of the patch:
The comment for struct shrinker in include/linux/mm.h says
"shrink...It should return the number of objects which remain in the
cache."
Please notice the word "remain".

In fs/mbcache.c, mb_cache_shrink_fn is used as the shrink function:
        static struct shrinker mb_cache_shrinker = {
                .shrink = mb_cache_shrink_fn,
                .seeks = DEFAULT_SEEKS,
        };
In mb_cache_shrink_fn, the value returned when nr_to_scan > 0 is the
number of mb_cache_entry objects counted before the shrink operation,
not the number remaining after it. The bug may have escaped notice
because memory usage for mbcache is usually low, so the effect is not
obvious.
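
To make the effect concrete, here is a small user-space sketch of the
difference (shrink_fixed, entry_count and the numbers are all made up
for illustration; this is not the kernel API):

#include <stdio.h>

/* Toy stand-in for a cache holding a fixed number of entries. */
static int entry_count = 1000;

/*
 * Free up to nr_to_scan entries and return how many REMAIN,
 * which is what the shrinker comment in include/linux/mm.h asks for.
 */
static int shrink_fixed(int nr_to_scan)
{
        if (nr_to_scan > entry_count)
                nr_to_scan = entry_count;
        entry_count -= nr_to_scan;      /* "free" the entries */
        return entry_count;             /* remaining, not the old count */
}

int main(void)
{
        /*
         * A version returning the pre-shrink count would report 1000
         * here even though only 872 entries are left, so the VM would
         * overestimate the size of the cache.
         */
        printf("remaining after scan: %d\n", shrink_fixed(128));
        return 0;
}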

Per Eric Sandeen, we should do the counting only once.
Per Christoph Hellwig, we should use list_for_each_entry instead of
list_for_each here.
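
For reference, the two iteration styles compare as follows. The sketch
below is a self-contained user-space imitation (compiles with gcc; the
macros mimic include/linux/list.h but are illustrative copies, and
mb_cache_toy is a made-up type):

#include <stdio.h>
#include <stddef.h>

/* Minimal imitations of the kernel's intrusive list helpers. */
struct list_head { struct list_head *next, *prev; };

#define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))
#define list_entry(ptr, type, member) container_of(ptr, type, member)
#define list_for_each(pos, head) \
        for (pos = (head)->next; pos != (head); pos = pos->next)
#define list_for_each_entry(pos, head, member) \
        for (pos = list_entry((head)->next, typeof(*pos), member); \
             &pos->member != (head); \
             pos = list_entry(pos->member.next, typeof(*pos), member))

struct mb_cache_toy {
        const char *c_name;
        struct list_head c_cache_list;
};

int main(void)
{
        struct list_head cache_list = { &cache_list, &cache_list };
        struct mb_cache_toy a = { "ext3-cache" }, b = { "ext4-cache" };
        struct list_head *l;
        struct mb_cache_toy *cache;

        /* Link a and b onto cache_list by hand. */
        cache_list.next = &a.c_cache_list;
        a.c_cache_list.prev = &cache_list;
        a.c_cache_list.next = &b.c_cache_list;
        b.c_cache_list.prev = &a.c_cache_list;
        b.c_cache_list.next = &cache_list;
        cache_list.prev = &b.c_cache_list;

        /* Old style: iterate raw nodes, convert each one by hand. */
        list_for_each(l, &cache_list)
                printf("%s\n",
                       list_entry(l, struct mb_cache_toy,
                                  c_cache_list)->c_name);

        /* Preferred style: the macro does the conversion for you. */
        list_for_each_entry(cache, &cache_list, c_cache_list)
                printf("%s\n", cache->c_name);
        return 0;
}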

The following patch is against 2.6.35-rc4. Please check it.


Signed-off-by: Wang Sheng-Hui <[email protected]>
---
 fs/mbcache.c |   22 +++++++++++-----------
 1 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/fs/mbcache.c b/fs/mbcache.c
index ec88ff3..5697d9e 100644
--- a/fs/mbcache.c
+++ b/fs/mbcache.c
@@ -201,21 +201,13 @@ mb_cache_shrink_fn(int nr_to_scan, gfp_t gfp_mask)
 {
         LIST_HEAD(free_list);
         struct list_head *l, *ltmp;
+        struct mb_cache *cache;
         int count = 0;

-        spin_lock(&mb_cache_spinlock);
-        list_for_each(l, &mb_cache_list) {
-                struct mb_cache *cache =
-                        list_entry(l, struct mb_cache, c_cache_list);
-                mb_debug("cache %s (%d)", cache->c_name,
-                         atomic_read(&cache->c_entry_count));
-                count += atomic_read(&cache->c_entry_count);
-        }
         mb_debug("trying to free %d entries", nr_to_scan);
-        if (nr_to_scan == 0) {
-                spin_unlock(&mb_cache_spinlock);
+        if (nr_to_scan == 0)
                 goto out;
-        }
+
         while (nr_to_scan-- && !list_empty(&mb_cache_lru_list)) {
                 struct mb_cache_entry *ce =
                         list_entry(mb_cache_lru_list.next,
@@ -229,6 +221,14 @@ mb_cache_shrink_fn(int nr_to_scan, gfp_t gfp_mask)
                                    e_lru_list), gfp_mask);
         }
 out:
+        spin_lock(&mb_cache_spinlock);
+        list_for_each_entry(cache, &mb_cache_list, c_cache_list) {
+                mb_debug("cache %s (%d)", cache->c_name,
+                         atomic_read(&cache->c_entry_count));
+                count += atomic_read(&cache->c_entry_count);
+        }
+        spin_unlock(&mb_cache_spinlock);
+
         return (count / 100) * sysctl_vfs_cache_pressure;
 }

--
1.7.1.1




--
Thanks and Regards,
shenghui



2010-07-18 13:58:59

by shenghui

Subject: Re: [PATCH 1/2 RESEND] fix return value for mb_cache_shrink_fn when nr_to_scan > 0

On Jul 18, 2010 at 9:57 PM, Wang Sheng-Hui <[email protected]> wrote:
> The following patch is against 2.6.35-rc4. Please check it.

Sorry, made a typo. It's against 2.6.35-rc5.

--


Thanks and Best Regards,
shenghui

2010-07-19 16:27:21

by Eric Sandeen

Subject: Re: [PATCH 1/2 RESEND] fix return value for mb_cache_shrink_fn when nr_to_scan > 0

On 07/18/2010 08:57 AM, Wang Sheng-Hui wrote:
...
> Signed-off-by: Wang Sheng-Hui <[email protected]>

Reviewed-by: Eric Sandeen <[email protected]>

Thanks,
-Eric



2010-07-20 16:50:13

by Eric Sandeen

Subject: Re: [PATCH 1/2 RESEND] fix return value for mb_cache_shrink_fn when nr_to_scan > 0

Eric Sandeen wrote:

...

> Reviewed-by: Eric Sandeen <[email protected]>

Actually retract that, as Andreas pointed out:

>>  fs/mbcache.c |   22 +++++++++++-----------
>>  1 files changed, 11 insertions(+), 11 deletions(-)
>>
>> diff --git a/fs/mbcache.c b/fs/mbcache.c
>> index ec88ff3..5697d9e 100644
>> --- a/fs/mbcache.c
>> +++ b/fs/mbcache.c
>> @@ -201,21 +201,13 @@ mb_cache_shrink_fn(int nr_to_scan, gfp_t gfp_mask)
>>  {
>>          LIST_HEAD(free_list);
>>          struct list_head *l, *ltmp;
>> +        struct mb_cache *cache;
>>          int count = 0;
>>
>> -        spin_lock(&mb_cache_spinlock);

you've lost this spin_lock ...

>> -        list_for_each(l, &mb_cache_list) {
>> -                struct mb_cache *cache =
>> -                        list_entry(l, struct mb_cache, c_cache_list);
>> -                mb_debug("cache %s (%d)", cache->c_name,
>> -                         atomic_read(&cache->c_entry_count));
>> -                count += atomic_read(&cache->c_entry_count);
>> -        }
>>          mb_debug("trying to free %d entries", nr_to_scan);
>> -        if (nr_to_scan == 0) {
>> -                spin_unlock(&mb_cache_spinlock);
>> +        if (nr_to_scan == 0)
>>                  goto out;
>> -        }
>> +

and here you're iterating over it while unlocked....

>>          while (nr_to_scan-- && !list_empty(&mb_cache_lru_list)) {
>>                  struct mb_cache_entry *ce =
>>                          list_entry(mb_cache_lru_list.next,
                                   struct mb_cache_entry, e_lru_list);
                list_move_tail(&ce->e_lru_list, &free_list);
                __mb_cache_entry_unhash(ce);
        }
        spin_unlock(&mb_cache_spinlock);

.... and here you unlock an unlocked spinlock.

Sorry I missed that.

-Eric
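
For reference, one way the locking could be restored is to keep the
lock held across the LRU walk and do the single counting pass before
dropping it. The sketch below is reconstructed from the hunks quoted
above and is illustrative only, not a patch posted in this thread:

static int
mb_cache_shrink_fn(int nr_to_scan, gfp_t gfp_mask)
{
        LIST_HEAD(free_list);
        struct mb_cache *cache;
        struct mb_cache_entry *entry, *tmp;
        int count = 0;

        mb_debug("trying to free %d entries", nr_to_scan);
        spin_lock(&mb_cache_spinlock);
        /*
         * With nr_to_scan == 0 the loop body never runs, so the old
         * "goto out" shortcut is no longer needed.
         */
        while (nr_to_scan-- && !list_empty(&mb_cache_lru_list)) {
                struct mb_cache_entry *ce =
                        list_entry(mb_cache_lru_list.next,
                                   struct mb_cache_entry, e_lru_list);
                list_move_tail(&ce->e_lru_list, &free_list);
                __mb_cache_entry_unhash(ce);
        }
        /*
         * Count the remaining entries while still holding the lock,
         * then drop it exactly once.
         */
        list_for_each_entry(cache, &mb_cache_list, c_cache_list) {
                mb_debug("cache %s (%d)", cache->c_name,
                         atomic_read(&cache->c_entry_count));
                count += atomic_read(&cache->c_entry_count);
        }
        spin_unlock(&mb_cache_spinlock);

        list_for_each_entry_safe(entry, tmp, &free_list, e_lru_list)
                __mb_cache_entry_forget(entry, gfp_mask);

        return (count / 100) * sysctl_vfs_cache_pressure;
}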
