From: Yang Guo <[email protected]>
@es_stats_cache_hits and @es_stats_cache_misses are accessed frequently in
ext4_es_lookup_extent(), which hurts ext4 read/write performance on NUMA
systems.
Convert them to percpu counters to improve performance.
The test command is as below:
fio -name=randwrite -numjobs=8 -filename=/mnt/test1 -rw=randwrite
-ioengine=libaio -direct=1 -iodepth=64 -sync=0 -norandommap -group_reporting
-runtime=120 -time_based -bs=4k -size=5G
The result is about 10% better than the original implementation:
without the patch: IOPS=197k, BW=770MiB/s (808MB/s)(90.3GiB/120002msec)
with the patch:    IOPS=218k, BW=852MiB/s (894MB/s)(99.9GiB/120002msec)
Cc: "Theodore Ts'o" <[email protected]>
Cc: Andreas Dilger <[email protected]>
Signed-off-by: Yang Guo <[email protected]>
Signed-off-by: Shaokun Zhang <[email protected]>
---
fs/ext4/extents_status.c | 20 +++++++++++++-------
fs/ext4/extents_status.h | 4 ++--
2 files changed, 15 insertions(+), 9 deletions(-)
diff --git a/fs/ext4/extents_status.c b/fs/ext4/extents_status.c
index 7521de2dcf3a..7699e80ae236 100644
--- a/fs/ext4/extents_status.c
+++ b/fs/ext4/extents_status.c
@@ -947,9 +947,9 @@ int ext4_es_lookup_extent(struct inode *inode, ext4_lblk_t lblk,
es->es_pblk = es1->es_pblk;
if (!ext4_es_is_referenced(es1))
ext4_es_set_referenced(es1);
- stats->es_stats_cache_hits++;
+ percpu_counter_inc(&stats->es_stats_cache_hits);
} else {
- stats->es_stats_cache_misses++;
+ percpu_counter_inc(&stats->es_stats_cache_misses);
}
read_unlock(&EXT4_I(inode)->i_es_lock);
@@ -1235,9 +1235,9 @@ int ext4_seq_es_shrinker_info_show(struct seq_file *seq, void *v)
seq_printf(seq, "stats:\n %lld objects\n %lld reclaimable objects\n",
percpu_counter_sum_positive(&es_stats->es_stats_all_cnt),
percpu_counter_sum_positive(&es_stats->es_stats_shk_cnt));
- seq_printf(seq, " %lu/%lu cache hits/misses\n",
- es_stats->es_stats_cache_hits,
- es_stats->es_stats_cache_misses);
+ seq_printf(seq, " %llu/%llu cache hits/misses\n",
+ percpu_counter_sum_positive(&es_stats->es_stats_cache_hits),
+ percpu_counter_sum_positive(&es_stats->es_stats_cache_misses));
if (inode_cnt)
seq_printf(seq, " %d inodes on list\n", inode_cnt);
@@ -1264,8 +1264,14 @@ int ext4_es_register_shrinker(struct ext4_sb_info *sbi)
sbi->s_es_nr_inode = 0;
spin_lock_init(&sbi->s_es_lock);
sbi->s_es_stats.es_stats_shrunk = 0;
- sbi->s_es_stats.es_stats_cache_hits = 0;
- sbi->s_es_stats.es_stats_cache_misses = 0;
+ err = percpu_counter_init(&sbi->s_es_stats.es_stats_cache_hits, 0,
+ GFP_KERNEL);
+ if (err)
+ return err;
+ err = percpu_counter_init(&sbi->s_es_stats.es_stats_cache_misses, 0,
+ GFP_KERNEL);
+ if (err)
+ return err;
sbi->s_es_stats.es_stats_scan_time = 0;
sbi->s_es_stats.es_stats_max_scan_time = 0;
err = percpu_counter_init(&sbi->s_es_stats.es_stats_all_cnt, 0, GFP_KERNEL);
diff --git a/fs/ext4/extents_status.h b/fs/ext4/extents_status.h
index 131a8b7df265..e722dd9bd06e 100644
--- a/fs/ext4/extents_status.h
+++ b/fs/ext4/extents_status.h
@@ -70,8 +70,8 @@ struct ext4_es_tree {
struct ext4_es_stats {
unsigned long es_stats_shrunk;
- unsigned long es_stats_cache_hits;
- unsigned long es_stats_cache_misses;
+ struct percpu_counter es_stats_cache_hits;
+ struct percpu_counter es_stats_cache_misses;
u64 es_stats_scan_time;
u64 es_stats_max_scan_time;
struct percpu_counter es_stats_all_cnt;
--
2.7.4
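
For context, the conversion relies on the standard percpu_counter lifecycle.
A minimal sketch follows (illustrative names, not the ext4 code itself; the
point is that every successful init must eventually be paired with a destroy):

#include <linux/gfp.h>
#include <linux/percpu_counter.h>

/* Illustrative container, standing in for ext4_es_stats. */
struct demo_stats {
	struct percpu_counter hits;
};

static int demo_stats_init(struct demo_stats *s)
{
	/* Allocates the per-CPU storage; can fail, so the caller must check. */
	return percpu_counter_init(&s->hits, 0, GFP_KERNEL);
}

static void demo_stats_hit(struct demo_stats *s)
{
	/* Hot path: a cheap per-CPU increment, no shared cache line bounced. */
	percpu_counter_inc(&s->hits);
}

static s64 demo_stats_read(struct demo_stats *s)
{
	/* Slow path: sum the per-CPU deltas when the value is reported. */
	return percpu_counter_sum_positive(&s->hits);
}

static void demo_stats_destroy(struct demo_stats *s)
{
	/* Frees the per-CPU storage; required once init has succeeded. */
	percpu_counter_destroy(&s->hits);
}
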
On Fri, Aug 23, 2019 at 10:47:34AM +0800, Shaokun Zhang wrote:
> From: Yang Guo <[email protected]>
>
> @es_stats_cache_hits and @es_stats_cache_misses are accessed frequently in
> ext4_es_lookup_extent(), which hurts ext4 read/write performance on NUMA
> systems.
> Convert them to percpu counters to improve performance.
>
> The test command is as below:
> fio -name=randwrite -numjobs=8 -filename=/mnt/test1 -rw=randwrite
> -ioengine=libaio -direct=1 -iodepth=64 -sync=0 -norandommap -group_reporting
> -runtime=120 -time_based -bs=4k -size=5G
>
> The result is about 10% better than the original implementation:
> without the patch: IOPS=197k, BW=770MiB/s (808MB/s)(90.3GiB/120002msec)
> with the patch:    IOPS=218k, BW=852MiB/s (894MB/s)(99.9GiB/120002msec)
>
> Cc: "Theodore Ts'o" <[email protected]>
> Cc: Andreas Dilger <[email protected]>
> Signed-off-by: Yang Guo <[email protected]>
> Signed-off-by: Shaokun Zhang <[email protected]>
Applied with some adjustments so it would apply. I also changed the patch summary to:
ext4: use percpu_counters for extent_status cache hits/misses
- Ted
On Sat, Aug 24, 2019 at 11:25:24PM -0400, Theodore Y. Ts'o wrote:
> On Fri, Aug 23, 2019 at 10:47:34AM +0800, Shaokun Zhang wrote:
> > From: Yang Guo <[email protected]>
> >
> > @es_stats_cache_hits and @es_stats_cache_misses are accessed frequently in
> > ext4_es_lookup_extent(), which hurts ext4 read/write performance on NUMA
> > systems.
> > Convert them to percpu counters to improve performance.
> >
> > The test command is as below:
> > fio -name=randwrite -numjobs=8 -filename=/mnt/test1 -rw=randwrite
> > -ioengine=libaio -direct=1 -iodepth=64 -sync=0 -norandommap -group_reporting
> > -runtime=120 -time_based -bs=4k -size=5G
> >
> > The result is about 10% better than the original implementation:
> > without the patch: IOPS=197k, BW=770MiB/s (808MB/s)(90.3GiB/120002msec)
> > with the patch:    IOPS=218k, BW=852MiB/s (894MB/s)(99.9GiB/120002msec)
> >
> > Cc: "Theodore Ts'o" <[email protected]>
> > Cc: Andreas Dilger <[email protected]>
> > Signed-off-by: Yang Guo <[email protected]>
> > Signed-off-by: Shaokun Zhang <[email protected]>
>
> Applied with some adjustments so it would apply. I also changed the patch summary to:
>
> ext4: use percpu_counters for extent_status cache hits/misses
>
> - Ted
This patch is causing the following. Probably because there are no calls to
percpu_counter_destroy() for the new counters?
==================================================================
BUG: KASAN: use-after-free in __list_del_entry_valid+0x168/0x180 lib/list_debug.c:51
Read of size 8 at addr ffff888063168fa8 by task umount/611
CPU: 1 PID: 611 Comm: umount Not tainted 5.3.0-rc4-00015-gcc08b68e62ec #6
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-20181126_142135-anatol 04/01/2014
Call Trace:
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0x86/0xca lib/dump_stack.c:113
print_address_description+0x6e/0x2e7 mm/kasan/report.c:351
__kasan_report.cold+0x1b/0x35 mm/kasan/report.c:482
kasan_report+0x12/0x17 mm/kasan/common.c:612
__asan_report_load8_noabort+0x14/0x20 mm/kasan/generic_report.c:132
__list_del_entry_valid+0x168/0x180 lib/list_debug.c:51
__list_del_entry include/linux/list.h:131 [inline]
list_del include/linux/list.h:139 [inline]
percpu_counter_destroy+0x5d/0x230 lib/percpu_counter.c:157
ext4_put_super+0x319/0xbb0 fs/ext4/super.c:1010
generic_shutdown_super+0x128/0x320 fs/super.c:458
kill_block_super+0x97/0xe0 fs/super.c:1310
deactivate_locked_super+0x7b/0xd0 fs/super.c:331
deactivate_super+0x138/0x150 fs/super.c:362
cleanup_mnt+0x298/0x3f0 fs/namespace.c:1102
__cleanup_mnt+0xd/0x10 fs/namespace.c:1109
task_work_run+0x103/0x180 kernel/task_work.c:113
tracehook_notify_resume include/linux/tracehook.h:188 [inline]
exit_to_usermode_loop+0x10b/0x130 arch/x86/entry/common.c:163
prepare_exit_to_usermode arch/x86/entry/common.c:194 [inline]
syscall_return_slowpath arch/x86/entry/common.c:274 [inline]
do_syscall_64+0x343/0x450 arch/x86/entry/common.c:299
entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x7f7caed23d77
Code: 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 31 f6 e9 09 00 00 00 66 0f 1f 84 00 00 00 00 00 b8 a6 00 8
RSP: 002b:00007ffe960e7c98 EFLAGS: 00000246 ORIG_RAX: 00000000000000a6
RAX: 0000000000000000 RBX: 0000560c039e1060 RCX: 00007f7caed23d77
RDX: 0000000000000001 RSI: 0000000000000000 RDI: 0000560c039e3c90
RBP: 0000560c039e3c90 R08: 0000560c039e2ec0 R09: 0000000000000014
R10: 00000000000006b4 R11: 0000000000000246 R12: 00007f7caf225e64
R13: 0000000000000000 R14: 0000560c039e1240 R15: 00007ffe960e7f20
Allocated by task 596:
save_stack mm/kasan/common.c:69 [inline]
set_track mm/kasan/common.c:77 [inline]
__kasan_kmalloc.part.0+0x41/0xb0 mm/kasan/common.c:487
__kasan_kmalloc.constprop.0+0xba/0xc0 mm/kasan/common.c:468
kasan_kmalloc+0x9/0x10 mm/kasan/common.c:501
kmem_cache_alloc_trace+0x11e/0x2e0 mm/slab.c:3550
kmalloc include/linux/slab.h:552 [inline]
kzalloc include/linux/slab.h:748 [inline]
ext4_fill_super+0x111/0x80a0 fs/ext4/super.c:3610
mount_bdev+0x286/0x350 fs/super.c:1283
ext4_mount+0x10/0x20 fs/ext4/super.c:6007
legacy_get_tree+0x101/0x1f0 fs/fs_context.c:661
vfs_get_tree+0x86/0x2e0 fs/super.c:1413
do_new_mount fs/namespace.c:2791 [inline]
do_mount+0x1093/0x1b30 fs/namespace.c:3111
ksys_mount+0x7d/0xd0 fs/namespace.c:3320
__do_sys_mount fs/namespace.c:3334 [inline]
__se_sys_mount fs/namespace.c:3331 [inline]
__x64_sys_mount+0xb9/0x150 fs/namespace.c:3331
do_syscall_64+0x8f/0x450 arch/x86/entry/common.c:296
entry_SYSCALL_64_after_hwframe+0x49/0xbe
Freed by task 600:
save_stack mm/kasan/common.c:69 [inline]
set_track mm/kasan/common.c:77 [inline]
__kasan_slab_free+0x127/0x1f0 mm/kasan/common.c:449
kasan_slab_free+0xe/0x10 mm/kasan/common.c:457
__cache_free mm/slab.c:3425 [inline]
kfree+0xc1/0x1e0 mm/slab.c:3756
ext4_put_super+0x78c/0xbb0 fs/ext4/super.c:1061
generic_shutdown_super+0x128/0x320 fs/super.c:458
kill_block_super+0x97/0xe0 fs/super.c:1310
deactivate_locked_super+0x7b/0xd0 fs/super.c:331
deactivate_super+0x138/0x150 fs/super.c:362
cleanup_mnt+0x298/0x3f0 fs/namespace.c:1102
__cleanup_mnt+0xd/0x10 fs/namespace.c:1109
task_work_run+0x103/0x180 kernel/task_work.c:113
tracehook_notify_resume include/linux/tracehook.h:188 [inline]
exit_to_usermode_loop+0x10b/0x130 arch/x86/entry/common.c:163
prepare_exit_to_usermode arch/x86/entry/common.c:194 [inline]
syscall_return_slowpath arch/x86/entry/common.c:274 [inline]
do_syscall_64+0x343/0x450 arch/x86/entry/common.c:299
entry_SYSCALL_64_after_hwframe+0x49/0xbe
The buggy address belongs to the object at ffff888063168980
which belongs to the cache kmalloc-4k of size 4096
The buggy address is located 1576 bytes inside of
4096-byte region [ffff888063168980, ffff888063169980)
The buggy address belongs to the page:
page:ffffea00015acec0 refcount:1 mapcount:0 mapping:ffff88806d000900 index:0x0 compound_mapcount: 0
flags: 0x100000000010200(slab|head)
raw: 0100000000010200 ffffea00015b9f08 ffffea0001749e58 ffff88806d000900
raw: 0000000000000000 ffff888063168980 0000000100000001
page dumped because: kasan: bad access detected
Memory state around the buggy address:
ffff888063168e80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff888063168f00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff888063168f80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
^
ffff888063169000: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
ffff888063169080: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================
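
A minimal sketch of the kind of cleanup the report points at, assuming the
teardown belongs in ext4_es_unregister_shrinker(), which runs from
ext4_put_super() and already destroys the other extent-status percpu counters
(this is a sketch, not the actual follow-up patch):

void ext4_es_unregister_shrinker(struct ext4_sb_info *sbi)
{
	/* New: tear down the cache hit/miss counters added by the patch. */
	percpu_counter_destroy(&sbi->s_es_stats.es_stats_cache_hits);
	percpu_counter_destroy(&sbi->s_es_stats.es_stats_cache_misses);
	/* Existing teardown of the other extent-status percpu counters. */
	percpu_counter_destroy(&sbi->s_es_stats.es_stats_all_cnt);
	percpu_counter_destroy(&sbi->s_es_stats.es_stats_shk_cnt);
	unregister_shrinker(&sbi->s_es_shrinker);
}

The error path in ext4_es_register_shrinker() would likewise need to destroy
whichever counters it had already initialized before returning an error,
rather than returning err directly as in the patch above.
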
Hi Eric,
On 2019/8/26 1:28, Eric Biggers wrote:
> On Sat, Aug 24, 2019 at 11:25:24PM -0400, Theodore Y. Ts'o wrote:
>> On Fri, Aug 23, 2019 at 10:47:34AM +0800, Shaokun Zhang wrote:
>>> From: Yang Guo <[email protected]>
>>>
>>> @es_stats_cache_hits and @es_stats_cache_misses are accessed frequently in
>>> ext4_es_lookup_extent(), which hurts ext4 read/write performance on NUMA
>>> systems.
>>> Convert them to percpu counters to improve performance.
>>>
>>> The test command is as below:
>>> fio -name=randwrite -numjobs=8 -filename=/mnt/test1 -rw=randwrite
>>> -ioengine=libaio -direct=1 -iodepth=64 -sync=0 -norandommap -group_reporting
>>> -runtime=120 -time_based -bs=4k -size=5G
>>>
>>> The result is about 10% better than the original implementation:
>>> without the patch: IOPS=197k, BW=770MiB/s (808MB/s)(90.3GiB/120002msec)
>>> with the patch:    IOPS=218k, BW=852MiB/s (894MB/s)(99.9GiB/120002msec)
>>>
>>> Cc: "Theodore Ts'o" <[email protected]>
>>> Cc: Andreas Dilger <[email protected]>
>>> Signed-off-by: Yang Guo <[email protected]>
>>> Signed-off-by: Shaokun Zhang <[email protected]>
>>
>> Applied with some adjustments so it would apply. I also changed the patch summary to:
>>
>> ext4: use percpu_counters for extent_status cache hits/misses
>>
>> - Ted
>
> This patch is causing the following. Probably because there are no calls to
> percpu_counter_destroy() for the new counters?
>
Apologies, we missed it; we will fix it soon.
Thanks,
Shaokun
> ==================================================================
> BUG: KASAN: use-after-free in __list_del_entry_valid+0x168/0x180 lib/list_debug.c:51
> Read of size 8 at addr ffff888063168fa8 by task umount/611
>
> CPU: 1 PID: 611 Comm: umount Not tainted 5.3.0-rc4-00015-gcc08b68e62ec #6
> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-20181126_142135-anatol 04/01/2014
> Call Trace:
> __dump_stack lib/dump_stack.c:77 [inline]
> dump_stack+0x86/0xca lib/dump_stack.c:113
> print_address_description+0x6e/0x2e7 mm/kasan/report.c:351
> __kasan_report.cold+0x1b/0x35 mm/kasan/report.c:482
> kasan_report+0x12/0x17 mm/kasan/common.c:612
> __asan_report_load8_noabort+0x14/0x20 mm/kasan/generic_report.c:132
> __list_del_entry_valid+0x168/0x180 lib/list_debug.c:51
> __list_del_entry include/linux/list.h:131 [inline]
> list_del include/linux/list.h:139 [inline]
> percpu_counter_destroy+0x5d/0x230 lib/percpu_counter.c:157
> ext4_put_super+0x319/0xbb0 fs/ext4/super.c:1010
> generic_shutdown_super+0x128/0x320 fs/super.c:458
> kill_block_super+0x97/0xe0 fs/super.c:1310
> deactivate_locked_super+0x7b/0xd0 fs/super.c:331
> deactivate_super+0x138/0x150 fs/super.c:362
> cleanup_mnt+0x298/0x3f0 fs/namespace.c:1102
> __cleanup_mnt+0xd/0x10 fs/namespace.c:1109
> task_work_run+0x103/0x180 kernel/task_work.c:113
> tracehook_notify_resume include/linux/tracehook.h:188 [inline]
> exit_to_usermode_loop+0x10b/0x130 arch/x86/entry/common.c:163
> prepare_exit_to_usermode arch/x86/entry/common.c:194 [inline]
> syscall_return_slowpath arch/x86/entry/common.c:274 [inline]
> do_syscall_64+0x343/0x450 arch/x86/entry/common.c:299
> entry_SYSCALL_64_after_hwframe+0x49/0xbe
> RIP: 0033:0x7f7caed23d77
> Code: 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 31 f6 e9 09 00 00 00 66 0f 1f 84 00 00 00 00 00 b8 a6 00 8
> RSP: 002b:00007ffe960e7c98 EFLAGS: 00000246 ORIG_RAX: 00000000000000a6
> RAX: 0000000000000000 RBX: 0000560c039e1060 RCX: 00007f7caed23d77
> RDX: 0000000000000001 RSI: 0000000000000000 RDI: 0000560c039e3c90
> RBP: 0000560c039e3c90 R08: 0000560c039e2ec0 R09: 0000000000000014
> R10: 00000000000006b4 R11: 0000000000000246 R12: 00007f7caf225e64
> R13: 0000000000000000 R14: 0000560c039e1240 R15: 00007ffe960e7f20
>
> Allocated by task 596:
> save_stack mm/kasan/common.c:69 [inline]
> set_track mm/kasan/common.c:77 [inline]
> __kasan_kmalloc.part.0+0x41/0xb0 mm/kasan/common.c:487
> __kasan_kmalloc.constprop.0+0xba/0xc0 mm/kasan/common.c:468
> kasan_kmalloc+0x9/0x10 mm/kasan/common.c:501
> kmem_cache_alloc_trace+0x11e/0x2e0 mm/slab.c:3550
> kmalloc include/linux/slab.h:552 [inline]
> kzalloc include/linux/slab.h:748 [inline]
> ext4_fill_super+0x111/0x80a0 fs/ext4/super.c:3610
> mount_bdev+0x286/0x350 fs/super.c:1283
> ext4_mount+0x10/0x20 fs/ext4/super.c:6007
> legacy_get_tree+0x101/0x1f0 fs/fs_context.c:661
> vfs_get_tree+0x86/0x2e0 fs/super.c:1413
> do_new_mount fs/namespace.c:2791 [inline]
> do_mount+0x1093/0x1b30 fs/namespace.c:3111
> ksys_mount+0x7d/0xd0 fs/namespace.c:3320
> __do_sys_mount fs/namespace.c:3334 [inline]
> __se_sys_mount fs/namespace.c:3331 [inline]
> __x64_sys_mount+0xb9/0x150 fs/namespace.c:3331
> do_syscall_64+0x8f/0x450 arch/x86/entry/common.c:296
> entry_SYSCALL_64_after_hwframe+0x49/0xbe
>
> Freed by task 600:
> save_stack mm/kasan/common.c:69 [inline]
> set_track mm/kasan/common.c:77 [inline]
> __kasan_slab_free+0x127/0x1f0 mm/kasan/common.c:449
> kasan_slab_free+0xe/0x10 mm/kasan/common.c:457
> __cache_free mm/slab.c:3425 [inline]
> kfree+0xc1/0x1e0 mm/slab.c:3756
> ext4_put_super+0x78c/0xbb0 fs/ext4/super.c:1061
> generic_shutdown_super+0x128/0x320 fs/super.c:458
> kill_block_super+0x97/0xe0 fs/super.c:1310
> deactivate_locked_super+0x7b/0xd0 fs/super.c:331
> deactivate_super+0x138/0x150 fs/super.c:362
> cleanup_mnt+0x298/0x3f0 fs/namespace.c:1102
> __cleanup_mnt+0xd/0x10 fs/namespace.c:1109
> task_work_run+0x103/0x180 kernel/task_work.c:113
> tracehook_notify_resume include/linux/tracehook.h:188 [inline]
> exit_to_usermode_loop+0x10b/0x130 arch/x86/entry/common.c:163
> prepare_exit_to_usermode arch/x86/entry/common.c:194 [inline]
> syscall_return_slowpath arch/x86/entry/common.c:274 [inline]
> do_syscall_64+0x343/0x450 arch/x86/entry/common.c:299
> entry_SYSCALL_64_after_hwframe+0x49/0xbe
>
> The buggy address belongs to the object at ffff888063168980
> which belongs to the cache kmalloc-4k of size 4096
> The buggy address is located 1576 bytes inside of
> 4096-byte region [ffff888063168980, ffff888063169980)
> The buggy address belongs to the page:
> page:ffffea00015acec0 refcount:1 mapcount:0 mapping:ffff88806d000900 index:0x0 compound_mapcount: 0
> flags: 0x100000000010200(slab|head)
> raw: 0100000000010200 ffffea00015b9f08 ffffea0001749e58 ffff88806d000900
> raw: 0000000000000000 ffff888063168980 0000000100000001
> page dumped because: kasan: bad access detected
>
> Memory state around the buggy address:
> ffff888063168e80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
> ffff888063168f00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>> ffff888063168f80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
> ^
> ffff888063169000: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
> ffff888063169080: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
> ==================================================================
Hi Ted,
On 2019/8/26 8:47, Theodore Y. Ts'o wrote:
> On Sun, Aug 25, 2019 at 10:28:03AM -0700, Eric Biggers wrote:
>> This patch is causing the following. Probably because there are no calls to
>> percpu_counter_destroy() for the new counters?
>
> Yeah, I noticed this from my test runs last night as well. It looks
> like the original patch was never tested with CONFIG_HOTPLUG_CPU.
>
Sorry, we missed it completely; we will double-check it and prepare a
proper patch carefully.
> The other problem with this patch is that it initializes
> es_stats_cache_hits and es_stats_cache_misses too late. They will
> get used when the journal inode is loaded. This is mostly harmless,
I have checked it again: @es_stats_cache_hits and @es_stats_cache_misses
are initialized before the journal inode is loaded. Maybe I am missing
something else?
egrep "ext4_es_register_shrinker|ext4_load_journal" fs/ext4/super.c
4260: if (ext4_es_register_shrinker(sbi))
4302: err = ext4_load_journal(sb, es, journal_devnum);
Thanks,
Shaokun
> but it's also wrong.
>
> I've dropped this patch from the ext4 git tree.
>
> - Ted
On Mon, Aug 26, 2019 at 04:24:20PM +0800, Shaokun Zhang wrote:
> > The other problem with this patch is that it initializes
> > es_stats_cache_hits and es_stats_cache_misses too late. They will
> > get used when the journal inode is loaded. This is mostly harmless,
>
> I have checked it again: @es_stats_cache_hits and @es_stats_cache_misses
> are initialized before the journal inode is loaded. Maybe I am missing
> something else?
No, sorry, that was my mistake. I misread things when I was looking
over your patch last night.
Please resubmit your patch once you've fixed things up and tested it.
I would recommend that you at least try running your patch through the
kvm-xfstests smoke test[1] before submitting it. It will save you and me
time.
[1] https://github.com/tytso/xfstests-bld/blob/master/Documentation/kvm-quickstart.md
Thanks,
- Ted
Hi Theodore,
On 2019/8/26 23:57, Theodore Y. Ts'o wrote:
> On Mon, Aug 26, 2019 at 04:24:20PM +0800, Shaokun Zhang wrote:
>>> The other problem with this patch is that it initializes
>>> es_stats_cache_hits and es_stats_cache_misses too late. They will
>>> get used when the journal inode is loaded. This is mostly harmless,
>>
>> I have checked it again: @es_stats_cache_hits and @es_stats_cache_misses
>> are initialized before the journal inode is loaded. Maybe I am missing
>> something else?
>
> No, sorry, that was my mistake. I misread things when I was looking
> over your patch last night.
>
> Please resubmit your patch once you've fixed things up and tested it.
>
Sure, will do it soon.
> I would recommend that you at least try running your patch through the
> kvm-xfstests smoke test[1] before submitting it. It will save you and me
> time.
>
OK, thank you for your guidance.
Shaokun,
> [1] https://github.com/tytso/xfstests-bld/blob/master/Documentation/kvm-quickstart.md
>
> Thanks,
>
> - Ted