From: Tuomas Tynkkynen
Subject: 9pfs hangs since 4.7
Date: Thu, 24 Nov 2016 21:50:23 +0200
Message-ID: <20161124215023.02deb03c@duuni>

Hi fsdevel,

I have been observing hangs when running xfstests generic/224. Curiously
enough, the test is *not* causing problems on the FS under test (I've tried
both ext4 and f2fs) but instead it's causing the 9pfs that I'm using as the
root filesystem to crap out. How it shows up is that the test doesn't finish
in time (usually takes ~50 sec) but the hung task detector triggers for some
task in d_alloc_parallel():

[ 660.701646] INFO: task 224:7800 blocked for more than 300 seconds.
[ 660.702756]   Not tainted 4.9.0-rc5 #1-NixOS
[ 660.703232] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 660.703927] 224   D   0  7800   549 0x00000000
[ 660.704501] ffff8a82ec022800 0000000000000000 ffff8a82fc03c800 ffff8a82ff217dc0
[ 660.705302] ffff8a82d0f88c00 ffffa94a41a27b88 ffffffffaeb4ad1d ffffa94a41a27b78
[ 660.706125] ffffffffae800fc6 ffff8a82fbd90f08 ffff8a82d0f88c00 ffff8a82fbfd5418
[ 660.706924] Call Trace:
[ 660.707185]  [] ? __schedule+0x18d/0x640
[ 660.707751]  [] ? __d_alloc+0x126/0x1e0
[ 660.708304]  [] schedule+0x36/0x80
[ 660.708841]  [] d_alloc_parallel+0x3a7/0x480
[ 660.709454]  [] ? wake_up_q+0x70/0x70
[ 660.710007]  [] lookup_slow+0x73/0x140
[ 660.710572]  [] walk_component+0x1ca/0x2f0
[ 660.711167]  [] ? path_init+0x1d9/0x330
[ 660.711747]  [] ? mntput+0x24/0x40
[ 660.716962]  [] path_lookupat+0x5d/0x110
[ 660.717581]  [] filename_lookup+0x9e/0x150
[ 660.718194]  [] ? kmem_cache_alloc+0x156/0x1b0
[ 660.719037]  [] ? getname_flags+0x56/0x1f0
[ 660.719801]  [] ? getname_flags+0x72/0x1f0
[ 660.720492]  [] user_path_at_empty+0x36/0x40
[ 660.721206]  [] vfs_fstatat+0x53/0xa0
[ 660.721980]  [] SYSC_newstat+0x1f/0x40
[ 660.722732]  [] SyS_newstat+0xe/0x10
[ 660.723702]  [] entry_SYSCALL_64_fastpath+0x1a/0xa9

SysRq-T is full of things stuck inside p9_client_rpc like:

[ 271.703598] bash   S   0   100    96 0x00000000
[ 271.703968] ffff8a82ff824800 0000000000000000 ffff8a82faee4800 ffff8a82ff217dc0
[ 271.704486] ffff8a82fb946c00 ffffa94a404ebae8 ffffffffaeb4ad1d ffff8a82fb9fc058
[ 271.705024] ffffa94a404ebb10 ffffffffae8f21f9 ffff8a82fb946c00 ffff8a82fbbba000
[ 271.705542] Call Trace:
[ 271.705715]  [] ? __schedule+0x18d/0x640
[ 271.706079]  [] ? idr_get_empty_slot+0x199/0x3b0
[ 271.706489]  [] schedule+0x36/0x80
[ 271.706825]  [] p9_client_rpc+0x12a/0x460 [9pnet]
[ 271.707239]  [] ? idr_alloc+0x87/0x100
[ 271.707596]  [] ? wake_atomic_t_function+0x60/0x60
[ 271.708043]  [] p9_client_walk+0x77/0x200 [9pnet]
[ 271.708459]  [] v9fs_vfs_lookup.part.16+0x59/0x120 [9p]
[ 271.708912]  [] v9fs_vfs_lookup+0x1f/0x30 [9p]
[ 271.709308]  [] lookup_slow+0x96/0x140
[ 271.709664]  [] walk_component+0x1ca/0x2f0
[ 271.710036]  [] ? path_init+0x1d9/0x330
[ 271.710390]  [] path_lookupat+0x5d/0x110
[ 271.710763]  [] filename_lookup+0x9e/0x150
[ 271.711136]  [] ? mem_cgroup_commit_charge+0x7e/0x4a0
[ 271.711581]  [] ? kmem_cache_alloc+0x156/0x1b0
[ 271.711977]  [] ? getname_flags+0x56/0x1f0
[ 271.712349]  [] ? getname_flags+0x72/0x1f0
[ 271.712726]  [] user_path_at_empty+0x36/0x40
[ 271.713110]  [] vfs_fstatat+0x53/0xa0
[ 271.713454]  [] SYSC_newstat+0x1f/0x40
[ 271.713810]  [] SyS_newstat+0xe/0x10
[ 271.714150]  [] entry_SYSCALL_64_fastpath+0x1a/0xa9

[ 271.729022] sleep   S   0   218   216 0x00000002
[ 271.729391] ffff8a82fb990800 0000000000000000 ffff8a82fc0d8000 ffff8a82ff317dc0
[ 271.729915] ffff8a82fbbec800 ffffa94a404f3cf8 ffffffffaeb4ad1d ffff8a82fb9fc058
[ 271.730426] ffffec95c1ee08c0 0000000000000001 ffff8a82fbbec800 ffff8a82fbbba000
[ 271.730950] Call Trace:
[ 271.731115]  [] ? __schedule+0x18d/0x640
[ 271.731479]  [] schedule+0x36/0x80
[ 271.731814]  [] p9_client_rpc+0x12a/0x460 [9pnet]
[ 271.732226]  [] ? wake_atomic_t_function+0x60/0x60
[ 271.732649]  [] p9_client_clunk+0x38/0xb0 [9pnet]
[ 271.733061]  [] v9fs_dir_release+0x1a/0x30 [9p]
[ 271.733494]  [] __fput+0xdf/0x1f0
[ 271.733844]  [] ____fput+0xe/0x10
[ 271.734176]  [] task_work_run+0x7e/0xa0
[ 271.734532]  [] do_exit+0x2b9/0xad0
[ 271.734888]  [] ? __do_page_fault+0x287/0x4b0
[ 271.735276]  [] do_group_exit+0x43/0xb0
[ 271.735639]  [] SyS_exit_group+0x14/0x20
[ 271.736002]  [] entry_SYSCALL_64_fastpath+0x1a/0xa9

Full dmesgs are available from:

https://gist.githubusercontent.com/dezgeg/31c2a50a1ce82e4284f6c9e617e7eba8/raw/e5ed6e62c7a1a5234d9316563154e530a2e95586/dmesg (shorter)
https://gist.githubusercontent.com/dezgeg/6989a0746ba8c000324455f473dc58e9/raw/4d7c6c58de88ef9d0367147c4ef1d990cfb267ce/dmesg (longer)

This typically reproduces quite fast, within half an hour or so. 4.7, 4.8.10
and 4.9-rc5 are all affected. It happens in a 2-core QEMU+KVM VM with 2 GB of
RAM using QEMU's built-in 9p server
(-virtfs local,path=MOUNTPOINT,security_model=none,mount_tag=store).

Any ideas for further debugging?
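In case it helps with reproducing, the setup looks roughly like this. Only
the -virtfs option above is verbatim from my setup; the kernel command line,
image names, devices and xfstests config below are illustrative placeholders
rather than my exact invocation:

    # Host: 2-core / 2 GB guest, host directory exported over QEMU's 9p server.
    qemu-system-x86_64 -enable-kvm -smp 2 -m 2048 \
        -kernel bzImage \
        -append 'root=store rootfstype=9p rootflags=trans=virtio,version=9p2000.L console=ttyS0' \
        -virtfs local,path=MOUNTPOINT,security_model=none,mount_tag=store \
        -drive file=test.img,if=virtio \
        -drive file=scratch.img,if=virtio \
        -nographic

    # Guest: the FS under test (ext4 or f2fs) lives on the virtio disks,
    # 9p is only the root filesystem. Devices/mountpoints are examples;
    # the variables can equally go in xfstests' local.config.
    cd xfstests
    export TEST_DEV=/dev/vda TEST_DIR=/mnt/test
    export SCRATCH_DEV=/dev/vdb SCRATCH_MNT=/mnt/scratch
    ./check generic/224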