Date: Fri, 17 Oct 2014 08:38:30 +0000 (UTC)
From: Tuomas Räsänen
To: Trond Myklebust
Cc: Linux NFS Mailing List
Message-ID: <566943989.27100.1413535110583.JavaMail.zimbra@opinsys.fi>
In-Reply-To:
References: <1508157147.125995.1412239112248.JavaMail.zimbra@opinsys.fi> <242225503.126017.1412240463629.JavaMail.zimbra@opinsys.fi>
Subject: Re: [RFC]: make nfs_wait_on_request() KILLABLE

----- Original Message -----
> From: "Trond Myklebust"
> On Thu, Oct 2, 2014 at 5:01 AM, Tuomas Räsänen wrote:
> > Hi
> >
> > Before David Jeffery's commit:
> >
> >     92a5655 nfs: Don't busy-wait on SIGKILL in __nfs_iocounter_wait
> >
> > we often experienced softlockups in our systems due to busy-looping
> > after SIGKILL.
> >
> > With that patch applied, the frequency of softlockups has decreased,
> > but they are not completely gone. Now the softlockups happen with the
> > following kind of call trace:
> >
> > [] ? kvm_clock_get_cycles+0x17/0x20
> > [] ? ktime_get_ts+0x48/0x140
> > [] ? nfs_free_request+0x90/0x90 [nfs]
> > [] io_schedule+0x86/0x100
> > [] nfs_wait_bit_uninterruptible+0xd/0x20 [nfs]
> > [] __wait_on_bit+0x51/0x70
> > [] ? nfs_free_request+0x90/0x90 [nfs]
> > [] ? nfs_free_request+0x90/0x90 [nfs]
> > [] out_of_line_wait_on_bit+0x5b/0x70
> > [] ? autoremove_wake_function+0x40/0x40
> > [] nfs_wait_on_request+0x2e/0x30 [nfs]
> > [] nfs_updatepage+0x11e/0x7d0 [nfs]
> > [] ? nfs_page_find_request+0x3b/0x50 [nfs]
> > [] ? nfs_flush_incompatible+0x6d/0xe0 [nfs]
> > [] nfs_write_end+0x110/0x280 [nfs]
> > [] ? kmap_atomic_prot+0xe2/0x100
> > [] ? __kunmap_atomic+0x63/0x80
> > [] generic_file_buffered_write+0x132/0x210
> > [] __generic_file_aio_write+0x25d/0x460
> > [] ? __nfs_revalidate_inode+0x102/0x2e0 [nfs]
> > [] generic_file_aio_write+0x53/0x90
> > [] nfs_file_write+0xa7/0x1d0 [nfs]
> > [] ? common_file_perm+0x4b/0xe0
> > [] do_sync_write+0x57/0x90
> > [] ? do_sync_readv_writev+0x80/0x80
> > [] vfs_write+0x95/0x1b0
> > [] SyS_write+0x49/0x90
> > [] syscall_call+0x7/0x7
> > [] ? balance_dirty_pages.isra.18+0x390/0x4c3
> >
> > As I understand it, there are some outstanding requests going on which
> > nfs_wait_on_request() is waiting for. For some reason, they are not
> > finished in a timely manner and the process is eventually killed with
>
> Why are those outstanding requests not completing, and why would
> killing the tasks that are waiting for that completion help?

I, quite naively, assumed that if the process just got killed, all the
bad things would magically go away. (I'm in the middle of replacing
assumptions with knowledge, that is, learning.)

The scenario in which we are experiencing the problem is as follows:

- Client kernels from the 3.10, 3.12 and 3.13 series
- Server kernel from the 3.10 series
- NFSv4.0-mounted /home, sec=krb5, lots of desktop users

Increasing the I/O load on /home seems to increase the likelihood of
lockups. Unfortunately the problem is relatively rare: it can take
several days of continuous automated desktop usage to trigger. But that
is obviously still far too frequent for production quality.

Do you have any ideas where I should look, what the potential causes of
traces like this might be, or how the problem could be reproduced more
effectively? I'd really appreciate any help.

--
Tuomas
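
P.S. For concreteness, the kind of change the subject line refers to
would be roughly the following (an untested sketch against a 3.10-era
tree, not a finished patch; it reuses nfs_wait_bit_killable() from
fs/nfs/inode.c, which returns -ERESTARTSYS when a fatal signal is
pending):

```c
/* Today (fs/nfs/pagelist.c): a task hit by SIGKILL keeps sleeping in
 * TASK_UNINTERRUPTIBLE until PG_BUSY is cleared -- this is the
 * nfs_wait_bit_uninterruptible frame in the trace above. */
int
nfs_wait_on_request(struct nfs_page *req)
{
	return wait_on_bit(&req->wb_flags, PG_BUSY,
			nfs_wait_bit_uninterruptible,
			TASK_UNINTERRUPTIBLE);
}

/* Sketch of a killable variant: wait in TASK_KILLABLE and bail out
 * with -ERESTARTSYS on a fatal signal. Callers such as
 * nfs_updatepage() would then have to handle the error return, which
 * is the part that still needs auditing. */
int
nfs_wait_on_request(struct nfs_page *req)
{
	return wait_on_bit(&req->wb_flags, PG_BUSY,
			nfs_wait_bit_killable,
			TASK_KILLABLE);
}
```

I realize this only changes where the killed task gives up waiting; it
does nothing about why the request never completes in the first place.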