From: Josh Boyer
To: Dave Chinner, Eric Sandeen
Cc: xfs@oss.sgi.com, linux-kernel@vger.kernel.org
Subject: XFS lockdep with Linux v3.17-5503-g35a9ad8af0bb
Date: Thu, 16 Oct 2014 07:52:43 -0400

Hi All,

Colin reported a lockdep spew with XFS using Linus' tree last week. The
lockdep report is below. He noted that his application was using splice.

josh

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1152813

[14689.265161] ======================================================
[14689.265175] [ INFO: possible circular locking dependency detected ]
[14689.265186] 3.18.0-0.rc0.git2.1.fc22.x86_64 #1 Not tainted
[14689.265190] -------------------------------------------------------
[14689.265199] atomic/1144 is trying to acquire lock:
[14689.265203]  (&sb->s_type->i_mutex_key#13){+.+.+.}, at: [] xfs_file_buffered_aio_write.isra.10+0x7a/0x310 [xfs]
[14689.265245] but task is already holding lock:
[14689.265249]  (&pipe->mutex/1){+.+.+.}, at: [] pipe_lock+0x1e/0x20
[14689.265262] which lock already depends on the new lock.
[14689.265268] the existing dependency chain (in reverse order) is:
[14689.265287] -> #2 (&pipe->mutex/1){+.+.+.}:
[14689.265296]        [] lock_acquire+0xa4/0x1d0
[14689.265303]        [] mutex_lock_nested+0x85/0x440
[14689.265310]        [] pipe_lock+0x1e/0x20
[14689.265315]        [] splice_to_pipe+0x2a/0x260
[14689.265321]        [] __generic_file_splice_read+0x57f/0x620
[14689.265328]        [] generic_file_splice_read+0x3b/0x90
[14689.265334]        [] xfs_file_splice_read+0xb0/0x1e0 [xfs]
[14689.265350]        [] do_splice_to+0x6c/0x90
[14689.265356]        [] SyS_splice+0x6dd/0x800
[14689.265362]        [] system_call_fastpath+0x16/0x1b
[14689.265368] -> #1 (&(&ip->i_iolock)->mr_lock){++++++}:
[14689.265424]        [] lock_acquire+0xa4/0x1d0
[14689.265494]        [] down_write_nested+0x5e/0xc0
[14689.265553]        [] xfs_ilock+0xb9/0x1c0 [xfs]
[14689.265629]        [] xfs_file_buffered_aio_write.isra.10+0x87/0x310 [xfs]
[14689.265693]        [] xfs_file_write_iter+0x8a/0x130 [xfs]
[14689.265749]        [] new_sync_write+0x8e/0xd0
[14689.265811]        [] vfs_write+0xba/0x200
[14689.265862]        [] SyS_write+0x5c/0xd0
[14689.265912]        [] system_call_fastpath+0x16/0x1b
[14689.265963] -> #0 (&sb->s_type->i_mutex_key#13){+.+.+.}:
[14689.266024]        [] __lock_acquire+0x1b0e/0x1c10
[14689.266024]        [] lock_acquire+0xa4/0x1d0
[14689.266024]        [] mutex_lock_nested+0x85/0x440
[14689.266024]        [] xfs_file_buffered_aio_write.isra.10+0x7a/0x310 [xfs]
[14689.266024]        [] xfs_file_write_iter+0x8a/0x130 [xfs]
[14689.266024]        [] iter_file_splice_write+0x2ec/0x4b0
[14689.266024]        [] SyS_splice+0x381/0x800
[14689.266024]        [] system_call_fastpath+0x16/0x1b
[14689.266024] other info that might help us debug this:
[14689.266024] Chain exists of:
  &sb->s_type->i_mutex_key#13 --> &(&ip->i_iolock)->mr_lock --> &pipe->mutex/1
[14689.266024]  Possible unsafe locking scenario:
[14689.266024]        CPU0                    CPU1
[14689.266024]        ----                    ----
[14689.266024]   lock(&pipe->mutex/1);
[14689.266024]                                lock(&(&ip->i_iolock)->mr_lock);
[14689.266024]                                lock(&pipe->mutex/1);
[14689.266024]   lock(&sb->s_type->i_mutex_key#13);
[14689.266024]  *** DEADLOCK ***
[14689.266024] 2 locks held by atomic/1144:
[14689.266024]  #0:  (sb_writers#8){.+.+.+}, at: [] SyS_splice+0x77f/0x800
[14689.266024]  #1:  (&pipe->mutex/1){+.+.+.}, at: [] pipe_lock+0x1e/0x20
[14689.266024] stack backtrace:
[14689.266024] CPU: 0 PID: 1144 Comm: atomic Not tainted 3.18.0-0.rc0.git2.1.fc22.x86_64 #1
[14689.266024] Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011
[14689.266024]  0000000000000000 00000000fd91796b ffff88003793bad0 ffffffff81838f3e
[14689.266024]  ffffffff82c03eb0 ffff88003793bb10 ffffffff81836b36 ffff88003793bb70
[14689.266024]  ffff88003828a670 ffff880038289a40 0000000000000002 ffff880038289ab0
[14689.266024] Call Trace:
[14689.266024]  [] dump_stack+0x4d/0x66
[14689.266024]  [] print_circular_bug+0x201/0x20f
[14689.266024]  [] __lock_acquire+0x1b0e/0x1c10
[14689.266024]  [] ? dump_trace+0x170/0x350
[14689.266024]  [] lock_acquire+0xa4/0x1d0
[14689.266024]  [] ? xfs_file_buffered_aio_write.isra.10+0x7a/0x310 [xfs]
[14689.266024]  [] mutex_lock_nested+0x85/0x440
[14689.266024]  [] ? xfs_file_buffered_aio_write.isra.10+0x7a/0x310 [xfs]
[14689.266024]  [] ? xfs_file_buffered_aio_write.isra.10+0x7a/0x310 [xfs]
[14689.266024]  [] ? mark_held_locks+0x7c/0xb0
[14689.266024]  [] xfs_file_buffered_aio_write.isra.10+0x7a/0x310 [xfs]
[14689.266024]  [] ? pipe_lock+0x1e/0x20
[14689.266024]  [] xfs_file_write_iter+0x8a/0x130 [xfs]
[14689.266024]  [] iter_file_splice_write+0x2ec/0x4b0
[14689.266024]  [] SyS_splice+0x381/0x800
[14689.266024]  [] system_call_fastpath+0x16/0x1b
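
For context, the two conflicting lock orders in the chain above can be
produced by an application that splices in both directions against a file
on XFS: splice(file -> pipe) goes through xfs_file_splice_read(), which
takes the XFS iolock and then pipe->mutex, while splice(pipe -> file) goes
through iter_file_splice_write(), which holds pipe->mutex and then takes
i_mutex (and the iolock) in xfs_file_buffered_aio_write(). The sketch below
is illustrative only; Colin's actual application is not shown in the report,
and the file paths are assumptions.

/*
 * Minimal illustrative sketch (not the reporter's application): copy a
 * file to another file on the same XFS mount using splice() in both
 * directions through a pipe, exercising the read and write paths named
 * in the lockdep chains above. The /mnt/xfs paths are assumptions.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        int pipefd[2];
        int src, dst;
        ssize_t n;

        if (pipe(pipefd) < 0) {
                perror("pipe");
                return 1;
        }

        src = open("/mnt/xfs/input.dat", O_RDONLY);
        dst = open("/mnt/xfs/output.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (src < 0 || dst < 0) {
                perror("open");
                return 1;
        }

        for (;;) {
                /* file -> pipe: the xfs_file_splice_read() path */
                n = splice(src, NULL, pipefd[1], NULL, 65536, SPLICE_F_MOVE);
                if (n <= 0)
                        break;

                /* pipe -> file: the iter_file_splice_write() path */
                while (n > 0) {
                        ssize_t w = splice(pipefd[0], NULL, dst, NULL, n,
                                           SPLICE_F_MOVE);
                        if (w <= 0) {
                                perror("splice write");
                                return 1;
                        }
                        n -= w;
                }
        }

        close(src);
        close(dst);
        close(pipefd[0]);
        close(pipefd[1]);
        return 0;
}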