Date: Fri, 13 Oct 2006 18:52:44 -0700
From: Nishanth Aravamudan
To: ipslinux@adaptec.com
Cc: LKML
Subject: ips: scheduling while atomic in 2.6.18
Message-ID: <20061014015244.GC10744@us.ibm.com>

Hi all,

A server I administer just dumped three "scheduling while atomic" BUGs
before (sort of) hanging hard. It still responds to ping, but ssh is now
dead and the serial console has stopped logging. The box is an 8-way PIII
running 2.6.18 with the 3:1 split. I wanted to get my report out there
before I reset it, though.
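All three traces below bottom out in msleep() called from
ips_reset_copperhead_memio(), reached from ips_queue() via
scsi_dispatch_cmd() -- i.e. from the ->queuecommand() path, which (if I'm
reading 2.6.18 right) is entered with the host lock held and interrupts
disabled, so nothing on that path may sleep. As a rough sketch of the
pattern, with hypothetical names, not the actual ips code:

	#include <linux/spinlock.h>
	#include <linux/delay.h>

	static DEFINE_SPINLOCK(hw_lock);	/* made-up lock */

	static void reset_adapter(void)		/* made-up helper */
	{
		/* msleep() -> schedule_timeout() -> schedule(), i.e. it sleeps */
		msleep(1000);
	}

	static void queue_cmd(void)		/* stands in for ->queuecommand() */
	{
		unsigned long flags;

		spin_lock_irqsave(&hw_lock, flags);	/* atomic from here on */
		reset_adapter();	/* BUG: scheduling while atomic */
		spin_unlock_irqrestore(&hw_lock, flags);
	}

Presumably the reset would need to busy-wait (mdelay()) or be deferred out
of the atomic path, but I'll leave that to the driver folks.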
ips 0000:0d:06.0: Resetting controller.
BUG: scheduling while atomic: ipssend/0x00000001/11199
 [] schedule+0x8ad/0x920
 [] _spin_unlock_irqrestore+0xf/0x30
 [] release_console_sem+0x203/0x220
 [] vprintk+0x29e/0x380
 [] lock_timer_base+0x20/0x50
 [] _spin_unlock_irqrestore+0xf/0x30
 [] __mod_timer+0x9c/0xc0
 [] schedule_timeout+0x57/0xd0
 [] process_timeout+0x0/0x10
 [] msleep+0x28/0x40
 [] ips_reset_copperhead_memio+0x21/0x60
 [] __ips_eh_reset+0x17c/0x380
 [] scsi_done+0x0/0x30
 [] ips_queue+0x17e/0x1b0
 [] scsi_dispatch_cmd+0x161/0x260
 [] scsi_done+0x0/0x30
 [] scsi_times_out+0x0/0x80
 [] scsi_request_fn+0x187/0x2f0
 [] blk_execute_rq_nowait+0x6e/0xc0
 [] scsi_execute_async+0x2b1/0x3c0
 [] scsi_end_async+0x0/0x60
 [] sg_cmd_done+0x0/0x260
 [] sg_common_write+0x288/0x700
 [] sg_cmd_done+0x0/0x260
 [] sg_write+0x21c/0x300
 [] sunrpc_cache_lookup+0x140/0x150
 [] do_ioctl+0x87/0x90
 [] vfs_write+0xb5/0x190
 [] sys_write+0x4b/0x80
 [] syscall_call+0x7/0xb
BUG: scheduling while atomic: ipssend/0x00000001/11199
 [] schedule+0x8ad/0x920
 [] _spin_unlock_irqrestore+0xf/0x30
 [] release_console_sem+0x203/0x220
 [] lock_timer_base+0x20/0x50
 [] _spin_unlock_irqrestore+0xf/0x30
 [] __mod_timer+0x9c/0xc0
 [] schedule_timeout+0x57/0xd0
 [] process_timeout+0x0/0x10
 [] msleep+0x28/0x40
 [] ips_reset_copperhead_memio+0x37/0x60
 [] __ips_eh_reset+0x17c/0x380
 [] scsi_done+0x0/0x30
 [] ips_queue+0x17e/0x1b0
 [] scsi_dispatch_cmd+0x161/0x260
 [] scsi_done+0x0/0x30
 [] scsi_times_out+0x0/0x80
 [] scsi_request_fn+0x187/0x2f0
 [] blk_execute_rq_nowait+0x6e/0xc0
 [] scsi_execute_async+0x2b1/0x3c0
 [] scsi_end_async+0x0/0x60
 [] sg_cmd_done+0x0/0x260
 [] sg_common_write+0x288/0x700
 [] sg_cmd_done+0x0/0x260
 [] sg_write+0x21c/0x300
 [] sunrpc_cache_lookup+0x140/0x150
 [] do_ioctl+0x87/0x90
 [] vfs_write+0xb5/0x190
 [] sys_write+0x4b/0x80
 [] syscall_call+0x7/0xb
BUG: scheduling while atomic: ipssend/0x00000001/11199
 [] schedule+0x8ad/0x920
 [] schedule+0x3a7/0x920
 [] lock_timer_base+0x20/0x50
 [] _spin_unlock_irqrestore+0xf/0x30
 [] __mod_timer+0x9c/0xc0
 [] schedule_timeout+0x57/0xd0
 [] _spin_unlock_irqrestore+0xf/0x30
 [] process_timeout+0x0/0x10
 [] msleep+0x28/0x40
 [] ips_init_copperhead_memio+0x20/0x150
 [] process_timeout+0x0/0x10
 [] ips_reset_copperhead_memio+0x42/0x60
 [] __ips_eh_reset+0x17c/0x380
 [] scsi_done+0x0/0x30
 [] ips_queue+0x17e/0x1b0
 [] scsi_dispatch_cmd+0x161/0x260
 [] scsi_done+0x0/0x30
 [] scsi_times_out+0x0/0x80
 [] scsi_request_fn+0x187/0x2f0
 [] blk_execute_rq_nowait+0x6e/0xc0
 [] scsi_execute_async+0x2b1/0x3c0
 [] scsi_end_async+0x0/0x60
 [] sg_cmd_done+0x0/0x260
 [] sg_common_write+0x288/0x700
 [] sg_cmd_done+0x0/0x260
 [] sg_write+0x21c/0x300
 [] sunrpc_cache_lookup+0x140/0x150
 [] do_ioctl+0x87/0x90
 [] vfs_write+0xb5/0x190
 [] sys_write+0x4b/0x80
 [] syscall_call+0x7/0xb

Thanks,
Nish

-- 
Nishanth Aravamudan
IBM Linux Technology Center