Subject: Re: [Fwd: Re: [patch] Real-Time Preemption, -RT-2.6.9-mm1-V0.4]
From: Lee Revell
To: Ingo Molnar
Cc: Florian Schmidt, Paul Davis, Thomas Gleixner, LKML,
    mark_h_johnson@raytheon.com, Bill Huey, Adam Heath, Michal Schmidt,
    Fernando Pablo Lopez-Lezcano, Karsten Wiese, jackit-devel,
    Rui Nuno Capela
Date: Fri, 29 Oct 2004 19:12:46 -0400

On Fri, 2004-10-29 at 23:46 +0200, Ingo Molnar wrote:
> * Lee Revell wrote:
>
> > On Fri, 2004-10-29 at 23:25 +0200, Ingo Molnar wrote:
> > > > will do so. btw: i think i'm a bit confused right now. What debugging
> > > > features should i have enabled for this test?
> > >
> > > this particular one (atomicity-checking) is always-enabled if you have
> > > the -RT patch applied (it's a really cheap check).
> >
> > One more question, what do you recommend the priorities of the IRQ
> > threads be set to? AIUI for xrun-free operation with JACK, all that
> > is needed is to set the RT priorities of the soundcard IRQ thread
> > highest, followed by the JACK threads, then the other IRQ threads. Is
> > this correct?
>
> correct. softirqs are not used by the sound subsystem so there's no
> ksoftirqd dependency.
>

OK, well, I set all IRQ threads to SCHED_OTHER except the soundcard,
which is SCHED_FIFO, and I still get a LOT of xruns, compared to zero
xruns over tens of millions of cycles with T3.

rlrevell@mindpipe:~$ for p in `ps auxww | grep IRQ | grep -v grep | awk '{print $2}'`; do chrt -p $p; done
pid 647's current scheduling policy: SCHED_OTHER
pid 647's current scheduling priority: 0
pid 655's current scheduling policy: SCHED_OTHER
pid 655's current scheduling priority: 0
pid 678's current scheduling policy: SCHED_OTHER
pid 678's current scheduling priority: 0
pid 693's current scheduling policy: SCHED_OTHER
pid 693's current scheduling priority: 0
pid 831's current scheduling policy: SCHED_OTHER
pid 831's current scheduling priority: 0
pid 835's current scheduling policy: SCHED_FIFO     <-- soundcard irq
pid 835's current scheduling priority: 90

Here is the dmesg output. It looks like the problem could be related to
jackd's printing from the realtime thread. But this has to be the
kernel's fault on some level, because with an earlier version I get no
xruns.

jackd:1726 userspace BUG: scheduling in user-atomic context!
 [] dump_stack+0x1c/0x20 (20)
 [] schedule+0x70/0x100 (24)
 [] work_resched+0x6/0x17 (-8124)
---------------------------
| preempt count: 00000001 ]
| 1-level deep critical section nesting:
----------------------------------------
.. [] .... print_traces+0x13/0x50
.....[] ..   ( <= dump_stack+0x1c/0x20)

jackd:1726 userspace BUG: scheduling in user-atomic context!
 [] dump_stack+0x1c/0x20 (20)
 [] schedule+0x70/0x100 (24)
 [] _down_write+0xcc/0x170 (32)
 [] lock_kernel+0x23/0x30 (16)
 [] tty_write+0x170/0x230 (64)
 [] vfs_write+0xbc/0x110 (36)
 [] sys_write+0x41/0x70 (44)
 [] syscall_call+0x7/0xb (-8124)
---------------------------
| preempt count: 00000001 ]
| 1-level deep critical section nesting:
----------------------------------------
.. [] .... print_traces+0x13/0x50
.....[] ..   ( <= dump_stack+0x1c/0x20)

jackd:1726 userspace BUG: scheduling in user-atomic context!
 [] dump_stack+0x1c/0x20 (20)
 [] schedule+0x70/0x100 (24)
 [] do_exit+0x29a/0x500 (24)
 [] sys_exit+0x16/0x20 (12)
 [] syscall_call+0x7/0xb (-8124)
---------------------------
| preempt count: 00000001 ]
| 1-level deep critical section nesting:
----------------------------------------
.. [] .... print_traces+0x13/0x50
.....[] ..   ( <= dump_stack+0x1c/0x20)

jackd:1731 userspace BUG: scheduling in user-atomic context!
 [] dump_stack+0x1c/0x20 (20)
 [] schedule+0x70/0x100 (24)
 [] do_exit+0x29a/0x500 (24)
 [] sys_exit+0x16/0x20 (12)
 [] syscall_call+0x7/0xb (-8124)
---------------------------
| preempt count: 00000001 ]
| 1-level deep critical section nesting:
----------------------------------------
.. [] .... print_traces+0x13/0x50
.....[] ..   ( <= dump_stack+0x1c/0x20)

jackd:1736 userspace BUG: scheduling in user-atomic context!
 [] dump_stack+0x1c/0x20 (20)
 [] schedule+0x70/0x100 (24)
 [] do_exit+0x29a/0x500 (24)
 [] sys_exit+0x16/0x20 (12)
 [] syscall_call+0x7/0xb (-8124)
---------------------------
| preempt count: 00000001 ]
| 1-level deep critical section nesting:
----------------------------------------
.. [] .... print_traces+0x13/0x50
.....[] ..   ( <= dump_stack+0x1c/0x20)

jackd:1775 userspace BUG: scheduling in user-atomic context!
 [] dump_stack+0x1c/0x20 (20)
 [] schedule+0x70/0x100 (24)
 [] do_exit+0x29a/0x500 (24)
 []
(ksoftirqd/0/2/CPU#0): new 12 us maximum-latency wakeup.
(ksoftirqd/0/2/CPU#0): new 15 us maximum-latency wakeup.
(ksoftirqd/0/2/CPU#0): new 22 us maximum-latency wakeup.
(ksoftirqd/0/2/CPU#0): new 31 us maximum-latency wakeup.
(ksoftirqd/0/2/CPU#0): new 32 us maximum-latency wakeup.
(IRQ 1/693/CPU#0): new 39 us maximum-latency wakeup.

jackd:1787 userspace BUG: scheduling in user-atomic context!
 [] dump_stack+0x1c/0x20 (20)
 [] schedule+0x70/0x100 (24)
 [] do_exit+0x29a/0x500 (24)
 [] sys_exit+0x16/0x20 (12)
 [] syscall_call+0x7/0xb (-8124)
---------------------------
| preempt count: 00000001 ]
| 1-level deep critical section nesting:
----------------------------------------
.. [] .... print_traces+0x13/0x50
.....[] ..   ( <= dump_stack+0x1c/0x20)

(ksoftirqd/0/2/CPU#0): new 42 us maximum-latency wakeup.
(desched/0/3/CPU#0): new 43 us maximum-latency wakeup.
(IRQ 15/678/CPU#0): new 44 us maximum-latency wakeup.
(IRQ 11/831/CPU#0): new 45 us maximum-latency wakeup.
(IRQ 11/831/CPU#0): new 52 us maximum-latency wakeup.
(IRQ 11/831/CPU#0): new 55 us maximum-latency wakeup.

jackd:1846 userspace BUG: scheduling in user-atomic context!
 [] dump_stack+0x1c/0x20 (20)
 [] schedule+0x70/0x100 (24)
 [] schedule_timeout+0xbb/0xc0 (80)
 [] futex_wait+0x10f/0x190 (168)
 [] do_futex+0x36/0x80 (32)
 [] sys_futex+0xca/0xe0 (68)
 [] syscall_call+0x7/0xb (-8124)
---------------------------
| preempt count: 00000001 ]
| 1-level deep critical section nesting:
----------------------------------------
.. [] .... print_traces+0x13/0x50
.....[] ..   ( <= dump_stack+0x1c/0x20)

jackd:1846 userspace BUG: scheduling in user-atomic context!
 [] dump_stack+0x1c/0x20 (20)
 [] schedule+0x70/0x100 (24)
 [] do_exit+0x29a/0x500 (24)
 [] sys_exit+0x16/0x20 (12)
 [] syscall_call+0x7/0xb (-8124)
---------------------------
| preempt count: 00000001 ]
| 1-level deep critical section nesting:
----------------------------------------
.. [] .... print_traces+0x13/0x50
.....[] ..   ( <= dump_stack+0x1c/0x20)

jackd:1854 userspace BUG: scheduling in user-atomic context!
 [] dump_stack+0x1c/0x20 (20)
 [] schedule+0x70/0x100 (24)
 [] do_exit+0x29a/0x500 (24)
 [] sys_exit+0x16/0x20 (12)
 [] syscall_call+0x7/0xb (-8124)
---------------------------
| preempt count: 00000001 ]
| 1-level deep critical section nesting:
----------------------------------------
.. [] .... print_traces+0x13/0x50
.....[] ..   ( <= dump_stack+0x1c/0x20)

Lee

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
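[Editor's sketch, not part of the original mail: the priority layout Ingo
confirms above (soundcard IRQ thread highest, JACK threads next, other IRQ
threads below) can be applied with chrt(1) along these lines. The PIDs and
priority values are assumptions taken from the ps/chrt listing in this
mail and will differ on any other system.]

```shell
#!/bin/sh
# Dry-run sketch of the recommended SCHED_FIFO priority layout.
# PIDs below are the ones from the ps listing in the mail (835 was the
# soundcard IRQ thread there); substitute your own before applying.
run() { echo "$@"; }                 # dry-run wrapper; drop the echo to apply

SND_IRQ_PID=835                      # soundcard IRQ thread
OTHER_IRQ_PIDS="647 655 678 693 831" # remaining IRQ threads

run chrt -f -p 90 "$SND_IRQ_PID"     # soundcard IRQ: highest RT priority
# JACK's own realtime threads would sit just below, e.g.:
#   chrt -f -p 80 <jackd pid>
for p in $OTHER_IRQ_PIDS; do
    run chrt -f -p 70 "$p"           # other IRQ threads: below JACK
done
```

The exact numbers matter less than the ordering: only the relative
priorities of the three tiers determine who preempts whom.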