Date: Mon, 15 Nov 2004 17:46:04 +0100
From: Ingo Molnar
To: Mark_H_Johnson@raytheon.com
Cc: linux-kernel@vger.kernel.org, Lee Revell, Rui Nuno Capela,
    "K.R. Foley", Bill Huey, Adam Heath, Florian Schmidt, Thomas Gleixner,
    Michal Schmidt, Fernando Pablo Lopez-Lezcano, Karsten Wiese,
    Gunther Persoons, emann@mrv.com, Shane Shrybman, Amit Shah
Subject: Re: [patch] Real-Time Preemption, -RT-2.6.10-rc1-mm3-V0.7.25-1
Message-ID: <20041115164604.GA1456@elte.hu>

* Mark_H_Johnson@raytheon.com wrote:

> [1] major network delays while latencytest is running (ping drops
> packets or they get delayed by minutes). I did not see this on some
> previous tests where I made more of the /0 and /1 tasks RT. May have
> to do that again.

i think this is directly related to what priority the ksoftirqd threads
have.

> [6] the latency trace may have some SMP race conditions where the
> entries displayed do not match the header. Examples are a 100 usec
> trace header followed by 8 entries that last about 4 usec.
i think i fixed a related bug in the latest kernel(s):
touch_preempt_timing() was mistakenly 'touching' a live user-triggered
trace and could interfere in a similar fashion. Please re-report if this
still happens with -V0.7.26-3-ish or later kernels.

> [8] Some samples of /proc/loadavg during my big test showed some
> extremely large numbers. For example:
> 5.07 402.44 0.58 5/120 4448

i'm currently trying to track down this one. The rq->nr_uninterruptible
count got out of sync during one of the scheduler changes - and this
causes large negative task counts, messing up the load-average.

	Ingo