Date: Wed, 15 Sep 2004 11:28:31 -0700 (PDT)
From: Christoph Lameter
To: George Anzinger
Cc: john stultz, Albert Cahalan, lkml, tim@physik3.uni-rostock.de, Ulrich.Windl@rz.uni-regensburg.de, Len Brown, linux@dominikbrodowski.de, David Mosberger, Andi Kleen, paulus@samba.org, schwidefsky@de.ibm.com, jimix@us.ibm.com, keith maanthey, greg kh, Patricia Gaughen, Chris McDermott
Subject: Re: [RFC][PATCH] new timeofday core subsystem (v.A0)

On Wed, 15 Sep 2004, George Anzinger wrote:

> > I am not following you here. Why does the context switch overhead
> > increase?
> > Because there are multiple interrupts for different tasks done
> > in the tick?
>
> Each task has several timers, i.e. time slice, time limit, and possibly an
> itimer profile. Granted, only one of these needs to be sent to the timer
> code, but that takes a bit of time -- not much, but enough that a system
> with a modest amount of context switching will incur more timer management
> overhead than the periodic tick generates.

I thought that the timer handling could be kept separate from the time slice
handling of the scheduler. With the ability to schedule events as needed, the
scheduler could generate its own timing, independent of the necessary clock
adjustments. The tick would be disassembled into its various components.