Date: Wed, 2 Jan 2002 14:48:13 -0800 (PST)
From: Davide Libenzi
To: Peter Osterlund
Cc: lkml, Linus Torvalds
Subject: Re: [PATCH] scheduler fixups ...

On 2 Jan 2002, Peter Osterlund wrote:

> Davide Libenzi writes:
>
> > a still lower ts
>
> This also lowers the effectiveness of nice values. In 2.5.2-pre6, if I
> run two cpu hogs at nice values 0 and 19 respectively, the niced task
> will get approximately 20% cpu time (on x86 with HZ=100) and this
> patch will give even more cpu time to the niced task. Isn't 20% too
> much?

The problem is that with HZ == 100 you don't have enough granularity to
correctly scale down nice time slices. Shorter time slices help the
interactive feel; that's why I'm pushing for this. Anyway, I'm currently
running experiments with 30-40 ms time slices.

Another thing to remember is that cpu hog processes will sit at dyn_prio
0, while processes like gcc during a kernel build will range between 5-8
and 36; in that case their ts is effectively doubled by the fact that
they can pick up an extra time slice. For all processes that do not sit
at dyn_prio 0 (90% of them), the niced task's cpu time is going to be
halved.

- Davide
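
A minimal userland sketch of the two effects described above: tick
granularity swallowing the nice scaling, and the extra time slice for
tasks above dyn_prio 0. The constants (HZ, BASE_TS_MSEC), the linear
scaling formula, and the dyn_prio handling are illustrative assumptions,
not the actual 2.5.2-pre6 scheduler code.

#include <stdio.h>

#define HZ            100   /* ticks per second (x86 default) */
#define BASE_TS_MSEC  50    /* assumed base time slice, in ms */

/* Hypothetical linear scaling from nice -20 (full slice) to +19. */
static int nice_to_ticks(int nice)
{
	int msec  = BASE_TS_MSEC * (20 - nice) / 40;
	int ticks = msec * HZ / 1000;   /* quantize to whole ticks */

	return ticks > 0 ? ticks : 1;   /* never less than one tick */
}

/* Tasks above dyn_prio 0 (interactive, e.g. gcc in a kernel build)
 * are assumed here to earn one extra slice, doubling their ts. */
static int effective_ticks(int nice, int dyn_prio)
{
	int ts = nice_to_ticks(nice);

	return dyn_prio > 0 ? 2 * ts : ts;
}

int main(void)
{
	int nice;

	for (nice = -20; nice <= 19; nice += 13)   /* -20, -7, 6, 19 */
		printf("nice %3d -> %d ticks (%d ms)\n",
		       nice, nice_to_ticks(nice),
		       nice_to_ticks(nice) * 1000 / HZ);

	/* A gcc-like task at dyn_prio > 0 vs a cpu hog at dyn_prio 0. */
	printf("hog (dyn_prio 0): %d ticks\n", effective_ticks(0, 0));
	printf("gcc (dyn_prio 8): %d ticks\n", effective_ticks(0, 8));
	return 0;
}

With HZ == 100 a tick is 10 ms, so the nice 6 and nice 19 slices both
round up to a single tick and become indistinguishable; at HZ == 1000
the same formula would still keep them apart.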