Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id ; Sat, 5 Jan 2002 19:49:30 -0500
Received: (majordomo@vger.kernel.org) by vger.kernel.org
	id ; Sat, 5 Jan 2002 19:49:20 -0500
Received: from neon-gw-l3.transmeta.com ([63.209.4.196]:57864 "EHLO
	neon-gw.transmeta.com") by vger.kernel.org with ESMTP
	id ; Sat, 5 Jan 2002 19:49:12 -0500
Date: Sat, 5 Jan 2002 16:47:58 -0800 (PST)
From: Linus Torvalds
To: Davide Libenzi
cc: Ingo Molnar, lkml, Alan Cox
Subject: Re: [announce] [patch] ultra-scalable O(1) SMP and UP scheduler
In-Reply-To:
Message-ID:
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

On Sat, 5 Jan 2002, Davide Libenzi wrote:
>
> No Ingo, the fact that you coded the patch this time does not really
> change the workloads once you've a per-cpu run queue and lock. The
> thing that makes big servers suffer is the common queue, plus the
> cache coherency traffic due to the common lock.

What do the per-CPU queue patches look like? I agree with Davide that it
seems much more sensible from a scalability standpoint to allow O(n) (or
whatever) behaviour, but with local queues. That should also very
naturally give CPU affinity ;)

The problem with local queues is how to sanely break the CPU affinity
when it needs breaking. Which is not necessarily all that often, but it
clearly does need to happen. It would be nice to have the notion of a
cluster scheduler with a clear "transfer to another CPU" operation, one
that actively tries to minimize the number of transfers.

What are those algorithms like? This area must have tons of scientific
papers.

		Linus

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
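
[The cluster-scheduler idea in the message above - per-CPU run queues,
each guarded by its own lock, plus an explicit "transfer to another CPU"
operation - can be sketched in a few lines of C. This is a minimal
userspace model under stated assumptions, not the patch under
discussion: the names (struct runqueue, migrate_one_task, NR_CPUS) are
illustrative inventions, pthread mutexes stand in for kernel spinlocks,
and the O(n) linked-list queue stands in for whatever real policy a
scheduler would use. The point it shows is structural: normal enqueue
and dequeue touch only the local queue's lock, so there is no global
contention, and affinity is broken only through one explicit migration
call.]

/* Sketch: per-CPU run queues with an explicit migration operation. */
#include <pthread.h>
#include <stdio.h>

#define NR_CPUS 4

struct task {
	int pid;
	struct task *next;
};

struct runqueue {
	pthread_mutex_t lock;	/* per-queue lock: no global contention */
	int nr_running;
	struct task *head;	/* O(n) scan is fine; the queue is local */
};

static struct runqueue rq[NR_CPUS];

static void enqueue(struct runqueue *q, struct task *t)
{
	pthread_mutex_lock(&q->lock);
	t->next = q->head;
	q->head = t;
	q->nr_running++;
	pthread_mutex_unlock(&q->lock);
}

static struct task *dequeue(struct runqueue *q)
{
	struct task *t;

	pthread_mutex_lock(&q->lock);
	t = q->head;
	if (t) {
		q->head = t->next;
		q->nr_running--;
	}
	pthread_mutex_unlock(&q->lock);
	return t;
}

/*
 * The explicit affinity-breaking operation: move one task from the
 * source CPU's queue to the destination's.  Dequeue and enqueue each
 * take a single lock at a time, so two CPUs migrating concurrently
 * cannot deadlock on lock ordering.
 */
static void migrate_one_task(int src, int dst)
{
	struct task *t;

	if (src == dst)
		return;
	t = dequeue(&rq[src]);
	if (t) {
		printf("migrating pid %d: cpu%d -> cpu%d\n",
		       t->pid, src, dst);
		enqueue(&rq[dst], t);
	}
}

int main(void)
{
	struct task t1 = { .pid = 100 }, t2 = { .pid = 101 };
	int i;

	for (i = 0; i < NR_CPUS; i++) {
		pthread_mutex_init(&rq[i].lock, NULL);
		rq[i].head = NULL;
		rq[i].nr_running = 0;
	}
	enqueue(&rq[0], &t1);
	enqueue(&rq[0], &t2);
	migrate_one_task(0, 3);	/* break affinity for one task */
	for (i = 0; i < NR_CPUS; i++)
		printf("cpu%d: %d running\n", i, rq[i].nr_running);
	return 0;
}

[Minimizing the number of such migrate_one_task() calls is exactly the
hard policy question the message raises; the mechanism itself, as the
sketch suggests, is cheap because the common case never leaves the
local queue.]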