Date: Mon, 30 Sep 2002 18:38:03 +0200 (CEST)
From: Ingo Molnar
Reply-To: Ingo Molnar
To: Christoph Hellwig
Cc: lord@sgi.com, Arjan van de Ven, Linus Torvalds
Subject: Re: [patch] smptimers, old BH removal, tq-cleanup, 2.5.39
In-Reply-To: <20020930193713.A13195@sgi.com>

On Mon, 30 Sep 2002, Christoph Hellwig wrote:

> But it breaks XFS. Chris Wedgwood fixed it to use schedule_task()
> instead (and I cleaned it up a little more, see patch below), but this
> does effectively single-thread XFS I/O completion.

See the workqueues patch I posted a couple of minutes ago. Does this
solve XFS's problems?

> Alternatively we could allow kernel code to create its own keventds
> with associated task-queues, but that sounds rather ugly..

Why is it ugly? I can add a simple interface to the workqueues subsystem
that will bind the XFS worker threads to given sets of CPUs. That should
give you per-CPU workqueues, with separate per-CPU locking.

	Ingo
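
For context, the schedule_task() interface discussed above hands a
tq_struct to keventd, a single kernel thread shared by every
schedule_task() caller in the system, which is why routing XFS I/O
completion through it serializes all completion work. A minimal sketch
of that pattern, using the 2.4/2.5-era task-queue API; the XFS-flavored
names and the empty handler are illustrative, not the actual patch:

	#include <linux/tqueue.h>

	static void xfs_io_completion(void *data)
	{
		/* runs in keventd's process context -- serialized with
		 * every other schedule_task() user in the system */
	}

	static struct tq_struct xfs_completion_tq = {
		.routine = xfs_io_completion,
		.data    = NULL,
	};

	/* called from interrupt context when an I/O finishes */
	static void xfs_io_done(void)
	{
		schedule_task(&xfs_completion_tq);
	}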
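
And a sketch of what the workqueue-based equivalent might look like,
assuming the interface from the workqueues patch referenced above, in
the style of the API merged early in 2.5: create_workqueue() spawns one
worker thread per CPU, each with its own queue and its own lock. The
names are again illustrative:

	#include <linux/init.h>
	#include <linux/workqueue.h>

	static struct workqueue_struct *xfs_completion_wq;
	static struct work_struct xfs_completion_work;

	static void xfs_io_completion(void *data)
	{
		/* runs in the worker thread of the CPU that queued it */
	}

	static int __init xfs_wq_init(void)
	{
		/* one worker thread per CPU, with per-CPU queues/locks */
		xfs_completion_wq = create_workqueue("xfscompld");
		if (!xfs_completion_wq)
			return -ENOMEM;

		INIT_WORK(&xfs_completion_work, xfs_io_completion, NULL);
		return 0;
	}

	/* called from interrupt context when an I/O finishes */
	static void xfs_io_done(void)
	{
		queue_work(xfs_completion_wq, &xfs_completion_work);
	}

Because queue_work() queues onto the current CPU's worker, completion
work queued from CPU N is processed on CPU N, which is what gives the
per-CPU locking described above. The simple interface for binding the
worker threads to explicit CPU sets is not shown, since the message only
offers to add it.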