Subject: Re: [PATCH v2] irq: Add node_affinity CPU masks for smarter irqbalance hints
From: Peter P Waskiewicz Jr
To: Thomas Gleixner
Cc: David Miller, linux-kernel@vger.kernel.org, arjan@linux.jf.intel.com, mingo@elte.hu, yong.zhang0@gmail.com, netdev@vger.kernel.org
Date: Mon, 30 Nov 2009 09:24:02 -0800
Message-Id: <1259601842.2172.4.camel@localhost>
References: <20091124093518.3909.16435.stgit@ppwaskie-hc2.jf.intel.com>
 <20091124.095703.107687163.davem@davemloft.net>
 <1259100343.2631.78.camel@ppwaskie-mobl2>

On Tue, 2009-11-24 at 14:23 -0800, Thomas Gleixner wrote:
> On Tue, 24 Nov 2009, Peter P Waskiewicz Jr wrote:
> > On Tue, 2009-11-24 at 13:56 -0800, Thomas Gleixner wrote:
> > > On Tue, 24 Nov 2009, David Miller wrote:
> > > >
> > > > From: Thomas Gleixner
> > > > Date: Tue, 24 Nov 2009 12:07:35 +0100 (CET)
> > > >
> > > > > And what does the kernel do with this information and why are
> > > > > we not using the existing device/numa_node information?
> > > >
> > > > It's a different problem space, Thomas.
> > > >
> > > > If the device lives on NUMA node X, we still end up wanting to
> > > > allocate memory resources (RX ring buffers) on other NUMA nodes
> > > > on a per-queue basis.
> > > >
> > > > Otherwise a network card's forwarding performance is limited by
> > > > the memory bandwidth of a single NUMA node, and on multiqueue
> > > > cards we therefore fare much better by allocating each device
> > > > RX queue's memory resources on a different NUMA node.
> > > >
> > > > It is this NUMA usage that PJ is trying to export somehow to
> > > > userspace so that irqbalanced and friends can choose the IRQ
> > > > CPU masks more intelligently.
> > >
> > > So you need preferred IRQ mask information on a per-IRQ basis,
> > > and that mask is not restricted to the CPUs of a single NUMA
> > > node, right?
> > >
> > Just to clarify, I need a preferred CPU mask on a per-IRQ basis.
> > And yes, that mask may not be restricted to the CPUs of a single
> > NUMA node. But in the normal case, the mask will be restricted to
> > CPUs of a single node.
>
> Right, but the normal case does not help much if we need to consider
> the special case of multiple nodes being affected, which requires
> another cpumask in irq_desc. That's what I really want to avoid.
>
> I at least understand the exact problem you guys want to solve. Will
> think more about it.
>

Just a friendly ping, Thomas. Any progress on your thinking about this
proposal?

Cheers,
-PJ
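
As a footnote for readers of the archive, here is a minimal sketch of
the per-queue allocation pattern David describes above: each RX queue's
ring memory is placed on a different NUMA node so that forwarding is
not bottlenecked on one node's memory bandwidth. The struct my_ring and
my_setup_rx_rings() names are illustrative only, not from any real
driver, and the round-robin assumes node IDs 0..N-1 are all online;
a real driver would go through its own descriptor setup and DMA
mapping paths.

#include <linux/vmalloc.h>
#include <linux/nodemask.h>
#include <linux/string.h>
#include <linux/errno.h>

struct my_ring {
	void *desc;	/* ring memory */
	int node;	/* NUMA node backing this ring */
};

/* Round-robin each RX queue's ring memory over the online nodes. */
static int my_setup_rx_rings(struct my_ring *rings, int nr_queues,
			     size_t ring_size)
{
	int q;

	for (q = 0; q < nr_queues; q++) {
		/* assumes online node IDs are contiguous from 0 */
		int node = q % num_online_nodes();

		rings[q].node = node;
		rings[q].desc = vmalloc_node(ring_size, node);
		if (!rings[q].desc)
			return -ENOMEM;
		memset(rings[q].desc, 0, ring_size);
	}
	return 0;
}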
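
And a hedged sketch of the interface shape under discussion, assuming
a node_affinity cpumask added to struct irq_desc; the member name
follows the patch subject, while the helper, the locking, and any
/proc/irq/<irq>/node_affinity export path are assumptions here, not
the merged API. The per-IRQ cpumask is exactly the extra storage cost
Thomas objects to.

#include <linux/irq.h>
#include <linux/cpumask.h>
#include <linux/errno.h>

/*
 * Assumed extra member in struct irq_desc (this is what costs one
 * more cpumask per IRQ), allocated with alloc_cpumask_var() when the
 * descriptor is initialized:
 *
 *	cpumask_var_t node_affinity;
 *
 * A driver would call this to publish its preferred CPUs for the IRQ;
 * irqbalance would then read the mask back from userspace.
 */
int irq_set_node_affinity(unsigned int irq, const struct cpumask *mask)
{
	struct irq_desc *desc = irq_to_desc(irq);
	unsigned long flags;

	if (!desc)
		return -EINVAL;

	spin_lock_irqsave(&desc->lock, flags);
	cpumask_copy(desc->node_affinity, mask);
	spin_unlock_irqrestore(&desc->lock, flags);
	return 0;
}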