From: David Hawkins
Date: Tue, 14 Apr 2009 15:27:16 -0700
To: Grant Likely
Cc: Ira Snyder, Arnd Bergmann, Jan-Bernd Themann, netdev@vger.kernel.org, Rusty Russell, linux-kernel@vger.kernel.org, linuxppc-dev@ozlabs.org
Subject: Re: [RFC v2] virtio: add virtio-over-PCI driver

Hi Grant,

> Hmmm, I hadn't thought about this. I was intending to use the
> Virtex's memory region for all virtio, but if I can allocate memory
> regions on both sides of the PCI bus, then that may be best.

Sounds like you can experiment and see what works best :)

>> If you use a PCI target-only core, then the MPC5200 DMA
>> controller will have to do all the work, and read transfers
>> might be slightly less efficient.
>
> I definitely intend to enable master mode on the Xilinx PCI controller.

Since you know the lingo, you clearly understand the differences
between the cores :)

>> Our target boards (PowerPC) live in CompactPCI backplanes
>> and talk to x86 boards that do not have DMA controllers.
>> So the PCI target boards' DMA controllers are used to
>> transfer data efficiently to the x86 host (writes)
>> and less efficiently from the host to the boards
>> (reads). Our bandwidth requirements are 'to the host',
>> so we can live with the asymmetry in performance.
>
> Fortunately I don't have very high bandwidth requirements for the
> first spin, so I have some room to experiment. :-)

Yes, in theory you have enough bandwidth ... and then a few features
get added, the PCI core turns out to be not quite as fast as
advertised, and so on :)

Cheers,
Dave
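
P.S. In case it's useful, here is a rough sketch of the "target-side
DMA push" described above, written against the kernel's dmaengine API.
This is an illustration only, not code from the driver under
discussion: it uses the current dmaengine wrappers rather than the
2009-era ones, the names vop_dma_push() and vop_get_chan() are made
up, and the destination is assumed to already be a bus address the
DMA controller can reach across PCI (setting up that inbound window
is board specific and not shown).

#include <linux/dmaengine.h>
#include <linux/errno.h>

/*
 * Push a buffer from the target board's memory to a host-visible
 * PCI bus address using the board's own DMA controller.  'dst' is
 * assumed to be a bus address already reachable across PCI.
 */
static int vop_dma_push(struct dma_chan *chan, dma_addr_t dst,
			dma_addr_t src, size_t len)
{
	struct dma_async_tx_descriptor *tx;
	dma_cookie_t cookie;

	tx = dmaengine_prep_dma_memcpy(chan, dst, src, len,
				       DMA_PREP_INTERRUPT);
	if (!tx)
		return -ENOMEM;

	cookie = dmaengine_submit(tx);
	if (dma_submit_error(cookie))
		return -EIO;

	/* Kick the engine and, for simplicity, poll for completion. */
	dma_async_issue_pending(chan);
	return dma_sync_wait(chan, cookie) == DMA_COMPLETE ? 0 : -EIO;
}

/* Grab any memcpy-capable channel (e.g. the MPC5200's). */
static struct dma_chan *vop_get_chan(void)
{
	dma_cap_mask_t mask;

	dma_cap_zero(mask);
	dma_cap_set(DMA_MEMCPY, mask);
	return dma_request_chan_by_mask(&mask);
}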