Date: Fri, 9 Nov 2001 10:48:58 +0000 (GMT)
From: Matthew Clark
To: linux-kernel@vger.kernel.org
Subject: dev driver / pci throughput

Dear kernel-list,

I am writing a device driver in which large amounts of data are passed from
user space to PCI device memory, and I am seeing far lower throughput than I
expected. I know this is likely to be highly architecture-dependent, but I
would appreciate some general guidance.

The essential bit of code looks like:

    #define CHUNK 512    /* 512 to 4096, depending on implementation */

    static ssize_t BSL_write(..., const char *buf, size_t count, ...)
    {
            char chunk[CHUNK];
            int i, pos;

            for (i = 0, pos = 0; i < count / CHUNK; i++, pos += CHUNK) {
                    ...
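A minimal, self-contained sketch of the kind of chunked copy loop described
above. The name bsl_io_base and the fixed 512-byte bounce buffer are
illustrative assumptions, not taken from the original post; the sketch also
assumes the device's PCI memory BAR has already been mapped with ioremap(),
and uses memcpy_toio() rather than a plain memcpy() for the final copy into
I/O memory:

    #include <linux/errno.h>
    #include <linux/fs.h>
    #include <linux/io.h>
    #include <linux/uaccess.h>

    #define CHUNK 512    /* small enough to live on the kernel stack */

    /* illustrative: set by ioremap() of the device's memory BAR at init time */
    static void __iomem *bsl_io_base;

    static ssize_t BSL_write(struct file *file, const char __user *buf,
                             size_t count, loff_t *ppos)
    {
            char chunk[CHUNK];
            size_t pos;

            /* copy CHUNK bytes at a time; remainder handling omitted */
            for (pos = 0; pos + CHUNK <= count; pos += CHUNK) {
                    /* stage one chunk of user data in a kernel bounce buffer */
                    if (copy_from_user(chunk, buf + pos, CHUNK))
                            return -EFAULT;
                    /* then push the staged chunk into the device's PCI memory window */
                    memcpy_toio(bsl_io_base + pos, chunk, CHUNK);
            }
            return pos;
    }

memcpy_toio() is the portable interface for copying into ioremap()ed memory;
whether that window is mapped uncached or write-combining is one of the
architecture-dependent factors that can dominate the throughput of a
CPU-driven copy like this.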