2006-03-11 02:26:59

by Chris Leech

Subject: [PATCH 0/8] Intel I/O Acceleration Technology (I/OAT)

This patch series is a full release of the Intel(R) I/O
Acceleration Technology (I/OAT) for Linux. It includes an in-kernel API
for offloading memory copies to hardware, a driver for the I/OAT DMA memcpy
engine, and changes to the TCP stack to offload copies of received
networking data to application space.
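
As a taste of the API, here is a minimal client sketch (my_event_cb,
my_chan, and my_init are made-up names for illustration; error handling
trimmed):

	#include <linux/dmaengine.h>

	static struct dma_chan *my_chan;

	/* called by the core as channels are handed to / taken from us */
	static void my_event_cb(struct dma_client *client,
			struct dma_chan *chan, enum dma_event event)
	{
		if (event == DMA_RESOURCE_ADDED)
			my_chan = chan;
		else if (event == DMA_RESOURCE_REMOVED)
			my_chan = NULL;
	}

	static int __init my_init(void)
	{
		struct dma_client *client;

		client = dma_async_client_register(my_event_cb);
		if (!client)
			return -ENOMEM;
		/* the channel arrives via my_event_cb */
		dma_async_client_chan_request(client, 1);
		return 0;
	}

	/* later, with a channel in hand: copy, kick hardware, poll */
	cookie = dma_async_memcpy_buf_to_buf(my_chan, dest, src, len);
	dma_async_memcpy_issue_pending(my_chan);
	while (dma_async_memcpy_complete(my_chan, cookie, NULL, NULL)
			== DMA_IN_PROGRESS)
		cpu_relax();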

Changes from last week's posting:
fixed return value from sysfs show functions as suggested by Joe Perches
code style fixes suggested by Andrew Morton, David Miller, and others
renamed anything related to pinning pages from lock/locked to pin/pinned
renamed ioatdma register read/write functions with less generic names
return a pinned list from dma_pin_iovec_pages instead of passing in a
**dma_pinned_list
replaced all cb/CB symbol prefixes in ioatdma with ioat/IOAT,
CB was an abbreviation of an early code name
use set_page_dirty_lock instead of SetPageDirty pointed out by Andrew Morton
rename dma_async_try_early_copy to tcp_dma_try_early_copy and stop exporting

I'll be focusing on reducing ifdefs and adding much-needed comments, with
another release early next week.

These changes apply to DaveM's net-2.6.17 tree as of commit
32639ad6b7e3da27f233c0516471f0747f1178f5 ([SPARC]: Fixup SO_*SEC values on 32-bit sparc.)

They are available to pull from
git://198.78.49.142/~cleech/linux-2.6 ioat-2.6.17

There are 8 patches in the series:
1) The memcpy offload APIs and class code
2) The Intel I/OAT DMA driver (ioatdma)
3) Core networking code to setup networking as a DMA memcpy client
4) Utility functions for sk_buff to iovec offloaded copy
5) Structure changes needed for TCP receive offload
6) Rename cleanup_rbuf to tcp_cleanup_rbuf
7) Add a sysctl to tune the minimum offloaded I/O size for TCP
8) The main TCP receive offload changes

--
Chris Leech <[email protected]>
I/O Acceleration Technology Software Development
LAN Access Division / Digital Enterprise Group


2006-03-11 02:27:34

by Chris Leech

Subject: [PATCH 6/8] [I/OAT] Rename cleanup_rbuf to tcp_cleanup_rbuf and make non-static

Needed to be able to call tcp_cleanup_rbuf in tcp_input.c for I/OAT

Signed-off-by: Chris Leech <[email protected]>
---

include/net/tcp.h | 2 ++
net/ipv4/tcp.c | 10 +++++-----
2 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/include/net/tcp.h b/include/net/tcp.h
index 610f66b..afc4b8a 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -296,6 +296,8 @@ extern int tcp_rcv_established(struct

extern void tcp_rcv_space_adjust(struct sock *sk);

+extern void tcp_cleanup_rbuf(struct sock *sk, int copied);
+
extern int tcp_twsk_unique(struct sock *sk,
struct sock *sktw, void *twp);

diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 4b0272c..9122520 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -936,7 +936,7 @@ static int tcp_recv_urg(struct sock *sk,
* calculation of whether or not we must ACK for the sake of
* a window update.
*/
-static void cleanup_rbuf(struct sock *sk, int copied)
+void tcp_cleanup_rbuf(struct sock *sk, int copied)
{
struct tcp_sock *tp = tcp_sk(sk);
int time_to_ack = 0;
@@ -1085,7 +1085,7 @@ int tcp_read_sock(struct sock *sk, read_

/* Clean up data we have read: This will do ACK frames. */
if (copied)
- cleanup_rbuf(sk, copied);
+ tcp_cleanup_rbuf(sk, copied);
return copied;
}

@@ -1219,7 +1219,7 @@ int tcp_recvmsg(struct kiocb *iocb, stru
}
}

- cleanup_rbuf(sk, copied);
+ tcp_cleanup_rbuf(sk, copied);

if (!sysctl_tcp_low_latency && tp->ucopy.task == user_recv) {
/* Install new reader */
@@ -1390,7 +1390,7 @@ skip_copy:
*/

/* Clean up data we have read: This will do ACK frames. */
- cleanup_rbuf(sk, copied);
+ tcp_cleanup_rbuf(sk, copied);

TCP_CHECK_TIMER(sk);
release_sock(sk);
@@ -1852,7 +1852,7 @@ static int do_tcp_setsockopt(struct sock
(TCPF_ESTABLISHED | TCPF_CLOSE_WAIT) &&
inet_csk_ack_scheduled(sk)) {
icsk->icsk_ack.pending |= ICSK_ACK_PUSHED;
- cleanup_rbuf(sk, 1);
+ tcp_cleanup_rbuf(sk, 1);
if (!(val & 1))
icsk->icsk_ack.pingpong = 1;
}

2006-03-11 02:27:16

by Chris Leech

Subject: [PATCH 3/8] [I/OAT] Setup the networking subsystem as a DMA client

Attempts to allocate per-CPU DMA channels

Signed-off-by: Chris Leech <[email protected]>
---
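
For reference, how a consumer in the stack is expected to pick up one of
these per-CPU channels (fragment; dest/src/len are stand-ins, and
dma_chan_put() from patch 1 is the matching release):

	struct dma_chan *chan = get_softnet_dma();	/* takes a reference */

	if (chan) {
		cookie = dma_async_memcpy_buf_to_buf(chan, dest, src, len);
		dma_async_memcpy_issue_pending(chan);
		...
		dma_chan_put(chan);	/* drop the reference when done */
	}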

drivers/dma/Kconfig | 12 +++++
include/linux/netdevice.h | 6 +++
include/net/netdma.h | 37 +++++++++++++++++
net/core/dev.c | 100 +++++++++++++++++++++++++++++++++++++++++++++
4 files changed, 155 insertions(+), 0 deletions(-)

diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
index 0f15e76..30d021d 100644
--- a/drivers/dma/Kconfig
+++ b/drivers/dma/Kconfig
@@ -10,6 +10,18 @@ config DMA_ENGINE
DMA engines offload copy operations from the CPU to dedicated
hardware, allowing the copies to happen asynchronously.

+comment "DMA Clients"
+
+config NET_DMA
+ bool "Network: TCP receive copy offload"
+ depends on DMA_ENGINE && NET
+ default y
+ ---help---
+ This enables the use of DMA engines in the network stack to
+ offload receive copy-to-user operations, freeing CPU cycles.
+ Since this is the main user of the DMA engine, it should be enabled;
+ say Y here.
+
comment "DMA Devices"

config INTEL_IOATDMA
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 950dc55..25d8610 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -37,6 +37,9 @@
#include <linux/config.h>
#include <linux/device.h>
#include <linux/percpu.h>
+#ifdef CONFIG_NET_DMA
+#include <linux/dmaengine.h>
+#endif

struct divert_blk;
struct vlan_group;
@@ -592,6 +595,9 @@ struct softnet_data
struct sk_buff *completion_queue;

struct net_device backlog_dev; /* Sorry. 8) */
+#ifdef CONFIG_NET_DMA
+ struct dma_chan *net_dma;
+#endif
};

DECLARE_PER_CPU(struct softnet_data,softnet_data);
diff --git a/include/net/netdma.h b/include/net/netdma.h
new file mode 100644
index 0000000..6435aef
--- /dev/null
+++ b/include/net/netdma.h
@@ -0,0 +1,37 @@
+/*****************************************************************************
+Copyright(c) 2004 - 2006 Intel Corporation. All rights reserved.
+
+This program is free software; you can redistribute it and/or modify it
+under the terms of the GNU General Public License as published by the Free
+Software Foundation; either version 2 of the License, or (at your option)
+any later version.
+
+This program is distributed in the hope that it will be useful, but WITHOUT
+ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+more details.
+
+You should have received a copy of the GNU General Public License along with
+this program; if not, write to the Free Software Foundation, Inc., 59
+Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+
+The full GNU General Public License is included in this distribution in the
+file called LICENSE.
+*****************************************************************************/
+#ifndef NETDMA_H
+#define NETDMA_H
+#ifdef CONFIG_NET_DMA
+#include <linux/dmaengine.h>
+
+static inline struct dma_chan *get_softnet_dma(void)
+{
+ struct dma_chan *chan;
+ rcu_read_lock();
+ chan = rcu_dereference(__get_cpu_var(softnet_data.net_dma));
+ if (chan)
+ dma_chan_get(chan);
+ rcu_read_unlock();
+ return chan;
+}
+#endif /* CONFIG_NET_DMA */
+#endif /* NETDMA_H */
diff --git a/net/core/dev.c b/net/core/dev.c
index f7f6f99..d7e61b4 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -115,6 +115,7 @@
#include <linux/wireless.h> /* Note : will define WIRELESS_EXT */
#include <net/iw_handler.h>
#endif /* CONFIG_NET_RADIO */
+#include <linux/dmaengine.h>
#include <asm/current.h>

/*
@@ -149,6 +150,11 @@ static DEFINE_SPINLOCK(ptype_lock);
static struct list_head ptype_base[16]; /* 16 way hashed list */
static struct list_head ptype_all; /* Taps */

+#ifdef CONFIG_NET_DMA
+static struct dma_client *net_dma_client;
+static unsigned int net_dma_count;
+#endif
+
/*
* The @dev_base list is protected by @dev_base_lock and the rtln
* semaphore.
@@ -1750,6 +1756,19 @@ static void net_rx_action(struct softirq
}
}
out:
+#ifdef CONFIG_NET_DMA
+ /*
+ * There may not be any more sk_buffs coming right now, so push
+ * any pending DMA copies to hardware
+ */
+ if (net_dma_client) {
+ struct dma_chan *chan;
+ rcu_read_lock();
+ list_for_each_entry_rcu(chan, &net_dma_client->channels, client_node)
+ dma_async_memcpy_issue_pending(chan);
+ rcu_read_unlock();
+ }
+#endif
local_irq_enable();
return;

@@ -3205,6 +3224,85 @@ static int dev_cpu_callback(struct notif
}
#endif /* CONFIG_HOTPLUG_CPU */

+#ifdef CONFIG_NET_DMA
+/**
+ * net_dma_rebalance - redistribute allocated channels among online CPUs
+ *
+ * This is called when the number of channels allocated to the net_dma_client
+ * changes. The net_dma_client tries to have one DMA channel per CPU.
+ */
+static void net_dma_rebalance(void)
+{
+ unsigned int cpu, i, n;
+ struct dma_chan *chan;
+
+ lock_cpu_hotplug();
+
+ if (net_dma_count == 0) {
+ for_each_online_cpu(cpu)
+ rcu_assign_pointer(per_cpu(softnet_data.net_dma, cpu), NULL);
+ unlock_cpu_hotplug();
+ return;
+ }
+
+ i = 0;
+ cpu = first_cpu(cpu_online_map);
+
+ rcu_read_lock();
+ list_for_each_entry(chan, &net_dma_client->channels, client_node) {
+ n = ((num_online_cpus() / net_dma_count)
+ + (i < (num_online_cpus() % net_dma_count) ? 1 : 0));
+
+ while(n) {
+ per_cpu(softnet_data.net_dma, cpu) = chan;
+ cpu = next_cpu(cpu, cpu_online_map);
+ n--;
+ }
+ i++;
+ }
+ rcu_read_unlock();
+
+ unlock_cpu_hotplug();
+}
+
+/**
+ * netdev_dma_event - event callback for the net_dma_client
+ * @client: should always be net_dma_client
+ * @chan: DMA channel the event applies to
+ * @event: type of channel event (add/remove/suspend/resume)
+ */
+static void netdev_dma_event(struct dma_client *client, struct dma_chan *chan,
+ enum dma_event event)
+{
+ switch (event) {
+ case DMA_RESOURCE_ADDED:
+ net_dma_count++;
+ net_dma_rebalance();
+ break;
+ case DMA_RESOURCE_REMOVED:
+ net_dma_count--;
+ net_dma_rebalance();
+ break;
+ default:
+ break;
+ }
+}
+
+/**
+ * netdev_dma_register - register the networking subsystem as a DMA client
+ */
+static int __init netdev_dma_register(void)
+{
+ net_dma_client = dma_async_client_register(netdev_dma_event);
+ if (net_dma_client == NULL)
+ return -ENOMEM;
+
+ dma_async_client_chan_request(net_dma_client, num_online_cpus());
+ return 0;
+}
+
+#else
+static int __init netdev_dma_register(void) { return -ENODEV; }
+#endif /* CONFIG_NET_DMA */

/*
* Initialize the DEV module. At boot time this walks the device list and
@@ -3258,6 +3356,8 @@ static int __init net_dev_init(void)
atomic_set(&queue->backlog_dev.refcnt, 1);
}

+ netdev_dma_register();
+
dev_boot_phase = 0;

open_softirq(NET_TX_SOFTIRQ, net_tx_action, NULL);

2006-03-11 02:27:07

by Chris Leech

Subject: [PATCH 1/8] [I/OAT] DMA memcpy subsystem

Provides an API for offloading memory copies to DMA devices

Signed-off-by: Chris Leech <[email protected]>
---
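
For reference, the completion model: every submitted copy returns an
increasing (and eventually wrapping) cookie, so one poll of channel state
can be checked against many outstanding cookies (illustrative fragment;
cookie_1/cookie_n are stand-ins):

	dma_cookie_t last, used;

	/* one poll of the channel's progress ... */
	dma_async_memcpy_complete(chan, cookie_n, &last, &used);

	/* ... then test any cookie cheaply, without touching hardware */
	if (dma_async_is_complete(cookie_1, last, used) == DMA_SUCCESS)
		/* that copy has finished */;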

drivers/Kconfig | 2
drivers/Makefile | 1
drivers/dma/Kconfig | 13 ++
drivers/dma/Makefile | 1
drivers/dma/dmaengine.c | 360 +++++++++++++++++++++++++++++++++++++++++++++
include/linux/dmaengine.h | 323 ++++++++++++++++++++++++++++++++++++++++
6 files changed, 700 insertions(+), 0 deletions(-)

diff --git a/drivers/Kconfig b/drivers/Kconfig
index bddf431..ce7ffa7 100644
--- a/drivers/Kconfig
+++ b/drivers/Kconfig
@@ -70,4 +70,6 @@ source "drivers/sn/Kconfig"

source "drivers/edac/Kconfig"

+source "drivers/dma/Kconfig"
+
endmenu
diff --git a/drivers/Makefile b/drivers/Makefile
index 5c69b86..516ba5e 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -73,3 +73,4 @@ obj-$(CONFIG_SGI_SN) += sn/
obj-y += firmware/
obj-$(CONFIG_CRYPTO) += crypto/
obj-$(CONFIG_SUPERH) += sh/
+obj-$(CONFIG_DMA_ENGINE) += dma/
diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
new file mode 100644
index 0000000..f9ac4bc
--- /dev/null
+++ b/drivers/dma/Kconfig
@@ -0,0 +1,13 @@
+#
+# DMA engine configuration
+#
+
+menu "DMA Engine support"
+
+config DMA_ENGINE
+ bool "Support for DMA engines"
+ ---help---
+ DMA engines offload copy operations from the CPU to dedicated
+ hardware, allowing the copies to happen asynchronously.
+
+endmenu
diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
new file mode 100644
index 0000000..10b7391
--- /dev/null
+++ b/drivers/dma/Makefile
@@ -0,0 +1 @@
+obj-y += dmaengine.o
diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
new file mode 100644
index 0000000..35a63d8
--- /dev/null
+++ b/drivers/dma/dmaengine.c
@@ -0,0 +1,360 @@
+/*****************************************************************************
+Copyright(c) 2004 - 2006 Intel Corporation. All rights reserved.
+
+This program is free software; you can redistribute it and/or modify it
+under the terms of the GNU General Public License as published by the Free
+Software Foundation; either version 2 of the License, or (at your option)
+any later version.
+
+This program is distributed in the hope that it will be useful, but WITHOUT
+ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+more details.
+
+You should have received a copy of the GNU General Public License along with
+this program; if not, write to the Free Software Foundation, Inc., 59
+Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+
+The full GNU General Public License is included in this distribution in the
+file called LICENSE.
+*****************************************************************************/
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/device.h>
+#include <linux/dmaengine.h>
+#include <linux/hardirq.h>
+#include <linux/spinlock.h>
+#include <linux/percpu.h>
+#include <linux/rcupdate.h>
+
+static DEFINE_SPINLOCK(dma_list_lock);
+static LIST_HEAD(dma_device_list);
+static LIST_HEAD(dma_client_list);
+
+/* --- sysfs implementation --- */
+
+static ssize_t show_memcpy_count(struct class_device *cd, char *buf)
+{
+ struct dma_chan *chan = container_of(cd, struct dma_chan, class_dev);
+ unsigned long count = 0;
+ int i;
+
+ for_each_cpu(i)
+ count += per_cpu_ptr(chan->local, i)->memcpy_count;
+
+ return sprintf(buf, "%lu\n", count);
+}
+
+static ssize_t show_bytes_transferred(struct class_device *cd, char *buf)
+{
+ struct dma_chan *chan = container_of(cd, struct dma_chan, class_dev);
+ unsigned long count = 0;
+ int i;
+
+ for_each_cpu(i)
+ count += per_cpu_ptr(chan->local, i)->bytes_transferred;
+
+ return sprintf(buf, "%lu\n", count);
+}
+
+static ssize_t show_in_use(struct class_device *cd, char *buf)
+{
+ struct dma_chan *chan = container_of(cd, struct dma_chan, class_dev);
+
+ return sprintf(buf, "%d\n", (chan->client ? 1 : 0));
+}
+
+static struct class_device_attribute dma_class_attrs[] = {
+ __ATTR(memcpy_count, S_IRUGO, show_memcpy_count, NULL),
+ __ATTR(bytes_transferred, S_IRUGO, show_bytes_transferred, NULL),
+ __ATTR(in_use, S_IRUGO, show_in_use, NULL),
+ __ATTR_NULL
+};
+
+static void dma_async_device_cleanup(struct kref *kref);
+
+static void dma_class_dev_release(struct class_device *cd)
+{
+ struct dma_chan *chan = container_of(cd, struct dma_chan, class_dev);
+ kref_put(&chan->device->refcount, dma_async_device_cleanup);
+}
+
+static struct class dma_devclass = {
+ .name = "dma",
+ .class_dev_attrs = dma_class_attrs,
+ .release = dma_class_dev_release,
+};
+
+/* --- client and device registration --- */
+
+/**
+ * dma_client_chan_alloc - try to allocate a channel to a client
+ * @client: &dma_client
+ *
+ * Called with dma_list_lock held.
+ */
+static struct dma_chan *dma_client_chan_alloc(struct dma_client *client)
+{
+ struct dma_device *device;
+ struct dma_chan *chan;
+ unsigned long flags;
+
+ /* Find a channel, any DMA engine will do */
+ list_for_each_entry(device, &dma_device_list, global_node) {
+ list_for_each_entry(chan, &device->channels, device_node) {
+ if (chan->client)
+ continue;
+
+ if (chan->device->device_alloc_chan_resources(chan) >= 0) {
+ kref_get(&device->refcount);
+ kref_init(&chan->refcount);
+ chan->slow_ref = 0;
+ INIT_RCU_HEAD(&chan->rcu);
+ chan->client = client;
+ spin_lock_irqsave(&client->lock, flags);
+ list_add_tail_rcu(&chan->client_node, &client->channels);
+ spin_unlock_irqrestore(&client->lock, flags);
+ return chan;
+ }
+ }
+ }
+
+ return NULL;
+}
+
+/**
+ * dma_chan_cleanup - release a DMA channel's resources
+ * @kref: kernel reference structure embedded in the DMA channel
+ */
+void dma_chan_cleanup(struct kref *kref)
+{
+ struct dma_chan *chan = container_of(kref, struct dma_chan, refcount);
+ chan->device->device_free_chan_resources(chan);
+ chan->client = NULL;
+ kref_put(&chan->device->refcount, dma_async_device_cleanup);
+}
+
+static void dma_chan_free_rcu(struct rcu_head *rcu)
+{
+ struct dma_chan *chan = container_of(rcu, struct dma_chan, rcu);
+ int bias = 0x7FFFFFFF;
+ int i;
+ for_each_cpu(i)
+ bias -= local_read(&per_cpu_ptr(chan->local, i)->refcount);
+ atomic_sub(bias, &chan->refcount.refcount);
+ kref_put(&chan->refcount, dma_chan_cleanup);
+}
+
+static void dma_client_chan_free(struct dma_chan *chan)
+{
+ atomic_add(0x7FFFFFFF, &chan->refcount.refcount);
+ chan->slow_ref = 1;
+ call_rcu(&chan->rcu, dma_chan_free_rcu);
+}
+
+/**
+ * dma_chans_rebalance - reallocate channels to clients
+ *
+ * When the number of DMA channels in the system changes,
+ * channels need to be rebalanced among clients
+ */
+static void dma_chans_rebalance(void)
+{
+ struct dma_client *client;
+ struct dma_chan *chan;
+ unsigned long flags;
+
+ spin_lock(&dma_list_lock);
+ list_for_each_entry(client, &dma_client_list, global_node) {
+
+ while (client->chans_desired > client->chan_count) {
+ chan = dma_client_chan_alloc(client);
+ if (!chan)
+ break;
+
+ client->chan_count++;
+ client->event_callback(client, chan, DMA_RESOURCE_ADDED);
+ }
+
+ while (client->chans_desired < client->chan_count) {
+ spin_lock_irqsave(&client->lock, flags);
+ chan = list_entry(client->channels.next, struct dma_chan, client_node);
+ list_del_rcu(&chan->client_node);
+ spin_unlock_irqrestore(&client->lock, flags);
+ client->chan_count--;
+ client->event_callback(client, chan, DMA_RESOURCE_REMOVED);
+ dma_client_chan_free(chan);
+ }
+ }
+ spin_unlock(&dma_list_lock);
+}
+
+/**
+ * dma_async_client_register - allocate and register a &dma_client
+ * @event_callback: callback for notification of channel addition/removal
+ */
+struct dma_client *dma_async_client_register(dma_event_callback event_callback)
+{
+ struct dma_client *client;
+
+ client = kzalloc(sizeof(*client), GFP_KERNEL);
+ if (!client)
+ return NULL;
+
+ INIT_LIST_HEAD(&client->channels);
+ spin_lock_init(&client->lock);
+
+ client->chans_desired = 0;
+ client->chan_count = 0;
+ client->event_callback = event_callback;
+
+ spin_lock(&dma_list_lock);
+ list_add_tail(&client->global_node, &dma_client_list);
+ spin_unlock(&dma_list_lock);
+
+ return client;
+}
+
+/**
+ * dma_async_client_unregister - unregister a client and free the &dma_client
+ * @client: &dma_client to free
+ *
+ * Force frees any allocated DMA channels, frees the &dma_client memory
+ */
+void dma_async_client_unregister(struct dma_client *client)
+{
+ struct dma_chan *chan;
+
+ if (!client)
+ return;
+
+ rcu_read_lock();
+ list_for_each_entry_rcu(chan, &client->channels, client_node) {
+ dma_client_chan_free(chan);
+ }
+ rcu_read_unlock();
+
+ spin_lock(&dma_list_lock);
+ list_del(&client->global_node);
+ spin_unlock(&dma_list_lock);
+
+ kfree(client);
+ dma_chans_rebalance();
+}
+
+/**
+ * dma_async_client_chan_request - request DMA channels
+ * @client: &dma_client
+ * @number: count of DMA channels requested
+ *
+ * Clients call dma_async_client_chan_request() to specify how many
+ * DMA channels they need, 0 to free all currently allocated.
+ * The resulting allocations/frees are indicated to the client via the
+ * event callback.
+ */
+void dma_async_client_chan_request(struct dma_client *client,
+ unsigned int number)
+{
+ client->chans_desired = number;
+ dma_chans_rebalance();
+}
+
+/**
+ * dma_async_device_register - register a &dma_device and its channels
+ * @device: &dma_device
+ */
+int dma_async_device_register(struct dma_device *device)
+{
+ static int id;
+ int chancnt = 0;
+ struct dma_chan* chan;
+
+ if (!device)
+ return -ENODEV;
+
+ init_completion(&device->done);
+ kref_init(&device->refcount);
+ device->dev_id = id++;
+
+ /* represent channels in sysfs. Probably want devs too */
+ list_for_each_entry(chan, &device->channels, device_node) {
+ chan->local = alloc_percpu(typeof(*chan->local));
+ if (chan->local == NULL)
+ continue;
+
+ chan->chan_id = chancnt++;
+ chan->class_dev.class = &dma_devclass;
+ chan->class_dev.dev = NULL;
+ snprintf(chan->class_dev.class_id, BUS_ID_SIZE, "dma%dchan%d",
+ device->dev_id, chan->chan_id);
+
+ kref_get(&device->refcount);
+ class_device_register(&chan->class_dev);
+ }
+
+ spin_lock(&dma_list_lock);
+ list_add_tail(&device->global_node, &dma_device_list);
+ spin_unlock(&dma_list_lock);
+
+ dma_chans_rebalance();
+
+ return 0;
+}
+
+/**
+ * dma_async_device_cleanup - complete &dma_device.done once the last reference is dropped
+ * @kref: kref embedded in &dma_device
+ */
+static void dma_async_device_cleanup(struct kref *kref)
+{
+ struct dma_device *device = container_of(kref, struct dma_device, refcount);
+ complete(&device->done);
+}
+
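+/**
+ * dma_async_device_unregister - unregister a &dma_device and its channels
+ * @device: &dma_device to tear down
+ */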
+void dma_async_device_unregister(struct dma_device* device)
+{
+ struct dma_chan *chan;
+ unsigned long flags;
+
+ spin_lock(&dma_list_lock);
+ list_del(&device->global_node);
+ spin_unlock(&dma_list_lock);
+
+ list_for_each_entry(chan, &device->channels, device_node) {
+ if (chan->client) {
+ spin_lock_irqsave(&chan->client->lock, flags);
+ list_del(&chan->client_node);
+ chan->client->chan_count--;
+ spin_unlock_irqrestore(&chan->client->lock, flags);
+ chan->client->event_callback(chan->client, chan, DMA_RESOURCE_REMOVED);
+ dma_client_chan_free(chan);
+ }
+ class_device_unregister(&chan->class_dev);
+ }
+
+ dma_chans_rebalance();
+
+ kref_put(&device->refcount, dma_async_device_cleanup);
+ wait_for_completion(&device->done);
+}
+
+static int __init dma_bus_init(void)
+{
+ spin_lock_init(&dma_list_lock);
+
+ return class_register(&dma_devclass);
+}
+
+subsys_initcall(dma_bus_init);
+
+EXPORT_SYMBOL(dma_async_client_register);
+EXPORT_SYMBOL(dma_async_client_unregister);
+EXPORT_SYMBOL(dma_async_client_chan_request);
+EXPORT_SYMBOL(dma_async_memcpy_buf_to_buf);
+EXPORT_SYMBOL(dma_async_memcpy_buf_to_pg);
+EXPORT_SYMBOL(dma_async_memcpy_pg_to_pg);
+EXPORT_SYMBOL(dma_async_memcpy_complete);
+EXPORT_SYMBOL(dma_async_memcpy_issue_pending);
+EXPORT_SYMBOL(dma_async_device_register);
+EXPORT_SYMBOL(dma_async_device_unregister);
diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
new file mode 100644
index 0000000..ac3bff9
--- /dev/null
+++ b/include/linux/dmaengine.h
@@ -0,0 +1,323 @@
+/*****************************************************************************
+Copyright(c) 2004 - 2006 Intel Corporation. All rights reserved.
+
+This program is free software; you can redistribute it and/or modify it
+under the terms of the GNU General Public License as published by the Free
+Software Foundation; either version 2 of the License, or (at your option)
+any later version.
+
+This program is distributed in the hope that it will be useful, but WITHOUT
+ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+more details.
+
+You should have received a copy of the GNU General Public License along with
+this program; if not, write to the Free Software Foundation, Inc., 59
+Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+
+The full GNU General Public License is included in this distribution in the
+file called LICENSE.
+*****************************************************************************/
+#ifndef DMAENGINE_H
+#define DMAENGINE_H
+#ifdef CONFIG_DMA_ENGINE
+
+#include <linux/device.h>
+#include <linux/uio.h>
+#include <linux/kref.h>
+#include <linux/completion.h>
+#include <linux/rcupdate.h>
+
+/**
+ * enum dma_event - resource PNP/power management events
+ * @DMA_RESOURCE_SUSPEND: DMA device going into low power state
+ * @DMA_RESOURCE_RESUME: DMA device returning to full power
+ * @DMA_RESOURCE_ADDED: DMA device added to the system
+ * @DMA_RESOURCE_REMOVED: DMA device removed from the system
+ */
+enum dma_event {
+ DMA_RESOURCE_SUSPEND,
+ DMA_RESOURCE_RESUME,
+ DMA_RESOURCE_ADDED,
+ DMA_RESOURCE_REMOVED,
+};
+
+/**
+ * typedef dma_cookie_t - an opaque DMA transaction ID
+ *
+ * if dma_cookie_t is >0 it's a DMA request cookie, <0 it's an error code
+ */
+typedef s32 dma_cookie_t;
+
+#define dma_submit_error(cookie) ((cookie) < 0 ? 1 : 0)
+
+/**
+ * enum dma_status - DMA transaction status
+ * @DMA_SUCCESS: transaction completed successfully
+ * @DMA_IN_PROGRESS: transaction not yet processed
+ * @DMA_ERROR: transaction failed
+ */
+enum dma_status {
+ DMA_SUCCESS,
+ DMA_IN_PROGRESS,
+ DMA_ERROR,
+};
+
+struct dma_chan_percpu {
+ local_t refcount;
+ /* stats */
+ unsigned long memcpy_count;
+ unsigned long bytes_transferred;
+};
+
+/**
+ * struct dma_chan - devices supply DMA channels, clients use them
+ * @client: ptr to the client user of this chan, will be NULL when unused
+ * @device: ptr to the dma device who supplies this channel, always !NULL
+ * @cookie: last cookie value returned to client
+ * @chan_id: channel ID for sysfs
+ * @class_dev: class device for sysfs
+ * @client_node: used to add this to the client chan list
+ * @device_node: used to add this to the device chan list
+ */
+struct dma_chan {
+ struct dma_client *client;
+ struct dma_device *device;
+ dma_cookie_t cookie;
+
+ /* sysfs */
+ int chan_id;
+ struct class_device class_dev;
+
+ struct kref refcount;
+ int slow_ref;
+ struct rcu_head rcu;
+
+ struct list_head client_node;
+ struct list_head device_node;
+ struct dma_chan_percpu *local;
+};
+
+void dma_chan_cleanup(struct kref *kref);
+
+static inline void dma_chan_get(struct dma_chan *chan)
+{
+ if (unlikely(chan->slow_ref))
+ kref_get(&chan->refcount);
+ else {
+ local_inc(&(per_cpu_ptr(chan->local, get_cpu())->refcount));
+ put_cpu();
+ }
+}
+
+static inline void dma_chan_put(struct dma_chan *chan)
+{
+ if (unlikely(chan->slow_ref))
+ kref_put(&chan->refcount, dma_chan_cleanup);
+ else {
+ local_dec(&(per_cpu_ptr(chan->local, get_cpu())->refcount));
+ put_cpu();
+ }
+}
+
+/*
+ * typedef dma_event_callback - function pointer to a DMA event callback
+ */
+typedef void (*dma_event_callback) (struct dma_client *client,
+ struct dma_chan *chan, enum dma_event event);
+
+/**
+ * struct dma_client - info on the entity making use of DMA services
+ * @event_callback: func ptr to call when something happens
+ * @chan_count: number of chans allocated
+ * @chans_desired: number of chans requested. Can be +/- chan_count
+ * @lock: protects access to the channels list
+ * @channels: the list of DMA channels allocated
+ * @global_node: list_head for global dma_client_list
+ */
+struct dma_client {
+ dma_event_callback event_callback;
+ unsigned int chan_count;
+ unsigned int chans_desired;
+
+ spinlock_t lock;
+ struct list_head channels;
+ struct list_head global_node;
+};
+
+/**
+ * struct dma_device - info on the entity supplying DMA services
+ * @chancnt: how many DMA channels are supported
+ * @channels: the list of struct dma_chan
+ * @global_node: list_head for global dma_device_list
+ * @dev_id: unique device ID
+ * Other func ptrs: used to make use of this device's capabilities
+ */
+struct dma_device {
+
+ unsigned int chancnt;
+ struct list_head channels;
+ struct list_head global_node;
+
+ struct kref refcount;
+ struct completion done;
+
+ int dev_id;
+
+ int (*device_alloc_chan_resources)(struct dma_chan *chan);
+ void (*device_free_chan_resources)(struct dma_chan *chan);
+ dma_cookie_t (*device_memcpy_buf_to_buf)(struct dma_chan *chan,
+ void *dest, void *src, size_t len);
+ dma_cookie_t (*device_memcpy_buf_to_pg)(struct dma_chan *chan,
+ struct page *page, unsigned int offset, void *kdata,
+ size_t len);
+ dma_cookie_t (*device_memcpy_pg_to_pg)(struct dma_chan *chan,
+ struct page *dest_pg, unsigned int dest_off,
+ struct page *src_pg, unsigned int src_off, size_t len);
+ enum dma_status (*device_memcpy_complete)(struct dma_chan *chan,
+ dma_cookie_t cookie, dma_cookie_t *last,
+ dma_cookie_t *used);
+ void (*device_memcpy_issue_pending)(struct dma_chan *chan);
+};
+
+/* --- public DMA engine API --- */
+
+struct dma_client *dma_async_client_register(dma_event_callback event_callback);
+void dma_async_client_unregister(struct dma_client *client);
+void dma_async_client_chan_request(struct dma_client *client,
+ unsigned int number);
+
+/**
+ * dma_async_memcpy_buf_to_buf - offloaded copy between virtual addresses
+ * @chan: DMA channel to offload copy to
+ * @dest: destination address (virtual)
+ * @src: source address (virtual)
+ * @len: length
+ *
+ * Both @dest and @src must be mappable to a bus address according to the
+ * DMA mapping API rules for streaming mappings.
+ * Both @dest and @src must stay memory resident (kernel memory or locked
+ * user space pages)
+ */
+static inline dma_cookie_t dma_async_memcpy_buf_to_buf(struct dma_chan *chan,
+ void *dest, void *src, size_t len)
+{
+ int cpu = get_cpu();
+ per_cpu_ptr(chan->local, cpu)->bytes_transferred += len;
+ per_cpu_ptr(chan->local, cpu)->memcpy_count++;
+ put_cpu();
+
+ return chan->device->device_memcpy_buf_to_buf(chan, dest, src, len);
+}
+
+/**
+ * dma_async_memcpy_buf_to_pg - offloaded copy
+ * @chan: DMA channel to offload copy to
+ * @page: destination page
+ * @offset: offset in page to copy to
+ * @kdata: source address (virtual)
+ * @len: length
+ *
+ * Both @page/@offset and @kdata must be mappable to a bus address according
+ * to the DMA mapping API rules for streaming mappings.
+ * Both @page/@offset and @kdata must stay memory resident (kernel memory or
+ * locked user space pages)
+ */
+static inline dma_cookie_t dma_async_memcpy_buf_to_pg(struct dma_chan *chan,
+ struct page *page, unsigned int offset, void *kdata, size_t len)
+{
+ int cpu = get_cpu();
+ per_cpu_ptr(chan->local, cpu)->bytes_transferred += len;
+ per_cpu_ptr(chan->local, cpu)->memcpy_count++;
+ put_cpu();
+
+ return chan->device->device_memcpy_buf_to_pg(chan, page, offset,
+ kdata, len);
+}
+
+/**
+ * dma_async_memcpy_pg_to_pg - offloaded copy from page to page
+ * @chan: DMA channel to offload copy to
+ * @dest_page: destination page
+ * @dest_off: offset in page to copy to
+ * @src_page: source page
+ * @src_off: offset in page to copy from
+ * @len: length
+ *
+ * Both @dest_page/@dest_off and @src_page/@src_off must be mappable to a bus
+ * address according to the DMA mapping API rules for streaming mappings.
+ * Both @dest_page/@dest_off and @src_page/@src_off must stay memory resident
+ * (kernel memory or locked user space pages)
+ */
+static inline dma_cookie_t dma_async_memcpy_pg_to_pg(struct dma_chan *chan,
+ struct page *dest_pg, unsigned int dest_off, struct page *src_pg,
+ unsigned int src_off, size_t len)
+{
+ int cpu = get_cpu();
+ per_cpu_ptr(chan->local, cpu)->bytes_transferred += len;
+ per_cpu_ptr(chan->local, cpu)->memcpy_count++;
+ put_cpu();
+
+ return chan->device->device_memcpy_pg_to_pg(chan, dest_pg, dest_off,
+ src_pg, src_off, len);
+}
+
+/**
+ * dma_async_memcpy_issue_pending - flush pending copies to HW
+ * @chan: DMA channel to flush
+ *
+ * This allows drivers to push copies to HW in batches,
+ * reducing MMIO writes where possible.
+ */
+static inline void dma_async_memcpy_issue_pending(struct dma_chan *chan)
+{
+ return chan->device->device_memcpy_issue_pending(chan);
+}
+
+/**
+ * dma_async_memcpy_complete - poll for transaction completion
+ * @chan: DMA channel
+ * @cookie: transaction identifier to check status of
+ * @last: returns last completed cookie, can be NULL
+ * @used: returns last issued cookie, can be NULL
+ *
+ * If @last and @used are passed in, upon return they reflect the driver
+ * internal state and can be used with dma_async_is_complete() to check
+ * the status of multiple cookies without re-checking hardware state.
+ */
+static inline enum dma_status dma_async_memcpy_complete(struct dma_chan *chan,
+ dma_cookie_t cookie, dma_cookie_t *last, dma_cookie_t *used)
+{
+ return chan->device->device_memcpy_complete(chan, cookie, last, used);
+}
+
+/**
+ * dma_async_is_complete - test a cookie against chan state
+ * @cookie: transaction identifier to test status of
+ * @last_complete: last known completed transaction
+ * @last_used: last cookie value handed out
+ *
+ * dma_async_is_complete() is used in dma_async_memcpy_complete();
+ * the test logic is separated out for lightweight testing of multiple cookies
+ */
+static inline enum dma_status dma_async_is_complete(dma_cookie_t cookie,
+ dma_cookie_t last_complete, dma_cookie_t last_used)
+{
+ if (last_complete <= last_used) {
+ if ((cookie <= last_complete) || (cookie > last_used))
+ return DMA_SUCCESS;
+ } else {
+ if ((cookie <= last_complete) && (cookie > last_used))
+ return DMA_SUCCESS;
+ }
+ return DMA_IN_PROGRESS;
+}
+
+
+/* --- DMA device --- */
+
+int dma_async_device_register(struct dma_device *device);
+void dma_async_device_unregister(struct dma_device *device);
+
+#endif /* CONFIG_DMA_ENGINE */
+#endif /* DMAENGINE_H */

2006-03-11 02:27:47

by Chris Leech

Subject: [PATCH 7/8] [I/OAT] Add a sysctl for tuning the I/OAT offloaded I/O threshold

Any socket recv of less than this amount will not be offloaded

Signed-off-by: Chris Leech <[email protected]>
---
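
Since the entry uses proc_dointvec and lives in ipv4_table, the knob should
surface as /proc/sys/net/ipv4/tcp_dma_copybreak (default 1024, per the
user_dma.c hunk below), e.g.:

	echo 4096 > /proc/sys/net/ipv4/tcp_dma_copybreak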

include/linux/sysctl.h | 1 +
include/net/tcp.h | 1 +
net/core/user_dma.c | 4 ++++
net/ipv4/sysctl_net_ipv4.c | 10 ++++++++++
4 files changed, 16 insertions(+), 0 deletions(-)

diff --git a/include/linux/sysctl.h b/include/linux/sysctl.h
index 76eaeff..cd9e7c0 100644
--- a/include/linux/sysctl.h
+++ b/include/linux/sysctl.h
@@ -403,6 +403,7 @@ enum
NET_TCP_MTU_PROBING=113,
NET_TCP_BASE_MSS=114,
NET_IPV4_TCP_WORKAROUND_SIGNED_WINDOWS=115,
+ NET_TCP_DMA_COPYBREAK=116,
};

enum {
diff --git a/include/net/tcp.h b/include/net/tcp.h
index afc4b8a..f319368 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -221,6 +221,7 @@ extern int sysctl_tcp_adv_win_scale;
extern int sysctl_tcp_tw_reuse;
extern int sysctl_tcp_frto;
extern int sysctl_tcp_low_latency;
+extern int sysctl_tcp_dma_copybreak;
extern int sysctl_tcp_nometrics_save;
extern int sysctl_tcp_moderate_rcvbuf;
extern int sysctl_tcp_tso_win_divisor;
diff --git a/net/core/user_dma.c b/net/core/user_dma.c
index 24e51eb..a85d1f1 100644
--- a/net/core/user_dma.c
+++ b/net/core/user_dma.c
@@ -33,6 +33,10 @@ file called LICENSE.

#ifdef CONFIG_NET_DMA

+#define NET_DMA_DEFAULT_COPYBREAK 1024
+
+int sysctl_tcp_dma_copybreak = NET_DMA_DEFAULT_COPYBREAK;
+
/**
* dma_skb_copy_datagram_iovec - Copy a datagram to an iovec.
* @skb: buffer to copy
diff --git a/net/ipv4/sysctl_net_ipv4.c b/net/ipv4/sysctl_net_ipv4.c
index 6b6c3ad..6a6aa53 100644
--- a/net/ipv4/sysctl_net_ipv4.c
+++ b/net/ipv4/sysctl_net_ipv4.c
@@ -688,6 +688,16 @@ ctl_table ipv4_table[] = {
.mode = 0644,
.proc_handler = &proc_dointvec
},
+#ifdef CONFIG_NET_DMA
+ {
+ .ctl_name = NET_TCP_DMA_COPYBREAK,
+ .procname = "tcp_dma_copybreak",
+ .data = &sysctl_tcp_dma_copybreak,
+ .maxlen = sizeof(int),
+ .mode = 0644,
+ .proc_handler = &proc_dointvec
+ },
+#endif
{ .ctl_name = 0 }
};


2006-03-11 02:27:15

by Chris Leech

Subject: [PATCH 4/8] [I/OAT] Utility functions for offloading sk_buff to iovec copies

Provides for pinning user space pages in memory, copying to iovecs,
and copying from sk_buffs including fragmented and chained sk_buffs.

Signed-off-by: Chris Leech <[email protected]>
---
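
The intended lifecycle, roughly as the TCP patches later in the series use
these helpers (fragment; chan/skb/offset/used come from the surrounding
receive path):

	/* once, up front: pin the user buffers behind the iovec */
	pinned_list = dma_pin_iovec_pages(msg->msg_iov, len);

	/* per skb: queue asynchronous copies into the pinned pages */
	cookie = dma_skb_copy_datagram_iovec(chan, skb, offset,
					     msg->msg_iov, used, pinned_list);

	/* after all copies have completed: dirty and release the pages */
	dma_unpin_iovec_pages(pinned_list);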

drivers/dma/Makefile | 3
drivers/dma/iovlock.c | 301 +++++++++++++++++++++++++++++++++++++++++++++
include/linux/dmaengine.h | 22 +++
include/net/netdma.h | 6 +
net/core/Makefile | 3
net/core/user_dma.c | 141 +++++++++++++++++++++
6 files changed, 474 insertions(+), 2 deletions(-)

diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
index c8a5f56..bdcfdbd 100644
--- a/drivers/dma/Makefile
+++ b/drivers/dma/Makefile
@@ -1,2 +1,3 @@
-obj-y += dmaengine.o
+obj-$(CONFIG_DMA_ENGINE) += dmaengine.o
+obj-$(CONFIG_NET_DMA) += iovlock.o
obj-$(CONFIG_INTEL_IOATDMA) += ioatdma.o
diff --git a/drivers/dma/iovlock.c b/drivers/dma/iovlock.c
new file mode 100644
index 0000000..18d54ce
--- /dev/null
+++ b/drivers/dma/iovlock.c
@@ -0,0 +1,301 @@
+/*****************************************************************************
+Copyright(c) 2004 - 2006 Intel Corporation. All rights reserved.
+Portions based on net/core/datagram.c and copyrighted by their authors.
+
+This program is free software; you can redistribute it and/or modify it
+under the terms of the GNU General Public License as published by the Free
+Software Foundation; either version 2 of the License, or (at your option)
+any later version.
+
+This program is distributed in the hope that it will be useful, but WITHOUT
+ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+more details.
+
+You should have received a copy of the GNU General Public License along with
+this program; if not, write to the Free Software Foundation, Inc., 59
+Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+
+The full GNU General Public License is included in this distribution in the
+file called LICENSE.
+*****************************************************************************/
+
+/*
+ * This code allows the net stack to make use of a DMA engine for
+ * skb to iovec copies.
+ */
+
+#include <linux/dmaengine.h>
+#include <linux/pagemap.h>
+#include <net/tcp.h> /* for memcpy_toiovec */
+#include <asm/io.h>
+#include <asm/uaccess.h>
+
+int num_pages_spanned(struct iovec *iov)
+{
+ return
+ ((PAGE_ALIGN((unsigned long)iov->iov_base + iov->iov_len) -
+ ((unsigned long)iov->iov_base & PAGE_MASK)) >> PAGE_SHIFT);
+}
+
+/*
+ * Pin down all the iovec pages needed for len bytes.
+ * Return a struct dma_pinned_list to keep track of pages pinned down.
+ *
+ * We are allocating a single chunk of memory, and then carving it up into
+ * 3 sections, with the sizes of the latter 2 depending on the number of
+ * iovecs and the total number of pages, respectively.
+ */
+struct dma_pinned_list *dma_pin_iovec_pages(struct iovec *iov, size_t len)
+{
+ struct dma_pinned_list *local_list;
+ struct page **pages;
+ int i;
+ int ret;
+ int nr_iovecs = 0;
+ int iovec_len_used = 0;
+ int iovec_pages_used = 0;
+ long err;
+
+ /* don't pin down non-user-based iovecs */
+ if (segment_eq(get_fs(), KERNEL_DS))
+ return NULL;
+
+ /* determine how many iovecs/pages there are, up front */
+ do {
+ iovec_len_used += iov[nr_iovecs].iov_len;
+ iovec_pages_used += num_pages_spanned(&iov[nr_iovecs]);
+ nr_iovecs++;
+ } while (iovec_len_used < len);
+
+ /* single kmalloc for pinned list, page_list[], and the page arrays */
+ local_list = kmalloc(sizeof(*local_list)
+ + (nr_iovecs * sizeof (struct dma_page_list))
+ + (iovec_pages_used * sizeof (struct page*)), GFP_KERNEL);
+ if (!local_list) {
+ err = -ENOMEM;
+ goto out;
+ }
+
+ /* list of pages starts right after the page list array */
+ pages = (struct page **) &local_list->page_list[nr_iovecs];
+
+ for (i = 0; i < nr_iovecs; i++) {
+ struct dma_page_list *page_list = &local_list->page_list[i];
+
+ len -= iov[i].iov_len;
+
+ if (!access_ok(VERIFY_WRITE, iov[i].iov_base, iov[i].iov_len)) {
+ err = -EFAULT;
+ goto unpin;
+ }
+
+ page_list->nr_pages = num_pages_spanned(&iov[i]);
+ page_list->base_address = iov[i].iov_base;
+
+ page_list->pages = pages;
+ pages += page_list->nr_pages;
+
+ /* pin pages down */
+ down_read(&current->mm->mmap_sem);
+ ret = get_user_pages(
+ current,
+ current->mm,
+ (unsigned long) iov[i].iov_base,
+ page_list->nr_pages,
+ 1, /* write */
+ 0, /* force */
+ page_list->pages,
+ NULL);
+ up_read(&current->mm->mmap_sem);
+
+ if (ret != page_list->nr_pages) {
+ err = -ENOMEM;
+ goto unpin;
+ }
+
+ local_list->nr_iovecs = i + 1;
+ }
+
+ return local_list;
+
+unpin:
+ dma_unpin_iovec_pages(local_list);
+out:
+ return ERR_PTR(err);
+}
+
+void dma_unpin_iovec_pages(struct dma_pinned_list *pinned_list)
+{
+ int i, j;
+
+ if (!pinned_list)
+ return;
+
+ for (i = 0; i < pinned_list->nr_iovecs; i++) {
+ struct dma_page_list *page_list = &pinned_list->page_list[i];
+ for (j = 0; j < page_list->nr_pages; j++) {
+ set_page_dirty_lock(page_list->pages[j]);
+ page_cache_release(page_list->pages[j]);
+ }
+ }
+
+ kfree(pinned_list);
+}
+
+static dma_cookie_t dma_memcpy_to_kernel_iovec(struct dma_chan *chan,
+ struct iovec *iov, unsigned char *kdata, size_t len)
+{
+ dma_cookie_t dma_cookie = 0;
+
+ while (len > 0) {
+ if (iov->iov_len) {
+ int copy = min_t(unsigned int, iov->iov_len, len);
+ dma_cookie = dma_async_memcpy_buf_to_buf(
+ chan,
+ iov->iov_base,
+ kdata,
+ copy);
+ kdata += copy;
+ len -= copy;
+ iov->iov_len -= copy;
+ iov->iov_base += copy;
+ }
+ iov++;
+ }
+
+ return dma_cookie;
+}
+
+/*
+ * We have already pinned down the pages we will be using in the iovecs.
+ * Each entry in the iov array has a corresponding entry in
+ * pinned_list->page_list; array indexing keeps iov[] and page_list[] in sync.
+ * Initial elements of the iov array will have iov_len == 0 if they were
+ * fully consumed by an earlier call.
+ * The remaining iov entries are guaranteed to cover more than len bytes.
+ */
+dma_cookie_t dma_memcpy_to_iovec(struct dma_chan *chan, struct iovec *iov,
+ struct dma_pinned_list *pinned_list, unsigned char *kdata, size_t len)
+{
+ int iov_byte_offset;
+ int copy;
+ dma_cookie_t dma_cookie = 0;
+ int iovec_idx;
+ int page_idx;
+
+ if (!chan)
+ return memcpy_toiovec(iov, kdata, len);
+
+ /* -> kernel copies (e.g. smbfs) */
+ if (!pinned_list)
+ return dma_memcpy_to_kernel_iovec(chan, iov, kdata, len);
+
+ iovec_idx = 0;
+ while (iovec_idx < pinned_list->nr_iovecs) {
+ struct dma_page_list *page_list;
+
+ /* skip already used-up iovecs */
+ while (!iov[iovec_idx].iov_len)
+ iovec_idx++;
+
+ page_list = &pinned_list->page_list[iovec_idx];
+
+ iov_byte_offset = ((unsigned long)iov[iovec_idx].iov_base & ~PAGE_MASK);
+ page_idx = (((unsigned long)iov[iovec_idx].iov_base & PAGE_MASK)
+ - ((unsigned long)page_list->base_address & PAGE_MASK)) >> PAGE_SHIFT;
+
+ /* break up copies to not cross page boundary */
+ while (iov[iovec_idx].iov_len) {
+ copy = min_t(int, PAGE_SIZE - iov_byte_offset, len);
+ copy = min_t(int, copy, iov[iovec_idx].iov_len);
+
+ dma_cookie = dma_async_memcpy_buf_to_pg(chan,
+ page_list->pages[page_idx],
+ iov_byte_offset,
+ kdata,
+ copy);
+
+ len -= copy;
+ iov[iovec_idx].iov_len -= copy;
+ iov[iovec_idx].iov_base += copy;
+
+ if (!len)
+ return dma_cookie;
+
+ kdata += copy;
+ iov_byte_offset = 0;
+ page_idx++;
+ }
+ iovec_idx++;
+ }
+
+ /* really bad if we ever run out of iovecs */
+ BUG();
+ return -EFAULT;
+}
+
+dma_cookie_t dma_memcpy_pg_to_iovec(struct dma_chan *chan, struct iovec *iov,
+ struct dma_pinned_list *pinned_list, struct page *page,
+ unsigned int offset, size_t len)
+{
+ int iov_byte_offset;
+ int copy;
+ dma_cookie_t dma_cookie = 0;
+ int iovec_idx;
+ int page_idx;
+ int err;
+
+ /* this needs as-yet-unimplemented buf-to-buf, so punt. */
+ /* TODO: use dma for this */
+ if (!chan || !pinned_list) {
+ u8 *vaddr = kmap(page);
+ err = memcpy_toiovec(iov, vaddr + offset, len);
+ kunmap(page);
+ return err;
+ }
+
+ iovec_idx = 0;
+ while (iovec_idx < pinned_list->nr_iovecs) {
+ struct dma_page_list *page_list;
+
+ /* skip already used-up iovecs */
+ while (!iov[iovec_idx].iov_len)
+ iovec_idx++;
+
+ page_list = &pinned_list->page_list[iovec_idx];
+
+ iov_byte_offset = ((unsigned long)iov[iovec_idx].iov_base & ~PAGE_MASK);
+ page_idx = (((unsigned long)iov[iovec_idx].iov_base & PAGE_MASK)
+ - ((unsigned long)page_list->base_address & PAGE_MASK)) >> PAGE_SHIFT;
+
+ /* break up copies to not cross page boundary */
+ while (iov[iovec_idx].iov_len) {
+ copy = min_t(int, PAGE_SIZE - iov_byte_offset, len);
+ copy = min_t(int, copy, iov[iovec_idx].iov_len);
+
+ dma_cookie = dma_async_memcpy_pg_to_pg(chan,
+ page_list->pages[page_idx],
+ iov_byte_offset,
+ page,
+ offset,
+ copy);
+
+ len -= copy;
+ iov[iovec_idx].iov_len -= copy;
+ iov[iovec_idx].iov_base += copy;
+
+ if (!len)
+ return dma_cookie;
+
+ offset += copy;
+ iov_byte_offset = 0;
+ page_idx++;
+ }
+ iovec_idx++;
+ }
+
+ /* really bad if we ever run out of iovecs */
+ BUG();
+ return -EFAULT;
+}
diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index ac3bff9..a2ea457 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -319,5 +319,27 @@ static inline enum dma_status dma_async_
int dma_async_device_register(struct dma_device *device);
void dma_async_device_unregister(struct dma_device *device);

+/* --- Helper iov-locking functions --- */
+
+struct dma_page_list {
+ char *base_address;
+ int nr_pages;
+ struct page **pages;
+};
+
+struct dma_pinned_list {
+ int nr_iovecs;
+ struct dma_page_list page_list[0];
+};
+
+struct dma_pinned_list *dma_pin_iovec_pages(struct iovec *iov, size_t len);
+void dma_unpin_iovec_pages(struct dma_pinned_list* pinned_list);
+
+dma_cookie_t dma_memcpy_to_iovec(struct dma_chan *chan, struct iovec *iov,
+ struct dma_pinned_list *pinned_list, unsigned char *kdata, size_t len);
+dma_cookie_t dma_memcpy_pg_to_iovec(struct dma_chan *chan, struct iovec *iov,
+ struct dma_pinned_list *pinned_list, struct page *page,
+ unsigned int offset, size_t len);
+
#endif /* CONFIG_DMA_ENGINE */
#endif /* DMAENGINE_H */
diff --git a/include/net/netdma.h b/include/net/netdma.h
index 6435aef..feb499f 100644
--- a/include/net/netdma.h
+++ b/include/net/netdma.h
@@ -22,6 +22,7 @@ file called LICENSE.
#define NETDMA_H
#ifdef CONFIG_NET_DMA
#include <linux/dmaengine.h>
+#include <linux/skbuff.h>

static inline struct dma_chan *get_softnet_dma(void)
{
@@ -33,5 +34,10 @@ static inline struct dma_chan *get_softn
rcu_read_unlock();
return chan;
}
+
+int dma_skb_copy_datagram_iovec(struct dma_chan* chan,
+ const struct sk_buff *skb, int offset, struct iovec *to,
+ size_t len, struct dma_pinned_list *pinned_list);
+
#endif /* CONFIG_NET_DMA */
#endif /* NETDMA_H */
diff --git a/net/core/Makefile b/net/core/Makefile
index 630da0f..d02132b 100644
--- a/net/core/Makefile
+++ b/net/core/Makefile
@@ -8,7 +8,8 @@ obj-y := sock.o request_sock.o skbuff.o
obj-$(CONFIG_SYSCTL) += sysctl_net_core.o

obj-y += dev.o ethtool.o dev_mcast.o dst.o \
- neighbour.o rtnetlink.o utils.o link_watch.o filter.o
+ neighbour.o rtnetlink.o utils.o link_watch.o filter.o \
+ user_dma.o

obj-$(CONFIG_XFRM) += flow.o
obj-$(CONFIG_SYSFS) += net-sysfs.o
diff --git a/net/core/user_dma.c b/net/core/user_dma.c
new file mode 100644
index 0000000..24e51eb
--- /dev/null
+++ b/net/core/user_dma.c
@@ -0,0 +1,141 @@
+/*****************************************************************************
+Copyright(c) 2004 - 2006 Intel Corporation. All rights reserved.
+Portions based on net/core/datagram.c and copyrighted by their authors.
+
+This program is free software; you can redistribute it and/or modify it
+under the terms of the GNU General Public License as published by the Free
+Software Foundation; either version 2 of the License, or (at your option)
+any later version.
+
+This program is distributed in the hope that it will be useful, but WITHOUT
+ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+more details.
+
+You should have received a copy of the GNU General Public License along with
+this program; if not, write to the Free Software Foundation, Inc., 59
+Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+
+The full GNU General Public License is included in this distribution in the
+file called LICENSE.
+*****************************************************************************/
+
+/*
+ * This code allows the net stack to make use of a DMA engine for
+ * skb to iovec copies.
+ */
+
+#include <linux/dmaengine.h>
+#include <linux/socket.h>
+#include <linux/rtnetlink.h> /* for BUG_TRAP */
+#include <net/tcp.h>
+
+
+#ifdef CONFIG_NET_DMA
+
+/**
+ * dma_skb_copy_datagram_iovec - Copy a datagram to an iovec.
+ * @chan: DMA channel to use, or NULL for a CPU copy
+ * @skb: buffer to copy
+ * @offset: offset in the buffer to start copying from
+ * @to: io vector to copy to
+ * @len: amount of data to copy from buffer to iovec
+ * @pinned_list: pinned iovec buffer data
+ *
+ * Note: the iovec is modified during the copy.
+ */
+int dma_skb_copy_datagram_iovec(struct dma_chan *chan,
+ struct sk_buff *skb, int offset, struct iovec *to,
+ size_t len, struct dma_pinned_list *pinned_list)
+{
+ int start = skb_headlen(skb);
+ int i, copy = start - offset;
+ dma_cookie_t cookie = 0;
+
+ /* Copy header. */
+ if (copy > 0) {
+ if (copy > len)
+ copy = len;
+ cookie = dma_memcpy_to_iovec(chan, to, pinned_list,
+ skb->data + offset, copy);
+ if (cookie < 0)
+ goto fault;
+ len -= copy;
+ if (len == 0)
+ goto end;
+ offset += copy;
+ }
+
+ /* Copy paged appendix. Hmm... why does this look so complicated? */
+ for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
+ int end;
+
+ BUG_TRAP(start <= offset + len);
+
+ end = start + skb_shinfo(skb)->frags[i].size;
+ copy = end - offset;
+ if (copy > 0) {
+ skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
+ struct page *page = frag->page;
+
+ if (copy > len)
+ copy = len;
+
+ cookie = dma_memcpy_pg_to_iovec(chan, to, pinned_list, page,
+ frag->page_offset + offset - start, copy);
+ if (cookie < 0)
+ goto fault;
+ len -= copy;
+ if (len == 0)
+ goto end;
+ offset += copy;
+ }
+ start = end;
+ }
+
+ if (skb_shinfo(skb)->frag_list) {
+ struct sk_buff *list = skb_shinfo(skb)->frag_list;
+
+ for (; list; list = list->next) {
+ int end;
+
+ BUG_TRAP(start <= offset + len);
+
+ end = start + list->len;
+ copy = end - offset;
+ if (copy > 0) {
+ if (copy > len)
+ copy = len;
+ cookie = dma_skb_copy_datagram_iovec(chan, list,
+ offset - start, to, copy,
+ pinned_list);
+ if (cookie < 0)
+ goto fault;
+ len -= copy;
+ if (len == 0)
+ goto end;
+ offset += copy;
+ }
+ start = end;
+ }
+ }
+
+end:
+ if (!len) {
+ skb->dma_cookie = cookie;
+ return cookie;
+ }
+
+fault:
+ return -EFAULT;
+}
+
+#else
+
+int dma_skb_copy_datagram_iovec(struct dma_chan *chan,
+ const struct sk_buff *skb, int offset, struct iovec *to,
+ size_t len, struct dma_pinned_list *pinned_list)
+{
+ return skb_copy_datagram_iovec(skb, offset, to, len);
+}
+
+#endif

2006-03-11 02:29:05

by Chris Leech

Subject: [PATCH 8/8] [I/OAT] TCP recv offload to I/OAT

Pins user pages and sets up for DMA in tcp_recvmsg, then calls
tcp_dma_try_early_copy from the tcp_v4_do_rcv path

Signed-off-by: Chris Leech <[email protected]>
---
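
Roughly, the flow these hunks add, sketched end to end:

	tcp_recvmsg():
		/* pin the user iovec up front if the read is large enough */
		tp->ucopy.pinned_list = dma_pin_iovec_pages(msg->msg_iov, len);

	tcp_v4_rcv() -> tcp_v4_do_rcv() -> tcp_rcv_established():
		/* fast path: DMA the payload straight into user memory */
		tcp_dma_try_early_copy(sk, skb, tcp_header_len);
		/* early-copied skbs are parked on sk_async_wait_queue */

	tcp_recvmsg(), before returning:
		dma_async_memcpy_issue_pending(tp->ucopy.dma_chan);
		/* wait via dma_async_memcpy_complete(), then ... */
		__skb_queue_purge(&sk->sk_async_wait_queue);
		dma_unpin_iovec_pages(tp->ucopy.pinned_list);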

include/net/netdma.h | 1
net/ipv4/tcp.c | 110 +++++++++++++++++++++++++++++++++++++++++++++-----
net/ipv4/tcp_input.c | 74 ++++++++++++++++++++++++++++++----
net/ipv4/tcp_ipv4.c | 18 ++++++++
net/ipv6/tcp_ipv6.c | 12 +++++
5 files changed, 193 insertions(+), 22 deletions(-)

diff --git a/include/net/netdma.h b/include/net/netdma.h
index feb499f..3d9c222 100644
--- a/include/net/netdma.h
+++ b/include/net/netdma.h
@@ -38,6 +38,7 @@ static inline struct dma_chan *get_softn
int dma_skb_copy_datagram_iovec(struct dma_chan* chan,
const struct sk_buff *skb, int offset, struct iovec *to,
size_t len, struct dma_pinned_list *pinned_list);
+int tcp_dma_try_early_copy(struct sock *sk, struct sk_buff *skb, int hlen);

#endif /* CONFIG_NET_DMA */
#endif /* NETDMA_H */
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 9122520..a277398 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -262,7 +262,7 @@
#include <net/tcp.h>
#include <net/xfrm.h>
#include <net/ip.h>
-
+#include <net/netdma.h>

#include <asm/uaccess.h>
#include <asm/ioctls.h>
@@ -1109,6 +1109,7 @@ int tcp_recvmsg(struct kiocb *iocb, stru
int target; /* Read at least this many bytes */
long timeo;
struct task_struct *user_recv = NULL;
+ int copied_early = 0;

lock_sock(sk);

@@ -1132,6 +1133,12 @@ int tcp_recvmsg(struct kiocb *iocb, stru

target = sock_rcvlowat(sk, flags & MSG_WAITALL, len);

+#ifdef CONFIG_NET_DMA
+ tp->ucopy.dma_chan = NULL;
+ if ((len > sysctl_tcp_dma_copybreak) && !(flags & MSG_PEEK) && !sysctl_tcp_low_latency && __get_cpu_var(softnet_data.net_dma))
+ tp->ucopy.pinned_list = dma_pin_iovec_pages(msg->msg_iov, len);
+#endif
+
do {
struct sk_buff *skb;
u32 offset;
@@ -1273,6 +1280,10 @@ int tcp_recvmsg(struct kiocb *iocb, stru
} else
sk_wait_data(sk, &timeo);

+#ifdef CONFIG_NET_DMA
+ tp->ucopy.wakeup = 0;
+#endif
+
if (user_recv) {
int chunk;

@@ -1328,13 +1339,39 @@ do_prequeue:
}

if (!(flags & MSG_TRUNC)) {
- err = skb_copy_datagram_iovec(skb, offset,
- msg->msg_iov, used);
- if (err) {
- /* Exception. Bailout! */
- if (!copied)
- copied = -EFAULT;
- break;
+#ifdef CONFIG_NET_DMA
+ if (!tp->ucopy.dma_chan && tp->ucopy.pinned_list)
+ tp->ucopy.dma_chan = get_softnet_dma();
+
+ if (tp->ucopy.dma_chan) {
+ tp->ucopy.dma_cookie = dma_skb_copy_datagram_iovec(
+ tp->ucopy.dma_chan, skb, offset,
+ msg->msg_iov, used,
+ tp->ucopy.pinned_list);
+
+ if (tp->ucopy.dma_cookie < 0) {
+
+ printk(KERN_ALERT "dma_cookie < 0\n");
+
+ /* Exception. Bailout! */
+ if (!copied)
+ copied = -EFAULT;
+ break;
+ }
+ if ((offset + used) == skb->len)
+ copied_early = 1;
+
+ } else
+#endif
+ {
+ err = skb_copy_datagram_iovec(skb, offset,
+ msg->msg_iov, used);
+ if (err) {
+ /* Exception. Bailout! */
+ if (!copied)
+ copied = -EFAULT;
+ break;
+ }
}
}

@@ -1354,15 +1391,33 @@ skip_copy:

if (skb->h.th->fin)
goto found_fin_ok;
- if (!(flags & MSG_PEEK))
- sk_eat_skb(sk, skb);
+ if (!(flags & MSG_PEEK)) {
+ if (!copied_early)
+ sk_eat_skb(sk, skb);
+#ifdef CONFIG_NET_DMA
+ else {
+ __skb_unlink(skb, &sk->sk_receive_queue);
+ __skb_queue_tail(&sk->sk_async_wait_queue, skb);
+ copied_early = 0;
+ }
+#endif
+ }
continue;

found_fin_ok:
/* Process the FIN. */
++*seq;
- if (!(flags & MSG_PEEK))
- sk_eat_skb(sk, skb);
+ if (!(flags & MSG_PEEK)) {
+ if (!copied_early)
+ sk_eat_skb(sk, skb);
+#ifdef CONFIG_NET_DMA
+ else {
+ __skb_unlink(skb, &sk->sk_receive_queue);
+ __skb_queue_tail(&sk->sk_async_wait_queue, skb);
+ copied_early = 0;
+ }
+#endif
+ }
break;
} while (len > 0);

@@ -1385,6 +1440,34 @@ skip_copy:
tp->ucopy.len = 0;
}

+#ifdef CONFIG_NET_DMA
+ if (tp->ucopy.dma_chan) {
+ struct sk_buff *skb;
+ dma_cookie_t done, used;
+
+ dma_async_memcpy_issue_pending(tp->ucopy.dma_chan);
+
+ while (dma_async_memcpy_complete(tp->ucopy.dma_chan,
+ tp->ucopy.dma_cookie, &done,
+ &used) == DMA_IN_PROGRESS) {
+ /* do partial cleanup of sk_async_wait_queue */
+ while ((skb = skb_peek(&sk->sk_async_wait_queue)) &&
+ (dma_async_is_complete(skb->dma_cookie, done,
+ used) == DMA_SUCCESS)) {
+ __skb_dequeue(&sk->sk_async_wait_queue);
+ kfree_skb(skb);
+ }
+ }
+
+ /* Safe to free early-copied skbs now */
+ __skb_queue_purge(&sk->sk_async_wait_queue);
+ dma_unpin_iovec_pages(tp->ucopy.pinned_list);
+ dma_chan_put(tp->ucopy.dma_chan);
+ tp->ucopy.dma_chan = NULL;
+ tp->ucopy.pinned_list = NULL;
+ }
+#endif
+
/* According to UNIX98, msg_name/msg_namelen are ignored
* on connected socket. I was just happy when found this 8) --ANK
*/
@@ -1652,6 +1735,9 @@ int tcp_disconnect(struct sock *sk, int
__skb_queue_purge(&sk->sk_receive_queue);
sk_stream_writequeue_purge(sk);
__skb_queue_purge(&tp->out_of_order_queue);
+#ifdef CONFIG_NET_DMA
+ __skb_queue_purge(&sk->sk_async_wait_queue);
+#endif

inet->dport = 0;

diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 7625eaf..5307e17 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -71,6 +71,7 @@
#include <net/inet_common.h>
#include <linux/ipsec.h>
#include <asm/unaligned.h>
+#include <net/netdma.h>

int sysctl_tcp_timestamps = 1;
int sysctl_tcp_window_scaling = 1;
@@ -3901,14 +3902,23 @@ int tcp_rcv_established(struct sock *sk,
}
} else {
int eaten = 0;
+ int copied_early = 0;

- if (tp->ucopy.task == current &&
- tp->copied_seq == tp->rcv_nxt &&
- len - tcp_header_len <= tp->ucopy.len &&
- sock_owned_by_user(sk)) {
- __set_current_state(TASK_RUNNING);
+ if (tp->copied_seq == tp->rcv_nxt &&
+ len - tcp_header_len <= tp->ucopy.len) {
+#ifdef CONFIG_NET_DMA
+ if (tcp_dma_try_early_copy(sk, skb, tcp_header_len)) {
+ copied_early = 1;
+ eaten = 1;
+ }
+#endif
+ if (tp->ucopy.task == current && sock_owned_by_user(sk) && !copied_early) {
+ __set_current_state(TASK_RUNNING);

- if (!tcp_copy_to_iovec(sk, skb, tcp_header_len)) {
+ if (!tcp_copy_to_iovec(sk, skb, tcp_header_len))
+ eaten = 1;
+ }
+ if (eaten) {
/* Predicted packet is in window by definition.
* seq == rcv_nxt and rcv_wup <= rcv_nxt.
* Hence, check seq<=rcv_wup reduces to:
@@ -3924,8 +3934,9 @@ int tcp_rcv_established(struct sock *sk,
__skb_pull(skb, tcp_header_len);
tp->rcv_nxt = TCP_SKB_CB(skb)->end_seq;
NET_INC_STATS_BH(LINUX_MIB_TCPHPHITSTOUSER);
- eaten = 1;
}
+ if (copied_early)
+ tcp_cleanup_rbuf(sk, skb->len);
}
if (!eaten) {
if (tcp_checksum_complete_user(sk, skb))
@@ -3966,6 +3977,11 @@ int tcp_rcv_established(struct sock *sk,

__tcp_ack_snd_check(sk, 0);
no_ack:
+#ifdef CONFIG_NET_DMA
+ if (copied_early)
+ __skb_queue_tail(&sk->sk_async_wait_queue, skb);
+ else
+#endif
if (eaten)
__kfree_skb(skb);
else
@@ -4049,6 +4065,50 @@ discard:
return 0;
}

+#ifdef CONFIG_NET_DMA
+int tcp_dma_try_early_copy(struct sock *sk, struct sk_buff *skb, int hlen)
+{
+ struct tcp_sock *tp = tcp_sk(sk);
+ int chunk = skb->len - hlen;
+ int dma_cookie;
+ int copied_early = 0;
+
+ if (tp->ucopy.wakeup)
+ return 0;
+
+ if (!tp->ucopy.dma_chan && tp->ucopy.pinned_list)
+ tp->ucopy.dma_chan = get_softnet_dma();
+
+ if (tp->ucopy.dma_chan && skb->ip_summed == CHECKSUM_UNNECESSARY) {
+
+ dma_cookie = dma_skb_copy_datagram_iovec(tp->ucopy.dma_chan,
+ skb, hlen, tp->ucopy.iov, chunk, tp->ucopy.pinned_list);
+
+ if (dma_cookie < 0)
+ goto out;
+
+ tp->ucopy.dma_cookie = dma_cookie;
+ copied_early = 1;
+
+ tp->ucopy.len -= chunk;
+ tp->copied_seq += chunk;
+ tcp_rcv_space_adjust(sk);
+
+ if ((tp->ucopy.len == 0) ||
+ (tcp_flag_word(skb->h.th) & TCP_FLAG_PSH) ||
+ (atomic_read(&sk->sk_rmem_alloc) > (sk->sk_rcvbuf >> 1))) {
+ tp->ucopy.wakeup = 1;
+ sk->sk_data_ready(sk, 0);
+ }
+ } else if (chunk > 0) {
+ tp->ucopy.wakeup = 1;
+ sk->sk_data_ready(sk, 0);
+ }
+out:
+ return copied_early;
+}
+#endif /* CONFIG_NET_DMA */
+
static int tcp_rcv_synsent_state_process(struct sock *sk, struct sk_buff *skb,
struct tcphdr *th, unsigned len)
{
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index 9e85c04..5ed065f 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -71,6 +71,7 @@
#include <net/inet_common.h>
#include <net/timewait_sock.h>
#include <net/xfrm.h>
+#include <net/netdma.h>

#include <linux/inet.h>
#include <linux/ipv6.h>
@@ -1091,8 +1092,18 @@ process:
bh_lock_sock(sk);
ret = 0;
if (!sock_owned_by_user(sk)) {
- if (!tcp_prequeue(sk, skb))
+#ifdef CONFIG_NET_DMA
+ struct tcp_sock *tp = tcp_sk(sk);
+ if (!tp->ucopy.dma_chan && tp->ucopy.pinned_list)
+ tp->ucopy.dma_chan = get_softnet_dma();
+ if (tp->ucopy.dma_chan)
ret = tcp_v4_do_rcv(sk, skb);
+ else
+#endif
+ {
+ if (!tcp_prequeue(sk, skb))
+ ret = tcp_v4_do_rcv(sk, skb);
+ }
} else
sk_add_backlog(sk, skb);
bh_unlock_sock(sk);
@@ -1296,6 +1307,11 @@ int tcp_v4_destroy_sock(struct sock *sk)
/* Cleans up our, hopefully empty, out_of_order_queue. */
__skb_queue_purge(&tp->out_of_order_queue);

+#ifdef CONFIG_NET_DMA
+ /* Cleans up our sk_async_wait_queue */
+ __skb_queue_purge(&sk->sk_async_wait_queue);
+#endif
+
/* Clean prequeue, it must be empty really */
__skb_queue_purge(&tp->ucopy.prequeue);

diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
index 301eee7..a50eb30 100644
--- a/net/ipv6/tcp_ipv6.c
+++ b/net/ipv6/tcp_ipv6.c
@@ -1218,8 +1218,16 @@ process:
bh_lock_sock(sk);
ret = 0;
if (!sock_owned_by_user(sk)) {
- if (!tcp_prequeue(sk, skb))
- ret = tcp_v6_do_rcv(sk, skb);
+#ifdef CONFIG_NET_DMA
+ struct tcp_sock *tp = tcp_sk(sk);
+ if (tp->ucopy.dma_chan)
+ ret = tcp_v6_do_rcv(sk, skb);
+ else
+#endif
+ {
+ if (!tcp_prequeue(sk, skb))
+ ret = tcp_v6_do_rcv(sk, skb);
+ }
} else
sk_add_backlog(sk, skb);
bh_unlock_sock(sk);
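
For readers skimming the diff: the cleanup hunk added to tcp_recvmsg above
is the heart of the receive offload. Pulled out as a minimal standalone
sketch (the helper itself is hypothetical, shown only for illustration;
the calls follow the dmaengine API from patch 1):

/* Flush batched descriptors to hardware, then poll until the last issued
 * cookie completes, freeing early-copied skbs whose own cookies are
 * already done. Once the final cookie completes, anything still queued
 * is safe to purge.
 */
static void tcp_drain_async_queue(struct sock *sk, struct tcp_sock *tp)
{
        struct sk_buff *skb;
        dma_cookie_t done, used;

        dma_async_memcpy_issue_pending(tp->ucopy.dma_chan);

        while (dma_async_memcpy_complete(tp->ucopy.dma_chan,
                                         tp->ucopy.dma_cookie, &done,
                                         &used) == DMA_IN_PROGRESS) {
                /* partial cleanup of sk_async_wait_queue */
                while ((skb = skb_peek(&sk->sk_async_wait_queue)) &&
                       (dma_async_is_complete(skb->dma_cookie, done,
                                              used) == DMA_SUCCESS)) {
                        __skb_dequeue(&sk->sk_async_wait_queue);
                        kfree_skb(skb);
                }
        }

        __skb_queue_purge(&sk->sk_async_wait_queue);
}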

2006-03-11 02:28:58

by Chris Leech

[permalink] [raw]
Subject: [PATCH 5/8] [I/OAT] Structure changes for TCP recv offload to I/OAT

Adds an async_wait_queue and some additional fields to tcp_sock, and a
dma_cookie_t to sk_buff.

Signed-off-by: Chris Leech <[email protected]>
---

include/linux/skbuff.h | 6 ++++++
include/linux/tcp.h | 10 ++++++++++
include/net/sock.h | 2 ++
include/net/tcp.h | 9 +++++++++
net/core/sock.c | 6 ++++++
5 files changed, 33 insertions(+), 0 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 613b951..91b1081 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -29,6 +29,9 @@
#include <linux/net.h>
#include <linux/textsearch.h>
#include <net/checksum.h>
+#ifdef CONFIG_NET_DMA
+#include <linux/dmaengine.h>
+#endif

#define HAVE_ALLOC_SKB /* For the drivers to know */
#define HAVE_ALIGNABLE_SKB /* Ditto 8) */
@@ -285,6 +288,9 @@ struct sk_buff {
__u16 tc_verd; /* traffic control verdict */
#endif
#endif
+#ifdef CONFIG_NET_DMA
+ dma_cookie_t dma_cookie;
+#endif


/* These elements must be at the end, see alloc_skb() for details. */
diff --git a/include/linux/tcp.h b/include/linux/tcp.h
index 542d395..3bc1d37 100644
--- a/include/linux/tcp.h
+++ b/include/linux/tcp.h
@@ -18,6 +18,9 @@
#define _LINUX_TCP_H

#include <linux/types.h>
+#ifdef CONFIG_NET_DMA
+#include <linux/dmaengine.h>
+#endif
#include <asm/byteorder.h>

struct tcphdr {
@@ -233,6 +236,13 @@ struct tcp_sock {
struct iovec *iov;
int memory;
int len;
+#ifdef CONFIG_NET_DMA
+ /* members for async copy */
+ struct dma_chan *dma_chan;
+ int wakeup;
+ struct dma_pinned_list *pinned_list;
+ dma_cookie_t dma_cookie;
+#endif
} ucopy;

__u32 snd_wl1; /* Sequence for window update */
diff --git a/include/net/sock.h b/include/net/sock.h
index 6e133f7..8c75642 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -132,6 +132,7 @@ struct sock_common {
* @sk_receive_queue: incoming packets
* @sk_wmem_alloc: transmit queue bytes committed
* @sk_write_queue: Packet sending queue
+ * @sk_async_wait_queue: DMA copied packets
* @sk_omem_alloc: "o" is "option" or "other"
* @sk_wmem_queued: persistent queue size
* @sk_forward_alloc: space allocated forward
@@ -205,6 +206,7 @@ struct sock {
atomic_t sk_omem_alloc;
struct sk_buff_head sk_receive_queue;
struct sk_buff_head sk_write_queue;
+ struct sk_buff_head sk_async_wait_queue;
int sk_wmem_queued;
int sk_forward_alloc;
gfp_t sk_allocation;
diff --git a/include/net/tcp.h b/include/net/tcp.h
index 9418f4d..610f66b 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -28,6 +28,9 @@
#include <linux/cache.h>
#include <linux/percpu.h>
#include <linux/skbuff.h>
+#ifdef CONFIG_NET_DMA
+#include <linux/dmaengine.h>
+#endif

#include <net/inet_connection_sock.h>
#include <net/inet_timewait_sock.h>
@@ -820,6 +823,12 @@ static inline void tcp_prequeue_init(str
tp->ucopy.len = 0;
tp->ucopy.memory = 0;
skb_queue_head_init(&tp->ucopy.prequeue);
+#ifdef CONFIG_NET_DMA
+ tp->ucopy.dma_chan = NULL;
+ tp->ucopy.wakeup = 0;
+ tp->ucopy.pinned_list = NULL;
+ tp->ucopy.dma_cookie = 0;
+#endif
}

/* Packet is added to VJ-style prequeue for processing in process
diff --git a/net/core/sock.c b/net/core/sock.c
index e1c1a36..d90e1bd 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -735,6 +735,9 @@ struct sock *sk_clone(const struct sock
atomic_set(&newsk->sk_omem_alloc, 0);
skb_queue_head_init(&newsk->sk_receive_queue);
skb_queue_head_init(&newsk->sk_write_queue);
+#ifdef CONFIG_NET_DMA
+ skb_queue_head_init(&newsk->sk_async_wait_queue);
+#endif

rwlock_init(&newsk->sk_dst_lock);
rwlock_init(&newsk->sk_callback_lock);
@@ -1286,6 +1289,9 @@ void sock_init_data(struct socket *sock,
skb_queue_head_init(&sk->sk_receive_queue);
skb_queue_head_init(&sk->sk_write_queue);
skb_queue_head_init(&sk->sk_error_queue);
+#ifdef CONFIG_NET_DMA
+ skb_queue_head_init(&sk->sk_async_wait_queue);
+#endif

sk->sk_send_head = NULL;


2006-03-11 08:55:26

by Andrew Morton

[permalink] [raw]
Subject: Re: [PATCH 1/8] [I/OAT] DMA memcpy subsystem

Chris Leech <[email protected]> wrote:
>
> +void dma_async_device_cleanup(struct kref *kref);
>

Declarations go in header files, please. Or give it static scope.

2006-03-11 08:58:30

by Andrew Morton

[permalink] [raw]
Subject: Re: [PATCH 3/8] [I/OAT] Setup the networking subsystem as a DMA client

Chris Leech <[email protected]> wrote:
>

There seems to be a small race here.

> +static void net_dma_rebalance(void)
> +{
> + unsigned int cpu, i, n;
> + struct dma_chan *chan;
> +
> + lock_cpu_hotplug();
> +
> + if (net_dma_count == 0) {
> + for_each_online_cpu(cpu)
> + rcu_assign_pointer(per_cpu(softnet_data.net_dma, cpu), NULL);
> + unlock_cpu_hotplug();
> + return;
> + }

If some other CPU does netdev_dma_event(DMA_RESOURCE_REMOVED) now,
net_dma_count can drop to zero before the division below is reached ...

> + i = 0;
> + cpu = first_cpu(cpu_online_map);
> +
> + rcu_read_lock();
> + list_for_each_entry(chan, &net_dma_client->channels, client_node) {
> + n = ((num_online_cpus() / net_dma_count)
> + + (i < (num_online_cpus() % net_dma_count) ? 1 : 0));

This will get a divide-by-zero.

> + while(n) {
> + per_cpu(softnet_data.net_dma, cpu) = chan;
> + cpu = next_cpu(cpu, cpu_online_map);
> + n--;
> + }
> + i++;
> + }
> + rcu_read_unlock();
> +
> + unlock_cpu_hotplug();
> +}
> +
> +/**
> + * netdev_dma_event - event callback for the net_dma_client
> + * @client: should always be net_dma_client
> + * @chan:
> + * @event:
> + */
> +static void netdev_dma_event(struct dma_client *client, struct dma_chan *chan,
> + enum dma_event event)
> +{
> + switch (event) {
> + case DMA_RESOURCE_ADDED:
> + net_dma_count++;
> + net_dma_rebalance();
> + break;
> + case DMA_RESOURCE_REMOVED:
> + net_dma_count--;
> + net_dma_rebalance();
> + break;
> + default:
> + break;
> + }
> +}
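
One hedged way to close both holes, sketched against the quoted code (the
mutex is hypothetical, not part of the posted patch): serialize the count
update and the rebalance, so net_dma_count cannot drop to zero between the
early-return check and the division. A mutex rather than a spinlock,
because net_dma_rebalance() calls lock_cpu_hotplug(), which can sleep.

static DEFINE_MUTEX(net_dma_mutex);     /* hypothetical */

static void netdev_dma_event(struct dma_client *client,
                             struct dma_chan *chan, enum dma_event event)
{
        mutex_lock(&net_dma_mutex);
        switch (event) {
        case DMA_RESOURCE_ADDED:
                net_dma_count++;
                net_dma_rebalance();
                break;
        case DMA_RESOURCE_REMOVED:
                net_dma_count--;
                net_dma_rebalance();
                break;
        default:
                break;
        }
        mutex_unlock(&net_dma_mutex);
}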

2006-03-11 09:02:35

by Andrew Morton

[permalink] [raw]
Subject: Re: [PATCH 3/8] [I/OAT] Setup the networking subsystem as a DMA client

Chris Leech <[email protected]> wrote:
>
> +#ifdef CONFIG_NET_DMA
> +#include <linux/dmaengine.h>
> +#endif

There are still a number of instances of this in the patch series. Did you
decide to keep the ifdefs in there for some reason?

2006-03-11 09:43:15

by Andrew Morton

[permalink] [raw]
Subject: Re: [PATCH 8/8] [I/OAT] TCP recv offload to I/OAT

Chris Leech <[email protected]> wrote:
>
> Locks down user pages and sets up for DMA in tcp_recvmsg, then calls
> tcp_dma_try_early_copy in tcp_v4_do_rcv

All those ifdefs are still there. They really do put a maintenance burden
on, of all places, net/ipv4/tcp.c. Please find a way of abstracting out
the conceptual additions which IOAT makes to TCP and to wrap that into
inline functions, etc. See how tidily NF_HOOK() has inserted things into
the net stack.

Also, for something which is billed as a performance enhancement,
benchmark numbers are the key thing which we need to judge the desirability
of these changes. What is the plan there?

Thanks.
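
For concreteness, one shape that abstraction could take, as a sketch (the
helper name is made up, not from the patches): keep the ifdef in a header
and give tcp.c a call that compiles away when NET_DMA is off.

#ifdef CONFIG_NET_DMA
static inline void tcp_async_wait_purge(struct sock *sk)
{
        __skb_queue_purge(&sk->sk_async_wait_queue);
}
#else
static inline void tcp_async_wait_purge(struct sock *sk)
{
        /* no NET_DMA: nothing is ever queued, nothing to purge */
}
#endif

tcp_disconnect() and tcp_v4_destroy_sock() could then call
tcp_async_wait_purge(sk) unconditionally, with no ifdefs at the call sites.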

2006-03-13 22:12:24

by Chris Leech

[permalink] [raw]
Subject: RE: [PATCH 3/8] [I/OAT] Setup the networking subsystem as a DMA client

> > +#ifdef CONFIG_NET_DMA
> > +#include <linux/dmaengine.h>
> > +#endif
>
> There are still a number of instances of this in the patch
> series. Did you
> decide to keep the ifdefs in there for some reason?

No, I just missed them in header files. Thanks.

2006-03-14 22:13:47

by Pavel Machek

[permalink] [raw]
Subject: Re: [PATCH 1/8] [I/OAT] DMA memcpy subsystem

Hi!

> --- /dev/null
> +++ b/drivers/dma/dmaengine.c
> @@ -0,0 +1,360 @@
> +/*****************************************************************************
> +Copyright(c) 2004 - 2006 Intel Corporation. All rights reserved.
> +
> +This program is free software; you can redistribute it and/or modify it
> +under the terms of the GNU General Public License as published by the Free
> +Software Foundation; either version 2 of the License, or (at your option)
> +any later version.
> +
> +This program is distributed in the hope that it will be useful, but WITHOUT
> +ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> +FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
> +more details.
> +
> +You should have received a copy of the GNU General Public License along with
> +this program; if not, write to the Free Software Foundation, Inc., 59
> +Temple Place - Suite 330, Boston, MA 02111-1307, USA.
> +
> +The full GNU General Public License is included in this distribution in the
> +file called LICENSE.
> +*****************************************************************************/


Could you use
/*
*
*/

comment style, and describe in one or two lines what the source does
in the header?

Pavel
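
For concreteness, the style being asked for might look like this (the
one-line summary is illustrative wording, not from the patch):

/*
 * dmaengine.c - core routines for asynchronous DMA memcpy offload
 *
 * Copyright(c) 2004 - 2006 Intel Corporation. All rights reserved.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or (at
 * your option) any later version.
 */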

2006-03-17 07:29:42

by Kumar Gala

[permalink] [raw]
Subject: Re: [PATCH 1/8] [I/OAT] DMA memcpy subsystem

[snip]

> +/**
> + * dma_async_client_register - allocate and register a &dma_client
> + * @event_callback: callback for notification of channel addition/removal
> + */
> +struct dma_client *dma_async_client_register(dma_event_callback event_callback)
> +{
> + struct dma_client *client;
> +
> + client = kzalloc(sizeof(*client), GFP_KERNEL);
> + if (!client)
> + return NULL;
> +
> + INIT_LIST_HEAD(&client->channels);
> + spin_lock_init(&client->lock);
> +
> + client->chans_desired = 0;
> + client->chan_count = 0;
> + client->event_callback = event_callback;
> +
> + spin_lock(&dma_list_lock);
> + list_add_tail(&client->global_node, &dma_client_list);
> + spin_unlock(&dma_list_lock);
> +
> + return client;
> +}

It would seem that when a client registers (or shortly thereafter,
when they call dma_async_client_chan_request()) they would expect to
get the number of channels they need within some given time period.

For example, let's say a client registers but no DMA device exists.
They will never get a callback making them aware of this condition.

I would think most clients would either spin until they have all the
channels they need or fall back to a non-async mechanism.

Also, what do you think about adding an operation type (MEMCPY, XOR,
CRYPTO_AES, etc.). We can then validate that the operation type
expected is supported by the devices that exist.
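
A minimal sketch of that suggestion (the enum values and the op_mask field
are hypothetical, not part of the posted interface):

enum dma_operation {
        DMA_OP_MEMCPY,
        DMA_OP_XOR,
        DMA_OP_CRYPTO_AES,
};

/* let a client validate capabilities before requesting channels */
static inline int dma_device_supports(struct dma_device *device,
                                      enum dma_operation op)
{
        return device->op_mask & (1 << op);     /* op_mask: hypothetical */
}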

> +
> +/**
> + * dma_async_client_unregister - unregister a client and free the &dma_client
> + * @client:
> + *
> + * Force frees any allocated DMA channels, frees the &dma_client memory
> + */
> +void dma_async_client_unregister(struct dma_client *client)
> +{
> + struct dma_chan *chan;
> +
> + if (!client)
> + return;
> +
> + rcu_read_lock();
> + list_for_each_entry_rcu(chan, &client->channels, client_node) {
> + dma_client_chan_free(chan);
> + }
> + rcu_read_unlock();
> +
> + spin_lock(&dma_list_lock);
> + list_del(&client->global_node);
> + spin_unlock(&dma_list_lock);
> +
> + kfree(client);
> + dma_chans_rebalance();
> +}
> +
> +/**
> + * dma_async_client_chan_request - request DMA channels
> + * @client: &dma_client
> + * @number: count of DMA channels requested
> + *
> + * Clients call dma_async_client_chan_request() to specify how many
> + * DMA channels they need, 0 to free all currently allocated.
> + * The resulting allocations/frees are indicated to the client via the
> + * event callback.
> + */
> +void dma_async_client_chan_request(struct dma_client *client,
> + unsigned int number)
> +{
> + client->chans_desired = number;
> + dma_chans_rebalance();
> +}
> +

Shouldn't we also have a dma_async_client_chan_free()?

[snip]

> +/* --- public DMA engine API --- */
> +
> +struct dma_client *dma_async_client_register(dma_event_callback event_callback);
> +void dma_async_client_unregister(struct dma_client *client);
> +void dma_async_client_chan_request(struct dma_client *client,
> + unsigned int number);
> +
> +/**
> + * dma_async_memcpy_buf_to_buf - offloaded copy between virtual addresses
> + * @chan: DMA channel to offload copy to
> + * @dest: destination address (virtual)
> + * @src: source address (virtual)
> + * @len: length
> + *
> + * Both @dest and @src must be mappable to a bus address according to the
> + * DMA mapping API rules for streaming mappings.
> + * Both @dest and @src must stay memory resident (kernel memory or locked
> + * user space pages)
> + */
> +static inline dma_cookie_t dma_async_memcpy_buf_to_buf(struct dma_chan *chan,
> + void *dest, void *src, size_t len)
> +{
> + int cpu = get_cpu();
> + per_cpu_ptr(chan->local, cpu)->bytes_transferred += len;
> + per_cpu_ptr(chan->local, cpu)->memcpy_count++;
> + put_cpu();
> +
> + return chan->device->device_memcpy_buf_to_buf(chan, dest, src, len);
> +}

What about renaming the dma_async_memcpy_* to something like
dma_async_op_* and having them take an additional operation argument?

> +
> +/**
> + * dma_async_memcpy_buf_to_pg - offloaded copy
> + * @chan: DMA channel to offload copy to
> + * @page: destination page
> + * @offset: offset in page to copy to
> + * @kdata: source address (virtual)
> + * @len: length
> + *
> + * Both @page/@offset and @kdata must be mappable to a bus address according
> + * to the DMA mapping API rules for streaming mappings.
> + * Both @page/@offset and @kdata must stay memory resident (kernel memory or
> + * locked user space pages)
> + */
> +static inline dma_cookie_t dma_async_memcpy_buf_to_pg(struct dma_chan *chan,
> + struct page *page, unsigned int offset, void *kdata, size_t len)
> +{
> + int cpu = get_cpu();
> + per_cpu_ptr(chan->local, cpu)->bytes_transferred += len;
> + per_cpu_ptr(chan->local, cpu)->memcpy_count++;
> + put_cpu();
> +
> + return chan->device->device_memcpy_buf_to_pg(chan, page, offset,
> + kdata, len);
> +}
> +
> +/**
> + * dma_async_memcpy_pg_to_pg - offloaded copy
> + * @chan: DMA channel to offload copy to
> + * @dest_page: destination page
> + * @dest_off: offset in page to copy to
> + * @src_page: source page
> + * @src_off: offset in page to copy from
> + * @len: length
> + *
> + * Both @dest_page/@dest_off and @src_page/@src_off must be mappable to a bus
> + * address according to the DMA mapping API rules for streaming mappings.
> + * Both @dest_page/@dest_off and @src_page/@src_off must stay memory resident
> + * (kernel memory or locked user space pages)
> + */
> +static inline dma_cookie_t dma_async_memcpy_pg_to_pg(struct dma_chan *chan,
> + struct page *dest_pg, unsigned int dest_off, struct page *src_pg,
> + unsigned int src_off, size_t len)
> +{
> + int cpu = get_cpu();
> + per_cpu_ptr(chan->local, cpu)->bytes_transferred += len;
> + per_cpu_ptr(chan->local, cpu)->memcpy_count++;
> + put_cpu();
> +
> + return chan->device->device_memcpy_pg_to_pg(chan, dest_pg, dest_off,
> + src_pg, src_off, len);
> +}
> +
> +/**
> + * dma_async_memcpy_issue_pending - flush pending copies to HW
> + * @chan:
> + *
> + * This allows drivers to push copies to HW in batches,
> + * reducing MMIO writes where possible.
> + */
> +static inline void dma_async_memcpy_issue_pending(struct dma_chan *chan)
> +{
> + return chan->device->device_memcpy_issue_pending(chan);
> +}
> +
> +/**
> + * dma_async_memcpy_complete - poll for transaction completion
> + * @chan: DMA channel
> + * @cookie: transaction identifier to check status of
> + * @last: returns last completed cookie, can be NULL
> + * @used: returns last issued cookie, can be NULL
> + *
> + * If @last and @used are passed in, upon return they reflect the driver
> + * internal state and can be used with dma_async_is_complete() to check
> + * the status of multiple cookies without re-checking hardware state.
> + */
> +static inline enum dma_status dma_async_memcpy_complete(struct dma_chan *chan,
> + dma_cookie_t cookie, dma_cookie_t *last, dma_cookie_t *used)
> +{
> + return chan->device->device_memcpy_complete(chan, cookie, last,
> + used);
> +}
> +
> +/**
> + * dma_async_is_complete - test a cookie against chan state
> + * @cookie: transaction identifier to test status of
> + * @last_complete: last known completed transaction
> + * @last_used: last cookie value handed out
> + *
> + * dma_async_is_complete() is used in dma_async_memcpy_complete();
> + * the test logic is separated for lightweight testing of multiple cookies
> + */
> +static inline enum dma_status dma_async_is_complete(dma_cookie_t cookie,
> + dma_cookie_t last_complete, dma_cookie_t last_used)
> +{
> + if (last_complete <= last_used) {
> + if ((cookie <= last_complete) || (cookie > last_used))
> + return DMA_SUCCESS;
> + } else {
> + if ((cookie <= last_complete) && (cookie > last_used))
> + return DMA_SUCCESS;
> + }
> + return DMA_IN_PROGRESS;
> +}
> +
> +
> +/* --- DMA device --- */
> +
> +int dma_async_device_register(struct dma_device *device);
> +void dma_async_device_unregister(struct dma_device *device);
> +
> +#endif /* CONFIG_DMA_ENGINE */
> +#endif /* DMAENGINE_H */

2006-03-28 18:44:05

by Andrew Grover

[permalink] [raw]
Subject: Re: [PATCH 1/8] [I/OAT] DMA memcpy subsystem

On 3/16/06, Kumar Gala <[email protected]> wrote:
> It would seem that when a client registers (or shortly thereafter,
> when they call dma_async_client_chan_request()) they would expect to
> get the number of channels they need within some given time period.
>
> For example, let's say a client registers but no DMA device exists.
> They will never get a callback making them aware of this condition.
>
> I would think most clients would either spin until they have all the
> channels they need or fall back to a non-async mechanism.

Clients *are* expected to fall back to non-async if they are not given
channels. The reason it was implemented with callbacks for
added/removed was that the client may be initializing before the
channels are enumerated. For example, the net subsystem will ask for
channels and not get them for a while, until the ioatdma PCI device is
found and its driver loads. In this scenario, we'd like the net
subsystem to be given these channels, instead of them going unused.

> Also, what do you think about adding an operation type (MEMCPY, XOR,
> > CRYPTO_AES, etc.). We can then validate that the operation type
> expected is supported by the devices that exist.

No objections, but this speculative support doesn't need to be in our
initial patchset.

> Shouldn't we also have a dma_async_client_chan_free()?

Well we could just define it to be chan_request(0) but it doesn't seem
to be needed. Also, the allocation mechanism we have for channels is
different from alloc/free's semantics, so it may be best to not muddy
the water in this area.
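
In other words, the free would reduce to a trivial wrapper (sketch):

static inline void dma_async_client_chan_free(struct dma_client *client)
{
        dma_async_client_chan_request(client, 0);
}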

Regards -- Andy

2006-03-28 18:58:56

by Kumar Gala

[permalink] [raw]
Subject: Re: [PATCH 1/8] [I/OAT] DMA memcpy subsystem


On Mar 28, 2006, at 12:44 PM, Andrew Grover wrote:

> On 3/16/06, Kumar Gala <[email protected]> wrote:
>> It would seem that when a client registers (or shortly thereafter,
>> when they call dma_async_client_chan_request()) they would expect to
>> get the number of channels they need within some given time period.
>>
>> For example, let's say a client registers but no DMA device exists.
>> They will never get a callback making them aware of this condition.
>>
>> I would think most clients would either spin until they have all the
>> channels they need or fall back to a non-async mechanism.
>
> Clients *are* expected to fall back to non-async if they are not given
> channels. The reason it was implemented with callbacks for
> added/removed was that the client may be initializing before the
> channels are enumerated. For example, the net subsystem will ask for
> channels and not get them for a while, until the ioatdma PCI device is
> found and its driver loads. In this scenario, we'd like the net
> subsystem to be given these channels, instead of them going unused.

Fair, I need to think on that a little more.

>> Also, what do you think about adding an operation type (MEMCPY, XOR,
>> CRYPTO_AES, etc.). We can then validate that the operation type
>> expected is supported by the devices that exist.
>
> No objections, but this speculative support doesn't need to be in our
> initial patchset.

I don't consider it speculative. The patch is for a generic DMA
engine interface. That interface should encompass all users. I have
a security/crypto DMA engine that I'd like to front with the generic
DMA interface today. Also, I believe there is another Intel group
with an XOR engine that had a similar concept called ADMA posted a
while ago.

http://marc.theaimsgroup.com/?t=112603120100004&r=1&w=2

>> Shouldn't we also have a dma_async_client_chan_free()?
>
> Well we could just define it to be chan_request(0) but it doesn't seem
> to be needed. Also, the allocation mechanism we have for channels is
> different from alloc/free's semantics, so it may be best to not muddy
> the water in this area.

Can you explain what the semantics are?

It's been a little while since I posted so my thoughts on the subject
are going to take a little while to come back to me :)

- kumar

2006-03-28 22:01:54

by Andrew Grover

[permalink] [raw]
Subject: Re: [PATCH 1/8] [I/OAT] DMA memcpy subsystem

On 3/28/06, Kumar Gala <[email protected]> wrote:

> >> Also, what do you think about adding an operation type (MEMCPY, XOR,
> >> CRYPTO_AES, etc.). We can then validate that the operation type
> >> expected is supported by the devices that exist.
> >
> > No objections, but this speculative support doesn't need to be in our
> > initial patchset.
>
> I don't consider it speculative. The patch is for a generic DMA
> engine interface. That interface should encompass all users. I have
> a security/crypto DMA engine that I'd like to front with the generic
> DMA interface today. Also, I believe there is another Intel group
> with an XOR engine that had a similar concept called ADMA posted a
> while ago.

Please submit patches then. We will be doing another rev of the I/OAT
patch very soon, which you will be able to patch against. Or, once the
patch gets in mainline then we can enhance it. Code in the Linux
kernel is never "done", and the burden of implementing additional
functionality falls on those who want it.

> Can you explain what the semantics are?
>
> It's been a little while since I posted so my thoughts on the subject
> are going to take a little while to come back to me :)

Yeah. Basically you register as a DMA client and say how many DMA
channels you want. Our net_dma patch, for example, uses multiple
channels to reduce lock contention. When channels become available
(i.e. a DMA device is added or another client gives them up) you get a
callback. If a channel goes away (i.e. the DMA device is removed,
which is theoretically possible but practically never happens, or
*you* are going away and change your request to 0 channels) you get a
remove callback.

This gets around the problem of DMA clients registering (and therefore
not getting channels) simply because they init before the DMA device
is discovered.
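
In code, that lifecycle looks roughly like this on the client side (a
sketch against the posted API; my_client, my_chan, and the callback body
are illustrative):

static struct dma_client *my_client;    /* hypothetical client state */
static struct dma_chan *my_chan;

static void my_dma_event(struct dma_client *client,
                         struct dma_chan *chan, enum dma_event event)
{
        switch (event) {
        case DMA_RESOURCE_ADDED:
                my_chan = chan;         /* start offloading copies */
                break;
        case DMA_RESOURCE_REMOVED:
                my_chan = NULL;         /* fall back to memcpy() from now on */
                break;
        default:
                break;
        }
}

/* at init time: register and ask for one channel */
my_client = dma_async_client_register(my_dma_event);
dma_async_client_chan_request(my_client, 1);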

Regards -- Andy

2006-03-28 23:03:09

by Kumar Gala

[permalink] [raw]
Subject: Re: [PATCH 1/8] [I/OAT] DMA memcpy subsystem


On Mar 28, 2006, at 4:01 PM, Andrew Grover wrote:

> On 3/28/06, Kumar Gala <[email protected]> wrote:
>
>>>> Also, what do you think about adding an operation type (MEMCPY,
>>>> XOR,
>>>> CRYPTO_AES, etc.). We can then validate that the operation type
>>>> expected is supported by the devices that exist.
>>>
>>> No objections, but this speculative support doesn't need to be in
>>> our
>>> initial patchset.
>>
>> I don't consider it speculative. The patch is for a generic DMA
>> engine interface. That interface should encompass all users. I have
>> a security/crypto DMA engine that I'd like to front with the generic
>> DMA interface today. Also, I believe there is another Intel group
>> with an XOR engine that had a similar concept called ADMA posted a
>> while ago.
>
> Please submit patches then. We will be doing another rev of the I/OAT
> patch very soon, which you will be able to patch against. Or, once the
> patch gets in mainline then we can enhance it. Code in the Linux
> kernel is never "done", and the burden of implementing additional
> functionality falls on those who want it.

I completely understand that. However, I think putting something
into mainline that only solves the particular problem you have is a
bad idea. I'll provide patches for the changes I'd like to see, but I
figured a little discussion on the subject before I went off and
spent time on it was worthwhile.

>> Can you explain what the semantics are?
>>
>> It's been a little while since I posted so my thoughts on the subject
>> are going to take a little while to come back to me :)
>
> Yeah. Basically you register as a DMA client and say how many DMA
> channels you want. Our net_dma patch, for example, uses multiple
> channels to reduce lock contention. When channels become available
> (i.e. a DMA device is added or another client gives them up) you get a
> callback. If a channel goes away (i.e. the DMA device is removed,
> which is theoretically possible but practically never happens, or
> *you* are going away and change your request to 0 channels) you get a
> remove callback.

Do you only get a callback when a channel is available? How do you
decide whether to provide PIO to the client?

A client should only request multiple channels to handle multiple
concurrent operations.

> This gets around the problem of DMA clients registering (and therefore
> not getting channels) simply because they init before the DMA device
> is discovered.

What do you expect to happen in a system in which the channels are
oversubscribed?

Do you expect the DMA device driver to handle scheduling of channels
between multiple clients?

- kumar

2006-03-29 23:05:44

by Andrew Grover

[permalink] [raw]
Subject: Re: [PATCH 1/8] [I/OAT] DMA memcpy subsystem

On 3/28/06, Kumar Gala <[email protected]> wrote:
> Do you only get a callback when a channel is available?

Yes

> How do you
> decide whether to provide PIO to the client?

The client is responsible for using any channels it gets, or falling
back to memcpy() if it doesn't get any. (I don't understand how PIO
comes into the picture...?)

> A client should only request multiple channel to handle multiple
> concurrent operations.

Correct, if there aren't any CPU concurrency issues then 1 channel
will use the device's full bandwidth (unless some other client has
acquired the other channels and is using them, of course.)

> > This gets around the problem of DMA clients registering (and therefore
> > not getting channels) simply because they init before the DMA device
> > is discovered.
>
> What do you expect to happen in a system in which the channels are
> oversubscribed?
>
> Do you expect the DMA device driver to handle scheduling of channels
> between multiple clients?

It does the simplest thing that could possibly work right now:
channels are allocated first come, first served. When there is a need,
it should be straightforward to allow multiple clients to share DMA
channels.

Regards -- Andy

2006-03-30 08:01:39

by Kumar Gala

[permalink] [raw]
Subject: Re: [PATCH 1/8] [I/OAT] DMA memcpy subsystem


On Mar 29, 2006, at 5:05 PM, Andrew Grover wrote:

> On 3/28/06, Kumar Gala <[email protected]> wrote:
>> Do you only get a callback when a channel is available?
>
> Yes
>
>> How do you
>> decide whether to provide PIO to the client?
>
> The client is responsible for using any channels it gets, or falling
> back to memcpy() if it doesn't get any. (I don't understand how PIO
> comes into the picture..?)

I was under the impression that the DMA engine would provide a "sync"
CPU-based memcpy (PIO) if a real HW channel wasn't available; if this
is left to the client, that's fine. So how does the client know it
should use normal memcpy()?

>> A client should only request multiple channel to handle multiple
>> concurrent operations.
>
> Correct, if there aren't any CPU concurrency issues then 1 channel
> will use the device's full bandwidth (unless some other client has
> acquired the other channels and is using them, of course.)
>
>>> This gets around the problem of DMA clients registering (and therefore
>>> not getting channels) simply because they init before the DMA device
>>> is discovered.
>>
>> What do you expect to happen in a system in which the channels are
>> oversubscribed?
>>
>> Do you expect the DMA device driver to handle scheduling of channels
>> between multiple clients?
>
> It does the simplest thing that could possibly work right now:
> channels are allocated first come first serve. When there is a need,
> it should be straightforward to allow multiple clients to share DMA
> channels.

Sounds good for a start. Have you given any thought to handling
priorities between clients?

I need to take a look at the latest patches. How would you guys like
modifications?

- k

2006-03-30 18:27:46

by Andrew Grover

[permalink] [raw]
Subject: Re: [PATCH 1/8] [I/OAT] DMA memcpy subsystem

On 3/30/06, Kumar Gala <[email protected]> wrote:
> I was under the impression that the DMA engine would provide a "sync"
> CPU-based memcpy (PIO) if a real HW channel wasn't available; if this
> is left to the client, that's fine. So how does the client know it
> should use normal memcpy()?

It has to keep track of what DMA channel to use, which it gets when
the channel ADDED callback happens. So it's basically

if (some_client_struct->dma_chan)
        dma_memcpy();   /* offload the copy to the channel we were given */
else
        memcpy();       /* no channel -- fall back to a CPU copy */

The async memcpy has the added requirement that at some point the
client must verify the copies have been completed, so doing async
memcopies does require more work on the client's part.
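
That verification step, sketched with the polling calls from patch 1
(chan and cookie are assumed to come from an earlier dma_async_memcpy_*()
call; the bare spin is illustrative -- a real client would overlap useful
work instead):

dma_cookie_t last, used;

dma_async_memcpy_issue_pending(chan);
while (dma_async_memcpy_complete(chan, cookie, &last, &used)
                == DMA_IN_PROGRESS)
        cpu_relax();    /* busy-wait until the engine catches up */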

> Sounds good for a start. Have you given any thoughts on handling
> priorities between clients?
>
> I need to take a look at the latest patches. How would you guys like
> modifications?

Haven't given any thought to priorities yet -- we've been focusing on
getting the 1 client case to perform well. :)

Chris posted a link to this: git://198.78.49.142/~cleech/linux-2.6
branch ioat-2.6.17

So you can post patches against that, or the patches posted here apply
against davem's git tree.

Regards -- Andy