2005-03-07 21:16:43

by Evgeniy Polyakov

[permalink] [raw]
Subject: [0/many] Acrypto - asynchronous crypto layer for linux kernel 2.6



I'm pleased to announce asynchronous crypto layer for Linux kernel 2.6.
It supports following features:
- multiple asynchronous crypto device queues
- crypto session routing
- crypto session binding
- modular load balancing
- crypto session batching genetically implemented by design
- crypto session priority
- different kinds of crypto operation(RNG, asymmetrical crypto, HMAC and
any other)

Some design notes:
acrypto has one main crypto session queue, into which each
newly allocated session is inserted and this is a place where load
balancing searches it's food. When new session is being prepared for
insertion it calls load balancer's ->find_device() method, which should
return suitable device(current simple_lb load balancer returns device
with the lowest load(device has the least number of session in it's
queue)) if it exists. After crypto_device being returned acrypto creates
new crypto routing entry which points to returned device and adds it to
crypto session routing queue. Crypto session is being inserted into
device's queue according to it's priority and it is crypto device driver
that should process it's session list according to session's priority.

Each crypto load balancer must implement 2 methods:
->rehash() and ->find_device() which may be called from any context and
under spinlock.
->rehash() method should be called to remix crypto sessions in device's
queues, for example if driver decides that it's device is broken it
marks itself as broken and load balancer(or scheduler if you like)
should remove all sessions from this queue to some other devices.
If session can not be completed scheduler must mark it as broken and
complete it(by calling first broke_session() and then complete_session()
and stop_process_session()). Consumer must check if operation was
successful(and therefore session is not broken).
->find_device() method should return appropriate crypto device.
Since load balancers may be loaded and unloaded without any restriction,
one may create it's own crypto load balancers, which may use
crpypto session's (crypto_data) private area to select appropriate device,
for example, one may store process' pid in private area and write it's own
crypto load balancer which will select private crypto device for given PID,
and the rest of the cypto system to process other requests.

For crypto session to be successfully allocated crypto consumer must
provide two structures - struct crypto_session_initializer and struct crypto_data.
struct crypto_session_initializer contains data needed to find
appropriate device, like type of operation, mode of operation, some
flags(for example SESSION_BINDED, which means that session must be bound
to specified in bdev field crypto device, it is useful for TCPA/TPM),
session priority and callback which will be called after all routing for
given session are finished.
struct crypto_data contains scatterlists for src, dst, key and iv.
It also has void *priv field and it's size which is allocated and may be
used by any crypto agent(for example VIA PadLock driver uses it to store
aes_ctx field, crypto_session can use this field to store some pointers
needed in ->callback()).
Actually callback will be called from work queue context, but I suppose it is
better to not assume calling context.
->callback() will be called after all crypto routing for given session
are done with the same parameters as were provided in initialisation
time(if session has only one routing callback will be called with
original parameters, but if it has several routes callback will be
called with parameters from the latest processed one). I believe crypto
callback should not know about crypto sessions, routings, device and so
on, proper restriction is always a good idea.

Crypto routing.
This feature allows the same session to be processed by several
devices/algorithms. For example if you need to encrypt data and then
sign it in TPM device you can create one route to encryption device and
then route it to TPM device, or this can be used for tweakable cipher
encryption without 2-atomic-maps restriction.

Crypto device.
It can be either software emulator or hardware accelerator chip(like
HIFN 79*/83* or Via PadLock ACE/RNG, or even TPM device like each IBM
ThinkPad or some HP laptops have.
It can be registered with asynchronous crypto layer and must provide
some data for it:
->data_ready() method - it is called each time new session is added to
device's queue.
Array of struct crypto_capability and it's amount -
struct crypto_capability describes each operation given device can
handle, and has a maximum session queue length parameter.
Note: this structure can [be extended to] include "rate" parameter to
show absolute speed of given operation in some units, which therefore
can be used by scheduler(load balancer) for proper device selection.
Actually queue length can somehow reflects device's "speed".
Note2: it can be calculated using ptime parameter of the session initializer -
it is time given session was processed in crypto device.

Acrypto has full userspace support through ioctl and direct process' vmas and pages access.
It is done using ioctl() with 2 copyings from+to userspace data.
Session processing contains of 3 major parts:
1. Session creation. CRYPTO_SESSION_ALLOC ioctl.
User must provide special structure which has src, dst, key and iv data sizes
and crypto initializer(crypto operation, mode, type and priority).
2. Data filling. User must call several CRYPTO_FILL_DATA ioctls.
Each one requires data size and data type(structure crypto_user_data) and data itself.
3. Finish. User must call CRYPTO_SESSION_ADD ioctl with pointer to the are whre crypting result must be stored.
The latter ioctl will sleep while session is being processed.

Second userspace communication mechanism is based on direct access to the process'
vmas and pages from acrypto, pointers are transferred using special kernel connector structure.
Obviously it can not be used with the most hardware, but I like the idea itself.

Currently supported HIFN 7955(small load testing), via padlock driver(not tested),
driver for CE-InfoSys FastCrypt PCI card equipped with a SuperCrypt CE99C003B chip(not tested).


2005-03-07 20:29:27

by Evgeniy Polyakov

[permalink] [raw]
Subject: [??/many] acrypto benchmarks vs cryptoloop vs dm_crypt


Benchmark: Bonnie++ 1.03.
Machine: 2-way Xeon (1+1HT), 1Gb ram.
Ext2 filesystem over file(mapped using loop(cryptoloop, dm_crypt)
or bd_fd filter for bd).

Version @version@ ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
acrypto 2500M 5479 74 8122 5 4886 4 5690 73 10713 4 22.0 0
cryptoloop 2500M 5812 71 10437 7 4402 5 7165 92 10763 6 88.3 0
dm_crypt 2500M 6040 90 6747 36 4768 8 5775 66 10161 5 90.6 0
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
acrypto 16 1345 99 +++++ +++ +++++ +++ 1403 99 +++++ +++ 4538 100
cryptoloop 16 1372 99 +++++ +++ +++++ +++ 1405 99 +++++ +++ 4501 99
dm_crypt 16 1352 99 +++++ +++ +++++ +++ 1371 99 +++++ +++ 4278 100


bd+acrypto works exactly as cryptoloop (attitude of the performance
acrypto vs. cryptoloop is always the same as CPU usage attitude,
BUT!, I can not setup bd+acrypto to use the same amount of CPU as loopdev!,
so in absolute numbers, cryptoloop is faster).
dm_crypt is slower.


2005-03-07 20:29:11

by Evgeniy Polyakov

[permalink] [raw]
Subject: [3/many] acrypto: acrypto.h

--- /tmp/empty/acrypto.h 1970-01-01 03:00:00.000000000 +0300
+++ ./acrypto/acrypto.h 2005-03-07 20:35:36.000000000 +0300
@@ -0,0 +1,245 @@
+/*
+ * acrypto.h
+ *
+ * Copyright (c) 2004 Evgeniy Polyakov <[email protected]>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#ifndef __ACRYPTO_H
+#define __ACRYPTO_H
+
+#define SCACHE_NAMELEN 32
+
+struct crypto_session_initializer;
+struct crypto_data;
+typedef void (*crypto_callback_t) (struct crypto_session_initializer *,
+ struct crypto_data *);
+
+struct crypto_device_stat
+{
+ __u64 scompleted;
+ __u64 sfinished;
+ __u64 sstarted;
+ __u64 kmem_failed;
+ __u64 pool_failed;
+};
+
+#ifdef __KERNEL__
+
+#include <linux/types.h>
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/device.h>
+#include <linux/workqueue.h>
+#include <linux/mempool.h>
+
+#include <asm/scatterlist.h>
+
+#define DEBUG
+#ifdef DEBUG
+#define dprintk(f, a...) printk(f, ##a)
+#define dprintka(f, a...) printk(f, ##a)
+#else
+#define dprintk(f, a...)
+#define dprintka(f, a...)
+#endif
+
+extern void crypto_wake_lb(void);
+
+#define SESSION_COMPLETED (1<<15)
+#define SESSION_FINISHED (1<<14)
+#define SESSION_STARTED (1<<13)
+#define SESSION_PROCESSED (1<<12)
+#define SESSION_BINDED (1<<11)
+#define SESSION_BROKEN (1<<10)
+#define SESSION_FROM_CACHE (1<<9)
+
+#define session_completed(s) (s->ci.flags & SESSION_COMPLETED)
+#define complete_session(s) do {s->ci.flags |= SESSION_COMPLETED;} while(0)
+#define uncomplete_session(s) do {s->ci.flags &= ~SESSION_COMPLETED;} while (0)
+
+#define session_finished(s) (s->ci.flags & SESSION_FINISHED)
+#define finish_session(s) do {s->ci.flags |= SESSION_FINISHED;} while(0)
+#define unfinish_session(s) do {s->ci.flags &= ~SESSION_FINISHED;} while (0)
+
+#define session_started(s) (s->ci.flags & SESSION_STARTED)
+#define start_session(s) do {s->ci.flags |= SESSION_STARTED;} while(0)
+#define unstart_session(s) do {s->ci.flags &= ~SESSION_STARTED;} while (0)
+
+#define session_is_processed(s) (s->ci.flags & SESSION_PROCESSED)
+#define start_process_session(s) do {s->ci.flags |= SESSION_PROCESSED; s->ci.ptime = jiffies;} while(0)
+#define stop_process_session(s) do {s->ci.flags &= ~SESSION_PROCESSED; s->ci.ptime = jiffies - s->ci.ptime; crypto_wake_lb();} while (0)
+
+#define session_binded(s) (s->ci.flags & SESSION_BINDED)
+#define bind_session(s) do {s->ci.flags |= SESSION_BINDED;} while(0)
+#define unbind_session(s) do {s->ci.flags &= ~SESSION_BINDED;} while (0)
+#define sci_binded(ci) (ci->flags & SESSION_BINDED)
+
+#define session_broken(s) (s->ci.flags & SESSION_BROKEN)
+#define broke_session(s) do {s->ci.flags |= SESSION_BROKEN;} while(0)
+#define unbroke_session(s) do {s->ci.flags &= ~SESSION_BROKEN;} while (0)
+
+#define session_from_cache(s) (s->ci.flags & SESSION_FROM_CACHE)
+#define mark_session_from_cache(s) do {s->ci.flags |= SESSION_FROM_CACHE;} while(0)
+
+#define CRYPTO_MAX_PRIV_SIZE 1024
+
+#define DEVICE_BROKEN (1<<0)
+
+#define device_broken(dev) (dev->flags & DEVICE_BROKEN)
+#define broke_device(dev) do {dev->flags |= DEVICE_BROKEN;} while(0)
+#define repair_device(dev) do {dev->flags &= ~DEVICE_BROKEN;} while(0)
+
+struct crypto_capability {
+ u16 operation;
+ u16 type;
+ u16 mode;
+ u16 qlen;
+ u64 ptime;
+ u64 scomp;
+};
+
+struct crypto_session_initializer {
+ u16 operation;
+ u16 type;
+ u16 mode;
+ u16 priority;
+
+ u64 id;
+ u64 dev_id;
+
+ u32 flags;
+
+ u32 bdev;
+
+ u64 ptime;
+
+ crypto_callback_t callback;
+};
+
+struct crypto_data {
+ struct scatterlist *sg_src;
+ int sg_src_num;
+ struct scatterlist *sg_dst;
+ int sg_dst_num;
+ struct scatterlist *sg_key;
+ int sg_key_num;
+ struct scatterlist *sg_iv;
+ int sg_iv_num;
+
+ void *priv;
+ unsigned int priv_size;
+};
+
+struct crypto_device {
+ char name[SCACHE_NAMELEN];
+
+ spinlock_t session_lock;
+ struct list_head session_list;
+
+ u64 sid;
+ spinlock_t lock;
+
+ atomic_t refcnt;
+
+ u32 flags;
+
+ u32 id;
+
+ struct list_head cdev_entry;
+
+ void (*data_ready)(struct crypto_device *);
+
+ struct device_driver *driver;
+ struct device device;
+ struct class_device class_device;
+ struct completion dev_released;
+
+ spinlock_t stat_lock;
+ struct crypto_device_stat stat;
+
+ struct crypto_capability *cap;
+ int cap_number;
+
+ void *priv;
+
+ mempool_t *session_pool;
+ kmem_cache_t *session_cache;
+};
+
+struct crypto_route_head {
+ struct crypto_route *next;
+ struct crypto_route *prev;
+
+ __u32 qlen;
+ spinlock_t lock;
+};
+
+struct crypto_route {
+ struct crypto_route *next;
+ struct crypto_route *prev;
+
+ struct crypto_route_head *list;
+ struct crypto_device *dev;
+
+ struct crypto_session_initializer ci;
+};
+
+struct crypto_session {
+ struct list_head dev_queue_entry;
+ struct list_head main_queue_entry;
+
+ struct crypto_session_initializer ci;
+
+ struct crypto_data data;
+
+ spinlock_t lock;
+
+ struct work_struct work;
+
+ struct crypto_route_head route_list;
+
+ struct crypto_device *pool_dev;
+};
+
+struct crypto_session *crypto_session_alloc(struct crypto_session_initializer *, struct crypto_data *);
+struct crypto_session *crypto_session_create(struct crypto_session_initializer *, struct crypto_data *);
+void crypto_session_destroy(struct crypto_session *);
+void crypto_session_add(struct crypto_session *);
+void crypto_session_dequeue_main(struct crypto_session *);
+void __crypto_session_dequeue_main(struct crypto_session *);
+void __crypto_session_dequeue_route(struct crypto_session *);
+void crypto_session_dequeue_route(struct crypto_session *);
+
+void crypto_device_get(struct crypto_device *);
+void crypto_device_put(struct crypto_device *);
+struct crypto_device *crypto_device_get_name(char *);
+
+int __crypto_device_add(struct crypto_device *);
+int crypto_device_add(struct crypto_device *);
+void __crypto_device_remove(struct crypto_device *);
+void crypto_device_remove(struct crypto_device *);
+int match_initializer(struct crypto_device *, struct crypto_session_initializer *);
+int __match_initializer(struct crypto_capability *, struct crypto_session_initializer *);
+
+void crypto_session_insert_main(struct crypto_device *dev, struct crypto_session *s);
+void crypto_session_insert(struct crypto_device *dev, struct crypto_session *s);
+void __crypto_session_insert(struct crypto_device *dev, struct crypto_session *s);
+
+#endif /* __KERNEL__ */
+#endif /* __ACRYPTO_H */

2005-03-07 20:33:55

by Evgeniy Polyakov

[permalink] [raw]
Subject: [40/many] arch: sparc config

--- ./arch/sparc/Kconfig~ 2005-03-02 10:37:30.000000000 +0300
+++ ./arch/sparc/Kconfig 2005-03-07 21:30:22.000000000 +0300
@@ -390,4 +390,6 @@

source "crypto/Kconfig"

+source "acrypto/Kconfig"
+
source "lib/Kconfig"

2005-03-07 20:33:58

by Evgeniy Polyakov

[permalink] [raw]
Subject: [18/many] acrypto: crypto_user_direct.h

--- /tmp/empty/crypto_user_direct.h 1970-01-01 03:00:00.000000000 +0300
+++ ./acrypto/crypto_user_direct.h 2005-03-07 20:35:36.000000000 +0300
@@ -0,0 +1,74 @@
+/*
+ * crypto_user_direct.h
+ *
+ * Copyright (c) 2004 Evgeniy Polyakov <[email protected]>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#ifndef __CRYPTO_USER_DIRECT_H
+#define __CRYPTO_USER_DIRECT_H
+
+struct crypto_user_direct
+{
+ __u64 src;
+ __u32 src_size;
+ __u64 dst;
+ __u32 dst_size;
+
+ __u16 operation;
+ __u16 type;
+ __u16 mode;
+ __u16 priority;
+
+ int pid;
+
+ int key_size;
+ int iv_size;
+
+ __u8 data[0];
+};
+
+#ifdef __KERNEL__
+
+#include <linux/sched.h>
+#include <linux/mm.h>
+#include <linux/pagemap.h>
+
+struct crypto_user_direct_kern
+{
+ struct list_head entry;
+
+ u32 seq;
+ u32 ack;
+
+ struct crypto_user_direct usr;
+ u8 *key;
+ u8 *iv;
+
+ int snum, dnum;
+ struct page **sp, **dp;
+ struct vm_area_struct **svma, **dvma;
+ struct mm_struct *mm;
+};
+
+int crypto_user_direct_init(void);
+void crypto_user_direct_fini(void);
+int crypto_user_direct_add_request(u32 seq, u32 ack, struct crypto_user_direct *usr);
+
+#endif /* __KERNEL__ */
+
+#endif /* __CRYPTO_USER_DIRECT_H */

2005-03-07 20:33:57

by Evgeniy Polyakov

[permalink] [raw]
Subject: [12/many] acrypto: crypto_route.h

--- /tmp/empty/crypto_route.h 1970-01-01 03:00:00.000000000 +0300
+++ ./acrypto/crypto_route.h 2005-03-07 20:35:36.000000000 +0300
@@ -0,0 +1,242 @@
+/*
+ * crypto_route.h
+ *
+ * Copyright (c) 2004 Evgeniy Polyakov <[email protected]>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#ifndef __CRYPTO_ROUTE_H
+#define __CRYPTO_ROUTE_H
+
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+
+#include "acrypto.h"
+
+static inline struct crypto_route *crypto_route_alloc_direct(struct crypto_device *dev,
+ struct crypto_session_initializer *ci)
+{
+ struct crypto_route *rt;
+
+ rt = kmalloc(sizeof(*rt), GFP_ATOMIC);
+ if (!rt) {
+ crypto_device_put(dev);
+ return NULL;
+ }
+
+ memset(rt, 0, sizeof(*rt));
+ memcpy(&rt->ci, ci, sizeof(*ci));
+
+ rt->dev = dev;
+
+ return rt;
+}
+
+static inline struct crypto_route *crypto_route_alloc(struct crypto_device *dev,
+ struct crypto_session_initializer *ci)
+{
+ struct crypto_route *rt;
+
+ if (!match_initializer(dev, ci))
+ return NULL;
+
+ rt = crypto_route_alloc_direct(dev, ci);
+
+ return rt;
+}
+
+static inline void crypto_route_free(struct crypto_route *rt)
+{
+ crypto_device_put(rt->dev);
+ rt->dev = NULL;
+ kfree(rt);
+}
+
+static inline void __crypto_route_del(struct crypto_route *rt, struct crypto_route_head *list)
+{
+ struct crypto_route *next, *prev;
+
+ list->qlen--;
+ next = rt->next;
+ prev = rt->prev;
+ rt->next = rt->prev = NULL;
+ rt->list = NULL;
+ next->prev = prev;
+ prev->next = next;
+}
+
+static inline void crypto_route_del(struct crypto_route *rt)
+{
+ struct crypto_route_head *list = rt->list;
+
+ if (list) {
+ spin_lock_irq(&list->lock);
+ if (list == rt->list)
+ __crypto_route_del(rt, rt->list);
+ spin_unlock_irq(&list->lock);
+
+ crypto_route_free(rt);
+ }
+}
+
+static inline struct crypto_route *__crypto_route_dequeue(struct crypto_route_head *list)
+{
+ struct crypto_route *next, *prev, *result;
+
+ prev = (struct crypto_route *)list;
+ next = prev->next;
+ result = NULL;
+ if (next != prev) {
+ result = next;
+ next = next->next;
+ list->qlen--;
+ next->prev = prev;
+ prev->next = next;
+ result->next = result->prev = NULL;
+ result->list = NULL;
+ }
+ return result;
+}
+
+static inline struct crypto_route *crypto_route_dequeue(struct crypto_session *s)
+{
+ struct crypto_route *rt;
+
+ spin_lock_irq(&s->route_list.lock);
+
+ rt = __crypto_route_dequeue(&s->route_list);
+
+ spin_unlock_irq(&s->route_list.lock);
+
+ return rt;
+}
+
+static inline void __crypto_route_queue(struct crypto_route *rt, struct crypto_route_head *list)
+{
+ struct crypto_route *prev, *next;
+
+ rt->list = list;
+ list->qlen++;
+ next = (struct crypto_route *)list;
+ prev = next->prev;
+ rt->next = next;
+ rt->prev = prev;
+ next->prev = prev->next = rt;
+}
+
+static inline void crypto_route_queue(struct crypto_route *rt, struct crypto_session *s)
+{
+
+ spin_lock_irq(&s->route_list.lock);
+
+ __crypto_route_queue(rt, &s->route_list);
+
+ spin_unlock_irq(&s->route_list.lock);
+}
+
+static inline int crypto_route_add(struct crypto_device *dev, struct crypto_session *s,
+ struct crypto_session_initializer *ci)
+{
+ struct crypto_route *rt;
+
+ rt = crypto_route_alloc(dev, ci);
+ if (!rt)
+ return -ENOMEM;
+
+ crypto_route_queue(rt, s);
+
+ return 0;
+}
+
+static inline int crypto_route_add_direct(struct crypto_device *dev, struct crypto_session *s,
+ struct crypto_session_initializer *ci)
+{
+ struct crypto_route *rt;
+
+ rt = crypto_route_alloc_direct(dev, ci);
+ if (!rt)
+ return -ENOMEM;
+
+ crypto_route_queue(rt, s);
+
+ return 0;
+}
+
+static inline int crypto_route_queue_len(struct crypto_session *s)
+{
+ return s->route_list.qlen;
+}
+
+static inline void crypto_route_head_init(struct crypto_route_head *list)
+{
+ spin_lock_init(&list->lock);
+ list->prev = list->next = (struct crypto_route *)list;
+ list->qlen = 0;
+}
+
+static inline struct crypto_route *__crypto_route_current(struct crypto_route_head *list)
+{
+ struct crypto_route *next, *prev, *result;
+
+ prev = (struct crypto_route *)list;
+ next = prev->next;
+ result = NULL;
+ if (next != prev)
+ result = next;
+
+ return result;
+}
+
+static inline struct crypto_route *crypto_route_current(struct crypto_session *s)
+{
+ struct crypto_route_head *list;
+ struct crypto_route *rt = NULL;
+
+ list = &s->route_list;
+
+ if (list) {
+ spin_lock_irq(&list->lock);
+
+ rt = __crypto_route_current(list);
+
+ spin_unlock_irq(&list->lock);
+ }
+
+ return rt;
+}
+
+static inline struct crypto_device *crypto_route_get_current_device(struct crypto_session *s)
+{
+ struct crypto_route *rt = NULL;
+ struct crypto_device *dev = NULL;
+ struct crypto_route_head *list = &s->route_list;
+
+ spin_lock_irq(&list->lock);
+
+ rt = __crypto_route_current(list);
+ if (rt) {
+ dev = rt->dev;
+ crypto_device_get(dev);
+ }
+
+ spin_unlock_irq(&list->lock);
+
+ return dev;
+}
+
+#endif /* __CRYPTO_ROUTE_H */

2005-03-07 20:33:56

by Evgeniy Polyakov

[permalink] [raw]
Subject: [10/many] acrypto: crypto_lb.h

--- /tmp/empty/crypto_lb.h 1970-01-01 03:00:00.000000000 +0300
+++ ./acrypto/crypto_lb.h 2005-03-07 20:35:36.000000000 +0300
@@ -0,0 +1,63 @@
+/*
+ * crypto_lb.h
+ *
+ * Copyright (c) 2004 Evgeniy Polyakov <[email protected]>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#ifndef __CRYPTO_LB_H
+#define __CRYPTO_LB_H
+
+#include "acrypto.h"
+
+#define CRYPTO_LB_NAMELEN 32
+
+struct crypto_lb
+{
+ struct list_head lb_entry;
+
+ char name[CRYPTO_LB_NAMELEN];
+
+ void (*rehash)(struct crypto_lb *);
+ struct crypto_device * (*find_device) (struct crypto_lb *,
+ struct crypto_session_initializer *,
+ struct crypto_data *);
+
+ spinlock_t lock;
+
+ spinlock_t *crypto_device_lock;
+ struct list_head *crypto_device_list;
+
+ struct device_driver *driver;
+ struct device device;
+ struct class_device class_device;
+ struct completion dev_released;
+
+};
+
+int crypto_lb_register(struct crypto_lb *lb, int set_current, int set_default);
+void crypto_lb_unregister(struct crypto_lb *);
+
+inline void crypto_lb_rehash(void);
+struct crypto_device *crypto_lb_find_device(struct crypto_session_initializer *, struct crypto_data *);
+
+void crypto_wake_lb(void);
+
+int crypto_lb_init(void);
+void crypto_lb_fini(void);
+
+#endif /* __CRYPTO_LB_H */

2005-03-07 20:43:44

by Evgeniy Polyakov

[permalink] [raw]
Subject: [44/many] arch: x86_64 config

--- ./arch/x86_64/Kconfig~ 2005-03-02 10:38:18.000000000 +0300
+++ ./arch/x86_64/Kconfig 2005-03-07 21:31:22.000000000 +0300
@@ -456,4 +456,6 @@

source "crypto/Kconfig"

+source "acrypto/Kconfig"
+
source "lib/Kconfig"

2005-03-07 20:43:43

by Evgeniy Polyakov

[permalink] [raw]
Subject: [20/many] acrypto: crypto_user_ioctl.h

--- /tmp/empty/crypto_user_ioctl.h 1970-01-01 03:00:00.000000000 +0300
+++ ./acrypto/crypto_user_ioctl.h 2005-03-07 20:35:36.000000000 +0300
@@ -0,0 +1,67 @@
+/*
+ * crypto_user_ioctl.h
+ *
+ * Copyright (c) 2004 Evgeniy Polyakov <[email protected]>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#ifndef __CRYPTO_USER_IOCTL_H
+#define __CRYPTO_USER_IOCTL_H
+
+struct crypto_user_ioctl
+{
+ __u16 src_size;
+ __u16 dst_size;
+ __u16 key_size;
+ __u16 iv_size;
+
+ __u16 operation;
+ __u16 type;
+ __u16 mode;
+ __u16 priority;
+};
+
+#define CRYPTO_USER_IOCTL_SYM 'U'
+#define CRYPTO_SESSION_ALLOC _IOW(CRYPTO_USER_IOCTL_SYM, 0, struct crypto_user_ioctl)
+#define CRYPTO_SESSION_ADD _IOR(CRYPTO_USER_IOCTL_SYM, 1, char *)
+#define CRYPTO_FILL_DATA _IOW(CRYPTO_USER_IOCTL_SYM, 2, struct crypto_user_data)
+
+
+#ifdef __KERNEL__
+
+#include <linux/ioctl.h>
+
+#include "crypto_user.h"
+
+struct crypto_user_ioctl_kern
+{
+ struct crypto_session_initializer ci;
+ struct crypto_data data;
+ struct crypto_session *s;
+
+ int scompleted;
+ wait_queue_head_t wait;
+
+ struct crypto_user_data usr[4];
+ void *ptr[4];
+};
+
+int crypto_user_ioctl_init(void);
+void crypto_user_ioctl_fini(void);
+
+#endif /* __KERNEL__ */
+#endif /* __CRYPTO_USER_IOCTL_H */

2005-03-07 20:48:04

by Evgeniy Polyakov

[permalink] [raw]
Subject: [11/many] acrypto: crypto_main.c

--- /tmp/empty/crypto_main.c 1970-01-01 03:00:00.000000000 +0300
+++ ./acrypto/crypto_main.c 2005-03-07 20:35:36.000000000 +0300
@@ -0,0 +1,374 @@
+/*
+ * crypto_main.c
+ *
+ * Copyright (c) 2004 Evgeniy Polyakov <[email protected]>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/types.h>
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <linux/interrupt.h>
+#include <linux/spinlock.h>
+
+#include "acrypto.h"
+#include "crypto_lb.h"
+#include "crypto_conn.h"
+#include "crypto_route.h"
+#include "crypto_user_ioctl.h"
+
+int force_lb_remove;
+module_param(force_lb_remove, int, 0);
+
+struct crypto_device main_crypto_device;
+
+extern struct bus_type crypto_bus_type;
+extern struct device_driver crypto_driver;
+extern struct class crypto_class;
+extern struct device crypto_dev;
+
+extern struct class_device_attribute class_device_attr_devices;
+extern struct class_device_attribute class_device_attr_lbs;
+
+static void dump_ci(struct crypto_session_initializer *ci)
+{
+ dprintk("%llu [%llu] op=%04u, type=%04x, mode=%04x, priority=%04x",
+ ci->id, ci->dev_id,
+ ci->operation, ci->type, ci->mode, ci->priority);
+}
+
+void __crypto_session_insert(struct crypto_device *dev, struct crypto_session *s)
+{
+ struct crypto_session *__s;
+
+ if (unlikely(list_empty(&dev->session_list))) {
+ list_add(&s->dev_queue_entry, &dev->session_list);
+ } else {
+ int inserted = 0;
+
+ list_for_each_entry(__s, &dev->session_list, dev_queue_entry) {
+ if (__s->ci.priority < s->ci.priority) {
+ list_add_tail(&s->dev_queue_entry, &__s->dev_queue_entry);
+ inserted = 1;
+ break;
+ }
+ }
+
+ if (!inserted)
+ list_add_tail(&s->dev_queue_entry, &dev->session_list);
+ }
+
+ dump_ci(&s->ci);
+ dprintk(" added to crypto device %s [%d].\n", dev->name, atomic_read(&dev->refcnt));
+}
+
+void crypto_session_insert_main(struct crypto_device *dev, struct crypto_session *s)
+{
+ struct crypto_session *__s;
+
+ spin_lock_irq(&dev->session_lock);
+
+ crypto_device_get(dev);
+ if (unlikely(list_empty(&dev->session_list))) {
+ list_add(&s->main_queue_entry, &dev->session_list);
+ } else {
+ int inserted = 0;
+
+ list_for_each_entry(__s, &dev->session_list, main_queue_entry) {
+ if (__s->ci.priority < s->ci.priority) {
+ list_add_tail(&s->main_queue_entry,
+ &__s->main_queue_entry);
+ inserted = 1;
+ break;
+ }
+ }
+
+ if (!inserted)
+ list_add_tail(&s->main_queue_entry, &dev->session_list);
+ }
+
+ spin_unlock_irq(&dev->session_lock);
+}
+
+void crypto_session_insert(struct crypto_device *dev, struct crypto_session *s)
+{
+ spin_lock_irq(&dev->session_lock);
+ __crypto_session_insert(dev, s);
+ spin_unlock_irq(&dev->session_lock);
+}
+
+void crypto_session_destroy(struct crypto_session *s)
+{
+ if (s->data.priv_size && s->data.priv)
+ kfree(s->data.priv);
+
+ if (session_from_cache(s))
+ kmem_cache_free(s->pool_dev->session_cache, s);
+ else
+ mempool_free(s, s->pool_dev->session_pool);
+}
+
+struct crypto_session *crypto_session_create(struct crypto_session_initializer *ci, struct crypto_data *d)
+{
+ struct crypto_device *dev = &main_crypto_device;
+ struct crypto_device *ldev;
+ struct crypto_session *s;
+ int err;
+
+ if (d->priv_size > CRYPTO_MAX_PRIV_SIZE) {
+ dprintk("priv_size %u is too big, maximum allowed %u.\n",
+ d->priv_size, CRYPTO_MAX_PRIV_SIZE);
+ return NULL;
+ }
+
+ ldev = crypto_lb_find_device(ci, d);
+ if (!ldev) {
+ dprintk("Cannot find suitable device for [%02x.%02x.%02x.%02x].\n",
+ ci->operation, ci->mode, ci->type, ci->priority);
+ return NULL;
+ }
+
+ s = mempool_alloc(ldev->session_pool, GFP_ATOMIC);
+ if (!s) {
+ ldev->stat.pool_failed++;
+
+ s = kmem_cache_alloc(ldev->session_cache, GFP_ATOMIC);
+ if (!s) {
+ ldev->stat.kmem_failed++;
+ goto err_out_device_put;
+ }
+
+ mark_session_from_cache(s);
+ }
+
+ s->pool_dev = ldev;
+
+ crypto_route_head_init(&s->route_list);
+ INIT_LIST_HEAD(&s->dev_queue_entry);
+ INIT_LIST_HEAD(&s->main_queue_entry);
+
+ spin_lock_init(&s->lock);
+
+ memcpy(&s->ci, ci, sizeof(s->ci));
+ memcpy(&s->data, d, sizeof(s->data));
+
+ s->data.priv = NULL;
+ if (d->priv_size) {
+ s->data.priv = kmalloc(d->priv_size, GFP_ATOMIC);
+ if (!s->data.priv)
+ goto err_out_session_free;
+
+ if (d->priv)
+ memcpy(s->data.priv, d->priv, d->priv_size);
+ }
+ else
+ s->data.priv = d->priv;
+
+ s->ci.id = dev->sid++;
+ s->ci.dev_id = ldev->sid++;
+ s->ci.flags = 0;
+
+ err = crypto_route_add_direct(ldev, s, ci);
+ if (err) {
+ dprintk("Can not add route to device %s.\n", ldev->name);
+ goto err_out_session_free;
+ }
+
+ return s;
+
+err_out_session_free:
+ crypto_session_destroy(s);
+err_out_device_put:
+ crypto_device_put(ldev);
+
+ return NULL;
+}
+
+void crypto_session_add(struct crypto_session *s)
+{
+ struct crypto_device *ldev;
+ struct crypto_device *dev = &main_crypto_device;
+
+ ldev = crypto_route_get_current_device(s);
+ BUG_ON(!ldev); /* This can not happen. */
+
+ spin_lock_irq(&s->lock);
+ crypto_session_insert(ldev, s);
+ crypto_device_put(ldev);
+ crypto_session_insert_main(dev, s);
+ spin_unlock_irq(&s->lock);
+
+ if (ldev->data_ready)
+ ldev->data_ready(ldev);
+}
+
+struct crypto_session *crypto_session_alloc(struct crypto_session_initializer *ci, struct crypto_data *d)
+{
+ struct crypto_session *s;
+
+ s = crypto_session_create(ci, d);
+ if (!s)
+ return NULL;
+
+ crypto_session_add(s);
+
+ return s;
+}
+
+void crypto_session_dequeue_route(struct crypto_session *s)
+{
+ struct crypto_route *rt;
+ struct crypto_device *dev;
+
+ BUG_ON(crypto_route_queue_len(s) > 1);
+
+ while ((rt = crypto_route_dequeue(s))) {
+ dev = rt->dev;
+
+ dprintk(KERN_INFO "Removing route entry for device %s.\n", dev->name);
+
+ spin_lock_irq(&dev->session_lock);
+ list_del_init(&s->dev_queue_entry);
+ spin_unlock_irq(&dev->session_lock);
+
+ crypto_route_free(rt);
+ }
+}
+
+void __crypto_session_dequeue_main(struct crypto_session *s)
+{
+ struct crypto_device *dev = &main_crypto_device;
+
+ list_del(&s->main_queue_entry);
+ crypto_device_put(dev);
+}
+
+void crypto_session_dequeue_main(struct crypto_session *s)
+{
+ struct crypto_device *dev = &main_crypto_device;
+
+ spin_lock_irq(&dev->session_lock);
+
+ __crypto_session_dequeue_main(s);
+
+ spin_unlock_irq(&dev->session_lock);
+}
+
+int __devinit cmain_init(void)
+{
+ struct crypto_device *dev = &main_crypto_device;
+ int err;
+
+ snprintf(dev->name, sizeof(dev->name), "crypto_sessions");
+
+ err = bus_register(&crypto_bus_type);
+ if (err) {
+ dprintk(KERN_ERR "Failed to register crypto bus: err=%d.\n",
+ err);
+ return err;
+ }
+
+ err = driver_register(&crypto_driver);
+ if (err) {
+ dprintk(KERN_ERR "Failed to register crypto driver: err=%d.\n",
+ err);
+ goto err_out_bus_unregister;
+ }
+
+ err = class_register(&crypto_class);
+ if (err) {
+ dprintk(KERN_ERR "Failed to register crypto class: err=%d.\n",
+ err);
+ goto err_out_driver_unregister;
+ }
+
+ err = crypto_lb_init();
+ if (err)
+ goto err_out_class_unregister;
+
+ err = crypto_conn_init();
+ if (err)
+ goto err_out_crypto_lb_fini;
+
+ err = __crypto_device_add(dev);
+ if (err)
+ goto err_out_crypto_conn_fini;
+
+ err = class_device_create_file(&dev->class_device, &class_device_attr_devices);
+ if (err)
+ dprintk("Failed to create \"devices\" attribute: err=%d.\n", err);
+
+ err = class_device_create_file(&dev->class_device, &class_device_attr_lbs);
+ if (err)
+ dprintk("Failed to create \"lbs\" attribute: err=%d.\n", err);
+
+ err = crypto_user_ioctl_init();
+ if (err)
+ goto err_out_remove_files;
+
+ return 0;
+
+err_out_remove_files:
+ class_device_remove_file(&dev->class_device, &class_device_attr_devices);
+ class_device_remove_file(&dev->class_device, &class_device_attr_lbs);
+ __crypto_device_remove(dev);
+err_out_crypto_conn_fini:
+ crypto_conn_fini();
+err_out_crypto_lb_fini:
+ crypto_lb_fini();
+err_out_class_unregister:
+ class_unregister(&crypto_class);
+err_out_driver_unregister:
+ driver_unregister(&crypto_driver);
+err_out_bus_unregister:
+ bus_unregister(&crypto_bus_type);
+
+ return err;
+}
+
+void __devexit cmain_fini(void)
+{
+ struct crypto_device *dev = &main_crypto_device;
+
+ crypto_user_ioctl_fini();
+
+ class_device_remove_file(&dev->class_device, &class_device_attr_devices);
+ class_device_remove_file(&dev->class_device, &class_device_attr_lbs);
+ __crypto_device_remove(dev);
+
+ crypto_conn_fini();
+ crypto_lb_fini();
+
+ class_unregister(&crypto_class);
+ driver_unregister(&crypto_driver);
+ bus_unregister(&crypto_bus_type);
+}
+
+module_init(cmain_init);
+module_exit(cmain_fini);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Evgeniy Polyakov <[email protected]>");
+MODULE_DESCRIPTION("Asynchronous crypto layer.");
+
+EXPORT_SYMBOL(crypto_session_alloc);
+EXPORT_SYMBOL_GPL(crypto_session_create);
+EXPORT_SYMBOL_GPL(crypto_session_add);
+EXPORT_SYMBOL_GPL(crypto_session_dequeue_route);

2005-03-07 20:48:04

by Evgeniy Polyakov

[permalink] [raw]
Subject: [36/many] arch: ppc64 config

--- ./arch/ppc64/Kconfig~ 2005-03-02 10:38:10.000000000 +0300
+++ ./arch/ppc64/Kconfig 2005-03-07 21:29:24.000000000 +0300
@@ -396,4 +396,6 @@

source "crypto/Kconfig"

+source "acrypto/Kconfig"
+
source "lib/Kconfig"

2005-03-07 20:52:59

by Evgeniy Polyakov

[permalink] [raw]
Subject: [13/many] acrypto: crypto_stat.c

--- /tmp/empty/crypto_stat.c 1970-01-01 03:00:00.000000000 +0300
+++ ./acrypto/crypto_stat.c 2005-03-07 20:35:36.000000000 +0300
@@ -0,0 +1,100 @@
+/*
+ * crypto_stat.c
+ *
+ * Copyright (c) 2004 Evgeniy Polyakov <[email protected]>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/types.h>
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <linux/interrupt.h>
+#include <linux/spinlock.h>
+
+#include "acrypto.h"
+#include "crypto_route.h"
+
+void crypto_stat_start_inc(struct crypto_session *s)
+{
+ struct crypto_device *dev;
+
+ dev = crypto_route_get_current_device(s);
+ if (dev) {
+ spin_lock_irq(&dev->stat_lock);
+ dev->stat.sstarted++;
+ spin_unlock_irq(&dev->stat_lock);
+
+ crypto_device_put(dev);
+ }
+}
+
+void crypto_stat_finish_inc(struct crypto_session *s)
+{
+ struct crypto_device *dev;
+
+ dev = crypto_route_get_current_device(s);
+ if (dev) {
+ spin_lock_irq(&dev->stat_lock);
+ dev->stat.sfinished++;
+ spin_unlock_irq(&dev->stat_lock);
+
+ crypto_device_put(dev);
+ }
+}
+
+void crypto_stat_complete_inc(struct crypto_session *s)
+{
+ struct crypto_device *dev;
+
+ dev = crypto_route_get_current_device(s);
+ if (dev) {
+ spin_lock_irq(&dev->stat_lock);
+ dev->stat.scompleted++;
+ spin_unlock_irq(&dev->stat_lock);
+
+ crypto_device_put(dev);
+ }
+}
+
+void crypto_stat_ptime_inc(struct crypto_session *s)
+{
+ struct crypto_device *dev;
+
+ dev = crypto_route_get_current_device(s);
+ if (dev) {
+ int i;
+
+ spin_lock_irq(&dev->stat_lock);
+ for (i = 0; i < dev->cap_number; ++i) {
+ if (__match_initializer(&dev->cap[i], &s->ci)) {
+ dev->cap[i].ptime += s->ci.ptime;
+ dev->cap[i].scomp++;
+ break;
+ }
+ }
+ spin_unlock_irq(&dev->stat_lock);
+
+ crypto_device_put(dev);
+ }
+}
+
+EXPORT_SYMBOL(crypto_stat_start_inc);
+EXPORT_SYMBOL(crypto_stat_finish_inc);
+EXPORT_SYMBOL(crypto_stat_complete_inc);

2005-03-07 20:43:42

by Evgeniy Polyakov

[permalink] [raw]
Subject: [14/many] acrypto: crypto_stat.h

--- /tmp/empty/crypto_stat.h 1970-01-01 03:00:00.000000000 +0300
+++ ./acrypto/crypto_stat.h 2005-03-07 20:35:36.000000000 +0300
@@ -0,0 +1,32 @@
+/*
+ * crypto_stat.h
+ *
+ * Copyright (c) 2004 Evgeniy Polyakov <[email protected]>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#ifndef __CRYPTO_STAT_H
+#define __CRYPTO_STAT_H
+
+#include "acrypto.h"
+
+void crypto_stat_start_inc(struct crypto_session *s);
+void crypto_stat_finish_inc(struct crypto_session *s);
+void crypto_stat_complete_inc(struct crypto_session *s);
+void crypto_stat_ptime_inc(struct crypto_session *s);
+
+#endif /* __CRYPTO_STAT_H */

2005-03-07 20:57:26

by Evgeniy Polyakov

[permalink] [raw]
Subject: [2/5] bd: userspace utility to control asynchronous block device

#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include <stdint.h>

#include <linux/types.h>

#include <sys/ioctl.h>
#include <sys/stat.h>
#include <sys/types.h>

#define __USE_LARGEFILE64
#include <fcntl.h>

#include "bd.h"
#include "bd_fd.h"
#include "bd_acrypto.h"
#include "../crypto/crypto_def.h"

#define ulog(f, a...) fprintf(stderr, f, ##a)
struct table;

struct self
{
char device[BD_MAX_NAMESIZ];
char filter[BD_MAX_NAMESIZ];
};

struct table
{
int idx;
char name[BD_MAX_NAMESIZ];
int (* handler)(struct table *tblp, char *argv[], int argc);
int argc;

struct self *self;
};

struct param
{
int found;
char name[BD_MAX_NAMESIZ];
char *val;
};

#define walk_through_table(tblp, tbl, i) \
for (i=sizeof(tbl)/sizeof(tbl[0])-1, tblp=&tbl[i]; i; --i, tblp=&tbl[i]) \

static struct self generic_self;

static int default_action_handler(struct table *tblp, char *argv[], int argc);
static int bind_action_handler(struct table *tblp, char *argv[], int argc);
static int unbind_action_handler(struct table *tblp, char *argv[], int argc);

static struct table action_table[] =
{
{0, "default", default_action_handler, 0, &generic_self},
{0, "bind", bind_action_handler, 5, &generic_self},
{0, "unbind", unbind_action_handler, 5, &generic_self},
};

static int fd_action_handler(struct table *tblp, char *argv[], int argc);
static int acrypto_action_handler(struct table *tblp, char *argv[], int argc);
static struct table bind_table[] =
{
{0, "default", default_action_handler, 0, &generic_self},
{0, "fd", fd_action_handler, 2, &generic_self},
{0, "acrypto", acrypto_action_handler, 10, &generic_self},
};

static int run_cmd(int fd, int request, void *arg)
{
int err;

err = ioctl(fd, request, arg);
if (err)
{
ulog("Failed to make %d request : %s [%d].\n",
request, strerror(errno), errno);
return err;
}

return 0;
}

static int scmp(char *str1, char *str2)
{
int l1, l2;

l1 = strlen(str1);
l2 = strlen(str2);

if (l1 > l2)
return strncmp(str1, str2, l2);
else
return strncmp(str1, str2, l1);
}

static int default_action_handler(struct table *tblp __attribute__((unused)), char *argv[], int argc __attribute__((unused)))
{
ulog("Unsupported action \"%s\".\n", argv[0]);

return -1;
}

static int generic_action_handler(char *argv[])
{
if (scmp(argv[1], "dev"))
{
ulog("Unsupported \"%s\" parameter in \"%s\" action.\n", argv[1], argv[0]);
return -1;
}

snprintf(generic_self.device, sizeof(generic_self.device), "%s", argv[2]);

if (scmp(argv[3], "filter"))
{
ulog("Unsupported \"%s\" parameter in \"%s\" action.\n", argv[3], argv[0]);
return -1;
}

snprintf(generic_self.filter, sizeof(generic_self.filter), "%s", argv[4]);

//ulog("filter=%s, device=%s.\n", generic_self.filter, generic_self.device);

return 0;
}

static int bind_action_handler(struct table *tblp, char *argv[], int argc)
{
struct table *p;
int err, i, fd;
struct bd_user u;

err = generic_action_handler(argv);
if (err)
return err;

argc -= tblp->argc;
argv += tblp->argc-1;

//ulog("%s: argv[0]=%s, argc=%d.\n", __func__, argv[0], argc);

walk_through_table(p, bind_table, i)
{
//ulog("i=%d, argc=%d, cnt=%d, name=%s.\n", i, argc, p->argc, p->name);

if (argc < p->argc)
continue;

if (!scmp(p->name, argv[0]))
{
err = p->handler(p, argv, argc);
return err;
}
}

fd = open(tblp->self->device, O_RDWR | O_LARGEFILE);
if (fd == -1)
{
ulog("Failed to open device file %s when binding filter %s: %s [%d].\n",
tblp->self->device, tblp->self->filter,
strerror(errno), errno);
return -errno;
}

strcpy(u.name, tblp->self->filter);
u.size = 0;

err = run_cmd(fd, BD_BIND_FILTER, &u);

return err;
}

static int unbind_action_handler(struct table *tblp, char *argv[], int argc __attribute__((unused)))
{
int err, fd;
struct bd_user u;

err = generic_action_handler(argv);
if (err)
return err;

fd = open(tblp->self->device, O_RDWR | O_LARGEFILE);
if (fd == -1)
{
ulog("Failed to open device file %s when binding filter %s: %s [%d].\n",
tblp->self->device, tblp->self->filter,
strerror(errno), errno);
return -errno;
}

strcpy(u.name, tblp->self->filter);
u.size = 0;

err = run_cmd(fd, BD_UNBIND_FILTER, &u);

return err;
}

static int fd_action_handler(struct table *tblp, char *argv[], int argc __attribute__((unused)))
{
int err, fd, len, tfd;
char name[BD_MAX_NAMESIZ];
char buf[128];
struct bd_user *u;
struct bd_fd_user *fdu;

if (scmp(argv[1], "file"))
{
ulog("Unsupported \"%s\" parameter in \"%s\" action.\n", argv[1], argv[0]);
return -1;
}

len = snprintf(name, sizeof(name), "%s", argv[2]);

fd = open(tblp->self->device, O_RDWR | O_LARGEFILE);
if (fd == -1)
{
ulog("Failed to open device file %s when binding filter %s: %s [%d].\n",
tblp->self->device, tblp->self->filter,
strerror(errno), errno);
return -errno;
}

tfd = open(name, O_RDWR | O_LARGEFILE);
if (tfd == -1)
{
ulog("Failed to open target file %s: %s [%d].\n",
name, strerror(errno), errno);
close(fd);
return -errno;
}

u = (struct bd_user *)buf;
fdu = (struct bd_fd_user *)(u+1);

strcpy(u->name, tblp->self->filter);
u->size = sizeof(*fdu);
fdu->fd = tfd;

err = run_cmd(fd, BD_BIND_FILTER, u);
if (err)
ulog("Failed to bind filter %s to device %s: err=%d.\n",
tblp->self->filter, tblp->self->device, err);

close(tfd);
close(fd);

//ulog("%s: len=%d, name=%s, err=%d.\n", __func__, len, name, err);


return err;
}

static int acrypto_action_handler(struct table *tblp, char *argv[], int argc)
{
struct param args[] = {
{0, "cipher", NULL},
{0, "key", NULL},
{0, "iv", NULL},
{0, "priority", NULL},
{0, "mode", NULL},
};
int i, j, found, sz, fd, err;
char buf[128];
struct bd_user *u;
struct bd_acrypto_private *a;

memset(buf, 0, sizeof(buf));

u = (struct bd_user *)buf;
a = (struct bd_acrypto_private *)(u+1);

found = 0;
for (i=0; i<(signed)(sizeof(args)/sizeof(args[0])); ++i)
{
for (j=1; j<argc; j+=2)
{
if (scmp(args[i].name, argv[j]))
continue;

if (args[i].found)
{
ulog("dev=%s, filter=%s: parameter \"%s\" already found in \"%s\", ignoring.\n",
tblp->self->device, tblp->self->filter,
args[i].name, argv[0]);
continue;
}

args[i].found = 1;
found++;

args[i].val = argv[j+1];
}
}

if (found != sizeof(args)/sizeof(args[0]))
{
ulog("dev=%s, filter=%s: wrong parameters set: found=%d, sz=%u.\n",
tblp->self->device, tblp->self->filter,
found, sizeof(args)/sizeof(args[0]));
return -1;
}

if (!scmp(args[0].val, "aes128"))
a->type = CRYPTO_TYPE_AES_128;
else if (!scmp(args[0].val, "aes192"))
a->type = CRYPTO_TYPE_AES_192;
else if (!scmp(args[0].val, "aes256"))
a->type = CRYPTO_TYPE_AES_256;
else if (!scmp(args[0].val, "3des"))
a->type = CRYPTO_TYPE_3DES;
else
{
ulog("dev=%s, filter=%s: unsupported cipher \"%s\".\n", tblp->self->device, tblp->self->filter, args[0].val);
return -1;
}

if (!scmp(args[4].val, "ecb"))
a->mode = CRYPTO_MODE_ECB;
else if (!scmp(args[4].val, "cbc"))
a->mode = CRYPTO_MODE_CBC;
else if (!scmp(args[4].val, "cfb"))
a->mode = CRYPTO_MODE_CFB;
else if (!scmp(args[4].val, "ofb"))
a->mode = CRYPTO_MODE_OFB;
else
{
ulog("dev=%s, filter=%s: unsupported mode \"%s\".\n", tblp->self->device, tblp->self->filter, args[4].val);
return -1;
}

a->priority = (__u16)(atoi(args[3].val) & 0xffff);

sz = (signed)strlen(args[1].val)/2;
if (sz % 2 != 0 || sz > (signed)sizeof(a->key))
{
ulog("dev=%s, filter=%s: wrong length of the \"%s\".\n", tblp->self->device, tblp->self->filter, args[1].val);
return -1;
}


j = 0;
for (i=0; i<(signed)strlen(args[1].val)-1; i+=2)
{
unsigned char v[2] = {args[1].val[i], args[1].val[i+1]};
unsigned char val;

val = (unsigned char)(strtol(v, NULL, 16) & 0xff);

a->key[j++] = val;
}

a->key_size = j;

sz = (signed)strlen(args[2].val)/2;
if (sz % 2 != 0 || sz > (signed)sizeof(a->iv))
{
ulog("dev=%s, filter=%s: wrong length of the \"%s\".\n", tblp->self->device, tblp->self->filter, args[2].val);
return -1;
}
j = 0;
for (i=0; i<(signed)strlen(args[2].val)-1; i+=2)
{
unsigned char v[2] = {args[2].val[i], args[2].val[i+1]};
unsigned char val;

val = (unsigned char)(strtol(v, NULL, 16) & 0xff);

a->iv[j++] = val;
}

a->iv_size = j;

ulog("dev=%s, filter=%s: cipher=%s, mode=%s, priority=%d, key_size=%u, iv_size=%u.\n",
tblp->self->device, tblp->self->filter, args[0].val, args[4].val, a->priority,
a->key_size, a->iv_size);


fd = open(tblp->self->device, O_RDWR | O_LARGEFILE);
if (fd == -1)
{
ulog("Failed to open device file %s when binding filter %s: %s [%d].\n",
tblp->self->device, tblp->self->filter,
strerror(errno), errno);
return -errno;
}

strcpy(u->name, tblp->self->filter);
u->size = sizeof(*a);

err = run_cmd(fd, BD_BIND_FILTER, u);
if (err)
ulog("Failed to bind filter %s to device %s: err=%d.\n",
tblp->self->filter, tblp->self->device, err);

close(fd);

return err;
}

int main(int argc, char *argv[])
{
int i, err;
struct table *atp;

argc--;
argv++;

walk_through_table(atp, action_table, i)
{
if (argc < action_table[i].argc)
continue;

//ulog("i=%d, argc=%d, cnt=%d, name=%s.\n", i, argc, atp->argc, atp->name);

if (!scmp(argv[0], atp->name))
{
err = atp->handler(atp, argv, argc);
return err;
}
}

return action_table[0].handler(&action_table[0], argv, argc);
}

2005-03-07 20:57:25

by Evgeniy Polyakov

[permalink] [raw]
Subject: [35/many] arch: ppc config

--- ./arch/ppc/Kconfig~ 2005-03-02 10:38:33.000000000 +0300
+++ ./arch/ppc/Kconfig 2005-03-07 21:29:12.000000000 +0300
@@ -1294,3 +1294,5 @@
source "security/Kconfig"

source "crypto/Kconfig"
+
+source "acrypto/Kconfig"

2005-03-07 20:54:39

by Evgeniy Polyakov

[permalink] [raw]
Subject: [41/many] arch: sparc64 config

--- ./arch/sparc64/Kconfig~ 2005-03-02 10:38:25.000000000 +0300
+++ ./arch/sparc64/Kconfig 2005-03-07 21:30:33.000000000 +0300
@@ -606,4 +606,6 @@

source "crypto/Kconfig"

+source "acrypto/Kconfig"
+
source "lib/Kconfig"

2005-03-07 20:57:26

by Evgeniy Polyakov

[permalink] [raw]
Subject: [1/many] acrypto: Kconfig

diff -Nru /tmp/empty/Kconfig ./acrypto/Kconfig
--- /tmp/empty/Kconfig 1970-01-01 03:00:00.000000000 +0300
+++ ./acrypto/Kconfig 2005-03-07 21:21:33.000000000 +0300
@@ -0,0 +1,30 @@
+menu "Asynchronous crypto layer"
+
+config ACRYPTO
+ tristate "Asynchronous crypto layer"
+ select CONNECTOR
+ --- help ---
+ It supports:
+ - multiple asynchronous crypto device queues
+ - crypto session routing
+ - crypto session binding
+ - modular load balancing
+ - crypto session batching genetically implemented by design
+ - crypto session priority
+ - different kinds of crypto operation(RNG, asymmetrical crypto, HMAC and any other
+
+config SIMPLE_LB
+ tristate "Simple load balancer"
+ depends on ACRYPTO
+ --- help ---
+ Simple load balancer returns device with the lowest load
+ (device has the least number of session in it's queue) if it exists.
+
+config ASYNC_PROVIDER
+ tristate "Asynchronous crypto provider (AES CBC)"
+ depends on ACRYPTO && (CRYPTO_AES || CRYPTO_AES_586)
+ --- help ---
+ Asynchronous crypto provider based on synchronous crypto layer.
+ It supports AES CBC crypto mode (may be changed by source edition).
+
+endmenu

2005-03-07 20:53:01

by Evgeniy Polyakov

[permalink] [raw]
Subject: [24/many] arch: arm26 config

--- ./arch/arm26/Kconfig~ 2005-03-02 10:38:10.000000000 +0300
+++ ./arch/arm26/Kconfig 2005-03-07 21:26:23.000000000 +0300
@@ -224,4 +224,6 @@

source "crypto/Kconfig"

+source "acrypto/Kconfig"
+
source "lib/Kconfig"

2005-03-07 21:06:42

by Evgeniy Polyakov

[permalink] [raw]
Subject: [28/many] arch: i386 config

--- ./arch/i386/Kconfig~ 2005-03-02 10:37:49.000000000 +0300
+++ ./arch/i386/Kconfig 2005-03-07 20:52:47.000000000 +0300
@@ -1214,6 +1214,8 @@

source "crypto/Kconfig"

+source "acrypto/Kconfig"
+
source "lib/Kconfig"

#

2005-03-07 20:53:00

by Evgeniy Polyakov

[permalink] [raw]
Subject: [38/many] arch: sh config

--- ./arch/sh/Kconfig~ 2005-03-02 10:38:33.000000000 +0300
+++ ./arch/sh/Kconfig 2005-03-07 21:29:53.000000000 +0300
@@ -792,4 +792,6 @@

source "crypto/Kconfig"

+source "acrypto/Kconfig"
+
source "lib/Kconfig"

2005-03-07 21:06:41

by Evgeniy Polyakov

[permalink] [raw]
Subject: [33/many] arch: mips config

--- ./arch/mips/Kconfig~ 2005-03-02 10:38:09.000000000 +0300
+++ ./arch/mips/Kconfig 2005-03-07 21:28:46.000000000 +0300
@@ -1648,6 +1648,8 @@

source "crypto/Kconfig"

+source "acrypto/Kconfig"
+
source "lib/Kconfig"

#

2005-03-07 21:16:42

by Evgeniy Polyakov

[permalink] [raw]
Subject: [31/many] arch: m68k config

--- ./arch/m68k/Kconfig~ 2005-03-02 10:38:10.000000000 +0300
+++ ./arch/m68k/Kconfig 2005-03-07 21:28:12.000000000 +0300
@@ -667,4 +667,6 @@

source "crypto/Kconfig"

+source "acrypto/Kconfig"
+
source "lib/Kconfig"

2005-03-07 21:16:45

by Evgeniy Polyakov

[permalink] [raw]
Subject: [39/many] arch: sh64 config

--- ./arch/sh64/Kconfig~ 2005-03-02 10:38:17.000000000 +0300
+++ ./arch/sh64/Kconfig 2005-03-07 21:30:08.000000000 +0300
@@ -273,4 +273,6 @@

source "crypto/Kconfig"

+source "acrypto/Kconfig"
+
source "lib/Kconfig"

2005-03-07 21:21:08

by Evgeniy Polyakov

[permalink] [raw]
Subject: [17/many] acrypto: crypto_user_direct.c

--- /tmp/empty/crypto_user_direct.c 1970-01-01 03:00:00.000000000 +0300
+++ ./acrypto/crypto_user_direct.c 2005-03-07 20:35:36.000000000 +0300
@@ -0,0 +1,390 @@
+/*
+ * crypto_user_direct.c
+ *
+ * Copyright (c) 2004 Evgeniy Polyakov <[email protected]>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/types.h>
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/vmalloc.h>
+#include <linux/sched.h>
+#include <linux/mm.h>
+#include <linux/pagemap.h>
+#include <linux/connector.h>
+
+#include "acrypto.h"
+#include "crypto_user.h"
+#include "crypto_user_direct.h"
+
+extern struct cb_id crypto_conn_id;
+
+static LIST_HEAD(crypto_user_direct_list);
+static spinlock_t crypto_user_direct_lock = SPIN_LOCK_UNLOCKED;
+static struct completion thread_exited;
+static int need_exit;
+static DECLARE_WAIT_QUEUE_HEAD(crypto_user_direct_wait_queue);
+
+static int crypto_user_direct_alloc_pages(struct crypto_user_direct_kern *k)
+{
+ k->sp = kmalloc(sizeof(struct page *) * k->snum, GFP_KERNEL);
+ if (!k->sp) {
+ dprintk("Failed to allocate %d source pages.\n", k->snum);
+ return -ENOMEM;
+ }
+
+ k->dp = kmalloc(sizeof(struct page *) * k->dnum, GFP_KERNEL);
+ if (!k->dp) {
+ dprintk("Failed to allocate %d destination pages.\n", k->dnum);
+ kfree(k->sp);
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+static void crypto_user_direct_free_pages(struct crypto_user_direct_kern *k)
+{
+ kfree(k->sp);
+ kfree(k->dp);
+}
+
+static int crypto_user_direct_alloc_vmas(struct crypto_user_direct_kern *k)
+{
+ k->svma = kmalloc(sizeof(struct vm_area_struct *) * k->snum, GFP_KERNEL);
+ if (!k->svma) {
+ dprintk("Failed to allocate %d source vmas.\n", k->snum);
+ return -ENOMEM;
+ }
+
+ k->dvma = kmalloc(sizeof(struct vm_area_struct *) * k->dnum, GFP_KERNEL);
+ if (!k->dvma) {
+ dprintk("Failed to allocate %d destination vmas.\n", k->dnum);
+ kfree(k->svma);
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+static void crypto_user_direct_free_vmas(struct crypto_user_direct_kern *k)
+{
+ kfree(k->svma);
+ kfree(k->dvma);
+}
+
+static int crypto_user_direct_alloc_mm(struct crypto_user_direct_kern *k)
+{
+ int err;
+
+ err = crypto_user_direct_alloc_pages(k);
+ if (err)
+ return err;
+
+ err = crypto_user_direct_alloc_vmas(k);
+ if (err) {
+ crypto_user_direct_free_pages(k);
+ return err;
+ }
+
+ return 0;
+}
+
+static void crypto_user_direct_free_mm(struct crypto_user_direct_kern *k)
+{
+ crypto_user_direct_free_pages(k);
+ crypto_user_direct_free_vmas(k);
+}
+
+static void crypto_user_direct_free_data(struct crypto_user_direct_kern *k)
+{
+ int i;
+
+ for (i=0; i<k->snum; ++i)
+ page_cache_release(k->sp[i]);
+ for (i=0; i<k->dnum; ++i) {
+ set_page_dirty_lock(k->dp[i]);
+ page_cache_release(k->dp[i]);
+ }
+ up_read(&k->mm->mmap_sem);
+ crypto_user_direct_free_mm(k);
+ mmput(k->mm);
+}
+
+static void crypto_user_direct_callback(struct crypto_session_initializer *ci, struct crypto_data *data)
+{
+ struct crypto_user_direct_kern *k = data->priv;
+ struct cn_msg m;
+
+ memset(&m, 0, sizeof(m));
+
+ memcpy(&m.id, &crypto_conn_id, sizeof(m.id));
+ m.seq = k->seq;
+ m.ack = k->ack+1;
+
+ cn_netlink_send(&m, 0);
+
+ crypto_user_direct_free_data(k);
+ crypto_user_free_crypto_data(data);
+}
+
+static void crypto_user_direct_fill_data(struct crypto_data *data, struct crypto_user_direct_kern *k)
+{
+ int i, size;
+
+ size = k->usr.src_size;
+ for (i=0; i<data->sg_src_num; ++i) {
+ data->sg_src[i].page = k->sp[i];
+
+ if (i == 0) {
+ data->sg_src[i].offset = offset_in_page(k->usr.src);
+ data->sg_src[i].length = ALIGN_DATA_SIZE(k->usr.src) - k->usr.src;
+ } else {
+ data->sg_src[i].offset = 0;
+ data->sg_src[i].length = (i != data->sg_src_num)?PAGE_SIZE:size;
+ }
+
+ size -= data->sg_src[i].length;
+ }
+
+ size = k->usr.dst_size;
+ for (i=0; i<data->sg_dst_num; ++i) {
+ data->sg_dst[i].page = k->dp[i];
+
+ if (i == 0) {
+ data->sg_dst[i].offset = offset_in_page(k->usr.dst);
+ data->sg_dst[i].length = ALIGN_DATA_SIZE(k->usr.dst) - k->usr.dst;
+ } else {
+ data->sg_dst[i].offset = 0;
+ data->sg_dst[i].length = (i != data->sg_dst_num)?PAGE_SIZE:size;
+ }
+
+ size -= data->sg_dst[i].length;
+ }
+}
+
+static int crypto_user_direct_prepare_data(struct crypto_data *data, struct crypto_user_direct_kern *k)
+{
+ int err, i;
+ struct task_struct *tsk;
+
+ tsk = find_task_by_pid(k->usr.pid);
+ if (!tsk) {
+ dprintk(KERN_ERR "Task with pid=%d does not exist.\n", k->usr.pid);
+ return -ENODEV;
+ }
+
+ dprintk("Found task with pid=%d.\n", k->usr.pid);
+
+ k->mm = get_task_mm(tsk);
+ if (!k->mm)
+ return -EINVAL;
+
+ k->snum = data->sg_src_num;
+ k->dnum = data->sg_dst_num;
+
+ err = crypto_user_direct_alloc_mm(k);
+ if (err)
+ goto err_out_put_mm;
+
+ down_read(&k->mm->mmap_sem);
+
+ err = get_user_pages(tsk, k->mm, k->usr.src, k->snum, 1, 1, k->sp, k->svma);
+ if (err < 0) {
+ dprintk("Failed to get %d src pages for pid=%d.\n",
+ k->snum, k->usr.pid);
+ goto err_out_up_sem;
+ }
+
+ err = get_user_pages(tsk, k->mm, k->usr.dst, k->dnum, 1, 1, k->dp, k->dvma);
+ if (err < 0) {
+ dprintk("Failed to get %d dst pages for pid=%d.\n",
+ k->snum, k->usr.pid);
+ goto err_out_put_src;
+ }
+
+ crypto_user_direct_fill_data(data, k);
+
+ return 0;
+
+err_out_put_src:
+ for (i=0; i<k->snum; ++i)
+ page_cache_release(k->sp[i]);
+err_out_up_sem:
+ up_read(&k->mm->mmap_sem);
+
+ crypto_user_direct_free_mm(k);
+err_out_put_mm:
+ mmput(k->mm);
+ return err;
+}
+
+static int crypto_user_direct_prepare(struct crypto_session_initializer *ci, struct crypto_data *data, struct crypto_user_direct_kern *k)
+{
+ int err;
+
+ ci->operation = k->usr.operation;
+ ci->type = k->usr.type;
+ ci->mode = k->usr.mode;
+ ci->priority = k->usr.priority;
+ ci->callback = crypto_user_direct_callback;
+
+ err = crypto_user_alloc_crypto_data(data, k->usr.src_size, k->usr.dst_size, k->usr.key_size, k->usr.iv_size);
+ if (err)
+ return err;
+
+ if (k->usr.key_size)
+ crypto_user_fill_sg(k->key, k->usr.key_size, data->sg_key);
+
+ if (k->usr.iv_size)
+ crypto_user_fill_sg(k->iv, k->usr.iv_size, data->sg_iv);
+
+ data->priv = k;
+ data->priv_size = 0;
+
+ err = crypto_user_direct_prepare_data(data, k);
+ if (err) {
+ crypto_user_free_crypto_data(data);
+ return err;
+ }
+
+ return 0;
+}
+
+static int crypto_user_direct_process(struct crypto_user_direct_kern *k)
+{
+ struct crypto_session_initializer ci;
+ struct crypto_data data;
+ struct crypto_session *s;
+ int err;
+
+ memset(&ci, 0, sizeof(ci));
+ memset(&data, 0, sizeof(data));
+
+ err = crypto_user_direct_prepare(&ci, &data, k);
+ if (err)
+ return err;
+
+ s = crypto_session_alloc(&ci, &data);
+ if (!s) {
+ crypto_user_direct_free_data(k);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int crypto_user_direct_thread(void *data)
+{
+ struct crypto_user_direct_kern *k, *n;
+
+ daemonize("crypto_user_direct_thread");
+ allow_signal(SIGTERM);
+
+ while (!need_exit) {
+ interruptible_sleep_on_timeout(&crypto_user_direct_wait_queue, 1000);
+
+ spin_lock_bh(&crypto_user_direct_lock);
+ list_for_each_entry_safe(k, n, &crypto_user_direct_list, entry) {
+ list_del(&k->entry);
+
+ spin_unlock_bh(&crypto_user_direct_lock);
+
+ crypto_user_direct_process(k);
+
+ spin_lock_bh(&crypto_user_direct_lock);
+ }
+ spin_unlock_bh(&crypto_user_direct_lock);
+ }
+
+ complete_and_exit(&thread_exited, 0);
+}
+
+int crypto_user_direct_add_request(u32 seq, u32 ack, struct crypto_user_direct *usr)
+{
+ struct crypto_user_direct_kern *k;
+
+ k = kmalloc(sizeof(struct crypto_user_direct_kern) + usr->key_size + usr->iv_size, GFP_ATOMIC);
+ if (!k) {
+ dprintk("Failed to allocate new kernel crypto request.\n");
+ return -ENOMEM;
+ }
+
+ memset(k, 0, sizeof(*k));
+
+ memcpy(&k->usr, usr, sizeof(k->usr));
+
+ k->key = (u8 *)(k+1);
+ k->iv = (u8 *)(k->key + k->usr.key_size);
+
+ memcpy(k->key, usr->data, k->usr.key_size);
+ memcpy(k->iv, usr->data + k->usr.key_size, k->usr.iv_size);
+
+ k->seq = seq;
+ k->ack = ack;
+
+ spin_lock_bh(&crypto_user_direct_lock);
+ list_add(&k->entry, &crypto_user_direct_list);
+ spin_unlock_bh(&crypto_user_direct_lock);
+
+ dprintk("msg [%08x.%08x]: op=[%04x.%04x.%04x.%04x], src=%llx [%d], dst=%llx [%d], key=%p [%d], iv=%p [%d].\n",
+ seq, ack,
+ k->usr.operation, k->usr.mode, k->usr.type, k->usr.priority,
+ k->usr.src, k->usr.src_size,
+ k->usr.dst, k->usr.dst_size,
+ k->key, k->usr.key_size,
+ k->iv, k->usr.iv_size);
+
+ wake_up_interruptible(&crypto_user_direct_wait_queue);
+
+ return 0;
+}
+
+int crypto_user_direct_init(void)
+{
+ int pid, err;
+
+ err = 0;
+ init_completion(&thread_exited);
+ pid = kernel_thread(crypto_user_direct_thread, NULL, CLONE_FS | CLONE_FILES);
+ if (IS_ERR((void *)pid)) {
+ dprintk(KERN_ERR "Failed to create acrypto userspace processing thread.\n");
+ err = -EINVAL;
+ goto err_out_exit;
+ }
+
+ printk(KERN_INFO "Acrypto userspace processing thread has been started.\n");
+
+ return err;
+
+err_out_exit:
+
+ return err;
+}
+
+void crypto_user_direct_fini(void)
+{
+ need_exit = 1;
+ wait_for_completion(&thread_exited);
+
+ printk(KERN_INFO "Acrypto userspace processing thread has been finished.\n");
+}

2005-03-07 21:21:09

by Evgeniy Polyakov

[permalink] [raw]
Subject: [??/many] ucon_crypto.c - simple example of the userspace acrypto usage [DIRECT ACCESS]


/*
* ucon_crypto.c
*
* Copyright (c) 2004 Evgeniy Polyakov <[email protected]>
*
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/

#include <asm/types.h>

#include <sys/types.h>
#include <sys/socket.h>
#include <sys/poll.h>
#include <sys/mman.h>

#include <linux/netlink.h>
#include <linux/types.h>
#include <linux/rtnetlink.h>

#include <arpa/inet.h>

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <errno.h>
#include <time.h>

#include "../connector/connector.h"

#include "crypto_def.h"
#include "crypto_conn.h"
#include "crypto_user.h"

static int need_exit;
static __u32 seq;

static int netlink_send(FILE *out, int s, struct cn_msg *msg)
{
struct nlmsghdr *nlh;
unsigned int size;
char buf[128];
int err;
struct cn_msg *m;
struct crypto_conn_data *cmd;

cmd = (struct crypto_conn_data *)(msg + 1);
size = NLMSG_SPACE(sizeof(struct cn_msg) + msg->len);

nlh = (struct nlmsghdr *)buf;
nlh->nlmsg_seq = seq++;
nlh->nlmsg_pid = getpid();
nlh->nlmsg_type = NLMSG_DONE;
nlh->nlmsg_len = NLMSG_LENGTH(size - sizeof(*nlh));
nlh->nlmsg_flags = 0;

m = NLMSG_DATA(nlh);

printf("%s: len=%u, seq=%u, ack=%u, "
"name=%s, cmd=%02x.\n",
__func__,
msg->len, msg->seq, msg->ack,
cmd->name, cmd->cmd);

memcpy(m, msg, sizeof(*m) + msg->len);

err = send(s, nlh, size, 0);
if (err == -1)
{
fprintf(out, "Failed to send: %s [%d].\n",
strerror(errno), errno);
return err;
}
fprintf(out, "%d bytes has been sent.\n", size);

return 0;
}

static int send_cmd(FILE *out, int s, struct crypto_conn_data *cm)
{
struct cn_msg *data;
struct crypto_conn_data *m;
int size;

size = sizeof(*data) + sizeof(*m) + cm->len;

data = malloc(size);
if (!data)
return -ENOMEM;

memset(data, 0, size);

data->id.idx = 0xdead;
data->id.val = 0x0000;
data->seq = seq++;
data->ack = 0;
data->len = sizeof(*cm) + cm->len;

m = (struct crypto_conn_data *)(data + 1);
memcpy(m, cm, sizeof(*m));

memcpy(m+1, cm->data, cm->len);

return netlink_send(out, s, data);
}

static int request_stat(FILE *out, int s, char *name)
{
struct crypto_conn_data m;

memset(&m, 0, sizeof(m));

m.cmd = CRYPTO_GET_STAT;
m.len = 0;

snprintf(m.name, sizeof(m.name), "%s", name);

return send_cmd(out, s, &m);
}

static void process_stat(FILE *out, struct cn_msg *data)
{
struct crypto_device_stat *stat;
struct crypto_conn_data *m;
time_t tm;

m = (struct crypto_conn_data *)(data + 1);
stat = (struct crypto_device_stat *)(m+1);

time(&tm);
fprintf(out,
"%.24s : [%x.%x] [seq=%u, ack=%u], name=%s, cmd=%#02x, "
"sesions: completed=%llu, started=%llu, finished=%llu, kmem_failed=%llu.\n",
ctime(&tm), data->id.idx, data->id.val,
data->seq, data->ack, m->name, m->cmd,
stat->scompleted, stat->sstarted, stat->sfinished, stat->kmem_failed);
fflush(out);
}

static void dump_data(FILE *out, unsigned char *ptr)
{
int i;

fprintf(out, "UCON DATA: ");
for (i=0; i<32; ++i)
fprintf(out, "%02x ", ptr[i]);
fprintf(out, "\n");
}

static int request_crypto(FILE *out, int s, char *name)
{
void *ptr;
int size, err;
struct crypto_user usr;
struct crypto_conn_data *m;

size = 4000;
ptr = malloc(size);
if (!ptr)
{
fprintf(out, "Failed to allocate %d byte area.\n", size);
return -1;
}
memset(ptr, 0x00, size);

err = mlock(ptr, size);
if (err == -1)
{
fprintf(out, "Failed to lock %d byte area.\n", size);
free(ptr);
return -1;
}

memset(&usr, 0, sizeof(usr));

usr.operation = CRYPTO_OP_ENCRYPT;
usr.mode = CRYPTO_MODE_ECB;
usr.type = CRYPTO_TYPE_AES_128;
usr.priority = 0;

usr.src = (__u64)ptr;
usr.src_size = size;
usr.dst = (__u64)ptr;
usr.dst_size = size;

usr.pid = getpid();

usr.key_size = 0;
usr.iv_size = 0;

m = malloc(sizeof(*m) + sizeof(usr) + usr.key_size + usr.iv_size);
if (!m)
return -ENOMEM;
memset(m, 0, sizeof(m));
memcpy(m+1, &usr, sizeof(usr));

m->cmd = CRYPTO_REQUEST;
m->len = sizeof(usr);

snprintf(m->name, sizeof(m->name), "%s", name);

dump_data(out, ptr);
err = send_cmd(out, s, m);
if (err)
goto err_out_free;

fprintf(out, "Command was sent.\n");
sleep(1);
dump_data(out, ptr);

//return 0;

err_out_free:
free(m);

err = munlock(ptr, size);
if (err == -1)
{
fprintf(out, "Failed to unlock %d byte area.\n", size);
free(ptr);
return -1;
}

free(ptr);

return err;
}

int main(int argc, char *argv[])
{
int s, tmp;
char buf[128];
int len;
struct nlmsghdr *reply;
struct sockaddr_nl l_local;
struct cn_msg *data;
struct crypto_conn_data *m;
FILE *out;
struct pollfd pfd;

if (argc < 2)
out = stdout;
else {
out = fopen(argv[1], "a+");
if (!out) {
fprintf(stderr, "Unable to open %s for writing: %s\n",
argv[1], strerror(errno));
out = stdout;
}
}

memset(buf, 0, sizeof(buf));

s = socket(PF_NETLINK, SOCK_DGRAM, NETLINK_NFLOG);
if (s == -1) {
perror("socket");
return -1;
}

l_local.nl_family = AF_NETLINK;
l_local.nl_groups = 0xdead;
l_local.nl_pid = getpid();

if (bind(s, (struct sockaddr *)&l_local, sizeof(struct sockaddr_nl)) == -1) {
perror("bind");
close(s);
return -1;
}

pfd.fd = s;

//request_stat(s, "async_provider0");

for (len=0; len<1000; ++len)
request_crypto(out, s, "hifn0");

tmp = 0;
while (!need_exit) {


pfd.events = POLLIN;
pfd.revents = 0;
switch (poll(&pfd, 1, -1))
{
case 0:
need_exit = 1;
break;
case -1:
if (errno != EINTR)
{
need_exit = 1;
break;
}
continue;
}
if (need_exit)
break;

memset(buf, 0, sizeof(buf));
len = recv(s, buf, sizeof(buf), 0);
if (len == -1) {
perror("recv buf");
close(s);
return -1;
}
reply = (struct nlmsghdr *)buf;

switch (reply->nlmsg_type) {
case NLMSG_ERROR:
fprintf(out, "Error message received.\n");
fflush(out);
break;
case NLMSG_DONE:
data = (struct cn_msg *)NLMSG_DATA(reply);
m = (struct crypto_conn_data *)(data + 1);

switch (m->cmd)
{
case CRYPTO_GET_STAT:
process_stat(out, data);
break;
default:
break;
}

break;
default:
break;
}
}

close(s);
return 0;
}

2005-03-07 21:30:03

by Evgeniy Polyakov

[permalink] [raw]
Subject: Re: [0/many] Acrypto - asynchronous crypto layer for linux kernel 2.6

On Mon, 07 Mar 2005 22:13:18 +0100
Fruhwirth Clemens <[email protected]> wrote:

> On Mon, 2005-03-07 at 23:37 +0300, Evgeniy Polyakov wrote:
>
> > I'm pleased to announce asynchronous crypto layer for Linux kernel 2.6.
>
> Thanks Evgeniy for your work! Even though, it's great what's inside, I'm
> afraid it will be judged by the form of its presentation. A patch should
> be something integral, testable on its own. I think it's not necessary
> to package it that fine grained, as it becomes very hard to apply with a
> regular mail reader (Saving/Exporting 50 mails is really a bit of a
> work).
>
> So, the form is a bit suboptimal. Don't hesitate to put all "acrypto*"
> and "arch*" patches in one-large acrypto patch set, and an other for
> "bd*". I'd be glad to say something different, but I think acrypto has
> not been considered by the maintainers to be merged soon, so patch
> splitting doesn't make sense anyway at the moment.

Unfortunately acrypto patch is more than 200kb, so neither mail list
will accept it, so I've sent it in such form :)

Actually the most interesting is the first e-mail with subject line
"[0/many] Acrypto - asynchronous crypto layer for linux kernel 2.6" which
has description of the acrypto layer and it's features.
Acrypto patches itself live in patches with prefix "acrypto"
[it is from 1 to 21].
bd lives in the last five patches.
Several first e-mails without first number ([??/many]...) are
various descriptions.

E-mail with subject line
"[??/many] list of files to be sent in a next couple of e-mails with small description"
contains small one line description of each e-mail.

Sorry for such form, but it is really big set of information pieces,
so I combined it in a such way.

> Best Regards,
> --
> Fruhwirth Clemens - http://clemens.endorphin.org
> for robots: [email protected]
>


Evgeniy Polyakov

Only failure makes us experts. -- Theo de Raadt

2005-03-07 22:21:07

by Evgeniy Polyakov

[permalink] [raw]
Subject: [8/many] acrypto: crypto_dev.c

--- /tmp/empty/crypto_dev.c 1970-01-01 03:00:00.000000000 +0300
+++ ./acrypto/crypto_dev.c 2005-03-07 20:35:36.000000000 +0300
@@ -0,0 +1,421 @@
+/*
+ * crypto_dev.c
+ *
+ * Copyright (c) 2004 Evgeniy Polyakov <[email protected]>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/types.h>
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <linux/interrupt.h>
+#include <linux/spinlock.h>
+#include <linux/device.h>
+
+#include "acrypto.h"
+
+static LIST_HEAD(cdev_list);
+static spinlock_t cdev_lock = SPIN_LOCK_UNLOCKED;
+static u32 cdev_ids;
+
+struct list_head *crypto_device_list = &cdev_list;
+spinlock_t *crypto_device_lock = &cdev_lock;
+
+static int crypto_match(struct device *dev, struct device_driver *drv)
+{
+ return 1;
+}
+
+static int crypto_probe(struct device *dev)
+{
+ return -ENODEV;
+}
+
+static int crypto_remove(struct device *dev)
+{
+ return 0;
+}
+
+static void crypto_release(struct device *dev)
+{
+ struct crypto_device *d = container_of(dev, struct crypto_device, device);
+
+ complete(&d->dev_released);
+}
+
+static void crypto_class_release(struct class *class)
+{
+}
+
+static void crypto_class_release_device(struct class_device *class_dev)
+{
+}
+
+struct class crypto_class = {
+ .name = "acrypto",
+ .class_release = crypto_class_release,
+ .release = crypto_class_release_device
+};
+
+struct bus_type crypto_bus_type = {
+ .name = "acrypto",
+ .match = crypto_match
+};
+
+struct device_driver crypto_driver = {
+ .name = "crypto_driver",
+ .bus = &crypto_bus_type,
+ .probe = crypto_probe,
+ .remove = crypto_remove,
+};
+
+struct device crypto_dev = {
+ .parent = NULL,
+ .bus = &crypto_bus_type,
+ .bus_id = "Asynchronous crypto",
+ .driver = &crypto_driver,
+ .release = &crypto_release
+};
+
+static ssize_t sessions_show(struct class_device *dev, char *buf)
+{
+ struct crypto_device *d = container_of(dev, struct crypto_device, class_device);
+
+ return sprintf(buf, "%d\n", atomic_read(&d->refcnt));
+}
+static ssize_t name_show(struct class_device *dev, char *buf)
+{
+ struct crypto_device *d = container_of(dev, struct crypto_device, class_device);
+
+ return sprintf(buf, "%s\n", d->name);
+}
+static ssize_t devices_show(struct class_device *dev, char *buf)
+{
+ struct crypto_device *d;
+ int off = 0;
+
+ spin_lock_irq(&cdev_lock);
+ list_for_each_entry(d, &cdev_list, cdev_entry) {
+ off += sprintf(buf + off, "%s ", d->name);
+ }
+ spin_unlock_irq(&cdev_lock);
+
+ if (!off)
+ off = sprintf(buf, "No devices registered yet.");
+
+ off += sprintf(buf + off, "\n");
+
+ return off;
+}
+
+static ssize_t kmem_failed_show(struct class_device *dev, char *buf)
+{
+ struct crypto_device *d = container_of(dev, struct crypto_device, class_device);
+
+ return sprintf(buf, "%llu\n", d->stat.kmem_failed);
+}
+static ssize_t sstarted_show(struct class_device *dev, char *buf)
+{
+ struct crypto_device *d = container_of(dev, struct crypto_device, class_device);
+
+ return sprintf(buf, "%llu\n", d->stat.sstarted);
+}
+static ssize_t sfinished_show(struct class_device *dev, char *buf)
+{
+ struct crypto_device *d = container_of(dev, struct crypto_device, class_device);
+
+ return sprintf(buf, "%llu\n", d->stat.sfinished);
+}
+static ssize_t scompleted_show(struct class_device *dev, char *buf)
+{
+ struct crypto_device *d = container_of(dev, struct crypto_device, class_device);
+
+ return sprintf(buf, "%llu\n", d->stat.scompleted);
+}
+
+static CLASS_DEVICE_ATTR(sessions, 0444, sessions_show, NULL);
+static CLASS_DEVICE_ATTR(name, 0444, name_show, NULL);
+CLASS_DEVICE_ATTR(devices, 0444, devices_show, NULL);
+static CLASS_DEVICE_ATTR(scompleted, 0444, scompleted_show, NULL);
+static CLASS_DEVICE_ATTR(sstarted, 0444, sstarted_show, NULL);
+static CLASS_DEVICE_ATTR(sfinished, 0444, sfinished_show, NULL);
+static CLASS_DEVICE_ATTR(kmem_failed, 0444, kmem_failed_show, NULL);
+
+static int compare_device(struct crypto_device *d1, struct crypto_device *d2)
+{
+ if (!strncmp(d1->name, d2->name, sizeof(d1->name)))
+ return 1;
+
+ return 0;
+}
+
+static void create_device_attributes(struct crypto_device *dev)
+{
+ class_device_create_file(&dev->class_device, &class_device_attr_sessions);
+ class_device_create_file(&dev->class_device, &class_device_attr_name);
+ class_device_create_file(&dev->class_device, &class_device_attr_scompleted);
+ class_device_create_file(&dev->class_device, &class_device_attr_sstarted);
+ class_device_create_file(&dev->class_device, &class_device_attr_sfinished);
+ class_device_create_file(&dev->class_device, &class_device_attr_kmem_failed);
+}
+
+static void remove_device_attributes(struct crypto_device *dev)
+{
+ class_device_remove_file(&dev->class_device, &class_device_attr_sessions);
+ class_device_remove_file(&dev->class_device, &class_device_attr_name);
+ class_device_remove_file(&dev->class_device, &class_device_attr_scompleted);
+ class_device_remove_file(&dev->class_device, &class_device_attr_sstarted);
+ class_device_remove_file(&dev->class_device, &class_device_attr_sfinished);
+ class_device_remove_file(&dev->class_device, &class_device_attr_kmem_failed);
+}
+
+int __match_initializer(struct crypto_capability *cap, struct crypto_session_initializer *ci)
+{
+ dprintk("Match: %04x.%04x.%04x vs. %04x.%04x.%04x.\n",
+ cap->operation, cap->type, cap->mode,
+ ci->operation, ci->type, ci->mode);
+ if (cap->operation == ci->operation && cap->type == ci->type &&
+ cap->mode == (ci->mode & 0x1fff))
+ return 1;
+
+ return 0;
+}
+
+int match_initializer(struct crypto_device *dev, struct crypto_session_initializer *ci)
+{
+ int i;
+
+ for (i = 0; i < dev->cap_number; ++i) {
+ struct crypto_capability *cap = &dev->cap[i];
+
+ if (__match_initializer(cap, ci)) {
+ if (cap->qlen >= atomic_read(&dev->refcnt) + 1) {
+ dprintk("cap->len=%u, req=%u.\n",
+ cap->qlen, atomic_read(&dev->refcnt) + 1);
+ return 1;
+ }
+ }
+ }
+
+ return 0;
+}
+
+void crypto_device_get(struct crypto_device *dev)
+{
+ atomic_inc(&dev->refcnt);
+}
+
+struct crypto_device *crypto_device_get_name(char *name)
+{
+ struct crypto_device *dev;
+ int found = 0;
+
+ spin_lock_irq(&cdev_lock);
+ list_for_each_entry(dev, &cdev_list, cdev_entry) {
+ if (!strcmp(dev->name, name)) {
+ found = 1;
+ crypto_device_get(dev);
+ break;
+ }
+ }
+ spin_unlock_irq(&cdev_lock);
+
+ if (!found)
+ return NULL;
+
+ return dev;
+}
+
+void crypto_device_put(struct crypto_device *dev)
+{
+ atomic_dec(&dev->refcnt);
+}
+
+static int avg_cap_qlen(struct crypto_device *dev)
+{
+ int i, max = 0;
+
+ if (!dev->cap_number)
+ return 0;
+
+ for (i=0; i<dev->cap_number; ++i) {
+ max += dev->cap[i].qlen;
+ }
+
+ return (max / dev->cap_number);
+}
+
+int __crypto_device_add(struct crypto_device *dev)
+{
+ int err, avg;
+
+ memset(&dev->stat, 0, sizeof(dev->stat));
+ spin_lock_init(&dev->stat_lock);
+ spin_lock_init(&dev->lock);
+ spin_lock_init(&dev->session_lock);
+ INIT_LIST_HEAD(&dev->session_list);
+ atomic_set(&dev->refcnt, 0);
+ dev->sid = 0;
+ dev->flags = 0;
+ init_completion(&dev->dev_released);
+ memcpy(&dev->device, &crypto_dev, sizeof(struct device));
+ dev->driver = &crypto_driver;
+
+ dev->session_cache = kmem_cache_create(dev->name, sizeof(struct crypto_session),
+ 0, 0, NULL, NULL);
+ if (!dev->session_cache) {
+ dprintk(KERN_ERR "Failed to create session cache for device %s.\n", dev->name);
+ return -ENOMEM;
+ }
+
+ avg = avg_cap_qlen(dev);
+
+ if (avg) {
+ dev->session_pool = mempool_create(avg, mempool_alloc_slab, mempool_free_slab, dev->session_cache);
+ if (!dev->session_pool) {
+ dprintk(KERN_ERR "Failed to create memory pool with %d objects for device %s.\n",
+ avg, dev->name);
+ err = -ENOMEM;
+ goto err_out_cache_destroy;
+ }
+ } else
+ dev->session_pool = NULL;
+
+ snprintf(dev->device.bus_id, sizeof(dev->device.bus_id), "%s", dev->name);
+ err = device_register(&dev->device);
+ if (err) {
+ dprintk(KERN_ERR "Failed to register crypto device %s: err=%d.\n",
+ dev->name, err);
+ goto err_out_mempool_destroy;
+ }
+
+ snprintf(dev->class_device.class_id, sizeof(dev->class_device.class_id), "%s", dev->name);
+ dev->class_device.dev = &dev->device;
+ dev->class_device.class = &crypto_class;
+
+ err = class_device_register(&dev->class_device);
+ if (err) {
+ dprintk(KERN_ERR "Failed to register crypto class device %s: err=%d.\n",
+ dev->name, err);
+ goto err_out_device_unregister;
+ }
+
+ create_device_attributes(dev);
+
+ return 0;
+
+err_out_device_unregister:
+ device_unregister(&dev->device);
+err_out_mempool_destroy:
+ if (dev->session_pool)
+ mempool_destroy(dev->session_pool);
+err_out_cache_destroy:
+ kmem_cache_destroy(dev->session_cache);
+
+ return err;
+}
+
+void __crypto_device_remove(struct crypto_device *dev)
+{
+ remove_device_attributes(dev);
+ class_device_unregister(&dev->class_device);
+ device_unregister(&dev->device);
+ if (dev->session_pool)
+ mempool_destroy(dev->session_pool);
+ kmem_cache_destroy(dev->session_cache);
+}
+
+int crypto_device_add(struct crypto_device *dev)
+{
+ int err;
+
+ err = __crypto_device_add(dev);
+ if (err)
+ return err;
+
+ spin_lock_irq(&cdev_lock);
+ list_add(&dev->cdev_entry, &cdev_list);
+ dev->id = ++cdev_ids;
+ spin_unlock_irq(&cdev_lock);
+
+ printk(KERN_INFO "Crypto device %s was registered with ID=%x.\n",
+ dev->name, dev->id);
+
+ return 0;
+}
+
+void crypto_device_remove(struct crypto_device *dev)
+{
+ struct crypto_device *__dev, *n;
+
+ __crypto_device_remove(dev);
+
+ spin_lock_irq(&cdev_lock);
+ list_for_each_entry_safe(__dev, n, &cdev_list, cdev_entry) {
+ if (compare_device(__dev, dev)) {
+ list_del_init(&__dev->cdev_entry);
+ spin_unlock_irq(&cdev_lock);
+
+ /*
+ * In test cases or when crypto device driver is not written correctly,
+ * it's ->data_ready() method will not be callen anymore
+ * after device is removed from crypto device list.
+ *
+ * For such cases we either should provide ->flush() call
+ * or properly write ->data_ready() method.
+ */
+
+ while (atomic_read(&__dev->refcnt)) {
+
+ dprintk(KERN_INFO "Waiting for %s to become free: refcnt=%d.\n",
+ __dev->name, atomic_read(&dev->refcnt));
+
+ /*
+ * Hack zone: you need to write good ->data_ready()
+ * and crypto device driver itself.
+ *
+ * Driver shoud not buzz if it has pending sessions
+ * in it's queue and it was removed from global device list.
+ *
+ * Although I can workaround it here, for example by
+ * flushing the whole queue and drop all pending sessions.
+ */
+
+ __dev->data_ready(__dev);
+ set_current_state(TASK_UNINTERRUPTIBLE);
+ schedule_timeout(HZ);
+ }
+
+ dprintk(KERN_ERR "Crypto device %s was unregistered.\n",
+ dev->name);
+ return;
+ }
+ }
+ spin_unlock_irq(&cdev_lock);
+
+ dprintk(KERN_ERR "Crypto device %s was not registered.\n", dev->name);
+}
+
+EXPORT_SYMBOL_GPL(crypto_device_add);
+EXPORT_SYMBOL_GPL(crypto_device_remove);
+EXPORT_SYMBOL_GPL(crypto_device_get);
+EXPORT_SYMBOL_GPL(crypto_device_get_name);
+EXPORT_SYMBOL_GPL(crypto_device_put);
+EXPORT_SYMBOL_GPL(match_initializer);

2005-03-07 22:21:07

by Evgeniy Polyakov

[permalink] [raw]
Subject: [23/many] arch: arm config

--- ./arch/arm/Kconfig~ 2005-03-02 10:38:10.000000000 +0300
+++ ./arch/arm/Kconfig 2005-03-07 21:26:11.000000000 +0300
@@ -735,4 +735,6 @@

source "crypto/Kconfig"

+source "acrypto/Kconfig"
+
source "lib/Kconfig"

2005-03-07 21:16:44

by Evgeniy Polyakov

[permalink] [raw]
Subject: [22/many] arch: alpha config

--- ./arch/alpha/Kconfig~ 2005-03-02 10:38:38.000000000 +0300
+++ ./arch/alpha/Kconfig 2005-03-07 21:25:54.000000000 +0300
@@ -602,5 +602,7 @@

source "crypto/Kconfig"

+source "acrypto/Kconfig"
+
source "lib/Kconfig"


2005-03-07 23:27:04

by Evgeniy Polyakov

[permalink] [raw]
Subject: Re: [8/many] acrypto: crypto_dev.c

On Mon, 7 Mar 2005 14:40:52 -0800
Nish Aravamudan <[email protected]> wrote:

> On Mon, 7 Mar 2005 23:37:34 +0300, Evgeniy Polyakov <[email protected]> wrote:
> > --- /tmp/empty/crypto_dev.c 1970-01-01 03:00:00.000000000 +0300
> > +++ ./acrypto/crypto_dev.c 2005-03-07 20:35:36.000000000 +0300
> > @@ -0,0 +1,421 @@
> > +/*
> > + * crypto_dev.c
>
> <snip>
>
> > + while (atomic_read(&__dev->refcnt)) {
> > +
> > + dprintk(KERN_INFO "Waiting for %s to become free: refcnt=%d.\n",
> > + __dev->name, atomic_read(&dev->refcnt));
> > +
> > + /*
> > + * Hack zone: you need to write good ->data_ready()
> > + * and crypto device driver itself.
> > + *
> > + * Driver shoud not buzz if it has pending sessions
> > + * in it's queue and it was removed from global device list.
> > + *
> > + * Although I can workaround it here, for example by
> > + * flushing the whole queue and drop all pending sessions.
> > + */
> > +
> > + __dev->data_ready(__dev);
> > + set_current_state(TASK_UNINTERRUPTIBLE);
> > + schedule_timeout(HZ);
>
> I don't see any wait-queues in the immediate area of this code. Can
> this be an ssleep(1)?

Yes, you are right, this loop just spins until all pending sessions
are removed from given crypto device, so it can just ssleep(1) here.

> Thanks,
> Nish


Evgeniy Polyakov

Only failure makes us experts. -- Theo de Raadt

2005-03-07 23:43:53

by Randy.Dunlap

[permalink] [raw]
Subject: Re: [1/many] acrypto: Kconfig

Evgeniy Polyakov wrote:
> diff -Nru /tmp/empty/Kconfig ./acrypto/Kconfig
> --- /tmp/empty/Kconfig 1970-01-01 03:00:00.000000000 +0300
> +++ ./acrypto/Kconfig 2005-03-07 21:21:33.000000000 +0300
> @@ -0,0 +1,30 @@
> +menu "Asynchronous crypto layer"
> +
> +config ACRYPTO
> + tristate "Asynchronous crypto layer"
> + select CONNECTOR
> + --- help ---
> + It supports:
> + - multiple asynchronous crypto device queues
> + - crypto session routing
> + - crypto session binding
> + - modular load balancing
> + - crypto session batching genetically implemented by design
Just curious, what genetics?

> + - crypto session priority
> + - different kinds of crypto operation(RNG, asymmetrical crypto, HMAC and any other
operation (RNG, ... )
> +
> +config SIMPLE_LB
> + tristate "Simple load balancer"
> + depends on ACRYPTO
> + --- help ---
> + Simple load balancer returns device with the lowest load
> + (device has the least number of session in it's queue) if it exists.
sessions in its
> +
> +config ASYNC_PROVIDER
> + tristate "Asynchronous crypto provider (AES CBC)"
> + depends on ACRYPTO && (CRYPTO_AES || CRYPTO_AES_586)
> + --- help ---
> + Asynchronous crypto provider based on synchronous crypto layer.
> + It supports AES CBC crypto mode (may be changed by source edition).
> +
> +endmenu

--
~Randy

2005-03-07 23:43:56

by Randy.Dunlap

[permalink] [raw]
Subject: Re: [8/many] acrypto: crypto_dev.c

Evgeniy Polyakov wrote:
> --- /tmp/empty/crypto_dev.c 1970-01-01 03:00:00.000000000 +0300
> +++ ./acrypto/crypto_dev.c 2005-03-07 20:35:36.000000000 +0300
> @@ -0,0 +1,421 @@
> +/*
> + * crypto_dev.c
> + *
> + * Copyright (c) 2004 Evgeniy Polyakov <[email protected]>

> +#include <linux/kernel.h>
> +#include <linux/module.h>
> +#include <linux/moduleparam.h>
> +#include <linux/types.h>
> +#include <linux/list.h>
> +#include <linux/slab.h>
> +#include <linux/interrupt.h>
> +#include <linux/spinlock.h>
> +#include <linux/device.h>

In alpha order as much as possible, please.

> +#include "acrypto.h"
> +
> +static LIST_HEAD(cdev_list);
> +static spinlock_t cdev_lock = SPIN_LOCK_UNLOCKED;

DEFINE_SPINLOCK(cdev_lock);

--
~Randy

2005-03-07 23:57:28

by Evgeniy Polyakov

[permalink] [raw]
Subject: Re: [8/many] acrypto: crypto_dev.c

On Mon, 07 Mar 2005 15:37:30 -0800
"Randy.Dunlap" <[email protected]> wrote:

> Evgeniy Polyakov wrote:
> > --- /tmp/empty/crypto_dev.c 1970-01-01 03:00:00.000000000 +0300
> > +++ ./acrypto/crypto_dev.c 2005-03-07 20:35:36.000000000 +0300
> > @@ -0,0 +1,421 @@
> > +/*
> > + * crypto_dev.c
> > + *
> > + * Copyright (c) 2004 Evgeniy Polyakov <[email protected]>
>
> > +#include <linux/kernel.h>
> > +#include <linux/module.h>
> > +#include <linux/moduleparam.h>
> > +#include <linux/types.h>
> > +#include <linux/list.h>
> > +#include <linux/slab.h>
> > +#include <linux/interrupt.h>
> > +#include <linux/spinlock.h>
> > +#include <linux/device.h>
>
> In alpha order as much as possible, please.

As far as I remember, some must be first, like kernel.h and module.h,
but I will try to follow alphabet order.

> > +#include "acrypto.h"
> > +
> > +static LIST_HEAD(cdev_list);
> > +static spinlock_t cdev_lock = SPIN_LOCK_UNLOCKED;
>
> DEFINE_SPINLOCK(cdev_lock);

Yep, I suspect some other files also need that.

Will put it into TODO queue.

> --
> ~Randy


Evgeniy Polyakov

Only failure makes us experts. -- Theo de Raadt

2005-03-08 00:13:22

by Evgeniy Polyakov

[permalink] [raw]
Subject: Re: [3/many] acrypto: acrypto.h

On Mon, 07 Mar 2005 15:50:28 -0800
"Randy.Dunlap" <[email protected]> wrote:

> Evgeniy Polyakov wrote:
> > --- /tmp/empty/acrypto.h 1970-01-01 03:00:00.000000000 +0300
> > +++ ./acrypto/acrypto.h 2005-03-07 20:35:36.000000000 +0300
> > @@ -0,0 +1,245 @@
> > +/*
> > + * acrypto.h
> > + *
> > + * Copyright (c) 2004 Evgeniy Polyakov <[email protected]>
> > + *
> > + */
> > +
> > +#ifdef __KERNEL__
> > +
> > +#define SESSION_COMPLETED (1<<15)
> > +#define SESSION_FINISHED (1<<14)
> > +#define SESSION_STARTED (1<<13)
> > +#define SESSION_PROCESSED (1<<12)
> > +#define SESSION_BINDED (1<<11)
> Just a thought: SESSION_BOUND ??
>
> > +#define SESSION_BROKEN (1<<10)
> > +#define SESSION_FROM_CACHE (1<<9)
> > +
> > +#define DEVICE_BROKEN (1<<0)
> > +
> > +#define device_broken(dev) (dev->flags & DEVICE_BROKEN)
> > +#define broke_device(dev) do {dev->flags |= DEVICE_BROKEN;} while(0)
> break_device(dev)

Mdaa, my spelling is quite broken in some places...

> > +int match_initializer(struct crypto_device *, struct crypto_session_initializer *);
> > +int __match_initializer(struct crypto_capability *, struct crypto_session_initializer *);
> > +
> > +#endif /* __KERNEL__ */
> > +#endif /* __ACRYPTO_H */
>
> Several of these could use some namespace_idents on them (SESSION_xyz,
> DEVICE_xyz, device_xyz, match_xyz)...

Hmm, I think I'm not following...

> --
> ~Randy


Evgeniy Polyakov

Only failure makes us experts. -- Theo de Raadt

2005-03-08 00:18:19

by Randy.Dunlap

[permalink] [raw]
Subject: Re: [3/many] acrypto: acrypto.h

Evgeniy Polyakov wrote:
> --- /tmp/empty/acrypto.h 1970-01-01 03:00:00.000000000 +0300
> +++ ./acrypto/acrypto.h 2005-03-07 20:35:36.000000000 +0300
> @@ -0,0 +1,245 @@
> +/*
> + * acrypto.h
> + *
> + * Copyright (c) 2004 Evgeniy Polyakov <[email protected]>
> + *
> + */
> +
> +#ifdef __KERNEL__
> +
> +#define SESSION_COMPLETED (1<<15)
> +#define SESSION_FINISHED (1<<14)
> +#define SESSION_STARTED (1<<13)
> +#define SESSION_PROCESSED (1<<12)
> +#define SESSION_BINDED (1<<11)
Just a thought: SESSION_BOUND ??

> +#define SESSION_BROKEN (1<<10)
> +#define SESSION_FROM_CACHE (1<<9)
> +
> +#define DEVICE_BROKEN (1<<0)
> +
> +#define device_broken(dev) (dev->flags & DEVICE_BROKEN)
> +#define broke_device(dev) do {dev->flags |= DEVICE_BROKEN;} while(0)
break_device(dev)

> +int match_initializer(struct crypto_device *, struct crypto_session_initializer *);
> +int __match_initializer(struct crypto_capability *, struct crypto_session_initializer *);
> +
> +#endif /* __KERNEL__ */
> +#endif /* __ACRYPTO_H */

Several of these could use some namespace_idents on them (SESSION_xyz,
DEVICE_xyz, device_xyz, match_xyz)...

--
~Randy

2005-03-08 01:01:22

by Evgeniy Polyakov

[permalink] [raw]
Subject: Re: [1/many] acrypto: Kconfig

On Mon, 07 Mar 2005 15:33:06 -0800
"Randy.Dunlap" <[email protected]> wrote:

> Evgeniy Polyakov wrote:
> > diff -Nru /tmp/empty/Kconfig ./acrypto/Kconfig
> > --- /tmp/empty/Kconfig 1970-01-01 03:00:00.000000000 +0300
> > +++ ./acrypto/Kconfig 2005-03-07 21:21:33.000000000 +0300
> > @@ -0,0 +1,30 @@
> > +menu "Asynchronous crypto layer"
> > +
> > +config ACRYPTO
> > + tristate "Asynchronous crypto layer"
> > + select CONNECTOR
> > + --- help ---
> > + It supports:
> > + - multiple asynchronous crypto device queues
> > + - crypto session routing
> > + - crypto session binding
> > + - modular load balancing
> > + - crypto session batching genetically implemented by design
> Just curious, what genetics?

:)
All other stacks requires special flags and various complex schemas
to support batching,
but acrypto provides session queue to the low-level driver,
so it does not require any special cruft to support session batching.
So we have batching just after new driver birth,
that is why it is genetic.

> > + - crypto session priority
> > + - different kinds of crypto operation(RNG, asymmetrical crypto, HMAC and any other
> operation (RNG, ... )
> > +
> > +config SIMPLE_LB
> > + tristate "Simple load balancer"
> > + depends on ACRYPTO
> > + --- help ---
> > + Simple load balancer returns device with the lowest load
> > + (device has the least number of session in it's queue) if it exists.
> sessions in its
> > +
> > +config ASYNC_PROVIDER
> > + tristate "Asynchronous crypto provider (AES CBC)"
> > + depends on ACRYPTO && (CRYPTO_AES || CRYPTO_AES_586)
> > + --- help ---
> > + Asynchronous crypto provider based on synchronous crypto layer.
> > + It supports AES CBC crypto mode (may be changed by source edition).
> > +
> > +endmenu

Thank you for your comments, I will put updates into the queue,
and push them if acrypto will be commited.

> --
> ~Randy


Evgeniy Polyakov

Only failure makes us experts. -- Theo de Raadt

2005-03-07 23:38:18

by Evgeniy Polyakov

[permalink] [raw]
Subject: Re: [8/many] acrypto: crypto_dev.c

On Mon, 7 Mar 2005 14:51:21 -0800
Nish Aravamudan <[email protected]> wrote:

> On Tue, 8 Mar 2005 02:14:31 +0300, Evgeniy Polyakov <[email protected]> wrote:
> > On Mon, 7 Mar 2005 14:40:52 -0800
> > Nish Aravamudan <[email protected]> wrote:
> >
> > > On Mon, 7 Mar 2005 23:37:34 +0300, Evgeniy Polyakov <[email protected]> wrote:
> > > > --- /tmp/empty/crypto_dev.c 1970-01-01 03:00:00.000000000 +0300
> > > > +++ ./acrypto/crypto_dev.c 2005-03-07 20:35:36.000000000 +0300
> > > > @@ -0,0 +1,421 @@
> > > > +/*
> > > > + * crypto_dev.c
> > >
> > > <snip>
> > >
> > > > + while (atomic_read(&__dev->refcnt)) {
>
> <snip>
>
> > > > + set_current_state(TASK_UNINTERRUPTIBLE);
> > > > + schedule_timeout(HZ);
> > >
> > > I don't see any wait-queues in the immediate area of this code. Can
> > > this be an ssleep(1)?
> >
> > Yes, you are right, this loop just spins until all pending sessions
> > are removed from given crypto device, so it can just ssleep(1) here.
>
> Would you like me to send an incremental patch or will you be changing
> it yourself?

That would be nice to see your changes in the acrypto.
If it will be commited...

> Thanks,
> Nish


Evgeniy Polyakov

Only failure makes us experts. -- Theo de Raadt

2005-03-08 01:41:02

by Nish Aravamudan

[permalink] [raw]
Subject: Re: [8/many] acrypto: crypto_dev.c

On Tue, 8 Mar 2005 02:14:31 +0300, Evgeniy Polyakov <[email protected]> wrote:
> On Mon, 7 Mar 2005 14:40:52 -0800
> Nish Aravamudan <[email protected]> wrote:
>
> > On Mon, 7 Mar 2005 23:37:34 +0300, Evgeniy Polyakov <[email protected]> wrote:
> > > --- /tmp/empty/crypto_dev.c 1970-01-01 03:00:00.000000000 +0300
> > > +++ ./acrypto/crypto_dev.c 2005-03-07 20:35:36.000000000 +0300
> > > @@ -0,0 +1,421 @@
> > > +/*
> > > + * crypto_dev.c
> >
> > <snip>
> >
> > > + while (atomic_read(&__dev->refcnt)) {

<snip>

> > > + set_current_state(TASK_UNINTERRUPTIBLE);
> > > + schedule_timeout(HZ);
> >
> > I don't see any wait-queues in the immediate area of this code. Can
> > this be an ssleep(1)?
>
> Yes, you are right, this loop just spins until all pending sessions
> are removed from given crypto device, so it can just ssleep(1) here.

Would you like me to send an incremental patch or will you be changing
it yourself?

Thanks,
Nish

2005-03-08 01:45:20

by Nish Aravamudan

[permalink] [raw]
Subject: Re: [8/many] acrypto: crypto_dev.c

On Mon, 7 Mar 2005 23:37:34 +0300, Evgeniy Polyakov <[email protected]> wrote:
> --- /tmp/empty/crypto_dev.c 1970-01-01 03:00:00.000000000 +0300
> +++ ./acrypto/crypto_dev.c 2005-03-07 20:35:36.000000000 +0300
> @@ -0,0 +1,421 @@
> +/*
> + * crypto_dev.c

<snip>

> + while (atomic_read(&__dev->refcnt)) {
> +
> + dprintk(KERN_INFO "Waiting for %s to become free: refcnt=%d.\n",
> + __dev->name, atomic_read(&dev->refcnt));
> +
> + /*
> + * Hack zone: you need to write good ->data_ready()
> + * and crypto device driver itself.
> + *
> + * Driver shoud not buzz if it has pending sessions
> + * in it's queue and it was removed from global device list.
> + *
> + * Although I can workaround it here, for example by
> + * flushing the whole queue and drop all pending sessions.
> + */
> +
> + __dev->data_ready(__dev);
> + set_current_state(TASK_UNINTERRUPTIBLE);
> + schedule_timeout(HZ);

I don't see any wait-queues in the immediate area of this code. Can
this be an ssleep(1)?

Thanks,
Nish

2005-03-08 01:50:11

by Evgeniy Polyakov

[permalink] [raw]
Subject: [1/5] bd: Asynchronous block device


It can be used as loopdev replacement, since it fully supports
file binding.

Original idea was just to create acrypto test module, but since
then it was greatly reformatted.

Each bd device has a list of so called filters each of which is
processed with appropriate transfer buffer.
Each filter must provide 3 methods:
->init() - it is called when filter is being bound to the bd device.
For example it can map appropriate file, prepare network connection
or allocate crypto buffers.
->fini() - it is called when filter is being unbound from bd device.
->transfer() - main transfer routing. It is called with bd_transfer
structure which has source page with it's size and offset, and also
destination position. When transfer is completed(data written/read,
data encypted/decrypted) filter driver must call ->complete() callback
which is provided by bd core.

Filters are processed one after another asynchronously but also there
is an ability to wait untill all previous filters(and appropriate injected
transfers) are finished. It is required for block encryption. It is simple
but not implemented yet.
bd can be used as loopdev replacement and network block device replacement
at the same time, nbd filter is not implemented yet.

With such flexibility one can read data from the file or the real block device,
encrypt it and transfer it to the remote block devices. It can be used for
high availabilty systems, distributed backup and others.

Transfer structure is actully a bvec structure from BIO, it has a pointer
to the global private area which contains BIO reference counter, which is
used for BIO processing finishing.

bd outperforms vanilla loopdev on about 5% on 2-way SMP(1+1HT) machine.

Since bvecs processing is asynchronous on big SMP machines with
big number of combined bvecs results will be better.
With asynchronous BIO processing difference is more than 20%,
but it is very dangerous and now dropped mode.

bd also allows _much_ easier filter creation than device mapper.


diff -Nru /tmp/empty/Kconfig ./drivers/block/bd/Kconfig
--- /tmp/empty/Kconfig 1970-01-01 03:00:00.000000000 +0300
+++ ./drivers/block/bd/Kconfig 2005-03-07 23:07:51.000000000 +0300
@@ -0,0 +1,24 @@
+menu "Asynchronous block device"
+
+config BD
+ tristate "Asynchronous block device"
+ --- help ---
+ Asynchronous block device is similar to device mapping,
+ but allows asynchronous operations.
+ Any number of filter can be connected to each device
+ in linear order. Each filter must have transfer() method.
+
+config BD_FD
+ tristate "File backend for bd"
+ --- help ---
+ File backend for asynchronous block device - it allows
+ similar to loopdev file mapping.
+
+config BD_ACRYPTO
+ tristate "Asynchronous crypto filter for bd"
+ depends on ACRYPTO
+ --- help ---
+ Asynchronous crypto filter which uses provided from userspace
+ IV and key for the full disk encryption.
+
+endmenu
diff -Nru /tmp/empty/Makefile ./drivers/block/bd/Makefile
--- /tmp/empty/Makefile 1970-01-01 03:00:00.000000000 +0300
+++ ./drivers/block/bd/Makefile 2005-03-07 23:03:18.000000000 +0300
@@ -0,0 +1,4 @@
+obj-$(CONFIG_BD) += ubd.o
+obj-$(CONFIG_BD_FD) += bd_fd.o
+obj-$(CONFIG_BD_ACRYPTO) += bd_acrypto.o
+ubd-y += bd.o bd_bio.o bd_filter.o
diff -Nru /tmp/empty/bd.c ./drivers/block/bd/bd.c
--- /tmp/empty/bd.c 1970-01-01 03:00:00.000000000 +0300
+++ ./drivers/block/bd/bd.c 2005-03-07 23:01:04.000000000 +0300
@@ -0,0 +1,550 @@
+/*
+ * bd.c
+ *
+ * Copyright (c) 2004-2005 Evgeniy Polyakov <[email protected]>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/spinlock.h>
+#include <linux/blkdev.h>
+#include <linux/fs.h>
+#include <linux/bio.h>
+#include <linux/init.h>
+#include <linux/fs.h>
+#include <linux/file.h>
+
+#include "bd.h"
+#include "bd_filter.h"
+
+static char bd_name[] = "bd";
+
+static unsigned int bd_major = 123;
+module_param(bd_major, uint, 0);
+
+static unsigned int bd_max_num = 8;
+module_param(bd_max_num, uint, 0);
+
+static struct bd_device **bd_devs;
+
+static int bd_ioctl(struct inode *, struct file *, unsigned int, unsigned long);
+static int bd_open_device(struct inode *, struct file *);
+static int bd_release_device(struct inode *, struct file *);
+
+static int bd_request_thread(void *data);
+
+static struct block_device_operations bd_fops =
+{
+ .owner THIS_MODULE,
+ .ioctl bd_ioctl,
+ .open bd_open_device,
+ .release bd_release_device,
+};
+
+static void bd_setup(struct bd_device *dev)
+{
+ struct gendisk *d = dev->disk;
+
+ d->major = dev->major;
+ d->first_minor = dev->minor;
+ d->fops = &bd_fops;
+ d->private_data = dev;
+ d->flags = GENHD_FL_SUPPRESS_PARTITION_INFO;
+
+ sprintf(d->disk_name, "%s", dev->name);
+
+ add_disk(d);
+}
+
+static struct bd_device *bd_alloc_dev(int minor)
+{
+ struct bd_device *dev;
+ int err;
+
+ dev = kmalloc(sizeof(*dev), GFP_KERNEL);
+ if (!dev)
+ {
+ dprintk(KERN_ERR "Failed to allocate new bd_device structure.\n");
+ return NULL;
+ }
+
+ memset(dev, 0, sizeof(*dev));
+
+ dev->minor = minor;
+ dev->major = bd_major;
+ snprintf(dev->name, sizeof(dev->name), "bd%d", dev->minor);
+
+ init_MUTEX(&dev->usem);
+ spin_lock_init(&dev->bio_lock);
+ spin_lock_init(&dev->state_lock);
+ bd_unbind_dev(dev);
+
+ init_waitqueue_head(&dev->bio_wait);
+
+ dev->disk = alloc_disk(1);
+ if (!dev->disk)
+ {
+ dprintk("Failed to allocate a disk.\n");
+ goto err_out_free_dev;
+ }
+
+ dev->disk->queue = blk_alloc_queue(GFP_KERNEL);
+ if (!dev->disk->queue)
+ {
+ dprintk("Failed to initialize blk queue.\n");
+ goto err_out_free_disk;
+ }
+
+ err = bd_process_bio_init(dev);
+ if (err)
+ goto err_out_free_queue;
+
+ bd_setup(dev);
+
+ init_completion(&dev->thread_exited);
+ dev->pid = kernel_thread(bd_request_thread, dev, CLONE_FS | CLONE_FILES);
+ if (IS_ERR((void *)dev->pid)) {
+ dprintk(KERN_ERR "Failed to create kernel load balancing thread.\n");
+ goto err_out_bio_fini;
+ }
+
+ dprintk("Device %s has been allocated.\n", dev->name);
+
+ return dev;
+
+err_out_bio_fini:
+ bd_process_bio_fini(dev);
+err_out_free_queue:
+ blk_cleanup_queue(dev->disk->queue);
+
+err_out_free_disk:
+ put_disk(dev->disk);
+
+err_out_free_dev:
+ kfree(dev);
+
+ return NULL;
+}
+
+static void bd_free_dev(struct bd_device *dev)
+{
+
+ bd_process_bio_fini(dev);
+ dev->need_exit = 1;
+ kill_proc(dev->pid, SIGTERM, 1);
+ wait_for_completion(&dev->thread_exited);
+
+ bd_clean_filter_list(dev);
+
+ del_gendisk(dev->disk);
+
+ blk_cleanup_queue(dev->disk->queue);
+
+ put_disk(dev->disk);
+
+ dprintk("Device %s has been freed.\n", dev->name);
+
+ kfree(dev);
+}
+
+void bd_get_dev(struct bd_device *dev)
+{
+ down(&dev->usem);
+ dev->refcnt++;
+ up(&dev->usem);
+}
+
+void bd_put_dev(struct bd_device *dev)
+{
+ down(&dev->usem);
+ dev->refcnt--;
+ up(&dev->usem);
+}
+
+struct bd_device *bd_get_dev_minor(int minor)
+{
+ struct bd_device *dev;
+
+ if (minor < 0 || minor >= bd_max_num)
+ {
+ dprintk("Wrong device number %d.\n", minor);
+ return NULL;
+ }
+
+ dev = bd_devs[minor];
+
+ bd_get_dev(dev);
+
+ return dev;
+}
+
+
+static void bd_unplug(request_queue_t *q)
+{
+ struct bd_device *dev;
+
+ if (!q || !q->queuedata)
+ {
+ dprintk("%s: BUG: q=%p.\n", __func__, q);
+ if (q)
+ dprintk("%s: BUG: q->queuedata=%p.\n", __func__, q->queuedata);
+ return;
+ }
+
+ dev = q->queuedata;
+
+ clear_bit(QUEUE_FLAG_PLUGGED, &q->queue_flags);
+}
+
+void bd_bind_dev(struct bd_device *dev)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&dev->state_lock, flags);
+ dev->state = bd_bound;
+ spin_unlock_irqrestore(&dev->state_lock, flags);
+}
+
+void bd_unbind_dev(struct bd_device *dev)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&dev->state_lock, flags);
+ dev->state = bd_not_bound;
+ spin_unlock_irqrestore(&dev->state_lock, flags);
+}
+
+int bd_is_bound(struct bd_device *dev)
+{
+ return (dev->state == bd_bound);
+}
+
+static void bd_add_bio(struct bd_device *dev, struct bio *bio)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&dev->bio_lock, flags);
+ if (dev->biotail) {
+ dev->biotail->bi_next = bio;
+ dev->biotail = bio;
+ } else
+ dev->bio = dev->biotail = bio;
+
+ atomic_inc(&dev->bio_refcnt);
+ spin_unlock_irqrestore(&dev->bio_lock, flags);
+
+ wake_up(&dev->bio_wait);
+
+ dprintk("%s: bio has been added, refcnt=%d.\n", __func__, atomic_read(&dev->bio_refcnt));
+}
+
+static struct bio *bd_get_bio(struct bd_device *dev)
+{
+ unsigned long flags;
+ struct bio *bio;
+
+ spin_lock_irqsave(&dev->bio_lock, flags);
+ bio = dev->bio;
+ if (bio)
+ {
+ if (bio == dev->biotail)
+ dev->biotail = NULL;
+ dev->bio = bio->bi_next;
+ bio->bi_next = NULL;
+
+ atomic_dec(&dev->bio_refcnt);
+
+ dprintk("%s: bio has been gotten, refcnt=%d.\n", __func__, atomic_read(&dev->bio_refcnt));
+ }
+ spin_unlock_irqrestore(&dev->bio_lock, flags);
+
+ return bio;
+}
+
+static int bd_make_request(request_queue_t *q, struct bio *bio)
+{
+ struct bd_device *dev = q->queuedata;
+ unsigned long flags;
+ int cmd = bio_rw(bio);
+
+ if (!dev || !bd_is_bound(dev))
+ goto err_out_bad_bio;
+
+ spin_lock_irqsave(&dev->state_lock, flags);
+ if (!bd_is_bound(dev))
+ {
+ spin_unlock_irqrestore(&dev->state_lock, flags);
+ goto err_out_not_bound;
+ }
+ spin_unlock_irqrestore(&dev->state_lock, flags);
+
+ if (cmd != READ && cmd != READA && cmd != WRITE)
+ goto err_out_wrong_command;
+
+ bd_add_bio(dev, bio);
+
+ return 0;
+
+err_out_wrong_command:
+err_out_not_bound:
+err_out_bad_bio:
+ bio_io_error(bio, bio->bi_size);
+
+ return -EINVAL;
+}
+
+static int bd_request_thread(void *data)
+{
+ struct bd_device *dev = data;
+ struct bio *bio;
+ int err;
+
+ daemonize("%s", dev->name);
+ allow_signal(SIGTERM);
+
+ while (!dev->need_exit)
+ {
+ interruptible_sleep_on_timeout(&dev->bio_wait, 10);
+
+ if (signal_pending(current))
+ flush_signals(current);
+
+ if (dev->need_exit)
+ break;
+
+ while ((bio = bd_get_bio(dev)) != NULL)
+ {
+ err = bd_process_bio(dev, bio);
+ }
+ }
+
+ complete_and_exit(&dev->thread_exited, 0);
+}
+
+int bd_fill_dev(struct bd_device *dev, loff_t size)
+{
+ if (!dev->bdev || IS_ERR(dev->bdev))
+ {
+ dprintk("Device %s does not have appropriate block device.\n", dev->name);
+ return -EINVAL;
+ }
+
+ if (!dev->bd_block_size)
+ {
+ dprintk("Device %s have block_size=0.\n", dev->name);
+ return -EINVAL;
+ }
+
+ blk_queue_make_request(dev->disk->queue, bd_make_request);
+ dev->disk->queue->queuedata = dev;
+ dev->disk->queue->unplug_fn = bd_unplug;
+
+ set_capacity(dev->disk, size);
+ bd_set_size(dev->bdev, size << (ffs(dev->bd_block_size) - 1));
+
+ set_blocksize(dev->bdev, dev->bd_block_size);
+
+ return 0;
+}
+
+static int bd_open_device(struct inode *inode, struct file *fp)
+{
+ struct bd_device *dev = inode->i_bdev->bd_disk->private_data;
+
+ bd_get_dev(dev);
+
+ dev->bdev = inode->i_bdev;
+
+ return 0;
+}
+
+static int bd_release_device(struct inode *inode, struct file *fp)
+{
+ struct bd_device *dev = inode->i_bdev->bd_disk->private_data;
+
+ bd_put_dev(dev);
+
+ return 0;
+}
+
+static int bd_ioctl(struct inode *inode, struct file *fp, unsigned int cmd, unsigned long arg)
+{
+ struct bd_device *dev = inode->i_bdev->bd_disk->private_data;
+ int err;
+ unsigned long n;
+ struct bd_main_filter *f;
+ struct bd_user u;
+ void *priv;
+
+ dprintk("%s: dev=%s, cmd=%x, arg=%lx.\n", __func__, dev->name, cmd, arg);
+
+ if (_IOC_TYPE(cmd) != BD_ID)
+ {
+ dprintk("Wrong ioctl cmd type %x, must be %x.\n", _IOC_TYPE(cmd), BD_ID);
+ return -EINVAL;
+ }
+
+ if (down_interruptible(&dev->usem))
+ return -ERESTARTSYS;
+
+ err = 0;
+ switch (cmd)
+ {
+ case BD_BIND_FILTER:
+ case BD_UNBIND_FILTER:
+ n = copy_from_user(&u, (long *)arg, sizeof(u));
+ if (n) {
+ err = EINVAL;
+ break;
+ }
+
+ if (u.size > PAGE_SIZE) {
+ dprintk("dev=%s, filter=%s: private size %u is too big, max=%lu.\n",
+ dev->name, u.name, u.size, PAGE_SIZE);
+ err = -E2BIG;
+ break;
+ }
+
+ dprintk("IOCTL: dev=%s, filter=%s, size=%u.\n", dev->name, u.name, u.size);
+
+ f = bd_find_main_filter_by_name(u.name);
+ if (!f) {
+ err = -ENODEV;
+ break;
+ }
+
+ if (cmd == BD_BIND_FILTER) {
+ /*
+ * f->priv will be freed in bd_del_filter()/bd_add_filter() -> bd_filter_free()
+ * if ->priv and ->priv_size are non-NULL.
+ */
+ priv = kmalloc(u.size, GFP_KERNEL);
+ if (!priv) {
+ err = -ENOMEM;
+ break;
+ }
+
+ n = copy_from_user(priv, (long *)(arg+sizeof(u)), u.size);
+ if (n) {
+ err = EINVAL;
+ break;
+ }
+
+ down(&dev->filter_sem);
+ err = bd_add_filter(dev, f, priv, u.size);
+ up(&dev->filter_sem);
+ }
+ else
+ {
+ down(&dev->filter_sem);
+ bd_del_filter(dev, f);
+ up(&dev->filter_sem);
+ }
+
+ atomic_dec(&f->refcnt);
+ break;
+ default:
+ dprintk("Unsupported ioctl %x for device %s.\n", cmd, dev->name);
+ err = -ENOTSUPP;
+ break;
+ }
+
+ up(&dev->usem);
+
+ return err;
+}
+
+int __devinit bd_init(void)
+{
+ int err, i;
+ struct bd_device *dev;
+
+ err = register_blkdev(bd_major, bd_name);
+ if (err)
+ {
+ dprintk("Failed to register blkdev with major %u: err=%d.\n",
+ bd_major, err);
+ return err;
+ }
+
+ bd_devs = kmalloc(bd_max_num * sizeof(void *), GFP_KERNEL);
+ if (!bd_devs)
+ {
+ dprintk("Failed to allocate %d bd devices.\n", bd_max_num);
+ goto err_out_unregister_blkdev;
+ }
+
+ memset(bd_devs, 0, bd_max_num * sizeof(void *));
+
+ for (i=0; i<bd_max_num; ++i)
+ {
+ dev = bd_alloc_dev(i);
+ if (!dev)
+ goto err_out_free_all_devs;
+
+ bd_devs[i] = dev;
+
+ dprintk("Device %s [%d] - %p.\n", dev->name, i, bd_devs[i]);
+ }
+
+
+ dprintk("%s has been successfully created.\n", bd_name);
+
+ return 0;
+
+err_out_free_all_devs:
+ --i;
+ while(i >= 0)
+ {
+ bd_free_dev(bd_devs[i]);
+ }
+ kfree(bd_devs);
+err_out_unregister_blkdev:
+ unregister_blkdev(bd_major, "bd");
+
+ return -EINVAL;
+}
+
+void __devexit bd_fini(void)
+{
+ int i;
+
+ for (i=0; i<bd_max_num; ++i)
+ bd_free_dev(bd_devs[i]);
+
+ unregister_blkdev(bd_major, bd_name);
+
+ kfree(bd_devs);
+
+ dprintk("%s has beed successfully removed.\n", bd_name);
+}
+
+module_init(bd_init);
+module_exit(bd_fini);
+
+MODULE_AUTHOR("Evgeniy Polyakov <[email protected]>");
+MODULE_LICENSE("GPL");
+
+EXPORT_SYMBOL(bd_get_dev);
+EXPORT_SYMBOL(bd_get_dev_minor);
+EXPORT_SYMBOL(bd_put_dev);
+EXPORT_SYMBOL(bd_fill_dev);
+EXPORT_SYMBOL(bd_bind_dev);
+EXPORT_SYMBOL(bd_unbind_dev);
+EXPORT_SYMBOL(bd_is_bound);
diff -Nru /tmp/empty/bd.h ./drivers/block/bd/bd.h
--- /tmp/empty/bd.h 1970-01-01 03:00:00.000000000 +0300
+++ ./drivers/block/bd/bd.h 2005-03-07 23:01:05.000000000 +0300
@@ -0,0 +1,120 @@
+/*
+ * bd.h
+ *
+ * Copyright (c) 2004 Evgeniy Polyakov <[email protected]>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#ifndef __BD_H
+#define __BD_H
+
+#include <linux/ioctl.h>
+
+#define BD_ID 'B'
+#define BD_BIND_FILTER _IOW(BD_ID, 1, void *)
+#define BD_UNBIND_FILTER _IOW(BD_ID, 2, void *)
+
+#define BD_MAX_NAMESIZ 32
+
+struct bd_user
+{
+ char name[BD_MAX_NAMESIZ];
+ __u32 size;
+};
+
+#ifdef __KERNEL__
+
+#include <linux/fs.h>
+#include <linux/completion.h>
+
+#define BD_DEBUG
+
+#ifdef BD_DEBUG
+#define dprintk(f, a...) printk(f, ##a)
+#else
+#define dprintk(f, a...) do {} while(0)
+#endif
+
+enum {
+ bd_not_bound,
+ bd_bound,
+};
+
+struct bd_private
+{
+ int req_cnt;
+};
+
+struct bd_device
+{
+ char name[BD_MAX_NAMESIZ];
+
+ int major;
+ int minor;
+
+ struct semaphore usem;
+ int refcnt;
+
+ unsigned int bd_sector_size;
+ unsigned long bd_block_size;
+ unsigned int bd_max_request_size;
+
+ struct gendisk *disk;
+
+ struct block_device *bdev;
+
+ int old_gfp_mask;
+
+ loff_t offset;
+
+ spinlock_t bio_lock;
+ struct bio *bio;
+ struct bio *biotail;
+ atomic_t bio_refcnt;
+
+ int state;
+ spinlock_t state_lock;
+
+ wait_queue_head_t bio_wait;
+
+ struct completion thread_exited;
+ int pid;
+ int need_exit;
+
+ struct workqueue_struct *transfer_queue;
+
+ struct semaphore filter_sem;
+ struct list_head filter_list;
+ int filter_num;
+};
+
+int bd_fill_dev(struct bd_device *dev, loff_t size);
+
+void bd_bind_dev(struct bd_device *dev);
+void bd_unbind_dev(struct bd_device *dev);
+int bd_is_bound(struct bd_device *dev);
+
+int bd_process_bio(struct bd_device *dev, struct bio *bio);
+int bd_process_bio_init(struct bd_device *dev);
+void bd_process_bio_fini(struct bd_device *dev);
+
+void bd_get_dev(struct bd_device *dev);
+void bd_put_dev(struct bd_device *dev);
+struct bd_device *bd_get_dev_minor(int minor);
+
+#endif /* __KERNEL__ */
+#endif /* __BD_H */
diff -Nru /tmp/empty/bd_acrypto.c ./drivers/block/bd/bd_acrypto.c
--- /tmp/empty/bd_acrypto.c 1970-01-01 03:00:00.000000000 +0300
+++ ./drivers/block/bd/bd_acrypto.c 2005-03-07 23:10:45.000000000 +0300
@@ -0,0 +1,159 @@
+/*
+ * bd_acrypto.c
+ *
+ * Copyright (c) 2005 Evgeniy Polyakov <[email protected]>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/spinlock.h>
+#include <linux/blkdev.h>
+#include <linux/fs.h>
+#include <linux/bio.h>
+#include <linux/init.h>
+#include <linux/fs.h>
+#include <linux/file.h>
+
+#include "../../../acrypto/acrypto.h"
+#include "../../../acrypto/crypto_user.h"
+#include "../../../acrypto/crypto_def.h"
+
+#undef dprintk
+
+#include "bd.h"
+#include "bd_filter.h"
+#include "bd_acrypto.h"
+
+static int bd_acrypto_transfer(struct bd_transfer *);
+static int bd_acrypto_init(struct bd_device *, struct bd_filter *);
+static void bd_acrypto_fini(struct bd_device *, struct bd_filter *);
+
+static struct bd_main_filter bd_acrypto_main_filter =
+{
+ .name = "acrypto",
+ .transfer = bd_acrypto_transfer,
+ .init = bd_acrypto_init,
+ .fini = bd_acrypto_fini,
+ .flags = BD_MAIN_FILTER_FLAG_WAIT,
+};
+
+static int bd_acrypto_prepare(struct crypto_data *data, void *src, void *dst, unsigned long size,
+ void *key, int key_size, void *iv, int iv_size, void *priv)
+{
+ int err;
+
+ err = crypto_user_alloc_crypto_data(data, size, size, key_size, iv_size);
+ if (err)
+ return err;
+
+ crypto_user_fill_sg(src, size, data->sg_src);
+ crypto_user_fill_sg(dst, size, data->sg_dst);
+ crypto_user_fill_sg(key, key_size, data->sg_key);
+ crypto_user_fill_sg(iv, iv_size, data->sg_iv);
+
+ data->priv = priv;
+ data->priv_size = 0;
+
+ return 0;
+}
+
+static int bd_acrypto_init(struct bd_device *dev, struct bd_filter *f)
+{
+ struct bd_acrypto_private *p = f->priv;
+
+ if (!p)
+ return -EINVAL;
+
+ return 0;
+}
+
+static void bd_acrypto_fini(struct bd_device *dev, struct bd_filter *f)
+{
+}
+
+void bd_acrypto_callback(struct crypto_session_initializer *ci, struct crypto_data *data)
+{
+ struct bd_transfer *t = data->priv;
+
+ t->f->complete(t);
+}
+
+static int bd_acrypto_transfer(struct bd_transfer *t)
+{
+ struct crypto_data data;
+ struct bd_acrypto_private *p = t->f->priv;
+ int err;
+ void *src, *dst;
+ struct crypto_session *s;
+ struct crypto_session_initializer ci;
+
+ if (!p)
+ return -EINVAL;
+
+ memset(&ci, 0, sizeof(ci));
+
+ ci.operation = (t->cmd == WRITE)?CRYPTO_OP_ENCRYPT:CRYPTO_OP_DECRYPT;
+ ci.type = p->type;
+ ci.mode = p->mode;
+ ci.priority = p->priority;
+ ci.callback = bd_acrypto_callback;
+
+ /*
+ * This is an acrypto filter feature - it is used exactly as filter - it changes data in-place.
+ */
+ src = kmap_atomic(t->src.page, KM_USER0) + t->src.off;
+ dst = kmap_atomic(t->dst.page, KM_USER1) + t->dst.off;
+
+ err = bd_acrypto_prepare(&data, src, dst, t->src.size, p->key, p->key_size, p->iv, p->iv_size, t);
+ if (err)
+ goto err_out_unmap;
+
+ s = crypto_session_alloc(&ci, &data);
+ if (!s)
+ goto err_out_free;
+
+ kunmap_atomic(dst, KM_USER1);
+ kunmap_atomic(src, KM_USER0);
+
+ return 0;
+
+err_out_free:
+ crypto_user_free_crypto_data(&data);
+err_out_unmap:
+ kunmap_atomic(src, KM_USER0);
+
+ return err;
+}
+
+int __devinit bd_acrypto_init_dev(void)
+{
+ return bd_register_main_filter(&bd_acrypto_main_filter);
+}
+
+void __devexit bd_acrypto_fini_dev(void)
+{
+ bd_unregister_main_filter(&bd_acrypto_main_filter);
+}
+
+module_init(bd_acrypto_init_dev);
+module_exit(bd_acrypto_fini_dev);
+
+MODULE_AUTHOR("Evgeniy Polyakov <[email protected]>");
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Asynchronous crypto filter.");
diff -Nru /tmp/empty/bd_acrypto.c~ ./drivers/block/bd/bd_acrypto.c~
--- /tmp/empty/bd_acrypto.c~ 1970-01-01 03:00:00.000000000 +0300
+++ ./drivers/block/bd/bd_acrypto.c~ 2005-03-07 23:01:05.000000000 +0300
@@ -0,0 +1,159 @@
+/*
+ * bd_acrypto.c
+ *
+ * Copyright (c) 2005 Evgeniy Polyakov <[email protected]>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/spinlock.h>
+#include <linux/blkdev.h>
+#include <linux/fs.h>
+#include <linux/bio.h>
+#include <linux/init.h>
+#include <linux/fs.h>
+#include <linux/file.h>
+
+#include "../crypto/acrypto.h"
+#include "../crypto/crypto_user.h"
+#include "../crypto/crypto_def.h"
+
+#undef dprintk
+
+#include "bd.h"
+#include "bd_filter.h"
+#include "bd_acrypto.h"
+
+static int bd_acrypto_transfer(struct bd_transfer *);
+static int bd_acrypto_init(struct bd_device *, struct bd_filter *);
+static void bd_acrypto_fini(struct bd_device *, struct bd_filter *);
+
+static struct bd_main_filter bd_acrypto_main_filter =
+{
+ .name = "acrypto",
+ .transfer = bd_acrypto_transfer,
+ .init = bd_acrypto_init,
+ .fini = bd_acrypto_fini,
+ .flags = BD_MAIN_FILTER_FLAG_WAIT,
+};
+
+static int bd_acrypto_prepare(struct crypto_data *data, void *src, void *dst, unsigned long size,
+ void *key, int key_size, void *iv, int iv_size, void *priv)
+{
+ int err;
+
+ err = crypto_user_alloc_crypto_data(data, size, size, key_size, iv_size);
+ if (err)
+ return err;
+
+ crypto_user_fill_sg(src, size, data->sg_src);
+ crypto_user_fill_sg(dst, size, data->sg_dst);
+ crypto_user_fill_sg(key, key_size, data->sg_key);
+ crypto_user_fill_sg(iv, iv_size, data->sg_iv);
+
+ data->priv = priv;
+ data->priv_size = 0;
+
+ return 0;
+}
+
+static int bd_acrypto_init(struct bd_device *dev, struct bd_filter *f)
+{
+ struct bd_acrypto_private *p = f->priv;
+
+ if (!p)
+ return -EINVAL;
+
+ return 0;
+}
+
+static void bd_acrypto_fini(struct bd_device *dev, struct bd_filter *f)
+{
+}
+
+void bd_acrypto_callback(struct crypto_session_initializer *ci, struct crypto_data *data)
+{
+ struct bd_transfer *t = data->priv;
+
+ t->f->complete(t);
+}
+
+static int bd_acrypto_transfer(struct bd_transfer *t)
+{
+ struct crypto_data data;
+ struct bd_acrypto_private *p = t->f->priv;
+ int err;
+ void *src, *dst;
+ struct crypto_session *s;
+ struct crypto_session_initializer ci;
+
+ if (!p)
+ return -EINVAL;
+
+ memset(&ci, 0, sizeof(ci));
+
+ ci.operation = (t->cmd == WRITE)?CRYPTO_OP_ENCRYPT:CRYPTO_OP_DECRYPT;
+ ci.type = p->type;
+ ci.mode = p->mode;
+ ci.priority = p->priority;
+ ci.callback = bd_acrypto_callback;
+
+ /*
+ * This is an acrypto filter feature - it is used exactly as filter - it changes data in-place.
+ */
+ src = kmap_atomic(t->src.page, KM_USER0) + t->src.off;
+ dst = kmap_atomic(t->dst.page, KM_USER1) + t->dst.off;
+
+ err = bd_acrypto_prepare(&data, src, dst, t->src.size, p->key, p->key_size, p->iv, p->iv_size, t);
+ if (err)
+ goto err_out_unmap;
+
+ s = crypto_session_alloc(&ci, &data);
+ if (!s)
+ goto err_out_free;
+
+ kunmap_atomic(dst, KM_USER1);
+ kunmap_atomic(src, KM_USER0);
+
+ return 0;
+
+err_out_free:
+ crypto_user_free_crypto_data(&data);
+err_out_unmap:
+ kunmap_atomic(src, KM_USER0);
+
+ return err;
+}
+
+int __devinit bd_acrypto_init_dev(void)
+{
+ return bd_register_main_filter(&bd_acrypto_main_filter);
+}
+
+void __devexit bd_acrypto_fini_dev(void)
+{
+ bd_unregister_main_filter(&bd_acrypto_main_filter);
+}
+
+module_init(bd_acrypto_init_dev);
+module_exit(bd_acrypto_fini_dev);
+
+MODULE_AUTHOR("Evgeniy Polyakov <[email protected]>");
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Asynchronous crypto filter.");
diff -Nru /tmp/empty/bd_acrypto.h ./drivers/block/bd/bd_acrypto.h
--- /tmp/empty/bd_acrypto.h 1970-01-01 03:00:00.000000000 +0300
+++ ./drivers/block/bd/bd_acrypto.h 2005-03-07 23:01:05.000000000 +0300
@@ -0,0 +1,37 @@
+/*
+ * bd_acrypto.h
+ *
+ * Copyright (c) 2005 Evgeniy Polyakov <[email protected]>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#ifndef __BD_ACRYPTO_H
+#define __BD_ACRYPTO_H
+
+struct bd_acrypto_private
+{
+ __u16 type;
+ __u16 mode;
+ __u16 priority;
+
+ __u8 key[32];
+ __u8 iv[16];
+ __u16 key_size;
+ __u16 iv_size;
+};
+
+#endif /* __BD_ACRYPTO_H */
diff -Nru /tmp/empty/bd_bio.c ./drivers/block/bd/bd_bio.c
--- /tmp/empty/bd_bio.c 1970-01-01 03:00:00.000000000 +0300
+++ ./drivers/block/bd/bd_bio.c 2005-03-07 23:01:05.000000000 +0300
@@ -0,0 +1,280 @@
+/*
+ * bd_bio.c
+ *
+ * Copyright (c) 2004-2005 Evgeniy Polyakov <[email protected]>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <linux/types.h>
+#include <linux/bio.h>
+#include <linux/delay.h>
+
+#include "bd.h"
+#include "bd_filter.h"
+
+extern struct semaphore filter_sem;
+extern struct list_head filter_list;
+
+static void bd_free_bvecs(struct bd_filter_transfer *transfers, struct bio *bio)
+{
+ int i;
+
+ for (i=0; i<bio->bi_vcnt; ++i)
+ if (transfers[i].page)
+ __free_pages(transfers[i].page, get_order(transfers[i].size));
+
+ kfree(transfers);
+}
+
+static struct bd_filter_transfer *bd_clone_bvecs(struct bio *bio)
+{
+ struct bd_filter_transfer *t;
+ struct bio_vec *bvec;
+ int i, good_num = 0;
+
+ dprintk("%s: cloning %d bvecs in bio %p.\n", __func__, bio->bi_vcnt, bio);
+
+ t = kmalloc(bio->bi_vcnt*sizeof(struct bd_filter_transfer), GFP_KERNEL);
+ if (!t) {
+ dprintk("Failed to allocate %d bytes array for %d bd_filter_transfer structures.\n",
+ bio->bi_vcnt*sizeof(struct bd_filter_transfer), bio->bi_vcnt);
+ return NULL;
+ }
+
+ memset(t, 0, bio->bi_vcnt*sizeof(struct bd_filter_transfer));
+
+ bio_for_each_segment(bvec, bio, i) {
+ t[i].page = alloc_pages(GFP_KERNEL, get_order(bvec->bv_len));
+ if (!t[i].page)
+ continue;
+
+ t[i].size = bvec->bv_len;
+ t[i].off = bvec->bv_offset;
+
+ good_num++;
+ }
+
+ dprintk("%s: %d [need %d] bd_filter_transfer structures have been successfully allocated.\n",
+ __func__, good_num, bio->bi_vcnt);
+
+ if (good_num != bio->bi_vcnt)
+ {
+ dprintk("Failed to clone %d bvecs [%d succeeded].\n", bio->bi_vcnt, good_num);
+
+ bio_for_each_segment(bvec, bio, i)
+ if (t[i].page)
+ __free_pages(t[i].page, get_order(bvec->bv_len));
+ }
+
+ return (good_num == bio->bi_vcnt)?t:NULL;
+}
+
+static int bd_process_bio_cmd_filter(struct bd_device *dev, struct bd_filter *f, struct bio *bio,
+ struct bd_transfer_private *priv, unsigned int cmd,
+ struct bd_filter_transfer *src, struct bd_filter_transfer *dst)
+{
+ struct bd_transfer *t;
+ struct bio_vec *bvec;
+ int i, err = 0;
+ loff_t pos = bio->bi_sector << (ffs(dev->bd_block_size)-1);
+
+ dprintk("%s: TRANSFER: filter=%s, flags=%08x, pos=%llu.\n", dev->name, f->mf->name, f->mf->flags, pos);
+
+ init_completion(&f->completed);
+
+ t = NULL;
+ bio_for_each_segment(bvec, bio, i) {
+ t = bd_transfer_alloc(GFP_KERNEL);
+ if (!t) {
+ err = -ENOMEM;
+ atomic_dec(&f->refcnt);
+ atomic_dec(&t->priv->bio_refcnt);
+ continue;
+ }
+
+ t->cmd = cmd;
+ t->pos = pos;
+ t->dev = dev;
+ t->priv = priv;
+ t->f = f;
+ t->f->complete = bd_filter_complete;
+
+ if (src) {
+ t->src.page = src[i].page;
+ t->src.off = src[i].off;
+ t->src.size = src[i].size;
+ } else {
+ t->src.page = bvec->bv_page;
+ t->src.off = bvec->bv_offset;
+ t->src.size = bvec->bv_len;
+ }
+
+ if (dst) {
+ t->dst.page = dst[i].page;
+ t->dst.off = dst[i].off;
+ t->dst.size = dst[i].size;
+ } else {
+ t->dst.page = t->src.page;
+ t->dst.off = t->src.off;
+ t->dst.size = t->src.size;
+ }
+
+ pos += bvec->bv_len;
+
+ dprintk("bvec in bio=%p, t=%p, f->refcnt=%d, pos=%llu: SRC=[%p.%u.%u], DST=[%p.%u.%u].\n",
+ bio, t, atomic_read(&t->f->refcnt), pos,
+ t->src.page, t->src.size, t->src.off,
+ t->dst.page, t->dst.size, t->dst.off);
+
+ INIT_WORK(&t->work, &bd_filter_queue_wrapper, t);
+
+ if (i+1 < bio->bi_vcnt)
+ queue_work(dev->transfer_queue, &t->work);
+ }
+
+ if (i == bio->bi_vcnt && t)
+ queue_work(dev->transfer_queue, &t->work);
+
+ return err;
+}
+
+static int bd_process_bio_cmd(struct bd_device *dev, unsigned int cmd, struct bio *bio)
+{
+ int err = 0, bio_err = 0;
+ struct bd_transfer_private *priv;
+ struct bd_filter *f;
+ struct bd_filter_transfer *transfers = NULL;
+
+ dprintk("%s: TRANSFER: cmd=%s [%d], [bs=%lu, bios=%d, bvecs=%d].\n",
+ dev->name, (cmd == READ)?"READ":"WRITE", cmd,
+ dev->bd_block_size, atomic_read(&dev->bio_refcnt), bio->bi_vcnt);
+
+ down(&dev->filter_sem);
+
+ priv = kmalloc(sizeof(struct bd_transfer_private), GFP_KERNEL);
+ if (!priv)
+ {
+ err = -ENOMEM;
+ goto out_up;
+ }
+
+ priv->bio_status = 0;
+ priv->bio = bio;
+ atomic_set(&priv->bio_refcnt, bio->bi_vcnt*dev->filter_num);
+ init_completion(&priv->bio_completed);
+
+ dprintk("BIO: bio=%p, bio_refcnt=%d.\n", bio, atomic_read(&priv->bio_refcnt));
+
+ list_for_each_entry(f, &dev->filter_list, filter_entry)
+ atomic_set(&f->refcnt, bio->bi_vcnt);
+
+ if (cmd == WRITE) {
+ int first = 1;
+
+ transfers = bd_clone_bvecs(bio);
+ if (!transfers) {
+ bio_err = -ENOMEM;
+ atomic_set(&priv->bio_refcnt, 0);
+ goto out_up;
+ }
+
+ list_for_each_entry(f, &dev->filter_list, filter_entry) {
+ if (first) {
+ err = bd_process_bio_cmd_filter(dev, f, bio, priv, cmd, NULL, transfers);
+ first = 0;
+ } else {
+ err = bd_process_bio_cmd_filter(dev, f, bio, priv, cmd, transfers, NULL);
+ }
+
+ if (err && (atomic_read(&f->refcnt) == 0)) {
+ complete(&f->completed);
+ }
+
+ if (need_wait(f->mf->flags)) {
+ wait_for_completion(&f->completed);
+ }
+
+ if (err)
+ bio_err = err;
+ }
+ } else {
+ list_for_each_entry_reverse(f, &dev->filter_list, filter_entry) {
+ err = bd_process_bio_cmd_filter(dev, f, bio, priv, cmd, NULL, NULL);
+ if (err && (atomic_read(&f->refcnt) == 0)) {
+ complete(&f->completed);
+ }
+
+ if (need_wait(f->mf->flags)) {
+ wait_for_completion(&f->completed);
+ }
+
+ if (err)
+ bio_err = err;
+ }
+ }
+
+out_up:
+ up(&dev->filter_sem);
+
+ if (!bio_err || (atomic_read(&priv->bio_refcnt) != 0))
+ wait_for_completion(&priv->bio_completed);
+
+ if (transfers)
+ bd_free_bvecs(transfers, bio);
+ bio_endio(bio, bio->bi_size, (bio_err)?-ENOMEM:0);
+ kfree(priv);
+
+ if (bio_err)
+ dprintk("BIO has been completed with errors.\n");
+
+ return err;
+}
+
+int bd_process_bio(struct bd_device *dev, struct bio *bio)
+{
+ int err;
+
+ dprintk("%s: %lu: dev=%s, bio=%p, cmd=%lu.\n", __func__, jiffies, dev->name, bio, bio_rw(bio));
+ err = bd_process_bio_cmd(dev, (bio_rw(bio) == WRITE)?WRITE:READ, bio);
+
+ return err;
+}
+
+int bd_process_bio_init(struct bd_device *dev)
+{
+ char name[10];
+
+ snprintf(name, sizeof(name), "%s.trans", dev->name);
+
+ dev->transfer_queue = create_workqueue(name);
+ if (!dev->transfer_queue)
+ {
+ dprintk("Failed to create work queue %s for bd device %s.\n",
+ name, dev->name);
+ return -EINVAL;
+ }
+
+ init_MUTEX(&dev->filter_sem);
+ INIT_LIST_HEAD(&dev->filter_list);
+
+ return 0;
+}
+
+void bd_process_bio_fini(struct bd_device *dev)
+{
+ destroy_workqueue(dev->transfer_queue);
+}
diff -Nru /tmp/empty/bd_fd.c ./drivers/block/bd/bd_fd.c
--- /tmp/empty/bd_fd.c 1970-01-01 03:00:00.000000000 +0300
+++ ./drivers/block/bd/bd_fd.c 2005-03-07 23:01:05.000000000 +0300
@@ -0,0 +1,329 @@
+/*
+ * bd_fd.c
+ *
+ * Copyright (c) 2004-2005 Evgeniy Polyakov <[email protected]>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/spinlock.h>
+#include <linux/blkdev.h>
+#include <linux/fs.h>
+#include <linux/bio.h>
+#include <linux/init.h>
+#include <linux/fs.h>
+#include <linux/file.h>
+
+#include "bd.h"
+#include "bd_filter.h"
+#include "bd_fd.h"
+
+static int bd_fd_transfer(struct bd_transfer *);
+static int bd_fd_init(struct bd_device *, struct bd_filter *);
+static void bd_fd_fini(struct bd_device *, struct bd_filter *);
+
+static struct bd_main_filter fd_filter =
+{
+ .name = "fd",
+ .transfer = bd_fd_transfer,
+ .init = bd_fd_init,
+ .fini = bd_fd_fini,
+ .flags = BD_MAIN_FILTER_FLAG_WAIT | BD_MAIN_FILTER_BACKEND,
+};
+
+static loff_t bd_get_size(struct bd_device *dev, struct bd_fd_private *p)
+{
+ loff_t size;
+
+ size = i_size_read(p->file->f_mapping->host) - dev->offset;
+
+ dprintk("dev=%s, size=%llu [%llu].\n", dev->name, size, size >> (ffs(dev->bd_block_size) - 1));
+
+ return size >> (ffs(dev->bd_block_size) - 1);
+}
+
+static void bd_clear_fd(struct bd_device *dev, struct bd_fd_private *p)
+{
+ if (p->file)
+ fput(p->file);
+ p->file = NULL;
+ p->u.fd = 0;
+}
+
+static int bd_set_fd(struct bd_device *dev, struct bd_fd_private *p)
+{
+ int err;
+ struct inode *inode;
+ struct address_space *mapping;
+
+ p->file = fget(p->u.fd);
+ if (!p->file)
+ {
+ dprintk("File does not exist for fd %d.\n", p->u.fd);
+ err = -EBADF;
+ goto err_out_clear_fd;
+ }
+
+ dprintk("%s: Found file: fd=%d, dentry=%s.\n",
+ dev->name, p->u.fd, p->file->f_dentry->d_iname);
+
+ mapping = p->file->f_mapping;
+ inode = mapping->host;
+
+ err = -EBADF;
+ if (S_ISREG(inode->i_mode) || S_ISBLK(inode->i_mode)) {
+ struct address_space_operations *aops = mapping->a_ops;
+
+ if (!p->file->f_op->sendfile)
+ goto err_out_putf;
+
+ if (!aops->prepare_write || !aops->commit_write)
+ goto err_out_putf;
+
+ dev->bd_block_size = inode->i_blksize;
+ } else {
+ goto err_out_putf;
+ }
+
+ dev->bd_sector_size = 512;
+ dev->bd_block_size = 512;
+ dev->bd_max_request_size = PAGE_SIZE;
+
+ dev->old_gfp_mask = mapping_gfp_mask(mapping);
+ mapping_set_gfp_mask(mapping, dev->old_gfp_mask & ~(__GFP_IO|__GFP_FS));
+
+ bd_fill_dev(dev, bd_get_size(dev, p));
+
+ return 0;
+
+err_out_putf:
+ fput(p->file);
+ p->file = NULL;
+
+err_out_clear_fd:
+ p->u.fd = 0;
+
+ return err;
+}
+
+static int bd_fd_init(struct bd_device *dev, struct bd_filter *f)
+{
+ struct bd_fd_private *p;
+ struct bd_fd_user *u = f->priv;
+ int err;
+
+ p = kmalloc(sizeof(*p), GFP_KERNEL);
+ if (!p) {
+ dprintk("Failed to allocate new bd_fd priavte structure in dev=%s, filter=%s.\n",
+ dev->name, f->mf->name);
+ return -ENOMEM;
+ }
+
+ memset(p, 0, sizeof(*p));
+
+ memcpy(&p->u, u, sizeof(p->u));
+
+ dprintk("%s: filter=%s, flags=%08x.\n", __func__, f->mf->name, f->mf->flags);
+
+ /*
+ * f->priv will be freed in bd_del_filter()/bd_add_filter() -> bd_filter_free();
+ */
+ kfree(f->priv);
+ f->priv = p;
+
+ dprintk("%s: filter=%s, flags=%08x.\n", __func__, f->mf->name, f->mf->flags);
+
+ err = bd_set_fd(dev, p);
+ if (err)
+ return err;
+
+ return 0;
+}
+
+static void bd_fd_fini(struct bd_device *dev, struct bd_filter *f)
+{
+ struct bd_fd_private *p = f->priv;
+
+ if (p)
+ bd_clear_fd(dev, p);
+}
+
+static int file_bd_transfer(struct bd_transfer *t,
+ struct page *rpage, unsigned roffs,
+ struct page *lpage, unsigned loffs,
+ int size)
+{
+ char *rbuf = kmap_atomic(rpage, KM_USER0) + roffs;
+ char *lbuf = kmap_atomic(lpage, KM_USER1) + loffs;
+
+ dprintk("%s: cmd=%d, rpage=%p, roff=%u, lpage=%p, loff=%u, size=%d.\n",
+ __func__, t->cmd, rpage, roffs, lpage, loffs, size);
+
+ if (t->cmd == READ)
+ memcpy(lbuf, rbuf, size);
+ else
+ memcpy(rbuf, lbuf, size);
+
+ kunmap_atomic(rbuf, KM_USER0);
+ kunmap_atomic(lbuf, KM_USER1);
+
+ return 0;
+}
+
+static int file_bd_write(struct bd_transfer *t)
+{
+ struct bd_device *dev = t->dev;
+ struct bd_filter *f = t->f;
+ struct page *bv_page = t->src.page;
+ unsigned bv_offs = t->src.off;
+ int bv_len = t->src.size;
+ loff_t pos = t->pos;
+ struct bd_fd_private *p = f->priv;
+ struct address_space *mapping = p->file->f_mapping;
+ struct address_space_operations *aops = mapping->a_ops;
+ struct page *page;
+ pgoff_t index;
+ unsigned size, offset;
+
+ down(&mapping->host->i_sem);
+ index = pos >> PAGE_CACHE_SHIFT;
+ offset = pos & ((pgoff_t)PAGE_CACHE_SIZE - 1);
+ while (bv_len > 0) {
+ int transfer_result;
+
+ size = PAGE_CACHE_SIZE - offset;
+ if (size > bv_len)
+ size = bv_len;
+
+ page = grab_cache_page(mapping, index);
+ if (!page)
+ goto err_out_exit;
+ if (aops->prepare_write(p->file, page, offset, offset+size))
+ goto err_out_page_unlock;
+ transfer_result = file_bd_transfer(t, page, offset, bv_page, bv_offs, size);
+ if (transfer_result) {
+ char *kaddr;
+
+ printk(KERN_ERR "%s: transfer error block %llu\n", dev->name, (unsigned long long)index);
+ kaddr = kmap_atomic(page, KM_USER0);
+ memset(kaddr + offset, 0, size);
+ kunmap_atomic(kaddr, KM_USER0);
+ }
+ flush_dcache_page(page);
+ if (aops->commit_write(p->file, page, offset, offset+size))
+ goto err_out_page_unlock;
+ if (transfer_result)
+ goto err_out_page_unlock;
+ bv_offs += size;
+ bv_len -= size;
+ offset = 0;
+ index++;
+ pos += size;
+ unlock_page(page);
+ page_cache_release(page);
+ }
+ up(&mapping->host->i_sem);
+
+ return 0;
+
+err_out_page_unlock:
+ unlock_page(page);
+ page_cache_release(page);
+err_out_exit:
+ up(&mapping->host->i_sem);
+
+ return -EINVAL;
+}
+
+static int bd_actor(read_descriptor_t *desc, struct page *page, unsigned long offset, unsigned long size)
+{
+ unsigned long count = desc->count;
+ struct bd_transfer *t = desc->arg.data;
+
+ dprintk("%s: dev=%s, page=%p, off=%lu, size=%lu.\n", __func__, t->dev->name, page, offset, size);
+
+ if (size > count)
+ size = count;
+
+ if (file_bd_transfer(t, page, offset, t->src.page, t->src.off, size)) {
+ size = 0;
+ printk(KERN_ERR "Failed to transfer block %ld\n", page->index);
+ desc->error = -EINVAL;
+ }
+
+ desc->count = count - size;
+ desc->written += size;
+
+ return size;
+}
+
+static int file_bd_read(struct bd_transfer *t)
+{
+ struct bd_fd_private *p = t->f->priv;
+ int err;
+ loff_t pos = t->pos;
+
+ err = p->file->f_op->sendfile(p->file, &pos, t->src.size, bd_actor, t);
+ if (err < 0)
+ return err;
+
+ return 0;
+}
+
+static int bd_fd_transfer(struct bd_transfer *t)
+{
+ int err;
+
+ dprintk("%s: TRANSFER: t=%p, dev=%s, filter=%s.\n", __func__, t, t->dev->name, t->f->mf->name);
+
+ if (t->cmd == WRITE)
+ err = file_bd_write(t);
+ else
+ err = file_bd_read(t);
+
+ if (err)
+ t->status = BD_TRANSFER_FAILED;
+
+ t->f->complete(t);
+
+ return err;
+}
+
+int __devinit bd_fd_init_dev(void)
+{
+ int err;
+
+ err = bd_register_main_filter(&fd_filter);
+ if (err)
+ return err;
+
+ return 0;
+}
+
+void __devexit bd_fd_fini_dev(void)
+{
+ bd_unregister_main_filter(&fd_filter);
+}
+
+module_init(bd_fd_init_dev);
+module_exit(bd_fd_fini_dev);
+
+MODULE_AUTHOR("Evgeniy Polyakov <[email protected]>");
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("File backend filter.");
diff -Nru /tmp/empty/bd_fd.h ./drivers/block/bd/bd_fd.h
--- /tmp/empty/bd_fd.h 1970-01-01 03:00:00.000000000 +0300
+++ ./drivers/block/bd/bd_fd.h 2005-03-07 23:01:05.000000000 +0300
@@ -0,0 +1,40 @@
+/*
+ * bd_fd.h
+ *
+ * Copyright (c) 2004-2005 Evgeniy Polyakov <[email protected]>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#ifndef __BD_FD_H
+#define __BD_FD_H
+
+struct bd_fd_user
+{
+ int fd;
+};
+
+#ifdef __KERNEL__
+
+struct bd_fd_private
+{
+ struct bd_fd_user u;
+
+ struct file *file;
+};
+
+#endif /* __KERNEL__ */
+#endif /* __BD_FD_H */
diff -Nru /tmp/empty/bd_filter.c ./drivers/block/bd/bd_filter.c
--- /tmp/empty/bd_filter.c 1970-01-01 03:00:00.000000000 +0300
+++ ./drivers/block/bd/bd_filter.c 2005-03-07 23:01:05.000000000 +0300
@@ -0,0 +1,322 @@
+/*
+ * bd_filter.c
+ *
+ * Copyright (c) 2004-2005 Evgeniy Polyakov <[email protected]>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/delay.h>
+#include <linux/slab.h>
+#include <linux/sched.h>
+#include <linux/bio.h>
+
+#include "bd.h"
+#include "bd_filter.h"
+
+static LIST_HEAD(main_filter_list);
+static spinlock_t main_filter_lock = SPIN_LOCK_UNLOCKED;
+//static DECLARE_SPINLOCK(main_filter_lock);
+
+void bd_filter_complete(struct bd_transfer *t)
+{
+ dprintk("%s: t=%p, dev=%s, filter=%s, bio_refcnt=%d [%08x].\n",
+ __func__, t, t->dev->name, t->f->mf->name,
+ atomic_read(&t->priv->bio_refcnt), t->priv->bio_status);
+
+ if (atomic_dec_and_test(&t->f->refcnt))
+ complete(&t->f->completed);
+
+ if (atomic_dec_and_test(&t->priv->bio_refcnt))
+ {
+ dprintk("Completing BIO request.\n");
+ complete(&t->priv->bio_completed);
+ }
+
+ bd_transfer_free(t);
+}
+
+void bd_filter_queue_wrapper(void *data)
+{
+ struct bd_transfer *t = data;
+ struct bd_filter *f = t->f;
+ int err;
+
+ dprintk("%s: dev=%s, transfer=%p, filter=%s, status=%08x.\n", __func__, t->dev->name, t, f->mf->name, t->status);
+
+ err = f->mf->transfer(t);
+}
+
+struct bd_transfer *bd_transfer_alloc(int flags)
+{
+ struct bd_transfer *t;
+
+ t = kmalloc(sizeof(*t), flags);
+ if (!t)
+ {
+ dprintk("Failed to allocate new bd_transfer structure.\n");
+ return NULL;
+ }
+
+ memset(t, 0, sizeof(*t));
+
+ return t;
+}
+
+void bd_transfer_free(struct bd_transfer *t)
+{
+ dprintk("%s: transfer=%p, dev=%s, filter=%s.\n", __func__, t, t->dev->name, t->f->mf->name);
+
+ kfree(t);
+}
+
+static struct bd_filter *bd_filter_alloc(struct bd_main_filter *mf, int flags)
+{
+ struct bd_filter *f;
+
+ f = kmalloc(sizeof(struct bd_filter), flags);
+ if (!f)
+ return NULL;
+
+ memset(f, 0, sizeof(*f));
+
+ atomic_set(&f->refcnt, 0);
+ f->complete = bd_filter_complete;
+ f->mf = mf;
+
+ return f;
+}
+
+static void bd_filter_free(struct bd_filter *f)
+{
+ if (atomic_read(&f->refcnt) == 0)
+ {
+ if (f->priv_size && f->priv)
+ kfree(f->priv);
+ kfree(f);
+ }
+}
+
+int bd_add_filter(struct bd_device *dev, struct bd_main_filter *f, void *priv, u32 priv_size)
+{
+ struct bd_filter *ft;
+ int err = -EINVAL;
+
+ if (is_backend(f->flags) && bd_is_bound(dev))
+ {
+ dprintk("Backend filter already registered in block device %s.\n", dev->name);
+ return -ENODEV;
+ }
+
+ atomic_inc(&f->refcnt);
+
+ ft = bd_filter_alloc(f, GFP_KERNEL);
+ if (!ft) {
+ goto err_out_exit;
+ }
+
+ ft->priv = priv;
+ ft->priv_size = priv_size;
+
+
+ err = f->init(dev, ft);
+ if (err)
+ goto err_out_free_filter;
+
+ if (is_backend(f->flags))
+ bd_bind_dev(dev);
+
+ list_add_tail(&ft->filter_entry, &dev->filter_list);
+ dev->filter_num++;
+
+ return 0;
+
+err_out_free_filter:
+ bd_filter_free(ft);
+err_out_exit:
+
+ atomic_dec(&f->refcnt);
+
+ bd_unbind_dev(dev);
+
+ return err;
+}
+
+void bd_del_filter(struct bd_device *dev, struct bd_main_filter *f)
+{
+ struct bd_filter *ft, *n;
+
+ if (is_backend(f->flags))
+ bd_unbind_dev(dev);
+
+ list_for_each_entry_safe(ft, n, &dev->filter_list, filter_entry)
+ {
+ if (!strncmp(f->name, ft->mf->name, sizeof(f->name)))
+ {
+ list_del(&ft->filter_entry);
+ dev->filter_num--;
+
+ f->fini(dev, ft);
+
+ smp_mb__before_atomic_dec();
+ atomic_dec(&f->refcnt);
+ smp_mb__after_atomic_dec();
+
+ bd_filter_free(ft);
+ }
+ }
+}
+
+struct bd_main_filter *bd_find_main_filter_by_name(char *name)
+{
+ struct bd_main_filter *f;
+ int found = 0;
+ unsigned long flags;
+
+ spin_lock_irqsave(&main_filter_lock, flags);
+ list_for_each_entry(f, &main_filter_list, main_filter_entry)
+ {
+ if (!strncmp(f->name, name, sizeof(f->name)))
+ {
+ found = 1;
+ atomic_inc(&f->refcnt);
+ break;
+ }
+ }
+
+ spin_unlock_irqrestore(&main_filter_lock, flags);
+
+ if (found)
+ return f;
+ else
+ return NULL;
+}
+
+static int bd_main_filter_ok(struct bd_main_filter *f)
+{
+ if (!f->transfer)
+ {
+ dprintk("Filter %s does not have ->transfer() method.\n", f->name);
+ return 0;
+ }
+
+ if (!f->init)
+ {
+ dprintk("Filter %s does not have ->init() method.\n", f->name);
+ return 0;
+ }
+
+ if (!f->fini)
+ {
+ dprintk("Filter %s does not have ->fini() method.\n", f->name);
+ return 0;
+ }
+
+ return 1;
+}
+
+int bd_register_main_filter(struct bd_main_filter *f)
+{
+ unsigned long flags;
+ struct bd_main_filter *ft;
+ int err = 0;
+
+ if (!bd_main_filter_ok(f))
+ return -EINVAL;
+
+ spin_lock_irqsave(&main_filter_lock, flags);
+
+ list_for_each_entry(ft, &main_filter_list, main_filter_entry)
+ {
+ if (!strncmp(f->name, ft->name, sizeof(f->name)))
+ {
+ dprintk("Filter %s was already registered.\n", f->name);
+ err = -EINVAL;
+ break;
+ }
+ }
+
+ if (!err)
+ {
+ list_add(&f->main_filter_entry, &main_filter_list);
+ atomic_set(&f->refcnt, 0);
+ }
+
+ spin_unlock_irqrestore(&main_filter_lock, flags);
+
+ return err;
+}
+
+void bd_unregister_main_filter(struct bd_main_filter *f)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&main_filter_lock, flags);
+ list_del(&f->main_filter_entry);
+ spin_unlock_irqrestore(&main_filter_lock, flags);
+
+ while (atomic_read(&f->refcnt))
+ {
+ dprintk("Waiting for filter %s to become free: refcnt=%d.\n", f->name, atomic_read(&f->refcnt));
+ msleep(1000);
+ }
+}
+
+void bd_clean_filter_list(struct bd_device *dev)
+{
+ struct bd_main_filter *f;
+ unsigned long flags;
+
+ spin_lock_irqsave(&main_filter_lock, flags);
+ list_for_each_entry(f, &main_filter_list, main_filter_entry)
+ {
+ bd_del_filter(dev, f);
+
+ if (dev->filter_num < 0)
+ dprintk("Something wrong with filter processing in device %s: filter_num=%d.\n",
+ dev->name, dev->filter_num);
+ }
+ spin_unlock_irqrestore(&main_filter_lock, flags);
+}
+#if 0
+static struct bd_transfer *bd_transfer_clone(struct bd_transfer *t, int flags)
+{
+ struct bd_transfer *clone;
+
+ clone = bd_transfer_alloc(flags);
+ if (!clone)
+ return NULL;
+
+ clone->cmd = t->cmd;
+ clone->src = t->src;
+ clone->dst = t->dst;
+ clone->pos = t->pos;
+ clone->dev = t->dev;
+ clone->priv = t->priv;
+
+ INIT_WORK(&clone->work, &bd_filter_queue_wrapper, clone);
+
+ dprintk("Transer has been cloned: filter=%s, dev=%s.\n",
+ t->f->mf->name, t->dev->name);
+
+ return clone;
+}
+#endif
+
+EXPORT_SYMBOL_GPL(bd_register_main_filter);
+EXPORT_SYMBOL_GPL(bd_unregister_main_filter);
diff -Nru /tmp/empty/bd_filter.h ./drivers/block/bd/bd_filter.h
--- /tmp/empty/bd_filter.h 1970-01-01 03:00:00.000000000 +0300
+++ ./drivers/block/bd/bd_filter.h 2005-03-07 23:01:05.000000000 +0300
@@ -0,0 +1,153 @@
+/*
+ * bd_filter.h
+ *
+ * Copyright (c) 2004-2005 Evgeniy Polyakov <[email protected]>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#ifndef __BD_FILTER_H
+#define __BD_FILTER_H
+
+#include <linux/workqueue.h>
+#include <linux/list.h>
+#include <linux/completion.h>
+
+#define BD_MAIN_FILTER_FLAG_WAIT (1<<0) /* bd core must wait until every callback is completed
+ * and do not send data to another filters
+ */
+#define BD_MAIN_FILTER_BACKEND (1<<1) /* This is a backend filter, i.e. one which contacts with the storage itself */
+
+#define need_wait(flags) !!(flags & BD_MAIN_FILTER_FLAG_WAIT)
+#define is_backend(flags) !!(flags & BD_MAIN_FILTER_BACKEND)
+
+
+#define BD_FILTER_COMPLETED 0
+
+#define BD_FILTER_DIR_DIRECT 0
+#define BD_FILTER_DIR_REVERSE 1
+
+struct bd_transfer;
+struct bd_filter;
+
+struct bd_main_filter
+{
+ char name[BD_MAX_NAMESIZ];
+
+ struct list_head main_filter_entry;
+
+ int (* transfer)(struct bd_transfer *);
+ int (* init)(struct bd_device *, struct bd_filter *);
+ void (* fini)(struct bd_device *, struct bd_filter *);
+
+ atomic_t refcnt;
+ u32 flags;
+};
+
+struct bd_filter
+{
+ struct list_head filter_entry;
+
+ void (* complete)(struct bd_transfer *);
+
+ atomic_t refcnt;
+
+ struct bd_main_filter *mf;
+
+ void *priv;
+ u32 priv_size;
+
+ struct completion completed;
+};
+
+#define BD_TRANSFER_OK 0
+#define BD_TRANSFER_FAILED 1
+
+#define BD_BIO_BROKEN 0
+
+struct bd_transfer_private
+{
+ struct bio *bio;
+ atomic_t bio_refcnt;
+ u32 bio_status;
+ struct completion bio_completed;
+};
+
+struct bd_filter_transfer
+{
+ struct page *page;
+ unsigned int off;
+ unsigned int size;
+};
+
+struct bd_transfer
+{
+ unsigned int cmd;
+ struct bd_filter_transfer src;
+ struct bd_filter_transfer dst;
+
+ loff_t pos;
+
+ u32 status;
+
+ struct bd_device *dev;
+ struct bd_filter *f;
+ struct work_struct work;
+
+ struct bd_transfer_private *priv;
+};
+
+void bd_filter_queue_wrapper(void *);
+void bd_transfer_wait(struct bd_transfer *);
+
+struct bd_transfer *bd_transfer_alloc(int flags);
+void bd_transfer_free(struct bd_transfer *);
+int bd_transfer_through_filters(struct bd_transfer *t);
+
+int bd_add_filter(struct bd_device *dev, struct bd_main_filter *f, void *priv, u32 priv_size);
+void bd_del_filter(struct bd_device *dev, struct bd_main_filter *f);
+int bd_register_main_filter(struct bd_main_filter *f);
+void bd_unregister_main_filter(struct bd_main_filter *f);
+struct bd_main_filter *bd_find_main_filter_by_name(char *);
+void bd_clean_filter_list(struct bd_device *dev);
+
+void bd_filter_complete(struct bd_transfer *t);
+void bd_backend_filter_complete(struct bd_transfer *t);
+
+static __inline__ struct bd_filter *next_filter(struct bd_filter *cur, struct bd_device *dev, int dir)
+{
+ struct bd_filter *n;
+ n = list_entry((dir == BD_FILTER_DIR_DIRECT)?cur->filter_entry.next:cur->filter_entry.prev,
+ struct bd_filter, filter_entry);
+
+ dprintk("%s: cur=%s, dir=%d, ptr=[%p.%p], devp=%p.\n",
+ __func__, cur->mf->name, dir,
+ cur->filter_entry.next, cur->filter_entry.prev,
+ &dev->filter_list);
+
+ if (!n)
+ return NULL;
+
+ if (&n->filter_entry == &dev->filter_list)
+ return NULL;
+
+ dprintk("%s: n=%s.\n", __func__, n->mf->name);
+
+ return n;
+}
+
+
+#endif /* __BD_FILTER_H */
diff -Nru /tmp/empty/bd_xor.c ./drivers/block/bd/bd_xor.c
--- /tmp/empty/bd_xor.c 1970-01-01 03:00:00.000000000 +0300
+++ ./drivers/block/bd/bd_xor.c 2005-03-07 23:01:05.000000000 +0300
@@ -0,0 +1,110 @@
+/*
+ * bd_xor.c
+ *
+ * Copyright (c) 2005 Evgeniy Polyakov <[email protected]>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/spinlock.h>
+#include <linux/blkdev.h>
+#include <linux/fs.h>
+#include <linux/bio.h>
+#include <linux/init.h>
+#include <linux/fs.h>
+#include <linux/file.h>
+
+#include "bd.h"
+#include "bd_filter.h"
+
+static int bd_xor_transfer(struct bd_transfer *);
+static int bd_xor_init(struct bd_device *, struct bd_filter *);
+static void bd_xor_fini(struct bd_device *, struct bd_filter *);
+
+static struct bd_main_filter bd_xor_main_filter =
+{
+ .name = "xor",
+ .transfer = bd_xor_transfer,
+ .init = bd_xor_init,
+ .fini = bd_xor_fini,
+ .flags = BD_MAIN_FILTER_FLAG_WAIT,
+};
+
+static int bd_xor_init(struct bd_device *dev, struct bd_filter *f)
+{
+ return 0;
+}
+
+static void bd_xor_fini(struct bd_device *dev, struct bd_filter *f)
+{
+}
+
+static void dump_data(char *prefix, u8 *ptr, int size)
+{
+ int i;
+
+ printk("%lu: %s: ", jiffies, prefix);
+ for (i=0; i<size; ++i)
+ printk("%02x ", ptr[i]);
+ printk("\n");
+}
+
+static int bd_xor_transfer(struct bd_transfer *t)
+{
+ int i;
+ u8 *src, *dst;
+
+ src = kmap_atomic(t->src.page, KM_USER0) + t->src.off;
+ dst = kmap_atomic(t->dst.page, KM_USER1) + t->dst.off;
+
+ dump_data("bd_xor_transfer before", src, 32);
+ for (i=0; i<t->src.size; ++i)
+ {
+ dst[i] = src[i] ^ 0xff;
+ }
+ dump_data("bd_xor_transfer after ", dst, 32);
+
+ kunmap_atomic(dst, KM_USER1);
+ kunmap_atomic(src, KM_USER0);
+
+ t->src.page = t->dst.page;
+ t->src.size = t->dst.size;
+ t->src.off = t->dst.off;
+
+ t->f->complete(t);
+
+ return 0;
+}
+
+int __devinit bd_xor_init_dev(void)
+{
+ return bd_register_main_filter(&bd_xor_main_filter);
+}
+
+void __devexit bd_xor_fini_dev(void)
+{
+ bd_unregister_main_filter(&bd_xor_main_filter);
+}
+
+module_init(bd_xor_init_dev);
+module_exit(bd_xor_fini_dev);
+
+MODULE_AUTHOR("Evgeniy Polyakov <[email protected]>");
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Simple XOR filter.");
--- ./drivers/block/Makefile~ 2005-03-02 10:38:38.000000000 +0300
+++ ./drivers/block/Makefile 2005-03-07 23:09:16.000000000 +0300
@@ -44,4 +44,4 @@
obj-$(CONFIG_VIODASD) += viodasd.o
obj-$(CONFIG_BLK_DEV_SX8) += sx8.o
obj-$(CONFIG_BLK_DEV_UB) += ub.o
-
+obj-$(CONFIG_BD) += bd/
--- ./drivers/block/Kconfig~ 2005-03-02 10:37:50.000000000 +0300
+++ ./drivers/block/Kconfig 2005-03-07 23:08:34.000000000 +0300
@@ -506,4 +506,6 @@
This driver provides Support for ATA over Ethernet block
devices like the Coraid EtherDrive (R) Storage Blade.

+source "drivers/block/bd/Kconfig"
+
endmenu

2005-03-07 20:52:58

by Evgeniy Polyakov

[permalink] [raw]
Subject: [6/many] acrypto: crypto_conn.h

--- /tmp/empty/crypto_conn.h 1970-01-01 03:00:00.000000000 +0300
+++ ./acrypto/crypto_conn.h 2005-03-07 20:35:36.000000000 +0300
@@ -0,0 +1,45 @@
+/*
+ * crypto_conn.h
+ *
+ * Copyright (c) 2004 Evgeniy Polyakov <[email protected]>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#ifndef __CRYPTO_CONN_H
+#define __CRYPTO_CONN_H
+
+#include "acrypto.h"
+
+#define CRYPTO_READ_SESSIONS 0
+#define CRYPTO_REQUEST 1
+#define CRYPTO_GET_STAT 2
+
+struct crypto_conn_data
+{
+ char name[SCACHE_NAMELEN];
+ __u16 cmd;
+ __u16 len;
+ __u8 data[0];
+};
+
+#ifdef __KERNEL__
+
+int crypto_conn_init(void);
+void crypto_conn_fini(void);
+
+#endif /* __KERNEL__ */
+#endif /* __CRYPTO_CONN_H */

2005-03-08 01:55:11

by Nishanth Aravamudan

[permalink] [raw]
Subject: [UPDATE PATCH 8/many] acrypto: crypto_dev.c

On Tue, Mar 08, 2005 at 02:27:20AM +0300, Evgeniy Polyakov wrote:
> On Mon, 7 Mar 2005 14:51:21 -0800
> Nish Aravamudan <[email protected]> wrote:
>
> > On Tue, 8 Mar 2005 02:14:31 +0300, Evgeniy Polyakov <[email protected]> wrote:
> > > On Mon, 7 Mar 2005 14:40:52 -0800
> > > Nish Aravamudan <[email protected]> wrote:
> > >
> > > > On Mon, 7 Mar 2005 23:37:34 +0300, Evgeniy Polyakov <[email protected]> wrote:
> > > > > --- /tmp/empty/crypto_dev.c 1970-01-01 03:00:00.000000000 +0300
> > > > > +++ ./acrypto/crypto_dev.c 2005-03-07 20:35:36.000000000 +0300
> > > > > @@ -0,0 +1,421 @@
> > > > > +/*
> > > > > + * crypto_dev.c
> > > >
> > > > <snip>
> > > >
> > > > > + while (atomic_read(&__dev->refcnt)) {
> >
> > <snip>
> >
> > > > > + set_current_state(TASK_UNINTERRUPTIBLE);
> > > > > + schedule_timeout(HZ);
> > > >
> > > > I don't see any wait-queues in the immediate area of this code. Can
> > > > this be an ssleep(1)?
> > >
> > > Yes, you are right, this loop just spins until all pending sessions
> > > are removed from given crypto device, so it can just ssleep(1) here.
> >
> > Would you like me to send an incremental patch or will you be changing
> > it yourself?
>
> That would be nice to see your changes in the acrypto.
> If it will be commited...

Well, here is an incremental patch, then:

Description: Use ssleep() instead of schedule_timeout() to guarantee the
task delays as expected.

Signed-off-by: Nishanth Aravamudan <[email protected]>


--- 2.6.11-v/acrypto/crypto_dev.c 2005-03-07 17:41:31.000000000 -0800
+++ 2.6.11/acrypto/crypto_dev.c 2005-03-07 17:41:57.000000000 -0800
@@ -28,6 +28,7 @@
#include <linux/interrupt.h>
#include <linux/spinlock.h>
#include <linux/device.h>
+#include <linux/delay.h>

#include "acrypto.h"

@@ -399,8 +400,7 @@ void crypto_device_remove(struct crypto_
*/

__dev->data_ready(__dev);
- set_current_state(TASK_UNINTERRUPTIBLE);
- schedule_timeout(HZ);
+ ssleep(1);
}

dprintk(KERN_ERR "Crypto device %s was unregistered.\n",

2005-03-08 01:55:08

by Clemens Fruhwirth

[permalink] [raw]
Subject: Re: [0/many] Acrypto - asynchronous crypto layer for linux kernel 2.6

On Mon, 2005-03-07 at 23:37 +0300, Evgeniy Polyakov wrote:

> I'm pleased to announce asynchronous crypto layer for Linux kernel 2.6.

Thanks Evgeniy for your work! Even though, it's great what's inside, I'm
afraid it will be judged by the form of its presentation. A patch should
be something integral, testable on its own. I think it's not necessary
to package it that fine grained, as it becomes very hard to apply with a
regular mail reader (Saving/Exporting 50 mails is really a bit of a
work).

So, the form is a bit suboptimal. Don't hesitate to put all "acrypto*"
and "arch*" patches in one-large acrypto patch set, and an other for
"bd*". I'd be glad to say something different, but I think acrypto has
not been considered by the maintainers to be merged soon, so patch
splitting doesn't make sense anyway at the moment.

Best Regards,
--
Fruhwirth Clemens - http://clemens.endorphin.org
for robots: [email protected]


Attachments:
signature.asc (189.00 B)
This is a digitally signed message part

2005-03-07 20:52:57

by Evgeniy Polyakov

[permalink] [raw]
Subject: [34/many] arch: parisc config

--- ./arch/parisc/Kconfig~ 2005-03-02 10:38:10.000000000 +0300
+++ ./arch/parisc/Kconfig 2005-03-07 21:28:59.000000000 +0300
@@ -204,4 +204,6 @@

source "crypto/Kconfig"

+source "acrypto/Kconfig"
+
source "lib/Kconfig"

2005-03-07 20:52:56

by Evgeniy Polyakov

[permalink] [raw]
Subject: [2/many] acrypto: Makefile

--- /tmp/empty/Makefile 1970-01-01 03:00:00.000000000 +0300
+++ ./acrypto/Makefile 2005-03-07 21:16:14.000000000 +0300
@@ -0,0 +1,12 @@
+obj-$(CONFIG_ACRYPTO) += acrypto.o
+obj-$(CONFIG_SIMPLE_LB) += simple_lb.o
+obj-$(CONFIG_ASYNC_PROVIDER) += async_provider.o
+
+acrypto-y += crypto_main.o
+acrypto-y += crypto_lb.o
+acrypto-y += crypto_dev.o
+acrypto-y += crypto_conn.o
+acrypto-y += crypto_stat.o
+acrypto-y += crypto_user_direct.o
+acrypto-y += crypto_user_ioctl.o
+acrypto-y += crypto_user.o

2005-03-07 20:52:55

by Evgeniy Polyakov

[permalink] [raw]
Subject: [??/many] iok.c - simple example of the userspace acrypto usage [IOCTL]


#include <sys/types.h>
#include <sys/stat.h>
#include <sys/ioctl.h>

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <fcntl.h>
#include <errno.h>

#include <linux/types.h>

#include "crypto_user.h"
#include "crypto_user_ioctl.h"
#include "crypto_def.h"

#define ulog(f, a...) fprintf(stderr, f, ##a)

int session_add(int fd, void *ptr)
{
int err;

err = ioctl(fd, CRYPTO_SESSION_ADD, ptr);
if (err == -1)
{
ulog("Failed to do CRYPTO_SESSION_ADD: %s [%d].\n", strerror(errno), errno);
return -1;
}

ulog("CRYPTO_SESSION_ADD finished.\n");

return 0;
}
int session_alloc(int fd, struct crypto_user_ioctl *io)
{
int err;

err = ioctl(fd, CRYPTO_SESSION_ALLOC, io);
if (err == -1)
{
ulog("Failed to do CRYPTO_SESSION_ALLOC: %s [%d].\n", strerror(errno), errno);
return -1;
}

ulog("CRYPTO_SESSION_ALLOC finished.\n");

return 0;
}

int fill_data(int fd, unsigned short size, unsigned short type, void *ptr)
{
int err;
struct crypto_user_data *d;
void *data;

data = malloc(size + sizeof(*d));
if (!data)
{
ulog("Failed to allocate %d bytes for CRYPTO_FILL_DATA[%u.%x].\n",
size + sizeof(*d), size, type);
return -ENOMEM;
}

d = (struct crypto_user_data *)data;

d->data_size = size;
d->data_type = type;

memcpy(d+1, ptr, size);

err = ioctl(fd, CRYPTO_FILL_DATA, d);
if (err == -1)
{
ulog("Failed to do CRYPTO_FILL_DATA[%u.%x]: %s [%d].\n",
size, type, strerror(errno), errno);
free(data);
return -1;
}
free(data);

ulog("CRYPTO_FILL_DATA[%u.%x] finished.\n", size, type);

return 0;
}

static void dump_data(unsigned char *ptr, int size)
{
int i;

ulog("IOK DATA: ");
for (i=0; i<size; ++i)
ulog("%02x ", ptr[i]);
ulog("\n");
}

int main(int argc, char *argv[])
{
int fd, err, size;
void *ptr;
unsigned char key[16];
struct crypto_user_ioctl io;

if (argc != 2)
{
ulog("Usage: %s path\n", argv[0]);
return -1;
}

fd = open(argv[1], O_RDWR);
if (fd == -1)
{
ulog("Failed to open file %s: %s [%d].\n", argv[1], strerror(errno), errno);
return -1;
}

size = 2048;

io.operation = CRYPTO_OP_ENCRYPT;
io.type = CRYPTO_TYPE_AES_128;
io.mode = CRYPTO_MODE_ECB;
io.priority = 0;

io.src_size = size;
io.dst_size = size;
io.key_size = 16;
io.iv_size = 0;

err = session_alloc(fd, &io);
if (err)
goto err_out_close_fd;

ptr = malloc(size);
if (!ptr)
{
ulog("Failed to create data.\n");
goto err_out_close_fd;
}
memset(ptr, 0, size);

memset(key, 0, sizeof(key));

err = fill_data(fd, size, CRYPTO_USER_DATA_SRC, ptr);
if (err)
goto err_out_free_ptr;
err = fill_data(fd, size, CRYPTO_USER_DATA_DST, ptr);
if (err)
goto err_out_free_ptr;
err = fill_data(fd, 16, CRYPTO_USER_DATA_KEY, key);
if (err)
goto err_out_free_ptr;

dump_data(ptr, size);

err = session_add(fd, ptr);
if (err)
goto err_out_free_ptr;

dump_data(ptr, size);

err_out_free_ptr:
free(ptr);

err_out_close_fd:
close(fd);

return err;
}

2005-03-07 20:52:52

by Evgeniy Polyakov

[permalink] [raw]
Subject: [7/many] acrypto: crypto_def.h

--- /tmp/empty/crypto_def.h 1970-01-01 03:00:00.000000000 +0300
+++ ./acrypto/crypto_def.h 2005-03-07 20:35:36.000000000 +0300
@@ -0,0 +1,39 @@
+/*
+ * crypto_def.h
+ *
+ * Copyright (c) 2004 Evgeniy Polyakov <[email protected]>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#ifndef __CRYPTO_DEF_H
+#define __CRYPTO_DEF_H
+
+#define CRYPTO_OP_DECRYPT 0
+#define CRYPTO_OP_ENCRYPT 1
+#define CRYPTO_OP_HMAC 2
+
+#define CRYPTO_MODE_ECB 0
+#define CRYPTO_MODE_CBC 1
+#define CRYPTO_MODE_CFB 2
+#define CRYPTO_MODE_OFB 3
+
+#define CRYPTO_TYPE_AES_128 0
+#define CRYPTO_TYPE_AES_192 1
+#define CRYPTO_TYPE_AES_256 2
+#define CRYPTO_TYPE_3DES 3
+
+#endif /* __CRYPTO_DEF_H */

2005-03-07 20:52:54

by Evgeniy Polyakov

[permalink] [raw]
Subject: [4/many] acrypto: async_provider.c

--- /tmp/empty/async_provider.c 1970-01-01 03:00:00.000000000 +0300
+++ ./acrypto/async_provider.c 2005-03-07 21:19:10.000000000 +0300
@@ -0,0 +1,322 @@
+/*
+ * async_provider.c
+ *
+ * Copyright (c) 2004 Evgeniy Polyakov <[email protected]>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/types.h>
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <linux/interrupt.h>
+#include <linux/spinlock.h>
+#include <linux/workqueue.h>
+#include <linux/err.h>
+#include <linux/crypto.h>
+#include <linux/mm.h>
+#include <linux/blkdev.h>
+
+#include "acrypto.h"
+#include "crypto_stat.h"
+#include "crypto_def.h"
+#include "crypto_route.h"
+#include "crypto_user.h"
+
+static unsigned int trnum = 1;
+module_param(trnum, uint, 0);
+
+static void prov_data_ready(struct crypto_device *);
+
+static struct crypto_capability prov_caps[] = {
+ {CRYPTO_OP_ENCRYPT, CRYPTO_TYPE_AES_128, CRYPTO_MODE_ECB, 100},
+ {CRYPTO_OP_DECRYPT, CRYPTO_TYPE_AES_128, CRYPTO_MODE_ECB, 100},
+
+ {CRYPTO_OP_ENCRYPT, CRYPTO_TYPE_AES_128, CRYPTO_MODE_CBC, 100},
+ {CRYPTO_OP_DECRYPT, CRYPTO_TYPE_AES_128, CRYPTO_MODE_CBC, 100},
+};
+static int prov_cap_number = sizeof(prov_caps)/sizeof(prov_caps[0]);
+
+static int need_exit;
+static char async_algo[] = "aes";
+static char async_key[] = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00};
+
+static struct crypto_device pdev = {
+ .name = "async_provider",
+ .data_ready = prov_data_ready,
+ .cap = &prov_caps[0],
+};
+
+static struct async_provider
+{
+ int num;
+ struct crypto_tfm *tfm;
+ wait_queue_head_t async_wait_queue;
+ struct completion thread_exited;
+ struct crypto_device pdev;
+} *aprov;
+
+static void prov_data_ready(struct crypto_device *dev)
+{
+ struct async_provider *p;
+
+ p = (struct async_provider *)dev->priv;
+
+ if (p)
+ wake_up_interruptible(&p->async_wait_queue);
+}
+
+static int async_thread(void *data)
+{
+ struct async_provider *p = (struct async_provider *)data;
+ struct crypto_device *dev = &p->pdev;
+ struct crypto_session *s, *n;
+ int i, err, keylen, ivlen;
+ u8 *key, *iv;
+ int zero_pnum_cnt = 0;
+
+ daemonize("%s", dev->name);
+ allow_signal(SIGTERM);
+
+ while (!need_exit) {
+ int num, pnum;
+
+ if (need_exit)
+ break;
+
+ num = pnum = 0;
+ list_for_each_entry_safe(s, n, &dev->session_list, dev_queue_entry) {
+ num++;
+
+ if (session_completed(s))
+ continue;
+
+ pnum++;
+
+ start_process_session(s);
+
+ if (s->data.sg_src_num != s->data.sg_dst_num) {
+ dprintk("Broken session: sg_src_num [%d] != sg_dst_num [%d].\n",
+ s->data.sg_src_num, s->data.sg_dst_num);
+ broke_session(s);
+ goto out;
+ }
+
+ /*
+ * Simple case - key is small(it's size is less than PAGE_SIZE).
+ * Assymetric crypto will require proper key sg handling.
+ */
+ key = kmap(s->data.sg_key[0].page) + s->data.sg_key[0].offset;
+ keylen = s->data.sg_key[0].length;
+
+ err = crypto_cipher_setkey(p->tfm, key, keylen);
+ if (err) {
+ dprintk(KERN_ERR "Failed to set key [keylen=%d]: err=%d.\n", keylen, err);
+ broke_session(s);
+ goto out;
+ }
+
+ if (s->ci.mode != CRYPTO_MODE_ECB) {
+ if (!s->data.sg_iv || !s->data.sg_iv_num) {
+ dprintk("Crypto mode %d requires IV.\n", s->ci.mode);
+ broke_session(s);
+ goto out;
+ }
+
+ iv = kmap(s->data.sg_iv[0].page) + s->data.sg_iv[0].offset;
+ ivlen = s->data.sg_iv[0].length;
+
+ if (!iv || !ivlen) {
+ dprintk("Crypto mode %d requires IV, whic is broken: iv=%p, ivlen=%d.\n",
+ s->ci.mode, iv, ivlen);
+ broke_session(s);
+ goto out;
+ }
+
+ crypto_cipher_set_iv(p->tfm, iv, ivlen);
+ } else {
+ iv = NULL;
+ ivlen = 0;
+ }
+
+ for (i=0; i<s->data.sg_src_num; ++i) {
+ u8 *dst, *src;
+ int len;
+
+ dst = kmap_atomic(s->data.sg_dst[i].page, KM_USER0) + s->data.sg_dst[i].offset;
+ src = kmap_atomic(s->data.sg_src[i].page, KM_USER1) + s->data.sg_src[i].offset;
+ len = s->data.sg_src[i].length;
+
+ if (s->ci.operation == CRYPTO_OP_ENCRYPT)
+ err = crypto_cipher_encrypt(p->tfm, &s->data.sg_dst[i], &s->data.sg_src[i], s->data.sg_src[i].length);
+ else
+ err = crypto_cipher_decrypt(p->tfm, &s->data.sg_dst[i], &s->data.sg_src[i], s->data.sg_src[i].length);
+
+ kunmap_atomic(src, KM_USER1);
+ kunmap_atomic(dst, KM_USER0);
+
+ s->data.sg_dst[i].length = s->data.sg_src[i].length;
+ s->data.sg_dst[i].offset = s->data.sg_src[i].offset;
+
+ if (err < 0) {
+ broke_session(s);
+ printk("operation=%02x, size=%u, err=%d.\n", s->ci.operation, s->data.sg_src[i].length, err);
+ }
+ }
+
+ kunmap(s->data.sg_key[0].page);
+
+ if (iv)
+ kunmap(s->data.sg_iv[0].page);
+
+ dprintk("%lu: Completing session %llu [%llu] in %s.\n",
+ jiffies, s->ci.id, s->ci.dev_id, pdev.name);
+out:
+ crypto_stat_complete_inc(s);
+ crypto_session_dequeue_route(s);
+ complete_session(s);
+ stop_process_session(s);
+ }
+
+ if (!pnum)
+ zero_pnum_cnt++;
+ else
+ zero_pnum_cnt = 0;
+
+ if (unlikely(zero_pnum_cnt == 1000)) {
+ zero_pnum_cnt = 0;
+ interruptible_sleep_on_timeout(&p->async_wait_queue, 10);
+ }
+ }
+
+ complete_and_exit(&p->thread_exited, 0);
+}
+
+static int prov_init_one(struct async_provider *p)
+{
+ int pid, err;
+
+ init_waitqueue_head(&p->async_wait_queue);
+
+ p->tfm = crypto_alloc_tfm(async_algo, CRYPTO_TFM_MODE_CBC);
+ if (!p->tfm) {
+ dprintk(KERN_ERR "Failed to allocate %d's %s tfm.\n", p->num, async_algo);
+ return -EINVAL;
+ }
+
+ err = crypto_cipher_setkey(p->tfm, async_key, sizeof(async_key));
+ if (err) {
+ dprintk("Failed to set key [keylen=%d]: err=%d.\n",
+ sizeof(async_key), err);
+ goto err_out_free_tfm;
+ }
+
+ init_completion(&p->thread_exited);
+ pid = kernel_thread(async_thread, p, CLONE_FS | CLONE_FILES);
+ if (IS_ERR((void *)pid)) {
+ err = -EINVAL;
+ dprintk(KERN_ERR "Failed to create kernel load balancing thread.\n");
+ goto err_out_free_tfm;
+ }
+
+ memcpy(&p->pdev, &pdev, sizeof(pdev));
+ snprintf(p->pdev.name, sizeof(p->pdev.name), "async_provider%d", p->num);
+
+ p->pdev.cap_number = prov_cap_number;
+ p->pdev.priv = p;
+
+ err = crypto_device_add(&p->pdev);
+ if (err)
+ goto err_out_remove_thread;
+
+ return 0;
+
+err_out_remove_thread:
+ need_exit = 1;
+ wake_up(&p->async_wait_queue);
+ wait_for_completion(&p->thread_exited);
+err_out_free_tfm:
+ crypto_free_tfm(p->tfm);
+
+ return err;
+}
+
+static void prov_fini_one(struct async_provider *p)
+{
+ crypto_device_remove(&p->pdev);
+ need_exit = 1;
+ wake_up(&p->async_wait_queue);
+ wait_for_completion(&p->thread_exited);
+
+ crypto_free_tfm(p->tfm);
+}
+
+int prov_init(void)
+{
+ int err, i;
+
+ aprov = kmalloc(trnum * sizeof(struct async_provider), GFP_KERNEL);
+ if (!aprov) {
+ dprintk(KERN_ERR "Failed to allocate %d async_provider pointers.\n", trnum);
+ return -ENOMEM;
+ }
+
+ memset(aprov, 0, trnum * sizeof(struct async_provider));
+
+ for (i=0; i<trnum; ++i) {
+ aprov[i].num = i;
+
+ err = prov_init_one(&aprov[i]);
+ if (err)
+ goto err_out_fini_one;
+ }
+
+ dprintk(KERN_INFO "Test crypto provider module %s is loaded for %d processors.\n",
+ pdev.name, trnum);
+
+ return 0;
+
+err_out_fini_one:
+ i--;
+ while (i >= 0)
+ prov_fini_one(&aprov[i]);
+
+ kfree(aprov);
+
+ return err;
+}
+
+void prov_fini(void)
+{
+ int i;
+
+ for (i=0; i<trnum; ++i)
+ prov_fini_one(&aprov[i]);
+
+ kfree(aprov);
+
+ dprintk(KERN_INFO "Test crypto provider module %s is unloaded.\n", pdev.name);
+}
+
+module_init(prov_init);
+module_exit(prov_fini);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Evgeniy Polyakov <[email protected]>");
+MODULE_DESCRIPTION("Test crypto module provider.");

2005-03-07 20:52:46

by Evgeniy Polyakov

[permalink] [raw]
Subject: [43/many] arch: v850 config

--- ./arch/v850/Kconfig~ 2005-03-02 10:38:08.000000000 +0300
+++ ./arch/v850/Kconfig 2005-03-07 21:31:12.000000000 +0300
@@ -311,6 +311,8 @@

source "crypto/Kconfig"

+source "acrypto/Kconfig"
+
source "lib/Kconfig"

#############################################################################

2005-03-07 20:38:48

by Evgeniy Polyakov

[permalink] [raw]
Subject: [15/many] acrypto: crypto_user.c

--- /tmp/empty/crypto_user.c 1970-01-01 03:00:00.000000000 +0300
+++ ./acrypto/crypto_user.c 2005-03-07 20:35:36.000000000 +0300
@@ -0,0 +1,196 @@
+/*
+ * crypto_user.c
+ *
+ * Copyright (c) 2004 Evgeniy Polyakov <[email protected]>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/types.h>
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/vmalloc.h>
+#include <linux/sched.h>
+#include <linux/mm.h>
+#include <linux/pagemap.h>
+
+#include "acrypto.h"
+#include "crypto_user.h"
+
+static struct scatterlist *crypto_user_alloc_sg(int sg_num)
+{
+ struct scatterlist *sg;
+
+ sg = kmalloc(sg_num * sizeof(*sg), GFP_ATOMIC);
+ if (!sg) {
+ dprintk("Failed to allocate %d scatterlist structures.\n", sg_num);
+ return NULL;
+ }
+
+ memset(sg, 0, sizeof(*sg) * sg_num);
+
+ return sg;
+}
+
+static void crypto_user_free_sg(struct scatterlist *sg)
+{
+ kfree(sg);
+}
+
+int crypto_user_alloc_crypto_data(struct crypto_data *data, int src_size, int dst_size, int key_size, int iv_size)
+{
+ int sg_num;
+
+ if ( (src_size > MAX_DATA_SIZE * PAGE_SIZE) ||
+ (dst_size > MAX_DATA_SIZE * PAGE_SIZE) ||
+ (key_size > MAX_DATA_SIZE * PAGE_SIZE) ||
+ (iv_size > MAX_DATA_SIZE * PAGE_SIZE)) {
+ dprintk("Sizes are too big: src=%u, dst=%u, key=%u, iv=%u, max=%u.\n",
+ src_size, dst_size, key_size, iv_size, MAX_DATA_SIZE);
+ return -EINVAL;
+
+ }
+
+ sg_num = ALIGN_DATA_SIZE(src_size) / PAGE_SIZE;
+ data->sg_src = crypto_user_alloc_sg(sg_num);
+ if (!data->sg_src)
+ goto err_out_exit;
+ data->sg_src_num = sg_num;
+
+ sg_num = ALIGN_DATA_SIZE(dst_size) / PAGE_SIZE;
+ data->sg_dst = crypto_user_alloc_sg(sg_num);
+ if (!data->sg_dst)
+ goto err_out_free_src;
+ data->sg_dst_num = sg_num;
+
+ sg_num = ALIGN_DATA_SIZE(key_size) / PAGE_SIZE;
+ data->sg_key = crypto_user_alloc_sg(sg_num);
+ if (!data->sg_key)
+ goto err_out_free_dst;
+ data->sg_key_num = sg_num;
+
+ sg_num = ALIGN_DATA_SIZE(iv_size) / PAGE_SIZE;
+ data->sg_iv = crypto_user_alloc_sg(sg_num);
+ if (!data->sg_iv)
+ goto err_out_free_key;
+ data->sg_iv_num = sg_num;
+
+ return 0;
+
+err_out_free_key:
+ crypto_user_free_sg(data->sg_key);
+err_out_free_dst:
+ crypto_user_free_sg(data->sg_dst);
+err_out_free_src:
+ crypto_user_free_sg(data->sg_src);
+err_out_exit:
+
+ return -ENOMEM;
+}
+
+void crypto_user_free_crypto_data(struct crypto_data *data)
+{
+ crypto_user_free_sg(data->sg_src);
+ crypto_user_free_sg(data->sg_dst);
+ crypto_user_free_sg(data->sg_key);
+ crypto_user_free_sg(data->sg_iv);
+}
+
+void crypto_user_fill_sg(void *ptr, u16 size, struct scatterlist *sg)
+{
+ int i, sg_num;
+
+ sg_num = ALIGN_DATA_SIZE(size) / PAGE_SIZE;
+
+ dprintk("Filling %d sgs, total size %u: ", sg_num, size);
+
+ for (i=0; i<sg_num; ++i) {
+ sg[i].page = virt_to_page(ptr);
+ if (i == 0) {
+ sg[i].offset = offset_in_page(ptr);
+ sg[i].length = ALIGN_DATA_SIZE((unsigned long)ptr) - (unsigned long)ptr;
+ if (sg[i].length == 0)
+ sg[i].length = PAGE_SIZE;
+ if (sg[i].length > size)
+ sg[i].length = size;
+ } else {
+ sg[i].offset = 0;
+ sg[i].length = (i != sg_num-1)?PAGE_SIZE:size;
+ }
+ dprintka("%x.%x.%p.%lx ", sg[i].offset, sg[i].length, ptr, ALIGN_DATA_SIZE((unsigned long)ptr));
+
+ size -= sg[i].length;
+ ptr += sg[i].length;
+ }
+ dprintka("\n");
+}
+
+struct scatterlist *crypto_user_get_sg(struct crypto_user_data *ud, struct crypto_data *data)
+{
+ struct scatterlist *sg = NULL;
+ int inval = 0;
+
+ switch (ud->data_type) {
+ case CRYPTO_USER_DATA_SRC:
+ sg = data->sg_src;
+ inval = (data->sg_src_num * PAGE_SIZE < ud->data_size);
+ dprintk("Found SRC data type, inval=%d, size=%u.\n", inval, ud->data_size);
+ break;
+ case CRYPTO_USER_DATA_DST:
+ sg = data->sg_dst;
+ inval = (data->sg_dst_num * PAGE_SIZE < ud->data_size);
+ dprintk("Found DST data type, inval=%d, size=%u.\n", inval, ud->data_size);
+ break;
+ case CRYPTO_USER_DATA_KEY:
+ sg = data->sg_key;
+ inval = (data->sg_key_num * PAGE_SIZE < ud->data_size);
+ dprintk("Found KEY data type, inval=%d, size=%u.\n", inval, ud->data_size);
+ break;
+ case CRYPTO_USER_DATA_IV:
+ sg = data->sg_iv;
+ inval = (data->sg_iv_num * PAGE_SIZE < ud->data_size);
+ dprintk("Found IV data type, inval=%d, size=%u.\n", inval, ud->data_size);
+ break;
+ default:
+ dprintk("Unknown data type 0x%x, size=%u.\n", ud->data_type, ud->data_size);
+ break;
+ }
+
+ return (inval)?NULL:sg;
+}
+
+int crypto_user_fill_sg_data(struct crypto_user_data *ud, struct crypto_data *data, void *ptr)
+{
+ struct scatterlist *sg;
+
+ sg = crypto_user_get_sg(ud, data);
+ if (!sg)
+ return -EINVAL;
+
+ crypto_user_fill_sg(ptr, ud->data_size, sg);
+
+ return 0;
+}
+
+EXPORT_SYMBOL_GPL(crypto_user_alloc_crypto_data);
+EXPORT_SYMBOL_GPL(crypto_user_free_crypto_data);
+EXPORT_SYMBOL_GPL(crypto_user_fill_sg);
+EXPORT_SYMBOL_GPL(crypto_user_fill_sg_data);
+EXPORT_SYMBOL_GPL(crypto_user_get_sg);

2005-03-08 02:43:16

by Evgeniy Polyakov

[permalink] [raw]
Subject: [4/5] bd: script for binding file and acrypto filters

#!/bin/sh

num=$#

if [ $num != 2 ]; then
echo "Usage: $0 device backend_file"
exit -1
fi

dev=$1
file=$2

./ubd bind dev /dev/bd0 filter acrypto cipher aes128 mode ecb priority 123 key 00000000000000000000000000000000 iv 00
#./ubd bind dev $dev filter xor
./ubd bind dev $dev filter fd file $file

2005-03-08 02:43:16

by Evgeniy Polyakov

[permalink] [raw]
Subject: [32/many] arch: m68knommu config

--- ./arch/m68knommu/Kconfig~ 2005-03-02 10:38:13.000000000 +0300
+++ ./arch/m68knommu/Kconfig 2005-03-07 21:28:24.000000000 +0300
@@ -578,4 +578,6 @@

source "crypto/Kconfig"

+source "acrypto/Kconfig"
+
source "lib/Kconfig"

2005-03-08 02:43:15

by Evgeniy Polyakov

[permalink] [raw]
Subject: [25/many] arch: cris config

--- ./arch/cris/Kconfig~ 2005-03-02 10:38:26.000000000 +0300
+++ ./arch/cris/Kconfig 2005-03-07 21:26:37.000000000 +0300
@@ -177,4 +177,6 @@

source "crypto/Kconfig"

+source "acrypto/Kconfig"
+
source "lib/Kconfig"

2005-03-08 02:43:15

by Evgeniy Polyakov

[permalink] [raw]
Subject: [21/many] acrypto: simple_lb.c

--- /tmp/empty/simple_lb.c 1970-01-01 03:00:00.000000000 +0300
+++ ./acrypto/simple_lb.c 2005-03-07 20:35:36.000000000 +0300
@@ -0,0 +1,88 @@
+/*
+ * simple_lb.c
+ *
+ * Copyright (c) 2004 Evgeniy Polyakov <[email protected]>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/types.h>
+#include <linux/list.h>
+#include <linux/slab.h>
+
+#include "crypto_lb.h"
+
+static void simple_lb_rehash(struct crypto_lb *);
+static struct crypto_device *simple_lb_find_device(struct crypto_lb *,
+ struct crypto_session_initializer *,
+ struct crypto_data *);
+
+struct crypto_lb simple_lb = {
+ .name = "simple_lb",
+ .rehash = simple_lb_rehash,
+ .find_device = simple_lb_find_device
+};
+
+static void simple_lb_rehash(struct crypto_lb *lb)
+{
+}
+
+static struct crypto_device *simple_lb_find_device(struct crypto_lb *lb,
+ struct crypto_session_initializer *ci,
+ struct crypto_data *data)
+{
+ struct crypto_device *dev, *__dev;
+ int min = 0x7ffffff;
+
+ __dev = NULL;
+ list_for_each_entry(dev, lb->crypto_device_list, cdev_entry) {
+ if (device_broken(dev))
+ continue;
+ if (!match_initializer(dev, ci))
+ continue;
+
+ if (atomic_read(&dev->refcnt) < min) {
+ min = atomic_read(&dev->refcnt);
+ __dev = dev;
+ }
+ }
+
+ return __dev;
+}
+
+int __devinit simple_lb_init(void)
+{
+ dprintk(KERN_INFO "Registering simple crypto load balancer.\n");
+
+ return crypto_lb_register(&simple_lb, 1, 1);
+}
+
+void __devexit simple_lb_fini(void)
+{
+ dprintk(KERN_INFO "Unregistering simple crypto load balancer.\n");
+
+ crypto_lb_unregister(&simple_lb);
+}
+
+module_init(simple_lb_init);
+module_exit(simple_lb_fini);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Evgeniy Polyakov <[email protected]>");
+MODULE_DESCRIPTION("Simple crypto load balancer.");

2005-03-08 02:43:13

by Evgeniy Polyakov

[permalink] [raw]
Subject: [19/many] acrypto: crypto_user_ioctl.c

--- /tmp/empty/crypto_user_ioctl.c 1970-01-01 03:00:00.000000000 +0300
+++ ./acrypto/crypto_user_ioctl.c 2005-03-07 20:35:36.000000000 +0300
@@ -0,0 +1,281 @@
+/*
+ * crypto_user_ioctl.c
+ *
+ * Copyright (c) 2004 Evgeniy Polyakov <[email protected]>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/types.h>
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/string.h>
+#include <linux/fs.h>
+
+#include <asm/uaccess.h>
+
+#include "acrypto.h"
+#include "crypto_user.h"
+#include "crypto_user_ioctl.h"
+
+static int crypto_user_ioctl_ioctl(struct inode *inode, struct file *fp, unsigned int cmd, unsigned long arg);
+static int crypto_user_ioctl_open(struct inode *inode, struct file *fp);
+static int crypto_user_ioctl_release(struct inode *pinode, struct file *fp);
+int crypto_user_ioctl_init(void);
+void crypto_user_ioctl_fini(void);
+
+static int crypto_user_ioctl_major = 0;
+static char crypto_user_ioctl_name[] = "crypto_user_ioctl";
+
+static struct file_operations crypto_user_ioctl_ops = {
+ .open crypto_user_ioctl_open,
+ .release crypto_user_ioctl_release,
+ .ioctl crypto_user_ioctl_ioctl,
+ .owner THIS_MODULE
+};
+
+static void dump_data(u8 *ptr)
+{
+ int i;
+
+ dprintk("USER DATA: ");
+ for (i=0; i<32; ++i)
+ dprintka("%02x ", ptr[i]);
+ dprintka("\n");
+}
+
+static int crypto_user_ioctl_open(struct inode *inode, struct file *fp)
+{
+ struct crypto_user_ioctl_kern *iok;
+
+ iok = kmalloc(sizeof(*iok), GFP_KERNEL);
+ if (!iok) {
+ dprintk("Failed to allocate new crypto_user_ioctl_kern structure.\n");
+ return -ENOMEM;
+ }
+ memset(iok, 0, sizeof(*iok));
+
+ fp->private_data = iok;
+
+ return 0;
+}
+
+static int crypto_user_ioctl_release(struct inode *pinode, struct file *fp)
+{
+ struct crypto_user_ioctl_kern *iok = fp->private_data;
+ int i;
+
+ for (i=0; i<4; ++i)
+ if (iok->ptr[i])
+ kfree(iok->ptr[i]);
+ kfree(iok);
+
+ return 0;
+}
+
+static void crypto_user_ioctl_callback(struct crypto_session_initializer *ci, struct crypto_data *data)
+{
+ struct crypto_user_ioctl_kern *iok = data->priv;
+
+ dprintk("%s() for session %llu [%llu].\n",
+ __func__, iok->s->ci.id, iok->s->ci.dev_id);
+
+ crypto_user_free_crypto_data(&iok->data);
+
+ iok->scompleted = 1;
+ wake_up_interruptible(&iok->wait);
+}
+
+static int crypto_user_ioctl_session_alloc(struct crypto_user_ioctl *io, struct crypto_user_ioctl_kern *iok)
+{
+ int err;
+
+ err = crypto_user_alloc_crypto_data(&iok->data, io->src_size, io->dst_size, io->key_size, io->iv_size);
+ if (err)
+ return err;
+
+ iok->ci.operation = io->operation;
+ iok->ci.type = io->type;
+ iok->ci.mode = io->mode;
+ iok->ci.priority = io->priority;
+ iok->ci.callback = crypto_user_ioctl_callback;
+
+ iok->data.priv = iok;
+ iok->data.priv_size = 0;
+
+ iok->scompleted = 0;
+
+ init_waitqueue_head(&iok->wait);
+
+ iok->s = crypto_session_create(&iok->ci, &iok->data);
+ if (!iok->s) {
+ crypto_user_free_crypto_data(&iok->data);
+ return -ENODEV;
+ }
+
+ return 0;
+}
+
+static int crypto_user_ioctl_session_add(struct crypto_user_ioctl_kern *iok)
+{
+ crypto_session_add(iok->s);
+
+ return 0;
+}
+
+static int crypto_user_ioctl_ioctl(struct inode *inode, struct file *fp, unsigned int cmd, unsigned long arg)
+{
+ struct crypto_user_ioctl io;
+ struct crypto_user_data data;
+ unsigned long not_read;
+ int err;
+ struct crypto_user_ioctl_kern *iok;
+
+ iok = fp->private_data;
+
+ err = 0;
+ switch (cmd) {
+ case CRYPTO_SESSION_ALLOC:
+ not_read = copy_from_user(&io, (void __user *)arg, sizeof(io));
+ if (not_read) {
+ dprintk("Failed to read crypto_user_ioctl structure from userspace.\n");
+ err = -EINVAL;
+ break;
+ }
+
+ err = crypto_user_ioctl_session_alloc(&io, iok);
+ break;
+ case CRYPTO_FILL_DATA:
+ not_read = copy_from_user(&data, (void __user *)arg, sizeof(data));
+ if (not_read) {
+ dprintk("Failed to read crypto_user_ioctl_data structure from userspace.\n");
+ err = -EINVAL;
+ break;
+ }
+
+ if (data.data_size > MAX_DATA_SIZE * PAGE_SIZE) {
+ dprintk("Data size is too bit: size=%u, type=%x.\n",
+ data.data_size, data.data_type);
+ err = -EINVAL;
+ break;
+ }
+
+ if (!crypto_user_get_sg(&data, &iok->data)) {
+ dprintk("Invalid crypto_user_data structure [size=%u, type=%x].\n",
+ data.data_size, data.data_type);
+ err = -EINVAL;
+ break;
+ }
+
+ if (iok->ptr[data.data_type])
+ kfree(iok->ptr[data.data_type]);
+
+ iok->ptr[data.data_type] = kmalloc(data.data_size, GFP_KERNEL);
+ if (!iok->ptr[data.data_type]) {
+ dprintk("Failed to allocate %d bytes for data type %d.\n",
+ data.data_size, data.data_type);
+ err = -ENOMEM;
+ break;
+ }
+
+ not_read = copy_from_user(iok->ptr[data.data_type], (void __user *)arg + sizeof(data), data.data_size);
+ if (not_read) {
+ dprintk("Failed to read %d bytes of crypto data [type=%d] from userspace.\n",
+ data.data_size, data.data_type);
+ kfree(iok->ptr[data.data_type]);
+ err = -EINVAL;
+ break;
+ }
+
+ memcpy(&iok->usr[data.data_type], &data, sizeof(struct crypto_user_data));
+
+ err = crypto_user_fill_sg_data(&data, &iok->data, iok->ptr[data.data_type]);
+ if (err) {
+ kfree(iok->ptr[data.data_type]);
+ break;
+ }
+ break;
+ case CRYPTO_SESSION_ADD:
+ if (!iok->s) {
+ dprintk("CRYPTO_SESSION_ADD must be called after session initialisation.\n");
+ err = -EINVAL;
+ break;
+ }
+
+ err = crypto_user_ioctl_session_add(iok);
+ if (err)
+ break;
+
+ wait_event_interruptible(iok->wait, iok->scompleted);
+
+ dump_data(iok->ptr[CRYPTO_USER_DATA_DST]);
+
+ not_read = copy_to_user((void __user *)arg, iok->ptr[CRYPTO_USER_DATA_DST], iok->usr[CRYPTO_USER_DATA_DST].data_size);
+ if (not_read) {
+ dprintk("Failed to copy to user %d bytes of result.\n", iok->usr[CRYPTO_USER_DATA_DST].data_size);
+ err = -EINVAL;
+ break;
+ }
+ break;
+
+ default:
+ dprintk("Invalid ioctl(0x%x).\n", cmd);
+ err = -ENODEV;
+ break;
+ }
+
+ return err;
+}
+
+static ssize_t crypto_user_ioctl_dev_show(struct class_device *dev, char *buf)
+{
+ return sprintf(buf, "%u:%u\n", crypto_user_ioctl_major, 0);
+}
+
+extern struct crypto_device main_crypto_device;
+static CLASS_DEVICE_ATTR(dev, 0644, crypto_user_ioctl_dev_show, NULL);
+
+int crypto_user_ioctl_init(void)
+{
+ struct crypto_device *dev = &main_crypto_device;
+ int err;
+
+ crypto_user_ioctl_major = register_chrdev(0, crypto_user_ioctl_name, &crypto_user_ioctl_ops);
+ if (crypto_user_ioctl_major < 0) {
+ dprintk("Failed to register %s char device: err=%d.\n", crypto_user_ioctl_name, crypto_user_ioctl_major);
+ return -ENODEV;
+ };
+
+ err = class_device_create_file(&dev->class_device, &class_device_attr_dev);
+ if (err)
+ dprintk("Failed to create \"dev\" attribute: err=%d.\n", err);
+
+ printk("Asynchronous crypto userspace helper(ioctl based) has been started, major=%d.\n", crypto_user_ioctl_major);
+
+ return 0;
+}
+
+void crypto_user_ioctl_fini(void)
+{
+ struct crypto_device *dev = &main_crypto_device;
+
+ class_device_remove_file(&dev->class_device, &class_device_attr_dev);
+ unregister_chrdev(crypto_user_ioctl_major, crypto_user_ioctl_name);
+}

2005-03-08 02:43:11

by Evgeniy Polyakov

[permalink] [raw]
Subject: [9/many] acrypto: crypto_lb.c

--- /tmp/empty/crypto_lb.c 1970-01-01 03:00:00.000000000 +0300
+++ ./acrypto/crypto_lb.c 2005-03-07 20:35:36.000000000 +0300
@@ -0,0 +1,634 @@
+/*
+ * crypto_lb.c
+ *
+ * Copyright (c) 2004 Evgeniy Polyakov <[email protected]>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/types.h>
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <linux/interrupt.h>
+#include <linux/spinlock.h>
+#include <linux/workqueue.h>
+#include <linux/err.h>
+
+#include "acrypto.h"
+#include "crypto_lb.h"
+#include "crypto_stat.h"
+#include "crypto_route.h"
+
+static LIST_HEAD(crypto_lb_list);
+static spinlock_t crypto_lb_lock = SPIN_LOCK_UNLOCKED;
+static int lb_num = 0;
+static struct crypto_lb *current_lb, *default_lb;
+static struct completion thread_exited;
+static int need_exit;
+static struct workqueue_struct *crypto_lb_queue;
+static DECLARE_WAIT_QUEUE_HEAD(crypto_lb_wait_queue);
+
+extern struct list_head *crypto_device_list;
+extern spinlock_t *crypto_device_lock;
+
+extern int force_lb_remove;
+extern struct crypto_device main_crypto_device;
+
+static int lb_is_current(struct crypto_lb *l)
+{
+ return (l->crypto_device_list != NULL && l->crypto_device_lock != NULL);
+}
+
+static int lb_is_default(struct crypto_lb *l)
+{
+ return (l == default_lb);
+}
+
+static void __lb_set_current(struct crypto_lb *l)
+{
+ struct crypto_lb *c = current_lb;
+
+ if (c) {
+ l->crypto_device_list = crypto_device_list;
+ l->crypto_device_lock = crypto_device_lock;
+ current_lb = l;
+ c->crypto_device_list = NULL;
+ c->crypto_device_lock = NULL;
+ } else {
+ l->crypto_device_list = crypto_device_list;
+ l->crypto_device_lock = crypto_device_lock;
+ current_lb = l;
+ }
+}
+
+static void lb_set_current(struct crypto_lb *l)
+{
+ struct crypto_lb *c = current_lb;
+
+ if (c) {
+ spin_lock_irq(&c->lock);
+ __lb_set_current(l);
+ spin_unlock_irq(&c->lock);
+ } else
+ __lb_set_current(l);
+}
+
+static void __lb_set_default(struct crypto_lb *l)
+{
+ default_lb = l;
+}
+
+static void lb_set_default(struct crypto_lb *l)
+{
+ struct crypto_lb *c = default_lb;
+
+ if (c) {
+ spin_lock_irq(&c->lock);
+ __lb_set_default(l);
+ spin_unlock_irq(&c->lock);
+ } else
+ __lb_set_default(l);
+}
+
+static int crypto_lb_match(struct device *dev, struct device_driver *drv)
+{
+ return 1;
+}
+
+static int crypto_lb_probe(struct device *dev)
+{
+ return -ENODEV;
+}
+
+static int crypto_lb_remove(struct device *dev)
+{
+ return 0;
+}
+
+static void crypto_lb_release(struct device *dev)
+{
+ struct crypto_lb *d = container_of(dev, struct crypto_lb, device);
+
+ complete(&d->dev_released);
+}
+
+static void crypto_lb_class_release(struct class *class)
+{
+}
+
+static void crypto_lb_class_release_device(struct class_device *class_dev)
+{
+}
+
+struct class crypto_lb_class = {
+ .name = "crypto_lb",
+ .class_release = crypto_lb_class_release,
+ .release = crypto_lb_class_release_device
+};
+
+struct bus_type crypto_lb_bus_type = {
+ .name = "crypto_lb",
+ .match = crypto_lb_match
+};
+
+struct device_driver crypto_lb_driver = {
+ .name = "crypto_lb_driver",
+ .bus = &crypto_lb_bus_type,
+ .probe = crypto_lb_probe,
+ .remove = crypto_lb_remove,
+};
+
+struct device crypto_lb_dev = {
+ .parent = NULL,
+ .bus = &crypto_lb_bus_type,
+ .bus_id = "crypto load balancer",
+ .driver = &crypto_lb_driver,
+ .release = &crypto_lb_release
+};
+
+static ssize_t name_show(struct class_device *dev, char *buf)
+{
+ struct crypto_lb *lb = container_of(dev, struct crypto_lb, class_device);
+
+ return sprintf(buf, "%s\n", lb->name);
+}
+
+static ssize_t current_show(struct class_device *dev, char *buf)
+{
+ struct crypto_lb *lb;
+ int off = 0;
+
+ spin_lock_irq(&crypto_lb_lock);
+
+ list_for_each_entry(lb, &crypto_lb_list, lb_entry) {
+ if (lb_is_current(lb))
+ off += sprintf(buf + off, "[");
+ if (lb_is_default(lb))
+ off += sprintf(buf + off, "(");
+ off += sprintf(buf + off, "%s", lb->name);
+ if (lb_is_default(lb))
+ off += sprintf(buf + off, ")");
+ if (lb_is_current(lb))
+ off += sprintf(buf + off, "]");
+ }
+
+ spin_unlock_irq(&crypto_lb_lock);
+
+ if (!off)
+ off = sprintf(buf, "No load balancers regitered yet.");
+
+ off += sprintf(buf + off, "\n");
+
+ return off;
+}
+static ssize_t current_store(struct class_device *dev, const char *buf, size_t count)
+{
+ struct crypto_lb *lb;
+
+ spin_lock_irq(&crypto_lb_lock);
+
+ list_for_each_entry(lb, &crypto_lb_list, lb_entry) {
+ if (count == strlen(lb->name) && !strcmp(buf, lb->name)) {
+ lb_set_current(lb);
+ lb_set_default(lb);
+
+ dprintk(KERN_INFO "Load balancer %s is set as current and default.\n",
+ lb->name);
+
+ break;
+ }
+ }
+ spin_unlock_irq(&crypto_lb_lock);
+
+ return count;
+}
+
+static CLASS_DEVICE_ATTR(name, 0444, name_show, NULL);
+CLASS_DEVICE_ATTR(lbs, 0644, current_show, current_store);
+
+static void create_device_attributes(struct crypto_lb *lb)
+{
+ class_device_create_file(&lb->class_device, &class_device_attr_name);
+}
+
+static void remove_device_attributes(struct crypto_lb *lb)
+{
+ class_device_remove_file(&lb->class_device, &class_device_attr_name);
+}
+
+static int compare_lb(struct crypto_lb *l1, struct crypto_lb *l2)
+{
+ if (!strncmp(l1->name, l2->name, sizeof(l1->name)))
+ return 1;
+
+ return 0;
+}
+
+void crypto_lb_rehash(void)
+{
+ if (!current_lb)
+ return;
+
+ spin_lock_irq(&current_lb->lock);
+
+ current_lb->rehash(current_lb);
+
+ spin_unlock_irq(&current_lb->lock);
+
+ wake_up_interruptible(&crypto_lb_wait_queue);
+}
+
+struct crypto_device *crypto_lb_find_device(struct crypto_session_initializer *ci, struct crypto_data *data)
+{
+ struct crypto_device *dev;
+
+ if (!current_lb)
+ return NULL;
+
+ if (sci_binded(ci)) {
+ int found = 0;
+
+ spin_lock_irq(crypto_device_lock);
+
+ list_for_each_entry(dev, crypto_device_list, cdev_entry) {
+ if (dev->id == ci->bdev) {
+ found = 1;
+ break;
+ }
+ }
+
+ spin_unlock_irq(crypto_device_lock);
+
+ return (found) ? dev : NULL;
+ }
+
+ spin_lock_irq(&current_lb->lock);
+
+ current_lb->rehash(current_lb);
+
+ spin_lock(crypto_device_lock);
+
+ dev = current_lb->find_device(current_lb, ci, data);
+ if (dev)
+ crypto_device_get(dev);
+
+ spin_unlock(crypto_device_lock);
+
+ spin_unlock_irq(&current_lb->lock);
+
+ wake_up_interruptible(&crypto_lb_wait_queue);
+
+ return dev;
+}
+
+static int __crypto_lb_register(struct crypto_lb *lb)
+{
+ int err;
+
+ spin_lock_init(&lb->lock);
+
+ init_completion(&lb->dev_released);
+ memcpy(&lb->device, &crypto_lb_dev, sizeof(struct device));
+ lb->driver = &crypto_lb_driver;
+
+ snprintf(lb->device.bus_id, sizeof(lb->device.bus_id), "%s", lb->name);
+ err = device_register(&lb->device);
+ if (err) {
+ dprintk(KERN_ERR "Failed to register crypto load balancer device %s: err=%d.\n",
+ lb->name, err);
+ return err;
+ }
+
+ snprintf(lb->class_device.class_id, sizeof(lb->class_device.class_id), "%s", lb->name);
+ lb->class_device.dev = &lb->device;
+ lb->class_device.class = &crypto_lb_class;
+
+ err = class_device_register(&lb->class_device);
+ if (err) {
+ dprintk(KERN_ERR "Failed to register crypto load balancer class device %s: err=%d.\n",
+ lb->name, err);
+ device_unregister(&lb->device);
+ return err;
+ }
+
+ create_device_attributes(lb);
+ wake_up_interruptible(&crypto_lb_wait_queue);
+
+ return 0;
+
+}
+
+static void __crypto_lb_unregister(struct crypto_lb *lb)
+{
+ wake_up_interruptible(&crypto_lb_wait_queue);
+ remove_device_attributes(lb);
+ class_device_unregister(&lb->class_device);
+ device_unregister(&lb->device);
+}
+
+int crypto_lb_register(struct crypto_lb *lb, int set_current, int set_default)
+{
+ struct crypto_lb *__lb;
+ int err;
+
+ spin_lock_irq(&crypto_lb_lock);
+
+ list_for_each_entry(__lb, &crypto_lb_list, lb_entry) {
+ if (unlikely(compare_lb(__lb, lb))) {
+ spin_unlock_irq(&crypto_lb_lock);
+
+ dprintk(KERN_ERR "Crypto load balancer %s is already registered.\n",
+ lb->name);
+ return -EINVAL;
+ }
+ }
+
+ list_add(&lb->lb_entry, &crypto_lb_list);
+
+ spin_unlock_irq(&crypto_lb_lock);
+
+ err = __crypto_lb_register(lb);
+ if (err) {
+ spin_lock_irq(&crypto_lb_lock);
+ list_del_init(&lb->lb_entry);
+ spin_unlock_irq(&crypto_lb_lock);
+
+ return err;
+ }
+
+ if (!default_lb || set_default)
+ lb_set_default(lb);
+
+ if (!current_lb || set_current)
+ lb_set_current(lb);
+
+ dprintk(KERN_INFO "Crypto load balancer %s was registered and set to be [%s.%s].\n",
+ lb->name, (lb_is_current(lb)) ? "current" : "not current",
+ (lb_is_default(lb)) ? "default" : "not default");
+
+ lb_num++;
+
+ return 0;
+}
+
+void crypto_lb_unregister(struct crypto_lb *lb)
+{
+ struct crypto_lb *__lb, *n;
+
+ if (lb_num == 1) {
+ dprintk(KERN_INFO "You are removing crypto load balancer %s which is current and default.\n"
+ "There is no other crypto load balancers. "
+ "Removing %s delayed untill new load balancer is registered.\n",
+ lb->name, (force_lb_remove) ? "is not" : "is");
+ while (lb_num == 1 && !force_lb_remove) {
+ set_current_state(TASK_INTERRUPTIBLE);
+ schedule_timeout(HZ);
+
+ if (signal_pending(current))
+ flush_signals(current);
+ }
+ }
+
+ __crypto_lb_unregister(lb);
+
+ spin_lock_irq(&crypto_lb_lock);
+
+ list_for_each_entry_safe(__lb, n, &crypto_lb_list, lb_entry) {
+ if (compare_lb(__lb, lb)) {
+ lb_num--;
+ list_del_init(&__lb->lb_entry);
+
+ dprintk(KERN_ERR "Crypto load balancer %s was unregistered.\n",
+ lb->name);
+ } else if (lb_num) {
+ if (lb_is_default(lb))
+ lb_set_default(__lb);
+ if (lb_is_current(lb))
+ lb_set_current(default_lb);
+ }
+ }
+
+ spin_unlock_irq(&crypto_lb_lock);
+}
+
+static void crypto_lb_queue_wrapper(void *data)
+{
+ struct crypto_device *dev = &main_crypto_device;
+ struct crypto_session *s = (struct crypto_session *)data;
+
+ dprintk(KERN_INFO "%s: Calling callback for session %llu [%llu] flags=%x, "
+ "op=%04u, type=%04x, mode=%04x, priority=%04x\n", __func__,
+ s->ci.id, s->ci.dev_id, s->ci.flags, s->ci.operation,
+ s->ci.type, s->ci.mode, s->ci.priority);
+
+ spin_lock_irq(&s->lock);
+ crypto_stat_finish_inc(s);
+
+ finish_session(s);
+ unstart_session(s);
+ spin_unlock_irq(&s->lock);
+
+ s->ci.callback(&s->ci, &s->data);
+
+ if (session_finished(s)) {
+ crypto_session_destroy(s);
+ return;
+ } else {
+ /*
+ * Special case: crypto consumer marks session as "not finished"
+ * in it's callback - it means that crypto consumer wants
+ * this session to be processed further,
+ * for example crypto consumer can add new route and then
+ * mark session as "not finished".
+ */
+
+ uncomplete_session(s);
+ unstart_session(s);
+ crypto_session_insert_main(dev, s);
+ }
+ spin_unlock_irq(&s->lock);
+}
+
+static void crypto_lb_process_next_route(struct crypto_session *s)
+{
+ struct crypto_route *rt;
+ struct crypto_device *dev, *orig;
+
+ rt = crypto_route_dequeue(s);
+ if (rt) {
+ orig = rt->dev;
+
+ list_del_init(&s->dev_queue_entry);
+
+ crypto_route_free(rt);
+
+ dev = crypto_route_get_current_device(s);
+ if (dev) {
+ dprintk(KERN_INFO "%s: processing new route to %s.\n",
+ __func__, dev->name);
+
+ memcpy(&s->ci, &rt->ci, sizeof(s->ci));
+
+ if (!strncmp(orig->name, dev->name, sizeof(dev->name)))
+ __crypto_session_insert(dev, s);
+ else
+ crypto_session_insert(dev, s);
+
+ /*
+ * Reference to this device was already hold when
+ * new routing was added.
+ */
+ crypto_device_put(dev);
+ }
+ }
+}
+
+void crypto_wake_lb(void)
+{
+ wake_up_interruptible(&crypto_lb_wait_queue);
+}
+
+int crypto_lb_thread(void *data)
+{
+ struct crypto_session *s, *n;
+ struct crypto_device *dev = (struct crypto_device *)data;
+ unsigned long flags;
+
+ daemonize("%s", dev->name);
+ allow_signal(SIGTERM);
+
+ while (!need_exit) {
+ spin_lock_irqsave(&dev->session_lock, flags);
+ list_for_each_entry_safe(s, n, &dev->session_list, main_queue_entry) {
+ dprintk("session %llu [%llu]: flags=%x, route_num=%d, %s,%s,%s,%s.\n",
+ s->ci.id, s->ci.dev_id, s->ci.flags,
+ crypto_route_queue_len(s),
+ (session_completed(s)) ? "completed" : "not completed",
+ (session_finished(s)) ? "finished" : "not finished",
+ (session_started(s)) ? "started" : "not started",
+ (session_is_processed(s)) ? "is being processed" : "is not being processed");
+
+ if (!spin_trylock(&s->lock))
+ continue;
+
+ if (session_is_processed(s))
+ goto unlock;
+ if (session_started(s))
+ goto unlock;
+
+ if (session_completed(s)) {
+ crypto_stat_ptime_inc(s);
+
+ if (crypto_route_queue_len(s) > 1) {
+ crypto_lb_process_next_route(s);
+ } else {
+ start_session(s);
+ crypto_stat_start_inc(s);
+
+ dprintk("%s: going to remove session %llu [%llu].\n",
+ __func__, s->ci.id, s->ci.dev_id);
+
+ __crypto_session_dequeue_main(s);
+ spin_unlock(&s->lock);
+
+ INIT_WORK(&s->work, &crypto_lb_queue_wrapper, s);
+ queue_work(crypto_lb_queue, &s->work);
+ continue;
+ }
+ }
+unlock:
+ spin_unlock(&s->lock);
+ }
+ spin_unlock_irqrestore(&dev->session_lock, flags);
+
+ interruptible_sleep_on_timeout(&crypto_lb_wait_queue, 100);
+ }
+
+ flush_workqueue(crypto_lb_queue);
+ complete_and_exit(&thread_exited, 0);
+}
+
+int crypto_lb_init(void)
+{
+ int err;
+ long pid;
+
+ err = bus_register(&crypto_lb_bus_type);
+ if (err) {
+ dprintk(KERN_ERR "Failed to register crypto load balancer bus: err=%d.\n", err);
+ goto err_out_exit;
+ }
+
+ err = driver_register(&crypto_lb_driver);
+ if (err) {
+ dprintk(KERN_ERR "Failed to register crypto load balancer driver: err=%d.\n", err);
+ goto err_out_bus_unregister;
+ }
+
+ crypto_lb_class.class_dev_attrs = &class_device_attr_lbs;
+
+ err = class_register(&crypto_lb_class);
+ if (err) {
+ dprintk(KERN_ERR "Failed to register crypto load balancer class: err=%d.\n", err);
+ goto err_out_driver_unregister;
+ }
+
+ crypto_lb_queue = create_workqueue("clbq");
+ if (!crypto_lb_queue) {
+ dprintk(KERN_ERR "Failed to create crypto load balaner work queue.\n");
+ goto err_out_class_unregister;
+ }
+
+ init_completion(&thread_exited);
+ pid = kernel_thread(crypto_lb_thread, &main_crypto_device, CLONE_FS | CLONE_FILES);
+ if (IS_ERR((void *)pid)) {
+ dprintk(KERN_ERR "Failed to create kernel load balancing thread.\n");
+ goto err_out_destroy_workqueue;
+ }
+
+ return 0;
+
+err_out_destroy_workqueue:
+ destroy_workqueue(crypto_lb_queue);
+err_out_class_unregister:
+ class_unregister(&crypto_lb_class);
+err_out_driver_unregister:
+ driver_unregister(&crypto_lb_driver);
+err_out_bus_unregister:
+ bus_unregister(&crypto_lb_bus_type);
+err_out_exit:
+ return err;
+}
+
+void crypto_lb_fini(void)
+{
+ need_exit = 1;
+ wait_for_completion(&thread_exited);
+ flush_workqueue(crypto_lb_queue);
+ destroy_workqueue(crypto_lb_queue);
+ class_unregister(&crypto_lb_class);
+ driver_unregister(&crypto_lb_driver);
+ bus_unregister(&crypto_lb_bus_type);
+}
+
+EXPORT_SYMBOL_GPL(crypto_lb_register);
+EXPORT_SYMBOL_GPL(crypto_lb_unregister);
+EXPORT_SYMBOL_GPL(crypto_lb_rehash);
+EXPORT_SYMBOL_GPL(crypto_lb_find_device);
+EXPORT_SYMBOL_GPL(crypto_wake_lb);

2005-03-08 02:43:07

by Evgeniy Polyakov

[permalink] [raw]
Subject: [37/many] arch: s390 config

--- ./arch/s390/Kconfig~ 2005-03-02 10:38:07.000000000 +0300
+++ ./arch/s390/Kconfig 2005-03-07 21:29:40.000000000 +0300
@@ -477,4 +477,6 @@

source "crypto/Kconfig"

+source "acrypto/Kconfig"
+
source "lib/Kconfig"

2005-03-07 20:24:38

by Evgeniy Polyakov

[permalink] [raw]
Subject: [26/many] arch: frv config

--- ./arch/frv/Kconfig~ 2005-03-02 10:37:54.000000000 +0300
+++ ./arch/frv/Kconfig 2005-03-07 21:26:53.000000000 +0300
@@ -498,4 +498,6 @@

source "crypto/Kconfig"

+source "acrypto/Kconfig"
+
source "lib/Kconfig"

2005-03-07 20:33:56

by Evgeniy Polyakov

[permalink] [raw]
Subject: [30/many] arch: m32r config

--- ./arch/m32r/Kconfig~ 2005-03-02 10:37:30.000000000 +0300
+++ ./arch/m32r/Kconfig 2005-03-07 21:27:51.000000000 +0300
@@ -364,4 +364,6 @@

source "crypto/Kconfig"

+source "acrypto/Kconfig"
+
source "lib/Kconfig"

2005-03-08 03:08:38

by Evgeniy Polyakov

[permalink] [raw]
Subject: [16/many] acrypto: crypto_user.h

--- /tmp/empty/crypto_user.h 1970-01-01 03:00:00.000000000 +0300
+++ ./acrypto/crypto_user.h 2005-03-07 20:35:36.000000000 +0300
@@ -0,0 +1,52 @@
+/*
+ * crypto_user.h
+ *
+ * Copyright (c) 2004 Evgeniy Polyakov <[email protected]>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#ifndef __CRYPTO_USER_H
+#define __CRYPTO_USER_H
+
+#define MAX_DATA_SIZE 3
+#define ALIGN_DATA_SIZE(size) ((size + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))
+
+enum crypto_user_data_types
+{
+ CRYPTO_USER_DATA_SRC = 0,
+ CRYPTO_USER_DATA_DST,
+ CRYPTO_USER_DATA_KEY,
+ CRYPTO_USER_DATA_IV,
+};
+
+struct crypto_user_data
+{
+ __u16 data_size;
+ __u16 data_type;
+};
+
+
+#ifdef __KERNEL__
+
+int crypto_user_alloc_crypto_data(struct crypto_data *data, int src_size, int dst_size, int key_size, int iv_size);
+void crypto_user_free_crypto_data(struct crypto_data *data);
+void crypto_user_fill_sg(void *ptr, u16 size, struct scatterlist *sg);
+struct scatterlist *crypto_user_get_sg(struct crypto_user_data *ud, struct crypto_data *data);
+int crypto_user_fill_sg_data(struct crypto_user_data *ud, struct crypto_data *data, void *ptr);
+
+#endif /* __KERNEL__ */
+#endif /* __CRYPTO_USER_H */

2005-03-08 03:08:36

by Evgeniy Polyakov

[permalink] [raw]
Subject: [??/many] list of files to be sent in a next couple of e-mails with small description


announce - asynchronous crypto layer announce
files - file with this cruft
bench - acrypto benchmark vs cryptoloop vs dm_crypt
iok.c - userspace application which uses ioctl based acrypto access
ucon_crypto.c - userspace application which uses direct process' VMA access
acrypto_Kconfig.patch - acrypto kernel config file
acrypto_Makefile.patch - acrypto make file
acrypto_acrypto.h.patch - base definition used in acrypto and it's users
acrypto_async_provider.c.patch - asynchronous crypto provider [AES CBC mode only] - sync crypto based
acrypto_crypto_conn.c.patch - kernel connector's backend - allows statistic fetching
acrypto_crypto_conn.h.patch - definitions for kernel connector's subsystem
acrypto_crypto_def.h.patch - various acrypto definitions like crypto modes, types of operation and so on...
acrypto_crypto_dev.c.patch - main crypto device add/remove routings
acrypto_crypto_lb.c.patch - crypto load balancer's subsystem and main crypto session queues watcher
acrypto_crypto_lb.h.patch - definitions for crypto load balancer processing
acrypto_crypto_main.c.patch - main routings - session allocations/deallocations and so on...
acrypto_crypto_route.h.patch - crypto routing subsystem
acrypto_crypto_stat.c.patch - acrypto statistic helpers
acrypto_crypto_stat.h.patch - acrypto statistic helpers declarations
acrypto_crypto_user.c.patch - base userspace/kernelspace acrypto helpers
acrypto_crypto_user.h.patch - above declarations
acrypto_crypto_user_direct.c.patch - direct process' VMA access helpers
acrypto_crypto_user_direct.h.patch - above declarations
acrypto_crypto_user_ioctl.c.patch - ioctl based userspace access
acrypto_crypto_user_ioctl.h.patch - above declarations
acrypto_simple_lb.c.patch - simple load balancer
alpha arm arm26 cris frv h8300 i386 ia64 m32r m68k
m68knommu mips parisc ppc ppc64 s390 sh sh64 sparc
sparc64 um v850 x86_64 - small patches to enable acrypto config menu

bd.patch - asynchronous block device
ubd.c - userspace utility to configure asynchronous block device
bind unbind - simple scripts to show ubd usage

2005-03-08 03:08:35

by Evgeniy Polyakov

[permalink] [raw]
Subject: [27/many] arch: h8300 config

--- ./arch/h8300/Kconfig~ 2005-03-02 10:38:17.000000000 +0300
+++ ./arch/h8300/Kconfig 2005-03-07 21:27:13.000000000 +0300
@@ -191,4 +191,6 @@

source "crypto/Kconfig"

+source "acrypto/Kconfig"
+
source "lib/Kconfig"

2005-03-08 03:08:34

by Evgeniy Polyakov

[permalink] [raw]
Subject: [29/many] arch: ia64 config

--- ./arch/ia64/Kconfig~ 2005-03-02 10:38:26.000000000 +0300
+++ ./arch/ia64/Kconfig 2005-03-07 21:27:38.000000000 +0300
@@ -417,3 +417,5 @@
source "security/Kconfig"

source "crypto/Kconfig"
+
+source "acrypto/Kconfig"

2005-03-08 03:08:34

by Evgeniy Polyakov

[permalink] [raw]
Subject: [5/5] bd: script for unbinding any filters

./ubd unbind dev /dev/bd0 filter acrypto
./ubd unbind dev /dev/bd0 filter xor
./ubd unbind dev /dev/bd0 filter fd

2005-03-08 03:08:33

by Evgeniy Polyakov

[permalink] [raw]
Subject: [5/many] acrypto: crypto_conn.c

--- /tmp/empty/crypto_conn.c 1970-01-01 03:00:00.000000000 +0300
+++ ./acrypto/crypto_conn.c 2005-03-07 21:11:01.000000000 +0300
@@ -0,0 +1,160 @@
+/*
+ * crypto_conn.c
+ *
+ * Copyright (c) 2004 Evgeniy Polyakov <[email protected]>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/types.h>
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/vmalloc.h>
+#include <linux/connector.h>
+
+#include "acrypto.h"
+#include "crypto_lb.h"
+#include "crypto_conn.h"
+#include "crypto_user_ioctl.h"
+#include "crypto_user_direct.h"
+
+struct cb_id crypto_conn_id = { 0xdead, 0x0000 };
+static char crypto_conn_name[] = "crconn";
+
+static void crypto_conn_callback(void *data)
+{
+ struct cn_msg *msg, *reply;
+ struct crypto_conn_data *d, *cmd;
+ struct crypto_device *dev;
+ u32 sessions;
+
+ msg = (struct cn_msg *)data;
+ d = (struct crypto_conn_data *)msg->data;
+
+ if (msg->len < sizeof(*d)) {
+ dprintk(KERN_ERR "Wrong message to crypto connector: msg->len=%u < %u.\n",
+ msg->len, sizeof(*d));
+ return;
+ }
+
+ if (msg->len != sizeof(*d) + d->len) {
+ dprintk(KERN_ERR "Wrong message to crypto connector: msg->len=%u != %u.\n",
+ msg->len, sizeof(*d) + d->len);
+ return;
+ }
+
+ dev = crypto_device_get_name(d->name);
+ if (!dev) {
+ dprintk(KERN_INFO "Crypto device %s was not found.\n", d->name);
+ return;
+ }
+
+ switch (d->cmd) {
+ case CRYPTO_READ_SESSIONS:
+ reply = kmalloc(sizeof(*msg) + sizeof(*cmd) + sizeof(sessions), GFP_ATOMIC);
+ if (reply) {
+ memcpy(reply, msg, sizeof(*reply));
+ reply->len = sizeof(*cmd) + sizeof(sessions);
+
+ /*
+ * See protocol description in connector.c
+ */
+ reply->ack++;
+
+ cmd = (struct crypto_conn_data *)(reply + 1);
+ memcpy(cmd, d, sizeof(*cmd));
+ cmd->len = sizeof(sessions);
+
+ sessions = atomic_read(&dev->refcnt);
+
+ memcpy(cmd + 1, &sessions, sizeof(sessions));
+
+ cn_netlink_send(reply, 0);
+
+ kfree(reply);
+ } else
+ dprintk(KERN_ERR "Failed to allocate %d bytes in reply to comamnd 0x%x.\n",
+ sizeof(*msg) + sizeof(*cmd), d->cmd);
+ break;
+ case CRYPTO_GET_STAT:
+ reply = kmalloc(sizeof(*msg) + sizeof(*cmd) + sizeof(struct crypto_device_stat), GFP_ATOMIC);
+ if (reply) {
+ struct crypto_device_stat *ptr;
+
+ memcpy(reply, msg, sizeof(*reply));
+ reply->len = sizeof(*cmd) + sizeof(*ptr);
+
+ /*
+ * See protocol description in connector.c
+ */
+ reply->ack++;
+
+ cmd = (struct crypto_conn_data *)(reply + 1);
+ memcpy(cmd, d, sizeof(*cmd));
+ cmd->len = sizeof(*ptr);
+
+ ptr = (struct crypto_device_stat *)(cmd + 1);
+ memcpy(ptr, &dev->stat, sizeof(*ptr));
+
+ cn_netlink_send(reply, 0);
+
+ kfree(reply);
+ } else
+ dprintk(KERN_ERR "Failed to allocate %d bytes in reply to comamnd 0x%x.\n",
+ sizeof(*msg) + sizeof(*cmd), d->cmd);
+ break;
+ case CRYPTO_REQUEST:
+#if 1
+ {
+ struct crypto_user_direct *usr;
+
+ usr = (struct crypto_user_direct *)(d->data);
+
+ crypto_user_direct_add_request(msg->seq, msg->ack, usr);
+ }
+#endif
+ break;
+ default:
+ dprintk(KERN_ERR "Wrong operation 0x%04x for crypto connector.\n",
+ d->cmd);
+ return;
+ }
+
+ crypto_device_put(dev);
+}
+
+int crypto_conn_init(void)
+{
+ int err;
+
+ err = cn_add_callback(&crypto_conn_id, crypto_conn_name, crypto_conn_callback);
+ if (err)
+ return err;
+
+ dprintk(KERN_INFO "Crypto connector callback is registered.\n");
+
+ return 0;
+}
+
+void crypto_conn_fini(void)
+{
+ cn_del_callback(&crypto_conn_id);
+ dprintk(KERN_INFO "Crypto connector callback is unregistered.\n");
+}

2005-03-08 03:08:30

by Evgeniy Polyakov

[permalink] [raw]
Subject: [42/many] arch: um config

--- ./arch/um/Kconfig~ 2005-03-02 10:38:09.000000000 +0300
+++ ./arch/um/Kconfig 2005-03-07 21:30:55.000000000 +0300
@@ -289,6 +289,8 @@

source "crypto/Kconfig"

+source "acrypto/Kconfig"
+
source "lib/Kconfig"

menu "SCSI support"

2005-03-08 05:08:58

by Kyle Moffett

[permalink] [raw]
Subject: Re: [0/many] Acrypto - asynchronous crypto layer for linux kernel 2.6

On Mar 07, 2005, at 15:37, Evgeniy Polyakov wrote:
> I'm pleased to announce asynchronous crypto layer for Linux kernel 2.6.
> It supports following features:
> - multiple asynchronous crypto device queues
> - crypto session routing
> - crypto session binding
> - modular load balancing
> - crypto session batching genetically implemented by design
> - crypto session priority
> - different kinds of crypto operation(RNG, asymmetrical crypto, HMAC
> and
> any other)

Did you include support for the new key/keyring infrastructure
introduced
a couple versions ago by David Howells? It allows userspace to create
and
manage various sorts of "keys" in kernelspace. If you create and
register
a few keytypes for various symmetric and asymmetric ciphers, you could
then
take advantage of its support for securely passing keys around in and
out
of userspace.

Cheers,
Kyle Moffett

-----BEGIN GEEK CODE BLOCK-----
Version: 3.12
GCM/CS/IT/U d- s++: a18 C++++>$ UB/L/X/*++++(+)>$ P+++(++++)>$
L++++(+++) E W++(+) N+++(++) o? K? w--- O? M++ V? PS+() PE+(-) Y+
PGP+++ t+(+++) 5 X R? tv-(--) b++++(++) DI+ D+ G e->++++$ h!*()>++$ r
!y?(-)
------END GEEK CODE BLOCK------


2005-03-08 09:12:24

by Evgeniy Polyakov

[permalink] [raw]
Subject: Re: [0/many] Acrypto - asynchronous crypto layer for linux kernel 2.6

On Tue, 8 Mar 2005 00:08:35 -0500
Kyle Moffett <[email protected]> wrote:

> On Mar 07, 2005, at 15:37, Evgeniy Polyakov wrote:
> > I'm pleased to announce asynchronous crypto layer for Linux kernel 2.6.
> > It supports following features:
> > - multiple asynchronous crypto device queues
> > - crypto session routing
> > - crypto session binding
> > - modular load balancing
> > - crypto session batching genetically implemented by design
> > - crypto session priority
> > - different kinds of crypto operation(RNG, asymmetrical crypto, HMAC
> > and
> > any other)
>
> Did you include support for the new key/keyring infrastructure
> introduced
> a couple versions ago by David Howells? It allows userspace to create
> and
> manage various sorts of "keys" in kernelspace. If you create and
> register
> a few keytypes for various symmetric and asymmetric ciphers, you could
> then
> take advantage of its support for securely passing keys around in and
> out
> of userspace.

As far as I know, it has different destination -
for example asynchronous block device, which uses acrypto in one of it's
filters, may use it.

> Cheers,
> Kyle Moffett
>
> -----BEGIN GEEK CODE BLOCK-----
> Version: 3.12
> GCM/CS/IT/U d- s++: a18 C++++>$ UB/L/X/*++++(+)>$ P+++(++++)>$
> L++++(+++) E W++(+) N+++(++) o? K? w--- O? M++ V? PS+() PE+(-) Y+
> PGP+++ t+(+++) 5 X R? tv-(--) b++++(++) DI+ D+ G e->++++$ h!*()>++$ r
> !y?(-)
> ------END GEEK CODE BLOCK------
>


Evgeniy Polyakov

Only failure makes us experts. -- Theo de Raadt

2005-03-08 09:14:40

by Evgeniy Polyakov

[permalink] [raw]
Subject: Re: [UPDATE PATCH 8/many] acrypto: crypto_dev.c

On Mon, 7 Mar 2005 17:46:41 -0800
Nishanth Aravamudan <[email protected]> wrote:

> On Tue, Mar 08, 2005 at 02:27:20AM +0300, Evgeniy Polyakov wrote:
> > On Mon, 7 Mar 2005 14:51:21 -0800
> > Nish Aravamudan <[email protected]> wrote:
> >
> > > On Tue, 8 Mar 2005 02:14:31 +0300, Evgeniy Polyakov <[email protected]> wrote:
> > > > On Mon, 7 Mar 2005 14:40:52 -0800
> > > > Nish Aravamudan <[email protected]> wrote:
> > > >
> > > > > On Mon, 7 Mar 2005 23:37:34 +0300, Evgeniy Polyakov <[email protected]> wrote:
> > > > > > --- /tmp/empty/crypto_dev.c 1970-01-01 03:00:00.000000000 +0300
> > > > > > +++ ./acrypto/crypto_dev.c 2005-03-07 20:35:36.000000000 +0300
> > > > > > @@ -0,0 +1,421 @@
> > > > > > +/*
> > > > > > + * crypto_dev.c
> > > > >
> > > > > <snip>
> > > > >
> > > > > > + while (atomic_read(&__dev->refcnt)) {
> > >
> > > <snip>
> > >
> > > > > > + set_current_state(TASK_UNINTERRUPTIBLE);
> > > > > > + schedule_timeout(HZ);
> > > > >
> > > > > I don't see any wait-queues in the immediate area of this code. Can
> > > > > this be an ssleep(1)?
> > > >
> > > > Yes, you are right, this loop just spins until all pending sessions
> > > > are removed from given crypto device, so it can just ssleep(1) here.
> > >
> > > Would you like me to send an incremental patch or will you be changing
> > > it yourself?
> >
> > That would be nice to see your changes in the acrypto.
> > If it will be commited...
>
> Well, here is an incremental patch, then:
>
> Description: Use ssleep() instead of schedule_timeout() to guarantee the
> task delays as expected.
>
> Signed-off-by: Nishanth Aravamudan <[email protected]>
>

Thank you, I've applied it to my tree.

> --- 2.6.11-v/acrypto/crypto_dev.c 2005-03-07 17:41:31.000000000 -0800
> +++ 2.6.11/acrypto/crypto_dev.c 2005-03-07 17:41:57.000000000 -0800
> @@ -28,6 +28,7 @@
> #include <linux/interrupt.h>
> #include <linux/spinlock.h>
> #include <linux/device.h>
> +#include <linux/delay.h>
>
> #include "acrypto.h"
>
> @@ -399,8 +400,7 @@ void crypto_device_remove(struct crypto_
> */
>
> __dev->data_ready(__dev);
> - set_current_state(TASK_UNINTERRUPTIBLE);
> - schedule_timeout(HZ);
> + ssleep(1);
> }
>
> dprintk(KERN_ERR "Crypto device %s was unregistered.\n",


Evgeniy Polyakov

Only failure makes us experts. -- Theo de Raadt

2005-03-08 10:32:06

by Herbert Xu

[permalink] [raw]
Subject: Re: [0/many] Acrypto - asynchronous crypto layer for linux kernel 2.6

On Mon, Mar 07, 2005 at 11:37:32PM +0300, Evgeniy Polyakov wrote:
>
> I'm pleased to announce asynchronous crypto layer for Linux kernel 2.6.

Thanks for your work. I'll be reviewing your approach as well as others
over the next week or so.

Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <[email protected]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

2005-03-08 12:22:22

by Kyle Moffett

[permalink] [raw]
Subject: Re: [0/many] Acrypto - asynchronous crypto layer for linux kernel 2.6

On Mar 08, 2005, at 04:37, Evgeniy Polyakov wrote:
>>
>> Did you include support for the new key/keyring infrastructure
>> introduced a couple versions ago by David Howells? It allows
>> user-space to create and manage various sorts of "keys" in
>> kernel-space. If you create and register a few keytypes for
>> various symmetric and asymmetric ciphers, you could then take
>> advantage of its support for securely passing keys around in
>> and out of userspace.
>
> As far as I know, it has different destination - for example
> asynchronous block device, which uses acrypto in one of it's
> filters, may use it.

I'm not exactly familiar with asynchronous block device, but I'm
guessing that it would need to get its crypto keys from the user
somehow, no? If so, then the best way of managing them is via
the key/keyring infrastructure. From the point of view of other
kernel systems, it's basically a set of BLOB<=>task associations
that supports a reasonable inheritance and permissions model.

Cheers,
Kyle Moffett

-----BEGIN GEEK CODE BLOCK-----
Version: 3.12
GCM/CS/IT/U d- s++: a18 C++++>$ UB/L/X/*++++(+)>$ P+++(++++)>$
L++++(+++) E W++(+) N+++(++) o? K? w--- O? M++ V? PS+() PE+(-) Y+
PGP+++ t+(+++) 5 X R? tv-(--) b++++(++) DI+ D+ G e->++++$ h!*()>++$ r
!y?(-)
------END GEEK CODE BLOCK------


2005-03-08 12:44:12

by Evgeniy Polyakov

[permalink] [raw]
Subject: Re: [0/many] Acrypto - asynchronous crypto layer for linux kernel 2.6

On Tue, 8 Mar 2005 07:22:01 -0500
Kyle Moffett <[email protected]> wrote:

> On Mar 08, 2005, at 04:37, Evgeniy Polyakov wrote:
> >>
> >> Did you include support for the new key/keyring infrastructure
> >> introduced a couple versions ago by David Howells? It allows
> >> user-space to create and manage various sorts of "keys" in
> >> kernel-space. If you create and register a few keytypes for
> >> various symmetric and asymmetric ciphers, you could then take
> >> advantage of its support for securely passing keys around in
> >> and out of userspace.
> >
> > As far as I know, it has different destination - for example
> > asynchronous block device, which uses acrypto in one of it's
> > filters, may use it.
>
> I'm not exactly familiar with asynchronous block device, but I'm
> guessing that it would need to get its crypto keys from the user
> somehow, no? If so, then the best way of managing them is via
> the key/keyring infrastructure. From the point of view of other
> kernel systems, it's basically a set of BLOB<=>task associations
> that supports a reasonable inheritance and permissions model.

Yes, it is exactly how block device, not crypto layer, may operate,
but it has very limited usage for block devices in given model,
when it only encrypts storage.

Above setup may be implemeted for the userspace/kernelspace application,
which requires continuous access to the key material from
the both sides, but asynchronous block device
(and existing cryptoloop and dm-crypt) use diferent model, when
controlling userspace application only one time provides
required key material(using ioctl) and exits, but key material
remains in kernelspace in device's private area.

> Cheers,
> Kyle Moffett
>
> -----BEGIN GEEK CODE BLOCK-----
> Version: 3.12
> GCM/CS/IT/U d- s++: a18 C++++>$ UB/L/X/*++++(+)>$ P+++(++++)>$
> L++++(+++) E W++(+) N+++(++) o? K? w--- O? M++ V? PS+() PE+(-) Y+
> PGP+++ t+(+++) 5 X R? tv-(--) b++++(++) DI+ D+ G e->++++$ h!*()>++$ r
> !y?(-)
> ------END GEEK CODE BLOCK------
>


Evgeniy Polyakov

Only failure makes us experts. -- Theo de Raadt

2005-03-08 13:25:56

by Joshua Jackson

[permalink] [raw]
Subject: Re: [0/many] Acrypto - asynchronous crypto layer for linux kernel 2.6

On Monday 07 March 2005 4:49 pm, Evgeniy Polyakov wrote:
>
> Unfortunately acrypto patch is more than 200kb, so neither mail list
> will accept it, so I've sent it in such form :)
>

As per the FAQ, very large patches are often best submitted as a URL. In case
you don't have a place to host it, you are welcome to email me the complete
patch and I will post a URL link.

I am very interested in your async changes and possibly porting some of the
Free/OpenBSD HW crypto drivers over to it.

--
Joshua Jackson
Vortech Consulting
http://www.vortech.net

2005-03-08 14:46:55

by Kyle Moffett

[permalink] [raw]
Subject: Re: [0/many] Acrypto - asynchronous crypto layer for linux kernel 2.6

On Mar 08, 2005, at 08:07, Evgeniy Polyakov wrote:
> On Tue, 8 Mar 2005 07:22:01 -0500 Kyle Moffett <[email protected]>
> wrote:
>> I'm not exactly familiar with asynchronous block device, but I'm
>> guessing that it would need to get its crypto keys from the user
>> somehow, no? If so, then the best way of managing them is via
>> the key/keyring infrastructure. From the point of view of other
>> kernel systems, it's basically a set of BLOB<=>task associations
>> that supports a reasonable inheritance and permissions model.
>
> Above setup may be implemeted for the userspace/kernelspace
> application,
> which requires continuous access to the key material from the both
> sides,
> but asynchronous block device (and existing cryptoloop and dm-crypt)
> use
> different model, when controlling userspace application only one time
> provides required key material(using ioctl) and exits, but key material
> remains in kernelspace in device's private area.

The above application works perfectly with the design of the keyring
system. A process (An init-script or something) creates a "key" either
with a file or through some complex method that only user-space needs to
care about, then it calls the keyctl syscall to create an in-kernel key
with the data BLOB. The kernel module that registered the key-type (IE:
symmetric128 or something like that) verifies that the data is valid and
attaches it to a key data-structure.

Later, when you want to use the key for acrypto, cryptoloop, dm-crypt,
etc,
you would just pass the key-ID instead of a custom binary format, and
the
acrypto layer would just add a reference to the key in its own structure
and increment the refcount.

Cheers,
Kyle Moffett

-----BEGIN GEEK CODE BLOCK-----
Version: 3.12
GCM/CS/IT/U d- s++: a18 C++++>$ UB/L/X/*++++(+)>$ P+++(++++)>$
L++++(+++) E W++(+) N+++(++) o? K? w--- O? M++ V? PS+() PE+(-) Y+
PGP+++ t+(+++) 5 X R? tv-(--) b++++(++) DI+ D+ G e->++++$ h!*()>++$ r
!y?(-)
------END GEEK CODE BLOCK------


2005-03-08 14:51:20

by Evgeniy Polyakov

[permalink] [raw]
Subject: Re: [1/5] bd: Asynchronous block device


Small morning patch for bd_fd filter which closes "major security vulnerability"
described at http://off.net/~jme/loopdev_vul.html

Author's quite: "about 3 years ago i published a paper describing how an attacker would be able
to modify the content of the encrypted device without being detected."

Small archive of the descussion at: http://mail.nl.linux.org/linux-crypto/2005-01/msg00040.html

It is provideed to show how easy is bd filters creation.

Thank you for your attention :)

P.S. userspace ubd.c patch is not attached, it is 32 lines copy/allocation.

--- orig/bd_fd.c
+++ mod/bd_fd.c
@@ -29,6 +29,7 @@
#include <linux/init.h>
#include <linux/fs.h>
#include <linux/file.h>
+#include <linux/crypto.h>

#include "bd.h"
#include "bd_filter.h"
@@ -37,6 +38,7 @@
static int bd_fd_transfer(struct bd_transfer *);
static int bd_fd_init(struct bd_device *, struct bd_filter *);
static void bd_fd_fini(struct bd_device *, struct bd_filter *);
+static int bd_fd_check_media(struct bd_device *dev, struct bd_filter *f, int size);

static struct bd_main_filter fd_filter =
{
@@ -126,9 +128,11 @@
{
struct bd_fd_private *p;
struct bd_fd_user *u = f->priv;
- int err;
+ int err, size;
+
+ size = f->priv_size - sizeof(*u);

- p = kmalloc(sizeof(*p), GFP_KERNEL);
+ p = kmalloc(sizeof(*p) + size, GFP_KERNEL);
if (!p) {
dprintk("Failed to allocate new bd_fd priavte structure in dev=%s, filter=%s.\n",
dev->name, f->mf->name);
@@ -136,8 +140,11 @@
}

memset(p, 0, sizeof(*p));
-
memcpy(&p->u, u, sizeof(p->u));
+ if (size) {
+ p->hmac = (u8 *)(p+1);
+ memcpy(p->hmac, u+1, size);
+ }

dprintk("%s: filter=%s, flags=%08x.\n", __func__, f->mf->name, f->mf->flags);

@@ -152,8 +159,11 @@
err = bd_set_fd(dev, p);
if (err)
return err;
+
+ if (size)
+ err = bd_fd_check_media(dev, f, size);

- return 0;
+ return err;
}

static void bd_fd_fini(struct bd_device *dev, struct bd_filter *f)
@@ -305,6 +315,93 @@
return err;
}

+static void bd_fd_complete(struct bd_transfer *t)
+{
+}
+#define SHA512_DIGEST_SIZE 64
+static int bd_fd_check_media(struct bd_device *dev, struct bd_filter *f, int size)
+{
+ struct crypto_tfm *tfm;
+ int err, i;
+ loff_t storage_size, pos;
+ struct bd_fd_private *p = f->priv;
+ struct page *pg;
+ struct bd_transfer t;
+ u8 hmac[SHA512_DIGEST_SIZE];
+ u8 scratch[SHA512_DIGEST_SIZE];
+ struct scatterlist sg;
+
+ if (size != SHA512_DIGEST_SIZE)
+ return -EINVAL;
+
+ tfm = crypto_alloc_tfm("sha512", 0);
+ if (!tfm) {
+ dprintk("Failed to create sha512 tfm for device %s.\n", dev->name);
+ return -ENODEV;
+ }
+
+ storage_size = bd_get_size(dev, p);
+ if (!storage_size) {
+ dprintk("Storage size of %s is %llu.\n", dev->name, storage_size);
+ err = -EINVAL;
+ goto err_out_free_tfm;
+ }
+
+ pg = alloc_pages(GFP_KERNEL, 0);
+ if (!pg) {
+ dprintk("Failed to get free scratch page for device %s.\n", dev->name);
+ err = -ENOMEM;
+ goto err_out_free_tfm;
+ }
+
+ memset(&t, 0, sizeof(t));
+
+ for (pos=0; pos<storage_size;) {
+ t.src.page = pg;
+ t.src.off = 0;
+ t.src.size = (storage_size - pos > PAGE_SIZE)?PAGE_SIZE:(storage_size - pos);
+ t.cmd = READ;
+ t.pos = pos;
+ t.f = f;
+ t.f->complete = bd_fd_complete;
+
+ file_bd_read(&t);
+
+ pos += PAGE_SIZE;
+
+ sg.page = pg;
+ sg.offset = 0;
+ sg.length = PAGE_SIZE;
+
+ crypto_digest_digest(tfm, &sg, 1, scratch);
+
+ for (i=0; i<sizeof(hmac); ++i) {
+ hmac[i] ^= scratch[i];
+ }
+ }
+
+ __free_pages(pg, 0);
+
+err_out_free_tfm:
+ crypto_free_tfm(tfm);
+
+ err = memcmp(hmac, p->hmac, sizeof(hmac));
+
+ printk("Calculated: ");
+ for (i=0; i<sizeof(hmac); ++i) {
+ printk("%02x ", hmac[i]);
+ }
+ printk("\n");
+ printk("Provided : ");
+ for (i=0; i<sizeof(hmac); ++i) {
+ printk("%02x ", p->hmac[i]);
+ }
+ printk("\n");
+
+ return err;
+}
+
+
int __devinit bd_fd_init_dev(void)
{
int err;


--- orig/bd_fd.h
+++ mod/bd_fd.h
@@ -34,6 +34,8 @@
struct bd_fd_user u;

struct file *file;
+
+ u8 *hmac;
};

#endif /* __KERNEL__ */



Evgeniy Polyakov

Only failure makes us experts. -- Theo de Raadt

2005-03-08 14:58:55

by Evgeniy Polyakov

[permalink] [raw]
Subject: Re: [0/many] Acrypto - asynchronous crypto layer for linux kernel 2.6

On Tue, 8 Mar 2005 09:46:30 -0500
Kyle Moffett <[email protected]> wrote:

> On Mar 08, 2005, at 08:07, Evgeniy Polyakov wrote:
> > On Tue, 8 Mar 2005 07:22:01 -0500 Kyle Moffett <[email protected]>
> > wrote:
> >> I'm not exactly familiar with asynchronous block device, but I'm
> >> guessing that it would need to get its crypto keys from the user
> >> somehow, no? If so, then the best way of managing them is via
> >> the key/keyring infrastructure. From the point of view of other
> >> kernel systems, it's basically a set of BLOB<=>task associations
> >> that supports a reasonable inheritance and permissions model.
> >
> > Above setup may be implemeted for the userspace/kernelspace
> > application,
> > which requires continuous access to the key material from the both
> > sides,
> > but asynchronous block device (and existing cryptoloop and dm-crypt)
> > use
> > different model, when controlling userspace application only one time
> > provides required key material(using ioctl) and exits, but key material
> > remains in kernelspace in device's private area.
>
> The above application works perfectly with the design of the keyring
> system. A process (An init-script or something) creates a "key" either
> with a file or through some complex method that only user-space needs to
> care about, then it calls the keyctl syscall to create an in-kernel key
> with the data BLOB. The kernel module that registered the key-type (IE:
> symmetric128 or something like that) verifies that the data is valid and
> attaches it to a key data-structure.
>
> Later, when you want to use the key for acrypto, cryptoloop, dm-crypt,
> etc,
> you would just pass the key-ID instead of a custom binary format, and
> the
> acrypto layer would just add a reference to the key in its own structure
> and increment the refcount.

Acrypto does not actually know about keys, ivs and other.
It is layer between crypto devices(which require key) and crypto consumers
(which provide the key).
One may fill key and iv sg as NULL and put them to the private area,
and then create approprite crypto device, which will obtain them from there,
but not using key/iv sgs.
Acrypto do not use such information.

Of course, one may patch bd_acrypto.c/cryptoloop.c/dm_crypt.c
to use above schema, but it is too complex for the model used,
but nevertheless it can be used, I do not disput against it
in bd_acrypto/cryptoloop/dm_crypt.

> Cheers,
> Kyle Moffett
>
> -----BEGIN GEEK CODE BLOCK-----
> Version: 3.12
> GCM/CS/IT/U d- s++: a18 C++++>$ UB/L/X/*++++(+)>$ P+++(++++)>$
> L++++(+++) E W++(+) N+++(++) o? K? w--- O? M++ V? PS+() PE+(-) Y+
> PGP+++ t+(+++) 5 X R? tv-(--) b++++(++) DI+ D+ G e->++++$ h!*()>++$ r
> !y?(-)
> ------END GEEK CODE BLOCK------
>


Evgeniy Polyakov

Only failure makes us experts. -- Theo de Raadt

2005-03-08 18:04:03

by Nishanth Aravamudan

[permalink] [raw]
Subject: [UPDATE PATCH 9/many] acrypto: crypto_lb.c

On Mon, Mar 07, 2005 at 11:37:34PM +0300, Evgeniy Polyakov wrote:
> --- /tmp/empty/crypto_lb.c 1970-01-01 03:00:00.000000000 +0300
> +++ ./acrypto/crypto_lb.c 2005-03-07 20:35:36.000000000 +0300
> @@ -0,0 +1,634 @@
> +/*
> + * crypto_lb.c
> + *
> + * Copyright (c) 2004 Evgeniy Polyakov <[email protected]>

<snip>

> +void crypto_lb_unregister(struct crypto_lb *lb)
> +{
> + struct crypto_lb *__lb, *n;
> +
> + if (lb_num == 1) {
> + dprintk(KERN_INFO "You are removing crypto load balancer %s which is current and default.\n"
> + "There is no other crypto load balancers. "
> + "Removing %s delayed untill new load balancer is registered.\n",
> + lb->name, (force_lb_remove) ? "is not" : "is");
> + while (lb_num == 1 && !force_lb_remove) {
> + set_current_state(TASK_INTERRUPTIBLE);
> + schedule_timeout(HZ);
> +
> + if (signal_pending(current))
> + flush_signals(current);
> + }
> + }

Description: Use msleep_interruptible() instead of schedule_timeout() to
guarantee the task delays as expected. Using msleep*() also leads to a
more human-understandable interface and allows for virtualized systems
(jiffy-less) to function correctly (with appropriate extensions).

Signed-off-by: Nishanth Aravamudan <[email protected]>

--- 2.6.11-v/acrypto/crypto_lb.c 2005-03-08 09:58:56.000000000 -0800
+++ 2.6.11/acrypto/crypto_lb.c 2005-03-08 09:59:38.000000000 -0800
@@ -29,6 +29,7 @@
#include <linux/spinlock.h>
#include <linux/workqueue.h>
#include <linux/err.h>
+#include <linux/delay.h>

#include "acrypto.h"
#include "crypto_lb.h"
@@ -397,8 +398,7 @@ void crypto_lb_unregister(struct crypto_
"Removing %s delayed untill new load balancer is registered.\n",
lb->name, (force_lb_remove) ? "is not" : "is");
while (lb_num == 1 && !force_lb_remove) {
- set_current_state(TASK_INTERRUPTIBLE);
- schedule_timeout(HZ);
+ msleep_interruptible(1000);

if (signal_pending(current))
flush_signals(current);

2005-03-08 18:09:05

by Evgeniy Polyakov

[permalink] [raw]
Subject: Re: [UPDATE PATCH 9/many] acrypto: crypto_lb.c

On Tue, 8 Mar 2005 10:02:50 -0800
Nishanth Aravamudan <[email protected]> wrote:

> On Mon, Mar 07, 2005 at 11:37:34PM +0300, Evgeniy Polyakov wrote:
> > --- /tmp/empty/crypto_lb.c 1970-01-01 03:00:00.000000000 +0300
> > +++ ./acrypto/crypto_lb.c 2005-03-07 20:35:36.000000000 +0300
> > @@ -0,0 +1,634 @@
> > +/*
> > + * crypto_lb.c
> > + *
> > + * Copyright (c) 2004 Evgeniy Polyakov <[email protected]>
>
> <snip>
>
> > +void crypto_lb_unregister(struct crypto_lb *lb)
> > +{
> > + struct crypto_lb *__lb, *n;
> > +
> > + if (lb_num == 1) {
> > + dprintk(KERN_INFO "You are removing crypto load balancer %s which is current and default.\n"
> > + "There is no other crypto load balancers. "
> > + "Removing %s delayed untill new load balancer is registered.\n",
> > + lb->name, (force_lb_remove) ? "is not" : "is");
> > + while (lb_num == 1 && !force_lb_remove) {
> > + set_current_state(TASK_INTERRUPTIBLE);
> > + schedule_timeout(HZ);
> > +
> > + if (signal_pending(current))
> > + flush_signals(current);
> > + }
> > + }
>
> Description: Use msleep_interruptible() instead of schedule_timeout() to
> guarantee the task delays as expected. Using msleep*() also leads to a
> more human-understandable interface and allows for virtualized systems
> (jiffy-less) to function correctly (with appropriate extensions).
>
> Signed-off-by: Nishanth Aravamudan <[email protected]>

Also applied, thank you.

> --- 2.6.11-v/acrypto/crypto_lb.c 2005-03-08 09:58:56.000000000 -0800
> +++ 2.6.11/acrypto/crypto_lb.c 2005-03-08 09:59:38.000000000 -0800
> @@ -29,6 +29,7 @@
> #include <linux/spinlock.h>
> #include <linux/workqueue.h>
> #include <linux/err.h>
> +#include <linux/delay.h>
>
> #include "acrypto.h"
> #include "crypto_lb.h"
> @@ -397,8 +398,7 @@ void crypto_lb_unregister(struct crypto_
> "Removing %s delayed untill new load balancer is registered.\n",
> lb->name, (force_lb_remove) ? "is not" : "is");
> while (lb_num == 1 && !force_lb_remove) {
> - set_current_state(TASK_INTERRUPTIBLE);
> - schedule_timeout(HZ);
> + msleep_interruptible(1000);
>
> if (signal_pending(current))
> flush_signals(current);


Evgeniy Polyakov

Only failure makes us experts. -- Theo de Raadt

2005-03-10 10:22:22

by Evgeniy Polyakov

[permalink] [raw]
Subject: Re: [0/many] Acrypto - asynchronous crypto layer for linux kernel 2.6

On Tue, 2005-03-08 at 08:24 -0500, Joshua Jackson wrote:
> On Monday 07 March 2005 4:49 pm, Evgeniy Polyakov wrote:
> >
> > Unfortunately acrypto patch is more than 200kb, so neither mail list
> > will accept it, so I've sent it in such form :)
> >
>
> As per the FAQ, very large patches are often best submitted as a URL. In case
> you don't have a place to host it, you are welcome to email me the complete
> patch and I will post a URL link.

Patch on the web has quite small interest for the majority of the
people,
but probably it is better than 50+ e-mails...

The latest sources which can be compiled as external module
are available at
http://tservice.net.ru/~s0mbre/archive/acrypto/acrypto_latest.tar.gz

> I am very interested in your async changes and possibly porting some of the
> Free/OpenBSD HW crypto drivers over to it.

That would be very good.
You can find HIFN, VIA, FCRYPT drivers created for acrypto at
http://tservice.net.ru/~s0mbre/archive/acrypto/drivers

P.S. Above site is currently down, it will be turned on asap.

--
Evgeniy Polyakov

Crash is better than data corruption -- Arthur Grabowski


Attachments:
signature.asc (189.00 B)
This is a digitally signed message part

2005-03-10 12:42:51

by Christophe Saout

[permalink] [raw]
Subject: Re: [0/many] Acrypto - asynchronous crypto layer for linux kernel 2.6

Am Dienstag, den 08.03.2005, 00:08 -0500 schrieb Kyle Moffett:

> Did you include support for the new key/keyring infrastructure
> introduced
> a couple versions ago by David Howells? It allows userspace to create
> and
> manage various sorts of "keys" in kernelspace. If you create and
> register
> a few keytypes for various symmetric and asymmetric ciphers, you could
> then
> take advantage of its support for securely passing keys around in and
> out
> of userspace.

I've written a dm-crypt patch some weeks ago that does what you
describe. The crypto information (cipher and key) is added to a keyring
and then the device is constructed using a reference to this key.

I had some issues with the keyring code (mainly a deadlock problem with
crypto module autoloading): http://lkml.org/lkml/2005/2/4/113

I would also like to switch dm-crypt to acrypto once it's accepted into
the kernel.


Attachments:
signature.asc (189.00 B)
Dies ist ein digital signierter Nachrichtenteil

2005-03-10 19:31:26

by Randy.Dunlap

[permalink] [raw]
Subject: Re: [9/many] acrypto: crypto_lb.c

Evgeniy Polyakov wrote:
> --- /tmp/empty/crypto_lb.c 1970-01-01 03:00:00.000000000 +0300
> +++ ./acrypto/crypto_lb.c 2005-03-07 20:35:36.000000000 +0300
> @@ -0,0 +1,634 @@
> +/*
> + * crypto_lb.c
> + *
> + * Copyright (c) 2004 Evgeniy Polyakov <[email protected]>
> + */
> +
> +
> +static LIST_HEAD(crypto_lb_list);
> +static spinlock_t crypto_lb_lock = SPIN_LOCK_UNLOCKED;

use DEFINE_SPINLOCK()

> +static int lb_num = 0;

statics don't need init to 0.

> +static int lb_is_current(struct crypto_lb *l)
> +{
> + return (l->crypto_device_list != NULL && l->crypto_device_lock != NULL);
> +}
> +
> +static int lb_is_default(struct crypto_lb *l)
> +{
> + return (l == default_lb);
> +}

Is there a (or several) good reason(s) why several of these short
functions are not inline?
(unless some struct.fields need to point to them, of course)

> +static void __lb_set_default(struct crypto_lb *l)
> +{
> + default_lb = l;
> +}
> +
> +static int crypto_lb_match(struct device *dev, struct device_driver *drv)
> +{
> + return 1;
> +}
> +
> +static int crypto_lb_probe(struct device *dev)
> +{
> + return -ENODEV;
> +}
> +
> +static int crypto_lb_remove(struct device *dev)
> +{
> + return 0;
> +}
> +
> +static void crypto_lb_release(struct device *dev)
> +{
> + struct crypto_lb *d = container_of(dev, struct crypto_lb, device);
> +
> + complete(&d->dev_released);
> +}
> +
> +static void crypto_lb_class_release(struct class *class)
> +{
> +}
> +
> +static void crypto_lb_class_release_device(struct class_device *class_dev)
> +{
> +}

> +static ssize_t current_show(struct class_device *dev, char *buf)
> +{
> + struct crypto_lb *lb;
> + int off = 0;
> +
> + spin_lock_irq(&crypto_lb_lock);
> +
> + list_for_each_entry(lb, &crypto_lb_list, lb_entry) {
> + if (lb_is_current(lb))
> + off += sprintf(buf + off, "[");
> + if (lb_is_default(lb))
> + off += sprintf(buf + off, "(");
> + off += sprintf(buf + off, "%s", lb->name);
> + if (lb_is_default(lb))
> + off += sprintf(buf + off, ")");
> + if (lb_is_current(lb))
> + off += sprintf(buf + off, "]");
> + }
> +
> + spin_unlock_irq(&crypto_lb_lock);
> +
> + if (!off)
> + off = sprintf(buf, "No load balancers regitered yet.");
registered
> +
> + off += sprintf(buf + off, "\n");
> +
> + return off;
> +}

> +struct crypto_device *crypto_lb_find_device(struct crypto_session_initializer *ci, struct crypto_data *data)
> +{
> + struct crypto_device *dev;
> +
> + if (!current_lb)
> + return NULL;
> +
> + if (sci_binded(ci)) {
> + int found = 0;
> +
> + spin_lock_irq(crypto_device_lock);
> +
> + list_for_each_entry(dev, crypto_device_list, cdev_entry) {
> + if (dev->id == ci->bdev) {
> + found = 1;
> + break;
> + }
> + }
> +
> + spin_unlock_irq(crypto_device_lock);
> +
> + return (found) ? dev : NULL;
Don't need those parens.


--
~Randy

2005-03-15 16:29:11

by Randy.Dunlap

[permalink] [raw]
Subject: Re: [11/many] acrypto: crypto_main.c

Evgeniy Polyakov wrote:
> --- /tmp/empty/crypto_main.c 1970-01-01 03:00:00.000000000 +0300
> +++ ./acrypto/crypto_main.c 2005-03-07 20:35:36.000000000 +0300
> @@ -0,0 +1,374 @@
> +/*
> + * crypto_main.c
> + *
> + * Copyright (c) 2004 Evgeniy Polyakov <[email protected]>
> + *
> + */

> +struct crypto_session *crypto_session_alloc(struct crypto_session_initializer *ci, struct crypto_data *d)
> +{
> + struct crypto_session *s;
> +
> + s = crypto_session_create(ci, d);
> + if (!s)
> + return NULL;
> +
> + crypto_session_add(s);
> +
> + return s;
> +}
> +
> +

> +EXPORT_SYMBOL(crypto_session_alloc);
Why is this one not _GPL ?? It calls _create() and _add().

> +EXPORT_SYMBOL_GPL(crypto_session_create);
> +EXPORT_SYMBOL_GPL(crypto_session_add);
> +EXPORT_SYMBOL_GPL(crypto_session_dequeue_route);


--
~Randy

2005-03-15 17:31:01

by Randy.Dunlap

[permalink] [raw]
Subject: Re: [16/many] acrypto: crypto_user.h

Evgeniy Polyakov wrote:
> --- /tmp/empty/crypto_user.h 1970-01-01 03:00:00.000000000 +0300
> +++ ./acrypto/crypto_user.h 2005-03-07 20:35:36.000000000 +0300
> @@ -0,0 +1,52 @@
> +/*
> + * crypto_user.h
> + *
> + * Copyright (c) 2004 Evgeniy Polyakov <[email protected]>
> + *
> + */
> +
> +#ifndef __CRYPTO_USER_H
> +#define __CRYPTO_USER_H
> +
> +#define MAX_DATA_SIZE 3
> +#define ALIGN_DATA_SIZE(size) ((size + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

ISTM that we need a generic round_up() function or macro in kernel.h.

a.out.h, reiserfs_fs.h, and ufs_fs.h all have their own round-up
macros.

--
~Randy

2005-03-16 04:52:52

by Evgeniy Polyakov

[permalink] [raw]
Subject: Re: [11/many] acrypto: crypto_main.c

On Tue, 2005-03-15 at 08:24 -0800, Randy.Dunlap wrote:
> Evgeniy Polyakov wrote:
> > --- /tmp/empty/crypto_main.c 1970-01-01 03:00:00.000000000 +0300
> > +++ ./acrypto/crypto_main.c 2005-03-07 20:35:36.000000000 +0300
> > @@ -0,0 +1,374 @@
> > +/*
> > + * crypto_main.c
> > + *
> > + * Copyright (c) 2004 Evgeniy Polyakov <[email protected]>
> > + *
> > + */
>
> > +struct crypto_session *crypto_session_alloc(struct crypto_session_initializer *ci, struct crypto_data *d)
> > +{
> > + struct crypto_session *s;
> > +
> > + s = crypto_session_create(ci, d);
> > + if (!s)
> > + return NULL;
> > +
> > + crypto_session_add(s);
> > +
> > + return s;
> > +}
> > +
> > +
>
> > +EXPORT_SYMBOL(crypto_session_alloc);
> Why is this one not _GPL ?? It calls _create() and _add().

It is not allowed to control _create() and _add() methods, only call
them "atomically"
(without gap between functions where new route can be created).
So I export only that one functin as non-GPL-only for anyone
who wants to use asynchronous crypto in simple mode.
More powerfull control requires GPL.

> > +EXPORT_SYMBOL_GPL(crypto_session_create);
> > +EXPORT_SYMBOL_GPL(crypto_session_add);
> > +EXPORT_SYMBOL_GPL(crypto_session_dequeue_route);
>
>
--
Evgeniy Polyakov

Crash is better than data corruption -- Arthur Grabowski


Attachments:
signature.asc (189.00 B)
This is a digitally signed message part