2012-03-21 21:28:27

by Kent Yoder

Subject: [PATCH 00/17] Platform Facilities Option and crypto accelerator driver

This patch series adds support for a new device type, the Platform
Facilities Option (PFO). PFO resources are a set of accelerators that
share some system resources managed by the VIO bus. This patchset
includes basic support for these devices in the VIO bus code, along
with drivers for the random number generator and the cryptographic
accelerators.

Please cc me on replies.

Thanks,
Kent

Kent Yoder (12):
powerpc: crypto: AES-CBC mode routines for nx encryption
powerpc: crypto: AES-CCM mode routines for nx encryption
powerpc: crypto: AES-CTR mode routines for nx encryption
powerpc: crypto: AES-ECB mode routines for nx encryption
powerpc: crypto: AES-GCM mode routines for nx encryption
powerpc: crypto: AES-XCBC mode routines for nx encryption
powerpc: crypto: SHA256 hash routines for nx encryption
powerpc: crypto: SHA512 hash routines for nx encryption
powerpc: crypto: nx driver code supporting nx encryption
powerpc: crypto: sysfs routines and docs for the nx device driver
powerpc: crypto: Build files for the nx device driver
powerpc: crypto: enable the PFO-based encryption device

Michael Neuling (1):
hwrng: pseries - PFO-based hwrng driver

Robert Jennings (4):
powerpc: Add new hvcall constants to support PFO
powerpc: Add pseries update notifier for OFDT prop changes
powerpc: Add PFO support to the VIO bus
pseries: Enable the PFO-based RNG accelerator

Documentation/powerpc/pfo-nx-crypto.txt | 52 ++
arch/powerpc/Makefile | 1 +
arch/powerpc/crypto/nx/Makefile | 11 +
arch/powerpc/crypto/nx/nx-aes-cbc.c | 135 +++++
arch/powerpc/crypto/nx/nx-aes-ccm.c | 466 ++++++++++++++++++
arch/powerpc/crypto/nx/nx-aes-ctr.c | 175 +++++++
arch/powerpc/crypto/nx/nx-aes-ecb.c | 133 +++++
arch/powerpc/crypto/nx/nx-aes-gcm.c | 352 +++++++++++++
arch/powerpc/crypto/nx/nx-aes-xcbc.c | 230 +++++++++
arch/powerpc/crypto/nx/nx-sha256.c | 240 +++++++++
arch/powerpc/crypto/nx/nx-sha512.c | 259 ++++++++++
arch/powerpc/crypto/nx/nx.c | 710 +++++++++++++++++++++++++++
arch/powerpc/crypto/nx/nx.h | 190 +++++++
arch/powerpc/crypto/nx/nx_csbcpb.h | 246 +++++++++
arch/powerpc/crypto/nx/nx_sysfs.c | 194 ++++++++
arch/powerpc/include/asm/hvcall.h | 25 +-
arch/powerpc/include/asm/pSeries_reconfig.h | 12 +
arch/powerpc/include/asm/vio.h | 46 ++
arch/powerpc/kernel/prom_init.c | 19 +-
arch/powerpc/kernel/vio.c | 274 +++++++++--
arch/powerpc/platforms/pseries/reconfig.c | 7 +
drivers/char/hw_random/Kconfig | 13 +
drivers/char/hw_random/Makefile | 1 +
drivers/char/hw_random/pseries-rng.c | 99 ++++
drivers/crypto/Kconfig | 18 +
25 files changed, 3865 insertions(+), 43 deletions(-)
create mode 100644 Documentation/powerpc/pfo-nx-crypto.txt
create mode 100644 arch/powerpc/crypto/nx/Makefile
create mode 100644 arch/powerpc/crypto/nx/nx-aes-cbc.c
create mode 100644 arch/powerpc/crypto/nx/nx-aes-ccm.c
create mode 100644 arch/powerpc/crypto/nx/nx-aes-ctr.c
create mode 100644 arch/powerpc/crypto/nx/nx-aes-ecb.c
create mode 100644 arch/powerpc/crypto/nx/nx-aes-gcm.c
create mode 100644 arch/powerpc/crypto/nx/nx-aes-xcbc.c
create mode 100644 arch/powerpc/crypto/nx/nx-sha256.c
create mode 100644 arch/powerpc/crypto/nx/nx-sha512.c
create mode 100644 arch/powerpc/crypto/nx/nx.c
create mode 100644 arch/powerpc/crypto/nx/nx.h
create mode 100644 arch/powerpc/crypto/nx/nx_csbcpb.h
create mode 100644 arch/powerpc/crypto/nx/nx_sysfs.c
create mode 100644 drivers/char/hw_random/pseries-rng.c


2012-03-21 21:37:23

by Kent Yoder

Subject: [PATCH 01/17] powerpc: Add new hvcall constants to support PFO

From: Robert Jennings <[email protected]>

The Platform Facilities Option (PFO) adds several new h_calls and
more return codes.

Signed-off-by: Robert Jennings <[email protected]>
Signed-off-by: Kent Yoder <[email protected]>
---
arch/powerpc/include/asm/hvcall.h | 25 +++++++++++++++++++++++--
1 files changed, 23 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h
index 1c324ff..6122523 100644
--- a/arch/powerpc/include/asm/hvcall.h
+++ b/arch/powerpc/include/asm/hvcall.h
@@ -77,8 +77,27 @@
#define H_MR_CONDITION -43
#define H_NOT_ENOUGH_RESOURCES -44
#define H_R_STATE -45
-#define H_RESCINDEND -46
-#define H_MULTI_THREADS_ACTIVE -9005
+#define H_RESCINDED -46
+#define H_P2 -55
+#define H_P3 -56
+#define H_P4 -57
+#define H_P5 -58
+#define H_P6 -59
+#define H_P7 -60
+#define H_P8 -61
+#define H_P9 -62
+#define H_TOO_BIG -64
+#define H_OVERLAP -68
+#define H_INTERRUPT -69
+#define H_BAD_DATA -70
+#define H_NOT_ACTIVE -71
+#define H_SG_LIST -72
+#define H_OP_MODE -73
+#define H_COP_HW -74
+#define H_UNSUPPORTED_FLAG_START -256
+#define H_UNSUPPORTED_FLAG_END -511
+#define H_MULTI_THREADS_ACTIVE -9005
+#define H_OUTSTANDING_COP_OPS -9006


/* Long Busy is a condition that can be returned by the firmware
@@ -240,6 +259,8 @@
#define H_GET_MPP 0x2D4
#define H_HOME_NODE_ASSOCIATIVITY 0x2EC
#define H_BEST_ENERGY 0x2F4
+#define H_RANDOM 0x300
+#define H_COP 0x304
#define H_GET_MPP_X 0x314
#define MAX_HCALL_OPCODE H_GET_MPP_X

--
1.7.1

2012-03-21 21:38:10

by Kent Yoder

Subject: [PATCH 02/17] powerpc: Add pseries update notifier for OFDT prop changes

From: Robert Jennings <[email protected]>

This adds an update notifier mechanism for changes to properties in the
device tree. One use of this would be a device driver that needs to act
on changes to its properties in the device tree after a live migration
or a dynamic activation that is triggered by updates to OFDT properties.
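
As an illustration of the intended use (this example is not part of the
patch; the callback name and the message printed are hypothetical), a
driver could register for the new PSERIES_UPDATE_PROPERTY action like so:

  #include <linux/kernel.h>
  #include <linux/notifier.h>
  #include <linux/of.h>
  #include <asm/pSeries_reconfig.h>

  static int example_prop_update(struct notifier_block *nb,
                                 unsigned long action, void *data)
  {
          struct pSeries_reconfig_prop_update *upd = data;

          /* Act only on property-update notifications */
          if (action == PSERIES_UPDATE_PROPERTY)
                  pr_info("property %s updated on node %s\n",
                          upd->property->name, upd->node->full_name);

          return NOTIFY_OK;
  }

  static struct notifier_block example_nb = {
          .notifier_call = example_prop_update,
  };

  /* in driver init: pSeries_reconfig_notifier_register(&example_nb); */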

Signed-off-by: Robert Jennings <[email protected]>
Signed-off-by: Kent Yoder <[email protected]>
---
arch/powerpc/include/asm/pSeries_reconfig.h | 12 ++++++++++++
arch/powerpc/platforms/pseries/reconfig.c | 7 +++++++
2 files changed, 19 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/include/asm/pSeries_reconfig.h b/arch/powerpc/include/asm/pSeries_reconfig.h
index 23cd6cc..c07edfe 100644
--- a/arch/powerpc/include/asm/pSeries_reconfig.h
+++ b/arch/powerpc/include/asm/pSeries_reconfig.h
@@ -13,6 +13,18 @@
#define PSERIES_RECONFIG_REMOVE 0x0002
#define PSERIES_DRCONF_MEM_ADD 0x0003
#define PSERIES_DRCONF_MEM_REMOVE 0x0004
+#define PSERIES_UPDATE_PROPERTY 0x0005
+
+/**
+ * struct pSeries_reconfig_prop_update - Notifier value structure for OFDT property updates
+ *
+ * @node: Device tree node which owns the property being updated
+ * @property: Updated property
+ */
+struct pSeries_reconfig_prop_update {
+ struct device_node *node;
+ struct property *property;
+};

#ifdef CONFIG_PPC_PSERIES
extern int pSeries_reconfig_notifier_register(struct notifier_block *);
diff --git a/arch/powerpc/platforms/pseries/reconfig.c b/arch/powerpc/platforms/pseries/reconfig.c
index 168651a..7b3bf76 100644
--- a/arch/powerpc/platforms/pseries/reconfig.c
+++ b/arch/powerpc/platforms/pseries/reconfig.c
@@ -103,11 +103,13 @@ int pSeries_reconfig_notifier_register(struct notifier_block *nb)
{
return blocking_notifier_chain_register(&pSeries_reconfig_chain, nb);
}
+EXPORT_SYMBOL_GPL(pSeries_reconfig_notifier_register);

void pSeries_reconfig_notifier_unregister(struct notifier_block *nb)
{
blocking_notifier_chain_unregister(&pSeries_reconfig_chain, nb);
}
+EXPORT_SYMBOL_GPL(pSeries_reconfig_notifier_unregister);

int pSeries_reconfig_notify(unsigned long action, void *p)
{
@@ -426,6 +428,7 @@ static int do_remove_property(char *buf, size_t bufsize)
static int do_update_property(char *buf, size_t bufsize)
{
struct device_node *np;
+ struct pSeries_reconfig_prop_update upd_value;
unsigned char *value;
char *name, *end, *next_prop;
int rc, length;
@@ -454,6 +457,10 @@ static int do_update_property(char *buf, size_t bufsize)
return -ENODEV;
}

+ upd_value.node = np;
+ upd_value.property = newprop;
+ pSeries_reconfig_notify(PSERIES_UPDATE_PROPERTY, &upd_value);
+
rc = prom_update_property(np, newprop, oldprop);
if (rc)
return rc;
--
1.7.1

2012-03-21 21:37:39

by Kent Yoder

Subject: [PATCH 03/17] powerpc: Add PFO support to the VIO bus

From: Robert Jennings <[email protected]>

Add support for the Platform Facilities Option (PFO) to the VIO bus.
These devices have a separate root node in Open Firmware, which
requires additional parsing to map them into the existing VIO device
structure fields. This also adds the interface that PFO device drivers
use to make synchronous hypervisor calls.
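
For reference, a PFO driver would fill in a struct vio_pfo_op and hand
it to vio_h_cop_sync(); a minimal sketch follows (the buffer names and
flag values are illustrative, not taken from this patch):

  /* in_buf, out_buf and csbcpb_buf are assumed to be kernel buffers
   * addressable by their logical real address; csbcpb_buf must be a
   * 4k naturally-aligned block holding the CSB/CPB. */
  struct vio_pfo_op op = {
          .flags   = 0,             /* device-specific subfunction bits */
          .in      = __pa(in_buf),
          .inlen   = in_len,        /* negative would mean an sg list */
          .out     = __pa(out_buf),
          .outlen  = out_len,
          .csbcpb  = __pa(csbcpb_buf),
          .timeout = 0,             /* retry busy responses indefinitely */
  };
  int rc;

  rc = vio_h_cop_sync(vdev, &op);
  if (rc)
          dev_err(&vdev->dev, "H_COP failed: rc=%d hcall_err=%ld\n",
                  rc, op.hcall_err);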

Signed-off-by: Robert Jennings <[email protected]>
Signed-off-by: Kent Yoder <[email protected]>
---
arch/powerpc/include/asm/vio.h | 46 +++++++
arch/powerpc/kernel/vio.c | 274 ++++++++++++++++++++++++++++++++++------
2 files changed, 280 insertions(+), 40 deletions(-)

diff --git a/arch/powerpc/include/asm/vio.h b/arch/powerpc/include/asm/vio.h
index 0a290a1..5e4850b 100644
--- a/arch/powerpc/include/asm/vio.h
+++ b/arch/powerpc/include/asm/vio.h
@@ -46,6 +46,48 @@

struct iommu_table;

+/*
+ * Platform Facilities Option (PFO)-specific data
+ */
+
+/* Starting unit address for PFO devices on the VIO BUS */
+#define VIO_BASE_PFO_UA 0x50000000
+
+/**
+ * vio_pfo_op - PFO operation parameters
+ *
+ * @flags: h_call subfunctions and modifiers
+ * @in: Input data block logical real address
+ * @inlen: If non-negative, the length of the input data block. If negative,
+ * the length of the input data descriptor list in bytes.
+ * @out: Output data block logical real address
+ * @outlen: If non-negative, the length of the output data block. If negative,
+ * the length of the output data descriptor list in bytes.
+ * @csbcpb: Logical real address of the 4k naturally-aligned storage block
+ * containing the CSB & optional FC field specific CPB
+ * @timeout: # of milliseconds to retry h_call, 0 for no timeout.
+ * @hcall_err: the return value from the last h_call made for this operation
+ */
+struct vio_pfo_op {
+ u64 flags;
+ s64 in;
+ s64 inlen;
+ s64 out;
+ s64 outlen;
+ u64 csbcpb;
+ void *done;
+ unsigned long handle;
+ unsigned int timeout;
+ long hcall_err;
+};
+
+/* End PFO specific data */
+
+enum vio_dev_family {
+ VDEVICE, /* The OF node is a child of /vdevice */
+ PFO, /* The OF node is a child of /ibm,platform-facilities */
+};
+
/**
* vio_dev - This structure is used to describe virtual I/O devices.
*
@@ -58,6 +100,7 @@ struct vio_dev {
const char *name;
const char *type;
uint32_t unit_address;
+ uint32_t resource_id;
unsigned int irq;
struct {
size_t desired;
@@ -65,6 +108,7 @@ struct vio_dev {
size_t allocated;
atomic_t allocs_failed;
} cmo;
+ enum vio_dev_family family;
struct device dev;
};

@@ -87,6 +131,8 @@ extern void vio_cmo_set_dev_desired(struct vio_dev *viodev, size_t desired);

extern void __devinit vio_unregister_device(struct vio_dev *dev);

+extern int vio_h_cop_sync(struct vio_dev *vdev, struct vio_pfo_op *op);
+
struct device_node;

extern struct vio_dev *vio_register_device_node(
diff --git a/arch/powerpc/kernel/vio.c b/arch/powerpc/kernel/vio.c
index 8b08629..802cc06 100644
--- a/arch/powerpc/kernel/vio.c
+++ b/arch/powerpc/kernel/vio.c
@@ -14,7 +14,9 @@
* 2 of the License, or (at your option) any later version.
*/

+#include <linux/cpu.h>
#include <linux/types.h>
+#include <linux/delay.h>
#include <linux/stat.h>
#include <linux/device.h>
#include <linux/init.h>
@@ -712,13 +714,26 @@ static int vio_cmo_bus_probe(struct vio_dev *viodev)
struct vio_driver *viodrv = to_vio_driver(dev->driver);
unsigned long flags;
size_t size;
+ bool dma_capable = false;
+
+ /* A device requires entitlement if it has a DMA window property */
+ switch (viodev->family) {
+ case VDEVICE:
+ if (of_get_property(viodev->dev.of_node,
+ "ibm,my-dma-window", NULL))
+ dma_capable = true;
+ break;
+ case PFO:
+ dma_capable = false;
+ break;
+ default:
+ dev_warn(dev, "unknown device family: %d\n", viodev->family);
+ BUG();
+ break;
+ }

- /*
- * Check to see that device has a DMA window and configure
- * entitlement for the device.
- */
- if (of_get_property(viodev->dev.of_node,
- "ibm,my-dma-window", NULL)) {
+ /* Configure entitlement for the device. */
+ if (dma_capable) {
/* Check that the driver is CMO enabled and get desired DMA */
if (!viodrv->get_desired_dma) {
dev_err(dev, "%s: device driver does not support CMO\n",
@@ -1054,6 +1069,94 @@ static void vio_cmo_sysfs_init(void) { }
EXPORT_SYMBOL(vio_cmo_entitlement_update);
EXPORT_SYMBOL(vio_cmo_set_dev_desired);

+
+/*
+ * Platform Facilities Option (PFO) support
+ */
+
+/**
+ * vio_h_cop_sync - Perform a synchronous PFO co-processor operation
+ *
+ * @vdev - Pointer to a struct vio_dev for device
+ * @op - Pointer to a struct vio_pfo_op for the operation parameters
+ *
+ * Calls the hypervisor to synchronously perform the PFO operation
+ * described in @op. In the case of a busy response from the hypervisor,
+ * the operation will be re-submitted indefinitely unless a non-zero timeout
+ * is specified or an error occurs. The timeout places a limit on when to
+ * stop re-submitting an operation; the total time can be exceeded if an
+ * operation is in progress.
+ *
+ * op->hcall_err will be set to the return value from the
+ * last H_COP call, or to 0 if an error not involving the h_call
+ * was encountered.
+ *
+ * Returns:
+ * 0 on success,
+ * -EINVAL if the h_call fails due to an invalid parameter,
+ * -E2BIG if the h_call can not be performed synchronously,
+ * -EBUSY if a timeout is specified and has elapsed,
+ * -EACCES if the memory area for data/status has been rescinded, or
+ * -EPERM if a hardware fault has been indicated
+ */
+int vio_h_cop_sync(struct vio_dev *vdev, struct vio_pfo_op *op)
+{
+ struct device *dev = &vdev->dev;
+ unsigned long deadline = 0;
+ long hret = 0;
+ int ret = 0;
+
+ if (op->timeout)
+ deadline = jiffies + msecs_to_jiffies(op->timeout);
+
+ while (true) {
+ hret = plpar_hcall_norets(H_COP, op->flags,
+ vdev->resource_id,
+ op->in, op->inlen, op->out,
+ op->outlen, op->csbcpb);
+
+ if (hret == H_SUCCESS ||
+ (hret != H_NOT_ENOUGH_RESOURCES &&
+ hret != H_BUSY && hret != H_RESOURCE) ||
+ (op->timeout && time_after(deadline, jiffies)))
+ break;
+
+ dev_dbg(dev, "%s: hcall ret(%ld), retrying.\n", __func__, hret);
+ }
+
+ switch (hret) {
+ case H_SUCCESS:
+ ret = 0;
+ break;
+ case H_OP_MODE:
+ case H_TOO_BIG:
+ ret = -E2BIG;
+ break;
+ case H_RESCINDED:
+ ret = -EACCES;
+ break;
+ case H_HARDWARE:
+ ret = -EPERM;
+ break;
+ case H_NOT_ENOUGH_RESOURCES:
+ case H_RESOURCE:
+ case H_BUSY:
+ ret = -EBUSY;
+ break;
+ default:
+ ret = -EINVAL;
+ break;
+ }
+
+ if (ret)
+ dev_dbg(dev, "%s: Sync h_cop_op failure (ret:%d) (hret:%ld)\n",
+ __func__, ret, hret);
+
+ op->hcall_err = hret;
+ return ret;
+}
+EXPORT_SYMBOL(vio_h_cop_sync);
+
static struct iommu_table *vio_build_iommu_table(struct vio_dev *dev)
{
const unsigned char *dma_window;
@@ -1215,35 +1318,86 @@ static void __devinit vio_dev_release(struct device *dev)
struct vio_dev *vio_register_device_node(struct device_node *of_node)
{
struct vio_dev *viodev;
+ struct device_node *parent_node;
const unsigned int *unit_address;
+ const unsigned int *pfo_resid = NULL;
+ enum vio_dev_family family;
+ const char *of_node_name = of_node->name ? of_node->name : "<unknown>";

- /* we need the 'device_type' property, in order to match with drivers */
- if (of_node->type == NULL) {
- printk(KERN_WARNING "%s: node %s missing 'device_type'\n",
- __func__,
- of_node->name ? of_node->name : "<unknown>");
+ /*
+ * Determine if this node is under the /vdevice node or under the
+ * /ibm,platform-facilities node. This decides the device's family.
+ */
+ parent_node = of_get_parent(of_node);
+ if (parent_node) {
+ if (!strcmp(parent_node->full_name, "/ibm,platform-facilities"))
+ family = PFO;
+ else if (!strcmp(parent_node->full_name, "/vdevice"))
+ family = VDEVICE;
+ else {
+ pr_warn("%s: parent(%s) of %s not recognized.\n",
+ __func__,
+ parent_node->full_name,
+ of_node_name);
+ of_node_put(parent_node);
+ return NULL;
+ }
+ of_node_put(parent_node);
+ } else {
+ pr_warn("%s: could not determine the parent of node %s.\n",
+ __func__, of_node_name);
return NULL;
}

- unit_address = of_get_property(of_node, "reg", NULL);
- if (unit_address == NULL) {
- printk(KERN_WARNING "%s: node %s missing 'reg'\n",
- __func__,
- of_node->name ? of_node->name : "<unknown>");
- return NULL;
- }
+ if (family == PFO)
+ if (of_get_property(of_node, "interrupt-controller", NULL)) {
+ pr_debug("%s: Skipping the interrupt controller %s.\n",
+ __func__, of_node_name);
+ return NULL;
+ }

/* allocate a vio_dev for this node */
viodev = kzalloc(sizeof(struct vio_dev), GFP_KERNEL);
- if (viodev == NULL)
+ if (viodev == NULL) {
+ pr_warn("%s: allocation failure for VIO device.\n", __func__);
return NULL;
+ }
+
+ /* we need the 'device_type' property, in order to match with drivers */
+ viodev->family = family;
+ if (viodev->family == VDEVICE) {
+ if (of_node->type != NULL)
+ viodev->type = of_node->type;
+ else {
+ pr_warn("%s: node %s is missing the 'device_type' "
+ "property.\n", __func__, of_node_name);
+ goto out;
+ }

- viodev->irq = irq_of_parse_and_map(of_node, 0);
+ unit_address = of_get_property(of_node, "reg", NULL);
+ if (unit_address == NULL) {
+ pr_warn("%s: node %s missing 'reg'\n",
+ __func__, of_node_name);
+ goto out;
+ }
+ dev_set_name(&viodev->dev, "%x", *unit_address);
+ viodev->irq = irq_of_parse_and_map(of_node, 0);
+ viodev->unit_address = *unit_address;
+ } else {
+ /* PFO devices need their resource_id for submitting COP_OPs
+ * This is an optional field for devices, but is required when
+ * performing synchronous ops */
+ pfo_resid = of_get_property(of_node, "ibm,resource-id", NULL);
+ if (pfo_resid != NULL)
+ viodev->resource_id = *pfo_resid;
+
+ unit_address = NULL;
+ dev_set_name(&viodev->dev, "%s", of_node_name);
+ viodev->type = of_node_name;
+ viodev->irq = 0;
+ }

- dev_set_name(&viodev->dev, "%x", *unit_address);
viodev->name = of_node->name;
- viodev->type = of_node->type;
- viodev->unit_address = *unit_address;
if (firmware_has_feature(FW_FEATURE_ISERIES)) {
unit_address = of_get_property(of_node,
"linux,unit_address", NULL);
@@ -1277,16 +1431,51 @@ struct vio_dev *vio_register_device_node(struct device_node *of_node)
}

return viodev;
+
+out: /* Use this exit point for any return prior to device_register */
+ kfree(viodev);
+
+ return NULL;
}
EXPORT_SYMBOL(vio_register_device_node);

+/*
+ * vio_bus_scan_register_devices - Scan OF and register each child device
+ * @root_name - OF node name for the root of the subtree to search.
+ * This must be non-NULL
+ *
+ * Starting from the root node provided, register the device node for
+ * each child beneath the root.
+ */
+static void vio_bus_scan_register_devices(char *root_name)
+{
+ struct device_node *node_root, *node_child;
+
+ if (!root_name)
+ return;
+
+ node_root = of_find_node_by_name(NULL, root_name);
+ if (node_root) {
+
+ /*
+ * Create struct vio_devices for each virtual device in
+ * the device tree. Drivers will associate with them later.
+ */
+ node_child = of_get_next_child(node_root, NULL);
+ while (node_child) {
+ vio_register_device_node(node_child);
+ node_child = of_get_next_child(node_root, node_child);
+ }
+ of_node_put(node_root);
+ }
+}
+
/**
* vio_bus_init: - Initialize the virtual IO bus
*/
static int __init vio_bus_init(void)
{
int err;
- struct device_node *node_vroot;

if (firmware_has_feature(FW_FEATURE_CMO))
vio_cmo_sysfs_init();
@@ -1311,19 +1500,8 @@ static int __init vio_bus_init(void)
if (firmware_has_feature(FW_FEATURE_CMO))
vio_cmo_bus_init();

- node_vroot = of_find_node_by_name(NULL, "vdevice");
- if (node_vroot) {
- struct device_node *of_node;
-
- /*
- * Create struct vio_devices for each virtual device in
- * the device tree. Drivers will associate with them later.
- */
- for (of_node = node_vroot->child; of_node != NULL;
- of_node = of_node->sibling)
- vio_register_device_node(of_node);
- of_node_put(node_vroot);
- }
+ vio_bus_scan_register_devices("vdevice");
+ vio_bus_scan_register_devices("ibm,platform-facilities");

return 0;
}
@@ -1446,12 +1624,28 @@ struct vio_dev *vio_find_node(struct device_node *vnode)
{
const uint32_t *unit_address;
char kobj_name[20];
+ struct device_node *vnode_parent;
+ const char *dev_type;
+
+ vnode_parent = of_get_parent(vnode);
+ if (!vnode_parent)
+ return NULL;
+
+ dev_type = of_get_property(vnode_parent, "device_type", NULL);
+ of_node_put(vnode_parent);
+ if (!dev_type)
+ return NULL;

/* construct the kobject name from the device node */
- unit_address = of_get_property(vnode, "reg", NULL);
- if (!unit_address)
+ if (!strcmp(dev_type, "vdevice")) {
+ unit_address = of_get_property(vnode, "reg", NULL);
+ if (!unit_address)
+ return NULL;
+ snprintf(kobj_name, sizeof(kobj_name), "%x", *unit_address);
+ } else if (!strcmp(dev_type, "ibm,platform-facilities"))
+ snprintf(kobj_name, sizeof(kobj_name), "%s", vnode->name);
+ else
return NULL;
- snprintf(kobj_name, sizeof(kobj_name), "%x", *unit_address);

return vio_find_name(kobj_name);
}
--
1.7.1

2012-03-21 21:38:39

by Kent Yoder

Subject: [PATCH 04/17] hwrng: pseries - PFO-based hwrng driver

From: Michael Neuling <[email protected]>

Adds support for the Platform Facilities Option (PFO)-based hardware
random number generator for POWER hardware.

Signed-off-by: Michael Neuling <[email protected]>
Signed-off-by: Robert Jennings <[email protected]>
Signed-off-by: Kent Yoder <[email protected]>
---
drivers/char/hw_random/Kconfig | 13 +++++
drivers/char/hw_random/Makefile | 1 +
drivers/char/hw_random/pseries-rng.c | 99 ++++++++++++++++++++++++++++++++++
3 files changed, 113 insertions(+), 0 deletions(-)
create mode 100644 drivers/char/hw_random/pseries-rng.c

diff --git a/drivers/char/hw_random/Kconfig b/drivers/char/hw_random/Kconfig
index 0689bf6..9355347 100644
--- a/drivers/char/hw_random/Kconfig
+++ b/drivers/char/hw_random/Kconfig
@@ -250,3 +250,16 @@ config UML_RANDOM
(check your distro, or download from
http://sourceforge.net/projects/gkernel/). rngd periodically reads
/dev/hwrng and injects the entropy into /dev/random.
+
+config HW_RANDOM_PSERIES
+ tristate "pSeries HW Random Number Generator support"
+ depends on HW_RANDOM && PPC64 && IBMVIO
+ default HW_RANDOM
+ ---help---
+ This driver provides kernel-side support for the Random Number
+ Generator hardware found on POWER7+ machines and above.
+
+ To compile this driver as a module, choose M here: the
+ module will be called pseries-rng.
+
+ If unsure, say Y.
diff --git a/drivers/char/hw_random/Makefile b/drivers/char/hw_random/Makefile
index b2ff526..d901dfa 100644
--- a/drivers/char/hw_random/Makefile
+++ b/drivers/char/hw_random/Makefile
@@ -22,3 +22,4 @@ obj-$(CONFIG_HW_RANDOM_OCTEON) += octeon-rng.o
obj-$(CONFIG_HW_RANDOM_NOMADIK) += nomadik-rng.o
obj-$(CONFIG_HW_RANDOM_PICOXCELL) += picoxcell-rng.o
obj-$(CONFIG_HW_RANDOM_PPC4XX) += ppc4xx-rng.o
+obj-$(CONFIG_HW_RANDOM_PSERIES) += pseries-rng.o
diff --git a/drivers/char/hw_random/pseries-rng.c b/drivers/char/hw_random/pseries-rng.c
new file mode 100644
index 0000000..6ee70ca
--- /dev/null
+++ b/drivers/char/hw_random/pseries-rng.c
@@ -0,0 +1,99 @@
+/*
+ * Copyright (C) 2010 Michael Neuling IBM Corporation
+ *
+ * Driver for the pseries hardware RNG for POWER7+ and above
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <linux/module.h>
+#include <linux/hw_random.h>
+#include <asm/vio.h>
+
+#define MODULE_NAME "pseries-rng"
+
+static int pseries_rng_data_read(struct hwrng *rng, u32 *data)
+{
+ if (plpar_hcall(H_RANDOM, (unsigned long *)data) != H_SUCCESS) {
+ printk(KERN_ERR "pseries rng hcall error\n");
+ return 0;
+ }
+ return 8;
+}
+
+/**
+ * pseries_rng_get_desired_dma - Return desired DMA allocation for CMO operations
+ *
+ * This is a required function for a driver to operate in a CMO environment
+ * but this device does not make use of DMA allocations, so return 0.
+ *
+ * Return value:
+ * Number of bytes of IO data the driver will need to perform well -> 0
+ */
+static unsigned long pseries_rng_get_desired_dma(struct vio_dev *vdev)
+{
+ return 0;
+};
+
+static struct hwrng pseries_rng = {
+ .name = MODULE_NAME,
+ .data_read = pseries_rng_data_read,
+};
+
+static int __init pseries_rng_probe(struct vio_dev *dev,
+ const struct vio_device_id *id)
+{
+ return hwrng_register(&pseries_rng);
+}
+
+static int __exit pseries_rng_remove(struct vio_dev *dev)
+{
+ hwrng_unregister(&pseries_rng);
+ return 0;
+}
+
+static struct vio_device_id pseries_rng_driver_ids[] = {
+ { "ibm,random-v1", "ibm,random"},
+ { "", "" }
+};
+MODULE_DEVICE_TABLE(vio, pseries_rng_driver_ids);
+
+static struct vio_driver pseries_rng_driver = {
+ .driver = {
+ .name = MODULE_NAME,
+ .owner = THIS_MODULE,
+ },
+ .probe = pseries_rng_probe,
+ .remove = pseries_rng_remove,
+ .get_desired_dma = pseries_rng_get_desired_dma,
+ .id_table = pseries_rng_driver_ids
+};
+
+static int __init rng_init(void)
+{
+ printk(KERN_INFO "Registering IBM pSeries RNG driver\n");
+ return vio_register_driver(&pseries_rng_driver);
+}
+
+module_init(rng_init);
+
+static void __exit rng_exit(void)
+{
+ vio_unregister_driver(&pseries_rng_driver);
+}
+module_exit(rng_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Michael Neuling <[email protected]>");
+MODULE_DESCRIPTION("H/W RNG driver for IBM pSeries processors");
--
1.7.1

2012-03-21 21:38:41

by Kent Yoder

Subject: [PATCH 05/17] pseries: Enable the PFO-based RNG accelerator

From: Robert Jennings <[email protected]>

This patch adds the CAS bits to advertise support for the Platform
Facilities Option (PFO)-based random number generator accelerator.
The pseries-rng driver provides support for this hardware feature.

Signed-off-by: Robert Jennings <[email protected]>
Signed-off-by: Kent Yoder <[email protected]>
---
arch/powerpc/kernel/prom_init.c | 13 ++++++++++++-
1 files changed, 12 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/kernel/prom_init.c b/arch/powerpc/kernel/prom_init.c
index eca626e..6691077 100644
--- a/arch/powerpc/kernel/prom_init.c
+++ b/arch/powerpc/kernel/prom_init.c
@@ -710,6 +710,12 @@ static void __init early_cmdline_parse(void)
#define OV5_XCMO 0x00
#endif
#define OV5_TYPE1_AFFINITY 0x80 /* Type 1 NUMA affinity */
+#if defined(CONFIG_HW_RANDOM_PSERIES) || \
+ defined(CONFIG_HW_RANDOM_PSERIES_MODULE)
+#define OV5_PFO_HW_RNG 0x80 /* PFO Random Number Generator */
+#else
+#define OV5_PFO_HW_RNG 0x00
+#endif

/* Option Vector 6: IBM PAPR hints */
#define OV6_LINUX 0x02 /* Linux is our OS */
@@ -757,7 +763,7 @@ static unsigned char ibm_architecture_vec[] = {
0, /* don't halt */

/* option vector 5: PAPR/OF options */
- 13 - 2, /* length */
+ 18 - 2, /* length */
0, /* don't ignore, don't halt */
OV5_LPAR | OV5_SPLPAR | OV5_LARGE_PAGES | OV5_DRCONF_MEMORY |
OV5_DONATE_DEDICATE_CPU | OV5_MSI,
@@ -773,6 +779,11 @@ static unsigned char ibm_architecture_vec[] = {
*/
#define IBM_ARCH_VEC_NRCORES_OFFSET 100
W(NR_CPUS), /* number of cores supported */
+ 0,
+ 0,
+ 0,
+ 0,
+ OV5_PFO_HW_RNG,

/* option vector 6: IBM PAPR hints */
4 - 2, /* length */
--
1.7.1

2012-03-21 21:39:28

by Kent Yoder

Subject: [PATCH 06/17] powerpc: crypto: AES-CBC mode routines for nx encryption

These routines add support for AES in CBC mode on the Power7+ CPU's
in-Nest accelerator driver.
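
Once registered, the algorithm is reached through the normal kernel
crypto API as "cbc(aes)" (driver name "cbc-aes-nx"); a minimal,
hypothetical in-kernel caller (key, iv and buf are assumed to be
supplied elsewhere) would look roughly like:

  struct crypto_blkcipher *tfm;
  struct blkcipher_desc desc;
  struct scatterlist sg;

  /* Let the crypto core pick the highest-priority cbc(aes) provider */
  tfm = crypto_alloc_blkcipher("cbc(aes)", 0, 0);
  if (IS_ERR(tfm))
          return PTR_ERR(tfm);

  crypto_blkcipher_setkey(tfm, key, AES_KEYSIZE_128);
  crypto_blkcipher_set_iv(tfm, iv, AES_BLOCK_SIZE);

  desc.tfm = tfm;
  desc.flags = 0;
  sg_init_one(&sg, buf, AES_BLOCK_SIZE);

  /* encrypt one block in place */
  crypto_blkcipher_encrypt(&desc, &sg, &sg, AES_BLOCK_SIZE);
  crypto_free_blkcipher(tfm);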

Signed-off-by: Kent Yoder <[email protected]>
---
arch/powerpc/crypto/nx/nx-aes-cbc.c | 135 +++++++++++++++++++++++++++++++++++
1 files changed, 135 insertions(+), 0 deletions(-)
create mode 100644 arch/powerpc/crypto/nx/nx-aes-cbc.c

diff --git a/arch/powerpc/crypto/nx/nx-aes-cbc.c b/arch/powerpc/crypto/nx/nx-aes-cbc.c
new file mode 100644
index 0000000..01fa5cb
--- /dev/null
+++ b/arch/powerpc/crypto/nx/nx-aes-cbc.c
@@ -0,0 +1,135 @@
+/**
+ * AES CBC routines supporting the Power 7+ Nest Accelerators driver
+ *
+ * Copyright (C) 2011-2012 International Business Machines Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 only.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ *
+ * Author: Kent Yoder <[email protected]>
+ */
+
+#include <crypto/aes.h>
+#include <crypto/algapi.h>
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/crypto.h>
+#include <asm/vio.h>
+
+#include "nx_csbcpb.h"
+#include "nx.h"
+
+
+static int cbc_aes_nx_set_key(struct crypto_tfm *tfm,
+ const u8 *in_key,
+ unsigned int key_len)
+{
+ struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(tfm);
+ struct nx_csbcpb *csbcpb = nx_ctx->csbcpb;
+
+ nx_ctx_init(nx_ctx, HCOP_FC_AES);
+
+ switch (key_len) {
+ case AES_KEYSIZE_128:
+ csbcpb->cpb.hdr.key_size = NX_KS_AES_128;
+ nx_ctx->ap = &nx_ctx->props[NX_PROPS_AES_128];
+ break;
+ case AES_KEYSIZE_192:
+ csbcpb->cpb.hdr.key_size = NX_KS_AES_192;
+ nx_ctx->ap = &nx_ctx->props[NX_PROPS_AES_192];
+ break;
+ case AES_KEYSIZE_256:
+ csbcpb->cpb.hdr.key_size = NX_KS_AES_256;
+ nx_ctx->ap = &nx_ctx->props[NX_PROPS_AES_256];
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ csbcpb->cpb.hdr.mode = NX_MODE_AES_CBC;
+ memcpy(csbcpb->cpb.aes_cbc.key, in_key, key_len);
+
+ return 0;
+}
+
+static int cbc_aes_nx_crypt(struct blkcipher_desc *desc,
+ struct scatterlist *dst,
+ struct scatterlist *src,
+ unsigned int nbytes,
+ int enc)
+{
+ struct nx_crypto_ctx *nx_ctx = crypto_blkcipher_ctx(desc->tfm);
+ struct nx_csbcpb *csbcpb = nx_ctx->csbcpb;
+ int rc;
+
+ if (nbytes > nx_ctx->ap->databytelen)
+ return -EINVAL;
+
+ csbcpb->cpb.hdr.fdm.ende = enc;
+
+ rc = nx_build_sg_lists(nx_ctx, desc, dst, src, nbytes,
+ csbcpb->cpb.aes_cbc.iv);
+ if (rc)
+ goto out;
+
+ if (!nx_ctx->op.inlen || !nx_ctx->op.outlen)
+ return -EINVAL;
+
+ rc = nx_hcall_sync(nx_ctx, &nx_ctx->op);
+ if (rc)
+ goto out;
+
+ atomic_inc(&(nx_ctx->stats->aes_ops));
+ atomic64_add(csbcpb->csb.processed_byte_count,
+ &(nx_ctx->stats->aes_bytes));
+out:
+ return rc;
+}
+
+static int cbc_aes_nx_encrypt(struct blkcipher_desc *desc,
+ struct scatterlist *dst,
+ struct scatterlist *src,
+ unsigned int nbytes)
+{
+ return cbc_aes_nx_crypt(desc, dst, src, nbytes, 1);
+}
+
+static int cbc_aes_nx_decrypt(struct blkcipher_desc *desc,
+ struct scatterlist *dst,
+ struct scatterlist *src,
+ unsigned int nbytes)
+{
+ return cbc_aes_nx_crypt(desc, dst, src, nbytes, 0);
+}
+
+struct crypto_alg nx_cbc_aes_alg = {
+ .cra_name = "cbc(aes)",
+ .cra_driver_name = "cbc-aes-nx",
+ .cra_priority = 300,
+ .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
+ .cra_blocksize = AES_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct nx_crypto_ctx),
+ .cra_type = &crypto_blkcipher_type,
+ .cra_module = THIS_MODULE,
+ .cra_list = LIST_HEAD_INIT(nx_cbc_aes_alg.cra_list),
+ .cra_init = nx_crypto_ctx_aes_cbc_init,
+ .cra_exit = nx_crypto_ctx_exit,
+ .cra_blkcipher = {
+ .min_keysize = AES_MIN_KEY_SIZE,
+ .max_keysize = AES_MAX_KEY_SIZE,
+ .ivsize = AES_BLOCK_SIZE,
+ .setkey = cbc_aes_nx_set_key,
+ .encrypt = cbc_aes_nx_encrypt,
+ .decrypt = cbc_aes_nx_decrypt,
+ }
+};
--
1.7.1

2012-03-21 21:40:10

by Kent Yoder

Subject: [PATCH 09/17] powerpc: crypto: AES-ECB mode routines for nx encryption

These routines add support for AES in ECB mode on the Power7+ CPU's
in-Nest accelerator driver.

Signed-off-by: Kent Yoder <[email protected]>
---
arch/powerpc/crypto/nx/nx-aes-ecb.c | 133 +++++++++++++++++++++++++++++++++++
1 files changed, 133 insertions(+), 0 deletions(-)
create mode 100644 arch/powerpc/crypto/nx/nx-aes-ecb.c

diff --git a/arch/powerpc/crypto/nx/nx-aes-ecb.c b/arch/powerpc/crypto/nx/nx-aes-ecb.c
new file mode 100644
index 0000000..7ceef65
--- /dev/null
+++ b/arch/powerpc/crypto/nx/nx-aes-ecb.c
@@ -0,0 +1,133 @@
+/**
+ * AES ECB routines supporting the Power 7+ Nest Accelerators driver
+ *
+ * Copyright (C) 2011-2012 International Business Machines Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 only.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ *
+ * Author: Kent Yoder <[email protected]>
+ */
+
+#include <crypto/aes.h>
+#include <crypto/algapi.h>
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/crypto.h>
+#include <asm/vio.h>
+
+#include "nx_csbcpb.h"
+#include "nx.h"
+
+
+static int ecb_aes_nx_set_key(struct crypto_tfm *tfm,
+ const u8 *in_key,
+ unsigned int key_len)
+{
+ struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(tfm);
+ struct nx_csbcpb *csbcpb = (struct nx_csbcpb *)nx_ctx->csbcpb;
+
+ nx_ctx_init(nx_ctx, HCOP_FC_AES);
+
+ switch (key_len) {
+ case AES_KEYSIZE_128:
+ csbcpb->cpb.hdr.key_size = NX_KS_AES_128;
+ nx_ctx->ap = &nx_ctx->props[NX_PROPS_AES_128];
+ break;
+ case AES_KEYSIZE_192:
+ csbcpb->cpb.hdr.key_size = NX_KS_AES_192;
+ nx_ctx->ap = &nx_ctx->props[NX_PROPS_AES_192];
+ break;
+ case AES_KEYSIZE_256:
+ csbcpb->cpb.hdr.key_size = NX_KS_AES_256;
+ nx_ctx->ap = &nx_ctx->props[NX_PROPS_AES_256];
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ csbcpb->cpb.hdr.mode = NX_MODE_AES_ECB;
+ memcpy(csbcpb->cpb.aes_ecb.key, in_key, key_len);
+
+ return 0;
+}
+
+static int ecb_aes_nx_crypt(struct blkcipher_desc *desc,
+ struct scatterlist *dst,
+ struct scatterlist *src,
+ unsigned int nbytes,
+ int enc)
+{
+ struct nx_crypto_ctx *nx_ctx = crypto_blkcipher_ctx(desc->tfm);
+ struct nx_csbcpb *csbcpb = nx_ctx->csbcpb;
+ int rc;
+
+ if (nbytes > nx_ctx->ap->databytelen)
+ return -EINVAL;
+
+ csbcpb->cpb.hdr.fdm.ende = enc;
+
+ rc = nx_build_sg_lists(nx_ctx, desc, dst, src, nbytes, NULL);
+ if (rc)
+ goto out;
+
+ if (!nx_ctx->op.inlen || !nx_ctx->op.outlen)
+ return -EINVAL;
+
+ rc = nx_hcall_sync(nx_ctx, &nx_ctx->op);
+ if (rc)
+ goto out;
+
+ atomic_inc(&(nx_ctx->stats->aes_ops));
+ atomic64_add(csbcpb->csb.processed_byte_count,
+ &(nx_ctx->stats->aes_bytes));
+out:
+ return rc;
+}
+
+static int ecb_aes_nx_encrypt(struct blkcipher_desc *desc,
+ struct scatterlist *dst,
+ struct scatterlist *src,
+ unsigned int nbytes)
+{
+ return ecb_aes_nx_crypt(desc, dst, src, nbytes, 1);
+}
+
+static int ecb_aes_nx_decrypt(struct blkcipher_desc *desc,
+ struct scatterlist *dst,
+ struct scatterlist *src,
+ unsigned int nbytes)
+{
+ return ecb_aes_nx_crypt(desc, dst, src, nbytes, 0);
+}
+
+struct crypto_alg nx_ecb_aes_alg = {
+ .cra_name = "ecb(aes)",
+ .cra_driver_name = "ecb-aes-nx",
+ .cra_priority = 300,
+ .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
+ .cra_blocksize = AES_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct nx_crypto_ctx),
+ .cra_type = &crypto_blkcipher_type,
+ .cra_module = THIS_MODULE,
+ .cra_list = LIST_HEAD_INIT(nx_ecb_aes_alg.cra_list),
+ .cra_init = nx_crypto_ctx_aes_ecb_init,
+ .cra_exit = nx_crypto_ctx_exit,
+ .cra_blkcipher = {
+ .min_keysize = AES_MIN_KEY_SIZE,
+ .max_keysize = AES_MAX_KEY_SIZE,
+ .setkey = ecb_aes_nx_set_key,
+ .encrypt = ecb_aes_nx_encrypt,
+ .decrypt = ecb_aes_nx_decrypt,
+ }
+};
--
1.7.1

2012-03-21 21:40:22

by Kent Yoder

Subject: [PATCH 10/17] powerpc: crypto: AES-GCM mode routines for nx encryption

These routines add support for AES in GCM mode on the Power7+ CPU's
in-Nest accelerator driver.

Signed-off-by: Kent Yoder <[email protected]>
---
arch/powerpc/crypto/nx/nx-aes-gcm.c | 352 +++++++++++++++++++++++++++++++++++
1 files changed, 352 insertions(+), 0 deletions(-)
create mode 100644 arch/powerpc/crypto/nx/nx-aes-gcm.c

diff --git a/arch/powerpc/crypto/nx/nx-aes-gcm.c b/arch/powerpc/crypto/nx/nx-aes-gcm.c
new file mode 100644
index 0000000..c336ef4
--- /dev/null
+++ b/arch/powerpc/crypto/nx/nx-aes-gcm.c
@@ -0,0 +1,352 @@
+/**
+ * AES GCM routines supporting the Power 7+ Nest Accelerators driver
+ *
+ * Copyright (C) 2012 International Business Machines Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 only.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ *
+ * Author: Kent Yoder <[email protected]>
+ */
+
+#include <crypto/internal/aead.h>
+#include <crypto/aes.h>
+#include <crypto/algapi.h>
+#include <crypto/scatterwalk.h>
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/crypto.h>
+#include <asm/vio.h>
+
+#include "nx_csbcpb.h"
+#include "nx.h"
+
+
+static int gcm_aes_nx_set_key(struct crypto_aead *tfm,
+ const u8 *in_key,
+ unsigned int key_len)
+{
+ struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(&tfm->base);
+ struct nx_csbcpb *csbcpb = nx_ctx->csbcpb;
+ struct nx_csbcpb *csbcpb_aead = nx_ctx->csbcpb_aead;
+
+ nx_ctx_init(nx_ctx, HCOP_FC_AES);
+
+ switch (key_len) {
+ case AES_KEYSIZE_128:
+ csbcpb->cpb.hdr.key_size = NX_KS_AES_128;
+ csbcpb_aead->cpb.hdr.key_size = NX_KS_AES_128;
+ nx_ctx->ap = &nx_ctx->props[NX_PROPS_AES_128];
+ break;
+ case AES_KEYSIZE_192:
+ csbcpb->cpb.hdr.key_size = NX_KS_AES_192;
+ csbcpb_aead->cpb.hdr.key_size = NX_KS_AES_192;
+ nx_ctx->ap = &nx_ctx->props[NX_PROPS_AES_192];
+ break;
+ case AES_KEYSIZE_256:
+ csbcpb->cpb.hdr.key_size = NX_KS_AES_256;
+ csbcpb_aead->cpb.hdr.key_size = NX_KS_AES_256;
+ nx_ctx->ap = &nx_ctx->props[NX_PROPS_AES_256];
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ csbcpb->cpb.hdr.mode = NX_MODE_AES_GCM;
+ memcpy(csbcpb->cpb.aes_gcm.key, in_key, key_len);
+
+ csbcpb_aead->cpb.hdr.mode = NX_MODE_AES_GCA;
+ memcpy(csbcpb_aead->cpb.aes_gca.key, in_key, key_len);
+
+ return 0;
+}
+
+static int gcm4106_aes_nx_set_key(struct crypto_aead *tfm,
+ const u8 *in_key,
+ unsigned int key_len)
+{
+ struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(&tfm->base);
+ char *nonce = nx_ctx->priv.gcm.nonce;
+ int rc;
+
+ if (key_len < 4)
+ return -EINVAL;
+
+ key_len -= 4;
+
+ rc = gcm_aes_nx_set_key(tfm, in_key, key_len);
+ if (rc)
+ goto out;
+
+ memcpy(nonce, in_key + key_len, 4);
+out:
+ return rc;
+}
+
+static int gcm_aes_nx_setauthsize(struct crypto_aead *tfm,
+ unsigned int authsize)
+{
+ if (authsize > crypto_aead_alg(tfm)->maxauthsize)
+ return -EINVAL;
+
+ crypto_aead_crt(tfm)->authsize = authsize;
+
+ return 0;
+}
+
+static int gcm4106_aes_nx_setauthsize(struct crypto_aead *tfm,
+ unsigned int authsize)
+{
+ switch (authsize) {
+ case 8:
+ case 12:
+ case 16:
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ crypto_aead_crt(tfm)->authsize = authsize;
+
+ return 0;
+}
+
+static int nx_gca(struct nx_crypto_ctx *nx_ctx,
+ struct scatterlist *assoc,
+ unsigned int assoclen,
+ u8 *out)
+{
+ struct nx_csbcpb *csbcpb_aead = nx_ctx->csbcpb_aead;
+ int rc;
+ struct scatter_walk walk;
+ struct nx_sg *nx_sg = nx_ctx->in_sg;
+
+ if (assoclen > nx_ctx->ap->databytelen)
+ return -EINVAL;
+
+ if (assoclen <= AES_BLOCK_SIZE) {
+ scatterwalk_start(&walk, assoc);
+ scatterwalk_copychunks(out, &walk, assoclen,
+ SCATTERWALK_FROM_SG);
+ scatterwalk_done(&walk, SCATTERWALK_FROM_SG, 0);
+
+ return 0;
+ }
+
+ nx_sg = nx_walk_and_build(nx_sg, nx_ctx->ap->sglen, assoc, 0,
+ assoclen);
+ nx_ctx->op_aead.inlen = (nx_ctx->in_sg - nx_sg) * sizeof(struct nx_sg);
+
+ rc = nx_hcall_sync(nx_ctx, &nx_ctx->op_aead);
+ if (rc)
+ goto out;
+
+ atomic_inc(&(nx_ctx->stats->aes_ops));
+ atomic64_add(assoclen, &(nx_ctx->stats->aes_bytes));
+
+ memcpy(out, csbcpb_aead->cpb.aes_gca.out_pat, AES_BLOCK_SIZE);
+out:
+ return rc;
+}
+
+static int gcm_aes_nx_crypt(struct aead_request *req, int enc)
+{
+ struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(req->base.tfm);
+ struct nx_csbcpb *csbcpb = nx_ctx->csbcpb;
+ struct blkcipher_desc desc;
+ unsigned int nbytes = req->cryptlen;
+ int rc = -1;
+
+ if (nbytes > nx_ctx->ap->databytelen)
+ return -EINVAL;
+
+ desc.info = nx_ctx->priv.gcm.iv;
+ /* initialize the counter */
+ *(u32 *)(desc.info + NX_GCM_CTR_OFFSET) = 1;
+
+ /* For scenarios where the input message is zero length, AES CTR mode
+ * may be used. Set the source data to be a single block (16B) of all
+ * zeros, and set the input IV value to be the same as the GMAC IV
+ * value. - nx_wb 4.8.1.3 */
+ if (nbytes == 0) {
+ char src[AES_BLOCK_SIZE] = {};
+ struct scatterlist sg;
+
+ desc.tfm = crypto_alloc_blkcipher("ctr(aes)", 0, 0);
+ if (IS_ERR(desc.tfm)) {
+ rc = -ENOMEM;
+ goto out;
+ }
+
+ crypto_blkcipher_setkey(desc.tfm, csbcpb->cpb.aes_gcm.key,
+ csbcpb->cpb.hdr.key_size == NX_KS_AES_128 ? 16 :
+ csbcpb->cpb.hdr.key_size == NX_KS_AES_192 ? 24 : 32);
+
+ sg_init_one(&sg, src, AES_BLOCK_SIZE);
+ if (enc)
+ crypto_blkcipher_encrypt_iv(&desc, req->dst, &sg,
+ AES_BLOCK_SIZE);
+ else
+ crypto_blkcipher_decrypt_iv(&desc, req->dst, &sg,
+ AES_BLOCK_SIZE);
+ crypto_free_blkcipher(desc.tfm);
+
+ rc = 0;
+ goto out;
+ }
+
+ desc.tfm = (struct crypto_blkcipher *)req->base.tfm;
+
+ csbcpb->cpb.aes_gcm.bit_length_aad = req->assoclen * 8;
+
+ if (req->assoclen) {
+ rc = nx_gca(nx_ctx, req->assoc, req->assoclen,
+ csbcpb->cpb.aes_gcm.in_pat_or_aad);
+ if (rc)
+ goto out;
+ }
+
+ if (enc)
+ csbcpb->cpb.hdr.fdm.ende = NX_FDM_ENDE_ENCRYPT;
+ else
+ nbytes -= AES_BLOCK_SIZE;
+
+ csbcpb->cpb.aes_gcm.bit_length_data = nbytes * 8;
+
+ rc = nx_build_sg_lists(nx_ctx, &desc, req->dst, req->src, nbytes,
+ csbcpb->cpb.aes_gcm.iv_or_cnt);
+ if (rc)
+ goto out;
+
+ rc = nx_hcall_sync(nx_ctx, &nx_ctx->op);
+ if (rc)
+ goto out;
+
+ atomic_inc(&(nx_ctx->stats->aes_ops));
+ atomic64_add(csbcpb->csb.processed_byte_count,
+ &(nx_ctx->stats->aes_bytes));
+
+ if (enc) {
+ /* copy out the auth tag */
+ scatterwalk_map_and_copy(csbcpb->cpb.aes_gcm.out_pat_or_mac,
+ req->dst, nbytes,
+ crypto_aead_authsize(crypto_aead_reqtfm(req)),
+ SCATTERWALK_TO_SG);
+ } else if (req->assoclen) {
+ u8 *itag = nx_ctx->priv.gcm.iauth_tag;
+ u8 *otag = csbcpb->cpb.aes_gcm.out_pat_or_mac;
+
+ scatterwalk_map_and_copy(itag, req->dst, nbytes,
+ crypto_aead_authsize(crypto_aead_reqtfm(req)),
+ SCATTERWALK_FROM_SG);
+ rc = memcmp(itag, otag,
+ crypto_aead_authsize(crypto_aead_reqtfm(req))) ?
+ -EBADMSG : 0;
+ }
+out:
+ return rc;
+}
+
+static int gcm_aes_nx_encrypt(struct aead_request *req)
+{
+ struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(req->base.tfm);
+ char *iv = nx_ctx->priv.gcm.iv;
+
+ memcpy(iv, req->iv, 12);
+
+ return gcm_aes_nx_crypt(req, 1);
+}
+
+static int gcm_aes_nx_decrypt(struct aead_request *req)
+{
+ struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(req->base.tfm);
+ char *iv = nx_ctx->priv.gcm.iv;
+
+ memcpy(iv, req->iv, 12);
+
+ return gcm_aes_nx_crypt(req, 0);
+}
+
+static int gcm4106_aes_nx_encrypt(struct aead_request *req)
+{
+ struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(req->base.tfm);
+ char *iv = nx_ctx->priv.gcm.iv;
+ char *nonce = nx_ctx->priv.gcm.nonce;
+
+ memcpy(iv, nonce, NX_GCM4106_NONCE_LEN);
+ memcpy(iv + NX_GCM4106_NONCE_LEN, req->iv, 8);
+
+ return gcm_aes_nx_crypt(req, 1);
+}
+
+static int gcm4106_aes_nx_decrypt(struct aead_request *req)
+{
+ struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(req->base.tfm);
+ char *iv = nx_ctx->priv.gcm.iv;
+ char *nonce = nx_ctx->priv.gcm.nonce;
+
+ memcpy(iv, nonce, NX_GCM4106_NONCE_LEN);
+ memcpy(iv + NX_GCM4106_NONCE_LEN, req->iv, 8);
+
+ return gcm_aes_nx_crypt(req, 0);
+}
+
+/* tell the block cipher walk routines that this is a stream cipher by
+ * setting cra_blocksize to 1. Even using blkcipher_walk_virt_block
+ * during encrypt/decrypt doesn't solve this problem, because it calls
+ * blkcipher_walk_done under the covers, which doesn't use walk->blocksize,
+ * but instead uses this tfm->blocksize. */
+struct crypto_alg nx_gcm_aes_alg = {
+ .cra_name = "gcm(aes)",
+ .cra_driver_name = "gcm-aes-nx",
+ .cra_priority = 300,
+ .cra_flags = CRYPTO_ALG_TYPE_AEAD,
+ .cra_blocksize = 1,
+ .cra_ctxsize = sizeof(struct nx_crypto_ctx),
+ .cra_type = &crypto_aead_type,
+ .cra_module = THIS_MODULE,
+ .cra_list = LIST_HEAD_INIT(nx_gcm_aes_alg.cra_list),
+ .cra_init = nx_crypto_ctx_aes_gcm_init,
+ .cra_exit = nx_crypto_ctx_exit,
+ .cra_aead = {
+ .ivsize = AES_BLOCK_SIZE,
+ .maxauthsize = AES_BLOCK_SIZE,
+ .setkey = gcm_aes_nx_set_key,
+ .setauthsize = gcm_aes_nx_setauthsize,
+ .encrypt = gcm_aes_nx_encrypt,
+ .decrypt = gcm_aes_nx_decrypt,
+ }
+};
+
+struct crypto_alg nx_gcm4106_aes_alg = {
+ .cra_name = "rfc4106(gcm(aes))",
+ .cra_driver_name = "rfc4106-gcm-aes-nx",
+ .cra_priority = 300,
+ .cra_flags = CRYPTO_ALG_TYPE_AEAD,
+ .cra_blocksize = 1,
+ .cra_ctxsize = sizeof(struct nx_crypto_ctx),
+ .cra_type = &crypto_nivaead_type,
+ .cra_module = THIS_MODULE,
+ .cra_list = LIST_HEAD_INIT(nx_gcm4106_aes_alg.cra_list),
+ .cra_init = nx_crypto_ctx_aes_gcm_init,
+ .cra_exit = nx_crypto_ctx_exit,
+ .cra_aead = {
+ .ivsize = 8,
+ .maxauthsize = AES_BLOCK_SIZE,
+ .geniv = "seqiv",
+ .setkey = gcm4106_aes_nx_set_key,
+ .setauthsize = gcm4106_aes_nx_setauthsize,
+ .encrypt = gcm4106_aes_nx_encrypt,
+ .decrypt = gcm4106_aes_nx_decrypt,
+ }
+};
--
1.7.1

2012-03-21 21:39:39

by Kent Yoder

Subject: [PATCH 07/17] powerpc: crypto: AES-CCM mode routines for nx encryption

These routines add support for AES in CCM mode on the Power7+ CPU's
in-Nest accelerator driver.

Signed-off-by: Kent Yoder <[email protected]>
---
arch/powerpc/crypto/nx/nx-aes-ccm.c | 466 +++++++++++++++++++++++++++++++++++
1 files changed, 466 insertions(+), 0 deletions(-)
create mode 100644 arch/powerpc/crypto/nx/nx-aes-ccm.c

diff --git a/arch/powerpc/crypto/nx/nx-aes-ccm.c b/arch/powerpc/crypto/nx/nx-aes-ccm.c
new file mode 100644
index 0000000..979b0da
--- /dev/null
+++ b/arch/powerpc/crypto/nx/nx-aes-ccm.c
@@ -0,0 +1,466 @@
+/**
+ * AES CCM routines supporting the Power 7+ Nest Accelerators driver
+ *
+ * Copyright (C) 2012 International Business Machines Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 only.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ *
+ * Author: Kent Yoder <[email protected]>
+ */
+
+#include <crypto/internal/aead.h>
+#include <crypto/aes.h>
+#include <crypto/algapi.h>
+#include <crypto/scatterwalk.h>
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/crypto.h>
+#include <asm/vio.h>
+
+#include "nx_csbcpb.h"
+#include "nx.h"
+
+
+static int ccm_aes_nx_set_key(struct crypto_aead *tfm,
+ const u8 *in_key,
+ unsigned int key_len)
+{
+ struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(&tfm->base);
+ struct nx_csbcpb *csbcpb = nx_ctx->csbcpb;
+ struct nx_csbcpb *csbcpb_aead = nx_ctx->csbcpb_aead;
+
+ nx_ctx_init(nx_ctx, HCOP_FC_AES);
+
+ switch (key_len) {
+ case AES_KEYSIZE_128:
+ csbcpb->cpb.hdr.key_size = NX_KS_AES_128;
+ csbcpb_aead->cpb.hdr.key_size = NX_KS_AES_128;
+ nx_ctx->ap = &nx_ctx->props[NX_PROPS_AES_128];
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ csbcpb->cpb.hdr.mode = NX_MODE_AES_CCM;
+ memcpy(csbcpb->cpb.aes_ccm.key, in_key, key_len);
+
+ csbcpb_aead->cpb.hdr.mode = NX_MODE_AES_CCA;
+ memcpy(csbcpb_aead->cpb.aes_cca.key, in_key, key_len);
+
+ return 0;
+
+}
+
+static int ccm4309_aes_nx_set_key(struct crypto_aead *tfm,
+ const u8 *in_key,
+ unsigned int key_len)
+{
+ struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(&tfm->base);
+
+ if (key_len < 3)
+ return -EINVAL;
+
+ key_len -= 3;
+
+ memcpy(nx_ctx->priv.ccm.nonce, in_key + key_len, 3);
+
+ return ccm_aes_nx_set_key(tfm, in_key, key_len);
+}
+
+static int ccm_aes_nx_setauthsize(struct crypto_aead *tfm,
+ unsigned int authsize)
+{
+ switch (authsize) {
+ case 4:
+ case 6:
+ case 8:
+ case 10:
+ case 12:
+ case 14:
+ case 16:
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ crypto_aead_crt(tfm)->authsize = authsize;
+
+ return 0;
+}
+
+static int ccm4309_aes_nx_setauthsize(struct crypto_aead *tfm,
+ unsigned int authsize)
+{
+ switch (authsize) {
+ case 8:
+ case 12:
+ case 16:
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ crypto_aead_crt(tfm)->authsize = authsize;
+
+ return 0;
+}
+
+
+/* taken from crypto/ccm.c */
+static int set_msg_len(u8 *block, unsigned int msglen, int csize)
+{
+ __be32 data;
+
+ memset(block, 0, csize);
+ block += csize;
+
+ if (csize >= 4)
+ csize = 4;
+ else if (msglen > (unsigned int)(1 << (8 * csize)))
+ return -EOVERFLOW;
+
+ data = cpu_to_be32(msglen);
+ memcpy(block - csize, (u8 *)&data + 4 - csize, csize);
+
+ return 0;
+}
+
+/* taken from crypto/ccm.c */
+static inline int crypto_ccm_check_iv(const u8 *iv)
+{
+ /* 2 <= L <= 8, so 1 <= L' <= 7. */
+ if (1 > iv[0] || iv[0] > 7)
+ return -EINVAL;
+
+ return 0;
+}
+
+/* based on code from crypto/ccm.c */
+static int generate_b0(u8 *iv, unsigned int assoclen, unsigned int authsize,
+ unsigned int cryptlen, u8 *b0)
+{
+ unsigned int l, lp, m = authsize;
+ int rc;
+
+ memcpy(b0, iv, 16);
+
+ lp = b0[0];
+ l = lp + 1;
+
+ /* set m, bits 3-5 */
+ *b0 |= (8 * ((m - 2) / 2));
+
+ /* set adata, bit 6, if associated data is used */
+ if (assoclen)
+ *b0 |= 64;
+
+ rc = set_msg_len(b0 + 16 - l, cryptlen, l);
+
+ return rc;
+}
+
+static int generate_pat(u8 *iv,
+ struct aead_request *req,
+ struct nx_crypto_ctx *nx_ctx,
+ unsigned int authsize,
+ unsigned int nbytes,
+ u8 *out)
+{
+ struct nx_sg *nx_insg = nx_ctx->in_sg;
+ struct nx_sg *nx_outsg = nx_ctx->out_sg;
+ unsigned int iauth_len = 0;
+ struct vio_pfo_op *op = NULL;
+ u8 tmp[16], *b1 = NULL, *b0 = NULL, *result = NULL;
+ int rc;
+
+ /* zero the ctr value */
+ memset(iv + 15 - iv[0], 0, iv[0] + 1);
+
+ if (!req->assoclen) {
+ b0 = nx_ctx->csbcpb->cpb.aes_ccm.in_pat_or_b0;
+ } else if (req->assoclen <= 14) {
+ /* if associated data is 14 bytes or less, we do 1 CCM
+ * operation on 2 AES blocks, B0 (stored in the csbcpb) and B1,
+ * which is fed in through the source buffers here */
+ b0 = nx_ctx->csbcpb->cpb.aes_ccm.in_pat_or_b0;
+ b1 = nx_ctx->priv.ccm.iauth_tag;
+ iauth_len = req->assoclen;
+
+ nx_insg = nx_build_sg_list(nx_insg, b1, 16, nx_ctx->ap->sglen);
+ nx_outsg = nx_build_sg_list(nx_outsg, tmp, 16,
+ nx_ctx->ap->sglen);
+
+ /* inlen should be negative, indicating to phyp that its a
+ * pointer to an sg list */
+ nx_ctx->op.inlen = (nx_ctx->in_sg - nx_insg) *
+ sizeof(struct nx_sg);
+ nx_ctx->op.outlen = (nx_ctx->out_sg - nx_outsg) *
+ sizeof(struct nx_sg);
+
+ nx_ctx->csbcpb->cpb.hdr.fdm.ende = NX_FDM_ENDE_ENCRYPT;
+ nx_ctx->csbcpb->cpb.hdr.fdm.intermediate = 1;
+
+ op = &nx_ctx->op;
+ result = nx_ctx->csbcpb->cpb.aes_ccm.out_pat_or_mac;
+ } else if (req->assoclen <= 65280) {
+ /* if associated data is less than (2^16 - 2^8), we construct
+ * B1 differently and feed in the associated data to a CCA
+ * operation */
+ b0 = nx_ctx->csbcpb_aead->cpb.aes_cca.b0;
+ b1 = nx_ctx->csbcpb_aead->cpb.aes_cca.b1;
+ iauth_len = 14;
+
+ /* remaining assoc data must have scatterlist built for it */
+ nx_insg = nx_walk_and_build(nx_insg, nx_ctx->ap->sglen,
+ req->assoc, iauth_len,
+ req->assoclen - iauth_len);
+ nx_ctx->op_aead.inlen = (nx_ctx->in_sg - nx_insg) *
+ sizeof(struct nx_sg);
+
+ op = &nx_ctx->op_aead;
+ result = nx_ctx->csbcpb_aead->cpb.aes_cca.out_pat_or_b0;
+ } else {
+ /* if associated data is less than (2^32), we construct B1
+ * differently yet again and feed in the associated data to a
+ * CCA operation */
+ pr_err("associated data len is %u bytes (returning -EINVAL)\n",
+ req->assoclen);
+ rc = -EINVAL;
+ }
+
+ rc = generate_b0(iv, req->assoclen, authsize, nbytes, b0);
+ if (rc)
+ goto done;
+
+ if (b1) {
+ memset(b1, 0, 16);
+ *(u16 *)b1 = (u16)req->assoclen;
+
+ scatterwalk_map_and_copy(b1 + 2, req->assoc, 0,
+ iauth_len, SCATTERWALK_FROM_SG);
+
+ rc = nx_hcall_sync(nx_ctx, op);
+ if (rc)
+ goto done;
+
+ atomic_inc(&(nx_ctx->stats->aes_ops));
+ atomic64_add(req->assoclen, &(nx_ctx->stats->aes_bytes));
+
+ memcpy(out, result, AES_BLOCK_SIZE);
+ }
+done:
+ return rc;
+}
+
+static int ccm_nx_decrypt(struct aead_request *req,
+ struct blkcipher_desc *desc)
+{
+ struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(req->base.tfm);
+ struct nx_csbcpb *csbcpb = nx_ctx->csbcpb;
+ unsigned int nbytes = req->cryptlen;
+ unsigned int authsize = crypto_aead_authsize(crypto_aead_reqtfm(req));
+ struct nx_ccm_priv *priv = &nx_ctx->priv.ccm;
+ int rc = -1;
+
+ if (nbytes > nx_ctx->ap->databytelen)
+ return -EINVAL;
+
+ nbytes -= authsize;
+
+ /* copy out the auth tag to compare with later */
+ scatterwalk_map_and_copy(priv->oauth_tag,
+ req->src, nbytes, authsize,
+ SCATTERWALK_FROM_SG);
+
+ rc = generate_pat(desc->info, req, nx_ctx, authsize, nbytes,
+ csbcpb->cpb.aes_ccm.in_pat_or_b0);
+ if (rc)
+ goto out;
+
+ rc = nx_build_sg_lists(nx_ctx, desc, req->dst, req->src, nbytes,
+ csbcpb->cpb.aes_ccm.iv_or_ctr);
+ if (rc)
+ goto out;
+
+ csbcpb->cpb.hdr.fdm.ende = NX_FDM_ENDE_DECRYPT;
+ csbcpb->cpb.hdr.fdm.intermediate = 0;
+
+ rc = nx_hcall_sync(nx_ctx, &nx_ctx->op);
+ if (rc)
+ goto out;
+
+ atomic_inc(&(nx_ctx->stats->aes_ops));
+ atomic64_add(csbcpb->csb.processed_byte_count,
+ &(nx_ctx->stats->aes_bytes));
+
+ rc = memcmp(csbcpb->cpb.aes_ccm.out_pat_or_mac, priv->oauth_tag,
+ authsize) ? -EBADMSG : 0;
+out:
+ return rc;
+}
+
+static int ccm_nx_encrypt(struct aead_request *req,
+ struct blkcipher_desc *desc)
+{
+ struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(req->base.tfm);
+ struct nx_csbcpb *csbcpb = nx_ctx->csbcpb;
+ unsigned int nbytes = req->cryptlen;
+ unsigned int authsize = crypto_aead_authsize(crypto_aead_reqtfm(req));
+ int rc = -1;
+
+ if (nbytes > nx_ctx->ap->databytelen)
+ return -EINVAL;
+
+ rc = generate_pat(desc->info, req, nx_ctx, authsize, nbytes,
+ csbcpb->cpb.aes_ccm.in_pat_or_b0);
+ if (rc)
+ goto out;
+
+ rc = nx_build_sg_lists(nx_ctx, desc, req->dst, req->src, nbytes,
+ csbcpb->cpb.aes_ccm.iv_or_ctr);
+ if (rc)
+ goto out;
+
+ csbcpb->cpb.hdr.fdm.ende = NX_FDM_ENDE_ENCRYPT;
+ csbcpb->cpb.hdr.fdm.intermediate = 0;
+
+ rc = nx_hcall_sync(nx_ctx, &nx_ctx->op);
+ if (rc)
+ goto out;
+
+ atomic_inc(&(nx_ctx->stats->aes_ops));
+ atomic64_add(csbcpb->csb.processed_byte_count,
+ &(nx_ctx->stats->aes_bytes));
+
+ /* copy out the auth tag */
+ scatterwalk_map_and_copy(csbcpb->cpb.aes_ccm.out_pat_or_mac,
+ req->dst, nbytes, authsize,
+ SCATTERWALK_TO_SG);
+out:
+ return rc;
+}
+
+static int ccm4309_aes_nx_encrypt(struct aead_request *req)
+{
+ struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(req->base.tfm);
+ struct blkcipher_desc desc;
+ u8 *iv = nx_ctx->priv.ccm.iv;
+
+ iv[0] = 3;
+ memcpy(iv + 1, nx_ctx->priv.ccm.nonce, 3);
+ memcpy(iv + 4, req->iv, 8);
+
+ desc.info = iv;
+ desc.tfm = (struct crypto_blkcipher *)req->base.tfm;
+
+ return ccm_nx_encrypt(req, &desc);
+}
+
+static int ccm_aes_nx_encrypt(struct aead_request *req)
+{
+ struct blkcipher_desc desc;
+ int rc;
+
+ desc.info = req->iv;
+ desc.tfm = (struct crypto_blkcipher *)req->base.tfm;
+
+ rc = crypto_ccm_check_iv(desc.info);
+ if (rc)
+ return rc;
+
+ return ccm_nx_encrypt(req, &desc);
+}
+
+static int ccm4309_aes_nx_decrypt(struct aead_request *req)
+{
+ struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(req->base.tfm);
+ struct blkcipher_desc desc;
+ u8 *iv = nx_ctx->priv.ccm.iv;
+
+ iv[0] = 3;
+ memcpy(iv + 1, nx_ctx->priv.ccm.nonce, 3);
+ memcpy(iv + 4, req->iv, 8);
+
+ desc.info = iv;
+ desc.tfm = (struct crypto_blkcipher *)req->base.tfm;
+
+ return ccm_nx_decrypt(req, &desc);
+}
+
+static int ccm_aes_nx_decrypt(struct aead_request *req)
+{
+ struct blkcipher_desc desc;
+ int rc;
+
+ desc.info = req->iv;
+ desc.tfm = (struct crypto_blkcipher *)req->base.tfm;
+
+ rc = crypto_ccm_check_iv(desc.info);
+ if (rc)
+ return rc;
+
+ return ccm_nx_decrypt(req, &desc);
+}
+
+/* tell the block cipher walk routines that this is a stream cipher by
+ * setting cra_blocksize to 1. Even using blkcipher_walk_virt_block
+ * during encrypt/decrypt doesn't solve this problem, because it calls
+ * blkcipher_walk_done under the covers, which doesn't use walk->blocksize,
+ * but instead uses this tfm->blocksize. */
+struct crypto_alg nx_ccm_aes_alg = {
+ .cra_name = "ccm(aes)",
+ .cra_driver_name = "ccm-aes-nx",
+ .cra_priority = 300,
+ .cra_flags = CRYPTO_ALG_TYPE_AEAD |
+ CRYPTO_ALG_NEED_FALLBACK,
+ .cra_blocksize = 1,
+ .cra_ctxsize = sizeof(struct nx_crypto_ctx),
+ .cra_type = &crypto_aead_type,
+ .cra_module = THIS_MODULE,
+ .cra_list = LIST_HEAD_INIT(nx_ccm_aes_alg.cra_list),
+ .cra_init = nx_crypto_ctx_aes_ccm_init,
+ .cra_exit = nx_crypto_ctx_exit,
+ .cra_aead = {
+ .ivsize = AES_BLOCK_SIZE,
+ .maxauthsize = AES_BLOCK_SIZE,
+ .setkey = ccm_aes_nx_set_key,
+ .setauthsize = ccm_aes_nx_setauthsize,
+ .encrypt = ccm_aes_nx_encrypt,
+ .decrypt = ccm_aes_nx_decrypt,
+ }
+};
+
+struct crypto_alg nx_ccm4309_aes_alg = {
+ .cra_name = "rfc4309(ccm(aes))",
+ .cra_driver_name = "rfc4309-ccm-aes-nx",
+ .cra_priority = 300,
+ .cra_flags = CRYPTO_ALG_TYPE_AEAD |
+ CRYPTO_ALG_NEED_FALLBACK,
+ .cra_blocksize = 1,
+ .cra_ctxsize = sizeof(struct nx_crypto_ctx),
+ .cra_type = &crypto_nivaead_type,
+ .cra_module = THIS_MODULE,
+ .cra_list = LIST_HEAD_INIT(nx_ccm4309_aes_alg.cra_list),
+ .cra_init = nx_crypto_ctx_aes_ccm_init,
+ .cra_exit = nx_crypto_ctx_exit,
+ .cra_aead = {
+ .ivsize = 8,
+ .maxauthsize = AES_BLOCK_SIZE,
+ .setkey = ccm4309_aes_nx_set_key,
+ .setauthsize = ccm4309_aes_nx_setauthsize,
+ .encrypt = ccm4309_aes_nx_encrypt,
+ .decrypt = ccm4309_aes_nx_decrypt,
+ .geniv = "seqiv",
+ }
+};
--
1.7.1

2012-03-21 21:40:31

by Kent Yoder

[permalink] [raw]
Subject: [PATCH 11/17] powerpc: crypto: AES-XCBC mode routines for nx encryption

These routines add support for AES in XCBC mode on the Power7+ CPU's
in-Nest accelerator driver.
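
For anyone who wants to exercise the new algorithm from elsewhere in the
kernel, a rough sketch of a caller using the generic shash interface is
below. This is illustrative only and not part of the patch: the helper
name is made up, and whether "xcbc-aes-nx" actually services the request
depends on its cra_priority and on the accelerator being present.

#include <crypto/hash.h>
#include <linux/err.h>
#include <linux/slab.h>

/* Hypothetical example: MAC a linear buffer with the xcbc(aes) shash. */
static int example_xcbc_mac(const u8 *key, unsigned int keylen,
			    const u8 *data, unsigned int len, u8 *mac)
{
	struct crypto_shash *tfm;
	struct shash_desc *desc;
	int rc;

	tfm = crypto_alloc_shash("xcbc(aes)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	rc = crypto_shash_setkey(tfm, key, keylen);
	if (rc)
		goto out_free_tfm;

	/* a shash_desc must provide descsize bytes of per-request context */
	desc = kzalloc(sizeof(*desc) + crypto_shash_descsize(tfm), GFP_KERNEL);
	if (!desc) {
		rc = -ENOMEM;
		goto out_free_tfm;
	}
	desc->tfm = tfm;

	/* init + update + final in one call */
	rc = crypto_shash_digest(desc, data, len, mac);

	kfree(desc);
out_free_tfm:
	crypto_free_shash(tfm);
	return rc;
}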

Signed-off-by: Kent Yoder <[email protected]>
---
arch/powerpc/crypto/nx/nx-aes-xcbc.c | 230 ++++++++++++++++++++++++++++++++++
1 files changed, 230 insertions(+), 0 deletions(-)
create mode 100644 arch/powerpc/crypto/nx/nx-aes-xcbc.c

diff --git a/arch/powerpc/crypto/nx/nx-aes-xcbc.c b/arch/powerpc/crypto/nx/nx-aes-xcbc.c
new file mode 100644
index 0000000..b25eb79
--- /dev/null
+++ b/arch/powerpc/crypto/nx/nx-aes-xcbc.c
@@ -0,0 +1,230 @@
+/**
+ * AES XCBC routines supporting the Power 7+ Nest Accelerators driver
+ *
+ * Copyright (C) 2011-2012 International Business Machines Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 only.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ *
+ * Author: Kent Yoder <[email protected]>
+ */
+
+#include <crypto/internal/hash.h>
+#include <crypto/aes.h>
+#include <crypto/algapi.h>
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/crypto.h>
+#include <asm/vio.h>
+
+#include "nx_csbcpb.h"
+#include "nx.h"
+
+
+struct xcbc_state {
+ u8 state[AES_BLOCK_SIZE];
+ unsigned int count;
+ u8 buffer[AES_BLOCK_SIZE];
+};
+
+static int nx_xcbc_set_key(struct crypto_shash *desc,
+ const u8 *in_key,
+ unsigned int key_len)
+{
+ struct nx_crypto_ctx *nx_ctx = crypto_shash_ctx(desc);
+
+ switch (key_len) {
+ case AES_KEYSIZE_128:
+ nx_ctx->ap = &nx_ctx->props[NX_PROPS_AES_128];
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ memcpy(nx_ctx->priv.xcbc.key, in_key, key_len);
+
+ return 0;
+}
+
+static int nx_xcbc_init(struct shash_desc *desc)
+{
+ struct xcbc_state *sctx = shash_desc_ctx(desc);
+ struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(&desc->tfm->base);
+ struct nx_csbcpb *csbcpb = nx_ctx->csbcpb;
+ struct nx_sg *out_sg;
+
+ nx_ctx_init(nx_ctx, HCOP_FC_AES);
+
+ memset(sctx, 0, sizeof *sctx);
+
+ csbcpb->cpb.hdr.key_size = NX_KS_AES_128;
+ csbcpb->cpb.hdr.mode = NX_MODE_AES_XCBC_MAC;
+
+ memcpy(csbcpb->cpb.aes_xcbc.key, nx_ctx->priv.xcbc.key, AES_BLOCK_SIZE);
+	memset(nx_ctx->priv.xcbc.key, 0, sizeof(nx_ctx->priv.xcbc.key));
+
+ out_sg = nx_build_sg_list(nx_ctx->out_sg, (u8 *)sctx->state,
+ AES_BLOCK_SIZE, nx_ctx->ap->sglen);
+ nx_ctx->op.outlen = (nx_ctx->out_sg - out_sg) * sizeof(struct nx_sg);
+
+ return 0;
+}
+
+static int nx_xcbc_update(struct shash_desc *desc,
+ const u8 *data,
+ unsigned int len)
+{
+ struct xcbc_state *sctx = shash_desc_ctx(desc);
+ struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(&desc->tfm->base);
+ struct nx_csbcpb *csbcpb = nx_ctx->csbcpb;
+ struct nx_sg *in_sg;
+ u32 to_process, leftover;
+ int rc;
+
+ if (csbcpb->cpb.hdr.fdm.continuation == 1) {
+ /* we've hit the nx chip previously and we're updating again,
+ * so copy over the partial digest */
+ memcpy(csbcpb->cpb.aes_xcbc.cv,
+ csbcpb->cpb.aes_xcbc.out_cv_mac, AES_BLOCK_SIZE);
+ }
+
+ /* 2 cases for total data len:
+ * 1: <= AES_BLOCK_SIZE: copy into state, return 0
+ * 2: > AES_BLOCK_SIZE: process X blocks, copy in leftover
+ */
+ if (len + sctx->count <= AES_BLOCK_SIZE) {
+ memcpy(sctx->buffer + sctx->count, data, len);
+ sctx->count += len;
+ return 0;
+ }
+
+	/* to_process: the number of bytes, rounded down to a multiple of
+	 * AES_BLOCK_SIZE, to process in this update */
+ to_process = (sctx->count + len) & ~(AES_BLOCK_SIZE - 1);
+ leftover = (sctx->count + len) & (AES_BLOCK_SIZE - 1);
+
+ /* the hardware will not accept a 0 byte operation for this algorithm
+ * and the operation MUST be finalized to be correct. So if we happen
+ * to get an update that falls on a block sized boundary, we must
+ * save off the last block to finalize with later. */
+ if (!leftover) {
+ to_process -= AES_BLOCK_SIZE;
+ leftover = AES_BLOCK_SIZE;
+ }
+
+ if (sctx->count) {
+ in_sg = nx_build_sg_list(nx_ctx->in_sg, sctx->buffer,
+ sctx->count, nx_ctx->ap->sglen);
+ in_sg = nx_build_sg_list(in_sg, (u8 *)data,
+ to_process - sctx->count,
+ nx_ctx->ap->sglen);
+ nx_ctx->op.inlen = (nx_ctx->in_sg - in_sg) *
+ sizeof(struct nx_sg);
+ } else {
+ in_sg = nx_build_sg_list(nx_ctx->in_sg, (u8 *)data, to_process,
+ nx_ctx->ap->sglen);
+ nx_ctx->op.inlen = (nx_ctx->in_sg - in_sg) *
+ sizeof(struct nx_sg);
+ }
+
+ csbcpb->cpb.hdr.fdm.intermediate = 1;
+
+ if (!nx_ctx->op.inlen || !nx_ctx->op.outlen)
+ return -EINVAL;
+
+ rc = nx_hcall_sync(nx_ctx, &nx_ctx->op);
+ if (rc)
+ goto out;
+
+ atomic_inc(&(nx_ctx->stats->aes_ops));
+
+ /* copy the leftover back into the state struct */
+ memcpy(sctx->buffer, data + len - leftover, leftover);
+ sctx->count = leftover;
+
+ /* everything after the first update is continuation */
+ csbcpb->cpb.hdr.fdm.continuation = 1;
+out:
+ return rc;
+}
+
+static int nx_xcbc_final(struct shash_desc *desc, u8 *out)
+{
+ struct xcbc_state *sctx = shash_desc_ctx(desc);
+ struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(&desc->tfm->base);
+ struct nx_csbcpb *csbcpb = nx_ctx->csbcpb;
+ struct nx_sg *in_sg, *out_sg;
+ int rc = 0;
+
+ if (csbcpb->cpb.hdr.fdm.continuation == 1) {
+ /* we've hit the nx chip previously, now we're finalizing,
+ * so copy over the partial digest */
+ memcpy(csbcpb->cpb.aes_xcbc.cv,
+ csbcpb->cpb.aes_xcbc.out_cv_mac, AES_BLOCK_SIZE);
+ } else if (sctx->count == 0) {
+ /* we've never seen an update, so this is a 0 byte op. The
+ * hardware cannot handle a 0 byte op, so just copy out the
+ * known 0 byte result. This is cheaper than allocating a
+ * software context to do a 0 byte op */
+ u8 data[] = { 0x75, 0xf0, 0x25, 0x1d, 0x52, 0x8a, 0xc0, 0x1c,
+ 0x45, 0x73, 0xdf, 0xd5, 0x84, 0xd7, 0x9f, 0x29 };
+ memcpy(out, data, sizeof(data));
+ goto out;
+ }
+
+ /* final is represented by continuing the operation and indicating that
+ * this is not an intermediate operation */
+ csbcpb->cpb.hdr.fdm.intermediate = 0;
+
+ in_sg = nx_build_sg_list(nx_ctx->in_sg, (u8 *)sctx->buffer,
+ sctx->count, nx_ctx->ap->sglen);
+ out_sg = nx_build_sg_list(nx_ctx->out_sg, out, AES_BLOCK_SIZE,
+ nx_ctx->ap->sglen);
+
+ nx_ctx->op.inlen = (nx_ctx->in_sg - in_sg) * sizeof(struct nx_sg);
+ nx_ctx->op.outlen = (nx_ctx->out_sg - out_sg) * sizeof(struct nx_sg);
+
+ if (!nx_ctx->op.outlen)
+ return -EINVAL;
+
+ rc = nx_hcall_sync(nx_ctx, &nx_ctx->op);
+ if (rc)
+ goto out;
+
+ atomic_inc(&(nx_ctx->stats->aes_ops));
+
+ memcpy(out, csbcpb->cpb.aes_xcbc.out_cv_mac, AES_BLOCK_SIZE);
+out:
+ return rc;
+}
+
+struct shash_alg nx_shash_aes_xcbc_alg = {
+ .digestsize = AES_BLOCK_SIZE,
+ .init = nx_xcbc_init,
+ .update = nx_xcbc_update,
+ .final = nx_xcbc_final,
+ .setkey = nx_xcbc_set_key,
+ .descsize = sizeof(struct xcbc_state),
+ .statesize = sizeof(struct xcbc_state),
+ .base = {
+ .cra_name = "xcbc(aes)",
+ .cra_driver_name = "xcbc-aes-nx",
+ .cra_priority = 300,
+ .cra_flags = CRYPTO_ALG_TYPE_SHASH,
+ .cra_blocksize = AES_BLOCK_SIZE,
+ .cra_module = THIS_MODULE,
+ .cra_ctxsize = sizeof(struct nx_crypto_ctx),
+ .cra_init = nx_crypto_ctx_aes_xcbc_init,
+ .cra_exit = nx_crypto_ctx_exit,
+ }
+};
--
1.7.1

2012-03-21 21:40:41

by Kent Yoder

[permalink] [raw]
Subject: [PATCH 12/17] powerpc: crypto: SHA256 hash routines for nx encryption

These routines add support for SHA-256 hashing on the Power7+ CPU's
in-Nest accelerator driver.
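
A quick way to poke at the hash from userspace once the driver has
registered is the AF_ALG socket interface (CONFIG_CRYPTO_USER_API_HASH).
The sketch below is illustrative only and not part of the patch; which
implementation ends up servicing the request is decided by cra_priority,
so seeing the right digest doesn't by itself prove the nx path was used
(check the sha256_ops counter added in patch 15 for that).

/* Illustrative only: hash "abc" through the kernel's "sha256" via AF_ALG. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/if_alg.h>

#ifndef AF_ALG
#define AF_ALG 38
#endif

int main(void)
{
	struct sockaddr_alg sa;
	unsigned char digest[32];
	int tfmfd, opfd, i;

	memset(&sa, 0, sizeof(sa));
	sa.salg_family = AF_ALG;
	strcpy((char *)sa.salg_type, "hash");
	strcpy((char *)sa.salg_name, "sha256");

	tfmfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
	if (tfmfd < 0 || bind(tfmfd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
		return 1;

	opfd = accept(tfmfd, NULL, 0);
	if (opfd < 0)
		return 1;

	if (write(opfd, "abc", 3) != 3 || read(opfd, digest, 32) != 32)
		return 1;

	for (i = 0; i < 32; i++)
		printf("%02x", digest[i]);
	printf("\n");

	close(opfd);
	close(tfmfd);
	return 0;
}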

Signed-off-by: Kent Yoder <[email protected]>
---
arch/powerpc/crypto/nx/nx-sha256.c | 240 ++++++++++++++++++++++++++++++++++++
1 files changed, 240 insertions(+), 0 deletions(-)
create mode 100644 arch/powerpc/crypto/nx/nx-sha256.c

diff --git a/arch/powerpc/crypto/nx/nx-sha256.c b/arch/powerpc/crypto/nx/nx-sha256.c
new file mode 100644
index 0000000..38560d6
--- /dev/null
+++ b/arch/powerpc/crypto/nx/nx-sha256.c
@@ -0,0 +1,240 @@
+/**
+ * SHA-256 routines supporting the Power 7+ Nest Accelerators driver
+ *
+ * Copyright (C) 2011-2012 International Business Machines Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 only.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ *
+ * Author: Kent Yoder <[email protected]>
+ */
+
+#include <crypto/internal/hash.h>
+#include <crypto/sha.h>
+#include <linux/module.h>
+#include <asm/vio.h>
+
+#include "nx_csbcpb.h"
+#include "nx.h"
+
+
+static int nx_sha256_init(struct shash_desc *desc)
+{
+ struct sha256_state *sctx = shash_desc_ctx(desc);
+ struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(&desc->tfm->base);
+ struct nx_sg *out_sg;
+
+ nx_ctx_init(nx_ctx, HCOP_FC_SHA);
+
+ memset(sctx, 0, sizeof *sctx);
+
+ nx_ctx->ap = &nx_ctx->props[NX_PROPS_SHA256];
+
+ nx_ctx->csbcpb->cpb.hdr.digest_size = NX_DS_SHA256;
+ out_sg = nx_build_sg_list(nx_ctx->out_sg, (u8 *)sctx->state,
+ SHA256_DIGEST_SIZE, nx_ctx->ap->sglen);
+ nx_ctx->op.outlen = (nx_ctx->out_sg - out_sg) * sizeof(struct nx_sg);
+
+ return 0;
+}
+
+static int nx_sha256_update(struct shash_desc *desc, const u8 *data,
+ unsigned int len)
+{
+ struct sha256_state *sctx = shash_desc_ctx(desc);
+ struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(&desc->tfm->base);
+ struct nx_csbcpb *csbcpb = (struct nx_csbcpb *)nx_ctx->csbcpb;
+ struct nx_sg *in_sg;
+ u64 to_process, leftover;
+ int rc;
+
+ if (csbcpb->cpb.hdr.fdm.continuation == 1) {
+ /* we've hit the nx chip previously and we're updating again,
+ * so copy over the partial digest */
+ memcpy(csbcpb->cpb.sha256.input_partial_digest,
+ csbcpb->cpb.sha256.message_digest, SHA256_DIGEST_SIZE);
+ }
+
+ /* 2 cases for total data len:
+ * 1: <= SHA256_BLOCK_SIZE: copy into state, return 0
+ * 2: > SHA256_BLOCK_SIZE: process X blocks, copy in leftover
+ */
+ if (len + sctx->count <= SHA256_BLOCK_SIZE) {
+ memcpy(sctx->buf + sctx->count, data, len);
+ sctx->count += len;
+ return 0;
+ }
+
+	/* to_process: the number of bytes, rounded down to a multiple of
+	 * SHA256_BLOCK_SIZE, to process in this update */
+ to_process = (sctx->count + len) & ~(SHA256_BLOCK_SIZE - 1);
+ leftover = (sctx->count + len) & (SHA256_BLOCK_SIZE - 1);
+
+ if (sctx->count) {
+ in_sg = nx_build_sg_list(nx_ctx->in_sg, (u8 *)sctx->buf,
+ sctx->count, nx_ctx->ap->sglen);
+ in_sg = nx_build_sg_list(in_sg, (u8 *)data,
+ to_process - sctx->count,
+ nx_ctx->ap->sglen);
+ nx_ctx->op.inlen = (nx_ctx->in_sg - in_sg) *
+ sizeof(struct nx_sg);
+ } else {
+ in_sg = nx_build_sg_list(nx_ctx->in_sg, (u8 *)data,
+ to_process, nx_ctx->ap->sglen);
+ nx_ctx->op.inlen = (nx_ctx->in_sg - in_sg) *
+ sizeof(struct nx_sg);
+ }
+
+ csbcpb->cpb.hdr.fdm.intermediate = 1;
+
+ if (!nx_ctx->op.inlen || !nx_ctx->op.outlen)
+ return -EINVAL;
+
+ rc = nx_hcall_sync(nx_ctx, &nx_ctx->op);
+ if (rc)
+ goto out;
+
+ atomic_inc(&(nx_ctx->stats->sha256_ops));
+
+ /* copy the leftover back into the state struct */
+ memcpy(sctx->buf, data + len - leftover, leftover);
+ sctx->count = leftover;
+
+	csbcpb->cpb.sha256.message_bit_length +=
+		(u64)csbcpb->cpb.sha256.spbc * 8;
+
+ /* everything after the first update is continuation */
+ csbcpb->cpb.hdr.fdm.continuation = 1;
+out:
+ return rc;
+}
+
+static int nx_sha256_final(struct shash_desc *desc, u8 *out)
+{
+ struct sha256_state *sctx = shash_desc_ctx(desc);
+ struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(&desc->tfm->base);
+ struct nx_csbcpb *csbcpb = (struct nx_csbcpb *)nx_ctx->csbcpb;
+ struct nx_sg *in_sg, *out_sg;
+ int rc;
+
+ if (csbcpb->cpb.hdr.fdm.continuation == 1) {
+ /* we've hit the nx chip previously, now we're finalizing,
+ * so copy over the partial digest */
+ memcpy(csbcpb->cpb.sha256.input_partial_digest,
+ csbcpb->cpb.sha256.message_digest, SHA256_DIGEST_SIZE);
+ }
+
+ /* final is represented by continuing the operation and indicating that
+ * this is not an intermediate operation */
+ csbcpb->cpb.hdr.fdm.intermediate = 0;
+
+ csbcpb->cpb.sha256.message_bit_length += (u64)(sctx->count * 8);
+
+ in_sg = nx_build_sg_list(nx_ctx->in_sg, (u8 *)sctx->buf,
+ sctx->count, nx_ctx->ap->sglen);
+ out_sg = nx_build_sg_list(nx_ctx->out_sg, out, SHA256_DIGEST_SIZE,
+ nx_ctx->ap->sglen);
+ nx_ctx->op.inlen = (nx_ctx->in_sg - in_sg) * sizeof(struct nx_sg);
+ nx_ctx->op.outlen = (nx_ctx->out_sg - out_sg) * sizeof(struct nx_sg);
+
+ if (!nx_ctx->op.outlen)
+ return -EINVAL;
+
+ rc = nx_hcall_sync(nx_ctx, &nx_ctx->op);
+ if (rc)
+ goto out;
+
+ atomic_inc(&(nx_ctx->stats->sha256_ops));
+
+	atomic64_add(csbcpb->cpb.sha256.message_bit_length / 8,
+		     &(nx_ctx->stats->sha256_bytes));
+ memcpy(out, csbcpb->cpb.sha256.message_digest, SHA256_DIGEST_SIZE);
+out:
+ return rc;
+}
+
+static int nx_sha256_export(struct shash_desc *desc, void *out)
+{
+ struct sha256_state *sctx = shash_desc_ctx(desc);
+ struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(&desc->tfm->base);
+ struct nx_csbcpb *csbcpb = (struct nx_csbcpb *)nx_ctx->csbcpb;
+ struct sha256_state *octx = out;
+
+ octx->count = sctx->count +
+ (csbcpb->cpb.sha256.message_bit_length / 8);
+ memcpy(octx->buf, sctx->buf, sizeof(octx->buf));
+
+	/* if data has been processed, export the current partial digest from
+	 * the hardware; otherwise export SHA256's initial values, in case
+	 * this context gets imported into a software context */
+ if (csbcpb->cpb.sha256.message_bit_length)
+ memcpy(octx->state, csbcpb->cpb.sha256.message_digest,
+ SHA256_DIGEST_SIZE);
+ else {
+ octx->state[0] = SHA256_H0;
+ octx->state[1] = SHA256_H1;
+ octx->state[2] = SHA256_H2;
+ octx->state[3] = SHA256_H3;
+ octx->state[4] = SHA256_H4;
+ octx->state[5] = SHA256_H5;
+ octx->state[6] = SHA256_H6;
+ octx->state[7] = SHA256_H7;
+ }
+
+ return 0;
+}
+
+static int nx_sha256_import(struct shash_desc *desc, const void *in)
+{
+ struct sha256_state *sctx = shash_desc_ctx(desc);
+ struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(&desc->tfm->base);
+ struct nx_csbcpb *csbcpb = (struct nx_csbcpb *)nx_ctx->csbcpb;
+ const struct sha256_state *ictx = in;
+
+ memcpy(sctx->buf, ictx->buf, sizeof(ictx->buf));
+
+ sctx->count = ictx->count & 0x3f;
+ csbcpb->cpb.sha256.message_bit_length = (ictx->count & ~0x3f) * 8;
+
+ if (csbcpb->cpb.sha256.message_bit_length) {
+ memcpy(csbcpb->cpb.sha256.message_digest, ictx->state,
+ SHA256_DIGEST_SIZE);
+
+ csbcpb->cpb.hdr.fdm.continuation = 1;
+ csbcpb->cpb.hdr.fdm.intermediate = 1;
+ }
+
+ return 0;
+}
+
+struct shash_alg nx_shash_sha256_alg = {
+ .digestsize = SHA256_DIGEST_SIZE,
+ .init = nx_sha256_init,
+ .update = nx_sha256_update,
+ .final = nx_sha256_final,
+ .export = nx_sha256_export,
+ .import = nx_sha256_import,
+ .descsize = sizeof(struct sha256_state),
+ .statesize = sizeof(struct sha256_state),
+ .base = {
+ .cra_name = "sha256",
+ .cra_driver_name = "sha256-nx",
+ .cra_priority = 300,
+ .cra_flags = CRYPTO_ALG_TYPE_SHASH,
+ .cra_blocksize = SHA256_BLOCK_SIZE,
+ .cra_module = THIS_MODULE,
+ .cra_ctxsize = sizeof(struct nx_crypto_ctx),
+ .cra_init = nx_crypto_ctx_sha_init,
+ .cra_exit = nx_crypto_ctx_exit,
+ }
+};
--
1.7.1

2012-03-21 21:40:19

by Kent Yoder

[permalink] [raw]
Subject: [PATCH 14/17] powerpc: crypto: nx driver code supporting nx encryption

These routines add the base device driver code supporting the Power7+
in-Nest encryption accelerator (nx) device.
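
The trickiest piece of the common code is the scatter/gather handling:
nx_build_sg_list() below chops a linear buffer into nx_sg entries, none
of which crosses a 4K boundary. Here is a throwaway userspace model of
just that splitting, for reference while reviewing. It is illustrative
only; addresses are treated as plain integers rather than being run
through virt_to_abs()/vmalloc_to_page() as the driver does.

/* Model of the 4K-boundary splitting done by nx_build_sg_list(). */
#include <stdio.h>
#include <stdint.h>

struct model_sg { uint64_t addr; uint32_t len; };

#define PAGE_NUM(x) ((uint64_t)(x) & ~0xfffULL)

int main(void)
{
	struct model_sg sg[8];
	uint64_t addr = 0x10f80;	/* example buffer address */
	uint64_t end  = addr + 0x2000;	/* 8K of data */
	unsigned int n = 0;

	while (addr < end && n < 8) {
		uint64_t next = PAGE_NUM(addr + 4096); /* next 4K boundary */

		if (next > end)
			next = end;
		sg[n].addr = addr;
		sg[n].len  = next - addr;
		printf("sg[%u]: addr=0x%llx len=%u\n", n,
		       (unsigned long long)sg[n].addr, sg[n].len);
		addr = next;
		n++;
	}
	return 0;
}

For the example address above this produces three entries of 128, 4096
and 3968 bytes, which together cover the full 8K buffer.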

Signed-off-by: Kent Yoder <[email protected]>
---
arch/powerpc/crypto/nx/nx.c | 710 ++++++++++++++++++++++++++++++++++++
arch/powerpc/crypto/nx/nx.h | 190 ++++++++++
arch/powerpc/crypto/nx/nx_csbcpb.h | 246 +++++++++++++
3 files changed, 1146 insertions(+), 0 deletions(-)
create mode 100644 arch/powerpc/crypto/nx/nx.c
create mode 100644 arch/powerpc/crypto/nx/nx.h
create mode 100644 arch/powerpc/crypto/nx/nx_csbcpb.h

diff --git a/arch/powerpc/crypto/nx/nx.c b/arch/powerpc/crypto/nx/nx.c
new file mode 100644
index 0000000..b5730b7
--- /dev/null
+++ b/arch/powerpc/crypto/nx/nx.c
@@ -0,0 +1,710 @@
+/**
+ * Routines supporting the Power 7+ Nest Accelerators driver
+ *
+ * Copyright (C) 2011-2012 International Business Machines Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 only.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ *
+ * Author: Kent Yoder <[email protected]>
+ */
+
+#include <crypto/internal/hash.h>
+#include <crypto/hash.h>
+#include <crypto/aes.h>
+#include <crypto/sha.h>
+#include <crypto/algapi.h>
+#include <crypto/scatterwalk.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/types.h>
+#include <linux/mm.h>
+#include <linux/crypto.h>
+#include <linux/scatterlist.h>
+#include <linux/device.h>
+#include <linux/of.h>
+#include <asm/pSeries_reconfig.h>
+#include <asm/abs_addr.h>
+#include <asm/hvcall.h>
+#include <asm/vio.h>
+
+#include "nx_csbcpb.h"
+#include "nx.h"
+
+
+/**
+ * nx_hcall_sync - make an H_COP_OP hcall for the passed in op structure
+ *
+ * @nx_ctx: the crypto context handle
+ * @op: PFO operation struct to pass in
+ *
+ * Make the hcall, retrying while the hardware is busy
+ */
+int nx_hcall_sync(struct nx_crypto_ctx *nx_ctx, struct vio_pfo_op *op)
+{
+ int rc;
+ struct vio_dev *viodev = nx_driver.viodev;
+
+ atomic_inc(&(nx_ctx->stats->sync_ops));
+
+ do {
+ rc = vio_h_cop_sync(viodev, op);
+ } while (rc == -EBUSY);
+
+ if (rc) {
+ dev_dbg(&viodev->dev, "vio_h_cop_sync failed: rc: %d "
+ "hcall rc: %ld\n", rc, op->hcall_err);
+ atomic_inc(&(nx_ctx->stats->errors));
+ atomic_set(&(nx_ctx->stats->last_error), op->hcall_err);
+ atomic_set(&(nx_ctx->stats->last_error_pid), current->pid);
+ }
+
+ return rc;
+}
+
+/**
+ * nx_build_sg_list - build an NX scatter list describing a single buffer
+ *
+ * @sg_head: pointer to the first scatter list element to build
+ * @start_addr: pointer to the linear buffer
+ * @len: length of the data at @start_addr
+ * @sgmax: the largest number of scatter list elements we're allowed to create
+ *
+ * This function will start writing nx_sg elements at @sg_head and keep
+ * writing them until all of the data from @start_addr is described or
+ * until sgmax elements have been written. Scatter list elements will be
+ * created such that none of the elements describes a buffer that crosses a 4K
+ * boundary.
+ */
+struct nx_sg *nx_build_sg_list(struct nx_sg *sg_head,
+ u8 *start_addr,
+ unsigned int len,
+ u32 sgmax)
+{
+ unsigned int sg_len = 0;
+ struct nx_sg *sg;
+ u64 sg_addr = (u64)start_addr;
+ u64 end_addr;
+
+ if (is_vmalloc_addr(start_addr))
+ sg_addr = phys_to_abs(page_to_phys(vmalloc_to_page(start_addr)))
+ + offset_in_page(sg_addr);
+ else
+ sg_addr = virt_to_abs(sg_addr);
+
+ end_addr = sg_addr + len;
+
+ for (sg = sg_head; sg_len < len; sg++) {
+ sg->addr = sg_addr;
+ sg_addr = min_t(u64, NX_PAGE_NUM(sg_addr + 4096), end_addr);
+ sg->len = sg_addr - sg->addr;
+ sg_len += sg->len;
+
+ if ((sg - sg_head) == sgmax) {
+ pr_err("nx: scatter/gather list overflow, pid: %d\n",
+ current->pid);
+ return NULL;
+ }
+ }
+
+ /* return the moved sg_head pointer */
+ return sg;
+}
+
+/**
+ * nx_walk_and_build - walk a linux scatterlist and build an nx scatterlist
+ *
+ * @nx_dst: pointer to the first nx_sg element to write
+ * @sglen: max number of nx_sg entries we're allowed to write
+ * @sg_src: pointer to the source linux scatterlist to walk
+ * @start: number of bytes to fast-forward past at the beginning of @sg_src
+ * @src_len: number of bytes to walk in @sg_src
+ */
+struct nx_sg *nx_walk_and_build(struct nx_sg *nx_dst,
+ unsigned int sglen,
+ struct scatterlist *sg_src,
+ unsigned int start,
+ unsigned int src_len)
+{
+ struct scatter_walk walk;
+ struct nx_sg *nx_sg = nx_dst;
+ unsigned int n, offset = 0, len = src_len;
+ char *dst;
+
+ /* we need to fast forward through @start bytes first */
+ for (;;) {
+ scatterwalk_start(&walk, sg_src);
+
+ if (start < offset + sg_src->length)
+ break;
+
+ offset += sg_src->length;
+ sg_src = scatterwalk_sg_next(sg_src);
+ }
+
+ /* start - offset is the number of bytes to advance in the scatterlist
+ * element we're currently looking at */
+ scatterwalk_advance(&walk, start - offset);
+
+ while (len && nx_sg) {
+ n = scatterwalk_clamp(&walk, len);
+ if (!n) {
+ scatterwalk_start(&walk, sg_next(walk.sg));
+ n = scatterwalk_clamp(&walk, len);
+ }
+ dst = scatterwalk_map(&walk, SCATTERWALK_FROM_SG);
+
+ nx_sg = nx_build_sg_list(nx_sg, dst, n, sglen);
+ len -= n;
+
+ scatterwalk_unmap(dst, SCATTERWALK_FROM_SG);
+ scatterwalk_advance(&walk, n);
+ scatterwalk_done(&walk, SCATTERWALK_FROM_SG, len);
+ }
+
+ /* return the moved destination pointer */
+ return nx_sg;
+}
+
+/**
+ * nx_build_sg_lists - walk the input scatterlists and build arrays of NX
+ * scatterlists based on them.
+ *
+ * @nx_ctx: NX crypto context for the lists we're building
+ * @desc: the block cipher descriptor for the operation
+ * @dst: destination scatterlist
+ * @src: source scatterlist
+ * @nbytes: length of data described in the scatterlists
+ * @iv: destination for the iv data, if the algorithm requires it
+ *
+ * This is common code shared by all the AES algorithms. It uses the block
+ * cipher walk routines to traverse input and output scatterlists, building
+ * corresponding NX scatterlists
+ */
+int nx_build_sg_lists(struct nx_crypto_ctx *nx_ctx,
+ struct blkcipher_desc *desc,
+ struct scatterlist *dst,
+ struct scatterlist *src,
+ unsigned int nbytes,
+ u8 *iv)
+{
+ struct nx_sg *nx_insg = nx_ctx->in_sg;
+ struct nx_sg *nx_outsg = nx_ctx->out_sg;
+ struct blkcipher_walk walk;
+ int rc;
+
+ blkcipher_walk_init(&walk, dst, src, nbytes);
+ rc = blkcipher_walk_virt_block(desc, &walk, AES_BLOCK_SIZE);
+ if (rc)
+ goto out;
+
+ if (iv)
+ memcpy(iv, walk.iv, AES_BLOCK_SIZE);
+
+ while (walk.nbytes) {
+ nx_insg = nx_build_sg_list(nx_insg, walk.src.virt.addr,
+ walk.nbytes, nx_ctx->ap->sglen);
+ nx_outsg = nx_build_sg_list(nx_outsg, walk.dst.virt.addr,
+ walk.nbytes, nx_ctx->ap->sglen);
+
+ rc = blkcipher_walk_done(desc, &walk, 0);
+ if (rc)
+ break;
+ }
+
+ if (walk.nbytes) {
+ nx_insg = nx_build_sg_list(nx_insg, walk.src.virt.addr,
+ walk.nbytes, nx_ctx->ap->sglen);
+ nx_outsg = nx_build_sg_list(nx_outsg, walk.dst.virt.addr,
+ walk.nbytes, nx_ctx->ap->sglen);
+
+ rc = 0;
+ }
+
+ /* these lengths should be negative, which will indicate to phyp that
+ * the input and output parameters are scatterlists, not linear
+ * buffers */
+ nx_ctx->op.inlen = (nx_ctx->in_sg - nx_insg) * sizeof(struct nx_sg);
+ nx_ctx->op.outlen = (nx_ctx->out_sg - nx_outsg) * sizeof(struct nx_sg);
+out:
+ return rc;
+}
+
+/**
+ * nx_ctx_init - initialize an nx_ctx's vio_pfo_op struct
+ *
+ * @nx_ctx: the nx context to initialize
+ * @function: the function code for the op
+ */
+void nx_ctx_init(struct nx_crypto_ctx *nx_ctx, unsigned int function)
+{
+ memset(nx_ctx->csbcpb, 0, sizeof(struct nx_csbcpb));
+ nx_ctx->csbcpb->csb.valid_bit = 1;
+
+ nx_ctx->op.flags = function;
+ nx_ctx->op.csbcpb = virt_to_abs(nx_ctx->csbcpb);
+ nx_ctx->op.in = virt_to_abs(nx_ctx->in_sg);
+ nx_ctx->op.out = virt_to_abs(nx_ctx->out_sg);
+
+ if (nx_ctx->csbcpb_aead) {
+ memset(nx_ctx->csbcpb_aead, 0, sizeof(struct nx_csbcpb));
+ nx_ctx->csbcpb_aead->csb.valid_bit = 1;
+
+ nx_ctx->op_aead.flags = function;
+ nx_ctx->op_aead.csbcpb = virt_to_abs(nx_ctx->csbcpb_aead);
+ nx_ctx->op_aead.in = virt_to_abs(nx_ctx->in_sg);
+ nx_ctx->op_aead.out = virt_to_abs(nx_ctx->out_sg);
+ }
+}
+
+static void nx_of_update_status(struct device *dev,
+ struct property *p,
+ struct nx_of *props)
+{
+ if (!strncmp(p->value, "okay", p->length)) {
+ props->status = NX_WAITING;
+ props->flags |= NX_OF_FLAG_STATUS_SET;
+ } else {
+ dev_info(dev, "%s: status '%s' is not 'okay'\n", __func__,
+ (char *)p->value);
+ }
+}
+
+static void nx_of_update_sglen(struct device *dev,
+ struct property *p,
+ struct nx_of *props)
+{
+ if (p->length != sizeof(props->max_sg_len)) {
+ dev_err(dev, "%s: unexpected format for "
+ "ibm,max-sg-len property\n", __func__);
+ dev_dbg(dev, "%s: ibm,max-sg-len is %d bytes "
+ "long, expected %zd bytes\n", __func__,
+ p->length, sizeof(props->max_sg_len));
+ return;
+ }
+
+ props->max_sg_len = *(u32 *)p->value;
+ props->flags |= NX_OF_FLAG_MAXSGLEN_SET;
+}
+
+static void nx_of_update_msc(struct device *dev,
+ struct property *p,
+ struct nx_of *props)
+{
+ struct msc_triplet *trip;
+ struct max_sync_cop *msc;
+ unsigned int bytes_so_far, i, lenp;
+
+ msc = (struct max_sync_cop *)p->value;
+ lenp = p->length;
+
+ /* You can't tell if the data read in for this property is sane by its
+ * size alone. This is because there are sizes embedded in the data
+ * structure. The best we can do is check lengths as we parse and bail
+ * as soon as a length error is detected. */
+ bytes_so_far = 0;
+
+ while ((bytes_so_far + sizeof(struct max_sync_cop)) <= lenp) {
+ bytes_so_far += sizeof(struct max_sync_cop);
+
+ trip = msc->trip;
+
+ for (i = 0;
+ ((bytes_so_far + sizeof(struct msc_triplet)) <= lenp) &&
+ i < msc->triplets;
+ i++) {
+			if (msc->fc >= NX_MAX_FC || msc->mode >= NX_MAX_MODE) {
+ dev_err(dev, "unknown function code/mode "
+ "combo: %d/%d (ignored)\n", msc->fc,
+ msc->mode);
+ goto next_loop;
+ }
+
+ switch (trip->keybitlen) {
+ case 128:
+ case 160:
+ props->ap[msc->fc][msc->mode][0].databytelen =
+ trip->databytelen;
+ props->ap[msc->fc][msc->mode][0].sglen =
+ trip->sglen;
+ break;
+ case 192:
+ props->ap[msc->fc][msc->mode][1].databytelen =
+ trip->databytelen;
+ props->ap[msc->fc][msc->mode][1].sglen =
+ trip->sglen;
+ break;
+ case 256:
+ if (msc->fc == NX_FC_AES) {
+ props->ap[msc->fc][msc->mode][2].
+ databytelen = trip->databytelen;
+ props->ap[msc->fc][msc->mode][2].sglen =
+ trip->sglen;
+ } else if (msc->fc == NX_FC_AES_HMAC ||
+ msc->fc == NX_FC_SHA) {
+ props->ap[msc->fc][msc->mode][1].
+ databytelen = trip->databytelen;
+ props->ap[msc->fc][msc->mode][1].sglen =
+ trip->sglen;
+ } else {
+ dev_warn(dev, "unknown function "
+ "code/key bit len combo"
+ ": (%u/256)\n", msc->fc);
+ }
+ break;
+ case 512:
+ props->ap[msc->fc][msc->mode][2].databytelen =
+ trip->databytelen;
+ props->ap[msc->fc][msc->mode][2].sglen =
+ trip->sglen;
+ break;
+ default:
+ dev_warn(dev, "unknown function code/key bit "
+ "len combo: (%u/%u)\n", msc->fc,
+ trip->keybitlen);
+ break;
+ }
+next_loop:
+ bytes_so_far += sizeof(struct msc_triplet);
+ trip++;
+ }
+
+ msc = (struct max_sync_cop *)trip;
+ }
+
+ props->flags |= NX_OF_FLAG_MAXSYNCCOP_SET;
+}
+
+/**
+ * nx_of_init - read openFirmware values from the device tree
+ *
+ * @dev: device handle
+ * @props: pointer to struct to hold the properties values
+ *
+ * Called once at driver probe time, this function will read out the
+ * openFirmware properties we use at runtime. If all the OF properties are
+ * acceptable, when we exit this function props->flags will indicate that
+ * we're ready to register our crypto algorithms.
+ */
+static void nx_of_init(struct device *dev, struct nx_of *props)
+{
+ struct device_node *base_node = dev->of_node;
+ struct property *p;
+
+ p = of_find_property(base_node, "status", NULL);
+ if (!p)
+ dev_info(dev, "%s: property 'status' not found\n", __func__);
+ else
+ nx_of_update_status(dev, p, props);
+
+ p = of_find_property(base_node, "ibm,max-sg-len", NULL);
+ if (!p)
+ dev_info(dev, "%s: property 'ibm,max-sg-len' not found\n",
+ __func__);
+ else
+ nx_of_update_sglen(dev, p, props);
+
+ p = of_find_property(base_node, "ibm,max-sync-cop", NULL);
+ if (!p)
+ dev_info(dev, "%s: property 'ibm,max-sync-cop' not found\n",
+ __func__);
+ else
+ nx_of_update_msc(dev, p, props);
+}
+
+/**
+ * nx_register_algs - register algorithms with the crypto API
+ *
+ * Called from nx_probe()
+ *
+ * If all OF properties are in an acceptable state, the driver flags will
+ * indicate that we're ready and we'll create our sysfs files and register
+ * our crypto algorithms.
+ */
+static int nx_register_algs(void)
+{
+ int rc = -1;
+
+ if (nx_driver.of.flags != NX_OF_FLAG_MASK_READY)
+ goto out;
+
+ memset(&nx_driver.stats, 0, sizeof(struct nx_stats));
+
+ rc = nx_sysfs_init(&nx_driver.viodriver.driver);
+ if (rc)
+ goto out;
+
+ rc = crypto_register_alg(&nx_ecb_aes_alg);
+ if (rc)
+ goto out;
+
+ rc = crypto_register_alg(&nx_cbc_aes_alg);
+ if (rc)
+ goto out_unreg_ecb;
+
+ rc = crypto_register_alg(&nx_ctr_aes_alg);
+ if (rc)
+ goto out_unreg_cbc;
+
+ rc = crypto_register_alg(&nx_ctr3686_aes_alg);
+ if (rc)
+ goto out_unreg_ctr;
+
+ rc = crypto_register_alg(&nx_gcm_aes_alg);
+ if (rc)
+ goto out_unreg_ctr3686;
+
+ rc = crypto_register_alg(&nx_gcm4106_aes_alg);
+ if (rc)
+ goto out_unreg_gcm;
+
+ rc = crypto_register_alg(&nx_ccm_aes_alg);
+ if (rc)
+ goto out_unreg_gcm4106;
+
+ rc = crypto_register_alg(&nx_ccm4309_aes_alg);
+ if (rc)
+ goto out_unreg_ccm;
+
+ rc = crypto_register_shash(&nx_shash_sha256_alg);
+ if (rc)
+ goto out_unreg_ccm4309;
+
+ rc = crypto_register_shash(&nx_shash_sha512_alg);
+ if (rc)
+ goto out_unreg_s256;
+
+ rc = crypto_register_shash(&nx_shash_aes_xcbc_alg);
+ if (rc)
+ goto out_unreg_s512;
+
+ nx_driver.of.status = NX_OKAY;
+
+ goto out;
+
+out_unreg_s512:
+ crypto_unregister_shash(&nx_shash_sha512_alg);
+out_unreg_s256:
+ crypto_unregister_shash(&nx_shash_sha256_alg);
+out_unreg_ccm4309:
+ crypto_unregister_alg(&nx_ccm4309_aes_alg);
+out_unreg_ccm:
+ crypto_unregister_alg(&nx_ccm_aes_alg);
+out_unreg_gcm4106:
+ crypto_unregister_alg(&nx_gcm4106_aes_alg);
+out_unreg_gcm:
+ crypto_unregister_alg(&nx_gcm_aes_alg);
+out_unreg_ctr3686:
+ crypto_unregister_alg(&nx_ctr3686_aes_alg);
+out_unreg_ctr:
+ crypto_unregister_alg(&nx_ctr_aes_alg);
+out_unreg_cbc:
+ crypto_unregister_alg(&nx_cbc_aes_alg);
+out_unreg_ecb:
+ crypto_unregister_alg(&nx_ecb_aes_alg);
+out:
+ return rc;
+}
+
+/**
+ * nx_crypto_ctx_init - create and initialize a crypto api context
+ *
+ * @nx_ctx: the crypto api context
+ * @fc: function code for the context
+ * @mode: the function code specific mode for this context
+ */
+static int nx_crypto_ctx_init(struct nx_crypto_ctx *nx_ctx, u32 fc, u32 mode)
+{
+ if (nx_driver.of.status != NX_OKAY) {
+ pr_err("Attempt to initialize NX crypto context while device "
+ "is not available!\n");
+ return -1;
+ }
+
+ nx_ctx->csbcpb = kzalloc(NX_CRYPTO_CTX_SIZE, GFP_KERNEL);
+ if (!nx_ctx->csbcpb)
+ return -ENOMEM;
+
+ if (mode == NX_MODE_AES_GCM || mode == NX_MODE_AES_CCM) {
+ nx_ctx->csbcpb_aead = kzalloc(NX_CRYPTO_CTX_SIZE, GFP_KERNEL);
+ if (!nx_ctx->csbcpb_aead) {
+ kfree(nx_ctx->csbcpb);
+ return -ENOMEM;
+ }
+ }
+
+ /* the input scatterlist and output scatterlist are stored in pages
+ * 2 and 3 of the csbcpb space */
+ nx_ctx->in_sg = (struct nx_sg *)((u8 *)nx_ctx->csbcpb + 4096ULL);
+ nx_ctx->out_sg = (struct nx_sg *)((u8 *)nx_ctx->in_sg + 4096ULL);
+
+ /* give each context a pointer to global stats and their OF
+ * properties */
+ nx_ctx->stats = &nx_driver.stats;
+ memcpy(nx_ctx->props, nx_driver.of.ap[fc][mode],
+ sizeof(struct alg_props) * 3);
+
+ return 0;
+}
+
+/* entry points from the crypto tfm initializers */
+int nx_crypto_ctx_aes_ccm_init(struct crypto_tfm *tfm)
+{
+ return nx_crypto_ctx_init(crypto_tfm_ctx(tfm), NX_FC_AES,
+ NX_MODE_AES_CCM);
+}
+
+int nx_crypto_ctx_aes_gcm_init(struct crypto_tfm *tfm)
+{
+ return nx_crypto_ctx_init(crypto_tfm_ctx(tfm), NX_FC_AES,
+ NX_MODE_AES_GCM);
+}
+
+int nx_crypto_ctx_aes_ctr_init(struct crypto_tfm *tfm)
+{
+ return nx_crypto_ctx_init(crypto_tfm_ctx(tfm), NX_FC_AES,
+ NX_MODE_AES_CTR);
+}
+
+int nx_crypto_ctx_aes_cbc_init(struct crypto_tfm *tfm)
+{
+ return nx_crypto_ctx_init(crypto_tfm_ctx(tfm), NX_FC_AES,
+ NX_MODE_AES_CBC);
+}
+
+int nx_crypto_ctx_aes_ecb_init(struct crypto_tfm *tfm)
+{
+ return nx_crypto_ctx_init(crypto_tfm_ctx(tfm), NX_FC_AES,
+ NX_MODE_AES_ECB);
+}
+
+int nx_crypto_ctx_sha_init(struct crypto_tfm *tfm)
+{
+ return nx_crypto_ctx_init(crypto_tfm_ctx(tfm), NX_FC_SHA, NX_MODE_SHA);
+}
+
+int nx_crypto_ctx_aes_xcbc_init(struct crypto_tfm *tfm)
+{
+ return nx_crypto_ctx_init(crypto_tfm_ctx(tfm), NX_FC_AES,
+ NX_MODE_AES_XCBC_MAC);
+}
+
+/**
+ * nx_crypto_ctx_exit - destroy a crypto api context
+ *
+ * @tfm: the crypto transform pointer for the context
+ *
+ * As crypto API contexts are destroyed, this exit hook is called to free the
+ * memory associated with it.
+ */
+void nx_crypto_ctx_exit(struct crypto_tfm *tfm)
+{
+ struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(tfm);
+
+ if (nx_ctx->csbcpb_aead) {
+ kzfree(nx_ctx->csbcpb_aead);
+ nx_ctx->csbcpb_aead = NULL;
+ }
+
+ kzfree(nx_ctx->csbcpb);
+ nx_ctx->csbcpb = NULL;
+ nx_ctx->in_sg = NULL;
+ nx_ctx->out_sg = NULL;
+}
+
+static int __devinit nx_probe(struct vio_dev *viodev,
+ const struct vio_device_id *id)
+{
+ int rc;
+
+ dev_dbg(&viodev->dev, "driver probed: %s resource id: 0x%x\n",
+ viodev->name, viodev->resource_id);
+
+ if (nx_driver.viodev) {
+ dev_err(&viodev->dev, "%s: Attempt to register more than one "
+ "instance of the hardware\n", __func__);
+ rc = -EINVAL;
+ goto out;
+ }
+
+ nx_driver.viodev = viodev;
+
+ nx_of_init(&viodev->dev, &nx_driver.of);
+
+ rc = nx_register_algs();
+out:
+ return rc;
+}
+
+static int __devexit nx_remove(struct vio_dev *viodev)
+{
+ dev_dbg(&viodev->dev, "entering nx_remove for UA 0x%x\n",
+ viodev->unit_address);
+
+ if (nx_driver.of.status == NX_OKAY) {
+ nx_sysfs_fini(&nx_driver.viodriver.driver);
+
+ crypto_unregister_alg(&nx_ccm_aes_alg);
+ crypto_unregister_alg(&nx_ccm4309_aes_alg);
+ crypto_unregister_alg(&nx_gcm_aes_alg);
+ crypto_unregister_alg(&nx_gcm4106_aes_alg);
+ crypto_unregister_alg(&nx_ctr_aes_alg);
+ crypto_unregister_alg(&nx_ctr3686_aes_alg);
+ crypto_unregister_alg(&nx_cbc_aes_alg);
+ crypto_unregister_alg(&nx_ecb_aes_alg);
+ crypto_unregister_shash(&nx_shash_sha256_alg);
+ crypto_unregister_shash(&nx_shash_sha512_alg);
+ crypto_unregister_shash(&nx_shash_aes_xcbc_alg);
+ }
+
+ return 0;
+}
+
+
+/* module wide initialization/cleanup */
+static int __init nx_init(void)
+{
+ return vio_register_driver(&nx_driver.viodriver);
+}
+
+static void __exit nx_fini(void)
+{
+ vio_unregister_driver(&nx_driver.viodriver);
+}
+
+static struct vio_device_id nx_crypto_driver_ids[] __devinitdata = {
+ { "ibm,sym-encryption-v1", "ibm,sym-encryption" },
+ { "", "" }
+};
+MODULE_DEVICE_TABLE(vio, nx_crypto_driver_ids);
+
+/* driver state structure */
+struct nx_crypto_driver nx_driver = {
+ .viodriver = {
+ .id_table = nx_crypto_driver_ids,
+ .probe = nx_probe,
+ .remove = nx_remove,
+ .driver = {
+ .name = "nx",
+ .owner = THIS_MODULE,
+ },
+ },
+};
+
+module_init(nx_init);
+module_exit(nx_fini);
+
+MODULE_AUTHOR("Kent Yoder <[email protected]>");
+MODULE_DESCRIPTION(NX_STRING);
+MODULE_LICENSE("GPL");
+MODULE_VERSION(NX_VERSION);
diff --git a/arch/powerpc/crypto/nx/nx.h b/arch/powerpc/crypto/nx/nx.h
new file mode 100644
index 0000000..27c6188
--- /dev/null
+++ b/arch/powerpc/crypto/nx/nx.h
@@ -0,0 +1,190 @@
+
+#ifndef __NX_H__
+#define __NX_H__
+
+#define NX_STRING "IBM Power7+ Nest Accelerator Crypto Driver"
+#define NX_VERSION "1.0"
+
+static const char nx_driver_name[] = "nx";
+static const char nx_driver_string[] = NX_STRING;
+static const char nx_driver_version[] = NX_VERSION;
+
+/* a scatterlist in the format PHYP is expecting */
+struct nx_sg {
+ u64 addr;
+ u32 rsvd;
+ u32 len;
+} __packed;
+
+#define NX_MAX_SG_ENTRIES (4096/(sizeof(struct nx_sg)))
+
+enum nx_status {
+ NX_DISABLED,
+ NX_WAITING,
+ NX_OKAY
+};
+
+/* msc_triplet and max_sync_cop are used only to assist in parsing the
+ * openFirmware property */
+struct msc_triplet {
+ u32 keybitlen;
+ u32 databytelen;
+ u32 sglen;
+} __packed;
+
+struct max_sync_cop {
+ u32 fc;
+ u32 mode;
+ u32 triplets;
+ struct msc_triplet trip[0];
+} __packed;
+
+struct alg_props {
+ u32 databytelen;
+ u32 sglen;
+};
+
+#define NX_OF_FLAG_MAXSGLEN_SET (1)
+#define NX_OF_FLAG_STATUS_SET (2)
+#define NX_OF_FLAG_MAXSYNCCOP_SET (4)
+#define NX_OF_FLAG_MASK_READY (NX_OF_FLAG_MAXSGLEN_SET | \
+ NX_OF_FLAG_STATUS_SET | \
+ NX_OF_FLAG_MAXSYNCCOP_SET)
+struct nx_of {
+ u32 flags;
+ u32 max_sg_len;
+ enum nx_status status;
+ struct alg_props ap[NX_MAX_FC][NX_MAX_MODE][3];
+};
+
+struct nx_stats {
+ atomic_t aes_ops;
+ atomic64_t aes_bytes;
+ atomic_t sha256_ops;
+ atomic64_t sha256_bytes;
+ atomic_t sha512_ops;
+ atomic64_t sha512_bytes;
+
+ atomic_t sync_ops;
+
+ atomic_t errors;
+ atomic_t last_error;
+ atomic_t last_error_pid;
+};
+
+struct nx_crypto_driver {
+ struct nx_stats stats;
+ struct nx_of of;
+ struct vio_dev __rcu *viodev;
+ struct vio_driver viodriver;
+};
+
+#define NX_GCM4106_NONCE_LEN (4)
+#define NX_GCM_CTR_OFFSET (12)
+struct nx_gcm_priv {
+ u8 iv[16];
+ u8 iauth_tag[16];
+ u8 nonce[NX_GCM4106_NONCE_LEN];
+};
+
+#define NX_CCM_AES_KEY_LEN (16)
+#define NX_CCM4309_AES_KEY_LEN (19)
+#define NX_CCM4309_NONCE_LEN (3)
+struct nx_ccm_priv {
+ u8 iv[16];
+ u8 b0[16];
+ u8 iauth_tag[16];
+ u8 oauth_tag[16];
+ u8 nonce[NX_CCM4309_NONCE_LEN];
+};
+
+struct nx_xcbc_priv {
+ u8 key[16];
+};
+
+struct nx_ctr_priv {
+ u8 iv[16];
+};
+
+/* 3 pages, the context, in scatterlist, out scatterlist */
+#define NX_CRYPTO_CTX_SIZE (4096 * 3)
+
+struct nx_crypto_ctx {
+ struct nx_csbcpb *csbcpb; /* aligned pages given to phyp @ hcall time */
+ struct vio_pfo_op op; /* operation struct with hcall parameters */
+ struct nx_csbcpb *csbcpb_aead; /* secondary csbcpb used by AEAD algs */
+ struct vio_pfo_op op_aead;/* operation struct for csbcpb_aead */
+
+ struct nx_sg *in_sg; /* pointer into csbcpb to an sg list */
+ struct nx_sg *out_sg; /* pointer into csbcpb to an sg list */
+
+ struct alg_props *ap; /* pointer into props based on our key size */
+ struct alg_props props[3];/* openFirmware properties for requests */
+ struct nx_stats *stats; /* pointer into an nx_crypto_driver for stats
+ reporting */
+
+ union {
+ struct nx_gcm_priv gcm;
+ struct nx_ccm_priv ccm;
+ struct nx_xcbc_priv xcbc;
+ struct nx_ctr_priv ctr;
+ } priv;
+
+ union {
+ struct crypto_blkcipher *blk;
+ struct crypto_aead *aead;
+ struct crypto_shash *shash;
+ } fallback;
+};
+
+struct blk_fallback_req {
+ struct blkcipher_desc *desc;
+ struct scatterlist *src;
+ struct scatterlist *dst;
+ unsigned int nbytes;
+ u8 *key;
+ unsigned int key_len;
+ int enc;
+};
+
+/* prototypes */
+int nx_crypto_ctx_aes_ccm_init(struct crypto_tfm *tfm);
+int nx_crypto_ctx_aes_gcm_init(struct crypto_tfm *tfm);
+int nx_crypto_ctx_aes_xcbc_init(struct crypto_tfm *tfm);
+int nx_crypto_ctx_aes_ctr_init(struct crypto_tfm *tfm);
+int nx_crypto_ctx_aes_cbc_init(struct crypto_tfm *tfm);
+int nx_crypto_ctx_aes_ecb_init(struct crypto_tfm *tfm);
+int nx_crypto_ctx_sha_init(struct crypto_tfm *tfm);
+void nx_crypto_ctx_exit(struct crypto_tfm *tfm);
+void nx_ctx_init(struct nx_crypto_ctx *nx_ctx, unsigned int function);
+int nx_hcall_sync(struct nx_crypto_ctx *ctx, struct vio_pfo_op *op);
+struct nx_sg *nx_build_sg_list(struct nx_sg *, u8 *, unsigned int, u32);
+int nx_build_sg_lists(struct nx_crypto_ctx *, struct blkcipher_desc *,
+ struct scatterlist *, struct scatterlist *, unsigned int,
+ u8 *);
+struct nx_sg *nx_walk_and_build(struct nx_sg *, unsigned int,
+ struct scatterlist *, unsigned int,
+ unsigned int);
+int nx_sysfs_init(struct device_driver *);
+void nx_sysfs_fini(struct device_driver *);
+
+#define NX_PAGE_NUM(x) ((u64)(x) & 0xfffffffffffff000ULL)
+
+extern struct crypto_alg nx_cbc_aes_alg;
+extern struct crypto_alg nx_ecb_aes_alg;
+extern struct crypto_alg nx_gcm_aes_alg;
+extern struct crypto_alg nx_gcm4106_aes_alg;
+extern struct crypto_alg nx_ctr_aes_alg;
+extern struct crypto_alg nx_ctr3686_aes_alg;
+extern struct crypto_alg nx_ccm_aes_alg;
+extern struct crypto_alg nx_ccm4309_aes_alg;
+extern struct shash_alg nx_shash_aes_xcbc_alg;
+extern struct shash_alg nx_shash_sha512_alg;
+extern struct shash_alg nx_shash_sha256_alg;
+
+extern struct nx_crypto_driver nx_driver;
+
+#define SCATTERWALK_TO_SG 1
+#define SCATTERWALK_FROM_SG 0
+
+#endif
diff --git a/arch/powerpc/crypto/nx/nx_csbcpb.h b/arch/powerpc/crypto/nx/nx_csbcpb.h
new file mode 100644
index 0000000..ec793a9
--- /dev/null
+++ b/arch/powerpc/crypto/nx/nx_csbcpb.h
@@ -0,0 +1,246 @@
+
+#ifndef __NX_CSBCPB_H__
+#define __NX_CSBCPB_H__
+
+struct cop_symcpb_fdm {
+ u8 ende:1;
+ u8 operand_overlap:1;
+ u8 padding_rules:2;
+ u8 __rsvd:2;
+ u8 continuation:1;
+ u8 intermediate:1;
+} __packed;
+
+struct cop_symcpb_header {
+ u8 mode;
+ struct cop_symcpb_fdm fdm;
+ u8 key_size:4;
+ u8 digest_size:4;
+ u8 pad_byte;
+ u8 __rsvd[12];
+} __packed;
+
+struct cop_symcpb_aes_ecb {
+ u8 key[32];
+ u8 __rsvd[80];
+} __packed;
+
+struct cop_symcpb_aes_cbc {
+ u8 iv[16];
+ u8 key[32];
+ u8 cv[16];
+ u32 spbc;
+ u8 __rsvd[44];
+} __packed;
+
+struct cop_symcpb_aes_gmac {
+ u8 in_pat[16];
+ u8 iv[16];
+ u64 bit_length_aad;
+ u64 bit_length_data;
+ u8 in_s0[16];
+ u8 key[32];
+ u8 __rsvd1[16];
+ u8 out_pat_or_mac[16];
+ u8 out_s0[16];
+ u32 spbc;
+ u8 __rsvd2[28];
+} __packed;
+
+struct cop_symcpb_aes_gca {
+ u8 in_pat[16];
+ u8 key[32];
+ u8 out_pat[16];
+ u32 spbc;
+ u8 __rsvd[44];
+} __packed;
+
+struct cop_symcpb_aes_gcm {
+ u8 in_pat_or_aad[16];
+ u8 iv_or_cnt[16];
+ u64 bit_length_aad;
+ u64 bit_length_data;
+ u8 in_s0[16];
+ u8 key[32];
+ u8 __rsvd1[16];
+ u8 out_pat_or_mac[16];
+ u8 out_s0[16];
+ u8 out_cnt[16];
+ u32 spbc;
+ u8 __rsvd2[12];
+} __packed;
+
+struct cop_symcpb_aes_ctr {
+ u8 iv[16];
+ u8 key[32];
+ u8 cv[16];
+ u32 spbc;
+ u8 __rsvd2[44];
+} __packed;
+
+struct cop_symcpb_aes_cca {
+ u8 b0[16];
+ u8 b1[16];
+ u8 key[16];
+ u8 out_pat_or_b0[16];
+ u32 spbc;
+ u8 __rsvd[44];
+} __packed;
+
+struct cop_symcpb_aes_ccm {
+ u8 in_pat_or_b0[16];
+ u8 iv_or_ctr[16];
+ u8 in_s0[16];
+ u8 key[16];
+ u8 __rsvd1[48];
+ u8 out_pat_or_mac[16];
+ u8 out_s0[16];
+ u8 out_ctr[16];
+ u32 spbc;
+ u8 __rsvd2[12];
+} __packed;
+
+struct cop_symcpb_aes_xcbc {
+ u8 cv[16];
+ u8 key[16];
+ u8 __rsvd1[16];
+ u8 out_cv_mac[16];
+ u32 spbc;
+ u8 __rsvd2[44];
+} __packed;
+
+struct cop_symcpb_sha256 {
+ u64 message_bit_length;
+ u64 __rsvd1;
+ u8 input_partial_digest[32];
+ u8 message_digest[32];
+ u32 spbc;
+ u8 __rsvd2[44];
+} __packed;
+
+struct cop_symcpb_sha512 {
+ u64 message_bit_length_hi;
+ u64 message_bit_length_lo;
+ u8 input_partial_digest[64];
+ u8 __rsvd1[32];
+ u8 message_digest[64];
+ u32 spbc;
+ u8 __rsvd2[76];
+} __packed;
+
+struct cop_symcpb_sha256_hmac {
+ u64 MBLe;
+ u64 __rsvd1;
+ u8 key[64];
+ u8 input_partial_message[32];
+ u8 message_digest[32];
+ u32 spbc;
+ u8 __rsvd2[28];
+} __packed;
+
+struct cop_symcpb_sha512_hmac {
+ u64 MBLe;
+ u64 __reserved1;
+ u8 key[128];
+ u8 input_partial_message[64];
+ u8 __rsvd2[32];
+ u8 message_digest[64];
+ u32 spbc;
+ u8 __rsvd3[60];
+} __packed;
+
+struct cop_parameter_block {
+ struct cop_symcpb_header hdr;
+ union {
+ struct cop_symcpb_aes_ecb aes_ecb;
+ struct cop_symcpb_aes_cbc aes_cbc;
+ struct cop_symcpb_aes_gmac aes_gmac;
+ struct cop_symcpb_aes_gca aes_gca;
+ struct cop_symcpb_aes_gcm aes_gcm;
+ struct cop_symcpb_aes_cca aes_cca;
+ struct cop_symcpb_aes_ccm aes_ccm;
+ struct cop_symcpb_aes_ctr aes_ctr;
+ struct cop_symcpb_aes_xcbc aes_xcbc;
+ struct cop_symcpb_sha256 sha256;
+ struct cop_symcpb_sha512 sha512;
+ struct cop_symcpb_sha256_hmac sha256_hmac;
+ struct cop_symcpb_sha512_hmac sha512_hmac;
+ };
+} __packed;
+
+/* co-processor status block */
+struct cop_status_block {
+ u32 valid_bit:1;
+ u32 reserved_0:4;
+ u32 format:1;
+ u32 ch:2;
+ u32 crb_seq_number:8;
+ u32 completion_code:8;
+ u32 completion_extension:8;
+ u32 processed_byte_count;
+ u64 address;
+} __packed;
+
+/* Nest accelerator workbook section 4.4 */
+struct nx_csbcpb {
+ unsigned char __rsvd[112];
+ struct cop_status_block csb;
+ struct cop_parameter_block cpb;
+} __packed;
+
+/* nx_csbcpb related definitions */
+#define NX_MODE_AES_ECB 0
+#define NX_MODE_AES_CBC 1
+#define NX_MODE_AES_GMAC 2
+#define NX_MODE_AES_GCA 3
+#define NX_MODE_AES_GCM 4
+#define NX_MODE_AES_CCA 5
+#define NX_MODE_AES_CCM 6
+#define NX_MODE_AES_CTR 7
+#define NX_MODE_AES_XCBC_MAC 20
+#define NX_MODE_SHA 0
+#define NX_MODE_SHA_HMAC 1
+#define NX_MODE_AES_CBC_HMAC_ETA 8
+#define NX_MODE_AES_CBC_HMAC_ATE 9
+#define NX_MODE_AES_CBC_HMAC_EAA 10
+#define NX_MODE_AES_CTR_HMAC_ETA 12
+#define NX_MODE_AES_CTR_HMAC_ATE 13
+#define NX_MODE_AES_CTR_HMAC_EAA 14
+
+#define NX_FDM_ENDE_ENCRYPT 1
+#define NX_FDM_ENDE_DECRYPT 0
+
+#define NX_FDM_CI_FULL 0
+#define NX_FDM_CI_FIRST 1
+#define NX_FDM_CI_LAST 2
+#define NX_FDM_CI_MIDDLE 3
+
+#define NX_FDM_PR_NONE 0
+#define NX_FDM_PR_PAD 1
+
+#define NX_KS_AES_128 1
+#define NX_KS_AES_192 2
+#define NX_KS_AES_256 3
+
+#define NX_DS_SHA256 2
+#define NX_DS_SHA512 3
+
+#define NX_FC_AES 0
+#define NX_FC_SHA 2
+#define NX_FC_AES_HMAC 6
+
+#define NX_MAX_FC (NX_FC_AES_HMAC + 1)
+#define NX_MAX_MODE (NX_MODE_AES_XCBC_MAC + 1)
+
+#define HCOP_FC_AES NX_FC_AES
+#define HCOP_FC_SHA NX_FC_SHA
+#define HCOP_FC_AES_HMAC NX_FC_AES_HMAC
+
+/* indices into the array of algorithm properties */
+#define NX_PROPS_AES_128 0
+#define NX_PROPS_AES_192 1
+#define NX_PROPS_AES_256 2
+#define NX_PROPS_SHA256 1
+#define NX_PROPS_SHA512 2
+
+#endif
--
1.7.1

2012-03-21 21:41:20

by Kent Yoder

[permalink] [raw]
Subject: [PATCH 15/17] powerpc: crypto: sysfs routines and docs for the nx device driver

These routines add sysfs files supporting the Power7+ in-Nest encryption
accelerator driver.
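
Everything lands under /sys/bus/vio/drivers/nx/. A trivial userspace
sketch for sampling a few of the counters (illustrative only, not part
of the patch; the attribute names are the ones created below):

/* Illustrative only: dump some nx driver counters from sysfs. */
#include <stdio.h>

static void show(const char *attr)
{
	char path[128], buf[32];
	FILE *f;

	snprintf(path, sizeof(path), "/sys/bus/vio/drivers/nx/%s", attr);
	f = fopen(path, "r");
	if (f && fgets(buf, sizeof(buf), f))
		printf("%-16s %s", attr, buf);	/* value includes '\n' */
	if (f)
		fclose(f);
}

int main(void)
{
	show("aes_ops");
	show("aes_bytes");
	show("sha256_ops");
	show("errors");
	return 0;
}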

Signed-off-by: Kent Yoder <[email protected]>
---
Documentation/powerpc/pfo-nx-crypto.txt | 52 ++++++++
arch/powerpc/crypto/nx/nx_sysfs.c | 194 +++++++++++++++++++++++++++++++
2 files changed, 246 insertions(+), 0 deletions(-)
create mode 100644 Documentation/powerpc/pfo-nx-crypto.txt
create mode 100644 arch/powerpc/crypto/nx/nx_sysfs.c

diff --git a/Documentation/powerpc/pfo-nx-crypto.txt b/Documentation/powerpc/pfo-nx-crypto.txt
new file mode 100644
index 0000000..63440d3
--- /dev/null
+++ b/Documentation/powerpc/pfo-nx-crypto.txt
@@ -0,0 +1,52 @@
+
+Documentation for the sysfs interfaces provided by the nx-crypto driver, built
+in arch/powerpc/crypto/nx.
+
+The driver provides two sets of sysfs files: one for confirming that the
+device is actually being used and one for error detection.
+
+All sysfs files can be found in:
+
+ /sys/bus/vio/drivers/nx
+
+Error Detection
+===============
+
+errors:
+- A u32 providing a total count of errors since the driver was loaded. The
+only errors counted here are those returned from the hcall, H_COP_OP.
+
+last_error:
+- The most recent non-zero return code from the H_COP_OP hcall. -EBUSY is not
+recorded here (the hcall will retry until -EBUSY goes away).
+
+last_error_pid:
+- The process ID of the process that received the most recent error from the
+hcall.
+
+Notes on error detection:
+ H_RH_PARM (invalid hardware resource ID) and H_HARDWARE (hardware failure)
+are not recorded in the errors or last_error sysfs files, since they are
+signals to the driver to fall back to software.
+
+Device Use
+==========
+
+aes_bytes:
+- The total number of bytes encrypted using AES in any of the driver's
+supported modes.
+
+aes_ops:
+- The total number of AES operations submitted to the hardware.
+
+sha256_bytes:
+- The total number of bytes hashed by the hardware using SHA-256.
+
+sha256_ops:
+- The total number of SHA-256 operations submitted to the hardware.
+
+sha512_bytes:
+- The total number of bytes hashed by the hardware using SHA-512.
+
+sha512_ops:
+- The total number of SHA-512 operations submitted to the hardware.
diff --git a/arch/powerpc/crypto/nx/nx_sysfs.c b/arch/powerpc/crypto/nx/nx_sysfs.c
new file mode 100644
index 0000000..02c84e7
--- /dev/null
+++ b/arch/powerpc/crypto/nx/nx_sysfs.c
@@ -0,0 +1,194 @@
+/**
+ * sysfs routines supporting the Power 7+ Nest Accelerators driver
+ *
+ * Copyright (C) 2011-2012 International Business Machines Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 only.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ *
+ * Author: Kent Yoder <[email protected]>
+ */
+
+#include <linux/device.h>
+#include <linux/kobject.h>
+#include <linux/string.h>
+#include <linux/sysfs.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/crypto.h>
+#include <crypto/hash.h>
+#include <asm/vio.h>
+
+#include "nx_csbcpb.h"
+#include "nx.h"
+
+
+/* sysfs attributes and callbacks
+ *
+ * For documentation on these attributes, please see:
+ *
+ * Documentation/powerpc/pfo-nx-crypto.txt
+ */
+static ssize_t
+nx_attr_show_sha256_ops(struct device_driver *driver, char *buf)
+{
+ return sprintf(buf, "%d\n", atomic_read(&nx_driver.stats.sha256_ops));
+}
+static DRIVER_ATTR(sha256_ops, S_IRUGO, nx_attr_show_sha256_ops, NULL);
+
+static ssize_t
+nx_attr_show_sha256_bytes(struct device_driver *driver, char *buf)
+{
+ return sprintf(buf, "%ld\n",
+ atomic64_read(&nx_driver.stats.sha256_bytes));
+}
+static DRIVER_ATTR(sha256_bytes, S_IRUGO, nx_attr_show_sha256_bytes, NULL);
+
+static ssize_t
+nx_attr_show_sha512_ops(struct device_driver *driver, char *buf)
+{
+ return sprintf(buf, "%d\n", atomic_read(&nx_driver.stats.sha512_ops));
+}
+static DRIVER_ATTR(sha512_ops, S_IRUGO, nx_attr_show_sha512_ops, NULL);
+
+static ssize_t
+nx_attr_show_sha512_bytes(struct device_driver *driver, char *buf)
+{
+ return sprintf(buf, "%ld\n",
+ atomic64_read(&nx_driver.stats.sha512_bytes));
+}
+static DRIVER_ATTR(sha512_bytes, S_IRUGO, nx_attr_show_sha512_bytes, NULL);
+
+static ssize_t
+nx_attr_show_aes_ops(struct device_driver *driver, char *buf)
+{
+ return sprintf(buf, "%d\n", atomic_read(&nx_driver.stats.aes_ops));
+}
+static DRIVER_ATTR(aes_ops, S_IRUGO, nx_attr_show_aes_ops, NULL);
+
+static ssize_t
+nx_attr_show_aes_bytes(struct device_driver *driver, char *buf)
+{
+ return sprintf(buf, "%ld\n", atomic64_read(&nx_driver.stats.aes_bytes));
+}
+static DRIVER_ATTR(aes_bytes, S_IRUGO, nx_attr_show_aes_bytes, NULL);
+
+static ssize_t
+nx_attr_show_sync_ops(struct device_driver *driver, char *buf)
+{
+ return sprintf(buf, "%d\n", atomic_read(&nx_driver.stats.sync_ops));
+}
+static DRIVER_ATTR(sync_ops, S_IRUGO, nx_attr_show_sync_ops, NULL);
+
+static ssize_t
+nx_attr_show_errors(struct device_driver *driver, char *buf)
+{
+ return sprintf(buf, "%d\n", atomic_read(&nx_driver.stats.errors));
+}
+static DRIVER_ATTR(errors, S_IRUGO, nx_attr_show_errors, NULL);
+
+static ssize_t
+nx_attr_show_last_error(struct device_driver *driver, char *buf)
+{
+ return sprintf(buf, "%d\n", atomic_read(&nx_driver.stats.last_error));
+}
+static DRIVER_ATTR(last_error, S_IRUGO, nx_attr_show_last_error, NULL);
+
+static ssize_t
+nx_attr_show_last_error_pid(struct device_driver *driver, char *buf)
+{
+ return sprintf(buf, "%d\n",
+ atomic_read(&nx_driver.stats.last_error_pid));
+}
+static DRIVER_ATTR(last_error_pid, S_IRUGO, nx_attr_show_last_error_pid, NULL);
+
+int
+nx_sysfs_init(struct device_driver *drv)
+{
+ int rc;
+
+ rc = driver_create_file(drv, &driver_attr_aes_ops);
+ if (rc)
+ goto out;
+
+ rc = driver_create_file(drv, &driver_attr_aes_bytes);
+ if (rc)
+ goto out2_unreg;
+
+ rc = driver_create_file(drv, &driver_attr_sync_ops);
+ if (rc)
+ goto out3_unreg;
+
+ rc = driver_create_file(drv, &driver_attr_sha256_ops);
+ if (rc)
+ goto out4_unreg;
+
+ rc = driver_create_file(drv, &driver_attr_sha256_bytes);
+ if (rc)
+ goto out5_unreg;
+
+ rc = driver_create_file(drv, &driver_attr_sha512_ops);
+ if (rc)
+ goto out6_unreg;
+
+ rc = driver_create_file(drv, &driver_attr_sha512_bytes);
+ if (rc)
+ goto out7_unreg;
+
+ rc = driver_create_file(drv, &driver_attr_errors);
+ if (rc)
+ goto out8_unreg;
+
+ rc = driver_create_file(drv, &driver_attr_last_error);
+ if (rc)
+ goto out9_unreg;
+
+ rc = driver_create_file(drv, &driver_attr_last_error_pid);
+ if (rc)
+ goto out10_unreg;
+
+ goto out;
+
+out10_unreg:
+ driver_remove_file(drv, &driver_attr_last_error);
+out9_unreg:
+ driver_remove_file(drv, &driver_attr_errors);
+out8_unreg:
+ driver_remove_file(drv, &driver_attr_sha512_bytes);
+out7_unreg:
+ driver_remove_file(drv, &driver_attr_sha512_ops);
+out6_unreg:
+ driver_remove_file(drv, &driver_attr_sha256_bytes);
+out5_unreg:
+ driver_remove_file(drv, &driver_attr_sha256_ops);
+out4_unreg:
+ driver_remove_file(drv, &driver_attr_sync_ops);
+out3_unreg:
+ driver_remove_file(drv, &driver_attr_aes_bytes);
+out2_unreg:
+ driver_remove_file(drv, &driver_attr_aes_ops);
+out:
+ return rc;
+}
+
+void
+nx_sysfs_fini(struct device_driver *drv)
+{
+	driver_remove_file(drv, &driver_attr_last_error_pid);
+	driver_remove_file(drv, &driver_attr_last_error);
+	driver_remove_file(drv, &driver_attr_errors);
+	driver_remove_file(drv, &driver_attr_sha512_bytes);
+	driver_remove_file(drv, &driver_attr_sha512_ops);
+	driver_remove_file(drv, &driver_attr_sha256_bytes);
+	driver_remove_file(drv, &driver_attr_sha256_ops);
+	driver_remove_file(drv, &driver_attr_sync_ops);
+	driver_remove_file(drv, &driver_attr_aes_bytes);
+	driver_remove_file(drv, &driver_attr_aes_ops);
+}
--
1.7.1

2012-03-21 21:40:52

by Kent Yoder

[permalink] [raw]
Subject: [PATCH 13/17] powerpc: crypto: SHA512 hash routines for nx encryption

These routines add support for SHA-512 hashing on the Power7+ CPU's
in-Nest accelerator driver.

Signed-off-by: Kent Yoder <[email protected]>
---
arch/powerpc/crypto/nx/nx-sha512.c | 259 ++++++++++++++++++++++++++++++++++++
1 files changed, 259 insertions(+), 0 deletions(-)
create mode 100644 arch/powerpc/crypto/nx/nx-sha512.c

diff --git a/arch/powerpc/crypto/nx/nx-sha512.c b/arch/powerpc/crypto/nx/nx-sha512.c
new file mode 100644
index 0000000..4f8a43b
--- /dev/null
+++ b/arch/powerpc/crypto/nx/nx-sha512.c
@@ -0,0 +1,259 @@
+/**
+ * SHA-512 routines supporting the Power 7+ Nest Accelerators driver
+ *
+ * Copyright (C) 2011-2012 International Business Machines Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 only.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ *
+ * Author: Kent Yoder <[email protected]>
+ */
+
+#include <crypto/internal/hash.h>
+#include <crypto/sha.h>
+#include <linux/module.h>
+#include <asm/vio.h>
+
+#include "nx_csbcpb.h"
+#include "nx.h"
+
+
+static int nx_sha512_init(struct shash_desc *desc)
+{
+ struct sha512_state *sctx = shash_desc_ctx(desc);
+ struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(&desc->tfm->base);
+ struct nx_sg *out_sg;
+
+ nx_ctx_init(nx_ctx, HCOP_FC_SHA);
+
+ memset(sctx, 0, sizeof *sctx);
+
+ nx_ctx->ap = &nx_ctx->props[NX_PROPS_SHA512];
+
+ nx_ctx->csbcpb->cpb.hdr.digest_size = NX_DS_SHA512;
+ out_sg = nx_build_sg_list(nx_ctx->out_sg, (u8 *)sctx->state,
+ SHA512_DIGEST_SIZE, nx_ctx->ap->sglen);
+ nx_ctx->op.outlen = (nx_ctx->out_sg - out_sg) * sizeof(struct nx_sg);
+
+ return 0;
+}
+
+static int nx_sha512_update(struct shash_desc *desc, const u8 *data,
+ unsigned int len)
+{
+ struct sha512_state *sctx = shash_desc_ctx(desc);
+ struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(&desc->tfm->base);
+ struct nx_csbcpb *csbcpb = (struct nx_csbcpb *)nx_ctx->csbcpb;
+ struct nx_sg *in_sg;
+ u64 to_process, leftover, spbc_bits;
+ int rc;
+
+ if (csbcpb->cpb.hdr.fdm.continuation == 1) {
+ /* we've hit the nx chip previously and we're updating again,
+ * so copy over the partial digest */
+ memcpy(csbcpb->cpb.sha512.input_partial_digest,
+ csbcpb->cpb.sha512.message_digest, SHA512_DIGEST_SIZE);
+ }
+
+ /* 2 cases for total data len:
+ * 1: <= SHA512_BLOCK_SIZE: copy into state, return 0
+ * 2: > SHA512_BLOCK_SIZE: process X blocks, copy in leftover
+ */
+ if ((u64)len + sctx->count[0] <= SHA512_BLOCK_SIZE) {
+ memcpy(sctx->buf + sctx->count[0], data, len);
+ sctx->count[0] += len;
+ return 0;
+ }
+
+ /* to_process: the SHA512_BLOCK_SIZE data chunk to process in this
+ * update */
+ to_process = (sctx->count[0] + len) & ~(SHA512_BLOCK_SIZE - 1);
+ leftover = (sctx->count[0] + len) & (SHA512_BLOCK_SIZE - 1);
+
+ if (sctx->count[0]) {
+ in_sg = nx_build_sg_list(nx_ctx->in_sg, (u8 *)sctx->buf,
+ sctx->count[0], nx_ctx->ap->sglen);
+ in_sg = nx_build_sg_list(in_sg, (u8 *)data,
+ to_process - sctx->count[0],
+ nx_ctx->ap->sglen);
+ nx_ctx->op.inlen = (nx_ctx->in_sg - in_sg) *
+ sizeof(struct nx_sg);
+ } else {
+ in_sg = nx_build_sg_list(nx_ctx->in_sg, (u8 *)data,
+ to_process, nx_ctx->ap->sglen);
+ nx_ctx->op.inlen = (nx_ctx->in_sg - in_sg) *
+ sizeof(struct nx_sg);
+ }
+
+ csbcpb->cpb.hdr.fdm.intermediate = 1;
+
+ if (!nx_ctx->op.inlen || !nx_ctx->op.outlen)
+ return -EINVAL;
+
+ rc = nx_hcall_sync(nx_ctx, &nx_ctx->op);
+ if (rc)
+ goto out;
+
+ atomic_inc(&(nx_ctx->stats->sha512_ops));
+
+ /* copy the leftover back into the state struct */
+ memcpy(sctx->buf, data + len - leftover, leftover);
+ sctx->count[0] = leftover;
+
+ spbc_bits = csbcpb->cpb.sha512.spbc * 8;
+ csbcpb->cpb.sha512.message_bit_length_lo += spbc_bits;
+ if (csbcpb->cpb.sha512.message_bit_length_lo < spbc_bits)
+ csbcpb->cpb.sha512.message_bit_length_hi++;
+
+ /* everything after the first update is continuation */
+ csbcpb->cpb.hdr.fdm.continuation = 1;
+out:
+ return rc;
+}
+
+static int nx_sha512_final(struct shash_desc *desc, u8 *out)
+{
+ struct sha512_state *sctx = shash_desc_ctx(desc);
+ struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(&desc->tfm->base);
+ struct nx_csbcpb *csbcpb = (struct nx_csbcpb *)nx_ctx->csbcpb;
+ struct nx_sg *in_sg, *out_sg;
+ u64 count0;
+ int rc;
+
+ if (csbcpb->cpb.hdr.fdm.continuation == 1) {
+ /* we've hit the nx chip previously, now we're finalizing,
+ * so copy over the partial digest */
+ memcpy(csbcpb->cpb.sha512.input_partial_digest,
+ csbcpb->cpb.sha512.message_digest, SHA512_DIGEST_SIZE);
+ }
+
+ /* final is represented by continuing the operation and indicating that
+ * this is not an intermediate operation */
+ csbcpb->cpb.hdr.fdm.intermediate = 0;
+
+ count0 = sctx->count[0] * 8;
+
+ csbcpb->cpb.sha512.message_bit_length_lo += count0;
+ if (csbcpb->cpb.sha512.message_bit_length_lo < count0)
+ csbcpb->cpb.sha512.message_bit_length_hi++;
+
+ in_sg = nx_build_sg_list(nx_ctx->in_sg, sctx->buf, sctx->count[0],
+ nx_ctx->ap->sglen);
+ out_sg = nx_build_sg_list(nx_ctx->out_sg, out, SHA512_DIGEST_SIZE,
+ nx_ctx->ap->sglen);
+ nx_ctx->op.inlen = (nx_ctx->in_sg - in_sg) * sizeof(struct nx_sg);
+ nx_ctx->op.outlen = (nx_ctx->out_sg - out_sg) * sizeof(struct nx_sg);
+
+ if (!nx_ctx->op.outlen)
+ return -EINVAL;
+
+ rc = nx_hcall_sync(nx_ctx, &nx_ctx->op);
+ if (rc)
+ goto out;
+
+ atomic_inc(&(nx_ctx->stats->sha512_ops));
+ atomic64_add(csbcpb->cpb.sha512.message_bit_length_lo,
+ &(nx_ctx->stats->sha512_bytes));
+
+ memcpy(out, csbcpb->cpb.sha512.message_digest, SHA512_DIGEST_SIZE);
+out:
+ return rc;
+}
+
+static int nx_sha512_export(struct shash_desc *desc, void *out)
+{
+ struct sha512_state *sctx = shash_desc_ctx(desc);
+ struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(&desc->tfm->base);
+ struct nx_csbcpb *csbcpb = (struct nx_csbcpb *)nx_ctx->csbcpb;
+ struct sha512_state *octx = out;
+
+ /* move message_bit_length (128 bits) into count and convert its value
+ * to bytes */
+ octx->count[0] = csbcpb->cpb.sha512.message_bit_length_lo >> 3 |
+ ((csbcpb->cpb.sha512.message_bit_length_hi & 7) << 61);
+ octx->count[1] = csbcpb->cpb.sha512.message_bit_length_hi >> 3;
+
+ octx->count[0] += sctx->count[0];
+ if (octx->count[0] < sctx->count[0])
+ octx->count[1]++;
+
+ memcpy(octx->buf, sctx->buf, sizeof(octx->buf));
+
+ /* if no data has been processed yet, we need to export SHA512's
+ * initial data, in case this context gets imported into a software
+ * context */
+ if (csbcpb->cpb.sha512.message_bit_length_hi ||
+ csbcpb->cpb.sha512.message_bit_length_lo)
+ memcpy(octx->state, csbcpb->cpb.sha512.message_digest,
+ SHA512_DIGEST_SIZE);
+ else {
+ octx->state[0] = SHA512_H0;
+ octx->state[1] = SHA512_H1;
+ octx->state[2] = SHA512_H2;
+ octx->state[3] = SHA512_H3;
+ octx->state[4] = SHA512_H4;
+ octx->state[5] = SHA512_H5;
+ octx->state[6] = SHA512_H6;
+ octx->state[7] = SHA512_H7;
+ }
+
+ return 0;
+}
+
+static int nx_sha512_import(struct shash_desc *desc, const void *in)
+{
+ struct sha512_state *sctx = shash_desc_ctx(desc);
+ struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(&desc->tfm->base);
+ struct nx_csbcpb *csbcpb = (struct nx_csbcpb *)nx_ctx->csbcpb;
+ const struct sha512_state *ictx = in;
+
+ memcpy(sctx->buf, ictx->buf, sizeof(ictx->buf));
+ sctx->count[0] = ictx->count[0] & 0x3f;
+ csbcpb->cpb.sha512.message_bit_length_lo = (ictx->count[0] & ~0x3f)
+ << 3;
+ csbcpb->cpb.sha512.message_bit_length_hi = ictx->count[1] << 3 |
+ ictx->count[0] >> 61;
+
+ if (csbcpb->cpb.sha512.message_bit_length_hi ||
+ csbcpb->cpb.sha512.message_bit_length_lo) {
+ memcpy(csbcpb->cpb.sha512.message_digest, ictx->state,
+ SHA512_DIGEST_SIZE);
+
+ csbcpb->cpb.hdr.fdm.continuation = 1;
+ csbcpb->cpb.hdr.fdm.intermediate = 1;
+ }
+
+ return 0;
+}
+
+struct shash_alg nx_shash_sha512_alg = {
+ .digestsize = SHA512_DIGEST_SIZE,
+ .init = nx_sha512_init,
+ .update = nx_sha512_update,
+ .final = nx_sha512_final,
+ .export = nx_sha512_export,
+ .import = nx_sha512_import,
+ .descsize = sizeof(struct sha512_state),
+ .statesize = sizeof(struct sha512_state),
+ .base = {
+ .cra_name = "sha512",
+ .cra_driver_name = "sha512-nx",
+ .cra_priority = 300,
+ .cra_flags = CRYPTO_ALG_TYPE_SHASH,
+ .cra_blocksize = SHA512_BLOCK_SIZE,
+ .cra_module = THIS_MODULE,
+ .cra_ctxsize = sizeof(struct nx_crypto_ctx),
+ .cra_init = nx_crypto_ctx_sha_init,
+ .cra_exit = nx_crypto_ctx_exit,
+ }
+};
--
1.7.1

2012-03-21 21:41:37

by Kent Yoder

[permalink] [raw]
Subject: [PATCH 16/17] powerpc: crypto: Build files for the nx device driver

These files support configuring and building the nx device driver.

Signed-off-by: Kent Yoder <[email protected]>
---
arch/powerpc/Makefile | 1 +
arch/powerpc/crypto/nx/Makefile | 11 +++++++++++
drivers/crypto/Kconfig | 18 ++++++++++++++++++
3 files changed, 30 insertions(+), 0 deletions(-)
create mode 100644 arch/powerpc/crypto/nx/Makefile

diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
index b8b105c..4977c65 100644
--- a/arch/powerpc/Makefile
+++ b/arch/powerpc/Makefile
@@ -159,6 +159,7 @@ core-$(CONFIG_XMON) += arch/powerpc/xmon/
core-$(CONFIG_KVM) += arch/powerpc/kvm/

drivers-$(CONFIG_OPROFILE) += arch/powerpc/oprofile/
+drivers-$(CONFIG_CRYPTO_DEV_NX) += arch/powerpc/crypto/nx/

# Default to zImage, override when needed
all: zImage
diff --git a/arch/powerpc/crypto/nx/Makefile b/arch/powerpc/crypto/nx/Makefile
new file mode 100644
index 0000000..39a283f
--- /dev/null
+++ b/arch/powerpc/crypto/nx/Makefile
@@ -0,0 +1,11 @@
+obj-$(CONFIG_CRYPTO_DEV_NX) += nx-crypto.o
+nx-crypto-objs := nx.o \
+ nx_sysfs.o \
+ nx-aes-cbc.o \
+ nx-aes-ecb.o \
+ nx-aes-gcm.o \
+ nx-aes-ccm.o \
+ nx-aes-ctr.o \
+ nx-aes-xcbc.o \
+ nx-sha256.o \
+ nx-sha512.o
diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig
index 6d16b4b..f438ca5 100644
--- a/drivers/crypto/Kconfig
+++ b/drivers/crypto/Kconfig
@@ -293,4 +293,22 @@ config CRYPTO_DEV_S5P
Select this to offload Samsung S5PV210 or S5PC110 from AES
algorithms execution.

+
+config CRYPTO_DEV_NX
+ tristate "Support for Power7+ in-Nest cryptographic acceleration"
+ depends on PPC64 && IBMVIO
+ select CRYPTO_AES
+ select CRYPTO_CBC
+ select CRYPTO_ECB
+ select CRYPTO_CCM
+ select CRYPTO_GCM
+ select CRYPTO_AUTHENC
+ select CRYPTO_XCBC
+ select CRYPTO_SHA256
+ select CRYPTO_SHA512
+ help
+ Support for Power7+ in-Nest cryptographic acceleration. This
+ module supports acceleration for AES and SHA2 algorithms. If you
+ choose 'M' here, this module will be called nx-crypto.
+
endif # CRYPTO_HW
--
1.7.1

2012-03-21 21:41:06

by Kent Yoder

[permalink] [raw]
Subject: [PATCH 17/17] powerpc: crypto: enable the PFO-based encryption device

This patch adds the CAS (ibm,client-architecture-support) bits to
advertise support for the Platform Facilities Option (PFO) based
encryption accelerator device. The nx device driver provides support
for this hardware feature.

Signed-off-by: Kent Yoder <[email protected]>
---
arch/powerpc/kernel/prom_init.c | 8 +++++++-
1 files changed, 7 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/kernel/prom_init.c b/arch/powerpc/kernel/prom_init.c
index 6691077..fb5412e 100644
--- a/arch/powerpc/kernel/prom_init.c
+++ b/arch/powerpc/kernel/prom_init.c
@@ -716,6 +716,12 @@ static void __init early_cmdline_parse(void)
#else
#define OV5_PFO_HW_RNG 0x00
#endif
+#if defined(CONFIG_CRYPTO_DEV_NX) || \
+ defined(CONFIG_CRYPTO_DEV_NX_MODULE)
+#define OV5_PFO_HW_ENCR 0x20
+#else
+#define OV5_PFO_HW_ENCR 0x00
+#endif

/* Option Vector 6: IBM PAPR hints */
#define OV6_LINUX 0x02 /* Linux is our OS */
@@ -783,7 +789,7 @@ static unsigned char ibm_architecture_vec[] = {
0,
0,
0,
- OV5_PFO_HW_RNG,
+ OV5_PFO_HW_RNG | OV5_PFO_HW_ENCR,

/* option vector 6: IBM PAPR hints */
4 - 2, /* length */
--
1.7.1

2012-03-21 21:40:01

by Kent Yoder

[permalink] [raw]
Subject: [PATCH 08/17] powerpc: crypto: AES-CTR mode routines for nx encryption

These routines add support for AES in CTR mode to the Power7+ CPU's
in-Nest accelerator driver.

Signed-off-by: Kent Yoder <[email protected]>
---
arch/powerpc/crypto/nx/nx-aes-ctr.c | 175 +++++++++++++++++++++++++++++++++++
1 files changed, 175 insertions(+), 0 deletions(-)
create mode 100644 arch/powerpc/crypto/nx/nx-aes-ctr.c

diff --git a/arch/powerpc/crypto/nx/nx-aes-ctr.c b/arch/powerpc/crypto/nx/nx-aes-ctr.c
new file mode 100644
index 0000000..a1ea44e
--- /dev/null
+++ b/arch/powerpc/crypto/nx/nx-aes-ctr.c
@@ -0,0 +1,175 @@
+/**
+ * AES CTR routines supporting the Power 7+ Nest Accelerators driver
+ *
+ * Copyright (C) 2011-2012 International Business Machines Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 only.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ *
+ * Author: Kent Yoder <[email protected]>
+ */
+
+#include <crypto/aes.h>
+#include <crypto/ctr.h>
+#include <crypto/algapi.h>
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/crypto.h>
+#include <asm/vio.h>
+
+#include "nx_csbcpb.h"
+#include "nx.h"
+
+
+static int ctr_aes_nx_set_key(struct crypto_tfm *tfm,
+ const u8 *in_key,
+ unsigned int key_len)
+{
+ struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(tfm);
+ struct nx_csbcpb *csbcpb = nx_ctx->csbcpb;
+
+ nx_ctx_init(nx_ctx, HCOP_FC_AES);
+
+ switch (key_len) {
+ case AES_KEYSIZE_128:
+ csbcpb->cpb.hdr.key_size = NX_KS_AES_128;
+ nx_ctx->ap = &nx_ctx->props[NX_PROPS_AES_128];
+ break;
+ case AES_KEYSIZE_192:
+ csbcpb->cpb.hdr.key_size = NX_KS_AES_192;
+ nx_ctx->ap = &nx_ctx->props[NX_PROPS_AES_192];
+ break;
+ case AES_KEYSIZE_256:
+ csbcpb->cpb.hdr.key_size = NX_KS_AES_256;
+ nx_ctx->ap = &nx_ctx->props[NX_PROPS_AES_256];
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ csbcpb->cpb.hdr.mode = NX_MODE_AES_CTR;
+ memcpy(csbcpb->cpb.aes_ctr.key, in_key, key_len);
+
+ return 0;
+}
+
+static int ctr3686_aes_nx_set_key(struct crypto_tfm *tfm,
+ const u8 *in_key,
+ unsigned int key_len)
+{
+ struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(tfm);
+
+ if (key_len < CTR_RFC3686_NONCE_SIZE)
+ return -EINVAL;
+
+ memcpy(nx_ctx->priv.ctr.iv,
+ in_key + key_len - CTR_RFC3686_NONCE_SIZE,
+ CTR_RFC3686_NONCE_SIZE);
+
+ key_len -= CTR_RFC3686_NONCE_SIZE;
+
+ return ctr_aes_nx_set_key(tfm, in_key, key_len);
+}
+
+static int ctr_aes_nx_crypt(struct blkcipher_desc *desc,
+ struct scatterlist *dst,
+ struct scatterlist *src,
+ unsigned int nbytes)
+{
+ struct nx_crypto_ctx *nx_ctx = crypto_blkcipher_ctx(desc->tfm);
+ struct nx_csbcpb *csbcpb = nx_ctx->csbcpb;
+ int rc;
+
+ if (nbytes > nx_ctx->ap->databytelen)
+ return -EINVAL;
+
+ rc = nx_build_sg_lists(nx_ctx, desc, dst, src, nbytes,
+ csbcpb->cpb.aes_ctr.iv);
+ if (rc)
+ goto out;
+
+ if (!nx_ctx->op.inlen || !nx_ctx->op.outlen)
+ return -EINVAL;
+
+ rc = nx_hcall_sync(nx_ctx, &nx_ctx->op);
+ if (rc)
+ goto out;
+
+ atomic_inc(&(nx_ctx->stats->aes_ops));
+ atomic64_add(csbcpb->csb.processed_byte_count,
+ &(nx_ctx->stats->aes_bytes));
+out:
+ return rc;
+}
+
+static int ctr3686_aes_nx_crypt(struct blkcipher_desc *desc,
+ struct scatterlist *dst,
+ struct scatterlist *src,
+ unsigned int nbytes)
+{
+ struct nx_crypto_ctx *nx_ctx = crypto_blkcipher_ctx(desc->tfm);
+ u8 *iv = nx_ctx->priv.ctr.iv;
+
+ memcpy(iv + CTR_RFC3686_NONCE_SIZE,
+ desc->info, CTR_RFC3686_IV_SIZE);
+ iv[15] = 1;
+
+ desc->info = nx_ctx->priv.ctr.iv;
+
+ return ctr_aes_nx_crypt(desc, dst, src, nbytes);
+}
+
+struct crypto_alg nx_ctr_aes_alg = {
+ .cra_name = "ctr(aes)",
+ .cra_driver_name = "ctr-aes-nx",
+ .cra_priority = 300,
+ .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
+ .cra_blocksize = 1,
+ .cra_ctxsize = sizeof(struct nx_crypto_ctx),
+ .cra_type = &crypto_blkcipher_type,
+ .cra_module = THIS_MODULE,
+ .cra_list = LIST_HEAD_INIT(nx_ctr_aes_alg.cra_list),
+ .cra_init = nx_crypto_ctx_aes_ctr_init,
+ .cra_exit = nx_crypto_ctx_exit,
+ .cra_blkcipher = {
+ .min_keysize = AES_MIN_KEY_SIZE,
+ .max_keysize = AES_MAX_KEY_SIZE,
+ .ivsize = AES_BLOCK_SIZE,
+ .setkey = ctr_aes_nx_set_key,
+ .encrypt = ctr_aes_nx_crypt,
+ .decrypt = ctr_aes_nx_crypt,
+ }
+};
+
+struct crypto_alg nx_ctr3686_aes_alg = {
+ .cra_name = "rfc3686(ctr(aes))",
+ .cra_driver_name = "rfc3686-ctr-aes-nx",
+ .cra_priority = 300,
+ .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
+ .cra_blocksize = 1,
+ .cra_ctxsize = sizeof(struct nx_crypto_ctx),
+ .cra_type = &crypto_blkcipher_type,
+ .cra_module = THIS_MODULE,
+ .cra_list = LIST_HEAD_INIT(nx_ctr3686_aes_alg.cra_list),
+ .cra_init = nx_crypto_ctx_aes_ctr_init,
+ .cra_exit = nx_crypto_ctx_exit,
+ .cra_blkcipher = {
+ .min_keysize = AES_MIN_KEY_SIZE + CTR_RFC3686_NONCE_SIZE,
+ .max_keysize = AES_MAX_KEY_SIZE + CTR_RFC3686_NONCE_SIZE,
+ .ivsize = CTR_RFC3686_IV_SIZE,
+ .geniv = "seqiv",
+ .setkey = ctr3686_aes_nx_set_key,
+ .encrypt = ctr3686_aes_nx_crypt,
+ .decrypt = ctr3686_aes_nx_crypt,
+ }
+};
--
1.7.1

2012-03-21 22:11:26

by Greg Kroah-Hartman

[permalink] [raw]
Subject: Re: [PATCH 15/17] powerpc: crypto: sysfs routines and docs for the nx device driver

On Wed, Mar 21, 2012 at 04:41:20PM -0500, Kent Yoder wrote:
> These routines add sysfs files supporting the Power7+ in-Nest encryption
> accelerator driver.
>
> Signed-off-by: Kent Yoder <[email protected]>
> ---
> Documentation/powerpc/pfo-nx-crypto.txt | 52 ++++++++

Please put sysfs file information in Documentation/ABI/ where it
belongs.

> arch/powerpc/crypto/nx/nx_sysfs.c | 194 +++++++++++++++++++++++++++++++
> 2 files changed, 246 insertions(+), 0 deletions(-)
> create mode 100644 Documentation/powerpc/pfo-nx-crypto.txt
> create mode 100644 arch/powerpc/crypto/nx/nx_sysfs.c
>
> diff --git a/Documentation/powerpc/pfo-nx-crypto.txt b/Documentation/powerpc/pfo-nx-crypto.txt
> new file mode 100644
> index 0000000..63440d3
> --- /dev/null
> +++ b/Documentation/powerpc/pfo-nx-crypto.txt
> @@ -0,0 +1,52 @@
> +
> +Documentation for the sysfs interfaces provided by the nx-crypto driver, built
> +in arch/powerpc/crypto/nx.
> +
> +The driver provides 2 sets of sysfs files, 1 for confirming that the device is
> +actually being used and 1 for error detection.

Shouldn't the first just be debugfs files, as no "normal" user will ever
care about such a thing?

Actually, why are these sysfs files at all, how about all of this going
into debugfs?


> +Error Detection
> +===============

<snip>

What can anyone do with any of these files? What use are they to users?

> +Device Use
> +==========

Again, what does a user care about these items for?

> +int
> +nx_sysfs_init(struct device_driver *drv)
> +{
> + int rc;
> +
> + rc = driver_create_file(drv, &driver_attr_aes_ops);
> + if (rc)
> + goto out;

<snip>

Oh, ${DEITY}, no. Please don't create files one by one, we do have
functions that do all of this for you automatically, why aren't you
using them?
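
Something like this, for example (an untested sketch with made-up
names): set up an attribute group once and the driver core creates
and removes the whole set for you when the driver is registered and
unregistered:

	static struct attribute *nx_attrs[] = {
		&driver_attr_aes_ops.attr,
		&driver_attr_aes_bytes.attr,
		&driver_attr_sync_ops.attr,
		&driver_attr_sha256_ops.attr,
		&driver_attr_sha256_bytes.attr,
		&driver_attr_sha512_ops.attr,
		&driver_attr_sha512_bytes.attr,
		&driver_attr_errors.attr,
		&driver_attr_last_error.attr,
		&driver_attr_last_error_pid.attr,
		NULL,
	};

	static const struct attribute_group nx_attr_group = {
		.attrs = nx_attrs,
	};

	static const struct attribute_group *nx_attr_groups[] = {
		&nx_attr_group,
		NULL,
	};

	/* then, instead of nx_sysfs_init()/nx_sysfs_fini(): */
	nx_driver.viodriver.driver.groups = nx_attr_groups;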

> +void
> +nx_sysfs_fini(struct device_driver *drv)
> +{
> + driver_remove_file(drv, &driver_attr_sync_ops);
> + driver_remove_file(drv, &driver_attr_aes_bytes);
> + driver_remove_file(drv, &driver_attr_aes_ops);
> + driver_remove_file(drv, &driver_attr_sha256_bytes);
> + driver_remove_file(drv, &driver_attr_sha256_ops);
> + driver_remove_file(drv, &driver_attr_sha512_bytes);
> + driver_remove_file(drv, &driver_attr_sha512_ops);

Same here, don't do this, do it all at once.

> +}

Who is calling these functions? Where in the device lifecycle are the
files being created? Did you just race userspace with how they are
created, or are you doing it "properly"? (hint, odds are, as you are
trying to manually create and remove these by hand, you aren't doing it
properly...)

thanks,

greg k-h

2012-03-21 22:15:20

by Greg Kroah-Hartman

[permalink] [raw]
Subject: Re: [PATCH 14/17] powerpc: crypto: nx driver code supporting nx encryption

On Wed, Mar 21, 2012 at 04:41:08PM -0500, Kent Yoder wrote:
> +static int nx_register_algs(void)
> +{
> + int rc = -1;
> +
> + if (nx_driver.of.flags != NX_OF_FLAG_MASK_READY)
> + goto out;
> +
> + memset(&nx_driver.stats, 0, sizeof(struct nx_stats));
> +
> + rc = nx_sysfs_init(&nx_driver.viodriver.driver);

Ok, that's not bad, you are doing it from the probe() function of your
bus. As long as the bus got things right on when uevents get sent to
userspace, does it?

> +static struct vio_device_id nx_crypto_driver_ids[] __devinitdata = {
> + { "ibm,sym-encryption-v1", "ibm,sym-encryption" },
> + { "", "" }
> +};
> +MODULE_DEVICE_TABLE(vio, nx_crypto_driver_ids);
> +
> +/* driver state structure */
> +struct nx_crypto_driver nx_driver = {
> + .viodriver = {
> + .id_table = nx_crypto_driver_ids,
> + .probe = nx_probe,
> + .remove = nx_remove,
> + .driver = {
> + .name = "nx",
> + .owner = THIS_MODULE,
> + },

Really? vio drivers are supposed to look like this with the .name and
.owner field manually being set in the static initialization of the
driver? That's sad, and should be fixed, the vio core should do this
type of thing for you.

greg k-h

2012-03-21 22:45:41

by Kent Yoder

[permalink] [raw]
Subject: Re: [PATCH 15/17] powerpc: crypto: sysfs routines and docs for the nx device driver

Hi Greg,

On Wed, 2012-03-21 at 15:11 -0700, Greg KH wrote:
> On Wed, Mar 21, 2012 at 04:41:20PM -0500, Kent Yoder wrote:
> > These routines add sysfs files supporting the Power7+ in-Nest encryption
> > accelerator driver.
> >
> > Signed-off-by: Kent Yoder <[email protected]>
> > ---
> > Documentation/powerpc/pfo-nx-crypto.txt | 52 ++++++++
>
> Please put sysfs file information in Documentation/ABI/ where it
> belongs.

Will do, I see debugfs docs in there too.

> Shouldn't the first just be debugfs files, as no "normal" user will ever
> care about such a thing?
>
> Actually, why are these sysfs files at all, how about all of this going
> into debugfs?

Yes, that's fine. These really are just for checking 'am I really
doing hardware encryption' and debugging any return codes that may come
back from the hcall. The error return is probably more easily dev_err'd
though.

>
> > +Error Detection
> > +===============
>
> <snip>
>
> What can anyone do with any of these files? What use are they to users?
>
> > +Device Use
> > +==========
>
> Again, what does a user care about these items for?
>
> > +int
> > +nx_sysfs_init(struct device_driver *drv)
> > +{
> > + int rc;
> > +
> > + rc = driver_create_file(drv, &driver_attr_aes_ops);
> > + if (rc)
> > + goto out;
>
> <snip>
>
> Oh, ${DEITY}, no. Please don't create files one by one, we do have
> functions that do all of this for you automatically, why aren't you
> using them?

Ok, I'll go look for some debugfs wrappers.
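
  Something roughly like this, maybe (untested sketch; it assumes the
stats counters get exposed as plain u64s, and "nx-crypto" is just a
made-up directory name):

	#include <linux/debugfs.h>

	static struct dentry *nx_debugfs_root;

	static void nx_debugfs_init(struct nx_crypto_driver *drv)
	{
		/* all files read-only for everyone (0444 == S_IRUGO) */
		nx_debugfs_root = debugfs_create_dir("nx-crypto", NULL);

		debugfs_create_u64("aes_ops", 0444, nx_debugfs_root,
				   &drv->stats.aes_ops);
		debugfs_create_u64("aes_bytes", 0444, nx_debugfs_root,
				   &drv->stats.aes_bytes);
		debugfs_create_u64("errors", 0444, nx_debugfs_root,
				   &drv->stats.errors);
	}

	static void nx_debugfs_fini(void)
	{
		debugfs_remove_recursive(nx_debugfs_root);
	}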

> > +void
> > +nx_sysfs_fini(struct device_driver *drv)
> > +{
> > + driver_remove_file(drv, &driver_attr_sync_ops);
> > + driver_remove_file(drv, &driver_attr_aes_bytes);
> > + driver_remove_file(drv, &driver_attr_aes_ops);
> > + driver_remove_file(drv, &driver_attr_sha256_bytes);
> > + driver_remove_file(drv, &driver_attr_sha256_ops);
> > + driver_remove_file(drv, &driver_attr_sha512_bytes);
> > + driver_remove_file(drv, &driver_attr_sha512_ops);
>
> Same here, don't do this, do it all at once.
>
> > +}
>
> Who is calling these functions? Where in the device lifecycle are the
> files being created? Did you just race userspace with how they are
> created, or are you doing it "properly"? (hint, odds are, as you are
> trying to manually create and remove these by hand, you aren't doing it
> properly...)

  We're not racing here, other than with cat. The files are just there
so an admin can see that his workload is getting offloaded.

Thanks,
Kent

> thanks,
>
> greg k-h
>

2012-03-22 01:50:17

by Benjamin Herrenschmidt

[permalink] [raw]
Subject: Re: [PATCH 14/17] powerpc: crypto: nx driver code supporting nx encryption

On Wed, 2012-03-21 at 15:15 -0700, Greg KH wrote:
>
> Really? vio drivers are supposed to look like this with the .name and
> .owner field manually being set in the static initialization of the
> driver? That's sad, and should be fixed, the vio core should do this
> type of thing for you.

Yeah, they still do it the old way, nobody got to fix that yet, should
be pretty easy though.

Cheers,
Ben.

2012-03-22 02:57:55

by Benjamin Herrenschmidt

[permalink] [raw]
Subject: Re: [PATCH 14/17] powerpc: crypto: nx driver code supporting nx encryption

On Thu, 2012-03-22 at 12:50 +1100, Benjamin Herrenschmidt wrote:
> On Wed, 2012-03-21 at 15:15 -0700, Greg KH wrote:
> >
> > Really? vio drivers are supposed to look like this with the .name and
> > .owner field manually being set in the static initialization of the
> > driver? That's sad, and should be fixed, the vio core should do this
> > type of thing for you.
>
> Yeah, they still do it the old way, nobody got to fix that yet, should
> be pretty easy though.

Kent, Dave, care to try this? It's only compile tested.

powerpc+sparc/vio: Modernize driver registration

This makes vio_register_driver() get the module owner & name at compile
time like PCI drivers do, and adds a name pointer directly in struct
vio_driver to avoid having to explicitly initialize the embedded
struct device.

Signed-off-by: Benjamin Herrenschmidt <[email protected]>
---
arch/powerpc/include/asm/vio.h | 10 +++++++++-
arch/powerpc/kernel/vio.c | 9 +++++++--
arch/sparc/include/asm/vio.h | 9 ++++++++-
arch/sparc/kernel/ds.c | 5 +----
arch/sparc/kernel/vio.c | 9 +++++++--
drivers/block/sunvdc.c | 5 +----
drivers/net/ethernet/ibm/ibmveth.c | 7 ++-----
drivers/net/ethernet/sun/sunvnet.c | 5 +----
drivers/scsi/ibmvscsi/ibmvfc.c | 7 ++-----
drivers/scsi/ibmvscsi/ibmvscsi.c | 7 ++-----
drivers/scsi/ibmvscsi/ibmvstgt.c | 5 +----
drivers/tty/hvc/hvc_vio.c | 7 ++-----
drivers/tty/hvc/hvcs.c | 5 +----
13 files changed, 44 insertions(+), 46 deletions(-)

diff --git a/arch/powerpc/include/asm/vio.h b/arch/powerpc/include/asm/vio.h
index 0a290a1..6bfd5ff 100644
--- a/arch/powerpc/include/asm/vio.h
+++ b/arch/powerpc/include/asm/vio.h
@@ -69,6 +69,7 @@ struct vio_dev {
};

struct vio_driver {
+ const char *name;
const struct vio_device_id *id_table;
int (*probe)(struct vio_dev *dev, const struct vio_device_id *id);
int (*remove)(struct vio_dev *dev);
@@ -76,10 +77,17 @@ struct vio_driver {
* be loaded in a CMO environment if it uses DMA.
*/
unsigned long (*get_desired_dma)(struct vio_dev *dev);
+ const struct dev_pm_ops *pm;
struct device_driver driver;
};

-extern int vio_register_driver(struct vio_driver *drv);
+extern int __vio_register_driver(struct vio_driver *drv, struct module *owner,
+ const char *mod_name);
+/*
+ * vio_register_driver must be a macro so that KBUILD_MODNAME can be expanded
+ */
+#define vio_register_driver(driver) \
+ __vio_register_driver(driver, THIS_MODULE, KBUILD_MODNAME)
extern void vio_unregister_driver(struct vio_driver *drv);

extern int vio_cmo_entitlement_update(size_t);
diff --git a/arch/powerpc/kernel/vio.c b/arch/powerpc/kernel/vio.c
index bca3fc4..879dd25 100644
--- a/arch/powerpc/kernel/vio.c
+++ b/arch/powerpc/kernel/vio.c
@@ -1159,17 +1159,22 @@ static int vio_bus_remove(struct device *dev)
* vio_register_driver: - Register a new vio driver
* @drv: The vio_driver structure to be registered.
*/
-int vio_register_driver(struct vio_driver *viodrv)
+int __vio_register_driver(struct vio_driver *viodrv, struct module *owner,
+ const char *mod_name)
{
printk(KERN_DEBUG "%s: driver %s registering\n", __func__,
viodrv->driver.name);

/* fill in 'struct driver' fields */
+ viodrv->driver.name = viodrv->name;
+ viodrv->driver.pm = viodrv->pm;
viodrv->driver.bus = &vio_bus_type;
+ viodrv->driver.owner = owner;
+ viodrv->driver.mod_name = mod_name;

return driver_register(&viodrv->driver);
}
-EXPORT_SYMBOL(vio_register_driver);
+EXPORT_SYMBOL(__vio_register_driver);

/**
* vio_unregister_driver - Remove registration of vio driver.
diff --git a/arch/sparc/include/asm/vio.h b/arch/sparc/include/asm/vio.h
index 9d83d3b..432afa8 100644
--- a/arch/sparc/include/asm/vio.h
+++ b/arch/sparc/include/asm/vio.h
@@ -284,6 +284,7 @@ struct vio_dev {
};

struct vio_driver {
+ const char *name;
struct list_head node;
const struct vio_device_id *id_table;
int (*probe)(struct vio_dev *dev, const struct vio_device_id *id);
@@ -371,7 +372,13 @@ do { if (vio->debug & VIO_DEBUG_##TYPE) \
vio->vdev->channel_id, ## a); \
} while (0)

-extern int vio_register_driver(struct vio_driver *drv);
+extern int __vio_register_driver(struct vio_driver *drv, struct module *owner,
+ const char *mod_name);
+/*
+ * vio_register_driver must be a macro so that KBUILD_MODNAME can be expanded
+ */
+#define vio_register_driver(driver) \
+ __vio_register_driver(driver, THIS_MODULE, KBUILD_MODNAME)
extern void vio_unregister_driver(struct vio_driver *drv);

static inline struct vio_driver *to_vio_driver(struct device_driver *drv)
diff --git a/arch/sparc/kernel/ds.c b/arch/sparc/kernel/ds.c
index 381edcd..fea13c7 100644
--- a/arch/sparc/kernel/ds.c
+++ b/arch/sparc/kernel/ds.c
@@ -1244,10 +1244,7 @@ static struct vio_driver ds_driver = {
.id_table = ds_match,
.probe = ds_probe,
.remove = ds_remove,
- .driver = {
- .name = "ds",
- .owner = THIS_MODULE,
- }
+ .name = "ds",
};

static int __init ds_init(void)
diff --git a/arch/sparc/kernel/vio.c b/arch/sparc/kernel/vio.c
index f67e28e..6758b0b 100644
--- a/arch/sparc/kernel/vio.c
+++ b/arch/sparc/kernel/vio.c
@@ -119,13 +119,18 @@ static struct bus_type vio_bus_type = {
.remove = vio_device_remove,
};

-int vio_register_driver(struct vio_driver *viodrv)
+int __vio_register_driver(struct vio_driver *viodrv, struct module *owner,
+ const char *mod_name)
{
viodrv->driver.bus = &vio_bus_type;
+ viodrv->driver.name = viodrv->name;
+ viodrv->driver.bus = &vio_bus_type;
+ viodrv->driver.owner = owner;
+ viodrv->driver.mod_name = mod_name;

return driver_register(&viodrv->driver);
}
-EXPORT_SYMBOL(vio_register_driver);
+EXPORT_SYMBOL(__vio_register_driver);

void vio_unregister_driver(struct vio_driver *viodrv)
{
diff --git a/drivers/block/sunvdc.c b/drivers/block/sunvdc.c
index 48e8fee..9dcf76a 100644
--- a/drivers/block/sunvdc.c
+++ b/drivers/block/sunvdc.c
@@ -839,10 +839,7 @@ static struct vio_driver vdc_port_driver = {
.id_table = vdc_port_match,
.probe = vdc_port_probe,
.remove = vdc_port_remove,
- .driver = {
- .name = "vdc_port",
- .owner = THIS_MODULE,
- }
+ .name = "vdc_port",
};

static int __init vdc_init(void)
diff --git a/drivers/net/ethernet/ibm/ibmveth.c b/drivers/net/ethernet/ibm/ibmveth.c
index e877371..9010cea 100644
--- a/drivers/net/ethernet/ibm/ibmveth.c
+++ b/drivers/net/ethernet/ibm/ibmveth.c
@@ -1616,11 +1616,8 @@ static struct vio_driver ibmveth_driver = {
.probe = ibmveth_probe,
.remove = ibmveth_remove,
.get_desired_dma = ibmveth_get_desired_dma,
- .driver = {
- .name = ibmveth_driver_name,
- .owner = THIS_MODULE,
- .pm = &ibmveth_pm_ops,
- }
+ .name = ibmveth_driver_name,
+ .pm = &ibmveth_pm_ops,
};

static int __init ibmveth_module_init(void)
diff --git a/drivers/net/ethernet/sun/sunvnet.c b/drivers/net/ethernet/sun/sunvnet.c
index 8c6c059..998c0f0 100644
--- a/drivers/net/ethernet/sun/sunvnet.c
+++ b/drivers/net/ethernet/sun/sunvnet.c
@@ -1264,10 +1264,7 @@ static struct vio_driver vnet_port_driver = {
.id_table = vnet_port_match,
.probe = vnet_port_probe,
.remove = vnet_port_remove,
- .driver = {
- .name = "vnet_port",
- .owner = THIS_MODULE,
- }
+ .name = "vnet_port",
};

static int __init vnet_init(void)
diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c
index bdfa223..134a0ae 100644
--- a/drivers/scsi/ibmvscsi/ibmvfc.c
+++ b/drivers/scsi/ibmvscsi/ibmvfc.c
@@ -4890,11 +4890,8 @@ static struct vio_driver ibmvfc_driver = {
.probe = ibmvfc_probe,
.remove = ibmvfc_remove,
.get_desired_dma = ibmvfc_get_desired_dma,
- .driver = {
- .name = IBMVFC_NAME,
- .owner = THIS_MODULE,
- .pm = &ibmvfc_pm_ops,
- }
+ .name = IBMVFC_NAME,
+ .pm = &ibmvfc_pm_ops,
};

static struct fc_function_template ibmvfc_transport_functions = {
diff --git a/drivers/scsi/ibmvscsi/ibmvscsi.c b/drivers/scsi/ibmvscsi/ibmvscsi.c
index e984951..3a6c474 100644
--- a/drivers/scsi/ibmvscsi/ibmvscsi.c
+++ b/drivers/scsi/ibmvscsi/ibmvscsi.c
@@ -2061,11 +2061,8 @@ static struct vio_driver ibmvscsi_driver = {
.probe = ibmvscsi_probe,
.remove = ibmvscsi_remove,
.get_desired_dma = ibmvscsi_get_desired_dma,
- .driver = {
- .name = "ibmvscsi",
- .owner = THIS_MODULE,
- .pm = &ibmvscsi_pm_ops,
- }
+ .name = "ibmvscsi",
+ .pm = &ibmvscsi_pm_ops,
};

static struct srp_function_template ibmvscsi_transport_functions = {
diff --git a/drivers/scsi/ibmvscsi/ibmvstgt.c b/drivers/scsi/ibmvscsi/ibmvstgt.c
index 2256bab..aa7ed81 100644
--- a/drivers/scsi/ibmvscsi/ibmvstgt.c
+++ b/drivers/scsi/ibmvscsi/ibmvstgt.c
@@ -918,10 +918,7 @@ static struct vio_driver ibmvstgt_driver = {
.id_table = ibmvstgt_device_table,
.probe = ibmvstgt_probe,
.remove = ibmvstgt_remove,
- .driver = {
- .name = "ibmvscsis",
- .owner = THIS_MODULE,
- }
+ .name = "ibmvscsis",
};

static int get_system_info(void)
diff --git a/drivers/tty/hvc/hvc_vio.c b/drivers/tty/hvc/hvc_vio.c
index 3a0d53d..ee30779 100644
--- a/drivers/tty/hvc/hvc_vio.c
+++ b/drivers/tty/hvc/hvc_vio.c
@@ -310,11 +310,8 @@ static int __devexit hvc_vio_remove(struct vio_dev *vdev)
static struct vio_driver hvc_vio_driver = {
.id_table = hvc_driver_table,
.probe = hvc_vio_probe,
- .remove = __devexit_p(hvc_vio_remove),
- .driver = {
- .name = hvc_driver_name,
- .owner = THIS_MODULE,
- }
+ .remove = hvc_vio_remove,
+ .name = hvc_driver_name,
};

static int __init hvc_vio_init(void)
diff --git a/drivers/tty/hvc/hvcs.c b/drivers/tty/hvc/hvcs.c
index d237591..3436436 100644
--- a/drivers/tty/hvc/hvcs.c
+++ b/drivers/tty/hvc/hvcs.c
@@ -879,10 +879,7 @@ static struct vio_driver hvcs_vio_driver = {
.id_table = hvcs_driver_table,
.probe = hvcs_probe,
.remove = __devexit_p(hvcs_remove),
- .driver = {
- .name = hvcs_driver_name,
- .owner = THIS_MODULE,
- }
+ .name = hvcs_driver_name,
};

/* Only called from hvcs_get_pi please */

2012-03-22 03:40:00

by Greg Kroah-Hartman

[permalink] [raw]
Subject: Re: [PATCH 14/17] powerpc: crypto: nx driver code supporting nx encryption

On Thu, Mar 22, 2012 at 01:57:30PM +1100, Benjamin Herrenschmidt wrote:
> +int __vio_register_driver(struct vio_driver *viodrv, struct module *owner,
> + const char *mod_name)
> {
> viodrv->driver.bus = &vio_bus_type;
> + viodrv->driver.name = viodrv->name;
> + viodrv->driver.bus = &vio_bus_type;
> + viodrv->driver.owner = owner;
> + viodrv->driver.mod_name = mod_name;

Any reason you set .bus twice?

2012-03-22 05:39:56

by Benjamin Herrenschmidt

[permalink] [raw]
Subject: Re: [PATCH 14/17] powerpc: crypto: nx driver code supporting nx encryption

On Wed, 2012-03-21 at 20:39 -0700, Greg KH wrote:
> On Thu, Mar 22, 2012 at 01:57:30PM +1100, Benjamin Herrenschmidt wrote:
> > +int __vio_register_driver(struct vio_driver *viodrv, struct module *owner,
> > + const char *mod_name)
> > {
> > viodrv->driver.bus = &vio_bus_type;
> > + viodrv->driver.name = viodrv->name;
> > + viodrv->driver.bus = &vio_bus_type;
> > + viodrv->driver.owner = owner;
> > + viodrv->driver.mod_name = mod_name;
>
> Any reason you set .bus twice?

Nope, just a typo. I'll fix it up, when I have feedback from Dave.

Cheers,
Ben.

2012-03-22 09:55:56

by Anton Blanchard

[permalink] [raw]
Subject: Re: [PATCH 05/17] pseries: Enabled the PFO-based RNG accelerator


Hi,

+#if defined(CONFIG_HW_RANDOM_PSERIES) || \
+ defined(CONFIG_HW_RANDOM_PSERIES_MODULE)
+#define OV5_PFO_HW_RNG 0x80 /* PFO Random Number Generator */
+#else
+#define OV5_PFO_HW_RNG 0x00
+#endif

Milton tipped me off about this. We really don't want to be doing
ibm,client-architecture reboots every time a config option is changed.

Let's just hardwire it on.
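
In other words, just hardwire it (a sketch, not a patch):

	#define OV5_PFO_HW_RNG		0x80	/* PFO Random Number Generator */

and drop the #if/#else/#endif around it.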

Anton

2012-03-22 17:17:35

by Kumar Gala

[permalink] [raw]
Subject: Re: [PATCH 00/17] Platform Facilities Option and crypto accelerator driver


On Mar 21, 2012, at 4:28 PM, Kent Yoder wrote:

> arch/powerpc/crypto/nx/Makefile | 11 +
> arch/powerpc/crypto/nx/nx-aes-cbc.c | 135 +++++
> arch/powerpc/crypto/nx/nx-aes-ccm.c | 466 ++++++++++++++++++
> arch/powerpc/crypto/nx/nx-aes-ctr.c | 175 +++++++
> arch/powerpc/crypto/nx/nx-aes-ecb.c | 133 +++++
> arch/powerpc/crypto/nx/nx-aes-gcm.c | 352 +++++++++++++
> arch/powerpc/crypto/nx/nx-aes-xcbc.c | 230 +++++++++
> arch/powerpc/crypto/nx/nx-sha256.c | 240 +++++++++
> arch/powerpc/crypto/nx/nx-sha512.c | 259 ++++++++++
> arch/powerpc/crypto/nx/nx.c | 710 +++++++++++++++++++++++++++
> arch/powerpc/crypto/nx/nx.h | 190 +++++++
> arch/powerpc/crypto/nx/nx_csbcpb.h | 246 +++++++++
> arch/powerpc/crypto/nx/nx_sysfs.c | 194 ++++++++

Is there a reason this isn't in drivers/crypto/

- k

2012-03-22 19:07:28

by Kent Yoder

[permalink] [raw]
Subject: Re: [PATCH 00/17] Platform Facilities Option and crypto accelerator driver

Hi Kumar,

> Is there a reason this isn't in drivers/crypto/

Other arch-specific dirs have their crypto subdir as well such as
arch/s390. I was just matching that.

Kent

> - k

2012-03-23 16:07:10

by Kumar Gala

[permalink] [raw]
Subject: Re: [PATCH 00/17] Platform Facilities Option and crypto accelerator driver


On Mar 22, 2012, at 2:08 PM, Kent Yoder wrote:

> Hi Kumar,
>
>> Is there a reason this isn't in drivers/crypto/
>
> Other arch-specific dirs have their crypto subdir as well such as
> arch/s390. I was just matching that.
>
> Kent
>
>> - k
>

From what I can tell these aren't ISA-level instructions and thus should NOT be in arch/powerpc. This should be moved into drivers/crypto

- k

2012-03-26 16:09:51

by Kent Yoder

[permalink] [raw]
Subject: Re: [PATCH 00/17] Platform Facilities Option and crypto accelerator driver


>
> From what I can tell these aren't ISA-level instructions and thus should NOT be in arch/powerpc. This should be moved into drivers/crypto

That makes sense. I'll move them over in my next submission.

Kent

> - k