This series introduces crypto kfuncs to allow BPF programs to utilize
the kernel crypto subsystem. Crypto operations are made pluggable to
avoid growing the kernel when they are not needed. Only skcipher is
added within this series, but the framework can easily be extended to
other types of operations. Hardware offload is not supported because it
requires a sleepable context, which is not available to TC or XDP
programs. The crypto context initialization kfunc, on the other hand,
can only run in a sleepable context, which is why it has to be run
separately and the resulting context stored in a map.
Selftests show the common way to implement crypto actions in BPF
programs. A benchmark is also added to provide a baseline.
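A rough sketch of the intended usage pattern, condensed from the
selftests added later in the series (crypto_common.h and
crypto_sanity.c); variable declarations, map plumbing and error
handling are omitted:

  /* sleepable program, e.g. SEC("syscall"): create the context once */
  struct bpf_crypto_params params = {
          .type    = "skcipher",
          .algo    = "ecb(aes)",
          .key     = "12345678testtest",
          .key_len = 16,
  };
  int err = 0;
  struct bpf_crypto_ctx *cctx = bpf_crypto_ctx_create(&params, &err);

  if (cctx)
          /* selftest helper: stash the context in a map value as a __kptr */
          crypto_ctx_insert(cctx);

  /* TC/XDP program: look the stored context up and run the fast path */
  struct bpf_dynptr psrc, pdst, iv;

  bpf_dynptr_from_skb(skb, 0, &psrc);
  bpf_dynptr_from_mem(dst, sizeof(dst), 0, &pdst);
  bpf_dynptr_from_mem(dst, 0, 0, &iv); /* zero-sized IV for ecb(aes) */
  status = bpf_crypto_decrypt(ctx, &psrc, &pdst, &iv);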
Vadim Fedorenko (4):
bpf: make common crypto API for TC/XDP programs
bpf: crypto: add skcipher to bpf crypto
selftests: bpf: crypto skcipher algo selftests
selftests: bpf: crypto: add benchmark for crypto functions
MAINTAINERS | 8 +
crypto/Makefile | 3 +
crypto/bpf_crypto_skcipher.c | 82 ++++
include/linux/bpf.h | 1 +
include/linux/bpf_crypto.h | 24 ++
kernel/bpf/Makefile | 3 +
kernel/bpf/crypto.c | 377 ++++++++++++++++++
kernel/bpf/helpers.c | 2 +-
kernel/bpf/verifier.c | 1 +
tools/testing/selftests/bpf/Makefile | 2 +
tools/testing/selftests/bpf/bench.c | 6 +
.../selftests/bpf/benchs/bench_bpf_crypto.c | 190 +++++++++
tools/testing/selftests/bpf/config | 5 +
.../selftests/bpf/prog_tests/crypto_sanity.c | 200 ++++++++++
.../selftests/bpf/progs/crypto_basic.c | 70 ++++
.../selftests/bpf/progs/crypto_bench.c | 111 ++++++
.../selftests/bpf/progs/crypto_common.h | 67 ++++
.../selftests/bpf/progs/crypto_sanity.c | 161 ++++++++
.../selftests/bpf/progs/crypto_share.h | 10 +
19 files changed, 1322 insertions(+), 1 deletion(-)
create mode 100644 crypto/bpf_crypto_skcipher.c
create mode 100644 include/linux/bpf_crypto.h
create mode 100644 kernel/bpf/crypto.c
create mode 100644 tools/testing/selftests/bpf/benchs/bench_bpf_crypto.c
create mode 100644 tools/testing/selftests/bpf/prog_tests/crypto_sanity.c
create mode 100644 tools/testing/selftests/bpf/progs/crypto_basic.c
create mode 100644 tools/testing/selftests/bpf/progs/crypto_bench.c
create mode 100644 tools/testing/selftests/bpf/progs/crypto_common.h
create mode 100644 tools/testing/selftests/bpf/progs/crypto_sanity.c
create mode 100644 tools/testing/selftests/bpf/progs/crypto_share.h
--
2.43.0
Add crypto API support to BPF to be able to decrypt or encrypt packets
in TC/XDP BPF programs. Special care should be taken with the
initialization part of the crypto algorithm because crypto allocation
doesn't work with preemption disabled and can only be run in a
sleepable BPF program. Async crypto is not supported either, for the
very same reason: TC/XDP BPF programs are not sleepable.
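From the BPF program side the new kfuncs are declared as follows (this
mirrors the declarations used by the selftests later in the series):

  struct bpf_crypto_ctx *bpf_crypto_ctx_create(const struct bpf_crypto_params *params,
                                               int *err) __ksym;
  struct bpf_crypto_ctx *bpf_crypto_ctx_acquire(struct bpf_crypto_ctx *ctx) __ksym;
  void bpf_crypto_ctx_release(struct bpf_crypto_ctx *ctx) __ksym;
  int bpf_crypto_encrypt(struct bpf_crypto_ctx *ctx, const struct bpf_dynptr *src,
                         struct bpf_dynptr *dst, struct bpf_dynptr *iv) __ksym;
  int bpf_crypto_decrypt(struct bpf_crypto_ctx *ctx, const struct bpf_dynptr *src,
                         struct bpf_dynptr *dst, struct bpf_dynptr *iv) __ksym;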
Signed-off-by: Vadim Fedorenko <[email protected]>
---
v8 -> v9:
- improve the initialization API to provide more data and fully
initialize the crypto context, ready for crypto actions, in one call
- improve fast path to avoid indirect calls
- make bpf_crypto_create runnable from syscall type programs
v7 -> v8:
- add statesize ops to bpf crypto type as some ciphers are now stateful
- improve error path in bpf_crypto_create
v6 -> v7:
- style fixes
v5 -> v6:
- replace lskcipher with infrastructure to provide pluggable cipher
types
- add BPF skcipher as plug-in module in a separate patch
v4 -> v5:
- replace crypto API to use lskcipher (suggested by Herbert Xu)
- remove SG list usage and provide raw buffers
v3 -> v4:
- reuse __bpf_dynptr_data and remove own implementation
- use const __str to provide algorithm name
- use kfunc macros to avoid compiler warnings
v2 -> v3:
- fix kdoc issues
v1 -> v2:
- use kmalloc in sleepable func, suggested by Alexei
- use __bpf_dynptr_is_rdonly() to check destination, suggested by Jakub
- use __bpf_dynptr_data_ptr() for all dynptr accesses
---
include/linux/bpf.h | 1 +
include/linux/bpf_crypto.h | 24 +++
kernel/bpf/Makefile | 3 +
kernel/bpf/crypto.c | 377 +++++++++++++++++++++++++++++++++++++
kernel/bpf/helpers.c | 2 +-
kernel/bpf/verifier.c | 1 +
6 files changed, 407 insertions(+), 1 deletion(-)
create mode 100644 include/linux/bpf_crypto.h
create mode 100644 kernel/bpf/crypto.c
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 5034c1b4ded7..acc479c13f52 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1265,6 +1265,7 @@ int bpf_dynptr_check_size(u32 size);
u32 __bpf_dynptr_size(const struct bpf_dynptr_kern *ptr);
const void *__bpf_dynptr_data(const struct bpf_dynptr_kern *ptr, u32 len);
void *__bpf_dynptr_data_rw(const struct bpf_dynptr_kern *ptr, u32 len);
+bool __bpf_dynptr_is_rdonly(const struct bpf_dynptr_kern *ptr);
#ifdef CONFIG_BPF_JIT
int bpf_trampoline_link_prog(struct bpf_tramp_link *link, struct bpf_trampoline *tr);
diff --git a/include/linux/bpf_crypto.h b/include/linux/bpf_crypto.h
new file mode 100644
index 000000000000..a41e71d4e2d9
--- /dev/null
+++ b/include/linux/bpf_crypto.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright (c) 2024 Meta Platforms, Inc. and affiliates. */
+#ifndef _BPF_CRYPTO_H
+#define _BPF_CRYPTO_H
+
+struct bpf_crypto_type {
+ void *(*alloc_tfm)(const char *algo);
+ void (*free_tfm)(void *tfm);
+ int (*has_algo)(const char *algo);
+ int (*setkey)(void *tfm, const u8 *key, unsigned int keylen);
+ int (*setauthsize)(void *tfm, unsigned int authsize);
+ int (*encrypt)(void *tfm, const u8 *src, u8 *dst, unsigned int len, u8 *iv);
+ int (*decrypt)(void *tfm, const u8 *src, u8 *dst, unsigned int len, u8 *iv);
+ unsigned int (*ivsize)(void *tfm);
+ unsigned int (*statesize)(void *tfm);
+ u32 (*get_flags)(void *tfm);
+ struct module *owner;
+ char name[14];
+};
+
+int bpf_crypto_register_type(const struct bpf_crypto_type *type);
+int bpf_crypto_unregister_type(const struct bpf_crypto_type *type);
+
+#endif /* _BPF_CRYPTO_H */
diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
index 368c5d86b5b7..736bd22e5ce0 100644
--- a/kernel/bpf/Makefile
+++ b/kernel/bpf/Makefile
@@ -44,6 +44,9 @@ obj-$(CONFIG_BPF_SYSCALL) += bpf_struct_ops.o
obj-$(CONFIG_BPF_SYSCALL) += cpumask.o
obj-${CONFIG_BPF_LSM} += bpf_lsm.o
endif
+ifeq ($(CONFIG_CRYPTO),y)
+obj-$(CONFIG_BPF_SYSCALL) += crypto.o
+endif
obj-$(CONFIG_BPF_PRELOAD) += preload/
obj-$(CONFIG_BPF_SYSCALL) += relo_core.o
diff --git a/kernel/bpf/crypto.c b/kernel/bpf/crypto.c
new file mode 100644
index 000000000000..a76d80f37f55
--- /dev/null
+++ b/kernel/bpf/crypto.c
@@ -0,0 +1,377 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (c) 2024 Meta, Inc */
+#include <linux/bpf.h>
+#include <linux/bpf_crypto.h>
+#include <linux/bpf_mem_alloc.h>
+#include <linux/btf.h>
+#include <linux/btf_ids.h>
+#include <linux/filter.h>
+#include <linux/scatterlist.h>
+#include <linux/skbuff.h>
+#include <crypto/skcipher.h>
+
+struct bpf_crypto_type_list {
+ const struct bpf_crypto_type *type;
+ struct list_head list;
+};
+
+/* BPF crypto initialization parameters struct */
+/**
+ * struct bpf_crypto_params - BPF crypto initialization parameters structure
+ * @type: The string of crypto operation type.
+ * @algo: The string of algorithm to initialize.
+ * @key: The cipher key used to init crypto algorithm.
+ * @key_len: The length of cipher key.
+ * @authsize: The length of authentication tag used by algorithm.
+ */
+struct bpf_crypto_params {
+ char type[14];
+ char algo[128];
+ __u8 key[256];
+ __u32 key_len;
+ __u32 authsize;
+} __attribute__((aligned(8)));
+
+static LIST_HEAD(bpf_crypto_types);
+static DECLARE_RWSEM(bpf_crypto_types_sem);
+
+/**
+ * struct bpf_crypto_ctx - refcounted BPF crypto context structure
+ * @type: The pointer to bpf crypto type
+ * @tfm: The pointer to instance of crypto API struct.
+ * @rcu: The RCU head used to free the crypto context with RCU safety.
+ * @usage: Object reference counter. When the refcount goes to 0, the
+ * memory is released back to the BPF allocator, which provides
+ * RCU safety.
+ */
+struct bpf_crypto_ctx {
+ const struct bpf_crypto_type *type;
+ void *tfm;
+ u32 siv_len;
+ struct rcu_head rcu;
+ refcount_t usage;
+};
+
+int bpf_crypto_register_type(const struct bpf_crypto_type *type)
+{
+ struct bpf_crypto_type_list *node;
+ int err = -EEXIST;
+
+ down_write(&bpf_crypto_types_sem);
+ list_for_each_entry(node, &bpf_crypto_types, list) {
+ if (!strcmp(node->type->name, type->name))
+ goto unlock;
+ }
+
+ node = kmalloc(sizeof(*node), GFP_KERNEL);
+ err = -ENOMEM;
+ if (!node)
+ goto unlock;
+
+ node->type = type;
+ list_add(&node->list, &bpf_crypto_types);
+ err = 0;
+
+unlock:
+ up_write(&bpf_crypto_types_sem);
+
+ return err;
+}
+EXPORT_SYMBOL_GPL(bpf_crypto_register_type);
+
+int bpf_crypto_unregister_type(const struct bpf_crypto_type *type)
+{
+ struct bpf_crypto_type_list *node;
+ int err = -ENOENT;
+
+ down_write(&bpf_crypto_types_sem);
+ list_for_each_entry(node, &bpf_crypto_types, list) {
+ if (strcmp(node->type->name, type->name))
+ continue;
+
+ list_del(&node->list);
+ kfree(node);
+ err = 0;
+ break;
+ }
+ up_write(&bpf_crypto_types_sem);
+
+ return err;
+}
+EXPORT_SYMBOL_GPL(bpf_crypto_unregister_type);
+
+static const struct bpf_crypto_type *bpf_crypto_get_type(const char *name)
+{
+ const struct bpf_crypto_type *type = ERR_PTR(-ENOENT);
+ struct bpf_crypto_type_list *node;
+
+ down_read(&bpf_crypto_types_sem);
+ list_for_each_entry(node, &bpf_crypto_types, list) {
+ if (strcmp(node->type->name, name))
+ continue;
+
+ if (try_module_get(node->type->owner))
+ type = node->type;
+ break;
+ }
+ up_read(&bpf_crypto_types_sem);
+
+ return type;
+}
+
+__bpf_kfunc_start_defs();
+
+/**
+ * bpf_crypto_ctx_create() - Create a mutable BPF crypto context.
+ *
+ * Allocates a crypto context that can be used, acquired, and released by
+ * a BPF program. The crypto context returned by this function must either
+ * be embedded in a map as a kptr, or freed with bpf_crypto_ctx_release().
+ * As crypto API functions use GFP_KERNEL allocations, this function can
+ * only be used in sleepable BPF programs.
+ *
+ * bpf_crypto_ctx_create() allocates memory for crypto context.
+ * It may return NULL if no memory is available.
+ * @params: pointer to struct bpf_crypto_params which contains all the
+ * details needed to initialise crypto context.
+ * @err: integer to store error code when NULL is returned.
+ */
+__bpf_kfunc struct bpf_crypto_ctx *
+bpf_crypto_ctx_create(const struct bpf_crypto_params *params, int *err)
+{
+ const struct bpf_crypto_type *type;
+ struct bpf_crypto_ctx *ctx;
+
+ type = bpf_crypto_get_type(params->type);
+ if (IS_ERR(type)) {
+ *err = PTR_ERR(type);
+ return NULL;
+ }
+
+ if (!type->has_algo(params->algo)) {
+ *err = -EOPNOTSUPP;
+ goto err_module_put;
+ }
+
+ if (!params->authsize && type->setauthsize) {
+ *err = -EOPNOTSUPP;
+ goto err_module_put;
+ }
+
+ if (params->authsize && !type->setauthsize) {
+ *err = -EOPNOTSUPP;
+ goto err_module_put;
+ }
+
+ if (!params->key_len) {
+ *err = -EINVAL;
+ goto err_module_put;
+ }
+
+ ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+ if (!ctx) {
+ *err = -ENOMEM;
+ goto err_module_put;
+ }
+
+ ctx->type = type;
+ ctx->tfm = type->alloc_tfm(params->algo);
+ if (IS_ERR(ctx->tfm)) {
+ *err = PTR_ERR(ctx->tfm);
+ goto err_free_ctx;
+ }
+
+ if (params->authsize) {
+ *err = type->setauthsize(ctx->tfm, params->authsize);
+ if (*err)
+ goto err_free_tfm;
+ }
+
+ *err = type->setkey(ctx->tfm, params->key, params->key_len);
+ if (*err)
+ goto err_free_tfm;
+
+ if (type->get_flags(ctx->tfm) & CRYPTO_TFM_NEED_KEY) {
+ *err = -EINVAL;
+ goto err_free_tfm;
+ }
+
+ ctx->siv_len = type->ivsize(ctx->tfm) + type->statesize(ctx->tfm);
+
+ refcount_set(&ctx->usage, 1);
+
+ return ctx;
+
+err_free_tfm:
+ type->free_tfm(ctx->tfm);
+err_free_ctx:
+ kfree(ctx);
+err_module_put:
+ module_put(type->owner);
+
+ return NULL;
+}
+
+static void crypto_free_cb(struct rcu_head *head)
+{
+ struct bpf_crypto_ctx *ctx;
+
+ ctx = container_of(head, struct bpf_crypto_ctx, rcu);
+ ctx->type->free_tfm(ctx->tfm);
+ module_put(ctx->type->owner);
+ kfree(ctx);
+}
+
+/**
+ * bpf_crypto_ctx_acquire() - Acquire a reference to a BPF crypto context.
+ * @ctx: The BPF crypto context being acquired. The ctx must be a trusted
+ * pointer.
+ *
+ * Acquires a reference to a BPF crypto context. The context returned by this function
+ * must either be embedded in a map as a kptr, or freed with
+ * bpf_crypto_ctx_release().
+ */
+__bpf_kfunc struct bpf_crypto_ctx *
+bpf_crypto_ctx_acquire(struct bpf_crypto_ctx *ctx)
+{
+ if (!refcount_inc_not_zero(&ctx->usage))
+ return NULL;
+ return ctx;
+}
+
+/**
+ * bpf_crypto_ctx_release() - Release a previously acquired BPF crypto context.
+ * @ctx: The crypto context being released.
+ *
+ * Releases a previously acquired reference to a BPF crypto context. When the final
+ * reference of the BPF crypto context has been released, it is subsequently freed in
+ * an RCU callback in the BPF memory allocator.
+ */
+__bpf_kfunc void bpf_crypto_ctx_release(struct bpf_crypto_ctx *ctx)
+{
+ if (refcount_dec_and_test(&ctx->usage))
+ call_rcu(&ctx->rcu, crypto_free_cb);
+}
+
+static int bpf_crypto_crypt(const struct bpf_crypto_ctx *ctx,
+ const struct bpf_dynptr_kern *src,
+ struct bpf_dynptr_kern *dst,
+ const struct bpf_dynptr_kern *siv,
+ bool decrypt)
+{
+ u32 src_len, dst_len, siv_len;
+ const u8 *psrc;
+ u8 *pdst, *piv;
+ int err;
+
+ if (__bpf_dynptr_is_rdonly(dst))
+ return -EINVAL;
+
+ siv_len = __bpf_dynptr_size(siv);
+ src_len = __bpf_dynptr_size(src);
+ dst_len = __bpf_dynptr_size(dst);
+ if (!src_len || !dst_len)
+ return -EINVAL;
+
+ if (siv_len != ctx->siv_len)
+ return -EINVAL;
+
+ psrc = __bpf_dynptr_data(src, src_len);
+ if (!psrc)
+ return -EINVAL;
+ pdst = __bpf_dynptr_data_rw(dst, dst_len);
+ if (!pdst)
+ return -EINVAL;
+
+ piv = siv_len ? __bpf_dynptr_data_rw(siv, siv_len) : NULL;
+ if (siv_len && !piv)
+ return -EINVAL;
+
+ err = decrypt ? ctx->type->decrypt(ctx->tfm, psrc, pdst, src_len, piv)
+ : ctx->type->encrypt(ctx->tfm, psrc, pdst, src_len, piv);
+
+ return err;
+}
+
+/**
+ * bpf_crypto_decrypt() - Decrypt buffer using configured context and IV provided.
+ * @ctx: The crypto context being used. The ctx must be a trusted pointer.
+ * @src: bpf_dynptr to the encrypted data. Must be a trusted pointer.
+ * @dst: bpf_dynptr to the buffer where to store the result. Must be a trusted pointer.
+ * @siv: bpf_dynptr to IV data and state data to be used by decryptor.
+ *
+ * Decrypts provided buffer using IV data and the crypto context. Crypto context must be configured.
+ */
+__bpf_kfunc int bpf_crypto_decrypt(struct bpf_crypto_ctx *ctx,
+ const struct bpf_dynptr_kern *src,
+ struct bpf_dynptr_kern *dst,
+ struct bpf_dynptr_kern *siv)
+{
+ return bpf_crypto_crypt(ctx, src, dst, siv, true);
+}
+
+/**
+ * bpf_crypto_encrypt() - Encrypt buffer using configured context and IV provided.
+ * @ctx: The crypto context being used. The ctx must be a trusted pointer.
+ * @src: bpf_dynptr to the plain data. Must be a trusted pointer.
+ * @dst: bpf_dynptr to buffer where to store the result. Must be a trusted pointer.
+ * @siv: bpf_dynptr to IV data and state data to be used by encryptor.
+ *
+ * Encrypts provided buffer using IV data and the crypto context. Crypto context must be configured.
+ */
+__bpf_kfunc int bpf_crypto_encrypt(struct bpf_crypto_ctx *ctx,
+ const struct bpf_dynptr_kern *src,
+ struct bpf_dynptr_kern *dst,
+ struct bpf_dynptr_kern *siv)
+{
+ return bpf_crypto_crypt(ctx, src, dst, siv, false);
+}
+
+__bpf_kfunc_end_defs();
+
+BTF_KFUNCS_START(crypt_init_kfunc_btf_ids)
+BTF_ID_FLAGS(func, bpf_crypto_ctx_create, KF_ACQUIRE | KF_RET_NULL | KF_SLEEPABLE)
+BTF_ID_FLAGS(func, bpf_crypto_ctx_release, KF_RELEASE)
+BTF_ID_FLAGS(func, bpf_crypto_ctx_acquire, KF_ACQUIRE | KF_RCU | KF_RET_NULL)
+BTF_KFUNCS_END(crypt_init_kfunc_btf_ids)
+
+static const struct btf_kfunc_id_set crypt_init_kfunc_set = {
+ .owner = THIS_MODULE,
+ .set = &crypt_init_kfunc_btf_ids,
+};
+
+BTF_KFUNCS_START(crypt_kfunc_btf_ids)
+BTF_ID_FLAGS(func, bpf_crypto_decrypt, KF_RCU)
+BTF_ID_FLAGS(func, bpf_crypto_encrypt, KF_RCU)
+BTF_KFUNCS_END(crypt_kfunc_btf_ids)
+
+static const struct btf_kfunc_id_set crypt_kfunc_set = {
+ .owner = THIS_MODULE,
+ .set = &crypt_kfunc_btf_ids,
+};
+
+BTF_ID_LIST(bpf_crypto_dtor_ids)
+BTF_ID(struct, bpf_crypto_ctx)
+BTF_ID(func, bpf_crypto_ctx_release)
+
+static int __init crypto_kfunc_init(void)
+{
+ int ret;
+ const struct btf_id_dtor_kfunc bpf_crypto_dtors[] = {
+ {
+ .btf_id = bpf_crypto_dtor_ids[0],
+ .kfunc_btf_id = bpf_crypto_dtor_ids[1]
+ },
+ };
+
+ ret = register_btf_kfunc_id_set(BPF_PROG_TYPE_SCHED_CLS, &crypt_kfunc_set);
+ ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_SCHED_ACT, &crypt_kfunc_set);
+ ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_XDP, &crypt_kfunc_set);
+ ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_SYSCALL,
+ &crypt_init_kfunc_set);
+ return ret ?: register_btf_id_dtor_kfuncs(bpf_crypto_dtors,
+ ARRAY_SIZE(bpf_crypto_dtors),
+ THIS_MODULE);
+}
+
+late_initcall(crypto_kfunc_init);
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 8cde717137bd..a67fa076b844 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -1443,7 +1443,7 @@ static const struct bpf_func_proto bpf_kptr_xchg_proto = {
#define DYNPTR_SIZE_MASK 0xFFFFFF
#define DYNPTR_RDONLY_BIT BIT(31)
-static bool __bpf_dynptr_is_rdonly(const struct bpf_dynptr_kern *ptr)
+bool __bpf_dynptr_is_rdonly(const struct bpf_dynptr_kern *ptr)
{
return ptr->size & DYNPTR_RDONLY_BIT;
}
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 2aad6d90550f..f83f537af60f 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5295,6 +5295,7 @@ BTF_ID(struct, cgroup)
BTF_ID(struct, bpf_cpumask)
#endif
BTF_ID(struct, task_struct)
+BTF_ID(struct, bpf_crypto_ctx)
BTF_SET_END(rcu_protected_types)
static bool rcu_protected_object(const struct btf *btf, u32 btf_id)
--
2.43.0
Add simple tc hook selftests to show how to work with the new crypto
BPF API. Some tricky dynptr initialization is used to provide an empty
IV dynptr. The simple AES-ECB algorithm is used to demonstrate
encryption and decryption of fixed-size buffers.
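The trick, shown in the selftest below, is to create a zero-sized
dynptr over an already valid buffer and pass it as the IV:

  bpf_dynptr_from_mem(dst, sizeof(dst), 0, &pdst);
  /* iv dynptr has to be initialized with 0 size, but a proper memory
   * region has to be provided anyway
   */
  bpf_dynptr_from_mem(dst, 0, 0, &iv);

  status = bpf_crypto_decrypt(ctx, &psrc, &pdst, &iv);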
Signed-off-by: Vadim Fedorenko <[email protected]>
---
v8 -> v9:
- adjust tests to use new bpf_crypto_create API
v7 -> v8:
- use sizeof for all constant buffer operations
- make local functions static
- initialize crypto_key value via access to bss data
- add bpf_skb_pull_data to be sure that data is linear
- some comments around tricky dynptr initialization
v6 -> v7:
- style issues
v5 -> v6:
- use AF_ALG socket to confirm proper algorithm test
- adjust test kernel config to include AF_ALG
v4 -> v5:
- adjust selftests to use new naming
- restore tests on aarch64 and s390 as no sg lists are used
v3 -> v4:
- adjust selftests to use new syntax of helpers
- add tests for acquire and release
v2 -> v3:
- disable tests on s390 and aarch64 because of unknown Fatal exception
in sg_init_one
v1 -> v2:
- add CONFIG_CRYPTO_AES and CONFIG_CRYPTO_ECB to selftest build config
suggested by Daniel
---
tools/testing/selftests/bpf/config | 5 +
.../selftests/bpf/prog_tests/crypto_sanity.c | 200 ++++++++++++++++++
.../selftests/bpf/progs/crypto_basic.c | 70 ++++++
.../selftests/bpf/progs/crypto_common.h | 67 ++++++
.../selftests/bpf/progs/crypto_sanity.c | 161 ++++++++++++++
.../selftests/bpf/progs/crypto_share.h | 10 +
6 files changed, 513 insertions(+)
create mode 100644 tools/testing/selftests/bpf/prog_tests/crypto_sanity.c
create mode 100644 tools/testing/selftests/bpf/progs/crypto_basic.c
create mode 100644 tools/testing/selftests/bpf/progs/crypto_common.h
create mode 100644 tools/testing/selftests/bpf/progs/crypto_sanity.c
create mode 100644 tools/testing/selftests/bpf/progs/crypto_share.h
diff --git a/tools/testing/selftests/bpf/config b/tools/testing/selftests/bpf/config
index afd675b1bf80..eeabd798bc3a 100644
--- a/tools/testing/selftests/bpf/config
+++ b/tools/testing/selftests/bpf/config
@@ -13,7 +13,12 @@ CONFIG_BPF_SYSCALL=y
CONFIG_CGROUP_BPF=y
CONFIG_CRYPTO_HMAC=y
CONFIG_CRYPTO_SHA256=y
+CONFIG_CRYPTO_USER_API=y
CONFIG_CRYPTO_USER_API_HASH=y
+CONFIG_CRYPTO_USER_API_SKCIPHER=y
+CONFIG_CRYPTO_SKCIPHER=y
+CONFIG_CRYPTO_ECB=y
+CONFIG_CRYPTO_AES=y
CONFIG_DEBUG_INFO=y
CONFIG_DEBUG_INFO_BTF=y
CONFIG_DEBUG_INFO_DWARF4=y
diff --git a/tools/testing/selftests/bpf/prog_tests/crypto_sanity.c b/tools/testing/selftests/bpf/prog_tests/crypto_sanity.c
new file mode 100644
index 000000000000..1084848aa1ef
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/crypto_sanity.c
@@ -0,0 +1,200 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2024 Meta Platforms, Inc. and affiliates. */
+
+#include <sys/types.h>
+#include <sys/socket.h>
+#include <net/if.h>
+#include <linux/in6.h>
+#include <linux/if_alg.h>
+
+#include "test_progs.h"
+#include "network_helpers.h"
+#include "crypto_sanity.skel.h"
+#include "crypto_basic.skel.h"
+#include "../progs/crypto_share.h"
+
+#define NS_TEST "crypto_sanity_ns"
+#define IPV6_IFACE_ADDR "face::1"
+static const unsigned char crypto_key[] = "testtest12345678";
+static const char plain_text[] = "stringtoencrypt0";
+static int opfd = -1, tfmfd = -1;
+static const char algo[] = "ecb(aes)";
+static int init_afalg(void)
+{
+ struct sockaddr_alg sa = {
+ .salg_family = AF_ALG,
+ .salg_type = "skcipher",
+ .salg_name = "ecb(aes)"
+ };
+
+ tfmfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
+ if (tfmfd == -1)
+ return errno;
+ if (bind(tfmfd, (struct sockaddr *)&sa, sizeof(sa)) == -1)
+ return errno;
+ if (setsockopt(tfmfd, SOL_ALG, ALG_SET_KEY, crypto_key, 16) == -1)
+ return errno;
+ opfd = accept(tfmfd, NULL, 0);
+ if (opfd == -1)
+ return errno;
+ return 0;
+}
+
+static void deinit_afalg(void)
+{
+ if (tfmfd != -1)
+ close(tfmfd);
+ if (opfd != -1)
+ close(opfd);
+}
+
+static void do_crypt_afalg(const void *src, void *dst, int size, bool encrypt)
+{
+ struct msghdr msg = {};
+ struct cmsghdr *cmsg;
+ char cbuf[CMSG_SPACE(4)] = {0};
+ struct iovec iov;
+
+ msg.msg_control = cbuf;
+ msg.msg_controllen = sizeof(cbuf);
+
+ cmsg = CMSG_FIRSTHDR(&msg);
+ cmsg->cmsg_level = SOL_ALG;
+ cmsg->cmsg_type = ALG_SET_OP;
+ cmsg->cmsg_len = CMSG_LEN(4);
+ *(__u32 *)CMSG_DATA(cmsg) = encrypt ? ALG_OP_ENCRYPT : ALG_OP_DECRYPT;
+
+ iov.iov_base = (char *)src;
+ iov.iov_len = size;
+
+ msg.msg_iov = &iov;
+ msg.msg_iovlen = 1;
+
+ sendmsg(opfd, &msg, 0);
+ read(opfd, dst, size);
+}
+
+void test_crypto_basic(void)
+{
+ RUN_TESTS(crypto_basic);
+}
+
+void test_crypto_sanity(void)
+{
+ struct crypto_syscall_args sargs = {
+ .key_len = 16,
+ };
+ LIBBPF_OPTS(bpf_tc_hook, qdisc_hook, .attach_point = BPF_TC_EGRESS);
+ LIBBPF_OPTS(bpf_tc_opts, tc_attach_enc);
+ LIBBPF_OPTS(bpf_tc_opts, tc_attach_dec);
+ LIBBPF_OPTS(bpf_test_run_opts, opts,
+ .ctx_in = &sargs,
+ .ctx_size_in = sizeof(sargs),
+ );
+ struct nstoken *nstoken = NULL;
+ struct crypto_sanity *skel;
+ char afalg_plain[16] = {0};
+ char afalg_dst[16] = {0};
+ struct sockaddr_in6 addr;
+ int sockfd, err, pfd;
+ socklen_t addrlen;
+
+ SYS(fail, "ip netns add %s", NS_TEST);
+ SYS(fail, "ip -net %s -6 addr add %s/128 dev lo nodad", NS_TEST, IPV6_IFACE_ADDR);
+ SYS(fail, "ip -net %s link set dev lo up", NS_TEST);
+
+ nstoken = open_netns(NS_TEST);
+ if (!ASSERT_OK_PTR(nstoken, "open_netns"))
+ goto fail;
+
+ err = init_afalg();
+ if (!ASSERT_OK(err, "AF_ALG init fail"))
+ goto fail;
+
+ qdisc_hook.ifindex = if_nametoindex("lo");
+ if (!ASSERT_GT(qdisc_hook.ifindex, 0, "if_nametoindex lo"))
+ goto fail;
+
+ skel = crypto_sanity__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "skel open"))
+ return;
+
+ memcpy(skel->bss->key, crypto_key, sizeof(crypto_key));
+ snprintf(skel->bss->algo, 128, "%s", algo);
+ pfd = bpf_program__fd(skel->progs.skb_crypto_setup);
+ if (!ASSERT_GT(pfd, 0, "skb_crypto_setup fd"))
+ goto fail;
+
+ err = bpf_prog_test_run_opts(pfd, &opts);
+ if (!ASSERT_OK(err, "skb_crypto_setup") ||
+ !ASSERT_OK(opts.retval, "skb_crypto_setup retval"))
+ goto fail;
+
+ if (!ASSERT_OK(skel->bss->status, "skb_crypto_setup status"))
+ goto fail;
+
+ err = crypto_sanity__attach(skel);
+ if (!ASSERT_OK(err, "crypto_sanity__attach"))
+ goto fail;
+
+ err = bpf_tc_hook_create(&qdisc_hook);
+ if (!ASSERT_OK(err, "create qdisc hook"))
+ goto fail;
+
+ addrlen = sizeof(addr);
+ err = make_sockaddr(AF_INET6, IPV6_IFACE_ADDR, UDP_TEST_PORT,
+ (void *)&addr, &addrlen);
+ if (!ASSERT_OK(err, "make_sockaddr"))
+ goto fail;
+
+ tc_attach_enc.prog_fd = bpf_program__fd(skel->progs.encrypt_sanity);
+ err = bpf_tc_attach(&qdisc_hook, &tc_attach_enc);
+ if (!ASSERT_OK(err, "attach encrypt filter"))
+ goto fail;
+
+ sockfd = socket(AF_INET6, SOCK_DGRAM, 0);
+ if (!ASSERT_NEQ(sockfd, -1, "encrypt socket"))
+ goto fail;
+ err = sendto(sockfd, plain_text, sizeof(plain_text), 0, (void *)&addr, addrlen);
+ close(sockfd);
+ if (!ASSERT_EQ(err, sizeof(plain_text), "encrypt send"))
+ goto fail;
+
+ do_crypt_afalg(plain_text, afalg_dst, sizeof(afalg_dst), true);
+
+ bpf_tc_detach(&qdisc_hook, &tc_attach_enc);
+ if (!ASSERT_OK(skel->bss->status, "encrypt status"))
+ goto fail;
+ if (!ASSERT_STRNEQ(skel->bss->dst, afalg_dst, sizeof(afalg_dst), "encrypt AF_ALG"))
+ goto fail;
+
+ tc_attach_dec.prog_fd = bpf_program__fd(skel->progs.decrypt_sanity);
+ err = bpf_tc_attach(&qdisc_hook, &tc_attach_dec);
+ if (!ASSERT_OK(err, "attach decrypt filter"))
+ goto fail;
+
+ sockfd = socket(AF_INET6, SOCK_DGRAM, 0);
+ if (!ASSERT_NEQ(sockfd, -1, "decrypt socket"))
+ goto fail;
+ err = sendto(sockfd, afalg_dst, sizeof(afalg_dst), 0, (void *)&addr, addrlen);
+ close(sockfd);
+ if (!ASSERT_EQ(err, sizeof(afalg_dst), "decrypt send"))
+ goto fail;
+
+ do_crypt_afalg(afalg_dst, afalg_plain, sizeof(afalg_plain), false);
+
+ bpf_tc_detach(&qdisc_hook, &tc_attach_dec);
+ if (!ASSERT_OK(skel->bss->status, "decrypt status"))
+ goto fail;
+ if (!ASSERT_STRNEQ(skel->bss->dst, afalg_plain, sizeof(afalg_plain), "decrypt AF_ALG"))
+ goto fail;
+
+fail:
+ if (nstoken) {
+ bpf_tc_hook_destroy(&qdisc_hook);
+ close_netns(nstoken);
+ }
+ deinit_afalg();
+ SYS_NOFAIL("ip netns del " NS_TEST " &> /dev/null");
+ crypto_sanity__destroy(skel);
+}
diff --git a/tools/testing/selftests/bpf/progs/crypto_basic.c b/tools/testing/selftests/bpf/progs/crypto_basic.c
new file mode 100644
index 000000000000..dfe59947e141
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/crypto_basic.c
@@ -0,0 +1,70 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */
+
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+#include "bpf_misc.h"
+#include "bpf_kfuncs.h"
+#include "crypto_common.h"
+
+int status;
+
+SEC("syscall")
+int crypto_release(void *ctx)
+{
+ struct bpf_crypto_params params = {
+ .type = "skcipher",
+ .algo = "ecb(aes)",
+ .key = "12345678testtest",
+ .key_len = 16,
+ };
+ struct bpf_crypto_ctx *cctx;
+ int err = 0;
+
+ status = 0;
+
+ cctx = bpf_crypto_ctx_create(&params, &err);
+
+ if (!cctx) {
+ status = err;
+ return 0;
+ }
+
+ bpf_crypto_ctx_release(cctx);
+
+ return 0;
+}
+
+SEC("syscall")
+__failure __msg("Unreleased reference")
+int crypto_acquire(void *ctx)
+{
+ struct bpf_crypto_params params = {
+ .type = "skcipher",
+ .algo = "ecb(aes)",
+ .key = "12345678testtest",
+ .key_len = 16,
+ };
+ struct bpf_crypto_ctx *cctx;
+ int err = 0;
+
+ status = 0;
+
+ cctx = bpf_crypto_ctx_create(&params, &err);
+
+ if (!cctx) {
+ status = err;
+ return 0;
+ }
+
+ cctx = bpf_crypto_ctx_acquire(cctx);
+ if (!cctx)
+ return -EINVAL;
+
+ bpf_crypto_ctx_release(cctx);
+
+ return 0;
+}
+
+char __license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/crypto_common.h b/tools/testing/selftests/bpf/progs/crypto_common.h
new file mode 100644
index 000000000000..b4eff7fb021d
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/crypto_common.h
@@ -0,0 +1,67 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (c) 2024 Meta Platforms, Inc. and affiliates. */
+
+#ifndef _CRYPTO_COMMON_H
+#define _CRYPTO_COMMON_H
+
+#include "errno.h"
+#include <stdbool.h>
+#include "crypto_share.h"
+
+struct bpf_crypto_ctx *bpf_crypto_ctx_create(const struct bpf_crypto_params *params,
+ int *err) __ksym;
+struct bpf_crypto_ctx *bpf_crypto_ctx_acquire(struct bpf_crypto_ctx *ctx) __ksym;
+void bpf_crypto_ctx_release(struct bpf_crypto_ctx *ctx) __ksym;
+int bpf_crypto_encrypt(struct bpf_crypto_ctx *ctx, const struct bpf_dynptr *src,
+ struct bpf_dynptr *dst, struct bpf_dynptr *iv) __ksym;
+int bpf_crypto_decrypt(struct bpf_crypto_ctx *ctx, const struct bpf_dynptr *src,
+ struct bpf_dynptr *dst, struct bpf_dynptr *iv) __ksym;
+
+struct __crypto_ctx_value {
+ struct bpf_crypto_ctx __kptr * ctx;
+};
+
+struct array_map {
+ __uint(type, BPF_MAP_TYPE_ARRAY);
+ __type(key, int);
+ __type(value, struct __crypto_ctx_value);
+ __uint(max_entries, 1);
+} __crypto_ctx_map SEC(".maps");
+
+static inline struct __crypto_ctx_value *crypto_ctx_value_lookup(void)
+{
+ u32 key = 0;
+
+ return bpf_map_lookup_elem(&__crypto_ctx_map, &key);
+}
+
+static inline int crypto_ctx_insert(struct bpf_crypto_ctx *ctx)
+{
+ struct __crypto_ctx_value local, *v;
+ struct bpf_crypto_ctx *old;
+ u32 key = 0;
+ int err;
+
+ local.ctx = NULL;
+ err = bpf_map_update_elem(&__crypto_ctx_map, &key, &local, 0);
+ if (err) {
+ bpf_crypto_ctx_release(ctx);
+ return err;
+ }
+
+ v = bpf_map_lookup_elem(&__crypto_ctx_map, &key);
+ if (!v) {
+ bpf_crypto_ctx_release(ctx);
+ return -ENOENT;
+ }
+
+ old = bpf_kptr_xchg(&v->ctx, ctx);
+ if (old) {
+ bpf_crypto_ctx_release(old);
+ return -EEXIST;
+ }
+
+ return 0;
+}
+
+#endif /* _CRYPTO_COMMON_H */
diff --git a/tools/testing/selftests/bpf/progs/crypto_sanity.c b/tools/testing/selftests/bpf/progs/crypto_sanity.c
new file mode 100644
index 000000000000..57df5776bcaf
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/crypto_sanity.c
@@ -0,0 +1,161 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2024 Meta Platforms, Inc. and affiliates. */
+
+#include "vmlinux.h"
+#include "bpf_tracing_net.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_endian.h>
+#include <bpf/bpf_tracing.h>
+#include "bpf_misc.h"
+#include "bpf_kfuncs.h"
+#include "crypto_common.h"
+
+unsigned char key[256] = {};
+char algo[128] = {};
+char dst[16] = {};
+int status;
+
+static int skb_dynptr_validate(struct __sk_buff *skb, struct bpf_dynptr *psrc)
+{
+ struct ipv6hdr ip6h;
+ struct udphdr udph;
+ u32 offset;
+
+ if (skb->protocol != __bpf_constant_htons(ETH_P_IPV6))
+ return -1;
+
+ if (bpf_skb_load_bytes(skb, ETH_HLEN, &ip6h, sizeof(ip6h)))
+ return -1;
+
+ if (ip6h.nexthdr != IPPROTO_UDP)
+ return -1;
+
+ if (bpf_skb_load_bytes(skb, ETH_HLEN + sizeof(ip6h), &udph, sizeof(udph)))
+ return -1;
+
+ if (udph.dest != __bpf_constant_htons(UDP_TEST_PORT))
+ return -1;
+
+ offset = ETH_HLEN + sizeof(ip6h) + sizeof(udph);
+ if (skb->len < offset + 16)
+ return -1;
+
+ /* let's make sure that 16 bytes of payload are in the linear part of skb */
+ bpf_skb_pull_data(skb, offset + 16);
+ bpf_dynptr_from_skb(skb, 0, psrc);
+ bpf_dynptr_adjust(psrc, offset, offset + 16);
+
+ return 0;
+}
+
+SEC("syscall")
+int skb_crypto_setup(struct crypto_syscall_args *ctx)
+{
+ struct bpf_crypto_params params = {
+ .type = "skcipher",
+ .key_len = ctx->key_len,
+ .authsize = ctx->authsize,
+ };
+ struct bpf_crypto_ctx *cctx;
+ int err = 0;
+
+ status = 0;
+
+ if (ctx->key_len > 255) {
+ status = -EINVAL;
+ return 0;
+ }
+
+ __builtin_memcpy(&params.algo, algo, sizeof(algo));
+ __builtin_memcpy(&params.key, key, sizeof(key));
+ cctx = bpf_crypto_ctx_create(&params, &err);
+
+ if (!cctx) {
+ status = err;
+ return 0;
+ }
+
+ err = crypto_ctx_insert(cctx);
+ if (err && err != -EEXIST)
+ status = err;
+
+ return 0;
+}
+
+SEC("tc")
+int decrypt_sanity(struct __sk_buff *skb)
+{
+ struct __crypto_ctx_value *v;
+ struct bpf_crypto_ctx *ctx;
+ struct bpf_dynptr psrc, pdst, iv;
+ int err;
+
+ err = skb_dynptr_validate(skb, &psrc);
+ if (err < 0) {
+ status = err;
+ return TC_ACT_SHOT;
+ }
+
+ v = crypto_ctx_value_lookup();
+ if (!v) {
+ status = -ENOENT;
+ return TC_ACT_SHOT;
+ }
+
+ ctx = v->ctx;
+ if (!ctx) {
+ status = -ENOENT;
+ return TC_ACT_SHOT;
+ }
+
+ bpf_dynptr_from_mem(dst, sizeof(dst), 0, &pdst);
+ /* iv dynptr has to be initialized with 0 size, but proper memory region
+ * has to be provided anyway
+ */
+ bpf_dynptr_from_mem(dst, 0, 0, &iv);
+
+ status = bpf_crypto_decrypt(ctx, &psrc, &pdst, &iv);
+
+ return TC_ACT_SHOT;
+}
+
+SEC("tc")
+int encrypt_sanity(struct __sk_buff *skb)
+{
+ struct __crypto_ctx_value *v;
+ struct bpf_crypto_ctx *ctx;
+ struct bpf_dynptr psrc, pdst, iv;
+ int err;
+
+ status = 0;
+
+ err = skb_dynptr_validate(skb, &psrc);
+ if (err < 0) {
+ status = err;
+ return TC_ACT_SHOT;
+ }
+
+ v = crypto_ctx_value_lookup();
+ if (!v) {
+ status = -ENOENT;
+ return TC_ACT_SHOT;
+ }
+
+ ctx = v->ctx;
+ if (!ctx) {
+ status = -ENOENT;
+ return TC_ACT_SHOT;
+ }
+
+ bpf_dynptr_from_mem(dst, sizeof(dst), 0, &pdst);
+ /* iv dynptr has to be initialized with 0 size, but proper memory region
+ * has to be provided anyway
+ */
+ bpf_dynptr_from_mem(dst, 0, 0, &iv);
+
+ status = bpf_crypto_encrypt(ctx, &psrc, &pdst, &iv);
+
+ return TC_ACT_SHOT;
+}
+
+char __license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/crypto_share.h b/tools/testing/selftests/bpf/progs/crypto_share.h
new file mode 100644
index 000000000000..c5a6ef65156d
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/crypto_share.h
@@ -0,0 +1,10 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (c) 2024 Meta Platforms, Inc. and affiliates. */
+
+#define UDP_TEST_PORT 7777
+
+struct crypto_syscall_args {
+ u32 key_len;
+ u32 authsize;
+};
+
--
2.43.0
Implement skcipher crypto in the BPF crypto framework.
Signed-off-by: Vadim Fedorenko <[email protected]>
Acked-by: Herbert Xu <[email protected]>
---
v8 -> v9:
- add Herbert's Ack
v7 -> v8:
- Move bpf_crypto_skcipher.c to crypto/ and make it part of the
skcipher module. This way looks more natural and makes bpf crypto
properly modular. The MAINTAINERS file is adjusted so that the bpf
part belongs to the BPF maintainers.
v6 -> v7:
- style issues
v6:
- introduce new file
---
MAINTAINERS | 8 ++++
crypto/Makefile | 3 ++
crypto/bpf_crypto_skcipher.c | 82 ++++++++++++++++++++++++++++++++++++
3 files changed, 93 insertions(+)
create mode 100644 crypto/bpf_crypto_skcipher.c
diff --git a/MAINTAINERS b/MAINTAINERS
index 6a233e1a3cf2..c9f887fbb477 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3822,6 +3822,14 @@ F: kernel/bpf/tnum.c
F: kernel/bpf/trampoline.c
F: kernel/bpf/verifier.c
+BPF [CRYPTO]
+M: Vadim Fedorenko <[email protected]>
+L: [email protected]
+S: Maintained
+F: crypto/bpf_crypto_skcipher.c
+F: include/linux/bpf_crypto.h
+F: kernel/bpf/crypto.c
+
BPF [DOCUMENTATION] (Related to Standardization)
R: David Vernet <[email protected]>
L: [email protected]
diff --git a/crypto/Makefile b/crypto/Makefile
index 408f0a1f9ab9..538124f8bf8a 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -20,6 +20,9 @@ crypto_skcipher-y += lskcipher.o
crypto_skcipher-y += skcipher.o
obj-$(CONFIG_CRYPTO_SKCIPHER2) += crypto_skcipher.o
+ifeq ($(CONFIG_BPF_SYSCALL),y)
+obj-$(CONFIG_CRYPTO_SKCIPHER2) += bpf_crypto_skcipher.o
+endif
obj-$(CONFIG_CRYPTO_SEQIV) += seqiv.o
obj-$(CONFIG_CRYPTO_ECHAINIV) += echainiv.o
diff --git a/crypto/bpf_crypto_skcipher.c b/crypto/bpf_crypto_skcipher.c
new file mode 100644
index 000000000000..b5e657415770
--- /dev/null
+++ b/crypto/bpf_crypto_skcipher.c
@@ -0,0 +1,82 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (c) 2024 Meta, Inc */
+#include <linux/types.h>
+#include <linux/module.h>
+#include <linux/bpf_crypto.h>
+#include <crypto/skcipher.h>
+
+static void *bpf_crypto_lskcipher_alloc_tfm(const char *algo)
+{
+ return crypto_alloc_lskcipher(algo, 0, 0);
+}
+
+static void bpf_crypto_lskcipher_free_tfm(void *tfm)
+{
+ crypto_free_lskcipher(tfm);
+}
+
+static int bpf_crypto_lskcipher_has_algo(const char *algo)
+{
+ return crypto_has_skcipher(algo, CRYPTO_ALG_TYPE_LSKCIPHER, CRYPTO_ALG_TYPE_MASK);
+}
+
+static int bpf_crypto_lskcipher_setkey(void *tfm, const u8 *key, unsigned int keylen)
+{
+ return crypto_lskcipher_setkey(tfm, key, keylen);
+}
+
+static u32 bpf_crypto_lskcipher_get_flags(void *tfm)
+{
+ return crypto_lskcipher_get_flags(tfm);
+}
+
+static unsigned int bpf_crypto_lskcipher_ivsize(void *tfm)
+{
+ return crypto_lskcipher_ivsize(tfm);
+}
+
+static unsigned int bpf_crypto_lskcipher_statesize(void *tfm)
+{
+ return crypto_lskcipher_statesize(tfm);
+}
+
+static int bpf_crypto_lskcipher_encrypt(void *tfm, const u8 *src, u8 *dst,
+ unsigned int len, u8 *siv)
+{
+ return crypto_lskcipher_encrypt(tfm, src, dst, len, siv);
+}
+
+static int bpf_crypto_lskcipher_decrypt(void *tfm, const u8 *src, u8 *dst,
+ unsigned int len, u8 *siv)
+{
+ return crypto_lskcipher_decrypt(tfm, src, dst, len, siv);
+}
+
+static const struct bpf_crypto_type bpf_crypto_lskcipher_type = {
+ .alloc_tfm = bpf_crypto_lskcipher_alloc_tfm,
+ .free_tfm = bpf_crypto_lskcipher_free_tfm,
+ .has_algo = bpf_crypto_lskcipher_has_algo,
+ .setkey = bpf_crypto_lskcipher_setkey,
+ .encrypt = bpf_crypto_lskcipher_encrypt,
+ .decrypt = bpf_crypto_lskcipher_decrypt,
+ .ivsize = bpf_crypto_lskcipher_ivsize,
+ .statesize = bpf_crypto_lskcipher_statesize,
+ .get_flags = bpf_crypto_lskcipher_get_flags,
+ .owner = THIS_MODULE,
+ .name = "skcipher",
+};
+
+static int __init bpf_crypto_skcipher_init(void)
+{
+ return bpf_crypto_register_type(&bpf_crypto_lskcipher_type);
+}
+
+static void __exit bpf_crypto_skcipher_exit(void)
+{
+ int err = bpf_crypto_unregister_type(&bpf_crypto_lskcipher_type);
+ WARN_ON_ONCE(err);
+}
+
+module_init(bpf_crypto_skcipher_init);
+module_exit(bpf_crypto_skcipher_exit);
+MODULE_LICENSE("GPL");
--
2.43.0
Some simple benchmarks are added to establish a performance baseline.
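Once the bench binary is built, the new benchmarks can be run, for
example, as (options as added in this patch):

  ./bench crypto-encrypt --crypto-len=32 --crypto-cipher "ecb(aes)"
  ./bench crypto-decrypt --crypto-len=32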
Signed-off-by: Vadim Fedorenko <[email protected]>
---
v9:
- initial submission
---
tools/testing/selftests/bpf/Makefile | 2 +
tools/testing/selftests/bpf/bench.c | 6 +
.../selftests/bpf/benchs/bench_bpf_crypto.c | 190 ++++++++++++++++++
.../selftests/bpf/progs/crypto_bench.c | 108 ++++++++++
4 files changed, 306 insertions(+)
create mode 100644 tools/testing/selftests/bpf/benchs/bench_bpf_crypto.c
create mode 100644 tools/testing/selftests/bpf/progs/crypto_bench.c
diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index edc73f8f5aef..be8567337480 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -729,6 +729,7 @@ $(OUTPUT)/bench_local_storage_rcu_tasks_trace.o: $(OUTPUT)/local_storage_rcu_tas
$(OUTPUT)/bench_local_storage_create.o: $(OUTPUT)/bench_local_storage_create.skel.h
$(OUTPUT)/bench_bpf_hashmap_lookup.o: $(OUTPUT)/bpf_hashmap_lookup.skel.h
$(OUTPUT)/bench_htab_mem.o: $(OUTPUT)/htab_mem_bench.skel.h
+$(OUTPUT)/bench_bpf_crypto.o: $(OUTPUT)/crypto_bench.skel.h
$(OUTPUT)/bench.o: bench.h testing_helpers.h $(BPFOBJ)
$(OUTPUT)/bench: LDLIBS += -lm
$(OUTPUT)/bench: $(OUTPUT)/bench.o \
@@ -748,6 +749,7 @@ $(OUTPUT)/bench: $(OUTPUT)/bench.o \
$(OUTPUT)/bench_bpf_hashmap_lookup.o \
$(OUTPUT)/bench_local_storage_create.o \
$(OUTPUT)/bench_htab_mem.o \
+ $(OUTPUT)/bench_bpf_crypto.o \
#
$(call msg,BINARY,,$@)
$(Q)$(CC) $(CFLAGS) $(LDFLAGS) $(filter %.a %.o,$^) $(LDLIBS) -o $@
diff --git a/tools/testing/selftests/bpf/bench.c b/tools/testing/selftests/bpf/bench.c
index 82de56c8162e..627b74ae041b 100644
--- a/tools/testing/selftests/bpf/bench.c
+++ b/tools/testing/selftests/bpf/bench.c
@@ -281,6 +281,7 @@ extern struct argp bench_hashmap_lookup_argp;
extern struct argp bench_local_storage_create_argp;
extern struct argp bench_htab_mem_argp;
extern struct argp bench_trigger_batch_argp;
+extern struct argp bench_crypto_argp;
static const struct argp_child bench_parsers[] = {
{ &bench_ringbufs_argp, 0, "Ring buffers benchmark", 0 },
@@ -294,6 +295,7 @@ static const struct argp_child bench_parsers[] = {
{ &bench_local_storage_create_argp, 0, "local-storage-create benchmark", 0 },
{ &bench_htab_mem_argp, 0, "hash map memory benchmark", 0 },
{ &bench_trigger_batch_argp, 0, "BPF triggering benchmark", 0 },
+ { &bench_crypto_argp, 0, "bpf crypto benchmark", 0 },
{},
};
@@ -538,6 +540,8 @@ extern const struct bench bench_local_storage_tasks_trace;
extern const struct bench bench_bpf_hashmap_lookup;
extern const struct bench bench_local_storage_create;
extern const struct bench bench_htab_mem;
+extern const struct bench bench_crypto_encrypt;
+extern const struct bench bench_crypto_decrypt;
static const struct bench *benchs[] = {
&bench_count_global,
@@ -590,6 +594,8 @@ static const struct bench *benchs[] = {
&bench_bpf_hashmap_lookup,
&bench_local_storage_create,
&bench_htab_mem,
+ &bench_crypto_encrypt,
+ &bench_crypto_decrypt,
};
static void find_benchmark(void)
diff --git a/tools/testing/selftests/bpf/benchs/bench_bpf_crypto.c b/tools/testing/selftests/bpf/benchs/bench_bpf_crypto.c
new file mode 100644
index 000000000000..86048f02e6ac
--- /dev/null
+++ b/tools/testing/selftests/bpf/benchs/bench_bpf_crypto.c
@@ -0,0 +1,190 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2024 Meta Platforms, Inc. and affiliates. */
+
+#include <argp.h>
+#include "bench.h"
+#include "crypto_bench.skel.h"
+#include "../progs/crypto_share.h"
+
+#define MAX_CIPHER_LEN 32
+static char *input;
+static struct crypto_ctx {
+ struct crypto_bench *skel;
+ int pfd;
+} ctx;
+
+static struct crypto_args {
+ u32 crypto_len;
+ char *crypto_cipher;
+} args = {
+ .crypto_len = 16,
+ .crypto_cipher = "ecb(aes)",
+};
+
+enum {
+ ARG_CRYPTO_LEN = 5000,
+ ARG_CRYPTO_CIPHER = 5001,
+};
+
+static const struct argp_option opts[] = {
+ { "crypto-len", ARG_CRYPTO_LEN, "CRYPTO_LEN", 0,
+ "Set the length of crypto buffer" },
+ { "crypto-cipher", ARG_CRYPTO_CIPHER, "CRYPTO_CIPHER", 0,
+ "Set the cipher to use (defaul:ecb(aes))" },
+ {},
+};
+
+static error_t crypto_parse_arg(int key, char *arg, struct argp_state *state)
+{
+ switch (key) {
+ case ARG_CRYPTO_LEN:
+ args.crypto_len = strtoul(arg, NULL, 10);
+ if (!args.crypto_len ||
+ args.crypto_len > sizeof(ctx.skel->bss->dst)) {
+ fprintf(stderr, "Invalid crypto buffer len (limit %zu)\n",
+ sizeof(ctx.skel->bss->dst));
+ argp_usage(state);
+ }
+ break;
+ case ARG_CRYPTO_CIPHER:
+ args.crypto_cipher = strdup(arg);
+ if (!strlen(args.crypto_cipher) ||
+ strlen(args.crypto_cipher) > MAX_CIPHER_LEN) {
+ fprintf(stderr, "Invalid crypto cipher len (limit %d)\n",
+ MAX_CIPHER_LEN);
+ argp_usage(state);
+ }
+ break;
+ default:
+ return ARGP_ERR_UNKNOWN;
+ }
+
+ return 0;
+}
+
+const struct argp bench_crypto_argp = {
+ .options = opts,
+ .parser = crypto_parse_arg,
+};
+
+static void crypto_validate(void)
+{
+ if (env.consumer_cnt != 0) {
+ fprintf(stderr, "bpf crypto benchmark doesn't support consumer!\n");
+ exit(1);
+ }
+}
+
+static void crypto_setup(void)
+{
+ struct crypto_syscall_args sargs = {
+ .key_len = 16,
+ };
+ LIBBPF_OPTS(bpf_test_run_opts, opts,
+ .ctx_in = &sargs,
+ .ctx_size_in = sizeof(sargs),
+ );
+
+ int err, pfd;
+ size_t i, sz;
+
+ sz = args.crypto_len;
+ if (!sz || sz > sizeof(ctx.skel->bss->dst)) {
+ fprintf(stderr, "invalid encrypt buffer size (source %zu, target %zu)\n",
+ sz, sizeof(ctx.skel->bss->dst));
+ exit(1);
+ }
+
+ setup_libbpf();
+
+ ctx.skel = crypto_bench__open();
+ if (!ctx.skel) {
+ fprintf(stderr, "failed to open skeleton\n");
+ exit(1);
+ }
+
+ snprintf(ctx.skel->bss->cipher, 128, "%s", args.crypto_cipher);
+ memcpy(ctx.skel->bss->key, "12345678testtest", 16);
+
+ srandom(time(NULL));
+ input = malloc(sz);
+ for (i = 0; i < sz - 1; i++)
+ input[i] = '1' + random() % 9;
+ input[sz - 1] = '\0';
+
+ ctx.skel->rodata->len = args.crypto_len;
+
+ err = crypto_bench__load(ctx.skel);
+ if (err) {
+ fprintf(stderr, "failed to load skeleton\n");
+ crypto_bench__destroy(ctx.skel);
+ exit(1);
+ }
+
+ pfd = bpf_program__fd(ctx.skel->progs.crypto_setup);
+ if (pfd < 0) {
+ fprintf(stderr, "failed to get fd for setup prog\n");
+ crypto_bench__destroy(ctx.skel);
+ exit(1);
+ }
+
+ err = bpf_prog_test_run_opts(pfd, &opts);
+ if (err || ctx.skel->bss->status) {
+ fprintf(stderr, "failed to run setup prog: err %d, status %d\n",
+ err, ctx.skel->bss->status);
+ crypto_bench__destroy(ctx.skel);
+ exit(1);
+ }
+}
+
+static void crypto_encrypt_setup(void)
+{
+ crypto_setup();
+ ctx.pfd = bpf_program__fd(ctx.skel->progs.crypto_encrypt);
+}
+
+static void crypto_decrypt_setup(void)
+{
+ crypto_setup();
+ ctx.pfd = bpf_program__fd(ctx.skel->progs.crypto_decrypt);
+}
+
+static void crypto_measure(struct bench_res *res)
+{
+ res->hits = atomic_swap(&ctx.skel->bss->hits, 0);
+}
+
+static void *crypto_producer(void *)
+{
+ LIBBPF_OPTS(bpf_test_run_opts, opts,
+ .repeat = 64,
+ .data_in = input,
+ .data_size_in = args.crypto_len,
+ );
+
+ while (true)
+ (void)bpf_prog_test_run_opts(ctx.pfd, &opts);
+ return NULL;
+}
+
+const struct bench bench_crypto_encrypt = {
+ .name = "crypto-encrypt",
+ .argp = &bench_crypto_argp,
+ .validate = crypto_validate,
+ .setup = crypto_encrypt_setup,
+ .producer_thread = crypto_producer,
+ .measure = crypto_measure,
+ .report_progress = hits_drops_report_progress,
+ .report_final = hits_drops_report_final,
+};
+
+const struct bench bench_crypto_decrypt = {
+ .name = "crypto-decrypt",
+ .argp = &bench_crypto_argp,
+ .validate = crypto_validate,
+ .setup = crypto_decrypt_setup,
+ .producer_thread = crypto_producer,
+ .measure = crypto_measure,
+ .report_progress = hits_drops_report_progress,
+ .report_final = hits_drops_report_final,
+};
diff --git a/tools/testing/selftests/bpf/progs/crypto_bench.c b/tools/testing/selftests/bpf/progs/crypto_bench.c
new file mode 100644
index 000000000000..bd01794a0236
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/crypto_bench.c
@@ -0,0 +1,108 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2024 Meta Platforms, Inc. and affiliates. */
+
+#include "vmlinux.h"
+#include "bpf_tracing_net.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_endian.h>
+#include <bpf/bpf_tracing.h>
+#include "bpf_misc.h"
+#include "bpf_kfuncs.h"
+#include "crypto_common.h"
+
+const volatile unsigned int len = 16;
+char dst[256] = {};
+long hits = 0;
+int status;
+char cipher[128] = {};
+u8 key[256] = {};
+
+SEC("syscall")
+int crypto_setup(struct crypto_syscall_args *args)
+{
+ struct bpf_crypto_ctx *cctx;
+ struct bpf_crypto_params params = {
+ .type = "skcipher",
+ .key_len = args->key_len,
+ .authsize = args->authsize,
+ };
+ int err = 0;
+
+ status = 0;
+
+ if (!cipher[0] || !args->key_len || args->key_len > 255) {
+ status = -EINVAL;
+ return 0;
+ }
+
> + __builtin_memcpy(&params.algo, cipher, sizeof(cipher));
> + __builtin_memcpy(&params.key, key, sizeof(key));
> + cctx = bpf_crypto_ctx_create(&params, &err);
+
+ if (!cctx) {
+ status = err;
+ return 0;
+ }
+
+ err = crypto_ctx_insert(cctx);
+ if (err && err != -EEXIST)
+ status = err;
+
+ return 0;
+}
+
+SEC("tc")
+int crypto_encrypt(struct __sk_buff *skb)
+{
+ struct __crypto_ctx_value *v;
+ struct bpf_crypto_ctx *ctx;
+ struct bpf_dynptr psrc, pdst, iv;
+
+ v = crypto_ctx_value_lookup();
+ if (!v) {
+ status = -ENOENT;
+ return 0;
+ }
+
+ ctx = v->ctx;
+ if (!ctx) {
+ status = -ENOENT;
+ return 0;
+ }
+
+ bpf_dynptr_from_skb(skb, 0, &psrc);
+ bpf_dynptr_from_mem(dst, len, 0, &pdst);
+ bpf_dynptr_from_mem(dst, 0, 0, &iv);
+
+ status = bpf_crypto_encrypt(ctx, &psrc, &pdst, &iv);
+ __sync_add_and_fetch(&hits, 1);
+
+ return 0;
+}
+
+SEC("tc")
+int crypto_decrypt(struct __sk_buff *skb)
+{
+ struct bpf_dynptr psrc, pdst, iv;
+ struct __crypto_ctx_value *v;
+ struct bpf_crypto_ctx *ctx;
+
+ v = crypto_ctx_value_lookup();
+ if (!v)
+ return -ENOENT;
+
+ ctx = v->ctx;
+ if (!ctx)
+ return -ENOENT;
+
+ bpf_dynptr_from_skb(skb, 0, &psrc);
+ bpf_dynptr_from_mem(dst, len, 0, &pdst);
+ bpf_dynptr_from_mem(dst, 0, 0, &iv);
+
+ status = bpf_crypto_decrypt(ctx, &psrc, &pdst, &iv);
+ __sync_add_and_fetch(&hits, 1);
+
+ return 0;
+}
+
+char __license[] SEC("license") = "GPL";
--
2.43.0
Hi Vadim,
kernel test robot noticed the following build warnings:
[auto build test WARNING on bpf-next/master]
url: https://github.com/intel-lab-lkp/linux/commits/Vadim-Fedorenko/bpf-make-common-crypto-API-for-TC-XDP-programs/20240417-044349
base: https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
patch link: https://lore.kernel.org/r/20240416204004.3942393-2-vadfed%40meta.com
patch subject: [PATCH bpf-next v9 1/4] bpf: make common crypto API for TC/XDP programs
config: x86_64-rhel-8.3-rust (https://download.01.org/0day-ci/archive/20240417/[email protected]/config)
compiler: clang version 17.0.6 (https://github.com/llvm/llvm-project 6009708b4367171ccdbf4b5905cb6a803753fe18)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240417/[email protected]/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <[email protected]>
| Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/
All warnings (new ones prefixed by >>):
>> kernel/bpf/crypto.c:53: warning: Function parameter or struct member 'siv_len' not described in 'bpf_crypto_ctx'
vim +53 kernel/bpf/crypto.c
37
38 /**
39 * struct bpf_crypto_ctx - refcounted BPF crypto context structure
40 * @type: The pointer to bpf crypto type
41 * @tfm: The pointer to instance of crypto API struct.
42 * @rcu: The RCU head used to free the crypto context with RCU safety.
43 * @usage: Object reference counter. When the refcount goes to 0, the
44 * memory is released back to the BPF allocator, which provides
45 * RCU safety.
46 */
47 struct bpf_crypto_ctx {
48 const struct bpf_crypto_type *type;
49 void *tfm;
50 u32 siv_len;
51 struct rcu_head rcu;
52 refcount_t usage;
> 53 };
54
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
On Tue, Apr 16, 2024 at 01:40:04PM -0700, Vadim Fedorenko wrote:
> Some simple benchmarks are added to understand the baseline of
> performance.
>
> Signed-off-by: Vadim Fedorenko <[email protected]>
...
> diff --git a/tools/testing/selftests/bpf/benchs/bench_bpf_crypto.c b/tools/testing/selftests/bpf/benchs/bench_bpf_crypto.c
> new file mode 100644
> index 000000000000..86048f02e6ac
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/benchs/bench_bpf_crypto.c
> @@ -0,0 +1,190 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/* Copyright (c) 2024 Meta Platforms, Inc. and affiliates. */
> +
> +#include <argp.h>
> +#include "bench.h"
> +#include "crypto_bench.skel.h"
> +#include "../progs/crypto_share.h"
> +
> +#define MAX_CIPHER_LEN 32
> +static char *input;
> +static struct crypto_ctx {
> + struct crypto_bench *skel;
> + int pfd;
> +} ctx;
> +
> +static struct crypto_args {
> + u32 crypto_len;
> + char *crypto_cipher;
> +} args = {
> + .crypto_len = 16,
> + .crypto_cipher = "ecb(aes)",
> +};
> +
> +enum {
> + ARG_CRYPTO_LEN = 5000,
> + ARG_CRYPTO_CIPHER = 5001,
> +};
> +
> +static const struct argp_option opts[] = {
> + { "crypto-len", ARG_CRYPTO_LEN, "CRYPTO_LEN", 0,
> + "Set the length of crypto buffer" },
> + { "crypto-cipher", ARG_CRYPTO_CIPHER, "CRYPTO_CIPHER", 0,
> + "Set the cipher to use (defaul:ecb(aes))" },
nit: should this be 'default' ?
Flagged by checkpatch.pl --codespell
> + {},
> +};
...
On 4/16/24 1:40 PM, Vadim Fedorenko wrote:
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 5034c1b4ded7..acc479c13f52 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -1265,6 +1265,7 @@ int bpf_dynptr_check_size(u32 size);
> u32 __bpf_dynptr_size(const struct bpf_dynptr_kern *ptr);
> const void *__bpf_dynptr_data(const struct bpf_dynptr_kern *ptr, u32 len);
> void *__bpf_dynptr_data_rw(const struct bpf_dynptr_kern *ptr, u32 len);
> +bool __bpf_dynptr_is_rdonly(const struct bpf_dynptr_kern *ptr);
>
> #ifdef CONFIG_BPF_JIT
> int bpf_trampoline_link_prog(struct bpf_tramp_link *link, struct bpf_trampoline *tr);
> diff --git a/include/linux/bpf_crypto.h b/include/linux/bpf_crypto.h
> new file mode 100644
> index 000000000000..a41e71d4e2d9
> --- /dev/null
> +++ b/include/linux/bpf_crypto.h
> @@ -0,0 +1,24 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/* Copyright (c) 2024 Meta Platforms, Inc. and affiliates. */
> +#ifndef _BPF_CRYPTO_H
> +#define _BPF_CRYPTO_H
> +
> +struct bpf_crypto_type {
> + void *(*alloc_tfm)(const char *algo);
> + void (*free_tfm)(void *tfm);
> + int (*has_algo)(const char *algo);
> + int (*setkey)(void *tfm, const u8 *key, unsigned int keylen);
> + int (*setauthsize)(void *tfm, unsigned int authsize);
> + int (*encrypt)(void *tfm, const u8 *src, u8 *dst, unsigned int len, u8 *iv);
> + int (*decrypt)(void *tfm, const u8 *src, u8 *dst, unsigned int len, u8 *iv);
> + unsigned int (*ivsize)(void *tfm);
> + unsigned int (*statesize)(void *tfm);
> + u32 (*get_flags)(void *tfm);
> + struct module *owner;
> + char name[14];
> +};
> +
> +int bpf_crypto_register_type(const struct bpf_crypto_type *type);
> +int bpf_crypto_unregister_type(const struct bpf_crypto_type *type);
> +
> +#endif /* _BPF_CRYPTO_H */
> diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
> index 368c5d86b5b7..736bd22e5ce0 100644
> --- a/kernel/bpf/Makefile
> +++ b/kernel/bpf/Makefile
> @@ -44,6 +44,9 @@ obj-$(CONFIG_BPF_SYSCALL) += bpf_struct_ops.o
> obj-$(CONFIG_BPF_SYSCALL) += cpumask.o
> obj-${CONFIG_BPF_LSM} += bpf_lsm.o
> endif
> +ifeq ($(CONFIG_CRYPTO),y)
> +obj-$(CONFIG_BPF_SYSCALL) += crypto.o
> +endif
> obj-$(CONFIG_BPF_PRELOAD) += preload/
>
> obj-$(CONFIG_BPF_SYSCALL) += relo_core.o
> diff --git a/kernel/bpf/crypto.c b/kernel/bpf/crypto.c
> new file mode 100644
> index 000000000000..a76d80f37f55
> --- /dev/null
> +++ b/kernel/bpf/crypto.c
> @@ -0,0 +1,377 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/* Copyright (c) 2024 Meta, Inc */
> +#include <linux/bpf.h>
> +#include <linux/bpf_crypto.h>
> +#include <linux/bpf_mem_alloc.h>
> +#include <linux/btf.h>
> +#include <linux/btf_ids.h>
> +#include <linux/filter.h>
> +#include <linux/scatterlist.h>
> +#include <linux/skbuff.h>
> +#include <crypto/skcipher.h>
> +
> +struct bpf_crypto_type_list {
> + const struct bpf_crypto_type *type;
> + struct list_head list;
> +};
> +
> +/* BPF crypto initialization parameters struct */
> +/**
> + * struct bpf_crypto_params - BPF crypto initialization parameters structure
> + * @type: The string of crypto operation type.
> + * @algo: The string of algorithm to initialize.
> + * @key: The cipher key used to init crypto algorithm.
> + * @key_len: The length of cipher key.
> + * @authsize: The length of authentication tag used by algorithm.
> + */
> +struct bpf_crypto_params {
> + char type[14];
> + char algo[128];
> + __u8 key[256];
It should have a two-byte hole here. Add
__u8 reserved[2];
and check for 0 in bpf_crypto_ctx_create() in case it could be reused later.
bpf_crypto_ctx_create() should not be called very often anyway.
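A minimal sketch of that suggestion (placement of the reserved field and
the check are only illustrative):

  __u8 key[256];
  __u8 reserved[2];	/* must be zero for now */
  __u32 key_len;

and at the top of bpf_crypto_ctx_create():

  if (params->reserved[0] || params->reserved[1]) {
          *err = -EINVAL;
          return NULL;
  }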
> + __u32 key_len;
> + __u32 authsize;
I don't think there is tail padding in this struct, so it should be fine.
> +} __attribute__((aligned(8)));
Does it need aligned(8) here?
> +
> +static LIST_HEAD(bpf_crypto_types);
> +static DECLARE_RWSEM(bpf_crypto_types_sem);
> +
> +/**
> + * struct bpf_crypto_ctx - refcounted BPF crypto context structure
> + * @type: The pointer to bpf crypto type
> + * @tfm: The pointer to instance of crypto API struct.
> + * @rcu: The RCU head used to free the crypto context with RCU safety.
> + * @usage: Object reference counter. When the refcount goes to 0, the
> + * memory is released back to the BPF allocator, which provides
> + * RCU safety.
> + */
> +struct bpf_crypto_ctx {
> + const struct bpf_crypto_type *type;
> + void *tfm;
> + u32 siv_len;
> + struct rcu_head rcu;
> + refcount_t usage;
> +};
> +
> +int bpf_crypto_register_type(const struct bpf_crypto_type *type)
> +{
> + struct bpf_crypto_type_list *node;
> + int err = -EEXIST;
> +
> + down_write(&bpf_crypto_types_sem);
> + list_for_each_entry(node, &bpf_crypto_types, list) {
> + if (!strcmp(node->type->name, type->name))
> + goto unlock;
> + }
> +
> + node = kmalloc(sizeof(*node), GFP_KERNEL);
> + err = -ENOMEM;
> + if (!node)
> + goto unlock;
> +
> + node->type = type;
> + list_add(&node->list, &bpf_crypto_types);
> + err = 0;
> +
> +unlock:
> + up_write(&bpf_crypto_types_sem);
> +
> + return err;
> +}
> +EXPORT_SYMBOL_GPL(bpf_crypto_register_type);
> +
> +int bpf_crypto_unregister_type(const struct bpf_crypto_type *type)
> +{
> + struct bpf_crypto_type_list *node;
> + int err = -ENOENT;
> +
> + down_write(&bpf_crypto_types_sem);
> + list_for_each_entry(node, &bpf_crypto_types, list) {
> + if (strcmp(node->type->name, type->name))
> + continue;
> +
> + list_del(&node->list);
> + kfree(node);
> + err = 0;
> + break;
> + }
> + up_write(&bpf_crypto_types_sem);
> +
> + return err;
> +}
> +EXPORT_SYMBOL_GPL(bpf_crypto_unregister_type);
> +
> +static const struct bpf_crypto_type *bpf_crypto_get_type(const char *name)
> +{
> + const struct bpf_crypto_type *type = ERR_PTR(-ENOENT);
> + struct bpf_crypto_type_list *node;
> +
> + down_read(&bpf_crypto_types_sem);
> + list_for_each_entry(node, &bpf_crypto_types, list) {
> + if (strcmp(node->type->name, name))
> + continue;
> +
> + if (try_module_get(node->type->owner))
> + type = node->type;
> + break;
> + }
> + up_read(&bpf_crypto_types_sem);
> +
> + return type;
> +}
> +
> +__bpf_kfunc_start_defs();
> +
> +/**
> + * bpf_crypto_ctx_create() - Create a mutable BPF crypto context.
> + *
> + * Allocates a crypto context that can be used, acquired, and released by
> + * a BPF program. The crypto context returned by this function must either
> + * be embedded in a map as a kptr, or freed with bpf_crypto_ctx_release().
> + * As crypto API functions use GFP_KERNEL allocations, this function can
> + * only be used in sleepable BPF programs.
> + *
> + * bpf_crypto_ctx_create() allocates memory for crypto context.
> + * It may return NULL if no memory is available.
> + * @params: pointer to struct bpf_crypto_params which contains all the
> + * details needed to initialise crypto context.
> + * @err: integer to store error code when NULL is returned.
> + */
> +__bpf_kfunc struct bpf_crypto_ctx *
> +bpf_crypto_ctx_create(const struct bpf_crypto_params *params, int *err)
Add a "u32 params__sz" arg in case the params struct gains additional fields later.
Take a look at how opts__sz is checked in nf_conntrack_bpf.c.
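For reference, a rough sketch of that shape (untested; the size check mirrors
what nf_conntrack_bpf.c does with its opts__sz argument):

	__bpf_kfunc struct bpf_crypto_ctx *
	bpf_crypto_ctx_create(const struct bpf_crypto_params *params, u32 params__sz,
			      int *err)
	{
		const struct bpf_crypto_type *type;
		struct bpf_crypto_ctx *ctx;

		if (params__sz != sizeof(struct bpf_crypto_params)) {
			*err = -EINVAL;
			return NULL;
		}
		/* ... rest unchanged ... */
	}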
> +{
> + const struct bpf_crypto_type *type;
> + struct bpf_crypto_ctx *ctx;
> +
> + type = bpf_crypto_get_type(params->type);
> + if (IS_ERR(type)) {
> + *err = PTR_ERR(type);
> + return NULL;
> + }
> +
> + if (!type->has_algo(params->algo)) {
> + *err = -EOPNOTSUPP;
> + goto err_module_put;
> + }
> +
> + if (!params->authsize && type->setauthsize) {
> + *err = -EOPNOTSUPP;
> + goto err_module_put;
> + }
> +
> + if (params->authsize && !type->setauthsize) {
nit. Together with the previous "if" test, replace them with one test like:
if (!!params->authsize ^ !!type->setauthsize) {
> + *err = -EOPNOTSUPP;
> + goto err_module_put;
> + }
> +
> + if (!params->key_len) {
Also check "|| params->key_len > sizeof(params->key)".
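i.e. something like:

	if (!params->key_len || params->key_len > sizeof(params->key)) {
		*err = -EINVAL;
		goto err_module_put;
	}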
> + *err = -EINVAL;
> + goto err_module_put;
> + }
> +
> + ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
> + if (!ctx) {
> + *err = -ENOMEM;
> + goto err_module_put;
> + }
> +
> + ctx->type = type;
> + ctx->tfm = type->alloc_tfm(params->algo);
> + if (IS_ERR(ctx->tfm)) {
> + *err = PTR_ERR(ctx->tfm);
> + goto err_free_ctx;
> + }
> +
> + if (params->authsize) {
> + *err = type->setauthsize(ctx->tfm, params->authsize);
> + if (*err)
> + goto err_free_tfm;
> + }
> +
> + *err = type->setkey(ctx->tfm, params->key, params->key_len);
> + if (*err)
> + goto err_free_tfm;
> +
> + if (type->get_flags(ctx->tfm) & CRYPTO_TFM_NEED_KEY) {
> + *err = -EINVAL;
> + goto err_free_tfm;
> + }
> +
> + ctx->siv_len = type->ivsize(ctx->tfm) + type->statesize(ctx->tfm);
> +
> + refcount_set(&ctx->usage, 1);
> +
> + return ctx;
> +
> +err_free_tfm:
> + type->free_tfm(ctx->tfm);
> +err_free_ctx:
> + kfree(ctx);
> +err_module_put:
> + module_put(type->owner);
> +
> + return NULL;
> +}
> +
> +static void crypto_free_cb(struct rcu_head *head)
> +{
> + struct bpf_crypto_ctx *ctx;
> +
> + ctx = container_of(head, struct bpf_crypto_ctx, rcu);
> + ctx->type->free_tfm(ctx->tfm);
> + module_put(ctx->type->owner);
> + kfree(ctx);
> +}
> +
> +/**
> + * bpf_crypto_ctx_acquire() - Acquire a reference to a BPF crypto context.
> + * @ctx: The BPF crypto context being acquired. The ctx must be a trusted
> + * pointer.
> + *
> + * Acquires a reference to a BPF crypto context. The context returned by this function
> + * must either be embedded in a map as a kptr, or freed with
> + * bpf_crypto_skcipher_ctx_release().
> + */
> +__bpf_kfunc struct bpf_crypto_ctx *
> +bpf_crypto_ctx_acquire(struct bpf_crypto_ctx *ctx)
> +{
> + if (!refcount_inc_not_zero(&ctx->usage))
> + return NULL;
> + return ctx;
> +}
> +
> +/**
> + * bpf_crypto_ctx_release() - Release a previously acquired BPF crypto context.
> + * @ctx: The crypto context being released.
> + *
> + * Releases a previously acquired reference to a BPF crypto context. When the final
> + * reference of the BPF crypto context has been released, it is subsequently freed in
> + * an RCU callback in the BPF memory allocator.
> + */
> +__bpf_kfunc void bpf_crypto_ctx_release(struct bpf_crypto_ctx *ctx)
> +{
> + if (refcount_dec_and_test(&ctx->usage))
> + call_rcu(&ctx->rcu, crypto_free_cb);
> +}
> +
> +static int bpf_crypto_crypt(const struct bpf_crypto_ctx *ctx,
> + const struct bpf_dynptr_kern *src,
> + struct bpf_dynptr_kern *dst,
> + const struct bpf_dynptr_kern *siv,
> + bool decrypt)
> +{
> + u32 src_len, dst_len, siv_len;
> + const u8 *psrc;
> + u8 *pdst, *piv;
> + int err;
> +
> + if (__bpf_dynptr_is_rdonly(dst))
> + return -EINVAL;
> +
> + siv_len = __bpf_dynptr_size(siv);
> + src_len = __bpf_dynptr_size(src);
> + dst_len = __bpf_dynptr_size(dst);
> + if (!src_len || !dst_len)
> + return -EINVAL;
> +
> + if (siv_len != ctx->siv_len)
> + return -EINVAL;
> +
> + psrc = __bpf_dynptr_data(src, src_len);
> + if (!psrc)
> + return -EINVAL;
> + pdst = __bpf_dynptr_data_rw(dst, dst_len);
> + if (!pdst)
> + return -EINVAL;
> +
> + piv = siv_len ? __bpf_dynptr_data_rw(siv, siv_len) : NULL;
It has been a while. I don't remember if it has already been brought up before.
The "const struct bpf_dynptr_kern *siv" here is essentially an optional pointer.
Allowing NULL is a more intuitive usage instead of passing a 0-len dynptr. The
verifier needs some changes to take __nullable suffix for "struct
bpf_dynptr_kern *siv__nullable". This could be a follow-up to relax the
restriction to allow NULL and is not necessary in this set.
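Roughly, for such a hypothetical follow-up (assuming the verifier learns to
accept NULL for a __nullable dynptr kfunc argument), bpf_crypto_crypt() would
only need something like:

	/* siv may be NULL when the algorithm needs no IV/state */
	siv_len = siv ? __bpf_dynptr_size(siv) : 0;
	...
	if (siv_len != ctx->siv_len)
		return -EINVAL;

	piv = siv_len ? __bpf_dynptr_data_rw(siv, siv_len) : NULL;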
> + if (siv_len && !piv)
> + return -EINVAL;
> +
> + err = decrypt ? ctx->type->decrypt(ctx->tfm, psrc, pdst, src_len, piv)
> + : ctx->type->encrypt(ctx->tfm, psrc, pdst, src_len, piv);
> +
> + return err;
> +}
On 4/16/24 1:40 PM, Vadim Fedorenko wrote:
> +void test_crypto_sanity(void)
> +{
> + struct crypto_syscall_args sargs = {
> + .key_len = 16,
> + };
> + LIBBPF_OPTS(bpf_tc_hook, qdisc_hook, .attach_point = BPF_TC_EGRESS);
> + LIBBPF_OPTS(bpf_tc_opts, tc_attach_enc);
> + LIBBPF_OPTS(bpf_tc_opts, tc_attach_dec);
> + LIBBPF_OPTS(bpf_test_run_opts, opts,
> + .ctx_in = &sargs,
> + .ctx_size_in = sizeof(sargs),
> + );
> + struct nstoken *nstoken = NULL;
> + struct crypto_sanity *skel;
> + char afalg_plain[16] = {0};
> + char afalg_dst[16] = {0};
> + struct sockaddr_in6 addr;
> + int sockfd, err, pfd;
> + socklen_t addrlen;
> +
> + SYS(fail, "ip netns add %s", NS_TEST);
> + SYS(fail, "ip -net %s -6 addr add %s/128 dev lo nodad", NS_TEST, IPV6_IFACE_ADDR);
> + SYS(fail, "ip -net %s link set dev lo up", NS_TEST);
> +
> + nstoken = open_netns(NS_TEST);
> + if (!ASSERT_OK_PTR(nstoken, "open_netns"))
> + goto fail;
skel is not initialized. The "fail:" case needs it.
> +
> + err = init_afalg();
> + if (!ASSERT_OK(err, "AF_ALG init fail"))
> + goto fail;
> +
> + qdisc_hook.ifindex = if_nametoindex("lo");
> + if (!ASSERT_GT(qdisc_hook.ifindex, 0, "if_nametoindex lo"))
> + goto fail;
> +
> + skel = crypto_sanity__open_and_load();
> + if (!ASSERT_OK_PTR(skel, "skel open"))
> + return;
The netns "crypto_sanity_ns" is not deleted.
> +
> + memcpy(skel->bss->key, crypto_key, sizeof(crypto_key));
> + snprintf(skel->bss->algo, 128, "%s", algo);
> + pfd = bpf_program__fd(skel->progs.skb_crypto_setup);
> + if (!ASSERT_GT(pfd, 0, "skb_crypto_setup fd"))
> + goto fail;
> +
> + err = bpf_prog_test_run_opts(pfd, &opts);
> + if (!ASSERT_OK(err, "skb_crypto_setup") ||
> + !ASSERT_OK(opts.retval, "skb_crypto_setup retval"))
> + goto fail;
> +
> + if (!ASSERT_OK(skel->bss->status, "skb_crypto_setup status"))
> + goto fail;
> +
> + err = crypto_sanity__attach(skel);
Is this attach a leftover from a previous revision?
> + if (!ASSERT_OK(err, "crypto_sanity__attach"))
> + goto fail;
> +
> + err = bpf_tc_hook_create(&qdisc_hook);
> + if (!ASSERT_OK(err, "create qdisc hook"))
> + goto fail;
> +
> + addrlen = sizeof(addr);
> + err = make_sockaddr(AF_INET6, IPV6_IFACE_ADDR, UDP_TEST_PORT,
> + (void *)&addr, &addrlen);
> + if (!ASSERT_OK(err, "make_sockaddr"))
> + goto fail;
> +
> + tc_attach_enc.prog_fd = bpf_program__fd(skel->progs.encrypt_sanity);
> + err = bpf_tc_attach(&qdisc_hook, &tc_attach_enc);
> + if (!ASSERT_OK(err, "attach encrypt filter"))
> + goto fail;
> +
> + sockfd = socket(AF_INET6, SOCK_DGRAM, 0);
> + if (!ASSERT_NEQ(sockfd, -1, "encrypt socket"))
> + goto fail;
> + err = sendto(sockfd, plain_text, sizeof(plain_text), 0, (void *)&addr, addrlen);
> + close(sockfd);
> + if (!ASSERT_EQ(err, sizeof(plain_text), "encrypt send"))
> + goto fail;
> +
> + do_crypt_afalg(plain_text, afalg_dst, sizeof(afalg_dst), true);
> +
> + bpf_tc_detach(&qdisc_hook, &tc_attach_enc);
Check the error here.
I suspect this detach should have failed because at least
tc_attach_enc.prog_fd is not 0.
The following attach (&tc_attach_dec) may just happen to get a higher
priority such that the leftover here does not matter. It is still better to
get it right.
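For example, something like other tc selftests do (untested here);
bpf_tc_detach() rejects opts with a non-zero prog_fd/prog_id, so they need to
be cleared after the attach filled them in:

	tc_attach_enc.flags = tc_attach_enc.prog_fd = tc_attach_enc.prog_id = 0;
	err = bpf_tc_detach(&qdisc_hook, &tc_attach_enc);
	if (!ASSERT_OK(err, "bpf_tc_detach encrypt"))
		goto fail;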
> + if (!ASSERT_OK(skel->bss->status, "encrypt status"))
> + goto fail;
> + if (!ASSERT_STRNEQ(skel->bss->dst, afalg_dst, sizeof(afalg_dst), "encrypt AF_ALG"))
> + goto fail;
> +
> + tc_attach_dec.prog_fd = bpf_program__fd(skel->progs.decrypt_sanity);
> + err = bpf_tc_attach(&qdisc_hook, &tc_attach_dec);
> + if (!ASSERT_OK(err, "attach decrypt filter"))
> + goto fail;
> +
> + sockfd = socket(AF_INET6, SOCK_DGRAM, 0);
> + if (!ASSERT_NEQ(sockfd, -1, "decrypt socket"))
> + goto fail;
> + err = sendto(sockfd, afalg_dst, sizeof(afalg_dst), 0, (void *)&addr, addrlen);
> + close(sockfd);
> + if (!ASSERT_EQ(err, sizeof(afalg_dst), "decrypt send"))
> + goto fail;
> +
> + do_crypt_afalg(afalg_dst, afalg_plain, sizeof(afalg_plain), false);
> +
> + bpf_tc_detach(&qdisc_hook, &tc_attach_dec);
> + if (!ASSERT_OK(skel->bss->status, "decrypt status"))
> + goto fail;
> + if (!ASSERT_STRNEQ(skel->bss->dst, afalg_plain, sizeof(afalg_plain), "decrypt AF_ALG"))
> + goto fail;
> +
> +fail:
> + if (nstoken) {
No need to check NULL. close_netns() can handle it.
> + bpf_tc_hook_destroy(&qdisc_hook);
This also does not destroy the clsact qdisc. Although the function name feels
like it would, from a quick look at bpf_tc_hook_destroy, it only removes the
qdisc when both BPF_TC_INGRESS and BPF_TC_EGRESS are set in qdisc_hook.attach_point.
I would skip bpf_tc_hook_destroy entirely; the qdisc will go away together with the netns.
[ ... ]
> diff --git a/tools/testing/selftests/bpf/progs/crypto_sanity.c b/tools/testing/selftests/bpf/progs/crypto_sanity.c
> new file mode 100644
> index 000000000000..57df5776bcaf
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/progs/crypto_sanity.c
> @@ -0,0 +1,161 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/* Copyright (c) 2024 Meta Platforms, Inc. and affiliates. */
> +
> +#include "vmlinux.h"
> +#include "bpf_tracing_net.h"
> +#include <bpf/bpf_helpers.h>
> +#include <bpf/bpf_endian.h>
> +#include <bpf/bpf_tracing.h>
> +#include "bpf_misc.h"
> +#include "bpf_kfuncs.h"
> +#include "crypto_common.h"
> +
> +unsigned char key[256] = {};
> +char algo[128] = {};
> +char dst[16] = {};
> +int status;
> +
> +static int skb_dynptr_validate(struct __sk_buff *skb, struct bpf_dynptr *psrc)
> +{
> + struct ipv6hdr ip6h;
> + struct udphdr udph;
> + u32 offset;
> +
> + if (skb->protocol != __bpf_constant_htons(ETH_P_IPV6))
> + return -1;
> +
> + if (bpf_skb_load_bytes(skb, ETH_HLEN, &ip6h, sizeof(ip6h)))
> + return -1;
> +
> + if (ip6h.nexthdr != IPPROTO_UDP)
> + return -1;
> +
> + if (bpf_skb_load_bytes(skb, ETH_HLEN + sizeof(ip6h), &udph, sizeof(udph)))
> + return -1;
> +
> + if (udph.dest != __bpf_constant_htons(UDP_TEST_PORT))
> + return -1;
> +
> + offset = ETH_HLEN + sizeof(ip6h) + sizeof(udph);
> + if (skb->len < offset + 16)
> + return -1;
> +
> + /* let's make sure that 16 bytes of payload are in the linear part of skb */
> + bpf_skb_pull_data(skb, offset + 16);
> + bpf_dynptr_from_skb(skb, 0, psrc);
> + bpf_dynptr_adjust(psrc, offset, offset + 16);
> +
> + return 0;
> +}
> +
> +SEC("syscall")
> +int skb_crypto_setup(struct crypto_syscall_args *ctx)
> +{
> + struct bpf_crypto_params params = {
> + .type = "skcipher",
> + .key_len = ctx->key_len,
> + .authsize = ctx->authsize,
> + };
> + struct bpf_crypto_ctx *cctx;
> + int err = 0;
> +
> + status = 0;
> +
> + if (ctx->key_len > 255) {
key_len == 256 won't work?
> + status = -EINVAL;
> + return 0;
> + }
> +
> + __builtin_memcpy(¶ms.algo, algo, sizeof(algo));
> + __builtin_memcpy(¶ms.key, key, sizeof(key));
It would be useful to comment here on what problem was hit such that the key
cannot be passed in "struct crypto_syscall_args" and has to go through the
global variable instead.
Instead of "key_len" in crypto_syscall_args and the actual "key" in a global,
how about skipping "struct crypto_syscall_args" altogether and putting key_len
(and authsize) in globals?
Put UDP_TEST_PORT in a global variable for config/filter usage as well, and
"crypto_share.h" can go away.
> + cctx = bpf_crypto_ctx_create(¶ms, &err);
> +
> + if (!cctx) {
> + status = err;
> + return 0;
> + }
> +
> + err = crypto_ctx_insert(cctx);
> + if (err && err != -EEXIST)
> + status = err;
> +
> + return 0;
> +}
> +
> +SEC("tc")
> +int decrypt_sanity(struct __sk_buff *skb)
> +{
> + struct __crypto_ctx_value *v;
> + struct bpf_crypto_ctx *ctx;
> + struct bpf_dynptr psrc, pdst, iv;
> + int err;
> +
> + err = skb_dynptr_validate(skb, &psrc);
> + if (err < 0) {
> + status = err;
> + return TC_ACT_SHOT;
> + }
> +
> + v = crypto_ctx_value_lookup();
> + if (!v) {
> + status = -ENOENT;
> + return TC_ACT_SHOT;
> + }
> +
> + ctx = v->ctx;
> + if (!ctx) {
> + status = -ENOENT;
> + return TC_ACT_SHOT;
> + }
> +
> + bpf_dynptr_from_mem(dst, sizeof(dst), 0, &pdst);
dst is now a global, which makes it easier to test the result. A comment here
should note this point for people referencing this test for production use
cases and suggest that a percpu map could be used instead.
It will be useful to have dynptr working with stack memory in the future.
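For that production-oriented comment, the scratch buffer could live in a percpu
map instead of a global, roughly (untested sketch, names are only illustrative):

	struct crypto_scratch {
		__u8 dst[16];
	};

	struct {
		__uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
		__uint(max_entries, 1);
		__type(key, __u32);
		__type(value, struct crypto_scratch);
	} scratch_map SEC(".maps");

	/* in the tc program */
	__u32 zero = 0;
	struct crypto_scratch *scratch = bpf_map_lookup_elem(&scratch_map, &zero);

	if (!scratch)
		return TC_ACT_SHOT;
	bpf_dynptr_from_mem(scratch->dst, sizeof(scratch->dst), 0, &pdst);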
> + /* iv dynptr has to be initialized with 0 size, but proper memory region
> + * has to be provided anyway
> + */
> + bpf_dynptr_from_mem(dst, 0, 0, &iv);
> +
> + status = bpf_crypto_decrypt(ctx, &psrc, &pdst, &iv);
> +
> + return TC_ACT_SHOT;
> +}
> +
> +SEC("tc")
> +int encrypt_sanity(struct __sk_buff *skb)
> +{
> + struct __crypto_ctx_value *v;
> + struct bpf_crypto_ctx *ctx;
> + struct bpf_dynptr psrc, pdst, iv;
> + int err;
> +
> + status = 0;
> +
> + err = skb_dynptr_validate(skb, &psrc);
> + if (err < 0) {
> + status = err;
> + return TC_ACT_SHOT;
> + }
> +
> + v = crypto_ctx_value_lookup();
> + if (!v) {
> + status = -ENOENT;
> + return TC_ACT_SHOT;
> + }
> +
> + ctx = v->ctx;
> + if (!ctx) {
> + status = -ENOENT;
> + return TC_ACT_SHOT;
> + }
> +
> + bpf_dynptr_from_mem(dst, sizeof(dst), 0, &pdst);
> + /* iv dynptr has to be initialized with 0 size, but proper memory region
> + * has to be provided anyway
> + */
> + bpf_dynptr_from_mem(dst, 0, 0, &iv);
> +
> + status = bpf_crypto_encrypt(ctx, &psrc, &pdst, &iv);
> +
> + return TC_ACT_SHOT;
> +}
> +
> +char __license[] SEC("license") = "GPL";
> diff --git a/tools/testing/selftests/bpf/progs/crypto_share.h b/tools/testing/selftests/bpf/progs/crypto_share.h
> new file mode 100644
> index 000000000000..c5a6ef65156d
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/progs/crypto_share.h
> @@ -0,0 +1,10 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/* Copyright (c) 2024 Meta Platforms, Inc. and affiliates. */
> +
> +#define UDP_TEST_PORT 7777
> +
> +struct crypto_syscall_args {
> + u32 key_len;
> + u32 authsize;
> +};
> +
On 4/16/24 1:40 PM, Vadim Fedorenko wrote:
> +static void *crypto_producer(void *)
The bpf CI cannot compile:
benchs/bench_bpf_crypto.c:157:36: error: omitting the parameter name in a function definition is a C23 extension [-Werror,-Wc23-extensions]
157 | static void *crypto_producer(void *)
https://github.com/kernel-patches/bpf/actions/runs/8712330655
> +{
> + LIBBPF_OPTS(bpf_test_run_opts, opts,
> + .repeat = 64,
> + .data_in = input,
> + .data_size_in = args.crypto_len,
> + );
> +
> + while (true)
> + (void)bpf_prog_test_run_opts(ctx.pfd, &opts);
> + return NULL;
> +}
On 19/04/2024 19:57, Martin KaFai Lau wrote:
> On 4/16/24 1:40 PM, Vadim Fedorenko wrote:
>> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
>> index 5034c1b4ded7..acc479c13f52 100644
>> --- a/include/linux/bpf.h
>> +++ b/include/linux/bpf.h
>> @@ -1265,6 +1265,7 @@ int bpf_dynptr_check_size(u32 size);
>> u32 __bpf_dynptr_size(const struct bpf_dynptr_kern *ptr);
>> const void *__bpf_dynptr_data(const struct bpf_dynptr_kern *ptr, u32
>> len);
>> void *__bpf_dynptr_data_rw(const struct bpf_dynptr_kern *ptr, u32 len);
>> +bool __bpf_dynptr_is_rdonly(const struct bpf_dynptr_kern *ptr);
>> #ifdef CONFIG_BPF_JIT
>> int bpf_trampoline_link_prog(struct bpf_tramp_link *link, struct
>> bpf_trampoline *tr);
>> diff --git a/include/linux/bpf_crypto.h b/include/linux/bpf_crypto.h
>> new file mode 100644
>> index 000000000000..a41e71d4e2d9
>> --- /dev/null
>> +++ b/include/linux/bpf_crypto.h
>> @@ -0,0 +1,24 @@
>> +/* SPDX-License-Identifier: GPL-2.0-only */
>> +/* Copyright (c) 2024 Meta Platforms, Inc. and affiliates. */
>> +#ifndef _BPF_CRYPTO_H
>> +#define _BPF_CRYPTO_H
>> +
>> +struct bpf_crypto_type {
>> + void *(*alloc_tfm)(const char *algo);
>> + void (*free_tfm)(void *tfm);
>> + int (*has_algo)(const char *algo);
>> + int (*setkey)(void *tfm, const u8 *key, unsigned int keylen);
>> + int (*setauthsize)(void *tfm, unsigned int authsize);
>> + int (*encrypt)(void *tfm, const u8 *src, u8 *dst, unsigned int
>> len, u8 *iv);
>> + int (*decrypt)(void *tfm, const u8 *src, u8 *dst, unsigned int
>> len, u8 *iv);
>> + unsigned int (*ivsize)(void *tfm);
>> + unsigned int (*statesize)(void *tfm);
>> + u32 (*get_flags)(void *tfm);
>> + struct module *owner;
>> + char name[14];
>> +};
>> +
>> +int bpf_crypto_register_type(const struct bpf_crypto_type *type);
>> +int bpf_crypto_unregister_type(const struct bpf_crypto_type *type);
>> +
>> +#endif /* _BPF_CRYPTO_H */
>> diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
>> index 368c5d86b5b7..736bd22e5ce0 100644
>> --- a/kernel/bpf/Makefile
>> +++ b/kernel/bpf/Makefile
>> @@ -44,6 +44,9 @@ obj-$(CONFIG_BPF_SYSCALL) += bpf_struct_ops.o
>> obj-$(CONFIG_BPF_SYSCALL) += cpumask.o
>> obj-${CONFIG_BPF_LSM} += bpf_lsm.o
>> endif
>> +ifeq ($(CONFIG_CRYPTO),y)
>> +obj-$(CONFIG_BPF_SYSCALL) += crypto.o
>> +endif
>> obj-$(CONFIG_BPF_PRELOAD) += preload/
>> obj-$(CONFIG_BPF_SYSCALL) += relo_core.o
>> diff --git a/kernel/bpf/crypto.c b/kernel/bpf/crypto.c
>> new file mode 100644
>> index 000000000000..a76d80f37f55
>> --- /dev/null
>> +++ b/kernel/bpf/crypto.c
>> @@ -0,0 +1,377 @@
>> +// SPDX-License-Identifier: GPL-2.0-only
>> +/* Copyright (c) 2024 Meta, Inc */
>> +#include <linux/bpf.h>
>> +#include <linux/bpf_crypto.h>
>> +#include <linux/bpf_mem_alloc.h>
>> +#include <linux/btf.h>
>> +#include <linux/btf_ids.h>
>> +#include <linux/filter.h>
>> +#include <linux/scatterlist.h>
>> +#include <linux/skbuff.h>
>> +#include <crypto/skcipher.h>
>> +
>> +struct bpf_crypto_type_list {
>> + const struct bpf_crypto_type *type;
>> + struct list_head list;
>> +};
>> +
>> +/* BPF crypto initialization parameters struct */
>> +/**
>> + * struct bpf_crypto_params - BPF crypto initialization parameters
>> structure
>> + * @type: The string of crypto operation type.
>> + * @algo: The string of algorithm to initialize.
>> + * @key: The cipher key used to init crypto algorithm.
>> + * @key_len: The length of cipher key.
>> + * @authsize: The length of authentication tag used by algorithm.
>> + */
>> +struct bpf_crypto_params {
>> + char type[14];
>> + char algo[128];
>> + __u8 key[256];
>
> There should be a two-byte hole here. Add
> __u8 reserved[2];
>
> and check that it is zero in bpf_crypto_ctx_create() in case it could be
> reused later. bpf_crypto_ctx_create() should not be called very often.
>
Sure, I'll add it to have algo, key and other fields aligned to 8 bytes.
>> + __u32 key_len;
>> + __u32 authsize;
>
> I don't think there is tail padding in this struct, so it should be fine.
>
>> +} __attribute__((aligned(8)));
>
> Does it need aligned(8) here?
Nope, looks like a leftover from the uapi variant.
>
>> +
>> +static LIST_HEAD(bpf_crypto_types);
>> +static DECLARE_RWSEM(bpf_crypto_types_sem);
>> +
>> +/**
>> + * struct bpf_crypto_ctx - refcounted BPF crypto context structure
>> + * @type: The pointer to bpf crypto type
>> + * @tfm: The pointer to instance of crypto API struct.
>> + * @rcu: The RCU head used to free the crypto context with RCU
>> safety.
>> + * @usage: Object reference counter. When the refcount goes to 0, the
>> + * memory is released back to the BPF allocator, which provides
>> + * RCU safety.
>> + */
>> +struct bpf_crypto_ctx {
>> + const struct bpf_crypto_type *type;
>> + void *tfm;
>> + u32 siv_len;
>> + struct rcu_head rcu;
>> + refcount_t usage;
>> +};
>> +
>> +int bpf_crypto_register_type(const struct bpf_crypto_type *type)
>> +{
>> + struct bpf_crypto_type_list *node;
>> + int err = -EEXIST;
>> +
>> + down_write(&bpf_crypto_types_sem);
>> + list_for_each_entry(node, &bpf_crypto_types, list) {
>> + if (!strcmp(node->type->name, type->name))
>> + goto unlock;
>> + }
>> +
>> + node = kmalloc(sizeof(*node), GFP_KERNEL);
>> + err = -ENOMEM;
>> + if (!node)
>> + goto unlock;
>> +
>> + node->type = type;
>> + list_add(&node->list, &bpf_crypto_types);
>> + err = 0;
>> +
>> +unlock:
>> + up_write(&bpf_crypto_types_sem);
>> +
>> + return err;
>> +}
>> +EXPORT_SYMBOL_GPL(bpf_crypto_register_type);
>> +
>> +int bpf_crypto_unregister_type(const struct bpf_crypto_type *type)
>> +{
>> + struct bpf_crypto_type_list *node;
>> + int err = -ENOENT;
>> +
>> + down_write(&bpf_crypto_types_sem);
>> + list_for_each_entry(node, &bpf_crypto_types, list) {
>> + if (strcmp(node->type->name, type->name))
>> + continue;
>> +
>> + list_del(&node->list);
>> + kfree(node);
>> + err = 0;
>> + break;
>> + }
>> + up_write(&bpf_crypto_types_sem);
>> +
>> + return err;
>> +}
>> +EXPORT_SYMBOL_GPL(bpf_crypto_unregister_type);
>> +
>> +static const struct bpf_crypto_type *bpf_crypto_get_type(const char
>> *name)
>> +{
>> + const struct bpf_crypto_type *type = ERR_PTR(-ENOENT);
>> + struct bpf_crypto_type_list *node;
>> +
>> + down_read(&bpf_crypto_types_sem);
>> + list_for_each_entry(node, &bpf_crypto_types, list) {
>> + if (strcmp(node->type->name, name))
>> + continue;
>> +
>> + if (try_module_get(node->type->owner))
>> + type = node->type;
>> + break;
>> + }
>> + up_read(&bpf_crypto_types_sem);
>> +
>> + return type;
>> +}
>> +
>> +__bpf_kfunc_start_defs();
>> +
>> +/**
>> + * bpf_crypto_ctx_create() - Create a mutable BPF crypto context.
>> + *
>> + * Allocates a crypto context that can be used, acquired, and
>> released by
>> + * a BPF program. The crypto context returned by this function must
>> either
>> + * be embedded in a map as a kptr, or freed with
>> bpf_crypto_ctx_release().
>> + * As crypto API functions use GFP_KERNEL allocations, this function can
>> + * only be used in sleepable BPF programs.
>> + *
>> + * bpf_crypto_ctx_create() allocates memory for crypto context.
>> + * It may return NULL if no memory is available.
>> + * @params: pointer to struct bpf_crypto_params which contains all the
>> + * details needed to initialise crypto context.
>> + * @err: integer to store error code when NULL is returned.
>> + */
>> +__bpf_kfunc struct bpf_crypto_ctx *
>> +bpf_crypto_ctx_create(const struct bpf_crypto_params *params, int *err)
>
> Add a "u32 params__sz" arg in case the params struct gains additional
> fields later.
> Take a look at how opts__sz is checked in nf_conntrack_bpf.c.
>
nf_conntrack uses a hard-coded value, while the xfrm code uses
sizeof(struct bpf_xfrm_state_opts). Which one is better?
>> +{
>> + const struct bpf_crypto_type *type;
>> + struct bpf_crypto_ctx *ctx;
>> +
>> + type = bpf_crypto_get_type(params->type);
>> + if (IS_ERR(type)) {
>> + *err = PTR_ERR(type);
>> + return NULL;
>> + }
>> +
>> + if (!type->has_algo(params->algo)) {
>> + *err = -EOPNOTSUPP;
>> + goto err_module_put;
>> + }
>> +
>> + if (!params->authsize && type->setauthsize) {
>> + *err = -EOPNOTSUPP;
>> + goto err_module_put;
>> + }
>> +
>> + if (params->authsize && !type->setauthsize) {
>
> nit. Together with the previous "if" test, replace them with one test like:
>
> if (!!params->authsize ^ !!type->setauthsize) {
>
yep
>
>> + *err = -EOPNOTSUPP;
>> + goto err_module_put;
>> + }
>> +
>> + if (!params->key_len) {
>
> Also check "|| params->key_len > sizeof(params->key)".
Sure
>> + *err = -EINVAL;
>> + goto err_module_put;
>> + }
>> +
>> + ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
>> + if (!ctx) {
>> + *err = -ENOMEM;
>> + goto err_module_put;
>> + }
>> +
>> + ctx->type = type;
>> + ctx->tfm = type->alloc_tfm(params->algo);
>> + if (IS_ERR(ctx->tfm)) {
>> + *err = PTR_ERR(ctx->tfm);
>> + goto err_free_ctx;
>> + }
>> +
>> + if (params->authsize) {
>> + *err = type->setauthsize(ctx->tfm, params->authsize);
>> + if (*err)
>> + goto err_free_tfm;
>> + }
>> +
>> + *err = type->setkey(ctx->tfm, params->key, params->key_len);
>> + if (*err)
>> + goto err_free_tfm;
>> +
>> + if (type->get_flags(ctx->tfm) & CRYPTO_TFM_NEED_KEY) {
>> + *err = -EINVAL;
>> + goto err_free_tfm;
>> + }
>> +
>> + ctx->siv_len = type->ivsize(ctx->tfm) + type->statesize(ctx->tfm);
>> +
>> + refcount_set(&ctx->usage, 1);
>> +
>> + return ctx;
>> +
>> +err_free_tfm:
>> + type->free_tfm(ctx->tfm);
>> +err_free_ctx:
>> + kfree(ctx);
>> +err_module_put:
>> + module_put(type->owner);
>> +
>> + return NULL;
>> +}
>> +
>> +static void crypto_free_cb(struct rcu_head *head)
>> +{
>> + struct bpf_crypto_ctx *ctx;
>> +
>> + ctx = container_of(head, struct bpf_crypto_ctx, rcu);
>> + ctx->type->free_tfm(ctx->tfm);
>> + module_put(ctx->type->owner);
>> + kfree(ctx);
>> +}
>> +
>> +/**
>> + * bpf_crypto_ctx_acquire() - Acquire a reference to a BPF crypto
>> context.
>> + * @ctx: The BPF crypto context being acquired. The ctx must be a
>> trusted
>> + * pointer.
>> + *
>> + * Acquires a reference to a BPF crypto context. The context returned
>> by this function
>> + * must either be embedded in a map as a kptr, or freed with
>> + * bpf_crypto_skcipher_ctx_release().
>> + */
>> +__bpf_kfunc struct bpf_crypto_ctx *
>> +bpf_crypto_ctx_acquire(struct bpf_crypto_ctx *ctx)
>> +{
>> + if (!refcount_inc_not_zero(&ctx->usage))
>> + return NULL;
>> + return ctx;
>> +}
>> +
>> +/**
>> + * bpf_crypto_ctx_release() - Release a previously acquired BPF
>> crypto context.
>> + * @ctx: The crypto context being released.
>> + *
>> + * Releases a previously acquired reference to a BPF crypto context.
>> When the final
>> + * reference of the BPF crypto context has been released, it is
>> subsequently freed in
>> + * an RCU callback in the BPF memory allocator.
>> + */
>> +__bpf_kfunc void bpf_crypto_ctx_release(struct bpf_crypto_ctx *ctx)
>> +{
>> + if (refcount_dec_and_test(&ctx->usage))
>> + call_rcu(&ctx->rcu, crypto_free_cb);
>> +}
>> +
>> +static int bpf_crypto_crypt(const struct bpf_crypto_ctx *ctx,
>> + const struct bpf_dynptr_kern *src,
>> + struct bpf_dynptr_kern *dst,
>> + const struct bpf_dynptr_kern *siv,
>> + bool decrypt)
>> +{
>> + u32 src_len, dst_len, siv_len;
>> + const u8 *psrc;
>> + u8 *pdst, *piv;
>> + int err;
>> +
>> + if (__bpf_dynptr_is_rdonly(dst))
>> + return -EINVAL;
>> +
>> + siv_len = __bpf_dynptr_size(siv);
>> + src_len = __bpf_dynptr_size(src);
>> + dst_len = __bpf_dynptr_size(dst);
>> + if (!src_len || !dst_len)
>> + return -EINVAL;
>> +
>> + if (siv_len != ctx->siv_len)
>> + return -EINVAL;
>> +
>> + psrc = __bpf_dynptr_data(src, src_len);
>> + if (!psrc)
>> + return -EINVAL;
>> + pdst = __bpf_dynptr_data_rw(dst, dst_len);
>> + if (!pdst)
>> + return -EINVAL;
>> +
>> + piv = siv_len ? __bpf_dynptr_data_rw(siv, siv_len) : NULL;
>
> It has been a while. I don't remember if it has already been brought up
> before.
>
> The "const struct bpf_dynptr_kern *siv" here is essentially an optional
> pointer. Allowing NULL is a more intuitive usage instead of passing a
> 0-len dynptr. The verifier needs some changes to take __nullable suffix
> for "struct bpf_dynptr_kern *siv__nullable". This could be a follow-up
> to relax the restriction to allow NULL and is not necessary in this set.
>
I think we have already discussed this and agreed to have a follow-up once
this part is merged.
>> + if (siv_len && !piv)
>> + return -EINVAL;
>> +
>> + err = decrypt ? ctx->type->decrypt(ctx->tfm, psrc, pdst, src_len,
>> piv)
>> + : ctx->type->encrypt(ctx->tfm, psrc, pdst, src_len, piv);
>> +
>> + return err;
>> +}
>
>
On 19/04/2024 22:38, Martin KaFai Lau wrote:
> On 4/16/24 1:40 PM, Vadim Fedorenko wrote:
>> +void test_crypto_sanity(void)
>> +{
>> + struct crypto_syscall_args sargs = {
>> + .key_len = 16,
>> + };
>> + LIBBPF_OPTS(bpf_tc_hook, qdisc_hook, .attach_point = BPF_TC_EGRESS);
>> + LIBBPF_OPTS(bpf_tc_opts, tc_attach_enc);
>> + LIBBPF_OPTS(bpf_tc_opts, tc_attach_dec);
>> + LIBBPF_OPTS(bpf_test_run_opts, opts,
>> + .ctx_in = &sargs,
>> + .ctx_size_in = sizeof(sargs),
>> + );
>> + struct nstoken *nstoken = NULL;
>> + struct crypto_sanity *skel;
>> + char afalg_plain[16] = {0};
>> + char afalg_dst[16] = {0};
>> + struct sockaddr_in6 addr;
>> + int sockfd, err, pfd;
>> + socklen_t addrlen;
>> +
>> + SYS(fail, "ip netns add %s", NS_TEST);
>> + SYS(fail, "ip -net %s -6 addr add %s/128 dev lo nodad", NS_TEST,
>> IPV6_IFACE_ADDR);
>> + SYS(fail, "ip -net %s link set dev lo up", NS_TEST);
>> +
>> + nstoken = open_netns(NS_TEST);
>> + if (!ASSERT_OK_PTR(nstoken, "open_netns"))
>> + goto fail;
>
> skel is not initialized. The "fail:" case needs it.
>
>> +
>> + err = init_afalg();
>> + if (!ASSERT_OK(err, "AF_ALG init fail"))
>> + goto fail;
>> +
>> + qdisc_hook.ifindex = if_nametoindex("lo");
>> + if (!ASSERT_GT(qdisc_hook.ifindex, 0, "if_nametoindex lo"))
>> + goto fail;
>> +
>> + skel = crypto_sanity__open_and_load();
>> + if (!ASSERT_OK_PTR(skel, "skel open"))
>> + return;
>
> The netns "crypto_sanity_ns" is not deleted.
>
I'll re-arrange the skel init and open_netns. Dunno why it was moved; it
should be the other way around.
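Something like this ordering avoids both the uninitialized skel and the leaked
netns (sketch):

	skel = crypto_sanity__open_and_load();
	if (!ASSERT_OK_PTR(skel, "skel open"))
		return;

	SYS(fail, "ip netns add %s", NS_TEST);
	SYS(fail, "ip -net %s -6 addr add %s/128 dev lo nodad", NS_TEST, IPV6_IFACE_ADDR);
	SYS(fail, "ip -net %s link set dev lo up", NS_TEST);

	nstoken = open_netns(NS_TEST);
	if (!ASSERT_OK_PTR(nstoken, "open_netns"))
		goto fail;

with the "fail:" label then unconditionally destroying the skel and deleting
the netns.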
>> +
>> + memcpy(skel->bss->key, crypto_key, sizeof(crypto_key));
>> + snprintf(skel->bss->algo, 128, "%s", algo);
>> + pfd = bpf_program__fd(skel->progs.skb_crypto_setup);
>> + if (!ASSERT_GT(pfd, 0, "skb_crypto_setup fd"))
>> + goto fail;
>> +
>> + err = bpf_prog_test_run_opts(pfd, &opts);
>> + if (!ASSERT_OK(err, "skb_crypto_setup") ||
>> + !ASSERT_OK(opts.retval, "skb_crypto_setup retval"))
>> + goto fail;
>> +
>> + if (!ASSERT_OK(skel->bss->status, "skb_crypto_setup status"))
>> + goto fail;
>> +
>> + err = crypto_sanity__attach(skel);
>
> Is this attach a leftover from a previous revision?
>
Looks like it is.
>> + if (!ASSERT_OK(err, "crypto_sanity__attach"))
>> + goto fail;
>> +
>> + err = bpf_tc_hook_create(&qdisc_hook);
>> + if (!ASSERT_OK(err, "create qdisc hook"))
>> + goto fail;
>> +
>> + addrlen = sizeof(addr);
>> + err = make_sockaddr(AF_INET6, IPV6_IFACE_ADDR, UDP_TEST_PORT,
>> + (void *)&addr, &addrlen);
>> + if (!ASSERT_OK(err, "make_sockaddr"))
>> + goto fail;
>> +
>> + tc_attach_enc.prog_fd = bpf_program__fd(skel->progs.encrypt_sanity);
>> + err = bpf_tc_attach(&qdisc_hook, &tc_attach_enc);
>> + if (!ASSERT_OK(err, "attach encrypt filter"))
>> + goto fail;
>> +
>> + sockfd = socket(AF_INET6, SOCK_DGRAM, 0);
>> + if (!ASSERT_NEQ(sockfd, -1, "encrypt socket"))
>> + goto fail;
>> + err = sendto(sockfd, plain_text, sizeof(plain_text), 0, (void
>> *)&addr, addrlen);
>> + close(sockfd);
>> + if (!ASSERT_EQ(err, sizeof(plain_text), "encrypt send"))
>> + goto fail;
>> +
>> + do_crypt_afalg(plain_text, afalg_dst, sizeof(afalg_dst), true);
>> +
>> + bpf_tc_detach(&qdisc_hook, &tc_attach_enc);
>
> Check the error here.
>
> I suspect this detach should have failed because at least
> tc_attach_enc.prog_fd is not 0.
>
> The following attach (&tc_attach_dec) may just happen to get a higher
> priority such that the leftover here does not matter. It is still
> better to get it right.
>
Ok, I'll follow the approach of the tc_opts test.
>> + if (!ASSERT_OK(skel->bss->status, "encrypt status"))
>> + goto fail;
>> + if (!ASSERT_STRNEQ(skel->bss->dst, afalg_dst, sizeof(afalg_dst),
>> "encrypt AF_ALG"))
>> + goto fail;
>> +
>> + tc_attach_dec.prog_fd = bpf_program__fd(skel->progs.decrypt_sanity);
>> + err = bpf_tc_attach(&qdisc_hook, &tc_attach_dec);
>> + if (!ASSERT_OK(err, "attach decrypt filter"))
>> + goto fail;
>> +
>> + sockfd = socket(AF_INET6, SOCK_DGRAM, 0);
>> + if (!ASSERT_NEQ(sockfd, -1, "decrypt socket"))
>> + goto fail;
>> + err = sendto(sockfd, afalg_dst, sizeof(afalg_dst), 0, (void
>> *)&addr, addrlen);
>> + close(sockfd);
>> + if (!ASSERT_EQ(err, sizeof(afalg_dst), "decrypt send"))
>> + goto fail;
>> +
>> + do_crypt_afalg(afalg_dst, afalg_plain, sizeof(afalg_plain), false);
>> +
>> + bpf_tc_detach(&qdisc_hook, &tc_attach_dec);
>> + if (!ASSERT_OK(skel->bss->status, "decrypt status"))
>> + goto fail;
>> + if (!ASSERT_STRNEQ(skel->bss->dst, afalg_plain,
>> sizeof(afalg_plain), "decrypt AF_ALG"))
>> + goto fail;
>> +
>> +fail:
>> + if (nstoken) {
>
> No need to check NULL. close_netns() can handle it.
>
>> + bpf_tc_hook_destroy(&qdisc_hook);
>
> This also does not destroy the clsact qdisc. Although the function name
> feels like it would, from a quick look at bpf_tc_hook_destroy, it only
> removes the qdisc when both BPF_TC_INGRESS and BPF_TC_EGRESS are set in
> qdisc_hook.attach_point.
>
> I would skip bpf_tc_hook_destroy entirely; the qdisc will go away together
> with the netns.
>
Got it
> [ ... ]
>
>> diff --git a/tools/testing/selftests/bpf/progs/crypto_sanity.c
>> b/tools/testing/selftests/bpf/progs/crypto_sanity.c
>> new file mode 100644
>> index 000000000000..57df5776bcaf
>> --- /dev/null
>> +++ b/tools/testing/selftests/bpf/progs/crypto_sanity.c
>> @@ -0,0 +1,161 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +/* Copyright (c) 2024 Meta Platforms, Inc. and affiliates. */
>> +
>> +#include "vmlinux.h"
>> +#include "bpf_tracing_net.h"
>> +#include <bpf/bpf_helpers.h>
>> +#include <bpf/bpf_endian.h>
>> +#include <bpf/bpf_tracing.h>
>> +#include "bpf_misc.h"
>> +#include "bpf_kfuncs.h"
>> +#include "crypto_common.h"
>> +
>> +unsigned char key[256] = {};
>> +char algo[128] = {};
>> +char dst[16] = {};
>> +int status;
>> +
>> +static int skb_dynptr_validate(struct __sk_buff *skb, struct
>> bpf_dynptr *psrc)
>> +{
>> + struct ipv6hdr ip6h;
>> + struct udphdr udph;
>> + u32 offset;
>> +
>> + if (skb->protocol != __bpf_constant_htons(ETH_P_IPV6))
>> + return -1;
>> +
>> + if (bpf_skb_load_bytes(skb, ETH_HLEN, &ip6h, sizeof(ip6h)))
>> + return -1;
>> +
>> + if (ip6h.nexthdr != IPPROTO_UDP)
>> + return -1;
>> +
>> + if (bpf_skb_load_bytes(skb, ETH_HLEN + sizeof(ip6h), &udph,
>> sizeof(udph)))
>> + return -1;
>> +
>> + if (udph.dest != __bpf_constant_htons(UDP_TEST_PORT))
>> + return -1;
>> +
>> + offset = ETH_HLEN + sizeof(ip6h) + sizeof(udph);
>> + if (skb->len < offset + 16)
>> + return -1;
>> +
>> + /* let's make sure that 16 bytes of payload are in the linear
>> part of skb */
>> + bpf_skb_pull_data(skb, offset + 16);
>> + bpf_dynptr_from_skb(skb, 0, psrc);
>> + bpf_dynptr_adjust(psrc, offset, offset + 16);
>> +
>> + return 0;
>> +}
>> +
>> +SEC("syscall")
>> +int skb_crypto_setup(struct crypto_syscall_args *ctx)
>> +{
>> + struct bpf_crypto_params params = {
>> + .type = "skcipher",
>> + .key_len = ctx->key_len,
>> + .authsize = ctx->authsize,
>> + };
>> + struct bpf_crypto_ctx *cctx;
>> + int err = 0;
>> +
>> + status = 0;
>> +
>> + if (ctx->key_len > 255) {
>
> key_len == 256 won't work?
Yeah, you are right, I'll adjust the check
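e.g. (sketch):

	if (ctx->key_len > sizeof(key)) {
		status = -EINVAL;
		return 0;
	}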
>
>> + status = -EINVAL;
>> + return 0;
>> + }
>> +
>> + __builtin_memcpy(¶ms.algo, algo, sizeof(algo));
>> + __builtin_memcpy(¶ms.key, key, sizeof(key));
>
> It would be useful to comment here on what problem was hit such that the key
> cannot be passed in "struct crypto_syscall_args" and has to go through the
> global variable instead.
Ok, I'll add some details.
> Instead of "key_len" in crypto_syscall_args and the actual "key" in a
> global, how about skipping "struct crypto_syscall_args" altogether and
> putting key_len (and authsize) in globals?
>
> Put UDP_TEST_PORT in a global variable for config/filter usage as well, and
> "crypto_share.h" can go away.
>
Yeah, I can do it.
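Roughly something like this (untested sketch; key and algo are already globals,
names may change):

	/* BPF side: plain globals instead of struct crypto_syscall_args */
	__u32 key_len, authsize;
	__u16 udp_test_port;

	SEC("syscall")
	int skb_crypto_setup(void *ctx)
	{
		struct bpf_crypto_params params = {
			.type = "skcipher",
			.key_len = key_len,
			.authsize = authsize,
		};
		/* ... */
	}

	/* user space: configure through the skeleton before test_run */
	skel->bss->key_len = 16;
	skel->bss->udp_test_port = UDP_TEST_PORT;

The port check in skb_dynptr_validate() would then compare against
bpf_htons(udp_test_port) instead of the shared define.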
>> + cctx = bpf_crypto_ctx_create(¶ms, &err);
>> +
>> + if (!cctx) {
>> + status = err;
>> + return 0;
>> + }
>> +
>> + err = crypto_ctx_insert(cctx);
>> + if (err && err != -EEXIST)
>> + status = err;
>> +
>> + return 0;
>> +}
>> +
>> +SEC("tc")
>> +int decrypt_sanity(struct __sk_buff *skb)
>> +{
>> + struct __crypto_ctx_value *v;
>> + struct bpf_crypto_ctx *ctx;
>> + struct bpf_dynptr psrc, pdst, iv;
>> + int err;
>> +
>> + err = skb_dynptr_validate(skb, &psrc);
>> + if (err < 0) {
>> + status = err;
>> + return TC_ACT_SHOT;
>> + }
>> +
>> + v = crypto_ctx_value_lookup();
>> + if (!v) {
>> + status = -ENOENT;
>> + return TC_ACT_SHOT;
>> + }
>> +
>> + ctx = v->ctx;
>> + if (!ctx) {
>> + status = -ENOENT;
>> + return TC_ACT_SHOT;
>> + }
>> +
>> + bpf_dynptr_from_mem(dst, sizeof(dst), 0, &pdst);
>
> dst is now a global, which makes it easier to test the result. A comment
> here should note this point for people referencing this test for production
> use cases and suggest that a percpu map could be used instead.
Ok
> It will be useful to have dynptr working with stack memory in the future.
Another follow-up?
>> + /* iv dynptr has to be initialized with 0 size, but proper memory
>> region
>> + * has to be provided anyway
>> + */
>> + bpf_dynptr_from_mem(dst, 0, 0, &iv);
>> +
>> + status = bpf_crypto_decrypt(ctx, &psrc, &pdst, &iv);
>> +
>> + return TC_ACT_SHOT;
>> +}
>> +
>> +SEC("tc")
>> +int encrypt_sanity(struct __sk_buff *skb)
>> +{
>> + struct __crypto_ctx_value *v;
>> + struct bpf_crypto_ctx *ctx;
>> + struct bpf_dynptr psrc, pdst, iv;
>> + int err;
>> +
>> + status = 0;
>> +
>> + err = skb_dynptr_validate(skb, &psrc);
>> + if (err < 0) {
>> + status = err;
>> + return TC_ACT_SHOT;
>> + }
>> +
>> + v = crypto_ctx_value_lookup();
>> + if (!v) {
>> + status = -ENOENT;
>> + return TC_ACT_SHOT;
>> + }
>> +
>> + ctx = v->ctx;
>> + if (!ctx) {
>> + status = -ENOENT;
>> + return TC_ACT_SHOT;
>> + }
>> +
>> + bpf_dynptr_from_mem(dst, sizeof(dst), 0, &pdst);
>> + /* iv dynptr has to be initialized with 0 size, but proper memory
>> region
>> + * has to be provided anyway
>> + */
>> + bpf_dynptr_from_mem(dst, 0, 0, &iv);
>> +
>> + status = bpf_crypto_encrypt(ctx, &psrc, &pdst, &iv);
>> +
>> + return TC_ACT_SHOT;
>> +}
>> +
>> +char __license[] SEC("license") = "GPL";
>> diff --git a/tools/testing/selftests/bpf/progs/crypto_share.h
>> b/tools/testing/selftests/bpf/progs/crypto_share.h
>> new file mode 100644
>> index 000000000000..c5a6ef65156d
>> --- /dev/null
>> +++ b/tools/testing/selftests/bpf/progs/crypto_share.h
>> @@ -0,0 +1,10 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/* Copyright (c) 2024 Meta Platforms, Inc. and affiliates. */
>> +
>> +#define UDP_TEST_PORT 7777
>> +
>> +struct crypto_syscall_args {
>> + u32 key_len;
>> + u32 authsize;
>> +};
>> +
>
On 4/19/24 5:24 PM, Vadim Fedorenko wrote:
>>> +/**
>>> + * bpf_crypto_ctx_create() - Create a mutable BPF crypto context.
>>> + *
>>> + * Allocates a crypto context that can be used, acquired, and released by
>>> + * a BPF program. The crypto context returned by this function must either
>>> + * be embedded in a map as a kptr, or freed with bpf_crypto_ctx_release().
>>> + * As crypto API functions use GFP_KERNEL allocations, this function can
>>> + * only be used in sleepable BPF programs.
>>> + *
>>> + * bpf_crypto_ctx_create() allocates memory for crypto context.
>>> + * It may return NULL if no memory is available.
>>> + * @params: pointer to struct bpf_crypto_params which contains all the
>>> + * details needed to initialise crypto context.
>>> + * @err: integer to store error code when NULL is returned.
>>> + */
>>> +__bpf_kfunc struct bpf_crypto_ctx *
>>> +bpf_crypto_ctx_create(const struct bpf_crypto_params *params, int *err)
>>
>> Add a "u32 params__sz" arg in case that the params struct will have addition.
>> Take a look at how opts__sz is checked in nf_conntrack_bpf.c.
>>
>
> nf_conntrack uses a hard-coded value, while the xfrm code uses
> sizeof(struct bpf_xfrm_state_opts). Which one is better?
If it is about the enum NF_BPF_CT_OPTS_SZ in nf_conntrack, I don't think it is a
must-have. bpf_core_type_size() would have the same effect of figuring out the
size of a struct in the running kernel.
afaik, sizeof() should do.
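i.e. on the BPF side the call could simply become (sketch, once the extra
params__sz argument is added):

	cctx = bpf_crypto_ctx_create(&params, sizeof(params), &err);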