2013-03-18 14:50:48

by Nicolas Schichan

Subject: [PATCH RFC] Support for JIT in Seccomp BPF filters.

Hi,

This patch series adds support for jitted seccomp BPF filters, with the
required modifications to make it work on the ARM architecture.

- The first patch in the series adds the required boilerplate in the
core kernel seccomp code to invoke the JIT compilation/free code.

- The second patch reworks the ARM BPF JIT code to make the generation
process less dependent on struct sk_filter.

- The last patch actually implements the ARM part in the BPF jit code.

Some benchmarks, on a 1.6GHz 88f6282 CPU:

Each system call is tested in two ways (fast/slow):

- in the fast version, the tested system call is accepted immediately
after checking the architecture (5 BPF instructions).

- in the slow version, the tested system call is accepted only after 85
other syscalls have been checked first (90 instructions, including the
architecture check).

The tested syscall is invoked in a loop 1000000 times; the reported
time is the time spent in the loop, in seconds.
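
For illustration, here is a minimal sketch of the kind of userspace
benchmark described above (this is an assumption about the test program,
not the code actually used; the filter below is one way to express the
5-instruction "fast" case, and older glibc may need -lrt for
clock_gettime()):

#include <stdio.h>
#include <stddef.h>
#include <time.h>
#include <unistd.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <linux/seccomp.h>
#include <linux/filter.h>
#include <linux/audit.h>

int main(void)
{
	/* "fast" filter: check the architecture, then allow everything. */
	struct sock_filter insns[] = {
		BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
			 offsetof(struct seccomp_data, arch)),
		BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, AUDIT_ARCH_ARM, 1, 0),
		BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL),
		BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
			 offsetof(struct seccomp_data, nr)),
		BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
	};
	struct sock_fprog prog = {
		.len = sizeof(insns) / sizeof(insns[0]),
		.filter = insns,
	};
	struct timespec t0, t1;
	int i;

	prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
	if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog))
		return 1;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < 1000000; i++)
		syscall(SYS_getpid);	/* force a real syscall each iteration */
	clock_gettime(CLOCK_MONOTONIC, &t1);

	printf("%.3f\n", (t1.tv_sec - t0.tv_sec) +
			 (t1.tv_nsec - t0.tv_nsec) / 1e9);
	return 0;
}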

Without Seccomp JIT:

Syscall          Time-Fast   Time-Slow
---------------  ----------  ----------
gettimeofday     0.389       1.633
getpid           0.406       1.688
getresuid        1.003       2.266
getcwd           1.342       2.128

With Seccomp JIT:

Syscall          Time-Fast   Time-Slow
---------------  ----------  ----------
gettimeofday     0.348       0.428
getpid           0.365       0.480
getresuid        0.981       1.060
getcwd           1.237       1.294

For reference, the same code without any seccomp filter:

Syscall          Time
---------------  -----
gettimeofday     0.119
getpid           0.137
getresuid        0.747
getcwd           1.021

The activation of the BPF JIT for seccomp is still controlled with the
/proc/sys/net/core/bpf_jit_enable sysctl knob.

Those changes are based on the latest rmk-for-next branch.

V2 Changes:
- Document the @bpf_func field in struct seccomp_filter as recommended
by Kees Cook.
- Invoke seccomp_bpf_load directly from generated code without going
via a wrapper.

Regards,


2013-03-18 14:51:08

by Nicolas Schichan

Subject: [PATCH V2 1/3] seccomp: add generic code for jitted seccomp filters.

Architectures must select HAVE_SECCOMP_FILTER_JIT and implement
seccomp_jit_compile() and seccomp_jit_free() if they intend to support
jitted seccomp filters.

struct seccomp_filter has been moved to <linux/seccomp.h> to make its
content available to the jit compilation code.

In a way similar to the net BPF, the jit compilation code is expected
to update the struct seccomp_filter.bpf_func pointer to the generated
code.

Signed-off-by: Nicolas Schichan <[email protected]>
---
arch/Kconfig | 14 ++++++++++++++
include/linux/seccomp.h | 41 +++++++++++++++++++++++++++++++++++++++++
kernel/seccomp.c | 34 +++++-----------------------------
3 files changed, 60 insertions(+), 29 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 5a1779c..1284367 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -339,6 +339,10 @@ config HAVE_ARCH_SECCOMP_FILTER
- secure_computing return value is checked and a return value of -1
results in the system call being skipped immediately.

+# Used by archs to tell that they support SECCOMP_FILTER_JIT
+config HAVE_SECCOMP_FILTER_JIT
+ bool
+
config SECCOMP_FILTER
def_bool y
depends on HAVE_ARCH_SECCOMP_FILTER && SECCOMP && NET
@@ -349,6 +353,16 @@ config SECCOMP_FILTER

See Documentation/prctl/seccomp_filter.txt for details.

+config SECCOMP_FILTER_JIT
+ bool "enable Seccomp filter Just In Time compiler"
+ depends on HAVE_SECCOMP_FILTER_JIT && BPF_JIT && SECCOMP_FILTER
+ help
+ Seccomp syscall filtering capabilities are normally handled
+ by an interpreter. This option allows the kernel to generate
+ native code when a filter is loaded into memory. This should
+ speed up syscall filtering. Note: the admin still has to enable
+ this feature by writing to /proc/sys/net/core/bpf_jit_enable.
+
config HAVE_CONTEXT_TRACKING
bool
help
diff --git a/include/linux/seccomp.h b/include/linux/seccomp.h
index 6f19cfd..a216ab7 100644
--- a/include/linux/seccomp.h
+++ b/include/linux/seccomp.h
@@ -6,6 +6,7 @@
#ifdef CONFIG_SECCOMP

#include <linux/thread_info.h>
+#include <linux/filter.h>
#include <asm/seccomp.h>

struct seccomp_filter;
@@ -47,6 +48,46 @@ static inline int seccomp_mode(struct seccomp *s)
return s->mode;
}

+/**
+ * struct seccomp_filter - container for seccomp BPF programs
+ *
+ * @usage: reference count to manage the object lifetime.
+ * get/put helpers should be used when accessing an instance
+ * outside of a lifetime-guarded section. In general, this
+ * is only needed for handling filters shared across tasks.
+ * @prev: points to a previously installed, or inherited, filter
+ * @len: the number of instructions in the program
+ * @bpf_func: points to either sk_run_filter or the code generated
+ * by the BPF JIT.
+ * @insns: the BPF program instructions to evaluate
+ *
+ * seccomp_filter objects are organized in a tree linked via the @prev
+ * pointer. For any task, it appears to be a singly-linked list starting
+ * with current->seccomp.filter, the most recently attached or inherited filter.
+ * However, multiple filters may share a @prev node, by way of fork(), which
+ * results in a unidirectional tree existing in memory. This is similar to
+ * how namespaces work.
+ *
+ * seccomp_filter objects should never be modified after being attached
+ * to a task_struct (other than @usage).
+ */
+struct seccomp_filter {
+ atomic_t usage;
+ struct seccomp_filter *prev;
+ unsigned short len; /* Instruction count */
+ unsigned int (*bpf_func)(const struct sk_buff *skb,
+ const struct sock_filter *filter);
+ struct sock_filter insns[];
+};
+
+#ifdef CONFIG_SECCOMP_FILTER_JIT
+extern void seccomp_jit_compile(struct seccomp_filter *fp);
+extern void seccomp_jit_free(struct seccomp_filter *fp);
+#else
+static inline void seccomp_jit_compile(struct seccomp_filter *fp) { }
+static inline void seccomp_jit_free(struct seccomp_filter *fp) { }
+#endif
+
#else /* CONFIG_SECCOMP */

#include <linux/errno.h>
diff --git a/kernel/seccomp.c b/kernel/seccomp.c
index b7a1004..a1aadaa 100644
--- a/kernel/seccomp.c
+++ b/kernel/seccomp.c
@@ -30,34 +30,6 @@
#include <linux/tracehook.h>
#include <linux/uaccess.h>

-/**
- * struct seccomp_filter - container for seccomp BPF programs
- *
- * @usage: reference count to manage the object lifetime.
- * get/put helpers should be used when accessing an instance
- * outside of a lifetime-guarded section. In general, this
- * is only needed for handling filters shared across tasks.
- * @prev: points to a previously installed, or inherited, filter
- * @len: the number of instructions in the program
- * @insns: the BPF program instructions to evaluate
- *
- * seccomp_filter objects are organized in a tree linked via the @prev
- * pointer. For any task, it appears to be a singly-linked list starting
- * with current->seccomp.filter, the most recently attached or inherited filter.
- * However, multiple filters may share a @prev node, by way of fork(), which
- * results in a unidirectional tree existing in memory. This is similar to
- * how namespaces work.
- *
- * seccomp_filter objects should never be modified after being attached
- * to a task_struct (other than @usage).
- */
-struct seccomp_filter {
- atomic_t usage;
- struct seccomp_filter *prev;
- unsigned short len; /* Instruction count */
- struct sock_filter insns[];
-};
-
/* Limit any path through the tree to 256KB worth of instructions. */
#define MAX_INSNS_PER_PATH ((1 << 18) / sizeof(struct sock_filter))

@@ -213,7 +185,7 @@ static u32 seccomp_run_filters(int syscall)
* value always takes priority (ignoring the DATA).
*/
for (f = current->seccomp.filter; f; f = f->prev) {
- u32 cur_ret = sk_run_filter(NULL, f->insns);
+ u32 cur_ret = f->bpf_func(NULL, f->insns);
if ((cur_ret & SECCOMP_RET_ACTION) < (ret & SECCOMP_RET_ACTION))
ret = cur_ret;
}
@@ -275,6 +247,9 @@ static long seccomp_attach_filter(struct sock_fprog *fprog)
if (ret)
goto fail;

+ filter->bpf_func = sk_run_filter;
+ seccomp_jit_compile(filter);
+
/*
* If there is an existing filter, make it the prev and don't drop its
* task reference.
@@ -332,6 +307,7 @@ void put_seccomp_filter(struct task_struct *tsk)
while (orig && atomic_dec_and_test(&orig->usage)) {
struct seccomp_filter *freeme = orig;
orig = orig->prev;
+ seccomp_jit_free(freeme);
kfree(freeme);
}
}
--
1.7.10.4

2013-03-18 14:51:20

by Nicolas Schichan

Subject: [PATCH V2 2/3] ARM: net: bpf_jit: make code generation less dependent on struct sk_filter.

This is in preparation of bpf_jit support for seccomp filters.

Signed-off-by: Nicolas Schichan <[email protected]>
---
arch/arm/net/bpf_jit_32.c | 46 ++++++++++++++++++++++++++++-----------------
1 file changed, 29 insertions(+), 17 deletions(-)

diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
index 6828ef6..0ea29e0 100644
--- a/arch/arm/net/bpf_jit_32.c
+++ b/arch/arm/net/bpf_jit_32.c
@@ -55,7 +55,8 @@
#define FLAG_NEED_X_RESET (1 << 0)

struct jit_ctx {
- const struct sk_filter *skf;
+ unsigned short prog_len;
+ struct sock_filter *prog_insns;
unsigned idx;
unsigned prologue_bytes;
int ret0_fp_idx;
@@ -131,8 +132,8 @@ static u16 saved_regs(struct jit_ctx *ctx)
{
u16 ret = 0;

- if ((ctx->skf->len > 1) ||
- (ctx->skf->insns[0].code == BPF_S_RET_A))
+ if ((ctx->prog_len > 1) ||
+ (ctx->prog_insns[0].code == BPF_S_RET_A))
ret |= 1 << r_A;

#ifdef CONFIG_FRAME_POINTER
@@ -181,7 +182,7 @@ static inline bool is_load_to_a(u16 inst)
static void build_prologue(struct jit_ctx *ctx)
{
u16 reg_set = saved_regs(ctx);
- u16 first_inst = ctx->skf->insns[0].code;
+ u16 first_inst = ctx->prog_insns[0].code;
u16 off;

#ifdef CONFIG_FRAME_POINTER
@@ -279,7 +280,7 @@ static u16 imm_offset(u32 k, struct jit_ctx *ctx)
ctx->imms[i] = k;

/* constants go just after the epilogue */
- offset = ctx->offsets[ctx->skf->len];
+ offset = ctx->offsets[ctx->prog_len];
offset += ctx->prologue_bytes;
offset += ctx->epilogue_bytes;
offset += i * 4;
@@ -419,7 +420,7 @@ static inline void emit_err_ret(u8 cond, struct jit_ctx *ctx)
emit(ARM_MOV_R(ARM_R0, ARM_R0), ctx);
} else {
_emit(cond, ARM_MOV_I(ARM_R0, 0), ctx);
- _emit(cond, ARM_B(b_imm(ctx->skf->len, ctx)), ctx);
+ _emit(cond, ARM_B(b_imm(ctx->prog_len, ctx)), ctx);
}
}

@@ -469,14 +470,13 @@ static inline void update_on_xread(struct jit_ctx *ctx)
static int build_body(struct jit_ctx *ctx)
{
void *load_func[] = {jit_get_skb_b, jit_get_skb_h, jit_get_skb_w};
- const struct sk_filter *prog = ctx->skf;
const struct sock_filter *inst;
unsigned i, load_order, off, condt;
int imm12;
u32 k;

- for (i = 0; i < prog->len; i++) {
- inst = &(prog->insns[i]);
+ for (i = 0; i < ctx->prog_len; i++) {
+ inst = &(ctx->prog_insns[i]);
/* K as an immediate value operand */
k = inst->k;

@@ -769,8 +769,8 @@ cmp_x:
ctx->ret0_fp_idx = i;
emit_mov_i(ARM_R0, k, ctx);
b_epilogue:
- if (i != ctx->skf->len - 1)
- emit(ARM_B(b_imm(prog->len, ctx)), ctx);
+ if (i != ctx->prog_len - 1)
+ emit(ARM_B(b_imm(ctx->prog_len, ctx)), ctx);
break;
case BPF_S_MISC_TAX:
/* X = A */
@@ -858,7 +858,7 @@ b_epilogue:
}


-void bpf_jit_compile(struct sk_filter *fp)
+static void __bpf_jit_compile(struct jit_ctx *out_ctx)
{
struct jit_ctx ctx;
unsigned tmp_idx;
@@ -867,11 +867,10 @@ void bpf_jit_compile(struct sk_filter *fp)
if (!bpf_jit_enable)
return;

- memset(&ctx, 0, sizeof(ctx));
- ctx.skf = fp;
+ ctx = *out_ctx;
ctx.ret0_fp_idx = -1;

- ctx.offsets = kzalloc(4 * (ctx.skf->len + 1), GFP_KERNEL);
+ ctx.offsets = kzalloc(4 * (ctx.prog_len + 1), GFP_KERNEL);
if (ctx.offsets == NULL)
return;

@@ -921,13 +920,26 @@ void bpf_jit_compile(struct sk_filter *fp)
print_hex_dump(KERN_INFO, "BPF JIT code: ",
DUMP_PREFIX_ADDRESS, 16, 4, ctx.target,
alloc_size, false);
-
- fp->bpf_func = (void *)ctx.target;
out:
kfree(ctx.offsets);
+
+ *out_ctx = ctx;
return;
}

+void bpf_jit_compile(struct sk_filter *fp)
+{
+ struct jit_ctx ctx;
+
+ memset(&ctx, 0, sizeof(ctx));
+ ctx.prog_len = fp->len;
+ ctx.prog_insns = fp->insns;
+
+ __bpf_jit_compile(&ctx);
+ if (ctx.target)
+ fp->bpf_func = (void*)ctx.target;
+}
+
static void bpf_jit_free_worker(struct work_struct *work)
{
module_free(NULL, work);
--
1.7.10.4

2013-03-18 14:51:30

by Nicolas Schichan

Subject: [PATCH V2 3/3] ARM: net: bpf_jit: add support for jitted seccomp filters.

This patch selects HAVE_SECCOMP_FILTER_JIT in the ARM Kconfig file,
implements seccomp_jit_compile() and seccomp_jit_free(), and adds
support for the BPF_S_ANC_SECCOMP_LD_W instruction.

BPF_S_ANC_SECCOMP_LD_W instructions trigger the generation of a call
to the C function seccomp_bpf_load().

Signed-off-by: Nicolas Schichan <[email protected]>
---
arch/arm/Kconfig | 1 +
arch/arm/net/bpf_jit_32.c | 36 ++++++++++++++++++++++++++++++++++++
2 files changed, 37 insertions(+)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index dedf02b..b3dce17 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -24,6 +24,7 @@ config ARM
select HAVE_ARCH_SECCOMP_FILTER
select HAVE_ARCH_TRACEHOOK
select HAVE_BPF_JIT
+ select HAVE_SECCOMP_FILTER_JIT
select HAVE_C_RECORDMCOUNT
select HAVE_DEBUG_KMEMLEAK
select HAVE_DMA_API_DEBUG
diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
index 0ea29e0..86e3b78 100644
--- a/arch/arm/net/bpf_jit_32.c
+++ b/arch/arm/net/bpf_jit_32.c
@@ -548,6 +548,15 @@ load_common:
emit_err_ret(ARM_COND_NE, ctx);
emit(ARM_MOV_R(r_A, ARM_R0), ctx);
break;
+#ifdef CONFIG_SECCOMP_FILTER_JIT
+ case BPF_S_ANC_SECCOMP_LD_W:
+ ctx->seen |= SEEN_CALL;
+ emit_mov_i(ARM_R3, (u32)seccomp_bpf_load, ctx);
+ emit_mov_i(ARM_R0, k, ctx);
+ emit_blx_r(ARM_R3, ctx);
+ emit(ARM_MOV_R(r_A, ARM_R0), ctx);
+ break;
+#endif
case BPF_S_LD_W_IND:
load_order = 2;
goto load_ind;
@@ -956,3 +965,30 @@ void bpf_jit_free(struct sk_filter *fp)
schedule_work(work);
}
}
+
+#ifdef CONFIG_SECCOMP_FILTER_JIT
+void seccomp_jit_compile(struct seccomp_filter *fp)
+{
+ struct jit_ctx ctx;
+
+ memset(&ctx, 0, sizeof(ctx));
+ ctx.prog_len = fp->len;
+ ctx.prog_insns = fp->insns;
+
+ __bpf_jit_compile(&ctx);
+ if (ctx.target)
+ fp->bpf_func = (void*)ctx.target;
+}
+
+void seccomp_jit_free(struct seccomp_filter *fp)
+{
+ struct work_struct *work;
+
+ if (fp->bpf_func != sk_run_filter) {
+ work = (struct work_struct *)fp->bpf_func;
+
+ INIT_WORK(work, bpf_jit_free_worker);
+ schedule_work(work);
+ }
+}
+#endif
--
1.7.10.4

2013-04-01 21:53:12

by Kees Cook

Subject: Re: [PATCH V2 1/3] seccomp: add generic code for jitted seccomp filters.

On Mon, Mar 18, 2013 at 7:50 AM, Nicolas Schichan <[email protected]> wrote:
> Architectures must select HAVE_SECCOMP_FILTER_JIT and implement
> seccomp_jit_compile() and seccomp_jit_free() if they intend to support
> jitted seccomp filters.
>
> struct seccomp_filter has been moved to <linux/seccomp.h> to make its
> content available to the jit compilation code.
>
> In a way similar to the net BPF, the jit compilation code is expected
> to update the struct seccomp_filter.bpf_func pointer to the generated
> code.
>
> Signed-off-by: Nicolas Schichan <[email protected]>

Acked-by: Kees Cook <[email protected]>

I'd love to see this for x86 too. I suspect it'd be a small change
after this series lands.

Thanks,

-Kees

--
Kees Cook
Chrome OS Security

2013-04-04 19:58:48

by Will Drewry

Subject: Re: [PATCH V2 1/3] seccomp: add generic code for jitted seccomp filters.

On Mon, Apr 1, 2013 at 4:53 PM, Kees Cook <[email protected]> wrote:
> On Mon, Mar 18, 2013 at 7:50 AM, Nicolas Schichan <[email protected]> wrote:
>> Architectures must select HAVE_SECCOMP_FILTER_JIT and implement
>> seccomp_jit_compile() and seccomp_jit_free() if they intend to support
>> jitted seccomp filters.
>>
>> struct seccomp_filter has been moved to <linux/seccomp.h> to make its
>> content available to the jit compilation code.
>>
>> In a way similar to the net BPF, the jit compilation code is expected
>> to update the struct seccomp_filter.bpf_func pointer to the generated
>> code.
>>
>> Signed-off-by: Nicolas Schichan <[email protected]>
>
> Acked-by: Kees Cook <[email protected]>
>
> I'd love to see this for x86 too. I suspect it'd be a small change
> after this series lands.

Agreed - and thanks for working through the necessary changes!

Acked-By: Will Drewry <[email protected]>
(for the series)

2013-04-05 12:01:39

by Mircea Gherzan

Subject: Re: [PATCH V2 2/3] ARM: net: bpf_jit: make code generation less dependent on struct sk_filter.

2013/3/18 Nicolas Schichan <[email protected]>:
> This is in preparation of bpf_jit support for seccomp filters.
>
> Signed-off-by: Nicolas Schichan <[email protected]>
> ---
> arch/arm/net/bpf_jit_32.c | 46 ++++++++++++++++++++++++++++-----------------
> 1 file changed, 29 insertions(+), 17 deletions(-)
>
> diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
> index 6828ef6..0ea29e0 100644
> --- a/arch/arm/net/bpf_jit_32.c
> +++ b/arch/arm/net/bpf_jit_32.c
> @@ -55,7 +55,8 @@
> #define FLAG_NEED_X_RESET (1 << 0)
>
> struct jit_ctx {
> - const struct sk_filter *skf;
> + unsigned short prog_len;
> + struct sock_filter *prog_insns;
> unsigned idx;
> unsigned prologue_bytes;
> int ret0_fp_idx;
> @@ -131,8 +132,8 @@ static u16 saved_regs(struct jit_ctx *ctx)
> {
> u16 ret = 0;
>
> - if ((ctx->skf->len > 1) ||
> - (ctx->skf->insns[0].code == BPF_S_RET_A))
> + if ((ctx->prog_len > 1) ||
> + (ctx->prog_insns[0].code == BPF_S_RET_A))
> ret |= 1 << r_A;
>
> #ifdef CONFIG_FRAME_POINTER
> @@ -181,7 +182,7 @@ static inline bool is_load_to_a(u16 inst)
> static void build_prologue(struct jit_ctx *ctx)
> {
> u16 reg_set = saved_regs(ctx);
> - u16 first_inst = ctx->skf->insns[0].code;
> + u16 first_inst = ctx->prog_insns[0].code;
> u16 off;
>
> #ifdef CONFIG_FRAME_POINTER
> @@ -279,7 +280,7 @@ static u16 imm_offset(u32 k, struct jit_ctx *ctx)
> ctx->imms[i] = k;
>
> /* constants go just after the epilogue */
> - offset = ctx->offsets[ctx->skf->len];
> + offset = ctx->offsets[ctx->prog_len];
> offset += ctx->prologue_bytes;
> offset += ctx->epilogue_bytes;
> offset += i * 4;
> @@ -419,7 +420,7 @@ static inline void emit_err_ret(u8 cond, struct jit_ctx *ctx)
> emit(ARM_MOV_R(ARM_R0, ARM_R0), ctx);
> } else {
> _emit(cond, ARM_MOV_I(ARM_R0, 0), ctx);
> - _emit(cond, ARM_B(b_imm(ctx->skf->len, ctx)), ctx);
> + _emit(cond, ARM_B(b_imm(ctx->prog_len, ctx)), ctx);
> }
> }
>
> @@ -469,14 +470,13 @@ static inline void update_on_xread(struct jit_ctx *ctx)
> static int build_body(struct jit_ctx *ctx)
> {
> void *load_func[] = {jit_get_skb_b, jit_get_skb_h, jit_get_skb_w};
> - const struct sk_filter *prog = ctx->skf;
> const struct sock_filter *inst;
> unsigned i, load_order, off, condt;
> int imm12;
> u32 k;
>
> - for (i = 0; i < prog->len; i++) {
> - inst = &(prog->insns[i]);
> + for (i = 0; i < ctx->prog_len; i++) {
> + inst = &(ctx->prog_insns[i]);
> /* K as an immediate value operand */
> k = inst->k;
>
> @@ -769,8 +769,8 @@ cmp_x:
> ctx->ret0_fp_idx = i;
> emit_mov_i(ARM_R0, k, ctx);
> b_epilogue:
> - if (i != ctx->skf->len - 1)
> - emit(ARM_B(b_imm(prog->len, ctx)), ctx);
> + if (i != ctx->prog_len - 1)
> + emit(ARM_B(b_imm(ctx->prog_len, ctx)), ctx);
> break;
> case BPF_S_MISC_TAX:
> /* X = A */
> @@ -858,7 +858,7 @@ b_epilogue:
> }
>
>
> -void bpf_jit_compile(struct sk_filter *fp)
> +static void __bpf_jit_compile(struct jit_ctx *out_ctx)
> {
> struct jit_ctx ctx;
> unsigned tmp_idx;
> @@ -867,11 +867,10 @@ void bpf_jit_compile(struct sk_filter *fp)
> if (!bpf_jit_enable)
> return;
>
> - memset(&ctx, 0, sizeof(ctx));
> - ctx.skf = fp;
> + ctx = *out_ctx;
> ctx.ret0_fp_idx = -1;
>
> - ctx.offsets = kzalloc(4 * (ctx.skf->len + 1), GFP_KERNEL);
> + ctx.offsets = kzalloc(4 * (ctx.prog_len + 1), GFP_KERNEL);
> if (ctx.offsets == NULL)
> return;
>
> @@ -921,13 +920,26 @@ void bpf_jit_compile(struct sk_filter *fp)
> print_hex_dump(KERN_INFO, "BPF JIT code: ",
> DUMP_PREFIX_ADDRESS, 16, 4, ctx.target,
> alloc_size, false);
> -
> - fp->bpf_func = (void *)ctx.target;
> out:
> kfree(ctx.offsets);
> +
> + *out_ctx = ctx;
> return;
> }
>
> +void bpf_jit_compile(struct sk_filter *fp)
> +{
> + struct jit_ctx ctx;
> +
> + memset(&ctx, 0, sizeof(ctx));
> + ctx.prog_len = fp->len;
> + ctx.prog_insns = fp->insns;
> +
> + __bpf_jit_compile(&ctx);
> + if (ctx.target)
> + fp->bpf_func = (void*)ctx.target;
> +}
> +
> static void bpf_jit_free_worker(struct work_struct *work)
> {
> module_free(NULL, work);
> --
> 1.7.10.4
>

Acked-by: Mircea Gherzan <[email protected]>

2013-04-05 12:01:59

by Mircea Gherzan

Subject: Re: [PATCH V2 3/3] ARM: net: bpf_jit: add support for jitted seccomp filters.

2013/3/18 Nicolas Schichan <[email protected]>:
> This patch selects HAVE_SECCOMP_FILTER_JIT in the ARM Kconfig file,
> implements seccomp_jit_compile() and seccomp_jit_free(), and adds
> support for BPF_S_ANC_SECCOMP_LD_W instruction.
>
> BPF_S_ANC_SECCOMP_LD_W instructions trigger the generation of a call
> to C function seccomp_bpf_load().
>
> Signed-off-by: Nicolas Schichan <[email protected]>
> ---
> arch/arm/Kconfig | 1 +
> arch/arm/net/bpf_jit_32.c | 36 ++++++++++++++++++++++++++++++++++++
> 2 files changed, 37 insertions(+)
>
> diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
> index dedf02b..b3dce17 100644
> --- a/arch/arm/Kconfig
> +++ b/arch/arm/Kconfig
> @@ -24,6 +24,7 @@ config ARM
> select HAVE_ARCH_SECCOMP_FILTER
> select HAVE_ARCH_TRACEHOOK
> select HAVE_BPF_JIT
> + select HAVE_SECCOMP_FILTER_JIT
> select HAVE_C_RECORDMCOUNT
> select HAVE_DEBUG_KMEMLEAK
> select HAVE_DMA_API_DEBUG
> diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
> index 0ea29e0..86e3b78 100644
> --- a/arch/arm/net/bpf_jit_32.c
> +++ b/arch/arm/net/bpf_jit_32.c
> @@ -548,6 +548,15 @@ load_common:
> emit_err_ret(ARM_COND_NE, ctx);
> emit(ARM_MOV_R(r_A, ARM_R0), ctx);
> break;
> +#ifdef CONFIG_SECCOMP_FILTER_JIT
> + case BPF_S_ANC_SECCOMP_LD_W:
> + ctx->seen |= SEEN_CALL;
> + emit_mov_i(ARM_R3, (u32)seccomp_bpf_load, ctx);
> + emit_mov_i(ARM_R0, k, ctx);
> + emit_blx_r(ARM_R3, ctx);
> + emit(ARM_MOV_R(r_A, ARM_R0), ctx);
> + break;
> +#endif
> case BPF_S_LD_W_IND:
> load_order = 2;
> goto load_ind;
> @@ -956,3 +965,30 @@ void bpf_jit_free(struct sk_filter *fp)
> schedule_work(work);
> }
> }
> +
> +#ifdef CONFIG_SECCOMP_FILTER_JIT
> +void seccomp_jit_compile(struct seccomp_filter *fp)
> +{
> + struct jit_ctx ctx;
> +
> + memset(&ctx, 0, sizeof(ctx));
> + ctx.prog_len = fp->len;
> + ctx.prog_insns = fp->insns;
> +
> + __bpf_jit_compile(&ctx);
> + if (ctx.target)
> + fp->bpf_func = (void*)ctx.target;
> +}
> +
> +void seccomp_jit_free(struct seccomp_filter *fp)
> +{
> + struct work_struct *work;
> +
> + if (fp->bpf_func != sk_run_filter) {
> + work = (struct work_struct *)fp->bpf_func;
> +
> + INIT_WORK(work, bpf_jit_free_worker);
> + schedule_work(work);
> + }
> +}
> +#endif
> --
> 1.7.10.4
>

Acked-by: Mircea Gherzan <[email protected]>

2013-04-17 21:56:33

by Andrew Morton

Subject: Re: [PATCH V2 1/3] seccomp: add generic code for jitted seccomp filters.

On Mon, 18 Mar 2013 15:50:30 +0100 Nicolas Schichan <[email protected]> wrote:

> Architectures must select HAVE_SECCOMP_FILTER_JIT and implement
> seccomp_jit_compile() and seccomp_jit_free() if they intend to support
> jitted seccomp filters.
>
> struct seccomp_filter has been moved to <linux/seccomp.h> to make its
> content available to the jit compilation code.
>
> In a way similar to the net BPF, the jit compilation code is expected
> to update the struct seccomp_filter.bpf_func pointer to the generated
> code.
>

This patch is killing me.

> --- a/include/linux/seccomp.h
> +++ b/include/linux/seccomp.h
> @@ -6,6 +6,7 @@
> #ifdef CONFIG_SECCOMP
>
> #include <linux/thread_info.h>
> +#include <linux/filter.h>
> #include <asm/seccomp.h>

In file included from include/linux/compat.h:18,
from include/linux/filter.h:9,
from include/linux/seccomp.h:9,
from include/linux/sched.h:39,
from arch/x86/kernel/asm-offsets.c:9:
/usr/src/25/arch/x86/include/asm/compat.h: In function 'arch_compat_alloc_user_space':
/usr/src/25/arch/x86/include/asm/compat.h:301: error: dereferencing pointer to incomplete type

Problem is, compat.h's arch_compat_alloc_user_space() needs sched.h for
task_struct but as you can see from the above include tree, sched.h
includes seccomp.h and everything falls over. The preprocessed code
contains the definition of arch_compat_alloc_user_space() *before* the
definition of task_struct.

This is a basic x86_64 "make clean; make allmodconfig; make".

I had a few slashes at fixing it but got all frustrated.

2013-04-22 12:38:52

by Nicolas Schichan

Subject: Re: [PATCH V2 1/3] seccomp: add generic code for jitted seccomp filters.

On 04/17/2013 11:56 PM, Andrew Morton wrote:
> This patch is killing me.
>
>> --- a/include/linux/seccomp.h
>> +++ b/include/linux/seccomp.h
>> @@ -6,6 +6,7 @@
>> #ifdef CONFIG_SECCOMP
>>
>> #include <linux/thread_info.h>
>> +#include <linux/filter.h>
>> #include <asm/seccomp.h>
>
> In file included from include/linux/compat.h:18,
> from include/linux/filter.h:9,
> from include/linux/seccomp.h:9,
> from include/linux/sched.h:39,
> from arch/x86/kernel/asm-offsets.c:9:
> /usr/src/25/arch/x86/include/asm/compat.h: In function 'arch_compat_alloc_user_space':
> /usr/src/25/arch/x86/include/asm/compat.h:301: error: dereferencing pointer to incomplete type
>
> Problem is, compat.h's arch_compat_alloc_user_space() needs sched.h for
> task_struct but as you can see from the above include tree, sched.h
> includes seccomp.h and everything falls over. The preprocessed code
> contains the definition of arch_compat_alloc_user_space() *before* the
> definition of task_struct.
>
> This is a basic x86_64 "make clean; make allmodconfig; make".

Hi,

Would including <uapi/linux/filter.h> instead of <linux/filter.h> in seccomp.h
be an acceptable solution?

I have tried that and (with an additional forward declaration of struct
sk_buff) an x86_64 "make clean; make allmodconfig" run finishes successfully.
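
Roughly, the idea would be for the top of <linux/seccomp.h> to look like the
sketch below (an illustration of the proposed fix, not the actual V3 patch):

#ifdef CONFIG_SECCOMP

#include <linux/thread_info.h>
#include <uapi/linux/filter.h>	/* struct sock_filter, without pulling in compat.h */
#include <asm/seccomp.h>

struct seccomp_filter;
struct sk_buff;			/* forward declaration, needed by the bpf_func pointer */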

If that's ok with you, I can resend the series with that fix.

Regards,

--
Nicolas Schichan
Freebox SAS

2013-04-23 23:43:07

by Andrew Morton

Subject: Re: [PATCH V2 1/3] seccomp: add generic code for jitted seccomp filters.

On Mon, 22 Apr 2013 14:31:20 +0200 Nicolas Schichan <[email protected]> wrote:

> On 04/17/2013 11:56 PM, Andrew Morton wrote:
> > This patch is killing me.
> >
> >> --- a/include/linux/seccomp.h
> >> +++ b/include/linux/seccomp.h
> >> @@ -6,6 +6,7 @@
> >> #ifdef CONFIG_SECCOMP
> >>
> >> #include <linux/thread_info.h>
> >> +#include <linux/filter.h>
> >> #include <asm/seccomp.h>
> >
> > In file included from include/linux/compat.h:18,
> > from include/linux/filter.h:9,
> > from include/linux/seccomp.h:9,
> > from include/linux/sched.h:39,
> > from arch/x86/kernel/asm-offsets.c:9:
> > /usr/src/25/arch/x86/include/asm/compat.h: In function 'arch_compat_alloc_user_space':
> > /usr/src/25/arch/x86/include/asm/compat.h:301: error: dereferencing pointer to incomplete type
> >
> > Problem is, compat.h's arch_compat_alloc_user_space() needs sched.h for
> > task_struct but as you can see from the above include tree, sched.h
> > includes seccomp.h and everything falls over. The preprocessed code
> > contains the definition of arch_compat_alloc_user_space() *before* the
> > definition of task_struct.
> >
> > This is a basic x86_64 "make clean; make allmodconfig; make".
>
> Hi,
>
> Would including <uapi/linux/filter.h> instead of <linux/filter.h> in seccomp.h
> be an acceptable solution ?
>
> I have tried that and (with an additional forward declaration of struct
> sk_buff) an x86_64 "make clean; make allmodconfig" run finishes successfully.
>
> If that's ok with you, I can resend the series with that fix.

It would be better to make the code and include tangle less complex,
rather than more complex.

Did we really need to move the `struct seccomp_filter' definition into
the header file? afaict that wasn't really necessary - we can add a
few helper functions to kernel/seccomp.c and then have the remote code
treat seccomp_filter in an opaque fashion rather than directly poking
at its internals.
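
To make that concrete, such accessors could look roughly like the sketch
below, living in kernel/seccomp.c next to a now-private struct
seccomp_filter definition (seccomp_filter_get_bpf_func() is the name the
follow-up patch later in this thread assumes; the other names are purely
illustrative):

void *seccomp_filter_get_bpf_func(struct seccomp_filter *fp)
{
	return (void *)fp->bpf_func;
}

void seccomp_filter_set_bpf_func(struct seccomp_filter *fp,
				 unsigned int (*func)(const struct sk_buff *,
						      const struct sock_filter *))
{
	fp->bpf_func = func;
}

unsigned short seccomp_filter_get_len(struct seccomp_filter *fp)
{
	return fp->len;
}

struct sock_filter *seccomp_filter_get_insns(struct seccomp_filter *fp)
{
	return fp->insns;
}

With helpers like these, struct seccomp_filter can stay inside
kernel/seccomp.c and the arch JIT code only ever sees an opaque pointer.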


btw, what on earth is going on with seccomp_jit_free()? It does
disturbing undocumented typecasting and it punts the module_free into a
kernel thread for mysterious, undocumented and possibly buggy reasons.

I realize it just copies bpf_jit_free(). The same observations apply there.

2013-04-24 15:52:38

by Nicolas Schichan

Subject: Re: [PATCH V2 1/3] seccomp: add generic code for jitted seccomp filters.

On 04/24/2013 01:43 AM, Andrew Morton wrote:
> On Mon, 22 Apr 2013 14:31:20 +0200 Nicolas Schichan <[email protected]> wrote:
>> Would including <uapi/linux/filter.h> instead of <linux/filter.h> in seccomp.h
>> be an acceptable solution ?
>>
>> I have tried that and (with an additional forward declaration of struct
>> sk_buff) an x86_64 "make clean; make allmodconfig" run finishes successfully.
>>
>> If that's ok with you, I can resend the series with that fix.
>
> It would be better to make the code and include tangle less complex,
> rather than more complex.
>
> Did we really need to move the `struct seccomp_filter' definition into
> the header file? afaict that wasn't really necessary - we can add a
> few helper functions to kernel/seccomp.c and then have the remote code
> treat seccomp_filter in an opaque fashion rather than directly poking
> at its internals.

Hi,

I will resend a V3 of the patch series with the accessors.

> btw, what on earth is going on with seccomp_jit_free()? It does
> disturbing undocumented typecasting and it punts the module_free into a
> kernel thread for mysterious, undocumented and possibly buggy reasons.
>
> I realize it just copies bpf_jit_free(). The same observations apply there.

The reason for this hack, for both seccomp filters and socket filters, is
that {seccomp,bpf}_jit_free are called from a softirq. module_free() cannot
be called directly from a softirq, as it will in turn call vfree(), which
will BUG_ON() if in_interrupt() is non-zero. module_free() therefore has to
run in process context, which is what the work struct provides.
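
Put differently, the pattern shared by all the bpf_jit_free() implementations
(shown here as a simplified sketch, not the exact kernel code) is to overlay
a work_struct on the JITed image that is about to be freed, so no extra
allocation is needed and module_free() runs later in process context:

static void jit_free_defer(struct work_struct *work)
{
	/* 'work' is also the start of the dead JITed image. */
	module_free(NULL, work);
}

static void jit_free(void *jited_image)
{
	/*
	 * The image is no longer reachable at this point, so its first
	 * bytes can be reused as a work_struct -- this is the casting
	 * referred to above.
	 */
	struct work_struct *work = (struct work_struct *)jited_image;

	INIT_WORK(work, jit_free_defer);
	schedule_work(work);	/* module_free() will run from a worker thread */
}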

Here is the call stack for the socket filter case:

[<c0087bf8>] (vfree+0x28/0x2c) from [<c001fc5c>] (bpf_jit_free+0x10/0x18)
[<c001fc5c>] (bpf_jit_free+0x10/0x18) from [<c0231188>] (sk_filter_release_rcu+0x10/0x1c)
[<c0231188>] (sk_filter_release_rcu+0x10/0x1c) from [<c0060bb4>] (__rcu_process_callbacks+0x98/0xac)
[<c0060bb4>] (__rcu_process_callbacks+0x98/0xac) from [<c0060bd8>] (rcu_process_callbacks+0x10/0x20)
[<c0060bd8>] (rcu_process_callbacks+0x10/0x20) from [<c0029498>] (__do_softirq+0xbc/0x194)
[<c0029498>] (__do_softirq+0xbc/0x194) from [<c00295b0>] (run_ksoftirqd+0x40/0x64)
[<c00295b0>] (run_ksoftirqd+0x40/0x64) from [<c0041954>] (smpboot_thread_fn+0x150/0x15c)
[<c0041954>] (smpboot_thread_fn+0x150/0x15c) from [<c003bb2c>] (kthread+0xa4/0xb0)
[<c003bb2c>] (kthread+0xa4/0xb0) from [<c0013450>] (ret_from_fork+0x14/0x24)

Here is the call stack for the seccomp filter case:

[<c0087c28>] (vfree+0x28/0x2c) from [<c0060834>] (put_seccomp_filter+0x6c/0x84)
[<c0060834>] (put_seccomp_filter+0x6c/0x84) from [<c0020db4>] (free_task+0x30/0x50)
[<c0020db4>] (free_task+0x30/0x50) from [<c0060be4>] (__rcu_process_callbacks+0x98/0xac)
[<c0060be4>] (__rcu_process_callbacks+0x98/0xac) from [<c0060c08>] (rcu_process_callbacks+0x10/0x20)
[<c0060c08>] (rcu_process_callbacks+0x10/0x20) from [<c00294c8>] (__do_softirq+0xbc/0x194)
[<c00294c8>] (__do_softirq+0xbc/0x194) from [<c00295e0>] (run_ksoftirqd+0x40/0x64)
[<c00295e0>] (run_ksoftirqd+0x40/0x64) from [<c0041984>] (smpboot_thread_fn+0x150/0x15c)
[<c0041984>] (smpboot_thread_fn+0x150/0x15c) from [<c003bb5c>] (kthread+0xa4/0xb0)
[<c003bb5c>] (kthread+0xa4/0xb0) from [<c0013450>] (ret_from_fork+0x14/0x24)

Regards,

--
Nicolas Schichan
Freebox SAS

2013-04-24 22:33:26

by Andrew Morton

Subject: Re: [PATCH V2 1/3] seccomp: add generic code for jitted seccomp filters.

On Wed, 24 Apr 2013 17:52:34 +0200 Nicolas Schichan <[email protected]> wrote:

> > btw, what on earth is going on with seccomp_jit_free()? It does
> > disturbing undocumented typecasting and it punts the module_free into a
> > kernel thread for mysterious, undocumented and possibly buggy reasons.
> >
> > I realize it just copies bpf_jit_free(). The same observations apply there.
>
> The reason for this hack for both seccomp filters and socket filters is that
> {seccomp,bpf}_jit_free are called from a softirq. module_free() cannot be
> called directly from softirq, as it will in turn call vfree() which will
> BUG_ON() if in_interrupt() is non zero. So to call module_free(), it is
> therefore required to be in a process context, which is provided by the work
> struct.

Well let's explain this to the next sucker who comes along.

From: Andrew Morton <[email protected]>
Subject: bpf: add comments explaining the schedule_work() operation

Cc: Will Drewry <[email protected]>
Cc: Mircea Gherzan <[email protected]>
Cc: Nicolas Schichan <[email protected]>
Cc: Russell King <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
---

arch/arm/net/bpf_jit_32.c | 8 ++++++++
arch/powerpc/net/bpf_jit_comp.c | 4 ++++
arch/s390/net/bpf_jit_comp.c | 4 ++++
arch/sparc/net/bpf_jit_comp.c | 4 ++++
arch/x86/net/bpf_jit_comp.c | 4 ++++
5 files changed, 24 insertions(+)

diff -puN arch/arm/net/bpf_jit_32.c~a arch/arm/net/bpf_jit_32.c
--- a/arch/arm/net/bpf_jit_32.c~a
+++ a/arch/arm/net/bpf_jit_32.c
@@ -958,6 +958,10 @@ void bpf_jit_free(struct sk_filter *fp)
struct work_struct *work;

if (fp->bpf_func != sk_run_filter) {
+ /*
+ * bpf_jit_free() can be called from softirq; module_free()
+ * requires process context.
+ */
work = (struct work_struct *)fp->bpf_func;

INIT_WORK(work, bpf_jit_free_worker);
@@ -985,6 +989,10 @@ void seccomp_jit_free(struct seccomp_fil
void *bpf_func = seccomp_filter_get_bpf_func(fp);

if (bpf_func != sk_run_filter) {
+ /*
+ * seccomp_jit_free() can be called from softirq; module_free()
+ * requires process context.
+ */
work = (struct work_struct *)bpf_func;

INIT_WORK(work, bpf_jit_free_worker);
diff -puN arch/sparc/net/bpf_jit_comp.c~a arch/sparc/net/bpf_jit_comp.c
--- a/arch/sparc/net/bpf_jit_comp.c~a
+++ a/arch/sparc/net/bpf_jit_comp.c
@@ -817,6 +817,10 @@ static void jit_free_defer(struct work_s
void bpf_jit_free(struct sk_filter *fp)
{
if (fp->bpf_func != sk_run_filter) {
+ /*
+ * bpf_jit_free() can be called from softirq; module_free()
+ * requires process context.
+ */
struct work_struct *work = (struct work_struct *)fp->bpf_func;

INIT_WORK(work, jit_free_defer);
diff -puN arch/powerpc/net/bpf_jit_comp.c~a arch/powerpc/net/bpf_jit_comp.c
--- a/arch/powerpc/net/bpf_jit_comp.c~a
+++ a/arch/powerpc/net/bpf_jit_comp.c
@@ -699,6 +699,10 @@ static void jit_free_defer(struct work_s
void bpf_jit_free(struct sk_filter *fp)
{
if (fp->bpf_func != sk_run_filter) {
+ /*
+ * bpf_jit_free() can be called from softirq; module_free()
+ * requires process context.
+ */
struct work_struct *work = (struct work_struct *)fp->bpf_func;

INIT_WORK(work, jit_free_defer);
diff -puN arch/s390/net/bpf_jit_comp.c~a arch/s390/net/bpf_jit_comp.c
--- a/arch/s390/net/bpf_jit_comp.c~a
+++ a/arch/s390/net/bpf_jit_comp.c
@@ -818,6 +818,10 @@ void bpf_jit_free(struct sk_filter *fp)

if (fp->bpf_func == sk_run_filter)
return;
+ /*
+ * bpf_jit_free() can be called from softirq; module_free() requires
+ * process context.
+ */
work = (struct work_struct *)fp->bpf_func;
INIT_WORK(work, jit_free_defer);
schedule_work(work);
diff -puN arch/x86/net/bpf_jit_comp.c~a arch/x86/net/bpf_jit_comp.c
--- a/arch/x86/net/bpf_jit_comp.c~a
+++ a/arch/x86/net/bpf_jit_comp.c
@@ -749,6 +749,10 @@ static void jit_free_defer(struct work_s
void bpf_jit_free(struct sk_filter *fp)
{
if (fp->bpf_func != sk_run_filter) {
+ /*
+ * bpf_jit_free() can be called from softirq; module_free()
+ * requires process context.
+ */
struct work_struct *work = (struct work_struct *)fp->bpf_func;

INIT_WORK(work, jit_free_defer);
diff -puN include/linux/filter.h~a include/linux/filter.h
diff -puN net/core/filter.c~a net/core/filter.c
_