2020-07-10 14:20:42

by Stefano Garzarella

Subject: [PATCH RFC 0/3] io_uring: add restrictions to support untrusted applications and guests

Following the proposal that I sent about restrictions [1], I wrote a PoC with
the main changes. It is still a WIP, so I left some TODOs in the code.

I also wrote helpers in liburing and a test case (test/register-restrictions.c)
available in this repository:
https://github.com/stefano-garzarella/liburing (branch: io_uring_restrictions)

Just to recap the proposal, the idea is to add some restrictions to the
operations (sqe, register, fixed file) to safely allow untrusted applications
or guests to use io_uring queues.
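
For illustration, a trusted process could install such restrictions with the
new uapi structures roughly like this (just a sketch based on this series:
the opcode choices are only an example, and the raw syscall is spelled out
since glibc has no wrapper):

struct io_uring_restriction res[3] = {
	/* allow only these two SQE opcodes... */
	{ .opcode = IORING_RESTRICTION_SQE_OP, .sqe_op = IORING_OP_READV },
	{ .opcode = IORING_RESTRICTION_SQE_OP, .sqe_op = IORING_OP_WRITEV },
	/* ...and only on pre-registered (fixed) files */
	{ .opcode = IORING_RESTRICTION_FIXED_FILES_ONLY },
};

if (syscall(__NR_io_uring_register, ring_fd,
	    IORING_REGISTER_RESTRICTIONS, res, 3) < 0)
	perror("IORING_REGISTER_RESTRICTIONS");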

The first patch changes io_uring_register(2) opcodes into an enumeration to
keep track of the last opcode available.

The second patch adds the IORING_REGISTER_RESTRICTIONS opcode and the code to
handle restrictions.

The third patch adds the IORING_SETUP_R_DISABLED flag to start the rings
disabled, allowing the user to register restrictions, buffers, and files
before starting to process SQEs.
I'm not sure if this could help seccomp. An alternative pointed out by Jann
Horn could be to register restrictions during io_uring_setup(2), but this
requires some intrusive changes (there is no space in struct io_uring_params
to pass a pointer to the restriction arrays; maybe we can add a flag and
append the pointer at the end of struct io_uring_params).

Another current limitation is that IORING_REGISTER_ENABLE_RINGS must always be
included in the restrictions in order to start the rings (see the sketch
below); I'm not sure if we should treat it as an exception.
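
With the current series, the whole setup flow looks roughly like this (a
sketch with error handling omitted, assuming the patched headers; raw
syscalls stand in for the liburing helpers):

struct io_uring_params p = { .flags = IORING_SETUP_R_DISABLED };
int fd = syscall(__NR_io_uring_setup, 8, &p);

struct io_uring_restriction res[] = {
	/* application-specific entries... */
	{ .opcode = IORING_RESTRICTION_SQE_OP, .sqe_op = IORING_OP_READV },
	/* ...plus this entry, currently always needed to start the rings */
	{ .opcode = IORING_RESTRICTION_REGISTER_OP,
	  .register_op = IORING_REGISTER_ENABLE_RINGS },
};
syscall(__NR_io_uring_register, fd, IORING_REGISTER_RESTRICTIONS,
	res, sizeof(res) / sizeof(res[0]));

/* register fixed files/buffers here, then enable the rings
 * (this also starts the SQPOLL kthread, if any): */
syscall(__NR_io_uring_register, fd, IORING_REGISTER_ENABLE_RINGS, NULL, 0);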

Maybe registering restrictions during io_uring_setup(2) could solve both issues
(seccomp integration and the IORING_REGISTER_ENABLE_RINGS registration), but I
need some suggestions on how to properly extend io_uring_setup(2).

Comments and suggestions are very welcome.

Thank you in advance,
Stefano

[1] https://lore.kernel.org/io-uring/20200609142406.upuwpfmgqjeji4lc@steredhat/

Stefano Garzarella (3):
io_uring: use an enumeration for io_uring_register(2) opcodes
io_uring: add IORING_REGISTER_RESTRICTIONS opcode
io_uring: allow disabling rings during the creation

fs/io_uring.c | 155 ++++++++++++++++++++++++++++++++--
include/uapi/linux/io_uring.h | 59 ++++++++++---
2 files changed, 194 insertions(+), 20 deletions(-)

--
2.26.2


2020-07-10 14:21:01

by Stefano Garzarella

Subject: [PATCH RFC 1/3] io_uring: use an enumeration for io_uring_register(2) opcodes

The enumeration allows us to keep track of the last
io_uring_register(2) opcode available.

Behaviour and opcode names don't change.
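
For instance, this lets the following patches bounds-check opcodes before
consulting the restriction bitmaps:

	if (opcode >= IORING_REGISTER_LAST)
		return -EINVAL;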

Signed-off-by: Stefano Garzarella <[email protected]>
---
include/uapi/linux/io_uring.h | 27 ++++++++++++++++-----------
1 file changed, 16 insertions(+), 11 deletions(-)

diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index 92c22699a5a7..2d18f1d0b5df 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -252,17 +252,22 @@ struct io_uring_params {
/*
* io_uring_register(2) opcodes and arguments
*/
-#define IORING_REGISTER_BUFFERS 0
-#define IORING_UNREGISTER_BUFFERS 1
-#define IORING_REGISTER_FILES 2
-#define IORING_UNREGISTER_FILES 3
-#define IORING_REGISTER_EVENTFD 4
-#define IORING_UNREGISTER_EVENTFD 5
-#define IORING_REGISTER_FILES_UPDATE 6
-#define IORING_REGISTER_EVENTFD_ASYNC 7
-#define IORING_REGISTER_PROBE 8
-#define IORING_REGISTER_PERSONALITY 9
-#define IORING_UNREGISTER_PERSONALITY 10
+enum {
+ IORING_REGISTER_BUFFERS,
+ IORING_UNREGISTER_BUFFERS,
+ IORING_REGISTER_FILES,
+ IORING_UNREGISTER_FILES,
+ IORING_REGISTER_EVENTFD,
+ IORING_UNREGISTER_EVENTFD,
+ IORING_REGISTER_FILES_UPDATE,
+ IORING_REGISTER_EVENTFD_ASYNC,
+ IORING_REGISTER_PROBE,
+ IORING_REGISTER_PERSONALITY,
+ IORING_UNREGISTER_PERSONALITY,
+
+ /* this goes last */
+ IORING_REGISTER_LAST
+};

struct io_uring_files_update {
__u32 offset;
--
2.26.2

2020-07-10 14:21:13

by Stefano Garzarella

Subject: [PATCH RFC 2/3] io_uring: add IORING_REGISTER_RESTRICTIONS opcode

The new io_uring_register(2) IORING_REGISTER_RESTRICTIONS opcode
permanently installs a feature whitelist on an io_ring_ctx.
The io_ring_ctx can then be passed to untrusted code with the
knowledge that only operations present in the whitelist can be
executed.

The whitelist approach ensures that new features added to io_uring
do not accidentally become available when an existing application
is launched on a newer kernel version.

Currently it is possible to restrict sqe opcodes and register
opcodes. It is also possible to allow only fixed files.

IORING_REGISTER_RESTRICTIONS can only be done once. Afterwards
it is not possible to change restrictions anymore.
This prevents untrusted code from removing restrictions.
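
From the untrusted side, a submission that violates the whitelist simply
fails; for example (a sketch using the liburing helpers, assuming only
IORING_OP_READV was allowed):

struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
io_uring_prep_nop(sqe);		/* IORING_OP_NOP is not in the whitelist */
io_uring_submit(&ring);

struct io_uring_cqe *cqe;
io_uring_wait_cqe(&ring, &cqe);
assert(cqe->res == -EACCES);	/* rejected at submission time */
io_uring_cqe_seen(&ring, cqe);

A restricted io_uring_register(2) opcode fails in the same way, with
io_uring_register(2) returning -EACCES directly.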

Suggested-by: Stefan Hajnoczi <[email protected]>
Signed-off-by: Stefano Garzarella <[email protected]>
---
fs/io_uring.c | 98 ++++++++++++++++++++++++++++++++++-
include/uapi/linux/io_uring.h | 30 +++++++++++
2 files changed, 127 insertions(+), 1 deletion(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index d37d7ea5ebe5..4768a9973d4b 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -218,6 +218,13 @@ struct io_buffer {
__u16 bid;
};

+struct io_restriction {
+ DECLARE_BITMAP(register_op, IORING_REGISTER_LAST);
+ DECLARE_BITMAP(sqe_op, IORING_OP_LAST);
+ DECLARE_BITMAP(restriction_op, IORING_RESTRICTION_LAST);
+ bool enabled; /* TODO: remove and use a flag ?? */
+};
+
struct io_ring_ctx {
struct {
struct percpu_ref refs;
@@ -337,6 +344,7 @@ struct io_ring_ctx {
struct llist_head file_put_llist;

struct work_struct exit_work;
+ struct io_restriction restrictions;
};

/*
@@ -5491,6 +5499,11 @@ static int io_req_set_file(struct io_submit_state *state, struct io_kiocb *req,
if (unlikely(!fixed && io_async_submit(req->ctx)))
return -EBADF;

+ if (unlikely(!fixed && req->ctx->restrictions.enabled &&
+ test_bit(IORING_RESTRICTION_FIXED_FILES_ONLY,
+ req->ctx->restrictions.restriction_op)))
+ return -EACCES;
+
return io_file_get(state, req, fd, &req->file, fixed);
}

@@ -5895,6 +5908,10 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
if (unlikely(req->opcode >= IORING_OP_LAST))
return -EINVAL;

+ if (unlikely(ctx->restrictions.enabled &&
+ !test_bit(req->opcode, ctx->restrictions.sqe_op)))
+ return -EACCES;
+
if (unlikely(io_sq_thread_acquire_mm(ctx, req)))
return -EFAULT;

@@ -8079,6 +8096,69 @@ static int io_unregister_personality(struct io_ring_ctx *ctx, unsigned id)
return -EINVAL;
}

+static int io_register_restrictions(struct io_ring_ctx *ctx, void __user *arg,
+ unsigned int nr_args)
+{
+ struct io_uring_restriction *res;
+ size_t size;
+ int i, ret;
+
+ /* We allow only a single restrictions registration */
+ if (ctx->restrictions.enabled)
+ return -EINVAL; /* TODO: check ret value */
+
+ /* TODO: Is it okay to set a maximum? */
+ if (!arg || nr_args > 256)
+ return -EINVAL;
+
+ size = array_size(nr_args, sizeof(*res));
+ if (size == SIZE_MAX)
+ return -EOVERFLOW;
+
+ res = kmalloc(size, GFP_KERNEL);
+ if (!res)
+ return -ENOMEM;
+
+ if (copy_from_user(res, arg, size)) {
+ ret = -EFAULT;
+ goto out;
+ }
+
+ for (i = 0; i < nr_args; i++) {
+ if (res[i].opcode >= IORING_RESTRICTION_LAST) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ __set_bit(res[i].opcode, ctx->restrictions.restriction_op);
+
+ if (res[i].opcode == IORING_RESTRICTION_REGISTER_OP) {
+ if (res[i].register_op >= IORING_REGISTER_LAST) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ __set_bit(res[i].register_op,
+ ctx->restrictions.register_op);
+ } else if (res[i].opcode == IORING_RESTRICTION_SQE_OP) {
+ if (res[i].sqe_op >= IORING_OP_LAST) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ __set_bit(res[i].sqe_op, ctx->restrictions.sqe_op);
+ }
+ }
+
+ ctx->restrictions.enabled = true;
+
+ ret = 0;
+out:
+ /* TODO: should we reset all restrictions if an error happened? */
+ kfree(res);
+ return ret;
+}
+
static bool io_register_op_must_quiesce(int op)
{
switch (op) {
@@ -8125,6 +8205,18 @@ static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
if (ret) {
percpu_ref_resurrect(&ctx->refs);
ret = -EINTR;
+ goto out_quiesce;
+ }
+ }
+
+ if (ctx->restrictions.enabled) {
+ if (opcode >= IORING_REGISTER_LAST) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ if (!test_bit(opcode, ctx->restrictions.register_op)) {
+ ret = -EACCES;
goto out;
}
}
@@ -8188,15 +8280,19 @@ static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
break;
ret = io_unregister_personality(ctx, nr_args);
break;
+ case IORING_REGISTER_RESTRICTIONS:
+ ret = io_register_restrictions(ctx, arg, nr_args);
+ break;
default:
ret = -EINVAL;
break;
}

+out:
if (io_register_op_must_quiesce(opcode)) {
/* bring the ctx back to life */
percpu_ref_reinit(&ctx->refs);
-out:
+out_quiesce:
reinit_completion(&ctx->ref_comp);
}
return ret;
diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index 2d18f1d0b5df..69f4684c988d 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -264,6 +264,7 @@ enum {
IORING_REGISTER_PROBE,
IORING_REGISTER_PERSONALITY,
IORING_UNREGISTER_PERSONALITY,
+ IORING_REGISTER_RESTRICTIONS,

/* this goes last */
IORING_REGISTER_LAST
@@ -292,4 +293,33 @@ struct io_uring_probe {
struct io_uring_probe_op ops[0];
};

+struct io_uring_restriction {
+ __u16 opcode;
+ union {
+ __u8 register_op; /* IORING_RESTRICTION_REGISTER_OP */
+ __u8 sqe_op; /* IORING_RESTRICTION_SQE_OP */
+ };
+ __u8 resv;
+ __u32 resv2[3];
+};
+
+/*
+ * io_uring_restriction->opcode values
+ */
+enum {
+ /* Allow an io_uring_register(2) opcode */
+ IORING_RESTRICTION_REGISTER_OP,
+
+ /* Allow an sqe opcode */
+ IORING_RESTRICTION_SQE_OP,
+
+ /* Only allow fixed files */
+ IORING_RESTRICTION_FIXED_FILES_ONLY,
+
+ /* Only allow registered addresses and translate them */
+ //TODO: IORING_RESTRICTION_BUFFER_CHECK,
+
+ IORING_RESTRICTION_LAST
+};
+
#endif
--
2.26.2

2020-07-10 14:23:33

by Stefano Garzarella

Subject: [PATCH RFC 3/3] io_uring: allow disabling rings during the creation

This patch adds a new IORING_SETUP_R_DISABLED flag to start the
rings disabled, allowing the user to register restrictions,
buffers, and files before starting to process SQEs.

When IORING_SETUP_R_DISABLED is set, SQEs are not processed and
the SQPOLL kthread is not started.

Restriction registration is allowed only while the rings are
disabled, to prevent concurrency issues while processing SQEs.

The rings can then be enabled using the IORING_REGISTER_ENABLE_RINGS
opcode with io_uring_register(2).

Suggested-by: Jens Axboe <[email protected]>
Signed-off-by: Stefano Garzarella <[email protected]>
---
fs/io_uring.c | 57 ++++++++++++++++++++++++++++++-----
include/uapi/linux/io_uring.h | 2 ++
2 files changed, 51 insertions(+), 8 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 4768a9973d4b..52a75bf4206f 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -6955,8 +6955,8 @@ static int io_init_wq_offload(struct io_ring_ctx *ctx,
return ret;
}

-static int io_sq_offload_start(struct io_ring_ctx *ctx,
- struct io_uring_params *p)
+static int io_sq_offload_create(struct io_ring_ctx *ctx,
+ struct io_uring_params *p)
{
int ret;

@@ -6993,7 +6993,6 @@ static int io_sq_offload_start(struct io_ring_ctx *ctx,
ctx->sqo_thread = NULL;
goto err;
}
- wake_up_process(ctx->sqo_thread);
} else if (p->flags & IORING_SETUP_SQ_AFF) {
/* Can't have SQ_AFF without SQPOLL */
ret = -EINVAL;
@@ -7012,6 +7011,18 @@ static int io_sq_offload_start(struct io_ring_ctx *ctx,
return ret;
}

+static int io_sq_offload_start(struct io_ring_ctx *ctx)
+{
+ if (ctx->flags & IORING_SETUP_SQPOLL) {
+ if (!ctx->sqo_thread)
+ return -EINVAL; /* TODO: check errno */
+
+ wake_up_process(ctx->sqo_thread);
+ }
+
+ return 0;
+}
+
static void io_unaccount_mem(struct user_struct *user, unsigned long nr_pages)
{
atomic_long_sub(nr_pages, &user->locked_vm);
@@ -7632,9 +7643,6 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
int submitted = 0;
struct fd f;

- if (current->task_works)
- task_work_run();
-
if (flags & ~(IORING_ENTER_GETEVENTS | IORING_ENTER_SQ_WAKEUP))
return -EINVAL;

@@ -7651,6 +7659,12 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,
if (!percpu_ref_tryget(&ctx->refs))
goto out_fput;

+ ret = -EBADF;
+ if (ctx->flags & IORING_SETUP_R_DISABLED)
+ goto out;
+
+ if (current->task_works)
+ task_work_run();
+
/*
* For SQ polling, the thread will do all submissions and completions.
* Just return the requested submit count, and wake the thread if
@@ -7956,10 +7970,16 @@ static int io_uring_create(unsigned entries, struct io_uring_params *p,
if (ret)
goto err;

- ret = io_sq_offload_start(ctx, p);
+ ret = io_sq_offload_create(ctx, p);
if (ret)
goto err;

+ if (!(p->flags & IORING_SETUP_R_DISABLED)) {
+ ret = io_sq_offload_start(ctx);
+ if (ret)
+ goto err;
+ }
+
memset(&p->sq_off, 0, sizeof(p->sq_off));
p->sq_off.head = offsetof(struct io_rings, sq.head);
p->sq_off.tail = offsetof(struct io_rings, sq.tail);
@@ -8020,7 +8040,8 @@ static long io_uring_setup(u32 entries, struct io_uring_params __user *params)

if (p.flags & ~(IORING_SETUP_IOPOLL | IORING_SETUP_SQPOLL |
IORING_SETUP_SQ_AFF | IORING_SETUP_CQSIZE |
- IORING_SETUP_CLAMP | IORING_SETUP_ATTACH_WQ))
+ IORING_SETUP_CLAMP | IORING_SETUP_ATTACH_WQ |
+ IORING_SETUP_R_DISABLED))
return -EINVAL;

return io_uring_create(entries, &p, params);
@@ -8103,6 +8124,10 @@ static int io_register_restrictions(struct io_ring_ctx *ctx, void __user *arg,
size_t size;
int i, ret;

+ /* Restrictions allowed only if rings started disabled */
+ if (!(ctx->flags & IORING_SETUP_R_DISABLED))
+ return -EINVAL;
+
/* We allow only a single restrictions registration */
if (ctx->restrictions.enabled)
return -EINVAL; /* TODO: check ret value */
@@ -8159,6 +8184,16 @@ static int io_register_restrictions(struct io_ring_ctx *ctx, void __user *arg,
return ret;
}

+static int io_register_enable_rings(struct io_ring_ctx *ctx)
+{
+ if (!(ctx->flags & IORING_SETUP_R_DISABLED))
+ return -EINVAL;
+
+ ctx->flags &= ~IORING_SETUP_R_DISABLED;
+
+ return io_sq_offload_start(ctx);
+}
+
static bool io_register_op_must_quiesce(int op)
{
switch (op) {
@@ -8280,6 +8315,12 @@ static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
break;
ret = io_unregister_personality(ctx, nr_args);
break;
+ case IORING_REGISTER_ENABLE_RINGS:
+ ret = -EINVAL;
+ if (arg || nr_args)
+ break;
+ ret = io_register_enable_rings(ctx);
+ break;
case IORING_REGISTER_RESTRICTIONS:
ret = io_register_restrictions(ctx, arg, nr_args);
break;
diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index 69f4684c988d..57081c746b06 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -94,6 +94,7 @@ enum {
#define IORING_SETUP_CQSIZE (1U << 3) /* app defines CQ size */
#define IORING_SETUP_CLAMP (1U << 4) /* clamp SQ/CQ ring sizes */
#define IORING_SETUP_ATTACH_WQ (1U << 5) /* attach to existing wq */
+#define IORING_SETUP_R_DISABLED (1U << 6) /* start with ring disabled */

enum {
IORING_OP_NOP,
@@ -265,6 +266,7 @@ enum {
IORING_REGISTER_PERSONALITY,
IORING_UNREGISTER_PERSONALITY,
IORING_REGISTER_RESTRICTIONS,
+ IORING_REGISTER_ENABLE_RINGS,

/* this goes last */
IORING_REGISTER_LAST
--
2.26.2

2020-07-10 15:33:23

by Konrad Rzeszutek Wilk

Subject: Re: [PATCH RFC 0/3] io_uring: add restrictions to support untrusted applications and guests

.snip..
> Just to recap the proposal, the idea is to add some restrictions to the
> operations (sqe, register, fixed file) to safely allow untrusted applications
> or guests to use io_uring queues.

Hi!

This is neat and quite cool - but one thing that keeps nagging me is
how much overhead does this cut from the existing setup when you use
virtio (with guests obviously)? That is, from a high level view, the
beauty of io_uring being passed in the guest is that you don't have the
virtio ring -> io_uring processing, right?

Thanks!

2020-07-10 16:23:05

by Stefano Garzarella

Subject: Re: [PATCH RFC 0/3] io_uring: add restrictions to support untrusted applications and guests

Hi Konrad,

On Fri, Jul 10, 2020 at 11:33:09AM -0400, Konrad Rzeszutek Wilk wrote:
> .snip..
> > Just to recap the proposal, the idea is to add some restrictions to the
> > operations (sqe, register, fixed file) to safely allow untrusted applications
> > or guests to use io_uring queues.
>
> Hi!
>
> This is neat and quite cool - but one thing that keeps nagging me is
> how much overhead does this cut from the existing setup when you use
> virtio (with guests obviously)?

I need to do more tests, but the preliminary results that I reported in
the original proposal [1] show an overhead of ~4.17 us (with iodepth=1)
when using the virtio ring processed in a dedicated iothread:

- 73 kIOPS using virtio-blk + QEMU iothread + io_uring backend
- 104 kIOPS using io_uring passthrough
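
(That delta matches the per-request latencies implied by the IOPS numbers:
1/73k ~= 13.7 us vs 1/104k ~= 9.6 us, i.e. ~4.1 us per request.)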

> That is, from a high level view, the
> beauty of io_uring being passed in the guest is that you don't have the
> virtio ring -> io_uring processing, right?

Right, and potentially we can share the io_uring queues directly with the
guest userspace applications, cutting down the cost of the Linux block
layer in the guest.

Thanks for your feedback,
Stefano

[1] https://lore.kernel.org/io-uring/20200609142406.upuwpfmgqjeji4lc@steredhat/

2020-07-10 17:54:08

by Jens Axboe

Subject: Re: [PATCH RFC 2/3] io_uring: add IORING_REGISTER_RESTRICTIONS opcode

On 7/10/20 8:19 AM, Stefano Garzarella wrote:
> The new io_uring_register(2) IORING_REGISTER_RESTRICTIONS opcode
> permanently installs a feature whitelist on an io_ring_ctx.
> The io_ring_ctx can then be passed to untrusted code with the
> knowledge that only operations present in the whitelist can be
> executed.
>
> The whitelist approach ensures that new features added to io_uring
> do not accidentally become available when an existing application
> is launched on a newer kernel version.

Keeping with the trend of the times, you should probably use 'allowlist'
here instead of 'whitelist'.
>
> Currently it is possible to restrict sqe opcodes and register
> opcodes. It is also possible to allow only fixed files.
>
> IORING_REGISTER_RESTRICTIONS can only be done once. Afterwards
> it is not possible to change restrictions anymore.
> This prevents untrusted code from removing restrictions.

A few comments below.

> @@ -337,6 +344,7 @@ struct io_ring_ctx {
> struct llist_head file_put_llist;
>
> struct work_struct exit_work;
> + struct io_restriction restrictions;
> };
>
> /*

Since very few will use this feature, I was going to suggest that we make
it dynamically allocated. But it's just 32 bytes currently, so probably
not worth the effort...

> @@ -5491,6 +5499,11 @@ static int io_req_set_file(struct io_submit_state *state, struct io_kiocb *req,
> if (unlikely(!fixed && io_async_submit(req->ctx)))
> return -EBADF;
>
> + if (unlikely(!fixed && req->ctx->restrictions.enabled &&
> + test_bit(IORING_RESTRICTION_FIXED_FILES_ONLY,
> + req->ctx->restrictions.restriction_op)))
> + return -EACCES;
> +
> return io_file_get(state, req, fd, &req->file, fixed);
> }

This one hurts, though. I don't want any extra overhead from the
feature, and you're digging deep into ctx here to figure out if we need to
check.

Generally, all the checking needs to be out-of-line, and it needs to
base the decision on whether to check something or not on a cache hot
piece of data. So I'd suggest to turn all of these into some flag.
ctx->flags generally mirrors setup flags, so probably just add a:

unsigned int restrictions : 1;

after eventfd_async : 1 in io_ring_ctx. That's free, plenty of room
there and that cacheline is already pulled in for reading.


--
Jens Axboe

2020-07-13 08:08:42

by Stefano Garzarella

Subject: Re: [PATCH RFC 2/3] io_uring: add IORING_REGISTER_RESTRICTIONS opcode

On Fri, Jul 10, 2020 at 11:52:48AM -0600, Jens Axboe wrote:
> On 7/10/20 8:19 AM, Stefano Garzarella wrote:
> > The new io_uring_register(2) IORING_REGISTER_RESTRICTIONS opcode
> > permanently installs a feature whitelist on an io_ring_ctx.
> > The io_ring_ctx can then be passed to untrusted code with the
> > knowledge that only operations present in the whitelist can be
> > executed.
> >
> > The whitelist approach ensures that new features added to io_uring
> > do not accidentally become available when an existing application
> > is launched on a newer kernel version.
>
> Keeping with the trend of the times, you should probably use 'allowlist'
> here instead of 'whitelist'.

Sure, it is better!

> >
> > Currently it is possible to restrict sqe opcodes and register
> > opcodes. It is also possible to allow only fixed files.
> >
> > IORING_REGISTER_RESTRICTIONS can only be done once. Afterwards
> > it is not possible to change restrictions anymore.
> > This prevents untrusted code from removing restrictions.
>
> A few comments below.
>
> > @@ -337,6 +344,7 @@ struct io_ring_ctx {
> > struct llist_head file_put_llist;
> >
> > struct work_struct exit_work;
> > + struct io_restriction restrictions;
> > };
> >
> > /*
>
> Since very few will use this feature, I was going to suggest that we make
> it dynamically allocated. But it's just 32 bytes currently, so probably
> not worth the effort...
>

Yeah, I'm not sure it will grow in the future, so I'm tempted to leave it
as it is, but I can easily change it if you prefer.

> > @@ -5491,6 +5499,11 @@ static int io_req_set_file(struct io_submit_state *state, struct io_kiocb *req,
> > if (unlikely(!fixed && io_async_submit(req->ctx)))
> > return -EBADF;
> >
> > + if (unlikely(!fixed && req->ctx->restrictions.enabled &&
> > + test_bit(IORING_RESTRICTION_FIXED_FILES_ONLY,
> > + req->ctx->restrictions.restriction_op)))
> > + return -EACCES;
> > +
> > return io_file_get(state, req, fd, &req->file, fixed);
> > }
>
> This one hurts, though. I don't want any extra overhead from the
> feature, and you're digging deep into ctx here to figure out if we need to
> check.
>
> Generally, all the checking needs to be out-of-line, and it needs to
> base the decision on whether to check something or not on a cache hot
> piece of data. So I'd suggest to turn all of these into some flag.
> ctx->flags generally mirrors setup flags, so probably just add a:
>
> unsigned int restrictions : 1;
>
> after eventfd_async : 1 in io_ring_ctx. That's free, plenty of room
> there and that cacheline is already pulled in for reading.
>

Thanks for the clear explanation!

I left a TODO comment near the 'enabled' field to look for something better,
and what you're suggesting is what I was looking for :-)

I'll change it!
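
Something like this, then (just a sketch of what I have in mind; the bit is
named 'restricted' only to avoid clashing with the 'restrictions' struct
member):

	/* in io_ring_ctx, next to the existing bitfields: */
	unsigned int		eventfd_async: 1;
	unsigned int		restricted: 1;	/* set when restrictions are registered */

	/* the fast path then only touches ctx->restrictions when needed: */
	if (unlikely(!fixed && req->ctx->restricted &&
		     test_bit(IORING_RESTRICTION_FIXED_FILES_ONLY,
			      req->ctx->restrictions.restriction_op)))
		return -EACCES;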

Thanks,
Stefano

2020-07-13 09:27:29

by Stefan Hajnoczi

Subject: Re: [PATCH RFC 0/3] io_uring: add restrictions to support untrusted applications and guests

On Fri, Jul 10, 2020 at 06:20:17PM +0200, Stefano Garzarella wrote:
> On Fri, Jul 10, 2020 at 11:33:09AM -0400, Konrad Rzeszutek Wilk wrote:
> > .snip..
> > > Just to recap the proposal, the idea is to add some restrictions to the
> > > operations (sqe, register, fixed file) to safely allow untrusted applications
> > > or guests to use io_uring queues.
> >
> > Hi!
> >
> > This is neat and quite cool - but one thing that keeps nagging me is
> > how much overhead does this cut from the existing setup when you use
> > virtio (with guests obviously)?
>
> I need to do more tests, but the preliminary results that I reported in
> the original proposal [1] show an overhead of ~4.17 us (with iodepth=1)
> when using the virtio ring processed in a dedicated iothread:
>
> - 73 kIOPS using virtio-blk + QEMU iothread + io_uring backend
> - 104 kIOPS using io_uring passthrough
>
> > That is, from a high level view, the
> > beauty of io_uring being passed in the guest is that you don't have the
> > virtio ring -> io_uring processing, right?
>
> Right, and potentially we can share the io_uring queues directly with the
> guest userspace applications, cutting down the cost of the Linux block
> layer in the guest.

Another factor is that the guest submits requests directly to the host
kernel sqpoll thread. When a virtqueue is used the sqpoll thread cannot
poll it directly so another host thread (QEMU) needs to poll the
virtqueue. The same applies for the completion code path.

Stefan

