2019-06-25 22:12:55

by Mickaël Salaün

Subject: [PATCH bpf-next v9 00/10] Landlock LSM: Toward unprivileged sandboxing

Hi,

This ninth series upgrades Landlock to a stackable LSM [4] and elides
the file system path evaluation from the previous series [1] to make
this review process easier. This patch series is almost half the size of
the previous one.

Landlock is a low-level framework to build custom access-control systems
or safe endpoint security agents. There are two types of Landlock hooks:
FS_WALK and FS_PICK. Each of them accepts a dedicated eBPF program,
called a Landlock program. The set of actions on a file is well defined
(e.g. read, write, ioctl, append, lock, mount...) taking inspiration
from the major Linux LSMs and some other access-controls like Capsicum.

The example patch shows how a file system access control can be built
based on a list of denied files and directories. From a security point
of view, it may be preferable to use a whitelist instead of a blacklist,
but this series only enables matching a specific list of files. Bringing
back a way to evaluate a path is planned for a future dedicated series,
once this base Landlock framework is merged. I may take inspiration
from the LOOKUP_BENEATH approach [5], but from an eBPF point of view.

The documentation patch contains some kernel documentation and
explanations on how to use Landlock. The compiled documentation and
some talks can be found here: https://landlock.io
This patch series can be found in a Git repository here:
https://github.com/landlock-lsm/linux/commits/landlock-v9

There are still some minor issues with this patch series but it is
enough to get a deep review.

This is the first step of the roadmap discussed at LPC [2]. While the
intended final goal is to allow unprivileged users to use Landlock, this
series allows only a process with global CAP_SYS_ADMIN to load and
enforce a rule. This may help to get feedback and avoid unexpected
behaviors.

This series can be applied on top of bpf-next, commit 88091ff56b71
("selftests, bpf: Add test for veth native XDP"). This can be tested
with CONFIG_SECCOMP_FILTER and CONFIG_SECURITY_LANDLOCK. I would really
appreciate constructive comments on the design and the code.


# Landlock LSM

The goal of this new Linux Security Module (LSM) called Landlock is to
allow any process, including unprivileged ones, to create powerful
security sandboxes comparable to XNU Sandbox or OpenBSD Pledge (which
could be implemented with Landlock). This kind of sandbox is expected
to help mitigate the security impact of bugs or unexpected/malicious
behaviors in user-space applications.

The approach taken is to add the minimum amount of code while still
allowing the user-space application to create quite complex access
rules. A dedicated security policy language such as the ones used by
SELinux, AppArmor and other major LSMs involves a lot of code and is
usually restricted to a trusted user (i.e. root). On the contrary,
eBPF programs already exist and are designed to be safely loaded by
unprivileged user-space.

This design does not seem too intrusive but is flexible enough to allow
a powerful sandbox mechanism accessible by any process on Linux. Using
seccomp and Landlock is easier with the help of a user-space library
(e.g. libseccomp) that could provide a high-level language to express a
security policy instead of raw eBPF programs.
Moreover, thanks to the LLVM front-end, it is quite easy to write an
eBPF program with a subset of the C language.
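
As a rough sketch of that C subset, a Landlock program for the FS_PICK
hook could look like the following. Every identifier below (context
type, helper name, section name, map type) is hypothetical, loosely
modeled on this series' samples/bpf/landlock1_kern.c, and is not the
exact uapi:

```c
/* Hypothetical sketch: identifiers are illustrative, not the real uapi. */
#include <linux/bpf.h>
#include "bpf_helpers.h"

struct bpf_map_def SEC("maps") denied_inodes = {
	.type = BPF_MAP_TYPE_INODE,	/* the new inode map from this series */
	.key_size = sizeof(u32),
	.value_size = sizeof(u64),
	.max_entries = 32,
};

SEC("landlock1")
static int fs_pick_prog(struct landlock_ctx_fs_pick *ctx)
{
	/* deny the action if the picked inode was tagged in the map */
	if (bpf_inode_map_lookup(&denied_inodes, (void *)ctx->inode))
		return 1;	/* non-zero denies */
	return 0;		/* zero allows */
}
```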


# Frequently asked questions

## Why is seccomp-bpf not enough?

A seccomp filter can access only raw syscall arguments (i.e. the
register values) which means that it is not possible to filter according
to the value pointed to by an argument, such as a file pathname. As an
embryonic Landlock version demonstrated, filtering at the syscall level
is complicated (e.g. the need to take care of race conditions). This is
mainly because the access control checkpoints of the kernel are not at
this high level but deeper, at the LSM-hook level. The LSM
hooks are designed to handle this kind of check. Landlock abstracts
this approach to leverage the ability of unprivileged users to limit
themselves.

Cf. section "What it isn't?" in Documentation/prctl/seccomp_filter.txt


## Why use the seccomp(2) syscall?

Landlock uses the same semantics as seccomp to apply access rule
restrictions. It adds a new layer of security for the current process
which is inherited by its children. It makes sense to use a unique
access-restricting syscall (that should be allowed by seccomp filters)
which can only drop privileges. Moreover, a Landlock rule could come
from outside a process (e.g. passed through a UNIX socket). It is then
useful to differentiate the creation/load of Landlock eBPF programs via
bpf(2), from rule enforcement via seccomp(2).


## Why a new LSM? Are SELinux, AppArmor, Smack and Tomoyo not good
enough?

The current access control LSMs are fine for their purpose which is to
give the *root* the ability to enforce a security policy for the
*system*. What is missing is a way to enforce a security policy for any
application by its developer and *unprivileged user* as seccomp can do
for raw syscall filtering.

Differences from other (access control) LSMs:
* not only dedicated to administrators (i.e. no_new_priv);
* limited kernel attack surface (e.g. policy parsing);
* constrained policy rules (no DoS: deterministic execution time);
* do not leak more information than the loader process can legitimately
have access to (minimize metadata inference).


# Changes since v8

* fit with the new LSM stacking framework (security blobs were tested
but are not used in this series because of the code reduction)
* remove the Landlock program chaining and the file path evaluation
feature to get a minimum viable product and ease the review
* replace the example with a simple blacklist policy
* rebase on bpf-next


# Changes since v7

* major revamp of the file system enforcement:
* new eBPF map dedicated to tying an inode to an arbitrary 64-bit
value, which can be used to tag files
* three new Landlock hooks: FS_WALK, FS_PICK and FS_GET
* add the ability to chain Landlock programs
* add a new eBPF map type to compare inodes
* don't use macros anymore
* replace subtype fields:
* triggers: fine-grained bitfield of actions on which a Landlock
program may be called (if it comes from a sandbox process)
* previous: a parent chained program
* upstreamed patches:
* commit 369130b63178 ("selftests: Enhance kselftest_harness.h to
print which assert failed")


# Changes since v6

* upstreamed patches:
* commit 752ba56fb130 ("bpf: Extend check_uarg_tail_zero() checks")
* commit 0b40808a1084 ("selftests: Make test_harness.h more generally
available") and related ones
* commit 3bb857e47e49 ("LSM: Enable multiple calls to
security_add_hooks() for the same LSM")
* simplify the landlock_context (remove syscall_* fields) and add three
FS sub-events: IOCTL, LOCK, FCNTL
* minimize the number of callable BPF functions from a Landlock rule
* do not split put_seccomp_filter() with put_seccomp()
* rename Landlock version to Landlock ABI
* miscellaneous fixes
* rebase on net-next


# Changes since v5

* eBPF program subtype:
* use a prog_subtype pointer instead of inlining it into bpf_attr
* enable a future-proof behavior (reject unhandled data/size)
* add tests
* use a simple rule hierarchy (similar to seccomp-bpf)
* add a ptrace scope protection
* add more tests
* add more documentation
* rename some files
* miscellaneous fixes
* rebase on net-next


# Changes since v4

* upstreamed patches:
* commit d498f8719a09 ("bpf: Rebuild bpf.o for any dependency update")
* commit a734fb5d6006 ("samples/bpf: Reset global variables") and
related ones
* commit f4874d01beba ("bpf: Use bpf_create_map() from the library")
and related ones
* commit d02d8986a768 ("bpf: Always test unprivileged programs")
* commit 640eb7e7b524 ("fs: Constify path_is_under()'s arguments")
* commit 535e7b4b5ef2 ("bpf: Use u64_to_user_ptr()")
* revamp Landlock to not expose an LSM hook interface but wrap and
abstract them with Landlock events (currently one for all filesystem
related operations: LANDLOCK_SUBTYPE_EVENT_FS)
* wrap all filesystem kernel objects through the same FS handle (struct
landlock_handle_fs): struct file, struct inode, struct path and struct
dentry
* a rule doesn't return an errno code but only a boolean to allow or deny
an access request
* handle all filesystem related LSM hooks
* add some tests and a sample:
* BPF context tests
* Landlock sandboxing tests and sample
* write Landlock rules in C and compile them with LLVM
* change field names of eBPF program subtype
* remove arraymap of handles for now (will be replaced with a revamped
map)
* remove cgroup handling for now
* add user and kernel documentation
* rebase on net-next


# Changes since v3

* upstreamed patch:
* commit 1955351da41c ("bpf: Set register type according to
is_valid_access()")
* use abstract LSM hook arguments with custom types (e.g.
*_LANDLOCK_ARG_FS for struct file, struct inode and struct path)
* add more LSM hooks to support full filesystem access control
* improve the sandbox example
* fix races and RCU issues:
* eBPF program execution and eBPF helpers
* revamp the arraymap of handles to cleanly deal with update/delete
* eBPF program subtype for Landlock:
* remove the "origin" field
* add an "option" field
* rebase onto Daniel Mack's patches v7 [3]
* remove merged commit 1955351da41c ("bpf: Set register type according
to is_valid_access()")
* fix spelling mistakes
* cleanup some type and variable names
* split patches
* for now, remove cgroup delegation handling for unprivileged user
* remove extra access check for cgroup_get_from_fd()
* remove unused example code dealing with skb
* remove seccomp-bpf link:
* no more seccomp cookie
* for now, it is no longer possible to check the current syscall
properties


# Changes since v2

* revamp cgroup handling:
* use Daniel Mack's patches "Add eBPF hooks for cgroups" v5
* remove bpf_landlock_cmp_cgroup_beneath()
* make BPF_PROG_ATTACH usable with delegated cgroups
* add a new CGRP_NO_NEW_PRIVS flag for safe cgroups
* handle Landlock sandboxing for cgroups hierarchy
* allow unprivileged processes to attach Landlock eBPF program to
cgroups
* add subtype to eBPF programs:
* replace Landlock hook identification by custom eBPF program types
with a dedicated subtype field
* manage fine-grained privileged Landlock programs
* register Landlock programs for dedicated trigger origins (e.g.
syscall, return from seccomp filter and/or interruption)
* performance and memory optimizations: use an array to access Landlock
hooks directly but do not duplicate it for each thread
(seccomp-based)
* allow running Landlock programs without seccomp filter
* fix seccomp-related issues
* remove extra errno bounding check for Landlock programs
* add some examples for optional eBPF functions or context access
(network related) according to security checks to allow more features
for privileged programs (e.g. Checmate)


# Changes since v1

* focus on the LSM hooks, not the syscalls:
* much simpler implementation
* does not need audit cache tricks to avoid race conditions
* simpler to use and more generic because it uses the LSM hook
abstraction directly
* more efficient because only checking in LSM hooks
* architecture agnostic
* switch from cBPF to eBPF:
* new eBPF program types dedicated to Landlock
* custom functions used by the eBPF program
* gain some new features (e.g. 10 registers, can load values of
different size, LLVM translator) but only a few functions allowed
and a dedicated map type
* new context: LSM hook ID, cookie and LSM hook arguments
* need to set the sysctl kernel.unprivileged_bpf_disabled to 0 (default
value) to be able to load hook filters as unprivileged users
* smaller and simpler:
* no more checker groups but a dedicated arraymap of handles
* simpler userland structs thanks to eBPF functions
* distinctive name: Landlock


[1] https://lore.kernel.org/lkml/[email protected]/
[2] https://lore.kernel.org/lkml/[email protected]/
[3] https://lore.kernel.org/netdev/[email protected]/
[4] https://lore.kernel.org/lkml/[email protected]/
[5] https://lore.kernel.org/lkml/[email protected]/

Regards,

Mickaël Salaün (10):
fs,security: Add a new file access type: MAY_CHROOT
bpf: Add eBPF program subtype and is_valid_subtype() verifier
bpf,landlock: Define an eBPF program type for Landlock hooks
seccomp,landlock: Enforce Landlock programs per process hierarchy
bpf,landlock: Add a new map type: inode
landlock: Handle filesystem access control
landlock: Add ptrace restrictions
bpf: Add a Landlock sandbox example
bpf,landlock: Add tests for Landlock
landlock: Add user and kernel documentation for Landlock

Documentation/security/index.rst | 1 +
Documentation/security/landlock/index.rst | 20 +
Documentation/security/landlock/kernel.rst | 99 +++
Documentation/security/landlock/user.rst | 148 +++++
MAINTAINERS | 13 +
fs/open.c | 3 +-
include/linux/bpf.h | 17 +
include/linux/bpf_types.h | 6 +
include/linux/fs.h | 1 +
include/linux/landlock.h | 34 ++
include/linux/lsm_hooks.h | 1 +
include/linux/seccomp.h | 5 +
include/uapi/linux/bpf.h | 22 +-
include/uapi/linux/landlock.h | 109 ++++
include/uapi/linux/seccomp.h | 1 +
kernel/bpf/Makefile | 3 +
kernel/bpf/core.c | 3 +
kernel/bpf/inodemap.c | 315 ++++++++++
kernel/bpf/syscall.c | 59 +-
kernel/bpf/verifier.c | 25 +
kernel/fork.c | 8 +-
kernel/seccomp.c | 4 +
net/core/filter.c | 25 +-
samples/bpf/.gitignore | 1 +
samples/bpf/Makefile | 3 +
samples/bpf/bpf_load.c | 76 ++-
samples/bpf/bpf_load.h | 7 +
samples/bpf/landlock1.h | 8 +
samples/bpf/landlock1_kern.c | 104 ++++
samples/bpf/landlock1_user.c | 157 +++++
security/Kconfig | 1 +
security/Makefile | 2 +
security/landlock/Kconfig | 18 +
security/landlock/Makefile | 5 +
security/landlock/common.h | 81 +++
security/landlock/enforce.c | 272 +++++++++
security/landlock/enforce.h | 18 +
security/landlock/enforce_seccomp.c | 92 +++
security/landlock/hooks.c | 95 +++
security/landlock/hooks.h | 31 +
security/landlock/hooks_fs.c | 568 ++++++++++++++++++
security/landlock/hooks_fs.h | 31 +
security/landlock/hooks_ptrace.c | 121 ++++
security/landlock/hooks_ptrace.h | 8 +
security/landlock/init.c | 159 +++++
security/security.c | 15 +
tools/include/uapi/linux/bpf.h | 22 +-
tools/include/uapi/linux/landlock.h | 109 ++++
tools/lib/bpf/bpf.c | 10 +-
tools/lib/bpf/bpf.h | 2 +
tools/lib/bpf/libbpf.c | 1 +
tools/lib/bpf/libbpf.map | 1 +
tools/lib/bpf/libbpf_probes.c | 2 +
tools/testing/selftests/Makefile | 1 +
tools/testing/selftests/bpf/bpf_helpers.h | 2 +
tools/testing/selftests/bpf/test_verifier.c | 27 +-
.../testing/selftests/bpf/verifier/landlock.c | 35 ++
.../testing/selftests/bpf/verifier/subtype.c | 20 +
tools/testing/selftests/landlock/.gitignore | 4 +
tools/testing/selftests/landlock/Makefile | 39 ++
tools/testing/selftests/landlock/test.h | 48 ++
tools/testing/selftests/landlock/test_base.c | 24 +
tools/testing/selftests/landlock/test_fs.c | 257 ++++++++
.../testing/selftests/landlock/test_ptrace.c | 154 +++++
64 files changed, 3528 insertions(+), 25 deletions(-)
create mode 100644 Documentation/security/landlock/index.rst
create mode 100644 Documentation/security/landlock/kernel.rst
create mode 100644 Documentation/security/landlock/user.rst
create mode 100644 include/linux/landlock.h
create mode 100644 include/uapi/linux/landlock.h
create mode 100644 kernel/bpf/inodemap.c
create mode 100644 samples/bpf/landlock1.h
create mode 100644 samples/bpf/landlock1_kern.c
create mode 100644 samples/bpf/landlock1_user.c
create mode 100644 security/landlock/Kconfig
create mode 100644 security/landlock/Makefile
create mode 100644 security/landlock/common.h
create mode 100644 security/landlock/enforce.c
create mode 100644 security/landlock/enforce.h
create mode 100644 security/landlock/enforce_seccomp.c
create mode 100644 security/landlock/hooks.c
create mode 100644 security/landlock/hooks.h
create mode 100644 security/landlock/hooks_fs.c
create mode 100644 security/landlock/hooks_fs.h
create mode 100644 security/landlock/hooks_ptrace.c
create mode 100644 security/landlock/hooks_ptrace.h
create mode 100644 security/landlock/init.c
create mode 100644 tools/include/uapi/linux/landlock.h
create mode 100644 tools/testing/selftests/bpf/verifier/landlock.c
create mode 100644 tools/testing/selftests/bpf/verifier/subtype.c
create mode 100644 tools/testing/selftests/landlock/.gitignore
create mode 100644 tools/testing/selftests/landlock/Makefile
create mode 100644 tools/testing/selftests/landlock/test.h
create mode 100644 tools/testing/selftests/landlock/test_base.c
create mode 100644 tools/testing/selftests/landlock/test_fs.c
create mode 100644 tools/testing/selftests/landlock/test_ptrace.c

--
2.20.1


2019-06-25 22:14:09

by Mickaël Salaün

Subject: [PATCH bpf-next v9 04/10] seccomp,landlock: Enforce Landlock programs per process hierarchy

The seccomp(2) syscall can be used by a task to apply a Landlock program
to itself. As with a seccomp filter, a Landlock program is enforced for
the current task and all its future children. A program is immutable and
a task can only add new restricting programs to itself, forming a list
of programs.

A Landlock program is tied to a Landlock hook. If the action on a kernel
object is allowed by the other Linux security mechanisms (e.g. DAC,
capabilities, other LSM), then a Landlock hook related to this kind of
object is triggered. The list of programs for this hook is then
evaluated. Each program returns a 32-bit value which can deny the
action on a kernel object with a non-zero value. If every program in the
list returns zero, then the action on the object is allowed.

Signed-off-by: Mickaël Salaün <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: James Morris <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Serge E. Hallyn <[email protected]>
Cc: Will Drewry <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
---

Changes since v8:
* Remove the chaining concept from the eBPF program contexts (chain and
cookie). We need to keep these subtypes this way to be able to make
them evolve, though.

Changes since v7:
* handle and verify program chains
* split and rename providers.c to enforce.c and enforce_seccomp.c
* rename LANDLOCK_SUBTYPE_* to LANDLOCK_*

Changes since v6:
* rename some functions with more accurate names to reflect that an eBPF
program for Landlock could be used for something other than a rule
* reword rule "appending" to "prepending" and explain it
* remove the superfluous no_new_privs check, only check global
CAP_SYS_ADMIN when prepending a Landlock rule (needed for containers)
* create and use {get,put}_seccomp_landlock() (suggested by Kees Cook)
* replace ifdef with static inlined function (suggested by Kees Cook)
* use get_user() (suggested by Kees Cook)
* replace atomic_t with refcount_t (requested by Kees Cook)
* move struct landlock_{rule,events} from landlock.h to common.h
* cleanup headers

Changes since v5:
* remove struct landlock_node and use a similar inheritance mechanism
as seccomp-bpf (requested by Andy Lutomirski)
* rename SECCOMP_ADD_LANDLOCK_RULE to SECCOMP_APPEND_LANDLOCK_RULE
* rename file manager.c to providers.c
* add comments
* typo and cosmetic fixes

Changes since v4:
* merge manager and seccomp patches
* return -EFAULT in seccomp(2) when user_bpf_fd is null to easily check
if Landlock is supported
* only allow a process with the global CAP_SYS_ADMIN to use Landlock
(will be lifted in the future)
* add an early check to exit as soon as possible if the current process
does not have Landlock rules

Changes since v3:
* remove the hard link with seccomp (suggested by Andy Lutomirski and
Kees Cook):
* remove the cookie which could imply multiple evaluation of Landlock
rules
* remove the origin field in struct landlock_data
* remove documentation fix (merged upstream)
* rename the new seccomp command to SECCOMP_ADD_LANDLOCK_RULE
* internal renaming
* split commit
* new design to be able to inherit on the fly the parent rules

Changes since v2:
* Landlock programs can now be run without seccomp filter but for any
syscall (from the process) or interruption
* move Landlock related functions and structs into security/landlock/*
(to manage cgroups as well)
* fix seccomp filter handling: run Landlock programs for each of their
legitimate seccomp filter
* properly clean up all seccomp results
* cosmetic changes to ease the understanding
* fix some ifdef
---
include/linux/landlock.h | 34 ++++
include/linux/seccomp.h | 5 +
include/uapi/linux/seccomp.h | 1 +
kernel/fork.c | 8 +-
kernel/seccomp.c | 4 +
security/landlock/Makefile | 3 +-
security/landlock/common.h | 45 +++++
security/landlock/enforce.c | 272 ++++++++++++++++++++++++++++
security/landlock/enforce.h | 18 ++
security/landlock/enforce_seccomp.c | 92 ++++++++++
10 files changed, 480 insertions(+), 2 deletions(-)
create mode 100644 include/linux/landlock.h
create mode 100644 security/landlock/enforce.c
create mode 100644 security/landlock/enforce.h
create mode 100644 security/landlock/enforce_seccomp.c

diff --git a/include/linux/landlock.h b/include/linux/landlock.h
new file mode 100644
index 000000000000..8ac7942f50fc
--- /dev/null
+++ b/include/linux/landlock.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Landlock LSM - public kernel headers
+ *
+ * Copyright © 2016-2019 Mickaël Salaün <[email protected]>
+ * Copyright © 2018-2019 ANSSI
+ */
+
+#ifndef _LINUX_LANDLOCK_H
+#define _LINUX_LANDLOCK_H
+
+#include <linux/errno.h>
+#include <linux/sched.h> /* task_struct */
+
+#if defined(CONFIG_SECCOMP_FILTER) && defined(CONFIG_SECURITY_LANDLOCK)
+extern int landlock_seccomp_prepend_prog(unsigned int flags,
+ const int __user *user_bpf_fd);
+extern void put_seccomp_landlock(struct task_struct *tsk);
+extern void get_seccomp_landlock(struct task_struct *tsk);
+#else /* CONFIG_SECCOMP_FILTER && CONFIG_SECURITY_LANDLOCK */
+static inline int landlock_seccomp_prepend_prog(unsigned int flags,
+ const int __user *user_bpf_fd)
+{
+ return -EINVAL;
+}
+static inline void put_seccomp_landlock(struct task_struct *tsk)
+{
+}
+static inline void get_seccomp_landlock(struct task_struct *tsk)
+{
+}
+#endif /* CONFIG_SECCOMP_FILTER && CONFIG_SECURITY_LANDLOCK */
+
+#endif /* _LINUX_LANDLOCK_H */
diff --git a/include/linux/seccomp.h b/include/linux/seccomp.h
index 84868d37b35d..106a0ceff3d7 100644
--- a/include/linux/seccomp.h
+++ b/include/linux/seccomp.h
@@ -11,6 +11,7 @@

#ifdef CONFIG_SECCOMP

+#include <linux/landlock.h>
#include <linux/thread_info.h>
#include <asm/seccomp.h>

@@ -22,6 +23,7 @@ struct seccomp_filter;
* system calls available to a process.
* @filter: must always point to a valid seccomp-filter or NULL as it is
* accessed without locking during system call entry.
+ * @landlock_prog_set: contains a set of Landlock programs.
*
* @filter must only be accessed from the context of current as there
* is no read locking.
@@ -29,6 +31,9 @@ struct seccomp_filter;
struct seccomp {
int mode;
struct seccomp_filter *filter;
+#if defined(CONFIG_SECCOMP_FILTER) && defined(CONFIG_SECURITY_LANDLOCK)
+ struct landlock_prog_set *landlock_prog_set;
+#endif /* CONFIG_SECCOMP_FILTER && CONFIG_SECURITY_LANDLOCK */
};

#ifdef CONFIG_HAVE_ARCH_SECCOMP_FILTER
diff --git a/include/uapi/linux/seccomp.h b/include/uapi/linux/seccomp.h
index 90734aa5aa36..bce6534e7feb 100644
--- a/include/uapi/linux/seccomp.h
+++ b/include/uapi/linux/seccomp.h
@@ -16,6 +16,7 @@
#define SECCOMP_SET_MODE_FILTER 1
#define SECCOMP_GET_ACTION_AVAIL 2
#define SECCOMP_GET_NOTIF_SIZES 3
+#define SECCOMP_PREPEND_LANDLOCK_PROG 4

/* Valid flags for SECCOMP_SET_MODE_FILTER */
#define SECCOMP_FILTER_FLAG_TSYNC (1UL << 0)
diff --git a/kernel/fork.c b/kernel/fork.c
index 75675b9bf6df..a1ad5e80611b 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -51,6 +51,7 @@
#include <linux/security.h>
#include <linux/hugetlb.h>
#include <linux/seccomp.h>
+#include <linux/landlock.h>
#include <linux/swap.h>
#include <linux/syscalls.h>
#include <linux/jiffies.h>
@@ -454,6 +455,7 @@ void free_task(struct task_struct *tsk)
rt_mutex_debug_task_free(tsk);
ftrace_graph_exit_task(tsk);
put_seccomp_filter(tsk);
+ put_seccomp_landlock(tsk);
arch_release_task_struct(tsk);
if (tsk->flags & PF_KTHREAD)
free_kthread_struct(tsk);
@@ -884,7 +886,10 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node)
* the usage counts on the error path calling free_task.
*/
tsk->seccomp.filter = NULL;
-#endif
+#ifdef CONFIG_SECURITY_LANDLOCK
+ tsk->seccomp.landlock_prog_set = NULL;
+#endif /* CONFIG_SECURITY_LANDLOCK */
+#endif /* CONFIG_SECCOMP */

setup_thread_stack(tsk, orig);
clear_user_return_notifier(tsk);
@@ -1598,6 +1603,7 @@ static void copy_seccomp(struct task_struct *p)

/* Ref-count the new filter user, and assign it. */
get_seccomp_filter(current);
+ get_seccomp_landlock(current);
p->seccomp = current->seccomp;

/*
diff --git a/kernel/seccomp.c b/kernel/seccomp.c
index 811b4a86cdf6..e5005a644b23 100644
--- a/kernel/seccomp.c
+++ b/kernel/seccomp.c
@@ -41,6 +41,7 @@
#include <linux/tracehook.h>
#include <linux/uaccess.h>
#include <linux/anon_inodes.h>
+#include <linux/landlock.h>

enum notify_state {
SECCOMP_NOTIFY_INIT,
@@ -1397,6 +1398,9 @@ static long do_seccomp(unsigned int op, unsigned int flags,
return -EINVAL;

return seccomp_get_notif_sizes(uargs);
+ case SECCOMP_PREPEND_LANDLOCK_PROG:
+ return landlock_seccomp_prepend_prog(flags,
+ (const int __user *)uargs);
default:
return -EINVAL;
}
diff --git a/security/landlock/Makefile b/security/landlock/Makefile
index 7205f9a7a2ee..2a1a7082a365 100644
--- a/security/landlock/Makefile
+++ b/security/landlock/Makefile
@@ -1,3 +1,4 @@
obj-$(CONFIG_SECURITY_LANDLOCK) := landlock.o

-landlock-y := init.o
+landlock-y := init.o \
+ enforce.o enforce_seccomp.o
diff --git a/security/landlock/common.h b/security/landlock/common.h
index fd63ed1592a7..0c9b5904e7f5 100644
--- a/security/landlock/common.h
+++ b/security/landlock/common.h
@@ -23,4 +23,49 @@
#define _LANDLOCK_TRIGGER_FS_PICK_LAST LANDLOCK_TRIGGER_FS_PICK_WRITE
#define _LANDLOCK_TRIGGER_FS_PICK_MASK ((_LANDLOCK_TRIGGER_FS_PICK_LAST << 1ULL) - 1)

+extern struct lsm_blob_sizes landlock_blob_sizes;
+
+struct landlock_prog_list {
+ struct landlock_prog_list *prev;
+ struct bpf_prog *prog;
+ refcount_t usage;
+};
+
+/**
+ * struct landlock_prog_set - Landlock programs enforced on a thread
+ *
+ * This is used for low performance impact when forking a process. Instead of
+ * copying the full array and incrementing the usage of each entry, only
+ * create a pointer to &struct landlock_prog_set and increment its usage. When
+ * prepending a new program, if &struct landlock_prog_set is shared with other
+ * tasks, then duplicate it and prepend the program to this new &struct
+ * landlock_prog_set.
+ *
+ * @usage: reference count to manage the object lifetime. When a thread needs to
+ * add Landlock programs and if @usage is greater than 1, then the
+ * thread must duplicate &struct landlock_prog_set to not change the
+ * children's programs as well.
+ * @programs: array of non-NULL &struct landlock_prog_list pointers
+ */
+struct landlock_prog_set {
+ struct landlock_prog_list *programs[_LANDLOCK_HOOK_LAST];
+ refcount_t usage;
+};
+
+/**
+ * get_index - get an index for the programs of struct landlock_prog_set
+ *
+ * @type: a Landlock hook type
+ */
+static inline int get_index(enum landlock_hook_type type)
+{
+ /* type ID > 0 for loaded programs */
+ return type - 1;
+}
+
+static inline enum landlock_hook_type get_type(struct bpf_prog *prog)
+{
+ return prog->aux->extra->subtype.landlock_hook.type;
+}
+
#endif /* _SECURITY_LANDLOCK_COMMON_H */
diff --git a/security/landlock/enforce.c b/security/landlock/enforce.c
new file mode 100644
index 000000000000..c06063d9d43d
--- /dev/null
+++ b/security/landlock/enforce.c
@@ -0,0 +1,272 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Landlock LSM - enforcing helpers
+ *
+ * Copyright © 2016-2019 Mickaël Salaün <[email protected]>
+ * Copyright © 2018-2019 ANSSI
+ */
+
+#include <asm/barrier.h> /* smp_store_release() */
+#include <asm/page.h> /* PAGE_SIZE */
+#include <linux/bpf.h> /* bpf_prog_put() */
+#include <linux/compiler.h> /* READ_ONCE() */
+#include <linux/err.h> /* PTR_ERR() */
+#include <linux/errno.h>
+#include <linux/filter.h> /* struct bpf_prog */
+#include <linux/refcount.h>
+#include <linux/slab.h> /* alloc(), kfree() */
+
+#include "common.h" /* struct landlock_prog_list */
+
+/* TODO: use a dedicated kmem_cache_alloc() instead of k*alloc() */
+
+static void put_landlock_prog_list(struct landlock_prog_list *prog_list)
+{
+ struct landlock_prog_list *orig = prog_list;
+
+ /* clean up single-reference branches iteratively */
+ while (orig && refcount_dec_and_test(&orig->usage)) {
+ struct landlock_prog_list *freeme = orig;
+
+ if (orig->prog)
+ bpf_prog_put(orig->prog);
+ orig = orig->prev;
+ kfree(freeme);
+ }
+}
+
+void landlock_put_prog_set(struct landlock_prog_set *prog_set)
+{
+ if (prog_set && refcount_dec_and_test(&prog_set->usage)) {
+ size_t i;
+
+ for (i = 0; i < ARRAY_SIZE(prog_set->programs); i++)
+ put_landlock_prog_list(prog_set->programs[i]);
+ kfree(prog_set);
+ }
+}
+
+void landlock_get_prog_set(struct landlock_prog_set *prog_set)
+{
+ if (!prog_set)
+ return;
+ refcount_inc(&prog_set->usage);
+}
+
+static struct landlock_prog_set *new_landlock_prog_set(void)
+{
+ struct landlock_prog_set *ret;
+
+ /* array filled with NULL values */
+ ret = kzalloc(sizeof(*ret), GFP_KERNEL);
+ if (!ret)
+ return ERR_PTR(-ENOMEM);
+ refcount_set(&ret->usage, 1);
+ return ret;
+}
+
+/**
+ * store_landlock_prog - prepend and deduplicate a Landlock prog_list
+ *
+ * Prepend @prog to @init_prog_set while ignoring @prog
+ * if they are already in @ref_prog_set. Whatever is the result of this
+ * function call, you can call bpf_prog_put(@prog) after.
+ *
+ * @init_prog_set: empty prog_set to prepend to
+ * @ref_prog_set: prog_set to check for duplicate programs
+ * @prog: program to prepend
+ *
+ * Return -errno on error or 0 if @prog was successfully stored.
+ */
+static int store_landlock_prog(struct landlock_prog_set *init_prog_set,
+ const struct landlock_prog_set *ref_prog_set,
+ struct bpf_prog *prog)
+{
+ struct landlock_prog_list *tmp_list = NULL;
+ int err;
+ u32 hook_idx;
+ enum landlock_hook_type last_type;
+ struct bpf_prog *new = prog;
+
+ /* allocate all the memory we need */
+ struct landlock_prog_list *new_list;
+
+ last_type = get_type(new);
+
+ /* ignore duplicate programs */
+ if (ref_prog_set) {
+ struct landlock_prog_list *ref;
+
+ hook_idx = get_index(get_type(new));
+ for (ref = ref_prog_set->programs[hook_idx];
+ ref; ref = ref->prev) {
+ if (ref->prog == new)
+ return -EINVAL;
+ }
+ }
+
+ new = bpf_prog_inc(new);
+ if (IS_ERR(new)) {
+ err = PTR_ERR(new);
+ goto put_tmp_list;
+ }
+ new_list = kzalloc(sizeof(*new_list), GFP_KERNEL);
+ if (!new_list) {
+ bpf_prog_put(new);
+ err = -ENOMEM;
+ goto put_tmp_list;
+ }
+ /* ignore Landlock types in this tmp_list */
+ new_list->prog = new;
+ new_list->prev = tmp_list;
+ refcount_set(&new_list->usage, 1);
+ tmp_list = new_list;
+
+ if (!tmp_list)
+ /* inform user space that this program was already added */
+ return -EEXIST;
+
+ /* properly store the list (without error cases) */
+ while (tmp_list) {
+ struct landlock_prog_list *new_list;
+
+ new_list = tmp_list;
+ tmp_list = tmp_list->prev;
+ /* do not increment the previous prog list usage */
+ hook_idx = get_index(get_type(new_list->prog));
+ new_list->prev = init_prog_set->programs[hook_idx];
+ /* no need to add from the last program to the first because
+ * each of them is a different Landlock type */
+ smp_store_release(&init_prog_set->programs[hook_idx], new_list);
+ }
+ return 0;
+
+put_tmp_list:
+ put_landlock_prog_list(tmp_list);
+ return err;
+}
+
+/* limit Landlock programs set to 256KB */
+#define LANDLOCK_PROGRAMS_MAX_PAGES (1 << 6)
+
+/**
+ * landlock_prepend_prog - attach a Landlock prog_list to @current_prog_set
+ *
+ * Whatever the result of this function call, you can safely call
+ * bpf_prog_put(@prog) afterward.
+ *
+ * @current_prog_set: landlock_prog_set pointer, must be locked (if needed) to
+ * prevent a concurrent put/free. This pointer must not be
+ * freed after the call.
+ * @prog: non-NULL Landlock program to prepend to @current_prog_set. @prog
+ * will be owned by landlock_prepend_prog() and freed if an error
+ * occurs.
+ *
+ * Return @current_prog_set or a new pointer on success, or an error pointer
+ * otherwise.
+ */
+struct landlock_prog_set *landlock_prepend_prog(
+ struct landlock_prog_set *current_prog_set,
+ struct bpf_prog *prog)
+{
+ struct landlock_prog_set *new_prog_set = current_prog_set;
+ unsigned long pages;
+ int err;
+ size_t i;
+ struct landlock_prog_set tmp_prog_set = {};
+
+ if (prog->type != BPF_PROG_TYPE_LANDLOCK_HOOK)
+ return ERR_PTR(-EINVAL);
+
+ /* validate memory size allocation */
+ pages = prog->pages;
+ if (current_prog_set) {
+ size_t i;
+
+ for (i = 0; i < ARRAY_SIZE(current_prog_set->programs); i++) {
+ struct landlock_prog_list *walker_p;
+
+ for (walker_p = current_prog_set->programs[i];
+ walker_p; walker_p = walker_p->prev)
+ pages += walker_p->prog->pages;
+ }
+ /* count a struct landlock_prog_set if we need to allocate one */
+ if (refcount_read(&current_prog_set->usage) != 1)
+ pages += round_up(sizeof(*current_prog_set), PAGE_SIZE)
+ / PAGE_SIZE;
+ }
+ if (pages > LANDLOCK_PROGRAMS_MAX_PAGES)
+ return ERR_PTR(-E2BIG);
+
+ /* ensure early that we can allocate enough memory for the new
+ * prog_lists */
+ err = store_landlock_prog(&tmp_prog_set, current_prog_set, prog);
+ if (err)
+ return ERR_PTR(err);
+
+ /*
+ * Each task_struct points to an array of prog list pointers. These
+ * tables are duplicated when additions are made (which means each
+ * table needs to be refcounted for the processes using it). When a new
+ * table is created, all the refcounters on the prog_list are bumped (to
+ * track each table that references the prog). When a new prog is
+ * added, it's just prepended to the list for the new table to point
+ * at.
+ *
+ * Handle all possible errors before this step to avoid uselessly
+ * duplicating current_prog_set and needing a rollback.
+ */
+ if (!new_prog_set) {
+ /*
+ * If there is no Landlock program set used by the current task,
+ * then create a new one.
+ */
+ new_prog_set = new_landlock_prog_set();
+ if (IS_ERR(new_prog_set))
+ goto put_tmp_lists;
+ } else if (refcount_read(&current_prog_set->usage) > 1) {
+ /*
+ * If the current task is not the sole user of its Landlock
+ * program set, then duplicate it.
+ */
+ new_prog_set = new_landlock_prog_set();
+ if (IS_ERR(new_prog_set))
+ goto put_tmp_lists;
+ for (i = 0; i < ARRAY_SIZE(new_prog_set->programs); i++) {
+ new_prog_set->programs[i] =
+ READ_ONCE(current_prog_set->programs[i]);
+ if (new_prog_set->programs[i])
+ refcount_inc(&new_prog_set->programs[i]->usage);
+ }
+
+ /*
+ * The Landlock program set from the current task will not be
+ * freed here because its usage count is strictly greater than
+ * 1. It is only protected against being freed by another task
+ * by the caller of landlock_prepend_prog(), which should hold
+ * a lock if needed.
+ */
+ landlock_put_prog_set(current_prog_set);
+ }
+
+ /* prepend tmp_prog_set to new_prog_set */
+ for (i = 0; i < ARRAY_SIZE(tmp_prog_set.programs); i++) {
+ /* get the last new list */
+ struct landlock_prog_list *last_list =
+ tmp_prog_set.programs[i];
+
+ if (last_list) {
+ while (last_list->prev)
+ last_list = last_list->prev;
+ /* no need to increment usage (pointer replacement) */
+ last_list->prev = new_prog_set->programs[i];
+ new_prog_set->programs[i] = tmp_prog_set.programs[i];
+ }
+ }
+ return new_prog_set;
+
+put_tmp_lists:
+ for (i = 0; i < ARRAY_SIZE(tmp_prog_set.programs); i++)
+ put_landlock_prog_list(tmp_prog_set.programs[i]);
+ return new_prog_set;
+}
diff --git a/security/landlock/enforce.h b/security/landlock/enforce.h
new file mode 100644
index 000000000000..39b800d9999f
--- /dev/null
+++ b/security/landlock/enforce.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Landlock LSM - enforcing helpers headers
+ *
+ * Copyright © 2016-2019 Mickaël Salaün <[email protected]>
+ * Copyright © 2018-2019 ANSSI
+ */
+
+#ifndef _SECURITY_LANDLOCK_ENFORCE_H
+#define _SECURITY_LANDLOCK_ENFORCE_H
+
+struct landlock_prog_set *landlock_prepend_prog(
+ struct landlock_prog_set *current_prog_set,
+ struct bpf_prog *prog);
+void landlock_put_prog_set(struct landlock_prog_set *prog_set);
+void landlock_get_prog_set(struct landlock_prog_set *prog_set);
+
+#endif /* _SECURITY_LANDLOCK_ENFORCE_H */
diff --git a/security/landlock/enforce_seccomp.c b/security/landlock/enforce_seccomp.c
new file mode 100644
index 000000000000..c38c81e6b01a
--- /dev/null
+++ b/security/landlock/enforce_seccomp.c
@@ -0,0 +1,92 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Landlock LSM - enforcing with seccomp
+ *
+ * Copyright © 2016-2018 Mickaël Salaün <[email protected]>
+ * Copyright © 2018-2019 ANSSI
+ */
+
+#ifdef CONFIG_SECCOMP_FILTER
+
+#include <linux/bpf.h> /* bpf_prog_put() */
+#include <linux/capability.h>
+#include <linux/err.h> /* PTR_ERR() */
+#include <linux/errno.h>
+#include <linux/filter.h> /* struct bpf_prog */
+#include <linux/landlock.h>
+#include <linux/refcount.h>
+#include <linux/sched.h> /* current */
+#include <linux/uaccess.h> /* get_user() */
+
+#include "enforce.h"
+
+/* headers in include/linux/landlock.h */
+
+/**
+ * landlock_seccomp_prepend_prog - attach a Landlock program to the current
+ * process
+ *
+ * current->seccomp.landlock_state->prog_set is lazily allocated. When a
+ * process forks, only a pointer is copied. When a new program is added by a
+ * process, if there are other references to this process's prog_set, then a
+ * new allocation is made to contain an array pointing to Landlock program
+ * lists. This design has a low performance impact and is memory efficient
+ * while keeping the property of prepend-only programs.
+ *
+ * For now, installing a Landlock prog requires that the requesting task has
+ * the global CAP_SYS_ADMIN. We cannot mandate the use of no_new_privs
+ * because that would exclude containers where a process may legitimately
+ * acquire more privileges via an SUID binary.
+ *
+ * @flags: not used for now, but could be used for TSYNC
+ * @user_bpf_fd: file descriptor pointing to a loaded Landlock prog
+ */
+int landlock_seccomp_prepend_prog(unsigned int flags,
+ const int __user *user_bpf_fd)
+{
+ struct landlock_prog_set *new_prog_set;
+ struct bpf_prog *prog;
+ int bpf_fd, err;
+
+ /* planned to be replaced with a no_new_privs check to allow
+ * unprivileged tasks */
+ if (!capable(CAP_SYS_ADMIN))
+ return -EPERM;
+ /* allow user space to check for Landlock support via an early EFAULT */
+ if (!user_bpf_fd)
+ return -EFAULT;
+ if (flags)
+ return -EINVAL;
+ err = get_user(bpf_fd, user_bpf_fd);
+ if (err)
+ return err;
+
+ prog = bpf_prog_get(bpf_fd);
+ if (IS_ERR(prog))
+ return PTR_ERR(prog);
+
+ /*
+ * We don't need to lock anything for the current process hierarchy,
+ * everything is guarded by the atomic counters.
+ */
+ new_prog_set = landlock_prepend_prog(
+ current->seccomp.landlock_prog_set, prog);
+ bpf_prog_put(prog);
+ /* @prog is managed/freed by landlock_prepend_prog() */
+ if (IS_ERR(new_prog_set))
+ return PTR_ERR(new_prog_set);
+ current->seccomp.landlock_prog_set = new_prog_set;
+ return 0;
+}
+
+void put_seccomp_landlock(struct task_struct *tsk)
+{
+ landlock_put_prog_set(tsk->seccomp.landlock_prog_set);
+}
+
+void get_seccomp_landlock(struct task_struct *tsk)
+{
+ landlock_get_prog_set(tsk->seccomp.landlock_prog_set);
+}
+
+#endif /* CONFIG_SECCOMP_FILTER */
--
2.20.1

2019-06-25 22:14:28

by Mickaël Salaün

Subject: [PATCH bpf-next v9 09/10] bpf,landlock: Add tests for Landlock

Test basic context access, ptrace protection and filesystem hooks.

Signed-off-by: Mickaël Salaün <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: James Morris <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Serge E. Hallyn <[email protected]>
Cc: Shuah Khan <[email protected]>
Cc: Will Drewry <[email protected]>
---

Changes since v8:
* update eBPF include path for macros
* use TEST_GEN_PROGS and use the generic "clean" target
* add more verbose errors
* update the bpf/verifier files
* remove chain tests (from landlock and bpf/verifier)
* replace the whitelist tests with blacklist tests (because of stateless
Landlock programs): remove "dotdot" tests and other depth tests
* sync the landlock Makefile with its bpf sibling directory and use
bpf_load_program_xattr()

Changes since v7:
* update tests and add new ones for filesystem hierarchy and Landlock
chains.

Changes since v6:
* use the new kselftest_harness.h
* use const variables
* replace ASSERT_STEP with ASSERT_*
* rename BPF_PROG_TYPE_LANDLOCK to BPF_PROG_TYPE_LANDLOCK_RULE
* force sample library rebuild
* fix install target

Changes since v5:
* add subtype test
* add ptrace tests
* split and rename files
* cleanup and rebase
---
tools/testing/selftests/Makefile | 1 +
tools/testing/selftests/bpf/bpf_helpers.h | 2 +
tools/testing/selftests/bpf/test_verifier.c | 1 +
.../testing/selftests/bpf/verifier/landlock.c | 35 +++
.../testing/selftests/bpf/verifier/subtype.c | 10 +
tools/testing/selftests/landlock/.gitignore | 4 +
tools/testing/selftests/landlock/Makefile | 39 +++
tools/testing/selftests/landlock/test.h | 48 ++++
tools/testing/selftests/landlock/test_base.c | 24 ++
tools/testing/selftests/landlock/test_fs.c | 257 ++++++++++++++++++
.../testing/selftests/landlock/test_ptrace.c | 154 +++++++++++
11 files changed, 575 insertions(+)
create mode 100644 tools/testing/selftests/bpf/verifier/landlock.c
create mode 100644 tools/testing/selftests/landlock/.gitignore
create mode 100644 tools/testing/selftests/landlock/Makefile
create mode 100644 tools/testing/selftests/landlock/test.h
create mode 100644 tools/testing/selftests/landlock/test_base.c
create mode 100644 tools/testing/selftests/landlock/test_fs.c
create mode 100644 tools/testing/selftests/landlock/test_ptrace.c

diff --git a/tools/testing/selftests/Makefile b/tools/testing/selftests/Makefile
index 9781ca79794a..342a7d714fb9 100644
--- a/tools/testing/selftests/Makefile
+++ b/tools/testing/selftests/Makefile
@@ -21,6 +21,7 @@ TARGETS += ir
TARGETS += kcmp
TARGETS += kexec
TARGETS += kvm
+TARGETS += landlock
TARGETS += lib
TARGETS += livepatch
TARGETS += membarrier
diff --git a/tools/testing/selftests/bpf/bpf_helpers.h b/tools/testing/selftests/bpf/bpf_helpers.h
index 1a5b1accf091..0b15c49fac3f 100644
--- a/tools/testing/selftests/bpf/bpf_helpers.h
+++ b/tools/testing/selftests/bpf/bpf_helpers.h
@@ -225,6 +225,8 @@ static void *(*bpf_sk_storage_get)(void *map, struct bpf_sock *sk,
static int (*bpf_sk_storage_delete)(void *map, struct bpf_sock *sk) =
(void *)BPF_FUNC_sk_storage_delete;
static int (*bpf_send_signal)(unsigned sig) = (void *)BPF_FUNC_send_signal;
+static unsigned long long (*bpf_inode_map_lookup)(void *map, void *key) =
+ (void *) BPF_FUNC_inode_map_lookup;

/* llvm builtin functions that eBPF C program may use to
* emit BPF_LD_ABS and BPF_LD_IND instructions
diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
index 93faffd31fc3..c67218ffebf9 100644
--- a/tools/testing/selftests/bpf/test_verifier.c
+++ b/tools/testing/selftests/bpf/test_verifier.c
@@ -30,6 +30,7 @@
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/btf.h>
+#include <linux/landlock.h>

#include <bpf/bpf.h>
#include <bpf/libbpf.h>
diff --git a/tools/testing/selftests/bpf/verifier/landlock.c b/tools/testing/selftests/bpf/verifier/landlock.c
new file mode 100644
index 000000000000..7ed4e24c0a88
--- /dev/null
+++ b/tools/testing/selftests/bpf/verifier/landlock.c
@@ -0,0 +1,35 @@
+{
+ "landlock/fs_pick: always accept",
+ .insns = {
+ BPF_MOV32_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .result = ACCEPT,
+ .prog_type = BPF_PROG_TYPE_LANDLOCK_HOOK,
+ .has_prog_subtype = true,
+ .prog_subtype = {
+ .landlock_hook = {
+ .type = LANDLOCK_HOOK_FS_PICK,
+ .triggers = LANDLOCK_TRIGGER_FS_PICK_READ,
+ }
+ },
+},
+{
+ "landlock/fs_pick: read context",
+ .insns = {
+ BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
+ BPF_LDX_MEM(BPF_DW, BPF_REG_7, BPF_REG_6,
+ offsetof(struct landlock_ctx_fs_pick, inode)),
+ BPF_MOV32_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .result = ACCEPT,
+ .prog_type = BPF_PROG_TYPE_LANDLOCK_HOOK,
+ .has_prog_subtype = true,
+ .prog_subtype = {
+ .landlock_hook = {
+ .type = LANDLOCK_HOOK_FS_PICK,
+ .triggers = LANDLOCK_TRIGGER_FS_PICK_READ,
+ }
+ },
+},
diff --git a/tools/testing/selftests/bpf/verifier/subtype.c b/tools/testing/selftests/bpf/verifier/subtype.c
index cf614223d53f..6bb7ef4b39b5 100644
--- a/tools/testing/selftests/bpf/verifier/subtype.c
+++ b/tools/testing/selftests/bpf/verifier/subtype.c
@@ -8,3 +8,13 @@
.result = REJECT,
.has_prog_subtype = true,
},
+{
+ "missing subtype",
+ .insns = {
+ BPF_MOV32_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .errstr = "",
+ .result = REJECT,
+ .prog_type = BPF_PROG_TYPE_LANDLOCK_HOOK,
+},
diff --git a/tools/testing/selftests/landlock/.gitignore b/tools/testing/selftests/landlock/.gitignore
new file mode 100644
index 000000000000..25b9cd834c3c
--- /dev/null
+++ b/tools/testing/selftests/landlock/.gitignore
@@ -0,0 +1,4 @@
+/test_base
+/test_fs
+/test_ptrace
+/tmp_*
diff --git a/tools/testing/selftests/landlock/Makefile b/tools/testing/selftests/landlock/Makefile
new file mode 100644
index 000000000000..7a253bf6d580
--- /dev/null
+++ b/tools/testing/selftests/landlock/Makefile
@@ -0,0 +1,39 @@
+LIBDIR := ../../../lib
+BPFDIR := $(LIBDIR)/bpf
+APIDIR := ../../../include/uapi
+GENDIR := ../../../../include/generated
+GENHDR := $(GENDIR)/autoconf.h
+
+ifneq ($(wildcard $(GENHDR)),)
+ GENFLAGS := -DHAVE_GENHDR
+endif
+
+BPFOBJS := $(BPFDIR)/bpf.o $(BPFDIR)/nlattr.o
+LOADOBJ := ../../../../samples/bpf/bpf_load.o
+
+CFLAGS += -Wl,-no-as-needed -Wall -O2 -I$(APIDIR) -I$(LIBDIR) -I$(BPFDIR) -I$(GENDIR) $(GENFLAGS) -I../../../include
+LDFLAGS += -lelf
+
+test_src = $(wildcard test_*.c)
+
+test_objs := $(test_src:.c=)
+
+TEST_GEN_PROGS := $(test_objs)
+
+.PHONY: all clean force
+
+all: $(test_objs)
+
+# force a rebuild of BPFOBJS when its dependencies are updated
+force:
+
+# rebuild bpf.o as a workaround for the samples/bpf bug
+$(BPFOBJS): $(LOADOBJ) force
+ $(MAKE) -C $(BPFDIR)
+
+$(LOADOBJ): force
+ $(MAKE) -C $(dir $(LOADOBJ))
+
+$(test_objs): $(BPFOBJS) $(LOADOBJ) ../kselftest_harness.h
+
+include ../lib.mk
diff --git a/tools/testing/selftests/landlock/test.h b/tools/testing/selftests/landlock/test.h
new file mode 100644
index 000000000000..7d412d94148c
--- /dev/null
+++ b/tools/testing/selftests/landlock/test.h
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Landlock helpers
+ *
+ * Copyright © 2017-2019 Mickaël Salaün <[email protected]>
+ * Copyright © 2019 ANSSI
+ */
+
+#include <bpf/bpf.h>
+#include <errno.h>
+#include <linux/filter.h>
+#include <linux/landlock.h>
+#include <linux/seccomp.h>
+#include <sys/prctl.h>
+#include <sys/syscall.h>
+
+#include "../kselftest_harness.h"
+#include "../../../../samples/bpf/bpf_load.h"
+
+#ifndef SECCOMP_PREPEND_LANDLOCK_PROG
+#define SECCOMP_PREPEND_LANDLOCK_PROG 4
+#endif
+
+#ifndef seccomp
+static int __attribute__((unused)) seccomp(unsigned int op, unsigned int flags,
+ void *args)
+{
+ errno = 0;
+ return syscall(__NR_seccomp, op, flags, args);
+}
+#endif
+
+/* bpf_load_program() with subtype */
+static int __attribute__((unused)) ll_bpf_load_program(
+ const struct bpf_insn *insns, size_t insns_cnt, char *log_buf,
+ size_t log_buf_sz, const union bpf_prog_subtype *subtype)
+{
+ struct bpf_load_program_attr load_attr;
+
+ memset(&load_attr, 0, sizeof(struct bpf_load_program_attr));
+ load_attr.prog_type = BPF_PROG_TYPE_LANDLOCK_HOOK;
+ load_attr.prog_subtype = subtype;
+ load_attr.insns = insns;
+ load_attr.insns_cnt = insns_cnt;
+ load_attr.license = "GPL";
+
+ return bpf_load_program_xattr(&load_attr, log_buf, log_buf_sz);
+}
diff --git a/tools/testing/selftests/landlock/test_base.c b/tools/testing/selftests/landlock/test_base.c
new file mode 100644
index 000000000000..db46f39048cb
--- /dev/null
+++ b/tools/testing/selftests/landlock/test_base.c
@@ -0,0 +1,24 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Landlock tests - base
+ *
+ * Copyright © 2017-2019 Mickaël Salaün <[email protected]>
+ */
+
+#define _GNU_SOURCE
+#include <errno.h>
+
+#include "test.h"
+
+TEST(seccomp_landlock)
+{
+ int ret;
+
+ ret = seccomp(SECCOMP_PREPEND_LANDLOCK_PROG, 0, NULL);
+ EXPECT_EQ(-1, ret);
+ EXPECT_EQ(EFAULT, errno) {
+ TH_LOG("Kernel does not support CONFIG_SECURITY_LANDLOCK");
+ }
+}
+
+TEST_HARNESS_MAIN
diff --git a/tools/testing/selftests/landlock/test_fs.c b/tools/testing/selftests/landlock/test_fs.c
new file mode 100644
index 000000000000..dba726ea4994
--- /dev/null
+++ b/tools/testing/selftests/landlock/test_fs.c
@@ -0,0 +1,257 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Landlock tests - file system
+ *
+ * Copyright © 2018-2019 Mickaël Salaün <[email protected]>
+ */
+
+#include <fcntl.h> /* O_DIRECTORY */
+#include <sys/stat.h> /* statbuf */
+#include <unistd.h> /* faccessat() */
+
+#include "test.h"
+
+#define TEST_PATH_TRIGGERS ( \
+ LANDLOCK_TRIGGER_FS_PICK_OPEN | \
+ LANDLOCK_TRIGGER_FS_PICK_READDIR | \
+ LANDLOCK_TRIGGER_FS_PICK_EXECUTE | \
+ LANDLOCK_TRIGGER_FS_PICK_GETATTR)
+
+static void test_path_rel(struct __test_metadata *_metadata, int dirfd,
+ const char *path, int ret)
+{
+ int fd;
+ struct stat statbuf;
+
+ ASSERT_EQ(ret, faccessat(dirfd, path, R_OK | X_OK, 0));
+ ASSERT_EQ(ret, fstatat(dirfd, path, &statbuf, 0));
+ fd = openat(dirfd, path, O_DIRECTORY);
+ if (ret) {
+ ASSERT_EQ(-1, fd);
+ } else {
+ ASSERT_NE(-1, fd);
+ EXPECT_EQ(0, close(fd));
+ }
+}
+
+static void test_path(struct __test_metadata *_metadata, const char *path,
+ int ret)
+{
+ test_path_rel(_metadata, AT_FDCWD, path, ret);
+}
+
+static const char d1[] = "/usr";
+static const char d2[] = "/usr/share";
+static const char d3[] = "/usr/share/doc";
+
+TEST(fs_base)
+{
+ test_path(_metadata, d1, 0);
+ test_path(_metadata, d2, 0);
+ test_path(_metadata, d3, 0);
+}
+
+#define MAP_VALUE_DENY 1
+
+static int create_denied_inode_map(struct __test_metadata *_metadata,
+ const char *const dirs[])
+{
+ int map, key, dirs_len, i;
+ __u64 value = MAP_VALUE_DENY;
+
+ ASSERT_NE(NULL, dirs) {
+ TH_LOG("No directory list\n");
+ }
+ ASSERT_NE(NULL, dirs[0]) {
+ TH_LOG("Empty directory list\n");
+ }
+
+ /* get the number of dir entries */
+ for (dirs_len = 0; dirs[dirs_len]; dirs_len++);
+ map = bpf_create_map(BPF_MAP_TYPE_INODE, sizeof(key), sizeof(value),
+ dirs_len, 0);
+ ASSERT_NE(-1, map) {
+ TH_LOG("Failed to create a map of %d elements: %s\n", dirs_len,
+ strerror(errno));
+ }
+
+ for (i = 0; dirs[i]; i++) {
+ key = open(dirs[i], O_RDONLY | O_CLOEXEC | O_DIRECTORY);
+ ASSERT_NE(-1, key) {
+ TH_LOG("Failed to open directory \"%s\": %s\n", dirs[i],
+ strerror(errno));
+ }
+ ASSERT_EQ(0, bpf_map_update_elem(map, &key, &value, BPF_ANY)) {
+ TH_LOG("Failed to update the map with \"%s\": %s\n",
+ dirs[i], strerror(errno));
+ }
+ close(key);
+ }
+ return map;
+}
+
+static void enforce_map(struct __test_metadata *_metadata, int map,
+ bool subpath)
+{
+ const struct bpf_insn prog_deny[] = {
+ BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_1),
+ /* look for the requested inode in the map */
+ BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_6,
+ offsetof(struct landlock_ctx_fs_walk, inode)),
+ BPF_LD_MAP_FD(BPF_REG_1, map), /* 2 instructions */
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+ BPF_FUNC_inode_map_lookup),
+ /* if it is there, then deny access to the inode, otherwise
+ * allow it */
+ BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, MAP_VALUE_DENY, 2),
+ BPF_MOV32_IMM(BPF_REG_0, LANDLOCK_RET_ALLOW),
+ BPF_EXIT_INSN(),
+ BPF_MOV32_IMM(BPF_REG_0, LANDLOCK_RET_DENY),
+ BPF_EXIT_INSN(),
+ };
+ union bpf_prog_subtype subtype = {};
+ int fd_walk = -1, fd_pick;
+ char log[1024] = "";
+
+ if (subpath) {
+ subtype.landlock_hook.type = LANDLOCK_HOOK_FS_WALK;
+ fd_walk = ll_bpf_load_program((const struct bpf_insn *)&prog_deny,
+ sizeof(prog_deny) / sizeof(struct bpf_insn),
+ log, sizeof(log), &subtype);
+ ASSERT_NE(-1, fd_walk) {
+ TH_LOG("Failed to load fs_walk program: %s\n%s",
+ strerror(errno), log);
+ }
+ ASSERT_EQ(0, seccomp(SECCOMP_PREPEND_LANDLOCK_PROG, 0, &fd_walk)) {
+ TH_LOG("Failed to apply Landlock program: %s", strerror(errno));
+ }
+ EXPECT_EQ(0, close(fd_walk));
+ }
+
+ subtype.landlock_hook.type = LANDLOCK_HOOK_FS_PICK;
+ subtype.landlock_hook.triggers = TEST_PATH_TRIGGERS;
+ fd_pick = ll_bpf_load_program((const struct bpf_insn *)&prog_deny,
+ sizeof(prog_deny) / sizeof(struct bpf_insn), log,
+ sizeof(log), &subtype);
+ ASSERT_NE(-1, fd_pick) {
+ TH_LOG("Failed to load fs_pick program: %s\n%s",
+ strerror(errno), log);
+ }
+ ASSERT_EQ(0, seccomp(SECCOMP_PREPEND_LANDLOCK_PROG, 0, &fd_pick)) {
+ TH_LOG("Failed to apply Landlock program: %s", strerror(errno));
+ }
+ EXPECT_EQ(0, close(fd_pick));
+}
+
+static void check_map_blacklist(struct __test_metadata *_metadata,
+ bool subpath)
+{
+ int map = create_denied_inode_map(_metadata, (const char *const [])
+ { d2, NULL });
+ ASSERT_NE(-1, map);
+ enforce_map(_metadata, map, subpath);
+ test_path(_metadata, d1, 0);
+ test_path(_metadata, d2, -1);
+ test_path(_metadata, d3, subpath ? -1 : 0);
+ EXPECT_EQ(0, close(map));
+}
+
+TEST(fs_map_blacklist_literal)
+{
+ check_map_blacklist(_metadata, false);
+}
+
+TEST(fs_map_blacklist_subpath)
+{
+ check_map_blacklist(_metadata, true);
+}
+
+static const char r2[] = ".";
+static const char r3[] = "./doc";
+
+enum relative_access {
+ REL_OPEN,
+ REL_CHDIR,
+ REL_CHROOT,
+};
+
+static void check_access(struct __test_metadata *_metadata,
+ bool enforce, enum relative_access rel)
+{
+ int dirfd;
+ int map = -1;
+
+ if (rel == REL_CHROOT)
+ ASSERT_NE(-1, chdir(d2));
+ if (enforce) {
+ map = create_denied_inode_map(_metadata, (const char *const [])
+ { d3, NULL });
+ ASSERT_NE(-1, map);
+ enforce_map(_metadata, map, true);
+ }
+ switch (rel) {
+ case REL_OPEN:
+ dirfd = open(d2, O_DIRECTORY);
+ ASSERT_NE(-1, dirfd);
+ break;
+ case REL_CHDIR:
+ ASSERT_NE(-1, chdir(d2));
+ dirfd = AT_FDCWD;
+ break;
+ case REL_CHROOT:
+ ASSERT_NE(-1, chroot(d2)) {
+ TH_LOG("Failed to chroot: %s\n", strerror(errno));
+ }
+ dirfd = AT_FDCWD;
+ break;
+ default:
+ ASSERT_TRUE(false);
+ return;
+ }
+
+ test_path_rel(_metadata, dirfd, r2, 0);
+ test_path_rel(_metadata, dirfd, r3, enforce ? -1 : 0);
+
+ if (rel == REL_OPEN)
+ EXPECT_EQ(0, close(dirfd));
+ if (enforce)
+ EXPECT_EQ(0, close(map));
+}
+
+TEST(fs_allow_open)
+{
+ /* no enforcement, via open */
+ check_access(_metadata, false, REL_OPEN);
+}
+
+TEST(fs_allow_chdir)
+{
+ /* no enforcement, via chdir */
+ check_access(_metadata, false, REL_CHDIR);
+}
+
+TEST(fs_allow_chroot)
+{
+ /* no enforcement, via chroot */
+ check_access(_metadata, false, REL_CHROOT);
+}
+
+TEST(fs_deny_open)
+{
+ /* enforcement without tag, via open */
+ check_access(_metadata, true, REL_OPEN);
+}
+
+TEST(fs_deny_chdir)
+{
+ /* enforcement without tag, via chdir */
+ check_access(_metadata, true, REL_CHDIR);
+}
+
+TEST(fs_deny_chroot)
+{
+ /* enforcement without tag, via chroot */
+ check_access(_metadata, true, REL_CHROOT);
+}
+
+TEST_HARNESS_MAIN
diff --git a/tools/testing/selftests/landlock/test_ptrace.c b/tools/testing/selftests/landlock/test_ptrace.c
new file mode 100644
index 000000000000..2f3e346288bc
--- /dev/null
+++ b/tools/testing/selftests/landlock/test_ptrace.c
@@ -0,0 +1,154 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Landlock tests - ptrace
+ *
+ * Copyright © 2017-2019 Mickaël Salaün <[email protected]>
+ */
+
+#define _GNU_SOURCE
+#include <signal.h> /* raise */
+#include <sys/ptrace.h>
+#include <sys/types.h> /* waitpid */
+#include <sys/wait.h> /* waitpid */
+#include <unistd.h> /* fork, pipe */
+
+#include "test.h"
+
+static void apply_null_sandbox(struct __test_metadata *_metadata)
+{
+ const struct bpf_insn prog_accept[] = {
+ BPF_MOV32_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ };
+ const union bpf_prog_subtype subtype = {
+ .landlock_hook = {
+ .type = LANDLOCK_HOOK_FS_PICK,
+ .triggers = LANDLOCK_TRIGGER_FS_PICK_OPEN,
+ }
+ };
+ int prog;
+ char log[256] = "";
+
+ prog = ll_bpf_load_program((const struct bpf_insn *)&prog_accept,
+ sizeof(prog_accept) / sizeof(struct bpf_insn), log,
+ sizeof(log), &subtype);
+ ASSERT_NE(-1, prog) {
+ TH_LOG("Failed to load minimal rule: %s\n%s",
+ strerror(errno), log);
+ }
+ ASSERT_EQ(0, seccomp(SECCOMP_PREPEND_LANDLOCK_PROG, 0, &prog)) {
+ TH_LOG("Failed to apply minimal rule: %s", strerror(errno));
+ }
+ EXPECT_EQ(0, close(prog));
+}
+
+/* test the effect of Landlock sandboxes on PTRACE_TRACEME and PTRACE_ATTACH */
+static void check_ptrace(struct __test_metadata *_metadata,
+ int sandbox_both, int sandbox_parent, int sandbox_child,
+ int expect_ptrace)
+{
+ pid_t child;
+ int status;
+ int pipefd[2];
+
+ ASSERT_EQ(0, pipe(pipefd));
+ if (sandbox_both)
+ apply_null_sandbox(_metadata);
+
+ child = fork();
+ ASSERT_LE(0, child);
+ if (child == 0) {
+ char buf;
+
+ EXPECT_EQ(0, close(pipefd[1]));
+ if (sandbox_child)
+ apply_null_sandbox(_metadata);
+
+ /* test traceme */
+ ASSERT_EQ(expect_ptrace, ptrace(PTRACE_TRACEME));
+ if (expect_ptrace) {
+ ASSERT_EQ(EPERM, errno);
+ } else {
+ ASSERT_EQ(0, raise(SIGSTOP));
+ }
+
+ /* sync */
+ ASSERT_EQ(1, read(pipefd[0], &buf, 1)) {
+ TH_LOG("Failed to read() sync from parent");
+ }
+ ASSERT_EQ('.', buf);
+ _exit(_metadata->passed ? EXIT_SUCCESS : EXIT_FAILURE);
+ }
+
+ EXPECT_EQ(0, close(pipefd[0]));
+ if (sandbox_parent)
+ apply_null_sandbox(_metadata);
+
+ /* test traceme */
+ if (!expect_ptrace) {
+ ASSERT_EQ(child, waitpid(child, &status, 0));
+ ASSERT_EQ(1, WIFSTOPPED(status));
+ ASSERT_EQ(0, ptrace(PTRACE_DETACH, child, NULL, 0));
+ }
+ /* test attach */
+ ASSERT_EQ(expect_ptrace, ptrace(PTRACE_ATTACH, child, NULL, 0));
+ if (expect_ptrace) {
+ ASSERT_EQ(EPERM, errno);
+ } else {
+ ASSERT_EQ(child, waitpid(child, &status, 0));
+ ASSERT_EQ(1, WIFSTOPPED(status));
+ ASSERT_EQ(0, ptrace(PTRACE_CONT, child, NULL, 0));
+ }
+
+ /* sync */
+ ASSERT_EQ(1, write(pipefd[1], ".", 1)) {
+ TH_LOG("Failed to write() sync to child");
+ }
+ ASSERT_EQ(child, waitpid(child, &status, 0));
+ if (WIFSIGNALED(status) || WEXITSTATUS(status))
+ _metadata->passed = 0;
+}
+
+TEST(ptrace_allow_without_sandbox)
+{
+ /* no sandbox */
+ check_ptrace(_metadata, 0, 0, 0, 0);
+}
+
+TEST(ptrace_allow_with_one_sandbox)
+{
+ /* child sandbox */
+ check_ptrace(_metadata, 0, 0, 1, 0);
+}
+
+TEST(ptrace_allow_with_nested_sandbox)
+{
+ /* inherited and child sandbox */
+ check_ptrace(_metadata, 1, 0, 1, 0);
+}
+
+TEST(ptrace_deny_with_parent_sandbox)
+{
+ /* parent sandbox */
+ check_ptrace(_metadata, 0, 1, 0, -1);
+}
+
+TEST(ptrace_deny_with_nested_and_parent_sandbox)
+{
+ /* inherited and parent sandbox */
+ check_ptrace(_metadata, 1, 1, 0, -1);
+}
+
+TEST(ptrace_deny_with_forked_sandbox)
+{
+ /* inherited, parent and child sandbox */
+ check_ptrace(_metadata, 1, 1, 1, -1);
+}
+
+TEST(ptrace_deny_with_sibling_sandbox)
+{
+ /* parent and child sandbox */
+ check_ptrace(_metadata, 0, 1, 1, -1);
+}
+
+TEST_HARNESS_MAIN
--
2.20.1

2019-06-25 22:14:43

by Mickaël Salaün

Subject: [PATCH bpf-next v9 03/10] bpf,landlock: Define an eBPF program type for Landlock hooks

Add a new type of eBPF program used by Landlock hooks. This type of
program can be chained with the same eBPF program type (according to
subtype rules). A state can be kept with a value available in the
program's context (e.g. named "cookie" for Landlock programs).

This new BPF program type will be registered with the Landlock LSM
initialization.

Add an initial Landlock Kconfig and update the MAINTAINERS file.

Signed-off-by: Mickaël Salaün <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: James Morris <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Serge E. Hallyn <[email protected]>
---

Changes since v8:
* Remove the chaining concept from the eBPF program contexts (chain and
cookie). We need to keep these subtypes this way to be able to make
them evolve, though.
* remove bpf_landlock_put_extra() because there is no longer a "previous"
field to free (for now)

Changes since v7:
* cosmetic fixes
* rename LANDLOCK_SUBTYPE_* to LANDLOCK_*
* cleanup UAPI definitions and move them from bpf.h to landlock.h
(suggested by Alexei Starovoitov)
* disable Landlock by default (suggested by Alexei Starovoitov)
* rename BPF_PROG_TYPE_LANDLOCK_{RULE,HOOK}
* update the Kconfig
* update the MAINTAINERS file
* replace the IOCTL, LOCK and FCNTL events with FS_PICK, FS_WALK and
FS_GET hook types
* add the ability to chain programs with an eBPF program file descriptor
(i.e. the "previous" field in a Landlock subtype) and keep a state
with a "cookie" value available from the context
* add a "triggers" subtype bitfield to match specific actions (e.g.
append, chdir, read...)

Changes since v6:
* add 3 more sub-events: IOCTL, LOCK, FCNTL
https://lkml.kernel.org/r/[email protected]
* rename LANDLOCK_VERSION to LANDLOCK_ABI to better reflect its purpose,
and move it from landlock.h to common.h
* rename BPF_PROG_TYPE_LANDLOCK to BPF_PROG_TYPE_LANDLOCK_RULE: an eBPF
program could be used for something else than a rule
* simplify struct landlock_context by removing the arch and syscall_nr fields
* remove all eBPF map functions call, remove ABILITY_WRITE
* refactor bpf_landlock_func_proto() (suggested by Kees Cook)
* constify pointers
* fix doc inclusion

Changes since v5:
* rename file hooks.c to init.c
* fix spelling

Changes since v4:
* merge a minimal (not enabled) LSM code and Kconfig in this commit

Changes since v3:
* split commit
* revamp the landlock_context:
* add arch, syscall_nr and syscall_cmd (ioctl, fcntl…) to be able to
cross-check action with the event type
* replace args array with dedicated fields to ease the addition of new
fields
---
MAINTAINERS | 13 ++++
include/linux/bpf_types.h | 3 +
include/uapi/linux/bpf.h | 1 +
include/uapi/linux/landlock.h | 109 +++++++++++++++++++++++++++
security/Kconfig | 1 +
security/Makefile | 2 +
security/landlock/Kconfig | 18 +++++
security/landlock/Makefile | 3 +
security/landlock/common.h | 26 +++++++
security/landlock/init.c | 110 ++++++++++++++++++++++++++++
tools/include/uapi/linux/bpf.h | 1 +
tools/include/uapi/linux/landlock.h | 109 +++++++++++++++++++++++++++
tools/lib/bpf/libbpf.c | 1 +
tools/lib/bpf/libbpf_probes.c | 1 +
14 files changed, 398 insertions(+)
create mode 100644 include/uapi/linux/landlock.h
create mode 100644 security/landlock/Kconfig
create mode 100644 security/landlock/Makefile
create mode 100644 security/landlock/common.h
create mode 100644 security/landlock/init.c
create mode 100644 tools/include/uapi/linux/landlock.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 606d1f80bc49..4a5edc14ee84 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -8807,6 +8807,19 @@ F: net/core/skmsg.c
F: net/core/sock_map.c
F: net/ipv4/tcp_bpf.c

+LANDLOCK SECURITY MODULE
+M: Mickaël Salaün <[email protected]>
+S: Supported
+F: Documentation/security/landlock/
+F: include/linux/landlock.h
+F: include/uapi/linux/landlock.h
+F: samples/bpf/landlock*
+F: security/landlock/
+F: tools/include/uapi/linux/landlock.h
+F: tools/testing/selftests/landlock/
+K: landlock
+K: LANDLOCK
+
LANTIQ / INTEL Ethernet drivers
M: Hauke Mehrtens <[email protected]>
L: [email protected]
diff --git a/include/linux/bpf_types.h b/include/linux/bpf_types.h
index 5a9975678d6f..dee8b82e31b1 100644
--- a/include/linux/bpf_types.h
+++ b/include/linux/bpf_types.h
@@ -37,6 +37,9 @@ BPF_PROG_TYPE(BPF_PROG_TYPE_LIRC_MODE2, lirc_mode2)
#ifdef CONFIG_INET
BPF_PROG_TYPE(BPF_PROG_TYPE_SK_REUSEPORT, sk_reuseport)
#endif
+#ifdef CONFIG_SECURITY_LANDLOCK
+BPF_PROG_TYPE(BPF_PROG_TYPE_LANDLOCK_HOOK, landlock)
+#endif

BPF_MAP_TYPE(BPF_MAP_TYPE_ARRAY, array_map_ops)
BPF_MAP_TYPE(BPF_MAP_TYPE_PERCPU_ARRAY, percpu_array_map_ops)
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index ddae50373d58..50145d448bc3 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -170,6 +170,7 @@ enum bpf_prog_type {
BPF_PROG_TYPE_FLOW_DISSECTOR,
BPF_PROG_TYPE_CGROUP_SYSCTL,
BPF_PROG_TYPE_RAW_TRACEPOINT_WRITABLE,
+ BPF_PROG_TYPE_LANDLOCK_HOOK,
};

enum bpf_attach_type {
diff --git a/include/uapi/linux/landlock.h b/include/uapi/linux/landlock.h
new file mode 100644
index 000000000000..9e6d8e10ec2c
--- /dev/null
+++ b/include/uapi/linux/landlock.h
@@ -0,0 +1,109 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+/*
+ * Landlock - UAPI headers
+ *
+ * Copyright © 2017-2019 Mickaël Salaün <[email protected]>
+ * Copyright © 2018-2019 ANSSI
+ */
+
+#ifndef _UAPI__LINUX_LANDLOCK_H__
+#define _UAPI__LINUX_LANDLOCK_H__
+
+#include <linux/types.h>
+
+#define LANDLOCK_RET_ALLOW 0
+#define LANDLOCK_RET_DENY 1
+
+/**
+ * enum landlock_hook_type - hook type for which a Landlock program is called
+ *
+ * A hook is a policy decision point which exposes the same context type for
+ * each program evaluation.
+ *
+ * @LANDLOCK_HOOK_FS_PICK: called for the last element of a file path
+ * @LANDLOCK_HOOK_FS_WALK: called for each directory of a file path (excluding
+ * the directory passed to fs_pick, if any)
+ */
+enum landlock_hook_type {
+ LANDLOCK_HOOK_FS_PICK = 1,
+ LANDLOCK_HOOK_FS_WALK,
+};
+
+/**
+ * DOC: landlock_triggers
+ *
+ * A landlock trigger is used as a bitmask in subtype.landlock_hook.triggers
+ * for a fs_pick program. It defines a set of actions for which the program
+ * should verify an access request.
+ *
+ * - %LANDLOCK_TRIGGER_FS_PICK_APPEND
+ * - %LANDLOCK_TRIGGER_FS_PICK_CHDIR
+ * - %LANDLOCK_TRIGGER_FS_PICK_CHROOT
+ * - %LANDLOCK_TRIGGER_FS_PICK_CREATE
+ * - %LANDLOCK_TRIGGER_FS_PICK_EXECUTE
+ * - %LANDLOCK_TRIGGER_FS_PICK_FCNTL
+ * - %LANDLOCK_TRIGGER_FS_PICK_GETATTR
+ * - %LANDLOCK_TRIGGER_FS_PICK_IOCTL
+ * - %LANDLOCK_TRIGGER_FS_PICK_LINK
+ * - %LANDLOCK_TRIGGER_FS_PICK_LINKTO
+ * - %LANDLOCK_TRIGGER_FS_PICK_LOCK
+ * - %LANDLOCK_TRIGGER_FS_PICK_MAP
+ * - %LANDLOCK_TRIGGER_FS_PICK_MOUNTON
+ * - %LANDLOCK_TRIGGER_FS_PICK_OPEN
+ * - %LANDLOCK_TRIGGER_FS_PICK_READ
+ * - %LANDLOCK_TRIGGER_FS_PICK_READDIR
+ * - %LANDLOCK_TRIGGER_FS_PICK_RECEIVE
+ * - %LANDLOCK_TRIGGER_FS_PICK_RENAME
+ * - %LANDLOCK_TRIGGER_FS_PICK_RENAMETO
+ * - %LANDLOCK_TRIGGER_FS_PICK_RMDIR
+ * - %LANDLOCK_TRIGGER_FS_PICK_SETATTR
+ * - %LANDLOCK_TRIGGER_FS_PICK_TRANSFER
+ * - %LANDLOCK_TRIGGER_FS_PICK_UNLINK
+ * - %LANDLOCK_TRIGGER_FS_PICK_WRITE
+ */
+#define LANDLOCK_TRIGGER_FS_PICK_APPEND (1ULL << 0)
+#define LANDLOCK_TRIGGER_FS_PICK_CHDIR (1ULL << 1)
+#define LANDLOCK_TRIGGER_FS_PICK_CHROOT (1ULL << 2)
+#define LANDLOCK_TRIGGER_FS_PICK_CREATE (1ULL << 3)
+#define LANDLOCK_TRIGGER_FS_PICK_EXECUTE (1ULL << 4)
+#define LANDLOCK_TRIGGER_FS_PICK_FCNTL (1ULL << 5)
+#define LANDLOCK_TRIGGER_FS_PICK_GETATTR (1ULL << 6)
+#define LANDLOCK_TRIGGER_FS_PICK_IOCTL (1ULL << 7)
+#define LANDLOCK_TRIGGER_FS_PICK_LINK (1ULL << 8)
+#define LANDLOCK_TRIGGER_FS_PICK_LINKTO (1ULL << 9)
+#define LANDLOCK_TRIGGER_FS_PICK_LOCK (1ULL << 10)
+#define LANDLOCK_TRIGGER_FS_PICK_MAP (1ULL << 11)
+#define LANDLOCK_TRIGGER_FS_PICK_MOUNTON (1ULL << 12)
+#define LANDLOCK_TRIGGER_FS_PICK_OPEN (1ULL << 13)
+#define LANDLOCK_TRIGGER_FS_PICK_READ (1ULL << 14)
+#define LANDLOCK_TRIGGER_FS_PICK_READDIR (1ULL << 15)
+#define LANDLOCK_TRIGGER_FS_PICK_RECEIVE (1ULL << 16)
+#define LANDLOCK_TRIGGER_FS_PICK_RENAME (1ULL << 17)
+#define LANDLOCK_TRIGGER_FS_PICK_RENAMETO (1ULL << 18)
+#define LANDLOCK_TRIGGER_FS_PICK_RMDIR (1ULL << 19)
+#define LANDLOCK_TRIGGER_FS_PICK_SETATTR (1ULL << 20)
+#define LANDLOCK_TRIGGER_FS_PICK_TRANSFER (1ULL << 21)
+#define LANDLOCK_TRIGGER_FS_PICK_UNLINK (1ULL << 22)
+#define LANDLOCK_TRIGGER_FS_PICK_WRITE (1ULL << 23)
+
+/**
+ * struct landlock_ctx_fs_pick - context accessible to a fs_pick program
+ *
+ * @inode: pointer to the current kernel object that can be used to compare
+ * inodes from an inode map.
+ */
+struct landlock_ctx_fs_pick {
+ __u64 inode;
+};
+
+/**
+ * struct landlock_ctx_fs_walk - context accessible to a fs_walk program
+ *
+ * @inode: pointer to the current kernel object that can be used to compare
+ * inodes from an inode map.
+ */
+struct landlock_ctx_fs_walk {
+ __u64 inode;
+};
+
+#endif /* _UAPI__LINUX_LANDLOCK_H__ */
diff --git a/security/Kconfig b/security/Kconfig
index 466cc1f8ffed..d3c070a01470 100644
--- a/security/Kconfig
+++ b/security/Kconfig
@@ -237,6 +237,7 @@ source "security/apparmor/Kconfig"
source "security/loadpin/Kconfig"
source "security/yama/Kconfig"
source "security/safesetid/Kconfig"
+source "security/landlock/Kconfig"

source "security/integrity/Kconfig"

diff --git a/security/Makefile b/security/Makefile
index c598b904938f..396ff107f70d 100644
--- a/security/Makefile
+++ b/security/Makefile
@@ -11,6 +11,7 @@ subdir-$(CONFIG_SECURITY_APPARMOR) += apparmor
subdir-$(CONFIG_SECURITY_YAMA) += yama
subdir-$(CONFIG_SECURITY_LOADPIN) += loadpin
subdir-$(CONFIG_SECURITY_SAFESETID) += safesetid
+subdir-$(CONFIG_SECURITY_LANDLOCK) += landlock

# always enable default capabilities
obj-y += commoncap.o
@@ -27,6 +28,7 @@ obj-$(CONFIG_SECURITY_APPARMOR) += apparmor/
obj-$(CONFIG_SECURITY_YAMA) += yama/
obj-$(CONFIG_SECURITY_LOADPIN) += loadpin/
obj-$(CONFIG_SECURITY_SAFESETID) += safesetid/
+obj-$(CONFIG_SECURITY_LANDLOCK) += landlock/
obj-$(CONFIG_CGROUP_DEVICE) += device_cgroup.o

# Object integrity file lists
diff --git a/security/landlock/Kconfig b/security/landlock/Kconfig
new file mode 100644
index 000000000000..8bd103102008
--- /dev/null
+++ b/security/landlock/Kconfig
@@ -0,0 +1,18 @@
+config SECURITY_LANDLOCK
+ bool "Landlock support"
+ depends on SECURITY
+ depends on BPF_SYSCALL
+ depends on SECCOMP_FILTER
+ default n
+ help
+ This selects Landlock, a programmatic access control. It enables to
+ restrict processes on the fly (i.e. create a sandbox). The security
+ policy is a set of eBPF programs, dedicated to deny a list of actions
+ on specific kernel objects (e.g. file).
+
+ You need to enable seccomp filter to apply a security policy to a
+ process hierarchy (e.g. application with built-in sandboxing).
+
+ See Documentation/security/landlock/ for further information.
+
+ If you are unsure how to answer this question, answer N.
diff --git a/security/landlock/Makefile b/security/landlock/Makefile
new file mode 100644
index 000000000000..7205f9a7a2ee
--- /dev/null
+++ b/security/landlock/Makefile
@@ -0,0 +1,3 @@
+obj-$(CONFIG_SECURITY_LANDLOCK) := landlock.o
+
+landlock-y := init.o
diff --git a/security/landlock/common.h b/security/landlock/common.h
new file mode 100644
index 000000000000..fd63ed1592a7
--- /dev/null
+++ b/security/landlock/common.h
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Landlock LSM - private headers
+ *
+ * Copyright © 2016-2019 Mickaël Salaün <[email protected]>
+ * Copyright © 2018-2019 ANSSI
+ */
+
+#ifndef _SECURITY_LANDLOCK_COMMON_H
+#define _SECURITY_LANDLOCK_COMMON_H
+
+#include <linux/bpf.h> /* enum bpf_prog_aux */
+#include <linux/filter.h> /* bpf_prog */
+#include <linux/refcount.h> /* refcount_t */
+#include <uapi/linux/landlock.h> /* enum landlock_hook_type */
+
+#define LANDLOCK_NAME "landlock"
+
+/* UAPI bounds and bitmasks */
+
+#define _LANDLOCK_HOOK_LAST LANDLOCK_HOOK_FS_WALK
+
+#define _LANDLOCK_TRIGGER_FS_PICK_LAST LANDLOCK_TRIGGER_FS_PICK_WRITE
+#define _LANDLOCK_TRIGGER_FS_PICK_MASK ((_LANDLOCK_TRIGGER_FS_PICK_LAST << 1ULL) - 1)
+
+#endif /* _SECURITY_LANDLOCK_COMMON_H */
diff --git a/security/landlock/init.c b/security/landlock/init.c
new file mode 100644
index 000000000000..03073cd0fc4e
--- /dev/null
+++ b/security/landlock/init.c
@@ -0,0 +1,110 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Landlock LSM - init
+ *
+ * Copyright © 2016-2019 Mickaël Salaün <[email protected]>
+ * Copyright © 2018-2019 ANSSI
+ */
+
+#include <linux/bpf.h> /* enum bpf_access_type */
+#include <linux/capability.h> /* capable */
+#include <linux/filter.h> /* struct bpf_prog */
+
+#include "common.h" /* LANDLOCK_* */
+
+static bool bpf_landlock_is_valid_access(int off, int size,
+ enum bpf_access_type type, const struct bpf_prog *prog,
+ struct bpf_insn_access_aux *info)
+{
+ const union bpf_prog_subtype *prog_subtype;
+ enum bpf_reg_type reg_type = NOT_INIT;
+ int max_size = 0;
+
+ if (WARN_ON(!prog->aux->extra))
+ return false;
+ prog_subtype = &prog->aux->extra->subtype;
+
+ if (off < 0)
+ return false;
+ if (size <= 0 || size > sizeof(__u64))
+ return false;
+
+ /* check memory range access */
+ switch (reg_type) {
+ case NOT_INIT:
+ return false;
+ case SCALAR_VALUE:
+ /* allow partial raw value */
+ if (size > max_size)
+ return false;
+ info->ctx_field_size = max_size;
+ break;
+ default:
+ /* deny partial pointer */
+ if (size != max_size)
+ return false;
+ }
+
+ info->reg_type = reg_type;
+ return true;
+}
+
+static bool bpf_landlock_is_valid_subtype(struct bpf_prog_extra *prog_extra)
+{
+ const union bpf_prog_subtype *subtype;
+
+ if (!prog_extra)
+ return false;
+ subtype = &prog_extra->subtype;
+
+ switch (subtype->landlock_hook.type) {
+ case LANDLOCK_HOOK_FS_PICK:
+ if (!subtype->landlock_hook.triggers ||
+ subtype->landlock_hook.triggers &
+ ~_LANDLOCK_TRIGGER_FS_PICK_MASK)
+ return false;
+ break;
+ case LANDLOCK_HOOK_FS_WALK:
+ if (subtype->landlock_hook.triggers)
+ return false;
+ break;
+ default:
+ return false;
+ }
+
+ return true;
+}
+
+static const struct bpf_func_proto *bpf_landlock_func_proto(
+ enum bpf_func_id func_id,
+ const struct bpf_prog *prog)
+{
+ u64 hook_type;
+
+ if (WARN_ON(!prog->aux->extra))
+ return NULL;
+ hook_type = prog->aux->extra->subtype.landlock_hook.type;
+
+ /* generic functions */
+ /* TODO: do we need/want update/delete functions for every LL prog?
+ * => impurity vs. audit */
+ switch (func_id) {
+ case BPF_FUNC_map_lookup_elem:
+ return &bpf_map_lookup_elem_proto;
+ case BPF_FUNC_map_update_elem:
+ return &bpf_map_update_elem_proto;
+ case BPF_FUNC_map_delete_elem:
+ return &bpf_map_delete_elem_proto;
+ default:
+ break;
+ }
+ return NULL;
+}
+
+const struct bpf_verifier_ops landlock_verifier_ops = {
+ .get_func_proto = bpf_landlock_func_proto,
+ .is_valid_access = bpf_landlock_is_valid_access,
+ .is_valid_subtype = bpf_landlock_is_valid_subtype,
+};
+
+const struct bpf_prog_ops landlock_prog_ops = {};
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index ddae50373d58..50145d448bc3 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -170,6 +170,7 @@ enum bpf_prog_type {
BPF_PROG_TYPE_FLOW_DISSECTOR,
BPF_PROG_TYPE_CGROUP_SYSCTL,
BPF_PROG_TYPE_RAW_TRACEPOINT_WRITABLE,
+ BPF_PROG_TYPE_LANDLOCK_HOOK,
};

enum bpf_attach_type {
diff --git a/tools/include/uapi/linux/landlock.h b/tools/include/uapi/linux/landlock.h
new file mode 100644
index 000000000000..9e6d8e10ec2c
--- /dev/null
+++ b/tools/include/uapi/linux/landlock.h
@@ -0,0 +1,109 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+/*
+ * Landlock - UAPI headers
+ *
+ * Copyright © 2017-2019 Mickaël Salaün <[email protected]>
+ * Copyright © 2018-2019 ANSSI
+ */
+
+#ifndef _UAPI__LINUX_LANDLOCK_H__
+#define _UAPI__LINUX_LANDLOCK_H__
+
+#include <linux/types.h>
+
+#define LANDLOCK_RET_ALLOW 0
+#define LANDLOCK_RET_DENY 1
+
+/**
+ * enum landlock_hook_type - hook type for which a Landlock program is called
+ *
+ * A hook is a policy decision point which exposes the same context type for
+ * each program evaluation.
+ *
+ * @LANDLOCK_HOOK_FS_PICK: called for the last element of a file path
+ * @LANDLOCK_HOOK_FS_WALK: called for each directory of a file path (excluding
+ * the directory passed to fs_pick, if any)
+ */
+enum landlock_hook_type {
+ LANDLOCK_HOOK_FS_PICK = 1,
+ LANDLOCK_HOOK_FS_WALK,
+};
+
+/**
+ * DOC: landlock_triggers
+ *
+ * A landlock trigger is used as a bitmask in subtype.landlock_hook.triggers
+ * for a fs_pick program. It defines a set of actions for which the program
+ * should verify an access request.
+ *
+ * - %LANDLOCK_TRIGGER_FS_PICK_APPEND
+ * - %LANDLOCK_TRIGGER_FS_PICK_CHDIR
+ * - %LANDLOCK_TRIGGER_FS_PICK_CHROOT
+ * - %LANDLOCK_TRIGGER_FS_PICK_CREATE
+ * - %LANDLOCK_TRIGGER_FS_PICK_EXECUTE
+ * - %LANDLOCK_TRIGGER_FS_PICK_FCNTL
+ * - %LANDLOCK_TRIGGER_FS_PICK_GETATTR
+ * - %LANDLOCK_TRIGGER_FS_PICK_IOCTL
+ * - %LANDLOCK_TRIGGER_FS_PICK_LINK
+ * - %LANDLOCK_TRIGGER_FS_PICK_LINKTO
+ * - %LANDLOCK_TRIGGER_FS_PICK_LOCK
+ * - %LANDLOCK_TRIGGER_FS_PICK_MAP
+ * - %LANDLOCK_TRIGGER_FS_PICK_MOUNTON
+ * - %LANDLOCK_TRIGGER_FS_PICK_OPEN
+ * - %LANDLOCK_TRIGGER_FS_PICK_READ
+ * - %LANDLOCK_TRIGGER_FS_PICK_READDIR
+ * - %LANDLOCK_TRIGGER_FS_PICK_RECEIVE
+ * - %LANDLOCK_TRIGGER_FS_PICK_RENAME
+ * - %LANDLOCK_TRIGGER_FS_PICK_RENAMETO
+ * - %LANDLOCK_TRIGGER_FS_PICK_RMDIR
+ * - %LANDLOCK_TRIGGER_FS_PICK_SETATTR
+ * - %LANDLOCK_TRIGGER_FS_PICK_TRANSFER
+ * - %LANDLOCK_TRIGGER_FS_PICK_UNLINK
+ * - %LANDLOCK_TRIGGER_FS_PICK_WRITE
+ */
+#define LANDLOCK_TRIGGER_FS_PICK_APPEND (1ULL << 0)
+#define LANDLOCK_TRIGGER_FS_PICK_CHDIR (1ULL << 1)
+#define LANDLOCK_TRIGGER_FS_PICK_CHROOT (1ULL << 2)
+#define LANDLOCK_TRIGGER_FS_PICK_CREATE (1ULL << 3)
+#define LANDLOCK_TRIGGER_FS_PICK_EXECUTE (1ULL << 4)
+#define LANDLOCK_TRIGGER_FS_PICK_FCNTL (1ULL << 5)
+#define LANDLOCK_TRIGGER_FS_PICK_GETATTR (1ULL << 6)
+#define LANDLOCK_TRIGGER_FS_PICK_IOCTL (1ULL << 7)
+#define LANDLOCK_TRIGGER_FS_PICK_LINK (1ULL << 8)
+#define LANDLOCK_TRIGGER_FS_PICK_LINKTO (1ULL << 9)
+#define LANDLOCK_TRIGGER_FS_PICK_LOCK (1ULL << 10)
+#define LANDLOCK_TRIGGER_FS_PICK_MAP (1ULL << 11)
+#define LANDLOCK_TRIGGER_FS_PICK_MOUNTON (1ULL << 12)
+#define LANDLOCK_TRIGGER_FS_PICK_OPEN (1ULL << 13)
+#define LANDLOCK_TRIGGER_FS_PICK_READ (1ULL << 14)
+#define LANDLOCK_TRIGGER_FS_PICK_READDIR (1ULL << 15)
+#define LANDLOCK_TRIGGER_FS_PICK_RECEIVE (1ULL << 16)
+#define LANDLOCK_TRIGGER_FS_PICK_RENAME (1ULL << 17)
+#define LANDLOCK_TRIGGER_FS_PICK_RENAMETO (1ULL << 18)
+#define LANDLOCK_TRIGGER_FS_PICK_RMDIR (1ULL << 19)
+#define LANDLOCK_TRIGGER_FS_PICK_SETATTR (1ULL << 20)
+#define LANDLOCK_TRIGGER_FS_PICK_TRANSFER (1ULL << 21)
+#define LANDLOCK_TRIGGER_FS_PICK_UNLINK (1ULL << 22)
+#define LANDLOCK_TRIGGER_FS_PICK_WRITE (1ULL << 23)
+
+/**
+ * struct landlock_ctx_fs_pick - context accessible to a fs_pick program
+ *
+ * @inode: pointer to the current kernel object that can be used to compare
+ * inodes from an inode map.
+ */
+struct landlock_ctx_fs_pick {
+ __u64 inode;
+};
+
+/**
+ * struct landlock_ctx_fs_walk - context accessible to a fs_walk program
+ *
+ * @inode: pointer to the current kernel object that can be used to compare
+ * inodes from an inode map.
+ */
+struct landlock_ctx_fs_walk {
+ __u64 inode;
+};
+
+#endif /* _UAPI__LINUX_LANDLOCK_H__ */
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 68f45a96769f..1b99c8da7a67 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -2646,6 +2646,7 @@ static bool bpf_prog_type__needs_kver(enum bpf_prog_type type)
case BPF_PROG_TYPE_RAW_TRACEPOINT_WRITABLE:
case BPF_PROG_TYPE_PERF_EVENT:
case BPF_PROG_TYPE_CGROUP_SYSCTL:
+ case BPF_PROG_TYPE_LANDLOCK_HOOK:
return false;
case BPF_PROG_TYPE_KPROBE:
default:
diff --git a/tools/lib/bpf/libbpf_probes.c b/tools/lib/bpf/libbpf_probes.c
index 6635a31a7a16..f4f34cb8869a 100644
--- a/tools/lib/bpf/libbpf_probes.c
+++ b/tools/lib/bpf/libbpf_probes.c
@@ -101,6 +101,7 @@ probe_load(enum bpf_prog_type prog_type, const struct bpf_insn *insns,
case BPF_PROG_TYPE_SK_REUSEPORT:
case BPF_PROG_TYPE_FLOW_DISSECTOR:
case BPF_PROG_TYPE_CGROUP_SYSCTL:
+ case BPF_PROG_TYPE_LANDLOCK_HOOK:
default:
break;
}
--
2.20.1
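The subtype checks in bpf_landlock_is_valid_subtype() above reduce to a bitmask test built from _LANDLOCK_TRIGGER_FS_PICK_MASK in common.h. A minimal user-space model of that validation logic (constants copied from the UAPI header in this patch) might look like:

```c
#include <stdbool.h>
#include <stdint.h>

/* Constants copied from include/uapi/linux/landlock.h in this patch. */
#define LANDLOCK_HOOK_FS_PICK 1
#define LANDLOCK_HOOK_FS_WALK 2
#define LANDLOCK_TRIGGER_FS_PICK_OPEN (1ULL << 13)
#define LANDLOCK_TRIGGER_FS_PICK_WRITE (1ULL << 23)

/* All 24 defined trigger bits: ((last << 1) - 1), as in common.h */
#define _LANDLOCK_TRIGGER_FS_PICK_MASK \
	((LANDLOCK_TRIGGER_FS_PICK_WRITE << 1) - 1)

/* Mirrors the logic of bpf_landlock_is_valid_subtype(). */
static bool is_valid_subtype(uint32_t hook_type, uint64_t triggers)
{
	switch (hook_type) {
	case LANDLOCK_HOOK_FS_PICK:
		/* fs_pick needs at least one trigger, all of them known */
		return triggers &&
			!(triggers & ~_LANDLOCK_TRIGGER_FS_PICK_MASK);
	case LANDLOCK_HOOK_FS_WALK:
		/* fs_walk takes no triggers */
		return !triggers;
	default:
		return false;
	}
}
```

This is only a sketch of the verifier-side check: a fs_pick program must declare at least one known trigger, a fs_walk program must declare none, and any unknown hook type or undefined trigger bit is rejected at program load time.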

2019-06-25 22:15:52

by Mickaël Salaün

Subject: [PATCH bpf-next v9 05/10] bpf,landlock: Add a new map type: inode

This new map type stores arbitrary 64-bit values referenced by inode keys.
The map can be updated from user space with file descriptors pointing to
inodes tied to a file system. From an eBPF (Landlock) program point of
view, such a map is read-only and can only be used to retrieve a 64-bit
value tied to a given inode. This is useful to recognize an inode tagged
by user space, without needing any write access to this inode.

Add dedicated BPF functions to handle this type of map:
* bpf_inode_map_update_elem()
* bpf_inode_map_lookup_elem()
* bpf_inode_map_delete_elem()
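The map implemented in kernel/bpf/inodemap.c below is a pre-allocated array walked linearly, with a nb_entries counter to detect a full map. A stripped-down user-space model of these semantics (pointer-sized integers stand in for struct inode pointers, and the reference counting is omitted) might look like:

```c
#include <stddef.h>
#include <stdint.h>

/* Model of struct inode_array: pre-allocated slots, linear scan. */
struct elem { uintptr_t inode; uint64_t value; };
struct inode_array {
	size_t max_entries;
	size_t nb_entries;
	struct elem *elems; /* zero-initialized, as bpf_map_area_alloc() */
};

/* Mirrors sys_inode_map_update_elem(): take the first free slot. */
static int map_update(struct inode_array *a, uintptr_t inode, uint64_t value)
{
	size_t i;

	if (a->nb_entries >= a->max_entries)
		return -1; /* -E2BIG in the kernel code */
	for (i = 0; i < a->max_entries; i++) {
		if (!a->elems[i].inode) {
			a->elems[i].inode = inode;
			a->elems[i].value = value;
			a->nb_entries++;
			return 0;
		}
	}
	return -1;
}

/* Mirrors inode_map_lookup_elem(): 0 means "not found". */
static uint64_t map_lookup(const struct inode_array *a, uintptr_t inode)
{
	size_t i;

	for (i = 0; i < a->max_entries; i++)
		if (a->elems[i].inode == inode)
			return a->elems[i].value;
	return 0;
}

/* Mirrors sys_inode_map_delete_elem(). */
static int map_delete(struct inode_array *a, uintptr_t inode)
{
	size_t i;

	for (i = 0; i < a->max_entries; i++) {
		if (a->elems[i].inode == inode) {
			a->elems[i].inode = 0;
			a->nb_entries--;
			return 0;
		}
	}
	return -1; /* -ENOENT in the kernel code */
}
```

Note that, as in the kernel version, a stored value of 0 is indistinguishable from a missing entry when looking up from an eBPF program.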

Signed-off-by: Mickaël Salaün <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: James Morris <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Serge E. Hallyn <[email protected]>
Cc: Jann Horn <[email protected]>
---

Changes since v8:
* remove prog chaining and object tagging to ease review
* use bpf_map_init_from_attr()

Changes since v7:
* new design with a dedicated map and a BPF function to tie a value to
an inode
* add the ability to set or get a tag on an inode from a Landlock
program

Changes since v6:
* remove WARN_ON() for missing dentry->d_inode
* refactor bpf_landlock_func_proto() (suggested by Kees Cook)

Changes since v5:
* cosmetic fixes and rebase

Changes since v4:
* use a file abstraction (handle) to wrap inode, dentry, path and file
structs
* remove bpf_landlock_cmp_fs_beneath()
* rename the BPF helper and move it to kernel/bpf/
* tighten helpers accessible by a Landlock rule

Changes since v3:
* remove bpf_landlock_cmp_fs_prop() (suggested by Alexei Starovoitov)
* add hooks dealing with struct inode and struct path pointers:
inode_permission and inode_getattr
* add abstraction over eBPF helper arguments thanks to wrapping structs
* add bpf_landlock_get_fs_mode() helper to check file type and mode
* merge WARN_ON() (suggested by Kees Cook)
* fix and update bpf_helpers.h
* use BPF_CALL_* for eBPF helpers (suggested by Alexei Starovoitov)
* make handle arraymap safe (RCU) and remove buggy synchronize_rcu()
* factor out the arraymap walk
* use size_t to index array (suggested by Jann Horn)

Changes since v2:
* add MNT_INTERNAL check to only add file handle from user-visible FS
(e.g. no anonymous inode)
* replace struct file* with struct path* in map_landlock_handle
* add BPF protos
* fix bpf_landlock_cmp_fs_prop_with_struct_file()
---
include/linux/bpf.h | 9 +
include/linux/bpf_types.h | 3 +
include/uapi/linux/bpf.h | 12 +-
kernel/bpf/Makefile | 3 +
kernel/bpf/core.c | 2 +
kernel/bpf/inodemap.c | 315 +++++++++++++++++++++++++++++++++
kernel/bpf/syscall.c | 27 ++-
kernel/bpf/verifier.c | 14 ++
tools/include/uapi/linux/bpf.h | 12 +-
tools/lib/bpf/libbpf_probes.c | 1 +
10 files changed, 395 insertions(+), 3 deletions(-)
create mode 100644 kernel/bpf/inodemap.c

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index da167d3afecc..cc72ec18f0f6 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -208,6 +208,8 @@ enum bpf_arg_type {
ARG_PTR_TO_INT, /* pointer to int */
ARG_PTR_TO_LONG, /* pointer to long */
ARG_PTR_TO_SOCKET, /* pointer to bpf_sock (fullsock) */
+
+ ARG_PTR_TO_INODE, /* pointer to a struct inode */
};

/* type of values returned from helper functions */
@@ -278,6 +280,7 @@ enum bpf_reg_type {
PTR_TO_TCP_SOCK_OR_NULL, /* reg points to struct tcp_sock or NULL */
PTR_TO_TP_BUFFER, /* reg points to a writable raw tp's buffer */
PTR_TO_XDP_SOCK, /* reg points to struct xdp_sock */
+ PTR_TO_INODE, /* reg points to struct inode */
};

/* The information passed from prog-specific *_is_valid_access
@@ -485,6 +488,7 @@ struct bpf_event_entry {
struct rcu_head rcu;
};

+
bool bpf_prog_array_compatible(struct bpf_array *array, const struct bpf_prog *fp);
int bpf_prog_calc_tag(struct bpf_prog *fp);

@@ -689,6 +693,10 @@ int bpf_fd_array_map_lookup_elem(struct bpf_map *map, void *key, u32 *value);
int bpf_fd_htab_map_update_elem(struct bpf_map *map, struct file *map_file,
void *key, void *value, u64 map_flags);
int bpf_fd_htab_map_lookup_elem(struct bpf_map *map, void *key, u32 *value);
+int bpf_inode_map_update_elem(struct bpf_map *map, int *key, u64 *value,
+ u64 flags);
+int bpf_inode_map_lookup_elem(struct bpf_map *map, int *key, u64 *value);
+int bpf_inode_map_delete_elem(struct bpf_map *map, int *key);

int bpf_get_file_flag(int flags);
int bpf_check_uarg_tail_zero(void __user *uaddr, size_t expected_size,
@@ -1059,6 +1067,7 @@ extern const struct bpf_func_proto bpf_spin_unlock_proto;
extern const struct bpf_func_proto bpf_get_local_storage_proto;
extern const struct bpf_func_proto bpf_strtol_proto;
extern const struct bpf_func_proto bpf_strtoul_proto;
+extern const struct bpf_func_proto bpf_inode_map_lookup_proto;

/* Shared helpers among cBPF and eBPF. */
void bpf_user_rnd_init_once(void);
diff --git a/include/linux/bpf_types.h b/include/linux/bpf_types.h
index dee8b82e31b1..9e385473b57a 100644
--- a/include/linux/bpf_types.h
+++ b/include/linux/bpf_types.h
@@ -79,3 +79,6 @@ BPF_MAP_TYPE(BPF_MAP_TYPE_REUSEPORT_SOCKARRAY, reuseport_array_ops)
#endif
BPF_MAP_TYPE(BPF_MAP_TYPE_QUEUE, queue_map_ops)
BPF_MAP_TYPE(BPF_MAP_TYPE_STACK, stack_map_ops)
+#ifdef CONFIG_SECURITY_LANDLOCK
+BPF_MAP_TYPE(BPF_MAP_TYPE_INODE, inode_ops)
+#endif
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 50145d448bc3..08ff720835ba 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -134,6 +134,7 @@ enum bpf_map_type {
BPF_MAP_TYPE_QUEUE,
BPF_MAP_TYPE_STACK,
BPF_MAP_TYPE_SK_STORAGE,
+ BPF_MAP_TYPE_INODE,
};

/* Note that tracing related programs such as
@@ -2716,6 +2717,14 @@ union bpf_attr {
* **-EPERM** if no permission to send the *sig*.
*
* **-EAGAIN** if bpf program can try again.
+ *
+ * u64 bpf_inode_map_lookup(map, key)
+ * Description
+ * Perform a lookup in *map* for an entry associated to an inode
+ * *key*.
+ * Return
+ * Map value associated to *key*, or **NULL** if no entry was
+ * found.
*/
#define __BPF_FUNC_MAPPER(FN) \
FN(unspec), \
@@ -2827,7 +2836,8 @@ union bpf_attr {
FN(strtoul), \
FN(sk_storage_get), \
FN(sk_storage_delete), \
- FN(send_signal),
+ FN(send_signal), \
+ FN(inode_map_lookup),

/* integer value in 'imm' field of BPF_CALL instruction selects which helper
* function eBPF program intends to call
diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
index 29d781061cd5..e6fe613b3105 100644
--- a/kernel/bpf/Makefile
+++ b/kernel/bpf/Makefile
@@ -22,3 +22,6 @@ obj-$(CONFIG_CGROUP_BPF) += cgroup.o
ifeq ($(CONFIG_INET),y)
obj-$(CONFIG_BPF_SYSCALL) += reuseport_array.o
endif
+ifeq ($(CONFIG_SECURITY_LANDLOCK),y)
+obj-$(CONFIG_BPF_SYSCALL) += inodemap.o
+endif
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 8ad392e52328..3cf5d16a8496 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -2032,6 +2032,8 @@ const struct bpf_func_proto bpf_get_current_comm_proto __weak;
const struct bpf_func_proto bpf_get_current_cgroup_id_proto __weak;
const struct bpf_func_proto bpf_get_local_storage_proto __weak;

+const struct bpf_func_proto bpf_inode_map_update_proto __weak;
+
const struct bpf_func_proto * __weak bpf_get_trace_printk_proto(void)
{
return NULL;
diff --git a/kernel/bpf/inodemap.c b/kernel/bpf/inodemap.c
new file mode 100644
index 000000000000..fcad0de51557
--- /dev/null
+++ b/kernel/bpf/inodemap.c
@@ -0,0 +1,315 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * inode map for Landlock
+ *
+ * Copyright © 2017-2019 Mickaël Salaün <[email protected]>
+ * Copyright © 2019 ANSSI
+ */
+
+#include <asm/resource.h> /* RLIMIT_NOFILE */
+#include <linux/bpf.h>
+#include <linux/err.h>
+#include <linux/file.h> /* fput() */
+#include <linux/filter.h> /* BPF_CALL_2() */
+#include <linux/fs.h> /* struct file */
+#include <linux/mm.h>
+#include <linux/mount.h> /* MNT_INTERNAL */
+#include <linux/path.h> /* struct path */
+#include <linux/sched/signal.h> /* rlimit() */
+#include <linux/security.h>
+#include <linux/slab.h>
+
+struct inode_elem {
+ struct inode *inode;
+ u64 value;
+};
+
+struct inode_array {
+ struct bpf_map map;
+ size_t nb_entries;
+ struct inode_elem elems[0];
+};
+
+/* must call iput(inode) after this call */
+static struct inode *inode_from_fd(int ufd, bool check_access)
+{
+ struct inode *ret;
+ struct fd f;
+ int deny;
+
+ f = fdget(ufd);
+ if (unlikely(!f.file || !file_inode(f.file))) {
+ ret = ERR_PTR(-EBADF);
+ goto put_fd;
+ }
+ /* TODO: add this check when called from an eBPF program too (already
+ * checked by the LSM parent hooks anyway) */
+ if (unlikely(IS_PRIVATE(file_inode(f.file)))) {
+ ret = ERR_PTR(-EINVAL);
+ goto put_fd;
+ }
+ /* check if the FD is tied to a mount point */
+ /* TODO: add this check when called from an eBPF program too */
+ if (unlikely(!f.file->f_path.mnt || f.file->f_path.mnt->mnt_flags &
+ MNT_INTERNAL)) {
+ ret = ERR_PTR(-EINVAL);
+ goto put_fd;
+ }
+ if (check_access) {
+ /*
+ * need to be allowed to access attributes from this file to
+ * then be able to compare an inode to this entry
+ */
+ deny = security_inode_getattr(&f.file->f_path);
+ if (deny) {
+ ret = ERR_PTR(deny);
+ goto put_fd;
+ }
+ }
+ ret = file_inode(f.file);
+ ihold(ret);
+
+put_fd:
+ fdput(f);
+ return ret;
+}
+
+/* (never) called from eBPF program */
+static int fake_map_delete_elem(struct bpf_map *map, void *key)
+{
+ WARN_ON(1);
+ return -EINVAL;
+}
+
+/* called from syscall */
+static int sys_inode_map_delete_elem(struct bpf_map *map, struct inode *key)
+{
+ struct inode_array *array = container_of(map, struct inode_array, map);
+ struct inode *inode;
+ int i;
+
+ WARN_ON_ONCE(!rcu_read_lock_held());
+ for (i = 0; i < array->map.max_entries; i++) {
+ if (array->elems[i].inode == key) {
+ inode = xchg(&array->elems[i].inode, NULL);
+ array->nb_entries--;
+ iput(inode);
+ return 0;
+ }
+ }
+ return -ENOENT;
+}
+
+/* called from syscall */
+int bpf_inode_map_delete_elem(struct bpf_map *map, int *key)
+{
+ struct inode *inode;
+ int err;
+
+ inode = inode_from_fd(*key, false);
+ if (IS_ERR(inode))
+ return PTR_ERR(inode);
+ err = sys_inode_map_delete_elem(map, inode);
+ iput(inode);
+ return err;
+}
+
+static void inode_map_free(struct bpf_map *map)
+{
+ struct inode_array *array = container_of(map, struct inode_array, map);
+ int i;
+
+ synchronize_rcu();
+ for (i = 0; i < array->map.max_entries; i++)
+ iput(array->elems[i].inode);
+ bpf_map_area_free(array);
+}
+
+static struct bpf_map *inode_map_alloc(union bpf_attr *attr)
+{
+ int numa_node = bpf_map_attr_numa_node(attr);
+ struct inode_array *array;
+ u64 array_size;
+
+ /* only allow root to create this type of map (for now), should be
+ * removed when Landlock will be usable by unprivileged users */
+ if (!capable(CAP_SYS_ADMIN))
+ return ERR_PTR(-EPERM);
+
+ /* the key is a file descriptor and the value must be 64-bits (for
+ * now) */
+ if (attr->max_entries == 0 || attr->key_size != sizeof(u32) ||
+ attr->value_size != FIELD_SIZEOF(struct inode_elem, value) ||
+ attr->map_flags & ~(BPF_F_RDONLY | BPF_F_WRONLY) ||
+ numa_node != NUMA_NO_NODE)
+ return ERR_PTR(-EINVAL);
+
+ if (attr->value_size > KMALLOC_MAX_SIZE)
+ /* if value_size is bigger, the user space won't be able to
+ * access the elements.
+ */
+ return ERR_PTR(-E2BIG);
+
+ /*
+ * Limit number of entries in an inode map to the maximum number of
+ * open files for the current process. The maximum number of file
+ * references (including all inode maps) for a process is then
+ * (RLIMIT_NOFILE - 1) * RLIMIT_NOFILE. If the process' RLIMIT_NOFILE
+ * is 0, then any entry update is forbidden.
+ *
+ * An eBPF program can inherit all the inode map FDs. The worst case is
+ * to fill a bunch of arraymaps, create an eBPF program, close the
+ * inode map FDs, and start again. The maximum number of inode map
+ * entries can then be close to RLIMIT_NOFILE^3.
+ */
+ if (attr->max_entries > rlimit(RLIMIT_NOFILE))
+ return ERR_PTR(-EMFILE);
+
+ array_size = sizeof(*array);
+ array_size += (u64) attr->max_entries * sizeof(struct inode_elem);
+
+ /* make sure there is no u32 overflow later in round_up() */
+ if (array_size >= U32_MAX - PAGE_SIZE)
+ return ERR_PTR(-ENOMEM);
+
+ /* allocate all map elements and zero-initialize them */
+ array = bpf_map_area_alloc(array_size, numa_node);
+ if (!array)
+ return ERR_PTR(-ENOMEM);
+
+ /* copy mandatory map attributes */
+ bpf_map_init_from_attr(&array->map, attr);
+ array->map.memory.pages = round_up(array_size, PAGE_SIZE) >> PAGE_SHIFT;
+
+ return &array->map;
+}
+
+/* (never) called from eBPF program */
+static void *fake_map_lookup_elem(struct bpf_map *map, void *key)
+{
+ WARN_ON(1);
+ return ERR_PTR(-EINVAL);
+}
+
+/* called from syscall (wrapped) and eBPF program */
+static u64 inode_map_lookup_elem(struct bpf_map *map, struct inode *key)
+{
+ struct inode_array *array = container_of(map, struct inode_array, map);
+ size_t i;
+ u64 ret = 0;
+
+ WARN_ON_ONCE(!rcu_read_lock_held());
+ /* TODO: use rbtree to switch to O(log n) */
+ for (i = 0; i < array->map.max_entries; i++) {
+ if (array->elems[i].inode == key) {
+ ret = array->elems[i].value;
+ break;
+ }
+ }
+ return ret;
+}
+
+/*
+ * The key is a FD when called from a syscall, but an inode pointer when called
+ * from an eBPF program.
+ */
+
+/* called from syscall */
+int bpf_inode_map_lookup_elem(struct bpf_map *map, int *key, u64 *value)
+{
+ struct inode *inode;
+
+ inode = inode_from_fd(*key, false);
+ if (IS_ERR(inode))
+ return PTR_ERR(inode);
+ *value = inode_map_lookup_elem(map, inode);
+ iput(inode);
+ if (!*value)
+ return -ENOENT;
+ return 0;
+}
+
+/* (never) called from eBPF program */
+static int fake_map_update_elem(struct bpf_map *map, void *key, void *value,
+ u64 flags)
+{
+ WARN_ON(1);
+ /* do not leak an inode accessed by a Landlock program */
+ return -EINVAL;
+}
+
+/* called from syscall */
+static int sys_inode_map_update_elem(struct bpf_map *map, struct inode *key,
+ u64 *value, u64 flags)
+{
+ struct inode_array *array = container_of(map, struct inode_array, map);
+ size_t i;
+
+ if (unlikely(flags != BPF_ANY))
+ return -EINVAL;
+
+ if (unlikely(array->nb_entries >= array->map.max_entries))
+ /* all elements were pre-allocated, cannot insert a new one */
+ return -E2BIG;
+
+ for (i = 0; i < array->map.max_entries; i++) {
+ if (!array->elems[i].inode) {
+ /* the inode (key) is already grabbed by the caller */
+ ihold(key);
+ array->elems[i].inode = key;
+ array->elems[i].value = *value;
+ array->nb_entries++;
+ return 0;
+ }
+ }
+ WARN_ON(1);
+ return -ENOENT;
+}
+
+/* called from syscall */
+int bpf_inode_map_update_elem(struct bpf_map *map, int *key, u64 *value,
+ u64 flags)
+{
+ struct inode *inode;
+ int err;
+
+ WARN_ON_ONCE(!rcu_read_lock_held());
+ inode = inode_from_fd(*key, true);
+ if (IS_ERR(inode))
+ return PTR_ERR(inode);
+ err = sys_inode_map_update_elem(map, inode, value, flags);
+ iput(inode);
+ return err;
+}
+
+/* called from syscall or (never) from eBPF program */
+static int fake_map_get_next_key(struct bpf_map *map, void *key,
+ void *next_key)
+{
+ /* do not leak a file descriptor */
+ return -EINVAL;
+}
+
+/* void map for eBPF program */
+const struct bpf_map_ops inode_ops = {
+ .map_alloc = inode_map_alloc,
+ .map_free = inode_map_free,
+ .map_get_next_key = fake_map_get_next_key,
+ .map_lookup_elem = fake_map_lookup_elem,
+ .map_delete_elem = fake_map_delete_elem,
+ .map_update_elem = fake_map_update_elem,
+};
+
+BPF_CALL_2(bpf_inode_map_lookup, struct bpf_map *, map, void *, key)
+{
+ WARN_ON_ONCE(!rcu_read_lock_held());
+ return inode_map_lookup_elem(map, key);
+}
+
+const struct bpf_func_proto bpf_inode_map_lookup_proto = {
+ .func = bpf_inode_map_lookup,
+ .gpl_only = false,
+ .ret_type = RET_INTEGER,
+ .arg1_type = ARG_CONST_MAP_PTR,
+ .arg2_type = ARG_PTR_TO_INODE,
+};
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 7dd3376904d4..ba2a09a7f813 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -720,6 +720,22 @@ static void *__bpf_copy_key(void __user *ukey, u64 key_size)
return NULL;
}

+int __weak bpf_inode_map_update_elem(struct bpf_map *map, int *key,
+ u64 *value, u64 flags)
+{
+ return -ENOTSUPP;
+}
+
+int __weak bpf_inode_map_lookup_elem(struct bpf_map *map, int *key, u64 *value)
+{
+ return -ENOTSUPP;
+}
+
+int __weak bpf_inode_map_delete_elem(struct bpf_map *map, int *key)
+{
+ return -ENOTSUPP;
+}
+
/* last field in 'union bpf_attr' used by this command */
#define BPF_MAP_LOOKUP_ELEM_LAST_FIELD flags

@@ -801,6 +817,8 @@ static int map_lookup_elem(union bpf_attr *attr)
} else if (map->map_type == BPF_MAP_TYPE_QUEUE ||
map->map_type == BPF_MAP_TYPE_STACK) {
err = map->ops->map_peek_elem(map, value);
+ } else if (map->map_type == BPF_MAP_TYPE_INODE) {
+ err = bpf_inode_map_lookup_elem(map, key, value);
} else {
rcu_read_lock();
if (map->ops->map_lookup_elem_sys_only)
@@ -951,6 +969,10 @@ static int map_update_elem(union bpf_attr *attr)
} else if (map->map_type == BPF_MAP_TYPE_QUEUE ||
map->map_type == BPF_MAP_TYPE_STACK) {
err = map->ops->map_push_elem(map, value, attr->flags);
+ } else if (map->map_type == BPF_MAP_TYPE_INODE) {
+ rcu_read_lock();
+ err = bpf_inode_map_update_elem(map, key, value, attr->flags);
+ rcu_read_unlock();
} else {
rcu_read_lock();
err = map->ops->map_update_elem(map, key, value, attr->flags);
@@ -1006,7 +1028,10 @@ static int map_delete_elem(union bpf_attr *attr)
preempt_disable();
__this_cpu_inc(bpf_prog_active);
rcu_read_lock();
- err = map->ops->map_delete_elem(map, key);
+ if (map->map_type == BPF_MAP_TYPE_INODE)
+ err = bpf_inode_map_delete_elem(map, key);
+ else
+ err = map->ops->map_delete_elem(map, key);
rcu_read_unlock();
__this_cpu_dec(bpf_prog_active);
preempt_enable();
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 930260683d0a..ce3cd7fd8882 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -400,6 +400,7 @@ static const char * const reg_type_str[] = {
[PTR_TO_TCP_SOCK_OR_NULL] = "tcp_sock_or_null",
[PTR_TO_TP_BUFFER] = "tp_buffer",
[PTR_TO_XDP_SOCK] = "xdp_sock",
+ [PTR_TO_INODE] = "inode",
};

static char slot_type_char[] = {
@@ -1801,6 +1802,7 @@ static bool is_spillable_regtype(enum bpf_reg_type type)
case PTR_TO_TCP_SOCK:
case PTR_TO_TCP_SOCK_OR_NULL:
case PTR_TO_XDP_SOCK:
+ case PTR_TO_INODE:
return true;
default:
return false;
@@ -3254,6 +3256,10 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 regno,
verbose(env, "verifier internal error\n");
return -EFAULT;
}
+ } else if (arg_type == ARG_PTR_TO_INODE) {
+ expected_type = PTR_TO_INODE;
+ if (type != expected_type)
+ goto err_type;
} else if (arg_type_is_mem_ptr(arg_type)) {
expected_type = PTR_TO_STACK;
/* One exception here. In case function allows for NULL to be
@@ -3462,6 +3468,10 @@ static int check_map_func_compatibility(struct bpf_verifier_env *env,
func_id != BPF_FUNC_sk_storage_delete)
goto error;
break;
+ case BPF_MAP_TYPE_INODE:
+ if (func_id != BPF_FUNC_inode_map_lookup)
+ goto error;
+ break;
default:
break;
}
@@ -3530,6 +3540,10 @@ static int check_map_func_compatibility(struct bpf_verifier_env *env,
if (map->map_type != BPF_MAP_TYPE_SK_STORAGE)
goto error;
break;
+ case BPF_FUNC_inode_map_lookup:
+ if (map->map_type != BPF_MAP_TYPE_INODE)
+ goto error;
+ break;
default:
break;
}
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 50145d448bc3..08ff720835ba 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -134,6 +134,7 @@ enum bpf_map_type {
BPF_MAP_TYPE_QUEUE,
BPF_MAP_TYPE_STACK,
BPF_MAP_TYPE_SK_STORAGE,
+ BPF_MAP_TYPE_INODE,
};

/* Note that tracing related programs such as
@@ -2716,6 +2717,14 @@ union bpf_attr {
* **-EPERM** if no permission to send the *sig*.
*
* **-EAGAIN** if bpf program can try again.
+ *
+ * u64 bpf_inode_map_lookup(map, key)
+ * Description
+ * Perform a lookup in *map* for an entry associated to an inode
+ * *key*.
+ * Return
+ * Map value associated to *key*, or **NULL** if no entry was
+ * found.
*/
#define __BPF_FUNC_MAPPER(FN) \
FN(unspec), \
@@ -2827,7 +2836,8 @@ union bpf_attr {
FN(strtoul), \
FN(sk_storage_get), \
FN(sk_storage_delete), \
- FN(send_signal),
+ FN(send_signal), \
+ FN(inode_map_lookup),

/* integer value in 'imm' field of BPF_CALL instruction selects which helper
* function eBPF program intends to call
diff --git a/tools/lib/bpf/libbpf_probes.c b/tools/lib/bpf/libbpf_probes.c
index f4f34cb8869a..000319a95bfb 100644
--- a/tools/lib/bpf/libbpf_probes.c
+++ b/tools/lib/bpf/libbpf_probes.c
@@ -249,6 +249,7 @@ bool bpf_probe_map_type(enum bpf_map_type map_type, __u32 ifindex)
case BPF_MAP_TYPE_XSKMAP:
case BPF_MAP_TYPE_SOCKHASH:
case BPF_MAP_TYPE_REUSEPORT_SOCKARRAY:
+ case BPF_MAP_TYPE_INODE:
default:
break;
}
--
2.20.1

2019-06-25 22:29:24

by Mickaël Salaün

[permalink] [raw]
Subject: [PATCH bpf-next v9 06/10] landlock: Handle filesystem access control

This adds two Landlock hooks: FS_WALK and FS_PICK.

The FS_WALK hook is used to walk through a file path. A program tied to
this hook is evaluated for each directory traversal, except for the last
path component if it is the leaf of the path. It is important to
differentiate this hook from FS_PICK to enable more powerful path
evaluation in the future (cf. Landlock patch v8).

The FS_PICK hook is used to validate a set of actions requested on a
file. These actions are defined with triggers (e.g. read, write, open,
append...).

The Landlock LSM hooks are registered after the other LSMs so that
user-space actions, via eBPF programs, only run if the access was
already granted by the major (privileged) LSMs.

Signed-off-by: Mickaël Salaün <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: James Morris <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Serge E. Hallyn <[email protected]>
---

Changes since v8:
* add a new LSM_ORDER_LAST, cf. commit e2bc445b66ca ("LSM: Introduce
enum lsm_order")
* add WARN_ON() for pointer dereferences
* remove the FS_GET subtype which relied on program chaining
* remove the subtype option which was only used for chaining (with the
"previous" field)
* remove inode_lookup which depends on the (removed) nameidata security
blob
* remove eBPF helpers to get and set Landlock inode tags
* do not use task LSM credentials (for now)

Changes since v7:
* major rewrite with clean Landlock hooks able to deal with file paths

Changes since v6:
* add 3 more sub-events: IOCTL, LOCK, FCNTL
https://lkml.kernel.org/r/[email protected]
* use the new security_add_hooks()
* explain the -Werror=unused-function
* constify pointers
* cleanup headers

Changes since v5:
* split hooks.[ch] into hooks.[ch] and hooks_fs.[ch]
* add more documentation
* cosmetic fixes
* rebase (SCALAR_VALUE)

Changes since v4:
* add LSM hook abstraction called Landlock event
* use the compiler type checking to verify hooks use by an event
* handle all filesystem related LSM hooks (e.g. file_permission,
mmap_file, sb_mount...)
* register BPF programs for Landlock just after LSM hooks registration
* move hooks registration after other LSMs
* add failsafes to check if a hook is not used by the kernel
* allow partial raw value access from the context (needed for programs
generated by LLVM)

Changes since v3:
* split commit
* add hooks dealing with struct inode and struct path pointers:
inode_permission and inode_getattr
* add abstraction over eBPF helper arguments thanks to wrapping structs
---
include/linux/lsm_hooks.h | 1 +
security/landlock/Makefile | 3 +-
security/landlock/common.h | 10 +
security/landlock/hooks.c | 95 ++++++
security/landlock/hooks.h | 31 ++
security/landlock/hooks_fs.c | 568 +++++++++++++++++++++++++++++++++++
security/landlock/hooks_fs.h | 31 ++
security/landlock/init.c | 47 +++
security/security.c | 15 +
9 files changed, 800 insertions(+), 1 deletion(-)
create mode 100644 security/landlock/hooks.c
create mode 100644 security/landlock/hooks.h
create mode 100644 security/landlock/hooks_fs.c
create mode 100644 security/landlock/hooks_fs.h

diff --git a/include/linux/lsm_hooks.h b/include/linux/lsm_hooks.h
index 47f58cfb6a19..eefe9f214f05 100644
--- a/include/linux/lsm_hooks.h
+++ b/include/linux/lsm_hooks.h
@@ -2092,6 +2092,7 @@ extern void security_add_hooks(struct security_hook_list *hooks, int count,
enum lsm_order {
LSM_ORDER_FIRST = -1, /* This is only for capabilities. */
LSM_ORDER_MUTABLE = 0,
+ LSM_ORDER_LAST = 1, /* potentially-unprivileged LSM */
};

struct lsm_info {
diff --git a/security/landlock/Makefile b/security/landlock/Makefile
index 2a1a7082a365..270ece5d93de 100644
--- a/security/landlock/Makefile
+++ b/security/landlock/Makefile
@@ -1,4 +1,5 @@
obj-$(CONFIG_SECURITY_LANDLOCK) := landlock.o

landlock-y := init.o \
- enforce.o enforce_seccomp.o
+ enforce.o enforce_seccomp.o \
+ hooks.o hooks_fs.o
diff --git a/security/landlock/common.h b/security/landlock/common.h
index 0c9b5904e7f5..49b892515144 100644
--- a/security/landlock/common.h
+++ b/security/landlock/common.h
@@ -11,6 +11,7 @@

#include <linux/bpf.h> /* enum bpf_prog_aux */
#include <linux/filter.h> /* bpf_prog */
+#include <linux/lsm_hooks.h> /* lsm_blob_sizes */
#include <linux/refcount.h> /* refcount_t */
#include <uapi/linux/landlock.h> /* enum landlock_hook_type */

@@ -68,4 +69,13 @@ static inline enum landlock_hook_type get_type(struct bpf_prog *prog)
return prog->aux->extra->subtype.landlock_hook.type;
}

+__maybe_unused
+static bool current_has_prog_type(enum landlock_hook_type hook_type)
+{
+ struct landlock_prog_set *prog_set;
+
+ prog_set = current->seccomp.landlock_prog_set;
+ return (prog_set && prog_set->programs[get_index(hook_type)]);
+}
+
#endif /* _SECURITY_LANDLOCK_COMMON_H */
diff --git a/security/landlock/hooks.c b/security/landlock/hooks.c
new file mode 100644
index 000000000000..a1620d0481eb
--- /dev/null
+++ b/security/landlock/hooks.c
@@ -0,0 +1,95 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Landlock LSM - hook helpers
+ *
+ * Copyright © 2016-2019 Mickaël Salaün <[email protected]>
+ * Copyright © 2018-2019 ANSSI
+ */
+
+#include <asm/current.h>
+#include <linux/bpf.h> /* enum bpf_prog_aux */
+#include <linux/errno.h>
+#include <linux/filter.h> /* BPF_PROG_RUN() */
+#include <linux/rculist.h> /* list_add_tail_rcu */
+#include <uapi/linux/landlock.h> /* struct landlock_context */
+
+#include "common.h" /* struct landlock_rule, get_index() */
+#include "hooks.h" /* landlock_hook_ctx */
+
+#include "hooks_fs.h"
+
+/* return a Landlock program context (e.g. &hook_ctx->fs_walk->prog_ctx) */
+static const void *get_ctx(enum landlock_hook_type hook_type,
+ struct landlock_hook_ctx *hook_ctx)
+{
+ switch (hook_type) {
+ case LANDLOCK_HOOK_FS_WALK:
+ return landlock_get_ctx_fs_walk(hook_ctx->fs_walk);
+ case LANDLOCK_HOOK_FS_PICK:
+ return landlock_get_ctx_fs_pick(hook_ctx->fs_pick);
+ }
+ WARN_ON(1);
+ return NULL;
+}
+
+/**
+ * landlock_access_deny - run the Landlock programs tied to a hook
+ *
+ * @hook_type: hook type identifying the programs to run
+ * @hook_ctx: non-NULL valid hook context
+ * @prog_set: Landlock program set pointer
+ * @triggers: a bitmask to check if a program should be run
+ *
+ * Return true if at least one program returns deny.
+ */
+static bool landlock_access_deny(enum landlock_hook_type hook_type,
+ struct landlock_hook_ctx *hook_ctx,
+ struct landlock_prog_set *prog_set, u64 triggers)
+{
+ struct landlock_prog_list *prog_list, *prev_list = NULL;
+ u32 hook_idx = get_index(hook_type);
+
+ if (!prog_set)
+ return false;
+
+ for (prog_list = prog_set->programs[hook_idx];
+ prog_list; prog_list = prog_list->prev) {
+ u32 ret;
+ const void *prog_ctx;
+
+ /* check if the program expects at least one of these triggers */
+ if (triggers && !(triggers & prog_list->prog->aux->extra->
+ subtype.landlock_hook.triggers))
+ continue;
+ prog_ctx = get_ctx(hook_type, hook_ctx);
+ if (!prog_ctx || WARN_ON(IS_ERR(prog_ctx)))
+ return true;
+ rcu_read_lock();
+ ret = BPF_PROG_RUN(prog_list->prog, prog_ctx);
+ rcu_read_unlock();
+ /* deny access if a program returns a value different from 0 */
+ if (ret)
+ return true;
+ if (prev_list && prog_list->prev && prog_list->prev->prog->
+ aux->extra->subtype.landlock_hook.type ==
+ prev_list->prog->aux->extra->
+ subtype.landlock_hook.type)
+ WARN_ON(prog_list->prev != prev_list);
+ prev_list = prog_list;
+ }
+ return false;
+}
+
+int landlock_decide(enum landlock_hook_type hook_type,
+ struct landlock_hook_ctx *hook_ctx, u64 triggers)
+{
+ bool deny = false;
+
+#ifdef CONFIG_SECCOMP_FILTER
+ deny = landlock_access_deny(hook_type, hook_ctx,
+ current->seccomp.landlock_prog_set, triggers);
+#endif /* CONFIG_SECCOMP_FILTER */
+
+ /* should we use -EPERM or -EACCES? */
+ return deny ? -EACCES : 0;
+}
diff --git a/security/landlock/hooks.h b/security/landlock/hooks.h
new file mode 100644
index 000000000000..31446e6629fb
--- /dev/null
+++ b/security/landlock/hooks.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Landlock LSM - hooks helpers
+ *
+ * Copyright © 2016-2019 Mickaël Salaün <[email protected]>
+ * Copyright © 2018-2019 ANSSI
+ */
+
+#include <asm/current.h>
+#include <linux/sched.h> /* struct task_struct */
+#include <linux/seccomp.h>
+
+#include "hooks_fs.h"
+
+struct landlock_hook_ctx {
+ union {
+ struct landlock_hook_ctx_fs_walk *fs_walk;
+ struct landlock_hook_ctx_fs_pick *fs_pick;
+ };
+};
+
+static inline bool landlocked(const struct task_struct *task)
+{
+#ifdef CONFIG_SECCOMP_FILTER
+ return !!(task->seccomp.landlock_prog_set);
+#else
+ return false;
+#endif /* CONFIG_SECCOMP_FILTER */
+}
+
+int landlock_decide(enum landlock_hook_type, struct landlock_hook_ctx *, u64);
diff --git a/security/landlock/hooks_fs.c b/security/landlock/hooks_fs.c
new file mode 100644
index 000000000000..c3f0f60d72a7
--- /dev/null
+++ b/security/landlock/hooks_fs.c
@@ -0,0 +1,568 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Landlock LSM - filesystem hooks
+ *
+ * Copyright © 2016-2019 Mickaël Salaün <[email protected]>
+ * Copyright © 2018-2019 ANSSI
+ */
+
+#include <linux/bpf.h> /* enum bpf_access_type */
+#include <linux/kernel.h> /* ARRAY_SIZE */
+#include <linux/lsm_hooks.h>
+#include <linux/rcupdate.h> /* synchronize_rcu() */
+#include <linux/stat.h> /* S_ISDIR */
+#include <linux/stddef.h> /* offsetof */
+#include <linux/types.h> /* uintptr_t */
+#include <linux/workqueue.h> /* INIT_WORK() */
+
+/* permissions translation */
+#include <linux/fs.h> /* MAY_* */
+#include <linux/mman.h> /* PROT_* */
+#include <linux/namei.h>
+
+/* hook arguments */
+#include <linux/dcache.h> /* struct dentry */
+#include <linux/fs.h> /* struct inode, struct iattr */
+#include <linux/mm_types.h> /* struct vm_area_struct */
+#include <linux/mount.h> /* struct vfsmount */
+#include <linux/path.h> /* struct path */
+#include <linux/sched.h> /* struct task_struct */
+#include <linux/time.h> /* struct timespec */
+
+#include "common.h"
+#include "hooks_fs.h"
+#include "hooks.h"
+
+/* fs_pick */
+
+#include <asm/page.h> /* PAGE_SIZE */
+#include <asm/syscall.h>
+#include <linux/dcache.h> /* d_path, dentry_path_raw */
+#include <linux/err.h> /* *_ERR */
+#include <linux/gfp.h> /* __get_free_page, GFP_KERNEL */
+#include <linux/path.h> /* struct path */
+
+bool landlock_is_valid_access_fs_pick(int off, enum bpf_access_type type,
+ enum bpf_reg_type *reg_type, int *max_size)
+{
+ switch (off) {
+ case offsetof(struct landlock_ctx_fs_pick, inode):
+ if (type != BPF_READ)
+ return false;
+ *reg_type = PTR_TO_INODE;
+ *max_size = sizeof(u64);
+ return true;
+ default:
+ return false;
+ }
+}
+
+bool landlock_is_valid_access_fs_walk(int off, enum bpf_access_type type,
+ enum bpf_reg_type *reg_type, int *max_size)
+{
+ switch (off) {
+ case offsetof(struct landlock_ctx_fs_walk, inode):
+ if (type != BPF_READ)
+ return false;
+ *reg_type = PTR_TO_INODE;
+ *max_size = sizeof(u64);
+ return true;
+ default:
+ return false;
+ }
+}
+
+/* fs_walk */
+
+struct landlock_hook_ctx_fs_walk {
+ struct landlock_ctx_fs_walk prog_ctx;
+};
+
+const struct landlock_ctx_fs_walk *landlock_get_ctx_fs_walk(
+ const struct landlock_hook_ctx_fs_walk *hook_ctx)
+{
+ if (WARN_ON(!hook_ctx))
+ return NULL;
+
+ return &hook_ctx->prog_ctx;
+}
+
+static int decide_fs_walk(int may_mask, struct inode *inode)
+{
+ struct landlock_hook_ctx_fs_walk fs_walk = {};
+ struct landlock_hook_ctx hook_ctx = {
+ .fs_walk = &fs_walk,
+ };
+ const enum landlock_hook_type hook_type = LANDLOCK_HOOK_FS_WALK;
+
+ if (!current_has_prog_type(hook_type))
+ /* no fs_walk */
+ return 0;
+ if (WARN_ON(!inode))
+ return -EFAULT;
+
+ /* init common data: inode */
+ fs_walk.prog_ctx.inode = (uintptr_t)inode;
+ return landlock_decide(hook_type, &hook_ctx, 0);
+}
+
+/* fs_pick */
+
+struct landlock_hook_ctx_fs_pick {
+ __u64 triggers;
+ struct landlock_ctx_fs_pick prog_ctx;
+};
+
+const struct landlock_ctx_fs_pick *landlock_get_ctx_fs_pick(
+ const struct landlock_hook_ctx_fs_pick *hook_ctx)
+{
+ if (WARN_ON(!hook_ctx))
+ return NULL;
+
+ return &hook_ctx->prog_ctx;
+}
+
+static int decide_fs_pick(__u64 triggers, struct inode *inode)
+{
+ struct landlock_hook_ctx_fs_pick fs_pick = {};
+ struct landlock_hook_ctx hook_ctx = {
+ .fs_pick = &fs_pick,
+ };
+ const enum landlock_hook_type hook_type = LANDLOCK_HOOK_FS_PICK;
+
+ if (WARN_ON(!triggers))
+ return 0;
+ if (!current_has_prog_type(hook_type))
+ /* no fs_pick */
+ return 0;
+ if (WARN_ON(!inode))
+ return -EFAULT;
+
+ fs_pick.triggers = triggers;
+ /* init common data: inode */
+ fs_pick.prog_ctx.inode = (uintptr_t)inode;
+ return landlock_decide(hook_type, &hook_ctx, fs_pick.triggers);
+}
+
+/* helpers */
+
+static u64 fs_may_to_triggers(int may_mask, umode_t mode)
+{
+ u64 ret = 0;
+
+ if (may_mask & MAY_EXEC)
+ ret |= LANDLOCK_TRIGGER_FS_PICK_EXECUTE;
+ if (may_mask & MAY_READ) {
+ if (S_ISDIR(mode))
+ ret |= LANDLOCK_TRIGGER_FS_PICK_READDIR;
+ else
+ ret |= LANDLOCK_TRIGGER_FS_PICK_READ;
+ }
+ if (may_mask & MAY_WRITE)
+ ret |= LANDLOCK_TRIGGER_FS_PICK_WRITE;
+ if (may_mask & MAY_APPEND)
+ ret |= LANDLOCK_TRIGGER_FS_PICK_APPEND;
+ if (may_mask & MAY_OPEN)
+ ret |= LANDLOCK_TRIGGER_FS_PICK_OPEN;
+ if (may_mask & MAY_CHROOT)
+ ret |= LANDLOCK_TRIGGER_FS_PICK_CHROOT;
+ else if (may_mask & MAY_CHDIR)
+ ret |= LANDLOCK_TRIGGER_FS_PICK_CHDIR;
+ /* XXX: ignore MAY_ACCESS */
+ WARN_ON(!ret);
+ return ret;
+}
+
+static inline u64 mem_prot_to_triggers(unsigned long prot, bool private)
+{
+ u64 ret = LANDLOCK_TRIGGER_FS_PICK_MAP;
+
+ /* private mappings do not write back to files */
+ if (!private && (prot & PROT_WRITE))
+ ret |= LANDLOCK_TRIGGER_FS_PICK_WRITE;
+ if (prot & PROT_READ)
+ ret |= LANDLOCK_TRIGGER_FS_PICK_READ;
+ if (prot & PROT_EXEC)
+ ret |= LANDLOCK_TRIGGER_FS_PICK_EXECUTE;
+ WARN_ON(!ret);
+ return ret;
+}
+
+/* binder hooks */
+
+static int hook_binder_transfer_file(struct task_struct *from,
+ struct task_struct *to, struct file *file)
+{
+ if (!landlocked(current))
+ return 0;
+ if (WARN_ON(!file))
+ return 0;
+ return decide_fs_pick(LANDLOCK_TRIGGER_FS_PICK_TRANSFER,
+ file_inode(file));
+}
+
+/* sb hooks */
+
+static int hook_sb_statfs(struct dentry *dentry)
+{
+ if (!landlocked(current))
+ return 0;
+ if (WARN_ON(!dentry))
+ return 0;
+ return decide_fs_pick(LANDLOCK_TRIGGER_FS_PICK_GETATTR,
+ dentry->d_inode);
+}
+
+/* TODO: handle mount source and remount */
+static int hook_sb_mount(const char *dev_name, const struct path *path,
+ const char *type, unsigned long flags, void *data)
+{
+ if (!landlocked(current))
+ return 0;
+ if (WARN_ON(!path))
+ return 0;
+ if (WARN_ON(!path->dentry))
+ return 0;
+ return decide_fs_pick(LANDLOCK_TRIGGER_FS_PICK_MOUNTON,
+ path->dentry->d_inode);
+}
+
+/*
+ * The @old_path is similar to a destination mount point.
+ */
+static int hook_sb_pivotroot(const struct path *old_path,
+ const struct path *new_path)
+{
+ int err;
+
+ if (!landlocked(current))
+ return 0;
+ if (WARN_ON(!old_path))
+ return 0;
+ if (WARN_ON(!old_path->dentry))
+ return 0;
+ err = decide_fs_pick(LANDLOCK_TRIGGER_FS_PICK_MOUNTON,
+ old_path->dentry->d_inode);
+ if (err)
+ return err;
+ return decide_fs_pick(LANDLOCK_TRIGGER_FS_PICK_CHROOT,
+ new_path->dentry->d_inode);
+}
+
+/* inode hooks */
+
+/* a directory inode contains only one dentry */
+static int hook_inode_create(struct inode *dir, struct dentry *dentry,
+ umode_t mode)
+{
+ if (!landlocked(current))
+ return 0;
+ return decide_fs_pick(LANDLOCK_TRIGGER_FS_PICK_CREATE, dir);
+}
+
+static int hook_inode_link(struct dentry *old_dentry, struct inode *dir,
+ struct dentry *new_dentry)
+{
+ if (!landlocked(current))
+ return 0;
+ if (!WARN_ON(!old_dentry)) {
+ int ret = decide_fs_pick(LANDLOCK_TRIGGER_FS_PICK_LINK,
+ old_dentry->d_inode);
+ if (ret)
+ return ret;
+ }
+ return decide_fs_pick(LANDLOCK_TRIGGER_FS_PICK_LINKTO, dir);
+}
+
+static int hook_inode_unlink(struct inode *dir, struct dentry *dentry)
+{
+ if (!landlocked(current))
+ return 0;
+ if (WARN_ON(!dentry))
+ return 0;
+ return decide_fs_pick(LANDLOCK_TRIGGER_FS_PICK_UNLINK,
+ dentry->d_inode);
+}
+
+static int hook_inode_symlink(struct inode *dir, struct dentry *dentry,
+ const char *old_name)
+{
+ if (!landlocked(current))
+ return 0;
+ return decide_fs_pick(LANDLOCK_TRIGGER_FS_PICK_CREATE, dir);
+}
+
+static int hook_inode_mkdir(struct inode *dir, struct dentry *dentry,
+ umode_t mode)
+{
+ if (!landlocked(current))
+ return 0;
+ return decide_fs_pick(LANDLOCK_TRIGGER_FS_PICK_CREATE, dir);
+}
+
+static int hook_inode_rmdir(struct inode *dir, struct dentry *dentry)
+{
+ if (!landlocked(current))
+ return 0;
+ if (WARN_ON(!dentry))
+ return 0;
+ return decide_fs_pick(LANDLOCK_TRIGGER_FS_PICK_RMDIR, dentry->d_inode);
+}
+
+static int hook_inode_mknod(struct inode *dir, struct dentry *dentry,
+ umode_t mode, dev_t dev)
+{
+ if (!landlocked(current))
+ return 0;
+ return decide_fs_pick(LANDLOCK_TRIGGER_FS_PICK_CREATE, dir);
+}
+
+static int hook_inode_rename(struct inode *old_dir, struct dentry *old_dentry,
+ struct inode *new_dir, struct dentry *new_dentry)
+{
+ if (!landlocked(current))
+ return 0;
+ /* TODO: add artificial walk session from old_dir to old_dentry */
+ if (!WARN_ON(!old_dentry)) {
+ int ret = decide_fs_pick(LANDLOCK_TRIGGER_FS_PICK_RENAME,
+ old_dentry->d_inode);
+ if (ret)
+ return ret;
+ }
+ return decide_fs_pick(LANDLOCK_TRIGGER_FS_PICK_RENAMETO, new_dir);
+}
+
+static int hook_inode_readlink(struct dentry *dentry)
+{
+ if (!landlocked(current))
+ return 0;
+ if (WARN_ON(!dentry))
+ return 0;
+ return decide_fs_pick(LANDLOCK_TRIGGER_FS_PICK_READ, dentry->d_inode);
+}
+
+/*
+ * ignore the inode_follow_link hook (could set is_symlink in the fs_walk
+ * context)
+ */
+
+static int hook_inode_permission(struct inode *inode, int mask)
+{
+ u64 triggers;
+
+ if (!landlocked(current))
+ return 0;
+ if (WARN_ON(!inode))
+ return 0;
+
+ triggers = fs_may_to_triggers(mask, inode->i_mode);
+ /*
+ * decide_fs_walk() is exclusive with decide_fs_pick(): in a path walk,
+ * ignore execute-only access on directory for any fs_pick program
+ */
+ if (triggers == LANDLOCK_TRIGGER_FS_PICK_EXECUTE &&
+ S_ISDIR(inode->i_mode))
+ return decide_fs_walk(mask, inode);
+
+ return decide_fs_pick(triggers, inode);
+}
+
+static int hook_inode_setattr(struct dentry *dentry, struct iattr *attr)
+{
+ if (!landlocked(current))
+ return 0;
+ if (WARN_ON(!dentry))
+ return 0;
+ return decide_fs_pick(LANDLOCK_TRIGGER_FS_PICK_SETATTR,
+ dentry->d_inode);
+}
+
+static int hook_inode_getattr(const struct path *path)
+{
+ /* TODO: link parent inode and path */
+ if (!landlocked(current))
+ return 0;
+ if (WARN_ON(!path))
+ return 0;
+ if (WARN_ON(!path->dentry))
+ return 0;
+ return decide_fs_pick(LANDLOCK_TRIGGER_FS_PICK_GETATTR,
+ path->dentry->d_inode);
+}
+
+static int hook_inode_setxattr(struct dentry *dentry, const char *name,
+ const void *value, size_t size, int flags)
+{
+ if (!landlocked(current))
+ return 0;
+ if (WARN_ON(!dentry))
+ return 0;
+ return decide_fs_pick(LANDLOCK_TRIGGER_FS_PICK_SETATTR,
+ dentry->d_inode);
+}
+
+static int hook_inode_getxattr(struct dentry *dentry, const char *name)
+{
+ if (!landlocked(current))
+ return 0;
+ if (WARN_ON(!dentry))
+ return 0;
+ return decide_fs_pick(LANDLOCK_TRIGGER_FS_PICK_GETATTR,
+ dentry->d_inode);
+}
+
+static int hook_inode_listxattr(struct dentry *dentry)
+{
+ if (!landlocked(current))
+ return 0;
+ if (WARN_ON(!dentry))
+ return 0;
+ return decide_fs_pick(LANDLOCK_TRIGGER_FS_PICK_GETATTR,
+ dentry->d_inode);
+}
+
+static int hook_inode_removexattr(struct dentry *dentry, const char *name)
+{
+ if (!landlocked(current))
+ return 0;
+ if (WARN_ON(!dentry))
+ return 0;
+ return decide_fs_pick(LANDLOCK_TRIGGER_FS_PICK_SETATTR,
+ dentry->d_inode);
+}
+
+static int hook_inode_getsecurity(struct inode *inode, const char *name,
+ void **buffer, bool alloc)
+{
+ if (!landlocked(current))
+ return 0;
+ return decide_fs_pick(LANDLOCK_TRIGGER_FS_PICK_GETATTR, inode);
+}
+
+static int hook_inode_setsecurity(struct inode *inode, const char *name,
+ const void *value, size_t size, int flag)
+{
+ if (!landlocked(current))
+ return 0;
+ return decide_fs_pick(LANDLOCK_TRIGGER_FS_PICK_SETATTR, inode);
+}
+
+static int hook_inode_listsecurity(struct inode *inode, char *buffer,
+ size_t buffer_size)
+{
+ if (!landlocked(current))
+ return 0;
+ return decide_fs_pick(LANDLOCK_TRIGGER_FS_PICK_GETATTR, inode);
+}
+
+/* file hooks */
+
+static int hook_file_ioctl(struct file *file, unsigned int cmd,
+ unsigned long arg)
+{
+ if (!landlocked(current))
+ return 0;
+ if (WARN_ON(!file))
+ return 0;
+ return decide_fs_pick(LANDLOCK_TRIGGER_FS_PICK_IOCTL,
+ file_inode(file));
+}
+
+static int hook_file_lock(struct file *file, unsigned int cmd)
+{
+ if (!landlocked(current))
+ return 0;
+ if (WARN_ON(!file))
+ return 0;
+ return decide_fs_pick(LANDLOCK_TRIGGER_FS_PICK_LOCK, file_inode(file));
+}
+
+static int hook_file_fcntl(struct file *file, unsigned int cmd,
+ unsigned long arg)
+{
+ if (!landlocked(current))
+ return 0;
+ if (WARN_ON(!file))
+ return 0;
+ return decide_fs_pick(LANDLOCK_TRIGGER_FS_PICK_FCNTL,
+ file_inode(file));
+}
+
+static int hook_mmap_file(struct file *file, unsigned long reqprot,
+ unsigned long prot, unsigned long flags)
+{
+ if (!landlocked(current))
+ return 0;
+ /* file can be null for anonymous mmap */
+ if (!file)
+ return 0;
+ return decide_fs_pick(mem_prot_to_triggers(prot, flags & MAP_PRIVATE),
+ file_inode(file));
+}
+
+static int hook_file_mprotect(struct vm_area_struct *vma,
+ unsigned long reqprot, unsigned long prot)
+{
+ if (!landlocked(current))
+ return 0;
+ if (WARN_ON(!vma))
+ return 0;
+ if (!vma->vm_file)
+ return 0;
+ return decide_fs_pick(mem_prot_to_triggers(prot,
+ !(vma->vm_flags & VM_SHARED)),
+ file_inode(vma->vm_file));
+}
+
+static int hook_file_receive(struct file *file)
+{
+ if (!landlocked(current))
+ return 0;
+ if (WARN_ON(!file))
+ return 0;
+ return decide_fs_pick(LANDLOCK_TRIGGER_FS_PICK_RECEIVE,
+ file_inode(file));
+}
+
+static struct security_hook_list landlock_hooks[] = {
+ LSM_HOOK_INIT(binder_transfer_file, hook_binder_transfer_file),
+
+ LSM_HOOK_INIT(sb_statfs, hook_sb_statfs),
+ LSM_HOOK_INIT(sb_mount, hook_sb_mount),
+ LSM_HOOK_INIT(sb_pivotroot, hook_sb_pivotroot),
+
+ LSM_HOOK_INIT(inode_create, hook_inode_create),
+ LSM_HOOK_INIT(inode_link, hook_inode_link),
+ LSM_HOOK_INIT(inode_unlink, hook_inode_unlink),
+ LSM_HOOK_INIT(inode_symlink, hook_inode_symlink),
+ LSM_HOOK_INIT(inode_mkdir, hook_inode_mkdir),
+ LSM_HOOK_INIT(inode_rmdir, hook_inode_rmdir),
+ LSM_HOOK_INIT(inode_mknod, hook_inode_mknod),
+ LSM_HOOK_INIT(inode_rename, hook_inode_rename),
+ LSM_HOOK_INIT(inode_readlink, hook_inode_readlink),
+ LSM_HOOK_INIT(inode_permission, hook_inode_permission),
+ LSM_HOOK_INIT(inode_setattr, hook_inode_setattr),
+ LSM_HOOK_INIT(inode_getattr, hook_inode_getattr),
+ LSM_HOOK_INIT(inode_setxattr, hook_inode_setxattr),
+ LSM_HOOK_INIT(inode_getxattr, hook_inode_getxattr),
+ LSM_HOOK_INIT(inode_listxattr, hook_inode_listxattr),
+ LSM_HOOK_INIT(inode_removexattr, hook_inode_removexattr),
+ LSM_HOOK_INIT(inode_getsecurity, hook_inode_getsecurity),
+ LSM_HOOK_INIT(inode_setsecurity, hook_inode_setsecurity),
+ LSM_HOOK_INIT(inode_listsecurity, hook_inode_listsecurity),
+
+ /* do not handle file_permission for now */
+ LSM_HOOK_INIT(file_ioctl, hook_file_ioctl),
+ LSM_HOOK_INIT(file_lock, hook_file_lock),
+ LSM_HOOK_INIT(file_fcntl, hook_file_fcntl),
+ LSM_HOOK_INIT(mmap_file, hook_mmap_file),
+ LSM_HOOK_INIT(file_mprotect, hook_file_mprotect),
+ LSM_HOOK_INIT(file_receive, hook_file_receive),
+ /* file_open is not handled, use inode_permission instead */
+};
+
+__init void landlock_add_hooks_fs(void)
+{
+ security_add_hooks(landlock_hooks, ARRAY_SIZE(landlock_hooks),
+ LANDLOCK_NAME);
+}
diff --git a/security/landlock/hooks_fs.h b/security/landlock/hooks_fs.h
new file mode 100644
index 000000000000..eeae4dcd842f
--- /dev/null
+++ b/security/landlock/hooks_fs.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Landlock LSM - filesystem hooks
+ *
+ * Copyright © 2017-2019 Mickaël Salaün <[email protected]>
+ * Copyright © 2018-2019 ANSSI
+ */
+
+#include <linux/bpf.h> /* enum bpf_access_type */
+
+__init void landlock_add_hooks_fs(void);
+
+/* fs_pick */
+
+struct landlock_hook_ctx_fs_pick;
+
+bool landlock_is_valid_access_fs_pick(int off, enum bpf_access_type type,
+ enum bpf_reg_type *reg_type, int *max_size);
+
+const struct landlock_ctx_fs_pick *landlock_get_ctx_fs_pick(
+ const struct landlock_hook_ctx_fs_pick *hook_ctx);
+
+/* fs_walk */
+
+struct landlock_hook_ctx_fs_walk;
+
+bool landlock_is_valid_access_fs_walk(int off, enum bpf_access_type type,
+ enum bpf_reg_type *reg_type, int *max_size);
+
+const struct landlock_ctx_fs_walk *landlock_get_ctx_fs_walk(
+ const struct landlock_hook_ctx_fs_walk *hook_ctx);
diff --git a/security/landlock/init.c b/security/landlock/init.c
index 03073cd0fc4e..68def2a7af71 100644
--- a/security/landlock/init.c
+++ b/security/landlock/init.c
@@ -9,8 +9,10 @@
#include <linux/bpf.h> /* enum bpf_access_type */
#include <linux/capability.h> /* capable */
#include <linux/filter.h> /* struct bpf_prog */
+#include <linux/lsm_hooks.h>

#include "common.h" /* LANDLOCK_* */
+#include "hooks_fs.h"

static bool bpf_landlock_is_valid_access(int off, int size,
enum bpf_access_type type, const struct bpf_prog *prog,
@@ -29,6 +31,23 @@ static bool bpf_landlock_is_valid_access(int off, int size,
if (size <= 0 || size > sizeof(__u64))
return false;

+ /* set register type and max size */
+ switch (prog_subtype->landlock_hook.type) {
+ case LANDLOCK_HOOK_FS_PICK:
+ if (!landlock_is_valid_access_fs_pick(off, type, &reg_type,
+ &max_size))
+ return false;
+ break;
+ case LANDLOCK_HOOK_FS_WALK:
+ if (!landlock_is_valid_access_fs_walk(off, type, &reg_type,
+ &max_size))
+ return false;
+ break;
+ default:
+ WARN_ON(1);
+ return false;
+ }
+
/* check memory range access */
switch (reg_type) {
case NOT_INIT:
@@ -98,6 +117,18 @@ static const struct bpf_func_proto *bpf_landlock_func_proto(
default:
break;
}
+
+ switch (hook_type) {
+ case LANDLOCK_HOOK_FS_WALK:
+ case LANDLOCK_HOOK_FS_PICK:
+ switch (func_id) {
+ case BPF_FUNC_inode_map_lookup:
+ return &bpf_inode_map_lookup_proto;
+ default:
+ break;
+ }
+ break;
+ }
return NULL;
}

@@ -108,3 +139,19 @@ const struct bpf_verifier_ops landlock_verifier_ops = {
};

const struct bpf_prog_ops landlock_prog_ops = {};
+
+static int __init landlock_init(void)
+{
+ pr_info(LANDLOCK_NAME ": Initializing (sandbox with seccomp)\n");
+ landlock_add_hooks_fs();
+ return 0;
+}
+
+struct lsm_blob_sizes landlock_blob_sizes __lsm_ro_after_init = {};
+
+DEFINE_LSM(LANDLOCK_NAME) = {
+ .name = LANDLOCK_NAME,
+ .order = LSM_ORDER_LAST,
+ .blobs = &landlock_blob_sizes,
+ .init = landlock_init,
+};
diff --git a/security/security.c b/security/security.c
index f493db0bf62a..05a23995407d 100644
--- a/security/security.c
+++ b/security/security.c
@@ -263,6 +263,21 @@ static void __init ordered_lsm_parse(const char *order, const char *origin)
}
}

+ /*
+ * In case of an unprivileged access-control system, we don't want to
+ * give any process the ability to run checks (e.g. through an eBPF
+ * program) on kernel objects (e.g. files) when a privileged security
+ * policy forbids access to them. We must therefore load
+ * potentially-unprivileged security modules after all other LSMs.
+ *
+ * LSM_ORDER_LAST is always last and does not appear in the modifiable
+ * ordered list of enabled LSMs.
+ */
+ for (lsm = __start_lsm_info; lsm < __end_lsm_info; lsm++) {
+ if (lsm->order == LSM_ORDER_LAST)
+ append_ordered_lsm(lsm, "last");
+ }
+
/* Disable all LSMs not in the ordered list. */
for (lsm = __start_lsm_info; lsm < __end_lsm_info; lsm++) {
if (exists_ordered_lsm(lsm))
--
2.20.1

2019-06-25 22:52:50

by Al Viro

Subject: Re: [PATCH bpf-next v9 05/10] bpf,landlock: Add a new map type: inode

On Tue, Jun 25, 2019 at 11:52:34PM +0200, Mickaël Salaün wrote:
> +/* must call iput(inode) after this call */
> +static struct inode *inode_from_fd(int ufd, bool check_access)
> +{
> + struct inode *ret;
> + struct fd f;
> + int deny;
> +
> + f = fdget(ufd);
> + if (unlikely(!f.file || !file_inode(f.file))) {
> + ret = ERR_PTR(-EBADF);
> + goto put_fd;
> + }

Just when does one get a NULL file_inode()? The reason I'm asking is
that arseloads of code would break if one managed to create such
a beast...

Incidentally, that should be return ERR_PTR(-EBADF); fdput() is wrong there.

> + }
> + /* check if the FD is tied to a mount point */
> + /* TODO: add this check when called from an eBPF program too */
> + if (unlikely(!f.file->f_path.mnt

Again, the same question - when the hell can that happen? If you are
sitting on an exploitable roothole, do share it...

|| f.file->f_path.mnt->mnt_flags &
> + MNT_INTERNAL)) {
> + ret = ERR_PTR(-EINVAL);
> + goto put_fd;

What does it have to do with mountpoints, anyway?

> +/* called from syscall */
> +static int sys_inode_map_delete_elem(struct bpf_map *map, struct inode *key)
> +{
> + struct inode_array *array = container_of(map, struct inode_array, map);
> + struct inode *inode;
> + int i;
> +
> + WARN_ON_ONCE(!rcu_read_lock_held());
> + for (i = 0; i < array->map.max_entries; i++) {
> + if (array->elems[i].inode == key) {
> + inode = xchg(&array->elems[i].inode, NULL);
> + array->nb_entries--;

Umm... Is that intended to be atomic in any sense?

> + iput(inode);
> + return 0;
> + }
> + }
> + return -ENOENT;
> +}
> +
> +/* called from syscall */
> +int bpf_inode_map_delete_elem(struct bpf_map *map, int *key)
> +{
> + struct inode *inode;
> + int err;
> +
> + inode = inode_from_fd(*key, false);
> + if (IS_ERR(inode))
> + return PTR_ERR(inode);
> + err = sys_inode_map_delete_elem(map, inode);
> + iput(inode);
> + return err;
> +}

Wait a sec... So we have those beasties that can have long-term
references to arbitrary inodes stuck in them? What will happen
if you get umount(2) called while such a thing exists?

2019-06-27 16:26:09

by Mickaël Salaün

Subject: Re: [PATCH bpf-next v9 05/10] bpf,landlock: Add a new map type: inode


On 26/06/2019 00:52, Al Viro wrote:
> On Tue, Jun 25, 2019 at 11:52:34PM +0200, Mickaël Salaün wrote:
>> +/* must call iput(inode) after this call */
>> +static struct inode *inode_from_fd(int ufd, bool check_access)
>> +{
>> + struct inode *ret;
>> + struct fd f;
>> + int deny;
>> +
>> + f = fdget(ufd);
>> + if (unlikely(!f.file || !file_inode(f.file))) {
>> + ret = ERR_PTR(-EBADF);
>> + goto put_fd;
>> + }
>
> Just when does one get a NULL file_inode()? The reason I'm asking is
> that arseloads of code would break if one managed to create such
> a beast...

I didn't find any API documentation about this guarantee, so I followed
a defensive programming approach. I'll remove the file_inode() check.

>
> Incidentally, that should be return ERR_PTR(-EBADF); fdput() is wrong there.

Right, I'll fix that.
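For the record, the fixed error path can be sketched in userspace (with
stand-in fdget()/fdput() and ERR_PTR helpers, which are hypothetical
models of the kernel's <linux/file.h> and <linux/err.h>, not the real
thing): when fdget() yields no file, there is nothing to put, so the
function must return ERR_PTR(-EBADF) directly instead of jumping to the
label that calls fdput().

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Userspace stand-ins for the kernel helpers, only to make the control
 * flow demonstrable here. */
#define ERR_PTR(err) ((void *)(long)(err))
#define PTR_ERR(ptr) ((long)(ptr))

struct file { int unused; };
struct fd { struct file *file; };

static struct file dummy_file;

/* Toy fdget(): fails (NULL file) for negative descriptors. */
static struct fd fdget(int ufd)
{
	struct fd f = { .file = (ufd >= 0) ? &dummy_file : NULL };
	return f;
}

static void fdput(struct fd f) { (void)f; }

/* Sketch of the corrected lookup: no fdput() when fdget() failed. */
static void *inode_from_fd(int ufd)
{
	struct fd f = fdget(ufd);

	if (!f.file)
		return ERR_PTR(-EBADF); /* nothing to put: fdget() failed */
	/* ... the real code would check the mount and grab the inode ... */
	fdput(f);
	return NULL; /* placeholder for the inode pointer */
}
```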

>
>> + }
>> + /* check if the FD is tied to a mount point */
>> + /* TODO: add this check when called from an eBPF program too */
>> + if (unlikely(!f.file->f_path.mnt
>
> Again, the same question - when the hell can that happen?

Defensive programming again, I'll remove it.

> If you are
> sitting on an exploitable roothole, do share it...
>
> || f.file->f_path.mnt->mnt_flags &
>> + MNT_INTERNAL)) {
>> + ret = ERR_PTR(-EINVAL);
>> + goto put_fd;
>
> What does it have to do with mountpoints, anyway?

I want to only manage inodes tied to a userspace-visible file system
(this check may not be enough, though). It doesn't make sense to be able
to add inodes whose file system is not mounted to this kind of map.

>
>> +/* called from syscall */
>> +static int sys_inode_map_delete_elem(struct bpf_map *map, struct inode *key)
>> +{
>> + struct inode_array *array = container_of(map, struct inode_array, map);
>> + struct inode *inode;
>> + int i;
>> +
>> + WARN_ON_ONCE(!rcu_read_lock_held());
>> + for (i = 0; i < array->map.max_entries; i++) {
>> + if (array->elems[i].inode == key) {
>> + inode = xchg(&array->elems[i].inode, NULL);
>> + array->nb_entries--;
>
> Umm... Is that intended to be atomic in any sense?

nb_entries is not used as a bounds check but to avoid uselessly walking
through the (pre-allocated) array when adding a new element; I'll use an
atomic to avoid inconsistencies anyway.
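The race Al points at: a plain `array->nb_entries--` is a non-atomic
read-modify-write, so two concurrent deletions can lose an update. In
the kernel this would be an atomic_t with atomic_dec(); the C11
userspace sketch below (harness names are illustrative) shows the
equivalent fetch-sub:

```c
#include <stdatomic.h>

/* Model of the map's element counter. `nb_entries--` on a plain int can
 * lose updates under concurrent deletions; an atomic fetch-sub cannot.
 * In kernel code this would be atomic_dec() on an atomic_t. */
static atomic_int nb_entries;

static void delete_elem(void)
{
	/* atomic equivalent of nb_entries-- */
	atomic_fetch_sub_explicit(&nb_entries, 1, memory_order_relaxed);
}

/* Illustrative harness: start with n entries, delete them all. */
static int run_deletions(int n)
{
	atomic_store(&nb_entries, n);
	for (int i = 0; i < n; i++)
		delete_elem();
	return atomic_load(&nb_entries);
}
```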

>
>> + iput(inode);
>> + return 0;
>> + }
>> + }
>> + return -ENOENT;
>> +}
>> +
>> +/* called from syscall */
>> +int bpf_inode_map_delete_elem(struct bpf_map *map, int *key)
>> +{
>> + struct inode *inode;
>> + int err;
>> +
>> + inode = inode_from_fd(*key, false);
>> + if (IS_ERR(inode))
>> + return PTR_ERR(inode);
>> + err = sys_inode_map_delete_elem(map, inode);
>> + iput(inode);
>> + return err;
>> +}
>
> Wait a sec... So we have those beasties that can have long-term
> references to arbitrary inodes stuck in them? What will happen
> if you get umount(2) called while such a thing exists?

I thought an umount would be denied, but no: we get a self-destructed
busy inode and a bug!
What about wrapping the inode's superblock->s_op->destroy_inode() to
first remove the element from the map and then call the real
destroy_inode(), if any?
Or I could update fs/inode.c:destroy_inode() to call inode->free_inode()
if it is set, and set it when such inode is referenced by a map?
Or maybe I could hold the referencing file in the map and then wrap its
f_op?


--
Mickaël Salaün
ANSSI/SDE/ST/LAM


2019-06-27 16:57:40

by Al Viro

Subject: Re: [PATCH bpf-next v9 05/10] bpf,landlock: Add a new map type: inode

On Thu, Jun 27, 2019 at 06:18:12PM +0200, Mickaël Salaün wrote:

> >> +/* called from syscall */
> >> +static int sys_inode_map_delete_elem(struct bpf_map *map, struct inode *key)
> >> +{
> >> + struct inode_array *array = container_of(map, struct inode_array, map);
> >> + struct inode *inode;
> >> + int i;
> >> +
> >> + WARN_ON_ONCE(!rcu_read_lock_held());
> >> + for (i = 0; i < array->map.max_entries; i++) {
> >> + if (array->elems[i].inode == key) {
> >> + inode = xchg(&array->elems[i].inode, NULL);
> >> + array->nb_entries--;
> >
> > Umm... Is that intended to be atomic in any sense?
>
> nb_entries is not used as a bound check but to avoid walking uselessly
> through the (pre-allocated) array when adding a new element, but I'll
> use an atomic to avoid inconsistencies anyway.


> > Wait a sec... So we have those beasties that can have long-term
> > references to arbitrary inodes stuck in them? What will happen
> > if you get umount(2) called while such a thing exists?
>
> I thought an umount would be denied but no, we get a self-destructed busy
> inode and a bug!
> What about wrapping the inode's superblock->s_op->destroy_inode() to
> first remove the element from the map and then call the real
> destroy_inode(), if any?

What do you mean, _the_ map? I don't see anything to prevent insertion
of references to the same inode into any number of those...

> Or I could update fs/inode.c:destroy_inode() to call inode->free_inode()
> if it is set, and set it when such inode is referenced by a map?
> Or maybe I could hold the referencing file in the map and then wrap its
> f_op?

First of all, anything including the word "wrap" is a non-starter.
We really don't need the headache associated with the locking needed
to replace the method tables on the fly, or with the code checking that
->f_op points to given method table, etc. That's not going to fly,
especially since you'd end up _chaining_ those (again, the same reference
can go in more than once).

Nothing is allowed to change the method tables of live objects, period.
Once a struct file is opened, its ->f_op is never going to change and
it entirely belongs to the device driver or filesystem it resides on.
Nothing else (not VFS, not VM, not some LSM module, etc.) has any business
touching that. The same goes for inodes, dentries, etc.

What kind of behaviour do you want there? Do you want the inodes you've
referenced there to be forgotten on e.g. memory pressure? The thing is,
I don't see how "it's getting freed" could map onto any semantics that
might be useful for you - it looks like the wrong event for that.

2019-06-28 13:17:25

by Mickaël Salaün

Subject: Re: [PATCH bpf-next v9 05/10] bpf,landlock: Add a new map type: inode



On 27/06/2019 18:56, Al Viro wrote:
> On Thu, Jun 27, 2019 at 06:18:12PM +0200, Mickaël Salaün wrote:
>
>>>> +/* called from syscall */
>>>> +static int sys_inode_map_delete_elem(struct bpf_map *map, struct inode *key)
>>>> +{
>>>> + struct inode_array *array = container_of(map, struct inode_array, map);
>>>> + struct inode *inode;
>>>> + int i;
>>>> +
>>>> + WARN_ON_ONCE(!rcu_read_lock_held());
>>>> + for (i = 0; i < array->map.max_entries; i++) {
>>>> + if (array->elems[i].inode == key) {
>>>> + inode = xchg(&array->elems[i].inode, NULL);
>>>> + array->nb_entries--;
>>>
>>> Umm... Is that intended to be atomic in any sense?
>>
>> nb_entries is not used as a bound check but to avoid walking uselessly
>> through the (pre-allocated) array when adding a new element, but I'll
>> use an atomic to avoid inconsistencies anyway.
>
>
>>> Wait a sec... So we have those beasties that can have long-term
>>> references to arbitrary inodes stuck in them? What will happen
>>> if you get umount(2) called while such a thing exists?
>>
>> I thought an umount would be denied but no, we get a self-destructed busy
>> inode and a bug!
>> What about wrapping the inode's superblock->s_op->destroy_inode() to
>> first remove the element from the map and then call the real
>> destroy_inode(), if any?
>
> What do you mean, _the_ map? I don't see anything to prevent insertion
> of references to the same inode into any number of those...

Indeed, the current design needs to check for duplicate inode references
to avoid unused entries (until a reference is removed). I was planning
to use an rbtree, but I'm now working on using a hash table instead (cf.
bpf/hashtab.c), which will solve the issue anyway.
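To illustrate why a hash table helps here (sizes and hashing below are
made up, not what bpf/hashtab.c does): duplicate detection becomes a
cheap probe instead of a linear scan of the pre-allocated array.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Minimal open-addressed pointer set, purely illustrative. */
#define NSLOTS 64

static const void *slots[NSLOTS];

/* Returns true on insertion; false if the pointer is already present
 * (the duplicate-reference case) or the table is full. */
static bool set_insert(const void *p)
{
	size_t h = ((uintptr_t)p >> 4) % NSLOTS;

	for (size_t i = 0; i < NSLOTS; i++) {
		size_t idx = (h + i) % NSLOTS;

		if (slots[idx] == p)
			return false; /* duplicate reference */
		if (!slots[idx]) {
			slots[idx] = p;
			return true;
		}
	}
	return false; /* table full */
}
```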

>
>> Or I could update fs/inode.c:destroy_inode() to call inode->free_inode()
>> if it is set, and set it when such inode is referenced by a map?
>> Or maybe I could hold the referencing file in the map and then wrap its
>> f_op?
>
> First of all, anything including the word "wrap" is a non-starter.
> We really don't need the headache associated with the locking needed
> to replace the method tables on the fly, or with the code checking that
> ->f_op points to given method table, etc. That's not going to fly,
> especially since you'd end up _chaining_ those (again, the same reference
> can go in more than once).
>
> Nothing is allowed to change the method tables of live objects, period.
> Once a struct file is opened, its ->f_op is never going to change and
> it entirely belongs to the device driver or filesystem it resides on.
> Nothing else (not VFS, not VM, not some LSM module, etc.) has any business
> touching that. The same goes for inodes, dentries, etc.
>
> What kind of behaviour do you want there? Do you want the inodes you've
> referenced there to be forgotten on e.g. memory pressure? The thing is,
> I don't see how "it's getting freed" could map onto any semantics that
> might be useful for you - it looks like the wrong event for that.

At least, I would like to be able to compare an inode with the reference
one as long as this reference may be accessible somewhere on the system.
Being able to keep the inode reference as long as its superblock is
alive seems to solve the problem. This enables, for example, comparing
inodes from two bind mounts of the same file system even if one of the
mount points is unmounted.

Storing and using the device ID and the inode number brings a new
problem when an inode is removed and its number is recycled. However, if
I can be notified when such an inode is removed (preferably without
using an LSM hook) and if I can know when the backing device goes out of
the scope of the (live) system (e.g. hot-unplugging a USB drive), this
should solve the problem and also make it possible to keep a reference
to an inode as long as possible, without any dangling pointer or
wrapper.
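For what it's worth, the identity comparison itself is simple; the hard
part is exactly the recycling you describe. A userspace sketch of the
(device, inode number) comparison, which says nothing about lifetime:

```c
#include <stdbool.h>
#include <stdio.h>
#include <sys/stat.h>

/* An inode is identified on a live system by the (st_dev, st_ino) pair;
 * st_ino alone is ambiguous across file systems, and even the pair can
 * be recycled once the inode is deleted -- the problem discussed above. */
static bool same_inode(const struct stat *a, const struct stat *b)
{
	return a->st_dev == b->st_dev && a->st_ino == b->st_ino;
}
```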


--
Mickaël Salaün
ANSSI/SDE/ST/LAM
