2018-05-27 11:26:37

by Sean Young

Subject: [PATCH v5 0/3] IR decoding using BPF

The kernel IR decoders (drivers/media/rc/ir-*-decoder.c) support the most
widely used IR protocols, but there are many protocols which are not
supported[1]. For example, the lirc-remotes[2] repo has over 2700 remotes,
many of which are not supported by rc-core. There is a "long tail" of
unsupported IR protocols, for which lircd is needed to decode the IR.

IR encoding is done in such a way that a simple circuit can decode it;
therefore, bpf is well suited to the task.

In order to support all these protocols, this series adds bpf-based IR
decoding. The idea is that user-space can define a decoder in bpf and attach
it to the rc device through the lirc chardev.
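
As a rough illustration, a minimal decoder can be as small as the sketch
below (it mirrors the selftest added in patch 3/3; the protocol number 0x40
and the way bits are pulled out of the pulse duration are made up for the
example):

#include <linux/bpf.h>
#include <linux/lirc.h>
#include "bpf_helpers.h"

SEC("lirc_mode2")
int bpf_decoder(unsigned int *sample)
{
	/* each invocation receives one lirc mode2 sample (pulse/space/timeout) */
	if (LIRC_IS_PULSE(*sample)) {
		unsigned int duration = LIRC_VALUE(*sample);

		/* pretend long pulses carry a 16-bit scancode */
		if (duration & 0x10000)
			bpf_rc_keydown(sample, 0x40, duration & 0xffff, 0);
	}

	return 0;
}

char _license[] SEC("license") = "GPL";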

Separate work is underway to extend ir-keytable to have an extensive library
of bpf-based decoders, and a much expanded library of rc keymaps.

Another future application would be to compile IRP[3] to an IR BPF program, and
so support virtually every remote without having to write a decoder for each.
It might also be possible to support non-button devices such as analog
directional pads or air conditioning remote controls and decode the target
temperature in bpf, and pass that to an input device.

Thanks,

Sean Young

[1] http://www.hifi-remote.com/wiki/index.php?title=DecodeIR
[2] https://sourceforge.net/p/lirc-remotes/code/ci/master/tree/remotes/
[3] http://www.hifi-remote.com/wiki/index.php?title=IRP_Notation

Changes since v4:
- Renamed rc_dev_bpf_{attach,detach,query} to lirc_bpf_{attach,detach,query}
- Fixed error path in lirc_bpf_query
- Rebased on bpf-next

Changes since v3:
- Implemented review comments from Quentin Monnet and Y Song (thanks!)
- More helpful and better formatted bpf helper documentation
- Changed back to bpf_prog_array rather than open-coded implementation
- scancodes can be 64 bit
- bpf gets passed values in microseconds, not nanoseconds.
microseconds is more than enough (IR receivers support carriers up to
70kHz, at which point a single period is already 14 microseconds). Also,
this makes it much more consistent with lirc mode2.
- Since it looks much more like lirc mode2, rename the program type to
BPF_PROG_TYPE_LIRC_MODE2.
- Rebased on bpf-next

Changes since v2:
- Fixed locking issues
- Improved self-test to cover more cases
- Rebased on bpf-next again

Changes since v1:
- Code review comments from Y Song <[email protected]> and
Randy Dunlap <[email protected]>
- Re-wrote sample bpf to be selftest
- Renamed RAWIR_DECODER -> RAWIR_EVENT (Kconfig, context, bpf prog type)
- Rebase on bpf-next
- Introduced bpf_rawir_event context structure with simpler access checking


Sean Young (3):
bpf: bpf_prog_array_copy() should return -ENOENT if exclude_prog not
found
media: rc: introduce BPF_PROG_LIRC_MODE2
bpf: add selftest for lirc_mode2 type program

drivers/media/rc/Kconfig | 13 +
drivers/media/rc/Makefile | 1 +
drivers/media/rc/bpf-lirc.c | 313 ++++++++++++++++++
drivers/media/rc/lirc_dev.c | 30 ++
drivers/media/rc/rc-core-priv.h | 21 ++
drivers/media/rc/rc-ir-raw.c | 12 +-
include/linux/bpf_lirc.h | 29 ++
include/linux/bpf_types.h | 3 +
include/uapi/linux/bpf.h | 53 ++-
kernel/bpf/core.c | 11 +-
kernel/bpf/syscall.c | 7 +
kernel/trace/bpf_trace.c | 2 +
tools/bpf/bpftool/prog.c | 1 +
tools/include/uapi/linux/bpf.h | 53 ++-
tools/include/uapi/linux/lirc.h | 217 ++++++++++++
tools/lib/bpf/libbpf.c | 1 +
tools/testing/selftests/bpf/.gitignore | 1 +
tools/testing/selftests/bpf/Makefile | 7 +-
tools/testing/selftests/bpf/bpf_helpers.h | 5 +
.../testing/selftests/bpf/test_lirc_mode2.sh | 28 ++
.../selftests/bpf/test_lirc_mode2_kern.c | 23 ++
.../selftests/bpf/test_lirc_mode2_user.c | 149 +++++++++
22 files changed, 971 insertions(+), 9 deletions(-)
create mode 100644 drivers/media/rc/bpf-lirc.c
create mode 100644 include/linux/bpf_lirc.h
create mode 100644 tools/include/uapi/linux/lirc.h
create mode 100755 tools/testing/selftests/bpf/test_lirc_mode2.sh
create mode 100644 tools/testing/selftests/bpf/test_lirc_mode2_kern.c
create mode 100644 tools/testing/selftests/bpf/test_lirc_mode2_user.c

--
2.17.0



2018-05-27 11:25:02

by Sean Young

Subject: [PATCH v5 3/3] bpf: add selftest for lirc_mode2 type program

This is a simple test over rc-loopback.

Acked-by: Yonghong Song <[email protected]>
Signed-off-by: Sean Young <[email protected]>
---
tools/bpf/bpftool/prog.c | 1 +
tools/include/uapi/linux/bpf.h | 53 ++++-
tools/include/uapi/linux/lirc.h | 217 ++++++++++++++++++
tools/lib/bpf/libbpf.c | 1 +
tools/testing/selftests/bpf/.gitignore | 1 +
tools/testing/selftests/bpf/Makefile | 7 +-
tools/testing/selftests/bpf/bpf_helpers.h | 5 +
.../testing/selftests/bpf/test_lirc_mode2.sh | 28 +++
.../selftests/bpf/test_lirc_mode2_kern.c | 23 ++
.../selftests/bpf/test_lirc_mode2_user.c | 149 ++++++++++++
10 files changed, 481 insertions(+), 4 deletions(-)
create mode 100644 tools/include/uapi/linux/lirc.h
create mode 100755 tools/testing/selftests/bpf/test_lirc_mode2.sh
create mode 100644 tools/testing/selftests/bpf/test_lirc_mode2_kern.c
create mode 100644 tools/testing/selftests/bpf/test_lirc_mode2_user.c

diff --git a/tools/bpf/bpftool/prog.c b/tools/bpf/bpftool/prog.c
index 39b88e760367..a4f435203fef 100644
--- a/tools/bpf/bpftool/prog.c
+++ b/tools/bpf/bpftool/prog.c
@@ -71,6 +71,7 @@ static const char * const prog_type_name[] = {
[BPF_PROG_TYPE_SK_MSG] = "sk_msg",
[BPF_PROG_TYPE_RAW_TRACEPOINT] = "raw_tracepoint",
[BPF_PROG_TYPE_CGROUP_SOCK_ADDR] = "cgroup_sock_addr",
+ [BPF_PROG_TYPE_LIRC_MODE2] = "lirc_mode2",
};

static void print_boot_time(__u64 nsecs, char *buf, unsigned int size)
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 9b8c6e310e9a..4636c596096d 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -143,6 +143,7 @@ enum bpf_prog_type {
BPF_PROG_TYPE_RAW_TRACEPOINT,
BPF_PROG_TYPE_CGROUP_SOCK_ADDR,
BPF_PROG_TYPE_LWT_SEG6LOCAL,
+ BPF_PROG_TYPE_LIRC_MODE2,
};

enum bpf_attach_type {
@@ -160,6 +161,7 @@ enum bpf_attach_type {
BPF_CGROUP_INET6_CONNECT,
BPF_CGROUP_INET4_POST_BIND,
BPF_CGROUP_INET6_POST_BIND,
+ BPF_LIRC_MODE2,
__MAX_BPF_ATTACH_TYPE
};

@@ -2004,6 +2006,53 @@ union bpf_attr {
* direct packet access.
* Return
* 0 on success, or a negative error in case of failure.
+ *
+ * int bpf_rc_keydown(void *ctx, u32 protocol, u64 scancode, u32 toggle)
+ * Description
+ * This helper is used in programs implementing IR decoding, to
+ * report a successfully decoded key press with *scancode* and
+ * *toggle* value in the given *protocol*. The scancode will be
+ * translated to a keycode using the rc keymap, and reported as
+ * an input key down event. After a period a key up event is
+ * generated. This period can be extended by calling either
+ * **bpf_rc_keydown** () again with the same values, or calling
+ * **bpf_rc_repeat** ().
+ *
+ * Some protocols include a toggle bit, in case the button was
+ * released and pressed again between consecutive scancodes.
+ *
+ * The *ctx* should point to the lirc sample as passed into
+ * the program.
+ *
+ * The *protocol* is the decoded protocol number (see
+ * **enum rc_proto** for some predefined values).
+ *
+ * This helper is only available if the kernel was compiled with
+ * the **CONFIG_BPF_LIRC_MODE2** configuration option set to
+ * "**y**".
+ *
+ * Return
+ * 0
+ *
+ * int bpf_rc_repeat(void *ctx)
+ * Description
+ * This helper is used in programs implementing IR decoding, to
+ * report a successfully decoded repeat key message. This delays
+ * the generation of a key up event for previously generated
+ * key down event.
+ *
+ * Some IR protocols like NEC have a special IR message for
+ * repeating the last button, for when a button is held down.
+ *
+ * The *ctx* should point to the lirc sample as passed into
+ * the program.
+ *
+ * This helper is only available if the kernel was compiled with
+ * the **CONFIG_BPF_LIRC_MODE2** configuration option set to
+ * "**y**".
+ *
+ * Return
+ * 0
*/
#define __BPF_FUNC_MAPPER(FN) \
FN(unspec), \
@@ -2082,7 +2131,9 @@ union bpf_attr {
FN(lwt_push_encap), \
FN(lwt_seg6_store_bytes), \
FN(lwt_seg6_adjust_srh), \
- FN(lwt_seg6_action),
+ FN(lwt_seg6_action), \
+ FN(rc_repeat), \
+ FN(rc_keydown),

/* integer value in 'imm' field of BPF_CALL instruction selects which helper
* function eBPF program intends to call
diff --git a/tools/include/uapi/linux/lirc.h b/tools/include/uapi/linux/lirc.h
new file mode 100644
index 000000000000..f189931042a7
--- /dev/null
+++ b/tools/include/uapi/linux/lirc.h
@@ -0,0 +1,217 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+/*
+ * lirc.h - linux infrared remote control header file
+ * last modified 2010/07/13 by Jarod Wilson
+ */
+
+#ifndef _LINUX_LIRC_H
+#define _LINUX_LIRC_H
+
+#include <linux/types.h>
+#include <linux/ioctl.h>
+
+#define PULSE_BIT 0x01000000
+#define PULSE_MASK 0x00FFFFFF
+
+#define LIRC_MODE2_SPACE 0x00000000
+#define LIRC_MODE2_PULSE 0x01000000
+#define LIRC_MODE2_FREQUENCY 0x02000000
+#define LIRC_MODE2_TIMEOUT 0x03000000
+
+#define LIRC_VALUE_MASK 0x00FFFFFF
+#define LIRC_MODE2_MASK 0xFF000000
+
+#define LIRC_SPACE(val) (((val)&LIRC_VALUE_MASK) | LIRC_MODE2_SPACE)
+#define LIRC_PULSE(val) (((val)&LIRC_VALUE_MASK) | LIRC_MODE2_PULSE)
+#define LIRC_FREQUENCY(val) (((val)&LIRC_VALUE_MASK) | LIRC_MODE2_FREQUENCY)
+#define LIRC_TIMEOUT(val) (((val)&LIRC_VALUE_MASK) | LIRC_MODE2_TIMEOUT)
+
+#define LIRC_VALUE(val) ((val)&LIRC_VALUE_MASK)
+#define LIRC_MODE2(val) ((val)&LIRC_MODE2_MASK)
+
+#define LIRC_IS_SPACE(val) (LIRC_MODE2(val) == LIRC_MODE2_SPACE)
+#define LIRC_IS_PULSE(val) (LIRC_MODE2(val) == LIRC_MODE2_PULSE)
+#define LIRC_IS_FREQUENCY(val) (LIRC_MODE2(val) == LIRC_MODE2_FREQUENCY)
+#define LIRC_IS_TIMEOUT(val) (LIRC_MODE2(val) == LIRC_MODE2_TIMEOUT)
+
+/* used heavily by lirc userspace */
+#define lirc_t int
+
+/*** lirc compatible hardware features ***/
+
+#define LIRC_MODE2SEND(x) (x)
+#define LIRC_SEND2MODE(x) (x)
+#define LIRC_MODE2REC(x) ((x) << 16)
+#define LIRC_REC2MODE(x) ((x) >> 16)
+
+#define LIRC_MODE_RAW 0x00000001
+#define LIRC_MODE_PULSE 0x00000002
+#define LIRC_MODE_MODE2 0x00000004
+#define LIRC_MODE_SCANCODE 0x00000008
+#define LIRC_MODE_LIRCCODE 0x00000010
+
+
+#define LIRC_CAN_SEND_RAW LIRC_MODE2SEND(LIRC_MODE_RAW)
+#define LIRC_CAN_SEND_PULSE LIRC_MODE2SEND(LIRC_MODE_PULSE)
+#define LIRC_CAN_SEND_MODE2 LIRC_MODE2SEND(LIRC_MODE_MODE2)
+#define LIRC_CAN_SEND_LIRCCODE LIRC_MODE2SEND(LIRC_MODE_LIRCCODE)
+
+#define LIRC_CAN_SEND_MASK 0x0000003f
+
+#define LIRC_CAN_SET_SEND_CARRIER 0x00000100
+#define LIRC_CAN_SET_SEND_DUTY_CYCLE 0x00000200
+#define LIRC_CAN_SET_TRANSMITTER_MASK 0x00000400
+
+#define LIRC_CAN_REC_RAW LIRC_MODE2REC(LIRC_MODE_RAW)
+#define LIRC_CAN_REC_PULSE LIRC_MODE2REC(LIRC_MODE_PULSE)
+#define LIRC_CAN_REC_MODE2 LIRC_MODE2REC(LIRC_MODE_MODE2)
+#define LIRC_CAN_REC_SCANCODE LIRC_MODE2REC(LIRC_MODE_SCANCODE)
+#define LIRC_CAN_REC_LIRCCODE LIRC_MODE2REC(LIRC_MODE_LIRCCODE)
+
+#define LIRC_CAN_REC_MASK LIRC_MODE2REC(LIRC_CAN_SEND_MASK)
+
+#define LIRC_CAN_SET_REC_CARRIER (LIRC_CAN_SET_SEND_CARRIER << 16)
+#define LIRC_CAN_SET_REC_DUTY_CYCLE (LIRC_CAN_SET_SEND_DUTY_CYCLE << 16)
+
+#define LIRC_CAN_SET_REC_DUTY_CYCLE_RANGE 0x40000000
+#define LIRC_CAN_SET_REC_CARRIER_RANGE 0x80000000
+#define LIRC_CAN_GET_REC_RESOLUTION 0x20000000
+#define LIRC_CAN_SET_REC_TIMEOUT 0x10000000
+#define LIRC_CAN_SET_REC_FILTER 0x08000000
+
+#define LIRC_CAN_MEASURE_CARRIER 0x02000000
+#define LIRC_CAN_USE_WIDEBAND_RECEIVER 0x04000000
+
+#define LIRC_CAN_SEND(x) ((x)&LIRC_CAN_SEND_MASK)
+#define LIRC_CAN_REC(x) ((x)&LIRC_CAN_REC_MASK)
+
+#define LIRC_CAN_NOTIFY_DECODE 0x01000000
+
+/*** IOCTL commands for lirc driver ***/
+
+#define LIRC_GET_FEATURES _IOR('i', 0x00000000, __u32)
+
+#define LIRC_GET_SEND_MODE _IOR('i', 0x00000001, __u32)
+#define LIRC_GET_REC_MODE _IOR('i', 0x00000002, __u32)
+#define LIRC_GET_REC_RESOLUTION _IOR('i', 0x00000007, __u32)
+
+#define LIRC_GET_MIN_TIMEOUT _IOR('i', 0x00000008, __u32)
+#define LIRC_GET_MAX_TIMEOUT _IOR('i', 0x00000009, __u32)
+
+/* code length in bits, currently only for LIRC_MODE_LIRCCODE */
+#define LIRC_GET_LENGTH _IOR('i', 0x0000000f, __u32)
+
+#define LIRC_SET_SEND_MODE _IOW('i', 0x00000011, __u32)
+#define LIRC_SET_REC_MODE _IOW('i', 0x00000012, __u32)
+/* Note: these can reset the according pulse_width */
+#define LIRC_SET_SEND_CARRIER _IOW('i', 0x00000013, __u32)
+#define LIRC_SET_REC_CARRIER _IOW('i', 0x00000014, __u32)
+#define LIRC_SET_SEND_DUTY_CYCLE _IOW('i', 0x00000015, __u32)
+#define LIRC_SET_TRANSMITTER_MASK _IOW('i', 0x00000017, __u32)
+
+/*
+ * when a timeout != 0 is set the driver will send a
+ * LIRC_MODE2_TIMEOUT data packet, otherwise LIRC_MODE2_TIMEOUT is
+ * never sent, timeout is disabled by default
+ */
+#define LIRC_SET_REC_TIMEOUT _IOW('i', 0x00000018, __u32)
+
+/* 1 enables, 0 disables timeout reports in MODE2 */
+#define LIRC_SET_REC_TIMEOUT_REPORTS _IOW('i', 0x00000019, __u32)
+
+/*
+ * if enabled from the next key press on the driver will send
+ * LIRC_MODE2_FREQUENCY packets
+ */
+#define LIRC_SET_MEASURE_CARRIER_MODE _IOW('i', 0x0000001d, __u32)
+
+/*
+ * to set a range use LIRC_SET_REC_CARRIER_RANGE with the
+ * lower bound first and later LIRC_SET_REC_CARRIER with the upper bound
+ */
+#define LIRC_SET_REC_CARRIER_RANGE _IOW('i', 0x0000001f, __u32)
+
+#define LIRC_SET_WIDEBAND_RECEIVER _IOW('i', 0x00000023, __u32)
+
+/*
+ * struct lirc_scancode - decoded scancode with protocol for use with
+ * LIRC_MODE_SCANCODE
+ *
+ * @timestamp: Timestamp in nanoseconds using CLOCK_MONOTONIC when IR
+ * was decoded.
+ * @flags: should be 0 for transmit. When receiving scancodes,
+ * LIRC_SCANCODE_FLAG_TOGGLE or LIRC_SCANCODE_FLAG_REPEAT can be set
+ * depending on the protocol
+ * @rc_proto: see enum rc_proto
+ * @keycode: the translated keycode. Set to 0 for transmit.
+ * @scancode: the scancode received or to be sent
+ */
+struct lirc_scancode {
+ __u64 timestamp;
+ __u16 flags;
+ __u16 rc_proto;
+ __u32 keycode;
+ __u64 scancode;
+};
+
+/* Set if the toggle bit of rc-5 or rc-6 is enabled */
+#define LIRC_SCANCODE_FLAG_TOGGLE 1
+/* Set if this is a nec or sanyo repeat */
+#define LIRC_SCANCODE_FLAG_REPEAT 2
+
+/**
+ * enum rc_proto - the Remote Controller protocol
+ *
+ * @RC_PROTO_UNKNOWN: Protocol not known
+ * @RC_PROTO_OTHER: Protocol known but proprietary
+ * @RC_PROTO_RC5: Philips RC5 protocol
+ * @RC_PROTO_RC5X_20: Philips RC5x 20 bit protocol
+ * @RC_PROTO_RC5_SZ: StreamZap variant of RC5
+ * @RC_PROTO_JVC: JVC protocol
+ * @RC_PROTO_SONY12: Sony 12 bit protocol
+ * @RC_PROTO_SONY15: Sony 15 bit protocol
+ * @RC_PROTO_SONY20: Sony 20 bit protocol
+ * @RC_PROTO_NEC: NEC protocol
+ * @RC_PROTO_NECX: Extended NEC protocol
+ * @RC_PROTO_NEC32: NEC 32 bit protocol
+ * @RC_PROTO_SANYO: Sanyo protocol
+ * @RC_PROTO_MCIR2_KBD: RC6-ish MCE keyboard
+ * @RC_PROTO_MCIR2_MSE: RC6-ish MCE mouse
+ * @RC_PROTO_RC6_0: Philips RC6-0-16 protocol
+ * @RC_PROTO_RC6_6A_20: Philips RC6-6A-20 protocol
+ * @RC_PROTO_RC6_6A_24: Philips RC6-6A-24 protocol
+ * @RC_PROTO_RC6_6A_32: Philips RC6-6A-32 protocol
+ * @RC_PROTO_RC6_MCE: MCE (Philips RC6-6A-32 subtype) protocol
+ * @RC_PROTO_SHARP: Sharp protocol
+ * @RC_PROTO_XMP: XMP protocol
+ * @RC_PROTO_CEC: CEC protocol
+ * @RC_PROTO_IMON: iMon Pad protocol
+ */
+enum rc_proto {
+ RC_PROTO_UNKNOWN = 0,
+ RC_PROTO_OTHER = 1,
+ RC_PROTO_RC5 = 2,
+ RC_PROTO_RC5X_20 = 3,
+ RC_PROTO_RC5_SZ = 4,
+ RC_PROTO_JVC = 5,
+ RC_PROTO_SONY12 = 6,
+ RC_PROTO_SONY15 = 7,
+ RC_PROTO_SONY20 = 8,
+ RC_PROTO_NEC = 9,
+ RC_PROTO_NECX = 10,
+ RC_PROTO_NEC32 = 11,
+ RC_PROTO_SANYO = 12,
+ RC_PROTO_MCIR2_KBD = 13,
+ RC_PROTO_MCIR2_MSE = 14,
+ RC_PROTO_RC6_0 = 15,
+ RC_PROTO_RC6_6A_20 = 16,
+ RC_PROTO_RC6_6A_24 = 17,
+ RC_PROTO_RC6_6A_32 = 18,
+ RC_PROTO_RC6_MCE = 19,
+ RC_PROTO_SHARP = 20,
+ RC_PROTO_XMP = 21,
+ RC_PROTO_CEC = 22,
+ RC_PROTO_IMON = 23,
+};
+
+#endif
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index d20411ebfa2f..6cb45898afa7 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -1462,6 +1462,7 @@ static bool bpf_prog_type__needs_kver(enum bpf_prog_type type)
case BPF_PROG_TYPE_CGROUP_DEVICE:
case BPF_PROG_TYPE_SK_MSG:
case BPF_PROG_TYPE_CGROUP_SOCK_ADDR:
+ case BPF_PROG_TYPE_LIRC_MODE2:
return false;
case BPF_PROG_TYPE_UNSPEC:
case BPF_PROG_TYPE_KPROBE:
diff --git a/tools/testing/selftests/bpf/.gitignore b/tools/testing/selftests/bpf/.gitignore
index adc8e5474b66..6ea835982464 100644
--- a/tools/testing/selftests/bpf/.gitignore
+++ b/tools/testing/selftests/bpf/.gitignore
@@ -17,3 +17,4 @@ test_sock_addr
urandom_read
test_btf
test_sockmap
+test_lirc_mode2_user
diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index 85044448bbc7..06400291e6e2 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -24,7 +24,7 @@ urandom_read: urandom_read.c
# Order correspond to 'make run_tests' order
TEST_GEN_PROGS = test_verifier test_tag test_maps test_lru_map test_lpm_map test_progs \
test_align test_verifier_log test_dev_cgroup test_tcpbpf_user \
- test_sock test_btf test_sockmap
+ test_sock test_btf test_sockmap test_lirc_mode2_user

TEST_GEN_FILES = test_pkt_access.o test_xdp.o test_l4lb.o test_tcp_estats.o test_obj_id.o \
test_pkt_md_access.o test_xdp_redirect.o test_xdp_meta.o sockmap_parse_prog.o \
@@ -34,7 +34,7 @@ TEST_GEN_FILES = test_pkt_access.o test_xdp.o test_l4lb.o test_tcp_estats.o test
sockmap_tcp_msg_prog.o connect4_prog.o connect6_prog.o test_adjust_tail.o \
test_btf_haskv.o test_btf_nokv.o test_sockmap_kern.o test_tunnel_kern.o \
test_get_stack_rawtp.o test_sockmap_kern.o test_sockhash_kern.o \
- test_lwt_seg6local.o
+ test_lwt_seg6local.o test_lirc_mode2_kern.o

# Order correspond to 'make run_tests' order
TEST_PROGS := test_kmod.sh \
@@ -44,7 +44,8 @@ TEST_PROGS := test_kmod.sh \
test_offload.py \
test_sock_addr.sh \
test_tunnel.sh \
- test_lwt_seg6local.sh
+ test_lwt_seg6local.sh \
+ test_lirc_mode2.sh

# Compile but not part of 'make run_tests'
TEST_GEN_PROGS_EXTENDED = test_libbpf_open test_sock_addr
diff --git a/tools/testing/selftests/bpf/bpf_helpers.h b/tools/testing/selftests/bpf/bpf_helpers.h
index 334d3e8c5e89..a66a9d91acf4 100644
--- a/tools/testing/selftests/bpf/bpf_helpers.h
+++ b/tools/testing/selftests/bpf/bpf_helpers.h
@@ -126,6 +126,11 @@ static int (*bpf_lwt_seg6_action)(void *ctx, unsigned int action, void *param,
static int (*bpf_lwt_seg6_adjust_srh)(void *ctx, unsigned int offset,
unsigned int len) =
(void *) BPF_FUNC_lwt_seg6_adjust_srh;
+static int (*bpf_rc_repeat)(void *ctx) =
+ (void *) BPF_FUNC_rc_repeat;
+static int (*bpf_rc_keydown)(void *ctx, unsigned int protocol,
+ unsigned long long scancode, unsigned int toggle) =
+ (void *) BPF_FUNC_rc_keydown;

/* llvm builtin functions that eBPF C program may use to
* emit BPF_LD_ABS and BPF_LD_IND instructions
diff --git a/tools/testing/selftests/bpf/test_lirc_mode2.sh b/tools/testing/selftests/bpf/test_lirc_mode2.sh
new file mode 100755
index 000000000000..ce2e15e4f976
--- /dev/null
+++ b/tools/testing/selftests/bpf/test_lirc_mode2.sh
@@ -0,0 +1,28 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+
+GREEN='\033[0;92m'
+RED='\033[0;31m'
+NC='\033[0m' # No Color
+
+modprobe rc-loopback
+
+for i in /sys/class/rc/rc*
+do
+ if grep -q DRV_NAME=rc-loopback $i/uevent
+ then
+ LIRCDEV=$(grep DEVNAME= $i/lirc*/uevent | sed sQDEVNAME=Q/dev/Q)
+ fi
+done
+
+if [ -n "$LIRCDEV" ];
+then
+ TYPE=lirc_mode2
+ ./test_lirc_mode2_user $LIRCDEV
+ ret=$?
+ if [ $ret -ne 0 ]; then
+ echo -e ${RED}"FAIL: $TYPE"${NC}
+ else
+ echo -e ${GREEN}"PASS: $TYPE"${NC}
+ fi
+fi
diff --git a/tools/testing/selftests/bpf/test_lirc_mode2_kern.c b/tools/testing/selftests/bpf/test_lirc_mode2_kern.c
new file mode 100644
index 000000000000..ba26855563a5
--- /dev/null
+++ b/tools/testing/selftests/bpf/test_lirc_mode2_kern.c
@@ -0,0 +1,23 @@
+// SPDX-License-Identifier: GPL-2.0
+// test ir decoder
+//
+// Copyright (C) 2018 Sean Young <[email protected]>
+
+#include <linux/bpf.h>
+#include <linux/lirc.h>
+#include "bpf_helpers.h"
+
+SEC("lirc_mode2")
+int bpf_decoder(unsigned int *sample)
+{
+ if (LIRC_IS_PULSE(*sample)) {
+ unsigned int duration = LIRC_VALUE(*sample);
+
+ if (duration & 0x10000)
+ bpf_rc_keydown(sample, 0x40, duration & 0xffff, 0);
+ }
+
+ return 0;
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/test_lirc_mode2_user.c b/tools/testing/selftests/bpf/test_lirc_mode2_user.c
new file mode 100644
index 000000000000..d470d63c33db
--- /dev/null
+++ b/tools/testing/selftests/bpf/test_lirc_mode2_user.c
@@ -0,0 +1,149 @@
+// SPDX-License-Identifier: GPL-2.0
+// test ir decoder
+//
+// Copyright (C) 2018 Sean Young <[email protected]>
+
+// A lirc chardev is a device representing a consumer IR (cir) device which
+// can receive infrared signals from a remote control and/or transmit IR.
+//
+// IR is sent as a series of pulses and spaces, somewhat like morse code. The
+// BPF program can decode this into scancodes so that rc-core can translate
+// this into input key codes using the rc keymap.
+//
+// This test works by sending IR over rc-loopback, so the IR is processed by
+// BPF and then decoded into scancodes. The lirc chardev must be the one
+// associated with rc-loopback, see the output of ir-keytable(1).
+//
+// The following CONFIG options must be enabled for the test to succeed:
+// CONFIG_RC_CORE=y
+// CONFIG_BPF_LIRC_MODE2=y
+// CONFIG_RC_LOOPBACK=y
+
+// Steps:
+// 1. Open the /dev/lircN device for rc-loopback (given on command line)
+// 2. Attach bpf_lirc_mode2 program which decodes some IR.
+// 3. Send some IR to the same IR device; since it is loopback, this will
+// end up in the bpf program
+// 4. bpf program should decode IR and report keycode
+// 5. We can read keycode from same /dev/lirc device
+
+#include <linux/bpf.h>
+#include <linux/lirc.h>
+#include <errno.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <unistd.h>
+#include <poll.h>
+#include <sys/types.h>
+#include <sys/ioctl.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+
+#include "bpf_util.h"
+#include <bpf/bpf.h>
+#include <bpf/libbpf.h>
+
+int main(int argc, char **argv)
+{
+ struct bpf_object *obj;
+ int ret, lircfd, progfd, mode;
+ int testir = 0x1dead;
+ u32 prog_ids[10], prog_flags[10], prog_cnt;
+
+ if (argc != 2) {
+ printf("Usage: %s /dev/lircN\n", argv[0]);
+ return 2;
+ }
+
+ ret = bpf_prog_load("test_lirc_mode2_kern.o",
+ BPF_PROG_TYPE_LIRC_MODE2, &obj, &progfd);
+ if (ret) {
+ printf("Failed to load bpf program\n");
+ return 1;
+ }
+
+ lircfd = open(argv[1], O_RDWR | O_NONBLOCK);
+ if (lircfd == -1) {
+ printf("failed to open lirc device %s: %m\n", argv[1]);
+ return 1;
+ }
+
+ /* Let's try detach it before it was ever attached */
+ ret = bpf_prog_detach2(progfd, lircfd, BPF_LIRC_MODE2);
+ if (ret != -1 || errno != ENOENT) {
+ printf("bpf_prog_detach2 not attached should fail: %m\n");
+ return 1;
+ }
+
+ mode = LIRC_MODE_SCANCODE;
+ if (ioctl(lircfd, LIRC_SET_REC_MODE, &mode)) {
+ printf("failed to set rec mode: %m\n");
+ return 1;
+ }
+
+ prog_cnt = 10;
+ ret = bpf_prog_query(lircfd, BPF_LIRC_MODE2, 0, prog_flags, prog_ids,
+ &prog_cnt);
+ if (ret) {
+ printf("Failed to query bpf programs on lirc device: %m\n");
+ return 1;
+ }
+
+ if (prog_cnt != 0) {
+ printf("Expected nothing to be attached\n");
+ return 1;
+ }
+
+ ret = bpf_prog_attach(progfd, lircfd, BPF_LIRC_MODE2, 0);
+ if (ret) {
+ printf("Failed to attach bpf to lirc device: %m\n");
+ return 1;
+ }
+
+ /* Write raw IR */
+ ret = write(lircfd, &testir, sizeof(testir));
+ if (ret != sizeof(testir)) {
+ printf("Failed to send test IR message: %m\n");
+ return 1;
+ }
+
+ struct pollfd pfd = { .fd = lircfd, .events = POLLIN };
+ struct lirc_scancode lsc;
+
+ poll(&pfd, 1, 100);
+
+ /* Read decoded IR */
+ ret = read(lircfd, &lsc, sizeof(lsc));
+ if (ret != sizeof(lsc)) {
+ printf("Failed to read decoded IR: %m\n");
+ return 1;
+ }
+
+ if (lsc.scancode != 0xdead || lsc.rc_proto != 64) {
+ printf("Incorrect scancode decoded\n");
+ return 1;
+ }
+
+ prog_cnt = 10;
+ ret = bpf_prog_query(lircfd, BPF_LIRC_MODE2, 0, prog_flags, prog_ids,
+ &prog_cnt);
+ if (ret) {
+ printf("Failed to query bpf programs on lirc device: %m\n");
+ return 1;
+ }
+
+ if (prog_cnt != 1) {
+ printf("Expected one program to be attached\n");
+ return 1;
+ }
+
+ /* Let's try detaching it now it is actually attached */
+ ret = bpf_prog_detach2(progfd, lircfd, BPF_LIRC_MODE2);
+ if (ret) {
+ printf("bpf_prog_detach2: returned %m\n");
+ return 1;
+ }
+
+ return 0;
+}
--
2.17.0


2018-05-27 11:25:18

by Sean Young

Subject: [PATCH v5 1/3] bpf: bpf_prog_array_copy() should return -ENOENT if exclude_prog not found

This makes it possible for bpf prog detach to return -ENOENT.
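
In the lirc case (patch 2/3) the error reaches user space, so a detach of a
program that was never attached can be checked for ENOENT; the sketch below
mirrors the check in test_lirc_mode2_user.c from patch 3/3 (the two fds are
assumed to come from bpf_prog_load() and open() on the lirc chardev):

#include <errno.h>
#include <linux/bpf.h>
#include <bpf/bpf.h>

/* returns 0 if detaching a never-attached program reports ENOENT */
static int expect_detach_enoent(int progfd, int lircfd)
{
	int ret = bpf_prog_detach2(progfd, lircfd, BPF_LIRC_MODE2);

	return (ret == -1 && errno == ENOENT) ? 0 : -1;
}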

Acked-by: Yonghong Song <[email protected]>
Signed-off-by: Sean Young <[email protected]>
---
kernel/bpf/core.c | 11 +++++++++--
kernel/trace/bpf_trace.c | 2 ++
2 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index b574dddc05b8..527587de8a67 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1616,6 +1616,7 @@ int bpf_prog_array_copy(struct bpf_prog_array __rcu *old_array,
int new_prog_cnt, carry_prog_cnt = 0;
struct bpf_prog **existing_prog;
struct bpf_prog_array *array;
+ bool found_exclude = false;
int new_prog_idx = 0;

/* Figure out how many existing progs we need to carry over to
@@ -1624,14 +1625,20 @@ int bpf_prog_array_copy(struct bpf_prog_array __rcu *old_array,
if (old_array) {
existing_prog = old_array->progs;
for (; *existing_prog; existing_prog++) {
- if (*existing_prog != exclude_prog &&
- *existing_prog != &dummy_bpf_prog.prog)
+ if (*existing_prog == exclude_prog) {
+ found_exclude = true;
+ continue;
+ }
+ if (*existing_prog != &dummy_bpf_prog.prog)
carry_prog_cnt++;
if (*existing_prog == include_prog)
return -EEXIST;
}
}

+ if (exclude_prog && !found_exclude)
+ return -ENOENT;
+
/* How many progs (not NULL) will be in the new array? */
new_prog_cnt = carry_prog_cnt;
if (include_prog)
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 81fdf2fc94ac..af1486d9a0ed 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1006,6 +1006,8 @@ void perf_event_detach_bpf_prog(struct perf_event *event)

old_array = event->tp_event->prog_array;
ret = bpf_prog_array_copy(old_array, event->prog, NULL, &new_array);
+ if (ret == -ENOENT)
+ goto unlock;
if (ret < 0) {
bpf_prog_array_delete_safe(old_array, event->prog);
} else {
--
2.17.0


2018-05-27 11:26:41

by Sean Young

Subject: [PATCH v5 2/3] media: rc: introduce BPF_PROG_LIRC_MODE2

Add support for BPF_PROG_LIRC_MODE2. This type of BPF program can call
rc_keydown() to report decoded IR scancodes, or rc_repeat() to report
that the last key should be repeated.

The bpf program can be attached to the device using the bpf(BPF_PROG_ATTACH) syscall;
the target_fd must be the /dev/lircN device.
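
For example, loading and attaching a decoder could look roughly like this
(sketch based on the selftest in patch 3/3; the object file name
"decoder_kern.o" and the minimal error handling are illustrative only):

#include <fcntl.h>
#include <linux/bpf.h>
#include <bpf/bpf.h>
#include <bpf/libbpf.h>

int attach_ir_decoder(const char *lirc_path)
{
	struct bpf_object *obj;
	int lircfd, progfd;

	/* load the BPF_PROG_TYPE_LIRC_MODE2 program from an ELF object */
	if (bpf_prog_load("decoder_kern.o", BPF_PROG_TYPE_LIRC_MODE2,
			  &obj, &progfd))
		return -1;

	lircfd = open(lirc_path, O_RDWR | O_NONBLOCK);
	if (lircfd == -1)
		return -1;

	/* target_fd is the lirc chardev fd, not a cgroup fd */
	return bpf_prog_attach(progfd, lircfd, BPF_LIRC_MODE2, 0);
}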

Acked-by: Yonghong Song <[email protected]>
Signed-off-by: Sean Young <[email protected]>
---
drivers/media/rc/Kconfig | 13 ++
drivers/media/rc/Makefile | 1 +
drivers/media/rc/bpf-lirc.c | 313 ++++++++++++++++++++++++++++++++
drivers/media/rc/lirc_dev.c | 30 +++
drivers/media/rc/rc-core-priv.h | 21 +++
drivers/media/rc/rc-ir-raw.c | 12 +-
include/linux/bpf_lirc.h | 29 +++
include/linux/bpf_types.h | 3 +
include/uapi/linux/bpf.h | 53 +++++-
kernel/bpf/syscall.c | 7 +
10 files changed, 479 insertions(+), 3 deletions(-)
create mode 100644 drivers/media/rc/bpf-lirc.c
create mode 100644 include/linux/bpf_lirc.h

diff --git a/drivers/media/rc/Kconfig b/drivers/media/rc/Kconfig
index eb2c3b6eca7f..d5b35a6ba899 100644
--- a/drivers/media/rc/Kconfig
+++ b/drivers/media/rc/Kconfig
@@ -25,6 +25,19 @@ config LIRC
passes raw IR to and from userspace, which is needed for
IR transmitting (aka "blasting") and for the lirc daemon.

+config BPF_LIRC_MODE2
+ bool "Support for eBPF programs attached to lirc devices"
+ depends on BPF_SYSCALL
+ depends on RC_CORE=y
+ depends on LIRC
+ help
+ Allow attaching eBPF programs to a lirc device using the bpf(2)
+ syscall command BPF_PROG_ATTACH. This is supported for raw IR
+ receivers.
+
+ These eBPF programs can be used to decode IR into scancodes, for
+ IR protocols not supported by the kernel decoders.
+
menuconfig RC_DECODERS
bool "Remote controller decoders"
depends on RC_CORE
diff --git a/drivers/media/rc/Makefile b/drivers/media/rc/Makefile
index 2e1c87066f6c..e0340d043fe8 100644
--- a/drivers/media/rc/Makefile
+++ b/drivers/media/rc/Makefile
@@ -5,6 +5,7 @@ obj-y += keymaps/
obj-$(CONFIG_RC_CORE) += rc-core.o
rc-core-y := rc-main.o rc-ir-raw.o
rc-core-$(CONFIG_LIRC) += lirc_dev.o
+rc-core-$(CONFIG_BPF_LIRC_MODE2) += bpf-lirc.o
obj-$(CONFIG_IR_NEC_DECODER) += ir-nec-decoder.o
obj-$(CONFIG_IR_RC5_DECODER) += ir-rc5-decoder.o
obj-$(CONFIG_IR_RC6_DECODER) += ir-rc6-decoder.o
diff --git a/drivers/media/rc/bpf-lirc.c b/drivers/media/rc/bpf-lirc.c
new file mode 100644
index 000000000000..40826bba06b6
--- /dev/null
+++ b/drivers/media/rc/bpf-lirc.c
@@ -0,0 +1,313 @@
+// SPDX-License-Identifier: GPL-2.0
+// bpf-lirc.c - handles bpf
+//
+// Copyright (C) 2018 Sean Young <[email protected]>
+
+#include <linux/bpf.h>
+#include <linux/filter.h>
+#include <linux/bpf_lirc.h>
+#include "rc-core-priv.h"
+
+/*
+ * BPF interface for raw IR
+ */
+const struct bpf_prog_ops lirc_mode2_prog_ops = {
+};
+
+BPF_CALL_1(bpf_rc_repeat, u32*, sample)
+{
+ struct ir_raw_event_ctrl *ctrl;
+
+ ctrl = container_of(sample, struct ir_raw_event_ctrl, bpf_sample);
+
+ rc_repeat(ctrl->dev);
+
+ return 0;
+}
+
+static const struct bpf_func_proto rc_repeat_proto = {
+ .func = bpf_rc_repeat,
+ .gpl_only = true, /* rc_repeat is EXPORT_SYMBOL_GPL */
+ .ret_type = RET_INTEGER,
+ .arg1_type = ARG_PTR_TO_CTX,
+};
+
+/*
+ * Currently rc-core does not support 64-bit scancodes, but there are many
+ * known protocols with more than 32 bits. So, define the interface as u64
+ * as a future-proof.
+ */
+BPF_CALL_4(bpf_rc_keydown, u32*, sample, u32, protocol, u64, scancode,
+ u32, toggle)
+{
+ struct ir_raw_event_ctrl *ctrl;
+
+ ctrl = container_of(sample, struct ir_raw_event_ctrl, bpf_sample);
+
+ rc_keydown(ctrl->dev, protocol, scancode, toggle != 0);
+
+ return 0;
+}
+
+static const struct bpf_func_proto rc_keydown_proto = {
+ .func = bpf_rc_keydown,
+ .gpl_only = true, /* rc_keydown is EXPORT_SYMBOL_GPL */
+ .ret_type = RET_INTEGER,
+ .arg1_type = ARG_PTR_TO_CTX,
+ .arg2_type = ARG_ANYTHING,
+ .arg3_type = ARG_ANYTHING,
+ .arg4_type = ARG_ANYTHING,
+};
+
+static const struct bpf_func_proto *
+lirc_mode2_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
+{
+ switch (func_id) {
+ case BPF_FUNC_rc_repeat:
+ return &rc_repeat_proto;
+ case BPF_FUNC_rc_keydown:
+ return &rc_keydown_proto;
+ case BPF_FUNC_map_lookup_elem:
+ return &bpf_map_lookup_elem_proto;
+ case BPF_FUNC_map_update_elem:
+ return &bpf_map_update_elem_proto;
+ case BPF_FUNC_map_delete_elem:
+ return &bpf_map_delete_elem_proto;
+ case BPF_FUNC_ktime_get_ns:
+ return &bpf_ktime_get_ns_proto;
+ case BPF_FUNC_tail_call:
+ return &bpf_tail_call_proto;
+ case BPF_FUNC_get_prandom_u32:
+ return &bpf_get_prandom_u32_proto;
+ case BPF_FUNC_trace_printk:
+ if (capable(CAP_SYS_ADMIN))
+ return bpf_get_trace_printk_proto();
+ /* fall through */
+ default:
+ return NULL;
+ }
+}
+
+static bool lirc_mode2_is_valid_access(int off, int size,
+ enum bpf_access_type type,
+ const struct bpf_prog *prog,
+ struct bpf_insn_access_aux *info)
+{
+ /* We have one field of u32 */
+ return type == BPF_READ && off == 0 && size == sizeof(u32);
+}
+
+const struct bpf_verifier_ops lirc_mode2_verifier_ops = {
+ .get_func_proto = lirc_mode2_func_proto,
+ .is_valid_access = lirc_mode2_is_valid_access
+};
+
+#define BPF_MAX_PROGS 64
+
+static int lirc_bpf_attach(struct rc_dev *rcdev, struct bpf_prog *prog)
+{
+ struct bpf_prog_array __rcu *old_array;
+ struct bpf_prog_array *new_array;
+ struct ir_raw_event_ctrl *raw;
+ int ret;
+
+ if (rcdev->driver_type != RC_DRIVER_IR_RAW)
+ return -EINVAL;
+
+ ret = mutex_lock_interruptible(&ir_raw_handler_lock);
+ if (ret)
+ return ret;
+
+ raw = rcdev->raw;
+ if (!raw) {
+ ret = -ENODEV;
+ goto unlock;
+ }
+
+ if (raw->progs && bpf_prog_array_length(raw->progs) >= BPF_MAX_PROGS) {
+ ret = -E2BIG;
+ goto unlock;
+ }
+
+ old_array = raw->progs;
+ ret = bpf_prog_array_copy(old_array, NULL, prog, &new_array);
+ if (ret < 0)
+ goto unlock;
+
+ rcu_assign_pointer(raw->progs, new_array);
+ bpf_prog_array_free(old_array);
+
+unlock:
+ mutex_unlock(&ir_raw_handler_lock);
+ return ret;
+}
+
+static int lirc_bpf_detach(struct rc_dev *rcdev, struct bpf_prog *prog)
+{
+ struct bpf_prog_array __rcu *old_array;
+ struct bpf_prog_array *new_array;
+ struct ir_raw_event_ctrl *raw;
+ int ret;
+
+ if (rcdev->driver_type != RC_DRIVER_IR_RAW)
+ return -EINVAL;
+
+ ret = mutex_lock_interruptible(&ir_raw_handler_lock);
+ if (ret)
+ return ret;
+
+ raw = rcdev->raw;
+ if (!raw) {
+ ret = -ENODEV;
+ goto unlock;
+ }
+
+ old_array = raw->progs;
+ ret = bpf_prog_array_copy(old_array, prog, NULL, &new_array);
+ /*
+ * Do not use bpf_prog_array_delete_safe() as we would end up
+ * with a dummy entry in the array, and then we would free the
+ * dummy in lirc_bpf_free()
+ */
+ if (ret)
+ goto unlock;
+
+ rcu_assign_pointer(raw->progs, new_array);
+ bpf_prog_array_free(old_array);
+unlock:
+ mutex_unlock(&ir_raw_handler_lock);
+ return ret;
+}
+
+void lirc_bpf_run(struct rc_dev *rcdev, u32 sample)
+{
+ struct ir_raw_event_ctrl *raw = rcdev->raw;
+
+ raw->bpf_sample = sample;
+
+ if (raw->progs)
+ BPF_PROG_RUN_ARRAY(raw->progs, &raw->bpf_sample, BPF_PROG_RUN);
+}
+
+/*
+ * This should be called once the rc thread has been stopped, so there can be
+ * no concurrent bpf execution.
+ */
+void lirc_bpf_free(struct rc_dev *rcdev)
+{
+ struct bpf_prog **progs;
+
+ if (!rcdev->raw->progs)
+ return;
+
+ progs = rcu_dereference(rcdev->raw->progs)->progs;
+ while (*progs)
+ bpf_prog_put(*progs++);
+
+ bpf_prog_array_free(rcdev->raw->progs);
+}
+
+int lirc_prog_attach(const union bpf_attr *attr)
+{
+ struct bpf_prog *prog;
+ struct rc_dev *rcdev;
+ int ret;
+
+ if (attr->attach_flags)
+ return -EINVAL;
+
+ prog = bpf_prog_get_type(attr->attach_bpf_fd,
+ BPF_PROG_TYPE_LIRC_MODE2);
+ if (IS_ERR(prog))
+ return PTR_ERR(prog);
+
+ rcdev = rc_dev_get_from_fd(attr->target_fd);
+ if (IS_ERR(rcdev)) {
+ bpf_prog_put(prog);
+ return PTR_ERR(rcdev);
+ }
+
+ ret = lirc_bpf_attach(rcdev, prog);
+ if (ret)
+ bpf_prog_put(prog);
+
+ put_device(&rcdev->dev);
+
+ return ret;
+}
+
+int lirc_prog_detach(const union bpf_attr *attr)
+{
+ struct bpf_prog *prog;
+ struct rc_dev *rcdev;
+ int ret;
+
+ if (attr->attach_flags)
+ return -EINVAL;
+
+ prog = bpf_prog_get_type(attr->attach_bpf_fd,
+ BPF_PROG_TYPE_LIRC_MODE2);
+ if (IS_ERR(prog))
+ return PTR_ERR(prog);
+
+ rcdev = rc_dev_get_from_fd(attr->target_fd);
+ if (IS_ERR(rcdev)) {
+ bpf_prog_put(prog);
+ return PTR_ERR(rcdev);
+ }
+
+ ret = lirc_bpf_detach(rcdev, prog);
+
+ bpf_prog_put(prog);
+ put_device(&rcdev->dev);
+
+ return ret;
+}
+
+int lirc_prog_query(const union bpf_attr *attr, union bpf_attr __user *uattr)
+{
+ __u32 __user *prog_ids = u64_to_user_ptr(attr->query.prog_ids);
+ struct bpf_prog_array __rcu *progs;
+ struct rc_dev *rcdev;
+ u32 cnt, flags = 0;
+ int ret;
+
+ if (attr->query.query_flags)
+ return -EINVAL;
+
+ rcdev = rc_dev_get_from_fd(attr->query.target_fd);
+ if (IS_ERR(rcdev))
+ return PTR_ERR(rcdev);
+
+ if (rcdev->driver_type != RC_DRIVER_IR_RAW) {
+ ret = -EINVAL;
+ goto put;
+ }
+
+ ret = mutex_lock_interruptible(&ir_raw_handler_lock);
+ if (ret)
+ goto put;
+
+ progs = rcdev->raw->progs;
+ cnt = progs ? bpf_prog_array_length(progs) : 0;
+
+ if (copy_to_user(&uattr->query.prog_cnt, &cnt, sizeof(cnt))) {
+ ret = -EFAULT;
+ goto unlock;
+ }
+
+ if (copy_to_user(&uattr->query.attach_flags, &flags, sizeof(flags))) {
+ ret = -EFAULT;
+ goto unlock;
+ }
+
+ if (attr->query.prog_cnt != 0 && prog_ids && cnt)
+ ret = bpf_prog_array_copy_to_user(progs, prog_ids, cnt);
+
+unlock:
+ mutex_unlock(&ir_raw_handler_lock);
+put:
+ put_device(&rcdev->dev);
+
+ return ret;
+}
diff --git a/drivers/media/rc/lirc_dev.c b/drivers/media/rc/lirc_dev.c
index 24e9fbb80e81..da7013a12a58 100644
--- a/drivers/media/rc/lirc_dev.c
+++ b/drivers/media/rc/lirc_dev.c
@@ -20,6 +20,7 @@
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/device.h>
+#include <linux/file.h>
#include <linux/idr.h>
#include <linux/poll.h>
#include <linux/sched.h>
@@ -104,6 +105,12 @@ void ir_lirc_raw_event(struct rc_dev *dev, struct ir_raw_event ev)
TO_US(ev.duration), TO_STR(ev.pulse));
}

+ /*
+ * bpf does not care about the gap generated above; that exists
+ * for backwards compatibility
+ */
+ lirc_bpf_run(dev, sample);
+
spin_lock_irqsave(&dev->lirc_fh_lock, flags);
list_for_each_entry(fh, &dev->lirc_fh, list) {
if (LIRC_IS_TIMEOUT(sample) && !fh->send_timeout_reports)
@@ -816,4 +823,27 @@ void __exit lirc_dev_exit(void)
unregister_chrdev_region(lirc_base_dev, RC_DEV_MAX);
}

+struct rc_dev *rc_dev_get_from_fd(int fd)
+{
+ struct fd f = fdget(fd);
+ struct lirc_fh *fh;
+ struct rc_dev *dev;
+
+ if (!f.file)
+ return ERR_PTR(-EBADF);
+
+ if (f.file->f_op != &lirc_fops) {
+ fdput(f);
+ return ERR_PTR(-EINVAL);
+ }
+
+ fh = f.file->private_data;
+ dev = fh->rc;
+
+ get_device(&dev->dev);
+ fdput(f);
+
+ return dev;
+}
+
MODULE_ALIAS("lirc_dev");
diff --git a/drivers/media/rc/rc-core-priv.h b/drivers/media/rc/rc-core-priv.h
index e0e6a17460f6..eb004757038b 100644
--- a/drivers/media/rc/rc-core-priv.h
+++ b/drivers/media/rc/rc-core-priv.h
@@ -13,6 +13,7 @@
#define MAX_IR_EVENT_SIZE 512

#include <linux/slab.h>
+#include <uapi/linux/bpf.h>
#include <media/rc-core.h>

/**
@@ -57,6 +58,11 @@ struct ir_raw_event_ctrl {
/* raw decoder state follows */
struct ir_raw_event prev_ev;
struct ir_raw_event this_ev;
+
+#ifdef CONFIG_BPF_LIRC_MODE2
+ u32 bpf_sample;
+ struct bpf_prog_array __rcu *progs;
+#endif
struct nec_dec {
int state;
unsigned count;
@@ -126,6 +132,9 @@ struct ir_raw_event_ctrl {
} imon;
};

+/* Mutex for locking raw IR processing and handler change */
+extern struct mutex ir_raw_handler_lock;
+
/* macros for IR decoders */
static inline bool geq_margin(unsigned d1, unsigned d2, unsigned margin)
{
@@ -288,6 +297,7 @@ void ir_lirc_raw_event(struct rc_dev *dev, struct ir_raw_event ev);
void ir_lirc_scancode_event(struct rc_dev *dev, struct lirc_scancode *lsc);
int ir_lirc_register(struct rc_dev *dev);
void ir_lirc_unregister(struct rc_dev *dev);
+struct rc_dev *rc_dev_get_from_fd(int fd);
#else
static inline int lirc_dev_init(void) { return 0; }
static inline void lirc_dev_exit(void) {}
@@ -299,4 +309,15 @@ static inline int ir_lirc_register(struct rc_dev *dev) { return 0; }
static inline void ir_lirc_unregister(struct rc_dev *dev) { }
#endif

+/*
+ * bpf interface
+ */
+#ifdef CONFIG_BPF_LIRC_MODE2
+void lirc_bpf_free(struct rc_dev *dev);
+void lirc_bpf_run(struct rc_dev *dev, u32 sample);
+#else
+static inline void lirc_bpf_free(struct rc_dev *dev) { }
+static inline void lirc_bpf_run(struct rc_dev *dev, u32 sample) { }
+#endif
+
#endif /* _RC_CORE_PRIV */
diff --git a/drivers/media/rc/rc-ir-raw.c b/drivers/media/rc/rc-ir-raw.c
index 374f83105a23..7675b7ee5bc7 100644
--- a/drivers/media/rc/rc-ir-raw.c
+++ b/drivers/media/rc/rc-ir-raw.c
@@ -14,7 +14,7 @@
static LIST_HEAD(ir_raw_client_list);

/* Used to handle IR raw handler extensions */
-static DEFINE_MUTEX(ir_raw_handler_lock);
+DEFINE_MUTEX(ir_raw_handler_lock);
static LIST_HEAD(ir_raw_handler_list);
static atomic64_t available_protocols = ATOMIC64_INIT(0);

@@ -621,9 +621,17 @@ void ir_raw_event_unregister(struct rc_dev *dev)
list_for_each_entry(handler, &ir_raw_handler_list, list)
if (handler->raw_unregister)
handler->raw_unregister(dev);
- mutex_unlock(&ir_raw_handler_lock);
+
+ lirc_bpf_free(dev);

ir_raw_event_free(dev);
+
+ /*
+ * A user can be calling bpf(BPF_PROG_{QUERY|ATTACH|DETACH}), so
+ * ensure that the raw member is null on unlock; this is how
+ * "device gone" is checked.
+ */
+ mutex_unlock(&ir_raw_handler_lock);
}

/*
diff --git a/include/linux/bpf_lirc.h b/include/linux/bpf_lirc.h
new file mode 100644
index 000000000000..5f8a4283092d
--- /dev/null
+++ b/include/linux/bpf_lirc.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _BPF_LIRC_H
+#define _BPF_LIRC_H
+
+#include <uapi/linux/bpf.h>
+
+#ifdef CONFIG_BPF_LIRC_MODE2
+int lirc_prog_attach(const union bpf_attr *attr);
+int lirc_prog_detach(const union bpf_attr *attr);
+int lirc_prog_query(const union bpf_attr *attr, union bpf_attr __user *uattr);
+#else
+static inline int lirc_prog_attach(const union bpf_attr *attr)
+{
+ return -EINVAL;
+}
+
+static inline int lirc_prog_detach(const union bpf_attr *attr)
+{
+ return -EINVAL;
+}
+
+static inline int lirc_prog_query(const union bpf_attr *attr,
+ union bpf_attr __user *uattr)
+{
+ return -EINVAL;
+}
+#endif
+
+#endif /* _BPF_LIRC_H */
diff --git a/include/linux/bpf_types.h b/include/linux/bpf_types.h
index b161e506dcfc..c5700c2d5549 100644
--- a/include/linux/bpf_types.h
+++ b/include/linux/bpf_types.h
@@ -26,6 +26,9 @@ BPF_PROG_TYPE(BPF_PROG_TYPE_RAW_TRACEPOINT, raw_tracepoint)
#ifdef CONFIG_CGROUP_BPF
BPF_PROG_TYPE(BPF_PROG_TYPE_CGROUP_DEVICE, cg_dev)
#endif
+#ifdef CONFIG_BPF_LIRC_MODE2
+BPF_PROG_TYPE(BPF_PROG_TYPE_LIRC_MODE2, lirc_mode2)
+#endif

BPF_MAP_TYPE(BPF_MAP_TYPE_ARRAY, array_map_ops)
BPF_MAP_TYPE(BPF_MAP_TYPE_PERCPU_ARRAY, percpu_array_map_ops)
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 9b8c6e310e9a..4636c596096d 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -143,6 +143,7 @@ enum bpf_prog_type {
BPF_PROG_TYPE_RAW_TRACEPOINT,
BPF_PROG_TYPE_CGROUP_SOCK_ADDR,
BPF_PROG_TYPE_LWT_SEG6LOCAL,
+ BPF_PROG_TYPE_LIRC_MODE2,
};

enum bpf_attach_type {
@@ -160,6 +161,7 @@ enum bpf_attach_type {
BPF_CGROUP_INET6_CONNECT,
BPF_CGROUP_INET4_POST_BIND,
BPF_CGROUP_INET6_POST_BIND,
+ BPF_LIRC_MODE2,
__MAX_BPF_ATTACH_TYPE
};

@@ -2004,6 +2006,53 @@ union bpf_attr {
* direct packet access.
* Return
* 0 on success, or a negative error in case of failure.
+ *
+ * int bpf_rc_keydown(void *ctx, u32 protocol, u64 scancode, u32 toggle)
+ * Description
+ * This helper is used in programs implementing IR decoding, to
+ * report a successfully decoded key press with *scancode* and
+ * *toggle* value in the given *protocol*. The scancode will be
+ * translated to a keycode using the rc keymap, and reported as
+ * an input key down event. After a period a key up event is
+ * generated. This period can be extended by calling either
+ * **bpf_rc_keydown** () again with the same values, or calling
+ * **bpf_rc_repeat** ().
+ *
+ * Some protocols include a toggle bit, in case the button was
+ * released and pressed again between consecutive scancodes.
+ *
+ * The *ctx* should point to the lirc sample as passed into
+ * the program.
+ *
+ * The *protocol* is the decoded protocol number (see
+ * **enum rc_proto** for some predefined values).
+ *
+ * This helper is only available if the kernel was compiled with
+ * the **CONFIG_BPF_LIRC_MODE2** configuration option set to
+ * "**y**".
+ *
+ * Return
+ * 0
+ *
+ * int bpf_rc_repeat(void *ctx)
+ * Description
+ * This helper is used in programs implementing IR decoding, to
+ * report a successfully decoded repeat key message. This delays
+ * the generation of a key up event for the previously generated
+ * key down event.
+ *
+ * Some IR protocols like NEC have a special IR message for
+ * repeating the last button, for when a button is held down.
+ *
+ * The *ctx* should point to the lirc sample as passed into
+ * the program.
+ *
+ * This helper is only available if the kernel was compiled with
+ * the **CONFIG_BPF_LIRC_MODE2** configuration option set to
+ * "**y**".
+ *
+ * Return
+ * 0
*/
#define __BPF_FUNC_MAPPER(FN) \
FN(unspec), \
@@ -2082,7 +2131,9 @@ union bpf_attr {
FN(lwt_push_encap), \
FN(lwt_seg6_store_bytes), \
FN(lwt_seg6_adjust_srh), \
- FN(lwt_seg6_action),
+ FN(lwt_seg6_action), \
+ FN(rc_repeat), \
+ FN(rc_keydown),

/* integer value in 'imm' field of BPF_CALL instruction selects which helper
* function eBPF program intends to call
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 388d4feda348..3c104113d040 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -11,6 +11,7 @@
*/
#include <linux/bpf.h>
#include <linux/bpf_trace.h>
+#include <linux/bpf_lirc.h>
#include <linux/btf.h>
#include <linux/syscalls.h>
#include <linux/slab.h>
@@ -1578,6 +1579,8 @@ static int bpf_prog_attach(const union bpf_attr *attr)
case BPF_SK_SKB_STREAM_PARSER:
case BPF_SK_SKB_STREAM_VERDICT:
return sockmap_get_from_fd(attr, BPF_PROG_TYPE_SK_SKB, true);
+ case BPF_LIRC_MODE2:
+ return lirc_prog_attach(attr);
default:
return -EINVAL;
}
@@ -1648,6 +1651,8 @@ static int bpf_prog_detach(const union bpf_attr *attr)
case BPF_SK_SKB_STREAM_PARSER:
case BPF_SK_SKB_STREAM_VERDICT:
return sockmap_get_from_fd(attr, BPF_PROG_TYPE_SK_SKB, false);
+ case BPF_LIRC_MODE2:
+ return lirc_prog_detach(attr);
default:
return -EINVAL;
}
@@ -1695,6 +1700,8 @@ static int bpf_prog_query(const union bpf_attr *attr,
case BPF_CGROUP_SOCK_OPS:
case BPF_CGROUP_DEVICE:
break;
+ case BPF_LIRC_MODE2:
+ return lirc_prog_query(attr, uattr);
default:
return -EINVAL;
}
--
2.17.0


2018-05-30 10:58:57

by Daniel Borkmann

Subject: Re: [PATCH v5 0/3] IR decoding using BPF

On 05/27/2018 01:24 PM, Sean Young wrote:
> The kernel IR decoders (drivers/media/rc/ir-*-decoder.c) support the most
> widely used IR protocols, but there are many protocols which are not
> supported[1]. For example, the lirc-remotes[2] repo has over 2700 remotes,
> many of which are not supported by rc-core. There is a "long tail" of
> unsupported IR protocols, for which lircd is needed to decode the IR.
>
> IR encoding is done in such a way that a simple circuit can decode it;
> therefore, bpf is well suited to the task.
>
> In order to support all these protocols, this series adds bpf-based IR
> decoding. The idea is that user-space can define a decoder in bpf and attach
> it to the rc device through the lirc chardev.
>
> Separate work is underway to extend ir-keytable to have an extensive library
> of bpf-based decoders, and a much expanded library of rc keymaps.
>
> Another future application would be to compile IRP[3] to an IR BPF program, and
> so support virtually every remote without having to write a decoder for each.
> It might also be possible to support non-button devices such as analog
> directional pads or air conditioning remote controls and decode the target
> temperature in bpf, and pass that to an input device.
>
> Thanks,
>
> Sean Young
>
> [1] http://www.hifi-remote.com/wiki/index.php?title=DecodeIR
> [2] https://sourceforge.net/p/lirc-remotes/code/ci/master/tree/remotes/
> [3] http://www.hifi-remote.com/wiki/index.php?title=IRP_Notation
>
> Changes since v4:
> - Renamed rc_dev_bpf_{attach,detach,query} to lirc_bpf_{attach,detach,query}
> - Fixed error path in lirc_bpf_query
> - Rebased on bpf-next
>
> Changes since v3:
> - Implemented review comments from Quentin Monnet and Y Song (thanks!)
> - More helpful and better formatted bpf helper documentation
> - Changed back to bpf_prog_array rather than open-coded implementation
> - scancodes can be 64 bit
> - bpf gets passed values in microseconds, not nanoseconds.
> microseconds is more than enough (IR receivers support carriers up to
> 70kHz, at which point a single period is already 14 microseconds). Also,
> this makes it much more consistent with lirc mode2.
> - Since it looks much more like lirc mode2, rename the program type to
> BPF_PROG_TYPE_LIRC_MODE2.
> - Rebased on bpf-next
>
> Changes since v2:
> - Fixed locking issues
> - Improved self-test to cover more cases
> - Rebased on bpf-next again
>
> Changes since v1:
> - Code review comments from Y Song <[email protected]> and
> Randy Dunlap <[email protected]>
> - Re-wrote sample bpf to be selftest
> - Renamed RAWIR_DECODER -> RAWIR_EVENT (Kconfig, context, bpf prog type)
> - Rebase on bpf-next
> - Introduced bpf_rawir_event context structure with simpler access checking

Applied to bpf-next, thanks Sean!

2018-06-04 17:48:21

by Matthias Reichl

Subject: Re: [PATCH v5 2/3] media: rc: introduce BPF_PROG_LIRC_MODE2

Hi Sean,

I finally found the time to test your patch series and noticed
2 issues - comments are inline

On Sun, May 27, 2018 at 12:24:09PM +0100, Sean Young wrote:
> diff --git a/drivers/media/rc/Kconfig b/drivers/media/rc/Kconfig
> index eb2c3b6eca7f..d5b35a6ba899 100644
> --- a/drivers/media/rc/Kconfig
> +++ b/drivers/media/rc/Kconfig
> @@ -25,6 +25,19 @@ config LIRC
> passes raw IR to and from userspace, which is needed for
> IR transmitting (aka "blasting") and for the lirc daemon.
>
> +config BPF_LIRC_MODE2
> + bool "Support for eBPF programs attached to lirc devices"
> + depends on BPF_SYSCALL
> + depends on RC_CORE=y

Requiring rc-core to be built into the kernel could become
problematic in the future for people using media_build.

Currently the whole media tree (including rc-core) can be built
as modules so DVB and IR drivers can be replaced by newer versions.
But with rc-core in the kernel things could easily break if internal
data structures are changed.

Maybe we should add a small layer with a stable API/ABI between
bpf-lirc and rc-core to decouple them? Or would it be possible
to build rc-core with bpf support as a module?

> + depends on LIRC
> + help
> + Allow attaching eBPF programs to a lirc device using the bpf(2)
> + syscall command BPF_PROG_ATTACH. This is supported for raw IR
> + receivers.
> +
> + These eBPF programs can be used to decode IR into scancodes, for
> + IR protocols not supported by the kernel decoders.
> +
> menuconfig RC_DECODERS
> bool "Remote controller decoders"
> depends on RC_CORE
> [...]
> diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
> index 388d4feda348..3c104113d040 100644
> --- a/kernel/bpf/syscall.c
> +++ b/kernel/bpf/syscall.c
> @@ -11,6 +11,7 @@
> */
> #include <linux/bpf.h>
> #include <linux/bpf_trace.h>
> +#include <linux/bpf_lirc.h>
> #include <linux/btf.h>
> #include <linux/syscalls.h>
> #include <linux/slab.h>
> @@ -1578,6 +1579,8 @@ static int bpf_prog_attach(const union bpf_attr *attr)
> case BPF_SK_SKB_STREAM_PARSER:
> case BPF_SK_SKB_STREAM_VERDICT:
> return sockmap_get_from_fd(attr, BPF_PROG_TYPE_SK_SKB, true);
> + case BPF_LIRC_MODE2:
> + return lirc_prog_attach(attr);
> default:
> return -EINVAL;
> }
> @@ -1648,6 +1651,8 @@ static int bpf_prog_detach(const union bpf_attr *attr)
> case BPF_SK_SKB_STREAM_PARSER:
> case BPF_SK_SKB_STREAM_VERDICT:
> return sockmap_get_from_fd(attr, BPF_PROG_TYPE_SK_SKB, false);
> + case BPF_LIRC_MODE2:
> + return lirc_prog_detach(attr);
> default:
> return -EINVAL;
> }
> @@ -1695,6 +1700,8 @@ static int bpf_prog_query(const union bpf_attr *attr,
> case BPF_CGROUP_SOCK_OPS:
> case BPF_CGROUP_DEVICE:
> break;
> + case BPF_LIRC_MODE2:
> + return lirc_prog_query(attr, uattr);

When testing this patch series I was wondering why I always got
-EINVAL when trying to query the registered programs.

Closer inspection revealed that bpf_prog_attach/detach/query and
calls to them in the bpf syscall are in "#ifdef CONFIG_CGROUP_BPF"
blocks - and as I built the kernel without CONFIG_CGROUP_BPF
BPF_PROG_ATTACH/DETACH/QUERY weren't handled in the syscall switch
and I got -EINVAL from the bpf syscall function.

I haven't checked in detail yet, but it looks to me like
bpf_prog_attach/detach/query could always be built (or when
either cgroup bpf or lirc bpf are enabled) and the #ifdefs moved
inside the switch(). So lirc bpf could be used without cgroup bpf.
Or am I missing something?
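
Something along these lines is what I had in mind (untested sketch, only two
attach types shown; cgroup_bpf_prog_attach() is just a placeholder name for
the cgroup path):

static int bpf_prog_attach(const union bpf_attr *attr)
{
	if (CHECK_ATTR(BPF_PROG_ATTACH))
		return -EINVAL;

	switch (attr->attach_type) {
#ifdef CONFIG_CGROUP_BPF
	case BPF_CGROUP_INET_INGRESS:
		return cgroup_bpf_prog_attach(attr, BPF_PROG_TYPE_CGROUP_SKB);
#endif
	case BPF_LIRC_MODE2:
		return lirc_prog_attach(attr);
	default:
		return -EINVAL;
	}
}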

so long,

Hias
> default:
> return -EINVAL;
> }
> --
> 2.17.0
>

2018-06-05 10:17:23

by Sean Young

Subject: Re: [PATCH v5 2/3] media: rc: introduce BPF_PROG_LIRC_MODE2

On Mon, Jun 04, 2018 at 07:47:30PM +0200, Matthias Reichl wrote:
> Hi Sean,
>
> I finally found the time to test your patch series and noticed
> 2 issues - comments are inline
>
> On Sun, May 27, 2018 at 12:24:09PM +0100, Sean Young wrote:
> > diff --git a/drivers/media/rc/Kconfig b/drivers/media/rc/Kconfig
> > index eb2c3b6eca7f..d5b35a6ba899 100644
> > --- a/drivers/media/rc/Kconfig
> > +++ b/drivers/media/rc/Kconfig
> > @@ -25,6 +25,19 @@ config LIRC
> > passes raw IR to and from userspace, which is needed for
> > IR transmitting (aka "blasting") and for the lirc daemon.
> >
> > +config BPF_LIRC_MODE2
> > + bool "Support for eBPF programs attached to lirc devices"
> > + depends on BPF_SYSCALL
> > + depends on RC_CORE=y
>
> Requiring rc-core to be built into the kernel could become
> problematic in the future for people using media_build.
>
> Currently the whole media tree (including rc-core) can be built
> as modules so DVB and IR drivers can be replaced by newer versions.
> But with rc-core in the kernel things could easily break if internal
> data structures are changed.
>
> Maybe we should add a small layer with a stable API/ABI between
> bpf-lirc and rc-core to decouple them? Or would it be possible
> to build rc-core with bpf support as a module?

Unfortunately bpf cannot be built as a module.

> > + depends on LIRC
> > + help
> > + Allow attaching eBPF programs to a lirc device using the bpf(2)
> > + syscall command BPF_PROG_ATTACH. This is supported for raw IR
> > + receivers.
> > +
> > + These eBPF programs can be used to decode IR into scancodes, for
> > + IR protocols not supported by the kernel decoders.
> > +
> > menuconfig RC_DECODERS
> > bool "Remote controller decoders"
> > depends on RC_CORE
> > [...]
> > diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
> > index 388d4feda348..3c104113d040 100644
> > --- a/kernel/bpf/syscall.c
> > +++ b/kernel/bpf/syscall.c
> > @@ -11,6 +11,7 @@
> > */
> > #include <linux/bpf.h>
> > #include <linux/bpf_trace.h>
> > +#include <linux/bpf_lirc.h>
> > #include <linux/btf.h>
> > #include <linux/syscalls.h>
> > #include <linux/slab.h>
> > @@ -1578,6 +1579,8 @@ static int bpf_prog_attach(const union bpf_attr *attr)
> > case BPF_SK_SKB_STREAM_PARSER:
> > case BPF_SK_SKB_STREAM_VERDICT:
> > return sockmap_get_from_fd(attr, BPF_PROG_TYPE_SK_SKB, true);
> > + case BPF_LIRC_MODE2:
> > + return lirc_prog_attach(attr);
> > default:
> > return -EINVAL;
> > }
> > @@ -1648,6 +1651,8 @@ static int bpf_prog_detach(const union bpf_attr *attr)
> > case BPF_SK_SKB_STREAM_PARSER:
> > case BPF_SK_SKB_STREAM_VERDICT:
> > return sockmap_get_from_fd(attr, BPF_PROG_TYPE_SK_SKB, false);
> > + case BPF_LIRC_MODE2:
> > + return lirc_prog_detach(attr);
> > default:
> > return -EINVAL;
> > }
> > @@ -1695,6 +1700,8 @@ static int bpf_prog_query(const union bpf_attr *attr,
> > case BPF_CGROUP_SOCK_OPS:
> > case BPF_CGROUP_DEVICE:
> > break;
> > + case BPF_LIRC_MODE2:
> > + return lirc_prog_query(attr, uattr);
>
> When testing this patch series I was wondering why I always got
> -EINVAL when trying to query the registered programs.
>
> Closer inspection revealed that bpf_prog_attach/detach/query and
> calls to them in the bpf syscall are in "#ifdef CONFIG_CGROUP_BPF"
> blocks - and as I built the kernel without CONFIG_CGROUP_BPF
> BPF_PROG_ATTACH/DETACH/QUERY weren't handled in the syscall switch
> and I got -EINVAL from the bpf syscall function.
>
> I haven't checked in detail yet, but it looks to me like
> bpf_prog_attach/detach/query could always be built (or when
> either cgroup bpf or lirc bpf are enabled) and the #ifdefs moved
> inside the switch(). So lirc bpf could be used without cgroup bpf.
> Or am I missing something?

You are right, this feature depends on CONFIG_CGROUP_BPF right now. This
also affects the BPF_SK_MSG_VERDICT, BPF_SK_SKB_STREAM_VERDICT and
BPF_SK_SKB_STREAM_PARSER type bpf attachments, and as far as I know
these shouldn't depend on CONFIG_CGROUP_BPF either.


Sean

2018-06-05 10:36:03

by Daniel Borkmann

Subject: Re: [PATCH v5 2/3] media: rc: introduce BPF_PROG_LIRC_MODE2

On 06/05/2018 12:16 PM, Sean Young wrote:
> On Mon, Jun 04, 2018 at 07:47:30PM +0200, Matthias Reichl wrote:
[...]
>>> @@ -1695,6 +1700,8 @@ static int bpf_prog_query(const union bpf_attr *attr,
>>> case BPF_CGROUP_SOCK_OPS:
>>> case BPF_CGROUP_DEVICE:
>>> break;
>>> + case BPF_LIRC_MODE2:
>>> + return lirc_prog_query(attr, uattr);
>>
>> When testing this patch series I was wondering why I always got
>> -EINVAL when trying to query the registered programs.
>>
>> Closer inspection revealed that bpf_prog_attach/detach/query and
>> calls to them in the bpf syscall are in "#ifdef CONFIG_CGROUP_BPF"
>> blocks - and as I built the kernel without CONFIG_CGROUP_BPF
>> BPF_PROG_ATTACH/DETACH/QUERY weren't handled in the syscall switch
>> and I got -EINVAL from the bpf syscall function.
>>
>> I haven't checked in detail yet, but it looks to me like
>> bpf_prog_attach/detach/query could always be built (or when
>> either cgroup bpf or lirc bpf are enabled) and the #ifdefs moved
>> inside the switch(). So lirc bpf could be used without cgroup bpf.
>> Or am I missing something?
>
> You are right, this feature depends on CONFIG_CGROUP_BPF right now. This
> also affects the BPF_SK_MSG_VERDICT, BPF_SK_SKB_STREAM_VERDICT and
> BPF_SK_SKB_STREAM_PARSER type bpf attachments, and as far as I know
> these shouldn't depend on CONFIG_CGROUP_BPF either.

The latter three do depend on it from a bigger picture as sockmap progs
are orchestrated via cgroups. But I'd be fine if you decouple lirc from
it since there's really no dependency at all, and presumably there are
valid cases where you would want to run it on low-end devices with minimal
kernel.

Thanks,
Daniel

2018-06-06 21:33:12

by Sean Young

[permalink] [raw]
Subject: [PATCH] bpf: attach type BPF_LIRC_MODE2 should not depend on CONFIG_BPF_CGROUP

Compile bpf_prog_{attach,detach,query} even if CONFIG_BPF_CGROUP is not
set.

Signed-off-by: Sean Young <[email protected]>
---
include/linux/bpf-cgroup.h | 31 +++++++++++
kernel/bpf/cgroup.c | 110 +++++++++++++++++++++++++++++++++++++
kernel/bpf/syscall.c | 105 ++---------------------------------
3 files changed, 145 insertions(+), 101 deletions(-)

diff --git a/include/linux/bpf-cgroup.h b/include/linux/bpf-cgroup.h
index 975fb4cf1bb7..ee67cd35f426 100644
--- a/include/linux/bpf-cgroup.h
+++ b/include/linux/bpf-cgroup.h
@@ -188,12 +188,43 @@ int __cgroup_bpf_check_dev_permission(short dev_type, u32 major, u32 minor,
\
__ret; \
})
+int sockmap_get_from_fd(const union bpf_attr *attr, int type, bool attach);
+int cgroup_bpf_prog_attach(const union bpf_attr *attr,
+ enum bpf_prog_type ptype);
+int cgroup_bpf_prog_detach(const union bpf_attr *attr,
+ enum bpf_prog_type ptype);
+int cgroup_bpf_prog_query(const union bpf_attr *attr,
+ union bpf_attr __user *uattr);
#else

struct cgroup_bpf {};
static inline void cgroup_bpf_put(struct cgroup *cgrp) {}
static inline int cgroup_bpf_inherit(struct cgroup *cgrp) { return 0; }

+static inline int sockmap_get_from_fd(const union bpf_attr *attr,
+ int type, bool attach)
+{
+ return -EINVAL;
+}
+
+static inline int cgroup_bpf_prog_attach(const union bpf_attr *attr,
+ enum bpf_prog_type ptype)
+{
+ return -EINVAL;
+}
+
+static inline int cgroup_bpf_prog_detach(const union bpf_attr *attr,
+ enum bpf_prog_type ptype)
+{
+ return -EINVAL;
+}
+
+static inline int cgroup_bpf_prog_query(const union bpf_attr *attr,
+ union bpf_attr __user *uattr)
+{
+ return -EINVAL;
+}
+
#define cgroup_bpf_enabled (0)
#define BPF_CGROUP_PRE_CONNECT_ENABLED(sk) (0)
#define BPF_CGROUP_RUN_PROG_INET_INGRESS(sk,skb) ({ 0; })
diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
index f7c00bd6f8e4..d6e18f9dc0c4 100644
--- a/kernel/bpf/cgroup.c
+++ b/kernel/bpf/cgroup.c
@@ -428,6 +428,116 @@ int __cgroup_bpf_query(struct cgroup *cgrp, const union bpf_attr *attr,
return ret;
}

+int bpf_prog_attach_check_attach_type(const struct bpf_prog *prog,
+ enum bpf_attach_type attach_type)
+{
+ switch (prog->type) {
+ case BPF_PROG_TYPE_CGROUP_SOCK:
+ case BPF_PROG_TYPE_CGROUP_SOCK_ADDR:
+ return attach_type == prog->expected_attach_type ? 0 : -EINVAL;
+ default:
+ return 0;
+ }
+}
+
+int sockmap_get_from_fd(const union bpf_attr *attr, int type, bool attach)
+{
+ struct bpf_prog *prog = NULL;
+ int ufd = attr->target_fd;
+ struct bpf_map *map;
+ struct fd f;
+ int err;
+
+ f = fdget(ufd);
+ map = __bpf_map_get(f);
+ if (IS_ERR(map))
+ return PTR_ERR(map);
+
+ if (attach) {
+ prog = bpf_prog_get_type(attr->attach_bpf_fd, type);
+ if (IS_ERR(prog)) {
+ fdput(f);
+ return PTR_ERR(prog);
+ }
+ }
+
+ err = sock_map_prog(map, prog, attr->attach_type);
+ if (err) {
+ fdput(f);
+ if (prog)
+ bpf_prog_put(prog);
+ return err;
+ }
+
+ fdput(f);
+ return 0;
+}
+
+int cgroup_bpf_prog_attach(const union bpf_attr *attr, enum bpf_prog_type ptype)
+{
+ struct bpf_prog *prog;
+ struct cgroup *cgrp;
+ int ret;
+
+ prog = bpf_prog_get_type(attr->attach_bpf_fd, ptype);
+ if (IS_ERR(prog))
+ return PTR_ERR(prog);
+
+ if (bpf_prog_attach_check_attach_type(prog, attr->attach_type)) {
+ bpf_prog_put(prog);
+ return -EINVAL;
+ }
+
+ cgrp = cgroup_get_from_fd(attr->target_fd);
+ if (IS_ERR(cgrp)) {
+ bpf_prog_put(prog);
+ return PTR_ERR(cgrp);
+ }
+
+ ret = cgroup_bpf_attach(cgrp, prog, attr->attach_type,
+ attr->attach_flags);
+ if (ret)
+ bpf_prog_put(prog);
+ cgroup_put(cgrp);
+
+ return ret;
+}
+
+int cgroup_bpf_prog_detach(const union bpf_attr *attr, enum bpf_prog_type ptype)
+{
+ struct bpf_prog *prog;
+ struct cgroup *cgrp;
+ int ret;
+
+ cgrp = cgroup_get_from_fd(attr->target_fd);
+ if (IS_ERR(cgrp))
+ return PTR_ERR(cgrp);
+
+ prog = bpf_prog_get_type(attr->attach_bpf_fd, ptype);
+ if (IS_ERR(prog))
+ prog = NULL;
+
+ ret = cgroup_bpf_detach(cgrp, prog, attr->attach_type, 0);
+ if (prog)
+ bpf_prog_put(prog);
+ cgroup_put(cgrp);
+ return ret;
+}
+
+int cgroup_bpf_prog_query(const union bpf_attr *attr,
+ union bpf_attr __user *uattr)
+{
+ struct cgroup *cgrp;
+ int ret;
+
+ cgrp = cgroup_get_from_fd(attr->query.target_fd);
+ if (IS_ERR(cgrp))
+ return PTR_ERR(cgrp);
+ ret = cgroup_bpf_query(cgrp, attr, uattr);
+ cgroup_put(cgrp);
+ return ret;
+}
+
/**
* __cgroup_bpf_run_filter_skb() - Run a program for packet filtering
* @sk: The socket sending or receiving traffic
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 0fa20624707f..52fa44856623 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -1489,65 +1489,14 @@ static int bpf_raw_tracepoint_open(const union bpf_attr *attr)
return err;
}

-#ifdef CONFIG_CGROUP_BPF
-
-static int bpf_prog_attach_check_attach_type(const struct bpf_prog *prog,
- enum bpf_attach_type attach_type)
-{
- switch (prog->type) {
- case BPF_PROG_TYPE_CGROUP_SOCK:
- case BPF_PROG_TYPE_CGROUP_SOCK_ADDR:
- return attach_type == prog->expected_attach_type ? 0 : -EINVAL;
- default:
- return 0;
- }
-}
-
#define BPF_PROG_ATTACH_LAST_FIELD attach_flags

-static int sockmap_get_from_fd(const union bpf_attr *attr,
- int type, bool attach)
-{
- struct bpf_prog *prog = NULL;
- int ufd = attr->target_fd;
- struct bpf_map *map;
- struct fd f;
- int err;
-
- f = fdget(ufd);
- map = __bpf_map_get(f);
- if (IS_ERR(map))
- return PTR_ERR(map);
-
- if (attach) {
- prog = bpf_prog_get_type(attr->attach_bpf_fd, type);
- if (IS_ERR(prog)) {
- fdput(f);
- return PTR_ERR(prog);
- }
- }
-
- err = sock_map_prog(map, prog, attr->attach_type);
- if (err) {
- fdput(f);
- if (prog)
- bpf_prog_put(prog);
- return err;
- }
-
- fdput(f);
- return 0;
-}
-
#define BPF_F_ATTACH_MASK \
(BPF_F_ALLOW_OVERRIDE | BPF_F_ALLOW_MULTI)

static int bpf_prog_attach(const union bpf_attr *attr)
{
enum bpf_prog_type ptype;
- struct bpf_prog *prog;
- struct cgroup *cgrp;
- int ret;

if (!capable(CAP_NET_ADMIN))
return -EPERM;
@@ -1593,28 +1542,7 @@ static int bpf_prog_attach(const union bpf_attr *attr)
return -EINVAL;
}

- prog = bpf_prog_get_type(attr->attach_bpf_fd, ptype);
- if (IS_ERR(prog))
- return PTR_ERR(prog);
-
- if (bpf_prog_attach_check_attach_type(prog, attr->attach_type)) {
- bpf_prog_put(prog);
- return -EINVAL;
- }
-
- cgrp = cgroup_get_from_fd(attr->target_fd);
- if (IS_ERR(cgrp)) {
- bpf_prog_put(prog);
- return PTR_ERR(cgrp);
- }
-
- ret = cgroup_bpf_attach(cgrp, prog, attr->attach_type,
- attr->attach_flags);
- if (ret)
- bpf_prog_put(prog);
- cgroup_put(cgrp);
-
- return ret;
+ return cgroup_bpf_prog_attach(attr, ptype);
}

#define BPF_PROG_DETACH_LAST_FIELD attach_type
@@ -1622,9 +1550,6 @@ static int bpf_prog_attach(const union bpf_attr *attr)
static int bpf_prog_detach(const union bpf_attr *attr)
{
enum bpf_prog_type ptype;
- struct bpf_prog *prog;
- struct cgroup *cgrp;
- int ret;

if (!capable(CAP_NET_ADMIN))
return -EPERM;
@@ -1667,19 +1592,7 @@ static int bpf_prog_detach(const union bpf_attr *attr)
return -EINVAL;
}

- cgrp = cgroup_get_from_fd(attr->target_fd);
- if (IS_ERR(cgrp))
- return PTR_ERR(cgrp);
-
- prog = bpf_prog_get_type(attr->attach_bpf_fd, ptype);
- if (IS_ERR(prog))
- prog = NULL;
-
- ret = cgroup_bpf_detach(cgrp, prog, attr->attach_type, 0);
- if (prog)
- bpf_prog_put(prog);
- cgroup_put(cgrp);
- return ret;
+ return cgroup_bpf_prog_detach(attr, ptype);
}

#define BPF_PROG_QUERY_LAST_FIELD query.prog_cnt
@@ -1687,9 +1600,6 @@ static int bpf_prog_detach(const union bpf_attr *attr)
static int bpf_prog_query(const union bpf_attr *attr,
union bpf_attr __user *uattr)
{
- struct cgroup *cgrp;
- int ret;
-
if (!capable(CAP_NET_ADMIN))
return -EPERM;
if (CHECK_ATTR(BPF_PROG_QUERY))
@@ -1717,14 +1627,9 @@ static int bpf_prog_query(const union bpf_attr *attr,
default:
return -EINVAL;
}
- cgrp = cgroup_get_from_fd(attr->query.target_fd);
- if (IS_ERR(cgrp))
- return PTR_ERR(cgrp);
- ret = cgroup_bpf_query(cgrp, attr, uattr);
- cgroup_put(cgrp);
- return ret;
+
+ return cgroup_bpf_prog_query(attr, uattr);
}
-#endif /* CONFIG_CGROUP_BPF */

#define BPF_PROG_TEST_RUN_LAST_FIELD test.duration

@@ -2371,7 +2276,6 @@ SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, siz
case BPF_OBJ_GET:
err = bpf_obj_get(&attr);
break;
-#ifdef CONFIG_CGROUP_BPF
case BPF_PROG_ATTACH:
err = bpf_prog_attach(&attr);
break;
@@ -2381,7 +2285,6 @@ SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, siz
case BPF_PROG_QUERY:
err = bpf_prog_query(&attr, uattr);
break;
-#endif
case BPF_PROG_TEST_RUN:
err = bpf_prog_test_run(&attr, uattr);
break;
--
2.17.1


2018-06-07 19:08:30

by Simon Horman

[permalink] [raw]
Subject: Re: [PATCH] bpf: attach type BPF_LIRC_MODE2 should not depend on CONFIG_BPF_CGROUP

On Wed, Jun 06, 2018 at 10:09:40PM +0100, Sean Young wrote:
> Compile bpf_prog_{attach,detach,query} even if CONFIG_BPF_CGROUP is not
> set.
>
> Signed-off-by: Sean Young <[email protected]>

Hi Sean,

This patch seems to involve rather extensive refactoring, the details of
which are not explained, in order to make a change whose motivation is also
not explained. I for one would value a more extensive changelog, if
not a series of patches that implement discrete changes.

Thanks!

2018-06-07 19:08:58

by Y Song

[permalink] [raw]
Subject: Re: [PATCH] bpf: attach type BPF_LIRC_MODE2 should not depend on CONFIG_BPF_CGROUP

On Wed, Jun 6, 2018 at 2:09 PM, Sean Young <[email protected]> wrote:
> Compile bpf_prog_{attach,detach,query} even if CONFIG_BPF_CGROUP is not
> set.

It should be CONFIG_CGROUP_BPF here. The same goes for the subject line.
Today, if CONFIG_CGROUP_BPF is not defined, users will get -EINVAL
if they try to attach/detach/query.

I am not sure what the motivation behind this change is. Could you explain more?

>
> Signed-off-by: Sean Young <[email protected]>
> ---
> include/linux/bpf-cgroup.h | 31 +++++++++++
> kernel/bpf/cgroup.c | 110 +++++++++++++++++++++++++++++++++++++
> kernel/bpf/syscall.c | 105 ++---------------------------------
> 3 files changed, 145 insertions(+), 101 deletions(-)
>
> diff --git a/include/linux/bpf-cgroup.h b/include/linux/bpf-cgroup.h
> index 975fb4cf1bb7..ee67cd35f426 100644
> --- a/include/linux/bpf-cgroup.h
> +++ b/include/linux/bpf-cgroup.h
> @@ -188,12 +188,43 @@ int __cgroup_bpf_check_dev_permission(short dev_type, u32 major, u32 minor,
> \
> __ret; \
> })
> +int sockmap_get_from_fd(const union bpf_attr *attr, int type, bool attach);
> +int cgroup_bpf_prog_attach(const union bpf_attr *attr,
> + enum bpf_prog_type ptype);
> +int cgroup_bpf_prog_detach(const union bpf_attr *attr,
> + enum bpf_prog_type ptype);
> +int cgroup_bpf_prog_query(const union bpf_attr *attr,
> + union bpf_attr __user *uattr);
> #else
>
> struct cgroup_bpf {};
> static inline void cgroup_bpf_put(struct cgroup *cgrp) {}
> static inline int cgroup_bpf_inherit(struct cgroup *cgrp) { return 0; }
>
> +static inline int sockmap_get_from_fd(const union bpf_attr *attr,
> + int type, bool attach)
> +{
> + return -EINVAL;
> +}
> +
> +static inline int cgroup_bpf_prog_attach(const union bpf_attr *attr,
> + enum bpf_prog_type ptype)
> +{
> + return -EINVAL;
> +}
> +
> +static inline int cgroup_bpf_prog_detach(const union bpf_attr *attr,
> + enum bpf_prog_type ptype)
> +{
> + return -EINVAL;
> +}
> +
> +static inline int cgroup_bpf_prog_query(const union bpf_attr *attr,
> + union bpf_attr __user *uattr)
> +{
> + return -EINVAL;
> +}
> +
> #define cgroup_bpf_enabled (0)
> #define BPF_CGROUP_PRE_CONNECT_ENABLED(sk) (0)
> #define BPF_CGROUP_RUN_PROG_INET_INGRESS(sk,skb) ({ 0; })
> diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
> index f7c00bd6f8e4..d6e18f9dc0c4 100644
> --- a/kernel/bpf/cgroup.c
> +++ b/kernel/bpf/cgroup.c
> @@ -428,6 +428,116 @@ int __cgroup_bpf_query(struct cgroup *cgrp, const union bpf_attr *attr,
> return ret;
> }
>
> +int bpf_prog_attach_check_attach_type(const struct bpf_prog *prog,
> + enum bpf_attach_type attach_type)
> +{
> + switch (prog->type) {
> + case BPF_PROG_TYPE_CGROUP_SOCK:
> + case BPF_PROG_TYPE_CGROUP_SOCK_ADDR:
> + return attach_type == prog->expected_attach_type ? 0 : -EINVAL;
> + default:
> + return 0;
> + }
> +}
> +
> +int sockmap_get_from_fd(const union bpf_attr *attr, int type, bool attach)
> +{
> + struct bpf_prog *prog = NULL;
> + int ufd = attr->target_fd;
> + struct bpf_map *map;
> + struct fd f;
> + int err;
> +
> + f = fdget(ufd);
> + map = __bpf_map_get(f);
> + if (IS_ERR(map))
> + return PTR_ERR(map);
> +
> + if (attach) {
> + prog = bpf_prog_get_type(attr->attach_bpf_fd, type);
> + if (IS_ERR(prog)) {
> + fdput(f);
> + return PTR_ERR(prog);
> + }
> + }
> +
> + err = sock_map_prog(map, prog, attr->attach_type);
> + if (err) {
> + fdput(f);
> + if (prog)
> + bpf_prog_put(prog);
> + return err;
> + }
> +
> + fdput(f);
> + return 0;
> +}
> +
> +int cgroup_bpf_prog_attach(const union bpf_attr *attr, enum bpf_prog_type ptype)
> +{
> + struct bpf_prog *prog;
> + struct cgroup *cgrp;
> + int ret;
> +
> + prog = bpf_prog_get_type(attr->attach_bpf_fd, ptype);
> + if (IS_ERR(prog))
> + return PTR_ERR(prog);
> +
> + if (bpf_prog_attach_check_attach_type(prog, attr->attach_type)) {
> + bpf_prog_put(prog);
> + return -EINVAL;
> + }
> +
> + cgrp = cgroup_get_from_fd(attr->target_fd);
> + if (IS_ERR(cgrp)) {
> + bpf_prog_put(prog);
> + return PTR_ERR(cgrp);
> + }
> +
> + ret = cgroup_bpf_attach(cgrp, prog, attr->attach_type,
> + attr->attach_flags);
> + if (ret)
> + bpf_prog_put(prog);
> + cgroup_put(cgrp);
> +
> + return ret;
> +}
> +
> +int cgroup_bpf_prog_detach(const union bpf_attr *attr, enum bpf_prog_type ptype)
> +{
> + struct bpf_prog *prog;
> + struct cgroup *cgrp;
> + int ret;
> +
> + cgrp = cgroup_get_from_fd(attr->target_fd);
> + if (IS_ERR(cgrp))
> + return PTR_ERR(cgrp);
> +
> + prog = bpf_prog_get_type(attr->attach_bpf_fd, ptype);
> + if (IS_ERR(prog))
> + prog = NULL;
> +
> + ret = cgroup_bpf_detach(cgrp, prog, attr->attach_type, 0);
> + if (prog)
> + bpf_prog_put(prog);
> + cgroup_put(cgrp);
> + return ret;
> +}
> +
> +int cgroup_bpf_prog_query(const union bpf_attr *attr,
> + union bpf_attr __user *uattr)
> +{
> + struct cgroup *cgrp;
> + int ret;
> +
> + cgrp = cgroup_get_from_fd(attr->query.target_fd);
> + if (IS_ERR(cgrp))
> + return PTR_ERR(cgrp);
> + ret = cgroup_bpf_query(cgrp, attr, uattr);
> + cgroup_put(cgrp);
> + return ret;
> +}
> +
> /**
> * __cgroup_bpf_run_filter_skb() - Run a program for packet filtering
> * @sk: The socket sending or receiving traffic
> diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
> index 0fa20624707f..52fa44856623 100644
> --- a/kernel/bpf/syscall.c
> +++ b/kernel/bpf/syscall.c
> @@ -1489,65 +1489,14 @@ static int bpf_raw_tracepoint_open(const union bpf_attr *attr)
> return err;
> }
>
> -#ifdef CONFIG_CGROUP_BPF
> -
> -static int bpf_prog_attach_check_attach_type(const struct bpf_prog *prog,
> - enum bpf_attach_type attach_type)
> -{
> - switch (prog->type) {
> - case BPF_PROG_TYPE_CGROUP_SOCK:
> - case BPF_PROG_TYPE_CGROUP_SOCK_ADDR:
> - return attach_type == prog->expected_attach_type ? 0 : -EINVAL;
> - default:
> - return 0;
> - }
> -}
> -
> #define BPF_PROG_ATTACH_LAST_FIELD attach_flags
>
> -static int sockmap_get_from_fd(const union bpf_attr *attr,
> - int type, bool attach)
> -{
> - struct bpf_prog *prog = NULL;
> - int ufd = attr->target_fd;
> - struct bpf_map *map;
> - struct fd f;
> - int err;
> -
> - f = fdget(ufd);
> - map = __bpf_map_get(f);
> - if (IS_ERR(map))
> - return PTR_ERR(map);
> -
> - if (attach) {
> - prog = bpf_prog_get_type(attr->attach_bpf_fd, type);
> - if (IS_ERR(prog)) {
> - fdput(f);
> - return PTR_ERR(prog);
> - }
> - }
> -
> - err = sock_map_prog(map, prog, attr->attach_type);
> - if (err) {
> - fdput(f);
> - if (prog)
> - bpf_prog_put(prog);
> - return err;
> - }
> -
> - fdput(f);
> - return 0;
> -}
> -
> #define BPF_F_ATTACH_MASK \
> (BPF_F_ALLOW_OVERRIDE | BPF_F_ALLOW_MULTI)
>
> static int bpf_prog_attach(const union bpf_attr *attr)
> {
> enum bpf_prog_type ptype;
> - struct bpf_prog *prog;
> - struct cgroup *cgrp;
> - int ret;
>
> if (!capable(CAP_NET_ADMIN))
> return -EPERM;
> @@ -1593,28 +1542,7 @@ static int bpf_prog_attach(const union bpf_attr *attr)
> return -EINVAL;
> }
>
> - prog = bpf_prog_get_type(attr->attach_bpf_fd, ptype);
> - if (IS_ERR(prog))
> - return PTR_ERR(prog);
> -
> - if (bpf_prog_attach_check_attach_type(prog, attr->attach_type)) {
> - bpf_prog_put(prog);
> - return -EINVAL;
> - }
> -
> - cgrp = cgroup_get_from_fd(attr->target_fd);
> - if (IS_ERR(cgrp)) {
> - bpf_prog_put(prog);
> - return PTR_ERR(cgrp);
> - }
> -
> - ret = cgroup_bpf_attach(cgrp, prog, attr->attach_type,
> - attr->attach_flags);
> - if (ret)
> - bpf_prog_put(prog);
> - cgroup_put(cgrp);
> -
> - return ret;
> + return cgroup_bpf_prog_attach(attr, ptype);
> }
>
> #define BPF_PROG_DETACH_LAST_FIELD attach_type
> @@ -1622,9 +1550,6 @@ static int bpf_prog_attach(const union bpf_attr *attr)
> static int bpf_prog_detach(const union bpf_attr *attr)
> {
> enum bpf_prog_type ptype;
> - struct bpf_prog *prog;
> - struct cgroup *cgrp;
> - int ret;
>
> if (!capable(CAP_NET_ADMIN))
> return -EPERM;
> @@ -1667,19 +1592,7 @@ static int bpf_prog_detach(const union bpf_attr *attr)
> return -EINVAL;
> }
>
> - cgrp = cgroup_get_from_fd(attr->target_fd);
> - if (IS_ERR(cgrp))
> - return PTR_ERR(cgrp);
> -
> - prog = bpf_prog_get_type(attr->attach_bpf_fd, ptype);
> - if (IS_ERR(prog))
> - prog = NULL;
> -
> - ret = cgroup_bpf_detach(cgrp, prog, attr->attach_type, 0);
> - if (prog)
> - bpf_prog_put(prog);
> - cgroup_put(cgrp);
> - return ret;
> + return cgroup_bpf_prog_detach(attr, ptype);
> }
>
> #define BPF_PROG_QUERY_LAST_FIELD query.prog_cnt
> @@ -1687,9 +1600,6 @@ static int bpf_prog_detach(const union bpf_attr *attr)
> static int bpf_prog_query(const union bpf_attr *attr,
> union bpf_attr __user *uattr)
> {
> - struct cgroup *cgrp;
> - int ret;
> -
> if (!capable(CAP_NET_ADMIN))
> return -EPERM;
> if (CHECK_ATTR(BPF_PROG_QUERY))
> @@ -1717,14 +1627,9 @@ static int bpf_prog_query(const union bpf_attr *attr,
> default:
> return -EINVAL;
> }
> - cgrp = cgroup_get_from_fd(attr->query.target_fd);
> - if (IS_ERR(cgrp))
> - return PTR_ERR(cgrp);
> - ret = cgroup_bpf_query(cgrp, attr, uattr);
> - cgroup_put(cgrp);
> - return ret;
> +
> + return cgroup_bpf_prog_query(attr, uattr);
> }
> -#endif /* CONFIG_CGROUP_BPF */
>
> #define BPF_PROG_TEST_RUN_LAST_FIELD test.duration
>
> @@ -2371,7 +2276,6 @@ SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, siz
> case BPF_OBJ_GET:
> err = bpf_obj_get(&attr);
> break;
> -#ifdef CONFIG_CGROUP_BPF
> case BPF_PROG_ATTACH:
> err = bpf_prog_attach(&attr);
> break;
> @@ -2381,7 +2285,6 @@ SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, siz
> case BPF_PROG_QUERY:
> err = bpf_prog_query(&attr, uattr);
> break;
> -#endif
> case BPF_PROG_TEST_RUN:
> err = bpf_prog_test_run(&attr, uattr);
> break;
> --
> 2.17.1
>

2018-06-07 22:41:29

by Daniel Borkmann

[permalink] [raw]
Subject: Re: [PATCH] bpf: attach type BPF_LIRC_MODE2 should not depend on CONFIG_BPF_CGROUP

On 06/07/2018 08:14 PM, Y Song wrote:
> On Wed, Jun 6, 2018 at 2:09 PM, Sean Young <[email protected]> wrote:
>> Compile bpf_prog_{attach,detach,query} even if CONFIG_BPF_CGROUP is not
>> set.
>
> It should be CONFIG_CGROUP_BPF here. The same goes for the subject line.
> Today, if CONFIG_CGROUP_BPF is not defined, users will get -EINVAL
> if they try to attach/detach/query.
>
> I am not sure what the motivation behind this change is. Could you explain more?

The motivation was that lirc mode2 progs are not related to cgroups at all, and
there are users that have cgroups compiled out, yet BPF_PROG_ATTACH/DETACH is
used for managing them. This definitely needs to be explained more clearly in
the changelog, agreed.

>> Signed-off-by: Sean Young <[email protected]>
>> ---
>> include/linux/bpf-cgroup.h | 31 +++++++++++
>> kernel/bpf/cgroup.c | 110 +++++++++++++++++++++++++++++++++++++
>> kernel/bpf/syscall.c | 105 ++---------------------------------
>> 3 files changed, 145 insertions(+), 101 deletions(-)
>>[...]

2018-06-14 18:42:56

by Sean Young

[permalink] [raw]
Subject: [PATCH] bpf: attach type BPF_LIRC_MODE2 should not depend on CONFIG_CGROUP_BPF

If the kernel is compiled with CONFIG_CGROUP_BPF not enabled, it is not
possible to attach, detach or query IR BPF programs to /dev/lircN devices,
making them impossible to use. For embedded devices, it should be possible
to use IR decoding without cgroups or CONFIG_CGROUP_BPF enabled.

This change requires some refactoring, since bpf_prog_{attach,detach,query}
functions are now always compiled, but their code paths for cgroups need
moving out. Rather than a #ifdef CONFIG_CGROUP_BPF in kernel/bpf/syscall.c,
moving them to kernel/bpf/cgroup.c does not require #ifdefs since that file
is already conditionally compiled.

Signed-off-by: Sean Young <[email protected]>
---
include/linux/bpf-cgroup.h | 31 +++++++++++
kernel/bpf/cgroup.c | 110 +++++++++++++++++++++++++++++++++++++
kernel/bpf/syscall.c | 105 ++---------------------------------
3 files changed, 145 insertions(+), 101 deletions(-)

diff --git a/include/linux/bpf-cgroup.h b/include/linux/bpf-cgroup.h
index 975fb4cf1bb7..ee67cd35f426 100644
--- a/include/linux/bpf-cgroup.h
+++ b/include/linux/bpf-cgroup.h
@@ -188,12 +188,43 @@ int __cgroup_bpf_check_dev_permission(short dev_type, u32 major, u32 minor,
\
__ret; \
})
+int sockmap_get_from_fd(const union bpf_attr *attr, int type, bool attach);
+int cgroup_bpf_prog_attach(const union bpf_attr *attr,
+ enum bpf_prog_type ptype);
+int cgroup_bpf_prog_detach(const union bpf_attr *attr,
+ enum bpf_prog_type ptype);
+int cgroup_bpf_prog_query(const union bpf_attr *attr,
+ union bpf_attr __user *uattr);
#else

struct cgroup_bpf {};
static inline void cgroup_bpf_put(struct cgroup *cgrp) {}
static inline int cgroup_bpf_inherit(struct cgroup *cgrp) { return 0; }

+static inline int sockmap_get_from_fd(const union bpf_attr *attr,
+ int type, bool attach)
+{
+ return -EINVAL;
+}
+
+static inline int cgroup_bpf_prog_attach(const union bpf_attr *attr,
+ enum bpf_prog_type ptype)
+{
+ return -EINVAL;
+}
+
+static inline int cgroup_bpf_prog_detach(const union bpf_attr *attr,
+ enum bpf_prog_type ptype)
+{
+ return -EINVAL;
+}
+
+static inline int cgroup_bpf_prog_query(const union bpf_attr *attr,
+ union bpf_attr __user *uattr)
+{
+ return -EINVAL;
+}
+
#define cgroup_bpf_enabled (0)
#define BPF_CGROUP_PRE_CONNECT_ENABLED(sk) (0)
#define BPF_CGROUP_RUN_PROG_INET_INGRESS(sk,skb) ({ 0; })
diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
index f7c00bd6f8e4..d6e18f9dc0c4 100644
--- a/kernel/bpf/cgroup.c
+++ b/kernel/bpf/cgroup.c
@@ -428,6 +428,116 @@ int __cgroup_bpf_query(struct cgroup *cgrp, const union bpf_attr *attr,
return ret;
}

+int bpf_prog_attach_check_attach_type(const struct bpf_prog *prog,
+ enum bpf_attach_type attach_type)
+{
+ switch (prog->type) {
+ case BPF_PROG_TYPE_CGROUP_SOCK:
+ case BPF_PROG_TYPE_CGROUP_SOCK_ADDR:
+ return attach_type == prog->expected_attach_type ? 0 : -EINVAL;
+ default:
+ return 0;
+ }
+}
+
+int sockmap_get_from_fd(const union bpf_attr *attr, int type, bool attach)
+{
+ struct bpf_prog *prog = NULL;
+ int ufd = attr->target_fd;
+ struct bpf_map *map;
+ struct fd f;
+ int err;
+
+ f = fdget(ufd);
+ map = __bpf_map_get(f);
+ if (IS_ERR(map))
+ return PTR_ERR(map);
+
+ if (attach) {
+ prog = bpf_prog_get_type(attr->attach_bpf_fd, type);
+ if (IS_ERR(prog)) {
+ fdput(f);
+ return PTR_ERR(prog);
+ }
+ }
+
+ err = sock_map_prog(map, prog, attr->attach_type);
+ if (err) {
+ fdput(f);
+ if (prog)
+ bpf_prog_put(prog);
+ return err;
+ }
+
+ fdput(f);
+ return 0;
+}
+
+int cgroup_bpf_prog_attach(const union bpf_attr *attr, enum bpf_prog_type ptype)
+{
+ struct bpf_prog *prog;
+ struct cgroup *cgrp;
+ int ret;
+
+ prog = bpf_prog_get_type(attr->attach_bpf_fd, ptype);
+ if (IS_ERR(prog))
+ return PTR_ERR(prog);
+
+ if (bpf_prog_attach_check_attach_type(prog, attr->attach_type)) {
+ bpf_prog_put(prog);
+ return -EINVAL;
+ }
+
+ cgrp = cgroup_get_from_fd(attr->target_fd);
+ if (IS_ERR(cgrp)) {
+ bpf_prog_put(prog);
+ return PTR_ERR(cgrp);
+ }
+
+ ret = cgroup_bpf_attach(cgrp, prog, attr->attach_type,
+ attr->attach_flags);
+ if (ret)
+ bpf_prog_put(prog);
+ cgroup_put(cgrp);
+
+ return ret;
+}
+
+int cgroup_bpf_prog_detach(const union bpf_attr *attr, enum bpf_prog_type ptype)
+{
+ struct bpf_prog *prog;
+ struct cgroup *cgrp;
+ int ret;
+
+ cgrp = cgroup_get_from_fd(attr->target_fd);
+ if (IS_ERR(cgrp))
+ return PTR_ERR(cgrp);
+
+ prog = bpf_prog_get_type(attr->attach_bpf_fd, ptype);
+ if (IS_ERR(prog))
+ prog = NULL;
+
+ ret = cgroup_bpf_detach(cgrp, prog, attr->attach_type, 0);
+ if (prog)
+ bpf_prog_put(prog);
+ cgroup_put(cgrp);
+ return ret;
+}
+
+int cgroup_bpf_prog_query(const union bpf_attr *attr,
+ union bpf_attr __user *uattr)
+{
+ struct cgroup *cgrp;
+ int ret;
+
+ cgrp = cgroup_get_from_fd(attr->query.target_fd);
+ if (IS_ERR(cgrp))
+ return PTR_ERR(cgrp);
+ ret = cgroup_bpf_query(cgrp, attr, uattr);
+ cgroup_put(cgrp);
+ return ret;
+}
+
/**
* __cgroup_bpf_run_filter_skb() - Run a program for packet filtering
* @sk: The socket sending or receiving traffic
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 0fa20624707f..52fa44856623 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -1489,65 +1489,14 @@ static int bpf_raw_tracepoint_open(const union bpf_attr *attr)
return err;
}

-#ifdef CONFIG_CGROUP_BPF
-
-static int bpf_prog_attach_check_attach_type(const struct bpf_prog *prog,
- enum bpf_attach_type attach_type)
-{
- switch (prog->type) {
- case BPF_PROG_TYPE_CGROUP_SOCK:
- case BPF_PROG_TYPE_CGROUP_SOCK_ADDR:
- return attach_type == prog->expected_attach_type ? 0 : -EINVAL;
- default:
- return 0;
- }
-}
-
#define BPF_PROG_ATTACH_LAST_FIELD attach_flags

-static int sockmap_get_from_fd(const union bpf_attr *attr,
- int type, bool attach)
-{
- struct bpf_prog *prog = NULL;
- int ufd = attr->target_fd;
- struct bpf_map *map;
- struct fd f;
- int err;
-
- f = fdget(ufd);
- map = __bpf_map_get(f);
- if (IS_ERR(map))
- return PTR_ERR(map);
-
- if (attach) {
- prog = bpf_prog_get_type(attr->attach_bpf_fd, type);
- if (IS_ERR(prog)) {
- fdput(f);
- return PTR_ERR(prog);
- }
- }
-
- err = sock_map_prog(map, prog, attr->attach_type);
- if (err) {
- fdput(f);
- if (prog)
- bpf_prog_put(prog);
- return err;
- }
-
- fdput(f);
- return 0;
-}
-
#define BPF_F_ATTACH_MASK \
(BPF_F_ALLOW_OVERRIDE | BPF_F_ALLOW_MULTI)

static int bpf_prog_attach(const union bpf_attr *attr)
{
enum bpf_prog_type ptype;
- struct bpf_prog *prog;
- struct cgroup *cgrp;
- int ret;

if (!capable(CAP_NET_ADMIN))
return -EPERM;
@@ -1593,28 +1542,7 @@ static int bpf_prog_attach(const union bpf_attr *attr)
return -EINVAL;
}

- prog = bpf_prog_get_type(attr->attach_bpf_fd, ptype);
- if (IS_ERR(prog))
- return PTR_ERR(prog);
-
- if (bpf_prog_attach_check_attach_type(prog, attr->attach_type)) {
- bpf_prog_put(prog);
- return -EINVAL;
- }
-
- cgrp = cgroup_get_from_fd(attr->target_fd);
- if (IS_ERR(cgrp)) {
- bpf_prog_put(prog);
- return PTR_ERR(cgrp);
- }
-
- ret = cgroup_bpf_attach(cgrp, prog, attr->attach_type,
- attr->attach_flags);
- if (ret)
- bpf_prog_put(prog);
- cgroup_put(cgrp);
-
- return ret;
+ return cgroup_bpf_prog_attach(attr, ptype);
}

#define BPF_PROG_DETACH_LAST_FIELD attach_type
@@ -1622,9 +1550,6 @@ static int bpf_prog_attach(const union bpf_attr *attr)
static int bpf_prog_detach(const union bpf_attr *attr)
{
enum bpf_prog_type ptype;
- struct bpf_prog *prog;
- struct cgroup *cgrp;
- int ret;

if (!capable(CAP_NET_ADMIN))
return -EPERM;
@@ -1667,19 +1592,7 @@ static int bpf_prog_detach(const union bpf_attr *attr)
return -EINVAL;
}

- cgrp = cgroup_get_from_fd(attr->target_fd);
- if (IS_ERR(cgrp))
- return PTR_ERR(cgrp);
-
- prog = bpf_prog_get_type(attr->attach_bpf_fd, ptype);
- if (IS_ERR(prog))
- prog = NULL;
-
- ret = cgroup_bpf_detach(cgrp, prog, attr->attach_type, 0);
- if (prog)
- bpf_prog_put(prog);
- cgroup_put(cgrp);
- return ret;
+ return cgroup_bpf_prog_detach(attr, ptype);
}

#define BPF_PROG_QUERY_LAST_FIELD query.prog_cnt
@@ -1687,9 +1600,6 @@ static int bpf_prog_detach(const union bpf_attr *attr)
static int bpf_prog_query(const union bpf_attr *attr,
union bpf_attr __user *uattr)
{
- struct cgroup *cgrp;
- int ret;
-
if (!capable(CAP_NET_ADMIN))
return -EPERM;
if (CHECK_ATTR(BPF_PROG_QUERY))
@@ -1717,14 +1627,9 @@ static int bpf_prog_query(const union bpf_attr *attr,
default:
return -EINVAL;
}
- cgrp = cgroup_get_from_fd(attr->query.target_fd);
- if (IS_ERR(cgrp))
- return PTR_ERR(cgrp);
- ret = cgroup_bpf_query(cgrp, attr, uattr);
- cgroup_put(cgrp);
- return ret;
+
+ return cgroup_bpf_prog_query(attr, uattr);
}
-#endif /* CONFIG_CGROUP_BPF */

#define BPF_PROG_TEST_RUN_LAST_FIELD test.duration

@@ -2371,7 +2276,6 @@ SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, siz
case BPF_OBJ_GET:
err = bpf_obj_get(&attr);
break;
-#ifdef CONFIG_CGROUP_BPF
case BPF_PROG_ATTACH:
err = bpf_prog_attach(&attr);
break;
@@ -2381,7 +2285,6 @@ SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, siz
case BPF_PROG_QUERY:
err = bpf_prog_query(&attr, uattr);
break;
-#endif
case BPF_PROG_TEST_RUN:
err = bpf_prog_test_run(&attr, uattr);
break;
--
2.17.1


2018-06-15 14:06:11

by Daniel Borkmann

[permalink] [raw]
Subject: Re: [PATCH] bpf: attach type BPF_LIRC_MODE2 should not depend on CONFIG_CGROUP_BPF

Hi Sean,

On 06/14/2018 08:42 PM, Sean Young wrote:
> If the kernel is compiled with CONFIG_CGROUP_BPF not enabled, it is not
> possible to attach, detach or query IR BPF programs to /dev/lircN devices,
> making them impossible to use. For embedded devices, it should be possible
> to use IR decoding without cgroups or CONFIG_CGROUP_BPF enabled.
>
> This change requires some refactoring, since bpf_prog_{attach,detach,query}
> functions are now always compiled, but their code paths for cgroups need
> moving out. Rather than a #ifdef CONFIG_CGROUP_BPF in kernel/bpf/syscall.c,
> moving them to kernel/bpf/cgroup.c does not require #ifdefs since that file
> is already conditionally compiled.
>
> Signed-off-by: Sean Young <[email protected]>

Just two minor edits below from my side:

> include/linux/bpf-cgroup.h | 31 +++++++++++
> kernel/bpf/cgroup.c | 110 +++++++++++++++++++++++++++++++++++++
> kernel/bpf/syscall.c | 105 ++---------------------------------
> 3 files changed, 145 insertions(+), 101 deletions(-)
>
> diff --git a/include/linux/bpf-cgroup.h b/include/linux/bpf-cgroup.h
> index 975fb4cf1bb7..ee67cd35f426 100644
> --- a/include/linux/bpf-cgroup.h
> +++ b/include/linux/bpf-cgroup.h
> @@ -188,12 +188,43 @@ int __cgroup_bpf_check_dev_permission(short dev_type, u32 major, u32 minor,
> \
> __ret; \
> })
> +int sockmap_get_from_fd(const union bpf_attr *attr, int type, bool attach);
> +int cgroup_bpf_prog_attach(const union bpf_attr *attr,
> + enum bpf_prog_type ptype);
> +int cgroup_bpf_prog_detach(const union bpf_attr *attr,
> + enum bpf_prog_type ptype);
> +int cgroup_bpf_prog_query(const union bpf_attr *attr,
> + union bpf_attr __user *uattr);
> #else
>
> struct cgroup_bpf {};
> static inline void cgroup_bpf_put(struct cgroup *cgrp) {}
> static inline int cgroup_bpf_inherit(struct cgroup *cgrp) { return 0; }
>
> +static inline int sockmap_get_from_fd(const union bpf_attr *attr,
> + int type, bool attach)
> +{
> + return -EINVAL;
> +}
> +
> +static inline int cgroup_bpf_prog_attach(const union bpf_attr *attr,
> + enum bpf_prog_type ptype)
> +{
> + return -EINVAL;
> +}
> +
> +static inline int cgroup_bpf_prog_detach(const union bpf_attr *attr,
> + enum bpf_prog_type ptype)
> +{
> + return -EINVAL;
> +}
> +
> +static inline int cgroup_bpf_prog_query(const union bpf_attr *attr,
> + union bpf_attr __user *uattr)
> +{
> + return -EINVAL;
> +}
> +
> #define cgroup_bpf_enabled (0)
> #define BPF_CGROUP_PRE_CONNECT_ENABLED(sk) (0)
> #define BPF_CGROUP_RUN_PROG_INET_INGRESS(sk,skb) ({ 0; })
> diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
> index f7c00bd6f8e4..d6e18f9dc0c4 100644
> --- a/kernel/bpf/cgroup.c
> +++ b/kernel/bpf/cgroup.c
> @@ -428,6 +428,116 @@ int __cgroup_bpf_query(struct cgroup *cgrp, const union bpf_attr *attr,
> return ret;
> }
>
> +int bpf_prog_attach_check_attach_type(const struct bpf_prog *prog,
> + enum bpf_attach_type attach_type)
> +{
> + switch (prog->type) {
> + case BPF_PROG_TYPE_CGROUP_SOCK:
> + case BPF_PROG_TYPE_CGROUP_SOCK_ADDR:
> + return attach_type == prog->expected_attach_type ? 0 : -EINVAL;
> + default:
> + return 0;
> + }
> +}

This one is rather BPF core; cgroup progs only happen to use it. Could
you either move it as a static inline into include/linux/bpf.h or leave it
in kernel/bpf/syscall.c? In any case the #ifdef CONFIG_CGROUP_BPF wrapper
could probably be dropped from it as well.
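
For reference, a sketch of the static-inline variant suggested above; the
function body is taken verbatim from the patch, and only the storage class
and placement in include/linux/bpf.h change (the v3 below instead keeps it
in kernel/bpf/syscall.c, which was offered as the other option):

  static inline int bpf_prog_attach_check_attach_type(const struct bpf_prog *prog,
                                                      enum bpf_attach_type attach_type)
  {
          switch (prog->type) {
          case BPF_PROG_TYPE_CGROUP_SOCK:
          case BPF_PROG_TYPE_CGROUP_SOCK_ADDR:
                  /* these prog types must attach to their declared attach type */
                  return attach_type == prog->expected_attach_type ? 0 : -EINVAL;
          default:
                  return 0;
          }
  }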

> +int sockmap_get_from_fd(const union bpf_attr *attr, int type, bool attach)
> +{
> + struct bpf_prog *prog = NULL;
> + int ufd = attr->target_fd;
> + struct bpf_map *map;
> + struct fd f;
> + int err;
> +
> + f = fdget(ufd);
> + map = __bpf_map_get(f);
> + if (IS_ERR(map))
> + return PTR_ERR(map);
> +
> + if (attach) {
> + prog = bpf_prog_get_type(attr->attach_bpf_fd, type);
> + if (IS_ERR(prog)) {
> + fdput(f);
> + return PTR_ERR(prog);
> + }
> + }
> +
> + err = sock_map_prog(map, prog, attr->attach_type);
> + if (err) {
> + fdput(f);
> + if (prog)
> + bpf_prog_put(prog);
> + return err;
> + }
> +
> + fdput(f);
> + return 0;
> +}

And this one should rather end up in kernel/bpf/sockmap.c, to keep it with
the rest of the sockmap code. Moving it into cgroup.c would put it in the
wrong place given what the code does.

Thanks,
Daniel

2018-06-18 17:13:27

by Sean Young

[permalink] [raw]
Subject: [PATCH v3] bpf: attach type BPF_LIRC_MODE2 should not depend on CONFIG_CGROUP_BPF

If the kernel is compiled with CONFIG_CGROUP_BPF not enabled, it is not
possible to attach, detach or query IR BPF programs to /dev/lircN devices,
making them impossible to use. For embedded devices, it should be possible
to use IR decoding without cgroups or CONFIG_CGROUP_BPF enabled.

This change requires some refactoring, since bpf_prog_{attach,detach,query}
functions are now always compiled, but their code paths for cgroups need
moving out. Rather than a #ifdef CONFIG_CGROUP_BPF in kernel/bpf/syscall.c,
moving them to kernel/bpf/cgroup.c and kernel/bpf/sockmap.c does not
require #ifdefs since those files are already conditionally compiled.

Signed-off-by: Sean Young <[email protected]>
---
drivers/media/rc/bpf-lirc.c | 14 +-----
include/linux/bpf-cgroup.h | 26 ++++++++++
include/linux/bpf.h | 8 +++
include/linux/bpf_lirc.h | 5 +-
kernel/bpf/cgroup.c | 52 ++++++++++++++++++++
kernel/bpf/sockmap.c | 18 +++++++
kernel/bpf/syscall.c | 98 ++++++++-----------------------------
7 files changed, 130 insertions(+), 91 deletions(-)

diff --git a/drivers/media/rc/bpf-lirc.c b/drivers/media/rc/bpf-lirc.c
index 40826bba06b6..fcfab6635f9c 100644
--- a/drivers/media/rc/bpf-lirc.c
+++ b/drivers/media/rc/bpf-lirc.c
@@ -207,29 +207,19 @@ void lirc_bpf_free(struct rc_dev *rcdev)
bpf_prog_array_free(rcdev->raw->progs);
}

-int lirc_prog_attach(const union bpf_attr *attr)
+int lirc_prog_attach(const union bpf_attr *attr, struct bpf_prog *prog)
{
- struct bpf_prog *prog;
struct rc_dev *rcdev;
int ret;

if (attr->attach_flags)
return -EINVAL;

- prog = bpf_prog_get_type(attr->attach_bpf_fd,
- BPF_PROG_TYPE_LIRC_MODE2);
- if (IS_ERR(prog))
- return PTR_ERR(prog);
-
rcdev = rc_dev_get_from_fd(attr->target_fd);
- if (IS_ERR(rcdev)) {
- bpf_prog_put(prog);
+ if (IS_ERR(rcdev))
return PTR_ERR(rcdev);
- }

ret = lirc_bpf_attach(rcdev, prog);
- if (ret)
- bpf_prog_put(prog);

put_device(&rcdev->dev);

diff --git a/include/linux/bpf-cgroup.h b/include/linux/bpf-cgroup.h
index 975fb4cf1bb7..79795c5fa7c3 100644
--- a/include/linux/bpf-cgroup.h
+++ b/include/linux/bpf-cgroup.h
@@ -188,12 +188,38 @@ int __cgroup_bpf_check_dev_permission(short dev_type, u32 major, u32 minor,
\
__ret; \
})
+int cgroup_bpf_prog_attach(const union bpf_attr *attr,
+ enum bpf_prog_type ptype, struct bpf_prog *prog);
+int cgroup_bpf_prog_detach(const union bpf_attr *attr,
+ enum bpf_prog_type ptype);
+int cgroup_bpf_prog_query(const union bpf_attr *attr,
+ union bpf_attr __user *uattr);
#else

+struct bpf_prog;
struct cgroup_bpf {};
static inline void cgroup_bpf_put(struct cgroup *cgrp) {}
static inline int cgroup_bpf_inherit(struct cgroup *cgrp) { return 0; }

+static inline int cgroup_bpf_prog_attach(const union bpf_attr *attr,
+ enum bpf_prog_type ptype,
+ struct bpf_prog *prog)
+{
+ return -EINVAL;
+}
+
+static inline int cgroup_bpf_prog_detach(const union bpf_attr *attr,
+ enum bpf_prog_type ptype)
+{
+ return -EINVAL;
+}
+
+static inline int cgroup_bpf_prog_query(const union bpf_attr *attr,
+ union bpf_attr __user *uattr)
+{
+ return -EINVAL;
+}
+
#define cgroup_bpf_enabled (0)
#define BPF_CGROUP_PRE_CONNECT_ENABLED(sk) (0)
#define BPF_CGROUP_RUN_PROG_INET_INGRESS(sk,skb) ({ 0; })
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 995c3b1e59bf..ac4c73d29f96 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -684,6 +684,8 @@ static inline void bpf_map_offload_map_free(struct bpf_map *map)
struct sock *__sock_map_lookup_elem(struct bpf_map *map, u32 key);
struct sock *__sock_hash_lookup_elem(struct bpf_map *map, void *key);
int sock_map_prog(struct bpf_map *map, struct bpf_prog *prog, u32 type);
+int sockmap_get_from_fd(const union bpf_attr *attr, int type,
+ struct bpf_prog *prog);
#else
static inline struct sock *__sock_map_lookup_elem(struct bpf_map *map, u32 key)
{
@@ -702,6 +704,12 @@ static inline int sock_map_prog(struct bpf_map *map,
{
return -EOPNOTSUPP;
}
+
+int sockmap_get_from_fd(const union bpf_attr *attr, int type,
+ struct bpf_prog *prog);
+{
+ return -EINVAL;
+}
#endif

#if defined(CONFIG_XDP_SOCKETS)
diff --git a/include/linux/bpf_lirc.h b/include/linux/bpf_lirc.h
index 5f8a4283092d..9d9ff755ec29 100644
--- a/include/linux/bpf_lirc.h
+++ b/include/linux/bpf_lirc.h
@@ -5,11 +5,12 @@
#include <uapi/linux/bpf.h>

#ifdef CONFIG_BPF_LIRC_MODE2
-int lirc_prog_attach(const union bpf_attr *attr);
+int lirc_prog_attach(const union bpf_attr *attr, struct bpf_prog *prog);
int lirc_prog_detach(const union bpf_attr *attr);
int lirc_prog_query(const union bpf_attr *attr, union bpf_attr __user *uattr);
#else
-static inline int lirc_prog_attach(const union bpf_attr *attr)
+static inline int lirc_prog_attach(const union bpf_attr *attr,
+ struct bpf_prog *prog)
{
return -EINVAL;
}
diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
index f7c00bd6f8e4..f0ae8a3d01f9 100644
--- a/kernel/bpf/cgroup.c
+++ b/kernel/bpf/cgroup.c
@@ -428,6 +428,58 @@ int __cgroup_bpf_query(struct cgroup *cgrp, const union bpf_attr *attr,
return ret;
}

+int cgroup_bpf_prog_attach(const union bpf_attr *attr,
+ enum bpf_prog_type ptype, struct bpf_prog *prog)
+{
+ struct cgroup *cgrp;
+ int ret;
+
+ cgrp = cgroup_get_from_fd(attr->target_fd);
+ if (IS_ERR(cgrp))
+ return PTR_ERR(cgrp);
+
+ ret = cgroup_bpf_attach(cgrp, prog, attr->attach_type,
+ attr->attach_flags);
+ cgroup_put(cgrp);
+
+ return ret;
+}
+
+int cgroup_bpf_prog_detach(const union bpf_attr *attr, enum bpf_prog_type ptype)
+{
+ struct bpf_prog *prog;
+ struct cgroup *cgrp;
+ int ret;
+
+ cgrp = cgroup_get_from_fd(attr->target_fd);
+ if (IS_ERR(cgrp))
+ return PTR_ERR(cgrp);
+
+ prog = bpf_prog_get_type(attr->attach_bpf_fd, ptype);
+ if (IS_ERR(prog))
+ prog = NULL;
+
+ ret = cgroup_bpf_detach(cgrp, prog, attr->attach_type, 0);
+ if (prog)
+ bpf_prog_put(prog);
+ cgroup_put(cgrp);
+ return ret;
+}
+
+int cgroup_bpf_prog_query(const union bpf_attr *attr,
+ union bpf_attr __user *uattr)
+{
+ struct cgroup *cgrp;
+ int ret;
+
+ cgrp = cgroup_get_from_fd(attr->query.target_fd);
+ if (IS_ERR(cgrp))
+ return PTR_ERR(cgrp);
+ ret = cgroup_bpf_query(cgrp, attr, uattr);
+ cgroup_put(cgrp);
+ return ret;
+}
+
/**
* __cgroup_bpf_run_filter_skb() - Run a program for packet filtering
* @sk: The socket sending or receiving traffic
diff --git a/kernel/bpf/sockmap.c b/kernel/bpf/sockmap.c
index 52a91d816c0e..81d0c55a77aa 100644
--- a/kernel/bpf/sockmap.c
+++ b/kernel/bpf/sockmap.c
@@ -1915,6 +1915,24 @@ int sock_map_prog(struct bpf_map *map, struct bpf_prog *prog, u32 type)
return 0;
}

+int sockmap_get_from_fd(const union bpf_attr *attr, int type,
+ struct bpf_prog *prog)
+{
+ int ufd = attr->target_fd;
+ struct bpf_map *map;
+ struct fd f;
+ int err;
+
+ f = fdget(ufd);
+ map = __bpf_map_get(f);
+ if (IS_ERR(map))
+ return PTR_ERR(map);
+
+ err = sock_map_prog(map, prog, attr->attach_type);
+ fdput(f);
+ return err;
+}
+
static void *sock_map_lookup(struct bpf_map *map, void *key)
{
return NULL;
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 0fa20624707f..93993c03c9ac 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -1489,8 +1489,6 @@ static int bpf_raw_tracepoint_open(const union bpf_attr *attr)
return err;
}

-#ifdef CONFIG_CGROUP_BPF
-
static int bpf_prog_attach_check_attach_type(const struct bpf_prog *prog,
enum bpf_attach_type attach_type)
{
@@ -1505,40 +1503,6 @@ static int bpf_prog_attach_check_attach_type(const struct bpf_prog *prog,

#define BPF_PROG_ATTACH_LAST_FIELD attach_flags

-static int sockmap_get_from_fd(const union bpf_attr *attr,
- int type, bool attach)
-{
- struct bpf_prog *prog = NULL;
- int ufd = attr->target_fd;
- struct bpf_map *map;
- struct fd f;
- int err;
-
- f = fdget(ufd);
- map = __bpf_map_get(f);
- if (IS_ERR(map))
- return PTR_ERR(map);
-
- if (attach) {
- prog = bpf_prog_get_type(attr->attach_bpf_fd, type);
- if (IS_ERR(prog)) {
- fdput(f);
- return PTR_ERR(prog);
- }
- }
-
- err = sock_map_prog(map, prog, attr->attach_type);
- if (err) {
- fdput(f);
- if (prog)
- bpf_prog_put(prog);
- return err;
- }
-
- fdput(f);
- return 0;
-}
-
#define BPF_F_ATTACH_MASK \
(BPF_F_ALLOW_OVERRIDE | BPF_F_ALLOW_MULTI)

@@ -1546,7 +1510,6 @@ static int bpf_prog_attach(const union bpf_attr *attr)
{
enum bpf_prog_type ptype;
struct bpf_prog *prog;
- struct cgroup *cgrp;
int ret;

if (!capable(CAP_NET_ADMIN))
@@ -1583,12 +1546,15 @@ static int bpf_prog_attach(const union bpf_attr *attr)
ptype = BPF_PROG_TYPE_CGROUP_DEVICE;
break;
case BPF_SK_MSG_VERDICT:
- return sockmap_get_from_fd(attr, BPF_PROG_TYPE_SK_MSG, true);
+ ptype = BPF_PROG_TYPE_SK_MSG;
+ break;
case BPF_SK_SKB_STREAM_PARSER:
case BPF_SK_SKB_STREAM_VERDICT:
- return sockmap_get_from_fd(attr, BPF_PROG_TYPE_SK_SKB, true);
+ ptype = BPF_PROG_TYPE_SK_SKB;
+ break;
case BPF_LIRC_MODE2:
- return lirc_prog_attach(attr);
+ ptype = BPF_PROG_TYPE_LIRC_MODE2;
+ break;
default:
return -EINVAL;
}
@@ -1602,17 +1568,20 @@ static int bpf_prog_attach(const union bpf_attr *attr)
return -EINVAL;
}

- cgrp = cgroup_get_from_fd(attr->target_fd);
- if (IS_ERR(cgrp)) {
- bpf_prog_put(prog);
- return PTR_ERR(cgrp);
+ switch (ptype) {
+ case BPF_PROG_TYPE_SK_SKB:
+ case BPF_PROG_TYPE_SK_MSG:
+ ret = sockmap_get_from_fd(attr, ptype, prog);
+ break;
+ case BPF_PROG_TYPE_LIRC_MODE2:
+ ret = lirc_prog_attach(attr, prog);
+ break;
+ default:
+ ret = cgroup_bpf_prog_attach(attr, ptype, prog);
}

- ret = cgroup_bpf_attach(cgrp, prog, attr->attach_type,
- attr->attach_flags);
if (ret)
bpf_prog_put(prog);
- cgroup_put(cgrp);

return ret;
}
@@ -1622,9 +1591,6 @@ static int bpf_prog_attach(const union bpf_attr *attr)
static int bpf_prog_detach(const union bpf_attr *attr)
{
enum bpf_prog_type ptype;
- struct bpf_prog *prog;
- struct cgroup *cgrp;
- int ret;

if (!capable(CAP_NET_ADMIN))
return -EPERM;
@@ -1657,29 +1623,17 @@ static int bpf_prog_detach(const union bpf_attr *attr)
ptype = BPF_PROG_TYPE_CGROUP_DEVICE;
break;
case BPF_SK_MSG_VERDICT:
- return sockmap_get_from_fd(attr, BPF_PROG_TYPE_SK_MSG, false);
+ return sockmap_get_from_fd(attr, BPF_PROG_TYPE_SK_MSG, NULL);
case BPF_SK_SKB_STREAM_PARSER:
case BPF_SK_SKB_STREAM_VERDICT:
- return sockmap_get_from_fd(attr, BPF_PROG_TYPE_SK_SKB, false);
+ return sockmap_get_from_fd(attr, BPF_PROG_TYPE_SK_SKB, NULL);
case BPF_LIRC_MODE2:
return lirc_prog_detach(attr);
default:
return -EINVAL;
}

- cgrp = cgroup_get_from_fd(attr->target_fd);
- if (IS_ERR(cgrp))
- return PTR_ERR(cgrp);
-
- prog = bpf_prog_get_type(attr->attach_bpf_fd, ptype);
- if (IS_ERR(prog))
- prog = NULL;
-
- ret = cgroup_bpf_detach(cgrp, prog, attr->attach_type, 0);
- if (prog)
- bpf_prog_put(prog);
- cgroup_put(cgrp);
- return ret;
+ return cgroup_bpf_prog_detach(attr, ptype);
}

#define BPF_PROG_QUERY_LAST_FIELD query.prog_cnt
@@ -1687,9 +1641,6 @@ static int bpf_prog_detach(const union bpf_attr *attr)
static int bpf_prog_query(const union bpf_attr *attr,
union bpf_attr __user *uattr)
{
- struct cgroup *cgrp;
- int ret;
-
if (!capable(CAP_NET_ADMIN))
return -EPERM;
if (CHECK_ATTR(BPF_PROG_QUERY))
@@ -1717,14 +1668,9 @@ static int bpf_prog_query(const union bpf_attr *attr,
default:
return -EINVAL;
}
- cgrp = cgroup_get_from_fd(attr->query.target_fd);
- if (IS_ERR(cgrp))
- return PTR_ERR(cgrp);
- ret = cgroup_bpf_query(cgrp, attr, uattr);
- cgroup_put(cgrp);
- return ret;
+
+ return cgroup_bpf_prog_query(attr, uattr);
}
-#endif /* CONFIG_CGROUP_BPF */

#define BPF_PROG_TEST_RUN_LAST_FIELD test.duration

@@ -2371,7 +2317,6 @@ SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, siz
case BPF_OBJ_GET:
err = bpf_obj_get(&attr);
break;
-#ifdef CONFIG_CGROUP_BPF
case BPF_PROG_ATTACH:
err = bpf_prog_attach(&attr);
break;
@@ -2381,7 +2326,6 @@ SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, siz
case BPF_PROG_QUERY:
err = bpf_prog_query(&attr, uattr);
break;
-#endif
case BPF_PROG_TEST_RUN:
err = bpf_prog_test_run(&attr, uattr);
break;
--
2.17.1


2018-06-18 18:49:41

by kernel test robot

[permalink] [raw]
Subject: Re: [PATCH v3] bpf: attach type BPF_LIRC_MODE2 should not depend on CONFIG_CGROUP_BPF

Hi Sean,

I love your patch! Yet something to improve:

[auto build test ERROR on linus/master]
[also build test ERROR on v4.18-rc1 next-20180618]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url: https://github.com/0day-ci/linux/commits/Sean-Young/bpf-attach-type-BPF_LIRC_MODE2-should-not-depend-on-CONFIG_CGROUP_BPF/20180619-023056
config: i386-tinyconfig (attached as .config)
compiler: gcc-7 (Debian 7.3.0-16) 7.3.0
reproduce:
# save the attached .config to linux build tree
make ARCH=i386

All errors (new ones prefixed by >>):

In file included from kernel///events/core.c:45:0:
>> include/linux/bpf.h:710:1: error: expected identifier or '(' before '{' token
{
^

vim +710 include/linux/bpf.h

707
708 int sockmap_get_from_fd(const union bpf_attr *attr, int type,
709 struct bpf_prog *prog);
> 710 {
711 return -EINVAL;
712 }
713 #endif
714

---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all Intel Corporation



2018-06-18 22:58:06

by Sean Young

[permalink] [raw]
Subject: Re: [PATCH v3] bpf: attach type BPF_LIRC_MODE2 should not depend on CONFIG_CGROUP_BPF

On Tue, Jun 19, 2018 at 02:46:29AM +0800, kbuild test robot wrote:
> Hi Sean,
>
> I love your patch! Yet something to improve:
>
> [auto build test ERROR on linus/master]
> [also build test ERROR on v4.18-rc1 next-20180618]
> [if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
>
> url: https://github.com/0day-ci/linux/commits/Sean-Young/bpf-attach-type-BPF_LIRC_MODE2-should-not-depend-on-CONFIG_CGROUP_BPF/20180619-023056
> config: i386-tinyconfig (attached as .config)
> compiler: gcc-7 (Debian 7.3.0-16) 7.3.0
> reproduce:
> # save the attached .config to linux build tree
> make ARCH=i386
>
> All errors (new ones prefixed by >>):
>
> In file included from kernel///events/core.c:45:0:
> >> include/linux/bpf.h:710:1: error: expected identifier or '(' before '{' token
> {
> ^

Oh dear, I never tested that with CONFIG_INET off (no ip? madness!). I'll
send out a v4 soon.
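
For clarity, the robot's error comes from the stub added in the #else branch
of include/linux/bpf.h (the branch taken when sockmap support is not built
in, e.g. with CONFIG_INET off as in the robot's tinyconfig): it was written
as a prototype with a trailing ';' but then given a body, and it also lacks
"static inline", so the header no longer compiles. A sketch of the corrected
stub, which the v4 below adopts:

  static inline int sockmap_get_from_fd(const union bpf_attr *attr, int type,
                                        struct bpf_prog *prog)
  {
          return -EINVAL;
  }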


Sean

2018-06-18 23:05:28

by Sean Young

[permalink] [raw]
Subject: [PATCH v4] bpf: attach type BPF_LIRC_MODE2 should not depend on CONFIG_CGROUP_BPF

If the kernel is compiled with CONFIG_CGROUP_BPF not enabled, it is not
possible to attach, detach or query IR BPF programs to /dev/lircN devices,
making them impossible to use. For embedded devices, it should be possible
to use IR decoding without cgroups or CONFIG_CGROUP_BPF enabled.

This change requires some refactoring, since bpf_prog_{attach,detach,query}
functions are now always compiled, but their code paths for cgroups need
moving out. Rather than a #ifdef CONFIG_CGROUP_BPF in kernel/bpf/syscall.c,
moving them to kernel/bpf/cgroup.c and kernel/bpf/sockmap.c does not
require #ifdefs since those files are already conditionally compiled.

Signed-off-by: Sean Young <[email protected]>
---
drivers/media/rc/bpf-lirc.c | 14 +-----
include/linux/bpf-cgroup.h | 26 ++++++++++
include/linux/bpf.h | 8 +++
include/linux/bpf_lirc.h | 5 +-
kernel/bpf/cgroup.c | 52 ++++++++++++++++++++
kernel/bpf/sockmap.c | 18 +++++++
kernel/bpf/syscall.c | 98 ++++++++-----------------------------
7 files changed, 130 insertions(+), 91 deletions(-)

diff --git a/drivers/media/rc/bpf-lirc.c b/drivers/media/rc/bpf-lirc.c
index 40826bba06b6..fcfab6635f9c 100644
--- a/drivers/media/rc/bpf-lirc.c
+++ b/drivers/media/rc/bpf-lirc.c
@@ -207,29 +207,19 @@ void lirc_bpf_free(struct rc_dev *rcdev)
bpf_prog_array_free(rcdev->raw->progs);
}

-int lirc_prog_attach(const union bpf_attr *attr)
+int lirc_prog_attach(const union bpf_attr *attr, struct bpf_prog *prog)
{
- struct bpf_prog *prog;
struct rc_dev *rcdev;
int ret;

if (attr->attach_flags)
return -EINVAL;

- prog = bpf_prog_get_type(attr->attach_bpf_fd,
- BPF_PROG_TYPE_LIRC_MODE2);
- if (IS_ERR(prog))
- return PTR_ERR(prog);
-
rcdev = rc_dev_get_from_fd(attr->target_fd);
- if (IS_ERR(rcdev)) {
- bpf_prog_put(prog);
+ if (IS_ERR(rcdev))
return PTR_ERR(rcdev);
- }

ret = lirc_bpf_attach(rcdev, prog);
- if (ret)
- bpf_prog_put(prog);

put_device(&rcdev->dev);

diff --git a/include/linux/bpf-cgroup.h b/include/linux/bpf-cgroup.h
index 975fb4cf1bb7..79795c5fa7c3 100644
--- a/include/linux/bpf-cgroup.h
+++ b/include/linux/bpf-cgroup.h
@@ -188,12 +188,38 @@ int __cgroup_bpf_check_dev_permission(short dev_type, u32 major, u32 minor,
\
__ret; \
})
+int cgroup_bpf_prog_attach(const union bpf_attr *attr,
+ enum bpf_prog_type ptype, struct bpf_prog *prog);
+int cgroup_bpf_prog_detach(const union bpf_attr *attr,
+ enum bpf_prog_type ptype);
+int cgroup_bpf_prog_query(const union bpf_attr *attr,
+ union bpf_attr __user *uattr);
#else

+struct bpf_prog;
struct cgroup_bpf {};
static inline void cgroup_bpf_put(struct cgroup *cgrp) {}
static inline int cgroup_bpf_inherit(struct cgroup *cgrp) { return 0; }

+static inline int cgroup_bpf_prog_attach(const union bpf_attr *attr,
+ enum bpf_prog_type ptype,
+ struct bpf_prog *prog)
+{
+ return -EINVAL;
+}
+
+static inline int cgroup_bpf_prog_detach(const union bpf_attr *attr,
+ enum bpf_prog_type ptype)
+{
+ return -EINVAL;
+}
+
+static inline int cgroup_bpf_prog_query(const union bpf_attr *attr,
+ union bpf_attr __user *uattr)
+{
+ return -EINVAL;
+}
+
#define cgroup_bpf_enabled (0)
#define BPF_CGROUP_PRE_CONNECT_ENABLED(sk) (0)
#define BPF_CGROUP_RUN_PROG_INET_INGRESS(sk,skb) ({ 0; })
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 995c3b1e59bf..7c87ec22d465 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -684,6 +684,8 @@ static inline void bpf_map_offload_map_free(struct bpf_map *map)
struct sock *__sock_map_lookup_elem(struct bpf_map *map, u32 key);
struct sock *__sock_hash_lookup_elem(struct bpf_map *map, void *key);
int sock_map_prog(struct bpf_map *map, struct bpf_prog *prog, u32 type);
+int sockmap_get_from_fd(const union bpf_attr *attr, int type,
+ struct bpf_prog *prog);
#else
static inline struct sock *__sock_map_lookup_elem(struct bpf_map *map, u32 key)
{
@@ -702,6 +704,12 @@ static inline int sock_map_prog(struct bpf_map *map,
{
return -EOPNOTSUPP;
}
+
+static inline int sockmap_get_from_fd(const union bpf_attr *attr, int type,
+ struct bpf_prog *prog)
+{
+ return -EINVAL;
+}
#endif

#if defined(CONFIG_XDP_SOCKETS)
diff --git a/include/linux/bpf_lirc.h b/include/linux/bpf_lirc.h
index 5f8a4283092d..9d9ff755ec29 100644
--- a/include/linux/bpf_lirc.h
+++ b/include/linux/bpf_lirc.h
@@ -5,11 +5,12 @@
#include <uapi/linux/bpf.h>

#ifdef CONFIG_BPF_LIRC_MODE2
-int lirc_prog_attach(const union bpf_attr *attr);
+int lirc_prog_attach(const union bpf_attr *attr, struct bpf_prog *prog);
int lirc_prog_detach(const union bpf_attr *attr);
int lirc_prog_query(const union bpf_attr *attr, union bpf_attr __user *uattr);
#else
-static inline int lirc_prog_attach(const union bpf_attr *attr)
+static inline int lirc_prog_attach(const union bpf_attr *attr,
+ struct bpf_prog *prog)
{
return -EINVAL;
}
diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
index f7c00bd6f8e4..f0ae8a3d01f9 100644
--- a/kernel/bpf/cgroup.c
+++ b/kernel/bpf/cgroup.c
@@ -428,6 +428,58 @@ int __cgroup_bpf_query(struct cgroup *cgrp, const union bpf_attr *attr,
return ret;
}

+int cgroup_bpf_prog_attach(const union bpf_attr *attr,
+ enum bpf_prog_type ptype, struct bpf_prog *prog)
+{
+ struct cgroup *cgrp;
+ int ret;
+
+ cgrp = cgroup_get_from_fd(attr->target_fd);
+ if (IS_ERR(cgrp))
+ return PTR_ERR(cgrp);
+
+ ret = cgroup_bpf_attach(cgrp, prog, attr->attach_type,
+ attr->attach_flags);
+ cgroup_put(cgrp);
+
+ return ret;
+}
+
+int cgroup_bpf_prog_detach(const union bpf_attr *attr, enum bpf_prog_type ptype)
+{
+ struct bpf_prog *prog;
+ struct cgroup *cgrp;
+ int ret;
+
+ cgrp = cgroup_get_from_fd(attr->target_fd);
+ if (IS_ERR(cgrp))
+ return PTR_ERR(cgrp);
+
+ prog = bpf_prog_get_type(attr->attach_bpf_fd, ptype);
+ if (IS_ERR(prog))
+ prog = NULL;
+
+ ret = cgroup_bpf_detach(cgrp, prog, attr->attach_type, 0);
+ if (prog)
+ bpf_prog_put(prog);
+ cgroup_put(cgrp);
+ return ret;
+}
+
+int cgroup_bpf_prog_query(const union bpf_attr *attr,
+ union bpf_attr __user *uattr)
+{
+ struct cgroup *cgrp;
+ int ret;
+
+ cgrp = cgroup_get_from_fd(attr->query.target_fd);
+ if (IS_ERR(cgrp))
+ return PTR_ERR(cgrp);
+ ret = cgroup_bpf_query(cgrp, attr, uattr);
+ cgroup_put(cgrp);
+ return ret;
+}
+
/**
* __cgroup_bpf_run_filter_skb() - Run a program for packet filtering
* @sk: The socket sending or receiving traffic
diff --git a/kernel/bpf/sockmap.c b/kernel/bpf/sockmap.c
index 52a91d816c0e..81d0c55a77aa 100644
--- a/kernel/bpf/sockmap.c
+++ b/kernel/bpf/sockmap.c
@@ -1915,6 +1915,24 @@ int sock_map_prog(struct bpf_map *map, struct bpf_prog *prog, u32 type)
return 0;
}

+int sockmap_get_from_fd(const union bpf_attr *attr, int type,
+ struct bpf_prog *prog)
+{
+ int ufd = attr->target_fd;
+ struct bpf_map *map;
+ struct fd f;
+ int err;
+
+ f = fdget(ufd);
+ map = __bpf_map_get(f);
+ if (IS_ERR(map))
+ return PTR_ERR(map);
+
+ err = sock_map_prog(map, prog, attr->attach_type);
+ fdput(f);
+ return err;
+}
+
static void *sock_map_lookup(struct bpf_map *map, void *key)
{
return NULL;
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 0fa20624707f..93993c03c9ac 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -1489,8 +1489,6 @@ static int bpf_raw_tracepoint_open(const union bpf_attr *attr)
return err;
}

-#ifdef CONFIG_CGROUP_BPF
-
static int bpf_prog_attach_check_attach_type(const struct bpf_prog *prog,
enum bpf_attach_type attach_type)
{
@@ -1505,40 +1503,6 @@ static int bpf_prog_attach_check_attach_type(const struct bpf_prog *prog,

#define BPF_PROG_ATTACH_LAST_FIELD attach_flags

-static int sockmap_get_from_fd(const union bpf_attr *attr,
- int type, bool attach)
-{
- struct bpf_prog *prog = NULL;
- int ufd = attr->target_fd;
- struct bpf_map *map;
- struct fd f;
- int err;
-
- f = fdget(ufd);
- map = __bpf_map_get(f);
- if (IS_ERR(map))
- return PTR_ERR(map);
-
- if (attach) {
- prog = bpf_prog_get_type(attr->attach_bpf_fd, type);
- if (IS_ERR(prog)) {
- fdput(f);
- return PTR_ERR(prog);
- }
- }
-
- err = sock_map_prog(map, prog, attr->attach_type);
- if (err) {
- fdput(f);
- if (prog)
- bpf_prog_put(prog);
- return err;
- }
-
- fdput(f);
- return 0;
-}
-
#define BPF_F_ATTACH_MASK \
(BPF_F_ALLOW_OVERRIDE | BPF_F_ALLOW_MULTI)

@@ -1546,7 +1510,6 @@ static int bpf_prog_attach(const union bpf_attr *attr)
{
enum bpf_prog_type ptype;
struct bpf_prog *prog;
- struct cgroup *cgrp;
int ret;

if (!capable(CAP_NET_ADMIN))
@@ -1583,12 +1546,15 @@ static int bpf_prog_attach(const union bpf_attr *attr)
ptype = BPF_PROG_TYPE_CGROUP_DEVICE;
break;
case BPF_SK_MSG_VERDICT:
- return sockmap_get_from_fd(attr, BPF_PROG_TYPE_SK_MSG, true);
+ ptype = BPF_PROG_TYPE_SK_MSG;
+ break;
case BPF_SK_SKB_STREAM_PARSER:
case BPF_SK_SKB_STREAM_VERDICT:
- return sockmap_get_from_fd(attr, BPF_PROG_TYPE_SK_SKB, true);
+ ptype = BPF_PROG_TYPE_SK_SKB;
+ break;
case BPF_LIRC_MODE2:
- return lirc_prog_attach(attr);
+ ptype = BPF_PROG_TYPE_LIRC_MODE2;
+ break;
default:
return -EINVAL;
}
@@ -1602,17 +1568,20 @@ static int bpf_prog_attach(const union bpf_attr *attr)
return -EINVAL;
}

- cgrp = cgroup_get_from_fd(attr->target_fd);
- if (IS_ERR(cgrp)) {
- bpf_prog_put(prog);
- return PTR_ERR(cgrp);
+ switch (ptype) {
+ case BPF_PROG_TYPE_SK_SKB:
+ case BPF_PROG_TYPE_SK_MSG:
+ ret = sockmap_get_from_fd(attr, ptype, prog);
+ break;
+ case BPF_PROG_TYPE_LIRC_MODE2:
+ ret = lirc_prog_attach(attr, prog);
+ break;
+ default:
+ ret = cgroup_bpf_prog_attach(attr, ptype, prog);
}

- ret = cgroup_bpf_attach(cgrp, prog, attr->attach_type,
- attr->attach_flags);
if (ret)
bpf_prog_put(prog);
- cgroup_put(cgrp);

return ret;
}
@@ -1622,9 +1591,6 @@ static int bpf_prog_attach(const union bpf_attr *attr)
static int bpf_prog_detach(const union bpf_attr *attr)
{
enum bpf_prog_type ptype;
- struct bpf_prog *prog;
- struct cgroup *cgrp;
- int ret;

if (!capable(CAP_NET_ADMIN))
return -EPERM;
@@ -1657,29 +1623,17 @@ static int bpf_prog_detach(const union bpf_attr *attr)
ptype = BPF_PROG_TYPE_CGROUP_DEVICE;
break;
case BPF_SK_MSG_VERDICT:
- return sockmap_get_from_fd(attr, BPF_PROG_TYPE_SK_MSG, false);
+ return sockmap_get_from_fd(attr, BPF_PROG_TYPE_SK_MSG, NULL);
case BPF_SK_SKB_STREAM_PARSER:
case BPF_SK_SKB_STREAM_VERDICT:
- return sockmap_get_from_fd(attr, BPF_PROG_TYPE_SK_SKB, false);
+ return sockmap_get_from_fd(attr, BPF_PROG_TYPE_SK_SKB, NULL);
case BPF_LIRC_MODE2:
return lirc_prog_detach(attr);
default:
return -EINVAL;
}

- cgrp = cgroup_get_from_fd(attr->target_fd);
- if (IS_ERR(cgrp))
- return PTR_ERR(cgrp);
-
- prog = bpf_prog_get_type(attr->attach_bpf_fd, ptype);
- if (IS_ERR(prog))
- prog = NULL;
-
- ret = cgroup_bpf_detach(cgrp, prog, attr->attach_type, 0);
- if (prog)
- bpf_prog_put(prog);
- cgroup_put(cgrp);
- return ret;
+ return cgroup_bpf_prog_detach(attr, ptype);
}

#define BPF_PROG_QUERY_LAST_FIELD query.prog_cnt
@@ -1687,9 +1641,6 @@ static int bpf_prog_detach(const union bpf_attr *attr)
static int bpf_prog_query(const union bpf_attr *attr,
union bpf_attr __user *uattr)
{
- struct cgroup *cgrp;
- int ret;
-
if (!capable(CAP_NET_ADMIN))
return -EPERM;
if (CHECK_ATTR(BPF_PROG_QUERY))
@@ -1717,14 +1668,9 @@ static int bpf_prog_query(const union bpf_attr *attr,
default:
return -EINVAL;
}
- cgrp = cgroup_get_from_fd(attr->query.target_fd);
- if (IS_ERR(cgrp))
- return PTR_ERR(cgrp);
- ret = cgroup_bpf_query(cgrp, attr, uattr);
- cgroup_put(cgrp);
- return ret;
+
+ return cgroup_bpf_prog_query(attr, uattr);
}
-#endif /* CONFIG_CGROUP_BPF */

#define BPF_PROG_TEST_RUN_LAST_FIELD test.duration

@@ -2371,7 +2317,6 @@ SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, siz
case BPF_OBJ_GET:
err = bpf_obj_get(&attr);
break;
-#ifdef CONFIG_CGROUP_BPF
case BPF_PROG_ATTACH:
err = bpf_prog_attach(&attr);
break;
@@ -2381,7 +2326,6 @@ SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, siz
case BPF_PROG_QUERY:
err = bpf_prog_query(&attr, uattr);
break;
-#endif
case BPF_PROG_TEST_RUN:
err = bpf_prog_test_run(&attr, uattr);
break;
--
2.17.1


2018-06-26 10:04:31

by Daniel Borkmann

[permalink] [raw]
Subject: Re: [PATCH v4] bpf: attach type BPF_LIRC_MODE2 should not depend on CONFIG_CGROUP_BPF

On 06/19/2018 01:04 AM, Sean Young wrote:
> If the kernel is compiled without CONFIG_CGROUP_BPF, it is not possible
> to attach, detach or query IR BPF programs on /dev/lircN devices, making
> them impossible to use. For embedded devices, it should be possible to
> use IR decoding without cgroups or CONFIG_CGROUP_BPF enabled.
>
> This change requires some refactoring: the bpf_prog_{attach,detach,query}
> functions are now always compiled, so their cgroup and sockmap code paths
> must move out of kernel/bpf/syscall.c. Rather than adding #ifdef
> CONFIG_CGROUP_BPF there, those paths are moved to kernel/bpf/cgroup.c and
> kernel/bpf/sockmap.c, which are already conditionally compiled, so no new
> #ifdefs are needed.
>
> Signed-off-by: Sean Young <[email protected]>

Applied to bpf, thanks Sean!
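
(For anyone wanting to try this from user space: the sketch below is
illustrative only and not part of the patch. It assumes prog_fd is a
program already loaded with BPF_PROG_LOAD as BPF_PROG_TYPE_LIRC_MODE2,
and that linux/bpf.h comes from headers that include this series; the
lirc_attach helper name is made up and error handling is trimmed. The
point is that attr->target_fd is the lirc chardev, not a cgroup.)

/* Minimal sketch: attach an already-loaded lirc_mode2 program to a
 * lirc chardev via BPF_PROG_ATTACH / attach_type BPF_LIRC_MODE2.
 */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/bpf.h>

static int lirc_attach(int prog_fd, const char *lirc_path)
{
	union bpf_attr attr;
	int lirc_fd;

	lirc_fd = open(lirc_path, O_RDWR);	/* e.g. "/dev/lirc0" */
	if (lirc_fd < 0)
		return -1;

	memset(&attr, 0, sizeof(attr));
	attr.target_fd = lirc_fd;		/* lirc chardev, not a cgroup fd */
	attr.attach_bpf_fd = prog_fd;		/* BPF_PROG_TYPE_LIRC_MODE2 program */
	attr.attach_type = BPF_LIRC_MODE2;

	return syscall(__NR_bpf, BPF_PROG_ATTACH, &attr, sizeof(attr));
}

BPF_PROG_DETACH should follow the same pattern, with the same target_fd,
attach_bpf_fd and attach_type fields filled in.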