This patch series revisits the proposal for a GPU cgroup controller to
track and limit memory allocations by various device/allocator
subsystems. It also contains a simple prototype that illustrates how
Android intends to implement DMA-BUF allocator attribution using the
GPU cgroup controller. The prototype does not include resource limit
enforcement.
Changelog:
v3:
Remove Upstreaming Plan from gpu-cgroup.rst per John Stultz
Use more common dual author commit message format per John Stultz
Remove android from binder changes title per Todd Kjos
Add a kselftest for this new behavior per Greg Kroah-Hartman
Include details on behavior for all combinations of kernel/userspace
versions in changelog (thanks Suren Baghdasaryan) per Greg Kroah-Hartman.
Fix pid and uid types in binder UAPI header
v2:
See the previous revision of this change submitted by Hridya Valsaraju
at: https://lore.kernel.org/all/[email protected]/
Move dma-buf cgroup charge transfer from a dma_buf_op defined by every
heap to a single dma-buf function for all heaps per Daniel Vetter and
Christian König. Pointers to struct gpucg and struct gpucg_device
tracking the current associations were added to the dma_buf struct to
achieve this.
Fix incorrect Kconfig help section indentation per Randy Dunlap.
History of the GPU cgroup controller
====================================
The GPU/DRM cgroup controller came into being when a consensus[1]
was reached that the resources it tracked were unsuitable to be integrated
into memcg. Originally, the proposed controller was specific to the DRM
subsystem and was intended to track GEM buffers and GPU-specific
resources[2]. To help establish a unified memory accounting model for
the GPU and all related subsystems, Daniel Vetter suggested moving it
out of the DRM subsystem so that it could be used by other DMA-BUF
exporters as well[3]. This RFC proposes an interface that does the
same.
[1]: https://patchwork.kernel.org/project/dri-devel/cover/[email protected]/#22624705
[2]: https://lore.kernel.org/amd-gfx/[email protected]/
[3]: https://lore.kernel.org/amd-gfx/YCVOl8%[email protected]/
Hridya Valsaraju (5):
gpu: rfc: Proposal for a GPU cgroup controller
cgroup: gpu: Add a cgroup controller for allocator attribution of GPU
memory
dmabuf: heaps: export system_heap buffers with GPU cgroup charging
dmabuf: Add gpu cgroup charge transfer function
binder: Add a buffer flag to relinquish ownership of fds
T.J. Mercier (3):
dmabuf: Use the GPU cgroup charge/uncharge APIs
binder: use __kernel_pid_t and __kernel_uid_t for userspace
selftests: Add binder cgroup gpu memory transfer test
Documentation/gpu/rfc/gpu-cgroup.rst | 183 +++++++
Documentation/gpu/rfc/index.rst | 4 +
drivers/android/binder.c | 26 +
drivers/dma-buf/dma-buf.c | 100 ++++
drivers/dma-buf/dma-heap.c | 27 +
drivers/dma-buf/heaps/system_heap.c | 3 +
include/linux/cgroup_gpu.h | 127 +++++
include/linux/cgroup_subsys.h | 4 +
include/linux/dma-buf.h | 22 +-
include/linux/dma-heap.h | 11 +
include/uapi/linux/android/binder.h | 5 +-
init/Kconfig | 7 +
kernel/cgroup/Makefile | 1 +
kernel/cgroup/gpu.c | 304 +++++++++++
.../selftests/drivers/android/binder/Makefile | 8 +
.../drivers/android/binder/binder_util.c | 254 +++++++++
.../drivers/android/binder/binder_util.h | 32 ++
.../selftests/drivers/android/binder/config | 4 +
.../binder/test_dmabuf_cgroup_transfer.c | 480 ++++++++++++++++++
19 files changed, 1598 insertions(+), 4 deletions(-)
create mode 100644 Documentation/gpu/rfc/gpu-cgroup.rst
create mode 100644 include/linux/cgroup_gpu.h
create mode 100644 kernel/cgroup/gpu.c
create mode 100644 tools/testing/selftests/drivers/android/binder/Makefile
create mode 100644 tools/testing/selftests/drivers/android/binder/binder_util.c
create mode 100644 tools/testing/selftests/drivers/android/binder/binder_util.h
create mode 100644 tools/testing/selftests/drivers/android/binder/config
create mode 100644 tools/testing/selftests/drivers/android/binder/test_dmabuf_cgroup_transfer.c
--
2.35.1.616.g0bdcbb4464-goog
This test verifies that the cgroup GPU memory charge is transferred
correctly when a dmabuf is passed between processes in two different
cgroups and the sender specifies BINDER_BUFFER_FLAG_SENDER_NO_NEED in the
binder transaction data containing the dmabuf file descriptor.
Signed-off-by: T.J. Mercier <[email protected]>
---
.../selftests/drivers/android/binder/Makefile | 8 +
.../drivers/android/binder/binder_util.c | 254 +++++++++
.../drivers/android/binder/binder_util.h | 32 ++
.../selftests/drivers/android/binder/config | 4 +
.../binder/test_dmabuf_cgroup_transfer.c | 480 ++++++++++++++++++
5 files changed, 778 insertions(+)
create mode 100644 tools/testing/selftests/drivers/android/binder/Makefile
create mode 100644 tools/testing/selftests/drivers/android/binder/binder_util.c
create mode 100644 tools/testing/selftests/drivers/android/binder/binder_util.h
create mode 100644 tools/testing/selftests/drivers/android/binder/config
create mode 100644 tools/testing/selftests/drivers/android/binder/test_dmabuf_cgroup_transfer.c
diff --git a/tools/testing/selftests/drivers/android/binder/Makefile b/tools/testing/selftests/drivers/android/binder/Makefile
new file mode 100644
index 000000000000..726439d10675
--- /dev/null
+++ b/tools/testing/selftests/drivers/android/binder/Makefile
@@ -0,0 +1,8 @@
+# SPDX-License-Identifier: GPL-2.0
+CFLAGS += -Wall
+
+TEST_GEN_PROGS = test_dmabuf_cgroup_transfer
+
+include ../../../lib.mk
+
+$(OUTPUT)/test_dmabuf_cgroup_transfer: ../../../cgroup/cgroup_util.c binder_util.c
diff --git a/tools/testing/selftests/drivers/android/binder/binder_util.c b/tools/testing/selftests/drivers/android/binder/binder_util.c
new file mode 100644
index 000000000000..c9dcf5b9d42b
--- /dev/null
+++ b/tools/testing/selftests/drivers/android/binder/binder_util.c
@@ -0,0 +1,254 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include "binder_util.h"
+
+#include <errno.h>
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <unistd.h>
+#include <sys/ioctl.h>
+#include <sys/mman.h>
+#include <sys/mount.h>
+
+#include <linux/limits.h>
+#include <linux/android/binder.h>
+#include <linux/android/binderfs.h>
+
+static const size_t BINDER_MMAP_SIZE = 64 * 1024;
+
+static void binderfs_unmount(const char *mountpoint)
+{
+ if (umount2(mountpoint, MNT_DETACH))
+ fprintf(stderr, "Failed to unmount binderfs at %s: %s\n",
+ mountpoint, strerror(errno));
+ else
+ fprintf(stderr, "Binderfs unmounted: %s\n", mountpoint);
+
+ if (rmdir(mountpoint))
+ fprintf(stderr, "Failed to remove binderfs mount %s: %s\n",
+ mountpoint, strerror(errno));
+ else
+ fprintf(stderr, "Binderfs mountpoint destroyed: %s\n", mountpoint);
+}
+
+struct binderfs_ctx create_binderfs(const char *name)
+{
+ int fd, ret, saved_errno;
+ struct binderfs_device device = { 0 };
+ struct binderfs_ctx ctx = { 0 };
+
+ /*
+ * P_tmpdir is set to "/tmp/" on Android platforms where Binder is most
+ * commonly used, but this path does not actually exist on Android. We
+ * will first try "/data/local/tmp" and fall back to P_tmpdir if that
+ * fails, for non-Android platforms.
+ */
+ static const char tmpdir[] = "/data/local/tmp";
+ static const size_t MAX_TMPDIR_SIZE =
+ sizeof(tmpdir) > sizeof(P_tmpdir) ?
+ sizeof(tmpdir) : sizeof(P_tmpdir);
+ static const char template[] = "/binderfs_XXXXXX";
+
+ char *mkdtemp_result;
+ char binderfs_mntpt[MAX_TMPDIR_SIZE + sizeof(template)];
+ char device_path[MAX_TMPDIR_SIZE + sizeof(template) + BINDERFS_MAX_NAME];
+
+ snprintf(binderfs_mntpt, sizeof(binderfs_mntpt), "%s%s", tmpdir, template);
+
+ mkdtemp_result = mkdtemp(binderfs_mntpt);
+ if (mkdtemp_result == NULL) {
+ fprintf(stderr, "Failed to create binderfs mountpoint at %s: %s.\n",
+ binderfs_mntpt, strerror(errno));
+ fprintf(stderr, "Trying fallback mountpoint...\n");
+ snprintf(binderfs_mntpt, sizeof(binderfs_mntpt), "%s%s", P_tmpdir, template);
+ if (mkdtemp(binderfs_mntpt) == NULL) {
+ fprintf(stderr, "Failed to create binderfs mountpoint at %s: %s\n",
+ binderfs_mntpt, strerror(errno));
+ return ctx;
+ }
+ }
+ fprintf(stderr, "Binderfs mountpoint created at %s\n", binderfs_mntpt);
+
+ if (mount(NULL, binderfs_mntpt, "binder", 0, 0)) {
+ perror("Could not mount binderfs");
+ rmdir(binderfs_mntpt);
+ return ctx;
+ }
+ fprintf(stderr, "Binderfs mounted at %s\n", binderfs_mntpt);
+
+ strncpy(device.name, name, sizeof(device.name));
+ snprintf(device_path, sizeof(device_path), "%s/binder-control", binderfs_mntpt);
+ fd = open(device_path, O_RDONLY | O_CLOEXEC);
+ if (fd < 0) {
+ perror("Failed to open binder-control device");
+ binderfs_unmount(binderfs_mntpt);
+ return ctx;
+ }
+
+ ret = ioctl(fd, BINDER_CTL_ADD, &device);
+ saved_errno = errno;
+ close(fd);
+ errno = saved_errno;
+ if (ret) {
+ perror("Failed to allocate new binder device");
+ binderfs_unmount(binderfs_mntpt);
+ return ctx;
+ }
+
+ fprintf(stderr, "Allocated new binder device with major %d, minor %d, and name %s at %s\n",
+ device.major, device.minor, device.name, binderfs_mntpt);
+
+ ctx.name = strdup(name);
+ ctx.mountpoint = strdup(binderfs_mntpt);
+ return ctx;
+}
+
+void destroy_binderfs(struct binderfs_ctx *ctx)
+{
+ char path[PATH_MAX];
+
+ snprintf(path, sizeof(path), "%s/%s", ctx->mountpoint, ctx->name);
+
+ if (unlink(path))
+ fprintf(stderr, "Failed to unlink binder device %s: %s\n", path, strerror(errno));
+ else
+ fprintf(stderr, "Destroyed binder %s at %s\n", ctx->name, ctx->mountpoint);
+
+ binderfs_unmount(ctx->mountpoint);
+
+ free(ctx->name);
+ free(ctx->mountpoint);
+}
+
+struct binder_ctx open_binder(struct binderfs_ctx *bfs_ctx)
+{
+ struct binder_ctx ctx = {.fd = -1, .memory = NULL};
+ char path[PATH_MAX];
+
+ snprintf(path, sizeof(path), "%s/%s", bfs_ctx->mountpoint, bfs_ctx->name);
+ ctx.fd = open(path, O_RDWR | O_NONBLOCK | O_CLOEXEC);
+ if (ctx.fd < 0) {
+ fprintf(stderr, "Error opening binder device %s: %s\n", path, strerror(errno));
+ return ctx;
+ }
+
+ ctx.memory = mmap(NULL, BINDER_MMAP_SIZE, PROT_READ, MAP_SHARED, ctx.fd, 0);
+ if (ctx.memory == MAP_FAILED) {
+ perror("Error mapping binder memory");
+ close(ctx.fd);
+ ctx.fd = -1;
+ }
+
+ return ctx;
+}
+
+void close_binder(struct binder_ctx *ctx)
+{
+ if (munmap(ctx->memory, BINDER_MMAP_SIZE))
+ perror("Failed to unmap binder memory");
+ ctx->memory = NULL;
+
+ if (close(ctx->fd))
+ perror("Failed to close binder");
+ ctx->fd = -1;
+}
+
+int become_binder_context_manager(int binder_fd)
+{
+ return ioctl(binder_fd, BINDER_SET_CONTEXT_MGR, 0);
+}
+
+int do_binder_write_read(int binder_fd, void *writebuf, binder_size_t writesize,
+ void *readbuf, binder_size_t readsize)
+{
+ int err;
+ struct binder_write_read bwr = {
+ .write_buffer = (binder_uintptr_t)writebuf,
+ .write_size = writesize,
+ .read_buffer = (binder_uintptr_t)readbuf,
+ .read_size = readsize
+ };
+
+ do {
+ if (ioctl(binder_fd, BINDER_WRITE_READ, &bwr) >= 0)
+ err = 0;
+ else
+ err = -errno;
+ } while (err == -EINTR);
+
+ if (err < 0) {
+ perror("BINDER_WRITE_READ");
+ return -1;
+ }
+
+ if (bwr.write_consumed < writesize) {
+ fprintf(stderr, "Binder did not consume full write buffer %llu %llu\n",
+ bwr.write_consumed, writesize);
+ return -1;
+ }
+
+ return bwr.read_consumed;
+}
+
+static const char *reply_string(int cmd)
+{
+ switch (cmd) {
+ case BR_ERROR:
+ return "BR_ERROR";
+ case BR_OK:
+ return "BR_OK";
+ case BR_TRANSACTION_SEC_CTX:
+ return "BR_TRANSACTION_SEC_CTX";
+ case BR_TRANSACTION:
+ return "BR_TRANSACTION";
+ case BR_REPLY:
+ return "BR_REPLY";
+ case BR_ACQUIRE_RESULT:
+ return "BR_ACQUIRE_RESULT";
+ case BR_DEAD_REPLY:
+ return "BR_DEAD_REPLY";
+ case BR_TRANSACTION_COMPLETE:
+ return "BR_TRANSACTION_COMPLETE";
+ case BR_INCREFS:
+ return "BR_INCREFS";
+ case BR_ACQUIRE:
+ return "BR_ACQUIRE";
+ case BR_RELEASE:
+ return "BR_RELEASE";
+ case BR_DECREFS:
+ return "BR_DECREFS";
+ case BR_ATTEMPT_ACQUIRE:
+ return "BR_ATTEMPT_ACQUIRE";
+ case BR_NOOP:
+ return "BR_NOOP";
+ case BR_SPAWN_LOOPER:
+ return "BR_SPAWN_LOOPER";
+ case BR_FINISHED:
+ return "BR_FINISHED";
+ case BR_DEAD_BINDER:
+ return "BR_DEAD_BINDER";
+ case BR_CLEAR_DEATH_NOTIFICATION_DONE:
+ return "BR_CLEAR_DEATH_NOTIFICATION_DONE";
+ case BR_FAILED_REPLY:
+ return "BR_FAILED_REPLY";
+ case BR_FROZEN_REPLY:
+ return "BR_FROZEN_REPLY";
+ case BR_ONEWAY_SPAM_SUSPECT:
+ return "BR_ONEWAY_SPAM_SUSPECT";
+ default:
+ return "Unknown";
+ }
+}
+
+int expect_binder_reply(int32_t actual, int32_t expected)
+{
+ if (actual != expected) {
+ fprintf(stderr, "Expected %s but received %s\n",
+ reply_string(expected), reply_string(actual));
+ return -1;
+ }
+ return 0;
+}
+
diff --git a/tools/testing/selftests/drivers/android/binder/binder_util.h b/tools/testing/selftests/drivers/android/binder/binder_util.h
new file mode 100644
index 000000000000..807f5abe987e
--- /dev/null
+++ b/tools/testing/selftests/drivers/android/binder/binder_util.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef SELFTEST_BINDER_UTIL_H
+#define SELFTEST_BINDER_UTIL_H
+
+#include <stdint.h>
+
+#include <linux/android/binder.h>
+
+struct binderfs_ctx {
+ char *name;
+ char *mountpoint;
+};
+
+struct binder_ctx {
+ int fd;
+ void *memory;
+};
+
+struct binderfs_ctx create_binderfs(const char *name);
+void destroy_binderfs(struct binderfs_ctx *ctx);
+
+struct binder_ctx open_binder(struct binderfs_ctx *bfs_ctx);
+void close_binder(struct binder_ctx *ctx);
+
+int become_binder_context_manager(int binder_fd);
+
+int do_binder_write_read(int binder_fd, void *writebuf, binder_size_t writesize,
+ void *readbuf, binder_size_t readsize);
+
+int expect_binder_reply(int32_t actual, int32_t expected);
+#endif
diff --git a/tools/testing/selftests/drivers/android/binder/config b/tools/testing/selftests/drivers/android/binder/config
new file mode 100644
index 000000000000..fcc5f8f693b3
--- /dev/null
+++ b/tools/testing/selftests/drivers/android/binder/config
@@ -0,0 +1,4 @@
+CONFIG_CGROUP_GPU=y
+CONFIG_ANDROID=y
+CONFIG_ANDROID_BINDERFS=y
+CONFIG_ANDROID_BINDER_IPC=y
diff --git a/tools/testing/selftests/drivers/android/binder/test_dmabuf_cgroup_transfer.c b/tools/testing/selftests/drivers/android/binder/test_dmabuf_cgroup_transfer.c
new file mode 100644
index 000000000000..9b952ab401cc
--- /dev/null
+++ b/tools/testing/selftests/drivers/android/binder/test_dmabuf_cgroup_transfer.c
@@ -0,0 +1,480 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * This test verifies that the cgroup GPU memory charge is transferred correctly
+ * when a dmabuf is passed between processes in two different cgroups and the
+ * sender specifies BINDER_BUFFER_FLAG_SENDER_NO_NEED in the binder transaction
+ * data containing the dmabuf file descriptor.
+ *
+ * The gpu_cgroup_dmabuf_transfer test function becomes the binder context
+ * manager, then forks a child who initiates a transaction with the context
+ * manager by specifying a target of 0. The context manager reply contains a
+ * dmabuf file descriptor which was allocated by the gpu_cgroup_dmabuf_transfer
+ * test function, but should be charged to the child cgroup after the binder
+ * transaction.
+ */
+
+#include <errno.h>
+#include <fcntl.h>
+#include <stddef.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/epoll.h>
+#include <sys/ioctl.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+
+#include "binder_util.h"
+#include "../../../cgroup/cgroup_util.h"
+#include "../../../kselftest.h"
+#include "../../../kselftest_harness.h"
+
+#include <linux/limits.h>
+#include <linux/dma-heap.h>
+#include <linux/android/binder.h>
+
+#define UNUSED(x) ((void)(x))
+
+static const unsigned int BINDER_CODE = 8675309; /* Any number will work here */
+
+struct cgroup_ctx {
+ char *root;
+ char *source;
+ char *dest;
+};
+
+void destroy_cgroups(struct __test_metadata *_metadata, struct cgroup_ctx *ctx)
+{
+ if (ctx->source != NULL) {
+ TH_LOG("Destroying cgroup: %s", ctx->source);
+ rmdir(ctx->source);
+ free(ctx->source);
+ }
+
+ if (ctx->dest != NULL) {
+ TH_LOG("Destroying cgroup: %s", ctx->dest);
+ rmdir(ctx->dest);
+ free(ctx->dest);
+ }
+
+ free(ctx->root);
+ ctx->root = ctx->source = ctx->dest = NULL;
+}
+
+struct cgroup_ctx create_cgroups(struct __test_metadata *_metadata)
+{
+ struct cgroup_ctx ctx = {0};
+ char root[PATH_MAX], *tmp;
+ static const char template[] = "/gpucg_XXXXXX";
+
+ if (cg_find_unified_root(root, sizeof(root))) {
+ TH_LOG("Could not find cgroups root");
+ return ctx;
+ }
+
+ if (cg_read_strstr(root, "cgroup.controllers", "gpu")) {
+ TH_LOG("Could not find GPU controller");
+ return ctx;
+ }
+
+ if (cg_write(root, "cgroup.subtree_control", "+gpu")) {
+ TH_LOG("Could not enable GPU controller");
+ return ctx;
+ }
+
+ ctx.root = strdup(root);
+
+ snprintf(root, sizeof(root), "%s/%s", ctx.root, template);
+ tmp = mkdtemp(root);
+ if (tmp == NULL) {
+ TH_LOG("%s - Could not create source cgroup", strerror(errno));
+ destroy_cgroups(_metadata, &ctx);
+ return ctx;
+ }
+ ctx.source = strdup(tmp);
+
+ snprintf(root, sizeof(root), "%s/%s", ctx.root, template);
+ tmp = mkdtemp(root);
+ if (tmp == NULL) {
+ TH_LOG("%s - Could not create destination cgroup", strerror(errno));
+ destroy_cgroups(_metadata, &ctx);
+ return ctx;
+ }
+ ctx.dest = strdup(tmp);
+
+ TH_LOG("Created cgroups: %s %s", ctx.source, ctx.dest);
+
+ return ctx;
+}
+
+int dmabuf_heap_alloc(int fd, size_t len, int *dmabuf_fd)
+{
+ struct dma_heap_allocation_data data = {
+ .len = len,
+ .fd = 0,
+ .fd_flags = O_RDONLY | O_CLOEXEC,
+ .heap_flags = 0,
+ };
+ int ret;
+
+ if (!dmabuf_fd)
+ return -EINVAL;
+
+ ret = ioctl(fd, DMA_HEAP_IOCTL_ALLOC, &data);
+ if (ret < 0)
+ return ret;
+ *dmabuf_fd = (int)data.fd;
+ return ret;
+}
+
+/* The system heap is known to export dmabufs with support for cgroup tracking */
+int alloc_dmabuf_from_system_heap(struct __test_metadata *_metadata, size_t bytes)
+{
+ int heap_fd = -1, dmabuf_fd = -1;
+ static const char * const heap_path = "/dev/dma_heap/system";
+
+ heap_fd = open(heap_path, O_RDONLY);
+ if (heap_fd < 0) {
+ TH_LOG("%s - open %s failed!\n", strerror(errno), heap_path);
+ return -1;
+ }
+
+ if (dmabuf_heap_alloc(heap_fd, bytes, &dmabuf_fd))
+ TH_LOG("dmabuf allocation failed! - %s", strerror(errno));
+ close(heap_fd);
+
+ return dmabuf_fd;
+}
+
+int binder_request_dmabuf(int binder_fd)
+{
+ int ret;
+
+ /*
+ * We just send an empty binder_buffer_object to initiate a transaction
+ * with the context manager, who should respond with a single dmabuf
+ * inside a binder_fd_array_object.
+ */
+
+ struct binder_buffer_object bbo = {
+ .hdr.type = BINDER_TYPE_PTR,
+ .flags = 0,
+ .buffer = 0,
+ .length = 0,
+ .parent = 0, /* No parent */
+ .parent_offset = 0 /* No parent */
+ };
+
+ binder_size_t offsets[] = {0};
+
+ struct {
+ int32_t cmd;
+ struct binder_transaction_data btd;
+ } __attribute__((packed)) bc = {
+ .cmd = BC_TRANSACTION,
+ .btd = {
+ .target = { 0 },
+ .cookie = 0,
+ .code = BINDER_CODE,
+ .flags = TF_ACCEPT_FDS, /* We expect a FDA in the reply */
+ .data_size = sizeof(bbo),
+ .offsets_size = sizeof(offsets),
+ .data.ptr = {
+ (binder_uintptr_t)&bbo,
+ (binder_uintptr_t)offsets
+ }
+ },
+ };
+
+ struct {
+ int32_t reply_noop;
+ } __attribute__((packed)) br;
+
+ ret = do_binder_write_read(binder_fd, &bc, sizeof(bc), &br, sizeof(br));
+ if (ret < (int)sizeof(br)) {
+ fprintf(stderr, "Not enough bytes in binder reply %d\n", ret);
+ return -1;
+ }
+ if (expect_binder_reply(br.reply_noop, BR_NOOP))
+ return -1;
+ return 0;
+}
+
+int send_dmabuf_reply(int binder_fd, struct binder_transaction_data *tr, int dmabuf_fd)
+{
+ int ret;
+ /*
+ * The trailing 0 is to achieve the necessary alignment for the binder
+ * buffer_size.
+ */
+ int fdarray[] = { dmabuf_fd, 0 };
+
+ struct binder_buffer_object bbo = {
+ .hdr.type = BINDER_TYPE_PTR,
+ .flags = BINDER_BUFFER_FLAG_SENDER_NO_NEED,
+ .buffer = (binder_uintptr_t)fdarray,
+ .length = sizeof(fdarray),
+ .parent = 0, /* No parent */
+ .parent_offset = 0 /* No parent */
+ };
+
+ struct binder_fd_array_object bfdao = {
+ .hdr.type = BINDER_TYPE_FDA,
+ .num_fds = 1,
+ .parent = 0, /* The binder_buffer_object */
+ .parent_offset = 0 /* FDs follow immediately */
+ };
+
+ uint64_t sz = sizeof(fdarray);
+ uint8_t data[sizeof(sz) + sizeof(bbo) + sizeof(bfdao)];
+ binder_size_t offsets[] = {sizeof(sz), sizeof(sz)+sizeof(bbo)};
+
+ memcpy(data, &sz, sizeof(sz));
+ memcpy(data + sizeof(sz), &bbo, sizeof(bbo));
+ memcpy(data + sizeof(sz) + sizeof(bbo), &bfdao, sizeof(bfdao));
+
+ struct {
+ int32_t cmd;
+ struct binder_transaction_data_sg btd;
+ } __attribute__((packed)) bc = {
+ .cmd = BC_REPLY_SG,
+ .btd.transaction_data = {
+ .target = { tr->target.handle },
+ .cookie = tr->cookie,
+ .code = BINDER_CODE,
+ .flags = 0,
+ .data_size = sizeof(data),
+ .offsets_size = sizeof(offsets),
+ .data.ptr = {
+ (binder_uintptr_t)data,
+ (binder_uintptr_t)offsets
+ }
+ },
+ .btd.buffers_size = sizeof(fdarray)
+ };
+
+ struct {
+ int32_t reply_noop;
+ } __attribute__((packed)) br;
+
+ ret = do_binder_write_read(binder_fd, &bc, sizeof(bc), &br, sizeof(br));
+ if (ret < (int)sizeof(br)) {
+ fprintf(stderr, "Not enough bytes in binder reply %d\n", ret);
+ return -1;
+ }
+ if (expect_binder_reply(br.reply_noop, BR_NOOP))
+ return -1;
+ return 0;
+}
+
+struct binder_transaction_data *binder_wait_for_transaction(int binder_fd,
+ uint32_t *readbuf,
+ size_t readsize)
+{
+ static const int MAX_EVENTS = 1, EPOLL_WAIT_TIME_MS = 3 * 1000;
+ struct binder_reply {
+ int32_t reply0;
+ int32_t reply1;
+ struct binder_transaction_data btd;
+ } *br;
+ struct binder_transaction_data *ret = NULL;
+ struct epoll_event events[MAX_EVENTS];
+ int epoll_fd, num_events, readcount;
+ uint32_t bc[] = { BC_ENTER_LOOPER };
+
+ do_binder_write_read(binder_fd, &bc, sizeof(bc), NULL, 0);
+
+ epoll_fd = epoll_create1(EPOLL_CLOEXEC);
+ if (epoll_fd == -1) {
+ perror("epoll_create");
+ return NULL;
+ }
+
+ events[0].events = EPOLLIN;
+ if (epoll_ctl(epoll_fd, EPOLL_CTL_ADD, binder_fd, &events[0])) {
+ perror("epoll_ctl add");
+ goto err_close;
+ }
+
+ num_events = epoll_wait(epoll_fd, events, MAX_EVENTS, EPOLL_WAIT_TIME_MS);
+ if (num_events < 0) {
+ perror("epoll_wait");
+ goto err_ctl;
+ } else if (num_events == 0) {
+ fprintf(stderr, "No events\n");
+ goto err_ctl;
+ }
+
+ readcount = do_binder_write_read(binder_fd, NULL, 0, readbuf, readsize);
+ fprintf(stderr, "Read %d bytes from binder\n", readcount);
+
+ if (readcount < (int)sizeof(struct binder_reply)) {
+ fprintf(stderr, "read_consumed not large enough\n");
+ goto err_ctl;
+ }
+
+ br = (struct binder_reply *)readbuf;
+ if (expect_binder_reply(br->reply0, BR_NOOP))
+ goto err_ctl;
+
+ if (br->reply1 == BR_TRANSACTION) {
+ if (br->btd.code == BINDER_CODE)
+ ret = &br->btd;
+ else
+ fprintf(stderr, "Received transaction with unexpected code: %u\n",
+ br->btd.code);
+ } else {
+ expect_binder_reply(br->reply1, BR_TRANSACTION_COMPLETE);
+ }
+
+err_ctl:
+ if (epoll_ctl(epoll_fd, EPOLL_CTL_DEL, binder_fd, NULL))
+ perror("epoll_ctl del");
+err_close:
+ close(epoll_fd);
+ return ret;
+}
+
+static int child_request_dmabuf_transfer(const char *cgroup, void *arg)
+{
+ UNUSED(cgroup);
+ int ret = -1;
+ uint32_t readbuf[32];
+ struct binderfs_ctx bfs_ctx = *(struct binderfs_ctx *)arg;
+ struct binder_ctx b_ctx;
+
+ fprintf(stderr, "Child PID: %d\n", getpid());
+
+ b_ctx = open_binder(&bfs_ctx);
+ if (b_ctx.fd < 0) {
+ fprintf(stderr, "Child unable to open binder\n");
+ return -1;
+ }
+
+ if (binder_request_dmabuf(b_ctx.fd))
+ goto err;
+
+ /* The child must stay alive until the binder reply is received */
+ if (binder_wait_for_transaction(b_ctx.fd, readbuf, sizeof(readbuf)) == NULL)
+ ret = 0;
+
+ /*
+ * We don't close the received dmabuf here so that the parent can
+ * inspect the cgroup gpu memory charges to verify the charge transfer
+ * completed successfully.
+ */
+err:
+ close_binder(&b_ctx);
+ fprintf(stderr, "Child done\n");
+ return ret;
+}
+
+TEST(gpu_cgroup_dmabuf_transfer)
+{
+ static const char * const GPUMEM_FILENAME = "gpu.memory.current";
+ static const size_t ONE_MiB = 1024 * 1024;
+
+ int ret, dmabuf_fd;
+ uint32_t readbuf[32];
+ long memsize;
+ pid_t child_pid;
+ struct binderfs_ctx bfs_ctx;
+ struct binder_ctx b_ctx;
+ struct cgroup_ctx cg_ctx;
+ struct binder_transaction_data *tr;
+ struct flat_binder_object *fbo;
+ struct binder_buffer_object *bbo;
+
+ bfs_ctx = create_binderfs("testbinder");
+ if (bfs_ctx.name == NULL)
+ ksft_exit_skip("The Android binderfs filesystem is not available\n");
+
+ cg_ctx = create_cgroups(_metadata);
+ if (cg_ctx.root == NULL) {
+ destroy_binderfs(&bfs_ctx);
+ ksft_exit_skip("cgroup v2 isn't mounted\n");
+ }
+
+ ASSERT_EQ(cg_enter_current(cg_ctx.source), 0) {
+ TH_LOG("Could not move parent to cgroup: %s", cg_ctx.source);
+ goto err_cg;
+ }
+
+ dmabuf_fd = alloc_dmabuf_from_system_heap(_metadata, ONE_MiB);
+ ASSERT_GE(dmabuf_fd, 0) {
+ goto err_cg;
+ }
+ TH_LOG("Allocated dmabuf");
+
+ memsize = cg_read_key_long(cg_ctx.source, GPUMEM_FILENAME, "system");
+ ASSERT_EQ(memsize, ONE_MiB) {
+ TH_LOG("GPU memory used after allocation: %ld but it should be %lu",
+ memsize, (unsigned long)ONE_MiB);
+ goto err_dmabuf;
+ }
+
+ b_ctx = open_binder(&bfs_ctx);
+ ASSERT_GE(b_ctx.fd, 0) {
+ TH_LOG("Parent unable to open binder");
+ goto err_dmabuf;
+ }
+ TH_LOG("Opened binder at %s/%s", bfs_ctx.mountpoint, bfs_ctx.name);
+
+ ASSERT_EQ(become_binder_context_manager(b_ctx.fd), 0) {
+ TH_LOG("Cannot become context manager: %s", strerror(errno));
+ goto err_binder;
+ }
+
+ child_pid = cg_run_nowait(cg_ctx.dest, child_request_dmabuf_transfer, &bfs_ctx);
+ ASSERT_GT(child_pid, 0) {
+ TH_LOG("Error forking: %s", strerror(errno));
+ goto err_binder;
+ }
+
+ tr = binder_wait_for_transaction(b_ctx.fd, readbuf, sizeof(readbuf));
+ ASSERT_NE(tr, NULL) {
+ TH_LOG("Error receiving transaction request from child");
+ goto err_child;
+ }
+ fbo = (struct flat_binder_object *)tr->data.ptr.buffer;
+ ASSERT_EQ(fbo->hdr.type, BINDER_TYPE_PTR) {
+ TH_LOG("Did not receive a buffer object from child");
+ goto err_child;
+ }
+ bbo = (struct binder_buffer_object *)fbo;
+ ASSERT_EQ(bbo->length, 0) {
+ TH_LOG("Did not receive an empty buffer object from child");
+ goto err_child;
+ }
+
+ TH_LOG("Received transaction from child");
+ send_dmabuf_reply(b_ctx.fd, tr, dmabuf_fd);
+
+ ASSERT_EQ(cg_read_key_long(cg_ctx.dest, GPUMEM_FILENAME, "system"), ONE_MiB) {
+ TH_LOG("Destination cgroup does not have system charge!");
+ goto err_child;
+ }
+ ASSERT_EQ(cg_read_key_long(cg_ctx.source, GPUMEM_FILENAME, "system"), 0) {
+ TH_LOG("Source cgroup still has system charge!");
+ goto err_child;
+ }
+ TH_LOG("Charge transfer succeeded!");
+
+err_child:
+ waitpid(child_pid, &ret, 0);
+ if (WIFEXITED(ret))
+ TH_LOG("Child %d terminated with %d", child_pid, WEXITSTATUS(ret));
+ else
+ TH_LOG("Child terminated abnormally");
+err_binder:
+ close_binder(&b_ctx);
+err_dmabuf:
+ close(dmabuf_fd);
+err_cg:
+ destroy_cgroups(_metadata, &cg_ctx);
+ destroy_binderfs(&bfs_ctx);
+}
+
+TEST_HARNESS_MAIN
--
2.35.1.616.g0bdcbb4464-goog
The kernel interface should use types that the kernel defines instead of
pid_t and uid_t, whose definitions are owned by libc. This fixes the header
so that it can be included without first including sys/types.h.
Signed-off-by: T.J. Mercier <[email protected]>
---
include/uapi/linux/android/binder.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/include/uapi/linux/android/binder.h b/include/uapi/linux/android/binder.h
index 169fd5069a1a..aa28454dbca3 100644
--- a/include/uapi/linux/android/binder.h
+++ b/include/uapi/linux/android/binder.h
@@ -289,8 +289,8 @@ struct binder_transaction_data {
/* General information about the transaction. */
__u32 flags;
- pid_t sender_pid;
- uid_t sender_euid;
+ __kernel_pid_t sender_pid;
+ __kernel_uid_t sender_euid;
binder_size_t data_size; /* number of bytes of data */
binder_size_t offsets_size; /* number of bytes of offsets */
--
2.35.1.616.g0bdcbb4464-goog
From: Hridya Valsaraju <[email protected]>
This patch introduces a buffer flag BINDER_BUFFER_FLAG_SENDER_NO_NEED
that a process sending an fd array to another process over binder IPC
can set to relinquish ownership of the fds being sent for memory
accounting purposes. If the flag is found to be set during the fd array
translation and the fd is for a DMA-BUF, the buffer is uncharged from
the sender's cgroup and charged to the receiving process's cgroup
instead.
It is up to the sending process to ensure that it closes the fds
regardless of whether the transfer failed or succeeded.
Most graphics shared memory allocations in Android are done by the
graphics allocator HAL process. On requests from clients, the HAL process
allocates memory and sends the fds to the clients over binder IPC.
The graphics allocator HAL will not retain any references to the
buffers. When the HAL sets the BINDER_BUFFER_FLAG_SENDER_NO_NEED for fd
arrays holding DMA-BUF fds, the gpu cgroup controller will be able to
correctly charge the buffers to the client processes instead of the
graphics allocator HAL.
Since this is a new feature exposed to userspace, the kernel and userspace
must be compatible for the transfer accounting to work. In all cases the
allocation and transport of DMA buffers via binder will succeed, but the
charge is transferred only when both the kernel supports this feature and
userspace makes use of it. The possible scenarios are detailed below:
1. new kernel + old userspace
The kernel supports the feature but userspace does not use it. The old
userspace won't mount the new cgroup controller, accounting is not
performed, charge is not transferred.
2. old kernel + new userspace
The new cgroup controller is not supported by the kernel, accounting is
not performed, charge is not transferred.
3. old kernel + old userspace
Same as #2
4. new kernel + new userspace
Cgroup is mounted, feature is supported and used.
Signed-off-by: Hridya Valsaraju <[email protected]>
Signed-off-by: T.J. Mercier <[email protected]>
---
v3 changes
Remove android from title per Todd Kjos.
Use more common dual author commit message format per John Stultz.
Include details on behavior for all combinations of kernel/userspace
versions in changelog (thanks Suren Baghdasaryan) per Greg Kroah-Hartman.
v2 changes
Move dma-buf cgroup charge transfer from a dma_buf_op defined by every
heap to a single dma-buf function for all heaps per Daniel Vetter and
Christian König.
---
drivers/android/binder.c | 26 ++++++++++++++++++++++++++
include/uapi/linux/android/binder.h | 1 +
2 files changed, 27 insertions(+)
diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index 8351c5638880..f50d88ded188 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -42,6 +42,7 @@
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#include <linux/dma-buf.h>
#include <linux/fdtable.h>
#include <linux/file.h>
#include <linux/freezer.h>
@@ -2482,8 +2483,10 @@ static int binder_translate_fd_array(struct list_head *pf_head,
{
binder_size_t fdi, fd_buf_size;
binder_size_t fda_offset;
+ bool transfer_gpu_charge = false;
const void __user *sender_ufda_base;
struct binder_proc *proc = thread->proc;
+ struct binder_proc *target_proc = t->to_proc;
int ret;
fd_buf_size = sizeof(u32) * fda->num_fds;
@@ -2521,8 +2524,15 @@ static int binder_translate_fd_array(struct list_head *pf_head,
if (ret)
return ret;
+ if (IS_ENABLED(CONFIG_CGROUP_GPU) &&
+ parent->flags & BINDER_BUFFER_FLAG_SENDER_NO_NEED)
+ transfer_gpu_charge = true;
+
for (fdi = 0; fdi < fda->num_fds; fdi++) {
u32 fd;
+ struct dma_buf *dmabuf;
+ struct gpucg *gpucg;
+
binder_size_t offset = fda_offset + fdi * sizeof(fd);
binder_size_t sender_uoffset = fdi * sizeof(fd);
@@ -2532,6 +2542,22 @@ static int binder_translate_fd_array(struct list_head *pf_head,
in_reply_to);
if (ret)
return ret > 0 ? -EINVAL : ret;
+
+ if (!transfer_gpu_charge)
+ continue;
+
+ dmabuf = dma_buf_get(fd);
+ if (IS_ERR(dmabuf))
+ continue;
+
+ gpucg = gpucg_get(target_proc->tsk);
+ ret = dma_buf_charge_transfer(dmabuf, gpucg);
+ if (ret) {
+ pr_warn("%d:%d Unable to transfer DMA-BUF fd charge to %d\n",
+ proc->pid, thread->pid, target_proc->pid);
+ gpucg_put(gpucg);
+ }
+ dma_buf_put(dmabuf);
}
return 0;
}
diff --git a/include/uapi/linux/android/binder.h b/include/uapi/linux/android/binder.h
index 3246f2c74696..169fd5069a1a 100644
--- a/include/uapi/linux/android/binder.h
+++ b/include/uapi/linux/android/binder.h
@@ -137,6 +137,7 @@ struct binder_buffer_object {
enum {
BINDER_BUFFER_FLAG_HAS_PARENT = 0x01,
+ BINDER_BUFFER_FLAG_SENDER_NO_NEED = 0x02,
};
/* struct binder_fd_array_object - object describing an array of fds in a buffer
--
2.35.1.616.g0bdcbb4464-goog
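The binder change above only migrates a buffer's charge when the sender marks the fd array with BINDER_BUFFER_FLAG_SENDER_NO_NEED. The decision logic can be modeled as a small userspace sketch; this is not kernel code, and all `model_*` names are invented for illustration:

```c
/* Userspace model of the binder charge-transfer decision. The flag value
 * mirrors the UAPI addition in this patch; everything else is invented. */
#include <assert.h>
#include <stdint.h>

#define BINDER_BUFFER_FLAG_SENDER_NO_NEED 0x02u

struct model_cgroup {
	uint64_t charged_pages; /* stands in for the gpucg page counter */
};

/* Move the charge for nr_pages from src to dst, as the patch does when
 * the sender declares it no longer needs the buffer. Returns 1 if the
 * charge migrated, 0 if no transfer was requested, -1 on underflow. */
static int model_transfer(struct model_cgroup *src, struct model_cgroup *dst,
			  uint32_t buffer_flags, uint64_t nr_pages)
{
	if (!(buffer_flags & BINDER_BUFFER_FLAG_SENDER_NO_NEED))
		return 0;	/* no transfer requested; charge stays put */
	if (src->charged_pages < nr_pages)
		return -1;	/* less charged than requested to move */
	src->charged_pages -= nr_pages;
	dst->charged_pages += nr_pages;
	return 1;		/* charge migrated to the target */
}
```

In the kernel, the source counter is found via the dma_buf's current gpucg and the destination via gpucg_get() on the target task; the model only captures the flag-gated hand-off.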
From: Hridya Valsaraju <[email protected]>
All DMA heaps now register a new GPU cgroup device upon creation, and the
system_heap now exports buffers associated with its GPU cgroup device for
tracking purposes.
Signed-off-by: Hridya Valsaraju <[email protected]>
Signed-off-by: T.J. Mercier <[email protected]>
---
v3 changes
Use more common dual author commit message format per John Stultz.
v2 changes
Move dma-buf cgroup charge transfer from a dma_buf_op defined by every
heap to a single dma-buf function for all heaps per Daniel Vetter and
Christian König.
---
drivers/dma-buf/dma-heap.c | 27 +++++++++++++++++++++++++++
drivers/dma-buf/heaps/system_heap.c | 3 +++
include/linux/dma-heap.h | 11 +++++++++++
3 files changed, 41 insertions(+)
diff --git a/drivers/dma-buf/dma-heap.c b/drivers/dma-buf/dma-heap.c
index 8f5848aa144f..885072427775 100644
--- a/drivers/dma-buf/dma-heap.c
+++ b/drivers/dma-buf/dma-heap.c
@@ -7,6 +7,7 @@
*/
#include <linux/cdev.h>
+#include <linux/cgroup_gpu.h>
#include <linux/debugfs.h>
#include <linux/device.h>
#include <linux/dma-buf.h>
@@ -31,6 +32,7 @@
* @heap_devt heap device node
* @list list head connecting to list of heaps
* @heap_cdev heap char device
+ * @gpucg_dev gpu cgroup device for memory accounting
*
* Represents a heap of memory from which buffers can be made.
*/
@@ -41,6 +43,9 @@ struct dma_heap {
dev_t heap_devt;
struct list_head list;
struct cdev heap_cdev;
+#ifdef CONFIG_CGROUP_GPU
+ struct gpucg_device gpucg_dev;
+#endif
};
static LIST_HEAD(heap_list);
@@ -216,6 +221,26 @@ const char *dma_heap_get_name(struct dma_heap *heap)
return heap->name;
}
+#ifdef CONFIG_CGROUP_GPU
+/**
+ * dma_heap_get_gpucg_dev() - get struct gpucg_device for the heap.
+ * @heap: DMA-Heap to get the gpucg_device struct for.
+ *
+ * Returns:
+ * The gpucg_device struct for the heap. NULL if the GPU cgroup controller is
+ * not enabled.
+ */
+struct gpucg_device *dma_heap_get_gpucg_dev(struct dma_heap *heap)
+{
+ return &heap->gpucg_dev;
+}
+#else /* CONFIG_CGROUP_GPU */
+struct gpucg_device *dma_heap_get_gpucg_dev(struct dma_heap *heap)
+{
+ return NULL;
+}
+#endif /* CONFIG_CGROUP_GPU */
+
struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
{
struct dma_heap *heap, *h, *err_ret;
@@ -288,6 +313,8 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
list_add(&heap->list, &heap_list);
mutex_unlock(&heap_list_lock);
+ gpucg_register_device(dma_heap_get_gpucg_dev(heap), exp_info->name);
+
return heap;
err2:
diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
index ab7fd896d2c4..752a05c3cfe2 100644
--- a/drivers/dma-buf/heaps/system_heap.c
+++ b/drivers/dma-buf/heaps/system_heap.c
@@ -395,6 +395,9 @@ static struct dma_buf *system_heap_allocate(struct dma_heap *heap,
exp_info.ops = &system_heap_buf_ops;
exp_info.size = buffer->len;
exp_info.flags = fd_flags;
+#ifdef CONFIG_CGROUP_GPU
+ exp_info.gpucg_dev = dma_heap_get_gpucg_dev(heap);
+#endif
exp_info.priv = buffer;
dmabuf = dma_buf_export(&exp_info);
if (IS_ERR(dmabuf)) {
diff --git a/include/linux/dma-heap.h b/include/linux/dma-heap.h
index 0c05561cad6e..e447a61d054e 100644
--- a/include/linux/dma-heap.h
+++ b/include/linux/dma-heap.h
@@ -10,6 +10,7 @@
#define _DMA_HEAPS_H
#include <linux/cdev.h>
+#include <linux/cgroup_gpu.h>
#include <linux/types.h>
struct dma_heap;
@@ -59,6 +60,16 @@ void *dma_heap_get_drvdata(struct dma_heap *heap);
*/
const char *dma_heap_get_name(struct dma_heap *heap);
+/**
+ * dma_heap_get_gpucg_dev() - get a pointer to the struct gpucg_device for the
+ * heap.
+ * @heap: DMA-Heap to retrieve gpucg_device for.
+ *
+ * Returns:
+ * The gpucg_device struct for the heap.
+ */
+struct gpucg_device *dma_heap_get_gpucg_dev(struct dma_heap *heap);
+
/**
* dma_heap_add - adds a heap to dmabuf heaps
* @exp_info: information needed to register this heap
--
2.35.1.616.g0bdcbb4464-goog
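The dma-heap patch embeds the gpucg_device in struct dma_heap only when CONFIG_CGROUP_GPU is set, and the accessor collapses to NULL otherwise so that callers need no #ifdefs of their own. A minimal userspace sketch of that pattern, with MODEL_CGROUP_GPU standing in for the Kconfig option and all names hypothetical:

```c
/* Sketch of the dma_heap_get_gpucg_dev() config-toggle pattern.
 * Not kernel code; MODEL_CGROUP_GPU models CONFIG_CGROUP_GPU. */
#include <assert.h>
#include <stddef.h>

#define MODEL_CGROUP_GPU 1

struct model_gpucg_device { const char *name; };

struct model_heap {
#if MODEL_CGROUP_GPU
	/* embedded only when accounting is compiled in */
	struct model_gpucg_device gpucg_dev;
#endif
};

static struct model_gpucg_device *model_heap_get_dev(struct model_heap *heap)
{
#if MODEL_CGROUP_GPU
	return &heap->gpucg_dev;
#else
	return NULL;	/* callers treat NULL as "accounting disabled" */
#endif
}
```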
From: Hridya Valsaraju <[email protected]>
The cgroup controller provides accounting for GPU and GPU-related
memory allocations. The memory being accounted can be device memory or
memory allocated from pools dedicated to serve GPU-related tasks.
This patch adds APIs to:
- allow a device to register for memory accounting using the GPU cgroup
  controller.
- charge and uncharge allocated memory to a cgroup.
When enabled, the cgroup controller exposes, for each cgroup, the amount of
memory allocated by each device registered for GPU cgroup memory accounting.
The API/UAPI can be extended to set per-device/total allocation limits
in the future.
The cgroup controller has been named following the discussion in [1].
[1]: https://lore.kernel.org/amd-gfx/YCJp%2F%[email protected]/
Signed-off-by: Hridya Valsaraju <[email protected]>
Signed-off-by: T.J. Mercier <[email protected]>
---
v3 changes
Use more common dual author commit message format per John Stultz.
v2 changes
Fix incorrect Kconfig help section indentation per Randy Dunlap.
---
include/linux/cgroup_gpu.h | 127 ++++++++++++++
include/linux/cgroup_subsys.h | 4 +
init/Kconfig | 7 +
kernel/cgroup/Makefile | 1 +
kernel/cgroup/gpu.c | 304 ++++++++++++++++++++++++++++++++++
5 files changed, 443 insertions(+)
create mode 100644 include/linux/cgroup_gpu.h
create mode 100644 kernel/cgroup/gpu.c
diff --git a/include/linux/cgroup_gpu.h b/include/linux/cgroup_gpu.h
new file mode 100644
index 000000000000..c5bc2b882783
--- /dev/null
+++ b/include/linux/cgroup_gpu.h
@@ -0,0 +1,127 @@
+/* SPDX-License-Identifier: MIT
+ * Copyright 2019 Advanced Micro Devices, Inc.
+ * Copyright (C) 2022 Google LLC.
+ */
+#ifndef _CGROUP_GPU_H
+#define _CGROUP_GPU_H
+
+#include <linux/cgroup.h>
+#include <linux/page_counter.h>
+
+#ifdef CONFIG_CGROUP_GPU
+/* The GPU cgroup controller data structure */
+struct gpucg {
+ struct cgroup_subsys_state css;
+
+ /* list of all resource pools that belong to this cgroup */
+ struct list_head rpools;
+};
+
+struct gpucg_device {
+ /*
+ * list of various resource pools in various cgroups that the device is
+ * part of.
+ */
+ struct list_head rpools;
+
+ /* list of all devices registered for GPU cgroup accounting */
+ struct list_head dev_node;
+
+ /*
+ * pointer to string literal to be used as identifier for accounting and
+ * limit setting
+ */
+ const char *name;
+};
+
+/**
+ * css_to_gpucg - get the corresponding gpucg ref from a cgroup_subsys_state
+ * @css: the target cgroup_subsys_state
+ *
+ * Returns: gpu cgroup that contains the @css
+ */
+static inline struct gpucg *css_to_gpucg(struct cgroup_subsys_state *css)
+{
+ return css ? container_of(css, struct gpucg, css) : NULL;
+}
+
+/**
+ * gpucg_get - get the gpucg reference that a task belongs to
+ * @task: the target task
+ *
+ * This increases the reference count of the css that the @task belongs to.
+ *
+ * Returns: reference to the gpu cgroup the task belongs to.
+ */
+static inline struct gpucg *gpucg_get(struct task_struct *task)
+{
+ if (!cgroup_subsys_enabled(gpu_cgrp_subsys))
+ return NULL;
+ return css_to_gpucg(task_get_css(task, gpu_cgrp_id));
+}
+
+/**
+ * gpucg_put - put a gpucg reference
+ * @gpucg: the target gpucg
+ *
+ * Put a reference obtained via gpucg_get
+ */
+static inline void gpucg_put(struct gpucg *gpucg)
+{
+ if (gpucg)
+ css_put(&gpucg->css);
+}
+
+/**
+ * gpucg_parent - find the parent of a gpu cgroup
+ * @cg: the target gpucg
+ *
+ * This does not increase the reference count of the parent cgroup
+ *
+ * Returns: parent gpu cgroup of @cg
+ */
+static inline struct gpucg *gpucg_parent(struct gpucg *cg)
+{
+ return css_to_gpucg(cg->css.parent);
+}
+
+int gpucg_try_charge(struct gpucg *gpucg, struct gpucg_device *device, u64 usage);
+void gpucg_uncharge(struct gpucg *gpucg, struct gpucg_device *device, u64 usage);
+void gpucg_register_device(struct gpucg_device *gpucg_dev, const char *name);
+#else /* CONFIG_CGROUP_GPU */
+
+struct gpucg;
+struct gpucg_device;
+
+static inline struct gpucg *css_to_gpucg(struct cgroup_subsys_state *css)
+{
+ return NULL;
+}
+
+static inline struct gpucg *gpucg_get(struct task_struct *task)
+{
+ return NULL;
+}
+
+static inline void gpucg_put(struct gpucg *gpucg) {}
+
+static inline struct gpucg *gpucg_parent(struct gpucg *cg)
+{
+ return NULL;
+}
+
+static inline int gpucg_try_charge(struct gpucg *gpucg,
+ struct gpucg_device *device,
+ u64 usage)
+{
+ return 0;
+}
+
+static inline void gpucg_uncharge(struct gpucg *gpucg,
+ struct gpucg_device *device,
+ u64 usage) {}
+
+static inline void gpucg_register_device(struct gpucg_device *gpucg_dev,
+ const char *name) {}
+#endif /* CONFIG_CGROUP_GPU */
+#endif /* _CGROUP_GPU_H */
diff --git a/include/linux/cgroup_subsys.h b/include/linux/cgroup_subsys.h
index 445235487230..46a2a7b93c41 100644
--- a/include/linux/cgroup_subsys.h
+++ b/include/linux/cgroup_subsys.h
@@ -65,6 +65,10 @@ SUBSYS(rdma)
SUBSYS(misc)
#endif
+#if IS_ENABLED(CONFIG_CGROUP_GPU)
+SUBSYS(gpu)
+#endif
+
/*
* The following subsystems are not supported on the default hierarchy.
*/
diff --git a/init/Kconfig b/init/Kconfig
index e9119bf54b1f..43568472930a 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -980,6 +980,13 @@ config BLK_CGROUP
See Documentation/admin-guide/cgroup-v1/blkio-controller.rst for more information.
+config CGROUP_GPU
+ bool "gpu cgroup controller (EXPERIMENTAL)"
+ select PAGE_COUNTER
+ help
+ Provides accounting and limit setting for memory allocations by the GPU and
+ GPU-related subsystems.
+
config CGROUP_WRITEBACK
bool
depends on MEMCG && BLK_CGROUP
diff --git a/kernel/cgroup/Makefile b/kernel/cgroup/Makefile
index 12f8457ad1f9..be95a5a532fc 100644
--- a/kernel/cgroup/Makefile
+++ b/kernel/cgroup/Makefile
@@ -7,3 +7,4 @@ obj-$(CONFIG_CGROUP_RDMA) += rdma.o
obj-$(CONFIG_CPUSETS) += cpuset.o
obj-$(CONFIG_CGROUP_MISC) += misc.o
obj-$(CONFIG_CGROUP_DEBUG) += debug.o
+obj-$(CONFIG_CGROUP_GPU) += gpu.o
diff --git a/kernel/cgroup/gpu.c b/kernel/cgroup/gpu.c
new file mode 100644
index 000000000000..3e9bfb45c6af
--- /dev/null
+++ b/kernel/cgroup/gpu.c
@@ -0,0 +1,304 @@
+// SPDX-License-Identifier: MIT
+// Copyright 2019 Advanced Micro Devices, Inc.
+// Copyright (C) 2022 Google LLC.
+
+#include <linux/cgroup.h>
+#include <linux/cgroup_gpu.h>
+#include <linux/mm.h>
+#include <linux/page_counter.h>
+#include <linux/seq_file.h>
+#include <linux/slab.h>
+
+static struct gpucg *root_gpucg __read_mostly;
+
+/*
+ * Protects list of resource pools maintained on per cgroup basis
+ * and list of devices registered for memory accounting using the GPU cgroup
+ * controller.
+ */
+static DEFINE_MUTEX(gpucg_mutex);
+static LIST_HEAD(gpucg_devices);
+
+struct gpucg_resource_pool {
+ /* The device whose resource usage is tracked by this resource pool */
+ struct gpucg_device *device;
+
+ /* list of all resource pools for the cgroup */
+ struct list_head cg_node;
+
+ /*
+ * list maintained by the gpucg_device to keep track of its
+ * resource pools
+ */
+ struct list_head dev_node;
+
+ /* tracks memory usage of the resource pool */
+ struct page_counter total;
+};
+
+static void free_cg_rpool_locked(struct gpucg_resource_pool *rpool)
+{
+ lockdep_assert_held(&gpucg_mutex);
+
+ list_del(&rpool->cg_node);
+ list_del(&rpool->dev_node);
+ kfree(rpool);
+}
+
+static void gpucg_css_free(struct cgroup_subsys_state *css)
+{
+ struct gpucg_resource_pool *rpool, *tmp;
+ struct gpucg *gpucg = css_to_gpucg(css);
+
+ // delete all resource pools
+ mutex_lock(&gpucg_mutex);
+ list_for_each_entry_safe(rpool, tmp, &gpucg->rpools, cg_node)
+ free_cg_rpool_locked(rpool);
+ mutex_unlock(&gpucg_mutex);
+
+ kfree(gpucg);
+}
+
+static struct cgroup_subsys_state *
+gpucg_css_alloc(struct cgroup_subsys_state *parent_css)
+{
+ struct gpucg *gpucg, *parent;
+
+ gpucg = kzalloc(sizeof(struct gpucg), GFP_KERNEL);
+ if (!gpucg)
+ return ERR_PTR(-ENOMEM);
+
+ parent = css_to_gpucg(parent_css);
+ if (!parent)
+ root_gpucg = gpucg;
+
+ INIT_LIST_HEAD(&gpucg->rpools);
+
+ return &gpucg->css;
+}
+
+static struct gpucg_resource_pool *find_cg_rpool_locked(
+ struct gpucg *cg,
+ struct gpucg_device *device)
+{
+ struct gpucg_resource_pool *pool;
+
+ lockdep_assert_held(&gpucg_mutex);
+
+ list_for_each_entry(pool, &cg->rpools, cg_node)
+ if (pool->device == device)
+ return pool;
+
+ return NULL;
+}
+
+static struct gpucg_resource_pool *init_cg_rpool(struct gpucg *cg,
+ struct gpucg_device *device)
+{
+ struct gpucg_resource_pool *rpool = kzalloc(sizeof(*rpool),
+ GFP_KERNEL);
+ if (!rpool)
+ return ERR_PTR(-ENOMEM);
+
+ rpool->device = device;
+
+ page_counter_init(&rpool->total, NULL);
+ INIT_LIST_HEAD(&rpool->cg_node);
+ INIT_LIST_HEAD(&rpool->dev_node);
+ list_add_tail(&rpool->cg_node, &cg->rpools);
+ list_add_tail(&rpool->dev_node, &device->rpools);
+
+ return rpool;
+}
+
+/**
+ * get_cg_rpool_locked - find the resource pool for the specified device and
+ * specified cgroup. If the resource pool does not exist for the cg, it is
+ * created in a hierarchical manner in the cgroup and its ancestor cgroups that
+ * do not already have a resource pool entry for the device.
+ *
+ * @cg: The cgroup to find the resource pool for.
+ * @device: The device associated with the returned resource pool.
+ *
+ * Return: return resource pool entry corresponding to the specified device in
+ * the specified cgroup (hierarchically creating them if not existing already).
+ *
+ */
+static struct gpucg_resource_pool *
+get_cg_rpool_locked(struct gpucg *cg, struct gpucg_device *device)
+{
+ struct gpucg *parent_cg, *p, *stop_cg;
+ struct gpucg_resource_pool *rpool, *tmp_rpool;
+ struct gpucg_resource_pool *parent_rpool = NULL, *leaf_rpool = NULL;
+
+ rpool = find_cg_rpool_locked(cg, device);
+ if (rpool)
+ return rpool;
+
+ stop_cg = cg;
+ do {
+ rpool = init_cg_rpool(stop_cg, device);
+ if (IS_ERR(rpool))
+ goto err;
+
+ if (!leaf_rpool)
+ leaf_rpool = rpool;
+
+ stop_cg = gpucg_parent(stop_cg);
+ if (!stop_cg)
+ break;
+
+ rpool = find_cg_rpool_locked(stop_cg, device);
+ } while (!rpool);
+
+ /*
+ * Re-initialize page counters of all rpools created in this invocation
+ * to enable hierarchical charging.
+ * stop_cg is the first ancestor cg who already had a resource pool for
+ * the device. It can also be NULL if no ancestors had a pre-existing
+ * resource pool for the device before this invocation.
+ */
+ rpool = leaf_rpool;
+ for (p = cg; p != stop_cg; p = parent_cg) {
+ parent_cg = gpucg_parent(p);
+ if (!parent_cg)
+ break;
+ parent_rpool = find_cg_rpool_locked(parent_cg, device);
+ page_counter_init(&rpool->total, &parent_rpool->total);
+
+ rpool = parent_rpool;
+ }
+
+ return leaf_rpool;
+err:
+ for (p = cg; p != stop_cg; p = gpucg_parent(p)) {
+ tmp_rpool = find_cg_rpool_locked(p, device);
+ free_cg_rpool_locked(tmp_rpool);
+ }
+ return rpool;
+}
+
+/**
+ * gpucg_try_charge - charge memory to the specified gpucg and gpucg_device.
+ * Caller must hold a reference to @gpucg obtained through gpucg_get(). The size
+ * of the memory is rounded up to be a multiple of the page size.
+ *
+ * @gpucg: The gpu cgroup to charge the memory to.
+ * @device: The device to charge the memory to.
+ * @usage: size of memory to charge in bytes.
+ *
+ * Return: returns 0 if the charging is successful and otherwise returns an
+ * error code.
+ */
+int gpucg_try_charge(struct gpucg *gpucg, struct gpucg_device *device, u64 usage)
+{
+ struct page_counter *counter;
+ u64 nr_pages;
+ struct gpucg_resource_pool *rp;
+ int ret = 0;
+
+ mutex_lock(&gpucg_mutex);
+ rp = get_cg_rpool_locked(gpucg, device);
+ /*
+ * gpucg_mutex can be unlocked here; rp will stay valid until gpucg is
+ * freed because the caller holds a reference to the gpucg.
+ */
+ mutex_unlock(&gpucg_mutex);
+
+ if (IS_ERR(rp))
+ return PTR_ERR(rp);
+
+ nr_pages = PAGE_ALIGN(usage) >> PAGE_SHIFT;
+ if (page_counter_try_charge(&rp->total, nr_pages, &counter))
+ css_get_many(&gpucg->css, nr_pages);
+ else
+ ret = -ENOMEM;
+
+ return ret;
+}
+
+/**
+ * gpucg_uncharge - uncharge memory from the specified gpucg and gpucg_device.
+ * The caller must hold a reference to @gpucg obtained through gpucg_get().
+ *
+ * @gpucg: The gpu cgroup to uncharge the memory from.
+ * @device: The device to uncharge the memory from.
+ * @usage: size of memory to uncharge in bytes.
+ */
+void gpucg_uncharge(struct gpucg *gpucg, struct gpucg_device *device, u64 usage)
+{
+ u64 nr_pages;
+ struct gpucg_resource_pool *rp;
+
+ mutex_lock(&gpucg_mutex);
+ rp = find_cg_rpool_locked(gpucg, device);
+ /*
+ * gpucg_mutex can be unlocked here; rp will stay valid until gpucg is
+ * freed while there are active refs on gpucg.
+ */
+ mutex_unlock(&gpucg_mutex);
+
+ if (unlikely(!rp)) {
+ pr_err("Resource pool not found, incorrect charge/uncharge ordering?\n");
+ return;
+ }
+
+ nr_pages = PAGE_ALIGN(usage) >> PAGE_SHIFT;
+ page_counter_uncharge(&rp->total, nr_pages);
+ css_put_many(&gpucg->css, nr_pages);
+}
+
+/**
+ * gpucg_register_device - Registers a device for memory accounting using the
+ * GPU cgroup controller.
+ *
+ * @device: The device to register for memory accounting.
+ * @name: Pointer to a string literal to denote the name of the device.
+ *
+ * Both @device and @name must remain valid.
+ */
+void gpucg_register_device(struct gpucg_device *device, const char *name)
+{
+ if (!device)
+ return;
+
+ INIT_LIST_HEAD(&device->dev_node);
+ INIT_LIST_HEAD(&device->rpools);
+
+ mutex_lock(&gpucg_mutex);
+ list_add_tail(&device->dev_node, &gpucg_devices);
+ mutex_unlock(&gpucg_mutex);
+
+ device->name = name;
+}
+
+static int gpucg_resource_show(struct seq_file *sf, void *v)
+{
+ struct gpucg_resource_pool *rpool;
+ struct gpucg *cg = css_to_gpucg(seq_css(sf));
+
+ mutex_lock(&gpucg_mutex);
+ list_for_each_entry(rpool, &cg->rpools, cg_node) {
+ seq_printf(sf, "%s %lu\n", rpool->device->name,
+ page_counter_read(&rpool->total) * PAGE_SIZE);
+ }
+ mutex_unlock(&gpucg_mutex);
+
+ return 0;
+}
+
+static struct cftype files[] = {
+ {
+ .name = "memory.current",
+ .seq_show = gpucg_resource_show,
+ },
+ { } /* terminate */
+};
+
+struct cgroup_subsys gpu_cgrp_subsys = {
+ .css_alloc = gpucg_css_alloc,
+ .css_free = gpucg_css_free,
+ .early_init = false,
+ .legacy_cftypes = files,
+ .dfl_cftypes = files,
+};
--
2.35.1.616.g0bdcbb4464-goog
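gpucg_try_charge() relies on page_counter's hierarchical semantics: a charge against a leaf resource pool must also fit within every ancestor's counter, and a failure part-way up rolls back the charges already applied. A simplified userspace model of that behavior, assuming invented `model_*` names and a per-counter max in place of page_counter's limit:

```c
/* Userspace model of hierarchical charging a la page_counter_try_charge().
 * Not kernel code; limits and names are illustrative only. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct model_counter {
	uint64_t usage;
	uint64_t max;
	struct model_counter *parent;
};

/* Charge nr_pages to c and every ancestor; on hitting a limit, undo the
 * charges applied so far and fail, as page_counter_try_charge() does. */
static int model_try_charge(struct model_counter *c, uint64_t nr_pages)
{
	struct model_counter *i, *fail = NULL;

	for (i = c; i; i = i->parent) {
		if (i->usage + nr_pages > i->max) {
			fail = i;
			break;
		}
		i->usage += nr_pages;
	}
	if (!fail)
		return 0;
	/* roll back everything below the counter that refused the charge */
	for (i = c; i != fail; i = i->parent)
		i->usage -= nr_pages;
	return -1;
}

static void model_uncharge(struct model_counter *c, uint64_t nr_pages)
{
	struct model_counter *i;

	for (i = c; i; i = i->parent)
		i->usage -= nr_pages;
}
```

The re-initialization loop in get_cg_rpool_locked() exists precisely to wire each new rpool's counter to its parent's so this hierarchical walk happens inside page_counter itself.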
From: Hridya Valsaraju <[email protected]>
This patch adds a proposal for a new GPU cgroup controller for
accounting/limiting GPU and GPU-related memory allocations.
The proposed controller is based on the DRM cgroup controller[1] and
follows the design of the RDMA cgroup controller.
The new cgroup controller would:
* Allow setting per-cgroup limits on the total size of buffers charged
to it.
* Allow setting per-device limits on the total size of buffers
allocated by device within a cgroup.
* Expose a per-device/allocator breakdown of the buffers charged to a
cgroup.
The prototype in the following patches is only for memory accounting
using the GPU cgroup controller and does not implement limit setting.
[1]: https://lore.kernel.org/amd-gfx/[email protected]/
Signed-off-by: Hridya Valsaraju <[email protected]>
Signed-off-by: T.J. Mercier <[email protected]>
---
v3 changes
Remove Upstreaming Plan from gpu-cgroup.rst per John Stultz.
Use more common dual author commit message format per John Stultz.
---
Documentation/gpu/rfc/gpu-cgroup.rst | 183 +++++++++++++++++++++++++++
Documentation/gpu/rfc/index.rst | 4 +
2 files changed, 187 insertions(+)
create mode 100644 Documentation/gpu/rfc/gpu-cgroup.rst
diff --git a/Documentation/gpu/rfc/gpu-cgroup.rst b/Documentation/gpu/rfc/gpu-cgroup.rst
new file mode 100644
index 000000000000..5b40d5518a5e
--- /dev/null
+++ b/Documentation/gpu/rfc/gpu-cgroup.rst
@@ -0,0 +1,183 @@
+===================================
+GPU cgroup controller
+===================================
+
+Goals
+=====
+This document intends to outline a plan to create a cgroup v2 controller subsystem
+for the per-cgroup accounting of device and system memory allocated by the GPU
+and related subsystems.
+
+The new cgroup controller would:
+
+* Allow setting per-cgroup limits on the total size of buffers charged to it.
+
+* Allow setting per-device limits on the total size of buffers allocated by a
+ device/allocator within a cgroup.
+
+* Expose a per-device/allocator breakdown of the buffers charged to a cgroup.
+
+Alternatives Considered
+=======================
+
+The following alternatives were considered:
+
+The memory cgroup controller
+____________________________
+
+1. As was noted in [1], memory accounting provided by the GPU cgroup
+controller is not a good fit for integration into memcg due to the
+differences in how accounting is performed. The GPU cgroup controller
+implements a mechanism for allocator attribution of GPU and GPU-related
+memory by charging each buffer to the cgroup of the process on behalf of which
+the memory was allocated. The buffer stays charged to the cgroup until
+it is freed regardless of whether the process retains any references
+to it. On the other hand, the memory cgroup controller offers a more
+fine-grained charging and uncharging behavior depending on the kind of
+page being accounted.
+
+2. Memcg performs accounting in units of pages. In the DMA-BUF buffer sharing model,
+a process takes a reference to the entire buffer (hence keeping it alive) even if
+it is only accessing parts of it. Therefore, per-page memory tracking for DMA-BUF
+memory accounting would only introduce additional overhead without any benefits.
+
+[1]: https://patchwork.kernel.org/project/dri-devel/cover/[email protected]/#22624705
+
+Userspace service to keep track of buffer allocations and releases
+__________________________________________________________________
+
+1. There is no way for a userspace service to intercept all allocations and releases.
+2. In case the process gets killed or restarted, we lose all accounting so far.
+
+UAPI
+====
+When enabled, the new cgroup controller would create the following files in every cgroup.
+
+::
+
+ gpu.memory.current (R)
+ gpu.memory.max (R/W)
+
+gpu.memory.current is a read-only file and would contain per-device memory allocations
+in a key-value format where key is a string representing the device name
+and the value is the size of memory charged to the device in the cgroup in bytes.
+
+For example:
+
+::
+
+ cat /sys/kernel/fs/cgroup1/gpu.memory.current
+ dev1 4194304
+ dev2 4194304
+
+The string key for each device is set by the device driver when the device registers
+with the GPU cgroup controller to participate in resource accounting (see section
+'Design and Implementation' for more details).
+
+gpu.memory.max is a read/write file. It would show the current total
+size limits on memory usage for the cgroup and the limits on total memory usage
+for each allocator/device.
+
+Setting a total limit for a cgroup can be done as follows:
+
+::
+
 echo "total 41943040" > /sys/kernel/fs/cgroup1/gpu.memory.max
+
+Setting a total limit for a particular device/allocator can be done as follows:
+
+::
+
 echo "dev1 4194304" > /sys/kernel/fs/cgroup1/gpu.memory.max
+
+In this example, 'dev1' is the string key set by the device driver during
+registration.
+
+Design and Implementation
+=========================
+
+The cgroup controller would closely follow the design of the RDMA cgroup controller
+subsystem where each cgroup maintains a list of resource pools.
+Each resource pool contains a pointer to the device whose usage it tracks, a
+counter to track the current total, and the maximum limit set for the device.
+
+The code block below is a preliminary sketch of how the core kernel data
+structures and APIs might look.
+
+.. code-block:: c
+
+ /**
+ * The GPU cgroup controller data structure.
+ */
+ struct gpucg {
+ struct cgroup_subsys_state css;
+
+ /* list of all resource pools that belong to this cgroup */
+ struct list_head rpools;
+ };
+
+ struct gpucg_device {
+ /*
+ * list of various resource pools in various cgroups that the device is
+ * part of.
+ */
+ struct list_head rpools;
+
+ /* list of all devices registered for GPU cgroup accounting */
+ struct list_head dev_node;
+
+ /* name to be used as identifier for accounting and limit setting */
+ const char *name;
+ };
+
+ struct gpucg_resource_pool {
+ /* The device whose resource usage is tracked by this resource pool */
+ struct gpucg_device *device;
+
+ /* list of all resource pools for the cgroup */
+ struct list_head cg_node;
+
+ /*
+ * list maintained by the gpucg_device to keep track of its
+ * resource pools
+ */
+ struct list_head dev_node;
+
+ /* tracks memory usage of the resource pool */
+ struct page_counter total;
+ };
+
+ /**
+ * gpucg_register_device - Registers a device for memory accounting using the
+ * GPU cgroup controller.
+ *
+ * @device: The device to register for memory accounting. Must remain valid
+ * after registration.
+ * @name: Pointer to a string literal to denote the name of the device.
+ */
+ void gpucg_register_device(struct gpucg_device *gpucg_dev, const char *name);
+
+ /**
+ * gpucg_try_charge - charge memory to the specified gpucg and gpucg_device.
+ *
+ * @gpucg: The gpu cgroup to charge the memory to.
+ * @device: The device to charge the memory to.
+ * @usage: size of memory to charge in bytes.
+ *
+ * Return: returns 0 if the charging is successful and otherwise returns an
+ * error code.
+ */
+ int gpucg_try_charge(struct gpucg *gpucg, struct gpucg_device *device, u64 usage);
+
+ /**
+ * gpucg_uncharge - uncharge memory from the specified gpucg and gpucg_device.
+ *
+ * @gpucg: The gpu cgroup to uncharge the memory from.
+ * @device: The device to uncharge the memory from.
+ * @usage: size of memory to uncharge in bytes.
+ */
+ void gpucg_uncharge(struct gpucg *gpucg, struct gpucg_device *device, u64 usage);
+
+Future Work
+===========
+Additional GPU resources can be supported by adding new controller files.
diff --git a/Documentation/gpu/rfc/index.rst b/Documentation/gpu/rfc/index.rst
index 91e93a705230..0a9bcd94e95d 100644
--- a/Documentation/gpu/rfc/index.rst
+++ b/Documentation/gpu/rfc/index.rst
@@ -23,3 +23,7 @@ host such documentation:
.. toctree::
i915_scheduler.rst
+
+.. toctree::
+
+ gpu-cgroup.rst
--
2.35.1.616.g0bdcbb4464-goog
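The gpu.memory.current file described in the RFC uses a flat key-value format, one "device-name bytes" pair per line. A tool consuming it could parse each line along these lines; this is an illustrative userspace sketch, and the file path and device names are hypothetical:

```c
/* Illustrative parser for one line of the proposed gpu.memory.current
 * key-value format ("devname bytes"). Names and format are assumptions
 * based on the RFC text, not a settled UAPI. */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Parse one "name bytes" line into name/bytes.
 * Returns 1 on success, 0 on malformed input. */
static int parse_gpu_mem_line(const char *line, char *name, size_t name_sz,
			      uint64_t *bytes)
{
	char fmt[32];
	unsigned long long v;

	/* bound %s by the caller's buffer size to avoid overflow */
	snprintf(fmt, sizeof(fmt), "%%%zus %%llu", name_sz - 1);
	if (sscanf(line, fmt, name, &v) != 2)
		return 0;
	*bytes = v;
	return 1;
}
```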
This patch uses the GPU cgroup charge/uncharge APIs to charge buffers
allocated by any DMA-BUF exporter that exports a buffer with a GPU cgroup
device association.
By doing so, it becomes possible to track who allocated/exported a
DMA-BUF even after the allocating process drops all references to a
buffer.
Originally-by: Hridya Valsaraju <[email protected]>
Signed-off-by: T.J. Mercier <[email protected]>
---
v3 changes
Use more common dual author commit message format per John Stultz.
v2 changes
Move dma-buf cgroup charging/uncharging from a dma_buf_op defined by
every heap to a single dma-buf function for all heaps per Daniel Vetter and
Christian König.
---
drivers/dma-buf/dma-buf.c | 52 +++++++++++++++++++++++++++++++++++++++
include/linux/dma-buf.h | 20 +++++++++++++--
2 files changed, 70 insertions(+), 2 deletions(-)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index 602b12d7470d..83d0d1b91547 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -56,6 +56,50 @@ static char *dmabuffs_dname(struct dentry *dentry, char *buffer, int buflen)
dentry->d_name.name, ret > 0 ? name : "");
}
+#ifdef CONFIG_CGROUP_GPU
+static inline struct gpucg_device *
+exp_info_gpucg_dev(const struct dma_buf_export_info *exp_info)
+{
+ return exp_info->gpucg_dev;
+}
+
+static bool dmabuf_try_charge(struct dma_buf *dmabuf,
+ struct gpucg_device *gpucg_dev)
+{
+ dmabuf->gpucg = gpucg_get(current);
+ dmabuf->gpucg_dev = gpucg_dev;
+ if (gpucg_try_charge(dmabuf->gpucg, dmabuf->gpucg_dev, dmabuf->size)) {
+ gpucg_put(dmabuf->gpucg);
+ dmabuf->gpucg = NULL;
+ dmabuf->gpucg_dev = NULL;
+ return false;
+ }
+ return true;
+}
+
+static void dmabuf_uncharge(struct dma_buf *dmabuf)
+{
+ if (dmabuf->gpucg && dmabuf->gpucg_dev) {
+ gpucg_uncharge(dmabuf->gpucg, dmabuf->gpucg_dev, dmabuf->size);
+ gpucg_put(dmabuf->gpucg);
+ }
+}
+#else /* CONFIG_CGROUP_GPU */
+static inline struct gpucg_device *
+exp_info_gpucg_dev(const struct dma_buf_export_info *exp_info)
+{
+ return NULL;
+}
+
+static inline bool dmabuf_try_charge(struct dma_buf *dmabuf,
+ struct gpucg_device *gpucg_dev)
+{
+ return false;
+}
+
+static inline void dmabuf_uncharge(struct dma_buf *dmabuf) {}
+#endif /* CONFIG_CGROUP_GPU */
+
static void dma_buf_release(struct dentry *dentry)
{
struct dma_buf *dmabuf;
@@ -79,6 +123,8 @@ static void dma_buf_release(struct dentry *dentry)
if (dmabuf->resv == (struct dma_resv *)&dmabuf[1])
dma_resv_fini(dmabuf->resv);
+ dmabuf_uncharge(dmabuf);
+
WARN_ON(!list_empty(&dmabuf->attachments));
module_put(dmabuf->owner);
kfree(dmabuf->name);
@@ -484,6 +530,7 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info)
{
struct dma_buf *dmabuf;
struct dma_resv *resv = exp_info->resv;
+ struct gpucg_device *gpucg_dev = exp_info_gpucg_dev(exp_info);
struct file *file;
size_t alloc_size = sizeof(struct dma_buf);
int ret;
@@ -534,6 +581,9 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info)
}
dmabuf->resv = resv;
+ if (gpucg_dev && !dmabuf_try_charge(dmabuf, gpucg_dev))
+ goto err_charge;
+
file = dma_buf_getfile(dmabuf, exp_info->flags);
if (IS_ERR(file)) {
ret = PTR_ERR(file);
@@ -565,6 +615,8 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info)
file->f_path.dentry->d_fsdata = NULL;
fput(file);
err_dmabuf:
+ dmabuf_uncharge(dmabuf);
+err_charge:
kfree(dmabuf);
err_module:
module_put(exp_info->owner);
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index 7ab50076e7a6..742f29c3daaf 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -13,6 +13,7 @@
#ifndef __DMA_BUF_H__
#define __DMA_BUF_H__
+#include <linux/cgroup_gpu.h>
#include <linux/dma-buf-map.h>
#include <linux/file.h>
#include <linux/err.h>
@@ -303,7 +304,7 @@ struct dma_buf {
/**
* @size:
*
- * Size of the buffer; invariant over the lifetime of the buffer.
+ * Size of the buffer in bytes; invariant over the lifetime of the buffer.
*/
size_t size;
@@ -453,6 +454,17 @@ struct dma_buf {
struct dma_buf *dmabuf;
} *sysfs_entry;
#endif
+
+#ifdef CONFIG_CGROUP_GPU
+ /** @gpucg: Pointer to the cgroup this buffer currently belongs to. */
+ struct gpucg *gpucg;
+
+ /** @gpucg_dev:
+ *
+ * Pointer to the cgroup GPU device whence this buffer originates.
+ */
+ struct gpucg_device *gpucg_dev;
+#endif
};
/**
@@ -529,9 +541,10 @@ struct dma_buf_attachment {
* @exp_name: name of the exporter - useful for debugging.
* @owner: pointer to exporter module - used for refcounting kernel module
* @ops: Attach allocator-defined dma buf ops to the new buffer
- * @size: Size of the buffer - invariant over the lifetime of the buffer
+ * @size: Size of the buffer in bytes - invariant over the lifetime of the buffer
* @flags: mode flags for the file
* @resv: reservation-object, NULL to allocate default one
+ * @gpucg_dev: pointer to the gpu cgroup device this buffer belongs to
* @priv: Attach private data of allocator to this buffer
*
* This structure holds the information required to export the buffer. Used
@@ -544,6 +557,9 @@ struct dma_buf_export_info {
size_t size;
int flags;
struct dma_resv *resv;
+#ifdef CONFIG_CGROUP_GPU
+ struct gpucg_device *gpucg_dev;
+#endif
void *priv;
};
--
2.35.1.616.g0bdcbb4464-goog
From: Hridya Valsaraju <[email protected]>
The dma_buf_charge_transfer function provides a way for processes to
transfer charge of a buffer to a different process. This is essential
for the cases where a central allocator process does allocations for
various subsystems, hands over the fd to the client who requested the
memory and drops all references to the allocated memory.
Signed-off-by: Hridya Valsaraju <[email protected]>
Signed-off-by: T.J. Mercier <[email protected]>
---
v3 changes
Use more common dual author commit message format per John Stultz.
v2 changes
Move dma-buf cgroup charge transfer from a dma_buf_op defined by every
heap to a single dma-buf function for all heaps per Daniel Vetter and
Christian König.
---
drivers/dma-buf/dma-buf.c | 48 +++++++++++++++++++++++++++++++++++++++
include/linux/dma-buf.h | 2 ++
2 files changed, 50 insertions(+)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index 83d0d1b91547..55e1b982f840 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -1374,6 +1374,54 @@ void dma_buf_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
}
EXPORT_SYMBOL_NS_GPL(dma_buf_vunmap, DMA_BUF);
+/**
+ * dma_buf_charge_transfer - Change the GPU cgroup to which the provided dma_buf
+ * is charged.
+ * @dmabuf: [in] buffer whose charge will be migrated to a different GPU
+ * cgroup
+ * @gpucg: [in] the destination GPU cgroup for dmabuf's charge
+ *
+ * Only tasks that belong to the same cgroup the buffer is currently charged to
+ * may call this function; otherwise it will return -EPERM.
+ *
+ * Returns 0 on success, or a negative errno code otherwise.
+ */
+int dma_buf_charge_transfer(struct dma_buf *dmabuf, struct gpucg *gpucg)
+{
+#ifdef CONFIG_CGROUP_GPU
+ struct gpucg *current_gpucg;
+ int ret = 0;
+
+ /*
+ * Verify that the cgroup of the process requesting the transfer is the
+ * same as the one the buffer is currently charged to.
+ */
+ current_gpucg = gpucg_get(current);
+ mutex_lock(&dmabuf->lock);
+ if (current_gpucg != dmabuf->gpucg) {
+ ret = -EPERM;
+ goto err;
+ }
+
+ ret = gpucg_try_charge(gpucg, dmabuf->gpucg_dev, dmabuf->size);
+ if (ret)
+ goto err;
+
+ dmabuf->gpucg = gpucg;
+
+ /* uncharge the buffer from the cgroup it's currently charged to. */
+ gpucg_uncharge(current_gpucg, dmabuf->gpucg_dev, dmabuf->size);
+
+err:
+ mutex_unlock(&dmabuf->lock);
+ gpucg_put(current_gpucg);
+ return ret;
+#else
+ return 0;
+#endif /* CONFIG_CGROUP_GPU */
+}
+EXPORT_SYMBOL_NS_GPL(dma_buf_charge_transfer, DMA_BUF);
+
#ifdef CONFIG_DEBUG_FS
static int dma_buf_debug_show(struct seq_file *s, void *unused)
{
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index 742f29c3daaf..85c940c08867 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -646,4 +646,6 @@ int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *,
unsigned long);
int dma_buf_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map);
void dma_buf_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map);
+
+int dma_buf_charge_transfer(struct dma_buf *dmabuf, struct gpucg *gpucg);
#endif /* __DMA_BUF_H__ */
--
2.35.1.616.g0bdcbb4464-goog
On 3/9/22 9:52 AM, T.J. Mercier wrote:
> This test verifies that the cgroup GPU memory charge is transferred
> correctly when a dmabuf is passed between processes in two different
> cgroups and the sender specifies BINDER_BUFFER_FLAG_SENDER_NO_NEED in the
> binder transaction data containing the dmabuf file descriptor.
>
> Signed-off-by: T.J. Mercier <[email protected]>
> ---
> .../selftests/drivers/android/binder/Makefile | 8 +
> .../drivers/android/binder/binder_util.c | 254 +++++++++
> .../drivers/android/binder/binder_util.h | 32 ++
> .../selftests/drivers/android/binder/config | 4 +
> .../binder/test_dmabuf_cgroup_transfer.c | 480 ++++++++++++++++++
> 5 files changed, 778 insertions(+)
> create mode 100644 tools/testing/selftests/drivers/android/binder/Makefile
> create mode 100644 tools/testing/selftests/drivers/android/binder/binder_util.c
> create mode 100644 tools/testing/selftests/drivers/android/binder/binder_util.h
> create mode 100644 tools/testing/selftests/drivers/android/binder/config
> create mode 100644 tools/testing/selftests/drivers/android/binder/test_dmabuf_cgroup_transfer.c
>
> diff --git a/tools/testing/selftests/drivers/android/binder/Makefile b/tools/testing/selftests/drivers/android/binder/Makefile
> new file mode 100644
> index 000000000000..726439d10675
> --- /dev/null
> +++ b/tools/testing/selftests/drivers/android/binder/Makefile
> @@ -0,0 +1,8 @@
> +# SPDX-License-Identifier: GPL-2.0
> +CFLAGS += -Wall
> +
Is this test intended to be built on all architectures? Is an arch
check necessary here?
Also, does this test require root privileges? I see mount and
unmount operations in the test. If so, add a root check and skip
when a non-root user runs the test.
> +TEST_GEN_PROGS = test_dmabuf_cgroup_transfer
> +
> +include ../../../lib.mk
> +
> +$(OUTPUT)/test_dmabuf_cgroup_transfer: ../../../cgroup/cgroup_util.c binder_util.c
> diff --git a/tools/testing/selftests/drivers/android/binder/binder_util.c b/tools/testing/selftests/drivers/android/binder/binder_util.c
> new file mode 100644
> index 000000000000..c9dcf5b9d42b
> --- /dev/null
> +++ b/tools/testing/selftests/drivers/android/binder/binder_util.c
> @@ -0,0 +1,254 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +#include "binder_util.h"
> +
> +#include <errno.h>
> +#include <fcntl.h>
> +#include <stdio.h>
> +#include <stdlib.h>
> +#include <string.h>
> +#include <unistd.h>
> +#include <sys/ioctl.h>
> +#include <sys/mman.h>
> +#include <sys/mount.h>
> +
> +#include <linux/limits.h>
> +#include <linux/android/binder.h>
> +#include <linux/android/binderfs.h>
> +
> +static const size_t BINDER_MMAP_SIZE = 64 * 1024;
> +
> +static void binderfs_unmount(const char *mountpoint)
> +{
> + if (umount2(mountpoint, MNT_DETACH))
> + fprintf(stderr, "Failed to unmount binderfs at %s: %s\n",
> + mountpoint, strerror(errno));
> + else
> + fprintf(stderr, "Binderfs unmounted: %s\n", mountpoint);
> +
> + if (rmdir(mountpoint))
> + fprintf(stderr, "Failed to remove binderfs mount %s: %s\n",
> + mountpoint, strerror(errno));
> + else
> + fprintf(stderr, "Binderfs mountpoint destroyed: %s\n", mountpoint);
Does umount require root privileges? Same comment as above about
a non-root user running the test.
> +}
> +
> +struct binderfs_ctx create_binderfs(const char *name)
> +{
> + int fd, ret, saved_errno;
> + struct binderfs_device device = { 0 };
> + struct binderfs_ctx ctx = { 0 };
> +
> + /*
> + * P_tmpdir is set to "/tmp/" on Android platforms where Binder is most
> + * commonly used, but this path does not actually exist on Android. We
> + * will first try using "/data/local/tmp" and fallback to P_tmpdir if
> + * that fails for non-Android platforms.
> + */
> + static const char tmpdir[] = "/data/local/tmp";
> + static const size_t MAX_TMPDIR_SIZE =
> + sizeof(tmpdir) > sizeof(P_tmpdir) ?
> + sizeof(tmpdir) : sizeof(P_tmpdir);
> + static const char template[] = "/binderfs_XXXXXX";
> +
> + char *mkdtemp_result;
> + char binderfs_mntpt[MAX_TMPDIR_SIZE + sizeof(template)];
> + char device_path[MAX_TMPDIR_SIZE + sizeof(template) + BINDERFS_MAX_NAME];
> +
> + snprintf(binderfs_mntpt, sizeof(binderfs_mntpt), "%s%s", tmpdir, template);
> +
> + mkdtemp_result = mkdtemp(binderfs_mntpt);
> + if (mkdtemp_result == NULL) {
> + fprintf(stderr, "Failed to create binderfs mountpoint at %s: %s.\n",
> + binderfs_mntpt, strerror(errno));
> + fprintf(stderr, "Trying fallback mountpoint...\n");
> + snprintf(binderfs_mntpt, sizeof(binderfs_mntpt), "%s%s", P_tmpdir, template);
> + if (mkdtemp(binderfs_mntpt) == NULL) {
> + fprintf(stderr, "Failed to create binderfs mountpoint at %s: %s\n",
> + binderfs_mntpt, strerror(errno));
> + return ctx;
> + }
> + }
> + fprintf(stderr, "Binderfs mountpoint created at %s\n", binderfs_mntpt);
Does mount require root privileges? Same comment as above about
a non-root user running the test.
> +
> + if (mount(NULL, binderfs_mntpt, "binder", 0, 0)) {
> + perror("Could not mount binderfs");
> + rmdir(binderfs_mntpt);
> + return ctx;
> + }
> + fprintf(stderr, "Binderfs mounted at %s\n", binderfs_mntpt);
> +
> + strncpy(device.name, name, sizeof(device.name));
> + snprintf(device_path, sizeof(device_path), "%s/binder-control", binderfs_mntpt);
> + fd = open(device_path, O_RDONLY | O_CLOEXEC);
> + if (fd < 0) {
> + perror("Failed to open binder-control device");
> + binderfs_unmount(binderfs_mntpt);
> + return ctx;
> + }
> +
> + ret = ioctl(fd, BINDER_CTL_ADD, &device);
> + saved_errno = errno;
> + close(fd);
> + errno = saved_errno;
> + if (ret) {
> + perror("Failed to allocate new binder device");
> + binderfs_unmount(binderfs_mntpt);
> + return ctx;
> + }
> +
> + fprintf(stderr, "Allocated new binder device with major %d, minor %d, and name %s at %s\n",
> + device.major, device.minor, device.name, binderfs_mntpt);
> +
> + ctx.name = strdup(name);
> + ctx.mountpoint = strdup(binderfs_mntpt);
> + return ctx;
> +}
> +
> +void destroy_binderfs(struct binderfs_ctx *ctx)
> +{
> + char path[PATH_MAX];
> +
> + snprintf(path, sizeof(path), "%s/%s", ctx->mountpoint, ctx->name);
> +
> + if (unlink(path))
> + fprintf(stderr, "Failed to unlink binder device %s: %s\n", path, strerror(errno));
> + else
> + fprintf(stderr, "Destroyed binder %s at %s\n", ctx->name, ctx->mountpoint);
> +
> + binderfs_unmount(ctx->mountpoint);
> +
> + free(ctx->name);
> + free(ctx->mountpoint);
> +}
> +
> +struct binder_ctx open_binder(struct binderfs_ctx *bfs_ctx)
> +{
> + struct binder_ctx ctx = {.fd = -1, .memory = NULL};
> + char path[PATH_MAX];
> +
> + snprintf(path, sizeof(path), "%s/%s", bfs_ctx->mountpoint, bfs_ctx->name);
> + ctx.fd = open(path, O_RDWR | O_NONBLOCK | O_CLOEXEC);
> + if (ctx.fd < 0) {
> + fprintf(stderr, "Error opening binder device %s: %s\n", path, strerror(errno));
Does this require root privileges?
> + return ctx;
> + }
> +
> + ctx.memory = mmap(NULL, BINDER_MMAP_SIZE, PROT_READ, MAP_SHARED, ctx.fd, 0);
> + if (ctx.memory == MAP_FAILED) {
> + perror("Error mapping binder memory");
> + close(ctx.fd);
> + ctx.fd = -1;
> + }
> +
> + return ctx;
> +}
> +
> +void close_binder(struct binder_ctx *ctx)
> +{
> + if (munmap(ctx->memory, BINDER_MMAP_SIZE))
> + perror("Failed to unmap binder memory");
> + ctx->memory = NULL;
> +
> + if (close(ctx->fd))
> + perror("Failed to close binder");
> + ctx->fd = -1;
> +}
> +
> +int become_binder_context_manager(int binder_fd)
> +{
> + return ioctl(binder_fd, BINDER_SET_CONTEXT_MGR, 0);
> +}
> +
> +int do_binder_write_read(int binder_fd, void *writebuf, binder_size_t writesize,
> + void *readbuf, binder_size_t readsize)
> +{
> + int err;
> + struct binder_write_read bwr = {
> + .write_buffer = (binder_uintptr_t)writebuf,
> + .write_size = writesize,
> + .read_buffer = (binder_uintptr_t)readbuf,
> + .read_size = readsize
> + };
> +
> + do {
> + if (ioctl(binder_fd, BINDER_WRITE_READ, &bwr) >= 0)
> + err = 0;
> + else
> + err = -errno;
> + } while (err == -EINTR);
> +
> + if (err < 0) {
> + perror("BINDER_WRITE_READ");
> + return -1;
> + }
> +
> + if (bwr.write_consumed < writesize) {
> + fprintf(stderr, "Binder did not consume full write buffer %llu %llu\n",
> + bwr.write_consumed, writesize);
> + return -1;
> + }
> +
> + return bwr.read_consumed;
> +}
> +
> +static const char *reply_string(int cmd)
> +{
> + switch (cmd) {
> + case BR_ERROR:
> + return("BR_ERROR");
> + case BR_OK:
> + return("BR_OK");
> + case BR_TRANSACTION_SEC_CTX:
> + return("BR_TRANSACTION_SEC_CTX");
> + case BR_TRANSACTION:
> + return("BR_TRANSACTION");
> + case BR_REPLY:
> + return("BR_REPLY");
> + case BR_ACQUIRE_RESULT:
> + return("BR_ACQUIRE_RESULT");
> + case BR_DEAD_REPLY:
> + return("BR_DEAD_REPLY");
> + case BR_TRANSACTION_COMPLETE:
> + return("BR_TRANSACTION_COMPLETE");
> + case BR_INCREFS:
> + return("BR_INCREFS");
> + case BR_ACQUIRE:
> + return("BR_ACQUIRE");
> + case BR_RELEASE:
> + return("BR_RELEASE");
> + case BR_DECREFS:
> + return("BR_DECREFS");
> + case BR_ATTEMPT_ACQUIRE:
> + return("BR_ATTEMPT_ACQUIRE");
> + case BR_NOOP:
> + return("BR_NOOP");
> + case BR_SPAWN_LOOPER:
> + return("BR_SPAWN_LOOPER");
> + case BR_FINISHED:
> + return("BR_FINISHED");
> + case BR_DEAD_BINDER:
> + return("BR_DEAD_BINDER");
> + case BR_CLEAR_DEATH_NOTIFICATION_DONE:
> + return("BR_CLEAR_DEATH_NOTIFICATION_DONE");
> + case BR_FAILED_REPLY:
> + return("BR_FAILED_REPLY");
> + case BR_FROZEN_REPLY:
> + return("BR_FROZEN_REPLY");
> + case BR_ONEWAY_SPAM_SUSPECT:
> + return("BR_ONEWAY_SPAM_SUSPECT");
> + default:
> + return("Unknown");
> + };
> +}
> +
> +int expect_binder_reply(int32_t actual, int32_t expected)
> +{
> + if (actual != expected) {
> + fprintf(stderr, "Expected %s but received %s\n",
> + reply_string(expected), reply_string(actual));
> + return -1;
> + }
> + return 0;
> +}
> +
> diff --git a/tools/testing/selftests/drivers/android/binder/binder_util.h b/tools/testing/selftests/drivers/android/binder/binder_util.h
> new file mode 100644
> index 000000000000..807f5abe987e
> --- /dev/null
> +++ b/tools/testing/selftests/drivers/android/binder/binder_util.h
> @@ -0,0 +1,32 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +#ifndef SELFTEST_BINDER_UTIL_H
> +#define SELFTEST_BINDER_UTIL_H
> +
> +#include <stdint.h>
> +
> +#include <linux/android/binder.h>
> +
> +struct binderfs_ctx {
> + char *name;
> + char *mountpoint;
> +};
> +
> +struct binder_ctx {
> + int fd;
> + void *memory;
> +};
> +
> +struct binderfs_ctx create_binderfs(const char *name);
> +void destroy_binderfs(struct binderfs_ctx *ctx);
> +
> +struct binder_ctx open_binder(struct binderfs_ctx *bfs_ctx);
> +void close_binder(struct binder_ctx *ctx);
> +
> +int become_binder_context_manager(int binder_fd);
> +
> +int do_binder_write_read(int binder_fd, void *writebuf, binder_size_t writesize,
> + void *readbuf, binder_size_t readsize);
> +
> +int expect_binder_reply(int32_t actual, int32_t expected);
> +#endif
> diff --git a/tools/testing/selftests/drivers/android/binder/config b/tools/testing/selftests/drivers/android/binder/config
> new file mode 100644
> index 000000000000..fcc5f8f693b3
> --- /dev/null
> +++ b/tools/testing/selftests/drivers/android/binder/config
> @@ -0,0 +1,4 @@
> +CONFIG_CGROUP_GPU=y
> +CONFIG_ANDROID=y
> +CONFIG_ANDROID_BINDERFS=y
> +CONFIG_ANDROID_BINDER_IPC=y
> diff --git a/tools/testing/selftests/drivers/android/binder/test_dmabuf_cgroup_transfer.c b/tools/testing/selftests/drivers/android/binder/test_dmabuf_cgroup_transfer.c
> new file mode 100644
> index 000000000000..9b952ab401cc
> --- /dev/null
> +++ b/tools/testing/selftests/drivers/android/binder/test_dmabuf_cgroup_transfer.c
> @@ -0,0 +1,480 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +/*
> + * This test verifies that the cgroup GPU memory charge is transferred correctly
> + * when a dmabuf is passed between processes in two different cgroups and the
> + * sender specifies BINDER_BUFFER_FLAG_SENDER_NO_NEED in the binder transaction
> + * data containing the dmabuf file descriptor.
> + *
> + * The gpu_cgroup_dmabuf_transfer test function becomes the binder context
> + * manager, then forks a child who initiates a transaction with the context
> + * manager by specifying a target of 0. The context manager reply contains a
> + * dmabuf file descriptor which was allocated by the gpu_cgroup_dmabuf_transfer
> + * test function, but should be charged to the child cgroup after the binder
> + * transaction.
> + */
> +
> +#include <errno.h>
> +#include <fcntl.h>
> +#include <stddef.h>
> +#include <stdint.h>
> +#include <stdio.h>
> +#include <stdlib.h>
> +#include <string.h>
> +#include <sys/epoll.h>
> +#include <sys/ioctl.h>
> +#include <sys/types.h>
> +#include <sys/wait.h>
> +
> +#include "binder_util.h"
> +#include "../../../cgroup/cgroup_util.h"
> +#include "../../../kselftest.h"
> +#include "../../../kselftest_harness.h"
> +
> +#include <linux/limits.h>
> +#include <linux/dma-heap.h>
> +#include <linux/android/binder.h>
> +
> +#define UNUSED(x) ((void)(x))
> +
> +static const unsigned int BINDER_CODE = 8675309; /* Any number will work here */
> +
> +struct cgroup_ctx {
> + char *root;
> + char *source;
> + char *dest;
> +};
> +
> +void destroy_cgroups(struct __test_metadata *_metadata, struct cgroup_ctx *ctx)
> +{
> + if (ctx->source != NULL) {
> + TH_LOG("Destroying cgroup: %s", ctx->source);
> + rmdir(ctx->source);
> + free(ctx->source);
> + }
> +
> + if (ctx->dest != NULL) {
> + TH_LOG("Destroying cgroup: %s", ctx->dest);
> + rmdir(ctx->dest);
> + free(ctx->dest);
> + }
> +
> + free(ctx->root);
> + ctx->root = ctx->source = ctx->dest = NULL;
> +}
> +
> +struct cgroup_ctx create_cgroups(struct __test_metadata *_metadata)
> +{
> + struct cgroup_ctx ctx = {0};
> + char root[PATH_MAX], *tmp;
> + static const char template[] = "/gpucg_XXXXXX";
> +
> + if (cg_find_unified_root(root, sizeof(root))) {
> + TH_LOG("Could not find cgroups root");
> + return ctx;
> + }
> +
> + if (cg_read_strstr(root, "cgroup.controllers", "gpu")) {
> + TH_LOG("Could not find GPU controller");
> + return ctx;
> + }
> +
> + if (cg_write(root, "cgroup.subtree_control", "+gpu")) {
> + TH_LOG("Could not enable GPU controller");
> + return ctx;
> + }
> +
> + ctx.root = strdup(root);
> +
> + snprintf(root, sizeof(root), "%s/%s", ctx.root, template);
> + tmp = mkdtemp(root);
> + if (tmp == NULL) {
> + TH_LOG("%s - Could not create source cgroup", strerror(errno));
> + destroy_cgroups(_metadata, &ctx);
> + return ctx;
> + }
> + ctx.source = strdup(tmp);
> +
> + snprintf(root, sizeof(root), "%s/%s", ctx.root, template);
> + tmp = mkdtemp(root);
> + if (tmp == NULL) {
> + TH_LOG("%s - Could not create destination cgroup", strerror(errno));
> + destroy_cgroups(_metadata, &ctx);
> + return ctx;
> + }
> + ctx.dest = strdup(tmp);
> +
> + TH_LOG("Created cgroups: %s %s", ctx.source, ctx.dest);
> +
> + return ctx;
> +}
> +
> +int dmabuf_heap_alloc(int fd, size_t len, int *dmabuf_fd)
> +{
> + struct dma_heap_allocation_data data = {
> + .len = len,
> + .fd = 0,
> + .fd_flags = O_RDONLY | O_CLOEXEC,
> + .heap_flags = 0,
> + };
> + int ret;
> +
> + if (!dmabuf_fd)
> + return -EINVAL;
> +
> + ret = ioctl(fd, DMA_HEAP_IOCTL_ALLOC, &data);
> + if (ret < 0)
> + return ret;
> + *dmabuf_fd = (int)data.fd;
> + return ret;
> +}
> +
> +/* The system heap is known to export dmabufs with support for cgroup tracking */
> +int alloc_dmabuf_from_system_heap(struct __test_metadata *_metadata, size_t bytes)
> +{
> + int heap_fd = -1, dmabuf_fd = -1;
> + static const char * const heap_path = "/dev/dma_heap/system";
> +
> + heap_fd = open(heap_path, O_RDONLY);
> + if (heap_fd < 0) {
> + TH_LOG("%s - open %s failed!\n", strerror(errno), heap_path);
> + return -1;
> + }
Same question about root privileges?
> +
> + if (dmabuf_heap_alloc(heap_fd, bytes, &dmabuf_fd))
> + TH_LOG("dmabuf allocation failed! - %s", strerror(errno));
> + close(heap_fd);
> +
> + return dmabuf_fd;
> +}
> +
> +int binder_request_dmabuf(int binder_fd)
> +{
> + int ret;
> +
> + /*
> + * We just send an empty binder_buffer_object to initiate a transaction
> + * with the context manager, who should respond with a single dmabuf
> + * inside a binder_fd_array_object.
> + */
> +
> + struct binder_buffer_object bbo = {
> + .hdr.type = BINDER_TYPE_PTR,
> + .flags = 0,
> + .buffer = 0,
> + .length = 0,
> + .parent = 0, /* No parent */
> + .parent_offset = 0 /* No parent */
> + };
> +
> + binder_size_t offsets[] = {0};
> +
> + struct {
> + int32_t cmd;
> + struct binder_transaction_data btd;
> + } __attribute__((packed)) bc = {
> + .cmd = BC_TRANSACTION,
> + .btd = {
> + .target = { 0 },
> + .cookie = 0,
> + .code = BINDER_CODE,
> + .flags = TF_ACCEPT_FDS, /* We expect a FDA in the reply */
> + .data_size = sizeof(bbo),
> + .offsets_size = sizeof(offsets),
> + .data.ptr = {
> + (binder_uintptr_t)&bbo,
> + (binder_uintptr_t)offsets
> + }
> + },
> + };
> +
> + struct {
> + int32_t reply_noop;
> + } __attribute__((packed)) br;
> +
> + ret = do_binder_write_read(binder_fd, &bc, sizeof(bc), &br, sizeof(br));
> + if (ret >= sizeof(br) && expect_binder_reply(br.reply_noop, BR_NOOP)) {
> + return -1;
> + } else if (ret < sizeof(br)) {
> + fprintf(stderr, "Not enough bytes in binder reply %d\n", ret);
> + return -1;
> + }
> + return 0;
> +}
> +
> +int send_dmabuf_reply(int binder_fd, struct binder_transaction_data *tr, int dmabuf_fd)
> +{
> + int ret;
> + /*
> + * The trailing 0 is to achieve the necessary alignment for the binder
> + * buffer_size.
> + */
> + int fdarray[] = { dmabuf_fd, 0 };
> +
> + struct binder_buffer_object bbo = {
> + .hdr.type = BINDER_TYPE_PTR,
> + .flags = BINDER_BUFFER_FLAG_SENDER_NO_NEED,
> + .buffer = (binder_uintptr_t)fdarray,
> + .length = sizeof(fdarray),
> + .parent = 0, /* No parent */
> + .parent_offset = 0 /* No parent */
> + };
> +
> + struct binder_fd_array_object bfdao = {
> + .hdr.type = BINDER_TYPE_FDA,
> + .num_fds = 1,
> + .parent = 0, /* The binder_buffer_object */
> + .parent_offset = 0 /* FDs follow immediately */
> + };
> +
> + uint64_t sz = sizeof(fdarray);
> + uint8_t data[sizeof(sz) + sizeof(bbo) + sizeof(bfdao)];
> + binder_size_t offsets[] = {sizeof(sz), sizeof(sz)+sizeof(bbo)};
> +
> + memcpy(data, &sz, sizeof(sz));
> + memcpy(data + sizeof(sz), &bbo, sizeof(bbo));
> + memcpy(data + sizeof(sz) + sizeof(bbo), &bfdao, sizeof(bfdao));
> +
> + struct {
> + int32_t cmd;
> + struct binder_transaction_data_sg btd;
> + } __attribute__((packed)) bc = {
> + .cmd = BC_REPLY_SG,
> + .btd.transaction_data = {
> + .target = { tr->target.handle },
> + .cookie = tr->cookie,
> + .code = BINDER_CODE,
> + .flags = 0,
> + .data_size = sizeof(data),
> + .offsets_size = sizeof(offsets),
> + .data.ptr = {
> + (binder_uintptr_t)data,
> + (binder_uintptr_t)offsets
> + }
> + },
> + .btd.buffers_size = sizeof(fdarray)
> + };
> +
> + struct {
> + int32_t reply_noop;
> + } __attribute__((packed)) br;
> +
> + ret = do_binder_write_read(binder_fd, &bc, sizeof(bc), &br, sizeof(br));
> + if (ret >= sizeof(br) && expect_binder_reply(br.reply_noop, BR_NOOP)) {
> + return -1;
> + } else if (ret < sizeof(br)) {
> + fprintf(stderr, "Not enough bytes in binder reply %d\n", ret);
> + return -1;
> + }
> + return 0;
> +}
> +
> +struct binder_transaction_data *binder_wait_for_transaction(int binder_fd,
> + uint32_t *readbuf,
> + size_t readsize)
> +{
> + static const int MAX_EVENTS = 1, EPOLL_WAIT_TIME_MS = 3 * 1000;
> + struct binder_reply {
> + int32_t reply0;
> + int32_t reply1;
> + struct binder_transaction_data btd;
> + } *br;
> + struct binder_transaction_data *ret = NULL;
> + struct epoll_event events[MAX_EVENTS];
> + int epoll_fd, num_events, readcount;
> + uint32_t bc[] = { BC_ENTER_LOOPER };
> +
> + do_binder_write_read(binder_fd, &bc, sizeof(bc), NULL, 0);
> +
> + epoll_fd = epoll_create1(EPOLL_CLOEXEC);
> + if (epoll_fd == -1) {
> + perror("epoll_create");
> + return NULL;
> + }
> +
> + events[0].events = EPOLLIN;
> + if (epoll_ctl(epoll_fd, EPOLL_CTL_ADD, binder_fd, &events[0])) {
> + perror("epoll_ctl add");
> + goto err_close;
> + }
> +
> + num_events = epoll_wait(epoll_fd, events, MAX_EVENTS, EPOLL_WAIT_TIME_MS);
> + if (num_events < 0) {
> + perror("epoll_wait");
> + goto err_ctl;
> + } else if (num_events == 0) {
> + fprintf(stderr, "No events\n");
> + goto err_ctl;
> + }
> +
> + readcount = do_binder_write_read(binder_fd, NULL, 0, readbuf, readsize);
> + fprintf(stderr, "Read %d bytes from binder\n", readcount);
> +
> + if (readcount < (int)sizeof(struct binder_reply)) {
> + fprintf(stderr, "read_consumed not large enough\n");
> + goto err_ctl;
> + }
> +
> + br = (struct binder_reply *)readbuf;
> + if (expect_binder_reply(br->reply0, BR_NOOP))
> + goto err_ctl;
> +
> + if (br->reply1 == BR_TRANSACTION) {
> + if (br->btd.code == BINDER_CODE)
> + ret = &br->btd;
> + else
> + fprintf(stderr, "Received transaction with unexpected code: %u\n",
> + br->btd.code);
> + } else {
> + expect_binder_reply(br->reply1, BR_TRANSACTION_COMPLETE);
> + }
> +
> +err_ctl:
> + if (epoll_ctl(epoll_fd, EPOLL_CTL_DEL, binder_fd, NULL))
> + perror("epoll_ctl del");
> +err_close:
> + close(epoll_fd);
> + return ret;
> +}
> +
> +static int child_request_dmabuf_transfer(const char *cgroup, void *arg)
> +{
> + UNUSED(cgroup);
> + int ret = -1;
> + uint32_t readbuf[32];
> + struct binderfs_ctx bfs_ctx = *(struct binderfs_ctx *)arg;
> + struct binder_ctx b_ctx;
> +
> + fprintf(stderr, "Child PID: %d\n", getpid());
> +
> + b_ctx = open_binder(&bfs_ctx);
> + if (b_ctx.fd < 0) {
> + fprintf(stderr, "Child unable to open binder\n");
> + return -1;
> + }
> +
> + if (binder_request_dmabuf(b_ctx.fd))
> + goto err;
> +
> + /* The child must stay alive until the binder reply is received */
> + if (binder_wait_for_transaction(b_ctx.fd, readbuf, sizeof(readbuf)) == NULL)
> + ret = 0;
> +
> + /*
> + * We don't close the received dmabuf here so that the parent can
> + * inspect the cgroup gpu memory charges to verify the charge transfer
> + * completed successfully.
> + */
> +err:
> + close_binder(&b_ctx);
> + fprintf(stderr, "Child done\n");
> + return ret;
> +}
> +
> +TEST(gpu_cgroup_dmabuf_transfer)
> +{
> + static const char * const GPUMEM_FILENAME = "gpu.memory.current";
> + static const size_t ONE_MiB = 1024 * 1024;
> +
> + int ret, dmabuf_fd;
> + uint32_t readbuf[32];
> + long memsize;
> + pid_t child_pid;
> + struct binderfs_ctx bfs_ctx;
> + struct binder_ctx b_ctx;
> + struct cgroup_ctx cg_ctx;
> + struct binder_transaction_data *tr;
> + struct flat_binder_object *fbo;
> + struct binder_buffer_object *bbo;
> +
If root privileges are necessary, please add a check here and skip.
> + bfs_ctx = create_binderfs("testbinder");
> + if (bfs_ctx.name == NULL)
> + ksft_exit_skip("The Android binderfs filesystem is not available\n");
> +
> + cg_ctx = create_cgroups(_metadata);
> + if (cg_ctx.root == NULL) {
> + destroy_binderfs(&bfs_ctx);
> + ksft_exit_skip("cgroup v2 isn't mounted\n");
> + }
> +
> + ASSERT_EQ(cg_enter_current(cg_ctx.source), 0) {
> + TH_LOG("Could not move parent to cgroup: %s", cg_ctx.source);
> + goto err_cg;
> + }
> +
> + dmabuf_fd = alloc_dmabuf_from_system_heap(_metadata, ONE_MiB);
> + ASSERT_GE(dmabuf_fd, 0) {
> + goto err_cg;
> + }
> + TH_LOG("Allocated dmabuf");
> +
> + memsize = cg_read_key_long(cg_ctx.source, GPUMEM_FILENAME, "system");
> + ASSERT_EQ(memsize, ONE_MiB) {
> + TH_LOG("GPU memory used after allocation: %ld but it should be %lu",
> + memsize, (unsigned long)ONE_MiB);
> + goto err_dmabuf;
> + }
> +
> + b_ctx = open_binder(&bfs_ctx);
> + ASSERT_GE(b_ctx.fd, 0) {
> + TH_LOG("Parent unable to open binder");
> + goto err_dmabuf;
> + }
> + TH_LOG("Opened binder at %s/%s", bfs_ctx.mountpoint, bfs_ctx.name);
> +
> + ASSERT_EQ(become_binder_context_manager(b_ctx.fd), 0) {
> + TH_LOG("Cannot become context manager: %s", strerror(errno));
> + goto err_binder;
> + }
> +
> + child_pid = cg_run_nowait(cg_ctx.dest, child_request_dmabuf_transfer, &bfs_ctx);
> + ASSERT_GT(child_pid, 0) {
> + TH_LOG("Error forking: %s", strerror(errno));
> + goto err_binder;
> + }
> +
> + tr = binder_wait_for_transaction(b_ctx.fd, readbuf, sizeof(readbuf));
> + ASSERT_NE(tr, NULL) {
> + TH_LOG("Error receiving transaction request from child");
> + goto err_child;
> + }
> + fbo = (struct flat_binder_object *)tr->data.ptr.buffer;
> + ASSERT_EQ(fbo->hdr.type, BINDER_TYPE_PTR) {
> + TH_LOG("Did not receive a buffer object from child");
> + goto err_child;
> + }
> + bbo = (struct binder_buffer_object *)fbo;
> + ASSERT_EQ(bbo->length, 0) {
> + TH_LOG("Did not receive an empty buffer object from child");
> + goto err_child;
> + }
> +
> + TH_LOG("Received transaction from child");
> + send_dmabuf_reply(b_ctx.fd, tr, dmabuf_fd);
> +
> + ASSERT_EQ(cg_read_key_long(cg_ctx.dest, GPUMEM_FILENAME, "system"), ONE_MiB) {
> + TH_LOG("Destination cgroup does not have system charge!");
> + goto err_child;
> + }
> + ASSERT_EQ(cg_read_key_long(cg_ctx.source, GPUMEM_FILENAME, "system"), 0) {
> + TH_LOG("Source cgroup still has system charge!");
> + goto err_child;
> + }
> + TH_LOG("Charge transfer succeeded!");
> +
> +err_child:
> + waitpid(child_pid, &ret, 0);
> + if (WIFEXITED(ret))
> + TH_LOG("Child %d terminated with %d", child_pid, WEXITSTATUS(ret));
> + else
> + TH_LOG("Child terminated abnormally");
What does this mean? What are the conditions that could cause this?
Please include more info in the message.
> +err_binder:
> + close_binder(&b_ctx);
> +err_dmabuf:
> + close(dmabuf_fd);
> +err_cg:
> + destroy_cgroups(_metadata, &cg_ctx);
> + destroy_binderfs(&bfs_ctx);
> +}
> +
> +TEST_HARNESS_MAIN
>
thanks,
-- Shuah
On Wed, Mar 9, 2022 at 8:52 AM T.J. Mercier <[email protected]> wrote:
>
> The kernel interface should use types that the kernel defines instead of
> pid_t and uid_t, whose definition is owned by libc. This fixes the header
> so that it can be included without first including sys/types.h.
>
> Signed-off-by: T.J. Mercier <[email protected]>
> ---
> include/uapi/linux/android/binder.h | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/include/uapi/linux/android/binder.h b/include/uapi/linux/android/binder.h
> index 169fd5069a1a..aa28454dbca3 100644
> --- a/include/uapi/linux/android/binder.h
> +++ b/include/uapi/linux/android/binder.h
> @@ -289,8 +289,8 @@ struct binder_transaction_data {
>
> /* General information about the transaction. */
> __u32 flags;
> - pid_t sender_pid;
> - uid_t sender_euid;
> + __kernel_pid_t sender_pid;
> + __kernel_uid_t sender_euid;
Are we guaranteed that this does not affect the UAPI at all? Userspace
code using this definition will have to run with kernels using the old
definition and vice versa.
> binder_size_t data_size; /* number of bytes of data */
> binder_size_t offsets_size; /* number of bytes of offsets */
>
> --
> 2.35.1.616.g0bdcbb4464-goog
>
From: T.J. Mercier
> Sent: 14 March 2022 23:45
>
> On Thu, Mar 10, 2022 at 11:33 AM Todd Kjos <[email protected]> wrote:
> >
> > On Wed, Mar 9, 2022 at 8:52 AM T.J. Mercier <[email protected]> wrote:
> > >
> > > The kernel interface should use types that the kernel defines instead of
> > > pid_t and uid_t, whose definition is owned by libc. This fixes the header
> > > so that it can be included without first including sys/types.h.
> > >
> > > Signed-off-by: T.J. Mercier <[email protected]>
> > > ---
> > > include/uapi/linux/android/binder.h | 4 ++--
> > > 1 file changed, 2 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/include/uapi/linux/android/binder.h b/include/uapi/linux/android/binder.h
> > > index 169fd5069a1a..aa28454dbca3 100644
> > > --- a/include/uapi/linux/android/binder.h
> > > +++ b/include/uapi/linux/android/binder.h
> > > @@ -289,8 +289,8 @@ struct binder_transaction_data {
> > >
> > > /* General information about the transaction. */
> > > __u32 flags;
> > > - pid_t sender_pid;
> > > - uid_t sender_euid;
> > > + __kernel_pid_t sender_pid;
> > > + __kernel_uid_t sender_euid;
> >
> > Are we guaranteed that this does not affect the UAPI at all? Userspace
> > code using this definition will have to run with kernels using the old
> > definition and vice versa.
>
> A standards-compliant userspace should be expecting a signed integer
> type here. So the only way I can think of that userspace would be affected
> is if:
> 1) pid_t is a long AND
> 2) sizeof(long) > sizeof(int) AND
> 3) Consumers of the pid_t definition actually attempt to mutate the
> result to make use of extra bits in the variable (which are not there)
Or the userspace headers have a 16bit pid_t.
I can't help feeling that uapi headers should only use explicit
fixed-size types.
There is no point indirecting the type names - the sizes still
can't be changed.
David
-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)
On Thu, Mar 10, 2022 at 11:33 AM Todd Kjos <[email protected]> wrote:
>
> On Wed, Mar 9, 2022 at 8:52 AM T.J. Mercier <[email protected]> wrote:
> >
> > The kernel interface should use types that the kernel defines instead of
> > pid_t and uid_t, whose definition is owned by libc. This fixes the header
> > so that it can be included without first including sys/types.h.
> >
> > Signed-off-by: T.J. Mercier <[email protected]>
> > ---
> > include/uapi/linux/android/binder.h | 4 ++--
> > 1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/include/uapi/linux/android/binder.h b/include/uapi/linux/android/binder.h
> > index 169fd5069a1a..aa28454dbca3 100644
> > --- a/include/uapi/linux/android/binder.h
> > +++ b/include/uapi/linux/android/binder.h
> > @@ -289,8 +289,8 @@ struct binder_transaction_data {
> >
> > /* General information about the transaction. */
> > __u32 flags;
> > - pid_t sender_pid;
> > - uid_t sender_euid;
> > + __kernel_pid_t sender_pid;
> > + __kernel_uid_t sender_euid;
>
> Are we guaranteed that this does not affect the UAPI at all? Userspace
> code using this definition will have to run with kernels using the old
> definition and vice versa.
A standards-compliant userspace should be expecting a signed integer
type here. So the only way I can think userspace would be affected is
if:
1) pid_t is a long AND
2) sizeof(long) > sizeof(int) AND
3) Consumers of the pid_t definition actually attempt to mutate the
result to make use of extra bits in the variable (which are not there)
This seems extremely unlikely. For instance, just on the topic of the
first item, all of the C library implementations with pid_t
definitions linked here use an int, except for Bionic, which typedefs
pid_t to __kernel_pid_t, and Sortix, which uses long.
https://wiki.osdev.org/C_Library
However I would argue this is already broken and should count as a bug
fix since I can't do this:
$ cat binder_include.c ; gcc binder_include.c
#include <linux/android/binder.h>
int main() {}
In file included from binder_include.c:1:
/usr/include/linux/android/binder.h:291:9: error: unknown type name ‘pid_t’
291 | pid_t sender_pid;
| ^~~~~
/usr/include/linux/android/binder.h:292:9: error: unknown type name ‘uid_t’
292 | uid_t sender_euid;
| ^~~~~
This is also the only occurrence of pid_t in all of
include/uapi/linux. All 40+ other uses are __kernel_pid_t, and I don't
see why the binder header should be different.
>
> > binder_size_t data_size; /* number of bytes of data */
> > binder_size_t offsets_size; /* number of bytes of offsets */
> >
> > --
> > 2.35.1.616.g0bdcbb4464-goog
> >
On Tue, Mar 15, 2022 at 12:56 AM David Laight <[email protected]> wrote:
>
> From: T.J. Mercier
> > Sent: 14 March 2022 23:45
> >
> > On Thu, Mar 10, 2022 at 11:33 AM Todd Kjos <[email protected]> wrote:
> > >
> > > On Wed, Mar 9, 2022 at 8:52 AM T.J. Mercier <[email protected]> wrote:
> > > >
> > > > The kernel interface should use types that the kernel defines instead of
> > > > pid_t and uid_t, whose definition is owned by libc. This fixes the header
> > > > so that it can be included without first including sys/types.h.
> > > >
> > > > Signed-off-by: T.J. Mercier <[email protected]>
> > > > ---
> > > > include/uapi/linux/android/binder.h | 4 ++--
> > > > 1 file changed, 2 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/include/uapi/linux/android/binder.h b/include/uapi/linux/android/binder.h
> > > > index 169fd5069a1a..aa28454dbca3 100644
> > > > --- a/include/uapi/linux/android/binder.h
> > > > +++ b/include/uapi/linux/android/binder.h
> > > > @@ -289,8 +289,8 @@ struct binder_transaction_data {
> > > >
> > > > /* General information about the transaction. */
> > > > __u32 flags;
> > > > - pid_t sender_pid;
> > > > - uid_t sender_euid;
> > > > + __kernel_pid_t sender_pid;
> > > > + __kernel_uid_t sender_euid;
> > >
> > > Are we guaranteed that this does not affect the UAPI at all? Userspace
> > > code using this definition will have to run with kernels using the old
> > > definition and vice versa.
> >
> > A standards-compliant userspace should be expecting a signed integer
> > type here. So the only way I can think userspace would be affected is
> > if:
> > 1) pid_t is a long AND
> > 2) sizeof(long) > sizeof(int) AND
> > 3) Consumers of the pid_t definition actually attempt to mutate the
> > result to make use of extra bits in the variable (which are not there)
>
> Or the userspace headers have a 16bit pid_t.
Since the kernel uses an int for PIDs, wouldn't a 16 bit pid_t already
be potentially broken (overflow) on systems where int is not 16 bits?
On systems where int is 16 bits, there is no change here except to
achieve uniform use of __kernel_pid_t in the kernel headers and fix
the include problem.
>
> I can't help feeling that uapi headers should only use explicit
> fixed sized types.
> There is no point indirecting the type names - the sizes still
> can't be changed.
I think it's still unlikely to be an actual problem. For example there
are other occasions where a switch like this was made:
https://github.com/torvalds/linux/commit/694a58e29ef27c4c26f103a9decfd053f94dd34c
https://github.com/torvalds/linux/commit/269b8fd5d058f2c0da01a42b20315ffc2640d99b
And Binder's only known user is Android, through Bionic, which
already expects pid_t to be __kernel_pid_t.
>
> David
>
On Wed, Mar 9, 2022 at 1:31 PM Shuah Khan <[email protected]> wrote:
>
> On 3/9/22 9:52 AM, T.J. Mercier wrote:
> > This test verifies that the cgroup GPU memory charge is transferred
> > correctly when a dmabuf is passed between processes in two different
> > cgroups and the sender specifies BINDER_BUFFER_FLAG_SENDER_NO_NEED in the
> > binder transaction data containing the dmabuf file descriptor.
> >
> > Signed-off-by: T.J. Mercier <[email protected]>
> > ---
> > .../selftests/drivers/android/binder/Makefile | 8 +
> > .../drivers/android/binder/binder_util.c | 254 +++++++++
> > .../drivers/android/binder/binder_util.h | 32 ++
> > .../selftests/drivers/android/binder/config | 4 +
> > .../binder/test_dmabuf_cgroup_transfer.c | 480 ++++++++++++++++++
> > 5 files changed, 778 insertions(+)
> > create mode 100644 tools/testing/selftests/drivers/android/binder/Makefile
> > create mode 100644 tools/testing/selftests/drivers/android/binder/binder_util.c
> > create mode 100644 tools/testing/selftests/drivers/android/binder/binder_util.h
> > create mode 100644 tools/testing/selftests/drivers/android/binder/config
> > create mode 100644 tools/testing/selftests/drivers/android/binder/test_dmabuf_cgroup_transfer.c
> >
> > diff --git a/tools/testing/selftests/drivers/android/binder/Makefile b/tools/testing/selftests/drivers/android/binder/Makefile
> > new file mode 100644
> > index 000000000000..726439d10675
> > --- /dev/null
> > +++ b/tools/testing/selftests/drivers/android/binder/Makefile
> > @@ -0,0 +1,8 @@
> > +# SPDX-License-Identifier: GPL-2.0
> > +CFLAGS += -Wall
> > +
>
> Is this test intended to be built on all architectures? Is an arch
> check necessary here?
I think this test should be runnable on any architecture, so long as
the correct kernel configs (for binder and cgroups) are enabled. I
have tested it on arm64 and x86-64.
>
> Also does this test require root privileges - I see mount and
> unmount operations in the test. If so add root check and skip
> if non-root user runs the test.
Yes, currently it does. Thanks, I will add this check at the location
you have pointed out in the TEST function.
>
> > +TEST_GEN_PROGS = test_dmabuf_cgroup_transfer
> > +
> > +include ../../../lib.mk
> > +
> > +$(OUTPUT)/test_dmabuf_cgroup_transfer: ../../../cgroup/cgroup_util.c binder_util.c
> > diff --git a/tools/testing/selftests/drivers/android/binder/binder_util.c b/tools/testing/selftests/drivers/android/binder/binder_util.c
> > new file mode 100644
> > index 000000000000..c9dcf5b9d42b
> > --- /dev/null
> > +++ b/tools/testing/selftests/drivers/android/binder/binder_util.c
> > @@ -0,0 +1,254 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +
> > +#include "binder_util.h"
> > +
> > +#include <errno.h>
> > +#include <fcntl.h>
> > +#include <stdio.h>
> > +#include <stdlib.h>
> > +#include <string.h>
> > +#include <unistd.h>
> > +#include <sys/ioctl.h>
> > +#include <sys/mman.h>
> > +#include <sys/mount.h>
> > +
> > +#include <linux/limits.h>
> > +#include <linux/android/binder.h>
> > +#include <linux/android/binderfs.h>
> > +
> > +static const size_t BINDER_MMAP_SIZE = 64 * 1024;
> > +
> > +static void binderfs_unmount(const char *mountpoint)
> > +{
> > + if (umount2(mountpoint, MNT_DETACH))
> > + fprintf(stderr, "Failed to unmount binderfs at %s: %s\n",
> > + mountpoint, strerror(errno));
> > + else
> > + fprintf(stderr, "Binderfs unmounted: %s\n", mountpoint);
> > +
> > + if (rmdir(mountpoint))
> > + fprintf(stderr, "Failed to remove binderfs mount %s: %s\n",
> > + mountpoint, strerror(errno));
> > + else
> > + fprintf(stderr, "Binderfs mountpoint destroyed: %s\n", mountpoint);
>
> Does umount require root privileges? Same comment as above about
> non-root user running test.
>
>
> > +}
> > +
> > +struct binderfs_ctx create_binderfs(const char *name)
> > +{
> > + int fd, ret, saved_errno;
> > + struct binderfs_device device = { 0 };
> > + struct binderfs_ctx ctx = { 0 };
> > +
> > + /*
> > + * P_tmpdir is set to "/tmp/" on Android platforms where Binder is most
> > + * commonly used, but this path does not actually exist on Android. We
> > + * will first try using "/data/local/tmp" and fallback to P_tmpdir if
> > + * that fails for non-Android platforms.
> > + */
> > + static const char tmpdir[] = "/data/local/tmp";
> > + static const size_t MAX_TMPDIR_SIZE =
> > + sizeof(tmpdir) > sizeof(P_tmpdir) ?
> > + sizeof(tmpdir) : sizeof(P_tmpdir);
> > + static const char template[] = "/binderfs_XXXXXX";
> > +
> > + char *mkdtemp_result;
> > + char binderfs_mntpt[MAX_TMPDIR_SIZE + sizeof(template)];
> > + char device_path[MAX_TMPDIR_SIZE + sizeof(template) + BINDERFS_MAX_NAME];
> > +
> > + snprintf(binderfs_mntpt, sizeof(binderfs_mntpt), "%s%s", tmpdir, template);
> > +
> > + mkdtemp_result = mkdtemp(binderfs_mntpt);
> > + if (mkdtemp_result == NULL) {
> > + fprintf(stderr, "Failed to create binderfs mountpoint at %s: %s.\n",
> > + binderfs_mntpt, strerror(errno));
> > + fprintf(stderr, "Trying fallback mountpoint...\n");
> > + snprintf(binderfs_mntpt, sizeof(binderfs_mntpt), "%s%s", P_tmpdir, template);
> > + if (mkdtemp(binderfs_mntpt) == NULL) {
> > + fprintf(stderr, "Failed to create binderfs mountpoint at %s: %s\n",
> > + binderfs_mntpt, strerror(errno));
> > + return ctx;
> > + }
> > + }
> > + fprintf(stderr, "Binderfs mountpoint created at %s\n", binderfs_mntpt);
>
> Does mount require root privileges? Same comment as above about
> non-root user running test.
>
> > +
> > + if (mount(NULL, binderfs_mntpt, "binder", 0, 0)) {
> > + perror("Could not mount binderfs");
> > + rmdir(binderfs_mntpt);
> > + return ctx;
> > + }
> > + fprintf(stderr, "Binderfs mounted at %s\n", binderfs_mntpt);
> > +
> > + strncpy(device.name, name, sizeof(device.name));
> > + snprintf(device_path, sizeof(device_path), "%s/binder-control", binderfs_mntpt);
> > + fd = open(device_path, O_RDONLY | O_CLOEXEC);
> > + if (fd < 0) {
> > + perror("Failed to open binder-control device");
> > + binderfs_unmount(binderfs_mntpt);
> > + return ctx;
> > + }
> > +
> > + ret = ioctl(fd, BINDER_CTL_ADD, &device);
> > + saved_errno = errno;
> > + close(fd);
> > + errno = saved_errno;
> > + if (ret) {
> > + perror("Failed to allocate new binder device");
> > + binderfs_unmount(binderfs_mntpt);
> > + return ctx;
> > + }
> > +
> > + fprintf(stderr, "Allocated new binder device with major %d, minor %d, and name %s at %s\n",
> > + device.major, device.minor, device.name, binderfs_mntpt);
> > +
> > + ctx.name = strdup(name);
> > + ctx.mountpoint = strdup(binderfs_mntpt);
> > + return ctx;
> > +}
> > +
> > +void destroy_binderfs(struct binderfs_ctx *ctx)
> > +{
> > + char path[PATH_MAX];
> > +
> > + snprintf(path, sizeof(path), "%s/%s", ctx->mountpoint, ctx->name);
> > +
> > + if (unlink(path))
> > + fprintf(stderr, "Failed to unlink binder device %s: %s\n", path, strerror(errno));
> > + else
> > + fprintf(stderr, "Destroyed binder %s at %s\n", ctx->name, ctx->mountpoint);
> > +
> > + binderfs_unmount(ctx->mountpoint);
> > +
> > + free(ctx->name);
> > + free(ctx->mountpoint);
> > +}
> > +
> > +struct binder_ctx open_binder(struct binderfs_ctx *bfs_ctx)
> > +{
> > + struct binder_ctx ctx = {.fd = -1, .memory = NULL};
> > + char path[PATH_MAX];
> > +
> > + snprintf(path, sizeof(path), "%s/%s", bfs_ctx->mountpoint, bfs_ctx->name);
> > + ctx.fd = open(path, O_RDWR | O_NONBLOCK | O_CLOEXEC);
> > + if (ctx.fd < 0) {
> > + fprintf(stderr, "Error opening binder device %s: %s\n", path, strerror(errno));
>
> Does this require root privileges?
>
> > + return ctx;
> > + }
> > +
> > + ctx.memory = mmap(NULL, BINDER_MMAP_SIZE, PROT_READ, MAP_SHARED, ctx.fd, 0);
> > + if (ctx.memory == MAP_FAILED) {
> > + perror("Error mapping binder memory");
> > + close(ctx.fd);
> > + ctx.fd = -1;
> > + }
> > +
> > + return ctx;
> > +}
> > +
> > +void close_binder(struct binder_ctx *ctx)
> > +{
> > + if (munmap(ctx->memory, BINDER_MMAP_SIZE))
> > + perror("Failed to unmap binder memory");
> > + ctx->memory = NULL;
> > +
> > + if (close(ctx->fd))
> > + perror("Failed to close binder");
> > + ctx->fd = -1;
> > +}
> > +
> > +int become_binder_context_manager(int binder_fd)
> > +{
> > + return ioctl(binder_fd, BINDER_SET_CONTEXT_MGR, 0);
> > +}
> > +
> > +int do_binder_write_read(int binder_fd, void *writebuf, binder_size_t writesize,
> > + void *readbuf, binder_size_t readsize)
> > +{
> > + int err;
> > + struct binder_write_read bwr = {
> > + .write_buffer = (binder_uintptr_t)writebuf,
> > + .write_size = writesize,
> > + .read_buffer = (binder_uintptr_t)readbuf,
> > + .read_size = readsize
> > + };
> > +
> > + do {
> > + if (ioctl(binder_fd, BINDER_WRITE_READ, &bwr) >= 0)
> > + err = 0;
> > + else
> > + err = -errno;
> > + } while (err == -EINTR);
> > +
> > + if (err < 0) {
> > + perror("BINDER_WRITE_READ");
> > + return -1;
> > + }
> > +
> > + if (bwr.write_consumed < writesize) {
> > + fprintf(stderr, "Binder did not consume full write buffer %llu %llu\n",
> > + bwr.write_consumed, writesize);
> > + return -1;
> > + }
> > +
> > + return bwr.read_consumed;
> > +}
> > +
> > +static const char *reply_string(int cmd)
> > +{
> > + switch (cmd) {
> > + case BR_ERROR:
> > + return("BR_ERROR");
> > + case BR_OK:
> > + return("BR_OK");
> > + case BR_TRANSACTION_SEC_CTX:
> > + return("BR_TRANSACTION_SEC_CTX");
> > + case BR_TRANSACTION:
> > + return("BR_TRANSACTION");
> > + case BR_REPLY:
> > + return("BR_REPLY");
> > + case BR_ACQUIRE_RESULT:
> > + return("BR_ACQUIRE_RESULT");
> > + case BR_DEAD_REPLY:
> > + return("BR_DEAD_REPLY");
> > + case BR_TRANSACTION_COMPLETE:
> > + return("BR_TRANSACTION_COMPLETE");
> > + case BR_INCREFS:
> > + return("BR_INCREFS");
> > + case BR_ACQUIRE:
> > + return("BR_ACQUIRE");
> > + case BR_RELEASE:
> > + return("BR_RELEASE");
> > + case BR_DECREFS:
> > + return("BR_DECREFS");
> > + case BR_ATTEMPT_ACQUIRE:
> > + return("BR_ATTEMPT_ACQUIRE");
> > + case BR_NOOP:
> > + return("BR_NOOP");
> > + case BR_SPAWN_LOOPER:
> > + return("BR_SPAWN_LOOPER");
> > + case BR_FINISHED:
> > + return("BR_FINISHED");
> > + case BR_DEAD_BINDER:
> > + return("BR_DEAD_BINDER");
> > + case BR_CLEAR_DEATH_NOTIFICATION_DONE:
> > + return("BR_CLEAR_DEATH_NOTIFICATION_DONE");
> > + case BR_FAILED_REPLY:
> > + return("BR_FAILED_REPLY");
> > + case BR_FROZEN_REPLY:
> > + return("BR_FROZEN_REPLY");
> > + case BR_ONEWAY_SPAM_SUSPECT:
> > + return("BR_ONEWAY_SPAM_SUSPECT");
> > + default:
> > + return("Unknown");
> > + };
> > +}
> > +
> > +int expect_binder_reply(int32_t actual, int32_t expected)
> > +{
> > + if (actual != expected) {
> > + fprintf(stderr, "Expected %s but received %s\n",
> > + reply_string(expected), reply_string(actual));
> > + return -1;
> > + }
> > + return 0;
> > +}
> > +
> > diff --git a/tools/testing/selftests/drivers/android/binder/binder_util.h b/tools/testing/selftests/drivers/android/binder/binder_util.h
> > new file mode 100644
> > index 000000000000..807f5abe987e
> > --- /dev/null
> > +++ b/tools/testing/selftests/drivers/android/binder/binder_util.h
> > @@ -0,0 +1,32 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +
> > +#ifndef SELFTEST_BINDER_UTIL_H
> > +#define SELFTEST_BINDER_UTIL_H
> > +
> > +#include <stdint.h>
> > +
> > +#include <linux/android/binder.h>
> > +
> > +struct binderfs_ctx {
> > + char *name;
> > + char *mountpoint;
> > +};
> > +
> > +struct binder_ctx {
> > + int fd;
> > + void *memory;
> > +};
> > +
> > +struct binderfs_ctx create_binderfs(const char *name);
> > +void destroy_binderfs(struct binderfs_ctx *ctx);
> > +
> > +struct binder_ctx open_binder(struct binderfs_ctx *bfs_ctx);
> > +void close_binder(struct binder_ctx *ctx);
> > +
> > +int become_binder_context_manager(int binder_fd);
> > +
> > +int do_binder_write_read(int binder_fd, void *writebuf, binder_size_t writesize,
> > + void *readbuf, binder_size_t readsize);
> > +
> > +int expect_binder_reply(int32_t actual, int32_t expected);
> > +#endif
> > diff --git a/tools/testing/selftests/drivers/android/binder/config b/tools/testing/selftests/drivers/android/binder/config
> > new file mode 100644
> > index 000000000000..fcc5f8f693b3
> > --- /dev/null
> > +++ b/tools/testing/selftests/drivers/android/binder/config
> > @@ -0,0 +1,4 @@
> > +CONFIG_CGROUP_GPU=y
> > +CONFIG_ANDROID=y
> > +CONFIG_ANDROID_BINDERFS=y
> > +CONFIG_ANDROID_BINDER_IPC=y
> > diff --git a/tools/testing/selftests/drivers/android/binder/test_dmabuf_cgroup_transfer.c b/tools/testing/selftests/drivers/android/binder/test_dmabuf_cgroup_transfer.c
> > new file mode 100644
> > index 000000000000..9b952ab401cc
> > --- /dev/null
> > +++ b/tools/testing/selftests/drivers/android/binder/test_dmabuf_cgroup_transfer.c
> > @@ -0,0 +1,480 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +
> > +/*
> > + * This test verifies that the cgroup GPU memory charge is transferred correctly
> > + * when a dmabuf is passed between processes in two different cgroups and the
> > + * sender specifies BINDER_BUFFER_FLAG_SENDER_NO_NEED in the binder transaction
> > + * data containing the dmabuf file descriptor.
> > + *
> > + * The gpu_cgroup_dmabuf_transfer test function becomes the binder context
> > + * manager, then forks a child who initiates a transaction with the context
> > + * manager by specifying a target of 0. The context manager reply contains a
> > + * dmabuf file descriptor which was allocated by the gpu_cgroup_dmabuf_transfer
> > + * test function, but should be charged to the child cgroup after the binder
> > + * transaction.
> > + */
> > +
> > +#include <errno.h>
> > +#include <fcntl.h>
> > +#include <stddef.h>
> > +#include <stdint.h>
> > +#include <stdio.h>
> > +#include <stdlib.h>
> > +#include <string.h>
> > +#include <sys/epoll.h>
> > +#include <sys/ioctl.h>
> > +#include <sys/types.h>
> > +#include <sys/wait.h>
> > +
> > +#include "binder_util.h"
> > +#include "../../../cgroup/cgroup_util.h"
> > +#include "../../../kselftest.h"
> > +#include "../../../kselftest_harness.h"
> > +
> > +#include <linux/limits.h>
> > +#include <linux/dma-heap.h>
> > +#include <linux/android/binder.h>
> > +
> > +#define UNUSED(x) ((void)(x))
> > +
> > +static const unsigned int BINDER_CODE = 8675309; /* Any number will work here */
> > +
> > +struct cgroup_ctx {
> > + char *root;
> > + char *source;
> > + char *dest;
> > +};
> > +
> > +void destroy_cgroups(struct __test_metadata *_metadata, struct cgroup_ctx *ctx)
> > +{
> > + if (ctx->source != NULL) {
> > + TH_LOG("Destroying cgroup: %s", ctx->source);
> > + rmdir(ctx->source);
> > + free(ctx->source);
> > + }
> > +
> > + if (ctx->dest != NULL) {
> > + TH_LOG("Destroying cgroup: %s", ctx->dest);
> > + rmdir(ctx->dest);
> > + free(ctx->dest);
> > + }
> > +
> > + free(ctx->root);
> > + ctx->root = ctx->source = ctx->dest = NULL;
> > +}
> > +
> > +struct cgroup_ctx create_cgroups(struct __test_metadata *_metadata)
> > +{
> > + struct cgroup_ctx ctx = {0};
> > + char root[PATH_MAX], *tmp;
> > + static const char template[] = "/gpucg_XXXXXX";
> > +
> > + if (cg_find_unified_root(root, sizeof(root))) {
> > + TH_LOG("Could not find cgroups root");
> > + return ctx;
> > + }
> > +
> > + if (cg_read_strstr(root, "cgroup.controllers", "gpu")) {
> > + TH_LOG("Could not find GPU controller");
> > + return ctx;
> > + }
> > +
> > + if (cg_write(root, "cgroup.subtree_control", "+gpu")) {
> > + TH_LOG("Could not enable GPU controller");
> > + return ctx;
> > + }
> > +
> > + ctx.root = strdup(root);
> > +
> > + snprintf(root, sizeof(root), "%s/%s", ctx.root, template);
> > + tmp = mkdtemp(root);
> > + if (tmp == NULL) {
> > + TH_LOG("%s - Could not create source cgroup", strerror(errno));
> > + destroy_cgroups(_metadata, &ctx);
> > + return ctx;
> > + }
> > + ctx.source = strdup(tmp);
> > +
> > + snprintf(root, sizeof(root), "%s/%s", ctx.root, template);
> > + tmp = mkdtemp(root);
> > + if (tmp == NULL) {
> > + TH_LOG("%s - Could not create destination cgroup", strerror(errno));
> > + destroy_cgroups(_metadata, &ctx);
> > + return ctx;
> > + }
> > + ctx.dest = strdup(tmp);
> > +
> > + TH_LOG("Created cgroups: %s %s", ctx.source, ctx.dest);
> > +
> > + return ctx;
> > +}
> > +
> > +int dmabuf_heap_alloc(int fd, size_t len, int *dmabuf_fd)
> > +{
> > + struct dma_heap_allocation_data data = {
> > + .len = len,
> > + .fd = 0,
> > + .fd_flags = O_RDONLY | O_CLOEXEC,
> > + .heap_flags = 0,
> > + };
> > + int ret;
> > +
> > + if (!dmabuf_fd)
> > + return -EINVAL;
> > +
> > + ret = ioctl(fd, DMA_HEAP_IOCTL_ALLOC, &data);
> > + if (ret < 0)
> > + return ret;
> > + *dmabuf_fd = (int)data.fd;
> > + return ret;
> > +}
> > +
> > +/* The system heap is known to export dmabufs with support for cgroup tracking */
> > +int alloc_dmabuf_from_system_heap(struct __test_metadata *_metadata, size_t bytes)
> > +{
> > + int heap_fd = -1, dmabuf_fd = -1;
> > + static const char * const heap_path = "/dev/dma_heap/system";
> > +
> > + heap_fd = open(heap_path, O_RDONLY);
> > + if (heap_fd < 0) {
> > + TH_LOG("%s - open %s failed!\n", strerror(errno), heap_path);
> > + return -1;
> > + }
>
> Same question about root privileges?
>
> > +
> > + if (dmabuf_heap_alloc(heap_fd, bytes, &dmabuf_fd))
> > + TH_LOG("dmabuf allocation failed! - %s", strerror(errno));
> > + close(heap_fd);
> > +
> > + return dmabuf_fd;
> > +}
> > +
> > +int binder_request_dmabuf(int binder_fd)
> > +{
> > + int ret;
> > +
> > + /*
> > + * We just send an empty binder_buffer_object to initiate a transaction
> > + * with the context manager, who should respond with a single dmabuf
> > + * inside a binder_fd_array_object.
> > + */
> > +
> > + struct binder_buffer_object bbo = {
> > + .hdr.type = BINDER_TYPE_PTR,
> > + .flags = 0,
> > + .buffer = 0,
> > + .length = 0,
> > + .parent = 0, /* No parent */
> > + .parent_offset = 0 /* No parent */
> > + };
> > +
> > + binder_size_t offsets[] = {0};
> > +
> > + struct {
> > + int32_t cmd;
> > + struct binder_transaction_data btd;
> > + } __attribute__((packed)) bc = {
> > + .cmd = BC_TRANSACTION,
> > + .btd = {
> > + .target = { 0 },
> > + .cookie = 0,
> > + .code = BINDER_CODE,
> > + .flags = TF_ACCEPT_FDS, /* We expect a FDA in the reply */
> > + .data_size = sizeof(bbo),
> > + .offsets_size = sizeof(offsets),
> > + .data.ptr = {
> > + (binder_uintptr_t)&bbo,
> > + (binder_uintptr_t)offsets
> > + }
> > + },
> > + };
> > +
> > + struct {
> > + int32_t reply_noop;
> > + } __attribute__((packed)) br;
> > +
> > + ret = do_binder_write_read(binder_fd, &bc, sizeof(bc), &br, sizeof(br));
> > + if (ret >= sizeof(br) && expect_binder_reply(br.reply_noop, BR_NOOP)) {
> > + return -1;
> > + } else if (ret < sizeof(br)) {
> > + fprintf(stderr, "Not enough bytes in binder reply %d\n", ret);
> > + return -1;
> > + }
> > + return 0;
> > +}
> > +
> > +int send_dmabuf_reply(int binder_fd, struct binder_transaction_data *tr, int dmabuf_fd)
> > +{
> > + int ret;
> > + /*
> > + * The trailing 0 is to achieve the necessary alignment for the binder
> > + * buffer_size.
> > + */
> > + int fdarray[] = { dmabuf_fd, 0 };
> > +
> > + struct binder_buffer_object bbo = {
> > + .hdr.type = BINDER_TYPE_PTR,
> > + .flags = BINDER_BUFFER_FLAG_SENDER_NO_NEED,
> > + .buffer = (binder_uintptr_t)fdarray,
> > + .length = sizeof(fdarray),
> > + .parent = 0, /* No parent */
> > + .parent_offset = 0 /* No parent */
> > + };
> > +
> > + struct binder_fd_array_object bfdao = {
> > + .hdr.type = BINDER_TYPE_FDA,
> > + .num_fds = 1,
> > + .parent = 0, /* The binder_buffer_object */
> > + .parent_offset = 0 /* FDs follow immediately */
> > + };
> > +
> > + uint64_t sz = sizeof(fdarray);
> > + uint8_t data[sizeof(sz) + sizeof(bbo) + sizeof(bfdao)];
> > + binder_size_t offsets[] = {sizeof(sz), sizeof(sz)+sizeof(bbo)};
> > +
> > + memcpy(data, &sz, sizeof(sz));
> > + memcpy(data + sizeof(sz), &bbo, sizeof(bbo));
> > + memcpy(data + sizeof(sz) + sizeof(bbo), &bfdao, sizeof(bfdao));
> > +
> > + struct {
> > + int32_t cmd;
> > + struct binder_transaction_data_sg btd;
> > + } __attribute__((packed)) bc = {
> > + .cmd = BC_REPLY_SG,
> > + .btd.transaction_data = {
> > + .target = { tr->target.handle },
> > + .cookie = tr->cookie,
> > + .code = BINDER_CODE,
> > + .flags = 0,
> > + .data_size = sizeof(data),
> > + .offsets_size = sizeof(offsets),
> > + .data.ptr = {
> > + (binder_uintptr_t)data,
> > + (binder_uintptr_t)offsets
> > + }
> > + },
> > + .btd.buffers_size = sizeof(fdarray)
> > + };
> > +
> > + struct {
> > + int32_t reply_noop;
> > + } __attribute__((packed)) br;
> > +
> > + ret = do_binder_write_read(binder_fd, &bc, sizeof(bc), &br, sizeof(br));
> > + if (ret >= sizeof(br) && expect_binder_reply(br.reply_noop, BR_NOOP)) {
> > + return -1;
> > + } else if (ret < sizeof(br)) {
> > + fprintf(stderr, "Not enough bytes in binder reply %d\n", ret);
> > + return -1;
> > + }
> > + return 0;
> > +}
> > +
> > +struct binder_transaction_data *binder_wait_for_transaction(int binder_fd,
> > + uint32_t *readbuf,
> > + size_t readsize)
> > +{
> > + static const int MAX_EVENTS = 1, EPOLL_WAIT_TIME_MS = 3 * 1000;
> > + struct binder_reply {
> > + int32_t reply0;
> > + int32_t reply1;
> > + struct binder_transaction_data btd;
> > + } *br;
> > + struct binder_transaction_data *ret = NULL;
> > + struct epoll_event events[MAX_EVENTS];
> > + int epoll_fd, num_events, readcount;
> > + uint32_t bc[] = { BC_ENTER_LOOPER };
> > +
> > + do_binder_write_read(binder_fd, &bc, sizeof(bc), NULL, 0);
> > +
> > + epoll_fd = epoll_create1(EPOLL_CLOEXEC);
> > + if (epoll_fd == -1) {
> > + perror("epoll_create");
> > + return NULL;
> > + }
> > +
> > + events[0].events = EPOLLIN;
> > + if (epoll_ctl(epoll_fd, EPOLL_CTL_ADD, binder_fd, &events[0])) {
> > + perror("epoll_ctl add");
> > + goto err_close;
> > + }
> > +
> > + num_events = epoll_wait(epoll_fd, events, MAX_EVENTS, EPOLL_WAIT_TIME_MS);
> > + if (num_events < 0) {
> > + perror("epoll_wait");
> > + goto err_ctl;
> > + } else if (num_events == 0) {
> > + fprintf(stderr, "No events\n");
> > + goto err_ctl;
> > + }
> > +
> > + readcount = do_binder_write_read(binder_fd, NULL, 0, readbuf, readsize);
> > + fprintf(stderr, "Read %d bytes from binder\n", readcount);
> > +
> > + if (readcount < (int)sizeof(struct binder_reply)) {
> > + fprintf(stderr, "read_consumed not large enough\n");
> > + goto err_ctl;
> > + }
> > +
> > + br = (struct binder_reply *)readbuf;
> > + if (expect_binder_reply(br->reply0, BR_NOOP))
> > + goto err_ctl;
> > +
> > + if (br->reply1 == BR_TRANSACTION) {
> > + if (br->btd.code == BINDER_CODE)
> > + ret = &br->btd;
> > + else
> > + fprintf(stderr, "Received transaction with unexpected code: %u\n",
> > + br->btd.code);
> > + } else {
> > + expect_binder_reply(br->reply1, BR_TRANSACTION_COMPLETE);
> > + }
> > +
> > +err_ctl:
> > + if (epoll_ctl(epoll_fd, EPOLL_CTL_DEL, binder_fd, NULL))
> > + perror("epoll_ctl del");
> > +err_close:
> > + close(epoll_fd);
> > + return ret;
> > +}
> > +
> > +static int child_request_dmabuf_transfer(const char *cgroup, void *arg)
> > +{
> > + UNUSED(cgroup);
> > + int ret = -1;
> > + uint32_t readbuf[32];
> > + struct binderfs_ctx bfs_ctx = *(struct binderfs_ctx *)arg;
> > + struct binder_ctx b_ctx;
> > +
> > + fprintf(stderr, "Child PID: %d\n", getpid());
> > +
> > + b_ctx = open_binder(&bfs_ctx);
> > + if (b_ctx.fd < 0) {
> > + fprintf(stderr, "Child unable to open binder\n");
> > + return -1;
> > + }
> > +
> > + if (binder_request_dmabuf(b_ctx.fd))
> > + goto err;
> > +
> > + /* The child must stay alive until the binder reply is received */
> > + if (binder_wait_for_transaction(b_ctx.fd, readbuf, sizeof(readbuf)) == NULL)
> > + ret = 0;
> > +
> > + /*
> > + * We don't close the received dmabuf here so that the parent can
> > + * inspect the cgroup gpu memory charges to verify the charge transfer
> > + * completed successfully.
> > + */
> > +err:
> > + close_binder(&b_ctx);
> > + fprintf(stderr, "Child done\n");
> > + return ret;
> > +}
> > +
> > +TEST(gpu_cgroup_dmabuf_transfer)
> > +{
> > + static const char * const GPUMEM_FILENAME = "gpu.memory.current";
> > + static const size_t ONE_MiB = 1024 * 1024;
> > +
> > + int ret, dmabuf_fd;
> > + uint32_t readbuf[32];
> > + long memsize;
> > + pid_t child_pid;
> > + struct binderfs_ctx bfs_ctx;
> > + struct binder_ctx b_ctx;
> > + struct cgroup_ctx cg_ctx;
> > + struct binder_transaction_data *tr;
> > + struct flat_binder_object *fbo;
> > + struct binder_buffer_object *bbo;
> > +
>
> If root privileges are necessary - pls add a check here and skip.
>
> > + bfs_ctx = create_binderfs("testbinder");
> > + if (bfs_ctx.name == NULL)
> > + ksft_exit_skip("The Android binderfs filesystem is not available\n");
> > +
> > + cg_ctx = create_cgroups(_metadata);
> > + if (cg_ctx.root == NULL) {
> > + destroy_binderfs(&bfs_ctx);
> > + ksft_exit_skip("cgroup v2 isn't mounted\n");
> > + }
> > +
> > + ASSERT_EQ(cg_enter_current(cg_ctx.source), 0) {
> > + TH_LOG("Could not move parent to cgroup: %s", cg_ctx.source);
> > + goto err_cg;
> > + }
> > +
> > + dmabuf_fd = alloc_dmabuf_from_system_heap(_metadata, ONE_MiB);
> > + ASSERT_GE(dmabuf_fd, 0) {
> > + goto err_cg;
> > + }
> > + TH_LOG("Allocated dmabuf");
> > +
> > + memsize = cg_read_key_long(cg_ctx.source, GPUMEM_FILENAME, "system");
> > + ASSERT_EQ(memsize, ONE_MiB) {
> > + TH_LOG("GPU memory used after allocation: %ld but it should be %lu",
> > + memsize, (unsigned long)ONE_MiB);
> > + goto err_dmabuf;
> > + }
> > +
> > + b_ctx = open_binder(&bfs_ctx);
> > + ASSERT_GE(b_ctx.fd, 0) {
> > + TH_LOG("Parent unable to open binder");
> > + goto err_dmabuf;
> > + }
> > + TH_LOG("Opened binder at %s/%s", bfs_ctx.mountpoint, bfs_ctx.name);
> > +
> > + ASSERT_EQ(become_binder_context_manager(b_ctx.fd), 0) {
> > + TH_LOG("Cannot become context manager: %s", strerror(errno));
> > + goto err_binder;
> > + }
> > +
> > + child_pid = cg_run_nowait(cg_ctx.dest, child_request_dmabuf_transfer, &bfs_ctx);
> > + ASSERT_GT(child_pid, 0) {
> > + TH_LOG("Error forking: %s", strerror(errno));
> > + goto err_binder;
> > + }
> > +
> > + tr = binder_wait_for_transaction(b_ctx.fd, readbuf, sizeof(readbuf));
> > + ASSERT_NE(tr, NULL) {
> > + TH_LOG("Error receiving transaction request from child");
> > + goto err_child;
> > + }
> > + fbo = (struct flat_binder_object *)tr->data.ptr.buffer;
> > + ASSERT_EQ(fbo->hdr.type, BINDER_TYPE_PTR) {
> > + TH_LOG("Did not receive a buffer object from child");
> > + goto err_child;
> > + }
> > + bbo = (struct binder_buffer_object *)fbo;
> > + ASSERT_EQ(bbo->length, 0) {
> > + TH_LOG("Did not receive an empty buffer object from child");
> > + goto err_child;
> > + }
> > +
> > + TH_LOG("Received transaction from child");
> > + send_dmabuf_reply(b_ctx.fd, tr, dmabuf_fd);
> > +
> > + ASSERT_EQ(cg_read_key_long(cg_ctx.dest, GPUMEM_FILENAME, "system"), ONE_MiB) {
> > + TH_LOG("Destination cgroup does not have system charge!");
> > + goto err_child;
> > + }
> > + ASSERT_EQ(cg_read_key_long(cg_ctx.source, GPUMEM_FILENAME, "system"), 0) {
> > + TH_LOG("Source cgroup still has system charge!");
> > + goto err_child;
> > + }
> > + TH_LOG("Charge transfer succeeded!");
> > +
> > +err_child:
> > + waitpid(child_pid, &ret, 0);
> > + if (WIFEXITED(ret))
> > + TH_LOG("Child %d terminated with %d", child_pid, WEXITSTATUS(ret));
> > + else
> > + TH_LOG("Child terminated abnormally");
>
> What does this mean? What are the conditions that could cause this?
> Please include more info in the message.
This would be very unusual, but possible if the child is
(accidentally?) killed by a user or the low/out of memory killer. It
looks like I can add more information with the WIFSIGNALED and
WTERMSIG macros, so I will do that.
>
> > +err_binder:
> > + close_binder(&b_ctx);
> > +err_dmabuf:
> > + close(dmabuf_fd);
> > +err_cg:
> > + destroy_cgroups(_metadata, &cg_ctx);
> > + destroy_binderfs(&bfs_ctx);
> > +}
> > +
> > +TEST_HARNESS_MAIN
> >
>
> thanks,
> -- Shuah
Thanks for taking a look!
On Wed, Mar 9, 2022 at 8:52 AM T.J. Mercier <[email protected]> wrote:
>
> From: Hridya Valsaraju <[email protected]>
>
> This patch introduces a buffer flag BINDER_BUFFER_FLAG_SENDER_NO_NEED
> that a process sending an fd array to another process over binder IPC
> can set to relinquish ownership of the fds being sent for memory
> accounting purposes. If the flag is found to be set during the fd array
> translation and the fd is for a DMA-BUF, the buffer is uncharged from
> the sender's cgroup and charged to the receiving process's cgroup
> instead.
>
> It is up to the sending process to ensure that it closes the fds
> regardless of whether the transfer failed or succeeded.
>
> Most graphics shared memory allocations in Android are done by the
> graphics allocator HAL process. On requests from clients, the HAL process
> allocates memory and sends the fds to the clients over binder IPC.
> The graphics allocator HAL will not retain any references to the
> buffers. When the HAL sets the BINDER_BUFFER_FLAG_SENDER_NO_NEED for fd
> arrays holding DMA-BUF fds, the gpu cgroup controller will be able to
> correctly charge the buffers to the client processes instead of the
> graphics allocator HAL.
>
> Since this is a new feature exposed to userspace, the kernel and userspace
> must be compatible for the accounting to work for transfers. In all cases
> the allocation and transport of DMA buffers via binder will succeed, but
> the transfer accounting works only when the kernel supports this feature
> and userspace makes use of it. The possible scenarios are detailed below:
>
> 1. new kernel + old userspace
> The kernel supports the feature but userspace does not use it. The old
> userspace won't mount the new cgroup controller, accounting is not
> performed, charge is not transferred.
>
> 2. old kernel + new userspace
> The new cgroup controller is not supported by the kernel, accounting is
> not performed, charge is not transferred.
>
> 3. old kernel + old userspace
> Same as #2
>
> 4. new kernel + new userspace
> Cgroup is mounted, feature is supported and used.
>
> Signed-off-by: Hridya Valsaraju <[email protected]>
> Signed-off-by: T.J. Mercier <[email protected]>
Acked-by: Todd Kjos <[email protected]>
>
> ---
> v3 changes
> Remove android from title per Todd Kjos.
>
> Use more common dual author commit message format per John Stultz.
>
> Include details on behavior for all combinations of kernel/userspace
> versions in changelog (thanks Suren Baghdasaryan) per Greg Kroah-Hartman.
>
> v2 changes
> Move dma-buf cgroup charge transfer from a dma_buf_op defined by every
> heap to a single dma-buf function for all heaps per Daniel Vetter and
> Christian König.
> ---
> drivers/android/binder.c | 26 ++++++++++++++++++++++++++
> include/uapi/linux/android/binder.h | 1 +
> 2 files changed, 27 insertions(+)
>
> diff --git a/drivers/android/binder.c b/drivers/android/binder.c
> index 8351c5638880..f50d88ded188 100644
> --- a/drivers/android/binder.c
> +++ b/drivers/android/binder.c
> @@ -42,6 +42,7 @@
>
> #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
>
> +#include <linux/dma-buf.h>
> #include <linux/fdtable.h>
> #include <linux/file.h>
> #include <linux/freezer.h>
> @@ -2482,8 +2483,10 @@ static int binder_translate_fd_array(struct list_head *pf_head,
> {
> binder_size_t fdi, fd_buf_size;
> binder_size_t fda_offset;
> + bool transfer_gpu_charge = false;
> const void __user *sender_ufda_base;
> struct binder_proc *proc = thread->proc;
> + struct binder_proc *target_proc = t->to_proc;
> int ret;
>
> fd_buf_size = sizeof(u32) * fda->num_fds;
> @@ -2521,8 +2524,15 @@ static int binder_translate_fd_array(struct list_head *pf_head,
> if (ret)
> return ret;
>
> + if (IS_ENABLED(CONFIG_CGROUP_GPU) &&
> > + (parent->flags & BINDER_BUFFER_FLAG_SENDER_NO_NEED))
> + transfer_gpu_charge = true;
> +
> for (fdi = 0; fdi < fda->num_fds; fdi++) {
> u32 fd;
> + struct dma_buf *dmabuf;
> + struct gpucg *gpucg;
> +
> binder_size_t offset = fda_offset + fdi * sizeof(fd);
> binder_size_t sender_uoffset = fdi * sizeof(fd);
>
> @@ -2532,6 +2542,22 @@ static int binder_translate_fd_array(struct list_head *pf_head,
> in_reply_to);
> if (ret)
> return ret > 0 ? -EINVAL : ret;
> +
> + if (!transfer_gpu_charge)
> + continue;
> +
> + dmabuf = dma_buf_get(fd);
> + if (IS_ERR(dmabuf))
> + continue;
> +
> + gpucg = gpucg_get(target_proc->tsk);
> + ret = dma_buf_charge_transfer(dmabuf, gpucg);
> + if (ret) {
> > + pr_warn("%d:%d Unable to transfer DMA-BUF fd charge to %d\n",
> + proc->pid, thread->pid, target_proc->pid);
> + gpucg_put(gpucg);
> + }
> + dma_buf_put(dmabuf);
> }
> return 0;
> }
> diff --git a/include/uapi/linux/android/binder.h b/include/uapi/linux/android/binder.h
> index 3246f2c74696..169fd5069a1a 100644
> --- a/include/uapi/linux/android/binder.h
> +++ b/include/uapi/linux/android/binder.h
> @@ -137,6 +137,7 @@ struct binder_buffer_object {
>
> enum {
> BINDER_BUFFER_FLAG_HAS_PARENT = 0x01,
> + BINDER_BUFFER_FLAG_SENDER_NO_NEED = 0x02,
> };
>
> /* struct binder_fd_array_object - object describing an array of fds in a buffer
> --
> 2.35.1.616.g0bdcbb4464-goog
>
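Not part of the patch, but a minimal userspace sketch of how a process could distinguish scenario 1/4 from 2/3 above at runtime, assuming cgroup v2 is mounted at /sys/fs/cgroup (the function name is illustrative only):

```c
#include <stdio.h>
#include <string.h>

/* Sketch: return 1 if the gpu cgroup controller from this series is
 * exposed by the running kernel, 0 otherwise. Assumes cgroup v2 is
 * mounted at /sys/fs/cgroup. */
static int gpu_controller_available(void)
{
	char buf[4096] = { 0 };
	FILE *f = fopen("/sys/fs/cgroup/cgroup.controllers", "r");
	char *tok;

	if (!f)
		return 0;
	if (!fgets(buf, sizeof(buf), f))
		buf[0] = '\0';
	fclose(f);

	/* The file holds one space-separated list of controller names. */
	for (tok = strtok(buf, " \n"); tok; tok = strtok(NULL, " \n"))
		if (!strcmp(tok, "gpu"))
			return 1;
	return 0;
}
```

In every scenario the DMA-BUF allocation and binder transfer themselves succeed; a probe like this only tells userspace whether charge-transfer accounting can be expected to take effect.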
On Wed, Mar 9, 2022 at 8:53 AM T.J. Mercier <[email protected]> wrote:
>
> This test verifies that the cgroup GPU memory charge is transferred
> correctly when a dmabuf is passed between processes in two different
> cgroups and the sender specifies BINDER_BUFFER_FLAG_SENDER_NO_NEED in the
> binder transaction data containing the dmabuf file descriptor.
>
> Signed-off-by: T.J. Mercier <[email protected]>
Reviewed-by: Todd Kjos <[email protected]>
for the binder driver interactions. Need Christian to take a look at
the binderfs interactions.
> ---
> .../selftests/drivers/android/binder/Makefile | 8 +
> .../drivers/android/binder/binder_util.c | 254 +++++++++
> .../drivers/android/binder/binder_util.h | 32 ++
> .../selftests/drivers/android/binder/config | 4 +
> .../binder/test_dmabuf_cgroup_transfer.c | 480 ++++++++++++++++++
> 5 files changed, 778 insertions(+)
> create mode 100644 tools/testing/selftests/drivers/android/binder/Makefile
> create mode 100644 tools/testing/selftests/drivers/android/binder/binder_util.c
> create mode 100644 tools/testing/selftests/drivers/android/binder/binder_util.h
> create mode 100644 tools/testing/selftests/drivers/android/binder/config
> create mode 100644 tools/testing/selftests/drivers/android/binder/test_dmabuf_cgroup_transfer.c
>
> diff --git a/tools/testing/selftests/drivers/android/binder/Makefile b/tools/testing/selftests/drivers/android/binder/Makefile
> new file mode 100644
> index 000000000000..726439d10675
> --- /dev/null
> +++ b/tools/testing/selftests/drivers/android/binder/Makefile
> @@ -0,0 +1,8 @@
> +# SPDX-License-Identifier: GPL-2.0
> +CFLAGS += -Wall
> +
> +TEST_GEN_PROGS = test_dmabuf_cgroup_transfer
> +
> +include ../../../lib.mk
> +
> +$(OUTPUT)/test_dmabuf_cgroup_transfer: ../../../cgroup/cgroup_util.c binder_util.c
> diff --git a/tools/testing/selftests/drivers/android/binder/binder_util.c b/tools/testing/selftests/drivers/android/binder/binder_util.c
> new file mode 100644
> index 000000000000..c9dcf5b9d42b
> --- /dev/null
> +++ b/tools/testing/selftests/drivers/android/binder/binder_util.c
> @@ -0,0 +1,254 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +#include "binder_util.h"
> +
> +#include <errno.h>
> +#include <fcntl.h>
> +#include <stdio.h>
> +#include <stdlib.h>
> +#include <string.h>
> +#include <unistd.h>
> +#include <sys/ioctl.h>
> +#include <sys/mman.h>
> +#include <sys/mount.h>
> +
> +#include <linux/limits.h>
> +#include <linux/android/binder.h>
> +#include <linux/android/binderfs.h>
> +
> +static const size_t BINDER_MMAP_SIZE = 64 * 1024;
> +
> +static void binderfs_unmount(const char *mountpoint)
> +{
> + if (umount2(mountpoint, MNT_DETACH))
> + fprintf(stderr, "Failed to unmount binderfs at %s: %s\n",
> + mountpoint, strerror(errno));
> + else
> + fprintf(stderr, "Binderfs unmounted: %s\n", mountpoint);
> +
> + if (rmdir(mountpoint))
> + fprintf(stderr, "Failed to remove binderfs mount %s: %s\n",
> + mountpoint, strerror(errno));
> + else
> + fprintf(stderr, "Binderfs mountpoint destroyed: %s\n", mountpoint);
> +}
> +
> +struct binderfs_ctx create_binderfs(const char *name)
> +{
> + int fd, ret, saved_errno;
> + struct binderfs_device device = { 0 };
> + struct binderfs_ctx ctx = { 0 };
> +
> + /*
> + * P_tmpdir is set to "/tmp/" on Android platforms where Binder is most
> + * commonly used, but this path does not actually exist on Android. We
> + * will first try using "/data/local/tmp" and fallback to P_tmpdir if
> + * that fails for non-Android platforms.
> + */
> + static const char tmpdir[] = "/data/local/tmp";
> + static const size_t MAX_TMPDIR_SIZE =
> + sizeof(tmpdir) > sizeof(P_tmpdir) ?
> + sizeof(tmpdir) : sizeof(P_tmpdir);
> + static const char template[] = "/binderfs_XXXXXX";
> +
> + char *mkdtemp_result;
> + char binderfs_mntpt[MAX_TMPDIR_SIZE + sizeof(template)];
> + char device_path[MAX_TMPDIR_SIZE + sizeof(template) + BINDERFS_MAX_NAME];
> +
> + snprintf(binderfs_mntpt, sizeof(binderfs_mntpt), "%s%s", tmpdir, template);
> +
> + mkdtemp_result = mkdtemp(binderfs_mntpt);
> + if (mkdtemp_result == NULL) {
> + fprintf(stderr, "Failed to create binderfs mountpoint at %s: %s.\n",
> + binderfs_mntpt, strerror(errno));
> + fprintf(stderr, "Trying fallback mountpoint...\n");
> + snprintf(binderfs_mntpt, sizeof(binderfs_mntpt), "%s%s", P_tmpdir, template);
> + if (mkdtemp(binderfs_mntpt) == NULL) {
> + fprintf(stderr, "Failed to create binderfs mountpoint at %s: %s\n",
> + binderfs_mntpt, strerror(errno));
> + return ctx;
> + }
> + }
> + fprintf(stderr, "Binderfs mountpoint created at %s\n", binderfs_mntpt);
> +
> + if (mount(NULL, binderfs_mntpt, "binder", 0, 0)) {
> + perror("Could not mount binderfs");
> + rmdir(binderfs_mntpt);
> + return ctx;
> + }
> + fprintf(stderr, "Binderfs mounted at %s\n", binderfs_mntpt);
> +
> + strncpy(device.name, name, sizeof(device.name));
> + snprintf(device_path, sizeof(device_path), "%s/binder-control", binderfs_mntpt);
> + fd = open(device_path, O_RDONLY | O_CLOEXEC);
> + if (fd < 0) {
> + perror("Failed to open binder-control device");
> + binderfs_unmount(binderfs_mntpt);
> + return ctx;
> + }
> +
> + ret = ioctl(fd, BINDER_CTL_ADD, &device);
> + saved_errno = errno;
> + close(fd);
> + errno = saved_errno;
> + if (ret) {
> + perror("Failed to allocate new binder device");
> + binderfs_unmount(binderfs_mntpt);
> + return ctx;
> + }
> +
> + fprintf(stderr, "Allocated new binder device with major %d, minor %d, and name %s at %s\n",
> + device.major, device.minor, device.name, binderfs_mntpt);
> +
> + ctx.name = strdup(name);
> + ctx.mountpoint = strdup(binderfs_mntpt);
> + return ctx;
> +}
> +
> +void destroy_binderfs(struct binderfs_ctx *ctx)
> +{
> + char path[PATH_MAX];
> +
> + snprintf(path, sizeof(path), "%s/%s", ctx->mountpoint, ctx->name);
> +
> + if (unlink(path))
> + fprintf(stderr, "Failed to unlink binder device %s: %s\n", path, strerror(errno));
> + else
> + fprintf(stderr, "Destroyed binder %s at %s\n", ctx->name, ctx->mountpoint);
> +
> + binderfs_unmount(ctx->mountpoint);
> +
> + free(ctx->name);
> + free(ctx->mountpoint);
> +}
> +
> +struct binder_ctx open_binder(struct binderfs_ctx *bfs_ctx)
> +{
> + struct binder_ctx ctx = {.fd = -1, .memory = NULL};
> + char path[PATH_MAX];
> +
> + snprintf(path, sizeof(path), "%s/%s", bfs_ctx->mountpoint, bfs_ctx->name);
> + ctx.fd = open(path, O_RDWR | O_NONBLOCK | O_CLOEXEC);
> + if (ctx.fd < 0) {
> + fprintf(stderr, "Error opening binder device %s: %s\n", path, strerror(errno));
> + return ctx;
> + }
> +
> + ctx.memory = mmap(NULL, BINDER_MMAP_SIZE, PROT_READ, MAP_SHARED, ctx.fd, 0);
> + if (ctx.memory == MAP_FAILED) {
> + perror("Error mapping binder memory");
> + ctx.memory = NULL;
> + close(ctx.fd);
> + ctx.fd = -1;
> + }
> +
> + return ctx;
> +}
> +
> +void close_binder(struct binder_ctx *ctx)
> +{
> + if (munmap(ctx->memory, BINDER_MMAP_SIZE))
> + perror("Failed to unmap binder memory");
> + ctx->memory = NULL;
> +
> + if (close(ctx->fd))
> + perror("Failed to close binder");
> + ctx->fd = -1;
> +}
> +
> +int become_binder_context_manager(int binder_fd)
> +{
> + return ioctl(binder_fd, BINDER_SET_CONTEXT_MGR, 0);
> +}
> +
> +int do_binder_write_read(int binder_fd, void *writebuf, binder_size_t writesize,
> + void *readbuf, binder_size_t readsize)
> +{
> + int err;
> + struct binder_write_read bwr = {
> + .write_buffer = (binder_uintptr_t)writebuf,
> + .write_size = writesize,
> + .read_buffer = (binder_uintptr_t)readbuf,
> + .read_size = readsize
> + };
> +
> + do {
> + if (ioctl(binder_fd, BINDER_WRITE_READ, &bwr) >= 0)
> + err = 0;
> + else
> + err = -errno;
> + } while (err == -EINTR);
> +
> + if (err < 0) {
> + perror("BINDER_WRITE_READ");
> + return -1;
> + }
> +
> + if (bwr.write_consumed < writesize) {
> + fprintf(stderr, "Binder did not consume full write buffer %llu %llu\n",
> + bwr.write_consumed, writesize);
> + return -1;
> + }
> +
> + return bwr.read_consumed;
> +}
> +
> +static const char *reply_string(int cmd)
> +{
> + switch (cmd) {
> + case BR_ERROR:
> + return "BR_ERROR";
> + case BR_OK:
> + return "BR_OK";
> + case BR_TRANSACTION_SEC_CTX:
> + return "BR_TRANSACTION_SEC_CTX";
> + case BR_TRANSACTION:
> + return "BR_TRANSACTION";
> + case BR_REPLY:
> + return "BR_REPLY";
> + case BR_ACQUIRE_RESULT:
> + return "BR_ACQUIRE_RESULT";
> + case BR_DEAD_REPLY:
> + return "BR_DEAD_REPLY";
> + case BR_TRANSACTION_COMPLETE:
> + return "BR_TRANSACTION_COMPLETE";
> + case BR_INCREFS:
> + return "BR_INCREFS";
> + case BR_ACQUIRE:
> + return "BR_ACQUIRE";
> + case BR_RELEASE:
> + return "BR_RELEASE";
> + case BR_DECREFS:
> + return "BR_DECREFS";
> + case BR_ATTEMPT_ACQUIRE:
> + return "BR_ATTEMPT_ACQUIRE";
> + case BR_NOOP:
> + return "BR_NOOP";
> + case BR_SPAWN_LOOPER:
> + return "BR_SPAWN_LOOPER";
> + case BR_FINISHED:
> + return "BR_FINISHED";
> + case BR_DEAD_BINDER:
> + return "BR_DEAD_BINDER";
> + case BR_CLEAR_DEATH_NOTIFICATION_DONE:
> + return "BR_CLEAR_DEATH_NOTIFICATION_DONE";
> + case BR_FAILED_REPLY:
> + return "BR_FAILED_REPLY";
> + case BR_FROZEN_REPLY:
> + return "BR_FROZEN_REPLY";
> + case BR_ONEWAY_SPAM_SUSPECT:
> + return "BR_ONEWAY_SPAM_SUSPECT";
> + default:
> + return "Unknown";
> + }
> +}
> +
> +int expect_binder_reply(int32_t actual, int32_t expected)
> +{
> + if (actual != expected) {
> + fprintf(stderr, "Expected %s but received %s\n",
> + reply_string(expected), reply_string(actual));
> + return -1;
> + }
> + return 0;
> +}
> +
> diff --git a/tools/testing/selftests/drivers/android/binder/binder_util.h b/tools/testing/selftests/drivers/android/binder/binder_util.h
> new file mode 100644
> index 000000000000..807f5abe987e
> --- /dev/null
> +++ b/tools/testing/selftests/drivers/android/binder/binder_util.h
> @@ -0,0 +1,32 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +#ifndef SELFTEST_BINDER_UTIL_H
> +#define SELFTEST_BINDER_UTIL_H
> +
> +#include <stdint.h>
> +
> +#include <linux/android/binder.h>
> +
> +struct binderfs_ctx {
> + char *name;
> + char *mountpoint;
> +};
> +
> +struct binder_ctx {
> + int fd;
> + void *memory;
> +};
> +
> +struct binderfs_ctx create_binderfs(const char *name);
> +void destroy_binderfs(struct binderfs_ctx *ctx);
> +
> +struct binder_ctx open_binder(struct binderfs_ctx *bfs_ctx);
> +void close_binder(struct binder_ctx *ctx);
> +
> +int become_binder_context_manager(int binder_fd);
> +
> +int do_binder_write_read(int binder_fd, void *writebuf, binder_size_t writesize,
> + void *readbuf, binder_size_t readsize);
> +
> +int expect_binder_reply(int32_t actual, int32_t expected);
> +#endif
> diff --git a/tools/testing/selftests/drivers/android/binder/config b/tools/testing/selftests/drivers/android/binder/config
> new file mode 100644
> index 000000000000..fcc5f8f693b3
> --- /dev/null
> +++ b/tools/testing/selftests/drivers/android/binder/config
> @@ -0,0 +1,4 @@
> +CONFIG_CGROUP_GPU=y
> +CONFIG_ANDROID=y
> +CONFIG_ANDROID_BINDERFS=y
> +CONFIG_ANDROID_BINDER_IPC=y
> diff --git a/tools/testing/selftests/drivers/android/binder/test_dmabuf_cgroup_transfer.c b/tools/testing/selftests/drivers/android/binder/test_dmabuf_cgroup_transfer.c
> new file mode 100644
> index 000000000000..9b952ab401cc
> --- /dev/null
> +++ b/tools/testing/selftests/drivers/android/binder/test_dmabuf_cgroup_transfer.c
> @@ -0,0 +1,480 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +/*
> + * This test verifies that the cgroup GPU memory charge is transferred correctly
> + * when a dmabuf is passed between processes in two different cgroups and the
> + * sender specifies BINDER_BUFFER_FLAG_SENDER_NO_NEED in the binder transaction
> + * data containing the dmabuf file descriptor.
> + *
> + * The gpu_cgroup_dmabuf_transfer test function becomes the binder context
> + * manager, then forks a child who initiates a transaction with the context
> + * manager by specifying a target of 0. The context manager reply contains a
> + * dmabuf file descriptor which was allocated by the gpu_cgroup_dmabuf_transfer
> + * test function, but should be charged to the child cgroup after the binder
> + * transaction.
> + */
> +
> +#include <errno.h>
> +#include <fcntl.h>
> +#include <stddef.h>
> +#include <stdint.h>
> +#include <stdio.h>
> +#include <stdlib.h>
> +#include <string.h>
> +#include <sys/epoll.h>
> +#include <sys/ioctl.h>
> +#include <sys/types.h>
> +#include <sys/wait.h>
> +
> +#include "binder_util.h"
> +#include "../../../cgroup/cgroup_util.h"
> +#include "../../../kselftest.h"
> +#include "../../../kselftest_harness.h"
> +
> +#include <linux/limits.h>
> +#include <linux/dma-heap.h>
> +#include <linux/android/binder.h>
> +
> +#define UNUSED(x) ((void)(x))
> +
> +static const unsigned int BINDER_CODE = 8675309; /* Any number will work here */
> +
> +struct cgroup_ctx {
> + char *root;
> + char *source;
> + char *dest;
> +};
> +
> +void destroy_cgroups(struct __test_metadata *_metadata, struct cgroup_ctx *ctx)
> +{
> + if (ctx->source != NULL) {
> + TH_LOG("Destroying cgroup: %s", ctx->source);
> + rmdir(ctx->source);
> + free(ctx->source);
> + }
> +
> + if (ctx->dest != NULL) {
> + TH_LOG("Destroying cgroup: %s", ctx->dest);
> + rmdir(ctx->dest);
> + free(ctx->dest);
> + }
> +
> + free(ctx->root);
> + ctx->root = ctx->source = ctx->dest = NULL;
> +}
> +
> +struct cgroup_ctx create_cgroups(struct __test_metadata *_metadata)
> +{
> + struct cgroup_ctx ctx = {0};
> + char root[PATH_MAX], *tmp;
> + static const char template[] = "/gpucg_XXXXXX";
> +
> + if (cg_find_unified_root(root, sizeof(root))) {
> + TH_LOG("Could not find cgroups root");
> + return ctx;
> + }
> +
> + if (cg_read_strstr(root, "cgroup.controllers", "gpu")) {
> + TH_LOG("Could not find GPU controller");
> + return ctx;
> + }
> +
> + if (cg_write(root, "cgroup.subtree_control", "+gpu")) {
> + TH_LOG("Could not enable GPU controller");
> + return ctx;
> + }
> +
> + ctx.root = strdup(root);
> +
> + snprintf(root, sizeof(root), "%s/%s", ctx.root, template);
> + tmp = mkdtemp(root);
> + if (tmp == NULL) {
> + TH_LOG("%s - Could not create source cgroup", strerror(errno));
> + destroy_cgroups(_metadata, &ctx);
> + return ctx;
> + }
> + ctx.source = strdup(tmp);
> +
> + snprintf(root, sizeof(root), "%s/%s", ctx.root, template);
> + tmp = mkdtemp(root);
> + if (tmp == NULL) {
> + TH_LOG("%s - Could not create destination cgroup", strerror(errno));
> + destroy_cgroups(_metadata, &ctx);
> + return ctx;
> + }
> + ctx.dest = strdup(tmp);
> +
> + TH_LOG("Created cgroups: %s %s", ctx.source, ctx.dest);
> +
> + return ctx;
> +}
> +
> +int dmabuf_heap_alloc(int fd, size_t len, int *dmabuf_fd)
> +{
> + struct dma_heap_allocation_data data = {
> + .len = len,
> + .fd = 0,
> + .fd_flags = O_RDONLY | O_CLOEXEC,
> + .heap_flags = 0,
> + };
> + int ret;
> +
> + if (!dmabuf_fd)
> + return -EINVAL;
> +
> + ret = ioctl(fd, DMA_HEAP_IOCTL_ALLOC, &data);
> + if (ret < 0)
> + return ret;
> + *dmabuf_fd = (int)data.fd;
> + return ret;
> +}
> +
> +/* The system heap is known to export dmabufs with support for cgroup tracking */
> +int alloc_dmabuf_from_system_heap(struct __test_metadata *_metadata, size_t bytes)
> +{
> + int heap_fd = -1, dmabuf_fd = -1;
> + static const char * const heap_path = "/dev/dma_heap/system";
> +
> + heap_fd = open(heap_path, O_RDONLY);
> + if (heap_fd < 0) {
> + TH_LOG("%s - open %s failed!\n", strerror(errno), heap_path);
> + return -1;
> + }
> +
> + if (dmabuf_heap_alloc(heap_fd, bytes, &dmabuf_fd))
> + TH_LOG("dmabuf allocation failed! - %s", strerror(errno));
> + close(heap_fd);
> +
> + return dmabuf_fd;
> +}
> +
> +int binder_request_dmabuf(int binder_fd)
> +{
> + int ret;
> +
> + /*
> + * We just send an empty binder_buffer_object to initiate a transaction
> + * with the context manager, who should respond with a single dmabuf
> + * inside a binder_fd_array_object.
> + */
> +
> + struct binder_buffer_object bbo = {
> + .hdr.type = BINDER_TYPE_PTR,
> + .flags = 0,
> + .buffer = 0,
> + .length = 0,
> + .parent = 0, /* No parent */
> + .parent_offset = 0 /* No parent */
> + };
> +
> + binder_size_t offsets[] = {0};
> +
> + struct {
> + int32_t cmd;
> + struct binder_transaction_data btd;
> + } __attribute__((packed)) bc = {
> + .cmd = BC_TRANSACTION,
> + .btd = {
> + .target = { 0 },
> + .cookie = 0,
> + .code = BINDER_CODE,
> + .flags = TF_ACCEPT_FDS, /* We expect a FDA in the reply */
> + .data_size = sizeof(bbo),
> + .offsets_size = sizeof(offsets),
> + .data.ptr = {
> + (binder_uintptr_t)&bbo,
> + (binder_uintptr_t)offsets
> + }
> + },
> + };
> +
> + struct {
> + int32_t reply_noop;
> + } __attribute__((packed)) br;
> +
> + ret = do_binder_write_read(binder_fd, &bc, sizeof(bc), &br, sizeof(br));
> + /* ret is negative on error, so compare against a signed size */
> + if (ret < (int)sizeof(br)) {
> + fprintf(stderr, "Not enough bytes in binder reply %d\n", ret);
> + return -1;
> + }
> + if (expect_binder_reply(br.reply_noop, BR_NOOP))
> + return -1;
> + return 0;
> +}
> +
> +int send_dmabuf_reply(int binder_fd, struct binder_transaction_data *tr, int dmabuf_fd)
> +{
> + int ret;
> + /*
> + * The trailing 0 is to achieve the necessary alignment for the binder
> + * buffer_size.
> + */
> + int fdarray[] = { dmabuf_fd, 0 };
> +
> + struct binder_buffer_object bbo = {
> + .hdr.type = BINDER_TYPE_PTR,
> + .flags = BINDER_BUFFER_FLAG_SENDER_NO_NEED,
> + .buffer = (binder_uintptr_t)fdarray,
> + .length = sizeof(fdarray),
> + .parent = 0, /* No parent */
> + .parent_offset = 0 /* No parent */
> + };
> +
> + struct binder_fd_array_object bfdao = {
> + .hdr.type = BINDER_TYPE_FDA,
> + .num_fds = 1,
> + .parent = 0, /* The binder_buffer_object */
> + .parent_offset = 0 /* FDs follow immediately */
> + };
> +
> + uint64_t sz = sizeof(fdarray);
> + uint8_t data[sizeof(sz) + sizeof(bbo) + sizeof(bfdao)];
> + binder_size_t offsets[] = {sizeof(sz), sizeof(sz)+sizeof(bbo)};
> +
> + memcpy(data, &sz, sizeof(sz));
> + memcpy(data + sizeof(sz), &bbo, sizeof(bbo));
> + memcpy(data + sizeof(sz) + sizeof(bbo), &bfdao, sizeof(bfdao));
> +
> + struct {
> + int32_t cmd;
> + struct binder_transaction_data_sg btd;
> + } __attribute__((packed)) bc = {
> + .cmd = BC_REPLY_SG,
> + .btd.transaction_data = {
> + .target = { tr->target.handle },
> + .cookie = tr->cookie,
> + .code = BINDER_CODE,
> + .flags = 0,
> + .data_size = sizeof(data),
> + .offsets_size = sizeof(offsets),
> + .data.ptr = {
> + (binder_uintptr_t)data,
> + (binder_uintptr_t)offsets
> + }
> + },
> + .btd.buffers_size = sizeof(fdarray)
> + };
> +
> + struct {
> + int32_t reply_noop;
> + } __attribute__((packed)) br;
> +
> + ret = do_binder_write_read(binder_fd, &bc, sizeof(bc), &br, sizeof(br));
> + /* ret is negative on error, so compare against a signed size */
> + if (ret < (int)sizeof(br)) {
> + fprintf(stderr, "Not enough bytes in binder reply %d\n", ret);
> + return -1;
> + }
> + if (expect_binder_reply(br.reply_noop, BR_NOOP))
> + return -1;
> + return 0;
> +}
> +
> +struct binder_transaction_data *binder_wait_for_transaction(int binder_fd,
> + uint32_t *readbuf,
> + size_t readsize)
> +{
> + static const int MAX_EVENTS = 1, EPOLL_WAIT_TIME_MS = 3 * 1000;
> + struct binder_reply {
> + int32_t reply0;
> + int32_t reply1;
> + struct binder_transaction_data btd;
> + } *br;
> + struct binder_transaction_data *ret = NULL;
> + struct epoll_event events[MAX_EVENTS];
> + int epoll_fd, num_events, readcount;
> + uint32_t bc[] = { BC_ENTER_LOOPER };
> +
> + do_binder_write_read(binder_fd, &bc, sizeof(bc), NULL, 0);
> +
> + epoll_fd = epoll_create1(EPOLL_CLOEXEC);
> + if (epoll_fd == -1) {
> + perror("epoll_create");
> + return NULL;
> + }
> +
> + events[0].events = EPOLLIN;
> + if (epoll_ctl(epoll_fd, EPOLL_CTL_ADD, binder_fd, &events[0])) {
> + perror("epoll_ctl add");
> + goto err_close;
> + }
> +
> + num_events = epoll_wait(epoll_fd, events, MAX_EVENTS, EPOLL_WAIT_TIME_MS);
> + if (num_events < 0) {
> + perror("epoll_wait");
> + goto err_ctl;
> + } else if (num_events == 0) {
> + fprintf(stderr, "No events\n");
> + goto err_ctl;
> + }
> +
> + readcount = do_binder_write_read(binder_fd, NULL, 0, readbuf, readsize);
> + fprintf(stderr, "Read %d bytes from binder\n", readcount);
> +
> + if (readcount < (int)sizeof(struct binder_reply)) {
> + fprintf(stderr, "read_consumed not large enough\n");
> + goto err_ctl;
> + }
> +
> + br = (struct binder_reply *)readbuf;
> + if (expect_binder_reply(br->reply0, BR_NOOP))
> + goto err_ctl;
> +
> + if (br->reply1 == BR_TRANSACTION) {
> + if (br->btd.code == BINDER_CODE)
> + ret = &br->btd;
> + else
> + fprintf(stderr, "Received transaction with unexpected code: %u\n",
> + br->btd.code);
> + } else {
> + expect_binder_reply(br->reply1, BR_TRANSACTION_COMPLETE);
> + }
> +
> +err_ctl:
> + if (epoll_ctl(epoll_fd, EPOLL_CTL_DEL, binder_fd, NULL))
> + perror("epoll_ctl del");
> +err_close:
> + close(epoll_fd);
> + return ret;
> +}
> +
> +static int child_request_dmabuf_transfer(const char *cgroup, void *arg)
> +{
> + UNUSED(cgroup);
> + int ret = -1;
> + uint32_t readbuf[32];
> + struct binderfs_ctx bfs_ctx = *(struct binderfs_ctx *)arg;
> + struct binder_ctx b_ctx;
> +
> + fprintf(stderr, "Child PID: %d\n", getpid());
> +
> + b_ctx = open_binder(&bfs_ctx);
> + if (b_ctx.fd < 0) {
> + fprintf(stderr, "Child unable to open binder\n");
> + return -1;
> + }
> +
> + if (binder_request_dmabuf(b_ctx.fd))
> + goto err;
> +
> + /* The child must stay alive until the binder reply is received */
> + if (binder_wait_for_transaction(b_ctx.fd, readbuf, sizeof(readbuf)) == NULL)
> + ret = 0;
> +
> + /*
> + * We don't close the received dmabuf here so that the parent can
> + * inspect the cgroup gpu memory charges to verify the charge transfer
> + * completed successfully.
> + */
> +err:
> + close_binder(&b_ctx);
> + fprintf(stderr, "Child done\n");
> + return ret;
> +}
> +
> +TEST(gpu_cgroup_dmabuf_transfer)
> +{
> + static const char * const GPUMEM_FILENAME = "gpu.memory.current";
> + static const size_t ONE_MiB = 1024 * 1024;
> +
> + int ret, dmabuf_fd;
> + uint32_t readbuf[32];
> + long memsize;
> + pid_t child_pid;
> + struct binderfs_ctx bfs_ctx;
> + struct binder_ctx b_ctx;
> + struct cgroup_ctx cg_ctx;
> + struct binder_transaction_data *tr;
> + struct flat_binder_object *fbo;
> + struct binder_buffer_object *bbo;
> +
> + bfs_ctx = create_binderfs("testbinder");
> + if (bfs_ctx.name == NULL)
> + ksft_exit_skip("The Android binderfs filesystem is not available\n");
> +
> + cg_ctx = create_cgroups(_metadata);
> + if (cg_ctx.root == NULL) {
> + destroy_binderfs(&bfs_ctx);
> + ksft_exit_skip("cgroup v2 isn't mounted\n");
> + }
> +
> + ASSERT_EQ(cg_enter_current(cg_ctx.source), 0) {
> + TH_LOG("Could not move parent to cgroup: %s", cg_ctx.source);
> + goto err_cg;
> + }
> +
> + dmabuf_fd = alloc_dmabuf_from_system_heap(_metadata, ONE_MiB);
> + ASSERT_GE(dmabuf_fd, 0) {
> + goto err_cg;
> + }
> + TH_LOG("Allocated dmabuf");
> +
> + memsize = cg_read_key_long(cg_ctx.source, GPUMEM_FILENAME, "system");
> + ASSERT_EQ(memsize, ONE_MiB) {
> + TH_LOG("GPU memory used after allocation: %ld but it should be %lu",
> + memsize, (unsigned long)ONE_MiB);
> + goto err_dmabuf;
> + }
> +
> + b_ctx = open_binder(&bfs_ctx);
> + ASSERT_GE(b_ctx.fd, 0) {
> + TH_LOG("Parent unable to open binder");
> + goto err_dmabuf;
> + }
> + TH_LOG("Opened binder at %s/%s", bfs_ctx.mountpoint, bfs_ctx.name);
> +
> + ASSERT_EQ(become_binder_context_manager(b_ctx.fd), 0) {
> + TH_LOG("Cannot become context manager: %s", strerror(errno));
> + goto err_binder;
> + }
> +
> + child_pid = cg_run_nowait(cg_ctx.dest, child_request_dmabuf_transfer, &bfs_ctx);
> + ASSERT_GT(child_pid, 0) {
> + TH_LOG("Error forking: %s", strerror(errno));
> + goto err_binder;
> + }
> +
> + tr = binder_wait_for_transaction(b_ctx.fd, readbuf, sizeof(readbuf));
> + ASSERT_NE(tr, NULL) {
> + TH_LOG("Error receiving transaction request from child");
> + goto err_child;
> + }
> + fbo = (struct flat_binder_object *)tr->data.ptr.buffer;
> + ASSERT_EQ(fbo->hdr.type, BINDER_TYPE_PTR) {
> + TH_LOG("Did not receive a buffer object from child");
> + goto err_child;
> + }
> + bbo = (struct binder_buffer_object *)fbo;
> + ASSERT_EQ(bbo->length, 0) {
> + TH_LOG("Did not receive an empty buffer object from child");
> + goto err_child;
> + }
> +
> + TH_LOG("Received transaction from child");
> + send_dmabuf_reply(b_ctx.fd, tr, dmabuf_fd);
> +
> + ASSERT_EQ(cg_read_key_long(cg_ctx.dest, GPUMEM_FILENAME, "system"), ONE_MiB) {
> + TH_LOG("Destination cgroup does not have system charge!");
> + goto err_child;
> + }
> + ASSERT_EQ(cg_read_key_long(cg_ctx.source, GPUMEM_FILENAME, "system"), 0) {
> + TH_LOG("Source cgroup still has system charge!");
> + goto err_child;
> + }
> + TH_LOG("Charge transfer succeeded!");
> +
> +err_child:
> + waitpid(child_pid, &ret, 0);
> + if (WIFEXITED(ret))
> + TH_LOG("Child %d terminated with %d", child_pid, WEXITSTATUS(ret));
> + else
> + TH_LOG("Child terminated abnormally");
> +err_binder:
> + close_binder(&b_ctx);
> +err_dmabuf:
> + close(dmabuf_fd);
> +err_cg:
> + destroy_cgroups(_metadata, &cg_ctx);
> + destroy_binderfs(&bfs_ctx);
> +}
> +
> +TEST_HARNESS_MAIN
> --
> 2.35.1.616.g0bdcbb4464-goog
>
On Mon, Mar 14, 2022 at 4:45 PM T.J. Mercier <[email protected]> wrote:
>
> On Thu, Mar 10, 2022 at 11:33 AM Todd Kjos <[email protected]> wrote:
> >
> > On Wed, Mar 9, 2022 at 8:52 AM T.J. Mercier <[email protected]> wrote:
> > >
> > > The kernel interface should use types that the kernel defines instead of
> > > pid_t and uid_t, whose definiton is owned by libc. This fixes the header
> > > so that it can be included without first including sys/types.h.
> > >
> > > Signed-off-by: T.J. Mercier <[email protected]>
> > > ---
> > > include/uapi/linux/android/binder.h | 4 ++--
> > > 1 file changed, 2 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/include/uapi/linux/android/binder.h b/include/uapi/linux/android/binder.h
> > > index 169fd5069a1a..aa28454dbca3 100644
> > > --- a/include/uapi/linux/android/binder.h
> > > +++ b/include/uapi/linux/android/binder.h
> > > @@ -289,8 +289,8 @@ struct binder_transaction_data {
> > >
> > > /* General information about the transaction. */
> > > __u32 flags;
> > > - pid_t sender_pid;
> > > - uid_t sender_euid;
> > > + __kernel_pid_t sender_pid;
> > > + __kernel_uid_t sender_euid;
> >
> > Are we guaranteed that this does not affect the UAPI at all? Userspace
> > code using this definition will have to run with kernels using the old
> > definition and vice versa.
>
> A standards-compliant userspace should be expecting a signed integer
> type here. So the only way I can think userspace would be affected is
> if:
> 1) pid_t is a long AND
> 2) sizeof(long) > sizeof(int) AND
> 3) Consumers of the pid_t definition actually attempt to mutate the
> result to make use of extra bits in the variable (which are not there)
>
> This seems extremely unlikely. For instance, just on the topic of the
> first item, all of the C library implementations with pid_t
> definitions linked here use an int, except for Bionic which typdefs
> pid_t to __kernel_pid_t and Sortix which uses long.
> https://wiki.osdev.org/C_Library
>
> However I would argue this is already broken and should count as a bug
> fix since I can't do this:
>
> $ cat binder_include.c ; gcc binder_include.c
> #include <linux/android/binder.h>
> int main() {}
> In file included from binder_include.c:1:
> /usr/include/linux/android/binder.h:291:9: error: unknown type name ‘pid_t’
> 291 | pid_t sender_pid;
> | ^~~~~
> /usr/include/linux/android/binder.h:292:9: error: unknown type name ‘uid_t’
> 292 | uid_t sender_euid;
> | ^~~~~
>
> This is also the only occurrence of pid_t in all of
> include/uapi/linux. All 40+ other uses are __kernel_pid_t, and I don't
> see why the binder header should be different.
It looks like those other cases used to be pid_t, but were changed to
__kernel_pid_t.
Acked-by: Todd Kjos <[email protected]>
>
>
> >
> > > binder_size_t data_size; /* number of bytes of data */
> > > binder_size_t offsets_size; /* number of bytes of offsets */
> > >
> > > --
> > > 2.35.1.616.g0bdcbb4464-goog
> > >
Hello.
On Wed, Mar 09, 2022 at 04:52:15PM +0000, "T.J. Mercier" <[email protected]> wrote:
> +int dma_buf_charge_transfer(struct dma_buf *dmabuf, struct gpucg *gpucg)
> +{
> +#ifdef CONFIG_CGROUP_GPU
> + struct gpucg *current_gpucg;
> + int ret = 0;
> +
> + /*
> + * Verify that the cgroup of the process requesting the transfer is the
> + * same as the one the buffer is currently charged to.
> + */
> + current_gpucg = gpucg_get(current);
> + mutex_lock(&dmabuf->lock);
> + if (current_gpucg != dmabuf->gpucg) {
> + ret = -EPERM;
> + goto err;
> + }
Add a shortcut for gpucg == current_gpucg?
> +
> + ret = gpucg_try_charge(gpucg, dmabuf->gpucg_dev, dmabuf->size);
> + if (ret)
> + goto err;
> +
> + dmabuf->gpucg = gpucg;
> +
> + /* uncharge the buffer from the cgroup it's currently charged to. */
> + gpucg_uncharge(current_gpucg, dmabuf->gpucg_dev, dmabuf->size);
I think gpucg_* API would need to cater for such transfers too since
possibly transitional breach of a limit during the transfer may
unnecessarily fail the operation.
My 0.02€,
Michal
Hello.
On Wed, Mar 09, 2022 at 04:52:11PM +0000, "T.J. Mercier" <[email protected]> wrote:
> +The new cgroup controller would:
> +
> +* Allow setting per-cgroup limits on the total size of buffers charged to it.
> What is the meaning of the total? (I only have very naïve
understanding of the device buffers.)
Is it like a) there's global pool of memory that is partitioned among
individual devices or b) each device has its own specific type of memory
and adding across two devices is adding apples and oranges or c) there
can be various devices both of a) and b) type?
(Apologies not replying to previous versions and possibly missing
anything.)
Thanks,
Michal
On Mon, Mar 21, 2022 at 10:45 AM Michal Koutný <[email protected]> wrote:
>
> Hello.
>
> On Wed, Mar 09, 2022 at 04:52:15PM +0000, "T.J. Mercier" <[email protected]> wrote:
> > +int dma_buf_charge_transfer(struct dma_buf *dmabuf, struct gpucg *gpucg)
> > +{
> > +#ifdef CONFIG_CGROUP_GPU
> > + struct gpucg *current_gpucg;
> > + int ret = 0;
> > +
> > + /*
> > + * Verify that the cgroup of the process requesting the transfer is the
> > + * same as the one the buffer is currently charged to.
> > + */
> > + current_gpucg = gpucg_get(current);
> > + mutex_lock(&dmabuf->lock);
> > + if (current_gpucg != dmabuf->gpucg) {
> > + ret = -EPERM;
> > + goto err;
> > + }
>
> Add a shortcut for gpucg == current_gpucg?
Good idea, thank you!
>
> > +
> > + ret = gpucg_try_charge(gpucg, dmabuf->gpucg_dev, dmabuf->size);
> > + if (ret)
> > + goto err;
> > +
> > + dmabuf->gpucg = gpucg;
> > +
> > + /* uncharge the buffer from the cgroup it's currently charged to. */
> > + gpucg_uncharge(current_gpucg, dmabuf->gpucg_dev, dmabuf->size);
>
> I think gpucg_* API would need to cater for such transfers too since
> possibly transitional breach of a limit during the transfer may
> unnecessarily fail the operation.
Since the charge is duplicated in two cgroups for a short period
before it is uncharged from the source cgroup I guess the situation
you're thinking about is a global (or common ancestor) limit? I can
see how that would be a problem for transfers done this way and an
alternative would be to swap the order of the charge operations: first
uncharge, then try_charge. To be certain the uncharge is reversible if
the try_charge fails, I think I'd need either a mutex used at all
gpucg_*charge call sites or access to the gpucg_mutex, which implies
adding transfer support to gpu.c as part of the gpucg_* API itself and
calling it here. Am I following correctly here?
This series doesn't actually add limit support, just accounting, but
I'd like to get it right here.
>
> My 0.02€,
> Michal
Thanks!
On Mon, Mar 14, 2022 at 05:43:40PM -0700, Todd Kjos wrote:
> On Wed, Mar 9, 2022 at 8:53 AM T.J. Mercier <[email protected]> wrote:
> >
> > This test verifies that the cgroup GPU memory charge is transferred
> > correctly when a dmabuf is passed between processes in two different
> > cgroups and the sender specifies BINDER_BUFFER_FLAG_SENDER_NO_NEED in the
> > binder transaction data containing the dmabuf file descriptor.
> >
> > Signed-off-by: T.J. Mercier <[email protected]>
>
> Reviewed-by: Todd Kjos <[email protected]>
> for the binder driver interactions. Need Christian to take a look at
> the binderfs interactions.
Sorry, just saw this now. I'll take a look tomorrow!
On Mon, Mar 21, 2022 at 10:37 AM Michal Koutný <[email protected]> wrote:
>
> Hello.
>
> On Wed, Mar 09, 2022 at 04:52:11PM +0000, "T.J. Mercier" <[email protected]> wrote:
> > +The new cgroup controller would:
> > +
> > +* Allow setting per-cgroup limits on the total size of buffers charged to it.
>
> What is the meaning of the total? (I only have very naïve
> understanding of the device buffers.)
So "total" is used twice here in two different contexts.
The first one is the global "GPU" cgroup context. As in any buffer
that any exporter claims is a GPU buffer, regardless of where/how it
is allocated. So this refers to the sum of all gpu buffers of any
type/source. An exporter contributes to this total by registering a
corresponding gpucg_device and making charges against that device when
it exports.
The second one is in a per device context. This allows us to make a
distinction between different types of GPU memory based on who
exported the buffer. A single process can make use of several
different types of dma buffers (for example cached and uncached
versions of the same type of memory), and it would be useful to have
different limits for each. These are distinguished by the device name
string chosen when the gpucg_device is first registered.
>
> Is it like a) there's global pool of memory that is partitioned among
> individual devices or b) each device has its own specific type of memory
> and adding across two devices is adding apples and oranges or c) there
> can be various devices both of a) and b) type?
So I guess the most correct answer to this question is c.
>
> (Apologies not replying to previous versions and possibly missing
> anything.)
>
> Thanks,
> Michal
>
On Tue, Mar 22, 2022 at 08:41:55AM -0700, "T.J. Mercier" <[email protected]> wrote:
> So "total" is used twice here in two different contexts.
> The first one is the global "GPU" cgroup context. As in any buffer
> that any exporter claims is a GPU buffer, regardless of where/how it
> is allocated. So this refers to the sum of all gpu buffers of any
> type/source. An exporter contributes to this total by registering a
> corresponding gpucg_device and making charges against that device when
> it exports.
> The second one is in a per device context. This allows us to make a
> distinction between different types of GPU memory based on who
> exported the buffer. A single process can make use of several
> different types of dma buffers (for example cached and uncached
> versions of the same type of memory), and it would be useful to have
> different limits for each. These are distinguished by the device name
> string chosen when the gpucg_device is first registered.
So is this understanding correct?
(if there was an analogous line in gpu.memory.current to gpu.memory.max)
$ cat gpu.memory.current
total T
dev1 d1
...
devN dn
T = Σ di + RAM_backed_buffers
and that some of RAM_backed_buffers may be accounted also in
memory.current (case by case, depending on allocator).
Thanks,
Michal
On Tue, Mar 22, 2022 at 2:52 AM Michal Koutný <[email protected]> wrote:
>
> On Mon, Mar 21, 2022 at 04:54:26PM -0700, "T.J. Mercier"
> <[email protected]> wrote:
> > Since the charge is duplicated in two cgroups for a short period
> > before it is uncharged from the source cgroup I guess the situation
> > you're thinking about is a global (or common ancestor) limit?
>
> The common ancestor was on my mind (after the self-shortcut).
>
> > I can see how that would be a problem for transfers done this way and
> > an alternative would be to swap the order of the charge operations:
> > first uncharge, then try_charge. To be certain the uncharge is
> > reversible if the try_charge fails, I think I'd need either a mutex
> > used at all gpucg_*charge call sites or access to the gpucg_mutex,
>
> Yes, that'd provide safe conditions for such operations, although I'm
> not sure these special types of memory can afford global lock on their
> fast paths.
I have a benchmark I think is suitable, so let me try this change to
the transfer implementation and see how it compares.
>
> > which implies adding transfer support to gpu.c as part of the gpucg_*
> > API itself and calling it here. Am I following correctly here?
>
> My idea was to provide a special API (apart from
> gpucg_{try_charge,uncharge}) to facilitate transfers...
>
> > This series doesn't actually add limit support, just accounting, but
> > I'd like to get it right here.
>
> ...which could be implemented (or changed) depending on how the charging
> is realized internally.
>
>
> Michal
On Tue, Mar 22, 2022 at 9:47 AM T.J. Mercier <[email protected]> wrote:
>
> On Tue, Mar 22, 2022 at 2:52 AM Michal Koutný <[email protected]> wrote:
> >
> > On Mon, Mar 21, 2022 at 04:54:26PM -0700, "T.J. Mercier"
> > <[email protected]> wrote:
> > > Since the charge is duplicated in two cgroups for a short period
> > > before it is uncharged from the source cgroup I guess the situation
> > > you're thinking about is a global (or common ancestor) limit?
> >
> > The common ancestor was on my mind (after the self-shortcut).
> >
> > > I can see how that would be a problem for transfers done this way and
> > > an alternative would be to swap the order of the charge operations:
> > > first uncharge, then try_charge. To be certain the uncharge is
> > > reversible if the try_charge fails, I think I'd need either a mutex
> > > used at all gpucg_*charge call sites or access to the gpucg_mutex,
> >
> > Yes, that'd provide safe conditions for such operations, although I'm
> > not sure these special types of memory can afford global lock on their
> > fast paths.
>
> I have a benchmark I think is suitable, so let me try this change to
> the transfer implementation and see how it compares.
I added a mutex to struct gpucg which is locked when charging the
cgroup initially during allocation, and also only for the source
cgroup during dma_buf_charge_transfer. Then I used a multithreaded
benchmark where each thread allocates 4, 8, 16, or 32 DMA buffers and
then sends them through Binder to another process with charge transfer
enabled. This was intended to generate contention for the mutex in
dma_buf_charge_transfer. The results of this benchmark show that the
difference between a mutex protected charge transfer and an
unprotected charge transfer is within measurement noise. The worst
data point shows about 3% overhead for the mutex.
So I'll prep this change for the next revision. Thanks for pointing it out.
>
> >
> > > which implies adding transfer support to gpu.c as part of the gpucg_*
> > > API itself and calling it here. Am I following correctly here?
> >
> > My idea was to provide a special API (apart from
> > gpucg_{try_charge,uncharge}) to facilitate transfers...
> >
> > > This series doesn't actually add limit support, just accounting, but
> > > I'd like to get it right here.
> >
> > ...which could be implemented (or changed) depending on how the charging
> > is realized internally.
> >
> >
> > Michal