2022-05-03 01:26:33

by T.J. Mercier

Subject: [PATCH v6 0/6] Proposal for a GPU cgroup controller

This patch series revisits the proposal for a GPU cgroup controller to
track and limit memory allocations by various device/allocator
subsystems. The patch series also contains a simple prototype to
illustrate how Android intends to implement DMA-BUF allocator
attribution using the GPU cgroup controller. The prototype does not
include resource limit enforcements.

Changelog:
v6:
Move documentation into cgroup-v2.rst per Tejun Heo.

Rename BINDER_FD{A}_FLAG_SENDER_NO_NEED ->
BINDER_FD{A}_FLAG_XFER_CHARGE per Carlos Llamas.

Return error on transfer failure per Carlos Llamas.

v5:
Rebase on top of v5.18-rc3

Drop the global GPU cgroup "total" (sum of all device totals) portion
of the design since there is no currently known use for this per
Tejun Heo.

Fix commit message which still contained the old name for
dma_buf_transfer_charge per Michal Koutný.

Remove all GPU cgroup code except what's necessary to support charge transfer
from dma_buf. Previously charging was done in export, but for non-Android
graphics use-cases this is not ideal since there may be a delay between
allocation and export, during which time there is no accounting.

Merge dmabuf: Use the GPU cgroup charge/uncharge APIs patch into
dmabuf: heaps: export system_heap buffers with GPU cgroup charging as a
result of above.

Put the charge and uncharge code in the same file (system_heap_allocate,
system_heap_dma_buf_release) instead of splitting them between the heap and
the dma_buf_release. This avoids asymmetric management of the gpucg charges.

Modify the dma_buf_transfer_charge API to accept a task_struct instead
of a gpucg. This avoids requiring the caller to manage the gpucg
refcount on failure, and avoids confusing ownership transfer logic.
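
For context, the signature change looks roughly like this (the v4 form
is reconstructed from the description above, not quoted from that
revision):

    /* v4 (sketch): the caller resolved the gpucg itself and had to
     * manage its refcount on the failure path. */
    int dma_buf_transfer_charge(struct dma_buf *dmabuf, struct gpucg *gpucg);

    /* v5/v6: the caller passes the destination task and all gpucg
     * reference counting stays internal to dma-buf. */
    int dma_buf_transfer_charge(struct dma_buf *dmabuf,
                                struct task_struct *target);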

Support all strings for gpucg_register_bucket instead of just string
literals.

Enforce globally unique gpucg_bucket names.

Constrain gpucg_bucket name lengths to 64 bytes.

Append "-heap" to gpucg_bucket names from dmabuf-heaps.

Drop patch 7 from the series, which changed the types of
binder_transaction_data's sender_pid and sender_euid fields. This was
done in another commit here:
https://lore.kernel.org/all/[email protected]/

Rename:
gpucg_try_charge -> gpucg_charge
find_cg_rpool_locked -> cg_rpool_find_locked
init_cg_rpool -> cg_rpool_init
get_cg_rpool_locked -> cg_rpool_get_locked
"gpu cgroup controller" -> "GPU controller"
gpucg_device -> gpucg_bucket
usage -> size

Tests:
Support both binder_fd_array_object and binder_fd_object. This is
necessary because new versions of Android will use binder_fd_object
instead of binder_fd_array_object, and we need to support both.

Tests for both binder_fd_array_object and binder_fd_object.

For binder_utils return error codes instead of
struct binder{fs}_ctx.

Use ifdef __ANDROID__ to choose platform-dependent temp path instead
of a runtime fallback.

Ensure binderfs_mntpt ends with a trailing '/' character instead of
prepending it where used.

v4:
Skip test if not run as root per Shuah Khan

Add better test logging for abnormal child termination per Shuah Khan

Adjust ordering of charge/uncharge during transfer to avoid potentially
hitting cgroup limit per Michal Koutný

Adjust gpucg_try_charge critical section for charge transfer functionality

Fix uninitialized return code error for dmabuf_try_charge error case

v3:
Remove Upstreaming Plan from gpu-cgroup.rst per John Stultz

Use more common dual author commit message format per John Stultz

Remove android from binder changes title per Todd Kjos

Add a kselftest for this new behavior per Greg Kroah-Hartman

Include details on behavior for all combinations of kernel/userspace
versions in changelog (thanks Suren Baghdasaryan) per Greg Kroah-Hartman.

Fix pid and uid types in binder UAPI header

v2:
See the previous revision of this change submitted by Hridya Valsaraju
at: https://lore.kernel.org/all/[email protected]/

Move dma-buf cgroup charge transfer from a dma_buf_op defined by every
heap to a single dma-buf function for all heaps per Daniel Vetter and
Christian König. Pointers to struct gpucg and struct gpucg_device
tracking the current associations were added to the dma_buf struct to
achieve this.

Fix incorrect Kconfig help section indentation per Randy Dunlap.

History of the GPU cgroup controller
====================================
The GPU/DRM cgroup controller came into being when a consensus[1]
was reached that the resources it tracked were unsuitable to be integrated
into memcg. Originally, the proposed controller was specific to the DRM
subsystem and was intended to track GEM buffers and GPU-specific
resources[2]. To help establish a unified memory accounting model for
GPUs and all related subsystems, Daniel Vetter suggested moving it out
of the DRM subsystem so that it could be used by other DMA-BUF
exporters as well[3]. This RFC proposes an interface that does the
same.

[1]: https://patchwork.kernel.org/project/dri-devel/cover/[email protected]/#22624705
[2]: https://lore.kernel.org/amd-gfx/[email protected]/
[3]: https://lore.kernel.org/amd-gfx/YCVOl8%[email protected]/

Hridya Valsaraju (3):
gpu: rfc: Proposal for a GPU cgroup controller
cgroup: gpu: Add a cgroup controller for allocator attribution of GPU
memory
binder: Add flags to relinquish ownership of fds

T.J. Mercier (3):
dmabuf: heaps: export system_heap buffers with GPU cgroup charging
dmabuf: Add gpu cgroup charge transfer function
selftests: Add binder cgroup gpu memory transfer tests

Documentation/admin-guide/cgroup-v2.rst | 24 +
drivers/android/binder.c | 31 +-
drivers/dma-buf/dma-buf.c | 80 ++-
drivers/dma-buf/dma-heap.c | 39 ++
drivers/dma-buf/heaps/system_heap.c | 28 +-
include/linux/cgroup_gpu.h | 137 +++++
include/linux/cgroup_subsys.h | 4 +
include/linux/dma-buf.h | 49 +-
include/linux/dma-heap.h | 15 +
include/uapi/linux/android/binder.h | 23 +-
init/Kconfig | 7 +
kernel/cgroup/Makefile | 1 +
kernel/cgroup/gpu.c | 386 +++++++++++++
.../selftests/drivers/android/binder/Makefile | 8 +
.../drivers/android/binder/binder_util.c | 250 +++++++++
.../drivers/android/binder/binder_util.h | 32 ++
.../selftests/drivers/android/binder/config | 4 +
.../binder/test_dmabuf_cgroup_transfer.c | 526 ++++++++++++++++++
18 files changed, 1621 insertions(+), 23 deletions(-)
create mode 100644 include/linux/cgroup_gpu.h
create mode 100644 kernel/cgroup/gpu.c
create mode 100644 tools/testing/selftests/drivers/android/binder/Makefile
create mode 100644 tools/testing/selftests/drivers/android/binder/binder_util.c
create mode 100644 tools/testing/selftests/drivers/android/binder/binder_util.h
create mode 100644 tools/testing/selftests/drivers/android/binder/config
create mode 100644 tools/testing/selftests/drivers/android/binder/test_dmabuf_cgroup_transfer.c

--
2.36.0.464.gb9c8b46e94-goog


2022-05-03 01:26:33

by T.J. Mercier

Subject: [PATCH v6 5/6] binder: Add flags to relinquish ownership of fds

From: Hridya Valsaraju <[email protected]>

This patch introduces the flags BINDER_FD_FLAG_XFER_CHARGE and
BINDER_FDA_FLAG_XFER_CHARGE, which a process sending an individual fd
or fd array to another process over binder IPC can set to relinquish
ownership of the fds being sent for memory accounting purposes. If the
flag is found to be set during the fd or fd array translation and the
fd is for a DMA-BUF, the buffer is uncharged from the sender's cgroup
and charged to the receiving process's cgroup instead.

It is up to the sending process to ensure that it closes the fds
regardless of whether the transfer failed or succeeded.

Most graphics shared memory allocations in Android are done by the
graphics allocator HAL process. On requests from clients, the HAL process
allocates memory and sends the fds to the clients over binder IPC.
The graphics allocator HAL will not retain any references to the
buffers. When the HAL sets *_FLAG_XFER_CHARGE for fd arrays holding
DMA-BUF fds, or individual fd objects, the gpu cgroup controller will
be able to correctly charge the buffers to the client processes instead
of the graphics allocator HAL.
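
As an illustration, a minimal sketch of how a sender such as the
allocator HAL might mark an individual fd object; the dmabuf_fd
variable and the surrounding transaction setup are assumed:

    #include <linux/android/binder.h>
    #include <string.h>

    /* Prepare an fd object that relinquishes the memory charge to the
     * receiver when the driver translates the transaction. */
    static void init_xfer_fd_object(struct binder_fd_object *obj, int dmabuf_fd)
    {
            memset(obj, 0, sizeof(*obj));
            obj->hdr.type = BINDER_TYPE_FD;
            obj->flags = BINDER_FD_FLAG_XFER_CHARGE;
            obj->fd = dmabuf_fd;
    }

For fd arrays, binder_fd_array_object.flags takes
BINDER_FDA_FLAG_XFER_CHARGE in the same way.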

Since this is a new feature exposed to userspace, the kernel and
userspace must both support it for the transfer accounting to work. In
all cases the allocation and transport of DMA buffers via binder will
succeed, but the charge is transferred only when the kernel supports
this feature and userspace uses it. The possible scenarios are detailed
below:

1. new kernel + old userspace
The kernel supports the feature but userspace does not use it. The old
userspace won't mount the new cgroup controller, accounting is not
performed, charge is not transferred.

2. old kernel + new userspace
The new cgroup controller is not supported by the kernel, accounting is
not performed, charge is not transferred.

3. old kernel + old userspace
Same as #2

4. new kernel + new userspace
Cgroup is mounted, feature is supported and used.

Signed-off-by: Hridya Valsaraju <[email protected]>
Signed-off-by: T.J. Mercier <[email protected]>

---
v6 changes
Rename BINDER_FD{A}_FLAG_SENDER_NO_NEED ->
BINDER_FD{A}_FLAG_XFER_CHARGE per Carlos Llamas.

Return error on transfer failure per Carlos Llamas.

v5 changes
Support both binder_fd_array_object and binder_fd_object. This is
necessary because new versions of Android will use binder_fd_object
instead of binder_fd_array_object, and we need to support both.

Use the new, simpler dma_buf_transfer_charge API.

v3 changes
Remove android from title per Todd Kjos.

Use more common dual author commit message format per John Stultz.

Include details on behavior for all combinations of kernel/userspace
versions in changelog (thanks Suren Baghdasaryan) per Greg Kroah-Hartman.

v2 changes
Move dma-buf cgroup charge transfer from a dma_buf_op defined by every
heap to a single dma-buf function for all heaps per Daniel Vetter and
Christian König.
---
drivers/android/binder.c | 31 +++++++++++++++++++++++++----
drivers/dma-buf/dma-buf.c | 4 ++--
include/linux/dma-buf.h | 2 +-
include/uapi/linux/android/binder.h | 23 +++++++++++++++++----
4 files changed, 49 insertions(+), 11 deletions(-)

diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index 8351c5638880..1f39b24498f1 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -42,6 +42,7 @@

#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

+#include <linux/dma-buf.h>
#include <linux/fdtable.h>
#include <linux/file.h>
#include <linux/freezer.h>
@@ -2170,7 +2171,7 @@ static int binder_translate_handle(struct flat_binder_object *fp,
return ret;
}

-static int binder_translate_fd(u32 fd, binder_size_t fd_offset,
+static int binder_translate_fd(u32 fd, binder_size_t fd_offset, __u32 flags,
struct binder_transaction *t,
struct binder_thread *thread,
struct binder_transaction *in_reply_to)
@@ -2208,6 +2209,26 @@ static int binder_translate_fd(u32 fd, binder_size_t fd_offset,
goto err_security;
}

+ if (IS_ENABLED(CONFIG_CGROUP_GPU) && (flags & BINDER_FD_FLAG_XFER_CHARGE)) {
+ struct dma_buf *dmabuf;
+
+ if (!is_dma_buf_file(file)) {
+ binder_user_error(
+ "%d:%d got transaction with XFER_CHARGE for non-dmabuf fd, %d\n",
+ proc->pid, thread->pid, fd);
+ ret = -EINVAL;
+ goto err_dmabuf;
+ }
+
+ dmabuf = file->private_data;
+ ret = dma_buf_transfer_charge(dmabuf, target_proc->tsk);
+ if (ret) {
+ pr_warn("%d:%d Unable to transfer DMA-BUF fd charge to %d\n",
+ proc->pid, thread->pid, target_proc->pid);
+ goto err_xfer;
+ }
+ }
+
/*
* Add fixup record for this transaction. The allocation
* of the fd in the target needs to be done from a
@@ -2226,6 +2247,8 @@ static int binder_translate_fd(u32 fd, binder_size_t fd_offset,
return ret;

err_alloc:
+err_xfer:
+err_dmabuf:
err_security:
fput(file);
err_fget:
@@ -2528,7 +2551,7 @@ static int binder_translate_fd_array(struct list_head *pf_head,

ret = copy_from_user(&fd, sender_ufda_base + sender_uoffset, sizeof(fd));
if (!ret)
- ret = binder_translate_fd(fd, offset, t, thread,
+ ret = binder_translate_fd(fd, offset, fda->flags, t, thread,
in_reply_to);
if (ret)
return ret > 0 ? -EINVAL : ret;
@@ -3179,8 +3202,8 @@ static void binder_transaction(struct binder_proc *proc,
struct binder_fd_object *fp = to_binder_fd_object(hdr);
binder_size_t fd_offset = object_offset +
(uintptr_t)&fp->fd - (uintptr_t)fp;
- int ret = binder_translate_fd(fp->fd, fd_offset, t,
- thread, in_reply_to);
+ int ret = binder_translate_fd(fp->fd, fd_offset, fp->flags,
+ t, thread, in_reply_to);

fp->pad_binder = 0;
if (ret < 0 ||
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index f3fb844925e2..36ed6cd4ddcc 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -31,7 +31,6 @@

#include "dma-buf-sysfs-stats.h"

-static inline int is_dma_buf_file(struct file *);

struct dma_buf_list {
struct list_head head;
@@ -400,10 +399,11 @@ static const struct file_operations dma_buf_fops = {
/*
* is_dma_buf_file - Check if struct file* is associated with dma_buf
*/
-static inline int is_dma_buf_file(struct file *file)
+int is_dma_buf_file(struct file *file)
{
return file->f_op == &dma_buf_fops;
}
+EXPORT_SYMBOL_NS_GPL(is_dma_buf_file, DMA_BUF);

static struct file *dma_buf_getfile(struct dma_buf *dmabuf, int flags)
{
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index 438ad8577b76..2b9812758fee 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -614,7 +614,7 @@ dma_buf_attachment_is_dynamic(struct dma_buf_attachment *attach)
{
return !!attach->importer_ops;
}
-
+int is_dma_buf_file(struct file *file);
struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf,
struct device *dev);
struct dma_buf_attachment *
diff --git a/include/uapi/linux/android/binder.h b/include/uapi/linux/android/binder.h
index 11157fae8a8e..d17e791b38ab 100644
--- a/include/uapi/linux/android/binder.h
+++ b/include/uapi/linux/android/binder.h
@@ -91,14 +91,14 @@ struct flat_binder_object {
/**
* struct binder_fd_object - describes a filedescriptor to be fixed up.
* @hdr: common header structure
- * @pad_flags: padding to remain compatible with old userspace code
+ * @flags: One or more BINDER_FD_FLAG_* flags
* @pad_binder: padding to remain compatible with old userspace code
* @fd: file descriptor
* @cookie: opaque data, used by user-space
*/
struct binder_fd_object {
struct binder_object_header hdr;
- __u32 pad_flags;
+ __u32 flags;
union {
binder_uintptr_t pad_binder;
__u32 fd;
@@ -107,6 +107,17 @@ struct binder_fd_object {
binder_uintptr_t cookie;
};

+enum {
+ /**
+ * @BINDER_FD_FLAG_XFER_CHARGE
+ *
+ * When set, the sender of a binder_fd_object wishes to relinquish ownership of the fd for
+ * memory accounting purposes. If the fd is for a DMA-BUF, the buffer is uncharged from the
+ * sender's cgroup and charged to the receiving process's cgroup instead.
+ */
+ BINDER_FD_FLAG_XFER_CHARGE = 0x2000,
+};
+
/* struct binder_buffer_object - object describing a userspace buffer
* @hdr: common header structure
* @flags: one or more BINDER_BUFFER_* flags
@@ -141,7 +152,7 @@ enum {

/* struct binder_fd_array_object - object describing an array of fds in a buffer
* @hdr: common header structure
- * @pad: padding to ensure correct alignment
+ * @flags: One or more BINDER_FDA_FLAG_* flags
* @num_fds: number of file descriptors in the buffer
* @parent: index in offset array to buffer holding the fd array
* @parent_offset: start offset of fd array in the buffer
@@ -162,12 +173,16 @@ enum {
*/
struct binder_fd_array_object {
struct binder_object_header hdr;
- __u32 pad;
+ __u32 flags;
binder_size_t num_fds;
binder_size_t parent;
binder_size_t parent_offset;
};

+enum {
+ BINDER_FDA_FLAG_XFER_CHARGE = BINDER_FD_FLAG_XFER_CHARGE,
+};
+
/*
* On 64-bit platforms where user code may run in 32-bits the driver must
* translate the buffer (and local binder) addresses appropriately.
--
2.36.0.464.gb9c8b46e94-goog

2022-05-03 01:26:33

by T.J. Mercier

Subject: [PATCH v6 2/6] cgroup: gpu: Add a cgroup controller for allocator attribution of GPU memory

From: Hridya Valsaraju <[email protected]>

The cgroup controller provides accounting for GPU and GPU-related
memory allocations. The memory being accounted can be device memory or
memory allocated from pools dedicated to serve GPU-related tasks.

This patch adds APIs to:
-allow a device to register for memory accounting using the GPU cgroup
controller.
-charge and uncharge allocated memory to a cgroup.

When the cgroup controller is enabled, it exposes information about
the memory allocated by each device (registered for GPU cgroup memory
accounting) for each cgroup.

The API/UAPI can be extended to set per-device/total allocation limits
in the future.
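
The intended call pattern for an exporter looks roughly like the sketch
below. The example_* names and the "example" bucket string are
hypothetical; the real user in this series is the system dmabuf heap.

    #include <linux/cgroup_gpu.h>
    #include <linux/sched.h>

    struct example_buffer {
            struct gpucg *gpucg;
            /* ... backing storage ... */
    };

    static struct gpucg_bucket example_bucket;

    static int example_init(void)
    {
            /* Register once; the name must be globally unique and
             * shorter than GPUCG_BUCKET_NAME_MAX_LEN bytes. */
            return gpucg_register_bucket(&example_bucket, "example");
    }

    static int example_alloc(struct example_buffer *buf, u64 len)
    {
            int ret = 0;

            buf->gpucg = gpucg_get(current); /* NULL if controller disabled */
            if (buf->gpucg) {
                    ret = gpucg_charge(buf->gpucg, &example_bucket, len);
                    if (ret)
                            gpucg_put(buf->gpucg);
            }
            return ret;
    }

    static void example_free(struct example_buffer *buf, u64 len)
    {
            /* Uncharging mirrors the charge exactly: same bucket and
             * size, then drop the reference taken by gpucg_get(). */
            if (buf->gpucg) {
                    gpucg_uncharge(buf->gpucg, &example_bucket, len);
                    gpucg_put(buf->gpucg);
            }
    }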

The cgroup controller has been named following the discussion in [1].

[1]: https://lore.kernel.org/amd-gfx/YCJp%2F%[email protected]/

Signed-off-by: Hridya Valsaraju <[email protected]>
Signed-off-by: T.J. Mercier <[email protected]>

---
v5 changes
Support all strings for gpucg_register_bucket instead of just string
literals.

Enforce globally unique gpucg_bucket names.

Constrain gpucg_bucket name lengths to 64 bytes.

Obtain just a single css refcount instead of nr_pages for each
charge.

Rename:
gpucg_try_charge -> gpucg_charge
find_cg_rpool_locked -> cg_rpool_find_locked
init_cg_rpool -> cg_rpool_init
get_cg_rpool_locked -> cg_rpool_get_locked
"gpu cgroup controller" -> "GPU controller"
gpucg_device -> gpucg_bucket
usage -> size

v4 changes
Adjust gpucg_try_charge critical section for future charge transfer
functionality.

v3 changes
Use more common dual author commit message format per John Stultz.

v2 changes
Fix incorrect Kconfig help section indentation per Randy Dunlap.
---
include/linux/cgroup_gpu.h | 123 +++++++++++++
include/linux/cgroup_subsys.h | 4 +
init/Kconfig | 7 +
kernel/cgroup/Makefile | 1 +
kernel/cgroup/gpu.c | 324 ++++++++++++++++++++++++++++++++++
5 files changed, 459 insertions(+)
create mode 100644 include/linux/cgroup_gpu.h
create mode 100644 kernel/cgroup/gpu.c

diff --git a/include/linux/cgroup_gpu.h b/include/linux/cgroup_gpu.h
new file mode 100644
index 000000000000..4dfe633d6ec7
--- /dev/null
+++ b/include/linux/cgroup_gpu.h
@@ -0,0 +1,123 @@
+/* SPDX-License-Identifier: MIT
+ * Copyright 2019 Advanced Micro Devices, Inc.
+ * Copyright (C) 2022 Google LLC.
+ */
+#ifndef _CGROUP_GPU_H
+#define _CGROUP_GPU_H
+
+#include <linux/cgroup.h>
+#include <linux/list.h>
+
+#define GPUCG_BUCKET_NAME_MAX_LEN 64
+
+#ifdef CONFIG_CGROUP_GPU
+ /* The GPU cgroup controller data structure */
+struct gpucg {
+ struct cgroup_subsys_state css;
+
+ /* list of all resource pools that belong to this cgroup */
+ struct list_head rpools;
+};
+
+/* A named entity representing bucket of tracked memory. */
+struct gpucg_bucket {
+ /* list of various resource pools in various cgroups that the bucket is part of */
+ struct list_head rpools;
+
+ /* list of all buckets registered for GPU cgroup accounting */
+ struct list_head bucket_node;
+
+ /* string to be used as identifier for accounting and limit setting */
+ const char *name;
+};
+
+/**
+ * css_to_gpucg - get the corresponding gpucg ref from a cgroup_subsys_state
+ * @css: the target cgroup_subsys_state
+ *
+ * Returns: gpu cgroup that contains the @css
+ */
+static inline struct gpucg *css_to_gpucg(struct cgroup_subsys_state *css)
+{
+ return css ? container_of(css, struct gpucg, css) : NULL;
+}
+
+/**
+ * gpucg_get - get the gpucg reference that a task belongs to
+ * @task: the target task
+ *
+ * This increases the reference count of the css that the @task belongs to.
+ *
+ * Returns: reference to the gpu cgroup the task belongs to.
+ */
+static inline struct gpucg *gpucg_get(struct task_struct *task)
+{
+ if (!cgroup_subsys_enabled(gpu_cgrp_subsys))
+ return NULL;
+ return css_to_gpucg(task_get_css(task, gpu_cgrp_id));
+}
+
+/**
+ * gpucg_put - put a gpucg reference
+ * @gpucg: the target gpucg
+ *
+ * Put a reference obtained via gpucg_get
+ */
+static inline void gpucg_put(struct gpucg *gpucg)
+{
+ if (gpucg)
+ css_put(&gpucg->css);
+}
+
+/**
+ * gpucg_parent - find the parent of a gpu cgroup
+ * @cg: the target gpucg
+ *
+ * This does not increase the reference count of the parent cgroup
+ *
+ * Returns: parent gpu cgroup of @cg
+ */
+static inline struct gpucg *gpucg_parent(struct gpucg *cg)
+{
+ return css_to_gpucg(cg->css.parent);
+}
+
+int gpucg_charge(struct gpucg *gpucg, struct gpucg_bucket *bucket, u64 size);
+void gpucg_uncharge(struct gpucg *gpucg, struct gpucg_bucket *bucket, u64 size);
+int gpucg_register_bucket(struct gpucg_bucket *bucket, const char *name);
+#else /* CONFIG_CGROUP_GPU */
+
+struct gpucg;
+struct gpucg_bucket;
+
+static inline struct gpucg *css_to_gpucg(struct cgroup_subsys_state *css)
+{
+ return NULL;
+}
+
+static inline struct gpucg *gpucg_get(struct task_struct *task)
+{
+ return NULL;
+}
+
+static inline void gpucg_put(struct gpucg *gpucg) {}
+
+static inline struct gpucg *gpucg_parent(struct gpucg *cg)
+{
+ return NULL;
+}
+
+static inline int gpucg_charge(struct gpucg *gpucg,
+ struct gpucg_bucket *bucket,
+ u64 size)
+{
+ return 0;
+}
+
+static inline void gpucg_uncharge(struct gpucg *gpucg,
+ struct gpucg_bucket *bucket,
+ u64 size) {}
+
+static inline int gpucg_register_bucket(struct gpucg_bucket *bucket, const char *name) { return 0; }
+#endif /* CONFIG_CGROUP_GPU */
+#endif /* _CGROUP_GPU_H */
diff --git a/include/linux/cgroup_subsys.h b/include/linux/cgroup_subsys.h
index 445235487230..46a2a7b93c41 100644
--- a/include/linux/cgroup_subsys.h
+++ b/include/linux/cgroup_subsys.h
@@ -65,6 +65,10 @@ SUBSYS(rdma)
SUBSYS(misc)
#endif

+#if IS_ENABLED(CONFIG_CGROUP_GPU)
+SUBSYS(gpu)
+#endif
+
/*
* The following subsystems are not supported on the default hierarchy.
*/
diff --git a/init/Kconfig b/init/Kconfig
index ddcbefe535e9..2e00a190e170 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -984,6 +984,13 @@ config BLK_CGROUP

See Documentation/admin-guide/cgroup-v1/blkio-controller.rst for more information.

+config CGROUP_GPU
+ bool "GPU controller (EXPERIMENTAL)"
+ select PAGE_COUNTER
+ help
+ Provides accounting and limit setting for memory allocations by the GPU and
+ GPU-related subsystems.
+
config CGROUP_WRITEBACK
bool
depends on MEMCG && BLK_CGROUP
diff --git a/kernel/cgroup/Makefile b/kernel/cgroup/Makefile
index 12f8457ad1f9..be95a5a532fc 100644
--- a/kernel/cgroup/Makefile
+++ b/kernel/cgroup/Makefile
@@ -7,3 +7,4 @@ obj-$(CONFIG_CGROUP_RDMA) += rdma.o
obj-$(CONFIG_CPUSETS) += cpuset.o
obj-$(CONFIG_CGROUP_MISC) += misc.o
obj-$(CONFIG_CGROUP_DEBUG) += debug.o
+obj-$(CONFIG_CGROUP_GPU) += gpu.o
diff --git a/kernel/cgroup/gpu.c b/kernel/cgroup/gpu.c
new file mode 100644
index 000000000000..34d0a5b85834
--- /dev/null
+++ b/kernel/cgroup/gpu.c
@@ -0,0 +1,324 @@
+// SPDX-License-Identifier: MIT
+// Copyright 2019 Advanced Micro Devices, Inc.
+// Copyright (C) 2022 Google LLC.
+
+#include <linux/cgroup.h>
+#include <linux/cgroup_gpu.h>
+#include <linux/err.h>
+#include <linux/gfp.h>
+#include <linux/mm.h>
+#include <linux/page_counter.h>
+#include <linux/seq_file.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+
+static struct gpucg *root_gpucg __read_mostly;
+
+/*
+ * Protects list of resource pools maintained on per cgroup basis and list
+ * of buckets registered for memory accounting using the GPU cgroup controller.
+ */
+static DEFINE_MUTEX(gpucg_mutex);
+static LIST_HEAD(gpucg_buckets);
+
+struct gpucg_resource_pool {
+ /* The bucket whose resource usage is tracked by this resource pool */
+ struct gpucg_bucket *bucket;
+
+ /* list of all resource pools for the cgroup */
+ struct list_head cg_node;
+
+ /* list maintained by the gpucg_bucket to keep track of its resource pools */
+ struct list_head bucket_node;
+
+ /* tracks memory usage of the resource pool */
+ struct page_counter total;
+};
+
+static void free_cg_rpool_locked(struct gpucg_resource_pool *rpool)
+{
+ lockdep_assert_held(&gpucg_mutex);
+
+ list_del(&rpool->cg_node);
+ list_del(&rpool->bucket_node);
+ kfree(rpool);
+}
+
+static void gpucg_css_free(struct cgroup_subsys_state *css)
+{
+ struct gpucg_resource_pool *rpool, *tmp;
+ struct gpucg *gpucg = css_to_gpucg(css);
+
+ // delete all resource pools
+ mutex_lock(&gpucg_mutex);
+ list_for_each_entry_safe(rpool, tmp, &gpucg->rpools, cg_node)
+ free_cg_rpool_locked(rpool);
+ mutex_unlock(&gpucg_mutex);
+
+ kfree(gpucg);
+}
+
+static struct cgroup_subsys_state *
+gpucg_css_alloc(struct cgroup_subsys_state *parent_css)
+{
+ struct gpucg *gpucg, *parent;
+
+ gpucg = kzalloc(sizeof(struct gpucg), GFP_KERNEL);
+ if (!gpucg)
+ return ERR_PTR(-ENOMEM);
+
+ parent = css_to_gpucg(parent_css);
+ if (!parent)
+ root_gpucg = gpucg;
+
+ INIT_LIST_HEAD(&gpucg->rpools);
+
+ return &gpucg->css;
+}
+
+static struct gpucg_resource_pool *cg_rpool_find_locked(
+ struct gpucg *cg,
+ struct gpucg_bucket *bucket)
+{
+ struct gpucg_resource_pool *rpool;
+
+ lockdep_assert_held(&gpucg_mutex);
+
+ list_for_each_entry(rpool, &cg->rpools, cg_node)
+ if (rpool->bucket == bucket)
+ return rpool;
+
+ return NULL;
+}
+
+static struct gpucg_resource_pool *cg_rpool_init(struct gpucg *cg,
+ struct gpucg_bucket *bucket)
+{
+ struct gpucg_resource_pool *rpool = kzalloc(sizeof(*rpool),
+ GFP_KERNEL);
+ if (!rpool)
+ return ERR_PTR(-ENOMEM);
+
+ rpool->bucket = bucket;
+
+ page_counter_init(&rpool->total, NULL);
+ INIT_LIST_HEAD(&rpool->cg_node);
+ INIT_LIST_HEAD(&rpool->bucket_node);
+ list_add_tail(&rpool->cg_node, &cg->rpools);
+ list_add_tail(&rpool->bucket_node, &bucket->rpools);
+
+ return rpool;
+}
+
+/**
+ * cg_rpool_get_locked - find the resource pool for the specified bucket and
+ * specified cgroup. If the resource pool does not exist for the cg, it is
+ * created in a hierarchical manner in the cgroup and its ancestor cgroups that
+ * do not already have a resource pool entry for the bucket.
+ *
+ * @cg: The cgroup to find the resource pool for.
+ * @bucket: The bucket associated with the returned resource pool.
+ *
+ * Return: return resource pool entry corresponding to the specified bucket in
+ * the specified cgroup (hierarchically creating them if not existing already).
+ *
+ */
+static struct gpucg_resource_pool *
+cg_rpool_get_locked(struct gpucg *cg, struct gpucg_bucket *bucket)
+{
+ struct gpucg *parent_cg, *p, *stop_cg;
+ struct gpucg_resource_pool *rpool, *tmp_rpool;
+ struct gpucg_resource_pool *parent_rpool = NULL, *leaf_rpool = NULL;
+
+ rpool = cg_rpool_find_locked(cg, bucket);
+ if (rpool)
+ return rpool;
+
+ stop_cg = cg;
+ do {
+ rpool = cg_rpool_init(stop_cg, bucket);
+ if (IS_ERR(rpool))
+ goto err;
+
+ if (!leaf_rpool)
+ leaf_rpool = rpool;
+
+ stop_cg = gpucg_parent(stop_cg);
+ if (!stop_cg)
+ break;
+
+ rpool = cg_rpool_find_locked(stop_cg, bucket);
+ } while (!rpool);
+
+ /*
+ * Re-initialize page counters of all rpools created in this invocation
+ * to enable hierarchical charging.
+ * stop_cg is the first ancestor cg who already had a resource pool for
+ * the bucket. It can also be NULL if no ancestors had a pre-existing
+ * resource pool for the bucket before this invocation.
+ */
+ rpool = leaf_rpool;
+ for (p = cg; p != stop_cg; p = parent_cg) {
+ parent_cg = gpucg_parent(p);
+ if (!parent_cg)
+ break;
+ parent_rpool = cg_rpool_find_locked(parent_cg, bucket);
+ page_counter_init(&rpool->total, &parent_rpool->total);
+
+ rpool = parent_rpool;
+ }
+
+ return leaf_rpool;
+err:
+ for (p = cg; p != stop_cg; p = gpucg_parent(p)) {
+ tmp_rpool = cg_rpool_find_locked(p, bucket);
+ free_cg_rpool_locked(tmp_rpool);
+ }
+ return rpool;
+}
+
+/**
+ * gpucg_charge - charge memory to the specified gpucg and gpucg_bucket.
+ * Caller must hold a reference to @gpucg obtained through gpucg_get(). The size
+ * of the memory is rounded up to be a multiple of the page size.
+ *
+ * @gpucg: The gpu cgroup to charge the memory to.
+ * @bucket: The bucket to charge the memory to.
+ * @size: The size of memory to charge in bytes.
+ * This size will be rounded up to the nearest page size.
+ *
+ * Return: returns 0 if the charging is successful and otherwise returns an
+ * error code.
+ */
+int gpucg_charge(struct gpucg *gpucg, struct gpucg_bucket *bucket, u64 size)
+{
+ struct page_counter *counter;
+ u64 nr_pages;
+ struct gpucg_resource_pool *rp;
+ int ret = 0;
+
+ nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
+
+ mutex_lock(&gpucg_mutex);
+ rp = cg_rpool_get_locked(gpucg, bucket);
+ /*
+ * Continue to hold gpucg_mutex because we use it to block charges while transfers are in
+ * progress to avoid potentially exceeding a limit.
+ */
+ if (IS_ERR(rp)) {
+ mutex_unlock(&gpucg_mutex);
+ return PTR_ERR(rp);
+ }
+
+ if (page_counter_try_charge(&rp->total, nr_pages, &counter))
+ css_get(&gpucg->css);
+ else
+ ret = -ENOMEM;
+ mutex_unlock(&gpucg_mutex);
+
+ return ret;
+}
+
+/**
+ * gpucg_uncharge - uncharge memory from the specified gpucg and gpucg_bucket.
+ * The caller must hold a reference to @gpucg obtained through gpucg_get().
+ *
+ * @gpucg: The gpu cgroup to uncharge the memory from.
+ * @bucket: The bucket to uncharge the memory from.
+ * @size: The size of memory to uncharge in bytes.
+ * This size will be rounded up to the nearest page size.
+ */
+void gpucg_uncharge(struct gpucg *gpucg, struct gpucg_bucket *bucket, u64 size)
+{
+ u64 nr_pages;
+ struct gpucg_resource_pool *rp;
+
+ mutex_lock(&gpucg_mutex);
+ rp = cg_rpool_find_locked(gpucg, bucket);
+ /*
+ * gpucg_mutex can be unlocked here, rp will stay valid until gpucg is freed and there are
+ * active refs on gpucg. Uncharges are fine while transfers are in progress since there is
+ * no potential to exceed a limit while uncharging and transferring.
+ */
+ mutex_unlock(&gpucg_mutex);
+
+ if (unlikely(!rp)) {
+ pr_err("Resource pool not found, incorrect charge/uncharge ordering?\n");
+ return;
+ }
+
+ nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
+ page_counter_uncharge(&rp->total, nr_pages);
+ css_put(&gpucg->css);
+}
+
+/**
+ * gpucg_register_bucket - Registers a bucket for memory accounting using the
+ * GPU cgroup controller.
+ *
+ * @bucket: The bucket to register for memory accounting.
+ * @name: Pointer to a null-terminated string to denote the name of the bucket. This name should be
+ * globally unique, and should not exceed @GPUCG_BUCKET_NAME_MAX_LEN bytes.
+ *
+ * @bucket must remain valid. @name will be copied.
+ *
+ * Returns 0 on success, or a negative errno code otherwise.
+ */
+int gpucg_register_bucket(struct gpucg_bucket *bucket, const char *name)
+{
+ struct gpucg_bucket *b;
+
+ if (!bucket || !name)
+ return -EINVAL;
+
+ if (strlen(name) >= GPUCG_BUCKET_NAME_MAX_LEN)
+ return -ENAMETOOLONG;
+
+ INIT_LIST_HEAD(&bucket->bucket_node);
+ INIT_LIST_HEAD(&bucket->rpools);
+ bucket->name = kstrdup_const(name, GFP_KERNEL);
+
+ mutex_lock(&gpucg_mutex);
+ list_for_each_entry(b, &gpucg_buckets, bucket_node) {
+ if (strncmp(b->name, bucket->name, GPUCG_BUCKET_NAME_MAX_LEN) == 0) {
+ mutex_unlock(&gpucg_mutex);
+ kfree_const(bucket->name);
+ return -EEXIST;
+ }
+ }
+ list_add_tail(&bucket->bucket_node, &gpucg_buckets);
+ mutex_unlock(&gpucg_mutex);
+
+ return 0;
+}
+
+static int gpucg_resource_show(struct seq_file *sf, void *v)
+{
+ struct gpucg_resource_pool *rpool;
+ struct gpucg *cg = css_to_gpucg(seq_css(sf));
+
+ mutex_lock(&gpucg_mutex);
+ list_for_each_entry(rpool, &cg->rpools, cg_node) {
+ seq_printf(sf, "%s %lu\n", rpool->bucket->name,
+ page_counter_read(&rpool->total) * PAGE_SIZE);
+ }
+ mutex_unlock(&gpucg_mutex);
+
+ return 0;
+}
+
+static struct cftype files[] = {
+ {
+ .name = "memory.current",
+ .seq_show = gpucg_resource_show,
+ },
+ { } /* terminate */
+};
+
+struct cgroup_subsys gpu_cgrp_subsys = {
+ .css_alloc = gpucg_css_alloc,
+ .css_free = gpucg_css_free,
+ .early_init = false,
+ .legacy_cftypes = files,
+ .dfl_cftypes = files,
+};
--
2.36.0.464.gb9c8b46e94-goog

2022-05-04 17:02:45

by Michal Koutný

Subject: Re: [PATCH v6 2/6] cgroup: gpu: Add a cgroup controller for allocator attribution of GPU memory

Hello.

On Mon, May 02, 2022 at 11:19:36PM +0000, "T.J. Mercier" <[email protected]> wrote:
> This patch adds APIs to:
> -allow a device to register for memory accounting using the GPU cgroup
> controller.
> -charge and uncharge allocated memory to a cgroup.

Is this API for separately built consumers?
The respective functions should be exported (EXPORT_SYMBOL_GPL) if I
haven't missed anything.
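
For instance, a sketch of what that would look like after each
definition in kernel/cgroup/gpu.c:

    EXPORT_SYMBOL_GPL(gpucg_charge);
    EXPORT_SYMBOL_GPL(gpucg_uncharge);
    EXPORT_SYMBOL_GPL(gpucg_register_bucket);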

> +#ifdef CONFIG_CGROUP_GPU
> + /* The GPU cgroup controller data structure */
> +struct gpucg {
> + struct cgroup_subsys_state css;
> +
> + /* list of all resource pools that belong to this cgroup */
> + struct list_head rpools;
> +};
> +
> +/* A named entity representing bucket of tracked memory. */
> +struct gpucg_bucket {
> + /* list of various resource pools in various cgroups that the bucket is part of */
> + struct list_head rpools;
> +
> + /* list of all buckets registered for GPU cgroup accounting */
> + struct list_head bucket_node;
> +
> + /* string to be used as identifier for accounting and limit setting */
> + const char *name;
> +};

Do these struct have to be defined "publicly"?
I.e. the driver code could just work with gpucg and gpucg_bucket
pointers.

> +int gpucg_register_bucket(struct gpucg_bucket *bucket, const char *name)

...and the registration function would return a pointer to newly
(internally) allocated gpucg_bucket.
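
A sketch of that alternative (the exact signature is only a
suggestion), with the struct definitions kept private to
kernel/cgroup/gpu.c:

    /* include/linux/cgroup_gpu.h: forward declarations only */
    struct gpucg;
    struct gpucg_bucket;

    /* Allocates and registers the bucket internally; returns the new
     * bucket or an ERR_PTR() on failure. */
    struct gpucg_bucket *gpucg_register_bucket(const char *name);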

Regards,
Michal

2022-05-04 21:54:52

by T.J. Mercier

Subject: Re: [PATCH v6 2/6] cgroup: gpu: Add a cgroup controller for allocator attribution of GPU memory

On Wed, May 4, 2022 at 5:26 AM Michal Koutný <[email protected]> wrote:
>
> Hello.
>
> On Mon, May 02, 2022 at 11:19:36PM +0000, "T.J. Mercier" <[email protected]> wrote:
> > This patch adds APIs to:
> > -allow a device to register for memory accounting using the GPU cgroup
> > controller.
> > -charge and uncharge allocated memory to a cgroup.
>
> Is this API for separately built consumers?
> The respective functions should be exported (EXPORT_SYMBOL_GPL) if I
> haven't missed anything.
>
As the only users are dmabuf heaps and dmabuf, and those cannot be
built as modules, I did not export the symbols here. However these
definitely would need to be exported to support use by modules, and I
have had to do that in one of my device test trees for this change.
Should I export these now for this series?

> > +#ifdef CONFIG_CGROUP_GPU
> > + /* The GPU cgroup controller data structure */
> > +struct gpucg {
> > + struct cgroup_subsys_state css;
> > +
> > + /* list of all resource pools that belong to this cgroup */
> > + struct list_head rpools;
> > +};
> > +
> > +/* A named entity representing bucket of tracked memory. */
> > +struct gpucg_bucket {
> > + /* list of various resource pools in various cgroups that the bucket is part of */
> > + struct list_head rpools;
> > +
> > + /* list of all buckets registered for GPU cgroup accounting */
> > + struct list_head bucket_node;
> > +
> > + /* string to be used as identifier for accounting and limit setting */
> > + const char *name;
> > +};
>
> Do these struct have to be defined "publicly"?
> I.e. the driver code could just work with gpucg and gpucg_bucket
> pointers.
>
> > +int gpucg_register_bucket(struct gpucg_bucket *bucket, const char *name)
>
> ...and the registration function would return a pointer to newly
> (internally) allocated gpucg_bucket.
>
No, except maybe the gpucg_bucket name which I can add an accessor
function for. Won't this mean depending on LTO for potential inlining
of the functions currently implemented in the header? I'm happy to
make this change, but I wonder why some parts of the kernel take this
approach and others do not.

> Regards,
> Michal

2022-05-07 13:26:45

by T.J. Mercier

Subject: Re: [PATCH v6 2/6] cgroup: gpu: Add a cgroup controller for allocator attribution of GPU memory

On Thu, May 5, 2022 at 4:50 AM 'Michal Koutný' via kernel-team
<[email protected]> wrote:
>
> On Wed, May 04, 2022 at 10:19:20AM -0700, "T.J. Mercier" <[email protected]> wrote:
> > Should I export these now for this series?
>
> Hehe, _I_ don't know.
> Depends on the likelihood this lands in and is built upon.
>
Ok, I'll leave these unexported for now unless I hear otherwise.

> > No, except maybe the gpucg_bucket name which I can add an accessor
> > function for. Won't this mean depending on LTO for potential inlining
> > of the functions currently implemented in the header?
>
> Yes. Also depends how much inlining here would be performance relevant.
> I suggested this with an OS vendor hat on, i.e. the less such ABI, the
> simpler.
>
> > I'm happy to make this change, but I wonder why some parts of the
> > kernel take this approach and others do not.
>
> I think there is no convention (see also
> Documentation/process/stable-api-nonsense.rst ;-)).
>
Alright I'll queue this change up for the next rev.

> Regards,
> Michal

Thanks again!


2022-05-08 01:02:07

by Michal Koutný

Subject: Re: [PATCH v6 2/6] cgroup: gpu: Add a cgroup controller for allocator attribution of GPU memory

On Wed, May 04, 2022 at 10:19:20AM -0700, "T.J. Mercier" <[email protected]> wrote:
> Should I export these now for this series?

Hehe, _I_ don't know.
Depends on the likelihood this lands in and is built upon.

> No, except maybe the gpucg_bucket name which I can add an accessor
> function for. Won't this mean depending on LTO for potential inlining
> of the functions currently implemented in the header?

Yes. Also depends how much inlining here would be performance relevant.
I suggested this with an OS vendor hat on, i.e. the less such ABI, the
simpler.

> I'm happy to make this change, but I wonder why some parts of the
> kernel take this approach and others do not.

I think there is no convention (see also
Documentation/process/stable-api-nonsense.rst ;-)).

Regards,
Michal