2022-02-17 16:28:28

by Andrzej Hajda

Subject: [PATCH 0/9] drm/i915: use ref_tracker library for tracking wakerefs

Hi,

The appearance of the ref_tracker library allows us to drop the custom solution
for wakeref tracking used in i915 and reuse the library instead.
For this, a few adjustments have been made to ref_tracker; details are in the patches.
I hope the changes are OK with the original author.

The patchset has been rebased on top of drm-tip to allow the changes to be tested by CI.

Added CC to netdev, as it is currently the only user of the library.

Regards
Andrzej


Andrzej Hajda (7):
lib/ref_tracker: add unlocked leak print helper
lib/ref_tracker: compact stacktraces before printing
lib/ref_tracker: __ref_tracker_dir_print improve printing
lib/ref_tracker: add printing to memory buffer
lib/ref_tracker: improve allocation flags
drm/i915: Correct type of wakeref variable
drm/i915: replace Intel internal tracker with kernel core ref_tracker

Chris Wilson (2):
drm/i915: Separate wakeref tracking
drm/i915: Track leaked gt->wakerefs

drivers/gpu/drm/i915/Kconfig.debug | 19 ++
drivers/gpu/drm/i915/Makefile | 1 +
.../drm/i915/display/intel_display_power.c | 2 +-
.../gpu/drm/i915/gem/i915_gem_execbuffer.c | 7 +-
.../i915/gem/selftests/i915_gem_coherency.c | 10 +-
.../drm/i915/gem/selftests/i915_gem_mman.c | 14 +-
drivers/gpu/drm/i915/gt/intel_breadcrumbs.c | 13 +-
.../gpu/drm/i915/gt/intel_breadcrumbs_types.h | 3 +-
drivers/gpu/drm/i915/gt/intel_engine_pm.c | 6 +-
drivers/gpu/drm/i915/gt/intel_engine_types.h | 2 +
.../drm/i915/gt/intel_execlists_submission.c | 2 +-
drivers/gpu/drm/i915/gt/intel_gt_pm.c | 12 +-
drivers/gpu/drm/i915/gt/intel_gt_pm.h | 36 ++-
drivers/gpu/drm/i915/gt/intel_gt_pm_debugfs.c | 4 +-
drivers/gpu/drm/i915/gt/selftest_engine_cs.c | 20 +-
drivers/gpu/drm/i915/gt/selftest_gt_pm.c | 5 +-
drivers/gpu/drm/i915/gt/selftest_reset.c | 10 +-
drivers/gpu/drm/i915/gt/selftest_rps.c | 17 +-
drivers/gpu/drm/i915/gt/selftest_slpc.c | 10 +-
.../gpu/drm/i915/gt/uc/intel_guc_submission.c | 11 +-
drivers/gpu/drm/i915/i915_pmu.c | 16 +-
drivers/gpu/drm/i915/intel_runtime_pm.c | 239 ++----------------
drivers/gpu/drm/i915/intel_runtime_pm.h | 10 +-
drivers/gpu/drm/i915/intel_wakeref.c | 10 +-
drivers/gpu/drm/i915/intel_wakeref.h | 112 +++++++-
include/linux/ref_tracker.h | 31 ++-
lib/ref_tracker.c | 150 ++++++++---
27 files changed, 429 insertions(+), 343 deletions(-)

--
2.25.1


2022-02-17 20:17:58

by Andrzej Hajda

Subject: [PATCH 8/9] drm/i915: Correct type of wakeref variable

Wakeref has a dedicated type. The assumption that it will be
int-compatible forever is incorrect.

Signed-off-by: Andrzej Hajda <[email protected]>
---
drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index 7799939c38945..b308dd0866eaf 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -2797,7 +2797,7 @@ static void destroyed_worker_func(struct work_struct *w)
struct intel_guc *guc = container_of(w, struct intel_guc,
submission_state.destroyed_worker);
struct intel_gt *gt = guc_to_gt(guc);
- int tmp;
+ intel_wakeref_t tmp;

with_intel_gt_pm(gt, tmp)
deregister_destroyed_contexts(guc);
--
2.25.1

2022-02-17 21:37:03

by Andrzej Hajda

Subject: [PATCH 2/9] lib/ref_tracker: compact stacktraces before printing

In cases where references are taken alternately on multiple exec paths, the
leak report can grow substantially. Sorting and grouping leaks by stack_handle
allows the report to be compacted.

Signed-off-by: Andrzej Hajda <[email protected]>
Reviewed-by: Chris Wilson <[email protected]>
---
lib/ref_tracker.c | 35 +++++++++++++++++++++++++++--------
1 file changed, 27 insertions(+), 8 deletions(-)

diff --git a/lib/ref_tracker.c b/lib/ref_tracker.c
index 1b0c6d645d64a..0e9c7d2828ccb 100644
--- a/lib/ref_tracker.c
+++ b/lib/ref_tracker.c
@@ -1,5 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-or-later
#include <linux/export.h>
+#include <linux/list_sort.h>
#include <linux/ref_tracker.h>
#include <linux/slab.h>
#include <linux/stacktrace.h>
@@ -14,23 +15,41 @@ struct ref_tracker {
depot_stack_handle_t free_stack_handle;
};

+static int ref_tracker_cmp(void *priv, const struct list_head *a, const struct list_head *b)
+{
+ const struct ref_tracker *ta = list_entry(a, const struct ref_tracker, head);
+ const struct ref_tracker *tb = list_entry(b, const struct ref_tracker, head);
+
+ return ta->alloc_stack_handle - tb->alloc_stack_handle;
+}
+
void __ref_tracker_dir_print(struct ref_tracker_dir *dir,
unsigned int display_limit)
{
+ unsigned int i = 0, count = 0;
struct ref_tracker *tracker;
- unsigned int i = 0;
+ depot_stack_handle_t stack;

lockdep_assert_held(&dir->lock);

+ if (list_empty(&dir->list))
+ return;
+
+ list_sort(NULL, &dir->list, ref_tracker_cmp);
+
list_for_each_entry(tracker, &dir->list, head) {
- if (i < display_limit) {
- pr_err("leaked reference.\n");
- if (tracker->alloc_stack_handle)
- stack_depot_print(tracker->alloc_stack_handle);
- i++;
- } else {
+ if (i++ >= display_limit)
break;
- }
+ if (!count++)
+ stack = tracker->alloc_stack_handle;
+ if (stack == tracker->alloc_stack_handle &&
+ !list_is_last(&tracker->head, &dir->list))
+ continue;
+
+ pr_err("leaked %d references.\n", count);
+ if (stack)
+ stack_depot_print(stack);
+ count = 0;
}
}
EXPORT_SYMBOL(__ref_tracker_dir_print);
--
2.25.1

2022-02-17 23:04:30

by Eric Dumazet

Subject: Re: [PATCH 2/9] lib/ref_tracker: compact stacktraces before printing

On Thu, Feb 17, 2022 at 7:23 AM Eric Dumazet <[email protected]> wrote:


> Then, iterating the list and update the array (that you can keep
> sorted by ->stack_handle)

The 'sorted' part might be unnecessary, if all callers keep
@display_limits small enough.

2022-02-17 23:22:32

by Andrzej Hajda

Subject: [PATCH 6/9] drm/i915: Separate wakeref tracking

From: Chris Wilson <[email protected]>

Extract the callstack tracking of intel_runtime_pm.c into its own
utility so that we can reuse it for other online debugging of
scoped wakerefs.
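
A minimal sketch of how a user of the extracted tracker looks; the tracker
instance and the drm_printer setup are illustrative only, while the functions
are the ones added by this patch:

  struct intel_wakeref_tracker tracker;
  struct drm_printer p = drm_debug_printer("example");
  intel_wakeref_t handle;

  intel_wakeref_tracker_init(&tracker);

  handle = intel_wakeref_tracker_add(&tracker);   /* on acquire */
  /* ... the wakeref is held ... */
  intel_wakeref_tracker_remove(&tracker, handle); /* on release */

  intel_wakeref_tracker_show(&tracker, &p);       /* dump outstanding stacks */
  intel_wakeref_tracker_fini(&tracker);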

Signed-off-by: Chris Wilson <[email protected]>
Reviewed-by: Andrzej Hajda <[email protected]>
Signed-off-by: Andrzej Hajda <[email protected]>
---
drivers/gpu/drm/i915/Kconfig.debug | 9 +
drivers/gpu/drm/i915/Makefile | 4 +
drivers/gpu/drm/i915/intel_runtime_pm.c | 244 +++----------------
drivers/gpu/drm/i915/intel_runtime_pm.h | 10 +-
drivers/gpu/drm/i915/intel_wakeref.h | 6 +-
drivers/gpu/drm/i915/intel_wakeref_tracker.c | 234 ++++++++++++++++++
drivers/gpu/drm/i915/intel_wakeref_tracker.h | 76 ++++++
7 files changed, 355 insertions(+), 228 deletions(-)
create mode 100644 drivers/gpu/drm/i915/intel_wakeref_tracker.c
create mode 100644 drivers/gpu/drm/i915/intel_wakeref_tracker.h

diff --git a/drivers/gpu/drm/i915/Kconfig.debug b/drivers/gpu/drm/i915/Kconfig.debug
index e7fd3e76f8a20..8b1973146e848 100644
--- a/drivers/gpu/drm/i915/Kconfig.debug
+++ b/drivers/gpu/drm/i915/Kconfig.debug
@@ -33,6 +33,7 @@ config DRM_I915_DEBUG
select PREEMPT_COUNT
select I2C_CHARDEV
select STACKDEPOT
+ select STACKTRACE
select DRM_DP_AUX_CHARDEV
select X86_MSR # used by igt/pm_rpm
select DRM_VGEM # used by igt/prime_vgem (dmabuf interop checks)
@@ -45,6 +46,7 @@ config DRM_I915_DEBUG
select DRM_I915_DEBUG_GEM
select DRM_I915_DEBUG_GEM_ONCE
select DRM_I915_DEBUG_MMIO
+ select DRM_I915_TRACK_WAKEREF
select DRM_I915_DEBUG_RUNTIME_PM
select DRM_I915_SW_FENCE_DEBUG_OBJECTS
select DRM_I915_SELFTEST
@@ -235,11 +237,18 @@ config DRM_I915_DEBUG_VBLANK_EVADE

If in doubt, say "N".

+config DRM_I915_TRACK_WAKEREF
+ depends on STACKDEPOT
+ depends on STACKTRACE
+ bool
+
config DRM_I915_DEBUG_RUNTIME_PM
bool "Enable extra state checking for runtime PM"
depends on DRM_I915
default n
select STACKDEPOT
+ select STACKTRACE
+ select DRM_I915_TRACK_WAKEREF
help
Choose this option to turn on extra state checking for the
runtime PM functionality. This may introduce overhead during
diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index 9d588d936e3dc..88a403d3294cb 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -75,6 +75,10 @@ i915-$(CONFIG_DEBUG_FS) += \
i915_debugfs_params.o \
display/intel_display_debugfs.o \
display/intel_pipe_crc.o
+
+i915-$(CONFIG_DRM_I915_TRACK_WAKEREF) += \
+ intel_wakeref_tracker.o
+
i915-$(CONFIG_PERF_EVENTS) += i915_pmu.o

# "Graphics Technology" (aka we talk to the gpu)
diff --git a/drivers/gpu/drm/i915/intel_runtime_pm.c b/drivers/gpu/drm/i915/intel_runtime_pm.c
index 6ed5786bcd299..7bd10efa56bf3 100644
--- a/drivers/gpu/drm/i915/intel_runtime_pm.c
+++ b/drivers/gpu/drm/i915/intel_runtime_pm.c
@@ -52,182 +52,37 @@

#if IS_ENABLED(CONFIG_DRM_I915_DEBUG_RUNTIME_PM)

-#include <linux/sort.h>
-
-#define STACKDEPTH 8
-
-static noinline depot_stack_handle_t __save_depot_stack(void)
-{
- unsigned long entries[STACKDEPTH];
- unsigned int n;
-
- n = stack_trace_save(entries, ARRAY_SIZE(entries), 1);
- return stack_depot_save(entries, n, GFP_NOWAIT | __GFP_NOWARN);
-}
-
static void init_intel_runtime_pm_wakeref(struct intel_runtime_pm *rpm)
{
- spin_lock_init(&rpm->debug.lock);
- stack_depot_init();
+ intel_wakeref_tracker_init(&rpm->debug);
}

-static noinline depot_stack_handle_t
+static intel_wakeref_t
track_intel_runtime_pm_wakeref(struct intel_runtime_pm *rpm)
{
- depot_stack_handle_t stack, *stacks;
- unsigned long flags;
-
- if (rpm->no_wakeref_tracking)
- return -1;
-
- stack = __save_depot_stack();
- if (!stack)
+ if (!rpm->available)
return -1;

- spin_lock_irqsave(&rpm->debug.lock, flags);
-
- if (!rpm->debug.count)
- rpm->debug.last_acquire = stack;
-
- stacks = krealloc(rpm->debug.owners,
- (rpm->debug.count + 1) * sizeof(*stacks),
- GFP_NOWAIT | __GFP_NOWARN);
- if (stacks) {
- stacks[rpm->debug.count++] = stack;
- rpm->debug.owners = stacks;
- } else {
- stack = -1;
- }
-
- spin_unlock_irqrestore(&rpm->debug.lock, flags);
-
- return stack;
+ return intel_wakeref_tracker_add(&rpm->debug);
}

static void untrack_intel_runtime_pm_wakeref(struct intel_runtime_pm *rpm,
- depot_stack_handle_t stack)
+ intel_wakeref_t wakeref)
{
- struct drm_i915_private *i915 = container_of(rpm,
- struct drm_i915_private,
- runtime_pm);
- unsigned long flags, n;
- bool found = false;
-
- if (unlikely(stack == -1))
- return;
-
- spin_lock_irqsave(&rpm->debug.lock, flags);
- for (n = rpm->debug.count; n--; ) {
- if (rpm->debug.owners[n] == stack) {
- memmove(rpm->debug.owners + n,
- rpm->debug.owners + n + 1,
- (--rpm->debug.count - n) * sizeof(stack));
- found = true;
- break;
- }
- }
- spin_unlock_irqrestore(&rpm->debug.lock, flags);
-
- if (drm_WARN(&i915->drm, !found,
- "Unmatched wakeref (tracking %lu), count %u\n",
- rpm->debug.count, atomic_read(&rpm->wakeref_count))) {
- char *buf;
-
- buf = kmalloc(PAGE_SIZE, GFP_NOWAIT | __GFP_NOWARN);
- if (!buf)
- return;
-
- stack_depot_snprint(stack, buf, PAGE_SIZE, 2);
- DRM_DEBUG_DRIVER("wakeref %x from\n%s", stack, buf);
-
- stack = READ_ONCE(rpm->debug.last_release);
- if (stack) {
- stack_depot_snprint(stack, buf, PAGE_SIZE, 2);
- DRM_DEBUG_DRIVER("wakeref last released at\n%s", buf);
- }
-
- kfree(buf);
- }
+ intel_wakeref_tracker_remove(&rpm->debug, wakeref);
}

-static int cmphandle(const void *_a, const void *_b)
+static void untrack_all_intel_runtime_pm_wakerefs(struct intel_runtime_pm *rpm)
{
- const depot_stack_handle_t * const a = _a, * const b = _b;
+ struct drm_printer p = drm_debug_printer("i915");

- if (*a < *b)
- return -1;
- else if (*a > *b)
- return 1;
- else
- return 0;
-}
-
-static void
-__print_intel_runtime_pm_wakeref(struct drm_printer *p,
- const struct intel_runtime_pm_debug *dbg)
-{
- unsigned long i;
- char *buf;
-
- buf = kmalloc(PAGE_SIZE, GFP_NOWAIT | __GFP_NOWARN);
- if (!buf)
- return;
-
- if (dbg->last_acquire) {
- stack_depot_snprint(dbg->last_acquire, buf, PAGE_SIZE, 2);
- drm_printf(p, "Wakeref last acquired:\n%s", buf);
- }
-
- if (dbg->last_release) {
- stack_depot_snprint(dbg->last_release, buf, PAGE_SIZE, 2);
- drm_printf(p, "Wakeref last released:\n%s", buf);
- }
-
- drm_printf(p, "Wakeref count: %lu\n", dbg->count);
-
- sort(dbg->owners, dbg->count, sizeof(*dbg->owners), cmphandle, NULL);
-
- for (i = 0; i < dbg->count; i++) {
- depot_stack_handle_t stack = dbg->owners[i];
- unsigned long rep;
-
- rep = 1;
- while (i + 1 < dbg->count && dbg->owners[i + 1] == stack)
- rep++, i++;
- stack_depot_snprint(stack, buf, PAGE_SIZE, 2);
- drm_printf(p, "Wakeref x%lu taken at:\n%s", rep, buf);
- }
-
- kfree(buf);
-}
-
-static noinline void
-__untrack_all_wakerefs(struct intel_runtime_pm_debug *debug,
- struct intel_runtime_pm_debug *saved)
-{
- *saved = *debug;
-
- debug->owners = NULL;
- debug->count = 0;
- debug->last_release = __save_depot_stack();
-}
-
-static void
-dump_and_free_wakeref_tracking(struct intel_runtime_pm_debug *debug)
-{
- if (debug->count) {
- struct drm_printer p = drm_debug_printer("i915");
-
- __print_intel_runtime_pm_wakeref(&p, debug);
- }
-
- kfree(debug->owners);
+ intel_wakeref_tracker_reset(&rpm->debug, &p);
}

static noinline void
__intel_wakeref_dec_and_check_tracking(struct intel_runtime_pm *rpm)
{
- struct intel_runtime_pm_debug dbg = {};
+ struct intel_wakeref_tracker saved;
unsigned long flags;

if (!atomic_dec_and_lock_irqsave(&rpm->wakeref_count,
@@ -235,60 +90,21 @@ __intel_wakeref_dec_and_check_tracking(struct intel_runtime_pm *rpm)
flags))
return;

- __untrack_all_wakerefs(&rpm->debug, &dbg);
+ saved = __intel_wakeref_tracker_reset(&rpm->debug);
spin_unlock_irqrestore(&rpm->debug.lock, flags);

- dump_and_free_wakeref_tracking(&dbg);
-}
-
-static noinline void
-untrack_all_intel_runtime_pm_wakerefs(struct intel_runtime_pm *rpm)
-{
- struct intel_runtime_pm_debug dbg = {};
- unsigned long flags;
-
- spin_lock_irqsave(&rpm->debug.lock, flags);
- __untrack_all_wakerefs(&rpm->debug, &dbg);
- spin_unlock_irqrestore(&rpm->debug.lock, flags);
+ if (saved.count) {
+ struct drm_printer p = drm_debug_printer("i915");

- dump_and_free_wakeref_tracking(&dbg);
+ __intel_wakeref_tracker_show(&saved, &p);
+ intel_wakeref_tracker_fini(&saved);
+ }
}

void print_intel_runtime_pm_wakeref(struct intel_runtime_pm *rpm,
struct drm_printer *p)
{
- struct intel_runtime_pm_debug dbg = {};
-
- do {
- unsigned long alloc = dbg.count;
- depot_stack_handle_t *s;
-
- spin_lock_irq(&rpm->debug.lock);
- dbg.count = rpm->debug.count;
- if (dbg.count <= alloc) {
- memcpy(dbg.owners,
- rpm->debug.owners,
- dbg.count * sizeof(*s));
- }
- dbg.last_acquire = rpm->debug.last_acquire;
- dbg.last_release = rpm->debug.last_release;
- spin_unlock_irq(&rpm->debug.lock);
- if (dbg.count <= alloc)
- break;
-
- s = krealloc(dbg.owners,
- dbg.count * sizeof(*s),
- GFP_NOWAIT | __GFP_NOWARN);
- if (!s)
- goto out;
-
- dbg.owners = s;
- } while (1);
-
- __print_intel_runtime_pm_wakeref(p, &dbg);
-
-out:
- kfree(dbg.owners);
+ intel_wakeref_tracker_show(&rpm->debug, p);
}

#else
@@ -297,14 +113,14 @@ static void init_intel_runtime_pm_wakeref(struct intel_runtime_pm *rpm)
{
}

-static depot_stack_handle_t
+static intel_wakeref_t
track_intel_runtime_pm_wakeref(struct intel_runtime_pm *rpm)
{
return -1;
}

static void untrack_intel_runtime_pm_wakeref(struct intel_runtime_pm *rpm,
- intel_wakeref_t wref)
+ intel_wakeref_t wakeref)
{
}

@@ -349,9 +165,8 @@ intel_runtime_pm_release(struct intel_runtime_pm *rpm, int wakelock)
static intel_wakeref_t __intel_runtime_pm_get(struct intel_runtime_pm *rpm,
bool wakelock)
{
- struct drm_i915_private *i915 = container_of(rpm,
- struct drm_i915_private,
- runtime_pm);
+ struct drm_i915_private *i915 =
+ container_of(rpm, struct drm_i915_private, runtime_pm);
int ret;

ret = pm_runtime_get_sync(rpm->kdev);
@@ -556,9 +371,8 @@ void intel_runtime_pm_put(struct intel_runtime_pm *rpm, intel_wakeref_t wref)
*/
void intel_runtime_pm_enable(struct intel_runtime_pm *rpm)
{
- struct drm_i915_private *i915 = container_of(rpm,
- struct drm_i915_private,
- runtime_pm);
+ struct drm_i915_private *i915 =
+ container_of(rpm, struct drm_i915_private, runtime_pm);
struct device *kdev = rpm->kdev;

/*
@@ -604,9 +418,8 @@ void intel_runtime_pm_enable(struct intel_runtime_pm *rpm)

void intel_runtime_pm_disable(struct intel_runtime_pm *rpm)
{
- struct drm_i915_private *i915 = container_of(rpm,
- struct drm_i915_private,
- runtime_pm);
+ struct drm_i915_private *i915 =
+ container_of(rpm, struct drm_i915_private, runtime_pm);
struct device *kdev = rpm->kdev;

/* Transfer rpm ownership back to core */
@@ -621,9 +434,8 @@ void intel_runtime_pm_disable(struct intel_runtime_pm *rpm)

void intel_runtime_pm_driver_release(struct intel_runtime_pm *rpm)
{
- struct drm_i915_private *i915 = container_of(rpm,
- struct drm_i915_private,
- runtime_pm);
+ struct drm_i915_private *i915 =
+ container_of(rpm, struct drm_i915_private, runtime_pm);
int count = atomic_read(&rpm->wakeref_count);

drm_WARN(&i915->drm, count,
@@ -637,7 +449,7 @@ void intel_runtime_pm_driver_release(struct intel_runtime_pm *rpm)
void intel_runtime_pm_init_early(struct intel_runtime_pm *rpm)
{
struct drm_i915_private *i915 =
- container_of(rpm, struct drm_i915_private, runtime_pm);
+ container_of(rpm, struct drm_i915_private, runtime_pm);
struct pci_dev *pdev = to_pci_dev(i915->drm.dev);
struct device *kdev = &pdev->dev;

diff --git a/drivers/gpu/drm/i915/intel_runtime_pm.h b/drivers/gpu/drm/i915/intel_runtime_pm.h
index d9160e3ff4afc..0871fa2176474 100644
--- a/drivers/gpu/drm/i915/intel_runtime_pm.h
+++ b/drivers/gpu/drm/i915/intel_runtime_pm.h
@@ -61,15 +61,7 @@ struct intel_runtime_pm {
* paired rpm_put) we can remove corresponding pairs of and keep
* the array trimmed to active wakerefs.
*/
- struct intel_runtime_pm_debug {
- spinlock_t lock;
-
- depot_stack_handle_t last_acquire;
- depot_stack_handle_t last_release;
-
- depot_stack_handle_t *owners;
- unsigned long count;
- } debug;
+ struct intel_wakeref_tracker debug;
#endif
};

diff --git a/drivers/gpu/drm/i915/intel_wakeref.h b/drivers/gpu/drm/i915/intel_wakeref.h
index 4f4c2e15e736e..e6ba389652d74 100644
--- a/drivers/gpu/drm/i915/intel_wakeref.h
+++ b/drivers/gpu/drm/i915/intel_wakeref.h
@@ -17,7 +17,9 @@
#include <linux/timer.h>
#include <linux/workqueue.h>

-#if IS_ENABLED(CONFIG_DRM_I915_DEBUG)
+#include "intel_wakeref_tracker.h"
+
+#if IS_ENABLED(CONFIG_DRM_I915_DEBUG_WAKEREF)
#define INTEL_WAKEREF_BUG_ON(expr) BUG_ON(expr)
#else
#define INTEL_WAKEREF_BUG_ON(expr) BUILD_BUG_ON_INVALID(expr)
@@ -26,8 +28,6 @@
struct intel_runtime_pm;
struct intel_wakeref;

-typedef depot_stack_handle_t intel_wakeref_t;
-
struct intel_wakeref_ops {
int (*get)(struct intel_wakeref *wf);
int (*put)(struct intel_wakeref *wf);
diff --git a/drivers/gpu/drm/i915/intel_wakeref_tracker.c b/drivers/gpu/drm/i915/intel_wakeref_tracker.c
new file mode 100644
index 0000000000000..a0bcef13a1085
--- /dev/null
+++ b/drivers/gpu/drm/i915/intel_wakeref_tracker.c
@@ -0,0 +1,234 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2021 Intel Corporation
+ */
+
+#include <linux/slab.h>
+#include <linux/stackdepot.h>
+#include <linux/stacktrace.h>
+#include <linux/sort.h>
+
+#include <drm/drm_print.h>
+
+#include "intel_wakeref.h"
+
+#define STACKDEPTH 8
+
+static noinline depot_stack_handle_t __save_depot_stack(void)
+{
+ unsigned long entries[STACKDEPTH];
+ unsigned int n;
+
+ n = stack_trace_save(entries, ARRAY_SIZE(entries), 1);
+ return stack_depot_save(entries, n, GFP_NOWAIT | __GFP_NOWARN);
+}
+
+static void __print_depot_stack(depot_stack_handle_t stack,
+ char *buf, int sz, int indent)
+{
+ unsigned long *entries;
+ unsigned int nr_entries;
+
+ nr_entries = stack_depot_fetch(stack, &entries);
+ stack_trace_snprint(buf, sz, entries, nr_entries, indent);
+}
+
+static int cmphandle(const void *_a, const void *_b)
+{
+ const depot_stack_handle_t * const a = _a, * const b = _b;
+
+ if (*a < *b)
+ return -1;
+ else if (*a > *b)
+ return 1;
+ else
+ return 0;
+}
+
+void
+__intel_wakeref_tracker_show(const struct intel_wakeref_tracker *w,
+ struct drm_printer *p)
+{
+ unsigned long i;
+ char *buf;
+
+ buf = kmalloc(PAGE_SIZE, GFP_NOWAIT | __GFP_NOWARN);
+ if (!buf)
+ return;
+
+ if (w->last_acquire) {
+ __print_depot_stack(w->last_acquire, buf, PAGE_SIZE, 2);
+ drm_printf(p, "Wakeref last acquired:\n%s", buf);
+ }
+
+ if (w->last_release) {
+ __print_depot_stack(w->last_release, buf, PAGE_SIZE, 2);
+ drm_printf(p, "Wakeref last released:\n%s", buf);
+ }
+
+ drm_printf(p, "Wakeref count: %lu\n", w->count);
+
+ sort(w->owners, w->count, sizeof(*w->owners), cmphandle, NULL);
+
+ for (i = 0; i < w->count; i++) {
+ depot_stack_handle_t stack = w->owners[i];
+ unsigned long rep;
+
+ rep = 1;
+ while (i + 1 < w->count && w->owners[i + 1] == stack)
+ rep++, i++;
+ __print_depot_stack(stack, buf, PAGE_SIZE, 2);
+ drm_printf(p, "Wakeref x%lu taken at:\n%s", rep, buf);
+ }
+
+ kfree(buf);
+}
+
+void intel_wakeref_tracker_show(struct intel_wakeref_tracker *w,
+ struct drm_printer *p)
+{
+ struct intel_wakeref_tracker tmp = {};
+
+ do {
+ unsigned long alloc = tmp.count;
+ depot_stack_handle_t *s;
+
+ spin_lock_irq(&w->lock);
+ tmp.count = w->count;
+ if (tmp.count <= alloc)
+ memcpy(tmp.owners, w->owners, tmp.count * sizeof(*s));
+ tmp.last_acquire = w->last_acquire;
+ tmp.last_release = w->last_release;
+ spin_unlock_irq(&w->lock);
+ if (tmp.count <= alloc)
+ break;
+
+ s = krealloc(tmp.owners,
+ tmp.count * sizeof(*s),
+ GFP_NOWAIT | __GFP_NOWARN);
+ if (!s)
+ goto out;
+
+ tmp.owners = s;
+ } while (1);
+
+ __intel_wakeref_tracker_show(&tmp, p);
+
+out:
+ intel_wakeref_tracker_fini(&tmp);
+}
+
+intel_wakeref_t intel_wakeref_tracker_add(struct intel_wakeref_tracker *w)
+{
+ depot_stack_handle_t stack, *stacks;
+ unsigned long flags;
+
+ stack = __save_depot_stack();
+ if (!stack)
+ return -1;
+
+ spin_lock_irqsave(&w->lock, flags);
+
+ if (!w->count)
+ w->last_acquire = stack;
+
+ stacks = krealloc(w->owners,
+ (w->count + 1) * sizeof(*stacks),
+ GFP_NOWAIT | __GFP_NOWARN);
+ if (stacks) {
+ stacks[w->count++] = stack;
+ w->owners = stacks;
+ } else {
+ stack = -1;
+ }
+
+ spin_unlock_irqrestore(&w->lock, flags);
+
+ return stack;
+}
+
+void intel_wakeref_tracker_remove(struct intel_wakeref_tracker *w,
+ intel_wakeref_t stack)
+{
+ unsigned long flags, n;
+ bool found = false;
+
+ if (unlikely(stack == -1))
+ return;
+
+ spin_lock_irqsave(&w->lock, flags);
+ for (n = w->count; n--; ) {
+ if (w->owners[n] == stack) {
+ memmove(w->owners + n,
+ w->owners + n + 1,
+ (--w->count - n) * sizeof(stack));
+ found = true;
+ break;
+ }
+ }
+ spin_unlock_irqrestore(&w->lock, flags);
+
+ if (WARN(!found,
+ "Unmatched wakeref %x, tracking %lu\n",
+ stack, w->count)) {
+ char *buf;
+
+ buf = kmalloc(PAGE_SIZE, GFP_NOWAIT | __GFP_NOWARN);
+ if (!buf)
+ return;
+
+ __print_depot_stack(stack, buf, PAGE_SIZE, 2);
+ pr_err("wakeref %x from\n%s", stack, buf);
+
+ stack = READ_ONCE(w->last_release);
+ if (stack && !w->count) {
+ __print_depot_stack(stack, buf, PAGE_SIZE, 2);
+ pr_err("wakeref last released at\n%s", buf);
+ }
+
+ kfree(buf);
+ }
+}
+
+struct intel_wakeref_tracker
+__intel_wakeref_tracker_reset(struct intel_wakeref_tracker *w)
+{
+ struct intel_wakeref_tracker saved;
+
+ lockdep_assert_held(&w->lock);
+
+ saved = *w;
+
+ w->owners = NULL;
+ w->count = 0;
+ w->last_release = __save_depot_stack();
+
+ return saved;
+}
+
+void intel_wakeref_tracker_reset(struct intel_wakeref_tracker *w,
+ struct drm_printer *p)
+{
+ struct intel_wakeref_tracker tmp;
+
+ spin_lock_irq(&w->lock);
+ tmp = __intel_wakeref_tracker_reset(w);
+ spin_unlock_irq(&w->lock);
+
+ if (tmp.count)
+ __intel_wakeref_tracker_show(&tmp, p);
+
+ intel_wakeref_tracker_fini(&tmp);
+}
+
+void intel_wakeref_tracker_init(struct intel_wakeref_tracker *w)
+{
+ memset(w, 0, sizeof(*w));
+ spin_lock_init(&w->lock);
+ stack_depot_init();
+}
+
+void intel_wakeref_tracker_fini(struct intel_wakeref_tracker *w)
+{
+ kfree(w->owners);
+}
diff --git a/drivers/gpu/drm/i915/intel_wakeref_tracker.h b/drivers/gpu/drm/i915/intel_wakeref_tracker.h
new file mode 100644
index 0000000000000..61df68e28c0fb
--- /dev/null
+++ b/drivers/gpu/drm/i915/intel_wakeref_tracker.h
@@ -0,0 +1,76 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2019 Intel Corporation
+ */
+
+#ifndef INTEL_WAKEREF_TRACKER_H
+#define INTEL_WAKEREF_TRACKER_H
+
+#include <linux/kconfig.h>
+#include <linux/spinlock.h>
+#include <linux/stackdepot.h>
+
+typedef depot_stack_handle_t intel_wakeref_t;
+
+struct drm_printer;
+
+struct intel_wakeref_tracker {
+ spinlock_t lock;
+
+ depot_stack_handle_t last_acquire;
+ depot_stack_handle_t last_release;
+
+ depot_stack_handle_t *owners;
+ unsigned long count;
+};
+
+#if IS_ENABLED(CONFIG_DRM_I915_TRACK_WAKEREF)
+
+void intel_wakeref_tracker_init(struct intel_wakeref_tracker *w);
+void intel_wakeref_tracker_fini(struct intel_wakeref_tracker *w);
+
+intel_wakeref_t intel_wakeref_tracker_add(struct intel_wakeref_tracker *w);
+void intel_wakeref_tracker_remove(struct intel_wakeref_tracker *w,
+ intel_wakeref_t handle);
+
+struct intel_wakeref_tracker
+__intel_wakeref_tracker_reset(struct intel_wakeref_tracker *w);
+void intel_wakeref_tracker_reset(struct intel_wakeref_tracker *w,
+ struct drm_printer *p);
+
+void __intel_wakeref_tracker_show(const struct intel_wakeref_tracker *w,
+ struct drm_printer *p);
+void intel_wakeref_tracker_show(struct intel_wakeref_tracker *w,
+ struct drm_printer *p);
+
+#else
+
+static inline void intel_wakeref_tracker_init(struct intel_wakeref_tracker *w) {}
+static inline void intel_wakeref_tracker_fini(struct intel_wakeref_tracker *w) {}
+
+static inline intel_wakeref_t
+intel_wakeref_tracker_add(struct intel_wakeref_tracker *w)
+{
+ return -1;
+}
+
+static inline void
+intel_wakeref_untrack_remove(struct intel_wakeref_tracker *w, intel_wakeref_t handle) {}
+
+static inline struct intel_wakeref_tracker
+__intel_wakeref_tracker_reset(struct intel_wakeref_tracker *w)
+{
+ return (struct intel_wakeref_tracker){};
+}
+
+static inline void intel_wakeref_tracker_reset(struct intel_wakeref_tracker *w,
+ struct drm_printer *p)
+{
+}
+
+static inline void __intel_wakeref_tracker_show(const struct intel_wakeref_tracker *w, struct drm_printer *p) {}
+static inline void intel_wakeref_tracker_show(struct intel_wakeref_tracker *w, struct drm_printer *p) {}
+
+#endif
+
+#endif /* INTEL_WAKEREF_TRACKER_H */
--
2.25.1

2022-02-17 23:24:23

by Eric Dumazet

Subject: Re: [PATCH 2/9] lib/ref_tracker: compact stacktraces before printing

On Thu, Feb 17, 2022 at 6:05 AM Andrzej Hajda <[email protected]> wrote:
>
> In cases references are taken alternately on multiple exec paths leak
> report can grow substantially, sorting and grouping leaks by stack_handle
> allows to compact it.
>
> Signed-off-by: Andrzej Hajda <[email protected]>
> Reviewed-by: Chris Wilson <[email protected]>
> ---
> lib/ref_tracker.c | 35 +++++++++++++++++++++++++++--------
> 1 file changed, 27 insertions(+), 8 deletions(-)
>
> diff --git a/lib/ref_tracker.c b/lib/ref_tracker.c
> index 1b0c6d645d64a..0e9c7d2828ccb 100644
> --- a/lib/ref_tracker.c
> +++ b/lib/ref_tracker.c
> @@ -1,5 +1,6 @@
> // SPDX-License-Identifier: GPL-2.0-or-later
> #include <linux/export.h>
> +#include <linux/list_sort.h>
> #include <linux/ref_tracker.h>
> #include <linux/slab.h>
> #include <linux/stacktrace.h>
> @@ -14,23 +15,41 @@ struct ref_tracker {
> depot_stack_handle_t free_stack_handle;
> };
>
> +static int ref_tracker_cmp(void *priv, const struct list_head *a, const struct list_head *b)
> +{
> + const struct ref_tracker *ta = list_entry(a, const struct ref_tracker, head);
> + const struct ref_tracker *tb = list_entry(b, const struct ref_tracker, head);
> +
> + return ta->alloc_stack_handle - tb->alloc_stack_handle;
> +}
> +
> void __ref_tracker_dir_print(struct ref_tracker_dir *dir,
> unsigned int display_limit)
> {
> + unsigned int i = 0, count = 0;
> struct ref_tracker *tracker;
> - unsigned int i = 0;
> + depot_stack_handle_t stack;
>
> lockdep_assert_held(&dir->lock);
>
> + if (list_empty(&dir->list))
> + return;
> +
> + list_sort(NULL, &dir->list, ref_tracker_cmp);

What is going to be the cost of sorting a list with 1,000,000 items in it ?

I just want to make sure we do not trade printing at most ~10 references
(from netdev_wait_allrefs()) for a soft lockup :/ with no useful info
if something went terribly wrong.

I suggest that you do not sort a potentially big list, and instead
attempt to allocate an array of @display_limits 'struct stack_counts'

I suspect @display_limits will always be kept to a reasonable value
(less than 100 ?)

struct stack_counts {
depot_stack_handle_t stack_handle;
unsigned int count;
}

Then, iterate the list and update the array (which you can keep
sorted by ->stack_handle).

Then, after iterating, print the (at most) @display_limits handles
found in the temp array.
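
For illustration, a rough sketch of that suggestion (this is not the posted
patch; it only outlines the idea within __ref_tracker_dir_print(), reusing
the 'struct stack_counts' above):

  void __ref_tracker_dir_print(struct ref_tracker_dir *dir,
                               unsigned int display_limit)
  {
          struct stack_counts *stacks;
          struct ref_tracker *tracker;
          unsigned int i, nr = 0;

          lockdep_assert_held(&dir->lock);

          stacks = kmalloc_array(display_limit, sizeof(*stacks),
                                 GFP_NOWAIT | __GFP_NOWARN);
          if (!stacks)
                  return;

          list_for_each_entry(tracker, &dir->list, head) {
                  depot_stack_handle_t stack = tracker->alloc_stack_handle;

                  /* linear scan; display_limit is expected to stay small */
                  for (i = 0; i < nr; i++)
                          if (stacks[i].stack_handle == stack)
                                  break;
                  if (i == nr) {
                          if (nr == display_limit)
                                  continue;   /* report only the first N stacks */
                          stacks[nr].stack_handle = stack;
                          stacks[nr++].count = 0;
                  }
                  stacks[i].count++;
          }

          for (i = 0; i < nr; i++) {
                  pr_err("leaked %u references.\n", stacks[i].count);
                  if (stacks[i].stack_handle)
                          stack_depot_print(stacks[i].stack_handle);
          }

          kfree(stacks);
  }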

> +
> list_for_each_entry(tracker, &dir->list, head) {
> - if (i < display_limit) {
> - pr_err("leaked reference.\n");
> - if (tracker->alloc_stack_handle)
> - stack_depot_print(tracker->alloc_stack_handle);
> - i++;
> - } else {
> + if (i++ >= display_limit)
> break;
> - }
> + if (!count++)
> + stack = tracker->alloc_stack_handle;
> + if (stack == tracker->alloc_stack_handle &&
> + !list_is_last(&tracker->head, &dir->list))
> + continue;
> +
> + pr_err("leaked %d references.\n", count);
> + if (stack)
> + stack_depot_print(stack);
> + count = 0;
> }
> }
> EXPORT_SYMBOL(__ref_tracker_dir_print);
> --
> 2.25.1
>

2022-02-17 23:43:55

by Andrzej Hajda

Subject: [PATCH 7/9] drm/i915: Track leaked gt->wakerefs

From: Chris Wilson <[email protected]>

Track every intel_gt_pm_get() until its corresponding release in
intel_gt_pm_put() by returning a cookie to the caller on acquire that
must be passed back on release. When there is an imbalance, we can see
who either tried to free a stale wakeref, or who forgot to free theirs.
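
For example, a caller now carries the cookie from get to put (a minimal
sketch; gt and do_work() are placeholders):

  intel_wakeref_t wakeref;

  wakeref = intel_gt_pm_get(gt);
  /* ... the GT is held awake here ... */
  intel_gt_pm_put(gt, wakeref);

  /* or, using the scoped helper: */
  with_intel_gt_pm(gt, wakeref)
          do_work(gt);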

v2: Rebase from backporting wakeref leak (Umesh)

Signed-off-by: Chris Wilson <[email protected]>
Reviewed-by: Andrzej Hajda <[email protected]>
Signed-off-by: Andrzej Hajda <[email protected]>
---
drivers/gpu/drm/i915/Kconfig.debug | 15 +++++++
.../gpu/drm/i915/gem/i915_gem_execbuffer.c | 7 ++--
.../i915/gem/selftests/i915_gem_coherency.c | 10 +++--
.../drm/i915/gem/selftests/i915_gem_mman.c | 14 ++++---
drivers/gpu/drm/i915/gt/intel_breadcrumbs.c | 13 ++++--
.../gpu/drm/i915/gt/intel_breadcrumbs_types.h | 3 +-
drivers/gpu/drm/i915/gt/intel_engine_pm.c | 4 +-
drivers/gpu/drm/i915/gt/intel_engine_types.h | 2 +
.../drm/i915/gt/intel_execlists_submission.c | 2 +-
drivers/gpu/drm/i915/gt/intel_gt_pm.c | 10 +++--
drivers/gpu/drm/i915/gt/intel_gt_pm.h | 36 ++++++++++++----
drivers/gpu/drm/i915/gt/intel_gt_pm_debugfs.c | 4 +-
drivers/gpu/drm/i915/gt/selftest_engine_cs.c | 20 +++++----
drivers/gpu/drm/i915/gt/selftest_gt_pm.c | 5 ++-
drivers/gpu/drm/i915/gt/selftest_reset.c | 10 +++--
drivers/gpu/drm/i915/gt/selftest_rps.c | 17 ++++----
drivers/gpu/drm/i915/gt/selftest_slpc.c | 10 +++--
.../gpu/drm/i915/gt/uc/intel_guc_submission.c | 9 ++--
drivers/gpu/drm/i915/i915_pmu.c | 16 +++----
drivers/gpu/drm/i915/intel_wakeref.c | 4 ++
drivers/gpu/drm/i915/intel_wakeref.h | 42 +++++++++++++++++++
21 files changed, 182 insertions(+), 71 deletions(-)

diff --git a/drivers/gpu/drm/i915/Kconfig.debug b/drivers/gpu/drm/i915/Kconfig.debug
index 8b1973146e848..3bdc73f30a9e1 100644
--- a/drivers/gpu/drm/i915/Kconfig.debug
+++ b/drivers/gpu/drm/i915/Kconfig.debug
@@ -48,6 +48,7 @@ config DRM_I915_DEBUG
select DRM_I915_DEBUG_MMIO
select DRM_I915_TRACK_WAKEREF
select DRM_I915_DEBUG_RUNTIME_PM
+ select DRM_I915_DEBUG_WAKEREF
select DRM_I915_SW_FENCE_DEBUG_OBJECTS
select DRM_I915_SELFTEST
select BROKEN # for prototype uAPI
@@ -257,3 +258,17 @@ config DRM_I915_DEBUG_RUNTIME_PM
Recommended for driver developers only.

If in doubt, say "N"
+
+config DRM_I915_DEBUG_WAKEREF
+ bool "Enable extra tracking for wakerefs"
+ depends on DRM_I915
+ default n
+ select STACKDEPOT
+ select STACKTRACE
+ select DRM_I915_TRACK_WAKEREF
+ help
+ Choose this option to turn on extra state checking and usage
+ tracking for the wakerefPM functionality. This may introduce
+ overhead during driver runtime.
+
+ If in doubt, say "N"
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index 13c975da77474..4b6c144f706da 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -252,6 +252,7 @@ struct i915_execbuffer {
struct intel_gt *gt; /* gt for the execbuf */
struct intel_context *context; /* logical state for the request */
struct i915_gem_context *gem_context; /** caller's context */
+ intel_wakeref_t wakeref;

/** our requests to build */
struct i915_request *requests[MAX_ENGINE_INSTANCE + 1];
@@ -2679,7 +2680,7 @@ eb_select_engine(struct i915_execbuffer *eb)

for_each_child(ce, child)
intel_context_get(child);
- intel_gt_pm_get(ce->engine->gt);
+ eb->wakeref = intel_gt_pm_get(ce->engine->gt);

if (!test_bit(CONTEXT_ALLOC_BIT, &ce->flags)) {
err = intel_context_alloc_state(ce);
@@ -2713,7 +2714,7 @@ eb_select_engine(struct i915_execbuffer *eb)
return err;

err:
- intel_gt_pm_put(ce->engine->gt);
+ intel_gt_pm_put(ce->engine->gt, eb->wakeref);
for_each_child(ce, child)
intel_context_put(child);
intel_context_put(ce);
@@ -2725,7 +2726,7 @@ eb_put_engine(struct i915_execbuffer *eb)
{
struct intel_context *child;

- intel_gt_pm_put(eb->gt);
+ intel_gt_pm_put(eb->context->engine->gt, eb->wakeref);
for_each_child(eb->context, child)
intel_context_put(child);
intel_context_put(eb->context);
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
index 13b088cc787eb..553f2730c2a76 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
@@ -85,6 +85,7 @@ static int cpu_get(struct context *ctx, unsigned long offset, u32 *v)

static int gtt_set(struct context *ctx, unsigned long offset, u32 v)
{
+ intel_wakeref_t wakeref;
struct i915_vma *vma;
u32 __iomem *map;
int err = 0;
@@ -99,7 +100,7 @@ static int gtt_set(struct context *ctx, unsigned long offset, u32 v)
if (IS_ERR(vma))
return PTR_ERR(vma);

- intel_gt_pm_get(vma->vm->gt);
+ wakeref = intel_gt_pm_get(vma->vm->gt);

map = i915_vma_pin_iomap(vma);
i915_vma_unpin(vma);
@@ -112,12 +113,13 @@ static int gtt_set(struct context *ctx, unsigned long offset, u32 v)
i915_vma_unpin_iomap(vma);

out_rpm:
- intel_gt_pm_put(vma->vm->gt);
+ intel_gt_pm_put(vma->vm->gt, wakeref);
return err;
}

static int gtt_get(struct context *ctx, unsigned long offset, u32 *v)
{
+ intel_wakeref_t wakeref;
struct i915_vma *vma;
u32 __iomem *map;
int err = 0;
@@ -132,7 +134,7 @@ static int gtt_get(struct context *ctx, unsigned long offset, u32 *v)
if (IS_ERR(vma))
return PTR_ERR(vma);

- intel_gt_pm_get(vma->vm->gt);
+ wakeref = intel_gt_pm_get(vma->vm->gt);

map = i915_vma_pin_iomap(vma);
i915_vma_unpin(vma);
@@ -145,7 +147,7 @@ static int gtt_get(struct context *ctx, unsigned long offset, u32 *v)
i915_vma_unpin_iomap(vma);

out_rpm:
- intel_gt_pm_put(vma->vm->gt);
+ intel_gt_pm_put(vma->vm->gt, wakeref);
return err;
}

diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
index 8ae1a1530bd80..dea5e8e39ab2d 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
@@ -624,14 +624,14 @@ static bool assert_mmap_offset(struct drm_i915_private *i915,
static void disable_retire_worker(struct drm_i915_private *i915)
{
i915_gem_driver_unregister__shrinker(i915);
- intel_gt_pm_get(to_gt(i915));
+ intel_gt_pm_get_untracked(to_gt(i915));
cancel_delayed_work_sync(&to_gt(i915)->requests.retire_work);
}

static void restore_retire_worker(struct drm_i915_private *i915)
{
igt_flush_test(i915);
- intel_gt_pm_put(to_gt(i915));
+ intel_gt_pm_put_untracked(to_gt(i915));
i915_gem_driver_register__shrinker(i915);
}

@@ -772,6 +772,7 @@ static int igt_mmap_offset_exhaustion(void *arg)

static int gtt_set(struct drm_i915_gem_object *obj)
{
+ intel_wakeref_t wakeref;
struct i915_vma *vma;
void __iomem *map;
int err = 0;
@@ -780,7 +781,7 @@ static int gtt_set(struct drm_i915_gem_object *obj)
if (IS_ERR(vma))
return PTR_ERR(vma);

- intel_gt_pm_get(vma->vm->gt);
+ wakeref = intel_gt_pm_get(vma->vm->gt);
map = i915_vma_pin_iomap(vma);
i915_vma_unpin(vma);
if (IS_ERR(map)) {
@@ -792,12 +793,13 @@ static int gtt_set(struct drm_i915_gem_object *obj)
i915_vma_unpin_iomap(vma);

out:
- intel_gt_pm_put(vma->vm->gt);
+ intel_gt_pm_put(vma->vm->gt, wakeref);
return err;
}

static int gtt_check(struct drm_i915_gem_object *obj)
{
+ intel_wakeref_t wakeref;
struct i915_vma *vma;
void __iomem *map;
int err = 0;
@@ -806,7 +808,7 @@ static int gtt_check(struct drm_i915_gem_object *obj)
if (IS_ERR(vma))
return PTR_ERR(vma);

- intel_gt_pm_get(vma->vm->gt);
+ wakeref = intel_gt_pm_get(vma->vm->gt);
map = i915_vma_pin_iomap(vma);
i915_vma_unpin(vma);
if (IS_ERR(map)) {
@@ -822,7 +824,7 @@ static int gtt_check(struct drm_i915_gem_object *obj)
i915_vma_unpin_iomap(vma);

out:
- intel_gt_pm_put(vma->vm->gt);
+ intel_gt_pm_put(vma->vm->gt, wakeref);
return err;
}

diff --git a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
index 209cf265bf746..f061d93c27357 100644
--- a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
+++ b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
@@ -27,11 +27,14 @@ static void irq_disable(struct intel_breadcrumbs *b)

static void __intel_breadcrumbs_arm_irq(struct intel_breadcrumbs *b)
{
+ intel_wakeref_t wakeref;
+
/*
* Since we are waiting on a request, the GPU should be busy
* and should have its own rpm reference.
*/
- if (GEM_WARN_ON(!intel_gt_pm_get_if_awake(b->irq_engine->gt)))
+ wakeref = intel_gt_pm_get_if_awake(b->irq_engine->gt);
+ if (GEM_WARN_ON(!wakeref))
return;

/*
@@ -40,7 +43,7 @@ static void __intel_breadcrumbs_arm_irq(struct intel_breadcrumbs *b)
* which we can add a new waiter and avoid the cost of re-enabling
* the irq.
*/
- WRITE_ONCE(b->irq_armed, true);
+ WRITE_ONCE(b->irq_armed, wakeref);

/* Requests may have completed before we could enable the interrupt. */
if (!b->irq_enabled++ && b->irq_enable(b))
@@ -60,12 +63,14 @@ static void intel_breadcrumbs_arm_irq(struct intel_breadcrumbs *b)

static void __intel_breadcrumbs_disarm_irq(struct intel_breadcrumbs *b)
{
+ intel_wakeref_t wakeref = b->irq_armed;
+
GEM_BUG_ON(!b->irq_enabled);
if (!--b->irq_enabled)
b->irq_disable(b);

- WRITE_ONCE(b->irq_armed, false);
- intel_gt_pm_put_async(b->irq_engine->gt);
+ WRITE_ONCE(b->irq_armed, 0);
+ intel_gt_pm_put_async(b->irq_engine->gt, wakeref);
}

static void intel_breadcrumbs_disarm_irq(struct intel_breadcrumbs *b)
diff --git a/drivers/gpu/drm/i915/gt/intel_breadcrumbs_types.h b/drivers/gpu/drm/i915/gt/intel_breadcrumbs_types.h
index 72dfd3748c4c3..bdf09fd67b6e7 100644
--- a/drivers/gpu/drm/i915/gt/intel_breadcrumbs_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_breadcrumbs_types.h
@@ -13,6 +13,7 @@
#include <linux/types.h>

#include "intel_engine_types.h"
+#include "intel_wakeref.h"

/*
* Rather than have every client wait upon all user interrupts,
@@ -43,7 +44,7 @@ struct intel_breadcrumbs {
spinlock_t irq_lock; /* protects the interrupt from hardirq context */
struct irq_work irq_work; /* for use from inside irq_lock */
unsigned int irq_enabled;
- bool irq_armed;
+ intel_wakeref_t irq_armed;

/* Not all breadcrumbs are attached to physical HW */
intel_engine_mask_t engine_mask;
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.c b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
index b0a4a2dbe3ee9..52e46e7830ff5 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_pm.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
@@ -47,7 +47,7 @@ static int __engine_unpark(struct intel_wakeref *wf)

ENGINE_TRACE(engine, "\n");

- intel_gt_pm_get(engine->gt);
+ engine->wakeref_track = intel_gt_pm_get(engine->gt);

/* Discard stale context state from across idling */
ce = engine->kernel_context;
@@ -260,7 +260,7 @@ static int __engine_park(struct intel_wakeref *wf)
engine->park(engine);

/* While gt calls i915_vma_parked(), we have to break the lock cycle */
- intel_gt_pm_put_async(engine->gt);
+ intel_gt_pm_put_async(engine->gt, engine->wakeref_track);
return 0;
}

diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h b/drivers/gpu/drm/i915/gt/intel_engine_types.h
index 36365bdbe1ee7..dcd84d1eb90b7 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_types.h
+++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h
@@ -382,7 +382,9 @@ struct intel_engine_cs {
unsigned long serial;

unsigned long wakeref_serial;
+ intel_wakeref_t wakeref_track;
struct intel_wakeref wakeref;
+
struct file *default_state;

struct {
diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
index 961d795220a30..4ff269b2697d5 100644
--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
+++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c
@@ -630,7 +630,7 @@ static void __execlists_schedule_out(struct i915_request * const rq,
execlists_context_status_change(rq, INTEL_CONTEXT_SCHEDULE_OUT);
if (engine->fw_domain && !--engine->fw_active)
intel_uncore_forcewake_put(engine->uncore, engine->fw_domain);
- intel_gt_pm_put_async(engine->gt);
+ intel_gt_pm_put_async_untracked(engine->gt);

/*
* If this is part of a virtual engine, its next request may
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_pm.c b/drivers/gpu/drm/i915/gt/intel_gt_pm.c
index c0fa41e4c8030..7ee65a93f926f 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_pm.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt_pm.c
@@ -25,19 +25,20 @@
static void user_forcewake(struct intel_gt *gt, bool suspend)
{
int count = atomic_read(&gt->user_wakeref);
+ intel_wakeref_t wakeref;

/* Inside suspend/resume so single threaded, no races to worry about. */
if (likely(!count))
return;

- intel_gt_pm_get(gt);
+ wakeref = intel_gt_pm_get(gt);
if (suspend) {
GEM_BUG_ON(count > atomic_read(&gt->wakeref.count));
atomic_sub(count, &gt->wakeref.count);
} else {
atomic_add(count, &gt->wakeref.count);
}
- intel_gt_pm_put(gt);
+ intel_gt_pm_put(gt, wakeref);
}

static void runtime_begin(struct intel_gt *gt)
@@ -210,6 +211,7 @@ int intel_gt_resume(struct intel_gt *gt)
{
struct intel_engine_cs *engine;
enum intel_engine_id id;
+ intel_wakeref_t wakeref;
int err;

err = intel_gt_has_unrecoverable_error(gt);
@@ -226,7 +228,7 @@ int intel_gt_resume(struct intel_gt *gt)
*/
gt_sanitize(gt, true);

- intel_gt_pm_get(gt);
+ wakeref = intel_gt_pm_get(gt);

intel_uncore_forcewake_get(gt->uncore, FORCEWAKE_ALL);
intel_rc6_sanitize(&gt->rc6);
@@ -273,7 +275,7 @@ int intel_gt_resume(struct intel_gt *gt)

out_fw:
intel_uncore_forcewake_put(gt->uncore, FORCEWAKE_ALL);
- intel_gt_pm_put(gt);
+ intel_gt_pm_put(gt, wakeref);
return err;

err_wedged:
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_pm.h b/drivers/gpu/drm/i915/gt/intel_gt_pm.h
index bc898df7a48cc..3ab06d897df25 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_pm.h
+++ b/drivers/gpu/drm/i915/gt/intel_gt_pm.h
@@ -16,19 +16,28 @@ static inline bool intel_gt_pm_is_awake(const struct intel_gt *gt)
return intel_wakeref_is_active(&gt->wakeref);
}

-static inline void intel_gt_pm_get(struct intel_gt *gt)
+static inline void intel_gt_pm_get_untracked(struct intel_gt *gt)
{
intel_wakeref_get(&gt->wakeref);
}

+static inline intel_wakeref_t intel_gt_pm_get(struct intel_gt *gt)
+{
+ intel_gt_pm_get_untracked(gt);
+ return intel_wakeref_track(&gt->wakeref);
+}
+
static inline void __intel_gt_pm_get(struct intel_gt *gt)
{
__intel_wakeref_get(&gt->wakeref);
}

-static inline bool intel_gt_pm_get_if_awake(struct intel_gt *gt)
+static inline intel_wakeref_t intel_gt_pm_get_if_awake(struct intel_gt *gt)
{
- return intel_wakeref_get_if_active(&gt->wakeref);
+ if (!intel_wakeref_get_if_active(&gt->wakeref))
+ return 0;
+
+ return intel_wakeref_track(&gt->wakeref);
}

static inline void intel_gt_pm_might_get(struct intel_gt *gt)
@@ -36,12 +45,18 @@ static inline void intel_gt_pm_might_get(struct intel_gt *gt)
intel_wakeref_might_get(&gt->wakeref);
}

-static inline void intel_gt_pm_put(struct intel_gt *gt)
+static inline void intel_gt_pm_put_untracked(struct intel_gt *gt)
{
intel_wakeref_put(&gt->wakeref);
}

-static inline void intel_gt_pm_put_async(struct intel_gt *gt)
+static inline void intel_gt_pm_put(struct intel_gt *gt, intel_wakeref_t handle)
+{
+ intel_wakeref_untrack(&gt->wakeref, handle);
+ intel_gt_pm_put_untracked(gt);
+}
+
+static inline void intel_gt_pm_put_async_untracked(struct intel_gt *gt)
{
intel_wakeref_put_async(&gt->wakeref);
}
@@ -51,9 +66,14 @@ static inline void intel_gt_pm_might_put(struct intel_gt *gt)
intel_wakeref_might_put(&gt->wakeref);
}

-#define with_intel_gt_pm(gt, tmp) \
- for (tmp = 1, intel_gt_pm_get(gt); tmp; \
- intel_gt_pm_put(gt), tmp = 0)
+static inline void intel_gt_pm_put_async(struct intel_gt *gt, intel_wakeref_t handle)
+{
+ intel_wakeref_untrack(&gt->wakeref, handle);
+ intel_gt_pm_put_async_untracked(gt);
+}
+
+#define with_intel_gt_pm(gt, wf) \
+ for (wf = intel_gt_pm_get(gt); wf; intel_gt_pm_put(gt, wf), wf = 0)

static inline int intel_gt_pm_wait_for_idle(struct intel_gt *gt)
{
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_pm_debugfs.c b/drivers/gpu/drm/i915/gt/intel_gt_pm_debugfs.c
index 37765919fe322..e02a3e26e0d02 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_pm_debugfs.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt_pm_debugfs.c
@@ -26,7 +26,7 @@
int intel_gt_pm_debugfs_forcewake_user_open(struct intel_gt *gt)
{
atomic_inc(&gt->user_wakeref);
- intel_gt_pm_get(gt);
+ intel_gt_pm_get_untracked(gt);
if (GRAPHICS_VER(gt->i915) >= 6)
intel_uncore_forcewake_user_get(gt->uncore);

@@ -37,7 +37,7 @@ int intel_gt_pm_debugfs_forcewake_user_release(struct intel_gt *gt)
{
if (GRAPHICS_VER(gt->i915) >= 6)
intel_uncore_forcewake_user_put(gt->uncore);
- intel_gt_pm_put(gt);
+ intel_gt_pm_put_untracked(gt);
atomic_dec(&gt->user_wakeref);

return 0;
diff --git a/drivers/gpu/drm/i915/gt/selftest_engine_cs.c b/drivers/gpu/drm/i915/gt/selftest_engine_cs.c
index 1b75f478d1b83..8ea6bf4c987e2 100644
--- a/drivers/gpu/drm/i915/gt/selftest_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/selftest_engine_cs.c
@@ -21,20 +21,22 @@ static int cmp_u32(const void *A, const void *B)
return *a - *b;
}

-static void perf_begin(struct intel_gt *gt)
+static intel_wakeref_t perf_begin(struct intel_gt *gt)
{
- intel_gt_pm_get(gt);
+ intel_wakeref_t wakeref = intel_gt_pm_get(gt);

/* Boost gpufreq to max [waitboost] and keep it fixed */
atomic_inc(&gt->rps.num_waiters);
schedule_work(&gt->rps.work);
flush_work(&gt->rps.work);
+
+ return wakeref;
}

-static int perf_end(struct intel_gt *gt)
+static int perf_end(struct intel_gt *gt, intel_wakeref_t wakeref)
{
atomic_dec(&gt->rps.num_waiters);
- intel_gt_pm_put(gt);
+ intel_gt_pm_put(gt, wakeref);

return igt_flush_test(gt->i915);
}
@@ -123,12 +125,13 @@ static int perf_mi_bb_start(void *arg)
struct intel_gt *gt = arg;
struct intel_engine_cs *engine;
enum intel_engine_id id;
+ intel_wakeref_t wakeref;
int err = 0;

if (GRAPHICS_VER(gt->i915) < 7) /* for per-engine CS_TIMESTAMP */
return 0;

- perf_begin(gt);
+ wakeref = perf_begin(gt);
for_each_engine(engine, gt, id) {
struct intel_context *ce = engine->kernel_context;
struct i915_vma *batch;
@@ -194,7 +197,7 @@ static int perf_mi_bb_start(void *arg)
pr_info("%s: MI_BB_START cycles: %u\n",
engine->name, trifilter(cycles));
}
- if (perf_end(gt))
+ if (perf_end(gt, wakeref))
err = -EIO;

return err;
@@ -247,12 +250,13 @@ static int perf_mi_noop(void *arg)
struct intel_gt *gt = arg;
struct intel_engine_cs *engine;
enum intel_engine_id id;
+ intel_wakeref_t wakeref;
int err = 0;

if (GRAPHICS_VER(gt->i915) < 7) /* for per-engine CS_TIMESTAMP */
return 0;

- perf_begin(gt);
+ wakeref = perf_begin(gt);
for_each_engine(engine, gt, id) {
struct intel_context *ce = engine->kernel_context;
struct i915_vma *base, *nop;
@@ -348,7 +352,7 @@ static int perf_mi_noop(void *arg)
pr_info("%s: 16K MI_NOOP cycles: %u\n",
engine->name, trifilter(cycles));
}
- if (perf_end(gt))
+ if (perf_end(gt, wakeref))
err = -EIO;

return err;
diff --git a/drivers/gpu/drm/i915/gt/selftest_gt_pm.c b/drivers/gpu/drm/i915/gt/selftest_gt_pm.c
index be94f863bdeff..f0f9983a6fbb2 100644
--- a/drivers/gpu/drm/i915/gt/selftest_gt_pm.c
+++ b/drivers/gpu/drm/i915/gt/selftest_gt_pm.c
@@ -68,6 +68,7 @@ static int live_gt_clocks(void *arg)
struct intel_gt *gt = arg;
struct intel_engine_cs *engine;
enum intel_engine_id id;
+ intel_wakeref_t wakeref;
int err = 0;

if (!gt->clock_frequency) { /* unknown */
@@ -97,7 +98,7 @@ static int live_gt_clocks(void *arg)
*/
return 0;

- intel_gt_pm_get(gt);
+ wakeref = intel_gt_pm_get(gt);
intel_uncore_forcewake_get(gt->uncore, FORCEWAKE_ALL);

for_each_engine(engine, gt, id) {
@@ -134,7 +135,7 @@ static int live_gt_clocks(void *arg)
}

intel_uncore_forcewake_put(gt->uncore, FORCEWAKE_ALL);
- intel_gt_pm_put(gt);
+ intel_gt_pm_put(gt, wakeref);

return err;
}
diff --git a/drivers/gpu/drm/i915/gt/selftest_reset.c b/drivers/gpu/drm/i915/gt/selftest_reset.c
index 37c38bdd5f474..cb01901c94e94 100644
--- a/drivers/gpu/drm/i915/gt/selftest_reset.c
+++ b/drivers/gpu/drm/i915/gt/selftest_reset.c
@@ -257,11 +257,12 @@ static int igt_atomic_reset(void *arg)
{
struct intel_gt *gt = arg;
const typeof(*igt_atomic_phases) *p;
+ intel_wakeref_t wakeref;
int err = 0;

/* Check that the resets are usable from atomic context */

- intel_gt_pm_get(gt);
+ wakeref = intel_gt_pm_get(gt);
igt_global_reset_lock(gt);

/* Flush any requests before we get started and check basics */
@@ -292,7 +293,7 @@ static int igt_atomic_reset(void *arg)

unlock:
igt_global_reset_unlock(gt);
- intel_gt_pm_put(gt);
+ intel_gt_pm_put(gt, wakeref);

return err;
}
@@ -303,6 +304,7 @@ static int igt_atomic_engine_reset(void *arg)
const typeof(*igt_atomic_phases) *p;
struct intel_engine_cs *engine;
enum intel_engine_id id;
+ intel_wakeref_t wakeref;
int err = 0;

/* Check that the resets are usable from atomic context */
@@ -313,7 +315,7 @@ static int igt_atomic_engine_reset(void *arg)
if (intel_uc_uses_guc_submission(&gt->uc))
return 0;

- intel_gt_pm_get(gt);
+ wakeref = intel_gt_pm_get(gt);
igt_global_reset_lock(gt);

/* Flush any requests before we get started and check basics */
@@ -361,7 +363,7 @@ static int igt_atomic_engine_reset(void *arg)

out_unlock:
igt_global_reset_unlock(gt);
- intel_gt_pm_put(gt);
+ intel_gt_pm_put(gt, wakeref);

return err;
}
diff --git a/drivers/gpu/drm/i915/gt/selftest_rps.c b/drivers/gpu/drm/i915/gt/selftest_rps.c
index 6a69ac0184ad8..7effd09ced988 100644
--- a/drivers/gpu/drm/i915/gt/selftest_rps.c
+++ b/drivers/gpu/drm/i915/gt/selftest_rps.c
@@ -223,6 +223,7 @@ int live_rps_clock_interval(void *arg)
struct intel_engine_cs *engine;
enum intel_engine_id id;
struct igt_spinner spin;
+ intel_wakeref_t wakeref;
int err = 0;

if (!intel_rps_is_enabled(rps) || GRAPHICS_VER(gt->i915) < 6)
@@ -235,7 +236,7 @@ int live_rps_clock_interval(void *arg)
saved_work = rps->work.func;
rps->work.func = dummy_rps_work;

- intel_gt_pm_get(gt);
+ wakeref = intel_gt_pm_get(gt);
intel_rps_disable(&gt->rps);

intel_gt_check_clock_frequency(gt);
@@ -354,7 +355,7 @@ int live_rps_clock_interval(void *arg)
}

intel_rps_enable(&gt->rps);
- intel_gt_pm_put(gt);
+ intel_gt_pm_put(gt, wakeref);

igt_spinner_fini(&spin);

@@ -375,6 +376,7 @@ int live_rps_control(void *arg)
struct intel_engine_cs *engine;
enum intel_engine_id id;
struct igt_spinner spin;
+ intel_wakeref_t wakeref;
int err = 0;

/*
@@ -397,7 +399,7 @@ int live_rps_control(void *arg)
saved_work = rps->work.func;
rps->work.func = dummy_rps_work;

- intel_gt_pm_get(gt);
+ wakeref = intel_gt_pm_get(gt);
for_each_engine(engine, gt, id) {
struct i915_request *rq;
ktime_t min_dt, max_dt;
@@ -487,7 +489,7 @@ int live_rps_control(void *arg)
break;
}
}
- intel_gt_pm_put(gt);
+ intel_gt_pm_put(gt, wakeref);

igt_spinner_fini(&spin);

@@ -1026,6 +1028,7 @@ int live_rps_interrupt(void *arg)
struct intel_engine_cs *engine;
enum intel_engine_id id;
struct igt_spinner spin;
+ intel_wakeref_t wakeref;
u32 pm_events;
int err = 0;

@@ -1036,9 +1039,9 @@ int live_rps_interrupt(void *arg)
if (!intel_rps_has_interrupts(rps) || GRAPHICS_VER(gt->i915) < 6)
return 0;

- intel_gt_pm_get(gt);
- pm_events = rps->pm_events;
- intel_gt_pm_put(gt);
+ pm_events = 0;
+ with_intel_gt_pm(gt, wakeref)
+ pm_events = rps->pm_events;
if (!pm_events) {
pr_err("No RPS PM events registered, but RPS is enabled?\n");
return -ENODEV;
diff --git a/drivers/gpu/drm/i915/gt/selftest_slpc.c b/drivers/gpu/drm/i915/gt/selftest_slpc.c
index b768cea5943dd..27be3c9b29b13 100644
--- a/drivers/gpu/drm/i915/gt/selftest_slpc.c
+++ b/drivers/gpu/drm/i915/gt/selftest_slpc.c
@@ -44,6 +44,7 @@ static int live_slpc_clamp_min(void *arg)
struct intel_rps *rps = &gt->rps;
struct intel_engine_cs *engine;
enum intel_engine_id id;
+ intel_wakeref_t wakeref;
struct igt_spinner spin;
u32 slpc_min_freq, slpc_max_freq;
int err = 0;
@@ -70,7 +71,7 @@ static int live_slpc_clamp_min(void *arg)
}

intel_gt_pm_wait_for_idle(gt);
- intel_gt_pm_get(gt);
+ wakeref = intel_gt_pm_get(gt);
for_each_engine(engine, gt, id) {
struct i915_request *rq;
u32 step, min_freq, req_freq;
@@ -156,7 +157,7 @@ static int live_slpc_clamp_min(void *arg)
if (igt_flush_test(gt->i915))
err = -EIO;

- intel_gt_pm_put(gt);
+ intel_gt_pm_put(gt, wakeref);
igt_spinner_fini(&spin);
intel_gt_pm_wait_for_idle(gt);

@@ -171,6 +172,7 @@ static int live_slpc_clamp_max(void *arg)
struct intel_rps *rps;
struct intel_engine_cs *engine;
enum intel_engine_id id;
+ intel_wakeref_t wakeref;
struct igt_spinner spin;
int err = 0;
u32 slpc_min_freq, slpc_max_freq;
@@ -200,7 +202,7 @@ static int live_slpc_clamp_max(void *arg)
}

intel_gt_pm_wait_for_idle(gt);
- intel_gt_pm_get(gt);
+ wakeref = intel_gt_pm_get(gt);
for_each_engine(engine, gt, id) {
struct i915_request *rq;
u32 max_freq, req_freq;
@@ -290,7 +292,7 @@ static int live_slpc_clamp_max(void *arg)
slpc_set_max_freq(slpc, slpc_max_freq);
slpc_set_min_freq(slpc, slpc_min_freq);

- intel_gt_pm_put(gt);
+ intel_gt_pm_put(gt, wakeref);
igt_spinner_fini(&spin);
intel_gt_pm_wait_for_idle(gt);

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index b3a429a92c0da..7799939c38945 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -1048,7 +1048,7 @@ static void scrub_guc_desc_for_outstanding_g2h(struct intel_guc *guc)
if (deregister)
guc_signal_context_fence(ce);
if (destroyed) {
- intel_gt_pm_put_async(guc_to_gt(guc));
+ intel_gt_pm_put_async_untracked(guc_to_gt(guc));
release_guc_id(guc, ce);
__guc_context_destroy(ce);
}
@@ -1254,6 +1254,7 @@ static ktime_t guc_engine_busyness(struct intel_engine_cs *engine, ktime_t *now)
unsigned long flags;
u32 reset_count;
bool in_reset;
+ intel_wakeref_t wakeref;

spin_lock_irqsave(&guc->timestamp.lock, flags);

@@ -1276,7 +1277,7 @@ static ktime_t guc_engine_busyness(struct intel_engine_cs *engine, ktime_t *now)
* start_gt_clk is derived from GuC state. To get a consistent
* view of activity, we query the GuC state only if gt is awake.
*/
- if (!in_reset && intel_gt_pm_get_if_awake(gt)) {
+ if (!in_reset && (wakeref = intel_gt_pm_get_if_awake(gt))) {
stats_saved = *stats;
gt_stamp_saved = guc->timestamp.gt_stamp;
/*
@@ -1285,7 +1286,7 @@ static ktime_t guc_engine_busyness(struct intel_engine_cs *engine, ktime_t *now)
*/
guc_update_engine_gt_clks(engine);
guc_update_pm_timestamp(guc, now);
- intel_gt_pm_put_async(gt);
+ intel_gt_pm_put_async(gt, wakeref);
if (i915_reset_count(gpu_error) != reset_count) {
*stats = stats_saved;
guc->timestamp.gt_stamp = gt_stamp_saved;
@@ -3903,7 +3904,7 @@ int intel_guc_deregister_done_process_msg(struct intel_guc *guc,
intel_context_put(ce);
} else if (context_destroyed(ce)) {
/* Context has been destroyed */
- intel_gt_pm_put_async(guc_to_gt(guc));
+ intel_gt_pm_put_async_untracked(guc_to_gt(guc));
release_guc_id(guc, ce);
__guc_context_destroy(ce);
}
diff --git a/drivers/gpu/drm/i915/i915_pmu.c b/drivers/gpu/drm/i915/i915_pmu.c
index cfc21042499d0..3bd0c75c2ee69 100644
--- a/drivers/gpu/drm/i915/i915_pmu.c
+++ b/drivers/gpu/drm/i915/i915_pmu.c
@@ -171,19 +171,19 @@ static u64 get_rc6(struct intel_gt *gt)
{
struct drm_i915_private *i915 = gt->i915;
struct i915_pmu *pmu = &i915->pmu;
+ intel_wakeref_t wakeref;
unsigned long flags;
- bool awake = false;
u64 val;

- if (intel_gt_pm_get_if_awake(gt)) {
+ wakeref = intel_gt_pm_get_if_awake(gt);
+ if (wakeref) {
val = __get_rc6(gt);
- intel_gt_pm_put_async(gt);
- awake = true;
+ intel_gt_pm_put_async(gt, wakeref);
}

spin_lock_irqsave(&pmu->lock, flags);

- if (awake) {
+ if (wakeref) {
pmu->sample[__I915_SAMPLE_RC6].cur = val;
} else {
/*
@@ -377,12 +377,14 @@ frequency_sample(struct intel_gt *gt, unsigned int period_ns)
struct intel_uncore *uncore = gt->uncore;
struct i915_pmu *pmu = &i915->pmu;
struct intel_rps *rps = &gt->rps;
+ intel_wakeref_t wakeref;

if (!frequency_sampling_enabled(pmu))
return;

/* Report 0/0 (actual/requested) frequency while parked. */
- if (!intel_gt_pm_get_if_awake(gt))
+ wakeref = intel_gt_pm_get_if_awake(gt);
+ if (!wakeref)
return;

if (pmu->enable & config_mask(I915_PMU_ACTUAL_FREQUENCY)) {
@@ -413,7 +415,7 @@ frequency_sample(struct intel_gt *gt, unsigned int period_ns)
period_ns / 1000);
}

- intel_gt_pm_put_async(gt);
+ intel_gt_pm_put_async(gt, wakeref);
}

static enum hrtimer_restart i915_sample(struct hrtimer *hrtimer)
diff --git a/drivers/gpu/drm/i915/intel_wakeref.c b/drivers/gpu/drm/i915/intel_wakeref.c
index dfd87d0822180..db4887e33ea60 100644
--- a/drivers/gpu/drm/i915/intel_wakeref.c
+++ b/drivers/gpu/drm/i915/intel_wakeref.c
@@ -108,6 +108,10 @@ void __intel_wakeref_init(struct intel_wakeref *wf,
INIT_DELAYED_WORK(&wf->work, __intel_wakeref_put_work);
lockdep_init_map(&wf->work.work.lockdep_map,
"wakeref.work", &key->work, 0);
+
+#if IS_ENABLED(CONFIG_DRM_I915_DEBUG_WAKEREF)
+ intel_wakeref_tracker_init(&wf->debug);
+#endif
}

int intel_wakeref_wait_for_idle(struct intel_wakeref *wf)
diff --git a/drivers/gpu/drm/i915/intel_wakeref.h b/drivers/gpu/drm/i915/intel_wakeref.h
index e6ba389652d74..38439deefc5cc 100644
--- a/drivers/gpu/drm/i915/intel_wakeref.h
+++ b/drivers/gpu/drm/i915/intel_wakeref.h
@@ -43,6 +43,10 @@ struct intel_wakeref {
const struct intel_wakeref_ops *ops;

struct delayed_work work;
+
+#if IS_ENABLED(CONFIG_DRM_I915_DEBUG_WAKEREF)
+ struct intel_wakeref_tracker debug;
+#endif
};

struct intel_wakeref_lockclass {
@@ -262,6 +266,44 @@ __intel_wakeref_defer_park(struct intel_wakeref *wf)
*/
int intel_wakeref_wait_for_idle(struct intel_wakeref *wf);

+#if IS_ENABLED(CONFIG_DRM_I915_DEBUG_WAKEREF)
+
+static inline intel_wakeref_t intel_wakeref_track(struct intel_wakeref *wf)
+{
+ return intel_wakeref_tracker_add(&wf->debug);
+}
+
+static inline void intel_wakeref_untrack(struct intel_wakeref *wf,
+ intel_wakeref_t handle)
+{
+ intel_wakeref_tracker_remove(&wf->debug, handle);
+}
+
+static inline void intel_wakeref_show(struct intel_wakeref *wf,
+ struct drm_printer *p)
+{
+ intel_wakeref_tracker_show(&wf->debug, p);
+}
+
+#else
+
+static inline intel_wakeref_t intel_wakeref_track(struct intel_wakeref *wf)
+{
+ return -1;
+}
+
+static inline void intel_wakeref_untrack(struct intel_wakeref *wf,
+ intel_wakeref_t handle)
+{
+}
+
+static inline void intel_wakeref_show(struct intel_wakeref *wf,
+ struct drm_printer *p)
+{
+}
+
+#endif
+
struct intel_wakeref_auto {
struct intel_runtime_pm *rpm;
struct timer_list timer;
--
2.25.1

2022-02-18 00:09:24

by Andrzej Hajda

[permalink] [raw]
Subject: [PATCH 1/9] lib/ref_tracker: add unlocked leak print helper

To detect leaks reliably, the caller must be able to check both the tracked
counter and the leaks under the same lock. dir.lock is a natural candidate for
that lock, and an unlocked print helper can then be called with it already taken.
As a bonus, we can reuse this helper in ref_tracker_dir_exit.

Signed-off-by: Andrzej Hajda <[email protected]>
Reviewed-by: Chris Wilson <[email protected]>
---
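A minimal sketch of the calling pattern this enables (illustrative only; the
wrapper name and the external counter are hypothetical, not part of this patch):
the final put and the leak report happen without dropping dir->lock in between.

        static void put_and_report_leaks(struct ref_tracker_dir *dir,
                                         atomic_t *count)
        {
                unsigned long flags;

                /* take dir->lock only when the last reference is dropped */
                if (!atomic_dec_and_lock_irqsave(count, &dir->lock, flags))
                        return;

                /* still under dir->lock: anything left on dir->list leaked */
                __ref_tracker_dir_print(dir, 16);
                spin_unlock_irqrestore(&dir->lock, flags);
        }
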
include/linux/ref_tracker.h | 8 +++++
lib/ref_tracker.c | 66 +++++++++++++++++++++----------------
2 files changed, 46 insertions(+), 28 deletions(-)

diff --git a/include/linux/ref_tracker.h b/include/linux/ref_tracker.h
index 60f3453be23e6..b9c968a716483 100644
--- a/include/linux/ref_tracker.h
+++ b/include/linux/ref_tracker.h
@@ -32,6 +32,9 @@ static inline void ref_tracker_dir_init(struct ref_tracker_dir *dir,

void ref_tracker_dir_exit(struct ref_tracker_dir *dir);

+void __ref_tracker_dir_print(struct ref_tracker_dir *dir,
+ unsigned int display_limit);
+
void ref_tracker_dir_print(struct ref_tracker_dir *dir,
unsigned int display_limit);

@@ -52,6 +55,11 @@ static inline void ref_tracker_dir_exit(struct ref_tracker_dir *dir)
{
}

+static inline void __ref_tracker_dir_print(struct ref_tracker_dir *dir,
+ unsigned int display_limit)
+{
+}
+
static inline void ref_tracker_dir_print(struct ref_tracker_dir *dir,
unsigned int display_limit)
{
diff --git a/lib/ref_tracker.c b/lib/ref_tracker.c
index a6789c0c626b0..1b0c6d645d64a 100644
--- a/lib/ref_tracker.c
+++ b/lib/ref_tracker.c
@@ -14,6 +14,38 @@ struct ref_tracker {
depot_stack_handle_t free_stack_handle;
};

+void __ref_tracker_dir_print(struct ref_tracker_dir *dir,
+ unsigned int display_limit)
+{
+ struct ref_tracker *tracker;
+ unsigned int i = 0;
+
+ lockdep_assert_held(&dir->lock);
+
+ list_for_each_entry(tracker, &dir->list, head) {
+ if (i < display_limit) {
+ pr_err("leaked reference.\n");
+ if (tracker->alloc_stack_handle)
+ stack_depot_print(tracker->alloc_stack_handle);
+ i++;
+ } else {
+ break;
+ }
+ }
+}
+EXPORT_SYMBOL(__ref_tracker_dir_print);
+
+void ref_tracker_dir_print(struct ref_tracker_dir *dir,
+ unsigned int display_limit)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&dir->lock, flags);
+ __ref_tracker_dir_print(dir, display_limit);
+ spin_unlock_irqrestore(&dir->lock, flags);
+}
+EXPORT_SYMBOL(ref_tracker_dir_print);
+
void ref_tracker_dir_exit(struct ref_tracker_dir *dir)
{
struct ref_tracker *tracker, *n;
@@ -26,13 +58,13 @@ void ref_tracker_dir_exit(struct ref_tracker_dir *dir)
kfree(tracker);
dir->quarantine_avail++;
}
- list_for_each_entry_safe(tracker, n, &dir->list, head) {
- pr_err("leaked reference.\n");
- if (tracker->alloc_stack_handle)
- stack_depot_print(tracker->alloc_stack_handle);
+ if (!list_empty(&dir->list)) {
+ __ref_tracker_dir_print(dir, 16);
leak = true;
- list_del(&tracker->head);
- kfree(tracker);
+ list_for_each_entry_safe(tracker, n, &dir->list, head) {
+ list_del(&tracker->head);
+ kfree(tracker);
+ }
}
spin_unlock_irqrestore(&dir->lock, flags);
WARN_ON_ONCE(leak);
@@ -40,28 +72,6 @@ void ref_tracker_dir_exit(struct ref_tracker_dir *dir)
}
EXPORT_SYMBOL(ref_tracker_dir_exit);

-void ref_tracker_dir_print(struct ref_tracker_dir *dir,
- unsigned int display_limit)
-{
- struct ref_tracker *tracker;
- unsigned long flags;
- unsigned int i = 0;
-
- spin_lock_irqsave(&dir->lock, flags);
- list_for_each_entry(tracker, &dir->list, head) {
- if (i < display_limit) {
- pr_err("leaked reference.\n");
- if (tracker->alloc_stack_handle)
- stack_depot_print(tracker->alloc_stack_handle);
- i++;
- } else {
- break;
- }
- }
- spin_unlock_irqrestore(&dir->lock, flags);
-}
-EXPORT_SYMBOL(ref_tracker_dir_print);
-
int ref_tracker_alloc(struct ref_tracker_dir *dir,
struct ref_tracker **trackerp,
gfp_t gfp)
--
2.25.1

2022-02-18 00:21:57

by Andrzej Hajda

[permalink] [raw]
Subject: [PATCH 9/9] drm/i915: replace Intel internal tracker with kernel core ref_tracker

Beside reusing existing code, the main advantage of ref_tracker is
that it tracks every wakeref instance separately, which also allows
catching double puts.
On the other hand we lose information about the first acquire and
the last release, but the advantages outweigh that.

Signed-off-by: Andrzej Hajda <[email protected]>
Reviewed-by: Chris Wilson <[email protected]>
---
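As a rough illustration of the per-instance tracking (sketch only, mirroring
the call sites converted in this series; tracker entries are only allocated
when the wakeref debug Kconfig options are enabled):

        intel_wakeref_t wakeref;

        wakeref = intel_gt_pm_get(gt);          /* allocates a tracker entry */
        /* ... the GT is guaranteed to stay awake here ... */
        intel_gt_pm_put(gt, wakeref);           /* releases exactly that entry */

        /*
         * A second intel_gt_pm_put(gt, wakeref) would be reported by
         * ref_tracker as a put of an already released reference, and any
         * entry never handed back shows up in the leak report.
         */
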
drivers/gpu/drm/i915/Kconfig.debug | 11 +-
drivers/gpu/drm/i915/Makefile | 3 -
.../drm/i915/display/intel_display_power.c | 2 +-
drivers/gpu/drm/i915/gt/intel_engine_pm.c | 2 +-
drivers/gpu/drm/i915/gt/intel_gt_pm.c | 2 +-
drivers/gpu/drm/i915/intel_runtime_pm.c | 23 +-
drivers/gpu/drm/i915/intel_runtime_pm.h | 2 +-
drivers/gpu/drm/i915/intel_wakeref.c | 8 +-
drivers/gpu/drm/i915/intel_wakeref.h | 72 +++++-
drivers/gpu/drm/i915/intel_wakeref_tracker.c | 234 ------------------
drivers/gpu/drm/i915/intel_wakeref_tracker.h | 76 ------
11 files changed, 86 insertions(+), 349 deletions(-)
delete mode 100644 drivers/gpu/drm/i915/intel_wakeref_tracker.c
delete mode 100644 drivers/gpu/drm/i915/intel_wakeref_tracker.h

diff --git a/drivers/gpu/drm/i915/Kconfig.debug b/drivers/gpu/drm/i915/Kconfig.debug
index 3bdc73f30a9e1..6c57f3e265f20 100644
--- a/drivers/gpu/drm/i915/Kconfig.debug
+++ b/drivers/gpu/drm/i915/Kconfig.debug
@@ -32,6 +32,7 @@ config DRM_I915_DEBUG
select DEBUG_FS
select PREEMPT_COUNT
select I2C_CHARDEV
+ select REF_TRACKER
select STACKDEPOT
select STACKTRACE
select DRM_DP_AUX_CHARDEV
@@ -46,7 +47,6 @@ config DRM_I915_DEBUG
select DRM_I915_DEBUG_GEM
select DRM_I915_DEBUG_GEM_ONCE
select DRM_I915_DEBUG_MMIO
- select DRM_I915_TRACK_WAKEREF
select DRM_I915_DEBUG_RUNTIME_PM
select DRM_I915_DEBUG_WAKEREF
select DRM_I915_SW_FENCE_DEBUG_OBJECTS
@@ -238,18 +238,13 @@ config DRM_I915_DEBUG_VBLANK_EVADE

If in doubt, say "N".

-config DRM_I915_TRACK_WAKEREF
- depends on STACKDEPOT
- depends on STACKTRACE
- bool
-
config DRM_I915_DEBUG_RUNTIME_PM
bool "Enable extra state checking for runtime PM"
depends on DRM_I915
default n
+ select REF_TRACKER
select STACKDEPOT
select STACKTRACE
- select DRM_I915_TRACK_WAKEREF
help
Choose this option to turn on extra state checking for the
runtime PM functionality. This may introduce overhead during
@@ -263,9 +258,9 @@ config DRM_I915_DEBUG_WAKEREF
bool "Enable extra tracking for wakerefs"
depends on DRM_I915
default n
+ select REF_TRACKER
select STACKDEPOT
select STACKTRACE
- select DRM_I915_TRACK_WAKEREF
help
Choose this option to turn on extra state checking and usage
tracking for the wakerefPM functionality. This may introduce
diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index 88a403d3294cb..1f8d71430e2e6 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -76,9 +76,6 @@ i915-$(CONFIG_DEBUG_FS) += \
display/intel_display_debugfs.o \
display/intel_pipe_crc.o

-i915-$(CONFIG_DRM_I915_TRACK_WAKEREF) += \
- intel_wakeref_tracker.o
-
i915-$(CONFIG_PERF_EVENTS) += i915_pmu.o

# "Graphics Technology" (aka we talk to the gpu)
diff --git a/drivers/gpu/drm/i915/display/intel_display_power.c b/drivers/gpu/drm/i915/display/intel_display_power.c
index 9ebae7ac32356..0e1bf724f89b5 100644
--- a/drivers/gpu/drm/i915/display/intel_display_power.c
+++ b/drivers/gpu/drm/i915/display/intel_display_power.c
@@ -2107,7 +2107,7 @@ print_async_put_domains_state(struct i915_power_domains *power_domains)
struct drm_i915_private,
power_domains);

- drm_dbg(&i915->drm, "async_put_wakeref %u\n",
+ drm_dbg(&i915->drm, "async_put_wakeref %lu\n",
power_domains->async_put_wakeref);

print_power_domains(power_domains, "async_put_domains[0]",
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.c b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
index 52e46e7830ff5..cf8cc348942cb 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_pm.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_pm.c
@@ -273,7 +273,7 @@ void intel_engine_init__pm(struct intel_engine_cs *engine)
{
struct intel_runtime_pm *rpm = engine->uncore->rpm;

- intel_wakeref_init(&engine->wakeref, rpm, &wf_ops);
+ intel_wakeref_init(&engine->wakeref, rpm, &wf_ops, engine->name);
intel_engine_init_heartbeat(engine);
}

diff --git a/drivers/gpu/drm/i915/gt/intel_gt_pm.c b/drivers/gpu/drm/i915/gt/intel_gt_pm.c
index 7ee65a93f926f..01a055d0d0989 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_pm.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt_pm.c
@@ -129,7 +129,7 @@ static const struct intel_wakeref_ops wf_ops = {

void intel_gt_pm_init_early(struct intel_gt *gt)
{
- intel_wakeref_init(&gt->wakeref, gt->uncore->rpm, &wf_ops);
+ intel_wakeref_init(&gt->wakeref, gt->uncore->rpm, &wf_ops, "GT");
seqcount_mutex_init(&gt->stats.lock, &gt->wakeref.mutex);
}

diff --git a/drivers/gpu/drm/i915/intel_runtime_pm.c b/drivers/gpu/drm/i915/intel_runtime_pm.c
index 7bd10efa56bf3..e923ab8d8da08 100644
--- a/drivers/gpu/drm/i915/intel_runtime_pm.c
+++ b/drivers/gpu/drm/i915/intel_runtime_pm.c
@@ -54,7 +54,7 @@

static void init_intel_runtime_pm_wakeref(struct intel_runtime_pm *rpm)
{
- intel_wakeref_tracker_init(&rpm->debug);
+ ref_tracker_dir_init(&rpm->debug, INTEL_REFTRACK_DEAD_COUNT, dev_name(rpm->kdev));
}

static intel_wakeref_t
@@ -63,26 +63,26 @@ track_intel_runtime_pm_wakeref(struct intel_runtime_pm *rpm)
if (!rpm->available)
return -1;

- return intel_wakeref_tracker_add(&rpm->debug);
+ return intel_ref_tracker_alloc(&rpm->debug);
}

static void untrack_intel_runtime_pm_wakeref(struct intel_runtime_pm *rpm,
intel_wakeref_t wakeref)
{
- intel_wakeref_tracker_remove(&rpm->debug, wakeref);
+ if (!rpm->available)
+ return;
+
+ intel_ref_tracker_free(&rpm->debug, wakeref);
}

static void untrack_all_intel_runtime_pm_wakerefs(struct intel_runtime_pm *rpm)
{
- struct drm_printer p = drm_debug_printer("i915");
-
- intel_wakeref_tracker_reset(&rpm->debug, &p);
+ ref_tracker_dir_exit(&rpm->debug);
}

static noinline void
__intel_wakeref_dec_and_check_tracking(struct intel_runtime_pm *rpm)
{
- struct intel_wakeref_tracker saved;
unsigned long flags;

if (!atomic_dec_and_lock_irqsave(&rpm->wakeref_count,
@@ -90,15 +90,8 @@ __intel_wakeref_dec_and_check_tracking(struct intel_runtime_pm *rpm)
flags))
return;

- saved = __intel_wakeref_tracker_reset(&rpm->debug);
+ __ref_tracker_dir_print(&rpm->debug, INTEL_REFTRACK_PRINT_LIMIT);
spin_unlock_irqrestore(&rpm->debug.lock, flags);
-
- if (saved.count) {
- struct drm_printer p = drm_debug_printer("i915");
-
- __intel_wakeref_tracker_show(&saved, &p);
- intel_wakeref_tracker_fini(&saved);
- }
}

void print_intel_runtime_pm_wakeref(struct intel_runtime_pm *rpm,
diff --git a/drivers/gpu/drm/i915/intel_runtime_pm.h b/drivers/gpu/drm/i915/intel_runtime_pm.h
index 0871fa2176474..db5692f1eb67e 100644
--- a/drivers/gpu/drm/i915/intel_runtime_pm.h
+++ b/drivers/gpu/drm/i915/intel_runtime_pm.h
@@ -61,7 +61,7 @@ struct intel_runtime_pm {
* paired rpm_put) we can remove corresponding pairs of and keep
* the array trimmed to active wakerefs.
*/
- struct intel_wakeref_tracker debug;
+ struct ref_tracker_dir debug;
#endif
};

diff --git a/drivers/gpu/drm/i915/intel_wakeref.c b/drivers/gpu/drm/i915/intel_wakeref.c
index db4887e33ea60..97fe9a0f0adbd 100644
--- a/drivers/gpu/drm/i915/intel_wakeref.c
+++ b/drivers/gpu/drm/i915/intel_wakeref.c
@@ -62,6 +62,9 @@ static void ____intel_wakeref_put_last(struct intel_wakeref *wf)
if (likely(!wf->ops->put(wf))) {
rpm_put(wf);
wake_up_var(&wf->wakeref);
+#if IS_ENABLED(CONFIG_DRM_I915_DEBUG_WAKEREF)
+ ref_tracker_dir_exit(&wf->debug);
+#endif
}

unlock:
@@ -96,7 +99,8 @@ static void __intel_wakeref_put_work(struct work_struct *wrk)
void __intel_wakeref_init(struct intel_wakeref *wf,
struct intel_runtime_pm *rpm,
const struct intel_wakeref_ops *ops,
- struct intel_wakeref_lockclass *key)
+ struct intel_wakeref_lockclass *key,
+ const char *name)
{
wf->rpm = rpm;
wf->ops = ops;
@@ -110,7 +114,7 @@ void __intel_wakeref_init(struct intel_wakeref *wf,
"wakeref.work", &key->work, 0);

#if IS_ENABLED(CONFIG_DRM_I915_DEBUG_WAKEREF)
- intel_wakeref_tracker_init(&wf->debug);
+ ref_tracker_dir_init(&wf->debug, INTEL_REFTRACK_DEAD_COUNT, name);
#endif
}

diff --git a/drivers/gpu/drm/i915/intel_wakeref.h b/drivers/gpu/drm/i915/intel_wakeref.h
index 38439deefc5cc..c694d129aca7b 100644
--- a/drivers/gpu/drm/i915/intel_wakeref.h
+++ b/drivers/gpu/drm/i915/intel_wakeref.h
@@ -7,17 +7,24 @@
#ifndef INTEL_WAKEREF_H
#define INTEL_WAKEREF_H

+#include <drm/drm_print.h>
+
#include <linux/atomic.h>
#include <linux/bitfield.h>
#include <linux/bits.h>
#include <linux/lockdep.h>
#include <linux/mutex.h>
#include <linux/refcount.h>
+#include <linux/ref_tracker.h>
+#include <linux/slab.h>
#include <linux/stackdepot.h>
#include <linux/timer.h>
#include <linux/workqueue.h>

-#include "intel_wakeref_tracker.h"
+typedef unsigned long intel_wakeref_t;
+
+#define INTEL_REFTRACK_DEAD_COUNT 16
+#define INTEL_REFTRACK_PRINT_LIMIT 16

#if IS_ENABLED(CONFIG_DRM_I915_DEBUG_WAKEREF)
#define INTEL_WAKEREF_BUG_ON(expr) BUG_ON(expr)
@@ -45,7 +52,7 @@ struct intel_wakeref {
struct delayed_work work;

#if IS_ENABLED(CONFIG_DRM_I915_DEBUG_WAKEREF)
- struct intel_wakeref_tracker debug;
+ struct ref_tracker_dir debug;
#endif
};

@@ -57,11 +64,12 @@ struct intel_wakeref_lockclass {
void __intel_wakeref_init(struct intel_wakeref *wf,
struct intel_runtime_pm *rpm,
const struct intel_wakeref_ops *ops,
- struct intel_wakeref_lockclass *key);
-#define intel_wakeref_init(wf, rpm, ops) do { \
+ struct intel_wakeref_lockclass *key,
+ const char *name);
+#define intel_wakeref_init(wf, rpm, ops, name) do { \
static struct intel_wakeref_lockclass __key; \
\
- __intel_wakeref_init((wf), (rpm), (ops), &__key); \
+ __intel_wakeref_init((wf), (rpm), (ops), &__key, name); \
} while (0)

int __intel_wakeref_get_first(struct intel_wakeref *wf);
@@ -266,17 +274,67 @@ __intel_wakeref_defer_park(struct intel_wakeref *wf)
*/
int intel_wakeref_wait_for_idle(struct intel_wakeref *wf);

+#define INTEL_WAKEREF_DEF ((intel_wakeref_t)(-1))
+
+static inline intel_wakeref_t intel_ref_tracker_alloc(struct ref_tracker_dir *dir)
+{
+ struct ref_tracker *user = NULL;
+
+ ref_tracker_alloc(dir, &user, GFP_NOWAIT);
+
+ return (intel_wakeref_t)user ?: INTEL_WAKEREF_DEF;
+}
+
+static inline void intel_ref_tracker_free(struct ref_tracker_dir *dir,
+ intel_wakeref_t handle)
+{
+ struct ref_tracker *user;
+
+ user = (handle == INTEL_WAKEREF_DEF) ? NULL : (void *)handle;
+
+ ref_tracker_free(dir, &user);
+}
+
+static inline void
+intel_wakeref_tracker_show(struct ref_tracker_dir *dir,
+ struct drm_printer *p)
+{
+ const size_t buf_size = PAGE_SIZE;
+ char *buf, *sb, *se;
+ size_t count;
+
+ buf = kmalloc(buf_size, GFP_NOWAIT);
+ if (!buf)
+ return;
+
+ count = ref_tracker_dir_snprint(dir, buf, buf_size);
+ if (!count)
+ goto free;
+ /* printk does not like big buffers, so we split it */
+ for (sb = buf; *sb; sb = se + 1) {
+ se = strchrnul(sb, '\n');
+ drm_printf(p, "%.*s", (int)(se - sb + 1), sb);
+ if (!*se)
+ break;
+ }
+ if (count >= buf_size)
+ drm_printf(p, "dropped %zd extra bytes of leak report.\n",
+ count + 1 - buf_size);
+free:
+ kfree(buf);
+}
+
#if IS_ENABLED(CONFIG_DRM_I915_DEBUG_WAKEREF)

static inline intel_wakeref_t intel_wakeref_track(struct intel_wakeref *wf)
{
- return intel_wakeref_tracker_add(&wf->debug);
+ return intel_ref_tracker_alloc(&wf->debug);
}

static inline void intel_wakeref_untrack(struct intel_wakeref *wf,
intel_wakeref_t handle)
{
- intel_wakeref_tracker_remove(&wf->debug, handle);
+ intel_ref_tracker_free(&wf->debug, handle);
}

static inline void intel_wakeref_show(struct intel_wakeref *wf,
diff --git a/drivers/gpu/drm/i915/intel_wakeref_tracker.c b/drivers/gpu/drm/i915/intel_wakeref_tracker.c
deleted file mode 100644
index a0bcef13a1085..0000000000000
--- a/drivers/gpu/drm/i915/intel_wakeref_tracker.c
+++ /dev/null
@@ -1,234 +0,0 @@
-// SPDX-License-Identifier: MIT
-/*
- * Copyright © 2021 Intel Corporation
- */
-
-#include <linux/slab.h>
-#include <linux/stackdepot.h>
-#include <linux/stacktrace.h>
-#include <linux/sort.h>
-
-#include <drm/drm_print.h>
-
-#include "intel_wakeref.h"
-
-#define STACKDEPTH 8
-
-static noinline depot_stack_handle_t __save_depot_stack(void)
-{
- unsigned long entries[STACKDEPTH];
- unsigned int n;
-
- n = stack_trace_save(entries, ARRAY_SIZE(entries), 1);
- return stack_depot_save(entries, n, GFP_NOWAIT | __GFP_NOWARN);
-}
-
-static void __print_depot_stack(depot_stack_handle_t stack,
- char *buf, int sz, int indent)
-{
- unsigned long *entries;
- unsigned int nr_entries;
-
- nr_entries = stack_depot_fetch(stack, &entries);
- stack_trace_snprint(buf, sz, entries, nr_entries, indent);
-}
-
-static int cmphandle(const void *_a, const void *_b)
-{
- const depot_stack_handle_t * const a = _a, * const b = _b;
-
- if (*a < *b)
- return -1;
- else if (*a > *b)
- return 1;
- else
- return 0;
-}
-
-void
-__intel_wakeref_tracker_show(const struct intel_wakeref_tracker *w,
- struct drm_printer *p)
-{
- unsigned long i;
- char *buf;
-
- buf = kmalloc(PAGE_SIZE, GFP_NOWAIT | __GFP_NOWARN);
- if (!buf)
- return;
-
- if (w->last_acquire) {
- __print_depot_stack(w->last_acquire, buf, PAGE_SIZE, 2);
- drm_printf(p, "Wakeref last acquired:\n%s", buf);
- }
-
- if (w->last_release) {
- __print_depot_stack(w->last_release, buf, PAGE_SIZE, 2);
- drm_printf(p, "Wakeref last released:\n%s", buf);
- }
-
- drm_printf(p, "Wakeref count: %lu\n", w->count);
-
- sort(w->owners, w->count, sizeof(*w->owners), cmphandle, NULL);
-
- for (i = 0; i < w->count; i++) {
- depot_stack_handle_t stack = w->owners[i];
- unsigned long rep;
-
- rep = 1;
- while (i + 1 < w->count && w->owners[i + 1] == stack)
- rep++, i++;
- __print_depot_stack(stack, buf, PAGE_SIZE, 2);
- drm_printf(p, "Wakeref x%lu taken at:\n%s", rep, buf);
- }
-
- kfree(buf);
-}
-
-void intel_wakeref_tracker_show(struct intel_wakeref_tracker *w,
- struct drm_printer *p)
-{
- struct intel_wakeref_tracker tmp = {};
-
- do {
- unsigned long alloc = tmp.count;
- depot_stack_handle_t *s;
-
- spin_lock_irq(&w->lock);
- tmp.count = w->count;
- if (tmp.count <= alloc)
- memcpy(tmp.owners, w->owners, tmp.count * sizeof(*s));
- tmp.last_acquire = w->last_acquire;
- tmp.last_release = w->last_release;
- spin_unlock_irq(&w->lock);
- if (tmp.count <= alloc)
- break;
-
- s = krealloc(tmp.owners,
- tmp.count * sizeof(*s),
- GFP_NOWAIT | __GFP_NOWARN);
- if (!s)
- goto out;
-
- tmp.owners = s;
- } while (1);
-
- __intel_wakeref_tracker_show(&tmp, p);
-
-out:
- intel_wakeref_tracker_fini(&tmp);
-}
-
-intel_wakeref_t intel_wakeref_tracker_add(struct intel_wakeref_tracker *w)
-{
- depot_stack_handle_t stack, *stacks;
- unsigned long flags;
-
- stack = __save_depot_stack();
- if (!stack)
- return -1;
-
- spin_lock_irqsave(&w->lock, flags);
-
- if (!w->count)
- w->last_acquire = stack;
-
- stacks = krealloc(w->owners,
- (w->count + 1) * sizeof(*stacks),
- GFP_NOWAIT | __GFP_NOWARN);
- if (stacks) {
- stacks[w->count++] = stack;
- w->owners = stacks;
- } else {
- stack = -1;
- }
-
- spin_unlock_irqrestore(&w->lock, flags);
-
- return stack;
-}
-
-void intel_wakeref_tracker_remove(struct intel_wakeref_tracker *w,
- intel_wakeref_t stack)
-{
- unsigned long flags, n;
- bool found = false;
-
- if (unlikely(stack == -1))
- return;
-
- spin_lock_irqsave(&w->lock, flags);
- for (n = w->count; n--; ) {
- if (w->owners[n] == stack) {
- memmove(w->owners + n,
- w->owners + n + 1,
- (--w->count - n) * sizeof(stack));
- found = true;
- break;
- }
- }
- spin_unlock_irqrestore(&w->lock, flags);
-
- if (WARN(!found,
- "Unmatched wakeref %x, tracking %lu\n",
- stack, w->count)) {
- char *buf;
-
- buf = kmalloc(PAGE_SIZE, GFP_NOWAIT | __GFP_NOWARN);
- if (!buf)
- return;
-
- __print_depot_stack(stack, buf, PAGE_SIZE, 2);
- pr_err("wakeref %x from\n%s", stack, buf);
-
- stack = READ_ONCE(w->last_release);
- if (stack && !w->count) {
- __print_depot_stack(stack, buf, PAGE_SIZE, 2);
- pr_err("wakeref last released at\n%s", buf);
- }
-
- kfree(buf);
- }
-}
-
-struct intel_wakeref_tracker
-__intel_wakeref_tracker_reset(struct intel_wakeref_tracker *w)
-{
- struct intel_wakeref_tracker saved;
-
- lockdep_assert_held(&w->lock);
-
- saved = *w;
-
- w->owners = NULL;
- w->count = 0;
- w->last_release = __save_depot_stack();
-
- return saved;
-}
-
-void intel_wakeref_tracker_reset(struct intel_wakeref_tracker *w,
- struct drm_printer *p)
-{
- struct intel_wakeref_tracker tmp;
-
- spin_lock_irq(&w->lock);
- tmp = __intel_wakeref_tracker_reset(w);
- spin_unlock_irq(&w->lock);
-
- if (tmp.count)
- __intel_wakeref_tracker_show(&tmp, p);
-
- intel_wakeref_tracker_fini(&tmp);
-}
-
-void intel_wakeref_tracker_init(struct intel_wakeref_tracker *w)
-{
- memset(w, 0, sizeof(*w));
- spin_lock_init(&w->lock);
- stack_depot_init();
-}
-
-void intel_wakeref_tracker_fini(struct intel_wakeref_tracker *w)
-{
- kfree(w->owners);
-}
diff --git a/drivers/gpu/drm/i915/intel_wakeref_tracker.h b/drivers/gpu/drm/i915/intel_wakeref_tracker.h
deleted file mode 100644
index 61df68e28c0fb..0000000000000
--- a/drivers/gpu/drm/i915/intel_wakeref_tracker.h
+++ /dev/null
@@ -1,76 +0,0 @@
-/* SPDX-License-Identifier: MIT */
-/*
- * Copyright © 2019 Intel Corporation
- */
-
-#ifndef INTEL_WAKEREF_TRACKER_H
-#define INTEL_WAKEREF_TRACKER_H
-
-#include <linux/kconfig.h>
-#include <linux/spinlock.h>
-#include <linux/stackdepot.h>
-
-typedef depot_stack_handle_t intel_wakeref_t;
-
-struct drm_printer;
-
-struct intel_wakeref_tracker {
- spinlock_t lock;
-
- depot_stack_handle_t last_acquire;
- depot_stack_handle_t last_release;
-
- depot_stack_handle_t *owners;
- unsigned long count;
-};
-
-#if IS_ENABLED(CONFIG_DRM_I915_TRACK_WAKEREF)
-
-void intel_wakeref_tracker_init(struct intel_wakeref_tracker *w);
-void intel_wakeref_tracker_fini(struct intel_wakeref_tracker *w);
-
-intel_wakeref_t intel_wakeref_tracker_add(struct intel_wakeref_tracker *w);
-void intel_wakeref_tracker_remove(struct intel_wakeref_tracker *w,
- intel_wakeref_t handle);
-
-struct intel_wakeref_tracker
-__intel_wakeref_tracker_reset(struct intel_wakeref_tracker *w);
-void intel_wakeref_tracker_reset(struct intel_wakeref_tracker *w,
- struct drm_printer *p);
-
-void __intel_wakeref_tracker_show(const struct intel_wakeref_tracker *w,
- struct drm_printer *p);
-void intel_wakeref_tracker_show(struct intel_wakeref_tracker *w,
- struct drm_printer *p);
-
-#else
-
-static inline void intel_wakeref_tracker_init(struct intel_wakeref_tracker *w) {}
-static inline void intel_wakeref_tracker_fini(struct intel_wakeref_tracker *w) {}
-
-static inline intel_wakeref_t
-intel_wakeref_tracker_add(struct intel_wakeref_tracker *w)
-{
- return -1;
-}
-
-static inline void
-intel_wakeref_untrack_remove(struct intel_wakeref_tracker *w, intel_wakeref_t handle) {}
-
-static inline struct intel_wakeref_tracker
-__intel_wakeref_tracker_reset(struct intel_wakeref_tracker *w)
-{
- return (struct intel_wakeref_tracker){};
-}
-
-static inline void intel_wakeref_tracker_reset(struct intel_wakeref_tracker *w,
- struct drm_printer *p)
-{
-}
-
-static inline void __intel_wakeref_tracker_show(const struct intel_wakeref_tracker *w, struct drm_printer *p) {}
-static inline void intel_wakeref_tracker_show(struct intel_wakeref_tracker *w, struct drm_printer *p) {}
-
-#endif
-
-#endif /* INTEL_WAKEREF_TRACKER_H */
--
2.25.1

2022-02-18 14:58:03

by Andrzej Hajda

[permalink] [raw]
Subject: Re: [PATCH 2/9] lib/ref_tracker: compact stacktraces before printing



On 17.02.2022 16:23, Eric Dumazet wrote:
> On Thu, Feb 17, 2022 at 6:05 AM Andrzej Hajda <[email protected]> wrote:
>> In cases references are taken alternately on multiple exec paths leak
>> report can grow substantially, sorting and grouping leaks by stack_handle
>> allows to compact it.
>>
>> Signed-off-by: Andrzej Hajda <[email protected]>
>> Reviewed-by: Chris Wilson <[email protected]>
>> ---
>> lib/ref_tracker.c | 35 +++++++++++++++++++++++++++--------
>> 1 file changed, 27 insertions(+), 8 deletions(-)
>>
>> diff --git a/lib/ref_tracker.c b/lib/ref_tracker.c
>> index 1b0c6d645d64a..0e9c7d2828ccb 100644
>> --- a/lib/ref_tracker.c
>> +++ b/lib/ref_tracker.c
>> @@ -1,5 +1,6 @@
>> // SPDX-License-Identifier: GPL-2.0-or-later
>> #include <linux/export.h>
>> +#include <linux/list_sort.h>
>> #include <linux/ref_tracker.h>
>> #include <linux/slab.h>
>> #include <linux/stacktrace.h>
>> @@ -14,23 +15,41 @@ struct ref_tracker {
>> depot_stack_handle_t free_stack_handle;
>> };
>>
>> +static int ref_tracker_cmp(void *priv, const struct list_head *a, const struct list_head *b)
>> +{
>> + const struct ref_tracker *ta = list_entry(a, const struct ref_tracker, head);
>> + const struct ref_tracker *tb = list_entry(b, const struct ref_tracker, head);
>> +
>> + return ta->alloc_stack_handle - tb->alloc_stack_handle;
>> +}
>> +
>> void __ref_tracker_dir_print(struct ref_tracker_dir *dir,
>> unsigned int display_limit)
>> {
>> + unsigned int i = 0, count = 0;
>> struct ref_tracker *tracker;
>> - unsigned int i = 0;
>> + depot_stack_handle_t stack;
>>
>> lockdep_assert_held(&dir->lock);
>>
>> + if (list_empty(&dir->list))
>> + return;
>> +
>> + list_sort(NULL, &dir->list, ref_tracker_cmp);
> What is going to be the cost of sorting a list with 1,000,000 items in it ?

Do we really have such cases?


>
> I just want to make sure we do not trade printing at most ~10 references
> (from netdev_wait_allrefs()) to a soft lockup :/ with no useful info
> if something went terribly wrong.
>
> I suggest that you do not sort a potential big list, and instead
> attempt to allocate an array of @display_limits 'struct stack_counts'
>
> I suspect @display_limits will always be kept to a reasonable value
> (less than 100 ?)

I thought rather about 16 :)
In theory everything is possible, but do we have real-world examples
which could lead to 100 stack traces?
Maybe some frameworks used by multiple consumers (drivers) ???

>
> struct stack_counts {
> depot_stack_handle_t stack_handle;
> unsigned int count;
> }
>
> Then, iterating the list and update the array (that you can keep
> sorted by ->stack_handle)
>
> Then after iterating, print the (at_most) @display_limits handles
> found in the temp array.

OK, that could be faster and less invasive.
Another solution would be keeping the array in dir and updating it on every
tracker alloc/free; this way we avoid iterating over a potentially big
list, but it would cost memory, and since printing is rather rare I am
not sure it is worth it.

I will try your proposition.
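
(For reference, a rough sketch of the array-based compaction Eric describes;
hypothetical and untested, not taken from any posted patch:)

        struct stack_counts {
                depot_stack_handle_t stack_handle;
                unsigned int count;
        };

        void __ref_tracker_dir_print(struct ref_tracker_dir *dir,
                                     unsigned int display_limit)
        {
                struct ref_tracker *tracker;
                struct stack_counts *stacks;
                unsigned int i, used = 0;

                lockdep_assert_held(&dir->lock);

                stacks = kcalloc(display_limit, sizeof(*stacks), GFP_NOWAIT);
                if (!stacks)
                        return;

                /* one pass over the list; distinct stacks beyond the limit are dropped */
                list_for_each_entry(tracker, &dir->list, head) {
                        for (i = 0; i < used; i++)
                                if (stacks[i].stack_handle == tracker->alloc_stack_handle)
                                        break;
                        if (i == used && used < display_limit)
                                stacks[used++].stack_handle = tracker->alloc_stack_handle;
                        if (i < used)
                                stacks[i].count++;
                }

                for (i = 0; i < used; i++) {
                        pr_err("leaked %u references.\n", stacks[i].count);
                        if (stacks[i].stack_handle)
                                stack_depot_print(stacks[i].stack_handle);
                }

                kfree(stacks);
        }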

Regards
Andrzej

>
>> +
>> list_for_each_entry(tracker, &dir->list, head) {
>> - if (i < display_limit) {
>> - pr_err("leaked reference.\n");
>> - if (tracker->alloc_stack_handle)
>> - stack_depot_print(tracker->alloc_stack_handle);
>> - i++;
>> - } else {
>> + if (i++ >= display_limit)
>> break;
>> - }
>> + if (!count++)
>> + stack = tracker->alloc_stack_handle;
>> + if (stack == tracker->alloc_stack_handle &&
>> + !list_is_last(&tracker->head, &dir->list))
>> + continue;
>> +
>> + pr_err("leaked %d references.\n", count);
>> + if (stack)
>> + stack_depot_print(stack);
>> + count = 0;
>> }
>> }
>> EXPORT_SYMBOL(__ref_tracker_dir_print);
>> --
>> 2.25.1
>>

2022-02-21 09:52:09

by Eric Dumazet

[permalink] [raw]
Subject: Re: [PATCH 2/9] lib/ref_tracker: compact stacktraces before printing

On Fri, Feb 18, 2022 at 2:55 AM Andrzej Hajda <[email protected]> wrote:
>

> OK, that could be faster and less invasive.
> Another solution would be keeping the array in dir and updating it on every
> tracker alloc/free; this way we avoid iterating over a potentially big
> list, but it would cost memory, and since printing is rather rare I am
> not sure it is worth it.

printing is extremely rare [1]

We want to use ref_tracker in production, so we need to keep the fast
path as fast as possible ;)

[1] If you think about providing access to the traces from sysfs, we
might need to make sure we do not hold the dir spinlock
during the expensive generation of the output data.
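
(A sketch of such an interface, using debugfs/seq_file for simplicity and
assuming the ref_tracker_dir_snprint helper added earlier in this series takes
dir->lock internally; the wiring and the function name here are hypothetical:)

        static int ref_leaks_show(struct seq_file *m, void *unused)
        {
                struct ref_tracker_dir *dir = m->private;
                char *buf;

                buf = kzalloc(PAGE_SIZE, GFP_KERNEL);
                if (!buf)
                        return -ENOMEM;

                /* snapshot the leak report into a private buffer first */
                ref_tracker_dir_snprint(dir, buf, PAGE_SIZE);

                /* dir->lock is not involved while handing data to the reader */
                seq_puts(m, buf);
                kfree(buf);
                return 0;
        }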