2018-02-11 09:39:33

by Lukas Wunner

Subject: [PATCH 0/5] Fix deadlock on runtime suspend in DRM drivers

Fix a deadlock on hybrid graphics laptops that's been present since 2013:

DRM drivers poll connectors in 10 sec intervals. The poll worker is
stopped on ->runtime_suspend with cancel_delayed_work_sync(). However
the poll worker invokes the DRM drivers' ->detect callbacks, which call
pm_runtime_get_sync(). If the poll worker starts after runtime suspend
has begun, pm_runtime_get_sync() will wait for runtime suspend to finish
with the intention of runtime resuming the device afterwards. The result
is a circular wait between poll worker and autosuspend worker.
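
To make the circular wait concrete, the two paths look roughly like this
(a sketch distilled from the call chains in the patches below, not
verbatim code):

    Autosuspend worker                      Output poll worker
    ------------------                      ------------------
    ->runtime_suspend                       output_poll_execute()
      drm_kms_helper_poll_disable()           ->detect
        cancel_delayed_work_sync()              pm_runtime_get_sync()
          (waits for poll worker)                 (waits for suspend)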

I'm seeing this deadlock so often it's not funny anymore. I've also found
3 nouveau bugzillas about it and 1 radeon bugzilla. (See patch [3/5] and
[4/5].)

One theoretical solution would be to add a flag to the ->detect callback
to tell it that it's running in the poll worker's context. In that case
it doesn't need to call pm_runtime_get_sync() because the poll worker is
only enabled while runtime active and we know that ->runtime_suspend
waits for it to finish. But this approach would require touching every
single DRM driver's ->detect hook. Moreover the ->detect hook is called
from numerous other places, both in the DRM library as well as in each
driver, making it necessary to trace every possible call chain and check
if it's coming from the poll worker or not. I've tried to do that for
nouveau (see the call sites listed in the commit message of patch [3/5])
and concluded that this approach is a nightmare to implement.
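
(For concreteness, such a flag would amount to extending the hook's
prototype to something like the following hypothetical variant:

    enum drm_connector_status (*detect)(struct drm_connector *connector,
                                        bool force, bool from_poll_worker);

with every caller of every driver's hook having to pass the right value.)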

Another reason for the infeasibility of this approach is that ->detect
is called from within the DRM library without driver involvement, e.g.
via DRM's sysfs interface. In those cases, pm_runtime_get_sync() would
have to be called by the DRM library, but the DRM library is supposed to
stay generic, whereas runtime PM is driver-specific.

So instead, I've come up with this much simpler solution which gleans
from the current task's flags if it's a workqueue worker, retrieves the
work_struct and checks if it's the poll worker. All that's needed is
one small helper in the workqueue code and another small helper in the
DRM library. This solution lends itself nicely to stable-backporting.
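
In each driver's ->detect hook, the resulting guard is then just the
following pattern (this is exactly what the driver patches below apply):

    if (!drm_kms_helper_is_poll_worker()) {
        ret = pm_runtime_get_sync(connector->dev->dev);
        if (ret < 0 && ret != -EACCES)
            return conn_status;
    }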

The patches for radeon and amdgpu are compile-tested only, I only have a
MacBook Pro with an Nvidia GK107 to test. To test the patches, add an
"msleep(12*1000);" at the top of the driver's ->runtime_suspend hook.
This ensures that the poll worker runs after ->runtime_suspend has begun.
Wait 12 sec after the GPU has begun runtime suspend, then check
/sys/bus/pci/devices/0000:01:00.0/power/runtime_status. Without this
series, the status will be stuck at "suspending" and you'll get hung task
errors in dmesg after a few minutes.

i915, malidp and msm "solved" this issue by not stopping the poll worker
on runtime suspend. But this results in the GPU bouncing back and forth
between D0 and D3 continuously. That's a total no-go for GPUs which
runtime suspend to D3cold since every suspend/resume cycle costs a
significant amount of time and energy. (i915 and malidp do not seem
to acquire a runtime PM ref in the ->detect callbacks, which seems
questionable. msm however does and would also deadlock if it disabled
the poll worker on runtime suspend. cc += Archit, Liviu, intel-gfx)

Please review. Thanks,

Lukas

Lukas Wunner (5):
workqueue: Allow retrieval of current task's work struct
drm: Allow determining if current task is output poll worker
drm/nouveau: Fix deadlock on runtime suspend
drm/radeon: Fix deadlock on runtime suspend
drm/amdgpu: Fix deadlock on runtime suspend

 drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c | 58 +++++++++++++-------
 drivers/gpu/drm/drm_probe_helper.c             | 14 +++++
 drivers/gpu/drm/nouveau/nouveau_connector.c    | 18 +++++--
 drivers/gpu/drm/radeon/radeon_connectors.c     | 74 +++++++++++++++++---------
 include/drm/drm_crtc_helper.h                  |  1 +
 include/linux/workqueue.h                      |  1 +
 kernel/workqueue.c                             | 16 ++++++
 7 files changed, 132 insertions(+), 50 deletions(-)

--
2.15.1



2018-02-11 09:41:13

by Lukas Wunner

Subject: [PATCH 3/5] drm/nouveau: Fix deadlock on runtime suspend

nouveau's ->runtime_suspend hook calls drm_kms_helper_poll_disable(),
which waits for the output poll worker to finish if it's running.

The output poll worker meanwhile calls pm_runtime_get_sync() in
nouveau_connector_detect() which waits for the ongoing suspend to finish,
causing a deadlock.

Fix by not acquiring a runtime PM ref if nouveau_connector_detect() is
called in the output poll worker's context. This is safe because
the poll worker is only enabled while runtime active and we know that
->runtime_suspend waits for it to finish.

Other contexts calling nouveau_connector_detect() do require a runtime
PM ref; these comprise:

status_store()                          drm sysfs interface
->fill_modes                            drm callback
drm_fb_helper_probe_connector_modes()
drm_mode_getconnector()
nouveau_connector_hotplug()
nouveau_display_hpd_work()
nv17_tv_set_property()

Stack trace for posterity:

INFO: task kworker/0:1:58 blocked for more than 120 seconds.
Workqueue: events output_poll_execute [drm_kms_helper]
Call Trace:
schedule+0x28/0x80
rpm_resume+0x107/0x6e0
__pm_runtime_resume+0x47/0x70
nouveau_connector_detect+0x7e/0x4a0 [nouveau]
nouveau_connector_detect_lvds+0x132/0x180 [nouveau]
drm_helper_probe_detect_ctx+0x85/0xd0 [drm_kms_helper]
output_poll_execute+0x11e/0x1c0 [drm_kms_helper]
process_one_work+0x184/0x380
worker_thread+0x2e/0x390

INFO: task kworker/0:2:252 blocked for more than 120 seconds.
Workqueue: pm pm_runtime_work
Call Trace:
schedule+0x28/0x80
schedule_timeout+0x1e3/0x370
wait_for_completion+0x123/0x190
flush_work+0x142/0x1c0
nouveau_pmops_runtime_suspend+0x7e/0xd0 [nouveau]
pci_pm_runtime_suspend+0x5c/0x180
vga_switcheroo_runtime_suspend+0x1e/0xa0
__rpm_callback+0xc1/0x200
rpm_callback+0x1f/0x70
rpm_suspend+0x13c/0x640
pm_runtime_work+0x6e/0x90
process_one_work+0x184/0x380
worker_thread+0x2e/0x390

Bugzilla: https://bugs.archlinux.org/task/53497
Bugzilla: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=870523
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=70388#c33
Fixes: 5addcf0a5f0f ("nouveau: add runtime PM support (v0.9)")
Cc: [email protected] # v3.12+: 1234567890ab: workqueue: Allow retrieval of current task's work struct
Cc: [email protected] # v3.12+: 1234567890ab: drm: Allow determining if current task is output poll worker
Cc: Ben Skeggs <[email protected]>
Cc: Dave Airlie <[email protected]>
Signed-off-by: Lukas Wunner <[email protected]>
---
drivers/gpu/drm/nouveau/nouveau_connector.c | 18 +++++++++++++-----
1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_connector.c b/drivers/gpu/drm/nouveau/nouveau_connector.c
index 69d6e61a01ec..6ed9cb053dfa 100644
--- a/drivers/gpu/drm/nouveau/nouveau_connector.c
+++ b/drivers/gpu/drm/nouveau/nouveau_connector.c
@@ -570,9 +570,15 @@ nouveau_connector_detect(struct drm_connector *connector, bool force)
 		nv_connector->edid = NULL;
 	}
 
-	ret = pm_runtime_get_sync(connector->dev->dev);
-	if (ret < 0 && ret != -EACCES)
-		return conn_status;
+	/* Outputs are only polled while runtime active, so acquiring a
+	 * runtime PM ref here is unnecessary (and would deadlock upon
+	 * runtime suspend because it waits for polling to finish).
+	 */
+	if (!drm_kms_helper_is_poll_worker()) {
+		ret = pm_runtime_get_sync(connector->dev->dev);
+		if (ret < 0 && ret != -EACCES)
+			return conn_status;
+	}
 
 	nv_encoder = nouveau_connector_ddc_detect(connector);
 	if (nv_encoder && (i2c = nv_encoder->i2c) != NULL) {
@@ -647,8 +653,10 @@ nouveau_connector_detect(struct drm_connector *connector, bool force)
 
 out:
 
-	pm_runtime_mark_last_busy(connector->dev->dev);
-	pm_runtime_put_autosuspend(connector->dev->dev);
+	if (!drm_kms_helper_is_poll_worker()) {
+		pm_runtime_mark_last_busy(connector->dev->dev);
+		pm_runtime_put_autosuspend(connector->dev->dev);
+	}
 
 	return conn_status;
 }
--
2.15.1


2018-02-11 09:41:57

by Lukas Wunner

Subject: [PATCH 2/5] drm: Allow determining if current task is output poll worker

Introduce a helper to determine if the current task is an output poll
worker.

This allows us to fix a long-standing deadlock in several DRM drivers
wherein the ->runtime_suspend callback waits for the output poll worker
to finish and the worker in turn calls a ->detect callback which waits
for runtime suspend to finish. The ->detect callback is invoked from
multiple call sites and waiting for runtime suspend to finish is the
correct thing to do except if it's executing in the context of the
worker.

Cc: Dave Airlie <[email protected]>
Cc: Ben Skeggs <[email protected]>
Cc: Alex Deucher <[email protected]>
Signed-off-by: Lukas Wunner <[email protected]>
---
 drivers/gpu/drm/drm_probe_helper.c | 14 ++++++++++++++
 include/drm/drm_crtc_helper.h      |  1 +
 2 files changed, 15 insertions(+)

diff --git a/drivers/gpu/drm/drm_probe_helper.c b/drivers/gpu/drm/drm_probe_helper.c
index 555fbe54d6e2..019881d15ce1 100644
--- a/drivers/gpu/drm/drm_probe_helper.c
+++ b/drivers/gpu/drm/drm_probe_helper.c
@@ -653,6 +653,20 @@ static void output_poll_execute(struct work_struct *work)
 		schedule_delayed_work(delayed_work, DRM_OUTPUT_POLL_PERIOD);
 }
 
+/**
+ * drm_kms_helper_is_poll_worker - is %current task an output poll worker?
+ *
+ * Determine if %current task is an output poll worker. This can be used
+ * to select distinct code paths for output polling versus other contexts.
+ */
+bool drm_kms_helper_is_poll_worker(void)
+{
+	struct work_struct *work = current_work();
+
+	return work && work->func == output_poll_execute;
+}
+EXPORT_SYMBOL(drm_kms_helper_is_poll_worker);
+
 /**
  * drm_kms_helper_poll_disable - disable output polling
  * @dev: drm_device
diff --git a/include/drm/drm_crtc_helper.h b/include/drm/drm_crtc_helper.h
index 76e237bd989b..6914633037a5 100644
--- a/include/drm/drm_crtc_helper.h
+++ b/include/drm/drm_crtc_helper.h
@@ -77,5 +77,6 @@ void drm_kms_helper_hotplug_event(struct drm_device *dev);
 
 void drm_kms_helper_poll_disable(struct drm_device *dev);
 void drm_kms_helper_poll_enable(struct drm_device *dev);
+bool drm_kms_helper_is_poll_worker(void);
 
 #endif
--
2.15.1


2018-02-11 09:42:03

by Lukas Wunner

Subject: [PATCH 5/5] drm/amdgpu: Fix deadlock on runtime suspend

amdgpu's ->runtime_suspend hook calls drm_kms_helper_poll_disable(),
which waits for the output poll worker to finish if it's running.

The output poll worker meanwhile calls pm_runtime_get_sync() in
amdgpu's ->detect hooks, which waits for the ongoing suspend to finish,
causing a deadlock.

Fix by not acquiring a runtime PM ref if the ->detect hooks are called
in the output poll worker's context. This is safe because the poll
worker is only enabled while runtime active and we know that
->runtime_suspend waits for it to finish.

Fixes: d38ceaf99ed0 ("drm/amdgpu: add core driver (v4)")
Cc: [email protected] # v4.2+: 1234567890ab: workqueue: Allow retrieval of current task's work struct
Cc: [email protected] # v4.2+: 1234567890ab: drm: Allow determining if current task is output poll worker
Cc: Alex Deucher <[email protected]>
Signed-off-by: Lukas Wunner <[email protected]>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c | 58 +++++++++++++++++---------
1 file changed, 38 insertions(+), 20 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
index 8ca3783f2deb..74d2efaec52f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
@@ -736,9 +736,11 @@ amdgpu_connector_lvds_detect(struct drm_connector *connector, bool force)
 	enum drm_connector_status ret = connector_status_disconnected;
 	int r;
 
-	r = pm_runtime_get_sync(connector->dev->dev);
-	if (r < 0)
-		return connector_status_disconnected;
+	if (!drm_kms_helper_is_poll_worker()) {
+		r = pm_runtime_get_sync(connector->dev->dev);
+		if (r < 0)
+			return connector_status_disconnected;
+	}
 
 	if (encoder) {
 		struct amdgpu_encoder *amdgpu_encoder = to_amdgpu_encoder(encoder);
@@ -757,8 +759,12 @@ amdgpu_connector_lvds_detect(struct drm_connector *connector, bool force)
 	/* check acpi lid status ??? */
 
 	amdgpu_connector_update_scratch_regs(connector, ret);
-	pm_runtime_mark_last_busy(connector->dev->dev);
-	pm_runtime_put_autosuspend(connector->dev->dev);
+
+	if (!drm_kms_helper_is_poll_worker()) {
+		pm_runtime_mark_last_busy(connector->dev->dev);
+		pm_runtime_put_autosuspend(connector->dev->dev);
+	}
+
 	return ret;
 }
 
@@ -868,9 +874,11 @@ amdgpu_connector_vga_detect(struct drm_connector *connector, bool force)
 	enum drm_connector_status ret = connector_status_disconnected;
 	int r;
 
-	r = pm_runtime_get_sync(connector->dev->dev);
-	if (r < 0)
-		return connector_status_disconnected;
+	if (!drm_kms_helper_is_poll_worker()) {
+		r = pm_runtime_get_sync(connector->dev->dev);
+		if (r < 0)
+			return connector_status_disconnected;
+	}
 
 	encoder = amdgpu_connector_best_single_encoder(connector);
 	if (!encoder)
@@ -924,8 +932,10 @@ amdgpu_connector_vga_detect(struct drm_connector *connector, bool force)
 	amdgpu_connector_update_scratch_regs(connector, ret);
 
 out:
-	pm_runtime_mark_last_busy(connector->dev->dev);
-	pm_runtime_put_autosuspend(connector->dev->dev);
+	if (!drm_kms_helper_is_poll_worker()) {
+		pm_runtime_mark_last_busy(connector->dev->dev);
+		pm_runtime_put_autosuspend(connector->dev->dev);
+	}
 
 	return ret;
 }
@@ -988,9 +998,11 @@ amdgpu_connector_dvi_detect(struct drm_connector *connector, bool force)
 	enum drm_connector_status ret = connector_status_disconnected;
 	bool dret = false, broken_edid = false;
 
-	r = pm_runtime_get_sync(connector->dev->dev);
-	if (r < 0)
-		return connector_status_disconnected;
+	if (!drm_kms_helper_is_poll_worker()) {
+		r = pm_runtime_get_sync(connector->dev->dev);
+		if (r < 0)
+			return connector_status_disconnected;
+	}
 
 	if (!force && amdgpu_connector_check_hpd_status_unchanged(connector)) {
 		ret = connector->status;
@@ -1115,8 +1127,10 @@ amdgpu_connector_dvi_detect(struct drm_connector *connector, bool force)
 	amdgpu_connector_update_scratch_regs(connector, ret);
 
 exit:
-	pm_runtime_mark_last_busy(connector->dev->dev);
-	pm_runtime_put_autosuspend(connector->dev->dev);
+	if (!drm_kms_helper_is_poll_worker()) {
+		pm_runtime_mark_last_busy(connector->dev->dev);
+		pm_runtime_put_autosuspend(connector->dev->dev);
+	}
 
 	return ret;
 }
@@ -1359,9 +1373,11 @@ amdgpu_connector_dp_detect(struct drm_connector *connector, bool force)
 	struct drm_encoder *encoder = amdgpu_connector_best_single_encoder(connector);
 	int r;
 
-	r = pm_runtime_get_sync(connector->dev->dev);
-	if (r < 0)
-		return connector_status_disconnected;
+	if (!drm_kms_helper_is_poll_worker()) {
+		r = pm_runtime_get_sync(connector->dev->dev);
+		if (r < 0)
+			return connector_status_disconnected;
+	}
 
 	if (!force && amdgpu_connector_check_hpd_status_unchanged(connector)) {
 		ret = connector->status;
@@ -1429,8 +1445,10 @@ amdgpu_connector_dp_detect(struct drm_connector *connector, bool force)
 
 	amdgpu_connector_update_scratch_regs(connector, ret);
 out:
-	pm_runtime_mark_last_busy(connector->dev->dev);
-	pm_runtime_put_autosuspend(connector->dev->dev);
+	if (!drm_kms_helper_is_poll_worker()) {
+		pm_runtime_mark_last_busy(connector->dev->dev);
+		pm_runtime_put_autosuspend(connector->dev->dev);
+	}
 
 	return ret;
 }
--
2.15.1


2018-02-11 09:42:04

by Lukas Wunner

Subject: [PATCH 4/5] drm/radeon: Fix deadlock on runtime suspend

radeon's ->runtime_suspend hook calls drm_kms_helper_poll_disable(),
which waits for the output poll worker to finish if it's running.

The output poll worker meanwhile calls pm_runtime_get_sync() in
radeon's ->detect hooks, which waits for the ongoing suspend to finish,
causing a deadlock.

Fix by not acquiring a runtime PM ref if the ->detect hooks are called
in the output poll worker's context. This is safe because the poll
worker is only enabled while runtime active and we know that
->runtime_suspend waits for it to finish.

Stack trace for posterity:

INFO: task kworker/0:3:31847 blocked for more than 120 seconds
Workqueue: events output_poll_execute [drm_kms_helper]
Call Trace:
schedule+0x3c/0x90
rpm_resume+0x1e2/0x690
__pm_runtime_resume+0x3f/0x60
radeon_lvds_detect+0x39/0xf0 [radeon]
output_poll_execute+0xda/0x1e0 [drm_kms_helper]
process_one_work+0x14b/0x440
worker_thread+0x48/0x4a0

INFO: task kworker/2:0:10493 blocked for more than 120 seconds.
Workqueue: pm pm_runtime_work
Call Trace:
schedule+0x3c/0x90
schedule_timeout+0x1b3/0x240
wait_for_common+0xc2/0x180
wait_for_completion+0x1d/0x20
flush_work+0xfc/0x1a0
__cancel_work_timer+0xa5/0x1d0
cancel_delayed_work_sync+0x13/0x20
drm_kms_helper_poll_disable+0x1f/0x30 [drm_kms_helper]
radeon_pmops_runtime_suspend+0x3d/0xa0 [radeon]
pci_pm_runtime_suspend+0x61/0x1a0
vga_switcheroo_runtime_suspend+0x21/0x70
__rpm_callback+0x32/0x70
rpm_callback+0x24/0x80
rpm_suspend+0x12b/0x640
pm_runtime_work+0x6f/0xb0
process_one_work+0x14b/0x440
worker_thread+0x48/0x4a0

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=94147
Fixes: 10ebc0bc0934 ("drm/radeon: add runtime PM support (v2)")
Cc: [email protected] # v3.13+: 1234567890ab: workqueue: Allow retrieval of current task's work struct
Cc: [email protected] # v3.13+: 1234567890ab: drm: Allow determining if current task is output poll worker
Cc: Ismo Toijala <[email protected]>
Cc: Alex Deucher <[email protected]>
Cc: Dave Airlie <[email protected]>
Signed-off-by: Lukas Wunner <[email protected]>
---
drivers/gpu/drm/radeon/radeon_connectors.c | 74 ++++++++++++++++++++----------
1 file changed, 49 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon_connectors.c b/drivers/gpu/drm/radeon/radeon_connectors.c
index 5012f5e47a1e..2e2ca3c6b47d 100644
--- a/drivers/gpu/drm/radeon/radeon_connectors.c
+++ b/drivers/gpu/drm/radeon/radeon_connectors.c
@@ -899,9 +899,11 @@ radeon_lvds_detect(struct drm_connector *connector, bool force)
 	enum drm_connector_status ret = connector_status_disconnected;
 	int r;
 
-	r = pm_runtime_get_sync(connector->dev->dev);
-	if (r < 0)
-		return connector_status_disconnected;
+	if (!drm_kms_helper_is_poll_worker()) {
+		r = pm_runtime_get_sync(connector->dev->dev);
+		if (r < 0)
+			return connector_status_disconnected;
+	}
 
 	if (encoder) {
 		struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
@@ -924,8 +926,12 @@ radeon_lvds_detect(struct drm_connector *connector, bool force)
 	/* check acpi lid status ??? */
 
 	radeon_connector_update_scratch_regs(connector, ret);
-	pm_runtime_mark_last_busy(connector->dev->dev);
-	pm_runtime_put_autosuspend(connector->dev->dev);
+
+	if (!drm_kms_helper_is_poll_worker()) {
+		pm_runtime_mark_last_busy(connector->dev->dev);
+		pm_runtime_put_autosuspend(connector->dev->dev);
+	}
+
 	return ret;
 }
 
@@ -1039,9 +1045,11 @@ radeon_vga_detect(struct drm_connector *connector, bool force)
 	enum drm_connector_status ret = connector_status_disconnected;
 	int r;
 
-	r = pm_runtime_get_sync(connector->dev->dev);
-	if (r < 0)
-		return connector_status_disconnected;
+	if (!drm_kms_helper_is_poll_worker()) {
+		r = pm_runtime_get_sync(connector->dev->dev);
+		if (r < 0)
+			return connector_status_disconnected;
+	}
 
 	encoder = radeon_best_single_encoder(connector);
 	if (!encoder)
@@ -1108,8 +1116,10 @@ radeon_vga_detect(struct drm_connector *connector, bool force)
 	radeon_connector_update_scratch_regs(connector, ret);
 
 out:
-	pm_runtime_mark_last_busy(connector->dev->dev);
-	pm_runtime_put_autosuspend(connector->dev->dev);
+	if (!drm_kms_helper_is_poll_worker()) {
+		pm_runtime_mark_last_busy(connector->dev->dev);
+		pm_runtime_put_autosuspend(connector->dev->dev);
+	}
 
 	return ret;
 }
@@ -1173,9 +1183,11 @@ radeon_tv_detect(struct drm_connector *connector, bool force)
 	if (!radeon_connector->dac_load_detect)
 		return ret;
 
-	r = pm_runtime_get_sync(connector->dev->dev);
-	if (r < 0)
-		return connector_status_disconnected;
+	if (!drm_kms_helper_is_poll_worker()) {
+		r = pm_runtime_get_sync(connector->dev->dev);
+		if (r < 0)
+			return connector_status_disconnected;
+	}
 
 	encoder = radeon_best_single_encoder(connector);
 	if (!encoder)
@@ -1187,8 +1199,12 @@ radeon_tv_detect(struct drm_connector *connector, bool force)
 	if (ret == connector_status_connected)
 		ret = radeon_connector_analog_encoder_conflict_solve(connector, encoder, ret, false);
 	radeon_connector_update_scratch_regs(connector, ret);
-	pm_runtime_mark_last_busy(connector->dev->dev);
-	pm_runtime_put_autosuspend(connector->dev->dev);
+
+	if (!drm_kms_helper_is_poll_worker()) {
+		pm_runtime_mark_last_busy(connector->dev->dev);
+		pm_runtime_put_autosuspend(connector->dev->dev);
+	}
+
 	return ret;
 }
 
@@ -1251,9 +1267,11 @@ radeon_dvi_detect(struct drm_connector *connector, bool force)
 	enum drm_connector_status ret = connector_status_disconnected;
 	bool dret = false, broken_edid = false;
 
-	r = pm_runtime_get_sync(connector->dev->dev);
-	if (r < 0)
-		return connector_status_disconnected;
+	if (!drm_kms_helper_is_poll_worker()) {
+		r = pm_runtime_get_sync(connector->dev->dev);
+		if (r < 0)
+			return connector_status_disconnected;
+	}
 
 	if (radeon_connector->detected_hpd_without_ddc) {
 		force = true;
@@ -1436,8 +1454,10 @@ radeon_dvi_detect(struct drm_connector *connector, bool force)
 	}
 
 exit:
-	pm_runtime_mark_last_busy(connector->dev->dev);
-	pm_runtime_put_autosuspend(connector->dev->dev);
+	if (!drm_kms_helper_is_poll_worker()) {
+		pm_runtime_mark_last_busy(connector->dev->dev);
+		pm_runtime_put_autosuspend(connector->dev->dev);
+	}
 
 	return ret;
 }
@@ -1688,9 +1708,11 @@ radeon_dp_detect(struct drm_connector *connector, bool force)
 	if (radeon_dig_connector->is_mst)
 		return connector_status_disconnected;
 
-	r = pm_runtime_get_sync(connector->dev->dev);
-	if (r < 0)
-		return connector_status_disconnected;
+	if (!drm_kms_helper_is_poll_worker()) {
+		r = pm_runtime_get_sync(connector->dev->dev);
+		if (r < 0)
+			return connector_status_disconnected;
+	}
 
 	if (!force && radeon_check_hpd_status_unchanged(connector)) {
 		ret = connector->status;
@@ -1777,8 +1799,10 @@ radeon_dp_detect(struct drm_connector *connector, bool force)
 	}
 
 out:
-	pm_runtime_mark_last_busy(connector->dev->dev);
-	pm_runtime_put_autosuspend(connector->dev->dev);
+	if (!drm_kms_helper_is_poll_worker()) {
+		pm_runtime_mark_last_busy(connector->dev->dev);
+		pm_runtime_put_autosuspend(connector->dev->dev);
+	}
 
 	return ret;
 }
--
2.15.1


2018-02-11 09:43:49

by Lukas Wunner

Subject: [PATCH 1/5] workqueue: Allow retrieval of current task's work struct

Introduce a helper to retrieve the current task's work struct if it is
a workqueue worker.

This allows us to fix a long-standing deadlock in several DRM drivers
wherein the ->runtime_suspend callback waits for a specific worker to
finish and that worker in turn calls a function which waits for runtime
suspend to finish. That function is invoked from multiple call sites
and waiting for runtime suspend to finish is the correct thing to do
except if it's executing in the context of the worker.

Cc: Tejun Heo <[email protected]>
Cc: Lai Jiangshan <[email protected]>
Cc: Dave Airlie <[email protected]>
Cc: Ben Skeggs <[email protected]>
Cc: Alex Deucher <[email protected]>
Signed-off-by: Lukas Wunner <[email protected]>
---
 include/linux/workqueue.h | 1 +
 kernel/workqueue.c        | 16 ++++++++++++++++
 2 files changed, 17 insertions(+)

diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index 4a54ef96aff5..bc0cda180c8b 100644
--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -465,6 +465,7 @@ extern bool cancel_delayed_work_sync(struct delayed_work *dwork);

 extern void workqueue_set_max_active(struct workqueue_struct *wq,
 				     int max_active);
+extern struct work_struct *current_work(void);
 extern bool current_is_workqueue_rescuer(void);
 extern bool workqueue_congested(int cpu, struct workqueue_struct *wq);
 extern unsigned int work_busy(struct work_struct *work);
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 017044c26233..bb9a519cbf50 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -4179,6 +4179,22 @@ void workqueue_set_max_active(struct workqueue_struct *wq, int max_active)
 }
 EXPORT_SYMBOL_GPL(workqueue_set_max_active);
 
+/**
+ * current_work - retrieve %current task's work struct
+ *
+ * Determine if %current task is a workqueue worker and what it's working on.
+ * Useful to find out the context that the %current task is running in.
+ *
+ * Return: work struct if %current task is a workqueue worker, %NULL otherwise.
+ */
+struct work_struct *current_work(void)
+{
+	struct worker *worker = current_wq_worker();
+
+	return worker ? worker->current_work : NULL;
+}
+EXPORT_SYMBOL(current_work);
+
 /**
  * current_is_workqueue_rescuer - is %current workqueue rescuer?
  *
--
2.15.1


2018-02-11 18:59:26

by Mike Lothian

Subject: Re: [PATCH 0/5] Fix deadlock on runtime suspend in DRM drivers

On 11 February 2018 at 09:38, Lukas Wunner <[email protected]> wrote:
> Fix a deadlock on hybrid graphics laptops that's been present since 2013:
>
> DRM drivers poll connectors in 10 sec intervals. The poll worker is
> stopped on ->runtime_suspend with cancel_delayed_work_sync(). However
> the poll worker invokes the DRM drivers' ->detect callbacks, which call
> pm_runtime_get_sync(). If the poll worker starts after runtime suspend
> has begun, pm_runtime_get_sync() will wait for runtime suspend to finish
> with the intention of runtime resuming the device afterwards. The result
> is a circular wait between poll worker and autosuspend worker.
>
> I'm seeing this deadlock so often it's not funny anymore. I've also found
> 3 nouveau bugzillas about it and 1 radeon bugzilla. (See patch [3/5] and
> [4/5].)
>
> One theoretical solution would be to add a flag to the ->detect callback
> to tell it that it's running in the poll worker's context. In that case
> it doesn't need to call pm_runtime_get_sync() because the poll worker is
> only enabled while runtime active and we know that ->runtime_suspend
> waits for it to finish. But this approach would require touching every
> single DRM driver's ->detect hook. Moreover the ->detect hook is called
> from numerous other places, both in the DRM library as well as in each
> driver, making it necessary to trace every possible call chain and check
> if it's coming from the poll worker or not. I've tried to do that for
> nouveau (see the call sites listed in the commit message of patch [3/5])
> and concluded that this approach is a nightmare to implement.
>
> Another reason for the infeasibility of this approach is that ->detect
> is called from within the DRM library without driver involvement, e.g.
> via DRM's sysfs interface. In those cases, pm_runtime_get_sync() would
> have to be called by the DRM library, but the DRM library is supposed to
> stay generic, whereas runtime PM is driver-specific.
>
> So instead, I've come up with this much simpler solution which gleans
> from the current task's flags if it's a workqueue worker, retrieves the
> work_struct and checks if it's the poll worker. All that's needed is
> one small helper in the workqueue code and another small helper in the
> DRM library. This solution lends itself nicely to stable-backporting.
>
> The patches for radeon and amdgpu are compile-tested only, I only have a
> MacBook Pro with an Nvidia GK107 to test. To test the patches, add an
> "msleep(12*1000);" at the top of the driver's ->runtime_suspend hook.
> This ensures that the poll worker runs after ->runtime_suspend has begun.
> Wait 12 sec after the GPU has begun runtime suspend, then check
> /sys/bus/pci/devices/0000:01:00.0/power/runtime_status. Without this
> series, the status will be stuck at "suspending" and you'll get hung task
> errors in dmesg after a few minutes.
>
> i915, malidp and msm "solved" this issue by not stopping the poll worker
> on runtime suspend. But this results in the GPU bouncing back and forth
> between D0 and D3 continuously. That's a total no-go for GPUs which
> runtime suspend to D3cold since every suspend/resume cycle costs a
> significant amount of time and energy. (i915 and malidp do not seem
> to acquire a runtime PM ref in the ->detect callbacks, which seems
> questionable. msm however does and would also deadlock if it disabled
> the poll worker on runtime suspend. cc += Archit, Liviu, intel-gfx)
>
> Please review. Thanks,
>
> Lukas
>
> Lukas Wunner (5):
> workqueue: Allow retrieval of current task's work struct
> drm: Allow determining if current task is output poll worker
> drm/nouveau: Fix deadlock on runtime suspend
> drm/radeon: Fix deadlock on runtime suspend
> drm/amdgpu: Fix deadlock on runtime suspend
>
> drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c | 58 +++++++++++++-------
> drivers/gpu/drm/drm_probe_helper.c | 14 +++++
> drivers/gpu/drm/nouveau/nouveau_connector.c | 18 +++++--
> drivers/gpu/drm/radeon/radeon_connectors.c | 74 +++++++++++++++++---------
> include/drm/drm_crtc_helper.h | 1 +
> include/linux/workqueue.h | 1 +
> kernel/workqueue.c | 16 ++++++
> 7 files changed, 132 insertions(+), 50 deletions(-)
>
> --
> 2.15.1
>


I wasn't quite sure where to add that msleep. I've tested the patches
as-is on top of agd5f's wip branch without ill effects.

I've had a radeon and now an amdgpu PRIME setup and don't believe I've
ever seen this issue.

If you could pop a patch together for the msleep, I'll give it a test on amdgpu.

Cheers

Mike

2018-02-11 19:24:09

by Lukas Wunner

Subject: Re: [PATCH 0/5] Fix deadlock on runtime suspend in DRM drivers

On Sun, Feb 11, 2018 at 06:58:11PM +0000, Mike Lothian wrote:
> On 11 February 2018 at 09:38, Lukas Wunner <[email protected]> wrote:
> > The patches for radeon and amdgpu are compile-tested only, I only have a
> > MacBook Pro with an Nvidia GK107 to test. To test the patches, add an
> > "msleep(12*1000);" at the top of the driver's ->runtime_suspend hook.
> > This ensures that the poll worker runs after ->runtime_suspend has begun.
> > Wait 12 sec after the GPU has begun runtime suspend, then check
> > /sys/bus/pci/devices/0000:01:00.0/power/runtime_status. Without this
> > series, the status will be stuck at "suspending" and you'll get hung task
> > errors in dmesg after a few minutes.
>
> I wasn't quite sure where to add that msleep. I've tested the patches
> as is on top of agd5f's wip branch without ill effects
>
> I've had a radeon and now a amdgpu PRIME setup and don't believe I've
> ever seen this issue
>
> If you could pop a patch together for the msleep I'll give it a test on
> amdgpu

Here you go, this is for all 3 drivers.
Should deadlock without the series.
Thanks!


diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 50afcf6..eefa0d4 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -718,6 +718,9 @@ static int amdgpu_pmops_runtime_suspend(struct device *dev)
 		return -EBUSY;
 	}
 
+	printk("waiting 12 sec\n");
+	msleep(12*1000);
+	printk("done waiting 12 sec\n");
 	drm_dev->switch_power_state = DRM_SWITCH_POWER_CHANGING;
 	drm_kms_helper_poll_disable(drm_dev);
 	vga_switcheroo_set_dynamic_switch(pdev, VGA_SWITCHEROO_OFF);
diff --git a/drivers/gpu/drm/drm_probe_helper.c b/drivers/gpu/drm/drm_probe_helper.c
index 555fbe5..ee7cf0d 100644
--- a/drivers/gpu/drm/drm_probe_helper.c
+++ b/drivers/gpu/drm/drm_probe_helper.c
@@ -586,6 +586,7 @@ static void output_poll_execute(struct work_struct *work)
 		repoll = true;
 		goto out;
 	}
+	dev_info(&dev->pdev->dev, "begin poll\n");
 
 	drm_connector_list_iter_begin(dev, &conn_iter);
 	drm_for_each_connector_iter(connector, &conn_iter) {
@@ -651,6 +652,7 @@ static void output_poll_execute(struct work_struct *work)
 
 	if (repoll)
 		schedule_delayed_work(delayed_work, DRM_OUTPUT_POLL_PERIOD);
+	dev_info(&dev->pdev->dev, "end poll\n");
 }
 
 /**
diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c
index 3e29302..f9da5bc 100644
--- a/drivers/gpu/drm/nouveau/nouveau_drm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_drm.c
@@ -855,6 +855,9 @@ static int nouveau_drm_probe(struct pci_dev *pdev,
 		return -EBUSY;
 	}
 
+	printk("waiting 12 sec\n");
+	msleep(12*1000);
+	printk("done waiting 12 sec\n");
 	drm_kms_helper_poll_disable(drm_dev);
 	vga_switcheroo_set_dynamic_switch(pdev, VGA_SWITCHEROO_OFF);
 	nouveau_switcheroo_optimus_dsm();
diff --git a/drivers/gpu/drm/radeon/radeon_drv.c b/drivers/gpu/drm/radeon/radeon_drv.c
index 31dd04f..707b8aa 100644
--- a/drivers/gpu/drm/radeon/radeon_drv.c
+++ b/drivers/gpu/drm/radeon/radeon_drv.c
@@ -413,6 +413,9 @@ static int radeon_pmops_runtime_suspend(struct device *dev)
 		return -EBUSY;
 	}
 
+	printk("waiting 12 sec\n");
+	msleep(12*1000);
+	printk("done waiting 12 sec\n");
 	drm_dev->switch_power_state = DRM_SWITCH_POWER_CHANGING;
 	drm_kms_helper_poll_disable(drm_dev);
 	vga_switcheroo_set_dynamic_switch(pdev, VGA_SWITCHEROO_OFF);

2018-02-11 19:42:51

by Lukas Wunner

Subject: Re: [PATCH 0/5] Fix deadlock on runtime suspend in DRM drivers

On Sun, Feb 11, 2018 at 08:23:14PM +0100, Lukas Wunner wrote:
> On Sun, Feb 11, 2018 at 06:58:11PM +0000, Mike Lothian wrote:
> > On 11 February 2018 at 09:38, Lukas Wunner <[email protected]> wrote:
> > > The patches for radeon and amdgpu are compile-tested only, I only have a
> > > MacBook Pro with an Nvidia GK107 to test. To test the patches, add an
> > > "msleep(12*1000);" at the top of the driver's ->runtime_suspend hook.
> > > This ensures that the poll worker runs after ->runtime_suspend has begun.
> > > Wait 12 sec after the GPU has begun runtime suspend, then check
> > > /sys/bus/pci/devices/0000:01:00.0/power/runtime_status. Without this
> > > series, the status will be stuck at "suspending" and you'll get hung task
> > > errors in dmesg after a few minutes.
> >
> > I wasn't quite sure where to add that msleep. I've tested the patches
> > as is on top of agd5f's wip branch without ill effects
> >
> > I've had a radeon and now a amdgpu PRIME setup and don't believe I've
> > ever seen this issue
> >
> > If you could pop a patch together for the msleep I'll give it a test on
> > amdgpu
>
> Here you go, this is for all 3 drivers.
> Should deadlock without the series.
> Thanks!

Sorry, I missed that amdgpu_drv.c and radeon_drv.c don't include delay.h,
rectified testing patch below:


diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 50afcf6..beaaf2c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -36,6 +36,7 @@

 #include <drm/drm_pciids.h>
 #include <linux/console.h>
+#include <linux/delay.h>
 #include <linux/module.h>
 #include <linux/pm_runtime.h>
 #include <linux/vga_switcheroo.h>
@@ -718,6 +719,9 @@ static int amdgpu_pmops_runtime_suspend(struct device *dev)
 		return -EBUSY;
 	}
 
+	printk("waiting 12 sec\n");
+	msleep(12*1000);
+	printk("done waiting 12 sec\n");
 	drm_dev->switch_power_state = DRM_SWITCH_POWER_CHANGING;
 	drm_kms_helper_poll_disable(drm_dev);
 	vga_switcheroo_set_dynamic_switch(pdev, VGA_SWITCHEROO_OFF);
diff --git a/drivers/gpu/drm/drm_probe_helper.c b/drivers/gpu/drm/drm_probe_helper.c
index 555fbe5..ee7cf0d 100644
--- a/drivers/gpu/drm/drm_probe_helper.c
+++ b/drivers/gpu/drm/drm_probe_helper.c
@@ -586,6 +586,7 @@ static void output_poll_execute(struct work_struct *work)
 		repoll = true;
 		goto out;
 	}
+	dev_info(&dev->pdev->dev, "begin poll\n");
 
 	drm_connector_list_iter_begin(dev, &conn_iter);
 	drm_for_each_connector_iter(connector, &conn_iter) {
@@ -651,6 +652,7 @@ static void output_poll_execute(struct work_struct *work)
 
 	if (repoll)
 		schedule_delayed_work(delayed_work, DRM_OUTPUT_POLL_PERIOD);
+	dev_info(&dev->pdev->dev, "end poll\n");
 }
 
 /**
diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c
index 3e29302..f9da5bc 100644
--- a/drivers/gpu/drm/nouveau/nouveau_drm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_drm.c
@@ -855,6 +855,9 @@ static int nouveau_drm_probe(struct pci_dev *pdev,
 		return -EBUSY;
 	}
 
+	printk("waiting 12 sec\n");
+	msleep(12*1000);
+	printk("done waiting 12 sec\n");
 	drm_kms_helper_poll_disable(drm_dev);
 	vga_switcheroo_set_dynamic_switch(pdev, VGA_SWITCHEROO_OFF);
 	nouveau_switcheroo_optimus_dsm();
diff --git a/drivers/gpu/drm/radeon/radeon_drv.c b/drivers/gpu/drm/radeon/radeon_drv.c
index 31dd04f..2b4e7e0 100644
--- a/drivers/gpu/drm/radeon/radeon_drv.c
+++ b/drivers/gpu/drm/radeon/radeon_drv.c
@@ -35,6 +35,7 @@
 
 #include <drm/drm_pciids.h>
 #include <linux/console.h>
+#include <linux/delay.h>
 #include <linux/module.h>
 #include <linux/pm_runtime.h>
 #include <linux/vga_switcheroo.h>
@@ -413,6 +414,9 @@ static int radeon_pmops_runtime_suspend(struct device *dev)
 		return -EBUSY;
 	}
 
+	printk("waiting 12 sec\n");
+	msleep(12*1000);
+	printk("done waiting 12 sec\n");
 	drm_dev->switch_power_state = DRM_SWITCH_POWER_CHANGING;
 	drm_kms_helper_poll_disable(drm_dev);
 	vga_switcheroo_set_dynamic_switch(pdev, VGA_SWITCHEROO_OFF);

2018-02-12 00:38:56

by Mike Lothian

Subject: Re: [PATCH 0/5] Fix deadlock on runtime suspend in DRM drivers

Hi

I've not been able to reproduce the original problem you're trying to
solve on amdgpu, that's with or without your patch set and the above
"trigger" too.

Is anything else required to trigger it? I started multiple DRI_PRIME
glxgears, in parallel, serially waiting out the 12 seconds and serially
within the 12 seconds, and I couldn't reproduce it.

Regards

Mike

2018-02-12 05:09:41

by Lukas Wunner

Subject: Re: [PATCH 0/5] Fix deadlock on runtime suspend in DRM drivers

On Mon, Feb 12, 2018 at 12:35:51AM +0000, Mike Lothian wrote:
> I've not been able to reproduce the original problem you're trying to
> solve on amdgpu, that's with or without your patch set and the above
> "trigger" too
>
> Is anything else required to trigger it, I started multiple DRI_PRIME
> glxgears, in parallel, serial waiting the 12 seconds and serial within
> the 12 seconds and I couldn't reproduce it

The discrete GPU needs to runtime suspend, that's the trigger,
so no DRI_PRIME executables should be running. Just let it
autosuspend after boot. Do you see "waiting 12 sec" messages
in dmesg? If not it's not autosuspending.

Thanks,

Lukas

2018-02-12 09:04:59

by Mike Lothian

Subject: Re: [PATCH 0/5] Fix deadlock on runtime suspend in DRM drivers

On 12 February 2018 at 03:39, Lukas Wunner <[email protected]> wrote:
> On Mon, Feb 12, 2018 at 12:35:51AM +0000, Mike Lothian wrote:
>> I've not been able to reproduce the original problem you're trying to
>> solve on amdgpu, that's with or without your patch set and the above
>> "trigger" too
>>
>> Is anything else required to trigger it, I started multiple DRI_PRIME
>> glxgears, in parallel, serial waiting the 12 seconds and serial within
>> the 12 seconds and I couldn't reproduce it
>
> The discrete GPU needs to runtime suspend, that's the trigger,
> so no DRI_PRIME executables should be running. Just let it
> autosuspend after boot. Do you see "waiting 12 sec" messages
> in dmesg? If not it's not autosuspending.
>
> Thanks,
>
> Lukas

Hi

Yes, I'm seeing those messages, I'm just not seeing the hangs.

I've attached the dmesg in case you're interested.

Regards

Mike


Attachments:
dmesg.nohangs (87.61 kB)

2018-02-12 10:15:08

by Lukas Wunner

Subject: Re: [PATCH 0/5] Fix deadlock on runtime suspend in DRM drivers

On Mon, Feb 12, 2018 at 09:03:26AM +0000, Mike Lothian wrote:
> On 12 February 2018 at 03:39, Lukas Wunner <[email protected]> wrote:
> > On Mon, Feb 12, 2018 at 12:35:51AM +0000, Mike Lothian wrote:
> > > I've not been able to reproduce the original problem you're trying to
> > > solve on amdgpu, that's with or without your patch set and the above
> > > "trigger" too
> > >
> > > Is anything else required to trigger it, I started multiple DRI_PRIME
> > > glxgears, in parallel, serial waiting the 12 seconds and serial within
> > > the 12 seconds and I couldn't reproduce it
> >
> > The discrete GPU needs to runtime suspend, that's the trigger,
> > so no DRI_PRIME executables should be running. Just let it
> > autosuspend after boot. Do you see "waiting 12 sec" messages
> > in dmesg? If not it's not autosuspending.
>
> Yes I'm seeing those messages, I'm just not seeing the hangs
>
> I've attached the dmesg in case you're interested

Okay, the reason you're not seeing deadlocks is that the output poll
worker is not enabled. And the output poll worker is not enabled
because your discrete GPU doesn't have any outputs:

[ 0.265568] [drm:dc_create] *ERROR* DC: Number of connectors is zero!

The outputs are only polled if there are connectors which have the
DRM_CONNECTOR_POLL_CONNECT or DRM_CONNECTOR_POLL_DISCONNECT flag set.
And that only ever seems to be the case for VGA and DVI.
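
(Simplified sketch of the corresponding check in drm_probe_helper.c's
poll-enable logic:

    if (connector->polled & (DRM_CONNECTOR_POLL_CONNECT |
                             DRM_CONNECTOR_POLL_DISCONNECT))
        poll = true;   /* at least one connector wants polling */

If no connector sets either flag, the poll work is never scheduled.)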

We know based on bugzilla reports that hybrid graphics laptops do exist
which poll outputs with radeon and nouveau. If there are no laptops
supported by amdgpu whose discrete GPU has polled connectors, then
patch [5/5] would be unnecessary. That is for Alex to decide.

However that is very good to know, so thanks a lot for your testing
efforts, much appreciated!

Kind regards,

Lukas

2018-02-12 16:51:40

by Imre Deak

Subject: Re: [PATCH 0/5] Fix deadlock on runtime suspend in DRM drivers

On Sun, Feb 11, 2018 at 10:38:28AM +0100, Lukas Wunner wrote:
> [...]
> i915, malidp and msm "solved" this issue by not stopping the poll worker
> on runtime suspend. But this results in the GPU bouncing back and forth
> between D0 and D3 continuously. That's a total no-go for GPUs which
> runtime suspend to D3cold since every suspend/resume cycle costs a
> significant amount of time and energy. (i915 and malidp do not seem
> to acquire a runtime PM ref in the ->detect callbacks, which seems
> questionable. msm however does and would also deadlock if it disabled
> the poll worker on runtime suspend. cc += Archit, Liviu, intel-gfx)

In i915 polling is on during runtime suspend only if there are outputs
without hotplug interrupt support. A special case is when an output has
working HPD interrupts when in D0, but no interrupts when runtime
suspended. For these we start polling (from a scheduled work) in the
runtime suspend hook and stop it in the runtime resume hook (again from
a scheduled work).
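
A rough sketch of that scheme (hypothetical names, not the actual i915
code):

    static int runtime_suspend(struct device *dev)
    {
        /* ... */
        /* HPD interrupts won't fire while suspended: poll instead */
        schedule_work(&start_polling_work);  /* -> drm_kms_helper_poll_enable() */
        return 0;
    }

    static int runtime_resume(struct device *dev)
    {
        /* ... */
        /* HPD interrupts work again: stop polling */
        schedule_work(&stop_polling_work);   /* -> drm_kms_helper_poll_disable() */
        return 0;
    }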

Imo whether to leave polling on or not for the above purpose is a policy
question (configurable with the drm_kms_helper.poll param).

--Imre

>
> Please review. Thanks,
>
> Lukas
>
> Lukas Wunner (5):
> workqueue: Allow retrieval of current task's work struct
> drm: Allow determining if current task is output poll worker
> drm/nouveau: Fix deadlock on runtime suspend
> drm/radeon: Fix deadlock on runtime suspend
> drm/amdgpu: Fix deadlock on runtime suspend
>
> drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c | 58 +++++++++++++-------
> drivers/gpu/drm/drm_probe_helper.c | 14 +++++
> drivers/gpu/drm/nouveau/nouveau_connector.c | 18 +++++--
> drivers/gpu/drm/radeon/radeon_connectors.c | 74 +++++++++++++++++---------
> include/drm/drm_crtc_helper.h | 1 +
> include/linux/workqueue.h | 1 +
> kernel/workqueue.c | 16 ++++++
> 7 files changed, 132 insertions(+), 50 deletions(-)
>
> --
> 2.15.1
>

2018-02-12 17:09:16

by Tejun Heo

Subject: Re: [PATCH 1/5] workqueue: Allow retrieval of current task's work struct

Hello,

On Sun, Feb 11, 2018 at 10:38:28AM +0100, Lukas Wunner wrote:
> Introduce a helper to retrieve the current task's work struct if it is
> a workqueue worker.
>
> This allows us to fix a long-standing deadlock in several DRM drivers
> wherein the ->runtime_suspend callback waits for a specific worker to
> finish and that worker in turn calls a function which waits for runtime
> suspend to finish. That function is invoked from multiple call sites
> and waiting for runtime suspend to finish is the correct thing to do
> except if it's executing in the context of the worker.
>
> Cc: Tejun Heo <[email protected]>
> Cc: Lai Jiangshan <[email protected]>
> Cc: Dave Airlie <[email protected]>
> Cc: Ben Skeggs <[email protected]>
> Cc: Alex Deucher <[email protected]>
> Signed-off-by: Lukas Wunner <[email protected]>

I wonder whether it's too generic a name, but there are other functions
named in a similar fashion and AFAICS current_work isn't used by
anyone in the tree, so it seems okay.

Acked-by: Tejun Heo <[email protected]>

Please feel free to route as you see fit.

Thanks.

--
tejun

2018-02-12 17:44:26

by Lyude Paul

Subject: Re: [PATCH 0/5] Fix deadlock on runtime suspend in DRM drivers

This patchset is a no-brainer for me. I'm ashamed to say I think I might have
seen this issue before, but polling gets used so rarely nowadays it slipped
under the rug by accident.

Anyway, for the whole series (except patch #2, which I will respond to with a
short comment in just a moment):

Reviewed-by: Lyude Paul <[email protected]>

On Sun, 2018-02-11 at 10:38 +0100, Lukas Wunner wrote:
> Fix a deadlock on hybrid graphics laptops that's been present since 2013:
>
> DRM drivers poll connectors in 10 sec intervals. The poll worker is
> stopped on ->runtime_suspend with cancel_delayed_work_sync(). However
> the poll worker invokes the DRM drivers' ->detect callbacks, which call
> pm_runtime_get_sync(). If the poll worker starts after runtime suspend
> has begun, pm_runtime_get_sync() will wait for runtime suspend to finish
> with the intention of runtime resuming the device afterwards. The result
> is a circular wait between poll worker and autosuspend worker.
>
> I'm seeing this deadlock so often it's not funny anymore. I've also found
> 3 nouveau bugzillas about it and 1 radeon bugzilla. (See patch [3/5] and
> [4/5].)
>
> One theoretical solution would be to add a flag to the ->detect callback
> to tell it that it's running in the poll worker's context. In that case
> it doesn't need to call pm_runtime_get_sync() because the poll worker is
> only enabled while runtime active and we know that ->runtime_suspend
> waits for it to finish. But this approach would require touching every
> single DRM driver's ->detect hook. Moreover the ->detect hook is called
> from numerous other places, both in the DRM library as well as in each
> driver, making it necessary to trace every possible call chain and check
> if it's coming from the poll worker or not. I've tried to do that for
> nouveau (see the call sites listed in the commit message of patch [3/5])
> and concluded that this approach is a nightmare to implement.
>
> Another reason for the infeasibility of this approach is that ->detect
> is called from within the DRM library without driver involvement, e.g.
> via DRM's sysfs interface. In those cases, pm_runtime_get_sync() would
> have to be called by the DRM library, but the DRM library is supposed to
> stay generic, whereas runtime PM is driver-specific.
>
> So instead, I've come up with this much simpler solution which gleans
> from the current task's flags if it's a workqueue worker, retrieves the
> work_struct and checks if it's the poll worker. All that's needed is
> one small helper in the workqueue code and another small helper in the
> DRM library. This solution lends itself nicely to stable-backporting.
>
> The patches for radeon and amdgpu are compile-tested only, I only have a
> MacBook Pro with an Nvidia GK107 to test. To test the patches, add an
> "msleep(12*1000);" at the top of the driver's ->runtime_suspend hook.
> This ensures that the poll worker runs after ->runtime_suspend has begun.
> Wait 12 sec after the GPU has begun runtime suspend, then check
> /sys/bus/pci/devices/0000:01:00.0/power/runtime_status. Without this
> series, the status will be stuck at "suspending" and you'll get hung task
> errors in dmesg after a few minutes.
>
> i915, malidp and msm "solved" this issue by not stopping the poll worker
> on runtime suspend. But this results in the GPU bouncing back and forth
> between D0 and D3 continuously. That's a total no-go for GPUs which
> runtime suspend to D3cold since every suspend/resume cycle costs a
> significant amount of time and energy. (i915 and malidp do not seem
> to acquire a runtime PM ref in the ->detect callbacks, which seems
> questionable. msm however does and would also deadlock if it disabled
> the poll worker on runtime suspend. cc += Archit, Liviu, intel-gfx)
>
> Please review. Thanks,
>
> Lukas
>
> Lukas Wunner (5):
> workqueue: Allow retrieval of current task's work struct
> drm: Allow determining if current task is output poll worker
> drm/nouveau: Fix deadlock on runtime suspend
> drm/radeon: Fix deadlock on runtime suspend
> drm/amdgpu: Fix deadlock on runtime suspend
>
> drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c | 58 +++++++++++++-------
> drivers/gpu/drm/drm_probe_helper.c | 14 +++++
> drivers/gpu/drm/nouveau/nouveau_connector.c | 18 +++++--
> drivers/gpu/drm/radeon/radeon_connectors.c | 74 +++++++++++++++++---------
> include/drm/drm_crtc_helper.h | 1 +
> include/linux/workqueue.h | 1 +
> kernel/workqueue.c | 16 ++++++
> 7 files changed, 132 insertions(+), 50 deletions(-)
>
--
Cheers,
Lyude Paul

2018-02-12 17:47:24

by Lyude Paul

Subject: Re: [PATCH 2/5] drm: Allow determining if current task is output poll worker

On Sun, 2018-02-11 at 10:38 +0100, Lukas Wunner wrote:
> Introduce a helper to determine if the current task is an output poll
> worker.
>
> This allows us to fix a long-standing deadlock in several DRM drivers
> wherein the ->runtime_suspend callback waits for the output poll worker
> to finish and the worker in turn calls a ->detect callback which waits
> for runtime suspend to finish. The ->detect callback is invoked from
> multiple call sites and waiting for runtime suspend to finish is the
> correct thing to do except if it's executing in the context of the
> worker.
>
> Cc: Dave Airlie <[email protected]>
> Cc: Ben Skeggs <[email protected]>
> Cc: Alex Deucher <[email protected]>
> Signed-off-by: Lukas Wunner <[email protected]>
> ---
> drivers/gpu/drm/drm_probe_helper.c | 14 ++++++++++++++
> include/drm/drm_crtc_helper.h | 1 +
> 2 files changed, 15 insertions(+)
>
> diff --git a/drivers/gpu/drm/drm_probe_helper.c b/drivers/gpu/drm/drm_probe_helper.c
> index 555fbe54d6e2..019881d15ce1 100644
> --- a/drivers/gpu/drm/drm_probe_helper.c
> +++ b/drivers/gpu/drm/drm_probe_helper.c
> @@ -653,6 +653,20 @@ static void output_poll_execute(struct work_struct *work)
>  		schedule_delayed_work(delayed_work, DRM_OUTPUT_POLL_PERIOD);
>  }
>
> +/**
> + * drm_kms_helper_is_poll_worker - is %current task an output poll worker?
> + *
> + * Determine if %current task is an output poll worker. This can be used
> + * to select distinct code paths for output polling versus other contexts.
> + */
For this, it would be worth explicitly noting in the comments here that this
should be called by DRM drivers in order to prevent racing with hotplug
polling workers, so that new drivers in the future can avoid implementing this
race condition in their driver.

> +bool drm_kms_helper_is_poll_worker(void)
> +{
> +	struct work_struct *work = current_work();
> +
> +	return work && work->func == output_poll_execute;
> +}
> +EXPORT_SYMBOL(drm_kms_helper_is_poll_worker);
> +
>  /**
>   * drm_kms_helper_poll_disable - disable output polling
>   * @dev: drm_device
> diff --git a/include/drm/drm_crtc_helper.h b/include/drm/drm_crtc_helper.h
> index 76e237bd989b..6914633037a5 100644
> --- a/include/drm/drm_crtc_helper.h
> +++ b/include/drm/drm_crtc_helper.h
> @@ -77,5 +77,6 @@ void drm_kms_helper_hotplug_event(struct drm_device *dev);
>  
>  void drm_kms_helper_poll_disable(struct drm_device *dev);
>  void drm_kms_helper_poll_enable(struct drm_device *dev);
> +bool drm_kms_helper_is_poll_worker(void);
>  
>  #endif
--
Cheers,
Lyude Paul

2018-02-12 17:51:44

by Chris Wilson

Subject: Re: [Intel-gfx] [PATCH 2/5] drm: Allow determining if current task is output poll worker

Quoting Lyude Paul (2018-02-12 17:46:11)
> On Sun, 2018-02-11 at 10:38 +0100, Lukas Wunner wrote:
> > Introduce a helper to determine if the current task is an output poll
> > worker.
> >
> > This allows us to fix a long-standing deadlock in several DRM drivers
> > wherein the ->runtime_suspend callback waits for the output poll worker
> > to finish and the worker in turn calls a ->detect callback which waits
> > for runtime suspend to finish. The ->detect callback is invoked from
> > multiple call sites and waiting for runtime suspend to finish is the
> > correct thing to do except if it's executing in the context of the
> > worker.
> >
> > Cc: Dave Airlie <[email protected]>
> > Cc: Ben Skeggs <[email protected]>
> > Cc: Alex Deucher <[email protected]>
> > Signed-off-by: Lukas Wunner <[email protected]>
> > ---
> > drivers/gpu/drm/drm_probe_helper.c | 14 ++++++++++++++
> > include/drm/drm_crtc_helper.h | 1 +
> > 2 files changed, 15 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/drm_probe_helper.c b/drivers/gpu/drm/drm_probe_helper.c
> > index 555fbe54d6e2..019881d15ce1 100644
> > --- a/drivers/gpu/drm/drm_probe_helper.c
> > +++ b/drivers/gpu/drm/drm_probe_helper.c
> > @@ -653,6 +653,20 @@ static void output_poll_execute(struct work_struct *work)
> >  		schedule_delayed_work(delayed_work, DRM_OUTPUT_POLL_PERIOD);
> >  }
> >
> > +/**
> > + * drm_kms_helper_is_poll_worker - is %current task an output poll worker?
> > + *
> > + * Determine if %current task is an output poll worker. This can be used
> > + * to select distinct code paths for output polling versus other contexts.
> > + */
> For this, it would be worth explicitly noting in the comments here that this
> should be called by DRM drivers in order to prevent racing with hotplug
> polling workers, so that new drivers in the future can avoid implementing this
> race condition in their driver.
>
> > +bool drm_kms_helper_is_poll_worker(void)
> > +{
> > +	struct work_struct *work = current_work();
> > +
> > +	return work && work->func == output_poll_execute;

What ensures that work is accessible? Does this need rcu_read_lock
protection or more?
-Chris

2018-02-12 18:41:13

by Lukas Wunner

Subject: Re: [Intel-gfx] [PATCH 2/5] drm: Allow determining if current task is output poll worker

On Mon, Feb 12, 2018 at 05:50:12PM +0000, Chris Wilson wrote:
> Quoting Lyude Paul (2018-02-12 17:46:11)
> > On Sun, 2018-02-11 at 10:38 +0100, Lukas Wunner wrote:
> > > Introduce a helper to determine if the current task is an output poll
> > > worker.
> > >
> > > This allows us to fix a long-standing deadlock in several DRM drivers
> > > wherein the ->runtime_suspend callback waits for the output poll worker
> > > to finish and the worker in turn calls a ->detect callback which waits
> > > for runtime suspend to finish. The ->detect callback is invoked from
> > > multiple call sites and waiting for runtime suspend to finish is the
> > > correct thing to do except if it's executing in the context of the
> > > worker.
> > >
> > > Cc: Dave Airlie <[email protected]>
> > > Cc: Ben Skeggs <[email protected]>
> > > Cc: Alex Deucher <[email protected]>
> > > Signed-off-by: Lukas Wunner <[email protected]>
> > > ---
> > > drivers/gpu/drm/drm_probe_helper.c | 14 ++++++++++++++
> > > include/drm/drm_crtc_helper.h | 1 +
> > > 2 files changed, 15 insertions(+)
> > >
> > > diff --git a/drivers/gpu/drm/drm_probe_helper.c b/drivers/gpu/drm/drm_probe_helper.c
> > > index 555fbe54d6e2..019881d15ce1 100644
> > > --- a/drivers/gpu/drm/drm_probe_helper.c
> > > +++ b/drivers/gpu/drm/drm_probe_helper.c
> > > @@ -653,6 +653,20 @@ static void output_poll_execute(struct work_struct *work)
> > >  		schedule_delayed_work(delayed_work, DRM_OUTPUT_POLL_PERIOD);
> > >  }
> > >
> > > +/**
> > > + * drm_kms_helper_is_poll_worker - is %current task an output poll worker?
> > > + *
> > > + * Determine if %current task is an output poll worker. This can be used
> > > + * to select distinct code paths for output polling versus other contexts.
> > > + */
> > > +bool drm_kms_helper_is_poll_worker(void)
> > > +{
> > > +	struct work_struct *work = current_work();
> > > +
> > > +	return work && work->func == output_poll_execute;
>
> What ensures that work is accessible? Does this need rcu_read_lock
> protection or more?

The work_struct exists as long as this kthread is working on it.
Since this is called by the kthread itself, it is guaranteed to exist.
So there's no protection needed.
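
For reference, the helper added by patch [1/5] amounts to no more than
this (a sketch; current_wq_worker() is the workqueue-internal accessor
for the worker executing on the current kthread):

struct work_struct *current_work(void)
{
        struct worker *worker = current_wq_worker();

        return worker ? worker->current_work : NULL;
}
EXPORT_SYMBOL(current_work);

It only ever dereferences the calling worker's own state, which is why
no additional locking is required.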

Thanks,

Lukas

2018-02-12 18:59:30

by Alex Deucher

[permalink] [raw]
Subject: Re: [PATCH 0/5] Fix deadlock on runtime suspend in DRM drivers

On Mon, Feb 12, 2018 at 4:45 AM, Lukas Wunner <[email protected]> wrote:
> On Mon, Feb 12, 2018 at 09:03:26AM +0000, Mike Lothian wrote:
>> On 12 February 2018 at 03:39, Lukas Wunner <[email protected]> wrote:
>> > On Mon, Feb 12, 2018 at 12:35:51AM +0000, Mike Lothian wrote:
>> > > I've not been able to reproduce the original problem you're trying to
>> > > solve on amdgpu, that's with or without your patch set and the above
>> > > "trigger" too.
>> > >
>> > > Is anything else required to trigger it? I started multiple DRI_PRIME
>> > > glxgears (in parallel, serially after waiting the 12 seconds, and
>> > > serially within the 12 seconds) and I couldn't reproduce it.
>> >
>> > The discrete GPU needs to runtime suspend, that's the trigger,
>> > so no DRI_PRIME executables should be running. Just let it
>> > autosuspend after boot. Do you see "waiting 12 sec" messages
>> > in dmesg? If not, it's not autosuspending.
>>
>> Yes I'm seeing those messages, I'm just not seeing the hangs
>>
>> I've attached the dmesg in case you're interested
>
> Okay, the reason you're not seeing deadlocks is that the output poll
> worker is not enabled. And the output poll worker is not enabled
> because your discrete GPU doesn't have any outputs:
>
> [ 0.265568] [drm:dc_create] *ERROR* DC: Number of connectors is zero!
>
> The outputs are only polled if there are connectors which have the
> DRM_CONNECTOR_POLL_CONNECT or DRM_CONNECTOR_POLL_DISCONNECT flag set.
> And that only ever seems to be the case for VGA and DVI.
>
> We know based on bugzilla reports that hybrid graphics laptops do exist
> which poll outputs with radeon and nouveau. If there are no laptops
> supported by amdgpu whose discrete GPU has polled connectors, then
> patch [5/5] would be unnecessary. That is for Alex to decide.

Most hybrid laptops don't have display connectors on the dGPU and we
only use polling on analog connectors, so you are not likely to run
into this on recent laptops. That said, I don't know if there is some
OEM system out there with a VGA port on the dGPU in a hybrid laptop.
I guess another option is to just disable polling on hybrid laptops.

Alex

>
> However that is very good to know, so thanks a lot for your testing
> efforts, much appreciated!
>
> Kind regards,
>
> Lukas
> _______________________________________________
> dri-devel mailing list
> [email protected]
> https://lists.freedesktop.org/mailman/listinfo/dri-devel

2018-02-13 08:19:15

by Lukas Wunner

[permalink] [raw]
Subject: Re: [PATCH 0/5] Fix deadlock on runtime suspend in DRM drivers

On Mon, Feb 12, 2018 at 01:58:32PM -0500, Alex Deucher wrote:
> On Mon, Feb 12, 2018 at 4:45 AM, Lukas Wunner <[email protected]> wrote:
> > On Mon, Feb 12, 2018 at 09:03:26AM +0000, Mike Lothian wrote:
> >> On 12 February 2018 at 03:39, Lukas Wunner <[email protected]> wrote:
> >> > On Mon, Feb 12, 2018 at 12:35:51AM +0000, Mike Lothian wrote:
> >> > > I've not been able to reproduce the original problem you're trying to
> >> > > solve on amdgpu, that's with or without your patch set and the above
> >> > > "trigger" too.
> >
> > Okay, the reason you're not seeing deadlocks is that the output poll
> > worker is not enabled. And the output poll worker is not enabled
> > because your discrete GPU doesn't have any outputs:
> >
> > [ 0.265568] [drm:dc_create] *ERROR* DC: Number of connectors is zero!
> >
> > The outputs are only polled if there are connectors which have the
> > DRM_CONNECTOR_POLL_CONNECT or DRM_CONNECTOR_POLL_DISCONNECT flag set.
> > And that only ever seems to be the case for VGA and DVI.
> >
> > We know based on bugzilla reports that hybrid graphics laptops do exist
> > which poll outputs with radeon and nouveau. If there are no laptops
> > supported by amdgpu whose discrete GPU has polled connectors, then
> > patch [5/5] would be unnecessary. That is for Alex to decide.
>
> Most hybrid laptops don't have display connectors on the dGPU and we
> only use polling on analog connectors, so you are not likely to run
> into this on recent laptops. That said, I don't know if there is some
> OEM system out there with a VGA port on the dGPU in a hybrid laptop.
> I guess another option is to just disable polling on hybrid laptops.

If we don't know for sure, applying patch [5/5] would seem to be the
safest approach. (Assuming it doesn't break anything else.)
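
For context, patches [3/5] to [5/5] wrap the runtime PM calls in each
->detect hook along these lines (a sketch; foo_probe_connector() is a
hypothetical stand-in for the driver's hardware-specific probing):

static enum drm_connector_status
foo_connector_detect(struct drm_connector *connector, bool force)
{
        struct drm_device *dev = connector->dev;
        enum drm_connector_status status;

        /* The poll worker runs only while the device is runtime active,
         * so it must not wait for runtime suspend to finish here.
         */
        if (!drm_kms_helper_is_poll_worker()) {
                int ret = pm_runtime_get_sync(dev->dev);

                if (ret < 0 && ret != -EACCES)
                        return connector_status_disconnected;
        }

        status = foo_probe_connector(connector);

        if (!drm_kms_helper_is_poll_worker()) {
                pm_runtime_mark_last_busy(dev->dev);
                pm_runtime_put_autosuspend(dev->dev);
        }

        return status;
}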

Right now runtime PM is only used on hybrid graphics dGPUs by nouveau,
radeon and amdgpu. Would it be conceivable that its use is expanded
beyond that in the future? E.g. on a desktop machine, if DPMS is off
on all screens, why keep the GPU in D0? If that is conceivable, chances
that analog connectors are present are higher, and then the patch would
be necessary again. (Of course this would mean that analog screens
wouldn't light up automatically if they're attached while the GPU is
in D3hot, but the user may forbid runtime PM via sysfs if that is
unwanted.)

Thanks,

Lukas

2018-02-13 10:56:48

by Liviu Dudau

[permalink] [raw]
Subject: Re: [PATCH 0/5] Fix deadlock on runtime suspend in DRM drivers

Hi Lukas,

On Sun, Feb 11, 2018 at 10:38:28AM +0100, Lukas Wunner wrote:
> Fix a deadlock on hybrid graphics laptops that's been present since 2013:
>
> DRM drivers poll connectors in 10 sec intervals. The poll worker is
> stopped on ->runtime_suspend with cancel_delayed_work_sync(). However
> the poll worker invokes the DRM drivers' ->detect callbacks, which call
> pm_runtime_get_sync(). If the poll worker starts after runtime suspend
> has begun, pm_runtime_get_sync() will wait for runtime suspend to finish
> with the intention of runtime resuming the device afterwards. The result
> is a circular wait between poll worker and autosuspend worker.
>
> I'm seeing this deadlock so often it's not funny anymore. I've also found
> 3 nouveau bugzillas about it and 1 radeon bugzilla. (See patch [3/5] and
> [4/5].)
>
> One theoretical solution would be to add a flag to the ->detect callback
> to tell it that it's running in the poll worker's context. In that case
> it doesn't need to call pm_runtime_get_sync() because the poll worker is
> only enabled while runtime active and we know that ->runtime_suspend
> waits for it to finish. But this approach would require touching every
> single DRM driver's ->detect hook. Moreover the ->detect hook is called
> from numerous other places, both in the DRM library as well as in each
> driver, making it necessary to trace every possible call chain and check
> if it's coming from the poll worker or not. I've tried to do that for
> nouveau (see the call sites listed in the commit message of patch [3/5])
> and concluded that this approach is a nightmare to implement.
>
> Another reason for the infeasibility of this approach is that ->detect
> is called from within the DRM library without driver involvement, e.g.
> via DRM's sysfs interface. In those cases, pm_runtime_get_sync() would
> have to be called by the DRM library, but the DRM library is supposed to
> stay generic, whereas runtime PM is driver-specific.
>
> So instead, I've come up with this much simpler solution which gleans
> from the current task's flags if it's a workqueue worker, retrieves the
> work_struct and checks if it's the poll worker. All that's needed is
> one small helper in the workqueue code and another small helper in the
> DRM library. This solution lends itself nicely to stable-backporting.
>
> The patches for radeon and amdgpu are compile-tested only, I only have a
> MacBook Pro with an Nvidia GK107 to test. To test the patches, add an
> "msleep(12*1000);" at the top of the driver's ->runtime_suspend hook.
> This ensures that the poll worker runs after ->runtime_suspend has begun.
> Wait 12 sec after the GPU has begun runtime suspend, then check
> /sys/bus/pci/devices/0000:01:00.0/power/runtime_status. Without this
> series, the status will be stuck at "suspending" and you'll get hung task
> errors in dmesg after a few minutes.
>
> i915, malidp and msm "solved" this issue by not stopping the poll worker
> on runtime suspend. But this results in the GPU bouncing back and forth
> between D0 and D3 continuously. That's a total no-go for GPUs which
> runtime suspend to D3cold since every suspend/resume cycle costs a
> significant amount of time and energy. (i915 and malidp do not seem
> to acquire a runtime PM ref in the ->detect callbacks, which seems
> questionable. msm however does and would also deadlock if it disabled
> the poll worker on runtime suspend. cc += Archit, Liviu, intel-gfx)

I think I understand the problem you are trying to solve, but I'm
struggling to understand where malidp makes any specific mistakes. First
of all, malidp is only a display engine, so there is no GPU attached to
it, but that is only a small clarification. Second, malidp doesn't
directly use any of the callbacks that you are referring to; it uses the
drm_cma_xxxx() API plus the generic drm_xxxx() calls. So if there are any
issues there (as they might well be) I think they would apply to a lot
more drivers and the fix will involve more than just malidp, i915 and
msm.

Would love to get any clarifications on what I might have misunderstood.

Best regards,
Liviu


>
> Please review. Thanks,
>
> Lukas
>
> Lukas Wunner (5):
> workqueue: Allow retrieval of current task's work struct
> drm: Allow determining if current task is output poll worker
> drm/nouveau: Fix deadlock on runtime suspend
> drm/radeon: Fix deadlock on runtime suspend
> drm/amdgpu: Fix deadlock on runtime suspend
>
> drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c | 58 +++++++++++++-------
> drivers/gpu/drm/drm_probe_helper.c | 14 +++++
> drivers/gpu/drm/nouveau/nouveau_connector.c | 18 +++++--
> drivers/gpu/drm/radeon/radeon_connectors.c | 74 +++++++++++++++++---------
> include/drm/drm_crtc_helper.h | 1 +
> include/linux/workqueue.h | 1 +
> kernel/workqueue.c | 16 ++++++
> 7 files changed, 132 insertions(+), 50 deletions(-)
>
> --
> 2.15.1
>

--
====================
| I would like to |
| fix the world, |
| but they're not |
| giving me the |
\ source code! /
---------------
¯\_(ツ)_/¯

2018-02-13 11:53:21

by Lukas Wunner

[permalink] [raw]
Subject: Re: [PATCH 0/5] Fix deadlock on runtime suspend in DRM drivers

On Tue, Feb 13, 2018 at 10:55:06AM +0000, Liviu Dudau wrote:
> On Sun, Feb 11, 2018 at 10:38:28AM +0100, Lukas Wunner wrote:
> > DRM drivers poll connectors in 10 sec intervals. The poll worker is
> > stopped on ->runtime_suspend with cancel_delayed_work_sync(). However
> > the poll worker invokes the DRM drivers' ->detect callbacks, which call
> > pm_runtime_get_sync(). If the poll worker starts after runtime suspend
> > has begun, pm_runtime_get_sync() will wait for runtime suspend to finish
> > with the intention of runtime resuming the device afterwards. The result
> > is a circular wait between poll worker and autosuspend worker.
>
> I think I understand the problem you are trying to solve, but I'm
> struggling to understand where malidp makes any specific mistakes. First
> of all, malidp is only a display engine, so there is no GPU attached to
> it, but that is only a small clarification. Second, malidp doesn't
> directly use any of the callbacks that you are referring to; it uses the
> drm_cma_xxxx() API plus the generic drm_xxxx() calls. So if there are any
> issues there (as they might well be) I think they would apply to a lot
> more drivers and the fix will involve more than just malidp, i915 and
> msm.

I don't know if malidp makes any specific mistakes and didn't mean to
cast it in a negative light, sorry.

So a lot of DRM drivers acquire a runtime PM ref in the connector ->detect
hook because they can't probe connectors while runtime suspended.
E.g. for a PCI device, probing might require mmio access, which isn't
possible outside of power state D0. There are no ->detect hooks declared
in drivers/gpu/drm/arm/, so it's unclear to me whether you're able to probe
during runtime suspend.

hdlcd_drv.c and malidp_drv.c both enable output polling. Output polling
is only necessary if you don't get HPD interrupts.

You're not disabling polling upon runtime suspend. Thus, if a runtime PM
ref is acquired during polling (such as in a ->detect hook), the GPU will
be runtime resumed every 10 secs. You may want to verify that's not the
case. If you decide that you do want to stop polling during runtime
suspend because it runtime resumes the GPU continuously, you'll need the
helper introduced in this series. So by cc'ing you I just wanted to keep
you in the loop about an issue that may potentially affect your driver.
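
Stopping polling on runtime suspend would look roughly like this (a
sketch with hypothetical foo_* hooks; drm_kms_helper_poll_disable()
waits for a worker in flight, which is exactly why the ->detect hooks
then need drm_kms_helper_is_poll_worker() to avoid the deadlock):

static int foo_runtime_suspend(struct device *dev)
{
        struct drm_device *drm = dev_get_drvdata(dev);

        /* cancels the poll worker and waits for it to finish */
        drm_kms_helper_poll_disable(drm);
        /* ... power down the hardware ... */
        return 0;
}

static int foo_runtime_resume(struct device *dev)
{
        struct drm_device *drm = dev_get_drvdata(dev);

        /* ... power up the hardware ... */
        drm_kms_helper_poll_enable(drm);
        return 0;
}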

Let me know if there are any questions.

Thanks,

Lukas

2018-02-13 15:21:18

by Alex Deucher

[permalink] [raw]
Subject: Re: [PATCH 0/5] Fix deadlock on runtime suspend in DRM drivers

On Tue, Feb 13, 2018 at 3:17 AM, Lukas Wunner <[email protected]> wrote:
> On Mon, Feb 12, 2018 at 01:58:32PM -0500, Alex Deucher wrote:
>> On Mon, Feb 12, 2018 at 4:45 AM, Lukas Wunner <[email protected]> wrote:
>> > On Mon, Feb 12, 2018 at 09:03:26AM +0000, Mike Lothian wrote:
>> >> On 12 February 2018 at 03:39, Lukas Wunner <[email protected]> wrote:
>> >> > On Mon, Feb 12, 2018 at 12:35:51AM +0000, Mike Lothian wrote:
>> >> > > I've not been able to reproduce the original problem you're trying to
>> >> > > solve on amdgpu, that's with or without your patch set and the above
>> >> > > "trigger" too.
>> >
>> > Okay, the reason you're not seeing deadlocks is that the output poll
>> > worker is not enabled. And the output poll worker is not enabled
>> > because your discrete GPU doesn't have any outputs:
>> >
>> > [ 0.265568] [drm:dc_create] *ERROR* DC: Number of connectors is zero!
>> >
>> > The outputs are only polled if there are connectors which have the
>> > DRM_CONNECTOR_POLL_CONNECT or DRM_CONNECTOR_POLL_DISCONNECT flag set.
>> > And that only ever seems to be the case for VGA and DVI.
>> >
>> > We know based on bugzilla reports that hybrid graphics laptops do exist
>> > which poll outputs with radeon and nouveau. If there are no laptops
>> > supported by amdgpu whose discrete GPU has polled connectors, then
>> > patch [5/5] would be unnecessary. That is for Alex to decide.
>>
>> Most hybrid laptops don't have display connectors on the dGPU and we
>> only use polling on analog connectors, so you are not likely to run
>> into this on recent laptops. That said, I don't know if there is some
>> OEM system out there with a VGA port on the dGPU in a hybrid laptop.
>> I guess another option is to just disable polling on hybrid laptops.
>
> If we don't know for sure, applying patch [5/5] would seem to be the
> safest approach. (Assuming it doesn't break anything else.)


I don't have any objections. I see no reason to leave out the amdgpu changes.

Alex

2018-02-13 15:47:42

by Liviu Dudau

[permalink] [raw]
Subject: Re: [PATCH 0/5] Fix deadlock on runtime suspend in DRM drivers

On Tue, Feb 13, 2018 at 12:52:06PM +0100, Lukas Wunner wrote:
> On Tue, Feb 13, 2018 at 10:55:06AM +0000, Liviu Dudau wrote:
> > On Sun, Feb 11, 2018 at 10:38:28AM +0100, Lukas Wunner wrote:
> > > DRM drivers poll connectors in 10 sec intervals. The poll worker is
> > > stopped on ->runtime_suspend with cancel_delayed_work_sync(). However
> > > the poll worker invokes the DRM drivers' ->detect callbacks, which call
> > > pm_runtime_get_sync(). If the poll worker starts after runtime suspend
> > > has begun, pm_runtime_get_sync() will wait for runtime suspend to finish
> > > with the intention of runtime resuming the device afterwards. The result
> > > is a circular wait between poll worker and autosuspend worker.
> >
> > I think I understand the problem you are trying to solve, but I'm
> > struggling to understand where malidp makes any specific mistakes. First
> > of all, malidp is only a display engine, so there is no GPU attached to
> > it, but that is only a small clarification. Second, malidp doesn't
> > directly use any of the callbacks that you are referring to; it uses the
> > drm_cma_xxxx() API plus the generic drm_xxxx() calls. So if there are any
> > issues there (as they might well be) I think they would apply to a lot
> > more drivers and the fix will involve more than just malidp, i915 and
> > msm.
>
> I don't know if malidp makes any specific mistakes and didn't mean to
> cast it in a negative light, sorry.

I did not take what you've said as a negative thing, only wanted to
understand how you came to your conclusions.

>
> So a lot of DRM drivers acquire a runtime PM ref in the connector ->detect
> hook because they can't probe connectors while runtime suspended.
> E.g. for a PCI device, probing might require mmio access, which isn't
> possible outside of power state D0. There are no ->detect hooks declared
> in drivers/gpu/drm/arm/, so it's unclear to me whether you're able to probe
> during runtime suspend.

That's because the drivers in drivers/gpu/drm/arm do not have
connectors; they are only the CRTC part of the driver. Both hdlcd and
mali-dp use the component framework to locate an encoder in device tree
that will then provide the connectors.

>
> hdlcd_drv.c and malidp_drv.c both enable output polling. Output polling
> is only necessary if you don't get HPD interrupts.

That's right, hdlcd and mali-dp don't receive HPD interrupts because
they don't have any. And because we don't know ahead of time which
encoder/connector will be paired with the driver, we enable polling as a
safe fallback.
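
I.e. in the bind path this amounts to no more than

        /* safe fallback: poll in case the paired encoder has no HPD */
        drm_kms_helper_poll_init(drm);

and connectors registered later by the paired encoder can then set
their DRM_CONNECTOR_POLL_* flags as appropriate.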

>
> You're not disabling polling upon runtime suspend. Thus, if a runtime PM
> ref is acquired during polling (such as in a ->detect hook), the GPU will
> be runtime resumed every 10 secs. You may want to verify that's not the
> case. If you decide that you do want to stop polling during runtime
> suspend because it runtime resumes the GPU continuously, you'll need the
> helper introduced in this series. So by cc'ing you I just wanted to keep
> you in the loop about an issue that may potentially affect your driver.

Again, we have no GPU linked to us and the polling during runtime
suspend should be handled by the driver for the paired encoder, not by
us. I understand the potential issue but I'm struggling to understand if
it really applies to the drivers/gpu/drm/arm drivers other than in an
abstract way.

Best regards,
Liviu

>
> Let me know if there are any questions.
>
> Thanks,
>
> Lukas

--
====================
| I would like to |
| fix the world, |
| but they're not |
| giving me the |
\ source code! /
---------------
¯\_(ツ)_/¯

2018-02-14 13:58:33

by Lukas Wunner

[permalink] [raw]
Subject: Re: [PATCH 0/5] Fix deadlock on runtime suspend in DRM drivers

On Tue, Feb 13, 2018 at 03:46:08PM +0000, Liviu Dudau wrote:
> On Tue, Feb 13, 2018 at 12:52:06PM +0100, Lukas Wunner wrote:
> > On Tue, Feb 13, 2018 at 10:55:06AM +0000, Liviu Dudau wrote:
> > > On Sun, Feb 11, 2018 at 10:38:28AM +0100, Lukas Wunner wrote:
> > > > DRM drivers poll connectors in 10 sec intervals. The poll worker is
> > > > stopped on ->runtime_suspend with cancel_delayed_work_sync(). However
> > > > the poll worker invokes the DRM drivers' ->detect callbacks, which call
> > > > pm_runtime_get_sync(). If the poll worker starts after runtime suspend
> > > > has begun, pm_runtime_get_sync() will wait for runtime suspend to finish
> > > > with the intention of runtime resuming the device afterwards. The result
> > > > is a circular wait between poll worker and autosuspend worker.
> > >
> > > I think I understand the problem you are trying to solve, but I'm
> > > struggling to understand where malidp makes any specific mistakes. First
> > > of all, malidp is only a display engine, so there is no GPU attached to
> > > it, but that is only a small clarification. Second, malidp doesn't
> > > directly use any of the callbacks that you are referring to; it uses the
> > > drm_cma_xxxx() API plus the generic drm_xxxx() calls. So if there are any
> > > issues there (as they might well be) I think they would apply to a lot
> > > more drivers and the fix will involve more than just malidp, i915 and
> > > msm.
[snip]
> > There are no ->detect hooks declared
> > in drivers/gpu/drm/arm/, so it's unclear to me whether you're able to probe
> > during runtime suspend.
>
> That's because the drivers in drivers/gpu/drm/arm do not have
> connectors; they are only the CRTC part of the driver. Both hdlcd and
> mali-dp use the component framework to locate an encoder in device tree
> that will then provide the connectors.
>
> >
> > hdlcd_drv.c and malidp_drv.c both enable output polling. Output polling
> > is only necessary if you don't get HPD interrupts.
>
> That's right, hdlcd and mali-dp don't receive HPD interrupts because
> they don't have any. And because we don't know ahead of time which
> encoder/connector will be paired with the driver, we enable polling as a
> safe fallback.
>

Looking e.g. at inno_hdmi.c (used by rk3036.dtsi), this calls
drm_helper_hpd_irq_event() on receiving an HPD interrupt, and that
function returns immediately if polling is not enabled. So you *have*
to enable polling to receive HPD events.
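
The guard sits right at the top of that function (abridged from
drm_probe_helper.c):

bool drm_helper_hpd_irq_event(struct drm_device *dev)
{
        ...
        if (!dev->mode_config.poll_enabled)
                return false;
        ...
}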

You seem to keep the crtc runtime active as long as it's bound to an
encoder. If you do not ever intend to runtime suspend the crtc while
an encoder is attached, you don't need to keep polling enabled during
runtime suspend (because there's nothing to poll), but it shouldn't
hurt either.

If you would runtime suspend while an encoder is attached, then you
would only runtime resume every 10 sec (upon polling) if the encoder
was a child of the crtc and would support runtime suspend as well.
That's because the PM core wakes the parent by default when a child
runtime resumes. However, in the DTs I've looked at, the encoder
is never a child of the crtc and at least inno_hdmi.c doesn't use
runtime suspend.

So I think you're all green, I can't spot any grave issues here.
Just be aware of the above-mentioned constraints.

Thanks,

Lukas

2018-02-17 10:39:17

by Lukas Wunner

[permalink] [raw]
Subject: Re: [PATCH 0/5] Fix deadlock on runtime suspend in DRM drivers

On Sun, Feb 11, 2018 at 10:38:28AM +0100, Lukas Wunner wrote:
> workqueue: Allow retrieval of current task's work struct
> drm: Allow determining if current task is output poll worker
> drm/nouveau: Fix deadlock on runtime suspend
> drm/radeon: Fix deadlock on runtime suspend
> drm/amdgpu: Fix deadlock on runtime suspend

Pushed to drm-misc-fixes, thanks a lot everyone for the acks,
reviews, testing and comments.

drm-misc maintainers, heads-up:

drm-misc-fixes is still based on 4.15-rc8. The present series
applies cleanly to both 4.15 and 4.16, so I had no need to have
4.16-rc1 backmerged, but that may be necessary sooner or later.
I did a local test pull into drm-fixes; the shortlog looked sane
and it merged and compiled cleanly.

Please note two other commits are still pending in drm-misc-fixes:

commit 745fd50f3b044db6a3922e1718306555613164b0
Author: Daniel Vetter <[email protected]>
Date: Wed Jan 31 12:04:50 2018 +0100

drm/cirrus: Load lut in crtc_commit

Gustavo sent a pull request for this one on Jan 31 but erroneously
based it on the wrong commit and it ended up not being pulled by Dave.

commit 54f809cfbd6b4a43959039f5d33596ed3297ce16
Author: Leo (Sunpeng) Li <[email protected]>
Date: Wed Jan 17 12:51:08 2018 +0100

drm/atomic: Fix memleak on ERESTARTSYS during non-blocking commits

This one has already been pulled by Dave into drm-next for 4.17
as commit 1c6ceeee6ebb but Maarten subsequently cherry-picked
it onto drm-misc-fixes.

Let me know if I've made any mistakes.

Thanks,

Lukas

2018-02-19 11:36:30

by Daniel Vetter

[permalink] [raw]
Subject: Re: [Nouveau] [PATCH 0/5] Fix deadlock on runtime suspend in DRM drivers

On Sun, Feb 11, 2018 at 10:38:28AM +0100, Lukas Wunner wrote:
> Fix a deadlock on hybrid graphics laptops that's been present since 2013:
>
> DRM drivers poll connectors in 10 sec intervals. The poll worker is
> stopped on ->runtime_suspend with cancel_delayed_work_sync(). However
> the poll worker invokes the DRM drivers' ->detect callbacks, which call
> pm_runtime_get_sync(). If the poll worker starts after runtime suspend
> has begun, pm_runtime_get_sync() will wait for runtime suspend to finish
> with the intention of runtime resuming the device afterwards. The result
> is a circular wait between poll worker and autosuspend worker.

Don't shut down the poll worker when runtime suspending; that doesn't
work. If you need the poll work, then that means waking up the gpu every
few seconds. If you don't need it, then make sure the DRM_CONNECTOR_POLL
flags are set correctly (you can update them at runtime, the poll worker
will pick that up).
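
Updating the flags is a one-liner per connector, e.g. (a sketch):

        struct drm_connector_list_iter conn_iter;
        struct drm_connector *connector;

        drm_connector_list_iter_begin(dev, &conn_iter);
        drm_for_each_connector_iter(connector, &conn_iter)
                connector->polled = 0; /* the poll worker now skips it */
        drm_connector_list_iter_end(&conn_iter);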

That should fix the deadlock, and it's how we do it in i915 (where igt in
CI totally hammers the runtime pm support, and it seems to hold up).

And I guess we need to improve the poll worker docs about this.

> I'm seeing this deadlock so often it's not funny anymore. I've also found
> 3 nouveau bugzillas about it and 1 radeon bugzilla. (See patch [3/5] and
> [4/5].)
>
> One theoretical solution would be to add a flag to the ->detect callback
> to tell it that it's running in the poll worker's context. In that case
> it doesn't need to call pm_runtime_get_sync() because the poll worker is
> only enabled while runtime active and we know that ->runtime_suspend
> waits for it to finish. But this approach would require touching every
> single DRM driver's ->detect hook. Moreover the ->detect hook is called
> from numerous other places, both in the DRM library as well as in each
> driver, making it necessary to trace every possible call chain and check
> if it's coming from the poll worker or not. I've tried to do that for
> nouveau (see the call sites listed in the commit message of patch [3/5])
> and concluded that this approach is a nightmare to implement.
>
> Another reason for the infeasibility of this approach is that ->detect
> is called from within the DRM library without driver involvement, e.g.
> via DRM's sysfs interface. In those cases, pm_runtime_get_sync() would
> have to be called by the DRM library, but the DRM library is supposed to
> > stay generic, whereas runtime PM is driver-specific.
>
> So instead, I've come up with this much simpler solution which gleans
> from the current task's flags if it's a workqueue worker, retrieves the
> work_struct and checks if it's the poll worker. All that's needed is
> one small helper in the workqueue code and another small helper in the
> DRM library. This solution lends itself nicely to stable-backporting.
>
> The patches for radeon and amdgpu are compile-tested only, I only have a
> MacBook Pro with an Nvidia GK107 to test. To test the patches, add an
> "msleep(12*1000);" at the top of the driver's ->runtime_suspend hook.
> This ensures that the poll worker runs after ->runtime_suspend has begun.
> Wait 12 sec after the GPU has begun runtime suspend, then check
> /sys/bus/pci/devices/0000:01:00.0/power/runtime_status. Without this
> series, the status will be stuck at "suspending" and you'll get hung task
> errors in dmesg after a few minutes.
>
> i915, malidp and msm "solved" this issue by not stopping the poll worker
> on runtime suspend. But this results in the GPU bouncing back and forth
> between D0 and D3 continuously. That's a total no-go for GPUs which
> runtime suspend to D3cold since every suspend/resume cycle costs a
> significant amount of time and energy. (i915 and malidp do not seem
> to acquire a runtime PM ref in the ->detect callbacks, which seems
> questionable. msm however does and would also deadlock if it disabled
> the poll worker on runtime suspend. cc += Archit, Liviu, intel-gfx)

Well, userspace expects hotplug events, even when we runtime suspend
stuff. Hence waking shit up with polling. Imo ignoring hotplug events is a
pretty serious policy decision which is ok in the context of
vga_switcheroo, but not really as an automatic thing. E.g. usb also wakes
up if you plug something in, even with all the runtime pm stuff enabled.
Same for sata and everything else.
-Daniel

> Please review. Thanks,
>
> Lukas
>
> Lukas Wunner (5):
> workqueue: Allow retrieval of current task's work struct
> drm: Allow determining if current task is output poll worker
> drm/nouveau: Fix deadlock on runtime suspend
> drm/radeon: Fix deadlock on runtime suspend
> drm/amdgpu: Fix deadlock on runtime suspend
>
> drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c | 58 +++++++++++++-------
> drivers/gpu/drm/drm_probe_helper.c | 14 +++++
> drivers/gpu/drm/nouveau/nouveau_connector.c | 18 +++++--
> drivers/gpu/drm/radeon/radeon_connectors.c | 74 +++++++++++++++++---------
> include/drm/drm_crtc_helper.h | 1 +
> include/linux/workqueue.h | 1 +
> kernel/workqueue.c | 16 ++++++
> 7 files changed, 132 insertions(+), 50 deletions(-)
>
> --
> 2.15.1
>
> _______________________________________________
> Nouveau mailing list
> [email protected]
> https://lists.freedesktop.org/mailman/listinfo/nouveau

--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

2018-02-19 11:59:17

by Lukas Wunner

[permalink] [raw]
Subject: Re: [Nouveau] [PATCH 0/5] Fix deadlock on runtime suspend in DRM drivers

On Mon, Feb 19, 2018 at 12:34:43PM +0100, Daniel Vetter wrote:
> On Sun, Feb 11, 2018 at 10:38:28AM +0100, Lukas Wunner wrote:
> > Fix a deadlock on hybrid graphics laptops that's been present since 2013:
> >
> > DRM drivers poll connectors in 10 sec intervals. The poll worker is
> > stopped on ->runtime_suspend with cancel_delayed_work_sync(). However
> > the poll worker invokes the DRM drivers' ->detect callbacks, which call
> > pm_runtime_get_sync(). If the poll worker starts after runtime suspend
> > has begun, pm_runtime_get_sync() will wait for runtime suspend to finish
> > with the intention of runtime resuming the device afterwards. The result
> > is a circular wait between poll worker and autosuspend worker.
>
> Don't shut down the poll worker when runtime suspending; that doesn't
> work. If you need the poll work, then that means waking up the gpu every
> few seconds. If you don't need it, then make sure the DRM_CONNECTOR_POLL
> flags are set correctly (you can update them at runtime, the poll worker
> will pick that up).
>
> That should fix the deadlock, and it's how we do it in i915 (where igt in
> CI totally hammers the runtime pm support, and it seems to hold up).

It would fix the deadlock but it's not an option on dual GPU laptops.
Power consumption of the discrete GPU is massive (9 W on my machine).


> > i915, malidp and msm "solved" this issue by not stopping the poll worker
> > on runtime suspend. But this results in the GPU bouncing back and forth
> > between D0 and D3 continuously. That's a total no-go for GPUs which
> > runtime suspend to D3cold since every suspend/resume cycle costs a
> > significant amount of time and energy. (i915 and malidp do not seem
> > to acquire a runtime PM ref in the ->detect callbacks, which seems
> > questionable. msm however does and would also deadlock if it disabled
> > the poll worker on runtime suspend. cc += Archit, Liviu, intel-gfx)
>
> Well, userspace expects hotplug events, even when we runtime suspend
> stuff. Hence waking shit up with polling. Imo ignoring hotplug events is a
> pretty serious policy decision which is ok in the context of
> vga_switcheroo, but not really as an automatic thing. E.g. usb also wakes
> up if you plug something in, even with all the runtime pm stuff enabled.
> Same for sata and everything else.

On the MacBook Pro, the HPD pin of external DP and HDMI ports goes into
the gmux controller, which sends an interrupt on hotplug even if the GPU
is powered down.

Apparently Optimus has a similar functionality, cf. 3a6536c51d5d.

For the rare cases where an external VGA or DVI-A port is present, I guess
it's reasonable to have the user wake up the machine manually.

I'm not sure why nouveau polls ports on my laptop; the GK107 only has an
LVDS and three DP ports. I need to investigate.

Thanks,

Lukas

2018-02-19 14:07:38

by Daniel Vetter

[permalink] [raw]
Subject: Re: [Intel-gfx] [Nouveau] [PATCH 0/5] Fix deadlock on runtime suspend in DRM drivers

On Mon, Feb 19, 2018 at 12:58:17PM +0100, Lukas Wunner wrote:
> On Mon, Feb 19, 2018 at 12:34:43PM +0100, Daniel Vetter wrote:
> > On Sun, Feb 11, 2018 at 10:38:28AM +0100, Lukas Wunner wrote:
> > > Fix a deadlock on hybrid graphics laptops that's been present since 2013:
> > >
> > > DRM drivers poll connectors in 10 sec intervals. The poll worker is
> > > stopped on ->runtime_suspend with cancel_delayed_work_sync(). However
> > > the poll worker invokes the DRM drivers' ->detect callbacks, which call
> > > pm_runtime_get_sync(). If the poll worker starts after runtime suspend
> > > has begun, pm_runtime_get_sync() will wait for runtime suspend to finish
> > > with the intention of runtime resuming the device afterwards. The result
> > > is a circular wait between poll worker and autosuspend worker.
> >
> > Don't shut down the poll worker when runtime suspending; that doesn't
> > work. If you need the poll work, then that means waking up the gpu every
> > few seconds. If you don't need it, then make sure the DRM_CONNECTOR_POLL
> > flags are set correctly (you can update them at runtime, the poll worker
> > will pick that up).
> >
> > That should fix the deadlock, and it's how we do it in i915 (where igt in
> > CI totally hammers the runtime pm support, and it seems to hold up).
>
> It would fix the deadlock but it's not an option on dual GPU laptops.
> Power consumption of the discrete GPU is massive (9 W on my machine).
>
>
> > > i915, malidp and msm "solved" this issue by not stopping the poll worker
> > > on runtime suspend. But this results in the GPU bouncing back and forth
> > > between D0 and D3 continuously. That's a total no-go for GPUs which
> > > runtime suspend to D3cold since every suspend/resume cycle costs a
> > > significant amount of time and energy. (i915 and malidp do not seem
> > > to acquire a runtime PM ref in the ->detect callbacks, which seems
> > > questionable. msm however does and would also deadlock if it disabled
> > > the poll worker on runtime suspend. cc += Archit, Liviu, intel-gfx)
> >
> > Well, userspace expects hotplug events, even when we runtime suspend
> > stuff. Hence waking shit up with polling. Imo ignoring hotplug events is a
> > pretty serious policy decision which is ok in the context of
> > vga_switcheroo, but not really as an automatic thing. E.g. usb also wakes
> > up if you plug something in, even with all the runtime pm stuff enabled.
> > Same for sata and everything else.
>
> On the MacBook Pro, the HPD pin of external DP and HDMI ports goes into
> the gmux controller, which sends an interrupt on hotplug even if the GPU
> is powered down.
>
> Apparently Optimus has a similar functionality, cf. 3a6536c51d5d.

Yeah, for vga_switcheroo and gmux setups shutting down polling explicitly
makes sense. I think ideally we'd stop polling in the gmux handler somehow
(maybe by dropping the relevant DRM_CONNECTOR_POLL flags, or outright
stopping it all). But not when runtime suspending the entire gpu (e.g.
idle system that shuts down the screen and everything, before it decides
a few minutes later to do a full system suspend).

I also think that this approach would lead to cleaner code; having
explicit checks for the (locking) execution context all over the place
tends to result in regrets eventually, ime.

> For the rare cases where an external VGA or DVI-A port is present, I guess
> it's reasonable to have the user wake up the machine manually.
>
> I'm not sure why nouveau polls ports on my laptop; the GK107 only has an
> LVDS and three DP ports. I need to investigate.

Yeah, that'd be good to figure out. The probe helpers should shut down the
worker if there's no connector that needs probing. We use that to
enable/disable the poll worker when there's a hotplug storm on the irq
line.

Once that's fixed we can perhaps also untangle the poll-vs-gmux story.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

2018-02-19 14:49:09

by Lukas Wunner

[permalink] [raw]
Subject: Re: [Intel-gfx] [Nouveau] [PATCH 0/5] Fix deadlock on runtime suspend in DRM drivers

On Mon, Feb 19, 2018 at 03:05:53PM +0100, Daniel Vetter wrote:
> On Mon, Feb 19, 2018 at 12:58:17PM +0100, Lukas Wunner wrote:
> > On Mon, Feb 19, 2018 at 12:34:43PM +0100, Daniel Vetter wrote:
> > > Well, userspace expects hotplug events, even when we runtime suspend
> > > stuff. Hence waking shit up with polling. Imo ignoring hotplug events is a
> > > pretty serious policy decision which is ok in the context of
> > > vga_switcheroo, but not really as an automatic thing. E.g. usb also wakes
> > > up if you plug something in, even with all the runtime pm stuff enabled.
> > > Same for sata and everything else.
> >
> > On the MacBook Pro, the HPD pin of external DP and HDMI ports goes into
> > the gmux controller, which sends an interrupt on hotplug even if the GPU
> > is powered down.
> >
> > Apparently Optimus has a similar functionality, cf. 3a6536c51d5d.
>
> Yeah, for vga_switcheroo and gmux setups shutting down polling explicitly
> makes sense. I think ideally we'd stop polling in the gmux handler somehow
> (maybe by dropping the relevant DRM_CONNECTOR_POLL flags, or outright
> stopping it all). But not when runtime suspending the entire gpu (e.g.
> idle system that shuts down the screen and everything, before it decides
> a few minutes later to do a full system suspend).

nouveau, radeon and amdgpu currently use runtime PM *only* on hybrid
graphics laptops.

Should the drivers later be extended to also use runtime PM in other
scenarios (desktop machines, eGPUs), they can easily detect whether
to disable polling on runtime suspend by calling apple_gmux_present()
on Macs or the equivalent for Optimus/ATPX.
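
I.e. something like this in ->runtime_suspend (a sketch; drm_dev being
the driver's struct drm_device):

        #include <linux/apple-gmux.h>

        if (apple_gmux_present())
                /* gmux signals HPD even while the GPU is powered down */
                drm_kms_helper_poll_disable(drm_dev);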

Thanks,

Lukas

2018-02-19 14:55:48

by Daniel Vetter

[permalink] [raw]
Subject: Re: [Intel-gfx] [Nouveau] [PATCH 0/5] Fix deadlock on runtime suspend in DRM drivers

On Mon, Feb 19, 2018 at 03:47:42PM +0100, Lukas Wunner wrote:
> On Mon, Feb 19, 2018 at 03:05:53PM +0100, Daniel Vetter wrote:
> > On Mon, Feb 19, 2018 at 12:58:17PM +0100, Lukas Wunner wrote:
> > > On Mon, Feb 19, 2018 at 12:34:43PM +0100, Daniel Vetter wrote:
> > > > Well, userspace expects hotplug events, even when we runtime suspend
> > > > stuff. Hence waking shit up with polling. Imo ignoring hotplug events is a
> > > > pretty serious policy decision which is ok in the context of
> > > > vga_switcheroo, but not really as an automatic thing. E.g. usb also wakes
> > > > up if you plug something in, even with all the runtime pm stuff enabled.
> > > > Same for sata and everything else.
> > >
> > > On the MacBook Pro, the HPD pin of external DP and HDMI ports goes into
> > > the gmux controller, which sends an interrupt on hotplug even if the GPU
> > > is powered down.
> > >
> > > Apparently Optimus has a similar functionality, cf. 3a6536c51d5d.
> >
> > Yeah, for vga_switcheroo and gmux setups shutting down polling explicitly
> > makes sense. I think ideally we'd stop polling in the gmux handler somehow
> > (maybe by dropping the relevant DRM_CONNECTOR_POLL flags, or outright
> > stopping it all). But not when runtime suspending the entire gpu (e.g.
> > idle system that shuts down the screen and everything, before it decides
> > a few minutes later to do a full system suspend).
>
> nouveau, radeon and amdgpu currently use runtime PM *only* on hybrid
> graphics laptops.
>
> Should the drivers later be extended to also use runtime PM in other
> scenarios (desktop machines, eGPUs), they can easily detect whether
> to disable polling on runtime suspend by calling apple_gmux_present()
> on Macs or the equivalent for Optimus/ATPX.

Ah, then I think the current solution is ok (if not entirely clean imo,
but that can be fixed up whenever it hurts). Implementing runtime pm for
other cases is up to the driver authors really (probably more pressing
when the gpu is on the same SoC).
-Daniel

>
> Thanks,
>
> Lukas
> _______________________________________________
> dri-devel mailing list
> [email protected]
> https://lists.freedesktop.org/mailman/listinfo/dri-devel

--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

2018-02-19 15:50:53

by Alex Deucher

[permalink] [raw]
Subject: Re: [Intel-gfx] [Nouveau] [PATCH 0/5] Fix deadlock on runtime suspend in DRM drivers

On Mon, Feb 19, 2018 at 9:54 AM, Daniel Vetter <[email protected]> wrote:
> On Mon, Feb 19, 2018 at 03:47:42PM +0100, Lukas Wunner wrote:
>> On Mon, Feb 19, 2018 at 03:05:53PM +0100, Daniel Vetter wrote:
>> > On Mon, Feb 19, 2018 at 12:58:17PM +0100, Lukas Wunner wrote:
>> > > On Mon, Feb 19, 2018 at 12:34:43PM +0100, Daniel Vetter wrote:
>> > > > Well, userspace expects hotplug events, even when we runtime suspend
>> > > > stuff. Hence waking shit up with polling. Imo ignoring hotplug events is a
>> > > > pretty serious policy decision which is ok in the context of
>> > > > vga_switcheroo, but not really as an automatic thing. E.g. usb also wakes
>> > > > up if you plug something in, even with all the runtime pm stuff enabled.
>> > > > Same for sata and everything else.
>> > >
>> > > On the MacBook Pro, the HPD pin of external DP and HDMI ports goes into
>> > > the gmux controller, which sends an interrupt on hotplug even if the GPU
>> > > is powered down.
>> > >
>> > > Apparently Optimus has a similar functionality, cf. 3a6536c51d5d.
>> >
>> > Yeah, for vga_switcheroo and gmux setups shutting down polling explicitly
>> > makes sense. I think ideally we'd stop polling in the gmux handler somehow
>> > (maybe by dropping the relevant DRM_CONNECTOR_POLL flags, or outright
>> > stopping it all). But not when runtime suspending the entire gpu (e.g.
>> > idle system that shuts down the screen and everything, before it decides
>> > a few minutes later to do a full system suspend).
>>
>> nouveau, radeon and amdgpu currently use runtime PM *only* on hybrid
>> graphics laptops.
>>
>> Should the drivers later be extended to also use runtime PM in other
>> scenarios (desktop machines, eGPUs), they can easily detect whether
>> to disable polling on runtime suspend by calling apple_gmux_present()
>> on Macs or the equivalent for Optimus/ATPX.
>
> Ah, then I think the current solution is ok (if not entirely clean imo,
> but that can be fixed up whenever it hurts). Implementing runtime pm for
> other cases is up to the driver authors really (probably more pressing
> when the gpu is on the same SoC).

On our APUs, we support fairly fine-grained powergating, so this mostly
happens auto-magically in hw; no need for runtime PM. We haven't
supported native analog encoders in the last 3 or 4 generations of display
hw, so polling is not much of an issue going forward. On most
integrated platforms (e.g., laptops and all-in-ones), digital hotplug
is handled by the platform (we get an ACPI ATIF notification) so we
can wake the dGPU.

Alex

> -Daniel
>
>>
>> Thanks,
>>
>> Lukas
>> _______________________________________________
>> dri-devel mailing list
>> [email protected]
>> https://lists.freedesktop.org/mailman/listinfo/dri-devel
>
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch
> _______________________________________________
> dri-devel mailing list
> [email protected]
> https://lists.freedesktop.org/mailman/listinfo/dri-devel