From: Alex Deucher
Date: Fri, 18 Nov 2022 14:46:10 -0500
Subject: Re: [PATCH v2 1/4] drm/amdgpu/mst: Stop ignoring error codes and deadlocking
To: Lyude Paul
Cc: "Lin, Wayne", "amd-gfx@lists.freedesktop.org", "Liu, Wenjing", "open list:DRM DRIVERS", open list, "Mahfooz, Hamza", David Airlie, "Francis, David", "Siqueira, Rodrigo", "Hung, Alex", "Zuo, Jerry", "Pillai, Aurabindo", "Wentland, Harry", Daniel Vetter, "Li, Sun peng (Leo)", "Wu, Hersen", Mikita Lipski, "Pan, Xinhui", "Li, Roman", "stable@vger.kernel.org", "Koenig, Christian", Thomas Zimmermann, "Deucher, Alexander", "Kazlauskas, Nicholas"
References: <20221114221754.385090-1-lyude@redhat.com> <20221114221754.385090-2-lyude@redhat.com> <35e55d8040afdcb5684bec16ff710ce5ff32b202.camel@redhat.com>
In-Reply-To: <35e55d8040afdcb5684bec16ff710ce5ff32b202.camel@redhat.com>
List-ID: linux-kernel@vger.kernel.org

I've already picked this up. Can you send a follow-up patch with just the
Coverity fix?

Alex

On Fri, Nov 18, 2022 at 2:17 PM Lyude Paul wrote:
>
> JFYI, Coverity pointed out one more issue with this series so I'm going to
> send out a respin real quick to fix it. It's just a missing variable
> assignment (we leave ret unassigned by mistake in
> pre_compute_mst_dsc_configs()) so I will carry over your r-b on it.
>
> On Wed, 2022-11-16 at 04:39 +0000, Lin, Wayne wrote:
> > [Public]
> >
> > All the patch set looks good to me. Feel free to add:
> > Reviewed-by: Wayne Lin
> >
> > Again, thank you Lyude for helping on this!!!
> >
> > Regards,
> > Wayne
> > > -----Original Message-----
> > > From: Lyude Paul
> > > Sent: Tuesday, November 15, 2022 6:18 AM
> > > To: amd-gfx@lists.freedesktop.org
> > > Cc: Wentland, Harry; stable@vger.kernel.org; Li, Sun peng (Leo);
> > > Siqueira, Rodrigo; Deucher, Alexander; Koenig, Christian; Pan, Xinhui;
> > > David Airlie; Daniel Vetter; Kazlauskas, Nicholas; Pillai, Aurabindo;
> > > Li, Roman; Zuo, Jerry; Wu, Hersen; Lin, Wayne; Thomas Zimmermann;
> > > Mahfooz, Hamza; Hung, Alex; Mikita Lipski; Liu, Wenjing;
> > > Francis, David; open list:DRM DRIVERS <dri-devel@lists.freedesktop.org>;
> > > open list
> > > Subject: [PATCH v2 1/4] drm/amdgpu/mst: Stop ignoring error codes and
> > > deadlocking
> > >
> > > It appears that amdgpu makes the mistake of completely ignoring the return
> > > values from the DP MST helpers, and instead just returns a simple true/false.
> > > In this case, it seems to have come back to bite us because as a result of
> > > simply returning false from compute_mst_dsc_configs_for_state(), amdgpu
> > > had no way of telling when a deadlock happened from these helpers.
> > > This could definitely result in some kernel splats.
> > >
> > > V2:
> > > * Address Wayne's comments (fix another bunch of spots where we weren't
> > >   passing down return codes)
> > >
> > > Signed-off-by: Lyude Paul
> > > Fixes: 8c20a1ed9b4f ("drm/amd/display: MST DSC compute fair share")
> > > Cc: Harry Wentland
> > > Cc: # v5.6+
> > > ---
> > >  .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |  18 +-
> > >  .../display/amdgpu_dm/amdgpu_dm_mst_types.c   | 235 ++++++++++--------
> > >  .../display/amdgpu_dm/amdgpu_dm_mst_types.h   |  12 +-
> > >  3 files changed, 147 insertions(+), 118 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> > > index 0db2a88cd4d7b..852a2100c6b38 100644
> > > --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> > > +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> > > @@ -6462,7 +6462,7 @@ static int dm_update_mst_vcpi_slots_for_dsc(struct drm_atomic_state *state,
> > >  	struct drm_connector_state *new_con_state;
> > >  	struct amdgpu_dm_connector *aconnector;
> > >  	struct dm_connector_state *dm_conn_state;
> > > -	int i, j;
> > > +	int i, j, ret;
> > >  	int vcpi, pbn_div, pbn, slot_num = 0;
> > >
> > >  	for_each_new_connector_in_state(state, connector, new_con_state, i) {
> > > @@ -6509,8 +6509,11 @@ static int dm_update_mst_vcpi_slots_for_dsc(struct drm_atomic_state *state,
> > >  			dm_conn_state->pbn = pbn;
> > >  			dm_conn_state->vcpi_slots = slot_num;
> > >
> > > -			drm_dp_mst_atomic_enable_dsc(state, aconnector->port, dm_conn_state->pbn,
> > > -						     false);
> > > +			ret = drm_dp_mst_atomic_enable_dsc(state, aconnector->port,
> > > +							   dm_conn_state->pbn, false);
> > > +			if (ret < 0)
> > > +				return ret;
> > > +
> > >  			continue;
> > >  		}
> > >
> > > @@ -9523,10 +9526,9 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
> > >
> > >  #if defined(CONFIG_DRM_AMD_DC_DCN)
> > >  	if (dc_resource_is_dsc_encoding_supported(dc)) {
> > > -		if (!pre_validate_dsc(state, &dm_state, vars)) {
> > > -			ret = -EINVAL;
> > > +		ret = pre_validate_dsc(state, &dm_state, vars);
> > > +		if (ret != 0)
> > >  			goto fail;
> > > -		}
> > >  	}
> > >  #endif
> > >
> > > @@ -9621,9 +9623,9 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
> > >  	}
> > >
> > >  #if defined(CONFIG_DRM_AMD_DC_DCN)
> > > -	if (!compute_mst_dsc_configs_for_state(state, dm_state->context, vars)) {
> > > +	ret = compute_mst_dsc_configs_for_state(state, dm_state->context, vars);
> > > +	if (ret) {
> > >  		DRM_DEBUG_DRIVER("compute_mst_dsc_configs_for_state() failed\n");
> > > -		ret = -EINVAL;
> > >  		goto fail;
> > >  	}
> > >
> > > diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
> > > index 6ff96b4bdda5c..bba2e8aaa2c20 100644
> > > --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
> > > +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
> > > @@ -703,13 +703,13 @@ static int bpp_x16_from_pbn(struct dsc_mst_fairness_params param, int pbn)
> > >  	return dsc_config.bits_per_pixel;
> > >  }
> > >
> > > -static bool increase_dsc_bpp(struct drm_atomic_state *state,
> > > -			     struct drm_dp_mst_topology_state *mst_state,
> > > -			     struct dc_link *dc_link,
> > > -			     struct dsc_mst_fairness_params *params,
> > > -			     struct dsc_mst_fairness_vars *vars,
> > > -			     int count,
> > > -			     int k)
> > > +static int increase_dsc_bpp(struct drm_atomic_state *state,
> > > +			    struct drm_dp_mst_topology_state *mst_state,
> > > +			    struct dc_link *dc_link,
> > > +			    struct dsc_mst_fairness_params *params,
> > > +			    struct dsc_mst_fairness_vars *vars,
> > > +			    int count,
> > > +			    int k)
> > >  {
> > >  	int i;
> > >  	bool bpp_increased[MAX_PIPES];
> > > @@ -719,6 +719,7 @@ static bool increase_dsc_bpp(struct drm_atomic_state *state,
> > >  	int remaining_to_increase = 0;
> > >  	int link_timeslots_used;
> > >  	int fair_pbn_alloc;
> > > +	int ret = 0;
> > >
> > >  	for (i = 0; i < count; i++) {
> > >  		if (vars[i + k].dsc_enabled) {
> > > @@ -757,52 +758,60 @@ static bool increase_dsc_bpp(struct drm_atomic_state *state,
> > >
> > >  		if (initial_slack[next_index] > fair_pbn_alloc) {
> > >  			vars[next_index].pbn += fair_pbn_alloc;
> > > -			if (drm_dp_atomic_find_time_slots(state,
> > > -							  params[next_index].port->mgr,
> > > -							  params[next_index].port,
> > > -							  vars[next_index].pbn) < 0)
> > > -				return false;
> > > -			if (!drm_dp_mst_atomic_check(state)) {
> > > +			ret = drm_dp_atomic_find_time_slots(state,
> > > +							    params[next_index].port->mgr,
> > > +							    params[next_index].port,
> > > +							    vars[next_index].pbn);
> > > +			if (ret < 0)
> > > +				return ret;
> > > +
> > > +			ret = drm_dp_mst_atomic_check(state);
> > > +			if (ret == 0) {
> > >  				vars[next_index].bpp_x16 = bpp_x16_from_pbn(params[next_index], vars[next_index].pbn);
> > >  			} else {
> > >  				vars[next_index].pbn -= fair_pbn_alloc;
> > > -				if (drm_dp_atomic_find_time_slots(state,
> > > -								  params[next_index].port->mgr,
> > > -								  params[next_index].port,
> > > -								  vars[next_index].pbn) < 0)
> > > -					return false;
> > > +				ret = drm_dp_atomic_find_time_slots(state,
> > > +								    params[next_index].port->mgr,
> > > +								    params[next_index].port,
> > > +								    vars[next_index].pbn);
> > > +				if (ret < 0)
> > > +					return ret;
> > >  			}
> > >  		} else {
> > >  			vars[next_index].pbn += initial_slack[next_index];
> > > -			if (drm_dp_atomic_find_time_slots(state,
> > > -							  params[next_index].port->mgr,
> > > -							  params[next_index].port,
> > > -							  vars[next_index].pbn) < 0)
> > > -				return false;
> > > -			if (!drm_dp_mst_atomic_check(state)) {
> > > +			ret = drm_dp_atomic_find_time_slots(state,
> > > +							    params[next_index].port->mgr,
> > > +							    params[next_index].port,
> > > +							    vars[next_index].pbn);
> > > +			if (ret < 0)
> > > +				return ret;
> > > +
> > > +			ret = drm_dp_mst_atomic_check(state);
> > > +			if (ret == 0) {
> > >  				vars[next_index].bpp_x16 = params[next_index].bw_range.max_target_bpp_x16;
> > >  			} else {
> > >  				vars[next_index].pbn -= initial_slack[next_index];
> > > -				if (drm_dp_atomic_find_time_slots(state,
> > > -								  params[next_index].port->mgr,
> > > -								  params[next_index].port,
> > > -								  vars[next_index].pbn) < 0)
> > > -					return false;
> > > +				ret = drm_dp_atomic_find_time_slots(state,
> > > +								    params[next_index].port->mgr,
> > > +								    params[next_index].port,
> > > +								    vars[next_index].pbn);
> > > +				if (ret < 0)
> > > +					return ret;
> > >  			}
> > >  		}
> > >
> > >  		bpp_increased[next_index] = true;
> > >  		remaining_to_increase--;
> > >  	}
> > > -	return true;
> > > +	return 0;
> > >  }
> > >
> > > -static bool try_disable_dsc(struct drm_atomic_state *state,
> > > -			    struct dc_link *dc_link,
> > > -			    struct dsc_mst_fairness_params *params,
> > > -			    struct dsc_mst_fairness_vars *vars,
> > > -			    int count,
> > > -			    int k)
> > > +static int try_disable_dsc(struct drm_atomic_state *state,
> > > +			   struct dc_link *dc_link,
> > > +			   struct dsc_mst_fairness_params *params,
> > > +			   struct dsc_mst_fairness_vars *vars,
> > > +			   int count,
> > > +			   int k)
> > >  {
> > >  	int i;
> > >  	bool tried[MAX_PIPES];
> > > @@ -810,6 +819,7 @@ static bool try_disable_dsc(struct drm_atomic_state *state,
> > >  	int max_kbps_increase;
> > >  	int next_index;
> > >  	int remaining_to_try = 0;
> > > +	int ret;
> > >
> > >  	for (i = 0; i < count; i++) {
> > >  		if (vars[i + k].dsc_enabled
> > > @@ -840,49 +850,52 @@ static bool try_disable_dsc(struct drm_atomic_state *state,
> > >  			break;
> > >
> > >  		vars[next_index].pbn = kbps_to_peak_pbn(params[next_index].bw_range.stream_kbps);
> > > -		if (drm_dp_atomic_find_time_slots(state,
> > > -						  params[next_index].port->mgr,
> > > -						  params[next_index].port,
> > > -						  vars[next_index].pbn) < 0)
> > > -			return false;
> > > +		ret = drm_dp_atomic_find_time_slots(state,
> > > +						    params[next_index].port->mgr,
> > > +						    params[next_index].port,
> > > +						    vars[next_index].pbn);
> > > +		if (ret < 0)
> > > +			return ret;
> > >
> > > -		if (!drm_dp_mst_atomic_check(state)) {
> > > +		ret = drm_dp_mst_atomic_check(state);
> > > +		if (ret == 0) {
> > >  			vars[next_index].dsc_enabled = false;
> > >  			vars[next_index].bpp_x16 = 0;
> > >  		} else {
> > >  			vars[next_index].pbn = kbps_to_peak_pbn(params[next_index].bw_range.max_kbps);
> > > -			if (drm_dp_atomic_find_time_slots(state,
> > > -							  params[next_index].port->mgr,
> > > -							  params[next_index].port,
> > > -							  vars[next_index].pbn) < 0)
> > > -				return false;
> > > +			ret = drm_dp_atomic_find_time_slots(state,
> > > +							    params[next_index].port->mgr,
> > > +							    params[next_index].port,
> > > +							    vars[next_index].pbn);
> > > +			if (ret < 0)
> > > +				return ret;
> > >  		}
> > >
> > >  		tried[next_index] = true;
> > >  		remaining_to_try--;
> > >  	}
> > > -	return true;
> > > +	return 0;
> > >  }
> > >
> > > -static bool compute_mst_dsc_configs_for_link(struct drm_atomic_state *state,
> > > -					     struct dc_state *dc_state,
> > > -					     struct dc_link *dc_link,
> > > -					     struct dsc_mst_fairness_vars *vars,
> > > -					     struct drm_dp_mst_topology_mgr *mgr,
> > > -					     int *link_vars_start_index)
> > > +static int compute_mst_dsc_configs_for_link(struct drm_atomic_state *state,
> > > +					    struct dc_state *dc_state,
> > > +					    struct dc_link *dc_link,
> > > +					    struct dsc_mst_fairness_vars *vars,
> > > +					    struct drm_dp_mst_topology_mgr *mgr,
> > > +					    int *link_vars_start_index)
> > >  {
> > >  	struct dc_stream_state *stream;
> > >  	struct dsc_mst_fairness_params params[MAX_PIPES];
> > >  	struct amdgpu_dm_connector *aconnector;
> > >  	struct drm_dp_mst_topology_state *mst_state = drm_atomic_get_mst_topology_state(state, mgr);
> > >  	int count = 0;
> > > -	int i, k;
> > > +	int i, k, ret;
> > >  	bool debugfs_overwrite = false;
> > >
> > >  	memset(params, 0, sizeof(params));
> > >
> > >  	if (IS_ERR(mst_state))
> > > -		return false;
> > > +		return PTR_ERR(mst_state);
> > >
> > >  	mst_state->pbn_div = dm_mst_get_pbn_divider(dc_link);
> > >  #if defined(CONFIG_DRM_AMD_DC_DCN)
> > > @@ -933,7 +946,7 @@ static bool compute_mst_dsc_configs_for_link(struct drm_atomic_state *state,
> > >
> > >  	if (count == 0) {
> > >  		ASSERT(0);
> > > -		return true;
> > > +		return 0;
> > >  	}
> > >
> > >  	/* k is start index of vars for current phy link used by mst hub */
> > > @@ -947,13 +960,17 @@ static bool compute_mst_dsc_configs_for_link(struct drm_atomic_state *state,
> > >  		vars[i + k].pbn = kbps_to_peak_pbn(params[i].bw_range.stream_kbps);
> > >  		vars[i + k].dsc_enabled = false;
> > >  		vars[i + k].bpp_x16 = 0;
> > > -		if (drm_dp_atomic_find_time_slots(state, params[i].port->mgr, params[i].port,
> > > -						  vars[i + k].pbn) < 0)
> > > -			return false;
> > > +		ret = drm_dp_atomic_find_time_slots(state, params[i].port->mgr, params[i].port,
> > > +						    vars[i + k].pbn);
> > > +		if (ret < 0)
> > > +			return ret;
> > >  	}
> > > -	if (!drm_dp_mst_atomic_check(state) && !debugfs_overwrite) {
> > > +	ret = drm_dp_mst_atomic_check(state);
> > > +	if (ret == 0 && !debugfs_overwrite) {
> > >  		set_dsc_configs_from_fairness_vars(params, vars, count, k);
> > > -		return true;
> > > +		return 0;
> > > +	} else if (ret != -ENOSPC) {
> > > +		return ret;
> > >  	}
> > >
> > >  	/* Try max compression */
> > > @@ -962,31 +979,36 @@ static bool compute_mst_dsc_configs_for_link(struct drm_atomic_state *state,
> > >  			vars[i + k].pbn = kbps_to_peak_pbn(params[i].bw_range.min_kbps);
> > >  			vars[i + k].dsc_enabled = true;
> > >  			vars[i + k].bpp_x16 = params[i].bw_range.min_target_bpp_x16;
> > > -			if (drm_dp_atomic_find_time_slots(state, params[i].port->mgr,
> > > -							  params[i].port, vars[i + k].pbn) < 0)
> > > -				return false;
> > > +			ret = drm_dp_atomic_find_time_slots(state, params[i].port->mgr,
> > > +							    params[i].port, vars[i + k].pbn);
> > > +			if (ret < 0)
> > > +				return ret;
> > >  		} else {
> > >  			vars[i + k].pbn = kbps_to_peak_pbn(params[i].bw_range.stream_kbps);
> > >  			vars[i + k].dsc_enabled = false;
> > >  			vars[i + k].bpp_x16 = 0;
> > > -			if (drm_dp_atomic_find_time_slots(state, params[i].port->mgr,
> > > -							  params[i].port, vars[i + k].pbn) < 0)
> > > -				return false;
> > > +			ret = drm_dp_atomic_find_time_slots(state, params[i].port->mgr,
> > > +							    params[i].port, vars[i + k].pbn);
> > > +			if (ret < 0)
> > > +				return ret;
> > >  		}
> > >  	}
> > > -	if (drm_dp_mst_atomic_check(state))
> > > -		return false;
> > > +	ret = drm_dp_mst_atomic_check(state);
> > > +	if (ret != 0)
> > > +		return ret;
> > >
> > >  	/* Optimize degree of compression */
> > > -	if (!increase_dsc_bpp(state, mst_state, dc_link, params, vars, count, k))
> > > -		return false;
> > > +	ret = increase_dsc_bpp(state, mst_state, dc_link, params, vars, count, k);
> > > +	if (ret < 0)
> > > +		return ret;
> > >
> > > -	if (!try_disable_dsc(state, dc_link, params, vars, count, k))
> > > -		return false;
> > > +	ret = try_disable_dsc(state, dc_link, params, vars, count, k);
> > > +	if (ret < 0)
> > > +		return ret;
> > >
> > >  	set_dsc_configs_from_fairness_vars(params, vars, count, k);
> > >
> > > -	return true;
> > > +	return 0;
> > >  }
> > >
> > >  static bool is_dsc_need_re_compute(
> > > @@ -1087,15 +1109,16 @@ static bool is_dsc_need_re_compute(
> > >  	return is_dsc_need_re_compute;
> > >  }
> > >
> > > -bool compute_mst_dsc_configs_for_state(struct drm_atomic_state *state,
> > > -				       struct dc_state *dc_state,
> > > -				       struct dsc_mst_fairness_vars *vars)
> > > +int compute_mst_dsc_configs_for_state(struct drm_atomic_state *state,
> > > +				      struct dc_state *dc_state,
> > > +				      struct dsc_mst_fairness_vars *vars)
> > >  {
> > >  	int i, j;
> > >  	struct dc_stream_state *stream;
> > >  	bool computed_streams[MAX_PIPES];
> > >  	struct amdgpu_dm_connector *aconnector;
> > >  	int link_vars_start_index = 0;
> > > +	int ret = 0;
> > >
> > >  	for (i = 0; i < dc_state->stream_count; i++)
> > >  		computed_streams[i] = false;
> > > @@ -1118,17 +1141,19 @@ bool compute_mst_dsc_configs_for_state(struct drm_atomic_state *state,
> > >  			continue;
> > >
> > >  		if (dcn20_remove_stream_from_ctx(stream->ctx->dc, dc_state, stream) != DC_OK)
> > > -			return false;
> > > +			return -EINVAL;
> > >
> > >  		if (!is_dsc_need_re_compute(state, dc_state, stream->link))
> > >  			continue;
> > >
> > >  		mutex_lock(&aconnector->mst_mgr.lock);
> > > -		if (!compute_mst_dsc_configs_for_link(state, dc_state, stream->link, vars,
> > > -						      &aconnector->mst_mgr,
> > > -						      &link_vars_start_index)) {
> > > +
> > > +		ret = compute_mst_dsc_configs_for_link(state, dc_state, stream->link, vars,
> > > +						       &aconnector->mst_mgr,
> > > +						       &link_vars_start_index);
> > > +		if (ret != 0) {
> > >  			mutex_unlock(&aconnector->mst_mgr.lock);
> > > -			return false;
> > > +			return ret;
> > >  		}
> > >  		mutex_unlock(&aconnector->mst_mgr.lock);
> > >
> > > @@ -1143,22 +1168,22 @@ bool compute_mst_dsc_configs_for_state(struct drm_atomic_state *state,
> > >
> > >  		if (stream->timing.flags.DSC == 1)
> > >  			if (dc_stream_add_dsc_to_resource(stream->ctx->dc, dc_state, stream) != DC_OK)
> > > -				return false;
> > > +				return -EINVAL;
> > >  	}
> > >
> > > -	return true;
> > > +	return ret;
> > >  }
> > >
> > > -static bool
> > > -	pre_compute_mst_dsc_configs_for_state(struct drm_atomic_state *state,
> > > -					      struct dc_state *dc_state,
> > > -					      struct dsc_mst_fairness_vars *vars)
> > > +static int pre_compute_mst_dsc_configs_for_state(struct drm_atomic_state *state,
> > > +						 struct dc_state *dc_state,
> > > +						 struct dsc_mst_fairness_vars *vars)
> > >  {
> > >  	int i, j;
> > >  	struct dc_stream_state *stream;
> > >  	bool computed_streams[MAX_PIPES];
> > >  	struct amdgpu_dm_connector *aconnector;
> > >  	int link_vars_start_index = 0;
> > > +	int ret;
> > >
> > >  	for (i = 0; i < dc_state->stream_count; i++)
> > >  		computed_streams[i] = false;
> > > @@ -1184,11 +1209,12 @@ static bool
> > >  			continue;
> > >
> > >  		mutex_lock(&aconnector->mst_mgr.lock);
> > > -		if (!compute_mst_dsc_configs_for_link(state, dc_state, stream->link, vars,
> > > -						      &aconnector->mst_mgr,
> > > -						      &link_vars_start_index)) {
> > > +		ret = compute_mst_dsc_configs_for_link(state, dc_state, stream->link, vars,
> > > +						       &aconnector->mst_mgr,
> > > +						       &link_vars_start_index);
> > > +		if (ret != 0) {
> > >  			mutex_unlock(&aconnector->mst_mgr.lock);
> > > -			return false;
> > > +			return ret;
> > >  		}
> > >  		mutex_unlock(&aconnector->mst_mgr.lock);
> > >
> > > @@ -1198,7 +1224,7 @@ static bool
> > >  		}
> > >  	}
> > >
> > > -	return true;
> > > +	return ret;
> > >  }
> > >
> > >  static int find_crtc_index_in_state_by_stream(struct drm_atomic_state *state,
> > > @@ -1253,9 +1279,9 @@ static bool is_dsc_precompute_needed(struct drm_atomic_state *state)
> > >  	return ret;
> > >  }
> > >
> > > -bool pre_validate_dsc(struct drm_atomic_state *state,
> > > -		      struct dm_atomic_state **dm_state_ptr,
> > > -		      struct dsc_mst_fairness_vars *vars)
> > > +int pre_validate_dsc(struct drm_atomic_state *state,
> > > +		     struct dm_atomic_state **dm_state_ptr,
> > > +		     struct dsc_mst_fairness_vars *vars)
> > >  {
> > >  	int i;
> > >  	struct dm_atomic_state *dm_state;
> > > @@ -1264,11 +1290,12 @@ bool pre_validate_dsc(struct drm_atomic_state *state,
> > >
> > >  	if (!is_dsc_precompute_needed(state)) {
> > >  		DRM_INFO_ONCE("DSC precompute is not needed.\n");
> > > -		return true;
> > > +		return 0;
> > >  	}
> > > -	if (dm_atomic_get_state(state, dm_state_ptr)) {
> > > +	ret = dm_atomic_get_state(state, dm_state_ptr);
> > > +	if (ret != 0) {
> > >  		DRM_INFO_ONCE("dm_atomic_get_state() failed\n");
> > > -		return false;
> > > +		return ret;
> > >  	}
> > >  	dm_state = *dm_state_ptr;
> > >
> > > @@ -1280,7 +1307,7 @@ bool pre_validate_dsc(struct drm_atomic_state *state,
> > >
> > >  	local_dc_state = kmemdup(dm_state->context, sizeof(struct dc_state), GFP_KERNEL);
> > >  	if (!local_dc_state)
> > > -		return false;
> > > +		return -ENOMEM;
> > >
> > >  	for (i = 0; i < local_dc_state->stream_count; i++) {
> > >  		struct dc_stream_state *stream = dm_state->context->streams[i];
> > > @@ -1316,9 +1343,9 @@ bool pre_validate_dsc(struct drm_atomic_state *state,
> > >  	if (ret != 0)
> > >  		goto clean_exit;
> > >
> > > -	if (!pre_compute_mst_dsc_configs_for_state(state, local_dc_state, vars)) {
> > > +	ret = pre_compute_mst_dsc_configs_for_state(state, local_dc_state, vars);
> > > +	if (ret != 0) {
> > >  		DRM_INFO_ONCE("pre_compute_mst_dsc_configs_for_state() failed\n");
> > > -		ret = -EINVAL;
> > >  		goto clean_exit;
> > >  	}
> > >
> > > @@ -1349,7 +1376,7 @@ bool pre_validate_dsc(struct drm_atomic_state *state,
> > >
> > >  	kfree(local_dc_state);
> > >
> > > -	return (ret == 0);
> > > +	return ret;
> > >  }
> > >
> > >  static unsigned int kbps_from_pbn(unsigned int pbn)
> > > diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.h
> > > index b92a7c5671aa2..97fd70df531bf 100644
> > > --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.h
> > > +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.h
> > > @@ -53,15 +53,15 @@ struct dsc_mst_fairness_vars {
> > >  	struct amdgpu_dm_connector *aconnector;
> > >  };
> > >
> > > -bool compute_mst_dsc_configs_for_state(struct drm_atomic_state *state,
> > > -				       struct dc_state *dc_state,
> > > -				       struct dsc_mst_fairness_vars *vars);
> > > +int compute_mst_dsc_configs_for_state(struct drm_atomic_state *state,
> > > +				      struct dc_state *dc_state,
> > > +				      struct dsc_mst_fairness_vars *vars);
> > >
> > >  bool needs_dsc_aux_workaround(struct dc_link *link);
> > >
> > > -bool pre_validate_dsc(struct drm_atomic_state *state,
> > > -		      struct dm_atomic_state **dm_state_ptr,
> > > -		      struct dsc_mst_fairness_vars *vars);
> > > +int pre_validate_dsc(struct drm_atomic_state *state,
> > > +		    struct dm_atomic_state **dm_state_ptr,
> > > +		    struct dsc_mst_fairness_vars *vars);
> > >
> > >  enum dc_status dm_dp_mst_is_port_support_mode(
> > >  	struct amdgpu_dm_connector *aconnector,
> > > --
> > > 2.37.3
> >
> --
> Cheers,
> Lyude Paul (she/her)
>
> Software Engineer at Red Hat
>