From: "Rafael J. Wysocki"
Date: Tue, 11 May 2021 19:20:04 +0200
Subject: Re: [PATCH] component: Move host device to end of device lists on binding
To: Stephen Boyd
Cc: "Rafael J. Wysocki", Daniel Vetter, Greg Kroah-Hartman,
 Linux Kernel Mailing List, Russell King, Rob Clark, dri-devel
References: <20210508074118.1621729-1-swboyd@chromium.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, May 11, 2021 at 7:00 PM Stephen Boyd wrote:
>
> Quoting Rafael J. Wysocki (2021-05-11 03:52:06)
> > On Mon, May 10, 2021 at 9:08 PM Stephen Boyd wrote:
> >
> > [cut]
> >
> > > >
> > > > >
> > > > > I will try it, but then I wonder about things like system wide
> > > > > suspend/resume too. The drm encoder chain would need to reimplement the
> > > > > logic for system wide suspend/resume so that any PM ops attached to the
> > > > > msm device run in the correct order. Right now the bridge PM ops will
> > > > > run, the i2c bus PM ops will run, and then the msm PM ops will run.
> > > > > After this change, the msm PM ops will run, the bridge PM ops will run,
> > > > > and then the i2c bus PM ops will run. It feels like that could be a
> > > > > problem if we're suspending the DSI encoder while the bridge is still
> > > > > active.
> > > >
> > > > Yup suspend/resume has the exact same problem as shutdown.
> > > >
> > > I think suspend/resume has the exact opposite problem. At least I think
> > > the correct order is to suspend the bridge, then the encoder, i.e. DSI,
> > > like is happening today. It looks like drm_atomic_helper_shutdown()
> > > operates from the top down when we want bottom up? I admit I have no
> > > idea what is supposed to happen here.
> >
> > Why would the system-wide suspend ordering be different from the
> > shutdown ordering?
>
> I don't really know. I'm mostly noting that today the order of suspend
> is to suspend the bridge device first and then the aggregate device. If
> the suspend of the aggregate device is traversing the devices like
> drm_atomic_helper_shutdown() then it would operate on the bridge device
> after it has been suspended, like is happening during shutdown. But it
> looks like that isn't happening. At least for the msm driver we're
> suspending the aggregate device after the bridge, and there are some
> weird usages of prepare and complete in there (see msm_pm_prepare() and
> msm_pm_complete) which makes me think that it's all working around this
> component code.

Well, it looks like the "prepare" phase is used sort-of against the
rules (because "prepare" is not supposed to make changes to the
hardware configuration or at least that is not its role) in order to
work around an ordering issue that is present in shutdown which
doesn't have a "prepare" phase.

> The prepare phase is going to suspend the display pipeline, and then the
> bridge device will run its suspend hooks, and then the aggregate driver
> will run its suspend hooks. If we had a proper device for the aggregate
> device instead of the bind/unbind component hooks we could clean this
> up.

I'm not sufficiently familiar with the component code to add anything
constructive here, but generally speaking it looks like the "natural"
dpm_list ordering does not match the order in which the devices in
question should be suspended (or shut down for that matter), so indeed
it is necessary to reorder dpm_list this way or another.
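For illustration only, a minimal sketch of the kind of prepare/complete
arrangement described above; the foo_* names are hypothetical and this
is not the actual msm code. The point is just that every device's
->prepare runs before any device's ->suspend, and every ->complete runs
after all devices have resumed, which is why a driver can use these
hooks to quiesce the whole pipeline first:

    /*
     * Hypothetical sketch only -- names and structure are assumptions,
     * not the real msm driver.  The .prepare hook disables the display
     * pipeline before any component device (bridge, i2c bus, ...) runs
     * its suspend callbacks; .complete re-enables it only after all of
     * them have resumed.
     */
    #include <linux/device.h>
    #include <linux/pm.h>
    #include <drm/drm_drv.h>
    #include <drm/drm_modeset_helper.h>

    static int foo_drm_pm_prepare(struct device *dev)
    {
            struct drm_device *ddev = dev_get_drvdata(dev);

            /* Quiesce the pipeline while every component is still active. */
            return drm_mode_config_helper_suspend(ddev);
    }

    static void foo_drm_pm_complete(struct device *dev)
    {
            struct drm_device *ddev = dev_get_drvdata(dev);

            /* Restore the pipeline after every component has resumed. */
            drm_mode_config_helper_resume(ddev);
    }

    static const struct dev_pm_ops foo_drm_pm_ops = {
            .prepare = foo_drm_pm_prepare,
            .complete = foo_drm_pm_complete,
    };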
Please also note that it generally may not be sufficient to reorder
dpm_list if the devices are suspended and resumed asynchronously
during system-wide transitions, because in that case the callbacks of
different devices are only started in the dpm_list order, but they may
be completed in a different order (and of course they may run in
parallel with each other). Shutdown is simpler, because it runs the
callback synchronously for all devices IIRC.
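As a rough illustration of that point, a hypothetical helper (not code
from this thread) showing the two generic ways to make the suspend
ordering of one specific pair of devices robust: a device link, which
makes the PM core finish the consumer's callbacks before starting the
supplier's even when callbacks run asynchronously, or simply forcing
synchronous callbacks for the two devices so dpm_list order alone is
enough:

    /*
     * Hypothetical helper -- an illustration, not a proposal.  With a
     * device link, the consumer is suspended before (and resumed after)
     * the supplier, and the async PM code waits accordingly.
     */
    #include <linux/device.h>
    #include <linux/errno.h>
    #include <linux/pm.h>

    static int foo_enforce_suspend_order(struct device *consumer,
                                         struct device *supplier)
    {
            if (!device_link_add(consumer, supplier, DL_FLAG_STATELESS))
                    return -EINVAL;

            /*
             * Alternatively (or additionally), force strictly synchronous
             * callbacks for these two devices.
             */
            device_disable_async_suspend(consumer);
            device_disable_async_suspend(supplier);

            return 0;
    }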