Subject: Re: [PATCH v2 5/7] drm/panfrost: Add support for multiple power domain support
To: Nicolas Boichat, Rob Herring
Cc: Mark Rutland, devicetree@vger.kernel.org, Tomeu Vizoso, David Airlie, linux-kernel@vger.kernel.org, Liam Girdwood, dri-devel@lists.freedesktop.org, Mark Brown, linux-mediatek@lists.infradead.org, Alyssa Rosenzweig, hsinyi@chromium.org, Matthias Brugger, linux-arm-kernel@lists.infradead.org
References: <20200108052337.65916-1-drinkcat@chromium.org> <20200108052337.65916-6-drinkcat@chromium.org>
From: Steven Price
Date: Thu, 9 Jan 2020 14:08:48 +0000
In-Reply-To: <20200108052337.65916-6-drinkcat@chromium.org>
On 08/01/2020 05:23, Nicolas Boichat wrote:
> When there is a single power domain per device, the core will
> ensure the power domain is switched on.
>
> However, when there are multiple ones, as in the MT8183 Bifrost GPU,
> we need to handle them in driver code.
>
> Signed-off-by: Nicolas Boichat
> ---
>
> The downstream driver we use on chromeos-4.19 currently uses 2
> additional devices in the device tree to accommodate this [1], but
> I believe this solution is cleaner.

I'm not sure what is best, but it seems odd to encode this into the
Panfrost driver itself - it doesn't have any knowledge of what to do
with these power domains. The naming of the domains looks suspiciously
like someone thought that e.g. only half of the cores could be powered,
but it doesn't look like that was implemented in the chromeos driver
linked, and anyway that is *meant* to be automatic in the hardware!
(I.e. if you only power up the cores in one core stack then the PDC
should only enable the power domain for that set of cores.)
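For reference, the multi-domain case this patch handles would correspond to a GPU node along these lines - the domain names, indices and phandles below are my guesses for illustration, not taken from an actual MT8183 device tree:

```dts
gpu: gpu@13040000 {
	compatible = "mediatek,mt8183-mali", "arm,mali-bifrost";
	reg = <0x13040000 0x4000>;
	/* Three domains: of_count_phandle_with_args() returns 3,
	 * so the driver attaches each one by index and adds a
	 * device link to keep it powered with the GPU. */
	power-domains = <&spm MT8183_POWER_DOMAIN_MFG_CORE0>,
			<&spm MT8183_POWER_DOMAIN_MFG_CORE1>,
			<&spm MT8183_POWER_DOMAIN_MFG_2D>;
};
```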
Steve

>
> [1] https://chromium.googlesource.com/chromiumos/third_party/kernel/+/refs/heads/chromeos-4.19/drivers/gpu/arm/midgard/platform/mediatek/mali_kbase_runtime_pm.c#31
>
>  drivers/gpu/drm/panfrost/panfrost_device.c | 87 ++++++++++++++++++++--
>  drivers/gpu/drm/panfrost/panfrost_device.h |  4 +
>  2 files changed, 83 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/gpu/drm/panfrost/panfrost_device.c b/drivers/gpu/drm/panfrost/panfrost_device.c
> index a0b0a6fef8b4e63..c6e9e059de94a4d 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_device.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_device.c
> @@ -5,6 +5,7 @@
>  #include
>  #include
>  #include
> +#include <linux/pm_domain.h>
>  #include
>
>  #include "panfrost_device.h"
> @@ -131,6 +132,67 @@ static void panfrost_regulator_fini(struct panfrost_device *pfdev)
>  	regulator_disable(pfdev->regulator_sram);
>  }
>
> +static void panfrost_pm_domain_fini(struct panfrost_device *pfdev)
> +{
> +	int i;
> +
> +	for (i = 0; i < ARRAY_SIZE(pfdev->pm_domain_devs); i++) {
> +		if (!pfdev->pm_domain_devs[i])
> +			break;
> +
> +		if (pfdev->pm_domain_links[i])
> +			device_link_del(pfdev->pm_domain_links[i]);
> +
> +		dev_pm_domain_detach(pfdev->pm_domain_devs[i], true);
> +	}
> +}
> +
> +static int panfrost_pm_domain_init(struct panfrost_device *pfdev)
> +{
> +	int err;
> +	int i, num_domains;
> +
> +	num_domains = of_count_phandle_with_args(pfdev->dev->of_node,
> +						 "power-domains",
> +						 "#power-domain-cells");
> +	/* Single domains are handled by the core. */
> +	if (num_domains < 2)
> +		return 0;
> +
> +	if (num_domains > ARRAY_SIZE(pfdev->pm_domain_devs)) {
> +		dev_err(pfdev->dev, "Too many pm-domains: %d\n", num_domains);
> +		return -EINVAL;
> +	}
> +
> +	for (i = 0; i < num_domains; i++) {
> +		pfdev->pm_domain_devs[i] =
> +			dev_pm_domain_attach_by_id(pfdev->dev, i);
> +		if (IS_ERR(pfdev->pm_domain_devs[i])) {
> +			err = PTR_ERR(pfdev->pm_domain_devs[i]);
> +			pfdev->pm_domain_devs[i] = NULL;
> +			dev_err(pfdev->dev,
> +				"failed to get pm-domain %d: %d\n", i, err);
> +			goto err;
> +		}
> +
> +		pfdev->pm_domain_links[i] = device_link_add(pfdev->dev,
> +				pfdev->pm_domain_devs[i], DL_FLAG_PM_RUNTIME |
> +				DL_FLAG_STATELESS | DL_FLAG_RPM_ACTIVE);
> +		if (!pfdev->pm_domain_links[i]) {
> +			dev_err(pfdev->pm_domain_devs[i],
> +				"adding device link failed!\n");
> +			err = -ENODEV;
> +			goto err;
> +		}
> +	}
> +
> +	return 0;
> +
> +err:
> +	panfrost_pm_domain_fini(pfdev);
> +	return err;
> +}
> +
>  int panfrost_device_init(struct panfrost_device *pfdev)
>  {
>  	int err;
> @@ -161,37 +223,45 @@ int panfrost_device_init(struct panfrost_device *pfdev)
>  		goto err_out1;
>  	}
>
> +	err = panfrost_pm_domain_init(pfdev);
> +	if (err) {
> +		dev_err(pfdev->dev, "pm_domain init failed %d\n", err);
> +		goto err_out2;
> +	}
> +
>  	res = platform_get_resource(pfdev->pdev, IORESOURCE_MEM, 0);
>  	pfdev->iomem = devm_ioremap_resource(pfdev->dev, res);
>  	if (IS_ERR(pfdev->iomem)) {
>  		dev_err(pfdev->dev, "failed to ioremap iomem\n");
>  		err = PTR_ERR(pfdev->iomem);
> -		goto err_out2;
> +		goto err_out3;
>  	}
>
>  	err = panfrost_gpu_init(pfdev);
>  	if (err)
> -		goto err_out2;
> +		goto err_out3;
>
>  	err = panfrost_mmu_init(pfdev);
>  	if (err)
> -		goto err_out3;
> +		goto err_out4;
>
>  	err = panfrost_job_init(pfdev);
>  	if (err)
> -		goto err_out4;
> +		goto err_out5;
>
>  	err = panfrost_perfcnt_init(pfdev);
>  	if (err)
> -		goto err_out5;
> +		goto err_out6;
>
>  	return 0;
> -err_out5:
> +err_out6:
>  	panfrost_job_fini(pfdev);
> -err_out4:
> +err_out5:
>  	panfrost_mmu_fini(pfdev);
> -err_out3:
> +err_out4:
>  	panfrost_gpu_fini(pfdev);
> +err_out3:
> +	panfrost_pm_domain_fini(pfdev);
>  err_out2:
>  	panfrost_reset_fini(pfdev);
>  err_out1:
> @@ -208,6 +278,7 @@ void panfrost_device_fini(struct panfrost_device *pfdev)
>  	panfrost_mmu_fini(pfdev);
>  	panfrost_gpu_fini(pfdev);
>  	panfrost_reset_fini(pfdev);
> +	panfrost_pm_domain_fini(pfdev);
>  	panfrost_regulator_fini(pfdev);
>  	panfrost_clk_fini(pfdev);
>  }
> diff --git a/drivers/gpu/drm/panfrost/panfrost_device.h b/drivers/gpu/drm/panfrost/panfrost_device.h
> index a124334d69e7e93..92d471676fc7823 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_device.h
> +++ b/drivers/gpu/drm/panfrost/panfrost_device.h
> @@ -19,6 +19,7 @@ struct panfrost_job;
>  struct panfrost_perfcnt;
>
>  #define NUM_JOB_SLOTS 3
> +#define MAX_PM_DOMAINS 3
>
>  struct panfrost_features {
>  	u16 id;
> @@ -62,6 +63,9 @@ struct panfrost_device {
>  	struct regulator *regulator;
>  	struct regulator *regulator_sram;
>  	struct reset_control *rstc;
> +	/* pm_domains for devices with more than one. */
> +	struct device *pm_domain_devs[MAX_PM_DOMAINS];
> +	struct device_link *pm_domain_links[MAX_PM_DOMAINS];
>
>  	struct panfrost_features features;
>