Received: by 10.223.185.116 with SMTP id b49csp2633677wrg; Mon, 5 Mar 2018 06:20:44 -0800 (PST) X-Google-Smtp-Source: AG47ELualOTZR7iTR6nP6kF4sGNccwQWWsFuViYVSvTfNxhAMKyowwS4oad83TwNovvBKEx5/2dv X-Received: by 10.98.149.90 with SMTP id p87mr15440303pfd.28.1520259644815; Mon, 05 Mar 2018 06:20:44 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1520259644; cv=none; d=google.com; s=arc-20160816; b=vYgf5thhuf79SQoBwys5AFiX7PK7YqGUKxBjJvBBpnZUOf4KKqyh5QskfsmdPIqZqv UOFOdO1rUYzFAFrEBFP3yqTttnfVB06VAbBA3cpDkRcvXZfFPX5Ex5NI76jkPCGcmLVj TMPRYvP50mJR4lph+3QkfTeKmlP/w740h7LWaqg3FtKwOXNZBYrz9Ppsgq+JUlgtCVER qpx9pmL8Gk9R2bWTJJf/o5VI45/lpiiAw7uZGVpSe7pS25Xg0UCsZ3+4KeGKNwd/9vD2 oXsZG36REbMtBHDlsK05CJjxnXfzXXqPRNvuGqHa31wLbmi6itwtVyS//i4i17Qu+QwW 3xNQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:cc:to:subject:message-id:date:from :references:in-reply-to:mime-version:dkim-signature :arc-authentication-results; bh=WVMpAQZE6wGRHJYPPPwoZ5A0iYjO4bdlrpdrPgaxkb8=; b=rRZwS6sXGd3ifSkK+dQwEH3dxgY+eS/68L+ysiNbSdqrPtikm/LYcHde9uW7FHgmCs uBMpYT8GXzj2o0uDKeEW5GqzLHEZ8JfKQIKEcPkCRw+/sLOe+lGZLQQttxvObREcyH5U YT7sHXEX8I7TW+CFWhofDv+fDMWGsulYtHyVvLtCptIRGMoA3+ul3BMovlf8dpslnFy8 opevvAZwPQLUaU+4ZoTFjbFPQ8D4Tw4ytTPmgJMhwy7M05amo3FP4Ck2lETzVeEiVBPk uGdRFixNr6A/LflihcrpM91yKUhuiwNW0l2kREiNAFEjkBhBhqnr8LUwF4BrZDq/WPNC LHdQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@chromium.org header.s=google header.b=JpK7JfBO; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=chromium.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. [209.132.180.67]) by mx.google.com with ESMTP id m10si8454382pgs.236.2018.03.05.06.20.30; Mon, 05 Mar 2018 06:20:44 -0800 (PST) Received-SPF: pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; dkim=pass header.i=@chromium.org header.s=google header.b=JpK7JfBO; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=chromium.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S935282AbeCENtk (ORCPT + 99 others); Mon, 5 Mar 2018 08:49:40 -0500 Received: from mail-ua0-f196.google.com ([209.85.217.196]:46790 "EHLO mail-ua0-f196.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S933502AbeCENtg (ORCPT ); Mon, 5 Mar 2018 08:49:36 -0500 Received: by mail-ua0-f196.google.com with SMTP id d1so9504206ual.13 for ; Mon, 05 Mar 2018 05:49:36 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=mime-version:in-reply-to:references:from:date:message-id:subject:to :cc; bh=WVMpAQZE6wGRHJYPPPwoZ5A0iYjO4bdlrpdrPgaxkb8=; b=JpK7JfBOyzpLo9mCKqkncY1d3IoP/8tFl2nitQUWTnWmM3zs8JKozBNm90MmoAmLze kpRlhFb2TkhUBOZdgz9E8U9JMJKOyP4s2yWmQnjvzhXmwssQYHVDI47ea8uogqIwQk5b tLuQ2TFXsuRYgsmDjkBm7RSa3LZkFO/gd4eWU= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:mime-version:in-reply-to:references:from:date :message-id:subject:to:cc; bh=WVMpAQZE6wGRHJYPPPwoZ5A0iYjO4bdlrpdrPgaxkb8=; b=QfQfPx54EbxUHvy0NAQaInxb/tpSYYdB98yR31Ve54nelfRkQ+2Ck2k6La7aWiz8DB QQEtdzKbeWbrccyeMV2b+ECf3DxP2VtRsP+6KWAV64hLrsTNznXIPUUdTxvhl6XpDZxd 8RJcTxem6d6z/q5Lm8c/02KZ6XZ3MboDrCtKs4OYZUnr9yEOzhqL/20mVahPfbuRUalB jQX5WS4djjl/U8pAeNg8v4//6OD9y9zklAWiIB21l5f2loAg/ktJ8OmhPC7tzLQEy6bJ dU8W3XtNPn5DjiChytWv/Sg+ZvagE1dsvooAmSUoWutbtXGGhmRBf8G5zsan25XfJ47a De0A== X-Gm-Message-State: APf1xPCviKicf/sYSIn5MdNOvPPO1VSpTfDbPT/pTOer6834wxV8tJ1P 2KDpSO1gSP/xyekfb+72xqWUDjlzp6k= X-Received: by 10.176.81.137 with SMTP id g9mr10621907uaa.171.1520257775537; Mon, 05 Mar 2018 05:49:35 -0800 (PST) Received: from mail-ua0-f176.google.com (mail-ua0-f176.google.com. [209.85.217.176]) by smtp.gmail.com with ESMTPSA id 191sm7351339vkv.45.2018.03.05.05.49.33 for (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Mon, 05 Mar 2018 05:49:34 -0800 (PST) Received: by mail-ua0-f176.google.com with SMTP id x4so10422980uaj.11 for ; Mon, 05 Mar 2018 05:49:33 -0800 (PST) X-Received: by 10.176.71.234 with SMTP id w42mr10622735uac.132.1520257773330; Mon, 05 Mar 2018 05:49:33 -0800 (PST) MIME-Version: 1.0 Received: by 10.176.0.99 with HTTP; Mon, 5 Mar 2018 05:49:12 -0800 (PST) In-Reply-To: <20180301101837.27969-14-jeffy.chen@rock-chips.com> References: <20180301101837.27969-1-jeffy.chen@rock-chips.com> <20180301101837.27969-14-jeffy.chen@rock-chips.com> From: Tomasz Figa Date: Mon, 5 Mar 2018 22:49:12 +0900 X-Gmail-Original-Message-ID: Message-ID: Subject: Re: [RESEND PATCH v6 13/14] iommu/rockchip: Add runtime PM support To: Jeffy Chen Cc: Linux Kernel Mailing List , Ricky Liang , Robin Murphy , simon xue , Heiko Stuebner , "open list:ARM/Rockchip SoC..." , "open list:IOMMU DRIVERS" , Joerg Roedel , "list@263.net:IOMMU DRIVERS , Joerg Roedel ," Content-Type: text/plain; charset="UTF-8" Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Hi Jeffy, On Thu, Mar 1, 2018 at 7:18 PM, Jeffy Chen wrote: > When the power domain is powered off, the IOMMU cannot be accessed and > register programming must be deferred until the power domain becomes > enabled. > > Add runtime PM support, and use runtime PM device link from IOMMU to > master to startup and shutdown IOMMU. > > Signed-off-by: Jeffy Chen > --- > > Changes in v6: None > Changes in v5: > Avoid race about pm_runtime_get_if_in_use() and pm_runtime_enabled(). > > Changes in v4: None > Changes in v3: > Only call startup() and shutdown() when iommu attached. > Remove pm_mutex. > Check runtime PM disabled. > Check pm_runtime in rk_iommu_irq(). > > Changes in v2: None > > drivers/iommu/rockchip-iommu.c | 181 +++++++++++++++++++++++++++++++---------- > 1 file changed, 140 insertions(+), 41 deletions(-) > > diff --git a/drivers/iommu/rockchip-iommu.c b/drivers/iommu/rockchip-iommu.c > index 2448a0528e39..0e0a42f41818 100644 > --- a/drivers/iommu/rockchip-iommu.c > +++ b/drivers/iommu/rockchip-iommu.c > @@ -22,6 +22,7 @@ > #include > #include > #include > +#include > #include > #include > > @@ -106,6 +107,7 @@ struct rk_iommu { > }; > > struct rk_iommudata { > + struct device_link *link; /* runtime PM link from IOMMU to master */ Kerneldoc comment for the struct could be added instead. > struct rk_iommu *iommu; > }; > > @@ -518,7 +520,12 @@ static irqreturn_t rk_iommu_irq(int irq, void *dev_id) > u32 int_status; > dma_addr_t iova; > irqreturn_t ret = IRQ_NONE; > - int i; > + int i, err, need_runtime_put; nit: need_runtime_put could be a bool. > + > + err = pm_runtime_get_if_in_use(iommu->dev); > + if (err <= 0 && err != -EINVAL) > + return ret; > + need_runtime_put = err > 0; Generally something must be really wrong if we end up with err == 0 here, because the IOMMU must be powered on to signal an interrupt. The only case this could happen would be if the IRQ signal was shared with some device from another power domain. Is it possible on Rockchip SoCs? If not, perhaps we should have a WARN_ON() here for such case. > > WARN_ON(clk_bulk_enable(iommu->num_clocks, iommu->clocks)); > > @@ -570,6 +577,9 @@ static irqreturn_t rk_iommu_irq(int irq, void *dev_id) > > clk_bulk_disable(iommu->num_clocks, iommu->clocks); > > + if (need_runtime_put) > + pm_runtime_put(iommu->dev); > + > return ret; > } > > @@ -611,10 +621,20 @@ static void rk_iommu_zap_iova(struct rk_iommu_domain *rk_domain, > spin_lock_irqsave(&rk_domain->iommus_lock, flags); > list_for_each(pos, &rk_domain->iommus) { > struct rk_iommu *iommu; > + int ret; > + > iommu = list_entry(pos, struct rk_iommu, node); > - WARN_ON(clk_bulk_enable(iommu->num_clocks, iommu->clocks)); > - rk_iommu_zap_lines(iommu, iova, size); > - clk_bulk_disable(iommu->num_clocks, iommu->clocks); > + > + /* Only zap TLBs of IOMMUs that are powered on. */ > + ret = pm_runtime_get_if_in_use(iommu->dev); > + if (ret > 0 || ret == -EINVAL) { > + WARN_ON(clk_bulk_enable(iommu->num_clocks, > + iommu->clocks)); > + rk_iommu_zap_lines(iommu, iova, size); > + clk_bulk_disable(iommu->num_clocks, iommu->clocks); > + } > + if (ret > 0) > + pm_runtime_put(iommu->dev); > } > spin_unlock_irqrestore(&rk_domain->iommus_lock, flags); > } > @@ -817,22 +837,30 @@ static struct rk_iommu *rk_iommu_from_dev(struct device *dev) > return data ? data->iommu : NULL; > } > > -static int rk_iommu_attach_device(struct iommu_domain *domain, > - struct device *dev) > +/* Must be called with iommu powered on and attached */ > +static void rk_iommu_shutdown(struct rk_iommu *iommu) > { > - struct rk_iommu *iommu; > + int i; > + > + /* Ignore error while disabling, just keep going */ > + WARN_ON(clk_bulk_enable(iommu->num_clocks, iommu->clocks)); > + rk_iommu_enable_stall(iommu); > + rk_iommu_disable_paging(iommu); > + for (i = 0; i < iommu->num_mmu; i++) { > + rk_iommu_write(iommu->bases[i], RK_MMU_INT_MASK, 0); > + rk_iommu_write(iommu->bases[i], RK_MMU_DTE_ADDR, 0); > + } > + rk_iommu_disable_stall(iommu); > + clk_bulk_disable(iommu->num_clocks, iommu->clocks); > +} > + > +/* Must be called with iommu powered on and attached */ > +static int rk_iommu_startup(struct rk_iommu *iommu) > +{ > + struct iommu_domain *domain = iommu->domain; > struct rk_iommu_domain *rk_domain = to_rk_domain(domain); > - unsigned long flags; > int ret, i; > > - /* > - * Allow 'virtual devices' (e.g., drm) to attach to domain. > - * Such a device does not belong to an iommu group. > - */ > - iommu = rk_iommu_from_dev(dev); > - if (!iommu) > - return 0; > - > ret = clk_bulk_enable(iommu->num_clocks, iommu->clocks); > if (ret) > return ret; > @@ -845,8 +873,6 @@ static int rk_iommu_attach_device(struct iommu_domain *domain, > if (ret) > goto out_disable_stall; > > - iommu->domain = domain; > - > for (i = 0; i < iommu->num_mmu; i++) { > rk_iommu_write(iommu->bases[i], RK_MMU_DTE_ADDR, > rk_domain->dt_dma); > @@ -855,14 +881,6 @@ static int rk_iommu_attach_device(struct iommu_domain *domain, > } > > ret = rk_iommu_enable_paging(iommu); > - if (ret) > - goto out_disable_stall; > - > - spin_lock_irqsave(&rk_domain->iommus_lock, flags); > - list_add_tail(&iommu->node, &rk_domain->iommus); > - spin_unlock_irqrestore(&rk_domain->iommus_lock, flags); > - > - dev_dbg(dev, "Attached to iommu domain\n"); > > out_disable_stall: > rk_iommu_disable_stall(iommu); > @@ -877,31 +895,76 @@ static void rk_iommu_detach_device(struct iommu_domain *domain, > struct rk_iommu *iommu; > struct rk_iommu_domain *rk_domain = to_rk_domain(domain); > unsigned long flags; > - int i; > + int ret; > > /* Allow 'virtual devices' (eg drm) to detach from domain */ > iommu = rk_iommu_from_dev(dev); > if (!iommu) > return; > > + dev_dbg(dev, "Detaching from iommu domain\n"); > + > + /* iommu already detached */ > + if (iommu->domain != domain) > + return; > + > + iommu->domain = NULL; > + > spin_lock_irqsave(&rk_domain->iommus_lock, flags); > list_del_init(&iommu->node); > spin_unlock_irqrestore(&rk_domain->iommus_lock, flags); > > - /* Ignore error while disabling, just keep going */ > - WARN_ON(clk_bulk_enable(iommu->num_clocks, iommu->clocks)); > - rk_iommu_enable_stall(iommu); > - rk_iommu_disable_paging(iommu); > - for (i = 0; i < iommu->num_mmu; i++) { > - rk_iommu_write(iommu->bases[i], RK_MMU_INT_MASK, 0); > - rk_iommu_write(iommu->bases[i], RK_MMU_DTE_ADDR, 0); > - } > - rk_iommu_disable_stall(iommu); > - clk_bulk_disable(iommu->num_clocks, iommu->clocks); > + ret = pm_runtime_get_if_in_use(iommu->dev); > + if (ret > 0 || ret == -EINVAL) > + rk_iommu_shutdown(iommu); > + if (ret > 0) > + pm_runtime_put(iommu->dev); > +} > > - iommu->domain = NULL; > +static int rk_iommu_attach_device(struct iommu_domain *domain, > + struct device *dev) > +{ > + struct rk_iommu *iommu; > + struct rk_iommu_domain *rk_domain = to_rk_domain(domain); > + unsigned long flags; > + int ret, need_runtime_put; nit: need_runtime_put could be bool Best regards, Tomasz