Subject: Re: [PATCH v7 6/6] drm/msm: iommu: Replace runtime calls with runtime suppliers
To: Tomasz Figa
Cc: Vivek Gautam, Will Deacon, Rob Clark, "list@263.net:IOMMU DRIVERS", Joerg Roedel, Rob Herring, Mark Rutland, "Rafael J. Wysocki", devicetree@vger.kernel.org, Linux Kernel Mailing List, Linux PM, dri-devel, freedreno, David Airlie, Greg KH, Stephen Boyd, linux-arm-msm, jcrouse@codeaurora.org
From: Robin Murphy
Message-ID: <28466b36-b5d3-4f60-a45e-b75d79c2a3cb@arm.com>
Date: Thu, 22 Feb 2018 13:45:34 +0000
References: <1517999482-17317-1-git-send-email-vivek.gautam@codeaurora.org> <7406f1ce-c2c9-a6bd-2886-5a34de45add6@arm.com>

[sorry, I had intended to reply sooner but clearly forgot]

On 16/02/18 00:13, Tomasz Figa wrote:
> On Fri, Feb 16, 2018 at 2:14 AM, Robin Murphy wrote:
>> On 15/02/18 04:17, Tomasz Figa wrote:
>> [...]
>>>>
>>>> Could you elaborate on what kind of locking you are concerned about?
>>>> As I explained before, the normally happening fast path would lock
>>>> dev->power_lock only for the brief moment of incrementing the runtime
>>>> PM usage counter.
>>>
>>> My bad, that's not even it.
>>>
>>> The atomic usage counter is incremented beforehand, without any
>>> locking [1], and the spinlock is acquired only for the sake of
>>> validating that the device's runtime PM state remained valid indeed
>>> [2], which would be the case in the fast path of the same driver doing
>>> two mappings in parallel, with the master powered on (and so the SMMU,
>>> through device links; if the master was not powered on already,
>>> powering on the SMMU is unavoidable anyway, and it would add much more
>>> latency than the spinlock itself).
>>
>> We now have no locking at all in the map path, and only a per-domain
>> lock around TLB sync in unmap, which is unfortunately necessary for
>> correctness; the latter isn't too terrible, since in "serious" hardware
>> it should only be serialising a few CPUs serving the same device
>> against each other (e.g. for multiple queues on a single NIC).
>>
>> Putting in a global lock which serialises *all* concurrent map and
>> unmap calls for *all* unrelated devices makes things worse. Period.
>> Even if the lock itself were held for the minimum possible time, i.e.
>> trivially "spin_lock(&lock); spin_unlock(&lock)", the cost of
>> repeatedly bouncing that one cache line around between 96 CPUs across
>> two sockets is not negligible.
>
> Fair enough. Note that we're in a quite interesting situation now:
> a) We need to have runtime PM enabled on the Qualcomm SoC to have power
> properly managed,
> b) We need to have lock-free map/unmap on such distributed systems,
> c) If runtime PM is enabled, we need to call into runtime PM from any
> code that does hardware accesses, otherwise the IOMMU API (and so the
> DMA API, and then any V4L2 driver) becomes unusable.
>
> I can see one more way that could potentially let us have all three.
> How about enabling runtime PM only on selected implementations
> (e.g. qcom,smmu) and then having all the runtime PM calls surrounded
> with if (pm_runtime_enabled()), which is lockless?

Yes, that's the kind of thing I was gravitating towards - my vague thought was adding some flag to the smmu_domain, but pm_runtime_enabled() does look conceptually a lot cleaner.
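As a rough sketch of the guarded-call pattern being proposed here (this is not the actual patch; the struct and pm_* functions below are minimal userspace stand-ins for the kernel runtime PM APIs of the same names, and smmu_map_sketch is a hypothetical map path), the idea might look like:

```c
/* Sketch: guard every runtime PM call with pm_runtime_enabled(), which
 * is a lockless check, so implementations that never enable runtime PM
 * (e.g. large server SMMUs) skip the calls and their locking entirely.
 * All types and functions here are illustrative stand-ins, not the
 * real kernel implementations. */
#include <assert.h>
#include <stdbool.h>

struct device {
	bool rpm_enabled;	/* true only for, e.g., qcom,smmu */
	int usage_count;	/* stands in for the atomic usage counter */
	int gets;		/* counts "get" calls, for illustration only */
};

static bool pm_runtime_enabled(struct device *dev)
{
	return dev->rpm_enabled;
}

static int pm_runtime_get_sync(struct device *dev)
{
	dev->usage_count++;
	dev->gets++;
	/* the real call may also have to power the device up here */
	return 0;
}

static void pm_runtime_put(struct device *dev)
{
	dev->usage_count--;
}

/* Hypothetical map path: the hardware access is bracketed by
 * conditional runtime PM, so the common case with runtime PM disabled
 * stays a plain flag test with no locking at all. */
static int smmu_map_sketch(struct device *smmu)
{
	if (pm_runtime_enabled(smmu))
		pm_runtime_get_sync(smmu);

	/* ... program the page tables here ... */

	if (pm_runtime_enabled(smmu))
		pm_runtime_put(smmu);

	return 0;
}
```

On an implementation that enabled runtime PM, the usage counter is taken and released around the hardware access; on everything else, both branches reduce to reading one flag, which is the "zero impact" property discussed above.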
>>
>>> [1]
>>> http://elixir.free-electrons.com/linux/v4.16-rc1/source/drivers/base/power/runtime.c#L1028
>>> [2]
>>> http://elixir.free-electrons.com/linux/v4.16-rc1/source/drivers/base/power/runtime.c#L613
>>>
>>> In any case, I can't imagine this working with V4L2 or anything else
>>> relying on any memory management more generic than calling the IOMMU
>>> API directly from the driver, with the IOMMU device having runtime PM
>>> enabled, but without managing the runtime PM from the IOMMU driver's
>>> callbacks that need access to the hardware. As I mentioned before,
>>> only the IOMMU driver knows when exactly the real hardware access
>>> needs to be done (e.g. Rockchip/Exynos don't need to do that for
>>> map/unmap if the power is down, but some implementations of SMMU with
>>> the TLB powered separately might need to do so).
>>
>> It's worth noting that Exynos and Rockchip are relatively small,
>> self-contained IP blocks integrated closely with the interfaces of
>> their relevant master devices; SMMU is an architecture, implementations
>> of which may be large, distributed, and have complex and wildly
>> differing internal topologies. As such, it's a lot harder to make
>> hardware-specific assumptions and/or be correct for all possible cases.
>>
>> Don't get me wrong, I do ultimately agree that the IOMMU driver is the
>> only agent who knows what calls are going to be necessary for whatever
>> operation it's performing on its own hardware*; it's just that for SMMU
>> it needs to be implemented in a way that has zero impact on the cases
>> where it doesn't matter, because it's not viable to specialise that
>> driver for any particular IP implementation/use-case.
>
> Still, exactly the same holds for the low-power embedded use cases,
> where we strive for the lowest possible power consumption while also
> maintaining high performance levels. And so the SMMU code is expected
> to work with our use cases too, such as V4L2 or DRM drivers.
> Since these points don't hold for the current SMMU code, I could say
> that it has already been specialised for large, distributed
> implementations.

Heh, really it's specialised for ease of maintenance, in terms of doing as little as we can get away with; but for what we have implemented, fast code does save CPU cycles and power on embedded systems too ;)

Robin.