From: Saravana Kannan
To: swboyd@chromium.org
Cc: bjorn.andersson@linaro.org, daidavid1@codeaurora.org, devicetree@vger.kernel.org, evgreen@chromium.org, georgi.djakov@linaro.org, linux-arm-kernel@lists.infradead.org, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, mripard@kernel.org, robh+dt@kernel.org, saravanak@google.com, kernel-team@android.com
Subject: [RFC PATCH] interconnect: Replace of_icc_get() with icc_get() and reduce DT binding
Date: Tue, 8 Oct 2019 17:11:59 -0700
Message-Id: <20191009001159.27761-1-saravanak@google.com>
In-Reply-To: <5d978bf9.1c69fb81.7b927.b6ac@mx.google.com>

Quoting Stephen Boyd:
> Quoting David Dai (2019-09-27 10:16:07)
> > On 9/25/2019 6:28 AM, Stephen Boyd wrote:
> > > Quoting Bjorn Andersson (2019-09-24 22:59:33)
> > >> On Tue 24 Sep 22:41 PDT 2019, Stephen Boyd wrote:
> > >>
> > >>> The DT binding could also be simplified somewhat. Currently a path needs to be specified in DT for each and every use case that is possible for a device to want. Typically the path is to memory, which looks to be reserved for in the binding with the "dma-mem" named path, but sometimes the path is from a device to the CPU or more generically from a device to another device which could be a CPU, cache, DMA master, or another device if some sort of DMA to DMA scenario is happening. Let's remove the pair part of the binding so that we just list out a device's possible endpoints on the bus or busses that it's connected to.
> > >>>
> > >>> If the kernel wants to figure out what the path is to memory or the CPU or a cache or something else it should be able to do that by finding the node for the "destination" endpoint, extracting that node's "interconnects" property, and deriving the path in software. For example, we shouldn't need to write out each use case path by path in DT for each endpoint node that wants to set a bandwidth to memory. We should just be able to indicate what endpoint(s) a device sits on based on the interconnect provider in the system and then walk the various interconnects to find the path from that source endpoint to the destination endpoint.
> > >>>
> > >> But doesn't this imply that the other end of the path is always some specific node, e.g. DDR? With a single node how would you describe CPU->LLCC or GPU->OCIMEM?
> > >
> > > By only specifying the endpoint the device uses it describes what the hardware block interfaces with. It doesn't imply that there's only one other end of the path. It implies that the paths should be discoverable by walking the interconnect graph given some source device node and target device node. In most cases the target device node will be a DDR controller node, but sometimes it could be LLCC or OCIMEM. We may need to add some sort of "get the DDR controller device" API or work it into the interconnect API somehow to indicate what target endpoint is desired. By not listing all those paths in DT we gain flexibility to add more paths later on without having to update or tweak DT to describe more paths/routes through the interconnect.
> >
> > I'm unsure that using the target device node or target source device is the correct way to represent the constraints that the consumers apply on the interconnects. While it's true the traffic is intended for the targeted devices, the constraints (QoS or BW) are for the interconnect or, specifically, the paths that span across the ports of various interconnects (NoC devices in this case). I think having both src and dst properties is still the simplest way to achieve the flexibility that we require to set the constraints for ports (that may not have a target device defined in DT, or exist as some intermediate port across multiple interconnects).
>
> The need for paths described in DT may make sense for certain cases, but that seems to be the minority. My guess is that maybe an OPP binding would need to describe the path to apply the bandwidth to. Otherwise I don't see what the need is for. Maybe you can list out more scenarios?
>
> Either way, the binding has been designed to cover all the possibilities by just saying that we have to describe at least two points for an 'interconnect'. It is a path based binding. I'd rather see us have an endpoint based binding with the option to fall back to paths if we need to constrain something. Maybe this can be a new property that is used the majority of the time?
>
> gpu@f00 {
>         interconnect-endpoints = <&icc GPU_SLAVE_PORT>, <&icc GPU_MASTER_PORT0>, <&icc GPU_MASTER_PORT1>;
>         interconnect-endpoint-names = "slave", "master0", "master1";
> };
>
> (Or we can invert it and make interconnect-paths be non-standard)
>
> The property would describe what's going to this device and how it's integrated into the SoC. This is similar to how we describe what port is connected to a device with the of graph binding, or how we only list the clk or regulator that goes to a device and not the whole path to the root of the respective tree.
>
> There can be a driver API that gets these port numbers out and constructs a path to another struct device or struct device_node. I imagine that 90% of the time a driver is going to request some bandwidth from their master port (or ports) to the DDR controller. We could either make the DDR controller a device that can be globally acquired, or integrate it deeply into the API to the point that it looks for a DDR controller somewhere or relies on interconnect providers to tell the framework about the controller.
>
> TL;DR is that I don't want to have to specify paths in each and every node to say that some port on this device here is connected to some port on the DDR controller and that we want to adjust the bandwidth or QoS across this path. I'd like to describe a device "hermetically" by listing out the ports the device has. Then we can rely on the OS to figure out what paths to construct and change. If we need to constrain or tweak those paths then we can do that with the existing interconnects binding, but let's worry about that when we get there.

I think I understand what you are trying to do here. Correct me if my understanding is wrong. Each device just lists what interconnects (and their ports) it's connected to -- let's call these the device endpoints. If a device is making bandwidth votes from itself to some other device, it just specifies the other endpoint (as a device? path name?) in icc_get(). The interconnect framework can then figure out which two interconnect ports the icc_get() is about (by looking at the device endpoint info) and then construct the path.

But it's not clear how you'll handle the case where a Device-A wants to make a bandwidth vote from a Device-B to a Device-C. This is necessary for multiple scenarios. E.g.: booting a remote proc, where the CPU needs to make sure the remote proc has its path to DDR active. icc_get() can't always assume the source is the device making the request. So, I don't think you can omit the source of the path in the DT binding.
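For reference, here is roughly how the remote proc case is expressed today with the existing binding and of_icc_get(), where the DT names both ends of the path so the source is explicit rather than inferred from the caller. This is only a sketch: the "rproc-ddr" path name, the DT fragment in the comment, the port macros, and the bandwidth numbers are all made up for illustration.

	/*
	 * Hypothetical consumer sketch against the current interconnect API.
	 * Assume the remote proc's DT node carries something like:
	 *
	 *     interconnects = <&noc MASTER_RPROC &noc SLAVE_DDR>;
	 *     interconnect-names = "rproc-ddr";
	 *
	 * so the source of the path is the rproc's master port, not the CPU
	 * that happens to be making the request.
	 */
	#include <linux/err.h>
	#include <linux/interconnect.h>
	#include <linux/platform_device.h>

	static int rproc_vote_ddr_bw(struct platform_device *pdev)
	{
		struct icc_path *path;
		int ret;

		/* Look up the rproc -> DDR path by the name given in DT. */
		path = of_icc_get(&pdev->dev, "rproc-ddr");
		if (IS_ERR(path))
			return PTR_ERR(path);

		/* Request average/peak bandwidth in kBps (numbers made up). */
		ret = icc_set_bw(path, 100000, 200000);
		if (ret) {
			icc_put(path);
			return ret;
		}

		/* A real driver would stash 'path' and icc_put() it later. */
		return 0;
	}

With an endpoint-only binding, nothing in a call like this tells the framework that the source should be the remote proc's master port rather than the CPU making the call.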
If we take the above into account, would the only change in your proposal be to list the source and destination device in DT instead of their interconnects and ports? I don't have a strong opinion on whether this is necessary, but I want to make sure that we are all talking about the same thing.

Another way to look at this: there's one crucial difference between clocks and interconnects. Given a clock controller and its "output port", the clock you are referring to doesn't change irrespective of which device is asking for it. But in the case of an interconnect, if you specify just a destination interconnect and its port, the path you are referring to changes based on which device is requesting it. And if you want a device-independent way of referring to a path, you need to specify the source and destination explicitly.

Also, if firmware isn't used, how do you see your icc_get() proposal working with just the "name"? In what way is it better than the current icc_get() API?

Thanks,
Saravana

P.S.: Sorry about any messy quoting/indentation. I'm hand-editing the quoting from the raw email text and using git to send a response.