Received: by 10.223.164.221 with SMTP id h29csp2314939wrb; Thu, 2 Nov 2017 09:01:55 -0700 (PDT) X-Google-Smtp-Source: ABhQp+RNtrBZOhialIwciTNd9iUg6coyY2xZx/nOx0V/AKu1B6qvcUFOP/tde58Om/OJifRsMW7e X-Received: by 10.98.62.81 with SMTP id l78mr4206160pfa.171.1509638515396; Thu, 02 Nov 2017 09:01:55 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1509638515; cv=none; d=google.com; s=arc-20160816; b=CO0BjiDKl8vvQf6HkAYUWPuLQ108z6GY7+GtPle5rjSQbGKpJ6kuA/jQxsvCgWt1pP 6hOdR1ADBVsQ7NTQ+Q19k2mjO2FaeESSQk5NIxB5oJiGdBXS96dQ9dNSDQsTNh3WqLyY UQdcOzob4RNw0ePHwaFdi3hJElnoe3WKwDsdSEcnS98tlleSpRtj6J712+n/5a81nym6 0etvULDVKmEVu1AfznxXtvLiDDm//bn/xOct3k+hal59PAn6mNaR7hBGYgEJ3gWSjc58 YLHFL1NOp2dbMgXV9gJO8RudrT89vdHKSSprszTz9f2YbDQW7a8RNdU56FAqsWRMxzKw MOSw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:content-transfer-encoding :content-language:in-reply-to:mime-version:user-agent:date :message-id:from:references:cc:to:subject:dkim-signature :arc-authentication-results; bh=R+mRt6lbrZrM9k0/iqDkD2au/pdEH6Nbb0RZkLKxHJ0=; b=IDO0jMRGR9o2jjnn6jGhCwdQwetVrE8k1zOqw9buvkAsTcL1M6zxC9Rsck2Mrv9voz b9NozAe3v9WcA3VKJev42Agg+F4/9e9vULR2jSPh7il4JQbiB8HJTM0kj7QFQpOuQT2L 86pF19QspD4IR4iT7kG9TajD6+uJ6R1Lt9X+95HT/fJoqAyCAn0IurMj2RVjNl1u9Ads hgIF87vmKzxfRCTM+oQSKqOozwoUpkRj7pG2ZP3oojJJtl+uroUSMZl4CI0BNQMgCTFt GoF6kO94mK6AdNfF47YlkAMz32q7lSvHlgSzurCEbbUf/04Rm7vSg0Wa9qAmS+oyLdya MWpA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=FkU07Mog; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. [209.132.180.67]) by mx.google.com with ESMTP id p6si1665081pgf.114.2017.11.02.09.01.38; Thu, 02 Nov 2017 09:01:55 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) client-ip=209.132.180.67; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=FkU07Mog; spf=pass (google.com: best guess record for domain of linux-kernel-owner@vger.kernel.org designates 209.132.180.67 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S933999AbdKBQAx (ORCPT + 97 others); Thu, 2 Nov 2017 12:00:53 -0400 Received: from mail-lf0-f68.google.com ([209.85.215.68]:53854 "EHLO mail-lf0-f68.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S933784AbdKBQAu (ORCPT ); Thu, 2 Nov 2017 12:00:50 -0400 Received: by mail-lf0-f68.google.com with SMTP id l23so7083380lfk.10 for ; Thu, 02 Nov 2017 09:00:49 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=subject:to:cc:references:from:message-id:date:user-agent :mime-version:in-reply-to:content-language:content-transfer-encoding; bh=R+mRt6lbrZrM9k0/iqDkD2au/pdEH6Nbb0RZkLKxHJ0=; b=FkU07MogQpwJgGLhKZcYFOqGuxM6z42pkRZxENnc83Ngd9a3phW0JLd+5HqlClXhKG xostcH/lKQtIik6Ms90gUEb1/oPKjThTVQ3FesAfBbquIayfzB1cReKyOp/iOMCLJEM2 OrkydGDcnLvIsMoebHnqaN0WTrn5IPPZVTy5U= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:subject:to:cc:references:from:message-id:date :user-agent:mime-version:in-reply-to:content-language :content-transfer-encoding; bh=R+mRt6lbrZrM9k0/iqDkD2au/pdEH6Nbb0RZkLKxHJ0=; b=OZ2qFOBCBrR2g0DUsYy1vCDL0pRjYQwvhC9VhrlYbeg6ZAZnGpOxjtidyj2vMZ98dL OYS5gBawcO0ytrYwxQgYCPvyoCxgZ5GODVu42BSmKy5flrQgeGX+aidPjEbpzaMux0On +AeqjkwfxX4nmJOcm+l1Ye67L3/oH2rbitNlXXRHRoXT85n3vck0J3W2ieNeolbnD12C kJcmgrTBaxw5Jla5IFw//6bzeHuUqZfy/8JPbHTNCN5nzMCz2pHLcS24nXi4VufcLiI8 EHSwNlb+o0JUm++2szadndgbJEF5tT9VT8iUuKMrulj/5FaYKVydTTrXwprmK1B9fN6K D/WQ== X-Gm-Message-State: AMCzsaXmQZ4JcIaMTVvw75vuMRCihVNqFJbhtbj6Hf+wxJgRq0fYjrLZ 8SP2dmtLHTtSbirxOAT2PALXmg== X-Received: by 10.25.142.13 with SMTP id q13mr1429625lfd.217.1509638448454; Thu, 02 Nov 2017 09:00:48 -0700 (PDT) Received: from [10.44.66.8] ([212.45.67.2]) by smtp.googlemail.com with ESMTPSA id l18sm746213ljb.41.2017.11.02.09.00.45 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Thu, 02 Nov 2017 09:00:47 -0700 (PDT) Subject: Re: [PATCH v3 1/3] interconnect: Add generic on-chip interconnect API To: Amit Kucheria Cc: Linux PM list , gregkh@linuxfoundation.org, "Rafael J. Wysocki" , robh+dt@kernel.org, khilman@baylibre.com, Michael Turquette , Vincent Guittot , Saravana Kannan , Stephen Boyd , Andy Gross , seansw@qti.qualcomm.com, davidai@quicinc.com, LKML , lakml , linux-arm-msm@vger.kernel.org, mark.rutland@arm.com, Lorenzo Pieralisi References: <20170908171830.13813-1-georgi.djakov@linaro.org> <20170908171830.13813-2-georgi.djakov@linaro.org> From: Georgi Djakov Message-ID: <26225878-8fed-b990-d02f-5253190de77a@linaro.org> Date: Thu, 2 Nov 2017 18:00:42 +0200 User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.4.0 MIME-Version: 1.0 In-Reply-To: Content-Type: text/plain; charset=utf-8 Content-Language: en-US Content-Transfer-Encoding: 8bit Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Hi Amit, On 11/02/2017 09:28 AM, Amit Kucheria wrote: [..>> +Interconnect node is the software definition of the interconnect hardware >> +port. Each interconnect provider consists of multiple interconnect nodes, >> +which are connected to other SoC components including other interconnect >> +providers. The point on the diagram where the CPUs connects to the memory is >> +called an interconnect node, which belongs to the Mem NoC interconnect provider. >> + >> +Interconnect endpoits are the first or the last element of the path. Every > > s/endpoits/endpoints > Ok! >> +endpoint is a node, but not every node is an endpoint. >> + >> +Interconnect path is everything between two endpoints including all the nodes >> +that have to be traversed to reach from a source to destination node. It may >> +include multiple master-slave pairs across several interconnect providers. >> + >> +Interconnect consumers are the entities which make use of the data paths exposed >> +by the providers. The consumers send requests to providers requesting various >> +throughput, latency and priority. Usually the consumers are device drivers, that >> +send request based on their needs. An example for a consumer is a a video > > typo: is a video> Ok! [..] >> +/** >> + * interconnect_set() - set constraints on a path between two endpoints >> + * @path: reference to the path returned by interconnect_get() >> + * @creq: request from the consumer, containing its requirements >> + * >> + * This function is used by an interconnect consumer to express its own needs >> + * in term of bandwidth and QoS for a previously requested path between two >> + * endpoints. The requests are aggregated and each node is updated accordingly. >> + * >> + * Returns 0 on success, or an approproate error code otherwise. >> + */ >> +int interconnect_set(struct interconnect_path *path, >> + struct interconnect_creq *creq) >> +{ >> + struct interconnect_node *next, *prev = NULL; >> + size_t i; >> + int ret = 0; >> + >> + for (i = 0; i < path->num_nodes; i++, prev = next) { >> + next = path->reqs[i].node; >> + >> + /* >> + * Both endpoints should be valid and master-slave pairs of > > Losing the 'and' improves readability. > Ok! >> + * the same interconnect provider that will be configured. >> + */ >> + if (!next || !prev) >> + continue; >> + if (next->icp != prev->icp) >> + continue; >> + >> + mutex_lock(&next->icp->lock); >> + >> + /* update the consumer request for this path */ >> + path->reqs[i].avg_bw = creq->avg_bw; >> + path->reqs[i].peak_bw = creq->peak_bw; >> + >> + /* aggregate all requests */ >> + interconnect_aggregate_icn(next); >> + interconnect_aggregate_icp(next->icp); >> + >> + if (next->icp->ops->set) { >> + /* commit the aggregated constraints */ >> + ret = next->icp->ops->set(prev, next, &next->icp->creq); >> + } > > A comment here on how the contraints are propagated along the path > would be good. At the moment, this seems to go to each provider along > the path, take the provider lock and set the new constraints, then > move on to the next provider. Is there any need to make the changes > along the entire path "atomic"? Yes, the above is correct. There is no such need at least for the hardware i am currently playing with. We can add support for that later if needed. > > I'm trying to understand what happens in the case where a new request > comes along while the previous path is still be traversed. It just gets aggregated and set and this seems to not be an issue as the paths are valid. Now I am trying to keep it simple and if anything needs serialization we can add some locking later. > >> + mutex_unlock(&next->icp->lock); >> + if (ret) >> + goto out; >> + } >> + >> +out: >> + return ret; >> +} >> + >> +/** >> + * interconnect_get() - return a handle for path between two endpoints > > This is not used currently by the msm8916 platform driver? Is this > expected to be called by leaf device drivers or by higher layer bus > drivers? I suspect a mix of the two, but an example would be nice. This is called by a consumer driver to express its for example bandwidth needs between various endpoints. Will add some examples. [..] >> +/** >> + * struct icp - interconnect provider (controller) entity that might >> + * provide multiple interconnect controls >> + * >> + * @icp_list: list of the registered interconnect providers >> + * @nodes: internal list of the interconnect provider nodes >> + * @ops: pointer to device specific struct icp_ops >> + * @dev: the device this interconnect provider belongs to >> + * @lock: a lock to protect creq and users >> + * @creq: the actual state of constraints for this interconnect provider >> + * @users: count of active users >> + * @data: pointer to private data >> + */ >> +struct icp { >> + struct list_head icp_list; >> + struct list_head nodes; >> + const struct icp_ops *ops; >> + struct device *dev; >> + struct mutex lock; >> + struct interconnect_creq creq; >> + int users; >> + void *data; >> +}; > > Use interconnect_provider here instead of icp for the sake of > consistency. Same for icp_ops. Or replace everything with any of the > other suggested alternatives. My suggestion to the name pool is 'xcon' > where x == inter. Yes i am working on better naming, thanks! BR, Georgi From 1582938484582039919@xxx Thu Nov 02 07:29:37 +0000 2017 X-GM-THRID: 1577992804531504815 X-Gmail-Labels: Inbox,Category Forums,HistoricalUnread