From: Saravana Kannan
Date: Thu, 30 Jul 2020 12:07:20 -0700
Subject: Re: [PATCH v2 1/2] interconnect: Add sync state support
To: Mike Tipton
Cc: Georgi Djakov, Linux PM, okukatla@codeaurora.org, Bjorn Andersson,
 Vincent Guittot, linux-arm-msm, LKML
References: <20200722110139.24778-1-georgi.djakov@linaro.org>
 <20200722110139.24778-2-georgi.djakov@linaro.org>

On Mon, Jul 27, 2020 at 11:18 PM Mike Tipton wrote:
>
> On 7/22/2020 10:07 AM, Saravana Kannan wrote:
> > On Wed, Jul 22, 2020 at 4:01 AM Georgi Djakov wrote:
> >>
> >> The bootloaders often do some initial configuration of the interconnects
> >> in the system and we want to keep this configuration until all consumers
> >> have probed and expressed their bandwidth needs. This is because we don't
> >> want to change the configuration by starting to disable unused paths until
> >> every user has had a chance to request the amount of bandwidth it needs.
> >>
> >> To accomplish this we will implement an interconnect-specific sync_state
> >> callback which will synchronize (aggregate and set) the current bandwidth
> >> settings when all consumers have been probed.
> >>
> >> Signed-off-by: Georgi Djakov
> >> ---
> >>  drivers/interconnect/core.c           | 61 +++++++++++++++++++++++++++
> >>  include/linux/interconnect-provider.h |  5 +++
> >>  2 files changed, 66 insertions(+)
> >>
> >> diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c
> >> index e5f998744501..0c4e38d9f1fa 100644
> >> --- a/drivers/interconnect/core.c
> >> +++ b/drivers/interconnect/core.c
> >> @@ -26,6 +26,8 @@
> >>
> >>  static DEFINE_IDR(icc_idr);
> >>  static LIST_HEAD(icc_providers);
> >> +static int providers_count;
> >> +static bool synced_state;
> >>  static DEFINE_MUTEX(icc_lock);
> >>  static struct dentry *icc_debugfs_dir;
> >>
> >> @@ -255,6 +257,12 @@ static int aggregate_requests(struct icc_node *node)
> >>                         continue;
> >>                 p->aggregate(node, r->tag, r->avg_bw, r->peak_bw,
> >>                              &node->avg_bw, &node->peak_bw);
> >> +
> >> +               /* during boot use the initial bandwidth as a floor value */
> >> +               if (!synced_state) {
> >> +                       node->avg_bw = max(node->avg_bw, node->init_avg);
> >> +                       node->peak_bw = max(node->peak_bw, node->init_peak);
> >> +               }
> >
> > Sorry I didn't reply earlier.
> >
> > I liked your previous approach with the get_bw ops. The v2 approach
> > forces every interconnect provider driver to set up these values even
> > if they are okay with just maxing out the bandwidth. Also, if they can
> > actually query their hardware, this adds additional steps for them.
>
> The problem with using something like get_bw() is that while we can
> dynamically query the HW, we have far less granularity in HW than we
> have nodes in the framework. We vote at BCM-level granularity, but each
> BCM can have many nodes. For example, the sdm845 CN0 BCM has 47 nodes.
> If we implement get_bw() generically, then it would return the BW for
> each node, which would be the queried BCM vote scaled to account for
> differences in BCM/node widths.
> While this could be useful in general as
> an informational callback, we wouldn't want to use this as a proxy for
> our initial BW vote requirements. For CN0, we wouldn't want or need to
> vote 47 times for the same CN0 BCM. Each of the 47 node requests would
> result in the same BCM request.

Firstly, most people on the list don't know what BCM means. Also, all of
this is a provider-driver-specific issue. If you are exposing more nodes
than the available HW granularity, then you might want to question why
that needs to be done (probably to make aggregation easier for the
driver). If it is needed, then optimize your get/set() calls by caching
the value in an internal variable so that you don't send a request to
your BCM if the value hasn't changed since the last request. This is not
a reason not to have get_bw() calls at the framework level. Other
providers might support it and it'd make their lives easier.

> All we'd really need is a single node per BCM to serve as the proxy
> node. We'd query the HW, scale the queried value for the chosen proxy
> node, and set init_avg/init_peak appropriately. This would save a lot of
> unnecessary votes. Based on the current implementation, the set() call
> in icc_node_add() for initial BW wouldn't trigger any actual HW requests
> since we only queue BCMs that require updating in the aggregate()
> callback. However, the set() call in icc_sync_state() would, since we
> re-aggregate each node that has a non-zero init_avg/init_peak.

Having a fake "proxy node" seems like a bad internal design. Also, have
you timed the cost of these calls to justify your concern? If you cache
the values after aggregation and check before you send them down to a
"BCM", at worst you get one additional call to rpmh per BCM due to this
feature. I'm guessing any time delta would be lost as noise compared to
the boot-up time.
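To make the caching suggestion concrete, here's a rough userspace sketch
(all names are made up for illustration; this is not the actual icc/rpmh
API) of skipping the hardware request when the aggregated vote hasn't
changed since the last send:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-in for a BCM: remembers the last vote sent to HW. */
struct bcm {
	uint64_t cached_vote;	/* last value actually sent */
	bool valid;		/* has anything been sent yet? */
	int hw_sends;		/* counts simulated rpmh requests */
};

/* Stand-in for the expensive rpmh call. */
static void bcm_send_to_hw(struct bcm *b, uint64_t vote)
{
	b->hw_sends++;
	b->cached_vote = vote;
	b->valid = true;
}

/* Only talk to HW when the aggregated vote actually changed. */
static void bcm_set_vote(struct bcm *b, uint64_t vote)
{
	if (b->valid && b->cached_vote == vote)
		return;	/* e.g. 46 of 47 identical CN0 node votes end here */
	bcm_send_to_hw(b, vote);
}
```

With this, 47 nodes that aggregate to the same BCM vote cost one
hardware request, not 47, regardless of how many framework nodes map to
the BCM.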
> There's nothing stopping us from implementing get_bw() as if it were
> get_initial_bw(), but that only works until the framework decides to use
> get_bw() for more things than just the initial vote. I suppose we could
> also just have a "get_initial_bw" callback, but it only needs to be
> called once, so it doesn't strictly need to be a callback as opposed to
> additional init_avg/init_peak members in the icc_node struct.

The benefit of "ops" vs "fill up these variables" is that you can
differentiate between "I don't care, the framework can decide" and "I
need it to be 0". Put another way, there's no way to say "I don't care"
if you use variables. And by default, drivers that don't really care (as
in, okay if it's set to INT_MAX) shouldn't have to do extra code/work.

Long story short, there's nothing wrong with get_bw(). If your specific
driver needs to optimize the calls to your RPMH hardware, that should be
hidden inside your driver.

-Saravana
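P.S. A rough sketch of the "I don't care" vs "I need 0" distinction an
op enables (the get_bw() signature and names here are invented for
illustration, not the real interconnect framework API):

```c
#include <assert.h>
#include <stdint.h>

/*
 * A provider can return an error (or not implement the op at all) to
 * say "I don't care" -- something a plain init_avg/init_peak field
 * can't express, since 0 there is a real request for zero bandwidth.
 */
struct node {
	int (*get_bw)(struct node *node, uint32_t *avg, uint32_t *peak);
};

/*
 * Framework side: fall back to "max out the bandwidth" only when the
 * op is absent or explicitly declines to answer.
 */
static void init_floor(struct node *node, uint32_t *avg, uint32_t *peak)
{
	if (node->get_bw && !node->get_bw(node, avg, peak))
		return;		/* provider supplied a real initial vote */
	*avg = UINT32_MAX;	/* "don't care": keep everything on */
	*peak = UINT32_MAX;
}

/* A provider that genuinely wants a zero initial floor. */
static int get_bw_zero(struct node *node, uint32_t *avg, uint32_t *peak)
{
	*avg = 0;
	*peak = 0;
	return 0;
}
```

A driver that leaves the op NULL gets the maxed-out default for free;
one that implements it can request exactly zero, and the two cases stay
distinguishable.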