Date: Wed, 25 Apr 2018 16:41:49 -0700
From: Matthias Kaehlcke
To: Lina Iyer
Cc: andy.gross@linaro.org, david.brown@linaro.org,
	linux-arm-msm@vger.kernel.org, linux-soc@vger.kernel.org,
	rnayak@codeaurora.org, bjorn.andersson@linaro.org,
	linux-kernel@vger.kernel.org, sboyd@kernel.org,
	evgreen@chromium.org, dianders@chromium.org
Subject: Re: [PATCH v6 09/10] drivers: qcom: rpmh: add support for batch RPMH request
Message-ID: <20180425234149.GE243180@google.com>
References: <20180419221635.17849-1-ilina@codeaurora.org>
 <20180419221635.17849-10-ilina@codeaurora.org>
In-Reply-To: <20180419221635.17849-10-ilina@codeaurora.org>
User-Agent: Mutt/1.9.2 (2017-12-15)

On Thu, Apr 19, 2018 at 04:16:34PM -0600, Lina Iyer wrote:
> Platform drivers need to make a lot of resource state requests at the
> same time, say, at the start or end of a usecase. It can be quite
> inefficient to send each request separately. Instead they can give the
> RPMH library a batch of requests to be sent and wait on the whole
> transaction to be complete.
>
> rpmh_write_batch() is a blocking call that can be used to send multiple
> RPMH command sets. Each RPMH command set is sent asynchronously and the
> API blocks until all the command sets are complete and receive their
> tx_done callbacks.
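Just to confirm my understanding of the API, a client would do
something like this, right? (addresses/values and 'dev' are made up,
untested):

	struct tcs_cmd cmds[] = {
		{ .addr = 0x1000, .data = 0x1 },	/* set 1: two commands */
		{ .addr = 0x1004, .data = 0x2 },
		{ .addr = 0x2000, .data = 0x3 },	/* set 2: one command */
	};
	u32 n[] = { 2, 1, 0 };	/* commands per set, 0-terminated */
	int ret;

	ret = rpmh_write_batch(dev, RPMH_ACTIVE_ONLY_STATE, cmds, n);
	if (ret)
		dev_err(dev, "RPMH batch write failed: %d\n", ret);

i.e. both sets are sent asynchronously and the call only returns once
each of them got its tx_done callback (or on timeout).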
>
> Signed-off-by: Lina Iyer
> ---
>
> Changes in v6:
>	- replace rpmh_client with device *
> Changes in v4:
>	- reorganize rpmh_write_batch()
>	- introduce wait_count here, instead of patch#4
> ---
>  drivers/soc/qcom/rpmh.c | 155 +++++++++++++++++++++++++++++++++++++++-
>  include/soc/qcom/rpmh.h |   8 +++
>  2 files changed, 161 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
> index b56445a05e23..c5962c692aac 100644
> --- a/drivers/soc/qcom/rpmh.c
> +++ b/drivers/soc/qcom/rpmh.c
> @@ -21,6 +21,7 @@
>  #include "rpmh-internal.h"
>
>  #define RPMH_TIMEOUT_MS		msecs_to_jiffies(10000)
> +#define RPMH_MAX_REQ_IN_BATCH	10
>
>  #define DEFINE_RPMH_MSG_ONSTACK(dev, s, q, name)	\
>	struct rpmh_request name = {			\
> @@ -34,6 +35,7 @@
>		.completion = q,			\
>		.dev = dev,				\
>		.free = NULL,				\
> +		.wait_count = NULL,			\
>	}
>
>  /**
> @@ -60,6 +62,7 @@ struct cache_req {
>   * @dev: the device making the request
>   * @err: err return from the controller
>   * @free: the request object to be freed at tx_done
> + * @wait_count: count of waiters for this completion
>   */
>  struct rpmh_request {
>	struct tcs_request msg;
> @@ -68,6 +71,7 @@ struct rpmh_request {
>	const struct device *dev;
>	int err;
>	struct rpmh_request *free;
> +	atomic_t *wait_count;
>  };
>
>  /**
> @@ -77,12 +81,14 @@ struct rpmh_request {
>   * @cache: the list of cached requests
>   * @lock: synchronize access to the controller data
>   * @dirty: was the cache updated since flush
> + * @batch_cache: Cache sleep and wake requests sent as batch
>   */
>  struct rpmh_ctrlr {
>	struct rsc_drv *drv;
>	struct list_head cache;
>	spinlock_t lock;
>	bool dirty;
> +	const struct rpmh_request *batch_cache[2 * RPMH_MAX_REQ_IN_BATCH];
>  };
>
> +static int cache_batch(struct rpmh_ctrlr *ctrlr,
> +		       struct rpmh_request **rpm_msg, int count)
> +{
> +	unsigned long flags;
> +	int ret = 0;
> +	int index = 0;
> +	int i;
> +
> +	spin_lock_irqsave(&ctrlr->lock, flags);
> +	while (ctrlr->batch_cache[index])
> +		index++;

This will access memory beyond 'batch_cache' when the cache is full.

> +static void invalidate_batch(struct rpmh_ctrlr *ctrlr)
> +{
> +	unsigned long flags;
> +	int index = 0;
> +	int i;
> +
> +	spin_lock_irqsave(&ctrlr->lock, flags);
> +	while (ctrlr->batch_cache[index])
> +		index++;

Same as above. Also, why loop twice?

> +	for (i = 0; i < index; i++) {
> +		kfree(ctrlr->batch_cache[i]->free);
> +		ctrlr->batch_cache[i] = NULL;
> +	}
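FWIW, the index scan and the cleanup could be folded into a single,
bounded loop, something along these lines (untested sketch):

	spin_lock_irqsave(&ctrlr->lock, flags);
	for (i = 0; i < ARRAY_SIZE(ctrlr->batch_cache); i++) {
		if (!ctrlr->batch_cache[i])
			break;
		kfree(ctrlr->batch_cache[i]->free);
		ctrlr->batch_cache[i] = NULL;
	}
	spin_unlock_irqrestore(&ctrlr->lock, flags);

That would also take care of the out-of-bounds access when the cache
is full.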
> +/**
> + * rpmh_write_batch: Write multiple sets of RPMH commands and wait for the
> + * batch to finish.
> + *
> + * @dev: the device making the request
> + * @state: Active/sleep set
> + * @cmd: The payload data
> + * @n: The array of count of elements in each batch, 0 terminated.

nit: in this driver 'n' is usually associated with the command offset
within a TCS. Since it isn't an overly descriptive name it may already
take the reader a while to commit it to memory, and now we are
overloading 'n' with a different meaning (I also noticed this in
another patch of this series, but didn't comment).

> + * Write a request to the RSC controller without caching. If the request
> + * state is ACTIVE, then the requests are treated as completion request
> + * and sent to the controller immediately. The function waits until all the
> + * commands are complete. If the request was to SLEEP or WAKE_ONLY, then the
> + * request is sent as fire-n-forget and no ack is expected.
> + *
> + * May sleep. Do not call from atomic contexts for ACTIVE_ONLY requests.
> + */
> +int rpmh_write_batch(const struct device *dev, enum rpmh_state state,
> +		     const struct tcs_cmd *cmd, u32 *n)
> +{
> +	struct rpmh_request *rpm_msg[RPMH_MAX_REQ_IN_BATCH] = { NULL };
> +	DECLARE_COMPLETION_ONSTACK(compl);
> +	atomic_t wait_count = ATOMIC_INIT(0);
> +	struct rpmh_ctrlr *ctrlr = get_rpmh_ctrlr(dev);
> +	int count = 0;
> +	int ret, i;
> +
> +	if (IS_ERR(ctrlr) || !cmd || !n)
> +		return -EINVAL;
> +
> +	while (n[count++] > 0)
> +		;
> +	count--;
> +	if (!count || count > RPMH_MAX_REQ_IN_BATCH)
> +		return -EINVAL;
> +
> +	for (i = 0; i < count; i++) {
> +		rpm_msg[i] = __get_rpmh_msg_async(state, cmd, n[i]);
> +		if (IS_ERR_OR_NULL(rpm_msg[i])) {
> +			ret = PTR_ERR(rpm_msg[i]);
> +			for (; i >= 0; i--)
> +				kfree(rpm_msg[i]->free);
> +			return ret;
> +		}
> +		cmd += n[i];
> +	}
> +
> +	if (state != RPMH_ACTIVE_ONLY_STATE)
> +		return cache_batch(ctrlr, rpm_msg, count);
> +
> +	atomic_set(&wait_count, count);
> +
> +	for (i = 0; i < count; i++) {
> +		rpm_msg[i]->completion = &compl;
> +		rpm_msg[i]->wait_count = &wait_count;
> +		ret = rpmh_rsc_send_data(ctrlr->drv, &rpm_msg[i]->msg);
> +		if (ret) {
> +			int j;
> +
> +			pr_err("Error(%d) sending RPMH message addr=%#x\n",
> +			       ret, rpm_msg[i]->msg.cmds[0].addr);
> +			for (j = i; j < count; j++)
> +				rpmh_tx_done(&rpm_msg[j]->msg, ret);
> +			break;
> +		}
> +	}
> +
> +	ret = wait_for_completion_timeout(&compl, RPMH_TIMEOUT_MS);
> +	return (ret > 0) ? 0 : -ETIMEDOUT;
> +
> +}
> +EXPORT_SYMBOL(rpmh_write_batch);
> +
>  static int is_req_valid(struct cache_req *req)
>  {
>	return (req->sleep_val != UINT_MAX &&
> @@ -375,6 +520,11 @@ int rpmh_flush(const struct device *dev)
>		return 0;
>	}
>
> +	/* First flush the cached batch requests */
> +	ret = flush_batch(ctrlr);
> +	if (ret)
> +		return ret;
> +
>	/*
>	 * Nobody else should be calling this function other than system PM,,
	                                                                    ~

Remove extra comma.
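Coming back to the naming nit further up: purely as an illustration
(the name is made up), a prototype like

	int rpmh_write_batch(const struct device *dev, enum rpmh_state state,
			     const struct tcs_cmd *cmd, u32 *num_cmds_per_set);

would spare the reader the mental remapping of 'n'.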