From: Lina Iyer <ilina@codeaurora.org>
To: andy.gross@linaro.org, david.brown@linaro.org,
	linux-arm-msm@vger.kernel.org, linux-soc@vger.kernel.org
Cc: rnayak@codeaurora.org, bjorn.andersson@linaro.org,
	linux-kernel@vger.kernel.org, Lina Iyer <ilina@codeaurora.org>
Subject: [PATCH v3 09/10] drivers: qcom: rpmh: add support for batch RPMH request
Date: Fri, 2 Mar 2018 09:43:16 -0700
Message-Id: <20180302164317.10554-10-ilina@codeaurora.org>
In-Reply-To: <20180302164317.10554-1-ilina@codeaurora.org>
References: <20180302164317.10554-1-ilina@codeaurora.org>

Platform drivers need to make a lot of resource state requests at the same
time, say, at the start or end of a use case.
It can be quite inefficient to send each request separately. Instead, they
can hand the RPMH library a batch of requests to be sent and wait on the
whole transaction to complete.

rpmh_write_batch() is a blocking call that can be used to send multiple
RPMH command sets. Each RPMH command set is sent asynchronously and the API
blocks until all the command sets are complete and have received their
tx_done callbacks.

Signed-off-by: Lina Iyer <ilina@codeaurora.org>
---
 drivers/soc/qcom/rpmh.c | 150 ++++++++++++++++++++++++++++++++++++++++++++++++
 include/soc/qcom/rpmh.h |   8 +++
 2 files changed, 158 insertions(+)

diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
index a02d9f685b2b..19e84b031c0d 100644
--- a/drivers/soc/qcom/rpmh.c
+++ b/drivers/soc/qcom/rpmh.c
@@ -22,6 +22,7 @@
 
 #define RPMH_MAX_MBOXES			2
 #define RPMH_TIMEOUT			(10 * HZ)
+#define RPMH_MAX_REQ_IN_BATCH		10
 
 #define DEFINE_RPMH_MSG_ONSTACK(rc, s, q, c, name)	\
 	struct rpmh_request name = {			\
@@ -81,12 +82,14 @@ struct rpmh_request {
  * @cache: the list of cached requests
  * @lock: synchronize access to the controller data
  * @dirty: was the cache updated since flush
+ * @batch_cache: Cache sleep and wake requests sent as batch
  */
 struct rpmh_ctrlr {
 	struct rsc_drv *drv;
 	struct list_head cache;
 	spinlock_t lock;
 	bool dirty;
+	struct rpmh_request *batch_cache[2 * RPMH_MAX_REQ_IN_BATCH];
 };
 
 /**
@@ -343,6 +346,146 @@ int rpmh_write(struct rpmh_client *rc, enum rpmh_state state,
 }
 EXPORT_SYMBOL(rpmh_write);
 
+static int cache_batch(struct rpmh_client *rc,
+		       struct rpmh_request **rpm_msg, int count)
+{
+	struct rpmh_ctrlr *rpm = rc->ctrlr;
+	unsigned long flags;
+	int ret = 0;
+	int index = 0;
+	int i;
+
+	spin_lock_irqsave(&rpm->lock, flags);
+	while (rpm->batch_cache[index])
+		index++;
+	if (index + count >= 2 * RPMH_MAX_REQ_IN_BATCH) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	for (i = 0; i < count; i++)
+		rpm->batch_cache[index + i] = rpm_msg[i];
+fail:
+	spin_unlock_irqrestore(&rpm->lock, flags);
+
+	return ret;
+}
+
+static int flush_batch(struct rpmh_client *rc)
+{
+	struct rpmh_ctrlr *rpm = rc->ctrlr;
+	struct rpmh_request *rpm_msg;
+	unsigned long flags;
+	int ret = 0;
+	int i;
+
+	/* Send Sleep/Wake requests to the controller, expect no response */
+	spin_lock_irqsave(&rpm->lock, flags);
+	for (i = 0; rpm->batch_cache[i]; i++) {
+		rpm_msg = rpm->batch_cache[i];
+		ret = rpmh_rsc_write_ctrl_data(rc->ctrlr->drv, &rpm_msg->msg);
+		if (ret)
+			break;
+	}
+	spin_unlock_irqrestore(&rpm->lock, flags);
+
+	return ret;
+}
+
+static void invalidate_batch(struct rpmh_client *rc)
+{
+	struct rpmh_ctrlr *rpm = rc->ctrlr;
+	unsigned long flags;
+	int index = 0;
+	int i;
+
+	spin_lock_irqsave(&rpm->lock, flags);
+	while (rpm->batch_cache[index])
+		index++;
+	for (i = 0; i < index; i++) {
+		kfree(rpm->batch_cache[i]->free);
+		rpm->batch_cache[i] = NULL;
+	}
+	spin_unlock_irqrestore(&rpm->lock, flags);
+}
+
+/**
+ * rpmh_write_batch: Write multiple sets of RPMH commands and wait for the
+ * batch to finish.
+ *
+ * @rc: The RPMH handle got from rpmh_get_client
+ * @state: Active/sleep set
+ * @cmd: The payload data
+ * @n: The array of counts of elements in each batch, 0 terminated.
+ *
+ * Write a request to the mailbox controller without caching. If the request
+ * state is ACTIVE, then the requests are treated as completion requests
+ * and sent to the controller immediately. The function waits until all the
+ * commands are complete. If the request was to SLEEP or WAKE_ONLY, then the
+ * request is sent as fire-and-forget and no ack is expected.
+ *
+ * May sleep. Do not call from atomic contexts for ACTIVE_ONLY requests.
+ */
+int rpmh_write_batch(struct rpmh_client *rc, enum rpmh_state state,
+		     struct tcs_cmd *cmd, int *n)
+{
+	struct rpmh_request *rpm_msg[RPMH_MAX_REQ_IN_BATCH] = { NULL };
+	DECLARE_COMPLETION_ONSTACK(compl);
+	atomic_t wait_count = ATOMIC_INIT(0); /* overwritten */
+	int count = 0;
+	int ret, i, j;
+
+	if (IS_ERR_OR_NULL(rc) || !cmd || !n)
+		return -EINVAL;
+
+	while (n[count++] > 0)
+		;
+	count--;
+	if (!count || count > RPMH_MAX_REQ_IN_BATCH)
+		return -EINVAL;
+
+	/* Create async request batches */
+	for (i = 0; i < count; i++) {
+		rpm_msg[i] = __get_rpmh_msg_async(rc, state, cmd, n[i]);
+		if (IS_ERR_OR_NULL(rpm_msg[i])) {
+			for (j = 0; j < i; j++)
+				kfree(rpm_msg[j]->free);
+			return PTR_ERR(rpm_msg[i]);
+		}
+		cmd += n[i];
+	}
+
+	/* Send if Active and wait for the whole set to complete */
+	if (state == RPMH_ACTIVE_ONLY_STATE) {
+		might_sleep();
+		atomic_set(&wait_count, count);
+		for (i = 0; i < count; i++) {
+			rpm_msg[i]->completion = &compl;
+			rpm_msg[i]->wait_count = &wait_count;
+			/* Bypass caching and write to mailbox directly */
+			ret = rpmh_rsc_send_data(rc->ctrlr->drv,
+						 &rpm_msg[i]->msg);
+			if (ret < 0) {
+				pr_err("Error(%d) sending RPMH message addr=0x%x\n",
+				       ret, rpm_msg[i]->msg.payload[0].addr);
+				break;
+			}
+		}
+		/* For those unsent requests, spoof tx_done */
+		for (j = i; j < count; j++)
+			rpmh_tx_done(&rpm_msg[j]->msg, ret);
+		return wait_for_tx_done(rc, &compl, cmd[0].addr, cmd[0].data);
+	}
+
+	/*
+	 * Cache sleep/wake data in store.
+	 * But flush batch first before flushing all other data.
+	 */
+	return cache_batch(rc, rpm_msg, count);
+}
+EXPORT_SYMBOL(rpmh_write_batch);
+
 static int is_req_valid(struct cache_req *req)
 {
 	return (req->sleep_val != UINT_MAX &&
@@ -391,6 +534,11 @@ int rpmh_flush(struct rpmh_client *rc)
 		return 0;
 	}
 
+	/* First flush the cached batch requests */
+	ret = flush_batch(rc);
+	if (ret)
+		return ret;
+
 	/*
 	 * Nobody else should be calling this function other than system PM,
 	 * hence we can run without locks.
@@ -433,6 +581,8 @@ int rpmh_invalidate(struct rpmh_client *rc)
 	if (IS_ERR_OR_NULL(rc))
 		return -EINVAL;
 
+	invalidate_batch(rc);
+
 	spin_lock_irqsave(&rpm->lock, flags);
 	rpm->dirty = true;
 	spin_unlock_irqrestore(&rpm->lock, flags);
diff --git a/include/soc/qcom/rpmh.h b/include/soc/qcom/rpmh.h
index 172a649f1a1c..6dc1042534d6 100644
--- a/include/soc/qcom/rpmh.h
+++ b/include/soc/qcom/rpmh.h
@@ -18,6 +18,9 @@ int rpmh_write(struct rpmh_client *rc, enum rpmh_state state,
 int rpmh_write_async(struct rpmh_client *rc, enum rpmh_state state,
 		     struct tcs_cmd *cmd, int n);
 
+int rpmh_write_batch(struct rpmh_client *rc, enum rpmh_state state,
+		     struct tcs_cmd *cmd, int *n);
+
 struct rpmh_client *rpmh_get_client(struct platform_device *pdev);
 
 int rpmh_flush(struct rpmh_client *rc);
@@ -40,6 +43,11 @@ static inline int rpmh_write_async(struct rpmh_client *rc,
 				   struct tcs_cmd *cmd, int n)
 { return -ENODEV; }
 
+static inline int rpmh_write_batch(struct rpmh_client *rc,
+				   enum rpmh_state state,
+				   struct tcs_cmd *cmd, int *n)
+{ return -ENODEV; }
+
 static inline int rpmh_flush(struct rpmh_client *rc)
 { return -ENODEV; }
 
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project
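
For reference, a minimal usage sketch of the new call (not part of the patch):
it assumes a platform driver that already owns a struct platform_device, the
resource addresses and data values are invented for illustration, and struct
tcs_cmd is assumed to be visible through the rpmh header. The n[] array carries
the per-set command counts and is 0-terminated, as described in the kerneldoc
above; for RPMH_ACTIVE_ONLY_STATE the call may sleep, so it must not be made
from atomic context.

/* Illustrative only: addresses and values below are hypothetical. */
#include <linux/err.h>
#include <linux/platform_device.h>
#include <soc/qcom/rpmh.h>

static int example_start_usecase(struct platform_device *pdev)
{
	struct rpmh_client *rc;
	/* Three commands split into two sets: 2 commands, then 1 command */
	struct tcs_cmd cmd[3] = {
		{ .addr = 0x30000, .data = 0x1 },
		{ .addr = 0x30004, .data = 0x2 },
		{ .addr = 0x40000, .data = 0x1 },
	};
	int n[] = { 2, 1, 0 };	/* per-set counts, 0-terminated */
	int ret;

	rc = rpmh_get_client(pdev);
	if (IS_ERR_OR_NULL(rc))
		return rc ? PTR_ERR(rc) : -ENODEV;

	/*
	 * ACTIVE_ONLY: each command set is sent asynchronously and the
	 * call blocks until every set has received its tx_done callback.
	 */
	ret = rpmh_write_batch(rc, RPMH_ACTIVE_ONLY_STATE, cmd, n);
	if (ret)
		dev_err(&pdev->dev, "batch request failed: %d\n", ret);

	return ret;
}

With a sleep or wake set instead of RPMH_ACTIVE_ONLY_STATE, the same call would
only cache the batch; it is written to the controller later by rpmh_flush().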