Subject: Re: [PATCH v8 09/10] drivers: qcom: rpmh: add support for batch RPMH request
To: Doug Anderson, Lina Iyer
Cc: Andy Gross, David Brown, linux-arm-msm@vger.kernel.org,
    "open list:ARM/QUALCOMM SUPPORT", Rajendra Nayak, msivasub@codeaurora.org,
    mkshah@codeaurora.org, Bjorn Andersson, LKML, Stephen Boyd, Evan Green,
    Matthias Kaehlcke
References: <20180509170159.29682-1-ilina@codeaurora.org> <20180509170159.29682-10-ilina@codeaurora.org>
From: Raju P L S S S N
Date: Wed, 23 May 2018 18:57:25 +0530
Hi,

I will reply on the points other than those Lina has already responded to.

On 5/12/2018 1:49 AM, Doug Anderson wrote:
> Hi,
>
> On Wed, May 9, 2018 at 10:01 AM, Lina Iyer wrote:
>>  /**
>> @@ -77,12 +82,14 @@ struct rpmh_request {
>>   * @cache: the list of cached requests
>>   * @lock: synchronize access to the controller data
>>   * @dirty: was the cache updated since flush
>> + * @batch_cache: Cache sleep and wake requests sent as batch
>>   */
>>  struct rpmh_ctrlr {
>>  	struct rsc_drv *drv;
>>  	struct list_head cache;
>>  	spinlock_t lock;
>>  	bool dirty;
>> +	const struct rpmh_request *batch_cache[RPMH_MAX_BATCH_CACHE];
>
> I'm pretty confused about why the "batch_cache" is separate from the
> normal cache. As far as I can tell the purpose of the two is the same
> but you have two totally separate code paths and data structures.
>
>>  };
>>
>>  static struct rpmh_ctrlr rpmh_rsc[RPMH_MAX_CTRLR];
>> @@ -133,6 +140,7 @@ void rpmh_tx_done(const struct tcs_request *msg, int r)
>>  	struct rpmh_request *rpm_msg = container_of(msg, struct rpmh_request,
>>  						    msg);
>>  	struct completion *compl = rpm_msg->completion;
>> +	atomic_t *wc = rpm_msg->wait_count;
>>
>>  	rpm_msg->err = r;
>>
>> @@ -143,8 +151,13 @@ void rpmh_tx_done(const struct tcs_request *msg, int r)
>>  	kfree(rpm_msg->free);
>>
>>  	/* Signal the blocking thread we are done */
>> -	if (compl)
>> -		complete(compl);
>> +	if (!compl)
>> +		return;
>
> The comment above this "if" block no longer applies to the line next
> to it after your patch. ...but below I suggest you get rid of
> "wait_count", so maybe this part of the patch will go away.
>
>> +static int cache_batch(struct rpmh_ctrlr *ctrlr,
>> +		       struct rpmh_request **rpm_msg, int count)
>> +{
>> +	unsigned long flags;
>> +	int ret = 0;
>> +	int index = 0;
>> +	int i;
>> +
>> +	spin_lock_irqsave(&ctrlr->lock, flags);
>> +	while (index < RPMH_MAX_BATCH_CACHE && ctrlr->batch_cache[index])
>> +		index++;
>> +	if (index + count >= RPMH_MAX_BATCH_CACHE) {
>> +		ret = -ENOMEM;
>> +		goto fail;
>> +	}
>> +
>> +	for (i = 0; i < count; i++)
>> +		ctrlr->batch_cache[index + i] = rpm_msg[i];
>> +fail:
>
> Nit: this label is for both failure and normal exit, so call it "exit".
>
>> +	spin_unlock_irqrestore(&ctrlr->lock, flags);
>> +
>> +	return ret;
>> +}
>
> As part of my overall confusion about why the batch cache is different
> than the normal one: for the normal use case you still call
> rpmh_rsc_write_ctrl_data() for things you put in your cache, but you
> don't for the batch cache. I still haven't totally figured out what
> rpmh_rsc_write_ctrl_data() does, but it seems strange that you don't
> do it for the batch cache but you do for the other one.
>
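To make the comparison with the normal cache concrete, a rough sketch (not
part of this patch) of how the batch entries could live on a list protected
by the same ctrlr->lock instead of a fixed-size array; the batch_cache_req
wrapper and the list_head batch_cache member are made up for illustration
and would need to be initialized with INIT_LIST_HEAD() at probe time:

  /* Illustrative only: list-based batch cache, no RPMH_MAX_BATCH_CACHE limit */
  struct batch_cache_req {
  	struct list_head list;			/* linked on ctrlr->batch_cache */
  	int count;				/* number of messages in the batch */
  	struct rpmh_request *rpm_msgs[];	/* the batched messages */
  };

  static int cache_batch(struct rpmh_ctrlr *ctrlr, struct batch_cache_req *req)
  {
  	unsigned long flags;

  	/* same lock as the regular request cache */
  	spin_lock_irqsave(&ctrlr->lock, flags);
  	list_add_tail(&req->list, &ctrlr->batch_cache);
  	spin_unlock_irqrestore(&ctrlr->lock, flags);

  	return 0;
  }
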
>> +/**
>> + * rpmh_write_batch: Write multiple sets of RPMH commands and wait for the
>> + * batch to finish.
>> + *
>> + * @dev: the device making the request
>> + * @state: Active/sleep set
>> + * @cmd: The payload data
>> + * @n: The array of count of elements in each batch, 0 terminated.
>> + *
>> + * Write a request to the RSC controller without caching. If the request
>> + * state is ACTIVE, then the requests are treated as completion request
>> + * and sent to the controller immediately. The function waits until all the
>> + * commands are complete. If the request was to SLEEP or WAKE_ONLY, then the
>> + * request is sent as fire-n-forget and no ack is expected.
>> + *
>> + * May sleep. Do not call from atomic contexts for ACTIVE_ONLY requests.
>> + */
>> +int rpmh_write_batch(const struct device *dev, enum rpmh_state state,
>> +		     const struct tcs_cmd *cmd, u32 *n)
>> +{
>> +	struct rpmh_request *rpm_msg[RPMH_MAX_REQ_IN_BATCH] = { NULL };
>> +	DECLARE_COMPLETION_ONSTACK(compl);
>> +	atomic_t wait_count = ATOMIC_INIT(0);
>> +	struct rpmh_ctrlr *ctrlr = get_rpmh_ctrlr(dev);
>> +	int count = 0;
>> +	int ret, i;
>> +
>> +	if (IS_ERR(ctrlr) || !cmd || !n)
>> +		return -EINVAL;
>> +
>> +	while (n[count++] > 0)
>> +		;
>> +	count--;
>> +	if (!count || count > RPMH_MAX_REQ_IN_BATCH)
>> +		return -EINVAL;
>> +
>> +	for (i = 0; i < count; i++) {
>> +		rpm_msg[i] = __get_rpmh_msg_async(state, cmd, n[i]);
>> +		if (IS_ERR_OR_NULL(rpm_msg[i])) {
>
> Just "IS_ERR". It's never NULL.
> ...also add a i-- somewhere in here or you're going to be kfree()ing
> your error value, aren't you?

Sure. Will make the change in the next patch.

>
>> +			ret = PTR_ERR(rpm_msg[i]);
>> +			for (; i >= 0; i--)
>> +				kfree(rpm_msg[i]->free);
>> +			return ret;
>> +		}
>> +		cmd += n[i];
>> +	}
>> +
>> +	if (state != RPMH_ACTIVE_ONLY_STATE)
>> +		return cache_batch(ctrlr, rpm_msg, count);
>
> Don't you need to free rpm_msg items in this case?
>
>> +
>> +	atomic_set(&wait_count, count);
>> +
>> +	for (i = 0; i < count; i++) {
>> +		rpm_msg[i]->completion = &compl;
>> +		rpm_msg[i]->wait_count = &wait_count;
>> +		ret = rpmh_rsc_send_data(ctrlr->drv, &rpm_msg[i]->msg);
>> +		if (ret) {
>> +			int j;
>> +
>> +			pr_err("Error(%d) sending RPMH message addr=%#x\n",
>> +			       ret, rpm_msg[i]->msg.cmds[0].addr);
>> +			for (j = i; j < count; j++)
>> +				rpmh_tx_done(&rpm_msg[j]->msg, ret);
>
> You're just using rpmh_tx_done() to free memory? Note that you'll
> probably do your error handling in this function a favor if you rename
> __get_rpmh_msg_async() to __fill_rpmh_msg() and remove the memory
> allocation from there. Then you can do one big allocation of the
> whole array in rpmh_write_batch() and then you'll only have one free
> at the end...
>
>> +			break;
>
> "break" seems wrong here. You'll end up waiting for the completion,
> then I guess timing out, then returning -ETIMEDOUT?

The loop just above the break calls rpmh_tx_done() for all the remaining
messages, so the completion is signalled for each of them and there will be
no waiting.

Thanks,
Raju

>
>> +		}
>> +	}
>> +
>> +	ret = wait_for_completion_timeout(&compl, RPMH_TIMEOUT_MS);
>
> The "wait_count" abstraction is confusing and I believe it's not
> needed. I think you can remove it and change the above to this
> (untested) code:
>
>   time_left = RPMH_TIMEOUT_MS;
>   for (i = 0; i < count; i++) {
>     time_left = wait_for_completion_timeout(&compl, time_left);
>     if (!time_left)
>       return -ETIMEDOUT;
>   }
>
> ...specifically completions are additive, so just wait "count" times
> and then the reader doesn't need to learn your new wait_count
> abstraction and try to reason about it.
>
> ...and, actually, I argue in other replies that this shouldn't use a
> timeout, so even cleaner:
>
>   for (i = 0; i < count; i++)
>     wait_for_completion(&compl);
>
> Once you do that, you can also get rid of the need to pre-count "n",
> so all your loops turn into:
>
>   for (i = 0; n[i]; i++)
>
> I suppose you might want to get rid of "RPMH_MAX_REQ_IN_BATCH" and
> dynamically allocate your array too, but that seems sane. As per
> above it seems like you should just dynamically allocate a whole array
> of "struct rpmh_request" items at once anyway.
>
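Pulling those suggestions together, a rough, untested sketch of what the
ACTIVE-only send/wait portion could look like with a single allocation and
no wait_count. __fill_rpmh_msg() is the suggested (not yet existing)
no-allocation variant of __get_rpmh_msg_async(), rpm_msgs is one array
allocated here rather than per message, its entries leave ->free NULL so
rpmh_tx_done() does not free them individually, and the GFP flag is just a
sketch choice:

  	struct rpmh_request *rpm_msgs;
  	int count = 0, i, j, ret = 0;

  	while (n[count] > 0)
  		count++;

  	rpm_msgs = kcalloc(count, sizeof(*rpm_msgs), GFP_KERNEL);
  	if (!rpm_msgs)
  		return -ENOMEM;

  	for (i = 0; i < count; i++) {
  		__fill_rpmh_msg(&rpm_msgs[i], state, cmd, n[i]);
  		rpm_msgs[i].completion = &compl;
  		cmd += n[i];
  	}

  	for (i = 0; i < count; i++) {
  		ret = rpmh_rsc_send_data(ctrlr->drv, &rpm_msgs[i].msg);
  		if (ret)
  			break;
  	}

  	/* completions are additive: wait once per message actually sent */
  	for (j = 0; j < i; j++)
  		wait_for_completion(&compl);

  	kfree(rpm_msgs);	/* the single free at the end */
  	return ret;
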
> ---
>
>> +	return (ret > 0) ? 0 : -ETIMEDOUT;
>> +
>> +}
>> +EXPORT_SYMBOL(rpmh_write_batch);
>
> Perhaps an even simpler thing than taking all my advice above: can't
> you just add an optional completion to rpmh_write_async()? That would
> just be stuffed into rpm_msg.
>
> Now your batch code would just be a bunch of calls to
> rpmh_write_async() with an equal number of wait_for_completion() calls
> at the end. Is there a reason that wouldn't work? You'd get rid of
> _a lot_ of code.
>
> -Doug
>
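For reference, a rough sketch of what the whole batch function could collapse
to under that idea, assuming rpmh_write_async() grew an optional completion
argument (the extra parameter is hypothetical, not the current API) and kept
caching sleep/wake requests exactly as it does for single requests:

  int rpmh_write_batch(const struct device *dev, enum rpmh_state state,
  		     const struct tcs_cmd *cmd, u32 *n)
  {
  	DECLARE_COMPLETION_ONSTACK(compl);
  	bool wait = (state == RPMH_ACTIVE_ONLY_STATE);
  	int i, ret = 0;

  	if (!cmd || !n)
  		return -EINVAL;

  	for (i = 0; n[i]; i++) {
  		/* hypothetical 5th argument: completion to signal on tx done */
  		ret = rpmh_write_async(dev, state, cmd, n[i],
  				       wait ? &compl : NULL);
  		if (ret)
  			break;
  		cmd += n[i];
  	}

  	/* completions are additive: wait once per ACTIVE request submitted */
  	while (wait && i--)
  		wait_for_completion(&compl);

  	return ret;
  }

Sleep/wake batches are never waited on here, which is why the on-stack
completion is only handed to ACTIVE requests in this sketch.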