From: Doug Anderson
Date: Fri, 11 May 2018 13:19:02 -0700
Subject: Re: [PATCH v8 09/10] drivers: qcom: rpmh: add support for batch RPMH request
To: Lina Iyer
Cc: Andy Gross, David Brown, linux-arm-msm@vger.kernel.org, "open list:ARM/QUALCOMM SUPPORT", Rajendra Nayak, Bjorn Andersson, LKML, Stephen Boyd, Evan Green, Matthias Kaehlcke, rplsssn@codeaurora.org
In-Reply-To: <20180509170159.29682-10-ilina@codeaurora.org>
References: <20180509170159.29682-1-ilina@codeaurora.org> <20180509170159.29682-10-ilina@codeaurora.org>

Hi,

On Wed, May 9, 2018 at 10:01 AM, Lina Iyer wrote:
> /**
> @@ -77,12 +82,14 @@ struct rpmh_request {
>  * @cache: the list of cached requests
>  * @lock: synchronize access to the controller data
>  * @dirty: was the cache updated since flush
> + * @batch_cache: Cache sleep and wake requests sent as batch
>  */
>  struct rpmh_ctrlr {
> 	struct rsc_drv *drv;
> 	struct list_head cache;
> 	spinlock_t lock;
> 	bool dirty;
> +	const struct rpmh_request *batch_cache[RPMH_MAX_BATCH_CACHE];

I'm pretty confused about why the "batch_cache" is separate from the
normal cache. As far as I can tell the purpose of the two is the same,
but you have two totally separate code paths and data structures.

>  };
>
>  static struct rpmh_ctrlr rpmh_rsc[RPMH_MAX_CTRLR];
> @@ -133,6 +140,7 @@ void rpmh_tx_done(const struct tcs_request *msg, int r)
> 	struct rpmh_request *rpm_msg = container_of(msg, struct rpmh_request,
> 					    msg);
> 	struct completion *compl = rpm_msg->completion;
> +	atomic_t *wc = rpm_msg->wait_count;
>
> 	rpm_msg->err = r;
>
> @@ -143,8 +151,13 @@ void rpmh_tx_done(const struct tcs_request *msg, int r)
> 	kfree(rpm_msg->free);
>
> 	/* Signal the blocking thread we are done */
> -	if (compl)
> -		complete(compl);
> +	if (!compl)
> +		return;

The comment above this "if" block no longer applies to the line next
to it after your patch. ...but below I suggest you get rid of
"wait_count", so maybe this part of the patch will go away.

> +static int cache_batch(struct rpmh_ctrlr *ctrlr,
> +		       struct rpmh_request **rpm_msg, int count)
> +{
> +	unsigned long flags;
> +	int ret = 0;
> +	int index = 0;
> +	int i;
> +
> +	spin_lock_irqsave(&ctrlr->lock, flags);
> +	while (index < RPMH_MAX_BATCH_CACHE && ctrlr->batch_cache[index])
> +		index++;
> +	if (index + count >= RPMH_MAX_BATCH_CACHE) {
> +		ret = -ENOMEM;
> +		goto fail;
> +	}
> +
> +	for (i = 0; i < count; i++)
> +		ctrlr->batch_cache[index + i] = rpm_msg[i];
> +fail:

Nit: this label is for both failure and normal exit, so call it "exit".
> +	spin_unlock_irqrestore(&ctrlr->lock, flags);
> +
> +	return ret;
> +}

As part of my overall confusion about why the batch cache is different
than the normal one: for the normal use case you still call
rpmh_rsc_write_ctrl_data() for things you put in your cache, but you
don't for the batch cache. I still haven't totally figured out what
rpmh_rsc_write_ctrl_data() does, but it seems strange that you don't
do it for the batch cache but you do for the other one.

> +/**
> + * rpmh_write_batch: Write multiple sets of RPMH commands and wait for the
> + * batch to finish.
> + *
> + * @dev: the device making the request
> + * @state: Active/sleep set
> + * @cmd: The payload data
> + * @n: The array of count of elements in each batch, 0 terminated.
> + *
> + * Write a request to the RSC controller without caching. If the request
> + * state is ACTIVE, then the requests are treated as completion request
> + * and sent to the controller immediately. The function waits until all the
> + * commands are complete. If the request was to SLEEP or WAKE_ONLY, then the
> + * request is sent as fire-n-forget and no ack is expected.
> + *
> + * May sleep. Do not call from atomic contexts for ACTIVE_ONLY requests.
> + */
> +int rpmh_write_batch(const struct device *dev, enum rpmh_state state,
> +		     const struct tcs_cmd *cmd, u32 *n)
> +{
> +	struct rpmh_request *rpm_msg[RPMH_MAX_REQ_IN_BATCH] = { NULL };
> +	DECLARE_COMPLETION_ONSTACK(compl);
> +	atomic_t wait_count = ATOMIC_INIT(0);
> +	struct rpmh_ctrlr *ctrlr = get_rpmh_ctrlr(dev);
> +	int count = 0;
> +	int ret, i;
> +
> +	if (IS_ERR(ctrlr) || !cmd || !n)
> +		return -EINVAL;
> +
> +	while (n[count++] > 0)
> +		;
> +	count--;
> +	if (!count || count > RPMH_MAX_REQ_IN_BATCH)
> +		return -EINVAL;
> +
> +	for (i = 0; i < count; i++) {
> +		rpm_msg[i] = __get_rpmh_msg_async(state, cmd, n[i]);
> +		if (IS_ERR_OR_NULL(rpm_msg[i])) {

Just "IS_ERR". It's never NULL.
...also add an i-- somewhere in here or you're going to be kfree()ing
your error value, aren't you?

> +			ret = PTR_ERR(rpm_msg[i]);
> +			for (; i >= 0; i--)
> +				kfree(rpm_msg[i]->free);
> +			return ret;
> +		}
> +		cmd += n[i];
> +	}
> +
> +	if (state != RPMH_ACTIVE_ONLY_STATE)
> +		return cache_batch(ctrlr, rpm_msg, count);

Don't you need to free rpm_msg items in this case?

> +
> +	atomic_set(&wait_count, count);
> +
> +	for (i = 0; i < count; i++) {
> +		rpm_msg[i]->completion = &compl;
> +		rpm_msg[i]->wait_count = &wait_count;
> +		ret = rpmh_rsc_send_data(ctrlr->drv, &rpm_msg[i]->msg);
> +		if (ret) {
> +			int j;
> +
> +			pr_err("Error(%d) sending RPMH message addr=%#x\n",
> +			       ret, rpm_msg[i]->msg.cmds[0].addr);
> +			for (j = i; j < count; j++)
> +				rpmh_tx_done(&rpm_msg[j]->msg, ret);

You're just using rpmh_tx_done() to free memory? Note that you'll
probably do your error handling in this function a favor if you rename
__get_rpmh_msg_async() to __fill_rpmh_msg() and remove the memory
allocation from there. Then you can do one big allocation of the whole
array in rpmh_write_batch() and then you'll only have one free at the
end...

> +			break;

"break" seems wrong here. You'll end up waiting for the completion,
then I guess timing out, then returning -ETIMEDOUT?

> +		}
> +	}
> +
> +	ret = wait_for_completion_timeout(&compl, RPMH_TIMEOUT_MS);

The "wait_count" abstraction is confusing and I believe it's not
needed. I think you can remove it and change the above to this
(untested) code:

	time_left = RPMH_TIMEOUT_MS;
	for (i = 0; i < count; i++) {
		time_left = wait_for_completion_timeout(&compl, time_left);
		if (!time_left)
			return -ETIMEDOUT;
	}

...specifically completions are additive, so just wait "count" times
and then the reader doesn't need to learn your new wait_count
abstraction and try to reason about it.
...and, actually, I argue in other replies that this shouldn't use a
timeout, so even cleaner:

	for (i = 0; i < count; i++)
		wait_for_completion(&compl);

Once you do that, you can also get rid of the need to pre-count "n",
so all your loops turn into:

	for (i = 0; n[i]; i++)

I suppose you might want to get rid of "RPMH_MAX_REQ_IN_BATCH" and
dynamically allocate your array too, but that seems sane. As per above
it seems like you should just dynamically allocate a whole array of
"struct rpmh_request" items at once anyway.

---

> +	return (ret > 0) ? 0 : -ETIMEDOUT;
> +
> +}
> +EXPORT_SYMBOL(rpmh_write_batch);

Perhaps an even simpler thing than taking all my advice above: can't
you just add an optional completion to rpmh_write_async()? That would
just be stuffed into rpm_msg. Now your batch code would just be a
bunch of calls to rpmh_write_async() with an equal number of
wait_for_completion() calls at the end. Is there a reason that
wouldn't work? You'd get rid of _a lot_ of code.

-Doug