From: Doug Anderson
Date: Fri, 24 Jul 2020 10:42:55 -0700
Subject: Re: [PATCH] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free
To: Stephen Boyd
Cc: Andy Gross, Bjorn Andersson, LKML, linux-arm-msm, Maulik Shah, Lina Iyer
In-Reply-To: <20200723010137.3127584-1-swboyd@chromium.org>
X-Mailing-List: linux-kernel@vger.kernel.org

Hi,

On Wed, Jul 22, 2020 at 6:01 PM Stephen Boyd wrote:
>
> The busy loop in rpmh_rsc_send_data() is written with the assumption
> that the udelay will be preempted by the tcs_tx_done() irq handler
> when the TCS slots are all full. This doesn't hold true when the
> calling thread is an irqthread and the tcs_tx_done() irq is also an
> irqthread. That's because kernel irqthreads are SCHED_FIFO and thus
> need to voluntarily give up priority by calling into the scheduler so
> that other threads can run.
>
> I see RCU stalls when I boot with irqthreads on the kernel commandline
> because the modem remoteproc driver is trying to send an rpmh async
> message from an irqthread that needs to give up the CPU for the rpmh
> irqthread to run and clear out tcs slots.
>
>  rcu: INFO: rcu_preempt self-detected stall on CPU
>  rcu: 0-....: (1 GPs behind) idle=402/1/0x4000000000000002 softirq=2108/2109 fqs=4920
>   (t=21016 jiffies g=2933 q=590)
>  Task dump for CPU 0:
>  irq/11-smp2p    R  running task        0   148      2 0x00000028
>  Call trace:
>   dump_backtrace+0x0/0x154
>   show_stack+0x20/0x2c
>   sched_show_task+0xfc/0x108
>   dump_cpu_task+0x44/0x50
>   rcu_dump_cpu_stacks+0xa4/0xf8
>   rcu_sched_clock_irq+0x7dc/0xaa8
>   update_process_times+0x30/0x54
>   tick_sched_handle+0x50/0x64
>   tick_sched_timer+0x4c/0x8c
>   __hrtimer_run_queues+0x21c/0x36c
>   hrtimer_interrupt+0xf0/0x22c
>   arch_timer_handler_phys+0x40/0x50
>   handle_percpu_devid_irq+0x114/0x25c
>   __handle_domain_irq+0x84/0xc4
>   gic_handle_irq+0xd0/0x178
>   el1_irq+0xbc/0x180
>   save_return_addr+0x18/0x28
>   return_address+0x54/0x88
>   preempt_count_sub+0x40/0x88
>   _raw_spin_unlock_irqrestore+0x4c/0x6c
>   ___ratelimit+0xd0/0x128
>   rpmh_rsc_send_data+0x24c/0x378
>   __rpmh_write+0x1b0/0x208
>   rpmh_write_async+0x90/0xbc
>   rpmhpd_send_corner+0x60/0x8c
>   rpmhpd_aggregate_corner+0x8c/0x124
>   rpmhpd_set_performance_state+0x8c/0xbc
>   _genpd_set_performance_state+0xdc/0x1b8
>   dev_pm_genpd_set_performance_state+0xb8/0xf8
>   q6v5_pds_disable+0x34/0x60 [qcom_q6v5_mss]
>   qcom_msa_handover+0x38/0x44 [qcom_q6v5_mss]
>   q6v5_handover_interrupt+0x24/0x3c [qcom_q6v5]
>   handle_nested_irq+0xd0/0x138
>   qcom_smp2p_intr+0x188/0x200
>   irq_thread_fn+0x2c/0x70
>   irq_thread+0xfc/0x14c
>   kthread+0x11c/0x12c
>   ret_from_fork+0x10/0x18
>
> This busy loop naturally lends itself to using a wait queue so that
> each thread that tries to send a message will sleep waiting on the
> waitqueue and only be woken up when a free slot is available. This
> should make things more predictable too because the scheduler will be
> able to sleep tasks that are waiting on a free tcs instead of the busy
> loop we currently have today.
>
> Cc: Douglas Anderson
> Cc: Maulik Shah
> Cc: Lina Iyer
> Signed-off-by: Stephen Boyd
> ---
>  drivers/soc/qcom/rpmh-internal.h |   2 +
>  drivers/soc/qcom/rpmh-rsc.c      | 101 ++++++++++++-------------
>  2 files changed, 41 insertions(+), 62 deletions(-)
>
> diff --git a/drivers/soc/qcom/rpmh-internal.h b/drivers/soc/qcom/rpmh-internal.h
> index ef60e790a750..9a325bac58fe 100644
> --- a/drivers/soc/qcom/rpmh-internal.h
> +++ b/drivers/soc/qcom/rpmh-internal.h
> @@ -8,6 +8,7 @@
>  #define __RPM_INTERNAL_H__
>
>  #include <linux/bitmap.h>
> +#include <linux/wait.h>
>  #include <soc/qcom/tcs.h>
>
>  #define TCS_TYPE_NR			4
> @@ -118,6 +119,7 @@ struct rsc_drv {
>  	struct tcs_group tcs[TCS_TYPE_NR];
>  	DECLARE_BITMAP(tcs_in_use, MAX_TCS_NR);
>  	spinlock_t lock;
> +	wait_queue_head_t tcs_wait;

nit: this structure has a kernel-doc comment above it describing the
elements. Could you add yours?
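Maybe something as simple as this, slotted into the existing kernel-doc
block above the struct (the wording is just a strawman, of course):

	 * @tcs_wait:   Wait queue used to wait for @tcs_in_use to free up
	 *              a slot.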
>  	struct rpmh_ctrlr client;
>  };
>
> diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
> index 076fd27f3081..6c758b052c95 100644
> --- a/drivers/soc/qcom/rpmh-rsc.c
> +++ b/drivers/soc/qcom/rpmh-rsc.c
> @@ -19,6 +19,7 @@
>  #include <linux/platform_device.h>
>  #include <linux/slab.h>
>  #include <linux/spinlock.h>
> +#include <linux/wait.h>
>
>  #include <soc/qcom/cmd-db.h>
>  #include <soc/qcom/tcs.h>
> @@ -444,6 +445,7 @@ static irqreturn_t tcs_tx_done(int irq, void *p)
>  	 */
>  	if (!drv->tcs[ACTIVE_TCS].num_tcs)
>  		enable_tcs_irq(drv, i, false);
> +	wake_up(&drv->tcs_wait);
>  	spin_unlock(&drv->lock);

nit: I think it's slightly better to do the wake_up() after the
spin_unlock(), no? The first thing the other task will do is try to
grab the spinlock, so we might as well give it a chance of succeeding
without looping. I don't see any reason why we'd need to be holding
the lock while calling wake_up().
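In other words, something like this (untested, purely to illustrate
the ordering I mean):

	if (!drv->tcs[ACTIVE_TCS].num_tcs)
		enable_tcs_irq(drv, i, false);
	spin_unlock(&drv->lock);
	/*
	 * Wake waiters only after dropping drv->lock so that the first
	 * thing the woken task does (grab drv->lock) has a chance of
	 * succeeding right away.
	 */
	wake_up(&drv->tcs_wait);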
>
>  	if (req)
>  		rpmh_tx_done(req, err);
> @@ -562,44 +564,59 @@ static int find_free_tcs(struct tcs_group *tcs)
>  	return -EBUSY;
>  }
>
> +static int claim_tcs_for_req(struct rsc_drv *drv, struct tcs_group *tcs,
> +			     const struct tcs_request *msg)

nit: I know this is a short function and kernel convention doesn't
strictly require comments in front of all functions. However, every
other function in this file has a comment, and I had a really hard
time dealing with the rpmh-rsc code before the comments. Could you add
one for your function, even if it's short? One thing that would be
nice to note is that the only error it returns is -EBUSY. See below.

> +{
> +	int ret;
> +
> +	/*
> +	 * The h/w does not like if we send a request to the same address,
> +	 * when one is already in-flight or being processed.
> +	 */
> +	ret = check_for_req_inflight(drv, tcs, msg);
> +	if (ret)
> +		return ret;
> +
> +	return find_free_tcs(tcs);
> +}
> +
>  /**
> - * tcs_write() - Store messages into a TCS right now, or return -EBUSY.
> + * rpmh_rsc_send_data() - Write / trigger active-only message.
>   * @drv: The controller.
>   * @msg: The data to be sent.
>   *
> - * Grabs a TCS for ACTIVE_ONLY transfers and writes the messages to it.
> - *
> - * If there are no free TCSes for ACTIVE_ONLY transfers or if a command for
> - * the same address is already transferring returns -EBUSY which means the
> - * client should retry shortly.
> + * NOTES:
> + * - This is only used for "ACTIVE_ONLY" since the limitations of this
> + *   function don't make sense for sleep/wake cases.
> + * - To do the transfer, we will grab a whole TCS for ourselves--we don't
> + *   try to share. If there are none available we'll wait indefinitely
> + *   for a free one.
> + * - This function will not wait for the commands to be finished, only for
> + *   data to be programmed into the RPMh. See rpmh_tx_done() which will
> + *   be called when the transfer is fully complete.
> + * - This function must be called with interrupts enabled. If the hardware
> + *   is busy doing someone else's transfer we need that transfer to fully
> + *   finish so that we can have the hardware, and to fully finish it needs
> + *   the interrupt handler to run. If the interrupts is set to run on the
> + *   active CPU this can never happen if interrupts are disabled.
>   *
> - * Return: 0 on success, -EBUSY if client should retry, or an error.
> - * Client should have interrupts enabled for a bit before retrying.
> + * Return: 0 on success, -EINVAL on error.
>   */
> -static int tcs_write(struct rsc_drv *drv, const struct tcs_request *msg)
> +int rpmh_rsc_send_data(struct rsc_drv *drv, const struct tcs_request *msg)
>  {
>  	struct tcs_group *tcs;
>  	int tcs_id;
>  	unsigned long flags;
> -	int ret;
>
>  	tcs = get_tcs_for_msg(drv, msg);
>  	if (IS_ERR(tcs))
>  		return PTR_ERR(tcs);
>
>  	spin_lock_irqsave(&drv->lock, flags);
> -	/*
> -	 * The h/w does not like if we send a request to the same address,
> -	 * when one is already in-flight or being processed.
> -	 */
> -	ret = check_for_req_inflight(drv, tcs, msg);
> -	if (ret)
> -		goto unlock;
> -
> -	ret = find_free_tcs(tcs);
> -	if (ret < 0)
> -		goto unlock;
> -	tcs_id = ret;
> +	wait_event_lock_irq(drv->tcs_wait,
> +			    (tcs_id = claim_tcs_for_req(drv, tcs, msg)) >= 0,

Even though claim_tcs_for_req() can only fail with -EBUSY today (IOW
it never returns error codes other than -EBUSY), should we handle
other errors anyway? If we don't, claim_tcs_for_req() should be very
clear that it must not return any error other than -EBUSY.
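If we did want to handle it defensively, I'd picture something like
this in rpmh_rsc_send_data() (untested; it reuses the locals from your
patch and assumes only -EBUSY means "keep waiting for a slot"):

	wait_event_lock_irq(drv->tcs_wait,
			    (tcs_id = claim_tcs_for_req(drv, tcs, msg)) != -EBUSY,
			    drv->lock);
	if (tcs_id < 0) {
		/* A real error from claim_tcs_for_req(), not a full TCS. */
		spin_unlock_irqrestore(&drv->lock, flags);
		return tcs_id;
	}

Either way is fine with me as long as the -EBUSY-only constraint ends
up documented.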
Other than small nits this looks great to me, thanks!

-Doug