Subject: Re: [PATCH] remoteproc: Use unbounded/high priority workqueue for recovery work
From: Mukesh Ojha
Date: Mon, 24 Jan 2022 19:34:09 +0530
Message-ID: <49c9611e-f8cd-3a18-19b3-123da5d8d8eb@quicinc.com>
In-Reply-To: <1642620644-19297-1-git-send-email-quic_mojha@quicinc.com>
References: <1642620644-19297-1-git-send-email-quic_mojha@quicinc.com>
X-Mailing-List: linux-kernel@vger.kernel.org

+linux-arm-msm

Thanks,
-Mukesh

On 1/20/2022 1:00 AM, Mukesh Ojha wrote:
> There could be a scenario where there is too much load (n tasks affined
> to it) on the core where the rproc recovery work is queued, in which
> case the recovery takes several seconds to complete.
>
> If we make this queue unbounded and move it to the high-priority worker
> pool, the recovery work can be finished in less time.
>
> Signed-off-by: Mukesh Ojha
> ---
>  drivers/remoteproc/remoteproc_core.c | 14 ++++++++++++--
>  1 file changed, 12 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/remoteproc/remoteproc_core.c b/drivers/remoteproc/remoteproc_core.c
> index 69f51ac..efb6316 100644
> --- a/drivers/remoteproc/remoteproc_core.c
> +++ b/drivers/remoteproc/remoteproc_core.c
> @@ -59,6 +59,7 @@ static int rproc_release_carveout(struct rproc *rproc,
>
>  /* Unique indices for remoteproc devices */
>  static DEFINE_IDA(rproc_dev_index);
> +static struct workqueue_struct *rproc_recovery_wq;
>
>  static const char * const rproc_crash_names[] = {
>          [RPROC_MMUFAULT]        = "mmufault",
> @@ -2752,8 +2753,10 @@ void rproc_report_crash(struct rproc *rproc, enum rproc_crash_type type)
>          dev_err(&rproc->dev, "crash detected in %s: type %s\n",
>                  rproc->name, rproc_crash_to_string(type));
>
> -        /* Have a worker handle the error; ensure system is not suspended */
> -        queue_work(system_freezable_wq, &rproc->crash_handler);
> +        if (rproc_recovery_wq)
> +                queue_work(rproc_recovery_wq, &rproc->crash_handler);
> +        else
> +                queue_work(system_freezable_wq, &rproc->crash_handler);
>  }
>  EXPORT_SYMBOL(rproc_report_crash);
>
> @@ -2802,6 +2805,11 @@ static void __exit rproc_exit_panic(void)
>
>  static int __init remoteproc_init(void)
>  {
> +        rproc_recovery_wq = alloc_workqueue("rproc_recovery_wq", WQ_UNBOUND |
> +                                            WQ_HIGHPRI | WQ_FREEZABLE, 0);
> +        if (!rproc_recovery_wq)
> +                pr_err("remoteproc: creation of rproc_recovery_wq failed\n");
> +
>          rproc_init_sysfs();
>          rproc_init_debugfs();
>          rproc_init_cdev();
> @@ -2818,6 +2826,8 @@ static void __exit remoteproc_exit(void)
>          rproc_exit_panic();
>          rproc_exit_debugfs();
>          rproc_exit_sysfs();
> +        if (rproc_recovery_wq)
> +                destroy_workqueue(rproc_recovery_wq);
>  }
>  module_exit(remoteproc_exit);
>
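For readers less familiar with the workqueue flags used in the patch above, here is a minimal, self-contained sketch of the same pattern as a hypothetical standalone module (not the remoteproc code itself): allocate an unbound, high-priority, freezable workqueue at init, queue work on it with a fallback to system_freezable_wq if allocation failed, and tear it down on exit. Names such as recovery_wq, recovery_work and recovery_fn are illustrative only.

        // SPDX-License-Identifier: GPL-2.0
        /* Hypothetical sketch of the unbound/high-priority workqueue pattern. */
        #include <linux/module.h>
        #include <linux/workqueue.h>

        static struct workqueue_struct *recovery_wq;    /* illustrative name */

        static void recovery_fn(struct work_struct *work)
        {
                pr_info("recovery work running on an unbound high-priority worker\n");
        }

        static DECLARE_WORK(recovery_work, recovery_fn);

        static int __init wq_sketch_init(void)
        {
                /*
                 * WQ_UNBOUND: workers are not pinned to the submitting CPU,
                 * so a busy core does not delay the work.
                 * WQ_HIGHPRI: the work is served by the high-priority pool.
                 * WQ_FREEZABLE: keeps the suspend/resume behaviour of
                 * system_freezable_wq.
                 */
                recovery_wq = alloc_workqueue("recovery_wq",
                                              WQ_UNBOUND | WQ_HIGHPRI | WQ_FREEZABLE, 0);
                if (!recovery_wq)
                        pr_err("recovery_wq allocation failed, using system_freezable_wq\n");

                /* Queue on the dedicated wq if it exists, else on the system wq. */
                queue_work(recovery_wq ? recovery_wq : system_freezable_wq,
                           &recovery_work);
                return 0;
        }

        static void __exit wq_sketch_exit(void)
        {
                /* Make sure the work has finished before tearing anything down. */
                flush_work(&recovery_work);
                if (recovery_wq)
                        destroy_workqueue(recovery_wq);
        }

        module_init(wq_sketch_init);
        module_exit(wq_sketch_exit);
        MODULE_LICENSE("GPL");

The point of the flag combination is latency under load: WQ_UNBOUND lets the scheduler run the worker on any available CPU rather than the possibly overloaded CPU that queued the work, and WQ_HIGHPRI serves it from the high-priority worker pool, which is what the patch relies on to shorten recovery time.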