From: Mukesh Ojha
Subject: [PATCH] remoteproc: Use unbounded/high priority workqueue for recovery work
Date: Thu, 20 Jan 2022 01:00:44 +0530
Message-ID: <1642620644-19297-1-git-send-email-quic_mojha@quicinc.com>
X-Mailing-List: linux-kernel@vger.kernel.org

There can be a scenario where the core on which the rproc recovery work
is queued is heavily loaded (with a number of tasks affined to it), and
as a result the recovery takes several seconds to complete. If we
instead queue this work on an unbounded, high-priority workqueue, the
worker is not bound to that busy core and the recovery can be completed
in less time.

Signed-off-by: Mukesh Ojha
---
 drivers/remoteproc/remoteproc_core.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/drivers/remoteproc/remoteproc_core.c b/drivers/remoteproc/remoteproc_core.c
index 69f51ac..efb6316 100644
--- a/drivers/remoteproc/remoteproc_core.c
+++ b/drivers/remoteproc/remoteproc_core.c
@@ -59,6 +59,7 @@ static int rproc_release_carveout(struct rproc *rproc,
 
 /* Unique indices for remoteproc devices */
 static DEFINE_IDA(rproc_dev_index);
+static struct workqueue_struct *rproc_recovery_wq;
 
 static const char * const rproc_crash_names[] = {
 	[RPROC_MMUFAULT]	= "mmufault",
@@ -2752,8 +2753,10 @@ void rproc_report_crash(struct rproc *rproc, enum rproc_crash_type type)
 	dev_err(&rproc->dev, "crash detected in %s: type %s\n",
 		rproc->name, rproc_crash_to_string(type));
 
-	/* Have a worker handle the error; ensure system is not suspended */
-	queue_work(system_freezable_wq, &rproc->crash_handler);
+	if (rproc_recovery_wq)
+		queue_work(rproc_recovery_wq, &rproc->crash_handler);
+	else
+		queue_work(system_freezable_wq, &rproc->crash_handler);
 }
 EXPORT_SYMBOL(rproc_report_crash);
 
@@ -2802,6 +2805,11 @@ static void __exit rproc_exit_panic(void)
 
 static int __init remoteproc_init(void)
 {
+	rproc_recovery_wq = alloc_workqueue("rproc_recovery_wq", WQ_UNBOUND |
+					    WQ_HIGHPRI | WQ_FREEZABLE, 0);
+	if (!rproc_recovery_wq)
+		pr_err("remoteproc: creation of rproc_recovery_wq failed\n");
+
 	rproc_init_sysfs();
 	rproc_init_debugfs();
 	rproc_init_cdev();
@@ -2818,6 +2826,8 @@ static void __exit remoteproc_exit(void)
 	rproc_exit_panic();
 	rproc_exit_debugfs();
 	rproc_exit_sysfs();
+	if (rproc_recovery_wq)
+		destroy_workqueue(rproc_recovery_wq);
 }
 module_exit(remoteproc_exit);
-- 
2.7.4