Date: Thu, 21 May 2020 15:06:16 -0700 (PDT)
Subject: Re: [PATCH v2 2/3] irqchip/sifive-plic: Setup cpuhp once after boot CPU handler is present
In-Reply-To: <20200518091441.94843-3-anup.patel@wdc.com>
CC: Paul Walmsley, tglx@linutronix.de, jason@lakedaemon.net, Marc Zyngier,
    Atish Patra, Alistair Francis, anup@brainfault.org,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
    Anup Patel, stable@vger.kernel.org
From: Palmer Dabbelt
To: Anup Patel

On Mon, 18 May 2020 02:14:40 PDT (-0700), Anup Patel wrote:
> For multiple PLIC instances, the plic_init() is called once for each
> PLIC instance. Due to this we have two issues:
> 1. cpuhp_setup_state() is called multiple times
> 2. plic_starting_cpu() can crash for boot CPU if cpuhp_setup_state()
>    is called before boot CPU PLIC handler is available.
>
> This patch fixes both above issues.
>
> Fixes: f1ad1133b18f ("irqchip/sifive-plic: Add support for multiple PLICs")
> Cc: stable@vger.kernel.org
> Signed-off-by: Anup Patel
> ---
>  drivers/irqchip/irq-sifive-plic.c | 14 ++++++++++++--
>  1 file changed, 12 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/irqchip/irq-sifive-plic.c b/drivers/irqchip/irq-sifive-plic.c
> index 9f7f8ce88c00..6c54abf5cc5e 100644
> --- a/drivers/irqchip/irq-sifive-plic.c
> +++ b/drivers/irqchip/irq-sifive-plic.c
> @@ -76,6 +76,7 @@ struct plic_handler {
>  	void __iomem		*enable_base;
>  	struct plic_priv	*priv;
>  };
> +static bool plic_cpuhp_setup_done;
>  static DEFINE_PER_CPU(struct plic_handler, plic_handlers);
>
>  static inline void plic_toggle(struct plic_handler *handler,
> @@ -285,6 +286,7 @@ static int __init plic_init(struct device_node *node,
>  	int error = 0, nr_contexts, nr_handlers = 0, i;
>  	u32 nr_irqs;
>  	struct plic_priv *priv;
> +	struct plic_handler *handler;
>
>  	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
>  	if (!priv)
> @@ -313,7 +315,6 @@ static int __init plic_init(struct device_node *node,
>
>  	for (i = 0; i < nr_contexts; i++) {
>  		struct of_phandle_args parent;
> -		struct plic_handler *handler;
>  		irq_hw_number_t hwirq;
>  		int cpu, hartid;
>
> @@ -367,9 +368,18 @@ static int __init plic_init(struct device_node *node,
>  		nr_handlers++;
>  	}
>
> -	cpuhp_setup_state(CPUHP_AP_IRQ_SIFIVE_PLIC_STARTING,
> +	/*
> +	 * We can have multiple PLIC instances so setup cpuhp state only
> +	 * when context handler for current/boot CPU is present.
> +	 */
> +	handler = this_cpu_ptr(&plic_handlers);
> +	if (handler->present && !plic_cpuhp_setup_done) {
> +		cpuhp_setup_state(CPUHP_AP_IRQ_SIFIVE_PLIC_STARTING,
>  				  "irqchip/sifive/plic:starting",
>  				  plic_starting_cpu, plic_dying_cpu);
> +		plic_cpuhp_setup_done = true;

So presumably something else is preventing multiple plic_init() calls from
executing at the same time?  Assuming that's the case:

Reviewed-by: Palmer Dabbelt
Acked-by: Palmer Dabbelt

> +	}
> +
>  	pr_info("mapped %d interrupts with %d handlers for %d contexts.\n",
>  		nr_irqs, nr_handlers, nr_contexts);
>  	set_handle_irq(plic_handle_irq);
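
For illustration only, not part of the patch: if plic_init() really could
race with itself, the one-time registration could be made robust with an
atomic test-and-set instead of a plain bool. A minimal sketch, reusing the
driver's CPUHP state and callbacks; the plic_setup_cpuhp_once() helper and
the atomic flag are hypothetical:

#include <linux/atomic.h>
#include <linux/cpuhotplug.h>

/* Hypothetical atomic flag in place of the patch's plain bool. */
static atomic_t plic_cpuhp_done = ATOMIC_INIT(0);

static void plic_setup_cpuhp_once(void)
{
	/*
	 * atomic_cmpxchg() returns the old value, so only the first
	 * caller sees 0 and performs the cpuhp registration.
	 */
	if (atomic_cmpxchg(&plic_cpuhp_done, 0, 1) == 0)
		cpuhp_setup_state(CPUHP_AP_IRQ_SIFIVE_PLIC_STARTING,
				  "irqchip/sifive/plic:starting",
				  plic_starting_cpu, plic_dying_cpu);
}

In practice plic_init() is registered via IRQCHIP_DECLARE() and should be
invoked from of_irq_init() during early boot, before any secondary CPU is
online, so the plain bool in the patch looks sufficient.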