From: Anup Patel
Date: Fri, 22 May 2020 12:16:24 +0530
Subject: Re: [PATCH v2 2/3] irqchip/sifive-plic: Setup cpuhp once after boot CPU handler is present
To: Palmer Dabbelt
Cc: Anup Patel, Paul Walmsley, Thomas Gleixner, Jason Cooper, Marc Zyngier, Atish Patra, Alistair Francis, linux-riscv, "linux-kernel@vger.kernel.org List", stable@vger.kernel.org
References: <20200518091441.94843-3-anup.patel@wdc.com>
List-ID: linux-kernel@vger.kernel.org

On Fri, May 22, 2020 at 3:36 AM Palmer Dabbelt wrote:
>
> On Mon, 18 May 2020 02:14:40 PDT (-0700), Anup Patel wrote:
> > For multiple PLIC instances, plic_init() is called once for each
> > PLIC instance. Due to this we have two issues:
> > 1. cpuhp_setup_state() is called multiple times
> > 2. plic_starting_cpu() can crash for the boot CPU if cpuhp_setup_state()
> >    is called before the boot CPU PLIC handler is available.
> >
> > This patch fixes both of the above issues.
> >
> > Fixes: f1ad1133b18f ("irqchip/sifive-plic: Add support for multiple PLICs")
> > Cc: stable@vger.kernel.org
> > Signed-off-by: Anup Patel
> > ---
> >  drivers/irqchip/irq-sifive-plic.c | 14 ++++++++++++--
> >  1 file changed, 12 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/irqchip/irq-sifive-plic.c b/drivers/irqchip/irq-sifive-plic.c
> > index 9f7f8ce88c00..6c54abf5cc5e 100644
> > --- a/drivers/irqchip/irq-sifive-plic.c
> > +++ b/drivers/irqchip/irq-sifive-plic.c
> > @@ -76,6 +76,7 @@ struct plic_handler {
> >  	void __iomem *enable_base;
> >  	struct plic_priv *priv;
> >  };
> > +static bool plic_cpuhp_setup_done;
> >  static DEFINE_PER_CPU(struct plic_handler, plic_handlers);
> >
> >  static inline void plic_toggle(struct plic_handler *handler,
> > @@ -285,6 +286,7 @@ static int __init plic_init(struct device_node *node,
> >  	int error = 0, nr_contexts, nr_handlers = 0, i;
> >  	u32 nr_irqs;
> >  	struct plic_priv *priv;
> > +	struct plic_handler *handler;
> >
> >  	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
> >  	if (!priv)
> > @@ -313,7 +315,6 @@ static int __init plic_init(struct device_node *node,
> >
> >  	for (i = 0; i < nr_contexts; i++) {
> >  		struct of_phandle_args parent;
> > -		struct plic_handler *handler;
> >  		irq_hw_number_t hwirq;
> >  		int cpu, hartid;
> >
> > @@ -367,9 +368,18 @@ static int __init plic_init(struct device_node *node,
> >  		nr_handlers++;
> >  	}
> >
> > -	cpuhp_setup_state(CPUHP_AP_IRQ_SIFIVE_PLIC_STARTING,
> > +	/*
> > +	 * We can have multiple PLIC instances so setup cpuhp state only
> > +	 * when context handler for current/boot CPU is present.
> > +	 */
> > +	handler = this_cpu_ptr(&plic_handlers);
> > +	if (handler->present && !plic_cpuhp_setup_done) {
> > +		cpuhp_setup_state(CPUHP_AP_IRQ_SIFIVE_PLIC_STARTING,
> >  				  "irqchip/sifive/plic:starting",
> >  				  plic_starting_cpu, plic_dying_cpu);
> > +		plic_cpuhp_setup_done = true;
>
> So presumably something else is preventing multiple plic_init() calls from
> executing at the same time?
That's the case AFAIK: interrupt controller and timer probing happens
sequentially on the boot CPU before any secondary CPUs are brought up,
so plic_init() calls cannot run concurrently.

> Reviewed-by: Palmer Dabbelt
> Acked-by: Palmer Dabbelt
>
> > +	}
> > +
> >  	pr_info("mapped %d interrupts with %d handlers for %d contexts.\n",
> >  		nr_irqs, nr_handlers, nr_contexts);
> >  	set_handle_irq(plic_handle_irq);

Thanks,
Anup