Subject: Re: [PATCH v1] irqchip/irq-sifive-plic: Add syscore callbacks for hibernation
Date: Mon, 6 Feb 2023 14:13:11 +0800
From: Mason Huo
To: Marc Zyngier
CC: Thomas Gleixner, Palmer Dabbelt, Paul Walmsley, Ley Foon Tan, Sia Jee Heng
References: <20230113094216.116036-1-mason.huo@starfivetech.com> <864js01j26.wl-maz@kernel.org>
In-Reply-To: <864js01j26.wl-maz@kernel.org>

On 2023/2/5 18:51, Marc Zyngier wrote:
> On Fri, 13 Jan 2023 09:42:16 +0000,
> Mason Huo wrote:
>>
>> The priority and enable registers of plic will be reset
>> during hibernation power cycle in poweroff mode,
>> add the syscore callbacks to save/restore those registers.
>>
>> Signed-off-by: Mason Huo
>> Reviewed-by: Ley Foon Tan
>> Reviewed-by: Sia Jee Heng
>> ---
>>  drivers/irqchip/irq-sifive-plic.c | 93 ++++++++++++++++++++++++++++++-
>>  1 file changed, 91 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/irqchip/irq-sifive-plic.c b/drivers/irqchip/irq-sifive-plic.c
>> index ff47bd0dec45..80306de45d2b 100644
>> --- a/drivers/irqchip/irq-sifive-plic.c
>> +++ b/drivers/irqchip/irq-sifive-plic.c
>> @@ -17,6 +17,7 @@
>>  #include
>>  #include
>>  #include
>> +#include <linux/syscore_ops.h>
>>  #include
>>
>>  /*
>> @@ -67,6 +68,8 @@ struct plic_priv {
>>  	struct irq_domain *irqdomain;
>>  	void __iomem *regs;
>>  	unsigned long plic_quirks;
>> +	unsigned int nr_irqs;
>> +	u32 *priority_reg;
>>  };
>>
>>  struct plic_handler {
>> @@ -79,10 +82,13 @@ struct plic_handler {
>>  	raw_spinlock_t enable_lock;
>>  	void __iomem *enable_base;
>>  	struct plic_priv *priv;
>> +	/* To record interrupts that are enabled before suspend. */
>> +	u32 enable_reg[MAX_DEVICES / 32];
>
> What does MAX_DEVICES represent here? How is it related to the number
> of interrupts you're trying to save? It seems to be related to the
> number of CPUs, so it hardly makes any sense so far.
>

The comment for this macro says: "The largest number supported by devices
marked as 'sifive,plic-1.0.0', is 1024, of which device 0 is defined as
non-existent by the RISC-V Privileged Spec."
As far as I understand, the *device* here means a HW IRQ source, and HW IRQ 0
is non-existent.

>>  };
>>  static int plic_parent_irq __ro_after_init;
>>  static bool plic_cpuhp_setup_done __ro_after_init;
>>  static DEFINE_PER_CPU(struct plic_handler, plic_handlers);
>> +static struct plic_priv *priv_data;
>>
>>  static int plic_irq_set_type(struct irq_data *d, unsigned int type);
>>
>> @@ -229,6 +235,78 @@ static int plic_irq_set_type(struct irq_data *d, unsigned int type)
>>  	return IRQ_SET_MASK_OK;
>>  }
>>
>> +static void plic_irq_resume(void)
>> +{
>> +	unsigned int i, cpu;
>> +	u32 __iomem *reg;
>> +
>> +	for (i = 0; i < priv_data->nr_irqs; i++)
>> +		writel(priv_data->priority_reg[i],
>> +		       priv_data->regs + PRIORITY_BASE + i * PRIORITY_PER_ID);
>
> From what I can tell, this driver uses exactly 2 priorities: 0 and 1.
> And yet you use a full 32bit to encode those. Does it seem like a good
> idea?
>

Yes, currently this driver uses only 2 priorities.
But according to the SiFive spec, the priority register is a 32-bit register
and it supports 7 priority levels.

>> +
>> +	for_each_cpu(cpu, cpu_present_mask) {
>> +		struct plic_handler *handler = per_cpu_ptr(&plic_handlers, cpu);
>> +
>> +		if (!handler->present)
>> +			continue;
>> +
>> +		for (i = 0; i < DIV_ROUND_UP(priv_data->nr_irqs, 32); i++) {
>> +			reg = handler->enable_base + i * sizeof(u32);
>> +			raw_spin_lock(&handler->enable_lock);
>> +			writel(handler->enable_reg[i], reg);
>> +			raw_spin_unlock(&handler->enable_lock);
>
> Why do you need to take/release the lock around *each* register
> access? Isn't that lock constant for a given CPU?
>

OK, will fix it in the next version (a rough sketch of what I have in mind,
with the lock taken once per handler, is at the end of this mail).

>> +		}
>> +	}
>> +}
>> +
>> +static int plic_irq_suspend(void)
>> +{
>> +	unsigned int i, cpu;
>> +	u32 __iomem *reg;
>> +
>> +	for (i = 0; i < priv_data->nr_irqs; i++)
>> +		priv_data->priority_reg[i] =
>> +			readl(priv_data->regs + PRIORITY_BASE + i * PRIORITY_PER_ID);
>> +
>> +	for_each_cpu(cpu, cpu_present_mask) {
>> +		struct plic_handler *handler = per_cpu_ptr(&plic_handlers, cpu);
>> +
>> +		if (!handler->present)
>> +			continue;
>> +
>> +		for (i = 0; i < DIV_ROUND_UP(priv_data->nr_irqs, 32); i++) {
>> +			reg = handler->enable_base + i * sizeof(u32);
>> +			raw_spin_lock(&handler->enable_lock);
>> +			handler->enable_reg[i] = readl(reg);
>> +			raw_spin_unlock(&handler->enable_lock);
>
> Same remarks.
>
> 	M.
>
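
For reference, here is a rough, untested sketch of the resume path with
enable_lock taken once per handler instead of around each register access.
It only reuses the names already introduced by this patch (priv_data,
plic_handlers, enable_reg, nr_irqs); plic_irq_suspend would change the same
way:

static void plic_irq_resume(void)
{
	unsigned int i, cpu;
	u32 __iomem *reg;

	/* Restore the per-interrupt priority registers first. */
	for (i = 0; i < priv_data->nr_irqs; i++)
		writel(priv_data->priority_reg[i],
		       priv_data->regs + PRIORITY_BASE + i * PRIORITY_PER_ID);

	for_each_cpu(cpu, cpu_present_mask) {
		struct plic_handler *handler = per_cpu_ptr(&plic_handlers, cpu);

		if (!handler->present)
			continue;

		/* One lock/unlock per handler, not per enable register. */
		raw_spin_lock(&handler->enable_lock);
		for (i = 0; i < DIV_ROUND_UP(priv_data->nr_irqs, 32); i++) {
			reg = handler->enable_base + i * sizeof(u32);
			writel(handler->enable_reg[i], reg);
		}
		raw_spin_unlock(&handler->enable_lock);
	}
}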