Date: Thu, 26 Jan 2023 14:18:46 +0000
Message-ID: <86y1ppl6pl.wl-maz@kernel.org>
From: Marc Zyngier <maz@kernel.org>
To: Vignesh Raghavendra <vigneshr@ti.com>
Cc: Nishanth Menon <nm@ti.com>, Tero Kristo <kristo@kernel.org>,
    Santosh Shilimkar <ssantosh@kernel.org>,
    Thomas Gleixner <tglx@linutronix.de>,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH 2/2] irqchip: irq-ti-sci-inta: Introduce IRQ affinity support
In-Reply-To: <20230122081607.959474-3-vigneshr@ti.com>
References: <20230122081607.959474-1-vigneshr@ti.com>
    <20230122081607.959474-3-vigneshr@ti.com>

On Sun, 22 Jan 2023 08:16:07 +0000,
Vignesh Raghavendra <vigneshr@ti.com> wrote:
> 
> Add support for setting IRQ affinity for VINTs which have only one event
> mapped to them. This just involves changing the parent IRQs affinity
> (GIC/INTR). Flag VINTs which have affinity configured so as to not
> aggregate/map more events to such VINTs.
> 
> Signed-off-by: Vignesh Raghavendra <vigneshr@ti.com>
> ---
>  drivers/irqchip/irq-ti-sci-inta.c | 39 +++++++++++++++++++++++++++++++
>  1 file changed, 39 insertions(+)
> 
> diff --git a/drivers/irqchip/irq-ti-sci-inta.c b/drivers/irqchip/irq-ti-sci-inta.c
> index f1419d24568e..237cb4707cb8 100644
> --- a/drivers/irqchip/irq-ti-sci-inta.c
> +++ b/drivers/irqchip/irq-ti-sci-inta.c
> @@ -64,6 +64,7 @@ struct ti_sci_inta_event_desc {
>   * @events:		Array of event descriptors assigned to this vint.
>   * @parent_virq:	Linux IRQ number that gets attached to parent
>   * @vint_id:		TISCI vint ID
> + * @affinity_managed	flag to indicate VINT affinity is managed
>   */
>  struct ti_sci_inta_vint_desc {
>  	struct irq_domain *domain;
> @@ -72,6 +73,7 @@ struct ti_sci_inta_vint_desc {
>  	struct ti_sci_inta_event_desc events[MAX_EVENTS_PER_VINT];
>  	unsigned int parent_virq;
>  	u16 vint_id;
> +	bool affinity_managed;
>  };
>  
>  /**
> @@ -334,6 +336,8 @@ static struct ti_sci_inta_event_desc *ti_sci_inta_alloc_irq(struct irq_domain *d
>  	vint_id = ti_sci_get_free_resource(inta->vint);
>  	if (vint_id == TI_SCI_RESOURCE_NULL) {
>  		list_for_each_entry(vint_desc, &inta->vint_list, list) {
> +			if (vint_desc->affinity_managed)
> +				continue;
>  			free_bit = find_first_zero_bit(vint_desc->event_map,
>  						       MAX_EVENTS_PER_VINT);
>  			if (free_bit != MAX_EVENTS_PER_VINT)
> @@ -434,6 +438,7 @@ static int ti_sci_inta_request_resources(struct irq_data *data)
>  		return PTR_ERR(event_desc);
>  
>  	data->chip_data = event_desc;
> +	irq_data_update_effective_affinity(data, cpu_online_mask);
>  
>  	return 0;
>  }
> @@ -504,11 +509,45 @@ static void ti_sci_inta_ack_irq(struct irq_data *data)
>  	ti_sci_inta_manage_event(data, VINT_STATUS_OFFSET);
>  }
>  
> +#ifdef CONFIG_SMP
> +static int ti_sci_inta_set_affinity(struct irq_data *d,
> +				    const struct cpumask *mask_val, bool force)
> +{
> +	struct ti_sci_inta_event_desc *event_desc;
> +	struct ti_sci_inta_vint_desc *vint_desc;
> +	struct irq_data *parent_irq_data;
> +
> +	if (cpumask_equal(irq_data_get_effective_affinity_mask(d), mask_val))
> +		return 0;
> +
> +	event_desc = irq_data_get_irq_chip_data(d);
> +	if (event_desc) {
> +		vint_desc = to_vint_desc(event_desc, event_desc->vint_bit);
> +
> +		/*
> +		 * Cannot set affinity if there is more than one event
> +		 * mapped to same VINT
> +		 */
> +		if (bitmap_weight(vint_desc->event_map, MAX_EVENTS_PER_VINT) > 1)
> +			return -EINVAL;
> +
> +		vint_desc->affinity_managed = true;
> +
> +		irq_data_update_effective_affinity(d, mask_val);
> +		parent_irq_data = irq_get_irq_data(vint_desc->parent_virq);
> +		if (parent_irq_data->chip->irq_set_affinity)
> +			return parent_irq_data->chip->irq_set_affinity(parent_irq_data, mask_val, force);

This looks completely wrong. You still have a chained irqchip on all
paths, and have to do some horrible probing to work out:

- which parent interrupt this is
- how many interrupts are connected to it

And then the fun begins:

- You have one interrupt that is standalone, so its affinity can be
  moved
- An unrelated driver gets probed, and one of its interrupts gets
  lumped together with the one above
- Now it cannot be moved anymore, and userspace complains

The rule is very simple: chained irqchip, no affinity management.
Either you reserve a pool of direct interrupts that have affinity
management and no muxing, or you keep the current approach. But I'm
strongly opposed to this sort of approach.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.
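
For readers following the thread, a minimal sketch of the chained-irqchip
pattern under discussion may help. Everything below uses invented names
(demo_mux, demo_mux_handler, demo_mux_setup) and is not code from
irq-ti-sci-inta.c or from the patch; it only illustrates the structural
point that all children demultiplexed by a chained handler are delivered in
the context of one parent interrupt, so the parent's CPU routing is the only
affinity that physically exists, and moving it moves every muxed child at
once.

#include <linux/bitops.h>
#include <linux/irq.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/irqdomain.h>

/* Hypothetical mux state: one parent line fanning out to up to 64 events. */
struct demo_mux {
	struct irq_domain *domain;
	DECLARE_BITMAP(pending, 64);
};

static void demo_mux_handler(struct irq_desc *desc)
{
	struct demo_mux *mux = irq_desc_get_handler_data(desc);
	struct irq_chip *chip = irq_desc_get_chip(desc);
	unsigned int bit;

	chained_irq_enter(chip, desc);

	/*
	 * A real driver would read a pending register here. Every child
	 * handled below runs on whichever CPU the *parent* interrupt is
	 * routed to; the children have no delivery affinity of their own.
	 */
	for_each_set_bit(bit, mux->pending, 64)
		generic_handle_domain_irq(mux->domain, bit);

	chained_irq_exit(chip, desc);
}

static void demo_mux_setup(struct demo_mux *mux, unsigned int parent_virq)
{
	/*
	 * The parent is consumed as a chained (demultiplexing) handler
	 * rather than through request_irq(); the muxed children are only
	 * reachable through it.
	 */
	irq_set_chained_handler_and_data(parent_virq, demo_mux_handler, mux);
}

This is why exposing .irq_set_affinity on the muxed children only looks
correct while a VINT happens to carry a single event: once a second event is
aggregated onto the same VINT, the two children share one parent line and
can no longer be steered independently, which is the sequence described in
the review above.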