From: Thomas Gleixner
To: "Herrenschmidt, Benjamin", maz@kernel.org, "Saidi, Ali"
Cc: jason@lakedaemon.net, linux-kernel@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, "Woodhouse, David",
    "Zilberman, Zeev", "Machulsky, Zorik"
Subject: Re: [PATCH] irqchip/gic-v3-its: Don't try to move a disabled irq
In-Reply-To:
References: <622fb6be108e894ee365d6b213535c8b@kernel.org>
Date: Mon, 08 Jun 2020 15:48:56 +0200
Message-ID: <87mu5dacs7.fsf@nanos.tec.linutronix.de>
"Herrenschmidt, Benjamin" writes:

> On Wed, 2020-06-03 at 16:16 +0100, Marc Zyngier wrote:
>> > My original patch should certainly check activated and not disabled.
>> > With that, do you still have reservations, Marc?
>>
>> I'd still prefer it if we could do something in core code, rather
>> than spreading these checks in the individual drivers. If we can't,
>> fair enough. But it feels like the core set_affinity function could
>> just do the same thing in a single place (although started vs
>> activated is yet another piece of the puzzle I didn't consider,
>> and the ITS doesn't need the "can_reserve" thing).
>
> For the sake of fixing the problem in a timely and backportable way I
> would suggest first merging the fix, *then* fixing the core code.

The "fix" is just wrong:

>  	if (cpu != its_dev->event_map.col_map[id]) {
>  		target_col = &its_dev->its->collections[cpu];
> -		its_send_movi(its_dev, target_col, id);
> +
> +		/* If the IRQ is disabled a discard was sent so don't move */
> +		if (!irqd_irq_disabled(d))

That check needs to be !irqd_is_activated() because enable_irq() does
not touch anything affinity related.

> +			its_send_movi(its_dev, target_col, id);
> +
>  		its_dev->event_map.col_map[id] = cpu;
>  		irq_data_update_effective_affinity(d, cpumask_of(cpu));

And then these associations are disconnected from reality in any case.

Something like the completely untested patch below should work.
Thanks,

        tglx

---
 arch/x86/kernel/apic/vector.c |   21 +++------------------
 kernel/irq/manage.c           |   37 +++++++++++++++++++++++++++++++++++--
 2 files changed, 38 insertions(+), 20 deletions(-)

--- a/arch/x86/kernel/apic/vector.c
+++ b/arch/x86/kernel/apic/vector.c
@@ -446,12 +446,10 @@ static int x86_vector_activate(struct ir
 	trace_vector_activate(irqd->irq, apicd->is_managed,
 			      apicd->can_reserve, reserve);

-	/* Nothing to do for fixed assigned vectors */
-	if (!apicd->can_reserve && !apicd->is_managed)
-		return 0;
-
 	raw_spin_lock_irqsave(&vector_lock, flags);
-	if (reserve || irqd_is_managed_and_shutdown(irqd))
+	if (!apicd->can_reserve && !apicd->is_managed)
+		assign_irq_vector_any_locked(irqd);
+	else if (reserve || irqd_is_managed_and_shutdown(irqd))
 		vector_assign_managed_shutdown(irqd);
 	else if (apicd->is_managed)
 		ret = activate_managed(irqd);
@@ -775,21 +773,8 @@ void lapic_offline(void)
 static int apic_set_affinity(struct irq_data *irqd,
 			     const struct cpumask *dest, bool force)
 {
-	struct apic_chip_data *apicd = apic_chip_data(irqd);
 	int err;

-	/*
-	 * Core code can call here for inactive interrupts. For inactive
-	 * interrupts which use managed or reservation mode there is no
-	 * point in going through the vector assignment right now as the
-	 * activation will assign a vector which fits the destination
-	 * cpumask. Let the core code store the destination mask and be
-	 * done with it.
-	 */
-	if (!irqd_is_activated(irqd) &&
-	    (apicd->is_managed || apicd->can_reserve))
-		return IRQ_SET_MASK_OK;
-
 	raw_spin_lock(&vector_lock);
 	cpumask_and(vector_searchmask, dest, cpu_online_mask);
 	if (irqd_affinity_is_managed(irqd))
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -195,9 +195,9 @@ void irq_set_thread_affinity(struct irq_
 			set_bit(IRQTF_AFFINITY, &action->thread_flags);
 }

+#ifdef CONFIG_GENERIC_IRQ_EFFECTIVE_AFF_MASK
 static void irq_validate_effective_affinity(struct irq_data *data)
 {
-#ifdef CONFIG_GENERIC_IRQ_EFFECTIVE_AFF_MASK
 	const struct cpumask *m = irq_data_get_effective_affinity_mask(data);
 	struct irq_chip *chip = irq_data_get_irq_chip(data);

@@ -205,9 +205,19 @@ static void irq_validate_effective_affin
 		return;
 	pr_warn_once("irq_chip %s did not update eff. affinity mask of irq %u\n",
 		     chip->name, data->irq);
-#endif
 }

+static inline void irq_init_effective_affinity(struct irq_data *data,
+					       const struct cpumask *mask)
+{
+	cpumask_copy(irq_data_get_effective_affinity_mask(data), mask);
+}
+#else
+static inline void irq_validate_effective_affinity(struct irq_data *data) { }
+static inline void irq_init_effective_affinity(struct irq_data *data,
+					       const struct cpumask *mask) { }
+#endif
+
 int irq_do_set_affinity(struct irq_data *data, const struct cpumask *mask,
 			bool force)
 {
@@ -304,6 +314,26 @@ static int irq_try_set_affinity(struct i
 	return ret;
 }

+static bool irq_set_affinity_deactivated(struct irq_data *data,
+					 const struct cpumask *mask, bool force)
+{
+	struct irq_desc *desc = irq_data_to_desc(data);
+
+	/*
+	 * If the interrupt is not yet activated, just store the affinity
+	 * mask and do not call the chip driver at all. On activation the
+	 * driver has to make sure anyway that the interrupt is in a
+	 * usable state so startup works.
+	 */
+	if (!IS_ENABLED(CONFIG_IRQ_DOMAIN_HIERARCHY) || irqd_is_activated(data))
+		return false;
+
+	cpumask_copy(desc->irq_common_data.affinity, mask);
+	irq_init_effective_affinity(data, mask);
+	irqd_set(data, IRQD_AFFINITY_SET);
+	return true;
+}
+
 int irq_set_affinity_locked(struct irq_data *data, const struct cpumask *mask,
 			    bool force)
 {
@@ -314,6 +344,9 @@ int irq_set_affinity_locked(struct irq_d
 	if (!chip || !chip->irq_set_affinity)
 		return -EINVAL;

+	if (irq_set_affinity_deactivated(data, mask, force))
+		return 0;
+
 	if (irq_can_move_pcntxt(data) && !irqd_is_setaffinity_pending(data)) {
 		ret = irq_try_set_affinity(data, mask, force);
 	} else {