From: Anup Patel
Date: Sun, 26 Apr 2020 18:16:52 +0530
Subject: Re: [PATCH] irqchip/sifive-plic: allow many cores to handle IRQs
To: Zong Li
Cc: Palmer Dabbelt, Paul Walmsley, linux-kernel@vger.kernel.org, linux-riscv, David Abdurachmanov
In-Reply-To: <20200426110740.123638-1-zong.li@sifive.com>

On Sun, Apr 26, 2020 at 4:37 PM Zong Li wrote:
>
> Currently, the driver forces the IRQs to be handled by only one core. This
> patch provides a way to enable other cores to handle IRQs if needed, so
> users can decide how many cores they want by default through a boot
> argument.
>
> Use the 'irqaffinity' boot argument to determine the affinity. If there is
> no irqaffinity in the DTS or kernel configuration, use the default IRQ
> affinity, so all harts would try to claim the IRQ.
>
> For example, add irqaffinity=0 in the chosen node to set the IRQ affinity
> to hart 0. It also supports more than one hart handling the IRQ, for
> example irqaffinity=0,3,4.
>
> You can change the IRQ affinity from user space using procfs.
> For example, you can make CPU0 and CPU2 serve the IRQ together with the
> following command:
>
> echo 5 > /proc/irq/<irq>/smp_affinity
>
> Signed-off-by: Zong Li
> ---
>  drivers/irqchip/irq-sifive-plic.c | 21 +++++++--------------
>  1 file changed, 7 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/irqchip/irq-sifive-plic.c b/drivers/irqchip/irq-sifive-plic.c
> index d0a71febdadc..bc1440d54185 100644
> --- a/drivers/irqchip/irq-sifive-plic.c
> +++ b/drivers/irqchip/irq-sifive-plic.c
> @@ -111,15 +111,12 @@ static inline void plic_irq_toggle(const struct cpumask *mask,
>  static void plic_irq_unmask(struct irq_data *d)
>  {
>  	struct cpumask amask;
> -	unsigned int cpu;
>  	struct plic_priv *priv = irq_get_chip_data(d->irq);
>
>  	cpumask_and(&amask, &priv->lmask, cpu_online_mask);
> -	cpu = cpumask_any_and(irq_data_get_affinity_mask(d),
> -			      &amask);
> -	if (WARN_ON_ONCE(cpu >= nr_cpu_ids))
> -		return;
> -	plic_irq_toggle(cpumask_of(cpu), d, 1);
> +	cpumask_and(&amask, &amask, irq_data_get_affinity_mask(d));
> +
> +	plic_irq_toggle(&amask, d, 1);
>  }
>
>  static void plic_irq_mask(struct irq_data *d)
> @@ -133,24 +130,20 @@ static void plic_irq_mask(struct irq_data *d)
>  static int plic_set_affinity(struct irq_data *d,
>  			     const struct cpumask *mask_val, bool force)
>  {
> -	unsigned int cpu;
>  	struct cpumask amask;
>  	struct plic_priv *priv = irq_get_chip_data(d->irq);
>
>  	cpumask_and(&amask, &priv->lmask, mask_val);
>
>  	if (force)
> -		cpu = cpumask_first(&amask);
> +		cpumask_copy(&amask, mask_val);
>  	else
> -		cpu = cpumask_any_and(&amask, cpu_online_mask);
> -
> -	if (cpu >= nr_cpu_ids)
> -		return -EINVAL;
> +		cpumask_and(&amask, &amask, cpu_online_mask);
>
>  	plic_irq_toggle(&priv->lmask, d, 0);
> -	plic_irq_toggle(cpumask_of(cpu), d, 1);
> +	plic_irq_toggle(&amask, d, 1);
>
> -	irq_data_update_effective_affinity(d, cpumask_of(cpu));
> +	irq_data_update_effective_affinity(d, &amask);
>
>  	return IRQ_SET_MASK_OK_DONE;
>  }
> --
> 2.26.1
>

I strongly oppose (NACK) this patch for performance reasons.

In the PLIC, if we enable an IRQ X for N CPUs, then when IRQ X occurs:
1) All N CPUs take the interrupt.
2) All N CPUs try to read the PLIC CLAIM register.
3) Only one of the CPUs sees IRQ X through the CLAIM register; the other
   N - 1 CPUs see no interrupt and return to whatever they were doing.

In other words, N - 1 CPUs waste CPU time every time IRQ X occurs (a rough
sketch of this claim path is appended below).

Example 1: one application doing heavy network traffic will degrade the
performance of other applications because, with every network RX/TX
interrupt, N - 1 CPUs waste CPU time trying to process the network
interrupt.

Example 2: one application doing heavy MMC/SD traffic will degrade the
performance of other applications because, with every SPI read/write
interrupt, N - 1 CPUs waste CPU time trying to process it.

In fact, the current PLIC approach is actually a performance optimization.
The current implementation also works fine with the in-kernel load
balancer and user-space load balancers.

Regards,
Anup
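
A rough, illustrative sketch of the claim path described above. The names,
the register mapping, and handle_device_irq() are hypothetical placeholders,
not the actual irq-sifive-plic code; it only shows why every hart with the
IRQ enabled pays for each interrupt while only one of them ends up
servicing it:

#include <stdint.h>

/* Service the device that raised hwirq (stubbed out for this sketch). */
static void handle_device_irq(uint32_t hwirq)
{
	(void)hwirq;
}

/* Per-hart claim/complete register, assumed already mapped for this hart. */
static volatile uint32_t *plic_claim;

void plic_external_interrupt(void)
{
	uint32_t hwirq;

	/*
	 * Every hart that has IRQ X enabled takes the external interrupt
	 * and races to read the claim/complete register.
	 */
	while ((hwirq = *plic_claim) != 0) {
		/* The winning hart services the interrupt source... */
		handle_device_irq(hwirq);
		/* ...and writes the ID back to complete the interrupt. */
		*plic_claim = hwirq;
	}
	/*
	 * Harts that read 0 fall straight through: they took the trap and
	 * performed a claim read for nothing, which is the N - 1 wasted
	 * wakeups described above.
	 */
}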