From: Anup Patel
Date: Mon, 27 Apr 2020 12:29:17 +0530
Subject: Re: [PATCH] irqchip/sifive-plic: allow many cores to handle IRQs
To: Greentime Hu
Cc: Zong Li, David Abdurachmanov, Marc Zyngier, "linux-kernel@vger.kernel.org List", Palmer Dabbelt, Paul Walmsley, linux-riscv
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Apr 27, 2020 at 12:19 PM Greentime Hu wrote:
>
> Zong Li wrote on Sunday, Apr 26, 2020 at 11:35 PM:
> >
> > On Sun, Apr 26, 2020 at 11:21 PM Anup Patel wrote:
> > >
> > > On Sun, Apr 26, 2020 at 8:42 PM Zong Li wrote:
> > > >
> > > > On Sun, Apr 26, 2020 at 9:38 PM Anup Patel wrote:
> > > > >
> > > > > +Mark Z
> > > > >
> > > > > On Sun, Apr 26, 2020 at 6:49 PM Zong Li wrote:
> > > > > >
> > > > > > On Sun, Apr 26, 2020 at 8:47 PM Anup Patel wrote:
> > > > > > >
> > > > > > > On Sun, Apr 26, 2020 at 4:37 PM Zong Li wrote:
> > > > > > > >
> > > > > > > > Currently, the driver forces the IRQs to be handled by only one
> > > > > > > > core. This patch provides a way to enable other cores to handle
> > > > > > > > IRQs if needed, so users can decide how many cores they want by
> > > > > > > > default through a boot argument.
> > > > > > > >
> > > > > > > > Use the 'irqaffinity' boot argument to determine affinity. If there
> > > > > > > > is no irqaffinity in dts or kernel configuration, use the irq
> > > > > > > > default affinity, so all harts would try to claim the IRQ.
> > > > > > > >
> > > > > > > > For example, add irqaffinity=0 in the chosen node to set irq
> > > > > > > > affinity to hart 0. It also supports more than one hart handling
> > > > > > > > the irq, such as irqaffinity=0,3,4.
> > > > > > > >
> > > > > > > > You can change IRQ affinity from user-space using procfs. For
> > > > > > > > example, you can make CPU0 and CPU2 serve IRQ together by the
> > > > > > > > following command:
> > > > > > > >
> > > > > > > > echo 4 > /proc/irq//smp_affinity
> > > > > > > >
> > > > > > > > Signed-off-by: Zong Li
> > > > > > > > ---
> > > > > > > >  drivers/irqchip/irq-sifive-plic.c | 21 +++++++--------------
> > > > > > > >  1 file changed, 7 insertions(+), 14 deletions(-)
> > > > > > > >
> > > > > > > > diff --git a/drivers/irqchip/irq-sifive-plic.c b/drivers/irqchip/irq-sifive-plic.c
> > > > > > > > index d0a71febdadc..bc1440d54185 100644
> > > > > > > > --- a/drivers/irqchip/irq-sifive-plic.c
> > > > > > > > +++ b/drivers/irqchip/irq-sifive-plic.c
> > > > > > > > @@ -111,15 +111,12 @@ static inline void plic_irq_toggle(const struct cpumask *mask,
> > > > > > > >  static void plic_irq_unmask(struct irq_data *d)
> > > > > > > >  {
> > > > > > > >         struct cpumask amask;
> > > > > > > > -       unsigned int cpu;
> > > > > > > >         struct plic_priv *priv = irq_get_chip_data(d->irq);
> > > > > > > >
> > > > > > > >         cpumask_and(&amask, &priv->lmask, cpu_online_mask);
> > > > > > > > -       cpu = cpumask_any_and(irq_data_get_affinity_mask(d),
> > > > > > > > -                             &amask);
> > > > > > > > -       if (WARN_ON_ONCE(cpu >= nr_cpu_ids))
> > > > > > > > -               return;
> > > > > > > > -       plic_irq_toggle(cpumask_of(cpu), d, 1);
> > > > > > > > +       cpumask_and(&amask, &amask, irq_data_get_affinity_mask(d));
> > > > > > > > +
> > > > > > > > +       plic_irq_toggle(&amask, d, 1);
> > > > > > > >  }
> > > > > > > >
> > > > > > > >  static void plic_irq_mask(struct irq_data *d)
> > > > > > > > @@ -133,24 +130,20 @@ static void plic_irq_mask(struct irq_data *d)
> > > > > > > >  static int plic_set_affinity(struct irq_data *d,
> > > > > > > >                              const struct cpumask *mask_val, bool force)
> > > > > > > >  {
> > > > > > > > -       unsigned int cpu;
> > > > > > > >         struct cpumask amask;
> > > > > > > >         struct plic_priv *priv = irq_get_chip_data(d->irq);
> > > > > > > >
> > > > > > > >         cpumask_and(&amask, &priv->lmask, mask_val);
> > > > > > > >
> > > > > > > >         if (force)
> > > > > > > > -               cpu = cpumask_first(&amask);
> > > > > > > > +               cpumask_copy(&amask, mask_val);
> > > > > > > >         else
> > > > > > > > -               cpu = cpumask_any_and(&amask, cpu_online_mask);
> > > > > > > > -
> > > > > > > > -       if (cpu >= nr_cpu_ids)
> > > > > > > > -               return -EINVAL;
> > > > > > > > +               cpumask_and(&amask, &amask, cpu_online_mask);
> > > > > > > >
> > > > > > > >         plic_irq_toggle(&priv->lmask, d, 0);
> > > > > > > > -       plic_irq_toggle(cpumask_of(cpu), d, 1);
> > > > > > > > +       plic_irq_toggle(&amask, d, 1);
> > > > > > > >
> > > > > > > > -       irq_data_update_effective_affinity(d, cpumask_of(cpu));
> > > > > > > > +       irq_data_update_effective_affinity(d, &amask);
> > > > > > > >
> > > > > > > >         return IRQ_SET_MASK_OK_DONE;
> > > > > > > >  }
> > > > > > > > --
> > > > > > > > 2.26.1
> > > > > > >
> > > > > > > I strongly oppose (NACK) this patch due to performance reasons.
> > > > > > >
> > > > > > > In PLIC, if we enable an IRQ X for N CPUs then when IRQ X occurs:
> > > > > > > 1) All N CPUs will take the interrupt
> > > > > > > 2) All N CPUs will try to read the PLIC CLAIM register
> > > > > > > 3) Only one of the CPUs will see IRQ X using the CLAIM register,
> > > > > > > but the other N - 1 CPUs will see no interrupt and return back to
> > > > > > > what they were doing. In other words, N - 1 CPUs will just waste
> > > > > > > CPU time every time IRQ X occurs.
> > > > > > >
> > > > > > > Example 1: one application doing heavy network traffic will
> > > > > > > degrade the performance of other applications because with every
> > > > > > > network RX/TX interrupt, N - 1 CPUs will waste CPU time trying to
> > > > > > > process the network interrupt.
> > > > > > >
> > > > > > > Example 2: one application doing heavy MMC/SD traffic will
> > > > > > > degrade the performance of other applications because with every
> > > > > > > SPI read/write interrupt, N - 1 CPUs will waste CPU time trying
> > > > > > > to process it.
>
> Hi Anup,
>
> If there is a case where a lot of interrupts from a single irq number
> come in very quickly, the first hart is still serving the first
> interrupt and still in its top half, so it hasn't re-enabled the
> interrupt yet while the second interrupt is already waiting; only the
> rest of the harts can serve it. If they are masked, the interrupt
> won't be served right away. In this case, I think this patch can get
> better performance. Maybe we can still keep this patch for users to
> configure for their own use case?

The way you described the issue, it clearly means that the other device
driver you are using does not have an optimized interrupt handler, and
that driver should be fixed right away. I don't see the point in hacking
the PLIC driver and slowing it down because some other device driver
spends too much time in interrupt context.
Seeing your comment, I quickly looked at the Cadence MACB driver, which
is very important on the SiFive Unleashed. It turns out that the Cadence
MACB driver does not have a well-designed top-half and bottom-half. The
best thing would be to change the Cadence MACB driver to use threaded
IRQs so that the majority of packet-processing work on the SiFive
Unleashed is done in a kernel thread (which is preemptible).

Regards,
Anup