Subject: Re: [PATCH] irqchip/sifive-plic: allow many cores to handle IRQs
From: Greentime Hu
Date: Mon, 27 Apr 2020 14:49:42 +0800
To: Zong Li
Cc: Anup Patel, David Abdurachmanov, Marc Zyngier, "linux-kernel@vger.kernel.org List", Palmer Dabbelt, Paul Walmsley, linux-riscv
References: <20200426110740.123638-1-zong.li@sifive.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Zong Li wrote on Sun, Apr 26, 2020 at 11:35 PM:
>
> On Sun, Apr 26, 2020 at 11:21 PM Anup Patel wrote:
> >
> > On Sun, Apr 26, 2020 at 8:42 PM Zong Li wrote:
> > >
> > > On Sun, Apr 26, 2020 at 9:38 PM Anup Patel wrote:
> > > >
> > > > +Mark Z
> > > >
> > > > On Sun, Apr 26, 2020 at 6:49 PM Zong Li wrote:
> > > > >
> > > > > On Sun, Apr 26, 2020 at 8:47 PM Anup Patel wrote:
> > > > > >
> > > > > > On Sun, Apr 26, 2020 at 4:37 PM Zong Li wrote:
> > > > > > >
> > > > > > > Currently, the driver forces the IRQs to be handled by only one core. This
> > > > > > > patch provides a way to let other cores handle IRQs if needed, so users
> > > > > > > can decide how many cores they want by default via a boot argument.
> > > > > > >
> > > > > > > Use the 'irqaffinity' boot argument to determine affinity. If there is no
> > > > > > > irqaffinity in the DTS or kernel configuration, use the default IRQ
> > > > > > > affinity, so all harts will try to claim the IRQ.
> > > > > > >
> > > > > > > For example, add irqaffinity=0 in the chosen node to set the IRQ affinity
> > > > > > > to hart 0. More than one hart can also handle the IRQ, e.g. set
> > > > > > > irqaffinity=0,3,4.
> > > > > > >
> > > > > > > You can change IRQ affinity from user space using procfs. For example,
> > > > > > > you can make CPU0 and CPU2 serve the IRQ together with the following
> > > > > > > command:
> > > > > > >
> > > > > > > echo 5 > /proc/irq/<irq>/smp_affinity
> > > > > > >
> > > > > > > Signed-off-by: Zong Li
> > > > > > > ---
> > > > > > >  drivers/irqchip/irq-sifive-plic.c | 21 +++++++--------------
> > > > > > >  1 file changed, 7 insertions(+), 14 deletions(-)
> > > > > > >
> > > > > > > diff --git a/drivers/irqchip/irq-sifive-plic.c b/drivers/irqchip/irq-sifive-plic.c
> > > > > > > index d0a71febdadc..bc1440d54185 100644
> > > > > > > --- a/drivers/irqchip/irq-sifive-plic.c
> > > > > > > +++ b/drivers/irqchip/irq-sifive-plic.c
> > > > > > > @@ -111,15 +111,12 @@ static inline void plic_irq_toggle(const struct cpumask *mask,
> > > > > > >  static void plic_irq_unmask(struct irq_data *d)
> > > > > > >  {
> > > > > > >  	struct cpumask amask;
> > > > > > > -	unsigned int cpu;
> > > > > > >  	struct plic_priv *priv = irq_get_chip_data(d->irq);
> > > > > > >
> > > > > > >  	cpumask_and(&amask, &priv->lmask, cpu_online_mask);
> > > > > > > -	cpu = cpumask_any_and(irq_data_get_affinity_mask(d),
> > > > > > > -			      &amask);
> > > > > > > -	if (WARN_ON_ONCE(cpu >= nr_cpu_ids))
> > > > > > > -		return;
> > > > > > > -	plic_irq_toggle(cpumask_of(cpu), d, 1);
> > > > > > > +	cpumask_and(&amask, &amask, irq_data_get_affinity_mask(d));
> > > > > > > +
> > > > > > > +	plic_irq_toggle(&amask, d, 1);
> > > > > > >  }
> > > > > > >
> > > > > > >  static void plic_irq_mask(struct irq_data *d)
> > > > > > > @@ -133,24 +130,20 @@ static void plic_irq_mask(struct irq_data *d)
> > > > > > >  static int plic_set_affinity(struct irq_data *d,
> > > > > > >  			     const struct cpumask *mask_val, bool force)
> > > > > > >  {
> > > > > > > -	unsigned int cpu;
> > > > > > >  	struct cpumask amask;
> > > > > > >  	struct plic_priv *priv = irq_get_chip_data(d->irq);
> > > > > > >
> > > > > > >  	cpumask_and(&amask, &priv->lmask, mask_val);
> > > > > > >
> > > > > > >  	if (force)
> > > > > > > -		cpu = cpumask_first(&amask);
> > > > > > > +		cpumask_copy(&amask, mask_val);
> > > > > > >  	else
> > > > > > > -		cpu = cpumask_any_and(&amask, cpu_online_mask);
> > > > > > > -
> > > > > > > -	if (cpu >= nr_cpu_ids)
> > > > > > > -		return -EINVAL;
> > > > > > > +		cpumask_and(&amask, &amask, cpu_online_mask);
> > > > > > >
> > > > > > >  	plic_irq_toggle(&priv->lmask, d, 0);
> > > > > > > -	plic_irq_toggle(cpumask_of(cpu), d, 1);
> > > > > > > +	plic_irq_toggle(&amask, d, 1);
> > > > > > >
> > > > > > > -	irq_data_update_effective_affinity(d, cpumask_of(cpu));
> > > > > > > +	irq_data_update_effective_affinity(d, &amask);
> > > > > > >
> > > > > > >  	return IRQ_SET_MASK_OK_DONE;
> > > > > > >  }
> > > > > > > --
> > > > > > > 2.26.1
> > > > > > >
> > > > > >
> > > > > > I strongly oppose (NACK) this patch for performance reasons.
> > > > > >
> > > > > > In the PLIC, if we enable an IRQ X for N CPUs, then when IRQ X occurs:
> > > > > > 1) All N CPUs will take the interrupt
> > > > > > 2) All N CPUs will try to read the PLIC CLAIM register
> > > > > > 3) Only one of the CPUs will see IRQ X via the CLAIM register,
> > > > > > while the other N - 1 CPUs will see no interrupt and return to what
> > > > > > they were doing. In other words, N - 1 CPUs will just waste CPU time
> > > > > > every time IRQ X occurs.
> > > > > >
> > > > > > Example 1: an application doing heavy network traffic will degrade
> > > > > > the performance of other applications, because on every network
> > > > > > RX/TX interrupt N - 1 CPUs will waste CPU time trying to process
> > > > > > the network interrupt.
> > > > > >
> > > > > > Example 2: an application doing heavy MMC/SD traffic will degrade
> > > > > > the performance of other applications, because on every SPI
> > > > > > read/write interrupt N - 1 CPUs will waste CPU time trying to
> > > > > > process it.

Hi Anup,

If interrupts from a single IRQ number arrive in quick succession, the first hart may still be serving the first interrupt: it is still in its top half, so it has not re-enabled the interrupt yet while the second interrupt is already pending. Only the rest of the harts can serve it, and if they are masked, the interrupt cannot be served right away. In this case, I think this patch gives better performance. Maybe we can still keep this patch so users can configure it for their own use case?