From: Thomas Gleixner
To: "Dey, Megha", Jason Gunthorpe
Cc: Marc Zyngier, "Jiang, Dave", vkoul@kernel.org, bhelgaas@google.com,
    rafael@kernel.org, gregkh@linuxfoundation.org, hpa@zytor.com,
    alex.williamson@redhat.com, "Pan, Jacob jun", "Raj, Ashok",
    "Liu, Yi L", "Lu, Baolu", "Tian, Kevin", "Kumar, Sanjay K",
    "Luck, Tony", "Lin, Jing", "Williams, Dan J", kwankhede@nvidia.com,
    eric.auger@redhat.com, parav@mellanox.com, "Hansen, Dave",
    netanelg@mellanox.com, shahafs@mellanox.com,
    yan.y.zhao@linux.intel.com, pbonzini@redhat.com, "Ortiz, Samuel",
    "Hossain, Mona", dmaengine@vger.kernel.org,
    linux-kernel@vger.kernel.org, x86@kernel.org,
    linux-pci@vger.kernel.org, kvm@vger.kernel.org
Subject: Re: [PATCH RFC v2 02/18] irq/dev-msi: Add support for a new DEV_MSI irq domain
In-Reply-To: <014ffe59-38d3-b770-e065-dfa2d589adc6@intel.com>
References: <159534667974.28840.2045034360240786644.stgit@djiang5-desk3.ch.intel.com>
 <159534734833.28840.10067945890695808535.stgit@djiang5-desk3.ch.intel.com>
 <878sfbxtzi.wl-maz@kernel.org> <20200722195928.GN2021248@mellanox.com>
 <96a1eb5ccc724790b5404a642583919d@intel.com>
 <20200805221548.GK19097@mellanox.com>
 <70465fd3a7ae428a82e19f98daa779e8@intel.com>
 <20200805225330.GL19097@mellanox.com>
 <630e6a4dc17b49aba32675377f5a50e0@intel.com>
 <20200806001927.GM19097@mellanox.com>
 <87tuxfhf9u.fsf@nanos.tec.linutronix.de>
 <014ffe59-38d3-b770-e065-dfa2d589adc6@intel.com>
Date: Thu, 06 Aug 2020 22:21:11 +0200
Message-ID: <87h7tfh6fc.fsf@nanos.tec.linutronix.de>

Megha,

"Dey, Megha" writes:
> On 8/6/2020 10:10 AM, Thomas Gleixner wrote:
>> If the DEV/MSI domain has its own per IR unit resource management, then
>> you need one per IR unit.
>>
>> If the resource management is solely per device then having a domain
>> per device is the right choice.
>
> The dev-msi domain can be used by other devices if they too would want
> to follow the vector->intel IR->dev-msi IRQ hierarchy. I do create one
> dev-msi IRQ domain instance per IR unit. So I guess for this case, it
> makes most sense to have a dev-msi IRQ domain per IR unit as opposed
> to creating one per individual driver.

I'm not really convinced. I looked at the idxd driver and that has its
own interrupt related resource management for the IMS slots and provides
the mask/unmask callbacks for the interrupt chip via this crude platform
data indirection.

So I don't see the value of the dev-msi domain per IR unit. The domain
itself does not provide much functionality other than indirections, and
you clearly need per device interrupt resource management on the side
plus a customized irq chip anyway. I rather see it as a plain layering
violation.

The point is that your IDXD driver manages the per device IMS slots,
which is an interrupt related resource. The story would be different if
the IMS slots were managed by some central or per IR unit entity, but in
that case you'd need IMS specific domain(s).

So the obvious consequence of the hierarchical irq design is:

    vector -> IR -> IDXD

which makes the control flow of allocating an interrupt for a subdevice
straightforward, following the irq hierarchy rules.

This still wants to inherit the existing msi domain functionality, but
the amount of code required is small, removes all these pointless
indirections and integrates the slot management naturally.
If you expect or know that there are other devices coming up with IMS
integrated, then most of that code can be made a common library. But for
this to make sense, you really want to make sure that these other
devices do not require yet another horrible layer of indirection.

A side note: I just read back on the specification and stumbled over the
following gem:

 "IMS may also optionally support per-message masking and pending bit
  status, similar to the per-vector mask and pending bit array in the
  PCI Express MSI-X capability."

Optionally? Please tell the hardware folks to make this mandatory. We
have enough pain with non-maskable MSI interrupts already, so
introducing yet another non-maskable interrupt trainwreck is not an
option.

For more than a decade now I have been telling HW people not to repeat
the non-maskable MSI failure, but obviously they still think that
non-maskable interrupts are a brilliant idea. I know that HW folks
believe that everything they omit can be fixed in software, but they
have to finally understand that this particular issue _cannot_ be fixed
at all.

Thanks,

        tglx