From: Gabriele Paoloni
To: Arnd Bergmann
Cc: "liviu.dudau@arm.com", "linux-arm-kernel@lists.infradead.org", Yuanzhichang,
    "mark.rutland@arm.com", "devicetree@vger.kernel.org", "lorenzo.pieralisi@arm.com",
    "minyard@acm.org", "linux-pci@vger.kernel.org", "benh@kernel.crashing.org",
    John Garry, "will.deacon@arm.com", "linux-kernel@vger.kernel.org", "xuwei (O)",
    Linuxarm, "zourongrong@gmail.com", "robh+dt@kernel.org", "kantyzc@163.com",
    "linux-serial@vger.kernel.org", "catalin.marinas@arm.com", "olof@lixom.net",
    "bhelgaas@google.com", "zhichang.yuan02@gmail.com", Jason Gunthorpe,
    Thomas Petazzoni
Subject: RE: [PATCH V5 3/3] ARM64 LPC: LPC driver implementation on Hip06
Date: Fri, 18 Nov 2016 12:53:08 +0000

> -----Original Message-----
> From: Arnd Bergmann [mailto:arnd@arndb.de]
> Sent: 18 November 2016 12:24
> To: Gabriele Paoloni
> Subject: Re: [PATCH V5 3/3] ARM64 LPC: LPC driver implementation on Hip06
>
> On Friday, November 18, 2016 12:07:28 PM CET Gabriele Paoloni wrote:
> > > From: Arnd Bergmann [mailto:arnd@arndb.de]
> > > On Monday, November 14, 2016 11:26:25 AM CET liviu.dudau@arm.com wrote:
> > > > On Mon, Nov 14, 2016 at 08:26:42AM +0000, Gabriele Paoloni wrote:
> > > > > > Nope, that is not what it means. It means that PCI devices can
> > > > > > see I/O addresses on the bus that start from 0. There never was
> > > > > > any usage for non-PCI controllers
> > > > >
> > > > > So I am a bit confused...
> > > > > From http://www.firmware.org/1275/bindings/isa/isa0_4d.ps
> > > > > it seems that ISA buses operate on the cpu I/O address range
> > > > > [0, 0xFFF]. I thought that was the reason why for most
> > > > > architectures we have PCIBIOS_MIN_IO equal to 0x1000 (so I
> > > > > thought that ISA controllers usually use [0, PCIBIOS_MIN_IO - 1]).
> > > >
> > > > First of all, cpu I/O addresses are an x86-ism. ARM architectures
> > > > and others have no separate address space for I/O; it is all merged
> > > > into one unified address space. So, on arm/arm64 for example,
> > > > PCIBIOS_MIN_IO = 0 could mean that we don't care about ISA I/O
> > > > because the platform does not support having an ISA bus (e.g.).
> > >
> > > I think to be more specific, PCIBIOS_MIN_IO=0 would indicate that you
> > > cannot have a PCI-to-ISA or PCI-to-LPC bridge in any PCI domain. This
> > > is different from having an LPC master outside of PCI, as that lives
> > > in its own domain and has a separately addressable I/O space.
> >
> > Yes, correct; so if we go for the single-domain solution, architectures
> > that have PCIBIOS_MIN_IO=0 cannot support special devices such as LPC
> > unless we also redefine PCIBIOS_MIN_IO, right?
>
> This is what I was referring to below as the difference between
> a) and b): setting PCIBIOS_MIN_IO=0 means you cannot have LPC
> behind PCI, but it shouldn't stop you from having a separate
> LPC bridge.
>
> > > The PCIBIOS_MIN_DIRECT_IO name still suggests having something
> > > related to PCIBIOS_MIN_IO, but it really isn't. We are talking about
> > > multiple concepts here that are not the same but that are somewhat
> > > related:
> > >
> > > a) keeping PCI devices from allocating low I/O ports on the PCI bus
> > >    that would conflict with ISA devices behind a bridge of the
> > >    same bus.
> > >
> > > b) reserving the low 0x0-0xfff range of the Linux-internal I/O
> > >    space abstraction to a particular LPC or PCI domain to make
> > >    legacy device drivers work that hardcode a particular port
> > >    number.
> > >
> > > c) redirecting inb/outb to call a domain-specific accessor function
> > >    rather than doing the normal MMIO window for an LPC master or,
> > >    more generally, any arbitrary LPC or PCI domain that has a
> > >    nonstandard I/O space.
> > >    [side note: actually if we generalized this, we could avoid
> > >    assigning an MMIO range for the I/O space in the pci-mvebu
> > >    driver, and that would help free up some other remapping
> > >    windows]
> > >
> > > I think there is no need to change a) here, we have PCIBIOS_MIN_IO
> > > today and even if we don't need it, there is no obvious downside.
> > > I would also argue that we can ignore b) for the discussion of
> > > the HiSilicon LPC driver, we just need to assign some range
> > > of logical addresses to each domain.
> > >
> > > That means solving c) is the important problem here, and it
> > > shouldn't be so hard. We can do this either with a single
> > > special domain as in the v5 patch series, or by generalizing it
> > > so that any I/O space mapping gets looked up through the device
> > > pointer of the bus master.
> >
> > I am not very keen on the "generalized" multi-domain solution...
> > Currently the I/O accessor prototypes have an unsigned long addr
> > as input parameter. If we live in a multi-domain I/O system,
> > how can we distinguish inside the accessor which domain addr
> > belongs to?
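Just to make sure we are talking about the same thing, what I have in mind
for c) in the single special domain case is something like the sketch below
(only illustrative, loosely based on the v5 code; arm64_extio_ops and the
field names are placeholders and may not match the actual patch):

struct extio_ops {
	unsigned long start;	/* logical I/O range that gets redirected */
	unsigned long end;	/* inclusive upper bound */
	u64 (*pfin)(void *devobj, unsigned long ptaddr, size_t dlen);
	void (*pfout)(void *devobj, unsigned long ptaddr, u64 outval,
		      size_t dlen);
	void *devpara;		/* host-specific data for the accessors */
};

/* registered once by the LPC host driver at probe time */
extern struct extio_ops *arm64_extio_ops;

/* arch I/O accessor, e.g. in arch/arm64/include/asm/io.h */
static inline u8 inb(unsigned long addr)
{
	/* redirect the special range to the LPC host accessor... */
	if (arm64_extio_ops && addr >= arm64_extio_ops->start &&
	    addr <= arm64_extio_ops->end)
		return arm64_extio_ops->pfin(arm64_extio_ops->devpara,
					     addr, sizeof(u8));
	/* ...everything else keeps using the memory mapped window */
	return readb(PCI_IOBASE + addr);
}

With a single special domain the lookup is a trivial range check; my
question above is about what happens when several such domains coexist
and the same addr value could be claimed by more than one of them.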
> The easiest change compared to the v5 code would be to walk
> a linked list of 'struct extio_ops' structures rather than
> assuming there is only ever one of them. I think one of the
> earlier versions actually did this.

Right, but if my understanding is correct, if we live in a multi-domain
I/O space world, then the input addr that the I/O accessors receive can
be duplicated (for example, the same addr could be valid both in the
standard PCI I/O domain and in our special LPC domain). So effectively,
even if you walk a linked list, there is a problem of
disambiguation... am I right?

> Another option is the IA64 approach mentioned in another subthread
> today, looking up the operations based on an index from the
> upper bits of the port number. If we do this, we probably
> want to do that for all PIO access and replace the entire
> virtual address remapping logic with that. I think Bjorn
> in the past argued in favor of such an approach, while I
> advocated the current scheme for simplicity based on how
> every I/O space these days is just memory mapped (which now
> turned out to be false, both on powerpc and arm64).

This seems really complex... I am a bit worried that we could end up
with the maintainers saying that it is not worth re-inventing the
world just for this special LPC device...

To be honest with you, I would keep things simple for this LPC
and introduce more complex reworks later if more devices need
to be supported.

What if we stick to a single domain for now and introduce a
reserved threshold for the I/O space (say INDIRECT_MAX_IO)?
We would define INDIRECT_MAX_IO as 0 in "include/linux/extio.h"
and define INDIRECT_MAX_IO as 0x1000 in
"arch/arm64/include/asm/io.h" (rough sketch in the P.S. below).

So effectively this threshold can change according to the
architecture, and so far we would only define it for ARM64,
as we need it for ARM64...

Thoughts?

Thanks again

Gab

>
> Arnd
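P.S. just to make the INDIRECT_MAX_IO proposal concrete, this is roughly
what I mean (only a sketch; the inb() body reuses the same hypothetical
arm64_extio_ops hook as in the sketch earlier in this mail and is not the
actual v5 code):

/* include/linux/extio.h: generic default, indirect PIO disabled */
#ifndef INDIRECT_MAX_IO
#define INDIRECT_MAX_IO		0
#endif

/*
 * arch/arm64/include/asm/io.h: reserve the low 4K of logical I/O
 * space for indirect (LPC) accesses
 */
#define INDIRECT_MAX_IO		0x1000

static inline u8 inb(unsigned long addr)
{
	/* only the reserved low range goes through the indirect path */
	if (addr < INDIRECT_MAX_IO && arm64_extio_ops)
		return arm64_extio_ops->pfin(arm64_extio_ops->devpara,
					     addr, sizeof(u8));
	return readb(PCI_IOBASE + addr);
}

The idea is that legacy drivers asking for ports below 0x1000 hit the LPC
accessors, while everything at or above the threshold keeps using the
memory mapped window exactly as today.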