2023-10-31 21:42:09

by Jiaxun Yang

Subject: Overhead of io{read,write}{8,16,32,64} on x86

Hi all,

I'm trying to improve the kernel's support for devices that have I/O
ports mapped into MMIO. That involves converting existing drivers that
use {in,out}{l,w,b} to io{read,write}{8,16,32,64}, so they can benefit
from ioport_map and pci_iomap.
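
To illustrate, a conversion would look roughly like this (the driver
structure, register names and BAR number below are made up; this is
only a sketch):

  /* before: PIO only, port numbers passed around as plain integers */
  outb(val, dev->iobase + MY_REG_CTRL);
  status = inb(dev->iobase + MY_REG_STAT);

  /* after: the __iomem cookie from pci_iomap() covers both a PIO and
   * an MMIO BAR with the same accessors
   */
  dev->regs = pci_iomap(pdev, 0, 0);    /* BAR 0, whole length */
  if (!dev->regs)
          return -ENOMEM;

  iowrite8(val, dev->regs + MY_REG_CTRL);
  status = ioread8(dev->regs + MY_REG_STAT);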

However, the problem is that io{read,write}{8,16,32,64} incur a penalty
on x86 by introducing extra function calls (they are not inlined) and an
extra run-time check for MMIO vs. PIO.
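
For reference, the generic lib/iomap.c implementation dispatches at run
time on the cookie value, roughly like this (simplified sketch, error
handling omitted):

  unsigned int ioread8(const void __iomem *addr)
  {
          unsigned long cookie = (unsigned long __force)addr;

          /* small cookie values encode a port number, larger ones are
           * real MMIO mappings
           */
          if (cookie < PIO_RESERVED)
                  return inb(cookie & PIO_MASK);
          return readb(addr);
  }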

x86 folks, do you think this kind of overhead is acceptable? I expect
most PCI/ISA drivers will need to be converted.

linux-arch folks, do you think it would be better to introduce a variant
of io{read,write}{8,16,32,64} that goes directly to PIO on x86 but keeps
the same functionality on other architectures?

Thanks
- Jiaxun


2023-11-01 09:09:35

by Arnd Bergmann

Subject: Re: Overhead of io{read,write}{8,16,32,64} on x86

On Tue, Oct 31, 2023, at 22:41, Jiaxun Yang wrote:
> Hi all,
>
> I'm trying to improve the kernel's support for devices that have I/O
> ports mapped into MMIO. That involves converting existing drivers that
> use {in,out}{l,w,b} to io{read,write}{8,16,32,64}, so they can benefit
> from ioport_map and pci_iomap.
>
> However, the problem is that io{read,write}{8,16,32,64} incur a penalty
> on x86 by introducing extra function calls (they are not inlined) and
> an extra run-time check for MMIO vs. PIO.
>
> x86 folks, do you think this kind of overhead is acceptable? I expect
> most PCI/ISA drivers will need to be converted.
>
> linux-arch folks, do you think it would be better to introduce a
> variant of io{read,write}{8,16,32,64} that goes directly to PIO on x86
> but keeps the same functionality on other architectures?

I think in general there is not much of a problem here, since the
inb()/outb() operations themselves are extremely slow already; in
particular, outb() writes are non-posted, unlike writeb().

My feeling is that converting to ioread/iowrite is generally a win
for any driver that already needs to support both cases (e.g.
serial-8250) since this can unify the two code paths.
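
As an illustration (not the actual 8250 code; the structure and names
here are made up), a driver that currently carries both paths:

  static u8 my_reg_read(struct my_port *p, int offset)
  {
          if (p->use_mmio)
                  return readb(p->membase + offset);
          return inb(p->iobase + offset);
  }

could collapse into a single path once the mapping comes from
pci_iomap()/ioport_map():

  static u8 my_reg_read(struct my_port *p, int offset)
  {
          return ioread8(p->base + offset);
  }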

However, for drivers that only support inb()/outb() today, I don't
see a real benefit in converting them from the traditional methods.

Another question is whether we actually want to keep the ISA-only
drivers around. Usually once you look closely, any particular
ISA driver tends to be entirely unused already and can be removed,
aside from a few known devices that are either soldered-down on
motherboards or that have an LPC variant using the same ISA driver.

Arnd

2023-11-01 10:29:31

by Peter Zijlstra

Subject: Re: Overhead of io{read,write}{8,16,32,64} on x86

On Wed, Nov 01, 2023 at 10:08:42AM +0100, Arnd Bergmann wrote:
> On Tue, Oct 31, 2023, at 22:41, Jiaxun Yang wrote:
> > Hi all,
> >
> > I'm trying to improve the kernel's support for devices that have I/O
> > ports mapped into MMIO. That involves converting existing drivers
> > that use {in,out}{l,w,b} to io{read,write}{8,16,32,64}, so they can
> > benefit from ioport_map and pci_iomap.
> >
> > However, the problem is that io{read,write}{8,16,32,64} incur a
> > penalty on x86 by introducing extra function calls (they are not
> > inlined) and an extra run-time check for MMIO vs. PIO.
> >
> > x86 folks, do you think this kind of overhead is acceptable? I expect
> > most PCI/ISA drivers will need to be converted.
> >
> > linux-arch folks, do you think it would be better to introduce a
> > variant of io{read,write}{8,16,32,64} that goes directly to PIO on
> > x86 but keeps the same functionality on other architectures?
>
> I think in general there is not much of a problem here, since the
> inb()/outb() operations themselves are extremely slow already; in
> particular, outb() writes are non-posted, unlike writeb().
>
> My feeling is that converting to ioread/iowrite is generally a win
> for any driver that already needs to support both cases (e.g.
> serial-8250) since this can unify the two code paths.

And here I looked at iowrite8 and found it includes tracing and all
sorts, which makes it unsuitable for things like early-serial and the
shiny new atomic write functionality of said serial-8250.
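
Roughly, a single generic iowrite8() call ends up doing something like
this (simplified sketch; the helper names below are placeholders for
the real PIO/MMIO check and the rwmmio trace hooks, not actual kernel
functions):

  void iowrite8(u8 value, void __iomem *addr)
  {
          if (cookie_is_pio(addr)) {              /* placeholder check */
                  outb(value, cookie_to_port(addr));
          } else {
                  trace_mmio_write_pre(value, addr);   /* placeholder hook */
                  writeb(value, addr);
                  trace_mmio_write_post(value, addr);  /* placeholder hook */
          }
  }

which is a lot of machinery for an early console or an atomic console
write path.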


2023-11-01 11:56:30

by Jiaxun Yang

Subject: Re: Overhead of io{read,write}{8,16,32,64} on x86



On 1 November 2023, at 09:08, Arnd Bergmann wrote:
[...]
> My feeling is that converting to ioread/iowrite is generally a win
> for any driver that already needs to support both cases (e.g.
> serial-8250) since this can unify the two code paths.
>
> However, for drivers that only support inb()/outb() today, I don't
> see a real benefit in converting them from the traditional methods.

Unfortunately, there are tons of old systems that try to mimic the PC
and do rely on those drivers :-(

I think the ultimate goal is to remove the inb()/outb() family on
architectures other than x86, and perhaps remove PCI_IOBASE as well,
since we could then manage I/O regions with ioremap.

Besides, the PnP subsystem may need an overhaul to handle enabling ISA
devices; presumably the ability to receive information from OF and
platform code would be helpful.

> Another question is whether we actually want to keep the ISA-only
> drivers around. Usually once you look closely, any particular
> ISA driver tends to be entirely unused already and can be removed,
> aside from a few known devices that are either soldered-down on
> motherboards or that have an LPC variant using the same ISA driver.

Well, for MIPS, Alpha, PPC and m68k I guess that's worth it. Those
systems tend to have random hardware pieces borrowed from the PC,
including ISA/EISA slots.

Thanks.

>
> Arnd

--
- Jiaxun