2002-03-14 17:09:07

by Martin Wilck

[permalink] [raw]
Subject: IO delay, port 0x80, and BIOS POST codes


Hello,

the BIOS on our machines (Phoenix) uses I/O port 0x80 for storing
POST codes, not only during system startup but also for messages
generated during SMM (system management mode) operation.
I have been told other BIOSes do the same.

Unfortunately we can't read this information because Linux uses
port 0x80 as a "dummy" port for delay operations (outb_p and friends;
there also seem to be more hard-coded references to port 0x80
scattered through the code).
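
(For context, a minimal sketch of what the i386 headers do for outb_p();
this is illustrative only, not the exact kernel source:)

/* Sketch only: outb_p() is a normal outb() followed by a dummy write
 * to port 0x80, which costs roughly 1us on a legacy ISA-speed path. */
#define __SLOW_DOWN_IO "\noutb %%al,$0x80"

static inline void outb_p(unsigned char value, unsigned short port)
{
        __asm__ __volatile__("outb %b0,%w1" __SLOW_DOWN_IO
                             : : "a" (value), "Nd" (port));
}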

It seems this problem has always been there; nobody had taken notice of it
yet (at least in our company). Sometimes people wondered about the weird POST
codes displayed on the LCD panel, but who cares once the machine is up...

Would it be too outrageous to ask that this port number be changed, or
made configurable?

Martin

--
Martin Wilck Phone: +49 5251 8 15113
Fujitsu Siemens Computers Fax: +49 5251 8 20409
Heinz-Nixdorf-Ring 1 mailto:[email protected]
D-33106 Paderborn http://www.fujitsu-siemens.com/primergy






2002-03-14 17:52:05

by Richard B. Johnson

[permalink] [raw]
Subject: Re: IO delay, port 0x80, and BIOS POST codes

On Thu, 14 Mar 2002, Martin Wilck wrote:

>
> Hello,
>
> the BIOS on our machines (Phoenix) uses IO-port 0x80 for storing
> POST codes, not only during sytem startup, but also for messages
> generated during SMM (system management mode) operation.
> I have been told other BIOSs do the same.
>
> Unfortunately we can't read this information because Linux uses
> port 80 as "dummy" port for delay operations. (outb_p and friends,
> actually there seem to be a more hard-coded references to port
> 0x80 in the code).
>
> It seems this problem was always there, just nobody took notice of it yet
> (at least in our company). Sometimes people wondered about the weird POST
> codes displayed in the LCD panel, but who cares once the machine is up...
>
> Would it be too outrageous to ask that this port number be changed, or
> made configurable?
>
> Martin

This is an N-year-old question. Do you know of a port that is
guaranteed to exist on Intel/PC/AT class machines? If so, submit
a patch. I proposed using 0x19 (DMA scratch register) several
years ago, but it was shot down for some reason. Then I proposed
0x42 (PIT misc register); that too was declared off-limits. So
I suggested that the outb to 0x80 be changed to an inp, saving
%eax on the stack first. That too was shot down. So, you try
something... and good luck.
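
(A rough sketch of that last idea, purely for illustration -- an I/O delay
done as a harmless read instead of a write, with %eax preserved around it;
this was never merged:)

static inline void slow_down_io_by_read(void)
{
        __asm__ __volatile__("pushl %%eax\n\t"
                             "inb $0x80,%%al\n\t"
                             "popl %%eax" : : : "memory");
}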


Cheers,
Dick Johnson

Penguin : Linux version 2.4.18 on an i686 machine (797.90 BogoMips).

Windows-2000/Professional isn't.

2002-03-14 17:54:43

by Alan

[permalink] [raw]
Subject: Re: IO delay, port 0x80, and BIOS POST codes

> Unfortunately we can't read this information because Linux uses
> port 80 as "dummy" port for delay operations. (outb_p and friends,
> actually there seem to be a more hard-coded references to port
> 0x80 in the code).

The dummy port needs to exist. By using 0x80 we have probably the only
port we can safely use in this way. We know it fouls old style POST
boards on odd occasions.

Alan

2002-03-14 18:19:17

by Thunder from the hill

[permalink] [raw]
Subject: Re: IO delay, port 0x80, and BIOS POST codes

Richard B. Johnson wrote:

> On Thu, 14 Mar 2002, Martin Wilck wrote:
>
>
> >Hello,
> >
> >the BIOS on our machines (Phoenix) uses IO-port 0x80 for storing
> >POST codes, not only during sytem startup, but also for messages
> >generated during SMM (system management mode) operation.
> >I have been told other BIOSs do the same.
> >
> >Unfortunately we can't read this information because Linux uses
> >port 80 as "dummy" port for delay operations. (outb_p and friends,
> >actually there seem to be a more hard-coded references to port
> >0x80 in the code).
> >
> >It seems this problem was always there, just nobody took notice of it yet
> >(at least in our company). Sometimes people wondered about the weird POST
> >codes displayed in the LCD panel, but who cares once the machine is up...
> >
> >Would it be too outrageous to ask that this port number be changed, or
> >made configurable?
> >
> >Martin
>
> This is a 'N' year-old question. Do you know of a port that is
> guaranteed to exist on the Intel/PC/AT class machine? If so, submit
> a patch. I proposed using 0x19h (DMA scratch register) several
> years ago, but it was shot down for some reason. Then I proposed
> 0x42 (PIT Misc register), that too was declared off-limits. So
> I suggested that the outb to 0x80 be changed to an inp, saving
> %eax on the stack first. That too was shot down. So, you try
> something... and good luck.
I also remember this being discussed annually. Making it configurable with
a warning might be a solution, but that's nothing we could decide. Maybe
add a config option? It might be a [DANGEROUS] one, so the guys and gals
who compile it are warned about changing this.
I think the problem is that on the PC architecture everything is quite limited.

Thunder

2002-03-14 19:25:23

by Linus Torvalds

[permalink] [raw]
Subject: Re: IO delay, port 0x80, and BIOS POST codes

In article <[email protected]>,
Alan Cox <[email protected]> wrote:
>> Unfortunately we can't read this information because Linux uses
>> port 80 as "dummy" port for delay operations. (outb_p and friends,
>> actually there seem to be a more hard-coded references to port
>> 0x80 in the code).
>
>The dummy port needs to exist. By using 0x80 we have probably the only
>port we can safely use in this way. We know it fouls old style POST
>boards on odd occasions.

In fact, port 0x80 is safe exactly _because_ it is used for BIOS POST
codes, which pretty much guarantees that it will never be used for
anything else. That tends to not be as true of any other ports.

Also, it should be noted that to get the 1us delay, the port should be
behind the ISA bridge in a legacy world (in a modern all-PCI system it
doesn't really matter, because the ports that need more delays are
faster too, so this works out ok). That's why I personally would be
nervous about using some of the well-specified (but irrelevant) ports
that are on the PCI side of a super-IO controller.

I suspect the _real_ solution is to stop using "inb_p/outb_p" and make
the delay explicit, although it may be that some drivers depend on the
fact that not only is the "outb $0x80" a delay, it also tends to act as
a posting barrier.
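
(Purely as an illustrative sketch, with a made-up helper name -- the explicit
form in a driver would be something like the following, at the cost of losing
the implicit posting behaviour of the dummy outb:)

#include <asm/io.h>
#include <linux/delay.h>

/* Sketch only: explicit delay instead of outb_p(). Note that udelay()
 * does not act as a posting barrier the way the write to 0x80 does. */
static inline void my_outb_delayed(unsigned char val, unsigned short port)
{
        outb(val, port);
        udelay(1);
}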

Linus

2002-03-14 19:47:26

by H. Peter Anvin

[permalink] [raw]
Subject: Re: IO delay, port 0x80, and BIOS POST codes

Followup to: <[email protected]>
By author: [email protected] (Linus Torvalds)
In newsgroup: linux.dev.kernel
>
> I suspect the _real_ solution is to stop using "inb_p/outb_p" and make
> the delay explicit, although it may be that some drivers depend on the
> fact that not only is the "outb $0x80" a delay, it also tends to act as
> a posting barrier.
>

... as well as a push-out to the ISA bus. I suspect dumping the outb
way of doing it and instead waiting in the CPU might cause the delay to
happen in the wrong part of the system (consider split-transaction
queued buses like HyperTransport, where a delay in the CPU doesn't
necessarily mean a delay in the southbridge).

Port 0x80 has served us well, at least as a default. If you really
care about the POST display you can recompile using a different port.

-hpa
--
<[email protected]> at work, <[email protected]> in private!
"Unix gives you enough rope to shoot yourself in the foot."
http://www.zytor.com/~hpa/puzzle.txt <[email protected]>

2002-03-14 20:48:39

by John Heil

[permalink] [raw]
Subject: Re: IO delay, port 0x80, and BIOS POST codes

On Thu, 14 Mar 2002, Linus Torvalds wrote:

> Date: Thu, 14 Mar 2002 19:23:20 +0000 (UTC)
> From: Linus Torvalds <[email protected]>
> To: [email protected]
> Subject: Re: IO delay, port 0x80, and BIOS POST codes
>
> In article <[email protected]>,
> Alan Cox <[email protected]> wrote:
> >> Unfortunately we can't read this information because Linux uses
> >> port 80 as "dummy" port for delay operations. (outb_p and friends,
> >> actually there seem to be a more hard-coded references to port
> >> 0x80 in the code).
> >
> >The dummy port needs to exist. By using 0x80 we have probably the only
> >port we can safely use in this way. We know it fouls old style POST
> >boards on odd occasions.
>
> In fact, port 0x80 is safe exactly _because_ it is used for BIOS POST
> codes, which pretty much guarantees that it will never be used for
> anything else. That tends to not be as true of any other ports.
>
> Also, it should be noted that to get the 1us delay, the port should be
> behind the ISA bridge in a legacy world (in a modern all-PCI system it
> doesn't really matter, because the ports that need more delays are
> faster too, so this works out ok). That's why I personally would be
> nervous about using some of the well-specified (but irrelevant) ports
> that are on the PCI side of a super-IO controller.
>
> I suspect the _real_ solution is to stop using "inb_p/outb_p" and make
> the delay explicit, although it may be that some drivers depend on the
> fact that not only is the "outb $0x80" a delay, it also tends to act as
> a posting barrier.
>
> Linus

No, the better/correct port is 0xED, which removes the conflict.

We've used 0xED without problems in an embedded Linux implementation
at kernel 2.4.1, where SMM issues were involved. (It was recommended
to me by an ex-Phoenix BIOS developer because of its safety as well as
for conflict avoidance.)

Johnh

-
-----------------------------------------------------------------
John Heil
South Coast Software
Custom systems software for UNIX and IBM MVS mainframes
1-714-774-6952
[email protected]
http://www.sc-software.com
-----------------------------------------------------------------

2002-03-14 21:04:47

by Richard B. Johnson

[permalink] [raw]
Subject: Re: IO delay, port 0x80, and BIOS POST codes

On Thu, 14 Mar 2002, John Heil wrote:

> On Thu, 14 Mar 2002, Linus Torvalds wrote:
>
> > Date: Thu, 14 Mar 2002 19:23:20 +0000 (UTC)
> > From: Linus Torvalds <[email protected]>
> > To: [email protected]
> > Subject: Re: IO delay, port 0x80, and BIOS POST codes
> >
> > In article <[email protected]>,
> > Alan Cox <[email protected]> wrote:
> > >> Unfortunately we can't read this information because Linux uses
> > >> port 80 as "dummy" port for delay operations. (outb_p and friends,
> > >> actually there seem to be a more hard-coded references to port
> > >> 0x80 in the code).
> > >
> > >The dummy port needs to exist. By using 0x80 we have probably the only
> > >port we can safely use in this way. We know it fouls old style POST
> > >boards on odd occasions.
> >
> > In fact, port 0x80 is safe exactly _because_ it is used for BIOS POST
> > codes, which pretty much guarantees that it will never be used for
> > anything else. That tends to not be as true of any other ports.
> >
> > Also, it should be noted that to get the 1us delay, the port should be
> > behind the ISA bridge in a legacy world (in a modern all-PCI system it
> > doesn't really matter, because the ports that need more delays are
> > faster too, so this works out ok). That's why I personally would be
> > nervous about using some of the well-specified (but irrelevant) ports
> > that are on the PCI side of a super-IO controller.
> >
> > I suspect the _real_ solution is to stop using "inb_p/outb_p" and make
> > the delay explicit, although it may be that some drivers depend on the
> > fact that not only is the "outb $0x80" a delay, it also tends to act as
> > a posting barrier.
> >
> > Linus
>
> No, the better/correct port is 0xED which removes the conflict.
>
> We've used 0xED w/o problem doing an embedded linux implementation
> at kernel 2.4.1, where SMM issues were involved. (It was recommended
> to me by an x-Phoenix BIOS developer, because of its safety as well as
> conflict resolution,
>
> Johnh
>

Well I can see why he's an EX-Phoenix BIOS developer. A port at 0xed
does not exist on any standard or known non-standard Intel/PC/AT
compatible.

Remember DOS debug?

C:\>debug

-i ed
FF
-o ed aa
-i ed
FF
-o ed 55
-i ed
FF
-q


This is not a DOS emulation. This is a real-mode boot where any ports
will be visible. If you used it with success, it means that you didn't
need the I/O delay of writing to a real port. Instead you got the
few hundred nanoseconds of delay you get by writing to nowhere.

Cheers,
Dick Johnson

Penguin : Linux version 2.4.18 on an i686 machine (797.90 BogoMips).

Windows-2000/Professional isn't.

2002-03-14 21:12:37

by H. Peter Anvin

[permalink] [raw]
Subject: Re: IO delay, port 0x80, and BIOS POST codes

Followup to: <Pine.LNX.4.33.0203141234170.1286-100000@scsoftware.sc-software.com>
By author: John Heil <[email protected]>
In newsgroup: linux.dev.kernel
>
> No, the better/correct port is 0xED which removes the conflict.
>
> We've used 0xED w/o problem doing an embedded linux implementation
> at kernel 2.4.1, where SMM issues were involved. (It was recommended
> to me by an x-Phoenix BIOS developer, because of its safety as well as
> conflict resolution,
>

Sorry, causes breakage on quite a few machines. I used it briefly in
SYSLINUX. Phoenix doesn't have that issue since they're part of the
platform, so they can make it a requirement for their BIOS.

-hpa
--
<[email protected]> at work, <[email protected]> in private!
"Unix gives you enough rope to shoot yourself in the foot."
http://www.zytor.com/~hpa/puzzle.txt <[email protected]>

2002-03-14 21:21:58

by Pedro M. Rodrigues

[permalink] [raw]
Subject: Re: IO delay, port 0x80, and BIOS POST codes


This piece of code is taken from an old Minix source tree, from the file
boothead.s. Notice the port 0xED usage and the comment.


! Enable (ah = 0xDF) or disable (ah = 0xDD) the A20 address line.
gate_A20:
call kb_wait
movb al, #0xD1 ! Tell keyboard that a command is coming
outb 0x64
call kb_wait
movb al, ah ! Enable or disable code
outb 0x60
call kb_wait


mov ax, #25 ! 25 microsec delay for slow keyboard chip
0: out 0xED ! Write to an unused port (1us)
dec ax
jne 0b

ret
kb_wait:
inb 0x64
testb al, #0x02 ! Keyboard input buffer full?
jnz kb_wait ! If so, wait
ret



/Pedro

On 14 Mar 2002 at 16:03, Richard B. Johnson wrote:

>
>
> Well I can see why he's an EX-Phoenix BIOS developer. A port at 0xed
> does not exist on any standard or known non-standard Intel/PC/AT
> compatible.
>
> Remember DOS debug?
>
> C:\>debug
>
> -i ed
> FF
> -o ed aa
> -i ed
> FF
> -o ed 55
> -i ed
> FF
> -q
>
>
> This is not a DOS emulation. This is a real-mode boot where any ports
> will be visible. If you used it with success, it means that you didn't
> need the I/O delay of writing to a real port. Instead you got the few
> hundred nanoseconds of delay you get by writing to nowhere.
>
> Cheers,
> Dick Johnson


2002-03-14 21:26:01

by Linus Torvalds

[permalink] [raw]
Subject: Re: IO delay, port 0x80, and BIOS POST codes


On Thu, 14 Mar 2002, John Heil wrote:
>
> No, the better/correct port is 0xED which removes the conflict.

Port ED is fine for a BIOS, which (by definition) knows what the
motherboard devices are, and thus knows that ED cannot be used by
anything.

But it _is_ an unused port, and that's exactly the kind of thing that
might be used sometime in the future. Remember the port 22/23 brouhaha
with Cyrix using it for their stuff, and later Intel getting into the fray
too?

So the fact that ED works doesn't mean that _stays_ working.

The fact that 80 is the post code register means that it is fairly likely
to _stay_ that way, without any ugly surprises.

Now, if there is something _else_ than just the fact that it is unused
that makes ED a good choice in the future too, that might be worth looking
into (like NT using it for the same purpose as Linux does port 80).

Linus

2002-03-14 21:29:18

by Linus Torvalds

[permalink] [raw]
Subject: Re: IO delay, port 0x80, and BIOS POST codes


On Thu, 14 Mar 2002, Richard B. Johnson wrote:
>
> Well I can see why he's an EX-Phoenix BIOS developer. A port at 0xed
> does not exist on any standard or known non-standard Intel/PC/AT
> compatible.

Note that "doesn't exist" is actually a _bonus_. It means that no
controller will answer to it, which causes the IO to time out, which on a
regular ISA bus will also take the same 1us. Which is what we want.

Real ports with real controllers can be faster - they could, for example,
be fast motherboard PCI ports and be positively decoded and be faster than
1us.

Linus

2002-03-14 21:32:18

by John Heil

[permalink] [raw]
Subject: Re: IO delay, port 0x80, and BIOS POST codes

On Thu, 14 Mar 2002, Pedro M. Rodrigues wrote:

> Date: Thu, 14 Mar 2002 22:19:23 +0100
> From: Pedro M. Rodrigues <[email protected]>
> To: John Heil <[email protected]>, [email protected]
> Cc: Linus Torvalds <[email protected]>, [email protected],
> Martin Wilck <[email protected]>
> Subject: Re: IO delay, port 0x80, and BIOS POST codes
>
>
> This piece of code is taken from an old Minix source code tree, the file being
> boothead.s . Notice the port 0xED usage and the comment.
>
>
> ! Enable (ah = 0xDF) or disable (ah = 0xDD) the A20 address line.
> gate_A20:
> call kb_wait
> movb al, #0xD1 ! Tell keyboard that a command is coming
> outb 0x64
> call kb_wait
> movb al, ah ! Enable or disable code
> outb 0x60
> call kb_wait
>
>
> mov ax, #25 ! 25 microsec delay for slow keyboard chip
> 0: out 0xED ! Write to an unused port (1us)
> dec ax
> jne 0b
>
> ret
> kb_wait:
> inb 0x64
> testb al, #0x02 ! Keyboard input buffer full?
> jnz kb_wait ! If so, wait
> ret
>
>
>
> /Pedro
>
> On 14 Mar 2002 at 16:03, Richard B. Johnson wrote:
>
> >
> >
> > Well I can see why he's an EX-Phoenix BIOS developer. A port at 0xed
> > does not exist on any standard or known non-standard Intel/PC/AT
> > compatible.
> >
> > Remember DOS debug?
> >
> > C:\>debug
> >
> > -i ed
> > FF
> > -o ed aa
> > -i ed
> > FF
> > -o ed 55
> > -i ed
> > FF
> > -q
> >
> >
> > This is not a DOS emulation. This is a real-mode boot where any ports
> > will be visible. If you used it with success, it means that you didn't
> > need the I/O delay of writing to a real port. Instead you got the few
> > hundred nanoseconds of delay you get by writing to nowhere.
> >
> > Cheers,
> > Dick Johnson
>
>

We did not want the I/O delay to depend on the port itself.
We specifically wanted an unused port, to avoid the 0x80 conflict.

Perhaps this should be a kernel hacking/debug option, given the
difference in environments and needs.


Johnh

-
-----------------------------------------------------------------
John Heil
South Coast Software
Custom systems software for UNIX and IBM MVS mainframes
1-714-774-6952
[email protected]
http://www.sc-software.com
-----------------------------------------------------------------

2002-03-14 21:42:00

by Richard B. Johnson

[permalink] [raw]
Subject: Re: IO delay, port 0x80, and BIOS POST codes

On Thu, 14 Mar 2002, Pedro M. Rodrigues wrote:

>
> This piece of code is taken from an old Minix source code tree, the file being
> boothead.s . Notice the port 0xED usage and the comment.
>
>
> ! Enable (ah = 0xDF) or disable (ah = 0xDD) the A20 address line.
> gate_A20:
> call kb_wait
> movb al, #0xD1 ! Tell keyboard that a command is coming
> outb 0x64
> call kb_wait
> movb al, ah ! Enable or disable code
> outb 0x60
> call kb_wait
>
>
> mov ax, #25 ! 25 microsec delay for slow keyboard chip
> 0: out 0xED ! Write to an unused port (1us)
> dec ax
> jne 0b
>
> ret
> kb_wait:
> inb 0x64
> testb al, #0x02 ! Keyboard input buffer full?
> jnz kb_wait ! If so, wait
> ret
>
>
>
> /Pedro
>

Well, I see the comment. Writing to I/O space where there are no devices
is basically a no-op. The ISA/PC/AT bus is asynchronous; it is not
clocked. If there is no contention due to bus activity from some
hardware read, it's just some address lines and data bits that are
eventually sunk by the bus capacitance. The CPU isn't forced to wait
for anything. Since it's in I/O space, you don't even get the delay
from a cache-line reload. No thanks, it's bogus as a delay mechanism.


Cheers,
Dick Johnson

Penguin : Linux version 2.4.18 on an i686 machine (797.90 BogoMips).

Windows-2000/Professional isn't.

2002-03-14 21:44:53

by Richard B. Johnson

[permalink] [raw]
Subject: Re: IO delay, port 0x80, and BIOS POST codes

On Thu, 14 Mar 2002, Linus Torvalds wrote:

>
> On Thu, 14 Mar 2002, Richard B. Johnson wrote:
> >
> > Well I can see why he's an EX-Phoenix BIOS developer. A port at 0xed
> > does not exist on any standard or known non-standard Intel/PC/AT
> > compatible.
>
> Note that "doesn't exist" is actually a _bonus_. It means that no
> controller will answer to it, which causes the IO to time out, which on a
> regular ISA bus will also take the same 1us. Which is what we want.
>
> Real ports with real controllers can be faster - they could, for example,
> be fast motherboard PCI ports and be positively decoded and be faster than
> 1us.
>
> Linus
>

Well no, I/O doesn't "time out". The PC/AT/ISA bus is asynchronous; it's
not clocked. If there's no hardware activity as a result of the write
to nowhere, it's just a no-op. The CPU isn't slowed down at all. It's
just some bits that get flung out on the bus with no feedback at all.

Cheers,
Dick Johnson

Penguin : Linux version 2.4.18 on an i686 machine (797.90 BogoMips).

Windows-2000/Professional isn't.

2002-03-14 21:57:01

by H. Peter Anvin

[permalink] [raw]
Subject: Re: IO delay, port 0x80, and BIOS POST codes

Followup to: <[email protected]>
By author: "Richard B. Johnson" <[email protected]>
In newsgroup: linux.dev.kernel
>
> Well no, IO doesn't "time-out". The PC/AT/ISA bus is asychronous, it's
> not clocked. If there's no hardware activity as a result of the write
> to nowhere, it's just a no-op. The CPU isn't slowed down at all. It's
> just some bits that got flung out on the bus with no feed-back at all.
>

An OUT on the x86 architecture is synchronous... the CPU will not
proceed until the OUT is present on the bus. This is a requirement of
the SMM architecture, actually.

-hpa
--
<[email protected]> at work, <[email protected]> in private!
"Unix gives you enough rope to shoot yourself in the foot."
http://www.zytor.com/~hpa/puzzle.txt <[email protected]>

2002-03-14 21:58:41

by H. Peter Anvin

[permalink] [raw]
Subject: Re: IO delay, port 0x80, and BIOS POST codes

Followup to: <[email protected]>
By author: Linus Torvalds <[email protected]>
In newsgroup: linux.dev.kernel
>
> Port ED is fine for a BIOS, which (by definition) knows what the
> motherboard devices are, and thus knows that ED cannot be used by
> anything.
>
> But it _is_ an unused port, and that's exactly the kind of thing that
> might be used sometime in the future. Remember the port 22/23 brouhaha
> with Cyrix using it for their stuff, and later Intel getting into the fray
> too?
>
> So the fact that ED works doesn't mean that _stays_ working.
>

It is, in fact, broken on several systems -- I tried ED in SYSLINUX
for a while, and it broke things for people.

-hpa
--
<[email protected]> at work, <[email protected]> in private!
"Unix gives you enough rope to shoot yourself in the foot."
http://www.zytor.com/~hpa/puzzle.txt <[email protected]>

2002-03-14 22:11:23

by John Heil

[permalink] [raw]
Subject: Re: IO delay, port 0x80, and BIOS POST codes

On 14 Mar 2002, H. Peter Anvin wrote:

> Date: 14 Mar 2002 13:57:40 -0800
> From: H. Peter Anvin <[email protected]>
> To: [email protected]
> Subject: Re: IO delay, port 0x80, and BIOS POST codes
>
> Followup to: <[email protected]>
> By author: Linus Torvalds <[email protected]>
> In newsgroup: linux.dev.kernel
> >
> > Port ED is fine for a BIOS, which (by definition) knows what the
> > motherboard devices are, and thus knows that ED cannot be used by
> > anything.
> >
> > But it _is_ an unused port, and that's exactly the kind of thing that
> > might be used sometime in the future. Remember the port 22/23 brouhaha
> > with Cyrix using it for their stuff, and later Intel getting into the fray
> > too?
> >
> > So the fact that ED works doesn't mean that _stays_ working.
> >
>
> It is, in fact, broken on several systems -- I tried ED in SYSLINUX
> for a while, and it broke things for people.

It does work on many; in fact, we used it on a Crusoe-based platform
as well as on other x86s.

Let's make it a configurable kernel debug/hacking option, or else
we have the added burden of chasing down a common address.

Johnh

>
> -hpa
> --
> <[email protected]> at work, <[email protected]> in private!
> "Unix gives you enough rope to shoot yourself in the foot."
> http://www.zytor.com/~hpa/puzzle.txt <[email protected]>

-
-----------------------------------------------------------------
John Heil
South Coast Software
Custom systems software for UNIX and IBM MVS mainframes
1-714-774-6952
[email protected]
http://www.sc-software.com
-----------------------------------------------------------------

2002-03-14 22:12:44

by Richard B. Johnson

[permalink] [raw]
Subject: Re: IO delay, port 0x80, and BIOS POST codes

On 14 Mar 2002, H. Peter Anvin wrote:

> Followup to: <[email protected]>
> By author: "Richard B. Johnson" <[email protected]>
> In newsgroup: linux.dev.kernel
> >
> > Well no, IO doesn't "time-out". The PC/AT/ISA bus is asychronous, it's
> > not clocked. If there's no hardware activity as a result of the write
> > to nowhere, it's just a no-op. The CPU isn't slowed down at all. It's
> > just some bits that got flung out on the bus with no feed-back at all.
> >
>
> An OUT on the x86 architecture is synchronous... the CPU will not
> proceed until the OUT is present on the bus. This is a requirement of
> the SMM architecture, actually.
>
> -hpa

Yeh? Then "how do it know?" It doesn't. I/O instructions are ordered,
but that's all. There is no bus-interface state machine that exists
except on the addressed device. The CPU-driven interface logic just
makes sure that the data is valid before the address, and that I/O-read or
I/O-write are valid after that. The address is decoded by the device
and is used to enable the device. It either puts its data onto the
bus, in the case of a read, or takes data off the bus, in the case of
a write. The interface timing is specified and is handled by hardware.
In the meantime the CPU has not waited, because there is nothing to
wait for. On a read, if the device cannot put its data on the bus
fast enough, it puts its finger on IOCHRDY (I/O channel ready). This
forces the CPU (through its bus interface) to wait.

Writes to nowhere are just that: writes to nowhere.

Cheers,
Dick Johnson

Penguin : Linux version 2.4.18 on an i686 machine (797.90 BogoMips).

Windows-2000/Professional isn't.

2002-03-14 22:21:03

by H. Peter Anvin

[permalink] [raw]
Subject: Re: IO delay, port 0x80, and BIOS POST codes

Richard B. Johnson wrote:

>
> Yeh? Then "how do it know?". It doesn't. I/O instructions are ordered,
> however, that's all. There is no bus-interface state machine that exists
> except on the addressed device. The CPU driven interface device just
> makes sure that the data is valid before the address and I/O-read or
> I/O-write are valid after this. The address is decoded by the device
> and is used to enable the device. It either puts its data onto the
> bus in the case of a read, or gets data off the bus, in the case of
> a write. The interface timing is specified and is handled by hardware.
> In the meantime the CPU has not waited because there is nothing to
> wait for. On a READ, if the device cannot put its data on the bus
> fast enough, it puts its finger io IO-chan-ready. This forces the
> CPU (through its bus-interface) to wait.
>
> Writes to nowhere are just that, writes to nowhere.
>


On the ISA bus, yes. The PCI and front side busses will be held in wait
until the transaction is completed.

The exact requirement is a bit more complicated, something along the
lines of "an SMI triggered in response to an OUT will be taken before
the OUT is retired."

-hpa


2002-03-14 22:25:13

by Richard B. Johnson

[permalink] [raw]
Subject: Re: IO delay, port 0x80, and BIOS POST codes

On Thu, 14 Mar 2002, H. Peter Anvin wrote:

> Richard B. Johnson wrote:
>
> >
> > Yeh? Then "how do it know?". It doesn't. I/O instructions are ordered,
> > however, that's all. There is no bus-interface state machine that exists
> > except on the addressed device. The CPU driven interface device just
> > makes sure that the data is valid before the address and I/O-read or
> > I/O-write are valid after this. The address is decoded by the device
> > and is used to enable the device. It either puts its data onto the
> > bus in the case of a read, or gets data off the bus, in the case of
> > a write. The interface timing is specified and is handled by hardware.
> > In the meantime the CPU has not waited because there is nothing to
> > wait for. On a READ, if the device cannot put its data on the bus
> > fast enough, it puts its finger io IO-chan-ready. This forces the
> > CPU (through its bus-interface) to wait.
> >
> > Writes to nowhere are just that, writes to nowhere.
> >
>
>
> On the ISA bus, yes. The PCI and front side busses will be held in wait
> until the transaction is completed.
>
> The exact requirement is a bit more complicated, something along the
> lines of "an SMI triggered in response to an OUT will be taken before
> the OUT is retired."
>

Correct! The PCI is very much different, I'm glad, and hopefully nobody
will be using that for deliberate waits.

Cheers,
Dick Johnson

Penguin : Linux version 2.4.18 on an i686 machine (797.90 BogoMips).

Windows-2000/Professional isn't.

2002-03-14 22:41:36

by Alan

[permalink] [raw]
Subject: Re: IO delay, port 0x80, and BIOS POST codes

> Yeh? Then "how do it know?". It doesn't. I/O instructions are ordered,
> however, that's all. There is no bus-interface state machine that exists

How about because:
o The Intel docs say OUT is synchronizing
o HPA works for an x86 clone manufacturer

Alan

2002-03-14 22:40:46

by Alan

[permalink] [raw]
Subject: Re: IO delay, port 0x80, and BIOS POST codes

> > It is, in fact, broken on several systems -- I tried ED in SYSLINUX
> > for a while, and it broke things for people.
>
> It does work on many, in fact, we used on a Crusoe based platform
> as well as the other x86s
>
> Let's make it a configurable kernel debug/hacking option else
> we have the added burden of chasing down a common address.

We've got one. It's 0x80. It works everywhere, with only marginal,
non-problematic side effects.

2002-03-14 22:43:16

by Alan

[permalink] [raw]
Subject: Re: IO delay, port 0x80, and BIOS POST codes

> Correct! The PCI is very much different, I'm glad, and hopefully nobody
> will be using that for deliberate waits.

On the x86 platform the PCI bridges make PCI out instructions synchronous.
Lots of driver code relies on this and people handling ports to non x86
where the odd bridge doesn't do this have had some real fun.

PCI mmio writes are however (even on the good old x86 platform) async

Alan

2002-03-14 22:44:57

by John Heil

[permalink] [raw]
Subject: Re: IO delay, port 0x80, and BIOS POST codes

On Thu, 14 Mar 2002, Alan Cox wrote:

> Date: Thu, 14 Mar 2002 22:55:45 +0000 (GMT)
> From: Alan Cox <[email protected]>
> To: John Heil <[email protected]>
> Cc: H. Peter Anvin <[email protected]>, [email protected]
> Subject: Re: IO delay, port 0x80, and BIOS POST codes
>
> > > It is, in fact, broken on several systems -- I tried ED in SYSLINUX
> > > for a while, and it broke things for people.
> >
> > It does work on many, in fact, we used on a Crusoe based platform
> > as well as the other x86s
> >
> > Let's make it a configurable kernel debug/hacking option else
> > we have the added burden of chasing down a common address.
>
> We've got one. Its 0x80. It works everywhere with only marginal non
> problematic side effects
>

Ok, cool but does that mean you agree or disagree with a configurable
override for those of us in the minority?

Johnh

-
-----------------------------------------------------------------
John Heil
South Coast Software
Custom systems software for UNIX and IBM MVS mainframes
1-714-774-6952
[email protected]
http://www.sc-software.com
-----------------------------------------------------------------

2002-03-14 23:14:32

by Alan

[permalink] [raw]
Subject: Re: IO delay, port 0x80, and BIOS POST codes

> > We've got one. Its 0x80. It works everywhere with only marginal non
> > problematic side effects
> >
>
> Ok, cool but does that mean you agree or disagree with a configurable
> override for those of us in the minority?

If we put every single requested obscure fix for one or two boxes into
the kernel configuration you'd be spending weeks wading through

"Handle weird APM on Dave's homebrew mediagx"

and other questions.

Let me suggest something else. For any kernel built with the TSC assumed
(586TSC and up), initialize the udelay loop to something guaranteed to be
more than long enough for any conceivable processor, and use udelay() for
the I/O delay timing.
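
(A rough sketch of that, with hypothetical macro names and assuming a
TSC-only build -- SLOW_DOWN_IO() here is not an existing kernel symbol:)

/* Sketch only: on TSC kernels the udelay loop can start out deliberately
 * over-long, so udelay(1) is always at least as long as the old dummy write. */
#ifdef CONFIG_X86_TSC
#define SLOW_DOWN_IO()  udelay(1)
#else
#define SLOW_DOWN_IO()  outb(0, 0x80)   /* legacy dummy-port write */
#endif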

Alan

2002-03-15 00:07:43

by David Golden

[permalink] [raw]
Subject: Re: IO delay, port 0x80, and BIOS POST codes

On Thursday 14 March 2002 22:55, Alan Cox wrote:
>
> We've got one. Its 0x80. It works everywhere with only marginal non
> problematic side effects

I've always liked POST cards. They could hypothetically be useful
for kernel development, too - who hasn't wanted a low-level,
single-asm-instruction status output from a running system at one time or
another, independent of any other output mechanisms?

OK, it's a single byte, but it's still nice... That's two whole hex digits!
DE... AD... BE... EF... !

2002-03-15 00:16:14

by John Heil

[permalink] [raw]
Subject: Re: IO delay, port 0x80, and BIOS POST codes

On Fri, 15 Mar 2002, David Golden wrote:

> Date: Fri, 15 Mar 2002 00:12:42 +0000
> From: David Golden <[email protected]>
> To: [email protected]
> Subject: Re: IO delay, port 0x80, and BIOS POST codes
>
> On Thursday 14 March 2002 22:55, Alan Cox wrote:
> >
> > We've got one. Its 0x80. It works everywhere with only marginal non
> > problematic side effects
>
> I've always liked POST cards. They could hypothetically be useful
> for kernel development,too - who hasn't wanted a low-level
> single-asm-instruction status output from a running system at one time or
> another , independent of any other output mechanisms?
>
> OK it's a single byte, but it's still nice... That's two whole hex digits!
> DE... AD... BE... EF... !

Any number of consecutive bits and a target I/O address can be very
useful.

I do it regularly for embedded kernel hacking... harmless I/O writes,
picked up by your logic analyzer.

The only drawback is that it adds path length, which can impact real-time
behaviour if you get excessive.
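
(For illustration, a trivial helper of the kind being described -- the name
is made up:)

/* Illustrative only: emit a progress byte that a POST card or a logic
 * analyzer watching the chosen port can capture. */
static inline void debug_post(unsigned char code)
{
        outb(code, 0x80);       /* or any otherwise-unused port you probe */
}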

Johnh


-
-----------------------------------------------------------------
John Heil
South Coast Software
Custom systems software for UNIX and IBM MVS mainframes
1-714-774-6952
[email protected]
http://www.sc-software.com
-----------------------------------------------------------------

2002-03-15 11:36:43

by Eric W. Biederman

[permalink] [raw]
Subject: Re: IO delay, port 0x80, and BIOS POST codes

Linus Torvalds <[email protected]> writes:

> On Thu, 14 Mar 2002, John Heil wrote:
> >
> > No, the better/correct port is 0xED which removes the conflict.
>
> Port ED is fine for a BIOS, which (by definition) knows what the
> motherboard devices are, and thus knows that ED cannot be used by
> anything.
>
> But it _is_ an unused port, and that's exactly the kind of thing that
> might be used sometime in the future. Remember the port 22/23 brouhaha
> with Cyrix using it for their stuff, and later Intel getting into the fray
> too?
>
> So the fact that ED works doesn't mean that _stays_ working.
>
> The fact that 80 is the post code register means that it is fairly likely
> to _stay_ that way, without any ugly surprises.
>
> Now, if there is something _else_ than just the fact that it is unused
> that makes ED a good choice in the future too, that might be worth looking
> into (like NT using it for the same purpose as Linux does port 80),

Does the logic outb_p uses continue to work if you have a PCI POST
card (possibly on the motherboard) and an ISA device?

Systems without ISA slots but with ISA or LPC devices onboard must
use a PCI POST card, so I have trouble believing that outb_p and friends
really work as expected....


Eric

2002-03-15 11:43:33

by Martin Wilck

[permalink] [raw]
Subject: Re: IO delay, port 0x80, and BIOS POST codes

On Thu, 14 Mar 2002, Thunder from the hill wrote:

> I also remember this been discussed anually. Making it configurable with
> a warning might be a solution, but that's nothing we could decide. Maybe
> add a config option? It night be a [DANGEROUS] one, so the guys and gals
> who might compile are warned of changing this.

It doesn't even have to be a config option - a line

/* Port used for dummy writes for I/O delays */
/* Change this only if you know what you're doing ! */
#define DUMMY_IO_PORT 0x80

in a header file would perfectly suffice.

Martin

--
Martin Wilck Phone: +49 5251 8 15113
Fujitsu Siemens Computers Fax: +49 5251 8 20409
Heinz-Nixdorf-Ring 1 mailto:[email protected]
D-33106 Paderborn http://www.fujitsu-siemens.com/primergy





2002-03-15 11:52:14

by Andi Kleen

[permalink] [raw]
Subject: Re: IO delay, port 0x80, and BIOS POST codes

Martin Wilck <[email protected]> writes:

> On Thu, 14 Mar 2002, Thunder from the hill wrote:
>
> > I also remember this been discussed anually. Making it configurable with
> > a warning might be a solution, but that's nothing we could decide. Maybe
> > add a config option? It night be a [DANGEROUS] one, so the guys and gals
> > who might compile are warned of changing this.
>
> It doesn't even have to be a config option - a line
>
> /* Port used for dummy writes for I/O delays */
> /* Change this only if you know what you're doing ! */
> #define DUMMY_IO_PORT 0x80
>
> in a header file would perfectly suffice.

That effectively already exists. You just need to change the __SLOW_DOWN_IO
macro in include/asm-i386/io.h

-Andi

2002-03-15 12:45:09

by Martin Wilck

[permalink] [raw]
Subject: Re: IO delay, port 0x80, and BIOS POST codes

On 15 Mar 2002, Andi Kleen wrote:

> > It doesn't even have to be a config option - a line
> >
> > /* Port used for dummy writes for I/O delays */
> > /* Change this only if you know what you're doing ! */
> > #define DUMMY_IO_PORT 0x80
> >
> > in a header file would perfectly suffice.
>
> That effectively already exists. You just need to change the __SLOW_DOWN_IO
> macro in include/asm-i387/io.h

No, that doesn't cover all accesses to port 80. I am still searching.

--
Martin Wilck Phone: +49 5251 8 15113
Fujitsu Siemens Computers Fax: +49 5251 8 20409
Heinz-Nixdorf-Ring 1 mailto:[email protected]
D-33106 Paderborn http://www.fujitsu-siemens.com/primergy





2002-03-15 12:53:04

by Andi Kleen

[permalink] [raw]
Subject: Re: IO delay, port 0x80, and BIOS POST codes

On Fri, Mar 15, 2002 at 01:47:39PM +0100, Martin Wilck wrote:
> On 15 Mar 2002, Andi Kleen wrote:
>
> > > It doesn't even have to be a config option - a line
> > >
> > > /* Port used for dummy writes for I/O delays */
> > > /* Change this only if you know what you're doing ! */
> > > #define DUMMY_IO_PORT 0x80
> > >
> > > in a header file would perfectly suffice.
> >
> > That effectively already exists. You just need to change the __SLOW_DOWN_IO
> > macro in include/asm-i387/io.h
>
> No, that doesn't cover all accesses to port 80. I am still searching.

It should. I would consider all other accesses a bug.
It is possible that some driver used it for private debugging and left it in by
mistake. These should be removed.

-Andi

2002-03-15 15:07:52

by Thunder from the hill

[permalink] [raw]
Subject: Re: IO delay, port 0x80, and BIOS POST codes

Martin Wilck wrote:
> Thunder from the hill wrote:
> > Maybe add a config option?
> It doesn't even have to be a config option - a line
> in a header file would perfectly suffice.
So guys who have no idea about it won't find it. Good idea!

> Martin
Thunder
--
begin-base64 755 -
IyEgL3Vzci9iaW4vcGVybApteSAgICAgJHNheWluZyA9CSMgVGhlIHNjcmlw
dCBvbiB0aGUgbGVmdCBpcyB0aGUgcHJvb2YKIk5lbmEgaXN0IGVpbiIgLgkj
IHRoYXQgaXQgaXNuJ3QgYWxsIHRoZSB3YXkgaXQgc2VlbXMKIiB2ZXJhbHRl
dGVyICIgLgkjIHRvIGJlIChlc3BlY2lhbGx5IG5vdCB3aXRoIG1lKQoiTkRX
LVN0YXIuXG4iICA7CiRzYXlpbmcgPX4Kcy9ORFctU3Rhci9rYW5uXAogdW5z
IHJldHRlbi9nICA7CiRzYXlpbmcgICAgICAgPX4Kcy92ZXJhbHRldGVyL2Rp
XAplIExpZWJlL2c7CiRzYXlpbmcgPX5zL2Vpbi8KbnVyL2c7JHNheWluZyA9
fgpzL2lzdC9zYWd0LC9nICA7CiRzYXlpbmc9fnMvXG4vL2cKO3ByaW50Zigk
c2F5aW5nKQo7cHJpbnRmKCJcbiIpOwo=
====
Extract this and see what will happen if you execute my
signature. Just save it to file and do a
> uudecode $file | perl

2002-03-15 17:34:57

by Gerald Champagne

[permalink] [raw]
Subject: Re: IO delay, port 0x80, and BIOS POST codes

> If we put every single requested obscure fix for one or two boxes into
> the kernel configuration you'd be spending weeks wading through
>
> "Handle weird APM on Dave's homebrew mediagx"
>
> and other questions.
>

A config option that lets you pick the address for the dummy I/O would
be a pretty obscure option. But having a CONFIG_POST_SUPPORT buried
somewhere wouldn't be that obscure or confusing. If that config
option is set, then a second question would prompt for an alternate
address to be used for the delay I/O, and a macro would be defined to
display POST codes. If the option is unset, then 0x80 would be the
default for the delay address, and the POST code macro would be defined
to do nothing.
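
(Sketched out with hypothetical names -- CONFIG_POST_SUPPORT, DELAY_IO_PORT
and POST_CODE() below are not existing kernel symbols -- that might look like:)

#ifdef CONFIG_POST_SUPPORT
#define DELAY_IO_PORT   CONFIG_DELAY_IO_PORT    /* alternate port chosen at config time */
#define POST_CODE(x)    outb((x), 0x80)         /* 0x80 stays free for POST codes */
#else
#define DELAY_IO_PORT   0x80                    /* default dummy port */
#define POST_CODE(x)    do { } while (0)        /* compiles away */
#endif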

Gerald

2002-03-15 17:38:57

by Martin Wilck

[permalink] [raw]
Subject: [PATCH] Cleanup port 0x80 use (was: Re: IO delay ...)


Hi,

this patch makes the use of the "dummy" port 0x80 globally configurable
through a macro in the (new) file asm-i386/iodelay.h.
I think nobody wants to see this in the configuration menus.

I have tried to capture all accesses to port 0x80, although some may
have escaped.

Even if nobody wants to use anything other than 0x80 in the near future,
I think the patch is useful because grepping the source for 0x80 is
really no fun.

With the patch, our BIOS POST code LEDs finally stand still.

I am still wondering, though, why this method of getting a delay
is used so often. IMO in most places one could use udelay(1) instead,
with much less risk of getting it wrong.

Martin

--
Martin Wilck Phone: +49 5251 8 15113
Fujitsu Siemens Computers Fax: +49 5251 8 20409
Heinz-Nixdorf-Ring 1 mailto:[email protected]
D-33106 Paderborn http://www.fujitsu-siemens.com/primergy



--- ./include/asm-i386/io.h.orig Fri Mar 15 17:23:15 2002
+++ ./include/asm-i386/io.h Fri Mar 15 18:30:03 2002
@@ -221,17 +221,9 @@

#endif /* __KERNEL__ */

-#ifdef SLOW_IO_BY_JUMPING
-#define __SLOW_DOWN_IO "\njmp 1f\n1:\tjmp 1f\n1:"
-#else
-#define __SLOW_DOWN_IO "\noutb %%al,$0x80"
-#endif
-
-#ifdef REALLY_SLOW_IO
-#define __FULL_SLOW_DOWN_IO __SLOW_DOWN_IO __SLOW_DOWN_IO __SLOW_DOWN_IO __SLOW_DOWN_IO
-#else
-#define __FULL_SLOW_DOWN_IO __SLOW_DOWN_IO
-#endif
+/* Moved the __SLOW_DOWN_IO macros to a separate file
+ * that can be included by setup.S */
+#include <asm/iodelay.h>

#ifdef CONFIG_MULTIQUAD
extern void *xquad_portio; /* Where the IO area was mapped */
--- ./include/asm-i386/floppy.h.orig Fri Mar 15 17:23:15 2002
+++ ./include/asm-i386/floppy.h Fri Mar 15 17:55:04 2002
@@ -89,8 +89,7 @@
jmp 5f
4: movb (%2),%0
outb %b0,%w4
-5: decw %w4
- outb %0,$0x80
+5: decw %w4" __SLOW_DOWN_IO "
decl %1
incl %2
testl %1,%1
--- ./include/asm-i386/iodelay.h.orig Fri Mar 15 18:31:01 2002
+++ ./include/asm-i386/iodelay.h Fri Mar 15 18:30:36 2002
@@ -0,0 +1,25 @@
+#ifndef _ASM_IODELAY_H
+#define _ASM_IODELAY_H
+
+#ifdef SLOW_IO_BY_JUMPING
+#define __SLOW_DOWN_IO "\njmp 1f\n1:\tjmp 1f\n1:"
+#else
+
+ /*
+ * The dummy IO port to use for delays.
+ * Change only if you really know what you're doing
+ * Default value: 0x80
+ * Other values that have been suggested: 0x19, 0x42, 0xe2, 0xed
+ * Both macros must be changed to capture all I/O delays in the kernel.
+ */
+#define __SLOW_DOWN_IO_PORT 0x80
+#define __SLOW_DOWN_IO "\noutb %%al,$0x80"
+#endif
+
+#ifdef REALLY_SLOW_IO
+#define __FULL_SLOW_DOWN_IO __SLOW_DOWN_IO __SLOW_DOWN_IO __SLOW_DOWN_IO __SLOW_DOWN_IO
+#else
+#define __FULL_SLOW_DOWN_IO __SLOW_DOWN_IO
+#endif
+
+#endif
--- ./drivers/char/serial.c.orig Fri Mar 15 17:23:15 2002
+++ ./drivers/char/serial.c Fri Mar 15 17:24:13 2002
@@ -3661,12 +3661,12 @@
scratch = serial_inp(info, UART_IER);
serial_outp(info, UART_IER, 0);
#ifdef __i386__
- outb(0xff, 0x080);
+ outb(0xff, __SLOW_DOWN_IO_PORT);
#endif
scratch2 = serial_inp(info, UART_IER);
serial_outp(info, UART_IER, 0x0F);
#ifdef __i386__
- outb(0, 0x080);
+ outb(0, __SLOW_DOWN_IO_PORT);
#endif
scratch3 = serial_inp(info, UART_IER);
serial_outp(info, UART_IER, scratch);
--- ./drivers/char/riscom8.c.orig Fri Mar 15 17:23:15 2002
+++ ./drivers/char/riscom8.c Fri Mar 15 17:24:13 2002
@@ -278,10 +278,10 @@

/* Are the I/O ports here ? */
rc_out(bp, CD180_PPRL, 0x5a);
- outb(0xff, 0x80);
+ outb(0xff, __SLOW_DOWN_IO_PORT);
val1 = rc_in(bp, CD180_PPRL);
rc_out(bp, CD180_PPRL, 0xa5);
- outb(0x00, 0x80);
+ outb(0x00, __SLOW_DOWN_IO_PORT);
val2 = rc_in(bp, CD180_PPRL);

if ((val1 != 0x5a) || (val2 != 0xa5)) {
--- ./drivers/scsi/atp870u.c.orig Fri Mar 15 17:23:15 2002
+++ ./drivers/scsi/atp870u.c Fri Mar 15 17:24:13 2002
@@ -1042,7 +1042,7 @@
tmport = dev->ioport + 0x1b;
outb(0x02, tmport);

- outb(0, 0x80);
+ outb(0, __SLOW_DOWN_IO_PORT);

val = 0x0080; /* bsy */
tmport = dev->ioport + 0x1c;
@@ -1051,7 +1051,7 @@
outw(val, tmport);
val |= 0x0004; /* msg */
outw(val, tmport);
- inb(0x80); /* 2 deskew delay(45ns*2=90ns) */
+ inb(__SLOW_DOWN_IO_PORT); /* 2 deskew delay(45ns*2=90ns) */
val &= 0x007f; /* no bsy */
outw(val, tmport);
mydlyu(0xffff); /* recommanded SCAM selection response time */
@@ -1062,7 +1062,7 @@
if ((inb(tmport) & 0x04) != 0) {
goto wait_nomsg;
}
- outb(1, 0x80);
+ outb(1, __SLOW_DOWN_IO_PORT);
mydlyu(100);
for (n = 0; n < 0x30000; n++) {
if ((inb(tmport) & 0x80) != 0) { /* bsy ? */
@@ -1078,13 +1078,13 @@
}
goto TCM_SYNC;
wait_io1:
- inb(0x80);
+ inb(__SLOW_DOWN_IO_PORT);
val |= 0x8003; /* io,cd,db7 */
outw(val, tmport);
- inb(0x80);
+ inb(__SLOW_DOWN_IO_PORT);
val &= 0x00bf; /* no sel */
outw(val, tmport);
- outb(2, 0x80);
+ outb(2, __SLOW_DOWN_IO_PORT);
TCM_SYNC:
mydlyu(0x800);
if ((inb(tmport) & 0x80) == 0x00) { /* bsy ? */
@@ -1103,18 +1103,18 @@
val &= 0x00ff; /* synchronization */
val |= 0x3f00;
fun_scam(dev, &val);
- outb(3, 0x80);
+ outb(3, __SLOW_DOWN_IO_PORT);
val &= 0x00ff; /* isolation */
val |= 0x2000;
fun_scam(dev, &val);
- outb(4, 0x80);
+ outb(4, __SLOW_DOWN_IO_PORT);
i = 8;
j = 0;
TCM_ID:
if ((inw(tmport) & 0x2000) == 0) {
goto TCM_ID;
}
- outb(5, 0x80);
+ outb(5, __SLOW_DOWN_IO_PORT);
val &= 0x00ff; /* get ID_STRING */
val |= 0x2000;
k = fun_scam(dev, &val);
--- ./drivers/video/sis/sis_main.c.orig Fri Mar 15 17:23:15 2002
+++ ./drivers/video/sis/sis_main.c Fri Mar 15 17:27:16 2002
@@ -2308,7 +2308,7 @@
u8 reg;
int nRes;

- outb (0x77, 0x80);
+ outb (0x77, __SLOW_DOWN_IO_PORT);

if (sisfb_off)
return -ENXIO;
--- ./arch/i386/boot/setup.S.orig Fri Mar 15 17:23:15 2002
+++ ./arch/i386/boot/setup.S Fri Mar 15 18:33:12 2002
@@ -54,6 +54,7 @@
#include <asm/boot.h>
#include <asm/e820.h>
#include <asm/page.h>
+#include <asm/iodelay.h>

/* Signature words to ensure LILO loaded us right */
#define SIG1 0xAA55
@@ -65,6 +66,7 @@
# ... and the former contents of CS

DELTA_INITSEG = SETUPSEG - INITSEG # 0x0020
+DELAY_PORT = __SLOW_DOWN_IO_PORT # port for IO delay (0x80)

.code16
.globl begtext, begdata, begbss, endtext, enddata, endbss
@@ -1001,7 +1003,7 @@

# Delay is needed after doing I/O
delay:
- outb %al,$0x80
+ outb %al,$DELAY_PORT
ret

# Descriptor tables

2002-03-15 17:50:27

by Alan

[permalink] [raw]
Subject: Re: [PATCH] Cleanup port 0x80 use (was: Re: IO delay ...)

> I am still wondering, though, why this method of getting a delay
> is used so often. IMO in most places one could use udelay(1) instead,
> with much less risk of doing wrong.

I don't believe udelay(1) is enough. Unfortunately I can't find my
documentation on the ISA bus, which covers the timeout for acknowledging an
address cycle. Otherwise, for TSC-capable boxes I agree entirely.

2002-03-15 19:21:45

by Andreas Dilger

[permalink] [raw]
Subject: Re: [PATCH] Cleanup port 0x80 use (was: Re: IO delay ...)

On Mar 15, 2002 18:41 +0100, Martin Wilck wrote:
> +#define __SLOW_DOWN_IO_PORT 0x80
> +#define __SLOW_DOWN_IO "\noutb %%al,$0x80"

You may want to change the above to:
#define __SLOW_DOWN_IO_ASM "\noutb %%al,$__SLOW_DOWN_IO_PORT"

> + outb(3, __SLOW_DOWN_IO_PORT);

You may also want to replace the above entirely with a macro, like:
#define __SLOW_DOWN_IO outb(3, __SLOW_DOWN_IO_PORT)

so that architectures that don't need/have this ISA nonsense can
just replace __SLOW_DOWN_IO with something else, like udelay.

> --- ./arch/i386/boot/setup.S.orig Fri Mar 15 17:23:15 2002
> +++ ./arch/i386/boot/setup.S Fri Mar 15 18:33:12 2002
> @@ -65,6 +66,7 @@
> # ... and the former contents of CS
>
> DELTA_INITSEG = SETUPSEG - INITSEG # 0x0020
> +DELAY_PORT = __SLOW_DOWN_IO_PORT # port for IO delay (0x80)
>
> .code16
> .globl begtext, begdata, begbss, endtext, enddata, endbss

May as well just stick with a single define here (i.e. remove DELAY_PORT).

> @@ -1001,7 +1003,7 @@
>
> # Delay is needed after doing I/O
> delay:
> - outb %al,$0x80
> + outb %al,$DELAY_PORT
> ret

And use __SLOW_DOWN_IO_ASM here.

Cheers, Andreas
--
Andreas Dilger \ "If a man ate a pound of pasta and a pound of antipasto,
\ would they cancel out, leaving him still hungry?"
http://www-mddsp.enel.ucalgary.ca/People/adilger/ -- Dogbert

2002-03-15 20:14:21

by Martin Wilck

[permalink] [raw]
Subject: Re: [PATCH] Cleanup port 0x80 use (was: Re: IO delay ...)

On Fri, 15 Mar 2002, Andreas Dilger wrote:

> > +#define __SLOW_DOWN_IO_PORT 0x80
> > +#define __SLOW_DOWN_IO "\noutb %%al,$0x80"
>
> You may want to change the above to:
> #define __SLOW_DOWN_IO_ASM "\noutb %%al,$__SLOW_DOWN_IO_PORT"

That won't work; cpp doesn't substitute inside double quotes
(at least the one I'm using doesn't). Or am I getting something wrong here?
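
(For what it's worth, the usual workaround is stringification plus
string-literal concatenation -- a sketch, using throwaway helper names:)

/* Expand the macro, stringify the result, and let the compiler
 * concatenate the adjacent string literals. */
#define __sdio_str(x)   #x
#define __sdio_xstr(x)  __sdio_str(x)
#define __SLOW_DOWN_IO  "\noutb %%al,$" __sdio_xstr(__SLOW_DOWN_IO_PORT)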

> You may also want to replace the above entirely with a macro, like:
> #define __SLOW_DOWN_IO outb(3, __SLOW_DOWN_IO_PORT)
>
> so that on architectures that don't need/have this ISA nonsense can
> just replace __SLOW_DOWN_IO with something else like udelay.

I wanted to change as little as possible; the generated code was intended
to be binary-identical to the old code if the port is set to 0x80.

Currently source files can #define "SLOW_IO_BY_JUMPING" to avoid
writing to the dummy port. Perhaps someone should introduce
SLOW_IO_BY_UDELAY or so.

> May as well just stick with a single define here (i.e. remove DELAY_PORT).
OK.

Please take a look at the new version below.
(Changed only setup.S code, and some comments).

Martin

--
Martin Wilck Phone: +49 5251 8 15113
Fujitsu Siemens Computers Fax: +49 5251 8 20409
Heinz-Nixdorf-Ring 1 mailto:[email protected]
D-33106 Paderborn http://www.fujitsu-siemens.com/primergy


--- ./include/asm-i386/io.h.orig Fri Mar 15 17:23:15 2002
+++ ./include/asm-i386/io.h Fri Mar 15 18:30:03 2002
@@ -221,17 +221,9 @@

#endif /* __KERNEL__ */

-#ifdef SLOW_IO_BY_JUMPING
-#define __SLOW_DOWN_IO "\njmp 1f\n1:\tjmp 1f\n1:"
-#else
-#define __SLOW_DOWN_IO "\noutb %%al,$0x80"
-#endif
-
-#ifdef REALLY_SLOW_IO
-#define __FULL_SLOW_DOWN_IO __SLOW_DOWN_IO __SLOW_DOWN_IO __SLOW_DOWN_IO __SLOW_DOWN_IO
-#else
-#define __FULL_SLOW_DOWN_IO __SLOW_DOWN_IO
-#endif
+/* Moved the __SLOW_DOWN_IO macros to a separate file
+ * that can be included by setup.S */
+#include <asm/iodelay.h>

#ifdef CONFIG_MULTIQUAD
extern void *xquad_portio; /* Where the IO area was mapped */
--- ./include/asm-i386/floppy.h.orig Fri Mar 15 17:23:15 2002
+++ ./include/asm-i386/floppy.h Fri Mar 15 17:55:04 2002
@@ -89,8 +89,7 @@
jmp 5f
4: movb (%2),%0
outb %b0,%w4
-5: decw %w4
- outb %0,$0x80
+5: decw %w4" __SLOW_DOWN_IO "
decl %1
incl %2
testl %1,%1
--- ./include/asm-i386/iodelay.h.orig Fri Mar 15 18:31:01 2002
+++ ./include/asm-i386/iodelay.h Fri Mar 15 21:14:17 2002
@@ -0,0 +1,28 @@
+#ifndef _ASM_IODELAY_H
+#define _ASM_IODELAY_H
+
+/* File suitable for inclusion by setup.S, normally included via <asm/io.h> */
+
+#ifdef SLOW_IO_BY_JUMPING
+#define __SLOW_DOWN_IO "\njmp 1f\n1:\tjmp 1f\n1:"
+#else
+
+/*
+ * The dummy IO port to use for delays.
+ * Change only if you really know what you're doing
+ * Default value: 0x80
+ * Other values that have been suggested: 0x19, 0x42, 0xe2, 0xed
+ * Both macros must be changed to modify all IO delays in the kernel.
+ */
+
+#define __SLOW_DOWN_IO_PORT 0x80
+#define __SLOW_DOWN_IO "\noutb %%al,$0x80"
+#endif
+
+#ifdef REALLY_SLOW_IO
+#define __FULL_SLOW_DOWN_IO __SLOW_DOWN_IO __SLOW_DOWN_IO __SLOW_DOWN_IO __SLOW_DOWN_IO
+#else
+#define __FULL_SLOW_DOWN_IO __SLOW_DOWN_IO
+#endif
+
+#endif
--- ./drivers/char/serial.c.orig Fri Mar 15 17:23:15 2002
+++ ./drivers/char/serial.c Fri Mar 15 17:24:13 2002
@@ -3661,12 +3661,12 @@
scratch = serial_inp(info, UART_IER);
serial_outp(info, UART_IER, 0);
#ifdef __i386__
- outb(0xff, 0x080);
+ outb(0xff, __SLOW_DOWN_IO_PORT);
#endif
scratch2 = serial_inp(info, UART_IER);
serial_outp(info, UART_IER, 0x0F);
#ifdef __i386__
- outb(0, 0x080);
+ outb(0, __SLOW_DOWN_IO_PORT);
#endif
scratch3 = serial_inp(info, UART_IER);
serial_outp(info, UART_IER, scratch);
--- ./drivers/char/riscom8.c.orig Fri Mar 15 17:23:15 2002
+++ ./drivers/char/riscom8.c Fri Mar 15 17:24:13 2002
@@ -278,10 +278,10 @@

/* Are the I/O ports here ? */
rc_out(bp, CD180_PPRL, 0x5a);
- outb(0xff, 0x80);
+ outb(0xff, __SLOW_DOWN_IO_PORT);
val1 = rc_in(bp, CD180_PPRL);
rc_out(bp, CD180_PPRL, 0xa5);
- outb(0x00, 0x80);
+ outb(0x00, __SLOW_DOWN_IO_PORT);
val2 = rc_in(bp, CD180_PPRL);

if ((val1 != 0x5a) || (val2 != 0xa5)) {
--- ./drivers/scsi/atp870u.c.orig Fri Mar 15 17:23:15 2002
+++ ./drivers/scsi/atp870u.c Fri Mar 15 17:24:13 2002
@@ -1042,7 +1042,7 @@
tmport = dev->ioport + 0x1b;
outb(0x02, tmport);

- outb(0, 0x80);
+ outb(0, __SLOW_DOWN_IO_PORT);

val = 0x0080; /* bsy */
tmport = dev->ioport + 0x1c;
@@ -1051,7 +1051,7 @@
outw(val, tmport);
val |= 0x0004; /* msg */
outw(val, tmport);
- inb(0x80); /* 2 deskew delay(45ns*2=90ns) */
+ inb(__SLOW_DOWN_IO_PORT); /* 2 deskew delay(45ns*2=90ns) */
val &= 0x007f; /* no bsy */
outw(val, tmport);
mydlyu(0xffff); /* recommanded SCAM selection response time */
@@ -1062,7 +1062,7 @@
if ((inb(tmport) & 0x04) != 0) {
goto wait_nomsg;
}
- outb(1, 0x80);
+ outb(1, __SLOW_DOWN_IO_PORT);
mydlyu(100);
for (n = 0; n < 0x30000; n++) {
if ((inb(tmport) & 0x80) != 0) { /* bsy ? */
@@ -1078,13 +1078,13 @@
}
goto TCM_SYNC;
wait_io1:
- inb(0x80);
+ inb(__SLOW_DOWN_IO_PORT);
val |= 0x8003; /* io,cd,db7 */
outw(val, tmport);
- inb(0x80);
+ inb(__SLOW_DOWN_IO_PORT);
val &= 0x00bf; /* no sel */
outw(val, tmport);
- outb(2, 0x80);
+ outb(2, __SLOW_DOWN_IO_PORT);
TCM_SYNC:
mydlyu(0x800);
if ((inb(tmport) & 0x80) == 0x00) { /* bsy ? */
@@ -1103,18 +1103,18 @@
val &= 0x00ff; /* synchronization */
val |= 0x3f00;
fun_scam(dev, &val);
- outb(3, 0x80);
+ outb(3, __SLOW_DOWN_IO_PORT);
val &= 0x00ff; /* isolation */
val |= 0x2000;
fun_scam(dev, &val);
- outb(4, 0x80);
+ outb(4, __SLOW_DOWN_IO_PORT);
i = 8;
j = 0;
TCM_ID:
if ((inw(tmport) & 0x2000) == 0) {
goto TCM_ID;
}
- outb(5, 0x80);
+ outb(5, __SLOW_DOWN_IO_PORT);
val &= 0x00ff; /* get ID_STRING */
val |= 0x2000;
k = fun_scam(dev, &val);
--- ./drivers/video/sis/sis_main.c.orig Fri Mar 15 17:23:15 2002
+++ ./drivers/video/sis/sis_main.c Fri Mar 15 17:27:16 2002
@@ -2308,7 +2308,7 @@
u8 reg;
int nRes;

- outb (0x77, 0x80);
+ outb (0x77, __SLOW_DOWN_IO_PORT);

if (sisfb_off)
return -ENXIO;
--- ./arch/i386/boot/setup.S.orig Fri Mar 15 17:23:15 2002
+++ ./arch/i386/boot/setup.S Fri Mar 15 20:50:13 2002
@@ -54,6 +54,7 @@
#include <asm/boot.h>
#include <asm/e820.h>
#include <asm/page.h>
+#include <asm/iodelay.h>

/* Signature words to ensure LILO loaded us right */
#define SIG1 0xAA55
@@ -1001,7 +1002,7 @@

# Delay is needed after doing I/O
delay:
- outb %al,$0x80
+ outb %al,$__SLOW_DOWN_IO_PORT
ret

# Descriptor tables

2002-03-15 20:41:57

by H. Peter Anvin

[permalink] [raw]
Subject: Re: [PATCH] Cleanup port 0x80 use (was: Re: IO delay ...)

Followup to: <[email protected]>
By author: Alan Cox <[email protected]>
In newsgroup: linux.dev.kernel
>
> > I am still wondering, though, why this method of getting a delay
> > is used so often. IMO in most places one could use udelay(1) instead,
> > with much less risk of doing wrong.
>
> udelay(1) I don't believe is enough. Unfortunately I can't find my
> documentation on the ISA bus which covers the timeout for acknowledging an
> address cycle. Otherwise for tsc capable boxes I agree entirely.
>

The ISA bus doesn't time out; a cycle on the ISA bus just happens, and
the fact that no one is there to listen doesn't seem to matter.

The delay is something like 8 cycles @ 8.3 MHz or around 1 ms.
However, an important thing to note is that this delay applies *at the
southbridge*. An OUT is a fully synchronizing operation, so it
doesn't just give a 1 ms delay due to the ISA bus cycle, but it also
makes sure everything else in the system is completed before the
timing counter even starts to tick.

Of course, if all you're doing is IOIO (on an x86!) it doesn't matter
-- IOIO is fully synchronizing anyway.
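
(To make the mechanism concrete - a rough sketch, not the real macro
expansion in <asm/io.h> - the *_p accessors amount to the normal access
followed by a throw-away write to the delay port:)

#include <asm/io.h>

/* conceptual equivalent of outb_p(): do the real write, then a dummy
 * write to port 0x80 to force an ISA-speed bus cycle; being an OUT,
 * it also serializes everything issued before it */
static inline void outb_p_sketch(unsigned char value, unsigned short port)
{
	outb(value, port);	/* the access the caller asked for */
	outb(0, 0x80);		/* delay port; the value written is irrelevant */
}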

-hpa
--
<[email protected]> at work, <[email protected]> in private!
"Unix gives you enough rope to shoot yourself in the foot."
http://www.zytor.com/~hpa/puzzle.txt <[email protected]>

2002-03-15 21:21:53

by Alan

[permalink] [raw]
Subject: Re: [PATCH] Cleanup port 0x80 use (was: Re: IO delay ...)

> > documentation on the ISA bus which covers the timeout for acknowledging an
> > address cycle. Otherwise for tsc capable boxes I agree entirely.
> >
> The ISA bus doesn't time out; a cycle on the ISA bus just happens, and
> the fact that no one is there to listen doesn't seem to matter.

Not so simple. I found my IEEE draft 8)

The address out comes from the chipset (southbridge now days). The
sequence is

BALE high
Output address
BALE low
Set IORC/IOWC etc

Wait for NOWS while watching IOCHRDY

NOWS low says - card now ready
IOCHRDY high suppresses the wait state timer count

The default timeout is 4 wait states, which is 6 bus clocks for a failure -
maybe 7; I'm not clear whether the final cycle to recover and start again is
always there.

> The delay is something like 8 cycles @ 8.3 MHz or around 1 ms.

1uS ?
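
(Rough arithmetic, assuming the classic 8.33 MHz ISA clock: one bus clock
is ~120 ns, so 6 clocks come to ~0.72 us and 8 clocks to ~0.96 us - i.e.
the delay is on the order of a microsecond, not a millisecond.)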

Alan

2002-03-15 21:36:03

by Linus Torvalds

[permalink] [raw]
Subject: Re: [PATCH] Cleanup port 0x80 use (was: Re: IO delay ...)

In article <[email protected]>,
H. Peter Anvin <[email protected]> wrote:
>
>The ISA bus doesn't time out; a cycle on the ISA bus just happens, and
>the fact that no one is there to listen doesn't seem to matter.

The ISA bus doesn't time out, but the PCI access before it gets
forwarded to the ISA bus _does_, if the ISA bus is decoded using
negative decoding.

This is why it's important that there not be a motherboard PCI device
that can decode the port - because if there is, the access is
potentially a much faster PCI-only decode.

Note that this really only matters on low-end machines anyway, as the
whole "inb_p()" thing tends to be used only for old ISA devices. If you
have a new machine that is all PCI, I doubt that port 80h access matters
at all.

(Another way of saying it: if you have a machine with a PCI POST card,
none of this will matter)

Linus

2002-03-16 00:02:42

by H. Peter Anvin

[permalink] [raw]
Subject: Re: [PATCH] Cleanup port 0x80 use (was: Re: IO delay ...)

Alan Cox wrote:

>
>>The delay is something like 8 cycles @ 8.3 MHz or around 1 ms.
>
> 1uS ?
>


1 µs, right :-/

-hpa



2002-03-17 02:05:09

by Jamie Lokier

[permalink] [raw]
Subject: Re: [PATCH] Cleanup port 0x80 use (was: Re: IO delay ...)

Martin Wilck wrote:
> > > +#define __SLOW_DOWN_IO_PORT 0x80
> > > +#define __SLOW_DOWN_IO "\noutb %%al,$0x80"
> >
> > You may want to change the above to:
> > #define __SLOW_DOWN_IO_ASM "\noutb %%al,$__SLOW_DOWN_IO_PORT"
>
> Won't work, cpp doesn't substitute between double quotes.
> (at least the one I'm using). Or am I getting something wrong here??

As long as __SLOW_DOWN_IO_PORT is a simple constant, you can just use
this instead:

#define __SLOW_DOWN_IO_ASM "\noutb %%al,$" #__SLOW_DOWN_IO_PORT

-- Jamie

2002-03-18 09:15:51

by Martin Wilck

[permalink] [raw]
Subject: Re: [PATCH] Cleanup port 0x80 use (was: Re: IO delay ...)

On Sun, 17 Mar 2002, Jamie Lokier wrote:

> As long as __SLOW_DOWN_IO_PORT is a simple constant, you can just use
> this instead:
>
> #define __SLOW_DOWN_IO_ASM "\noutb %%al,$" #__SLOW_DOWN_IO_PORT

What cpp are you guys using? Mine does stringification (#s) only with
arguments of function-like macros. However

#define __SLOW_DOWN_IO_P(p) "\noutb %%al,$" #p
#define __SLOW_DOWN_IO __SLOW_DOWN_IO_P(__SLOW_DOWN_IO_PORT)

won't work either, because cpp does not expand a macro argument before
stringifying it, so in the above __SLOW_DOWN_IO would evaluate to
"\noutb %%al,$__SLOW_DOWN_IO_PORT" - bad.

I have tried a number of things to make this a single cpp line, but none
of them works. The only way would be to change the way the inb_p ...
macros are coded.

It is possible to write
#define __SLOW_DOWN_IO __asm__ ("outb %%al, %0" : : "i" (__SLOW_DOWN_IO_PORT));

but only if one modifies the definitions of inb_p etc which are so complex
that I don't dare touch them now.
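
(Purely as an illustration - hypothetical, not part of the patch - such a
modified version could call a statement-style helper instead of pasting
asm strings together, e.g.:)

/* hypothetical helper the inb_p/outb_p macros could call; gcc
 * substitutes the port constant through the "i" constraint, so no
 * stringification is needed at all */
static inline void __slow_down_io(void)
{
	__asm__ __volatile__ ("outb %%al,%0"
			      : : "i" (__SLOW_DOWN_IO_PORT));
}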

Please note that, as an intermediate solution, my patch reduces explicit
usage of the constant 0x80 from roughly 20 occurrences spread over 8
source files to 2 adjacent occurrences in a single source file.

Martin

--
Martin Wilck Phone: +49 5251 8 15113
Fujitsu Siemens Computers Fax: +49 5251 8 20409
Heinz-Nixdorf-Ring 1 mailto:[email protected]
D-33106 Paderborn http://www.fujitsu-siemens.com/primergy





2002-03-18 09:39:58

by Keith Owens

[permalink] [raw]
Subject: Re: [PATCH] Cleanup port 0x80 use (was: Re: IO delay ...)

On Mon, 18 Mar 2002 10:18:06 +0100 (CET),
Martin Wilck <[email protected]> wrote:
>On Sun, 17 Mar 2002, Jamie Lokier wrote:
>
>> As long as __SLOW_DOWN_IO_PORT is a simple constant, you can just use
>> this instead:
>>
>> #define __SLOW_DOWN_IO_ASM "\noutb %%al,$" #__SLOW_DOWN_IO_PORT
>
>What cpp are you guys using? Mine does stringification (#s) only with
>arguments of function-like macros. However

Recent 2.4 and 2.5 kernels have include/linux/stringify.h. This should
work.

#define __SLOW_DOWN_IO_ASM "\noutb %%al,$" __stringify(__SLOW_DOWN_IO_PORT)
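
For anyone wondering why that works where a plain # did not: the header
is just the usual two-level indirection (roughly), so the argument gets
macro-expanded to 0x80 before it is stringified:

#define __stringify_1(x)	#x
#define __stringify(x)		__stringify_1(x)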

2002-03-18 11:18:49

by Martin Wilck

[permalink] [raw]
Subject: Re: [PATCH] Cleanup port 0x80 use (was: Re: IO delay ...)

On Mon, 18 Mar 2002, Keith Owens wrote:

> Recent 2.4 and 2.5 kernels have include/linux/stringify.h. This should
> work.

OK, that's it. Thanks a lot Keith!

Please find the new patch below.
Martin

--
Martin Wilck Phone: +49 5251 8 15113
Fujitsu Siemens Computers Fax: +49 5251 8 20409
Heinz-Nixdorf-Ring 1 mailto:[email protected]
D-33106 Paderborn http://www.fujitsu-siemens.com/primergy


--- ./include/asm-i386/io.h.orig Fri Mar 15 17:23:15 2002
+++ ./include/asm-i386/io.h Fri Mar 15 18:30:03 2002
@@ -221,17 +221,9 @@

#endif /* __KERNEL__ */

-#ifdef SLOW_IO_BY_JUMPING
-#define __SLOW_DOWN_IO "\njmp 1f\n1:\tjmp 1f\n1:"
-#else
-#define __SLOW_DOWN_IO "\noutb %%al,$0x80"
-#endif
-
-#ifdef REALLY_SLOW_IO
-#define __FULL_SLOW_DOWN_IO __SLOW_DOWN_IO __SLOW_DOWN_IO __SLOW_DOWN_IO __SLOW_DOWN_IO
-#else
-#define __FULL_SLOW_DOWN_IO __SLOW_DOWN_IO
-#endif
+/* Moved the __SLOW_DOWN_IO macros to a separate file
+ * that can be included by setup.S */
+#include <asm/iodelay.h>

#ifdef CONFIG_MULTIQUAD
extern void *xquad_portio; /* Where the IO area was mapped */
--- ./include/asm-i386/floppy.h.orig Fri Mar 15 17:23:15 2002
+++ ./include/asm-i386/floppy.h Fri Mar 15 17:55:04 2002
@@ -89,8 +89,7 @@
jmp 5f
4: movb (%2),%0
outb %b0,%w4
-5: decw %w4
- outb %0,$0x80
+5: decw %w4" __SLOW_DOWN_IO "
decl %1
incl %2
testl %1,%1
--- ./include/asm-i386/iodelay.h.orig Fri Mar 15 18:31:01 2002
+++ ./include/asm-i386/iodelay.h Mon Mar 18 12:17:20 2002
@@ -0,0 +1,25 @@
+#ifndef _ASM_IODELAY_H
+#define _ASM_IODELAY_H
+
+/*
+ * The dummy IO port to use for delays.
+ * Change only if you really know what you're doing !!
+ * Default value: 0x80.
+ * Other values that have been suggested: 0x19, 0x42, 0xe2, 0xed.
+ */
+#define __SLOW_DOWN_IO_PORT 0x80
+
+#ifdef SLOW_IO_BY_JUMPING
+#define __SLOW_DOWN_IO "\njmp 1f\n1:\tjmp 1f\n1:"
+#else
+#include <linux/stringify.h>
+#define __SLOW_DOWN_IO "\noutb %%al,$" __stringify(__SLOW_DOWN_IO_PORT)
+#endif
+
+#ifdef REALLY_SLOW_IO
+#define __FULL_SLOW_DOWN_IO __SLOW_DOWN_IO __SLOW_DOWN_IO __SLOW_DOWN_IO __SLOW_DOWN_IO
+#else
+#define __FULL_SLOW_DOWN_IO __SLOW_DOWN_IO
+#endif
+
+#endif
--- ./drivers/char/serial.c.orig Fri Mar 15 17:23:15 2002
+++ ./drivers/char/serial.c Fri Mar 15 17:24:13 2002
@@ -3661,12 +3661,12 @@
scratch = serial_inp(info, UART_IER);
serial_outp(info, UART_IER, 0);
#ifdef __i386__
- outb(0xff, 0x080);
+ outb(0xff,__SLOW_DOWN_IO_PORT);
#endif
scratch2 = serial_inp(info, UART_IER);
serial_outp(info, UART_IER, 0x0F);
#ifdef __i386__
- outb(0, 0x080);
+ outb(0,__SLOW_DOWN_IO_PORT);
#endif
scratch3 = serial_inp(info, UART_IER);
serial_outp(info, UART_IER, scratch);
--- ./drivers/char/riscom8.c.orig Fri Mar 15 17:23:15 2002
+++ ./drivers/char/riscom8.c Fri Mar 15 17:24:13 2002
@@ -278,10 +278,10 @@

/* Are the I/O ports here ? */
rc_out(bp, CD180_PPRL, 0x5a);
- outb(0xff, 0x80);
+ outb(0xff, __SLOW_DOWN_IO_PORT);
val1 = rc_in(bp, CD180_PPRL);
rc_out(bp, CD180_PPRL, 0xa5);
- outb(0x00, 0x80);
+ outb(0x00, __SLOW_DOWN_IO_PORT);
val2 = rc_in(bp, CD180_PPRL);

if ((val1 != 0x5a) || (val2 != 0xa5)) {
--- ./drivers/scsi/atp870u.c.orig Fri Mar 15 17:23:15 2002
+++ ./drivers/scsi/atp870u.c Fri Mar 15 17:24:13 2002
@@ -1042,7 +1042,7 @@
tmport = dev->ioport + 0x1b;
outb(0x02, tmport);

- outb(0, 0x80);
+ outb(0, __SLOW_DOWN_IO_PORT);

val = 0x0080; /* bsy */
tmport = dev->ioport + 0x1c;
@@ -1051,7 +1051,7 @@
outw(val, tmport);
val |= 0x0004; /* msg */
outw(val, tmport);
- inb(0x80); /* 2 deskew delay(45ns*2=90ns) */
+ inb(__SLOW_DOWN_IO_PORT); /* 2 deskew delay(45ns*2=90ns) */
val &= 0x007f; /* no bsy */
outw(val, tmport);
mydlyu(0xffff); /* recommanded SCAM selection response time */
@@ -1062,7 +1062,7 @@
if ((inb(tmport) & 0x04) != 0) {
goto wait_nomsg;
}
- outb(1, 0x80);
+ outb(1, __SLOW_DOWN_IO_PORT);
mydlyu(100);
for (n = 0; n < 0x30000; n++) {
if ((inb(tmport) & 0x80) != 0) { /* bsy ? */
@@ -1078,13 +1078,13 @@
}
goto TCM_SYNC;
wait_io1:
- inb(0x80);
+ inb(__SLOW_DOWN_IO_PORT);
val |= 0x8003; /* io,cd,db7 */
outw(val, tmport);
- inb(0x80);
+ inb(__SLOW_DOWN_IO_PORT);
val &= 0x00bf; /* no sel */
outw(val, tmport);
- outb(2, 0x80);
+ outb(2, __SLOW_DOWN_IO_PORT);
TCM_SYNC:
mydlyu(0x800);
if ((inb(tmport) & 0x80) == 0x00) { /* bsy ? */
@@ -1103,18 +1103,18 @@
val &= 0x00ff; /* synchronization */
val |= 0x3f00;
fun_scam(dev, &val);
- outb(3, 0x80);
+ outb(3, __SLOW_DOWN_IO_PORT);
val &= 0x00ff; /* isolation */
val |= 0x2000;
fun_scam(dev, &val);
- outb(4, 0x80);
+ outb(4, __SLOW_DOWN_IO_PORT);
i = 8;
j = 0;
TCM_ID:
if ((inw(tmport) & 0x2000) == 0) {
goto TCM_ID;
}
- outb(5, 0x80);
+ outb(5, __SLOW_DOWN_IO_PORT);
val &= 0x00ff; /* get ID_STRING */
val |= 0x2000;
k = fun_scam(dev, &val);
--- ./drivers/video/sis/sis_main.c.orig Fri Mar 15 17:23:15 2002
+++ ./drivers/video/sis/sis_main.c Fri Mar 15 17:27:16 2002
@@ -2308,7 +2308,7 @@
u8 reg;
int nRes;

- outb (0x77, 0x80);
+ outb (0x77, __SLOW_DOWN_IO_PORT);

if (sisfb_off)
return -ENXIO;
--- ./arch/i386/boot/setup.S.orig Fri Mar 15 17:23:15 2002
+++ ./arch/i386/boot/setup.S Fri Mar 15 20:50:13 2002
@@ -54,6 +54,7 @@
#include <asm/boot.h>
#include <asm/e820.h>
#include <asm/page.h>
+#include <asm/iodelay.h>

/* Signature words to ensure LILO loaded us right */
#define SIG1 0xAA55
@@ -1001,7 +1002,7 @@

# Delay is needed after doing I/O
delay:
- outb %al,$0x80
+ outb %al,$__SLOW_DOWN_IO_PORT
ret

# Descriptor tables



2002-03-18 15:22:53

by Jamie Lokier

[permalink] [raw]
Subject: Re: [PATCH] Cleanup port 0x80 use (was: Re: IO delay ...)

Martin Wilck wrote:
> > As long as __SLOW_DOWN_IO_PORT is a simple constant, you can just use
> > this instead:
> >
> > #define __SLOW_DOWN_IO_ASM "\noutb %%al,$" #__SLOW_DOWN_IO_PORT
>
> What cpp are you guys using? Mine does stringification (#s) only with
> arguments of function-like macros.

You're right, my error. You need to use an argument, as has already
been pointed out -- <linux/stringify.h>.

-- Jamie

2002-03-19 21:09:22

by Pavel Machek

[permalink] [raw]
Subject: Re: IO delay, port 0x80, and BIOS POST codes

Hi!
> > We've got one. It's 0x80. It works everywhere with only marginal,
> > non-problematic side effects
>
> I've always liked POST cards. They could hypothetically be useful
> for kernel development,too - who hasn't wanted a low-level
> single-asm-instruction status output from a running system at one time or
> another , independent of any other output mechanisms?
>
> OK it's a single byte, but it's still nice... That's two whole hex digits!
> DE... AD... BE... EF... !

Use 0x378 for that, works equally well.
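
A minimal sketch, assuming a standard parallel port at 0x378 with nothing
else driving it:

#include <asm/io.h>

/* hypothetical debug helper: latch a one-byte progress code on the
 * LPT1 data lines (I/O port 0x378), visible on LEDs or a logic probe */
static inline void lpt_post(unsigned char code)
{
	outb(code, 0x378);
}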
Pavel
--
Philips Velo 1: 1"x4"x8", 300gram, 60, 12MB, 40bogomips, linux, mutt,
details at http://atrey.karlin.mff.cuni.cz/~pavel/velo/index.html.