Failure to use real-time delay here causes the keyboard to become demonically
possessed in the event of a kernel crash, with wildly blinking lights and
unpredictable behavior. This has resulted in several injuries.
Signed-off-by: Zachary Amsden <[email protected]>
diff -r 10fac6d484e2 drivers/input/serio/i8042.c
--- a/drivers/input/serio/i8042.c Tue Jan 30 16:44:54 2007 -0800
+++ b/drivers/input/serio/i8042.c Tue Jan 30 16:45:00 2007 -0800
@@ -9,6 +9,7 @@
  * under the terms of the GNU General Public License version 2 as published by
  * the Free Software Foundation.
  */
+#define USE_REAL_TIME_DELAY

 #include <linux/delay.h>
 #include <linux/module.h>
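For context, the mechanism this one-line define toggles lives in the i386
delay header. A minimal sketch, assuming the 2.6.20-era
include/asm-i386/delay.h (exact macro bodies approximate):

/*
 * Under CONFIG_PARAVIRT, udelay() is routed through paravirt_ops, where
 * a hypervisor may implement it as a no-op. Defining USE_REAL_TIME_DELAY
 * before the header is included selects the native busy-wait instead.
 */
#if defined(CONFIG_PARAVIRT) && !defined(USE_REAL_TIME_DELAY)
#define udelay(n)	paravirt_ops.const_udelay((n) * 0x10c7ul)
#else
/* 0x10c7 is 2**32 / 1000000 (rounded up) */
#define udelay(n)	(__builtin_constant_p(n) ? \
	((n) > 20000 ? __bad_udelay() : __const_udelay((n) * 0x10c7ul)) : \
	__udelay(n))
#endif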
On Mon, Feb 05, 2007 at 07:53:30PM -0800, Zachary Amsden wrote:
> Failure to use real-time delay here causes the keyboard to become demonically
> possessed in the event of a kernel crash, with wildly blinking lights and
> unpredictable behavior. This has resulted in several injuries.
There must be a reason why it wasn't default before. Has this
reason changed?
-Andi
>
> Signed-off-by: Zachary Amsden <[email protected]>
>
> diff -r 10fac6d484e2 drivers/input/serio/i8042.c
> --- a/drivers/input/serio/i8042.c Tue Jan 30 16:44:54 2007 -0800
> +++ b/drivers/input/serio/i8042.c Tue Jan 30 16:45:00 2007 -0800
> @@ -9,6 +9,7 @@
> * under the terms of the GNU General Public License version 2 as published by
> * the Free Software Foundation.
> */
> +#define USE_REAL_TIME_DELAY
>
> #include <linux/delay.h>
> #include <linux/module.h>
Andi Kleen wrote:
> On Mon, Feb 05, 2007 at 07:53:30PM -0800, Zachary Amsden wrote:
>
>> Failure to use real-time delay here causes the keyboard to become demonically
>> possessed in the event of a kernel crash, with wildly blinking lights and
>> unpredictable behavior. This has resulted in several injuries.
>>
>
> There must be a reason why it wasn't default before. Has this
> reason changed?
>
This only matters under paravirt; non-paravirt kernels and kernels
running on native hardware will always behave properly.
But paravirtualized kernels with fake devices have no need to udelay to
accommodate slow hardware - the hardware is just virtual. The
USE_REAL_TIME_DELAY define allows udelay to be specifically reverted
back to being a real delay. There are only a couple cases where it
matters - one is booting APs on SMP systems (there is a real delay
before they come up), and the other is any code that drives
world-interacting devices - such as keyboard LEDs in a panic loop.
Zach
Hi!
> >
> >> Failure to use real-time delay here causes the keyboard to become
> >> demonically possessed in the event of a kernel crash, with wildly
> >> blinking lights and unpredictable behavior. This has resulted in
> >> several injuries.
> >
> > There must be a reason why it wasn't default before. Has this
> > reason changed?
>
> This only matters under paravirt; non-paravirt kernels and kernels
> running on native hardware will always behave properly.
>
> But paravirtualized kernels with fake devices have no need to udelay to
> accommodate slow hardware - the hardware is just virtual. The
> USE_REAL_TIME_DELAY define allows udelay to be specifically reverted
> back to being a real delay. There are only a couple cases where
Ugh, it sounds like paravirt is more b0rken than I thought. It should
always do the proper delay, then replace those udelays that are not
needed on virtualized hardware with something else.
Just magically defining udelay into nop is broken.
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
On 2/6/07, Zachary Amsden <[email protected]> wrote:
> Andi Kleen wrote:
> > On Mon, Feb 05, 2007 at 07:53:30PM -0800, Zachary Amsden wrote:
> >
> >> Failure to use real-time delay here causes the keyboard to become demonically
> >> possessed in the event of a kernel crash, with wildly blinking lights and
> >> unpredictable behavior. This has resulted in several injuries.
> >>
> >
> > There must be a reason why it wasn't default before. Has this
> > reason changed?
> >
>
> This only matters under paravirt; non-paravirt kernels and kernels
> running on native hardware will always behave properly.
>
> But paravirtualized kernels with fake devices have no need to udelay to
> accommodate slow hardware - the hardware is just virtual. The
> USE_REAL_TIME_DELAY define allows udelay to be specifically reverted
> back to being a real delay. There are only a couple cases where it
> matters - one is booting APs on SMP systems (there is a real delay
> before they come up), and the other is any code that drives
> world-interacting devices - such as keyboard LEDs in a panic loop.
>
I am confused - does i8042 talk to virtual or real hardware here? In
any case I think you need to fix kernel/panic.c to have a proper
(m)delay, not mess with i8042.
--
Dmitry
On Wed, 2007-02-07 at 12:35 +0000, Pavel Machek wrote:
> Ugh, it sounds like paravirt is more b0rken than I thought. It should
> always do the proper delay, then replace those udelays that are not
> needed on virtualized hardware with something else.
>
> Just magically defining udelay into nop is broken.
We'd have to audit and figure out what udelays are for hardware and
which are not, but the evidence is that the vast majority of them are
for hardware and not needed for virtualization.
Changing udelay to "hardware_udelay" or something all over the kernel
would have delayed the paravirt_ops merge by an infinite amount 8)
Rusty.
Rusty Russell wrote:
> On Wed, 2007-02-07 at 12:35 +0000, Pavel Machek wrote:
>
>> Ugh, it sounds like paravirt is more b0rken than I thought. It should
>> always do the proper delay, then replace those udelays that are not
>> needed on virtualized hardware with something else.
>>
>> Just magically defining udelay into nop is broken.
>>
>
> We'd have to audit and figure out what udelays are for hardware and
> which are not, but the evidence is that the vast majority of them are
> for hardware and not needed for virtualization.
>
> Changing udelay to "hardware_udelay" or something all over the kernel
> would have delayed the paravirt_ops merge by an infinite amount 8)
>
Yes, so I chose the same approach used for b0rken hardware - #define
REALLY_SLOW_IO before including headers. It's ugly, but it works
without rewriting the entire source base.
Zach
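For reference, the REALLY_SLOW_IO precedent Zach mentions worked the same
define-before-include way. A sketch from the old include/asm-i386/io.h,
reconstructed from memory (details may differ):

/*
 * Port I/O is padded with SLOW_DOWN_IO pauses; a driver that defines
 * REALLY_SLOW_IO before including the header gets four pauses instead
 * of one. Ugly, but it never required touching the drivers themselves.
 */
#ifdef SLOW_IO_BY_JUMPING
#define __SLOW_DOWN_IO "jmp 1f; 1: jmp 1f; 1:"
#else
#define __SLOW_DOWN_IO "outb %%al,$0x80;"	/* write to an unused port */
#endif

#ifdef REALLY_SLOW_IO
#define SLOW_DOWN_IO __SLOW_DOWN_IO __SLOW_DOWN_IO __SLOW_DOWN_IO __SLOW_DOWN_IO
#else
#define SLOW_DOWN_IO __SLOW_DOWN_IO
#endif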
Dmitry Torokhov wrote:
>
> I am confused - does i8042 talk to virtual or real hardware here? In
> any case I think you need to fix kernel/panic.c to have a proper
> (m)delay, not mess with i8042.
I think I need to fix both of them actually. This is virtual hardware,
but when you grab focus on a VM, the virtual hardware gets reflected to
the actual physical keyboard. Driving physical hardware that fast is bad.
Zach
On Wed, Feb 07, 2007 at 02:31:57PM -0800, Zachary Amsden wrote:
> Dmitry Torokhov wrote:
> >
> >I am confused - does i8042 talk to virtual or real hardware here? In
> >any case I think you need to fix kernel/panic.c to have a proper
> >(m)delay, not mess with i8042.
>
> I think I need to fix both of them actually. This is virtual hardware,
> but when you grab focus on a VM, the virtual hardware gets reflected to
> the actual physical keyboard. Driving physical hardware that fast is bad.
???
Surely the physical keyboard is always handled by the host kernel?
I hope you're not saying it's trying to access the io ports directly?
-Andi
Andi Kleen wrote:
> On Wed, Feb 07, 2007 at 02:31:57PM -0800, Zachary Amsden wrote:
>
>> Dmitry Torokhov wrote:
>>
>>> I am confused - does i8042 talk to virtual or real hardware here? In
>>> any case I think you need to fix kernel/panic.c to have a proper
>>> (m)delay, not mess with i8042.
>>>
>> I think I need to fix both of them actually. This is virtual hardware,
>> but when you grab focus on a VM, the virtual hardware gets reflected to
>> the actual physical keyboard. Driving physical hardware that fast is bad.
>>
>
> ???
>
> Surely the physical keyboard is always handled by the host kernel?
> I hope you're not saying it's trying to access the io ports directly?
>
No, not that. But the virtual keyboard I/O gets processed and converted
to physical keyboard I/O when a keyboard is attached to a VM. The
result is that the virtual keyboard spinning out of control causes the
physical keyboard to receive the same commands, far too rapidly.
So the keyboard blinks out of control and acts as if possessed by demons.
Zach
On Thu, Feb 08, 2007 at 01:08:14AM -0800, Zachary Amsden wrote:
> Andi Kleen wrote:
> >On Wed, Feb 07, 2007 at 02:31:57PM -0800, Zachary Amsden wrote:
> >
> >>Dmitry Torokhov wrote:
> >>
> >>>I am confused - does i8042 talk to virtual or real hardware here? In
> >>>any case I think you need to fix kernel/panic.c to have a proper
> >>>(m)delay, not mess with i8042.
> >>>
> >>I think I need to fix both of them actually. This is virtual hardware,
> >>but when you grab focus on a VM, the virtual hardware gets reflected to
> >>the actual physical keyboard. Driving physical hardware that fast is bad.
> >>
> >
> >???
> >
> >Surely the physical keyboard is always handled by the host kernel?
> >I hope you're not saying it's trying to access the io ports directly?
> >
>
> No, not that. But the virtual keyboard I/O gets processed and converted
> to physical keyboard I/O when a keyboard is attached to a VM. The
You mean the commands to change the keyboard LEDs?
> result is that the virtual keyboard spinning out of control causes the
> physical keyboard to receive the same commands, far too rapidly.
Hmm, I would expect the host kernel keyboard driver to throttle these.
I'm pretty sure the Linux one does the necessary mdelays at least.
> So the keyboard blinks out of control and acts as if possessed by demons.
Still sounds weird.
-Andi
On 8 Feb 2007 14:33:21 +0100, Andi Kleen <[email protected]> wrote:
> On Thu, Feb 08, 2007 at 01:08:14AM -0800, Zachary Amsden wrote:
> > Andi Kleen wrote:
> > >On Wed, Feb 07, 2007 at 02:31:57PM -0800, Zachary Amsden wrote:
> > >
> > >>Dmitry Torokhov wrote:
> > >>
> > >>>I am confused - does i8042 talk to virtual or real hardware here? In
> > >>>any case I think you need to fix kernel/panic.c to have a proper
> > >>>(m)delay, not mess with i8042.
> > >>>
> > >>I think I need to fix both of them actually. This is virtual hardware,
> > >>but when you grab focus on a VM, the virtual hardware gets reflected to
> > >>the actual physical keyboard. Driving physical hardware that fast is bad.
> > >>
> > >
> > >???
> > >
> > >Surely the physical keyboard is always handled by the host kernel?
> > >I hope you're not saying it's trying to access the io ports directly?
> > >
> >
> > No, not that. But the virtual keyboard I/O gets processed and converted
> > to physical keyboard I/O when a keyboard is attached to a VM. The
>
> You mean the commands to change the keyboard LEDs?
>
> > result is that the virtual keyboard spinning out of control causes the
> > physical keyboard to receive the same commands, far too rapidly.
>
> Hmm, I would expect the host kernel keyboard driver to throttle these.
> I'm pretty sure the Linux one does the necessary mdelays at least.
There is no throttling; we will switch LEDs as fast as the keyboard
controller will allow (not that I consider it a bad thing).
>
> > So the keyboard blinks out of control and acts as if possessed by demons.
>
> Still sounds weird.
>
The problem is that the panic blink routine expects to be called with 1 ms
frequency. If mdelay in kernel/panic.c is changed to a no-op then i8042
will try to blink as fast as it possibly can.
--
Dmitry
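A sketch of the loop Dmitry is describing, roughly as it appears in the
2.6.20-era kernel/panic.c (reconstructed; details approximate):

/*
 * panic_blink points at i8042_panic_blink once the i8042 driver
 * registers it. The ~1 kHz pacing comes entirely from the mdelay(1);
 * turn mdelay into a no-op and the loop hammers the LEDs as fast as
 * the CPU can spin.
 */
for (i = 0; i < panic_timeout * 1000; ) {
	touch_nmi_watchdog();
	i += panic_blink(i);	/* returns roughly the ms it consumed */
	mdelay(1);
	i++;
}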
On 2/7/07, Rusty Russell <[email protected]> wrote:
> On Wed, 2007-02-07 at 12:35 +0000, Pavel Machek wrote:
> > Ugh, it sounds like paravirt is more b0rken than I thought. It should
> > always do the proper delay, then replace those udelays that are not
> > needed on virtualized hardware with something else.
> >
> > Just magically defining udelay into nop is broken.
>
> We'd have to audit and figure out what udelays are for hardware and
> which are not, but the evidence is that the vast majority of them are
> for hardware and not needed for virtualization.
>
> Changing udelay to "hardware_udelay" or something all over the kernel
> would have delayed the paravirt_ops merge by an infinite amount 8)
>
However, I am not really fond of the idea of adding constructs like this
all over the code:
#define USE_REAL_TIME_DELAY_I_REALLY_MEAN_IT_THIS_TIME_I_SWEAR
as time passes... Drivers should be blissfully ignorant of being
run on virtual hardware.
--
Dmitry
Dmitry Torokhov wrote:
>
> However, I am not really fond of the idea of adding constructs like this
> all over the code:
>
> #define USE_REAL_TIME_DELAY_I_REALLY_MEAN_IT_THIS_TIME_I_SWEAR
>
> as time passes... Drivers should be blissfully ignorant of being
> run on virtual hardware.
I agree in general, but there are two uses for this ugly construct. One
is for drivers and isolated sections of code which actually interact
with the real world. They need real-time delays. The only two examples
so far are SMP coprocessor bootstrapping and blinking LEDs with a
recognizable frequency. I don't expect many more to show up.
The other use for this construct is compiling a hardware-privileged
virtual domain that actually drives real hardware - you can just define
this globally to override any delay masking.
Zach
On 2/8/07, Zachary Amsden <[email protected]> wrote:
> Dmitry Torokhov wrote:
> >
> > However, I am not really fond of the idea of adding constructs like this
> > all over the code:
> >
> > #define USE_REAL_TIME_DELAY_I_REALLY_MEAN_IT_THIS_TIME_I_SWEAR
> >
> > as time passes... Drivers should be blissfully ignorant of being
> > run on virtual hardware.
>
> I agree in general, but there are two uses for this ugly construct. One
> is for drivers and isolated sections of code which actually interact
> with the real world. They need real-time delays. The only two examples
> so far are SMP coprocessor bootstrapping and blinking LEDs with a
> recognizable frequency. I don't expect many more to show up.
>
I am pretty sure that using that construct inside i8042 is wrong -
the host OS should handle hardware access/delay issues. Have you
verified that fixing kernel/panic.c to call i8042_panic_blink once per
1 ms (as happens on a natively running kernel) does not produce the
desired blinking?
--
Dmitry
On Thu 2007-02-08 07:36:12, Rusty Russell wrote:
> On Wed, 2007-02-07 at 12:35 +0000, Pavel Machek wrote:
> > Ugh, it sounds like paravirt is more b0rken than I thought. It should
> > always do the proper delay, then replace those udelays that are not
> > needed on virtualized hardware with something else.
> >
> > Just magically defining udelay into nop is broken.
>
> We'd have to audit and figure out what udelays are for hardware and
> which are not, but the evidence is that the vast majority of them are
> for hardware and not needed for virtualization.
You did not have time to do the full audit, so you just did the #define.
> Changing udelay to "hardware_udelay" or something all over the kernel
> would have delayed the paravirt_ops merge by an infinite amount 8)
And here you claim you could not do the right thing, because people
would notice you are doing a huge search/replace without an audit, and
would stop you. So you simply hid it from them :-(.
Plus... udelay() should just work under virtualization, right? You get
a slightly slower kernel, but one that still works, so the "full audit" is not
as hard as you are telling me.
Just replace udelay() with hardware_udelay() on places that matter in
your workload...
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
> No, not that. But the virtual keyboard I/O gets processed and converted
> to physical keyboard I/O when a keyboard is attached to a VM. The
> result is that the virtual keyboard spinning out of control causes the
> physical keyboard to receive the same commands, far too rapidly.
>
> So the keyboard blinks out of control and acts as if possessed by demons.
Which is a good example of why breaking udelay is the wrong thing to do.
I bet vmware keyboard emulation isn't the only problematic case.
Alan
> We'd have to audit and figure out what udelays are for hardware and
> which are not, but the evidence is that the vast majority of them are
> for hardware and not needed for virtualization.
Which is irrelevant since the hardware drivers won't be used in a
virtualised environment with any kind of performance optimisation.
> Changing udelay to "hardware_udelay" or something all over the kernel
> would have delayed the paravirt_ops merge by an infinite amount 8)
paravirt_ops has no business fiddling with udelay. Not only does it
create more code bloat and stalls in relatively fast paths but its
optimising the wrong thing anyway.
My performance sucks -> optimise out udelay is the wrong approach. My
performance sucks, switch to the virtual block driver is the right
approach, and a virtual block driver won't be using udelay anyway
Alan
Pavel Machek wrote:
> On Thu 2007-02-08 07:36:12, Rusty Russell wrote:
>
>> On Wed, 2007-02-07 at 12:35 +0000, Pavel Machek wrote:
>>
>>> Ugh, it sounds like paravirt is more b0rken than I thought. It should
>>> always do the proper delay, then replace those udelays that are not
>>> needed on virtualized hardware with something else.
>>>
>>> Just magically defining udelay into nop is broken.
>>>
>> We'd have to audit and figure out what udelays are for hardware and
>> which are not, but the evidence is that the vast majority of them are
>> for hardware and not needed for virtualization.
>>
>
> You did not have time to do the full audit, so you just did the #define.
>
Yes, of course. Since 99% of the drivers are completely irrelevant for
paravirt, and 99% of the udelays are in drivers, there isn't much point
in auditing a bunch of code we're not even going to be affected by. The
default case for udelay is that it is not needed.
>> Changing udelay to "hardware_udelay" or something all over the kernel
>> would have delayed the paravirt_ops merge by an infinite amount 8)
>>
>
> And here you claim you could not do the right thing, because people
> would notice you are doing a huge search/replace without an audit, and
> would stop you. So you simply hid it from them :-(.
>
What ludicrousness is this? Hidden what? That the default case for
udelay is that it is not needed?
> Plus... udelay() should just work under virtualization, right? You get
> a slightly slower kernel, but one that still works, so the "full audit" is not
> as hard as you are telling me.
>
It saves the time of doing a useless full audit and making sure we didn't
accidentally redefine or misspell some symbol on a bunch of
architectures we aren't even set up to compile for.
> Just replace udelay() with hardware_udelay() on places that matter in
> your workload...
>
That's inconsistent. We would be doing two SCSI drivers, part of the IDE
code, some i386 arch code, some random places in the kernel... and now
nobody else knows whether to use udelay or hardware_udelay and the code
gets jumbled to the point that it is useless because there is no clear
distinction between the two. It is non-trivial to come up with a list
of source files that we have to actually do this to. One C-file calls a
shared routine in a library, and now you've got a hidden udelay that you
have absolutely no way of detecting. The right thing to do if you want
to do it on a line-by-line basis is exactly the opposite. Remove udelay
and find out what breaks. Bugs are easier to find and fix than hidden
code. If I were to do it on a line-by-line basis, I would choose to
replace udelay() with real_time_udelay() for those places that actually
need it.
Zach
Alan wrote:
>> We'd have to audit and figure out what udelays are for hardware and
>> which are not, but the evidence is that the vast majority of them are
>> for hardware and not needed for virtualization.
>>
>
> Which is irrelevant since the hardware drivers won't be used in a
> virtualised environment with any kind of performance optimisation.
>
Which is why an audit is irrelevant for the most part. Note on the
performance below.
>> Changing udelay to "hardware_udelay" or something all over the kernel
>> would have delayed the paravirt_ops merge by an infinite amount 8)
>>
>
> paravirt_ops has no business fiddling with udelay. Not only does it
> create more code bloat and stalls in relatively fast paths but its
> optimising the wrong thing anyway.
>
??? I fail to see the code bloat and also the fast paths. Which fast
paths use udelay?
> My performance sucks -> optimise out udelay is the wrong approach. My
> performance sucks, switch to the virtual block driver is the right
> approach, and a virtual block driver won't be using udelay anyway
>
This is not to stop performance from sucking. It doesn't. This is not
an "approach". Sure, a virtual block driver won't be using udelay.
Everyone else who writes hypervisors writes virtual block drivers
because they don't have optimized I/O emulation for real hardware.
Their performance sucks without it because they have to go switch to
some other context and run a device emulator. Ours doesn't. We have
optimized almost every I/O device we emulate. But sitting around
spinning in udelay is wasting everybody's time. There is an overhead
cost to trapping out on I/O instructions. Removing the udelays that
typically happen around I/O instructions causes the emulation to break even.
And that is a good thing. It's certainly not required, nor is it a
significant win while the kernel is running. It does cut the boot time
substantially, though: you will notice an obviously faster kernel boot
simply because much of the hardware setup has very conservative udelays
which take considerable time during device initialization. Since boot
time * number of reboots has a direct impact
on the number of 9's you can claim for uptime, this is actually a large
win for reliability.
Zach
> ??? I fail to see the code bloat and also the fast paths. Which fast
> paths use udelay?
IDE on several platforms has performance-critical paths that use
ndelay(400) or, failing that, udelay(1).
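To illustrate the kind of fast path Alan means, here is a hypothetical
helper (not actual driver code) built around the ATA requirement of a
~400 ns settle time after writing the device-select register:

#include <linux/delay.h>
#include <linux/io.h>

/*
 * Hypothetical sketch, not real IDE code: the ATA spec requires a
 * ~400 ns pause after selecting a drive before status is valid. This
 * runs per command, so extra call overhead from an indirect paravirt
 * delay hurts, while skipping the wait risks reading stale status.
 */
static void sketch_dev_select(void __iomem *device_reg, u8 device)
{
	writeb(device, device_reg);
	ndelay(400);	/* or udelay(1) where ndelay lacks resolution */
}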
Alan wrote:
>> ??? I fail to see the code bloat and also the fast paths. Which fast
>> paths use udelay?
>>
>
> IDE on several platforms has performance-critical paths that use
> ndelay(400) or, failing that, udelay(1).
Ok, I buy that. A 486DX / 33 MHz processor takes 10 cycles to issue a
CALL / RET pair. This is about 300ns. Is there an issue with being too
early to issue I/O operations or too late?
If it is too early, then we have lost 10 cycles of processor time per
udelay, which, considering the antiquity of this piece of hardware, seems
acceptable.
If it is too late, then there could be a real issue on older machines.
But I fail to see how such careful timing can be done at this
granularity on such hardware without well-tweaked assembly code. A
suboptimal compiler output could easily throw you off by just as much,
sticking a little bit of arithmetic before and after the delay.
Zach
On Mon, 2007-02-05 at 19:53 -0800, Zachary Amsden wrote:
> Failure to use real-time delay here causes the keyboard to become demonically
> possessed in the event of a kernel crash, with wildly blinking lights and
> unpredictable behavior. This has resulted in several injuries.
The problem is the normal one when we introduce a new concept into the
kernel; there are two kinds of udelay, and they've been conflated. The
most common case is a delay for real hardware devices; this can be
eliminated for paravirtualization. The other cases, the tiny minority,
are visible delays (keyboard LEDs), not knowing if they're necessary
(very early boot), and async events (other CPUs coming up): i.e.
everything else.
Is implementation of this distinction worth it? The answer IMHO is no;
this is a case where simplicity wins.
Firstly, most modern distributions ship kernels which have almost no
drivers, and modprobe in early boot. So assuming paravirtualized
drivers, we don't even experience the boot delays.
Secondly, the major delay for my current kernel is IDE probing (3 out of
5 seconds of boot time!), but implementing the delay ops in lguest does
*nothing* for this, because it uses "msleep()" not "mdelay()". So the
current solution isn't sufficient anyway. If this is a real problem for
users I'd rather try to convince the IDE maintainer to use a
"real_hardware_mdelay()" or something here.
Cheers!
Rusty.
PS. lguest doesn't implement the delay functions; I just tried
implementing that now for testing.
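A minimal sketch of the distinction Rusty is pointing at (hypothetical
function; kernel-style C): paravirt delay ops can only intercept the
busy-wait family, while msleep() sleeps via the scheduler and never goes
through them, so IDE's probe waits are untouched.

#include <linux/delay.h>

/* Hypothetical illustration - not from any driver. */
static void probe_settle(void)
{
	mdelay(50);	/* busy-waits ~50 ms; a paravirt hook could elide this */
	msleep(50);	/* sleeps >= 50 ms via the scheduler; no hook involved */
}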
> users I'd rather try to convince the IDE maintainer to use a
> "real_hardware_mdelay()" or something here.
The IDE/ATA layer has a single function which decides what non-PCI
probing to do, so it could learn to spot virtualisation easily enough. In
the case of libata, ISA is now obscure enough that you must load a driver
to trigger ISA probes, so the problem is on its way out anyway.
On Wed, 14 Feb 2007 13:53:08 -0800
Zachary Amsden <[email protected]> wrote:
> > IDE on several platforms has performance-critical paths that use
> > ndelay(400) or, failing that, udelay(1).
>
> > Ok, I buy that. A 486DX / 33 MHz processor takes 10 cycles to issue a
> CALL / RET pair. This is about 300ns. Is there an issue with being too
> early to issue I/O operations or too late?
Too early you lose, too late you just waste clock time.
> But I fail to see how such careful timing can be done at this
> granularity on such hardware without well-tweaked assembly code.
That's what is used; most platforms use udelay(1) in fact, however.
Hi!
> >>We'd have to audit and figure out what udelays are for hardware and
> >>which are not, but the evidence is that the vast majority of them are
> >>for hardware and not needed for virtualization.
> >>
> >
> >Which is irrelevant since the hardware drivers won't be used in a
> >virtualised environment with any kind of performance optimisation.
> >
>
> Which is why an audit is irrelevant for the most part. Note on the
> performance below.
You know it is ugly. Alan demonstrated it even hurts performance, but
being ugly is the main problem.
If you _need_ to avoid udelay() in some cases, introduce
udelay_unless_virtualized(), and switch the few users to it. Just globally
defining udelay to nop is _ugly_.
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
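A minimal sketch of the opt-in wrapper Pavel is proposing; the name and
the running_on_hypervisor() predicate are hypothetical, and nothing like
this exists in the tree:

#include <linux/delay.h>

/*
 * Hypothetical: udelay() keeps its normal busy-wait meaning everywhere,
 * and only call sites audited as pure hardware-settling waits opt in
 * to skipping the delay when paravirtualized.
 */
static inline void udelay_unless_virtualized(unsigned long usecs)
{
#ifdef CONFIG_PARAVIRT
	if (running_on_hypervisor())	/* hypothetical predicate */
		return;
#endif
	udelay(usecs);
}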
On 2/14/07, Rusty Russell <[email protected]> wrote:
> On Mon, 2007-02-05 at 19:53 -0800, Zachary Amsden wrote:
> > Failure to use real-time delay here causes the keyboard to become demonically
> > possessed in the event of a kernel crash, with wildly blinking lights and
> > unpredictable behavior. This has resulted in several injuries.
>
> The problem is the normal one when we introduce a new concept into the
> kernel; there are two kinds of udelay, and they've been conflated. The
> most common case is a delay for real hardware devices; this can be
> eliminated for paravirtualization. The other cases, the tiny minority,
> are visible delays (keyboard LEDs), not knowing if they're necessary
> (very early boot), and async events (other CPUs coming up): i.e.
> everything else.
>
Just for the record - I believe that i8042 is fine as is now. The only
place where a real delay needs to be enforced is panic.c - it should
call the panic blink routine once every 1 ms.
--
Dmitry
Pavel Machek wrote:
>
> You know it is ugly. Alan demonstrated it even hurts performance, but
> being ugly is the main problem.
>
No argument with that. Well, we're ok with dropping it. Actually,
reverting the entire set of udelay changes now seems wise. The same bug
that happened with i8042 can happen with any hardware device driven by
the VM in passthrough mode - potentially USB or IDE CDROM or direct SCSI.
Since that is per-device and not global, a global udelay disable really
is broken in that case, and recompiling individual C files or modules
for passthrough vs. non-passthrough is not the answer.
So Rusty, Chris, Jeremy, any objections to killing udelay() and friends
in paravirt-ops? It would simplify things a bit. The only thing we
lose is a slightly faster boot time in the 100% emulated device case.
I'm ok with losing that. Even the PIT fast paths don't use udelay, so I
don't think we care for runtime performance at all.
Zach
Hi!
> >You know it is ugly. Alan demonstrated it even hurts performance, but
> >being ugly is the main problem.
> >
>
> No argument with that. Well, we're ok with dropping it. Actually,
> reverting the entire set of udelay changes now seems wise. The same bug
Good, thanks.
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
Zachary Amsden wrote:
> So Rusty, Chris, Jeremy, any objections to killing udelay() and
> friends in paravirt-ops? It would simplify things a bit. The only
> thing we lose is a slightly faster boot time in the 100% emulated
> device case. I'm ok with losing that. Even the PIT fast paths don't
> use udelay, so I don't think we care for runtime performance at all.
Fine with me. That always seemed a bit warty to me.
J
On Thu, 2007-02-15 at 15:42 -0800, Zachary Amsden wrote:
> So Rusty, Chris, Jeremy, any objections to killing udelay() and friends
> in paravirt-ops? It would simplify things a bit.
I agree.
Thanks!
Rusty.