2001-04-09 16:03:57

by Wade Hampton

Subject: 2.4.3, VMWare, 2 VMs

Greetings,

Is anyone having problems with running more than
1 VM on 2.4.3? I have crashed my host O/S several
times when I try to start two VMs. Currently,
I don't have an oops or other info to report, but
I did see a post on the vmware list about 2.4.3 SMP
and VMWARE.

Host:
dual PIII/800, 256 M RAM, RedHat 6.2 with updates, 2.4.3

VM1:
RedHat 7.0 with recent updates

VM2:
RedHat 7.0 with 2.4.3 and LIDS

Cheers,
--
W. Wade, Hampton <[email protected]>
If Microsoft Built Cars: Every time they repainted the
lines on the road, you'd have to buy a new car.
Occasionally your car would just die for no reason, and
you'd have to restart it, but you'd just accept this.


2001-04-09 16:51:34

by Petr Vandrovec

Subject: Re: 2.4.3, VMWare, 2 VMs

On 9 Apr 01 at 12:03, Wade Hampton wrote:

> Is anyone having problems with running more than
> 1 VM on 2.4.3? I have crashed my host O/S several
> times when I try to start two VMs. Currently,
> I don't have an oops or other info to report, but
> I did see a post on the vmware list about 2.4.3 SMP
> and VMWARE.

As I already answered on VMware newsgroups:

VMware's 2.0.3 vmmon module uses save_flags() + cli()
in its poll() fops, and after this cli() it calls
spin_lock() :-( That is not the safest thing to do,
but it should not cause a reboot. You should get

/dev/vmmon: 11 wait for global VM lock XX

and then a dead machine with interrupts disabled...

As all other callers of HostIF_GlobalVMLock() hold the
big kernel lock, the easiest thing to do is to add
lock_kernel()/unlock_kernel() around the LinuxDriver_Poll()
body.
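
A sketch of that workaround might look like the following (2.4-era kernel API; only LinuxDriver_Poll(), HostIF_GlobalVMLock() and the lock_kernel()/unlock_kernel() idea come from the discussion here -- the helper name and the body are hypothetical, since the actual vmmon source is not shown):

```c
/*
 * Hypothetical sketch of the suggested fix: wrap the poll body
 * in the big kernel lock so it serializes with the other callers
 * of HostIF_GlobalVMLock(), which already run under the BKL.
 * LinuxDriverPollBody() stands in for the existing poll code.
 */
static unsigned int
LinuxDriver_Poll(struct file *filp, poll_table *wait)
{
   unsigned int mask;

   lock_kernel();
   mask = LinuxDriverPollBody(filp, wait);   /* original poll logic */
   unlock_kernel();

   return mask;
}
```

This only papers over the lock ordering; the save_flags()/cli() followed by spin_lock() pattern stays in place, which is why removing it entirely (as mentioned below) would be the better long-term fix.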

Removing the whole save_flags/cli is certainly much better,
but it is still in my queue (if you are looking into the vmmon
driver, the whole poll mess is there to get a wakeup on the
next jiffy, and not on the next + one...).
Petr Vandrovec
[email protected]

2001-04-09 17:15:14

by Wade Hampton

Subject: Re: 2.4.3, VMWare, 2 VMs

Petr Vandrovec wrote:
>
> On 9 Apr 01 at 12:03, Wade Hampton wrote:
>
> > Is anyone having problems with running more than
> As I already answered on VMware newsgroups:
Thanks. I didn't see the post on the VMware newsgroup....

> VMware's 2.0.3 vmmon module uses save_flags() + cli()
> in its poll() fops, and after this cli() it calls
> spin_lock() :-( That is not the safest thing to do,
> but it should not cause a reboot. You should get
>
> /dev/vmmon: 11 wait for global VM lock XX
I had over 2000 of those in /var/log/messages
(not counting the "repeated" lines in /var/log/messages).
Yep, that's the problem....
>
> and then a dead machine with interrupts disabled...
Yes, basically a dead machine with NO response to
anything....
>
> As all other callers of HostIF_GlobalVMLock() hold the
> big kernel lock, the easiest thing to do is to add
> lock_kernel()/unlock_kernel() around the LinuxDriver_Poll()
> body.
>
> Removing the whole save_flags/cli is certainly much better,
> but it is still in my queue (if you are looking into the vmmon
> driver, the whole poll mess is there to get a wakeup on the
> next jiffy, and not on the next + one...).
No, I can wait for a release that fixes this. If
you have a patch or test version, send it to me and I'll
test it on my development machine....

For now, I'll just not use 2 VMs until it is fixed.

Cheers,
--
W. Wade, Hampton <[email protected]>