I allow myself to suggest the following, although not sure if I post in
the right group:
Suppose Linux could save the total state of a program to disk, for
instance, imagine a program like mozilla with many open windows. I give
it a SIGNAL-SAVETODISK and the process memory image is dropped to a
file. I can then turn off the computer and later continue using the
program where I left it, by loading it back into memory.
Would that be possible? At least a program can be given a ctrl-z and is
swapped out if physical memory is needed. This is somewhat similar (?)
Would that need kernel parameters to be included in the process image
file? What about X-windows resources? Is this simply too easy to exploit
by having altered process images loaded back into the memory? ('virus')
If possible, a neat titlebar icon 'zzz' could be added to the decoration
provided by the (X) window manager.
On Sat, 2005-10-01 at 13:30 -0800, lokum spand wrote:
> I allow myself to suggest the following, although not sure if I post in
> the right group:
>
> Suppose Linux could save the total state of a program to disk, for
> instance, imagine a program like mozilla with many open windows. I give
> it a SIGNAL-SAVETODISK and the process memory image is dropped to a
> file. I can then turn off the computer and later continue using the
> program where I left it, by loading it back into memory.
>
> Would that be possible? At least a program can be given a ctrl-z and is
there is a LOT of state though.. the moment you add networking in the
picture the amount of state just isn't funny anymore. Your X example is
a good one as well...
Arjan van de Ven wrote:
>On Sat, 2005-10-01 at 13:30 -0800, lokum spand wrote:
>
>
>>I allow myself to suggest the following, although not sure if I post in
>>the right group:
>>
>>Suppose Linux could save the total state of a program to disk, for
>>instance, imagine a program like mozilla with many open windows. I give
>>it a SIGNAL-SAVETODISK and the process memory image is dropped to a
>>file. I can then turn off the computer and later continue using the
>>program where I left it, by loading it back into memory.
>>
>>Would that be possible? At least a program can be given a ctrl-z and is
>>
>>
>
>there is a LOT of state though.. the moment you add networking in the
>picture the amount of state just isn't funny anymore. Your X example is
>a good one as well...
>
>
There are a few cluster/parallel computing libraries out there that are
starting to allow "process migration"...
One would assume that "saving it to a disk" is simply a degenerate case
of migrating the process...
Presuming they have process migration working (and it seemed close a
while ago when I last looked), saving to a file might already be
supported... I'd google "process migration" and you are likely to find
a lot of discussion on this topic...
/mike
>From: Michael Concannon <[email protected]>
>To: Arjan van de Ven <[email protected]>
>CC: lokum spand <[email protected]>, [email protected]
>Subject: Re: A possible idea for Linux: Save running programs to disk
>Date: Sat, 01 Oct 2005 18:21:37 -0400
>
In fact moving processes from one machine to another would be a brilliant
feature at my work, since we run fairly large and time-consuming simulations
on electronic circuits. If the kernel could natively support bouncing jobs
back and forth, that would really be something. Since we simulate with
proprietary software, I suppose we can't rely on the simulator being
rewritten to support such special libraries.
Does any other Unix variant have process bouncing already?
Yes, plenty of proprietary tools in that space... This is something
Big-Iron has needed to do for quite a while - if nothing else,
fault-tolerance requires migrating processes off dead or dying nodes.
More recently some of the gnu stuff has come up to speed - though I
must admit it has been a while since I looked...
For the gnu world, have a look at "mosix" - but again, go to
groups.google.com and search for "process migration"... I am sure there
have been developments in the time since I last looked at it... I
satisfied my needs with gridengine as the least invasive way to deal
with load balancing of EDA tools without having to use $LSF. My
requirements at the time were "can I get it running and use it in an
hour...", so I doubt it was the best or only choice...
The goal of most of the clustering libraries is to be transparent
to the tool, but flexlm causes trouble even if you don't do screwy
things, so YMMV. At this point I have exhausted my expertise on the
subject :-).
search terms:
gridengine
PVM
mosix
cluster
process migration
/mike
On Sat, 01 Oct 2005 23:39:14 +0200,
Arjan van de Ven <[email protected]> wrote:
> there is a LOT of state though.. the moment you add networking in the
> picture the amount of state just isn't funny anymore. Your X example is
> a good one as well...
If X allowed disconnecting an app from the server and re-connecting it again
(and it seems there are people at X.org looking into things like this, since
it's necessary for people using X's networking over wireless connections),
it'd be easier to support.
Some operating systems already have something like this and call
it "process checkpointing": "Checkpointing allows you to freeze a copy of an
application, and then at a later time, it can be restored."
http://kerneltrap.org/node/1042
Desktop users would love it: instead of "exiting" your desktop session,
just dump all your running apps to disk and restore them the next time
you start your desktop, just as you left them. This is already doable with some
support from apps, but it doesn't seem to be implemented in the real world :/
lokum spand <[email protected]> wrote:
> I allow myself to suggest the following, although not sure if I post in
> the right group:
>
> Suppose Linux could save the total state of a program to disk, for
> instance, imagine a program like mozilla with many open windows. I give
> it a SIGNAL-SAVETODISK and the process memory image is dropped to a
> file. I can then turn off the computer and later continue using the
> program where I left it, by loading it back into memory.
What about the open file descriptors? If the program uses temporary files,
closing them will destroy the data in the temp files. Therefore you can't
close these fds, and this prevents you from doing a shutdown.
Use suspend-to-disk instead.
--
I thank GMX for sabotaging the use of my addresses by means of lies
spread via SPF.
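
To make the temp-file problem above concrete, a minimal C sketch of the usual idiom: the scratch file is unlinked immediately after creation, so its data lives only as long as the descriptor stays open, and closing it so the machine can be shut down destroys exactly what you wanted to preserve.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* Typical scratch-file idiom: create, then unlink straight away so
     * the file vanishes automatically when the process exits. */
    char path[] = "/tmp/scratchXXXXXX";
    int fd = mkstemp(path);
    if (fd < 0) { perror("mkstemp"); return 1; }
    unlink(path);                       /* no name left on disk */

    write(fd, "intermediate results\n", 21);

    /* Closing the descriptor (e.g. to power off the box) drops the
     * last reference and the kernel frees the data - there is nothing
     * left to reopen when the saved image is restored. */
    close(fd);
    return 0;
}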
On Sat, Oct 01, 2005 at 01:30:22PM -0800, lokum spand wrote:
> Suppose Linux could save the total state of a program to disk, for
> instance, imagine a program like mozilla with many open windows. I give
> it a SIGNAL-SAVETODISK and the process memory image is dropped to a
> file. I can then turn off the computer and later continue using the
> program where I left it, by loading it back into memory.
http://www.checkpointing.org lists several solutions for this.
I'm the author of CryoPID[1] - it's a checkpointing program that
allows you to save the state of a process to a file without any
prior thought when linking or running the process. It won't handle
an entire mozilla process, but single-threaded console-based apps
are quite feasible. Migration between machines works too - 2.6 to
2.6 works, 2.4 to 2.4 works, 2.6 to 2.4 works, and 2.4 to 2.6 mostly
works with some TLS emulation (which might be incomplete, but can
always be improved).
Open files are reopened. Opened temporary files (unlinked) could
potentially be restored by scraping the contents out of the file
while the process in question has it open.
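
The "scraping" can be done from outside the process via procfs - a rough sketch, assuming appropriate permissions and that the pid/fd pair has already been found by listing /proc/<pid>/fd:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>

/* Copy the contents of an unlinked-but-still-open file belonging to
 * another process into the checkpoint.  Opening /proc/<pid>/fd/<n>
 * reaches the live inode even though the file no longer has a name. */
static int scrape_fd(pid_t pid, int fd, const char *dest)
{
    char procpath[64];
    char buf[4096];
    ssize_t n;

    snprintf(procpath, sizeof(procpath), "/proc/%d/fd/%d", (int)pid, fd);

    int in = open(procpath, O_RDONLY);
    int out = open(dest, O_WRONLY | O_CREAT | O_TRUNC, 0600);
    if (in < 0 || out < 0)
        return -1;

    while ((n = read(in, buf, sizeof(buf))) > 0)
        write(out, buf, n);

    close(in);
    close(out);
    return 0;
}

The restored process would also need its original file offset put back with lseek().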
Networking isn't too bad really so long as you keep the same IP. TCP
sockets can be handled by tcpcp[2] which is already supported by
CryoPID. UDP sockets are pretty trivial, but not yet done. For both
of these to be reliable though, there needs to be some sort of
arrangement to drop packets on these connections whilst they are
suspended and not have the kernel send an RST back. (Thinking a
daemon that drives some iptables).
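
To give an idea of scale, a hedged sketch of just the user-visible half of the TCP state a checkpointer has to record (not tcpcp's actual interface; the parts the socket API cannot expose, such as sequence numbers and queued segments, are exactly why tcpcp needs kernel help):

#include <netinet/in.h>
#include <netinet/tcp.h>    /* TCP_INFO and struct tcp_info on Linux */
#include <sys/socket.h>

/* Record the externally visible state of a connected TCP socket into
 * a checkpoint image.  Restoring the connection is the hard part. */
static int save_tcp_state(int sock)
{
    struct sockaddr_in local, peer;
    socklen_t alen = sizeof(local);
    struct tcp_info info;
    socklen_t ilen = sizeof(info);

    if (getsockname(sock, (struct sockaddr *)&local, &alen) < 0)
        return -1;
    alen = sizeof(peer);
    if (getpeername(sock, (struct sockaddr *)&peer, &alen) < 0)
        return -1;
    if (getsockopt(sock, IPPROTO_TCP, TCP_INFO, &info, &ilen) < 0)
        return -1;

    /* ...write local/peer addresses, info.tcpi_state and friends, plus
     * any unread data peeked with recv(..., MSG_PEEK) to the image... */
    return 0;
}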
Unix sockets are indeed trickier. Mostly this is for X applications,
and for this I'm actually looking towards toolkit-based solutions as
apps can't be expected to deal with things like colour depth changes
and so on. Gtk+ can already migrate applications between displays,
with the only issue being that not all the resources tied to the
original X server are freed, so you can't lose it. This is scheduled
to be fixed for Gtk+ 2.10 though, so I'm holding out hope for this.
Multithreading or even multiple processes will be a fun one though.
Ditto for shared memory and other IPC stuff. Determined that it's
possible, just not sure how yet. :)
As for portability, it was written for x86 and has been ported to
AMD64, and I'm also in the middle of porting it to sparc. (ppc and
alpha planned too).
Yes, it has to do some pretty vile things to avoid modifying the
kernel or userspace programs, but it's quite suitable for doing
things like backing up your irssi sessions hourly, saving
computational jobs across a reboot or moving them to another
machine, or showing off features of an application.
Bernard.
[1] http://cryopid.berlios.de/
[2] http://tcpcp.sf.net/
On 10/1/05, lokum spand <[email protected]> wrote:
> ... a program like mozilla with many open windows. I give
> it a SIGNAL-SAVETODISK and the process memory image is dropped to a
> file. I can then turn off the computer and later continue using the
> program where I left it, by loading it back into memory.
FWIW, you can already do this with Firefox (and Mozilla, I'm sure)
using the Sessionsaver plugin.
And while I can shed no further light on your idea, I wholeheartedly
support it. It would be a nice alternative to swsusp/Suspend2 in that
it could possibly avoid hardware issues involved with hibernation.
-Andy
On Sat, Oct 01, 2005 at 02:51:17PM -0800, lokum spand wrote:
> Does any other Unix variant have process bouncing already?
DragonflyBSD supports (or plans to - I'm not sure) process checkpointing.
--
Tomasz Torcz Only gods can safely risk perfection,
[email protected] it's a dangerous thing for a man. -- Alia
Hi,
Looks like this can be done in user space...
Bernard, is there any kernel API whose addition would make cryopid
more dependable/cleaner?
Ed Tomlinson
On 10/2/05, lokum spand <[email protected]> wrote:
> In fact moving processes from one machine to another would be a brilliant
> feature at my work, since we run fairly large and time-consuming simulations
> on electronic circuits. If the kernel could natively support bouncing jobs
> back and forth, that would really be something. Since we simulate with
> proprietary software, I suppose we can't rely on the simulator being
> rewritten to support such special libraries.
>
> Does any other Unix variant have process bouncing already?
>
You can have a look at kerrighed or openssi. They have modified
kernels which feature process migration (and checkpointing in the
case of kerrighed).
regards,
Benoit
On Sun, Oct 02, 2005 at 08:57:26AM -0400, Ed Tomlinson wrote:
> Is there any kernel API whose addition would make cryopid more
> dependable/cleaner?
Currently a fair bit of information is obtained by injecting code
into the process's memory space, executing it, and reaping out the
results (eg, termcaps, file offsets, fcntl states, locks, signal
actions, etc). Can't think of ways to make it cleaner off the top
of my head, but I'm open to ideas.
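
For readers who haven't seen the technique, a greatly simplified sketch of the inspection side of this - attaching with ptrace and reading the target's registers and one word of its memory. (The names and example address are made up; actually injecting and running code in the target, as CryoPID does, is considerably hairier.)

#include <errno.h>
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/user.h>
#include <sys/wait.h>

static int peek_target(pid_t pid, unsigned long addr)
{
    struct user_regs_struct regs;
    long word;

    if (ptrace(PTRACE_ATTACH, pid, NULL, NULL) < 0)
        return -1;
    waitpid(pid, NULL, 0);                      /* target is now stopped */

    ptrace(PTRACE_GETREGS, pid, NULL, &regs);   /* register state (x86) */

    errno = 0;
    word = ptrace(PTRACE_PEEKDATA, pid, (void *)addr, NULL);
    if (word == -1 && errno)
        perror("PTRACE_PEEKDATA");
    else
        printf("word at %#lx: %#lx\n", addr, (unsigned long)word);

    ptrace(PTRACE_DETACH, pid, NULL, NULL);
    return 0;
}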
Seeing as you asked though, here's my wishlist :) I don't expect all
of these to be implemented, but every bit would help. Issues I
haven't been able to address so far:
- Processes that cache their PID and need it, or rely on PIDs of
their children.
Some way to request a given PID when cloning/forking (or on the
fly even) would make life easier.
- UNIX sockets aren't currently supported but figuring out what is
connected to what seems a little shaky. Some old code used to
take a guess that socketpairs had inodes k and k+1 with k odd,
which seemed to work in all cases I saw. It certainly didn't feel
reliable though.
An extra field in /proc/net/unix saying which inode was on the
other end of the socket (only needed for the connect() end) would be great.
- Setting cmdline as appears in /proc/$pid/cmdline. argv and
environ point somewhere into the process's stack determined by
the size of argv and environ at exec time. Without va space
randomisation in the picture, it's not too difficult to reproduce
this and get it back where it should be (it's a hack though), but
with va space randomisation it's pretty much impossible.
An API to change the actual memory location of this (task's
mm->arg_start) would be handy (prctl maybe?)
- Merging tcpcp for TCP connection saving support.
- I haven't put a great deal of thought into the multithreading
side of things, but a non-intrusive way to determine which
threads share filesystem contexts, FDs, namespaces, VMAs, and so
on would be infinitely useful. My current plan of attack was to
tinker with one and see if I could observe it from the others.
That's all I can think of for now. If any of these look even
remotely plausible to incorporate, I'd be quite happy to prepare
patches (2 and 3 seem trivial enough :)
Kind regards,
Bernard.
Bernard Blackham <[email protected]> wrote:
> On Sun, Oct 02, 2005 at 08:57:26AM -0400, Ed Tomlinson wrote:
>> Is there any kernel API whose addition would make cryopid more
>> dependable/cleaner?
>
> Currently a fair bit of information is obtained by injecting code
> into the process's memory space, executing it, and reaping out the
> results (eg, termcaps, file offsets, fcntl states, locks, signal
> actions, etc). Can't think of ways to make it cleaner off the top
> of my head, but I'm open to ideas.
What about using an uml wrapper + vncserver? This would give you a complete
virtual environment, and if you can make uml suspend-to-disk, you've got
most of it. (I admit I never tried uml, so this is just a guess.)
Of course the network connections will still time out etc., but there's
nothing you can do about that.
Or simply use opera, which can save its own state.
--
I thank GMX for sabotaging the use of my addresses by means of lies
spread via SPF.
On Sun, Oct 02, 2005 at 07:08:43PM +0200, Bodo Eggert wrote:
> Bernard Blackham <[email protected]> wrote:
> >> Is there any kernel API whose addition would make cryopid more
> >> dependable/cleaner?
> >
> > Currently a fair bit of information is obtained by injecting code
> > into the process's memory space, executing it, and reaping out the
> > results (eg, termcaps, file offsets, fcntl states, locks, signal
> > actions, etc). Can't think of ways to make it cleaner off the top
> > of my head, but I'm open to ideas.
>
> What about using an uml wrapper + vncserver?
Requires consciously doing so when you start it. It most certainly
could be done that way, but one of cryopid's aims is to work on any
running process without prior planning.
Interesting idea though - it'd be somewhat akin to porting
suspend-to-disk to UML (which has been on suspend2's todo list for a
while though :)
Bernard.
On 10/1/05, lokum spand <[email protected]> wrote:
> I allow myself to suggest the following, although not sure if I post in
> the right group:
I've looked at something similar; it would have been my PhD area of interest.
> Suppose Linux could save the total state of a program to disk, for
> instance, imagine a program like mozilla with many open windows. I give
> it a SIGNAL-SAVETODISK and the process memory image is dropped to a
> file. I can then turn off the computer and later continue using the
> program where I left it, by loading it back into memory.
My interest is in having journalled processes at the system call level
so you can do full forward error recovery and resume on another node.
But in this day and age of webby stuff it's often not necessary for
the enterprise and a lot of hassle for everyone else (especially
preserving and handling network state). In any case, at OLS, I asked
the Xen folks about this and was told some people are apparently
looking into somehow "transactionalising" Xen so you'll be able to
checkpoint as you go and handle failover.
> Would that be possible? At least a program can be given a ctrl-z and is
> swapped out if physical memory is needed. This is somewhat similar (?)
> Would that need kernel parameters to be included in the process image
> file? What about X-windows resources? Is this simply too easy to exploit
> by having altered process images loaded back into the memory? ('virus')
I think that for very specific applications it would be possible;
others would make it much harder and would require userland
support (even if not in the application itself). It's something I'd
recommend as a research topic due to its open-ended nature, but I'm
not so sure we'll see this in Linux any time soon :-)
Jon.
On 10/1/05, lokum spand <[email protected]> wrote:
> In fact moving processes from one machine to another would be a brilliant
> feature at my work, since we run fairly large and time-consuming simulations
> on electronic circuits. If the kernel could natively support bouncing jobs
> back and forth, that would really be something. Since we simulate with
> proprietary software, I suppose we can't rely on the simulator being
> rewritten to support such special libraries.
But that already exists. Projects like OpenMOSIX can abuse the
already existing migration code in the scheduler, and Xen supports
moving whole virtual machines on the fly. There are others too.
> Does any other Unix variant have process bouncing already?
Lots.
Cheers!
Jon.
On Mon, Oct 03, 2005 at 01:51:16AM +0800, Bernard Blackham wrote:
> Interesting idea though - it'd be somewhat akin to porting
> suspend-to-disk to UML (which has been on suspend2's todo list for a
> while though :)
It would be exactly that. Note that external network connections are still
going to cause problems.
Jeff
lokum spand wrote:
>
> In fact moving processes from one machine to another would be a
> brilliant feature at my work, since we run fairly large and
> time-consuming simulations on electronic circuits. If the kernel could
> natively support bouncing jobs back and forth, that would really be
> something. Since we simulate with proprietary software, I suppose we
> can't rely on the simulator being rewritten to support such special
> libraries.
>
The OpenSSI patches to the Kernel can make a network of machines behave
like a single system image with automatic process migration, among other
things.
http://openssi.org
Regards,
LL
On Sun, Oct 02, 2005 at 01:36:12AM -0400, Andrew Haninger wrote:
> On 10/1/05, lokum spand <[email protected]> wrote:
> > ... a program like mozilla with many open windows. I give
> > it a SIGNAL-SAVETODISK and the process memory image is dropped to a
> > file. I can then turn off the computer and later continue using the
> > program where I left it, by loading it back into memory.
> FWIW, you can already do this with Firefox (and Mozilla, I'm sure)
> using the Sessionsaver plugin.
>
> And while I can shed no further light on your idea, I wholeheartedly
> support it. It would be a nice alternative to swsusp/Suspend2 in that
> it could possibly avoid hardware issues involved with hibernation.
Where are hardware issues with suspend to disk?
> -Andy
cu
Adrian
--
"Is there not promise of rain?" Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
"Only a promise," Lao Er said.
Pearl S. Buck - Dragon Seed
On 10/3/05, Adrian Bunk <[email protected]> wrote:
> Where are hardware issues with suspend to disk?
>
Actually, very few currently [AFAIK, on my hardware]. However, at
least in the past, my r200 card wasn't useable after resume from
suspend without patches to XFree86. I know people had trouble with the
fglrx drivers not supporting suspend-to-disk. [I believe current r300
drivers work fine, but I do not have personal confirmation.] I have a
machine that used to have issues because I was using a keyboard and no
mouse. When I resumed, the keyboard didn't work. If I had a mouse
plugged in, suspend/resume worked fine. Here's a link to my mail to
LKML about this issue:
http://marc.theaimsgroup.com/?l=linux-kernel&m=112139506118959&w=2
As new hardware is introduced and drivers have to be written or
reverse-engineered and kinks worked out, bugs like these will crop up
again and again. [That is, unless manufacturers become more open about
their hardware. From my perspective, they are becoming more closed.
ATI, for example.]
If processes could be suspended to disk independently of the "physical
state" of the machine, it would avoid issues like these. You could
"suspend-to-disk", install a new video/sound/network card and then
resume as though nothing happened. (Ignoring issues with TCP, of
course.)
Neat.
-Andy
On Mon, Oct 03, 2005 at 02:52:52PM -0400, Andrew Haninger wrote:
> On 10/3/05, Adrian Bunk <[email protected]> wrote:
> > Where are hardware issues with suspend to disk?
> >
> Actually, very few currently [AFAIK, on my hardware]. However, at
> least in the past, my r200 card wasn't useable after resume from
> suspend without patches to XFree86. I know people had trouble with the
> fglrx drivers not supporting suspend-to-disk. [I believe current r300
> drivers work fine, but I do not have personal confirmation.] I have a
> machine that used to have issues because I was using a keyboard and no
> mouse. When I resumed, the keyboard didn't work. If I had a mouse
> plugged in, suspend/resume worked fine. Here's a link to my mail to
> LKML about this issue:
> http://marc.theaimsgroup.com/?l=linux-kernel&m=112139506118959&w=2
>
> As new hardware is introduced and drivers have to be written or
> reverse-engineered and kinks worked out, bugs like these will crop up
> again and again. [That is, unless manufacturers become more open about
> their hardware. From my perspective, they are becoming more closed.
> ATI, for example.]
>...
These are all software problems, not hardware problems.
> Neat.
>
> -Andy
cu
Adrian
--
"Is there not promise of rain?" Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
"Only a promise," Lao Er said.
Pearl S. Buck - Dragon Seed
On 10/3/05, Adrian Bunk <[email protected]> wrote:
> These are all software problems, not hardware problems.
>
Okay. You're correct. I should have used the word "driver" instead of
"hardware" in my original post.
An idea like the one presented in the original post would be helpful for
avoiding driver issues involved in software suspend.
-Andy
Hi!
> > What about using an uml wrapper + vncserver?
>
> Requires consciously doing so when you start it. It most certainly
> could be done that way, but one of cryopid's aims is to work on any
> running process without prior planning.
>
> Interesting idea though - it'd be somewhat akin to porting
> suspend-to-disk to UML (which has been on suspend2's todo list for a
> while though :)
Better port swsusp1... I was thinking about that, too, but it is going
to be quite complex. Patches certainly welcome.
Pavel
--
if you have sharp zaurus hardware you don't need... you know my address
Quoting Bernard Blackham ([email protected]):
> On Sun, Oct 02, 2005 at 08:57:26AM -0400, Ed Tomlinson wrote:
> > Is there any kernel API whose addition would make cryopid more
> > dependable/cleaner?
>
> Currently a fair bit of information is obtained by injecting code
> into the process's memory space, executing it, and reaping out the
> results (eg, termcaps, file offsets, fcntl states, locks, signal
> actions, etc). Can't think of ways to make it cleaner off the top
> of my head, but I'm open to ideas.
>
> Seeing as you asked though, here's my wishlist :) I don't expect all
> of these to be implemented, but every bit would help. Issues I
> haven't been able to address so far:
>
> - Processes that cache their PID and need it, or rely on PIDs of
> their children.
>
> Some way to request a given PID when cloning/forking (or on the
> fly even) would make life easier.
Have you considered any ways of implementing this? Perhaps the simplest
way would actually be to allow a process set to be started in some kind
of job/jail/container/vserver, where any userspace query of or by pid
uses the virtual pid - which might collide with a virtual pid in some
other container - but of course the kernel continues to track by real
pids. So pid 3728 may be vpid 2287 in job 3. A process inside job 3
just asks to kill -9 2287, whereas a process not in a job must ask to
kill pid 3728, and a process in job 2 can't touch tasks in job 3. Is
there another way this could work?
-serge
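
To make the bookkeeping concrete, a made-up C sketch of the per-job translation table described above (illustration only; these are not existing kernel structures):

#include <sys/types.h>

/* One vpid <-> real-pid pair.  Userspace inside the job only ever sees
 * the vpid; the kernel keeps working with the real pid. */
struct vpid_entry {
    pid_t vpid;        /* e.g. 2287, as seen inside job 3 */
    pid_t real_pid;    /* e.g. 3728, the kernel's pid */
};

struct job {
    int id;
    struct vpid_entry *map;
    int nr_entries;
};

/* Translate a vpid named by a task inside 'job' into the real pid, or
 * -1 if it doesn't exist there - so a task in job 2 simply has no way
 * to name a task in job 3. */
static pid_t job_resolve_vpid(const struct job *job, pid_t vpid)
{
    for (int i = 0; i < job->nr_entries; i++)
        if (job->map[i].vpid == vpid)
            return job->map[i].real_pid;
    return -1;
}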
Apologies for the delay.
On Sun, Oct 09, 2005 at 08:13:04PM -0500, [email protected] wrote:
> Quoting Bernard Blackham ([email protected]):
> > Some way to request a given PID when cloning/forking (or on the
> > fly even) would make life easier.
>
> Have you considered any ways of implementing this? Perhaps the simplest
> way would actually be to allow a process set to be started in some kind
> of job/jail/container/vserver, where any userspace query of or by pid
> uses the virtual pid - which might collide with a virtual pid in some
> other container - but of course the kernel continues to track by real
> pids. So pid 3728 may be vpid 2287 in job 3. A process inside job 3
> just asks to kill -9 2287, whereas a process not in a job must ask to
> kill pid 3728, and a process in job 2 can't touch tasks in job 3. Is
> there another way this could work?
I did try this once by having a 'supervisor' process ptrace every
resumed process and translate PIDs inside system calls, but this got
very messy very fast - particularly for terminal ioctls.
Additionally, it means parents can't get notification of when their
children die, and it makes the whole show just that much slower.
Getting them back their original PIDs seems like less effort (so
long as they're available). I probably shouldn't admit to what
I'm currently doing - editing last_pid through /dev/kmem to force
the next pid fork() returns. (Unbelievably racy, but it works as a
temporary measure).
Bernard.
>>>>> "Bernard" == Bernard Blackham <[email protected]> writes:
Bernard> Apologies for the delay.
Bernard>
Bernard> On Sun, Oct 09, 2005 at 08:13:04PM -0500, [email protected] wrote:
>> Quoting Bernard Blackham ([email protected]):
>> > Some way to request a given PID when cloning/forking (or on the
>> > fly even) would make life easier.
>>
>> Have you considered any ways of implementing this? Perhaps the
>> simplest way would actually be to allow a process set to be started
>> in some kind of job/jail/container/vserver, where any userspace
>> query of or by pid uses the virtual pid - which might collide with
>> a virtual pid in some other container - but of course the kernel
>> continues to track by real pids. So pid 3728 may be vpid 2287 in
>> job 3. A process inside job 3 just asks to kill -9 2287, whereas a
>> process not in a job must ask to kill pid 3728, and a process in
>> job 2 can't touch tasks in job 3. Is there another way this could
>> work?
Bernard> I did try this once by having a 'supervisor' process ptrace
Bernard> every resumed process and translate PIDs inside system calls,
Bernard> but this got very messy very fast - particularly for terminal
Bernard> ioctls. Additionally, it means parents can't get
Bernard> notification of when their children die, and it makes the
Bernard> whole show just that much slower.
Indeed.
For HibernatorII (the checkpoint/restart system developed for UXP/M
and Irix) we introduced a new, privileged system call, pid_clone(),
that took the same args as clone() plus an extra PID argument. If the
process ID was available, it'd use it; otherwise it would fail.
--
Dr Peter Chubb http://www.gelato.unsw.edu.au peterc AT gelato.unsw.edu.au
The technical we do immediately, the political takes *forever*
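
From that description, the userspace side of pid_clone() presumably looked something along these lines - a guess at the shape of the interface, not the actual UXP/M or Irix prototype:

#include <sched.h>
#include <sys/types.h>

/* Hypothetical prototype: same contract as clone(), plus the pid the
 * caller wants for the child; fails if that pid is already in use.
 * Privileged, since handing out arbitrary pids is obviously abusable. */
pid_t pid_clone(int (*fn)(void *), void *child_stack,
                int flags, void *arg, pid_t wanted_pid);

/* At restart time a checkpointer could then recreate a saved child
 * under its old pid, e.g.:
 *
 *     pid_t p = pid_clone(restored_main, stack_top, SIGCHLD, ctx, 2287);
 */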