2009-10-30 02:19:08

by Ryan C. Gordon

[permalink] [raw]
Subject: FatELF patches...


Having heard a bunch of commentary, and made a bunch of changes based on
some really good feedback, here are my hopefully-final FatELF patches. I'm
pretty happy with the final results. The only changes over the last
posting is that I cleaned up all the checkpatch.pl complaints (whitespace
etc).

What's the best way to get this moving towards the mainline? It's not
clear to me who the binfmt_elf maintainer would be. Is this something that
should go to Andrew Morton for the -mm tree?

--ryan.


2009-10-30 05:42:38

by Rayson Ho

[permalink] [raw]
Subject: Re: FatELF patches...

On Thu, Oct 29, 2009 at 9:19 PM, Ryan C. Gordon <[email protected]> wrote:
> What's the best way to get this moving towards the mainline? It's not
> clear to me who the binfmt_elf maintainer would be. Is this something that
> should go to Andrew Morton for the -mm tree?

Can we first find out whether it is safe from a legal point of view??
After the SCO v. IBM lawsuit, we should be way more careful.

Like it or not, Apple invented universal binaries in 1993, and so far
we are not able to find any prior arts... If we integrate something
that infringes Apple's patent, then Apple can ban all the Linux
distributions and devices from shipping.

Rayson



>
> --ryan.
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to [email protected]
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/
>

2009-10-30 14:54:31

by Ryan C. Gordon

[permalink] [raw]
Subject: Re: FatELF patches...


> Can we first find out whether it is safe from a legal point of view??
> After the SCO v. IBM lawsuit, we should be way more careful.

Does anyone have a spare patent lawyer? I'm not against changing my patch
to work around a patent, but not knowing _how_ to change it, or if it
needs changing at all? That's maddening.

--ryan.

2009-11-01 19:20:05

by David Hagood

[permalink] [raw]
Subject: Re: FatELF patches...

On Thu, 2009-10-29 at 22:19 -0400, Ryan C. Gordon wrote:
> Having heard a bunch of commentary, and made a bunch of changes based on
> some really good feedback, here are my hopefully-final FatELF patches.

I hope it's not too late for a request for consideration: if we start
having fat binaries, could one of the "binaries" be one of the "not
quite compiled code" formats like Architecture Neutral Distribution
Format (ANDF), such that, given a fat binary which does NOT support a
given CPU, you could at least in theory process the ANDF section to
create the needed target binary? Bonus points for being able to then
append the newly created section to the file.

That way you could have a binary that supported some "common" subset of
CPUs (e.g. x86,x86-64,PPC,ARM) but still run on the "not common"
processors (Alpha, MIPS, Sparc) - it would just take a bit more time to
start.

As an embedded systems guy who is looking to have to support multiple
CPU types, this is really very interesting to me.

2009-11-01 20:28:51

by Måns Rullgård

[permalink] [raw]
Subject: Re: FatELF patches...

David Hagood <[email protected]> writes:

> On Thu, 2009-10-29 at 22:19 -0400, Ryan C. Gordon wrote:
>> Having heard a bunch of commentary, and made a bunch of changes based on
>> some really good feedback, here are my hopefully-final FatELF patches.
>
> I hope it's not too late for a request for consideration: if we start
> having fat binaries, could one of the "binaries" be one of the "not
> quite compiled code" formats like Architecture Neutral Distribution
> Format (ANDF), such that, given a fat binary which does NOT support a
> given CPU, you could at least in theory process the ANDF section to
> create the needed target binary? Bonus points for being able to then
> append the newly created section to the file.

Am I the only one who sees this as nothing bloat for its own sake?
Did I miss a massive drop in intelligence of Linux users, causing them
to no longer be capable of picking the correct file themselves?

> As an embedded systems guy who is looking to have to support multiple
> CPU types, this is really very interesting to me.

As an embedded systems guy, I'm more concerned about precious flash
space going to waste than about some hypothetical convenience.

--
M?ns Rullg?rd
[email protected]

2009-11-01 20:40:24

by Ryan C. Gordon

[permalink] [raw]
Subject: Re: FatELF patches...


> Format (ANDF), such that, given a fat binary which does NOT support a
> given CPU, you could at least in theory process the ANDF section to
> create the needed target binary? Bonus points for being able to then
> append the newly created section to the file.

It's not a goal of mine, but I suppose you could have an ELF OSABI for it.

I don't think it changes the FatELF kernel patch at all. I don't know much
about ANDF, but you'd probably just want to set the ELF "interpreter" to
something other than ld.so and do this all in userspace, and maybe add a
change to elf_check_arch() to approve ANDF binaries...or something.

To me, ANDF is interesting in an academic sense, but not enough to spend
effort on it. YMMV. :)

--ryan.

2009-11-01 20:59:55

by Ryan C. Gordon

[permalink] [raw]
Subject: Re: FatELF patches...


> Am I the only one who sees this as nothing bloat for its own sake?

I posted a fairly large list of benefits here: http://icculus.org/fatelf/

Some are more far-fetched than others, I will grant. Also, I suspect most
people will find one benefit and ten things they don't care about, but
that benefit is different for different people. I'm confident that the
benefits far outweigh the size of the kernel patch.

> Did I miss a massive drop in intelligence of Linux users, causing them
> to no longer be capable of picking the correct file themselves?

Also known as "market saturation." :)

(But really, there are benefits beyond helping dumb people, even if
helping dumb people wasn't a worthwhile goal in itself.)

> As an embedded systems guy, I'm more concerned about precious flash
> space going to waste than about some hypothetical convenience.

I wouldn't imagine this is the target audience for FatELF. For embedded
devices, just use the same ELF files you've always used.

--ryan.

2009-11-01 21:25:46

by Måns Rullgård

[permalink] [raw]
Subject: Re: FatELF patches...

"Ryan C. Gordon" <[email protected]> writes:

>> Am I the only one who sees this as nothing bloat for its own sake?
>
> I posted a fairly large list of benefits here: http://icculus.org/fatelf/

I've read the list, and I can't find anything I agree with. Honestly.

> Some are more far-fetched than others, I will grant. Also, I suspect most
> people will find one benefit and ten things they don't care about, but
> that benefit is different for different people. I'm confident that the
> benefits far outweigh the size of the kernel patch.

It's not the size of the kernel patch I'm worried about. What worries
me is the disk space needed when *all* my executables and libraries
are suddenly 3, 4, or 5 times the size they need to be.

There is also the issue of speed to launch these things. It *has* to
be slower than executing a native file directly.

>> Did I miss a massive drop in intelligence of Linux users, causing them
>> to no longer be capable of picking the correct file themselves?
>
> Also known as "market saturation." :)
>
> (But really, there are benefits beyond helping dumb people, even if
> helping dumb people wasn't a worthwhile goal in itself.)

It's far too easy to use computers already. That's the reason for the
spam problem.

Besides, clueless users would be installing a distro, which could
easily download the correct packages automatically. In fact, that is
what they already do. The bootable installation media would still
need to be distributed separately, since the boot formats differ
vastly between architectures. It is not possible to create a CD/DVD
that is bootable on multiple system types (with a few exceptions).

>> As an embedded systems guy, I'm more concerned about precious flash
>> space going to waste than about some hypothetical convenience.
>
> I wouldn't imagine this is the target audience for FatELF. For embedded
> devices, just use the same ELF files you've always used.

Of course I will. The question is, will everybody else? I'm seeing
enough bloat in the embedded world as it is without handing out new
ways to make it even easier.

--
M?ns Rullg?rd
[email protected]

2009-11-01 21:35:11

by Ryan C. Gordon

[permalink] [raw]
Subject: Re: FatELF patches...


> It's not the size of the kernel patch I'm worried about. What worries
> me is the disk space needed when *all* my executables and libraries
> are suddenly 3, 4, or 5 times the size they need to be.

Then don't make FatELF files with 5 binaries in it. Or don't make FatELF
files at all.

I glued two full Ubuntu installs together as a proof of concept, but I
think if Ubuntu did this as a distribution-wide policy, then people would
probably choose a different distribution.

Then again, I hope Ubuntu uses FatELF on a handful of binaries, and
removes the /lib64 and /lib32 directories.

> There is also the issue of speed to launch these things. It *has* to
> be slower than executing a native file directly.

In that there will be one extra read of 128 bytes, yes, but I'm not sure
that's a measurable performance hit. For regular ELF files, the overhead
is approximately one extra branch instruction. Considering that most files
won't be FatELF, that seems like an acceptable cost.

> It's far too easy to use computers already. That's the reason for the
> spam problem.

Clearly that's going to remain as a philosophical difference between us,
so I won't waste your time trying to dissuade you.

--ryan.

2009-11-01 22:08:22

by Rayson Ho

[permalink] [raw]
Subject: Re: FatELF patches...

2009/11/1 M?ns Rullg?rd <[email protected]>:
> I've read the list, and I can't find anything I agree with. Honestly.

+1.

Adding code that might bring lawsuits to Linux developers,
distributors, users is a BIG disadvantage.

And beside the legal issues, this first point is already not right:

"Given enough disc space, there's no reason you couldn't have one DVD
.iso that installs an x86-64, x86, PowerPC, SPARC, and MIPS system"

The boot loader is different on different systems, and in fact
different with different firmware. A single DVD that can boot on
different hardware platforms might not be an easy thing to do.

Also, why not build the logic for picking which binary to install into
the installer?? This way, the users don't need to have half of the
disk space wasted due to this FatELF thing.

IMO, the biggest problem users get is not with which hardware binary
to download, but the incompatibly of different Linux kernels and glibc
(the API/ABI).

Rayson

2009-11-02 00:00:14

by Alan

[permalink] [raw]
Subject: Re: FatELF patches...

Lets go down the list of "benefits"

- Separate downloads
- Doesn't work. The network usage would increase dramatically
pulling all sorts of unneeded crap.
- Already solved by having a packaging system (in fact FatELF is
basically obsoleted by packaging tools)

- Separate lib, lib32, lib64
- So you have one file with 3 files in it rather than three files
with one file in them. Directories were invented for a reason
- Makes updates bigger
- Stops users only having 32bit libs for some packages

- Third party packagers no longer have to publish multiple rpm/deb etc
- By vastly increasing download size
- By making updates vastly bigger
- Assumes data files are not dependant on binary (often not true)
- And is irrelevant really because 90% or more of the cost is
testing

- You no longer need to use shell scripts and flakey logic to pick the
right binary ...
- Since the 1990s we've used package managers to do that instead.
I just type "yum install bzflag", the rest is done for me.

- The ELF OSABI for your system changes someday?
- We already handle that

- Ship a single shared library that provides bindings for a scripting
language and not have to worry about whether the scripting language
itself is built for the same architecture as your bindings.
- Except if they don't overlap it won't run

- Ship web browser plugins that work out of the box with multiple
platforms.
- yum install just works, and there is a search path in firefox
etc

- Ship kernel drivers for multiple processors in one file.
- Not useful see separate downloads

- Transition to a new architecture in incremental steps.
- IFF the CPU supports both old and new
- and we can already do that

- Support 64-bit and 32-bit compatibility binaries in one file.
- Not useful as we've already seen

- No more ia32 compatibility libraries! Even if your distro
doesn't make a complete set of FatELF binaries available, they can
still provide it for the handful of packages you need for 99% of 32-bit
apps you want to run on a 64-bit system.

- Argument against FatELF - why waste the disk space if its rare ?

- Have a CPU that can handle different byte orders? Ship one binary that
satisfies all configurations!

- Variant of the distribution "advantage" - same problem - its
better to have two files, its all about testing anyway

- Ship one file that works across Linux and FreeBSD (without a platform
compatibility layer on either of them).

- Ditto

- One hard drive partition can be booted on different machines with
different CPU architectures, for development and experimentation. Same
root file system, different kernel and CPU architecture.

- Now we are getting desperate.

- Prepare your app on a USB stick for sneakernet, know it'll work on
whatever Linux box you are likely to plug it into.

- No I don't because of the dependancies, architecture ordering
of data files, lack of testing on each platform and the fact
architecture isn't sufficient to define a platform

- Prepare your app on a network share, know it will work with all
the workstations on your LAN.

- Variant of the distribution idea, again better to have multiple
files for updating and management, need to deal with
dependancies etc. Waste of storage space.
- We have search paths, multiple mount points etc.

So why exactly do we want FatELF. It was obsoleted in the early 1990s
when architecture handling was introduced into package managers.

2009-11-02 01:18:46

by Ryan C. Gordon

[permalink] [raw]
Subject: Re: FatELF patches...


> Adding code that might bring lawsuits to Linux developers,
> distributors, users is a BIG disadvantage.

I'm tracking down a lawyer to discuss the issue. I'm surprised there
aren't a few hanging around here, honestly. I sent a request in to the
SFLC, and if that doesn't pan out, I'll dig for coins in my car seat to
pay a lawyer for a few hours of her time.

If it's a big deal, we'll figure out what to do from there. But let's not
talk about the sky falling until we get to that point, please.

> "Given enough disc space, there's no reason you couldn't have one DVD
> .iso that installs an x86-64, x86, PowerPC, SPARC, and MIPS system"

I've had about a million people point out the boot loader thing. There's
an x86/amd64 forest if you can see past the MIPS trees.

Still, I said there were different points that were more compelling for
different individuals. I don't think this is the most compelling argument
on that page, and I think there's a value in talking about theoretical
benefits in addition to practical ones. Theoretical ones become practical
the moment someone decides to roll out a company-internal distribution
that works on all the workstations inside IBM or Google or whatever...even
if Fedora would turn their nose up at the idea for a general-purpose
release.

> IMO, the biggest problem users get is not with which hardware binary
> to download, but the incompatibly of different Linux kernels and glibc
> (the API/ABI).

These are concerns, too, but the kernel has been, in my experience, very
good at binary compatibility with user space back as far as I can
remember. glibc has had some painful progress, but since NPTL stabilized a
long time ago, even this hasn't been bad at all.

Certainly one has to be careful--I would even use the word diligent--to
maintain binary compatibility, but this was much more of a hurting for
application developers a decade ago.

At least, that's been my experience.

--ryan.

2009-11-02 02:22:24

by Ryan C. Gordon

[permalink] [raw]
Subject: Re: FatELF patches...


> So why exactly do we want FatELF. It was obsoleted in the early 1990s
> when architecture handling was introduced into package managers.

I'm not minimizing your other points by trimming down to one quote. Some
of it I already covered, but mostly I suspect I'm talking way too much, so
I'll spare everyone a little. I'm happy to address your other points if
you like, though, even the one where you said I was being desperate. :)

Most of your points are "package managers solve this problem" but they
simply do not solve all of them.

Package managers are a _fantastic_ invention. They are a killer feature
over other operating systems, including ones people pay way too much money
to use. That being said, there are lots of places where using a package
manager doesn't make sense: experimental software that might have an
audience but isn't ready for wide adoption, software that isn't
appropriate for an apt/yum repository, software that distros refuse to
package but is still perfectly useful, closed-source software, and
software that wants to work between distros that don't have
otherwise-compatible rpm/debs (or perhaps no package manager at all).

I'm certain I'm about to get a flood of replies that say "you can make a
cross-distro-compatible RPM if you just follow these steps" but that
completely misses the point. Not all software comes from yum, or even from
an .rpm, even if most of it _should_. This isn't about replacing or
competing with apt-get or yum.

I'm certain if we made a Venn diagram, there would be an overlap. But
FatELF solves different problems than package managers, and in the case of
ia32 compatibility packages, it helps the package manager solve its
problems better.

--ryan.

2009-11-02 03:27:39

by Rayson Ho

[permalink] [raw]
Subject: Re: FatELF patches...

On Sun, Nov 1, 2009 at 8:17 PM, Ryan C. Gordon <[email protected]> wrote:
> I'm tracking down a lawyer to discuss the issue. I'm surprised there
> aren't a few hanging around here, honestly. I sent a request in to the
> SFLC, and if that doesn't pan out, I'll dig for coins in my car seat to
> pay a lawyer for a few hours of her time.

Good!! And thanks :)

And is the lawyer specialized in patent laws??


> I've had about a million people point out the boot loader thing. There's
> an x86/amd64 forest if you can see past the MIPS trees.

If it's x86 vs. AMD64, then the installer can already do most of the
work, and it can ask the user to insert the right 2nd/3rd/etc CD/DVD.


> Theoretical ones become practical
> the moment someone decides to roll out a company-internal distribution
> that works on all the workstations inside IBM or Google or whatever...even
> if Fedora would turn their nose up at the idea for a general-purpose
> release.

Don't you think that taking a CD/DVD to each workstation and start the
installation or upgrade is so old school??

Software updates inside those companies are done via the internet
network, and it does not matter whether the DVD can handle all the
architectures or not.

And the idea of a general-purpose release might not work. As 90% of
the users are using a single architecture (I count AMD64 as x86 with
"some" extensions...), we won't get the benefits so having the extra
code in the kernel and the userspace. Most of the shipped commercial
binaries will be x86 anyways -- and as Alan stated, the packaging
system is already doing most of the work for us already (I don't
recall providing anything except the package name when I do apt-get).

For embedded systems, they wanted to take away all the fat more than
shipping a single app.


> These are concerns, too, but the kernel has been, in my experience, very
> good at binary compatibility with user space back as far as I can
> remember. glibc has had some painful progress, but since NPTL stabilized a
> long time ago, even this hasn't been bad at all.
>
> Certainly one has to be careful--I would even use the word diligent--to
> maintain binary compatibility, but this was much more of a hurting for
> application developers a decade ago.

The kernel part refers to kernel modules.

But yes, binary compatibility was a real pain when I "really" (played
with it in 1995, didn't really like it at that time) started using
Linux in 1997. However, I think the installer/package manager took out
most of the burden.

Rayson



>
> At least, that's been my experience.
>
> --ryan.
>
>
>

2009-11-02 04:58:47

by Valdis Klētnieks

[permalink] [raw]
Subject: Re: FatELF patches...

On Sun, 01 Nov 2009 16:35:05 EST, "Ryan C. Gordon" said:

> I glued two full Ubuntu installs together as a proof of concept, but I
> think if Ubuntu did this as a distribution-wide policy, then people would
> probably choose a different distribution.

Hmm.. so let's see - people compiling stuff for themselves won't use this
feature. And if a distro uses it, users would probably go to a different
distro.

That's a bad sign right there...

> Then again, I hope Ubuntu uses FatELF on a handful of binaries, and
> removes the /lib64 and /lib32 directories.

Actually, they can't nuke the /lib{32,64} directories unless *all* binaries
are using FatELF - as long as there's any binaries doing things The Old Way,
you need to keep the supporting binaries around.

> > There is also the issue of speed to launch these things. It *has* to
> > be slower than executing a native file directly.

> In that there will be one extra read of 128 bytes, yes, but I'm not sure
> that's a measurable performance hit. For regular ELF files, the overhead
> is approximately one extra branch instruction. Considering that most files
> won't be FatELF, that seems like an acceptable cost.

Don't forget you take that hit once for each shared library involved. Plus
I'm not sure if there's hidden gotchas lurking in there (is there code that
assumes that if executable code is mmap'ed, it's only done so in one arch?
Or will a FatELF glibc.so screw up somebody's refcounts if it's mapped
in both 32 and 64 bit modes?


Attachments:
(No filename) (227.00 B)

2009-11-02 06:28:13

by Julien BLACHE

[permalink] [raw]
Subject: Re: FatELF patches...

"Ryan C. Gordon" <[email protected]> wrote:

Hi,

With my Debian Developer hat on...

> Package managers are a _fantastic_ invention. They are a killer
> feature over other operating systems, including ones people pay way
> too much money to use. That being said, there are lots of places where
> using a package manager doesn't make sense:

> experimental software that might have an audience but isn't ready for
> wide adoption

That usually ships as sources or prebuilt binaries in a tarball - target
/opt and voila! For a bigger audience you'll see a lot of experimental
stuff that gets packaged (even in quick'n'dirty mode).

> software that isn't appropriate for an apt/yum repository

Just create a repository for the damn thing if you want to distribute it
that way. There's no "appropriate / not appropriate" that applies here.

> software that distros refuse to package but is still perfectly useful

Look at what happens today. A lot of that gets packaged by third
parties, and more often than not they involve distribution
maintainers. (See debian-multimedia, PLF for Mandriva, ...)

> closed-source software

Why do we even care? Besides, commercial companies can just stop sitting
on their hands and start distributing real packages. It's no different
from rolling out a Windows Installer or Innosetup. It's packaging.

> and software that wants to work between distros that don't have
> otherwise-compatible rpm/debs (or perhaps no package manager at all).

Tarball, /opt, static build.


And, about the /lib, /lib32, /lib64 situation Debian and Debian-derived
systems, the solution to that is multiarch and it's being worked
on. It's a lot better and cleaner than the fat binary kludge.

JB.

--
Julien BLACHE <http://www.jblache.org>
<[email protected]> GPG KeyID 0xF5D65169

2009-11-02 06:27:32

by David Miller

[permalink] [raw]
Subject: Re: FatELF patches...

From: "Ryan C. Gordon" <[email protected]>
Date: Sun, 1 Nov 2009 21:21:47 -0500 (EST)

> That being said, there are lots of places where using a package
> manager doesn't make sense:

Yeah like maybe, just maybe, in an embedded system where increasing
space costs like FatELF does makes even less sense.

I think Alan's arguments against FatELF were the most comprehensive
and detailed, and I haven't seem them refuted very well, if at all.

2009-11-02 09:14:51

by Alan

[permalink] [raw]
Subject: Re: FatELF patches...

> I'm certain if we made a Venn diagram, there would be an overlap. But
> FatELF solves different problems than package managers, and in the case of
> ia32 compatibility packages, it helps the package manager solve its
> problems better.

Not really - as I said it drives disk usage up, it drives network
bandwidth up (which is a big issue for a distro vendor) and the package
manager and file system exist to avoid this kind of mess being needed.

You can ask the same question as FatELF the other way around and it
becomes even more obvious that it's a bad idea.

Imagine you did it by name not by architecture. So you had a single
"FatDirectory" file for /bin, /sbin and /usr/bin. It means you don't have
to worry about people having different sets of binaries, it means they
are always compatible. And like FatELF it's not a very good idea.

Welcome to the invention of the directory.

Alan

2009-11-02 15:14:13

by Ryan C. Gordon

[permalink] [raw]
Subject: Re: FatELF patches...


> > think if Ubuntu did this as a distribution-wide policy, then people would
> > probably choose a different distribution.
>
> Hmm.. so let's see - people compiling stuff for themselves won't use this
> feature. And if a distro uses it, users would probably go to a different
> distro.

I probably wasn't clear when I said "distribution-wide policy" followed by
a "then again." I meant there would be backlash if the distribution glued
the whole system together, instead of just binaries that made sense to do
it to.

And, again, there's a third use-case besides compiling your programs and
getting them from the package manager, and FatELF is meant to address
that.

> Actually, they can't nuke the /lib{32,64} directories unless *all* binaries
> are using FatELF - as long as there's any binaries doing things The Old Way,
> you need to keep the supporting binaries around.

Binaries don't refer directly to /libXX, they count on ld.so to tapdance
on their behalf. My virtual machine example left the dirs there as
symlinks to /lib, but they could probably just go away directly.

> Don't forget you take that hit once for each shared library involved. Plus

That happens in user space in ld.so, so it's not a kernel problem in any
case, but still...we're talking about, what? Twenty more branch
instructions per-process?

> I'm not sure if there's hidden gotchas lurking in there (is there code that
> assumes that if executable code is mmap'ed, it's only done so in one arch?

The current code sets up file mappings based on the offset of the desired
ELF binary.

> Or will a FatELF glibc.so screw up somebody's refcounts if it's mapped
> in both 32 and 64 bit modes?

Whose refcounts would this screw up? If there's a possible bug, I'd like
to make sure it gets resolved, of course.

--ryan.

2009-11-02 15:32:34

by Ryan C. Gordon

[permalink] [raw]
Subject: Re: FatELF patches...


> > That being said, there are lots of places where using a package
> > manager doesn't make sense:
>
> Yeah like maybe, just maybe, in an embedded system where increasing
> space costs like FatELF does makes even less sense.

I listed several examples. Embedded systems wasn't one of them.

> I think Alan's arguments against FatELF were the most comprehensive
> and detailed, and I haven't seem them refuted very well, if at all.

I said I was trying to avoid talking everyone to death. :)

I'll respond to them, then.

--ryan.

2009-11-02 15:39:11

by Diego Calleja

[permalink] [raw]
Subject: Re: FatELF patches...

On Lunes 02 Noviembre 2009 03:21:47 Ryan C. Gordon escribi?:
> FatELF solves different problems than package managers, and in the case of
> ia32 compatibility packages, it helps the package manager solve its
> problems better.

Package managers can be modified to allow embeddeding a package inside of
another package. That could allow shipping support for multiple architectures
in a single package, and it could even do things that fatelf can't, like
in the case of experimental packages that need other experimental
dependencies: all of them could be packed in a single package, even with
support for multiple architectures. Heck, it could even be a new kind of
container that would allow packing .rpm and .debs for multiple distros
together. And it wouldnt touch a single line of kernel code.

So I don't think that fatelf is solving the problems of package managers,
it's quite the opposite.

2009-11-02 16:20:51

by Chris Adams

[permalink] [raw]
Subject: Re: FatELF patches...

Once upon a time, Ryan C. Gordon <[email protected]> said:
>I wouldn't imagine this is the target audience for FatELF. For embedded
>devices, just use the same ELF files you've always used.

What _is_ the target audience?

As I see it, there are three main groups of Linux consumers:

- embedded: No interest in this; adds significant bloat, generally
embedded systems don't allow random binaries anyway

- enterprise distributions (e.g. Red Hat, SuSE): They have specific
supported architectures, with partner programs to support those archs.
If something is supported, they can support all archs with
arch-specific binaries.

- community distributions (e.g. Ubuntu, Fedora, Debian): This would
greatly increase build infrastructure complexity, mirror disk space,
and download bandwidth, and (from a user perspective) slow down update
downloads significantly.

If you don't have buy-in from at least a large majority of one of these
segments, this is a big waste. If none of the above support it, it will
not be used by any binary-only software distributors.

Is any major distribution (enterprise or community) going to use this?
If not, kill it now.

--
Chris Adams <[email protected]>
Systems and Network Administrator - HiWAAY Internet Services
I don't speak for anybody but myself - that's enough trouble.

2009-11-02 17:40:14

by David Lang

[permalink] [raw]
Subject: Re: FatELF patches...

On Mon, 2 Nov 2009, Alan Cox wrote:

>> I'm certain if we made a Venn diagram, there would be an overlap. But
>> FatELF solves different problems than package managers, and in the case of
>> ia32 compatibility packages, it helps the package manager solve its
>> problems better.
>
> Not really - as I said it drives disk usage up, it drives network
> bandwidth up (which is a big issue for a distro vendor) and the package
> manager and file system exist to avoid this kind of mess being needed.

I think this depends on the particular package.

how much of the package is binary executables (which get multiplied) vs
how much is data or scripts (which do not)

fo any individual user it will alsays be a larger download, but if you
have to support more than one architecture (even 32 bit vs 64 bit x86)
it may be smaller to have one fat package than to have two 'normal'
packages.

yes, the package manager could handle this by splitting the package up
into more pieces, with some of the pieces being arch independant, but that
also adds complexity.

David Lang

> You can ask the same question as FatELF the other way around and it
> becomes even more obvious that it's a bad idea.
>
> Imagine you did it by name not by architecture. So you had a single
> "FatDirectory" file for /bin, /sbin and /usr/bin. It means you don't have
> to worry about people having different sets of binaries, it means they
> are always compatible. And like FatELF it's not a very good idea.
>
> Welcome to the invention of the directory.
>
> Alan
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to [email protected]
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/
>

2009-11-02 17:43:02

by Alan

[permalink] [raw]
Subject: Re: FatELF patches...

> how much of the package is binary executables (which get multiplied) vs
> how much is data or scripts (which do not)

IFF the data is not platform dependant formats.

> fo any individual user it will alsays be a larger download, but if you
> have to support more than one architecture (even 32 bit vs 64 bit x86)
> it may be smaller to have one fat package than to have two 'normal'
> packages.

Nope. The data files for non arch specific material get packaged
accordingly. Have done for years.

>
> yes, the package manager could handle this by splitting the package up
> into more pieces, with some of the pieces being arch independant, but that
> also adds complexity.

Which was implemented years ago and turns out to be vital because only
some data is not arch specific.

2009-11-02 17:52:01

by Ryan C. Gordon

[permalink] [raw]
Subject: Re: FatELF patches...


(As requested by davem.)

On Mon, 2 Nov 2009, Alan Cox wrote:
> Lets go down the list of "benefits"
>
> - Separate downloads
> - Doesn't work. The network usage would increase dramatically
> pulling all sorts of unneeded crap.

Sure, this doesn't work for everyone, but this list isn't meant to be a
massive pile of silver bullets. Some of the items are "that's a cool
trick" and some are "that would help solve an annoyance." I can see a
use-case for the one-iso-multiple-arch example, but it's not going to be
Ubuntu.

> - Already solved by having a packaging system (in fact FatELF is
> basically obsoleted by packaging tools)

I think I've probably talked this to death, and will again when I reply to
Julien, but: packaging tools are a different thing entirely. They solve
some of the same issues, they cause other issues. The fact that Debian is
now talking about "multiarch" shows that they've experienced some of these
problems, too, despite having a world-class package manager.

> - Separate lib, lib32, lib64
> - So you have one file with 3 files in it rather than three files
> with one file in them. Directories were invented for a reason

We covered this when talking about shell scripts.

> - Makes updates bigger

I'm sure, but I'm not sure the increase is a staggering amount. We're not
talking about making all packages into FatELF binaries.

> - Stops users only having 32bit libs for some packages

Is that a serious concern?

> - Third party packagers no longer have to publish multiple rpm/deb etc
> - By vastly increasing download size
> - By making updates vastly bigger

It's true that /bin/ls would double in size (although I'm sure at least
the download saves some of this in compression). But how much of, say,
Gnome or OpenOffice or Doom 3 is executable code? These things would be
nowhere near "vastly" bigger.

> - Assumes data files are not dependant on binary (often not true)

Turns out that /usr/sbin/hald's cache file was. That would need to be
fixed, which is trivial, but in my virtual machine test I had it delete
and regenerate the file on each boot as a fast workaround.

The rest of the Ubuntu install boots and runs. This is millions of lines
of code that does not depend on the byte order, alignment, and word size
for its data files.

I don't claim to be an expert on the inner workings of every package you
would find on a Linux system, but like you, I expected there would be a
lot of things to fix. It turns out that "often not true" just turned out
to actually _not_ be true at all.

> - And is irrelevant really because 90% or more of the cost is
> testing

Testing doesn't really change with what I'm describing. If you want to
ship a program for PowerPC and x86, you still need to test it on PowerPC
and x86, no matter how you distribute or launch it.

> - You no longer need to use shell scripts and flakey logic to pick the
> right binary ...
> - Since the 1990s we've used package managers to do that instead.
> I just type "yum install bzflag", the rest is done for me.

Yes, that is true for software shipped via yum, which does not encompass
all the software you may want to run on your system. I'm not arguing
against package management.

> - The ELF OSABI for your system changes someday?
> - We already handle that

Do we? I grepped for OSABI in the 2.6.31 sources, and can't find anywhere,
outside of my FatELF patches, where we check an ELF file's OSABI or OSABI
version at all.

The kernel blindly loads ELF binaries without checking the ABI, and glibc
checks the ABI for shared libraries--and flatly rejects files that don't
match what it expects.

Where do we handle an ABI change gracefully? Am I misunderstanding the
code?

> - Ship a single shared library that provides bindings for a scripting
> language and not have to worry about whether the scripting language
> itself is built for the same architecture as your bindings.
> - Except if they don't overlap it won't run

True. If I try to run a PowerPC binary on a Sparc, it fails in any
circumstance. I recognize the goal of this post was to shoot down every
single point, but you can't see a scenario where this adds a benefit? Even
in a world that's still running 32-bit web browsers on _every major
operating system_ because some crucial plugins aren't 64-bit yet?

> - Ship web browser plugins that work out of the box with multiple
> platforms.
> - yum install just works, and there is a search path in firefox
> etc

So it's better to have a thousand little unique solutions to the same
problem? Everything has a search path (except things that don't), and all
of those search paths are set up in the same way (except things that
aren't). Do we really need to have every single program screwing around
with their own personal spiritual successor to the CLASSPATH environment
variable?

> - Ship kernel drivers for multiple processors in one file.
> - Not useful see separate downloads

Pain in the butt see "which installer is right for me?" :)

I don't want to get into a holy war about out-of-tree kernel drivers,
because I'm totally on board with getting drivers into the mainline. But
it doesn't change the fact that I downloaded the wrong nvidia drivers the
other day because I accidentally grabbed the ia32 package instead of the
amd64 one. So much for saving bandwidth.

I wasn't paying attention. But lots of people wouldn't know which to pick
even if they were. Nvidia, etc, could certainly put everything in one
shell script and choose for you, but now we're back at square one again.

This discussion applies to applications, not just kernel modules.
The applications are more important here, in my opinion.

> - Transition to a new architecture in incremental steps.
> - IFF the CPU supports both old and new

A lateral move would be painful (although Apple just did this very thing
with a FatELF-style solution, albeit with the help of an emulator), but if
we're talking about the most common case at the moment, x86 to amd64, it's
not a serious concern.

> - and we can already do that

Not really. compat_binfmt_elf will run legacy binaries on new systems, but
not vice versa. The goal is having something that will let it work on both
without having to go through a package manager infrastructure.

> - Support 64-bit and 32-bit compatibility binaries in one file.
> - Not useful as we've already seen

Where did we see that? There are certainly tradeoffs, pros and cons, but
this is very dismissive despite several counter-examples.

> - No more ia32 compatibility libraries! Even if your distro
> doesn't make a complete set of FatELF binaries available, they can
> still provide it for the handful of packages you need for 99% of 32-bit
> apps you want to run on a 64-bit system.
>
> - Argument against FatELF - why waste the disk space if its rare ?

This is _not_ an argument against FatELF.

Why install Gimp by default if I'm not an artist? Because disk space is
cheap in the configurations I'm talking about and it's better to have it
just in case, for the 1% of users that will want it. A desktop, laptop or
server can swallow a few megabytes to clean up some awkward design
decisions, like the /lib64 thing.

A few more megabytes installed may cut down on the support load for
distributions when some old 32 bit program refuses to start at all.

In a world where terrabyte hard drives are cheap consumer-level
commodities, the tradeoff seems like a complete no-brainer to me.

> - Have a CPU that can handle different byte orders? Ship one binary that
> satisfies all configurations!
>
> - Variant of the distribution "advantage" - same problem - its
> better to have two files, its all about testing anyway
>
> - Ship one file that works across Linux and FreeBSD (without a platform
> compatibility layer on either of them).
>
> - Ditto

And ditto from me, too: testing is still testing, no matter how you
package and ship it. It's just simply not related to FatELF. This problem
exists in shipping binaries via apt and yum, too.

> - One hard drive partition can be booted on different machines with
> different CPU architectures, for development and experimentation. Same
> root file system, different kernel and CPU architecture.
>
> - Now we are getting desperate.

It's not like this is unheard of. Apple is selling this very thing for 129
bucks a copy.

> - Prepare your app on a USB stick for sneakernet, know it'll work on
> whatever Linux box you are likely to plug it into.
>
> - No I don't because of the dependancies, architecture ordering
> of data files, lack of testing on each platform and the fact
> architecture isn't sufficient to define a platform

Yes, it's not a silver bullet. Fedora will not be promising binaries that
run on every Unix box on the planet.

But the guy with the USB stick? He probably knows the details of every
machine he wants to plug it into...

> - Prepare your app on a network share, know it will work with all
> the workstations on your LAN.

...and so does the LAN's administrator.

It's possible to ship binaries that don't depend on a specific
distribution, or preinstalled dependencies, beyond the existance of a
glibc that was built in the last five years or so. I do it every day. It's
not unreasonable, if you aren't part of the package management network, to
make something that will run, generically on "Linux."

> - We have search paths, multiple mount points etc.

I'm proposing a unified, clean, elegant way to solve the problem.

> So why exactly do we want FatELF. It was obsoleted in the early 1990s
> when architecture handling was introduced into package managers.

I can't speak for anyone but myself, but I can see lots of places where it
would personally help me as a developer that isn't always inside the
packaging system.

There are programs I support that I just simply won't bother moving to
amd64 because it just complicates things for the end user, for example.

Goofy one-off example: a game that I ported named Lugaru (
http://www.wolfire.com/lugaru ) is being updated for Intel Mac OS X. The
build on my hard drive will run natively as a PowerPC, x86, and amd64
process, and Mac OS X just does the right thing on whatever hardware tries
to launch it. On Linux...well, I'm not updating it. You can enjoy the x86
version. It's easier on me, I have other projects to work on, and too bad
for you. Granted, the x86_64 version _works_ on Linux, but shipping it is
a serious pain, so it just won't ship.

That is anecdotal, and I apologize for that. But I'm not the only
developer that's not in an apt repository, and all of these rebuttals are
anecdotal: "I just use yum [...because I don't personally care about
Debian users]."

The "third-party" is important. If your answer is "you should have
petitioned Fedora, Ubuntu, Gentoo, CentOS, Slackware and every other
distro to package it, or packaged it for all of those yourself, or open
sourced someone else's software on their behalf and let the community
figure it out" then I just don't think we're talking about the same
reality at all, and I can't resolve that issue for you.

And, since I'm about to get a flood of "closed source is evil" emails:
this applies to Free Software too. Take something bleeding edge but open
source, like, say, Songbird, and you are going to find yourself working
outside of apt-get to get a modern build, or perhaps a build at all.

In short: I'm glad yum works great for your users, but they aren't all the
users, and it sure doesn't work well for all developers.

--ryan.

2009-11-02 18:18:42

by Ryan C. Gordon

[permalink] [raw]
Subject: Re: FatELF patches...


> With my Debian Developer hat on...

I'm repeating myself now, so I'm sorry if this is getting tedious for
anyone. FatELF isn't meant to replace the package managers.

tl;dr: If all you have is an apt-get hammer, everything looks like a .deb nail.

> That usually ships as sources or prebuilt binaries in a tarball - target
> /opt and voila! For a bigger audience you'll see a lot of experimental
> stuff that gets packaged (even in quick'n'dirty mode).

"A lot" is hard to quantify. We can certainly see thousands of forum posts
for help with software that hadn't been packaged yet.

> > software that isn't appropriate for an apt/yum repository
>
> Just create a repository for the damn thing if you want to distribute it
> that way. There's no "appropriate / not appropriate" that applies here.

I can't imagine most people are interested in building repositories and
telling their users how to add it to their package manager, period, but
even less so when you have to build different repositories for different
sets of users, and not know what to build for whatever is the next popular
distribution. For things like Gentoo, which for years didn't have a way to
extend portage, what was the solution?

(har har, don't run Gentoo is the solution, let's get the joke out of our
systems here.)

> > software that distros refuse to package but is still perfectly useful
>
> Look at what happens today. A lot of that gets packaged by third
> parties, and more often than not they involve distribution
> maintainers. (See debian-multimedia, PLF for Mandriva, ...)

I'm hearing a lot of "a lot" ... what actually happens today is that you
depend on the kindness of strangers to package your software or you make a
bunch of incompatible packages for different distributions.

> > closed-source software
>
> Why do we even care?

Maybe you don't care, but that doesn't mean no one cares.

I am on Team Stallman. I'll take a crappy free software solution over a
high quality closed-source one, and strive to improve the free software
one until it is indisputably better. Most of my free time goes towards
this very endeavor.

But still, let's not be jerks about it.

> Tarball,

Ugh.

> /opt,

Ugh.

> static build.

Ugh!

I think we can do better than that when we're outside of the package
managers, but it's a rant for another time.

> And, about the /lib, /lib32, /lib64 situation Debian and Debian-derived
> systems, the solution to that is multiarch and it's being worked
> on. It's a lot better and cleaner than the fat binary kludge.

Having read the multiarch wiki briefly, I'm pleased to see other people
find the current system "unwieldy," but it seems like FatELF "kludge"
solves several of the points in the "unresolved issues" section.

YMMV, I guess.

--ryan.

2009-11-02 18:51:38

by Alan

[permalink] [raw]
Subject: Re: FatELF patches...

> Sure, this doesn't work for everyone, but this list isn't meant to be a

You've not shown a single meaningful use case yet.

> some of the same issues, they cause other issues. The fact that Debian is
> now talking about "multiarch" shows that they've experienced some of these
> problems, too, despite having a world-class package manager.

No it means that Debian is finally catching up with rpm on this issue,
where it has been solved for years.

>
> > - Separate lib, lib32, lib64
> > - So you have one file with 3 files in it rather than three files
> > with one file in them. Directories were invented for a reason
>
> We covered this when talking about shell scripts.

Without providing a justification

> I'm sure, but I'm not sure the increase is a staggering amount. We're not
> talking about making all packages into FatELF binaries.

How will you jhandle cross package dependancies

> > - Stops users only having 32bit libs for some packages
>
> Is that a serious concern?

Yes from a space perspective and a minimising updates perspective.

> > - Third party packagers no longer have to publish multiple rpm/deb etc
> > - By vastly increasing download size
> > - By making updates vastly bigger
>
> It's true that /bin/ls would double in size (although I'm sure at least
> the download saves some of this in compression). But how much of, say,
> Gnome or OpenOffice or Doom 3 is executable code? These things would be
> nowhere near "vastly" bigger.

Guess what: all the data files for Doom and OpenOffice are already
packaged separately as are many of the gnome ones, or automagically
shared by the two rpm packages.

>
> > - Assumes data files are not dependant on binary (often not true)
>
> Turns out that /usr/sbin/hald's cache file was. That would need to be
> fixed, which is trivial, but in my virtual machine test I had it delete
> and regenerate the file on each boot as a fast workaround.
>
> The rest of the Ubuntu install boots and runs. This is millions of lines
> of code that does not depend on the byte order, alignment, and word size
> for its data files.

That you've noticed. But you've not done any formal testing with tens of
thousands of users so you've not done more than the "hey mummy it boots"
test (which is about one point over the Linus 'it might compile' stage)

> I don't claim to be an expert on the inner workings of every package you
> would find on a Linux system, but like you, I expected there would be a
> lot of things to fix. It turns out that "often not true" just turned out
> to actually _not_ be true at all.

You need an expert on the inner workings of each package to review and
test them. Fortunately that work is already done- by the rpm packagers
for all the distros

> > - The ELF OSABI for your system changes someday?
> > - We already handle that
>
> Do we? I grepped for OSABI in the 2.6.31 sources, and can't find anywhere,
> outside of my FatELF patches, where we check an ELF file's OSABI or OSABI
> version at all.

ARM has migrated ABI at least once.

> Where do we handle an ABI change gracefully? Am I misunderstanding the
> code?

You add code for the migration as needed, in the distro

> single point, but you can't see a scenario where this adds a benefit? Even
> in a world that's still running 32-bit web browsers on _every major
> operating system_ because some crucial plugins aren't 64-bit yet?

Your distro must be out of date or a bit backward. Good ones thunk those
or run them in a different process (which is a very good idea for quality
reasons as well as security)

>
> > - Ship web browser plugins that work out of the box with multiple
> > platforms.
> > - yum install just works, and there is a search path in firefox
> > etc
>
> So it's better to have a thousand little unique solutions to the same
> problem?

We have one solution - package management. You want to add the extra one.

> it doesn't change the fact that I downloaded the wrong nvidia drivers the
> other day because I accidentally grabbed the ia32 package instead of the
> amd64 one. So much for saving bandwidth.

You mean your package manager didn't do it for you ? Anyway kernel
drivers are dependant on about 1500 variables and 1500! is a very very
large FatELF binary so it won't work.

> Not really. compat_binfmt_elf will run legacy binaries on new systems, but
> not vice versa. The goal is having something that will let it work on both
> without having to go through a package manager infrastructure.

See binfmt_misc. In fact you can probably do your ELF hacks in userspace
that way if you really must.

> In a world where terrabyte hard drives are cheap consumer-level
> commodities, the tradeoff seems like a complete no-brainer to me.

Except that
- we are moving away from rotating storage for primary media
- flash still costs rather more
- virtual machines mean that disk space is now a real cost again as is RAM

> version. It's easier on me, I have other projects to work on, and too bad
> for you. Granted, the x86_64 version _works_ on Linux, but shipping it is
> a serious pain, so it just won't ship.

Distro problem, in the open source world someone will package it.

> That is anecdotal, and I apologize for that. But I'm not the only
> developer that's not in an apt repository, and all of these rebuttals are
> anecdotal: "I just use yum [...because I don't personally care about
> Debian users]."

No. See yum/rpm demonstrates that it can be done right. Debian has fallen
a bit behind on that issue. We know it can be done right, and that tells
us that the Debian tools will eventually catch up and also do it right.

You have a solution (quite a nicely programmed one) in search of a
problem, and with patent concerns. That's a complete non-flier for the
kernel. It's not a dumping ground for neat toys and it would be several
gigabytes of code if it was.

You are also ignoring the other inconvenient detail. The architecture
selection used even by package managers is far more complex than i386 v
x86_64. Some distros build i686, some i686 optimisation but without cmov,
some i386, some install i386 or i686, others optimise for newer
processors only and so on.

Alan

2009-11-02 18:59:51

by Julien BLACHE

[permalink] [raw]
Subject: Re: FatELF patches...

"Ryan C. Gordon" <[email protected]> wrote:

Hi,

> "A lot" is hard to quantify. We can certainly see thousands of forum
> posts for help with software that hadn't been packaged yet.

"A lot" certainly doesn't mean "all of it", sure, but that's already a
clear improvement over the situation 10 years ago.

> I can't imagine most people are interested in building repositories and
> telling their users how to add it to their package manager, period, but
> even less so when you have to build different repositories for different
> sets of users, and not know what to build for whatever is the next popular
> distribution. For things like Gentoo, which for years didn't have a way to
> extend portage, what was the solution?

You need to decide if and how you want to distribute your software,
define your target audience and work from there. Yes, it takes some
effort. Yes, it's not something that's very valued by today's
standards. So what?

You can as well decide that your software is so good that packagers from
everywhere will package it for you. Except sometimes your software
actually isn't that good and nobody gives a damn.

As it stands, it really looks like your main problem is that it's too
hard to distribute software for Linux, but you're really making it a lot
more difficult than it really is.

Basically, these days, if you can ship a generic RPM and a clean .deb,
you've got most of your users covered. Oh, that's per-architecture, so
with i386 and amd64, that makes 4 packages. And the accompanying source
packages, because that can't hurt.

Anyone that can't use those packages either knows how to build stuff on
her distro of choice or needs to upgrade.

> I'm hearing a lot of "a lot" ... what actually happens today is that you
> depend on the kindness of strangers to package your software or you make a
> bunch of incompatible packages for different distributions.

Err. Excuse me, but if you "depend on the kindness of strangers" it's
because you made that choice in the first place. There is nothing that
prevents you from producing packages yourself. You might even learn a
thing or ten in the process!

When software doesn't get packaged properly after some time, it's
usually because nobody knows about it or because it's not that good and
nobody bothered. As the author, you can fix both issues.

>> > closed-source software
>>
>> Why do we even care?
>
> Maybe you don't care, but that doesn't mean no one cares.

The ones who care have the resources to produce proper packages. They
just don't do it.

> I am on Team Stallman. I'll take a crappy free software solution over a
> high quality closed-source one, and strive to improve the free software

I don't think FatELF improves anything at all in the Free Software
world.

[static builds distributed as tarballs]
> I think we can do better than that when we're outside of the package
> managers, but it's a rant for another time.

Actually, no, you can't, because too many people out there writing
software don't have a clue about shared libraries. If you want things to
work everywhere, static is the way to go.

> Having read the multiarch wiki briefly, I'm pleased to see other people
> find the current system "unwieldy," but it seems like FatELF "kludge"
> solves several of the points in the "unresolved issues" section.

Err, the unresolved issues are all packaging issues, to which the
solutions have not been decided yet. I don't see what FatELF can fix
here.

Now, to put it in a nutshell, you are coming forward with a technical
solution to a problem that *isn't*:
- "my software, Zorglub++ isn't packaged anywhere!"
Did you package it? No? Why not? Besides, maybe nobody knows about
it, maybe nobody needs it, maybe it's just crap. Whatever. Find out
and act from there.

- "proprietary Blahblah7 is not packaged!"
Yeah, well, WeDoProprietaryStuff, Inc. decided not to package it
for whatever reason. What about contacting them, finding out the
reason and then working from there?

JB.

--
Julien BLACHE <http://www.jblache.org>
<[email protected]> GPG KeyID 0xF5D65169

2009-11-02 19:08:24

by Jesús Guerrero

[permalink] [raw]
Subject: Re: FatELF patches...

On Mon, 2 Nov 2009 13:18:41 -0500 (EST), "Ryan C. Gordon"
<[email protected]> wrote:
>> > software that isn't appropriate for an apt/yum repository
>>
>> Just create a repository for the damn thing if you want to distribute
it
>> that way. There's no "appropriate / not appropriate" that applies here.
>
> I can't imagine most people are interested in building repositories and
> telling their users how to add it to their package manager, period, but
> even less so when you have to build different repositories for different

> sets of users, and not know what to build for whatever is the next
popular
> distribution. For things like Gentoo, which for years didn't have a way
to
> extend portage, what was the solution?

I am not going into the FatELF thing. I am just following the debate
because it's interesting :)

However, for the sake of correctness about Gentoo,

1)
Gentoo has had support for "overlays" *for ages*. I am sure they were
there when I joined in 2004. So I am not sure why you say that portage
can't be extended. I can't be sure when did overlays get into scene, I have
no idea if they were there from the beginning, but even at that stage, if
nothing else, you could still use the "ebuild" tool directly over an
ebuild, stored at any arbitrary place, not necessarily in the portage tree.
Nowadays there's a great number of well known overlays, where several
Gentoo devs are involved. Some of these are the testbed for trees that are
later incorporated to the official portage tree. A well known example is
sunrise, because it's big and of a great quality, but there are many more.

2)
Gentoo is probably the last distro that would benefit from FatELF, since
it's a distro where each user slims the system down to his/her needs.
Gentoo is not about making things generic. That's what compiling for your
architecture, USE flags, etc. are all about. If there's a distro out there
where FatELF doesn't make any sense at all, that's Gentoo for sure (as a
representative of source distros, I guess the same could apply to LFS,
sourcemage, etc.).

3)
Besides that, the average Gentoo user has no problem rolling his own
ebuilds if needed and putting them into a local overlay. And even if they
lack the skill there's always the forum and bugzilla for that. This is as
last resource, as said, there are *lots* of well known and maintained
overlays out there.

Again, these are not arguments in favor or against FatELF, as said, I am
staying away of the discussion, just some clarifications for things that I
thought are not correct. :)
--
Jesús Guerrero

2009-11-02 19:56:36

by Krzysztof Halasa

[permalink] [raw]
Subject: Re: FatELF patches...

[email protected] writes:

> fo any individual user it will alsays be a larger download, but if you
> have to support more than one architecture (even 32 bit vs 64 bit x86)
> it may be smaller to have one fat package than to have two 'normal'
> packages.

In terms on disk space on distro TFTP servers only. You'll need to
transfer more, both from user's and distro's POV (obviously). This one
simple fact alone is more than enough to forget the FatELF.

Disk space on FTP servers is cheap (though maybe not so on 32 GB SSDs
and certainly not on 16 MB NOR flash chips). Bandwidth is expensive. And
it doesn't seem to be going to change.

FatELF means you have to compile for many archs. Do you even have the
necessary compilers? Extra time and disk space used for what, to solve
a non-problem?

> yes, the package manager could handle this by splitting the package up
> into more pieces, with some of the pieces being arch independant, but
> that also adds complexity.

Even without splitting, separate per-arch packages are a clear win.

I'm surprised this idea made it here. It certainly has merit for
installation medium, but it's called directory tree and/or .tar or .zip
there.
--
Krzysztof Halasa

2009-11-02 20:11:51

by David Lang

[permalink] [raw]
Subject: Re: FatELF patches...

On Mon, 2 Nov 2009, Krzysztof Halasa wrote:

> [email protected] writes:
>
>> fo any individual user it will alsays be a larger download, but if you
>> have to support more than one architecture (even 32 bit vs 64 bit x86)
>> it may be smaller to have one fat package than to have two 'normal'
>> packages.
>
> In terms on disk space on distro TFTP servers only. You'll need to
> transfer more, both from user's and distro's POV (obviously). This one
> simple fact alone is more than enough to forget the FatELF.

it depends on if there is only one arch being downloaded ot not.

it could be considerably cheaper for mirroring bandwidth. Even if Alan is
correct and distros have re-packaged everything so that the arch
independant stuff is really in seperate packages, most
mirroring/repository systems keep each distro release/arch in a seperate
directory tree, so each of these arch-independant things gets copied
multiple times.

> Disk space on FTP servers is cheap (though maybe not so on 32 GB SSDs
> and certainly not on 16 MB NOR flash chips). Bandwidth is expensive. And
> it doesn't seem to be going to change.
>
> FatELF means you have to compile for many archs. Do you even have the
> necessary compilers? Extra time and disk space used for what, to solve
> a non-problem?

you don't have to compile multiple arches anymore than you have to provide
any other support for that arch. FatELF is a way to bundle the binaries
that you were already creating, not something to force you to support an
arch you otherwise wouldn't (although if it did make it easy enough for
you to do so that you started to support additional arches, that would be
a good thing)

>> yes, the package manager could handle this by splitting the package up
>> into more pieces, with some of the pieces being arch independant, but
>> that also adds complexity.
>
> Even without splitting, separate per-arch packages are a clear win.
>
> I'm surprised this idea made it here. It certainly has merit for
> installation medium, but it's called directory tree and/or .tar or .zip
> there.

if you have a 1M binary with 500M data, repeated for 5 arches it is not a
win vs a single 505M FatELF package in all cases.

David Lang

2009-11-02 20:13:35

by Ryan C. Gordon

[permalink] [raw]
Subject: Re: FatELF patches...


> You've not shown a single meaningful use case yet.

I feel like we're at the point where we're each making points of various
quality and the other person is going "nuh-uh."

You mentioned the patent thing and I don't have an answer at all yet from
a lawyer. Let's table this for awhile until I have more information about
that. If there's going to be a patent problem, it's not worth wasting
everyone's time any further.

If it turns out to be no big deal, we can decide to revisit this.

--ryan.

2009-11-02 20:33:18

by Krzysztof Halasa

[permalink] [raw]
Subject: Re: FatELF patches...

[email protected] writes:

>> In terms on disk space on distro TFTP servers only. You'll need to
>> transfer more, both from user's and distro's POV (obviously). This one
>> simple fact alone is more than enough to forget the FatELF.
>
> it depends on if there is only one arch being downloaded ot not.

Well, from user's POV it may get close if the user downloads maybe 5
different archs out of all supported by the distro. Not very typical
I guess.

> it could be considerably cheaper for mirroring bandwidth.

Maybe (though it can be solved with the existing techniques).
What does now count - bandwidth consumed by users or by mirrors?

> Even if Alan
> is correct and distros have re-packaged everything so that the arch
> independant stuff is really in seperate packages, most
> mirroring/repository systems keep each distro release/arch in a
> seperate directory tree, so each of these arch-independant things gets
> copied multiple times.

If it was a (serious) problem (I think it's not), it could be easily
solved. Think rsync, sha1|256-based mirroring stuff etc.

> you don't have to compile multiple arches anymore than you have to
> provide any other support for that arch. FatELF is a way to bundle the
> binaries that you were already creating, not something to force you to
> support an arch you otherwise wouldn't (although if it did make it
> easy enough for you to do so that you started to support additional
> arches, that would be a good thing)

Not sure - longer compile times, longer downloads, no testing.

> if you have a 1M binary with 500M data, repeated for 5 arches it is
> not a win vs a single 505M FatELF package in all cases.

A real example of such binary maybe?
--
Krzysztof Halasa

2009-11-03 01:36:04

by Mikael Pettersson

[permalink] [raw]
Subject: Re: FatELF patches...

[email protected] writes:
> > FatELF means you have to compile for many archs. Do you even have the
> > necessary compilers? Extra time and disk space used for what, to solve
> > a non-problem?
>
> you don't have to compile multiple arches anymore than you have to provide
> any other support for that arch. FatELF is a way to bundle the binaries
> that you were already creating, not something to force you to support an
> arch you otherwise wouldn't (although if it did make it easy enough for
> you to do so that you started to support additional arches, that would be
> a good thing)

'bundle' by gluing .o files together rather than using what we already have:
directories, search paths, $VARIABLES in search paths, and ELF interpreters
and .so loaders that know to look in $ARCH subdirectories first (I used that
feature to perform an incremental upgrade from OABI to EABI on my ARM/Linux
systems last winter).

Someone, somewhere, has to inspect $ARCH and make a decision. Moving that
decision from user-space to kernel-space for ELF file loading is neither
necessary nor sufficient. Consider .a and .h files for instance.

> > I'm surprised this idea made it here. It certainly has merit for
> > installation medium, but it's called directory tree and/or .tar or .zip
> > there.
>
> if you have a 1M binary with 500M data, repeated for 5 arches it is not a
> win vs a single 505M FatELF package in all cases.

If I have a 1M binary with 500M non-arch data I'll split the package because
I'm not a complete moron.

IMNSHO FatELF is a technology pretending to be a solution to "problems"
that don't exist or have user-space solutions. Either way, it doesn't
belong in the Linux kernel.

2009-11-03 06:51:09

by Eric Windisch

[permalink] [raw]
Subject: Re: FatELF patches...

First, I apologize if this message gets top-posted or otherwise
improperly threaded, as I'm not currently a subscriber to the list (I
can no longer handle the daily traffic). I politely ask that I be CC'ed
on any replies.

In response to Alan's request for a FatELF use-case, I'll submit two of
my own.

I have customers which operate low-memory x86 virtual machine instances.
Until recently, these ran with as little as 64MB of RAM. Many customers
have chosen 32-bit distributions for these systems, but would like the
flexibility of scaling beyond 4GB of memory. These customers would like
the choice of migrating to 64-bit without having to reinstall their
distribution.

Furthermore, I'm involved in several "cloud computing" initiatives,
including interoperability efforts. There has been discussion of
assuring portability of virtual machine images across varying
infrastructure services. I could see how FatELF could be part of a
solution to this problem, enabling a single image to function against
host services running a variety of architectures.

As for negatives: I'm running ZFS which now supports deduplication, so
this might potentially eliminate my own concerns in regard to storage.
Eventually, Btrfs will provide this capability under Linux directly. The
networking isn't much of an issue either, as I have my own mirrors for
the popular distributions. While this isn't the typical end-user
environment, it might be a typical environment for companies facing the
unique problems FatELF solves.

I concede that there are a number of ways that solutions to these
problems might be implemented, and FatELF binaries might not be the
optimal solution. Regardless, I do feel that use cases do exist, even
if there are questions and concerns about the implementation.

--
Regards,
Eric Windisch

2009-11-03 11:26:32

by Bernd Petrovitsch

[permalink] [raw]
Subject: Re: FatELF patches...

On Tue, 2009-11-03 at 01:43 -0500, Eric Windisch wrote:
> First, I apologize if this message gets top-posted or otherwise
> improperly threaded, as I'm not currently a subscriber to the list (I
Given proper references:-headers, the mail should have threaded
properly.
> can no longer handle the daily traffic). I politely ask that I be CC'ed
> on any replies.
Which raises the question why you didn't cc: anyone in the first place.

> In response to Alan's request for a FatELF use-case, I'll submit two of
> my own.
>
> I have customers which operate low-memory x86 virtual machine instances.
Low resource environments (be it embedded or not) are probably the last
who wants (or even can handle) such "bloat by design".
The question in that world is not "how can I make it run on more
architectures" but "how can I get rid of run-time code as soon as
possible".

> Until recently, these ran with as little as 64MB of RAM. Many customers
> have chosen 32-bit distributions for these systems, but would like the
> flexibility of scaling beyond 4GB of memory. These customers would like
> the choice of migrating to 64-bit without having to reinstall their
> distribution.
Just install a 64bit kernel (and leave the user-space intact). A 64bit
kernel can run 32bit binaries.

> Furthermore, I'm involved in several "cloud computing" initiatives,
> including interoperability efforts. There has been discussion of
The better solution is probably to agree on pseudo-machine-code (like
e.g. JVM, parrot, or whatever) with good interpreters/JIT-compilers
which focus more on security and how to validate potentially hostile
programs than anything else.

> assuring portability of virtual machine images across varying
> infrastructure services. I could see how FatELF could be part of a
> solution to this problem, enabling a single image to function against
> host services running a variety of architectures.
Let's hope that the n versions in a given FatElf image actually are
instances of the same source.

[....]
> I concede that there are a number of ways that solutions to these
> problems might be implemented, and FatELF binaries might not be the
> optimal solution. Regardless, I do feel that use cases do exist, even
> if there are questions and concerns about the implementation.
The obvious drawbacks are:
- Even if disk space is cheap, the vast amount is a problem for
mirroring that stuff.
- Fat-Binaries (ab)use more Internet bandwidth. Hell, Fedora/RedHat got
delta-RPMS working (just?) for this reason.
- Fat-Binaries (ab)use much more memory and I/O bandwidth - loading code
for n architectures and throw n-1 of it away doesn't sound very sound.
- Compiling+linking for n architectures needs n-1 cross-compilers
installed and working.
- Compiling+linking for n architectures needs much more *time* than for
1 (n times or so).
Guess what people/developers did first on the old NeXT machines: They
disable the default "build for all architectures" as it speeded things
up.
Even if the expected development setup is "build for local only", at
least packagers and regression testers won't have the luxury of that.

The only remotely useful benefit in the long run I can imagine is: The
permanent cross-compiling will make AC_TRY_RUN() go away. Or at least
the alternatives are applicable without reading the generated
configure.sh (and config.log) to guess how to tell the script some
details.
But that isn't really worth it - as we are living without it for long.

Bernd
--
Firmix Software GmbH http://www.firmix.at/
mobil: +43 664 4416156 fax: +43 1 7890849-55
Embedded Linux Development and Services

2009-11-03 14:54:26

by Valdis Klētnieks

[permalink] [raw]
Subject: Re: FatELF patches...

On Mon, 02 Nov 2009 10:14:15 EST, "Ryan C. Gordon" said:

> I probably wasn't clear when I said "distribution-wide policy" followed by
> a "then again." I meant there would be backlash if the distribution glued
> the whole system together, instead of just binaries that made sense to do
> it to.

OK.. I'll bite - which binaries does it make sense to do so? Remember in
your answer to address the very valid point that any binaries you *don't*
do this for will still need equivalent hand-holding by the package manager.
So if you're not doing all of them, you need to address the additional
maintenance overhead of "which way is this package supposed to be built?"
and all the derivative headaches.

It might be instructive to not do a merge of *everything* in Ubuntu as you
did, but only select a random 20% or so of the packages and convert them
to FatELF, and see what breaks. (If our experience with 'make randconfig'
in the kernel is any indication, you'll hit a *lot* of corner cases and
pre-reqs you didn't know about...)

> > Actually, they can't nuke the /lib{32,64} directories unless *all* binaries
> > are using FatELF - as long as there's any binaries doing things The Old Way,
> > you need to keep the supporting binaries around.
>
> Binaries don't refer directly to /libXX, they count on ld.so to tapdance
> on their behalf. My virtual machine example left the dirs there as
> symlinks to /lib, but they could probably just go away directly.

Only if all your shared libs (which are binaries too) have migrated to FatELF.

On my box, I have:

% ls -l /usr/lib{,64}/libX11.so.6.3.0
-rwxr-xr-x 1 root root 1274156 2009-10-06 13:49 /usr/lib/libX11.so.6.3.0
-rwxr-xr-x 1 root root 1308600 2009-10-06 13:49 /usr/lib64/libX11.so.6.3.0

You can't dump them both into /usr/lib without making it a FatElf or doing
some name mangling. You probably didn't notice because you merged *all* of
an ubuntu distro into FatELF.

> > Don't forget you take that hit once for each shared library involved. Plus
>
> That happens in user space in ld.so, so it's not a kernel problem in any
> case, but still...we're talking about, what? Twenty more branch
> instructions per-process?

No, a lot more than that - you already identified an extra 128-byte read
as needing to happen. Plus syscall overhead.

> > Or will a FatELF glibc.so screw up somebody's refcounts if it's mapped
> > in both 32 and 64 bit modes?
>
> Whose refcounts would this screw up? If there's a possible bug, I'd like
> to make sure it gets resolved, of course.

That's the point - nobody's done an audit for such things. Does the kernel
DTRT when counting mapped pages (probably close-to-right, if you got it to boot)?
Where are the corresponding patches, if any, for tools like perf and oprofile?
Does lsof DTRT? /proc/<pid>/pagemap? Any other tools that may break because
the make an assumption that executable files are mapped as 32-bit or 64-bit,
but not both (most likely choking if they see a 64-bit address someplace
after they've decided the binary is a 32-bit)?


Attachments:
(No filename) (227.00 B)

2009-11-03 18:30:29

by Matt Thrailkill

[permalink] [raw]
Subject: Re: FatELF patches...

On Tue, Nov 3, 2009 at 6:54 AM, <[email protected]> wrote:
> On Mon, 02 Nov 2009 10:14:15 EST, "Ryan C. Gordon" said:
>
>> I probably wasn't clear when I said "distribution-wide policy" followed by
>> a "then again." I meant there would be backlash if the distribution glued
>> the whole system together, instead of just binaries that made sense to do
>> it to.
>
> OK.. I'll bite - which binaries does it make sense to do so? ?Remember in
> your answer to address the very valid point that any binaries you *don't*
> do this for will still need equivalent hand-holding by the package manager.
> So if you're not doing all of them, you need to address the additional
> maintenance overhead of "which way is this package supposed to be built?"
> and all the derivative headaches.
>
> It might be instructive to not do a merge of *everything* in Ubuntu as you
> did, but only select a random 20% or so of the packages and convert them
> to FatELF, and see what breaks. (If our experience with 'make randconfig'
> in the kernel is any indication, you'll hit a *lot* of corner cases and
> pre-reqs you didn't know about...)

I think he is thinking of only having FatELF binaries for binaries and
libraries
that overlap between 32- and 64-bit in a distro install. Perhaps everything
that is sitting in /lib32 for example could instead be in a FatELF
binaries in /lib,
alongside the 64-bit binary.

A thought I had, that I don't think has come up in this thread:
could it be practical or worthwhile for distros to use FatElf to ship multiple
executables with different compiler optimizations? i586, i686, etc.

2009-11-04 01:09:54

by Ryan C. Gordon

[permalink] [raw]
Subject: Re: FatELF patches...


> You mentioned the patent thing and I don't have an answer at all yet from
> a lawyer. Let's table this for awhile until I have more information about
> that. If there's going to be a patent problem, it's not worth wasting
> everyone's time any further.
>
> If it turns out to be no big deal, we can decide to revisit this.

The Software Freedom Law Center replied with this...

"I refer you to our Legal Guide section on dealing with patents available
from our website. I also refer you to our amici brief in Bilski, where we
argue that patents on pure software are invalid. If a patent is invalid,
there's no reason to consider whether it is infringed."

...which may be promising some day, but doesn't resolve current concerns.
Also: "I read a FAQ" doesn't hold up in court. :)

Based on feedback from this list, the patent concern that I'm not
qualified to resolve myself, and belief that I'll be on the losing end of
the same argument with the glibc maintainers after this, I'm withdrawing
my FatELF patch. If anyone wants it, I'll leave the project page and
patches in place at http://icculus.org/fatelf/ ...

Thank you everyone for your time and feedback.

--ryan.

2009-11-04 17:10:00

by Mikulas Patocka

[permalink] [raw]
Subject: package managers [was: FatELF patches...]

> Package managers are a _fantastic_ invention. They are a killer feature
> over other operating systems, including ones people pay way too much money
> to use.

No, package managers are evil feature that suppresses third party software
and kills Linux success on desktop.

Package managers are super-easy to use --- but only as long as the package
exists. No developer can make a package for all versions of all
distributions. No distribution can make a package for all versions of all
Linux software. So, inevitably, there are holes in the
[distribution X software] matrix, where the package isn't available.

- With Windows installers (next - next - next - finish), even a
technically unskilled person can select which version of a given
software he wants to use. If the software doesn't work, he can simply
uninstall it and try another version.

- With Linux package managers, the user is stuck with the software and
version shipped by the distribution. If he wants to install anything
newer or older, it turns into black magic and the typical desktop user
(non-hacker) can't do it.

- For non-technical user who can't compile, getting newer software for
Linux means reinstalling the whole distribution to a newer version. So,
"upgrade one program" translates into "upgrade all programs" (that will
bring many changes that the user didn't want and new bugs)


Let me say that instead of making a single binary for multiple
architectures, you should concentrate on developing a method to make a
single binary that works on all installations on one architecture :)

Mikulas

2009-11-04 16:52:29

by Alan

[permalink] [raw]
Subject: Re: package managers [was: FatELF patches...]

> - With Linux package managers, the user is stuck with the software and
> version shipped by the distribution. If he wants to install anything
> newer or older, it turns into black magic and the typical desktop user
> (non-hacker) can't do it.

In the rpm/yumworld that would be "yum downgrade" and "yum upgrade" for
packages or whatever button on whatever gui wrapper you happen to have.

And of course yum supports third party repositories so you can also deal
with the updating problem which Windows tends not to do well for third
party software.

Installing it is the easy bit, keeping it current and secure is the fun
bit.

All pretty routine stuff and a lot of users add other repositories
themselves: generally by having a package that adds the repository, so
you just have one package to click on in a web browser and open, then off
it all goes.

2009-11-04 17:25:09

by Mikulas Patocka

[permalink] [raw]
Subject: Re: package managers [was: FatELF patches...]

On Wed, 4 Nov 2009, Alan Cox wrote:

> > - With Linux package managers, the user is stuck with the software and
> > version shipped by the distribution. If he wants to install anything
> > newer or older, it turns into black magic and the typical desktop user
> > (non-hacker) can't do it.
>
> In the rpm/yumworld that would be "yum downgrade" and "yum upgrade" for
> packages or whatever button on whatever gui wrapper you happen to have.

And what if there isn't a package? Upgrade option doesn't solve the need
for [ distributions X software ] matrix of packages.

> And of course yum supports third party repositories so you can also deal
> with the updating problem which Windows tends not to do well for third
> party software.

A practical example --- when I wanted to get Wine on RHEL 5, all I found
was a package for 1.0.1. Nothing newer.

I managed to compile the current version of Wine (it wasn't straghtforward
and took few days to solve all the problems) and it ran the program I
wanted. But I can imagine that a typical business user or home gamer will
just say "that Linux sux".

You can say that I should delete RHEL-5 and install Fedora, but that is
just that "upgrade one program" => "upgrade all programs" problem.

Mikulas

2009-11-04 17:39:00

by Valdis Klētnieks

[permalink] [raw]
Subject: Re: package managers [was: FatELF patches...]

On Wed, 04 Nov 2009 17:40:02 +0100, Mikulas Patocka said:

> - With Windows installers (next - next - next - finish), even a
> technically unskilled person can select which version of a given
> software he wants to use. If the software doesn't work, he can simply
> uninstall it and try another version.

Theoretically. There's this little detail called "DLL Hell" though...

(And one could reasonably argue that it requires *more* clue to resolve a
DLL Hell issue than it does to fix the equivalent dependency issue on Linux...)


Attachments:
(No filename) (227.00 B)

2009-11-04 17:55:19

by Martin Nybo Andersen

[permalink] [raw]
Subject: Re: package managers [was: FatELF patches...]

On Wednesday 04 November 2009 18:25:07 Mikulas Patocka wrote:
> On Wed, 4 Nov 2009, Alan Cox wrote:
> > > - With Linux package managers, the user is stuck with the software and
> > > version shipped by the distribution. If he wants to install anything
> > > newer or older, it turns into black magic and the typical desktop
> > > user (non-hacker) can't do it.
> >
> > In the rpm/yumworld that would be "yum downgrade" and "yum upgrade" for
> > packages or whatever button on whatever gui wrapper you happen to have.
>
> And what if there isn't a package? Upgrade option doesn't solve the need
> for [ distributions X software ] matrix of packages.
>
> > And of course yum supports third party repositories so you can also deal
> > with the updating problem which Windows tends not to do well for third
> > party software.
>
> A practical example --- when I wanted to get Wine on RHEL 5, all I found
> was a package for 1.0.1. Nothing newer.
>
> I managed to compile the current version of Wine (it wasn't straghtforward
> and took few days to solve all the problems) and it ran the program I
> wanted. But I can imagine that a typical business user or home gamer will
> just say "that Linux sux".
>
> You can say that I should delete RHEL-5 and install Fedora, but that is
> just that "upgrade one program" => "upgrade all programs" problem.

Have you ever tried upgrading Windows because some program is incompatible
with the current installation? ... That is indeed an 'upgrade all' procedure
... _If_ you're lucky enough to be able to reinstall your software.

Being able to upgrade at least Debian -- and others as well -- without the
need to attend the computer is IMHO one of Linux' biggest wins.

BTW: Wine has, like many others, the newest version of their software
prepackaged for RHEL 4 & 5 among others at their site:
http://www.winehq.org/download/

If all else fail the developers could go for statically compiled binaries in
an executable tarball, which then handles the installation to /usr/local

-Martin

2009-11-04 18:46:41

by Mikulas Patocka

[permalink] [raw]
Subject: Re: package managers [was: FatELF patches...]

> > You can say that I should delete RHEL-5 and install Fedora, but that is
> > just that "upgrade one program" => "upgrade all programs" problem.
>
> Have you ever tried upgrading Windows because some program is incompatible
> with the current installation? ... That is indeed an 'upgrade all' procedure
> ... _If_ you're lucky enough to be able to reinstall your software.

Some Windows programs force upgrade, but not in yearly cycles, like Linux
programs. Majority of programs still work on XP shipped in 2001.

> Being able to upgrade at least Debian -- and others as well -- without the
> need to attend the computer is IMHO one of Linux' biggest wins.

When I did it (from Etch to Lenny), two programs that I have compiled
manually ("vim" and "links") stopped working because Etch and Lenny have
binary-incompatible libgpm.

If some library cannot keep binary compatibility, it should be linked
staticaly, dynamic version shouldn't even exists on the system --- so that
no one can create incompatible binaries.

> BTW: Wine has, like many others, the newest version of their software
> prepackaged for RHEL 4 & 5 among others at their site:
> http://www.winehq.org/download/

This is exactly the link that I followed and the last version for "RHEL 5"
is "wine-1.0.1-1.el5.i386.rpm".

> If all else fail the developers could go for statically compiled binaries in
> an executable tarball, which then handles the installation to /usr/local
>
> -Martin

Static linking doesn't work for any program that needs plug-ins (i.e.
you'd have one glibc statically linked into the program and another glibc
dynamically linked with a plug-in and these two glibcs will beat each
other).

---

I mean this --- the distributions should agree on a common set of
libraries and their versions (call this for example "Linux-2010
standard"). This standard should include libraries that are used
frequently, that have low occurence of bugs and security holes and that
have never had an ABI change.

A distribution that claims compatibility with the standard must ship
libraries that are compatible with the libraries in the standard (not
necessarily the same version, it may ship higher version for security or
so).

Software developers that claim compatibility with the standard will link
standard libraries dynamically and must use static linking for all
libraries not included in the standard. Or they can use dynamic linking
and ship the non-standard library with the application in its private
directory (so that nothing but that application links against it).

Then, software developers could make a release for "Linux-2010" and it
would work on all distributions.

You'd no longer need a [ distributions X programs ] matrix of binaries
and packages.

In five years, you could revisit the standard to "Linux-2015" with newer
versions of libraries and force the users only to five-years upgrades, not
yearly upgrades as it is now. "Linux-2015" should be backward compatible
with "Linux-2010", so an user doing upgrade would only need to overwrite
his /lib and /usr/lib, he woudn't even need to change the programs.

Mikulas

2009-11-04 19:45:13

by Alan

[permalink] [raw]
Subject: Re: package managers [was: FatELF patches...]

> > BTW: Wine has, like many others, the newest version of their software
> > prepackaged for RHEL 4 & 5 among others at their site:
> > http://www.winehq.org/download/
>
> This is exactly the link that I followed and the last version for "RHEL 5"
> is "wine-1.0.1-1.el5.i386.rpm".

So you have a supplier issue. A random windows user wouldnt cope with
that either. You try installing a Windows Vista only app on XP ;)

> A distribution that claims compatibility with the standard must ship
> libraries that are compatible with the libraries in the standard (not
> necessarily the same version, it may ship higher version for security or
> so).

Welcome to the Linux Standard Base. It's been done and it exists.
Generally speaking open source projects don't seem to care to build to it
but prefer to build to each distro.

2009-11-04 20:02:50

by Valdis Klētnieks

[permalink] [raw]
Subject: Re: package managers [was: FatELF patches...]

On Wed, 04 Nov 2009 19:46:44 +0100, Mikulas Patocka said:

> When I did it (from Etch to Lenny), two programs that I have compiled
> manually ("vim" and "links") stopped working because Etch and Lenny have
> binary-incompatible libgpm.
>
> If some library cannot keep binary compatibility, it should be linked
> staticaly, dynamic version shouldn't even exists on the system --- so that
> no one can create incompatible binaries.

No, all they need to do is bump the .so version number.

I have a creeping-horror binary that was linked against an older audit shared
library. Fedora shipped a newer one. The fix? Upgraded the lib, then
snarfed the old version off backups (you *do* make backups, right?)

% ls -l /lib64/libaudit*
lrwxrwxrwx 1 root root 17 2009-09-26 16:47 /lib64/libaudit.so.0 -> libaudit.so.0.0.0
-rwxr-xr-x 1 root root 107304 2009-04-03 15:47 /lib64/libaudit.so.0.0.0
lrwxrwxrwx 1 root root 17 2009-09-30 11:09 /lib64/libaudit.so.1 -> libaudit.so.1.0.0
-rwxr-xr-x 1 root root 103208 2009-09-28 16:00 /lib64/libaudit.so.1.0.0

They happily co-exist. My creeping horror references libaudit.so.0, the rest
of the system references libaudit.so.1 and everybody is happy.

And some distros even pre-package the previous set of libraries for some packages:

% yum list 'compat*'
Loaded plugins: dellsysidplugin2, downloadonly, refresh-packagekit, refresh-updatesd
Installed Packages
compat-expat1.x86_64 1.95.8-6 @rawhide
compat-readline5.i686 5.2-17.fc12 @rawhide
compat-readline5.x86_64 5.2-17.fc12 @rawhide
Available Packages
compat-db.x86_64 4.6.21-5.fc10 rawhide
compat-db45.x86_64 4.5.20-5.fc10 rawhide
compat-db46.x86_64 4.6.21-5.fc10 rawhide
compat-erlang.x86_64 R10B-15.12.fc12 rawhide
compat-expat1.i686 1.95.8-6 rawhide
compat-flex.x86_64 2.5.4a-6.fc12 rawhide
compat-gcc-34.x86_64 3.4.6-18 rawhide
compat-gcc-34-c++.x86_64 3.4.6-18 rawhide
compat-gcc-34-g77.x86_64 3.4.6-18 rawhide
compat-guichan05.i686 0.5.0-10.fc12 rawhide
compat-guichan05.x86_64 0.5.0-10.fc12 rawhide
compat-guichan05-devel.i686 0.5.0-10.fc12 rawhide
compat-guichan05-devel.x86_64 0.5.0-10.fc12 rawhide
compat-libf2c-34.i686 3.4.6-18 rawhide
compat-libf2c-34.x86_64 3.4.6-18 rawhide
compat-libgda.i686 3.1.2-3.fc12 rawhide
compat-libgda.x86_64 3.1.2-3.fc12 rawhide
compat-libgda-devel.i686 3.1.2-3.fc12 rawhide
compat-libgda-devel.x86_64 3.1.2-3.fc12 rawhide
compat-libgda-sqlite.x86_64 3.1.2-3.fc12 rawhide
compat-libgda-sqlite-devel.i686 3.1.2-3.fc12 rawhide
compat-libgda-sqlite-devel.x86_64 3.1.2-3.fc12 rawhide
compat-libgdamm.i686 3.0.1-4.fc12 rawhide
compat-libgdamm.x86_64 3.0.1-4.fc12 rawhide
compat-libgdamm-devel.i686 3.0.1-4.fc12 rawhide
compat-libgdamm-devel.x86_64 3.0.1-4.fc12 rawhide
compat-libgfortran-41.i686 4.1.2-38 rawhide
compat-libgfortran-41.x86_64 4.1.2-38 rawhide
compat-libstdc++-296.i686 2.96-143 rawhide
compat-libstdc++-33.i686 3.2.3-68 rawhide
compat-libstdc++-33.x86_64 3.2.3-68 rawhide
compat-readline5-devel.i686 5.2-17.fc12 rawhide
compat-readline5-devel.x86_64 5.2-17.fc12 rawhide
compat-readline5-static.x86_64 5.2-17.fc12 rawhide


Attachments:
(No filename) (227.00 B)

2009-11-04 20:04:24

by Mikulas Patocka

[permalink] [raw]
Subject: Re: package managers [was: FatELF patches...]

> Welcome to the Linux Standard Base. It's been done and it exists.
> Generally speaking open source projects don't seem to care to build to it
> but prefer to build to each distro.

Why?

Mikulas

2009-11-04 20:07:57

by Mikulas Patocka

[permalink] [raw]
Subject: Re: package managers [was: FatELF patches...]

On Wed, 4 Nov 2009, [email protected] wrote:

> On Wed, 04 Nov 2009 19:46:44 +0100, Mikulas Patocka said:
>
> > When I did it (from Etch to Lenny), two programs that I have compiled
> > manually ("vim" and "links") stopped working because Etch and Lenny have
> > binary-incompatible libgpm.
> >
> > If some library cannot keep binary compatibility, it should be linked
> > staticaly, dynamic version shouldn't even exists on the system --- so that
> > no one can create incompatible binaries.
>
> No, all they need to do is bump the .so version number.

That's what Debian did. Obviously, I can extract the old library from the
old package. But non-technical desktop user can't.

Mikulas

2009-11-04 20:27:33

by David Lang

[permalink] [raw]
Subject: Re: package managers [was: FatELF patches...]

On Wed, 4 Nov 2009, Mikulas Patocka wrote:

>> Welcome to the Linux Standard Base. It's been done and it exists.
>> Generally speaking open source projects don't seem to care to build to it
>> but prefer to build to each distro.
>
> Why?

also note that commercial products generally don't use LSB either.

David Lang

2009-11-04 20:27:58

by Ryan C. Gordon

[permalink] [raw]
Subject: Re: package managers [was: FatELF patches...]


> No, package managers are evil feature that suppresses third party software
> and kills Linux success on desktop.

There are merits and flaws, of course, but I'm going to take this moment
to encourage everyone to not descend into a conversation about this on
linux-kernel. My point with FatELF wasn't to start a conversation about
package management at all, let alone on this mailing list.

--ryan.

2009-11-04 20:42:10

by Valdis Klētnieks

[permalink] [raw]
Subject: Re: package managers [was: FatELF patches...]

On Wed, 04 Nov 2009 21:08:01 +0100, Mikulas Patocka said:
> On Wed, 4 Nov 2009, [email protected] wrote:
>
> > On Wed, 04 Nov 2009 19:46:44 +0100, Mikulas Patocka said:
> >
> > > When I did it (from Etch to Lenny), two programs that I have compiled
> > > manually ("vim" and "links") stopped working because Etch and Lenny have
> > > binary-incompatible libgpm.
> > >
> > > If some library cannot keep binary compatibility, it should be linked
> > > staticaly, dynamic version shouldn't even exists on the system --- so tha
t
> > > no one can create incompatible binaries.
> >
> > No, all they need to do is bump the .so version number.
>
> That's what Debian did. Obviously, I can extract the old library from the
> old package. But non-technical desktop user can't.

But the non-technical user probably wouldn't have hand-compiled vim and links
either, so how would they get into that situation?


Attachments:
(No filename) (227.00 B)

2009-11-04 21:11:44

by Mikulas Patocka

[permalink] [raw]
Subject: Re: package managers [was: FatELF patches...]

> > > No, all they need to do is bump the .so version number.
> >
> > That's what Debian did. Obviously, I can extract the old library from the
> > old package. But non-technical desktop user can't.
>
> But the non-technical user probably wouldn't have hand-compiled vim and links
> either, so how would they get into that situation?

Non-technical users won't hand-compile but they want third party software
that doesn't come from the distribution. And package management system
hates it. Truly. It is written with the assumption that everything
installed is registered in the package database.

Another example: I needed new binutils because it had some bugs fixed over
standard Debian binutils. So I downloaded .tar.gz from ftp.gnu.org,
compiled it, then issued a command to remove the old package, passed it a
flag to ignore broken dependencies and then typed make install to install
new binaries. --- guess what --- on any further invocation of dselect it
complained that there are broken dependencies (the compiler needs
binutils) and tried to install the old binutils package!

Why is the package management so stupid? Why can't it check $PATH for "ld"
and if there is one, don't try to install it again?

After few hours, I resolved the issue by creating an empty "binutils"
package and stuffing it into the database.

Now, if I were not a programmer ... if I were an artist who needs the
latest version of graphics software, if I were a musican who needs the
latest version of audio software, if I were a gamer who needs the latest
version of wine ... I'd be f'cked up. That's why I think that package
management is an evil feature hurts desktop users. As a technical user, I
somehow solve these quirks and install what I want, as a non-technical
user, I wouldn't have a chance.

Mikulas

2009-11-04 21:32:20

by kevin granade

[permalink] [raw]
Subject: Re: package managers [was: FatELF patches...]

On Wed, Nov 4, 2009 at 3:11 PM, Mikulas Patocka
<[email protected]> wrote:
>> > > No, all they need to do is bump the .so version number.
>> >
>> > That's what Debian did. Obviously, I can extract the old library from the
>> > old package. But non-technical desktop user can't.
>>
>> But the non-technical user probably wouldn't have hand-compiled vim and links
>> either, so how would they get into that situation?
>
> Non-technical users won't hand-compile but they want third party software
> that doesn't come from the distribution. And package management system
> hates it. Truly. It is written with the assumption that everything
> installed is registered in the package database.
>
> Another example: I needed new binutils because it had some bugs fixed over
> standard Debian binutils. So I downloaded .tar.gz from ftp.gnu.org,
> compiled it, then issued a command to remove the old package, passed it a
> flag to ignore broken dependencies and then typed make install to install
> new binaries. --- guess what --- on any further invocation of dselect it
> complained that there are broken dependencies (the compiler needs
> binutils) and tried to install the old binutils package!
>
> Why is the package management so stupid? Why can't it check $PATH for "ld"
> and if there is one, don't try to install it again?
>
> After few hours, I resolved the issue by creating an empty "binutils"
> package and stuffing it into the database.
>
> Now, if I were not a programmer ... if I were an artist who needs the
> latest version of graphics software, if I were a musican who needs the
> latest version of audio software, if I were a gamer who needs the latest
> version of wine ... I'd be f'cked up. That's why I think that package
> management is an evil feature hurts desktop users. As a technical user, I
> somehow solve these quirks and install what I want, as a non-technical
> user, I wouldn't have a chance.

I think the important question here is what is is exactly that the
package manager *did* to break the app you are talking about? Did it
keep the person who released the software from including the required
libraries? Did it keep them from compiling it statically? Did it
interfere with them building against LSB? No, it didn't do any of
these things, all it did was not be as up to date as you wanted it to
be, and not magically be able to discern that you've replaced one of
the most core packages in the system (which, by the way is most
definitely not something that %99.999 of users are going to try)

I'm of the opinion that the package manager IS the "killer app" for
Linux, and the main thing that makes it usable at all for the
less-technical users you seem to think it is driving off. Is it
perfect? of course not, particularly if you want to strike off on
your own and install things manually. But the pain you're running
into when you do that isn't caused by the package manager, it's what
is left if you take the package manager away.

>
> Mikulas
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to [email protected]
> More majordomo info at ?http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at ?http://www.tux.org/lkml/
>

2009-11-04 22:05:14

by Mikulas Patocka

[permalink] [raw]
Subject: Re: package managers [was: FatELF patches...]

> I think the important question here is what is is exactly that the
> package manager *did* to break the app you are talking about?

It interferred with my will to install the version of the software that I
want.

> be, and not magically be able to discern that you've replaced one of
> the most core packages in the system (which, by the way is most
> definitely not something that %99.999 of users are going to try)

If you need new 3D driver because of better gaming performance ... if you
need new lame because it encodes mp3 better ... if you need new libsane
because it supports the new scanner that you have ... you are going to
face the same problems like me when I needed new binutils. But the big
problem is that persons needing these things usually don't have enough
skills to install the software on their own and then fight with the
package management system.

On Windows, the user can just download the EXE, run it, click
next-next-next-finish and have it installed. There is no package
management that would try to overwrite what you have just installed.

Mikulas

2009-11-04 22:19:37

by Marcin Letyns

[permalink] [raw]
Subject: Re: package managers [was: FatELF patches...]

2009/11/4 Mikulas Patocka <[email protected]>:
>
> It interferred with my will to install the version of the software that I
> want.

You did it in very idiotic way...

>
> If you need new 3D driver because of better gaming performance ... if you
> need new lame because it encodes mp3 better ... if you need new libsane
> because it supports the new scanner that you have ... you are going to
> face the same problems like me when I needed new binutils. But the big
> problem is that persons needing these things usually don't have enough
> skills to install the software on their own and then fight with the
> package management system.

You use a rolling distro or add a proper repository with newer
packages. Nope, I never faced such problems, but I'm smart enough to
install software in a proper way. I consider package managers being
killer features you can only dream about being windows user.

> On Windows, the user can just download the EXE, run it, click
> next-next-next-finish and have it installed. There is no package
> management that would try to overwrite what you have just installed.

On windows, user cannot upgrade entire system in such easy way (he
can't even install a thing in such easy way) as Linux distros let you
to do so. I recommend you to stop writing such bull. It was you, who
wanted to overwrite what you have just installed. Stop trolling.

2009-11-04 22:28:40

by David Lang

[permalink] [raw]
Subject: Re: package managers [was: FatELF patches...]

On Wed, 4 Nov 2009, Marcin Letyns wrote:

> 2009/11/4 Mikulas Patocka <[email protected]>:
>>
>> It interferred with my will to install the version of the software that I
>> want.
>
> You did it in very idiotic way...

he's not alone in trying to do this

package managers are wonderful when they work and the package you need is
in there. they are a pain to be worked around when the package you want
isn't in the repository. if the package just isn't in there it's not a big
deal to deal with it, the problem comes when you want a package that's
different from one that _is_ in the repository.

how easy or hard it is to work around the package manager depends in large
part on if you know the tricks for that particular package manager.

and no, a rolling update distro doesn't solve the problem. one issue is
that trying to upgrade one package may trigger a pull of many others, but
the bigger problem shows up when you need to compile a package with
different options and really need to tell the package manager "hands off,
I'll do this manually". They all have a way to do this, but most of the
time it means learning enough about how packages work on that system to be
able to create a dummy package to trick the package manager.

I think both sides here are overstating it.

package managers are neither the solution to all possible problems, nor
are they the root of all evil.

David Lang

>>
>> If you need new 3D driver because of better gaming performance ... if you
>> need new lame because it encodes mp3 better ... if you need new libsane
>> because it supports the new scanner that you have ... you are going to
>> face the same problems like me when I needed new binutils. But the big
>> problem is that persons needing these things usually don't have enough
>> skills to install the software on their own and then fight with the
>> package management system.
>
> You use a rolling distro or add a proper repository with newer
> packages. Nope, I never faced such problems, but I'm smart enough to
> install software in a proper way. I consider package managers being
> killer features you can only dream about being windows user.
>
>> On Windows, the user can just download the EXE, run it, click
>> next-next-next-finish and have it installed. There is no package
>> management that would try to overwrite what you have just installed.
>
> On windows, user cannot upgrade entire system in such easy way (he
> can't even install a thing in such easy way) as Linux distros let you
> to do so. I recommend you to stop writing such bull. It was you, who
> wanted to overwrite what you have just installed. Stop trolling.
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to [email protected]
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/
>

2009-11-04 22:42:59

by Martin Nybo Andersen

[permalink] [raw]
Subject: Re: package managers [was: FatELF patches...]

On Wednesday 04 November 2009 23:05:17 Mikulas Patocka wrote:
> > I think the important question here is what is is exactly that the
> > package manager *did* to break the app you are talking about?
>
> It interferred with my will to install the version of the software that I
> want.
>
> > be, and not magically be able to discern that you've replaced one of
> > the most core packages in the system (which, by the way is most
> > definitely not something that %99.999 of users are going to try)
>
> If you need new 3D driver because of better gaming performance ... if you
> need new lame because it encodes mp3 better ... if you need new libsane
> because it supports the new scanner that you have ... you are going to
> face the same problems like me when I needed new binutils. But the big
> problem is that persons needing these things usually don't have enough
> skills to install the software on their own and then fight with the
> package management system.
>
> On Windows, the user can just download the EXE, run it, click
> next-next-next-finish and have it installed. There is no package
> management that would try to overwrite what you have just installed.

Exactly. There is nothing to help you from installing incompatible software
(ie libraries). If your next-next-next-finish installer overwrites a crucial
library, you're screwed. The package manager, on the other hand, knows about
all your installed files and their dependencies and conflicts.

If you really want to fiddle with your own software versions, dependencies, and
conflicts, then the equivs package is a perfect helper, which lets you create
virtual Debian packages (empty packages with dependencies and such).
For instance, I compile mplayer directly from the subversion repository -
however I still have some packages installed, which depends on mplayer. Here
the virtual mplayer package keeps apt and friends from complaining.

My home brewed mplayer will still fail to work when a needed library is gone,
but now I only have about a dozen apps that can break this way (all are nicely
installed under /usr/local/stow btw).

Without the package manager, it would have been all of them.

Another nice thing about apt: It's an installer, that frees you from the next-
next-next steps. ;-)

-Martin

2009-11-04 23:12:09

by Valdis Klētnieks

[permalink] [raw]
Subject: Re: package managers [was: FatELF patches...]

On Wed, 04 Nov 2009 22:11:47 +0100, Mikulas Patocka said:

> Another example: I needed new binutils because it had some bugs fixed over
> standard Debian binutils. So I downloaded .tar.gz from ftp.gnu.org,
> compiled it, then issued a command to remove the old package, passed it a
> flag to ignore broken dependencies and then typed make install to install
> new binaries. --- guess what --- on any further invocation of dselect it
> complained that there are broken dependencies (the compiler needs
> binutils) and tried to install the old binutils package!

> Why is the package management so stupid? Why can't it check $PATH for "ld"
> and if there is one, don't try to install it again?

Because it has no way to tell what version of /usr/bin/foobar you installed
behind its back, if it's GNU Foobar or some other foobar, what its flags are,
whether it's bug-compatible with the foobar other things on the system are
expecting, and so on. (And go look at the scripts/ver_linux file in the Linux
source tree before you suggest the package manager run the program to find out
its version. That's only 10-15 binaries, and you'd need something like that for
*every single thing in /usr/bin). And it can't blindly assume you installed a
newer version - you may have intentionally installed a *backlevel* binary,
because you found a showstopper bug in the shipped version. So the only sane
thing it can do is try to re-install what it thinks is current.

Walking $PATH is even worse - if it finds a /usr/local/bin/ld, it's a pretty
damned good guess that it's there *because* it's not the /bin/ld that the
system shipped with. So why should it use it?


Attachments:
(No filename) (227.00 B)

2009-11-04 23:55:49

by Mikulas Patocka

[permalink] [raw]
Subject: Re: package managers [was: FatELF patches...]

> > On Windows, the user can just download the EXE, run it, click
> > next-next-next-finish and have it installed. There is no package
> > management that would try to overwrite what you have just installed.
>
> Exactly. There is nothing to help you from installing incompatible software
> (ie libraries). If your next-next-next-finish installer overwrites a crucial
> library, you're screwed. The package manager, on the other hand, knows about
> all your installed files and their dependencies and conflicts.

The package manager can make the system unbootable too --- because of bugs
in it or in packages.

In some situations, the package manager is even more dangerous than manual
install. For example, if you are manually installing new alpha-quality
version of mplayer, and it is buggy, you end up with a working system with
broken mplayer. If you install alpha-quality version from some package
repository, it may need experimental version of libfoo, that needs
experimental version of libfee, that needs experimental version of glibc,
that contains a bug --- and you won't boot (are rescue CDs smart enough to
revert that upgrade?)

> If you really want to fiddle with your own software versions,
> dependencies, and conflicts, then the equivs package is a perfect
> helper, which lets you create virtual Debian packages (empty packages
> with dependencies and such). For instance, I compile mplayer directly
> from the subversion repository - however I still have some packages
> installed, which depends on mplayer. Here the virtual mplayer package
> keeps apt and friends from complaining.

Nice description ... the problem is that for desktop users it is still too
complicated task.

I think the ultimate installer should work somehow like:
- extract the configure file from the *.tar.gz package.
- parse the options from configure (or configure.in/.ac) and present them
to the user as checkboxes / input fields.
- let him click through the options with next-next-next-finish buttons :)
- try to run configure with user's options.
- if it fails, try to parse its output and install missing dependencies.
This can't work in 100% cases, but it can work in >90% cases --- if we
fail to parse the output, present it to the user and let him act on it.
- compile the program on background. (or foreground for geeks :)
- run "make install" it into a temporary directory.
- record what it tried to install (for possible undo) and then copy the
files to the real places.

At least this would allow Linux users to use a lot available free software
without relying on what the distribution does or doesn't pack. The user
would work just like in Windows, download the program from developer's
webpage and install it. He could upgrade or downgrade to any available
versions released by the developer.

Mikulas

2009-11-05 00:05:20

by Mikulas Patocka

[permalink] [raw]
Subject: Re: package managers [was: FatELF patches...]

> Walking $PATH is even worse - if it finds a /usr/local/bin/ld, it's a pretty
> damned good guess that it's there *because* it's not the /bin/ld that the
> system shipped with. So why should it use it?

If it finds /usr/local/bin/ld it's because the admin installed it there
--- and he installed it there because he wants it to be used. So it's OK
to use it.

Anyway, if you have both /usr/bin/ld and /usr/local/bin/ld, you are in a
pretty unpleasant situation because different programs search for them in
a different order and you never know which one will be used. (i.e. what if
./configure scripts prepends /usr/local/bin to $PATH ... or if it prepends
/usr/bin? who knows - no one can check all ./configure scripts. These
scripts do crazy things just because it worked around some flaw on some
ancient Unix system).

Mikulas

2009-11-05 02:25:04

by Valdis Klētnieks

[permalink] [raw]
Subject: Re: package managers [was: FatELF patches...]

On Thu, 05 Nov 2009 00:55:53 +0100, Mikulas Patocka said:

> In some situations, the package manager is even more dangerous than manual
> install. For example, if you are manually installing new alpha-quality
> version of mplayer, and it is buggy, you end up with a working system with
> broken mplayer. If you install alpha-quality version from some package
> repository, it may need experimental version of libfoo, that needs
> experimental version of libfee, that needs experimental version of glibc,
> that contains a bug

Total bullshit. You know *damned* well that if you were installing that alpha
version of mplayer by hand, and it needed experimental libfoo, you'd go and
build libfoo by hand, and then build the experimental libfee by hand, and then
shoehorn in that glibc by hand, and bricked your system anyhow.

Or if you're arguing "you'd give up after seeing it needed an experimental
libfoo", I'll counter "you'd hopefully think twice if yum said it was
installing a experimental mplayer, and dragging in a whole chain of pre-reqs".

And any *sane* package manager won't even *try* to install an experimental one
unless you specifically *tell* it that the vendor-testing repository is
fair game. You install Fedora, it looks in Releases and Updates. You want
it to look for testing versions in Rawhide, you have to enable that by hand.
I'm positive Debian and Ubuntu and Suse are similar.

Plus, building by hand you're *more* likely to produce a brick-able library,
because you didn't specify the same './configure --enable-foobar' flags that
the rest of your system was expecting. (Been there, done that - reported a
Fedora Rawhide bug that an X11 upgrade borked the keyboard mapping, so the
keysym reported for 'uparrow' was 'Katakana', among other things. Actual root
cause - running a -mm kernel that didn't have CONFIG_INPUT_EVDEV defined.
Previous X didn't care, updated it. Whoops).


Attachments:
(No filename) (227.00 B)

2009-11-05 02:52:32

by Mikulas Patocka

[permalink] [raw]
Subject: Re: package managers [was: FatELF patches...]



On Wed, 4 Nov 2009, [email protected] wrote:

> On Thu, 05 Nov 2009 00:55:53 +0100, Mikulas Patocka said:
>
> > In some situations, the package manager is even more dangerous than manual
> > install. For example, if you are manually installing new alpha-quality
> > version of mplayer, and it is buggy, you end up with a working system with
> > broken mplayer. If you install alpha-quality version from some package
> > repository, it may need experimental version of libfoo, that needs
> > experimental version of libfee, that needs experimental version of glibc,
> > that contains a bug
>
> Total bullshit. You know *damned* well that if you were installing that
> alpha version of mplayer by hand, and it needed experimental libfoo,
> you'd go and build libfoo by hand, and then build the experimental
> libfee by hand, and then shoehorn in that glibc by hand, and bricked
> your system anyhow.

No, if I compile alpha version of mplayer by hand, it compiles and links
against whatever libraries I have on my system. If I pull it out of some
"testing" repository, it is already compiled and linked against libraries
in the same "testing" repository and it will load the system with crap.

That is the unfortunate reality of not having a binary standard :-(

> Or if you're arguing "you'd give up after seeing it needed an experimental
> libfoo", I'll counter "you'd hopefully think twice if yum said it was
> installing a experimental mplayer, and dragging in a whole chain of pre-reqs".

... or use --disable-libfoo if it insists on newer version and I don't
want to upgrade it. Or maybe the configure scripts detects on its own that
the library is too old will compile without new features. Or it uses
libfoo shipped with the sources.

But if the binary in the package is compiled with --enable-libfoo, there
is no other way. It forces libfoo upgrade.

Mikulas

2009-11-10 10:08:07

by Enrico Weigelt

[permalink] [raw]
Subject: Re: FatELF patches...

* David Hagood <[email protected]> wrote:

Hi,

> I hope it's not too late for a request for consideration: if we start
> having fat binaries, could one of the "binaries" be one of the "not
> quite compiled code" formats like Architecture Neutral Distribution
> Format (ANDF), such that, given a fat binary which does NOT support a
> given CPU, you could at least in theory process the ANDF section to
> create the needed target binary? Bonus points for being able to then
> append the newly created section to the file.

If you really wanna have arch independent binaries, you need some sort
of virtual processor. Java, LLVM, etc. The idea is far from being new,
IMHO originally came from old Burroughs Mainframes, which ran some
Algol-tailored bytecode, driven by an interpreter in microcode.
(I'm currently desiging a new VP with similar concepts, just in case
anybody's interested).

BTW: this does not need additional kernel support - binfmt_misc
is your friend ;-P

> As an embedded systems guy who is looking to have to support multiple
> CPU types, this is really very interesting to me.

Just for the protocol: you want to have FatELF on embedded system ?


cu
--
---------------------------------------------------------------------
Enrico Weigelt == metux IT service - http://www.metux.de/
---------------------------------------------------------------------
Please visit the OpenSource QM Taskforce:
http://wiki.metux.de/public/OpenSource_QM_Taskforce
Patches / Fixes for a lot dozens of packages in dozens of versions:
http://patches.metux.de/
---------------------------------------------------------------------

2009-11-10 10:13:30

by Enrico Weigelt

[permalink] [raw]
Subject: Re: FatELF patches...

* Bernd Petrovitsch <[email protected]> wrote:

> The only remotely useful benefit in the long run I can imagine is: The
> permanent cross-compiling will make AC_TRY_RUN() go away. Or at least
> the alternatives are applicable without reading the generated
> configure.sh (and config.log) to guess how to tell the script some
> details.

hmm, that could be the real killer argument - evolutionarily
sort out the guys who're too dumb to write proper buildscripts ;-)


cu
--
---------------------------------------------------------------------
Enrico Weigelt == metux IT service - http://www.metux.de/
---------------------------------------------------------------------
Please visit the OpenSource QM Taskforce:
http://wiki.metux.de/public/OpenSource_QM_Taskforce
Patches / Fixes for a lot dozens of packages in dozens of versions:
http://patches.metux.de/
---------------------------------------------------------------------

2009-11-10 10:23:43

by Enrico Weigelt

[permalink] [raw]
Subject: Re: FatELF patches...

* Eric Windisch <[email protected]> wrote:

Hi,

> I have customers which operate low-memory x86 virtual machine instances.
> Until recently, these ran with as little as 64MB of RAM. Many customers
> have chosen 32-bit distributions for these systems, but would like the
> flexibility of scaling beyond 4GB of memory. These customers would like
> the choice of migrating to 64-bit without having to reinstall their
> distribution.

Assuming that are reasonably critical production systems, you wont
get around a specially tailored distro/package manager (where anybody
already did all the vast amount of testing of the upgrade process)
or do it all manually. Nevertheless you'll (at least temporarily)
need an multilib system or jails.

I don't see where FatELF will give you special help here.

> Furthermore, I'm involved in several "cloud computing" initiatives,
> including interoperability efforts. There has been discussion of
> assuring portability of virtual machine images across varying
> infrastructure services. I could see how FatELF could be part of a
> solution to this problem, enabling a single image to function against
> host services running a variety of architectures.

Drop that idea. Better create images for each target platform.
Let an automated build system handle that. (if you need one,
feel free to contact me off-list).

You want to migrate a running VM to a different arch ?
Forget it. You won't come around processor emulation, better use
some VP like Java, LLVM, etc.


cu
--
---------------------------------------------------------------------
Enrico Weigelt == metux IT service - http://www.metux.de/
---------------------------------------------------------------------
Please visit the OpenSource QM Taskforce:
http://wiki.metux.de/public/OpenSource_QM_Taskforce
Patches / Fixes for a lot dozens of packages in dozens of versions:
http://patches.metux.de/
---------------------------------------------------------------------

2009-11-10 11:30:40

by Enrico Weigelt

[permalink] [raw]
Subject: Re: FatELF patches...

* Ryan C. Gordon <[email protected]> wrote:

> It's true that /bin/ls would double in size (although I'm sure at least
> the download saves some of this in compression). But how much of, say,
> Gnome or OpenOffice or Doom 3 is executable code? These things would be
> nowhere near "vastly" bigger.

OO takes about 140 MB for binaries at my site. Now just multiply it by
the number of targets you'd like to support.

Gnome stuff also tends to be quite fat.

> > - Assumes data files are not dependant on binary (often not true)
>
> Turns out that /usr/sbin/hald's cache file was. That would need to be
> fixed, which is trivial, but in my virtual machine test I had it delete
> and regenerate the file on each boot as a fast workaround.

Well, hald (and already the dbus stuff) is a misdesign, so we shouldn't
count it here ;-P

> Testing doesn't really change with what I'm describing. If you want to
> ship a program for PowerPC and x86, you still need to test it on PowerPC
> and x86, no matter how you distribute or launch it.

BUT: you have to test the whole combination on dozens of targets.
And it in now way releaves to from testing dozens of different distros.

If you want one binary package for many different targets, go for
autopackage, LSM, etc.

> Yes, that is true for software shipped via yum, which does not encompass
> all the software you may want to run on your system. I'm not arguing
> against package management.

Why not fixing the package ?

> True. If I try to run a PowerPC binary on a Sparc, it fails in any
> circumstance. I recognize the goal of this post was to shoot down every
> single point, but you can't see a scenario where this adds a benefit? Even
> in a world that's still running 32-bit web browsers on _every major
> operating system_ because some crucial plugins aren't 64-bit yet?

The root of evil are plugins - even worse: binary-only plugins.

Let's just take browsers: is there any damn good reason for not putting
those things into their own process (9P provides a fine IPC for that),
besides stupidity and lazyness of certain devs (yes, this explicitly
includes mozilla guys) ?

> > - Ship web browser plugins that work out of the box with multiple
> > platforms.
> > - yum install just works, and there is a search path in firefox
> > etc
>
> So it's better to have a thousand little unique solutions to the same
> problem? Everything has a search path (except things that don't), and all
> of those search paths are set up in the same way (except things that
> aren't). Do we really need to have every single program screwing around
> with their own personal spiritual successor to the CLASSPATH environment
> variable?

You dont like $PATH ? Use a unionfs and let a installer / package manager
handle proper setups.

Yes, on Linux (contrary to Plan9) this (AFAIK) still requires root
privileges, but there're ways around this.

> > - Ship kernel drivers for multiple processors in one file.
> > - Not useful see separate downloads
>
> Pain in the butt see "which installer is right for me?" :)

It even gets worse: you need different modules for different kernel
versions *and* kernel configs. Kernel image and modules strictly
belong together - it's in fact *one* kernel that just happens to be
split off into several files so parts of it can be loaded on-demand.

> I don't want to get into a holy war about out-of-tree kernel drivers,
> because I'm totally on board with getting drivers into the mainline. But
> it doesn't change the fact that I downloaded the wrong nvidia drivers the
> other day because I accidentally grabbed the ia32 package instead of the
> amd64 one. So much for saving bandwidth.

NVidia is a bad reference here. These folks simply don't get their
stuff stable, instead playing around w/ ugly code obfuscation.
No mercy for those jerks.

I'm strongly in favour of prohibiting proprietary kernel drivers.

> I wasn't paying attention. But lots of people wouldn't know which to pick
> even if they were. Nvidia, etc, could certainly put everything in one
> shell script and choose for you, but now we're back at square one again.

If NV wants to stick in their binary crap, they'll have to bite the
bullet of maintaining proper packaging. The fault is on their side,
not on Linux' one.

> > - Transition to a new architecture in incremental steps.
> > - IFF the CPU supports both old and new
>
> A lateral move would be painful (although Apple just did this very thing
> with a FatELF-style solution, albeit with the help of an emulator), but if
> we're talking about the most common case at the moment, x86 to amd64, it's
> not a serious concern.

This is a specific case, which could be handled easily in userland, IMHO.

> Why install Gimp by default if I'm not an artist? Because disk space is
> cheap in the configurations I'm talking about and it's better to have it
> just in case, for the 1% of users that will want it. A desktop, laptop or
> server can swallow a few megabytes to clean up some awkward design
> decisions, like the /lib64 thing.

What's so especially bad on the multilib approach ?

> A few more megabytes installed may cut down on the support load for
> distributions when some old 32 bit program refuses to start at all.

The distro could simply provide a few compat packages.
It even could use a hooked-up ld.so which does appropriate checks
and notify the package manager if some 32bit libs are missing.

> > - One hard drive partition can be booted on different machines with
> > different CPU architectures, for development and experimentation. Same
> > root file system, different kernel and CPU architecture.
> >
> > - Now we are getting desperate.
>
> It's not like this is unheard of. Apple is selling this very thing for 129
> bucks a copy.

Distro issue.
You need to have all packages installed for each supported arch *and*
all applications must be capable of handling different bytesex or
typesizes in their data.

> > - Prepare your app on a USB stick for sneakernet, know it'll work on
> > whatever Linux box you are likely to plug it into.
> >
> > - No I don't because of the dependancies, architecture ordering
> > of data files, lack of testing on each platform and the fact
> > architecture isn't sufficient to define a platform
>
> Yes, it's not a silver bullet. Fedora will not be promising binaries that
> run on every Unix box on the planet.
>
> But the guy with the USB stick? He probably knows the details of every
> machine he wants to plug it into...

Then he's most likely capable of maintaining a multiarch distro.
Leaving out binary application data (see above), it's not such a big
deal - just work-intensive. Using FatELF most likely increases that work.

> It's possible to ship binaries that don't depend on a specific
> distribution, or preinstalled dependencies, beyond the existance of a
> glibc that was built in the last five years or so. I do it every day. It's
> not unreasonable, if you aren't part of the package management network, to
> make something that will run, generically on "Linux."

Good, why do you need FatELF then ?

> There are programs I support that I just simply won't bother moving to
> amd64 because it just complicates things for the end user, for example.

Why don't you just solve that in userland ?

> That is anecdotal, and I apologize for that. But I'm not the only
> developer that's not in an apt repository, and all of these rebuttals are
> anecdotal: "I just use yum [...because I don't personally care about
> Debian users]."

Can't just just make up your own repo ? Is it so hard ?
Just can speak for Gentoo - overlays are quite convenient here.


cu
--
---------------------------------------------------------------------
Enrico Weigelt == metux IT service - http://www.metux.de/
---------------------------------------------------------------------
Please visit the OpenSource QM Taskforce:
http://wiki.metux.de/public/OpenSource_QM_Taskforce
Patches / Fixes for a lot dozens of packages in dozens of versions:
http://patches.metux.de/
---------------------------------------------------------------------

2009-11-10 11:43:42

by Enrico Weigelt

[permalink] [raw]
Subject: Re: package managers [was: FatELF patches...]

* Mikulas Patocka <[email protected]> wrote:

> No, if I compile alpha version of mplayer by hand, it compiles and links
> against whatever libraries I have on my system. If I pull it out of some
> "testing" repository, it is already compiled and linked against libraries
> in the same "testing" repository and it will load the system with crap.

You picked the wrong repo. Use one which contains only the wanted
package, not tons of other stuff. If there is none, create it.

> > Or if you're arguing "you'd give up after seeing it needed an experimental
> > libfoo", I'll counter "you'd hopefully think twice if yum said it was
> > installing a experimental mplayer, and dragging in a whole chain of pre-reqs".
>
> ... or use --disable-libfoo if it insists on newer version and I don't
> want to upgrade it.

Either abdicate the feature requiring libfoo or statically link that
new version. In neither way FatELF will help here.

> Or maybe the configure scripts detects on its own that the library is
> too old will compile without new features. Or it uses libfoo shipped
> with the sources.

Blame mplayer folks for their crappy configure script. Automatically
switching features on presence of some libs (also *against* explicit
options), or - even worse - hard coded system lib pathes (!) is simply
insane. FatELF can't delete ignorance from jerks like Rich Felker ;-O


cu
--
---------------------------------------------------------------------
Enrico Weigelt == metux IT service - http://www.metux.de/
---------------------------------------------------------------------
Please visit the OpenSource QM Taskforce:
http://wiki.metux.de/public/OpenSource_QM_Taskforce
Patches / Fixes for a lot dozens of packages in dozens of versions:
http://patches.metux.de/
---------------------------------------------------------------------

2009-11-10 11:59:49

by Enrico Weigelt

[permalink] [raw]
Subject: Re: package managers [was: FatELF patches...]

* Mikulas Patocka <[email protected]> wrote:

> Some Windows programs force upgrade, but not in yearly cycles, like Linux
> programs. Majority of programs still work on XP shipped in 2001.

You really use old, outdated software on production systems ?

> > Being able to upgrade at least Debian -- and others as well -- without the
> > need to attend the computer is IMHO one of Linux' biggest wins.
>
> When I did it (from Etch to Lenny), two programs that I have compiled
> manually ("vim" and "links") stopped working because Etch and Lenny have
> binary-incompatible libgpm.

Distro issue. If ABI changes, the binary package has get a different name.

> Static linking doesn't work for any program that needs plug-ins (i.e.
> you'd have one glibc statically linked into the program and another glibc
> dynamically linked with a plug-in and these two glibcs will beat each
> other).

Plugins are crap by design. Same situation like w/ kernel modules:
you need them compiled against the right version of the main program,
in fact: on binary packaging they are *part* of the main program which
just happen to be loaded on-demand. If you want to split them up into
several packages, you'll end up in a dependency nightmare.

> I mean this --- the distributions should agree on a common set of
> libraries and their versions (call this for example "Linux-2010
> standard"). This standard should include libraries that are used
> frequently, that have low occurence of bugs and security holes and that
> have never had an ABI change.

See the discussion on stable kernel module ABI.

> Software developers that claim compatibility with the standard will link
> standard libraries dynamically and must use static linking for all
> libraries not included in the standard. Or they can use dynamic linking
> and ship the non-standard library with the application in its private
> directory (so that nothing but that application links against it).

Yeah, ending up in the windows-world maintenance hell. Dozens of packages
will ship dozens of own library copies, making their own private changes,
not keeping up with upstream, so carrying around ancient bugs.

Wonderful idea.


cu
--
---------------------------------------------------------------------
Enrico Weigelt == metux IT service - http://www.metux.de/
---------------------------------------------------------------------
Please visit the OpenSource QM Taskforce:
http://wiki.metux.de/public/OpenSource_QM_Taskforce
Patches / Fixes for a lot dozens of packages in dozens of versions:
http://patches.metux.de/
---------------------------------------------------------------------

2009-11-10 12:16:01

by Bernd Petrovitsch

[permalink] [raw]
Subject: Re: FatELF patches...

On Tue, 2009-11-10 at 11:10 +0100, Enrico Weigelt wrote:
> * Bernd Petrovitsch <[email protected]> wrote:
>
> > The only remotely useful benefit in the long run I can imagine is: The
> > permanent cross-compiling will make AC_TRY_RUN() go away. Or at least
> > the alternatives are applicable without reading the generated
> > configure.sh (and config.log) to guess how to tell the script some
> > details.
>
> hmm, that could be the real killer argument - evolutionarily
> sort out the guys who're too dumb to write proper buildscripts ;-)
Obviously your irony detector triggered;-)

Bernd
--
Firmix Software GmbH http://www.firmix.at/
mobil: +43 664 4416156 fax: +43 1 7890849-55
Embedded Linux Development and Services

2009-11-10 12:40:42

by Bernd Petrovitsch

[permalink] [raw]
Subject: Re: FatELF patches...

On Tue, 2009-11-10 at 12:27 +0100, Enrico Weigelt wrote:
> * Ryan C. Gordon <[email protected]> wrote:
[...]
> > True. If I try to run a PowerPC binary on a Sparc, it fails in any
> > circumstance. I recognize the goal of this post was to shoot down every
If tools like qemu support PowerPC or Sparc (similar to some dialects of
ARM), you your run it through that (on every hardware where qemu as such
runs[0]).
And if you have bimfmt_misc, you can start it like any other "native"
program.

> > single point, but you can't see a scenario where this adds a benefit? Even
> > in a world that's still running 32-bit web browsers on _every major
> > operating system_ because some crucial plugins aren't 64-bit yet?
>
> The root of evil are plugins - even worse: binary-only plugins.
>
> Let's just take browsers: is there any damn good reason for not putting
> those things into their own process (9P provides a fine IPC for that),
> besides stupidity and lazyness of certain devs (yes, this explicitly
> includes mozilla guys) ?
Or implement running 32bit plugins from a 64bit browser.

[...]
> > > - Prepare your app on a USB stick for sneakernet, know it'll work on
> > > whatever Linux box you are likely to plug it into.
Trojan horse deployers paradise BTW.

[....]
> > It's possible to ship binaries that don't depend on a specific
> > distribution, or preinstalled dependencies, beyond the existance of a
> > glibc that was built in the last five years or so. I do it every day. It's
ACK, just link it statically and be done (but then you have other
problems, e.g. "$LIB has an exploit and I have to rebuild and redeploy
$BINARY").

[...]
> > That is anecdotal, and I apologize for that. But I'm not the only
> > developer that's not in an apt repository, and all of these rebuttals are
> > anecdotal: "I just use yum [...because I don't personally care about
> > Debian users]."
It's not that the other way around is much of a difference:-(
And if there is some really interested Debian user, he can package it
for Debian.
IMHO better no package for $DISTRIBUTION than only bad (and old) ones
because some packager (which is not necessarily a core programmer) has
only very little personal interest in the .deb version.

> Can't just just make up your own repo ? Is it so hard ?
> Just can speak for Gentoo - overlays are quite convenient here.
And it's not that hard to write .spec files for RPM (for average
packages - e.g. the kernel and gcc is somewhat different). Just take a
small one (e.g. the one from "trace") and start from there.
SCNR,
Bernd

[09: I never tried to cascade qemu though.
--
Firmix Software GmbH http://www.firmix.at/
mobil: +43 664 4416156 fax: +43 1 7890849-55
Embedded Linux Development and Services

2009-11-10 13:04:00

by Enrico Weigelt

[permalink] [raw]
Subject: Re: FatELF patches...

* Bernd Petrovitsch <[email protected]> wrote:

> > The root of evil are plugins - even worse: binary-only plugins.
> >
> > Let's just take browsers: is there any damn good reason for not putting
> > those things into their own process (9P provides a fine IPC for that),
> > besides stupidity and lazyness of certain devs (yes, this explicitly
> > includes mozilla guys) ?
> Or implement running 32bit plugins from a 64bit browser.

And land in an nightmare: you have to create an kind of in-process jail,
so all referenced 32bit lib references get properly emulated.

Better kickoff the whole idea of plugins at all.


cu
--
---------------------------------------------------------------------
Enrico Weigelt == metux IT service - http://www.metux.de/
---------------------------------------------------------------------
Please visit the OpenSource QM Taskforce:
http://wiki.metux.de/public/OpenSource_QM_Taskforce
Patches / Fixes for a lot dozens of packages in dozens of versions:
http://patches.metux.de/
---------------------------------------------------------------------

2009-11-10 13:17:16

by Alan

[permalink] [raw]
Subject: Re: FatELF patches...

> > Or implement running 32bit plugins from a 64bit browser.
>
> And land in an nightmare: you have to create an kind of in-process jail,
> so all referenced 32bit lib references get properly emulated.

You instead want them out of process. Something that most distributions
seem to have managed.

http://gwenole.beauchesne.info//en/projects/nspluginwrapper

Alan