So, I brought up the idea of a linux/sys directory for kernel-level include
files. A few other people expressed a desire for a 'kernel' dir under
include, parallel with linux.
So I ran into a snag with that scenario. Let's suppose we have
a module developer or a company developing a driver in their own
/home/nvidia/video/drivers/newcard directory. Now they need to include
kernel development files and are used to just doing the:
#include <linux/blahblah.h>
Which works because in a normal compile environment they have /usr/include
in their include path and /usr/include/linux points to the directory
under /usr/src/linux/include.
So if we create a separate /usr/src/linux/include/kernel dir, does that
imply that we'll have a 2nd link:
/usr/include/kernel ==> /usr/src/linux/include/kernel ?
If the idea was to 'hide' kernel interfaces and make them not 'easy'
to include, doesn't providing a 2nd link defeat that?
If we don't provide a 2nd link, how do module writers access kernel
includes?
If the kernel directory is under 'linux' (as in linux/sys), then the
link is already there and we can just say 'don't use sys in apps'. If
we create 'kernel' under 'include', it seems we'll still end up having to
tell users "don't include files under directory X" (either kernel/ or
linux/sys/).
Note that putting kernel as a new directory parallel to linux requires
adding another symlink -- so is that solving anything, or just adding more
administrative "gotchas"?
-linda
--
L A Walsh | Trust Technology, Core Linux, SGI
[email protected] | Voice/Vmail: (650) 933-5338
In article <[email protected]>,
LA Walsh <[email protected]> wrote:
>Which works because in a normal compile environment they have /usr/include
>in their include path and /usr/include/linux points to the directory
>under /usr/src/linux/include.
No, that's a redhat-ism.
Sane distributions simply include a known good copy of
/usr/src/linux/include/{asm,linux} verbatim in their libc6-dev package.
Debian has done that for a long, long time.
Several core glibc developers use Debian; some are even Debian developers...
Mike.
On Thu, 14 Dec 2000, LA Walsh wrote:
> So I ran into a snag with that scenario. Let's suppose we have
> a module developer or a company developing a driver in their own
> /home/nvidia/video/drivers/newcard directory. Now they need to include
> kernel development files and are used to just doing the:
> #include <linux/blahblah.h>
>
> Which works because in a normal compile environment they have /usr/include
> in their include path and /usr/include/linux points to the directory
> under /usr/src/linux/include.
Huh?
% ls -ld /usr/include/linux
drwxr-xr-x 6 root root 18432 Sep 2 22:35 /usr/include/linux/
> So if we create a separate /usr/src/linux/include/kernel dir, does that
> imply that we'll have a 2nd link:
What 2nd link? There should be _no_ links from /usr/include to the
kernel tree. Period. Case closed.
Stuff in /usr/include is a private libc copy extracted from some kernel
version, which may have _nothing_ to do with the kernel you are developing for.
In the situation above they should have -I<wherever_the_tree_lives>/include
in CFLAGS. Always had to. No links, no pain in ass, no interference with
userland compiles.
IOW, let them fix their Makefiles.
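A minimal sketch of that fix, written as shell rather than make for brevity;
KERNEL_DIR and newcard.c are illustrative names, not mandated conventions:

```shell
# Sketch only: build an out-of-tree module against an explicit kernel tree.
# KERNEL_DIR defaults to a common location but is whatever tree you target.
KERNEL_DIR=${KERNEL_DIR:-/usr/src/linux}

# The point of the advice above: -I points at the tree being built against,
# and /usr/include is never involved in a kernel-module compile.
CFLAGS="-D__KERNEL__ -DMODULE -O2 -Wall -I${KERNEL_DIR}/include"

echo "cc $CFLAGS -c newcard.c"
```

The same idea in a Makefile is just `CFLAGS += -I$(KERNEL_DIR)/include`.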
On 15 Dec 2000, Miquel van Smoorenburg wrote:
> In article <[email protected]>,
> LA Walsh <[email protected]> wrote:
> >Which works because in a normal compile environment they have /usr/include
> >in their include path and /usr/include/linux points to the directory
> >under /usr/src/linux/include.
>
> No, that's a redhat-ism.
Not even all versions of redhat do that.
> >Which works because in a normal compile environment they have /usr/include
> >in their include path and /usr/include/linux points to the directory
> >under /usr/src/linux/include.
>
> No, that's a redhat-ism.
Umm, it's a most-people-except-Debian-ism. People relied on it despite it
being wrong. RH7 ships with a matching library set of headers. I got to close
a lot of bug reports explaining to people that the new setup was in fact
right 8(
In article <[email protected]>,
Alexander Viro <[email protected]> wrote:
>On 15 Dec 2000, Miquel van Smoorenburg wrote:
>
>> In article <[email protected]>,
>> LA Walsh <[email protected]> wrote:
>> >Which works because in a normal compile environment they have /usr/include
>> >in their include path and /usr/include/linux points to the directory
>> >under /usr/src/linux/include.
>>
>> No, that's a redhat-ism.
>
>Not even all versions of redhat do that.
If that has been fixed recently, that is a very Good Thing (tm).
Now if in 2.5 <kernel>/include/net were moved to <kernel>/linux/net,
so that user-level code that uses <net/whatever.h> could be compiled
with -I/usr/src/some/kernel/42.42/include/, I'd be even happier.
Mike.
On Fri, 15 Dec 2000, Alan Cox wrote:
> > >Which works because in a normal compile environment they have /usr/include
> > >in their include path and /usr/include/linux points to the directory
> > >under /usr/src/linux/include.
> >
> > No, that's a redhat-ism.
>
> Umm, it's a most-people-except-Debian-ism. People relied on it despite it
> being wrong. RH7 ships with a matching library set of headers. I got to close
> a lot of bug reports explaining to people that the new setup was in fact
> right 8(
Actually, I suspect that quite a few of us had done that since long -
IIRC I've got burned on 1.2/1.3 and decided that I had enough. Bugger if I
remember what exactly it was - ISTR that it was restore(8) built with
1.3.<something> headers and playing funny games on 1.2, but it might be
something else...
Alexander Viro wrote:
>
> Actually, I suspect that quite a few of us had done that since long -
> IIRC I've got burned on 1.2/1.3 and decided that I had enough. Bugger if I
> remember what exactly it was - ISTR that it was restore(8) built with
> 1.3.<something> headers and playing funny games on 1.2, but it might be
> something else...
So then what's the correct header tree to put in /usr/include/linux? I
could use the stock 2.2.14-patched headers that came with the dist, but
how often does it need to be updated? Or should I use the latest 2.2?
--
"Windows for Dummies? Isn't that redundant?"
On Thu, 14 Dec 2000, David Riley wrote:
> Alexander Viro wrote:
> >
> > Actually, I suspect that quite a few of us had done that since long -
> > IIRC I've got burned on 1.2/1.3 and decided that I had enough. Bugger if I
> > remember what exactly it was - ISTR that it was restore(8) built with
> > 1.3.<something> headers and playing funny games on 1.2, but it might be
> > something else...
>
> So then what's the correct header tree to put in /usr/include/linux? I
> could use the stock 2.2.14-patched headers that came with the dist, but
> how often does it need to be updated? Or should I use the latest 2.2?
Whatever your libc was built against. It shouldn't matter that much,
but when shit hits the fan... you really don't want to be there.
Look at it that way: you don't want to build some object files with one
set of headers, some - with another and link them together. Now,
s/some object files/libc/. With a minimal luck you will be OK, but
it's easier not to ask for trouble in the first place.
On Thu, 14 Dec 2000, Alexander Viro wrote:
>
>
> On Thu, 14 Dec 2000, David Riley wrote:
>
> > Alexander Viro wrote:
> > >
> > > Actually, I suspect that quite a few of us had done that since long -
> > > IIRC I've got burned on 1.2/1.3 and decided that I had enough. Bugger if I
> > > remember what exactly it was - ISTR that it was restore(8) built with
> > > 1.3.<something> headers and playing funny games on 1.2, but it might be
> > > something else...
> >
> > So then what's the correct header tree to put in /usr/include/linux? I
> > could use the stock 2.2.14-patched headers that came with the dist, but
> > how often does it need to be updated? Or should I use the latest 2.2?
>
> Whatever your libc was built against. It shouldn't matter that much,
> but when shit hits the fan... you really don't want to be there.
>
> Look at it that way: you don't want to build some object files with one
> set of headers, some - with another and link them together. Now,
> s/some object files/libc/. With a minimal luck you will be OK, but
> it's easier not to ask for trouble in the first place.
Yep. At one point, about six months ago, I recompiled glibc 2.0.7(?)
against 2.2.15(?) with the USB backport due to occasional USB v4l
device-related bus locks, recompiled the v4l app I was using (the w3cam
package I think), and the problems mostly went away. As far as I understand,
it's a matter of kernel/userland separation. But again, sometimes you
just have to update your libc.
> Huh?
> % ls -ld /usr/include/linux
> drwxr-xr-x 6 root root 18432 Sep 2 22:35 /usr/include/linux/
>
> > So if we create a separate /usr/src/linux/include/kernel dir, does that
> > imply that we'll have a 2nd link:
>
> What 2nd link? There should be _no_ links from /usr/include to the
> kernel tree. Period. Case closed.
---
> ll -d /usr/include/linux
lrwxrwxrwx 1 root root 26 Dec 25 1999 /usr/include/linux -> ../src/linux/include/linux/
---
I've seen this setup on RH, SuSE and Mandrake systems. I thought
this was somehow normal practice?
> Stuff in /usr/include is a private libc copy extracted from some kernel
> version, which may have _nothing_ to do with the kernel you are developing for.
> In the situation above they should have
> -I<wherever_the_tree_lives>/include
> in CFLAGS. Always had to. No links, no pain in ass, no interference with
> userland compiles.
>
> IOW, let them fix their Makefiles.
---
Why would Linus want two separate directories -- one for 'kernel-only'
include files and one for kernel files that may be included in user
land? It seems to me that if /usr/include/linux were normally a separate
directory, there would be no need for him to mention a desire to create
a separate kernel-only include directory, so my assumption was that the
linked behavior was somehow 'normal'.
I think many source packages only use "-I /usr/include" and
make no provision for kernel header files in different locations,
which would need to be entered by hand. It is difficult
to create an automatic package-regeneration mechanism like RPM if such
details need to be entered for each package.
So what you seem to be saying, if I may rephrase, is that
the idea of automatic package generation for some given kernel is
impractical, because users should be expected to edit each package's
makefile for their own setup, with no expectation from the package
designers of a standard kernel include location?
I'm not convinced this is a desirable goal.
:-/
-linda
"LA Walsh" <[email protected]> writes:
> I've seen this setup on RH, SuSE and Mandrake systems. I thought
> this was somehow normal practice?
it was done that way for us up until 7.2, but in the next version we are
going to switch to a different headers directory like Debian/RH do...
--
MandrakeSoft Inc http://www.chmouel.org
--Chmouel
Alexander Viro wrote:
> In the situation above they should have -I<wherever_the_tree_lives>/include
> in CFLAGS. Always had to. No links, no pain in ass, no interference with
> userland compiles.
As long as there's a standard location for "<wherever_the_tree_lives>",
this is fine. In most cases, the tree one expects to find is "roughly
the kernel we're running". Actually, maybe a script to provide the
path would be even better (*). Such a script could also complain if
there's an obvious problem.
I think there are three possible directions wrt visibility of kernel
headers:
- none at all - anything that needs kernel headers needs to provide them
itself
- kernel-specific extensions only; libc is self-contained, but user
space can get items from .../include/linux (the current glibc
approach)
- share as much as possible; libc relies on kernel for "standard"
definitions (the libc5 approach, and also reasonably feasible
today)
My personal preference is the third direction, because it simplifies the
deployment of new "standard" elements, and changes to existing interfaces.
The first direction would effectively discourage any new interfaces or
changes to existing ones, while the second direction allows at least a
moderate amount of flexibility for kernel-specific interfaces. In my
experiments with newlib, I was largely able to use the third approach.
I don't want to re-open the discussion on which way is better for glibc,
but I think we can agree that "clean" kernel headers are always a good
idea.
So we get at least the following levels of visibility:
0) kernel-internal interfaces; should only be visible to "base" kernel
1) public kernel interfaces; should be visible to modules (exposing
type 0 interfaces to modules may create ways to undermine the GPL)
2) interfaces to kernel-specific user space tools (modutils, mount,
etc.); should be visible to user space that really wants them
3) interface to common non-POSIX extensions (BSD system calls, etc.);
should be visible to user space on request, or on an opt-out basis
4) interfaces to POSIX elements (e.g. struct stat, mode_t); should be
visible unconditionally (**)
Distinguishing level 0 and 1 is always a little difficult. It seems
that a "kmodcc" that inserts the necessary flags would be useful there
(*). There needs to be a clear distinction between level 1 and 2 (the
current #ifdef __KERNEL__). This boundary could certainly be improved
by moving user-space-visible header files to a separate directory.
Levels 2, 3, and 4 are more difficult to separate, because the people
who know what should go where are not always the ones writing the
kernel headers. Perhaps some more input from libc hackers could help.
(*) Crude examples for such scripts (for newlib): newlib-flags and
newlib-cc in
ftp://icaftp.epfl.ch/pub/people/almesber/misc/newlib-linux/
newlib-linux-20.tar.gz
(**) Multiple versions of the interface can be a problem here, e.g.
struct oldold_utsname vs. struct old_utsname vs.
struct new_utsname. It would be nice to have them prefixed at
least with __. Maybe a __LATEST_utsname macro would be useful
too ;-) (I know, breaks binary compatibility, hence the
smiley.)
So ... what's the opinion on slowly introducing a redirection via scripts?
- Werner
--
_________________________________________________________________________
/ Werner Almesberger, ICA, EPFL, CH [email protected] /
/_IN_N_032__Tel_+41_21_693_6621__Fax_+41_21_693_6610_____________________/
On Fri, Dec 15, 2000 at 12:14:04AM +0000, Miquel van Smoorenburg wrote:
> In article <[email protected]>,
> LA Walsh <[email protected]> wrote:
> >Which works because in a normal compile environment they have /usr/include
> >in their include path and /usr/include/linux points to the directory
> >under /usr/src/linux/include.
>
> No, that a redhat-ism.
>
> Sane distributions simply include a known good copy of
> /usr/src/linux/include/{asm,linux} verbatim in their libc6-dev package.
The glibc FAQ still has this in it:
2.17. I have /usr/include/net and /usr/include/scsi as symlinks
into my Linux source tree. Is that wrong?
{PB} This was necessary for libc5, but is not correct when using glibc.
Including the kernel header files directly in user programs usually does not
work (see question 3.5). glibc provides its own <net/*> and <scsi/*> header
files to replace them, and you may have to remove any symlink that you have
in place before you install glibc. However, /usr/include/asm and
/usr/include/linux should remain as they were.
It's the version that's in cvs, I just did a cvs update. It's
been in it for ages. If it's wrong, someone *please* correct it.
Kurt
> On Fri, Dec 15, 2000 at 12:14:04AM +0000, Miquel van Smoorenburg wrote:
> It's the version that's in cvs, I just did a cvs update. It's
> been in it for ages. If it's wrong, someone *please* correct it.
I think this is the important part.
This subject has come up quite a few times in the past
couple of weeks on the scyld (eepro/tulip) mailing lists.
Essentially, whatever solution is implemented MUST ensure:
1 - glibc will work properly (the headers in /usr/include/* don't
change in an incompatible manner)
2 - programs that need to compile against the current kernel MUST
be able to do so in a quasi-predictable manner.
Here are some suggestions (feel free to hack this to pieces, but please
don't let this fall by the wayside with everyone doing it differently!
We need consensus! :)
- /usr/include/[linux|asm] will be directories, not symlinks, and
their content will be the headers that glibc was compiled against.
- /usr/include/kernel will be a symlink to the target kernel headers.
i.e. /usr/src/linux/include for most of us.
This way, anything that needs to use the 'default' methods of accessing
these headers will be able to function as usual, and anyone who needs
to access the specific kernel headers can simply do -I/usr/include/kernel
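The layout above can be sketched as follows, using a scratch directory in
place of /usr/include so the example is safe to run; all paths are
illustrative:

```shell
# Stand-in for /usr/include; nothing here touches the real system dirs.
demo=/tmp/demo-usr-include
mkdir -p "$demo/linux" "$demo/asm"            # real directories: glibc's headers
ln -sfn /usr/src/linux/include "$demo/kernel" # symlink: the target kernel tree

# A module build would then use: cc -I$demo/kernel ...
ls -ld "$demo/kernel"
```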
I know that for my projects this is essentially what I do: I make sure that
all of my separate-from-kernel compiling that depends on the kernel
gets redone every time I change the kernel,
but I only change /usr/include/linux when I recompile glibc.
We really need a documented way to deal with this!
It's getting silly, the number of questions people ask!
--
Dana Lacoste
Linux Developer
Peregrine Systems
On Fri, 15 Dec 2000, Werner Almesberger wrote:
> Alexander Viro wrote:
> > In the situation above they should have -I<wherever_the_tree_lives>/include
> > in CFLAGS. Always had to. No links, no pain in ass, no interference with
> > userland compiles.
>
> As long as there's a standard location for "<wherever_the_tree_lives>",
> this is fine. In most cases, the tree one expects to find is "roughly
> the kernel we're running". Actually, maybe a script to provide the
> path would be even better (*). Such a script could also complain if
> there's an obvious problem.
>
> I think there are three possible directions wrt visibility of kernel
> headers:
>
> - none at all - anything that needs kernel headers needs to provide them
> itself
> - kernel-specific extensions only; libc is self-contained, but user
> space can get items from .../include/linux (the current glibc
> approach)
> - share as much as possible; libc relies on kernel for "standard"
> definitions (the libc5 approach, and also reasonably feasible
> today)
>
> My personal preference is the third direction, because it simplifies the
> deployment of new "standard" elements, and changes to existing interfaces.
> The first direction would effectively discourage any new interfaces or
> changes to existing ones, while the second direction allows at least a
> moderate amount of flexibility for kernel-specific interfaces. In my
> experiments with newlib, I was largely able to use the third approach.
>
> I don't want to re-open the discussion on which way is better for glibc,
> but I think we can agree that "clean" kernel headers are always a good
> idea.
>
> So we get at least the following levels of visibility:
>
> 0) kernel-internal interfaces; should only be visible to "base" kernel
> 1) public kernel interfaces; should be visible to modules (exposing
> type 0 interfaces to modules may create ways to undermine the GPL)
> 2) interfaces to kernel-specific user space tools (modutils, mount,
> etc.); should be visible to user space that really wants them
> 3) interface to common non-POSIX extensions (BSD system calls, etc.);
> should be visible to user space on request, or on an opt-out basis
> 4) interfaces to POSIX elements (e.g. struct stat, mode_t); should be
> visible unconditionally (**)
>
> Distinguishing level 0 and 1 is always a little difficult. It seems
> that a "kmodcc" that inserts the necessary flags would be useful there
> (*). There needs to be a clear distinction between level 1 and 2 (the
> current #ifdef __KERNEL__). This boundary could certainly be improved
> by moving user-space-visible header files to a separate directory.
>
> Levels 2, 3, and 4 are more difficult to separate, because the people
> who know what should go where are not always the ones writing the
> kernel headers. Perhaps some more input from libc hackers could help.
>
> (*) Crude examples for such scripts (for newlib): newlib-flags and
> newlib-cc in
> ftp://icaftp.epfl.ch/pub/people/almesber/misc/newlib-linux/
> newlib-linux-20.tar.gz
>
> (**) Multiple versions of the interface can be a problem here, e.g.
> struct oldold_utsname vs. struct old_utsname vs.
> struct new_utsname. It would be nice to have them prefixed at
> least with __. Maybe a __LATEST_utsname macro would be useful
> too ;-) (I know, breaks binary compatibility, hence the
> smiley.)
>
> So ... what's the opinion on slowly introducing a redirection via scripts?
Just out of curiosity, what would happen with redirection if your source
tree for 'the currently running kernel' version happens to be configured
for a different 'the currently running kernel', perhaps a machine of a
foreign arch that you are cross-compiling for?
I do this: I use ONE machine to compile kernels for five: four i386 and
one SUN4C. My other machines don't even HAVE /usr/src/linux, so where does
this redirection leave them?
[email protected] wrote:
> Just out of curiosity, what would happen with redirection if your source
> tree for 'the currently running kernel' version happens to be configured
> for a different 'the currently running kernel', perhaps a machine of a
> foreign arch that you are cross-compiling for?
Two choices:
1) try to find an alternative. If there's none, fail.
2) make the corresponding asm or asm/arch branch available (non-trivial
and maybe not desirable)
> I do this: I use ONE machine to compile kernels for five: four i386 and
> one SUN4C. My other machines don't even HAVE /usr/src/linux, so where does
> this redirection leave them?
Depends on your distribution: if it doesn't install any kernel-specific
headers, you wouldn't be able to compile programs requiring anything
beyond what is provided by your libc. Otherwise, there could be a
default location (just as /usr/src/linux is a default location now).
The main advantage of a script would be that one could easily compile
for multiple kernels, e.g. with
export TARGET_KERNEL=2.0.4
make
Even if your system is running 2.4.13-test1.
The architecture could be obtained from the tree or the tree could be
picked based on the architecture. This is a policy decision that could
be hidden in the script.
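A crude sketch of such a script; the variable names and the
/usr/src/linux-<version> layout are assumptions, not established
conventions:

```shell
# Sketch: map TARGET_KERNEL to an include path for the compile.
TARGET_KERNEL=${TARGET_KERNEL:-2.0.4}   # a real script would default to `uname -r`
KERNEL_TREE="/usr/src/linux-${TARGET_KERNEL}"
KCFLAGS="-I${KERNEL_TREE}/include"

# Complain (but don't fail) if the tree is obviously absent.
[ -d "$KERNEL_TREE" ] || echo "warning: no tree at $KERNEL_TREE" >&2

echo "$KCFLAGS"
```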
- Werner
--
_________________________________________________________________________
/ Werner Almesberger, ICA, EPFL, CH [email protected] /
/_IN_N_032__Tel_+41_21_693_6621__Fax_+41_21_693_6610_____________________/
> From: Werner Almesberger [mailto:[email protected]]
>
> I think there are three possible directions wrt visibility of kernel
> headers:
>
> - none at all - anything that needs kernel headers needs to provide them
> itself
> - kernel-specific extensions only; libc is self-contained, but user
> space can get items from .../include/linux (the current glibc
> approach)
> - share as much as possible; libc relies on kernel for "standard"
> definitions (the libc5 approach, and also reasonably feasible
> today)
>
> So we get at least the following levels of visibility:
>
> 0) kernel-internal interfaces; should only be visible to "base" kernel
> 1) public kernel interfaces; should be visible to modules (exposing
> type 0 interfaces to modules may create ways to undermine the GPL)
> 2) interfaces to kernel-specific user space tools (modutils, mount,
> etc.); should be visible to user space that really wants them
> 3) interface to common non-POSIX extensions (BSD system calls, etc.);
> should be visible to user space on request, or on an opt-out basis
> 4) interfaces to POSIX elements (e.g. struct stat, mode_t); should be
> visible unconditionally (**)
---
The problem came up in a case where I had a kernel module that included
the standard memory-allocation header <linux/malloc.h>. That file, in turn,
included <linux/slab.h>, then that included <linux/mm.h> and <linux/cache.h>.
From there more and more files were included until it got down to files in
a kernel/kernel-module-only directory, <include/kernel>. It was at that
point that the externally compiled module "barfed", because, like many
externally compiled modules, it expected that it could simply access all of
its needed files through /usr/include/linux, which it gets by putting
/usr/include in its path. I've seen commercial modules like vmware's kernel
modules use a similar system where they expect /usr/include/linux to contain
or point to headers for the currently running kernel.
So I'm doing my compile in a 'chrooted' environment where the headers
for the new kernel are installed. However, now, with the new include/kernel
dir in the linux kernel, modules compiled separately out of the kernel
tree have no way of finding hidden kernel include files -- even though
those files may be needed for modules. Precisely: in this case, "memory
allocation" for the kernel (not userland) was needed. Arguably, this
belonged in a kernel-only directory. If that directory is not
/usr/include/linux or *under* /usr/include/linux, then modules need a
separate way to find it -- namely a new link in /usr/include (<kernel>)
pointing to the new location -- or we move the internal kernel interfaces
to something under the current <include/linux>, so that while the intent
of "kernel-only" is made clear, they are still accessible in the way they
already are, thus not requiring rewrites of all the existing makefiles.
I think in my specific case, perhaps, linux/malloc.h *is* a public
interface that is to be included by module writers and belongs in the
'public' interface dir -- and that's great. But it includes files like
'slab.h' which are kernel-mm-specific and may change in the future. Those
files should be in the private interface dir. But that dir may still need
to be included by the public interface (malloc) file.
So the user should/needs to be blind to how that is handled. They
shouldn't have to change their makefiles or add new links just because
how 'malloc' implements its functionality changes. This would imply that
kernel-only interfaces need to be include-able within the current
model -- just moved out of the existing "public-for-module" interface
directory (/usr/include/linux). For that to happen transparently, that
directory needs to exist under the current hierarchy (under
/usr/include/linux), not parallel to it.
So at that point it becomes what we should name it under
/usr/include/linux. Should it be:
1) "/usr/include/linux/sys" (my preference)
2) "/usr/include/linux/kernel"
3) "/usr/include/linux/private"
4) "/usr/include/linux/kernel-only"
5) <other>
???
Any other solution, as I see it, would break existing module code.
Comments?? Any preferences from /dev/linus?
Any flaws in my logic chain?
tnx,
-linda
Werner Almesberger wrote:
>
> Alexander Viro wrote:
> > In the situation above they should have -I<wherever_the_tree_lives>/include
> > in CFLAGS. Always had to. No links, no pain in ass, no interference with
> > userland compiles.
>
> As long as there's a standard location for "<wherever_the_tree_lives>",
> this is fine. In most cases, the tree one expects to find is "roughly
> the kernel we're running". Actually, maybe a script to provide the
> path would be even better (*). Such a script could also complain if
> there's an obvious problem.
I personally think the definition of an environment variable to point to
a header file location is the right way to go. Same with tools -- that
way I can say build with $(TOOLDIR), which pulls whatever tools that
tree uses, and use $(INCDIR) as my kernel include files.
Then you can build using whatever header files you want to use, using
whatever compilers/linkers/whatever you want to. So:
TOOLDIR=/src/gcctree
INCDIR=/src/2.2.18
or:
TOOLDIR=/src/egcstree
INCDIR=/src/2.4.0-test12-custom
Then a 'make' from my $(TOPDIR) builds everything with the tools in
$(TOOLDIR) and uses -I$(INCDIR) for header files. It's a beautiful
thing.
--Matt
My solution to this has always been to make a cross compiler environment
(even if it is the same processor family). Thusly i386-linux-gcc knows
that the target system's include files are in:
/usr/local/<project>-tools/i386-linux/include (/linux, /asm)
The other advantage to this is that I can switch my host environment
(within reason - compatible host glibcs, ok) and not have to change the
target compiler.
Werner Almesberger wrote:
> [email protected] wrote:
>
>> Just out of curiosity, what would happen with redirection if your source
>> tree for 'the currently running kernel' version happens to be configured
>> for a different 'the currently running kernel', perhaps a machine of a
>> foreign arch that you are cross-compiling for?
>
>
> Two choices:
> 1) try to find an alternative. If there's none, fail.
> 2) make the corresponding asm or asm/arch branch available (non-trivial
> and maybe not desirable)
>
>
>> I do this: I use ONE machine to compile kernels for five: four i386 and
>> one SUN4C. My other machines don't even HAVE /usr/src/linux, so where does
>> this redirection leave them?
>
>
> Depends on your distribution: if it doesn't install any kernel-specific
> headers, you wouldn't be able to compile programs requiring anything
> beyond what it provided by your libc. Otherwise, there could be a
> default location (such as /usr/src/linux is a default location now).
>
> The main advantage of a script would be that one could easily compile
> for multiple kernels, e.g. with
>
> export TARGET_KERNEL=2.0.4
> make
>
> Even if your system is running 2.4.13-test1.
>
> The architecture could be obtained from the tree or the tree could be
> picked based on the architecture. This is a policy decision that could
> be hidden in the script.
>
> - Werner
--
Joe deBlaquiere
Red Hat, Inc.
307 Wynn Drive
Huntsville AL, 35805
voice : (256)-704-9200
fax : (256)-837-3839
In article <[email protected]>,
LA Walsh <[email protected]> wrote:
>It was at that point that the externally compiled module "barfed",
>because, like many externally compiled modules, it expected that it
>could simply access all of its needed files through /usr/include/linux,
>which it gets by putting /usr/include in its path. I've seen commercial
>modules like vmware's kernel modules use a similar system where they
>expect /usr/include/linux to contain or point to headers for the
>currently running kernel.
vmware asks you nicely:
    Where are the running kernel's include files? [/usr/src/linux/include]
and then compiles the modules with -I/path/to/include/files.
In fact, the 2.2.18 kernel already puts a 'build' symlink in
/lib/modules/`uname -r` that points to the kernel source,
which should be sufficient to solve this problem... almost.
It doesn't tell you the specific flags used to compile the kernel,
such as -m486 -DCPU=686
> So at that point it becomes what we should name it under
>/usr/include/linux. Should it be:
>1) "/usr/include/linux/sys" (my preference)
/usr should be static. It could be a read-only NFS mount.
Putting system-dependent configuration info there (which a
/usr/include/linux/sys symlink *is*) is wrong.
I think /lib/modules/`uname -r`/ should contain a script that
reproduces the CFLAGS used to compile the kernel. That way,
you not only get the correct -I/path/to/kernel/include but
the other compile-time flags (like -m486 etc) as well.
# sh /lib/modules/`uname -r`/kconfig --cflags
-D__KERNEL__ -I/usr/src/linux-2.2.18pre24/include -Wall -Wstrict-prototypes -O2 -fomit-frame-pointer -fno-strict-aliasing -pipe -fno-strength-reduce -m486 -malign-loops=2 -malign-jumps=2 -malign-functions=2 -DCPU=686
Standard module Makefile that will _always_ work:
#! /usr/bin/make -f
CC = $(shell /lib/modules/`uname -r`/kconfig --cc)
CFLAGS = $(shell /lib/modules/`uname -r`/kconfig --cflags)
module.o:
$(CC) $(CFLAGS) -c module.c
Flags could be:
--check Consistency check - are the header files there and
does include/linux/version.h match
--cc Outputs the CC variable used to compile the kernel
(important now that we have gcc, kgcc, gcc272)
--arch Outputs the ARCH variable
--cflags Outputs the CFLAGS
--include-path Outputs just the path to the include files
Generating and installing this 'kconfig' script as part of
"make modules_install" is trivial, and would solve all problems.
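A minimal sketch of what such a generated 'kconfig' script might look
like. Note the CC/ARCH/CFLAGS values below are invented placeholders;
in a real script "make modules_install" would bake in the actual values
used for the kernel build:

```shell
#!/bin/sh
# Hypothetical 'kconfig' sketch; all values here are placeholders
# that "make modules_install" would fill in at kernel build time.
KCC="gcc"
KARCH="i386"
KINCLUDE="/usr/src/linux-2.2.18/include"
KCFLAGS="-D__KERNEL__ -I$KINCLUDE -Wall -Wstrict-prototypes -O2"

kconfig() {
    case "$1" in
        --cc)           echo "$KCC" ;;
        --arch)         echo "$KARCH" ;;
        --cflags)       echo "$KCFLAGS" ;;
        --include-path) echo "$KINCLUDE" ;;
        --check)        # sanity check: are the headers actually there?
                        test -f "$KINCLUDE/linux/version.h" ;;
        *)              echo "usage: kconfig --cc|--arch|--cflags|--include-path|--check" >&2
                        return 2 ;;
    esac
}

# Dispatch only when given arguments, so the file can also be sourced.
if [ $# -gt 0 ]; then kconfig "$@"; fi
```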
Well, that's as far as I can see; of course, I might have missed something..
Mike.
--
RAND USR 16514
LA Walsh wrote:
> I think in my specific case, perhaps, linux/malloc.h *is* a public
> interface that is to be included by module writers and belongs in the
> 'public interface dir -- and that's great. But it includes files like
> 'slab.h' which are kernel mm-specific that may change in the future.
As far as I understand the scenario you're describing, this seems to
be the only problem. <public>/malloc.h shouldn't need to include
<private>/slab.h.
If there's anything <private>/slab.h provides (either directly or
indirectly) that is needed in <public>/malloc.h, that should either be
in another public header, or malloc.h actually shouldn't depend on it.
Exception: opaque types; there one would have to go via a __ identifier,
i.e.
<public>/foo.h defines struct __foo ...;
<public>/bar.h includes <public>/foo.h
and uses #define FOOSIZE sizeof(struct __foo)
<private>/foo.h either typedef struct __foo foo_t;
or #define foo __foo /* ugly */
Too bad there's no typedef struct __foo struct foo;
I don't think restructuring the headers in this way would cause
a long period of instability. The main problem seems to be to
decide what is officially private and what isn't.
> Any other solution, as I see it, would break existing module code.
Hmm, I think what I've outlined above wouldn't break more code than
your approach. Obviously, modules currently using "private" interfaces
are in trouble either way.
- Werner
--
_________________________________________________________________________
/ Werner Almesberger, ICA, EPFL, CH [email protected] /
/_IN_N_032__Tel_+41_21_693_6621__Fax_+41_21_693_6610_____________________/
Joe deBlaquiere wrote:
> My solution to this has always been to make a cross compiler environment
;-))) I think lots of people would really enjoy to have "build a
cross-gcc" added to the prerequisites for installing some driver
module ;-)
I know, it's not *that* bad. But it still adds quite a few possible
points of failure. Also, it adds a fair amount of overhead to any
directory name change or creation of a new kernel tree.
> The other advantage to this is that I can switch my host environment
> (within reason - compatible host glibcs, ok) and not have to change the
> target compiler.
Hmm, I don't quite understand what you mean here.
- Werner
Matt D. Robinson wrote:
> I personally think the definition of an environment variable to point to
> a header file location is the right way to go.
I see two disadvantages of this, compared to a script:
- need to hard-code a default (unless we assume the variables are always
set)
- the way how environment variables are propagated
A script-based approach has the advantage that one can make a single
change (to a file) that instantly affects the whole local environment
(be this system-wide, per-user, or whatever). So there's no risk of
typing "make" to that forgotten xterm and an incompatible build
starts.
I like environment variables as a means to override auto-detected
defaults, though.
Also, environment variables don't solve the problem of conveniently
providing other compiler arguments (the kmodcc idea - the problem is
very old, but I think it's still not solved).
- Werner
> From: Werner Almesberger [mailto:[email protected]]
> Sent: Friday, December 15, 2000 1:21 PM
> I don't think restructuring the headers in this way would cause
> a long period of instability. The main problem seems to be to
> decide what is officially private and what isn't.
---
If someone wants to restructure headers, that's fine. I was only
trying to understand the confusingly stated intentions of Linus. I
was attempting to fit into those intentions, not change the world.
> > Any other solution, as I see it, would break existing module code.
>
> Hmm, I think what I've outlined above wouldn't break more code than
> your approach. Obviously, modiles currently using "private" interfaces
> are in trouble either way.
---
You've misunderstood. My approach would break *nothing*.
If a module-public include file includes a private one, it would still
work, since 'sys' would be a directory under 'include/linux'. No new
links would need to be added or referenced. Thus nothing breaks.
-l
On 2000/12/15 Werner Almesberger wrote:
> LA Walsh wrote:
>
> Exception: opaque types; there one would have to go via a __ identifier,
> i.e.
>
> <public>/foo.h defines struct __foo ...;
> <public>/bar.h includes <public>/foo.h
> and uses #define FOOSIZE sizeof(struct __foo)
> <private>/foo.h either typedef struct __foo foo_t;
> or #define foo __foo /* ugly */
>
Easier: public kernel interfaces only work through pointers.
<public>/foo.h typedef struct foo foo_t;
foo_t* foo_new();
<private>/foo.h includes <public>/foo.h
struct foo { ............... };
and uses #define FOOSIZE sizeof(foo_t)
Drawback: public access is slow (always through foo_set_xxxx_field(foo_t*))
private access from kernel or modules is fast (foo_t->x = ...)
Advantage: kernel can change, foo_t internals can change and it is binary
compatible. Even public headers can be kernel version
independent.
Too kind-of-classroom-not-real-world-useless-thing ?
All depends on public access needing full fast paths...
--
Juan Antonio Magallon Lacarta #> cd /pub
mailto:[email protected] #> more beer
Linux werewolf 2.2.19-pre1 #1 SMP Fri Dec 15 22:25:20 CET 2000 i686
Werner Almesberger wrote:
> Joe deBlaquiere wrote:
>
>> My solution to this has always been to make a cross compiler environment
>
>
> ;-))) I think lots of people would really enjoy to have "build a
> cross-gcc" added to the prerequisites for installing some driver
> module ;-)
>
> I know, it's not *that* bad. But it still adds quite a few possible
> points of failure. Also, it adds a fair amount of overhead to any
> directory name change or creation of a new kernel tree.
>
might be a good newbie filter... but actually the best thing about it is
that the compiler people of the world might make generating a proper
cross-toolchain less difficult by one or two orders of magnitude...
>
>> The other advantage to this is that I can switch my host environment
>> (within reason - compatible host glibcs, ok) and not have to change the
>> target compiler.
>
>
> Hmm, I don't quite understand what you mean here.
>
This way I can upgrade my host system from RH6.2 to RH7 and not worry
about compiler differences affecting my kernel builds for the various
projects I'm working on... including systems based on 2.0, 2.2 and 2.4...
If anybody thinks gcc-2.96 messes up a 2.4 kernel, you should see what
happens when you compile 2.0.33 ;o).
> - Werner
--
Joe deBlaquiere
Red Hat, Inc.
307 Wynn Drive
Huntsville AL, 35805
voice : (256)-704-9200
fax : (256)-837-3839
On Fri, 15 Dec 2000, Dana Lacoste wrote:
> We really need a documented way to deal with this! It's getting silly
> the number of questions that people ask!
Please, that would be helpful - I'm still using a heavily mutated
Slackware 3.1 that's been hacked up to the same level (if not beyond) as
Red Hat 6.2, but the kernel headers structure is still Slackware 3.1!
Cheers,
Alex
--
The truth is out there.
http://www.tahallah.clara.co.uk
J . A . Magallon wrote:
> Easier: public kernel interfaces only work through pointers.
Requires more elaborate wrappers or a new layer of wrapper functions
around system calls, if you want to make this completely general.
Also, doesn't provide FOOSIZE to "public" space.
> Too kind-of-classroom-not-real-world-useless-thing ?
I'm afraid so ...
I don't think there are many opaque types where there's no trivial
solution. Actually, I don't think there are many opaque types at
kernel APIs to start with. The one I know offhand is atm_kptr_t
in include/linux/atmapi.h, in this case, there's little risk in
exposing the internal structure.
So I'd consider opaque types more as a hypothetical obstacle.
- Werner
Joe deBlaquiere wrote:
> but actually the best thing about it is
> that the compiler people of the work might make generating a proper
> cross-toolchain less difficult by one or two magnitudes...
You have a point here ... particularly gcc-glibc interdependencies are
a little irritating (not sure if they still cause build failures - they
were great fun in egcs-1.1b)
> This way I can upgrade my host system from RH6.2 to RH7 and not worry
> about compiler differences affecting my kernel builds for the various
> projects I'm working on... including systems based on 2.0, 2.2 and 2.4...
Ah, you mean you _do_ use a different compiler, but you _don't_ have to
change the compiler you normally invoke with "gcc" ? I see.
> If anybody thinks gcc-2.96 messes up a 2.4 kernel, you should see what
> happens when you compile 2.0.33 ;o).
I think a -Wall-as-of=2.7.2 might be convenient at times ;-)
- Werner (drifting off-topic :-( )
In article <[email protected]> you write:
>In article <[email protected]>,
>LA Walsh <[email protected]> wrote:
>>It was at that
>>point, the externally compiled module "barfed", because like many modules,
>>it expected, like many externally compiled modules, that it could simply
>>access all of it's needed files through /usr/include/linux which it gets
>>by putting /usr/include in it's path. I've seen commercial modules like
>>vmware's kernel modules use a similar system where they expect
>>/usr/include/linux to contain or point to headers for the currently running
>>kernel.
>
>vmware asks you nicely
>
>where are the running kernels include files [/usr/src/linux/include" >
>
>And then compiles the modules with -I/path/to/include/files
>
>In fact, the 2.2.18 kernel already puts a 'build' symlink in
>/lib/modules/`uname -r` that points to the kernel source,
>which should be sufficient to solve this problem.. almost.
Don't get me started on that... The link points back to where the code
was when it was built, not where it is installed. This makes a difference
if you're building RPMs... in which case the link points back to (for
example) /usr/src/sgi/BUILD/linux-<version> and not
/usr/src/linux-<version>/.
And, to top it all... once the build is completed, the BUILD directory
is deleted.
Good idea, but poorly thought out.
>I think /lib/modules/`uname -r`/ should contain a script that
>reproduces the CFLAGS used to compile the kernel. That way,
>you not only get the correct -I/path/to/kernel/include but
>the other compile-time flags (like -m486 etc) as well.
Anything that uses uname to work out what kernel is running doesn't work
if you're in a chrooted environment. Symlinks work better for this... if
all else fails, you can manage them manually during the build...
[ changeling 2.2.15-#5 ] uname -r
2.2.15-3SGI_39
[ changeling 2.2.15-#5 ] echo $ROOT
/work/root
[ changeling 2.2.15-#5 ] sudo chroot $ROOT
[ changeling 2.2.15-#5 ] uname -r
2.2.15-3SGI_39
[ changeling 2.2.15-#5 ] rpm -qa | grep kernel
kernel-headers-2.2.16-3
kernel-2.2.16-3
Trust me, $ROOT does not include 2.2.15....
>Mike.
richard.
-----------------------------------------------------------------------
Richard Offer Widget FAQ --> http://reality.sgi.com/widgetFAQ/
{X,Motif,Trust} on {Irix,Linux}
__________________________________________http://reality.sgi.com/offer/
On 15 Dec 2000, Miquel van Smoorenburg wrote:
> In article <[email protected]>,
> LA Walsh <[email protected]> wrote:
> >It was at that
> >point, the externally compiled module "barfed", because like many modules,
> >it expected, like many externally compiled modules, that it could simply
> >access all of it's needed files through /usr/include/linux which it gets
> >by putting /usr/include in it's path. I've seen commercial modules like
> >vmware's kernel modules use a similar system where they expect
> >/usr/include/linux to contain or point to headers for the currently running
> >kernel.
>
> vmware asks you nicely
>
> where are the running kernels include files [/usr/src/linux/include" >
>
> And then compiles the modules with -I/path/to/include/files
>
> In fact, the 2.2.18 kernel already puts a 'build' symlink in
> /lib/modules/`uname -r` that points to the kernel source,
> which should be sufficient to solve this problem.. almost.
>
> It doesn't tell you the specific flags used to compile the kernel,
> such as -m486 -DCPU=686
>
> > So at that point it becomes what we should name it under
> >/usr/include/linux. Should it be:
> >1) "/usr/include/linux/sys" (my preference)
>
> /usr should be static. It could be a read-only NFS mount.
> Putting system dependant configuration info here (which a
> /usr/include/linux/sys symlink *is*) is wrong.
>
> I think /lib/modules/`uname -r`/ should contain a script that
> reproduces the CFLAGS used to compile the kernel. That way,
> you not only get the correct -I/path/to/kernel/include but
> the other compile-time flags (like -m486 etc) as well.
[snip script]
However it happens, the necessary information THAT I KNOW OF is:
0) The kernel headers under linux/include
1) The state information generated by running make configure ; make dep ;
make clean -OR- equivalently by Debian's make-kpkg clean ; make-kpkg
configure which does the same thing but adds packaging-specific metadata.
A SYMLINK from /lib/modules/`uname -r`/ into /path/to/kernel-`uname -r`
will not always work, because:
0) The destination may not even exist (if the kernel has been installed
onto another machine)
1) The destination has no metadata or has the WRONG metadata (if I've just
compiled and installed 2.4.0-test11 on my i386, then am now building same
for my sun4c)
I have been recently told that a full copy of kernel headers and metadata
in /lib/modules/`uname -r`/ isn't going to work either, but the gentleman
who informed me of this hasn't yet shown why.
Debian's kernel package system allows for the creation of a kernel-headers
package which I THINK contains the correct metadata, but I've not
verified. This is placed into /usr/src/kernel-headers-`uname -r`/. This is
distro-specific but still A technically sound solution AFAIK. And not much
different than mentioned in the directly preceding paragraph.
Who else can offer an alternative solution, to the specific problem of
making configured KERNEL headers available for building third-party
modules?
--Ferret
On Fri, 15 Dec 2000, J . A . Magallon wrote:
>
> On 2000/12/15 Werner Almesberger wrote:
> > LA Walsh wrote:
> >
> > Exception: opaque types; there one would have to go via a __ identifier,
> > i.e.
> >
> > <public>/foo.h defines struct __foo ...;
> > <public>/bar.h includes <public>/foo.h
> > and uses #define FOOSIZE sizeof(struct __foo)
> > <private>/foo.h either typedef struct __foo foo_t;
> > or #define foo __foo /* ugly */
> >
>
> Easier: public kernel interfaces only work through pointers.
> <public>/foo.h typedef struct foo foo_t;
> foo_t* foo_new();
> <private>/foo.h includes <public>/foo.h
> struct foo { ............... };
> and uses #define FOOSIZE sizeof(foo_t)
>
> Drawback: public access is slow (always through foo_set_xxxx_field(foo_t*))
> private access from kernel or modules is fast (foo_t->x = ...)
> Advantage: kernel can change, foo_t internals can change and it is binary
> compatible. Even public headers can be kernel version
> independent.
I think collectively we're mixing what should really be two separate but
related issues: userland interface from /usr/include/linux; and exported
kernel header interface for third-party modules.
From a first reading, your Drawback: appears to belong to the sphere of
kernel interface, and your Advantage: to the sphere of userland interface.
But on the second reading (after opening a bottle of Jones) I can see how
the Advantage: would apply to both spheres.
I'm just asking that people please try to be a little more precise with
the rather imprecise list language.
--Ferret
On Fri, 15 Dec 2000, richard offer wrote:
> In article <[email protected]> you write:
> >In article <[email protected]>,
> >LA Walsh <[email protected]> wrote:
> >>It was at that
> >>point, the externally compiled module "barfed", because like many modules,
> >>it expected, like many externally compiled modules, that it could simply
> >>access all of it's needed files through /usr/include/linux which it gets
> >>by putting /usr/include in it's path. I've seen commercial modules like
> >>vmware's kernel modules use a similar system where they expect
> >>/usr/include/linux to contain or point to headers for the currently running
> >>kernel.
> >
> >vmware asks you nicely
> >
> >where are the running kernels include files [/usr/src/linux/include" >
> >
> >And then compiles the modules with -I/path/to/include/files
> >
> >In fact, the 2.2.18 kernel already puts a 'build' symlink in
> >/lib/modules/`uname -r` that points to the kernel source,
> >which should be sufficient to solve this problem.. almost.
>
> Don't get me started on that... The link points back to where the code
> was when it was built, not where it is installed. This makes a difference
> if you're building RPMs... in which case the link points back to (for
> example) /usr/src/sgi/BUILD/linux-<version> and not
> /usr/src/linux-<version>/.
>
> And, top it all... once the build is completed the BUILD directory
> is deleted.
>
>
> Good Idea, but poorly thought out.
[snip]
Once again, I'd like to suggest Debian's kernel package system as a good
working example of this sort of administrative-level kernel management. I
brought this up on the list once before, maybe eight months ago, but I
recall not even one reply worth of discussion about it. I have a fairly
basic idea of what could be done to merge part of 'make-kpkg' into the
kernel-side management, but I'd like to see some other trained eyeballs
taking a look.
On Fri, 15 Dec 2000, richard offer wrote:
>
> * $ from [email protected] at "15-Dec: 8:22pm" | sed "1,$s/^/* /"
> *
> *
> * Once again, I'd like to suggest Debian's kernel package system as a good
> * working example of this sort of administrative-level kernel management. I
> * brought this up on the list once before, maybe eight months ago, but I
> * recall not even one reply worth of discussion about it. I have a fairly
> * basic idea of what could be done to merge part of 'make-kpkg' into the
> * kernel-side management, but I'd like to see some other trained eyeballs
> * taking a look.
>
> I'm not familiar with Debian at all.. do you have some pointers to information
> on make-kpkg ?
Off the top of my head:
* perl script (but this can be changed if we wanted to adopt it)
* Has build support for kernel 'flavours': 2.2.17-flavour
* Has build support for modules outside of the kernel tree: alsa and
lm-sensors, and others
* Has support for cross-compiling the kernel and modules, by passing a
single destination-architecture parameter, with the support of an
installed cross-compilation suite.
Has full support for building Debian packages (of course!):
* Kernel image
* Kernel headers: placed into /usr/src/kernel-headers-<version>
* Kernel source: placed into /usr/src/kernel-source-<version>
* External modules
The package build features could technically be separated off into stubs
on the main package. But it seems Conectiva is working on merging Red
Hat's and Debian's packaging systems into something the LSB can adopt, if
I hear correctly.
Some of my ideas regarding the use of kernel-package with the main kernel
source:
* Simplifies the build of third-party modules AT KERNEL BUILD TIME:
The sources go into /usr/src/modules/<package>
* Protects against the dreaded accidental overwriting of current kernel
image and modules but is easily enough overridden: newbie or
asleep-at-console protection.
* Could be easily hooked into local package management system.
* The current monolithic kernel tarball could be split up to take
advantage of the modules build system, although the configuration
scripts would have to be changed. This would have a liability that code
outside the core kernel tree would be more difficult to compile into a
kernel, but would benefit by allowing non-core components to be
developed and released asynchronously. A non-core component would be
anything not required for booting, basic networking, or console access.
Examples: infrared, multimedia, and sound.
Areas in which the kernel-package system would need to be improved:
* Support for building new modules after kernel build time. This is a
current issue which could be more easily solved in the framework of
kernel-package.
* Support for calling an interactive configuration.
* Scripting support to run a user-defined sequence with a single command.
A typical build cycle on my build machine goes like:
# make-kpkg clean
# make menuconfig (if I need to change or interactively verify my
options)
# make-kpkg --revision=<hostname>.<build #> configure
# make-kpkg modules-clean
# make-kpkg modules
# make-kpkg kernel-headers (which I usually skip for personal use)
# make-kpkg kernel-image
I end up with package files called something like:
kernel-image-2.4.0-test11_heathen.01_i386.deb
kernel-headers-2.4.0-test11_heathen.01_i386.deb
alsa-modules-2.4.0-test11_0.5.9d+heathen.01_i386.deb
Getting it:
* The package is called 'kernel-package', and you can download the source
for it through http://www.debian.org
* The archives ARE undergoing reorganisation at this time, so if anyone
has troubles I can place a copy onto my webserver.
In article <[email protected]>,
<[email protected]> wrote:
>On 15 Dec 2000, Miquel van Smoorenburg wrote:
>
>> I think /lib/modules/`uname -r`/ should contain a script that
>> reproduces the CFLAGS used to compile the kernel.
>
>However it happens, the necessary information THAT I KNOW OF is:
>0) The kernel headers under linux/include
>1) The state information generated by running make configure ; make dep ;
>make clean -OR- equivalently by Debian's make-kpkg clean ; make-kpkg
>configure which does the same thing but adds packaging-specific metadata.
That state info is in ARCH, CFLAGS, CC and include/linux/autoconf.h
>A SYMLINK from /lib/modules/`uname -r`/ into /path/to/kernel-`uname -r`
>will not always work, because:
>0) The destination may not even exist (if the kernel has been installed
>onto another machine)
Yes, but in that case, how are you going to compile a module without
the kernel headers anyway? If you compiled a kernel on one
machine and you know that you want to be able to compile modules
on the second machine you need to copy over /usr/src/linux-x.y.z/include
as well.
>1) The destination has no metadata or has the WRONG metadata (if I've just
>compiled and installed 2.4.0-test11 on my i386, then am now building same
>for my sun4c)
As I said the kconfig script should do some simple sanity checks-
compare version and architecture at least.
>I have been recently told that a full copy of kernel headers and metadata
>in /lib/modules/`uname -r`/ isn't going to work either, but the gentleman
>who informed me of this hasn't yet shown why.
Because lots of us have a small root file system of just 30 MB?
Hmm, but as soon as you start thinking about cross-compiles etc
you need more and more state - like CROSS_COMPILE, AS, LD, CPP, AR,
NM etc etc. Yuck. It would probably be better to put all that info
in /usr/src/linux/Config.make, and use the current "build" symlink.
A module makefile would then look like this:
#! /usr/bin/make -f
# You might want to point BUILD somewhere else.
BUILD = /lib/modules/$(shell uname -r)/build
include $(BUILD)/Config.make
module.o:
	$(CC) $(CFLAGS) -c module.c
Ah yes, this is probably a much better approach than a kconfig script.
Mike.
[email protected] (Alan Cox) writes:
> > >Which works because in a normal compile environment they have /usr/include
> > >in their include path and /usr/include/linux points to the directory
> > >under /usr/src/linux/include.
> >
> > No, that a redhat-ism.
>
> Umm, its a most people except Debianism. People relied on it despite it
> being wrong. RH7 ships with a matching library set of headers. I got to close
> a lot of bug reports explaining to people that the new setup was in fact
> right 8(
Fine, now if all distributions could also put something like:
#ifdef __KERNEL__
# error To build kernel modules you must point the compiler to
# error headers matching your current kernel!
#endif
in /usr/include/linux/module.h, 3rd-party kernel module developers
would be saved a lot of silly "bug" reports, and everybody would
be happy.
//Marcus
--
-------------------------------+-----------------------------------
Marcus Sundberg | Phone: +46 707 452062
Embedded Systems Consultant | Email: [email protected]
Cendio Systems AB | http://www.cendio.com
On 16 Dec 2000, Miquel van Smoorenburg wrote:
> In article <[email protected]>,
> <[email protected]> wrote:
> >On 15 Dec 2000, Miquel van Smoorenburg wrote:
> >
> >> I think /lib/modules/`uname -r`/ should contain a script that
> >> reproduces the CFLAGS used to compile the kernel.
> >
> >However it happens, the necessary information THAT I KNOW OF is:
> >0) The kernel headers under linux/include
> >1) The state information generated by running make configure ; make dep ;
> >make clean -OR- equivalently by Debian's make-kpkg clean ; make-kpkg
> >configure which does the same thing but adds packaging-specific metadata.
>
> That state info is in ARCH, CFLAGS, CC and include/linux/autoconf.h
Ah, thank you for resolving the state information for me. =)
> >A SYMLINK from /lib/modules/`uname -r`/ into /path/to/kernel-`uname -r`
> >will not always work, because:
> >0) The destination may not even exist (if the kernel has been installed
> >onto another machine)
>
> Yes but in that case, how are you going to compile a module without
> the kernel headers anyway. If you compiled a kernel on one
> machine and you know that you want to be able to compile modules
> on the second machine you need to copy over /usr/src/linux-x.y.z/include
> as well.
My point was that this symlink makes an assumption that is not always
valid. And assuming things that are not always valid (as opposed to things
that do not always exist) is generally Wrong.
> >1) The destination has no metadata or has the WRONG metadata (if I've just
> >compiled and installed 2.4.0-test11 on my i386, then am now building same
> >for my sun4c)
>
> As I said the kconfig script should do some simple sanity checks-
> compare version and architecture at least.
>
> >I have been recently told that a full copy of kernel headers and metadata
> >in /lib/modules/`uname -r`/ isn't going to work either, but the gentleman
> >who informed me of this hasn't yet shown why.
>
> Because lots of use have a small root file system of just 30 MB ?
Yes, yes. Very good point. So (as I was thinking this morning as I awoke)
the symlink could stay, IF it could be guaranteed not to be dangling on
the target machine. I suspect making the symlink should therefore be left
to the administrator or local package manager.
We still can say that IF the symlink exists and is not dangling, that
third-party modules source MAY ASSUME it points to headers configured for
the running kernel. But the kernel build process SHOULD NOT make the
symlink.
> Hmm, but as soon as you start thinking about cross-compiles etc
> you need more and more state - like CROSS_COMPILE, AS, LD, CPP, AR,
> NM etc etc. Yuck. It would probably be better to put all that info
> in /usr/src/linux/Config.make, and use the current "build" symlink.
> A module makefile would then look like this:
>
> #! /usr/bin/make -f
>
> # You might want to point BUILD somewhere else.
> BUILD = /lib/modules/$(shell uname -r)/build
> include $(BUILD)/Config.make
>
> module.o:
> $(CC) $(CFLAGS) -c module.c
>
> Ah yes, this is probably a much better approach then a kconfig script
s/better/simpler/
A kconfig script will of course need to override BUILD in certain cases,
but with make this is trivial.
This solves one specific outstanding issue, and IMO at first glance before
my first cup of coffee should solve it well. But a kconfig script is meant
to do many other things as well. Some of these other things are big enough
to be held until 2.5.0, and are mentioned in another post.
[Dana Lacoste]
> Essentially, whatever solution is implemented MUST ensure :
>
> 1 - glibc will work properly (the headers in /usr/include/* don't
> change in an incompatible manner)
>
> 2 - programs that need to compile against the current kernel MUST
> be able to do so in a quasi-predictable manner.
(2) is bogus. NO program needs to compile against the current kernel
headers. The only things that need to compile against the current
kernel headers are kernel modules and perhaps libc itself. As I put it
a few days ago--
http://marc.theaimsgroup.com/?l=linux-kernel&m=97658613604208&w=2
So for your external modules, let me suggest the lovely
/lib/modules/{version}/build/include/. Recent-ish modutils required.
Peter
[Joe deBlaquiere]
> might be a good newbie filter... but actually the best thing about it
> is that the compiler people of the work might make generating a
> proper cross-toolchain less difficult by one or two magnitudes...
*WHAT*? How much less difficult could it possibly get? This is the
kernel, there is no cross-libc to worry about, so a cross-toolchain is
already down to a pair of CMMIs[1].
I do agree that anyone who can't do *that* should probably be using a
distro-packaged kernel....
Peter
[1] configure-make-make-install
[Miquel van Smoorenburg]
> In fact, the 2.2.18 kernel already puts a 'build' symlink in
> /lib/modules/`uname -r` that points to the kernel source,
> which should be sufficient to solve this problem.. almost.
>
> It doesn't tell you the specific flags used to compile the kernel,
> such as -m486 -DCPU=686
Sure it does.
make -C /lib/modules/`uname -r`/build modules SUBDIRS=$(pwd)
or, if you like clumsy and slow:
SRCDIR := /lib/modules/`uname -r`/build
CFLAGS := $(shell $(MAKE) -s -C $(SRCDIR) script SCRIPT='echo $$$$CFLAGS')
Peter
Last I recall you have to at least have newlib around to get through the
build process of gcc. libgcc doesn't affect the kernel but you can't do
'make install' without building it.
BTW : The Linux newlib stuff should go a long way toward solving some of
these problems (at least for x86 these days... other arches sure to follow)
Let's try another scenario (from the user's point of view):
Ship kernel sources split into kernel-headers-2.2.18 (tar.bz2, rpm, deb) and
kernel-source-2.2.18, plus a binary kernel-2.2.18.
One user not doing kernel compiles, but with various installed
kernels to try has:
/usr/include/kernel-2.2.18
/usr/include/kernel-2.4.0
/usr/include/kernel -> kernel-2.2.18 (setup at boot-init scripts
with uname -r)
The user can compile userspace apps and test kernel modules by
including /usr/include/kernel. If glibc is kernel-independent, glibc
headers just include 'kernel'. If it is kernel-dependent, they include
'kernel-x.y.z'.
A user rebuilding a kernel: kernel-source-x.y.z always looks at
/usr/include/kernel-x.y.z, never just 'kernel'.
A developer building a patched kernel that does not change headers
can manually do
/usr/include/kernel-2.2.18-my-pre -> /usr/include/kernel-2.2.18
If he needs to change headers, duplicate the include tree.
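A minimal sketch of the boot-init step this scheme implies. The
/usr/include/kernel-* layout and the function name are illustrations of
the proposal above, not an existing distro convention:

```shell
# link_kernel_headers DIR VERSION:
# point DIR/kernel at DIR/kernel-VERSION, replacing any stale link.
link_kernel_headers() {
    dir=$1 ver=$2
    # refuse to create a dangling link if the versioned tree is missing
    [ -d "$dir/kernel-$ver" ] || return 1
    # -s symbolic, -f replace existing, -n don't descend into old target
    ln -sfn "kernel-$ver" "$dir/kernel"
}

# a boot script would then run something like:
#   link_kernel_headers /usr/include "$(uname -r)"
```

Using a relative link target keeps the link valid even if /usr is
mounted under a different prefix (e.g. in a chroot).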
--
Juan Antonio Magallon Lacarta #> cd /pub
mailto:[email protected] #> more beer
Linux werewolf 2.2.19-pre1 #1 SMP Fri Dec 15 22:25:20 CET 2000 i686
In article <[email protected]>,
Peter Samuelson <[email protected]> wrote:
>[Miquel van Smoorenburg]
>> In fact, the 2.2.18 kernel already puts a 'build' symlink in
>> /lib/modules/`uname -r` that points to the kernel source,
>> which should be sufficient to solve this problem.. almost.
>>
>> It doesn't tell you the specific flags used to compile the kernel,
>> such as -m486 -DCPU=686
>
>Sure it does.
>
> make -C /lib/modules/`uname -r`/build modules SUBDIRS=$(pwd)
Excellent. Is there any way to put this in a Makefile?
Mike.
--
RAND USR 16514
[Joe deBlaquiere]
> Last I recall you have to at least have newlib around to get through
> the build process of gcc. libgcc doesn't affect the kernel but you
> can't do 'make install' without building it.
Hmmm. I do not recall needing newlib or anything like it, last time I
built a cross-gcc+binutils. Perhaps it depends on the target arch. If
you pick one where the libgcc math functions haven't all been written
yet, you're probably right.
Peter
[Peter Samuelson]
> > make -C /lib/modules/`uname -r`/build modules SUBDIRS=$(pwd)
[Miquel van Smoorenburg]
> Excellent. Is there any way to put his in a Makefile?
I don't know why not. Here's what I would start with:
PWD := $(shell pwd)
KERNEL := /lib/modules/$(shell uname -r)/build
CALLMAKE = $(MAKE) -C $(KERNEL) SUBDIRS=$(PWD)
all:
# ...other stuff...
$(CALLMAKE) modules
install:
# ...other stuff...
$(CALLMAKE) modules_install
# ...other stuff...
include $(TOPDIR)/Rules.make
Not *quite* that simple, but it's a start. Note that with this
construction you let the savvy user override $(KERNEL) at the command
line. (I, for one, would insist.)
Peter
"Peter Samuelson" <[email protected]> wrote :
> [Dana Lacoste]
> > 2 - programs that need to compile against the current kernel MUST
> > be able to do so in a quasi-predictable manner.
> (2) is bogus. NO program needs to compile against the current kernel
> headers. The only things that need to compile against the current
> kernel headers are kernel modules and perhaps libc itself. As I put it
> a few days ago--
> http://marc.theaimsgroup.com/?l=linux-kernel&m=97658613604208&w=2
> So for your external modules, let me suggest the lovely
> /lib/modules/{version}/build/include/. Recent-ish modutils required.
ok, i'll rephrase my request :)
For sake of argument (i.e. this might not be true, but pretend it is :)
- I write an external/third party kernel module
- For various reasons, I must have this kernel module installed to boot
(i can't compile without my module running)
- I need to upgrade kernels to a new version, one where there are
not-insignificant changes in the kernel headers.
- I distribute this module online, and thousands of people use this module
with various platform and distribution combinations.
How can I know where the 'correct' Linux kernel headers are in such
a way that is as transparent as possible to the user doing the compiling?
Potential answers that have come up so far :
1 - /lib/modules/* directories that involve `uname -r`
This won't work because i might not be compiling for the `uname -r` kernel
2 - /lib/modules/<version>/build/include/ :
we could recommend that all kernel headers for all versions be put in
the directory with the modules as listed above. someone doesn't like
the idea of symlinks (dangling symlinks ARE bad :) and someone else
pointed out that his root partition is only 30MB. therefore this idea
has flaws too.
3 - the script (Config.make, etc) approach : several people recommended one
kind or another of script that could be run prior to compilation that
could set all the relevant variables, including one that would point to
where the kernel headers are, and one that would have the 'correct'
compile flags, etc.
4 - a link in the /usr/include directory tree that points to the kernel
headers, so that /usr/include/linux would have the glibc-compiled
headers, and this other directory would have the current kernel's
headers : this doesn't really support cross-compiling.
#1 and #2 and #4 all seem to be limiting somehow. I think the biggest problem
so far has been that many developers don't recognize just how varied the linux
development universe is! For me personally, it's nothing to cross-compile for
other hardware platforms, and any solution that doesn't take that possibility
into account is just being silly :)
Can we get a #3 going? I think it could really help both the cross-compile
people and those who just want to make sure their modules are compiling in
the 'correct' environment. It also allows for things like 'kgcc vs. gcc' to
be 'properly' resolved by the distribution-creator as it should be, instead of
linux-kernel or the 3rd party module mailing lists.
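For concreteness, a #3 script could be as small as this sketch. The
Config.make output name and the KERNELDIR/KERNELCC variables are
invented for illustration; nothing here is an agreed convention:

```shell
# write_config_make KERNELDIR:
# record where the configured kernel tree lives and which compiler to
# use, as make-includable assignments.
write_config_make() {
    kdir=$1
    # a tree that was never configured is useless for module builds
    if [ ! -f "$kdir/.config" ]; then
        echo "error: $kdir is not a configured kernel tree" >&2
        return 1
    fi
    {
        echo "KERNELDIR := $kdir"
        echo "KERNELCC := ${CC:-gcc}"   # kgcc vs. gcc decided here
    } > Config.make
}

# a module Makefile would then just 'include Config.make'
```

The distribution-creator, not the module author, would ship the logic
that picks KERNELCC, which is exactly where that decision belongs.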
So? What do you think?
--
Dana Lacoste
Linux Developer
Peregrine Systems
On 18 Dec 00 at 10:51, Dana Lacoste wrote:
> Potential answers that have come up so far :
> 1 - /lib/modules/* directories that involve `uname -r`
> This won't work because i might not be compiling for the `uname -r` kernel
> 2 - /lib/modules/<version>/build/include/ :
> we could recommend that all kernel headers for all versions be put in
> the directory with the modules as listed above. someone doesn't like
> the idea of symlinks (dangling symlinks ARE bad :) and someone else
> pointed
> out that his root partition is only 30MB. therefore this idea has flaws
> too.
> 3 - the script (Config.make, etc) approach : several people recommended one
> kind or another of script that could be run prior to compilation that
> could set all the relevant variables, including one that would point to
> where the kernel headers are, and one that would have the 'correct'
> compile flags, etc.
How can you make sure the script will not point to a non-existent
directory? With a dangling symlink you can at least test... No: if the
user removes his kernel source, he should also remove the symlink. And
if he does not know about the symlink, it does not matter that the
symlink is dangling.
> Can we get a #3 going? I think it could really help both the cross-compile
> people and those who just want to make sure their modules are compiling in
> the 'correct' environment. It also allows for things like 'kgcc vs. gcc' to
> be 'properly' resolved by the distribution-creator as it should be, instead of
> linux-kernel or the 3rd party module mailing lists.
You should just use /lib/modules/*/build/Makefile:
cd /lib/modules/*/build
make subdir-m=/my/dir modules
and that's all. The complete environment is already there. And what if
you built the kernel with 'make CC=my-kernel-gcc'? Sorry, but we cannot
store all possible parameters of your environment. What if you built
the kernel with 'PATH=/my/kernel/tools:$PATH make'? Should we store the
path too? The complete environment?
Just my 0.02 cents.
Best regards,
Petr Vandrovec
[email protected]
P.S.: While we are cleaning up Makefiles, what about switching from
xxxxx-x to xxxxx_x? Bash otherwise complains during
'make subdir-m=/my/subdir' with 'invalid character 45 in exportstr for
subdir-m'... I know it is late, but better now than sometime later.
[Dana Lacoste]
> - I write an external/third party kernel module
> - For various reasons, I must have this kernel module installed to boot
> (i can't compile without my module running)
In that case "compile script for dummies" will probably fail anyway.
If you need it to boot, you probably need to either (a) compile it
directly into the kernel (not modular) or (b) use a custom initrd after
compiling. Neither option is easy to automate for the clueless user.
> How can I know where the 'correct' Linux kernel headers are in such a
> way that is as transparent as possible to the user doing the
> compiling?
The official correct answer is
/lib/modules/{version}/build/include
The only time this fails is if the user has moved or deleted his kernel
tree since installing, and if he does that, obviously he doesn't want
to compile any external modules.
The difficulty here is determining {version}. It is `uname -r` for the
currently running kernel, but could be anything at all for other
kernels.
So when in doubt, generate a list of `cd /lib/modules; echo *` and have
the user pick one.
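That "pick one" step could look like the following sketch;
pick_kernel_version is a made-up helper, and the directory argument
exists only so it can be exercised outside /lib/modules:

```shell
# pick_kernel_version [MODDIR]:
# list the kernel trees installed under MODDIR (default /lib/modules),
# read a number from the user, and print the chosen version.
pick_kernel_version() {
    moddir=${1:-/lib/modules}
    set -- "$moddir"/*/              # one positional param per tree
    [ -d "$1" ] || return 1          # glob matched nothing
    i=1
    for d; do
        echo "$i) $(basename "$d")" >&2
        i=$((i + 1))
    done
    printf 'Pick a kernel [1]: ' >&2
    read choice
    : "${choice:=1}"                 # default to the first entry
    i=1
    for d; do
        if [ "$i" = "$choice" ]; then
            basename "$d"
            return 0
        fi
        i=$((i + 1))
    done
    return 1                         # out-of-range answer
}
```

The menu goes to stderr so the chosen version can be captured with
command substitution, e.g. KVER=$(pick_kernel_version).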
> I think the biggest problem so far has been that many developers
> don't recognize just how varied the linux development universe is!
> For me personally, it's nothing to cross-compile for other hardware
> platforms, and any solution that doesn't take that possibility into
> account is just being silly :)
I think the biggest problem is trying to cater to users who don't know
how the kernel compile process works. If you're going to compile your
own kernel and/or modules, you had better do your homework, is what I
say. All the problems we are discussing magically go away as soon as
you assume a user with a quarter of a clue.
Peter
On Mon, Dec 18, 2000 at 10:51:09AM -0500, Dana Lacoste wrote:
>
> Can we get a #3 going? I think it could really help both the cross-compile
> people and those who just want to make sure their modules are compiling in
> the 'correct' environment. It also allows for things like 'kgcc vs. gcc' to
> be 'properly' resolved by the distribution-creator as it should be, instead of
> linux-kernel or the 3rd party module mailing lists.
I use the following script (scripts/dep.linux from Comedi-0.7.53).
It could easily be improved to handle the /lib/modules/*/build/include
link. I've also developed (actually, "gathered") a lot of other stuff
for convenient non-kernel module compiling, including compatibility
header files, Makefiles, etc. Good places to look for stuff include
comedi, RTAI, RTLinux, PCMCIA, and MTD.
Keep in mind that there is no "correct" environment except that
which the user specifies.
dave...
#!/bin/sh
if [ "$LINUXDIR" = "" ]
then
echo -n "Enter location of Linux source tree [/usr/src/linux]: "
read LINUXDIR
: ${LINUXDIR:=/usr/src/linux}
fi
if [ ! -f "$LINUXDIR/.config" ];then
echo Kernel source tree at $LINUXDIR is not configured
echo Fix before continuing
exit 1
fi
echo using LINUXDIR=$LINUXDIR
echo LINUXDIR=$LINUXDIR >.sourcedirs
. $LINUXDIR/.config
#
# check for a bad situation
#
# note: a disabled option is simply absent from .config ("# CONFIG_MODULES
# is not set"), so after sourcing it the variable is empty, never "n"
if [ "$CONFIG_MODULES" != "y" ]
then
cat <<EOF
*****
***** WARNING!!!
*****
***** Your kernel is configured to not allow loadable modules.
***** You are attempting to compile a loadable module for this
***** kernel. This is a problem. Please correct it.
*****
EOF
exit 1
fi
#
# check running kernel vs. /usr/src/linux and warn if necessary
#
read dummy dummy dummy2 <$LINUXDIR/include/linux/version.h
UTS_VERSION=`echo $dummy2|sed 's/"//g'`
echo UTS_VERSION=$UTS_VERSION >.uts_version
if [ "$(uname -r)" != "$UTS_VERSION" ]
then
cat <<EOF
*****
***** WARNING!!!
*****
***** The kernel that is currently running is a different
***** version than the source in $LINUXDIR. The current
***** compile will create a module that is *incompatible*
***** with the running kernel.
*****
EOF
fi
In article <[email protected]> you write:
>
>[Dana Lacoste]
>> Essentially, whatever solution is implemented MUST ensure :
>>
>> 1 - glibc will work properly (the headers in /usr/include/* don't
>> change in an incompatible manner)
>>
>> 2 - programs that need to compile against the current kernel MUST
>> be able to do so in a quasi-predictable manner.
>
>(2) is bogus. NO program needs to compile against the current kernel
>headers. The only things that need to compile against the current
>kernel headers are kernel modules and perhaps libc itself.
Or userland libraries/applications that need to bypass libc and make
direct kernel calls because libc hasn't yet implemented those new
kernel calls.
>
>Peter
richard.
-----------------------------------------------------------------------
Richard Offer Widget FAQ --> http://reality.sgi.com/widgetFAQ/
{X,Motif,Trust} on {Irix,Linux}
__________________________________________http://reality.sgi.com/offer/
[richard offer]
> Or userland libraries/applications that need to bypass libc and make
> direct kernel calls because libc hasn't yet implemented those new
> kernel calls.
Nah, it's still error-prone because it's too hard to guarantee that the
user compiling your program has up-to-date kernel headers in a location
you can find. Too many things can go wrong.
So just '#include <asm/unistd.h>' -- the libc version -- then have your
own header for those few things you consider "too new to be in libc":
/* my_unistd.h */
/* [not sure if all the __{arch}__ defines are right] */
#include <asm/unistd.h> /* from libc, not from kernel */
#ifndef __NR_pivot_root
# ifdef __alpha__
# define __NR_pivot_root 374
# endif
# if defined(__i386__) || defined(__s390__) || defined(__superh__)
# define __NR_pivot_root 217
# endif
# ifdef __mips__
# define __NR_pivot_root (__NR_Linux + 216)
# endif
# ifdef __hppa__
# define __NR_pivot_root (__NR_Linux + 67)
# endif
# ifdef __sparc__
# define __NR_pivot_root 146
# endif
#endif
#ifndef __NR_pivot_root
# error Your architecture is not known to support pivot_root(2)
#endif
_syscall2(int,pivot_root,char *,new,char *,old)
Yes it's clumsy but it's guaranteed to be where you expect it. (And
it's not nearly as clumsy if you don't feel the need to support all
architectures.)
Peter