The last couple of kbuild patches have drawn attention to testing for
features in the kernel so that external modules can stay compatible
with a broad range of kernels.
Since vendors backport patches, testing for the kernel version is not
an option, so other means are required.
Two approaches are in widespread use:
a) grep kernel headers
b) Try to compile a small .c file (nvidia is a good example)
The a) approach is not robust against changes in .h files, mainly when content
is moved from one file to another. This will happen when the linuxabi
project is kicked off.
The b) approach requires glibc headers to be in full sync with the kernel,
which cannot be guaranteed. And it will fail when building for a kernel
version other than the running one.
How about the following approach where the kbuild system is used to compile
a few sample programs?
If the compile succeeds -> the feature is present.
If the compile fails -> the feature is not implemented.
The files need to be in a separate directory.
Use: dir/features.sh /lib/modules/`uname -r`/build ../feature.h
The script usually used to build the module can call features.sh, and afterwards
the module can include the generated feature.h file.
Comments welcome...
Sam
--- /dev/null 2003-09-23 19:59:22.000000000 +0200
+++ features.sh 2004-06-24 22:23:10.475441120 +0200
@@ -0,0 +1,17 @@
+#!/bin/sh
+# Check for presence of certain kernel features
+# $1 = kernel dir
+# $2 = output file
+
+dir=`dirname "$0"`
+cd "$dir"
+
+make -C "$1" M=`pwd`
+
+if [ -f remap4.o ]; then
+	echo "#define REMAP4 1" > $2
+elif [ -f remap5.o ]; then
+	echo "#define REMAP5 1" > $2
+fi
+
+make -C "$1" M=`pwd` clean
--- /dev/null 2003-09-23 19:59:22.000000000 +0200
+++ Makefile 2004-06-24 21:55:45.071580528 +0200
@@ -0,0 +1,5 @@
+MAKEFLAGS += -k
+
+obj-y := remap4.o remap5.o
+
+
--- /dev/null 2003-09-23 19:59:22.000000000 +0200
+++ remap4.c 2004-06-24 21:44:28.022507632 +0200
@@ -0,0 +1,8 @@
+#include <linux/mm.h>
+
+int do_test_remap_page_range(void)
+{
+	pgprot_t pgprot;
+	remap_page_range(0L, 0L, 0L, pgprot);
+	return 0;
+}
--- /dev/null 2003-09-23 19:59:22.000000000 +0200
+++ remap5.c 2004-06-24 21:44:10.033242416 +0200
@@ -0,0 +1,8 @@
+#include <linux/mm.h>
+
+int do_test_remap_page_range(void)
+{
+	pgprot_t pgprot;
+	remap_page_range(NULL, 0L, 0L, 0L, pgprot);
+	return 0;
+}
On Thu, 24 Jun 2004 22:30:43 +0200, Sam Ravnborg <[email protected]> wrote:
>
> The last couple of kbuild patches have drawn attention to testing for
> features in the kernel so that external modules can stay compatible
> with a broad range of kernels.
> Since vendors backport patches, testing for the kernel version is not
> an option, so other means are required.
>
> Two approaches are in widespread use:
> a) grep kernel headers
> b) Try to compile a small .c file (nvidia is a good example)
Why can't you check the .config file to see if features are enabled?
--
Patrick "Diablo-D3" McFarland || [email protected]
"Computer games don't affect kids; I mean if Pac-Man affected us as kids, we'd
all be running around in darkened rooms, munching magic pills and listening to
repetitive electronic music." -- Kristian Wilson, Nintendo, Inc, 1989
On Jun 24, 2004 22:30 +0200, Sam Ravnborg wrote:
> +if [ -f remap4.o ]; then
> + echo "#define REMAP4 1" > $2
> +elif [ -f remap5.o ]; then
> + echo "#define REMAP5 1" > $2
> +fi
I would prefer that these be called something like HAVE_REMAP5, or
better yet something descriptive like HAVE_REMAP_PAGE_RANGE_VMA.
This obviously needs to be smarter also, to handle adding multiple
#defines to a single .h file.
Ideally, when people make an incompatible kernel API change like this
they would just #define HAVE_REMAP_PAGE_RANGE_VMA in the header that
declares remap_page_range() directly (e.g. KERNEL_AS_O_DIRECT was added
for this reason) instead of external builds having to figure this out
themselves. Adding the check script is no less work than just adding
the #define to the appropriate header directly.
Having something like "features.h" is only useful insofar as it checks
for features that applications care about. If it doesn't have checks
for a feature, then the apps need to implement those checks anyway,
and different apps will name the script/#define differently, so until
they make it into the stock kernel it isn't terribly useful.
Cheers, Andreas
--
Andreas Dilger
http://sourceforge.net/projects/ext2resize/
http://members.shaw.ca/adilger/ http://members.shaw.ca/golinux/
On Thu, Jun 24, 2004 at 04:24:01PM -0400, Patrick McFarland wrote:
> On Thu, 24 Jun 2004 22:30:43 +0200, Sam Ravnborg <[email protected]> wrote:
> >
> > The last couple of kbuild patches have drawn attention to testing for
> > features in the kernel so that external modules can stay compatible
> > with a broad range of kernels.
> > Since vendors backport patches, testing for the kernel version is not
> > an option, so other means are required.
> >
> > Two approaches are in widespread use:
> > a) grep kernel headers
> > b) Try to compile a small .c file (nvidia is a good example)
>
> Why can't you check the .config file to see if features are enabled?
Features in a broad sense, i.e. API changes with respect to types
and function calls.
Sam
On Thu, Jun 24, 2004 at 02:35:16PM -0600, Andreas Dilger wrote:
> On Jun 24, 2004 22:30 +0200, Sam Ravnborg wrote:
> > +if [ -f remap4.o ]; then
> > + echo "#define REMAP4 1" > $2
> > +elif [ -f remap5.o ]; then
> > + echo "#define REMAP5 1" > $2
> > +fi
>
> I would prefer that these be called something like HAVE_REMAP5, or
> better yet something descriptive like HAVE_REMAP_PAGE_RANGE_VMA.
Agreed - the above was just to give intro to the concept.
>
> This obviously needs to be smarter also, to handle adding multiple
> #defines to a single .h file.
Agreed again - using >> would do the trick here.
The idea was to have a common way for external modules to detect
certain API changes, type changes etc. that would cause the build
to fail otherwise.
Also, the code fragments shown were supposed to be part of the
external modules - not the mainstream kernel.
> Ideally, when people make an incompatible kernel API change like this
> they would just #define HAVE_REMAP_PAGE_RANGE_VMA in the header that
> declares remap_page_range() directly (e.g. KERNEL_AS_O_DIRECT was added
> for this reason) instead of external builds having to figure this out
> themselves. Adding the check script is no less work than just adding
> the #define to the appropriate header directly.
That practice would be nice.
Sam
Andreas Dilger <[email protected]> writes:
> Ideally, when people make an incompatible kernel API change like this
> they would just #define HAVE_REMAP_PAGE_RANGE_VMA in the header that
> declares remap_page_range() directly (e.g. KERNEL_AS_O_DIRECT was added
> for this reason) instead of external builds having to figure this out
> themselves. Adding the check script is no less work than just adding
> the #define to the appropriate header directly.
I fully agree here, just having #defines for kernel features (which
can't be detected otherwise) is much easier.
Ge- "#define HAVE_V4L2 1" rd
--
2004-06-24: Switched mail/news setup to a new machine. If something
is unusual/strange/broken/whatever feel free to drop me a note.
=> [email protected]
On Thursday 24 June 2004 22:35, Andreas Dilger wrote:
> Ideally, when people make an incompatible kernel API change like this
> they would just #define HAVE_REMAP_PAGE_RANGE_VMA in the header that
> declares remap_page_range() directly (e.g. KERNEL_AS_O_DIRECT was added
> for this reason) instead of external builds having to figure this out
> themselves. Adding the check script is no less work than just adding
> the #define to the appropriate header directly.
I disagree. I don't think we want to clutter the code with feature definitions
that have no known users. That doesn't age/scale very well. It's easy enough
to test for features in the external module.
Cheers,
--
Andreas Gruenbacher <[email protected]>
SUSE Labs, SUSE LINUX AG
On 2004-06-25T10:32:22,
Andreas Gruenbacher <[email protected]> said:
> I disagree. I don't think we want to clutter the code with feature
> definitions that have no known users. That doesn't age/scale very
> well. It's easy enough to test for features in the external module.
True enough, but how do you propose to do that? I do understand the pain
of the external module builders who have to try to support the vanilla
kernel plus several vendor trees.
Yes, of course, we could end up with an autoconf-like approach for
building them, but ... you know ... that's sort of ugly.
Having a list of defines to document the version of a specific API in
the kernel, and a set of defines prefixed with <vendor>_ to document
vendor tree extensions, may not be the worst thing:
- If the vendor backports a given feature + API from mainstream, the
  define can be set to match the mainstream version.
- If the vendor introduces a vendor API extension, the vendor extension
  would come into play.
- If the vendor API eventually merges with the mainstream API again, the
  vendor define goes away again and the first rule applies.
This should age pretty well - as soon as an external code tree drops
support for a given version, they can clean out all the #ifdefs they had
based on this.
Now the granularity of the API versioning is interesting - per .h is too
coarse, and per-call would be too fine. But I'm sure someone could come
up with a sane proposal here.
Sincerely,
Lars Marowsky-Brée <[email protected]>
--
High Availability & Clustering \ ever tried. ever failed. no matter.
SUSE Labs, Research and Development | try again. fail again. fail better.
SUSE LINUX AG - A Novell company \ -- Samuel Beckett
On Fri, Jun 25, 2004 at 11:04:13AM +0200, Lars Marowsky-Bree wrote:
> On 2004-06-25T10:32:22,
> Andreas Gruenbacher <[email protected]> said:
>
> > I disagree. I don't think we want to clutter the code with feature
> > definitions that have no known users. That doesn't age/scale very
> > well. It's easy enough to test for features in the external module.
>
> True enough, but how do you propose to do that? I do understand the pain
> of the external module builds who have to try and support the vanilla
> kernel plus several vendor trees.
>
> Yes, of course, we could end up with an autoconf-like approach for
> building them, but ... you know ... that's sort of ugly.
>
> Having a list of defines to document the version of a specific API in
> the kernel, and a set of defines prefixed with <vendor>_ to document
> vendor tree extensions may not be the worst thing:
>...
> Now the granularity of the API versioning is interesting - per .h is too
> coarse, and per-call would be too fine. But I'm sure someone could come
> up with a sane proposal here.
What's an API for modules?
- whether a .h file is present under include/
- every EXPORT_SYMBOL{,_GPL}'ed function
- every inlined function under include/
- every struct defined under include/
- perhaps more things I'm currently forgetting
Every change to something mentioned above during a development kernel
needs to be covered by appropriate API versioning.
And then consider as an example cases like a function returning
irqreturn_t in 2.6:
- in 2.6, this function returns irqreturn_t (typedef'd to int)
- in 2.4, this function might return irqreturn_t (typedef'd to void)
- in 2.4, this function might return void
I'm sure there is a correct solution for such cases - but it's extra
work and easy to get things wrong.
Why do you dislike autoconf? I do not pretend autoconf is perfect -
but it works. Looking at the external ALSA, autoconf seems to be a good
solution to probe for exactly the things a module needs without a big
overhead in kernel development.
> Sincerely,
> Lars Marowsky-Brée <[email protected]>
cu
Adrian
--
"Is there not promise of rain?" Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
"Only a promise," Lao Er said.
Pearl S. Buck - Dragon Seed