I was interested in finding a tool that would tell me all the paths
through the kernel leading to some particular function, for example in
the case of do_mmap_pgoff:
do_mmap_pgoff do_mmap2 old_mmap old_mmap_i386
do_mmap_pgoff do_mmap2 sys_mmap2
do_mmap_pgoff do_mmap aio_setup_ring ioctx_alloc sys_io_setup
do_mmap_pgoff do_mmap elf_map load_elf_binary
...
I submitted a tool ('fscope') to do this but no one has picked up
on the discussion. So I am wondering if there isn't already some
existing and better way to accomplish the same thing.
If so, could somebody please tell me what it is?
I know you can do a backtrace w/ gdb, but that raises the question:
how are you going to be sure you have found every path?
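For reference, here is a rough sketch (not fscope itself) of the kind of
caller-chain walk I mean, driven by cscope's line-oriented mode. It assumes
an index built beforehand with 'cscope -b -k -R' in the kernel tree, prints
every partial chain rather than only complete ones, and the depth cap and
buffer sizes are arbitrary:

/*
 * Rough sketch only: walk caller chains by repeatedly invoking cscope's
 * line-oriented mode ("cscope -d -L -3 <func>" lists callers of <func>).
 * Cycles are only cut off by the depth limit.
 */
#include <stdio.h>
#include <string.h>

static void print_callers(const char *func, const char *chain, int depth)
{
        char cmd[256], line[1024];
        FILE *fp;

        if (depth > 8)                  /* arbitrary cut-off */
                return;

        snprintf(cmd, sizeof(cmd), "cscope -d -L -3 %s", func);
        fp = popen(cmd, "r");
        if (!fp)
                return;

        while (fgets(line, sizeof(line), fp)) {
                char file[512], caller[128], next[1024];

                /* line-oriented output: "<file> <caller> <line> <text>" */
                if (sscanf(line, "%511s %127s", file, caller) != 2)
                        continue;
                printf("%s %s\n", chain, caller);
                snprintf(next, sizeof(next), "%s %s", chain, caller);
                print_callers(caller, next, depth + 1);
        }
        pclose(fp);
}

int main(int argc, char **argv)
{
        if (argc != 2) {
                fprintf(stderr, "usage: %s <function>\n", argv[0]);
                return 1;
        }
        print_callers(argv[1], argv[1], 0);
        return 0;
}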
Carl Spalletta wrote:
> I was interested in finding a tool that would tell me all the paths
> through the kernel leading to some particular function, for example in
> the case of do_mmap_pgoff:
>
> do_mmap_pgoff do_mmap2 old_mmap old_mmap_i386
> do_mmap_pgoff do_mmap2 sys_mmap2
> do_mmap_pgoff do_mmap aio_setup_ring ioctx_alloc sys_io_setup
> do_mmap_pgoff do_mmap elf_map load_elf_binary
> ...
>
> I submitted a tool ('fscope') to do this but no one has picked up
> on the discussion. So I am wondering if there isn't already some
> existing and better way to accomplish the same thing.
>
> If so, could somebody please tell me what it is?
>
> I know you can do a backtrace w/ gdb, but that raises the question:
> how are you going to be sure you have found every path?
It seems everybody is busy trying Linus' sparse, so maybe it was
overlooked. Right now, it seems we have a few new tools to play with,
which were not available at the time of 2.4-test.
(Alphabetic order)
-Checker (Stanford people)
-Fscope (Carl Spalletta)
-Smatch (Dan Carpenter)
-Sparse (Linus Torvalds)
Now we only need one additional tool to *prove* correctness of the
kernel ;-)
-an automatic race finder
Liberal use of these tools should result in the most stable kernel ever.
Regards,
Carl-Daniel
--- Carl-Daniel Hailfinger <[email protected]> wrote:
> It seems everybody is busy trying Linus' sparse
I don't get any sensible output out of the test clients provided by
Linus for sparse - that is, I don't see how to construct the syntax tree
from the output.
> Now we only need one additional tool to *prove* correctness
> -an automatic race finder
> Liberal use of these tools should result in the most stable kernel ever.
As a matter of fact, you do have a point. ;-(
Carl-Daniel Hailfinger wrote:
> Now we only need one additional tool to *prove* correctness of the
> kernel ;-)
Perhaps another interesting approach for such source code analyzers
would be to take a top-down view, i.e. assume a certain functional
structure, and check that all the elements are there and in the
right order.
This should be feasible for code that basically follows a certain
template, e.g. network card drivers.
This would also help with the update problem of "fill in the blanks"
type of templates, i.e. if the template changes or is augmented,
some drivers using it may no longer conform to it.
Such a top-down view could be layered on top of a bottom-up analyzer,
e.g. by - manually or automatically - translating some "skeleton"
into a set of rules of the type "if you've called X, you must later
call Y", etc.
- Werner
--
_________________________________________________________________________
/ Werner Almesberger, Buenos Aires, Argentina [email protected] /
/_http://www.almesberger.net/____________________________________________/
On Fri, 30 May 2003, Werner Almesberger wrote:
> Carl-Daniel Hailfinger wrote:
> > Now we only need one additional tool to *prove* correctness of the
> > kernel ;-)
>
> Perhaps another interesting approach for such source code analyzers
> would be to take a top-down view, i.e. assume a certain functional
> structure, and check that all the elements are there and in the
> right order.
>
> This should be feasible for code that basically follows a certain
> template, e.g. network card drivers.
>
> This would also help with the update problem of "fill in the blanks"
> type of templates, i.e. if the template changes or is augmented,
> some drivers using it may no longer conform to it.
>
> Such a top-down view could be layered on top of a bottom-up analyzer,
> e.g. by - manually or automatically - translating some "skeleton"
> into a set of rules of the type "if you've called X, you must later
> call Y", etc.
>
> - Werner
The problem with 'correctness' is in the definition
of 'correct'. In many cases there are hundreds of
methods that can be used to implement a particular
software algorithm. Ultimately, it will come down
to how the implementor "wants" to code it, rather than
how it "should" be coded.
Which of the following are "correct"?
for (i = 0; i < NR; i++)
        do_something();

i = NR;
while (i--)
        do_something();
Since 'i' is only used as a counter, it doesn't matter
if the count is up or down. There are some who would
argue that down-counting is 'better' because the setting
of a 'zero' flag on some CPUs happens without additional
code if one decrements to zero. However, that depends
upon the implementation and upon the 'C' compiler itself.
There is no guarantee that the 'C' compiler will create
any particular code sequence, only that the logic and the
mathematics will correspond to the rules defined by the
standards.
FYI, using egcs-2.91.66 both methods produce code that is
larger and has more jumps than code written like:
i = NR;
again:  if (!i) goto quit;
        i--;
        do_something();
        goto again;
quit:   ;
It is unlikely that code such as this would be allowed in
software that can be reviewed by others. Goto freaks would
be committing crimes against humanity, and civilization
as we know it would cease to exist.
So, we are left with the correctness definition problem
that seems unsolvable.
One can, however, create an analysis engine that determines
compliance with certain rules and/or certain templates.
For instance, a 'kernel-aware' kind of 'lint', one that
can follow in-line assembly and review all possible code
paths through a kernel, just might find deadlock situations
and memory leaks using an automated mechanism. However,
such a mechanism will still not verify, or even evaluate,
'correctness'.
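As a toy illustration of the kind of pattern such an automated pass might
flag, consider inconsistent lock ordering (an A-then-B path coexisting with
a B-then-A path); the lock names and the observed pairs below are invented:

/*
 * Toy sketch: flag potential AB-BA deadlocks from a list of lock-acquisition
 * orderings observed on different code paths.  A real tool would extract
 * these pairs from the code paths it follows.
 */
#include <stdio.h>
#include <string.h>

struct order {
        const char *first, *second;     /* "first" held while taking "second" */
};

int main(void)
{
        static const struct order seen[] = {
                { "dev->lock",   "queue->lock" },
                { "queue->lock", "dev->lock"   },  /* reversed on another path */
                { "dev->lock",   "stats_lock"  },
        };
        const int n = sizeof(seen) / sizeof(seen[0]);
        int i, j;

        for (i = 0; i < n; i++)
                for (j = i + 1; j < n; j++)
                        if (!strcmp(seen[i].first, seen[j].second) &&
                            !strcmp(seen[i].second, seen[j].first))
                                printf("possible deadlock: %s -> %s vs %s -> %s\n",
                                       seen[i].first, seen[i].second,
                                       seen[j].first, seen[j].second);
        return 0;
}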
Cheers,
Dick Johnson
Penguin : Linux version 2.4.20 on an i686 machine (797.90 BogoMips).
Why is the government concerned about the lunatic fringe? Think about it.
Richard B. Johnson wrote:
> The problem with 'correctness' is in the definition
> of 'correct'. In many cases there are hundreds of
> methods that can be used to implement a particular
> software algorithm.
Of course. Your example would probably be too low-level for
most such checkers, and might better be handled by run-time
checks (e.g. through something like valgrind).
Also, a template check wouldn't have to cover all equivalent
coding variants. E.g. if the template just allows one kind
of loop, it's probably always acceptable to change code that
uses an equivalent alternative.
Furthermore, if someone figures out a way of doing things
more efficiently, it should be easier to update drivers that
conform to a well-known status quo ante.
> One can, however, create an analysis engine that determines
> compliance with certain rules and, or, certain templates.
Yes, that's exactly what I mean.
- Werner
--
_________________________________________________________________________
/ Werner Almesberger, Buenos Aires, Argentina [email protected] /
/_http://www.almesberger.net/____________________________________________/