2007-03-22 02:53:06

by Avantika Mathur

[permalink] [raw]
Subject: Ext4 devel interlock meeting minutes (March 21, 2007)

Ext4 Developer Interlock Call: 03/21/2007 Meeting Minutes

Attendees: Mingming Cao, Dave Kleikamp, Jean-Noel Cordenner, Valerie Clement, Ted Ts'o, Andreas Dilger, Jose Santos, Avantika Mathur

- Jose Santos just joined the IBM LTC filesystem team, and will start looking at 64 bit support in e2fsprogs

E2fsprogs:
- Ted gave a detailed outline of what changes are needed to support 64 bit block numbers in e2fsprogs. He will be sending a write-up of this outline to the linux-ext4 mailing list.

- Ted and Andreas also discussed whether there is an immediate need for 64 bit support. If not, extents support can be the primary focus for e2fsprogs, after which 64 bit support can be implemented. This question will also be posted on the mailing list.

- Andreas suggested creating an e4fsprogs, which uses the same code base as e2fsprogs but has a 64 bit block_t and doesn't build shared libraries.

Git Tree:
- Ted has created an ext4 git patch queue, which multiple users can access and update. Currently Ted, Mingming and Shaggy have access to the tree, but more users can be added.
- This tree will help clarify which patches are ready for mainline or -mm, and which patches still need to be tested. This will prevent patches that have issues (e.g. whitespace problems) from going into the -mm tree.
- Every time the tree is updated, a cron job will kick off various benchmarks on different architectures on the IBM automated test tool. We will then create a summary of results to post to the list. If the patches pass sufficient tests, they can be passed to the -mm git tree.
- Shaggy mentioned he uses sparse to test for endian issues in patches, and will post the options he uses to the mailing list.
- There will also need to be a mechanism for informing developers that we have fixed/changed their patches, so they use the updated patch for future versions.
- Anyone who adds a patch should test that the tree builds cleanly on basic architectures.

PATCH STATUS:

- Kalpak Shah posted patches to break the 32000 subdirectory limit.

- Amit Arora posted updated preallocation patches
- Mingming was wondering why the fallocate syscall needs a 64 bit length.

Uninitialized Bitmaps:
- Andreas has been working on patches to support uninitialized block and inode bitmaps, using the group descriptors and checksumming.
- There is a flag in the group descriptor, which had been added for lazy block groups. If set, it means that the block and inode bitmaps are uninitialized; the group is marked as having zero blocks in use, and the kernel does not touch them.
- This greatly improves fsck time, because uninitialized groups do not need to be scanned. It also improves mkfs time.
- In preliminary fsck testing, the run time grew linearly with the number of inodes.
- This feature is RO_COMPAT.
- The group checksum strictly needs to be maintained; if the flag is accidentally set, the whole group would be skipped (see the sketch after this list).
- So far Andreas has not done any performance testing.
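
A rough sketch of the logic (flag and field names only approximate the proposal; nothing here is taken from a posted patch):

#include <stdint.h>

struct group_desc {
        uint16_t bg_flags;      /* per-group state flags */
        uint16_t bg_checksum;   /* guards bg_flags (and the rest of the
                                   descriptor) against corruption */
        /* ... bitmap and inode table block pointers ... */
};

#define BG_INODE_UNINIT 0x0001  /* inode bitmap/table never initialized */
#define BG_BLOCK_UNINIT 0x0002  /* block bitmap never initialized */

/* Placeholder: the real feature would checksum the descriptor contents;
 * here we just compare against a value computed elsewhere. */
static int group_desc_csum_ok(const struct group_desc *gd, uint16_t expected)
{
        return gd->bg_checksum == expected;
}

/* fsck may skip a group entirely only when the checksum verifies;
 * otherwise a corrupted flag could cause a whole group to be skipped. */
static int group_needs_scan(const struct group_desc *gd, uint16_t expected)
{
        if (!group_desc_csum_ok(gd, expected))
                return 1;
        return !(gd->bg_flags & (BG_INODE_UNINIT | BG_BLOCK_UNINIT));
}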

Mballoc and Delalloc:
- Alex has been working on polishing the mballoc allocator. Andreas will ask him to submit a new version to the list, since the online defragmentation patches are based on an older version.
- Alex has spent some time addressing the request that delayed allocation be implemented in the VFS layer. Recently Christoph Hellwig pulled delayed allocation out of XFS, and will be doing the implementation in the VFS.
- The ext4 delalloc patches will then depend on these patches from Christoph. Hooks will be implemented in ext4 to use the VFS level delayed allocation.
- Though mballoc is not very useful without delayed allocation, Andreas will ask Alex to post the patches, so that mballoc can be tested using direct IO.

64 bit Inode and Dynamic Inode Table Discussion:
- Though this feature has been discussed for many years, there does not seem to be high demand currently for 64 bit inode numbers, but it is a problem which will eventually arise.
- If this incompat feature is implemented, there are many other changes that need to be considered.
- Mingming and Ted suggested the inode number could be based on the block number, with 48 bits for the block number and 5-7 bits for the offset, to directly point to the inode location.
- Andreas is concerned about inode relocation; it would take a lot of effort because references to the inode would have to be updated.
- Another option Andreas suggested is that the inode number be an offset into an inode table. The table could be virtually mapped around the filesystem, and also be defragmented.
- Ted believes that this could be used as a faster way of dealing with the 32 bit stat problem, because the logical block number that the inode number represents could be used to derive what the 32 bit inode number would be.
- There are many issues to address before 64 bit inodes can be fully implemented; Andreas sees this feature as a very long term future plan.

Large Inode:
- Andreas is working on a patch that will resize the inode if more space is needed for the nanosecond timestamp fields.
- This entails shifting down EAs if there is enough space in the inode.
- If there isn't enough space in the inode for the EAs, they are moved.


2007-03-22 17:45:51

by Mingming Cao

[permalink] [raw]
Subject: 64bit inode number and dynamic inode table for ext4

On Wed, 2007-03-21 at 19:53 -0700, Avantika Mathur wrote:
> Ext4 Developer Interlock Call: 03/21/2007 Meeting Minutes
...
> 64 bit Inode and Dynamic Inode Table Discussion:
> - Though this feature has been discussed for many years; there does not seem to be high demand currently for 64 bit inode numbers, but it is a problem which will eventually arise.

The benefit of a dynamic inode table is clear: not only could it scale up
the number of inodes/files a fs could support, it could also help speed
up fsck, since only in-use inodes are stored in the fs. The fsck
scalability issue is in much higher demand now that ext4 can support
larger filesystems.

> - If this incompat feature is implemented, there are many other changes that need to be considered.
> - Mingming and Ted suggested the inode number could be based on block number, with 48 bits for block number, and 5-7 bits for the offset; to directly point to the inode location.

Here is the basic idea about the dynamic inode table:

In default 4k filesystem block size, a inode table block could store 4
265 bytes inode structures(4*265 = 4k). To avoid inode table blocks
fragmentation, we could allocate a cluster of contigous blocks for inode
tables at run time, for every say, 64 inodes or 8 blocks 16*8=64 inodes.

To efficiently allocate and deallocate inode structures, we could link
all free/used inode structures within the block group and store the
first free/used inode number in the block group descriptor.
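
A minimal sketch of the allocation side of this idea (all names are
hypothetical, and reading the "next free" pointer out of the free inode
slot itself is left as a callback to keep the sketch self-contained):

#include <stdint.h>

#define INO_LIST_END 0

struct group_desc_dyn {
        uint64_t bg_first_free_ino;     /* head of the free-inode list, 0 = empty */
};

/*
 * Pop the head of the per-group free-inode list.  read_next_free() stands
 * in for reading the "next free inode" number stored in the free slot.
 */
static uint64_t alloc_inode(struct group_desc_dyn *gd,
                            uint64_t (*read_next_free)(uint64_t ino))
{
        uint64_t ino = gd->bg_first_free_ino;

        if (ino == INO_LIST_END)
                return 0;                               /* group is full */
        gd->bg_first_free_ino = read_next_free(ino);    /* unlink the head */
        return ino;
}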

There are some safety concerns with dynamic inode table allocation in the
case of block group corruption. This could be addressed by checksumming
the block group descriptor.

With a dynamic inode table, the block storing the inode structure is no
longer at a fixed location. One idea to efficiently map the inode number
to the block storing the corresponding inode structure is to encode the
block number into the inode number directly. This implies using a 64 bit
inode number. The low 4-5 bits of the inode number store the offset
within the inode table block, and the remaining ~59 bits are enough to
store the 48 bit block number, or a 32 bit block group number + relative
block number within the group:

63            52                             20             5      0
----------------|-----------------------------|--------------|------|
|    (unused)   |        32 bit group #       | 15 bit blk # | 5 bit|
|               |                             |              |offset|
----------------|-----------------------------|--------------|------|
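
As a sketch, encode/decode helpers for the first variant above (a 48 bit
physical block number plus a 5 bit offset) might look like this; the
names are illustrative only, not from any patch:

#include <stdint.h>

#define INO_OFFSET_BITS 5
#define INO_OFFSET_MASK ((1ULL << INO_OFFSET_BITS) - 1)

static inline uint64_t make_ino(uint64_t block_nr, unsigned int offset)
{
        return (block_nr << INO_OFFSET_BITS) | (offset & INO_OFFSET_MASK);
}

static inline uint64_t ino_to_block(uint64_t ino)
{
        return ino >> INO_OFFSET_BITS;          /* 48 bit physical block # */
}

static inline unsigned int ino_to_offset(uint64_t ino)
{
        return (unsigned int)(ino & INO_OFFSET_MASK);   /* slot in the block */
}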

The bigger concern is possible inode number collisions if we choose 64
bit inode numbers. Although today's Linux kernel VFS layer is fixed to
handle 64 bit inode numbers, applications still using the 32 bit stat()
to access inode numbers could break. It is unclear how common this case
is, and whether by now such applications have been fixed to use the 64
bit stat64().

One solution is to avoid generating inode numbers >2**32 on 32 bit
platforms. Since ext4 can only address a 16TB fs on a 32 bit arch, the
maximum group number is 2**17 (2**17 groups * 2**15 blocks = 2**32
blocks = 16TB with 4k blocks), if we force inode table blocks to be
allocated only in the first 2**10 blocks within a block group, like this:

63             47               31                 15        4      0
----------------|----------------|------------------|---------|------|
|                | High 15 bit    | low 17 bit grp # | 10 bit  | 5 bit|
|                |    grp #       |                  |  blk #  |offset|
----------------|----------------|------------------|---------|------|
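
A corresponding sketch for this 32 bit safe layout (again purely
illustrative macros, assuming 5 offset bits, 10 block bits and a 17 bit
group number in the low 32 bits):

#include <stdint.h>

#define INO32_OFFSET_BITS 5
#define INO32_BLK_BITS    10    /* inode table blocks restricted to the
                                   first 2**10 blocks of the group */
#define INO32_GRP_SHIFT   (INO32_BLK_BITS + INO32_OFFSET_BITS)

static inline uint64_t make_ino32(uint64_t group, uint32_t blk_in_group,
                                  uint32_t offset)
{
        /* For group < 2**17 (a 16TB fs at 4k blocks) the result fits in
         * 32 bits, so 32 bit stat() still sees unique inode numbers. */
        return (group << INO32_GRP_SHIFT) |
               ((uint64_t)(blk_in_group & ((1u << INO32_BLK_BITS) - 1))
                        << INO32_OFFSET_BITS) |
               (offset & ((1u << INO32_OFFSET_BITS) - 1));
}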


Then on a 32 bit platform, the inode number is always <2**32. So even if
the inode number on the fs is 64 bit, since its high 32 bits are always
0, a user application using stat() will still get a unique inode number.

On a 64 bit platform, there should not be a collision issue for 64 bit
applications. For 32 bit applications running on a 64 bit platform,
hopefully they are fixed by now; or we could force the inode table
blocks to be allocated in the first 16TB of the fs, since we need the
meta block group feature anyway to support a >256TB fs, and that already
keeps the inode structures apart from the data blocks.


> - Andreas is concerned about inode relocation, it would take a lot of effort; because references to the inode would have to be updated.

I am not clear about this concern. Andreas, are you worried about online
defrag? I thought online defrag only transfers the extent maps from the
temp inode to the original inode; we do not transfer the inode number and
structure.


> - Another option Andreas suggested is the inode number be an offset in and inode table. The table could be virtually mapped around the filesystem, and also be defragmented.
> - Ted believes that this could be used as a faster way of dealing with the 32 bit stat proble, because the logical block numbers that the inode number represents could be used to see what the 32 bit inode number would be.
> - There are many issues to address before 64 bit inodes can be fully implemented, Andreas sees this feature as a very long term future plan.

I agree there are many ext4 features that could be done in the short
term, but think back to why we have ext4: it was initially started to
address scalability issues: fs size limits and large file performance
(the 32 bit block number issue and extents). It was cloned from ext3
mostly for political reasons, but having a new fs also allows us to
design ext4 with a longer view. Since we are already in ext4, and it is
still called ext4dev, why postpone this for later? Considering how long
it took ext3 to go from start to stable, and then for ext4 to start with
extents and 48/64 bit block numbers (10 years?), I think ext5 is at
least 10 years away. There are customers already using millions or
billions of files today, or even asking for trillions of files; it could
be an issue that hits us within a few years.


Regards,

Mingming

2007-03-28 13:15:09

by Jan Kara

[permalink] [raw]
Subject: Re: 64bit inode number and dynamic inode table for ext4

> On Wed, 2007-03-21 at 19:53 -0700, Avantika Mathur wrote:
> > Ext4 Developer Interlock Call: 03/21/2007 Meeting Minutes
> Here is the basic idea about the dynamic inode table:
>
> In default 4k filesystem block size, a inode table block could store 4
> 265 bytes inode structures(4*265 = 4k). To avoid inode table blocks
^^^^^^^^^^ so k=265? ;)

> fragmentation, we could allocate a cluster of contigous blocks for inode
> tables at run time, for every say, 64 inodes or 8 blocks 16*8=64 inodes.
>
> To efficiently allocate and deallocate inode structures, we could link
> all free/used inode structures within the block group and store the
> first free/used inode number in the block group descriptor.
So you aren't expecting to shrink space allocated to inode, are you?

> There are some safety concern with dynamic inode table allocation in the
> case of block group corruption. This could be addressed by checksuming
> the block group descriptor.
But will it help you in finding those lost inodes once the descriptor
is corrupted? I guess it would make more sense to checksum each inode
separately. Then in case of corruption you could at least search through
all blocks (I know this is desperate ;) and find inodes by verifying
whether the inode checksum is correct. If it is, you have found an inode
block with a high probability (especially if a checksum for most of the
inodes in the block is correct). Another option would be to compute some
more robust checksum for the whole inode block...
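
A toy sketch of such a scan, just to illustrate the idea (the checksum
algorithm and the assumption that each 256 byte inode slot ends with a
16 bit checksum are made up for the example):

#include <stdint.h>
#include <stddef.h>

#define BLOCK_SIZE        4096
#define INODE_SIZE        256
#define INODES_PER_BLOCK  (BLOCK_SIZE / INODE_SIZE)

/* Assume the last two bytes of every inode slot hold its checksum. */
static uint16_t slot_checksum(const uint8_t *slot)
{
        uint16_t sum = 0;
        size_t i;

        for (i = 0; i < INODE_SIZE - 2; i++)
                sum = (uint16_t)(sum * 31 + slot[i]);   /* toy checksum */
        return sum;
}

/* Treat a block as a probable inode table block if most slots verify. */
static int looks_like_inode_block(const uint8_t block[BLOCK_SIZE])
{
        int valid = 0;
        int i;

        for (i = 0; i < INODES_PER_BLOCK; i++) {
                const uint8_t *slot = block + i * INODE_SIZE;
                uint16_t stored = (uint16_t)(slot[INODE_SIZE - 2] |
                                             (slot[INODE_SIZE - 1] << 8));
                if (stored == slot_checksum(slot))
                        valid++;
        }
        return valid >= (INODES_PER_BLOCK * 3) / 4;
}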

> With dynamical inode table, the block to store the inode structure is
> not at fixed location anymore. One idea to efficiently map the inode
> number to the block store the corresponding inode structure is encoding
> the block number into the inode number directly. This implies to use 64
> bit inode number. The low 4-5 bit of the inode number stores the offset
> bits within the inode table block, and the rest of 59 bits is enough to
> store the 48 bit block number, or 32 bit block group number + relative
> block number within the group:
>
> 63 47 31 20 4 0
> ----------------|-----------------------------|--------------|------|
> | | 32bit group # | 15 bit | 5bit |
> | | | blk # |offset|
> ----------------|-----------------------------|--------------|------|
>
> The bigger concern is possible inode number collision if we choose 64
> bit inode number. Although today linux kernel VFS layer is fixed to
> handle 64 bit inode number, applications might still using 32 bit stat()
> to access inode numbers could break. It is unclear how common this case
> is, and whether by now the application is fixed to use the 64 bit stat64
> ().
>
> One solution is avoid generate inode number >2**32 on 32 bit platform.
> Since ext4 only could address 16TB fs on 32 bit arch, the max of group
> number is 2**17 (2**17 * 2**15 blocks = 2**32 blocks = 16TB(on 4k blk)),
> if we could force that inode table blocks could only be allocated at the
> first 2**10 blocks within a block group, like this:
>
> 63 47 31 15 4 0
> ----------------|----------------|------------------|---------|------|
> | | High 15 bit |low 17bit grp # |10 bit | 5bit |
> | | grp # | |blk # |offset|
> ----------------|----------------|------------------|---------|------|
>
>
> Then on 32 bit platform, the inode number is always <2**32. So even if
> inode number on fs is 64 bit, since it's high 32 bit is always 0, user
> application using stat() will get unique inode number.
I think that by the time this gets to production I would expect
all the apps to be converted. And if they are not, then they deserve to be
screwed...

> On 64 bit plat format, there should not be collision issue for 64 bit
> applications. For 32 bit application running on 64 bit platform,
> hopefully they are fixed by now. or we could force the inode table block
> allocated at the first 16TB of fs, since anyway we need meta block group
> to support >256TB fs, and that already makes the inode structure apart
> from the data blocks.
>
>
> > - Andreas is concerned about inode relocation, it would take a
> > lot of effort; because references to the inode would have to be
> > updated.
>
> I am not clear about this concern. Andreas, are you worried about online
> defrag? I thought online defrag only transfer the extent maps from the
> temp inode to the original inode, we do not transfer inode number and
> structure.
Eventually, it would be nice to relocate inodes too (especially if we
have the possibility to store the inode anywhere on disk). Currently,
online inode relocation is quite hard as that means changing
inode numbers and thus updating directory entries...
But I'm not sure that the easier inode relocation is worth the additional
burden of translating inode numbers to disk location (which has to be
performed on every inode read). On the other hand the extent tree (or
simple radix tree - I'm not sure what would be better in case of inodes)
would not have to be too deep so maybe it won't be that bad.

Honza
--
Jan Kara <[email protected]>
SuSE CR Labs

2007-03-29 18:08:12

by Mingming Cao

[permalink] [raw]
Subject: Re: 64bit inode number and dynamic inode table for ext4

On Wed, 2007-03-28 at 15:15 +0200, Jan Kara wrote:
> > On Wed, 2007-03-21 at 19:53 -0700, Avantika Mathur wrote:
> > > Ext4 Developer Interlock Call: 03/21/2007 Meeting Minutes
> > Here is the basic idea about the dynamic inode table:
> >
> > In default 4k filesystem block size, a inode table block could store 4
> > 265 bytes inode structures(4*265 = 4k). To avoid inode table blocks
> ^^^^^^^^^^ so k=265? ;)
>
Sorry, it should be 16 inodes in a 4k block, 16*256 = 4k ;)
> > fragmentation, we could allocate a cluster of contigous blocks for inode
> > tables at run time, for every say, 64 inodes or 8 blocks 16*8=64 inodes.
> >
> > To efficiently allocate and deallocate inode structures, we could link
> > all free/used inode structures within the block group and store the
> > first free/used inode number in the block group descriptor.
> So you aren't expecting to shrink space allocated to inode, are you?
>
In theory we could shrink the space allocated to inodes, but I am not
sure if this is worth the effort.

> > There are some safety concern with dynamic inode table allocation in the
> > case of block group corruption. This could be addressed by checksuming
> > the block group descriptor.
> But will it help you in finding those lost inodes once the descriptor
> is corrupted? I guess it would make more sence to checksum each inode
> separately. Then in case of corruption you could at least search through
> all blocks (I know this is desperate ;) and find inodes by verifying
> whether the inode checksum is correct. If it is, you have found an inode
> block with a high probability (especially if a checksum for most of the
> inodes in the block is correct). Another option would be to compute some
> more robust checksum for the whole inode block...
>

If the block group descriptor is corrupted (detected via checksumming),
we could locate the majority of inodes in the group by scanning the
directory entries, since the inode numbers directly point to the inode
table blocks.

Yeah, adding a checksum for the whole inode block(s) is what I am
thinking. Andreas suggested adding a magic number to the inode table
block(s) (a cluster of blocks for inodes). So in case the block group
descriptor is corrupted, we could scan the block group and easily
locate the inode block(s).

If a cluster of inode blocks is 8 blocks, we could use one 256 byte slot
to store the magic number and checksum. We could also store a bitmap of
the 127 inodes there to indicate whether they are free or not. This is
an alternative way (vs. the free/used inode linked list) to locate
inodes within the tables for allocation/deallocation. Shrinking the
freed inode blocks also becomes easier, I assume. This allows us to do
inode allocation in parallel. Then we could store the address of the
previous or next chunk of inode tables in this cluster header for
additional safety protection. The first and last cluster addresses
(first block number of the chunk) are stored in the block group
descriptor.
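
A sketch of what such a cluster header might look like on disk, assuming
4k blocks and 256 byte inodes, so an 8 block cluster has 128 slots with
one slot used for the header; all names and field sizes are hypothetical:

#include <stdint.h>

#define INODE_CLUSTER_MAGIC 0x45433443          /* arbitrary example value */

/* Occupies the first 256 byte inode slot of an 8 block inode cluster. */
struct inode_cluster_header {
        uint32_t ic_magic;              /* identifies an inode cluster */
        uint32_t ic_checksum;           /* checksum over the whole cluster */
        uint64_t ic_prev_cluster;       /* block # of the previous cluster */
        uint64_t ic_next_cluster;       /* block # of the next cluster */
        uint8_t  ic_free_bitmap[16];    /* 1 bit per slot, 127 usable inodes */
        uint8_t  ic_reserved[216];      /* pad the header out to 256 bytes */
};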

> > With dynamical inode table, the block to store the inode structure is
> > not at fixed location anymore. One idea to efficiently map the inode
> > number to the block store the corresponding inode structure is encoding
> > the block number into the inode number directly. This implies to use 64
> > bit inode number. The low 4-5 bit of the inode number stores the offset
> > bits within the inode table block, and the rest of 59 bits is enough to
> > store the 48 bit block number, or 32 bit block group number + relative
> > block number within the group:
> >
> > 63 47 31 20 4 0
> > ----------------|-----------------------------|--------------|------|
> > | | 32bit group # | 15 bit | 5bit |
> > | | | blk # |offset|
> > ----------------|-----------------------------|--------------|------|
> >
> > The bigger concern is possible inode number collision if we choose 64
> > bit inode number. Although today linux kernel VFS layer is fixed to
> > handle 64 bit inode number, applications might still using 32 bit stat()
> > to access inode numbers could break. It is unclear how common this case
> > is, and whether by now the application is fixed to use the 64 bit stat64
> > ().
> >
> > One solution is avoid generate inode number >2**32 on 32 bit platform.
> > Since ext4 only could address 16TB fs on 32 bit arch, the max of group
> > number is 2**17 (2**17 * 2**15 blocks = 2**32 blocks = 16TB(on 4k blk)),
> > if we could force that inode table blocks could only be allocated at the
> > first 2**10 blocks within a block group, like this:
> >
> > 63 47 31 15 4 0
> > ----------------|----------------|------------------|---------|------|
> > | | High 15 bit |low 17bit grp # |10 bit | 5bit |
> > | | grp # | |blk # |offset|
> > ----------------|----------------|------------------|---------|------|
> >
> >
> > Then on 32 bit platform, the inode number is always <2**32. So even if
> > inode number on fs is 64 bit, since it's high 32 bit is always 0, user
> > application using stat() will get unique inode number.
> I think that by the time this gets to production I would expect
> all the apps to be converted. And if they are not, then they deserve to be
> screwed..
>

This kind of compromise on 32 bit platforms is not complex, just a
different set of macros to decode the inode number, based on 32 bit vs.
64 bit archs. So the complexity is not a big deal here.

It's not very clear how many apps will be impacted by the 32->64 bit
inode number change. The plan, per yesterday's ext4 interlock meeting,
is to add a mount option for ext3 that generates in-kernel 64 bit inode
numbers on the fly, then try ext3 with some commercial backup tools,
ls -al, tar, rsync etc. on 32 bit platforms.


> > On 64 bit plat format, there should not be collision issue for 64 bit
> > applications. For 32 bit application running on 64 bit platform,
> > hopefully they are fixed by now. or we could force the inode table block
> > allocated at the first 16TB of fs, since anyway we need meta block group
> > to support >256TB fs, and that already makes the inode structure apart
> > from the data blocks.
> >
> >
> > > - Andreas is concerned about inode relocation, it would take a
> > > lot of effort; because references to the inode would have to be
> > > updated.
> >
> > I am not clear about this concern. Andreas, are you worried about online
> > defrag? I thought online defrag only transfer the extent maps from the
> > temp inode to the original inode, we do not transfer inode number and
> > structure.
> Eventually, it would be nice to relocate inodes too (especially if we
> have the possibility to store the inode anywhere on disk). Currently,
> online inode relocation is quite hard as that means changing
> inode numbers and thus updating directory entries...
>
> But I'm not sure that the easier inode relocation is worth the additional
> burden of translating inode numbers to disk location (which has to be
> performed on every inode read).
>
Andreas explained that to me in a separate email. Is inode relocation a
rare case or pretty common? I thought we would need to do an inode
relocation lookup only if the inode number mismatches what is stored on
disk.

> On the other hand the extent tree (or
> simple radix tree - I'm not sure what would be better in case of inodes)
> would not have to be too deep so maybe it won't be that bad.
>
Probably. I assume you mean having a per block group inode table file,
and using an extent tree to indirectly look up the block number, given
an inode number. We would have to serialize multiple lookups, at least
per block group, though.

The concern I think is mostly reliability. In this scheme, we would need
to back up the inode table file, so that if the original inode table
file is corrupted we do not lose the inodes for the entire block group.

Mingming
>
> Honza

2007-04-02 12:38:32

by Jan Kara

[permalink] [raw]
Subject: Re: 64bit inode number and dynamic inode table for ext4

On Thu 29-03-07 10:08:08, Mingming Cao wrote:
> > > To efficiently allocate and deallocate inode structures, we could link
> > > all free/used inode structures within the block group and store the
> > > first free/used inode number in the block group descriptor.
> > So you aren't expecting to shrink space allocated to inode, are you?
> >
> In theory we could shrink space allocated to inodes, but I am not sure
> if this worth the effort.
Yes, I agree...

> > > There are some safety concern with dynamic inode table allocation in the
> > > case of block group corruption. This could be addressed by checksuming
> > > the block group descriptor.
> > But will it help you in finding those lost inodes once the descriptor
> > is corrupted? I guess it would make more sence to checksum each inode
> > separately. Then in case of corruption you could at least search through
> > all blocks (I know this is desperate ;) and find inodes by verifying
> > whether the inode checksum is correct. If it is, you have found an inode
> > block with a high probability (especially if a checksum for most of the
> > inodes in the block is correct). Another option would be to compute some
> > more robust checksum for the whole inode block...
> >
>
> If the block group descriptor is corrupted (with checksuming), we could
> locate majority of inodes in the group by scanning the directory
> entries, the inode number directly point to the inode table blocks.
Yes, you're right, that works in case inode numbers are easily translated
into a physical location on disk.

> Yeah, adding checksum for the whole inode block(s) is what I am
> thinking. Andreas suggested adding magic number to the inode table
> block(s) (a cluster of blocks for inode). So in the case the block group
> descriptor is corrupted , we could scan the block group and easily
> locate the inode block(s).
>
> If a cluster of inode blocks is 8 blocks, we could user one 256 bytes to
> store the magic number and checksum. We could also
Storing the checksum for 8 inode blocks has two disadvantages:
1) You have to update the checksum for each inode write (i.e. writing one
inode block suddenly means writing two disk blocks).
2) If it is really a checksum over all 8 blocks and not just 8 checksums
over single blocks, you have to have all 8 blocks in memory to be able to
compute the checksum.

> store the bitmap of this 127 inodes to indicating whether they are free
> or not. This is an alternative way(vs. the free/used inode linked
> list) to locate inode within the tables to do allocation/deallocation.
> Shrinking the freed inode blocks also becomes easier, I assume.
> This allows us to do inode allocation in parellal. Then we could store
> the address of the previous or next chunk of inode tables in this
> cluster header for additional safety protection. The first and last
> cluster address(first block number of the chuck) are stored in the block
> group descriptor.
Yes, this looks like a good alternative.

<snip>
> > > > - Andreas is concerned about inode relocation, it would take a
> > > > lot of effort; because references to the inode would have to be
> > > > updated.
> > >
> > > I am not clear about this concern. Andreas, are you worried about online
> > > defrag? I thought online defrag only transfer the extent maps from the
> > > temp inode to the original inode, we do not transfer inode number and
> > > structure.
> > Eventually, it would be nice to relocate inodes too (especially if we
> > have the possibility to store the inode anywhere on disk). Currently,
> > online inode relocation is quite hard as that means changing
> > inode numbers and thus updating directory entries...
> >
> > But I'm not sure that the easier inode relocation is worth the additional
> > burden of translating inode numbers to disk location (which has to be
> > performed on every inode read).
> >
> Andreas explained that to me in a separate email. Is inode relocation a
> rare case or pretty common? I thought we need to do inode relocation
> lookup only if the inode number mismatch what is stored on disk.
Inode relocation is useful for defragmentation and such beasts. So you
don't care much about the performance of a relocation as such. But after
you relocate the inode to some other place, you'd like it to behave as if
it was there since the beginning...

> > On the other hand the extent tree (or
> > simple radix tree - I'm not sure what would be better in case of inodes)
> > would not have to be too deep so maybe it won't be that bad.
> >
> Probably. I assume you mean having a per block group inode table file ,
> and use extent tree indirectly to lookup block number, given a inode
> number. We have to serialized the multiple lookups within least per
> block group though.
Yes, I meant such inode file. Why would we have to serialize lookups?
They can be perfectly parallel. The only thing you have to serialize are
modifications and those should be append-only anyway...

> The concern I think is mostly reliability. In this scheme, we need to
> back up of the inode table file, in case the original inode table file
> corrupted we will not lost the inodes for the entire block group.
If you have checksums / magic numbers, you will be able to find blocks
belonging to the inode table file. If you also implement the idea with the
chunks of inode blocks (actually, it looks like a small inode table) with a
header, you can even store all the information you need for reconstruction
in the header...

Honza
--
Jan Kara <[email protected]>
SuSE CR Labs