Is there any way to estimate the size of the user base for 2.6.16?
e.g. how many downloads does it get?
On 1/29/07, Chuck Ebbert <[email protected]> wrote:
> Is there any way to estimate the size of the user base for 2.6.16?
>
> e.g. how many downloads does it get?
Are you including distros that use it as well?
josh
Josh Boyer wrote:
> On 1/29/07, Chuck Ebbert <[email protected]> wrote:
>> Is there any way to estimate the size of the user base for 2.6.16?
>>
>> e.g. how many downloads does it get?
>
> Are you including distros that use it as well?
>
Yes, if they're based on Adrian's stable series.
On Mon, 29 Jan 2007 15:30:00 -0500
Chuck Ebbert <[email protected]> wrote:
> Is there any way to estimate the size of the user base for 2.6.16?
>
> e.g. how many downloads does it get?
I've often wondered that myself, as I'm keen for it to continue
to be maintained. I'm very appreciative of what Adrian is doing with
it. (Thanks!)
I've been using Adrian's 2.6.16 kernel releases on two internet
servers that I look after remotely. One of them is RHEL 4, the other
is Fedora Core 2 (Ensim Webppliance). I'm especially wary of breaking
RHEL 4, and the 2.6.16.xx kernels work perfectly except for
hald not starting (but that doesn't matter on that server). The stock
RHEL 4 kernels exhibit some awful VM behaviour, with crippling iowait
on that system (mainly a php/mysql workload).
Mike Houston
On Mon, Jan 29, 2007 at 04:04:48PM -0500, Mike Houston wrote:
> On Mon, 29 Jan 2007 15:30:00 -0500
> Chuck Ebbert <[email protected]> wrote:
>
> > Is there any way to estimate the size of the user base for 2.6.16?
> >
> > e.g. how many downloads does it get?
>
> I've often wondered that myself, as I'm keen for it to continue
> to be maintained. I'm very appreciative of what Adrian is doing with
> it. (Thanks!)
We're still running 2.6.16 kernels on a bunch of machines, though 2.6.19
has been looking pretty nice on the couple of machines that are testing
it. 2.6.17 and 2.6.18 felt less stable.
We do a lot of Cyrus which does a lot of MMAP - and we also use the
Areca driver - which are both strong reasons to move to 2.6.19.2, but
if the MMAP fix was ported back to 2.6.16 we might consider staying
there instead.
Bron.
On Tue, Jan 30, 2007 at 09:13:00AM +1100, Bron Gondwana wrote:
> On Mon, Jan 29, 2007 at 04:04:48PM -0500, Mike Houston wrote:
> > On Mon, 29 Jan 2007 15:30:00 -0500
> > Chuck Ebbert <[email protected]> wrote:
> >
> > > Is there any way to estimate the size of the user base for 2.6.16?
> > >
> > > e.g. how many downloads does it get?
> >
> > I've often wondered that myself, as I'm keen for it to continue
> > to be maintained. I'm very appreciative of what Adrian is doing with
> > it. (Thanks!)
>
> We're still running 2.6.16 kernels on a bunch of machines, though 2.6.19
> has been looking pretty nice on the couple of machines that are testing
> it. 2.6.17 and 2.6.18 felt less stable.
>
> We do a lot of Cyrus which does a lot of MMAP - and we also use the
> Areca driver - which are both strong reasons to move to 2.6.19.2, but
> if the MMAP fix was ported back to 2.6.16 we might consider staying
> there instead.
Please correct me if I'm wrong, but as far as I understand the problem,
the mmap bug was introduced in 2.6.19.
> Bron.
cu
Adrian
--
"Is there not promise of rain?" Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
"Only a promise," Lao Er said.
Pearl S. Buck - Dragon Seed
On Mon, Jan 29, 2007 at 04:04:48PM -0500, Mike Houston wrote:
> On Mon, 29 Jan 2007 15:30:00 -0500
> Chuck Ebbert <[email protected]> wrote:
>
> > Is there any way to estimate the size of the user base for 2.6.16?
> >
> > e.g. how many downloads does it get?
>
> I've often wondered that myself, as I'm keen for it to continue
> to be maintained. I'm very appreciative of what Adrian is doing with
> it. (Thanks!)
>
> I've been using Adrian's 2.6.16 kernel releases on two internet
> servers that I look after remotely. One of them is RHEL 4, the other
> is Fedora Core 2 (Ensim Webppliance). I'm especially wary of breaking
> RHEL 4, and the 2.6.16.xx kernels work perfectly except for
> hald not starting (but that doesn't matter on that server).
>...
I haven't heard of this before, and in a quick test hald from
HAL 0.5.8.1 starts fine here.
Are there any error messages?
> Mike Houston
cu
Adrian
--
"Is there not promise of rain?" Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
"Only a promise," Lao Er said.
Pearl S. Buck - Dragon Seed
Adrian Bunk wrote:
> On Tue, Jan 30, 2007 at 09:13:00AM +1100, Bron Gondwana wrote:
>> We do a lot of Cyrus which does a lot of MMAP - and we also use the
>> Areca driver - which are both strong reasons to move to 2.6.19.2, but
>> if the MMAP fix was ported back to 2.6.16 we might consider staying
>> there instead.
>
> Please correct me if I'm wrong, but as far as I understand the problem,
> the mmap bug was introduced in 2.6.19.
I believe our featherless leader said he thought it was an ancient bug,
exacerbated by something that went into 2.6.19.
If Linus's opinion is correct (still?), then the bug exists in all
kernels since somewhere back in the 2.4.xx days.
Linus?
On Wed, 31 Jan 2007 00:52:15 +0100
Adrian Bunk <[email protected]> wrote:
> On Mon, Jan 29, 2007 at 04:04:48PM -0500, Mike Houston wrote:
> > I've been using Adrian's 2.6.16 kernel releases on two internet
> > servers that I look after remotely. One of them is RHEL 4, the
> > other is Fedora Core 2 (Ensim Webppliance). I'm especially wary
> > of breaking RHEL 4, and the 2.6.16.xx kernels work perfectly
> > except for hald not starting (but that doesn't matter on that
> > server).
> >...
>
> I haven't heard of this before, and in a quick test hald from
> HAL 0.5.8.1 starts fine here.
>
> Are there any error messages?
>
I think I recall hearing, here on this list, about hald breaking in
RHEL 4 with modern kernels, some time before I built a custom kernel
for that rig (Athlon 64 3200+ on an Asus A8V with the VIA K8T800Pro
chipset, running 32-bit RHEL 4 ES). I was expecting it to happen. I
think at the time the current kernel was 2.6.15.2 or thereabouts.
However, I haven't upgraded any software that's in the distro packages
beyond what up2date provides, so:
$ rpm -qa | grep hal
hal-0.4.2-4.EL4
Sorry about that; I didn't mean to take up any of your time. I
only mentioned it incidentally and wasn't expecting it to be
addressed. (I was, rather, happily noting that nothing of significance
to me is broken in the distro.) If that were something I needed, I
would have looked into upgrading it.
Mike Houston
On Tue, 30 Jan 2007, Mark Lord wrote:
>
> I believe our featherless leader said he thought it was an ancient bug,
> exacerbated by something that went into 2.6.19.
>
> If Linus's opinion is correct (still?), then the bug exists in all
> kernels since somewhere back in the 2.4.xx days.
The issue was somewhat confused by people certainly *reporting* it for
older kernels. Also, as part of the dirty bit cleanups and sanity
checking, we did actually seem to fix a long-standing CIFS corruption (and
apparently reiserfs/XFS problems too).
But the *common* case was actually introduced with 2.6.19, and 2.6.16
wouldn't be affected.
Linus
On Tue, Jan 30, 2007 at 06:36:48PM -0800, Linus Torvalds wrote:
>
>
> On Tue, 30 Jan 2007, Mark Lord wrote:
> >
> > I believe our featherless leader said he thought it was an ancient bug,
> > exacerbated by something that went into 2.6.19.
> >
> > If Linus's opinion is correct (still?), then the bug exists in all
> > kernels since somewhere back in the 2.4.xx days.
>
> The issue was somewhat confused by people certainly *reporting* it for
> older kernels. Also, as part of the dirty bit cleanups and sanity
> checking, we did actually seem to fix a long-standing CIFS corruption (and
> apparently reiserfs/XFS problems too).
>
> But the *common* case was actually introduced with 2.6.19, and 2.6.16
> wouldn't be affected.
We run on reiserfs. I did try ext3 for a little while on a couple of
servers but performance was really awful compared to reiser, and we
heaved a sigh of relief when we finally migrated all the users off
those filesystems. There were many complaints about the speed of our
service for a while.
I'm really hoping this is the cause, because we do still see occasional
corruption of MMAPed files under heavy load, though less often now
that we've balanced our servers to the point where load spikes are
much less common.
The servers are using either internal Areca cards or LSI SCSI adaptors
connected to external SATA RAID boxes. Either way, there's a few
terabytes of SATA attached to each box, with 10k RPM drives in RAID1
for Cyrus's metadata and bigger 7.2k RPM drives in RAID5 for the actual
emails. According to iostat these drives are being utilised at over
50% of available bandwidth even now during the "quiet time" - there
are many tens of thousands of users per machine - so we tend to
stress the IO subsystem quite a lot.
Cyrus is also very liberal in its use of MMAP, so we get to push
all sorts of exciting edge cases. We were still applying patches
to reiserfs until recently, and I'm not sure what the status of that
is (Hans Reiser said to keep harassing him about it - but he's
hardly in a position to be dealing with our issues right now).
Thankfully, now that we're using 300GB maximum rather than 2TB
partitions (running multiple instances of Cyrus instead), with the
associated smaller mailboxes.db (the biggest MMAPed and frequently
updated file), things seem less edgy. I don't like edgy (cue the
Ubuntu jokes).
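To make the pattern concrete, here is a minimal sketch of updating a
file in place through a shared mapping, the access style described
above. The file name is purely illustrative and this is not Cyrus's
actual code; it also assumes the file already exists and is at least a
few bytes long.

/* Illustrative only: map a metadata file shared, update a record in
 * place through the mapping, then flush it with msync().  Stores
 * through the mapping dirty the pages, exercising the dirty-page
 * accounting discussed in this thread. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	struct stat st;
	int fd = open("mailboxes.db", O_RDWR);	/* hypothetical name */

	if (fd < 0 || fstat(fd, &st) < 0 || st.st_size < 7) {
		perror("open/fstat");
		return EXIT_FAILURE;
	}

	char *map = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
			 MAP_SHARED, fd, 0);
	if (map == MAP_FAILED) {
		perror("mmap");
		close(fd);
		return EXIT_FAILURE;
	}

	memcpy(map, "updated", 7);	/* dirty a page in place */

	if (msync(map, st.st_size, MS_SYNC) < 0)	/* write it back */
		perror("msync");

	munmap(map, st.st_size);
	close(fd);
	return EXIT_SUCCESS;
}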
Anyway, I'm hoping to update one of our boxes to 2.6.19.2 soon.
We do have one box running a 2.6.18 series kernel which has been
fine as well. I'll give feedback if we see any issues with MMAP
on there.
Bron.
On Tue, Jan 30, 2007 at 06:36:48PM -0800, Linus Torvalds wrote:
>
>
> On Tue, 30 Jan 2007, Mark Lord wrote:
> >
> > I believe our featherless leader said he thought it was an ancient bug,
> > exacerbated by something that went into 2.6.19.
> >
> > If Linus's opinion is correct (still?), then the bug exists in all
> > kernels since somewhere back in the 2.4.xx days.
>
> The issue was somewhat confused by people certainly *reporting* it for
> older kernels. Also, as part of the dirty bit cleanups and sanity
> checking, we did actually seem to fix a long-standing CIFS corruption (and
> apparently reiserfs/XFS problems too).
>
> But the *common* case was actually introduced with 2.6.19, and 2.6.16
> wouldn't be affected.
Thanks for the clarifications.
Regarding the longstanding CIFS/reiserfs/XFS problems, it seems the
status is:
CIFS:
commit cb876f451455b6187a7d69de2c112c45ec4b7f99
Fix up CIFS for "test_clear_page_dirty()" removal
queued for 2.6.19.3
applies and compiles against 2.6.16
reiserfs:
commit de14569f94513279e3d44d9571a421e9da1759ae
[PATCH] resierfs: avoid tail packing if an inode was ever mmapped
backport to 2.6.16 required
XFS:
fix not yet in your tree
> Linus
cu
Adrian
--
"Is there not promise of rain?" Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
"Only a promise," Lao Er said.
Pearl S. Buck - Dragon Seed
On Wed, Jan 31, 2007 at 08:02:37AM +0100, Adrian Bunk wrote:
> reiserfs:
> commit de14569f94513279e3d44d9571a421e9da1759ae
> [PATCH] resierfs: avoid tail packing if an inode was ever mmapped
> backport to 2.6.16 required
Which would explain the "notail" I've been careful to cargo-cult
into every mount string since I started at this job, even though
we're storing mainly very small files.
Referring back to <[email protected]> (which went
to reiserfs-dev and a couple of the people on the ever-growing CC list
above), we're still not 100% sure if it's safe to remove the patch that
I attached there:
>>>>--- file.c~ 2004-10-02 12:29:33.223660850 +0400
>>>>+++ file.c 2004-10-08 10:03:03.001561661 +0400
>>>>@@ -1137,6 +1137,8 @@
>>>>return result;
>>>> }
>>>>
>>>>+ return generic_file_write(file, buf, count, ppos);
>>>>+
>>>> if ( unlikely((ssize_t) count < 0 ))
>>>> return -EINVAL;
which Hans asserted was about 5% slower than the reiserfs custom
write implementation, but we countered that it at least meant we
didn't crash in a steaming pile of processes stuck in D state
with no way out every few days.
It doesn't apply against 2.6.19 any more, which may be a good
sign. I haven't seen anything in the changelogs that jumped
out at me as the fix though.
Regards,
Bron.
On Wed, Jan 31, 2007 at 08:02:37AM +0100, Adrian Bunk wrote:
> On Tue, Jan 30, 2007 at 06:36:48PM -0800, Linus Torvalds wrote:
> > The issue was somewhat confused by people certainly *reporting* it for
> > older kernels. Also, as part of the dirty bit cleanups and sanity
> > checking, we did actually seem to fix a long-standing CIFS corruption (and
> > apparently reiserfs/XFS problems too).
> >
> > But the *common* case was actually introduced with 2.6.19, and 2.6.16
> > wouldn't be affected.
>
> Thanks for the clarifications.
>
> Regarding the longstanding CIFS/reiserfs/XFS problems, it seems the
> status is:
....
> XFS:
> fix not yet in your tree
With the WARN_ON() in cancel_dirty_page() removed:
http://git2.kernel.org/git/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=ecdfc9787fe527491baefc22dce8b2dbd5b2908d
XFS will behave exactly the same as 2.6.19 and previous releases.
The patches I sent were only ever really workarounds to greatly
reduce the race window that could lead to the warning being
triggered.
We really need Nick Piggin's invalidate/truncate/mmap race fixes to
properly solve the XFS issues uncovered by Linus' changes. Given
that we haven't had any reported cases of data corruption on XFS
(and I couldn't trigger any even when seeing the warnings) I think
we are fairly safe just maintaining the status quo and waiting for the
right fix to make its way into the tree....
Cheers,
Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group
David Chinner wrote:
>On Wed, Jan 31, 2007 at 08:02:37AM +0100, Adrian Bunk wrote:
>
>
>>On Tue, Jan 30, 2007 at 06:36:48PM -0800, Linus Torvalds wrote:
>>
>>
>>>The issue was somewhat confused by people certainly *reporting* it for
>>>older kernels. Also, as part of the dirty bit cleanups and sanity
>>>checking, we did actually seem to fix a long-standing CIFS corruption (and
>>>apparently reiserfs/XFS problems too).
>>>
>>>But the *common* case was actually introduced with 2.6.19, and 2.6.16
>>>wouldn't be affected.
>>>
>>>
>>Thanks for the clarifications.
>>
>>Regarding the longstanding CIFS/reiserfs/XFS problems, it seems the
>>status is:
>>
>>
>....
>
>
>>XFS:
>>fix not yet in your tree
>>
>>
>
>With the WARN_ON() in cancel_dirty_page() removed:
>
>http://git2.kernel.org/git/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=ecdfc9787fe527491baefc22dce8b2dbd5b2908d
>
>XFS will behave exactly the same as 2.6.19 and previous releases.
>The patches I sent were only ever really workarounds to greatly
>reduce the race window that could lead to the warning being
>triggered.
>
>We really need Nick Piggin's invalidate/truncate/mmap race fixes to
>properly solve the XFS issues uncovered by Linus' changes. Given
>that we haven't had any reported cases of data corruption on XFS
>(and I couldn't trigger any even when seeing the warnings) I think
>we are fairly safe just maintaining the status quo and waiting for the
>right fix to make its way into the tree....
>
>Cheers,
>
>Dave.
>
>
We did have one bug report of data corruption in cifs on older kernels
copying large files, which this resolves,
but 2.6.16 seems far enough to go back.
On Wed, 31 Jan 2007, Adrian Bunk wrote:
>
> Regarding the longstanding CIFS/reiserfs/XFS problems, it seems the
> status is:
>
> CIFS:
> commit cb876f451455b6187a7d69de2c112c45ec4b7f99
> Fix up CIFS for "test_clear_page_dirty()" removal
> queued for 2.6.19.3
> applies and compiles against 2.6.16
>
> reiserfs:
> commit de14569f94513279e3d44d9571a421e9da1759ae
> [PATCH] resierfs: avoid tail packing if an inode was ever mmapped
> backport to 2.6.16 required
>
> XFS:
> fix not yet in your tree
Yes. The XFS problem should only be triggerable through O_DIRECT and
non-O_DIRECT mmap at the same time, so the fix for that got pushed back as
noncritical.
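As an illustration of that trigger condition, here is a sketch of the
two conflicting access modes on one file. The file name is
hypothetical, and a real reproducer would need the two accesses racing
from separate threads or processes; this only shows their shape.

/* Illustrative only: one descriptor does O_DIRECT I/O on a file while
 * the same range is dirtied through a shared mapping of a second
 * descriptor.  O_DIRECT bypasses the page cache and invalidates the
 * cached (dirty) page underneath the mapping.  Assumes "testfile"
 * exists and is at least 4096 bytes. */
#define _GNU_SOURCE	/* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	const size_t len = 4096;	/* one page; O_DIRECT-aligned */
	int dfd = open("testfile", O_RDWR | O_DIRECT);
	int mfd = open("testfile", O_RDWR);
	void *buf;

	if (dfd < 0 || mfd < 0) {
		perror("open");
		return EXIT_FAILURE;
	}

	char *map = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_SHARED, mfd, 0);
	if (map == MAP_FAILED) {
		perror("mmap");
		return EXIT_FAILURE;
	}

	map[0] = 'x';	/* dirty the page through the mapping */

	/* O_DIRECT write to the same range, bypassing the page cache */
	if (posix_memalign(&buf, len, len) == 0) {
		memset(buf, 0, len);
		if (pwrite(dfd, buf, len, 0) < 0)
			perror("pwrite(O_DIRECT)");
		free(buf);
	}

	munmap(map, len);
	close(mfd);
	close(dfd);
	return EXIT_SUCCESS;
}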
NOTE! I'm still not 100% sure about the older bdb corruption reports. Were
they just noise due to other issues? Flaky RAM? One report of corruption
was actually due to running my test program without enough disk space, so
while that confused the issue for a while, it turned out to be a
non-issue.
So I'm just saying that there might be other things lurking too.
Linus
Hello
On Wednesday 31 January 2007 10:02, Adrian Bunk wrote:
> On Tue, Jan 30, 2007 at 06:36:48PM -0800, Linus Torvalds wrote:
> >
> >
> > On Tue, 30 Jan 2007, Mark Lord wrote:
> > >
> > > I believe our featherless leader said he thought it was an ancient bug,
> > > exacerbated by something that went into 2.6.19.
> > >
> > > If Linus's opinion is correct (still?), then the bug exists in all
> > > kernels since somewhere back in the 2.4.xx days.
> >
> > The issue was somewhat confused by people certainly *reporting* it for
> > older kernels. Also, as part of the dirty bit cleanups and sanity
> > checking, we did actually seem to fix a long-standing CIFS corruption (and
> > apparently reiserfs/XFS problems too).
> >
> > But the *common* case was actually introduced with 2.6.19, and 2.6.16
> > wouldn't be affected.
>
> Thanks for the clarifications.
>
> Regarding the longstanding CIFS/reiserfs/XFS problems, it seems the
> status is:
>
> CIFS:
> commit cb876f451455b6187a7d69de2c112c45ec4b7f99
> Fix up CIFS for "test_clear_page_dirty()" removal
> queued for 2.6.19.3
> applies and compiles against 2.6.16
>
> reiserfs:
> commit de14569f94513279e3d44d9571a421e9da1759ae
> [PATCH] resierfs: avoid tail packing if an inode was ever mmapped
> backport to 2.6.16 required
>
Here it goes:
From: [email protected]
The patch titled
resierfs: avoid tail packing if an inode was ever mmapped
has been added to the -mm tree. Its filename is
reiserfs-avoid-tail-packing-if-an-inode-was-ever-mmapped.patch
See http://www.zip.com.au/~akpm/linux/patches/stuff/added-to-mm.txt to find
out what to do about this
------------------------------------------------------
Subject: resierfs: avoid tail packing if an inode was ever mmapped
From: Vladimir Saveliev <[email protected]>
This patch fixes a confusion reiserfs has had for a long time.
On the file release operation, reiserfs used to try to pack file data
stored in the last incomplete page of some files into metadata blocks.
After packing, the page got cleared with clear_page_dirty. This did not
take into account that the page may be mmapped into another process's
address space. The recent replacement for clear_page_dirty,
cancel_dirty_page, caught the confusion with a sanity check that the
page must not be mapped.
The patch fixes the confusion by making reiserfs avoid tail packing if
an inode was ever mmapped. reiserfs_mmap and reiserfs_file_release are
serialized with a mutex in the reiserfs-specific inode. reiserfs_mmap
locks the mutex and sets a bit in the reiserfs-specific inode flags.
reiserfs_file_release checks the bit with the mutex held. If the bit is
set, tail packing is avoided. This eliminates the possibility that an
mmapped page gets cancel_dirty_page-ed.
Signed-off-by: Vladimir Saveliev <[email protected]>
Cc: Jeff Mahoney <[email protected]>
Cc: Chris Mason <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
---
diff -puN fs/reiserfs/file.c~reiserfs-avoid-tail-packing-if-an-inode-was-ever-mmapped fs/reiserfs/file.c
--- linux-2.6.16/fs/reiserfs/file.c~reiserfs-avoid-tail-packing-if-an-inode-was-ever-mmapped 2007-02-01 12:30:08.000000000 +0300
+++ linux-2.6.16-vs/fs/reiserfs/file.c 2007-02-01 12:43:07.000000000 +0300
@@ -50,6 +50,11 @@ static int reiserfs_file_release(struct
reiserfs_write_lock(inode->i_sb);
mutex_lock(&inode->i_mutex);
+
+ mutex_lock(&(REISERFS_I(inode)->i_mmap));
+ if (REISERFS_I(inode)->i_flags & i_ever_mapped)
+ REISERFS_I(inode)->i_flags &= ~i_pack_on_close_mask;
+
/* freeing preallocation only involves relogging blocks that
* are already in the current transaction. preallocation gets
* freed at the end of each transaction, so it is impossible for
@@ -100,11 +105,24 @@ static int reiserfs_file_release(struct
err = reiserfs_truncate_file(inode, 0);
}
out:
+ mutex_unlock(&(REISERFS_I(inode)->i_mmap));
mutex_unlock(&inode->i_mutex);
reiserfs_write_unlock(inode->i_sb);
return err;
}
+static int reiserfs_file_mmap(struct file *file, struct vm_area_struct *vma)
+{
+ struct inode *inode;
+
+ inode = file->f_dentry->d_inode;
+ mutex_lock(&(REISERFS_I(inode)->i_mmap));
+ REISERFS_I(inode)->i_flags |= i_ever_mapped;
+ mutex_unlock(&(REISERFS_I(inode)->i_mmap));
+
+ return generic_file_mmap(file, vma);
+}
+
static void reiserfs_vfs_truncate_file(struct inode *inode)
{
reiserfs_truncate_file(inode, 1);
@@ -1570,7 +1588,7 @@ struct file_operations reiserfs_file_ope
.read = generic_file_read,
.write = reiserfs_file_write,
.ioctl = reiserfs_ioctl,
- .mmap = generic_file_mmap,
+ .mmap = reiserfs_file_mmap,
.release = reiserfs_file_release,
.fsync = reiserfs_sync_file,
.sendfile = generic_file_sendfile,
diff -puN fs/reiserfs/inode.c~reiserfs-avoid-tail-packing-if-an-inode-was-ever-mmapped fs/reiserfs/inode.c
--- linux-2.6.16/fs/reiserfs/inode.c~reiserfs-avoid-tail-packing-if-an-inode-was-ever-mmapped 2007-02-01 12:30:08.000000000 +0300
+++ linux-2.6.16-vs/fs/reiserfs/inode.c 2007-02-01 12:41:32.000000000 +0300
@@ -1140,6 +1140,7 @@ static void init_inode(struct inode *ino
REISERFS_I(inode)->i_prealloc_count = 0;
REISERFS_I(inode)->i_trans_id = 0;
REISERFS_I(inode)->i_jl = NULL;
+ mutex_init(&(REISERFS_I(inode)->i_mmap));
REISERFS_I(inode)->i_acl_access = NULL;
REISERFS_I(inode)->i_acl_default = NULL;
init_rwsem(&REISERFS_I(inode)->xattr_sem);
@@ -1847,6 +1848,7 @@ int reiserfs_new_inode(struct reiserfs_t
REISERFS_I(inode)->i_attrs =
REISERFS_I(dir)->i_attrs & REISERFS_INHERIT_MASK;
sd_attrs_to_i_attrs(REISERFS_I(inode)->i_attrs, inode);
+ mutex_init(&(REISERFS_I(inode)->i_mmap));
REISERFS_I(inode)->i_acl_access = NULL;
REISERFS_I(inode)->i_acl_default = NULL;
init_rwsem(&REISERFS_I(inode)->xattr_sem);
diff -puN include/linux/reiserfs_fs_i.h~reiserfs-avoid-tail-packing-if-an-inode-was-ever-mmapped include/linux/reiserfs_fs_i.h
--- linux-2.6.16/include/linux/reiserfs_fs_i.h~reiserfs-avoid-tail-packing-if-an-inode-was-ever-mmapped 2007-02-01 12:30:08.000000000 +0300
+++ linux-2.6.16-vs/include/linux/reiserfs_fs_i.h 2007-02-01 12:35:50.000000000 +0300
@@ -25,6 +25,7 @@ typedef enum {
i_link_saved_truncate_mask = 0x0020,
i_has_xattr_dir = 0x0040,
i_data_log = 0x0080,
+ i_ever_mapped = 0x0100
} reiserfs_inode_flags;
struct reiserfs_inode_info {
@@ -52,6 +53,7 @@ struct reiserfs_inode_info {
** flushed */
unsigned long i_trans_id;
struct reiserfs_journal_list *i_jl;
+ struct mutex i_mmap;
struct posix_acl *i_acl_access;
struct posix_acl *i_acl_default;
_
On Thu, Feb 01, 2007 at 03:13:03PM +0300, Vladimir V. Saveliev wrote:
> Hello
Hi Vladimir,
> On Wednesday 31 January 2007 10:02, Adrian Bunk wrote:
>...
> > reiserfs:
> > commit de14569f94513279e3d44d9571a421e9da1759ae
> > [PATCH] resierfs: avoid tail packing if an inode was ever mmapped
> > backport to 2.6.16 required
>
> Here it goes:
>...
thanks a lot, applied to 2.6.16.
cu
Adrian
--
"Is there not promise of rain?" Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
"Only a promise," Lao Er said.
Pearl S. Buck - Dragon Seed
On Mon, Jan 29, 2007 at 03:38:42PM -0500, Chuck Ebbert wrote:
> Josh Boyer wrote:
> > On 1/29/07, Chuck Ebbert <[email protected]> wrote:
> >> Is there any way to estimate the size of the user base for 2.6.16?
> >>
> >> e.g. how many downloads does it get?
> >
> > Are you including distros that use it as well?
> >
>
> Yes, if they're based on Adrian's stable series.
SLES 10 is based on it. So there's a few thousand users for ya :)