2007-08-28 15:53:18

by Martin Knoblauch

Subject: Understanding I/O behaviour - next try

Keywords: I/O, bdi-v9, cfs

Hi,

A while ago I asked a few questions about Linux I/O behaviour,
because I was (still am) fighting some "misbehaviour" related to heavy
I/O.

The basic setup is a dual x86_64 box (an HP DL380) with 8 GB of memory.
It has a HW RAID5, made from 4x72GB disks and about 100 MB of write cache.
The performance of the block device with O_DIRECT is about 90 MB/sec.

The problematic behaviour comes when we are moving large files through
the system. The file usage in this case is mostly "use once" or
streaming. As soon as the amount of file data is larger than 7.5 GB, we
see occasional unresponsiveness of the system (e.g. no more ssh
connections into the box) of more than 1 or 2 minutes (!) duration
(kernels up to 2.6.19). Load goes up, mainly due to pdflush threads and
some other poor guys being in "D" state.

The data flows in basically three modes. All of them are affected:

local-disk -> NFS
NFS -> local-disk
NFS -> NFS

NFS is V3/TCP.

So, I ran a few experiments over the last few days, using three
different kernels: 2.6.22.5, 2.6.22.5+cfs20.4 and 2.6.22.5+bdi-v9.

The first observation (independent of the kernel) is that we *should*
use O_DIRECT, at least for output to the local disk. Here we see about
90 MB/sec write performance. A simple "dd" using 1, 2 and 3 parallel
threads to the same block device (through an ext2 FS) gives:

O_DIRECT: 88 MB/s, 2x44 MB/s, 3x29.5 MB/s
non-O_DIRECT: 51 MB/s, 2x19 MB/s, 3x12.5 MB/s

- Observation 1a: IO schedulers are mostly equivalent, with CFQ
slightly worse than AS and DEADLINE.
- Observation 1b: when using 2.6.22.5+cfs20.4, the non-O_DIRECT
performance goes [slightly] down. With three threads it is 3x10 MB/s.
Ingo?
- Observation 1c: bdi-v9 does not help in this case, which is not
surprising.

The real question here is why the non-O_DIRECT case is so slow. Is
this a general thing? Is it related to the CCISS controller? Using
O_DIRECT is unfortunately not an option for us.
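
For reference, a minimal sketch of the kind of O_DIRECT writer behind
such numbers (an illustration only; the file name, block size and block
count below are arbitrary, and the actual tests used plain "dd"). The
main point is that O_DIRECT needs the buffer, and usually the transfer
size and offset, to be suitably aligned:

#define _GNU_SOURCE             /* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BLKSZ   (1024 * 1024)   /* 1 MiB per write (arbitrary) */
#define NBLOCKS 1024            /* ~1 GiB total (arbitrary) */

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "testfile";
	void *buf;
	int fd, i;

	/* O_DIRECT requires an aligned buffer; 4096 covers common sector sizes */
	if (posix_memalign(&buf, 4096, BLKSZ)) {
		perror("posix_memalign");
		return 1;
	}
	memset(buf, 0xab, BLKSZ);

	fd = open(path, O_WRONLY | O_CREAT | O_TRUNC | O_DIRECT, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	for (i = 0; i < NBLOCKS; i++) {
		if (write(fd, buf, BLKSZ) != BLKSZ) {
			perror("write");
			return 1;
		}
	}
	close(fd);
	free(buf);
	return 0;
}

Running one instance (e.g. under "time") gives the single-stream figure;
starting two or three instances against the same filesystem gives the
parallel numbers.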

When using three different targets (local disk plus two different NFS
filesystems), bdi-v9 is a big winner. Without it, all threads are [or
seem to be] limited to the speed of the slowest FS. With bdi-v9 we see a
considerable speedup.

Just by chance I found out that doing all I/O in sync mode does
prevent the load from going up. Of course, I/O throughput is not
stellar (but not much worse than the non-O_DIRECT case). But the
responsiveness seems OK. Maybe a solution, as this can be controlled via
mount (would be great for O_DIRECT :-).

In general 2.6.22 seems to be better than 2.6.19, but this is highly
subjective :-( I am using the following settings in /proc. They seem to
provide the smoothest responsiveness:

vm.dirty_background_ratio = 1
vm.dirty_ratio = 1
vm.swappiness = 1
vm.vfs_cache_pressure = 1
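
(For scale, assuming the ratios are applied against the full 8 GB: a
value of 1 means background writeback and write throttling start at
roughly 80 MB of dirty data, instead of the several hundred MB or more
that the default ratios would allow to accumulate.)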

Another thing I saw during my tests is that when writing to NFS, the
"dirty" or "nr_dirty" numbers are always 0. Is this a conceptual thing,
or a bug?

In any case, view this as a report for one specific load case that does
not behave very well. It seems there are ways to make things better
(sync, per-device throttling, ...), but nothing "perfect" yet. "Use once"
does seem to be a problem.

Cheers
Martin

------------------------------------------------------
Martin Knoblauch
email: k n o b i AT knobisoft DOT de
www: http://www.knobisoft.de


2007-08-29 01:39:16

by Wu Fengguang

Subject: Re: Understanding I/O behaviour - next try

On Tue, Aug 28, 2007 at 08:53:07AM -0700, Martin Knoblauch wrote:
[...]
> The basic setup is a dual x86_64 box with 8 GB of memory. The DL380
> has a HW RAID5, made from 4x72GB disks and about 100 MB write cache.
> The performance of the block device with O_DIRECT is about 90 MB/sec.
>
> The problematic behaviour comes when we are moving large files through
> the system. The file usage in this case is mostly "use once" or
> streaming. As soon as the amount of file data is larger than 7.5 GB, we
> see occasional unresponsiveness of the system (e.g. no more ssh
> connections into the box) of more than 1 or 2 minutes (!) duration
> (kernels up to 2.6.19). Load goes up, mainly due to pdflush threads and
> some other poor guys being in "D" state.
[...]
> Just by chance I found out that doing all I/O in sync mode does
> prevent the load from going up. Of course, I/O throughput is not
> stellar (but not much worse than the non-O_DIRECT case). But the
> responsiveness seems OK. Maybe a solution, as this can be controlled via
> mount (would be great for O_DIRECT :-).
>
> In general 2.6.22 seems to be better than 2.6.19, but this is highly
> subjective :-( I am using the following settings in /proc. They seem to
> provide the smoothest responsiveness:
>
> vm.dirty_background_ratio = 1
> vm.dirty_ratio = 1
> vm.swappiness = 1
> vm.vfs_cache_pressure = 1

You are apparently running into the sluggish kupdate-style writeback
problem with large files: huge amounts of dirty pages get accumulated
and then flushed to the disk all at once when the dirty background
ratio is reached. The current -mm tree has some fixes for it, and
there are some more in my tree. Martin, I'll send you the patch if
you'd like to try it out.

> Another thing I saw during my tests is that when writing to NFS, the
> "dirty" or "nr_dirty" numbers are always 0. Is this a conceptual thing,
> or a bug?

What are the nr_unstable numbers?

Fengguang

2007-08-29 08:16:09

by Martin Knoblauch

Subject: Re: Understanding I/O behaviour - next try


--- Fengguang Wu <[email protected]> wrote:

> On Tue, Aug 28, 2007 at 08:53:07AM -0700, Martin Knoblauch wrote:
> [...]
> > The basic setup is a dual x86_64 box with 8 GB of memory. The DL380
> > has a HW RAID5, made from 4x72GB disks and about 100 MB write cache.
> > The performance of the block device with O_DIRECT is about 90 MB/sec.
> >
> > The problematic behaviour comes when we are moving large files through
> > the system. The file usage in this case is mostly "use once" or
> > streaming. As soon as the amount of file data is larger than 7.5 GB, we
> > see occasional unresponsiveness of the system (e.g. no more ssh
> > connections into the box) of more than 1 or 2 minutes (!) duration
> > (kernels up to 2.6.19). Load goes up, mainly due to pdflush threads and
> > some other poor guys being in "D" state.
> [...]
> > Just by chance I found out that doing all I/O in sync mode does
> > prevent the load from going up. Of course, I/O throughput is not
> > stellar (but not much worse than the non-O_DIRECT case). But the
> > responsiveness seems OK. Maybe a solution, as this can be controlled via
> > mount (would be great for O_DIRECT :-).
> >
> > In general 2.6.22 seems to be better than 2.6.19, but this is highly
> > subjective :-( I am using the following settings in /proc. They seem to
> > provide the smoothest responsiveness:
> >
> > vm.dirty_background_ratio = 1
> > vm.dirty_ratio = 1
> > vm.swappiness = 1
> > vm.vfs_cache_pressure = 1
>
> You are apparently running into the sluggish kupdate-style writeback
> problem with large files: huge amount of dirty pages are getting
> accumulated and flushed to the disk all at once when dirty background
> ratio is reached. The current -mm tree has some fixes for it, and
> there are some more in my tree. Martin, I'll send you the patch if
> you'd like to try it out.
>
Hi Fengguang,

Yeah, that pretty much describes the situation we end up in. Although
"sluggish" is much too friendly when we hit it :-)

Yes, I am very interested in checking out your patch. I saw your
postings on LKML already and was curious. Any chance you have
something against 2.6.22-stable? I have reasons not to move to -23 or
-mm.

> > Another thing I saw during my tests is that when writing to NFS, the
> > "dirty" or "nr_dirty" numbers are always 0. Is this a conceptual thing,
> > or a bug?
>
> What are the nr_unstable numbers?
>

Ahh. Yes, they go up to 80-90k pages. Comparable to the nr_dirty
numbers for the disk case. Good to know.

For NFS, the nr_writeback numbers seem surprisingly high. They also go
to 80-90k (pages ?). In the disk case they rarely go over 12k.
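
(For scale, if these counters are indeed 4 KiB pages: 80-90k pages would
be roughly 320-360 MB in writeback/unstable state against the NFS
server, versus on the order of 50 MB of writeback in the local-disk
case.)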

Cheers
Martin
> Fengguang
>
>


------------------------------------------------------
Martin Knoblauch
email: k n o b i AT knobisoft DOT de
www: http://www.knobisoft.de

2007-08-29 08:40:58

by Wu Fengguang

Subject: Re: Understanding I/O behaviour - next try

On Wed, Aug 29, 2007 at 01:15:45AM -0700, Martin Knoblauch wrote:
>
> --- Fengguang Wu <[email protected]> wrote:
>
> > You are apparently running into the sluggish kupdate-style writeback
> > problem with large files: huge amount of dirty pages are getting
> > accumulated and flushed to the disk all at once when dirty background
> > ratio is reached. The current -mm tree has some fixes for it, and
> > there are some more in my tree. Martin, I'll send you the patch if
> > you'd like to try it out.
> >
> Hi Fengguang,
>
> Yeah, that pretty much describes the situation we end up in. Although
> "sluggish" is much too friendly when we hit it :-)
>
> Yes, I am very interested in checking out your patch. I saw your
> postings on LKML already and was curious. Any chance you have
> something against 2.6.22-stable? I have reasons not to move to -23 or
> -mm.

Well, they are a dozen patches from various sources. I managed to
back-port them. It compiles and runs; however, I cannot guarantee
more...

> > > Another thing I saw during my tests is that when writing to NFS, the
> > > "dirty" or "nr_dirty" numbers are always 0. Is this a conceptual thing,
> > > or a bug?
> >
> > What are the nr_unstable numbers?
> >
>
> Ahh. Yes, they go up to 80-90k pages. Comparable to the nr_dirty
> numbers for the disk case. Good to know.
>
> For NFS, the nr_writeback numbers seem surprisingly high. They also go
> to 80-90k (pages ?). In the disk case they rarely go over 12k.

Maybe it is the difference between throttling one single 'cp' and a
dozen 'nfsd'?

Fengguang


Attachments:
writeback-fixes.patch (33.04 kB)

2007-08-29 09:22:58

by Martin Knoblauch

Subject: Re: Understanding I/O behaviour - next try


--- Fengguang Wu <[email protected]> wrote:

> On Wed, Aug 29, 2007 at 01:15:45AM -0700, Martin Knoblauch wrote:
> >
> > --- Fengguang Wu <[email protected]> wrote:
> >
> > > You are apparently running into the sluggish kupdate-style writeback
> > > problem with large files: huge amount of dirty pages are getting
> > > accumulated and flushed to the disk all at once when dirty background
> > > ratio is reached. The current -mm tree has some fixes for it, and
> > > there are some more in my tree. Martin, I'll send you the patch if
> > > you'd like to try it out.
> > >
> > Hi Fengguang,
> >
> > Yeah, that pretty much describes the situation we end up in. Although
> > "sluggish" is much too friendly when we hit it :-)
> >
> > Yes, I am very interested in checking out your patch. I saw your
> > postings on LKML already and was curious. Any chance you have
> > something against 2.6.22-stable? I have reasons not to move to -23 or
> > -mm.
>
> Well, they are a dozen patches from various sources. I managed to
> back-port them. It compiles and runs, however I cannot guarantee
> more...
>

Thanks. I understand the limited scope of the warranty :-) I will give
it a spin today.

> > > > Another thing I saw during my tests is that when writing to NFS,
> > > > the "dirty" or "nr_dirty" numbers are always 0. Is this a
> > > > conceptual thing, or a bug?
> > >
> > > What are the nr_unstable numbers?
> > >
> >
> > Ahh. Yes, they go up to 80-90k pages. Comparable to the nr_dirty
> > numbers for the disk case. Good to know.
> >
> > For NFS, the nr_writeback numbers seem surprisingly high. They also go
> > to 80-90k (pages ?). In the disk case they rarely go over 12k.
>
> Maybe it is the difference between throttling one single 'cp' and a
> dozen 'nfsd'?
>

No "nfsd" running on that box. It is just a client.

Cheers
Martin


------------------------------------------------------
Martin Knoblauch
email: k n o b i AT knobisoft DOT de
www: http://www.knobisoft.de

2007-08-29 09:48:17

by Jens Axboe

Subject: Re: Understanding I/O behaviour - next try

On Tue, Aug 28 2007, Martin Knoblauch wrote:
> Keywords: I/O, bdi-v9, cfs
>
> Hi,
>
> a while ago I asked a few questions on the Linux I/O behaviour,
> because I was (still am) fighting some "misbehaviour" related to heavy
> I/O.
>
> The basic setup is a dual x86_64 box with 8 GB of memory. The DL380
> has a HW RAID5, made from 4x72GB disks and about 100 MB write cache.
> The performance of the block device with O_DIRECT is about 90 MB/sec.
>
> The problematic behaviour comes when we are moving large files through
> the system. The file usage in this case is mostly "use once" or
> streaming. As soon as the amount of file data is larger than 7.5 GB, we
> see occasional unresponsiveness of the system (e.g. no more ssh
> connections into the box) of more than 1 or 2 minutes (!) duration
> (kernels up to 2.6.19). Load goes up, mainly due to pdflush threads and
> some other poor guys being in "D" state.
>
> The data flows in basically three modes. All of them are affected:
>
> local-disk -> NFS
> NFS -> local-disk
> NFS -> NFS
>
> NFS is V3/TCP.
>
> So, I made a few experiments in the last few days, using three
> different kernels: 2.6.22.5, 2.6.22.5+cfs20.4 and 2.6.22.5+bdi-v9.
>
> The first observation (independent of the kernel) is that we *should*
> use O_DIRECT, at least for output to the local disk. Here we see about
> 90 MB/sec write performance. A simple "dd" using 1, 2 and 3 parallel
> threads to the same block device (through an ext2 FS) gives:
>
> O_Direct: 88 MB/s, 2x44, 3x29.5
> non-O_DIRECT: 51 MB/s, 2x19, 3x12.5
>
> - Observation 1a: IO schedulers are mostly equivalent, with CFQ
> slightly worse than AS and DEADLINE
> - Observation 1b: when using a 2.6.22.5+cfs20.4, the non-O_DIRECT
> performance goes [slightly] down. With three threads it is 3x10 MB/s.
> Ingo?
> - Observation 1c: bdi-v9 does not help in this case, which is not
> surprising.
>
> The real question here is why the non-O_DIRECT case is so slow. Is
> this a general thing? Is this related to the CCISS controller? Using
> O_DIRECT is unfortunately not an option for us.
>
> When using three different targets (local disk plus two different NFS
> Filesystems) bdi-v9 is a big winner. Without it, all threads are [seem
> to be] limited to the speed of the slowest FS. With bdi-v9 we see a
> considerable speedup.
>
> Just by chance I found out that doing all I/O in sync mode does
> prevent the load from going up. Of course, I/O throughput is not
> stellar (but not much worse than the non-O_DIRECT case). But the
> responsiveness seems OK. Maybe a solution, as this can be controlled via
> mount (would be great for O_DIRECT :-).
>
> In general 2.6.22 seems to be better than 2.6.19, but this is highly
> subjective :-( I am using the following settings in /proc. They seem to
> provide the smoothest responsiveness:
>
> vm.dirty_background_ratio = 1
> vm.dirty_ratio = 1
> vm.swappiness = 1
> vm.vfs_cache_pressure = 1
>
> Another thing I saw during my tests is that when writing to NFS, the
> "dirty" or "nr_dirty" numbers are always 0. Is this a conceptual thing,
> or a bug?
>
> In any case, view this as a report for one specific load case that does
> not behave very well. It seems there are ways to make things better
> (sync, per-device throttling, ...), but nothing "perfect" yet. Use once
> does seem to be a problem.

Try limiting the queue depth on the cciss device, some of those are
notoriously bad at starving commands. Something like the below hack, see
if it makes a difference (and please verify in dmesg that it prints the
message about limiting depth!):

diff --git a/drivers/block/cciss.c b/drivers/block/cciss.c
index 084358a..257e1c3 100644
--- a/drivers/block/cciss.c
+++ b/drivers/block/cciss.c
@@ -2992,7 +2992,12 @@ static int cciss_pci_init(ctlr_info_t *c, struct pci_dev *pdev)
 		if (board_id == products[i].board_id) {
 			c->product_name = products[i].product_name;
 			c->access = *(products[i].access);
+#if 0
 			c->nr_cmds = products[i].nr_cmds;
+#else
+			c->nr_cmds = 2;
+			printk("cciss: limited max commands to 2\n");
+#endif
 			break;
 		}
 	}
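
(The hunk just ignores the per-product nr_cmds value from the products[]
table and caps the controller at two outstanding commands; if
responsiveness improves with such a drastic limit, that would point at
commands being starved behind a deep controller queue rather than at the
VM/writeback side.)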

--
Jens Axboe

2007-08-29 14:26:31

by Martin Knoblauch

Subject: Re: Understanding I/O behaviour - next try


--- Jens Axboe <[email protected]> wrote:

> On Tue, Aug 28 2007, Martin Knoblauch wrote:
> > Keywords: I/O, bdi-v9, cfs
> >
>
> Try limiting the queue depth on the cciss device, some of those are
> notoriously bad at starving commands. Something like the below hack, see
> if it makes a difference (and please verify in dmesg that it prints the
> message about limiting depth!):
>
> diff --git a/drivers/block/cciss.c b/drivers/block/cciss.c
> index 084358a..257e1c3 100644
> --- a/drivers/block/cciss.c
> +++ b/drivers/block/cciss.c
> @@ -2992,7 +2992,12 @@ static int cciss_pci_init(ctlr_info_t *c, struct pci_dev *pdev)
>  		if (board_id == products[i].board_id) {
>  			c->product_name = products[i].product_name;
>  			c->access = *(products[i].access);
> +#if 0
>  			c->nr_cmds = products[i].nr_cmds;
> +#else
> +			c->nr_cmds = 2;
> +			printk("cciss: limited max commands to 2\n");
> +#endif
>  			break;
>  		}
>  	}
>
> --
> Jens Axboe
>
>
>
Hi Jens,

thanks for the suggestion. Unfortunately the non-direct [parallel]
writes to the device got considerably slower. I guess the "6i"
controller copes better with higher values.

Can nr_cmds be changed at runtime? Maybe there is an optimal setting.

[ 69.438851] SCSI subsystem initialized
[ 69.442712] HP CISS Driver (v 3.6.14)
[ 69.442871] ACPI: PCI Interrupt 0000:04:03.0[A] -> GSI 51 (level,
low) -> IRQ 51
[ 69.442899] cciss: limited max commands to 2 (Smart Array 6i)
[ 69.482370] cciss0: <0x46> at PCI 0000:04:03.0 IRQ 51 using DAC
[ 69.494352] blocks= 426759840 block_size= 512
[ 69.498350] heads=255, sectors=32, cylinders=52299
[ 69.498352]
[ 69.498509] blocks= 426759840 block_size= 512
[ 69.498602] heads=255, sectors=32, cylinders=52299
[ 69.498604]
[ 69.498608] cciss/c0d0: p1 p2

Cheers
Martin


------------------------------------------------------
Martin Knoblauch
email: k n o b i AT knobisoft DOT de
www: http://www.knobisoft.de

2007-08-29 14:29:31

by Robert Hancock

Subject: Re: Understanding I/O behaviour - next try

Jens Axboe wrote:
> On Tue, Aug 28 2007, Martin Knoblauch wrote:
>> Keywords: I/O, bdi-v9, cfs
>>
>> Hi,
>>
>> a while ago I asked a few questions on the Linux I/O behaviour,
>> because I was (still am) fighting some "misbehaviour" related to heavy
>> I/O.
>>
>> The basic setup is a dual x86_64 box with 8 GB of memory. The DL380
>> has a HW RAID5, made from 4x72GB disks and about 100 MB write cache.
>> The performance of the block device with O_DIRECT is about 90 MB/sec.
>>
>> The problematic behaviour comes when we are moving large files through
>> the system. The file usage in this case is mostly "use once" or
>> streaming. As soon as the amount of file data is larger than 7.5 GB, we
>> see occasional unresponsiveness of the system (e.g. no more ssh
>> connections into the box) of more than 1 or 2 minutes (!) duration
>> (kernels up to 2.6.19). Load goes up, mainly due to pdflush threads and
>> some other poor guys being in "D" state.
>>
>> The data flows in basically three modes. All of them are affected:
>>
>> local-disk -> NFS
>> NFS -> local-disk
>> NFS -> NFS
>>
>> NFS is V3/TCP.
>>
>> So, I made a few experiments in the last few days, using three
>> different kernels: 2.6.22.5, 2.6.22.5+cfs20.4 and 2.6.22.5+bdi-v9.
>>
>> The first observation (independent of the kernel) is that we *should*
>> use O_DIRECT, at least for output to the local disk. Here we see about
>> 90 MB/sec write performance. A simple "dd" using 1, 2 and 3 parallel
>> threads to the same block device (through an ext2 FS) gives:
>>
>> O_Direct: 88 MB/s, 2x44, 3x29.5
>> non-O_DIRECT: 51 MB/s, 2x19, 3x12.5
>>
>> - Observation 1a: IO schedulers are mostly equivalent, with CFQ
>> slightly worse than AS and DEADLINE
>> - Observation 1b: when using a 2.6.22.5+cfs20.4, the non-O_DIRECT
>> performance goes [slightly] down. With three threads it is 3x10 MB/s.
>> Ingo?
>> - Observation 1c: bdi-v9 does not help in this case, which is not
>> surprising.
>>
>> The real question here is why the non-O_DIRECT case is so slow. Is
>> this a general thing? Is this related to the CCISS controller? Using
>> O_DIRECT is unfortunately not an option for us.
>>
>> When using three different targets (local disk plus two different NFS
>> Filesystems) bdi-v9 is a big winner. Without it, all threads are [seem
>> to be] limited to the speed of the slowest FS. With bdi-v9 we see a
>> considerable speedup.
>>
>> Just by chance I found out that doing all I/O in sync mode does
>> prevent the load from going up. Of course, I/O throughput is not
>> stellar (but not much worse than the non-O_DIRECT case). But the
>> responsiveness seems OK. Maybe a solution, as this can be controlled via
>> mount (would be great for O_DIRECT :-).
>>
>> In general 2.6.22 seems to be better than 2.6.19, but this is highly
>> subjective :-( I am using the following settings in /proc. They seem to
>> provide the smoothest responsiveness:
>>
>> vm.dirty_background_ratio = 1
>> vm.dirty_ratio = 1
>> vm.swappiness = 1
>> vm.vfs_cache_pressure = 1
>>
>> Another thing I saw during my tests is that when writing to NFS, the
>> "dirty" or "nr_dirty" numbers are always 0. Is this a conceptual thing,
>> or a bug?
>>
>> In any case, view this as a report for one specific load case that does
>> not behave very well. It seems there are ways to make things better
>> (sync, per-device throttling, ...), but nothing "perfect" yet. Use once
>> does seem to be a problem.
>
> Try limiting the queue depth on the cciss device, some of those are
> notoriously bad at starving commands. Something like the below hack, see
> if it makes a difference (and please verify in dmesg that it prints the
> message about limiting depth!):

I saw a bulletin from HP recently that suggested disabling the
write-back cache on some Smart Array controllers as a workaround,
because it reduced performance in applications that did large bulk
writes. Presumably they are planning on releasing some updated firmware
that fixes this eventually.

--
Robert Hancock Saskatoon, SK, Canada
To email, remove "nospam" from [email protected]
Home Page: http://www.roberthancock.com/

2007-08-29 16:25:26

by Chuck Ebbert

Subject: Re: Understanding I/O behaviour - next try

On 08/28/2007 11:53 AM, Martin Knoblauch wrote:
>
> The basic setup is a dual x86_64 box with 8 GB of memory. The DL380
> has a HW RAID5, made from 4x72GB disks and about 100 MB write cache.
> The performance of the block device with O_DIRECT is about 90 MB/sec.
>
> The problematic behaviour comes when we are moving large files through
> the system. The file usage in this case is mostly "use once" or
> streaming. As soon as the amount of file data is larger than 7.5 GB, we
> see occasional unresponsiveness of the system (e.g. no more ssh
> connections into the box) of more than 1 or 2 minutes (!) duration
> (kernels up to 2.6.19). Load goes up, mainly due to pdflush threads and
> some other poor guys being in "D" state.

Try booting with "mem=4096M", "mem=2048M", ...

2007-08-29 21:43:24

by Martin Knoblauch

Subject: Re: Understanding I/O behaviour - next try


--- Chuck Ebbert <[email protected]> wrote:

> On 08/28/2007 11:53 AM, Martin Knoblauch wrote:
> >
> > The basic setup is a dual x86_64 box with 8 GB of memory. The DL380
> > has a HW RAID5, made from 4x72GB disks and about 100 MB write cache.
> > The performance of the block device with O_DIRECT is about 90 MB/sec.
> >
> > The problematic behaviour comes when we are moving large files through
> > the system. The file usage in this case is mostly "use once" or
> > streaming. As soon as the amount of file data is larger than 7.5 GB, we
> > see occasional unresponsiveness of the system (e.g. no more ssh
> > connections into the box) of more than 1 or 2 minutes (!) duration
> > (kernels up to 2.6.19). Load goes up, mainly due to pdflush threads and
> > some other poor guys being in "D" state.
>
> Try booting with "mem=4096M", "mem=2048M", ...
>
>

Hmm. I tried 1024M a while ago and IIRC did not see a lot of [any]
difference. But as it is no big deal, I will repeat it tomorrow.

Just curious - what are you expecting? Why should it help?

Thanks
Martin


------------------------------------------------------
Martin Knoblauch
email: k n o b i AT knobisoft DOT de
www: http://www.knobisoft.de

2007-08-30 10:26:32

by Martin Knoblauch

Subject: Re: Understanding I/O behaviour - next try


--- Robert Hancock <[email protected]> wrote:

>
> I saw a bulletin from HP recently that suggested disabling the
> write-back cache on some Smart Array controllers as a workaround,
> because it reduced performance in applications that did large bulk writes.
> Presumably they are planning on releasing some updated firmware that
> fixes this eventually.
>
> --
> Robert Hancock Saskatoon, SK, Canada
> To email, remove "nospam" from [email protected]
> Home Page: http://www.roberthancock.com/
>
Robert,

just checked it out. At least with the "6i", you do not want to
disable the WBC :-) Performance really goes down the toilet for all
cases.

Do you still have a pointer to that bulletin?

Cheers
Martin

------------------------------------------------------
Martin Knoblauch
email: k n o b i AT knobisoft DOT de
www: http://www.knobisoft.de

2007-08-30 10:50:19

by Martin Knoblauch

Subject: Re: Understanding I/O behaviour - next try


--- Jens Axboe <[email protected]> wrote:

>
> Try limiting the queue depth on the cciss device, some of those are
> notoriously bad at starving commands. Something like the below hack, see
> if it makes a difference (and please verify in dmesg that it prints the
> message about limiting depth!):
>
> diff --git a/drivers/block/cciss.c b/drivers/block/cciss.c
> index 084358a..257e1c3 100644
> --- a/drivers/block/cciss.c
> +++ b/drivers/block/cciss.c
> @@ -2992,7 +2992,12 @@ static int cciss_pci_init(ctlr_info_t *c, struct pci_dev *pdev)
>  		if (board_id == products[i].board_id) {
>  			c->product_name = products[i].product_name;
>  			c->access = *(products[i].access);
> +#if 0
>  			c->nr_cmds = products[i].nr_cmds;
> +#else
> +			c->nr_cmds = 2;
> +			printk("cciss: limited max commands to 2\n");
> +#endif
>  			break;
>  		}
>  	}
>
> --
> Jens Axboe
>
>
Hi Jens,

how exactly is the queue depth related to the max # of commands? I
ask, because with the 2.6.22 kernel the "maximum queue depth since
init" never seems to go higher than 16, even with many more
outstanding commands. On a 2.6.19 kernel, the maximum queue depth is
much higher, just a bit below the "max # of commands since init".

[2.6.22]# cat /proc/driver/cciss/cciss0
cciss0: HP Smart Array 6i Controller
Board ID: 0x40910e11
Firmware Version: 2.76
IRQ: 51
Logical drives: 1
Max sectors: 2048
Current Q depth: 0
Current # commands on controller: 145
Max Q depth since init: 16
Max # commands on controller since init: 204
Max SG entries since init: 31
Sequential access devices: 0

[2.6.19] cat /proc/driver/cciss/cciss0
cciss0: HP Smart Array 6i Controller
Board ID: 0x40910e11
Firmware Version: 2.76
IRQ: 51
Logical drives: 1
Current Q depth: 0
Current # commands on controller: 0
Max Q depth since init: 197
Max # commands on controller since init: 198
Max SG entries since init: 31
Sequential access devices: 0

Cheers
Martin




------------------------------------------------------
Martin Knoblauch
email: k n o b i AT knobisoft DOT de
www: http://www.knobisoft.de

2007-09-13 14:17:49

by Peter Zijlstra

Subject: Re: Understanding I/O behaviour - next try

On Wed, 2007-08-29 at 01:15 -0700, Martin Knoblauch wrote:

> > > Another thing I saw during my tests is that when writing to NFS, the
> > > "dirty" or "nr_dirty" numbers are always 0. Is this a conceptual thing,
> > > or a bug?
> >
> > What are the nr_unstable numbers?

NFS has the concept of unstable storage: a state in which it is agreed
that the page has been transferred to the remote server, but has not
yet been written to disk there.

> Ahh. Yes, they go up to 80-90k pages. Comparable to the nr_dirty
> numbers for the disk case. Good to know.
>
> For NFS, the nr_writeback numbers seem surprisingly high. They also go
> to 80-90k (pages ?). In the disk case they rarely go over 12k.

see: /proc/sys/fs/nfs/nfs_congestion_kb

That is the limit for when the nfs BDI is marked congested, so
nfs_writeout + nfs_unstable <= nfs_congestion_kb

The nfs_dirty always being 0 just means that pages very quickly start
their writeout cycle.
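
As a rough way of watching these numbers together on a running client,
something like the sketch below can be used (an illustration only; it
assumes the "NFS_Unstable" and "Writeback" fields of /proc/meminfo plus
the sysctl path named above, and "Writeback" counts all writeback pages,
not only NFS, so this is only an approximation of the per-BDI
comparison):

#include <stdio.h>
#include <string.h>

/* Return the value (in kB) of a "Key:   12345 kB" line from /proc/meminfo,
 * or -1 if the key is not present. */
static long meminfo_kb(const char *key)
{
	char line[128];
	long val = -1;
	FILE *f = fopen("/proc/meminfo", "r");

	if (!f)
		return -1;
	while (fgets(line, sizeof(line), f)) {
		size_t n = strlen(key);
		if (!strncmp(line, key, n) && line[n] == ':') {
			sscanf(line + n + 1, "%ld", &val);
			break;
		}
	}
	fclose(f);
	return val;
}

int main(void)
{
	long unstable = meminfo_kb("NFS_Unstable");
	long writeback = meminfo_kb("Writeback");
	long limit = -1;
	FILE *f = fopen("/proc/sys/fs/nfs/nfs_congestion_kb", "r");

	if (f) {
		if (fscanf(f, "%ld", &limit) != 1)
			limit = -1;
		fclose(f);
	}
	printf("NFS_Unstable %ld kB, Writeback %ld kB, nfs_congestion_kb %ld kB\n",
	       unstable, writeback, limit);
	if (unstable >= 0 && writeback >= 0 && limit > 0)
		printf("unstable + writeback = %ld kB (%s the congestion limit)\n",
		       unstable + writeback,
		       unstable + writeback > limit ? "above" : "at or below");
	return 0;
}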

