2010-04-08 01:38:34

by Brian Haslett

Subject: [PATCH] increase pipe size/buffers/atomicity :D

(tested and working with a 2.6.32.8 kernel, on an Athlon/686)


--- include/linux/pipe_fs_i.h.orig 2010-04-06 22:56:51.000000000 -0500
+++ include/linux/pipe_fs_i.h 2010-04-06 22:56:58.000000000 -0500
@@ -3,7 +3,7 @@

#define PIPEFS_MAGIC 0x50495045

-#define PIPE_BUFFERS (16)
+#define PIPE_BUFFERS (32)

#define PIPE_BUF_FLAG_LRU 0x01 /* page is on the LRU */
#define PIPE_BUF_FLAG_ATOMIC 0x02 /* was atomically mapped */
--- include/asm-generic/page.h.orig 2010-04-06 22:57:08.000000000 -0500
+++ include/asm-generic/page.h 2010-04-06 22:57:23.000000000 -0500
@@ -12,7 +12,7 @@

/* PAGE_SHIFT determines the page size */

-#define PAGE_SHIFT 12
+#define PAGE_SHIFT 13
#ifdef __ASSEMBLY__
#define PAGE_SIZE (1 << PAGE_SHIFT)
#else
--- include/linux/limits.h.orig 2010-04-06 22:54:15.000000000 -0500
+++ include/linux/limits.h 2010-04-06 22:56:28.000000000 -0500
@@ -10,7 +10,7 @@
#define MAX_INPUT 255 /* size of the type-ahead buffer */
#define NAME_MAX 255 /* # chars in a file name */
#define PATH_MAX 4096 /* # chars in a path name including nul */
-#define PIPE_BUF 4096 /* # bytes in atomic write to a pipe */
+#define PIPE_BUF 8192 /* # bytes in atomic write to a pipe */
#define XATTR_NAME_MAX 255 /* # chars in an extended attribute name */
#define XATTR_SIZE_MAX 65536 /* size of an extended attribute value (64k) */
#define XATTR_LIST_MAX 65536 /* size of extended attribute namelist (64k) */


2010-04-08 05:11:15

by Eric Dumazet

Subject: Re: [PATCH] increase pipe size/buffers/atomicity :D

On Wednesday, April 7, 2010 at 19:38 -0600, brian wrote:
> (tested and working with a 2.6.32.8 kernel, on an Athlon/686)
>
>
> --- include/linux/pipe_fs_i.h.orig 2010-04-06 22:56:51.000000000 -0500
> +++ include/linux/pipe_fs_i.h 2010-04-06 22:56:58.000000000 -0500
> @@ -3,7 +3,7 @@
>
> #define PIPEFS_MAGIC 0x50495045
>
> -#define PIPE_BUFFERS (16)
> +#define PIPE_BUFFERS (32)

Doing such a thing puts high pressure on stack usage in some parts of
the kernel, and actually slows down some benchmarks.


2010-04-08 15:14:55

by Steven J. Magnani

Subject: Re: [PATCH] increase pipe size/buffers/atomicity :D

Brian -

On Wed, 2010-04-07 at 19:38 -0600, brian wrote:
> (tested and working with a 2.6.32.8 kernel, on an Athlon/686)

It would be good to know what issue this addresses. That gives people a
way to weigh any side effects/drawbacks against the benefits, and an
opportunity to suggest alternate/better approaches.

> --- include/linux/pipe_fs_i.h.orig 2010-04-06 22:56:51.000000000 -0500
> +++ include/linux/pipe_fs_i.h 2010-04-06 22:56:58.000000000 -0500
> @@ -3,7 +3,7 @@
>
> #define PIPEFS_MAGIC 0x50495045
>
> -#define PIPE_BUFFERS (16)
> +#define PIPE_BUFFERS (32)

This worries me. In several places there are functions with 2 or 3
pointer arrays of dimension [PIPE_BUFFERS] on the stack. So this adds
anywhere from 128 to 384 bytes to the stack in these functions depending
on sizeof(void*) and the number of arrays.

>
> #define PIPE_BUF_FLAG_LRU 0x01 /* page is on the LRU */
> #define PIPE_BUF_FLAG_ATOMIC 0x02 /* was atomically mapped */
> --- include/asm-generic/page.h.orig 2010-04-06 22:57:08.000000000 -0500
> +++ include/asm-generic/page.h 2010-04-06 22:57:23.000000000 -0500
> @@ -12,7 +12,7 @@
>
> /* PAGE_SHIFT determines the page size */
>
> -#define PAGE_SHIFT 12
> +#define PAGE_SHIFT 13

This has pretty wide-ranging implications, both within and across
arches. I don't think it's something that can be changed easily. Also, I
don't believe this #define is used in your configuration (Athlon/686)
unless you're running without an MMU.

> #ifdef __ASSEMBLY__
> #define PAGE_SIZE (1 << PAGE_SHIFT)
> #else
> --- include/linux/limits.h.orig 2010-04-06 22:54:15.000000000 -0500
> +++ include/linux/limits.h 2010-04-06 22:56:28.000000000 -0500
> @@ -10,7 +10,7 @@
> #define MAX_INPUT 255 /* size of the type-ahead buffer */
> #define NAME_MAX 255 /* # chars in a file name */
> #define PATH_MAX 4096 /* # chars in a path name including nul */
> -#define PIPE_BUF 4096 /* # bytes in atomic write to a pipe */
> +#define PIPE_BUF 8192 /* # bytes in atomic write to a pipe */

I don't see this being used within the kernel, so I assume it's a
userspace representation of PAGE_SIZE (ARM seems to associate these
explicitly). I would think you'd need to rebuild your glibc or
equivalent to notice any difference from a change.

Regards,
------------------------------------------------------------------------
Steven J. Magnani "I claim this network for MARS!
http://www.digidescorp.com Earthling, return my space modulator!"

#include <standard.disclaimer>


2010-04-09 19:51:23

by Brian Haslett

Subject: Re: [PATCH] increase pipe size/buffers/atomicity :D

> On Wed, 2010-04-07 at 19:38 -0600, brian wrote:
>> (tested and working with a 2.6.32.8 kernel, on an Athlon/686)
>
> It would be good to know what issue this addresses. That gives people a
> way to weigh any side effects/drawbacks against the benefits, and an
> opportunity to suggest alternate/better approaches.
>

I wouldn't say it addresses anything that I'd really consider broken;
it started as a personal experiment of mine, aimed at some little
performance gain. I figured, hey, bigger pipes, why not? Looks like
these pipe sizes have practically been around since the epoch.


>> #define PIPE_BUF_FLAG_LRU 0x01 /* page is on the LRU */
>> #define PIPE_BUF_FLAG_ATOMIC 0x02 /* was atomically mapped */
>> --- include/asm-generic/page.h.orig 2010-04-06 22:57:08.000000000
>> -0500
>> +++ include/asm-generic/page.h 2010-04-06 22:57:23.000000000 -0500
>> @@ -12,7 +12,7 @@
>>
>> /* PAGE_SHIFT determines the page size */
>>
>> -#define PAGE_SHIFT 12
>> +#define PAGE_SHIFT 13
>
> This has pretty wide-ranging implications, both within and across
> arches. I don't think it's something that can be changed easily. Also, I
> don't believe this #define is used in your configuration (Athlon/686)
> unless you're running without an MMU.
>

Actually, the reason I went after this gets into the only reason I
started this whole ordeal to begin with: line #135 in pipe_fs_i.h,
which reads "#define PIPE_SIZE PAGE_SIZE".


>> #ifdef __ASSEMBLY__
>> #define PAGE_SIZE (1 << PAGE_SHIFT)
>> #else
>> --- include/linux/limits.h.orig 2010-04-06 22:54:15.000000000 -0500
>> +++ include/linux/limits.h 2010-04-06 22:56:28.000000000 -0500
>> @@ -10,7 +10,7 @@
>> #define MAX_INPUT 255 /* size of the type-ahead buffer */
>> #define NAME_MAX 255 /* # chars in a file name */
>> #define PATH_MAX 4096 /* # chars in a path name including nul */
>> -#define PIPE_BUF 4096 /* # bytes in atomic write to a pipe */
>> +#define PIPE_BUF 8192 /* # bytes in atomic write to a pipe */
>

You'd think so (according to some posts I'd read before I tried this),
but I actually tried several variations on a few things, and until I
changed *this one in particular*, my kernel would in fact boot up
fine, but the shell/init startup phase would start giving me errors
to the effect of "unable to create pipe" and "too many file
descriptors open" over and over again.

>> --- include/linux/pipe_fs_i.h.orig 2010-04-06 22:56:51.000000000
>> -0500
>> +++ include/linux/pipe_fs_i.h 2010-04-06 22:56:58.000000000 -0500
>> @@ -3,7 +3,7 @@
>>
>> #define PIPEFS_MAGIC 0x50495045
>>
>> -#define PIPE_BUFFERS (16)
>> +#define PIPE_BUFFERS (32)
>
> This worries me. In several places there are functions with 2 or 3
> pointer arrays of dimension [PIPE_BUFFERS] on the stack. So this adds
> anywhere from 128 to 384 bytes to the stack in these functions depending
> on sizeof(void*) and the number of arrays.
>

As my initial hope/goal was just to increase the size of the pipes, I
figured I may as well increase the buffers too (although I'll admit
I haven't poked around every little .c/.h file that uses them).

I guess I wasn't seriously trying to push anyone into jumping through
hoops for this thing; I was just a little excited and figured I'd
share with you all. I probably spent the better part of a few days
either researching, poking around the kernel headers, or experimenting
with different combinations. As such, I've attached a .txt file
explaining the controlled (but probably not as thorough as you're used
to) benchmark I ran. It's not a pretty graph, I know, but gimme a
break, I wrote it in vim and did the math with bc ;)


Attachments:
benchmark1.txt (4.31 kB)