2005-05-11 21:45:17

by Benjamin LaHaise

Subject: reducing max segments expected to work?

Hello Jens et al,

Is reducing the max number of segments in the block layer supposed to
work (as done in the patch below), or should I be sticking to mucking
with MAX_PHYS_SEGMENTS? I seem to get a kernel that cannot boot with
the below patch applied, and was wondering if you're aware of any
problems in this area. I'll probably post something more detailed
tomorrow after trying a few things.

-ben
--
"Time is what keeps everything from happening all at once." -- John Wheeler


diff -purN v2.6.12-rc4/include/linux/blkdev.h test-rc4/include/linux/blkdev.h
--- v2.6.12-rc4/include/linux/blkdev.h 2005-04-28 11:02:01.000000000 -0400
+++ test-rc4/include/linux/blkdev.h 2005-05-11 17:06:10.000000000 -0400
@@ -667,8 +667,8 @@ extern long blk_congestion_wait(int rw,
extern void blk_rq_bio_prep(request_queue_t *, struct request *, struct bio *);
extern int blkdev_issue_flush(struct block_device *, sector_t *);

-#define MAX_PHYS_SEGMENTS 128
-#define MAX_HW_SEGMENTS 128
+#define MAX_PHYS_SEGMENTS 32
+#define MAX_HW_SEGMENTS 32
#define MAX_SECTORS 255

#define MAX_SEGMENT_SIZE 65536


2005-05-12 06:38:07

by Jens Axboe

Subject: Re: reducing max segments expected to work?

On Wed, May 11 2005, Benjamin LaHaise wrote:
> Hello Jens et al,
>
> Is reducing the max number of segments in the block layer supposed to
> work (as done in the patch below), or should I be sticking to mucking
> with MAX_PHYS_SEGMENTS? I seem to get a kernel that cannot boot with
> the below patch applied, and was wondering if you're aware of any
> problems in this area. I'll probably post something more detailed
> tomorrow after trying a few things.
>
> -ben
> --
> "Time is what keeps everything from happening all at once." -- John Wheeler
>
>
> diff -purN v2.6.12-rc4/include/linux/blkdev.h test-rc4/include/linux/blkdev.h
> --- v2.6.12-rc4/include/linux/blkdev.h 2005-04-28 11:02:01.000000000 -0400
> +++ test-rc4/include/linux/blkdev.h 2005-05-11 17:06:10.000000000 -0400
> @@ -667,8 +667,8 @@ extern long blk_congestion_wait(int rw,
> extern void blk_rq_bio_prep(request_queue_t *, struct request *, struct bio *);
> extern int blkdev_issue_flush(struct block_device *, sector_t *);
>
> -#define MAX_PHYS_SEGMENTS 128
> -#define MAX_HW_SEGMENTS 128
> +#define MAX_PHYS_SEGMENTS 32
> +#define MAX_HW_SEGMENTS 32
> #define MAX_SECTORS 255

This doesn't really do what you would think it does - the defines should
be called DEFAULT_PHYS_SEGMENTS etc, since they are just default values
and do not denote any max-allowed-by-driver value.

But it is strange that your system won't boot after applying the above.
What happens (and what kind of storage)?

--
Jens Axboe

2005-05-12 15:16:36

by Benjamin LaHaise

Subject: Re: reducing max segments expected to work?

On Thu, May 12, 2005 at 08:37:57AM +0200, Jens Axboe wrote:
> This doesn't really do what you would think it does - the defines should
> be called DEFAULT_PHYS_SEGMENTS etc, since they are just default values
> and do not denote any max-allowed-by-driver value.

They do place a limit on the sgpool entries in scsi_lib.c. I'm curious
about the overhead from these data structures, hence the experimentation.

> But it is strange that your system won't boot after applying the above.
> What happens (and what kind of storage)?

The system is a pretty standard P4 with SATA on ICH6. I tried booting
with MAX_SECTORS = 31 (with *_SEGMENTS = 32) to no avail. The system
usually gets to some point in early userland init with whatever program
(init) being stuck in D state waiting for io to complete. I'm curious
if there is some unwritten dependency on MAX_SEGMENT_SIZE or some other
piece of code being hit here...

-ben
--
"Time is what keeps everything from happening all at once." -- John Wheeler