We see about 30% regression in tiobench 32 threads 80M file sequential read.
The regression is caused by the commits below.
5db5d64277bf390056b1a87d0bb288c8b8553f96
The commit makes the slice too small. In the test, the slice is limited
to 2 * idle_slice (300ms/32 < 2 * idle_slice). This dramatically impacts I/O
throughput. The low_latency knob used to impact only random I/O; now it
impacts sequential I/O too. Any idea how to fix it?
df5fe3e8e13883f58dc97489076bbcc150789a21
b3b6d0408c953524f979468562e7e210d8634150
The coop merge is too aggressive. For example, if two tasks are reading two
files where the two files have some adjacent blocks, cfq will immediately
merge them. cfq_rq_close() also has trouble: sometimes the seek_mean is very
big. I did a test making cfq_rq_close() always check the distance against
CIC_SEEK_THR, but still saw a lot of wrong merges. (BTW, why do we treat a
request that is a long distance away as close? Treating them as close doesn't
improve any throughput for me. Maybe we should always use CIC_SEEK_THR as the
close criteria.)
So it sounds like we need to make the split more aggressive. But the split is too
lazy, requiring a 1s wait. A time-based check isn't reliable as the queue might
not run at the given time, so using a small timeout isn't OK. I'm thinking of
changing the split check to be based on request count instead of time. That is,
if several consecutive requests are regarded as seeky, the coop queue is split.
See the RFC patch below. The number of requests that should trigger a split needs
more consideration; the patch below just uses an arbitrary number. This recovers
about 5% of the performance lost when doing the tiobench 32-thread sequential read.
diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index e2f8046..d4d51b5 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -46,7 +46,7 @@ static const int cfq_hist_divisor = 4;
* Allow merged cfqqs to perform this amount of seeky I/O before
* deciding to break the queues up again.
*/
-#define CFQQ_COOP_TOUT (HZ)
+#define CFQQ_COOP_TOUT (cfq_quantum)
#define CFQ_SLICE_SCALE (5)
#define CFQ_HW_QUEUE_MIN (5)
@@ -138,6 +138,7 @@ struct cfq_queue {
sector_t seek_mean;
sector_t last_request_pos;
unsigned long seeky_start;
+ unsigned int reqs_since_seeky;
pid_t pid;
@@ -3035,9 +3036,10 @@ cfq_update_io_seektime(struct cfq_data *cfqd, struct cfq_queue *cfqq,
* queues apart again.
*/
if (cfq_cfqq_coop(cfqq)) {
- if (CFQQ_SEEKY(cfqq) && !cfqq->seeky_start)
- cfqq->seeky_start = jiffies;
- else if (!CFQQ_SEEKY(cfqq))
+ if (CFQQ_SEEKY(cfqq) && !cfqq->seeky_start) {
+ cfqq->seeky_start = 1;
+ cfqq->reqs_since_seeky = 0;
+ } else if (!CFQQ_SEEKY(cfqq))
cfqq->seeky_start = 0;
}
}
@@ -3189,6 +3191,8 @@ cfq_rq_enqueued(struct cfq_data *cfqd, struct cfq_queue *cfqq,
cfq_update_idle_window(cfqd, cfqq, cic);
cfqq->last_request_pos = blk_rq_pos(rq) + blk_rq_sectors(rq);
+ if (cfqq->seeky_start)
+ cfqq->reqs_since_seeky ++;
if (cfqq == cfqd->active_queue) {
/*
@@ -3476,8 +3480,7 @@ cfq_merge_cfqqs(struct cfq_data *cfqd, struct cfq_io_context *cic,
static int should_split_cfqq(struct cfq_queue *cfqq)
{
- if (cfqq->seeky_start &&
- time_after(jiffies, cfqq->seeky_start + CFQQ_COOP_TOUT))
+ if (cfqq->seeky_start && cfqq->reqs_since_seeky > CFQQ_COOP_TOUT)
return 1;
return 0;
}
@@ -3491,6 +3494,7 @@ split_cfqq(struct cfq_io_context *cic, struct cfq_queue *cfqq)
{
if (cfqq_process_refs(cfqq) == 1) {
cfqq->seeky_start = 0;
+ cfqq->reqs_since_seeky = 0;
cfqq->pid = current->pid;
cfq_clear_cfqq_coop(cfqq);
return cfqq;
Shaohua Li wrote:
> We see about 30% regression in tiobench 32 threads 80M file sequential read.
> The regression is caused by the commits below.
>
> 5db5d64277bf390056b1a87d0bb288c8b8553f96
> The commit makes the slice too small. In the test, the slice is limited
> to 2 * idle_slice (300ms/32 < 2 * idle_slice). This dramatically impacts I/O
> throughput. The low_latency knob used to impact only random I/O; now it
> impacts sequential I/O too. Any idea how to fix it?
Hi Shaohua,
IMHO this shouldn't be a problem. Currently, low_latency is used to improve
the latency of the whole system. If someone would like to achieve high throughput,
they can just turn off the low_latency knob.
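For reference, the knob Gui mentions is a per-device sysfs tunable; a quick way to flip it (assuming a cfq-scheduled disk named sda):

```shell
# check the current value; low_latency defaults to 1 (on)
cat /sys/block/sda/queue/iosched/low_latency

# trade latency for throughput by turning it off
echo 0 > /sys/block/sda/queue/iosched/low_latency
```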
On Thu, Dec 24, 2009 at 03:48:38PM +0800, Gui Jianfeng wrote:
> Shaohua Li wrote:
> > We see about 30% regression in tiobench 32 threads 80M file sequential read.
> > The regression is caused by the commits below.
> >
> > 5db5d64277bf390056b1a87d0bb288c8b8553f96
> > The commit makes the slice too small. In the test, the slice is limited
> > to 2 * idle_slice (300ms/32 < 2 * idle_slice). This dramatically impacts I/O
> > throughput. The low_latency knob used to impact only random I/O; now it
> > impacts sequential I/O too. Any idea how to fix it?
>
> Hi Shaohua,
>
> IMHO this shouldn't be a problem. Currently, low_latency is used to improve
> the latency of the whole system. If someone would like to achieve high throughput,
> they can just turn off the low_latency knob.
The concern is that low_latency is on by default. An end user is unlikely to know about the knob.
Thanks,
Shaohua
The default setting should satisfy most users (how many desktop users
run 32 concurrent seq. readers?).
Those who have particular needs (e.g. DB users) can try their workload
with both settings. Low latency can probably benefit them too, if the
workload is latency sensitive.
Corrado
2009/12/24, Shaohua Li <[email protected]>:
> On Thu, Dec 24, 2009 at 03:48:38PM +0800, Gui Jianfeng wrote:
>> Shaohua Li wrote:
>> > We see about 30% regression in tiobench 32 threads 80M file sequential read.
>> > The regression is caused by the commits below.
>> >
>> > 5db5d64277bf390056b1a87d0bb288c8b8553f96
>> > The commit makes the slice too small. In the test, the slice is limited
>> > to 2 * idle_slice (300ms/32 < 2 * idle_slice). This dramatically impacts I/O
>> > throughput. The low_latency knob used to impact only random I/O; now it
>> > impacts sequential I/O too. Any idea how to fix it?
>>
>> Hi Shaohua,
>>
>> IMHO this shouldn't be a problem. Currently, low_latency is used to
>> improve
>> the latency of the whole system. If someone would like to achieve high
>> throughput,
>> they can just turn off the low_latency knob.
> The concern is that low_latency is on by default. An end user is unlikely to know
> about the knob.
>
> Thanks,
> Shaohua
>
--
Sent from my mobile device
__________________________________________________________________________
dott. Corrado Zoccolo mailto:[email protected]
PhD - Department of Computer Science - University of Pisa, Italy
--------------------------------------------------------------------------
The self-confidence of a warrior is not the self-confidence of the average
man. The average man seeks certainty in the eyes of the onlooker and calls
that self-confidence. The warrior seeks impeccability in his own eyes and
calls that humbleness.
Tales of Power - C. Castaneda
Hi Shaohua,
On Thu, Dec 24, 2009 at 1:55 AM, Shaohua Li <[email protected]> wrote:
> df5fe3e8e13883f58dc97489076bbcc150789a21
> b3b6d0408c953524f979468562e7e210d8634150
> The coop merge is too aggressive. For example, if two tasks are reading two
> files where the two files have some adjacent blocks, cfq will immediately
> merge them. cfq_rq_close() also has trouble: sometimes the seek_mean is very
> big. I did a test making cfq_rq_close() always check the distance against
> CIC_SEEK_THR, but still saw a lot of wrong merges. (BTW, why do we treat a
> request that is a long distance away as close? Treating them as close doesn't
> improve any throughput for me. Maybe we should always use CIC_SEEK_THR as the
> close criteria.)
Yes, when deciding if two queues are going to be merged, we should use
the constant CIC_SEEK_THR.
> So it sounds like we need to make the split more aggressive. But the split is too
> lazy, requiring a 1s wait. A time-based check isn't reliable as the queue might
> not run at the given time, so using a small timeout isn't OK.
1s is too much, but I wouldn't abandon a time-based approach. To fix
the problem of the queue not being run, you can consider a slice: if, at
the end of the slice, the queue is seeky, you split it.
> I'm thinking of changing the split check to be based on request count instead
> of time. That is, if several consecutive requests are regarded as seeky, the
> coop queue is split. See the RFC patch below. The number of requests that
> should trigger a split needs more consideration; the patch below just uses an
> arbitrary number. This recovers about 5% of the performance lost when doing
> the tiobench 32-thread sequential read.
Thanks,
Corrado
--
__________________________________________________________________________
dott. Corrado Zoccolo mailto:[email protected]
PhD - Department of Computer Science - University of Pisa, Italy
--------------------------------------------------------------------------
On Fri, Dec 25, 2009 at 06:16:27PM +0800, Corrado Zoccolo wrote:
> Hi Shaohua,
> On Thu, Dec 24, 2009 at 1:55 AM, Shaohua Li <[email protected]> wrote:
> > df5fe3e8e13883f58dc97489076bbcc150789a21
> > b3b6d0408c953524f979468562e7e210d8634150
> > The coop merge is too aggressive. For example, if two tasks are reading two
> > files where the two files have some adjacent blocks, cfq will immediately
> > merge them. cfq_rq_close() also has trouble: sometimes the seek_mean is very
> > big. I did a test making cfq_rq_close() always check the distance against
> > CIC_SEEK_THR, but still saw a lot of wrong merges. (BTW, why do we treat a
> > request that is a long distance away as close? Treating them as close doesn't
> > improve any throughput for me. Maybe we should always use CIC_SEEK_THR as the
> > close criteria.)
> Yes, when deciding if two queues are going to be merged, we should use
> the constant CIC_SEEK_THR.
Ok, will prepare a separate patch for this.
> > So it sounds like we need to make the split more aggressive. But the split is too
> > lazy, requiring a 1s wait. A time-based check isn't reliable as the queue might
> > not run at the given time, so using a small timeout isn't OK.
> 1s is too much, but I wouldn't abandon a time-based approach. To fix
> the problem of the queue not being run, you can consider a slice: if, at
> the end of the slice, the queue is seeky, you split it.
Sounds good, will do it this way.
Thanks,
Shaohua
On Fri, Dec 25, 2009 at 06:16:27PM +0800, Corrado Zoccolo wrote:
> Hi Shaohua,
> On Thu, Dec 24, 2009 at 1:55 AM, Shaohua Li <[email protected]> wrote:
> > df5fe3e8e13883f58dc97489076bbcc150789a21
> > b3b6d0408c953524f979468562e7e210d8634150
> > The coop merge is too aggressive. For example, if two tasks are reading two
> > files where the two files have some adjacent blocks, cfq will immediately
> > merge them. cfq_rq_close() also has trouble: sometimes the seek_mean is very
> > big. I did a test making cfq_rq_close() always check the distance against
> > CIC_SEEK_THR, but still saw a lot of wrong merges. (BTW, why do we treat a
> > request that is a long distance away as close? Treating them as close doesn't
> > improve any throughput for me. Maybe we should always use CIC_SEEK_THR as the
> > close criteria.)
> Yes, when deciding if two queues are going to be merged, we should use
> the constant CIC_SEEK_THR.
seek_mean could be very big sometimes; using it as the close criteria is meanless
as this doesn't improve any performance. So if it's big, let's fall back to the
default value.
Signed-off-by: Shaohua Li<[email protected]>
diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index e2f8046..8025605 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -1682,6 +1682,10 @@ static inline int cfq_rq_close(struct cfq_data *cfqd, struct cfq_queue *cfqq,
if (!sample_valid(cfqq->seek_samples))
sdist = CFQQ_SEEK_THR;
+ /* if seek_mean is big, using it as close criteria is meanless */
+ if (sdist > CFQQ_SEEK_THR)
+ sdist = CFQQ_SEEK_THR;
+
return cfq_dist_from_last(cfqd, rq) <= sdist;
}
On Fri, Dec 25, 2009 at 06:16:27PM +0800, Corrado Zoccolo wrote:
> Hi Shaohua,
> On Thu, Dec 24, 2009 at 1:55 AM, Shaohua Li <[email protected]> wrote:
> > df5fe3e8e13883f58dc97489076bbcc150789a21
> > b3b6d0408c953524f979468562e7e210d8634150
> > The coop merge is too aggressive. For example, if two tasks are reading two
> > files where the two files have some adjecent blocks, cfq will immediately
> > merge them. cfq_rq_close() also has trouble, sometimes the seek_mean is very
> > big. I did a test to make cfq_rq_close() always checks the distence according
> > to CIC_SEEK_THR, but still saw a lot of wrong merge. (BTW, why we take a long
> > distence far away request as close. Taking them close doesn't improve any thoughtput
> > to me. Maybe we should always use CIC_SEEK_THR as close criteria).
> Yes, when deciding if two queues are going to be merged, we should use
> the constant CIC_SEEK_THR.
> > So it sounds like we need to make the split more aggressive. But the split is too
> > lazy, requiring a 1s wait. A time-based check isn't reliable as the queue might
> > not run at the given time, so using a small timeout isn't OK.
> 1s is too much, but I wouldn't abandon a time-based approach. To fix
> the problem of the queue not being run, you can consider a slice: if, at
> the end of the slice, the queue is seeky, you split it.
Currently we split seeky coop queues after 1s, which is too long. The patch below
marks a seeky coop queue with the split_coop flag after one slice. After that, if
new requests come in, the queue will be split. The approach was suggested by Corrado.
Signed-off-by: Shaohua Li <[email protected]>
diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index e2f8046..d4d5cca 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -52,6 +52,9 @@ static const int cfq_hist_divisor = 4;
#define CFQ_HW_QUEUE_MIN (5)
#define CFQ_SERVICE_SHIFT 12
+#define CFQQ_SEEK_THR 8 * 1024
+#define CFQQ_SEEKY(cfqq) ((cfqq)->seek_mean > CFQQ_SEEK_THR)
+
#define RQ_CIC(rq) \
((struct cfq_io_context *) (rq)->elevator_private)
#define RQ_CFQQ(rq) (struct cfq_queue *) ((rq)->elevator_private2)
@@ -137,7 +140,6 @@ struct cfq_queue {
u64 seek_total;
sector_t seek_mean;
sector_t last_request_pos;
- unsigned long seeky_start;
pid_t pid;
@@ -317,6 +319,7 @@ enum cfqq_state_flags {
CFQ_CFQQ_FLAG_slice_new, /* no requests dispatched in slice */
CFQ_CFQQ_FLAG_sync, /* synchronous queue */
CFQ_CFQQ_FLAG_coop, /* cfqq is shared */
+ CFQ_CFQQ_FLAG_split_coop, /* shared cfqq will be splitted */
CFQ_CFQQ_FLAG_deep, /* sync cfqq experienced large depth */
CFQ_CFQQ_FLAG_wait_busy, /* Waiting for next request */
};
@@ -345,6 +348,7 @@ CFQ_CFQQ_FNS(prio_changed);
CFQ_CFQQ_FNS(slice_new);
CFQ_CFQQ_FNS(sync);
CFQ_CFQQ_FNS(coop);
+CFQ_CFQQ_FNS(split_coop);
CFQ_CFQQ_FNS(deep);
CFQ_CFQQ_FNS(wait_busy);
#undef CFQ_CFQQ_FNS
@@ -1574,6 +1578,15 @@ __cfq_slice_expired(struct cfq_data *cfqd, struct cfq_queue *cfqq,
cfq_clear_cfqq_wait_busy(cfqq);
/*
+ * If this cfqq is shared between multiple processes, check to
+ * make sure that those processes are still issuing I/Os within
+ * the mean seek distance. If not, it may be time to break the
+ * queues apart again.
+ */
+ if (cfq_cfqq_coop(cfqq) && CFQQ_SEEKY(cfqq))
+ cfq_mark_cfqq_split_coop(cfqq);
+
+ /*
* store what was left of this slice, if the queue idled/timed out
*/
if (timed_out && !cfq_cfqq_slice_new(cfqq)) {
@@ -1671,9 +1684,6 @@ static inline sector_t cfq_dist_from_last(struct cfq_data *cfqd,
return cfqd->last_position - blk_rq_pos(rq);
}
-#define CFQQ_SEEK_THR 8 * 1024
-#define CFQQ_SEEKY(cfqq) ((cfqq)->seek_mean > CFQQ_SEEK_THR)
-
static inline int cfq_rq_close(struct cfq_data *cfqd, struct cfq_queue *cfqq,
struct request *rq)
{
@@ -3027,19 +3037,6 @@ cfq_update_io_seektime(struct cfq_data *cfqd, struct cfq_queue *cfqq,
total = cfqq->seek_total + (cfqq->seek_samples/2);
do_div(total, cfqq->seek_samples);
cfqq->seek_mean = (sector_t)total;
-
- /*
- * If this cfqq is shared between multiple processes, check to
- * make sure that those processes are still issuing I/Os within
- * the mean seek distance. If not, it may be time to break the
- * queues apart again.
- */
- if (cfq_cfqq_coop(cfqq)) {
- if (CFQQ_SEEKY(cfqq) && !cfqq->seeky_start)
- cfqq->seeky_start = jiffies;
- else if (!CFQQ_SEEKY(cfqq))
- cfqq->seeky_start = 0;
- }
}
/*
@@ -3474,14 +3471,6 @@ cfq_merge_cfqqs(struct cfq_data *cfqd, struct cfq_io_context *cic,
return cic_to_cfqq(cic, 1);
}
-static int should_split_cfqq(struct cfq_queue *cfqq)
-{
- if (cfqq->seeky_start &&
- time_after(jiffies, cfqq->seeky_start + CFQQ_COOP_TOUT))
- return 1;
- return 0;
-}
-
/*
* Returns NULL if a new cfqq should be allocated, or the old cfqq if this
* was the last process referring to said cfqq.
@@ -3490,9 +3479,9 @@ static struct cfq_queue *
split_cfqq(struct cfq_io_context *cic, struct cfq_queue *cfqq)
{
if (cfqq_process_refs(cfqq) == 1) {
- cfqq->seeky_start = 0;
cfqq->pid = current->pid;
cfq_clear_cfqq_coop(cfqq);
+ cfq_clear_cfqq_split_coop(cfqq);
return cfqq;
}
@@ -3531,7 +3520,7 @@ new_queue:
/*
* If the queue was seeky for too long, break it apart.
*/
- if (cfq_cfqq_coop(cfqq) && should_split_cfqq(cfqq)) {
+ if (cfq_cfqq_coop(cfqq) && cfq_cfqq_split_coop(cfqq)) {
cfq_log_cfqq(cfqd, cfqq, "breaking apart cfqq");
cfqq = split_cfqq(cic, cfqq);
if (!cfqq)
Hi Shaohua,
On Mon, Dec 28, 2009 at 3:03 AM, Shaohua Li <[email protected]> wrote:
> On Fri, Dec 25, 2009 at 06:16:27PM +0800, Corrado Zoccolo wrote:
>> Hi Shaohua,
>> On Thu, Dec 24, 2009 at 1:55 AM, Shaohua Li <[email protected]> wrote:
>> > df5fe3e8e13883f58dc97489076bbcc150789a21
>> > b3b6d0408c953524f979468562e7e210d8634150
>> > The coop merge is too aggressive. For example, if two tasks are reading two
>> > files where the two files have some adjacent blocks, cfq will immediately
>> > merge them. cfq_rq_close() also has trouble: sometimes the seek_mean is very
>> > big. I did a test making cfq_rq_close() always check the distance against
>> > CIC_SEEK_THR, but still saw a lot of wrong merges. (BTW, why do we treat a
>> > request that is a long distance away as close? Treating them as close doesn't
>> > improve any throughput for me. Maybe we should always use CIC_SEEK_THR as the
>> > close criteria.)
>> Yes, when deciding if two queues are going to be merged, we should use
>> the constant CIC_SEEK_THR.
>
> > seek_mean could be very big sometimes; using it as the close criteria is meanless
> > as this doesn't improve any performance. So if it's big, let's fall back to the
> > default value.
meanless -> meaningless (also in the comment within code)
>
> Signed-off-by: Shaohua Li<[email protected]>
>
> diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
> index e2f8046..8025605 100644
> --- a/block/cfq-iosched.c
> +++ b/block/cfq-iosched.c
> @@ -1682,6 +1682,10 @@ static inline int cfq_rq_close(struct cfq_data *cfqd, struct cfq_queue *cfqq,
> if (!sample_valid(cfqq->seek_samples))
> sdist = CFQQ_SEEK_THR;
>
> + /* if seek_mean is big, using it as close criteria is meanless */
> + if (sdist > CFQQ_SEEK_THR)
> + sdist = CFQQ_SEEK_THR;
> +
> return cfq_dist_from_last(cfqd, rq) <= sdist;
> }
>
>
This also changes the cfq_should_preempt() behaviour, where a large
seek_mean could be meaningful, so I think it is better to add a
boolean parameter to cfq_rq_close(), to distinguish whether we are
preempting or looking for queue merges, and make the new code
conditional on merging.
Thanks,
Corrado
On Mon, Dec 28, 2009 at 4:19 AM, Shaohua Li <[email protected]> wrote:
> On Fri, Dec 25, 2009 at 06:16:27PM +0800, Corrado Zoccolo wrote:
>> Hi Shaohua,
>> On Thu, Dec 24, 2009 at 1:55 AM, Shaohua Li <[email protected]> wrote:
>> > df5fe3e8e13883f58dc97489076bbcc150789a21
>> > b3b6d0408c953524f979468562e7e210d8634150
>> > The coop merge is too aggressive. For example, if two tasks are reading two
>> > files where the two files have some adjacent blocks, cfq will immediately
>> > merge them. cfq_rq_close() also has trouble: sometimes the seek_mean is very
>> > big. I did a test making cfq_rq_close() always check the distance against
>> > CIC_SEEK_THR, but still saw a lot of wrong merges. (BTW, why do we treat a
>> > request that is a long distance away as close? Treating them as close doesn't
>> > improve any throughput for me. Maybe we should always use CIC_SEEK_THR as the
>> > close criteria.)
>> Yes, when deciding if two queues are going to be merged, we should use
>> the constant CIC_SEEK_THR.
>> > So it sounds like we need to make the split more aggressive. But the split is too
>> > lazy, requiring a 1s wait. A time-based check isn't reliable as the queue might
>> > not run at the given time, so using a small timeout isn't OK.
>> 1s is too much, but I wouldn't abandon a time-based approach. To fix
>> the problem of the queue not being run, you can consider a slice: if, at
>> the end of the slice, the queue is seeky, you split it.
>
> Currently we split seeky coop queues after 1s, which is too long. The patch below
> marks a seeky coop queue with the split_coop flag after one slice. After that, if
> new requests come in, the queue will be split. The approach was suggested by Corrado.
>
> Signed-off-by: Shaohua Li <[email protected]>
You can also remove the no longer used define:
#define CFQQ_COOP_TOUT (HZ)
Reviewed-by: Corrado Zoccolo <[email protected]>
>
> diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
> index e2f8046..d4d5cca 100644
> --- a/block/cfq-iosched.c
> +++ b/block/cfq-iosched.c
> @@ -52,6 +52,9 @@ static const int cfq_hist_divisor = 4;
> #define CFQ_HW_QUEUE_MIN (5)
> #define CFQ_SERVICE_SHIFT 12
>
> +#define CFQQ_SEEK_THR 8 * 1024
> +#define CFQQ_SEEKY(cfqq) ((cfqq)->seek_mean > CFQQ_SEEK_THR)
> +
> #define RQ_CIC(rq) \
> ((struct cfq_io_context *) (rq)->elevator_private)
> #define RQ_CFQQ(rq) (struct cfq_queue *) ((rq)->elevator_private2)
> @@ -137,7 +140,6 @@ struct cfq_queue {
> u64 seek_total;
> sector_t seek_mean;
> sector_t last_request_pos;
> - unsigned long seeky_start;
>
> pid_t pid;
>
> @@ -317,6 +319,7 @@ enum cfqq_state_flags {
> CFQ_CFQQ_FLAG_slice_new, /* no requests dispatched in slice */
> CFQ_CFQQ_FLAG_sync, /* synchronous queue */
> CFQ_CFQQ_FLAG_coop, /* cfqq is shared */
> + CFQ_CFQQ_FLAG_split_coop, /* shared cfqq will be splitted */
> CFQ_CFQQ_FLAG_deep, /* sync cfqq experienced large depth */
> CFQ_CFQQ_FLAG_wait_busy, /* Waiting for next request */
> };
> @@ -345,6 +348,7 @@ CFQ_CFQQ_FNS(prio_changed);
> CFQ_CFQQ_FNS(slice_new);
> CFQ_CFQQ_FNS(sync);
> CFQ_CFQQ_FNS(coop);
> +CFQ_CFQQ_FNS(split_coop);
> CFQ_CFQQ_FNS(deep);
> CFQ_CFQQ_FNS(wait_busy);
> #undef CFQ_CFQQ_FNS
> @@ -1574,6 +1578,15 @@ __cfq_slice_expired(struct cfq_data *cfqd, struct cfq_queue *cfqq,
> cfq_clear_cfqq_wait_busy(cfqq);
>
> /*
> + * If this cfqq is shared between multiple processes, check to
> + * make sure that those processes are still issuing I/Os within
> + * the mean seek distance. If not, it may be time to break the
> + * queues apart again.
> + */
> + if (cfq_cfqq_coop(cfqq) && CFQQ_SEEKY(cfqq))
> + cfq_mark_cfqq_split_coop(cfqq);
> +
> + /*
> * store what was left of this slice, if the queue idled/timed out
> */
> if (timed_out && !cfq_cfqq_slice_new(cfqq)) {
> @@ -1671,9 +1684,6 @@ static inline sector_t cfq_dist_from_last(struct cfq_data *cfqd,
> return cfqd->last_position - blk_rq_pos(rq);
> }
>
> -#define CFQQ_SEEK_THR 8 * 1024
> -#define CFQQ_SEEKY(cfqq) ((cfqq)->seek_mean > CFQQ_SEEK_THR)
> -
> static inline int cfq_rq_close(struct cfq_data *cfqd, struct cfq_queue *cfqq,
> struct request *rq)
> {
> @@ -3027,19 +3037,6 @@ cfq_update_io_seektime(struct cfq_data *cfqd, struct cfq_queue *cfqq,
> total = cfqq->seek_total + (cfqq->seek_samples/2);
> do_div(total, cfqq->seek_samples);
> cfqq->seek_mean = (sector_t)total;
> -
> - /*
> - * If this cfqq is shared between multiple processes, check to
> - * make sure that those processes are still issuing I/Os within
> - * the mean seek distance. If not, it may be time to break the
> - * queues apart again.
> - */
> - if (cfq_cfqq_coop(cfqq)) {
> - if (CFQQ_SEEKY(cfqq) && !cfqq->seeky_start)
> - cfqq->seeky_start = jiffies;
> - else if (!CFQQ_SEEKY(cfqq))
> - cfqq->seeky_start = 0;
> - }
> }
>
> /*
> @@ -3474,14 +3471,6 @@ cfq_merge_cfqqs(struct cfq_data *cfqd, struct cfq_io_context *cic,
> return cic_to_cfqq(cic, 1);
> }
>
> -static int should_split_cfqq(struct cfq_queue *cfqq)
> -{
> - if (cfqq->seeky_start &&
> - time_after(jiffies, cfqq->seeky_start + CFQQ_COOP_TOUT))
> - return 1;
> - return 0;
> -}
> -
> /*
> * Returns NULL if a new cfqq should be allocated, or the old cfqq if this
> * was the last process referring to said cfqq.
> @@ -3490,9 +3479,9 @@ static struct cfq_queue *
> split_cfqq(struct cfq_io_context *cic, struct cfq_queue *cfqq)
> {
> if (cfqq_process_refs(cfqq) == 1) {
> - cfqq->seeky_start = 0;
> cfqq->pid = current->pid;
> cfq_clear_cfqq_coop(cfqq);
> + cfq_clear_cfqq_split_coop(cfqq);
> return cfqq;
> }
>
> @@ -3531,7 +3520,7 @@ new_queue:
> /*
> * If the queue was seeky for too long, break it apart.
> */
> - if (cfq_cfqq_coop(cfqq) && should_split_cfqq(cfqq)) {
> + if (cfq_cfqq_coop(cfqq) && cfq_cfqq_split_coop(cfqq)) {
> cfq_log_cfqq(cfqd, cfqq, "breaking apart cfqq");
> cfqq = split_cfqq(cic, cfqq);
> if (!cfqq)
>
On Mon, Dec 28, 2009 at 04:36:39PM +0800, Corrado Zoccolo wrote:
> Hi Shaohua,
> On Mon, Dec 28, 2009 at 3:03 AM, Shaohua Li <[email protected]> wrote:
> > On Fri, Dec 25, 2009 at 06:16:27PM +0800, Corrado Zoccolo wrote:
> >> Hi Shaohua,
> >> On Thu, Dec 24, 2009 at 1:55 AM, Shaohua Li <[email protected]> wrote:
> >> > df5fe3e8e13883f58dc97489076bbcc150789a21
> >> > b3b6d0408c953524f979468562e7e210d8634150
>> >> > The coop merge is too aggressive. For example, if two tasks are reading two
>> >> > files where the two files have some adjacent blocks, cfq will immediately
>> >> > merge them. cfq_rq_close() also has trouble: sometimes the seek_mean is very
>> >> > big. I did a test making cfq_rq_close() always check the distance against
>> >> > CIC_SEEK_THR, but still saw a lot of wrong merges. (BTW, why do we treat a
>> >> > request that is a long distance away as close? Treating them as close doesn't
>> >> > improve any throughput for me. Maybe we should always use CIC_SEEK_THR as the
>> >> > close criteria.)
> >> Yes, when deciding if two queues are going to be merged, we should use
> >> the constant CIC_SEEK_THR.
> >
> > seek_mean could be very big sometimes; using it as the close criteria is meanless
> > as this doesn't improve any performance. So if it's big, let's fall back to the
> > default value.
>
> meanless -> meaningless (also in the comment within code)
oops
> > Signed-off-by: Shaohua Li<[email protected]>
> >
> > diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
> > index e2f8046..8025605 100644
> > --- a/block/cfq-iosched.c
> > +++ b/block/cfq-iosched.c
> > @@ -1682,6 +1682,10 @@ static inline int cfq_rq_close(struct cfq_data *cfqd, struct cfq_queue *cfqq,
> > ? ? ? ?if (!sample_valid(cfqq->seek_samples))
> > ? ? ? ? ? ? ? ?sdist = CFQQ_SEEK_THR;
> >
> > + ? ? ? /* if seek_mean is big, using it as close criteria is meanless */
> > + ? ? ? if (sdist > CFQQ_SEEK_THR)
> > + ? ? ? ? ? ? ? sdist = CFQQ_SEEK_THR;
> > +
> > ? ? ? ?return cfq_dist_from_last(cfqd, rq) <= sdist;
> > ?}
> >
> >
> This changes also the cfq_should_preempt behaviour, where a large
> seek_mean could be meaningful, so I think it is better to add a
> boolean parameter to cfq_rq_close, to distinguish whether we are
> preempting or looking for queue merges, and make the new code
> conditional on merging.
Can you explain why it's meaningful for cfq_should_preempt()? Unless sdist is
very big, for example > 10 * seek_mean, the preempt doesn't seem meaningless.
Thanks,
Shaohua
On Mon, Dec 28, 2009 at 04:40:30PM +0800, Corrado Zoccolo wrote:
> On Mon, Dec 28, 2009 at 4:19 AM, Shaohua Li <[email protected]> wrote:
> > On Fri, Dec 25, 2009 at 06:16:27PM +0800, Corrado Zoccolo wrote:
> >> Hi Shaohua,
> >> On Thu, Dec 24, 2009 at 1:55 AM, Shaohua Li <[email protected]> wrote:
> >> > df5fe3e8e13883f58dc97489076bbcc150789a21
> >> > b3b6d0408c953524f979468562e7e210d8634150
>> >> > The coop merge is too aggressive. For example, if two tasks are reading two
>> >> > files where the two files have some adjacent blocks, cfq will immediately
>> >> > merge them. cfq_rq_close() also has trouble: sometimes the seek_mean is very
>> >> > big. I did a test making cfq_rq_close() always check the distance against
>> >> > CIC_SEEK_THR, but still saw a lot of wrong merges. (BTW, why do we treat a
>> >> > request that is a long distance away as close? Treating them as close doesn't
>> >> > improve any throughput for me. Maybe we should always use CIC_SEEK_THR as the
>> >> > close criteria.)
> >> Yes, when deciding if two queues are going to be merged, we should use
> >> the constant CIC_SEEK_THR.
>> >> > So it sounds like we need to make the split more aggressive. But the split is too
>> >> > lazy, requiring a 1s wait. A time-based check isn't reliable as the queue might
>> >> > not run at the given time, so using a small timeout isn't OK.
>> 1s is too much, but I wouldn't abandon a time-based approach. To fix
>> the problem of the queue not being run, you can consider a slice: if, at
>> the end of the slice, the queue is seeky, you split it.
> >
> > Currently we split seeky coop queues after 1s, which is too long. The patch below
> > marks a seeky coop queue with the split_coop flag after one slice. After that, if
> > new requests come in, the queue will be split. The approach was suggested by Corrado.
> >
> > Signed-off-by: Shaohua Li <[email protected]>
> You can also remove the no longer used define:
> #define CFQQ_COOP_TOUT (HZ)
>
> Reviewed-by: Corrado Zoccolo <[email protected]>
Forgot about that macro; updated patch below.
Currently we split seeky coop queues after 1s, which is too long. The patch below
marks a seeky coop queue with the split_coop flag after one slice. After that, if
new requests come in, the queue will be split. The approach was suggested by Corrado.
Signed-off-by: Shaohua Li <[email protected]>
Reviewed-by: Corrado Zoccolo <[email protected]>
diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index e2f8046..348618f 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -42,16 +42,13 @@ static const int cfq_hist_divisor = 4;
*/
#define CFQ_MIN_TT (2)
-/*
- * Allow merged cfqqs to perform this amount of seeky I/O before
- * deciding to break the queues up again.
- */
-#define CFQQ_COOP_TOUT (HZ)
-
#define CFQ_SLICE_SCALE (5)
#define CFQ_HW_QUEUE_MIN (5)
#define CFQ_SERVICE_SHIFT 12
+#define CFQQ_SEEK_THR 8 * 1024
+#define CFQQ_SEEKY(cfqq) ((cfqq)->seek_mean > CFQQ_SEEK_THR)
+
#define RQ_CIC(rq) \
((struct cfq_io_context *) (rq)->elevator_private)
#define RQ_CFQQ(rq) (struct cfq_queue *) ((rq)->elevator_private2)
@@ -137,7 +134,6 @@ struct cfq_queue {
u64 seek_total;
sector_t seek_mean;
sector_t last_request_pos;
- unsigned long seeky_start;
pid_t pid;
@@ -317,6 +313,7 @@ enum cfqq_state_flags {
CFQ_CFQQ_FLAG_slice_new, /* no requests dispatched in slice */
CFQ_CFQQ_FLAG_sync, /* synchronous queue */
CFQ_CFQQ_FLAG_coop, /* cfqq is shared */
+ CFQ_CFQQ_FLAG_split_coop, /* shared cfqq will be splitted */
CFQ_CFQQ_FLAG_deep, /* sync cfqq experienced large depth */
CFQ_CFQQ_FLAG_wait_busy, /* Waiting for next request */
};
@@ -345,6 +342,7 @@ CFQ_CFQQ_FNS(prio_changed);
CFQ_CFQQ_FNS(slice_new);
CFQ_CFQQ_FNS(sync);
CFQ_CFQQ_FNS(coop);
+CFQ_CFQQ_FNS(split_coop);
CFQ_CFQQ_FNS(deep);
CFQ_CFQQ_FNS(wait_busy);
#undef CFQ_CFQQ_FNS
@@ -1574,6 +1572,15 @@ __cfq_slice_expired(struct cfq_data *cfqd, struct cfq_queue *cfqq,
cfq_clear_cfqq_wait_busy(cfqq);
/*
+ * If this cfqq is shared between multiple processes, check to
+ * make sure that those processes are still issuing I/Os within
+ * the mean seek distance. If not, it may be time to break the
+ * queues apart again.
+ */
+ if (cfq_cfqq_coop(cfqq) && CFQQ_SEEKY(cfqq))
+ cfq_mark_cfqq_split_coop(cfqq);
+
+ /*
* store what was left of this slice, if the queue idled/timed out
*/
if (timed_out && !cfq_cfqq_slice_new(cfqq)) {
@@ -1671,9 +1678,6 @@ static inline sector_t cfq_dist_from_last(struct cfq_data *cfqd,
return cfqd->last_position - blk_rq_pos(rq);
}
-#define CFQQ_SEEK_THR 8 * 1024
-#define CFQQ_SEEKY(cfqq) ((cfqq)->seek_mean > CFQQ_SEEK_THR)
-
static inline int cfq_rq_close(struct cfq_data *cfqd, struct cfq_queue *cfqq,
struct request *rq)
{
@@ -3027,19 +3031,6 @@ cfq_update_io_seektime(struct cfq_data *cfqd, struct cfq_queue *cfqq,
total = cfqq->seek_total + (cfqq->seek_samples/2);
do_div(total, cfqq->seek_samples);
cfqq->seek_mean = (sector_t)total;
-
- /*
- * If this cfqq is shared between multiple processes, check to
- * make sure that those processes are still issuing I/Os within
- * the mean seek distance. If not, it may be time to break the
- * queues apart again.
- */
- if (cfq_cfqq_coop(cfqq)) {
- if (CFQQ_SEEKY(cfqq) && !cfqq->seeky_start)
- cfqq->seeky_start = jiffies;
- else if (!CFQQ_SEEKY(cfqq))
- cfqq->seeky_start = 0;
- }
}
/*
@@ -3474,14 +3465,6 @@ cfq_merge_cfqqs(struct cfq_data *cfqd, struct cfq_io_context *cic,
return cic_to_cfqq(cic, 1);
}
-static int should_split_cfqq(struct cfq_queue *cfqq)
-{
- if (cfqq->seeky_start &&
- time_after(jiffies, cfqq->seeky_start + CFQQ_COOP_TOUT))
- return 1;
- return 0;
-}
-
/*
* Returns NULL if a new cfqq should be allocated, or the old cfqq if this
* was the last process referring to said cfqq.
@@ -3490,9 +3473,9 @@ static struct cfq_queue *
split_cfqq(struct cfq_io_context *cic, struct cfq_queue *cfqq)
{
if (cfqq_process_refs(cfqq) == 1) {
- cfqq->seeky_start = 0;
cfqq->pid = current->pid;
cfq_clear_cfqq_coop(cfqq);
+ cfq_clear_cfqq_split_coop(cfqq);
return cfqq;
}
@@ -3531,7 +3514,7 @@ new_queue:
/*
* If the queue was seeky for too long, break it apart.
*/
- if (cfq_cfqq_coop(cfqq) && should_split_cfqq(cfqq)) {
+ if (cfq_cfqq_coop(cfqq) && cfq_cfqq_split_coop(cfqq)) {
cfq_log_cfqq(cfqd, cfqq, "breaking apart cfqq");
cfqq = split_cfqq(cic, cfqq);
if (!cfqq)
Hi Shaohua,
On Mon, Dec 28, 2009 at 9:46 AM, Shaohua Li <[email protected]> wrote:
> On Mon, Dec 28, 2009 at 04:36:39PM +0800, Corrado Zoccolo wrote:
>> Hi Shaohua,
>> On Mon, Dec 28, 2009 at 3:03 AM, Shaohua Li <[email protected]> wrote:
>> > On Fri, Dec 25, 2009 at 06:16:27PM +0800, Corrado Zoccolo wrote:
>> >> Hi Shaohua,
>> >> On Thu, Dec 24, 2009 at 1:55 AM, Shaohua Li <[email protected]> wrote:
>> >> > df5fe3e8e13883f58dc97489076bbcc150789a21
>> >> > b3b6d0408c953524f979468562e7e210d8634150
>> >> > The coop merge is too aggressive. For example, if two tasks are reading two
>> >> > files where the two files have some adjecent blocks, cfq will immediately
>> >> > merge them. cfq_rq_close() also has trouble, sometimes the seek_mean is very
>> >> > big. I did a test to make cfq_rq_close() always checks the distence according
>> >> > to CIC_SEEK_THR, but still saw a lot of wrong merge. (BTW, why we take a long
>> >> > distence far away request as close. Taking them close doesn't improve any thoughtput
>> >> > to me. Maybe we should always use CIC_SEEK_THR as close criteria).
>> >> Yes, when deciding if two queues are going to be merged, we should use
>> >> the constant CIC_SEEK_THR.
>> >
>> > seek_mean could be very big sometimes, using it as close criteria is meanless
>> > as this doen't improve any performance. So if it's big, let's fallback to
>> > default value.
>>
>> meanless -> meaningless (also in the comment within code)
> oops
>
>> > Signed-off-by: Shaohua Li<[email protected]>
>> >
>> > diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
>> > index e2f8046..8025605 100644
>> > --- a/block/cfq-iosched.c
>> > +++ b/block/cfq-iosched.c
>> > @@ -1682,6 +1682,10 @@ static inline int cfq_rq_close(struct cfq_data *cfqd, struct cfq_queue *cfqq,
>> > if (!sample_valid(cfqq->seek_samples))
>> > sdist = CFQQ_SEEK_THR;
>> >
>> > + /* if seek_mean is big, using it as close criteria is meanless */
>> > + if (sdist > CFQQ_SEEK_THR)
>> > + sdist = CFQQ_SEEK_THR;
>> > +
>> > return cfq_dist_from_last(cfqd, rq) <= sdist;
>> > }
>> >
>> >
>> This changes also the cfq_should_preempt behaviour, where a large
>> seek_mean could be meaningful, so I think it is better to add a
>> boolean parameter to cfq_rq_close, to distinguish whether we are
>> preempting or looking for queue merges, and make the new code
>> conditional on merging.
> can you explain why it's meaningful for cfq_should_preempt()? Unless sdist is
> very big, for example > 10*seek_mean, the preempt seems not meaningless.
Disk access time is a complex function, but for rotational disks it is
'sort of' increasing with the amplitude of the seek. So, if you have a
new request that is within the mean seek distance (even if that is
larger than our constant), it is good to choose this request instead of
waiting for another one from the active queue (this behaviour is the
same exhibited by AS, so we need a good reason to change it).
This is reasonable for choosing the next request, but not for deciding
whether two queues should be merged (since they can drift apart soon), so my
suggestion is to differentiate the two cases with a parameter.
>
> Thanks,
> Shaohua
>
Thanks
Corrado
On Mon, Dec 28, 2009 at 05:11:25PM +0800, Corrado Zoccolo wrote:
> Hi Shaohua,
> On Mon, Dec 28, 2009 at 9:46 AM, Shaohua Li <[email protected]> wrote:
> > [...]
> >> This changes also the cfq_should_preempt behaviour, where a large
> >> seek_mean could be meaningful, so I think it is better to add a
> >> boolean parameter to cfq_rq_close, to distinguish whether we are
> >> preempting or looking for queue merges, and make the new code
> >> conditional on merging.
> > can you explain why it's meaningful for cfq_should_preempt()? Unless sdist is
> > very big, for example > 10*seek_mean, the preempt seems not meaningless.
>
> Disk access time is a complex function, but for rotational disks it is
> 'sort of' increasing with the amplitude of the seek. So, if you have a
> new request that is within the mean seek distance (even if it is
> larger than our constant), it is good to chose this request instead of
> waiting for an other one from the active queue (this behaviour is the
> same exhibited by AS, so we need a good reason to change).
I have no good reason, it just seemed a little strange to me.
If the other I/O schedulers behave the same way, let's stick with it.
seek_mean can sometimes be very big; using it as the "close" criterion is then
meaningless, as it doesn't improve performance. So if it's big, fall back to
the default value.
Signed-off-by: Shaohua Li <[email protected]>
diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index e2f8046..e80bd47 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -1675,13 +1675,17 @@ static inline sector_t cfq_dist_from_last(struct cfq_data *cfqd,
#define CFQQ_SEEKY(cfqq) ((cfqq)->seek_mean > CFQQ_SEEK_THR)
static inline int cfq_rq_close(struct cfq_data *cfqd, struct cfq_queue *cfqq,
- struct request *rq)
+ struct request *rq, bool for_preempt)
{
sector_t sdist = cfqq->seek_mean;
if (!sample_valid(cfqq->seek_samples))
sdist = CFQQ_SEEK_THR;
+ /* if seek_mean is big, using it as close criteria is meaningless */
+ if (sdist > CFQQ_SEEK_THR && !for_preempt)
+ sdist = CFQQ_SEEK_THR;
+
return cfq_dist_from_last(cfqd, rq) <= sdist;
}
@@ -1709,7 +1713,7 @@ static struct cfq_queue *cfqq_close(struct cfq_data *cfqd,
* will contain the closest sector.
*/
__cfqq = rb_entry(parent, struct cfq_queue, p_node);
- if (cfq_rq_close(cfqd, cur_cfqq, __cfqq->next_rq))
+ if (cfq_rq_close(cfqd, cur_cfqq, __cfqq->next_rq, false))
return __cfqq;
if (blk_rq_pos(__cfqq->next_rq) < sector)
@@ -1720,7 +1724,7 @@ static struct cfq_queue *cfqq_close(struct cfq_data *cfqd,
return NULL;
__cfqq = rb_entry(node, struct cfq_queue, p_node);
- if (cfq_rq_close(cfqd, cur_cfqq, __cfqq->next_rq))
+ if (cfq_rq_close(cfqd, cur_cfqq, __cfqq->next_rq, false))
return __cfqq;
return NULL;
@@ -3143,7 +3147,7 @@ cfq_should_preempt(struct cfq_data *cfqd, struct cfq_queue *new_cfqq,
* if this request is as-good as one we would expect from the
* current cfqq, let it preempt
*/
- if (cfq_rq_close(cfqd, cfqq, rq))
+ if (cfq_rq_close(cfqd, cfqq, rq, true))
return true;
return false;
On Mon, Dec 28, 2009 at 10:28 AM, Shaohua Li <[email protected]> wrote:
> On Mon, Dec 28, 2009 at 05:11:25PM +0800, Corrado Zoccolo wrote:
>> Hi Shaohua,
>> On Mon, Dec 28, 2009 at 9:46 AM, Shaohua Li <[email protected]> wrote:
>> > [...]
>> >> This changes also the cfq_should_preempt behaviour, where a large
>> >> seek_mean could be meaningful, so I think it is better to add a
>> >> boolean parameter to cfq_rq_close, to distinguish whether we are
>> >> preempting or looking for queue merges, and make the new code
>> >> conditional on merging.
>> > can you explain why it's meaningful for cfq_should_preempt()? Unless sdist is
>> > very big, for example > 10*seek_mean, the preempt seems not meaningless.
>>
>> Disk access time is a complex function, but for rotational disks it is
>> 'sort of' increasing with the amplitude of the seek. So, if you have a
>> new request that is within the mean seek distance (even if it is
>> larger than our constant), it is good to chose this request instead of
>> waiting for an other one from the active queue (this behaviour is the
>> same exhibited by AS, so we need a good reason to change).
> I have no good reason, but just thought it's a little strange.
> If other ioscheds take the same way, let's stick.
>
>
> seek_mean could be very big sometimes, using it as close criteria is meaningless
> as this doen't improve any performance. So if it's big, let's fallback to
> default value.
>
> Signed-off-by: Shaohua Li<[email protected]>
Reviewed-by: Corrado Zoccolo <[email protected]>
>
> [...]
On Mon, Dec 28 2009, Shaohua Li wrote:
> On Mon, Dec 28, 2009 at 05:11:25PM +0800, Corrado Zoccolo wrote:
> > Hi Shaohua,
> > On Mon, Dec 28, 2009 at 9:46 AM, Shaohua Li <[email protected]> wrote:
> > > [...]
> > >> This changes also the cfq_should_preempt behaviour, where a large
> > >> seek_mean could be meaningful, so I think it is better to add a
> > >> boolean parameter to cfq_rq_close, to distinguish whether we are
> > >> preempting or looking for queue merges, and make the new code
> > >> conditional on merging.
> > > can you explain why it's meaningful for cfq_should_preempt()? Unless sdist is
> > > very big, for example > 10*seek_mean, the preempt seems not meaningless.
> >
> > Disk access time is a complex function, but for rotational disks it is
> > 'sort of' increasing with the amplitude of the seek. So, if you have a
> > new request that is within the mean seek distance (even if it is
> > larger than our constant), it is good to chose this request instead of
> > waiting for an other one from the active queue (this behaviour is the
> > same exhibited by AS, so we need a good reason to change).
> I have no good reason, but just thought it's a little strange.
> If other ioscheds take the same way, let's stick.
>
>
> seek_mean could be very big sometimes, using it as close criteria is meaningless
> as this doen't improve any performance. So if it's big, let's fallback to
> default value.
Sorry for the lack of response in this thread, Christmas vacation is
upon us. The below looks good, I've added it. Thanks!
--
Jens Axboe
Shaohua Li <[email protected]> writes:
> seek_mean could be very big sometimes, using it as close criteria is meaningless
> as this doen't improve any performance. So if it's big, let's fallback to
> default value.
I think the real problem is that we lost a check for a seeky queue. The
close cooperator code initially was written to help interleaved
sequential I/Os. Given that, I'm not a big fan of the proposed patch.
I'd rather see an extra check for CFQQ_SEEKY added to the close
cooperator code paths and leave cfq_rq_close alone.
Cheers,
Jeff
> [...]
Shaohua Li <[email protected]> writes:
> Currently we split seeky coop queues after 1s, which is too big. Below patch
> marks seeky coop queue split_coop flag after one slice. After that, if new
> requests come in, the queues will be splitted. Patch is suggested by Corrado.
>
> Signed-off-by: Shaohua Li <[email protected]>
> Reviewed-by: Corrado Zoccolo <[email protected]>
This makes a lot of sense. Thanks for looking into it.
Signed-off-by: Jeff Moyer <[email protected]>
Shaohua Li <[email protected]> writes:
> On Mon, Dec 28, 2009 at 05:11:25PM +0800, Corrado Zoccolo wrote:
>> Hi Shaohua,
>> On Mon, Dec 28, 2009 at 9:46 AM, Shaohua Li <[email protected]> wrote:
>> [...]
> seek_mean could be very big sometimes, using it as close criteria is meaningless
> as this doen't improve any performance. So if it's big, let's fallback to
> default value.
Sorry, I don't follow how you came to these conclusions.
cfq_close_cooperator checks whether cur_cfqq is seeky:
if (CFQQ_SEEKY(cur_cfqq))
return NULL;
So, by the time we get to the call to cfqq_close, we know that the
passed-in cfqq has a seek_mean within the CFQQ_SEEK_THR.
cfqq_close then calls cfq_rq_close with the non-seeky cfqq and the next
request of the queue it is checking:
if (cfq_rq_close(cfqd, cur_cfqq, __cfqq->next_rq))
So, it seems to me that the checks you added are superfluous. How did
you test this to ensure that your patch improved performance? Your
statement about testing above seems to indicate that you did not see any
performance improvement.
For now, I'm leaning towards asking Jens to revert this. It may still
be worth making sure that we don't merge a seeky queue with a non-seeky
queue. I have a patch for that if folks are interested.
Cheers,
Jeff
>-----Original Message-----
>From: Jeff Moyer [mailto:[email protected]]
>Sent: Wednesday, January 06, 2010 5:16 AM
>To: Li, Shaohua
>Cc: Corrado Zoccolo; [email protected]; [email protected];
>Zhang, Yanmin
>Subject: Re: [PATCH]cfq-iosched: don't take requests with long distence as
>close
>
>Shaohua Li <[email protected]> writes:
>
>> [...]
>
>Sorry, I don't follow how you came to these conclusions.
>
>cfq_close_cooperator checks whether cur_cfqq is seeky:
>
> if (CFQQ_SEEKY(cur_cfqq))
> return NULL;
>
>So, by the time we get to the call to cfqq_close, we know that the
>passed-in cfqq has a seek_mean within the CFQQ_SEEK_THR.
>
>cfqq_close then calls cfq_rq_close with the non-seeky cfqq and the next
>request of the queue it is checking:
>
> if (cfq_rq_close(cfqd, cur_cfqq, __cfqq->next_rq))
>
>So, it seems to me that the checks you added are superfluous. How did
>you test this to ensure that your patch improved performance? Your
>statement about testing above seems to indicate that you did not see any
>performance improvement.
>
>For now, I'm leaning towards asking Jens to revert this. It may still
>be worth making sure that we don't merge a seeky queue with a non-seeky
>queue. I have a patch for that if folks are interested.
Ha, yes, you are right. I did see some improvement in the tiobench test,
but maybe it came from another patch (the split patch), or I was testing
old code. So please revert the patch.
Thanks,
Shaohua
On Tue, Jan 5, 2010 at 10:16 PM, Jeff Moyer <[email protected]> wrote:
>
> For now, I'm leaning towards asking Jens to revert this. It may still
> be worth making sure that we don't merge a seeky queue with a non-seeky
> queue. I have a patch for that if folks are interested.
Jeff, can you send this patch to Yanmin, who is investigating a
regression apparently caused by excessive queue merging?
http://lkml.org/lkml/2010/1/4/194
Cheers,
Corrado
Corrado Zoccolo <[email protected]> writes:
> On Tue, Jan 5, 2010 at 10:16 PM, Jeff Moyer <[email protected]> wrote:
>>
>> For now, I'm leaning towards asking Jens to revert this. It may still
>> be worth making sure that we don't merge a seeky queue with a non-seeky
>> queue. I have a patch for that if folks are interested.
> Jeff, can you send this patch to Yanmin, that is investigating a
> regression apparently caused by excessive queue merge?
>
> http://lkml.org/lkml/2010/1/4/194
You first have to back out Shaohua's patch, then apply this one.
Cheers,
Jeff
cfq-iosched: don't allow merging with seeky queues
Shaohua Li noticed that cfq currently can merge with seeky queues, which
causes unwanted merge/unmerge activity. We already know that the
cur_cfqq is not seeky, so this patch just makes sure that the non-seeky
queue is not merged with a seeky one.
Signed-off-by: Jeff Moyer <[email protected]>
diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index 8df4fe5..3db9050 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -1677,6 +1677,10 @@ static inline int cfq_rq_close(struct cfq_data *cfqd, struct cfq_queue *cfqq,
return cfq_dist_from_last(cfqd, rq) <= sdist;
}
+/*
+ * Search for a cfqq that is issuing non-seeky I/Os within the seek
+ * mean of the current cfqq.
+ */
static struct cfq_queue *cfqq_close(struct cfq_data *cfqd,
struct cfq_queue *cur_cfqq)
{
@@ -1701,7 +1705,14 @@ static struct cfq_queue *cfqq_close(struct cfq_data *cfqd,
* will contain the closest sector.
*/
__cfqq = rb_entry(parent, struct cfq_queue, p_node);
- if (cfq_rq_close(cfqd, cur_cfqq, __cfqq->next_rq))
+ /*
+ * If the cfqq does not have enough seek samples, assume it is
+ * sequential until proven otherwise. If it is assumed that the
+ * queue is seeky first, then the close cooperator detection logic
+ * may never trigger as one queue strays further from the other(s).
+ */
+ if (cfq_rq_close(cfqd, cur_cfqq, __cfqq->next_rq) &&
+ (!sample_valid(__cfqq->seek_samples) || !CFQQ_SEEKY(__cfqq)))
return __cfqq;
if (blk_rq_pos(__cfqq->next_rq) < sector)
@@ -1712,7 +1723,8 @@ static struct cfq_queue *cfqq_close(struct cfq_data *cfqd,
return NULL;
__cfqq = rb_entry(node, struct cfq_queue, p_node);
- if (cfq_rq_close(cfqd, cur_cfqq, __cfqq->next_rq))
+ if (cfq_rq_close(cfqd, cur_cfqq, __cfqq->next_rq) &&
+ (!sample_valid(__cfqq->seek_samples) || !CFQQ_SEEKY(__cfqq)))
return __cfqq;
return NULL;
On Thu, 2010-01-07 at 09:30 -0500, Jeff Moyer wrote:
> Corrado Zoccolo <[email protected]> writes:
>
> > On Tue, Jan 5, 2010 at 10:16 PM, Jeff Moyer <[email protected]> wrote:
> >>
> >> For now, I'm leaning towards asking Jens to revert this. It may still
> >> be worth making sure that we don't merge a seeky queue with a non-seeky
> >> queue. I have a patch for that if folks are interested.
> > Jeff, can you send this patch to Yanmin, that is investigating a
> > regression apparently caused by excessive queue merge?
> >
> > http://lkml.org/lkml/2010/1/4/194
>
>
> You first have to back out Shaohua's patch, then apply this one.
Thanks for forwarding me the patches.
Actually, we found that tiobench randread has about a 20% regression since
kernel 2.6.33-rc1, and fio randread has more than a 40% regression.
>
> Cheers,
> Jeff
>
> cfq-iosched: don't allow merging with seeky queues
With your new patch applied on 2.6.33-rc1, I don't see improvement in
either the tiobench or the fio randread regression. I know unexpected
merge/unmerge activity is just one root cause of the regressions; a
couple of other patches are also involved.
I also tried applying both your patch and Corrado's patch at
http://lkml.org/lkml/2010/1/9/57. The result looks like the one where I apply
only Corrado's patch, with which the regression almost disappears when
low_latency=0. But when low_latency=1, there is still about a 25% regression.
>
> Shaohua Li noticed that cfq currently can merge with seeky queues, which
> causes unwanted merge/unmerge activity. We already know that the
> cur_cfqq is not seeky, so this patch just makes sure that the non-seeky
> queue is not merged with a seeky one.
>
> Signed-off-by: Jeff Moyer <[email protected]>
>
> diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
> index 8df4fe5..3db9050 100644
> --- a/block/cfq-iosched.c
> +++ b/block/cfq-iosched.c
> @@ -1677,6 +1677,10 @@ static inline int cfq_rq_close(struct cfq_data *cfqd, struct cfq_queue *cfqq,
> return cfq_dist_from_last(cfqd, rq) <= sdist;
> }
>
> +/*
> + * Search for a cfqq that is issuing non-seeky I/Os within the seek
> + * mean of the current cfqq.
> + */
> static struct cfq_queue *cfqq_close(struct cfq_data *cfqd,
> struct cfq_queue *cur_cfqq)
> {
> @@ -1701,7 +1705,14 @@ static struct cfq_queue *cfqq_close(struct cfq_data *cfqd,
> * will contain the closest sector.
> */
> __cfqq = rb_entry(parent, struct cfq_queue, p_node);
> - if (cfq_rq_close(cfqd, cur_cfqq, __cfqq->next_rq))
> + /*
> + * If the cfqq does not have enough seek samples, assume it is
> + * sequential until proven otherwise. If it is assumed that the
> + * queue is seeky first, then the close cooperator detection logic
> + * may never trigger as one queue strays further from the other(s).
> + */
> + if (cfq_rq_close(cfqd, cur_cfqq, __cfqq->next_rq) &&
> + (!sample_valid(__cfqq->seek_samples) || !CFQQ_SEEKY(__cfqq)))
> return __cfqq;
>
> if (blk_rq_pos(__cfqq->next_rq) < sector)
> @@ -1712,7 +1723,8 @@ static struct cfq_queue *cfqq_close(struct cfq_data *cfqd,
> return NULL;
>
> __cfqq = rb_entry(node, struct cfq_queue, p_node);
> - if (cfq_rq_close(cfqd, cur_cfqq, __cfqq->next_rq))
> + if (cfq_rq_close(cfqd, cur_cfqq, __cfqq->next_rq) &&
> + (!sample_valid(__cfqq->seek_samples) || !CFQQ_SEEKY(__cfqq)))
> return __cfqq;
>
> return NULL;
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to [email protected]
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/
On Mon, Jan 11, 2010 at 6:20 AM, Zhang, Yanmin
<[email protected]> wrote:
> On Thu, 2010-01-07 at 09:30 -0500, Jeff Moyer wrote:
>> Corrado Zoccolo <[email protected]> writes:
>>
>> > On Tue, Jan 5, 2010 at 10:16 PM, Jeff Moyer <[email protected]> wrote:
>> >>
>> >> For now, I'm leaning towards asking Jens to revert this. It may still
>> >> be worth making sure that we don't merge a seeky queue with a non-seeky
>> >> queue. I have a patch for that if folks are interested.
>> > Jeff, can you send this patch to Yanmin, that is investigating a
>> > regression apparently caused by excessive queue merge?
>> >
>> > http://lkml.org/lkml/2010/1/4/194
>>
>>
>> You first have to back out Shaohua's patch, then apply this one.
> Thanks for forwarding me the patches.
> Actually, we found tiobench randread has about 20% regression since kernel
> 2.6.33-rc1, and fio randread has more than 40% regression.
>
>>
>> Cheers,
>> Jeff
>>
>> cfq-iosched: don't allow merging with seeky queues
> With your new patch applied on 2.6.33-rc1, I don't see improvement on
> both tiobench and fio randread regression. I know unexpected merge/unmerge
> is just one root cause of the regressions. A couple of other patches
> are also related to them.
Hi Yanmin,
are you testing Jeff's patch with your full fio script, instead of the
simplified one?
Since the patches fix the merging part, the effect shows up only with the
full fio script.
>
> I also tried to apply both your patch and Corrado's patch at
> http://lkml.org/lkml/2010/1/9/57. The result seems like the one when I just apply
> Corrado's patch, with which regression almost disappears when low_latency=0. But
> when low_latency=1, there is still about 25% regression.
>
Yes, low_latency is supposed to bring some regression.
http://lkml.org/lkml/2009/12/30/47
Thanks,
Corrado
>
>>
>> Shaohua Li noticed that cfq currently can merge with seeky queues, which
>> causes unwanted merge/unmerge activity. We already know that the
>> cur_cfqq is not seeky, so this patch just makes sure that the non-seeky
>> queue is not merged with a seeky one.
>>
>> Signed-off-by: Jeff Moyer <[email protected]>
>>
>> diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
>> index 8df4fe5..3db9050 100644
>> --- a/block/cfq-iosched.c
>> +++ b/block/cfq-iosched.c
>> @@ -1677,6 +1677,10 @@ static inline int cfq_rq_close(struct cfq_data *cfqd, struct cfq_queue *cfqq,
>> return cfq_dist_from_last(cfqd, rq) <= sdist;
>> }
>>
>> +/*
>> + * Search for a cfqq that is issuing non-seeky I/Os within the seek
>> + * mean of the current cfqq.
>> + */
>> static struct cfq_queue *cfqq_close(struct cfq_data *cfqd,
>> struct cfq_queue *cur_cfqq)
>> {
>> @@ -1701,7 +1705,14 @@ static struct cfq_queue *cfqq_close(struct cfq_data *cfqd,
>> * will contain the closest sector.
>> */
>> __cfqq = rb_entry(parent, struct cfq_queue, p_node);
>> - if (cfq_rq_close(cfqd, cur_cfqq, __cfqq->next_rq))
>> + /*
>> + * If the cfqq does not have enough seek samples, assume it is
>> + * sequential until proven otherwise. If it is assumed that the
>> + * queue is seeky first, then the close cooperator detection logic
>> + * may never trigger as one queue strays further from the other(s).
>> + */
>> + if (cfq_rq_close(cfqd, cur_cfqq, __cfqq->next_rq) &&
>> + (!sample_valid(__cfqq->seek_samples) || !CFQQ_SEEKY(__cfqq)))
>> return __cfqq;
>>
>> if (blk_rq_pos(__cfqq->next_rq) < sector)
>> @@ -1712,7 +1723,8 @@ static struct cfq_queue *cfqq_close(struct cfq_data *cfqd,
>> return NULL;
>>
>> __cfqq = rb_entry(node, struct cfq_queue, p_node);
>> - if (cfq_rq_close(cfqd, cur_cfqq, __cfqq->next_rq))
>> + if (cfq_rq_close(cfqd, cur_cfqq, __cfqq->next_rq) &&
>> + (!sample_valid(__cfqq->seek_samples) || !CFQQ_SEEKY(__cfqq)))
>> return __cfqq;
>>
>> return NULL;
>
>
>
--
__________________________________________________________________________
dott. Corrado Zoccolo mailto:[email protected]
PhD - Department of Computer Science - University of Pisa, Italy
--------------------------------------------------------------------------
The self-confidence of a warrior is not the self-confidence of the average
man. The average man seeks certainty in the eyes of the onlooker and calls
that self-confidence. The warrior seeks impeccability in his own eyes and
calls that humbleness.
Tales of Power - C. Castaneda
On Mon, 2010-01-11 at 16:05 +0100, Corrado Zoccolo wrote:
> On Mon, Jan 11, 2010 at 6:20 AM, Zhang, Yanmin
> <[email protected]> wrote:
> > On Thu, 2010-01-07 at 09:30 -0500, Jeff Moyer wrote:
> >> Corrado Zoccolo <[email protected]> writes:
> >>
> >> > On Tue, Jan 5, 2010 at 10:16 PM, Jeff Moyer <[email protected]> wrote:
> >> >>
> >> >> For now, I'm leaning towards asking Jens to revert this. It may still
> >> >> be worth making sure that we don't merge a seeky queue with a non-seeky
> >> >> queue. I have a patch for that if folks are interested.
> >> > Jeff, can you send this patch to Yanmin, that is investigating a
> >> > regression apparently caused by excessive queue merge?
> >> >
> >> > http://lkml.org/lkml/2010/1/4/194
> >>
> >>
> >> You first have to back out Shaohua's patch, then apply this one.
> > Thanks for forwarding me the patches.
> > Actually, we found tiobench randread has about 20% regression since kernel
> > 2.6.33-rc1, and fio randread has more than 40% regression.
> >
> >>
> >> Cheers,
> >> Jeff
> >>
> >> cfq-iosched: don't allow merging with seeky queues
> > With your new patch applied on 2.6.33-rc1, I don't see improvement on
> > both tiobench and fio randread regression. I know unexpected merge/unmerge
> > is just one root cause of the regressions. A couple of other patches
> > are also related to them.
>
> Hi Yanmin,
> are you testing Jeff's patch with your full fio script, instead of the
> simplified one?
Thanks for your reminder. I tested the patch with the simplified one.
> Since they are fixing the merging part, that happens only with the
> full fio script.
Ok. I tested the full fio script a moment ago and didn't see any improvement.
>
> >
> > I also tried to apply both your patch and Corrado's patch at
> > http://lkml.org/lkml/2010/1/9/57. The result seems like the one when I just apply
> > Corrado's patch, with which regression almost disappears when low_latency=0. But
> > when low_latency=1, there is still about 25% regression.
> >
> Yes, low_latency is supposed to bring some regression.
> http://lkml.org/lkml/2009/12/30/47
>
> Thanks,
> Corrado
> >
> >>
> >> Shaohua Li noticed that cfq currently can merge with seeky queues, which
> >> causes unwanted merge/unmerge activity. We already know that the
> >> cur_cfqq is not seeky, so this patch just makes sure that the non-seeky
> >> queue is not merged with a seeky one.
> >>
> >> Signed-off-by: Jeff Moyer <[email protected]>
> >>
> >> diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
> >> index 8df4fe5..3db9050 100644
> >> --- a/block/cfq-iosched.c
> >> +++ b/block/cfq-iosched.c
> >> @@ -1677,6 +1677,10 @@ static inline int cfq_rq_close(struct cfq_data *cfqd, struct cfq_queue *cfqq,
> >> return cfq_dist_from_last(cfqd, rq) <= sdist;
> >> }
> >>
> >> +/*
> >> + * Search for a cfqq that is issuing non-seeky I/Os within the seek
> >> + * mean of the current cfqq.
> >> + */
> >> static struct cfq_queue *cfqq_close(struct cfq_data *cfqd,
> >> struct cfq_queue *cur_cfqq)
> >> {
> >> @@ -1701,7 +1705,14 @@ static struct cfq_queue *cfqq_close(struct cfq_data *cfqd,
> >> * will contain the closest sector.
> >> */
> >> __cfqq = rb_entry(parent, struct cfq_queue, p_node);
> >> - if (cfq_rq_close(cfqd, cur_cfqq, __cfqq->next_rq))
> >> + /*
> >> + * If the cfqq does not have enough seek samples, assume it is
> >> + * sequential until proven otherwise. If it is assumed that the
> >> + * queue is seeky first, then the close cooperator detection logic
> >> + * may never trigger as one queue strays further from the other(s).
> >> + */
> >> + if (cfq_rq_close(cfqd, cur_cfqq, __cfqq->next_rq) &&
> >> + (!sample_valid(__cfqq->seek_samples) || !CFQQ_SEEKY(__cfqq)))
> >> return __cfqq;
> >>
> >> if (blk_rq_pos(__cfqq->next_rq) < sector)
> >> @@ -1712,7 +1723,8 @@ static struct cfq_queue *cfqq_close(struct cfq_data *cfqd,
> >> return NULL;
> >>
> >> __cfqq = rb_entry(node, struct cfq_queue, p_node);
> >> - if (cfq_rq_close(cfqd, cur_cfqq, __cfqq->next_rq))
> >> + if (cfq_rq_close(cfqd, cur_cfqq, __cfqq->next_rq) &&
> >> + (!sample_valid(__cfqq->seek_samples) || !CFQQ_SEEKY(__cfqq)))
> >> return __cfqq;
> >>
> >> return NULL;
On Tue, Jan 12, 2010 at 3:43 AM, Zhang, Yanmin
<[email protected]> wrote:
> On Mon, 2010-01-11 at 16:05 +0100, Corrado Zoccolo wrote:
> > On Mon, Jan 11, 2010 at 6:20 AM, Zhang, Yanmin
> > <[email protected]> wrote:
> > > On Thu, 2010-01-07 at 09:30 -0500, Jeff Moyer wrote:
> > Hi Yanmin,
> > are you testing Jeff's patch with your full fio script, instead of the
> > simplified one?
> Thanks for your reminder. I tested the patch with simplified one.
>
> > Since they are fixing the merging part, that happens only with the
> > full fio script.
> Ok. I tested the full fio script a moment ago and didn't find improvement.
> > >>
> > >> Shaohua Li noticed that cfq currently can merge with seeky queues, which
> > >> causes unwanted merge/unmerge activity. We already know that the
> > >> cur_cfqq is not seeky, so this patch just makes sure that the non-seeky
> > >> queue is not merged with a seeky one.
> > >>
> > >> Signed-off-by: Jeff Moyer <[email protected]>
> > >>
> > >> diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
> > >> index 8df4fe5..3db9050 100644
> > >> --- a/block/cfq-iosched.c
> > >> +++ b/block/cfq-iosched.c
> > >> @@ -1677,6 +1677,10 @@ static inline int cfq_rq_close(struct cfq_data *cfqd, struct cfq_queue *cfqq,
> > >> return cfq_dist_from_last(cfqd, rq) <= sdist;
> > >> }
> > >>
> > >> +/*
> > >> + * Search for a cfqq that is issuing non-seeky I/Os within the seek
> > >> + * mean of the current cfqq.
> > >> + */
> > >> static struct cfq_queue *cfqq_close(struct cfq_data *cfqd,
> > >> struct cfq_queue *cur_cfqq)
> > >> {
> > >> @@ -1701,7 +1705,14 @@ static struct cfq_queue *cfqq_close(struct cfq_data *cfqd,
> > >> * will contain the closest sector.
> > >> */
> > >> __cfqq = rb_entry(parent, struct cfq_queue, p_node);
> > >> - if (cfq_rq_close(cfqd, cur_cfqq, __cfqq->next_rq))
> > >> + /*
> > >> + * If the cfqq does not have enough seek samples, assume it is
> > >> + * sequential until proven otherwise. If it is assumed that the
> > >> + * queue is seeky first, then the close cooperator detection logic
> > >> + * may never trigger as one queue strays further from the other(s).
> > >> + */
> > >> + if (cfq_rq_close(cfqd, cur_cfqq, __cfqq->next_rq) &&
> > >> + (!sample_valid(__cfqq->seek_samples) || !CFQQ_SEEKY(__cfqq)))
> > >> return __cfqq;
> > >>
> > >> if (blk_rq_pos(__cfqq->next_rq) < sector)
> > >> @@ -1712,7 +1723,8 @@ static struct cfq_queue *cfqq_close(struct cfq_data *cfqd,
> > >> return NULL;
> > >>
> > >> __cfqq = rb_entry(node, struct cfq_queue, p_node);
> > >> - if (cfq_rq_close(cfqd, cur_cfqq, __cfqq->next_rq))
> > >> + if (cfq_rq_close(cfqd, cur_cfqq, __cfqq->next_rq) &&
> > >> + (!sample_valid(__cfqq->seek_samples) || !CFQQ_SEEKY(__cfqq)))
> > >> return __cfqq;
> > >>
> > >> return NULL;
Hi Jeff,
I think this patch has the same flaw as Shaohua's.
The seekiness check that you introduce in cfq_rq_close is already
present in its caller, cfq_close_cooperator, so it is not effective.
Up to now, the only patch that improves this situation is the one that
changes the unmerge policy to unmerge after a single time slice.
Thanks,
Corrado
Corrado Zoccolo <[email protected]> writes:
> Hi Jeff,
> I think this patch has the same flaw as Shaohua's.
> The seekiness check that you introduce in cfq_rq_close is already
> present in its caller, cfq_close_cooperator, so it is not effective.
I don't think so. There are two queues here. One queue is checked by
the caller, and that is cur_cfqq. The __cfqq needs to be checked as
well.
> Up to now, the only patch that improves this situation is the one that
> changes the unmerge policy to unmerge after a single time slice.
Yes, I definitely agree with that, and I think that patch should go in.
Cheers,
Jeff
On Fri, Jan 15, 2010 at 8:45 PM, Jeff Moyer <[email protected]> wrote:
> Corrado Zoccolo <[email protected]> writes:
>
>> Hi Jeff,
>> I think this patch has the same flaw as Shaohua's.
>> The seekiness check that you introduce in cfq_rq_close is already
>> present in its caller, cfq_close_cooperator, so it is not effective.
>
> I don't think so. There are two queues, here. One queue is checked by
> the caller, and that is the cur_cfqq. The __cfqq needs to also be
> checked.
The other one, i.e. the returned one, is also checked by the caller,
a few lines below:
1756 cfqq = cfqq_close(cfqd, cur_cfqq);
1757 if (!cfqq)
1758 return NULL;
1759
1760 /* If new queue belongs to different cfq_group, don't choose it */
1761 if (cur_cfqq->cfqg != cfqq->cfqg)
1762 return NULL;
1763
1764 /*
1765 * It only makes sense to merge sync queues.
1766 */
1767 if (!cfq_cfqq_sync(cfqq))
1768 return NULL;
1769 if (CFQQ_SEEKY(cfqq))
1770 return NULL;
>
>> Up to now, the only patch that improves this situation is the one that
>> changes the unmerge policy to unmerge after a single time slice.
>
> Yes, I definitely agree with that, and I think that patch should go in.
>
> Cheers,
> Jeff
>
Corrado Zoccolo <[email protected]> writes:
> On Fri, Jan 15, 2010 at 8:45 PM, Jeff Moyer <[email protected]> wrote:
>> Corrado Zoccolo <[email protected]> writes:
>>
>>> Hi Jeff,
>>> I think this patch has the same flaw as Shaohua's.
>>> The seekiness check that you introduce in cfq_rq_close is already
>>> present in its caller, cfq_close_cooperator, so it is not effective.
>>
>> I don't think so. There are two queues, here. One queue is checked by
>> the caller, and that is the cur_cfqq. The __cfqq needs to also be
>> checked.
> The other one, i.e. the returned one, is also checked by the caller,
> some lines below.
haha. OK, I guess I wrote it right the first time? ;-)
-Jeff