From: Vivek Goyal <vgoyal@redhat.com>
To: linux-kernel@vger.kernel.org, containers@lists.linux-foundation.org,
	dm-devel@redhat.com, jens.axboe@oracle.com, nauman@google.com,
	dpshah@google.com, lizf@cn.fujitsu.com, mikew@google.com,
	fchecconi@gmail.com, paolo.valente@unimore.it, ryov@valinux.co.jp,
	fernando@oss.ntt.co.jp, s-uchida@ap.jp.nec.com, taka@valinux.co.jp,
	guijianfeng@cn.fujitsu.com, jmoyer@redhat.com, dhaval@linux.vnet.ibm.com,
	balbir@linux.vnet.ibm.com, righi.andrea@gmail.com, m-ikeda@ds.jp.nec.com,
	jbaron@redhat.com
Cc: agk@redhat.com, snitzer@redhat.com, vgoyal@redhat.com,
	akpm@linux-foundation.org, peterz@infradead.org
Subject: [PATCH 08/19] io-controller: idle for sometime on sync queue before expiring it
Date: Mon, 8 Jun 2009 22:08:51 -0400
Message-Id: <1244513342-11758-9-git-send-email-vgoyal@redhat.com>
In-Reply-To: <1244513342-11758-1-git-send-email-vgoyal@redhat.com>
References: <1244513342-11758-1-git-send-email-vgoyal@redhat.com>

o When a sync queue expires, in many cases it might be empty and then it
  will be deleted from the active tree. This leads to a scenario where, out
  of two competing queues, only one is on the tree; when a new queue is
  selected, a vtime jump takes place and we don't see service provided in
  proportion to weight.

o In general this is a fundamental problem with fairness for sync queues
  which are not continuously backlogged. Idling looks like the only solution
  to make sure such queues can get a decent amount of disk bandwidth in the
  face of competition from continuously backlogged queues. But excessive
  idling has the potential to reduce performance on SSDs and on disks with
  command queuing.

o This patch experiments with waiting for the next request to come in before
  a queue is expired, after it has consumed its time slice. This can ensure
  more accurate fairness numbers in some cases.

o Introduced a tunable "fairness". If set, the io-controller will put more
  focus on getting fairness right than on getting throughput right.
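The heart of the change is a single decision made at request completion
time: if a sync queue has used up its slice, has just gone empty, and is
the only active queue in its io group, arm the idle timer once more and
wait for the queue to become busy again instead of expiring it right away.
A simplified standalone sketch of that decision follows; the struct and
helper names here are illustrative stand-ins only, the real logic lives in
elv_ioq_completed_request() and elv_ioq_arm_slice_timer() in the diff
below.

	/* Standalone illustration, not part of the patch itself. */
	#include <stdbool.h>
	#include <stdio.h>

	struct sync_queue {
		bool sync;            /* queue serves synchronous IO        */
		int  nr_queued;       /* requests still sitting in queue    */
		int  group_nr_active; /* active queues in the same io group */
		bool slice_used;      /* time slice fully consumed          */
	};

	/*
	 * Wait-busy decision: idle for one more elv_slice_idle period so a
	 * new request can keep the queue on the service tree (no vtime
	 * jump), but only if no other queue in the group would be delayed.
	 */
	static bool wait_for_next_request(const struct sync_queue *q)
	{
		return q->slice_used && q->sync && q->nr_queued == 0 &&
		       q->group_nr_active <= 1;
	}

	int main(void)
	{
		struct sync_queue lone   = { .sync = true, .nr_queued = 0,
					     .group_nr_active = 1,
					     .slice_used = true };
		struct sync_queue shared = { .sync = true, .nr_queued = 0,
					     .group_nr_active = 3,
					     .slice_used = true };

		printf("lone sync queue : %s\n",
		       wait_for_next_request(&lone) ? "wait busy" : "expire");
		printf("busy group      : %s\n",
		       wait_for_next_request(&shared) ? "wait busy" : "expire");
		return 0;
	}

With the patch applied, the behaviour can be switched at run time through
the new "fairness" queue attribute (assuming the usual block sysfs layout,
/sys/block/<dev>/queue/fairness); 0 favours throughput, any non-zero value
favours fairness, and the patch leaves it enabled by default.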
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
---
 block/blk-sysfs.c   |    7 ++
 block/elevator-fq.c |  165 ++++++++++++++++++++++++++++++++++++++++++++++-----
 block/elevator-fq.h |   15 +++++
 3 files changed, 171 insertions(+), 16 deletions(-)

diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 082a273..c942ddc 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -294,6 +294,12 @@ static struct queue_sysfs_entry queue_slice_async_entry = {
 	.show = elv_slice_async_show,
 	.store = elv_slice_async_store,
 };
+
+static struct queue_sysfs_entry queue_fairness_entry = {
+	.attr = {.name = "fairness", .mode = S_IRUGO | S_IWUSR },
+	.show = elv_fairness_show,
+	.store = elv_fairness_store,
+};
 #endif
 
 static struct attribute *default_attrs[] = {
@@ -311,6 +317,7 @@ static struct attribute *default_attrs[] = {
 	&queue_slice_idle_entry.attr,
 	&queue_slice_sync_entry.attr,
 	&queue_slice_async_entry.attr,
+	&queue_fairness_entry.attr,
 #endif
 	NULL,
 };
diff --git a/block/elevator-fq.c b/block/elevator-fq.c
index 7165902..ce8e47b 100644
--- a/block/elevator-fq.c
+++ b/block/elevator-fq.c
@@ -368,6 +368,7 @@ static void bfq_active_insert(struct io_service_tree *st,
 	struct rb_node *node = &entity->rb_node;
 
 	bfq_insert(&st->active, entity);
+	entity->sched_data->nr_active++;
 
 	if (node->rb_left != NULL)
 		node = node->rb_left;
 	else if (node->rb_right != NULL)
@@ -426,6 +427,7 @@ static void bfq_active_extract(struct io_service_tree *st,
 
 	node = bfq_find_deepest(&entity->rb_node);
 	bfq_extract(&st->active, entity);
+	entity->sched_data->nr_active--;
 
 	if (node != NULL)
 		bfq_update_active_tree(node);
 }
@@ -959,6 +961,21 @@ void io_put_io_group_queues(struct elevator_queue *e, struct io_group *iog)
 	elv_release_ioq(e, &iog->async_idle_queue);
 }
 
+/*
+ * Returns the number of active entities a particular io group has. This
+ * includes number of active entities on service tree as well as the active
+ * entity which is being served currently, if any.
+ */
+
+static inline int elv_iog_nr_active(struct io_group *iog)
+{
+	struct io_sched_data *sd = &iog->sched_data;
+
+	if (sd->active_entity)
+		return sd->nr_active + 1;
+	else
+		return sd->nr_active;
+}
 /* Mainly hierarchical grouping code */
 #ifdef CONFIG_GROUP_IOSCHED
 
@@ -1900,6 +1917,44 @@ static inline int is_root_group_ioq(struct request_queue *q,
 	return (ioq->entity.sched_data == &efqd->root_group->sched_data);
 }
 
+/* Functions to show and store fairness value through sysfs */
+ssize_t elv_fairness_show(struct request_queue *q, char *name)
+{
+	struct elv_fq_data *efqd;
+	unsigned int data;
+	unsigned long flags;
+
+	spin_lock_irqsave(q->queue_lock, flags);
+	efqd = &q->elevator->efqd;
+	data = efqd->fairness;
+	spin_unlock_irqrestore(q->queue_lock, flags);
+	return sprintf(name, "%d\n", data);
+}
+
+ssize_t elv_fairness_store(struct request_queue *q, const char *name,
+			  size_t count)
+{
+	struct elv_fq_data *efqd;
+	unsigned int data;
+	unsigned long flags;
+
+	char *p = (char *)name;
+
+	data = simple_strtoul(p, &p, 10);
+
+	if (data < 0)
+		data = 0;
+	else if (data > INT_MAX)
+		data = INT_MAX;
+
+	spin_lock_irqsave(q->queue_lock, flags);
+	efqd = &q->elevator->efqd;
+	efqd->fairness = data;
+	spin_unlock_irqrestore(q->queue_lock, flags);
+
+	return count;
+}
+
 /* Functions to show and store elv_idle_slice value through sysfs */
 ssize_t elv_slice_idle_show(struct request_queue *q, char *name)
 {
@@ -2140,7 +2195,7 @@ static void elv_ioq_update_idle_window(struct elevator_queue *eq,
 	 * io scheduler if it wants to disable idling based on additional
 	 * considrations like seek pattern.
 	 */
-	if (enable_idle) {
+	if (enable_idle && !efqd->fairness) {
 		if (eq->ops->elevator_update_idle_window_fn)
 			enable_idle = eq->ops->elevator_update_idle_window_fn(
 						eq, ioq->sched_queue, rq);
@@ -2321,6 +2376,7 @@ static void __elv_set_active_ioq(struct elv_fq_data *efqd, struct io_queue *ioq,
 
 		elv_clear_ioq_wait_request(ioq);
 		elv_clear_ioq_must_dispatch(ioq);
+		elv_clear_ioq_wait_busy_done(ioq);
 		elv_mark_ioq_slice_new(ioq);
 
 		del_timer(&efqd->idle_slice_timer);
@@ -2474,10 +2530,12 @@ void __elv_ioq_slice_expired(struct request_queue *q, struct io_queue *ioq)
 	assert_spin_locked(q->queue_lock);
 	elv_log_ioq(efqd, ioq, "slice expired");
 
-	if (elv_ioq_wait_request(ioq))
+	if (elv_ioq_wait_request(ioq) || elv_ioq_wait_busy(ioq))
 		del_timer(&efqd->idle_slice_timer);
 
 	elv_clear_ioq_wait_request(ioq);
+	elv_clear_ioq_wait_busy(ioq);
+	elv_clear_ioq_wait_busy_done(ioq);
 
 	/*
 	 * if ioq->slice_end = 0, that means a queue was expired before first
@@ -2642,7 +2700,7 @@ void elv_ioq_request_add(struct request_queue *q, struct request *rq)
 		 * has other work pending, don't risk delaying until the
 		 * idle timer unplug to continue working.
 		 */
-		if (elv_ioq_wait_request(ioq)) {
+		if (elv_ioq_wait_request(ioq) && !elv_ioq_wait_busy(ioq)) {
 			if (blk_rq_bytes(rq) > PAGE_CACHE_SIZE ||
 			    efqd->busy_queues > 1) {
 				del_timer(&efqd->idle_slice_timer);
@@ -2650,6 +2708,18 @@ void elv_ioq_request_add(struct request_queue *q, struct request *rq)
 			}
 			elv_mark_ioq_must_dispatch(ioq);
 		}
+
+		/*
+		 * If we were waiting for a request on this queue, wait is
+		 * done. Schedule the next dispatch.
+		 */
+		if (elv_ioq_wait_busy(ioq)) {
+			del_timer(&efqd->idle_slice_timer);
+			elv_clear_ioq_wait_busy(ioq);
+			elv_mark_ioq_wait_busy_done(ioq);
+			elv_clear_ioq_must_dispatch(ioq);
+			elv_schedule_dispatch(q);
+		}
 	} else if (elv_should_preempt(q, ioq, rq)) {
 		/*
 		 * not the active queue - expire current slice if it is
@@ -2677,6 +2747,9 @@ void elv_idle_slice_timer(unsigned long data)
 
 	if (ioq) {
 
+		if (elv_ioq_wait_busy(ioq))
+			goto expire;
+
 		/*
 		 * We saw a request before the queue expired, let it through
 		 */
@@ -2710,7 +2783,7 @@ out_cont:
 	spin_unlock_irqrestore(q->queue_lock, flags);
 }
 
-void elv_ioq_arm_slice_timer(struct request_queue *q)
+void elv_ioq_arm_slice_timer(struct request_queue *q, int wait_for_busy)
 {
 	struct elv_fq_data *efqd = &q->elevator->efqd;
 	struct io_queue *ioq = elv_active_ioq(q->elevator);
@@ -2723,26 +2796,38 @@ void elv_ioq_arm_slice_timer(struct request_queue *q)
 	 * for devices that support queuing, otherwise we still have a problem
 	 * with sync vs async workloads.
 	 */
-	if (blk_queue_nonrot(q) && efqd->hw_tag)
+	if (blk_queue_nonrot(q) && efqd->hw_tag && !efqd->fairness)
 		return;
 
 	/*
-	 * still requests with the driver, don't idle
+	 * idle is disabled, either manually or by past process history
 	 */
-	if (efqd->rq_in_driver)
+	if (!efqd->elv_slice_idle || !elv_ioq_idle_window(ioq))
 		return;
 
 	/*
-	 * idle is disabled, either manually or by past process history
+	 * This queue has consumed its time slice. We are waiting only for
+	 * it to become busy before we select next queue for dispatch.
 	 */
-	if (!efqd->elv_slice_idle || !elv_ioq_idle_window(ioq))
+	if (wait_for_busy) {
+		elv_mark_ioq_wait_busy(ioq);
+		sl = efqd->elv_slice_idle;
+		mod_timer(&efqd->idle_slice_timer, jiffies + sl);
+		elv_log_ioq(efqd, ioq, "arm idle: %lu wait busy=1", sl);
+		return;
+	}
+
+	/*
+	 * still requests with the driver, don't idle
+	 */
+	if (efqd->rq_in_driver && !efqd->fairness)
 		return;
 
 	/*
 	 * may be iosched got its own idling logic. In that case io
 	 * schduler will take care of arming the timer, if need be.
 	 */
-	if (q->elevator->ops->elevator_arm_slice_timer_fn) {
+	if (q->elevator->ops->elevator_arm_slice_timer_fn && !efqd->fairness) {
 		q->elevator->ops->elevator_arm_slice_timer_fn(q,
 						ioq->sched_queue);
 	} else {
@@ -2776,11 +2861,38 @@ void *elv_fq_select_ioq(struct request_queue *q, int force)
 		goto expire;
 	}
 
+	/* We are waiting for this queue to become busy before it expires. */
+	if (efqd->fairness && elv_ioq_wait_busy(ioq)) {
+		ioq = NULL;
+		goto keep_queue;
+	}
+
 	/*
 	 * The active queue has run out of time, expire it and select new.
 	 */
-	if (elv_ioq_slice_used(ioq) && !elv_ioq_must_dispatch(ioq))
-		goto expire;
+	if (elv_ioq_slice_used(ioq) && !elv_ioq_must_dispatch(ioq)) {
+		/*
+		 * Queue has used up its slice. Wait busy is not on otherwise
+		 * we wouldn't have been here. There is a chance that after
+		 * slice expiry no request from the queue completed hence
+		 * wait busy timer could not be turned on. If that's the case
+		 * don't expire the queue yet. Next request completion from
+		 * the queue will arm the wait busy timer.
+		 *
+		 * Don't wait if this group has other active queues. This
+		 * will make sure that we don't lose fairness at group level
+		 * and, at the same time, in the root group we will not see
+		 * cfq regressions.
+		 */
+		if (elv_ioq_sync(ioq) && !ioq->nr_queued
+		    && elv_ioq_nr_dispatched(ioq)
+		    && (elv_iog_nr_active(ioq_to_io_group(ioq)) <= 1)
+		    && !elv_ioq_wait_busy_done(ioq)) {
+			ioq = NULL;
+			goto keep_queue;
+		} else
+			goto expire;
+	}
 
 	/*
 	 * If we have a RT cfqq waiting, then we pre-empt the current non-rt
@@ -2957,11 +3069,13 @@ void elv_ioq_completed_request(struct request_queue *q, struct request *rq)
 	const int sync = rq_is_sync(rq);
 	struct io_queue *ioq;
 	struct elv_fq_data *efqd = &q->elevator->efqd;
+	struct io_group *iog;
 
 	if (!elv_iosched_fair_queuing_enabled(q->elevator))
 		return;
 
 	ioq = rq->ioq;
+	iog = ioq_to_io_group(ioq);
 
 	elv_log_ioq(efqd, ioq, "complete");
 
@@ -2987,6 +3101,12 @@ void elv_ioq_completed_request(struct request_queue *q, struct request *rq)
 			elv_ioq_set_prio_slice(q, ioq);
 			elv_clear_ioq_slice_new(ioq);
 		}
+
+		if (elv_ioq_class_idle(ioq)) {
+			elv_ioq_slice_expired(q);
+			goto done;
+		}
+
 		/*
 		 * If there are no requests waiting in this queue, and
 		 * there are other queues ready to issue requests, AND
@@ -2994,13 +3114,24 @@ void elv_ioq_completed_request(struct request_queue *q, struct request *rq)
 		 * mean seek distance, give them a chance to run instead
 		 * of idling.
 		 */
-		if (elv_ioq_slice_used(ioq) || elv_ioq_class_idle(ioq))
-			elv_ioq_slice_expired(q);
-		else if (!ioq->nr_queued && !elv_close_cooperator(q, ioq, 1)
+		if (elv_ioq_slice_used(ioq)) {
+			if (sync && !ioq->nr_queued
+			    && (elv_iog_nr_active(iog) <= 1)) {
+				/*
+				 * Idle for one extra period in hierarchical
+				 * setup
+				 */
+				elv_ioq_arm_slice_timer(q, 1);
+			} else {
+				/* Expire the queue */
+				elv_ioq_slice_expired(q);
+			}
+		} else if (!ioq->nr_queued && !elv_close_cooperator(q, ioq, 1)
 			 && sync && !rq_noidle(rq))
-			elv_ioq_arm_slice_timer(q);
+			elv_ioq_arm_slice_timer(q, 0);
 	}
 
+done:
 	if (!efqd->rq_in_driver)
 		elv_schedule_dispatch(q);
 }
@@ -3105,6 +3236,8 @@ int elv_init_fq_data(struct request_queue *q, struct elevator_queue *e)
 	efqd->elv_slice_idle = elv_slice_idle;
 	efqd->hw_tag = 1;
 
+	/* For the time being keep fairness enabled by default */
+	efqd->fairness = 1;
 	return 0;
 }
 
diff --git a/block/elevator-fq.h b/block/elevator-fq.h
index 447cea0..54b3657 100644
--- a/block/elevator-fq.h
+++ b/block/elevator-fq.h
@@ -74,6 +74,7 @@ struct io_service_tree {
 struct io_sched_data {
 	struct io_entity *active_entity;
 	struct io_entity *next_active;
+	int nr_active;
 	struct io_service_tree service_tree[IO_IOPRIO_CLASSES];
 };
 
@@ -321,6 +322,13 @@ struct elv_fq_data {
 	unsigned long long rate_sampling_start; /*sampling window start jifies*/
 	/* number of sectors finished io during current sampling window */
 	unsigned long rate_sectors_current;
+
+	/*
+	 * If set to 1, will disable many optimizations done to boost
+	 * throughput and focus more on providing fairness for sync
+	 * queues.
+	 */
+	int fairness;
 };
 
 extern int elv_slice_idle;
@@ -345,6 +353,8 @@ enum elv_queue_state_flags {
 	ELV_QUEUE_FLAG_wait_request,	/* waiting for a request */
 	ELV_QUEUE_FLAG_must_dispatch,	/* must be allowed a dispatch */
 	ELV_QUEUE_FLAG_slice_new,	/* no requests dispatched in slice */
+	ELV_QUEUE_FLAG_wait_busy,	/* wait for this queue to get busy */
+	ELV_QUEUE_FLAG_wait_busy_done,	/* Have already waited on this queue */
 	ELV_QUEUE_FLAG_NR,
 };
 
@@ -368,6 +378,8 @@ ELV_IO_QUEUE_FLAG_FNS(wait_request)
 ELV_IO_QUEUE_FLAG_FNS(must_dispatch)
 ELV_IO_QUEUE_FLAG_FNS(idle_window)
 ELV_IO_QUEUE_FLAG_FNS(slice_new)
+ELV_IO_QUEUE_FLAG_FNS(wait_busy)
+ELV_IO_QUEUE_FLAG_FNS(wait_busy_done)
 
 static inline struct io_service_tree *
 io_entity_service_tree(struct io_entity *entity)
@@ -541,6 +553,9 @@ extern ssize_t elv_slice_sync_store(struct request_queue *q, const char *name,
 extern ssize_t elv_slice_async_show(struct request_queue *q, char *name);
 extern ssize_t elv_slice_async_store(struct request_queue *q, const char *name,
 					size_t count);
+extern ssize_t elv_fairness_show(struct request_queue *q, char *name);
+extern ssize_t elv_fairness_store(struct request_queue *q, const char *name,
+					size_t count);
 
 /* Functions used by elevator.c */
 extern int elv_init_fq_data(struct request_queue *q, struct elevator_queue *e);
-- 
1.6.0.6

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/