Subject: Re: [PATCH] block: Make CFQ default to IOPS mode on SSDs
From: Tahsin Erdogan
To: Jens Axboe, Vivek Goyal, tytso@mit.edu
Cc: linux-kernel@vger.kernel.org, Tahsin Erdogan, Nauman Rafique
Date: Fri, 5 Jun 2015 15:58:14 -0700

On Wed, May 27, 2015 at 1:14 PM, Tahsin Erdogan wrote:
> On Tue, May 19, 2015 at 1:55 PM, Tahsin Erdogan wrote:
>> CFQ idling causes reduced IOPS throughput on non-rotational disks.
>> Since disk head seeking does not apply to SSDs, idling does not help
>> performance by anticipating nearby future IO requests.
>>
>> By turning off idling (and switching to IOPS mode), we allow other
>> processes to dispatch IO requests down to the driver and so increase
>> IO throughput.
>>
>> The following FIO benchmark results were taken on a cloud SSD
>> offering with idling on and off:
>>
>> Idling  iops   avg-lat(ms)  stddev  bw
>> ------------------------------------------------------
>> On       7054  90.107       38.697   28217KB/s
>> Off     29255  21.836       11.730  117022KB/s
>>
>> fio --name=temp --size=100G --time_based --ioengine=libaio \
>>     --randrepeat=0 --direct=1 --invalidate=1 --verify=0 \
>>     --verify_fatal=0 --rw=randread --blocksize=4k --group_reporting=1 \
>>     --filename=/dev/sdb --runtime=10 --iodepth=64 --numjobs=10
>>
>> And the following is from a local SSD run:
>>
>> Idling  iops   avg-lat(ms)  stddev  bw
>> ------------------------------------------------------
>> On      19320  33.043       14.068  77281KB/s
>> Off     21626  29.465       12.662  86507KB/s
>>
>> fio --name=temp --size=5G --time_based --ioengine=libaio \
>>     --randrepeat=0 --direct=1 --invalidate=1 --verify=0 \
>>     --verify_fatal=0 --rw=randread --blocksize=4k --group_reporting=1 \
>>     --filename=/fio_data --runtime=10 --iodepth=64 --numjobs=10
>>
>> Reviewed-by: Nauman Rafique
>> Signed-off-by: Tahsin Erdogan
>> ---
>>  block/cfq-iosched.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
>> index 5da8e6e..402be01 100644
>> --- a/block/cfq-iosched.c
>> +++ b/block/cfq-iosched.c
>> @@ -4460,7 +4460,7 @@ static int cfq_init_queue(struct request_queue *q, struct elevator_type *e)
>>  	cfqd->cfq_slice[1] = cfq_slice_sync;
>>  	cfqd->cfq_target_latency = cfq_target_latency;
>>  	cfqd->cfq_slice_async_rq = cfq_slice_async_rq;
>> -	cfqd->cfq_slice_idle = cfq_slice_idle;
>> +	cfqd->cfq_slice_idle = blk_queue_nonrot(q) ? 0 : cfq_slice_idle;
>>  	cfqd->cfq_group_idle = cfq_group_idle;
>>  	cfqd->cfq_latency = 1;
>>  	cfqd->hw_tag = -1;
>> --
>> 2.2.0.rc0.207.ga3a616c
>>
>
> Ping...

Trying once more..