Message-ID: <55724AFB.70800@kernel.dk>
Date: Fri, 05 Jun 2015 19:20:59 -0600
From: Jens Axboe
To: Tahsin Erdogan, Vivek Goyal, tytso@mit.edu
CC: linux-kernel@vger.kernel.org, Nauman Rafique
Subject: Re: [PATCH] block: Make CFQ default to IOPS mode on SSDs
References: <1432068921-17184-1-git-send-email-tahsin@google.com>
List-ID: linux-kernel@vger.kernel.org

On 06/05/2015 04:58 PM, Tahsin Erdogan wrote:
> On Wed, May 27, 2015 at 1:14 PM, Tahsin Erdogan wrote:
>> On Tue, May 19, 2015 at 1:55 PM, Tahsin Erdogan wrote:
>>> CFQ idling reduces IOPS throughput on non-rotational disks. Since
>>> disk head seeking does not apply to SSDs, idling does not help
>>> performance by anticipating nearby future IO requests.
>>>
>>> By turning off idling (and switching to IOPS mode), we allow other
>>> processes to dispatch IO requests down to the driver and so increase
>>> IO throughput.
>>>
>>> The following FIO benchmark results were taken on a cloud SSD
>>> offering with idling on and off:
>>>
>>> Idling    iops   avg-lat(ms)  stddev          bw
>>> ------------------------------------------------------
>>> On        7054        90.107  38.697   28217KB/s
>>> Off      29255        21.836  11.730  117022KB/s
>>>
>>> fio --name=temp --size=100G --time_based --ioengine=libaio \
>>>   --randrepeat=0 --direct=1 --invalidate=1 --verify=0 \
>>>   --verify_fatal=0 --rw=randread --blocksize=4k --group_reporting=1 \
>>>   --filename=/dev/sdb --runtime=10 --iodepth=64 --numjobs=10
>>>
>>> And the following is from a local SSD run:
>>>
>>> Idling    iops   avg-lat(ms)  stddev          bw
>>> ------------------------------------------------------
>>> On       19320        33.043  14.068   77281KB/s
>>> Off      21626        29.465  12.662   86507KB/s
>>>
>>> fio --name=temp --size=5G --time_based --ioengine=libaio \
>>>   --randrepeat=0 --direct=1 --invalidate=1 --verify=0 \
>>>   --verify_fatal=0 --rw=randread --blocksize=4k --group_reporting=1 \
>>>   --filename=/fio_data --runtime=10 --iodepth=64 --numjobs=10
>>>
>>> Reviewed-by: Nauman Rafique
>>> Signed-off-by: Tahsin Erdogan
>>> ---
>>>  block/cfq-iosched.c | 2 +-
>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
>>> index 5da8e6e..402be01 100644
>>> --- a/block/cfq-iosched.c
>>> +++ b/block/cfq-iosched.c
>>> @@ -4460,7 +4460,7 @@ static int cfq_init_queue(struct request_queue *q, struct elevator_type *e)
>>>  	cfqd->cfq_slice[1] = cfq_slice_sync;
>>>  	cfqd->cfq_target_latency = cfq_target_latency;
>>>  	cfqd->cfq_slice_async_rq = cfq_slice_async_rq;
>>> -	cfqd->cfq_slice_idle = cfq_slice_idle;
>>> +	cfqd->cfq_slice_idle = blk_queue_nonrot(q) ? 0 : cfq_slice_idle;
>>>  	cfqd->cfq_group_idle = cfq_group_idle;
>>>  	cfqd->cfq_latency = 1;
>>>  	cfqd->hw_tag = -1;
>>> --
>>> 2.2.0.rc0.207.ga3a616c
>>>
>>
>> Ping...
>
> Trying once more..

This one worked :-). I agree, it's probably the sane thing to do, I'll
apply this for 4.2.
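For reference, the decision the patch hard-codes at queue init can also be mirrored from userspace via sysfs: the `rotational` flag lives in `/sys/block/<dev>/queue/rotational` and CFQ's idle window in `/sys/block/<dev>/queue/iosched/slice_idle`. A minimal sketch of that logic (the helper name and device `sdb` are illustrative, and 8 ms assumes the stock cfq_slice_idle default of HZ/125 at HZ=1000):

```shell
#!/bin/sh
# Mirror the patch's logic: slice_idle = 0 on non-rotational (SSD)
# queues, otherwise keep the stock 8 ms idle window.
pick_slice_idle() {
    rotational="$1"   # 0 = SSD, 1 = spinning disk
    if [ "$rotational" -eq 0 ]; then
        echo 0        # IOPS mode: no idling between queue switches
    else
        echo 8        # default cfq_slice_idle (ms)
    fi
}

# On a live system with CFQ active, the inputs/outputs come from sysfs:
#   rot=$(cat /sys/block/sdb/queue/rotational)
#   pick_slice_idle "$rot" > /sys/block/sdb/queue/iosched/slice_idle

pick_slice_idle 0   # prints 0
pick_slice_idle 1   # prints 8
```

Writing `slice_idle` via sysfs takes effect immediately, which is handy for A/B-testing the fio numbers above without rebuilding the kernel; the patch just makes the SSD-friendly value the default.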
--
Jens Axboe