Date: Tue, 8 Sep 2009 22:06:20 -0400
From: Vivek Goyal
To: Fabio Checconi
Cc: Rik van Riel, Ryo Tsuruta, linux-kernel@vger.kernel.org,
	dm-devel@redhat.com, jens.axboe@oracle.com, agk@redhat.com,
	akpm@linux-foundation.org, nauman@google.com,
	guijianfeng@cn.fujitsu.com, jmoyer@redhat.com,
	balbir@linux.vnet.ibm.com
Subject: Re: Regarding dm-ioband tests
Message-ID: <20090909020620.GC3594@redhat.com>
References: <20090904231129.GA3689@redhat.com>
	<20090907.200222.193693062.ryov@valinux.co.jp>
	<4AA51065.6050000@redhat.com>
	<20090908.120119.71095369.ryov@valinux.co.jp>
	<4AA6AF58.3050501@redhat.com>
	<20090909000900.GK17468@gandalf.sssup.it>
In-Reply-To: <20090909000900.GK17468@gandalf.sssup.it>
User-Agent: Mutt/1.5.18 (2008-05-17)
List-ID: linux-kernel@vger.kernel.org

On Wed, Sep 09, 2009 at 02:09:00AM +0200, Fabio Checconi wrote:
> Hi,
>
> > From: Rik van Riel
> > Date: Tue, Sep 08, 2009 03:24:08PM -0400
> >
> > Ryo Tsuruta wrote:
> > > Rik van Riel wrote:
> > >
> > >> Are you saying that dm-ioband is purposely unfair,
> > >> until a certain load level is reached?
> > >
> > > Not unfair, dm-ioband (weight policy) is intentionally designed to
> > > use bandwidth efficiently; weight policy tries to give spare bandwidth
> > > of inactive groups to active groups.
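[Editorial note: the redistribution Ryo describes can be sketched as a simple proportional-share computation. This is a minimal illustration of the idea, not dm-ioband's actual code; the group names, weights, and the `active` flag are made up for the example.]

```python
# Toy sketch of a "weight policy" as described above: inactive groups
# get no bandwidth, and their share is redistributed to active groups
# in proportion to the active groups' weights. Not dm-ioband's real code.

def share_bandwidth(total_bw, groups):
    """groups: dict mapping name -> (weight, active).
    Returns dict mapping name -> bandwidth share."""
    active = {name: w for name, (w, a) in groups.items() if a}
    if not active:
        return {name: 0.0 for name in groups}
    total_weight = sum(active.values())
    return {name: total_bw * active.get(name, 0) / total_weight
            for name in groups}

# With every group active, bandwidth simply follows the weights:
print(share_bandwidth(100, {"a": (50, True), "b": (30, True), "c": (20, True)}))
# When "c" goes idle, its 20% is split between "a" and "b" by weight,
# so "a" rises from 50 to 62.5 and "b" from 30 to 37.5:
print(share_bandwidth(100, {"a": (50, True), "b": (30, True), "c": (20, False)}))
```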
> >
> > This sounds good, except that the lack of anticipation
> > means that a group with just one task doing reads will
> > be considered "inactive" in-between reads.
> >
> Anticipation helps in achieving fairness, but CFQ currently disables
> idling for nonrot+NCQ media, to avoid the resulting throughput loss on
> some SSDs. Are we really sure that we want to introduce anticipation
> everywhere, not only to improve throughput on rotational media, but to
> achieve fairness too?

That's a good point. Personally, I think the fairness requirements for
individual queues and for groups are a little different. CFQ in general
seems to focus more on latency and throughput, at the cost of fairness.
With groups, we probably need to put greater emphasis on group fairness.
So a group will be a relatively slower entity (with anticipation on and
more idling), but it will also give you a greater degree of isolation.
In practice, then, one will create groups carefully and they will not
proliferate the way queues do. This can mean reduced overall throughput
on SSDs.

Having said that, group idling is tunable, and one can always reduce it
to strike a balance between fairness and throughput depending on one's
needs.

Thanks
Vivek
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
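[Editorial note: the fairness-vs-throughput tradeoff Vivek describes can be shown with a toy simulation. This is not CFQ's actual algorithm; the tick-based model, the two workloads, and the `idle_window` parameter are all invented for illustration. A "streamer" always has a request queued, while a "reader" has think time between sync reads, so without idling the scheduler deems the reader's group inactive and hands the device back to the streamer.]

```python
# Toy model (made-up numbers, not CFQ): the scheduler alternates between
# two equal-weight groups. It may wait up to idle_window ticks for the
# reader's next request (anticipation) before switching back to the
# streamer. Idling preserves the reader's *share* of device time at the
# cost of leaving the device idle, i.e. lower total throughput.

def simulate(ticks, idle_window):
    """Returns (streamer_served, reader_served, device_idle_ticks)."""
    served = {"streamer": 0, "reader": 0}
    idle = 0
    reader_next = 0   # tick at which the reader's next request arrives
    t = 0
    turn = "streamer"
    while t < ticks:
        if turn == "streamer":
            served["streamer"] += 1   # streamer is always backlogged
            t += 1
            turn = "reader"
        else:
            if reader_next > t:
                wait = reader_next - t
                if wait > idle_window:
                    # No request within the idle window: give up on the
                    # "inactive" reader and switch back to the streamer.
                    turn = "streamer"
                    continue
                idle += wait          # anticipate: device sits idle
                t = reader_next
            served["reader"] += 1
            t += 1
            reader_next = t + 2       # two ticks of think time
            turn = "streamer"
    return served["streamer"], served["reader"], idle

# Without idling the streamer gets ~2/3 of the service; with a 2-tick
# idle window the two groups get roughly equal service, but the device
# spends part of its time idle.
print(simulate(300, 0))
print(simulate(300, 2))
```

Note how this matches the point in the mail: anticipation does not make the think-limited reader faster in absolute terms, it protects the reader's share of device time, and the price is idle device time, i.e. reduced aggregate throughput.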