Date: Thu, 6 Oct 2016 12:57:36 -0700
From: Shaohua Li
To: Paolo Valente
CC: Tejun Heo, Vivek Goyal, Jens Axboe, Mark Brown, Linus Walleij, Ulf Hansson
Subject: Re: [PATCH V3 00/11] block-throttle: add .high limit
Message-ID: <20161006195533.GA20511@shli-mbp.local>
In-Reply-To: <5699035C-6DC3-497A-9D7A-A4E43D17C3CD@unimore.it>
On Thu, Oct 06, 2016 at 09:58:44AM +0200, Paolo Valente wrote:
> 
> > On 5 Oct 2016, at 22:46, Shaohua Li wrote:
> > 
> > On Wed, Oct 05, 2016 at 09:47:19PM +0200, Paolo Valente wrote:
> >> 
> >>> On 5 Oct 2016, at 20:30, Shaohua Li wrote:
> >>> 
> >>> On Wed, Oct 05, 2016 at 10:49:46AM -0400, Tejun Heo wrote:
> >>>> Hello, Paolo.
> >>>> 
> >>>> On Wed, Oct 05, 2016 at 02:37:00PM +0200, Paolo Valente wrote:
> >>>>> In this respect, for your generic, unpredictable scenario to make
> >>>>> sense, there must exist at least one real system that meets the
> >>>>> requirements of such a scenario. Or, if such a real system does not
> >>>>> yet exist, it must be possible to emulate it. If it is impossible to
> >>>>> achieve this last goal either, then I fail to see the usefulness of
> >>>>> looking for solutions for such a scenario.
> >>>>> 
> >>>>> That said, let's define the instance(s) of the scenario that you find
> >>>>> most representative, and let's test BFQ on it/them. Numbers will give
> >>>>> us the answers. For example, what about all or part of the following
> >>>>> groups:
> >>>>> . one cyclically doing random I/O for some seconds and then sequential
> >>>>> I/O for the next seconds
> >>>>> . one doing, say, quasi-sequential I/O in ON/OFF cycles
> >>>>> . one starting an application cyclically
> >>>>> . one playing back or streaming a movie
> >>>>> 
> >>>>> For each group, we could then measure the time needed to complete each
> >>>>> phase of I/O in each cycle, plus the responsiveness in the group
> >>>>> starting an application, plus the frame drop in the group streaming
> >>>>> the movie. In addition, we can measure the bandwidth/iops enjoyed by
> >>>>> each group, plus, of course, the aggregate throughput of the whole
> >>>>> system. In particular we could compare results with throttling, BFQ,
> >>>>> and CFQ.
> >>>>> 
> >>>>> Then we could write the resulting numbers in stone, and stick to them
> >>>>> until something proves them wrong.
> >>>>> 
> >>>>> What do you (or others) think about it?
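
As an illustration, the "quasi-sequential I/O in ON/OFF cycles" group proposed
above could be approximated with a fio job along these lines (a sketch only;
the job name, block size, queue depth and think-time values are illustrative
choices, not numbers taken from this thread):

  [onoff-quasi-seq]
  ioengine=libaio
  direct=1
  rw=read
  bs=64k
  iodepth=4
  thinktime=2000000       ; pause ~2 s (thinktime is in microseconds) ...
  thinktime_blocks=1000   ; ... after every 1000 blocks, giving an ON/OFF cycle
  runtime=60
  time_based=1
  filename=/dev/sdb
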
> >>>> 
> >>>> That sounds great and yeah it's lame that we didn't start with that.
> >>>> Shaohua, would it be difficult to compare how bfq performs against
> >>>> blk-throttle?
> >>> 
> >>> I had a test of BFQ.
> >> 
> >> Thank you very much for testing BFQ!
> >> 
> >>> I'm using the BFQ found at
> >>> http://algogroup.unimore.it/people/paolo/disk_sched/sources.php.
> >>> The version is 4.7.0-v8r3.
> >> 
> >> That's the latest stable version. The development version [1] already
> >> contains further improvements for fairness, latency and throughput.
> >> It is however still a release candidate.
> >> 
> >> [1] https://github.com/linusw/linux-bfq/tree/bfq-v8
> >> 
> >>> It's an LSI SSD, queue depth 32. I use the default settings. The fio
> >>> script is:
> >>> 
> >>> [global]
> >>> ioengine=libaio
> >>> direct=1
> >>> readwrite=randread
> >>> bs=4k
> >>> runtime=60
> >>> time_based=1
> >>> file_service_type=random:36
> >>> overwrite=1
> >>> thread=0
> >>> group_reporting=1
> >>> filename=/dev/sdb
> >>> iodepth=1
> >>> numjobs=8
> >>> 
> >>> [groupA]
> >>> prio=2
> >>> 
> >>> [groupB]
> >>> new_group
> >>> prio=6
> >>> 
> >>> I'll change iodepth, numjobs and prio in the different tests. The
> >>> result unit is MB/s.
> >>> 
> >>> iodepth=1 numjobs=1 prio 4:4
> >>> CFQ: 28:28      BFQ: 21:21    deadline: 29:29
> >>> 
> >>> iodepth=8 numjobs=1 prio 4:4
> >>> CFQ: 162:162    BFQ: 102:98   deadline: 205:205
> >>> 
> >>> iodepth=1 numjobs=8 prio 4:4
> >>> CFQ: 157:157    BFQ: 81:92    deadline: 196:197
> >>> 
> >>> iodepth=1 numjobs=1 prio 2:6
> >>> CFQ: 26.7:27.6  BFQ: 20:6     deadline: 29:29
> >>> 
> >>> iodepth=8 numjobs=1 prio 2:6
> >>> CFQ: 166:174    BFQ: 139:72   deadline: 202:202
> >>> 
> >>> iodepth=1 numjobs=8 prio 2:6
> >>> CFQ: 148:150    BFQ: 90:77    deadline: 198:197
> >>> 
> >>> CFQ isn't fair at all. BFQ is very good on this side, but has poor
> >>> throughput even when prio is the default value.
> >>> 
> >> 
> >> Throughput is lower with BFQ for two reasons.
> >> 
> >> First, you certainly left low_latency in its default state, i.e., on.
> >> As explained, e.g., here [2], low_latency mode is totally geared
> >> towards maximum responsiveness and minimum latency for soft real-time
> >> applications (e.g., video players). To achieve this goal, BFQ is
> >> willing to perform more idling, when necessary. This lowers
> >> throughput (I'll get back to this at the end of the discussion of the
> >> second reason).
> > 
> > Changing low_latency to 0 doesn't seem to change anything, at least for
> > this test: iodepth=1 numjobs=1 prio 2:6 A bs 4k:64k
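
For reference, this is the knob in question. A minimal sketch of toggling it,
assuming a kernel carrying the out-of-tree BFQ v8 elevator, which exposes
low_latency under the usual iosched sysfs directory while it is the active
scheduler (the exact path and defaults may differ between BFQ releases):

  echo bfq > /sys/block/sdb/queue/scheduler           # select BFQ for the test disk
  cat /sys/block/sdb/queue/iosched/low_latency        # 1 = on (the default)
  echo 0 > /sys/block/sdb/queue/iosched/low_latency   # turn the low-latency heuristics off
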
> >> The second, most important reason is that a minimum of idling is the
> >> *only* way to achieve differentiated bandwidth distribution, as you
> >> requested by setting different ioprios. I stress that this constraint
> >> is not a technological accident, but an intrinsic, logical necessity.
> >> The proof is simple, and if the following explanation is too boring or
> >> confusing, I can show it to you with any trace of sync I/O.
> >> 
> >> First, to provide differentiated service, you need per-process
> >> scheduling, i.e., schedulers in which there is a separate queue
> >> associated with each process. Now, let A be the process with higher
> >> weight (ioprio), and B the process with lower weight. Both processes
> >> are sync, thus, by definition, they issue requests as follows: a few
> >> requests (probably two, or a little bit more with larger iodepth),
> >> then a little break to wait for request completion, then the next
> >> small batch and so on. For each process, the queue associated with
> >> the process (in the scheduler) is necessarily empty during the break.
> >> As a consequence, if there is no idling, then every time A reaches its
> >> break, the scheduler has only the option to switch to B (which is
> >> extremely likely to have pending requests).
> >> 
> >> The service pattern of the processes then unavoidably becomes:
> >> 
> >> A B A B A B ...
> >> 
> >> where each letter represents a full small batch served for the
> >> process. That is, 50% of the bw for each process, and complete loss
> >> of control over the desired bandwidth distribution.
> >> 
> >> So, to sum up, the reason why BFQ achieves a lower total bw is that it
> >> behaves in the only correct way to respect weights with sync I/O,
> >> i.e., it performs a little idling. If low_latency is on, then BFQ
> >> increases idling further, and this may have caused further bw loss
> >> in your test (but this varies greatly with devices, so you can
> >> discover it only by trying).
> >> 
> >> The bottom line is that if you do want to achieve differentiation with
> >> sync I/O, you have to pay a price in terms of bw, because of idling.
> >> Actually, the recent preemption mechanism that I have introduced in
> >> BFQ is proving so effective in preserving differentiation that I'm
> >> tempted to try an almost-idling-free solution. A little accuracy would
> >> however have to be sacrificed. Anyway, this is still work in progress.
> > 
> > Yep, I fully understand why idling is required here. As long as the
> > workload io depth is lower than the queue io depth, idling is the only
> > way to maintain fairness. This is the core of CFQ, and I bet the same
> > is true for BFQ. Unfortunately idling the disk harms throughput too
> > much, especially for high-end SSDs.
> > 
> 
> Then I'm afraid I have to give you very bad news: bw limiting causes
> the same throughput loss. You can see it from your very own tests. Here
> is one of your results with BFQ (one that is likely to have been
> affected less by the fact that you left low_latency on, or by further
> issues that I may not yet have addressed thoroughly):
> 
> iodepth=8 numjobs=1 prio 2:6
> CFQ: 166:174  BFQ: 139:72  deadline: 202:202
> 
> Here is, instead, your test with bw limitation:
> iodepth=8 numjobs=1 prio 2:6, group A has a 50M/s limit
> CFQ: 51:207  BFQ: 51:45  deadline: 51:216
> 
> From the first test, you see that the total bw achievable by the
> device is at least 404MB/s. But in the second test you get at most
> 267MB/s, with deadline. In this respect, the total bw achieved by BFQ
> in the first test is 211MB/s.
> 
> So, both throttling and proportional share need to waste bw; BFQ
> loses about 13% more of the total bw.

I don't think this calculation is correct. With iodepth 8 and a 4k request
size, the workload can only dispatch 216M/s. Even if group A doesn't
dispatch any IO, group B can only dispatch 216M/s. So deadline doesn't
waste any bw. CFQ wastes 216 - 207, while BFQ wastes 216 - 45. That's the
problem.
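
Spelling the same arithmetic out (a trivial sketch; the 216 MB/s ceiling is
the single-job figure assumed in the paragraph above, i.e., what one
iodepth=8, bs=4k job reaches under deadline):

  ceiling=216                                      # MB/s reachable by one job on this disk
  echo "deadline wastes $((ceiling - 216)) MB/s"   # 0 MB/s   (~0%)
  echo "CFQ wastes      $((ceiling - 207)) MB/s"   # 9 MB/s   (~4%)
  echo "BFQ wastes      $((ceiling - 45)) MB/s"    # 171 MB/s (~79%)
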
> In return, it gives you incomparably better bw and latency guarantees,
> while allowing you to configure your system with zero or minimal
> effort. In contrast, using bw limits to properly configure a common
> system like, e.g., a large file server, may become a nightmare for a
> sysadmin. For example, if, in the simplest case, she/he configures
> limits for the worst case, then per-client limits will have to be
> extremely low. But the system is large and dynamic, so the actual
> number of clients and the actual bw consumed by each client will vary
> without a break, even in the short-medium term. The bw redistribution
> heuristics do not give any provable guarantee on the accuracy of bw
> redistribution. The result will likely be highly varying client
> bandwidths, with unlucky clients unjustly confined to low limits, and
> then experiencing high latencies. The latter will be further
> emphasized by the intrinsically bursty nature of throttling.

I don't disagree here. The bw/iops throttling is not easy to configure.
It's a kind of low-level configuration; people need to know the workload
very well to configure it. If we have a way to do proportional scheduling,
everybody will cheer up. The goal of the tests is to check whether
proportional scheduling is feasible; the results aren't very optimistic so
far. No, I'm not saying BFQ isn't good. It is much fairer than CFQ. I
suppose it would work well for desktop workloads.

> In addition, the scenario in your tests is the worst case for a
> proportional share solution: in a generic system, such as the file
> server above, part of the workload is likely to be sequential or
> quasi-sequential (at least over medium-length time intervals), and this
> is enough to get very close to peak bw with a proportional-share
> scheduler. No configuration needed. With bw throttling, you must do
> the math very well to get peak bw all the time in a dynamic system.

I'm afraid this is not true. Workloads seldom fully utilize the bandwidth
of a high-end SSD. At least that's true here. My tests are actually pretty
normal; I didn't do weird tests at all.

Thanks,
Shaohua
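
For reference, the 50M/s clamp on group A used in the throttled runs above
corresponds to a blk-throttle setup along these lines (a sketch only; it
assumes the legacy cgroup-v1 blkio controller mounted at /sys/fs/cgroup/blkio
and /dev/sdb at major:minor 8:16, so adjust for the actual device):

  mkdir /sys/fs/cgroup/blkio/groupA
  # 50 MB/s read limit on /dev/sdb (8:16), expressed in bytes per second
  echo "8:16 52428800" > /sys/fs/cgroup/blkio/groupA/blkio.throttle.read_bps_device
  # run the group-A fio jobs from a shell placed in the throttled cgroup
  echo $$ > /sys/fs/cgroup/blkio/groupA/cgroup.procs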