Date: Fri, 15 Apr 2016 18:45:23 -0400
From: Tejun Heo
To: Paolo Valente
Cc: Jens Axboe, Fabio Checconi, Arianna Avanzini, linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org, Ulf Hansson, Linus Walleij, Mark Brown
Subject: Re: [PATCH RFC 09/22] block, cfq: replace CFQ with the BFQ-v0 I/O scheduler
Message-ID: <20160415224523.GM12583@htj.duckdns.org>

Hello, Paolo.

On Sat, Apr 16, 2016 at 12:08:44AM +0200, Paolo Valente wrote:
> Maybe the source of confusion is the fact that a simple sector-based,
> proportional share scheduler always distributes total bandwidth
> according to weights. The catch is the additional BFQ rule: random
> workloads get only time isolation, and are charged for full budgets,
> so as to not affect the schedule of quasi-sequential workloads. So,
> the correct claim for BFQ is that it distributes total bandwidth
> according to weights (only) when all competing workloads are
> quasi-sequential. If some workloads are random, then these workloads
> are just time scheduled. This does break proportional-share bandwidth
> distribution with mixed workloads, but, much more importantly, saves
> both total throughput and individual bandwidths of quasi-sequential
> workloads.
>
> We could then check whether I did succeed in tuning timeouts and
> budgets so as to achieve the best tradeoffs. But this is probably a
> second-order problem as of now.

Ah, I see. Yeah, that clears it up for me. I'm gonna play with cgroup
settings and see how it actually behaves.

Thanks for your patience. :)

-- 
tejun
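
To make the quoted charging rule concrete, here is a minimal, hypothetical C
sketch; it is not actual BFQ code, and the queue names, weights, and sector
counts are made up for illustration. The idea it shows is only the one Paolo
describes: a quasi-sequential queue is charged for the sectors it actually
transferred, while a random (seeky) queue that exhausts its time slice is
charged for its full budget.

/*
 * Hypothetical sketch of the BFQ charging rule described above.
 * Not actual kernel code; names and numbers are illustrative only.
 */
#include <stdio.h>
#include <stdbool.h>

struct queue {
	const char *name;
	unsigned int weight;     /* proportional-share weight */
	bool seeky;              /* random workload? */
	unsigned long served;    /* sectors actually transferred this slice */
	unsigned long budget;    /* sectors assigned for this slice */
	unsigned long charged;   /* sectors charged to the queue's service */
};

int main(void)
{
	struct queue qs[] = {
		/* quasi-sequential reader: consumes its whole budget */
		{ "seq",  100, false, 16384, 16384, 0 },
		/* random reader: times out after transferring little */
		{ "rand", 100, true,    512, 16384, 0 },
	};

	for (int i = 0; i < 2; i++) {
		struct queue *q = &qs[i];
		/*
		 * Quasi-sequential queues are charged what they actually
		 * consumed, so their bandwidth tracks the weights.  Seeky
		 * queues are charged the full budget, so they get only
		 * time isolation and cannot distort the schedule of the
		 * sequential queues.
		 */
		q->charged = q->seeky ? q->budget : q->served;
		printf("%s: served %lu sectors, charged %lu\n",
		       q->name, q->served, q->charged);
	}
	return 0;
}

With equal weights, the sequential queue is accounted in proportion to the
bandwidth it actually received, while the random queue is accounted as if it
had consumed a full budget; that is why mixed workloads break strict
proportional-share bandwidth distribution but preserve throughput for the
quasi-sequential queues.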