Subject: Re: [PATCH BUGFIX V2] block, bfq: update wr_busy_queues if needed on a queue split
From: Jens Axboe
To: Paolo Valente
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, ulf.hansson@linaro.org, broonie@kernel.org
Date: Tue, 27 Jun 2017 12:29:10 -0600
Message-ID: <3b5987e2-fa11-af94-27f4-5760612c0f22@kernel.dk>
In-Reply-To: <4AFF2E52-DCE4-4DC7-9CB0-849EEED3A9AB@linaro.org>
References: <20170619114316.2587-1-paolo.valente@linaro.org> <8520D3AF-C161-439F-A7E8-A6B7202DA2D9@linaro.org> <4AFF2E52-DCE4-4DC7-9CB0-849EEED3A9AB@linaro.org>

On 06/27/2017 12:27 PM, Paolo Valente wrote:
> 
>> On 27 Jun 2017, at 16:41, Jens Axboe wrote:
>> 
>> On 06/27/2017 12:09 AM, Paolo Valente wrote:
>>> 
>>>> On 19 Jun 2017, at 13:43, Paolo Valente wrote:
>>>> 
>>>> This commit fixes a bug triggered by a non-trivial sequence of
>>>> events. These events are briefly described in the next two
>>>> paragraphs. The impatient, or those who are familiar with queue
>>>> merging and splitting, can jump directly to the last paragraph.
>>>> 
>>>> On each I/O-request arrival for a shared bfq_queue, i.e., for a
>>>> bfq_queue that is the result of the merge of two or more bfq_queues,
>>>> BFQ checks whether the shared bfq_queue has become seeky (i.e.,
>>>> whether too many random I/O requests have arrived for the bfq_queue;
>>>> if the device is non-rotational, then random requests must also be
>>>> small for the bfq_queue to be tagged as seeky). If the shared
>>>> bfq_queue is actually detected as seeky, then a split occurs: the
>>>> bfq I/O context of the process that has issued the request is
>>>> redirected from the shared bfq_queue to a new non-shared bfq_queue.
>>>> As a degenerate case, if the shared bfq_queue actually happens to be
>>>> shared only by one process (because of previous splits), then no new
>>>> bfq_queue is created: the state of the shared bfq_queue is just
>>>> changed from shared to non-shared.
>>>> 
>>>> Regardless of whether a brand new non-shared bfq_queue is created,
>>>> or the pre-existing shared bfq_queue is just turned into a
>>>> non-shared bfq_queue, several parameters of the non-shared bfq_queue
>>>> are set (restored) to the original values they had when the
>>>> bfq_queue associated with the bfq I/O context of the process (that
>>>> has just issued an I/O request) was merged with the shared
>>>> bfq_queue. One of these parameters is the weight-raising state.
>>>> 
>>>> If, on the split of a shared bfq_queue,
>>>> 1) a pre-existing shared bfq_queue is turned into a non-shared
>>>> bfq_queue;
>>>> 2) the previously shared bfq_queue happens to be busy;
>>>> 3) the weight-raising state of the previously shared bfq_queue
>>>> happens to change;
>>>> the number of weight-raised busy queues changes.
>>>> The field wr_busy_queues must then be updated accordingly, but such
>>>> an update was missing. This commit adds the missing update.
>>>> 
>>> 
>>> Hi Jens,
>>> any idea of the possible fate of this fix?
>> 
>> I sort of missed this one. It looks trivial enough for 4.12, or we
>> can defer until 4.13. What do you think?
>> 
> 
> It should actually be trivial, and hopefully correct: a further
> throughput improvement (for BFQ), which depends on this fix, is now
> working properly, and we haven't seen any regression so far. In
> addition, since this improvement is virtually ready for submission,
> further steps will probably be easier if this fix gets in sooner
> (whatever the fate of the improvement turns out to be).

OK, let's queue it up for 4.13 then.

-- 
Jens Axboe
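
For readers not familiar with the BFQ internals, the accounting described
in the quoted commit message boils down to something like the sketch
below. This is an illustration only, not the actual patch: the structures
are stripped-down stand-ins, and account_wr_busy_on_split() is a
hypothetical helper; only the field names (wr_busy_queues, wr_coeff) and
the busy/weight-raised conditions follow the description above.

#include <stdbool.h>

/* Stripped-down stand-ins for the BFQ structures (illustration only). */
struct bfq_data {
	unsigned int wr_busy_queues;	/* number of busy, weight-raised queues */
};

struct bfq_queue {
	struct bfq_data *bfqd;		/* per-device data this queue belongs to */
	unsigned int wr_coeff;		/* > 1 iff the queue is weight-raised */
	bool busy;			/* stand-in for the queue's "busy" flag */
};

/*
 * Hypothetical helper: invoked when a split turns a previously shared
 * bfq_queue into a non-shared one and its weight-raising state is
 * restored. If the queue is busy and its weight-raising state changed
 * across the restore, the device-wide counter must follow; this is the
 * kind of update the commit message says was missing.
 */
static void account_wr_busy_on_split(struct bfq_queue *bfqq,
				     unsigned int old_wr_coeff)
{
	if (!bfqq->busy)
		return;

	if (old_wr_coeff == 1 && bfqq->wr_coeff > 1)
		bfqq->bfqd->wr_busy_queues++;	/* queue became weight-raised */
	else if (old_wr_coeff > 1 && bfqq->wr_coeff == 1)
		bfqq->bfqd->wr_busy_queues--;	/* queue lost weight-raising */
}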