Subject: Re: io.latency controller apparently not working
From: Paolo Valente
Date: Mon, 19 Aug 2019 18:41:06 +0200
To: Josef Bacik
Cc: linux-block, linux-kernel, Jens Axboe, Vincent Guittot,
    noreply-spamdigest via bfq-iosched, Tejun Heo
In-Reply-To: <88488E82-E360-45B0-B010-209190D32892@linaro.org>
References: <22878C62-54B8-41BA-B90C-1C5414F3060F@linaro.org>
 <20190816132124.ggedqxrhi5povqlo@macbook-pro-91.dhcp.thefacebook.com>
 <1842D618-3E31-47FE-8B9C-F26BF1F5349C@linaro.org>
 <20190816175931.cxpdko44cuyq7trj@MacBook-Pro-91.local>
 <88488E82-E360-45B0-B010-209190D32892@linaro.org>
X-Mailing-List: linux-kernel@vger.kernel.org

> On 16 Aug 2019, at 20:17, Paolo Valente wrote:
> 
> 
> 
>> On 16 Aug 2019, at 19:59, Josef Bacik wrote:
>> 
>> On Fri, Aug 16, 2019 at 07:52:40PM +0200, Paolo Valente wrote:
>>> 
>>> 
>>>> On 16 Aug 2019, at 15:21, Josef Bacik wrote:
>>>> 
>>>> On Fri, Aug 16, 2019 at 12:57:41PM +0200, Paolo Valente wrote:
>>>>> Hi,
>>>>> I happened to test the io.latency controller, to make a comparison
>>>>> between this controller and BFQ. But io.latency seems not to work,
>>>>> i.e., not to reduce latency compared with what happens with no I/O
>>>>> control at all. Here is a summary of the results for one of the
>>>>> workloads I tested, on three different devices (latencies in ms):
>>>>> 
>>>>>              no I/O control   io.latency   BFQ
>>>>> NVMe SSD     1.9              1.9          0.07
>>>>> SATA SSD     39               56           0.7
>>>>> HDD          4500             4500         11
>>>>> 
>>>>> I have put all details on hardware, OS, scenarios and results in the
>>>>> attached pdf. For your convenience, I'm pasting the source file too.
>>>>> 
>>>> 
>>>> Do you have the fio jobs you use for this?
>>> 
>>> The script mentioned in the draft (executed with the command line
>>> reported in the draft) runs one fio instance for the target
>>> process, and one fio instance for each interferer. I couldn't do with
>>> just one fio instance executing all jobs, because the weight parameter
>>> doesn't work in fio job files for some reason, and because the ioprio
>>> class cannot be set for individual jobs.
>>> 
>>> In particular, the script generates a job with the following
>>> parameters for the target process:
>>> 
>>> ioengine=sync
>>> loops=10000
>>> direct=0
>>> readwrite=randread
>>> fdatasync=0
>>> bs=4k
>>> thread=0
>>> filename=/mnt/scsi_debug/largefile_interfered0
>>> iodepth=1
>>> numjobs=1
>>> invalidate=1
>>> 
>>> and a job with the following parameters for each of the interferers,
>>> in the case, e.g., of a workload made of reads:
>>> 
>>> ioengine=sync
>>> direct=0
>>> readwrite=read
>>> fdatasync=0
>>> bs=4k
>>> filename=/mnt/scsi_debug/largefileX
>>> invalidate=1
>>> 
>>> Should you fail to reproduce this issue by creating groups, setting
>>> latencies and starting fio jobs manually, what if you try by just
>>> executing my script? Maybe this could help us spot the culprit more
>>> quickly.
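For concreteness, creating the groups, setting a latency target and
launching the two kinds of fio jobs by hand could look roughly like the
bash sketch below. It is not the script from the draft: the group names,
the 8:0 device number, the 2000-microsecond target and the use of fio
command-line options instead of generated job files are all assumptions
made purely for illustration.

  #!/bin/bash
  # Rough sketch only, assuming cgroup v2 mounted at /sys/fs/cgroup and a
  # test drive with major:minor 8:0; group names and the latency target
  # are invented for the example.
  echo "+io" > /sys/fs/cgroup/cgroup.subtree_control   # let child groups use the io controller
  mkdir -p /sys/fs/cgroup/interfered /sys/fs/cgroup/interferers

  # Protect the target group only; the format, per
  # Documentation/admin-guide/cgroup-v2.rst, is "MAJOR:MINOR target=<usec>".
  echo "8:0 target=2000" > /sys/fs/cgroup/interfered/io.latency

  # Target process: sync 4k random reads, same parameters as the job file
  # above, passed as fio command-line options.
  ( echo $BASHPID > /sys/fs/cgroup/interfered/cgroup.procs
    exec fio --name=interfered --ioengine=sync --rw=randread --bs=4k \
         --direct=0 --loops=10000 --iodepth=1 --numjobs=1 --invalidate=1 \
         --filename=/mnt/scsi_debug/largefile_interfered0 ) &

  # One sequential-read interferer in its own group; "largefile1" stands in
  # for one of the largefileX files mentioned above.
  ( echo $BASHPID > /sys/fs/cgroup/interferers/cgroup.procs
    exec fio --name=interferer1 --ioengine=sync --rw=read --bs=4k \
         --direct=0 --invalidate=1 --filename=/mnt/scsi_debug/largefile1 ) &

  wait

Whether those reads actually reach the device (buffered page-cache hits
versus O_DIRECT) is exactly the question the rest of the thread turns on.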
>> Ah ok, you are doing it on a mountpoint.
> 
> Yep
> 
>> Are you using btrfs?
> 
> ext4
> 
>> Cause otherwise
>> you are going to have a sad time.
> 
> Could you elaborate more on this? I/O seems to be controllable on ext4.
> 
>> The other thing is you are using buffered,
> 
> Actually, the problem is suffered by sync random reads, which always
> hit the disk in this test.
> 
>> which may or may not hit the disk. This is what I use to test io.latency
>> 
>> https://patchwork.kernel.org/patch/10714425/
>> 
>> I had to massage it since it didn't apply directly, but running this against the
>> actual block device, with O_DIRECT so I'm sure to be measuring the actual impact
>> of the controller, it all works out fine.
> 
> I'm not getting why non-direct sync reads, or buffered writes, should
> be uncontrollable. As a trivial example, BFQ in these tests controls
> I/O as expected, and keeps latency extremely low.
> 
> What am I missing?
> 
While waiting for your answer, I have also added the direct-I/O case
to my test. This new case, too, can now be reproduced with the command
line reported in the draft.

Even with direct I/O, nothing changes with writers as interferers,
apart from the latency becoming at least equal to that of the
no-I/O-control case for the HDD. Summing up, with writers as
interferers (latency in ms):

             no I/O control   io.latency   BFQ
NVMe SSD     3                3            0.2
SATA SSD     3                3            0.2
HDD          56               56           13

In contrast, there are important improvements on the SSDs in the case
of readers as interferers. This is the new situation (latency still
in ms):

             no I/O control   io.latency   BFQ
NVMe SSD     1.9              0.08         0.07
SATA SSD     39               0.2          0.7
HDD          4500             118          11

Thanks,
Paolo

> Thanks,
> Paolo
> 
>> Thanks,
>> 
>> Josef
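The thread does not spell out the job parameters used for the new
direct-I/O runs; presumably the target job is just the buffered one
quoted above with direct=1 turned on, along the lines of the following
guess, which reuses the hypothetical command-line form of the earlier
sketch and matches Josef's point about using O_DIRECT to be sure the
controller's impact is really being measured.

  # Guess at the direct-I/O variant of the target job: identical to the
  # buffered job except for direct=1, so every read is issued to the device.
  fio --name=interfered --ioengine=sync --rw=randread --bs=4k --direct=1 \
      --loops=10000 --iodepth=1 --numjobs=1 --invalidate=1 \
      --filename=/mnt/scsi_debug/largefile_interfered0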