Subject: Re: CFQ idling kills I/O performance on ext4 with blkio cgroup controller
To: Paolo Valente
Cc: linux-fsdevel@vger.kernel.org, linux-block, linux-ext4@vger.kernel.org,
 cgroups@vger.kernel.org, kernel list, Jens Axboe, Jan Kara,
 jmoyer@redhat.com, tytso@mit.edu, amakhalov@vmware.com, anishs@vmware.com,
 srivatsab@vmware.com
References: <8d72fcf7-bbb4-2965-1a06-e9fc177a8938@csail.mit.edu>
 <1812E450-14EF-4D5A-8F31-668499E13652@linaro.org>
 <46c6a4be-f567-3621-2e16-0e341762b828@csail.mit.edu>
 <07D11833-8285-49C2-943D-E4C1D23E8859@linaro.org>
From: "Srivatsa S. Bhat"
Message-ID: <238e14ff-68d1-3b21-a291-28de4f2d77af@csail.mit.edu>
Date: Mon, 20 May 2019 15:45:46 -0700
In-Reply-To: <07D11833-8285-49C2-943D-E4C1D23E8859@linaro.org>
X-Mailing-List: linux-ext4@vger.kernel.org

On 5/20/19 3:19 AM, Paolo Valente wrote:
>
>> On 18 May 2019, at 22:50, Srivatsa S. Bhat wrote:
>>
>> On 5/18/19 11:39 AM, Paolo Valente wrote:
>>> I've addressed these issues in my last batch of improvements for
>>> BFQ, which landed in the upcoming 5.2. If you give it a try, and
>>> still see the problem, then I'll be glad to reproduce it, and
>>> hopefully fix it for you.
>>
>> Hi Paolo,
>>
>> Thank you for looking into this!
>>
>> I just tried current mainline at commit 72cf0b07, but unfortunately
>> didn't see any improvement:
>>
>> dd if=/dev/zero of=/root/test.img bs=512 count=10000 oflag=dsync
>>
>> With mq-deadline, I get:
>>
>> 5120000 bytes (5.1 MB, 4.9 MiB) copied, 3.90981 s, 1.3 MB/s
>>
>> With bfq, I get:
>>
>> 5120000 bytes (5.1 MB, 4.9 MiB) copied, 84.8216 s, 60.4 kB/s
>
> Hi Srivatsa,
>
> Thanks for reproducing this on mainline. I seem to have reproduced a
> bonsai-tree version of this issue. Before digging into the block
> trace, I'd like to ask you for some feedback.
>
> First, in my test, the total throughput of the disk happens to be
> about 20 times as high as that enjoyed by dd, regardless of the I/O
> scheduler. I guess this massive overhead is normal with dsync, but
> I'd like to know whether it is about the same on your side. This
> will help me understand whether I'll actually be analyzing the same
> problem as yours.

Do you mean the throughput obtained by dd'ing directly to the block
device (bypassing the filesystem)? That does give me a 20x speedup
with bs=512, but much more with a bigger block size (reaching a
maximum throughput of about 110 MB/s):

dd if=/dev/zero of=/dev/sdc bs=512 count=10000 conv=fsync
10000+0 records in
10000+0 records out
5120000 bytes (5.1 MB, 4.9 MiB) copied, 0.15257 s, 33.6 MB/s

dd if=/dev/zero of=/dev/sdc bs=4k count=10000 conv=fsync
10000+0 records in
10000+0 records out
40960000 bytes (41 MB, 39 MiB) copied, 0.395081 s, 104 MB/s

I'm testing this on a Toshiba MG03ACA1 (1TB) hard disk.

> Second, the commands I used follow. Do they implement your test case
> correctly?
>
> [root@localhost tmp]# mkdir /sys/fs/cgroup/blkio/testgrp
> [root@localhost tmp]# echo $BASHPID > /sys/fs/cgroup/blkio/testgrp/cgroup.procs
> [root@localhost tmp]# cat /sys/block/sda/queue/scheduler
> [mq-deadline] bfq none
> [root@localhost tmp]# dd if=/dev/zero of=/root/test.img bs=512 count=10000 oflag=dsync
> 10000+0 records in
> 10000+0 records out
> 5120000 bytes (5.1 MB, 4.9 MiB) copied, 14.6892 s, 349 kB/s
> [root@localhost tmp]# echo bfq > /sys/block/sda/queue/scheduler
> [root@localhost tmp]# dd if=/dev/zero of=/root/test.img bs=512 count=10000 oflag=dsync
> 10000+0 records in
> 10000+0 records out
> 5120000 bytes (5.1 MB, 4.9 MiB) copied, 20.1953 s, 254 kB/s

Yes, this is indeed the testcase, although I see a much bigger drop in
performance with bfq compared to the results from your setup.
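
In case it helps you line up our runs, here's the same sequence
condensed into a script. This is only a rough sketch: the device name
(sdc) and the test-file path are specific to my setup, so adjust them
for yours.

#!/bin/bash
# Rough sketch of the testcase: run dd from inside a blkio cgroup and
# compare the I/O schedulers. The device name (sdc) and the test-file
# path are assumptions from my setup -- adjust before running.
DEV=sdc

# Join a blkio cgroup; dd below inherits the shell's cgroup.
mkdir -p /sys/fs/cgroup/blkio/testgrp
echo $$ > /sys/fs/cgroup/blkio/testgrp/cgroup.procs

for sched in mq-deadline bfq; do
    echo "$sched" > /sys/block/$DEV/queue/scheduler
    echo "=== $sched ==="
    # dd prints its stats to stderr; keep only the throughput line.
    dd if=/dev/zero of=/root/test.img bs=512 count=10000 oflag=dsync 2>&1 | tail -n 1
done

The dd runs in the same shell that joined the cgroup, which is what
puts the I/O under the blkio controller in the first place.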
Bhat" Message-ID: <238e14ff-68d1-3b21-a291-28de4f2d77af@csail.mit.edu> Date: Mon, 20 May 2019 15:45:46 -0700 User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.13; rv:60.0) Gecko/20100101 Thunderbird/60.6.1 MIME-Version: 1.0 In-Reply-To: <07D11833-8285-49C2-943D-E4C1D23E8859@linaro.org> Content-Type: text/plain; charset=windows-1252 Content-Language: en-US Content-Transfer-Encoding: 7bit Sender: linux-ext4-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-ext4@vger.kernel.org On 5/20/19 3:19 AM, Paolo Valente wrote: > > >> Il giorno 18 mag 2019, alle ore 22:50, Srivatsa S. Bhat ha scritto: >> >> On 5/18/19 11:39 AM, Paolo Valente wrote: >>> I've addressed these issues in my last batch of improvements for BFQ, >>> which landed in the upcoming 5.2. If you give it a try, and still see >>> the problem, then I'll be glad to reproduce it, and hopefully fix it >>> for you. >>> >> >> Hi Paolo, >> >> Thank you for looking into this! >> >> I just tried current mainline at commit 72cf0b07, but unfortunately >> didn't see any improvement: >> >> dd if=/dev/zero of=/root/test.img bs=512 count=10000 oflag=dsync >> >> With mq-deadline, I get: >> >> 5120000 bytes (5.1 MB, 4.9 MiB) copied, 3.90981 s, 1.3 MB/s >> >> With bfq, I get: >> 5120000 bytes (5.1 MB, 4.9 MiB) copied, 84.8216 s, 60.4 kB/s >> > > Hi Srivatsa, > thanks for reproducing this on mainline. I seem to have reproduced a > bonsai-tree version of this issue. Before digging into the block > trace, I'd like to ask you for some feedback. > > First, in my test, the total throughput of the disk happens to be > about 20 times as high as that enjoyed by dd, regardless of the I/O > scheduler. I guess this massive overhead is normal with dsync, but > I'd like know whether it is about the same on your side. This will > help me understand whether I'll actually be analyzing about the same > problem as yours. > Do you mean to say the throughput obtained by dd'ing directly to the block device (bypassing the filesystem)? That does give me a 20x speedup with bs=512, but much more with a bigger block size (achieving a max throughput of about 110 MB/s). dd if=/dev/zero of=/dev/sdc bs=512 count=10000 conv=fsync 10000+0 records in 10000+0 records out 5120000 bytes (5.1 MB, 4.9 MiB) copied, 0.15257 s, 33.6 MB/s dd if=/dev/zero of=/dev/sdc bs=4k count=10000 conv=fsync 10000+0 records in 10000+0 records out 40960000 bytes (41 MB, 39 MiB) copied, 0.395081 s, 104 MB/s I'm testing this on a Toshiba MG03ACA1 (1TB) hard disk. > Second, the commands I used follow. Do they implement your test case > correctly? > > [root@localhost tmp]# mkdir /sys/fs/cgroup/blkio/testgrp > [root@localhost tmp]# echo $BASHPID > /sys/fs/cgroup/blkio/testgrp/cgroup.procs > [root@localhost tmp]# cat /sys/block/sda/queue/scheduler > [mq-deadline] bfq none > [root@localhost tmp]# dd if=/dev/zero of=/root/test.img bs=512 count=10000 oflag=dsync > 10000+0 record dentro > 10000+0 record fuori > 5120000 bytes (5,1 MB, 4,9 MiB) copied, 14,6892 s, 349 kB/s > [root@localhost tmp]# echo bfq > /sys/block/sda/queue/scheduler > [root@localhost tmp]# dd if=/dev/zero of=/root/test.img bs=512 count=10000 oflag=dsync > 10000+0 record dentro > 10000+0 record fuori > 5120000 bytes (5,1 MB, 4,9 MiB) copied, 20,1953 s, 254 kB/s > Yes, this is indeed the testcase, although I see a much bigger drop in performance with bfq, compared to the results from your setup. Regards, Srivatsa