Subject: Re: [PATCHSET v3][RFC] Make background writeback not suck
From: Jens Axboe
To: Dave Chinner
Date: Fri, 1 Apr 2016 08:33:18 -0600
Message-ID: <56FE86AE.7070704@fb.com>
In-Reply-To: <20160401061610.GX11812@dastard>
List-ID: linux-kernel@vger.kernel.org

On 04/01/2016 12:16 AM, Dave Chinner wrote:
> On Thu, Mar 31, 2016 at 09:39:25PM -0600, Jens Axboe wrote:
>> On 03/31/2016 09:29 PM, Jens Axboe wrote:
>>>>> I can't seem to reproduce this at all. On an nvme device, I get a
>>>>> fairly steady 60K/sec file creation rate, and we're nowhere near
>>>>> being IO bound. So the throttling has no effect at all.
>>>>
>>>> That's too slow to show the stalls - you're likely concurrency-bound
>>>> in allocation by the default AG count (4) from mkfs. Use mkfs.xfs -d
>>>> agcount=32 so that every thread works in its own AG.
>>>
>>> That's the key - with that, I get 300-400K ops/sec instead. I'll run
>>> some testing with this tomorrow and see what I can find. It did one
>>> full run now and I didn't see any issues, but I need to run it at
>>> various settings and see if I can find the issue.
>>
>> No stalls seen - I get the same performance with it disabled and with
>> it enabled, at both default settings and lower ones (wb_percent=20).
>> Looking at iostat, we don't drive a lot of depth, so it makes sense:
>> even with the throttling, we're doing essentially the same amount of
>> IO.
>
> Try appending numa=fake=4 to your guest's kernel command line.
>
> (that's what I'm using)

Sure, I can give that a go.

>> What does 'nr_requests' say for your virtio_blk device? Looks like
>> virtio_blk has a queue_depth setting, but it's not set by default,
>> and then it uses the free entries in the ring. But I don't know what
>> that is...
>
> $ cat /sys/block/vdc/queue/nr_requests
> 128

OK, so that would put you in the 16/32/64 category for idle/normal/high
priority writeback, which fits with the iostat below - it's in the ~16
range. So the REQ_META change should help, it'll bump the depth up a bit.

But we're also seeing smaller requests, and I think that could be
because after we throttle, we could have a merge candidate: the code
allows any merges before sleeping, but doesn't re-check for them
afterwards. That part is a little harder to read from the iostat
numbers, but there does seem to be a correlation between your higher
queue depths and bigger request sizes.

> I'll try the "don't throttle REQ_META" patch, but this seems like a
> fragile way to solve this problem - it shuts up the messenger, but
> doesn't solve the problem for any other subsystem that might have a
> similar issue. e.g. next we're going to have to make sure direct IO
> (which is also REQ_WRITE dispatch) does not get throttled, and so
> on....

I don't think there's anything wrong with the REQ_META patch.
Sure, we could have better classifications (like the one discussed
below), but that's mainly tweaking; as long as we get the same answers,
it's fine. There's no throttling of O_DIRECT writes in the current
code - it specifically excludes them. It's only for unbounded writes,
which writeback tends to be.

> It seems to me that the right thing to do here is add a separate
> classification flag for IO that can be throttled, e.g. REQ_WRITEBACK,
> with only background writeback work setting this flag.
> That would ensure that when the IO is being dispatched from other
> sources (e.g. fsync, sync_file_range(), direct IO, filesystem
> metadata, etc) it is clear that it is not a target for throttling.
> This would also allow us to easily switch off throttling if
> writeback is occurring for memory reclaim reasons, and so on.
> Throttling policy decisions belong above the block layer, even
> though the throttle mechanism itself is in the block layer.

We're already doing all of that, it just doesn't use a specific
REQ_WRITEBACK flag. And yeah, that would clean up the checking of the
request type, but functionally it should be the same as it is now.
A REQ_WRITEBACK flag would be a bit more robust and easier to read;
right now it's WRITE_SYNC vs WRITE for important vs not-important,
with an additional check for plain writes vs O_DIRECT writes.

-- 
Jens Axboe