Date: Sat, 20 Jan 2018 07:52:48 +0800
From: Ming Lei
To: Jens Axboe
Cc: Bart Van Assche, snitzer@redhat.com, dm-devel@redhat.com,
    hch@infradead.org, linux-kernel@vger.kernel.org,
    linux-block@vger.kernel.org, osandov@fb.com
Subject: Re: [RFC PATCH] blk-mq: fixup RESTART when queue becomes idle
Message-ID: <20180119235247.GA17195@ming.t460p>
References: <20180119072623.GB25369@ming.t460p>
 <047f68ec-f51b-190f-2f89-f413325c2540@kernel.dk>
 <20180119154047.GB14827@ming.t460p>
 <540e1239-c415-766b-d4ff-bb0b7f3517a7@kernel.dk>
 <20180119160518.GC14827@ming.t460p>
 <4a5c049f-0fab-bbaf-bfe2-eb5bca73f2c8@kernel.dk>
 <20180119162618.GD14827@ming.t460p>
 <1f072086-533e-4b75-d0e3-9e621b2120d8@kernel.dk>
 <20180119163736.GE14827@ming.t460p>
User-Agent: Mutt/1.9.1 (2017-09-22)

On Fri, Jan 19, 2018 at 10:38:41AM -0700, Jens Axboe wrote:
> On 1/19/18 9:37 AM, Ming Lei wrote:
> > On Fri, Jan 19, 2018 at 09:27:46AM -0700, Jens Axboe wrote:
> >> On 1/19/18 9:26 AM, Ming Lei wrote:
> >>> On Fri, Jan 19, 2018 at 09:19:24AM -0700, Jens Axboe wrote:
> >>>> On 1/19/18 9:05 AM, Ming Lei wrote:
> >>>>> On Fri, Jan 19, 2018 at 08:48:55AM -0700, Jens Axboe wrote:
> >>>>>> On 1/19/18 8:40 AM, Ming Lei wrote:
> >>>>>>>>>> Where does the dm STS_RESOURCE error usually come from - what
> >>>>>>>>>> exact resource are we running out of?
> >>>>>>>>>
> >>>>>>>>> It is from blk_get_request(underlying queue), see
> >>>>>>>>> multipath_clone_and_map().
> >>>>>>>>
> >>>>>>>> That's what I thought. So for a low queue depth underlying
> >>>>>>>> queue, it's quite possible that this situation can happen. Two
> >>>>>>>> potential solutions I see:
> >>>>>>>>
> >>>>>>>> 1) As described earlier in this thread, having a mechanism for
> >>>>>>>>    being notified when the scarce resource becomes available.
> >>>>>>>>    It would not be hard to tap into the existing sbitmap wait
> >>>>>>>>    queue for that.
> >>>>>>>>
> >>>>>>>> 2) Have dm set BLK_MQ_F_BLOCKING and just sleep on the resource
> >>>>>>>>    allocation. I haven't read the dm code to know if this is a
> >>>>>>>>    possibility or not.
> >>>>>>>>
> >>>>>>>> I'd probably prefer #1. It's the classic case of trying to get
> >>>>>>>> the request, and if that fails, adding ourselves to the sbitmap
> >>>>>>>> tag wait queue head, retrying, and bailing if that also fails.
> >>>>>>>> Connecting the scarce resource and the consumer is the only way
> >>>>>>>> to really fix this, without bogus arbitrary delays.
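[ To make #1 concrete for anyone following along: this is roughly how
blk-mq's own dispatch path already taps the sbitmap wait queue,
condensed from my reading of blk_mq_mark_tag_wait() in block/blk-mq.c.
Simplified and untested, not the literal upstream code: ]

static bool hctx_wait_for_tag(struct blk_mq_hw_ctx *hctx,
                              struct blk_mq_tags *tags,
                              struct request *rq)
{
        struct sbq_wait_state *ws;

        /* pick a wait queue head inside the tag set's sbitmap */
        ws = sbq_wait_ptr(&tags->bitmap_tags, &hctx->wait_index);

        /* dispatch_wait got its wake callback at hctx init time */
        add_wait_queue(&ws->wait, &hctx->dispatch_wait);

        /*
         * A tag may have been freed between the failed allocation and
         * the add_wait_queue() above, so retry once before bailing.
         */
        if (blk_mq_get_driver_tag(rq, &hctx, false)) {
                remove_wait_queue(&ws->wait, &hctx->dispatch_wait);
                return true;    /* got a tag after all */
        }

        /* stay queued; the wakeup will rerun this hw queue */
        return false;
}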
> >>>>>>>
> >>>>>>> Right, as I replied to Bart: using mod_delayed_work_on(),
> >>>>>>> together with returning BLK_STS_NO_DEV_RESOURCE (or some such
> >>>>>>> name) for the scarce resource, should fix this issue.
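[ To spell out what I mean by that, the handling in the dispatch path
would be roughly the helper below. BLK_STS_NO_DEV_RESOURCE does not
exist yet, the helper name is made up, and the code is untested: ]

static void handle_dispatch_status(struct blk_mq_hw_ctx *hctx,
                                   blk_status_t ret)
{
        switch (ret) {
        case BLK_STS_NO_DEV_RESOURCE:   /* proposed status code */
                /*
                 * The scarce resource lives outside this queue's own
                 * tags, so no completion on this queue is guaranteed
                 * to restart it.  Reschedule a queue run ourselves;
                 * the 3ms is exactly the kind of arbitrary delay Jens
                 * objects to above.
                 */
                blk_mq_delay_run_hw_queue(hctx, 3);
                break;
        default:
                break;
        }
}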
> >>>>>>
> >>>>>> It'll fix the forever stall, but it won't really fix it, as we'll
> >>>>>> slow down the dm device by some random amount.
> >>>>>>
> >>>>>> A simple test case would be to have a null_blk device with a
> >>>>>> queue depth of one, and dm on top of that. Start a fio job that
> >>>>>> runs two jobs: one that does IO to the underlying device, and one
> >>>>>> that does IO to the dm device. If the job on the dm device runs
> >>>>>> substantially slower than the one to the underlying device, then
> >>>>>> the problem isn't really fixed.
> >>>>>
> >>>>> I remember trying this test on scsi-debug and dm-mpath over
> >>>>> scsi-debug, and I didn't observe this issue. Could you explain a
> >>>>> bit why IO over dm-mpath may be slower? Both IO contexts call the
> >>>>> same get_request(), and in theory dm-mpath should be a bit
> >>>>> quicker, since it issues directly to the underlying queue without
> >>>>> an io scheduler involved.
> >>>>
> >>>> Because if you lose the race for getting the request, you'll have
> >>>> some arbitrary delay before trying again, potentially. Compared to
> >>>> the direct
> >>>
> >>> But the restart still works: once one request completes, the queue
> >>> is rerun immediately because we use mod_delayed_work_on() with a
> >>> delay of 0, so there should be no such issue.
> >>
> >> There are no pending requests for this case, nothing to restart the
> >> queue. When you fail that blk_get_request(), you are idle, nothing
> >> is pending.
> >
> > I don't think we need to worry about that: once a device is attached
> > to dm-rq it can't be mounted any more, and users don't usually access
> > the device directly and through dm-mpath at the same time.
>
> Here's an example of that, using my current block tree (merged into
> master). The setup is dm-mpath on top of null_blk, the latter having
> just a single request (queue depth of one). Both are mq devices.
>
> Fio direct 4k random reads on dm_mq: ~250K iops
>
> Start dd on the underlying device (or a partition on the same device),
> just doing sequential reads.
>
> Fio direct 4k random reads on dm_mq with dd running: 9 iops
>
> No schedulers involved.
>
> https://i.imgur.com/WTDnnwE.gif

If null_blk's timer mode is used, with a small delay introduced, I
guess the effect of the direct access to the underlying queue shouldn't
be so severe. But dm would still not be as fast as direct access.

Another way may be to introduce a variant of blk_get_request(), such as
blk_get_request_with_notify(): pass the current dm-rq hctx to it, and
use the tag's waitqueue to handle the notification. But that change
could be a bit big.
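[ A sketch of that idea; neither blk_get_request_with_notify() nor the
blk_mq_tag_notify_on_free()/blk_mq_tag_cancel_notify() hooks below
exist, and the code is untested: ]

struct request *blk_get_request_with_notify(struct request_queue *q,
                                            unsigned int op, gfp_t gfp,
                                            struct blk_mq_hw_ctx *dm_hctx)
{
        struct request *rq;

        rq = blk_get_request(q, op, gfp);
        if (!IS_ERR(rq))
                return rq;

        /*
         * Park the dm hctx's dispatch_wait entry on @q's tag waitqueue
         * (the new hook), so the next freed tag wakes the dm hw queue
         * instead of leaving it idle forever.
         */
        blk_mq_tag_notify_on_free(q, &dm_hctx->dispatch_wait);

        /* Recheck: a tag may have been freed before we were queued. */
        rq = blk_get_request(q, op, gfp);
        if (!IS_ERR(rq))
                blk_mq_tag_cancel_notify(q, &dm_hctx->dispatch_wait);

        return rq;      /* on failure, the waiter stays armed */
}

The wake side would mirror blk_mq_dispatch_wake(): list_del_init() the
entry and blk_mq_run_hw_queue(hctx, true) on the dm queue.

--
Ming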