Date: Sat, 20 Jan 2018 02:24:03 +0800
From: Ming Lei
To: Jens Axboe
Cc: Bart Van Assche, snitzer@redhat.com, dm-devel@redhat.com,
    hch@infradead.org, linux-kernel@vger.kernel.org,
    linux-block@vger.kernel.org, osandov@fb.com
Subject: Re: [RFC PATCH] blk-mq: fixup RESTART when queue becomes idle
Message-ID: <20180119182402.GC15610@ming.t460p>

On Fri, Jan 19, 2018 at 10:38:41AM -0700, Jens Axboe wrote:
> On 1/19/18 9:37 AM, Ming Lei wrote:
> > On Fri, Jan 19, 2018 at 09:27:46AM -0700, Jens Axboe wrote:
> >> On 1/19/18 9:26 AM, Ming Lei wrote:
> >>> On Fri, Jan 19, 2018 at 09:19:24AM -0700, Jens Axboe wrote:
> >>>> On 1/19/18 9:05 AM, Ming Lei wrote:
> >>>>> On Fri, Jan 19, 2018 at 08:48:55AM -0700, Jens Axboe wrote:
> >>>>>> On 1/19/18 8:40 AM, Ming Lei wrote:
> >>>>>>>>>> Where does the dm STS_RESOURCE error usually come from -
> >>>>>>>>>> what exact resource are we running out of?
> >>>>>>>>>
> >>>>>>>>> It is from blk_get_request(underlying queue), see
> >>>>>>>>> multipath_clone_and_map().
> >>>>>>>>
> >>>>>>>> That's what I thought. So for a low queue depth underlying
> >>>>>>>> queue, it's quite possible that this situation can happen. Two
> >>>>>>>> potential solutions I see:
> >>>>>>>>
> >>>>>>>> 1) As described earlier in this thread, having a mechanism for
> >>>>>>>>    being notified when the scarce resource becomes available.
> >>>>>>>>    It would not be hard to tap into the existing sbitmap wait
> >>>>>>>>    queue for that.
> >>>>>>>>
> >>>>>>>> 2) Have dm set BLK_MQ_F_BLOCKING and just sleep on the
> >>>>>>>>    resource allocation. I haven't read the dm code to know if
> >>>>>>>>    this is a possibility or not.
> >>>>>>>>
> >>>>>>>> I'd probably prefer #1. It's a classic case of trying to get
> >>>>>>>> the request, and if it fails, add ourselves to the sbitmap tag
> >>>>>>>> wait queue head, retry, and bail if that also fails. Connecting
> >>>>>>>> the scarce resource and the consumer is the only way to really
> >>>>>>>> fix this, without bogus arbitrary delays.
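A minimal sketch of the retry pattern in #1, assuming hypothetical
blk_mq_add_tag_wait()/blk_mq_del_tag_wait() helpers for hooking into
the tag sbitmap wait queue; no such helpers exist in the tree, and
this is untested:

	#include <linux/blk-mq.h>
	#include <linux/blkdev.h>
	#include <linux/wait.h>

	struct request *clone_request_with_retry(struct request_queue *q,
						 unsigned int op,
						 struct wait_queue_entry *wait)
	{
		struct request *rq;

		rq = blk_get_request(q, op, GFP_ATOMIC);
		if (!IS_ERR(rq))
			return rq;

		/*
		 * Hypothetical: queue @wait on the tag sbitmap wait
		 * queue of @q, so that a freed tag wakes us back up.
		 */
		blk_mq_add_tag_wait(q, wait);

		/* Retry once: a tag may have freed before we queued up. */
		rq = blk_get_request(q, op, GFP_ATOMIC);
		if (IS_ERR(rq))
			return NULL;	/* bail; the tag wakeup reruns us */

		blk_mq_del_tag_wait(q, wait);
		return rq;
	}

The point is just that the waiter is connected to the scarce resource
itself, so a wakeup replaces any fixed retry delay.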
> >>>>>>>
> >>>>>>> Right, as I have replied to Bart, using mod_delayed_work_on()
> >>>>>>> and returning BLK_STS_NO_DEV_RESOURCE (or some such name) for
> >>>>>>> the scarce resource should fix this issue.
> >>>>>>
> >>>>>> It'll fix the forever stall, but it won't really fix it, as
> >>>>>> we'll slow down the dm device by some random amount.
> >>>>>>
> >>>>>> A simple test case would be to have a null_blk device with a
> >>>>>> queue depth of one, and dm on top of that. Start a fio job that
> >>>>>> runs two jobs: one that does IO to the underlying device, and
> >>>>>> one that does IO to the dm device. If the job on the dm device
> >>>>>> runs substantially slower than the one to the underlying
> >>>>>> device, then the problem isn't really fixed.
> >>>>>
> >>>>> I remember trying this test on scsi-debug & dm-mpath over
> >>>>> scsi-debug and did not observe this issue. Could you explain a
> >>>>> bit why IO over dm-mpath may be slower? Both IO contexts call
> >>>>> the same get_request(), and in theory dm-mpath should be a bit
> >>>>> quicker since it uses direct issue for the underlying queue,
> >>>>> without an IO scheduler involved.
> >>>>
> >>>> Because if you lose the race for getting the request, you'll
> >>>> have some arbitrary delay before trying again, potentially.
> >>>> Compared to the direct
> >>>
> >>> But the restart still works: when one request completes, the
> >>> queue is rerun immediately because we use mod_delayed_work_on()
> >>> with a delay of 0, so there should be no such issue.
> >>
> >> There are no pending requests for this case, nothing to restart
> >> the queue. When you fail that blk_get_request(), you are idle,
> >> nothing is pending.
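In code terms, the delay-based fallback we have been discussing
amounts to something like the sketch below. blk_mq_delay_run_hw_queue()
is a real helper (it uses kblockd_mod_delayed_work_on internally), but
the wrapper and the 3 ms value are made-up examples, and the
arbitrariness of that delay is exactly the objection above:

	#include <linux/blk-mq.h>

	/*
	 * When the driver returns BLK_STS_RESOURCE and the queue is
	 * idle, no completion will ever come along to restart it, so
	 * kick the hw queue again after an arbitrary delay.
	 */
	static void rerun_on_resource_shortage(struct blk_mq_hw_ctx *hctx,
					       bool queue_idle)
	{
		if (queue_idle)
			blk_mq_delay_run_hw_queue(hctx, 3 /* ms, arbitrary */);
	}

Whatever delay is chosen, it is disconnected from when a request
actually frees up on the underlying queue.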
> >
> > I think we needn't worry about that: once a device is attached to
> > dm-rq, it can't be mounted any more, and users don't usually do IO
> > to the device directly and through dm-mpath at the same time.
>
> Here's an example of that, using my current block tree (merged into
> master). The setup is dm-mpath on top of null_blk, the latter having
> just a single request. Both are mq devices.
>
> Fio direct 4k random reads on dm_mq: ~250K iops
>
> Start dd on the underlying device (or a partition on the same
> device), just doing sequential reads.
>
> Fio direct 4k random reads on dm_mq with dd running: 9 iops
>
> No schedulers involved.
>
> https://i.imgur.com/WTDnnwE.gif

This DM-specific issue might be addressed by applying a notifier_chain
(or a similar mechanism) between the two queues; I will think about
the details tomorrow.
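Roughly, the shape I have in mind is something like the following
(invented glue, nothing like it exists in the tree yet, untested):
the underlying queue fires the chain whenever it frees a request, and
dm-rq registers a callback that kicks its own queue:

	#include <linux/blk-mq.h>
	#include <linux/notifier.h>

	/* Hypothetical: one chain per underlying request_queue. */
	static ATOMIC_NOTIFIER_HEAD(underlying_q_notifier);

	struct dm_q_waiter {
		struct notifier_block nb;
		struct request_queue *dm_q;	/* the dm queue to restart */
	};

	/* dm-rq side: rerun our hw queues once the underlying queue
	 * has room again. */
	static int dm_q_resource_available(struct notifier_block *nb,
					   unsigned long action, void *data)
	{
		struct dm_q_waiter *w =
			container_of(nb, struct dm_q_waiter, nb);

		blk_mq_run_hw_queues(w->dm_q, true);
		return NOTIFY_OK;
	}

	/*
	 * Underlying queue side, on request free:
	 *	atomic_notifier_call_chain(&underlying_q_notifier, 0, NULL);
	 * dm-rq side, at map time:
	 *	atomic_notifier_chain_register(&underlying_q_notifier, &w->nb);
	 */

The registration/teardown details, and per-queue chains instead of a
global one, are exactly what needs thinking through.

--
Ming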