Subject: Re: [RFC PATCH] blk-mq: fixup RESTART when queue becomes idle
From: Jens Axboe
To: Ming Lei
Cc: Bart Van Assche, snitzer@redhat.com, dm-devel@redhat.com,
 hch@infradead.org, linux-kernel@vger.kernel.org,
 linux-block@vger.kernel.org, osandov@fb.com
Date: Fri, 19 Jan 2018 08:24:06 -0700
Message-ID: <047f68ec-f51b-190f-2f89-f413325c2540@kernel.dk>
In-Reply-To: <20180119072623.GB25369@ming.t460p>
References: <20180118024124.8079-1-ming.lei@redhat.com>
 <20180118170353.GB19734@redhat.com>
 <1516296056.2676.23.camel@wdc.com>
 <20180118183039.GA20121@redhat.com>
 <1516301278.2676.35.camel@wdc.com>
 <20180119023212.GA25413@ming.t460p>
 <20180119072623.GB25369@ming.t460p>
X-Mailing-List: linux-kernel@vger.kernel.org

On 1/19/18 12:26 AM, Ming Lei wrote:
> On Thu, Jan 18, 2018 at 09:02:45PM -0700, Jens Axboe wrote:
>> On 1/18/18 7:32 PM, Ming Lei wrote:
>>> On Thu, Jan 18, 2018 at 01:11:01PM -0700, Jens Axboe wrote:
>>>> On 1/18/18 11:47 AM, Bart Van Assche wrote:
>>>>>> This is all very tiresome.
>>>>>
>>>>> Yes, this is tiresome. It is very annoying to me that others keep
>>>>> introducing so many regressions in such important parts of the
>>>>> kernel. It is also annoying to me that I get blamed if I report a
>>>>> regression instead of seeing that the regression gets fixed.
>>>>
>>>> I agree, it sucks that any change there introduces the regression.
>>>> I'm fine with doing the delay insert again until a new patch is
>>>> proven to be better.
>>>
>>> That way is still buggy, as I explained, since rerunning the queue
>>> before adding the request to hctx->dispatch_list isn't correct. Who
>>> can make sure the request is visible when __blk_mq_run_hw_queue()
>>> is called?
>>
>> That race basically doesn't exist for a 10ms gap.
>>
>>> Not to mention that this way will cause a performance regression
>>> again.
>>
>> How so? It's _exactly_ the same as what you are proposing, except
>> mine will potentially run the queue when it need not do so. But
>> given that these are random 10ms queue kicks because we are screwed,
>> it should not matter. The key point is that it only should be if we
>> have NO better options. If it's a frequently occurring event that we
>> have to return BLK_STS_RESOURCE, then we need to get a way to
>> register an event for when that condition clears. That event will
>> then kick the necessary queue(s).
>
> Please see queue_delayed_work_on(): hctx->run_work is shared by all
> scheduling, and once blk_mq_delay_run_hw_queue(100ms) returns, no new
> scheduling can make progress during the 100ms.

That's a bug, plain and simple. If someone does "run this queue in
100ms" and someone else comes in and says "run this queue now", the
correct outcome is running this queue now.

>>>> From the original topic of this email, we have conditions that can
>>>> cause the driver to not be able to submit an IO.
>>>> A set of those
>>>> conditions can only happen if IO is in flight, and those cases we
>>>> have covered just fine. Another set can potentially trigger without
>>>> IO being in flight. These are cases where a non-device resource is
>>>> unavailable at the time of submission. This might be the iommu
>>>> running out of space, for instance, or it might be a memory
>>>> allocation of some sort. For these cases, we don't get any
>>>> notification when the shortage clears. All we can do is ensure that
>>>> we restart operations at some point in the future. We're SOL at
>>>> that point, but we have to ensure that we make forward progress.
>>>
>>> Right, it is a generic issue, not a DM-specific one; almost all
>>> drivers call kmalloc(GFP_ATOMIC) in the IO path.
>>
>> GFP_ATOMIC basically never fails, unless we are out of memory. The
>
> I guess GFP_KERNEL may never fail, but GFP_ATOMIC failure might be
> possible, and it is mentioned[1] that there is code like this in the
> mm allocation path; OOM can happen too.
>
> 	if (some randomly generated condition) && (request is atomic)
> 		return NULL;
>
> [1] https://lwn.net/Articles/276731/

That article is 10 years old. Once you run large scale production, you
see what the real failures are. Fact is, for a zero order allocation,
if the atomic alloc fails the shit has really hit the fan. In that
case, a delay of 10ms is not your main issue. It's a total red herring
when you compare to the frequency of what Bart is seeing. It's noise,
and irrelevant here. For an atomic zero order allocation failure,
doing a short random sleep is perfectly fine.

>> exception is higher order allocations. If a driver has a higher
>> order atomic allocation in its IO path, the device driver writer
>> needs to be taken out behind the barn and shot. Simple as that. It
>> will NEVER work well in a production environment. Witness the
>> disaster that so many NIC driver writers have learned.
>>
>> This is NOT the case we care about here.
>> It's resources that are
>> more readily depleted because other devices are using them. If it's
>> a high-frequency or generally occurring event, then we simply must
>> have a callback to restart the queue from that. The condition then
>> becomes identical to device private starvation, the only difference
>> being from where we restart the queue.
>>
>>> IMO, there is enough time for figuring out a generic solution
>>> before the 4.16 release.
>>
>> I would hope so, but the proposed solutions have not filled me with
>> a lot of confidence in the end result so far.
>>
>>>> That last set of conditions had better not be a common occurrence,
>>>> since performance is down the toilet at that point. I don't want
>>>> to introduce hot path code to rectify it. Have the driver return
>>>> if that happens in a way that is DIFFERENT from needing a normal
>>>> restart. The driver knows if this is a resource that will become
>>>> available when IO completes on this device or not. If we get that
>>>> return, we have a generic run-again delay.
>>>
>>> Most of the time now, neither NVMe nor SCSI will return
>>> BLK_STS_RESOURCE; it should be DM only which returns STS_RESOURCE
>>> so often.
>>
>> Where does the dm STS_RESOURCE error usually come from - what exact
>> resource are we running out of?
>
> It is from blk_get_request(underlying queue), see
> multipath_clone_and_map().

That's what I thought. So for a low queue depth underlying queue, it's
quite possible that this situation can happen. Two potential solutions
I see:

1) As described earlier in this thread, having a mechanism for being
   notified when the scarce resource becomes available. It would not
   be hard to tap into the existing sbitmap wait queue for that.

2) Have dm set BLK_MQ_F_BLOCKING and just sleep on the resource
   allocation. I haven't read the dm code to know if this is a
   possibility or not.

I'd probably prefer #1.
It's a classic case of trying to get the
request, and if that fails, adding ourselves to the sbitmap tag wait
queue head, retrying, and bailing if that also fails. Connecting the
scarce resource and the consumer is the only way to really fix this,
without bogus arbitrary delays.

-- 
Jens Axboe