Date: Tue, 26 Mar 2019 20:15:22 -0600
From: Keith Busch
To: "jianchao.wang"
Cc: Jens Axboe, linux-block, James Smart, Bart Van Assche, Ming Lei,
    Josef Bacik, linux-nvme, Linux Kernel Mailing List, "Busch, Keith",
    Hannes Reinecke, Johannes Thumshirn, Christoph Hellwig, Sagi Grimberg
Subject: Re: [PATCH V2 7/8] nvme: use blk_mq_queue_tag_inflight_iter
On Wed, Mar 27, 2019 at 10:03:26AM +0800, jianchao.wang wrote:
> Hi Keith
>
> On 3/27/19 7:57 AM, Keith Busch wrote:
> > On Mon, Mar 25, 2019 at 08:05:53PM -0700, jianchao.wang wrote:
> >> What if there used to be an io scheduler that left some stale
> >> requests in the sched tags? Or nr_hw_queues was decreased, leaving
> >> the hctx->fq->flush_rq behind?
> >
> > Requests internally queued in the scheduler or block layer are not
> > eligible for the nvme driver's iterator callback. We only use it to
> > reclaim dispatched requests that the target can't return, which only
> > applies to requests that must have a valid rq->tag value from
> > hctx->tags.
> >
> >> The stale request could be something freed and reused by others,
> >> and the state field could happen to be overwritten to a non-zero
> >> value...
> >
> > I am not sure I follow what this means. At least for nvme, every
> > queue sharing the same tagset is quiesced and frozen; there should
> > be no request state in flux at the time we iterate.
>
> In nvme_dev_disable, when we try to reclaim the in-flight requests with
> blk_mq_tagset_busy_iter, the request_queues are quiesced, but the
> freeze has only been started, not drained.
> We only _drain_ the in-flight requests in the _shutdown_ case, when the
> controller is not dead.
> For the reset case, someone could still escape the queue-freeze check,
> enter blk_mq_make_request, and try to allocate a tag, so we may get:
>
> generic_make_request                nvme_dev_disable
>  -> blk_queue_enter
>  -> blk_mq_make_request              -> nvme_start_freeze (just start freeze, no drain)
>     -> blk_mq_get_request            -> nvme_stop_queues
>       -> blk_mq_get_tag              -> blk_mq_tagset_busy_iter
>                                        -> bt_tags_for_each
>                                          -> bt_tags_iter
>                                            -> rq = tags->rqs[] ---> [1]
>     -> blk_mq_rq_ctx_init
>       -> data->hctx->tags->rqs[rq->tag] = rq;
>
> The rq read at position [1] could be a stale request that was freed
> due to:
> 1. an hctx->fq.flush_rq of a dead request_queue that shares the same
>    tagset
> 2. a removed io scheduler's sched request
>
> Such a stale request may have been reused by someone else, with
> request->state changed to a non-zero value; it then passes the
> blk_mq_request_started check and gets handled by nvme_cancel_request.

How is that request state going to be anything other than IDLE? A freed
request's state is IDLE, and it continues to be IDLE until the request
is dispatched. But dispatch is blocked for the entire tagset, so request
states can't be started during an nvme reset. For reference, sketches of
the iterator path and of the state check it relies on follow below.
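
For reference, a minimal sketch of the iterator path marked [1] in the
diagram above, paraphrased from the blk-mq tag code of this era
(block/blk-mq-tag.c); the struct and field names are from that code but
may differ across kernel versions, so treat this as a sketch rather than
the authoritative source:

    static bool bt_tags_iter(struct sbitmap *bitmap, unsigned int bitnr,
                             void *data)
    {
            struct bt_tags_iter_data *iter_data = data;
            struct blk_mq_tags *tags = iter_data->tags;
            bool reserved = iter_data->reserved;
            struct request *rq;

            if (!reserved)
                    bitnr += tags->nr_reserved_tags;

            /*
             * rq may be NULL here: the allocator sets the tag bit before
             * assigning ->rqs[], so the iterator can observe the bit
             * without the request pointer. This same window is where a
             * stale ->rqs[] entry could be observed, per the diagram.
             */
            rq = tags->rqs[bitnr];                  /* [1] above */
            if (rq && blk_mq_request_started(rq))
                    return iter_data->fn(rq, iter_data->data, reserved);

            return true;
    }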
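
And the state check that gates the callback, again paraphrased from the
blk-mq headers of this era (consult the actual source for your kernel):

    /* the request state is published and read with store/load-acquire
     * semantics via WRITE_ONCE/READ_ONCE on rq->state */
    static inline enum mq_rq_state blk_mq_rq_state(struct request *rq)
    {
            return READ_ONCE(rq->state);
    }

    static inline bool blk_mq_request_started(struct request *rq)
    {
            /* MQ_RQ_IDLE (0) is the state of a free or freed request */
            return blk_mq_rq_state(rq) != MQ_RQ_IDLE;
    }

So a stale tags->rqs[] entry is only a problem if something can move its
state away from MQ_RQ_IDLE concurrently with the iteration; the argument
above is that this transition only happens at dispatch time, which the
quiesce/freeze blocks for the whole tagset during an nvme reset.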