Date: Wed, 24 Apr 2019 14:07:06 -0600
From: Keith Busch
To: Sagi Grimberg
Cc: Maximilian Heyne, David Woodhouse, Amit Shah, Keith Busch, Jens Axboe, Christoph Hellwig, James Smart, linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 0/2] Adding per-controller timeout support to nvme
Message-ID: <20190424200706.GB15412@localhost.localdomain>
References: <20190403123506.122904-1-mheyne@amazon.de>
User-Agent: Mutt/1.9.1 (2017-09-22)
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Apr 24, 2019 at 09:55:16AM -0700, Sagi Grimberg wrote:
> > As different nvme controllers are connected via different fabrics, some
> > require different timeout settings than others. This series implements
> > per-controller timeouts in the nvme subsystem which can be set via
> > sysfs.
>
> How much of a real issue is this?
>
> block io_timeout defaults to 30 seconds, which is considered a universal
> eternity for pretty much any nvme fabric. Moreover, io_timeout is
> mutable already on a per-namespace level.
>
> This leaves the admin_timeout, which goes beyond this to 60 seconds...
>
> Can you describe what exactly you are trying to solve?

I think they must have an nvme target that is backed by slow media
(i.e. non-SSD). If that's the case, I think it may be a better option if
the target advertises relatively shallow queue depths and/or a lower MDTS
that better aligns to the backing storage's capabilities.
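For readers following the thread: the per-namespace io_timeout Sagi refers to is the block-layer sysfs attribute, adjustable at runtime without any new nvme plumbing. A minimal sketch, assuming a hypothetical namespace block device named nvme0n1 (substitute your own; the value is in milliseconds):

```shell
# Hypothetical device name; adjust to match your system.
DEV=nvme0n1
ATTR=/sys/block/$DEV/queue/io_timeout

# Read the current per-namespace I/O timeout in milliseconds
# (30000 corresponds to the 30-second default mentioned above).
if [ -r "$ATTR" ]; then
    cat "$ATTR"
else
    echo "no $DEV on this system"
fi

# Raise it to 120 seconds for this one namespace only (requires root);
# guarded so the snippet is harmless where the device is absent.
[ -w "$ATTR" ] && echo 120000 > "$ATTR"
```

This tunes I/O command timeouts per namespace; it does not cover admin commands, which is where the proposed per-controller knob would differ.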