From: Javier González
Subject: Re: [GIT PULL 24/25] lightnvm: pblk: add iostat support
Date: Mon, 8 Jan 2018 13:53:11 +0100
To: Christoph Hellwig
Cc: Matias Bjørling, Jens Axboe, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org
In-Reply-To: <20180108115457.GA28922@infradead.org>
References: <20180105131621.20808-1-m@bjorling.me> <20180105131621.20808-25-m@bjorling.me> <20180105154230.GA13829@kernel.dk> <2491bb34-2098-32f5-3d5b-e3d89fbce4b3@bjorling.me> <20180108115457.GA28922@infradead.org>

> On 8 Jan 2018, at 12.54, Christoph Hellwig wrote:
>
> On Fri, Jan 05, 2018 at 07:33:36PM +0100, Matias Bjørling wrote:
>> On 01/05/2018 04:42 PM, Jens Axboe wrote:
>>> On Fri, Jan 05 2018, Matias Bjørling wrote:
>>>> From: Javier González
>>>>
>>>> Since pblk registers its own block device, the iostat accounting is
>>>> not automatically done for us. Therefore, add the necessary
>>>> accounting logic to satisfy the iostat interface.
>>>
>>> Ignorant question - why is it a raw block device, not using blk-mq?
>>
>> The current flow uses the raw block device together with the blk-mq
>> nvme device driver. A bio is sent down the nvme_nvm_submit_io() path
>> in drivers/nvme/host/lightnvm.c. From there it attaches to the NVMe
>> blk-mq implementation.
>>
>> Is there a better way to do it?
>
> I suspect the right way to do things is to split NVMe for different
> I/O command sets, and make this an I/O command set.

This makes sense. It is actually how I implemented it to start with, but
I changed it to be less intrusive on the nvme path. Let's revert the
patch and we can add it back when we push the 2.0 patches.

> But before touching much of NVMe, I'd really, really like to see an
> actual spec first.

The 2.0 spec is open and available here [1]. I thought you had looked
into it already... Anyway, feedback is more than welcome.

[1] https://docs.google.com/document/d/1kedBY_1-hfkAlqT4EdwY6gz-6UOZbn7kIjWpmBLPNj0

Javier
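
For reference, the accounting the patch refers to is the kind a bio-based
driver does by hand with the block layer's generic helpers, since it does
not go through blk-mq's own request accounting. The sketch below is not
the pblk patch itself: it assumes a ~4.15-era kernel API
(generic_start_io_acct()/generic_end_io_acct() taking a request_queue),
and the struct and function names (pblk_sketch, pblk_sketch_make_rq,
pblk_sketch_end_io) are placeholders for illustration only.

  #include <linux/bio.h>
  #include <linux/blkdev.h>
  #include <linux/genhd.h>
  #include <linux/jiffies.h>

  /* stand-in for the driver's private data; field names are assumptions */
  struct pblk_sketch {
          struct gendisk *disk;
  };

  /*
   * Start accounting when the bio enters the target's make_request path,
   * before it is handed down to the NVMe submission path.
   */
  static blk_qc_t pblk_sketch_make_rq(struct request_queue *q, struct bio *bio)
  {
          struct pblk_sketch *pblk = q->queuedata;

          /* in a real driver, start_time is stashed in the per-I/O context */
          unsigned long start_time = jiffies;

          generic_start_io_acct(q, bio_data_dir(bio), bio_sectors(bio),
                                &pblk->disk->part0);

          /* ... set up the request and hand it to nvme_nvm_submit_io() ... */

          return BLK_QC_T_NONE;
  }

  /* called from the per-request completion path */
  static void pblk_sketch_end_io(struct pblk_sketch *pblk, struct bio *bio,
                                 unsigned long start_time)
  {
          /* close the accounting window opened at submission time */
          generic_end_io_acct(pblk->disk->queue, bio_data_dir(bio),
                              &pblk->disk->part0, start_time);
          bio_endio(bio);
  }

With this pattern the I/O shows up in /proc/diskstats (and hence iostat)
under the target's own gendisk, independently of the accounting done for
the underlying NVMe queues.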