Date: Mon, 8 Jan 2018 03:54:57 -0800
From: Christoph Hellwig
To: Matias Bjørling
Cc: Jens Axboe, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, Javier González
Subject: Re: [GIT PULL 24/25] lightnvm: pblk: add iostat support
Message-ID: <20180108115457.GA28922@infradead.org>
References: <20180105131621.20808-1-m@bjorling.me> <20180105131621.20808-25-m@bjorling.me> <20180105154230.GA13829@kernel.dk> <2491bb34-2098-32f5-3d5b-e3d89fbce4b3@bjorling.me>
In-Reply-To: <2491bb34-2098-32f5-3d5b-e3d89fbce4b3@bjorling.me>

On Fri, Jan 05, 2018 at 07:33:36PM +0100, Matias Bjørling wrote:
> On 01/05/2018 04:42 PM, Jens Axboe wrote:
> > On Fri, Jan 05 2018, Matias Bjørling wrote:
> > > From: Javier González
> > >
> > > Since pblk registers its own block device, the iostat accounting is
> > > not automatically done for us. Therefore, add the necessary
> > > accounting logic to satisfy the iostat interface.
> >
> > Ignorant question - why is it a raw block device, not using blk-mq?
>
> The current flow uses the raw block device together with the blk-mq nvme
> device driver. A bio is sent down the nvme_nvm_submit_io() path in
> drivers/nvme/host/lightnvm.c, and from there it attaches to the NVMe
> blk-mq implementation.
>
> Is there a better way to do it?

I suspect the right way to do things is to split NVMe for different I/O
command sets, and make this an I/O command set. But before touching much
of NVMe, I'd really, really like to see an actual spec first.
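
[For reference, the accounting the patch adds works because a bio-based
driver like pblk has to drive the block layer's iostat bookkeeping by
hand; request-based (blk-mq) drivers get it for free. Below is a minimal
sketch of that pattern, assuming the generic_start_io_acct() /
generic_end_io_acct() helper signatures of 4.15-era kernels. The names
my_dev, my_io and my_make_request are illustrative placeholders, not the
actual pblk code.]

#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/genhd.h>
#include <linux/jiffies.h>
#include <linux/slab.h>

struct my_dev {
	struct request_queue *q;
	struct gendisk *disk;
};

/* Per-I/O context so the completion path can close the accounting. */
struct my_io {
	struct my_dev *dev;
	unsigned long start_time;	/* jiffies when accounting opened */
	int rw;				/* READ or WRITE */
};

/* Runs from the completion path once the lower device finishes the I/O. */
static void my_io_done(struct my_io *io)
{
	generic_end_io_acct(io->dev->q, io->rw, &io->dev->disk->part0,
			    io->start_time);
	kfree(io);
}

static blk_qc_t my_make_request(struct request_queue *q, struct bio *bio)
{
	struct my_dev *dev = q->queuedata;
	struct my_io *io;

	io = kmalloc(sizeof(*io), GFP_KERNEL);
	if (!io) {
		bio_io_error(bio);
		return BLK_QC_T_NONE;
	}

	io->dev = dev;
	io->rw = bio_data_dir(bio);
	io->start_time = jiffies;

	/* Charge the I/O against this driver's own gendisk. */
	generic_start_io_acct(q, io->rw, bio_sectors(bio),
			      &dev->disk->part0);

	/*
	 * ... translate the bio and submit it to the underlying NVMe
	 * device; arrange for my_io_done(io) to run from the
	 * completion callback before bio_endio() ...
	 */

	return BLK_QC_T_NONE;
}

[The idea is simply to open the accounting window when the bio enters the
driver and close it on completion, both charged against the driver's own
gendisk, so the numbers show up in /proc/diskstats and iostat.]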