2015-12-02 16:19:54

by Mark Brown

[permalink] [raw]
Subject: linux-next: build failure after merge of the block tree

Hi Jens,

After merging the block tree, today's linux-next build (x86 allmodconfig)
failed like this:

/home/broonie/next/next/drivers/nvme/host/lightnvm.c: In function 'nvme_nvm_identity':
/home/broonie/next/next/drivers/nvme/host/lightnvm.c:279:27: error: 'struct nvme_ns' has no member named 'dev'
struct nvme_dev *dev = ns->dev;
^
/home/broonie/next/next/drivers/nvme/host/lightnvm.c:292:32: error: dereferencing pointer to incomplete type 'struct nvme_dev'
ret = nvme_submit_sync_cmd(dev->admin_q, (struct nvme_command *)&c,
^
/home/broonie/next/next/drivers/nvme/host/lightnvm.c: In function 'nvme_nvm_get_l2p_tbl':
/home/broonie/next/next/drivers/nvme/host/lightnvm.c:317:27: error: 'struct nvme_ns' has no member named 'dev'
struct nvme_dev *dev = ns->dev;
^
/home/broonie/next/next/drivers/nvme/host/lightnvm.c:319:36: error: dereferencing pointer to incomplete type 'struct nvme_dev'
u32 len = queue_max_hw_sectors(dev->admin_q) << 9;
^
/home/broonie/next/next/drivers/nvme/host/lightnvm.c: In function 'nvme_nvm_get_bb_tbl':
/home/broonie/next/next/drivers/nvme/host/lightnvm.c:365:27: error: 'struct nvme_ns' has no member named 'dev'
struct nvme_dev *dev = ns->dev;
^
/home/broonie/next/next/drivers/nvme/host/lightnvm.c:379:32: error: dereferencing pointer to incomplete type 'struct nvme_dev'
ret = nvme_submit_sync_cmd(dev->admin_q, (struct nvme_command *)&c,
^
/home/broonie/next/next/drivers/nvme/host/lightnvm.c: In function 'nvme_nvm_set_bb_tbl':
/home/broonie/next/next/drivers/nvme/host/lightnvm.c:422:27: error: 'struct nvme_ns' has no member named 'dev'
struct nvme_dev *dev = ns->dev;
^
/home/broonie/next/next/drivers/nvme/host/lightnvm.c:432:32: error: dereferencing pointer to incomplete type 'struct nvme_dev'
ret = nvme_submit_sync_cmd(dev->admin_q, (struct nvme_command *)&c,
^
/home/broonie/next/next/drivers/nvme/host/lightnvm.c: In function 'nvme_nvm_create_dma_pool':
/home/broonie/next/next/drivers/nvme/host/lightnvm.c:521:27: error: 'struct nvme_ns' has no member named 'dev'
struct nvme_dev *dev = ns->dev;
^
/home/broonie/next/next/drivers/nvme/host/lightnvm.c:523:34: error: dereferencing pointer to incomplete type 'struct nvme_dev'
return dma_pool_create(name, dev->dev, PAGE_SIZE, PAGE_SIZE, 0);
^
/home/broonie/next/next/drivers/nvme/host/lightnvm.c: In function 'nvme_nvm_ns_supported':
/home/broonie/next/next/drivers/nvme/host/lightnvm.c:576:27: error: 'struct nvme_ns' has no member named 'dev'
struct nvme_dev *dev = ns->dev;
^
In file included from /home/broonie/next/next/include/linux/list.h:8:0,
from /home/broonie/next/next/include/linux/pci.h:25,
from /home/broonie/next/next/drivers/nvme/host/nvme.h:18,
from /home/broonie/next/next/drivers/nvme/host/lightnvm.c:23:
/home/broonie/next/next/drivers/nvme/host/lightnvm.c:577:39: error: dereferencing pointer to incomplete type 'struct nvme_dev'
struct pci_dev *pdev = to_pci_dev(dev->dev);
^
/home/broonie/next/next/include/linux/kernel.h:813:49: note: in definition of macro 'container_of'
const typeof( ((type *)0)->member ) *__mptr = (ptr); \
^
/home/broonie/next/next/drivers/nvme/host/lightnvm.c:577:25: note: in expansion of macro 'to_pci_dev'
struct pci_dev *pdev = to_pci_dev(dev->dev);
^
/home/broonie/next/next/drivers/nvme/host/lightnvm.c: In function 'nvme_nvm_create_dma_pool':
/home/broonie/next/next/drivers/nvme/host/lightnvm.c:524:1: warning: control reaches end of non-void function [-Wreturn-type]
}
^
/home/broonie/next/next/scripts/Makefile.build:258: recipe for target 'drivers/nvme/host/lightnvm.o' failed
make[4]: *** [drivers/nvme/host/lightnvm.o] Error 1

Caused by commit 1c63dc66580d4b ("nvme: split a new struct nvme_ctrl out
of struct nvme_dev"), which replaced the dev pointer in struct nvme_ns
with ctrl without updating this user. I used the block tree from the
last -next build (3f3a8a218c2f) instead.



2015-12-02 16:45:33

by Christoph Hellwig

[permalink] [raw]
Subject: Re: linux-next: build failure after merge of the block tree

Looks like I didn't test with CONFIG_NVM enabled, and neither did
the build bot.

Most of this is really weird crazy shit in the lightnvm support, though.

Struct nvme_ns is a structure for the NVM I/O command set, and the
lightnvm code has no business poking into it. Second, this commit:

commit 47b3115ae7b799be8b77b0f024215ad4f68d6460
Author: Wenwei Tao <[email protected]>
Date: Fri Nov 20 13:47:55 2015 +0100

nvme: lightnvm: use admin queues for admin cmds

Does even more crazy stuff. If a function gets a request_queue parameter
passed it'd better use that and not look for another one.

Quick patch below, but this code will need some more attention:

diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
index d5622f9..09cf0b9 100644
--- a/drivers/nvme/host/lightnvm.c
+++ b/drivers/nvme/host/lightnvm.c
@@ -276,7 +276,6 @@ static int init_grps(struct nvm_id *nvm_id, struct nvme_nvm_id *nvme_nvm_id)
static int nvme_nvm_identity(struct request_queue *q, struct nvm_id *nvm_id)
{
struct nvme_ns *ns = q->queuedata;
- struct nvme_dev *dev = ns->dev;
struct nvme_nvm_id *nvme_nvm_id;
struct nvme_nvm_command c = {};
int ret;
@@ -289,7 +288,7 @@ static int nvme_nvm_identity(struct request_queue *q, struct nvm_id *nvm_id)
if (!nvme_nvm_id)
return -ENOMEM;

- ret = nvme_submit_sync_cmd(dev->admin_q, (struct nvme_command *)&c,
+ ret = nvme_submit_sync_cmd(ns->ctrl->admin_q, (struct nvme_command *)&c,
nvme_nvm_id, sizeof(struct nvme_nvm_id));
if (ret) {
ret = -EIO;
@@ -314,9 +313,8 @@ static int nvme_nvm_get_l2p_tbl(struct request_queue *q, u64 slba, u32 nlb,
nvm_l2p_update_fn *update_l2p, void *priv)
{
struct nvme_ns *ns = q->queuedata;
- struct nvme_dev *dev = ns->dev;
struct nvme_nvm_command c = {};
- u32 len = queue_max_hw_sectors(dev->admin_q) << 9;
+ u32 len = queue_max_hw_sectors(ns->ctrl->admin_q) << 9;
u32 nlb_pr_rq = len / sizeof(u64);
u64 cmd_slba = slba;
void *entries;
@@ -334,10 +332,10 @@ static int nvme_nvm_get_l2p_tbl(struct request_queue *q, u64 slba, u32 nlb,
c.l2p.slba = cpu_to_le64(cmd_slba);
c.l2p.nlb = cpu_to_le32(cmd_nlb);

- ret = nvme_submit_sync_cmd(dev->admin_q,
+ ret = nvme_submit_sync_cmd(ns->ctrl->admin_q,
(struct nvme_command *)&c, entries, len);
if (ret) {
- dev_err(dev->dev, "L2P table transfer failed (%d)\n",
+ dev_err(ns->ctrl->dev, "L2P table transfer failed (%d)\n",
ret);
ret = -EIO;
goto out;
@@ -362,7 +360,7 @@ static int nvme_nvm_get_bb_tbl(struct request_queue *q, struct ppa_addr ppa,
void *priv)
{
struct nvme_ns *ns = q->queuedata;
- struct nvme_dev *dev = ns->dev;
+ struct nvme_ctrl *ctrl = ns->ctrl;
struct nvme_nvm_command c = {};
struct nvme_nvm_bb_tbl *bb_tbl;
int tblsz = sizeof(struct nvme_nvm_bb_tbl) + nr_blocks;
@@ -376,30 +374,30 @@ static int nvme_nvm_get_bb_tbl(struct request_queue *q, struct ppa_addr ppa,
if (!bb_tbl)
return -ENOMEM;

- ret = nvme_submit_sync_cmd(dev->admin_q, (struct nvme_command *)&c,
+ ret = nvme_submit_sync_cmd(ctrl->admin_q, (struct nvme_command *)&c,
bb_tbl, tblsz);
if (ret) {
- dev_err(dev->dev, "get bad block table failed (%d)\n", ret);
+ dev_err(ctrl->dev, "get bad block table failed (%d)\n", ret);
ret = -EIO;
goto out;
}

if (bb_tbl->tblid[0] != 'B' || bb_tbl->tblid[1] != 'B' ||
bb_tbl->tblid[2] != 'L' || bb_tbl->tblid[3] != 'T') {
- dev_err(dev->dev, "bbt format mismatch\n");
+ dev_err(ctrl->dev, "bbt format mismatch\n");
ret = -EINVAL;
goto out;
}

if (le16_to_cpu(bb_tbl->verid) != 1) {
ret = -EINVAL;
- dev_err(dev->dev, "bbt version not supported\n");
+ dev_err(ctrl->dev, "bbt version not supported\n");
goto out;
}

if (le32_to_cpu(bb_tbl->tblks) != nr_blocks) {
ret = -EINVAL;
- dev_err(dev->dev, "bbt unsuspected blocks returned (%u!=%u)",
+ dev_err(ctrl->dev, "bbt unsuspected blocks returned (%u!=%u)",
le32_to_cpu(bb_tbl->tblks), nr_blocks);
goto out;
}
@@ -419,7 +417,6 @@ static int nvme_nvm_set_bb_tbl(struct request_queue *q, struct nvm_rq *rqd,
int type)
{
struct nvme_ns *ns = q->queuedata;
- struct nvme_dev *dev = ns->dev;
struct nvme_nvm_command c = {};
int ret = 0;

@@ -429,10 +426,10 @@ static int nvme_nvm_set_bb_tbl(struct request_queue *q, struct nvm_rq *rqd,
c.set_bb.nlb = cpu_to_le16(rqd->nr_pages - 1);
c.set_bb.value = type;

- ret = nvme_submit_sync_cmd(dev->admin_q, (struct nvme_command *)&c,
+ ret = nvme_submit_sync_cmd(ns->ctrl->admin_q, (struct nvme_command *)&c,
NULL, 0);
if (ret)
- dev_err(dev->dev, "set bad block table failed (%d)\n", ret);
+ dev_err(ns->ctrl->dev, "set bad block table failed (%d)\n", ret);
return ret;
}

@@ -518,9 +515,8 @@ static int nvme_nvm_erase_block(struct request_queue *q, struct nvm_rq *rqd)
static void *nvme_nvm_create_dma_pool(struct request_queue *q, char *name)
{
struct nvme_ns *ns = q->queuedata;
- struct nvme_dev *dev = ns->dev;

- return dma_pool_create(name, dev->dev, PAGE_SIZE, PAGE_SIZE, 0);
+ return dma_pool_create(name, ns->ctrl->dev, PAGE_SIZE, PAGE_SIZE, 0);
}

static void nvme_nvm_destroy_dma_pool(void *pool)
@@ -573,8 +569,9 @@ void nvme_nvm_unregister(struct request_queue *q, char *disk_name)

int nvme_nvm_ns_supported(struct nvme_ns *ns, struct nvme_id_ns *id)
{
- struct nvme_dev *dev = ns->dev;
- struct pci_dev *pdev = to_pci_dev(dev->dev);
+ struct nvme_ctrl *ctrl = ns->ctrl;
+ /* XXX: this is poking into PCI structures from generic code! */
+ struct pci_dev *pdev = to_pci_dev(ctrl->dev);

/* QEMU NVMe simulator - PCI ID + Vendor specific bit */
if (pdev->vendor == PCI_VENDOR_ID_INTEL && pdev->device == 0x5845 &&

2015-12-02 21:07:41

by Jens Axboe

[permalink] [raw]
Subject: Re: linux-next: build failure after merge of the block tree

On 12/02/2015 09:45 AM, Christoph Hellwig wrote:
> Looks like I didn't test with CONFIG_NVM enabled, and neither did
> the build bot.
>
> Most of this is really weird crazy shit in the lightnvm support, though.
>
> Struct nvme_ns is a structure for the NVM I/O command set, and the
> lightnvm code has no business poking into it. Second, this commit:
>
> commit 47b3115ae7b799be8b77b0f024215ad4f68d6460
> Author: Wenwei Tao <[email protected]>
> Date: Fri Nov 20 13:47:55 2015 +0100
>
> nvme: lightnvm: use admin queues for admin cmds
>
> Does even more crazy stuff. If a function gets a request_queue parameter
> passed it'd better use that and not look for another one.
>
> Quick patch below, but this code will need some more attention:

Christoph, for-4.5/nvme also fails if integrity isn't enabled:

CC drivers/nvme/host/core.o
drivers/nvme/host/core.c: In function '__nvme_submit_user_cmd':
drivers/nvme/host/core.c:192:10: error: implicit declaration of function
'bio_integrity_alloc' [-Werror=implicit-function-declaration]
bip = bio_integrity_alloc(bio, GFP_KERNEL, 1);
^
drivers/nvme/host/core.c:192:8: warning: assignment makes pointer from
integer without a cast [-Wint-conversion]
bip = bio_integrity_alloc(bio, GFP_KERNEL, 1);
^
drivers/nvme/host/core.c:198:7: error: dereferencing pointer to
incomplete type 'struct bio_integrity_payload'
bip->bip_iter.bi_size = meta_len;
^
drivers/nvme/host/core.c:201:10: error: implicit declaration of function
'bio_integrity_add_page' [-Werror=implicit-function-declaration]
ret = bio_integrity_add_page(bio, virt_to_page(meta),
^


--
Jens Axboe

2015-12-02 21:14:04

by Keith Busch

[permalink] [raw]
Subject: Re: linux-next: build failure after merge of the block tree

On Wed, Dec 02, 2015 at 02:07:34PM -0700, Jens Axboe wrote:
> Christoph, for-4.5/nvme also fails if integrity isn't enabled:

I forgot about this since I've merged this in my repo to fix:

https://lkml.org/lkml/2015/10/26/546

That ok, or should we handle this differently?

2015-12-02 21:35:55

by Jens Axboe

[permalink] [raw]
Subject: Re: linux-next: build failure after merge of the block tree

On 12/02/2015 02:14 PM, Keith Busch wrote:
> On Wed, Dec 02, 2015 at 02:07:34PM -0700, Jens Axboe wrote:
>> Christoph, for-4.5/nvme also fails if integrity isn't enabled:
>
> I forgot about this since I've merged this in my repo to fix:
>
> https://lkml.org/lkml/2015/10/26/546
>
> That ok, or should we handle this differently?

I think that should make it compile, but the behavior will be a bit odd.
If you pass in meta and integrity isn't enabled, you'll get an ENOMEM
error. That seems a bit nonsensical.

We could make bio_integrity_alloc() return an error pointer. That way we
could retain the ifdefs in the bip code, and not let it spread to drivers.

--
Jens Axboe


Attachments:
int-err.patch (2.93 kB)

2015-12-03 07:35:26

by Christoph Hellwig

[permalink] [raw]
Subject: Re: linux-next: build failure after merge of the block tree

On Wed, Dec 02, 2015 at 02:27:40PM -0700, Jens Axboe wrote:
> On 12/02/2015 02:14 PM, Keith Busch wrote:
>> On Wed, Dec 02, 2015 at 02:07:34PM -0700, Jens Axboe wrote:
>>> Christoph, for-4.5/nvme also fails if integrity isn't enabled:
>>
>> I forgot about this since I've merged this in my repo to fix:
>>
>> https://lkml.org/lkml/2015/10/26/546
>>
>> That ok, or should we handle this differently?
>
> I think that should make it compile, but the behavior will be a bit odd. If
> you pass in meta and integrity isn't enabled, you'll get an ENOMEM error.
> That seems a bit nonsensical.
>
> We could make bio_integrity_alloc() return an error pointer. That way we
> could retain the ifdefs in the bip code, and not let it spread to drivers.

This looks reasonable to me.

2015-12-03 08:39:07

by Matias Bjørling

[permalink] [raw]
Subject: Re: linux-next: build failure after merge of the block tree

On 12/02/2015 10:07 PM, Jens Axboe wrote:
> On 12/02/2015 09:45 AM, Christoph Hellwig wrote:
>> Looks like I didn't test with CONFIG_NVM enabled, and neither did
>> the build bot.
>>
>> Most of this is really weird crazy shit in the lightnvm support, though.
>>
>> Struct nvme_ns is a structure for the NVM I/O command set, and the
>> lightnvm code has no business poking into it. Second, this commit:
>>
>> commit 47b3115ae7b799be8b77b0f024215ad4f68d6460
>> Author: Wenwei Tao <[email protected]>
>> Date: Fri Nov 20 13:47:55 2015 +0100
>>
>> nvme: lightnvm: use admin queues for admin cmds
>>
>> Does even more crazy stuff. If a function gets a request_queue parameter
>> passed it'd better use that and not look for another one.

A little crazy, yes. The reason is that the NVMe admin queues and NVMe
user queues are driven by different request queues. Previously this was
patched up by having two queues in the lightnvm core, one for admin and
another for user, but they were later merged into a single queue.

We could pass both request queues into the lightnvm core, but I would
prefer to handle it cleanly in the nvme-lightnvm integration.

2015-12-03 09:06:50

by Christoph Hellwig

[permalink] [raw]
Subject: Re: linux-next: build failure after merge of the block tree

On Thu, Dec 03, 2015 at 09:39:01AM +0100, Matias Bjørling wrote:
> A little crazy yes. The reason is that the NVMe admin queues and NVMe user
> queues are driven by different request queues. Previously this was patched
> up with having two queues in the lightnvm core. One for admin and another
> for user. But was later merged into a single queue.

Why? If you look at the current structure, we have the admin queue,
which is always allocated by the low-level driver, although it could and
should move to the core eventually. And then we have command-set-specific
request_queues for the I/O queues: one per NS for NVM currently, either
one per NS or one globally for LightNVM, and in Fabrics I currently
have another magic one :) Thanks to the tagset pointer in struct nvme_ctrl
that's really easy to handle.

2015-12-03 09:52:52

by Matias Bjørling

[permalink] [raw]
Subject: Re: linux-next: build failure after merge of the block tree

On 12/03/2015 10:06 AM, Christoph Hellwig wrote:
> On Thu, Dec 03, 2015 at 09:39:01AM +0100, Matias Bjørling wrote:
>> A little crazy yes. The reason is that the NVMe admin queues and NVMe user
>> queues are driven by different request queues. Previously this was patched
>> up with having two queues in the lightnvm core. One for admin and another
>> for user. But was later merged into a single queue.
>
> Why? If you look at the current structure we have the admin queue
> which is always allocated by the Low level driver, although it could and
> should move to the core eventually. And then we have Command set specific
> request_queues for the I/O queues. One per NS for NVM currently, either
> one per NS or one globally for LightNVM, and in Fabrics I currently
> have another magic one :) Due to the tagset pointer in struct nvme_ctrl
> that's really easy to handle.
>

The identify geometry command and the bad block commands are part of the
admin command set. But since all of these take a namespace id, surely
they can be moved and accessed naturally through the user queues.

Let me send out a revert for that patch.

2015-12-03 09:57:31

by Christoph Hellwig

[permalink] [raw]
Subject: Re: linux-next: build failure after merge of the block tree

On Thu, Dec 03, 2015 at 10:52:46AM +0100, Matias Bjørling wrote:
> The identify geometry command and bad block commands are part of the admin
> command set. Surely, as all these take a ns id, they can be moved and be
> accessed naturally through the user queues.

Nah, these admin commands should go through the admin queue - but
having a request_queue as the argument to the callback just seems rather
off if it's not the right one. Why can't you just pass the 'struct nvm_dev'
instead of the request_queue for these methods?

2015-12-03 10:09:12

by Matias Bjørling

[permalink] [raw]
Subject: Re: linux-next: build failure after merge of the block tree



On 12/03/2015 10:57 AM, Christoph Hellwig wrote:
> On Thu, Dec 03, 2015 at 10:52:46AM +0100, Matias Bjørling wrote:
>> The identify geometry command and bad block commands are part of the admin
>> command set. Surely, as all these take a ns id, they can be moved and be
>> accessed naturally through the user queues.
>
> Nah, these admin commands should go through the admin queue - but
> having a request_queue as the argument to the callback just seems rather
> off if it's not the right one. Why can't you just pass the 'struct nvm_dev'
> instead of the request_queue for these methods?
>

That'll work as well.

Similar to this?

diff --git i/drivers/lightnvm/core.c w/drivers/lightnvm/core.c
index 86ce887..4a8d1fe 100644
--- i/drivers/lightnvm/core.c
+++ w/drivers/lightnvm/core.c
@@ -74,7 +74,7 @@ EXPORT_SYMBOL(nvm_unregister_target);
void *nvm_dev_dma_alloc(struct nvm_dev *dev, gfp_t mem_flags,
dma_addr_t *dma_handler)
{
- return dev->ops->dev_dma_alloc(dev->q, dev->ppalist_pool, mem_flags,
+ return dev->ops->dev_dma_alloc(dev, dev->ppalist_pool, mem_flags,
dma_handler);
}
EXPORT_SYMBOL(nvm_dev_dma_alloc);
@@ -246,7 +246,7 @@ static int nvm_init(struct nvm_dev *dev)
if (!dev->q || !dev->ops)
return ret;

- if (dev->ops->identity(dev->q, &dev->identity)) {
+ if (dev->ops->identity(dev, &dev->identity)) {
pr_err("nvm: device could not be identified\n");
goto err;
}
@@ -326,8 +326,7 @@ int nvm_register(struct request_queue *q, char *disk_name,
}

if (dev->ops->max_phys_sect > 1) {
- dev->ppalist_pool = dev->ops->create_dma_pool(dev->q,
- "ppalist");
+ dev->ppalist_pool = dev->ops->create_dma_pool(dev, "ppalist");
if (!dev->ppalist_pool) {
pr_err("nvm: could not create ppa pool\n");
ret = -ENOMEM;
diff --git i/drivers/nvme/host/lightnvm.c w/drivers/nvme/host/lightnvm.c
index 762c9a7..15f2acb 100644
--- i/drivers/nvme/host/lightnvm.c
+++ w/drivers/nvme/host/lightnvm.c
@@ -271,9 +271,9 @@ static int init_grps(struct nvm_id *nvm_id, struct nvme_nvm_id *nvme_nvm_id)
return 0;
}

-static int nvme_nvm_identity(struct request_queue *q, struct nvm_id *nvm_id)
+static int nvme_nvm_identity(struct nvm_dev *nvmdev, struct nvm_id *nvm_id)
{
- struct nvme_ns *ns = q->queuedata;
+ struct nvme_ns *ns = nvmdev->q->queuedata;
struct nvme_dev *dev = ns->dev;
struct nvme_nvm_id *nvme_nvm_id;
struct nvme_nvm_command c = {};
@@ -308,10 +308,10 @@ out:
return ret;
}

-static int nvme_nvm_get_l2p_tbl(struct request_queue *q, u64 slba, u32 nlb,
+static int nvme_nvm_get_l2p_tbl(struct nvm_dev *nvmdev, u64 slba, u32 nlb,
nvm_l2p_update_fn *update_l2p, void *priv)
{
- struct nvme_ns *ns = q->queuedata;
+ struct nvme_ns *ns = nvmdev->q->queuedata;
struct nvme_dev *dev = ns->dev;
struct nvme_nvm_command c = {};
u32 len = queue_max_hw_sectors(dev->admin_q) << 9;
@@ -415,10 +415,10 @@ out:
return ret;
}

-static int nvme_nvm_set_bb_tbl(struct request_queue *q, struct nvm_rq *rqd,
+static int nvme_nvm_set_bb_tbl(struct nvm_dev *nvmdev, struct nvm_rq *rqd,
int type)
{
- struct nvme_ns *ns = q->queuedata;
+ struct nvme_ns *ns = nvmdev->q->queuedata;
struct nvme_dev *dev = ns->dev;
struct nvme_nvm_command c = {};
int ret = 0;
@@ -463,8 +463,9 @@ static void nvme_nvm_end_io(struct request *rq, int error)
blk_mq_free_request(rq);
}

-static int nvme_nvm_submit_io(struct request_queue *q, struct nvm_rq *rqd)
+static int nvme_nvm_submit_io(struct nvm_dev *dev, struct nvm_rq *rqd)
{
+ struct request_queue *q = dev->q;
struct nvme_ns *ns = q->queuedata;
struct request *rq;
struct bio *bio = rqd->bio;
@@ -502,8 +503,9 @@ static int nvme_nvm_submit_io(struct request_queue *q, struct nvm_rq *rqd)
return 0;
}

-static int nvme_nvm_erase_block(struct request_queue *q, struct nvm_rq *rqd)
+static int nvme_nvm_erase_block(struct nvm_dev *dev, struct nvm_rq *rqd)
{
+ struct request_queue *q = dev->q;
struct nvme_ns *ns = q->queuedata;
struct nvme_nvm_command c = {};

@@ -515,9 +517,9 @@ static int nvme_nvm_erase_block(struct request_queue *q, struct nvm_rq *rqd)
return nvme_submit_sync_cmd(q, (struct nvme_command *)&c, NULL, 0);
}

-static void *nvme_nvm_create_dma_pool(struct request_queue *q, char *name)
+static void *nvme_nvm_create_dma_pool(struct nvm_dev *nvmdev, char *name)
{
- struct nvme_ns *ns = q->queuedata;
+ struct nvme_ns *ns = nvmdev->q->queuedata;
struct nvme_dev *dev = ns->dev;

return dma_pool_create(name, dev->dev, PAGE_SIZE, PAGE_SIZE, 0);
@@ -530,7 +532,7 @@ static void nvme_nvm_destroy_dma_pool(void *pool)
dma_pool_destroy(dma_pool);
}

-static void *nvme_nvm_dev_dma_alloc(struct request_queue *q, void *pool,
+static void *nvme_nvm_dev_dma_alloc(struct nvm_dev *dev, void *pool,
gfp_t mem_flags, dma_addr_t *dma_handler)
{
return dma_pool_alloc(pool, mem_flags, dma_handler);
diff --git i/include/linux/lightnvm.h w/include/linux/lightnvm.h
index 935ef38..034117b 100644
--- i/include/linux/lightnvm.h
+++ w/include/linux/lightnvm.h
@@ -183,17 +183,17 @@ struct nvm_block;

typedef int (nvm_l2p_update_fn)(u64, u32, __le64 *, void *);
typedef int (nvm_bb_update_fn)(struct ppa_addr, int, u8 *, void *);
-typedef int (nvm_id_fn)(struct request_queue *, struct nvm_id *);
-typedef int (nvm_get_l2p_tbl_fn)(struct request_queue *, u64, u32,
+typedef int (nvm_id_fn)(struct nvm_dev *, struct nvm_id *);
+typedef int (nvm_get_l2p_tbl_fn)(struct nvm_dev *, u64, u32,
nvm_l2p_update_fn *, void *);
typedef int (nvm_op_bb_tbl_fn)(struct nvm_dev *, struct ppa_addr, int,
nvm_bb_update_fn *, void *);
-typedef int (nvm_op_set_bb_fn)(struct request_queue *, struct nvm_rq *, int);
-typedef int (nvm_submit_io_fn)(struct request_queue *, struct nvm_rq *);
-typedef int (nvm_erase_blk_fn)(struct request_queue *, struct nvm_rq *);
-typedef void *(nvm_create_dma_pool_fn)(struct request_queue *, char *);
+typedef int (nvm_op_set_bb_fn)(struct nvm_dev *, struct nvm_rq *, int);
+typedef int (nvm_submit_io_fn)(struct nvm_dev *, struct nvm_rq *);
+typedef int (nvm_erase_blk_fn)(struct nvm_dev *, struct nvm_rq *);
+typedef void *(nvm_create_dma_pool_fn)(struct nvm_dev *, char *);
typedef void (nvm_destroy_dma_pool_fn)(void *);
-typedef void *(nvm_dev_dma_alloc_fn)(struct request_queue *, void *, gfp_t,
+typedef void *(nvm_dev_dma_alloc_fn)(struct nvm_dev *, void *, gfp_t,
dma_addr_t *);
typedef void (nvm_dev_dma_free_fn)(void *, void*, dma_addr_t);

2015-12-03 10:21:38

by Christoph Hellwig

[permalink] [raw]
Subject: Re: linux-next: build failure after merge of the block tree

On Thu, Dec 03, 2015 at 11:09:03AM +0100, Matias Bjørling wrote:
> Similar to this?

For the interface, yes. Now just get rid of using nvme_ns entirely:
it seems like you just want ns_id and lba_shift, and those should fit
well into nvm_dev, I think.

2015-12-03 11:08:58

by Matias Bjørling

[permalink] [raw]
Subject: Re: linux-next: build failure after merge of the block tree

On 12/03/2015 11:21 AM, Christoph Hellwig wrote:
> On Thu, Dec 03, 2015 at 11:09:03AM +0100, Matias Bjørling wrote:
>> Similar to this?
>
> For the interface yes. Now just get rid of using nvme_ns entirely -
> seems like you just want ns_id and lba_shift, and those should fit
> well into nvm_dev I think.
>

What is the reason to keep nvme_ns internal to the nvme core?

We can definitely move ->nsid and lba_shift into nvm_dev. My only
concern is that it moves a small part of the nvme logic into the
lightnvm core.

2015-12-03 16:43:05

by Christoph Hellwig

[permalink] [raw]
Subject: Re: linux-next: build failure after merge of the block tree

On Thu, Dec 03, 2015 at 12:07:23PM +0100, Matias Bjørling wrote:
> What is the reason to keep the nvme_ns internally to the nvme core?
>
> We can definitely move ->nsid and the lba_shift into nvm_dev. Only thing I
> have is that it moves a small part of nvme logic into the lightnvm core.

It's a structure specific to the NVM command set and the block device
used to implement it in Linux, similar to how you wouldn't use the SCSI
disk driver's data structures to implement the tape driver, for example.

2015-12-04 12:16:26

by Matias Bjørling

[permalink] [raw]
Subject: Re: linux-next: build failure after merge of the block tree



On 12/03/2015 05:42 PM, Christoph Hellwig wrote:
> On Thu, Dec 03, 2015 at 12:07:23PM +0100, Matias Bjørling wrote:
>> What is the reason to keep the nvme_ns internally to the nvme core?
>>
>> We can definitely move ->nsid and the lba_shift into nvm_dev. Only thing I
>> have is that it moves a small part of nvme logic into the lightnvm core.
>
> It's a structure specific to the NVM command set, and the block device
> used to implement it in Linux. Similar to how you wouldn't use the SCSI
> disk driver data structures to implement the tape driver for example.
>

Granted. Taking a step back to see how it would look in the
specification: in that case, identify geometry should probably replace
the identify namespace command, and the controller identify should be
extended with a method to expose identify geometry structures. It comes
back to the fact that we don't have the appropriate bits in the
specification (as there isn't a vendor-specific command set (CSS) in
place at the moment). I'd rather not push specific meanings onto
reserved bits in the standard (and such patches probably won't be
accepted), only to have them overruled by vendor-specific PCI IDs.

NVMe and LightNVM still share some common ground, as I/O
submission/completion still follows the normal I/O path. The
submission/completion queue entry sizes (SQES/CQES), the ONCS bits, and
the number of namespaces could probably be either abstracted or
duplicated in vendor-specific bits.

I'll work on getting ->nsid and lba_shift abstracted away. The best
solution, however, would be to get some bits into the spec and implement
it properly.