2018-02-13 14:07:18

by Javier González

Subject: [PATCH 0/8] lightnvm: pblk: implement support for 2.0

This patchset implements support for 2.0 spec in pblk.

The first patch abstracts the geometry retrieved from the identify
command and allows both specs to coexist under the same geometry
description. From there on, we build the missing 2.0 support in lightnvm
core: address format, address conversion, and the report chunk get log page.
The last three patches implement the actual support for 2.0 in pblk.
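
For context on the core changes: a 2.0 physical address packs group,
parallel unit, chunk and logical block fields whose widths are reported by
the device, and the conversion helpers compute the field offsets
cumulatively from the least significant bit. Below is a stand-alone,
simplified sketch of that composition (userspace C, made-up field widths);
it is illustration only, not the kernel code added by this series.

/*
 * Illustration only (not kernel code): how a 2.0 address format is
 * composed from the group/PU/chunk/LBA widths reported by the device.
 * Offsets grow from the least significant bit, as in the series below.
 */
#include <stdint.h>
#include <stdio.h>

struct addrf_20 {
        uint8_t ch_len, lun_len, chk_len, sec_len;
        uint8_t ch_off, lun_off, chk_off, sec_off;
};

static void addrf_20_init(struct addrf_20 *f, uint8_t grp_len,
                          uint8_t pu_len, uint8_t chk_len, uint8_t lba_len)
{
        f->ch_len = grp_len;    /* 2.0 group maps to a channel */
        f->lun_len = pu_len;    /* 2.0 parallel unit maps to a LUN */
        f->chk_len = chk_len;
        f->sec_len = lba_len;

        f->sec_off = 0;
        f->chk_off = f->sec_off + f->sec_len;
        f->lun_off = f->chk_off + f->chk_len;
        f->ch_off = f->lun_off + f->lun_len;
}

static uint64_t addrf_20_pack(const struct addrf_20 *f, uint64_t ch,
                              uint64_t lun, uint64_t chk, uint64_t sec)
{
        return (ch << f->ch_off) | (lun << f->lun_off) |
               (chk << f->chk_off) | (sec << f->sec_off);
}

int main(void)
{
        struct addrf_20 f;

        addrf_20_init(&f, 3, 4, 12, 13);        /* made-up widths */
        printf("ppa = 0x%llx\n",
               (unsigned long long)addrf_20_pack(&f, 1, 2, 100, 7));
        return 0;
}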

Note that we only port existing functionality to 2.0. New functionality
enabled by 2.0 (e.g., wear-leveling) will be submitted in follow-up
patches.

These patches apply on top of Matias' latest patches.

Javier González (8):
lightnvm: exposed generic geometry to targets
lightnvm: show generic geometry in sysfs
lightnvm: add support for 2.0 address format
lightnvm: convert address based on spec. version
lightnvm: implement get log report chunk helpers
lightnvm: pblk: implement get log report chunk
lightnvm: pblk: refactor init/exit sequences
lightnvm: pblk: implement 2.0 support

drivers/lightnvm/core.c | 171 ++++-----
drivers/lightnvm/pblk-core.c | 134 +++++--
drivers/lightnvm/pblk-gc.c | 2 +-
drivers/lightnvm/pblk-init.c | 787 +++++++++++++++++++++++----------------
drivers/lightnvm/pblk-read.c | 2 +-
drivers/lightnvm/pblk-recovery.c | 14 +-
drivers/lightnvm/pblk-rl.c | 2 +-
drivers/lightnvm/pblk-sysfs.c | 130 ++++++-
drivers/lightnvm/pblk-write.c | 2 +-
drivers/lightnvm/pblk.h | 253 +++++++++----
drivers/nvme/host/lightnvm.c | 553 ++++++++++++++++++---------
include/linux/lightnvm.h | 317 ++++++++++------
12 files changed, 1553 insertions(+), 814 deletions(-)

--
2.7.4



2018-02-13 14:08:01

by Javier González

Subject: [PATCH 1/8] lightnvm: exposed generic geometry to targets

With the inclusion of 2.0 support, we need a generic geometry that
describes the OCSSD independently of the specification that it
implements. Otherwise, geometry-specific code is required, which
complicates targets and makes maintenance much more difficult.

This patch refactors the identify path and populates a generic geometry
that is then given to the targets on creation. Since the 2.0 geometry is
much more abstract than 1.2, the generic geometry resembles 2.0, but it
is not identical, as it needs to understand 1.2 abstractions too.
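
As a rough sketch of what targets now see, the structure below mirrors the
field names used in this patch (num_ch, num_lun, all_luns and the
spec-independent values under 'c'). It is stand-alone userspace C for
illustration only; the real definitions live in include/linux/lightnvm.h
and may differ in layout and types.

/*
 * Illustration only (userspace): rough shape of the generic geometry.
 * The real structures are defined in include/linux/lightnvm.h.
 */
#include <stdio.h>

struct nvm_common_geo {
        int version;            /* 1.2 or 2.0 identify format */
        int num_chk;            /* chunks per LUN (2.0: per parallel unit) */
        int clba;               /* sectors per chunk */
        int csecs;              /* sector size in bytes */
        int ws_min, ws_opt;     /* minimal/optimal write size in sectors */
        int mw_cunits;          /* written cache units before reads */
};

struct nvm_dev_geo {
        int num_ch;             /* channels (2.0: groups) */
        int num_lun;            /* LUNs per channel (2.0: PUs per group) */
        int all_luns;           /* num_ch * num_lun */
        struct nvm_common_geo c;        /* spec-independent media values */
};

/* a derived value a target computes without caring about the spec */
static long dev_total_secs(const struct nvm_dev_geo *g)
{
        return (long)g->all_luns * g->c.num_chk * g->c.clba;
}

int main(void)
{
        struct nvm_dev_geo g = {        /* made-up example geometry */
                .num_ch = 16, .num_lun = 4, .all_luns = 64,
                .c = { .num_chk = 1020, .clba = 4096, .csecs = 4096,
                       .ws_min = 4, .ws_opt = 8, .mw_cunits = 8 },
        };

        printf("total sectors: %ld\n", dev_total_secs(&g));
        return 0;
}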

Signed-off-by: Javier González <[email protected]>
---
drivers/lightnvm/core.c | 143 ++++++---------
drivers/lightnvm/pblk-core.c | 16 +-
drivers/lightnvm/pblk-gc.c | 2 +-
drivers/lightnvm/pblk-init.c | 149 ++++++++-------
drivers/lightnvm/pblk-read.c | 2 +-
drivers/lightnvm/pblk-recovery.c | 14 +-
drivers/lightnvm/pblk-rl.c | 2 +-
drivers/lightnvm/pblk-sysfs.c | 39 ++--
drivers/lightnvm/pblk-write.c | 2 +-
drivers/lightnvm/pblk.h | 105 +++++------
drivers/nvme/host/lightnvm.c | 379 ++++++++++++++++++++++++---------------
include/linux/lightnvm.h | 220 +++++++++++++----------
12 files changed, 586 insertions(+), 487 deletions(-)

diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c
index 9b1255b3e05e..80492fa6ee76 100644
--- a/drivers/lightnvm/core.c
+++ b/drivers/lightnvm/core.c
@@ -111,6 +111,7 @@ static void nvm_release_luns_err(struct nvm_dev *dev, int lun_begin,
static void nvm_remove_tgt_dev(struct nvm_tgt_dev *tgt_dev, int clear)
{
struct nvm_dev *dev = tgt_dev->parent;
+ struct nvm_dev_geo *dev_geo = &dev->dev_geo;
struct nvm_dev_map *dev_map = tgt_dev->map;
int i, j;

@@ -122,7 +123,7 @@ static void nvm_remove_tgt_dev(struct nvm_tgt_dev *tgt_dev, int clear)
if (clear) {
for (j = 0; j < ch_map->nr_luns; j++) {
int lun = j + lun_offs[j];
- int lunid = (ch * dev->geo.nr_luns) + lun;
+ int lunid = (ch * dev_geo->num_lun) + lun;

WARN_ON(!test_and_clear_bit(lunid,
dev->lun_map));
@@ -143,19 +144,20 @@ static struct nvm_tgt_dev *nvm_create_tgt_dev(struct nvm_dev *dev,
u16 lun_begin, u16 lun_end,
u16 op)
{
+ struct nvm_dev_geo *dev_geo = &dev->dev_geo;
struct nvm_tgt_dev *tgt_dev = NULL;
struct nvm_dev_map *dev_rmap = dev->rmap;
struct nvm_dev_map *dev_map;
struct ppa_addr *luns;
int nr_luns = lun_end - lun_begin + 1;
int luns_left = nr_luns;
- int nr_chnls = nr_luns / dev->geo.nr_luns;
- int nr_chnls_mod = nr_luns % dev->geo.nr_luns;
- int bch = lun_begin / dev->geo.nr_luns;
- int blun = lun_begin % dev->geo.nr_luns;
+ int nr_chnls = nr_luns / dev_geo->num_lun;
+ int nr_chnls_mod = nr_luns % dev_geo->num_lun;
+ int bch = lun_begin / dev_geo->num_lun;
+ int blun = lun_begin % dev_geo->num_lun;
int lunid = 0;
int lun_balanced = 1;
- int prev_nr_luns;
+ int sec_per_lun, prev_nr_luns;
int i, j;

nr_chnls = (nr_chnls_mod == 0) ? nr_chnls : nr_chnls + 1;
@@ -173,15 +175,15 @@ static struct nvm_tgt_dev *nvm_create_tgt_dev(struct nvm_dev *dev,
if (!luns)
goto err_luns;

- prev_nr_luns = (luns_left > dev->geo.nr_luns) ?
- dev->geo.nr_luns : luns_left;
+ prev_nr_luns = (luns_left > dev_geo->num_lun) ?
+ dev_geo->num_lun : luns_left;
for (i = 0; i < nr_chnls; i++) {
struct nvm_ch_map *ch_rmap = &dev_rmap->chnls[i + bch];
int *lun_roffs = ch_rmap->lun_offs;
struct nvm_ch_map *ch_map = &dev_map->chnls[i];
int *lun_offs;
- int luns_in_chnl = (luns_left > dev->geo.nr_luns) ?
- dev->geo.nr_luns : luns_left;
+ int luns_in_chnl = (luns_left > dev_geo->num_lun) ?
+ dev_geo->num_lun : luns_left;

if (lun_balanced && prev_nr_luns != luns_in_chnl)
lun_balanced = 0;
@@ -215,18 +217,23 @@ static struct nvm_tgt_dev *nvm_create_tgt_dev(struct nvm_dev *dev,
if (!tgt_dev)
goto err_ch;

- memcpy(&tgt_dev->geo, &dev->geo, sizeof(struct nvm_geo));
/* Target device only owns a portion of the physical device */
- tgt_dev->geo.nr_chnls = nr_chnls;
+ tgt_dev->geo.num_ch = nr_chnls;
+ tgt_dev->geo.num_lun = (lun_balanced) ? prev_nr_luns : -1;
tgt_dev->geo.all_luns = nr_luns;
- tgt_dev->geo.nr_luns = (lun_balanced) ? prev_nr_luns : -1;
+ tgt_dev->geo.all_chunks = nr_luns * dev_geo->c.num_chk;
+
+ tgt_dev->geo.max_rq_size = dev->ops->max_phys_sect * dev_geo->c.csecs;
tgt_dev->geo.op = op;
- tgt_dev->total_secs = nr_luns * tgt_dev->geo.sec_per_lun;
+
+ sec_per_lun = dev_geo->c.clba * dev_geo->c.num_chk;
+ tgt_dev->geo.total_secs = nr_luns * sec_per_lun;
+
+ tgt_dev->geo.c = dev_geo->c;
+
tgt_dev->q = dev->q;
tgt_dev->map = dev_map;
tgt_dev->luns = luns;
- memcpy(&tgt_dev->identity, &dev->identity, sizeof(struct nvm_id));
-
tgt_dev->parent = dev;

return tgt_dev;
@@ -268,12 +275,12 @@ static struct nvm_tgt_type *nvm_find_target_type(const char *name)
return tt;
}

-static int nvm_config_check_luns(struct nvm_geo *geo, int lun_begin,
+static int nvm_config_check_luns(struct nvm_dev_geo *dev_geo, int lun_begin,
int lun_end)
{
- if (lun_begin > lun_end || lun_end >= geo->all_luns) {
+ if (lun_begin > lun_end || lun_end >= dev_geo->all_luns) {
pr_err("nvm: lun out of bound (%u:%u > %u)\n",
- lun_begin, lun_end, geo->all_luns - 1);
+ lun_begin, lun_end, dev_geo->all_luns - 1);
return -EINVAL;
}

@@ -283,24 +290,24 @@ static int nvm_config_check_luns(struct nvm_geo *geo, int lun_begin,
static int __nvm_config_simple(struct nvm_dev *dev,
struct nvm_ioctl_create_simple *s)
{
- struct nvm_geo *geo = &dev->geo;
+ struct nvm_dev_geo *dev_geo = &dev->dev_geo;

if (s->lun_begin == -1 && s->lun_end == -1) {
s->lun_begin = 0;
- s->lun_end = geo->all_luns - 1;
+ s->lun_end = dev_geo->all_luns - 1;
}

- return nvm_config_check_luns(geo, s->lun_begin, s->lun_end);
+ return nvm_config_check_luns(dev_geo, s->lun_begin, s->lun_end);
}

static int __nvm_config_extended(struct nvm_dev *dev,
struct nvm_ioctl_create_extended *e)
{
- struct nvm_geo *geo = &dev->geo;
+ struct nvm_dev_geo *dev_geo = &dev->dev_geo;

if (e->lun_begin == 0xFFFF && e->lun_end == 0xFFFF) {
e->lun_begin = 0;
- e->lun_end = dev->geo.all_luns - 1;
+ e->lun_end = dev_geo->all_luns - 1;
}

/* op not set falls into target's default */
@@ -313,7 +320,7 @@ static int __nvm_config_extended(struct nvm_dev *dev,
return -EINVAL;
}

- return nvm_config_check_luns(geo, e->lun_begin, e->lun_end);
+ return nvm_config_check_luns(dev_geo, e->lun_begin, e->lun_end);
}

static int nvm_create_tgt(struct nvm_dev *dev, struct nvm_ioctl_create *create)
@@ -496,6 +503,7 @@ static int nvm_remove_tgt(struct nvm_dev *dev, struct nvm_ioctl_remove *remove)

static int nvm_register_map(struct nvm_dev *dev)
{
+ struct nvm_dev_geo *dev_geo = &dev->dev_geo;
struct nvm_dev_map *rmap;
int i, j;

@@ -503,15 +511,15 @@ static int nvm_register_map(struct nvm_dev *dev)
if (!rmap)
goto err_rmap;

- rmap->chnls = kcalloc(dev->geo.nr_chnls, sizeof(struct nvm_ch_map),
+ rmap->chnls = kcalloc(dev_geo->num_ch, sizeof(struct nvm_ch_map),
GFP_KERNEL);
if (!rmap->chnls)
goto err_chnls;

- for (i = 0; i < dev->geo.nr_chnls; i++) {
+ for (i = 0; i < dev_geo->num_ch; i++) {
struct nvm_ch_map *ch_rmap;
int *lun_roffs;
- int luns_in_chnl = dev->geo.nr_luns;
+ int luns_in_chnl = dev_geo->num_lun;

ch_rmap = &rmap->chnls[i];

@@ -542,10 +550,11 @@ static int nvm_register_map(struct nvm_dev *dev)

static void nvm_unregister_map(struct nvm_dev *dev)
{
+ struct nvm_dev_geo *dev_geo = &dev->dev_geo;
struct nvm_dev_map *rmap = dev->rmap;
int i;

- for (i = 0; i < dev->geo.nr_chnls; i++)
+ for (i = 0; i < dev_geo->num_ch; i++)
kfree(rmap->chnls[i].lun_offs);

kfree(rmap->chnls);
@@ -674,7 +683,7 @@ static int nvm_set_rqd_ppalist(struct nvm_tgt_dev *tgt_dev, struct nvm_rq *rqd,
int i, plane_cnt, pl_idx;
struct ppa_addr ppa;

- if (geo->plane_mode == NVM_PLANE_SINGLE && nr_ppas == 1) {
+ if (geo->c.pln_mode == NVM_PLANE_SINGLE && nr_ppas == 1) {
rqd->nr_ppas = nr_ppas;
rqd->ppa_addr = ppas[0];

@@ -688,7 +697,7 @@ static int nvm_set_rqd_ppalist(struct nvm_tgt_dev *tgt_dev, struct nvm_rq *rqd,
return -ENOMEM;
}

- plane_cnt = geo->plane_mode;
+ plane_cnt = geo->c.pln_mode;
rqd->nr_ppas *= plane_cnt;

for (i = 0; i < nr_ppas; i++) {
@@ -811,18 +820,18 @@ EXPORT_SYMBOL(nvm_end_io);
*/
int nvm_bb_tbl_fold(struct nvm_dev *dev, u8 *blks, int nr_blks)
{
- struct nvm_geo *geo = &dev->geo;
+ struct nvm_dev_geo *dev_geo = &dev->dev_geo;
int blk, offset, pl, blktype;

- if (nr_blks != geo->nr_chks * geo->plane_mode)
+ if (nr_blks != dev_geo->c.num_chk * dev_geo->c.pln_mode)
return -EINVAL;

- for (blk = 0; blk < geo->nr_chks; blk++) {
- offset = blk * geo->plane_mode;
+ for (blk = 0; blk < dev_geo->c.num_chk; blk++) {
+ offset = blk * dev_geo->c.pln_mode;
blktype = blks[offset];

/* Bad blocks on any planes take precedence over other types */
- for (pl = 0; pl < geo->plane_mode; pl++) {
+ for (pl = 0; pl < dev_geo->c.pln_mode; pl++) {
if (blks[offset + pl] &
(NVM_BLK_T_BAD|NVM_BLK_T_GRWN_BAD)) {
blktype = blks[offset + pl];
@@ -833,7 +842,7 @@ int nvm_bb_tbl_fold(struct nvm_dev *dev, u8 *blks, int nr_blks)
blks[blk] = blktype;
}

- return geo->nr_chks;
+ return dev_geo->c.num_chk;
}
EXPORT_SYMBOL(nvm_bb_tbl_fold);

@@ -850,44 +859,10 @@ EXPORT_SYMBOL(nvm_get_tgt_bb_tbl);

static int nvm_core_init(struct nvm_dev *dev)
{
- struct nvm_id *id = &dev->identity;
- struct nvm_geo *geo = &dev->geo;
+ struct nvm_dev_geo *dev_geo = &dev->dev_geo;
int ret;

- memcpy(&geo->ppaf, &id->ppaf, sizeof(struct nvm_addr_format));
-
- if (id->mtype != 0) {
- pr_err("nvm: memory type not supported\n");
- return -EINVAL;
- }
-
- /* Whole device values */
- geo->nr_chnls = id->num_ch;
- geo->nr_luns = id->num_lun;
-
- /* Generic device geometry values */
- geo->ws_min = id->ws_min;
- geo->ws_opt = id->ws_opt;
- geo->ws_seq = id->ws_seq;
- geo->ws_per_chk = id->ws_per_chk;
- geo->nr_chks = id->num_chk;
- geo->sec_size = id->csecs;
- geo->oob_size = id->sos;
- geo->mccap = id->mccap;
- geo->max_rq_size = dev->ops->max_phys_sect * geo->sec_size;
-
- geo->sec_per_chk = id->clba;
- geo->sec_per_lun = geo->sec_per_chk * geo->nr_chks;
- geo->all_luns = geo->nr_luns * geo->nr_chnls;
-
- /* 1.2 spec device geometry values */
- geo->plane_mode = 1 << geo->ws_seq;
- geo->nr_planes = geo->ws_opt / geo->ws_min;
- geo->sec_per_pg = geo->ws_min;
- geo->sec_per_pl = geo->sec_per_pg * geo->nr_planes;
-
- dev->total_secs = geo->all_luns * geo->sec_per_lun;
- dev->lun_map = kcalloc(BITS_TO_LONGS(geo->all_luns),
+ dev->lun_map = kcalloc(BITS_TO_LONGS(dev_geo->all_luns),
sizeof(unsigned long), GFP_KERNEL);
if (!dev->lun_map)
return -ENOMEM;
@@ -901,7 +876,7 @@ static int nvm_core_init(struct nvm_dev *dev)
if (ret)
goto err_fmtype;

- blk_queue_logical_block_size(dev->q, geo->sec_size);
+ blk_queue_logical_block_size(dev->q, dev_geo->c.csecs);
return 0;
err_fmtype:
kfree(dev->lun_map);
@@ -923,19 +898,17 @@ static void nvm_free(struct nvm_dev *dev)

static int nvm_init(struct nvm_dev *dev)
{
- struct nvm_geo *geo = &dev->geo;
+ struct nvm_dev_geo *dev_geo = &dev->dev_geo;
int ret = -EINVAL;

- if (dev->ops->identity(dev, &dev->identity)) {
+ if (dev->ops->identity(dev)) {
pr_err("nvm: device could not be identified\n");
goto err;
}

- if (dev->identity.ver_id != 1 && dev->identity.ver_id != 2) {
- pr_err("nvm: device ver_id %d not supported by kernel.\n",
- dev->identity.ver_id);
- goto err;
- }
+ pr_debug("nvm: ver:%u.%u nvm_vendor:%x\n",
+ dev_geo->major_ver_id, dev_geo->minor_ver_id,
+ dev_geo->c.vmnt);

ret = nvm_core_init(dev);
if (ret) {
@@ -943,10 +916,10 @@ static int nvm_init(struct nvm_dev *dev)
goto err;
}

- pr_info("nvm: registered %s [%u/%u/%u/%u/%u/%u]\n",
- dev->name, geo->sec_per_pg, geo->nr_planes,
- geo->ws_per_chk, geo->nr_chks,
- geo->all_luns, geo->nr_chnls);
+ pr_info("nvm: registered %s [%u/%u/%u/%u/%u]\n",
+ dev->name, dev_geo->c.ws_min, dev_geo->c.ws_opt,
+ dev_geo->c.num_chk, dev_geo->all_luns,
+ dev_geo->num_ch);
return 0;
err:
pr_err("nvm: failed to initialize nvm\n");
diff --git a/drivers/lightnvm/pblk-core.c b/drivers/lightnvm/pblk-core.c
index 22e61cd4f801..519af8b9eab7 100644
--- a/drivers/lightnvm/pblk-core.c
+++ b/drivers/lightnvm/pblk-core.c
@@ -613,7 +613,7 @@ static int pblk_line_submit_emeta_io(struct pblk *pblk, struct pblk_line *line,
memset(&rqd, 0, sizeof(struct nvm_rq));

rq_ppas = pblk_calc_secs(pblk, left_ppas, 0);
- rq_len = rq_ppas * geo->sec_size;
+ rq_len = rq_ppas * geo->c.csecs;

bio = pblk_bio_map_addr(pblk, emeta_buf, rq_ppas, rq_len,
l_mg->emeta_alloc_type, GFP_KERNEL);
@@ -722,7 +722,7 @@ u64 pblk_line_smeta_start(struct pblk *pblk, struct pblk_line *line)
if (bit >= lm->blk_per_line)
return -1;

- return bit * geo->sec_per_pl;
+ return bit * geo->c.ws_opt;
}

static int pblk_line_submit_smeta_io(struct pblk *pblk, struct pblk_line *line,
@@ -1035,19 +1035,19 @@ static int pblk_line_init_bb(struct pblk *pblk, struct pblk_line *line,
/* Capture bad block information on line mapping bitmaps */
while ((bit = find_next_bit(line->blk_bitmap, lm->blk_per_line,
bit + 1)) < lm->blk_per_line) {
- off = bit * geo->sec_per_pl;
+ off = bit * geo->c.ws_opt;
bitmap_shift_left(l_mg->bb_aux, l_mg->bb_template, off,
lm->sec_per_line);
bitmap_or(line->map_bitmap, line->map_bitmap, l_mg->bb_aux,
lm->sec_per_line);
- line->sec_in_line -= geo->sec_per_chk;
+ line->sec_in_line -= geo->c.clba;
if (bit >= lm->emeta_bb)
nr_bb++;
}

/* Mark smeta metadata sectors as bad sectors */
bit = find_first_zero_bit(line->blk_bitmap, lm->blk_per_line);
- off = bit * geo->sec_per_pl;
+ off = bit * geo->c.ws_opt;
bitmap_set(line->map_bitmap, off, lm->smeta_sec);
line->sec_in_line -= lm->smeta_sec;
line->smeta_ssec = off;
@@ -1066,10 +1066,10 @@ static int pblk_line_init_bb(struct pblk *pblk, struct pblk_line *line,
emeta_secs = lm->emeta_sec[0];
off = lm->sec_per_line;
while (emeta_secs) {
- off -= geo->sec_per_pl;
+ off -= geo->c.ws_opt;
if (!test_bit(off, line->invalid_bitmap)) {
- bitmap_set(line->invalid_bitmap, off, geo->sec_per_pl);
- emeta_secs -= geo->sec_per_pl;
+ bitmap_set(line->invalid_bitmap, off, geo->c.ws_opt);
+ emeta_secs -= geo->c.ws_opt;
}
}

diff --git a/drivers/lightnvm/pblk-gc.c b/drivers/lightnvm/pblk-gc.c
index 320f99af99e9..16afea3f5541 100644
--- a/drivers/lightnvm/pblk-gc.c
+++ b/drivers/lightnvm/pblk-gc.c
@@ -88,7 +88,7 @@ static void pblk_gc_line_ws(struct work_struct *work)

up(&gc->gc_sem);

- gc_rq->data = vmalloc(gc_rq->nr_secs * geo->sec_size);
+ gc_rq->data = vmalloc(gc_rq->nr_secs * geo->c.csecs);
if (!gc_rq->data) {
pr_err("pblk: could not GC line:%d (%d/%d)\n",
line->id, *line->vsc, gc_rq->nr_secs);
diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c
index 86a94a7faa96..72b7902e5d1c 100644
--- a/drivers/lightnvm/pblk-init.c
+++ b/drivers/lightnvm/pblk-init.c
@@ -80,7 +80,7 @@ static size_t pblk_trans_map_size(struct pblk *pblk)
{
int entry_size = 8;

- if (pblk->ppaf_bitsize < 32)
+ if (pblk->addrf_len < 32)
entry_size = 4;

return entry_size * pblk->rl.nr_secs;
@@ -146,7 +146,7 @@ static int pblk_rwb_init(struct pblk *pblk)
return -ENOMEM;

power_size = get_count_order(nr_entries);
- power_seg_sz = get_count_order(geo->sec_size);
+ power_seg_sz = get_count_order(geo->c.csecs);

return pblk_rb_init(&pblk->rwb, entries, power_size, power_seg_sz);
}
@@ -154,47 +154,63 @@ static int pblk_rwb_init(struct pblk *pblk)
/* Minimum pages needed within a lun */
#define ADDR_POOL_SIZE 64

-static int pblk_set_ppaf(struct pblk *pblk)
+static int pblk_set_addrf_12(struct nvm_geo *geo,
+ struct nvm_addr_format_12 *dst)
{
- struct nvm_tgt_dev *dev = pblk->dev;
- struct nvm_geo *geo = &dev->geo;
- struct nvm_addr_format ppaf = geo->ppaf;
+ struct nvm_addr_format_12 *src =
+ (struct nvm_addr_format_12 *)&geo->c.addrf;
int power_len;

/* Re-calculate channel and lun format to adapt to configuration */
- power_len = get_count_order(geo->nr_chnls);
- if (1 << power_len != geo->nr_chnls) {
+ power_len = get_count_order(geo->num_ch);
+ if (1 << power_len != geo->num_ch) {
pr_err("pblk: supports only power-of-two channel config.\n");
return -EINVAL;
}
- ppaf.ch_len = power_len;
+ dst->ch_len = power_len;

- power_len = get_count_order(geo->nr_luns);
- if (1 << power_len != geo->nr_luns) {
+ power_len = get_count_order(geo->num_lun);
+ if (1 << power_len != geo->num_lun) {
pr_err("pblk: supports only power-of-two LUN config.\n");
return -EINVAL;
}
- ppaf.lun_len = power_len;
+ dst->lun_len = power_len;

- pblk->ppaf.sec_offset = 0;
- pblk->ppaf.pln_offset = ppaf.sect_len;
- pblk->ppaf.ch_offset = pblk->ppaf.pln_offset + ppaf.pln_len;
- pblk->ppaf.lun_offset = pblk->ppaf.ch_offset + ppaf.ch_len;
- pblk->ppaf.pg_offset = pblk->ppaf.lun_offset + ppaf.lun_len;
- pblk->ppaf.blk_offset = pblk->ppaf.pg_offset + ppaf.pg_len;
- pblk->ppaf.sec_mask = (1ULL << ppaf.sect_len) - 1;
- pblk->ppaf.pln_mask = ((1ULL << ppaf.pln_len) - 1) <<
- pblk->ppaf.pln_offset;
- pblk->ppaf.ch_mask = ((1ULL << ppaf.ch_len) - 1) <<
- pblk->ppaf.ch_offset;
- pblk->ppaf.lun_mask = ((1ULL << ppaf.lun_len) - 1) <<
- pblk->ppaf.lun_offset;
- pblk->ppaf.pg_mask = ((1ULL << ppaf.pg_len) - 1) <<
- pblk->ppaf.pg_offset;
- pblk->ppaf.blk_mask = ((1ULL << ppaf.blk_len) - 1) <<
- pblk->ppaf.blk_offset;
+ dst->blk_len = src->blk_len;
+ dst->pg_len = src->pg_len;
+ dst->pln_len = src->pln_len;
+ dst->sec_len = src->sec_len;

- pblk->ppaf_bitsize = pblk->ppaf.blk_offset + ppaf.blk_len;
+ dst->sec_offset = 0;
+ dst->pln_offset = dst->sec_len;
+ dst->ch_offset = dst->pln_offset + dst->pln_len;
+ dst->lun_offset = dst->ch_offset + dst->ch_len;
+ dst->pg_offset = dst->lun_offset + dst->lun_len;
+ dst->blk_offset = dst->pg_offset + dst->pg_len;
+
+ dst->sec_mask = ((1ULL << dst->sec_len) - 1) << dst->sec_offset;
+ dst->pln_mask = ((1ULL << dst->pln_len) - 1) << dst->pln_offset;
+ dst->ch_mask = ((1ULL << dst->ch_len) - 1) << dst->ch_offset;
+ dst->lun_mask = ((1ULL << dst->lun_len) - 1) << dst->lun_offset;
+ dst->pg_mask = ((1ULL << dst->pg_len) - 1) << dst->pg_offset;
+ dst->blk_mask = ((1ULL << dst->blk_len) - 1) << dst->blk_offset;
+
+ return dst->blk_offset + src->blk_len;
+}
+
+static int pblk_set_addrf(struct pblk *pblk)
+{
+ struct nvm_tgt_dev *dev = pblk->dev;
+ struct nvm_geo *geo = &dev->geo;
+ int mod;
+
+ div_u64_rem(geo->c.clba, pblk->min_write_pgs, &mod);
+ if (mod) {
+ pr_err("pblk: bad configuration of sectors/pages\n");
+ return -EINVAL;
+ }
+
+ pblk->addrf_len = pblk_set_addrf_12(geo, (void *)&pblk->addrf);

return 0;
}
@@ -253,8 +269,7 @@ static int pblk_core_init(struct pblk *pblk)
struct nvm_tgt_dev *dev = pblk->dev;
struct nvm_geo *geo = &dev->geo;

- pblk->pgs_in_buffer = NVM_MEM_PAGE_WRITE * geo->sec_per_pg *
- geo->nr_planes * geo->all_luns;
+ pblk->pgs_in_buffer = geo->c.mw_cunits * geo->c.ws_opt * geo->all_luns;

if (pblk_init_global_caches(pblk))
return -ENOMEM;
@@ -305,7 +320,7 @@ static int pblk_core_init(struct pblk *pblk)
if (!pblk->r_end_wq)
goto free_bb_wq;

- if (pblk_set_ppaf(pblk))
+ if (pblk_set_addrf(pblk))
goto free_r_end_wq;

if (pblk_rwb_init(pblk))
@@ -434,7 +449,7 @@ static void *pblk_bb_get_log(struct pblk *pblk)
int i, nr_blks, blk_per_lun;
int ret;

- blk_per_lun = geo->nr_chks * geo->plane_mode;
+ blk_per_lun = geo->c.num_chk * geo->c.pln_mode;
nr_blks = blk_per_lun * geo->all_luns;

log = kmalloc(nr_blks, GFP_KERNEL);
@@ -484,7 +499,7 @@ static int pblk_luns_init(struct pblk *pblk, struct ppa_addr *luns)
int i;

/* TODO: Implement unbalanced LUN support */
- if (geo->nr_luns < 0) {
+ if (geo->num_lun < 0) {
pr_err("pblk: unbalanced LUN config.\n");
return -EINVAL;
}
@@ -496,9 +511,9 @@ static int pblk_luns_init(struct pblk *pblk, struct ppa_addr *luns)

for (i = 0; i < geo->all_luns; i++) {
/* Stripe across channels */
- int ch = i % geo->nr_chnls;
- int lun_raw = i / geo->nr_chnls;
- int lunid = lun_raw + ch * geo->nr_luns;
+ int ch = i % geo->num_ch;
+ int lun_raw = i / geo->num_ch;
+ int lunid = lun_raw + ch * geo->num_lun;

rlun = &pblk->luns[i];
rlun->bppa = luns[lunid];
@@ -552,18 +567,18 @@ static unsigned int calc_emeta_len(struct pblk *pblk)
/* Round to sector size so that lba_list starts on its own sector */
lm->emeta_sec[1] = DIV_ROUND_UP(
sizeof(struct line_emeta) + lm->blk_bitmap_len +
- sizeof(struct wa_counters), geo->sec_size);
- lm->emeta_len[1] = lm->emeta_sec[1] * geo->sec_size;
+ sizeof(struct wa_counters), geo->c.csecs);
+ lm->emeta_len[1] = lm->emeta_sec[1] * geo->c.csecs;

/* Round to sector size so that vsc_list starts on its own sector */
lm->dsec_per_line = lm->sec_per_line - lm->emeta_sec[0];
lm->emeta_sec[2] = DIV_ROUND_UP(lm->dsec_per_line * sizeof(u64),
- geo->sec_size);
- lm->emeta_len[2] = lm->emeta_sec[2] * geo->sec_size;
+ geo->c.csecs);
+ lm->emeta_len[2] = lm->emeta_sec[2] * geo->c.csecs;

lm->emeta_sec[3] = DIV_ROUND_UP(l_mg->nr_lines * sizeof(u32),
- geo->sec_size);
- lm->emeta_len[3] = lm->emeta_sec[3] * geo->sec_size;
+ geo->c.csecs);
+ lm->emeta_len[3] = lm->emeta_sec[3] * geo->c.csecs;

lm->vsc_list_len = l_mg->nr_lines * sizeof(u32);

@@ -594,13 +609,13 @@ static void pblk_set_provision(struct pblk *pblk, long nr_free_blks)
* on user capacity consider only provisioned blocks
*/
pblk->rl.total_blocks = nr_free_blks;
- pblk->rl.nr_secs = nr_free_blks * geo->sec_per_chk;
+ pblk->rl.nr_secs = nr_free_blks * geo->c.clba;

/* Consider sectors used for metadata */
sec_meta = (lm->smeta_sec + lm->emeta_sec[0]) * l_mg->nr_free_lines;
- blk_meta = DIV_ROUND_UP(sec_meta, geo->sec_per_chk);
+ blk_meta = DIV_ROUND_UP(sec_meta, geo->c.clba);

- pblk->capacity = (provisioned - blk_meta) * geo->sec_per_chk;
+ pblk->capacity = (provisioned - blk_meta) * geo->c.clba;

atomic_set(&pblk->rl.free_blocks, nr_free_blks);
atomic_set(&pblk->rl.free_user_blocks, nr_free_blks);
@@ -711,10 +726,10 @@ static int pblk_lines_init(struct pblk *pblk)
void *chunk_log;
unsigned int smeta_len, emeta_len;
long nr_bad_blks = 0, nr_free_blks = 0;
- int bb_distance, max_write_ppas, mod;
+ int bb_distance, max_write_ppas;
int i, ret;

- pblk->min_write_pgs = geo->sec_per_pl * (geo->sec_size / PAGE_SIZE);
+ pblk->min_write_pgs = geo->c.ws_opt * (geo->c.csecs / PAGE_SIZE);
max_write_ppas = pblk->min_write_pgs * geo->all_luns;
pblk->max_write_pgs = (max_write_ppas < nvm_max_phys_sects(dev)) ?
max_write_ppas : nvm_max_phys_sects(dev);
@@ -725,19 +740,13 @@ static int pblk_lines_init(struct pblk *pblk)
return -EINVAL;
}

- div_u64_rem(geo->sec_per_chk, pblk->min_write_pgs, &mod);
- if (mod) {
- pr_err("pblk: bad configuration of sectors/pages\n");
- return -EINVAL;
- }
-
- l_mg->nr_lines = geo->nr_chks;
+ l_mg->nr_lines = geo->c.num_chk;
l_mg->log_line = l_mg->data_line = NULL;
l_mg->l_seq_nr = l_mg->d_seq_nr = 0;
l_mg->nr_free_lines = 0;
bitmap_zero(&l_mg->meta_bitmap, PBLK_DATA_LINES);

- lm->sec_per_line = geo->sec_per_chk * geo->all_luns;
+ lm->sec_per_line = geo->c.clba * geo->all_luns;
lm->blk_per_line = geo->all_luns;
lm->blk_bitmap_len = BITS_TO_LONGS(geo->all_luns) * sizeof(long);
lm->sec_bitmap_len = BITS_TO_LONGS(lm->sec_per_line) * sizeof(long);
@@ -751,8 +760,8 @@ static int pblk_lines_init(struct pblk *pblk)
*/
i = 1;
add_smeta_page:
- lm->smeta_sec = i * geo->sec_per_pl;
- lm->smeta_len = lm->smeta_sec * geo->sec_size;
+ lm->smeta_sec = i * geo->c.ws_opt;
+ lm->smeta_len = lm->smeta_sec * geo->c.csecs;

smeta_len = sizeof(struct line_smeta) + lm->lun_bitmap_len;
if (smeta_len > lm->smeta_len) {
@@ -765,8 +774,8 @@ static int pblk_lines_init(struct pblk *pblk)
*/
i = 1;
add_emeta_page:
- lm->emeta_sec[0] = i * geo->sec_per_pl;
- lm->emeta_len[0] = lm->emeta_sec[0] * geo->sec_size;
+ lm->emeta_sec[0] = i * geo->c.ws_opt;
+ lm->emeta_len[0] = lm->emeta_sec[0] * geo->c.csecs;

emeta_len = calc_emeta_len(pblk);
if (emeta_len > lm->emeta_len[0]) {
@@ -779,7 +788,7 @@ static int pblk_lines_init(struct pblk *pblk)
lm->min_blk_line = 1;
if (geo->all_luns > 1)
lm->min_blk_line += DIV_ROUND_UP(lm->smeta_sec +
- lm->emeta_sec[0], geo->sec_per_chk);
+ lm->emeta_sec[0], geo->c.clba);

if (lm->min_blk_line > lm->blk_per_line) {
pr_err("pblk: config. not supported. Min. LUN in line:%d\n",
@@ -803,9 +812,9 @@ static int pblk_lines_init(struct pblk *pblk)
goto fail_free_bb_template;
}

- bb_distance = (geo->all_luns) * geo->sec_per_pl;
+ bb_distance = (geo->all_luns) * geo->c.ws_opt;
for (i = 0; i < lm->sec_per_line; i += bb_distance)
- bitmap_set(l_mg->bb_template, i, geo->sec_per_pl);
+ bitmap_set(l_mg->bb_template, i, geo->c.ws_opt);

INIT_LIST_HEAD(&l_mg->free_list);
INIT_LIST_HEAD(&l_mg->corrupt_list);
@@ -982,9 +991,15 @@ static void *pblk_init(struct nvm_tgt_dev *dev, struct gendisk *tdisk,
struct pblk *pblk;
int ret;

- if (dev->identity.dom & NVM_RSP_L2P) {
+ if (geo->c.version != NVM_OCSSD_SPEC_12) {
+ pr_err("pblk: OCSSD version not supported (%u)\n",
+ geo->c.version);
+ return ERR_PTR(-EINVAL);
+ }
+
+ if (geo->c.version == NVM_OCSSD_SPEC_12 && geo->c.dom & NVM_RSP_L2P) {
pr_err("pblk: host-side L2P table not supported. (%x)\n",
- dev->identity.dom);
+ geo->c.dom);
return ERR_PTR(-EINVAL);
}

@@ -1092,7 +1107,7 @@ static void *pblk_init(struct nvm_tgt_dev *dev, struct gendisk *tdisk,

blk_queue_write_cache(tqueue, true, false);

- tqueue->limits.discard_granularity = geo->sec_per_chk * geo->sec_size;
+ tqueue->limits.discard_granularity = geo->c.clba * geo->c.csecs;
tqueue->limits.discard_alignment = 0;
blk_queue_max_discard_sectors(tqueue, UINT_MAX >> 9);
queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, tqueue);
diff --git a/drivers/lightnvm/pblk-read.c b/drivers/lightnvm/pblk-read.c
index 2f761283f43e..ebb6bae3a3b8 100644
--- a/drivers/lightnvm/pblk-read.c
+++ b/drivers/lightnvm/pblk-read.c
@@ -563,7 +563,7 @@ int pblk_submit_read_gc(struct pblk *pblk, struct pblk_gc_rq *gc_rq)
if (!(gc_rq->secs_to_gc))
goto out;

- data_len = (gc_rq->secs_to_gc) * geo->sec_size;
+ data_len = (gc_rq->secs_to_gc) * geo->c.csecs;
bio = pblk_bio_map_addr(pblk, gc_rq->data, gc_rq->secs_to_gc, data_len,
PBLK_VMALLOC_META, GFP_KERNEL);
if (IS_ERR(bio)) {
diff --git a/drivers/lightnvm/pblk-recovery.c b/drivers/lightnvm/pblk-recovery.c
index e75a1af2eebe..beacef1412a2 100644
--- a/drivers/lightnvm/pblk-recovery.c
+++ b/drivers/lightnvm/pblk-recovery.c
@@ -188,7 +188,7 @@ static int pblk_calc_sec_in_line(struct pblk *pblk, struct pblk_line *line)
int nr_bb = bitmap_weight(line->blk_bitmap, lm->blk_per_line);

return lm->sec_per_line - lm->smeta_sec - lm->emeta_sec[0] -
- nr_bb * geo->sec_per_chk;
+ nr_bb * geo->c.clba;
}

struct pblk_recov_alloc {
@@ -236,7 +236,7 @@ static int pblk_recov_read_oob(struct pblk *pblk, struct pblk_line *line,
rq_ppas = pblk_calc_secs(pblk, left_ppas, 0);
if (!rq_ppas)
rq_ppas = pblk->min_write_pgs;
- rq_len = rq_ppas * geo->sec_size;
+ rq_len = rq_ppas * geo->c.csecs;

bio = bio_map_kern(dev->q, data, rq_len, GFP_KERNEL);
if (IS_ERR(bio))
@@ -355,7 +355,7 @@ static int pblk_recov_pad_oob(struct pblk *pblk, struct pblk_line *line,
if (!pad_rq)
return -ENOMEM;

- data = vzalloc(pblk->max_write_pgs * geo->sec_size);
+ data = vzalloc(pblk->max_write_pgs * geo->c.csecs);
if (!data) {
ret = -ENOMEM;
goto free_rq;
@@ -372,7 +372,7 @@ static int pblk_recov_pad_oob(struct pblk *pblk, struct pblk_line *line,
goto fail_free_pad;
}

- rq_len = rq_ppas * geo->sec_size;
+ rq_len = rq_ppas * geo->c.csecs;

meta_list = nvm_dev_dma_alloc(dev->parent, GFP_KERNEL, &dma_meta_list);
if (!meta_list) {
@@ -513,7 +513,7 @@ static int pblk_recov_scan_all_oob(struct pblk *pblk, struct pblk_line *line,
rq_ppas = pblk_calc_secs(pblk, left_ppas, 0);
if (!rq_ppas)
rq_ppas = pblk->min_write_pgs;
- rq_len = rq_ppas * geo->sec_size;
+ rq_len = rq_ppas * geo->c.csecs;

bio = bio_map_kern(dev->q, data, rq_len, GFP_KERNEL);
if (IS_ERR(bio))
@@ -644,7 +644,7 @@ static int pblk_recov_scan_oob(struct pblk *pblk, struct pblk_line *line,
rq_ppas = pblk_calc_secs(pblk, left_ppas, 0);
if (!rq_ppas)
rq_ppas = pblk->min_write_pgs;
- rq_len = rq_ppas * geo->sec_size;
+ rq_len = rq_ppas * geo->c.csecs;

bio = bio_map_kern(dev->q, data, rq_len, GFP_KERNEL);
if (IS_ERR(bio))
@@ -749,7 +749,7 @@ static int pblk_recov_l2p_from_oob(struct pblk *pblk, struct pblk_line *line)
ppa_list = (void *)(meta_list) + pblk_dma_meta_size;
dma_ppa_list = dma_meta_list + pblk_dma_meta_size;

- data = kcalloc(pblk->max_write_pgs, geo->sec_size, GFP_KERNEL);
+ data = kcalloc(pblk->max_write_pgs, geo->c.csecs, GFP_KERNEL);
if (!data) {
ret = -ENOMEM;
goto free_meta_list;
diff --git a/drivers/lightnvm/pblk-rl.c b/drivers/lightnvm/pblk-rl.c
index 0d457b162f23..bcab203477ec 100644
--- a/drivers/lightnvm/pblk-rl.c
+++ b/drivers/lightnvm/pblk-rl.c
@@ -200,7 +200,7 @@ void pblk_rl_init(struct pblk_rl *rl, int budget)

/* Consider sectors used for metadata */
sec_meta = (lm->smeta_sec + lm->emeta_sec[0]) * l_mg->nr_free_lines;
- blk_meta = DIV_ROUND_UP(sec_meta, geo->sec_per_chk);
+ blk_meta = DIV_ROUND_UP(sec_meta, geo->c.clba);

rl->high = pblk->op_blks - blk_meta - lm->blk_per_line;
rl->high_pw = get_count_order(rl->high);
diff --git a/drivers/lightnvm/pblk-sysfs.c b/drivers/lightnvm/pblk-sysfs.c
index d93e9b1f083a..d3b50741b691 100644
--- a/drivers/lightnvm/pblk-sysfs.c
+++ b/drivers/lightnvm/pblk-sysfs.c
@@ -113,26 +113,31 @@ static ssize_t pblk_sysfs_ppaf(struct pblk *pblk, char *page)
{
struct nvm_tgt_dev *dev = pblk->dev;
struct nvm_geo *geo = &dev->geo;
+ struct nvm_addr_format_12 *ppaf;
+ struct nvm_addr_format_12 *geo_ppaf;
ssize_t sz = 0;

- sz = snprintf(page, PAGE_SIZE - sz,
- "g:(b:%d)blk:%d/%d,pg:%d/%d,lun:%d/%d,ch:%d/%d,pl:%d/%d,sec:%d/%d\n",
- pblk->ppaf_bitsize,
- pblk->ppaf.blk_offset, geo->ppaf.blk_len,
- pblk->ppaf.pg_offset, geo->ppaf.pg_len,
- pblk->ppaf.lun_offset, geo->ppaf.lun_len,
- pblk->ppaf.ch_offset, geo->ppaf.ch_len,
- pblk->ppaf.pln_offset, geo->ppaf.pln_len,
- pblk->ppaf.sec_offset, geo->ppaf.sect_len);
+ ppaf = (struct nvm_addr_format_12 *)&pblk->addrf;
+ geo_ppaf = (struct nvm_addr_format_12 *)&geo->c.addrf;
+
+ sz = snprintf(page, PAGE_SIZE,
+ "pblk:(s:%d)ch:%d/%d,lun:%d/%d,blk:%d/%d,pg:%d/%d,pl:%d/%d,sec:%d/%d\n",
+ pblk->addrf_len,
+ ppaf->ch_offset, ppaf->ch_len,
+ ppaf->lun_offset, ppaf->lun_len,
+ ppaf->blk_offset, ppaf->blk_len,
+ ppaf->pg_offset, ppaf->pg_len,
+ ppaf->pln_offset, ppaf->pln_len,
+ ppaf->sec_offset, ppaf->sec_len);

sz += snprintf(page + sz, PAGE_SIZE - sz,
- "d:blk:%d/%d,pg:%d/%d,lun:%d/%d,ch:%d/%d,pl:%d/%d,sec:%d/%d\n",
- geo->ppaf.blk_offset, geo->ppaf.blk_len,
- geo->ppaf.pg_offset, geo->ppaf.pg_len,
- geo->ppaf.lun_offset, geo->ppaf.lun_len,
- geo->ppaf.ch_offset, geo->ppaf.ch_len,
- geo->ppaf.pln_offset, geo->ppaf.pln_len,
- geo->ppaf.sect_offset, geo->ppaf.sect_len);
+ "device:ch:%d/%d,lun:%d/%d,blk:%d/%d,pg:%d/%d,pl:%d/%d,sec:%d/%d\n",
+ geo_ppaf->ch_offset, geo_ppaf->ch_len,
+ geo_ppaf->lun_offset, geo_ppaf->lun_len,
+ geo_ppaf->blk_offset, geo_ppaf->blk_len,
+ geo_ppaf->pg_offset, geo_ppaf->pg_len,
+ geo_ppaf->pln_offset, geo_ppaf->pln_len,
+ geo_ppaf->sec_offset, geo_ppaf->sec_len);

return sz;
}
@@ -288,7 +293,7 @@ static ssize_t pblk_sysfs_lines_info(struct pblk *pblk, char *page)
"blk_line:%d, sec_line:%d, sec_blk:%d\n",
lm->blk_per_line,
lm->sec_per_line,
- geo->sec_per_chk);
+ geo->c.clba);

return sz;
}
diff --git a/drivers/lightnvm/pblk-write.c b/drivers/lightnvm/pblk-write.c
index aae86ed60b98..c49b27539d5a 100644
--- a/drivers/lightnvm/pblk-write.c
+++ b/drivers/lightnvm/pblk-write.c
@@ -333,7 +333,7 @@ int pblk_submit_meta_io(struct pblk *pblk, struct pblk_line *meta_line)
m_ctx = nvm_rq_to_pdu(rqd);
m_ctx->private = meta_line;

- rq_len = rq_ppas * geo->sec_size;
+ rq_len = rq_ppas * geo->c.csecs;
data = ((void *)emeta->buf) + emeta->mem;

bio = pblk_bio_map_addr(pblk, data, rq_ppas, rq_len,
diff --git a/drivers/lightnvm/pblk.h b/drivers/lightnvm/pblk.h
index 282dfc8780e8..46b29a492f74 100644
--- a/drivers/lightnvm/pblk.h
+++ b/drivers/lightnvm/pblk.h
@@ -551,21 +551,6 @@ struct pblk_line_meta {
unsigned int meta_distance; /* Distance between data and metadata */
};

-struct pblk_addr_format {
- u64 ch_mask;
- u64 lun_mask;
- u64 pln_mask;
- u64 blk_mask;
- u64 pg_mask;
- u64 sec_mask;
- u8 ch_offset;
- u8 lun_offset;
- u8 pln_offset;
- u8 blk_offset;
- u8 pg_offset;
- u8 sec_offset;
-};
-
enum {
PBLK_STATE_RUNNING = 0,
PBLK_STATE_STOPPING = 1,
@@ -585,8 +570,8 @@ struct pblk {
struct pblk_line_mgmt l_mg; /* Line management */
struct pblk_line_meta lm; /* Line metadata */

- int ppaf_bitsize;
- struct pblk_addr_format ppaf;
+ struct nvm_addr_format addrf;
+ int addrf_len;

struct pblk_rb rwb;

@@ -941,14 +926,12 @@ static inline int pblk_line_vsc(struct pblk_line *line)
return le32_to_cpu(*line->vsc);
}

-#define NVM_MEM_PAGE_WRITE (8)
-
static inline int pblk_pad_distance(struct pblk *pblk)
{
struct nvm_tgt_dev *dev = pblk->dev;
struct nvm_geo *geo = &dev->geo;

- return NVM_MEM_PAGE_WRITE * geo->all_luns * geo->sec_per_pl;
+ return geo->c.mw_cunits * geo->all_luns * geo->c.ws_opt;
}

static inline int pblk_ppa_to_line(struct ppa_addr p)
@@ -958,21 +941,23 @@ static inline int pblk_ppa_to_line(struct ppa_addr p)

static inline int pblk_ppa_to_pos(struct nvm_geo *geo, struct ppa_addr p)
{
- return p.g.lun * geo->nr_chnls + p.g.ch;
+ return p.g.lun * geo->num_ch + p.g.ch;
}

static inline struct ppa_addr addr_to_gen_ppa(struct pblk *pblk, u64 paddr,
u64 line_id)
{
+ struct nvm_addr_format_12 *ppaf =
+ (struct nvm_addr_format_12 *)&pblk->addrf;
struct ppa_addr ppa;

ppa.ppa = 0;
ppa.g.blk = line_id;
- ppa.g.pg = (paddr & pblk->ppaf.pg_mask) >> pblk->ppaf.pg_offset;
- ppa.g.lun = (paddr & pblk->ppaf.lun_mask) >> pblk->ppaf.lun_offset;
- ppa.g.ch = (paddr & pblk->ppaf.ch_mask) >> pblk->ppaf.ch_offset;
- ppa.g.pl = (paddr & pblk->ppaf.pln_mask) >> pblk->ppaf.pln_offset;
- ppa.g.sec = (paddr & pblk->ppaf.sec_mask) >> pblk->ppaf.sec_offset;
+ ppa.g.pg = (paddr & ppaf->pg_mask) >> ppaf->pg_offset;
+ ppa.g.lun = (paddr & ppaf->lun_mask) >> ppaf->lun_offset;
+ ppa.g.ch = (paddr & ppaf->ch_mask) >> ppaf->ch_offset;
+ ppa.g.pl = (paddr & ppaf->pln_mask) >> ppaf->pln_offset;
+ ppa.g.sec = (paddr & ppaf->sec_mask) >> ppaf->sec_offset;

return ppa;
}
@@ -980,13 +965,15 @@ static inline struct ppa_addr addr_to_gen_ppa(struct pblk *pblk, u64 paddr,
static inline u64 pblk_dev_ppa_to_line_addr(struct pblk *pblk,
struct ppa_addr p)
{
+ struct nvm_addr_format_12 *ppaf =
+ (struct nvm_addr_format_12 *)&pblk->addrf;
u64 paddr;

- paddr = (u64)p.g.pg << pblk->ppaf.pg_offset;
- paddr |= (u64)p.g.lun << pblk->ppaf.lun_offset;
- paddr |= (u64)p.g.ch << pblk->ppaf.ch_offset;
- paddr |= (u64)p.g.pl << pblk->ppaf.pln_offset;
- paddr |= (u64)p.g.sec << pblk->ppaf.sec_offset;
+ paddr = (u64)p.g.ch << ppaf->ch_offset;
+ paddr |= (u64)p.g.lun << ppaf->lun_offset;
+ paddr |= (u64)p.g.pg << ppaf->pg_offset;
+ paddr |= (u64)p.g.pl << ppaf->pln_offset;
+ paddr |= (u64)p.g.sec << ppaf->sec_offset;

return paddr;
}
@@ -1003,18 +990,15 @@ static inline struct ppa_addr pblk_ppa32_to_ppa64(struct pblk *pblk, u32 ppa32)
ppa64.c.line = ppa32 & ((~0U) >> 1);
ppa64.c.is_cached = 1;
} else {
- ppa64.g.blk = (ppa32 & pblk->ppaf.blk_mask) >>
- pblk->ppaf.blk_offset;
- ppa64.g.pg = (ppa32 & pblk->ppaf.pg_mask) >>
- pblk->ppaf.pg_offset;
- ppa64.g.lun = (ppa32 & pblk->ppaf.lun_mask) >>
- pblk->ppaf.lun_offset;
- ppa64.g.ch = (ppa32 & pblk->ppaf.ch_mask) >>
- pblk->ppaf.ch_offset;
- ppa64.g.pl = (ppa32 & pblk->ppaf.pln_mask) >>
- pblk->ppaf.pln_offset;
- ppa64.g.sec = (ppa32 & pblk->ppaf.sec_mask) >>
- pblk->ppaf.sec_offset;
+ struct nvm_addr_format_12 *ppaf =
+ (struct nvm_addr_format_12 *)&pblk->addrf;
+
+ ppa64.g.ch = (ppa32 & ppaf->ch_mask) >> ppaf->ch_offset;
+ ppa64.g.lun = (ppa32 & ppaf->lun_mask) >> ppaf->lun_offset;
+ ppa64.g.blk = (ppa32 & ppaf->blk_mask) >> ppaf->blk_offset;
+ ppa64.g.pg = (ppa32 & ppaf->pg_mask) >> ppaf->pg_offset;
+ ppa64.g.pl = (ppa32 & ppaf->pln_mask) >> ppaf->pln_offset;
+ ppa64.g.sec = (ppa32 & ppaf->sec_mask) >> ppaf->sec_offset;
}

return ppa64;
@@ -1030,12 +1014,15 @@ static inline u32 pblk_ppa64_to_ppa32(struct pblk *pblk, struct ppa_addr ppa64)
ppa32 |= ppa64.c.line;
ppa32 |= 1U << 31;
} else {
- ppa32 |= ppa64.g.blk << pblk->ppaf.blk_offset;
- ppa32 |= ppa64.g.pg << pblk->ppaf.pg_offset;
- ppa32 |= ppa64.g.lun << pblk->ppaf.lun_offset;
- ppa32 |= ppa64.g.ch << pblk->ppaf.ch_offset;
- ppa32 |= ppa64.g.pl << pblk->ppaf.pln_offset;
- ppa32 |= ppa64.g.sec << pblk->ppaf.sec_offset;
+ struct nvm_addr_format_12 *ppaf =
+ (struct nvm_addr_format_12 *)&pblk->addrf;
+
+ ppa32 |= ppa64.g.ch << ppaf->ch_offset;
+ ppa32 |= ppa64.g.lun << ppaf->lun_offset;
+ ppa32 |= ppa64.g.blk << ppaf->blk_offset;
+ ppa32 |= ppa64.g.pg << ppaf->pg_offset;
+ ppa32 |= ppa64.g.pl << ppaf->pln_offset;
+ ppa32 |= ppa64.g.sec << ppaf->sec_offset;
}

return ppa32;
@@ -1046,7 +1033,7 @@ static inline struct ppa_addr pblk_trans_map_get(struct pblk *pblk,
{
struct ppa_addr ppa;

- if (pblk->ppaf_bitsize < 32) {
+ if (pblk->addrf_len < 32) {
u32 *map = (u32 *)pblk->trans_map;

ppa = pblk_ppa32_to_ppa64(pblk, map[lba]);
@@ -1062,7 +1049,7 @@ static inline struct ppa_addr pblk_trans_map_get(struct pblk *pblk,
static inline void pblk_trans_map_set(struct pblk *pblk, sector_t lba,
struct ppa_addr ppa)
{
- if (pblk->ppaf_bitsize < 32) {
+ if (pblk->addrf_len < 32) {
u32 *map = (u32 *)pblk->trans_map;

map[lba] = pblk_ppa64_to_ppa32(pblk, ppa);
@@ -1153,7 +1140,7 @@ static inline int pblk_set_progr_mode(struct pblk *pblk, int type)
struct nvm_geo *geo = &dev->geo;
int flags;

- flags = geo->plane_mode >> 1;
+ flags = geo->c.pln_mode >> 1;

if (type == PBLK_WRITE)
flags |= NVM_IO_SCRAMBLE_ENABLE;
@@ -1174,7 +1161,7 @@ static inline int pblk_set_read_mode(struct pblk *pblk, int type)

flags = NVM_IO_SUSPEND | NVM_IO_SCRAMBLE_ENABLE;
if (type == PBLK_READ_SEQUENTIAL)
- flags |= geo->plane_mode >> 1;
+ flags |= geo->c.pln_mode >> 1;

return flags;
}
@@ -1227,12 +1214,12 @@ static inline int pblk_boundary_ppa_checks(struct nvm_tgt_dev *tgt_dev,
ppa = &ppas[i];

if (!ppa->c.is_cached &&
- ppa->g.ch < geo->nr_chnls &&
- ppa->g.lun < geo->nr_luns &&
- ppa->g.pl < geo->nr_planes &&
- ppa->g.blk < geo->nr_chks &&
- ppa->g.pg < geo->ws_per_chk &&
- ppa->g.sec < geo->sec_per_pg)
+ ppa->g.ch < geo->num_ch &&
+ ppa->g.lun < geo->num_lun &&
+ ppa->g.pl < geo->c.num_pln &&
+ ppa->g.blk < geo->c.num_chk &&
+ ppa->g.pg < geo->c.num_pg &&
+ ppa->g.sec < geo->c.ws_min)
continue;

print_ppa(ppa, "boundary", i);
diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
index a19e85f0cbae..97739e668602 100644
--- a/drivers/nvme/host/lightnvm.c
+++ b/drivers/nvme/host/lightnvm.c
@@ -152,8 +152,8 @@ struct nvme_nvm_id12_addrf {
__u8 blk_len;
__u8 pg_offset;
__u8 pg_len;
- __u8 sect_offset;
- __u8 sect_len;
+ __u8 sec_offset;
+ __u8 sec_len;
__u8 res[4];
} __packed;

@@ -170,6 +170,12 @@ struct nvme_nvm_id12 {
__u8 resv2[2880];
} __packed;

+/* Generic identification structure */
+struct nvme_nvm_id {
+ __u8 ver_id;
+ __u8 resv[4095];
+} __packed;
+
struct nvme_nvm_bb_tbl {
__u8 tblid[4];
__le16 verid;
@@ -254,121 +260,195 @@ static inline void _nvme_nvm_check_size(void)
BUILD_BUG_ON(sizeof(struct nvme_nvm_id20) != NVME_IDENTIFY_DATA_SIZE);
}

-static int init_grp(struct nvm_id *nvm_id, struct nvme_nvm_id12 *id12)
+static void nvme_nvm_set_addr_12(struct nvm_addr_format_12 *dst,
+ struct nvme_nvm_id12_addrf *src)
{
+ dst->ch_len = src->ch_len;
+ dst->lun_len = src->lun_len;
+ dst->blk_len = src->blk_len;
+ dst->pg_len = src->pg_len;
+ dst->pln_len = src->pln_len;
+ dst->sec_len = src->sec_len;
+
+ dst->ch_offset = src->ch_offset;
+ dst->lun_offset = src->lun_offset;
+ dst->blk_offset = src->blk_offset;
+ dst->pg_offset = src->pg_offset;
+ dst->pln_offset = src->pln_offset;
+ dst->sec_offset = src->sec_offset;
+
+ dst->ch_mask = ((1ULL << dst->ch_len) - 1) << dst->ch_offset;
+ dst->lun_mask = ((1ULL << dst->lun_len) - 1) << dst->lun_offset;
+ dst->blk_mask = ((1ULL << dst->blk_len) - 1) << dst->blk_offset;
+ dst->pg_mask = ((1ULL << dst->pg_len) - 1) << dst->pg_offset;
+ dst->pln_mask = ((1ULL << dst->pln_len) - 1) << dst->pln_offset;
+ dst->sec_mask = ((1ULL << dst->sec_len) - 1) << dst->sec_offset;
+}
+
+static int nvme_nvm_setup_12(struct nvme_nvm_id *gen_id,
+ struct nvm_dev_geo *dev_geo)
+{
+ struct nvme_nvm_id12 *id = (struct nvme_nvm_id12 *)gen_id;
struct nvme_nvm_id12_grp *src;
int sec_per_pg, sec_per_pl, pg_per_blk;

- if (id12->cgrps != 1)
+ if (id->cgrps != 1)
return -EINVAL;

- src = &id12->grp;
+ src = &id->grp;

- nvm_id->mtype = src->mtype;
- nvm_id->fmtype = src->fmtype;
+ if (src->mtype != 0) {
+ pr_err("nvm: memory type not supported\n");
+ return -EINVAL;
+ }
+
+ /* 1.2 spec. only reports a single version id - unfold */
+ dev_geo->major_ver_id = 1;
+ dev_geo->minor_ver_id = 2;
+
+ /* Set compacted version for upper layers */
+ dev_geo->c.version = NVM_OCSSD_SPEC_12;

- nvm_id->num_ch = src->num_ch;
- nvm_id->num_lun = src->num_lun;
+ dev_geo->num_ch = src->num_ch;
+ dev_geo->num_lun = src->num_lun;
+ dev_geo->all_luns = dev_geo->num_ch * dev_geo->num_lun;

- nvm_id->num_chk = le16_to_cpu(src->num_chk);
- nvm_id->csecs = le16_to_cpu(src->csecs);
- nvm_id->sos = le16_to_cpu(src->sos);
+ dev_geo->c.num_chk = le16_to_cpu(src->num_chk);
+ dev_geo->c.csecs = le16_to_cpu(src->csecs);
+ dev_geo->c.sos = le16_to_cpu(src->sos);

pg_per_blk = le16_to_cpu(src->num_pg);
- sec_per_pg = le16_to_cpu(src->fpg_sz) / nvm_id->csecs;
+ sec_per_pg = le16_to_cpu(src->fpg_sz) / dev_geo->c.csecs;
sec_per_pl = sec_per_pg * src->num_pln;
- nvm_id->clba = sec_per_pl * pg_per_blk;
- nvm_id->ws_per_chk = pg_per_blk;
-
- nvm_id->mpos = le32_to_cpu(src->mpos);
- nvm_id->cpar = le16_to_cpu(src->cpar);
- nvm_id->mccap = le32_to_cpu(src->mccap);
-
- nvm_id->ws_opt = nvm_id->ws_min = sec_per_pg;
- nvm_id->ws_seq = NVM_IO_SNGL_ACCESS;
-
- if (nvm_id->mpos & 0x020202) {
- nvm_id->ws_seq = NVM_IO_DUAL_ACCESS;
- nvm_id->ws_opt <<= 1;
- } else if (nvm_id->mpos & 0x040404) {
- nvm_id->ws_seq = NVM_IO_QUAD_ACCESS;
- nvm_id->ws_opt <<= 2;
- }
+ dev_geo->c.clba = sec_per_pl * pg_per_blk;
+
+ dev_geo->c.ws_min = sec_per_pg;
+ dev_geo->c.ws_opt = sec_per_pg;
+ dev_geo->c.mw_cunits = 8; /* default to MLC safe values */
+ dev_geo->c.maxoc = dev_geo->all_luns; /* default to 1 chunk per LUN */
+ dev_geo->c.maxocpu = 1; /* default to 1 chunk per LUN */

- nvm_id->trdt = le32_to_cpu(src->trdt);
- nvm_id->trdm = le32_to_cpu(src->trdm);
- nvm_id->tprt = le32_to_cpu(src->tprt);
- nvm_id->tprm = le32_to_cpu(src->tprm);
- nvm_id->tbet = le32_to_cpu(src->tbet);
- nvm_id->tbem = le32_to_cpu(src->tbem);
+ dev_geo->c.mccap = le32_to_cpu(src->mccap);
+
+ dev_geo->c.trdt = le32_to_cpu(src->trdt);
+ dev_geo->c.trdm = le32_to_cpu(src->trdm);
+ dev_geo->c.tprt = le32_to_cpu(src->tprt);
+ dev_geo->c.tprm = le32_to_cpu(src->tprm);
+ dev_geo->c.tbet = le32_to_cpu(src->tbet);
+ dev_geo->c.tbem = le32_to_cpu(src->tbem);

/* 1.2 compatibility */
- nvm_id->num_pln = src->num_pln;
- nvm_id->num_pg = le16_to_cpu(src->num_pg);
- nvm_id->fpg_sz = le16_to_cpu(src->fpg_sz);
+ dev_geo->c.vmnt = id->vmnt;
+ dev_geo->c.cap = le32_to_cpu(id->cap);
+ dev_geo->c.dom = le32_to_cpu(id->dom);
+
+ dev_geo->c.mtype = src->mtype;
+ dev_geo->c.fmtype = src->fmtype;
+
+ dev_geo->c.cpar = le16_to_cpu(src->cpar);
+ dev_geo->c.mpos = le32_to_cpu(src->mpos);
+
+ dev_geo->c.pln_mode = NVM_PLANE_SINGLE;
+
+ if (dev_geo->c.mpos & 0x020202) {
+ dev_geo->c.pln_mode = NVM_PLANE_DOUBLE;
+ dev_geo->c.ws_opt <<= 1;
+ } else if (dev_geo->c.mpos & 0x040404) {
+ dev_geo->c.pln_mode = NVM_PLANE_QUAD;
+ dev_geo->c.ws_opt <<= 2;
+ }
+
+ dev_geo->c.num_pln = src->num_pln;
+ dev_geo->c.num_pg = le16_to_cpu(src->num_pg);
+ dev_geo->c.fpg_sz = le16_to_cpu(src->fpg_sz);
+
+ nvme_nvm_set_addr_12((struct nvm_addr_format_12 *)&dev_geo->c.addrf,
+ &id->ppaf);

return 0;
}

-static int nvme_nvm_setup_12(struct nvm_dev *nvmdev, struct nvm_id *nvm_id,
- struct nvme_nvm_id12 *id)
+static void nvme_nvm_set_addr_20(struct nvm_addr_format *dst,
+ struct nvme_nvm_id20_addrf *src)
{
- nvm_id->ver_id = id->ver_id;
- nvm_id->vmnt = id->vmnt;
- nvm_id->cap = le32_to_cpu(id->cap);
- nvm_id->dom = le32_to_cpu(id->dom);
- memcpy(&nvm_id->ppaf, &id->ppaf,
- sizeof(struct nvm_addr_format));
-
- return init_grp(nvm_id, id);
+ dst->ch_len = src->grp_len;
+ dst->lun_len = src->pu_len;
+ dst->chk_len = src->chk_len;
+ dst->sec_len = src->lba_len;
+
+ dst->sec_offset = 0;
+ dst->chk_offset = dst->sec_len;
+ dst->lun_offset = dst->chk_offset + dst->chk_len;
+ dst->ch_offset = dst->lun_offset + dst->lun_len;
+
+ dst->ch_mask = ((1ULL << dst->ch_len) - 1) << dst->ch_offset;
+ dst->lun_mask = ((1ULL << dst->lun_len) - 1) << dst->lun_offset;
+ dst->chk_mask = ((1ULL << dst->chk_len) - 1) << dst->chk_offset;
+ dst->sec_mask = ((1ULL << dst->sec_len) - 1) << dst->sec_offset;
}

-static int nvme_nvm_setup_20(struct nvm_dev *nvmdev, struct nvm_id *nvm_id,
- struct nvme_nvm_id20 *id)
+static int nvme_nvm_setup_20(struct nvme_nvm_id *gen_id,
+ struct nvm_dev_geo *dev_geo)
{
- nvm_id->ver_id = id->mjr;
+ struct nvme_nvm_id20 *id = (struct nvme_nvm_id20 *)gen_id;

- nvm_id->num_ch = le16_to_cpu(id->num_grp);
- nvm_id->num_lun = le16_to_cpu(id->num_pu);
- nvm_id->num_chk = le32_to_cpu(id->num_chk);
- nvm_id->clba = le32_to_cpu(id->clba);
+ dev_geo->major_ver_id = id->mjr;
+ dev_geo->minor_ver_id = id->mnr;

- nvm_id->ws_min = le32_to_cpu(id->ws_min);
- nvm_id->ws_opt = le32_to_cpu(id->ws_opt);
- nvm_id->mw_cunits = le32_to_cpu(id->mw_cunits);
+ /* Set compacted version for upper layers */
+ dev_geo->c.version = NVM_OCSSD_SPEC_20;

- nvm_id->trdt = le32_to_cpu(id->trdt);
- nvm_id->trdm = le32_to_cpu(id->trdm);
- nvm_id->tprt = le32_to_cpu(id->twrt);
- nvm_id->tprm = le32_to_cpu(id->twrm);
- nvm_id->tbet = le32_to_cpu(id->tcrst);
- nvm_id->tbem = le32_to_cpu(id->tcrsm);
+ if (!(dev_geo->major_ver_id == 2 && dev_geo->minor_ver_id == 0)) {
+ pr_err("nvm: OCSSD version not supported (v%d.%d)\n",
+ dev_geo->major_ver_id, dev_geo->minor_ver_id);
+ return -EINVAL;
+ }

- /* calculated values */
- nvm_id->ws_per_chk = nvm_id->clba / nvm_id->ws_min;
+ dev_geo->num_ch = le16_to_cpu(id->num_grp);
+ dev_geo->num_lun = le16_to_cpu(id->num_pu);
+ dev_geo->all_luns = dev_geo->num_ch * dev_geo->num_lun;

- /* 1.2 compatibility */
- nvm_id->ws_seq = NVM_IO_SNGL_ACCESS;
+ dev_geo->c.num_chk = le32_to_cpu(id->num_chk);
+ dev_geo->c.clba = le32_to_cpu(id->clba);
+ dev_geo->c.csecs = -1; /* Set by nvme identify */
+ dev_geo->c.sos = -1; /* Set by nvme identify */
+
+ dev_geo->c.ws_min = le32_to_cpu(id->ws_min);
+ dev_geo->c.ws_opt = le32_to_cpu(id->ws_opt);
+ dev_geo->c.mw_cunits = le32_to_cpu(id->mw_cunits);
+ dev_geo->c.maxoc = le32_to_cpu(id->maxoc);
+ dev_geo->c.maxocpu = le32_to_cpu(id->maxocpu);
+
+ dev_geo->c.mccap = le32_to_cpu(id->mccap);
+
+ dev_geo->c.trdt = le32_to_cpu(id->trdt);
+ dev_geo->c.trdm = le32_to_cpu(id->trdm);
+ dev_geo->c.tprt = le32_to_cpu(id->twrt);
+ dev_geo->c.tprm = le32_to_cpu(id->twrm);
+ dev_geo->c.tbet = le32_to_cpu(id->tcrst);
+ dev_geo->c.tbem = le32_to_cpu(id->tcrsm);
+
+ nvme_nvm_set_addr_20(&dev_geo->c.addrf, &id->lbaf);

return 0;
}

-static int nvme_nvm_identity(struct nvm_dev *nvmdev, struct nvm_id *nvm_id)
+static int nvme_nvm_identity(struct nvm_dev *nvmdev)
{
struct nvme_ns *ns = nvmdev->q->queuedata;
- struct nvme_nvm_id12 *id;
+ struct nvme_nvm_id *nvme_nvm_id;
struct nvme_nvm_command c = {};
int ret;

c.identity.opcode = nvme_nvm_admin_identity;
c.identity.nsid = cpu_to_le32(ns->head->ns_id);

- id = kmalloc(sizeof(struct nvme_nvm_id12), GFP_KERNEL);
- if (!id)
+ nvme_nvm_id = kmalloc(sizeof(struct nvme_nvm_id), GFP_KERNEL);
+ if (!nvme_nvm_id)
return -ENOMEM;

ret = nvme_submit_sync_cmd(ns->ctrl->admin_q, (struct nvme_command *)&c,
- id, sizeof(struct nvme_nvm_id12));
+ nvme_nvm_id, sizeof(struct nvme_nvm_id));
if (ret) {
ret = -EIO;
goto out;
@@ -378,22 +458,21 @@ static int nvme_nvm_identity(struct nvm_dev *nvmdev, struct nvm_id *nvm_id)
* The 1.2 and 2.0 specifications share the first byte in their geometry
* command to make it possible to know what version a device implements.
*/
- switch (id->ver_id) {
+ switch (nvme_nvm_id->ver_id) {
case 1:
- ret = nvme_nvm_setup_12(nvmdev, nvm_id, id);
+ ret = nvme_nvm_setup_12(nvme_nvm_id, &nvmdev->dev_geo);
break;
case 2:
- ret = nvme_nvm_setup_20(nvmdev, nvm_id,
- (struct nvme_nvm_id20 *)id);
+ ret = nvme_nvm_setup_20(nvme_nvm_id, &nvmdev->dev_geo);
break;
default:
- dev_err(ns->ctrl->device,
- "OCSSD revision not supported (%d)\n",
- nvm_id->ver_id);
+ dev_err(ns->ctrl->device, "OCSSD revision not supported (%d)\n",
+ nvme_nvm_id->ver_id);
ret = -EINVAL;
}
+
out:
- kfree(id);
+ kfree(nvme_nvm_id);
return ret;
}

@@ -401,12 +480,12 @@ static int nvme_nvm_get_bb_tbl(struct nvm_dev *nvmdev, struct ppa_addr ppa,
u8 *blks)
{
struct request_queue *q = nvmdev->q;
- struct nvm_geo *geo = &nvmdev->geo;
+ struct nvm_dev_geo *dev_geo = &nvmdev->dev_geo;
struct nvme_ns *ns = q->queuedata;
struct nvme_ctrl *ctrl = ns->ctrl;
struct nvme_nvm_command c = {};
struct nvme_nvm_bb_tbl *bb_tbl;
- int nr_blks = geo->nr_chks * geo->plane_mode;
+ int nr_blks = dev_geo->c.num_chk * dev_geo->c.num_pln;
int tblsz = sizeof(struct nvme_nvm_bb_tbl) + nr_blks;
int ret = 0;

@@ -447,7 +526,7 @@ static int nvme_nvm_get_bb_tbl(struct nvm_dev *nvmdev, struct ppa_addr ppa,
goto out;
}

- memcpy(blks, bb_tbl->blk, geo->nr_chks * geo->plane_mode);
+ memcpy(blks, bb_tbl->blk, dev_geo->c.num_chk * dev_geo->c.num_pln);
out:
kfree(bb_tbl);
return ret;
@@ -817,9 +896,10 @@ int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd, unsigned long arg)
void nvme_nvm_update_nvm_info(struct nvme_ns *ns)
{
struct nvm_dev *ndev = ns->ndev;
+ struct nvm_dev_geo *dev_geo = &ndev->dev_geo;

- ndev->identity.csecs = ndev->geo.sec_size = 1 << ns->lba_shift;
- ndev->identity.sos = ndev->geo.oob_size = ns->ms;
+ dev_geo->c.csecs = 1 << ns->lba_shift;
+ dev_geo->c.sos = ns->ms;
}

int nvme_nvm_register(struct nvme_ns *ns, char *disk_name, int node)
@@ -852,23 +932,24 @@ static ssize_t nvm_dev_attr_show(struct device *dev,
{
struct nvme_ns *ns = nvme_get_ns_from_dev(dev);
struct nvm_dev *ndev = ns->ndev;
- struct nvm_id *id;
+ struct nvm_dev_geo *dev_geo = &ndev->dev_geo;
struct attribute *attr;

if (!ndev)
return 0;

- id = &ndev->identity;
attr = &dattr->attr;

if (strcmp(attr->name, "version") == 0) {
- return scnprintf(page, PAGE_SIZE, "%u\n", id->ver_id);
+ return scnprintf(page, PAGE_SIZE, "%u.%u\n",
+ dev_geo->major_ver_id,
+ dev_geo->minor_ver_id);
} else if (strcmp(attr->name, "capabilities") == 0) {
- return scnprintf(page, PAGE_SIZE, "%u\n", id->cap);
+ return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.cap);
} else if (strcmp(attr->name, "read_typ") == 0) {
- return scnprintf(page, PAGE_SIZE, "%u\n", id->trdt);
+ return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.trdt);
} else if (strcmp(attr->name, "read_max") == 0) {
- return scnprintf(page, PAGE_SIZE, "%u\n", id->trdm);
+ return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.trdm);
} else {
return scnprintf(page,
PAGE_SIZE,
@@ -877,76 +958,80 @@ static ssize_t nvm_dev_attr_show(struct device *dev,
}
}

+static ssize_t nvm_dev_attr_show_ppaf(struct nvm_addr_format_12 *ppaf,
+ char *page)
+{
+ return scnprintf(page, PAGE_SIZE,
+ "0x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x\n",
+ ppaf->ch_offset, ppaf->ch_len,
+ ppaf->lun_offset, ppaf->lun_len,
+ ppaf->pln_offset, ppaf->pln_len,
+ ppaf->blk_offset, ppaf->blk_len,
+ ppaf->pg_offset, ppaf->pg_len,
+ ppaf->sec_offset, ppaf->sec_len);
+}
+
static ssize_t nvm_dev_attr_show_12(struct device *dev,
struct device_attribute *dattr, char *page)
{
struct nvme_ns *ns = nvme_get_ns_from_dev(dev);
struct nvm_dev *ndev = ns->ndev;
- struct nvm_id *id;
+ struct nvm_dev_geo *dev_geo = &ndev->dev_geo;
struct attribute *attr;

if (!ndev)
return 0;

- id = &ndev->identity;
attr = &dattr->attr;

if (strcmp(attr->name, "vendor_opcode") == 0) {
- return scnprintf(page, PAGE_SIZE, "%u\n", id->vmnt);
+ return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.vmnt);
} else if (strcmp(attr->name, "device_mode") == 0) {
- return scnprintf(page, PAGE_SIZE, "%u\n", id->dom);
+ return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.dom);
/* kept for compatibility */
} else if (strcmp(attr->name, "media_manager") == 0) {
return scnprintf(page, PAGE_SIZE, "%s\n", "gennvm");
} else if (strcmp(attr->name, "ppa_format") == 0) {
- return scnprintf(page, PAGE_SIZE,
- "0x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x\n",
- id->ppaf.ch_offset, id->ppaf.ch_len,
- id->ppaf.lun_offset, id->ppaf.lun_len,
- id->ppaf.pln_offset, id->ppaf.pln_len,
- id->ppaf.blk_offset, id->ppaf.blk_len,
- id->ppaf.pg_offset, id->ppaf.pg_len,
- id->ppaf.sect_offset, id->ppaf.sect_len);
+ return nvm_dev_attr_show_ppaf((void *)&dev_geo->c.addrf, page);
} else if (strcmp(attr->name, "media_type") == 0) { /* u8 */
- return scnprintf(page, PAGE_SIZE, "%u\n", id->mtype);
+ return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.mtype);
} else if (strcmp(attr->name, "flash_media_type") == 0) {
- return scnprintf(page, PAGE_SIZE, "%u\n", id->fmtype);
+ return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.fmtype);
} else if (strcmp(attr->name, "num_channels") == 0) {
- return scnprintf(page, PAGE_SIZE, "%u\n", id->num_ch);
+ return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->num_ch);
} else if (strcmp(attr->name, "num_luns") == 0) {
- return scnprintf(page, PAGE_SIZE, "%u\n", id->num_lun);
+ return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->num_lun);
} else if (strcmp(attr->name, "num_planes") == 0) {
- return scnprintf(page, PAGE_SIZE, "%u\n", id->num_pln);
+ return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.num_pln);
} else if (strcmp(attr->name, "num_blocks") == 0) { /* u16 */
- return scnprintf(page, PAGE_SIZE, "%u\n", id->num_chk);
+ return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.num_chk);
} else if (strcmp(attr->name, "num_pages") == 0) {
- return scnprintf(page, PAGE_SIZE, "%u\n", id->num_pg);
+ return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.num_pg);
} else if (strcmp(attr->name, "page_size") == 0) {
- return scnprintf(page, PAGE_SIZE, "%u\n", id->fpg_sz);
+ return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.fpg_sz);
} else if (strcmp(attr->name, "hw_sector_size") == 0) {
- return scnprintf(page, PAGE_SIZE, "%u\n", id->csecs);
+ return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.csecs);
} else if (strcmp(attr->name, "oob_sector_size") == 0) {/* u32 */
- return scnprintf(page, PAGE_SIZE, "%u\n", id->sos);
+ return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.sos);
} else if (strcmp(attr->name, "prog_typ") == 0) {
- return scnprintf(page, PAGE_SIZE, "%u\n", id->tprt);
+ return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.tprt);
} else if (strcmp(attr->name, "prog_max") == 0) {
- return scnprintf(page, PAGE_SIZE, "%u\n", id->tprm);
+ return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.tprm);
} else if (strcmp(attr->name, "erase_typ") == 0) {
- return scnprintf(page, PAGE_SIZE, "%u\n", id->tbet);
+ return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.tbet);
} else if (strcmp(attr->name, "erase_max") == 0) {
- return scnprintf(page, PAGE_SIZE, "%u\n", id->tbem);
+ return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.tbem);
} else if (strcmp(attr->name, "multiplane_modes") == 0) {
- return scnprintf(page, PAGE_SIZE, "0x%08x\n", id->mpos);
+ return scnprintf(page, PAGE_SIZE, "0x%08x\n", dev_geo->c.mpos);
} else if (strcmp(attr->name, "media_capabilities") == 0) {
- return scnprintf(page, PAGE_SIZE, "0x%08x\n", id->mccap);
+ return scnprintf(page, PAGE_SIZE, "0x%08x\n", dev_geo->c.mccap);
} else if (strcmp(attr->name, "max_phys_secs") == 0) {
return scnprintf(page, PAGE_SIZE, "%u\n",
ndev->ops->max_phys_sect);
} else {
- return scnprintf(page,
- PAGE_SIZE,
- "Unhandled attr(%s) in `nvm_dev_attr_show_12`\n",
- attr->name);
+ return scnprintf(page, PAGE_SIZE,
+ "Unhandled attr(%s) in `nvm_dev_attr_show_12`\n",
+ attr->name);
}
}

@@ -955,42 +1040,40 @@ static ssize_t nvm_dev_attr_show_20(struct device *dev,
{
struct nvme_ns *ns = nvme_get_ns_from_dev(dev);
struct nvm_dev *ndev = ns->ndev;
- struct nvm_id *id;
+ struct nvm_dev_geo *dev_geo = &ndev->dev_geo;
struct attribute *attr;

if (!ndev)
return 0;

- id = &ndev->identity;
attr = &dattr->attr;

if (strcmp(attr->name, "groups") == 0) {
- return scnprintf(page, PAGE_SIZE, "%u\n", id->num_ch);
+ return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->num_ch);
} else if (strcmp(attr->name, "punits") == 0) {
- return scnprintf(page, PAGE_SIZE, "%u\n", id->num_lun);
+ return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->num_lun);
} else if (strcmp(attr->name, "chunks") == 0) {
- return scnprintf(page, PAGE_SIZE, "%u\n", id->num_chk);
+ return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.num_chk);
} else if (strcmp(attr->name, "clba") == 0) {
- return scnprintf(page, PAGE_SIZE, "%u\n", id->clba);
+ return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.clba);
} else if (strcmp(attr->name, "ws_min") == 0) {
- return scnprintf(page, PAGE_SIZE, "%u\n", id->ws_min);
+ return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.ws_min);
} else if (strcmp(attr->name, "ws_opt") == 0) {
- return scnprintf(page, PAGE_SIZE, "%u\n", id->ws_opt);
+ return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.ws_opt);
} else if (strcmp(attr->name, "mw_cunits") == 0) {
- return scnprintf(page, PAGE_SIZE, "%u\n", id->mw_cunits);
+ return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.mw_cunits);
} else if (strcmp(attr->name, "write_typ") == 0) {
- return scnprintf(page, PAGE_SIZE, "%u\n", id->tprt);
+ return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.tprt);
} else if (strcmp(attr->name, "write_max") == 0) {
- return scnprintf(page, PAGE_SIZE, "%u\n", id->tprm);
+ return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.tprm);
} else if (strcmp(attr->name, "reset_typ") == 0) {
- return scnprintf(page, PAGE_SIZE, "%u\n", id->tbet);
+ return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.tbet);
} else if (strcmp(attr->name, "reset_max") == 0) {
- return scnprintf(page, PAGE_SIZE, "%u\n", id->tbem);
+ return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.tbem);
} else {
- return scnprintf(page,
- PAGE_SIZE,
- "Unhandled attr(%s) in `nvm_dev_attr_show_20`\n",
- attr->name);
+ return scnprintf(page, PAGE_SIZE,
+ "Unhandled attr(%s) in `nvm_dev_attr_show_20`\n",
+ attr->name);
}
}

@@ -1109,10 +1192,13 @@ static const struct attribute_group nvm_dev_attr_group_20 = {

int nvme_nvm_register_sysfs(struct nvme_ns *ns)
{
- if (!ns->ndev)
+ struct nvm_dev *ndev = ns->ndev;
+ struct nvm_dev_geo *dev_geo = &ndev->dev_geo;
+
+ if (!ndev)
return -EINVAL;

- switch (ns->ndev->identity.ver_id) {
+ switch (dev_geo->major_ver_id) {
case 1:
return sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
&nvm_dev_attr_group_12);
@@ -1126,7 +1212,10 @@ int nvme_nvm_register_sysfs(struct nvme_ns *ns)

void nvme_nvm_unregister_sysfs(struct nvme_ns *ns)
{
- switch (ns->ndev->identity.ver_id) {
+ struct nvm_dev *ndev = ns->ndev;
+ struct nvm_dev_geo *dev_geo = &ndev->dev_geo;
+
+ switch (dev_geo->major_ver_id) {
case 1:
sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
&nvm_dev_attr_group_12);
diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h
index b717c000b712..6a567bd19b73 100644
--- a/include/linux/lightnvm.h
+++ b/include/linux/lightnvm.h
@@ -23,6 +23,11 @@ enum {
#define NVM_LUN_BITS (8)
#define NVM_CH_BITS (7)

+enum {
+ NVM_OCSSD_SPEC_12 = 12,
+ NVM_OCSSD_SPEC_20 = 20,
+};
+
struct ppa_addr {
/* Generic structure for all addresses */
union {
@@ -50,7 +55,7 @@ struct nvm_id;
struct nvm_dev;
struct nvm_tgt_dev;

-typedef int (nvm_id_fn)(struct nvm_dev *, struct nvm_id *);
+typedef int (nvm_id_fn)(struct nvm_dev *);
typedef int (nvm_op_bb_tbl_fn)(struct nvm_dev *, struct ppa_addr, u8 *);
typedef int (nvm_op_set_bb_fn)(struct nvm_dev *, struct ppa_addr *, int, int);
typedef int (nvm_submit_io_fn)(struct nvm_dev *, struct nvm_rq *);
@@ -154,62 +159,113 @@ struct nvm_id_lp_tbl {
struct nvm_id_lp_mlc mlc;
};

-struct nvm_addr_format {
- u8 ch_offset;
+struct nvm_addr_format_12 {
u8 ch_len;
- u8 lun_offset;
u8 lun_len;
- u8 pln_offset;
+ u8 blk_len;
+ u8 pg_len;
u8 pln_len;
+ u8 sec_len;
+
+ u8 ch_offset;
+ u8 lun_offset;
u8 blk_offset;
- u8 blk_len;
u8 pg_offset;
- u8 pg_len;
- u8 sect_offset;
- u8 sect_len;
+ u8 pln_offset;
+ u8 sec_offset;
+
+ u64 ch_mask;
+ u64 lun_mask;
+ u64 blk_mask;
+ u64 pg_mask;
+ u64 pln_mask;
+ u64 sec_mask;
+};
+
+struct nvm_addr_format {
+ u8 ch_len;
+ u8 lun_len;
+ u8 chk_len;
+ u8 sec_len;
+ u8 rsv_len[2];
+
+ u8 ch_offset;
+ u8 lun_offset;
+ u8 chk_offset;
+ u8 sec_offset;
+ u8 rsv_off[2];
+
+ u64 ch_mask;
+ u64 lun_mask;
+ u64 chk_mask;
+ u64 sec_mask;
+ u64 rsv_mask[2];
};

-struct nvm_id {
- u8 ver_id;
+/* Device common geometry */
+struct nvm_common_geo {
+ /* kernel short version */
+ u8 version;
+
+ /* chunk geometry */
+ u32 num_chk; /* chunks per lun */
+ u32 clba; /* sectors per chunk */
+ u16 csecs; /* sector size */
+ u16 sos; /* out-of-band area size */
+
+ /* device write constrains */
+ u32 ws_min; /* minimum write size */
+ u32 ws_opt; /* optimal write size */
+ u32 mw_cunits; /* distance required for successful read */
+ u32 maxoc; /* maximum open chunks */
+ u32 maxocpu; /* maximum open chunks per parallel unit */
+
+ /* device capabilities */
+ u32 mccap;
+
+ /* device timings */
+ u32 trdt; /* Avg. Tread (ns) */
+ u32 trdm; /* Max Tread (ns) */
+ u32 tprt; /* Avg. Tprog (ns) */
+ u32 tprm; /* Max Tprog (ns) */
+ u32 tbet; /* Avg. Terase (ns) */
+ u32 tbem; /* Max Terase (ns) */
+
+ /* generic address format */
+ struct nvm_addr_format addrf;
+
+ /* 1.2 compatibility */
u8 vmnt;
u32 cap;
u32 dom;

- struct nvm_addr_format ppaf;
-
- u8 num_ch;
- u8 num_lun;
- u16 num_chk;
- u16 clba;
- u16 csecs;
- u16 sos;
-
- u32 ws_min;
- u32 ws_opt;
- u32 mw_cunits;
-
- u32 trdt;
- u32 trdm;
- u32 tprt;
- u32 tprm;
- u32 tbet;
- u32 tbem;
- u32 mpos;
- u32 mccap;
- u16 cpar;
-
- /* calculated values */
- u16 ws_seq;
- u16 ws_per_chk;
-
- /* 1.2 compatibility */
u8 mtype;
u8 fmtype;

+ u16 cpar;
+ u32 mpos;
+
u8 num_pln;
+ u8 pln_mode;
u16 num_pg;
u16 fpg_sz;
-} __packed;
+};
+
+/* Device identified geometry */
+struct nvm_dev_geo {
+ /* device reported version */
+ u8 major_ver_id;
+ u8 minor_ver_id;
+
+ /* full device geometry */
+ u16 num_ch;
+ u16 num_lun;
+
+ /* calculated values */
+ u16 all_luns;
+
+ struct nvm_common_geo c;
+};

struct nvm_target {
struct list_head list;
@@ -274,38 +330,23 @@ enum {
NVM_BLK_ST_BAD = 0x8, /* Bad block */
};

-
-/* Device generic information */
+/* Instance geometry */
struct nvm_geo {
- /* generic geometry */
- int nr_chnls;
- int all_luns; /* across channels */
- int nr_luns; /* per channel */
- int nr_chks; /* per lun */
-
- int sec_size;
- int oob_size;
- int mccap;
-
- int sec_per_chk;
- int sec_per_lun;
-
- int ws_min;
- int ws_opt;
- int ws_seq;
- int ws_per_chk;
+ /* instance specific geometry */
+ int num_ch;
+ int num_lun; /* per channel */

int max_rq_size;
-
int op;

- struct nvm_addr_format ppaf;
+ /* common geometry */
+ struct nvm_common_geo c;

- /* Legacy 1.2 specific geometry */
- int plane_mode; /* drive device in single, double or quad mode */
- int nr_planes;
- int sec_per_pg; /* only sectors for a single page */
- int sec_per_pl; /* all sectors across planes */
+ /* calculated values */
+ int all_luns; /* across channels */
+ int all_chunks; /* across channels */
+
+ sector_t total_secs; /* across channels */
};

/* sub-device structure */
@@ -316,9 +357,6 @@ struct nvm_tgt_dev {
/* Base ppas for target LUNs */
struct ppa_addr *luns;

- sector_t total_secs;
-
- struct nvm_id identity;
struct request_queue *q;

struct nvm_dev *parent;
@@ -331,15 +369,11 @@ struct nvm_dev {
struct list_head devices;

/* Device information */
- struct nvm_geo geo;
-
- unsigned long total_secs;
+ struct nvm_dev_geo dev_geo;

unsigned long *lun_map;
void *dma_pool;

- struct nvm_id identity;
-
/* Backend device */
struct request_queue *q;
char name[DISK_NAME_LEN];
@@ -359,14 +393,16 @@ static inline struct ppa_addr generic_to_dev_addr(struct nvm_tgt_dev *tgt_dev,
struct ppa_addr r)
{
struct nvm_geo *geo = &tgt_dev->geo;
+ struct nvm_addr_format_12 *ppaf =
+ (struct nvm_addr_format_12 *)&geo->c.addrf;
struct ppa_addr l;

- l.ppa = ((u64)r.g.blk) << geo->ppaf.blk_offset;
- l.ppa |= ((u64)r.g.pg) << geo->ppaf.pg_offset;
- l.ppa |= ((u64)r.g.sec) << geo->ppaf.sect_offset;
- l.ppa |= ((u64)r.g.pl) << geo->ppaf.pln_offset;
- l.ppa |= ((u64)r.g.lun) << geo->ppaf.lun_offset;
- l.ppa |= ((u64)r.g.ch) << geo->ppaf.ch_offset;
+ l.ppa = ((u64)r.g.ch) << ppaf->ch_offset;
+ l.ppa |= ((u64)r.g.lun) << ppaf->lun_offset;
+ l.ppa |= ((u64)r.g.blk) << ppaf->blk_offset;
+ l.ppa |= ((u64)r.g.pg) << ppaf->pg_offset;
+ l.ppa |= ((u64)r.g.pl) << ppaf->pln_offset;
+ l.ppa |= ((u64)r.g.sec) << ppaf->sec_offset;

return l;
}
@@ -375,24 +411,18 @@ static inline struct ppa_addr dev_to_generic_addr(struct nvm_tgt_dev *tgt_dev,
struct ppa_addr r)
{
struct nvm_geo *geo = &tgt_dev->geo;
+ struct nvm_addr_format_12 *ppaf =
+ (struct nvm_addr_format_12 *)&geo->c.addrf;
struct ppa_addr l;

l.ppa = 0;
- /*
- * (r.ppa << X offset) & X len bitmask. X eq. blk, pg, etc.
- */
- l.g.blk = (r.ppa >> geo->ppaf.blk_offset) &
- (((1 << geo->ppaf.blk_len) - 1));
- l.g.pg |= (r.ppa >> geo->ppaf.pg_offset) &
- (((1 << geo->ppaf.pg_len) - 1));
- l.g.sec |= (r.ppa >> geo->ppaf.sect_offset) &
- (((1 << geo->ppaf.sect_len) - 1));
- l.g.pl |= (r.ppa >> geo->ppaf.pln_offset) &
- (((1 << geo->ppaf.pln_len) - 1));
- l.g.lun |= (r.ppa >> geo->ppaf.lun_offset) &
- (((1 << geo->ppaf.lun_len) - 1));
- l.g.ch |= (r.ppa >> geo->ppaf.ch_offset) &
- (((1 << geo->ppaf.ch_len) - 1));
+
+ l.g.ch = (r.ppa & ppaf->ch_mask) >> ppaf->ch_offset;
+ l.g.lun = (r.ppa & ppaf->lun_mask) >> ppaf->lun_offset;
+ l.g.blk = (r.ppa & ppaf->blk_mask) >> ppaf->blk_offset;
+ l.g.pg = (r.ppa & ppaf->pg_mask) >> ppaf->pg_offset;
+ l.g.pl = (r.ppa & ppaf->pln_mask) >> ppaf->pln_offset;
+ l.g.sec = (r.ppa & ppaf->sec_mask) >> ppaf->sec_offset;

return l;
}
--
2.7.4


2018-02-13 14:08:30

by Javier González

[permalink] [raw]
Subject: [PATCH 8/8] lightnvm: pblk: implement 2.0 support

Implement 2.0 support in pblk. This includes the address formatting and
mapping paths, as well as the sysfs entries for them.
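
For reference, the 2.0 path maps a line address onto the device by striping
sectors across channels and LUNs. Below is a minimal, standalone userspace
sketch of that arithmetic (it mirrors the logic in addr_to_gen_ppa() and
pblk_dev_ppa_to_line_addr() further down, but is not pblk code; the struct
and function names and the geometry numbers are made up for illustration):

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

struct geo20 {
	uint64_t sec_stripe;	/* ws_opt: sectors per LUN per stripe */
	uint64_t ch_stripe;	/* number of channels (groups) */
	uint64_t lun_stripe;	/* LUNs (parallel units) per channel */
};

struct ppa20 {
	uint64_t ch, lun, chk, sec;
};

/* line paddr -> device ppa: stripe over sectors, channels, LUNs */
static struct ppa20 paddr_to_ppa(const struct geo20 *g, uint64_t paddr,
				 uint64_t chk)
{
	struct ppa20 p = { .chk = chk };

	p.sec = paddr % g->sec_stripe;
	paddr /= g->sec_stripe;
	p.ch = paddr % g->ch_stripe;
	paddr /= g->ch_stripe;
	p.lun = paddr % g->lun_stripe;
	paddr /= g->lun_stripe;
	p.sec += g->sec_stripe * paddr;	/* remaining full write stripes */

	return p;
}

/* device ppa -> line paddr: inverse of the mapping above */
static uint64_t ppa_to_paddr(const struct geo20 *g, struct ppa20 p)
{
	uint64_t sec_lun_stripe = g->sec_stripe * g->ch_stripe;
	uint64_t sec_ws_stripe = sec_lun_stripe * g->lun_stripe;
	uint64_t paddr;

	paddr = p.ch * g->sec_stripe;
	paddr += p.lun * sec_lun_stripe;
	paddr += (p.sec / g->sec_stripe) * sec_ws_stripe;
	paddr += p.sec % g->sec_stripe;

	return paddr;
}

int main(void)
{
	struct geo20 g = { .sec_stripe = 8, .ch_stripe = 4, .lun_stripe = 2 };
	uint64_t paddr;

	for (paddr = 0; paddr < 4096; paddr++)
		assert(ppa_to_paddr(&g, paddr_to_ppa(&g, paddr, 0)) == paddr);

	printf("2.0 stripe mapping round-trips\n");
	return 0;
}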

Signed-off-by: Javier González <[email protected]>
---
drivers/lightnvm/pblk-init.c | 57 ++++++++++--
drivers/lightnvm/pblk-sysfs.c | 36 ++++++--
drivers/lightnvm/pblk.h | 198 ++++++++++++++++++++++++++++++++----------
3 files changed, 233 insertions(+), 58 deletions(-)

diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c
index 04685f2d39d3..d5a31fc986cc 100644
--- a/drivers/lightnvm/pblk-init.c
+++ b/drivers/lightnvm/pblk-init.c
@@ -231,20 +231,63 @@ static int pblk_set_addrf_12(struct nvm_geo *geo,
return dst->blk_offset + src->blk_len;
}

+static int pblk_set_addrf_20(struct nvm_geo *geo,
+ struct nvm_addr_format *adst,
+ struct pblk_addr_format *udst)
+{
+ struct nvm_addr_format *src = &geo->c.addrf;
+
+ adst->ch_len = get_count_order(geo->num_ch);
+ adst->lun_len = get_count_order(geo->num_lun);
+ adst->chk_len = src->chk_len;
+ adst->sec_len = src->sec_len;
+
+ adst->sec_offset = 0;
+ adst->ch_offset = adst->sec_len;
+ adst->lun_offset = adst->ch_offset + adst->ch_len;
+ adst->chk_offset = adst->lun_offset + adst->lun_len;
+
+ adst->sec_mask = ((1ULL << adst->sec_len) - 1) << adst->sec_offset;
+ adst->chk_mask = ((1ULL << adst->chk_len) - 1) << adst->chk_offset;
+ adst->lun_mask = ((1ULL << adst->lun_len) - 1) << adst->lun_offset;
+ adst->ch_mask = ((1ULL << adst->ch_len) - 1) << adst->ch_offset;
+
+ udst->sec_stripe = geo->c.ws_opt;
+ udst->ch_stripe = geo->num_ch;
+ udst->lun_stripe = geo->num_lun;
+
+ udst->sec_lun_stripe = udst->sec_stripe * udst->ch_stripe;
+ udst->sec_ws_stripe = udst->sec_lun_stripe * udst->lun_stripe;
+
+ return adst->chk_offset + adst->chk_len;
+}
+
static int pblk_set_addrf(struct pblk *pblk)
{
struct nvm_tgt_dev *dev = pblk->dev;
struct nvm_geo *geo = &dev->geo;
int mod;

- div_u64_rem(geo->c.clba, pblk->min_write_pgs, &mod);
- if (mod) {
- pr_err("pblk: bad configuration of sectors/pages\n");
+ switch (geo->c.version) {
+ case NVM_OCSSD_SPEC_12:
+ div_u64_rem(geo->c.clba, pblk->min_write_pgs, &mod);
+ if (mod) {
+ pr_err("pblk: bad configuration of sectors/pages\n");
+ return -EINVAL;
+ }
+
+ pblk->addrf_len = pblk_set_addrf_12(geo, (void *)&pblk->addrf);
+ break;
+ case NVM_OCSSD_SPEC_20:
+ pblk->addrf_len = pblk_set_addrf_20(geo, (void *)&pblk->addrf,
+ &pblk->uaddrf);
+ break;
+ default:
+ pr_err("pblk: OCSSD revision not supported (%d)\n",
+ geo->c.version);
return -EINVAL;
}

- pblk->addrf_len = pblk_set_addrf_12(geo, (void *)&pblk->addrf);
-
return 0;
}

@@ -1111,7 +1154,9 @@ static void *pblk_init(struct nvm_tgt_dev *dev, struct gendisk *tdisk,
struct pblk *pblk;
int ret;

- if (geo->c.version != NVM_OCSSD_SPEC_12) {
+ /* pblk supports 1.2 and 2.0 versions */
+ if (!(geo->c.version == NVM_OCSSD_SPEC_12 ||
+ geo->c.version == NVM_OCSSD_SPEC_20)) {
pr_err("pblk: OCSSD version not supported (%u)\n",
geo->c.version);
return ERR_PTR(-EINVAL);
diff --git a/drivers/lightnvm/pblk-sysfs.c b/drivers/lightnvm/pblk-sysfs.c
index 191af0c6591e..60b8d931e4ba 100644
--- a/drivers/lightnvm/pblk-sysfs.c
+++ b/drivers/lightnvm/pblk-sysfs.c
@@ -113,15 +113,16 @@ static ssize_t pblk_sysfs_ppaf(struct pblk *pblk, char *page)
{
struct nvm_tgt_dev *dev = pblk->dev;
struct nvm_geo *geo = &dev->geo;
- struct nvm_addr_format_12 *ppaf;
- struct nvm_addr_format_12 *geo_ppaf;
ssize_t sz = 0;

- ppaf = (struct nvm_addr_format_12 *)&pblk->addrf;
- geo_ppaf = (struct nvm_addr_format_12 *)&geo->c.addrf;
+ if (geo->c.version == NVM_OCSSD_SPEC_12) {
+ struct nvm_addr_format_12 *ppaf =
+ (struct nvm_addr_format_12 *)&pblk->addrf;
+ struct nvm_addr_format_12 *geo_ppaf =
+ (struct nvm_addr_format_12 *)&geo->c.addrf;

- sz = snprintf(page, PAGE_SIZE,
- "pblk:(s:%d)ch:%d/%d,lun:%d/%d,blk:%d/%d,pg:%d/%d,pl:%d/%d,sec:%d/%d\n",
+ sz = snprintf(page, PAGE_SIZE,
+ "pblk:(s:%d)ch:%d/%d,lun:%d/%d,blk:%d/%d,pg:%d/%d,pl:%d/%d,sec:%d/%d\n",
pblk->addrf_len,
ppaf->ch_offset, ppaf->ch_len,
ppaf->lun_offset, ppaf->lun_len,
@@ -130,14 +131,33 @@ static ssize_t pblk_sysfs_ppaf(struct pblk *pblk, char *page)
ppaf->pln_offset, ppaf->pln_len,
ppaf->sec_offset, ppaf->sec_len);

- sz += snprintf(page + sz, PAGE_SIZE - sz,
- "device:ch:%d/%d,lun:%d/%d,blk:%d/%d,pg:%d/%d,pl:%d/%d,sec:%d/%d\n",
+ sz += snprintf(page + sz, PAGE_SIZE - sz,
+ "device:ch:%d/%d,lun:%d/%d,blk:%d/%d,pg:%d/%d,pl:%d/%d,sec:%d/%d\n",
geo_ppaf->ch_offset, geo_ppaf->ch_len,
geo_ppaf->lun_offset, geo_ppaf->lun_len,
geo_ppaf->blk_offset, geo_ppaf->blk_len,
geo_ppaf->pg_offset, geo_ppaf->pg_len,
geo_ppaf->pln_offset, geo_ppaf->pln_len,
geo_ppaf->sec_offset, geo_ppaf->sec_len);
+ } else {
+ struct nvm_addr_format *ppaf = &pblk->addrf;
+ struct nvm_addr_format *geo_ppaf = &geo->c.addrf;
+
+ sz = snprintf(page, PAGE_SIZE,
+				"pblk:(s:%d)ch:%d/%d,lun:%d/%d,chk:%d/%d,sec:%d/%d\n",
+ pblk->addrf_len,
+ ppaf->ch_offset, ppaf->ch_len,
+ ppaf->lun_offset, ppaf->lun_len,
+ ppaf->chk_offset, ppaf->chk_len,
+ ppaf->sec_offset, ppaf->sec_len);
+
+ sz += snprintf(page + sz, PAGE_SIZE - sz,
+ "device:ch:%d/%d,lun:%d/%d,chk:%d/%d,sec:%d/%d\n",
+ geo_ppaf->ch_offset, geo_ppaf->ch_len,
+ geo_ppaf->lun_offset, geo_ppaf->lun_len,
+ geo_ppaf->chk_offset, geo_ppaf->chk_len,
+ geo_ppaf->sec_offset, geo_ppaf->sec_len);
+ }

return sz;
}
diff --git a/drivers/lightnvm/pblk.h b/drivers/lightnvm/pblk.h
index fba978e7f7c1..a85befe836bc 100644
--- a/drivers/lightnvm/pblk.h
+++ b/drivers/lightnvm/pblk.h
@@ -574,6 +574,18 @@ enum {
PBLK_STATE_STOPPED = 3,
};

+/* Internal format to support non-power-of-2 device formats (for now) */
+struct pblk_addr_format {
+ /* gen to dev */
+ int sec_stripe;
+ int ch_stripe;
+ int lun_stripe;
+
+ /* dev to gen */
+ int sec_lun_stripe;
+ int sec_ws_stripe;
+};
+
struct pblk {
struct nvm_tgt_dev *dev;
struct gendisk *disk;
@@ -586,7 +598,8 @@ struct pblk {
struct pblk_line_mgmt l_mg; /* Line management */
struct pblk_line_meta lm; /* Line metadata */

- struct nvm_addr_format addrf;
+ struct nvm_addr_format addrf; /* Aligned address format */
+ struct pblk_addr_format uaddrf; /* Unaligned address format */
int addrf_len;

struct pblk_rb rwb;
@@ -967,17 +980,43 @@ static inline int pblk_ppa_to_pos(struct nvm_geo *geo, struct ppa_addr p)
static inline struct ppa_addr addr_to_gen_ppa(struct pblk *pblk, u64 paddr,
u64 line_id)
{
- struct nvm_addr_format_12 *ppaf =
- (struct nvm_addr_format_12 *)&pblk->addrf;
+ struct nvm_tgt_dev *dev = pblk->dev;
+ struct nvm_geo *geo = &dev->geo;
struct ppa_addr ppa;

- ppa.ppa = 0;
- ppa.g.blk = line_id;
- ppa.g.pg = (paddr & ppaf->pg_mask) >> ppaf->pg_offset;
- ppa.g.lun = (paddr & ppaf->lun_mask) >> ppaf->lun_offset;
- ppa.g.ch = (paddr & ppaf->ch_mask) >> ppaf->ch_offset;
- ppa.g.pl = (paddr & ppaf->pln_mask) >> ppaf->pln_offset;
- ppa.g.sec = (paddr & ppaf->sec_mask) >> ppaf->sec_offset;
+ if (geo->c.version == NVM_OCSSD_SPEC_12) {
+ struct nvm_addr_format_12 *ppaf =
+ (struct nvm_addr_format_12 *)&pblk->addrf;
+
+ ppa.ppa = 0;
+ ppa.g.blk = line_id;
+ ppa.g.pg = (paddr & ppaf->pg_mask) >> ppaf->pg_offset;
+ ppa.g.lun = (paddr & ppaf->lun_mask) >> ppaf->lun_offset;
+ ppa.g.ch = (paddr & ppaf->ch_mask) >> ppaf->ch_offset;
+ ppa.g.pl = (paddr & ppaf->pln_mask) >> ppaf->pln_offset;
+ ppa.g.sec = (paddr & ppaf->sec_mask) >> ppaf->sec_offset;
+ } else {
+ struct pblk_addr_format *uaddrf = &pblk->uaddrf;
+ int secs, chnls, luns;
+
+ ppa.ppa = 0;
+
+ ppa.m.chk = line_id;
+
+ div_u64_rem(paddr, uaddrf->sec_stripe, &secs);
+ ppa.m.sec = secs;
+
+ sector_div(paddr, uaddrf->sec_stripe);
+ div_u64_rem(paddr, uaddrf->ch_stripe, &chnls);
+ ppa.m.ch = chnls;
+
+ sector_div(paddr, uaddrf->ch_stripe);
+ div_u64_rem(paddr, uaddrf->lun_stripe, &luns);
+ ppa.m.lun = luns;
+
+ sector_div(paddr, uaddrf->lun_stripe);
+ ppa.m.sec += uaddrf->sec_stripe * paddr;
+ }

return ppa;
}
@@ -985,15 +1024,32 @@ static inline struct ppa_addr addr_to_gen_ppa(struct pblk *pblk, u64 paddr,
static inline u64 pblk_dev_ppa_to_line_addr(struct pblk *pblk,
struct ppa_addr p)
{
- struct nvm_addr_format_12 *ppaf =
- (struct nvm_addr_format_12 *)&pblk->addrf;
+ struct nvm_tgt_dev *dev = pblk->dev;
+ struct nvm_geo *geo = &dev->geo;
u64 paddr;

- paddr = (u64)p.g.ch << ppaf->ch_offset;
- paddr |= (u64)p.g.lun << ppaf->lun_offset;
- paddr |= (u64)p.g.pg << ppaf->pg_offset;
- paddr |= (u64)p.g.pl << ppaf->pln_offset;
- paddr |= (u64)p.g.sec << ppaf->sec_offset;
+ if (geo->c.version == NVM_OCSSD_SPEC_12) {
+ struct nvm_addr_format_12 *ppaf =
+ (struct nvm_addr_format_12 *)&pblk->addrf;
+
+ paddr = (u64)p.g.ch << ppaf->ch_offset;
+ paddr |= (u64)p.g.lun << ppaf->lun_offset;
+ paddr |= (u64)p.g.pg << ppaf->pg_offset;
+ paddr |= (u64)p.g.pl << ppaf->pln_offset;
+ paddr |= (u64)p.g.sec << ppaf->sec_offset;
+ } else {
+ struct pblk_addr_format *uaddrf = &pblk->uaddrf;
+ u64 secs = (u64)p.m.sec;
+ int sec_stripe;
+
+ paddr = (u64)p.m.ch * uaddrf->sec_stripe;
+ paddr += (u64)p.m.lun * uaddrf->sec_lun_stripe;
+
+ div_u64_rem(secs, uaddrf->sec_stripe, &sec_stripe);
+ sector_div(secs, uaddrf->sec_stripe);
+ paddr += secs * uaddrf->sec_ws_stripe;
+ paddr += sec_stripe;
+ }

return paddr;
}
@@ -1010,15 +1066,37 @@ static inline struct ppa_addr pblk_ppa32_to_ppa64(struct pblk *pblk, u32 ppa32)
ppa64.c.line = ppa32 & ((~0U) >> 1);
ppa64.c.is_cached = 1;
} else {
- struct nvm_addr_format_12 *ppaf =
+ struct nvm_tgt_dev *dev = pblk->dev;
+ struct nvm_geo *geo = &dev->geo;
+
+ if (geo->c.version == NVM_OCSSD_SPEC_12) {
+ struct nvm_addr_format_12 *ppaf =
(struct nvm_addr_format_12 *)&pblk->addrf;

- ppa64.g.ch = (ppa32 & ppaf->ch_mask) >> ppaf->ch_offset;
- ppa64.g.lun = (ppa32 & ppaf->lun_mask) >> ppaf->lun_offset;
- ppa64.g.blk = (ppa32 & ppaf->blk_mask) >> ppaf->blk_offset;
- ppa64.g.pg = (ppa32 & ppaf->pg_mask) >> ppaf->pg_offset;
- ppa64.g.pl = (ppa32 & ppaf->pln_mask) >> ppaf->pln_offset;
- ppa64.g.sec = (ppa32 & ppaf->sec_mask) >> ppaf->sec_offset;
+ ppa64.g.ch = (ppa32 & ppaf->ch_mask) >>
+ ppaf->ch_offset;
+ ppa64.g.lun = (ppa32 & ppaf->lun_mask) >>
+ ppaf->lun_offset;
+ ppa64.g.blk = (ppa32 & ppaf->blk_mask) >>
+ ppaf->blk_offset;
+ ppa64.g.pg = (ppa32 & ppaf->pg_mask) >>
+ ppaf->pg_offset;
+ ppa64.g.pl = (ppa32 & ppaf->pln_mask) >>
+ ppaf->pln_offset;
+ ppa64.g.sec = (ppa32 & ppaf->sec_mask) >>
+ ppaf->sec_offset;
+ } else {
+ struct nvm_addr_format *lbaf = &pblk->addrf;
+
+ ppa64.m.ch = (ppa32 & lbaf->ch_mask) >>
+ lbaf->ch_offset;
+ ppa64.m.lun = (ppa32 & lbaf->lun_mask) >>
+ lbaf->lun_offset;
+ ppa64.m.chk = (ppa32 & lbaf->chk_mask) >>
+ lbaf->chk_offset;
+ ppa64.m.sec = (ppa32 & lbaf->sec_mask) >>
+ lbaf->sec_offset;
+ }
}

return ppa64;
@@ -1034,15 +1112,27 @@ static inline u32 pblk_ppa64_to_ppa32(struct pblk *pblk, struct ppa_addr ppa64)
ppa32 |= ppa64.c.line;
ppa32 |= 1U << 31;
} else {
- struct nvm_addr_format_12 *ppaf =
+ struct nvm_tgt_dev *dev = pblk->dev;
+ struct nvm_geo *geo = &dev->geo;
+
+ if (geo->c.version == NVM_OCSSD_SPEC_12) {
+ struct nvm_addr_format_12 *ppaf =
(struct nvm_addr_format_12 *)&pblk->addrf;

- ppa32 |= ppa64.g.ch << ppaf->ch_offset;
- ppa32 |= ppa64.g.lun << ppaf->lun_offset;
- ppa32 |= ppa64.g.blk << ppaf->blk_offset;
- ppa32 |= ppa64.g.pg << ppaf->pg_offset;
- ppa32 |= ppa64.g.pl << ppaf->pln_offset;
- ppa32 |= ppa64.g.sec << ppaf->sec_offset;
+ ppa32 |= ppa64.g.ch << ppaf->ch_offset;
+ ppa32 |= ppa64.g.lun << ppaf->lun_offset;
+ ppa32 |= ppa64.g.blk << ppaf->blk_offset;
+ ppa32 |= ppa64.g.pg << ppaf->pg_offset;
+ ppa32 |= ppa64.g.pl << ppaf->pln_offset;
+ ppa32 |= ppa64.g.sec << ppaf->sec_offset;
+ } else {
+ struct nvm_addr_format *lbaf = &pblk->addrf;
+
+ ppa32 |= ppa64.m.ch << lbaf->ch_offset;
+ ppa32 |= ppa64.m.lun << lbaf->lun_offset;
+ ppa32 |= ppa64.m.chk << lbaf->chk_offset;
+ ppa32 |= ppa64.m.sec << lbaf->sec_offset;
+ }
}

return ppa32;
@@ -1160,6 +1250,9 @@ static inline int pblk_set_progr_mode(struct pblk *pblk, int type)
struct nvm_geo *geo = &dev->geo;
int flags;

+ if (geo->c.version == NVM_OCSSD_SPEC_20)
+ return 0;
+
flags = geo->c.pln_mode >> 1;

if (type == PBLK_WRITE)
@@ -1179,6 +1272,9 @@ static inline int pblk_set_read_mode(struct pblk *pblk, int type)
struct nvm_geo *geo = &dev->geo;
int flags;

+ if (geo->c.version == NVM_OCSSD_SPEC_20)
+ return 0;
+
flags = NVM_IO_SUSPEND | NVM_IO_SCRAMBLE_ENABLE;
if (type == PBLK_READ_SEQUENTIAL)
flags |= geo->c.pln_mode >> 1;
@@ -1192,16 +1288,21 @@ static inline int pblk_io_aligned(struct pblk *pblk, int nr_secs)
}

#ifdef CONFIG_NVM_DEBUG
-static inline void print_ppa(struct ppa_addr *p, char *msg, int error)
+static inline void print_ppa(struct nvm_geo *geo, struct ppa_addr *p,
+ char *msg, int error)
{
if (p->c.is_cached) {
pr_err("ppa: (%s: %x) cache line: %llu\n",
msg, error, (u64)p->c.line);
- } else {
+ } else if (geo->c.version == NVM_OCSSD_SPEC_12) {
pr_err("ppa: (%s: %x):ch:%d,lun:%d,blk:%d,pg:%d,pl:%d,sec:%d\n",
msg, error,
p->g.ch, p->g.lun, p->g.blk,
p->g.pg, p->g.pl, p->g.sec);
+ } else {
+ pr_err("ppa: (%s: %x):ch:%d,lun:%d,chk:%d,sec:%d\n",
+ msg, error,
+ p->m.ch, p->m.lun, p->m.chk, p->m.sec);
}
}

@@ -1211,13 +1312,13 @@ static inline void pblk_print_failed_rqd(struct pblk *pblk, struct nvm_rq *rqd,
int bit = -1;

if (rqd->nr_ppas == 1) {
- print_ppa(&rqd->ppa_addr, "rqd", error);
+ print_ppa(&pblk->dev->geo, &rqd->ppa_addr, "rqd", error);
return;
}

while ((bit = find_next_bit((void *)&rqd->ppa_status, rqd->nr_ppas,
bit + 1)) < rqd->nr_ppas) {
- print_ppa(&rqd->ppa_list[bit], "rqd", error);
+ print_ppa(&pblk->dev->geo, &rqd->ppa_list[bit], "rqd", error);
}

pr_err("error:%d, ppa_status:%llx\n", error, rqd->ppa_status);
@@ -1233,16 +1334,25 @@ static inline int pblk_boundary_ppa_checks(struct nvm_tgt_dev *tgt_dev,
for (i = 0; i < nr_ppas; i++) {
ppa = &ppas[i];

- if (!ppa->c.is_cached &&
- ppa->g.ch < geo->num_ch &&
- ppa->g.lun < geo->num_lun &&
- ppa->g.pl < geo->c.num_pln &&
- ppa->g.blk < geo->c.num_chk &&
- ppa->g.pg < geo->c.num_pg &&
- ppa->g.sec < geo->c.ws_min)
- continue;
+ if (geo->c.version == NVM_OCSSD_SPEC_12) {
+ if (!ppa->c.is_cached &&
+ ppa->g.ch < geo->num_ch &&
+ ppa->g.lun < geo->num_lun &&
+ ppa->g.pl < geo->c.num_pln &&
+ ppa->g.blk < geo->c.num_chk &&
+ ppa->g.pg < geo->c.num_pg &&
+ ppa->g.sec < geo->c.ws_min)
+ continue;
+ } else {
+ if (!ppa->c.is_cached &&
+ ppa->m.ch < geo->num_ch &&
+ ppa->m.lun < geo->num_lun &&
+ ppa->m.chk < geo->c.num_chk &&
+ ppa->m.sec < geo->c.clba)
+ continue;
+ }

- print_ppa(ppa, "boundary", i);
+ print_ppa(geo, ppa, "boundary", i);

return 1;
}
--
2.7.4


2018-02-13 14:09:00

by Javier González

[permalink] [raw]
Subject: [PATCH 4/8] lightnvm: convert address based on spec. version

Create the device ppa from the generic address (and back) for both the 1.2
and 2.0 formats, based on the spec version exposed in the geometry.
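
As an illustration of the 2.0 side, the conversion is a plain shift/mask
using the offsets and masks reported by the device. The standalone sketch
below shows the mechanics only; the field widths and their ordering are
invented for the example (a real device reports them in its address format):

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

struct addrf {
	uint8_t sec_off, chk_off, lun_off, ch_off;
	uint64_t sec_mask, chk_mask, lun_mask, ch_mask;
};

static void addrf_init(struct addrf *f, uint8_t sec_len, uint8_t chk_len,
		       uint8_t lun_len, uint8_t ch_len)
{
	/* example layout, LSB first: sector, chunk, LUN, channel */
	f->sec_off = 0;
	f->chk_off = f->sec_off + sec_len;
	f->lun_off = f->chk_off + chk_len;
	f->ch_off = f->lun_off + lun_len;

	f->sec_mask = ((1ULL << sec_len) - 1) << f->sec_off;
	f->chk_mask = ((1ULL << chk_len) - 1) << f->chk_off;
	f->lun_mask = ((1ULL << lun_len) - 1) << f->lun_off;
	f->ch_mask = ((1ULL << ch_len) - 1) << f->ch_off;
}

int main(void)
{
	struct addrf f;
	uint64_t ch = 3, lun = 5, chk = 1021, sec = 77;
	uint64_t dev;

	addrf_init(&f, 12, 16, 6, 4);

	/* generic -> device: shift each field to its device offset */
	dev = (ch << f.ch_off) | (lun << f.lun_off) |
	      (chk << f.chk_off) | (sec << f.sec_off);

	/* device -> generic: mask the field out, shift it back down */
	assert(((dev & f.ch_mask) >> f.ch_off) == ch);
	assert(((dev & f.lun_mask) >> f.lun_off) == lun);
	assert(((dev & f.chk_mask) >> f.chk_off) == chk);
	assert(((dev & f.sec_mask) >> f.sec_off) == sec);

	printf("device ppa: 0x%016llx\n", (unsigned long long)dev);
	return 0;
}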

Signed-off-by: Javier González <[email protected]>
---
include/linux/lightnvm.h | 52 +++++++++++++++++++++++++++++++++---------------
1 file changed, 36 insertions(+), 16 deletions(-)

diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h
index e035ae4c9acc..1148b3f22b27 100644
--- a/include/linux/lightnvm.h
+++ b/include/linux/lightnvm.h
@@ -412,16 +412,26 @@ static inline struct ppa_addr generic_to_dev_addr(struct nvm_tgt_dev *tgt_dev,
struct ppa_addr r)
{
struct nvm_geo *geo = &tgt_dev->geo;
- struct nvm_addr_format_12 *ppaf =
- (struct nvm_addr_format_12 *)&geo->c.addrf;
struct ppa_addr l;

- l.ppa = ((u64)r.g.ch) << ppaf->ch_offset;
- l.ppa |= ((u64)r.g.lun) << ppaf->lun_offset;
- l.ppa |= ((u64)r.g.blk) << ppaf->blk_offset;
- l.ppa |= ((u64)r.g.pg) << ppaf->pg_offset;
- l.ppa |= ((u64)r.g.pl) << ppaf->pln_offset;
- l.ppa |= ((u64)r.g.sec) << ppaf->sec_offset;
+ if (geo->c.version == NVM_OCSSD_SPEC_12) {
+ struct nvm_addr_format_12 *ppaf =
+ (struct nvm_addr_format_12 *)&geo->c.addrf;
+
+ l.ppa = ((u64)r.g.ch) << ppaf->ch_offset;
+ l.ppa |= ((u64)r.g.lun) << ppaf->lun_offset;
+ l.ppa |= ((u64)r.g.blk) << ppaf->blk_offset;
+ l.ppa |= ((u64)r.g.pg) << ppaf->pg_offset;
+ l.ppa |= ((u64)r.g.pl) << ppaf->pln_offset;
+ l.ppa |= ((u64)r.g.sec) << ppaf->sec_offset;
+ } else {
+ struct nvm_addr_format *lbaf = &geo->c.addrf;
+
+ l.ppa = ((u64)r.m.ch) << lbaf->ch_offset;
+ l.ppa |= ((u64)r.m.lun) << lbaf->lun_offset;
+ l.ppa |= ((u64)r.m.chk) << lbaf->chk_offset;
+ l.ppa |= ((u64)r.m.sec) << lbaf->sec_offset;
+ }

return l;
}
@@ -430,18 +440,28 @@ static inline struct ppa_addr dev_to_generic_addr(struct nvm_tgt_dev *tgt_dev,
struct ppa_addr r)
{
struct nvm_geo *geo = &tgt_dev->geo;
- struct nvm_addr_format_12 *ppaf =
- (struct nvm_addr_format_12 *)&geo->c.addrf;
struct ppa_addr l;

l.ppa = 0;

- l.g.ch = (r.ppa & ppaf->ch_mask) >> ppaf->ch_offset;
- l.g.lun = (r.ppa & ppaf->lun_mask) >> ppaf->lun_offset;
- l.g.blk = (r.ppa & ppaf->blk_mask) >> ppaf->blk_offset;
- l.g.pg = (r.ppa & ppaf->pg_mask) >> ppaf->pg_offset;
- l.g.pl = (r.ppa & ppaf->pln_mask) >> ppaf->pln_offset;
- l.g.sec = (r.ppa & ppaf->sec_mask) >> ppaf->sec_offset;
+ if (geo->c.version == NVM_OCSSD_SPEC_12) {
+ struct nvm_addr_format_12 *ppaf =
+ (struct nvm_addr_format_12 *)&geo->c.addrf;
+
+ l.g.ch = (r.ppa & ppaf->ch_mask) >> ppaf->ch_offset;
+ l.g.lun = (r.ppa & ppaf->lun_mask) >> ppaf->lun_offset;
+ l.g.blk = (r.ppa & ppaf->blk_mask) >> ppaf->blk_offset;
+ l.g.pg = (r.ppa & ppaf->pg_mask) >> ppaf->pg_offset;
+ l.g.pl = (r.ppa & ppaf->pln_mask) >> ppaf->pln_offset;
+ l.g.sec = (r.ppa & ppaf->sec_mask) >> ppaf->sec_offset;
+ } else {
+ struct nvm_addr_format *lbaf = &geo->c.addrf;
+
+ l.m.ch = (r.ppa & lbaf->ch_mask) >> lbaf->ch_offset;
+ l.m.lun = (r.ppa & lbaf->lun_mask) >> lbaf->lun_offset;
+ l.m.chk = (r.ppa & lbaf->chk_mask) >> lbaf->chk_offset;
+ l.m.sec = (r.ppa & lbaf->sec_mask) >> lbaf->sec_offset;
+ }

return l;
}
--
2.7.4


2018-02-13 14:09:15

by Javier González

[permalink] [raw]
Subject: [PATCH 2/8] lightnvm: show generic geometry in sysfs

From: Javier González <[email protected]>

Apart from showing the geometry returned by the different identify
commands, provide the generic geometry too, as this is the geometry that
targets will use to describe the device.
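
As a usage note, a minimal userspace reader for some of the generic
attributes could look like the sketch below. The sysfs path is an assumption
(it depends on the namespace name, e.g. /sys/block/nvme0n1/lightnvm/), so
take it as illustration only:

#include <stdio.h>
#include <string.h>

static int read_attr(const char *dir, const char *name, char *buf, int len)
{
	char path[256];
	FILE *f;

	snprintf(path, sizeof(path), "%s/%s", dir, name);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (!fgets(buf, len, f)) {
		fclose(f);
		return -1;
	}
	buf[strcspn(buf, "\n")] = '\0';
	fclose(f);
	return 0;
}

int main(int argc, char **argv)
{
	const char *dir = argc > 1 ? argv[1] : "/sys/block/nvme0n1/lightnvm";
	const char *attrs[] = { "version", "clba", "csecs", "sos",
				"ws_min", "ws_opt", "mw_cunits" };
	char val[64];
	unsigned int i;

	for (i = 0; i < sizeof(attrs) / sizeof(attrs[0]); i++) {
		if (read_attr(dir, attrs[i], val, sizeof(val)))
			printf("%-10s <unavailable>\n", attrs[i]);
		else
			printf("%-10s %s\n", attrs[i], val);
	}
	return 0;
}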

Signed-off-by: Javier González <[email protected]>
---
drivers/nvme/host/lightnvm.c | 146 ++++++++++++++++++++++++++++---------------
1 file changed, 97 insertions(+), 49 deletions(-)

diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
index 97739e668602..7bc75182c723 100644
--- a/drivers/nvme/host/lightnvm.c
+++ b/drivers/nvme/host/lightnvm.c
@@ -944,8 +944,27 @@ static ssize_t nvm_dev_attr_show(struct device *dev,
return scnprintf(page, PAGE_SIZE, "%u.%u\n",
dev_geo->major_ver_id,
dev_geo->minor_ver_id);
- } else if (strcmp(attr->name, "capabilities") == 0) {
- return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.cap);
+ } else if (strcmp(attr->name, "clba") == 0) {
+ return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.clba);
+ } else if (strcmp(attr->name, "csecs") == 0) {
+ return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.csecs);
+ } else if (strcmp(attr->name, "sos") == 0) {
+ return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.sos);
+ } else if (strcmp(attr->name, "ws_min") == 0) {
+ return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.ws_min);
+ } else if (strcmp(attr->name, "ws_opt") == 0) {
+ return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.ws_opt);
+ } else if (strcmp(attr->name, "maxoc") == 0) {
+ return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.maxoc);
+ } else if (strcmp(attr->name, "maxocpu") == 0) {
+ return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.maxocpu);
+ } else if (strcmp(attr->name, "mw_cunits") == 0) {
+ return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.mw_cunits);
+ } else if (strcmp(attr->name, "media_capabilities") == 0) {
+ return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.mccap);
+ } else if (strcmp(attr->name, "max_phys_secs") == 0) {
+ return scnprintf(page, PAGE_SIZE, "%u\n",
+ ndev->ops->max_phys_sect);
} else if (strcmp(attr->name, "read_typ") == 0) {
return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.trdt);
} else if (strcmp(attr->name, "read_max") == 0) {
@@ -984,19 +1003,8 @@ static ssize_t nvm_dev_attr_show_12(struct device *dev,

attr = &dattr->attr;

- if (strcmp(attr->name, "vendor_opcode") == 0) {
- return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.vmnt);
- } else if (strcmp(attr->name, "device_mode") == 0) {
- return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.dom);
- /* kept for compatibility */
- } else if (strcmp(attr->name, "media_manager") == 0) {
- return scnprintf(page, PAGE_SIZE, "%s\n", "gennvm");
- } else if (strcmp(attr->name, "ppa_format") == 0) {
+ if (strcmp(attr->name, "ppa_format") == 0) {
return nvm_dev_attr_show_ppaf((void *)&dev_geo->c.addrf, page);
- } else if (strcmp(attr->name, "media_type") == 0) { /* u8 */
- return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.mtype);
- } else if (strcmp(attr->name, "flash_media_type") == 0) {
- return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.fmtype);
} else if (strcmp(attr->name, "num_channels") == 0) {
return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->num_ch);
} else if (strcmp(attr->name, "num_luns") == 0) {
@@ -1011,8 +1019,6 @@ static ssize_t nvm_dev_attr_show_12(struct device *dev,
return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.fpg_sz);
} else if (strcmp(attr->name, "hw_sector_size") == 0) {
return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.csecs);
- } else if (strcmp(attr->name, "oob_sector_size") == 0) {/* u32 */
- return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.sos);
} else if (strcmp(attr->name, "prog_typ") == 0) {
return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.tprt);
} else if (strcmp(attr->name, "prog_max") == 0) {
@@ -1021,13 +1027,21 @@ static ssize_t nvm_dev_attr_show_12(struct device *dev,
return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.tbet);
} else if (strcmp(attr->name, "erase_max") == 0) {
return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.tbem);
+ } else if (strcmp(attr->name, "vendor_opcode") == 0) {
+ return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.vmnt);
+ } else if (strcmp(attr->name, "device_mode") == 0) {
+ return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.dom);
+ /* kept for compatibility */
+ } else if (strcmp(attr->name, "media_manager") == 0) {
+ return scnprintf(page, PAGE_SIZE, "%s\n", "gennvm");
+ } else if (strcmp(attr->name, "capabilities") == 0) {
+ return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.cap);
+ } else if (strcmp(attr->name, "media_type") == 0) { /* u8 */
+ return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.mtype);
+ } else if (strcmp(attr->name, "flash_media_type") == 0) {
+ return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.fmtype);
} else if (strcmp(attr->name, "multiplane_modes") == 0) {
return scnprintf(page, PAGE_SIZE, "0x%08x\n", dev_geo->c.mpos);
- } else if (strcmp(attr->name, "media_capabilities") == 0) {
- return scnprintf(page, PAGE_SIZE, "0x%08x\n", dev_geo->c.mccap);
- } else if (strcmp(attr->name, "max_phys_secs") == 0) {
- return scnprintf(page, PAGE_SIZE, "%u\n",
- ndev->ops->max_phys_sect);
} else {
return scnprintf(page, PAGE_SIZE,
"Unhandled attr(%s) in `nvm_dev_attr_show_12`\n",
@@ -1035,6 +1049,17 @@ static ssize_t nvm_dev_attr_show_12(struct device *dev,
}
}

+static ssize_t nvm_dev_attr_show_lbaf(struct nvm_addr_format *lbaf,
+ char *page)
+{
+ return scnprintf(page, PAGE_SIZE,
+ "0x%02x%02x%02x%02x%02x%02x%02x%02x\n",
+ lbaf->ch_offset, lbaf->ch_len,
+ lbaf->lun_offset, lbaf->lun_len,
+ lbaf->chk_offset, lbaf->chk_len,
+ lbaf->sec_offset, lbaf->sec_len);
+}
+
static ssize_t nvm_dev_attr_show_20(struct device *dev,
struct device_attribute *dattr, char *page)
{
@@ -1048,20 +1073,14 @@ static ssize_t nvm_dev_attr_show_20(struct device *dev,

attr = &dattr->attr;

- if (strcmp(attr->name, "groups") == 0) {
+ if (strcmp(attr->name, "lba_format") == 0) {
+ return nvm_dev_attr_show_lbaf((void *)&dev_geo->c.addrf, page);
+ } else if (strcmp(attr->name, "groups") == 0) {
return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->num_ch);
} else if (strcmp(attr->name, "punits") == 0) {
return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->num_lun);
} else if (strcmp(attr->name, "chunks") == 0) {
return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.num_chk);
- } else if (strcmp(attr->name, "clba") == 0) {
- return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.clba);
- } else if (strcmp(attr->name, "ws_min") == 0) {
- return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.ws_min);
- } else if (strcmp(attr->name, "ws_opt") == 0) {
- return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.ws_opt);
- } else if (strcmp(attr->name, "mw_cunits") == 0) {
- return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.mw_cunits);
} else if (strcmp(attr->name, "write_typ") == 0) {
return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.tprt);
} else if (strcmp(attr->name, "write_max") == 0) {
@@ -1086,7 +1105,19 @@ static ssize_t nvm_dev_attr_show_20(struct device *dev,

/* general attributes */
static NVM_DEV_ATTR_RO(version);
-static NVM_DEV_ATTR_RO(capabilities);
+
+static NVM_DEV_ATTR_RO(ws_min);
+static NVM_DEV_ATTR_RO(ws_opt);
+static NVM_DEV_ATTR_RO(mw_cunits);
+static NVM_DEV_ATTR_RO(maxoc);
+static NVM_DEV_ATTR_RO(maxocpu);
+
+static NVM_DEV_ATTR_RO(media_capabilities);
+static NVM_DEV_ATTR_RO(max_phys_secs);
+
+static NVM_DEV_ATTR_RO(clba);
+static NVM_DEV_ATTR_RO(csecs);
+static NVM_DEV_ATTR_RO(sos);

static NVM_DEV_ATTR_RO(read_typ);
static NVM_DEV_ATTR_RO(read_max);
@@ -1105,42 +1136,53 @@ static NVM_DEV_ATTR_12_RO(num_blocks);
static NVM_DEV_ATTR_12_RO(num_pages);
static NVM_DEV_ATTR_12_RO(page_size);
static NVM_DEV_ATTR_12_RO(hw_sector_size);
-static NVM_DEV_ATTR_12_RO(oob_sector_size);
static NVM_DEV_ATTR_12_RO(prog_typ);
static NVM_DEV_ATTR_12_RO(prog_max);
static NVM_DEV_ATTR_12_RO(erase_typ);
static NVM_DEV_ATTR_12_RO(erase_max);
static NVM_DEV_ATTR_12_RO(multiplane_modes);
-static NVM_DEV_ATTR_12_RO(media_capabilities);
-static NVM_DEV_ATTR_12_RO(max_phys_secs);
+static NVM_DEV_ATTR_12_RO(capabilities);

static struct attribute *nvm_dev_attrs_12[] = {
&dev_attr_version.attr,
- &dev_attr_capabilities.attr,
-
- &dev_attr_vendor_opcode.attr,
- &dev_attr_device_mode.attr,
- &dev_attr_media_manager.attr,
&dev_attr_ppa_format.attr,
- &dev_attr_media_type.attr,
- &dev_attr_flash_media_type.attr,
+
&dev_attr_num_channels.attr,
&dev_attr_num_luns.attr,
&dev_attr_num_planes.attr,
&dev_attr_num_blocks.attr,
&dev_attr_num_pages.attr,
&dev_attr_page_size.attr,
+
&dev_attr_hw_sector_size.attr,
- &dev_attr_oob_sector_size.attr,
+
+ &dev_attr_clba.attr,
+ &dev_attr_csecs.attr,
+ &dev_attr_sos.attr,
+
+ &dev_attr_ws_min.attr,
+ &dev_attr_ws_opt.attr,
+ &dev_attr_maxoc.attr,
+ &dev_attr_maxocpu.attr,
+ &dev_attr_mw_cunits.attr,
+
+ &dev_attr_media_capabilities.attr,
+ &dev_attr_max_phys_secs.attr,
+
&dev_attr_read_typ.attr,
&dev_attr_read_max.attr,
&dev_attr_prog_typ.attr,
&dev_attr_prog_max.attr,
&dev_attr_erase_typ.attr,
&dev_attr_erase_max.attr,
+
+ &dev_attr_vendor_opcode.attr,
+ &dev_attr_device_mode.attr,
+ &dev_attr_media_manager.attr,
+ &dev_attr_capabilities.attr,
+ &dev_attr_media_type.attr,
+ &dev_attr_flash_media_type.attr,
&dev_attr_multiplane_modes.attr,
- &dev_attr_media_capabilities.attr,
- &dev_attr_max_phys_secs.attr,

NULL,
};
@@ -1152,12 +1194,9 @@ static const struct attribute_group nvm_dev_attr_group_12 = {

/* 2.0 values */
static NVM_DEV_ATTR_20_RO(groups);
+static NVM_DEV_ATTR_20_RO(lba_format);
static NVM_DEV_ATTR_20_RO(punits);
static NVM_DEV_ATTR_20_RO(chunks);
-static NVM_DEV_ATTR_20_RO(clba);
-static NVM_DEV_ATTR_20_RO(ws_min);
-static NVM_DEV_ATTR_20_RO(ws_opt);
-static NVM_DEV_ATTR_20_RO(mw_cunits);
static NVM_DEV_ATTR_20_RO(write_typ);
static NVM_DEV_ATTR_20_RO(write_max);
static NVM_DEV_ATTR_20_RO(reset_typ);
@@ -1165,16 +1204,25 @@ static NVM_DEV_ATTR_20_RO(reset_max);

static struct attribute *nvm_dev_attrs_20[] = {
&dev_attr_version.attr,
- &dev_attr_capabilities.attr,
+ &dev_attr_lba_format.attr,

&dev_attr_groups.attr,
&dev_attr_punits.attr,
&dev_attr_chunks.attr,
+
&dev_attr_clba.attr,
+ &dev_attr_csecs.attr,
+ &dev_attr_sos.attr,
+
&dev_attr_ws_min.attr,
&dev_attr_ws_opt.attr,
+ &dev_attr_maxoc.attr,
+ &dev_attr_maxocpu.attr,
&dev_attr_mw_cunits.attr,

+ &dev_attr_media_capabilities.attr,
+ &dev_attr_max_phys_secs.attr,
+
&dev_attr_read_typ.attr,
&dev_attr_read_max.attr,
&dev_attr_write_typ.attr,
--
2.7.4


2018-02-13 14:09:23

by Javier González

[permalink] [raw]
Subject: [PATCH 3/8] lightnvm: add support for 2.0 address format

Add support for the 2.0 address format. Also, rearrange the 1.2 address bits
so that the 1.2 and 2.0 layouts align.
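
For illustration, the two layouts can be sketched as the standalone program
below, using the bit widths from this patch; both views add up to 64 bits, so
the union stays a single u64. Note that bit-field ordering is compiler/ABI
dependent, so this only mirrors the kernel layout on a typical little-endian
GCC build:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

union ppa {
	struct {		/* 1.2 device format */
		uint64_t ch  : 8;
		uint64_t lun : 8;
		uint64_t blk : 16;
		uint64_t pg  : 16;
		uint64_t pl  : 4;
		uint64_t sec : 4;
		uint64_t rsv : 8;
	} g;
	struct {		/* 2.0 device format */
		uint64_t ch  : 8;
		uint64_t lun : 8;
		uint64_t chk : 16;
		uint64_t sec : 24;
		uint64_t rsv : 8;
	} m;
	uint64_t ppa;
};

int main(void)
{
	union ppa p = { .ppa = 0 };

	assert(sizeof(union ppa) == sizeof(uint64_t));

	/* fill in a 2.0 address and show the raw 64-bit value */
	p.m.ch = 2;
	p.m.lun = 3;
	p.m.chk = 900;
	p.m.sec = 4096;

	printf("raw ppa: 0x%016llx\n", (unsigned long long)p.ppa);
	return 0;
}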

Signed-off-by: Javier González <[email protected]>
---
include/linux/lightnvm.h | 45 ++++++++++++++++++++++++++++++++-------------
1 file changed, 32 insertions(+), 13 deletions(-)

diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h
index 6a567bd19b73..e035ae4c9acc 100644
--- a/include/linux/lightnvm.h
+++ b/include/linux/lightnvm.h
@@ -16,12 +16,21 @@ enum {
NVM_IOTYPE_GC = 1,
};

-#define NVM_BLK_BITS (16)
-#define NVM_PG_BITS (16)
-#define NVM_SEC_BITS (8)
-#define NVM_PL_BITS (8)
-#define NVM_LUN_BITS (8)
-#define NVM_CH_BITS (7)
+/* 1.2 format */
+#define NVM_12_CH_BITS (8)
+#define NVM_12_LUN_BITS (8)
+#define NVM_12_BLK_BITS (16)
+#define NVM_12_PG_BITS (16)
+#define NVM_12_PL_BITS (4)
+#define NVM_12_SEC_BITS (4)
+#define NVM_12_RESERVED (8)
+
+/* 2.0 format */
+#define NVM_20_CH_BITS (8)
+#define NVM_20_LUN_BITS (8)
+#define NVM_20_CHK_BITS (16)
+#define NVM_20_SEC_BITS (24)
+#define NVM_20_RESERVED (8)

enum {
NVM_OCSSD_SPEC_12 = 12,
@@ -31,16 +40,26 @@ enum {
struct ppa_addr {
/* Generic structure for all addresses */
union {
+ /* 1.2 device format */
struct {
- u64 blk : NVM_BLK_BITS;
- u64 pg : NVM_PG_BITS;
- u64 sec : NVM_SEC_BITS;
- u64 pl : NVM_PL_BITS;
- u64 lun : NVM_LUN_BITS;
- u64 ch : NVM_CH_BITS;
- u64 reserved : 1;
+ u64 ch : NVM_12_CH_BITS;
+ u64 lun : NVM_12_LUN_BITS;
+ u64 blk : NVM_12_BLK_BITS;
+ u64 pg : NVM_12_PG_BITS;
+ u64 pl : NVM_12_PL_BITS;
+ u64 sec : NVM_12_SEC_BITS;
+ u64 reserved : NVM_12_RESERVED;
} g;

+ /* 2.0 device format */
+ struct {
+ u64 ch : NVM_20_CH_BITS;
+ u64 lun : NVM_20_LUN_BITS;
+ u64 chk : NVM_20_CHK_BITS;
+ u64 sec : NVM_20_SEC_BITS;
+ u64 reserved : NVM_20_RESERVED;
+ } m;
+
struct {
u64 line : 63;
u64 is_cached : 1;
--
2.7.4


2018-02-13 14:09:45

by Javier González

[permalink] [raw]
Subject: [PATCH 5/8] lightnvm: implement get log report chunk helpers

From: Javier González <[email protected]>

The 2.0 spec provides a report chunk log page that can be retrieved
using the standard NVMe get log page command. This replaces the dedicated
get/set bad block table in 1.2.

This patch implements the helper functions that allow targets to retrieve
the chunk metadata using get log page.
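
For illustration, the sketch below reproduces the two pieces of arithmetic
in userspace: the byte offset of a target's first chunk entry within the
device-wide log page, and the split of the transfer into 256KB get log page
calls. The geometry and target placement numbers are made up:

#include <stdio.h>

#define CHUNK_ENTRY_SZ	32UL		/* sizeof(struct nvm_chunk_log_page) */
#define MAX_XFER	(1UL << 18)	/* 256KB per get log page call */

int main(void)
{
	unsigned long num_lun = 32;	/* LUNs per channel */
	unsigned long num_chk = 1020;	/* chunks per LUN */
	unsigned long bch = 1, blun = 0; /* target's first channel/LUN */
	unsigned long tgt_luns = 32;	/* LUNs owned by the target */

	/* byte offset of the target's first entry in the log page */
	unsigned long lun_off = blun + bch * num_lun;
	unsigned long offset = lun_off * num_chk * CHUNK_ENTRY_SZ;

	/* total bytes to fetch for the whole target */
	unsigned long left = tgt_luns * num_chk * CHUNK_ENTRY_SZ;

	printf("start offset: %lu bytes, total: %lu bytes\n", offset, left);

	while (left) {
		unsigned long len = left > MAX_XFER ? MAX_XFER : left;

		/* a real implementation issues one get log page here,
		 * with numd = (len >> 2) - 1 dwords
		 */
		printf("  get log page: offset=%lu len=%lu\n", offset, len);

		offset += len;
		left -= len;
	}

	return 0;
}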

Signed-off-by: Javier González <[email protected]>
---
drivers/lightnvm/core.c | 28 +++++++++++++++++++++++++
drivers/nvme/host/lightnvm.c | 50 ++++++++++++++++++++++++++++++++++++++++++++
include/linux/lightnvm.h | 32 ++++++++++++++++++++++++++++
3 files changed, 110 insertions(+)

diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c
index 80492fa6ee76..6857a888544a 100644
--- a/drivers/lightnvm/core.c
+++ b/drivers/lightnvm/core.c
@@ -43,6 +43,8 @@ struct nvm_ch_map {
struct nvm_dev_map {
struct nvm_ch_map *chnls;
int nr_chnls;
+ int bch;
+ int blun;
};

static struct nvm_target *nvm_find_target(struct nvm_dev *dev, const char *name)
@@ -171,6 +173,9 @@ static struct nvm_tgt_dev *nvm_create_tgt_dev(struct nvm_dev *dev,
if (!dev_map->chnls)
goto err_chnls;

+ dev_map->bch = bch;
+ dev_map->blun = blun;
+
luns = kcalloc(nr_luns, sizeof(struct ppa_addr), GFP_KERNEL);
if (!luns)
goto err_luns;
@@ -561,6 +566,19 @@ static void nvm_unregister_map(struct nvm_dev *dev)
kfree(rmap);
}

+static unsigned long nvm_log_off_tgt_to_dev(struct nvm_tgt_dev *tgt_dev)
+{
+ struct nvm_dev_map *dev_map = tgt_dev->map;
+ struct nvm_geo *geo = &tgt_dev->geo;
+ int lun_off;
+ unsigned long off;
+
+ lun_off = dev_map->blun + dev_map->bch * geo->num_lun;
+ off = lun_off * geo->c.num_chk * sizeof(struct nvm_chunk_log_page);
+
+ return off;
+}
+
static void nvm_map_to_dev(struct nvm_tgt_dev *tgt_dev, struct ppa_addr *p)
{
struct nvm_dev_map *dev_map = tgt_dev->map;
@@ -720,6 +738,16 @@ static void nvm_free_rqd_ppalist(struct nvm_tgt_dev *tgt_dev,
nvm_dev_dma_free(tgt_dev->parent, rqd->ppa_list, rqd->dma_ppa_list);
}

+int nvm_get_chunk_log_page(struct nvm_tgt_dev *tgt_dev,
+ struct nvm_chunk_log_page *log,
+ unsigned long off, unsigned long len)
+{
+ struct nvm_dev *dev = tgt_dev->parent;
+
+ off += nvm_log_off_tgt_to_dev(tgt_dev);
+
+ return dev->ops->get_chunk_log_page(tgt_dev->parent, log, off, len);
+}

int nvm_set_tgt_bb_tbl(struct nvm_tgt_dev *tgt_dev, struct ppa_addr *ppas,
int nr_ppas, int type)
diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
index 7bc75182c723..355d9b0cf084 100644
--- a/drivers/nvme/host/lightnvm.c
+++ b/drivers/nvme/host/lightnvm.c
@@ -35,6 +35,10 @@ enum nvme_nvm_admin_opcode {
nvme_nvm_admin_set_bb_tbl = 0xf1,
};

+enum nvme_nvm_log_page {
+ NVME_NVM_LOG_REPORT_CHUNK = 0xCA,
+};
+
struct nvme_nvm_ph_rw {
__u8 opcode;
__u8 flags;
@@ -553,6 +557,50 @@ static int nvme_nvm_set_bb_tbl(struct nvm_dev *nvmdev, struct ppa_addr *ppas,
return ret;
}

+static int nvme_nvm_get_chunk_log_page(struct nvm_dev *nvmdev,
+ struct nvm_chunk_log_page *log,
+ unsigned long off,
+ unsigned long total_len)
+{
+ struct nvme_ns *ns = nvmdev->q->queuedata;
+ struct nvme_command c = { };
+ unsigned long offset = off, left = total_len;
+ unsigned long len, len_dwords;
+ void *buf = log;
+ int ret;
+
+ /* The offset needs to be dword-aligned */
+ if (offset & 0x3)
+ return -EINVAL;
+
+ do {
+ /* Send 256KB at a time */
+ len = (1 << 18) > left ? left : (1 << 18);
+ len_dwords = (len >> 2) - 1;
+
+ c.get_log_page.opcode = nvme_admin_get_log_page;
+ c.get_log_page.nsid = cpu_to_le32(ns->head->ns_id);
+ c.get_log_page.lid = NVME_NVM_LOG_REPORT_CHUNK;
+ c.get_log_page.lpol = cpu_to_le32(offset & 0xffffffff);
+ c.get_log_page.lpou = cpu_to_le32(offset >> 32);
+ c.get_log_page.numdl = cpu_to_le16(len_dwords & 0xffff);
+ c.get_log_page.numdu = cpu_to_le16(len_dwords >> 16);
+
+ ret = nvme_submit_sync_cmd(ns->ctrl->admin_q, &c, buf, len);
+ if (ret) {
+ dev_err(ns->ctrl->device,
+ "get chunk log page failed (%d)\n", ret);
+ break;
+ }
+
+ buf += len;
+ offset += len;
+ left -= len;
+ } while (left);
+
+ return ret;
+}
+
static inline void nvme_nvm_rqtocmd(struct nvm_rq *rqd, struct nvme_ns *ns,
struct nvme_nvm_command *c)
{
@@ -684,6 +732,8 @@ static struct nvm_dev_ops nvme_nvm_dev_ops = {
.get_bb_tbl = nvme_nvm_get_bb_tbl,
.set_bb_tbl = nvme_nvm_set_bb_tbl,

+ .get_chunk_log_page = nvme_nvm_get_chunk_log_page,
+
.submit_io = nvme_nvm_submit_io,
.submit_io_sync = nvme_nvm_submit_io_sync,

diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h
index 1148b3f22b27..eb2900a18160 100644
--- a/include/linux/lightnvm.h
+++ b/include/linux/lightnvm.h
@@ -73,10 +73,13 @@ struct nvm_rq;
struct nvm_id;
struct nvm_dev;
struct nvm_tgt_dev;
+struct nvm_chunk_log_page;

typedef int (nvm_id_fn)(struct nvm_dev *);
typedef int (nvm_op_bb_tbl_fn)(struct nvm_dev *, struct ppa_addr, u8 *);
typedef int (nvm_op_set_bb_fn)(struct nvm_dev *, struct ppa_addr *, int, int);
+typedef int (nvm_get_chunk_lp_fn)(struct nvm_dev *, struct nvm_chunk_log_page *,
+ unsigned long, unsigned long);
typedef int (nvm_submit_io_fn)(struct nvm_dev *, struct nvm_rq *);
typedef int (nvm_submit_io_sync_fn)(struct nvm_dev *, struct nvm_rq *);
typedef void *(nvm_create_dma_pool_fn)(struct nvm_dev *, char *);
@@ -90,6 +93,8 @@ struct nvm_dev_ops {
nvm_op_bb_tbl_fn *get_bb_tbl;
nvm_op_set_bb_fn *set_bb_tbl;

+ nvm_get_chunk_lp_fn *get_chunk_log_page;
+
nvm_submit_io_fn *submit_io;
nvm_submit_io_sync_fn *submit_io_sync;

@@ -286,6 +291,30 @@ struct nvm_dev_geo {
struct nvm_common_geo c;
};

+enum {
+ /* Chunk states */
+ NVM_CHK_ST_FREE = 1 << 0,
+ NVM_CHK_ST_CLOSED = 1 << 1,
+ NVM_CHK_ST_OPEN = 1 << 2,
+ NVM_CHK_ST_OFFLINE = 1 << 3,
+ NVM_CHK_ST_HOST_USE = 1 << 7,
+
+ /* Chunk types */
+ NVM_CHK_TP_W_SEQ = 1 << 0,
+ NVM_CHK_TP_W_RAN = 1 << 2,
+ NVM_CHK_TP_SZ_SPEC = 1 << 4,
+};
+
+struct nvm_chunk_log_page {
+ __u8 state;
+ __u8 type;
+ __u8 wear_index;
+ __u8 rsvd[5];
+ __u64 slba;
+ __u64 cnlb;
+ __u64 wp;
+};
+
struct nvm_target {
struct list_head list;
struct nvm_tgt_dev *dev;
@@ -505,6 +534,9 @@ extern struct nvm_dev *nvm_alloc_dev(int);
extern int nvm_register(struct nvm_dev *);
extern void nvm_unregister(struct nvm_dev *);

+extern int nvm_get_chunk_log_page(struct nvm_tgt_dev *,
+ struct nvm_chunk_log_page *,
+ unsigned long, unsigned long);
extern int nvm_set_tgt_bb_tbl(struct nvm_tgt_dev *, struct ppa_addr *,
int, int);
extern int nvm_max_phys_sects(struct nvm_tgt_dev *);
--
2.7.4


2018-02-13 14:10:53

by Javier González

[permalink] [raw]
Subject: [PATCH 7/8] lightnvm: pblk: refactor init/exit sequences

Refactor the init and exit sequences to improve readability. Along the way,
fix bad free ordering on the init error path.
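
As a reminder of the pattern being enforced, resources on the error path are
released in the reverse order of allocation, and each label only frees what
was already set up when the jump happens. A tiny standalone sketch
(illustrative only, not pblk code):

#include <stdlib.h>

struct ctx {
	void *a, *b, *c;
};

static int ctx_init(struct ctx *ctx)
{
	ctx->a = malloc(64);
	if (!ctx->a)
		goto fail;

	ctx->b = malloc(64);
	if (!ctx->b)
		goto fail_free_a;

	ctx->c = malloc(64);
	if (!ctx->c)
		goto fail_free_b;

	return 0;

fail_free_b:
	free(ctx->b);
fail_free_a:
	free(ctx->a);
fail:
	return -1;
}

static void ctx_exit(struct ctx *ctx)
{
	/* exit mirrors init: release in reverse order */
	free(ctx->c);
	free(ctx->b);
	free(ctx->a);
}

int main(void)
{
	struct ctx ctx;

	if (ctx_init(&ctx))
		return 1;

	ctx_exit(&ctx);
	return 0;
}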

Signed-off-by: Javier González <[email protected]>
---
drivers/lightnvm/pblk-init.c | 503 ++++++++++++++++++++++---------------------
1 file changed, 254 insertions(+), 249 deletions(-)

diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c
index dfc68718e27e..04685f2d39d3 100644
--- a/drivers/lightnvm/pblk-init.c
+++ b/drivers/lightnvm/pblk-init.c
@@ -103,7 +103,40 @@ static void pblk_l2p_free(struct pblk *pblk)
vfree(pblk->trans_map);
}

-static int pblk_l2p_init(struct pblk *pblk)
+static int pblk_l2p_recover(struct pblk *pblk, bool factory_init)
+{
+ struct pblk_line *line = NULL;
+
+ if (factory_init) {
+ pblk_setup_uuid(pblk);
+ } else {
+ line = pblk_recov_l2p(pblk);
+ if (IS_ERR(line)) {
+ pr_err("pblk: could not recover l2p table\n");
+ return -EFAULT;
+ }
+ }
+
+#ifdef CONFIG_NVM_DEBUG
+ pr_info("pblk init: L2P CRC: %x\n", pblk_l2p_crc(pblk));
+#endif
+
+ /* Free full lines directly as GC has not been started yet */
+ pblk_gc_free_full_lines(pblk);
+
+ if (!line) {
+ /* Configure next line for user data */
+ line = pblk_line_get_first_data(pblk);
+ if (!line) {
+ pr_err("pblk: line list corrupted\n");
+ return -EFAULT;
+ }
+ }
+
+ return 0;
+}
+
+static int pblk_l2p_init(struct pblk *pblk, bool factory_init)
{
sector_t i;
struct ppa_addr ppa;
@@ -119,7 +152,7 @@ static int pblk_l2p_init(struct pblk *pblk)
for (i = 0; i < pblk->rl.nr_secs; i++)
pblk_trans_map_set(pblk, i, ppa);

- return 0;
+ return pblk_l2p_recover(pblk, factory_init);
}

static void pblk_rwb_free(struct pblk *pblk)
@@ -268,87 +301,114 @@ static int pblk_core_init(struct pblk *pblk)
{
struct nvm_tgt_dev *dev = pblk->dev;
struct nvm_geo *geo = &dev->geo;
+ int max_write_ppas;
+
+ atomic64_set(&pblk->user_wa, 0);
+ atomic64_set(&pblk->pad_wa, 0);
+ atomic64_set(&pblk->gc_wa, 0);
+ pblk->user_rst_wa = 0;
+ pblk->pad_rst_wa = 0;
+ pblk->gc_rst_wa = 0;
+
+ atomic_long_set(&pblk->nr_flush, 0);
+ pblk->nr_flush_rst = 0;

pblk->pgs_in_buffer = geo->c.mw_cunits * geo->c.ws_opt * geo->all_luns;

+ pblk->min_write_pgs = geo->c.ws_opt * (geo->c.csecs / PAGE_SIZE);
+ max_write_ppas = pblk->min_write_pgs * geo->all_luns;
+ pblk->max_write_pgs = (max_write_ppas < nvm_max_phys_sects(dev)) ?
+ max_write_ppas : nvm_max_phys_sects(dev);
+ pblk_set_sec_per_write(pblk, pblk->min_write_pgs);
+
+ if (pblk->max_write_pgs > PBLK_MAX_REQ_ADDRS) {
+ pr_err("pblk: cannot support device max_phys_sect\n");
+ return -EINVAL;
+ }
+
+ pblk->pad_dist = kzalloc((pblk->min_write_pgs - 1) * sizeof(atomic64_t),
+ GFP_KERNEL);
+ if (!pblk->pad_dist)
+ return -ENOMEM;
+
if (pblk_init_global_caches(pblk))
- return -ENOMEM;
+ goto fail_free_pad_dist;

/* Internal bios can be at most the sectors signaled by the device. */
pblk->page_bio_pool = mempool_create_page_pool(nvm_max_phys_sects(dev),
0);
if (!pblk->page_bio_pool)
- goto free_global_caches;
+ goto fail_free_global_caches;

pblk->gen_ws_pool = mempool_create_slab_pool(PBLK_GEN_WS_POOL_SIZE,
pblk_ws_cache);
if (!pblk->gen_ws_pool)
- goto free_page_bio_pool;
+ goto fail_free_page_bio_pool;

pblk->rec_pool = mempool_create_slab_pool(geo->all_luns,
pblk_rec_cache);
if (!pblk->rec_pool)
- goto free_gen_ws_pool;
+ goto fail_free_gen_ws_pool;

pblk->r_rq_pool = mempool_create_slab_pool(geo->all_luns,
pblk_g_rq_cache);
if (!pblk->r_rq_pool)
- goto free_rec_pool;
+ goto fail_free_rec_pool;

pblk->e_rq_pool = mempool_create_slab_pool(geo->all_luns,
pblk_g_rq_cache);
if (!pblk->e_rq_pool)
- goto free_r_rq_pool;
+ goto fail_free_r_rq_pool;

pblk->w_rq_pool = mempool_create_slab_pool(geo->all_luns,
pblk_w_rq_cache);
if (!pblk->w_rq_pool)
- goto free_e_rq_pool;
+ goto fail_free_e_rq_pool;

pblk->close_wq = alloc_workqueue("pblk-close-wq",
WQ_MEM_RECLAIM | WQ_UNBOUND, PBLK_NR_CLOSE_JOBS);
if (!pblk->close_wq)
- goto free_w_rq_pool;
+ goto fail_free_w_rq_pool;

pblk->bb_wq = alloc_workqueue("pblk-bb-wq",
WQ_MEM_RECLAIM | WQ_UNBOUND, 0);
if (!pblk->bb_wq)
- goto free_close_wq;
+ goto fail_free_close_wq;

pblk->r_end_wq = alloc_workqueue("pblk-read-end-wq",
WQ_MEM_RECLAIM | WQ_UNBOUND, 0);
if (!pblk->r_end_wq)
- goto free_bb_wq;
+ goto fail_free_bb_wq;

if (pblk_set_addrf(pblk))
- goto free_r_end_wq;
-
- if (pblk_rwb_init(pblk))
- goto free_r_end_wq;
+ goto fail_free_r_end_wq;

INIT_LIST_HEAD(&pblk->compl_list);
+
return 0;

-free_r_end_wq:
+fail_free_r_end_wq:
destroy_workqueue(pblk->r_end_wq);
-free_bb_wq:
+fail_free_bb_wq:
destroy_workqueue(pblk->bb_wq);
-free_close_wq:
+fail_free_close_wq:
destroy_workqueue(pblk->close_wq);
-free_w_rq_pool:
+fail_free_w_rq_pool:
mempool_destroy(pblk->w_rq_pool);
-free_e_rq_pool:
+fail_free_e_rq_pool:
mempool_destroy(pblk->e_rq_pool);
-free_r_rq_pool:
+fail_free_r_rq_pool:
mempool_destroy(pblk->r_rq_pool);
-free_rec_pool:
+fail_free_rec_pool:
mempool_destroy(pblk->rec_pool);
-free_gen_ws_pool:
+fail_free_gen_ws_pool:
mempool_destroy(pblk->gen_ws_pool);
-free_page_bio_pool:
+fail_free_page_bio_pool:
mempool_destroy(pblk->page_bio_pool);
-free_global_caches:
+fail_free_global_caches:
pblk_free_global_caches(pblk);
+fail_free_pad_dist:
+ kfree(pblk->pad_dist);
return -ENOMEM;
}

@@ -370,9 +430,8 @@ static void pblk_core_free(struct pblk *pblk)
mempool_destroy(pblk->e_rq_pool);
mempool_destroy(pblk->w_rq_pool);

- pblk_rwb_free(pblk);
-
pblk_free_global_caches(pblk);
+ kfree(pblk->pad_dist);
}

static void pblk_luns_free(struct pblk *pblk)
@@ -394,8 +453,6 @@ static void pblk_line_mg_free(struct pblk *pblk)
pblk_mfree(l_mg->eline_meta[i]->buf, l_mg->emeta_alloc_type);
kfree(l_mg->eline_meta[i]);
}
-
- kfree(pblk->lines);
}

static void pblk_line_meta_free(struct pblk_line *line)
@@ -419,6 +476,10 @@ static void pblk_lines_free(struct pblk *pblk)
pblk_line_meta_free(line);
}
spin_unlock(&l_mg->free_lock);
+
+ pblk_line_mg_free(pblk);
+
+ kfree(pblk->lines);
}

static int pblk_bb_get_tbl(struct nvm_tgt_dev *dev, struct pblk_lun *rlun,
@@ -516,38 +577,6 @@ static int pblk_luns_init(struct pblk *pblk, struct ppa_addr *luns)
return 0;
}

-static int pblk_lines_configure(struct pblk *pblk, int flags)
-{
- struct pblk_line *line = NULL;
- int ret = 0;
-
- if (!(flags & NVM_TARGET_FACTORY)) {
- line = pblk_recov_l2p(pblk);
- if (IS_ERR(line)) {
- pr_err("pblk: could not recover l2p table\n");
- ret = -EFAULT;
- }
- }
-
-#ifdef CONFIG_NVM_DEBUG
- pr_info("pblk init: L2P CRC: %x\n", pblk_l2p_crc(pblk));
-#endif
-
- /* Free full lines directly as GC has not been started yet */
- pblk_gc_free_full_lines(pblk);
-
- if (!line) {
- /* Configure next line for user data */
- line = pblk_line_get_first_data(pblk);
- if (!line) {
- pr_err("pblk: line list corrupted\n");
- ret = -EFAULT;
- }
- }
-
- return ret;
-}
-
/* See comment over struct line_emeta definition */
static unsigned int calc_emeta_len(struct pblk *pblk)
{
@@ -613,81 +642,6 @@ static void pblk_set_provision(struct pblk *pblk, long nr_free_blks)
atomic_set(&pblk->rl.free_user_blocks, nr_free_blks);
}

-static int pblk_lines_alloc_metadata(struct pblk *pblk)
-{
- struct pblk_line_mgmt *l_mg = &pblk->l_mg;
- struct pblk_line_meta *lm = &pblk->lm;
- int i;
-
- /* smeta is always small enough to fit on a kmalloc memory allocation,
- * emeta depends on the number of LUNs allocated to the pblk instance
- */
- for (i = 0; i < PBLK_DATA_LINES; i++) {
- l_mg->sline_meta[i] = kmalloc(lm->smeta_len, GFP_KERNEL);
- if (!l_mg->sline_meta[i])
- goto fail_free_smeta;
- }
-
- /* emeta allocates three different buffers for managing metadata with
- * in-memory and in-media layouts
- */
- for (i = 0; i < PBLK_DATA_LINES; i++) {
- struct pblk_emeta *emeta;
-
- emeta = kmalloc(sizeof(struct pblk_emeta), GFP_KERNEL);
- if (!emeta)
- goto fail_free_emeta;
-
- if (lm->emeta_len[0] > KMALLOC_MAX_CACHE_SIZE) {
- l_mg->emeta_alloc_type = PBLK_VMALLOC_META;
-
- emeta->buf = vmalloc(lm->emeta_len[0]);
- if (!emeta->buf) {
- kfree(emeta);
- goto fail_free_emeta;
- }
-
- emeta->nr_entries = lm->emeta_sec[0];
- l_mg->eline_meta[i] = emeta;
- } else {
- l_mg->emeta_alloc_type = PBLK_KMALLOC_META;
-
- emeta->buf = kmalloc(lm->emeta_len[0], GFP_KERNEL);
- if (!emeta->buf) {
- kfree(emeta);
- goto fail_free_emeta;
- }
-
- emeta->nr_entries = lm->emeta_sec[0];
- l_mg->eline_meta[i] = emeta;
- }
- }
-
- l_mg->vsc_list = kcalloc(l_mg->nr_lines, sizeof(__le32), GFP_KERNEL);
- if (!l_mg->vsc_list)
- goto fail_free_emeta;
-
- for (i = 0; i < l_mg->nr_lines; i++)
- l_mg->vsc_list[i] = cpu_to_le32(EMPTY_ENTRY);
-
- return 0;
-
-fail_free_emeta:
- while (--i >= 0) {
- if (l_mg->emeta_alloc_type == PBLK_VMALLOC_META)
- vfree(l_mg->eline_meta[i]->buf);
- else
- kfree(l_mg->eline_meta[i]->buf);
- kfree(l_mg->eline_meta[i]);
- }
-
-fail_free_smeta:
- for (i = 0; i < PBLK_DATA_LINES; i++)
- kfree(l_mg->sline_meta[i]);
-
- return -ENOMEM;
-}
-
static int pblk_setup_line_meta_12(struct pblk *pblk, struct pblk_line *line,
void *chunk_log)
{
@@ -831,29 +785,13 @@ static int pblk_alloc_line_meta(struct pblk *pblk, struct pblk_line *line)
return 0;
}

-static int pblk_lines_init(struct pblk *pblk)
+static int pblk_line_mg_init(struct pblk *pblk)
{
struct nvm_tgt_dev *dev = pblk->dev;
struct nvm_geo *geo = &dev->geo;
struct pblk_line_mgmt *l_mg = &pblk->l_mg;
struct pblk_line_meta *lm = &pblk->lm;
- struct pblk_line *line;
- void *chunk_log;
- unsigned int smeta_len, emeta_len;
- long nr_free_chks = 0;
- int bb_distance, max_write_ppas;
- int i, ret;
-
- pblk->min_write_pgs = geo->c.ws_opt * (geo->c.csecs / PAGE_SIZE);
- max_write_ppas = pblk->min_write_pgs * geo->all_luns;
- pblk->max_write_pgs = (max_write_ppas < nvm_max_phys_sects(dev)) ?
- max_write_ppas : nvm_max_phys_sects(dev);
- pblk_set_sec_per_write(pblk, pblk->min_write_pgs);
-
- if (pblk->max_write_pgs > PBLK_MAX_REQ_ADDRS) {
- pr_err("pblk: cannot support device max_phys_sect\n");
- return -EINVAL;
- }
+ int i, bb_distance;

l_mg->nr_lines = geo->c.num_chk;
l_mg->log_line = l_mg->data_line = NULL;
@@ -862,6 +800,119 @@ static int pblk_lines_init(struct pblk *pblk)
atomic_set(&l_mg->sysfs_line_state, -1);
bitmap_zero(&l_mg->meta_bitmap, PBLK_DATA_LINES);

+ INIT_LIST_HEAD(&l_mg->free_list);
+ INIT_LIST_HEAD(&l_mg->corrupt_list);
+ INIT_LIST_HEAD(&l_mg->bad_list);
+ INIT_LIST_HEAD(&l_mg->gc_full_list);
+ INIT_LIST_HEAD(&l_mg->gc_high_list);
+ INIT_LIST_HEAD(&l_mg->gc_mid_list);
+ INIT_LIST_HEAD(&l_mg->gc_low_list);
+ INIT_LIST_HEAD(&l_mg->gc_empty_list);
+
+ INIT_LIST_HEAD(&l_mg->emeta_list);
+
+ l_mg->gc_lists[0] = &l_mg->gc_high_list;
+ l_mg->gc_lists[1] = &l_mg->gc_mid_list;
+ l_mg->gc_lists[2] = &l_mg->gc_low_list;
+
+ spin_lock_init(&l_mg->free_lock);
+ spin_lock_init(&l_mg->close_lock);
+ spin_lock_init(&l_mg->gc_lock);
+
+ l_mg->vsc_list = kcalloc(l_mg->nr_lines, sizeof(__le32), GFP_KERNEL);
+ if (!l_mg->vsc_list)
+ goto fail;
+
+ l_mg->bb_template = kzalloc(lm->sec_bitmap_len, GFP_KERNEL);
+ if (!l_mg->bb_template)
+ goto fail_free_vsc_list;
+
+ l_mg->bb_aux = kzalloc(lm->sec_bitmap_len, GFP_KERNEL);
+ if (!l_mg->bb_aux)
+ goto fail_free_bb_template;
+
+ /* smeta is always small enough to fit on a kmalloc memory allocation,
+ * emeta depends on the number of LUNs allocated to the pblk instance
+ */
+ for (i = 0; i < PBLK_DATA_LINES; i++) {
+ l_mg->sline_meta[i] = kmalloc(lm->smeta_len, GFP_KERNEL);
+ if (!l_mg->sline_meta[i])
+ goto fail_free_smeta;
+ }
+
+ /* emeta allocates three different buffers for managing metadata with
+ * in-memory and in-media layouts
+ */
+ for (i = 0; i < PBLK_DATA_LINES; i++) {
+ struct pblk_emeta *emeta;
+
+ emeta = kmalloc(sizeof(struct pblk_emeta), GFP_KERNEL);
+ if (!emeta)
+ goto fail_free_emeta;
+
+ if (lm->emeta_len[0] > KMALLOC_MAX_CACHE_SIZE) {
+ l_mg->emeta_alloc_type = PBLK_VMALLOC_META;
+
+ emeta->buf = vmalloc(lm->emeta_len[0]);
+ if (!emeta->buf) {
+ kfree(emeta);
+ goto fail_free_emeta;
+ }
+
+ emeta->nr_entries = lm->emeta_sec[0];
+ l_mg->eline_meta[i] = emeta;
+ } else {
+ l_mg->emeta_alloc_type = PBLK_KMALLOC_META;
+
+ emeta->buf = kmalloc(lm->emeta_len[0], GFP_KERNEL);
+ if (!emeta->buf) {
+ kfree(emeta);
+ goto fail_free_emeta;
+ }
+
+ emeta->nr_entries = lm->emeta_sec[0];
+ l_mg->eline_meta[i] = emeta;
+ }
+ }
+
+ for (i = 0; i < l_mg->nr_lines; i++)
+ l_mg->vsc_list[i] = cpu_to_le32(EMPTY_ENTRY);
+
+ bb_distance = (geo->all_luns) * geo->c.ws_opt;
+ for (i = 0; i < lm->sec_per_line; i += bb_distance)
+ bitmap_set(l_mg->bb_template, i, geo->c.ws_opt);
+
+ return 0;
+
+fail_free_emeta:
+ while (--i >= 0) {
+ if (l_mg->emeta_alloc_type == PBLK_VMALLOC_META)
+ vfree(l_mg->eline_meta[i]->buf);
+ else
+ kfree(l_mg->eline_meta[i]->buf);
+ kfree(l_mg->eline_meta[i]);
+ }
+fail_free_smeta:
+ kfree(l_mg->bb_aux);
+
+ for (i = 0; i < PBLK_DATA_LINES; i++)
+ kfree(l_mg->sline_meta[i]);
+fail_free_bb_template:
+ kfree(l_mg->bb_template);
+fail_free_vsc_list:
+ kfree(l_mg->vsc_list);
+fail:
+ return -ENOMEM;
+}
+
+static int pblk_line_meta_init(struct pblk *pblk)
+{
+ struct nvm_tgt_dev *dev = pblk->dev;
+ struct nvm_geo *geo = &dev->geo;
+ struct pblk_line_meta *lm = &pblk->lm;
+ unsigned int smeta_len, emeta_len;
+ int i;
+
lm->sec_per_line = geo->c.clba * geo->all_luns;
lm->blk_per_line = geo->all_luns;
lm->blk_bitmap_len = BITS_TO_LONGS(geo->all_luns) * sizeof(long);
@@ -912,58 +963,38 @@ static int pblk_lines_init(struct pblk *pblk)
return -EINVAL;
}

- ret = pblk_lines_alloc_metadata(pblk);
+ return 0;
+}
+
+static int pblk_lines_init(struct pblk *pblk)
+{
+ struct pblk_line_mgmt *l_mg = &pblk->l_mg;
+ struct pblk_line *line;
+ void *chunk_log;
+ long nr_free_chks = 0;
+ int i, ret;
+
+ ret = pblk_line_meta_init(pblk);
if (ret)
return ret;

- l_mg->bb_template = kzalloc(lm->sec_bitmap_len, GFP_KERNEL);
- if (!l_mg->bb_template) {
- ret = -ENOMEM;
- goto fail_free_meta;
- }
-
- l_mg->bb_aux = kzalloc(lm->sec_bitmap_len, GFP_KERNEL);
- if (!l_mg->bb_aux) {
- ret = -ENOMEM;
- goto fail_free_bb_template;
- }
-
- bb_distance = (geo->all_luns) * geo->c.ws_opt;
- for (i = 0; i < lm->sec_per_line; i += bb_distance)
- bitmap_set(l_mg->bb_template, i, geo->c.ws_opt);
-
- INIT_LIST_HEAD(&l_mg->free_list);
- INIT_LIST_HEAD(&l_mg->corrupt_list);
- INIT_LIST_HEAD(&l_mg->bad_list);
- INIT_LIST_HEAD(&l_mg->gc_full_list);
- INIT_LIST_HEAD(&l_mg->gc_high_list);
- INIT_LIST_HEAD(&l_mg->gc_mid_list);
- INIT_LIST_HEAD(&l_mg->gc_low_list);
- INIT_LIST_HEAD(&l_mg->gc_empty_list);
-
- INIT_LIST_HEAD(&l_mg->emeta_list);
-
- l_mg->gc_lists[0] = &l_mg->gc_high_list;
- l_mg->gc_lists[1] = &l_mg->gc_mid_list;
- l_mg->gc_lists[2] = &l_mg->gc_low_list;
-
- spin_lock_init(&l_mg->free_lock);
- spin_lock_init(&l_mg->close_lock);
- spin_lock_init(&l_mg->gc_lock);
-
- pblk->lines = kcalloc(l_mg->nr_lines, sizeof(struct pblk_line),
- GFP_KERNEL);
- if (!pblk->lines) {
- ret = -ENOMEM;
- goto fail_free_bb_aux;
- }
+ ret = pblk_line_mg_init(pblk);
+ if (ret)
+ return ret;

chunk_log = pblk_chunk_get_log(pblk);
if (IS_ERR(chunk_log)) {
pr_err("pblk: could not get chunk log (%lu)\n",
PTR_ERR(chunk_log));
ret = PTR_ERR(chunk_log);
- goto fail_free_lines;
+ goto fail_free_meta;
+ }
+
+ pblk->lines = kcalloc(l_mg->nr_lines, sizeof(struct pblk_line),
+ GFP_KERNEL);
+ if (!pblk->lines) {
+ ret = -ENOMEM;
+ goto fail_free_chunk_log;
}

for (i = 0; i < l_mg->nr_lines; i++) {
@@ -971,7 +1002,7 @@ static int pblk_lines_init(struct pblk *pblk)

ret = pblk_alloc_line_meta(pblk, line);
if (ret)
- goto fail_free_chunk_log;
+ goto fail_free_lines;

nr_free_chks += pblk_setup_line_meta(pblk, line, chunk_log, i);
}
@@ -981,16 +1012,12 @@ static int pblk_lines_init(struct pblk *pblk)
kfree(chunk_log);
return 0;

-fail_free_chunk_log:
- kfree(chunk_log);
+fail_free_lines:
while (--i >= 0)
pblk_line_meta_free(&pblk->lines[i]);
-fail_free_lines:
kfree(pblk->lines);
-fail_free_bb_aux:
- kfree(l_mg->bb_aux);
-fail_free_bb_template:
- kfree(l_mg->bb_template);
+fail_free_chunk_log:
+ kfree(chunk_log);
fail_free_meta:
pblk_line_mg_free(pblk);

@@ -1033,12 +1060,11 @@ static void pblk_writer_stop(struct pblk *pblk)

static void pblk_free(struct pblk *pblk)
{
- pblk_luns_free(pblk);
pblk_lines_free(pblk);
- kfree(pblk->pad_dist);
- pblk_line_mg_free(pblk);
- pblk_core_free(pblk);
pblk_l2p_free(pblk);
+ pblk_rwb_free(pblk);
+ pblk_core_free(pblk);
+ pblk_luns_free(pblk);

kfree(pblk);
}
@@ -1109,19 +1135,6 @@ static void *pblk_init(struct nvm_tgt_dev *dev, struct gendisk *tdisk,
spin_lock_init(&pblk->trans_lock);
spin_lock_init(&pblk->lock);

- if (flags & NVM_TARGET_FACTORY)
- pblk_setup_uuid(pblk);
-
- atomic64_set(&pblk->user_wa, 0);
- atomic64_set(&pblk->pad_wa, 0);
- atomic64_set(&pblk->gc_wa, 0);
- pblk->user_rst_wa = 0;
- pblk->pad_rst_wa = 0;
- pblk->gc_rst_wa = 0;
-
- atomic_long_set(&pblk->nr_flush, 0);
- pblk->nr_flush_rst = 0;
-
#ifdef CONFIG_NVM_DEBUG
atomic_long_set(&pblk->inflight_writes, 0);
atomic_long_set(&pblk->padded_writes, 0);
@@ -1145,48 +1158,42 @@ static void *pblk_init(struct nvm_tgt_dev *dev, struct gendisk *tdisk,
atomic_long_set(&pblk->write_failed, 0);
atomic_long_set(&pblk->erase_failed, 0);

+
ret = pblk_luns_init(pblk, dev->luns);
if (ret) {
pr_err("pblk: could not initialize luns\n");
goto fail;
}

- ret = pblk_lines_init(pblk);
- if (ret) {
- pr_err("pblk: could not initialize lines\n");
- goto fail_free_luns;
- }
-
- pblk->pad_dist = kzalloc((pblk->min_write_pgs - 1) * sizeof(atomic64_t),
- GFP_KERNEL);
- if (!pblk->pad_dist) {
- ret = -ENOMEM;
- goto fail_free_line_meta;
- }
-
ret = pblk_core_init(pblk);
if (ret) {
pr_err("pblk: could not initialize core\n");
- goto fail_free_pad_dist;
+ goto fail_free_luns;
}

- ret = pblk_l2p_init(pblk);
+ ret = pblk_lines_init(pblk);
if (ret) {
- pr_err("pblk: could not initialize maps\n");
+ pr_err("pblk: could not initialize lines\n");
goto fail_free_core;
}

- ret = pblk_lines_configure(pblk, flags);
+ ret = pblk_rwb_init(pblk);
if (ret) {
- pr_err("pblk: could not configure lines\n");
- goto fail_free_l2p;
+ pr_err("pblk: could not initialize write buffer\n");
+ goto fail_free_lines;
+ }
+
+ ret = pblk_l2p_init(pblk, flags & NVM_TARGET_FACTORY);
+ if (ret) {
+ pr_err("pblk: could not initialize maps\n");
+ goto fail_free_rwb;
}

ret = pblk_writer_init(pblk);
if (ret) {
if (ret != -EINTR)
pr_err("pblk: could not initialize write thread\n");
- goto fail_free_lines;
+ goto fail_free_l2p;
}

ret = pblk_gc_init(pblk);
@@ -1221,16 +1228,14 @@ static void *pblk_init(struct nvm_tgt_dev *dev, struct gendisk *tdisk,

fail_stop_writer:
pblk_writer_stop(pblk);
-fail_free_lines:
- pblk_lines_free(pblk);
fail_free_l2p:
pblk_l2p_free(pblk);
+fail_free_rwb:
+ pblk_rwb_free(pblk);
+fail_free_lines:
+ pblk_lines_free(pblk);
fail_free_core:
pblk_core_free(pblk);
-fail_free_pad_dist:
- kfree(pblk->pad_dist);
-fail_free_line_meta:
- pblk_line_mg_free(pblk);
fail_free_luns:
pblk_luns_free(pblk);
fail:
--
2.7.4


2018-02-13 14:11:51

by Javier González

Subject: [PATCH 6/8] lightnvm: pblk: implement get log report chunk

From: Javier González <[email protected]>

In preparation for pblk supporting 2.0, implement get log report
chunk support in pblk.

This patch only replicates the bad block functionality, as the rest of the
metadata requires new pblk functionality (e.g., wear-index to implement
wear-leveling). This functionality will come in future patches.

Signed-off-by: Javier González <[email protected]>
---
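For reference, a minimal sketch (not part of the diff) of the indexing this
patch relies on: the report chunk log page is returned as a flat array
ordered by channel, then LUN, then chunk, so a (ch, lun, chk) address maps
to a single offset into it. This restates the pblk_chunk_get_off() helper
added below; chunk_log_entry() is only an illustrative name, and the
geometry fields (num_lun, c.num_chk) come from the generic geometry
introduced in patch 1/8.

static inline struct nvm_chunk_log_page *
chunk_log_entry(struct nvm_chunk_log_page *lp, struct nvm_geo *geo,
		struct ppa_addr ppa)
{
	/* one channel spans num_lun * num_chk entries in the report */
	int ch_off = ppa.m.ch * geo->c.num_chk * geo->num_lun;
	/* one LUN spans num_chk entries */
	int lun_off = ppa.m.lun * geo->c.num_chk;

	return lp + ch_off + lun_off + ppa.m.chk;
}
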
drivers/lightnvm/pblk-core.c | 118 +++++++++++++++++++++++----
drivers/lightnvm/pblk-init.c | 186 +++++++++++++++++++++++++++++++-----------
drivers/lightnvm/pblk-sysfs.c | 67 +++++++++++++++
drivers/lightnvm/pblk.h | 20 +++++
4 files changed, 327 insertions(+), 64 deletions(-)

diff --git a/drivers/lightnvm/pblk-core.c b/drivers/lightnvm/pblk-core.c
index 519af8b9eab7..01b78ee5c0e0 100644
--- a/drivers/lightnvm/pblk-core.c
+++ b/drivers/lightnvm/pblk-core.c
@@ -44,11 +44,12 @@ static void pblk_line_mark_bb(struct work_struct *work)
}

static void pblk_mark_bb(struct pblk *pblk, struct pblk_line *line,
- struct ppa_addr *ppa)
+ struct ppa_addr ppa_addr)
{
struct nvm_tgt_dev *dev = pblk->dev;
struct nvm_geo *geo = &dev->geo;
- int pos = pblk_ppa_to_pos(geo, *ppa);
+ struct ppa_addr *ppa;
+ int pos = pblk_ppa_to_pos(geo, ppa_addr);

pr_debug("pblk: erase failed: line:%d, pos:%d\n", line->id, pos);
atomic_long_inc(&pblk->erase_failed);
@@ -58,6 +59,15 @@ static void pblk_mark_bb(struct pblk *pblk, struct pblk_line *line,
pr_err("pblk: attempted to erase bb: line:%d, pos:%d\n",
line->id, pos);

+ /* Not necessary to mark bad blocks on 2.0 spec. */
+ if (geo->c.version == NVM_OCSSD_SPEC_20)
+ return;
+
+ ppa = kmalloc(sizeof(struct ppa_addr), GFP_ATOMIC);
+ if (!ppa)
+ return;
+
+ *ppa = ppa_addr;
pblk_gen_run_ws(pblk, NULL, ppa, pblk_line_mark_bb,
GFP_ATOMIC, pblk->bb_wq);
}
@@ -69,16 +79,8 @@ static void __pblk_end_io_erase(struct pblk *pblk, struct nvm_rq *rqd)
line = &pblk->lines[pblk_ppa_to_line(rqd->ppa_addr)];
atomic_dec(&line->left_seblks);

- if (rqd->error) {
- struct ppa_addr *ppa;
-
- ppa = kmalloc(sizeof(struct ppa_addr), GFP_ATOMIC);
- if (!ppa)
- return;
-
- *ppa = rqd->ppa_addr;
- pblk_mark_bb(pblk, line, ppa);
- }
+ if (rqd->error)
+ pblk_mark_bb(pblk, line, rqd->ppa_addr);

atomic_dec(&pblk->inflight_io);
}
@@ -92,6 +94,47 @@ static void pblk_end_io_erase(struct nvm_rq *rqd)
mempool_free(rqd, pblk->e_rq_pool);
}

+/*
+ * Get information for all chunks from the device.
+ *
+ * The caller is responsible for freeing the returned structure
+ */
+struct nvm_chunk_log_page *pblk_chunk_get_info(struct pblk *pblk)
+{
+ struct nvm_tgt_dev *dev = pblk->dev;
+ struct nvm_geo *geo = &dev->geo;
+ struct nvm_chunk_log_page *log;
+ unsigned long len;
+ int ret;
+
+ len = geo->all_chunks * sizeof(*log);
+ log = kzalloc(len, GFP_KERNEL);
+ if (!log)
+ return ERR_PTR(-ENOMEM);
+
+ ret = nvm_get_chunk_log_page(dev, log, 0, len);
+ if (ret) {
+ pr_err("pblk: could not get chunk log page (%d)\n", ret);
+ kfree(log);
+ return ERR_PTR(-EIO);
+ }
+
+ return log;
+}
+
+struct nvm_chunk_log_page *pblk_chunk_get_off(struct pblk *pblk,
+ struct nvm_chunk_log_page *lp,
+ struct ppa_addr ppa)
+{
+ struct nvm_tgt_dev *dev = pblk->dev;
+ struct nvm_geo *geo = &dev->geo;
+ int ch_off = ppa.m.ch * geo->c.num_chk * geo->num_lun;
+ int lun_off = ppa.m.lun * geo->c.num_chk;
+ int chk_off = ppa.m.chk;
+
+ return lp + ch_off + lun_off + chk_off;
+}
+
void __pblk_map_invalidate(struct pblk *pblk, struct pblk_line *line,
u64 paddr)
{
@@ -1094,10 +1137,38 @@ static int pblk_line_init_bb(struct pblk *pblk, struct pblk_line *line,
return 1;
}

+static int pblk_prepare_new_line(struct pblk *pblk, struct pblk_line *line)
+{
+ struct pblk_line_meta *lm = &pblk->lm;
+ struct nvm_tgt_dev *dev = pblk->dev;
+ struct nvm_geo *geo = &dev->geo;
+ int blk_to_erase = atomic_read(&line->blk_in_line);
+ int i;
+
+ for (i = 0; i < lm->blk_per_line; i++) {
+ int state = line->chks[i].state;
+ struct pblk_lun *rlun = &pblk->luns[i];
+
+ /* Free chunks should not be erased */
+ if (state & NVM_CHK_ST_FREE) {
+ set_bit(pblk_ppa_to_pos(geo, rlun->chunk_bppa),
+ line->erase_bitmap);
+ blk_to_erase--;
+ line->chks[i].state = NVM_CHK_ST_HOST_USE;
+ }
+
+ WARN_ONCE(state & NVM_CHK_ST_OPEN,
+ "pblk: open chunk in new line: %d\n",
+ line->id);
+ }
+
+ return blk_to_erase;
+}
+
static int pblk_line_prepare(struct pblk *pblk, struct pblk_line *line)
{
struct pblk_line_meta *lm = &pblk->lm;
- int blk_in_line = atomic_read(&line->blk_in_line);
+ int blk_to_erase;

line->map_bitmap = kzalloc(lm->sec_bitmap_len, GFP_ATOMIC);
if (!line->map_bitmap)
@@ -1110,7 +1181,21 @@ static int pblk_line_prepare(struct pblk *pblk, struct pblk_line *line)
return -ENOMEM;
}

+ /* Bad blocks do not need to be erased */
+ bitmap_copy(line->erase_bitmap, line->blk_bitmap, lm->blk_per_line);
+
spin_lock(&line->lock);
+
+ /* If we have not written to this line, we need to mark up free chunks
+ * as already erased
+ */
+ if (line->state == PBLK_LINESTATE_NEW) {
+ blk_to_erase = pblk_prepare_new_line(pblk, line);
+ line->state = PBLK_LINESTATE_FREE;
+ } else {
+ blk_to_erase = atomic_read(&line->blk_in_line);
+ }
+
if (line->state != PBLK_LINESTATE_FREE) {
kfree(line->map_bitmap);
kfree(line->invalid_bitmap);
@@ -1122,15 +1207,12 @@ static int pblk_line_prepare(struct pblk *pblk, struct pblk_line *line)

line->state = PBLK_LINESTATE_OPEN;

- atomic_set(&line->left_eblks, blk_in_line);
- atomic_set(&line->left_seblks, blk_in_line);
+ atomic_set(&line->left_eblks, blk_to_erase);
+ atomic_set(&line->left_seblks, blk_to_erase);

line->meta_distance = lm->meta_distance;
spin_unlock(&line->lock);

- /* Bad blocks do not need to be erased */
- bitmap_copy(line->erase_bitmap, line->blk_bitmap, lm->blk_per_line);
-
kref_init(&line->ref);

return 0;
diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c
index 72b7902e5d1c..dfc68718e27e 100644
--- a/drivers/lightnvm/pblk-init.c
+++ b/drivers/lightnvm/pblk-init.c
@@ -402,6 +402,7 @@ static void pblk_line_meta_free(struct pblk_line *line)
{
kfree(line->blk_bitmap);
kfree(line->erase_bitmap);
+ kfree(line->chks);
}

static void pblk_lines_free(struct pblk *pblk)
@@ -470,25 +471,15 @@ static void *pblk_bb_get_log(struct pblk *pblk)
return log;
}

-static int pblk_bb_line(struct pblk *pblk, struct pblk_line *line,
- u8 *bb_log, int blk_per_line)
+static void *pblk_chunk_get_log(struct pblk *pblk)
{
struct nvm_tgt_dev *dev = pblk->dev;
struct nvm_geo *geo = &dev->geo;
- int i, bb_cnt = 0;

- for (i = 0; i < blk_per_line; i++) {
- struct pblk_lun *rlun = &pblk->luns[i];
- u8 *lun_bb_log = bb_log + i * blk_per_line;
-
- if (lun_bb_log[line->id] == NVM_BLK_T_FREE)
- continue;
-
- set_bit(pblk_ppa_to_pos(geo, rlun->bppa), line->blk_bitmap);
- bb_cnt++;
- }
-
- return bb_cnt;
+ if (geo->c.version == NVM_OCSSD_SPEC_12)
+ return pblk_bb_get_log(pblk);
+ else
+ return pblk_chunk_get_info(pblk);
}

static int pblk_luns_init(struct pblk *pblk, struct ppa_addr *luns)
@@ -517,6 +508,7 @@ static int pblk_luns_init(struct pblk *pblk, struct ppa_addr *luns)

rlun = &pblk->luns[i];
rlun->bppa = luns[lunid];
+ rlun->chunk_bppa = luns[i];

sema_init(&rlun->wr_sem, 1);
}
@@ -696,8 +688,125 @@ static int pblk_lines_alloc_metadata(struct pblk *pblk)
return -ENOMEM;
}

-static int pblk_setup_line_meta(struct pblk *pblk, struct pblk_line *line,
- void *chunk_log, long *nr_bad_blks)
+static int pblk_setup_line_meta_12(struct pblk *pblk, struct pblk_line *line,
+ void *chunk_log)
+{
+ struct nvm_tgt_dev *dev = pblk->dev;
+ struct nvm_geo *geo = &dev->geo;
+ struct pblk_line_meta *lm = &pblk->lm;
+ int i, chk_per_lun, nr_bad_chks = 0;
+
+ chk_per_lun = geo->c.num_chk * geo->c.pln_mode;
+
+ for (i = 0; i < lm->blk_per_line; i++) {
+ struct pblk_chunk *chunk = &line->chks[i];
+ struct pblk_lun *rlun = &pblk->luns[i];
+ u8 *lun_bb_log = chunk_log + i * chk_per_lun;
+
+ /*
+ * In 1.2 spec. chunk state is not persisted by the device. Thus
+ * some of the values are reset each time pblk is instantiated.
+ */
+ if (lun_bb_log[line->id] == NVM_BLK_T_FREE)
+ chunk->state = NVM_CHK_ST_HOST_USE;
+ else
+ chunk->state = NVM_CHK_ST_OFFLINE;
+
+ chunk->type = NVM_CHK_TP_W_SEQ;
+ chunk->wi = 0;
+ chunk->slba = -1;
+ chunk->cnlb = geo->c.clba;
+ chunk->wp = 0;
+
+ if (!(chunk->state & NVM_CHK_ST_OFFLINE))
+ continue;
+
+ set_bit(pblk_ppa_to_pos(geo, rlun->bppa), line->blk_bitmap);
+ nr_bad_chks++;
+ }
+
+ return nr_bad_chks;
+}
+
+static int pblk_setup_line_meta_20(struct pblk *pblk, struct pblk_line *line,
+ struct nvm_chunk_log_page *log_page)
+{
+ struct nvm_tgt_dev *dev = pblk->dev;
+ struct nvm_geo *geo = &dev->geo;
+ struct pblk_line_meta *lm = &pblk->lm;
+ int i, nr_bad_chks = 0;
+
+ for (i = 0; i < lm->blk_per_line; i++) {
+ struct pblk_chunk *chunk = &line->chks[i];
+ struct pblk_lun *rlun = &pblk->luns[i];
+ struct nvm_chunk_log_page *chunk_log_page;
+ struct ppa_addr ppa;
+
+ ppa = rlun->chunk_bppa;
+ ppa.m.chk = line->id;
+ chunk_log_page = pblk_chunk_get_off(pblk, log_page, ppa);
+
+ chunk->state = chunk_log_page->state;
+ chunk->type = chunk_log_page->type;
+ chunk->wi = chunk_log_page->wear_index;
+ chunk->slba = le64_to_cpu(chunk_log_page->slba);
+ chunk->cnlb = le64_to_cpu(chunk_log_page->cnlb);
+ chunk->wp = le64_to_cpu(chunk_log_page->wp);
+
+ if (!(chunk->state & NVM_CHK_ST_OFFLINE))
+ continue;
+
+ if (chunk->type & NVM_CHK_TP_SZ_SPEC) {
+ WARN_ONCE(1, "pblk: custom-sized chunks unsupported\n");
+ continue;
+ }
+
+ set_bit(pblk_ppa_to_pos(geo, rlun->chunk_bppa),
+ line->blk_bitmap);
+ nr_bad_chks++;
+ }
+
+ return nr_bad_chks;
+}
+
+static long pblk_setup_line_meta(struct pblk *pblk, struct pblk_line *line,
+ void *chunk_log, int line_id)
+{
+ struct nvm_tgt_dev *dev = pblk->dev;
+ struct nvm_geo *geo = &dev->geo;
+ struct pblk_line_mgmt *l_mg = &pblk->l_mg;
+ struct pblk_line_meta *lm = &pblk->lm;
+ long nr_bad_chks, chk_in_line;
+
+ line->pblk = pblk;
+ line->id = line_id;
+ line->type = PBLK_LINETYPE_FREE;
+ line->state = PBLK_LINESTATE_NEW;
+ line->gc_group = PBLK_LINEGC_NONE;
+ line->vsc = &l_mg->vsc_list[line_id];
+ spin_lock_init(&line->lock);
+
+ if (geo->c.version == NVM_OCSSD_SPEC_12)
+ nr_bad_chks = pblk_setup_line_meta_12(pblk, line, chunk_log);
+ else
+ nr_bad_chks = pblk_setup_line_meta_20(pblk, line, chunk_log);
+
+ chk_in_line = lm->blk_per_line - nr_bad_chks;
+ if (nr_bad_chks < 0 || nr_bad_chks > lm->blk_per_line ||
+ chk_in_line < lm->min_blk_line) {
+ line->state = PBLK_LINESTATE_BAD;
+ list_add_tail(&line->list, &l_mg->bad_list);
+ return 0;
+ }
+
+ atomic_set(&line->blk_in_line, chk_in_line);
+ list_add_tail(&line->list, &l_mg->free_list);
+ l_mg->nr_free_lines++;
+
+ return chk_in_line;
+}
+
+static int pblk_alloc_line_meta(struct pblk *pblk, struct pblk_line *line)
{
struct pblk_line_meta *lm = &pblk->lm;

@@ -711,7 +820,13 @@ static int pblk_setup_line_meta(struct pblk *pblk, struct pblk_line *line,
return -ENOMEM;
}

- *nr_bad_blks = pblk_bb_line(pblk, line, chunk_log, lm->blk_per_line);
+ line->chks = kmalloc(lm->blk_per_line * sizeof(struct pblk_chunk),
+ GFP_KERNEL);
+ if (!line->chks) {
+ kfree(line->erase_bitmap);
+ kfree(line->blk_bitmap);
+ return -ENOMEM;
+ }

return 0;
}
@@ -725,7 +840,7 @@ static int pblk_lines_init(struct pblk *pblk)
struct pblk_line *line;
void *chunk_log;
unsigned int smeta_len, emeta_len;
- long nr_bad_blks = 0, nr_free_blks = 0;
+ long nr_free_chks = 0;
int bb_distance, max_write_ppas;
int i, ret;

@@ -744,6 +859,7 @@ static int pblk_lines_init(struct pblk *pblk)
l_mg->log_line = l_mg->data_line = NULL;
l_mg->l_seq_nr = l_mg->d_seq_nr = 0;
l_mg->nr_free_lines = 0;
+ atomic_set(&l_mg->sysfs_line_state, -1);
bitmap_zero(&l_mg->meta_bitmap, PBLK_DATA_LINES);

lm->sec_per_line = geo->c.clba * geo->all_luns;
@@ -842,47 +958,25 @@ static int pblk_lines_init(struct pblk *pblk)
goto fail_free_bb_aux;
}

- chunk_log = pblk_bb_get_log(pblk);
+ chunk_log = pblk_chunk_get_log(pblk);
if (IS_ERR(chunk_log)) {
- pr_err("pblk: could not get bad block log (%lu)\n",
+ pr_err("pblk: could not get chunk log (%lu)\n",
PTR_ERR(chunk_log));
ret = PTR_ERR(chunk_log);
goto fail_free_lines;
}

for (i = 0; i < l_mg->nr_lines; i++) {
- int chk_in_line;
-
line = &pblk->lines[i];

- line->pblk = pblk;
- line->id = i;
- line->type = PBLK_LINETYPE_FREE;
- line->state = PBLK_LINESTATE_FREE;
- line->gc_group = PBLK_LINEGC_NONE;
- line->vsc = &l_mg->vsc_list[i];
- spin_lock_init(&line->lock);
-
- ret = pblk_setup_line_meta(pblk, line, chunk_log, &nr_bad_blks);
+ ret = pblk_alloc_line_meta(pblk, line);
if (ret)
goto fail_free_chunk_log;

- chk_in_line = lm->blk_per_line - nr_bad_blks;
- if (nr_bad_blks < 0 || nr_bad_blks > lm->blk_per_line ||
- chk_in_line < lm->min_blk_line) {
- line->state = PBLK_LINESTATE_BAD;
- list_add_tail(&line->list, &l_mg->bad_list);
- continue;
- }
-
- nr_free_blks += chk_in_line;
- atomic_set(&line->blk_in_line, chk_in_line);
-
- l_mg->nr_free_lines++;
- list_add_tail(&line->list, &l_mg->free_list);
+ nr_free_chks += pblk_setup_line_meta(pblk, line, chunk_log, i);
}

- pblk_set_provision(pblk, nr_free_blks);
+ pblk_set_provision(pblk, nr_free_chks);

kfree(chunk_log);
return 0;
diff --git a/drivers/lightnvm/pblk-sysfs.c b/drivers/lightnvm/pblk-sysfs.c
index d3b50741b691..191af0c6591e 100644
--- a/drivers/lightnvm/pblk-sysfs.c
+++ b/drivers/lightnvm/pblk-sysfs.c
@@ -142,6 +142,40 @@ static ssize_t pblk_sysfs_ppaf(struct pblk *pblk, char *page)
return sz;
}

+static ssize_t pblk_sysfs_line_state_show(struct pblk *pblk, char *page)
+{
+ struct pblk_line_meta *lm = &pblk->lm;
+ struct pblk_line_mgmt *l_mg = &pblk->l_mg;
+ struct pblk_line *line;
+ int line_id = atomic_read(&l_mg->sysfs_line_state);
+ ssize_t sz = 0;
+ int i;
+
+ if (line_id < 0 || line_id >= l_mg->nr_lines)
+ return 0;
+
+ sz = snprintf(page, PAGE_SIZE,
+ "line\tchunk\tstate\ttype\twear-index\tslba\t\tcnlb\twp\n");
+
+ line = &pblk->lines[line_id];
+
+ for (i = 0; i < lm->blk_per_line; i++) {
+ struct pblk_chunk *chunk = &line->chks[i];
+
+ sz += snprintf(page + sz, PAGE_SIZE - sz,
+ "%d\t%d\t%d\t%d\t%d\t\t%llu\t\t%llu\t%llu\n",
+ line->id, i,
+ chunk->state,
+ chunk->type,
+ chunk->wi,
+ chunk->slba,
+ chunk->cnlb,
+ chunk->wp);
+ }
+
+ return sz;
+}
+
static ssize_t pblk_sysfs_lines(struct pblk *pblk, char *page)
{
struct nvm_tgt_dev *dev = pblk->dev;
@@ -398,6 +432,29 @@ static ssize_t pblk_sysfs_stats_debug(struct pblk *pblk, char *page)
}
#endif

+
+static ssize_t pblk_sysfs_line_state_store(struct pblk *pblk, const char *page,
+ size_t len)
+{
+ struct pblk_line_mgmt *l_mg = &pblk->l_mg;
+ size_t c_len;
+ int line_id;
+
+ c_len = strcspn(page, "\n");
+ if (c_len >= len)
+ return -EINVAL;
+
+ if (kstrtouint(page, 0, &line_id))
+ return -EINVAL;
+
+ if (line_id < 0 || line_id >= l_mg->nr_lines)
+ return -EINVAL;
+
+ atomic_set(&l_mg->sysfs_line_state, line_id);
+
+ return len;
+}
+
static ssize_t pblk_sysfs_gc_force(struct pblk *pblk, const char *page,
size_t len)
{
@@ -529,6 +586,11 @@ static struct attribute sys_lines_info_attr = {
.mode = 0444,
};

+static struct attribute sys_line_state_attr = {
+ .name = "line_state",
+ .mode = 0644,
+};
+
static struct attribute sys_gc_force = {
.name = "gc_force",
.mode = 0200,
@@ -572,6 +634,7 @@ static struct attribute *pblk_attrs[] = {
&sys_stats_ppaf_attr,
&sys_lines_attr,
&sys_lines_info_attr,
+ &sys_line_state_attr,
&sys_write_amp_mileage,
&sys_write_amp_trip,
&sys_padding_dist,
@@ -602,6 +665,8 @@ static ssize_t pblk_sysfs_show(struct kobject *kobj, struct attribute *attr,
return pblk_sysfs_lines(pblk, buf);
else if (strcmp(attr->name, "lines_info") == 0)
return pblk_sysfs_lines_info(pblk, buf);
+ else if (strcmp(attr->name, "line_state") == 0)
+ return pblk_sysfs_line_state_show(pblk, buf);
else if (strcmp(attr->name, "max_sec_per_write") == 0)
return pblk_sysfs_get_sec_per_write(pblk, buf);
else if (strcmp(attr->name, "write_amp_mileage") == 0)
@@ -628,6 +693,8 @@ static ssize_t pblk_sysfs_store(struct kobject *kobj, struct attribute *attr,
return pblk_sysfs_set_sec_per_write(pblk, buf, len);
else if (strcmp(attr->name, "write_amp_trip") == 0)
return pblk_sysfs_set_write_amp_trip(pblk, buf, len);
+ else if (strcmp(attr->name, "line_state") == 0)
+ return pblk_sysfs_line_state_store(pblk, buf, len);
else if (strcmp(attr->name, "padding_dist") == 0)
return pblk_sysfs_set_padding_dist(pblk, buf, len);
return 0;
diff --git a/drivers/lightnvm/pblk.h b/drivers/lightnvm/pblk.h
index 46b29a492f74..fba978e7f7c1 100644
--- a/drivers/lightnvm/pblk.h
+++ b/drivers/lightnvm/pblk.h
@@ -201,6 +201,8 @@ struct pblk_rb {

struct pblk_lun {
struct ppa_addr bppa;
+ struct ppa_addr chunk_bppa;
+
struct semaphore wr_sem;
};

@@ -297,6 +299,7 @@ enum {
PBLK_LINETYPE_DATA = 2,

/* Line state */
+ PBLK_LINESTATE_NEW = 9,
PBLK_LINESTATE_FREE = 10,
PBLK_LINESTATE_OPEN = 11,
PBLK_LINESTATE_CLOSED = 12,
@@ -412,6 +415,15 @@ struct pblk_smeta {
struct line_smeta *buf; /* smeta buffer in persistent format */
};

+struct pblk_chunk {
+ int state;
+ int type;
+ int wi;
+ u64 slba;
+ u64 cnlb;
+ u64 wp;
+};
+
struct pblk_line {
struct pblk *pblk;
unsigned int id; /* Line number corresponds to the
@@ -426,6 +438,8 @@ struct pblk_line {

unsigned long *lun_bitmap; /* Bitmap for LUNs mapped in line */

+ struct pblk_chunk *chks; /* Chunks forming line */
+
struct pblk_smeta *smeta; /* Start metadata */
struct pblk_emeta *emeta; /* End medatada */

@@ -513,6 +527,8 @@ struct pblk_line_mgmt {
unsigned long d_seq_nr; /* Data line unique sequence number */
unsigned long l_seq_nr; /* Log line unique sequence number */

+ atomic_t sysfs_line_state; /* Line being monitored in sysfs */
+
spinlock_t free_lock;
spinlock_t close_lock;
spinlock_t gc_lock;
@@ -729,6 +745,10 @@ void pblk_set_sec_per_write(struct pblk *pblk, int sec_per_write);
int pblk_setup_w_rec_rq(struct pblk *pblk, struct nvm_rq *rqd,
struct pblk_c_ctx *c_ctx);
void pblk_discard(struct pblk *pblk, struct bio *bio);
+struct nvm_chunk_log_page *pblk_chunk_get_info(struct pblk *pblk);
+struct nvm_chunk_log_page *pblk_chunk_get_off(struct pblk *pblk,
+ struct nvm_chunk_log_page *lp,
+ struct ppa_addr ppa);
void pblk_log_write_err(struct pblk *pblk, struct nvm_rq *rqd);
void pblk_log_read_err(struct pblk *pblk, struct nvm_rq *rqd);
int pblk_submit_io(struct pblk *pblk, struct nvm_rq *rqd);
--
2.7.4


2018-02-15 10:14:52

by Matias Bjørling

Subject: Re: [PATCH 1/8] lightnvm: exposed generic geometry to targets

On 02/13/2018 03:06 PM, Javier González wrote:
> With the inclusion of 2.0 support, we need a generic geometry that
> describes the OCSSD independently of the specification that it
> implements. Otherwise, geometry specific code is required, which
> complicates targets and makes maintenance much more difficult.
>
> This patch refactors the identify path and populates a generic geometry
> that is then given to the targets on creation. Since the 2.0 geometry is
> much more abstract that 1.2, the generic geometry resembles 2.0, but it
> is not identical, as it needs to understand 1.2 abstractions too.
>
> Signed-off-by: Javier González <[email protected]>
> ---
> drivers/lightnvm/core.c | 143 ++++++---------
> drivers/lightnvm/pblk-core.c | 16 +-
> drivers/lightnvm/pblk-gc.c | 2 +-
> drivers/lightnvm/pblk-init.c | 149 ++++++++-------
> drivers/lightnvm/pblk-read.c | 2 +-
> drivers/lightnvm/pblk-recovery.c | 14 +-
> drivers/lightnvm/pblk-rl.c | 2 +-
> drivers/lightnvm/pblk-sysfs.c | 39 ++--
> drivers/lightnvm/pblk-write.c | 2 +-
> drivers/lightnvm/pblk.h | 105 +++++------
> drivers/nvme/host/lightnvm.c | 379 ++++++++++++++++++++++++---------------
> include/linux/lightnvm.h | 220 +++++++++++++----------
> 12 files changed, 586 insertions(+), 487 deletions(-)
>
> diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c
> index 9b1255b3e05e..80492fa6ee76 100644
> --- a/drivers/lightnvm/core.c
> +++ b/drivers/lightnvm/core.c
> @@ -111,6 +111,7 @@ static void nvm_release_luns_err(struct nvm_dev *dev, int lun_begin,
> static void nvm_remove_tgt_dev(struct nvm_tgt_dev *tgt_dev, int clear)
> {
> struct nvm_dev *dev = tgt_dev->parent;
> + struct nvm_dev_geo *dev_geo = &dev->dev_geo;
> struct nvm_dev_map *dev_map = tgt_dev->map;
> int i, j;
>
> @@ -122,7 +123,7 @@ static void nvm_remove_tgt_dev(struct nvm_tgt_dev *tgt_dev, int clear)
> if (clear) {
> for (j = 0; j < ch_map->nr_luns; j++) {
> int lun = j + lun_offs[j];
> - int lunid = (ch * dev->geo.nr_luns) + lun;
> + int lunid = (ch * dev_geo->num_lun) + lun;
>
> WARN_ON(!test_and_clear_bit(lunid,
> dev->lun_map));
> @@ -143,19 +144,20 @@ static struct nvm_tgt_dev *nvm_create_tgt_dev(struct nvm_dev *dev,
> u16 lun_begin, u16 lun_end,
> u16 op)
> {
> + struct nvm_dev_geo *dev_geo = &dev->dev_geo;
> struct nvm_tgt_dev *tgt_dev = NULL;
> struct nvm_dev_map *dev_rmap = dev->rmap;
> struct nvm_dev_map *dev_map;
> struct ppa_addr *luns;
> int nr_luns = lun_end - lun_begin + 1;
> int luns_left = nr_luns;
> - int nr_chnls = nr_luns / dev->geo.nr_luns;
> - int nr_chnls_mod = nr_luns % dev->geo.nr_luns;
> - int bch = lun_begin / dev->geo.nr_luns;
> - int blun = lun_begin % dev->geo.nr_luns;
> + int nr_chnls = nr_luns / dev_geo->num_lun;
> + int nr_chnls_mod = nr_luns % dev_geo->num_lun;
> + int bch = lun_begin / dev_geo->num_lun;
> + int blun = lun_begin % dev_geo->num_lun;
> int lunid = 0;
> int lun_balanced = 1;
> - int prev_nr_luns;
> + int sec_per_lun, prev_nr_luns;
> int i, j;
>
> nr_chnls = (nr_chnls_mod == 0) ? nr_chnls : nr_chnls + 1;
> @@ -173,15 +175,15 @@ static struct nvm_tgt_dev *nvm_create_tgt_dev(struct nvm_dev *dev,
> if (!luns)
> goto err_luns;
>
> - prev_nr_luns = (luns_left > dev->geo.nr_luns) ?
> - dev->geo.nr_luns : luns_left;
> + prev_nr_luns = (luns_left > dev_geo->num_lun) ?
> + dev_geo->num_lun : luns_left;
> for (i = 0; i < nr_chnls; i++) {
> struct nvm_ch_map *ch_rmap = &dev_rmap->chnls[i + bch];
> int *lun_roffs = ch_rmap->lun_offs;
> struct nvm_ch_map *ch_map = &dev_map->chnls[i];
> int *lun_offs;
> - int luns_in_chnl = (luns_left > dev->geo.nr_luns) ?
> - dev->geo.nr_luns : luns_left;
> + int luns_in_chnl = (luns_left > dev_geo->num_lun) ?
> + dev_geo->num_lun : luns_left;
>
> if (lun_balanced && prev_nr_luns != luns_in_chnl)
> lun_balanced = 0;
> @@ -215,18 +217,23 @@ static struct nvm_tgt_dev *nvm_create_tgt_dev(struct nvm_dev *dev,
> if (!tgt_dev)
> goto err_ch;
>
> - memcpy(&tgt_dev->geo, &dev->geo, sizeof(struct nvm_geo));
> /* Target device only owns a portion of the physical device */
> - tgt_dev->geo.nr_chnls = nr_chnls;
> + tgt_dev->geo.num_ch = nr_chnls;
> + tgt_dev->geo.num_lun = (lun_balanced) ? prev_nr_luns : -1;
> tgt_dev->geo.all_luns = nr_luns;
> - tgt_dev->geo.nr_luns = (lun_balanced) ? prev_nr_luns : -1;
> + tgt_dev->geo.all_chunks = nr_luns * dev_geo->c.num_chk;
> +
> + tgt_dev->geo.max_rq_size = dev->ops->max_phys_sect * dev_geo->c.csecs;
> tgt_dev->geo.op = op;
> - tgt_dev->total_secs = nr_luns * tgt_dev->geo.sec_per_lun;
> +
> + sec_per_lun = dev_geo->c.clba * dev_geo->c.num_chk;
> + tgt_dev->geo.total_secs = nr_luns * sec_per_lun;
> +
> + tgt_dev->geo.c = dev_geo->c;
> +
> tgt_dev->q = dev->q;
> tgt_dev->map = dev_map;
> tgt_dev->luns = luns;
> - memcpy(&tgt_dev->identity, &dev->identity, sizeof(struct nvm_id));
> -
> tgt_dev->parent = dev;
>
> return tgt_dev;
> @@ -268,12 +275,12 @@ static struct nvm_tgt_type *nvm_find_target_type(const char *name)
> return tt;
> }
>
> -static int nvm_config_check_luns(struct nvm_geo *geo, int lun_begin,
> +static int nvm_config_check_luns(struct nvm_dev_geo *dev_geo, int lun_begin,
> int lun_end)
> {
> - if (lun_begin > lun_end || lun_end >= geo->all_luns) {
> + if (lun_begin > lun_end || lun_end >= dev_geo->all_luns) {
> pr_err("nvm: lun out of bound (%u:%u > %u)\n",
> - lun_begin, lun_end, geo->all_luns - 1);
> + lun_begin, lun_end, dev_geo->all_luns - 1);
> return -EINVAL;
> }
>
> @@ -283,24 +290,24 @@ static int nvm_config_check_luns(struct nvm_geo *geo, int lun_begin,
> static int __nvm_config_simple(struct nvm_dev *dev,
> struct nvm_ioctl_create_simple *s)
> {
> - struct nvm_geo *geo = &dev->geo;
> + struct nvm_dev_geo *dev_geo = &dev->dev_geo;
>
> if (s->lun_begin == -1 && s->lun_end == -1) {
> s->lun_begin = 0;
> - s->lun_end = geo->all_luns - 1;
> + s->lun_end = dev_geo->all_luns - 1;
> }
>
> - return nvm_config_check_luns(geo, s->lun_begin, s->lun_end);
> + return nvm_config_check_luns(dev_geo, s->lun_begin, s->lun_end);
> }
>
> static int __nvm_config_extended(struct nvm_dev *dev,
> struct nvm_ioctl_create_extended *e)
> {
> - struct nvm_geo *geo = &dev->geo;
> + struct nvm_dev_geo *dev_geo = &dev->dev_geo;
>
> if (e->lun_begin == 0xFFFF && e->lun_end == 0xFFFF) {
> e->lun_begin = 0;
> - e->lun_end = dev->geo.all_luns - 1;
> + e->lun_end = dev_geo->all_luns - 1;
> }
>
> /* op not set falls into target's default */
> @@ -313,7 +320,7 @@ static int __nvm_config_extended(struct nvm_dev *dev,
> return -EINVAL;
> }
>
> - return nvm_config_check_luns(geo, e->lun_begin, e->lun_end);
> + return nvm_config_check_luns(dev_geo, e->lun_begin, e->lun_end);
> }
>
> static int nvm_create_tgt(struct nvm_dev *dev, struct nvm_ioctl_create *create)
> @@ -496,6 +503,7 @@ static int nvm_remove_tgt(struct nvm_dev *dev, struct nvm_ioctl_remove *remove)
>
> static int nvm_register_map(struct nvm_dev *dev)
> {
> + struct nvm_dev_geo *dev_geo = &dev->dev_geo;
> struct nvm_dev_map *rmap;
> int i, j;
>
> @@ -503,15 +511,15 @@ static int nvm_register_map(struct nvm_dev *dev)
> if (!rmap)
> goto err_rmap;
>
> - rmap->chnls = kcalloc(dev->geo.nr_chnls, sizeof(struct nvm_ch_map),
> + rmap->chnls = kcalloc(dev_geo->num_ch, sizeof(struct nvm_ch_map),
> GFP_KERNEL);
> if (!rmap->chnls)
> goto err_chnls;
>
> - for (i = 0; i < dev->geo.nr_chnls; i++) {
> + for (i = 0; i < dev_geo->num_ch; i++) {
> struct nvm_ch_map *ch_rmap;
> int *lun_roffs;
> - int luns_in_chnl = dev->geo.nr_luns;
> + int luns_in_chnl = dev_geo->num_lun;
>
> ch_rmap = &rmap->chnls[i];
>
> @@ -542,10 +550,11 @@ static int nvm_register_map(struct nvm_dev *dev)
>
> static void nvm_unregister_map(struct nvm_dev *dev)
> {
> + struct nvm_dev_geo *dev_geo = &dev->dev_geo;
> struct nvm_dev_map *rmap = dev->rmap;
> int i;
>
> - for (i = 0; i < dev->geo.nr_chnls; i++)
> + for (i = 0; i < dev_geo->num_ch; i++)
> kfree(rmap->chnls[i].lun_offs);
>
> kfree(rmap->chnls);
> @@ -674,7 +683,7 @@ static int nvm_set_rqd_ppalist(struct nvm_tgt_dev *tgt_dev, struct nvm_rq *rqd,
> int i, plane_cnt, pl_idx;
> struct ppa_addr ppa;
>
> - if (geo->plane_mode == NVM_PLANE_SINGLE && nr_ppas == 1) {
> + if (geo->c.pln_mode == NVM_PLANE_SINGLE && nr_ppas == 1) {
> rqd->nr_ppas = nr_ppas;
> rqd->ppa_addr = ppas[0];
>
> @@ -688,7 +697,7 @@ static int nvm_set_rqd_ppalist(struct nvm_tgt_dev *tgt_dev, struct nvm_rq *rqd,
> return -ENOMEM;
> }
>
> - plane_cnt = geo->plane_mode;
> + plane_cnt = geo->c.pln_mode;
> rqd->nr_ppas *= plane_cnt;
>
> for (i = 0; i < nr_ppas; i++) {
> @@ -811,18 +820,18 @@ EXPORT_SYMBOL(nvm_end_io);
> */
> int nvm_bb_tbl_fold(struct nvm_dev *dev, u8 *blks, int nr_blks)
> {
> - struct nvm_geo *geo = &dev->geo;
> + struct nvm_dev_geo *dev_geo = &dev->dev_geo;
> int blk, offset, pl, blktype;
>
> - if (nr_blks != geo->nr_chks * geo->plane_mode)
> + if (nr_blks != dev_geo->c.num_chk * dev_geo->c.pln_mode)
> return -EINVAL;
>
> - for (blk = 0; blk < geo->nr_chks; blk++) {
> - offset = blk * geo->plane_mode;
> + for (blk = 0; blk < dev_geo->c.num_chk; blk++) {
> + offset = blk * dev_geo->c.pln_mode;
> blktype = blks[offset];
>
> /* Bad blocks on any planes take precedence over other types */
> - for (pl = 0; pl < geo->plane_mode; pl++) {
> + for (pl = 0; pl < dev_geo->c.pln_mode; pl++) {
> if (blks[offset + pl] &
> (NVM_BLK_T_BAD|NVM_BLK_T_GRWN_BAD)) {
> blktype = blks[offset + pl];
> @@ -833,7 +842,7 @@ int nvm_bb_tbl_fold(struct nvm_dev *dev, u8 *blks, int nr_blks)
> blks[blk] = blktype;
> }
>
> - return geo->nr_chks;
> + return dev_geo->c.num_chk;
> }
> EXPORT_SYMBOL(nvm_bb_tbl_fold);
>
> @@ -850,44 +859,10 @@ EXPORT_SYMBOL(nvm_get_tgt_bb_tbl);
>
> static int nvm_core_init(struct nvm_dev *dev)
> {
> - struct nvm_id *id = &dev->identity;
> - struct nvm_geo *geo = &dev->geo;
> + struct nvm_dev_geo *dev_geo = &dev->dev_geo;
> int ret;
>
> - memcpy(&geo->ppaf, &id->ppaf, sizeof(struct nvm_addr_format));
> -
> - if (id->mtype != 0) {
> - pr_err("nvm: memory type not supported\n");
> - return -EINVAL;
> - }
> -
> - /* Whole device values */
> - geo->nr_chnls = id->num_ch;
> - geo->nr_luns = id->num_lun;
> -
> - /* Generic device geometry values */
> - geo->ws_min = id->ws_min;
> - geo->ws_opt = id->ws_opt;
> - geo->ws_seq = id->ws_seq;
> - geo->ws_per_chk = id->ws_per_chk;
> - geo->nr_chks = id->num_chk;
> - geo->sec_size = id->csecs;
> - geo->oob_size = id->sos;
> - geo->mccap = id->mccap;
> - geo->max_rq_size = dev->ops->max_phys_sect * geo->sec_size;
> -
> - geo->sec_per_chk = id->clba;
> - geo->sec_per_lun = geo->sec_per_chk * geo->nr_chks;
> - geo->all_luns = geo->nr_luns * geo->nr_chnls;
> -
> - /* 1.2 spec device geometry values */
> - geo->plane_mode = 1 << geo->ws_seq;
> - geo->nr_planes = geo->ws_opt / geo->ws_min;
> - geo->sec_per_pg = geo->ws_min;
> - geo->sec_per_pl = geo->sec_per_pg * geo->nr_planes;
> -
> - dev->total_secs = geo->all_luns * geo->sec_per_lun;
> - dev->lun_map = kcalloc(BITS_TO_LONGS(geo->all_luns),
> + dev->lun_map = kcalloc(BITS_TO_LONGS(dev_geo->all_luns),
> sizeof(unsigned long), GFP_KERNEL);
> if (!dev->lun_map)
> return -ENOMEM;
> @@ -901,7 +876,7 @@ static int nvm_core_init(struct nvm_dev *dev)
> if (ret)
> goto err_fmtype;
>
> - blk_queue_logical_block_size(dev->q, geo->sec_size);
> + blk_queue_logical_block_size(dev->q, dev_geo->c.csecs);
> return 0;
> err_fmtype:
> kfree(dev->lun_map);
> @@ -923,19 +898,17 @@ static void nvm_free(struct nvm_dev *dev)
>
> static int nvm_init(struct nvm_dev *dev)
> {
> - struct nvm_geo *geo = &dev->geo;
> + struct nvm_dev_geo *dev_geo = &dev->dev_geo;
> int ret = -EINVAL;
>
> - if (dev->ops->identity(dev, &dev->identity)) {
> + if (dev->ops->identity(dev)) {
> pr_err("nvm: device could not be identified\n");
> goto err;
> }
>
> - if (dev->identity.ver_id != 1 && dev->identity.ver_id != 2) {
> - pr_err("nvm: device ver_id %d not supported by kernel.\n",
> - dev->identity.ver_id);
> - goto err;
> - }
> + pr_debug("nvm: ver:%u.%u nvm_vendor:%x\n",
> + dev_geo->major_ver_id, dev_geo->minor_ver_id,
> + dev_geo->c.vmnt);
>
> ret = nvm_core_init(dev);
> if (ret) {
> @@ -943,10 +916,10 @@ static int nvm_init(struct nvm_dev *dev)
> goto err;
> }
>
> - pr_info("nvm: registered %s [%u/%u/%u/%u/%u/%u]\n",
> - dev->name, geo->sec_per_pg, geo->nr_planes,
> - geo->ws_per_chk, geo->nr_chks,
> - geo->all_luns, geo->nr_chnls);
> + pr_info("nvm: registered %s [%u/%u/%u/%u/%u]\n",
> + dev->name, dev_geo->c.ws_min, dev_geo->c.ws_opt,
> + dev_geo->c.num_chk, dev_geo->all_luns,
> + dev_geo->num_ch);
> return 0;
> err:
> pr_err("nvm: failed to initialize nvm\n");
> diff --git a/drivers/lightnvm/pblk-core.c b/drivers/lightnvm/pblk-core.c
> index 22e61cd4f801..519af8b9eab7 100644
> --- a/drivers/lightnvm/pblk-core.c
> +++ b/drivers/lightnvm/pblk-core.c
> @@ -613,7 +613,7 @@ static int pblk_line_submit_emeta_io(struct pblk *pblk, struct pblk_line *line,
> memset(&rqd, 0, sizeof(struct nvm_rq));
>
> rq_ppas = pblk_calc_secs(pblk, left_ppas, 0);
> - rq_len = rq_ppas * geo->sec_size;
> + rq_len = rq_ppas * geo->c.csecs;
>
> bio = pblk_bio_map_addr(pblk, emeta_buf, rq_ppas, rq_len,
> l_mg->emeta_alloc_type, GFP_KERNEL);
> @@ -722,7 +722,7 @@ u64 pblk_line_smeta_start(struct pblk *pblk, struct pblk_line *line)
> if (bit >= lm->blk_per_line)
> return -1;
>
> - return bit * geo->sec_per_pl;
> + return bit * geo->c.ws_opt;
> }
>
> static int pblk_line_submit_smeta_io(struct pblk *pblk, struct pblk_line *line,
> @@ -1035,19 +1035,19 @@ static int pblk_line_init_bb(struct pblk *pblk, struct pblk_line *line,
> /* Capture bad block information on line mapping bitmaps */
> while ((bit = find_next_bit(line->blk_bitmap, lm->blk_per_line,
> bit + 1)) < lm->blk_per_line) {
> - off = bit * geo->sec_per_pl;
> + off = bit * geo->c.ws_opt;
> bitmap_shift_left(l_mg->bb_aux, l_mg->bb_template, off,
> lm->sec_per_line);
> bitmap_or(line->map_bitmap, line->map_bitmap, l_mg->bb_aux,
> lm->sec_per_line);
> - line->sec_in_line -= geo->sec_per_chk;
> + line->sec_in_line -= geo->c.clba;
> if (bit >= lm->emeta_bb)
> nr_bb++;
> }
>
> /* Mark smeta metadata sectors as bad sectors */
> bit = find_first_zero_bit(line->blk_bitmap, lm->blk_per_line);
> - off = bit * geo->sec_per_pl;
> + off = bit * geo->c.ws_opt;
> bitmap_set(line->map_bitmap, off, lm->smeta_sec);
> line->sec_in_line -= lm->smeta_sec;
> line->smeta_ssec = off;
> @@ -1066,10 +1066,10 @@ static int pblk_line_init_bb(struct pblk *pblk, struct pblk_line *line,
> emeta_secs = lm->emeta_sec[0];
> off = lm->sec_per_line;
> while (emeta_secs) {
> - off -= geo->sec_per_pl;
> + off -= geo->c.ws_opt;
> if (!test_bit(off, line->invalid_bitmap)) {
> - bitmap_set(line->invalid_bitmap, off, geo->sec_per_pl);
> - emeta_secs -= geo->sec_per_pl;
> + bitmap_set(line->invalid_bitmap, off, geo->c.ws_opt);
> + emeta_secs -= geo->c.ws_opt;
> }
> }
>
> diff --git a/drivers/lightnvm/pblk-gc.c b/drivers/lightnvm/pblk-gc.c
> index 320f99af99e9..16afea3f5541 100644
> --- a/drivers/lightnvm/pblk-gc.c
> +++ b/drivers/lightnvm/pblk-gc.c
> @@ -88,7 +88,7 @@ static void pblk_gc_line_ws(struct work_struct *work)
>
> up(&gc->gc_sem);
>
> - gc_rq->data = vmalloc(gc_rq->nr_secs * geo->sec_size);
> + gc_rq->data = vmalloc(gc_rq->nr_secs * geo->c.csecs);
> if (!gc_rq->data) {
> pr_err("pblk: could not GC line:%d (%d/%d)\n",
> line->id, *line->vsc, gc_rq->nr_secs);
> diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c
> index 86a94a7faa96..72b7902e5d1c 100644
> --- a/drivers/lightnvm/pblk-init.c
> +++ b/drivers/lightnvm/pblk-init.c
> @@ -80,7 +80,7 @@ static size_t pblk_trans_map_size(struct pblk *pblk)
> {
> int entry_size = 8;
>
> - if (pblk->ppaf_bitsize < 32)
> + if (pblk->addrf_len < 32)
> entry_size = 4;
>
> return entry_size * pblk->rl.nr_secs;
> @@ -146,7 +146,7 @@ static int pblk_rwb_init(struct pblk *pblk)
> return -ENOMEM;
>
> power_size = get_count_order(nr_entries);
> - power_seg_sz = get_count_order(geo->sec_size);
> + power_seg_sz = get_count_order(geo->c.csecs);
>
> return pblk_rb_init(&pblk->rwb, entries, power_size, power_seg_sz);
> }
> @@ -154,47 +154,63 @@ static int pblk_rwb_init(struct pblk *pblk)
> /* Minimum pages needed within a lun */
> #define ADDR_POOL_SIZE 64
>
> -static int pblk_set_ppaf(struct pblk *pblk)
> +static int pblk_set_addrf_12(struct nvm_geo *geo,
> + struct nvm_addr_format_12 *dst)
> {
> - struct nvm_tgt_dev *dev = pblk->dev;
> - struct nvm_geo *geo = &dev->geo;
> - struct nvm_addr_format ppaf = geo->ppaf;
> + struct nvm_addr_format_12 *src =
> + (struct nvm_addr_format_12 *)&geo->c.addrf;
> int power_len;
>
> /* Re-calculate channel and lun format to adapt to configuration */
> - power_len = get_count_order(geo->nr_chnls);
> - if (1 << power_len != geo->nr_chnls) {
> + power_len = get_count_order(geo->num_ch);
> + if (1 << power_len != geo->num_ch) {
> pr_err("pblk: supports only power-of-two channel config.\n");
> return -EINVAL;
> }
> - ppaf.ch_len = power_len;
> + dst->ch_len = power_len;
>
> - power_len = get_count_order(geo->nr_luns);
> - if (1 << power_len != geo->nr_luns) {
> + power_len = get_count_order(geo->num_lun);
> + if (1 << power_len != geo->num_lun) {
> pr_err("pblk: supports only power-of-two LUN config.\n");
> return -EINVAL;
> }
> - ppaf.lun_len = power_len;
> + dst->lun_len = power_len;
>
> - pblk->ppaf.sec_offset = 0;
> - pblk->ppaf.pln_offset = ppaf.sect_len;
> - pblk->ppaf.ch_offset = pblk->ppaf.pln_offset + ppaf.pln_len;
> - pblk->ppaf.lun_offset = pblk->ppaf.ch_offset + ppaf.ch_len;
> - pblk->ppaf.pg_offset = pblk->ppaf.lun_offset + ppaf.lun_len;
> - pblk->ppaf.blk_offset = pblk->ppaf.pg_offset + ppaf.pg_len;
> - pblk->ppaf.sec_mask = (1ULL << ppaf.sect_len) - 1;
> - pblk->ppaf.pln_mask = ((1ULL << ppaf.pln_len) - 1) <<
> - pblk->ppaf.pln_offset;
> - pblk->ppaf.ch_mask = ((1ULL << ppaf.ch_len) - 1) <<
> - pblk->ppaf.ch_offset;
> - pblk->ppaf.lun_mask = ((1ULL << ppaf.lun_len) - 1) <<
> - pblk->ppaf.lun_offset;
> - pblk->ppaf.pg_mask = ((1ULL << ppaf.pg_len) - 1) <<
> - pblk->ppaf.pg_offset;
> - pblk->ppaf.blk_mask = ((1ULL << ppaf.blk_len) - 1) <<
> - pblk->ppaf.blk_offset;
> + dst->blk_len = src->blk_len;
> + dst->pg_len = src->pg_len;
> + dst->pln_len = src->pln_len;
> + dst->sec_len = src->sec_len;
>
> - pblk->ppaf_bitsize = pblk->ppaf.blk_offset + ppaf.blk_len;
> + dst->sec_offset = 0;
> + dst->pln_offset = dst->sec_len;
> + dst->ch_offset = dst->pln_offset + dst->pln_len;
> + dst->lun_offset = dst->ch_offset + dst->ch_len;
> + dst->pg_offset = dst->lun_offset + dst->lun_len;
> + dst->blk_offset = dst->pg_offset + dst->pg_len;
> +
> + dst->sec_mask = ((1ULL << dst->sec_len) - 1) << dst->sec_offset;
> + dst->pln_mask = ((1ULL << dst->pln_len) - 1) << dst->pln_offset;
> + dst->ch_mask = ((1ULL << dst->ch_len) - 1) << dst->ch_offset;
> + dst->lun_mask = ((1ULL << dst->lun_len) - 1) << dst->lun_offset;
> + dst->pg_mask = ((1ULL << dst->pg_len) - 1) << dst->pg_offset;
> + dst->blk_mask = ((1ULL << dst->blk_len) - 1) << dst->blk_offset;
> +
> + return dst->blk_offset + src->blk_len;
> +}
> +
> +static int pblk_set_addrf(struct pblk *pblk)
> +{
> + struct nvm_tgt_dev *dev = pblk->dev;
> + struct nvm_geo *geo = &dev->geo;
> + int mod;
> +
> + div_u64_rem(geo->c.clba, pblk->min_write_pgs, &mod);
> + if (mod) {
> + pr_err("pblk: bad configuration of sectors/pages\n");
> + return -EINVAL;
> + }
> +
> + pblk->addrf_len = pblk_set_addrf_12(geo, (void *)&pblk->addrf);
>
> return 0;
> }
> @@ -253,8 +269,7 @@ static int pblk_core_init(struct pblk *pblk)
> struct nvm_tgt_dev *dev = pblk->dev;
> struct nvm_geo *geo = &dev->geo;
>
> - pblk->pgs_in_buffer = NVM_MEM_PAGE_WRITE * geo->sec_per_pg *
> - geo->nr_planes * geo->all_luns;
> + pblk->pgs_in_buffer = geo->c.mw_cunits * geo->c.ws_opt * geo->all_luns;
>
> if (pblk_init_global_caches(pblk))
> return -ENOMEM;
> @@ -305,7 +320,7 @@ static int pblk_core_init(struct pblk *pblk)
> if (!pblk->r_end_wq)
> goto free_bb_wq;
>
> - if (pblk_set_ppaf(pblk))
> + if (pblk_set_addrf(pblk))
> goto free_r_end_wq;
>
> if (pblk_rwb_init(pblk))
> @@ -434,7 +449,7 @@ static void *pblk_bb_get_log(struct pblk *pblk)
> int i, nr_blks, blk_per_lun;
> int ret;
>
> - blk_per_lun = geo->nr_chks * geo->plane_mode;
> + blk_per_lun = geo->c.num_chk * geo->c.pln_mode;
> nr_blks = blk_per_lun * geo->all_luns;
>
> log = kmalloc(nr_blks, GFP_KERNEL);
> @@ -484,7 +499,7 @@ static int pblk_luns_init(struct pblk *pblk, struct ppa_addr *luns)
> int i;
>
> /* TODO: Implement unbalanced LUN support */
> - if (geo->nr_luns < 0) {
> + if (geo->num_lun < 0) {
> pr_err("pblk: unbalanced LUN config.\n");
> return -EINVAL;
> }
> @@ -496,9 +511,9 @@ static int pblk_luns_init(struct pblk *pblk, struct ppa_addr *luns)
>
> for (i = 0; i < geo->all_luns; i++) {
> /* Stripe across channels */
> - int ch = i % geo->nr_chnls;
> - int lun_raw = i / geo->nr_chnls;
> - int lunid = lun_raw + ch * geo->nr_luns;
> + int ch = i % geo->num_ch;
> + int lun_raw = i / geo->num_ch;
> + int lunid = lun_raw + ch * geo->num_lun;
>
> rlun = &pblk->luns[i];
> rlun->bppa = luns[lunid];
> @@ -552,18 +567,18 @@ static unsigned int calc_emeta_len(struct pblk *pblk)
> /* Round to sector size so that lba_list starts on its own sector */
> lm->emeta_sec[1] = DIV_ROUND_UP(
> sizeof(struct line_emeta) + lm->blk_bitmap_len +
> - sizeof(struct wa_counters), geo->sec_size);
> - lm->emeta_len[1] = lm->emeta_sec[1] * geo->sec_size;
> + sizeof(struct wa_counters), geo->c.csecs);
> + lm->emeta_len[1] = lm->emeta_sec[1] * geo->c.csecs;
>
> /* Round to sector size so that vsc_list starts on its own sector */
> lm->dsec_per_line = lm->sec_per_line - lm->emeta_sec[0];
> lm->emeta_sec[2] = DIV_ROUND_UP(lm->dsec_per_line * sizeof(u64),
> - geo->sec_size);
> - lm->emeta_len[2] = lm->emeta_sec[2] * geo->sec_size;
> + geo->c.csecs);
> + lm->emeta_len[2] = lm->emeta_sec[2] * geo->c.csecs;
>
> lm->emeta_sec[3] = DIV_ROUND_UP(l_mg->nr_lines * sizeof(u32),
> - geo->sec_size);
> - lm->emeta_len[3] = lm->emeta_sec[3] * geo->sec_size;
> + geo->c.csecs);
> + lm->emeta_len[3] = lm->emeta_sec[3] * geo->c.csecs;
>
> lm->vsc_list_len = l_mg->nr_lines * sizeof(u32);
>
> @@ -594,13 +609,13 @@ static void pblk_set_provision(struct pblk *pblk, long nr_free_blks)
> * on user capacity consider only provisioned blocks
> */
> pblk->rl.total_blocks = nr_free_blks;
> - pblk->rl.nr_secs = nr_free_blks * geo->sec_per_chk;
> + pblk->rl.nr_secs = nr_free_blks * geo->c.clba;
>
> /* Consider sectors used for metadata */
> sec_meta = (lm->smeta_sec + lm->emeta_sec[0]) * l_mg->nr_free_lines;
> - blk_meta = DIV_ROUND_UP(sec_meta, geo->sec_per_chk);
> + blk_meta = DIV_ROUND_UP(sec_meta, geo->c.clba);
>
> - pblk->capacity = (provisioned - blk_meta) * geo->sec_per_chk;
> + pblk->capacity = (provisioned - blk_meta) * geo->c.clba;
>
> atomic_set(&pblk->rl.free_blocks, nr_free_blks);
> atomic_set(&pblk->rl.free_user_blocks, nr_free_blks);
> @@ -711,10 +726,10 @@ static int pblk_lines_init(struct pblk *pblk)
> void *chunk_log;
> unsigned int smeta_len, emeta_len;
> long nr_bad_blks = 0, nr_free_blks = 0;
> - int bb_distance, max_write_ppas, mod;
> + int bb_distance, max_write_ppas;
> int i, ret;
>
> - pblk->min_write_pgs = geo->sec_per_pl * (geo->sec_size / PAGE_SIZE);
> + pblk->min_write_pgs = geo->c.ws_opt * (geo->c.csecs / PAGE_SIZE);
> max_write_ppas = pblk->min_write_pgs * geo->all_luns;
> pblk->max_write_pgs = (max_write_ppas < nvm_max_phys_sects(dev)) ?
> max_write_ppas : nvm_max_phys_sects(dev);
> @@ -725,19 +740,13 @@ static int pblk_lines_init(struct pblk *pblk)
> return -EINVAL;
> }
>
> - div_u64_rem(geo->sec_per_chk, pblk->min_write_pgs, &mod);
> - if (mod) {
> - pr_err("pblk: bad configuration of sectors/pages\n");
> - return -EINVAL;
> - }
> -
> - l_mg->nr_lines = geo->nr_chks;
> + l_mg->nr_lines = geo->c.num_chk;
> l_mg->log_line = l_mg->data_line = NULL;
> l_mg->l_seq_nr = l_mg->d_seq_nr = 0;
> l_mg->nr_free_lines = 0;
> bitmap_zero(&l_mg->meta_bitmap, PBLK_DATA_LINES);
>
> - lm->sec_per_line = geo->sec_per_chk * geo->all_luns;
> + lm->sec_per_line = geo->c.clba * geo->all_luns;
> lm->blk_per_line = geo->all_luns;
> lm->blk_bitmap_len = BITS_TO_LONGS(geo->all_luns) * sizeof(long);
> lm->sec_bitmap_len = BITS_TO_LONGS(lm->sec_per_line) * sizeof(long);
> @@ -751,8 +760,8 @@ static int pblk_lines_init(struct pblk *pblk)
> */
> i = 1;
> add_smeta_page:
> - lm->smeta_sec = i * geo->sec_per_pl;
> - lm->smeta_len = lm->smeta_sec * geo->sec_size;
> + lm->smeta_sec = i * geo->c.ws_opt;
> + lm->smeta_len = lm->smeta_sec * geo->c.csecs;
>
> smeta_len = sizeof(struct line_smeta) + lm->lun_bitmap_len;
> if (smeta_len > lm->smeta_len) {
> @@ -765,8 +774,8 @@ static int pblk_lines_init(struct pblk *pblk)
> */
> i = 1;
> add_emeta_page:
> - lm->emeta_sec[0] = i * geo->sec_per_pl;
> - lm->emeta_len[0] = lm->emeta_sec[0] * geo->sec_size;
> + lm->emeta_sec[0] = i * geo->c.ws_opt;
> + lm->emeta_len[0] = lm->emeta_sec[0] * geo->c.csecs;
>
> emeta_len = calc_emeta_len(pblk);
> if (emeta_len > lm->emeta_len[0]) {
> @@ -779,7 +788,7 @@ static int pblk_lines_init(struct pblk *pblk)
> lm->min_blk_line = 1;
> if (geo->all_luns > 1)
> lm->min_blk_line += DIV_ROUND_UP(lm->smeta_sec +
> - lm->emeta_sec[0], geo->sec_per_chk);
> + lm->emeta_sec[0], geo->c.clba);
>
> if (lm->min_blk_line > lm->blk_per_line) {
> pr_err("pblk: config. not supported. Min. LUN in line:%d\n",
> @@ -803,9 +812,9 @@ static int pblk_lines_init(struct pblk *pblk)
> goto fail_free_bb_template;
> }
>
> - bb_distance = (geo->all_luns) * geo->sec_per_pl;
> + bb_distance = (geo->all_luns) * geo->c.ws_opt;
> for (i = 0; i < lm->sec_per_line; i += bb_distance)
> - bitmap_set(l_mg->bb_template, i, geo->sec_per_pl);
> + bitmap_set(l_mg->bb_template, i, geo->c.ws_opt);
>
> INIT_LIST_HEAD(&l_mg->free_list);
> INIT_LIST_HEAD(&l_mg->corrupt_list);
> @@ -982,9 +991,15 @@ static void *pblk_init(struct nvm_tgt_dev *dev, struct gendisk *tdisk,
> struct pblk *pblk;
> int ret;
>
> - if (dev->identity.dom & NVM_RSP_L2P) {
> + if (geo->c.version != NVM_OCSSD_SPEC_12) {
> + pr_err("pblk: OCSSD version not supported (%u)\n",
> + geo->c.version);
> + return ERR_PTR(-EINVAL);
> + }
> +
> + if (geo->c.version == NVM_OCSSD_SPEC_12 && geo->c.dom & NVM_RSP_L2P) {
> pr_err("pblk: host-side L2P table not supported. (%x)\n",
> - dev->identity.dom);
> + geo->c.dom);
> return ERR_PTR(-EINVAL);
> }
>
> @@ -1092,7 +1107,7 @@ static void *pblk_init(struct nvm_tgt_dev *dev, struct gendisk *tdisk,
>
> blk_queue_write_cache(tqueue, true, false);
>
> - tqueue->limits.discard_granularity = geo->sec_per_chk * geo->sec_size;
> + tqueue->limits.discard_granularity = geo->c.clba * geo->c.csecs;
> tqueue->limits.discard_alignment = 0;
> blk_queue_max_discard_sectors(tqueue, UINT_MAX >> 9);
> queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, tqueue);
> diff --git a/drivers/lightnvm/pblk-read.c b/drivers/lightnvm/pblk-read.c
> index 2f761283f43e..ebb6bae3a3b8 100644
> --- a/drivers/lightnvm/pblk-read.c
> +++ b/drivers/lightnvm/pblk-read.c
> @@ -563,7 +563,7 @@ int pblk_submit_read_gc(struct pblk *pblk, struct pblk_gc_rq *gc_rq)
> if (!(gc_rq->secs_to_gc))
> goto out;
>
> - data_len = (gc_rq->secs_to_gc) * geo->sec_size;
> + data_len = (gc_rq->secs_to_gc) * geo->c.csecs;
> bio = pblk_bio_map_addr(pblk, gc_rq->data, gc_rq->secs_to_gc, data_len,
> PBLK_VMALLOC_META, GFP_KERNEL);
> if (IS_ERR(bio)) {
> diff --git a/drivers/lightnvm/pblk-recovery.c b/drivers/lightnvm/pblk-recovery.c
> index e75a1af2eebe..beacef1412a2 100644
> --- a/drivers/lightnvm/pblk-recovery.c
> +++ b/drivers/lightnvm/pblk-recovery.c
> @@ -188,7 +188,7 @@ static int pblk_calc_sec_in_line(struct pblk *pblk, struct pblk_line *line)
> int nr_bb = bitmap_weight(line->blk_bitmap, lm->blk_per_line);
>
> return lm->sec_per_line - lm->smeta_sec - lm->emeta_sec[0] -
> - nr_bb * geo->sec_per_chk;
> + nr_bb * geo->c.clba;
> }
>
> struct pblk_recov_alloc {
> @@ -236,7 +236,7 @@ static int pblk_recov_read_oob(struct pblk *pblk, struct pblk_line *line,
> rq_ppas = pblk_calc_secs(pblk, left_ppas, 0);
> if (!rq_ppas)
> rq_ppas = pblk->min_write_pgs;
> - rq_len = rq_ppas * geo->sec_size;
> + rq_len = rq_ppas * geo->c.csecs;
>
> bio = bio_map_kern(dev->q, data, rq_len, GFP_KERNEL);
> if (IS_ERR(bio))
> @@ -355,7 +355,7 @@ static int pblk_recov_pad_oob(struct pblk *pblk, struct pblk_line *line,
> if (!pad_rq)
> return -ENOMEM;
>
> - data = vzalloc(pblk->max_write_pgs * geo->sec_size);
> + data = vzalloc(pblk->max_write_pgs * geo->c.csecs);
> if (!data) {
> ret = -ENOMEM;
> goto free_rq;
> @@ -372,7 +372,7 @@ static int pblk_recov_pad_oob(struct pblk *pblk, struct pblk_line *line,
> goto fail_free_pad;
> }
>
> - rq_len = rq_ppas * geo->sec_size;
> + rq_len = rq_ppas * geo->c.csecs;
>
> meta_list = nvm_dev_dma_alloc(dev->parent, GFP_KERNEL, &dma_meta_list);
> if (!meta_list) {
> @@ -513,7 +513,7 @@ static int pblk_recov_scan_all_oob(struct pblk *pblk, struct pblk_line *line,
> rq_ppas = pblk_calc_secs(pblk, left_ppas, 0);
> if (!rq_ppas)
> rq_ppas = pblk->min_write_pgs;
> - rq_len = rq_ppas * geo->sec_size;
> + rq_len = rq_ppas * geo->c.csecs;
>
> bio = bio_map_kern(dev->q, data, rq_len, GFP_KERNEL);
> if (IS_ERR(bio))
> @@ -644,7 +644,7 @@ static int pblk_recov_scan_oob(struct pblk *pblk, struct pblk_line *line,
> rq_ppas = pblk_calc_secs(pblk, left_ppas, 0);
> if (!rq_ppas)
> rq_ppas = pblk->min_write_pgs;
> - rq_len = rq_ppas * geo->sec_size;
> + rq_len = rq_ppas * geo->c.csecs;
>
> bio = bio_map_kern(dev->q, data, rq_len, GFP_KERNEL);
> if (IS_ERR(bio))
> @@ -749,7 +749,7 @@ static int pblk_recov_l2p_from_oob(struct pblk *pblk, struct pblk_line *line)
> ppa_list = (void *)(meta_list) + pblk_dma_meta_size;
> dma_ppa_list = dma_meta_list + pblk_dma_meta_size;
>
> - data = kcalloc(pblk->max_write_pgs, geo->sec_size, GFP_KERNEL);
> + data = kcalloc(pblk->max_write_pgs, geo->c.csecs, GFP_KERNEL);
> if (!data) {
> ret = -ENOMEM;
> goto free_meta_list;
> diff --git a/drivers/lightnvm/pblk-rl.c b/drivers/lightnvm/pblk-rl.c
> index 0d457b162f23..bcab203477ec 100644
> --- a/drivers/lightnvm/pblk-rl.c
> +++ b/drivers/lightnvm/pblk-rl.c
> @@ -200,7 +200,7 @@ void pblk_rl_init(struct pblk_rl *rl, int budget)
>
> /* Consider sectors used for metadata */
> sec_meta = (lm->smeta_sec + lm->emeta_sec[0]) * l_mg->nr_free_lines;
> - blk_meta = DIV_ROUND_UP(sec_meta, geo->sec_per_chk);
> + blk_meta = DIV_ROUND_UP(sec_meta, geo->c.clba);
>
> rl->high = pblk->op_blks - blk_meta - lm->blk_per_line;
> rl->high_pw = get_count_order(rl->high);
> diff --git a/drivers/lightnvm/pblk-sysfs.c b/drivers/lightnvm/pblk-sysfs.c
> index d93e9b1f083a..d3b50741b691 100644
> --- a/drivers/lightnvm/pblk-sysfs.c
> +++ b/drivers/lightnvm/pblk-sysfs.c
> @@ -113,26 +113,31 @@ static ssize_t pblk_sysfs_ppaf(struct pblk *pblk, char *page)
> {
> struct nvm_tgt_dev *dev = pblk->dev;
> struct nvm_geo *geo = &dev->geo;
> + struct nvm_addr_format_12 *ppaf;
> + struct nvm_addr_format_12 *geo_ppaf;
> ssize_t sz = 0;
>
> - sz = snprintf(page, PAGE_SIZE - sz,
> - "g:(b:%d)blk:%d/%d,pg:%d/%d,lun:%d/%d,ch:%d/%d,pl:%d/%d,sec:%d/%d\n",
> - pblk->ppaf_bitsize,
> - pblk->ppaf.blk_offset, geo->ppaf.blk_len,
> - pblk->ppaf.pg_offset, geo->ppaf.pg_len,
> - pblk->ppaf.lun_offset, geo->ppaf.lun_len,
> - pblk->ppaf.ch_offset, geo->ppaf.ch_len,
> - pblk->ppaf.pln_offset, geo->ppaf.pln_len,
> - pblk->ppaf.sec_offset, geo->ppaf.sect_len);
> + ppaf = (struct nvm_addr_format_12 *)&pblk->addrf;
> + geo_ppaf = (struct nvm_addr_format_12 *)&geo->c.addrf;
> +
> + sz = snprintf(page, PAGE_SIZE,
> + "pblk:(s:%d)ch:%d/%d,lun:%d/%d,blk:%d/%d,pg:%d/%d,pl:%d/%d,sec:%d/%d\n",
> + pblk->addrf_len,
> + ppaf->ch_offset, ppaf->ch_len,
> + ppaf->lun_offset, ppaf->lun_len,
> + ppaf->blk_offset, ppaf->blk_len,
> + ppaf->pg_offset, ppaf->pg_len,
> + ppaf->pln_offset, ppaf->pln_len,
> + ppaf->sec_offset, ppaf->sec_len);
>
> sz += snprintf(page + sz, PAGE_SIZE - sz,
> - "d:blk:%d/%d,pg:%d/%d,lun:%d/%d,ch:%d/%d,pl:%d/%d,sec:%d/%d\n",
> - geo->ppaf.blk_offset, geo->ppaf.blk_len,
> - geo->ppaf.pg_offset, geo->ppaf.pg_len,
> - geo->ppaf.lun_offset, geo->ppaf.lun_len,
> - geo->ppaf.ch_offset, geo->ppaf.ch_len,
> - geo->ppaf.pln_offset, geo->ppaf.pln_len,
> - geo->ppaf.sect_offset, geo->ppaf.sect_len);
> + "device:ch:%d/%d,lun:%d/%d,blk:%d/%d,pg:%d/%d,pl:%d/%d,sec:%d/%d\n",
> + geo_ppaf->ch_offset, geo_ppaf->ch_len,
> + geo_ppaf->lun_offset, geo_ppaf->lun_len,
> + geo_ppaf->blk_offset, geo_ppaf->blk_len,
> + geo_ppaf->pg_offset, geo_ppaf->pg_len,
> + geo_ppaf->pln_offset, geo_ppaf->pln_len,
> + geo_ppaf->sec_offset, geo_ppaf->sec_len);
>
> return sz;
> }
> @@ -288,7 +293,7 @@ static ssize_t pblk_sysfs_lines_info(struct pblk *pblk, char *page)
> "blk_line:%d, sec_line:%d, sec_blk:%d\n",
> lm->blk_per_line,
> lm->sec_per_line,
> - geo->sec_per_chk);
> + geo->c.clba);
>
> return sz;
> }
> diff --git a/drivers/lightnvm/pblk-write.c b/drivers/lightnvm/pblk-write.c
> index aae86ed60b98..c49b27539d5a 100644
> --- a/drivers/lightnvm/pblk-write.c
> +++ b/drivers/lightnvm/pblk-write.c
> @@ -333,7 +333,7 @@ int pblk_submit_meta_io(struct pblk *pblk, struct pblk_line *meta_line)
> m_ctx = nvm_rq_to_pdu(rqd);
> m_ctx->private = meta_line;
>
> - rq_len = rq_ppas * geo->sec_size;
> + rq_len = rq_ppas * geo->c.csecs;
> data = ((void *)emeta->buf) + emeta->mem;
>
> bio = pblk_bio_map_addr(pblk, data, rq_ppas, rq_len,
> diff --git a/drivers/lightnvm/pblk.h b/drivers/lightnvm/pblk.h
> index 282dfc8780e8..46b29a492f74 100644
> --- a/drivers/lightnvm/pblk.h
> +++ b/drivers/lightnvm/pblk.h
> @@ -551,21 +551,6 @@ struct pblk_line_meta {
> unsigned int meta_distance; /* Distance between data and metadata */
> };
>
> -struct pblk_addr_format {
> - u64 ch_mask;
> - u64 lun_mask;
> - u64 pln_mask;
> - u64 blk_mask;
> - u64 pg_mask;
> - u64 sec_mask;
> - u8 ch_offset;
> - u8 lun_offset;
> - u8 pln_offset;
> - u8 blk_offset;
> - u8 pg_offset;
> - u8 sec_offset;
> -};
> -
> enum {
> PBLK_STATE_RUNNING = 0,
> PBLK_STATE_STOPPING = 1,
> @@ -585,8 +570,8 @@ struct pblk {
> struct pblk_line_mgmt l_mg; /* Line management */
> struct pblk_line_meta lm; /* Line metadata */
>
> - int ppaf_bitsize;
> - struct pblk_addr_format ppaf;
> + struct nvm_addr_format addrf;
> + int addrf_len;
>
> struct pblk_rb rwb;
>
> @@ -941,14 +926,12 @@ static inline int pblk_line_vsc(struct pblk_line *line)
> return le32_to_cpu(*line->vsc);
> }
>
> -#define NVM_MEM_PAGE_WRITE (8)
> -
> static inline int pblk_pad_distance(struct pblk *pblk)
> {
> struct nvm_tgt_dev *dev = pblk->dev;
> struct nvm_geo *geo = &dev->geo;
>
> - return NVM_MEM_PAGE_WRITE * geo->all_luns * geo->sec_per_pl;
> + return geo->c.mw_cunits * geo->all_luns * geo->c.ws_opt;
> }
>
> static inline int pblk_ppa_to_line(struct ppa_addr p)
> @@ -958,21 +941,23 @@ static inline int pblk_ppa_to_line(struct ppa_addr p)
>
> static inline int pblk_ppa_to_pos(struct nvm_geo *geo, struct ppa_addr p)
> {
> - return p.g.lun * geo->nr_chnls + p.g.ch;
> + return p.g.lun * geo->num_ch + p.g.ch;
> }
>
> static inline struct ppa_addr addr_to_gen_ppa(struct pblk *pblk, u64 paddr,
> u64 line_id)
> {
> + struct nvm_addr_format_12 *ppaf =
> + (struct nvm_addr_format_12 *)&pblk->addrf;
> struct ppa_addr ppa;
>
> ppa.ppa = 0;
> ppa.g.blk = line_id;
> - ppa.g.pg = (paddr & pblk->ppaf.pg_mask) >> pblk->ppaf.pg_offset;
> - ppa.g.lun = (paddr & pblk->ppaf.lun_mask) >> pblk->ppaf.lun_offset;
> - ppa.g.ch = (paddr & pblk->ppaf.ch_mask) >> pblk->ppaf.ch_offset;
> - ppa.g.pl = (paddr & pblk->ppaf.pln_mask) >> pblk->ppaf.pln_offset;
> - ppa.g.sec = (paddr & pblk->ppaf.sec_mask) >> pblk->ppaf.sec_offset;
> + ppa.g.pg = (paddr & ppaf->pg_mask) >> ppaf->pg_offset;
> + ppa.g.lun = (paddr & ppaf->lun_mask) >> ppaf->lun_offset;
> + ppa.g.ch = (paddr & ppaf->ch_mask) >> ppaf->ch_offset;
> + ppa.g.pl = (paddr & ppaf->pln_mask) >> ppaf->pln_offset;
> + ppa.g.sec = (paddr & ppaf->sec_mask) >> ppaf->sec_offset;
>
> return ppa;
> }
> @@ -980,13 +965,15 @@ static inline struct ppa_addr addr_to_gen_ppa(struct pblk *pblk, u64 paddr,
> static inline u64 pblk_dev_ppa_to_line_addr(struct pblk *pblk,
> struct ppa_addr p)
> {
> + struct nvm_addr_format_12 *ppaf =
> + (struct nvm_addr_format_12 *)&pblk->addrf;
> u64 paddr;
>
> - paddr = (u64)p.g.pg << pblk->ppaf.pg_offset;
> - paddr |= (u64)p.g.lun << pblk->ppaf.lun_offset;
> - paddr |= (u64)p.g.ch << pblk->ppaf.ch_offset;
> - paddr |= (u64)p.g.pl << pblk->ppaf.pln_offset;
> - paddr |= (u64)p.g.sec << pblk->ppaf.sec_offset;
> + paddr = (u64)p.g.ch << ppaf->ch_offset;
> + paddr |= (u64)p.g.lun << ppaf->lun_offset;
> + paddr |= (u64)p.g.pg << ppaf->pg_offset;
> + paddr |= (u64)p.g.pl << ppaf->pln_offset;
> + paddr |= (u64)p.g.sec << ppaf->sec_offset;
>
> return paddr;
> }
> @@ -1003,18 +990,15 @@ static inline struct ppa_addr pblk_ppa32_to_ppa64(struct pblk *pblk, u32 ppa32)
> ppa64.c.line = ppa32 & ((~0U) >> 1);
> ppa64.c.is_cached = 1;
> } else {
> - ppa64.g.blk = (ppa32 & pblk->ppaf.blk_mask) >>
> - pblk->ppaf.blk_offset;
> - ppa64.g.pg = (ppa32 & pblk->ppaf.pg_mask) >>
> - pblk->ppaf.pg_offset;
> - ppa64.g.lun = (ppa32 & pblk->ppaf.lun_mask) >>
> - pblk->ppaf.lun_offset;
> - ppa64.g.ch = (ppa32 & pblk->ppaf.ch_mask) >>
> - pblk->ppaf.ch_offset;
> - ppa64.g.pl = (ppa32 & pblk->ppaf.pln_mask) >>
> - pblk->ppaf.pln_offset;
> - ppa64.g.sec = (ppa32 & pblk->ppaf.sec_mask) >>
> - pblk->ppaf.sec_offset;
> + struct nvm_addr_format_12 *ppaf =
> + (struct nvm_addr_format_12 *)&pblk->addrf;
> +
> + ppa64.g.ch = (ppa32 & ppaf->ch_mask) >> ppaf->ch_offset;
> + ppa64.g.lun = (ppa32 & ppaf->lun_mask) >> ppaf->lun_offset;
> + ppa64.g.blk = (ppa32 & ppaf->blk_mask) >> ppaf->blk_offset;
> + ppa64.g.pg = (ppa32 & ppaf->pg_mask) >> ppaf->pg_offset;
> + ppa64.g.pl = (ppa32 & ppaf->pln_mask) >> ppaf->pln_offset;
> + ppa64.g.sec = (ppa32 & ppaf->sec_mask) >> ppaf->sec_offset;
> }
>
> return ppa64;
> @@ -1030,12 +1014,15 @@ static inline u32 pblk_ppa64_to_ppa32(struct pblk *pblk, struct ppa_addr ppa64)
> ppa32 |= ppa64.c.line;
> ppa32 |= 1U << 31;
> } else {
> - ppa32 |= ppa64.g.blk << pblk->ppaf.blk_offset;
> - ppa32 |= ppa64.g.pg << pblk->ppaf.pg_offset;
> - ppa32 |= ppa64.g.lun << pblk->ppaf.lun_offset;
> - ppa32 |= ppa64.g.ch << pblk->ppaf.ch_offset;
> - ppa32 |= ppa64.g.pl << pblk->ppaf.pln_offset;
> - ppa32 |= ppa64.g.sec << pblk->ppaf.sec_offset;
> + struct nvm_addr_format_12 *ppaf =
> + (struct nvm_addr_format_12 *)&pblk->addrf;
> +
> + ppa32 |= ppa64.g.ch << ppaf->ch_offset;
> + ppa32 |= ppa64.g.lun << ppaf->lun_offset;
> + ppa32 |= ppa64.g.blk << ppaf->blk_offset;
> + ppa32 |= ppa64.g.pg << ppaf->pg_offset;
> + ppa32 |= ppa64.g.pl << ppaf->pln_offset;
> + ppa32 |= ppa64.g.sec << ppaf->sec_offset;
> }
>
> return ppa32;
> @@ -1046,7 +1033,7 @@ static inline struct ppa_addr pblk_trans_map_get(struct pblk *pblk,
> {
> struct ppa_addr ppa;
>
> - if (pblk->ppaf_bitsize < 32) {
> + if (pblk->addrf_len < 32) {
> u32 *map = (u32 *)pblk->trans_map;
>
> ppa = pblk_ppa32_to_ppa64(pblk, map[lba]);
> @@ -1062,7 +1049,7 @@ static inline struct ppa_addr pblk_trans_map_get(struct pblk *pblk,
> static inline void pblk_trans_map_set(struct pblk *pblk, sector_t lba,
> struct ppa_addr ppa)
> {
> - if (pblk->ppaf_bitsize < 32) {
> + if (pblk->addrf_len < 32) {
> u32 *map = (u32 *)pblk->trans_map;
>
> map[lba] = pblk_ppa64_to_ppa32(pblk, ppa);
> @@ -1153,7 +1140,7 @@ static inline int pblk_set_progr_mode(struct pblk *pblk, int type)
> struct nvm_geo *geo = &dev->geo;
> int flags;
>
> - flags = geo->plane_mode >> 1;
> + flags = geo->c.pln_mode >> 1;
>
> if (type == PBLK_WRITE)
> flags |= NVM_IO_SCRAMBLE_ENABLE;
> @@ -1174,7 +1161,7 @@ static inline int pblk_set_read_mode(struct pblk *pblk, int type)
>
> flags = NVM_IO_SUSPEND | NVM_IO_SCRAMBLE_ENABLE;
> if (type == PBLK_READ_SEQUENTIAL)
> - flags |= geo->plane_mode >> 1;
> + flags |= geo->c.pln_mode >> 1;
>
> return flags;
> }
> @@ -1227,12 +1214,12 @@ static inline int pblk_boundary_ppa_checks(struct nvm_tgt_dev *tgt_dev,
> ppa = &ppas[i];
>
> if (!ppa->c.is_cached &&
> - ppa->g.ch < geo->nr_chnls &&
> - ppa->g.lun < geo->nr_luns &&
> - ppa->g.pl < geo->nr_planes &&
> - ppa->g.blk < geo->nr_chks &&
> - ppa->g.pg < geo->ws_per_chk &&
> - ppa->g.sec < geo->sec_per_pg)
> + ppa->g.ch < geo->num_ch &&
> + ppa->g.lun < geo->num_lun &&
> + ppa->g.pl < geo->c.num_pln &&
> + ppa->g.blk < geo->c.num_chk &&
> + ppa->g.pg < geo->c.num_pg &&
> + ppa->g.sec < geo->c.ws_min)
> continue;
>
> print_ppa(ppa, "boundary", i);
> diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
> index a19e85f0cbae..97739e668602 100644
> --- a/drivers/nvme/host/lightnvm.c
> +++ b/drivers/nvme/host/lightnvm.c
> @@ -152,8 +152,8 @@ struct nvme_nvm_id12_addrf {
> __u8 blk_len;
> __u8 pg_offset;
> __u8 pg_len;
> - __u8 sect_offset;
> - __u8 sect_len;
> + __u8 sec_offset;
> + __u8 sec_len;
> __u8 res[4];
> } __packed;
>
> @@ -170,6 +170,12 @@ struct nvme_nvm_id12 {
> __u8 resv2[2880];
> } __packed;
>
> +/* Generic identification structure */
> +struct nvme_nvm_id {
> + __u8 ver_id;
> + __u8 resv[4095];
> +} __packed;
> +
> struct nvme_nvm_bb_tbl {
> __u8 tblid[4];
> __le16 verid;
> @@ -254,121 +260,195 @@ static inline void _nvme_nvm_check_size(void)
> BUILD_BUG_ON(sizeof(struct nvme_nvm_id20) != NVME_IDENTIFY_DATA_SIZE);
> }
>
> -static int init_grp(struct nvm_id *nvm_id, struct nvme_nvm_id12 *id12)
> +static void nvme_nvm_set_addr_12(struct nvm_addr_format_12 *dst,
> + struct nvme_nvm_id12_addrf *src)
> {
> + dst->ch_len = src->ch_len;
> + dst->lun_len = src->lun_len;
> + dst->blk_len = src->blk_len;
> + dst->pg_len = src->pg_len;
> + dst->pln_len = src->pln_len;
> + dst->sec_len = src->sec_len;
> +
> + dst->ch_offset = src->ch_offset;
> + dst->lun_offset = src->lun_offset;
> + dst->blk_offset = src->blk_offset;
> + dst->pg_offset = src->pg_offset;
> + dst->pln_offset = src->pln_offset;
> + dst->sec_offset = src->sec_offset;
> +
> + dst->ch_mask = ((1ULL << dst->ch_len) - 1) << dst->ch_offset;
> + dst->lun_mask = ((1ULL << dst->lun_len) - 1) << dst->lun_offset;
> + dst->blk_mask = ((1ULL << dst->blk_len) - 1) << dst->blk_offset;
> + dst->pg_mask = ((1ULL << dst->pg_len) - 1) << dst->pg_offset;
> + dst->pln_mask = ((1ULL << dst->pln_len) - 1) << dst->pln_offset;
> + dst->sec_mask = ((1ULL << dst->sec_len) - 1) << dst->sec_offset;
> +}
> +
> +static int nvme_nvm_setup_12(struct nvme_nvm_id *gen_id,
> + struct nvm_dev_geo *dev_geo)
> +{
> + struct nvme_nvm_id12 *id = (struct nvme_nvm_id12 *)gen_id;
> struct nvme_nvm_id12_grp *src;
> int sec_per_pg, sec_per_pl, pg_per_blk;
>
> - if (id12->cgrps != 1)
> + if (id->cgrps != 1)
> return -EINVAL;
>
> - src = &id12->grp;
> + src = &id->grp;
>
> - nvm_id->mtype = src->mtype;
> - nvm_id->fmtype = src->fmtype;
> + if (src->mtype != 0) {
> + pr_err("nvm: memory type not supported\n");
> + return -EINVAL;
> + }
> +
> + /* 1.2 spec. only reports a single version id - unfold */
> + dev_geo->major_ver_id = 1;
> + dev_geo->minor_ver_id = 2;
> +
> + /* Set compacted version for upper layers */
> + dev_geo->c.version = NVM_OCSSD_SPEC_12;
>
> - nvm_id->num_ch = src->num_ch;
> - nvm_id->num_lun = src->num_lun;
> + dev_geo->num_ch = src->num_ch;
> + dev_geo->num_lun = src->num_lun;
> + dev_geo->all_luns = dev_geo->num_ch * dev_geo->num_lun;
>
> - nvm_id->num_chk = le16_to_cpu(src->num_chk);
> - nvm_id->csecs = le16_to_cpu(src->csecs);
> - nvm_id->sos = le16_to_cpu(src->sos);
> + dev_geo->c.num_chk = le16_to_cpu(src->num_chk);
> + dev_geo->c.csecs = le16_to_cpu(src->csecs);
> + dev_geo->c.sos = le16_to_cpu(src->sos);
>
> pg_per_blk = le16_to_cpu(src->num_pg);
> - sec_per_pg = le16_to_cpu(src->fpg_sz) / nvm_id->csecs;
> + sec_per_pg = le16_to_cpu(src->fpg_sz) / dev_geo->c.csecs;
> sec_per_pl = sec_per_pg * src->num_pln;
> - nvm_id->clba = sec_per_pl * pg_per_blk;
> - nvm_id->ws_per_chk = pg_per_blk;
> -
> - nvm_id->mpos = le32_to_cpu(src->mpos);
> - nvm_id->cpar = le16_to_cpu(src->cpar);
> - nvm_id->mccap = le32_to_cpu(src->mccap);
> -
> - nvm_id->ws_opt = nvm_id->ws_min = sec_per_pg;
> - nvm_id->ws_seq = NVM_IO_SNGL_ACCESS;
> -
> - if (nvm_id->mpos & 0x020202) {
> - nvm_id->ws_seq = NVM_IO_DUAL_ACCESS;
> - nvm_id->ws_opt <<= 1;
> - } else if (nvm_id->mpos & 0x040404) {
> - nvm_id->ws_seq = NVM_IO_QUAD_ACCESS;
> - nvm_id->ws_opt <<= 2;
> - }
> + dev_geo->c.clba = sec_per_pl * pg_per_blk;
> +
> + dev_geo->c.ws_min = sec_per_pg;
> + dev_geo->c.ws_opt = sec_per_pg;
> + dev_geo->c.mw_cunits = 8; /* default to MLC safe values */
> + dev_geo->c.maxoc = dev_geo->all_luns; /* default to 1 chunk per LUN */
> + dev_geo->c.maxocpu = 1; /* default to 1 chunk per LUN */
>
> - nvm_id->trdt = le32_to_cpu(src->trdt);
> - nvm_id->trdm = le32_to_cpu(src->trdm);
> - nvm_id->tprt = le32_to_cpu(src->tprt);
> - nvm_id->tprm = le32_to_cpu(src->tprm);
> - nvm_id->tbet = le32_to_cpu(src->tbet);
> - nvm_id->tbem = le32_to_cpu(src->tbem);
> + dev_geo->c.mccap = le32_to_cpu(src->mccap);
> +
> + dev_geo->c.trdt = le32_to_cpu(src->trdt);
> + dev_geo->c.trdm = le32_to_cpu(src->trdm);
> + dev_geo->c.tprt = le32_to_cpu(src->tprt);
> + dev_geo->c.tprm = le32_to_cpu(src->tprm);
> + dev_geo->c.tbet = le32_to_cpu(src->tbet);
> + dev_geo->c.tbem = le32_to_cpu(src->tbem);
>
> /* 1.2 compatibility */
> - nvm_id->num_pln = src->num_pln;
> - nvm_id->num_pg = le16_to_cpu(src->num_pg);
> - nvm_id->fpg_sz = le16_to_cpu(src->fpg_sz);
> + dev_geo->c.vmnt = id->vmnt;
> + dev_geo->c.cap = le32_to_cpu(id->cap);
> + dev_geo->c.dom = le32_to_cpu(id->dom);
> +
> + dev_geo->c.mtype = src->mtype;
> + dev_geo->c.fmtype = src->fmtype;
> +
> + dev_geo->c.cpar = le16_to_cpu(src->cpar);
> + dev_geo->c.mpos = le32_to_cpu(src->mpos);
> +
> + dev_geo->c.pln_mode = NVM_PLANE_SINGLE;
> +
> + if (dev_geo->c.mpos & 0x020202) {
> + dev_geo->c.pln_mode = NVM_PLANE_DOUBLE;
> + dev_geo->c.ws_opt <<= 1;
> + } else if (dev_geo->c.mpos & 0x040404) {
> + dev_geo->c.pln_mode = NVM_PLANE_QUAD;
> + dev_geo->c.ws_opt <<= 2;
> + }
> +
> + dev_geo->c.num_pln = src->num_pln;
> + dev_geo->c.num_pg = le16_to_cpu(src->num_pg);
> + dev_geo->c.fpg_sz = le16_to_cpu(src->fpg_sz);
> +
> + nvme_nvm_set_addr_12((struct nvm_addr_format_12 *)&dev_geo->c.addrf,
> + &id->ppaf);
>
> return 0;
> }
>
> -static int nvme_nvm_setup_12(struct nvm_dev *nvmdev, struct nvm_id *nvm_id,
> - struct nvme_nvm_id12 *id)
> +static void nvme_nvm_set_addr_20(struct nvm_addr_format *dst,
> + struct nvme_nvm_id20_addrf *src)
> {
> - nvm_id->ver_id = id->ver_id;
> - nvm_id->vmnt = id->vmnt;
> - nvm_id->cap = le32_to_cpu(id->cap);
> - nvm_id->dom = le32_to_cpu(id->dom);
> - memcpy(&nvm_id->ppaf, &id->ppaf,
> - sizeof(struct nvm_addr_format));
> -
> - return init_grp(nvm_id, id);
> + dst->ch_len = src->grp_len;
> + dst->lun_len = src->pu_len;
> + dst->chk_len = src->chk_len;
> + dst->sec_len = src->lba_len;
> +
> + dst->sec_offset = 0;
> + dst->chk_offset = dst->sec_len;
> + dst->lun_offset = dst->chk_offset + dst->chk_len;
> + dst->ch_offset = dst->lun_offset + dst->lun_len;
> +
> + dst->ch_mask = ((1ULL << dst->ch_len) - 1) << dst->ch_offset;
> + dst->lun_mask = ((1ULL << dst->lun_len) - 1) << dst->lun_offset;
> + dst->chk_mask = ((1ULL << dst->chk_len) - 1) << dst->chk_offset;
> + dst->sec_mask = ((1ULL << dst->sec_len) - 1) << dst->sec_offset;
> }
>
> -static int nvme_nvm_setup_20(struct nvm_dev *nvmdev, struct nvm_id *nvm_id,
> - struct nvme_nvm_id20 *id)
> +static int nvme_nvm_setup_20(struct nvme_nvm_id *gen_id,
> + struct nvm_dev_geo *dev_geo)
> {
> - nvm_id->ver_id = id->mjr;
> + struct nvme_nvm_id20 *id = (struct nvme_nvm_id20 *)gen_id;
>
> - nvm_id->num_ch = le16_to_cpu(id->num_grp);
> - nvm_id->num_lun = le16_to_cpu(id->num_pu);
> - nvm_id->num_chk = le32_to_cpu(id->num_chk);
> - nvm_id->clba = le32_to_cpu(id->clba);
> + dev_geo->major_ver_id = id->mjr;
> + dev_geo->minor_ver_id = id->mnr;
>
> - nvm_id->ws_min = le32_to_cpu(id->ws_min);
> - nvm_id->ws_opt = le32_to_cpu(id->ws_opt);
> - nvm_id->mw_cunits = le32_to_cpu(id->mw_cunits);
> + /* Set compacted version for upper layers */
> + dev_geo->c.version = NVM_OCSSD_SPEC_20;
>
> - nvm_id->trdt = le32_to_cpu(id->trdt);
> - nvm_id->trdm = le32_to_cpu(id->trdm);
> - nvm_id->tprt = le32_to_cpu(id->twrt);
> - nvm_id->tprm = le32_to_cpu(id->twrm);
> - nvm_id->tbet = le32_to_cpu(id->tcrst);
> - nvm_id->tbem = le32_to_cpu(id->tcrsm);
> + if (!(dev_geo->major_ver_id == 2 && dev_geo->minor_ver_id == 0)) {
> + pr_err("nvm: OCSSD version not supported (v%d.%d)\n",
> + dev_geo->major_ver_id, dev_geo->minor_ver_id);
> + return -EINVAL;
> + }
>
> - /* calculated values */
> - nvm_id->ws_per_chk = nvm_id->clba / nvm_id->ws_min;
> + dev_geo->num_ch = le16_to_cpu(id->num_grp);
> + dev_geo->num_lun = le16_to_cpu(id->num_pu);
> + dev_geo->all_luns = dev_geo->num_ch * dev_geo->num_lun;
>
> - /* 1.2 compatibility */
> - nvm_id->ws_seq = NVM_IO_SNGL_ACCESS;
> + dev_geo->c.num_chk = le32_to_cpu(id->num_chk);
> + dev_geo->c.clba = le32_to_cpu(id->clba);
> + dev_geo->c.csecs = -1; /* Set by nvme identify */
> + dev_geo->c.sos = -1; /* Set by nvme identify */
> +
> + dev_geo->c.ws_min = le32_to_cpu(id->ws_min);
> + dev_geo->c.ws_opt = le32_to_cpu(id->ws_opt);
> + dev_geo->c.mw_cunits = le32_to_cpu(id->mw_cunits);
> + dev_geo->c.maxoc = le32_to_cpu(id->maxoc);
> + dev_geo->c.maxocpu = le32_to_cpu(id->maxocpu);
> +
> + dev_geo->c.mccap = le32_to_cpu(id->mccap);
> +
> + dev_geo->c.trdt = le32_to_cpu(id->trdt);
> + dev_geo->c.trdm = le32_to_cpu(id->trdm);
> + dev_geo->c.tprt = le32_to_cpu(id->twrt);
> + dev_geo->c.tprm = le32_to_cpu(id->twrm);
> + dev_geo->c.tbet = le32_to_cpu(id->tcrst);
> + dev_geo->c.tbem = le32_to_cpu(id->tcrsm);
> +
> + nvme_nvm_set_addr_20(&dev_geo->c.addrf, &id->lbaf);
>
> return 0;
> }
>
> -static int nvme_nvm_identity(struct nvm_dev *nvmdev, struct nvm_id *nvm_id)
> +static int nvme_nvm_identity(struct nvm_dev *nvmdev)
> {
> struct nvme_ns *ns = nvmdev->q->queuedata;
> - struct nvme_nvm_id12 *id;
> + struct nvme_nvm_id *nvme_nvm_id;
> struct nvme_nvm_command c = {};
> int ret;
>
> c.identity.opcode = nvme_nvm_admin_identity;
> c.identity.nsid = cpu_to_le32(ns->head->ns_id);
>
> - id = kmalloc(sizeof(struct nvme_nvm_id12), GFP_KERNEL);
> - if (!id)
> + nvme_nvm_id = kmalloc(sizeof(struct nvme_nvm_id), GFP_KERNEL);
> + if (!nvme_nvm_id)
> return -ENOMEM;
>
> ret = nvme_submit_sync_cmd(ns->ctrl->admin_q, (struct nvme_command *)&c,
> - id, sizeof(struct nvme_nvm_id12));
> + nvme_nvm_id, sizeof(struct nvme_nvm_id));
> if (ret) {
> ret = -EIO;
> goto out;
> @@ -378,22 +458,21 @@ static int nvme_nvm_identity(struct nvm_dev *nvmdev, struct nvm_id *nvm_id)
> * The 1.2 and 2.0 specifications share the first byte in their geometry
> * command to make it possible to know what version a device implements.
> */
> - switch (id->ver_id) {
> + switch (nvme_nvm_id->ver_id) {
> case 1:
> - ret = nvme_nvm_setup_12(nvmdev, nvm_id, id);
> + ret = nvme_nvm_setup_12(nvme_nvm_id, &nvmdev->dev_geo);
> break;
> case 2:
> - ret = nvme_nvm_setup_20(nvmdev, nvm_id,
> - (struct nvme_nvm_id20 *)id);
> + ret = nvme_nvm_setup_20(nvme_nvm_id, &nvmdev->dev_geo);
> break;
> default:
> - dev_err(ns->ctrl->device,
> - "OCSSD revision not supported (%d)\n",
> - nvm_id->ver_id);
> + dev_err(ns->ctrl->device, "OCSSD revision not supported (%d)\n",
> + nvme_nvm_id->ver_id);
> ret = -EINVAL;
> }
> +
> out:
> - kfree(id);
> + kfree(nvme_nvm_id);
> return ret;
> }
>
> @@ -401,12 +480,12 @@ static int nvme_nvm_get_bb_tbl(struct nvm_dev *nvmdev, struct ppa_addr ppa,
> u8 *blks)
> {
> struct request_queue *q = nvmdev->q;
> - struct nvm_geo *geo = &nvmdev->geo;
> + struct nvm_dev_geo *dev_geo = &nvmdev->dev_geo;
> struct nvme_ns *ns = q->queuedata;
> struct nvme_ctrl *ctrl = ns->ctrl;
> struct nvme_nvm_command c = {};
> struct nvme_nvm_bb_tbl *bb_tbl;
> - int nr_blks = geo->nr_chks * geo->plane_mode;
> + int nr_blks = dev_geo->c.num_chk * dev_geo->c.num_pln;
> int tblsz = sizeof(struct nvme_nvm_bb_tbl) + nr_blks;
> int ret = 0;
>
> @@ -447,7 +526,7 @@ static int nvme_nvm_get_bb_tbl(struct nvm_dev *nvmdev, struct ppa_addr ppa,
> goto out;
> }
>
> - memcpy(blks, bb_tbl->blk, geo->nr_chks * geo->plane_mode);
> + memcpy(blks, bb_tbl->blk, dev_geo->c.num_chk * dev_geo->c.num_pln);
> out:
> kfree(bb_tbl);
> return ret;
> @@ -817,9 +896,10 @@ int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd, unsigned long arg)
> void nvme_nvm_update_nvm_info(struct nvme_ns *ns)
> {
> struct nvm_dev *ndev = ns->ndev;
> + struct nvm_dev_geo *dev_geo = &ndev->dev_geo;
>
> - ndev->identity.csecs = ndev->geo.sec_size = 1 << ns->lba_shift;
> - ndev->identity.sos = ndev->geo.oob_size = ns->ms;
> + dev_geo->c.csecs = 1 << ns->lba_shift;
> + dev_geo->c.sos = ns->ms;
> }
>
> int nvme_nvm_register(struct nvme_ns *ns, char *disk_name, int node)
> @@ -852,23 +932,24 @@ static ssize_t nvm_dev_attr_show(struct device *dev,
> {
> struct nvme_ns *ns = nvme_get_ns_from_dev(dev);
> struct nvm_dev *ndev = ns->ndev;
> - struct nvm_id *id;
> + struct nvm_dev_geo *dev_geo = &ndev->dev_geo;
> struct attribute *attr;
>
> if (!ndev)
> return 0;
>
> - id = &ndev->identity;
> attr = &dattr->attr;
>
> if (strcmp(attr->name, "version") == 0) {
> - return scnprintf(page, PAGE_SIZE, "%u\n", id->ver_id);
> + return scnprintf(page, PAGE_SIZE, "%u.%u\n",
> + dev_geo->major_ver_id,
> + dev_geo->minor_ver_id);
> } else if (strcmp(attr->name, "capabilities") == 0) {
> - return scnprintf(page, PAGE_SIZE, "%u\n", id->cap);
> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.cap);
> } else if (strcmp(attr->name, "read_typ") == 0) {
> - return scnprintf(page, PAGE_SIZE, "%u\n", id->trdt);
> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.trdt);
> } else if (strcmp(attr->name, "read_max") == 0) {
> - return scnprintf(page, PAGE_SIZE, "%u\n", id->trdm);
> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.trdm);
> } else {
> return scnprintf(page,
> PAGE_SIZE,
> @@ -877,76 +958,80 @@ static ssize_t nvm_dev_attr_show(struct device *dev,
> }
> }
>
> +static ssize_t nvm_dev_attr_show_ppaf(struct nvm_addr_format_12 *ppaf,
> + char *page)
> +{
> + return scnprintf(page, PAGE_SIZE,
> + "0x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x\n",
> + ppaf->ch_offset, ppaf->ch_len,
> + ppaf->lun_offset, ppaf->lun_len,
> + ppaf->pln_offset, ppaf->pln_len,
> + ppaf->blk_offset, ppaf->blk_len,
> + ppaf->pg_offset, ppaf->pg_len,
> + ppaf->sec_offset, ppaf->sec_len);
> +}
> +
> static ssize_t nvm_dev_attr_show_12(struct device *dev,
> struct device_attribute *dattr, char *page)
> {
> struct nvme_ns *ns = nvme_get_ns_from_dev(dev);
> struct nvm_dev *ndev = ns->ndev;
> - struct nvm_id *id;
> + struct nvm_dev_geo *dev_geo = &ndev->dev_geo;
> struct attribute *attr;
>
> if (!ndev)
> return 0;
>
> - id = &ndev->identity;
> attr = &dattr->attr;
>
> if (strcmp(attr->name, "vendor_opcode") == 0) {
> - return scnprintf(page, PAGE_SIZE, "%u\n", id->vmnt);
> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.vmnt);
> } else if (strcmp(attr->name, "device_mode") == 0) {
> - return scnprintf(page, PAGE_SIZE, "%u\n", id->dom);
> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.dom);
> /* kept for compatibility */
> } else if (strcmp(attr->name, "media_manager") == 0) {
> return scnprintf(page, PAGE_SIZE, "%s\n", "gennvm");
> } else if (strcmp(attr->name, "ppa_format") == 0) {
> - return scnprintf(page, PAGE_SIZE,
> - "0x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x\n",
> - id->ppaf.ch_offset, id->ppaf.ch_len,
> - id->ppaf.lun_offset, id->ppaf.lun_len,
> - id->ppaf.pln_offset, id->ppaf.pln_len,
> - id->ppaf.blk_offset, id->ppaf.blk_len,
> - id->ppaf.pg_offset, id->ppaf.pg_len,
> - id->ppaf.sect_offset, id->ppaf.sect_len);
> + return nvm_dev_attr_show_ppaf((void *)&dev_geo->c.addrf, page);
> } else if (strcmp(attr->name, "media_type") == 0) { /* u8 */
> - return scnprintf(page, PAGE_SIZE, "%u\n", id->mtype);
> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.mtype);
> } else if (strcmp(attr->name, "flash_media_type") == 0) {
> - return scnprintf(page, PAGE_SIZE, "%u\n", id->fmtype);
> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.fmtype);
> } else if (strcmp(attr->name, "num_channels") == 0) {
> - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_ch);
> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->num_ch);
> } else if (strcmp(attr->name, "num_luns") == 0) {
> - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_lun);
> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->num_lun);
> } else if (strcmp(attr->name, "num_planes") == 0) {
> - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_pln);
> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.num_pln);
> } else if (strcmp(attr->name, "num_blocks") == 0) { /* u16 */
> - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_chk);
> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.num_chk);
> } else if (strcmp(attr->name, "num_pages") == 0) {
> - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_pg);
> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.num_pg);
> } else if (strcmp(attr->name, "page_size") == 0) {
> - return scnprintf(page, PAGE_SIZE, "%u\n", id->fpg_sz);
> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.fpg_sz);
> } else if (strcmp(attr->name, "hw_sector_size") == 0) {
> - return scnprintf(page, PAGE_SIZE, "%u\n", id->csecs);
> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.csecs);
> } else if (strcmp(attr->name, "oob_sector_size") == 0) {/* u32 */
> - return scnprintf(page, PAGE_SIZE, "%u\n", id->sos);
> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.sos);
> } else if (strcmp(attr->name, "prog_typ") == 0) {
> - return scnprintf(page, PAGE_SIZE, "%u\n", id->tprt);
> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.tprt);
> } else if (strcmp(attr->name, "prog_max") == 0) {
> - return scnprintf(page, PAGE_SIZE, "%u\n", id->tprm);
> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.tprm);
> } else if (strcmp(attr->name, "erase_typ") == 0) {
> - return scnprintf(page, PAGE_SIZE, "%u\n", id->tbet);
> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.tbet);
> } else if (strcmp(attr->name, "erase_max") == 0) {
> - return scnprintf(page, PAGE_SIZE, "%u\n", id->tbem);
> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.tbem);
> } else if (strcmp(attr->name, "multiplane_modes") == 0) {
> - return scnprintf(page, PAGE_SIZE, "0x%08x\n", id->mpos);
> + return scnprintf(page, PAGE_SIZE, "0x%08x\n", dev_geo->c.mpos);
> } else if (strcmp(attr->name, "media_capabilities") == 0) {
> - return scnprintf(page, PAGE_SIZE, "0x%08x\n", id->mccap);
> + return scnprintf(page, PAGE_SIZE, "0x%08x\n", dev_geo->c.mccap);
> } else if (strcmp(attr->name, "max_phys_secs") == 0) {
> return scnprintf(page, PAGE_SIZE, "%u\n",
> ndev->ops->max_phys_sect);
> } else {
> - return scnprintf(page,
> - PAGE_SIZE,
> - "Unhandled attr(%s) in `nvm_dev_attr_show_12`\n",
> - attr->name);
> + return scnprintf(page, PAGE_SIZE,
> + "Unhandled attr(%s) in `nvm_dev_attr_show_12`\n",
> + attr->name);
> }
> }
>
> @@ -955,42 +1040,40 @@ static ssize_t nvm_dev_attr_show_20(struct device *dev,
> {
> struct nvme_ns *ns = nvme_get_ns_from_dev(dev);
> struct nvm_dev *ndev = ns->ndev;
> - struct nvm_id *id;
> + struct nvm_dev_geo *dev_geo = &ndev->dev_geo;
> struct attribute *attr;
>
> if (!ndev)
> return 0;
>
> - id = &ndev->identity;
> attr = &dattr->attr;
>
> if (strcmp(attr->name, "groups") == 0) {
> - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_ch);
> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->num_ch);
> } else if (strcmp(attr->name, "punits") == 0) {
> - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_lun);
> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->num_lun);
> } else if (strcmp(attr->name, "chunks") == 0) {
> - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_chk);
> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.num_chk);
> } else if (strcmp(attr->name, "clba") == 0) {
> - return scnprintf(page, PAGE_SIZE, "%u\n", id->clba);
> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.clba);
> } else if (strcmp(attr->name, "ws_min") == 0) {
> - return scnprintf(page, PAGE_SIZE, "%u\n", id->ws_min);
> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.ws_min);
> } else if (strcmp(attr->name, "ws_opt") == 0) {
> - return scnprintf(page, PAGE_SIZE, "%u\n", id->ws_opt);
> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.ws_opt);
> } else if (strcmp(attr->name, "mw_cunits") == 0) {
> - return scnprintf(page, PAGE_SIZE, "%u\n", id->mw_cunits);
> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.mw_cunits);
> } else if (strcmp(attr->name, "write_typ") == 0) {
> - return scnprintf(page, PAGE_SIZE, "%u\n", id->tprt);
> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.tprt);
> } else if (strcmp(attr->name, "write_max") == 0) {
> - return scnprintf(page, PAGE_SIZE, "%u\n", id->tprm);
> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.tprm);
> } else if (strcmp(attr->name, "reset_typ") == 0) {
> - return scnprintf(page, PAGE_SIZE, "%u\n", id->tbet);
> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.tbet);
> } else if (strcmp(attr->name, "reset_max") == 0) {
> - return scnprintf(page, PAGE_SIZE, "%u\n", id->tbem);
> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.tbem);
> } else {
> - return scnprintf(page,
> - PAGE_SIZE,
> - "Unhandled attr(%s) in `nvm_dev_attr_show_20`\n",
> - attr->name);
> + return scnprintf(page, PAGE_SIZE,
> + "Unhandled attr(%s) in `nvm_dev_attr_show_20`\n",
> + attr->name);
> }
> }
>
> @@ -1109,10 +1192,13 @@ static const struct attribute_group nvm_dev_attr_group_20 = {
>
> int nvme_nvm_register_sysfs(struct nvme_ns *ns)
> {
> - if (!ns->ndev)
> + struct nvm_dev *ndev = ns->ndev;
> + struct nvm_dev_geo *dev_geo = &ndev->dev_geo;
> +
> + if (!ndev)
> return -EINVAL;
>
> - switch (ns->ndev->identity.ver_id) {
> + switch (dev_geo->major_ver_id) {
> case 1:
> return sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
> &nvm_dev_attr_group_12);
> @@ -1126,7 +1212,10 @@ int nvme_nvm_register_sysfs(struct nvme_ns *ns)
>
> void nvme_nvm_unregister_sysfs(struct nvme_ns *ns)
> {
> - switch (ns->ndev->identity.ver_id) {
> + struct nvm_dev *ndev = ns->ndev;
> + struct nvm_dev_geo *dev_geo = &ndev->dev_geo;
> +
> + switch (dev_geo->major_ver_id) {
> case 1:
> sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
> &nvm_dev_attr_group_12);
> diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h
> index b717c000b712..6a567bd19b73 100644
> --- a/include/linux/lightnvm.h
> +++ b/include/linux/lightnvm.h
> @@ -23,6 +23,11 @@ enum {
> #define NVM_LUN_BITS (8)
> #define NVM_CH_BITS (7)
>
> +enum {
> + NVM_OCSSD_SPEC_12 = 12,
> + NVM_OCSSD_SPEC_20 = 20,
> +};
> +
> struct ppa_addr {
> /* Generic structure for all addresses */
> union {
> @@ -50,7 +55,7 @@ struct nvm_id;
> struct nvm_dev;
> struct nvm_tgt_dev;
>
> -typedef int (nvm_id_fn)(struct nvm_dev *, struct nvm_id *);
> +typedef int (nvm_id_fn)(struct nvm_dev *);
> typedef int (nvm_op_bb_tbl_fn)(struct nvm_dev *, struct ppa_addr, u8 *);
> typedef int (nvm_op_set_bb_fn)(struct nvm_dev *, struct ppa_addr *, int, int);
> typedef int (nvm_submit_io_fn)(struct nvm_dev *, struct nvm_rq *);
> @@ -154,62 +159,113 @@ struct nvm_id_lp_tbl {
> struct nvm_id_lp_mlc mlc;
> };
>
> -struct nvm_addr_format {
> - u8 ch_offset;
> +struct nvm_addr_format_12 {
> u8 ch_len;
> - u8 lun_offset;
> u8 lun_len;
> - u8 pln_offset;
> + u8 blk_len;
> + u8 pg_len;
> u8 pln_len;
> + u8 sec_len;
> +
> + u8 ch_offset;
> + u8 lun_offset;
> u8 blk_offset;
> - u8 blk_len;
> u8 pg_offset;
> - u8 pg_len;
> - u8 sect_offset;
> - u8 sect_len;
> + u8 pln_offset;
> + u8 sec_offset;
> +
> + u64 ch_mask;
> + u64 lun_mask;
> + u64 blk_mask;
> + u64 pg_mask;
> + u64 pln_mask;
> + u64 sec_mask;
> +};
> +
> +struct nvm_addr_format {
> + u8 ch_len;
> + u8 lun_len;
> + u8 chk_len;
> + u8 sec_len;
> + u8 rsv_len[2];
> +
> + u8 ch_offset;
> + u8 lun_offset;
> + u8 chk_offset;
> + u8 sec_offset;
> + u8 rsv_off[2];
> +
> + u64 ch_mask;
> + u64 lun_mask;
> + u64 chk_mask;
> + u64 sec_mask;
> + u64 rsv_mask[2];
> };
>
> -struct nvm_id {
> - u8 ver_id;
> +/* Device common geometry */
> +struct nvm_common_geo {
> + /* kernel short version */
> + u8 version;
> +
> + /* chunk geometry */
> + u32 num_chk; /* chunks per lun */
> + u32 clba; /* sectors per chunk */
> + u16 csecs; /* sector size */
> + u16 sos; /* out-of-band area size */
> +
> + /* device write constraints */
> + u32 ws_min; /* minimum write size */
> + u32 ws_opt; /* optimal write size */
> + u32 mw_cunits; /* distance required for successful read */
> + u32 maxoc; /* maximum open chunks */
> + u32 maxocpu; /* maximum open chunks per parallel unit */
> +
> + /* device capabilities */
> + u32 mccap;
> +
> + /* device timings */
> + u32 trdt; /* Avg. Tread (ns) */
> + u32 trdm; /* Max Tread (ns) */
> + u32 tprt; /* Avg. Tprog (ns) */
> + u32 tprm; /* Max Tprog (ns) */
> + u32 tbet; /* Avg. Terase (ns) */
> + u32 tbem; /* Max Terase (ns) */
> +
> + /* generic address format */
> + struct nvm_addr_format addrf;
> +
> + /* 1.2 compatibility */
> u8 vmnt;
> u32 cap;
> u32 dom;
>
> - struct nvm_addr_format ppaf;
> -
> - u8 num_ch;
> - u8 num_lun;
> - u16 num_chk;
> - u16 clba;
> - u16 csecs;
> - u16 sos;
> -
> - u32 ws_min;
> - u32 ws_opt;
> - u32 mw_cunits;
> -
> - u32 trdt;
> - u32 trdm;
> - u32 tprt;
> - u32 tprm;
> - u32 tbet;
> - u32 tbem;
> - u32 mpos;
> - u32 mccap;
> - u16 cpar;
> -
> - /* calculated values */
> - u16 ws_seq;
> - u16 ws_per_chk;
> -
> - /* 1.2 compatibility */
> u8 mtype;
> u8 fmtype;
>
> + u16 cpar;
> + u32 mpos;
> +
> u8 num_pln;
> + u8 pln_mode;
> u16 num_pg;
> u16 fpg_sz;
> -} __packed;
> +};
> +
> +/* Device identified geometry */
> +struct nvm_dev_geo {
> + /* device reported version */
> + u8 major_ver_id;
> + u8 minor_ver_id;
> +
> + /* full device geometry */
> + u16 num_ch;
> + u16 num_lun;
> +
> + /* calculated values */
> + u16 all_luns;
> +
> + struct nvm_common_geo c;
> +};
>
> struct nvm_target {
> struct list_head list;
> @@ -274,38 +330,23 @@ enum {
> NVM_BLK_ST_BAD = 0x8, /* Bad block */
> };
>
> -
> -/* Device generic information */
> +/* Instance geometry */
> struct nvm_geo {
> - /* generic geometry */
> - int nr_chnls;
> - int all_luns; /* across channels */
> - int nr_luns; /* per channel */
> - int nr_chks; /* per lun */
> -
> - int sec_size;
> - int oob_size;
> - int mccap;
> -
> - int sec_per_chk;
> - int sec_per_lun;
> -
> - int ws_min;
> - int ws_opt;
> - int ws_seq;
> - int ws_per_chk;
> + /* instance specific geometry */
> + int num_ch;
> + int num_lun; /* per channel */
>
> int max_rq_size;
> -
> int op;
>
> - struct nvm_addr_format ppaf;
> + /* common geometry */
> + struct nvm_common_geo c;
>
> - /* Legacy 1.2 specific geometry */
> - int plane_mode; /* drive device in single, double or quad mode */
> - int nr_planes;
> - int sec_per_pg; /* only sectors for a single page */
> - int sec_per_pl; /* all sectors across planes */
> + /* calculated values */
> + int all_luns; /* across channels */
> + int all_chunks; /* across channels */
> +
> + sector_t total_secs; /* across channels */
> };
>
> /* sub-device structure */
> @@ -316,9 +357,6 @@ struct nvm_tgt_dev {
> /* Base ppas for target LUNs */
> struct ppa_addr *luns;
>
> - sector_t total_secs;
> -
> - struct nvm_id identity;
> struct request_queue *q;
>
> struct nvm_dev *parent;
> @@ -331,15 +369,11 @@ struct nvm_dev {
> struct list_head devices;
>
> /* Device information */
> - struct nvm_geo geo;
> -
> - unsigned long total_secs;
> + struct nvm_dev_geo dev_geo;
>
> unsigned long *lun_map;
> void *dma_pool;
>
> - struct nvm_id identity;
> -
> /* Backend device */
> struct request_queue *q;
> char name[DISK_NAME_LEN];
> @@ -359,14 +393,16 @@ static inline struct ppa_addr generic_to_dev_addr(struct nvm_tgt_dev *tgt_dev,
> struct ppa_addr r)
> {
> struct nvm_geo *geo = &tgt_dev->geo;
> + struct nvm_addr_format_12 *ppaf =
> + (struct nvm_addr_format_12 *)&geo->c.addrf;
> struct ppa_addr l;
>
> - l.ppa = ((u64)r.g.blk) << geo->ppaf.blk_offset;
> - l.ppa |= ((u64)r.g.pg) << geo->ppaf.pg_offset;
> - l.ppa |= ((u64)r.g.sec) << geo->ppaf.sect_offset;
> - l.ppa |= ((u64)r.g.pl) << geo->ppaf.pln_offset;
> - l.ppa |= ((u64)r.g.lun) << geo->ppaf.lun_offset;
> - l.ppa |= ((u64)r.g.ch) << geo->ppaf.ch_offset;
> + l.ppa = ((u64)r.g.ch) << ppaf->ch_offset;
> + l.ppa |= ((u64)r.g.lun) << ppaf->lun_offset;
> + l.ppa |= ((u64)r.g.blk) << ppaf->blk_offset;
> + l.ppa |= ((u64)r.g.pg) << ppaf->pg_offset;
> + l.ppa |= ((u64)r.g.pl) << ppaf->pln_offset;
> + l.ppa |= ((u64)r.g.sec) << ppaf->sec_offset;
>
> return l;
> }
> @@ -375,24 +411,18 @@ static inline struct ppa_addr dev_to_generic_addr(struct nvm_tgt_dev *tgt_dev,
> struct ppa_addr r)
> {
> struct nvm_geo *geo = &tgt_dev->geo;
> + struct nvm_addr_format_12 *ppaf =
> + (struct nvm_addr_format_12 *)&geo->c.addrf;
> struct ppa_addr l;
>
> l.ppa = 0;
> - /*
> - * (r.ppa << X offset) & X len bitmask. X eq. blk, pg, etc.
> - */
> - l.g.blk = (r.ppa >> geo->ppaf.blk_offset) &
> - (((1 << geo->ppaf.blk_len) - 1));
> - l.g.pg |= (r.ppa >> geo->ppaf.pg_offset) &
> - (((1 << geo->ppaf.pg_len) - 1));
> - l.g.sec |= (r.ppa >> geo->ppaf.sect_offset) &
> - (((1 << geo->ppaf.sect_len) - 1));
> - l.g.pl |= (r.ppa >> geo->ppaf.pln_offset) &
> - (((1 << geo->ppaf.pln_len) - 1));
> - l.g.lun |= (r.ppa >> geo->ppaf.lun_offset) &
> - (((1 << geo->ppaf.lun_len) - 1));
> - l.g.ch |= (r.ppa >> geo->ppaf.ch_offset) &
> - (((1 << geo->ppaf.ch_len) - 1));
> +
> + l.g.ch = (r.ppa & ppaf->ch_mask) >> ppaf->ch_offset;
> + l.g.lun = (r.ppa & ppaf->lun_mask) >> ppaf->lun_offset;
> + l.g.blk = (r.ppa & ppaf->blk_mask) >> ppaf->blk_offset;
> + l.g.pg = (r.ppa & ppaf->pg_mask) >> ppaf->pg_offset;
> + l.g.pl = (r.ppa & ppaf->pln_mask) >> ppaf->pln_offset;
> + l.g.sec = (r.ppa & ppaf->sec_mask) >> ppaf->sec_offset;
>
> return l;
> }
>

This code looks like a lot of shuffling around for little gain.

Instead of going from the base assumption that is

base
-> 1.2
-> 2.0

go with

base 2.0
-> 1.2

That simplifies where the code is going, and where it will be in the
future. The above is more complex to maintain, given that new targets in
the future will most probably only consider 2.0 implementations.
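
Roughly, something along these lines (only a sketch of the direction; the
field names are illustrative, not a final layout): keep a single nvm_geo
that mirrors the 2.0 report, and keep the 1.2-only values in a small
compatibility section that only the 1.2 identify path fills in.

struct nvm_geo {
	/* generic geometry with 2.0 semantics, filled by both the 1.2
	 * and 2.0 identify paths
	 */
	u16 num_ch;
	u16 num_lun;
	u32 num_chk;		/* chunks per lun */
	u32 clba;		/* sectors per chunk */
	u32 ws_min;
	u32 ws_opt;
	u32 mw_cunits;

	struct nvm_addr_format addrf;

	/* 1.2-only values, valid only when the device reports 1.2 */
	u8 num_pln;
	u8 pln_mode;
	u16 num_pg;
	u16 fpg_sz;
};

That way pblk deals with a single geometry, and the 1.2 fields read as a
compatibility appendix instead of a parallel description.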

The patch does a lot of things at the same time. E.g.,

1) Adding 1.2 version check in pblk_init.c. This should be a separate patch.
2) Introduces constants for spec versions. This should be a separate patch.
3) Refactors nvm_geo into nvm_dev_geo. Keep it as nvm_geo and make pblk
use that structure by default. It should not be necessary for pblk to
know about the 1.2 data structures. For the special-case get/set and
addressing paths, it can use the 1.2 variables in nvm_geo if necessary. We
can also put it in the lightnvm core, but it is probably not worth doing.
4) maxoc / maxocpu, I did not add them in the early patches, as there is
no implementation that will use them. When that is implemented, they can
be added. At the very least, they should go into a separate patch.
5) the rename of ppaf -> addrf / ppaf_bitsize -> addrf_len should be in a
separate patch.
6) rename sec_offset/sec_len -> go into a separate patch or keep as is.
7) addition of the nvme_nvm_id data structure, I can see where you are
going with this, but it does not have anything to do with what the patch
describes. It should go into a separate patch. However, I would rather
just have the original implementation for identifying 1.2/2.0.
8) If you want to remove the identify data structure in nvm_geo, do it
in another patch.

2018-02-15 10:22:15

by Matias Bjørling

[permalink] [raw]
Subject: Re: [PATCH 2/8] lightnvm: show generic geometry in sysfs

On 02/13/2018 03:06 PM, Javier González wrote:
> From: Javier González <[email protected]>
>
> Apart from showing the geometry returned by the different identify
> commands, provide the generic geometry too, as this is the geometry that
> targets will use to describe the device.
>
> Signed-off-by: Javier González <[email protected]>
> ---
> drivers/nvme/host/lightnvm.c | 146 ++++++++++++++++++++++++++++---------------
> 1 file changed, 97 insertions(+), 49 deletions(-)
>
> diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
> index 97739e668602..7bc75182c723 100644
> --- a/drivers/nvme/host/lightnvm.c
> +++ b/drivers/nvme/host/lightnvm.c
> @@ -944,8 +944,27 @@ static ssize_t nvm_dev_attr_show(struct device *dev,
> return scnprintf(page, PAGE_SIZE, "%u.%u\n",
> dev_geo->major_ver_id,
> dev_geo->minor_ver_id);
> - } else if (strcmp(attr->name, "capabilities") == 0) {
> - return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.cap);
> + } else if (strcmp(attr->name, "clba") == 0) {
> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.clba);
> + } else if (strcmp(attr->name, "csecs") == 0) {
> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.csecs);
> + } else if (strcmp(attr->name, "sos") == 0) {
> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.sos);
> + } else if (strcmp(attr->name, "ws_min") == 0) {
> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.ws_min);
> + } else if (strcmp(attr->name, "ws_opt") == 0) {
> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.ws_opt);
> + } else if (strcmp(attr->name, "maxoc") == 0) {
> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.maxoc);
> + } else if (strcmp(attr->name, "maxocpu") == 0) {
> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.maxocpu);
> + } else if (strcmp(attr->name, "mw_cunits") == 0) {
> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.mw_cunits);
> + } else if (strcmp(attr->name, "media_capabilities") == 0) {
> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.mccap);
> + } else if (strcmp(attr->name, "max_phys_secs") == 0) {
> + return scnprintf(page, PAGE_SIZE, "%u\n",
> + ndev->ops->max_phys_sect);
> } else if (strcmp(attr->name, "read_typ") == 0) {
> return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.trdt);
> } else if (strcmp(attr->name, "read_max") == 0) {
> @@ -984,19 +1003,8 @@ static ssize_t nvm_dev_attr_show_12(struct device *dev,
>
> attr = &dattr->attr;
>
> - if (strcmp(attr->name, "vendor_opcode") == 0) {
> - return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.vmnt);
> - } else if (strcmp(attr->name, "device_mode") == 0) {
> - return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.dom);
> - /* kept for compatibility */
> - } else if (strcmp(attr->name, "media_manager") == 0) {
> - return scnprintf(page, PAGE_SIZE, "%s\n", "gennvm");
> - } else if (strcmp(attr->name, "ppa_format") == 0) {
> + if (strcmp(attr->name, "ppa_format") == 0) {
> return nvm_dev_attr_show_ppaf((void *)&dev_geo->c.addrf, page);
> - } else if (strcmp(attr->name, "media_type") == 0) { /* u8 */
> - return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.mtype);
> - } else if (strcmp(attr->name, "flash_media_type") == 0) {
> - return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.fmtype);
> } else if (strcmp(attr->name, "num_channels") == 0) {
> return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->num_ch);
> } else if (strcmp(attr->name, "num_luns") == 0) {
> @@ -1011,8 +1019,6 @@ static ssize_t nvm_dev_attr_show_12(struct device *dev,
> return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.fpg_sz);
> } else if (strcmp(attr->name, "hw_sector_size") == 0) {
> return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.csecs);
> - } else if (strcmp(attr->name, "oob_sector_size") == 0) {/* u32 */
> - return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.sos);
> } else if (strcmp(attr->name, "prog_typ") == 0) {
> return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.tprt);
> } else if (strcmp(attr->name, "prog_max") == 0) {
> @@ -1021,13 +1027,21 @@ static ssize_t nvm_dev_attr_show_12(struct device *dev,
> return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.tbet);
> } else if (strcmp(attr->name, "erase_max") == 0) {
> return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.tbem);
> + } else if (strcmp(attr->name, "vendor_opcode") == 0) {
> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.vmnt);
> + } else if (strcmp(attr->name, "device_mode") == 0) {
> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.dom);
> + /* kept for compatibility */
> + } else if (strcmp(attr->name, "media_manager") == 0) {
> + return scnprintf(page, PAGE_SIZE, "%s\n", "gennvm");
> + } else if (strcmp(attr->name, "capabilities") == 0) {
> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.cap);
> + } else if (strcmp(attr->name, "media_type") == 0) { /* u8 */
> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.mtype);
> + } else if (strcmp(attr->name, "flash_media_type") == 0) {
> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.fmtype);
> } else if (strcmp(attr->name, "multiplane_modes") == 0) {
> return scnprintf(page, PAGE_SIZE, "0x%08x\n", dev_geo->c.mpos);
> - } else if (strcmp(attr->name, "media_capabilities") == 0) {
> - return scnprintf(page, PAGE_SIZE, "0x%08x\n", dev_geo->c.mccap);
> - } else if (strcmp(attr->name, "max_phys_secs") == 0) {
> - return scnprintf(page, PAGE_SIZE, "%u\n",
> - ndev->ops->max_phys_sect);
> } else {
> return scnprintf(page, PAGE_SIZE,
> "Unhandled attr(%s) in `nvm_dev_attr_show_12`\n",
> @@ -1035,6 +1049,17 @@ static ssize_t nvm_dev_attr_show_12(struct device *dev,
> }
> }
>
> +static ssize_t nvm_dev_attr_show_lbaf(struct nvm_addr_format *lbaf,
> + char *page)
> +{
> + return scnprintf(page, PAGE_SIZE,
> + "0x%02x%02x%02x%02x%02x%02x%02x%02x\n",
> + lbaf->ch_offset, lbaf->ch_len,
> + lbaf->lun_offset, lbaf->lun_len,
> + lbaf->chk_offset, lbaf->chk_len,
> + lbaf->sec_offset, lbaf->sec_len);
> +}
> +
> static ssize_t nvm_dev_attr_show_20(struct device *dev,
> struct device_attribute *dattr, char *page)
> {
> @@ -1048,20 +1073,14 @@ static ssize_t nvm_dev_attr_show_20(struct device *dev,
>
> attr = &dattr->attr;
>
> - if (strcmp(attr->name, "groups") == 0) {
> + if (strcmp(attr->name, "lba_format") == 0) {
> + return nvm_dev_attr_show_lbaf((void *)&dev_geo->c.addrf, page);
> + } else if (strcmp(attr->name, "groups") == 0) {
> return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->num_ch);
> } else if (strcmp(attr->name, "punits") == 0) {
> return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->num_lun);
> } else if (strcmp(attr->name, "chunks") == 0) {
> return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.num_chk);
> - } else if (strcmp(attr->name, "clba") == 0) {
> - return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.clba);
> - } else if (strcmp(attr->name, "ws_min") == 0) {
> - return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.ws_min);
> - } else if (strcmp(attr->name, "ws_opt") == 0) {
> - return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.ws_opt);
> - } else if (strcmp(attr->name, "mw_cunits") == 0) {
> - return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.mw_cunits);
> } else if (strcmp(attr->name, "write_typ") == 0) {
> return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.tprt);
> } else if (strcmp(attr->name, "write_max") == 0) {
> @@ -1086,7 +1105,19 @@ static ssize_t nvm_dev_attr_show_20(struct device *dev,
>
> /* general attributes */
> static NVM_DEV_ATTR_RO(version);
> -static NVM_DEV_ATTR_RO(capabilities);
> +
> +static NVM_DEV_ATTR_RO(ws_min);
> +static NVM_DEV_ATTR_RO(ws_opt);
> +static NVM_DEV_ATTR_RO(mw_cunits);
> +static NVM_DEV_ATTR_RO(maxoc);
> +static NVM_DEV_ATTR_RO(maxocpu);
> +
> +static NVM_DEV_ATTR_RO(media_capabilities);
> +static NVM_DEV_ATTR_RO(max_phys_secs);
> +
> +static NVM_DEV_ATTR_RO(clba);
> +static NVM_DEV_ATTR_RO(csecs);
> +static NVM_DEV_ATTR_RO(sos);
>
> static NVM_DEV_ATTR_RO(read_typ);
> static NVM_DEV_ATTR_RO(read_max);
> @@ -1105,42 +1136,53 @@ static NVM_DEV_ATTR_12_RO(num_blocks);
> static NVM_DEV_ATTR_12_RO(num_pages);
> static NVM_DEV_ATTR_12_RO(page_size);
> static NVM_DEV_ATTR_12_RO(hw_sector_size);
> -static NVM_DEV_ATTR_12_RO(oob_sector_size);
> static NVM_DEV_ATTR_12_RO(prog_typ);
> static NVM_DEV_ATTR_12_RO(prog_max);
> static NVM_DEV_ATTR_12_RO(erase_typ);
> static NVM_DEV_ATTR_12_RO(erase_max);
> static NVM_DEV_ATTR_12_RO(multiplane_modes);
> -static NVM_DEV_ATTR_12_RO(media_capabilities);
> -static NVM_DEV_ATTR_12_RO(max_phys_secs);
> +static NVM_DEV_ATTR_12_RO(capabilities);
>
> static struct attribute *nvm_dev_attrs_12[] = {
> &dev_attr_version.attr,
> - &dev_attr_capabilities.attr,
> -
> - &dev_attr_vendor_opcode.attr,
> - &dev_attr_device_mode.attr,
> - &dev_attr_media_manager.attr,
> &dev_attr_ppa_format.attr,
> - &dev_attr_media_type.attr,
> - &dev_attr_flash_media_type.attr,
> +
> &dev_attr_num_channels.attr,
> &dev_attr_num_luns.attr,
> &dev_attr_num_planes.attr,
> &dev_attr_num_blocks.attr,
> &dev_attr_num_pages.attr,
> &dev_attr_page_size.attr,
> +
> &dev_attr_hw_sector_size.attr,
> - &dev_attr_oob_sector_size.attr,
> +
> + &dev_attr_clba.attr,
> + &dev_attr_csecs.attr,
> + &dev_attr_sos.attr,
> +
> + &dev_attr_ws_min.attr,
> + &dev_attr_ws_opt.attr,
> + &dev_attr_maxoc.attr,
> + &dev_attr_maxocpu.attr,
> + &dev_attr_mw_cunits.attr,
> +
> + &dev_attr_media_capabilities.attr,
> + &dev_attr_max_phys_secs.attr,
> +

This breaks user-space. The intention is for user-space to decide based
on version id. Then it can either retrieve the 1.2 or 2.0 attributes.
The 2.0 attributes should not be available when a device is 1.2.
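
I.e., keep one attribute array per revision and let registration pick the
group from the reported version, as patch 1 already does (a rough sketch,
reusing the attribute names from the patch):

static struct attribute *nvm_dev_attrs_12[] = {
	&dev_attr_version.attr,
	&dev_attr_ppa_format.attr,
	/* ... 1.2 attributes only ... */
	NULL,
};

static struct attribute *nvm_dev_attrs_20[] = {
	&dev_attr_version.attr,
	&dev_attr_lba_format.attr,
	/* ... 2.0 attributes only ... */
	NULL,
};

/* nvme_nvm_register_sysfs() then switches on the major version and
 * registers exactly one of the two groups, so a 1.2 device never exposes
 * the 2.0-only files.
 */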

> &dev_attr_read_typ.attr,
> &dev_attr_read_max.attr,
> &dev_attr_prog_typ.attr,
> &dev_attr_prog_max.attr,
> &dev_attr_erase_typ.attr,
> &dev_attr_erase_max.attr,
> +
> + &dev_attr_vendor_opcode.attr,
> + &dev_attr_device_mode.attr,
> + &dev_attr_media_manager.attr,
> + &dev_attr_capabilities.attr,
> + &dev_attr_media_type.attr,
> + &dev_attr_flash_media_type.attr,
> &dev_attr_multiplane_modes.attr,
> - &dev_attr_media_capabilities.attr,
> - &dev_attr_max_phys_secs.attr,
>
> NULL,
> };
> @@ -1152,12 +1194,9 @@ static const struct attribute_group nvm_dev_attr_group_12 = {
>
> /* 2.0 values */
> static NVM_DEV_ATTR_20_RO(groups);
> +static NVM_DEV_ATTR_20_RO(lba_format);
> static NVM_DEV_ATTR_20_RO(punits);
> static NVM_DEV_ATTR_20_RO(chunks);
> -static NVM_DEV_ATTR_20_RO(clba);
> -static NVM_DEV_ATTR_20_RO(ws_min);
> -static NVM_DEV_ATTR_20_RO(ws_opt);
> -static NVM_DEV_ATTR_20_RO(mw_cunits);
> static NVM_DEV_ATTR_20_RO(write_typ);
> static NVM_DEV_ATTR_20_RO(write_max);
> static NVM_DEV_ATTR_20_RO(reset_typ);
> @@ -1165,16 +1204,25 @@ static NVM_DEV_ATTR_20_RO(reset_max);
>
> static struct attribute *nvm_dev_attrs_20[] = {
> &dev_attr_version.attr,
> - &dev_attr_capabilities.attr,
> + &dev_attr_lba_format.attr,
>
> &dev_attr_groups.attr,
> &dev_attr_punits.attr,
> &dev_attr_chunks.attr,
> +
> &dev_attr_clba.attr,
> + &dev_attr_csecs.attr,
> + &dev_attr_sos.attr,

csecs and sos are derived from the generic block device data structures.
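
That is, the generic path can take them from the namespace, roughly as
nvme_nvm_update_nvm_info() in patch 1 already does:

	dev_geo->c.csecs = 1 << ns->lba_shift;	/* sector size from the LBA format */
	dev_geo->c.sos = ns->ms;		/* per-sector metadata (OOB) size */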

> +
> &dev_attr_ws_min.attr,
> &dev_attr_ws_opt.attr,
> + &dev_attr_maxoc.attr,
> + &dev_attr_maxocpu.attr,

When the maxoc/maxocpu are in another patch, these changes can be included.

> &dev_attr_mw_cunits.attr,
>
> + &dev_attr_media_capabilities.attr,

What is the meaning of media in this context? The 2.0 spec defines
vector copy and double resets in its capabilities; it does not have
media in mind.

> + &dev_attr_max_phys_secs.attr,
> +

I kill max_phys_secs in another patch. It has been made redundant after
null_blk has been removed.
> &dev_attr_read_typ.attr,
> &dev_attr_read_max.attr,
> &dev_attr_write_typ.attr,
>

2018-02-15 10:22:35

by Matias Bjørling

[permalink] [raw]
Subject: Re: [PATCH 3/8] lightnvm: add support for 2.0 address format

On 02/13/2018 03:06 PM, Javier González wrote:
> Add support for the 2.0 address format. Also, align the address bits for the
> 1.2 and 2.0 formats.
>
> Signed-off-by: Javier González <[email protected]>
> ---
> include/linux/lightnvm.h | 45 ++++++++++++++++++++++++++++++++-------------
> 1 file changed, 32 insertions(+), 13 deletions(-)
>
> diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h
> index 6a567bd19b73..e035ae4c9acc 100644
> --- a/include/linux/lightnvm.h
> +++ b/include/linux/lightnvm.h
> @@ -16,12 +16,21 @@ enum {
> NVM_IOTYPE_GC = 1,
> };
>
> -#define NVM_BLK_BITS (16)
> -#define NVM_PG_BITS (16)
> -#define NVM_SEC_BITS (8)
> -#define NVM_PL_BITS (8)
> -#define NVM_LUN_BITS (8)
> -#define NVM_CH_BITS (7)
> +/* 1.2 format */
> +#define NVM_12_CH_BITS (8)
> +#define NVM_12_LUN_BITS (8)
> +#define NVM_12_BLK_BITS (16)
> +#define NVM_12_PG_BITS (16)
> +#define NVM_12_PL_BITS (4)
> +#define NVM_12_SEC_BITS (4)
> +#define NVM_12_RESERVED (8)
> +
> +/* 2.0 format */
> +#define NVM_20_CH_BITS (8)
> +#define NVM_20_LUN_BITS (8)
> +#define NVM_20_CHK_BITS (16)
> +#define NVM_20_SEC_BITS (24)
> +#define NVM_20_RESERVED (8)
>
> enum {
> NVM_OCSSD_SPEC_12 = 12,
> @@ -31,16 +40,26 @@ enum {
> struct ppa_addr {
> /* Generic structure for all addresses */
> union {
> + /* 1.2 device format */
> struct {
> - u64 blk : NVM_BLK_BITS;
> - u64 pg : NVM_PG_BITS;
> - u64 sec : NVM_SEC_BITS;
> - u64 pl : NVM_PL_BITS;
> - u64 lun : NVM_LUN_BITS;
> - u64 ch : NVM_CH_BITS;
> - u64 reserved : 1;
> + u64 ch : NVM_12_CH_BITS;
> + u64 lun : NVM_12_LUN_BITS;
> + u64 blk : NVM_12_BLK_BITS;
> + u64 pg : NVM_12_PG_BITS;
> + u64 pl : NVM_12_PL_BITS;
> + u64 sec : NVM_12_SEC_BITS;
> + u64 reserved : NVM_12_RESERVED;
> } g;
>
> + /* 2.0 device format */
> + struct {
> + u64 ch : NVM_20_CH_BITS;
> + u64 lun : NVM_20_LUN_BITS;
> + u64 chk : NVM_20_CHK_BITS;
> + u64 sec : NVM_20_SEC_BITS;
> + u64 reserved : NVM_20_RESERVED;
> + } m;
> +
> struct {
> u64 line : 63;
> u64 is_cached : 1;
>

You can fold this into the next patch.

2018-02-15 11:00:06

by Matias Bjørling

[permalink] [raw]
Subject: Re: [PATCH 6/8] lightnvm: pblk: implement get log report chunk

On 02/13/2018 03:06 PM, Javier González wrote:
> From: Javier González <[email protected]>
>
> In preparation for pblk supporting 2.0, implement the get log report
> chunk in pblk.
>
> This patch only replicates the bad block functionality, as the rest of the
> metadata requires new pblk functionality (e.g., wear-index to implement
> wear-leveling). This functionality will come in future patches.
>
> Signed-off-by: Javier González <[email protected]>
> ---
> drivers/lightnvm/pblk-core.c | 118 +++++++++++++++++++++++----
> drivers/lightnvm/pblk-init.c | 186 +++++++++++++++++++++++++++++++-----------
> drivers/lightnvm/pblk-sysfs.c | 67 +++++++++++++++
> drivers/lightnvm/pblk.h | 20 +++++
> 4 files changed, 327 insertions(+), 64 deletions(-)
>
> diff --git a/drivers/lightnvm/pblk-core.c b/drivers/lightnvm/pblk-core.c
> index 519af8b9eab7..01b78ee5c0e0 100644
> --- a/drivers/lightnvm/pblk-core.c
> +++ b/drivers/lightnvm/pblk-core.c
> @@ -44,11 +44,12 @@ static void pblk_line_mark_bb(struct work_struct *work)
> }
>
> static void pblk_mark_bb(struct pblk *pblk, struct pblk_line *line,
> - struct ppa_addr *ppa)
> + struct ppa_addr ppa_addr)
> {
> struct nvm_tgt_dev *dev = pblk->dev;
> struct nvm_geo *geo = &dev->geo;
> - int pos = pblk_ppa_to_pos(geo, *ppa);
> + struct ppa_addr *ppa;
> + int pos = pblk_ppa_to_pos(geo, ppa_addr);
>
> pr_debug("pblk: erase failed: line:%d, pos:%d\n", line->id, pos);
> atomic_long_inc(&pblk->erase_failed);
> @@ -58,6 +59,15 @@ static void pblk_mark_bb(struct pblk *pblk, struct pblk_line *line,
> pr_err("pblk: attempted to erase bb: line:%d, pos:%d\n",
> line->id, pos);
>
> + /* Not necessary to mark bad blocks on 2.0 spec. */
> + if (geo->c.version == NVM_OCSSD_SPEC_20)
> + return;
> +
> + ppa = kmalloc(sizeof(struct ppa_addr), GFP_ATOMIC);
> + if (!ppa)
> + return;
> +
> + *ppa = ppa_addr;
> pblk_gen_run_ws(pblk, NULL, ppa, pblk_line_mark_bb,
> GFP_ATOMIC, pblk->bb_wq);
> }
> @@ -69,16 +79,8 @@ static void __pblk_end_io_erase(struct pblk *pblk, struct nvm_rq *rqd)
> line = &pblk->lines[pblk_ppa_to_line(rqd->ppa_addr)];
> atomic_dec(&line->left_seblks);
>
> - if (rqd->error) {
> - struct ppa_addr *ppa;
> -
> - ppa = kmalloc(sizeof(struct ppa_addr), GFP_ATOMIC);
> - if (!ppa)
> - return;
> -
> - *ppa = rqd->ppa_addr;
> - pblk_mark_bb(pblk, line, ppa);
> - }
> + if (rqd->error)
> + pblk_mark_bb(pblk, line, rqd->ppa_addr);
>
> atomic_dec(&pblk->inflight_io);
> }
> @@ -92,6 +94,47 @@ static void pblk_end_io_erase(struct nvm_rq *rqd)
> mempool_free(rqd, pblk->e_rq_pool);
> }
>
> +/*
> + * Get information for all chunks from the device.
> + *
> + * The caller is responsible for freeing the returned structure
> + */
> +struct nvm_chunk_log_page *pblk_chunk_get_info(struct pblk *pblk)
> +{
> + struct nvm_tgt_dev *dev = pblk->dev;
> + struct nvm_geo *geo = &dev->geo;
> + struct nvm_chunk_log_page *log;
> + unsigned long len;
> + int ret;
> +
> + len = geo->all_chunks * sizeof(*log);
> + log = kzalloc(len, GFP_KERNEL);
> + if (!log)
> + return ERR_PTR(-ENOMEM);
> +
> + ret = nvm_get_chunk_log_page(dev, log, 0, len);
> + if (ret) {
> + pr_err("pblk: could not get chunk log page (%d)\n", ret);
> + kfree(log);
> + return ERR_PTR(-EIO);
> + }
> +
> + return log;
> +}
> +
> +struct nvm_chunk_log_page *pblk_chunk_get_off(struct pblk *pblk,
> + struct nvm_chunk_log_page *lp,
> + struct ppa_addr ppa)
> +{
> + struct nvm_tgt_dev *dev = pblk->dev;
> + struct nvm_geo *geo = &dev->geo;
> + int ch_off = ppa.m.ch * geo->c.num_chk * geo->num_lun;
> + int lun_off = ppa.m.lun * geo->c.num_chk;
> + int chk_off = ppa.m.chk;
> +
> + return lp + ch_off + lun_off + chk_off;
> +}
> +
> void __pblk_map_invalidate(struct pblk *pblk, struct pblk_line *line,
> u64 paddr)
> {
> @@ -1094,10 +1137,38 @@ static int pblk_line_init_bb(struct pblk *pblk, struct pblk_line *line,
> return 1;
> }
>
> +static int pblk_prepare_new_line(struct pblk *pblk, struct pblk_line *line)
> +{
> + struct pblk_line_meta *lm = &pblk->lm;
> + struct nvm_tgt_dev *dev = pblk->dev;
> + struct nvm_geo *geo = &dev->geo;
> + int blk_to_erase = atomic_read(&line->blk_in_line);
> + int i;
> +
> + for (i = 0; i < lm->blk_per_line; i++) {
> + int state = line->chks[i].state;
> + struct pblk_lun *rlun = &pblk->luns[i];
> +
> + /* Free chunks should not be erased */
> + if (state & NVM_CHK_ST_FREE) {
> + set_bit(pblk_ppa_to_pos(geo, rlun->chunk_bppa),
> + line->erase_bitmap);
> + blk_to_erase--;
> + line->chks[i].state = NVM_CHK_ST_HOST_USE;
> + }
> +
> + WARN_ONCE(state & NVM_CHK_ST_OPEN,
> + "pblk: open chunk in new line: %d\n",
> + line->id);
> + }
> +
> + return blk_to_erase;
> +}
> +
> static int pblk_line_prepare(struct pblk *pblk, struct pblk_line *line)
> {
> struct pblk_line_meta *lm = &pblk->lm;
> - int blk_in_line = atomic_read(&line->blk_in_line);
> + int blk_to_erase;
>
> line->map_bitmap = kzalloc(lm->sec_bitmap_len, GFP_ATOMIC);
> if (!line->map_bitmap)
> @@ -1110,7 +1181,21 @@ static int pblk_line_prepare(struct pblk *pblk, struct pblk_line *line)
> return -ENOMEM;
> }
>
> + /* Bad blocks do not need to be erased */
> + bitmap_copy(line->erase_bitmap, line->blk_bitmap, lm->blk_per_line);
> +
> spin_lock(&line->lock);
> +
> + /* If we have not written to this line, we need to mark up free chunks
> + * as already erased
> + */
> + if (line->state == PBLK_LINESTATE_NEW) {
> + blk_to_erase = pblk_prepare_new_line(pblk, line);
> + line->state = PBLK_LINESTATE_FREE;
> + } else {
> + blk_to_erase = atomic_read(&line->blk_in_line);
> + }
> +
> if (line->state != PBLK_LINESTATE_FREE) {
> kfree(line->map_bitmap);
> kfree(line->invalid_bitmap);
> @@ -1122,15 +1207,12 @@ static int pblk_line_prepare(struct pblk *pblk, struct pblk_line *line)
>
> line->state = PBLK_LINESTATE_OPEN;
>
> - atomic_set(&line->left_eblks, blk_in_line);
> - atomic_set(&line->left_seblks, blk_in_line);
> + atomic_set(&line->left_eblks, blk_to_erase);
> + atomic_set(&line->left_seblks, blk_to_erase);
>
> line->meta_distance = lm->meta_distance;
> spin_unlock(&line->lock);
>
> - /* Bad blocks do not need to be erased */
> - bitmap_copy(line->erase_bitmap, line->blk_bitmap, lm->blk_per_line);
> -
> kref_init(&line->ref);
>
> return 0;
> diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c
> index 72b7902e5d1c..dfc68718e27e 100644
> --- a/drivers/lightnvm/pblk-init.c
> +++ b/drivers/lightnvm/pblk-init.c
> @@ -402,6 +402,7 @@ static void pblk_line_meta_free(struct pblk_line *line)
> {
> kfree(line->blk_bitmap);
> kfree(line->erase_bitmap);
> + kfree(line->chks);
> }
>
> static void pblk_lines_free(struct pblk *pblk)
> @@ -470,25 +471,15 @@ static void *pblk_bb_get_log(struct pblk *pblk)
> return log;
> }
>
> -static int pblk_bb_line(struct pblk *pblk, struct pblk_line *line,
> - u8 *bb_log, int blk_per_line)
> +static void *pblk_chunk_get_log(struct pblk *pblk)
> {
> struct nvm_tgt_dev *dev = pblk->dev;
> struct nvm_geo *geo = &dev->geo;
> - int i, bb_cnt = 0;
>
> - for (i = 0; i < blk_per_line; i++) {
> - struct pblk_lun *rlun = &pblk->luns[i];
> - u8 *lun_bb_log = bb_log + i * blk_per_line;
> -
> - if (lun_bb_log[line->id] == NVM_BLK_T_FREE)
> - continue;
> -
> - set_bit(pblk_ppa_to_pos(geo, rlun->bppa), line->blk_bitmap);
> - bb_cnt++;
> - }
> -
> - return bb_cnt;
> + if (geo->c.version == NVM_OCSSD_SPEC_12)
> + return pblk_bb_get_log(pblk);
> + else
> + return pblk_chunk_get_info(pblk);
> }
>
> static int pblk_luns_init(struct pblk *pblk, struct ppa_addr *luns)
> @@ -517,6 +508,7 @@ static int pblk_luns_init(struct pblk *pblk, struct ppa_addr *luns)
>
> rlun = &pblk->luns[i];
> rlun->bppa = luns[lunid];
> + rlun->chunk_bppa = luns[i];
>
> sema_init(&rlun->wr_sem, 1);
> }
> @@ -696,8 +688,125 @@ static int pblk_lines_alloc_metadata(struct pblk *pblk)
> return -ENOMEM;
> }
>
> -static int pblk_setup_line_meta(struct pblk *pblk, struct pblk_line *line,
> - void *chunk_log, long *nr_bad_blks)
> +static int pblk_setup_line_meta_12(struct pblk *pblk, struct pblk_line *line,
> + void *chunk_log)
> +{
> + struct nvm_tgt_dev *dev = pblk->dev;
> + struct nvm_geo *geo = &dev->geo;
> + struct pblk_line_meta *lm = &pblk->lm;
> + int i, chk_per_lun, nr_bad_chks = 0;
> +
> + chk_per_lun = geo->c.num_chk * geo->c.pln_mode;
> +
> + for (i = 0; i < lm->blk_per_line; i++) {
> + struct pblk_chunk *chunk = &line->chks[i];
> + struct pblk_lun *rlun = &pblk->luns[i];
> + u8 *lun_bb_log = chunk_log + i * chk_per_lun;
> +
> + /*
> + * In 1.2 spec. chunk state is not persisted by the device. Thus
> + * some of the values are reset each time pblk is instantiated.
> + */
> + if (lun_bb_log[line->id] == NVM_BLK_T_FREE)
> + chunk->state = NVM_CHK_ST_HOST_USE;
> + else
> + chunk->state = NVM_CHK_ST_OFFLINE;
> +
> + chunk->type = NVM_CHK_TP_W_SEQ;
> + chunk->wi = 0;
> + chunk->slba = -1;
> + chunk->cnlb = geo->c.clba;
> + chunk->wp = 0;
> +
> + if (!(chunk->state & NVM_CHK_ST_OFFLINE))
> + continue;
> +
> + set_bit(pblk_ppa_to_pos(geo, rlun->bppa), line->blk_bitmap);
> + nr_bad_chks++;
> + }
> +
> + return nr_bad_chks;
> +}
> +
> +static int pblk_setup_line_meta_20(struct pblk *pblk, struct pblk_line *line,
> + struct nvm_chunk_log_page *log_page)
> +{
> + struct nvm_tgt_dev *dev = pblk->dev;
> + struct nvm_geo *geo = &dev->geo;
> + struct pblk_line_meta *lm = &pblk->lm;
> + int i, nr_bad_chks = 0;
> +
> + for (i = 0; i < lm->blk_per_line; i++) {
> + struct pblk_chunk *chunk = &line->chks[i];
> + struct pblk_lun *rlun = &pblk->luns[i];
> + struct nvm_chunk_log_page *chunk_log_page;
> + struct ppa_addr ppa;
> +
> + ppa = rlun->chunk_bppa;
> + ppa.m.chk = line->id;
> + chunk_log_page = pblk_chunk_get_off(pblk, log_page, ppa);
> +
> + chunk->state = chunk_log_page->state;
> + chunk->type = chunk_log_page->type;
> + chunk->wi = chunk_log_page->wear_index;
> + chunk->slba = le64_to_cpu(chunk_log_page->slba);
> + chunk->cnlb = le64_to_cpu(chunk_log_page->cnlb);
> + chunk->wp = le64_to_cpu(chunk_log_page->wp);
> +
> + if (!(chunk->state & NVM_CHK_ST_OFFLINE))
> + continue;
> +
> + if (chunk->type & NVM_CHK_TP_SZ_SPEC) {
> + WARN_ONCE(1, "pblk: custom-sized chunks unsupported\n");
> + continue;
> + }
> +
> + set_bit(pblk_ppa_to_pos(geo, rlun->chunk_bppa),
> + line->blk_bitmap);
> + nr_bad_chks++;
> + }
> +
> + return nr_bad_chks;
> +}
> +

The device chunk to nvm_chunk translation logic belongs in the lightnvm
core. A target should preferably not have to handle the differences
between the 1.2 and 2.0 interfaces.
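
Roughly what I have in mind (a sketch only, with a made-up helper name,
reusing the nvm_chk_meta layout from further down this thread and the 1.2
defaults pblk sets above):

/* Sketch: the core synthesizes generic chunk metadata from a 1.2 bad block
 * table, so a target never branches on the spec version. The helper name
 * and the free/offline mapping are illustrative only.
 */
static void nvm_bb_to_chk_meta(struct nvm_dev *dev, u8 *bb_tbl,
                               struct nvm_chk_meta *meta, int nr_chks)
{
        struct nvm_dev_geo *dev_geo = &dev->dev_geo;
        int i;

        for (i = 0; i < nr_chks; i++) {
                struct nvm_chk_meta *chk = &meta[i];

                /* 1.2 only persists good/bad; the rest is reset defaults */
                chk->state = (bb_tbl[i] == NVM_BLK_T_FREE) ?
                                NVM_CHK_ST_FREE : NVM_CHK_ST_OFFLINE;
                chk->type = NVM_CHK_TP_W_SEQ;
                chk->wli = 0;
                chk->slba = -1;
                chk->cnlb = dev_geo->c.clba;
                chk->wp = 0;
        }
}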



2018-02-15 12:52:55

by Matias Bjørling

[permalink] [raw]
Subject: Re: [PATCH 5/8] lightnvm: implement get log report chunk helpers

On 02/13/2018 03:06 PM, Javier González wrote:
> From: Javier González <[email protected]>
>
> The 2.0 spec provides a report chunk log page that can be retrieved
> using the standard nvme get log page. This replaces the dedicated
> get/put bad block table in 1.2.
>
> This patch implements the helper functions to allow targets to retrieve
> the chunk metadata using get log page.
>
> Signed-off-by: Javier González <[email protected]>
> ---
> drivers/lightnvm/core.c | 28 +++++++++++++++++++++++++
> drivers/nvme/host/lightnvm.c | 50 ++++++++++++++++++++++++++++++++++++++++++++
> include/linux/lightnvm.h | 32 ++++++++++++++++++++++++++++
> 3 files changed, 110 insertions(+)
>
> diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c
> index 80492fa6ee76..6857a888544a 100644
> --- a/drivers/lightnvm/core.c
> +++ b/drivers/lightnvm/core.c
> @@ -43,6 +43,8 @@ struct nvm_ch_map {
> struct nvm_dev_map {
> struct nvm_ch_map *chnls;
> int nr_chnls;
> + int bch;
> + int blun;
> };

bch/blun should be unnecessary if the map_to_dev / map_to_tgt functions
are implemented correctly (they can be, given the ppa_addr ordering
update, as far as I can see).

What is the reason they can't be used? I might be missing something.
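
For illustration, something along these lines should be enough; a sketch
that reuses the existing mapping helpers (the function name is made up):

/* Derive the device-side starting point for the log page from a target
 * ppa via the existing target->device mapping, instead of caching
 * bch/blun in the device map.
 */
static sector_t nvm_chk_meta_slba(struct nvm_tgt_dev *tgt_dev,
                                  struct ppa_addr ppa)
{
        nvm_map_to_dev(tgt_dev, &ppa);  /* target ch/lun -> device ch/lun */
        ppa = generic_to_dev_addr(tgt_dev, ppa);

        return (sector_t)ppa.ppa;
}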

>
> static struct nvm_target *nvm_find_target(struct nvm_dev *dev, const char *name)
> @@ -171,6 +173,9 @@ static struct nvm_tgt_dev *nvm_create_tgt_dev(struct nvm_dev *dev,
> if (!dev_map->chnls)
> goto err_chnls;
>
> + dev_map->bch = bch;
> + dev_map->blun = blun;
> +
> luns = kcalloc(nr_luns, sizeof(struct ppa_addr), GFP_KERNEL);
> if (!luns)
> goto err_luns;
> @@ -561,6 +566,19 @@ static void nvm_unregister_map(struct nvm_dev *dev)
> kfree(rmap);
> }
>
> +static unsigned long nvm_log_off_tgt_to_dev(struct nvm_tgt_dev *tgt_dev)
> +{
> + struct nvm_dev_map *dev_map = tgt_dev->map;
> + struct nvm_geo *geo = &tgt_dev->geo;
> + int lun_off;
> + unsigned long off;
> +
> + lun_off = dev_map->blun + dev_map->bch * geo->num_lun;
> + off = lun_off * geo->c.num_chk * sizeof(struct nvm_chunk_log_page);
> +
> + return off;
> +}
> +
> static void nvm_map_to_dev(struct nvm_tgt_dev *tgt_dev, struct ppa_addr *p)
> {
> struct nvm_dev_map *dev_map = tgt_dev->map;
> @@ -720,6 +738,16 @@ static void nvm_free_rqd_ppalist(struct nvm_tgt_dev *tgt_dev,
> nvm_dev_dma_free(tgt_dev->parent, rqd->ppa_list, rqd->dma_ppa_list);
> }
>
> +int nvm_get_chunk_log_page(struct nvm_tgt_dev *tgt_dev,
> + struct nvm_chunk_log_page *log,
> + unsigned long off, unsigned long len)
> +{
> + struct nvm_dev *dev = tgt_dev->parent;
> +
> + off += nvm_log_off_tgt_to_dev(tgt_dev);
> +
> + return dev->ops->get_chunk_log_page(tgt_dev->parent, log, off, len);
> +}

I think that this should be exported, the same way get_bb and set_bb
are. Otherwise, linking fails if pblk is compiled as a module.
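
i.e. it needs an EXPORT_SYMBOL() next to the definition, the same way the
bb table helpers have one:

        EXPORT_SYMBOL(nvm_get_chunk_log_page);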

>
> int nvm_set_tgt_bb_tbl(struct nvm_tgt_dev *tgt_dev, struct ppa_addr *ppas,
> int nr_ppas, int type)
> diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
> index 7bc75182c723..355d9b0cf084 100644
> --- a/drivers/nvme/host/lightnvm.c
> +++ b/drivers/nvme/host/lightnvm.c
> @@ -35,6 +35,10 @@ enum nvme_nvm_admin_opcode {
> nvme_nvm_admin_set_bb_tbl = 0xf1,
> };
>
> +enum nvme_nvm_log_page {
> + NVME_NVM_LOG_REPORT_CHUNK = 0xCA,
> +};
> +

The convention is to have it as lower-case.

> struct nvme_nvm_ph_rw {
> __u8 opcode;
> __u8 flags;
> @@ -553,6 +557,50 @@ static int nvme_nvm_set_bb_tbl(struct nvm_dev *nvmdev, struct ppa_addr *ppas,
> return ret;
> }
>
> +static int nvme_nvm_get_chunk_log_page(struct nvm_dev *nvmdev,
> + struct nvm_chunk_log_page *log,
> + unsigned long off,
> + unsigned long total_len)

The chunk_log_page interface is to be used both by targets and by the
block layer code. Therefore, it is not convenient to have a
byte-addressable interface exposed all the way up to a target. Instead,
use slba and nlb. That simplifies what a target has to implement, and
also allows the offset check to be removed.

Chunk log page should be defined in the nvme implementation, such that
it can be accessed through the traditional LBA path.

struct nvme_nvm_chk_meta {
__u8 state;
__u8 type;
__u8 wli;
__u8 rsvd[5];
__le64 slba;
__le64 cnlb;
__le64 wp;
};
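
The target-facing prototype then becomes chunk-granular, along the lines of
the sketch further down:

int nvm_get_chunk_meta(struct nvm_tgt_dev *tgt_dev, struct nvm_chk_meta *meta,
                       struct ppa_addr ppa, int nchks);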

> +{
> + struct nvme_ns *ns = nvmdev->q->queuedata;
> + struct nvme_command c = { };
> + unsigned long offset = off, left = total_len;
> + unsigned long len, len_dwords;
> + void *buf = log;
> + int ret;
> +
> + /* The offset needs to be dword-aligned */
> + if (offset & 0x3)
> + return -EINVAL;

No need to check for this with the above interface changes.

> +
> + do {
> + /* Send 256KB at a time */
> + len = (1 << 18) > left ? left : (1 << 18);
> + len_dwords = (len >> 2) - 1;

This is namespace dependent. Use ctrl->max_hw_sectors << 9 instead.
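
e.g., as in the sketch further down:

        len = min_t(unsigned, left, ctrl->max_hw_sectors << 9);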

> +
> + c.get_log_page.opcode = nvme_admin_get_log_page;
> + c.get_log_page.nsid = cpu_to_le32(ns->head->ns_id);
> + c.get_log_page.lid = NVME_NVM_LOG_REPORT_CHUNK;
> + c.get_log_page.lpol = cpu_to_le32(offset & 0xffffffff);
> + c.get_log_page.lpou = cpu_to_le32(offset >> 32);
> + c.get_log_page.numdl = cpu_to_le16(len_dwords & 0xffff);
> + c.get_log_page.numdu = cpu_to_le16(len_dwords >> 16);
> +
> + ret = nvme_submit_sync_cmd(ns->ctrl->admin_q, &c, buf, len);
> + if (ret) {
> + dev_err(ns->ctrl->device,
> + "get chunk log page failed (%d)\n", ret);
> + break;
> + }
> +
> + buf += len;
> + offset += len;
> + left -= len;
> + } while (left);
> +
> + return ret;
> +}
> +
> static inline void nvme_nvm_rqtocmd(struct nvm_rq *rqd, struct nvme_ns *ns,
> struct nvme_nvm_command *c)
> {
> @@ -684,6 +732,8 @@ static struct nvm_dev_ops nvme_nvm_dev_ops = {
> .get_bb_tbl = nvme_nvm_get_bb_tbl,
> .set_bb_tbl = nvme_nvm_set_bb_tbl,
>
> + .get_chunk_log_page = nvme_nvm_get_chunk_log_page,
> +
> .submit_io = nvme_nvm_submit_io,
> .submit_io_sync = nvme_nvm_submit_io_sync,
>
> diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h
> index 1148b3f22b27..eb2900a18160 100644
> --- a/include/linux/lightnvm.h
> +++ b/include/linux/lightnvm.h
> @@ -73,10 +73,13 @@ struct nvm_rq;
> struct nvm_id;
> struct nvm_dev;
> struct nvm_tgt_dev;
> +struct nvm_chunk_log_page;
>
> typedef int (nvm_id_fn)(struct nvm_dev *);
> typedef int (nvm_op_bb_tbl_fn)(struct nvm_dev *, struct ppa_addr, u8 *);
> typedef int (nvm_op_set_bb_fn)(struct nvm_dev *, struct ppa_addr *, int, int);
> +typedef int (nvm_get_chunk_lp_fn)(struct nvm_dev *, struct nvm_chunk_log_page *,
> + unsigned long, unsigned long);
> typedef int (nvm_submit_io_fn)(struct nvm_dev *, struct nvm_rq *);
> typedef int (nvm_submit_io_sync_fn)(struct nvm_dev *, struct nvm_rq *);
> typedef void *(nvm_create_dma_pool_fn)(struct nvm_dev *, char *);
> @@ -90,6 +93,8 @@ struct nvm_dev_ops {
> nvm_op_bb_tbl_fn *get_bb_tbl;
> nvm_op_set_bb_fn *set_bb_tbl;
>
> + nvm_get_chunk_lp_fn *get_chunk_log_page;
> +
> nvm_submit_io_fn *submit_io;
> nvm_submit_io_sync_fn *submit_io_sync;
>
> @@ -286,6 +291,30 @@ struct nvm_dev_geo {
> struct nvm_common_geo c;
> };
>
> +enum {
> + /* Chunk states */
> + NVM_CHK_ST_FREE = 1 << 0,
> + NVM_CHK_ST_CLOSED = 1 << 1,
> + NVM_CHK_ST_OPEN = 1 << 2,
> + NVM_CHK_ST_OFFLINE = 1 << 3,
> + NVM_CHK_ST_HOST_USE = 1 << 7,
> +
> + /* Chunk types */
> + NVM_CHK_TP_W_SEQ = 1 << 0,
> + NVM_CHK_TP_W_RAN = 1 << 2,

The RAN bit is the second bit (1 << 1)

> + NVM_CHK_TP_SZ_SPEC = 1 << 4,
> +};
> +
> +struct nvm_chunk_log_page {
> + __u8 state;
> + __u8 type;
> + __u8 wear_index;
> + __u8 rsvd[5];
> + __u64 slba;
> + __u64 cnlb;
> + __u64 wp;
> +};

Should be represented both within the device driver and the lightnvm
header file.
> +
> struct nvm_target {
> struct list_head list;
> struct nvm_tgt_dev *dev;
> @@ -505,6 +534,9 @@ extern struct nvm_dev *nvm_alloc_dev(int);
> extern int nvm_register(struct nvm_dev *);
> extern void nvm_unregister(struct nvm_dev *);
>
> +extern int nvm_get_chunk_log_page(struct nvm_tgt_dev *,
> + struct nvm_chunk_log_page *,
> + unsigned long, unsigned long);
> extern int nvm_set_tgt_bb_tbl(struct nvm_tgt_dev *, struct ppa_addr *,
> int, int);
> extern int nvm_max_phys_sects(struct nvm_tgt_dev *);
>

Here is a compile-tested and lightly tested patch with the fixes above.
Note that the chunk state definitions have been taken out, as they
properly belong in the next patch. Also note that it uses the get log
page patch I sent that wires up the 1.2.1 get log page support.

diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c
index 689c97b97775..cc22bf48fd13 100644
--- a/drivers/lightnvm/core.c
+++ b/drivers/lightnvm/core.c
@@ -841,6 +841,19 @@ int nvm_get_tgt_bb_tbl(struct nvm_tgt_dev *tgt_dev, struct ppa_addr ppa,
}
EXPORT_SYMBOL(nvm_get_tgt_bb_tbl);

+int nvm_get_chunk_meta(struct nvm_tgt_dev *tgt_dev, struct nvm_chk_meta *meta,
+ struct ppa_addr ppa, int nchks)
+{
+ struct nvm_dev *dev = tgt_dev->parent;
+
+ nvm_map_to_dev(tgt_dev, &ppa);
+ ppa = generic_to_dev_addr(tgt_dev, ppa);
+
+ return dev->ops->get_chk_meta(tgt_dev->parent, meta,
+ (sector_t)ppa.ppa, nchks);
+}
+EXPORT_SYMBOL(nvm_get_chunk_meta);
+
static int nvm_core_init(struct nvm_dev *dev)
{
struct nvm_id *id = &dev->identity;
diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
index 839c0b96466a..8f81f41a504c 100644
--- a/drivers/nvme/host/lightnvm.c
+++ b/drivers/nvme/host/lightnvm.c
@@ -35,6 +35,10 @@ enum nvme_nvm_admin_opcode {
nvme_nvm_admin_set_bb_tbl = 0xf1,
};

+enum nvme_nvm_log_page {
+ NVME_NVM_LOG_REPORT_CHUNK = 0xca,
+};
+
struct nvme_nvm_ph_rw {
__u8 opcode;
__u8 flags;
@@ -236,6 +240,16 @@ struct nvme_nvm_id20 {
__u8 vs[1024];
};

+struct nvme_nvm_chk_meta {
+ __u8 state;
+ __u8 type;
+ __u8 wli;
+ __u8 rsvd[5];
+ __le64 slba;
+ __le64 cnlb;
+ __le64 wp;
+};
+
/*
* Check we didn't inadvertently grow the command struct
*/
@@ -252,6 +266,9 @@ static inline void _nvme_nvm_check_size(void)
BUILD_BUG_ON(sizeof(struct nvme_nvm_bb_tbl) != 64);
BUILD_BUG_ON(sizeof(struct nvme_nvm_id20_addrf) != 8);
BUILD_BUG_ON(sizeof(struct nvme_nvm_id20) != NVME_IDENTIFY_DATA_SIZE);
+ BUILD_BUG_ON(sizeof(struct nvme_nvm_chk_meta) != 32);
+ BUILD_BUG_ON(sizeof(struct nvme_nvm_chk_meta) !=
+ sizeof(struct nvm_chk_meta));
}

static int init_grp(struct nvm_id *nvm_id, struct nvme_nvm_id12 *id12)
@@ -474,6 +491,48 @@ static int nvme_nvm_set_bb_tbl(struct nvm_dev *nvmdev, struct ppa_addr *ppas,
return ret;
}

+static int nvme_nvm_get_chk_meta(struct nvm_dev *ndev,
+ struct nvm_chk_meta *meta,
+ sector_t slba, int nchks)
+{
+ struct nvme_ns *ns = ndev->q->queuedata;
+ struct nvme_ctrl *ctrl = ns->ctrl;
+ struct nvme_nvm_chk_meta *dev_meta = (struct nvme_nvm_chk_meta *)meta;
+ size_t left = nchks * sizeof(struct nvme_nvm_chk_meta);
+ size_t offset, len;
+ int ret, i;
+
+ offset = slba * sizeof(struct nvme_nvm_chk_meta);
+
+ while (left) {
+ len = min_t(unsigned, left, ctrl->max_hw_sectors << 9);
+
+ ret = nvme_get_log_ext(ctrl, ns, NVME_NVM_LOG_REPORT_CHUNK,
+ dev_meta, len, offset);
+ if (ret) {
+ dev_err(ctrl->device, "Get REPORT CHUNK log error\n");
+ break;
+ }
+
+ for (i = 0; i < len; i += sizeof(struct nvme_nvm_chk_meta)) {
+ meta->state = dev_meta->state;
+ meta->type = dev_meta->type;
+ meta->wli = dev_meta->wli;
+ meta->slba = le64_to_cpu(dev_meta->slba);
+ meta->cnlb = le64_to_cpu(dev_meta->cnlb);
+ meta->wp = le64_to_cpu(dev_meta->wp);
+
+ meta++;
+ dev_meta++;
+ }
+
+ offset += len;
+ left -= len;
+ }
+
+ return ret;
+}
+
static inline void nvme_nvm_rqtocmd(struct nvm_rq *rqd, struct nvme_ns *ns,
struct nvme_nvm_command *c)
{
@@ -605,6 +664,8 @@ static struct nvm_dev_ops nvme_nvm_dev_ops = {
.get_bb_tbl = nvme_nvm_get_bb_tbl,
.set_bb_tbl = nvme_nvm_set_bb_tbl,

+ .get_chk_meta = nvme_nvm_get_chk_meta,
+
.submit_io = nvme_nvm_submit_io,
.submit_io_sync = nvme_nvm_submit_io_sync,

diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 1ca08f4993ba..12abe16d6e64 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -396,6 +396,10 @@ int nvme_reset_ctrl(struct nvme_ctrl *ctrl);
int nvme_delete_ctrl(struct nvme_ctrl *ctrl);
int nvme_delete_ctrl_sync(struct nvme_ctrl *ctrl);

+int nvme_get_log_ext(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
+ u8 log_page, void *log,
+ size_t size, size_t offset);
+
extern const struct attribute_group nvme_ns_id_attr_group;
extern const struct block_device_operations nvme_ns_head_ops;

diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h
index e55b10573c99..f056cf72144f 100644
--- a/include/linux/lightnvm.h
+++ b/include/linux/lightnvm.h
@@ -49,10 +49,13 @@ struct nvm_rq;
struct nvm_id;
struct nvm_dev;
struct nvm_tgt_dev;
+struct nvm_chk_meta;

typedef int (nvm_id_fn)(struct nvm_dev *, struct nvm_id *);
typedef int (nvm_op_bb_tbl_fn)(struct nvm_dev *, struct ppa_addr, u8 *);
typedef int (nvm_op_set_bb_fn)(struct nvm_dev *, struct ppa_addr *,
int, int);
+typedef int (nvm_get_chk_meta_fn)(struct nvm_dev *, struct nvm_chk_meta *,
+ sector_t, int);
typedef int (nvm_submit_io_fn)(struct nvm_dev *, struct nvm_rq *);
typedef int (nvm_submit_io_sync_fn)(struct nvm_dev *, struct nvm_rq *);
typedef void *(nvm_create_dma_pool_fn)(struct nvm_dev *, char *);
@@ -66,6 +69,8 @@ struct nvm_dev_ops {
nvm_op_bb_tbl_fn *get_bb_tbl;
nvm_op_set_bb_fn *set_bb_tbl;

+ nvm_get_chk_meta_fn *get_chk_meta;
+
nvm_submit_io_fn *submit_io;
nvm_submit_io_sync_fn *submit_io_sync;

@@ -353,6 +358,20 @@ struct nvm_dev {
struct list_head targets;
};

+/*
+ * Note: The structure size is linked to nvme_nvm_chk_meta such that the same
+ * buffer can be used when converting from little endian to cpu addressing.
+ */
+struct nvm_chk_meta {
+ u8 state;
+ u8 type;
+ u8 wli;
+ u8 rsvd[5];
+ u64 slba;
+ u64 cnlb;
+ u64 wp;
+};
+
static inline struct ppa_addr generic_to_dev_addr(struct nvm_tgt_dev *tgt_dev,
struct ppa_addr r)
{


2018-02-15 18:34:27

by Javier Gonzalez

[permalink] [raw]
Subject: Re: [PATCH 1/8] lightnvm: exposed generic geometry to targets


> On 15 Feb 2018, at 02.13, Matias Bjørling <[email protected]> wrote:
>
>> On 02/13/2018 03:06 PM, Javier González wrote:
>> With the inclusion of 2.0 support, we need a generic geometry that
>> describes the OCSSD independently of the specification that it
>> implements. Otherwise, geometry specific code is required, which
>> complicates targets and makes maintenance much more difficult.
>> This patch refactors the identify path and populates a generic geometry
>> that is then given to the targets on creation. Since the 2.0 geometry is
>> much more abstract than 1.2, the generic geometry resembles 2.0, but it
>> is not identical, as it needs to understand 1.2 abstractions too.
>> Signed-off-by: Javier González <[email protected]>
>> ---
>> drivers/lightnvm/core.c | 143 ++++++---------
>> drivers/lightnvm/pblk-core.c | 16 +-
>> drivers/lightnvm/pblk-gc.c | 2 +-
>> drivers/lightnvm/pblk-init.c | 149 ++++++++-------
>> drivers/lightnvm/pblk-read.c | 2 +-
>> drivers/lightnvm/pblk-recovery.c | 14 +-
>> drivers/lightnvm/pblk-rl.c | 2 +-
>> drivers/lightnvm/pblk-sysfs.c | 39 ++--
>> drivers/lightnvm/pblk-write.c | 2 +-
>> drivers/lightnvm/pblk.h | 105 +++++------
>> drivers/nvme/host/lightnvm.c | 379 ++++++++++++++++++++++++---------------
>> include/linux/lightnvm.h | 220 +++++++++++++----------
>> 12 files changed, 586 insertions(+), 487 deletions(-)
>> diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c
>> index 9b1255b3e05e..80492fa6ee76 100644
>> --- a/drivers/lightnvm/core.c
>> +++ b/drivers/lightnvm/core.c
>> @@ -111,6 +111,7 @@ static void nvm_release_luns_err(struct nvm_dev *dev, int lun_begin,
>> static void nvm_remove_tgt_dev(struct nvm_tgt_dev *tgt_dev, int clear)
>> {
>> struct nvm_dev *dev = tgt_dev->parent;
>> + struct nvm_dev_geo *dev_geo = &dev->dev_geo;
>> struct nvm_dev_map *dev_map = tgt_dev->map;
>> int i, j;
>> @@ -122,7 +123,7 @@ static void nvm_remove_tgt_dev(struct nvm_tgt_dev *tgt_dev, int clear)
>> if (clear) {
>> for (j = 0; j < ch_map->nr_luns; j++) {
>> int lun = j + lun_offs[j];
>> - int lunid = (ch * dev->geo.nr_luns) + lun;
>> + int lunid = (ch * dev_geo->num_lun) + lun;
>> WARN_ON(!test_and_clear_bit(lunid,
>> dev->lun_map));
>> @@ -143,19 +144,20 @@ static struct nvm_tgt_dev *nvm_create_tgt_dev(struct nvm_dev *dev,
>> u16 lun_begin, u16 lun_end,
>> u16 op)
>> {
>> + struct nvm_dev_geo *dev_geo = &dev->dev_geo;
>> struct nvm_tgt_dev *tgt_dev = NULL;
>> struct nvm_dev_map *dev_rmap = dev->rmap;
>> struct nvm_dev_map *dev_map;
>> struct ppa_addr *luns;
>> int nr_luns = lun_end - lun_begin + 1;
>> int luns_left = nr_luns;
>> - int nr_chnls = nr_luns / dev->geo.nr_luns;
>> - int nr_chnls_mod = nr_luns % dev->geo.nr_luns;
>> - int bch = lun_begin / dev->geo.nr_luns;
>> - int blun = lun_begin % dev->geo.nr_luns;
>> + int nr_chnls = nr_luns / dev_geo->num_lun;
>> + int nr_chnls_mod = nr_luns % dev_geo->num_lun;
>> + int bch = lun_begin / dev_geo->num_lun;
>> + int blun = lun_begin % dev_geo->num_lun;
>> int lunid = 0;
>> int lun_balanced = 1;
>> - int prev_nr_luns;
>> + int sec_per_lun, prev_nr_luns;
>> int i, j;
>> nr_chnls = (nr_chnls_mod == 0) ? nr_chnls : nr_chnls + 1;
>> @@ -173,15 +175,15 @@ static struct nvm_tgt_dev *nvm_create_tgt_dev(struct nvm_dev *dev,
>> if (!luns)
>> goto err_luns;
>> - prev_nr_luns = (luns_left > dev->geo.nr_luns) ?
>> - dev->geo.nr_luns : luns_left;
>> + prev_nr_luns = (luns_left > dev_geo->num_lun) ?
>> + dev_geo->num_lun : luns_left;
>> for (i = 0; i < nr_chnls; i++) {
>> struct nvm_ch_map *ch_rmap = &dev_rmap->chnls[i + bch];
>> int *lun_roffs = ch_rmap->lun_offs;
>> struct nvm_ch_map *ch_map = &dev_map->chnls[i];
>> int *lun_offs;
>> - int luns_in_chnl = (luns_left > dev->geo.nr_luns) ?
>> - dev->geo.nr_luns : luns_left;
>> + int luns_in_chnl = (luns_left > dev_geo->num_lun) ?
>> + dev_geo->num_lun : luns_left;
>> if (lun_balanced && prev_nr_luns != luns_in_chnl)
>> lun_balanced = 0;
>> @@ -215,18 +217,23 @@ static struct nvm_tgt_dev *nvm_create_tgt_dev(struct nvm_dev *dev,
>> if (!tgt_dev)
>> goto err_ch;
>> - memcpy(&tgt_dev->geo, &dev->geo, sizeof(struct nvm_geo));
>> /* Target device only owns a portion of the physical device */
>> - tgt_dev->geo.nr_chnls = nr_chnls;
>> + tgt_dev->geo.num_ch = nr_chnls;
>> + tgt_dev->geo.num_lun = (lun_balanced) ? prev_nr_luns : -1;
>> tgt_dev->geo.all_luns = nr_luns;
>> - tgt_dev->geo.nr_luns = (lun_balanced) ? prev_nr_luns : -1;
>> + tgt_dev->geo.all_chunks = nr_luns * dev_geo->c.num_chk;
>> +
>> + tgt_dev->geo.max_rq_size = dev->ops->max_phys_sect * dev_geo->c.csecs;
>> tgt_dev->geo.op = op;
>> - tgt_dev->total_secs = nr_luns * tgt_dev->geo.sec_per_lun;
>> +
>> + sec_per_lun = dev_geo->c.clba * dev_geo->c.num_chk;
>> + tgt_dev->geo.total_secs = nr_luns * sec_per_lun;
>> +
>> + tgt_dev->geo.c = dev_geo->c;
>> +
>> tgt_dev->q = dev->q;
>> tgt_dev->map = dev_map;
>> tgt_dev->luns = luns;
>> - memcpy(&tgt_dev->identity, &dev->identity, sizeof(struct nvm_id));
>> -
>> tgt_dev->parent = dev;
>> return tgt_dev;
>> @@ -268,12 +275,12 @@ static struct nvm_tgt_type *nvm_find_target_type(const char *name)
>> return tt;
>> }
>> -static int nvm_config_check_luns(struct nvm_geo *geo, int lun_begin,
>> +static int nvm_config_check_luns(struct nvm_dev_geo *dev_geo, int lun_begin,
>> int lun_end)
>> {
>> - if (lun_begin > lun_end || lun_end >= geo->all_luns) {
>> + if (lun_begin > lun_end || lun_end >= dev_geo->all_luns) {
>> pr_err("nvm: lun out of bound (%u:%u > %u)\n",
>> - lun_begin, lun_end, geo->all_luns - 1);
>> + lun_begin, lun_end, dev_geo->all_luns - 1);
>> return -EINVAL;
>> }
>> @@ -283,24 +290,24 @@ static int nvm_config_check_luns(struct nvm_geo *geo, int lun_begin,
>> static int __nvm_config_simple(struct nvm_dev *dev,
>> struct nvm_ioctl_create_simple *s)
>> {
>> - struct nvm_geo *geo = &dev->geo;
>> + struct nvm_dev_geo *dev_geo = &dev->dev_geo;
>> if (s->lun_begin == -1 && s->lun_end == -1) {
>> s->lun_begin = 0;
>> - s->lun_end = geo->all_luns - 1;
>> + s->lun_end = dev_geo->all_luns - 1;
>> }
>> - return nvm_config_check_luns(geo, s->lun_begin, s->lun_end);
>> + return nvm_config_check_luns(dev_geo, s->lun_begin, s->lun_end);
>> }
>> static int __nvm_config_extended(struct nvm_dev *dev,
>> struct nvm_ioctl_create_extended *e)
>> {
>> - struct nvm_geo *geo = &dev->geo;
>> + struct nvm_dev_geo *dev_geo = &dev->dev_geo;
>> if (e->lun_begin == 0xFFFF && e->lun_end == 0xFFFF) {
>> e->lun_begin = 0;
>> - e->lun_end = dev->geo.all_luns - 1;
>> + e->lun_end = dev_geo->all_luns - 1;
>> }
>> /* op not set falls into target's default */
>> @@ -313,7 +320,7 @@ static int __nvm_config_extended(struct nvm_dev *dev,
>> return -EINVAL;
>> }
>> - return nvm_config_check_luns(geo, e->lun_begin, e->lun_end);
>> + return nvm_config_check_luns(dev_geo, e->lun_begin, e->lun_end);
>> }
>> static int nvm_create_tgt(struct nvm_dev *dev, struct nvm_ioctl_create *create)
>> @@ -496,6 +503,7 @@ static int nvm_remove_tgt(struct nvm_dev *dev, struct nvm_ioctl_remove *remove)
>> static int nvm_register_map(struct nvm_dev *dev)
>> {
>> + struct nvm_dev_geo *dev_geo = &dev->dev_geo;
>> struct nvm_dev_map *rmap;
>> int i, j;
>> @@ -503,15 +511,15 @@ static int nvm_register_map(struct nvm_dev *dev)
>> if (!rmap)
>> goto err_rmap;
>> - rmap->chnls = kcalloc(dev->geo.nr_chnls, sizeof(struct nvm_ch_map),
>> + rmap->chnls = kcalloc(dev_geo->num_ch, sizeof(struct nvm_ch_map),
>> GFP_KERNEL);
>> if (!rmap->chnls)
>> goto err_chnls;
>> - for (i = 0; i < dev->geo.nr_chnls; i++) {
>> + for (i = 0; i < dev_geo->num_ch; i++) {
>> struct nvm_ch_map *ch_rmap;
>> int *lun_roffs;
>> - int luns_in_chnl = dev->geo.nr_luns;
>> + int luns_in_chnl = dev_geo->num_lun;
>> ch_rmap = &rmap->chnls[i];
>> @@ -542,10 +550,11 @@ static int nvm_register_map(struct nvm_dev *dev)
>> static void nvm_unregister_map(struct nvm_dev *dev)
>> {
>> + struct nvm_dev_geo *dev_geo = &dev->dev_geo;
>> struct nvm_dev_map *rmap = dev->rmap;
>> int i;
>> - for (i = 0; i < dev->geo.nr_chnls; i++)
>> + for (i = 0; i < dev_geo->num_ch; i++)
>> kfree(rmap->chnls[i].lun_offs);
>> kfree(rmap->chnls);
>> @@ -674,7 +683,7 @@ static int nvm_set_rqd_ppalist(struct nvm_tgt_dev *tgt_dev, struct nvm_rq *rqd,
>> int i, plane_cnt, pl_idx;
>> struct ppa_addr ppa;
>> - if (geo->plane_mode == NVM_PLANE_SINGLE && nr_ppas == 1) {
>> + if (geo->c.pln_mode == NVM_PLANE_SINGLE && nr_ppas == 1) {
>> rqd->nr_ppas = nr_ppas;
>> rqd->ppa_addr = ppas[0];
>> @@ -688,7 +697,7 @@ static int nvm_set_rqd_ppalist(struct nvm_tgt_dev *tgt_dev, struct nvm_rq *rqd,
>> return -ENOMEM;
>> }
>> - plane_cnt = geo->plane_mode;
>> + plane_cnt = geo->c.pln_mode;
>> rqd->nr_ppas *= plane_cnt;
>> for (i = 0; i < nr_ppas; i++) {
>> @@ -811,18 +820,18 @@ EXPORT_SYMBOL(nvm_end_io);
>> */
>> int nvm_bb_tbl_fold(struct nvm_dev *dev, u8 *blks, int nr_blks)
>> {
>> - struct nvm_geo *geo = &dev->geo;
>> + struct nvm_dev_geo *dev_geo = &dev->dev_geo;
>> int blk, offset, pl, blktype;
>> - if (nr_blks != geo->nr_chks * geo->plane_mode)
>> + if (nr_blks != dev_geo->c.num_chk * dev_geo->c.pln_mode)
>> return -EINVAL;
>> - for (blk = 0; blk < geo->nr_chks; blk++) {
>> - offset = blk * geo->plane_mode;
>> + for (blk = 0; blk < dev_geo->c.num_chk; blk++) {
>> + offset = blk * dev_geo->c.pln_mode;
>> blktype = blks[offset];
>> /* Bad blocks on any planes take precedence over other types */
>> - for (pl = 0; pl < geo->plane_mode; pl++) {
>> + for (pl = 0; pl < dev_geo->c.pln_mode; pl++) {
>> if (blks[offset + pl] &
>> (NVM_BLK_T_BAD|NVM_BLK_T_GRWN_BAD)) {
>> blktype = blks[offset + pl];
>> @@ -833,7 +842,7 @@ int nvm_bb_tbl_fold(struct nvm_dev *dev, u8 *blks, int nr_blks)
>> blks[blk] = blktype;
>> }
>> - return geo->nr_chks;
>> + return dev_geo->c.num_chk;
>> }
>> EXPORT_SYMBOL(nvm_bb_tbl_fold);
>> @@ -850,44 +859,10 @@ EXPORT_SYMBOL(nvm_get_tgt_bb_tbl);
>> static int nvm_core_init(struct nvm_dev *dev)
>> {
>> - struct nvm_id *id = &dev->identity;
>> - struct nvm_geo *geo = &dev->geo;
>> + struct nvm_dev_geo *dev_geo = &dev->dev_geo;
>> int ret;
>> - memcpy(&geo->ppaf, &id->ppaf, sizeof(struct nvm_addr_format));
>> -
>> - if (id->mtype != 0) {
>> - pr_err("nvm: memory type not supported\n");
>> - return -EINVAL;
>> - }
>> -
>> - /* Whole device values */
>> - geo->nr_chnls = id->num_ch;
>> - geo->nr_luns = id->num_lun;
>> -
>> - /* Generic device geometry values */
>> - geo->ws_min = id->ws_min;
>> - geo->ws_opt = id->ws_opt;
>> - geo->ws_seq = id->ws_seq;
>> - geo->ws_per_chk = id->ws_per_chk;
>> - geo->nr_chks = id->num_chk;
>> - geo->sec_size = id->csecs;
>> - geo->oob_size = id->sos;
>> - geo->mccap = id->mccap;
>> - geo->max_rq_size = dev->ops->max_phys_sect * geo->sec_size;
>> -
>> - geo->sec_per_chk = id->clba;
>> - geo->sec_per_lun = geo->sec_per_chk * geo->nr_chks;
>> - geo->all_luns = geo->nr_luns * geo->nr_chnls;
>> -
>> - /* 1.2 spec device geometry values */
>> - geo->plane_mode = 1 << geo->ws_seq;
>> - geo->nr_planes = geo->ws_opt / geo->ws_min;
>> - geo->sec_per_pg = geo->ws_min;
>> - geo->sec_per_pl = geo->sec_per_pg * geo->nr_planes;
>> -
>> - dev->total_secs = geo->all_luns * geo->sec_per_lun;
>> - dev->lun_map = kcalloc(BITS_TO_LONGS(geo->all_luns),
>> + dev->lun_map = kcalloc(BITS_TO_LONGS(dev_geo->all_luns),
>> sizeof(unsigned long), GFP_KERNEL);
>> if (!dev->lun_map)
>> return -ENOMEM;
>> @@ -901,7 +876,7 @@ static int nvm_core_init(struct nvm_dev *dev)
>> if (ret)
>> goto err_fmtype;
>> - blk_queue_logical_block_size(dev->q, geo->sec_size);
>> + blk_queue_logical_block_size(dev->q, dev_geo->c.csecs);
>> return 0;
>> err_fmtype:
>> kfree(dev->lun_map);
>> @@ -923,19 +898,17 @@ static void nvm_free(struct nvm_dev *dev)
>> static int nvm_init(struct nvm_dev *dev)
>> {
>> - struct nvm_geo *geo = &dev->geo;
>> + struct nvm_dev_geo *dev_geo = &dev->dev_geo;
>> int ret = -EINVAL;
>> - if (dev->ops->identity(dev, &dev->identity)) {
>> + if (dev->ops->identity(dev)) {
>> pr_err("nvm: device could not be identified\n");
>> goto err;
>> }
>> - if (dev->identity.ver_id != 1 && dev->identity.ver_id != 2) {
>> - pr_err("nvm: device ver_id %d not supported by kernel.\n",
>> - dev->identity.ver_id);
>> - goto err;
>> - }
>> + pr_debug("nvm: ver:%u.%u nvm_vendor:%x\n",
>> + dev_geo->major_ver_id, dev_geo->minor_ver_id,
>> + dev_geo->c.vmnt);
>> ret = nvm_core_init(dev);
>> if (ret) {
>> @@ -943,10 +916,10 @@ static int nvm_init(struct nvm_dev *dev)
>> goto err;
>> }
>> - pr_info("nvm: registered %s [%u/%u/%u/%u/%u/%u]\n",
>> - dev->name, geo->sec_per_pg, geo->nr_planes,
>> - geo->ws_per_chk, geo->nr_chks,
>> - geo->all_luns, geo->nr_chnls);
>> + pr_info("nvm: registered %s [%u/%u/%u/%u/%u]\n",
>> + dev->name, dev_geo->c.ws_min, dev_geo->c.ws_opt,
>> + dev_geo->c.num_chk, dev_geo->all_luns,
>> + dev_geo->num_ch);
>> return 0;
>> err:
>> pr_err("nvm: failed to initialize nvm\n");
>> diff --git a/drivers/lightnvm/pblk-core.c b/drivers/lightnvm/pblk-core.c
>> index 22e61cd4f801..519af8b9eab7 100644
>> --- a/drivers/lightnvm/pblk-core.c
>> +++ b/drivers/lightnvm/pblk-core.c
>> @@ -613,7 +613,7 @@ static int pblk_line_submit_emeta_io(struct pblk *pblk, struct pblk_line *line,
>> memset(&rqd, 0, sizeof(struct nvm_rq));
>> rq_ppas = pblk_calc_secs(pblk, left_ppas, 0);
>> - rq_len = rq_ppas * geo->sec_size;
>> + rq_len = rq_ppas * geo->c.csecs;
>> bio = pblk_bio_map_addr(pblk, emeta_buf, rq_ppas, rq_len,
>> l_mg->emeta_alloc_type, GFP_KERNEL);
>> @@ -722,7 +722,7 @@ u64 pblk_line_smeta_start(struct pblk *pblk, struct pblk_line *line)
>> if (bit >= lm->blk_per_line)
>> return -1;
>> - return bit * geo->sec_per_pl;
>> + return bit * geo->c.ws_opt;
>> }
>> static int pblk_line_submit_smeta_io(struct pblk *pblk, struct pblk_line *line,
>> @@ -1035,19 +1035,19 @@ static int pblk_line_init_bb(struct pblk *pblk, struct pblk_line *line,
>> /* Capture bad block information on line mapping bitmaps */
>> while ((bit = find_next_bit(line->blk_bitmap, lm->blk_per_line,
>> bit + 1)) < lm->blk_per_line) {
>> - off = bit * geo->sec_per_pl;
>> + off = bit * geo->c.ws_opt;
>> bitmap_shift_left(l_mg->bb_aux, l_mg->bb_template, off,
>> lm->sec_per_line);
>> bitmap_or(line->map_bitmap, line->map_bitmap, l_mg->bb_aux,
>> lm->sec_per_line);
>> - line->sec_in_line -= geo->sec_per_chk;
>> + line->sec_in_line -= geo->c.clba;
>> if (bit >= lm->emeta_bb)
>> nr_bb++;
>> }
>> /* Mark smeta metadata sectors as bad sectors */
>> bit = find_first_zero_bit(line->blk_bitmap, lm->blk_per_line);
>> - off = bit * geo->sec_per_pl;
>> + off = bit * geo->c.ws_opt;
>> bitmap_set(line->map_bitmap, off, lm->smeta_sec);
>> line->sec_in_line -= lm->smeta_sec;
>> line->smeta_ssec = off;
>> @@ -1066,10 +1066,10 @@ static int pblk_line_init_bb(struct pblk *pblk, struct pblk_line *line,
>> emeta_secs = lm->emeta_sec[0];
>> off = lm->sec_per_line;
>> while (emeta_secs) {
>> - off -= geo->sec_per_pl;
>> + off -= geo->c.ws_opt;
>> if (!test_bit(off, line->invalid_bitmap)) {
>> - bitmap_set(line->invalid_bitmap, off, geo->sec_per_pl);
>> - emeta_secs -= geo->sec_per_pl;
>> + bitmap_set(line->invalid_bitmap, off, geo->c.ws_opt);
>> + emeta_secs -= geo->c.ws_opt;
>> }
>> }
>> diff --git a/drivers/lightnvm/pblk-gc.c b/drivers/lightnvm/pblk-gc.c
>> index 320f99af99e9..16afea3f5541 100644
>> --- a/drivers/lightnvm/pblk-gc.c
>> +++ b/drivers/lightnvm/pblk-gc.c
>> @@ -88,7 +88,7 @@ static void pblk_gc_line_ws(struct work_struct *work)
>> up(&gc->gc_sem);
>> - gc_rq->data = vmalloc(gc_rq->nr_secs * geo->sec_size);
>> + gc_rq->data = vmalloc(gc_rq->nr_secs * geo->c.csecs);
>> if (!gc_rq->data) {
>> pr_err("pblk: could not GC line:%d (%d/%d)\n",
>> line->id, *line->vsc, gc_rq->nr_secs);
>> diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c
>> index 86a94a7faa96..72b7902e5d1c 100644
>> --- a/drivers/lightnvm/pblk-init.c
>> +++ b/drivers/lightnvm/pblk-init.c
>> @@ -80,7 +80,7 @@ static size_t pblk_trans_map_size(struct pblk *pblk)
>> {
>> int entry_size = 8;
>> - if (pblk->ppaf_bitsize < 32)
>> + if (pblk->addrf_len < 32)
>> entry_size = 4;
>> return entry_size * pblk->rl.nr_secs;
>> @@ -146,7 +146,7 @@ static int pblk_rwb_init(struct pblk *pblk)
>> return -ENOMEM;
>> power_size = get_count_order(nr_entries);
>> - power_seg_sz = get_count_order(geo->sec_size);
>> + power_seg_sz = get_count_order(geo->c.csecs);
>> return pblk_rb_init(&pblk->rwb, entries, power_size, power_seg_sz);
>> }
>> @@ -154,47 +154,63 @@ static int pblk_rwb_init(struct pblk *pblk)
>> /* Minimum pages needed within a lun */
>> #define ADDR_POOL_SIZE 64
>> -static int pblk_set_ppaf(struct pblk *pblk)
>> +static int pblk_set_addrf_12(struct nvm_geo *geo,
>> + struct nvm_addr_format_12 *dst)
>> {
>> - struct nvm_tgt_dev *dev = pblk->dev;
>> - struct nvm_geo *geo = &dev->geo;
>> - struct nvm_addr_format ppaf = geo->ppaf;
>> + struct nvm_addr_format_12 *src =
>> + (struct nvm_addr_format_12 *)&geo->c.addrf;
>> int power_len;
>> /* Re-calculate channel and lun format to adapt to configuration */
>> - power_len = get_count_order(geo->nr_chnls);
>> - if (1 << power_len != geo->nr_chnls) {
>> + power_len = get_count_order(geo->num_ch);
>> + if (1 << power_len != geo->num_ch) {
>> pr_err("pblk: supports only power-of-two channel config.\n");
>> return -EINVAL;
>> }
>> - ppaf.ch_len = power_len;
>> + dst->ch_len = power_len;
>> - power_len = get_count_order(geo->nr_luns);
>> - if (1 << power_len != geo->nr_luns) {
>> + power_len = get_count_order(geo->num_lun);
>> + if (1 << power_len != geo->num_lun) {
>> pr_err("pblk: supports only power-of-two LUN config.\n");
>> return -EINVAL;
>> }
>> - ppaf.lun_len = power_len;
>> + dst->lun_len = power_len;
>> - pblk->ppaf.sec_offset = 0;
>> - pblk->ppaf.pln_offset = ppaf.sect_len;
>> - pblk->ppaf.ch_offset = pblk->ppaf.pln_offset + ppaf.pln_len;
>> - pblk->ppaf.lun_offset = pblk->ppaf.ch_offset + ppaf.ch_len;
>> - pblk->ppaf.pg_offset = pblk->ppaf.lun_offset + ppaf.lun_len;
>> - pblk->ppaf.blk_offset = pblk->ppaf.pg_offset + ppaf.pg_len;
>> - pblk->ppaf.sec_mask = (1ULL << ppaf.sect_len) - 1;
>> - pblk->ppaf.pln_mask = ((1ULL << ppaf.pln_len) - 1) <<
>> - pblk->ppaf.pln_offset;
>> - pblk->ppaf.ch_mask = ((1ULL << ppaf.ch_len) - 1) <<
>> - pblk->ppaf.ch_offset;
>> - pblk->ppaf.lun_mask = ((1ULL << ppaf.lun_len) - 1) <<
>> - pblk->ppaf.lun_offset;
>> - pblk->ppaf.pg_mask = ((1ULL << ppaf.pg_len) - 1) <<
>> - pblk->ppaf.pg_offset;
>> - pblk->ppaf.blk_mask = ((1ULL << ppaf.blk_len) - 1) <<
>> - pblk->ppaf.blk_offset;
>> + dst->blk_len = src->blk_len;
>> + dst->pg_len = src->pg_len;
>> + dst->pln_len = src->pln_len;
>> + dst->sec_len = src->sec_len;
>> - pblk->ppaf_bitsize = pblk->ppaf.blk_offset + ppaf.blk_len;
>> + dst->sec_offset = 0;
>> + dst->pln_offset = dst->sec_len;
>> + dst->ch_offset = dst->pln_offset + dst->pln_len;
>> + dst->lun_offset = dst->ch_offset + dst->ch_len;
>> + dst->pg_offset = dst->lun_offset + dst->lun_len;
>> + dst->blk_offset = dst->pg_offset + dst->pg_len;
>> +
>> + dst->sec_mask = ((1ULL << dst->sec_len) - 1) << dst->sec_offset;
>> + dst->pln_mask = ((1ULL << dst->pln_len) - 1) << dst->pln_offset;
>> + dst->ch_mask = ((1ULL << dst->ch_len) - 1) << dst->ch_offset;
>> + dst->lun_mask = ((1ULL << dst->lun_len) - 1) << dst->lun_offset;
>> + dst->pg_mask = ((1ULL << dst->pg_len) - 1) << dst->pg_offset;
>> + dst->blk_mask = ((1ULL << dst->blk_len) - 1) << dst->blk_offset;
>> +
>> + return dst->blk_offset + src->blk_len;
>> +}
>> +
>> +static int pblk_set_addrf(struct pblk *pblk)
>> +{
>> + struct nvm_tgt_dev *dev = pblk->dev;
>> + struct nvm_geo *geo = &dev->geo;
>> + int mod;
>> +
>> + div_u64_rem(geo->c.clba, pblk->min_write_pgs, &mod);
>> + if (mod) {
>> + pr_err("pblk: bad configuration of sectors/pages\n");
>> + return -EINVAL;
>> + }
>> +
>> + pblk->addrf_len = pblk_set_addrf_12(geo, (void *)&pblk->addrf);
>> return 0;
>> }
>> @@ -253,8 +269,7 @@ static int pblk_core_init(struct pblk *pblk)
>> struct nvm_tgt_dev *dev = pblk->dev;
>> struct nvm_geo *geo = &dev->geo;
>> - pblk->pgs_in_buffer = NVM_MEM_PAGE_WRITE * geo->sec_per_pg *
>> - geo->nr_planes * geo->all_luns;
>> + pblk->pgs_in_buffer = geo->c.mw_cunits * geo->c.ws_opt * geo->all_luns;
>> if (pblk_init_global_caches(pblk))
>> return -ENOMEM;
>> @@ -305,7 +320,7 @@ static int pblk_core_init(struct pblk *pblk)
>> if (!pblk->r_end_wq)
>> goto free_bb_wq;
>> - if (pblk_set_ppaf(pblk))
>> + if (pblk_set_addrf(pblk))
>> goto free_r_end_wq;
>> if (pblk_rwb_init(pblk))
>> @@ -434,7 +449,7 @@ static void *pblk_bb_get_log(struct pblk *pblk)
>> int i, nr_blks, blk_per_lun;
>> int ret;
>> - blk_per_lun = geo->nr_chks * geo->plane_mode;
>> + blk_per_lun = geo->c.num_chk * geo->c.pln_mode;
>> nr_blks = blk_per_lun * geo->all_luns;
>> log = kmalloc(nr_blks, GFP_KERNEL);
>> @@ -484,7 +499,7 @@ static int pblk_luns_init(struct pblk *pblk, struct ppa_addr *luns)
>> int i;
>> /* TODO: Implement unbalanced LUN support */
>> - if (geo->nr_luns < 0) {
>> + if (geo->num_lun < 0) {
>> pr_err("pblk: unbalanced LUN config.\n");
>> return -EINVAL;
>> }
>> @@ -496,9 +511,9 @@ static int pblk_luns_init(struct pblk *pblk, struct ppa_addr *luns)
>> for (i = 0; i < geo->all_luns; i++) {
>> /* Stripe across channels */
>> - int ch = i % geo->nr_chnls;
>> - int lun_raw = i / geo->nr_chnls;
>> - int lunid = lun_raw + ch * geo->nr_luns;
>> + int ch = i % geo->num_ch;
>> + int lun_raw = i / geo->num_ch;
>> + int lunid = lun_raw + ch * geo->num_lun;
>> rlun = &pblk->luns[i];
>> rlun->bppa = luns[lunid];
>> @@ -552,18 +567,18 @@ static unsigned int calc_emeta_len(struct pblk *pblk)
>> /* Round to sector size so that lba_list starts on its own sector */
>> lm->emeta_sec[1] = DIV_ROUND_UP(
>> sizeof(struct line_emeta) + lm->blk_bitmap_len +
>> - sizeof(struct wa_counters), geo->sec_size);
>> - lm->emeta_len[1] = lm->emeta_sec[1] * geo->sec_size;
>> + sizeof(struct wa_counters), geo->c.csecs);
>> + lm->emeta_len[1] = lm->emeta_sec[1] * geo->c.csecs;
>> /* Round to sector size so that vsc_list starts on its own sector */
>> lm->dsec_per_line = lm->sec_per_line - lm->emeta_sec[0];
>> lm->emeta_sec[2] = DIV_ROUND_UP(lm->dsec_per_line * sizeof(u64),
>> - geo->sec_size);
>> - lm->emeta_len[2] = lm->emeta_sec[2] * geo->sec_size;
>> + geo->c.csecs);
>> + lm->emeta_len[2] = lm->emeta_sec[2] * geo->c.csecs;
>> lm->emeta_sec[3] = DIV_ROUND_UP(l_mg->nr_lines * sizeof(u32),
>> - geo->sec_size);
>> - lm->emeta_len[3] = lm->emeta_sec[3] * geo->sec_size;
>> + geo->c.csecs);
>> + lm->emeta_len[3] = lm->emeta_sec[3] * geo->c.csecs;
>> lm->vsc_list_len = l_mg->nr_lines * sizeof(u32);
>> @@ -594,13 +609,13 @@ static void pblk_set_provision(struct pblk *pblk, long nr_free_blks)
>> * on user capacity consider only provisioned blocks
>> */
>> pblk->rl.total_blocks = nr_free_blks;
>> - pblk->rl.nr_secs = nr_free_blks * geo->sec_per_chk;
>> + pblk->rl.nr_secs = nr_free_blks * geo->c.clba;
>> /* Consider sectors used for metadata */
>> sec_meta = (lm->smeta_sec + lm->emeta_sec[0]) * l_mg->nr_free_lines;
>> - blk_meta = DIV_ROUND_UP(sec_meta, geo->sec_per_chk);
>> + blk_meta = DIV_ROUND_UP(sec_meta, geo->c.clba);
>> - pblk->capacity = (provisioned - blk_meta) * geo->sec_per_chk;
>> + pblk->capacity = (provisioned - blk_meta) * geo->c.clba;
>> atomic_set(&pblk->rl.free_blocks, nr_free_blks);
>> atomic_set(&pblk->rl.free_user_blocks, nr_free_blks);
>> @@ -711,10 +726,10 @@ static int pblk_lines_init(struct pblk *pblk)
>> void *chunk_log;
>> unsigned int smeta_len, emeta_len;
>> long nr_bad_blks = 0, nr_free_blks = 0;
>> - int bb_distance, max_write_ppas, mod;
>> + int bb_distance, max_write_ppas;
>> int i, ret;
>> - pblk->min_write_pgs = geo->sec_per_pl * (geo->sec_size / PAGE_SIZE);
>> + pblk->min_write_pgs = geo->c.ws_opt * (geo->c.csecs / PAGE_SIZE);
>> max_write_ppas = pblk->min_write_pgs * geo->all_luns;
>> pblk->max_write_pgs = (max_write_ppas < nvm_max_phys_sects(dev)) ?
>> max_write_ppas : nvm_max_phys_sects(dev);
>> @@ -725,19 +740,13 @@ static int pblk_lines_init(struct pblk *pblk)
>> return -EINVAL;
>> }
>> - div_u64_rem(geo->sec_per_chk, pblk->min_write_pgs, &mod);
>> - if (mod) {
>> - pr_err("pblk: bad configuration of sectors/pages\n");
>> - return -EINVAL;
>> - }
>> -
>> - l_mg->nr_lines = geo->nr_chks;
>> + l_mg->nr_lines = geo->c.num_chk;
>> l_mg->log_line = l_mg->data_line = NULL;
>> l_mg->l_seq_nr = l_mg->d_seq_nr = 0;
>> l_mg->nr_free_lines = 0;
>> bitmap_zero(&l_mg->meta_bitmap, PBLK_DATA_LINES);
>> - lm->sec_per_line = geo->sec_per_chk * geo->all_luns;
>> + lm->sec_per_line = geo->c.clba * geo->all_luns;
>> lm->blk_per_line = geo->all_luns;
>> lm->blk_bitmap_len = BITS_TO_LONGS(geo->all_luns) * sizeof(long);
>> lm->sec_bitmap_len = BITS_TO_LONGS(lm->sec_per_line) * sizeof(long);
>> @@ -751,8 +760,8 @@ static int pblk_lines_init(struct pblk *pblk)
>> */
>> i = 1;
>> add_smeta_page:
>> - lm->smeta_sec = i * geo->sec_per_pl;
>> - lm->smeta_len = lm->smeta_sec * geo->sec_size;
>> + lm->smeta_sec = i * geo->c.ws_opt;
>> + lm->smeta_len = lm->smeta_sec * geo->c.csecs;
>> smeta_len = sizeof(struct line_smeta) + lm->lun_bitmap_len;
>> if (smeta_len > lm->smeta_len) {
>> @@ -765,8 +774,8 @@ static int pblk_lines_init(struct pblk *pblk)
>> */
>> i = 1;
>> add_emeta_page:
>> - lm->emeta_sec[0] = i * geo->sec_per_pl;
>> - lm->emeta_len[0] = lm->emeta_sec[0] * geo->sec_size;
>> + lm->emeta_sec[0] = i * geo->c.ws_opt;
>> + lm->emeta_len[0] = lm->emeta_sec[0] * geo->c.csecs;
>> emeta_len = calc_emeta_len(pblk);
>> if (emeta_len > lm->emeta_len[0]) {
>> @@ -779,7 +788,7 @@ static int pblk_lines_init(struct pblk *pblk)
>> lm->min_blk_line = 1;
>> if (geo->all_luns > 1)
>> lm->min_blk_line += DIV_ROUND_UP(lm->smeta_sec +
>> - lm->emeta_sec[0], geo->sec_per_chk);
>> + lm->emeta_sec[0], geo->c.clba);
>> if (lm->min_blk_line > lm->blk_per_line) {
>> pr_err("pblk: config. not supported. Min. LUN in line:%d\n",
>> @@ -803,9 +812,9 @@ static int pblk_lines_init(struct pblk *pblk)
>> goto fail_free_bb_template;
>> }
>> - bb_distance = (geo->all_luns) * geo->sec_per_pl;
>> + bb_distance = (geo->all_luns) * geo->c.ws_opt;
>> for (i = 0; i < lm->sec_per_line; i += bb_distance)
>> - bitmap_set(l_mg->bb_template, i, geo->sec_per_pl);
>> + bitmap_set(l_mg->bb_template, i, geo->c.ws_opt);
>> INIT_LIST_HEAD(&l_mg->free_list);
>> INIT_LIST_HEAD(&l_mg->corrupt_list);
>> @@ -982,9 +991,15 @@ static void *pblk_init(struct nvm_tgt_dev *dev, struct gendisk *tdisk,
>> struct pblk *pblk;
>> int ret;
>> - if (dev->identity.dom & NVM_RSP_L2P) {
>> + if (geo->c.version != NVM_OCSSD_SPEC_12) {
>> + pr_err("pblk: OCSSD version not supported (%u)\n",
>> + geo->c.version);
>> + return ERR_PTR(-EINVAL);
>> + }
>> +
>> + if (geo->c.version == NVM_OCSSD_SPEC_12 && geo->c.dom & NVM_RSP_L2P) {
>> pr_err("pblk: host-side L2P table not supported. (%x)\n",
>> - dev->identity.dom);
>> + geo->c.dom);
>> return ERR_PTR(-EINVAL);
>> }
>> @@ -1092,7 +1107,7 @@ static void *pblk_init(struct nvm_tgt_dev *dev, struct gendisk *tdisk,
>> blk_queue_write_cache(tqueue, true, false);
>> - tqueue->limits.discard_granularity = geo->sec_per_chk * geo->sec_size;
>> + tqueue->limits.discard_granularity = geo->c.clba * geo->c.csecs;
>> tqueue->limits.discard_alignment = 0;
>> blk_queue_max_discard_sectors(tqueue, UINT_MAX >> 9);
>> queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, tqueue);
>> diff --git a/drivers/lightnvm/pblk-read.c b/drivers/lightnvm/pblk-read.c
>> index 2f761283f43e..ebb6bae3a3b8 100644
>> --- a/drivers/lightnvm/pblk-read.c
>> +++ b/drivers/lightnvm/pblk-read.c
>> @@ -563,7 +563,7 @@ int pblk_submit_read_gc(struct pblk *pblk, struct pblk_gc_rq *gc_rq)
>> if (!(gc_rq->secs_to_gc))
>> goto out;
>> - data_len = (gc_rq->secs_to_gc) * geo->sec_size;
>> + data_len = (gc_rq->secs_to_gc) * geo->c.csecs;
>> bio = pblk_bio_map_addr(pblk, gc_rq->data, gc_rq->secs_to_gc, data_len,
>> PBLK_VMALLOC_META, GFP_KERNEL);
>> if (IS_ERR(bio)) {
>> diff --git a/drivers/lightnvm/pblk-recovery.c b/drivers/lightnvm/pblk-recovery.c
>> index e75a1af2eebe..beacef1412a2 100644
>> --- a/drivers/lightnvm/pblk-recovery.c
>> +++ b/drivers/lightnvm/pblk-recovery.c
>> @@ -188,7 +188,7 @@ static int pblk_calc_sec_in_line(struct pblk *pblk, struct pblk_line *line)
>> int nr_bb = bitmap_weight(line->blk_bitmap, lm->blk_per_line);
>> return lm->sec_per_line - lm->smeta_sec - lm->emeta_sec[0] -
>> - nr_bb * geo->sec_per_chk;
>> + nr_bb * geo->c.clba;
>> }
>> struct pblk_recov_alloc {
>> @@ -236,7 +236,7 @@ static int pblk_recov_read_oob(struct pblk *pblk, struct pblk_line *line,
>> rq_ppas = pblk_calc_secs(pblk, left_ppas, 0);
>> if (!rq_ppas)
>> rq_ppas = pblk->min_write_pgs;
>> - rq_len = rq_ppas * geo->sec_size;
>> + rq_len = rq_ppas * geo->c.csecs;
>> bio = bio_map_kern(dev->q, data, rq_len, GFP_KERNEL);
>> if (IS_ERR(bio))
>> @@ -355,7 +355,7 @@ static int pblk_recov_pad_oob(struct pblk *pblk, struct pblk_line *line,
>> if (!pad_rq)
>> return -ENOMEM;
>> - data = vzalloc(pblk->max_write_pgs * geo->sec_size);
>> + data = vzalloc(pblk->max_write_pgs * geo->c.csecs);
>> if (!data) {
>> ret = -ENOMEM;
>> goto free_rq;
>> @@ -372,7 +372,7 @@ static int pblk_recov_pad_oob(struct pblk *pblk, struct pblk_line *line,
>> goto fail_free_pad;
>> }
>> - rq_len = rq_ppas * geo->sec_size;
>> + rq_len = rq_ppas * geo->c.csecs;
>> meta_list = nvm_dev_dma_alloc(dev->parent, GFP_KERNEL, &dma_meta_list);
>> if (!meta_list) {
>> @@ -513,7 +513,7 @@ static int pblk_recov_scan_all_oob(struct pblk *pblk, struct pblk_line *line,
>> rq_ppas = pblk_calc_secs(pblk, left_ppas, 0);
>> if (!rq_ppas)
>> rq_ppas = pblk->min_write_pgs;
>> - rq_len = rq_ppas * geo->sec_size;
>> + rq_len = rq_ppas * geo->c.csecs;
>> bio = bio_map_kern(dev->q, data, rq_len, GFP_KERNEL);
>> if (IS_ERR(bio))
>> @@ -644,7 +644,7 @@ static int pblk_recov_scan_oob(struct pblk *pblk, struct pblk_line *line,
>> rq_ppas = pblk_calc_secs(pblk, left_ppas, 0);
>> if (!rq_ppas)
>> rq_ppas = pblk->min_write_pgs;
>> - rq_len = rq_ppas * geo->sec_size;
>> + rq_len = rq_ppas * geo->c.csecs;
>> bio = bio_map_kern(dev->q, data, rq_len, GFP_KERNEL);
>> if (IS_ERR(bio))
>> @@ -749,7 +749,7 @@ static int pblk_recov_l2p_from_oob(struct pblk *pblk, struct pblk_line *line)
>> ppa_list = (void *)(meta_list) + pblk_dma_meta_size;
>> dma_ppa_list = dma_meta_list + pblk_dma_meta_size;
>> - data = kcalloc(pblk->max_write_pgs, geo->sec_size, GFP_KERNEL);
>> + data = kcalloc(pblk->max_write_pgs, geo->c.csecs, GFP_KERNEL);
>> if (!data) {
>> ret = -ENOMEM;
>> goto free_meta_list;
>> diff --git a/drivers/lightnvm/pblk-rl.c b/drivers/lightnvm/pblk-rl.c
>> index 0d457b162f23..bcab203477ec 100644
>> --- a/drivers/lightnvm/pblk-rl.c
>> +++ b/drivers/lightnvm/pblk-rl.c
>> @@ -200,7 +200,7 @@ void pblk_rl_init(struct pblk_rl *rl, int budget)
>> /* Consider sectors used for metadata */
>> sec_meta = (lm->smeta_sec + lm->emeta_sec[0]) * l_mg->nr_free_lines;
>> - blk_meta = DIV_ROUND_UP(sec_meta, geo->sec_per_chk);
>> + blk_meta = DIV_ROUND_UP(sec_meta, geo->c.clba);
>> rl->high = pblk->op_blks - blk_meta - lm->blk_per_line;
>> rl->high_pw = get_count_order(rl->high);
>> diff --git a/drivers/lightnvm/pblk-sysfs.c b/drivers/lightnvm/pblk-sysfs.c
>> index d93e9b1f083a..d3b50741b691 100644
>> --- a/drivers/lightnvm/pblk-sysfs.c
>> +++ b/drivers/lightnvm/pblk-sysfs.c
>> @@ -113,26 +113,31 @@ static ssize_t pblk_sysfs_ppaf(struct pblk *pblk, char *page)
>> {
>> struct nvm_tgt_dev *dev = pblk->dev;
>> struct nvm_geo *geo = &dev->geo;
>> + struct nvm_addr_format_12 *ppaf;
>> + struct nvm_addr_format_12 *geo_ppaf;
>> ssize_t sz = 0;
>> - sz = snprintf(page, PAGE_SIZE - sz,
>> - "g:(b:%d)blk:%d/%d,pg:%d/%d,lun:%d/%d,ch:%d/%d,pl:%d/%d,sec:%d/%d\n",
>> - pblk->ppaf_bitsize,
>> - pblk->ppaf.blk_offset, geo->ppaf.blk_len,
>> - pblk->ppaf.pg_offset, geo->ppaf.pg_len,
>> - pblk->ppaf.lun_offset, geo->ppaf.lun_len,
>> - pblk->ppaf.ch_offset, geo->ppaf.ch_len,
>> - pblk->ppaf.pln_offset, geo->ppaf.pln_len,
>> - pblk->ppaf.sec_offset, geo->ppaf.sect_len);
>> + ppaf = (struct nvm_addr_format_12 *)&pblk->addrf;
>> + geo_ppaf = (struct nvm_addr_format_12 *)&geo->c.addrf;
>> +
>> + sz = snprintf(page, PAGE_SIZE,
>> + "pblk:(s:%d)ch:%d/%d,lun:%d/%d,blk:%d/%d,pg:%d/%d,pl:%d/%d,sec:%d/%d\n",
>> + pblk->addrf_len,
>> + ppaf->ch_offset, ppaf->ch_len,
>> + ppaf->lun_offset, ppaf->lun_len,
>> + ppaf->blk_offset, ppaf->blk_len,
>> + ppaf->pg_offset, ppaf->pg_len,
>> + ppaf->pln_offset, ppaf->pln_len,
>> + ppaf->sec_offset, ppaf->sec_len);
>> sz += snprintf(page + sz, PAGE_SIZE - sz,
>> - "d:blk:%d/%d,pg:%d/%d,lun:%d/%d,ch:%d/%d,pl:%d/%d,sec:%d/%d\n",
>> - geo->ppaf.blk_offset, geo->ppaf.blk_len,
>> - geo->ppaf.pg_offset, geo->ppaf.pg_len,
>> - geo->ppaf.lun_offset, geo->ppaf.lun_len,
>> - geo->ppaf.ch_offset, geo->ppaf.ch_len,
>> - geo->ppaf.pln_offset, geo->ppaf.pln_len,
>> - geo->ppaf.sect_offset, geo->ppaf.sect_len);
>> + "device:ch:%d/%d,lun:%d/%d,blk:%d/%d,pg:%d/%d,pl:%d/%d,sec:%d/%d\n",
>> + geo_ppaf->ch_offset, geo_ppaf->ch_len,
>> + geo_ppaf->lun_offset, geo_ppaf->lun_len,
>> + geo_ppaf->blk_offset, geo_ppaf->blk_len,
>> + geo_ppaf->pg_offset, geo_ppaf->pg_len,
>> + geo_ppaf->pln_offset, geo_ppaf->pln_len,
>> + geo_ppaf->sec_offset, geo_ppaf->sec_len);
>> return sz;
>> }
>> @@ -288,7 +293,7 @@ static ssize_t pblk_sysfs_lines_info(struct pblk *pblk, char *page)
>> "blk_line:%d, sec_line:%d, sec_blk:%d\n",
>> lm->blk_per_line,
>> lm->sec_per_line,
>> - geo->sec_per_chk);
>> + geo->c.clba);
>> return sz;
>> }
>> diff --git a/drivers/lightnvm/pblk-write.c b/drivers/lightnvm/pblk-write.c
>> index aae86ed60b98..c49b27539d5a 100644
>> --- a/drivers/lightnvm/pblk-write.c
>> +++ b/drivers/lightnvm/pblk-write.c
>> @@ -333,7 +333,7 @@ int pblk_submit_meta_io(struct pblk *pblk, struct pblk_line *meta_line)
>> m_ctx = nvm_rq_to_pdu(rqd);
>> m_ctx->private = meta_line;
>> - rq_len = rq_ppas * geo->sec_size;
>> + rq_len = rq_ppas * geo->c.csecs;
>> data = ((void *)emeta->buf) + emeta->mem;
>> bio = pblk_bio_map_addr(pblk, data, rq_ppas, rq_len,
>> diff --git a/drivers/lightnvm/pblk.h b/drivers/lightnvm/pblk.h
>> index 282dfc8780e8..46b29a492f74 100644
>> --- a/drivers/lightnvm/pblk.h
>> +++ b/drivers/lightnvm/pblk.h
>> @@ -551,21 +551,6 @@ struct pblk_line_meta {
>> unsigned int meta_distance; /* Distance between data and metadata */
>> };
>> -struct pblk_addr_format {
>> - u64 ch_mask;
>> - u64 lun_mask;
>> - u64 pln_mask;
>> - u64 blk_mask;
>> - u64 pg_mask;
>> - u64 sec_mask;
>> - u8 ch_offset;
>> - u8 lun_offset;
>> - u8 pln_offset;
>> - u8 blk_offset;
>> - u8 pg_offset;
>> - u8 sec_offset;
>> -};
>> -
>> enum {
>> PBLK_STATE_RUNNING = 0,
>> PBLK_STATE_STOPPING = 1,
>> @@ -585,8 +570,8 @@ struct pblk {
>> struct pblk_line_mgmt l_mg; /* Line management */
>> struct pblk_line_meta lm; /* Line metadata */
>> - int ppaf_bitsize;
>> - struct pblk_addr_format ppaf;
>> + struct nvm_addr_format addrf;
>> + int addrf_len;
>> struct pblk_rb rwb;
>> @@ -941,14 +926,12 @@ static inline int pblk_line_vsc(struct pblk_line *line)
>> return le32_to_cpu(*line->vsc);
>> }
>> -#define NVM_MEM_PAGE_WRITE (8)
>> -
>> static inline int pblk_pad_distance(struct pblk *pblk)
>> {
>> struct nvm_tgt_dev *dev = pblk->dev;
>> struct nvm_geo *geo = &dev->geo;
>> - return NVM_MEM_PAGE_WRITE * geo->all_luns * geo->sec_per_pl;
>> + return geo->c.mw_cunits * geo->all_luns * geo->c.ws_opt;
>> }
>> static inline int pblk_ppa_to_line(struct ppa_addr p)
>> @@ -958,21 +941,23 @@ static inline int pblk_ppa_to_line(struct ppa_addr p)
>> static inline int pblk_ppa_to_pos(struct nvm_geo *geo, struct ppa_addr p)
>> {
>> - return p.g.lun * geo->nr_chnls + p.g.ch;
>> + return p.g.lun * geo->num_ch + p.g.ch;
>> }
>> static inline struct ppa_addr addr_to_gen_ppa(struct pblk *pblk, u64 paddr,
>> u64 line_id)
>> {
>> + struct nvm_addr_format_12 *ppaf =
>> + (struct nvm_addr_format_12 *)&pblk->addrf;
>> struct ppa_addr ppa;
>> ppa.ppa = 0;
>> ppa.g.blk = line_id;
>> - ppa.g.pg = (paddr & pblk->ppaf.pg_mask) >> pblk->ppaf.pg_offset;
>> - ppa.g.lun = (paddr & pblk->ppaf.lun_mask) >> pblk->ppaf.lun_offset;
>> - ppa.g.ch = (paddr & pblk->ppaf.ch_mask) >> pblk->ppaf.ch_offset;
>> - ppa.g.pl = (paddr & pblk->ppaf.pln_mask) >> pblk->ppaf.pln_offset;
>> - ppa.g.sec = (paddr & pblk->ppaf.sec_mask) >> pblk->ppaf.sec_offset;
>> + ppa.g.pg = (paddr & ppaf->pg_mask) >> ppaf->pg_offset;
>> + ppa.g.lun = (paddr & ppaf->lun_mask) >> ppaf->lun_offset;
>> + ppa.g.ch = (paddr & ppaf->ch_mask) >> ppaf->ch_offset;
>> + ppa.g.pl = (paddr & ppaf->pln_mask) >> ppaf->pln_offset;
>> + ppa.g.sec = (paddr & ppaf->sec_mask) >> ppaf->sec_offset;
>> return ppa;
>> }
>> @@ -980,13 +965,15 @@ static inline struct ppa_addr addr_to_gen_ppa(struct pblk *pblk, u64 paddr,
>> static inline u64 pblk_dev_ppa_to_line_addr(struct pblk *pblk,
>> struct ppa_addr p)
>> {
>> + struct nvm_addr_format_12 *ppaf =
>> + (struct nvm_addr_format_12 *)&pblk->addrf;
>> u64 paddr;
>> - paddr = (u64)p.g.pg << pblk->ppaf.pg_offset;
>> - paddr |= (u64)p.g.lun << pblk->ppaf.lun_offset;
>> - paddr |= (u64)p.g.ch << pblk->ppaf.ch_offset;
>> - paddr |= (u64)p.g.pl << pblk->ppaf.pln_offset;
>> - paddr |= (u64)p.g.sec << pblk->ppaf.sec_offset;
>> + paddr = (u64)p.g.ch << ppaf->ch_offset;
>> + paddr |= (u64)p.g.lun << ppaf->lun_offset;
>> + paddr |= (u64)p.g.pg << ppaf->pg_offset;
>> + paddr |= (u64)p.g.pl << ppaf->pln_offset;
>> + paddr |= (u64)p.g.sec << ppaf->sec_offset;
>> return paddr;
>> }
>> @@ -1003,18 +990,15 @@ static inline struct ppa_addr pblk_ppa32_to_ppa64(struct pblk *pblk, u32 ppa32)
>> ppa64.c.line = ppa32 & ((~0U) >> 1);
>> ppa64.c.is_cached = 1;
>> } else {
>> - ppa64.g.blk = (ppa32 & pblk->ppaf.blk_mask) >>
>> - pblk->ppaf.blk_offset;
>> - ppa64.g.pg = (ppa32 & pblk->ppaf.pg_mask) >>
>> - pblk->ppaf.pg_offset;
>> - ppa64.g.lun = (ppa32 & pblk->ppaf.lun_mask) >>
>> - pblk->ppaf.lun_offset;
>> - ppa64.g.ch = (ppa32 & pblk->ppaf.ch_mask) >>
>> - pblk->ppaf.ch_offset;
>> - ppa64.g.pl = (ppa32 & pblk->ppaf.pln_mask) >>
>> - pblk->ppaf.pln_offset;
>> - ppa64.g.sec = (ppa32 & pblk->ppaf.sec_mask) >>
>> - pblk->ppaf.sec_offset;
>> + struct nvm_addr_format_12 *ppaf =
>> + (struct nvm_addr_format_12 *)&pblk->addrf;
>> +
>> + ppa64.g.ch = (ppa32 & ppaf->ch_mask) >> ppaf->ch_offset;
>> + ppa64.g.lun = (ppa32 & ppaf->lun_mask) >> ppaf->lun_offset;
>> + ppa64.g.blk = (ppa32 & ppaf->blk_mask) >> ppaf->blk_offset;
>> + ppa64.g.pg = (ppa32 & ppaf->pg_mask) >> ppaf->pg_offset;
>> + ppa64.g.pl = (ppa32 & ppaf->pln_mask) >> ppaf->pln_offset;
>> + ppa64.g.sec = (ppa32 & ppaf->sec_mask) >> ppaf->sec_offset;
>> }
>> return ppa64;
>> @@ -1030,12 +1014,15 @@ static inline u32 pblk_ppa64_to_ppa32(struct pblk *pblk, struct ppa_addr ppa64)
>> ppa32 |= ppa64.c.line;
>> ppa32 |= 1U << 31;
>> } else {
>> - ppa32 |= ppa64.g.blk << pblk->ppaf.blk_offset;
>> - ppa32 |= ppa64.g.pg << pblk->ppaf.pg_offset;
>> - ppa32 |= ppa64.g.lun << pblk->ppaf.lun_offset;
>> - ppa32 |= ppa64.g.ch << pblk->ppaf.ch_offset;
>> - ppa32 |= ppa64.g.pl << pblk->ppaf.pln_offset;
>> - ppa32 |= ppa64.g.sec << pblk->ppaf.sec_offset;
>> + struct nvm_addr_format_12 *ppaf =
>> + (struct nvm_addr_format_12 *)&pblk->addrf;
>> +
>> + ppa32 |= ppa64.g.ch << ppaf->ch_offset;
>> + ppa32 |= ppa64.g.lun << ppaf->lun_offset;
>> + ppa32 |= ppa64.g.blk << ppaf->blk_offset;
>> + ppa32 |= ppa64.g.pg << ppaf->pg_offset;
>> + ppa32 |= ppa64.g.pl << ppaf->pln_offset;
>> + ppa32 |= ppa64.g.sec << ppaf->sec_offset;
>> }
>> return ppa32;
>> @@ -1046,7 +1033,7 @@ static inline struct ppa_addr pblk_trans_map_get(struct pblk *pblk,
>> {
>> struct ppa_addr ppa;
>> - if (pblk->ppaf_bitsize < 32) {
>> + if (pblk->addrf_len < 32) {
>> u32 *map = (u32 *)pblk->trans_map;
>> ppa = pblk_ppa32_to_ppa64(pblk, map[lba]);
>> @@ -1062,7 +1049,7 @@ static inline struct ppa_addr pblk_trans_map_get(struct pblk *pblk,
>> static inline void pblk_trans_map_set(struct pblk *pblk, sector_t lba,
>> struct ppa_addr ppa)
>> {
>> - if (pblk->ppaf_bitsize < 32) {
>> + if (pblk->addrf_len < 32) {
>> u32 *map = (u32 *)pblk->trans_map;
>> map[lba] = pblk_ppa64_to_ppa32(pblk, ppa);
>> @@ -1153,7 +1140,7 @@ static inline int pblk_set_progr_mode(struct pblk *pblk, int type)
>> struct nvm_geo *geo = &dev->geo;
>> int flags;
>> - flags = geo->plane_mode >> 1;
>> + flags = geo->c.pln_mode >> 1;
>> if (type == PBLK_WRITE)
>> flags |= NVM_IO_SCRAMBLE_ENABLE;
>> @@ -1174,7 +1161,7 @@ static inline int pblk_set_read_mode(struct pblk *pblk, int type)
>> flags = NVM_IO_SUSPEND | NVM_IO_SCRAMBLE_ENABLE;
>> if (type == PBLK_READ_SEQUENTIAL)
>> - flags |= geo->plane_mode >> 1;
>> + flags |= geo->c.pln_mode >> 1;
>> return flags;
>> }
>> @@ -1227,12 +1214,12 @@ static inline int pblk_boundary_ppa_checks(struct nvm_tgt_dev *tgt_dev,
>> ppa = &ppas[i];
>> if (!ppa->c.is_cached &&
>> - ppa->g.ch < geo->nr_chnls &&
>> - ppa->g.lun < geo->nr_luns &&
>> - ppa->g.pl < geo->nr_planes &&
>> - ppa->g.blk < geo->nr_chks &&
>> - ppa->g.pg < geo->ws_per_chk &&
>> - ppa->g.sec < geo->sec_per_pg)
>> + ppa->g.ch < geo->num_ch &&
>> + ppa->g.lun < geo->num_lun &&
>> + ppa->g.pl < geo->c.num_pln &&
>> + ppa->g.blk < geo->c.num_chk &&
>> + ppa->g.pg < geo->c.num_pg &&
>> + ppa->g.sec < geo->c.ws_min)
>> continue;
>> print_ppa(ppa, "boundary", i);
>> diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
>> index a19e85f0cbae..97739e668602 100644
>> --- a/drivers/nvme/host/lightnvm.c
>> +++ b/drivers/nvme/host/lightnvm.c
>> @@ -152,8 +152,8 @@ struct nvme_nvm_id12_addrf {
>> __u8 blk_len;
>> __u8 pg_offset;
>> __u8 pg_len;
>> - __u8 sect_offset;
>> - __u8 sect_len;
>> + __u8 sec_offset;
>> + __u8 sec_len;
>> __u8 res[4];
>> } __packed;
>> @@ -170,6 +170,12 @@ struct nvme_nvm_id12 {
>> __u8 resv2[2880];
>> } __packed;
>> +/* Generic identification structure */
>> +struct nvme_nvm_id {
>> + __u8 ver_id;
>> + __u8 resv[4095];
>> +} __packed;
>> +
>> struct nvme_nvm_bb_tbl {
>> __u8 tblid[4];
>> __le16 verid;
>> @@ -254,121 +260,195 @@ static inline void _nvme_nvm_check_size(void)
>> BUILD_BUG_ON(sizeof(struct nvme_nvm_id20) != NVME_IDENTIFY_DATA_SIZE);
>> }
>> -static int init_grp(struct nvm_id *nvm_id, struct nvme_nvm_id12 *id12)
>> +static void nvme_nvm_set_addr_12(struct nvm_addr_format_12 *dst,
>> + struct nvme_nvm_id12_addrf *src)
>> {
>> + dst->ch_len = src->ch_len;
>> + dst->lun_len = src->lun_len;
>> + dst->blk_len = src->blk_len;
>> + dst->pg_len = src->pg_len;
>> + dst->pln_len = src->pln_len;
>> + dst->sec_len = src->sec_len;
>> +
>> + dst->ch_offset = src->ch_offset;
>> + dst->lun_offset = src->lun_offset;
>> + dst->blk_offset = src->blk_offset;
>> + dst->pg_offset = src->pg_offset;
>> + dst->pln_offset = src->pln_offset;
>> + dst->sec_offset = src->sec_offset;
>> +
>> + dst->ch_mask = ((1ULL << dst->ch_len) - 1) << dst->ch_offset;
>> + dst->lun_mask = ((1ULL << dst->lun_len) - 1) << dst->lun_offset;
>> + dst->blk_mask = ((1ULL << dst->blk_len) - 1) << dst->blk_offset;
>> + dst->pg_mask = ((1ULL << dst->pg_len) - 1) << dst->pg_offset;
>> + dst->pln_mask = ((1ULL << dst->pln_len) - 1) << dst->pln_offset;
>> + dst->sec_mask = ((1ULL << dst->sec_len) - 1) << dst->sec_offset;
>> +}
>> +
>> +static int nvme_nvm_setup_12(struct nvme_nvm_id *gen_id,
>> + struct nvm_dev_geo *dev_geo)
>> +{
>> + struct nvme_nvm_id12 *id = (struct nvme_nvm_id12 *)gen_id;
>> struct nvme_nvm_id12_grp *src;
>> int sec_per_pg, sec_per_pl, pg_per_blk;
>> - if (id12->cgrps != 1)
>> + if (id->cgrps != 1)
>> return -EINVAL;
>> - src = &id12->grp;
>> + src = &id->grp;
>> - nvm_id->mtype = src->mtype;
>> - nvm_id->fmtype = src->fmtype;
>> + if (src->mtype != 0) {
>> + pr_err("nvm: memory type not supported\n");
>> + return -EINVAL;
>> + }
>> +
>> + /* 1.2 spec. only reports a single version id - unfold */
>> + dev_geo->major_ver_id = 1;
>> + dev_geo->minor_ver_id = 2;
>> +
>> + /* Set compacted version for upper layers */
>> + dev_geo->c.version = NVM_OCSSD_SPEC_12;
>> - nvm_id->num_ch = src->num_ch;
>> - nvm_id->num_lun = src->num_lun;
>> + dev_geo->num_ch = src->num_ch;
>> + dev_geo->num_lun = src->num_lun;
>> + dev_geo->all_luns = dev_geo->num_ch * dev_geo->num_lun;
>> - nvm_id->num_chk = le16_to_cpu(src->num_chk);
>> - nvm_id->csecs = le16_to_cpu(src->csecs);
>> - nvm_id->sos = le16_to_cpu(src->sos);
>> + dev_geo->c.num_chk = le16_to_cpu(src->num_chk);
>> + dev_geo->c.csecs = le16_to_cpu(src->csecs);
>> + dev_geo->c.sos = le16_to_cpu(src->sos);
>> pg_per_blk = le16_to_cpu(src->num_pg);
>> - sec_per_pg = le16_to_cpu(src->fpg_sz) / nvm_id->csecs;
>> + sec_per_pg = le16_to_cpu(src->fpg_sz) / dev_geo->c.csecs;
>> sec_per_pl = sec_per_pg * src->num_pln;
>> - nvm_id->clba = sec_per_pl * pg_per_blk;
>> - nvm_id->ws_per_chk = pg_per_blk;
>> -
>> - nvm_id->mpos = le32_to_cpu(src->mpos);
>> - nvm_id->cpar = le16_to_cpu(src->cpar);
>> - nvm_id->mccap = le32_to_cpu(src->mccap);
>> -
>> - nvm_id->ws_opt = nvm_id->ws_min = sec_per_pg;
>> - nvm_id->ws_seq = NVM_IO_SNGL_ACCESS;
>> -
>> - if (nvm_id->mpos & 0x020202) {
>> - nvm_id->ws_seq = NVM_IO_DUAL_ACCESS;
>> - nvm_id->ws_opt <<= 1;
>> - } else if (nvm_id->mpos & 0x040404) {
>> - nvm_id->ws_seq = NVM_IO_QUAD_ACCESS;
>> - nvm_id->ws_opt <<= 2;
>> - }
>> + dev_geo->c.clba = sec_per_pl * pg_per_blk;
>> +
>> + dev_geo->c.ws_min = sec_per_pg;
>> + dev_geo->c.ws_opt = sec_per_pg;
>> + dev_geo->c.mw_cunits = 8; /* default to MLC safe values */
>> + dev_geo->c.maxoc = dev_geo->all_luns; /* default to 1 chunk per LUN */
>> + dev_geo->c.maxocpu = 1; /* default to 1 chunk per LUN */
>> - nvm_id->trdt = le32_to_cpu(src->trdt);
>> - nvm_id->trdm = le32_to_cpu(src->trdm);
>> - nvm_id->tprt = le32_to_cpu(src->tprt);
>> - nvm_id->tprm = le32_to_cpu(src->tprm);
>> - nvm_id->tbet = le32_to_cpu(src->tbet);
>> - nvm_id->tbem = le32_to_cpu(src->tbem);
>> + dev_geo->c.mccap = le32_to_cpu(src->mccap);
>> +
>> + dev_geo->c.trdt = le32_to_cpu(src->trdt);
>> + dev_geo->c.trdm = le32_to_cpu(src->trdm);
>> + dev_geo->c.tprt = le32_to_cpu(src->tprt);
>> + dev_geo->c.tprm = le32_to_cpu(src->tprm);
>> + dev_geo->c.tbet = le32_to_cpu(src->tbet);
>> + dev_geo->c.tbem = le32_to_cpu(src->tbem);
>> /* 1.2 compatibility */
>> - nvm_id->num_pln = src->num_pln;
>> - nvm_id->num_pg = le16_to_cpu(src->num_pg);
>> - nvm_id->fpg_sz = le16_to_cpu(src->fpg_sz);
>> + dev_geo->c.vmnt = id->vmnt;
>> + dev_geo->c.cap = le32_to_cpu(id->cap);
>> + dev_geo->c.dom = le32_to_cpu(id->dom);
>> +
>> + dev_geo->c.mtype = src->mtype;
>> + dev_geo->c.fmtype = src->fmtype;
>> +
>> + dev_geo->c.cpar = le16_to_cpu(src->cpar);
>> + dev_geo->c.mpos = le32_to_cpu(src->mpos);
>> +
>> + dev_geo->c.pln_mode = NVM_PLANE_SINGLE;
>> +
>> + if (dev_geo->c.mpos & 0x020202) {
>> + dev_geo->c.pln_mode = NVM_PLANE_DOUBLE;
>> + dev_geo->c.ws_opt <<= 1;
>> + } else if (dev_geo->c.mpos & 0x040404) {
>> + dev_geo->c.pln_mode = NVM_PLANE_QUAD;
>> + dev_geo->c.ws_opt <<= 2;
>> + }
>> +
>> + dev_geo->c.num_pln = src->num_pln;
>> + dev_geo->c.num_pg = le16_to_cpu(src->num_pg);
>> + dev_geo->c.fpg_sz = le16_to_cpu(src->fpg_sz);
>> +
>> + nvme_nvm_set_addr_12((struct nvm_addr_format_12 *)&dev_geo->c.addrf,
>> + &id->ppaf);
>> return 0;
>> }
>> -static int nvme_nvm_setup_12(struct nvm_dev *nvmdev, struct nvm_id *nvm_id,
>> - struct nvme_nvm_id12 *id)
>> +static void nvme_nvm_set_addr_20(struct nvm_addr_format *dst,
>> + struct nvme_nvm_id20_addrf *src)
>> {
>> - nvm_id->ver_id = id->ver_id;
>> - nvm_id->vmnt = id->vmnt;
>> - nvm_id->cap = le32_to_cpu(id->cap);
>> - nvm_id->dom = le32_to_cpu(id->dom);
>> - memcpy(&nvm_id->ppaf, &id->ppaf,
>> - sizeof(struct nvm_addr_format));
>> -
>> - return init_grp(nvm_id, id);
>> + dst->ch_len = src->grp_len;
>> + dst->lun_len = src->pu_len;
>> + dst->chk_len = src->chk_len;
>> + dst->sec_len = src->lba_len;
>> +
>> + dst->sec_offset = 0;
>> + dst->chk_offset = dst->sec_len;
>> + dst->lun_offset = dst->chk_offset + dst->chk_len;
>> + dst->ch_offset = dst->lun_offset + dst->lun_len;
>> +
>> + dst->ch_mask = ((1ULL << dst->ch_len) - 1) << dst->ch_offset;
>> + dst->lun_mask = ((1ULL << dst->lun_len) - 1) << dst->lun_offset;
>> + dst->chk_mask = ((1ULL << dst->chk_len) - 1) << dst->chk_offset;
>> + dst->sec_mask = ((1ULL << dst->sec_len) - 1) << dst->sec_offset;
>> }
>> -static int nvme_nvm_setup_20(struct nvm_dev *nvmdev, struct nvm_id *nvm_id,
>> - struct nvme_nvm_id20 *id)
>> +static int nvme_nvm_setup_20(struct nvme_nvm_id *gen_id,
>> + struct nvm_dev_geo *dev_geo)
>> {
>> - nvm_id->ver_id = id->mjr;
>> + struct nvme_nvm_id20 *id = (struct nvme_nvm_id20 *)gen_id;
>> - nvm_id->num_ch = le16_to_cpu(id->num_grp);
>> - nvm_id->num_lun = le16_to_cpu(id->num_pu);
>> - nvm_id->num_chk = le32_to_cpu(id->num_chk);
>> - nvm_id->clba = le32_to_cpu(id->clba);
>> + dev_geo->major_ver_id = id->mjr;
>> + dev_geo->minor_ver_id = id->mnr;
>> - nvm_id->ws_min = le32_to_cpu(id->ws_min);
>> - nvm_id->ws_opt = le32_to_cpu(id->ws_opt);
>> - nvm_id->mw_cunits = le32_to_cpu(id->mw_cunits);
>> + /* Set compacted version for upper layers */
>> + dev_geo->c.version = NVM_OCSSD_SPEC_20;
>> - nvm_id->trdt = le32_to_cpu(id->trdt);
>> - nvm_id->trdm = le32_to_cpu(id->trdm);
>> - nvm_id->tprt = le32_to_cpu(id->twrt);
>> - nvm_id->tprm = le32_to_cpu(id->twrm);
>> - nvm_id->tbet = le32_to_cpu(id->tcrst);
>> - nvm_id->tbem = le32_to_cpu(id->tcrsm);
>> + if (!(dev_geo->major_ver_id == 2 && dev_geo->minor_ver_id == 0)) {
>> + pr_err("nvm: OCSSD version not supported (v%d.%d)\n",
>> + dev_geo->major_ver_id, dev_geo->minor_ver_id);
>> + return -EINVAL;
>> + }
>> - /* calculated values */
>> - nvm_id->ws_per_chk = nvm_id->clba / nvm_id->ws_min;
>> + dev_geo->num_ch = le16_to_cpu(id->num_grp);
>> + dev_geo->num_lun = le16_to_cpu(id->num_pu);
>> + dev_geo->all_luns = dev_geo->num_ch * dev_geo->num_lun;
>> - /* 1.2 compatibility */
>> - nvm_id->ws_seq = NVM_IO_SNGL_ACCESS;
>> + dev_geo->c.num_chk = le32_to_cpu(id->num_chk);
>> + dev_geo->c.clba = le32_to_cpu(id->clba);
>> + dev_geo->c.csecs = -1; /* Set by nvme identify */
>> + dev_geo->c.sos = -1; /* Set by nvme identify */
>> +
>> + dev_geo->c.ws_min = le32_to_cpu(id->ws_min);
>> + dev_geo->c.ws_opt = le32_to_cpu(id->ws_opt);
>> + dev_geo->c.mw_cunits = le32_to_cpu(id->mw_cunits);
>> + dev_geo->c.maxoc = le32_to_cpu(id->maxoc);
>> + dev_geo->c.maxocpu = le32_to_cpu(id->maxocpu);
>> +
>> + dev_geo->c.mccap = le32_to_cpu(id->mccap);
>> +
>> + dev_geo->c.trdt = le32_to_cpu(id->trdt);
>> + dev_geo->c.trdm = le32_to_cpu(id->trdm);
>> + dev_geo->c.tprt = le32_to_cpu(id->twrt);
>> + dev_geo->c.tprm = le32_to_cpu(id->twrm);
>> + dev_geo->c.tbet = le32_to_cpu(id->tcrst);
>> + dev_geo->c.tbem = le32_to_cpu(id->tcrsm);
>> +
>> + nvme_nvm_set_addr_20(&dev_geo->c.addrf, &id->lbaf);
>> return 0;
>> }
>> -static int nvme_nvm_identity(struct nvm_dev *nvmdev, struct nvm_id *nvm_id)
>> +static int nvme_nvm_identity(struct nvm_dev *nvmdev)
>> {
>> struct nvme_ns *ns = nvmdev->q->queuedata;
>> - struct nvme_nvm_id12 *id;
>> + struct nvme_nvm_id *nvme_nvm_id;
>> struct nvme_nvm_command c = {};
>> int ret;
>> c.identity.opcode = nvme_nvm_admin_identity;
>> c.identity.nsid = cpu_to_le32(ns->head->ns_id);
>> - id = kmalloc(sizeof(struct nvme_nvm_id12), GFP_KERNEL);
>> - if (!id)
>> + nvme_nvm_id = kmalloc(sizeof(struct nvme_nvm_id), GFP_KERNEL);
>> + if (!nvme_nvm_id)
>> return -ENOMEM;
>> ret = nvme_submit_sync_cmd(ns->ctrl->admin_q, (struct nvme_command *)&c,
>> - id, sizeof(struct nvme_nvm_id12));
>> + nvme_nvm_id, sizeof(struct nvme_nvm_id));
>> if (ret) {
>> ret = -EIO;
>> goto out;
>> @@ -378,22 +458,21 @@ static int nvme_nvm_identity(struct nvm_dev *nvmdev, struct nvm_id *nvm_id)
>> * The 1.2 and 2.0 specifications share the first byte in their geometry
>> * command to make it possible to know what version a device implements.
>> */
>> - switch (id->ver_id) {
>> + switch (nvme_nvm_id->ver_id) {
>> case 1:
>> - ret = nvme_nvm_setup_12(nvmdev, nvm_id, id);
>> + ret = nvme_nvm_setup_12(nvme_nvm_id, &nvmdev->dev_geo);
>> break;
>> case 2:
>> - ret = nvme_nvm_setup_20(nvmdev, nvm_id,
>> - (struct nvme_nvm_id20 *)id);
>> + ret = nvme_nvm_setup_20(nvme_nvm_id, &nvmdev->dev_geo);
>> break;
>> default:
>> - dev_err(ns->ctrl->device,
>> - "OCSSD revision not supported (%d)\n",
>> - nvm_id->ver_id);
>> + dev_err(ns->ctrl->device, "OCSSD revision not supported (%d)\n",
>> + nvme_nvm_id->ver_id);
>> ret = -EINVAL;
>> }
>> +
>> out:
>> - kfree(id);
>> + kfree(nvme_nvm_id);
>> return ret;
>> }
>> @@ -401,12 +480,12 @@ static int nvme_nvm_get_bb_tbl(struct nvm_dev *nvmdev, struct ppa_addr ppa,
>> u8 *blks)
>> {
>> struct request_queue *q = nvmdev->q;
>> - struct nvm_geo *geo = &nvmdev->geo;
>> + struct nvm_dev_geo *dev_geo = &nvmdev->dev_geo;
>> struct nvme_ns *ns = q->queuedata;
>> struct nvme_ctrl *ctrl = ns->ctrl;
>> struct nvme_nvm_command c = {};
>> struct nvme_nvm_bb_tbl *bb_tbl;
>> - int nr_blks = geo->nr_chks * geo->plane_mode;
>> + int nr_blks = dev_geo->c.num_chk * dev_geo->c.num_pln;
>> int tblsz = sizeof(struct nvme_nvm_bb_tbl) + nr_blks;
>> int ret = 0;
>> @@ -447,7 +526,7 @@ static int nvme_nvm_get_bb_tbl(struct nvm_dev *nvmdev, struct ppa_addr ppa,
>> goto out;
>> }
>> - memcpy(blks, bb_tbl->blk, geo->nr_chks * geo->plane_mode);
>> + memcpy(blks, bb_tbl->blk, dev_geo->c.num_chk * dev_geo->c.num_pln);
>> out:
>> kfree(bb_tbl);
>> return ret;
>> @@ -817,9 +896,10 @@ int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd, unsigned long arg)
>> void nvme_nvm_update_nvm_info(struct nvme_ns *ns)
>> {
>> struct nvm_dev *ndev = ns->ndev;
>> + struct nvm_dev_geo *dev_geo = &ndev->dev_geo;
>> - ndev->identity.csecs = ndev->geo.sec_size = 1 << ns->lba_shift;
>> - ndev->identity.sos = ndev->geo.oob_size = ns->ms;
>> + dev_geo->c.csecs = 1 << ns->lba_shift;
>> + dev_geo->c.sos = ns->ms;
>> }
>> int nvme_nvm_register(struct nvme_ns *ns, char *disk_name, int node)
>> @@ -852,23 +932,24 @@ static ssize_t nvm_dev_attr_show(struct device *dev,
>> {
>> struct nvme_ns *ns = nvme_get_ns_from_dev(dev);
>> struct nvm_dev *ndev = ns->ndev;
>> - struct nvm_id *id;
>> + struct nvm_dev_geo *dev_geo = &ndev->dev_geo;
>> struct attribute *attr;
>> if (!ndev)
>> return 0;
>> - id = &ndev->identity;
>> attr = &dattr->attr;
>> if (strcmp(attr->name, "version") == 0) {
>> - return scnprintf(page, PAGE_SIZE, "%u\n", id->ver_id);
>> + return scnprintf(page, PAGE_SIZE, "%u.%u\n",
>> + dev_geo->major_ver_id,
>> + dev_geo->minor_ver_id);
>> } else if (strcmp(attr->name, "capabilities") == 0) {
>> - return scnprintf(page, PAGE_SIZE, "%u\n", id->cap);
>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.cap);
>> } else if (strcmp(attr->name, "read_typ") == 0) {
>> - return scnprintf(page, PAGE_SIZE, "%u\n", id->trdt);
>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.trdt);
>> } else if (strcmp(attr->name, "read_max") == 0) {
>> - return scnprintf(page, PAGE_SIZE, "%u\n", id->trdm);
>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.trdm);
>> } else {
>> return scnprintf(page,
>> PAGE_SIZE,
>> @@ -877,76 +958,80 @@ static ssize_t nvm_dev_attr_show(struct device *dev,
>> }
>> }
>> +static ssize_t nvm_dev_attr_show_ppaf(struct nvm_addr_format_12 *ppaf,
>> + char *page)
>> +{
>> + return scnprintf(page, PAGE_SIZE,
>> + "0x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x\n",
>> + ppaf->ch_offset, ppaf->ch_len,
>> + ppaf->lun_offset, ppaf->lun_len,
>> + ppaf->pln_offset, ppaf->pln_len,
>> + ppaf->blk_offset, ppaf->blk_len,
>> + ppaf->pg_offset, ppaf->pg_len,
>> + ppaf->sec_offset, ppaf->sec_len);
>> +}
>> +
>> static ssize_t nvm_dev_attr_show_12(struct device *dev,
>> struct device_attribute *dattr, char *page)
>> {
>> struct nvme_ns *ns = nvme_get_ns_from_dev(dev);
>> struct nvm_dev *ndev = ns->ndev;
>> - struct nvm_id *id;
>> + struct nvm_dev_geo *dev_geo = &ndev->dev_geo;
>> struct attribute *attr;
>> if (!ndev)
>> return 0;
>> - id = &ndev->identity;
>> attr = &dattr->attr;
>> if (strcmp(attr->name, "vendor_opcode") == 0) {
>> - return scnprintf(page, PAGE_SIZE, "%u\n", id->vmnt);
>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.vmnt);
>> } else if (strcmp(attr->name, "device_mode") == 0) {
>> - return scnprintf(page, PAGE_SIZE, "%u\n", id->dom);
>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.dom);
>> /* kept for compatibility */
>> } else if (strcmp(attr->name, "media_manager") == 0) {
>> return scnprintf(page, PAGE_SIZE, "%s\n", "gennvm");
>> } else if (strcmp(attr->name, "ppa_format") == 0) {
>> - return scnprintf(page, PAGE_SIZE,
>> - "0x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x\n",
>> - id->ppaf.ch_offset, id->ppaf.ch_len,
>> - id->ppaf.lun_offset, id->ppaf.lun_len,
>> - id->ppaf.pln_offset, id->ppaf.pln_len,
>> - id->ppaf.blk_offset, id->ppaf.blk_len,
>> - id->ppaf.pg_offset, id->ppaf.pg_len,
>> - id->ppaf.sect_offset, id->ppaf.sect_len);
>> + return nvm_dev_attr_show_ppaf((void *)&dev_geo->c.addrf, page);
>> } else if (strcmp(attr->name, "media_type") == 0) { /* u8 */
>> - return scnprintf(page, PAGE_SIZE, "%u\n", id->mtype);
>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.mtype);
>> } else if (strcmp(attr->name, "flash_media_type") == 0) {
>> - return scnprintf(page, PAGE_SIZE, "%u\n", id->fmtype);
>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.fmtype);
>> } else if (strcmp(attr->name, "num_channels") == 0) {
>> - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_ch);
>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->num_ch);
>> } else if (strcmp(attr->name, "num_luns") == 0) {
>> - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_lun);
>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->num_lun);
>> } else if (strcmp(attr->name, "num_planes") == 0) {
>> - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_pln);
>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.num_pln);
>> } else if (strcmp(attr->name, "num_blocks") == 0) { /* u16 */
>> - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_chk);
>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.num_chk);
>> } else if (strcmp(attr->name, "num_pages") == 0) {
>> - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_pg);
>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.num_pg);
>> } else if (strcmp(attr->name, "page_size") == 0) {
>> - return scnprintf(page, PAGE_SIZE, "%u\n", id->fpg_sz);
>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.fpg_sz);
>> } else if (strcmp(attr->name, "hw_sector_size") == 0) {
>> - return scnprintf(page, PAGE_SIZE, "%u\n", id->csecs);
>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.csecs);
>> } else if (strcmp(attr->name, "oob_sector_size") == 0) {/* u32 */
>> - return scnprintf(page, PAGE_SIZE, "%u\n", id->sos);
>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.sos);
>> } else if (strcmp(attr->name, "prog_typ") == 0) {
>> - return scnprintf(page, PAGE_SIZE, "%u\n", id->tprt);
>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.tprt);
>> } else if (strcmp(attr->name, "prog_max") == 0) {
>> - return scnprintf(page, PAGE_SIZE, "%u\n", id->tprm);
>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.tprm);
>> } else if (strcmp(attr->name, "erase_typ") == 0) {
>> - return scnprintf(page, PAGE_SIZE, "%u\n", id->tbet);
>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.tbet);
>> } else if (strcmp(attr->name, "erase_max") == 0) {
>> - return scnprintf(page, PAGE_SIZE, "%u\n", id->tbem);
>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.tbem);
>> } else if (strcmp(attr->name, "multiplane_modes") == 0) {
>> - return scnprintf(page, PAGE_SIZE, "0x%08x\n", id->mpos);
>> + return scnprintf(page, PAGE_SIZE, "0x%08x\n", dev_geo->c.mpos);
>> } else if (strcmp(attr->name, "media_capabilities") == 0) {
>> - return scnprintf(page, PAGE_SIZE, "0x%08x\n", id->mccap);
>> + return scnprintf(page, PAGE_SIZE, "0x%08x\n", dev_geo->c.mccap);
>> } else if (strcmp(attr->name, "max_phys_secs") == 0) {
>> return scnprintf(page, PAGE_SIZE, "%u\n",
>> ndev->ops->max_phys_sect);
>> } else {
>> - return scnprintf(page,
>> - PAGE_SIZE,
>> - "Unhandled attr(%s) in `nvm_dev_attr_show_12`\n",
>> - attr->name);
>> + return scnprintf(page, PAGE_SIZE,
>> + "Unhandled attr(%s) in `nvm_dev_attr_show_12`\n",
>> + attr->name);
>> }
>> }
>> @@ -955,42 +1040,40 @@ static ssize_t nvm_dev_attr_show_20(struct device *dev,
>> {
>> struct nvme_ns *ns = nvme_get_ns_from_dev(dev);
>> struct nvm_dev *ndev = ns->ndev;
>> - struct nvm_id *id;
>> + struct nvm_dev_geo *dev_geo = &ndev->dev_geo;
>> struct attribute *attr;
>> if (!ndev)
>> return 0;
>> - id = &ndev->identity;
>> attr = &dattr->attr;
>> if (strcmp(attr->name, "groups") == 0) {
>> - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_ch);
>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->num_ch);
>> } else if (strcmp(attr->name, "punits") == 0) {
>> - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_lun);
>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->num_lun);
>> } else if (strcmp(attr->name, "chunks") == 0) {
>> - return scnprintf(page, PAGE_SIZE, "%u\n", id->num_chk);
>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.num_chk);
>> } else if (strcmp(attr->name, "clba") == 0) {
>> - return scnprintf(page, PAGE_SIZE, "%u\n", id->clba);
>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.clba);
>> } else if (strcmp(attr->name, "ws_min") == 0) {
>> - return scnprintf(page, PAGE_SIZE, "%u\n", id->ws_min);
>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.ws_min);
>> } else if (strcmp(attr->name, "ws_opt") == 0) {
>> - return scnprintf(page, PAGE_SIZE, "%u\n", id->ws_opt);
>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.ws_opt);
>> } else if (strcmp(attr->name, "mw_cunits") == 0) {
>> - return scnprintf(page, PAGE_SIZE, "%u\n", id->mw_cunits);
>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.mw_cunits);
>> } else if (strcmp(attr->name, "write_typ") == 0) {
>> - return scnprintf(page, PAGE_SIZE, "%u\n", id->tprt);
>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.tprt);
>> } else if (strcmp(attr->name, "write_max") == 0) {
>> - return scnprintf(page, PAGE_SIZE, "%u\n", id->tprm);
>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.tprm);
>> } else if (strcmp(attr->name, "reset_typ") == 0) {
>> - return scnprintf(page, PAGE_SIZE, "%u\n", id->tbet);
>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.tbet);
>> } else if (strcmp(attr->name, "reset_max") == 0) {
>> - return scnprintf(page, PAGE_SIZE, "%u\n", id->tbem);
>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.tbem);
>> } else {
>> - return scnprintf(page,
>> - PAGE_SIZE,
>> - "Unhandled attr(%s) in `nvm_dev_attr_show_20`\n",
>> - attr->name);
>> + return scnprintf(page, PAGE_SIZE,
>> + "Unhandled attr(%s) in `nvm_dev_attr_show_20`\n",
>> + attr->name);
>> }
>> }
>> @@ -1109,10 +1192,13 @@ static const struct attribute_group nvm_dev_attr_group_20 = {
>> int nvme_nvm_register_sysfs(struct nvme_ns *ns)
>> {
>> - if (!ns->ndev)
>> + struct nvm_dev *ndev = ns->ndev;
>> + struct nvm_dev_geo *dev_geo = &ndev->dev_geo;
>> +
>> + if (!ndev)
>> return -EINVAL;
>> - switch (ns->ndev->identity.ver_id) {
>> + switch (dev_geo->major_ver_id) {
>> case 1:
>> return sysfs_create_group(&disk_to_dev(ns->disk)->kobj,
>> &nvm_dev_attr_group_12);
>> @@ -1126,7 +1212,10 @@ int nvme_nvm_register_sysfs(struct nvme_ns *ns)
>> void nvme_nvm_unregister_sysfs(struct nvme_ns *ns)
>> {
>> - switch (ns->ndev->identity.ver_id) {
>> + struct nvm_dev *ndev = ns->ndev;
>> + struct nvm_dev_geo *dev_geo = &ndev->dev_geo;
>> +
>> + switch (dev_geo->major_ver_id) {
>> case 1:
>> sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
>> &nvm_dev_attr_group_12);
>> diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h
>> index b717c000b712..6a567bd19b73 100644
>> --- a/include/linux/lightnvm.h
>> +++ b/include/linux/lightnvm.h
>> @@ -23,6 +23,11 @@ enum {
>> #define NVM_LUN_BITS (8)
>> #define NVM_CH_BITS (7)
>> +enum {
>> + NVM_OCSSD_SPEC_12 = 12,
>> + NVM_OCSSD_SPEC_20 = 20,
>> +};
>> +
>> struct ppa_addr {
>> /* Generic structure for all addresses */
>> union {
>> @@ -50,7 +55,7 @@ struct nvm_id;
>> struct nvm_dev;
>> struct nvm_tgt_dev;
>> -typedef int (nvm_id_fn)(struct nvm_dev *, struct nvm_id *);
>> +typedef int (nvm_id_fn)(struct nvm_dev *);
>> typedef int (nvm_op_bb_tbl_fn)(struct nvm_dev *, struct ppa_addr, u8 *);
>> typedef int (nvm_op_set_bb_fn)(struct nvm_dev *, struct ppa_addr *, int, int);
>> typedef int (nvm_submit_io_fn)(struct nvm_dev *, struct nvm_rq *);
>> @@ -154,62 +159,113 @@ struct nvm_id_lp_tbl {
>> struct nvm_id_lp_mlc mlc;
>> };
>> -struct nvm_addr_format {
>> - u8 ch_offset;
>> +struct nvm_addr_format_12 {
>> u8 ch_len;
>> - u8 lun_offset;
>> u8 lun_len;
>> - u8 pln_offset;
>> + u8 blk_len;
>> + u8 pg_len;
>> u8 pln_len;
>> + u8 sec_len;
>> +
>> + u8 ch_offset;
>> + u8 lun_offset;
>> u8 blk_offset;
>> - u8 blk_len;
>> u8 pg_offset;
>> - u8 pg_len;
>> - u8 sect_offset;
>> - u8 sect_len;
>> + u8 pln_offset;
>> + u8 sec_offset;
>> +
>> + u64 ch_mask;
>> + u64 lun_mask;
>> + u64 blk_mask;
>> + u64 pg_mask;
>> + u64 pln_mask;
>> + u64 sec_mask;
>> +};
>> +
>> +struct nvm_addr_format {
>> + u8 ch_len;
>> + u8 lun_len;
>> + u8 chk_len;
>> + u8 sec_len;
>> + u8 rsv_len[2];
>> +
>> + u8 ch_offset;
>> + u8 lun_offset;
>> + u8 chk_offset;
>> + u8 sec_offset;
>> + u8 rsv_off[2];
>> +
>> + u64 ch_mask;
>> + u64 lun_mask;
>> + u64 chk_mask;
>> + u64 sec_mask;
>> + u64 rsv_mask[2];
>> };
>> -struct nvm_id {
>> - u8 ver_id;
>> +/* Device common geometry */
>> +struct nvm_common_geo {
>> + /* kernel short version */
>> + u8 version;
>> +
>> + /* chunk geometry */
>> + u32 num_chk; /* chunks per lun */
>> + u32 clba; /* sectors per chunk */
>> + u16 csecs; /* sector size */
>> + u16 sos; /* out-of-band area size */
>> +
>> + /* device write constrains */
>> + u32 ws_min; /* minimum write size */
>> + u32 ws_opt; /* optimal write size */
>> + u32 mw_cunits; /* distance required for successful read */
>> + u32 maxoc; /* maximum open chunks */
>> + u32 maxocpu; /* maximum open chunks per parallel unit */
>> +
>> + /* device capabilities */
>> + u32 mccap;
>> +
>> + /* device timings */
>> + u32 trdt; /* Avg. Tread (ns) */
>> + u32 trdm; /* Max Tread (ns) */
>> + u32 tprt; /* Avg. Tprog (ns) */
>> + u32 tprm; /* Max Tprog (ns) */
>> + u32 tbet; /* Avg. Terase (ns) */
>> + u32 tbem; /* Max Terase (ns) */
>> +
>> + /* generic address format */
>> + struct nvm_addr_format addrf;
>> +
>> + /* 1.2 compatibility */
>> u8 vmnt;
>> u32 cap;
>> u32 dom;
>> - struct nvm_addr_format ppaf;
>> -
>> - u8 num_ch;
>> - u8 num_lun;
>> - u16 num_chk;
>> - u16 clba;
>> - u16 csecs;
>> - u16 sos;
>> -
>> - u32 ws_min;
>> - u32 ws_opt;
>> - u32 mw_cunits;
>> -
>> - u32 trdt;
>> - u32 trdm;
>> - u32 tprt;
>> - u32 tprm;
>> - u32 tbet;
>> - u32 tbem;
>> - u32 mpos;
>> - u32 mccap;
>> - u16 cpar;
>> -
>> - /* calculated values */
>> - u16 ws_seq;
>> - u16 ws_per_chk;
>> -
>> - /* 1.2 compatibility */
>> u8 mtype;
>> u8 fmtype;
>> + u16 cpar;
>> + u32 mpos;
>> +
>> u8 num_pln;
>> + u8 pln_mode;
>> u16 num_pg;
>> u16 fpg_sz;
>> -} __packed;
>> +};
>> +
>> +/* Device identified geometry */
>> +struct nvm_dev_geo {
>> + /* device reported version */
>> + u8 major_ver_id;
>> + u8 minor_ver_id;
>> +
>> + /* full device geometry */
>> + u16 num_ch;
>> + u16 num_lun;
>> +
>> + /* calculated values */
>> + u16 all_luns;
>> +
>> + struct nvm_common_geo c;
>> +};
>> struct nvm_target {
>> struct list_head list;
>> @@ -274,38 +330,23 @@ enum {
>> NVM_BLK_ST_BAD = 0x8, /* Bad block */
>> };
>> -
>> -/* Device generic information */
>> +/* Instance geometry */
>> struct nvm_geo {
>> - /* generic geometry */
>> - int nr_chnls;
>> - int all_luns; /* across channels */
>> - int nr_luns; /* per channel */
>> - int nr_chks; /* per lun */
>> -
>> - int sec_size;
>> - int oob_size;
>> - int mccap;
>> -
>> - int sec_per_chk;
>> - int sec_per_lun;
>> -
>> - int ws_min;
>> - int ws_opt;
>> - int ws_seq;
>> - int ws_per_chk;
>> + /* instance specific geometry */
>> + int num_ch;
>> + int num_lun; /* per channel */
>> int max_rq_size;
>> -
>> int op;
>> - struct nvm_addr_format ppaf;
>> + /* common geometry */
>> + struct nvm_common_geo c;
>> - /* Legacy 1.2 specific geometry */
>> - int plane_mode; /* drive device in single, double or quad mode */
>> - int nr_planes;
>> - int sec_per_pg; /* only sectors for a single page */
>> - int sec_per_pl; /* all sectors across planes */
>> + /* calculated values */
>> + int all_luns; /* across channels */
>> + int all_chunks; /* across channels */
>> +
>> + sector_t total_secs; /* across channels */
>> };
>> /* sub-device structure */
>> @@ -316,9 +357,6 @@ struct nvm_tgt_dev {
>> /* Base ppas for target LUNs */
>> struct ppa_addr *luns;
>> - sector_t total_secs;
>> -
>> - struct nvm_id identity;
>> struct request_queue *q;
>> struct nvm_dev *parent;
>> @@ -331,15 +369,11 @@ struct nvm_dev {
>> struct list_head devices;
>> /* Device information */
>> - struct nvm_geo geo;
>> -
>> - unsigned long total_secs;
>> + struct nvm_dev_geo dev_geo;
>> unsigned long *lun_map;
>> void *dma_pool;
>> - struct nvm_id identity;
>> -
>> /* Backend device */
>> struct request_queue *q;
>> char name[DISK_NAME_LEN];
>> @@ -359,14 +393,16 @@ static inline struct ppa_addr generic_to_dev_addr(struct nvm_tgt_dev *tgt_dev,
>> struct ppa_addr r)
>> {
>> struct nvm_geo *geo = &tgt_dev->geo;
>> + struct nvm_addr_format_12 *ppaf =
>> + (struct nvm_addr_format_12 *)&geo->c.addrf;
>> struct ppa_addr l;
>> - l.ppa = ((u64)r.g.blk) << geo->ppaf.blk_offset;
>> - l.ppa |= ((u64)r.g.pg) << geo->ppaf.pg_offset;
>> - l.ppa |= ((u64)r.g.sec) << geo->ppaf.sect_offset;
>> - l.ppa |= ((u64)r.g.pl) << geo->ppaf.pln_offset;
>> - l.ppa |= ((u64)r.g.lun) << geo->ppaf.lun_offset;
>> - l.ppa |= ((u64)r.g.ch) << geo->ppaf.ch_offset;
>> + l.ppa = ((u64)r.g.ch) << ppaf->ch_offset;
>> + l.ppa |= ((u64)r.g.lun) << ppaf->lun_offset;
>> + l.ppa |= ((u64)r.g.blk) << ppaf->blk_offset;
>> + l.ppa |= ((u64)r.g.pg) << ppaf->pg_offset;
>> + l.ppa |= ((u64)r.g.pl) << ppaf->pln_offset;
>> + l.ppa |= ((u64)r.g.sec) << ppaf->sec_offset;
>> return l;
>> }
>> @@ -375,24 +411,18 @@ static inline struct ppa_addr dev_to_generic_addr(struct nvm_tgt_dev *tgt_dev,
>> struct ppa_addr r)
>> {
>> struct nvm_geo *geo = &tgt_dev->geo;
>> + struct nvm_addr_format_12 *ppaf =
>> + (struct nvm_addr_format_12 *)&geo->c.addrf;
>> struct ppa_addr l;
>> l.ppa = 0;
>> - /*
>> - * (r.ppa << X offset) & X len bitmask. X eq. blk, pg, etc.
>> - */
>> - l.g.blk = (r.ppa >> geo->ppaf.blk_offset) &
>> - (((1 << geo->ppaf.blk_len) - 1));
>> - l.g.pg |= (r.ppa >> geo->ppaf.pg_offset) &
>> - (((1 << geo->ppaf.pg_len) - 1));
>> - l.g.sec |= (r.ppa >> geo->ppaf.sect_offset) &
>> - (((1 << geo->ppaf.sect_len) - 1));
>> - l.g.pl |= (r.ppa >> geo->ppaf.pln_offset) &
>> - (((1 << geo->ppaf.pln_len) - 1));
>> - l.g.lun |= (r.ppa >> geo->ppaf.lun_offset) &
>> - (((1 << geo->ppaf.lun_len) - 1));
>> - l.g.ch |= (r.ppa >> geo->ppaf.ch_offset) &
>> - (((1 << geo->ppaf.ch_len) - 1));
>> +
>> + l.g.ch = (r.ppa & ppaf->ch_mask) >> ppaf->ch_offset;
>> + l.g.lun = (r.ppa & ppaf->lun_mask) >> ppaf->lun_offset;
>> + l.g.blk = (r.ppa & ppaf->blk_mask) >> ppaf->blk_offset;
>> + l.g.pg = (r.ppa & ppaf->pg_mask) >> ppaf->pg_offset;
>> + l.g.pl = (r.ppa & ppaf->pln_mask) >> ppaf->pln_offset;
>> + l.g.sec = (r.ppa & ppaf->sec_mask) >> ppaf->sec_offset;
>> return l;
>> }
>
> This code looks like a lot of shuffling around for little gain.

The gain is that we move from a spec-centric geometry to an abstract geometry that contains all the necessary information for targets to use the drive. This goes beyond 1.2 and 2.0, so, as I see it, it is a big gain. Consider that upper layers (beyond pblk) can eventually use this geometry to access the device.
>
> Instead of going from the base assumption that is
>
> base
> -> 1.2
> -> 2.0
>
> go with
>
> base 2.0
> -> 1.2
>
> That simplifies where the code is going, and where it will be in the future. It is more complex to maintain the above when new targets in the future will most probably only consider 2.0 implementations.

We have 1.2 and 2.0 at the subsystem level, not at the target level, so there is no base; they are just separate.

I’d rather keep them separate instead of deriving 1.2 from 2.0 so that we can keep the paths clean. Otherwise we end up doing casts that will make the code very difficult to maintain as we add revisions to the 2.0 spec.

>
> The patch does a lot of things at the same time. E.g.:
>
> 1) Adding 1.2 version check in pblk_init.c. This should be a separate patch.
> 2) Introduces constants for spec versions. This should be a separate patch.
> 3) Refactors nvm_geo into nvm_dev_geo. Keep it as nvm_geo and make pblk use that structure by default. It should not be necessary for pblk to know about the 1.2 data structures. For the special cases of get/set and addressing, it can use the 1.2 variables in the nvm_geo if necessary. We can also put it in lightnvm core, but it is probably not worth doing.

The reason to do this is to have the common geo structure in one place so that targets reuse the same geometry when the device is partitioned. I think it simplifies the geometry, as we no longer do the “hack” of changing the number of channels and LUNs at a per-instance level for each partition.
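
To make the split concrete, a rough sketch of what filling an instance geometry could look like under this scheme (the helper name nvm_fill_tgt_geo is made up for the sake of illustration; the struct fields are the ones introduced in this patch):

/*
 * Sketch only: instance-specific fields describe the channels/LUNs owned
 * by this target; the common geometry is copied verbatim from the device.
 */
static void nvm_fill_tgt_geo(struct nvm_tgt_dev *tgt_dev, struct nvm_dev *dev,
                             int num_ch, int num_lun)
{
        struct nvm_geo *geo = &tgt_dev->geo;
        struct nvm_dev_geo *dev_geo = &dev->dev_geo;

        geo->num_ch = num_ch;
        geo->num_lun = num_lun;
        geo->all_luns = num_ch * num_lun;

        /* same for every instance on the device - no per-partition "hack" */
        geo->c = dev_geo->c;

        /* derived values for this instance */
        geo->all_chunks = geo->all_luns * geo->c.num_chk;
        geo->total_secs = (sector_t)geo->all_chunks * geo->c.clba;
}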

Pblk needs the 1.2 address format to form 1.2 addresses. We cannot assume that the 1.2 page/plane/sector bits are contiguous in the ppa format - it is this way for CNEX Westlake, but it is not necessarily this way for other 1.2 controllers out there. Believe me, I would really like to get rid of it...
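
For illustration (this just mirrors the address helpers already in the patch, nothing new): the bit positions must come from the ppaf that the device reports, not from a hardcoded layout, so the same helper works for controllers with different 1.2 layouts:

static u64 build_ppa12(struct nvm_addr_format_12 *ppaf, u64 ch, u64 lun,
                       u64 blk, u64 pg, u64 pl, u64 sec)
{
        u64 ppa = 0;

        /* offsets are device-reported; nothing is assumed contiguous */
        ppa |= ch << ppaf->ch_offset;
        ppa |= lun << ppaf->lun_offset;
        ppa |= blk << ppaf->blk_offset;
        ppa |= pg << ppaf->pg_offset;
        ppa |= pl << ppaf->pln_offset;
        ppa |= sec << ppaf->sec_offset;

        return ppa;
}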

> 4) maxoc / maxocpu: I did not add them in the early patches, as there is no implementation that will use them. When they are implemented, they can be added. At the very least, this should go into a separate patch.

We do use it. I’ll separate it out. Support is still missing in pblk, but that will come in future patches.

> 5) the rename of ppaf -> addrf / ppaf_bitsize -> addrf_len should be in a separate patch.
> 6) rename sec_offset/sec_len -> go into a separate patch or keep as is.
> 7) addition of the nvme_nvm_id data structure: I can see where you are going with this, but it does not have anything to do with what the patch describes. It should go into a separate patch. However, I would rather just keep the original implementation for identifying 1.2/2.0.

The only point here is to have it spec-agnostic. Let me separate the patch and then we can see what’s best. This is more of a cleanup than a functional patch.

> 8) If you want to remove the identify data structure in nvm_geo, make it in another patch.

Fair enough. I wanted to keep things together as they are all related. But I see the point.

I’ll separate the patches and then we can take one at a time.

Javier

2018-02-16 18:48:45

by Javier González

[permalink] [raw]
Subject: Re: [PATCH 6/8] lightnvm: pblk: implement get log report chunk


> On 15 Feb 2018, at 02.59, Matias Bjørling <[email protected]> wrote:
>
> On 02/13/2018 03:06 PM, Javier González wrote:
>> From: Javier González <[email protected]>
>> In preparation of pblk supporting 2.0, implement the get log report
>> chunk in pblk.
>> This patch only replicates the bad block functionality as the rest of the
>> metadata requires new pblk functionality (e.g., wear-index to implement
>> wear-leveling). This functionality will come in future patches.
>> Signed-off-by: Javier González <[email protected]>
>> ---
>> drivers/lightnvm/pblk-core.c | 118 +++++++++++++++++++++++----
>> drivers/lightnvm/pblk-init.c | 186 +++++++++++++++++++++++++++++++-----------
>> drivers/lightnvm/pblk-sysfs.c | 67 +++++++++++++++
>> drivers/lightnvm/pblk.h | 20 +++++
>> 4 files changed, 327 insertions(+), 64 deletions(-)
>> diff --git a/drivers/lightnvm/pblk-core.c b/drivers/lightnvm/pblk-core.c
>> index 519af8b9eab7..01b78ee5c0e0 100644
>> --- a/drivers/lightnvm/pblk-core.c
>> +++ b/drivers/lightnvm/pblk-core.c
>> @@ -44,11 +44,12 @@ static void pblk_line_mark_bb(struct work_struct *work)
>> }
>> static void pblk_mark_bb(struct pblk *pblk, struct pblk_line *line,
>> - struct ppa_addr *ppa)
>> + struct ppa_addr ppa_addr)
>> {
>> struct nvm_tgt_dev *dev = pblk->dev;
>> struct nvm_geo *geo = &dev->geo;
>> - int pos = pblk_ppa_to_pos(geo, *ppa);
>> + struct ppa_addr *ppa;
>> + int pos = pblk_ppa_to_pos(geo, ppa_addr);
>> pr_debug("pblk: erase failed: line:%d, pos:%d\n", line->id, pos);
>> atomic_long_inc(&pblk->erase_failed);
>> @@ -58,6 +59,15 @@ static void pblk_mark_bb(struct pblk *pblk, struct pblk_line *line,
>> pr_err("pblk: attempted to erase bb: line:%d, pos:%d\n",
>> line->id, pos);
>> + /* Not necessary to mark bad blocks on 2.0 spec. */
>> + if (geo->c.version == NVM_OCSSD_SPEC_20)
>> + return;
>> +
>> + ppa = kmalloc(sizeof(struct ppa_addr), GFP_ATOMIC);
>> + if (!ppa)
>> + return;
>> +
>> + *ppa = ppa_addr;
>> pblk_gen_run_ws(pblk, NULL, ppa, pblk_line_mark_bb,
>> GFP_ATOMIC, pblk->bb_wq);
>> }
>> @@ -69,16 +79,8 @@ static void __pblk_end_io_erase(struct pblk *pblk, struct nvm_rq *rqd)
>> line = &pblk->lines[pblk_ppa_to_line(rqd->ppa_addr)];
>> atomic_dec(&line->left_seblks);
>> - if (rqd->error) {
>> - struct ppa_addr *ppa;
>> -
>> - ppa = kmalloc(sizeof(struct ppa_addr), GFP_ATOMIC);
>> - if (!ppa)
>> - return;
>> -
>> - *ppa = rqd->ppa_addr;
>> - pblk_mark_bb(pblk, line, ppa);
>> - }
>> + if (rqd->error)
>> + pblk_mark_bb(pblk, line, rqd->ppa_addr);
>> atomic_dec(&pblk->inflight_io);
>> }
>> @@ -92,6 +94,47 @@ static void pblk_end_io_erase(struct nvm_rq *rqd)
>> mempool_free(rqd, pblk->e_rq_pool);
>> }
>> +/*
>> + * Get information for all chunks from the device.
>> + *
>> + * The caller is responsible for freeing the returned structure
>> + */
>> +struct nvm_chunk_log_page *pblk_chunk_get_info(struct pblk *pblk)
>> +{
>> + struct nvm_tgt_dev *dev = pblk->dev;
>> + struct nvm_geo *geo = &dev->geo;
>> + struct nvm_chunk_log_page *log;
>> + unsigned long len;
>> + int ret;
>> +
>> + len = geo->all_chunks * sizeof(*log);
>> + log = kzalloc(len, GFP_KERNEL);
>> + if (!log)
>> + return ERR_PTR(-ENOMEM);
>> +
>> + ret = nvm_get_chunk_log_page(dev, log, 0, len);
>> + if (ret) {
>> + pr_err("pblk: could not get chunk log page (%d)\n", ret);
>> + kfree(log);
>> + return ERR_PTR(-EIO);
>> + }
>> +
>> + return log;
>> +}
>> +
>> +struct nvm_chunk_log_page *pblk_chunk_get_off(struct pblk *pblk,
>> + struct nvm_chunk_log_page *lp,
>> + struct ppa_addr ppa)
>> +{
>> + struct nvm_tgt_dev *dev = pblk->dev;
>> + struct nvm_geo *geo = &dev->geo;
>> + int ch_off = ppa.m.ch * geo->c.num_chk * geo->num_lun;
>> + int lun_off = ppa.m.lun * geo->c.num_chk;
>> + int chk_off = ppa.m.chk;
>> +
>> + return lp + ch_off + lun_off + chk_off;
>> +}
>> +
>> void __pblk_map_invalidate(struct pblk *pblk, struct pblk_line *line,
>> u64 paddr)
>> {
>> @@ -1094,10 +1137,38 @@ static int pblk_line_init_bb(struct pblk *pblk, struct pblk_line *line,
>> return 1;
>> }
>> +static int pblk_prepare_new_line(struct pblk *pblk, struct pblk_line *line)
>> +{
>> + struct pblk_line_meta *lm = &pblk->lm;
>> + struct nvm_tgt_dev *dev = pblk->dev;
>> + struct nvm_geo *geo = &dev->geo;
>> + int blk_to_erase = atomic_read(&line->blk_in_line);
>> + int i;
>> +
>> + for (i = 0; i < lm->blk_per_line; i++) {
>> + int state = line->chks[i].state;
>> + struct pblk_lun *rlun = &pblk->luns[i];
>> +
>> + /* Free chunks should not be erased */
>> + if (state & NVM_CHK_ST_FREE) {
>> + set_bit(pblk_ppa_to_pos(geo, rlun->chunk_bppa),
>> + line->erase_bitmap);
>> + blk_to_erase--;
>> + line->chks[i].state = NVM_CHK_ST_HOST_USE;
>> + }
>> +
>> + WARN_ONCE(state & NVM_CHK_ST_OPEN,
>> + "pblk: open chunk in new line: %d\n",
>> + line->id);
>> + }
>> +
>> + return blk_to_erase;
>> +}
>> +
>> static int pblk_line_prepare(struct pblk *pblk, struct pblk_line *line)
>> {
>> struct pblk_line_meta *lm = &pblk->lm;
>> - int blk_in_line = atomic_read(&line->blk_in_line);
>> + int blk_to_erase;
>> line->map_bitmap = kzalloc(lm->sec_bitmap_len, GFP_ATOMIC);
>> if (!line->map_bitmap)
>> @@ -1110,7 +1181,21 @@ static int pblk_line_prepare(struct pblk *pblk, struct pblk_line *line)
>> return -ENOMEM;
>> }
>> + /* Bad blocks do not need to be erased */
>> + bitmap_copy(line->erase_bitmap, line->blk_bitmap, lm->blk_per_line);
>> +
>> spin_lock(&line->lock);
>> +
>> + /* If we have not written to this line, we need to mark up free chunks
>> + * as already erased
>> + */
>> + if (line->state == PBLK_LINESTATE_NEW) {
>> + blk_to_erase = pblk_prepare_new_line(pblk, line);
>> + line->state = PBLK_LINESTATE_FREE;
>> + } else {
>> + blk_to_erase = atomic_read(&line->blk_in_line);
>> + }
>> +
>> if (line->state != PBLK_LINESTATE_FREE) {
>> kfree(line->map_bitmap);
>> kfree(line->invalid_bitmap);
>> @@ -1122,15 +1207,12 @@ static int pblk_line_prepare(struct pblk *pblk, struct pblk_line *line)
>> line->state = PBLK_LINESTATE_OPEN;
>> - atomic_set(&line->left_eblks, blk_in_line);
>> - atomic_set(&line->left_seblks, blk_in_line);
>> + atomic_set(&line->left_eblks, blk_to_erase);
>> + atomic_set(&line->left_seblks, blk_to_erase);
>> line->meta_distance = lm->meta_distance;
>> spin_unlock(&line->lock);
>> - /* Bad blocks do not need to be erased */
>> - bitmap_copy(line->erase_bitmap, line->blk_bitmap, lm->blk_per_line);
>> -
>> kref_init(&line->ref);
>> return 0;
>> diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c
>> index 72b7902e5d1c..dfc68718e27e 100644
>> --- a/drivers/lightnvm/pblk-init.c
>> +++ b/drivers/lightnvm/pblk-init.c
>> @@ -402,6 +402,7 @@ static void pblk_line_meta_free(struct pblk_line *line)
>> {
>> kfree(line->blk_bitmap);
>> kfree(line->erase_bitmap);
>> + kfree(line->chks);
>> }
>> static void pblk_lines_free(struct pblk *pblk)
>> @@ -470,25 +471,15 @@ static void *pblk_bb_get_log(struct pblk *pblk)
>> return log;
>> }
>> -static int pblk_bb_line(struct pblk *pblk, struct pblk_line *line,
>> - u8 *bb_log, int blk_per_line)
>> +static void *pblk_chunk_get_log(struct pblk *pblk)
>> {
>> struct nvm_tgt_dev *dev = pblk->dev;
>> struct nvm_geo *geo = &dev->geo;
>> - int i, bb_cnt = 0;
>> - for (i = 0; i < blk_per_line; i++) {
>> - struct pblk_lun *rlun = &pblk->luns[i];
>> - u8 *lun_bb_log = bb_log + i * blk_per_line;
>> -
>> - if (lun_bb_log[line->id] == NVM_BLK_T_FREE)
>> - continue;
>> -
>> - set_bit(pblk_ppa_to_pos(geo, rlun->bppa), line->blk_bitmap);
>> - bb_cnt++;
>> - }
>> -
>> - return bb_cnt;
>> + if (geo->c.version == NVM_OCSSD_SPEC_12)
>> + return pblk_bb_get_log(pblk);
>> + else
>> + return pblk_chunk_get_info(pblk);
>> }
>> static int pblk_luns_init(struct pblk *pblk, struct ppa_addr *luns)
>> @@ -517,6 +508,7 @@ static int pblk_luns_init(struct pblk *pblk, struct ppa_addr *luns)
>> rlun = &pblk->luns[i];
>> rlun->bppa = luns[lunid];
>> + rlun->chunk_bppa = luns[i];
>> sema_init(&rlun->wr_sem, 1);
>> }
>> @@ -696,8 +688,125 @@ static int pblk_lines_alloc_metadata(struct pblk *pblk)
>> return -ENOMEM;
>> }
>> -static int pblk_setup_line_meta(struct pblk *pblk, struct pblk_line *line,
>> - void *chunk_log, long *nr_bad_blks)
>> +static int pblk_setup_line_meta_12(struct pblk *pblk, struct pblk_line *line,
>> + void *chunk_log)
>> +{
>> + struct nvm_tgt_dev *dev = pblk->dev;
>> + struct nvm_geo *geo = &dev->geo;
>> + struct pblk_line_meta *lm = &pblk->lm;
>> + int i, chk_per_lun, nr_bad_chks = 0;
>> +
>> + chk_per_lun = geo->c.num_chk * geo->c.pln_mode;
>> +
>> + for (i = 0; i < lm->blk_per_line; i++) {
>> + struct pblk_chunk *chunk = &line->chks[i];
>> + struct pblk_lun *rlun = &pblk->luns[i];
>> + u8 *lun_bb_log = chunk_log + i * chk_per_lun;
>> +
>> + /*
>> + * In 1.2 spec. chunk state is not persisted by the device. Thus
>> + * some of the values are reset each time pblk is instantiated.
>> + */
>> + if (lun_bb_log[line->id] == NVM_BLK_T_FREE)
>> + chunk->state = NVM_CHK_ST_HOST_USE;
>> + else
>> + chunk->state = NVM_CHK_ST_OFFLINE;
>> +
>> + chunk->type = NVM_CHK_TP_W_SEQ;
>> + chunk->wi = 0;
>> + chunk->slba = -1;
>> + chunk->cnlb = geo->c.clba;
>> + chunk->wp = 0;
>> +
>> + if (!(chunk->state & NVM_CHK_ST_OFFLINE))
>> + continue;
>> +
>> + set_bit(pblk_ppa_to_pos(geo, rlun->bppa), line->blk_bitmap);
>> + nr_bad_chks++;
>> + }
>> +
>> + return nr_bad_chks;
>> +}
>> +
>> +static int pblk_setup_line_meta_20(struct pblk *pblk, struct pblk_line *line,
>> + struct nvm_chunk_log_page *log_page)
>> +{
>> + struct nvm_tgt_dev *dev = pblk->dev;
>> + struct nvm_geo *geo = &dev->geo;
>> + struct pblk_line_meta *lm = &pblk->lm;
>> + int i, nr_bad_chks = 0;
>> +
>> + for (i = 0; i < lm->blk_per_line; i++) {
>> + struct pblk_chunk *chunk = &line->chks[i];
>> + struct pblk_lun *rlun = &pblk->luns[i];
>> + struct nvm_chunk_log_page *chunk_log_page;
>> + struct ppa_addr ppa;
>> +
>> + ppa = rlun->chunk_bppa;
>> + ppa.m.chk = line->id;
>> + chunk_log_page = pblk_chunk_get_off(pblk, log_page, ppa);
>> +
>> + chunk->state = chunk_log_page->state;
>> + chunk->type = chunk_log_page->type;
>> + chunk->wi = chunk_log_page->wear_index;
>> + chunk->slba = le64_to_cpu(chunk_log_page->slba);
>> + chunk->cnlb = le64_to_cpu(chunk_log_page->cnlb);
>> + chunk->wp = le64_to_cpu(chunk_log_page->wp);
>> +
>> + if (!(chunk->state & NVM_CHK_ST_OFFLINE))
>> + continue;
>> +
>> + if (chunk->type & NVM_CHK_TP_SZ_SPEC) {
>> + WARN_ONCE(1, "pblk: custom-sized chunks unsupported\n");
>> + continue;
>> + }
>> +
>> + set_bit(pblk_ppa_to_pos(geo, rlun->chunk_bppa),
>> + line->blk_bitmap);
>> + nr_bad_chks++;
>> + }
>> +
>> + return nr_bad_chks;
>> +}
>> +
>
> The device chunk to nvm_chunk logic belongs in the lightnvm core. A
> target should preferably not have to handle the difference between the
> 1.2 and 2.0 interfaces.
>

I thought about this and it is cleaner; the problem is that the return
values from these two interfaces are completely different. My other
approach was to ask only for report chunk and then create the log page
format for 1.2. I can see this working well. I'll give it another spin.
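
Roughly something like this (sketch only - the helper name nvm_bb_to_chunk_log and its placement are assumptions for the sake of discussion, and the state mapping is simplified since 1.2 does not persist chunk state):

/*
 * Emulate get log report chunk for a 1.2 device from its bad block table,
 * so that targets only ever parse the 2.0 format.
 */
static void nvm_bb_to_chunk_log(struct nvm_dev_geo *dev_geo, u8 *bb_tbl,
                                int nr_blks, struct nvm_chunk_log_page *log)
{
        int i;

        for (i = 0; i < nr_blks; i++) {
                struct nvm_chunk_log_page *chk = &log[i];

                chk->state = (bb_tbl[i] == NVM_BLK_T_FREE) ?
                                NVM_CHK_ST_FREE : NVM_CHK_ST_OFFLINE;
                chk->type = NVM_CHK_TP_W_SEQ;
                chk->wear_index = 0;
                chk->slba = cpu_to_le64(0);
                chk->cnlb = cpu_to_le64(dev_geo->c.clba);
                chk->wp = cpu_to_le64(0);
        }
}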

Javier



2018-02-16 18:49:19

by Javier Gonzalez

[permalink] [raw]
Subject: Re: [PATCH 2/8] lightnvm: show generic geometry in sysfs


> On 15 Feb 2018, at 02.20, Matias Bjørling <[email protected]> wrote:
>
> On 02/13/2018 03:06 PM, Javier González wrote:
>> From: Javier González <[email protected]>
>> Apart from showing the geometry returned by the different identify
>> commands, provide the generic geometry too, as this is the geometry that
>> targets will use to describe the device.
>> Signed-off-by: Javier González <[email protected]>
>> ---
>> drivers/nvme/host/lightnvm.c | 146 ++++++++++++++++++++++++++++---------------
>> 1 file changed, 97 insertions(+), 49 deletions(-)
>> diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
>> index 97739e668602..7bc75182c723 100644
>> --- a/drivers/nvme/host/lightnvm.c
>> +++ b/drivers/nvme/host/lightnvm.c
>> @@ -944,8 +944,27 @@ static ssize_t nvm_dev_attr_show(struct device *dev,
>> return scnprintf(page, PAGE_SIZE, "%u.%u\n",
>> dev_geo->major_ver_id,
>> dev_geo->minor_ver_id);
>> - } else if (strcmp(attr->name, "capabilities") == 0) {
>> - return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.cap);
>> + } else if (strcmp(attr->name, "clba") == 0) {
>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.clba);
>> + } else if (strcmp(attr->name, "csecs") == 0) {
>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.csecs);
>> + } else if (strcmp(attr->name, "sos") == 0) {
>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.sos);
>> + } else if (strcmp(attr->name, "ws_min") == 0) {
>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.ws_min);
>> + } else if (strcmp(attr->name, "ws_opt") == 0) {
>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.ws_opt);
>> + } else if (strcmp(attr->name, "maxoc") == 0) {
>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.maxoc);
>> + } else if (strcmp(attr->name, "maxocpu") == 0) {
>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.maxocpu);
>> + } else if (strcmp(attr->name, "mw_cunits") == 0) {
>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.mw_cunits);
>> + } else if (strcmp(attr->name, "media_capabilities") == 0) {
>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.mccap);
>> + } else if (strcmp(attr->name, "max_phys_secs") == 0) {
>> + return scnprintf(page, PAGE_SIZE, "%u\n",
>> + ndev->ops->max_phys_sect);
>> } else if (strcmp(attr->name, "read_typ") == 0) {
>> return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.trdt);
>> } else if (strcmp(attr->name, "read_max") == 0) {
>> @@ -984,19 +1003,8 @@ static ssize_t nvm_dev_attr_show_12(struct device *dev,
>> attr = &dattr->attr;
>> - if (strcmp(attr->name, "vendor_opcode") == 0) {
>> - return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.vmnt);
>> - } else if (strcmp(attr->name, "device_mode") == 0) {
>> - return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.dom);
>> - /* kept for compatibility */
>> - } else if (strcmp(attr->name, "media_manager") == 0) {
>> - return scnprintf(page, PAGE_SIZE, "%s\n", "gennvm");
>> - } else if (strcmp(attr->name, "ppa_format") == 0) {
>> + if (strcmp(attr->name, "ppa_format") == 0) {
>> return nvm_dev_attr_show_ppaf((void *)&dev_geo->c.addrf, page);
>> - } else if (strcmp(attr->name, "media_type") == 0) { /* u8 */
>> - return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.mtype);
>> - } else if (strcmp(attr->name, "flash_media_type") == 0) {
>> - return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.fmtype);
>> } else if (strcmp(attr->name, "num_channels") == 0) {
>> return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->num_ch);
>> } else if (strcmp(attr->name, "num_luns") == 0) {
>> @@ -1011,8 +1019,6 @@ static ssize_t nvm_dev_attr_show_12(struct device *dev,
>> return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.fpg_sz);
>> } else if (strcmp(attr->name, "hw_sector_size") == 0) {
>> return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.csecs);
>> - } else if (strcmp(attr->name, "oob_sector_size") == 0) {/* u32 */
>> - return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.sos);
>> } else if (strcmp(attr->name, "prog_typ") == 0) {
>> return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.tprt);
>> } else if (strcmp(attr->name, "prog_max") == 0) {
>> @@ -1021,13 +1027,21 @@ static ssize_t nvm_dev_attr_show_12(struct device *dev,
>> return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.tbet);
>> } else if (strcmp(attr->name, "erase_max") == 0) {
>> return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.tbem);
>> + } else if (strcmp(attr->name, "vendor_opcode") == 0) {
>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.vmnt);
>> + } else if (strcmp(attr->name, "device_mode") == 0) {
>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.dom);
>> + /* kept for compatibility */
>> + } else if (strcmp(attr->name, "media_manager") == 0) {
>> + return scnprintf(page, PAGE_SIZE, "%s\n", "gennvm");
>> + } else if (strcmp(attr->name, "capabilities") == 0) {
>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.cap);
>> + } else if (strcmp(attr->name, "media_type") == 0) { /* u8 */
>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.mtype);
>> + } else if (strcmp(attr->name, "flash_media_type") == 0) {
>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.fmtype);
>> } else if (strcmp(attr->name, "multiplane_modes") == 0) {
>> return scnprintf(page, PAGE_SIZE, "0x%08x\n", dev_geo->c.mpos);
>> - } else if (strcmp(attr->name, "media_capabilities") == 0) {
>> - return scnprintf(page, PAGE_SIZE, "0x%08x\n", dev_geo->c.mccap);
>> - } else if (strcmp(attr->name, "max_phys_secs") == 0) {
>> - return scnprintf(page, PAGE_SIZE, "%u\n",
>> - ndev->ops->max_phys_sect);
>> } else {
>> return scnprintf(page, PAGE_SIZE,
>> "Unhandled attr(%s) in `nvm_dev_attr_show_12`\n",
>> @@ -1035,6 +1049,17 @@ static ssize_t nvm_dev_attr_show_12(struct device *dev,
>> }
>> }
>> +static ssize_t nvm_dev_attr_show_lbaf(struct nvm_addr_format *lbaf,
>> + char *page)
>> +{
>> + return scnprintf(page, PAGE_SIZE,
>> + "0x%02x%02x%02x%02x%02x%02x%02x%02x\n",
>> + lbaf->ch_offset, lbaf->ch_len,
>> + lbaf->lun_offset, lbaf->lun_len,
>> + lbaf->chk_offset, lbaf->chk_len,
>> + lbaf->sec_offset, lbaf->sec_len);
>> +}
>> +
>> static ssize_t nvm_dev_attr_show_20(struct device *dev,
>> struct device_attribute *dattr, char *page)
>> {
>> @@ -1048,20 +1073,14 @@ static ssize_t nvm_dev_attr_show_20(struct device *dev,
>> attr = &dattr->attr;
>> - if (strcmp(attr->name, "groups") == 0) {
>> + if (strcmp(attr->name, "lba_format") == 0) {
>> + return nvm_dev_attr_show_lbaf((void *)&dev_geo->c.addrf, page);
>> + } else if (strcmp(attr->name, "groups") == 0) {
>> return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->num_ch);
>> } else if (strcmp(attr->name, "punits") == 0) {
>> return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->num_lun);
>> } else if (strcmp(attr->name, "chunks") == 0) {
>> return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.num_chk);
>> - } else if (strcmp(attr->name, "clba") == 0) {
>> - return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.clba);
>> - } else if (strcmp(attr->name, "ws_min") == 0) {
>> - return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.ws_min);
>> - } else if (strcmp(attr->name, "ws_opt") == 0) {
>> - return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.ws_opt);
>> - } else if (strcmp(attr->name, "mw_cunits") == 0) {
>> - return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.mw_cunits);
>> } else if (strcmp(attr->name, "write_typ") == 0) {
>> return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.tprt);
>> } else if (strcmp(attr->name, "write_max") == 0) {
>> @@ -1086,7 +1105,19 @@ static ssize_t nvm_dev_attr_show_20(struct device *dev,
>> /* general attributes */
>> static NVM_DEV_ATTR_RO(version);
>> -static NVM_DEV_ATTR_RO(capabilities);
>> +
>> +static NVM_DEV_ATTR_RO(ws_min);
>> +static NVM_DEV_ATTR_RO(ws_opt);
>> +static NVM_DEV_ATTR_RO(mw_cunits);
>> +static NVM_DEV_ATTR_RO(maxoc);
>> +static NVM_DEV_ATTR_RO(maxocpu);
>> +
>> +static NVM_DEV_ATTR_RO(media_capabilities);
>> +static NVM_DEV_ATTR_RO(max_phys_secs);
>> +
>> +static NVM_DEV_ATTR_RO(clba);
>> +static NVM_DEV_ATTR_RO(csecs);
>> +static NVM_DEV_ATTR_RO(sos);
>> static NVM_DEV_ATTR_RO(read_typ);
>> static NVM_DEV_ATTR_RO(read_max);
>> @@ -1105,42 +1136,53 @@ static NVM_DEV_ATTR_12_RO(num_blocks);
>> static NVM_DEV_ATTR_12_RO(num_pages);
>> static NVM_DEV_ATTR_12_RO(page_size);
>> static NVM_DEV_ATTR_12_RO(hw_sector_size);
>> -static NVM_DEV_ATTR_12_RO(oob_sector_size);
>> static NVM_DEV_ATTR_12_RO(prog_typ);
>> static NVM_DEV_ATTR_12_RO(prog_max);
>> static NVM_DEV_ATTR_12_RO(erase_typ);
>> static NVM_DEV_ATTR_12_RO(erase_max);
>> static NVM_DEV_ATTR_12_RO(multiplane_modes);
>> -static NVM_DEV_ATTR_12_RO(media_capabilities);
>> -static NVM_DEV_ATTR_12_RO(max_phys_secs);
>> +static NVM_DEV_ATTR_12_RO(capabilities);
>> static struct attribute *nvm_dev_attrs_12[] = {
>> &dev_attr_version.attr,
>> - &dev_attr_capabilities.attr,
>> -
>> - &dev_attr_vendor_opcode.attr,
>> - &dev_attr_device_mode.attr,
>> - &dev_attr_media_manager.attr,
>> &dev_attr_ppa_format.attr,
>> - &dev_attr_media_type.attr,
>> - &dev_attr_flash_media_type.attr,
>> +
>> &dev_attr_num_channels.attr,
>> &dev_attr_num_luns.attr,
>> &dev_attr_num_planes.attr,
>> &dev_attr_num_blocks.attr,
>> &dev_attr_num_pages.attr,
>> &dev_attr_page_size.attr,
>> +
>> &dev_attr_hw_sector_size.attr,
>> - &dev_attr_oob_sector_size.attr,
>> +
>> + &dev_attr_clba.attr,
>> + &dev_attr_csecs.attr,
>> + &dev_attr_sos.attr,
>> +
>> + &dev_attr_ws_min.attr,
>> + &dev_attr_ws_opt.attr,
>> + &dev_attr_maxoc.attr,
>> + &dev_attr_maxocpu.attr,
>> + &dev_attr_mw_cunits.attr,
>> +
>> + &dev_attr_media_capabilities.attr,
>> + &dev_attr_max_phys_secs.attr,
>> +
>
> This breaks user-space. The intention is for user-space to decide
> based on version id. Then it can either retrieve the 1.2 or 2.0
> attributes. The 2.0 attributes should not be available when a device
> is 1.2.
>

Why does it break it? I'm only adding new entries.

The objective is to expose the generic geometry, since this is the
structure that is passed on to the targets. Since some of the values are
calculated, there is value in exposing this information, I believe.

Another way of doing it is to add the generic geometry at the target
level, showing what base values it is getting, including the real number
of channels/groups and luns/PUs.

Would this be better in your opinion?


>> &dev_attr_read_typ.attr,
>> &dev_attr_read_max.attr,
>> &dev_attr_prog_typ.attr,
>> &dev_attr_prog_max.attr,
>> &dev_attr_erase_typ.attr,
>> &dev_attr_erase_max.attr,
>> +
>> + &dev_attr_vendor_opcode.attr,
>> + &dev_attr_device_mode.attr,
>> + &dev_attr_media_manager.attr,
>> + &dev_attr_capabilities.attr,
>> + &dev_attr_media_type.attr,
>> + &dev_attr_flash_media_type.attr,
>> &dev_attr_multiplane_modes.attr,
>> - &dev_attr_media_capabilities.attr,
>> - &dev_attr_max_phys_secs.attr,
>> NULL,
>> };
>> @@ -1152,12 +1194,9 @@ static const struct attribute_group nvm_dev_attr_group_12 = {
>> /* 2.0 values */
>> static NVM_DEV_ATTR_20_RO(groups);
>> +static NVM_DEV_ATTR_20_RO(lba_format);
>> static NVM_DEV_ATTR_20_RO(punits);
>> static NVM_DEV_ATTR_20_RO(chunks);
>> -static NVM_DEV_ATTR_20_RO(clba);
>> -static NVM_DEV_ATTR_20_RO(ws_min);
>> -static NVM_DEV_ATTR_20_RO(ws_opt);
>> -static NVM_DEV_ATTR_20_RO(mw_cunits);
>> static NVM_DEV_ATTR_20_RO(write_typ);
>> static NVM_DEV_ATTR_20_RO(write_max);
>> static NVM_DEV_ATTR_20_RO(reset_typ);
>> @@ -1165,16 +1204,25 @@ static NVM_DEV_ATTR_20_RO(reset_max);
>> static struct attribute *nvm_dev_attrs_20[] = {
>> &dev_attr_version.attr,
>> - &dev_attr_capabilities.attr,
>> + &dev_attr_lba_format.attr,
>> &dev_attr_groups.attr,
>> &dev_attr_punits.attr,
>> &dev_attr_chunks.attr,
>> +
>> &dev_attr_clba.attr,
>> + &dev_attr_csecs.attr,
>> + &dev_attr_sos.attr,
>
> csecs and sos are derived from the generic block device data structures.

As mentioned above, it is to represent the generic geometry.

>
>> +
>> &dev_attr_ws_min.attr,
>> &dev_attr_ws_opt.attr,
>> + &dev_attr_maxoc.attr,
>> + &dev_attr_maxocpu.attr,
>
> When the maxoc/maxocpu are in another patch, these changes can be included.

ok.

>
>> &dev_attr_mw_cunits.attr,
>> + &dev_attr_media_capabilities.attr,
>
> What is the meaning of media in this context? The 2.0 spec defines
> vector copy and double resets in its capabilities; it does not have
> media in mind.
>

It refers to the mcap (vector copy and double resets for now, as you
mention). I kept the name, but I can rename it if that is better...

>> + &dev_attr_max_phys_secs.attr,
>> +
>
> I kill max_phys_secs in another patch. It has been made redundant
> after null_blk has been removed.

I'll answer this on the patch - I have a question here.

>> &dev_attr_read_typ.attr,
>> &dev_attr_read_max.attr,
>> &dev_attr_write_typ.attr,

Javier



2018-02-16 18:49:53

by Javier González

[permalink] [raw]
Subject: Re: [PATCH 5/8] lightnvm: implement get log report chunk helpers

> On 15 Feb 2018, at 04.51, Matias Bjørling <[email protected]> wrote:
>
> On 02/13/2018 03:06 PM, Javier González wrote:
>> From: Javier González <[email protected]>
>> The 2.0 spec provides a report chunk log page that can be retrieved
>> using the standard nvme get log page. This replaces the dedicated
>> get/set bad block table in 1.2.
>> This patch implements the helper functions to allow targets to retrieve
>> the chunk metadata using get log page.
>> Signed-off-by: Javier González <[email protected]>
>> ---
>> drivers/lightnvm/core.c | 28 +++++++++++++++++++++++++
>> drivers/nvme/host/lightnvm.c | 50 ++++++++++++++++++++++++++++++++++++++++++++
>> include/linux/lightnvm.h | 32 ++++++++++++++++++++++++++++
>> 3 files changed, 110 insertions(+)
>> diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c
>> index 80492fa6ee76..6857a888544a 100644
>> --- a/drivers/lightnvm/core.c
>> +++ b/drivers/lightnvm/core.c
>> @@ -43,6 +43,8 @@ struct nvm_ch_map {
>> struct nvm_dev_map {
>> struct nvm_ch_map *chnls;
>> int nr_chnls;
>> + int bch;
>> + int blun;
>> };
>
> bch/blun should be unnecessary if the map_to_dev / map_to_tgt
> functions are implemented correctly (they can with the ppa_addr order
> update as far as I can see)
>
> What is the reason they can't be used? I might be missing something.

This is a precalculated value used for the offset in
nvm_get_chunk_log_page(), not in map_to_dev and map_to_tgt in the fast
path.

The problem is that, since targets are remapped to always start at
ch:0,lun:0 on creation, we need this value. How would you get the offset
otherwise?

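(For illustration, a minimal sketch of the alternative: remap the
target's first address with nvm_map_to_dev() and derive the offset from
the resulting device coordinates. This is not code from the series; the
ppa field accessors and the use of dev_geo are assumptions.)

static unsigned long nvm_log_off_tgt_to_dev(struct nvm_tgt_dev *tgt_dev)
{
	struct nvm_dev_geo *dev_geo = &tgt_dev->parent->dev_geo;
	struct ppa_addr ppa;
	unsigned long lun_off;

	/* target-relative ch:0, lun:0, chk:0 */
	ppa.ppa = 0;

	/* rewrite to device-relative channel/lun */
	nvm_map_to_dev(tgt_dev, &ppa);

	lun_off = ppa.g.lun + ppa.g.ch * dev_geo->num_lun;

	return lun_off * dev_geo->c.num_chk *
			sizeof(struct nvm_chunk_log_page);
}
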
>
>> static struct nvm_target *nvm_find_target(struct nvm_dev *dev, const char *name)
>> @@ -171,6 +173,9 @@ static struct nvm_tgt_dev *nvm_create_tgt_dev(struct nvm_dev *dev,
>> if (!dev_map->chnls)
>> goto err_chnls;
>> + dev_map->bch = bch;
>> + dev_map->blun = blun;
>> +
>> luns = kcalloc(nr_luns, sizeof(struct ppa_addr), GFP_KERNEL);
>> if (!luns)
>> goto err_luns;
>> @@ -561,6 +566,19 @@ static void nvm_unregister_map(struct nvm_dev *dev)
>> kfree(rmap);
>> }
>> +static unsigned long nvm_log_off_tgt_to_dev(struct nvm_tgt_dev *tgt_dev)
>> +{
>> + struct nvm_dev_map *dev_map = tgt_dev->map;
>> + struct nvm_geo *geo = &tgt_dev->geo;
>> + int lun_off;
>> + unsigned long off;
>> +
>> + lun_off = dev_map->blun + dev_map->bch * geo->num_lun;
>> + off = lun_off * geo->c.num_chk * sizeof(struct nvm_chunk_log_page);
>> +
>> + return off;
>> +}
>> +
>> static void nvm_map_to_dev(struct nvm_tgt_dev *tgt_dev, struct ppa_addr *p)
>> {
>> struct nvm_dev_map *dev_map = tgt_dev->map;
>> @@ -720,6 +738,16 @@ static void nvm_free_rqd_ppalist(struct nvm_tgt_dev *tgt_dev,
>> nvm_dev_dma_free(tgt_dev->parent, rqd->ppa_list, rqd->dma_ppa_list);
>> }
>> +int nvm_get_chunk_log_page(struct nvm_tgt_dev *tgt_dev,
>> + struct nvm_chunk_log_page *log,
>> + unsigned long off, unsigned long len)
>> +{
>> + struct nvm_dev *dev = tgt_dev->parent;
>> +
>> + off += nvm_log_off_tgt_to_dev(tgt_dev);
>> +
>> + return dev->ops->get_chunk_log_page(tgt_dev->parent, log, off, len);
>> +}
>
> I think that this should be exported (EXPORT_SYMBOL) like get_bb and
> set_bb are. Otherwise linking fails if pblk is compiled as a module.
>

It is implemented as get_bb and set_bb. Am I missing anything here?

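(If the concern is module linking, this is how I read it: the helper
from this patch plus an EXPORT_SYMBOL, mirroring how
nvm_get_tgt_bb_tbl()/nvm_set_tgt_bb_tbl() are exported. Sketch only, in
case that is what you mean.)

int nvm_get_chunk_log_page(struct nvm_tgt_dev *tgt_dev,
			   struct nvm_chunk_log_page *log,
			   unsigned long off, unsigned long len)
{
	struct nvm_dev *dev = tgt_dev->parent;

	/* translate the target-relative offset to a device offset */
	off += nvm_log_off_tgt_to_dev(tgt_dev);

	return dev->ops->get_chunk_log_page(dev, log, off, len);
}
EXPORT_SYMBOL(nvm_get_chunk_log_page);
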
>> int nvm_set_tgt_bb_tbl(struct nvm_tgt_dev *tgt_dev, struct ppa_addr *ppas,
>> int nr_ppas, int type)
>> diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
>> index 7bc75182c723..355d9b0cf084 100644
>> --- a/drivers/nvme/host/lightnvm.c
>> +++ b/drivers/nvme/host/lightnvm.c
>> @@ -35,6 +35,10 @@ enum nvme_nvm_admin_opcode {
>> nvme_nvm_admin_set_bb_tbl = 0xf1,
>> };
>> +enum nvme_nvm_log_page {
>> + NVME_NVM_LOG_REPORT_CHUNK = 0xCA,
>> +};
>> +
>
> The convention is to have it as lower-case.

Ok.

>
>> struct nvme_nvm_ph_rw {
>> __u8 opcode;
>> __u8 flags;
>> @@ -553,6 +557,50 @@ static int nvme_nvm_set_bb_tbl(struct nvm_dev *nvmdev, struct ppa_addr *ppas,
>> return ret;
>> }
>> +static int nvme_nvm_get_chunk_log_page(struct nvm_dev *nvmdev,
>> + struct nvm_chunk_log_page *log,
>> + unsigned long off,
>> + unsigned long total_len)
>
> The chunk_log_page interface is to be used by both targets and the block layer code. Therefore, it is not convenient to have a byte-addressable interface exposed all the way up to a target. Instead, use slba and nlb. That simplifies what a target has to implement, and also allows the offset check to be removed.
>
> Chunk log page should be defined in the nvme implementation, such that it can be accessed through the traditional LBA path.
>
> struct nvme_nvm_chk_meta {
> __u8 state;
> __u8 type;
> __u8 wli;
> __u8 rsvd[5];
> __le64 slba;
> __le64 cnlb;
> __le64 wp;
> };

It makes sense to have this way, yes.

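(A sketch of how a target could then consume it; the prototype and
struct nvm_chk_meta follow the patch further down in this mail, while
the helper name, the allocation and the 1.2-style ppa fields are made up
for illustration.)

static struct nvm_chk_meta *pblk_get_lun_meta(struct nvm_tgt_dev *tgt_dev,
					      int ch, int lun)
{
	struct nvm_geo *geo = &tgt_dev->geo;
	struct nvm_chk_meta *meta;
	struct ppa_addr ppa;
	int ret;

	/* one entry per chunk in this (group, parallel unit) */
	meta = kcalloc(geo->c.num_chk, sizeof(*meta), GFP_KERNEL);
	if (!meta)
		return NULL;

	ppa.ppa = 0;
	ppa.g.ch = ch;		/* target-relative group */
	ppa.g.lun = lun;	/* target-relative parallel unit */

	ret = nvm_get_chunk_meta(tgt_dev, meta, ppa, geo->c.num_chk);
	if (ret) {
		kfree(meta);
		return NULL;
	}

	return meta;
}
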
>
>> +{
>> + struct nvme_ns *ns = nvmdev->q->queuedata;
>> + struct nvme_command c = { };
>> + unsigned long offset = off, left = total_len;
>> + unsigned long len, len_dwords;
>> + void *buf = log;
>> + int ret;
>> +
>> + /* The offset needs to be dword-aligned */
>> + if (offset & 0x3)
>> + return -EINVAL;
>
> No need to check for this with the above interface changes.

ok.

>
>> +
>> + do {
>> + /* Send 256KB at a time */
>> + len = (1 << 18) > left ? left : (1 << 18);
>> + len_dwords = (len >> 2) - 1;
>
> This is namespace dependent. Use ctrl->max_hw_sectors << 9 instead.

ok.

>
>> +
>> + c.get_log_page.opcode = nvme_admin_get_log_page;
>> + c.get_log_page.nsid = cpu_to_le32(ns->head->ns_id);
>> + c.get_log_page.lid = NVME_NVM_LOG_REPORT_CHUNK;
>> + c.get_log_page.lpol = cpu_to_le32(offset & 0xffffffff);
>> + c.get_log_page.lpou = cpu_to_le32(offset >> 32);
>> + c.get_log_page.numdl = cpu_to_le16(len_dwords & 0xffff);
>> + c.get_log_page.numdu = cpu_to_le16(len_dwords >> 16);
>> +
>> + ret = nvme_submit_sync_cmd(ns->ctrl->admin_q, &c, buf, len);
>> + if (ret) {
>> + dev_err(ns->ctrl->device,
>> + "get chunk log page failed (%d)\n", ret);
>> + break;
>> + }
>> +
>> + buf += len;
>> + offset += len;
>> + left -= len;
>> + } while (left);
>> +
>> + return ret;
>> +}
>> +
>> static inline void nvme_nvm_rqtocmd(struct nvm_rq *rqd, struct nvme_ns *ns,
>> struct nvme_nvm_command *c)
>> {
>> @@ -684,6 +732,8 @@ static struct nvm_dev_ops nvme_nvm_dev_ops = {
>> .get_bb_tbl = nvme_nvm_get_bb_tbl,
>> .set_bb_tbl = nvme_nvm_set_bb_tbl,
>> + .get_chunk_log_page = nvme_nvm_get_chunk_log_page,
>> +
>> .submit_io = nvme_nvm_submit_io,
>> .submit_io_sync = nvme_nvm_submit_io_sync,
>> diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h
>> index 1148b3f22b27..eb2900a18160 100644
>> --- a/include/linux/lightnvm.h
>> +++ b/include/linux/lightnvm.h
>> @@ -73,10 +73,13 @@ struct nvm_rq;
>> struct nvm_id;
>> struct nvm_dev;
>> struct nvm_tgt_dev;
>> +struct nvm_chunk_log_page;
>> typedef int (nvm_id_fn)(struct nvm_dev *);
>> typedef int (nvm_op_bb_tbl_fn)(struct nvm_dev *, struct ppa_addr, u8 *);
>> typedef int (nvm_op_set_bb_fn)(struct nvm_dev *, struct ppa_addr *, int, int);
>> +typedef int (nvm_get_chunk_lp_fn)(struct nvm_dev *, struct nvm_chunk_log_page *,
>> + unsigned long, unsigned long);
>> typedef int (nvm_submit_io_fn)(struct nvm_dev *, struct nvm_rq *);
>> typedef int (nvm_submit_io_sync_fn)(struct nvm_dev *, struct nvm_rq *);
>> typedef void *(nvm_create_dma_pool_fn)(struct nvm_dev *, char *);
>> @@ -90,6 +93,8 @@ struct nvm_dev_ops {
>> nvm_op_bb_tbl_fn *get_bb_tbl;
>> nvm_op_set_bb_fn *set_bb_tbl;
>> + nvm_get_chunk_lp_fn *get_chunk_log_page;
>> +
>> nvm_submit_io_fn *submit_io;
>> nvm_submit_io_sync_fn *submit_io_sync;
>> @@ -286,6 +291,30 @@ struct nvm_dev_geo {
>> struct nvm_common_geo c;
>> };
>> +enum {
>> + /* Chunk states */
>> + NVM_CHK_ST_FREE = 1 << 0,
>> + NVM_CHK_ST_CLOSED = 1 << 1,
>> + NVM_CHK_ST_OPEN = 1 << 2,
>> + NVM_CHK_ST_OFFLINE = 1 << 3,
>> + NVM_CHK_ST_HOST_USE = 1 << 7,
>> +
>> + /* Chunk types */
>> + NVM_CHK_TP_W_SEQ = 1 << 0,
>> + NVM_CHK_TP_W_RAN = 1 << 2,
>
> The RAN bit is the second bit (1 << 1)
>

Yes, my bad...

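Corrected, the type flags would then read:

	/* Chunk types */
	NVM_CHK_TP_W_SEQ = 1 << 0,
	NVM_CHK_TP_W_RAN = 1 << 1,
	NVM_CHK_TP_SZ_SPEC = 1 << 4,
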
>> + NVM_CHK_TP_SZ_SPEC = 1 << 4,
>> +};
>> +
>> +struct nvm_chunk_log_page {
>> + __u8 state;
>> + __u8 type;
>> + __u8 wear_index;
>> + __u8 rsvd[5];
>> + __u64 slba;
>> + __u64 cnlb;
>> + __u64 wp;
>> +};
>
> Should be represented both within the device driver and the lightnvm header file.

ok.

>> +
>> struct nvm_target {
>> struct list_head list;
>> struct nvm_tgt_dev *dev;
>> @@ -505,6 +534,9 @@ extern struct nvm_dev *nvm_alloc_dev(int);
>> extern int nvm_register(struct nvm_dev *);
>> extern void nvm_unregister(struct nvm_dev *);
>> +extern int nvm_get_chunk_log_page(struct nvm_tgt_dev *,
>> + struct nvm_chunk_log_page *,
>> + unsigned long, unsigned long);
>> extern int nvm_set_tgt_bb_tbl(struct nvm_tgt_dev *, struct ppa_addr *,
>> int, int);
>> extern int nvm_max_phys_sects(struct nvm_tgt_dev *);
>
> Here is a compile-tested and lightly tested patch with the fixes above. Note that the chunk state definition has been taken out, as it properly belongs in the next patch. Also note that it uses the get log page patch I sent that wires up the 1.2.1 get log page support.

Cool! Yes, after seeing your patch generalizing get log page I was
planning on rebasing either way - just wanted to get this out for
review and avoid rebasing too many times. I can put it together with the
rest of the patches to fit it into the series. You can sign it off when
you pick it up if you want.

>
> diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c
> index 689c97b97775..cc22bf48fd13 100644
> --- a/drivers/lightnvm/core.c
> +++ b/drivers/lightnvm/core.c
> @@ -841,6 +841,19 @@ int nvm_get_tgt_bb_tbl(struct nvm_tgt_dev *tgt_dev, struct ppa_addr ppa,
> }
> EXPORT_SYMBOL(nvm_get_tgt_bb_tbl);
>
> +int nvm_get_chunk_meta(struct nvm_tgt_dev *tgt_dev, struct nvm_chk_meta *meta,
> + struct ppa_addr ppa, int nchks)
> +{
> + struct nvm_dev *dev = tgt_dev->parent;
> +
> + nvm_map_to_dev(tgt_dev, &ppa);
> + ppa = generic_to_dev_addr(tgt_dev, ppa);
> +
> + return dev->ops->get_chk_meta(tgt_dev->parent, meta,
> + (sector_t)ppa.ppa, nchks);
> +}
> +EXPORT_SYMBOL(nvm_get_chunk_meta);
> +
> static int nvm_core_init(struct nvm_dev *dev)
> {
> struct nvm_id *id = &dev->identity;
> diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
> index 839c0b96466a..8f81f41a504c 100644
> --- a/drivers/nvme/host/lightnvm.c
> +++ b/drivers/nvme/host/lightnvm.c
> @@ -35,6 +35,10 @@ enum nvme_nvm_admin_opcode {
> nvme_nvm_admin_set_bb_tbl = 0xf1,
> };
>
> +enum nvme_nvm_log_page {
> + NVME_NVM_LOG_REPORT_CHUNK = 0xca,
> +};
> +
> struct nvme_nvm_ph_rw {
> __u8 opcode;
> __u8 flags;
> @@ -236,6 +240,16 @@ struct nvme_nvm_id20 {
> __u8 vs[1024];
> };
>
> +struct nvme_nvm_chk_meta {
> + __u8 state;
> + __u8 type;
> + __u8 wli;
> + __u8 rsvd[5];
> + __le64 slba;
> + __le64 cnlb;
> + __le64 wp;
> +};
> +
> /*
> * Check we didn't inadvertently grow the command struct
> */
> @@ -252,6 +266,9 @@ static inline void _nvme_nvm_check_size(void)
> BUILD_BUG_ON(sizeof(struct nvme_nvm_bb_tbl) != 64);
> BUILD_BUG_ON(sizeof(struct nvme_nvm_id20_addrf) != 8);
> BUILD_BUG_ON(sizeof(struct nvme_nvm_id20) != NVME_IDENTIFY_DATA_SIZE);
> + BUILD_BUG_ON(sizeof(struct nvme_nvm_chk_meta) != 32);
> + BUILD_BUG_ON(sizeof(struct nvme_nvm_chk_meta) !=
> + sizeof(struct nvm_chk_meta));
> }
>
> static int init_grp(struct nvm_id *nvm_id, struct nvme_nvm_id12 *id12)
> @@ -474,6 +491,48 @@ static int nvme_nvm_set_bb_tbl(struct nvm_dev *nvmdev, struct ppa_addr *ppas,
> return ret;
> }
>
> +static int nvme_nvm_get_chk_meta(struct nvm_dev *ndev,
> + struct nvm_chk_meta *meta,
> + sector_t slba, int nchks)
> +{
> + struct nvme_ns *ns = ndev->q->queuedata;
> + struct nvme_ctrl *ctrl = ns->ctrl;
> + struct nvme_nvm_chk_meta *dev_meta = (struct nvme_nvm_chk_meta *)meta;
> + size_t left = nchks * sizeof(struct nvme_nvm_chk_meta);
> + size_t offset, len;
> + int ret, i;
> +
> + offset = slba * sizeof(struct nvme_nvm_chk_meta);
> +
> + while (left) {
> + len = min_t(unsigned, left, ctrl->max_hw_sectors << 9);
> +
> + ret = nvme_get_log_ext(ctrl, ns, NVME_NVM_LOG_REPORT_CHUNK,
> + dev_meta, len, offset);
> + if (ret) {
> + dev_err(ctrl->device, "Get REPORT CHUNK log error\n");
> + break;
> + }
> +
> + for (i = 0; i < len; i += sizeof(struct nvme_nvm_chk_meta)) {
> + meta->state = dev_meta->state;
> + meta->type = dev_meta->type;
> + meta->wli = dev_meta->wli;
> + meta->slba = le64_to_cpu(dev_meta->slba);
> + meta->cnlb = le64_to_cpu(dev_meta->cnlb);
> + meta->wp = le64_to_cpu(dev_meta->wp);
> +
> + meta++;
> + dev_meta++;
> + }
> +
> + offset += len;
> + left -= len;
> + }
> +
> + return ret;
> +}
> +
> static inline void nvme_nvm_rqtocmd(struct nvm_rq *rqd, struct nvme_ns *ns,
> struct nvme_nvm_command *c)
> {
> @@ -605,6 +664,8 @@ static struct nvm_dev_ops nvme_nvm_dev_ops = {
> .get_bb_tbl = nvme_nvm_get_bb_tbl,
> .set_bb_tbl = nvme_nvm_set_bb_tbl,
>
> + .get_chk_meta = nvme_nvm_get_chk_meta,
> +
> .submit_io = nvme_nvm_submit_io,
> .submit_io_sync = nvme_nvm_submit_io_sync,
>
> diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
> index 1ca08f4993ba..12abe16d6e64 100644
> --- a/drivers/nvme/host/nvme.h
> +++ b/drivers/nvme/host/nvme.h
> @@ -396,6 +396,10 @@ int nvme_reset_ctrl(struct nvme_ctrl *ctrl);
> int nvme_delete_ctrl(struct nvme_ctrl *ctrl);
> int nvme_delete_ctrl_sync(struct nvme_ctrl *ctrl);
>
> +int nvme_get_log_ext(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
> + u8 log_page, void *log,
> + size_t size, size_t offset);
> +
> extern const struct attribute_group nvme_ns_id_attr_group;
> extern const struct block_device_operations nvme_ns_head_ops;
>
> diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h
> index e55b10573c99..f056cf72144f 100644
> --- a/include/linux/lightnvm.h
> +++ b/include/linux/lightnvm.h
> @@ -49,10 +49,13 @@ struct nvm_rq;
> struct nvm_id;
> struct nvm_dev;
> struct nvm_tgt_dev;
> +struct nvm_chk_meta;
>
> typedef int (nvm_id_fn)(struct nvm_dev *, struct nvm_id *);
> typedef int (nvm_op_bb_tbl_fn)(struct nvm_dev *, struct ppa_addr, u8 *);
> typedef int (nvm_op_set_bb_fn)(struct nvm_dev *, struct ppa_addr *, int, int);
> +typedef int (nvm_get_chk_meta_fn)(struct nvm_dev *, struct nvm_chk_meta *,
> + sector_t, int);
> typedef int (nvm_submit_io_fn)(struct nvm_dev *, struct nvm_rq *);
> typedef int (nvm_submit_io_sync_fn)(struct nvm_dev *, struct nvm_rq *);
> typedef void *(nvm_create_dma_pool_fn)(struct nvm_dev *, char *);
> @@ -66,6 +69,8 @@ struct nvm_dev_ops {
> nvm_op_bb_tbl_fn *get_bb_tbl;
> nvm_op_set_bb_fn *set_bb_tbl;
>
> + nvm_get_chk_meta_fn *get_chk_meta;
> +
> nvm_submit_io_fn *submit_io;
> nvm_submit_io_sync_fn *submit_io_sync;
>
> @@ -353,6 +358,20 @@ struct nvm_dev {
> struct list_head targets;
> };
>
> +/*
> + * Note: The structure size is linked to nvme_nvm_chk_meta such that the same
> + * buffer can be used when converting from little endian to cpu addressing.
> + */
> +struct nvm_chk_meta {
> + u8 state;
> + u8 type;
> + u8 wli;
> + u8 rsvd[5];
> + u64 slba;
> + u64 cnlb;
> + u64 wp;
> +};
> +
> static inline struct ppa_addr generic_to_dev_addr(struct nvm_tgt_dev *tgt_dev,
> struct ppa_addr r)
> {



2018-02-19 07:28:15

by Matias Bjørling

[permalink] [raw]
Subject: Re: [PATCH 2/8] lightnvm: show generic geometry in sysfs

On 02/16/2018 07:35 AM, Javier Gonzalez wrote:
>
>> On 15 Feb 2018, at 02.20, Matias Bjørling <[email protected]> wrote:
>>
>> On 02/13/2018 03:06 PM, Javier González wrote:
>>> From: Javier González <[email protected]>
>>> Apart from showing the geometry returned by the different identify
>>> commands, provide the generic geometry too, as this is the geometry that
>>> targets will use to describe the device.
>>> Signed-off-by: Javier González <[email protected]>
>>> ---
>>> drivers/nvme/host/lightnvm.c | 146 ++++++++++++++++++++++++++++---------------
>>> 1 file changed, 97 insertions(+), 49 deletions(-)
>>> diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
>>> index 97739e668602..7bc75182c723 100644
>>> --- a/drivers/nvme/host/lightnvm.c
>>> +++ b/drivers/nvme/host/lightnvm.c
>>> @@ -944,8 +944,27 @@ static ssize_t nvm_dev_attr_show(struct device *dev,
>>> return scnprintf(page, PAGE_SIZE, "%u.%u\n",
>>> dev_geo->major_ver_id,
>>> dev_geo->minor_ver_id);
>>> - } else if (strcmp(attr->name, "capabilities") == 0) {
>>> - return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.cap);
>>> + } else if (strcmp(attr->name, "clba") == 0) {
>>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.clba);
>>> + } else if (strcmp(attr->name, "csecs") == 0) {
>>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.csecs);
>>> + } else if (strcmp(attr->name, "sos") == 0) {
>>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.sos);
>>> + } else if (strcmp(attr->name, "ws_min") == 0) {
>>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.ws_min);
>>> + } else if (strcmp(attr->name, "ws_opt") == 0) {
>>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.ws_opt);
>>> + } else if (strcmp(attr->name, "maxoc") == 0) {
>>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.maxoc);
>>> + } else if (strcmp(attr->name, "maxocpu") == 0) {
>>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.maxocpu);
>>> + } else if (strcmp(attr->name, "mw_cunits") == 0) {
>>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.mw_cunits);
>>> + } else if (strcmp(attr->name, "media_capabilities") == 0) {
>>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.mccap);
>>> + } else if (strcmp(attr->name, "max_phys_secs") == 0) {
>>> + return scnprintf(page, PAGE_SIZE, "%u\n",
>>> + ndev->ops->max_phys_sect);
>>> } else if (strcmp(attr->name, "read_typ") == 0) {
>>> return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.trdt);
>>> } else if (strcmp(attr->name, "read_max") == 0) {
>>> @@ -984,19 +1003,8 @@ static ssize_t nvm_dev_attr_show_12(struct device *dev,
>>> attr = &dattr->attr;
>>> - if (strcmp(attr->name, "vendor_opcode") == 0) {
>>> - return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.vmnt);
>>> - } else if (strcmp(attr->name, "device_mode") == 0) {
>>> - return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.dom);
>>> - /* kept for compatibility */
>>> - } else if (strcmp(attr->name, "media_manager") == 0) {
>>> - return scnprintf(page, PAGE_SIZE, "%s\n", "gennvm");
>>> - } else if (strcmp(attr->name, "ppa_format") == 0) {
>>> + if (strcmp(attr->name, "ppa_format") == 0) {
>>> return nvm_dev_attr_show_ppaf((void *)&dev_geo->c.addrf, page);
>>> - } else if (strcmp(attr->name, "media_type") == 0) { /* u8 */
>>> - return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.mtype);
>>> - } else if (strcmp(attr->name, "flash_media_type") == 0) {
>>> - return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.fmtype);
>>> } else if (strcmp(attr->name, "num_channels") == 0) {
>>> return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->num_ch);
>>> } else if (strcmp(attr->name, "num_luns") == 0) {
>>> @@ -1011,8 +1019,6 @@ static ssize_t nvm_dev_attr_show_12(struct device *dev,
>>> return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.fpg_sz);
>>> } else if (strcmp(attr->name, "hw_sector_size") == 0) {
>>> return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.csecs);
>>> - } else if (strcmp(attr->name, "oob_sector_size") == 0) {/* u32 */
>>> - return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.sos);
>>> } else if (strcmp(attr->name, "prog_typ") == 0) {
>>> return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.tprt);
>>> } else if (strcmp(attr->name, "prog_max") == 0) {
>>> @@ -1021,13 +1027,21 @@ static ssize_t nvm_dev_attr_show_12(struct device *dev,
>>> return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.tbet);
>>> } else if (strcmp(attr->name, "erase_max") == 0) {
>>> return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.tbem);
>>> + } else if (strcmp(attr->name, "vendor_opcode") == 0) {
>>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.vmnt);
>>> + } else if (strcmp(attr->name, "device_mode") == 0) {
>>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.dom);
>>> + /* kept for compatibility */
>>> + } else if (strcmp(attr->name, "media_manager") == 0) {
>>> + return scnprintf(page, PAGE_SIZE, "%s\n", "gennvm");
>>> + } else if (strcmp(attr->name, "capabilities") == 0) {
>>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.cap);
>>> + } else if (strcmp(attr->name, "media_type") == 0) { /* u8 */
>>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.mtype);
>>> + } else if (strcmp(attr->name, "flash_media_type") == 0) {
>>> + return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.fmtype);
>>> } else if (strcmp(attr->name, "multiplane_modes") == 0) {
>>> return scnprintf(page, PAGE_SIZE, "0x%08x\n", dev_geo->c.mpos);
>>> - } else if (strcmp(attr->name, "media_capabilities") == 0) {
>>> - return scnprintf(page, PAGE_SIZE, "0x%08x\n", dev_geo->c.mccap);
>>> - } else if (strcmp(attr->name, "max_phys_secs") == 0) {
>>> - return scnprintf(page, PAGE_SIZE, "%u\n",
>>> - ndev->ops->max_phys_sect);
>>> } else {
>>> return scnprintf(page, PAGE_SIZE,
>>> "Unhandled attr(%s) in `nvm_dev_attr_show_12`\n",
>>> @@ -1035,6 +1049,17 @@ static ssize_t nvm_dev_attr_show_12(struct device *dev,
>>> }
>>> }
>>> +static ssize_t nvm_dev_attr_show_lbaf(struct nvm_addr_format *lbaf,
>>> + char *page)
>>> +{
>>> + return scnprintf(page, PAGE_SIZE,
>>> + "0x%02x%02x%02x%02x%02x%02x%02x%02x\n",
>>> + lbaf->ch_offset, lbaf->ch_len,
>>> + lbaf->lun_offset, lbaf->lun_len,
>>> + lbaf->chk_offset, lbaf->chk_len,
>>> + lbaf->sec_offset, lbaf->sec_len);
>>> +}
>>> +
>>> static ssize_t nvm_dev_attr_show_20(struct device *dev,
>>> struct device_attribute *dattr, char *page)
>>> {
>>> @@ -1048,20 +1073,14 @@ static ssize_t nvm_dev_attr_show_20(struct device *dev,
>>> attr = &dattr->attr;
>>> - if (strcmp(attr->name, "groups") == 0) {
>>> + if (strcmp(attr->name, "lba_format") == 0) {
>>> + return nvm_dev_attr_show_lbaf((void *)&dev_geo->c.addrf, page);
>>> + } else if (strcmp(attr->name, "groups") == 0) {
>>> return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->num_ch);
>>> } else if (strcmp(attr->name, "punits") == 0) {
>>> return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->num_lun);
>>> } else if (strcmp(attr->name, "chunks") == 0) {
>>> return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.num_chk);
>>> - } else if (strcmp(attr->name, "clba") == 0) {
>>> - return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.clba);
>>> - } else if (strcmp(attr->name, "ws_min") == 0) {
>>> - return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.ws_min);
>>> - } else if (strcmp(attr->name, "ws_opt") == 0) {
>>> - return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.ws_opt);
>>> - } else if (strcmp(attr->name, "mw_cunits") == 0) {
>>> - return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.mw_cunits);
>>> } else if (strcmp(attr->name, "write_typ") == 0) {
>>> return scnprintf(page, PAGE_SIZE, "%u\n", dev_geo->c.tprt);
>>> } else if (strcmp(attr->name, "write_max") == 0) {
>>> @@ -1086,7 +1105,19 @@ static ssize_t nvm_dev_attr_show_20(struct device *dev,
>>> /* general attributes */
>>> static NVM_DEV_ATTR_RO(version);
>>> -static NVM_DEV_ATTR_RO(capabilities);
>>> +
>>> +static NVM_DEV_ATTR_RO(ws_min);
>>> +static NVM_DEV_ATTR_RO(ws_opt);
>>> +static NVM_DEV_ATTR_RO(mw_cunits);
>>> +static NVM_DEV_ATTR_RO(maxoc);
>>> +static NVM_DEV_ATTR_RO(maxocpu);
>>> +
>>> +static NVM_DEV_ATTR_RO(media_capabilities);
>>> +static NVM_DEV_ATTR_RO(max_phys_secs);
>>> +
>>> +static NVM_DEV_ATTR_RO(clba);
>>> +static NVM_DEV_ATTR_RO(csecs);
>>> +static NVM_DEV_ATTR_RO(sos);
>>> static NVM_DEV_ATTR_RO(read_typ);
>>> static NVM_DEV_ATTR_RO(read_max);
>>> @@ -1105,42 +1136,53 @@ static NVM_DEV_ATTR_12_RO(num_blocks);
>>> static NVM_DEV_ATTR_12_RO(num_pages);
>>> static NVM_DEV_ATTR_12_RO(page_size);
>>> static NVM_DEV_ATTR_12_RO(hw_sector_size);
>>> -static NVM_DEV_ATTR_12_RO(oob_sector_size);
>>> static NVM_DEV_ATTR_12_RO(prog_typ);
>>> static NVM_DEV_ATTR_12_RO(prog_max);
>>> static NVM_DEV_ATTR_12_RO(erase_typ);
>>> static NVM_DEV_ATTR_12_RO(erase_max);
>>> static NVM_DEV_ATTR_12_RO(multiplane_modes);
>>> -static NVM_DEV_ATTR_12_RO(media_capabilities);
>>> -static NVM_DEV_ATTR_12_RO(max_phys_secs);
>>> +static NVM_DEV_ATTR_12_RO(capabilities);
>>> static struct attribute *nvm_dev_attrs_12[] = {
>>> &dev_attr_version.attr,
>>> - &dev_attr_capabilities.attr,
>>> -
>>> - &dev_attr_vendor_opcode.attr,
>>> - &dev_attr_device_mode.attr,
>>> - &dev_attr_media_manager.attr,
>>> &dev_attr_ppa_format.attr,
>>> - &dev_attr_media_type.attr,
>>> - &dev_attr_flash_media_type.attr,
>>> +
>>> &dev_attr_num_channels.attr,
>>> &dev_attr_num_luns.attr,
>>> &dev_attr_num_planes.attr,
>>> &dev_attr_num_blocks.attr,
>>> &dev_attr_num_pages.attr,
>>> &dev_attr_page_size.attr,
>>> +
>>> &dev_attr_hw_sector_size.attr,
>>> - &dev_attr_oob_sector_size.attr,
>>> +
>>> + &dev_attr_clba.attr,
>>> + &dev_attr_csecs.attr,
>>> + &dev_attr_sos.attr,
>>> +
>>> + &dev_attr_ws_min.attr,
>>> + &dev_attr_ws_opt.attr,
>>> + &dev_attr_maxoc.attr,
>>> + &dev_attr_maxocpu.attr,
>>> + &dev_attr_mw_cunits.attr,
>>> +
>>> + &dev_attr_media_capabilities.attr,
>>> + &dev_attr_max_phys_secs.attr,
>>> +
>>
>> This breaks user-space. The intention is for user-space to decide
>> based on version id. Then it can either retrieve the 1.2 or 2.0
>> attributes. The 2.0 attributes should not be available when a device
>> is 1.2.
>>
>
> Why does it break it? I'm only adding new entries.
>
> The objective is to expose the generic geometry, since this is the
> structure that is passed on to the targets. Since some of the values are
> calculated, there is value in exposing this information, I believe.
>
> Another way of doing it is to add the generic geometry at the target
> level, showing what base values it is getting, including the real number
> of channels/groups and luns/PUs.
>
> Would this be better in your opinion?
>

No. It should be one set of attributes for 1.2 (keep it the way it is
today), and then separate 2.0 attributes. User-space should then
identify by either 1 or 2 in the version attribute.

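E.g., on the user-space side, something along these lines (the sysfs
path below is only an example of where the lightnvm attribute group may
show up for a namespace):

#include <stdio.h>

int main(void)
{
	unsigned int major = 0, minor = 0;
	FILE *f = fopen("/sys/block/nvme0n1/lightnvm/version", "r");

	if (!f)
		return 1;
	if (fscanf(f, "%u.%u", &major, &minor) < 1) {
		fclose(f);
		return 1;
	}
	fclose(f);

	if (major == 1) {
		/* read the 1.2 attributes: ppa_format, num_channels, ... */
	} else if (major == 2) {
		/* read the 2.0 attributes: lba_format, groups, punits, ... */
	}

	return 0;
}
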
>
>>> &dev_attr_read_typ.attr,
>>> &dev_attr_read_max.attr,
>>> &dev_attr_prog_typ.attr,
>>> &dev_attr_prog_max.attr,
>>> &dev_attr_erase_typ.attr,
>>> &dev_attr_erase_max.attr,
>>> +
>>> + &dev_attr_vendor_opcode.attr,
>>> + &dev_attr_device_mode.attr,
>>> + &dev_attr_media_manager.attr,
>>> + &dev_attr_capabilities.attr,
>>> + &dev_attr_media_type.attr,
>>> + &dev_attr_flash_media_type.attr,
>>> &dev_attr_multiplane_modes.attr,
>>> - &dev_attr_media_capabilities.attr,
>>> - &dev_attr_max_phys_secs.attr,
>>> NULL,
>>> };
>>> @@ -1152,12 +1194,9 @@ static const struct attribute_group nvm_dev_attr_group_12 = {
>>> /* 2.0 values */
>>> static NVM_DEV_ATTR_20_RO(groups);
>>> +static NVM_DEV_ATTR_20_RO(lba_format);
>>> static NVM_DEV_ATTR_20_RO(punits);
>>> static NVM_DEV_ATTR_20_RO(chunks);
>>> -static NVM_DEV_ATTR_20_RO(clba);
>>> -static NVM_DEV_ATTR_20_RO(ws_min);
>>> -static NVM_DEV_ATTR_20_RO(ws_opt);
>>> -static NVM_DEV_ATTR_20_RO(mw_cunits);
>>> static NVM_DEV_ATTR_20_RO(write_typ);
>>> static NVM_DEV_ATTR_20_RO(write_max);
>>> static NVM_DEV_ATTR_20_RO(reset_typ);
>>> @@ -1165,16 +1204,25 @@ static NVM_DEV_ATTR_20_RO(reset_max);
>>> static struct attribute *nvm_dev_attrs_20[] = {
>>> &dev_attr_version.attr,
>>> - &dev_attr_capabilities.attr,
>>> + &dev_attr_lba_format.attr,
>>> &dev_attr_groups.attr,
>>> &dev_attr_punits.attr,
>>> &dev_attr_chunks.attr,
>>> +
>>> &dev_attr_clba.attr,
>>> + &dev_attr_csecs.attr,
>>> + &dev_attr_sos.attr,
>>
>> csecs and sos are derived from the generic block device data structures.
>
> As mentioned above, it is to represent the generic geometry.

They are not part of the 2.0 spec. The fields can be derived from elsewhere.

>
>>
>>> +
>>> &dev_attr_ws_min.attr,
>>> &dev_attr_ws_opt.attr,
>>> + &dev_attr_maxoc.attr,
>>> + &dev_attr_maxocpu.attr,
>>
>> When the maxoc/maxocpu are in another patch, these changes can be included.
>
> ok.
>
>>
>>> &dev_attr_mw_cunits.attr,
>>> + &dev_attr_media_capabilities.attr,
>>
>> What is the meaning of media in this context? The 2.0 spec defines
>> vector copy and double resets in its capabilities; it does not have
>> media in mind.
>>
>
> It refers to the mcap (vector copy and double resets for now, as you
> mention). I kept the name, but I can rename it if that is better...
>
>>> + &dev_attr_max_phys_secs.attr,
>>> +
>>
>> I kill max_phys_secs in another patch. It has been made redundant
>> after null_blk has been removed.
>
> I'll answer this on the patch - I have a question here.
>
>>> &dev_attr_read_typ.attr,
>>> &dev_attr_read_max.attr,
>>> &dev_attr_write_typ.attr,
>
> Javier
>


2018-02-19 13:42:49

by Javier González

[permalink] [raw]
Subject: Re: [PATCH 2/8] lightnvm: show generic geometry in sysfs


>>> This breaks user-space. The intention is for user-space to decide
>>> based on version id. Then it can either retrieve the 1.2 or 2.0
>>> attributes. The 2.0 attributes should not be available when a device
>>> is 1.2.
>>>
>> Why does it break it? I'm only adding new entries.
>> The objective is to expose the generic geometry, since this is the
>> structure that is passed on to the targets. Since some of the values are
>> calculated, there is value in exposing this information, I believe.
>> Another way of doing it is to add the generic geometry at the target
>> level, showing what base values it is getting, including the real number
>> of channels/groups and luns/PUs.
>> Would this be better in your opinion?
>
> No. It should be one set of attributes for 1.2 (keep it the way it is today), and then separate 2.0 attributes. User-space should then identify by either 1 or 2 in the version attribute.
>
>>>> ...

>>> csecs and sos are derived from the generic block device data structures.
>> As mentioned above, it is to represent the generic geometry.
>
> They are not part of the 2.0 spec. The fields can be derived from elsewhere.
>>>

Ok. Thanks for looking into it.

Javier.