2018-09-04 10:49:22

by Ocean He

Subject: [PATCH v2] libnvdimm, region_devs: stop NDD_ALIASING bit test if one test passes

From: Ocean He <[email protected]>

There is no need to finish the entire loop to test the NDD_ALIASING bit
against every nvdimm->flags. In practice, all the nd_mapping->nvdimm
entries have the same flags, so it is safe to return ND_DEVICE_NAMESPACE_PMEM
as soon as the NDD_ALIASING bit is found inside the loop, saving a few
CPU cycles.

Signed-off-by: Ocean He <[email protected]>
---
v1: https://lkml.org/lkml/2018/8/19/4
v2: Per Vishal's comments on patch v1, remove the 'alias' variable.
In the loop, just return ND_DEVICE_NAMESPACE_PMEM if the NDD_ALIASING bit is
found for any mapping.
Outside the loop, simply return ND_DEVICE_NAMESPACE_IO.
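
For reference, the loop after this change would read roughly as follows
(reconstructed from the diff below; this is just the post-patch view of
nd_region_to_nstype(), not a separate implementation):

	if (is_memory(&nd_region->dev)) {
		u16 i;

		for (i = 0; i < nd_region->ndr_mappings; i++) {
			struct nd_mapping *nd_mapping = &nd_region->mapping[i];
			struct nvdimm *nvdimm = nd_mapping->nvdimm;

			/* the first aliasing DIMM decides the namespace type */
			if (test_bit(NDD_ALIASING, &nvdimm->flags))
				return ND_DEVICE_NAMESPACE_PMEM;
		}

		return ND_DEVICE_NAMESPACE_IO;
	}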

The following tests pass on a Lenovo ThinkSystem SR630 running 4.19-rc1.
# ndctl create-namespace -r region0 -s 1g -t pmem -m fsdax
# ndctl create-namespace -r region0 -s 1g -t pmem -m sector
# ndctl create-namespace -r region0 -s 1g -t pmem -m devdax
# ndctl list
[
  {
    "dev":"namespace0.2",
    "mode":"devdax",
    "map":"dev",
    "size":1054867456,
    "uuid":"fc3a2126-9b8e-4ab4-baa4-a3ec7f62a326",
    "raw_uuid":"eadc6965-daee-48c5-a0ae-1865ee0c8573",
    "chardev":"dax0.2",
    "numa_node":0
  },
  {
    "dev":"namespace0.1",
    "mode":"sector",
    "size":1071616000,
    "uuid":"0d81d040-93a1-45c6-9791-3dbb7b5f89d2",
    "raw_uuid":"2b1b29e6-0510-4dcf-9902-43b77ccb9df5",
    "sector_size":4096,
    "blockdev":"pmem0.1s",
    "numa_node":0
  },
  {
    "dev":"namespace0.0",
    "mode":"fsdax",
    "map":"dev",
    "size":1054867456,
    "uuid":"cbff92ed-cd45-4d24-9353-7a56d42122b1",
    "raw_uuid":"f6ea1001-5ef0-4942-889a-99005079ab5d",
    "sector_size":512,
    "blockdev":"pmem0",
    "numa_node":0
  }
]

# reboot, and the OS boots up normally
# ndctl destroy-namespace namespace0.2 -f
# ndctl destroy-namespace namespace0.1 -f
# ndctl destroy-namespace namespace0.0 -f

drivers/nvdimm/region_devs.c | 13 ++++++-------
1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/drivers/nvdimm/region_devs.c b/drivers/nvdimm/region_devs.c
index fa37afc..16ee153 100644
--- a/drivers/nvdimm/region_devs.c
+++ b/drivers/nvdimm/region_devs.c
@@ -228,19 +228,18 @@ void nd_blk_region_set_provider_data(struct nd_blk_region *ndbr, void *data)
 int nd_region_to_nstype(struct nd_region *nd_region)
 {
 	if (is_memory(&nd_region->dev)) {
-		u16 i, alias;
+		u16 i;

-		for (i = 0, alias = 0; i < nd_region->ndr_mappings; i++) {
+		for (i = 0; i < nd_region->ndr_mappings; i++) {
 			struct nd_mapping *nd_mapping = &nd_region->mapping[i];
 			struct nvdimm *nvdimm = nd_mapping->nvdimm;

 			if (test_bit(NDD_ALIASING, &nvdimm->flags))
-				alias++;
+				return ND_DEVICE_NAMESPACE_PMEM;
 		}
-		if (alias)
-			return ND_DEVICE_NAMESPACE_PMEM;
-		else
-			return ND_DEVICE_NAMESPACE_IO;
+
+		return ND_DEVICE_NAMESPACE_IO;
+
 	} else if (is_nd_blk(&nd_region->dev)) {
 		return ND_DEVICE_NAMESPACE_BLK;
 	}
--
1.8.3.1



2018-09-04 15:59:46

by Dan Williams

Subject: Re: [PATCH v2] libnvdimm, region_devs: stop NDD_ALIASING bit test if one test passes

On Tue, Sep 4, 2018 at 3:47 AM, Ocean He <[email protected]> wrote:
> From: Ocean He <[email protected]>
>
> There is no need to finish the entire loop to test the NDD_ALIASING bit
> against every nvdimm->flags.

Of course there is. I see nothing stopping someone mixing an NVDIMM
that supports labels with one that doesn't. If anything, I think we
need fixes to make sure this operates correctly to force-disable
BLK-mode capacity when the PMEM capacity is interleaved with a
label-less NVDIMM.
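
As one rough, untested sketch of what detecting such a mixed configuration
might look like (the identifiers are those already used in the loop in the
patch; the mixed-case handling itself is hypothetical and not something the
patch or the current kernel does):

	u16 i, alias = 0;

	for (i = 0; i < nd_region->ndr_mappings; i++) {
		struct nd_mapping *nd_mapping = &nd_region->mapping[i];
		struct nvdimm *nvdimm = nd_mapping->nvdimm;

		if (test_bit(NDD_ALIASING, &nvdimm->flags))
			alias++;
	}

	/* hypothetical: only some DIMMs in the interleave set carry labels */
	if (alias && alias != nd_region->ndr_mappings)
		dev_warn(&nd_region->dev,
				"PMEM interleaved with a label-less NVDIMM\n");

	return alias ? ND_DEVICE_NAMESPACE_PMEM : ND_DEVICE_NAMESPACE_IO;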

2018-09-05 03:28:58

by Ocean HY1 He

Subject: RE: [External] Re: [PATCH v2] libnvdimm, region_devs: stop NDD_ALIASING bit test if one test passes



> -----Original Message-----
> From: Dan Williams <[email protected]>
> Sent: Tuesday, September 04, 2018 11:58 PM
> To: Ocean He <[email protected]>
> Cc: Ross Zwisler <[email protected]>; Vishal L Verma
> <[email protected]>; Dave Jiang <[email protected]>; linux-
> nvdimm <[email protected]>; Linux Kernel Mailing List <linux-
> [email protected]>; Ocean HY1 He <[email protected]>
> Subject: [External] Re: [PATCH v2] libnvdimm, region_devs: stop
> NDD_ALIASING bit test if one test passes
>
> On Tue, Sep 4, 2018 at 3:47 AM, Ocean He <[email protected]> wrote:
> > From: Ocean He <[email protected]>
> >
> > There is no need to finish the entire loop to test the NDD_ALIASING bit
> > against every nvdimm->flags.
>
> Of course there is. I see nothing stopping someone mixing an NVDIMM
> that supports labels with one that doesn't.
Hi Dan,
Thanks for your comments.

I only have NVDIMMs that support labels, so I could not run that kind of test yet.
As I understand your comments, in the mixed case the nstype would be
ND_DEVICE_NAMESPACE_PMEM as long as one NVDIMM supports labels. Am I right?

By the way, do you think my patch is still worthwhile for saving a few CPU cycles here?

Ocean.
> If anything, I think we
> need fixes to make sure this operates correctly to force-disable
> BLK-mode capacity when the PMEM capacity is interleaved with a
> label-less NVDIMM.
I am trying to translate your words into test steps; please correct me if I
misunderstand anything.
#1. Prepare 2 NVDIMMs which have no label capability.
#2. Create a region which has PMEM capacity and is interleaved.
ipmctl create -f -goal -socket 0x1 PersistentMemoryType=AppDirect
#3. Create a BLK-mode capacity namespace; should this namespace then be
"force disabled"?
ndctl create-namespace -r region1 -s 1g -t pmem -m sector

Ocean.