2018-02-09 01:40:52

by NeilBrown

Subject: Re: [PATCH 41/80] staging: lustre: lmv: separate master object with master stripe

On Tue, Aug 16 2016, James Simmons wrote:

>
> +static inline bool
> +lsm_md_eq(const struct lmv_stripe_md *lsm1, const struct lmv_stripe_md *lsm2)
> +{
> + int idx;
> +
> + if (lsm1->lsm_md_magic != lsm2->lsm_md_magic ||
> + lsm1->lsm_md_stripe_count != lsm2->lsm_md_stripe_count ||
> + lsm1->lsm_md_master_mdt_index != lsm2->lsm_md_master_mdt_index ||
> + lsm1->lsm_md_hash_type != lsm2->lsm_md_hash_type ||
> + lsm1->lsm_md_layout_version != lsm2->lsm_md_layout_version ||
> + !strcmp(lsm1->lsm_md_pool_name, lsm2->lsm_md_pool_name))
> + return false;

Hi James and all,
This patch (8f18c8a48b736c2f in linux) is different from the
corresponding patch in lustre-release (60e07b972114df).

In that patch, the last clause in the 'if' condition is

+ strcmp(lsm1->lsm_md_pool_name,
+ lsm2->lsm_md_pool_name) != 0)

Whoever converted it to "!strcmp()" inverted the condition. This is a
perfect example of why I absolutely *loathe* the "!strcmp()" construct!!
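
For the record, with the strcmp() test restored (as in the lustre-release
patch) the condition reads:

	if (lsm1->lsm_md_magic != lsm2->lsm_md_magic ||
	    lsm1->lsm_md_stripe_count != lsm2->lsm_md_stripe_count ||
	    lsm1->lsm_md_master_mdt_index != lsm2->lsm_md_master_mdt_index ||
	    lsm1->lsm_md_hash_type != lsm2->lsm_md_hash_type ||
	    lsm1->lsm_md_layout_version != lsm2->lsm_md_layout_version ||
	    strcmp(lsm1->lsm_md_pool_name, lsm2->lsm_md_pool_name) != 0)
		return false;

i.e. the two objects compare "not equal" only when some field actually
differs.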

This causes many tests in the 'sanity' test suite to return
-ENOMEM (that had me puzzled for a while!!).
This seems to suggest that no-one has been testing the mainline linux
lustre.
It also seems to suggest that there is a good chance that there
are other bugs that have crept in while no-one has really been caring.
Given that the sanity test suite doesn't complete for me, but just
hangs (in test_27z I think), that seems particularly likely.


So my real question - to anyone interested in lustre for mainline linux
- is: can we actually trust this code at all?
I'm seriously tempted to suggest that we just
rm -r drivers/staging/lustre

drivers/staging is great for letting the community work on code that has
been "thrown over the wall" and is not openly developed elsewhere, but
that is not the case for lustre. lustre has (or seems to have) an open
development process. Having on-going development happen both there and
in drivers/staging seems a waste of resources.

Might it make sense to instead start cleaning up the code in
lustre-release so as to make it meet the upstream kernel standards?
Then when the time is right, the kernel code can be moved *out* of
lustre-release and *in* to linux. Then development can continue in
Linux (just like it does with other Linux filesystems).

An added bonus of this is that there is an obvious path to getting
server support in mainline Linux. The current situation of client-only
support seems weird given how interdependent the two are.

What do others think? Is there any chance that the current lustre in
Linux will ever be more than a poor second-cousin to the external
lustre-release? If there isn't, should we just discard it now and move
on?

Thanks,
NeilBrown



2018-02-09 02:02:40

by Oleg Drokin

Subject: Re: [PATCH 41/80] staging: lustre: lmv: separate master object with master stripe


> On Feb 8, 2018, at 8:39 PM, NeilBrown <[email protected]> wrote:
>
> On Tue, Aug 16 2016, James Simmons wrote:

my that’s an old patch

>
>>
>> +static inline bool
>> +lsm_md_eq(const struct lmv_stripe_md *lsm1, const struct lmv_stripe_md *lsm2)
>> +{
>> + int idx;
>> +
>> + if (lsm1->lsm_md_magic != lsm2->lsm_md_magic ||
>> + lsm1->lsm_md_stripe_count != lsm2->lsm_md_stripe_count ||
>> + lsm1->lsm_md_master_mdt_index != lsm2->lsm_md_master_mdt_index ||
>> + lsm1->lsm_md_hash_type != lsm2->lsm_md_hash_type ||
>> + lsm1->lsm_md_layout_version != lsm2->lsm_md_layout_version ||
>> + !strcmp(lsm1->lsm_md_pool_name, lsm2->lsm_md_pool_name))
>> + return false;
>
> Hi James and all,
> This patch (8f18c8a48b736c2f in linux) is different from the
> corresponding patch in lustre-release (60e07b972114df).
>
> In that patch, the last clause in the 'if' condition is
>
> + strcmp(lsm1->lsm_md_pool_name,
> + lsm2->lsm_md_pool_name) != 0)
>
> Whoever converted it to "!strcmp()" inverted the condition. This is a
> perfect example of why I absolutely *loathe* the "!strcmp()" construct!!
>
> This causes many tests in the 'sanity' test suite to return
> -ENOMEM (that had me puzzled for a while!!).

huh? I am not seeing anything of the sort and I was running sanity
all the time until a recent pause (but going to resume).

> This seems to suggest that no-one has been testing the mainline linux
> lustre.
> It also seems to suggest that there is a good chance that there
> are other bugs that have crept in while no-one has really been caring.
> Given that the sanity test suite doesn't complete for me, but just
> hangs (in test_27z I think), that seems particularly likely.

Works for me, here’s a run from earlier today on 4.15.0:
== sanity test 27z: check SEQ/OID on the MDT and OST filesystems ===================================== 16:43:58 (1518126238)
1+0 records in
1+0 records out
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0169548 s, 61.8 MB/s
2+0 records in
2+0 records out
2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.02782 s, 75.4 MB/s
check file /mnt/lustre/d27z.sanity/f27z.sanity-1
FID seq 0x200000401, oid 0x4640 ver 0x0
LOV seq 0x200000401, oid 0x4640, count: 1
want: stripe:0 ost:0 oid:314/0x13a seq:0
Stopping /mnt/lustre-ost1 (opts:) on centos6-17
pdsh@fedora1: centos6-17: ssh exited with exit code 1
pdsh@fedora1: centos6-17: ssh exited with exit code 1
pdsh@fedora1: centos6-17: ssh exited with exit code 1
Starting ost1: -o loop /tmp/lustre-ost1 /mnt/lustre-ost1
Failed to initialize ZFS library: 256
h2tcp: deprecated, use h2nettype instead
centos6-17.localnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super all -lnet -lnd -pinger 16
pdsh@fedora1: centos6-17: ssh exited with exit code 1
pdsh@fedora1: centos6-17: ssh exited with exit code 1
Started lustre-OST0000
/mnt/lustre-ost1/O/0/d26/314: parent=[0x200000401:0x4640:0x0] stripe=0 stripe_size=0 stripe_count=0
check file /mnt/lustre/d27z.sanity/f27z.sanity-2
FID seq 0x200000401, oid 0x4642 ver 0x0
LOV seq 0x200000401, oid 0x4642, count: 2
want: stripe:0 ost:1 oid:1187/0x4a3 seq:0
Stopping /mnt/lustre-ost2 (opts:) on centos6-17
pdsh@fedora1: centos6-17: ssh exited with exit code 1
pdsh@fedora1: centos6-17: ssh exited with exit code 1
pdsh@fedora1: centos6-17: ssh exited with exit code 1
Starting ost2: -o loop /tmp/lustre-ost2 /mnt/lustre-ost2
Failed to initialize ZFS library: 256
h2tcp: deprecated, use h2nettype instead
centos6-17.localnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super all -lnet -lnd -pinger 16
pdsh@fedora1: centos6-17: ssh exited with exit code 1
pdsh@fedora1: centos6-17: ssh exited with exit code 1
Started lustre-OST0001
/mnt/lustre-ost2/O/0/d3/1187: parent=[0x200000401:0x4642:0x0] stripe=0 stripe_size=0 stripe_count=0
want: stripe:1 ost:0 oid:315/0x13b seq:0
got: objid=0 seq=0 parent=[0x200000401:0x4642:0x0] stripe=1
Resetting fail_loc on all nodes...done.
16:44:32 (1518126272) waiting for centos6-16 network 5 secs ...
16:44:32 (1518126272) network interface is UP
16:44:33 (1518126273) waiting for centos6-17 network 5 secs ...
16:44:33 (1518126273) network interface is UP


> So my real question - to anyone interested in lustre for mainline linux
> - is: can we actually trust this code at all?

Absolutely. Seems that you just stumbled upon a corner case that was not
being hit by people that do the testing, so you have something unique about
your setup, I guess.

> I'm seriously tempted to suggest that we just
> rm -r drivers/staging/lustre
>
> drivers/staging is great for letting the community work on code that has
> been "thrown over the wall" and is not openly developed elsewhere, but
> that is not the case for lustre. lustre has (or seems to have) an open
> development process. Having on-going development happen both there and
> in drivers/staging seems a waste of resources.

It is a bit of a waste of resources, but there are some other things here.
E.g. we cannot have any APIs with no users in the kernel.
Also some people like to have in-kernel modules coming with their distros
(there were some users that used the staging client on Ubuntu as their
setup).

Instead the plan was to clean up the staging client into an acceptable state,
move it out of staging, bring in all the missing features and then
drop the client (more or less) from the lustre-release.

> Might it make sense to instead start cleaning up the code in
> lustre-release so as to make it meet the upstream kernel standards?
> Then when the time is right, the kernel code can be moved *out* of
> lustre-release and *in* to linux. Then development can continue in
> Linux (just like it does with other Linux filesystems).

While we can be cleaning lustre in lustre-release, there are some things
we cannot do as easily, e.g. decoupling Lustre client from the server.
Also it would not attract any reviews from all the janitors or
(more importantly) Al Viro and other people with sharp eyes.

> An added bonus of this is that there is an obvious path to getting
> server support in mainline Linux. The current situation of client-only
> support seems weird given how interdependent the two are.

Given the pushback Lustre client was given I have no hope Lustre server
will get into mainline in my lifetime.

> What do others think? Is there any chance that the current lustre in
> Linux will ever be more than a poor second-cousin to the external
> lustre-release? If there isn't, should we just discard it now and move
> on?


I think many useful cleanups and fixes came from the staging tree at
the very least.
The biggest problem with it all is that we are in staging tree so
we cannot bring it to parity much. And we are in staging tree because
there’s a whole bunch of “cleanups” requested that take a lot of effort
(in both implementing them and then in finding other ways of achieving
things that were done in old ways before).
I understand that beggars cannot be choosers and while there are people
that are grandfathered with their atrocities in current kernel tree,
we must adhere to the shining standards first before having our chance,
but the standards are not easy to adhere to in an established sizeable
codebase.

Realistically speaking I suspect if we drop Lustre from staging,
it’s unlikely there would remain any steam behind the cleanup efforts
at all.

Bye,
Oleg

2018-02-09 03:12:09

by NeilBrown

Subject: Re: [PATCH 41/80] staging: lustre: lmv: separate master object with master stripe

On Thu, Feb 08 2018, Oleg Drokin wrote:

>> On Feb 8, 2018, at 8:39 PM, NeilBrown <[email protected]> wrote:
>>
>> On Tue, Aug 16 2016, James Simmons wrote:
>
> my that’s an old patch
>
>>
...
>>
>> Whoever converted it to "!strcmp()" inverted the condition. This is a
>> perfect example of why I absolutely *loathe* the "!strcmp()" construct!!
>>
>> This causes many tests in the 'sanity' test suite to return
>> -ENOMEM (that had me puzzled for a while!!).
>
> huh? I am not seeing anything of the sort and I was running sanity
> all the time until a recent pause (but going to resume).

That does surprise me - I reproduce it every time.
I have two VMs running a SLE12-SP2 kernel with patches from
lustre-release applied. These are servers. They have 2 3G virtual disks
each.
I have two other VMs running current mainline. These are clients.

I guess your 'recent pause' included between v4.15-rc1 (8e55b6fd0660)
and v4.15-rc6 (a93639090a27) - a full month when lustre wouldn't work at
all :-(


>
>> This seems to suggest that no-one has been testing the mainline linux
>> lustre.
>> It also seems to suggest that there is a good chance that there
>> are other bugs that have crept in while no-one has really been caring.
>> Given that the sanity test suite doesn't complete for me, but just
>> hangs (in test_27z I think), that seems particularly likely.
>
> Works for me, here’s a run from earlier today on 4.15.0:

Well that's encouraging .. I haven't looked into this one yet - I'm not
even sure where to start.

>
>> So my real question - to anyone interested in lustre for mainline linux
>> - is: can we actually trust this code at all?
>
> Absolutely. Seems that you just stumbled upon a corner case that was not
> being hit by people that do the testing, so you have something unique about
> your setup, I guess.
>
>> I'm seriously tempted to suggest that we just
>> rm -r drivers/staging/lustre
>>
>> drivers/staging is great for letting the community work on code that has
>> been "thrown over the wall" and is not openly developed elsewhere, but
>> that is not the case for lustre. lustre has (or seems to have) an open
>> development process. Having on-going development happen both there and
>> in drivers/staging seems a waste of resources.
>
> It is a bit of a waste of resources, but there are some other things here.
> E.g. we cannot have any APIs with no users in the kernel.
> Also some people like to have in-kernel modules coming with their distros
> (there were some users that used the staging client on Ubuntu as their
> setup).
>
> Instead the plan was to clean up the staging client into an acceptable state,
> move it out of staging, bring in all the missing features and then
> drop the client (more or less) from the lustre-release.

That sounds like a great plan. Any idea why it didn't happen?
It seems there is a lot of upstream work mixed in with the clean up, and
I don't think that really helps anyone.

Is it at all realistic that the client might be removed from
lustre-release? That might be a good goal to work towards.

>
>> Might it make sense to instead start cleaning up the code in
>> lustre-release so as to make it meet the upstream kernel standards?
>> Then when the time is right, the kernel code can be moved *out* of
>> lustre-release and *in* to linux. Then development can continue in
>> Linux (just like it does with other Linux filesystems).
>
> While we can be cleaning lustre in lustre-release, there are some things
> we cannot do as easily, e.g. decoupling Lustre client from the server.
> Also it would not attract any reviews from all the janitors or
> (more importantly) Al Viro and other people with sharp eyes.
>
>> An added bonus of this is that there is an obvious path to getting
>> server support in mainline Linux. The current situation of client-only
>> support seems weird given how interdependent the two are.
>
> Given the pushback Lustre client was given I have no hope Lustre server
> will get into mainline in my lifetime.

Even if it is horrible it would be nice to have it in staging... I guess
the changes required to ext4 prohibit that... I don't suppose it can be
made to work with mainline ext4 in a reduced-functionality-and-performance
way??

I think it would be a lot easier to motivate forward progress if there
were a credible end goal of everything being in mainline.

>
>> What do others think? Is there any chance that the current lustre in
>> Linux will ever be more than a poor second-cousin to the external
>> lustre-release? If there isn't, should we just discard it now and move
>> on?
>
>
> I think many useful cleanups and fixes came from the staging tree at
> the very least.
> The biggest problem with it all is that we are in staging tree so
> we cannot bring it to parity much. And we are in staging tree because
> there’s a whole bunch of “cleanups” requested that take a lot of effort
> (in both implementing them and then in finding other ways of achieving
> things that were done in old ways before).

Do you have a list of requested cleanups? I would find that to be
useful.


> I understand that beggars cannot be choosers and while there are people
> that are grandfathered with their atrocities in current kernel tree,
> we must adhere to the shining standards first before having our chance,
> but the standards are not easy to adhere to in an established sizeable
> codebase.
>
> Realistically speaking I suspect if we drop Lustre from staging,
> it’s unlikely there would remain any steam behind the cleanup efforts
> at all.

Thanks for your thoughts,
NeilBrown



2018-02-09 03:51:44

by Oleg Drokin

Subject: Re: [lustre-devel] [PATCH 41/80] staging: lustre: lmv: separate master object with master stripe


> On Feb 8, 2018, at 10:10 PM, NeilBrown <[email protected]> wrote:
>
> On Thu, Feb 08 2018, Oleg Drokin wrote:
>
>>> On Feb 8, 2018, at 8:39 PM, NeilBrown <[email protected]> wrote:
>>>
>>> On Tue, Aug 16 2016, James Simmons wrote:
>>
>> my that’s an old patch
>>
>>>
> ...
>>>
>>> Whoever converted it to "!strcmp()" inverted the condition. This is a
>>> perfect example of why I absolutely *loathe* the "!strcmp()" construct!!
>>>
>>> This causes many tests in the 'sanity' test suite to return
>>> -ENOMEM (that had me puzzled for a while!!).
>>
>> huh? I am not seeing anything of the sort and I was running sanity
>> all the time until a recent pause (but going to resume).
>
> That does surprise me - I reproduce it every time.
> I have two VMs running a SLE12-SP2 kernel with patches from
> lustre-release applied. These are servers. They have 2 3G virtual disks
> each.
> I have two other VMs running current mainline. These are clients.
>
> I guess your 'recent pause' included between v4.15-rc1 (8e55b6fd0660)
> and v4.15-rc6 (a93639090a27) - a full month when lustre wouldn't work at
> all :-(

More than that, but I am pretty sure James Simmons is running tests all the time too
(he has a different config, I only have tcp).

>>> This seems to suggest that no-one has been testing the mainline linux
>>> lustre.
>>> It also seems to suggest that there is a good chance that there
>>> are other bugs that have crept in while no-one has really been caring.
>>> Given that the sanity test suite doesn't complete for me, but just
>>> hangs (in test_27z I think), that seems particularly likely.
>>
>> Works for me, here’s a run from earlier today on 4.15.0:
>
> Well that's encouraging .. I haven't looked into this one yet - I'm not
> even sure where to start.

m… debug logs for example (greatly neutered in staging tree, but still useful)?
try lctl dk and see what’s in there.
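
E.g. (the file name here is just an example):

	lctl dk /tmp/lustre-debug.txt	# dump the kernel debug buffer to a file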

>> Instead the plan was to clean up the staging client into an acceptable state,
>> move it out of staging, bring in all the missing features and then
>> drop the client (more or less) from the lustre-release.
>
> That sounds like a great plan. Any idea why it didn't happen?

Because meeting open-ended demands is hard and certain demands sound like
“throw away your X and rewrite it from scratch” (e.g. everything IB-related).

Certain things that sound useless (like the debug subsystem in Lustre)
are very useful when you have 10k nodes in a cluster and need to selectively
pull stuff from a run to debug a complicated cross-node interaction.
I asked the NFS people how they do it, and they don’t have anything that
scales; it usually involves reducing the problem to a much smaller set of
nodes first.

> It seems there is a lot of upstream work mixed in with the clean up, and
> I don't think that really helps anyone.

I don’t understand what you mean here.

> Is it at all realistic that the client might be removed from
> lustre-release? That might be a good goal to work towards.

Assuming we can bring the whole functionality over - sure.

Of course there’d still be some separate development place and we would
need to create patches (new features?) for the likes of SuSE and other
distros and for testing of server features, I guess, but that could be
just that - a side branch somewhere, I hope.

It’s not that we are super glad to chase every kernel the vendors put out;
of course it would be much easier if the kernels already included
a very functional Lustre client.

>>> Might it make sense to instead start cleaning up the code in
>>> lustre-release so as to make it meet the upstream kernel standards?
>>> Then when the time is right, the kernel code can be moved *out* of
>>> lustre-release and *in* to linux. Then development can continue in
>>> Linux (just like it does with other Linux filesystems).
>>
>> While we can be cleaning lustre in lustre-release, there are some things
>> we cannot do as easily, e.g. decoupling Lustre client from the server.
>> Also it would not attract any reviews from all the janitors or
>> (more importantly) Al Viro and other people with sharp eyes.
>>
>>> An added bonus of this is that there is an obvious path to getting
>>> server support in mainline Linux. The current situation of client-only
>>> support seems weird given how interdependent the two are.
>>
>> Given the pushback Lustre client was given I have no hope Lustre server
>> will get into mainline in my lifetime.
>
> Even if it is horrible it would be nice to have it in staging... I guess
> the changes required to ext4 prohibit that... I don't suppose it can be
> made to work with mainline ext4 in a reduced-functionality-and-performance
> way??

We support unpatched ZFS as a server too! ;)
(and if somebody invests the time into it, there was some half-baked btrfs
backend too I think).
That said, nobody here believes in any success of pushing the Lustre server
into mainline.
It would just be easier to push the whole server into userspace (and there
was a project like this in the past, now abandoned because it was mostly
targeting Solaris anyway).

> I think it would be a lot easier to motivate forward progress if there
> were a credible end goal of everything being in mainline.
>
>>
>>> What do others think? Is there any chance that the current lustre in
>>> Linux will ever be more than a poor second-cousin to the external
>>> lustre-release? If there isn't, should we just discard it now and move
>>> on?
>>
>>
>> I think many useful cleanups and fixes came from the staging tree at
>> the very least.
>> The biggest problem with it all is that we are in staging tree so
>> we cannot bring it to parity much. And we are in staging tree because
>> there’s a whole bunch of “cleanups” requested that take a lot of effort
>> (in both implementing them and then in finding other ways of achieving
>> things that were done in old ways before).
>
> Do you have a list of requested cleanups? I would find that to be
> useful.

As Greg would tell you, “if you don’t know what needs to be done,
let’s just remove the whole thing from staging now”.

I assume you saw drivers/staging/lustre/TODO already, it’s only partially done.

We had a bunch of other requests from various people, ranging from wholesale
removal of various parts to making sure there are no checkpatch warnings
(turned out rather hard to do, even though we greatly pared down the numbers).

I have some patches to make Lustre a lot more monolithic too.
People want us to remove our indirections hell so the code is more readable
(I have some patches that need to be freshened up some that help here a bit,
but the work is huge.)

Other requests come out as some of the prior ones get completed due to
“you need to finish the current level of cleanups so that we can see what other
cleanups are needed, the current code is too bad to see everything” pretty much.

Bye,
Oleg

2018-02-10 20:58:58

by James Simmons

Subject: Re: [lustre-devel] [PATCH 41/80] staging: lustre: lmv: separate master object with master stripe


> > On Feb 8, 2018, at 10:10 PM, NeilBrown <[email protected]> wrote:
> >
> > On Thu, Feb 08 2018, Oleg Drokin wrote:
> >
> >>> On Feb 8, 2018, at 8:39 PM, NeilBrown <[email protected]> wrote:
> >>>
> >>> On Tue, Aug 16 2016, James Simmons wrote:
> >>
> >> my that’s an old patch
> >>
> >>>
> > ...
> >>>
> >>> Whoever converted it to "!strcmp()" inverted the condition. This is a
> >>> perfect example of why I absolutely *loathe* the "!strcmp()" construct!!
> >>>
> >>> This causes many tests in the 'sanity' test suite to return
> >>> -ENOMEM (that had me puzzled for a while!!).
> >>
> >> huh? I am not seeing anything of the sort and I was running sanity
> >> all the time until a recent pause (but going to resume).
> >
> > That does surprise me - I reproduce it every time.
> > I have two VMs running a SLE12-SP2 kernel with patches from
> > lustre-release applied. These are servers. They have 2 3G virtual disks
> > each.
> > I have two other VMs running current mainline. These are clients.
> >
> > I guess your 'recent pause' included between v4.15-rc1 (8e55b6fd0660)
> > and v4.15-rc6 (a93639090a27) - a full month when lustre wouldn't work at
> > all :-(
>
> More than that, but I am pretty sure James Simmons is running tests all the time too
> (he has a different config, I only have tcp).

Yes I have been testing and haven't encountered this problem. Let me try
the fix you pointed out.

> > Do you have a list of requested cleanups? I would find that to be
> > useful.
>
> As Greg would tell you, “if you don’t know what needs to be done,
> let’s just remove the whole thing from staging now”.
>
> I assume you saw drivers/staging/lustre/TODO already, it’s only partially done.

Actually the complete list is at:

https://jira.hpdd.intel.com/browse/LU-9679

I need to move that to our TODO list. Sorry I have been short on cycles.

2018-02-10 22:15:31

by James Simmons

Subject: Re: [PATCH 41/80] staging: lustre: lmv: separate master object with master stripe


> > +static inline bool
> > +lsm_md_eq(const struct lmv_stripe_md *lsm1, const struct lmv_stripe_md *lsm2)
> > +{
> > + int idx;
> > +
> > + if (lsm1->lsm_md_magic != lsm2->lsm_md_magic ||
> > + lsm1->lsm_md_stripe_count != lsm2->lsm_md_stripe_count ||
> > + lsm1->lsm_md_master_mdt_index != lsm2->lsm_md_master_mdt_index ||
> > + lsm1->lsm_md_hash_type != lsm2->lsm_md_hash_type ||
> > + lsm1->lsm_md_layout_version != lsm2->lsm_md_layout_version ||
> > + !strcmp(lsm1->lsm_md_pool_name, lsm2->lsm_md_pool_name))
> > + return false;
>
> Hi James and all,
> This patch (8f18c8a48b736c2f in linux) is different from the
> corresponding patch in lustre-release (60e07b972114df).
>
> In that patch, the last clause in the 'if' condition is
>
> + strcmp(lsm1->lsm_md_pool_name,
> + lsm2->lsm_md_pool_name) != 0)
>
> Whoever converted it to "!strcmp()" inverted the condition. This is a
> perfect example of why I absolutely *loathe* the "!strcmp()" construct!!
>
> This causes many tests in the 'sanity' test suite to return
> -ENOMEM (that had me puzzled for a while!!).
> This seems to suggest that no-one has been testing the mainline linux
> lustre.
> It also seems to suggest that there is a good chance that there
> are other bugs that have crept in while no-one has really been caring.
> Given that the sanity test suite doesn't complete for me, but just
> hangs (in test_27z I think), that seems particularly likely.
>
>
> So my real question - to anyone interested in lustre for mainline linux
> - is: can we actually trust this code at all?
> I'm seriously tempted to suggest that we just
> rm -r drivers/staging/lustre
>
> drivers/staging is great for letting the community work on code that has
> been "thrown over the wall" and is not openly developed elsewhere, but
> that is not the case for lustre. lustre has (or seems to have) an open
> development process. Having on-going development happen both there and
> in drivers/staging seems a waste of resources.
>
> Might it make sense to instead start cleaning up the code in
> lustre-release so as to make it meet the upstream kernel standards?
> Then when the time is right, the kernel code can be moved *out* of
> lustre-release and *in* to linux. Then development can continue in
> Linux (just like it does with other Linux filesystems).
>
> An added bonus of this is that there is an obvious path to getting
> server support in mainline Linux. The current situation of client-only
> support seems weird given how interdependent the two are.
>
> What do others think? Is there any chance that the current lustre in
> Linux will ever be more than a poor second-cousin to the external
> lustre-release? If there isn't, should we just discard it now and move
> on?

If you think that the OpenSFS/Intel branch (lustre-release) is the land
of milk and honey you are very wrong. Take for example the UAPI header
cleanup I pushed to the linux client several months ago. That work took
5 years to complete. I had to complete that work in the Intel branch
since it impacted our tools. This isn't the only example. I worked
alongside Intel on increasing the striping of a file beyond the 160-stripe
limit Lustre used to have. That work took 3 years to complete. If a
patch is more than one line it will normally take 1 to 2 months to land.
It is common to have patches 6 months or more in age.

This is one of the major reasons I'm involved in the upstream client
work. If lustre remains a tiny, undermanned community it is doomed to
remain a niche file system. For years I have tried to recruit new
developers to help out and even gave talks at lustre conferences on
internals. That effort was met with little success. This is not the
case with the linux lustre client. We do have people contributing,
including you. So the reality is that if we removed the lustre client
it would be at least 3+ years before the code would be ready to be
merged back in. It would be another 3+ years before it left staging.
Many cleanups in the linux client which impact many lines of code have
not been ported to the Intel branch. It would take forever to get those
in. Honestly, I gave up on those types of cleanups some time ago. The
cleanups done in the upstream client would have to be redone. What we
really need is to expand the community. Recently a lot of work has gone
into supporting Ubuntu for our utilities. I hope this helps to get
Canonical involved with the upstream lustre client.

The upstream client is not as bad as you think. A year ago no one in
their right mind would touch the upstream client, but there are actually
sites using it today. It's not perfect, but it is usable and it is
improving all the time. Yes, we have quite a few bugs to squash that
show up in our test suite, but the barrier to leaving staging is much,
much smaller than it used to be. Once the number of bugs reported in the
test suite becomes reasonable we can start auto-testing patches posted
here. The ultimate goal is that, as more people join the linux client
effort and it becomes a full member of the broader linux open source
community, we can leave the Intel lustre-release branch in the dust. I
believe the future is much closer than you think.

2018-02-11 23:45:29

by NeilBrown

Subject: Re: [lustre-devel] [PATCH 41/80] staging: lustre: lmv: separate master object with master stripe

On Thu, Feb 08 2018, Oleg Drokin wrote:

>> On Feb 8, 2018, at 10:10 PM, NeilBrown <[email protected]> wrote:
>>
>> On Thu, Feb 08 2018, Oleg Drokin wrote:
>>
>>>> On Feb 8, 2018, at 8:39 PM, NeilBrown <[email protected]> wrote:
>>>>
>>>> On Tue, Aug 16 2016, James Simmons wrote:
>>>
>>> my that’s an old patch
>>>
>>>>
>> ...
>>>>
>>>> Whoever converted it to "!strcmp()" inverted the condition. This is a
>>>> perfect example of why I absolutely *loathe* the "!strcmp()" construct!!
>>>>
>>>> This causes many tests in the 'sanity' test suite to return
>>>> -ENOMEM (that had me puzzled for a while!!).
>>>
>>> huh? I am not seeing anything of the sort and I was running sanity
>>> all the time until a recent pause (but going to resume).
>>
>> That does surprise me - I reproduce it every time.
>> I have two VMs running a SLE12-SP2 kernel with patches from
>> lustre-release applied. These are servers. They have 2 3G virtual disks
>> each.
>> I have two other VMs running current mainline. These are clients.
>>
>> I guess your 'recent pause' included between v4.15-rc1 (8e55b6fd0660)
>> and v4.15-rc6 (a93639090a27) - a full month when lustre wouldn't work at
>> all :-(
>
> More than that, but I am pretty sure James Simmons is running tests all the time too
> (he has a different config, I only have tcp).
>
>>>> This seems to suggest that no-one has been testing the mainline linux
>>>> lustre.
>>>> It also seems to suggest that there is a good chance that there
>>>> are other bugs that have crept in while no-one has really been caring.
>>>> Given that the sanity test suite doesn't complete for me, but just
>>>> hangs (in test_27z I think), that seems particularly likely.
>>>
>>> Works for me, here’s a run from earlier today on 4.15.0:
>>
>> Well that's encouraging .. I haven't looked into this one yet - I'm not
>> even sure where to start.
>
> m… debug logs for example (greatly neutered in staging tree, but still useful)?
> try lctl dk and see what’s in there.

Debug logs seem to tell me that some message is being sent to a server
and a reply is being received, but the request we are waiting on
doesn't make progress. I plan to dig in and learn more about how lustre
rpc works so I have a better chance of interpreting those debug logs.


>
>>> Instead the plan was to clean up the staging client into an acceptable state,
>>> move it out of staging, bring in all the missing features and then
>>> drop the client (more or less) from the lustre-release.
>>
>> That sounds like a great plan. Any idea why it didn't happen?
>
> Because meeting open-ended demands is hard and certain demands sound like
> “throw away your X and rewrite it from scratch” (e.g. everything IB-related).

My narrow perspective on IB - from when rdma support was added to the
NFS server - is that it is broken by design and impossible to do
"right". So different people could easily have different ideas on how
to make the best of a bad lot.
I might try to have a look.

>
>> Certain things that sound useless (like the debug subsystem in Lustre)
>> are very useful when you have 10k nodes in a cluster and need to selectively
>> pull stuff from a run to debug a complicated cross-node interaction.
>> I asked the NFS people how they do it, and they don’t have anything that
>> scales; it usually involves reducing the problem to a much smaller set of
>> nodes first.

the "rpcdebug" stuff that Linux/nfs has is sometimes useful, but some parts
are changing to tracepoints and some parts have remained, which is a
little confusing.
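
For example, the usual invocations look like:

	rpcdebug -m nfs -s all	# turn on all NFS client debug flags
	rpcdebug -m nfs -c all	# ...and clear them all again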

The fact that lustre tracing seems to *always* log everything, so that if
something goes wrong you can extract the last few meg(?) of logs, seems
really useful.

I discovered - thanks to James -
https://jira.hpdd.intel.com/browse/LU-8980
Add tracepoint support to Lustre

which is "closed", but I cannot find any trace of tracepoints in
drivers/staging or in lustre-release. Maybe I'm confused.
I suspect tracepoints are a good way to go.

>
>> It seems there is a lot of upstream work mixed in with the clean up, and
>> I don't think that really helps anyone.
>
> I don’t understand what you mean here.

Just that I thought that the main point of drivers/staging is to get the
code into a mergeable state, and if feature addition happens at the same
time, then priorities get blurred and goals don't get reached.

>
>> Is it at all realistic that the client might be removed from
>> lustre-release? That might be a good goal to work towards.
>
> Assuming we can bring the whole functionality over - sure.
>
> Of course there’d still be some separate development place and we would
> need to create patches (new features?) for the likes of SuSE and other
> distros and for testing of server features, I guess, but that could be
> just that - a side branch somewhere, I hope.

Of course - code doesn't go upstream until it is ready. Lots of
development happens elsewhere.
Of course distros like SUSE would generally rather ship code that was
"ready" and so like to see it upstream. There is usually room for
negotiation.

>
> It’s not that we are super glad to chase every kernel the vendors put out;
> of course it would be much easier if the kernels already included
> a very functional Lustre client.
>
>>>> Might it make sense to instead start cleaning up the code in
>>>> lustre-release so as to make it meet the upstream kernel standards?
>>>> Then when the time is right, the kernel code can be moved *out* of
>>>> lustre-release and *in* to linux. Then development can continue in
>>>> Linux (just like it does with other Linux filesystems).
>>>
>>> While we can be cleaning lustre in lustre-release, there are some things
>>> we cannot do as easily, e.g. decoupling Lustre client from the server.
>>> Also it would not attract any reviews from all the janitors or
>>> (more importantly) Al Viro and other people with sharp eyes.
>>>
>>>> An added bonus of this is that there is an obvious path to getting
>>>> server support in mainline Linux. The current situation of client-only
>>>> support seems weird given how interdependent the two are.
>>>
>>> Given the pushback Lustre client was given I have no hope Lustre server
>>> will get into mainline in my lifetime.
>>
>> Even if it is horrible it would be nice to have it in staging... I guess
>> the changes required to ext4 prohibit that... I don't suppose it can be
>> made to work with mainline ext4 in a reduced-functionality-and-performance
>> way??
>
> We support unpatched ZFS as a server too! ;)

So does that mean you would expect the lustre server to work with unpatched
ext4? In that case I won't give up hope of seeing the server in mainline
in my lifetime. Client first though.

> (and if somebody invests the time into it, there was some half-baked btrfs
> backend too I think).
> That said, nobody here believes in any success of pushing the Lustre server
> into mainline.
> It would just be easier to push the whole server into userspace (and there
> was a project like this in the past, now abandoned because it was mostly
> targeting Solaris anyway).
>
>> I think it would be a lot easier to motivate forward progress if there
>> were a credible end goal of everything being in mainline.
>>
>>>
>>>> What do others think? Is there any chance that the current lustre in
>>>> Linux will ever be more than a poor second-cousin to the external
>>>> lustre-release? If there isn't, should we just discard it now and move
>>>> on?
>>>
>>>
>>> I think many useful cleanups and fixes came from the staging tree at
>>> the very least.
>>> The biggest problem with it all is that we are in staging tree so
>>> we cannot bring it to parity much. And we are in staging tree because
>>> there’s a whole bunch of “cleanups” requested that take a lot of effort
>>> (in both implementing them and then in finding other ways of achieving
>>> things that were done in old ways before).
>>
>> Do you have a list of requested cleanups? I would find that to be
>> useful.
>
> As Greg would tell you, “if you don’t know what needs to be done,
> let’s just remove the whole thing from staging now”.

Of course, but I don't expect that I will see the same things that
others see. And if people have gone to the trouble to provide feedback,
it seems polite to record that feedback for all to see.

>
> I assume you saw drivers/staging/lustre/TODO already, it’s only partially done.

Yes - it isn't very detailed though. Maybe I'll flesh it out with some
of the things you have said.

>
> We had a bunch of other requests from various people, ranging from wholesale
> removal of various parts to making sure there are no checkpatch warnings
> (turned out rather hard to do, even though we greatly pared down the
> numbers).

checkpatch is a useful guide, but an awful master.

% find drivers/staging/lustre/ -name '*.[ch]' | while read a; do
./scripts/checkpatch.pl --max-line-length=10000 --no-summary -f $a
done | grep '^ERROR' | sort | uniq -c


17 ERROR: Macros with complex values should be enclosed in parentheses
2 ERROR: Macros with multiple statements should be enclosed in a do - while loop
12 ERROR: No #include in ...include/uapi/... should use a uapi/ path prefix
1 ERROR: space required before the open brace '{'
8 ERROR: that open brace { should be on the previous line
1 ERROR: trailing statements should be on next line
1 ERROR: trailing whitespace

That isn't too bad - obviously nearly there with checkpatch.
Lots more warnings - some might be interesting.

wholesale removal - like the prng, the workqueues, and the
ll_wait_event() macro? I can do that :-)


>
> I have some patches to make Lustre a lot more monolithic too.

Yes, it annoys me that I cannot build without modules. I took some
steps towards fixing that and went off down a rabbit hole..
Should be fairly easy.

> People want us to remove our indirections hell so the code is more readable
> (I have some patches that need to be freshened up some that help here a bit,
> but the work is huge.)

But indirections solve all problems :-)

>
> Other requests come out as some of the prior ones get completed due to
> “you need to finish the current level of cleanups so that we can see what other
> cleanups are needed, the current code is too bad to see everything” pretty much.

Thanks a lot for your helpful reply.

NeilBrown


Attachments:
signature.asc (847.00 B)

2018-02-11 23:51:07

by NeilBrown

Subject: Re: [lustre-devel] [PATCH 41/80] staging: lustre: lmv: separate master object with master stripe

On Sat, Feb 10 2018, James Simmons wrote:

>> > On Feb 8, 2018, at 10:10 PM, NeilBrown <[email protected]> wrote:
>> >
>> > On Thu, Feb 08 2018, Oleg Drokin wrote:
>> >
>> >>> On Feb 8, 2018, at 8:39 PM, NeilBrown <[email protected]> wrote:
>> >>>
>> >>> On Tue, Aug 16 2016, James Simmons wrote:
>> >>
>> >> my that’s an old patch
>> >>
>> >>>
>> > ...
>> >>>
>> >>> Whoever converted it to "!strcmp()" inverted the condition. This is a
>> >>> perfect example of why I absolutely *loathe* the "!strcmp()" construct!!
>> >>>
>> >>> This causes many tests in the 'sanity' test suite to return
>> >>> -ENOMEM (that had me puzzled for a while!!).
>> >>
>> >> huh? I am not seeing anything of the sort and I was running sanity
>> >> all the time until a recent pause (but going to resume).
>> >
>> > That does surprise me - I reproduce it every time.
>> > I have two VMs running a SLE12-SP2 kernel with patches from
>> > lustre-release applied. These are servers. They have 2 3G virtual disks
>> > each.
>> > I have two other VMs running current mainline. These are clients.
>> >
>> > I guess your 'recent pause' included between v4.15-rc1 (8e55b6fd0660)
>> > and v4.15-rc6 (a93639090a27) - a full month when lustre wouldn't work at
>> > all :-(
>>
>> More than that, but I am pretty sure James Simmons is running tests all the time too
>> (he has a different config, I only have tcp).
>
> Yes I have been testing and haven't encountered this problem. Let me try
> the fix you pointed out.

Yeah, I guess I overreacted a bit in suggesting that no-one can have
been testing - sorry about that. It seemed really strange though as the
bug was so easy for me to hit.

Maybe - as you suggest in another email - it is due to some
client/server incompatibility. I guess it is unavoidable with an fs
like lustre to have incompatible protocol changes. Is there any
mechanism for detecting the version of other peers in the cluster and
refusing to run if versions are incompatible?

If you haven't hit the problem in testing, I suspect you aren't touching
that code path at all. Maybe put a BUG() call in there to see :-)

>
>> > Do you have a list of requested cleanups? I would find that to be
>> > useful.
>>
>> As Greg would tell you, “if you don’t know what needs to be done,
>> let’s just remove the whole thing from staging now”.
>>
>> I assume you saw drivers/staging/lustre/TODO already, it’s only partially done.
>
> Actually the complete list is at:
>
> https://jira.hpdd.intel.com/browse/LU-9679
>
> I need to move that to our TODO list. Sorry I have been short on cycles.

Just adding that link to TODO would be a great start. I might do that
when I next send some patches.

Thanks,
NeilBrown



2018-02-12 00:08:05

by Oleg Drokin

Subject: Re: [lustre-devel] [PATCH 41/80] staging: lustre: lmv: separate master object with master stripe


> On Feb 11, 2018, at 6:50 PM, NeilBrown <[email protected]> wrote:
>
> Maybe - as you suggest in another email - it is due to some
> client/server incompatibility. I guess it is unavoidable with an fs
> like lustre to have incompatible protocol changes. Is there any
> mechanism for detecting the version of other peers in the cluster and
> refusing to run if versions are incompatible?

Yes, client and server exchange “feature bits” at connect time
and only use the subset of features that both can understand.
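
Schematically (illustrative names only, not the actual Lustre symbols):

	/* Sketch of feature-bit negotiation: each side advertises a
	 * bitmask of the features it supports at connect time, and both
	 * sides then restrict themselves to the intersection. */
	typedef unsigned long long u64;

	static u64 negotiate_features(u64 client_feats, u64 server_feats)
	{
		return client_feats & server_feats;	/* common subset */
	}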

Bye,
Oleg

2018-02-12 01:10:09

by Oleg Drokin

Subject: Re: [lustre-devel] [PATCH 41/80] staging: lustre: lmv: separate master object with master stripe


> On Feb 11, 2018, at 6:44 PM, NeilBrown <[email protected]> wrote:
>
> On Thu, Feb 08 2018, Oleg Drokin wrote:
>>
>> Certain things that sound useless (like the debug subsystem in Lustre)
>> is very useful when you have a 10k nodes in a cluster and need to selectively
>> pull stuff from a run to debug a complicated cross-node interaction.
>> I asked NFS people how do they do it and they don’t have anything that scales
>> and usually involves reducing the problem to a much smaller set of nodes first.
>
> the "rpcdebug" stuff that Linux/nfs has is sometimes useful, but some parts
> are changing to tracepoints and some parts have remained, which is a
> little confusing.
>
> The fact that lustre tracing seems to *always* log everything so that if
> something goes wrong you can extract that last few meg(?) of logs seems
> really useful.

Not really. Lustre also has a bitmask for logs (since otherwise all those
prints are pretty CPU-taxing), but what makes those logs better is:
the size is unlimited, not constrained by the dmesg buffer size.
You can capture those logs from a crashdump (something I really wish
somebody would implement for tracepoint buffers, but alas, I have not
found anything for this yet - we have a crash plugin to extract lustre
debug logs from a kernel crashdump).
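
For example (the mask names are the standard ones; the size is just an
illustration):

	lctl set_param debug="+rpctrace +dlmtrace"	# widen the debug mask
	lctl set_param debug_mb=512			# grow the trace buffer
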
>>>
>>> Even if it is horrible it would be nice to have it in staging... I guess
>>> the changes required to ext4 prohibit that... I don't suppose it can be
>>> made to work with mainline ext4 in a reduced-functionality-and-performance
>>> way??
>>
>> We support unpatched ZFS as a server too! ;)
>
> So does that mean you would expect the lustre server to work with unpatched
> ext4? In that case I won't give up hope of seeing the server in mainline
> in my lifetime. Client first though.

While unpatched ext4 might in theory be possible, currently it does not export
everything we need from the transaction/fs control perspective.

Bye,
Oleg

2018-02-12 08:16:34

by Dan Carpenter

Subject: Re: [PATCH 41/80] staging: lustre: lmv: separate master object with master stripe

On Fri, Feb 09, 2018 at 12:39:18PM +1100, NeilBrown wrote:
> On Tue, Aug 16 2016, James Simmons wrote:
>
> >
> > +static inline bool
> > +lsm_md_eq(const struct lmv_stripe_md *lsm1, const struct lmv_stripe_md *lsm2)
> > +{
> > + int idx;
> > +
> > + if (lsm1->lsm_md_magic != lsm2->lsm_md_magic ||
> > + lsm1->lsm_md_stripe_count != lsm2->lsm_md_stripe_count ||
> > + lsm1->lsm_md_master_mdt_index != lsm2->lsm_md_master_mdt_index ||
> > + lsm1->lsm_md_hash_type != lsm2->lsm_md_hash_type ||
> > + lsm1->lsm_md_layout_version != lsm2->lsm_md_layout_version ||
> > + !strcmp(lsm1->lsm_md_pool_name, lsm2->lsm_md_pool_name))
> > + return false;
>
> Hi James and all,
> This patch (8f18c8a48b736c2f in linux) is different from the
> corresponding patch in lustre-release (60e07b972114df).
>
> In that patch, the last clause in the 'if' condition is
>
> + strcmp(lsm1->lsm_md_pool_name,
> + lsm2->lsm_md_pool_name) != 0)
>
> Whoever converted it to "!strcmp()" inverted the condition. This is a
> perfect example of why I absolutely *loathe* the "!strcmp()" construct!!

People think that "if (!strcmp())" is the preferred kernel style but it's
not.

if (foo != NULL) {

The != NULL is a double negative. I don't think it adds anything.
Some kernel developers like this style because it's explicit about the
type. I have never seen any bugs caused by this format or solved by
this format. Anyway checkpatch complains.

if (ret != 0) {

In this situation "ret" is not a number, it's an error code. The != 0
is a double negative and complicated to think about. Btw, I sort of
prefer "if (ret)" to "if (ret < 0)", not because of style but it's
easier for Smatch. No subsystems are totally consistent so the (by
definition inconsistent) "if (ret < 0)" checks cause false positives in
Smatch.

if (len != 0)

This is OK. "len" is a number.

if (strcmp(one, two) != 0) {

With strcmp() I really prefer == 0 and != 0 because it works like this:

strcmp(one, two) == 0 --> means one == two
strcmp(one, two) < 0 --> means one < two
strcmp(one, two) != 0 --> means one != two

Either style is accepted in the kernel but I think == 0 just makes so
much sense. I mostly see bugs from this when people are "fixing" the
style from == 0 to !strcmp(), so my sample is very biased. Normally, if
the original author writes the code, any bugs are caught in testing, so
either way is going to be bug-free.
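
To make the trap concrete, here is a tiny standalone example (nothing
Lustre-specific, just an illustration):

	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		const char *a = "fs_pool", *b = "fs_pool";

		if (strcmp(a, b) == 0)	/* true: the strings are equal */
			printf("equal\n");
		if (!strcmp(a, b))	/* also true: same test, negation hidden */
			printf("equal again\n");
		if (strcmp(a, b) != 0)	/* false: would mean they differ */
			printf("different\n");
		return 0;
	}

Reading "!strcmp(a, b)" as "a is not equal to b" gives exactly the
inverted test that broke lsm_md_eq().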

But the only thing that checkpatch complains about is == NULL and
!= NULL.

regards,
dan carpenter