2010-05-06 19:20:19

by Benny Halevy

[permalink] [raw]
Subject: [PATCH 0/7] pnfs-submit api touch ups

After rebasing everything on top of Andy's and Fred's patchsets I propose the following:

[PATCH 1/2] SQUASHME: pnfs-submit: have initialize_mountpoint return status
[PATCH 2/2] SQUASHME: pnfs-submit: pass struct nfs_server * to getdeviceinfo
[PATCH 3/5] pnfs-post-submit: pass struct nfs_server * to getdevicelist
[PATCH 4/5] pnfs-post-submit: pass mntfh down the init_pnfs path
[PATCH 5/5] FIXME: pnfs-post-submit: per mount layout driver private data
[PATCH 6/6] SQUASHME: pnfs-block: convert APIs pnfs-post-submit
[PATCH 7/7] SQUASHME: pnfs-obj: convert APIs pnfs-post-submit


2010-05-11 08:46:51

by Boaz Harrosh

[permalink] [raw]
Subject: Re: [PATCH 5/5] FIXME: pnfs-post-submit: per mount layout driver private data

On 05/10/2010 05:24 PM, Andy Adamson wrote:
>
> On May 9, 2010, at 12:25 PM, Boaz Harrosh wrote:
>
>> On 05/06/2010 10:23 PM, Benny Halevy wrote:
>>> Temporary relief until we convert to use generic device cache.
>>>
>>
>> [In short]
>> Rrrr, No! This is a complete crash. The private data is per-server but
>> is called per super-block. In the case of two mounts of same "server",
>> the user of this will:
>> - Leak a device cache on second mount
>> - Crash after close of first super block.
>>
>> [The long story]
>> I'm commenting here on the complete series, Andy's and Benny's included.
>>
>> What Andy tried to do was move the per super-block device cache to a
>> per "server" device cache. (This is the per server struct at the NFS
>> client side, not to be confused with an NFS server whatsoever). This
>> is because, as mandated by the Protocol, each device id's uniqueness is
>> governed by a per-server-per-client-per-layout_type, so multiple
>> mounts can share the device-cache and save resources. The old code
>> of per-mount-point was correct, only not optimal.
>>
>> But he did not finish the job, because he still calls the device
>> cache initialization at per-mount-point init: ->initialize_mountpoint is
>> called from set_pnfs_layoutdriver(), which is called with a super-block
>> per mount-point. He went to great lengths (I hope, I did not check) to
>> make sure only the first mount allocates the cache, and the last
>> mount
>> destroys it.
>>
>> But otherwise he noticed (and Benny tried to help) that now
>> initialize_mountpoint is per-server, and not per-sb. Hence the pointer
>> to struct server.
>>
>> So the old code is now a layering violation and hence the mix-up and
>> the bug
>
>>
>> - If it is per-server, name it ->initialize_server(), receiving a
>> server
>> pointer, no?
>>

What about the name?

>> - If it is per-server, then shift the whole set_pnfs_layoutdriver() to
>> be called once at per-server construction (at the server constructor
>> code)
>
> set_pnfs_layoutdriver checks to see if nfs_server->pnfs_curr_ld is set.

Yes, set_pnfs_layoutdriver does; again, a layering violation in my opinion.
However, ->uninitialize_mountpoint() is still called for every sb-unmount;
how useful is that? (at super.c nfs4_kill_super)

It is very simple really.

- ->initialize_server() is called from nfs_server_set_fsinfo() for every
mount; the check should be there. Do you need it that late? Perhaps it could
be called earlier, at nfs_init_server()?

- Call ->uninitialize_server() from nfs_free_server(). And all is well.


- Give me a void* at server to keep my stuff.
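
Roughly what I have in mind, as a sketch only (the initialize_server /
uninitialize_server names, the pnfs_ld_data field and the exact call sites
here are my proposal, not existing code):

/* Proposed per-server hooks, sketch only */
struct layoutdriver_io_operations {
	/* ... existing I/O ops elided ... */
	int (*initialize_server)(struct nfs_server *server,
				 const struct nfs_fh *mntfh);
	void (*uninitialize_server)(struct nfs_server *server);
};

/* Called once when the layout driver is chosen, e.g. from
 * nfs_server_set_fsinfo() (or earlier, from nfs_init_server()). */
static int pnfs_initialize_server(struct nfs_server *server,
				  const struct nfs_fh *mntfh)
{
	if (!server->pnfs_curr_ld)
		return 0;
	/* The driver keeps its private state in server->pnfs_ld_data. */
	return server->pnfs_curr_ld->ld_io_ops->initialize_server(server, mntfh);
}

/* Called once from nfs_free_server(), so the driver state shares the
 * struct nfs_server lifetime and needs no extra reference counting. */
static void pnfs_uninitialize_server(struct nfs_server *server)
{
	if (server->pnfs_curr_ld)
		server->pnfs_curr_ld->ld_io_ops->uninitialize_server(server);
}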

> Put the private pointer into struct pnfs_layoutdriver_type and your
> problem will be solved.
>

No, pnfs_layoutdriver_type is global. I might as well just put it in
the data segment. I wanted per-mount data; I'm willing to compromise on per-server.

> -->Andy
>

OK, I finally read the code. Forget everything I said!

So the current code is one big bug with regard to filelayout_uninitialize_mountpoint
being called for every nfs4_kill_super and destroying the cache.

Struct server has nothing to do with it; it is just a way to get at
the struct *client* pointer. (My god, the server/client relationships in
the Linux NFS client, I still don't understand them).

So everything I said above holds, but exchange the name "server" with *"client"*.

And most importantly: YOU DO NOT NEED THE REFERENCE COUNTS. (Right, there
are two.)

If you tie the device-cache to the lifetime of the client structure
then:
- There will not be any super-blocks alive before/after a particular client
dies. (Right? sb->ref->server->ref->client)
- There will not be any inodes alive after a super-block dies, hence no
IOs nor any layouts.

So a device cache is, as you said, per client structure, no more, no
less. No?

(I can't find the last version of the patches you sent to the mailing list;
I wanted to comment on them (sorry for the delay, I was busy). I'll look
in Benny's git, I hope he did not squash them yet, and will comment
on that.)

Boaz

>> then you don't need to sync in multiple places the initialization/
>> destruction
>> of sb(s) vs. servers vs device-caches. Server struct life-cycle
>> will govern that.
>>
>> Accommodating future needs:
>> - In objects layout (In code not yet released) I have a per-super-
>> block: pages-
>> cache-pool, raid-engine governing struct, and some other raid
>> related information.
>> I use per-super-block because this is the most natural in the Linux
>> VFS API. So
>> global stuff per super-block directly pointed by every inode for
>> easy (random)
>> access at every API level. I could shift this to be per-server in
>> NFS-client. I surely
>> don't want it global (Rrrrr), and per-inode is too small. I will
>> need to work harder
>> to optimize for the extra contention (or maybe not).
>>
>> So the per-server model is fine, I guess, but don't let me slave
>> over a broken API that
>> forces me to duplicate lifetime rules of things that are already
>> taken care of, only
>> not seen by the layout driver.
>>
>> If moving to a per-server model then some current structures
>> referencing and pointing
>> could change to remove the SB from the picture and directly point
>> to server.
>>
>> I know this is lots of work, and who's going to do it? But I was not
>> the one who suggested
>> the optimization in the first place. A per-SB is so much easier
>> because of the Linux
>> environment we live in, but if we do it, it must be done right.
>>
>> Boaz
>>
>>> Signed-off-by: Benny Halevy <[email protected]>
>>> ---
>>> include/linux/nfs_fs_sb.h | 1 +
>>> 1 files changed, 1 insertions(+), 0 deletions(-)
>>>
>>> diff --git a/include/linux/nfs_fs_sb.h b/include/linux/nfs_fs_sb.h
>>> index cad56a7..00a4e7e 100644
>>> --- a/include/linux/nfs_fs_sb.h
>>> +++ b/include/linux/nfs_fs_sb.h
>>> @@ -164,6 +164,7 @@ struct nfs_server {
>>>
>>> #ifdef CONFIG_NFS_V4_1
>>> struct pnfs_layoutdriver_type *pnfs_curr_ld; /* Active layout
>>> driver */
>>> + void *pnfs_ld_data; /* Per-mount data */
>>> unsigned int ds_rsize; /* Data server read size */
>>> unsigned int ds_wsize; /* Data server write size */
>>> #endif /* CONFIG_NFS_V4_1 */
>>
>


2010-05-10 14:25:04

by Andy Adamson

[permalink] [raw]
Subject: Re: [PATCH 5/5] FIXME: pnfs-post-submit: per mount layout driver private data


On May 9, 2010, at 12:25 PM, Boaz Harrosh wrote:

> On 05/06/2010 10:23 PM, Benny Halevy wrote:
>> Temporary relief until we convert to use generic device cache.
>>
>
> [In short]
> Rrrr, No! This is a complete crash. The private data is per-server but
> is called per super-block. In the case of two mounts of same "server",
> the user of this will:
> - Leak a device cache on second mount
> - Crash after close of first super block.
>
> [The long story]
> I'm commenting here on the complete series, Andy's and Benny's included.
>
> What Andy tried to do was move the per super-block device cache to a
> per "server" device cache. (This is the per server struct at the NFS
> client side, not to be confused with an NFS server whatsoever). This
> is because, as mandated by the Protocol, each device id's uniqueness is
> governed by a per-server-per-client-per-layout_type, so multiple
> mounts can share the device-cache and save resources. The old code
> of per-mount-point was correct, only not optimal.
>
> But he did not finish the job, because he still calls the device
> cache initialization at per-mount-point init: ->initialize_mountpoint is
> called from set_pnfs_layoutdriver(), which is called with a super-block
> per mount-point. He went to great lengths (I hope, I did not check) to
> make sure only the first mount allocates the cache, and the last
> mount
> destroys it.
>
> But otherwise he noticed (and Benny tried to help) that now
> initialize_mountpoint is per-server, and not per-sb. Hence the pointer
> to struct server.
>
> So the old code is now a layering violation and hence the mix-up and
> the bug

>
> - If it is per-server, name it ->initialize_server(), receiving a
> server
> pointer, no?
>
> - If it is per-server, then shift the whole set_pnfs_layoutdriver() to
> be called once at per-server construction (at the server constructor
> code)

set_pnfs_layoutdriver checks to see if nfs_server->pnfs_curr_ld is set.
Put the private pointer into struct pnfs_layoutdriver_type and your
problem will be solved.

-->Andy

> then you don't need to sync in multiple places the initialization/
> destruction
> of sb(s) vs. servers vs device-caches. Server struct life-cycle
> will govern that.
>
> Accommodating future needs:
> - In objects layout (In code not yet released) I have a per-super-
> block: pages-
> cache-pool, raid-engine governing struct, and some other raid
> related information.
> I use per-super-block because this is the most natural in the Linux
> VFS API. So
> global stuff per super-block directly pointed by every inode for
> easy (random)
> access at every API level. I could shift this to be per-server in
> NFS-client. I surely
> don't want it global (Rrrrr), and per-inode is too small. I will
> need to work harder
> to optimize for the extra contention (or maybe not).
>
> So the per-server model is fine, I guess, but don't let me slave
> over a broken API that
> forces me to duplicate lifetime rules of things that are already
> taken care of, only
> not seen by the layout driver.
>
> If moving to a per-server model then some current structures
> referencing and pointing
> could change to remove the SB from the picture and directly point
> to server.
>
> I know this is lots of work, and who's going to do it? But I was not
> the one who suggested
> the optimization in the first place. A per-SB is so much easier
> because of the Linux
> environment we live in, but if we do it, it must be done right.
>
> Boaz
>
>> Signed-off-by: Benny Halevy <[email protected]>
>> ---
>> include/linux/nfs_fs_sb.h | 1 +
>> 1 files changed, 1 insertions(+), 0 deletions(-)
>>
>> diff --git a/include/linux/nfs_fs_sb.h b/include/linux/nfs_fs_sb.h
>> index cad56a7..00a4e7e 100644
>> --- a/include/linux/nfs_fs_sb.h
>> +++ b/include/linux/nfs_fs_sb.h
>> @@ -164,6 +164,7 @@ struct nfs_server {
>>
>> #ifdef CONFIG_NFS_V4_1
>> struct pnfs_layoutdriver_type *pnfs_curr_ld; /* Active layout
>> driver */
>> + void *pnfs_ld_data; /* Per-mount data */
>> unsigned int ds_rsize; /* Data server read size */
>> unsigned int ds_wsize; /* Data server write size */
>> #endif /* CONFIG_NFS_V4_1 */
>


2010-05-11 15:02:17

by Andy Adamson

[permalink] [raw]
Subject: Re: [PATCH 5/5] FIXME: pnfs-post-submit: per mount layout driver private data

On Tue, May 11, 2010 at 4:46 AM, Boaz Harrosh <[email protected]> wrote:
> On 05/10/2010 05:24 PM, Andy Adamson wrote:
>>
>> On May 9, 2010, at 12:25 PM, Boaz Harrosh wrote:
>>
>>> On 05/06/2010 10:23 PM, Benny Halevy wrote:
>>>> Temporary relief until we convert to use generic device cache.
>>>>
>>>
>>> [In short]
>>> Rrrr, No! This is a complete crash. The private data is per-server but
>>> is called per super-block. In the case of two mounts of same "server",
>>> the user of this will:
>>> - Leak a device cache on second mount
>>> - Crash after close of first super block.
>>>
>>> [The long story]
>>> I'm commenting here on the complete series, Andy's and Benny's included.
>>>
>>> What Andy tried to do was move the per super-block device cache to a
>>> per "server" device cache. (This is the per server struct at the NFS
>>> client side, not to be confused with an NFS server whatsoever). This
>>> is because, as mandated by the Protocol, each device id's uniqueness is
>>> governed by a per-server-per-client-per-layout_type, so multiple
>>> mounts can share the device-cache and save resources. The old code
>>> of per-mount-point was correct, only not optimal.
>>>
>>> But he did not finish the job, because he still calls the device
>>> cache initialization at per-mount-point init: ->initialize_mountpoint is
>>> called from set_pnfs_layoutdriver(), which is called with a super-block
>>> per mount-point. He went to great lengths (I hope, I did not check) to
>>> make sure only the first mount allocates the cache, and the last
>>> mount
>>> destroys it.
>>>
>>> But otherwise he noticed (and Benny tried to help) that now
>>> initialize_mountpoint is per-server, and not per-sb. Hence the pointer
>>> to struct server.
>>>
>>> So the old code is now a layering violation and hence the mix-up and
>>> the bug
>>
>>>
>>> - If it is per-server, name it ->initialize_server(), receiving a
>>> server
>>> pointer, no?
>>>
>
> What about the name ?
>
>>> - If it is per-server, then shift the whole set_pnfs_layoutdriver() to
>>> be called once at per-server construction (at the server constructor
>>> code)
>>
>> set_pnfs_layoutdriver checks to see if nfs_server->pnfs_curr_ld is set.
>
> Yes set_pnfs_layoutdriver does, again a layering violation in my opinion.
> However ->uninitialize_mountpoint() is still called for every sb-unmount
> how useful is that? (at super.c nfs4_kill_super)
>
> It is very simple really.
>
> - ->initialize_server() is called from nfs_server_set_fsinfo() for every
> mount; the check should be there. Do you need it that late? Perhaps it could
> be called earlier, at nfs_init_server()?
>
> - Call ->uninitialize_server() from nfs_free_server(). And all is well.
>
>
> - Give me a void* at server to keep my stuff.
>
>> Put the private pointer into struct pnfs_layoutdriver_type and your
>> problem will be solved.
>>
>
> No, pnfs_layoutdriver_type is global. I might as well just put it in
> the data segment. I wanted per-mount data; I'm willing to compromise on per-server.
>
>> -->Andy
>>
>
> OK, I finally read the code. Forget everything I said!

:)

>
> So the current code is one big bug with regard to filelayout_uninitialize_mountpoint
> being called for every nfs4_kill_super and destroying the cache.

Each layout driver's initialize_mountpoint function is required to call
nfs4_alloc_init_deviceid_cache() with its layout-driver-specific
free_deviceid_callback routine, which either allocates the deviceid
cache or bumps the reference count.

The cache is not necessarily destroyed by the uninitialize_mountpoint
call, which is required to call nfs4_put_deviceid_cache().

See fs/nfs/nfs4filelayout.c.

The idea is for the deviceid cache to be referenced once per struct
nfs_server in nfs_server_set_fsinfo (which calls nfs4_init_pnfs) and
dereferenced once per struct nfs_server in nfs4_kill_super (which
calls unmount_pnfs_layoutdriver).
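
In rough code, the intended pairing looks something like this (a simplified
sketch of the alloc-or-ref / put pattern described above; the struct, field,
and helper names are illustrative only, not the actual fs/nfs code, and the
locking is omitted):

/* One deviceid cache per struct nfs_client, shared by its pNFS mounts. */
struct devid_cache {
	atomic_t		ref;	/* one reference per pNFS nfs_server */
	struct hlist_head	ids;	/* cached deviceids */
};

/* initialize_mountpoint path: allocate on the first pNFS mount of this
 * nfs_client, otherwise just take another reference. */
static int devid_cache_get(struct nfs_client *clp)
{
	struct devid_cache *c = clp->cl_devid_cache;	/* field name assumed */

	if (c) {
		atomic_inc(&c->ref);
		return 0;
	}
	c = kzalloc(sizeof(*c), GFP_KERNEL);
	if (!c)
		return -ENOMEM;
	atomic_set(&c->ref, 1);
	INIT_HLIST_HEAD(&c->ids);
	clp->cl_devid_cache = c;
	return 0;
}

/* uninitialize_mountpoint path: drop one reference; the cache is only
 * destroyed when the last pNFS nfs_server referencing it goes away. */
static void devid_cache_put(struct nfs_client *clp)
{
	struct devid_cache *c = clp->cl_devid_cache;

	if (c && atomic_dec_and_test(&c->ref)) {
		/* free the cached deviceids here */
		clp->cl_devid_cache = NULL;
		kfree(c);
	}
}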

But you are right - I now see that I ignored error paths in creating
the super block and associating it with a struct nfs_server, so that
the current code could bump the reference and then not dereference on
error, or if the super block is shared, in which case the struct
nfs_server is freed.

This is fixed by moving the unmount_pnfs_layoutdriver call from
nfs4_kill_super into nfs_free_server which is called on all error
paths as well as by nfs4_kill_super.
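
In other words, something along these lines (sketch only; the actual patch
may differ):

/* Sketch: the put moves from nfs4_kill_super() into nfs_free_server(),
 * which runs on every error path as well as on normal unmount. */
void nfs_free_server(struct nfs_server *server)
{
	unmount_pnfs_layoutdriver(server);	/* was called from nfs4_kill_super() */

	/* ... rest of the existing nfs_free_server() teardown ... */
}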

I'll test a patch today.

>
> Struct server has nothing to do with it; it is just a way to get at
> the struct *client* pointer. (My god, the server/client relationships in
> the Linux NFS client, I still don't understand them).

It is also a way for you to receive a per-struct-nfs_server private
data pointer, which initialize/uninitialize_mountpoint can manage
independently of the nfs4_alloc_init_deviceid_cache and
nfs4_put_deviceid_cache calls used to manage the
generic deviceid cache.

>
> So everything I said above holds, but exchange the name "server" with *"client"*.
>
> And most importantly: YOU DO NOT NEED THE REFERENCE COUNTS. (Right, there
> are two.)
>
> If you tie the device-cache to the lifetime of the client structure
> then:

The nfs_client structure can support nfs_server structs that represent
multiple pNFS and non-pNFS mounts, and eventually multiple pNFS mounts
with multiple layout types. The cache lives only for multiple pNFS
mounts (currently of one layout type).

At the allocation of struct nfs_client, you don't know if pNFS will
be used, nor which layout type(s) will be supported. This is only
known after nfs_probe_fsinfo returns with the fs_layout_type attribute
(or not), which occurs after the nfs_client struct is created (or found
and referenced). So it is impossible to create a layout-type-specific
generic deviceid cache before we know which layout type the cache is
for, which may be well after the struct nfs_client is created if
non-pNFS mounts occur before pNFS mounts.

I therefore 'stick the device-cache' on the lifetime of the nfs_server
structures representing pNFS mounts, using ref counts to remove the
cache on last reference.

-->Andy

> - There will not be any super-blocks alive before/after a particular client
> dies. (Right? sb->ref->server->ref->client)
> - There will not be any inodes alive after a super-block dies, hence no
> IOs nor any layouts.
>
> So a device cache is, as you said, per client structure, no more, no
> less. No?
>
> (I can't find the last version of the patches you sent to the mailing list;
> I wanted to comment on them (sorry for the delay, I was busy). I'll look
> in Benny's git, I hope he did not squash them yet, and will comment
> on that.)
>
> Boaz
>
>>> then you don't need to sync in multiple places the initialization/
>>> destruction
>>> of sb(s) vs. servers vs device-caches. Server struct life-cycle
>>> will govern that.
>>>
>>> Accommodating future needs:
>>> - In objects layout (In code not yet released) I have a per-super-
>>> block: pages-
>>> cache-pool, raid-engine governing struct, and some other raid
>>> related information.
>>> I use per-super-block because this is the most natural in the Linux
>>> VFS API. So
>>> global stuff per super-block directly pointed by every inode for
>>> easy (random)
>>> access at every API level. I could shift this to be per-server in
>>> NFS-client. I surely
>>> don't want it global (Rrrrr), and per-inode is too small. I will
>>> need to work harder
>>> to optimize for the extra contention (or maybe not).
>>>
>>> So the per-server model is fine, I guess, but don't let me slave
>>> over a broken API that
>>> forces me to duplicate lifetime rules of things that are already
>>> taken care of, only
>>> not seen by the layout driver.
>>>
>>> If moving to a per-server model then some current structures
>>> referencing and pointing
>>> could change to remove the SB from the picture and directly point
>>> to server.
>>>
>>> I know this is lots of work, and who's going to do it? But I was not
>>> the one who suggested
>>> the optimization in the first place. A per-SB is so much easier
>>> because of the Linux
>>> environment we live in, but if we do it, it must be done right.
>>>
>>> Boaz
>>>
>>>> Signed-off-by: Benny Halevy <[email protected]>
>>>> ---
>>>> include/linux/nfs_fs_sb.h |    1 +
>>>> 1 files changed, 1 insertions(+), 0 deletions(-)
>>>>
>>>> diff --git a/include/linux/nfs_fs_sb.h b/include/linux/nfs_fs_sb.h
>>>> index cad56a7..00a4e7e 100644
>>>> --- a/include/linux/nfs_fs_sb.h
>>>> +++ b/include/linux/nfs_fs_sb.h
>>>> @@ -164,6 +164,7 @@ struct nfs_server {
>>>>
>>>> #ifdef CONFIG_NFS_V4_1
>>>>     struct pnfs_layoutdriver_type  *pnfs_curr_ld; /* Active layout
>>>> driver */
>>>> +   void                           *pnfs_ld_data; /* Per-mount data */
>>>>     unsigned int                    ds_rsize;  /* Data server read size */
>>>>     unsigned int                    ds_wsize;  /* Data server write size */
>>>> #endif /* CONFIG_NFS_V4_1 */
>>>
>>
>

2010-05-09 16:25:13

by Boaz Harrosh

[permalink] [raw]
Subject: Re: [PATCH 5/5] FIXME: pnfs-post-submit: per mount layout driver private data

On 05/06/2010 10:23 PM, Benny Halevy wrote:
> Temporary relief until we convert to use generic device cache.
>

[In short]
Rrrr, No! This is a complete crash. The private data is per-server but
is called per super-block. In the case of two mounts of same "server",
the user of this will:
- Leak a device cache on second mount
- Crash after close of first super block.

[The long story]
I'm commenting here on the complete series, Andy's and Benny's included.

What Andy tried to do was move the per super-block device cache to a
per "server" device cache. (This is the per server struct at the NFS
client side, not to be confused with an NFS server whatsoever). This
is because, as mandated by the Protocol, each device id's uniqueness is
governed by a per-server-per-client-per-layout_type, so multiple
mounts can share the device-cache and save resources. The old code
of per-mount-point was correct, only not optimal.

But he did not finish the job, because he still calls the device
cache initialization at per-mount-point init: ->initialize_mountpoint is
called from set_pnfs_layoutdriver(), which is called with a super-block
per mount-point. He went to great lengths (I hope, I did not check) to
make sure only the first mount allocates the cache, and the last mount
destroys it.

But otherwise he noticed (and Benny tried to help) that now
initialize_mountpoint is per-server, and not per-sb. Hence the pointer
to struct server.

So the old code is now a layering violation and hence the mix-up and the bug:

- If it is per-server, name it ->initialize_server(), receiving a server
pointer, no?

- If it is per-server, then shift the whole set_pnfs_layoutdriver() to
be called once at per-server construction (at the server constructor code);
then you don't need to sync in multiple places the initialization/destruction
of sb(s) vs. servers vs. device-caches. The server struct life-cycle will govern that.

Accommodating future needs:
- In objects layout (In code not yet released) I have a per-super-block: pages-
cache-pool, raid-engine governing struct, and some other raid related information.
I use per-super-block because this is the most natural in the Linux VFS API. So
global stuff per super-block directly pointed by every inode for easy (random)
access at every API level. I could shift this to be per-server in NFS-client. I surely
don't want it global (Rrrrr), and per-inode is too small. I will need to work harder
to optimize for the extra contention (or maybe not).

So the per-server model is fine, I guess, but don't let me slave over a broken API that
forces me to duplicate lifetime rules of things that are already taken care of, only
not seen by the layout driver.

If we move to a per-server model, then some current structures' referencing and pointing
could change to remove the SB from the picture and directly point to the server.

I know this is lots of work, and who's going to do it? But I was not the one who suggested
the optimization in the first place. A per-SB is so much easier because of the Linux
environment we live in, but if we do it, it must be done right.

Boaz

> Signed-off-by: Benny Halevy <[email protected]>
> ---
> include/linux/nfs_fs_sb.h | 1 +
> 1 files changed, 1 insertions(+), 0 deletions(-)
>
> diff --git a/include/linux/nfs_fs_sb.h b/include/linux/nfs_fs_sb.h
> index cad56a7..00a4e7e 100644
> --- a/include/linux/nfs_fs_sb.h
> +++ b/include/linux/nfs_fs_sb.h
> @@ -164,6 +164,7 @@ struct nfs_server {
>
> #ifdef CONFIG_NFS_V4_1
> struct pnfs_layoutdriver_type *pnfs_curr_ld; /* Active layout driver */
> + void *pnfs_ld_data; /* Per-mount data */
> unsigned int ds_rsize; /* Data server read size */
> unsigned int ds_wsize; /* Data server write size */
> #endif /* CONFIG_NFS_V4_1 */


2010-05-06 19:22:56

by Benny Halevy

[permalink] [raw]
Subject: [PATCH 1/2] SQUASHME: pnfs-submit: have initialize_mountpoint return status

use status convention rather than boolean true for success.

Signed-off-by: Benny Halevy <[email protected]>
---
fs/nfs/nfs4filelayout.c | 10 +++++-----
fs/nfs/pnfs.c | 2 +-
2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/fs/nfs/nfs4filelayout.c b/fs/nfs/nfs4filelayout.c
index 021c853..9d1274d 100644
--- a/fs/nfs/nfs4filelayout.c
+++ b/fs/nfs/nfs4filelayout.c
@@ -71,16 +71,16 @@ struct layoutdriver_io_operations filelayout_io_operations;
int
filelayout_initialize_mountpoint(struct nfs_client *clp)
{
-
- if (nfs4_alloc_init_deviceid_cache(clp,
- nfs4_fl_free_deviceid_callback)) {
+ int status = nfs4_alloc_init_deviceid_cache(clp,
+ nfs4_fl_free_deviceid_callback);
+ if (status) {
printk(KERN_WARNING "%s: deviceid cache could not be "
"initialized\n", __func__);
- return 0;
+ return status;
}
dprintk("%s: deviceid cache has been initialized successfully\n",
__func__);
- return 1;
+ return 0;
}

/* Uninitialize a mountpoint by destroying its device list.
diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
index ec543d2..3a09b91 100644
--- a/fs/nfs/pnfs.c
+++ b/fs/nfs/pnfs.c
@@ -211,7 +211,7 @@ set_pnfs_layoutdriver(struct nfs_server *server, u32 id)
return;

if (id > 0 && find_pnfs(id, &mod)) {
- if (!mod->pnfs_ld_type->ld_io_ops->initialize_mountpoint(
+ if (mod->pnfs_ld_type->ld_io_ops->initialize_mountpoint(
server->nfs_client)) {
printk(KERN_ERR "%s: Error initializing mount point "
"for layout driver %u. ", __func__, id);
--
1.6.5.1


2010-05-06 19:23:10

by Benny Halevy

[permalink] [raw]
Subject: [PATCH 2/2] SQUASHME: pnfs-submit: pass struct nfs_server * to getdeviceinfo

Signed-off-by: Benny Halevy <[email protected]>
---
fs/nfs/nfs4filelayoutdev.c | 2 +-
fs/nfs/nfs4proc.c | 3 +--
fs/nfs/pnfs.h | 2 +-
include/linux/nfs4_pnfs.h | 2 +-
4 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/fs/nfs/nfs4filelayoutdev.c b/fs/nfs/nfs4filelayoutdev.c
index 4b2b8ac..462f6de 100644
--- a/fs/nfs/nfs4filelayoutdev.c
+++ b/fs/nfs/nfs4filelayoutdev.c
@@ -518,7 +518,7 @@ get_device_info(struct inode *inode, struct pnfs_deviceid *dev_id)
/* TODO: Update types when CB_NOTIFY_DEVICEID is available */
pdev->dev_notify_types = 0;

- rc = pnfs_callback_ops->nfs_getdeviceinfo(inode->i_sb, pdev);
+ rc = pnfs_callback_ops->nfs_getdeviceinfo(server, pdev);
dprintk("%s getdevice info returns %d\n", __func__, rc);
if (rc)
goto out_free;
diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
index 43dab3c..dd2e6cf 100644
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -5767,9 +5767,8 @@ out:
return status;
}

-int nfs4_pnfs_getdeviceinfo(struct super_block *sb, struct pnfs_device *pdev)
+int nfs4_pnfs_getdeviceinfo(struct nfs_server *server, struct pnfs_device *pdev)
{
- struct nfs_server *server = NFS_SB(sb);
struct nfs4_pnfs_getdeviceinfo_arg args = {
.pdev = pdev,
};
diff --git a/fs/nfs/pnfs.h b/fs/nfs/pnfs.h
index c5e042b..b80157b 100644
--- a/fs/nfs/pnfs.h
+++ b/fs/nfs/pnfs.h
@@ -22,7 +22,7 @@
#include "iostat.h"

/* nfs4proc.c */
-extern int nfs4_pnfs_getdeviceinfo(struct super_block *sb,
+extern int nfs4_pnfs_getdeviceinfo(struct nfs_server *server,
struct pnfs_device *dev);
extern int pnfs4_proc_layoutget(struct nfs4_pnfs_layoutget *lgp);
extern int pnfs4_proc_layoutcommit(struct pnfs_layoutcommit_data *data);
diff --git a/include/linux/nfs4_pnfs.h b/include/linux/nfs4_pnfs.h
index 723706c..d9631de 100644
--- a/include/linux/nfs4_pnfs.h
+++ b/include/linux/nfs4_pnfs.h
@@ -293,7 +293,7 @@ extern void nfs4_unset_layout_deviceid(struct pnfs_layout_segment *,
* E.g., getdeviceinfo, I/O callbacks, etc
*/
struct pnfs_client_operations {
- int (*nfs_getdeviceinfo) (struct super_block *sb,
+ int (*nfs_getdeviceinfo) (struct nfs_server *,
struct pnfs_device *dev);

/* Post read callback. */
--
1.6.5.1


2010-05-06 19:23:27

by Benny Halevy

[permalink] [raw]
Subject: [PATCH 3/5] pnfs-post-submit: pass struct nfs_server * to getdevicelist

Signed-off-by: Benny Halevy <[email protected]>
---
fs/nfs/nfs4proc.c | 11 +++++------
fs/nfs/pnfs.h | 4 ++--
include/linux/nfs4_pnfs.h | 3 ++-
3 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
index f7ab35d..3742ba0 100644
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -5770,8 +5770,8 @@ out:
/*
* Retrieve the list of Data Server devices from the MDS.
*/
-static int _nfs4_pnfs_getdevicelist(struct nfs_fh *fh,
- struct nfs_server *server,
+static int _nfs4_pnfs_getdevicelist(struct nfs_server *server,
+ const struct nfs_fh *fh,
struct pnfs_devicelist *devlist)
{
struct nfs4_pnfs_getdevicelist_arg arg = {
@@ -5794,17 +5794,16 @@ static int _nfs4_pnfs_getdevicelist(struct nfs_fh *fh,
return status;
}

-int nfs4_pnfs_getdevicelist(struct super_block *sb,
- struct nfs_fh *fh,
+int nfs4_pnfs_getdevicelist(struct nfs_server *server,
+ const struct nfs_fh *fh,
struct pnfs_devicelist *devlist)
{
struct nfs4_exception exception = { };
- struct nfs_server *server = NFS_SB(sb);
int err;

do {
err = nfs4_handle_exception(server,
- _nfs4_pnfs_getdevicelist(fh, server, devlist),
+ _nfs4_pnfs_getdevicelist(server, fh, devlist),
&exception);
} while (exception.retry);

diff --git a/fs/nfs/pnfs.h b/fs/nfs/pnfs.h
index b5bb742..77623ea 100644
--- a/fs/nfs/pnfs.h
+++ b/fs/nfs/pnfs.h
@@ -22,8 +22,8 @@
#include "iostat.h"

/* nfs4proc.c */
-extern int nfs4_pnfs_getdevicelist(struct super_block *sb,
- struct nfs_fh *fh,
+extern int nfs4_pnfs_getdevicelist(struct nfs_server *server,
+ const struct nfs_fh *fh,
struct pnfs_devicelist *devlist);
extern int nfs4_pnfs_getdeviceinfo(struct nfs_server *server,
struct pnfs_device *dev);
diff --git a/include/linux/nfs4_pnfs.h b/include/linux/nfs4_pnfs.h
index 3385a5c..87bb982 100644
--- a/include/linux/nfs4_pnfs.h
+++ b/include/linux/nfs4_pnfs.h
@@ -321,7 +321,8 @@ extern void nfs4_delete_device(struct nfs4_deviceid_cache *,
* E.g., getdeviceinfo, I/O callbacks, etc
*/
struct pnfs_client_operations {
- int (*nfs_getdevicelist) (struct super_block *sb, struct nfs_fh *fh,
+ int (*nfs_getdevicelist) (struct nfs_server *,
+ const struct nfs_fh *fh,
struct pnfs_devicelist *devlist);
int (*nfs_getdeviceinfo) (struct nfs_server *,
struct pnfs_device *dev);
--
1.6.5.1


2010-05-06 19:23:41

by Benny Halevy

[permalink] [raw]
Subject: [PATCH 4/5] pnfs-post-submit: pass mntfh down the init_pnfs path

To allow the layout driver to issue getdevicelist at mount time.

Signed-off-by: Benny Halevy <[email protected]>
---
fs/nfs/client.c | 10 +++++-----
fs/nfs/nfs4filelayout.c | 5 +++--
fs/nfs/pnfs.c | 5 +++--
fs/nfs/pnfs.h | 2 +-
include/linux/nfs4_pnfs.h | 3 ++-
5 files changed, 14 insertions(+), 11 deletions(-)

diff --git a/fs/nfs/client.c b/fs/nfs/client.c
index b8c459d..7e1833d 100644
--- a/fs/nfs/client.c
+++ b/fs/nfs/client.c
@@ -873,14 +873,14 @@ error:
/*
* Initialize the pNFS layout driver and setup pNFS related parameters
*/
-static void nfs4_init_pnfs(struct nfs_server *server, struct nfs_fsinfo *fsinfo)
+static void nfs4_init_pnfs(struct nfs_server *server, struct nfs_fh *mntfh, struct nfs_fsinfo *fsinfo)
{
#if defined(CONFIG_NFS_V4_1)
struct nfs_client *clp = server->nfs_client;

if (nfs4_has_session(clp) &&
(clp->cl_exchange_flags & EXCHGID4_FLAG_USE_PNFS_MDS)) {
- set_pnfs_layoutdriver(server, fsinfo->layouttype);
+ set_pnfs_layoutdriver(server, mntfh, fsinfo->layouttype);
pnfs_set_ds_iosize(server);
}
#endif /* CONFIG_NFS_V4_1 */
@@ -889,7 +889,7 @@ static void nfs4_init_pnfs(struct nfs_server *server, struct nfs_fsinfo *fsinfo)
/*
* Load up the server record from information gained in an fsinfo record
*/
-static void nfs_server_set_fsinfo(struct nfs_server *server, struct nfs_fsinfo *fsinfo)
+static void nfs_server_set_fsinfo(struct nfs_server *server, struct nfs_fh *mntfh, struct nfs_fsinfo *fsinfo)
{
unsigned long max_rpc_payload;

@@ -919,7 +919,7 @@ static void nfs_server_set_fsinfo(struct nfs_server *server, struct nfs_fsinfo *
if (server->wsize > NFS_MAX_FILE_IO_SIZE)
server->wsize = NFS_MAX_FILE_IO_SIZE;
server->wpages = (server->wsize + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
- nfs4_init_pnfs(server, fsinfo);
+ nfs4_init_pnfs(server, mntfh, fsinfo);

server->wtmult = nfs_block_bits(fsinfo->wtmult, NULL);

@@ -963,7 +963,7 @@ static int nfs_probe_fsinfo(struct nfs_server *server, struct nfs_fh *mntfh, str
if (error < 0)
goto out_error;

- nfs_server_set_fsinfo(server, &fsinfo);
+ nfs_server_set_fsinfo(server, mntfh, &fsinfo);

/* Get some general file system info */
if (server->namelen == 0) {
diff --git a/fs/nfs/nfs4filelayout.c b/fs/nfs/nfs4filelayout.c
index 9d1274d..d649883 100644
--- a/fs/nfs/nfs4filelayout.c
+++ b/fs/nfs/nfs4filelayout.c
@@ -69,9 +69,10 @@ ssize_t filelayout_get_stripesize(struct pnfs_layout_type *);
struct layoutdriver_io_operations filelayout_io_operations;

int
-filelayout_initialize_mountpoint(struct nfs_client *clp)
+filelayout_initialize_mountpoint(struct nfs_server *nfss,
+ const struct nfs_fh *mntfh)
{
- int status = nfs4_alloc_init_deviceid_cache(clp,
+ int status = nfs4_alloc_init_deviceid_cache(nfss->nfs_client,
nfs4_fl_free_deviceid_callback);
if (status) {
printk(KERN_WARNING "%s: deviceid cache could not be "
diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
index 6c0e8fa..059572f 100644
--- a/fs/nfs/pnfs.c
+++ b/fs/nfs/pnfs.c
@@ -203,7 +203,8 @@ unmount_pnfs_layoutdriver(struct nfs_server *nfss)
* Only one pNFS layout driver is supported.
*/
void
-set_pnfs_layoutdriver(struct nfs_server *server, u32 id)
+set_pnfs_layoutdriver(struct nfs_server *server, const struct nfs_fh *mntfh,
+ u32 id)
{
struct pnfs_module *mod;

@@ -212,7 +213,7 @@ set_pnfs_layoutdriver(struct nfs_server *server, u32 id)

if (id > 0 && find_pnfs(id, &mod)) {
if (mod->pnfs_ld_type->ld_io_ops->initialize_mountpoint(
- server->nfs_client)) {
+ server, mntfh)) {
printk(KERN_ERR "%s: Error initializing mount point "
"for layout driver %u. ", __func__, id);
goto out_err;
diff --git a/fs/nfs/pnfs.h b/fs/nfs/pnfs.h
index 77623ea..8318112 100644
--- a/fs/nfs/pnfs.h
+++ b/fs/nfs/pnfs.h
@@ -41,7 +41,7 @@ int pnfs_update_layout(struct inode *ino, struct nfs_open_context *ctx,
int _pnfs_return_layout(struct inode *, struct nfs4_pnfs_layout_segment *,
const nfs4_stateid *stateid, /* optional */
enum pnfs_layoutreturn_type);
-void set_pnfs_layoutdriver(struct nfs_server *, u32 id);
+void set_pnfs_layoutdriver(struct nfs_server *, const struct nfs_fh *mntfh, u32 id);
void unmount_pnfs_layoutdriver(struct nfs_server *);
int pnfs_use_read(struct inode *inode, ssize_t count);
int pnfs_use_ds_io(struct list_head *, struct inode *, int);
diff --git a/include/linux/nfs4_pnfs.h b/include/linux/nfs4_pnfs.h
index 87bb982..719302a 100644
--- a/include/linux/nfs4_pnfs.h
+++ b/include/linux/nfs4_pnfs.h
@@ -168,7 +168,8 @@ struct layoutdriver_io_operations {

/* Registration information for a new mounted file system
*/
- int (*initialize_mountpoint) (struct nfs_client *);
+ int (*initialize_mountpoint) (struct nfs_server *,
+ const struct nfs_fh * mntfh);
int (*uninitialize_mountpoint) (struct nfs_server *server);
};

--
1.6.5.1


2010-05-06 19:24:04

by Benny Halevy

[permalink] [raw]
Subject: [PATCH 5/5] FIXME: pnfs-post-submit: per mount layout driver private data

Temporary relief until we convert to use generic device cache.

Signed-off-by: Benny Halevy <[email protected]>
---
include/linux/nfs_fs_sb.h | 1 +
1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/include/linux/nfs_fs_sb.h b/include/linux/nfs_fs_sb.h
index cad56a7..00a4e7e 100644
--- a/include/linux/nfs_fs_sb.h
+++ b/include/linux/nfs_fs_sb.h
@@ -164,6 +164,7 @@ struct nfs_server {

#ifdef CONFIG_NFS_V4_1
struct pnfs_layoutdriver_type *pnfs_curr_ld; /* Active layout driver */
+ void *pnfs_ld_data; /* Per-mount data */
unsigned int ds_rsize; /* Data server read size */
unsigned int ds_wsize; /* Data server write size */
#endif /* CONFIG_NFS_V4_1 */
--
1.6.5.1


2010-05-06 19:24:23

by Benny Halevy

[permalink] [raw]
Subject: [PATCH 6/6] SQUASHME: pnfs-block: convert APIs pnfs-post-submit

Signed-off-by: Benny Halevy <[email protected]>
---
fs/nfs/blocklayout/blocklayout.c | 45 ++++++++++++++--------------------
fs/nfs/blocklayout/blocklayout.h | 7 ++---
fs/nfs/blocklayout/blocklayoutdev.c | 8 +++---
fs/nfs/blocklayout/blocklayoutdm.c | 4 +-
4 files changed, 28 insertions(+), 36 deletions(-)

diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c
index 768d8fa..918e6d6 100644
--- a/fs/nfs/blocklayout/blocklayout.c
+++ b/fs/nfs/blocklayout/blocklayout.c
@@ -564,7 +564,7 @@ bl_free_layout(void *p)
}

static void *
-bl_alloc_layout(struct pnfs_mount_type *mtype, struct inode *inode)
+bl_alloc_layout(struct inode *inode)
{
struct pnfs_block_layout *bl;

@@ -688,7 +688,7 @@ static void free_blk_mountid(struct block_mount_id *mid)
* It seems much of this should be at the generic pnfs level.
*/
static struct pnfs_block_dev *
-nfs4_blk_get_deviceinfo(struct super_block *sb, struct nfs_fh *fh,
+nfs4_blk_get_deviceinfo(struct nfs_server *server, const struct nfs_fh *fh,
struct pnfs_deviceid *d_id,
struct list_head *sdlist)
{
@@ -698,7 +698,6 @@ nfs4_blk_get_deviceinfo(struct super_block *sb, struct nfs_fh *fh,
int max_pages;
struct page **pages = NULL;
int i, rc;
- struct nfs_server *server = NFS_SB(sb);

/*
* Use the session max response size as the basis for setting
@@ -739,12 +738,12 @@ nfs4_blk_get_deviceinfo(struct super_block *sb, struct nfs_fh *fh,
dev->pglen = PAGE_SIZE * max_pages;
dev->mincount = 0;

- rc = pnfs_block_callback_ops->nfs_getdeviceinfo(sb, dev);
+ rc = pnfs_block_callback_ops->nfs_getdeviceinfo(server, dev);
dprintk("%s getdevice info returns %d\n", __func__, rc);
if (rc)
goto out_free;

- rv = nfs4_blk_decode_device(sb, dev, sdlist);
+ rv = nfs4_blk_decode_device(server, dev, sdlist);
out_free:
if (dev->area != NULL)
vunmap(dev->area);
@@ -759,8 +758,8 @@ nfs4_blk_get_deviceinfo(struct super_block *sb, struct nfs_fh *fh,
/*
* Retrieve the list of available devices for the mountpoint.
*/
-static struct pnfs_mount_type *
-bl_initialize_mountpoint(struct super_block *sb, struct nfs_fh *fh)
+static int
+bl_initialize_mountpoint(struct nfs_server *server, const struct nfs_fh *fh)
{
struct block_mount_id *b_mt_id = NULL;
struct pnfs_mount_type *mtype = NULL;
@@ -771,21 +770,18 @@ bl_initialize_mountpoint(struct super_block *sb, struct nfs_fh *fh)

dprintk("%s enter\n", __func__);

- if (NFS_SB(sb)->pnfs_blksize == 0) {
+ if (server->pnfs_blksize == 0) {
dprintk("%s Server did not return blksize\n", __func__);
- return NULL;
+ return -EINVAL;
}
b_mt_id = kzalloc(sizeof(struct block_mount_id), GFP_KERNEL);
- if (!b_mt_id)
+ if (!b_mt_id) {
+ status = -ENOMEM;
goto out_error;
+ }
/* Initialize nfs4 block layout mount id */
- b_mt_id->bm_sb = sb; /* back pointer to retrieve nfs_server struct */
spin_lock_init(&b_mt_id->bm_lock);
INIT_LIST_HEAD(&b_mt_id->bm_devlist);
- mtype = kzalloc(sizeof(struct pnfs_mount_type), GFP_KERNEL);
- if (!mtype)
- goto out_error;
- mtype->mountid = (void *)b_mt_id;

/* Construct a list of all visible scsi disks that have not been
* claimed.
@@ -799,7 +795,8 @@ bl_initialize_mountpoint(struct super_block *sb, struct nfs_fh *fh)
goto out_error;
dlist->eof = 0;
while (!dlist->eof) {
- status = pnfs_block_callback_ops->nfs_getdevicelist(sb, fh, dlist);
+ status = pnfs_block_callback_ops->nfs_getdevicelist(
+ server, fh, dlist);
if (status)
goto out_error;
dprintk("%s GETDEVICELIST numdevs=%i, eof=%i\n",
@@ -811,7 +808,7 @@ bl_initialize_mountpoint(struct super_block *sb, struct nfs_fh *fh)
* Construct an LVM meta device from the flat volume topology.
*/
for (i = 0; i < dlist->num_devs; i++) {
- bdev = nfs4_blk_get_deviceinfo(sb, fh,
+ bdev = nfs4_blk_get_deviceinfo(server, fh,
&dlist->dev_id[i],
&scsi_disklist);
if (!bdev)
@@ -822,30 +819,26 @@ bl_initialize_mountpoint(struct super_block *sb, struct nfs_fh *fh)
}
}
dprintk("%s SUCCESS\n", __func__);
-
+ server->pnfs_ld_data = b_mt_id;
+ status = 0;
out_return:
kfree(dlist);
nfs4_blk_destroy_disk_list(&scsi_disklist);
- return mtype;
+ return status;

out_error:
free_blk_mountid(b_mt_id);
kfree(mtype);
- mtype = NULL;
goto out_return;
}

static int
-bl_uninitialize_mountpoint(struct pnfs_mount_type *mtype)
+bl_uninitialize_mountpoint(struct nfs_server *server)
{
- struct block_mount_id *b_mt_id = NULL;
+ struct block_mount_id *b_mt_id = server->pnfs_ld_data;

dprintk("%s enter\n", __func__);
- if (!mtype)
- return 0;
- b_mt_id = (struct block_mount_id *)mtype->mountid;
free_blk_mountid(b_mt_id);
- kfree(mtype);
dprintk("%s RETURNS\n", __func__);
return 0;
}
diff --git a/fs/nfs/blocklayout/blocklayout.h b/fs/nfs/blocklayout/blocklayout.h
index 45939e1..286adc9 100644
--- a/fs/nfs/blocklayout/blocklayout.h
+++ b/fs/nfs/blocklayout/blocklayout.h
@@ -51,7 +51,6 @@ extern int dm_do_resume(struct dm_ioctl *param);
extern int dm_table_load(struct dm_ioctl *param, size_t param_size);

struct block_mount_id {
- struct super_block *bm_sb; /* back pointer */
spinlock_t bm_lock; /* protects list */
struct list_head bm_devlist; /* holds pnfs_block_dev */
};
@@ -194,7 +193,7 @@ struct bl_layoutupdate_data {
struct list_head ranges;
};

-#define BLK_ID(lo) ((struct block_mount_id *)(PNFS_MOUNTID(lo)->mountid))
+#define BLK_ID(lo) ((struct block_mount_id *)(PNFS_NFS_SERVER(lo)->pnfs_ld_data))
#define BLK_LSEG2EXT(lseg) ((struct pnfs_block_layout *)lseg->layout->ld_data)
#define BLK_LO2EXT(lo) ((struct pnfs_block_layout *)lo->ld_data)

@@ -246,7 +245,7 @@ uint32_t *blk_overflow(uint32_t *p, uint32_t *end, size_t nbytes);
/* blocklayoutdev.c */
struct block_device *nfs4_blkdev_get(dev_t dev);
int nfs4_blkdev_put(struct block_device *bdev);
-struct pnfs_block_dev *nfs4_blk_decode_device(struct super_block *sb,
+struct pnfs_block_dev *nfs4_blk_decode_device(struct nfs_server *server,
struct pnfs_device *dev,
struct list_head *sdlist);
int nfs4_blk_process_layoutget(struct pnfs_layout_type *lo,
@@ -254,7 +253,7 @@ int nfs4_blk_process_layoutget(struct pnfs_layout_type *lo,
int nfs4_blk_create_scsi_disk_list(struct list_head *);
void nfs4_blk_destroy_disk_list(struct list_head *);
/* blocklayoutdm.c */
-struct pnfs_block_dev *nfs4_blk_init_metadev(struct super_block *sb,
+struct pnfs_block_dev *nfs4_blk_init_metadev(struct nfs_server *server,
struct pnfs_device *dev);
int nfs4_blk_flatten(struct pnfs_blk_volume *, int, struct pnfs_block_dev *);
void free_block_dev(struct pnfs_block_dev *bdev);
diff --git a/fs/nfs/blocklayout/blocklayoutdev.c b/fs/nfs/blocklayout/blocklayoutdev.c
index 9fc3d46..4f45523 100644
--- a/fs/nfs/blocklayout/blocklayoutdev.c
+++ b/fs/nfs/blocklayout/blocklayoutdev.c
@@ -489,9 +489,9 @@ static int decode_blk_volume(uint32_t **pp, uint32_t *end,
* in dev->dev_addr_buf.
*/
struct pnfs_block_dev *
-nfs4_blk_decode_device(struct super_block *sb,
- struct pnfs_device *dev,
- struct list_head *sdlist)
+nfs4_blk_decode_device(struct nfs_server *server,
+ struct pnfs_device *dev,
+ struct list_head *sdlist)
{
int num_vols, i, status, count;
struct pnfs_blk_volume *vols, **arrays, **arrays_ptr;
@@ -540,7 +540,7 @@ nfs4_blk_decode_device(struct super_block *sb,
}

/* Now use info in vols to create the meta device */
- rv = nfs4_blk_init_metadev(sb, dev);
+ rv = nfs4_blk_init_metadev(server, dev);
if (!rv)
goto out;
status = nfs4_blk_flatten(vols, num_vols, rv);
diff --git a/fs/nfs/blocklayout/blocklayoutdm.c b/fs/nfs/blocklayout/blocklayoutdm.c
index 4bff748..d70f6b2 100644
--- a/fs/nfs/blocklayout/blocklayoutdm.c
+++ b/fs/nfs/blocklayout/blocklayoutdm.c
@@ -129,7 +129,7 @@ void free_block_dev(struct pnfs_block_dev *bdev)
/*
* Create meta device. Keep it open to use for I/O.
*/
-struct pnfs_block_dev *nfs4_blk_init_metadev(struct super_block *sb,
+struct pnfs_block_dev *nfs4_blk_init_metadev(struct nfs_server *server,
struct pnfs_device *dev)
{
static uint64_t dev_count; /* STUB used for device names */
@@ -151,7 +151,7 @@ struct pnfs_block_dev *nfs4_blk_init_metadev(struct super_block *sb,
bd = nfs4_blkdev_get(meta_dev);
if (!bd)
goto out_err;
- if (bd_claim(bd, sb)) {
+ if (bd_claim(bd, server)) {
dprintk("%s: failed to claim device %d:%d\n",
__func__,
MAJOR(meta_dev),
--
1.6.5.1


2010-05-06 19:24:39

by Benny Halevy

[permalink] [raw]
Subject: [PATCH 7/7] SQUASHME: pnfs-obj: convert APIs pnfs-post-submit

Signed-off-by: Benny Halevy <[email protected]>
---
fs/nfs/objlayout/objio_osd.c | 4 ++--
fs/nfs/objlayout/objlayout.c | 36 ++++++++++++++++--------------------
fs/nfs/objlayout/panfs_shim.c | 2 +-
3 files changed, 19 insertions(+), 23 deletions(-)

diff --git a/fs/nfs/objlayout/objio_osd.c b/fs/nfs/objlayout/objio_osd.c
index b23f845..642d6fa 100644
--- a/fs/nfs/objlayout/objio_osd.c
+++ b/fs/nfs/objlayout/objio_osd.c
@@ -183,7 +183,7 @@ static struct osd_dev *_device_lookup(struct pnfs_layout_type *pnfslay,
struct pnfs_deviceid *d_id;
struct osd_dev *od;
struct osd_dev_info odi;
- struct objio_mount_type *omt = PNFS_MOUNTID(pnfslay)->mountid;
+ struct objio_mount_type *omt = PNFS_NFS_SERVER(pnfslay)->pnfs_ld_data;
int err;

d_id = &layout->olo_comps[comp].oc_object_id.oid_device_id;
@@ -1015,7 +1015,7 @@ objlayout_get_stripesize(struct pnfs_layout_type *pnfslay)
* Get the max [rw]size
*/
static ssize_t
-objlayout_get_blocksize(struct pnfs_mount_type *mountid)
+objlayout_get_blocksize(void)
{
ssize_t sz = BIO_MAX_PAGES_KMALLOC * PAGE_SIZE;

diff --git a/fs/nfs/objlayout/objlayout.c b/fs/nfs/objlayout/objlayout.c
index 3d40fad..880d987 100644
--- a/fs/nfs/objlayout/objlayout.c
+++ b/fs/nfs/objlayout/objlayout.c
@@ -56,7 +56,7 @@ struct pnfs_client_operations *pnfs_client_ops;
* Create a objlayout layout structure for the given inode and return it.
*/
static void *
-objlayout_alloc_layout(struct pnfs_mount_type *mountid, struct inode *inode)
+objlayout_alloc_layout(struct inode *inode)
{
struct objlayout *objlay;

@@ -706,7 +706,7 @@ int objlayout_get_deviceinfo(struct pnfs_layout_type *pnfslay,
pd.mincount = 0;

sb = PNFS_INODE(pnfslay)->i_sb;
- err = pnfs_client_ops->nfs_getdeviceinfo(sb, &pd);
+ err = pnfs_client_ops->nfs_getdeviceinfo(PNFS_NFS_SERVER(pnfslay), &pd);
dprintk("%s nfs_getdeviceinfo returned %d\n", __func__, err);
if (err)
goto err_out;
@@ -744,36 +744,32 @@ void objlayout_put_deviceinfo(struct pnfs_osd_deviceaddr *deviceaddr)
* Return the pnfs_mount_type structure so the
* pNFS_client can refer to the mount point later on.
*/
-static struct pnfs_mount_type *
-objlayout_initialize_mountpoint(struct super_block *sb, struct nfs_fh *fh)
+static int
+objlayout_initialize_mountpoint(struct nfs_server *server,
+ const struct nfs_fh *mntfh)
{
- struct pnfs_mount_type *mt;
-
- mt = kzalloc(sizeof(*mt), GFP_KERNEL);
- if (!mt)
- return NULL;
+ void *data;

- mt->mountid = objio_init_mt();
- if (IS_ERR(mt->mountid)) {
+ data = objio_init_mt();
+ if (IS_ERR(data)) {
printk(KERN_INFO "%s: objlayout lib not ready err=%ld\n",
- __func__, PTR_ERR(mt->mountid));
- kfree(mt);
- return NULL;
+ __func__, PTR_ERR(data));
+ return PTR_ERR(data);
}
+ server->pnfs_ld_data = data;

- dprintk("%s: Return %p\n", __func__, mt);
- return mt;
+ dprintk("%s: Return data=%p\n", __func__, data);
+ return 0;
}

/*
* Uninitialize a mountpoint
*/
static int
-objlayout_uninitialize_mountpoint(struct pnfs_mount_type *mt)
+objlayout_uninitialize_mountpoint(struct nfs_server *server)
{
- dprintk("%s: Begin %p\n", __func__, mt);
- objio_fini_mt(mt->mountid);
- kfree(mt);
+ dprintk("%s: Begin %p\n", __func__, server->pnfs_ld_data);
+ objio_fini_mt(server->pnfs_ld_data);
return 0;
}

diff --git a/fs/nfs/objlayout/panfs_shim.c b/fs/nfs/objlayout/panfs_shim.c
index 414831e..6033d2b 100644
--- a/fs/nfs/objlayout/panfs_shim.c
+++ b/fs/nfs/objlayout/panfs_shim.c
@@ -654,7 +654,7 @@ panlayout_get_stripesize(struct pnfs_layout_type *pnfslay)
* Get the max [rw]size
*/
static ssize_t
-panlayout_get_blocksize(struct pnfs_mount_type *mountid)
+panlayout_get_blocksize(void)
{
ssize_t sz = (PANLAYOUT_MAX_STRIPE_WIDTH-1) *
PANLAYOUT_DEF_STRIPE_UNIT *
--
1.6.5.1


2010-05-06 19:33:06

by Andy Adamson

[permalink] [raw]
Subject: Re: [PATCH 0/7] pnfs-submit api touch ups

Looks good to me

--->Andy

On Thu, May 6, 2010 at 3:20 PM, Benny Halevy <[email protected]> wrote:
> After rebasing everything on top of Andy's and Fred's patchsets I propose the following:
>
> [PATCH 1/2] SQUASHME: pnfs-submit: have initialize_mountpoint return status
> [PATCH 2/2] SQUASHME: pnfs-submit: pass struct nfs_server * to getdeviceinfo
> [PATCH 3/5] pnfs-post-submit: pass struct nfs_server * to getdevicelist
> [PATCH 4/5] pnfs-post-submit: pass mntfh down the init_pnfs path
> [PATCH 5/5] FIXME: pnfs-post-submit: per mount layout driver private data
> [PATCH 6/6] SQUASHME: pnfs-block: convert APIs pnfs-post-submit
> [PATCH 7/7] SQUASHME: pnfs-obj: convert APIs pnfs-post-submit

2010-05-06 20:24:12

by Benny Halevy

[permalink] [raw]
Subject: Re: [PATCH 0/7] pnfs-submit api touch ups

On 2010-05-06 22:33, William A. (Andy) Adamson wrote:
> Looks good to me

Great. I've released a 2.6.34-rc5 based version
and I'm currently rebasing and retesting over 2.6.34-rc6 (also w/ Zhang's fix),
and will hopefully release tomorrow.

Benny

>
> --->Andy
>
> On Thu, May 6, 2010 at 3:20 PM, Benny Halevy <[email protected]> wrote:
>> After rebasing everything on top of Andy's and Fred's patchsets I propose the following:
>>
>> [PATCH 1/2] SQUASHME: pnfs-submit: have initialize_mountpoint return status
>> [PATCH 2/2] SQUASHME: pnfs-submit: pass struct nfs_server * to getdeviceinfo
>> [PATCH 3/5] pnfs-post-submit: pass struct nfs_server * to getdevicelist
>> [PATCH 4/5] pnfs-post-submit: pass mntfh down the init_pnfs path
>> [PATCH 5/5] FIXME: pnfs-post-submit: per mount layout driver private data
>> [PATCH 6/6] SQUASHME: pnfs-block: convert APIs pnfs-post-submit
>> [PATCH 7/7] SQUASHME: pnfs-obj: convert APIs pnfs-post-submit