Hi
I'm trying to set up a pNFS experiment environment.
This is what I have:
VM-0 (DS): running an iSCSI target
VM-1 (MDS): an initiator; mounts an XFS filesystem on the iSCSI device and exports it over NFS with the pnfs option
VM-2 (client): an initiator, but does not mount the filesystem directly; runs blkmapd and mounts VM-1's shared directory over NFS

It seems to work well, according to mountstats:
LAYOUTGET: 14 14 0 3472 2744 1 1381 1384
GETDEVICEINFO: 1 1 0 196 148 0 5 5
LAYOUTCOMMIT: 8 8 0 2352 1368 0 1256 1257
The kernel version I use is 4.18.19.
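For reference, the setup above can be sketched roughly as follows. Everything here (hostnames vm0/vm1, device paths, the IQN, and export paths) is an assumption for illustration, and target ACL/portal configuration is omitted:

```shell
# VM-0 (DS): export a local block device as an iSCSI target
# (device path and IQN are examples)
targetcli /backstores/block create name=lun0 dev=/dev/sdb
targetcli /iscsi create iqn.2019-06.example:ds0
targetcli /iscsi/iqn.2019-06.example:ds0/tpg1/luns create /backstores/block/lun0

# VM-1 (MDS): log in to the target, make an XFS filesystem, export with pnfs
iscsiadm -m discovery -t st -p vm0
iscsiadm -m node -l
mkfs.xfs /dev/sdc                 # the iSCSI LUN as seen on the MDS
mkdir -p /export/pnfs
mount /dev/sdc /export/pnfs
exportfs -o rw,sync,pnfs '*:/export/pnfs'

# VM-2 (client): log in to the same target so the LUN is visible,
# but do NOT mount the filesystem directly; mount the NFS export with v4.1
iscsiadm -m discovery -t st -p vm0
iscsiadm -m node -l
mkdir -p /mnt/pnfs
mount -t nfs -o vers=4.1 vm1:/export/pnfs /mnt/pnfs
```
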
Would anyone please help clarify the following questions?
1. Can I involve multiple DSs here?
2. Is this stable enough to use in production? How about an earlier version, for example 4.14?
Many thanks in advance
Jianchao
Hi Jianchao,
On 12 Jun 2019, at 3:55, Jianchao Wang wrote:
> 1. Can I involve multiple DSs here?
Yep, you can add a new iSCSI DS with another filesystem and keep the same
MDS. The pNFS SCSI layout has support for multi-device layouts, but I
don't think anyone has put them through their paces.

The sweet spot for pNFS SCSI is large-scale FC, where the fabric allows
nodes different paths through different controllers. I expect the
do-it-yourself setup with an iSCSI target on Linux to have somewhat more
limited performance benefits.
> 2. Is this stable enough to use in production? How about an earlier version, for example 4.14?
Test it! It would be great to have more users.
It would also be great to hear about your workload and if this shows any
improvements.
Last note: with SCSI layouts, there's no need to run blkmapd. The
kernel should have all the info it needs to find the correct SCSI devices.
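As a quick sanity check that layouts are actually in use, the LAYOUTGET/LAYOUTCOMMIT counters shown earlier can be read on the client from mountstats after doing some I/O on the pNFS mount:

```shell
# Nonzero LAYOUTGET/LAYOUTCOMMIT counts mean the client is obtaining and
# committing pNFS layouts rather than falling back to plain NFS I/O.
grep -E 'LAYOUTGET|LAYOUTCOMMIT|GETDEVICEINFO' /proc/self/mountstats
```
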
Ben
Every so often I hunt for documentation on how to set up pNFS and can
never find anything. Can someone point me to something that I can use
to test this myself?
Hi Ben
Thanks so much for your kind reply.
On 2019/6/12 20:07, Benjamin Coddington wrote:
> It would also be great to hear about your workload and if this shows any
> improvements.
Our workload includes large video files and massive numbers of small picture files from multiple clients.
I will try to set up an environment on real hardware and see what happens.
Regards
Jianchao
On 13 Jun 2019, at 11:30, Goetz, Patrick G wrote:
> Every so often I hunt for documentation on how to set up pNFS and can
> never find anything. Can someone point me to something that I can use
> to test this myself?
The file Documentation/filesystems/nfs/pnfs-scsi-server.txt in the
kernel source tree is probably the best source of current documentation,
if very concise:
pNFS SCSI layout server user guide
==================================

This document describes support for pNFS SCSI layouts in the Linux NFS
server. With pNFS SCSI layouts, the NFS server acts as Metadata Server
(MDS) for pNFS, which in addition to handling all the metadata access to
the NFS export, also hands out layouts to the clients so that they can
directly access the underlying SCSI LUNs that are shared with the client.

To use pNFS SCSI layouts with the Linux NFS server, the exported file
system needs to support pNFS SCSI layouts (currently just XFS), and the
file system must sit on a SCSI LUN that is accessible to the clients in
addition to the MDS. As of now the file system needs to sit directly on
the exported LUN; striping or concatenation of LUNs on the MDS and
clients is not supported yet.

On a server built with CONFIG_NFSD_SCSI, the pNFS SCSI volume support is
automatically enabled if the file system is exported using the "pnfs"
option and the underlying SCSI device supports persistent reservations.
On the client, make sure the kernel has the CONFIG_PNFS_BLOCK option
enabled and the file system is mounted using the NFSv4.1 protocol
version (mount -o vers=4.1).
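Concretely, the export and mount steps that document describes come down to something like this (the export path and hostname are examples, not anything prescribed by the document):

```shell
# MDS, in /etc/exports: the "pnfs" export option enables layout hand-out
#   /export/pnfs  *(rw,sync,no_subtree_check,pnfs)
exportfs -ra

# Client: NFSv4.1 is required for pNFS
mount -t nfs -o vers=4.1 mds.example.com:/export/pnfs /mnt/pnfs
```
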
Should we have more than this?
Ben
On 6/14/19 5:06 AM, Benjamin Coddington wrote:
> Should we have more than this?
I can't tell if you're being facetious, which is a bad sign. <:)
Yes, most Linux admins are probably not going to install the kernel
source tree looking for documentation. I personally find that
step-by-step howtos (even if they don't match my exact use case) are the
best way to get an overview of how to use a tool. Of course it's free
open source software, so there's no incentive to write documentation,
but I've been doing this for quite some time and (post the sendmail era)
there is a pretty clear correlation between the success of an open
source project and the quality of the documentation it provides.
Django (for example) is a pointless web framework, in my opinion, but
extremely popular because they took the time to write clear documentation.

Anyway, thanks; this at least gives me a starting point for experimentation.