_______________________________________________
NFS maillist - [email protected]
https://lists.sourceforge.net/lists/listinfo/nfs
Chuck Lever wrote:
> Peter Staubach wrote:
>> Frank van Maarseveen wrote:
>>> On Thu, Aug 23, 2007 at 04:12:30PM -0400, Peter Staubach wrote:
>>>
>>>> I would guess that not so many people are using the "bg" option,
>>>> period. Many of Linux's customers are ex-Sun customers and they
>>>> were educated to use autofs and to move away from and stay away
>>>> from static mounts via fstab or vfstab.
>>>>
>>>> The "bg" option was a hack added to speed up system booting.
>>>>
>>>
>>> No, it is indispensable to recover properly from a power outage:
>>> servers tend to boot slower than clients. Also, it is not unusual to
>>> have some minor network/server problems after an outage causing the
>>> mount to fail.
>>>
>>> Without the bg option a temporary power outage may render all client
>>> systems unusable.
>>
>> And a better solution to this problem is still to use autofs.
>>
>> That said, what use are the clients _until_ the servers are up?
>> The applications on them can't run correctly because the file
>> systems that they depend upon may or may not be there yet. With
>> autofs, you would have a chance of getting the synchronization
>> right.
>>
>> You also get all sorts of benefits such as decreased resource
>> usage (by not having inactive file systems mounted), reduced
>> hangs (by not having inactive file systems from servers which
>> go down still mounted), in addition to the situation described
>> above and other benefits as well.
>>
>> I do recognize that we can't get rid of the bg option, but I
>> would request that people using it consider different alternatives
>> to solving their problems.
>
> For the record, one downside to using automounter is the mount storm
> that is caused when a distributed application starts up on multiple
> clients requiring many NFS mount points on each client. This is one
> reason some sites choose not to use automounter. "bg"'s retry
> behavior, though a kludge, is somewhat more friendly.
>
If the application on each client is going to need many mount
points, then how does "bg" do anything but increase the number
of concurrent mount requests coming from each client, thus
increasing the load?
Autofs supporting dynamic mounting of individual file systems
within a hierarchy would help to reduce the overhead on the
network and server as much as seems possible to me...
I suspect that this is on Ian's and Jeff's lists... :-)
> From my experience, generally mountd (on most any server
> implementation) has been a scalability problem in these scenarios. It
> can't handle more than a few requests per second.
Perhaps we need to look at multithreading mountd, a la Solaris?
Thanx...
ps
-------------------------------------------------------------------------
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems? Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now >> http://get.splunk.com/
On Thu, Aug 30, 2007 at 07:53:32AM -0400, Peter Staubach wrote:
> Frank van Maarseveen wrote:
> >On Thu, Aug 23, 2007 at 04:12:30PM -0400, Peter Staubach wrote:
> >
> >>I would guess that not so many people are using the "bg" option,
> >>period. Many of Linux's customers are ex-Sun customers and they
> >>were educated to use autofs and to move away from and stay away
> >>from static mounts via fstab or vfstab.
> >>
> >>The "bg" option was a hack added to speed up system booting.
> >>
> >
> >No, it is indispensable to recover properly from a power outage:
> >servers tend to boot slower than clients. Also, it is not unusual to
> >have some minor network/server problems after an outage causing the
> >mount to fail.
> >
> >Without the bg option a temporary power outage may render all client
> >systems unusable.
>
> And a better solution to this problem is still to use autofs.
Sometimes, but not always.
>
> That said, what use are the clients _until_ the servers are up?
The point is, you would have to re-issue mount -a whenever a server
starts to honour mount requests again: manual intervention on all
client systems is not really practical with many clients.
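A recovery loop on each client would avoid that manual step. A rough
sketch (the function name and delay values are invented here, not taken
from any real tool):

```shell
#!/bin/sh
# Retry a command with capped exponential backoff until it succeeds.
# Illustrative only: an init script could use this to keep re-issuing
# "mount -a -t nfs" after a power outage until the servers come back.
retry_with_backoff() {
    delay=5
    until "$@"; do
        sleep "$delay"
        delay=$((delay * 2))            # back off exponentially
        [ "$delay" -gt 300 ] && delay=300  # cap at 5 minutes
    done
}

# e.g., from an init script: retry_with_backoff mount -a -t nfs
```

This approximates per-client what "bg" gives per-mount, but for the
whole "mount -a" pass at once.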
> You also get all sorts of benefits such as decreased resource
No, actually the automounter _costs_ resources depending on the situation:
the automatic umount flushes all kinds of caches on the client, requiring
network bandwidth to repopulate them when the file system is mounted again.
There is no perfect solution. autofs and statically configured mounts
have both advantages and disadvantages.
--
Frank
On Thu, Aug 30, 2007 at 12:07:32PM -0400, Peter Staubach wrote:
> Chuck Lever wrote:
> > From my experience, generally mountd (on most any server
> > implementation) has been a scalability problem in these scenarios. It
> > can't handle more than a few requests per second.
>
> Perhaps we need to look at multithreading mountd? Ala Solaris?
Does nfs-utils commit 11d34d1 (below) do the job?
--b.
commit 11d34d11153df198103a57291937ea9ff8b7356e
Author: Greg Banks <[email protected]>
Date: Wed Jun 14 22:48:10 2006 +1000
multiple threads for mountd
How about the attached patch against nfs-utils tot? It
adds a -t option to set the number of forked workers.
Default is 1 thread, i.e. the old behaviour.
I've verified that showmount -e, the Ogata mount client,
and a real mount from Linux and IRIX boxes work with and
without the new option.
I've verified that you can manually kill any of the workers
without the portmap registration going away, that killing
all the workers causes the manager process to wake up and
unregister, and killing the manager process causes the
workers to be killed and portmap unregistered.
I've verified that all the workers have file descriptors
for the udp socket and the tcp rendezvous socket, that
connections are balanced across all the workers if service
times are sufficiently long, and that performance is
improved by that parallelism, at least for small numbers
of threads. For example, with 60 parallel MOUNT calls
and a testing patch to make DNS lookups take 100 milliseconds,
the time to perform all mounts (averaged over 5 runs) is:
  num      elapsed
threads   time (sec)
-------   ----------
   1        13.125
   2         6.859
   3         4.836
   4         3.841
   5         3.303
   6         3.100
   7         3.078
   8         3.018
Greg.
--
Greg Banks, R&D Software Engineer, SGI Australian Software Group.
I don't speak for SGI.
diff --git a/support/nfs/svc_socket.c b/support/nfs/svc_socket.c
index a3cb7ce..888c915 100644
--- a/support/nfs/svc_socket.c
+++ b/support/nfs/svc_socket.c
@@ -22,6 +22,7 @@
#include <netdb.h>
#include <rpc/rpc.h>
#include <sys/socket.h>
+#include <sys/fcntl.h>
#include <errno.h>
#ifdef _LIBC
@@ -112,6 +113,26 @@ svc_socket (u_long number, int type, int protocol, int reuse)
}
}
+ if (sock >= 0 && protocol == IPPROTO_TCP)
+ {
+ /* Make the TCP rendezvous socket non-block to avoid
+ * problems with blocking in accept() after a spurious
+ * wakeup from the kernel */
+ int flags;
+ if ((flags = fcntl(sock, F_GETFL)) < 0)
+ {
+ perror (_("svc_socket: can't get socket flags"));
+ (void) __close (sock);
+ sock = -1;
+ }
+ else if (fcntl(sock, F_SETFL, flags|O_NONBLOCK) < 0)
+ {
+ perror (_("svc_socket: can't set socket flags"));
+ (void) __close (sock);
+ sock = -1;
+ }
+ }
+
return sock;
}
diff --git a/utils/mountd/mountd.c b/utils/mountd/mountd.c
index 43606dd..e402bf8 100644
--- a/utils/mountd/mountd.c
+++ b/utils/mountd/mountd.c
@@ -21,6 +21,7 @@
#include <errno.h>
#include <fcntl.h>
#include <sys/resource.h>
+#include <sys/wait.h>
#include "xmalloc.h"
#include "misc.h"
#include "mountd.h"
@@ -43,6 +44,13 @@ int new_cache = 0;
* send mount or unmount requests -- the callout is not needed for 2.6 kernel */
char *ha_callout_prog = NULL;
+/* Number of mountd threads to start. Default is 1 and
+ * that's probably enough unless you need hundreds of
+ * clients to be able to mount at once. */
+static int num_threads = 1;
+/* Arbitrary limit on number of threads */
+#define MAX_THREADS 64
+
static struct option longopts[] =
{
{ "foreground", 0, 0, 'F' },
@@ -57,24 +65,106 @@ static struct option longopts[] =
{ "no-tcp", 0, 0, 'n' },
{ "ha-callout", 1, 0, 'H' },
{ "state-directory-path", 1, 0, 's' },
+ { "num-threads", 1, 0, 't' },
{ NULL, 0, 0, 0 }
};
static int nfs_version = -1;
+static void
+unregister_services (void)
+{
+ if (nfs_version & 0x1)
+ pmap_unset (MOUNTPROG, MOUNTVERS);
+ if (nfs_version & (0x1 << 1))
+ pmap_unset (MOUNTPROG, MOUNTVERS_POSIX);
+ if (nfs_version & (0x1 << 2))
+ pmap_unset (MOUNTPROG, MOUNTVERS_NFSV3);
+}
+
+/* Wait for all worker child processes to exit and reap them */
+static void
+wait_for_workers (void)
+{
+ int status;
+ pid_t pid;
+
+ for (;;) {
+
+ pid = waitpid(0, &status, 0);
+
+ if (pid < 0) {
+ if (errno == ECHILD)
+ return; /* no more children */
+ xlog(L_FATAL, "mountd: can't wait: %s\n",
+ strerror(errno));
+ }
+
+ /* Note: because we SIG_IGN'd SIGCHLD earlier, this
+ * does not happen on 2.6 kernels, and waitpid() blocks
+ * until all the children are dead then returns with
+ * -ECHILD. But, we don't need to do anything on the
+ * death of individual workers, so we don't care. */
+ xlog(L_NOTICE, "mountd: reaped child %d, status %d\n",
+ (int)pid, status);
+ }
+}
+
+/* Fork num_threads worker children and wait for them */
+static void
+fork_workers(void)
+{
+ int i;
+ pid_t pid;
+
+ xlog(L_NOTICE, "mountd: starting %d threads\n", num_threads);
+
+ for (i = 0 ; i < num_threads ; i++) {
+ pid = fork();
+ if (pid < 0) {
+ xlog(L_FATAL, "mountd: cannot fork: %s\n",
+ strerror(errno));
+ }
+ if (pid == 0) {
+ /* worker child */
+
+ /* Re-enable the default action on SIGTERM et al
+ * so that workers die naturally when sent them.
+ * Only the parent unregisters with pmap and
+ * hence needs to do special SIGTERM handling. */
+ struct sigaction sa;
+ sa.sa_handler = SIG_DFL;
+ sa.sa_flags = 0;
+ sigemptyset(&sa.sa_mask);
+ sigaction(SIGHUP, &sa, NULL);
+ sigaction(SIGINT, &sa, NULL);
+ sigaction(SIGTERM, &sa, NULL);
+
+ /* fall into my_svc_run in caller */
+ return;
+ }
+ }
+
+ /* in parent */
+ wait_for_workers();
+ unregister_services();
+ xlog(L_NOTICE, "mountd: no more workers, exiting\n");
+ exit(0);
+}
+
/*
* Signal handler.
*/
static void
killer (int sig)
{
- if (nfs_version & 0x1)
- pmap_unset (MOUNTPROG, MOUNTVERS);
- if (nfs_version & (0x1 << 1))
- pmap_unset (MOUNTPROG, MOUNTVERS_POSIX);
- if (nfs_version & (0x1 << 2))
- pmap_unset (MOUNTPROG, MOUNTVERS_NFSV3);
- xlog (L_FATAL, "Caught signal %d, un-registering and exiting.", sig);
+ unregister_services();
+ if (num_threads > 1) {
+ /* play Kronos and eat our children */
+ kill(0, SIGTERM);
+ wait_for_workers();
+ }
+ xlog (L_FATAL, "Caught signal %d, un-registering and exiting.", sig);
}
static void
@@ -468,7 +558,7 @@ main(int argc, char **argv)
/* Parse the command line options and arguments. */
opterr = 0;
- while ((c = getopt_long(argc, argv, "o:n:Fd:f:p:P:hH:N:V:vs:", longopts, NULL)) != EOF)
+ while ((c = getopt_long(argc, argv, "o:n:Fd:f:p:P:hH:N:V:vs:t:", longopts, NULL)) != EOF)
switch (c) {
case 'o':
descriptors = atoi(optarg);
@@ -515,6 +605,9 @@ main(int argc, char **argv)
exit(1);
}
break;
+ case 't':
+ num_threads = atoi (optarg);
+ break;
case 'V':
nfs_version |= 1 << (atoi (optarg) - 1);
break;
@@ -615,6 +708,17 @@ main(int argc, char **argv)
setsid();
}
+ /* silently bounds check num_threads */
+ if (foreground)
+ num_threads = 1;
+ else if (num_threads < 1)
+ num_threads = 1;
+ else if (num_threads > MAX_THREADS)
+ num_threads = MAX_THREADS;
+
+ if (num_threads > 1)
+ fork_workers();
+
my_svc_run();
xlog(L_ERROR, "Ack! Gack! svc_run returned!\n");
@@ -629,6 +733,7 @@ usage(const char *prog, int n)
" [-o num|--descriptors num] [-f exports-file|--exports-file=file]\n"
" [-p|--port port] [-V version|--nfs-version version]\n"
" [-N version|--no-nfs-version version] [-n|--no-tcp]\n"
-" [-H ha-callout-prog] [-s|--state-directory-path path]\n", prog);
+" [-H ha-callout-prog] [-s|--state-directory-path path]\n"
+" [-t num|--num-threads=num]\n", prog);
exit(n);
}
diff --git a/utils/mountd/mountd.man b/utils/mountd/mountd.man
index bac4421..70166c1 100644
--- a/utils/mountd/mountd.man
+++ b/utils/mountd/mountd.man
@@ -125,6 +125,13 @@ If this option is not specified the default of
.BR /var/lib/nfs
is used.
.TP
+.BR "\-t N" " or " "\-\-num\-threads=N"
+This option specifies the number of worker threads that rpc.mountd
+spawns. The default is 1 thread, which is probably enough. More
+threads are usually only needed for NFS servers which need to handle
+mount storms of hundreds of NFS mounts in a few seconds, or when
+your DNS server is slow or unreliable.
+.TP
.B \-V " or " \-\-nfs-version
This option can be used to request that
.B rpc.mountd
At 12:18 PM 8/30/2007, Chuck Lever wrote:
>Peter Staubach wrote:
>> If the application on each client is going to need many mount
>> points, then how does "bg" do anything but increase the number
>> of concurrent mount requests coming from each client, thus
>> increasing the load?
>
>"bg" has an exponential backoff, so the load increase isn't terribly
>bothersome. It's the "bg" recovery mechanism that's useful here for
>getting all the mount requests to be successful in a nondeterministic
>environment.
"bg" also tries synchronously before going into the background (and
backing off). So, by itself "bg" does not generate a storm. It only
slightly raises the mount traffic after a failure, by giving way to the
next filesystem in line - which may well be on another server.
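For reference, a static fstab entry using "bg" looks something like
this (server name, export path, and the rest of the option list are
illustrative only):

```
# /etc/fstab
nfsserver:/export/home  /home  nfs  bg,hard,intr  0  0
```

At boot, the first mount attempt runs in the foreground; only on
failure does a child continue retrying in the background while boot
proceeds to the next fstab entry.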
Done right, "bg" is a reasonable approach, IMO.
Tom.
Talpey, Thomas wrote:
> At 12:18 PM 8/30/2007, Chuck Lever wrote:
>
>> Peter Staubach wrote:
>>
>>> If the application on each client is going to need many mount
>>> points, then how does "bg" do anything but increase the number
>>> of concurrent mount requests coming from each client, thus
>>> increasing the load?
>>>
>> "bg" has an exponential backoff, so the load increase isn't terribly
>> bothersome. It's the "bg" recovery mechanism that's useful here for
>> getting all the mount requests to be successful in a nondeterministic
>> environment.
>>
>
> "bg" also tries synchronously before going into the background (and
> backing off). So, by itself "bg" does not generate a storm. It only
> slightly raises the mount traffic after a failure, by giving way to the
> next filesystem in line - which may well be on another server.
>
> Done right, "bg" is a reasonable approach, IMO.
I will belabor this just a little more and then move on.
Yes, if everything is working correctly, then the "bg" option
adds insignificant overhead, and that only in the argument
processing for the mount command.
However, if it backgrounds, there can be multiple mounts
running at the same time. This could add up to quite a bit
for very many file systems.
And why are they getting mounted? Probably not because they
are going to be needed immediately, but because they are being
statically mounted, and the only way to do that is via fstab
at boot time.
All of these file systems are sitting in the namespace,
waiting to cause problems for applications looking at the
namespace if the slightest problem occurs out on the network
or at one of the servers from which they were mounted. They
add overhead to applications which do look at the namespace
in the meantime, because they add to the number of file
systems that these applications need to keep track of.
I wish that I had a nickel for each time that I heard a
customer complain because some dead server caused his
application or the df command to hang and he wasn't
interested in that server. I'd be retired and typing this
from some fun location. :-)
Oh well, "bg" is a solution, but I think that there are better ones.
Thanx...
ps
On Thu, Aug 23, 2007 at 04:12:30PM -0400, Peter Staubach wrote:
>
> I would guess that not so many people are using the "bg" option,
> period. Many of Linux's customers are ex-Sun customers and they
> were educated to use autofs and to move away from and stay away
> from static mounts via fstab or vfstab.
>
> The "bg" option was a hack added to speed up system booting.
No, it is indispensable to recover properly from a power outage:
servers tend to boot slower than clients. Also, it is not unusual to
have some minor network/server problems after an outage causing the
mount to fail.
Without the bg option a temporary power outage may render all client
systems unusable.
--
Frank
Frank van Maarseveen wrote:
> On Thu, Aug 23, 2007 at 04:12:30PM -0400, Peter Staubach wrote:
>
>> I would guess that not so many people are using the "bg" option,
>> period. Many of Linux's customers are ex-Sun customers and they
>> were educated to use autofs and to move away from and stay away
>> from static mounts via fstab or vfstab.
>>
>> The "bg" option was a hack added to speed up system booting.
>>
>
> No, it is indispensable to recover properly from a power outage:
> servers tend to boot slower than clients. Also, it is not unusual to
> have some minor network/server problems after an outage causing the
> mount to fail.
>
> Without the bg option a temporary power outage may render all client
> systems unusable.
And a better solution to this problem is still to use autofs.
That said, what use are the clients _until_ the servers are up?
The applications on them can't run correctly because the file
systems that they depend upon may or may not be there yet. With
autofs, you would have a chance of getting the synchronization
right.
You also get all sorts of benefits such as decreased resource
usage (by not having inactive file systems mounted), reduced
hangs (by not having inactive file systems from servers which
go down still mounted), in addition to the situation described
above and other benefits as well.
I do recognize that we can't get rid of the bg option, but I
would request that people using it consider different alternatives
to solving their problems.
ps
Chuck Lever wrote:
> Hi all-
>
> The recent addition of the chk_mountpoint() function in
> utils/mount/mount.c in nfs-utils commit 3b55934b has broken a
> particular behavior of background mounts.
>
> nfs(5) states that, if the "bg" option is specified, "A missing mount
> point is treated as a timeout, to allow for nested NFS mounts."
>
> If I try mounting an NFS share onto a non-existent directory while
> using the "bg" option, I now get an immediate failure:
>
> mount.nfs: mount point /mnt/nothere does not exist
>
> instead of the mount backgrounding itself to wait for /mnt/nothere to
> show up. This is because chk_mountpoint() is causing the mount
> request to fail immediately.
>
> Is the documented bg retry behavior still desirable?
Isn't this what autofs is for? To be able to handle hierarchies
of mounts?
Thanx...
ps
Chuck Lever wrote:
> Peter Staubach wrote:
>> Chuck Lever wrote:
>>> Hi all-
>>>
>>> The recent addition of the chk_mountpoint() function in
>>> utils/mount/mount.c in nfs-utils commit 3b55934b has broken a
>>> particular behavior of background mounts.
>>>
>>> nfs(5) states that, if the "bg" option is specified, "A missing
>>> mount point is treated as a timeout, to allow for nested NFS mounts."
>>>
>>> If I try mounting an NFS share onto a non-existent directory while
>>> using the "bg" option, I now get an immediate failure:
>>>
>>> mount.nfs: mount point /mnt/nothere does not exist
>>>
>>> instead of the mount backgrounding itself to wait for /mnt/nothere
>>> to show up. This is because chk_mountpoint() is causing the mount
>>> request to fail immediately.
>>>
>>> Is the documented bg retry behavior still desirable?
>>
>> Isn't this what autofs is for? To be able to handle hierarchies
>> of mounts?
>
> I think the purpose of this feature is to allow a sysadmin to specify
> a set of mount points in /etc/fstab, some possibly nested. "mount -a
> -tnfs" should work no matter what order the mounts in /etc/fstab are
> specified.
>
> After all, some servers may be unresponsive when the client boots --
> the mounting order is nondeterministic; it can't be depended on, in
> any event.
>
> We also don't know if the automounter itself depends on this feature.
Autofs depending upon this feature would be a large mistake.
IMHO, of course. :-)
I don't think that it does.
But your explanation makes sense, although we should be moving
people away from static mounts in fstab and towards dynamic
mounting via autofs. Ian and Jeff have made autofs much, much
better in recent times. Improving autofs further to make it
only mount file systems which are actually referenced would make
it even better.
How do we find out whether we need to continue supporting this
semantic or whether we can do away with it? Clearly, if it was
busted, then not many people were depending upon it because
there didn't seem to be any hue and cry about it not working.
Thanx...
ps
Chuck Lever wrote:
> Peter Staubach wrote:
>> But your explanation makes sense, although we should be moving
>> people away from static mounts in fstab and towards dynamic
>> mounting via autofs. Ian and Jeff have made autofs much, much
>> better in recent times. Improving autofs further to make it
>> only mount file systems which are actually referenced would make
>> it even better.
>
> I'm in great favor of autoconfiguration. Anything that will make NFS
> "just work" is goodness, in my book.
>
Is this an argument for or against autofs or these changes?
>> How do we find out whether we need to continue supporting this
>> semantic or whether we can do away with it? Clearly, if it was
>> busted, then not many people were depending upon it because
>> there didn't seem to be any hue and cry about it not working.
>
> Well, that change went into nfs-utils in May of 2007, only 3 months
> ago. Depending on when nfs-utils-1.1.0 got into Fedora 7, I don't
> think the change has had wide exposure quite yet.
>
> Considering there hasn't been much "hue and cry" about "bg" not
> working in Fedora, however, that may not be much of a standard by
> which to measure customer dissatisfaction.
I would guess that not so many people are using the "bg" option,
period. Many of Linux's customers are ex-Sun customers and they
were educated to use autofs and to move away from and stay away
from static mounts via fstab or vfstab.
The "bg" option was a hack added to speed up system booting.
A much better solution to the problem was autofs because it
delayed the mounting until the file system was actually needed.
The "bg" option can lead to applications not working correctly
because the file system may or may not be mounted when they
need it to be there, and there is no automatic synchronization
to block them until it is. Autofs supplies this synchronization,
once again making it a vastly superior solution.
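For concreteness, the autofs equivalent of a statically mounted home
directory might look like this (map file names, mount point, options,
and server are all illustrative):

```
# /etc/auto.master
/home    /etc/auto.home    --timeout=600

# /etc/auto.home  (wildcard map: each key is mounted on first reference)
*    -rw,hard,intr    nfsserver:/export/home/&
```

With the wildcard entry, nothing is mounted until a user's directory is
actually referenced, which is exactly the synchronization described
above; idle mounts then expire after the timeout.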
Thanx...
ps
Chuck Lever wrote:
> Peter Staubach wrote:
>> Chuck Lever wrote:
>>> Peter Staubach wrote:
>>>> But your explanation makes sense, although we should be moving
>>>> people away from static mounts in fstab and towards dynamic
>>>> mounting via autofs. Ian and Jeff have made autofs much, much
>>>> better in recent times. Improving autofs further to make it
>>>> only mount file systems which are actually referenced would make
>>>> it even better.
>>>
>>> I'm in great favor of autoconfiguration. Anything that will make
>>> NFS "just work" is goodness, in my book.
>>>
>>
>> Is this an argument for or against autofs or these changes?
>
> Making autofs a reliable and useful facility is a good thing. Kudos
> to Ian and Jeff for their effort.
>
>> The "bg" option was a hack added to speed up system booting.
>
> I don't disagree with that assessment.
>
>> A much better solution to the problem was autofs because it
>> delayed the mounting until the file system was actually needed.
>
> Whether or not it is a kludge, I don't think we have enough
> information about who is using it for what to blithely remove it
> without fanfare or documentation.
>
I would agree completely.
> What alarms me more, though, is that we don't have any unit tests that
> caught this change before it went into the git repo. This change, in
> itself, may not be terribly harmful. But think of a minor and
> unintended change that might go in without notice, and break a lot of
> environments.
>
Well, then we'd hear pretty quickly, I think...
But yes, a test would be a better way to do it. :-)
>> The "bg" option can lead to applications not working correctly
>> because the file system may or may not be mounted when they
>> need it to be there and there is no automatic synchronization
>> to block them until it is. Autofs supplies this synchronization,
>> thus once again, making it a vastly superior solution.
>
> Some people find such enforced "synchronization" to be painful and
> annoying. They would prefer a solution where the application is free
> to take its own recourse rather than hang indefinitely.
>
> I think it is valid to want to have either type of behavior.
This, I would agree with. I still think that autofs is the right place
to put the appropriate synchronization, whether it be to block the
application or to fail whatever system call attempted to access the
not-yet-mounted file system.
Thanx...
ps