nfs-utils originally had three different logging mechanisms in the
code. This patch set makes the xlog implementation the single logging
mechanism for nfs-utils. It also reworks xlog to remove the unsafe
behavior that existed and updates all the appropriate callers.
Finally, it removes unnecessary code from idmapd.
This series has been tested on my FC7 machine and produces the same
logging output as the unmodified version. I haven't tested whether the
code compiles on BSD or Mac OS X, but I haven't used anything that is
Linux-specific.
Signed-off-by: David P. Quigley <[email protected]>
Netlink uses a lot more resources than pipe_fs. Why do we need a socket
and the network layer to send a message to userland? At the end of the
day, copy_to_user is what is used, and pipe_fs gets there with a much
shorter code path.
There was some effort to generalize pipe_fs and move it out of the rpc
directory - it's really no big deal.
-->Andy
On 9/11/07, David P. Quigley <[email protected]> wrote:
>
> On Tue, 2007-09-11 at 13:16 -0400, J. Bruce Fields wrote:
> > On Tue, Sep 11, 2007 at 01:01:18PM -0400, David P. Quigley wrote:
> > > Another question about the code in general. Why is it that idmapd is
> > > using fcntl and user signals to indicate changes to the rpc pipe.
> Isn't
> > > there something else we can use like Inotify or a Netlink socket?
> >
> > When a new nfs client is created, a new directory containing several
> > files (including the upcall pipe) are created. The dnotify stuff is
> > there so we find out about those new files and directories. I don't
> > know much about netlink--how would it help us there?
>
> I'm not sure that it would be a big help. It seemes that it is a newer
> method for transporting data between user and kernel space. There is a
> Linux journal article from back in 2005 about them which can be found at
> http://www.linuxjournal.com/article/7356. It could remove a dependency
> on pipefs, however the only question would be which technology has been
> around longer. It would also seem the the method of passing the messages
> up to user space would be different. Instead of each client having a
> separate file you could have idmapd/doimapd listen on one netlink socket
> and messages that come in contain a clientid which the daemon would
> associate with the separate clients internally. Again I'm not sure if it
> is a big help, but rather a different way of thinking about it that I am
> considering.
>
> >
> > I assume Inotify would work, but I don't see any advantage over dnotify.
> > If we didn't leave in some compatability code then it would mean
> > dropping support for pre-inotify kernels, wouldn't it?--could be that
> > there were only a few of those and that nfsv4 wasn't really usable on
> > them anyway (I don't remember), but absent a real motivation to move
> > from dnotify to inotify I'd rather not.
>
> That is a good point. Since I'm working on something that will hopefully
> be in future kernels I think I can make the assumption of having the
> extra features.
>
> >
> > > I am assuming that this is just a result of the code being written
> > > before these things were in place. I was originally starting doimapd
> > > based on the idmapd source however if it is acceptable to move to one
> > > of these other techniques I can switched over to that instead.
> >
> > For new stuff I don't think there'd be a problem using something else.
> >
> > --b.
>
> _______________________________________________
> NFSv4 mailing list
> [email protected]
> http://linux-nfs.org/cgi-bin/mailman/listinfo/nfsv4
>
>
This patch reworks the xlog logging code to avoid rebuilding the message in a
fixed-size buffer. It also adds two new logging functions, xlog_warn and
xlog_err, which are replacements for idmap_warn and idmap_err. There used to
be two variants of these functions, the only difference being that one flavor
tacked the error string onto the end of the message. That responsibility has
been pushed to the caller of the function, since handling it internally
needlessly complicated the function and required us to rebuild the message
strings.
Signed-off-by: David P. Quigley <[email protected]>
---
support/include/xlog.h | 5 +++
support/nfs/xlog.c | 64 ++++++++++++++++++++++++++++++++---------------
2 files changed, 48 insertions(+), 21 deletions(-)
diff --git a/support/include/xlog.h b/support/include/xlog.h
index cf9bc91..fd1a3f4 100644
--- a/support/include/xlog.h
+++ b/support/include/xlog.h
@@ -7,6 +7,8 @@
#ifndef XLOG_H
#define XLOG_H
+#include <stdarg.h>
+
/* These are logged always. L_FATAL also does exit(1) */
#define L_FATAL 0x0100
#define L_ERROR 0x0200
@@ -40,5 +42,8 @@ void xlog_config(int fac, int on);
void xlog_sconfig(char *, int on);
int xlog_enabled(int fac);
void xlog(int fac, const char *fmt, ...);
+void xlog_warn(const char *fmt, ...);
+void xlog_err(const char *fmt, ...);
+void xlog_backend(int fac, const char *fmt, va_list args);
#endif /* XLOG_H */
diff --git a/support/nfs/xlog.c b/support/nfs/xlog.c
index 1bbfd19..26123c5 100644
--- a/support/nfs/xlog.c
+++ b/support/nfs/xlog.c
@@ -131,39 +131,28 @@ xlog_enabled(int fac)
/* Write something to the system logfile and/or stderr */
void
-xlog(int kind, const char *fmt, ...)
+xlog_backend(int kind, const char *fmt, va_list args)
{
- char buff[1024];
- va_list args;
- int n;
-
if (!(kind & (L_ALL)) && !(logging && (kind & logmask)))
return;
- va_start(args, fmt);
- vsnprintf(buff, sizeof (buff), fmt, args);
- va_end(args);
-
- if ((n = strlen(buff)) > 0 && buff[n-1] == '\n')
- buff[--n] = '\0';
-
if (log_syslog) {
switch (kind) {
case L_FATAL:
- syslog(LOG_ERR, "%s", buff);
+ vsyslog(LOG_ERR, fmt, args);
break;
case L_ERROR:
- syslog(LOG_ERR, "%s", buff);
+ vsyslog(LOG_ERR, fmt, args);
break;
case L_WARNING:
- syslog(LOG_WARNING, "%s", buff);
+ vsyslog(LOG_WARNING, fmt, args);
break;
case L_NOTICE:
- syslog(LOG_NOTICE, "%s", buff);
+ vsyslog(LOG_NOTICE, fmt, args);
break;
default:
if (!log_stderr)
- syslog(LOG_INFO, "%s", buff);
+ vsyslog(LOG_INFO, fmt, args);
break;
}
}
@@ -175,16 +164,49 @@ xlog(int kind, const char *fmt, ...)
time(&now);
tm = localtime(&now);
- fprintf(stderr, "%s[%d] %04d-%02d-%02d %02d:%02d:%02d %s\n",
+ fprintf(stderr, "%s[%d] %04d-%02d-%02d %02d:%02d:%02d ",
log_name, log_pid,
tm->tm_year+1900, tm->tm_mon + 1, tm->tm_mday,
- tm->tm_hour, tm->tm_min, tm->tm_sec,
- buff);
+ tm->tm_hour, tm->tm_min, tm->tm_sec);
#else
- fprintf(stderr, "%s: %s\n", log_name, buff);
+ fprintf(stderr, "%s: ", log_name);
#endif
+
+ vfprintf(stderr, fmt, args);
+ fprintf(stderr, "\n");
}
if (kind == L_FATAL)
exit(1);
}
+
+void
+xlog(int kind, const char* fmt, ...)
+{
+ va_list args;
+
+ va_start(args, fmt);
+ xlog_backend(kind, fmt, args);
+ va_end(args);
+}
+
+void
+xlog_warn(const char* fmt, ...)
+{
+ va_list args;
+
+ va_start(args, fmt);
+ xlog_backend(L_WARNING, fmt, args);
+ va_end(args);
+}
+
+
+void
+xlog_err(const char* fmt, ...)
+{
+ va_list args;
+
+ va_start(args, fmt);
+ xlog_backend(L_FATAL, fmt, args);
+ va_end(args);
+}
--
1.5.2.4
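To illustrate the calling convention that goes with the new xlog_warn()/xlog_err()
interface above - the caller, not the logging code, now appends strerror(errno) -
here is a small sketch. The wrapper function is made up for illustration; the xlog
calls themselves match what the idmapd patch later in this series does:

#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include "xlog.h"

/* Hypothetical helper, for illustration only.  The caller appends
 * strerror(errno) itself; the old idmapd_warn()/idmapd_err() helpers
 * used to tack the error string on internally. */
static int open_pipefs_dir(const char *pipefsdir)
{
	int fd;

	if ((fd = open(pipefsdir, O_RDONLY)) == -1)
		/* L_FATAL inside xlog_err() makes xlog_backend() exit(1) */
		xlog_err("open(%s): %s", pipefsdir, strerror(errno));

	xlog_warn("opened %s", pipefsdir);
	return fd;
}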
This patch removes all of the old idmap_* logging functions and replaces them
with the corresponding xlog functions. In addition, it reworks the gssd
logging wrappers to use the new xlog_backend. Finally, it makes the necessary
changes to the build files to get the project compiling again.
Signed-off-by: David P. Quigley <[email protected]>
---
utils/gssd/Makefile.am | 5 +-
utils/gssd/err_util.c | 43 ++-------------
utils/idmapd/idmapd.c | 142 +++++++++++++-----------------------------------
3 files changed, 47 insertions(+), 143 deletions(-)
diff --git a/utils/gssd/Makefile.am b/utils/gssd/Makefile.am
index 7c32597..763d039 100644
--- a/utils/gssd/Makefile.am
+++ b/utils/gssd/Makefile.am
@@ -38,8 +38,9 @@ gssd_SOURCES = \
krb5_util.h \
write_bytes.h
-gssd_LDADD = $(RPCSECGSS_LIBS) $(KRBLIBS)
-gssd_LDFLAGS = $(KRBLDFLAGS)
+gssd_LDADD = ../../support/nfs/libnfs.a \
+ $(RPCSECGSS_LIBS) $(KRBLIBS)
+gssd_LDFLAGS = $(KRBLDFLAGS)
gssd_CFLAGS = $(AM_CFLAGS) $(CFLAGS) \
$(RPCSECGSS_CFLAGS) $(KRBCFLAGS)
diff --git a/utils/gssd/err_util.c b/utils/gssd/err_util.c
index f331432..5644db6 100644
--- a/utils/gssd/err_util.c
+++ b/utils/gssd/err_util.c
@@ -30,64 +30,33 @@
#include <stdio.h>
#include <stdarg.h>
-#include <syslog.h>
#include <string.h>
-#include "err_util.h"
+#include "xlog.h"
static int verbosity = 0;
static int fg = 0;
-static char message_buf[500];
-
void initerr(char *progname, int set_verbosity, int set_fg)
{
verbosity = set_verbosity;
fg = set_fg;
if (!fg)
- openlog(progname, LOG_PID, LOG_DAEMON);
+ xlog_open(progname);
}
void printerr(int priority, char *format, ...)
{
va_list args;
- int ret;
- int buf_used, buf_available;
- char *buf;
/* Don't bother formatting a message we're never going to print! */
if (priority > verbosity)
return;
- buf_used = strlen(message_buf);
- /* subtract 4 to leave room for "...\n" if necessary */
- buf_available = sizeof(message_buf) - buf_used - 4;
- buf = message_buf + buf_used;
-
- /*
- * Aggregate lines: only print buffer when we get to the
- * end of a line or run out of space
- */
va_start(args, format);
- ret = vsnprintf(buf, buf_available, format, args);
+ if (fg)
+ vfprintf(stderr, format, args);
+ else
+ xlog_backend(L_ERROR, format, args);
va_end(args);
-
- if (ret < 0)
- goto printit;
- if (ret >= buf_available) {
- /* Indicate we're truncating */
- strcat(message_buf, "...\n");
- goto printit;
- }
- if (message_buf[strlen(message_buf) - 1] == '\n')
- goto printit;
- return;
-printit:
- if (fg) {
- fprintf(stderr, "%s", message_buf);
- } else {
- syslog(LOG_ERR, "%s", message_buf);
- }
- /* reset the buffer */
- memset(message_buf, 0, sizeof(message_buf));
}
diff --git a/utils/idmapd/idmapd.c b/utils/idmapd/idmapd.c
index a82cd0b..355c6e1 100644
--- a/utils/idmapd/idmapd.c
+++ b/utils/idmapd/idmapd.c
@@ -55,7 +55,6 @@
#include <stdlib.h>
#include <string.h>
#include <stdarg.h>
-#include <syslog.h>
#include <pwd.h>
#include <grp.h>
#include <limits.h>
@@ -66,6 +65,7 @@
#include "config.h"
#endif /* HAVE_CONFIG_H */
+#include "xlog.h"
#include "cfg.h"
#include "queue.h"
#include "nfslib.h"
@@ -187,71 +187,6 @@ flush_nfsd_idmap_cache(void)
return ret;
}
-static void
-msg_format(char *rtnbuff, int rtnbuffsize, int errval,
- const char *fmt, va_list args)
-{
- char buff[1024];
- int n;
-
- vsnprintf(buff, sizeof(buff), fmt, args);
-
- if ((n = strlen(buff)) > 0 && buff[n-1] == '\n')
- buff[--n] = '\0';
-
- snprintf(rtnbuff, rtnbuffsize, "%s: %s", buff, strerror(errval));
-}
-
-static void
-idmapd_warn(const char *fmt, ...)
-{
- int errval = errno; /* save this! */
- char buff[1024];
- va_list args;
-
- va_start(args, fmt);
- msg_format(buff, sizeof(buff), errval, fmt, args);
- va_end(args);
-
- syslog(LOG_WARNING, "%s", buff);
-}
-
-static void
-idmapd_warnx(const char *fmt, ...)
-{
- va_list args;
-
- va_start(args, fmt);
- vsyslog(LOG_WARNING, fmt, args);
- va_end(args);
-}
-
-static void
-idmapd_err(int eval, const char *fmt, ...)
-{
- int errval = errno; /* save this! */
- char buff[1024];
- va_list args;
-
- va_start(args, fmt);
- msg_format(buff, sizeof(buff), errval, fmt, args);
- va_end(args);
-
- syslog(LOG_ERR, "%s", buff);
- exit(eval);
-}
-
-static void
-idmapd_errx(int eval, const char *fmt, ...)
-{
- va_list args;
-
- va_start(args, fmt);
- vsyslog(LOG_ERR, fmt, args);
- va_end(args);
- exit(eval);
-}
-
int
main(int argc, char **argv)
{
@@ -276,7 +211,7 @@ main(int argc, char **argv)
progname++;
else
progname = argv[0];
- openlog(progname, LOG_PID, LOG_DAEMON);
+ xlog_open(progname);
#define GETOPTSTR "vfd:p:U:G:c:CS"
opterr=0; /* Turn off error messages */
@@ -347,7 +282,7 @@ main(int argc, char **argv)
nobodygid = gr->gr_gid;
#ifdef HAVE_NFS4_SET_DEBUG
- nfs4_set_debug(verbose, idmapd_warnx);
+ nfs4_set_debug(verbose, xlog_warn);
#endif
if (conf_path == NULL)
conf_path = _PATH_IDMAPDCONF;
@@ -360,15 +295,14 @@ main(int argc, char **argv)
event_init();
if (verbose > 0)
- idmapd_warnx("Expiration time is %d seconds.",
+ xlog_warn("Expiration time is %d seconds.",
cache_entry_expiration);
if (serverstart) {
nfsdret = nfsdopen();
if (nfsdret == 0) {
ret = flush_nfsd_idmap_cache();
if (ret)
- idmapd_errx(1,
- "main: Failed to flush nfsd idmap cache\n");
+ xlog_err("main: Failed to flush nfsd idmap cache\n: %s", strerror(errno));
}
}
@@ -383,7 +317,7 @@ main(int argc, char **argv)
char timeout_buf[12];
if ((timeout_fd = open(CLIENT_CACHE_TIMEOUT_FILE,
O_RDWR)) == -1) {
- idmapd_warnx("Unable to open '%s' to set "
+ xlog_warn("Unable to open '%s' to set "
"client cache expiration time "
"to %d seconds\n",
CLIENT_CACHE_TIMEOUT_FILE,
@@ -392,7 +326,7 @@ main(int argc, char **argv)
len = snprintf(timeout_buf, sizeof(timeout_buf),
"%d", cache_entry_expiration);
if ((write(timeout_fd, timeout_buf, len)) != len)
- idmapd_warnx("Error writing '%s' to "
+ xlog_warn("Error writing '%s' to "
"'%s' to set client "
"cache expiration time\n",
timeout_buf,
@@ -402,14 +336,14 @@ main(int argc, char **argv)
}
if ((fd = open(pipefsdir, O_RDONLY)) == -1)
- idmapd_err(1, "main: open(%s)", pipefsdir);
+ xlog_err("main: open(%s): %s", pipefsdir, strerror(errno));
if (fcntl(fd, F_SETSIG, SIGUSR1) == -1)
- idmapd_err(1, "main: fcntl(%s)", pipefsdir);
+ xlog_err("main: fcntl(%s): %s", pipefsdir, strerror(errno));
if (fcntl(fd, F_NOTIFY,
DN_CREATE | DN_DELETE | DN_MODIFY | DN_MULTISHOT) == -1)
- idmapd_err(1, "main: fcntl(%s)", pipefsdir);
+ xlog_err("main: fcntl(%s): %s", pipefsdir, strerror(errno));
TAILQ_INIT(&icq);
@@ -429,12 +363,12 @@ main(int argc, char **argv)
}
if (nfsdret != 0 && fd == 0)
- idmapd_errx(1, "main: Neither NFS client nor NFSd found");
+ xlog_err("main: Neither NFS client nor NFSd found");
release_parent();
if (event_dispatch() < 0)
- idmapd_errx(1, "main: event_dispatch returns errno %d (%s)",
+ xlog_err("main: event_dispatch returns errno %d (%s)",
errno, strerror(errno));
/* NOTREACHED */
return 1;
@@ -451,7 +385,7 @@ dirscancb(int fd, short which, void *data)
nent = scandir(pipefsdir, &ents, NULL, alphasort);
if (nent == -1) {
- idmapd_warn("dirscancb: scandir(%s)", pipefsdir);
+ xlog_warn("dirscancb: scandir(%s): %s", pipefsdir, strerror(errno));
return;
}
@@ -473,7 +407,7 @@ dirscancb(int fd, short which, void *data)
pipefsdir, ents[i]->d_name);
if ((ic->ic_dirfd = open(path, O_RDONLY, 0)) == -1) {
- idmapd_warn("dirscancb: open(%s)", path);
+ xlog_warn("dirscancb: open(%s): %s", path, strerror(errno));
free(ic);
goto out;
}
@@ -482,7 +416,7 @@ dirscancb(int fd, short which, void *data)
strlcpy(ic->ic_path, path, sizeof(ic->ic_path));
if (verbose > 0)
- idmapd_warnx("New client: %s", ic->ic_clid);
+ xlog_warn("New client: %s", ic->ic_clid);
if (nfsopen(ic) == -1) {
close(ic->ic_dirfd);
@@ -508,8 +442,8 @@ dirscancb(int fd, short which, void *data)
close(ic->ic_dirfd);
TAILQ_REMOVE(icq, ic, ic_next);
if (verbose > 0) {
- idmapd_warnx("Stale client: %s", ic->ic_clid);
- idmapd_warnx("\t-> closed %s", ic->ic_path);
+ xlog_warn("Stale client: %s", ic->ic_clid);
+ xlog_warn("\t-> closed %s", ic->ic_path);
}
free(ic);
} else
@@ -560,7 +494,7 @@ nfsdcb(int fd, short which, void *data)
goto out;
if ((len = read(ic->ic_fd, buf, sizeof(buf))) <= 0) {
- idmapd_warnx("nfsdcb: read(%s) failed: errno %d (%s)",
+ xlog_warn("nfsdcb: read(%s) failed: errno %d (%s)",
ic->ic_path, len?errno:0,
len?strerror(errno):"End of File");
goto out;
@@ -574,15 +508,15 @@ nfsdcb(int fd, short which, void *data)
/* Authentication name -- ignored for now*/
if (getfield(&bp, authbuf, sizeof(authbuf)) == -1) {
- idmapd_warnx("nfsdcb: bad authentication name in upcall\n");
+ xlog_warn("nfsdcb: bad authentication name in upcall\n");
return;
}
if (getfield(&bp, typebuf, sizeof(typebuf)) == -1) {
- idmapd_warnx("nfsdcb: bad type in upcall\n");
+ xlog_warn("nfsdcb: bad type in upcall\n");
return;
}
if (verbose > 0)
- idmapd_warnx("nfsdcb: authbuf=%s authtype=%s",
+ xlog_warn("nfsdcb: authbuf=%s authtype=%s",
authbuf, typebuf);
im.im_type = strcmp(typebuf, "user") == 0 ?
@@ -592,26 +526,26 @@ nfsdcb(int fd, short which, void *data)
case IC_NAMEID:
im.im_conv = IDMAP_CONV_NAMETOID;
if (getfield(&bp, im.im_name, sizeof(im.im_name)) == -1) {
- idmapd_warnx("nfsdcb: bad name in upcall\n");
+ xlog_warn("nfsdcb: bad name in upcall\n");
return;
}
break;
case IC_IDNAME:
im.im_conv = IDMAP_CONV_IDTONAME;
if (getfield(&bp, buf1, sizeof(buf1)) == -1) {
- idmapd_warnx("nfsdcb: bad id in upcall\n");
+ xlog_warn("nfsdcb: bad id in upcall\n");
return;
}
tmp = strtoul(buf1, (char **)NULL, 10);
im.im_id = (u_int32_t)tmp;
if ((tmp == ULONG_MAX && errno == ERANGE)
|| (unsigned long)im.im_id != tmp) {
- idmapd_warnx("nfsdcb: id '%s' too big!\n", buf1);
+ xlog_warn("nfsdcb: id '%s' too big!\n", buf1);
return;
}
break;
default:
- idmapd_warnx("nfsdcb: Unknown which type %d", ic->ic_which);
+ xlog_warn("nfsdcb: Unknown which type %d", ic->ic_which);
return;
}
@@ -672,14 +606,14 @@ nfsdcb(int fd, short which, void *data)
break;
default:
- idmapd_warnx("nfsdcb: Unknown which type %d", ic->ic_which);
+ xlog_warn("nfsdcb: Unknown which type %d", ic->ic_which);
return;
}
bsiz = sizeof(buf) - bsiz;
if (atomicio((void*)write, ic->ic_fd, buf, bsiz) != bsiz)
- idmapd_warnx("nfsdcb: write(%s) failed: errno %d (%s)",
+ xlog_warn("nfsdcb: write(%s) failed: errno %d (%s)",
ic->ic_path, errno, strerror(errno));
out:
@@ -693,7 +627,7 @@ imconv(struct idmap_client *ic, struct idmap_msg *im)
case IDMAP_CONV_IDTONAME:
idtonameres(im);
if (verbose > 1)
- idmapd_warnx("%s %s: (%s) id \"%d\" -> name \"%s\"",
+ xlog_warn("%s %s: (%s) id \"%d\" -> name \"%s\"",
ic->ic_id, ic->ic_clid,
im->im_type == IDMAP_TYPE_USER ? "user" : "group",
im->im_id, im->im_name);
@@ -705,13 +639,13 @@ imconv(struct idmap_client *ic, struct idmap_msg *im)
}
nametoidres(im);
if (verbose > 1)
- idmapd_warnx("%s %s: (%s) name \"%s\" -> id \"%d\"",
+ xlog_warn("%s %s: (%s) name \"%s\" -> id \"%d\"",
ic->ic_id, ic->ic_clid,
im->im_type == IDMAP_TYPE_USER ? "user" : "group",
im->im_name, im->im_id);
break;
default:
- idmapd_warnx("imconv: Invalid conversion type (%d) in message",
+ xlog_warn("imconv: Invalid conversion type (%d) in message",
im->im_conv);
im->im_status |= IDMAP_STATUS_INVALIDMSG;
break;
@@ -729,7 +663,7 @@ nfscb(int fd, short which, void *data)
if (atomicio(read, ic->ic_fd, &im, sizeof(im)) != sizeof(im)) {
if (verbose > 0)
- idmapd_warn("nfscb: read(%s)", ic->ic_path);
+ xlog_warn("nfscb: read(%s): %s", ic->ic_path, strerror(errno));
if (errno == EPIPE)
return;
goto out;
@@ -745,7 +679,7 @@ nfscb(int fd, short which, void *data)
im.im_status = IDMAP_STATUS_SUCCESS;
if (atomicio((void*)write, ic->ic_fd, &im, sizeof(im)) != sizeof(im))
- idmapd_warn("nfscb: write(%s)", ic->ic_path);
+ xlog_warn("nfscb: write(%s): %s", ic->ic_path, strerror(errno));
out:
event_add(&ic->ic_event, NULL);
}
@@ -756,7 +690,7 @@ nfsdreopen_one(struct idmap_client *ic)
int fd;
if (verbose > 0)
- idmapd_warnx("ReOpening %s", ic->ic_path);
+ xlog_warn("ReOpening %s", ic->ic_path);
if ((fd = open(ic->ic_path, O_RDWR, 0)) != -1) {
if ((ic->ic_event.ev_flags & EVLIST_INIT))
@@ -768,7 +702,7 @@ nfsdreopen_one(struct idmap_client *ic)
event_set(&ic->ic_event, ic->ic_fd, EV_READ, nfsdcb, ic);
event_add(&ic->ic_event, NULL);
} else {
- idmapd_warnx("nfsdreopen: Opening '%s' failed: errno %d (%s)",
+ xlog_warn("nfsdreopen: Opening '%s' failed: errno %d (%s)",
ic->ic_path, errno, strerror(errno));
}
}
@@ -793,7 +727,7 @@ nfsdopenone(struct idmap_client *ic)
{
if ((ic->ic_fd = open(ic->ic_path, O_RDWR, 0)) == -1) {
if (verbose > 0)
- idmapd_warnx("nfsdopenone: Opening %s failed: "
+ xlog_warn("nfsdopenone: Opening %s failed: "
"errno %d (%s)",
ic->ic_path, errno, strerror(errno));
return (-1);
@@ -803,7 +737,7 @@ nfsdopenone(struct idmap_client *ic)
event_add(&ic->ic_event, NULL);
if (verbose > 0)
- idmapd_warnx("Opened %s", ic->ic_path);
+ xlog_warn("Opened %s", ic->ic_path);
return (0);
}
@@ -819,7 +753,7 @@ nfsopen(struct idmap_client *ic)
DN_CREATE | DN_DELETE | DN_MULTISHOT);
break;
default:
- idmapd_warn("nfsopen: open(%s)", ic->ic_path);
+ xlog_warn("nfsopen: open(%s): %s", ic->ic_path, strerror(errno));
return (-1);
}
} else {
@@ -828,7 +762,7 @@ nfsopen(struct idmap_client *ic)
fcntl(ic->ic_dirfd, F_SETSIG, 0);
fcntl(ic->ic_dirfd, F_NOTIFY, 0);
if (verbose > 0)
- idmapd_warnx("Opened %s", ic->ic_path);
+ xlog_warn("Opened %s", ic->ic_path);
}
return (0);
--
1.5.2.4
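For reference, here is roughly how a gssd-style caller uses the reworked
wrappers above. Only initerr() and printerr() come from the patch; the program
and the verbosity values are made up:

#include "err_util.h"

int main(void)
{
	char progname[] = "rpc.svcgssd";
	int verbosity = 2;	/* e.g. from -vv on the command line */
	int foreground = 0;	/* 1 would mean -f: print to stderr  */

	initerr(progname, verbosity, foreground);

	printerr(0, "starting up\n");		/* 0 <= verbosity: printed */
	printerr(3, "very chatty details\n");	/* 3 >  verbosity: dropped */
	return 0;
}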
This patch removes unnecessary code from idmapd. setproctitle is not used
anywhere, so it can be removed. In addition, the kernel section of the
nfs_idmap.h header is unused and out of date, so it is removed as well.
Signed-off-by: David P. Quigley <[email protected]>
---
utils/idmapd/Makefile.am | 1 -
utils/idmapd/nfs_idmap.h | 7 ---
utils/idmapd/setproctitle.c | 110 -------------------------------------------
3 files changed, 0 insertions(+), 118 deletions(-)
delete mode 100644 utils/idmapd/setproctitle.c
diff --git a/utils/idmapd/Makefile.am b/utils/idmapd/Makefile.am
index 586ac9a..eb393df 100644
--- a/utils/idmapd/Makefile.am
+++ b/utils/idmapd/Makefile.am
@@ -16,7 +16,6 @@ idmapd_SOURCES = \
atomicio.c \
cfg.c \
idmapd.c \
- setproctitle.c \
strlcat.c \
strlcpy.c \
\
diff --git a/utils/idmapd/nfs_idmap.h b/utils/idmapd/nfs_idmap.h
index 5ec7cf1..59bced3 100644
--- a/utils/idmapd/nfs_idmap.h
+++ b/utils/idmapd/nfs_idmap.h
@@ -61,11 +61,4 @@ struct idmap_msg {
u_int8_t im_status;
};
-#ifdef __KERNEL__
-void *nfs_idmap_new(struct nfs_server *);
-void nfs_idmap_delete(struct nfs_server *);
-int nfs_idmap_id(struct nfs_server *, u_int8_t, char *, u_int, uid_t *);
-int nfs_idmap_name(struct nfs_server *, u_int8_t, uid_t, char *, u_int *);
-#endif /* __KERNEL__ */
-
#endif /* NFS_IDMAP_H */
diff --git a/utils/idmapd/setproctitle.c b/utils/idmapd/setproctitle.c
deleted file mode 100644
index 0679f4e..0000000
--- a/utils/idmapd/setproctitle.c
+++ /dev/null
@@ -1,110 +0,0 @@
-/*
- * Modified for OpenSSH by Kevin Steves
- * October 2000
- */
-
-/*
- * Copyright (c) 1994, 1995 Christopher G. Demetriou
- * All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in the
- * documentation and/or other materials provided with the distribution.
- * 3. All advertising materials mentioning features or use of this software
- * must display the following acknowledgement:
- * This product includes software developed by Christopher G. Demetriou
- * for the NetBSD Project.
- * 4. The name of the author may not be used to endorse or promote products
- * derived from this software without specific prior written permission
- *
- * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
- * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
- * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
- * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
- * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
- * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
- * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#if defined(LIBC_SCCS) && !defined(lint)
-static char rcsid[] = "$OpenBSD: setproctitle.c,v 1.8 2001/11/06 19:21:40 art Exp $";
-#endif /* LIBC_SCCS and not lint */
-
-#include <sys/types.h>
-
-#include <stdio.h>
-#include <string.h>
-#include <stdarg.h>
-
-#ifdef HAVE_CONFIG_H
-#include "config.h"
-#endif /* HAVE_CONFIG_H */
-
-#ifndef HAVE_SETPROCTITLE
-
-#define SPT_NONE 0
-#define SPT_PSTAT 1
-
-#ifndef SPT_TYPE
-#define SPT_TYPE SPT_NONE
-#endif
-
-#if SPT_TYPE == SPT_PSTAT
-#include <sys/param.h>
-#include <sys/pstat.h>
-#endif /* SPT_TYPE == SPT_PSTAT */
-
-#define MAX_PROCTITLE 2048
-
-extern char *__progname;
-
-/*
- * Set Process Title (SPT) defines. Modeled after sendmail's
- * SPT type definition strategy.
- *
- * SPT_TYPE:
- *
- * SPT_NONE: Don't set the process title. Default.
- * SPT_PSTAT: Use pstat(PSTAT_SETCMD). HP-UX specific.
- */
-
-void
-setproctitle(const char *fmt, ...)
-{
-#if SPT_TYPE != SPT_NONE
- va_list ap;
-
- char buf[MAX_PROCTITLE];
- size_t used;
-
-#if SPT_TYPE == SPT_PSTAT
- union pstun pst;
-#endif /* SPT_TYPE == SPT_PSTAT */
-
- va_start(ap, fmt);
- if (fmt != NULL) {
- used = snprintf(buf, MAX_PROCTITLE, "%s: ", __progname);
- if (used >= MAX_PROCTITLE)
- used = MAX_PROCTITLE - 1;
- (void)vsnprintf(buf + used, MAX_PROCTITLE - used, fmt, ap);
- } else
- (void)snprintf(buf, MAX_PROCTITLE, "%s", __progname);
- va_end(ap);
- used = strlen(buf);
-
-#if SPT_TYPE == SPT_PSTAT
- pst.pst_command = buf;
- pstat(PSTAT_SETCMD, pst, used, 0, 0);
-#endif /* SPT_TYPE == SPT_PSTAT */
-
-#endif /* SPT_TYPE != SPT_NONE */
-}
-#endif /* HAVE_SETPROCTITLE */
--
1.5.2.4
On 9/11/07, David P. Quigley <[email protected]> wrote:
> This patch removes all of the old idmap_* logging functions and replaced them
> with the corresponding xlog functions. In addition that that it also reworks
> the gssd logging wrappers to use the new xlog_backend. Finally it makes
> necessary changes to the build files to get the project compiling again.
>
> Signed-off-by: David P. Quigley <[email protected]>
> ---
>
> static int verbosity = 0;
> static int fg = 0;
>
> -static char message_buf[500];
> -
> void initerr(char *progname, int set_verbosity, int set_fg)
> {
> verbosity = set_verbosity;
> fg = set_fg;
> if (!fg)
> - openlog(progname, LOG_PID, LOG_DAEMON);
> + xlog_open(progname);
> }
>
>
> void printerr(int priority, char *format, ...)
> {
> va_list args;
> - int ret;
> - int buf_used, buf_available;
> - char *buf;
>
> /* Don't bother formatting a message we're never going to print! */
> if (priority > verbosity)
> return;
>
> - buf_used = strlen(message_buf);
> - /* subtract 4 to leave room for "...\n" if necessary */
> - buf_available = sizeof(message_buf) - buf_used - 4;
> - buf = message_buf + buf_used;
> -
> - /*
> - * Aggregate lines: only print buffer when we get to the
> - * end of a line or run out of space
> - */
> va_start(args, format);
> - ret = vsnprintf(buf, buf_available, format, args);
> + if (fg)
> + vfprintf(stderr, format, args);
> + else
> + xlog_backend(L_ERROR, format, args);
> va_end(args);
> -
> - if (ret < 0)
> - goto printit;
> - if (ret >= buf_available) {
> - /* Indicate we're truncating */
> - strcat(message_buf, "...\n");
> - goto printit;
> - }
> - if (message_buf[strlen(message_buf) - 1] == '\n')
> - goto printit;
> - return;
> -printit:
> - if (fg) {
> - fprintf(stderr, "%s", message_buf);
> - } else {
> - syslog(LOG_ERR, "%s", message_buf);
> - }
> - /* reset the buffer */
> - memset(message_buf, 0, sizeof(message_buf));
> }
Hi David,
I think we need to keep the code in printerr that aggregates lines, or
figure out another way to print the debugging output. The last 20 or so
lines of the following svcgssd output, captured during a mount with
-vvv, should be on one line as an ASCII dump:
Sep 11 10:50:17 rock rpc.svcgssd[19135]: leaving poll
Sep 11 10:50:17 rock rpc.svcgssd[19135]: handling null request
Sep 11 10:50:17 rock rpc.svcgssd[19135]: in_handle:
Sep 11 10:50:17 rock rpc.svcgssd[19135]: length 0
Sep 11 10:50:17 rock rpc.svcgssd[19135]:
Sep 11 10:50:17 rock rpc.svcgssd[19135]: in_tok:
Sep 11 10:50:17 rock rpc.svcgssd[19135]: length 509
Sep 11 10:50:17 rock rpc.svcgssd[19135]:
Sep 11 10:50:17 rock rpc.svcgssd[19135]: 0000:
Sep 11 10:50:17 rock rpc.svcgssd[19135]: 60
Sep 11 10:50:17 rock rpc.svcgssd[19135]: 82
Sep 11 10:50:17 rock rpc.svcgssd[19135]: 01
Sep 11 10:50:17 rock rpc.svcgssd[19135]: f9
Sep 11 10:50:17 rock rpc.svcgssd[19135]: 06
Sep 11 10:50:17 rock rpc.svcgssd[19135]: 09
Sep 11 10:50:17 rock rpc.svcgssd[19135]: 2a
Sep 11 10:50:17 rock rpc.svcgssd[19135]: 86
Sep 11 10:50:17 rock rpc.svcgssd[19135]: 48
Sep 11 10:50:17 rock rpc.svcgssd[19135]: 86
Sep 11 10:50:17 rock rpc.svcgssd[19135]: f7
Sep 11 10:50:17 rock rpc.svcgssd[19135]: 12
Sep 11 10:50:17 rock rpc.svcgssd[19135]: 01
Sep 11 10:50:17 rock rpc.svcgssd[19135]: 02
Sep 11 10:50:17 rock rpc.svcgssd[19135]: 02
Sep 11 10:50:17 rock rpc.svcgssd[19135]: 01
Sep 11 10:50:17 rock rpc.svcgssd[19135]:
Sep 11 10:50:17 rock rpc.svcgssd[19135]: `
Sep 11 10:50:17 rock rpc.svcgssd[19135]: .
Could you point me to the section in gssd that is doing this? It might be
better to use the print statements in a saner way.
Dave
On Tue, 2007-09-11 at 11:00 -0400, Kevin Coffman wrote:
> On 9/11/07, David P. Quigley <[email protected]> wrote:
> > This patch removes all of the old idmap_* logging functions and replaced them
> > with the corresponding xlog functions. In addition that that it also reworks
> > the gssd logging wrappers to use the new xlog_backend. Finally it makes
> > necessary changes to the build files to get the project compiling again.
> >
> > Signed-off-by: David P. Quigley <[email protected]>
> > ---
> >
> > static int verbosity = 0;
> > static int fg = 0;
> >
> > -static char message_buf[500];
> > -
> > void initerr(char *progname, int set_verbosity, int set_fg)
> > {
> > verbosity = set_verbosity;
> > fg = set_fg;
> > if (!fg)
> > - openlog(progname, LOG_PID, LOG_DAEMON);
> > + xlog_open(progname);
> > }
> >
> >
> > void printerr(int priority, char *format, ...)
> > {
> > va_list args;
> > - int ret;
> > - int buf_used, buf_available;
> > - char *buf;
> >
> > /* Don't bother formatting a message we're never going to print! */
> > if (priority > verbosity)
> > return;
> >
> > - buf_used = strlen(message_buf);
> > - /* subtract 4 to leave room for "...\n" if necessary */
> > - buf_available = sizeof(message_buf) - buf_used - 4;
> > - buf = message_buf + buf_used;
> > -
> > - /*
> > - * Aggregate lines: only print buffer when we get to the
> > - * end of a line or run out of space
> > - */
> > va_start(args, format);
> > - ret = vsnprintf(buf, buf_available, format, args);
> > + if (fg)
> > + vfprintf(stderr, format, args);
> > + else
> > + xlog_backend(L_ERROR, format, args);
> > va_end(args);
> > -
> > - if (ret < 0)
> > - goto printit;
> > - if (ret >= buf_available) {
> > - /* Indicate we're truncating */
> > - strcat(message_buf, "...\n");
> > - goto printit;
> > - }
> > - if (message_buf[strlen(message_buf) - 1] == '\n')
> > - goto printit;
> > - return;
> > -printit:
> > - if (fg) {
> > - fprintf(stderr, "%s", message_buf);
> > - } else {
> > - syslog(LOG_ERR, "%s", message_buf);
> > - }
> > - /* reset the buffer */
> > - memset(message_buf, 0, sizeof(message_buf));
> > }
>
> Hi David,
> I think we need to keep the code in printerr that aggregates lines, or
> figure out another way to print the debugging. The last 20 or so
> lines of the following output of svcgssd during a mount when run with
> -vvv should be on one line as an ascii dump:
>
> Sep 11 10:50:17 rock rpc.svcgssd[19135]: leaving poll
> Sep 11 10:50:17 rock rpc.svcgssd[19135]: handling null request
> Sep 11 10:50:17 rock rpc.svcgssd[19135]: in_handle:
> Sep 11 10:50:17 rock rpc.svcgssd[19135]: length 0
> Sep 11 10:50:17 rock rpc.svcgssd[19135]:
> Sep 11 10:50:17 rock rpc.svcgssd[19135]: in_tok:
> Sep 11 10:50:17 rock rpc.svcgssd[19135]: length 509
> Sep 11 10:50:17 rock rpc.svcgssd[19135]:
> Sep 11 10:50:17 rock rpc.svcgssd[19135]: 0000:
> Sep 11 10:50:17 rock rpc.svcgssd[19135]: 60
> Sep 11 10:50:17 rock rpc.svcgssd[19135]: 82
> Sep 11 10:50:17 rock rpc.svcgssd[19135]: 01
> Sep 11 10:50:17 rock rpc.svcgssd[19135]: f9
> Sep 11 10:50:17 rock rpc.svcgssd[19135]: 06
> Sep 11 10:50:17 rock rpc.svcgssd[19135]: 09
> Sep 11 10:50:17 rock rpc.svcgssd[19135]: 2a
> Sep 11 10:50:17 rock rpc.svcgssd[19135]: 86
> Sep 11 10:50:17 rock rpc.svcgssd[19135]: 48
> Sep 11 10:50:17 rock rpc.svcgssd[19135]: 86
> Sep 11 10:50:17 rock rpc.svcgssd[19135]: f7
> Sep 11 10:50:17 rock rpc.svcgssd[19135]: 12
> Sep 11 10:50:17 rock rpc.svcgssd[19135]: 01
> Sep 11 10:50:17 rock rpc.svcgssd[19135]: 02
> Sep 11 10:50:17 rock rpc.svcgssd[19135]: 02
> Sep 11 10:50:17 rock rpc.svcgssd[19135]: 01
> Sep 11 10:50:17 rock rpc.svcgssd[19135]:
> Sep 11 10:50:17 rock rpc.svcgssd[19135]: `
> Sep 11 10:50:17 rock rpc.svcgssd[19135]: .
On 9/11/07, David P. Quigley <[email protected]> wrote:
> Could you point me to the second in gssd that is doing this? It might be
> better to use the print statements in a saner way.
>
> Dave
Agreed. The culprit is print_hexl() in svcgssd_proc.c, which currently
calls printerr() multiple times. I'll look to see if I can find other
instances that might depend on the line accumulation of printerr.
On 9/11/07, Kevin Coffman <[email protected]> wrote:
> On 9/11/07, David P. Quigley <[email protected]> wrote:
> > Could you point me to the second in gssd that is doing this? It might be
> > better to use the print statements in a saner way.
> >
> > Dave
>
> Agreed. The cuprit is print_hexl() in svcgssd_proc.c, which currently
> calls printerr() multiple times. I'll look to see if I can find other
> instances that might be depending on the line acculation of printerr.
The qword_* functions in cacheio.c also currently use this.
I am trying to figure out exactly what this function is doing. I think
it might be best to precalculate the length of the string needed to
convert this char * to hex and then use snprintf to build it here rather
than in the logging subsystem. What exactly is the data that we are
getting in via cp? Are we just getting a string of bytes that we want
printed in hex?
Dave
On Tue, 2007-09-11 at 11:24 -0400, Kevin Coffman wrote:
> On 9/11/07, David P. Quigley <[email protected]> wrote:
> > Could you point me to the second in gssd that is doing this? It might be
> > better to use the print statements in a saner way.
> >
> > Dave
>
> Agreed. The cuprit is print_hexl() in svcgssd_proc.c, which currently
> calls printerr() multiple times. I'll look to see if I can find other
> instances that might be depending on the line acculation of printerr.
Yes,
Here is sample output with the line accumulation re-enabled in printerr():
Sep 11 11:16:43 rock rpc.svcgssd[22225]:
Sep 11 11:16:43 rock rpc.svcgssd[22225]: in_tok:
Sep 11 11:16:43 rock rpc.svcgssd[22225]: length 490
Sep 11 11:16:43 rock rpc.svcgssd[22225]:
Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0000: 6082 01e6 0609 2a86
4886 f712 0102 0201 `.....*.H.......
Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0010: 006e 8201 d530 8201
d1a0 0302 0105 a103 .n...0..........
Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0020: 0201 0ea2 0703 0500
2000 0000 a381 fa61 ........ ......a
Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0030: 81f7 3081 f4a0 0302
0105 a110 1b0e 4349 ..0...........CI
Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0040: 5449 2e55 4d49 4348
2e45 4455 a225 3023 TI.UMICH.EDU.%0#
Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0050: a003 0201 03a1 1c30
1a1b 036e 6673 1b13 .......0...nfs..
Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0060: 726f 636b 2e63 6974
692e 756d 6963 682e rock.citi.umich.
Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0070: 6564 75a3 81b3 3081
b0a0 0302 0101 a103 edu...0.........
Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0080: 0201 18a2 81a3 0481
a09d 90d7 31a0 e661 ............1..a
Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0090: e698 f31c 1aa1 fd18
a9b0 706e 9e4c 36c4 ..........pn.L6.
Sep 11 11:16:43 rock rpc.svcgssd[22225]: 00a0: 3391 0612 eab0 8c07
ea70 d087 98e6 ac94 3........p......
Sep 11 11:16:43 rock rpc.svcgssd[22225]: 00b0: a81c 205a ba10 0f46
2351 fbb5 12bd 67c8 .. Z...F#Q....g.
Sep 11 11:16:43 rock rpc.svcgssd[22225]: 00c0: e7f5 627e 4b71 cd23
0fea c6e3 a6f3 d122 ..b~Kq.#......."
Sep 11 11:16:43 rock rpc.svcgssd[22225]: 00d0: a58d 1556 6788 4f0c
908a 36b1 3569 1456 ...Vg.O...6.5i.V
Sep 11 11:16:43 rock rpc.svcgssd[22225]: 00e0: 779e 3af7 ba6e afe9
c910 f502 a9f8 2a61 w.:..n........*a
Sep 11 11:16:43 rock rpc.svcgssd[22225]: 00f0: a546 07c1 be5a c992
8597 d9a4 6940 18bb .F...Z......i@..
Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0100: e10c 263f 6afb 1a3f
ec58 a6c6 315a bd43 ..&?j..?.X..1Z.C
Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0110: 7486 903a 36ac 303b
6cd9 ffc0 d3ec 724f t..:6.0;l.....rO
Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0120: d7c4 da92 e1a1 33ab
8fa4 81be 3081 bba0 ......3.....0...
Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0130: 0302 0101 a281 b304
81b0 35cf 6330 7a2a ..........5.c0z*
Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0140: 91b4 591c 38a8 6d3c
9192 ad49 85e6 92d6 ..Y.8.m<...I....
Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0150: 539f 2a3a c66a 6e09
c320 589d 0ec8 0cc5 S.*:.jn.. X.....
Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0160: c433 cdeb d259 79a2
6547 b0e8 21aa 0324 .3...Yy.eG..!..$
Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0170: 4cc5 ab73 fd76 d3b7
f36b 1ffc d140 a163 [email protected]
Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0180: c3e1 b1bd 8f82 c47c
a5d4 1bf1 7d09 9be6 .......|....}...
Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0190: 65dc 96d2 2bf4 cb5f
637a 6649 0221 1973 e...+.._czfI.!.s
Sep 11 11:16:43 rock rpc.svcgssd[22225]: 01a0: 6379 9537 ee31 fa8b
43f1 fe12 3f50 4ea2 cy.7.1..C...?PN.
Sep 11 11:16:43 rock rpc.svcgssd[22225]: 01b0: d426 70e3 59c8 1d1f
933b 0e5a 384d a681 .&p.Y....;.Z8M..
Sep 11 11:16:43 rock rpc.svcgssd[22225]: 01c0: 8399 6319 d1cc 1890
fe99 b78d 36f6 6d84 ..c.........6.m.
Sep 11 11:16:43 rock rpc.svcgssd[22225]: 01d0: 4d3a d4f2 8da2 a3e0
bea7 1cea 77c0 1acd M:..........w...
Sep 11 11:16:43 rock rpc.svcgssd[22225]: 01e0: 4d27 5bfb 5e07 7562
9ffd M'[.^.ub..
Sep 11 11:16:43 rock rpc.svcgssd[22225]: sname =
nfs/[email protected]
Sep 11 11:16:43 rock rpc.svcgssd[22225]: DEBUG: serialize_krb5_ctx:
lucid version!
Sep 11 11:16:43 rock rpc.svcgssd[22225]: prepare_krb5_rfc1964_buffer:
serializing keys with enctype 4 and length 8
Sep 11 11:16:43 rock rpc.svcgssd[22225]: doing downcall
Sep 11 11:16:43 rock rpc.svcgssd[22225]: \x01000000 2147483647 -1 -1 0
krb5 \x000000000000000018a0b8bfb0cfb0002001be0078520508000000000000000087b2e6463f15f013090000002a864886f7120102020400000008000000b9f42c618a2a3bdc04000000080000004904dc917adacb2c
Sep 11 11:16:43 rock rpc.svcgssd[22225]: sending null reply
On 9/11/07, David P. Quigley <[email protected]> wrote:
> I am trying to figure out exactly what this function is doing. I think
> it might be best to precalculate the length of the string needed to
> convert this char * to hex and then use snprintf to build it here rather
> than in the logging subsystem. What exactly is the data that we are
> getting in via cp? Are we just getting a string of bytes that we want
> printed in hex?
>
> Dave
>
> On Tue, 2007-09-11 at 11:24 -0400, Kevin Coffman wrote:
> > On 9/11/07, David P. Quigley <[email protected]> wrote:
> > > Could you point me to the second in gssd that is doing this? It might be
> > > better to use the print statements in a saner way.
> > >
> > > Dave
> >
> > Agreed. The cuprit is print_hexl() in svcgssd_proc.c, which currently
> > calls printerr() multiple times. I'll look to see if I can find other
> > instances that might be depending on the line acculation of printerr.
>
>
On Tue, Sep 11, 2007 at 11:28:48AM -0400, Kevin Coffman wrote:
> On 9/11/07, Kevin Coffman <[email protected]> wrote:
> > On 9/11/07, David P. Quigley <[email protected]> wrote:
> > > Could you point me to the second in gssd that is doing this? It might be
> > > better to use the print statements in a saner way.
> > >
> > > Dave
> >
> > Agreed. The cuprit is print_hexl() in svcgssd_proc.c, which currently
> > calls printerr() multiple times. I'll look to see if I can find other
> > instances that might be depending on the line acculation of printerr.
>
> The qword_* functions in cachio.c also currently use this.
Actually, we only have two print_hexl's, both dumping the data that was
read from the kernel's null upcall pipe on the server side.
I think those could probably go. Kevin, have you used them recently?
There's also always the option of running the program under strace with
-s9999 to see all the read and write data, though that may not be as
convenient.
--b.
If that is the format, why go through the craziness with the inner loop to
get it printed out? It seems that the formatting for that is ("%x %x %x %x",
int1, int2, int3, int4); using that inner loop to do it seems a bit
excessive. It seems that for each iteration of the outer loop I can have this
print statement: ("%04x: %04x %04x %04x %04x\n", length,
(u_int)cp[i], (u_int)cp[i+4], (u_int)cp[i+8], (u_int)cp[i+12]). This seems
like a much better solution unless I am missing something.
Dave
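For illustration, here is a rough sketch of that approach; the helper name,
buffer size, and priority level are assumptions rather than anything taken
from the existing print_hexl(). Each 16-byte row is assembled locally and
handed to printerr() as one complete line, so no aggregation is needed on
the logging side:

#include <ctype.h>
#include <stdio.h>
#include "err_util.h"

/* Hypothetical replacement for one row of print_hexl() output:
 * offset, eight groups of two bytes in hex, then the ASCII column. */
static void dump_hex_line(const unsigned char *cp, size_t off, size_t n)
{
	char line[80];
	size_t used, i;

	used = snprintf(line, sizeof(line), "%04zx: ", off);
	for (i = 0; i < 16; i++) {
		if (i < n)
			used += snprintf(line + used, sizeof(line) - used,
					 "%02x%s", cp[i], (i & 1) ? " " : "");
		else	/* pad a short final row so the ASCII column lines up */
			used += snprintf(line + used, sizeof(line) - used,
					 "  %s", (i & 1) ? " " : "");
	}
	used += snprintf(line + used, sizeof(line) - used, " ");
	for (i = 0; i < n; i++)
		used += snprintf(line + used, sizeof(line) - used, "%c",
				 isprint(cp[i]) ? cp[i] : '.');
	printerr(3, "%s\n", line);
}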
On Tue, 2007-09-11 at 11:32 -0400, Kevin Coffman wrote:
> Yes,
> Here is sample output with the line accumlation re-enabled in printerr():
>
> Sep 11 11:16:43 rock rpc.svcgssd[22225]:
> Sep 11 11:16:43 rock rpc.svcgssd[22225]: in_tok:
> Sep 11 11:16:43 rock rpc.svcgssd[22225]: length 490
> Sep 11 11:16:43 rock rpc.svcgssd[22225]:
> Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0000: 6082 01e6 0609 2a86
> 4886 f712 0102 0201 `.....*.H.......
> Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0010: 006e 8201 d530 8201
> d1a0 0302 0105 a103 .n...0..........
> Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0020: 0201 0ea2 0703 0500
> 2000 0000 a381 fa61 ........ ......a
> Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0030: 81f7 3081 f4a0 0302
> 0105 a110 1b0e 4349 ..0...........CI
> Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0040: 5449 2e55 4d49 4348
> 2e45 4455 a225 3023 TI.UMICH.EDU.%0#
> Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0050: a003 0201 03a1 1c30
> 1a1b 036e 6673 1b13 .......0...nfs..
> Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0060: 726f 636b 2e63 6974
> 692e 756d 6963 682e rock.citi.umich.
> Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0070: 6564 75a3 81b3 3081
> b0a0 0302 0101 a103 edu...0.........
> Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0080: 0201 18a2 81a3 0481
> a09d 90d7 31a0 e661 ............1..a
> Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0090: e698 f31c 1aa1 fd18
> a9b0 706e 9e4c 36c4 ..........pn.L6.
> Sep 11 11:16:43 rock rpc.svcgssd[22225]: 00a0: 3391 0612 eab0 8c07
> ea70 d087 98e6 ac94 3........p......
> Sep 11 11:16:43 rock rpc.svcgssd[22225]: 00b0: a81c 205a ba10 0f46
> 2351 fbb5 12bd 67c8 .. Z...F#Q....g.
> Sep 11 11:16:43 rock rpc.svcgssd[22225]: 00c0: e7f5 627e 4b71 cd23
> 0fea c6e3 a6f3 d122 ..b~Kq.#......."
> Sep 11 11:16:43 rock rpc.svcgssd[22225]: 00d0: a58d 1556 6788 4f0c
> 908a 36b1 3569 1456 ...Vg.O...6.5i.V
> Sep 11 11:16:43 rock rpc.svcgssd[22225]: 00e0: 779e 3af7 ba6e afe9
> c910 f502 a9f8 2a61 w.:..n........*a
> Sep 11 11:16:43 rock rpc.svcgssd[22225]: 00f0: a546 07c1 be5a c992
> 8597 d9a4 6940 18bb .F...Z......i@..
> Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0100: e10c 263f 6afb 1a3f
> ec58 a6c6 315a bd43 ..&?j..?.X..1Z.C
> Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0110: 7486 903a 36ac 303b
> 6cd9 ffc0 d3ec 724f t..:6.0;l.....rO
> Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0120: d7c4 da92 e1a1 33ab
> 8fa4 81be 3081 bba0 ......3.....0...
> Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0130: 0302 0101 a281 b304
> 81b0 35cf 6330 7a2a ..........5.c0z*
> Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0140: 91b4 591c 38a8 6d3c
> 9192 ad49 85e6 92d6 ..Y.8.m<...I....
> Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0150: 539f 2a3a c66a 6e09
> c320 589d 0ec8 0cc5 S.*:.jn.. X.....
> Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0160: c433 cdeb d259 79a2
> 6547 b0e8 21aa 0324 .3...Yy.eG..!..$
> Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0170: 4cc5 ab73 fd76 d3b7
> f36b 1ffc d140 a163 [email protected]
> Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0180: c3e1 b1bd 8f82 c47c
> a5d4 1bf1 7d09 9be6 .......|....}...
> Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0190: 65dc 96d2 2bf4 cb5f
> 637a 6649 0221 1973 e...+.._czfI.!.s
> Sep 11 11:16:43 rock rpc.svcgssd[22225]: 01a0: 6379 9537 ee31 fa8b
> 43f1 fe12 3f50 4ea2 cy.7.1..C...?PN.
> Sep 11 11:16:43 rock rpc.svcgssd[22225]: 01b0: d426 70e3 59c8 1d1f
> 933b 0e5a 384d a681 .&p.Y....;.Z8M..
> Sep 11 11:16:43 rock rpc.svcgssd[22225]: 01c0: 8399 6319 d1cc 1890
> fe99 b78d 36f6 6d84 ..c.........6.m.
> Sep 11 11:16:43 rock rpc.svcgssd[22225]: 01d0: 4d3a d4f2 8da2 a3e0
> bea7 1cea 77c0 1acd M:..........w...
> Sep 11 11:16:43 rock rpc.svcgssd[22225]: 01e0: 4d27 5bfb 5e07 7562
> 9ffd M'[.^.ub..
> Sep 11 11:16:43 rock rpc.svcgssd[22225]: sname =
> nfs/[email protected]
> Sep 11 11:16:43 rock rpc.svcgssd[22225]: DEBUG: serialize_krb5_ctx:
> lucid version!
> Sep 11 11:16:43 rock rpc.svcgssd[22225]: prepare_krb5_rfc1964_buffer:
> serializing keys with enctype 4 and length 8
> Sep 11 11:16:43 rock rpc.svcgssd[22225]: doing downcall
> Sep 11 11:16:43 rock rpc.svcgssd[22225]: \x01000000 2147483647 -1 -1 0
> krb5 \x000000000000000018a0b8bfb0cfb0002001be0078520508000000000000000087b2e6463f15f013090000002a864886f7120102020400000008000000b9f42c618a2a3bdc04000000080000004904dc917adacb2c
> Sep 11 11:16:43 rock rpc.svcgssd[22225]: sending null reply
>
>
> On 9/11/07, David P. Quigley <[email protected]> wrote:
> > I am trying to figure out exactly what this function is doing. I think
> > it might be best to precalculate the length of the string needed to
> > convert this char * to hex and then use snprintf to build it here rather
> > than in the logging subsystem. What exactly is the data that we are
> > getting in via cp? Are we just getting a string of bytes that we want
> > printed in hex?
> >
> > Dave
> >
> > On Tue, 2007-09-11 at 11:24 -0400, Kevin Coffman wrote:
> > > On 9/11/07, David P. Quigley <[email protected]> wrote:
> > > > Could you point me to the second in gssd that is doing this? It might be
> > > > better to use the print statements in a saner way.
> > > >
> > > > Dave
> > >
> > > Agreed. The cuprit is print_hexl() in svcgssd_proc.c, which currently
> > > calls printerr() multiple times. I'll look to see if I can find other
> > > instances that might be depending on the line acculation of printerr.
> >
> >
I'll have to agree with Bruce that we should just remove the calls to
print_hexl() altogether. I'll write a patch for that.
The stuff in gssd/cacheio.c can and should be gotten rid of as well; they
are mostly straight copies of the functions in support/nfs/cacheio.c.
There are a couple of minor but important changes that could be moved into
the common routine. I'll work on a patch to do that.
K.C.
On 9/11/07, David P. Quigley <[email protected]> wrote:
> If that is the format why do crazyness with the inner loop to get it
> printed out? It seems that the formating for that is ("%x %x %x %x",
> int1, int2, int3, int4); Using that inner loop to do that seems a bit
> excessive. It seems for each itteration of the outer loop I can have
> this print statement ("%04x: %04x %04x %04x %04x\n",length,
> (u_int)cp[i],(u_int)cp[i+4],(u_int)cp[i+8],(u_int)cp[i+12]);. This seems
> like a much better solution unless I am missing something.
>
> Dave
>
> On Tue, 2007-09-11 at 11:32 -0400, Kevin Coffman wrote:
> > Yes,
> > Here is sample output with the line accumlation re-enabled in printerr():
> >
> > Sep 11 11:16:43 rock rpc.svcgssd[22225]:
> > Sep 11 11:16:43 rock rpc.svcgssd[22225]: in_tok:
> > Sep 11 11:16:43 rock rpc.svcgssd[22225]: length 490
> > Sep 11 11:16:43 rock rpc.svcgssd[22225]:
> > Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0000: 6082 01e6 0609 2a86
> > 4886 f712 0102 0201 `.....*.H.......
> > Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0010: 006e 8201 d530 8201
> > d1a0 0302 0105 a103 .n...0..........
> > Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0020: 0201 0ea2 0703 0500
> > 2000 0000 a381 fa61 ........ ......a
> > Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0030: 81f7 3081 f4a0 0302
> > 0105 a110 1b0e 4349 ..0...........CI
> > Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0040: 5449 2e55 4d49 4348
> > 2e45 4455 a225 3023 TI.UMICH.EDU.%0#
> > Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0050: a003 0201 03a1 1c30
> > 1a1b 036e 6673 1b13 .......0...nfs..
> > Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0060: 726f 636b 2e63 6974
> > 692e 756d 6963 682e rock.citi.umich.
> > Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0070: 6564 75a3 81b3 3081
> > b0a0 0302 0101 a103 edu...0.........
> > Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0080: 0201 18a2 81a3 0481
> > a09d 90d7 31a0 e661 ............1..a
> > Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0090: e698 f31c 1aa1 fd18
> > a9b0 706e 9e4c 36c4 ..........pn.L6.
> > Sep 11 11:16:43 rock rpc.svcgssd[22225]: 00a0: 3391 0612 eab0 8c07
> > ea70 d087 98e6 ac94 3........p......
> > Sep 11 11:16:43 rock rpc.svcgssd[22225]: 00b0: a81c 205a ba10 0f46
> > 2351 fbb5 12bd 67c8 .. Z...F#Q....g.
> > Sep 11 11:16:43 rock rpc.svcgssd[22225]: 00c0: e7f5 627e 4b71 cd23
> > 0fea c6e3 a6f3 d122 ..b~Kq.#......."
> > Sep 11 11:16:43 rock rpc.svcgssd[22225]: 00d0: a58d 1556 6788 4f0c
> > 908a 36b1 3569 1456 ...Vg.O...6.5i.V
> > Sep 11 11:16:43 rock rpc.svcgssd[22225]: 00e0: 779e 3af7 ba6e afe9
> > c910 f502 a9f8 2a61 w.:..n........*a
> > Sep 11 11:16:43 rock rpc.svcgssd[22225]: 00f0: a546 07c1 be5a c992
> > 8597 d9a4 6940 18bb .F...Z......i@..
> > Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0100: e10c 263f 6afb 1a3f
> > ec58 a6c6 315a bd43 ..&?j..?.X..1Z.C
> > Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0110: 7486 903a 36ac 303b
> > 6cd9 ffc0 d3ec 724f t..:6.0;l.....rO
> > Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0120: d7c4 da92 e1a1 33ab
> > 8fa4 81be 3081 bba0 ......3.....0...
> > Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0130: 0302 0101 a281 b304
> > 81b0 35cf 6330 7a2a ..........5.c0z*
> > Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0140: 91b4 591c 38a8 6d3c
> > 9192 ad49 85e6 92d6 ..Y.8.m<...I....
> > Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0150: 539f 2a3a c66a 6e09
> > c320 589d 0ec8 0cc5 S.*:.jn.. X.....
> > Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0160: c433 cdeb d259 79a2
> > 6547 b0e8 21aa 0324 .3...Yy.eG..!..$
> > Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0170: 4cc5 ab73 fd76 d3b7
> > f36b 1ffc d140 a163 [email protected]
> > Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0180: c3e1 b1bd 8f82 c47c
> > a5d4 1bf1 7d09 9be6 .......|....}...
> > Sep 11 11:16:43 rock rpc.svcgssd[22225]: 0190: 65dc 96d2 2bf4 cb5f
> > 637a 6649 0221 1973 e...+.._czfI.!.s
> > Sep 11 11:16:43 rock rpc.svcgssd[22225]: 01a0: 6379 9537 ee31 fa8b
> > 43f1 fe12 3f50 4ea2 cy.7.1..C...?PN.
> > Sep 11 11:16:43 rock rpc.svcgssd[22225]: 01b0: d426 70e3 59c8 1d1f
> > 933b 0e5a 384d a681 .&p.Y....;.Z8M..
> > Sep 11 11:16:43 rock rpc.svcgssd[22225]: 01c0: 8399 6319 d1cc 1890
> > fe99 b78d 36f6 6d84 ..c.........6.m.
> > Sep 11 11:16:43 rock rpc.svcgssd[22225]: 01d0: 4d3a d4f2 8da2 a3e0
> > bea7 1cea 77c0 1acd M:..........w...
> > Sep 11 11:16:43 rock rpc.svcgssd[22225]: 01e0: 4d27 5bfb 5e07 7562
> > 9ffd M'[.^.ub..
> > Sep 11 11:16:43 rock rpc.svcgssd[22225]: sname =
> > nfs/[email protected]
> > Sep 11 11:16:43 rock rpc.svcgssd[22225]: DEBUG: serialize_krb5_ctx:
> > lucid version!
> > Sep 11 11:16:43 rock rpc.svcgssd[22225]: prepare_krb5_rfc1964_buffer:
> > serializing keys with enctype 4 and length 8
> > Sep 11 11:16:43 rock rpc.svcgssd[22225]: doing downcall
> > Sep 11 11:16:43 rock rpc.svcgssd[22225]: \x01000000 2147483647 -1 -1 0
> > krb5 \x000000000000000018a0b8bfb0cfb0002001be0078520508000000000000000087b2e6463f15f013090000002a864886f7120102020400000008000000b9f42c618a2a3bdc04000000080000004904dc917adacb2c
> > Sep 11 11:16:43 rock rpc.svcgssd[22225]: sending null reply
> >
> >
> > On 9/11/07, David P. Quigley <[email protected]> wrote:
> > > I am trying to figure out exactly what this function is doing. I think
> > > it might be best to precalculate the length of the string needed to
> > > convert this char * to hex and then use snprintf to build it here rather
> > > than in the logging subsystem. What exactly is the data that we are
> > > getting in via cp? Are we just getting a string of bytes that we want
> > > printed in hex?
> > >
> > > Dave
> > >
> > > On Tue, 2007-09-11 at 11:24 -0400, Kevin Coffman wrote:
> > > > On 9/11/07, David P. Quigley <[email protected]> wrote:
> > > > > Could you point me to the second in gssd that is doing this? It might be
> > > > > better to use the print statements in a saner way.
> > > > >
> > > > > Dave
> > > >
> > > > Agreed. The cuprit is print_hexl() in svcgssd_proc.c, which currently
> > > > calls printerr() multiple times. I'll look to see if I can find other
> > > > instances that might be depending on the line acculation of printerr.
> > >
> > >
>
>
Another question about the code in general: why is it that idmapd is
using fcntl and user signals to indicate changes to the rpc pipe? Isn't
there something else we can use, like Inotify or a Netlink socket? I am
assuming that this is just a result of the code being written before
these things were in place. I was originally basing doimapd on the
idmapd source; however, if it is acceptable to move to one of these
other techniques, I can switch over to that instead.
Dave
On Tue, 2007-09-11 at 11:37 -0400, J. Bruce Fields wrote:
> On Tue, Sep 11, 2007 at 11:28:48AM -0400, Kevin Coffman wrote:
> > On 9/11/07, Kevin Coffman <[email protected]> wrote:
> > > On 9/11/07, David P. Quigley <[email protected]> wrote:
> > > > Could you point me to the second in gssd that is doing this? It might be
> > > > better to use the print statements in a saner way.
> > > >
> > > > Dave
> > >
> > > Agreed. The cuprit is print_hexl() in svcgssd_proc.c, which currently
> > > calls printerr() multiple times. I'll look to see if I can find other
> > > instances that might be depending on the line acculation of printerr.
> >
> > The qword_* functions in cachio.c also currently use this.
>
> Actually, we only have two print_hexl's, both dumping the data that was
> read from the kernel's null upcall pipe on the server side.
>
> I think those could probably go. Kevin, have you used them recently?
> There's also always the option of running the program under strace with
> -s9999 to see all the read and write data, though that may not be as
> convenient.
>
> --b.
On Tue, Sep 11, 2007 at 01:01:18PM -0400, David P. Quigley wrote:
> Another question about the code in general. Why is it that idmapd is
> using fcntl and user signals to indicate changes to the rpc pipe. Isn't
> there something else we can use like Inotify or a Netlink socket?
When a new nfs client is created, a new directory containing several
files (including the upcall pipe) is created. The dnotify stuff is
there so we find out about those new files and directories. I don't
know much about netlink--how would it help us there?
I assume Inotify would work, but I don't see any advantage over dnotify.
If we didn't leave in some compatibility code then it would mean
dropping support for pre-inotify kernels, wouldn't it? It could be that
there were only a few of those and that nfsv4 wasn't really usable on
them anyway (I don't remember), but absent a real motivation to move
from dnotify to inotify I'd rather not.
> I am assuming that this is just a result of the code being written
> before these things were in place. I was originally starting doimapd
> based on the idmapd source however if it is acceptable to move to one
> of these other techniques I can switch over to that instead.
For new stuff I don't think there'd be a problem using something else.
--b.
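For reference, the dnotify arrangement described above boils down to a
couple of fcntl() calls on the watched directory plus a signal handler.
The sketch below is not idmapd's actual code; the path and the signal
number are chosen arbitrarily for the example.

#define _GNU_SOURCE             /* F_SETSIG, F_NOTIFY and the DN_* flags */
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static void dir_changed(int sig)
{
    (void)sig;
    /* the daemon would rescan the directory here for new entries */
}

int main(void)
{
    /* Illustrative path; the real daemon watches directories under
     * the rpc_pipefs mount. */
    int fd = open("/var/lib/nfs/rpc_pipefs/nfs", O_RDONLY);

    if (fd < 0) {
        perror("open");
        return 1;
    }
    signal(SIGRTMIN + 1, dir_changed);
    /* Deliver SIGRTMIN+1 on create/delete in the directory and keep
     * the watch armed across events (DN_MULTISHOT). */
    fcntl(fd, F_SETSIG, SIGRTMIN + 1);
    fcntl(fd, F_NOTIFY, DN_CREATE | DN_DELETE | DN_MULTISHOT);
    pause();
    return 0;
}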
On Tue, 2007-09-11 at 13:16 -0400, J. Bruce Fields wrote:
> On Tue, Sep 11, 2007 at 01:01:18PM -0400, David P. Quigley wrote:
> > Another question about the code in general. Why is it that idmapd is
> > using fcntl and user signals to indicate changes to the rpc pipe. Isn't
> > there something else we can use like Inotify or a Netlink socket?
>
> When a new nfs client is created, a new directory containing several
> files (including the upcall pipe) are created. The dnotify stuff is
> there so we find out about those new files and directories. I don't
> know much about netlink--how would it help us there?
I'm not sure that it would be a big help. It seems to be a newer
method for transporting data between user and kernel space; there is a
Linux Journal article from back in 2005 about it, which can be found at
http://www.linuxjournal.com/article/7356. It could remove a dependency
on pipefs, though the question would then be which technology has been
around longer. The method of passing messages up to user space would
also be different: instead of each client having a separate file, you
could have idmapd/doimapd listen on one netlink socket, with each
incoming message carrying a clientid that the daemon would use to
associate it with the right client internally (a rough sketch of such a
message layout follows at the end of this message). Again, I'm not sure
it is a big help; it's just a different way of thinking about it that I
am considering.
>
> I assume Inotify would work, but I don't see any advantage over dnotify.
> If we didn't leave in some compatibility code then it would mean
> dropping support for pre-inotify kernels, wouldn't it?--could be that
> there were only a few of those and that nfsv4 wasn't really usable on
> them anyway (I don't remember), but absent a real motivation to move
> from dnotify to inotify I'd rather not.
That is a good point. Since I'm working on something that will hopefully
be in future kernels, I think I can make the assumption that the extra
features are available.
>
> > I am assuming that this is just a result of the code being written
> > before these things were in place. I was originally starting doimapd
> > based on the idmapd source however if it is acceptable to move to one
> > of these other techniques I can switch over to that instead.
>
> For new stuff I don't think there'd be a problem using something else.
>
> --b.
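To make the single-socket idea above concrete, each upcall would just
have to carry the client identity in its payload. The layout below is
purely hypothetical (none of these names exist in the kernel or in
nfs-utils); it only illustrates the multiplexing being described.

#include <stdint.h>

/* Hypothetical upcall header: every message on the one netlink socket
 * says which client it belongs to, so a single daemon can demultiplex
 * requests that today arrive on per-client rpc_pipefs files. */
struct idmap_nl_msg {
    uint32_t clientid;      /* which nfs client instance */
    uint32_t type;          /* e.g. name-to-id or id-to-name */
    uint32_t datalen;       /* bytes in data[] */
    char     data[];        /* the name or id being translated */
};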
On Tue, 11 Sep 2007 13:33:53 -0400
"David P. Quigley" <[email protected]> wrote:
> On Tue, 2007-09-11 at 13:16 -0400, J. Bruce Fields wrote:
> > On Tue, Sep 11, 2007 at 01:01:18PM -0400, David P. Quigley wrote:
> > > Another question about the code in general. Why is it that idmapd is
> > > using fcntl and user signals to indicate changes to the rpc pipe. Isn't
> > > there something else we can use like Inotify or a Netlink socket?
> >
> > When a new nfs client is created, a new directory containing several
> > files (including the upcall pipe) are created. The dnotify stuff is
> > there so we find out about those new files and directories. I don't
> > know much about netlink--how would it help us there?
>
> I'm not sure that it would be a big help. It seems that it is a newer
> method for transporting data between user and kernel space. There is a
> Linux journal article from back in 2005 about them which can be found at
> http://www.linuxjournal.com/article/7356. It could remove a dependency
> on pipefs, however the only question would be which technology has been
> around longer. It would also seem that the method of passing the messages
> up to user space would be different. Instead of each client having a
> separate file you could have idmapd/doimapd listen on one netlink socket
> and messages that come in contain a clientid which the daemon would
> associate with the separate clients internally. Again I'm not sure if it
> is a big help, but rather a different way of thinking about it that I am
> considering.
>
There's a somewhat related issue to this as well. I've started looking
over what it would take to add Kerberos support to CIFS. It would be
*really* nice to be able to use the same gssd that NFS is using and
just massage that data for CIFS.
Doing this with the current rpc_pipefs-based daemon is going to be
problematic though. I don't think we want CIFS to be dependent on RPC,
and ripping that code out of the sunrpc module would be a pain.
Moving gssd to netlink sockets might neatly sidestep that problem
altogether...
> >
> > I assume Inotify would work, but I don't see any advantage over dnotify.
> > If we didn't leave in some compatibility code then it would mean
> > dropping support for pre-inotify kernels, wouldn't it?--could be that
> > there were only a few of those and that nfsv4 wasn't really usable on
> > them anyway (I don't remember), but absent a real motivation to move
> > from dnotify to inotify I'd rather not.
>
> That is a good point. Since I'm working on something that will hopefully
> be in future kernels I think I can make the assumption of having the
> extra features.
>
> >
> > > I am assuming that this is just a result of the code being written
> > > before these things were in place. I was originally starting doimapd
> > > based on the idmapd source however if it is acceptable to move to one
> > > of these other techniques I can switch over to that instead.
> >
> > For new stuff I don't think there'd be a problem using something else.
> >
> > --b.
>
>
--
Jeff Layton <[email protected]>
On Tue, 2007-09-11 at 14:39 -0400, Jeff Layton wrote:
> On Tue, 11 Sep 2007 13:33:53 -0400
> "David P. Quigley" <[email protected]> wrote:
>
> > On Tue, 2007-09-11 at 13:16 -0400, J. Bruce Fields wrote:
> > > On Tue, Sep 11, 2007 at 01:01:18PM -0400, David P. Quigley wrote:
> > > > Another question about the code in general. Why is it that idmapd is
> > > > using fcntl and user signals to indicate changes to the rpc pipe. Isn't
> > > > there something else we can use like Inotify or a Netlink socket?
> > >
> > > When a new nfs client is created, a new directory containing several
> > > files (including the upcall pipe) are created. The dnotify stuff is
> > > there so we find out about those new files and directories. I don't
> > > know much about netlink--how would it help us there?
> >
> > I'm not sure that it would be a big help. It seems that it is a newer
> > method for transporting data between user and kernel space. There is a
> > Linux journal article from back in 2005 about them which can be found at
> > http://www.linuxjournal.com/article/7356. It could remove a dependency
> > on pipefs, however the only question would be which technology has been
> > around longer. It would also seem that the method of passing the messages
> > up to user space would be different. Instead of each client having a
> > separate file you could have idmapd/doimapd listen on one netlink socket
> > and messages that come in contain a clientid which the daemon would
> > associate with the separate clients internally. Again I'm not sure if it
> > is a big help, but rather a different way of thinking about it that I am
> > considering.
> >
>
> There's a somewhat related issue to this as well. I've started looking
> over what it would take to add kerberos support to CIFS. It would be
> *really* nice to be able to use the same gssd that NFS is using and
> just massage that data for CIFS.
>
> Doing this with the current rpc_pipefs based daemon is going to be
> problematic though. I don't think we want CIFS to be dependent on RPC,
> and ripping that code out of the sunrpc module would be a pain.
>
> Moving gssd to netlink sockets might neatly sidestep that problem
> altogether...
I've been looking through some implementations that use netlink
sockets, particularly the connector, and it brings up an interesting
point: netlink is a connectionless protocol, which means it isn't
"reliable". The connector documentation states:
"Netlink itself is not reliable protocol, that means that messages can
be lost due to memory pressure or process' receiving queue overflowed,
so caller is warned must be prepared. That is why struct cn_msg [main
connector's message header] contains u32 seq and u32 ack fields."
I don't see any indication that the connector netlink protocol
implements reliability in the form of retransmission. I will be looking
into this more, but it seems the caller has to check the sequence and
acknowledgement numbers to decide whether or not to retry a message (a
sketch of that seq/ack convention follows at the end of this message).
>
> > >
> > > I assume Inotify would work, but I don't see any advantage over dnotify.
> > > If we didn't leave in some compatibility code then it would mean
> > > dropping support for pre-inotify kernels, wouldn't it?--could be that
> > > there were only a few of those and that nfsv4 wasn't really usable on
> > > them anyway (I don't remember), but absent a real motivation to move
> > > from dnotify to inotify I'd rather not.
> >
> > That is a good point. Since I'm working on something that will hopefully
> > be in future kernels I think I can make the assumption of having the
> > extra features.
> >
> > >
> > > > I am assuming that this is just a result of the code being written
> > > > before these things were in place. I was originally starting doimapd
> > > > based on the idmapd source however if it is acceptable to move to one
> > > > of these other techniques I can switch over to that instead.
> > >
> > > For new stuff I don't think there'd be a problem using something else.
> > >
> > > --b.
> >
> >
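For reference, the seq/ack scheme those fields support looks roughly
like the sketch below: as I read Documentation/connector.txt, a reply
echoes the request's seq and sets ack to seq + 1, and anything with
other values is treated as a new, unrelated message. struct cn_msg and
<linux/connector.h> are real; the helper name and the way the payload
is framed here are assumptions for the example.

#include <string.h>
#include <linux/connector.h>

/* Build a reply to a received connector request.  `rep' must point at
 * a buffer large enough for the header plus `len' bytes of payload. */
static void make_reply(const struct cn_msg *req, struct cn_msg *rep,
                       const void *payload, __u16 len)
{
    memset(rep, 0, sizeof(*rep));
    rep->id  = req->id;             /* same connector idx/val pair */
    rep->seq = req->seq;            /* reply keeps the request's seq */
    rep->ack = req->seq + 1;        /* ...and acks it with seq + 1 */
    rep->len = len;
    memcpy(rep->data, payload, len);
}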
On Tue, Sep 11, 2007 at 02:39:05PM -0400, Jeff Layton wrote:
> There's a somewhat related issue to this as well. I've started looking
> over what it would take to add kerberos support to CIFS. It would be
> *really* nice to be able to use the same gssd that NFS is using and
> just massage that data for CIFS.
>
> Doing this with the current rpc_pipefs based daemon is going to be
> problematic though. I don't think we want CIFS to be dependent on RPC,
> and ripping that code out of the sunrpc module would be a pain.
Actually, Daniel Phillips did that a while ago, and I updated it once.
I don't think it'd been tested, but it's hidden away in my git repo, at
refs/attic/rpc_pipefs_library;
git remote add bfields git://linux-nfs.org/~bfields/linux.git
git fetch bfields refs/attic/rpc_pipefs_library:foo
git checkout foo
should get it.
The rpc_pipefs code really shouldn't need to depend on the rest of the
rpc code.
--b.
On Tue, Sep 11, 2007 at 05:17:11PM -0400, David P. Quigley wrote:
> I've been looking through some implementations using netlink sockets
> particularly the connector and it brings up an interesting point.
> netlink is a connectionless protocol which means it isn't "reliable".
> The connector documentation states
>
> "Netlink itself is not reliable protocol, that means that messages can
> be lost due to memory pressure or process' receiving queue overflowed,
> so caller is warned must be prepared. That is why struct cn_msg [main
> connector's message header] contains u32 seq and u32 ack fields."
>
> I don't see an indication that the connector netlink protocol implements
> some sort of reliability in the form of retransmission. I will be
> looking into this more but it seems that checks need to be made based on
> sequence and acknowledgement numbers on whether or not to retry
> messages.
I guess we'd need to understand what exactly you're trying to do.
--b.
On Tue, 2007-09-11 at 17:31 -0400, J. Bruce Fields wrote:
> On Tue, Sep 11, 2007 at 05:17:11PM -0400, David P. Quigley wrote:
> > I've been looking through some implementations using netlink sockets
> > particularly the connector and it brings up an interesting point.
> > netlink is a connectionless protocol which means it isn't "reliable".
> > The connector documentation states
> >
> > "Netlink itself is not reliable protocol, that means that messages can
> > be lost due to memory pressure or process' receiving queue overflowed,
> > so caller is warned must be prepared. That is why struct cn_msg [main
> > connector's message header] contains u32 seq and u32 ack fields."
> >
> > I don't see an indication that the connector netlink protocol implements
> > some sort of reliability in the form of retransmission. I will be
> > looking into this more but it seems that checks need to be made based on
> > sequence and acknowledgement numbers on whether or not to retry
> > messages.
>
> I guess we'd need to understand what exactly you're trying to do.
>
> --b.
The concept for this daemon is that labeled NFS needs to be able to
determine what the generic label on the wire means to the local box.
This is similar in concept to the mapping of user@dns to uid/gid that
idmapd handles. If the kernel calls up to the daemon using a netlink
socket, it has the potential to lose the message for a translation due
to memory pressure. Based on what I can see right now, the kernel would
have to check the seq number of the next message it receives and see if
it is the reply to its request; if it isn't, it is going to have to
retransmit its request. This isn't a big deal, but I think that to make
netlink reliable some sort of retransmission needs to be done. We are
still working on the exact method for performing these translations, so
I can't yet say how time consuming a translation will be.
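The retry logic described here could be shaped roughly as follows. It
is written as userspace-style C for clarity (the kernel side would use
its own primitives, but the structure is the same), and every name, the
timeout, and the retry count are hypothetical placeholders rather than
anything that exists today.

#include <stdbool.h>
#include <stdint.h>

#define MAX_RETRIES 3

struct label_reply;                     /* whatever the daemon returns */

/* Placeholders standing in for the real netlink send/receive paths. */
extern bool send_label_request(uint32_t seq);
extern bool recv_label_reply(uint32_t *ack, struct label_reply *r,
                             int timeout_ms);

static bool translate_label(uint32_t seq, struct label_reply *r)
{
    int attempt;

    for (attempt = 0; attempt < MAX_RETRIES; attempt++) {
        uint32_t ack;

        if (!send_label_request(seq))
            continue;
        /* Accept only the answer to *this* request; a lost or
         * unrelated message just means we try again. */
        if (recv_label_reply(&ack, r, 1000) && ack == seq + 1)
            return true;
    }
    return false;       /* give up and let the caller decide */
}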