2008-02-14 09:31:39

by rzryyvzy

Subject: Is there a "blackhole" /dev/null directory?

Hello Linux Kernel Hackers,

/dev/null is often very useful, especially when a program insists on saving its data to a file. But some programs like to create differently named temporary files, so /dev/null no longer works.

What about a "/dev/null" directory?
I mean a "blackhole pseudo-directory" which discards every write.

Here is how it could work:
mount -t nulldir nulldir /dev/nulldir

Now if a program does a creat(2),
the file and its fd are created in memory only.
If the program then does a write(2) to that fd, the data is discarded and the call pretends to have written the requested number of bytes.
When the program finally does a close(2) on the fd, the whole inode is deleted from memory.

The directory would be permanently empty except for the inodes that still have open file descriptors, so only inode information would be temporarily kept in this "nulldir tmpfs" directory.
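
For illustration, here is a small C sketch of what a program would effectively do against such a mount (hypothetical: the nulldir filesystem does not exist, and the file name is made up):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        /* the inode and the fd would exist in memory only */
        int fd = creat("/dev/nulldir/report.tmp", 0644);
        if (fd < 0) {
                perror("creat");
                return 1;
        }

        /* write(2) would claim success, but the data would go nowhere */
        if (write(fd, "scratch data\n", 13) != 13)
                perror("write");

        /* closing the fd would make the in-memory inode disappear */
        close(fd);
        return 0;
}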

Does a way to create such a null directory already exist?
--
Best regards,
Mika Lawando



2008-02-14 09:39:29

by Jasper Bryant-Greene

Subject: Re: Is there a "blackhole" /dev/null directory?

On Thu, 2008-02-14 at 10:30 +0100, rzryyvzy wrote:
> /dev/null is often very useful, especially when a program insists on saving its data to a file. But some programs like to create differently named temporary files, so /dev/null no longer works.
>
> What about a "/dev/null" directory?
> I mean a "blackhole pseudo-directory" which discards every write.
>
> Here is how it could work:
> mount -t nulldir nulldir /dev/nulldir
>
> Now if a program does a creat(2),
> the file and its fd are created in memory only.
> If the program then does a write(2) to that fd, the data is discarded and the call pretends to have written the requested number of bytes.
> When the program finally does a close(2) on the fd, the whole inode is deleted from memory.
>
> The directory would be permanently empty except for the inodes that still have open file descriptors, so only inode information would be temporarily kept in this "nulldir tmpfs" directory.
>
> Does a way to create such a null directory already exist?

This could be done fairly trivially with FUSE, and IMHO it is a good use
for FUSE because, since you're just throwing most data away, performance
is not a concern.
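
As a rough illustration, an untested sketch of what such a FUSE filesystem could look like, using the libfuse 2.x high-level API. Every operation simply claims success and keeps no state; the names and the build command are just examples, not anything that exists:

#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <string.h>
#include <sys/stat.h>

/* Every path except "/" is reported as an empty, writable regular file. */
static int null_getattr(const char *path, struct stat *st)
{
        memset(st, 0, sizeof(*st));
        if (strcmp(path, "/") == 0) {
                st->st_mode = S_IFDIR | 0777;
                st->st_nlink = 2;
        } else {
                st->st_mode = S_IFREG | 0666;
                st->st_nlink = 1;
                st->st_size = 0;
        }
        return 0;
}

static int null_create(const char *path, mode_t mode, struct fuse_file_info *fi)
{
        return 0;               /* accept any create, remember nothing */
}

static int null_open(const char *path, struct fuse_file_info *fi)
{
        return 0;
}

static int null_write(const char *path, const char *buf, size_t size,
                      off_t off, struct fuse_file_info *fi)
{
        return size;            /* claim success, discard the data */
}

static int null_read(const char *path, char *buf, size_t size,
                     off_t off, struct fuse_file_info *fi)
{
        return 0;               /* files always read back empty */
}

static int null_truncate(const char *path, off_t size)
{
        return 0;
}

static int null_unlink(const char *path)
{
        return 0;
}

static struct fuse_operations null_ops = {
        .getattr  = null_getattr,
        .create   = null_create,
        .open     = null_open,
        .write    = null_write,
        .read     = null_read,
        .truncate = null_truncate,
        .unlink   = null_unlink,
};

/* Build with: gcc nullfs_sketch.c $(pkg-config fuse --cflags --libs) */
int main(int argc, char *argv[])
{
        return fuse_main(argc, argv, &null_ops, NULL);
}

Note that getattr() here pretends every path already exists, which is one of the semantic questions such a filesystem has to answer.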

-j



2008-02-14 09:46:29

by Andi Kleen

Subject: Re: Is there a "blackhole" /dev/null directory?

Jasper Bryant-Greene <[email protected]> writes:
>
> This could be done fairly trivially with FUSE, and IMHO it is a good use
> for FUSE because, since you're just throwing most data away, performance
> is not a concern.

Q.: How much work would FUSE have to do before the user-space file system
server could decide to ignore the data?
A.: Pretty much all of a cached write, including all the copies and
context switches.

That is because FUSE first has to hand all the data to the server
before it can decide to do nothing, and that is pretty much all (and then
some more) of the cost of a cached write.

So if you want any performance benefit from this (I'm a little sceptical),
FUSE is exactly what you should not use.

The basic problem with the idea is that programs that create temporary
files usually want to read them back at some point too. So if you
throw everything away, things break.

If you just want the writes to not (usually) hit disk, you can use tmpfs,
although I believe at least ext2 (not ext3, unfortunately)
is also reasonably good at not writing out very short-lived files.

-Andi

2008-02-14 12:15:43

by rzryyvzy

Subject: Re: Is there a "blackhole" /dev/null directory?

Jasper Bryant-Greene wrote:
> On Thu, 2008-02-14 at 10:30 +0100, rzryyvzy wrote:
>
>> /dev/null is often very useful, especially when a program insists on saving its data to a file. But some programs like to create differently named temporary files, so /dev/null no longer works.
>>
>> What about a "/dev/null" directory?
>> I mean a "blackhole pseudo-directory" which discards every write.
>>
>> Here is how it could work:
>> mount -t nulldir nulldir /dev/nulldir
>>
>> Now if a program does a creat(2),
>> the file and its fd are created in memory only.
>> If the program then does a write(2) to that fd, the data is discarded and the call pretends to have written the requested number of bytes.
>> When the program finally does a close(2) on the fd, the whole inode is deleted from memory.
>>
>> The directory would be permanently empty except for the inodes that still have open file descriptors, so only inode information would be temporarily kept in this "nulldir tmpfs" directory.
>>
>> Does a way to create such a null directory already exist?
>>
>
> This could be done fairly trivially with FUSE, and IMHO it is a good use
> for FUSE because, since you're just throwing most data away, performance
> is not a concern.
>
Unfortunately, performance is a concern; if it were not, I would simply
write the files to the hard disk and then remove them with a cron job.
But from a development-time point of view, FUSE is a good idea, because
it is possible to write a filesystem quickly in Perl.

--
Best regards,
Mika

2008-02-14 15:00:21

by Jan Engelhardt

Subject: Re: Is there a "blackhole" /dev/null directory?


On Feb 14 2008 10:46, Andi Kleen wrote:
>Jasper Bryant-Greene <[email protected]> writes:
>>
>> This could be done fairly trivially with FUSE, and IMHO it is a good use
>> for FUSE because, since you're just throwing most data away, performance
>> is not a concern.

There is a much more interesting 'problem' with a "/dev/null directory".

Q: Why would you need such a directory?
A: To temporarily fool a program into believing it wrote something.

Q: Should all files disappear? (e.g. "unlink after open")
A: Maybe not, programs may stat() the file right afterwards and
get confused by the "inexistence".

Q: What if a program attempts to mkdir /dev/nullmnt/foo to just
create a file /dev/nullmnt/foo/barfile?
A: /dev/nullmnt/foo must continue to exist or be accepted for a while,
or perhaps for eternity.

Been there, done that, -
http://dev.computergmbh.de/wsvn/misc_kernel/nullfs/trunk/nullfs.c -
and hit that wall of unanswerable questions.

2008-02-14 15:06:51

by linux-os (Dick Johnson)

Subject: Re: Is there a "blackhole" /dev/null directory?


On Thu, 14 Feb 2008, Mika Lawando wrote:

> Jasper Bryant-Greene wrote:
>> On Thu, 2008-02-14 at 10:30 +0100, rzryyvzy wrote:
>>
>>> /dev/null is often very useful, especially when a program insists on saving its data to a file. But some programs like to create differently named temporary files, so /dev/null no longer works.
>>>
>>> What about a "/dev/null" directory?
>>> I mean a "blackhole pseudo-directory" which discards every write.
>>>
>>> Here is how it could work:
>>> mount -t nulldir nulldir /dev/nulldir
>>>
>>> Now if a program does a creat(2),
>>> the file and its fd are created in memory only.
>>> If the program then does a write(2) to that fd, the data is discarded and the call pretends to have written the requested number of bytes.
>>> When the program finally does a close(2) on the fd, the whole inode is deleted from memory.
>>>
>>> The directory would be permanently empty except for the inodes that still have open file descriptors, so only inode information would be temporarily kept in this "nulldir tmpfs" directory.
>>>
>>> Does a way to create such a null directory already exist?
>>>
>>
>> This could be done fairly trivially with FUSE, and IMHO it is a good use
>> for FUSE because, since you're just throwing most data away, performance
>> is not a concern.
>>
> Unfortunately, performance is a concern; if it were not, I would simply
> write the files to the hard disk and then remove them with a cron job.
> But from a development-time point of view, FUSE is a good idea, because
> it is possible to write a filesystem quickly in Perl.
>
> --
> Best regards,
> Mika
> --

Creating a null directory wouldn't work because a directory
is just a link used to find a file. The actual file gets written
through the file descriptor, without any reference whatsoever
to the path. If you have root privileges, you can use
`mknod tempfile c 1 3` to create a null device file with any
name you want. Unfortunately, somebody decided that
you need root privileges to create device nodes with mknod, so
ordinary users cannot do this.
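
For completeness, the C equivalent of that command is a single mknod(2)
call (sketch only; the file name is just an example and, as said above,
this needs root):

#include <stdio.h>
#include <sys/stat.h>
#include <sys/sysmacros.h>      /* makedev() */

int main(void)
{
        /* character device, major 1, minor 3: the null device */
        if (mknod("tempfile", S_IFCHR | 0666, makedev(1, 3)) != 0) {
                perror("mknod");
                return 1;
        }
        return 0;
}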


Cheers,
Dick Johnson
Penguin : Linux version 2.6.22.1 on an i686 machine (5588.28 BogoMips).
My book : http://www.AbominableFirebug.com/
_



2008-02-14 15:20:39

by Hans J. Koch

Subject: Re: Is there a "blackhole" /dev/null directory?

On Thu, 14 Feb 2008 16:00:06 +0100 (CET),
Jan Engelhardt <[email protected]> wrote:

>
> On Feb 14 2008 10:46, Andi Kleen wrote:
> >Jasper Bryant-Greene <[email protected]> writes:
> >>
> >> This could be done fairly trivially with FUSE, and IMHO it is a good
> >> use for FUSE because, since you're just throwing most data away,
> >> performance is not a concern.
>
> There is a much more interesting 'problem' with a "/dev/null
> directory".
>
> Q: Why would you need such a directory?
> A: To temporarily fool a program into believing it wrote something.
>
> Q: Should all files disappear? (e.g. "unlink after open")
> A: Maybe not, programs may stat() the file right afterwards and
> get confused by the "inexistence".
>
> Q: What if a program attempts to mkdir /dev/nullmnt/foo to just
> create a file /dev/nullmnt/foo/barfile?
> A: /dev/nullmnt/foo must continue to exist or be accepted for a while,
> or perhaps for eternity.

Well, the problem seems to be that a "directory" is not just data but
also contains metadata. While it's easy to write data to /dev/null, you
cannot simply discard metadata associated with a directory. So, such a
"/dev/null-directory" would have to remember metadata (at least all
created filenames including subdirectories) in the same way as other
filesystems do. Only file _content_ can be discarded.
To be honest, I still cannot see many sensible use cases for that...

Thanks,
Hans

2008-02-14 15:23:58

by Jan Engelhardt

Subject: Re: Is there a "blackhole" /dev/null directory?


On Feb 14 2008 16:19, Hans-Jürgen Koch wrote:
>>
>> Q: What if a program attempts to mkdir /dev/nullmnt/foo to just
>> create a file /dev/nullmnt/foo/barfile?
>> A: /dev/nullmnt/foo must continue to exist or be accepted for a while,
>> or perhaps for eternity.
>
>Well, the problem seems to be that a "directory" is not just data but
>also contains metadata. While it's easy to write data to /dev/null, you
>cannot simply discard metadata associated with a directory. So, such a
>"/dev/null-directory" would have to remember metadata (at least all
>created filenames including subdirectories) in the same way as other
>filesystems do. Only file _content_ can be discarded.

Not even that. Suppose a userspace program (whose output you'd like
to discard) does:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        /* O_RDWR (not O_WRONLY) so that the data can be read back below */
        int fd = open("/nullmnt/foo.txt", O_RDWR | O_CREAT | O_EXCL, 0644);

        /* write lots of nonsensical data that we don't need anyway */
        write(fd, "Hello Wor(l)d", 13);

        if (lseek(fd, 0, SEEK_SET) < 0) {
                /* should not happen */
                fprintf(stderr, "Huh, did we write to a pipe or cdev?\n");
                abort();
        }

        /* verify */
        char buf[13];
        read(fd, buf, 13);
        if (memcmp(buf, "Hello Wor(l)d", 13) != 0)
                fprintf(stderr, "Aïe, disk corruption!\n");
        return 0;
}

>To be honest, I still cannot see many sensible use cases for that...

I agree.

2008-02-14 15:30:30

by Hans J. Koch

Subject: Re: Is there a "blackhole" /dev/null directory?

On Thu, 14 Feb 2008 16:23:37 +0100 (CET),
Jan Engelhardt <[email protected]> wrote:

>
> On Feb 14 2008 16:19, Hans-Jürgen Koch wrote:
> >>
> >> Q: What if a program attempts to mkdir /dev/nullmnt/foo to just
> >> create a file /dev/nullmnt/foo/barfile?
> >> A: /dev/nullmnt/foo must continue to exist or be accepted for a
> >> while, or perhaps for eternity.
> >
> >Well, the problem seems to be that a "directory" is not just data but
> >also contains metadata. While it's easy to write data to /dev/null,
> >you cannot simply discard metadata associated with a directory. So,
> >such a "/dev/null-directory" would have to remember metadata (at
> >least all created filenames including subdirectories) in the same
> >way as other filesystems do. Only file _content_ can be discarded.
>
> Not even that. Suppose a userspace program (whose output you'd like
> to discard) does:

[...]

Well, if an application wants to read back the data it wrote, you can never
use such a thing, not even in the simple cases where the
existing /dev/null would otherwise be enough.

> }
>
> >To be honest, I still cannot see many sensible use cases for that...
>
> I agree.

Good :-)

Hans

2008-02-14 17:16:47

by Bodo Eggert

Subject: Re: Is there a "blackhole" /dev/null directory?

rzryyvzy <[email protected]> wrote:

> /dev/null is often very useful, especially when a program insists on saving
> its data to a file. But some programs like to create differently named
> temporary files, so /dev/null no longer works.
>
> What about a "/dev/null" directory?
> I mean a "blackhole pseudo-directory" which discards every write.
>
> Here is how it could work:
> mount -t nulldir nulldir /dev/nulldir
>
> Now if a program does a creat(2),
> the file and its fd are created in memory only.
> If the program then does a write(2) to that fd, the data is discarded and
> the call pretends to have written the requested number of bytes.
> When the program finally does a close(2) on the fd, the whole inode is
> deleted from memory.
>
> The directory would be permanently empty except for the inodes that still
> have open file descriptors, so only inode information would be temporarily
> kept in this "nulldir tmpfs" directory.
>
> Does a way to create such a null directory already exist?

Please try the patch below. It will add an autounlink option to
tmpfs, which should automatically get rid of non-referenced files.

diff -X dontdiff -dpruN linux-2.6.24.pure/include/linux/shmem_fs.h linux-2.6.24.autounlink/include/linux/shmem_fs.h
--- linux-2.6.24.pure/include/linux/shmem_fs.h	2006-11-29 22:57:37.000000000 +0100
+++ linux-2.6.24.autounlink/include/linux/shmem_fs.h	2008-02-14 15:35:01.000000000 +0100
@@ -30,11 +30,14 @@ struct shmem_sb_info {
unsigned long free_blocks; /* How many are left for allocation */
unsigned long max_inodes; /* How many inodes are allowed */
unsigned long free_inodes; /* How many are left for allocation */
- int policy; /* Default NUMA memory alloc policy */
- nodemask_t policy_nodes; /* nodemask for preferred and bind */
+ unsigned int flags;
+ int policy; /* Default NUMA memory alloc policy */
+ nodemask_t policy_nodes; /* nodemask for preferred and bind */
spinlock_t stat_lock;
};

+#define TMPFS_FL_AUTOREMOVE 1
+
static inline struct shmem_inode_info *SHMEM_I(struct inode *inode)
{
return container_of(inode, struct shmem_inode_info, vfs_inode);
diff -X dontdiff -dpruN linux-2.6.24.pure/mm/shmem.c linux-2.6.24.autounlink/mm/shmem.c
--- linux-2.6.24.pure/mm/shmem.c	2008-01-25 15:09:39.000000000 +0100
+++ linux-2.6.24.autounlink/mm/shmem.c	2008-02-14 18:00:54.000000000 +0100
@@ -1747,31 +1747,41 @@ static int
shmem_mknod(struct inode *dir, struct dentry *dentry, int mode, dev_t dev)
{
struct inode *inode = shmem_get_inode(dir->i_sb, mode, dev);
+ struct shmem_sb_info *sbinfo = SHMEM_SB(dir->i_sb);
int error = -ENOSPC;

- if (inode) {
- error = security_inode_init_security(inode, dir, NULL, NULL,
- NULL);
- if (error) {
- if (error != -EOPNOTSUPP) {
- iput(inode);
- return error;
- }
- }
- error = shmem_acl_init(inode, dir);
- if (error) {
+ if (!inode)
+ return error;
+
+ error = security_inode_init_security(inode, dir, NULL, NULL,
+ NULL);
+ if (error) {
+ if (error != -EOPNOTSUPP) {
iput(inode);
return error;
}
- if (dir->i_mode & S_ISGID) {
- inode->i_gid = dir->i_gid;
- if (S_ISDIR(mode))
- inode->i_mode |= S_ISGID;
- }
- dir->i_size += BOGO_DIRENT_SIZE;
- dir->i_ctime = dir->i_mtime = CURRENT_TIME;
- d_instantiate(dentry, inode);
+ }
+ error = shmem_acl_init(inode, dir);
+ if (error) {
+ iput(inode);
+ return error;
+ }
+ if (dir->i_mode & S_ISGID) {
+ inode->i_gid = dir->i_gid;
+ if (S_ISDIR(mode))
+ inode->i_mode |= S_ISGID;
+ }
+
+ dir->i_size += BOGO_DIRENT_SIZE;
+ dir->i_ctime = dir->i_mtime = CURRENT_TIME;
+ d_instantiate(dentry, inode);
+ if ( S_ISDIR(mode)
+ || !(sbinfo->flags & TMPFS_FL_AUTOREMOVE))
+ {
dget(dentry); /* Extra count - pin the dentry in core */
+ } else {
+ dir->i_size -= BOGO_DIRENT_SIZE;
+ drop_nlink(inode);
}
return error;
}
@@ -1800,6 +1810,11 @@ static int shmem_link(struct dentry *old
struct inode *inode = old_dentry->d_inode;
struct shmem_sb_info *sbinfo = SHMEM_SB(inode->i_sb);

+ /* In auto-unlink mode, the newly created link would be unlinked
+ immediately. We don't need to do anything here. */
+ if (sbinfo->flags & TMPFS_FL_AUTOREMOVE)
+ return 0;
+
/*
* No ordinary (disk based) filesystem counts links as inodes;
* but each new link needs a new dentry, pinning lowmem, and
@@ -2095,6 +2110,7 @@ static const struct export_operations sh

static int shmem_parse_options(char *options, int *mode, uid_t *uid,
gid_t *gid, unsigned long *blocks, unsigned long *inodes,
+ unsigned int * flags,
int *policy, nodemask_t *policy_nodes)
{
char *this_char, *value, *rest;
@@ -2120,8 +2136,18 @@ static int shmem_parse_options(char *opt
continue;
if ((value = strchr(this_char,'=')) != NULL) {
*value++ = 0;
+
+ /* These options don't take arguments: */
+ } else if (!strcmp(this_char,"autounlink")) {
+ *flags |= TMPFS_FL_AUTOREMOVE;
+ continue;
+ } else if (!strcmp(this_char,"noautounlink")) {
+ *flags &= ~TMPFS_FL_AUTOREMOVE;
+ continue;
+
+ /* All other options need an argument */
} else {
- printk(KERN_ERR
+ printk(KERN_ERR
"tmpfs: No value for mount option '%s'\n",
this_char);
return 1;
@@ -2192,10 +2218,12 @@ static int shmem_remount_fs(struct super
nodemask_t policy_nodes = sbinfo->policy_nodes;
unsigned long blocks;
unsigned long inodes;
+ unsigned int sbflags;
int error = -EINVAL;

+ sbflags = sbinfo->flags;
if (shmem_parse_options(data, NULL, NULL, NULL, &max_blocks,
- &max_inodes, &policy, &policy_nodes))
+ &max_inodes, &sbflags, &policy, &policy_nodes))
return error;

spin_lock(&sbinfo->stat_lock);
@@ -2221,6 +2249,7 @@ static int shmem_remount_fs(struct super
sbinfo->free_blocks = max_blocks - blocks;
sbinfo->max_inodes = max_inodes;
sbinfo->free_inodes = max_inodes - inodes;
+ sbinfo->flags = sbflags;
sbinfo->policy = policy;
sbinfo->policy_nodes = policy_nodes;
out:
@@ -2247,6 +2276,7 @@ static int shmem_fill_super(struct super
struct shmem_sb_info *sbinfo;
unsigned long blocks = 0;
unsigned long inodes = 0;
+ unsigned int flags = 0;
int policy = MPOL_DEFAULT;
nodemask_t policy_nodes = node_states[N_HIGH_MEMORY];

@@ -2262,7 +2292,7 @@ static int shmem_fill_super(struct super
if (inodes > blocks)
inodes = blocks;
if (shmem_parse_options(data, &mode, &uid, &gid, &blocks,
- &inodes, &policy, &policy_nodes))
+ &inodes, &flags, &policy, &policy_nodes))
return -EINVAL;
}
sb->s_export_op = &shmem_export_ops;
@@ -2281,6 +2311,7 @@ static int shmem_fill_super(struct super
sbinfo->free_blocks = blocks;
sbinfo->max_inodes = inodes;
sbinfo->free_inodes = inodes;
+ sbinfo->flags = flags;
sbinfo->policy = policy;
sbinfo->policy_nodes = policy_nodes;


2008-02-15 01:15:43

by Bodo Eggert

Subject: Re: Is there a "blackhole" /dev/null directory?

Hans-Jürgen Koch <[email protected]> wrote:
> Jan Engelhardt <[email protected]> wrote:

>> There is a much more interesting 'problem' with a "/dev/null
>> directory".
>>
>> Q: Why would you need such a directory?
>> A: To temporarily fool a program into believing it wrote something.
>>
>> Q: Should all files disappear? (e.g. "unlink after open")
>> A: Maybe not, programs may stat() the file right afterwards and
>> get confused by the "inexistence".
>>
>> Q: What if a program attempts to mkdir /dev/nullmnt/foo to just
>> create a file /dev/nullmnt/foo/barfile?
>> A: /dev/nullmnt/foo must continue to exist or be accepted for a while,
>> or perhaps for eternity.
>
> Well, the problem seems to be that a "directory" is not just data but
> also contains metadata. While it's easy to write data to /dev/null, you
> cannot simply discard metadata associated with a directory. So, such a
> "/dev/null-directory" would have to remember metadata (at least all
> created filenames including subdirectories) in the same way as other
> filesystems do. Only file _content_ can be discarded.
> To be honest, I still cannot see many sensible use cases for that...

Since both of you seem to know about the (possible) problems, maybe you can
take a look at my patch.

http://7eggert.dyndns.org:8080/tmp/autounlink.patch
(Not inline, because it would be duplicate in this thread.)
(Yes, this patch needs some cleanup. I just did checkpatch.)

First I thought I'd modify tmpfs to delete the file on O_CREAT, but it
turned out tmpfs increases the dentry count in order to prevent the
delete-on-close effect. Skipping that step was almost enough; I also had to
prevent link() from pinning these files.

Some loops creating files or linking them did not show any decrease in
available memory, and to the best of my knowledge, I did not introduce new
memory leaks. And the best thing is: it's only 150 bytes of code. Not bad
for an additional mount flag, is it?
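
For anyone who wants to try it from C, mounting with the new flag would look
roughly like this (sketch only: /mnt/null is a made-up mount point, and the
autounlink option exists only with the patch applied):

#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
        /* equivalent to: mount -t tmpfs -o autounlink none /mnt/null
         * (needs CAP_SYS_ADMIN, i.e. normally root) */
        if (mount("none", "/mnt/null", "tmpfs", 0, "autounlink") != 0) {
                perror("mount");
                return 1;
        }
        return 0;
}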

2008-02-15 19:24:53

by Bill Davidsen

Subject: Re: Is there a "blackhole" /dev/null directory?

Jan Engelhardt wrote:
> On Feb 14 2008 10:46, Andi Kleen wrote:
>> Jasper Bryant-Greene <[email protected]> writes:
>>> This could be done fairly trivially with FUSE, and IMHO it is a good use
>>> for FUSE because, since you're just throwing most data away, performance
>>> is not a concern.
>
> There is a much more interesting 'problem' with a "/dev/null directory".
>
> Q: Why would you need such a directory?
> A: To temporarily fool a program into believing it wrote something.

Also: to let a program believe it is creating files which hold the written
data. Otherwise plain /dev/null would probably be the solution.
>
> Q: Should all files disappear? (e.g. "unlink after open")
> A: Maybe not, programs may stat() the file right afterwards and
> get confused by the "inexistence".

I think what is going to happen is that created files behave as if they
were the result of a mknod producing a /dev/null clone.
>
> Q: What if a program attempts to mkdir /dev/nullmnt/foo to just
> create a file /dev/nullmnt/foo/barfile?
> A: /dev/nullmnt/foo must continue to exist or be accepted for a while,
> or perhaps for eternity.

The directory structure can persist; it's the writing of data that can
be avoided.

Real example:

Consider a program which reads log files and prepares a whole raft of reports
in a specified directory. If you only care about the summary (stdout) and the
exception notices (stderr), a nulldir would avoid the disk space and I/O load
of writing out the full analysis.

Yes, if this had been an original program requirement, it would or should have
been a feature. Real-world cases sometimes use tools in creative ways.

--
Bill Davidsen <[email protected]>
"We have more to fear from the bungling of the incompetent than from
the machinations of the wicked." - from Slashdot