commit 15953580e79b ("[JFFS2] Improve getdents vs. f_pos handling on NOR flash.")
was introduced to resolve a problem where 'rm -r' could not remove all files:
http://lists.infradead.org/pipermail/linux-mtd/2007-October/019658.html
However, it can cause the following issues:
1. 'deletion' dirents stay in the f->dents list forever, wasting memory.
For example:
There is a file named 'file1'. Then we rename it:
mv file1 file2;
mv file2 file3;
...
mv file999999 file1000000
When CONFIG_JFFS2_SUMMARY is not set, the dirents for file1~file1000000
all stay in the f->dents list.
2. Since the list becomes longer and longer, more CPU time is spent
traversing it.
After reverting the commit, we tested 'rm -r'; it removes all files
and everything seems OK!
Signed-off-by: Yufen Yu <[email protected]>
---
fs/jffs2/write.c | 33 +++++++++++++++++----------------
1 file changed, 17 insertions(+), 16 deletions(-)
diff --git a/fs/jffs2/write.c b/fs/jffs2/write.c
index cda9a361368e..1deed35beb50 100644
--- a/fs/jffs2/write.c
+++ b/fs/jffs2/write.c
@@ -598,31 +598,32 @@ int jffs2_do_unlink(struct jffs2_sb_info *c, struct jffs2_inode_info *dir_f,
jffs2_add_fd_to_list(c, fd, &dir_f->dents);
mutex_unlock(&dir_f->sem);
} else {
+ struct jffs2_full_dirent **prev = &dir_f->dents;
uint32_t nhash = full_name_hash(NULL, name, namelen);
- fd = dir_f->dents;
/* We don't actually want to reserve any space, but we do
want to be holding the alloc_sem when we write to flash */
mutex_lock(&c->alloc_sem);
mutex_lock(&dir_f->sem);
- for (fd = dir_f->dents; fd; fd = fd->next) {
- if (fd->nhash == nhash &&
- !memcmp(fd->name, name, namelen) &&
- !fd->name[namelen]) {
-
- jffs2_dbg(1, "Marking old dirent node (ino #%u) @%08x obsolete\n",
- fd->ino, ref_offset(fd->raw));
- jffs2_mark_node_obsolete(c, fd->raw);
- /* We don't want to remove it from the list immediately,
- because that screws up getdents()/seek() semantics even
- more than they're screwed already. Turn it into a
- node-less deletion dirent instead -- a placeholder */
- fd->raw = NULL;
- fd->ino = 0;
- break;
+ while ((*prev) && (*prev)->nhash <= nhash) {
+ if ((*prev)->nhash == nhash &&
+ !memcmp((*prev)->name, name, namelen) &&
+ !(*prev)->name[namelen]) {
+
+ struct jffs2_full_dirent *this = *prev;
+
+ jffs2_dbg(1, "Marking old dirent node (ino #%u) @%08x obsolete\n",
+ this->ino, ref_offset(this->raw));
+ *prev = this->next;
+ jffs2_mark_node_obsolete(c, this->raw);
+ jffs2_free_full_dirent(this);
+
+ break;
}
+ prev = &((*prev)->next);
}
+
mutex_unlock(&dir_f->sem);
}
--
2.13.6
On Fri, 2018-03-16 at 19:05 +0800, Yufen Yu wrote:
>
> commit 15953580e79b ("[JFFS2] Improve getdents vs. f_pos handling on NOR flash.")
> is introduced to resolve 'rm -r', which cannot remove all files:
> http://lists.infradead.org/pipermail/linux-mtd/2007-October/019658.html
>
> However, it can cause the following issues:
>
> 1. 'deletion' dirents is alway in the f->dents list, wasting memory
> resource. For example:
> There is a file named 'file1'. Then we rename it:
> mv file1 file2;
> mv file2 file3;
> ...
> mv file99999 file1000000
>
> When CONFIG_JFFS2_SUMMARY is not set, file1~file1000000
> always in the f->dents list.
>
> 2. Since the list become longer and longer, more CPU time is used
> to traverse it.
>
> After reverting the commit, we test 'rm -r', which can remove all
> files, and all seems OK!
UHH, this is mine (and David's work from 2007)!
I cannot remember any details this long afterwards, but I guess you cannot just
revert that part, as it triggers some other bug, David?
Jocke
>
> Signed-off-by: Yufen Yu <[email protected]>
> ---
> fs/jffs2/write.c | 33 +++++++++++++++++----------------
> 1 file changed, 17 insertions(+), 16 deletions(-)
>
> diff --git a/fs/jffs2/write.c b/fs/jffs2/write.c
> index cda9a361368e..1deed35beb50 100644
> --- a/fs/jffs2/write.c
> +++ b/fs/jffs2/write.c
> @@ -598,31 +598,32 @@ int jffs2_do_unlink(struct jffs2_sb_info *c, struct jffs2_inode_info *dir_f,
> jffs2_add_fd_to_list(c, fd, &dir_f->dents);
> mutex_unlock(&dir_f->sem);
> } else {
> + struct jffs2_full_dirent **prev = &dir_f->dents;
> uint32_t nhash = full_name_hash(NULL, name, namelen);
>
> - fd = dir_f->dents;
> /* We don't actually want to reserve any space, but we do
> want to be holding the alloc_sem when we write to flash */
> mutex_lock(&c->alloc_sem);
> mutex_lock(&dir_f->sem);
>
> - for (fd = dir_f->dents; fd; fd = fd->next) {
> - if (fd->nhash == nhash &&
> - !memcmp(fd->name, name, namelen) &&
> - !fd->name[namelen]) {
> -
> - jffs2_dbg(1, "Marking old dirent node (ino #%u) @%08x obsolete\n",
> - fd->ino, ref_offset(fd->raw));
> - jffs2_mark_node_obsolete(c, fd->raw);
> - /* We don't want to remove it from the list immediately,
> - because that screws up getdents()/seek() semantics even
> - more than they're screwed already. Turn it into a
> - node-less deletion dirent instead -- a placeholder */
> - fd->raw = NULL;
> - fd->ino = 0;
> - break;
> + while ((*prev) && (*prev)->nhash <= nhash) {
> + if ((*prev)->nhash == nhash &&
> + !memcmp((*prev)->name, name, namelen) &&
> + !(*prev)->name[namelen]) {
> +
> + struct jffs2_full_dirent *this = *prev;
> +
> + jffs2_dbg(1, "Marking old dirent node (ino #%u) @%08x obsolete\n",
> + this->ino, ref_offset(this->raw));
> + *prev = this->next;
> + jffs2_mark_node_obsolete(c, this->raw);
> + jffs2_free_full_dirent(this);
> +
> + break;
> }
> + prev = &((*prev)->next);
> }
> +
> mutex_unlock(&dir_f->sem);
> }
>
> --
> 2.13.6
>
On Fri, 2018-03-16 at 12:39 +0000, Joakim Tjernlund wrote:
>
> > After reverting the commit, we test 'rm -r', which can remove all
> > files, and all seems OK!
>
> UHH, this is mine (and Davids work from 2007)!
> I cannot remember any details this long afterwards but I guess you cannot just
> revert that part as it triggers some other bug, David?
Right, the issue was with f_pos in the directory.
The 'rm' we were testing with at the time would delete a bunch of
directory entries, then continue with its readdir() to work out what
else to delete. But when we were handling f_pos on directories merely
as the position on the list, and when we were *deleting* things from
that list as we went, some dirents ended up moving so that they were
*before* the position that 'rm' had got to with its readdir().
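To make that failure mode concrete, here is a rough sketch (made-up names,
not the real fs/jffs2/dir.c code) of a readdir that treats f_pos as a plain
index into the dirent list; the comments mark where the deletions bite:

/* Illustrative sketch only -- hypothetical names, not fs/jffs2/dir.c.
 * f_pos is treated as "number of list entries already emitted". */
#include <stdint.h>
#include <stdio.h>

struct fake_dirent {
	struct fake_dirent *next;
	uint32_t ino;            /* 0 would mean a node-less deletion dirent */
	const char *name;
};

static void sketch_readdir(struct fake_dirent *dents, long *f_pos)
{
	struct fake_dirent *fd = dents;
	long curofs = 0;

	/* Skip the entries a previous getdents() call already returned. */
	while (fd && curofs < *f_pos) {
		curofs++;
		fd = fd->next;
	}
	/* If entries before *f_pos were removed from the list between
	 * calls, everything behind them shifts towards the head, so the
	 * skip above jumps over names the caller has never seen -- which
	 * is exactly how 'rm -r' ended up missing files. */
	for (; fd; fd = fd->next, (*f_pos)++)
		if (fd->ino)
			printf("%s\n", fd->name);
}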
But... the list we're traversing is *already* ordered by CRC, and that
could be a much saner thing to use as f_pos. We just have to make sure
we cope with hash collisions. Shifting left by 4 bits and using the low
4 bits would allow us to cope with 16 names with the same hash... but
I'm not sure that's good enough.
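For what it's worth, a rough sketch of that encoding (made-up names, nothing
resembling final code) might look like the following; it relies only on the
dents list already being kept in hash order:

/* Sketch only -- hypothetical names, not existing jffs2 helpers.
 * f_pos = (name hash << 4) | collision slot, so up to 16 names that
 * share a hash can still be told apart. */
#include <stdint.h>

struct hashed_dirent {
	struct hashed_dirent *next;	/* list kept ordered by nhash */
	uint32_t nhash;
	uint32_t ino;
};

static inline uint64_t dirent_pos(uint32_t nhash, unsigned int slot)
{
	return ((uint64_t)nhash << 4) | (slot & 0xf);
}

/* Resume a readdir at f_pos: skip to the first entry with the target
 * hash, then past the collision slots already emitted.  Entries that
 * were unlinked in between are simply gone; nothing that sorts after
 * f_pos can move in front of it. */
static struct hashed_dirent *resume_at(struct hashed_dirent *d, uint64_t f_pos)
{
	uint32_t want = (uint32_t)(f_pos >> 4);
	unsigned int skip = (unsigned int)(f_pos & 0xf);

	while (d && d->nhash < want)
		d = d->next;
	while (d && d->nhash == want && skip--)
		d = d->next;
	return d;
}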
Hi, David
On 2018/3/16 20:52, David Woodhouse wrote:
> On Fri, 2018-03-16 at 12:39 +0000, Joakim Tjernlund wrote:
>>> After reverting the commit, we test 'rm -r', which can remove all
>>> files, and all seems OK!
>> UHH, this is mine (and Davids work from 2007)!
>> I cannot remember any details this long afterwards but I guess you cannot just
>> revert that part as it triggers some other bug, David?
> Right, the issue was with f_pos in the directory.
>
> The 'rm' we were testing with at the time would delete a bunch of
> directory entries, then continue with its readdir() to work out what
> else to delete. But when we were handling f_pos on directories merely
> as the position on the list, and when we were *deleting* things from
> that list as we went, some dirents ended up moving so that they were
> *before* the position that 'rm' had got to with its readdir().
Thanks for explaining in detail. And you are right.
We have a directory containing 2000 files. After running getdents and unlink
as follows:
/* d is a struct linux_dirent *, laid out as in the getdents(2) man page */
for ( ; ; ) {
    nread = syscall(SYS_getdents, fd, buf, BUF_SIZE);  /* BUF_SIZE = 1024 */
    if (nread <= 0)
        break;
    for (bpos = 0; bpos < nread; bpos += d->d_reclen) {
        d = (struct linux_dirent *)(buf + bpos);
        unlink(d->d_name);
    }
}
we found there are still 990 files left!
> But... the list we're traversing is *already* ordered by CRC, and that
> could be a much saner thing to use as f_pos. We just have to make sure
That's true. Our experiments also show that it can speed up the traversal.
However, when the list is very long (e.g. 1000000 entries), the number of
traversal steps can still reach thousands.
What's more, a lot of memory is occupied by the deletion dirents, and the
usage keeps growing.
I have no idea how to improve this. Do you have any good ideas?
Thanks,
yufen
> we cope with hash collisions. Shifting left by 4 bits and using the low
> 4 bits would allow us to cope with 16 names with the same hash... but
> I'm not sure that's good enough.