kmap() is being deprecated in favor of kmap_local_page().

There are two main problems with kmap(): (1) it comes with an overhead, as the
mapping space is restricted and protected by a global lock for synchronization,
and (2) it also requires global TLB invalidation when the kmap's pool wraps,
and it might block until a slot becomes available when the mapping space is
fully utilized.

With kmap_local_page() the mappings are per thread and CPU local; they can take
page faults and can be called from any context (including interrupts). It is
faster than kmap() in kernels with HIGHMEM enabled. Furthermore, tasks can be
preempted and, when they are scheduled to run again, the kernel virtual
addresses are restored and are still valid.

Since the use of kmap_local_page() in bitmap.c is safe everywhere, it should be
preferred.

Therefore, replace kmap() with kmap_local_page() in bitmap.c.
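
For reference, the conversion follows the usual local mapping pattern: the page
is mapped, accessed, and unmapped within the same scope. A minimal sketch of
that pattern is shown below; the helper and its name are hypothetical and are
not taken from bitmap.c:

#include <linux/highmem.h>

/*
 * Illustrative sketch only: a hypothetical helper, not part of bitmap.c.
 * The mapping is created, used, and destroyed within a single scope, so
 * the thread-local mapping of kmap_local_page() is sufficient.
 */
static u32 read_first_bitmap_word(struct page *page)
{
	__be32 *pptr;
	u32 first;

	pptr = kmap_local_page(page);	/* was: pptr = kmap(page); */
	first = be32_to_cpu(*pptr);	/* access only while mapped */
	kunmap_local(pptr);		/* was: kunmap(page); */

	return first;
}

Note that kunmap_local() takes the mapped address rather than the struct page,
which is why the kunmap(page) calls become kunmap_local(pptr).
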
Suggested-by: Ira Weiny <[email protected]>
Signed-off-by: Fabio M. De Francesco <[email protected]>
---
fs/hfsplus/bitmap.c | 18 +++++++++---------
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/fs/hfsplus/bitmap.c b/fs/hfsplus/bitmap.c
index cebce0cfe340..0848b053b365 100644
--- a/fs/hfsplus/bitmap.c
+++ b/fs/hfsplus/bitmap.c
@@ -39,7 +39,7 @@ int hfsplus_block_allocate(struct super_block *sb, u32 size,
start = size;
goto out;
}
- pptr = kmap(page);
+ pptr = kmap_local_page(page);
curr = pptr + (offset & (PAGE_CACHE_BITS - 1)) / 32;
i = offset % 32;
offset &= ~(PAGE_CACHE_BITS - 1);
@@ -74,7 +74,7 @@ int hfsplus_block_allocate(struct super_block *sb, u32 size,
}
curr++;
}
- kunmap(page);
+ kunmap_local(pptr);
offset += PAGE_CACHE_BITS;
if (offset >= size)
break;
@@ -127,7 +127,7 @@ int hfsplus_block_allocate(struct super_block *sb, u32 size,
len -= 32;
}
set_page_dirty(page);
- kunmap(page);
+ kunmap_local(pptr);
offset += PAGE_CACHE_BITS;
page = read_mapping_page(mapping, offset / PAGE_CACHE_BITS,
NULL);
@@ -135,7 +135,7 @@ int hfsplus_block_allocate(struct super_block *sb, u32 size,
start = size;
goto out;
}
- pptr = kmap(page);
+ pptr = kmap_local_page(page);
curr = pptr;
end = pptr + PAGE_CACHE_BITS / 32;
}
@@ -151,7 +151,7 @@ int hfsplus_block_allocate(struct super_block *sb, u32 size,
done:
*curr = cpu_to_be32(n);
set_page_dirty(page);
- kunmap(page);
+ kunmap_local(pptr);
*max = offset + (curr - pptr) * 32 + i - start;
sbi->free_blocks -= *max;
hfsplus_mark_mdb_dirty(sb);
@@ -185,7 +185,7 @@ int hfsplus_block_free(struct super_block *sb, u32 offset, u32 count)
page = read_mapping_page(mapping, pnr, NULL);
if (IS_ERR(page))
goto kaboom;
- pptr = kmap(page);
+ pptr = kmap_local_page(page);
curr = pptr + (offset & (PAGE_CACHE_BITS - 1)) / 32;
end = pptr + PAGE_CACHE_BITS / 32;
len = count;
@@ -215,11 +215,11 @@ int hfsplus_block_free(struct super_block *sb, u32 offset, u32 count)
if (!count)
break;
set_page_dirty(page);
- kunmap(page);
+ kunmap_local(pptr);
page = read_mapping_page(mapping, ++pnr, NULL);
if (IS_ERR(page))
goto kaboom;
- pptr = kmap(page);
+ pptr = kmap_local_page(page);
curr = pptr;
end = pptr + PAGE_CACHE_BITS / 32;
}
@@ -231,7 +231,7 @@ int hfsplus_block_free(struct super_block *sb, u32 offset, u32 count)
}
out:
set_page_dirty(page);
- kunmap(page);
+ kunmap_local(pptr);
sbi->free_blocks += len;
hfsplus_mark_mdb_dirty(sb);
mutex_unlock(&sbi->alloc_mutex);
--
2.37.1
> On Jul 24, 2022, at 1:50 PM, Fabio M. De Francesco <[email protected]> wrote:
>
> kmap() is being deprecated in favor of kmap_local_page().
>
> There are two main problems with kmap(): (1) it comes with an overhead, as
> the mapping space is restricted and protected by a global lock for
> synchronization, and (2) it also requires global TLB invalidation when the
> kmap's pool wraps, and it might block until a slot becomes available when
> the mapping space is fully utilized.
>
> With kmap_local_page() the mappings are per thread and CPU local; they can
> take page faults and can be called from any context (including interrupts).
> It is faster than kmap() in kernels with HIGHMEM enabled. Furthermore, tasks
> can be preempted and, when they are scheduled to run again, the kernel
> virtual addresses are restored and are still valid.
>
> Since the use of kmap_local_page() in bitmap.c is safe everywhere, it should
> be preferred.
>
> Therefore, replace kmap() with kmap_local_page() in bitmap.c.
>
Looks good. Maybe, it makes sense to combine all kmap() related modifications in HFS+ into
one patchset?
Reviewed-by: Viacheslav Dubeyko <[email protected]>
Thanks,
Slava.
> [snip]
On Mon, Jul 25, 2022 at 10:17:13AM -0700, Viacheslav Dubeyko wrote:
> Looks good. Maybe, it makes sense to combine all kmap() related modifications in HFS+ into
> one patchset?
For bisection, I'd think it best to leave them separate?
On Monday, 25 July 2022 19:17:13 CEST Viacheslav Dubeyko wrote:
>
> > On Jul 24, 2022, at 1:50 PM, Fabio M. De Francesco <[email protected]> wrote:
> >
> > [snip]
>
> Looks good. Maybe, it makes sense to combine all kmap() related modifications in HFS+ into
> one patchset?
>
> Reviewed-by: Viacheslav Dubeyko <[email protected]>
Thanks for your reviews of this and of the other patch to bnode.c.
Actually, I started with the first file I came across (bnode.c) because
maintainers don't need to care about any special ordering when applying the
patches, since each of them is self-contained.
This is why I haven't thought of making a series of them.
Currently only one file is still left with some kmap() call sites. I'll work
on that in the next few days.
Again thanks,
Fabio
> Thanks,
> Slava.
>
> > Suggested-by: Ira Weiny <[email protected]>
> > Signed-off-by: Fabio M. De Francesco <[email protected]>
> > ---
> > fs/hfsplus/bitmap.c | 18 +++++++++---------
> > 1 file changed, 9 insertions(+), 9 deletions(-)
[snip]
> On Jul 25, 2022, at 10:54 AM, Matthew Wilcox <[email protected]> wrote:
>
> On Mon, Jul 25, 2022 at 10:17:13AM -0700, Viacheslav Dubeyko wrote:
>> Looks good. Maybe, it makes sense to combine all kmap() related modifications in HFS+ into
>> one patchset?
>
> For bisection, I'd think it best to leave them separate?
I am OK with either way. My point is that it would be good to have a patchset so that all the modified places can be seen together, from a logical point of view. Even if we hit some issue with the change from kmap() to kmap_local_page(), then, as far as I can see, the root of the issue would be kmap_local_page() and not the HFS+ code. Conversely, if it is some undiscovered HFS+ issue, then again kmap_local_page() changes nothing. But I am OK with separate patches too.
Thanks,
Slava.
On Mon, Jul 25, 2022 at 06:54:50PM +0100, Matthew Wilcox wrote:
> On Mon, Jul 25, 2022 at 10:17:13AM -0700, Viacheslav Dubeyko wrote:
> > Looks good. Maybe, it makes sense to combine all kmap() related modifications in HFS+ into
> > one patchset?
>
> For bisection, I'd think it best to leave them separate?
I'm not quite sure I understand why putting the individual patches into a
series makes bisection easier? It does make sense to keep individual patches.
Ira
On Tue, Jul 26, 2022 at 12:11:50PM -0700, Ira Weiny wrote:
> On Mon, Jul 25, 2022 at 06:54:50PM +0100, Matthew Wilcox wrote:
> > On Mon, Jul 25, 2022 at 10:17:13AM -0700, Viacheslav Dubeyko wrote:
> > > Looks good. Maybe, it makes sense to combine all kmap() related modifications in HFS+ into
> > > one patchset?
> >
> > For bisection, I'd think it best to leave them separate?
>
> I'm not quite sure I understand why putting the individual patches into a
> series makes bisection easier? It does make sense to keep individual patches.
If somebody reports a bug, bisection will tell you which of these
kmap-conversion patches is at fault, reducing the amount of brainpower
you have to invest in determining where the bug is.
On Tuesday, 26 July 2022 20:40:29 CEST Viacheslav Dubeyko wrote:
>
> > On Jul 25, 2022, at 10:54 AM, Matthew Wilcox <[email protected]> wrote:
> >
> > On Mon, Jul 25, 2022 at 10:17:13AM -0700, Viacheslav Dubeyko wrote:
> >> Looks good. Maybe, it makes sense to combine all kmap() related modifications in HFS+ into
> >> one patchset?
> >
> > For bisection, I'd think it best to leave them separate?
>
> I am OK with either way. My point is that it would be good to have a
> patchset so that all the modified places can be seen together, from a
> logical point of view. Even if we hit some issue with the change from kmap()
> to kmap_local_page(), then, as far as I can see, the root of the issue would
> be kmap_local_page() and not the HFS+ code. Conversely, if it is some
> undiscovered HFS+ issue, then again kmap_local_page() changes nothing. But I
> am OK with separate patches too.
>
> Thanks,
> Slava.
>
And I am OK with sending a patchset :-)
I'm sorry because, while working on the last conversions for HFS+ in
btree.c, I just noticed that I had overlooked one other kmap() call site in
bitmap.c.
Therefore, I'd like to ask that this patch be dropped, and I'll also ask to
drop the patch to bnode.c in the related thread.
When done, I'll send a series of three patches, one per file (bnode.c,
bitmap.c, btree.c).
Thanks,
Fabio