2013-03-04 04:49:15

by Joonsoo Kim

Subject: [PATCH] ARM: mm: clean-up in order to reduce calls to kmap_high_get()

In kmap_atomic(), kmap_high_get() is invoked to check for an
already-mapped area. In __flush_dcache_page() and dma_cache_maint_page()
we also call kmap_high_get() explicitly before kmap_atomic(), so
kmap_high_get() can be invoked twice when cache_is_vipt(). This is a
useless operation, so remove one of the calls.
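
For reference, kmap_atomic() on ARM already performs this check itself. A
simplified sketch of the relevant path in arch/arm/mm/highmem.c from this
era follows (debug-only branches and fixmap bookkeeping are trimmed, so
treat it as illustrative rather than the exact source):

void *kmap_atomic(struct page *page)
{
	unsigned int idx;
	unsigned long vaddr;
	void *kmap;
	int type;

	pagefault_disable();
	if (!PageHighMem(page))
		return page_address(page);

	/*
	 * kmap_atomic() first asks kmap_high_get() whether the page
	 * already has a permanent kmap mapping and reuses it if so;
	 * this is the same check the callers changed below were doing
	 * explicitly beforehand.
	 */
	kmap = kmap_high_get(page);
	if (kmap)
		return kmap;

	/* otherwise set up a temporary per-CPU fixmap slot */
	type = kmap_atomic_idx_push();
	idx = type + KM_TYPE_NR * smp_processor_id();
	vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
	set_top_pte(vaddr, mk_pte(page, kmap_prot));

	return (void *)vaddr;
}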

Signed-off-by: Joonsoo Kim <[email protected]>

diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index c7e3759..b7711be 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -822,16 +822,16 @@ static void dma_cache_maint_page(struct page *page, unsigned long offset,
 		if (PageHighMem(page)) {
 			if (len + offset > PAGE_SIZE)
 				len = PAGE_SIZE - offset;
-			vaddr = kmap_high_get(page);
-			if (vaddr) {
-				vaddr += offset;
-				op(vaddr, len, dir);
-				kunmap_high(page);
-			} else if (cache_is_vipt()) {
-				/* unmapped pages might still be cached */
+			if (cache_is_vipt()) {
 				vaddr = kmap_atomic(page);
 				op(vaddr + offset, len, dir);
 				kunmap_atomic(vaddr);
+			} else {
+				vaddr = kmap_high_get(page);
+				if (vaddr) {
+					op(vaddr + offset, len, dir);
+					kunmap_high(page);
+				}
 			}
 		} else {
 			vaddr = page_address(page) + offset;
diff --git a/arch/arm/mm/flush.c b/arch/arm/mm/flush.c
index 1c8f7f5..e6a03d0 100644
--- a/arch/arm/mm/flush.c
+++ b/arch/arm/mm/flush.c
@@ -170,15 +170,18 @@ void __flush_dcache_page(struct address_space *mapping, struct page *page)
 	if (!PageHighMem(page)) {
 		__cpuc_flush_dcache_area(page_address(page), PAGE_SIZE);
 	} else {
-		void *addr = kmap_high_get(page);
-		if (addr) {
-			__cpuc_flush_dcache_area(addr, PAGE_SIZE);
-			kunmap_high(page);
-		} else if (cache_is_vipt()) {
-			/* unmapped pages might still be cached */
+		void *addr;
+
+		if (cache_is_vipt()) {
 			addr = kmap_atomic(page);
 			__cpuc_flush_dcache_area(addr, PAGE_SIZE);
 			kunmap_atomic(addr);
+		} else {
+			addr = kmap_high_get(page);
+			if (addr) {
+				__cpuc_flush_dcache_area(addr, PAGE_SIZE);
+				kunmap_high(page);
+			}
 		}
 	}

--
1.7.9.5


2013-03-05 14:23:33

by Nicolas Pitre

Subject: Re: [PATCH] ARM: mm: clean-up in order to reduce calls to kmap_high_get()

On Mon, 4 Mar 2013, Joonsoo Kim wrote:

> In kmap_atomic(), kmap_high_get() is invoked to check for an
> already-mapped area. In __flush_dcache_page() and dma_cache_maint_page()
> we also call kmap_high_get() explicitly before kmap_atomic(), so
> kmap_high_get() can be invoked twice when cache_is_vipt(). This is a
> useless operation, so remove one of the calls.
>
> Signed-off-by: Joonsoo Kim <[email protected]>

Acked-by: Nicolas Pitre <[email protected]>

> diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
> index c7e3759..b7711be 100644
> --- a/arch/arm/mm/dma-mapping.c
> +++ b/arch/arm/mm/dma-mapping.c
> @@ -822,16 +822,16 @@ static void dma_cache_maint_page(struct page *page, unsigned long offset,
>  		if (PageHighMem(page)) {
>  			if (len + offset > PAGE_SIZE)
>  				len = PAGE_SIZE - offset;
> -			vaddr = kmap_high_get(page);
> -			if (vaddr) {
> -				vaddr += offset;
> -				op(vaddr, len, dir);
> -				kunmap_high(page);
> -			} else if (cache_is_vipt()) {
> -				/* unmapped pages might still be cached */
> +			if (cache_is_vipt()) {
>  				vaddr = kmap_atomic(page);
>  				op(vaddr + offset, len, dir);
>  				kunmap_atomic(vaddr);
> +			} else {
> +				vaddr = kmap_high_get(page);
> +				if (vaddr) {
> +					op(vaddr + offset, len, dir);
> +					kunmap_high(page);
> +				}
> 			}
> 		} else {
> 			vaddr = page_address(page) + offset;
> diff --git a/arch/arm/mm/flush.c b/arch/arm/mm/flush.c
> index 1c8f7f5..e6a03d0 100644
> --- a/arch/arm/mm/flush.c
> +++ b/arch/arm/mm/flush.c
> @@ -170,15 +170,18 @@ void __flush_dcache_page(struct address_space *mapping, struct page *page)
>  	if (!PageHighMem(page)) {
>  		__cpuc_flush_dcache_area(page_address(page), PAGE_SIZE);
>  	} else {
> -		void *addr = kmap_high_get(page);
> -		if (addr) {
> -			__cpuc_flush_dcache_area(addr, PAGE_SIZE);
> -			kunmap_high(page);
> -		} else if (cache_is_vipt()) {
> -			/* unmapped pages might still be cached */
> +		void *addr;
> +
> +		if (cache_is_vipt()) {
>  			addr = kmap_atomic(page);
>  			__cpuc_flush_dcache_area(addr, PAGE_SIZE);
>  			kunmap_atomic(addr);
> +		} else {
> +			addr = kmap_high_get(page);
> +			if (addr) {
> +				__cpuc_flush_dcache_area(addr, PAGE_SIZE);
> +				kunmap_high(page);
> +			}
> 		}
> 	}
>
> --
> 1.7.9.5
>

2013-03-07 13:26:50

by Russell King - ARM Linux

Subject: Re: [PATCH] ARM: mm: clean-up in order to reduce calls to kmap_high_get()

On Mon, Mar 04, 2013 at 01:50:09PM +0900, Joonsoo Kim wrote:
> In kmap_atomic(), kmap_high_get() is invoked to check for an
> already-mapped area. In __flush_dcache_page() and dma_cache_maint_page()
> we also call kmap_high_get() explicitly before kmap_atomic(), so
> kmap_high_get() can be invoked twice when cache_is_vipt(). This is a
> useless operation, so remove one of the calls.
>
> Signed-off-by: Joonsoo Kim <[email protected]>
>
> diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
> index c7e3759..b7711be 100644
> --- a/arch/arm/mm/dma-mapping.c
> +++ b/arch/arm/mm/dma-mapping.c
> @@ -822,16 +822,16 @@ static void dma_cache_maint_page(struct page *page, unsigned long offset,
>  		if (PageHighMem(page)) {
>  			if (len + offset > PAGE_SIZE)
>  				len = PAGE_SIZE - offset;
> -			vaddr = kmap_high_get(page);
> -			if (vaddr) {
> -				vaddr += offset;
> -				op(vaddr, len, dir);
> -				kunmap_high(page);
> -			} else if (cache_is_vipt()) {
> -				/* unmapped pages might still be cached */
> +			if (cache_is_vipt()) {

This should be:
if (cache_is_vipt_nonaliasing())

to make it _explicit_ that this technique is only for non-aliasing VIPT
caches (this doesn't work on any other of our cache types.) Yes, I
know we don't support highmem with VIPT aliasing caches - but still,
we should ensure that this is self-documented in this code.

Same for arch/arm/mm/flush.c
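
For context, these predicates come from arch/arm/include/asm/cachetype.h
and test a cache ID detected at boot. Abridged (the exact bit values are
quoted from memory and only illustrative):

#define CACHEID_VIVT			(1 << 0)
#define CACHEID_VIPT_NONALIASING	(1 << 1)
#define CACHEID_VIPT_ALIASING		(1 << 2)
#define CACHEID_VIPT			(CACHEID_VIPT_ALIASING|CACHEID_VIPT_NONALIASING)

#define cache_is_vivt()			cacheid_is(CACHEID_VIVT)
#define cache_is_vipt()			cacheid_is(CACHEID_VIPT)
#define cache_is_vipt_nonaliasing()	cacheid_is(CACHEID_VIPT_NONALIASING)
#define cache_is_vipt_aliasing()	cacheid_is(CACHEID_VIPT_ALIASING)

cache_is_vipt() is therefore true on both aliasing and non-aliasing VIPT
caches, while the narrower test spells out that the kmap_atomic() trick
relies only on the non-aliasing property.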

2013-03-08 00:51:08

by Joonsoo Kim

Subject: Re: [PATCH] ARM: mm: clean-up in order to reduce calls to kmap_high_get()

Hello, Russell.

On Thu, Mar 07, 2013 at 01:26:23PM +0000, Russell King - ARM Linux wrote:
> On Mon, Mar 04, 2013 at 01:50:09PM +0900, Joonsoo Kim wrote:
> > In kmap_atomic(), kmap_high_get() is invoked to check for an
> > already-mapped area. In __flush_dcache_page() and dma_cache_maint_page()
> > we also call kmap_high_get() explicitly before kmap_atomic(), so
> > kmap_high_get() can be invoked twice when cache_is_vipt(). This is a
> > useless operation, so remove one of the calls.
> >
> > Signed-off-by: Joonsoo Kim <[email protected]>
> >
> > diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
> > index c7e3759..b7711be 100644
> > --- a/arch/arm/mm/dma-mapping.c
> > +++ b/arch/arm/mm/dma-mapping.c
> > @@ -822,16 +822,16 @@ static void dma_cache_maint_page(struct page *page, unsigned long offset,
> >  		if (PageHighMem(page)) {
> >  			if (len + offset > PAGE_SIZE)
> >  				len = PAGE_SIZE - offset;
> > -			vaddr = kmap_high_get(page);
> > -			if (vaddr) {
> > -				vaddr += offset;
> > -				op(vaddr, len, dir);
> > -				kunmap_high(page);
> > -			} else if (cache_is_vipt()) {
> > -				/* unmapped pages might still be cached */
> > +			if (cache_is_vipt()) {
>
> This should be:
> if (cache_is_vipt_nonaliasing())
>
> to make it _explicit_ that this technique is only for non-aliasing VIPT
> caches (this doesn't work on any other of our cache types.) Yes, I
> know we don't support highmem with VIPT aliasing caches - but still,
> we should ensure that this is self-documented in this code.
>
> Same for arch/arm/mm/flush.c

Okay. I will rework it and send v2 soon.
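
For anyone following along, the rework presumably amounts to swapping the
predicate in both hunks, along these lines (an illustrative sketch of the
dma_cache_maint_page() highmem branch only; the actual v2 patch is
authoritative):

			/* non-aliasing VIPT: any kernel alias will do */
			if (cache_is_vipt_nonaliasing()) {
				vaddr = kmap_atomic(page);
				op(vaddr + offset, len, dir);
				kunmap_atomic(vaddr);
			} else {
				/* otherwise only a permanent kmap mapping matters */
				vaddr = kmap_high_get(page);
				if (vaddr) {
					op(vaddr + offset, len, dir);
					kunmap_high(page);
				}
			}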
