2014-07-17 17:03:28

by Max Filippov

Subject: [PATCH v2] mm/highmem: make kmap cache coloring aware

From: Leonid Yegoshin <[email protected]>

Provide hooks that allow architectures with aliasing cache to align
mapping address of high pages according to their color. Such architectures
may enforce similar coloring of low- and high-memory page mappings and
reuse existing cache management functions to support highmem.

Signed-off-by: Leonid Yegoshin <[email protected]>
[ Max: extract architecture-independent part of the original patch, clean
up checkpatch and build warnings. ]
Signed-off-by: Max Filippov <[email protected]>
---
Changes v1->v2:
- fix description

mm/highmem.c | 19 ++++++++++++++++---
1 file changed, 16 insertions(+), 3 deletions(-)

diff --git a/mm/highmem.c b/mm/highmem.c
index b32b70c..6898a8b 100644
--- a/mm/highmem.c
+++ b/mm/highmem.c
@@ -44,6 +44,14 @@ DEFINE_PER_CPU(int, __kmap_atomic_idx);
*/
#ifdef CONFIG_HIGHMEM

+#ifndef ARCH_PKMAP_COLORING
+#define set_pkmap_color(pg, cl) /* */
+#define get_last_pkmap_nr(p, cl) (p)
+#define get_next_pkmap_nr(p, cl) (((p) + 1) & LAST_PKMAP_MASK)
+#define is_no_more_pkmaps(p, cl) (!(p))
+#define get_next_pkmap_counter(c, cl) ((c) - 1)
+#endif
+
unsigned long totalhigh_pages __read_mostly;
EXPORT_SYMBOL(totalhigh_pages);

@@ -161,19 +169,24 @@ static inline unsigned long map_new_virtual(struct page *page)
{
unsigned long vaddr;
int count;
+ int color __maybe_unused;
+
+ set_pkmap_color(page, color);
+ last_pkmap_nr = get_last_pkmap_nr(last_pkmap_nr, color);

start:
count = LAST_PKMAP;
/* Find an empty entry */
for (;;) {
- last_pkmap_nr = (last_pkmap_nr + 1) & LAST_PKMAP_MASK;
- if (!last_pkmap_nr) {
+ last_pkmap_nr = get_next_pkmap_nr(last_pkmap_nr, color);
+ if (is_no_more_pkmaps(last_pkmap_nr, color)) {
flush_all_zero_pkmaps();
count = LAST_PKMAP;
}
if (!pkmap_count[last_pkmap_nr])
break; /* Found a usable entry */
- if (--count)
+ count = get_next_pkmap_counter(count, color);
+ if (count > 0)
continue;

/*
--
1.8.1.4
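To make the hooks concrete, here is a standalone sketch of what a colored override might look like, written as testable functions rather than macros; all constants and the stepping scheme are hypothetical, invented for illustration, and are not the real MIPS or xtensa definitions:

```c
#include <assert.h>

/* Hypothetical cache geometry -- illustration only. */
#define LAST_PKMAP      1024
#define LAST_PKMAP_MASK (LAST_PKMAP - 1)
#define PKMAP_COLORS    4       /* hypothetical number of cache colors */

/* Advance by the number of colors so the index keeps its low bits (the
 * color) on every step, instead of the default (p + 1) & LAST_PKMAP_MASK. */
static unsigned long get_next_pkmap_nr(unsigned long p, int color)
{
    (void)color;
    return (p + PKMAP_COLORS) & LAST_PKMAP_MASK;
}

/* A full pass over one color's entries wraps back into the first group
 * of indices, signalling that a flush_all_zero_pkmaps() pass is due. */
static int is_no_more_pkmaps(unsigned long p, int color)
{
    (void)color;
    return p < PKMAP_COLORS;
}
```

With stepping like this, every entry a scan visits shares one color, so a high page can always be handed a virtual address of its own color.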


2014-07-22 00:58:52

by David Rientjes

Subject: Re: [PATCH v2] mm/highmem: make kmap cache coloring aware

On Thu, 17 Jul 2014, Max Filippov wrote:

> From: Leonid Yegoshin <[email protected]>
>
> Provide hooks that allow architectures with aliasing cache to align
> mapping address of high pages according to their color. Such architectures
> may enforce similar coloring of low- and high-memory page mappings and
> reuse existing cache management functions to support highmem.
>

Typically a change like this would be proposed along with a change to an
architecture which would define this new ARCH_PKMAP_COLORING and have its
own overriding definitions. Based on who you sent this patch to, it looks
like that would be mips and xtensa. Now the only question is where are
those patches to add the alternate definitions for those platforms?

> Signed-off-by: Leonid Yegoshin <[email protected]>
> [ Max: extract architecture-independent part of the original patch, clean
> up checkpatch and build warnings. ]
> Signed-off-by: Max Filippov <[email protected]>
> ---
> Changes v1->v2:
> - fix description
>
> mm/highmem.c | 19 ++++++++++++++++---
> 1 file changed, 16 insertions(+), 3 deletions(-)
>
> diff --git a/mm/highmem.c b/mm/highmem.c
> index b32b70c..6898a8b 100644
> --- a/mm/highmem.c
> +++ b/mm/highmem.c
> @@ -44,6 +44,14 @@ DEFINE_PER_CPU(int, __kmap_atomic_idx);
> */
> #ifdef CONFIG_HIGHMEM
>
> +#ifndef ARCH_PKMAP_COLORING
> +#define set_pkmap_color(pg, cl) /* */

This is typically done with do {} while (0).
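The point of `do { } while (0)` is that the macro stays a single statement even if it later grows a body, so un-braced if/else around call sites keeps working. A standalone sketch, where the macro body is invented purely for illustration:

```c
#include <assert.h>

/* Stand-in for a future non-empty version of the hook; the body here is
 * illustrative only, not the real coloring logic. */
#define set_pkmap_color(pg, cl) do { (cl) = (pg) & 1; } while (0)

/* Because the expansion is one statement, it composes safely with an
 * un-braced if/else; a bare empty or multi-statement expansion would
 * either swallow the else branch or fail to compile. */
static int demo(int pg)
{
    int cl = -1;
    if (pg >= 0)
        set_pkmap_color(pg, cl);
    else
        cl = 0;
    return cl;
}
```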

> +#define get_last_pkmap_nr(p, cl) (p)
> +#define get_next_pkmap_nr(p, cl) (((p) + 1) & LAST_PKMAP_MASK)
> +#define is_no_more_pkmaps(p, cl) (!(p))

That's not grammatically proper.

> +#define get_next_pkmap_counter(c, cl) ((c) - 1)
> +#endif
> +
> unsigned long totalhigh_pages __read_mostly;
> EXPORT_SYMBOL(totalhigh_pages);
>
> @@ -161,19 +169,24 @@ static inline unsigned long map_new_virtual(struct page *page)
> {
> unsigned long vaddr;
> int count;
> + int color __maybe_unused;
> +
> + set_pkmap_color(page, color);
> + last_pkmap_nr = get_last_pkmap_nr(last_pkmap_nr, color);
>
> start:
> count = LAST_PKMAP;
> /* Find an empty entry */
> for (;;) {
> - last_pkmap_nr = (last_pkmap_nr + 1) & LAST_PKMAP_MASK;
> - if (!last_pkmap_nr) {
> + last_pkmap_nr = get_next_pkmap_nr(last_pkmap_nr, color);
> + if (is_no_more_pkmaps(last_pkmap_nr, color)) {
> flush_all_zero_pkmaps();
> count = LAST_PKMAP;
> }
> if (!pkmap_count[last_pkmap_nr])
> break; /* Found a usable entry */
> - if (--count)
> + count = get_next_pkmap_counter(count, color);

And that's not equivalent at all, --count decrements the auto variable and
then tests it for being non-zero. Your get_next_pkmap_counter() never
decrements count.

> + if (count > 0)
> continue;
>
> /*

2014-07-22 01:14:49

by Leonid Yegoshin

Subject: Re: [PATCH v2] mm/highmem: make kmap cache coloring aware

On 07/21/2014 05:58 PM, David Rientjes wrote:
> On Thu, 17 Jul 2014, Max Filippov wrote:
>
>> From: Leonid Yegoshin <[email protected]>
>>
>> Provide hooks that allow architectures with aliasing cache to align
>> mapping address of high pages according to their color. Such architectures
>> may enforce similar coloring of low- and high-memory page mappings and
>> reuse existing cache management functions to support highmem.
>>
> Typically a change like this would be proposed along with a change to an
> architecture which would define this new ARCH_PKMAP_COLORING and have its
> own overriding definitions. Based on who you sent this patch to, it looks
> like that would be mips and xtensa. Now the only question is where are
> those patches to add the alternate definitions for those platforms?
Yes, there is one, at least for MIPS. This stuff can be a common ground
for both platforms (MIPS and XTENSA).

>
>> Signed-off-by: Leonid Yegoshin <[email protected]>
>> [ Max: extract architecture-independent part of the original patch, clean
>> up checkpatch and build warnings. ]
>> Signed-off-by: Max Filippov <[email protected]>
>> ---
>> Changes v1->v2:
>> - fix description
>>
>> mm/highmem.c | 19 ++++++++++++++++---
>> 1 file changed, 16 insertions(+), 3 deletions(-)
>>
>> diff --git a/mm/highmem.c b/mm/highmem.c
>> index b32b70c..6898a8b 100644
>> --- a/mm/highmem.c
>> +++ b/mm/highmem.c
>> @@ -44,6 +44,14 @@ DEFINE_PER_CPU(int, __kmap_atomic_idx);
>> */
>> #ifdef CONFIG_HIGHMEM
>>
>> +#ifndef ARCH_PKMAP_COLORING
>> +#define set_pkmap_color(pg, cl) /* */
> This is typically done with do {} while (0).
>
>> +#define get_last_pkmap_nr(p, cl) (p)
>> +#define get_next_pkmap_nr(p, cl) (((p) + 1) & LAST_PKMAP_MASK)
>> +#define is_no_more_pkmaps(p, cl) (!(p))
> That's not grammatically proper.
>
>> +#define get_next_pkmap_counter(c, cl) ((c) - 1)
>> +#endif
>> +
>> unsigned long totalhigh_pages __read_mostly;
>> EXPORT_SYMBOL(totalhigh_pages);
>>
>> @@ -161,19 +169,24 @@ static inline unsigned long map_new_virtual(struct page *page)
>> {
>> unsigned long vaddr;
>> int count;
>> + int color __maybe_unused;
>> +
>> + set_pkmap_color(page, color);
>> + last_pkmap_nr = get_last_pkmap_nr(last_pkmap_nr, color);
>>
>> start:
>> count = LAST_PKMAP;
>> /* Find an empty entry */
>> for (;;) {
>> - last_pkmap_nr = (last_pkmap_nr + 1) & LAST_PKMAP_MASK;
>> - if (!last_pkmap_nr) {
>> + last_pkmap_nr = get_next_pkmap_nr(last_pkmap_nr, color);
>> + if (is_no_more_pkmaps(last_pkmap_nr, color)) {
>> flush_all_zero_pkmaps();
>> count = LAST_PKMAP;
>> }
>> if (!pkmap_count[last_pkmap_nr])
>> break; /* Found a usable entry */
>> - if (--count)
>> + count = get_next_pkmap_counter(count, color);
> And that's not equivalent at all, --count decrements the auto variable and
> then tests it for being non-zero. Your get_next_pkmap_counter() never
> decrements count.
David, the statements

count = get_next_pkmap_counter(count, color);
if (count > 0)

expand in the STANDARD (non-colored) case to

count = (count - 1);
if (count > 0)

which is a perfect equivalent of

if (--count)

>
>> + if (count > 0)
>> continue;
>>
>> /*
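Leonid's equivalence claim can be checked in isolation: for any starting count >= 1 the two forms iterate the same number of times. A minimal sketch, with the loop bodies reduced to pure counting:

```c
#include <assert.h>

/* Non-colored definition from the patch. */
#define get_next_pkmap_counter(c, cl) ((c) - 1)

/* Iteration count of the original form: if (--count) continue; */
static int iters_old(int count)
{
    int n = 0;
    while (--count)
        n++;
    return n;
}

/* Iteration count of the patched form. */
static int iters_new(int count)
{
    int n = 0;
    for (;;) {
        count = get_next_pkmap_counter(count, 0);
        if (count > 0) {
            n++;
            continue;
        }
        break;
    }
    return n;
}
```

Both loops run count - 1 times before falling out, which is the behaviour map_new_virtual() relies on when count starts at LAST_PKMAP.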

2014-07-22 01:20:12

by David Rientjes

Subject: Re: [PATCH v2] mm/highmem: make kmap cache coloring aware

On Mon, 21 Jul 2014, Leonid Yegoshin wrote:

> Yes, there is one, at least for MIPS. This stuff can be a common ground for
> both platforms (MIPS and XTENSA)
>

Needs the mips patch as a followup as justification for the change.

2014-07-22 09:44:54

by Max Filippov

Subject: Re: [PATCH v2] mm/highmem: make kmap cache coloring aware

Hi David,

On Tue, Jul 22, 2014 at 4:58 AM, David Rientjes <[email protected]> wrote:
> On Thu, 17 Jul 2014, Max Filippov wrote:
>
>> From: Leonid Yegoshin <[email protected]>
>>
>> Provide hooks that allow architectures with aliasing cache to align
>> mapping address of high pages according to their color. Such architectures
>> may enforce similar coloring of low- and high-memory page mappings and
>> reuse existing cache management functions to support highmem.
>>
>
> Typically a change like this would be proposed along with a change to an
> architecture which would define this new ARCH_PKMAP_COLORING and have its
> own overriding definitions. Based on who you sent this patch to, it looks
> like that would be mips and xtensa. Now the only question is where are
> those patches to add the alternate definitions for those platforms?

I'm going to post the xtensa series shortly, and I saw the corresponding
changes for MIPS in the following patch:
http://permalink.gmane.org/gmane.linux.kernel.mm/107654

>> Signed-off-by: Leonid Yegoshin <[email protected]>
>> [ Max: extract architecture-independent part of the original patch, clean
>> up checkpatch and build warnings. ]
>> Signed-off-by: Max Filippov <[email protected]>
>> ---
>> Changes v1->v2:
>> - fix description
>>
>> mm/highmem.c | 19 ++++++++++++++++---
>> 1 file changed, 16 insertions(+), 3 deletions(-)
>>
>> diff --git a/mm/highmem.c b/mm/highmem.c
>> index b32b70c..6898a8b 100644
>> --- a/mm/highmem.c
>> +++ b/mm/highmem.c
>> @@ -44,6 +44,14 @@ DEFINE_PER_CPU(int, __kmap_atomic_idx);
>> */
>> #ifdef CONFIG_HIGHMEM
>>
>> +#ifndef ARCH_PKMAP_COLORING
>> +#define set_pkmap_color(pg, cl) /* */
>
> This is typically done with do {} while (0).

Will fix.

>> +#define get_last_pkmap_nr(p, cl) (p)
>> +#define get_next_pkmap_nr(p, cl) (((p) + 1) & LAST_PKMAP_MASK)
>> +#define is_no_more_pkmaps(p, cl) (!(p))
>
> That's not grammatically proper.

Just no_more_pkmaps maybe?

>> +#define get_next_pkmap_counter(c, cl) ((c) - 1)
>> +#endif
>> +
>> unsigned long totalhigh_pages __read_mostly;
>> EXPORT_SYMBOL(totalhigh_pages);
>>
>> @@ -161,19 +169,24 @@ static inline unsigned long map_new_virtual(struct page *page)
>> {
>> unsigned long vaddr;
>> int count;
>> + int color __maybe_unused;
>> +
>> + set_pkmap_color(page, color);
>> + last_pkmap_nr = get_last_pkmap_nr(last_pkmap_nr, color);
>>
>> start:
>> count = LAST_PKMAP;
>> /* Find an empty entry */
>> for (;;) {
>> - last_pkmap_nr = (last_pkmap_nr + 1) & LAST_PKMAP_MASK;
>> - if (!last_pkmap_nr) {
>> + last_pkmap_nr = get_next_pkmap_nr(last_pkmap_nr, color);
>> + if (is_no_more_pkmaps(last_pkmap_nr, color)) {
>> flush_all_zero_pkmaps();
>> count = LAST_PKMAP;
>> }
>> if (!pkmap_count[last_pkmap_nr])
>> break; /* Found a usable entry */
>> - if (--count)
>> + count = get_next_pkmap_counter(count, color);
>
> And that's not equivalent at all, --count decrements the auto variable and
> then tests it for being non-zero. Your get_next_pkmap_counter() never
> decrements count.

Yes, it's not literally equivalent, but it does the same thing: count gets
decreased by 1 and the for loop iterates until count reaches zero.
get_next_pkmap_counter has no side effects, which is good. And count
is not supposed to go below zero anyway, but sure I can change the below
condition to 'if (count)'.

>> + if (count > 0)
>> continue;
>>
>> /*

--
Thanks.
-- Max

2014-07-23 21:17:25

by Andrew Morton

Subject: Re: [PATCH v2] mm/highmem: make kmap cache coloring aware

On Thu, 17 Jul 2014 21:03:18 +0400 Max Filippov <[email protected]> wrote:

> From: Leonid Yegoshin <[email protected]>
>
> Provide hooks that allow architectures with aliasing cache to align
> mapping address of high pages according to their color. Such architectures
> may enforce similar coloring of low- and high-memory page mappings and
> reuse existing cache management functions to support highmem.
>
> ...
>
> --- a/mm/highmem.c
> +++ b/mm/highmem.c
> @@ -44,6 +44,14 @@ DEFINE_PER_CPU(int, __kmap_atomic_idx);
> */
> #ifdef CONFIG_HIGHMEM
>
> +#ifndef ARCH_PKMAP_COLORING
> +#define set_pkmap_color(pg, cl) /* */
> +#define get_last_pkmap_nr(p, cl) (p)
> +#define get_next_pkmap_nr(p, cl) (((p) + 1) & LAST_PKMAP_MASK)
> +#define is_no_more_pkmaps(p, cl) (!(p))
> +#define get_next_pkmap_counter(c, cl) ((c) - 1)
> +#endif

This is the old-school way of doing things. The new Linus-approved way is

#ifndef set_pkmap_color
#define set_pkmap_color ...
#define get_last_pkmap_nr ...
#endif

so we don't need to add yet another symbol and to avoid typos, etc.

Secondly, please identify which per-arch header file is responsible for
defining these symbols. Document that here and make sure that
mm/highmem.c is directly including that file. Otherwise we end up with
different architectures using different header files and it's all a big
mess.

Thirdly, macros are nasty things. It would be nicer to do

#ifndef set_pkmap_color
static inline void set_pkmap_color(...)
{
...
}
#define set_pkmap_color set_pkmap_color

...

#endif

Fourthly, please document these proposed interfaces with code comments.

Fifthly, it would be very useful to publish the performance testing
results for at least one architecture so that we can determine the
patchset's desirability. And perhaps to motivate other architectures
to implement this.
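A standalone sketch of the #ifndef + static inline pattern described above; the struct page stub and the pointer-based signature are adaptations so the fragment compiles on its own, and the fallback body is only an assumption about what the generic case would do:

```c
#include <assert.h>

struct page { unsigned long pfn; };   /* minimal stand-in for the kernel type */

/* Generic fallback, compiled only when no arch header has already provided
 * an override; the "#define set_pkmap_color set_pkmap_color" line is what
 * lets the #ifndef test work without a separate ARCH_PKMAP_COLORING symbol. */
#ifndef set_pkmap_color
static inline void set_pkmap_color(struct page *page, int *color)
{
    (void)page;
    *color = 0;           /* non-aliasing caches: everything is one color */
}
#define set_pkmap_color set_pkmap_color
#endif
```

An architecture wanting its own version defines both the function and the same-named macro in its header before mm/highmem.c includes this fallback.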

2014-07-24 00:38:05

by Max Filippov

Subject: Re: [PATCH v2] mm/highmem: make kmap cache coloring aware

Hi Andrew,

thanks for your feedback, I'll address your points in the next version of this
series.

On Thu, Jul 24, 2014 at 1:17 AM, Andrew Morton
<[email protected]> wrote:
> Fifthly, it would be very useful to publish the performance testing
> results for at least one architecture so that we can determine the
> patchset's desirability. And perhaps to motivate other architectures
> to implement this.

What sort of performance numbers would be relevant?
For xtensa this patch enables highmem use for cores with aliasing cache,
that is, access to a gigabyte of memory (typical on a KC705 FPGA board) vs.
only 128 MBytes of low memory, which is highly desirable. But a performance
comparison of these two configurations seems to make little sense.
OTOH performance comparison of highmem variants with and without
cache aliasing would show the quality of our cache flushing code.

--
Thanks.
-- Max

2014-07-24 22:21:38

by Andrew Morton

Subject: Re: [PATCH v2] mm/highmem: make kmap cache coloring aware

On Thu, 24 Jul 2014 04:38:01 +0400 Max Filippov <[email protected]> wrote:

> On Thu, Jul 24, 2014 at 1:17 AM, Andrew Morton
> <[email protected]> wrote:
> > Fifthly, it would be very useful to publish the performance testing
> > results for at least one architecture so that we can determine the
> > patchset's desirability. And perhaps to motivate other architectures
> > to implement this.
>
> What sort of performance numbers would be relevant?
> For xtensa this patch enables highmem use for cores with aliasing cache,
> that is access to a gigabyte of memory (typical on KC705 FPGA board) vs.
> only 128MBytes of low memory, which is highly desirable. But performance
> comparison of these two configurations seems to make little sense.
> OTOH performance comparison of highmem variants with and without
> cache aliasing would show the quality of our cache flushing code.

I'd assumed the patch was making cache coloring available as a
performance tweak. But you appear to be saying that the (high) memory
is simply unavailable for such cores without this change. I think.

Please ensure that v3's changelog explains the full reason for the
patch. Assume you're talking to all-the-worlds-an-x86 dummies, OK?

2014-07-25 02:12:10

by Leonid Yegoshin

Subject: Re: [PATCH v2] mm/highmem: make kmap cache coloring aware

On 07/24/2014 03:21 PM, Andrew Morton wrote:
> On Thu, 24 Jul 2014 04:38:01 +0400 Max Filippov <[email protected]> wrote:
>
>> On Thu, Jul 24, 2014 at 1:17 AM, Andrew Morton
>> <[email protected]> wrote:
>>> Fifthly, it would be very useful to publish the performance testing
>>> results for at least one architecture so that we can determine the
>>> patchset's desirability. And perhaps to motivate other architectures
>>> to implement this.
>> What sort of performance numbers would be relevant?
>> For xtensa this patch enables highmem use for cores with aliasing cache,
>> that is access to a gigabyte of memory (typical on KC705 FPGA board) vs.
>> only 128MBytes of low memory, which is highly desirable. But performance
>> comparison of these two configurations seems to make little sense.
>> OTOH performance comparison of highmem variants with and without
>> cache aliasing would show the quality of our cache flushing code.
> I'd assumed the patch was making cache coloring available as a
> performance tweak. But you appear to be saying that the (high) memory
> is simply unavailable for such cores without this change. I think.

>
> Please ensure that v3's changelog explains the full reason for the
> patch. Assume you're talking to all-the-worlds-an-x86 dummies, OK?
>

I am not sure that I will work on it again; we are moving to bigger pages
and non-aliasing caches, and I have asked Steven Hill to help with the MIPS
variant. So, let me try to summarise an explanation here:

If a cache line of some page on MIPS (and XTENSA?) is accessed via
multiple virtual addresses of the page (kernel and/or user), then it may
end up in two or more locations in the L1 cache, which is an obvious
coherency bug. It is a trade-off for simple L1 access hardware. Two virtual
addresses of a page which hit the same location in the L1 cache are said to
be "of the same page colour". Usually, colours are numbered, and sequential
page colours look like 0,1,0,1 or 0,1,2,3,0,1,2,3... The colour is usually
the least significant one, two or three bits of the PFN.

One simple way to hit this problem is using the current HIGHMEM remapping
service, because it doesn't take care of page colouring. To prevent
coherency failures, the current HIGHMEM code attempts to flush a page from
the L1 cache each time before changing its virtual address: it flushes the
cache on each PKMAP recycle and at each kunmap_atomic(), see
arch/arm/mm/highmem.c - the MIPS code doesn't even have a flush here (BUG!).

However, kunmap_atomic() should do this locally to the CPU, without
kmap_lock, by definition of kmap_atomic(), so it can't prevent a situation
where a second CPU hyper-thread accesses the same page via kmap() right
after kmap_atomic() got the page, using a different page colour (a
different virtual address set). Also, putting the whole
kmap_atomic()...page access...kunmap_atomic() cycle under kmap_lock is
impractical.

This patch introduces an interface for architecture code to work with
coloured pages in the PKMAP array, which eliminates the kmap_atomic problem
and removes the cache flush requirements. It can also be made consistent
with the kmap_coherent() code, which some cache-aliasing architectures
require to handle aliasing between kernel and user virtual addresses. The
whole idea of this patch is to force the same page colour whenever a page
is assigned a PKMAP virtual address or a kmap_atomic address. The page
colour is set by architecture code; usually it is the physical address
colour (which is usually == the KVA colour).

- Leonid.
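The page-colour arithmetic Leonid describes can be sketched numerically; the cache geometry below is invented for illustration (real values depend on the core's L1 way size and page size):

```c
#include <assert.h>

#define PAGE_SHIFT     12                        /* hypothetical 4 KiB pages */
#define CACHE_WAY_SIZE (32 * 1024)               /* hypothetical 32 KiB way  */
#define PKMAP_COLORS   (CACHE_WAY_SIZE >> PAGE_SHIFT)   /* 8 colors here */

/* The colour of a physical page: the low PFN bits that fall inside the
 * cache index range but above the page offset. */
static int page_color(unsigned long pfn)
{
    return (int)(pfn & (PKMAP_COLORS - 1));
}

/* A virtual mapping avoids aliasing iff its colour matches the page's,
 * which is exactly the invariant the patch lets architectures enforce. */
static int colors_match(unsigned long pfn, unsigned long vaddr)
{
    return page_color(pfn) == page_color(vaddr >> PAGE_SHIFT);
}
```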