2008-12-10 17:00:54

by Yuri Tikhonov

Subject: [PATCH][v2] fork_init: fix division by zero


The following patch fixes a divide-by-zero error for the
cases of really big PAGE_SIZEs (e.g. 256KB on ppc44x).
Support for big page sizes on 44x is not present in the
current kernel yet, but is coming soon.

This patch also fixes the comment for the max_threads
setting, as it didn't match what the code actually does.

Signed-off-by: Yuri Tikhonov <[email protected]>
Signed-off-by: Ilya Yanok <[email protected]>
---
kernel/fork.c | 8 ++++++--
1 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/kernel/fork.c b/kernel/fork.c
index 8d6a7dd..638eb7f 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -181,10 +181,14 @@ void __init fork_init(unsigned long mempages)

/*
* The default maximum number of threads is set to a safe
- * value: the thread structures can take up at most half
- * of memory.
+ * value: the thread structures can take up at most
+ * (1/8) part of memory.
*/
+#if (8 * THREAD_SIZE) > PAGE_SIZE
max_threads = mempages / (8 * THREAD_SIZE / PAGE_SIZE);
+#else
+ max_threads = mempages * (PAGE_SIZE / (8 * THREAD_SIZE));
+#endif

/*
* we need to allow at least 20 threads to boot a system
--
1.5.6.1


2008-12-11 20:18:32

by Andrew Morton

Subject: Re: [PATCH][v2] fork_init: fix division by zero

On Wed, 10 Dec 2008 19:50:51 +0300
Yuri Tikhonov <[email protected]> wrote:

>
> The following patch fixes a divide-by-zero error for the
> cases of really big PAGE_SIZEs (e.g. 256KB on ppc44x).
> Support for big page sizes on 44x is not present in the
> current kernel yet, but is coming soon.
>
> This patch also fixes the comment for the max_threads
> setting, as it didn't match what the code actually does.
>
> Signed-off-by: Yuri Tikhonov <[email protected]>
> Signed-off-by: Ilya Yanok <[email protected]>
> ---
> kernel/fork.c | 8 ++++++--
> 1 files changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/fork.c b/kernel/fork.c
> index 8d6a7dd..638eb7f 100644
> --- a/kernel/fork.c
> +++ b/kernel/fork.c
> @@ -181,10 +181,14 @@ void __init fork_init(unsigned long mempages)
>
> /*
> * The default maximum number of threads is set to a safe
> - * value: the thread structures can take up at most half
> - * of memory.
> + * value: the thread structures can take up at most
> + * (1/8) part of memory.
> */
> +#if (8 * THREAD_SIZE) > PAGE_SIZE
> max_threads = mempages / (8 * THREAD_SIZE / PAGE_SIZE);
> +#else
> + max_threads = mempages * (PAGE_SIZE / (8 * THREAD_SIZE));
> +#endif

The expression you've chosen here can be quite inaccurate, because
(PAGE_SIZE / (8 * THREAD_SIZE)) is a small number. The way to
preserve accuracy is

max_threads = (mempages * PAGE_SIZE) / (8 * THREAD_SIZE);

so how about avoiding the nasty ifdefs and doing

--- a/kernel/fork.c~fork_init-fix-division-by-zero
+++ a/kernel/fork.c
@@ -69,6 +69,7 @@
#include <asm/mmu_context.h>
#include <asm/cacheflush.h>
#include <asm/tlbflush.h>
+#include <asm/div64.h>

/*
* Protected counters by write_lock_irq(&tasklist_lock)
@@ -185,10 +186,15 @@ void __init fork_init(unsigned long memp

/*
* The default maximum number of threads is set to a safe
- * value: the thread structures can take up at most half
- * of memory.
+ * value: the thread structures can take up at most
+ * (1/8) part of memory.
*/
- max_threads = mempages / (8 * THREAD_SIZE / PAGE_SIZE);
+ {
+ /* max_threads = (mempages * PAGE_SIZE) / THREAD_SIZE / 8; */
+ u64 m = mempages * PAGE_SIZE;
+ do_div(m, THREAD_SIZE * 8);
+ max_threads = m;
+ }

/*
* we need to allow at least 20 threads to boot a system
_

?


The code is also inaccurate because it assumes that <whatever allocator
is used for threads> will pack the thread_structs into pages with best
possible density, which isn't necessarily the case. Let's not worry
about that.




OT:

max_threads is wildly wrong anyway.

- the caller passes in num_physpages, which includes highmem. And we
can't allocate thread structs from highmem.

- num_physpages includes kernel pages and other stuff which can never
be allocated via the page allocator.

A suitable fix would be to switch the caller to the strangely-named
nr_free_buffer_pages().

If you grep the tree for `num_physpages', you will find a splendid
number of similar bugs. num_physpages should be unexported, burnt,
deleted, etc. It's just an invitation to write buggy code.

2008-12-11 20:28:21

by Al Viro

Subject: Re: [PATCH][v2] fork_init: fix division by zero

On Thu, Dec 11, 2008 at 12:16:35PM -0800, Andrew Morton wrote:
> > +#if (8 * THREAD_SIZE) > PAGE_SIZE
> > max_threads = mempages / (8 * THREAD_SIZE / PAGE_SIZE);
> > +#else
> > + max_threads = mempages * (PAGE_SIZE / (8 * THREAD_SIZE));
> > +#endif
>
> The expression you've chosen here can be quite inaccurate, because
> (PAGE_SIZE / (8 * THREAD_SIZE)) is a small number. The way to
> preserve accuracy is
>
> max_threads = (mempages * PAGE_SIZE) / (8 * THREAD_SIZE);
>
> so how about avoiding the nasty ifdefs and doing

Are you sure? Do they actually cross the page boundaries?

2008-12-11 20:46:03

by Andrew Morton

Subject: Re: [PATCH][v2] fork_init: fix division by zero

On Thu, 11 Dec 2008 20:28:00 +0000
Al Viro <[email protected]> wrote:

> On Thu, Dec 11, 2008 at 12:16:35PM -0800, Andrew Morton wrote:
> > > +#if (8 * THREAD_SIZE) > PAGE_SIZE
> > > max_threads = mempages / (8 * THREAD_SIZE / PAGE_SIZE);
> > > +#else
> > > + max_threads = mempages * (PAGE_SIZE / (8 * THREAD_SIZE));
> > > +#endif
> >
> > The expression you've chosen here can be quite inaccurate, because
> > (PAGE_SIZE / (8 * THREAD_SIZE)) is a small number. The way to
> > preserve accuracy is
> >
> > max_threads = (mempages * PAGE_SIZE) / (8 * THREAD_SIZE);
> >
> > so how about avoiding the nasty ifdefs and doing
>
> Are you sure?

No, not at all. It's all too hard. Which is why I'm looking for
simplification.

> Do they actually cross the page boundaries?

Some flavours of slab have at times done an order-1 allocation for
objects which would fit into an order-0 page (etc) if it looks like
that will be beneficial from a packing POV. I'm unsure whether that
still happens - I tried to get it stamped out for reliability reasons.

2008-12-11 22:22:48

by Yuri Tikhonov

Subject: Re[2]: [PATCH][v2] fork_init: fix division by zero


Hello Andrew,

On Thursday, December 11, 2008 you wrote:

[snip]

> The expression you've chosen here can be quite inaccurate, because
> (PAGE_SIZE / (8 * THREAD_SIZE)) is a small number.

But why is that bad? We multiply 'mempages'; we do not divide it.
All the numbers in the multiplier are powers of 2, so the two
expressions:

mempages * (PAGE_SIZE / (8 * THREAD_SIZE))

and

(mempages * PAGE_SIZE) / (8 * THREAD_SIZE)

end up equal.

> The way to preserve accuracy is

> max_threads = (mempages * PAGE_SIZE) / (8 * THREAD_SIZE);

> so how about avoiding the nasty ifdefs and doing

I'm OK with the approach below, but, while it produces the same result,
it adds overhead to code that had none before this patch: e.g. your
implementation boils down to roughly 5 times more processor instructions
than before, plus stack operations for the 'm' variable.

On the other hand, my approach with the nasty (I agree) ifdefs adds no
overhead in the cases that don't need it, i.e. the most common situation
of small PAGE_SIZEs. A big PAGE_SIZE is the exception, so I believe the
more common cases should not suffer because of it.

> --- a/kernel/fork.c~fork_init-fix-division-by-zero
> +++ a/kernel/fork.c
> @@ -69,6 +69,7 @@
> #include <asm/mmu_context.h>
> #include <asm/cacheflush.h>
> #include <asm/tlbflush.h>
> +#include <asm/div64.h>
>
> /*
> * Protected counters by write_lock_irq(&tasklist_lock)
> @@ -185,10 +186,15 @@ void __init fork_init(unsigned long memp
>
> /*
> * The default maximum number of threads is set to a safe
> - * value: the thread structures can take up at most half
> - * of memory.
> + * value: the thread structures can take up at most
> + * (1/8) part of memory.
> */
> - max_threads = mempages / (8 * THREAD_SIZE / PAGE_SIZE);
> + {
> + /* max_threads = (mempages * PAGE_SIZE) / THREAD_SIZE / 8; */
> + u64 m = mempages * PAGE_SIZE;
> + do_div(m, THREAD_SIZE * 8);
> + max_threads = m;
> + }
>
> /*
> * we need to allow at least 20 threads to boot a system
> _

> ?


> The code is also inaccurate because it assumes that <whatever allocator
> is used for threads> will pack the thread_structs into pages with best
> possible density, which isn't necessarily the case. Let's not worry
> about that.




> OT:

> max_threads is wildly wrong anyway.

> - the caller passes in num_physpages, which includes highmem. And we
> can't allocate thread structs from highmem.

> - num_physpages includes kernel pages and other stuff which can never
> be allocated via the page allocator.

> A suitable fix would be to switch the caller to the strangely-named
> nr_free_buffer_pages().

> If you grep the tree for `num_physpages', you will find a splendid
> number of similar bugs. num_physpages should be unexported, burnt,
> deleted, etc. It's just an invitation to write buggy code.


Regards, Yuri

--
Yuri Tikhonov, Senior Software Engineer
Emcraft Systems, http://www.emcraft.com

2008-12-11 22:28:15

by Andrew Morton

Subject: Re: Re[2]: [PATCH][v2] fork_init: fix division by zero

On Fri, 12 Dec 2008 01:22:32 +0300
Yuri Tikhonov <[email protected]> wrote:

> > so how about avoiding the nasty ifdefs and doing
>
> I'm OK with the approach below, but, while it produces the same result,
> it adds overhead to code that had none before this patch: e.g. your
> implementation boils down to roughly 5 times more processor instructions
> than before, plus stack operations for the 'm' variable.
>
> On the other hand, my approach with the nasty (I agree) ifdefs adds no
> overhead in the cases that don't need it, i.e. the most common situation
> of small PAGE_SIZEs. A big PAGE_SIZE is the exception, so I believe the
> more common cases should not suffer because of it.

yes, but...

> > --- a/kernel/fork.c~fork_init-fix-division-by-zero
> > +++ a/kernel/fork.c
> > @@ -69,6 +69,7 @@
> > #include <asm/mmu_context.h>
> > #include <asm/cacheflush.h>
> > #include <asm/tlbflush.h>
> > +#include <asm/div64.h>
> >
> > /*
> > * Protected counters by write_lock_irq(&tasklist_lock)
> > @@ -185,10 +186,15 @@ void __init fork_init(unsigned long memp

This is __init code and it gets thrown away after bootup.

2008-12-12 00:49:40

by Paul Mackerras

Subject: Re: [PATCH][v2] fork_init: fix division by zero

Andrew Morton writes:

> > +#if (8 * THREAD_SIZE) > PAGE_SIZE
> > max_threads = mempages / (8 * THREAD_SIZE / PAGE_SIZE);
> > +#else
> > + max_threads = mempages * (PAGE_SIZE / (8 * THREAD_SIZE));
> > +#endif
>
> The expression you've chosen here can be quite inaccurate, because
> (PAGE_SIZE / (8 * THREAD_SIZE)) is a small number. The way to
> preserve accuracy is

The assumption is that THREAD_SIZE is a power of 2, as is PAGE_SIZE.

I think Yuri should be increasing THREAD_SIZE for the larger page
sizes he's implementing, because we have on-stack arrays whose size
depends on the page size. I suspect that having THREAD_SIZE less than
1/8 of PAGE_SIZE risks stack overflows, and the better fix is for Yuri
to make sure THREAD_SIZE is at least 1/8 of PAGE_SIZE. (In fact, more
may be needed - someone should work out what fraction is actually
needed.)

Paul.

2008-12-12 01:08:39

by Andrew Morton

Subject: Re: [PATCH][v2] fork_init: fix division by zero

On Fri, 12 Dec 2008 11:48:29 +1100 Paul Mackerras <[email protected]> wrote:

> Andrew Morton writes:
>
> > > +#if (8 * THREAD_SIZE) > PAGE_SIZE
> > > max_threads = mempages / (8 * THREAD_SIZE / PAGE_SIZE);
> > > +#else
> > > + max_threads = mempages * (PAGE_SIZE / (8 * THREAD_SIZE));
> > > +#endif
> >
> > The expression you've chosen here can be quite inaccurate, because
> > (PAGE_SIZE / (8 * THREAD_SIZE)) is a small number. The way to
> > preserve accuracy is
>
> The assumption is that THREAD_SIZE is a power of 2, as is PAGE_SIZE.
>
> I think Yuri should be increasing THREAD_SIZE for the larger page
> sizes he's implementing, because we have on-stack arrays whose size
> depends on the page size. I suspect that having THREAD_SIZE less than
> 1/8 of PAGE_SIZE risks stack overflows, and the better fix is for Yuri
> to make sure THREAD_SIZE is at least 1/8 of PAGE_SIZE. (In fact, more
> may be needed - someone should work out what fraction is actually
> needed.)

OK, yes.

It's the MAX_BUF_PER_PAGE arrays which will hurt. iirc they nest
three-deep on some codepaths.

2008-12-12 02:31:50

by Nick Piggin

Subject: Re: [PATCH][v2] fork_init: fix division by zero

On Friday 12 December 2008 07:43, Andrew Morton wrote:
> On Thu, 11 Dec 2008 20:28:00 +0000

> > Do they actually cross the page boundaries?
>
> Some flavours of slab have at times done an order-1 allocation for
> objects which would fit into an order-0 page (etc) if it looks like
> that will be beneficial from a packing POV. I'm unsure whether that
> still happens - I tried to get it stamped out for reliability reasons.

Hmph, SLUB uses order-3 allocations for 832-byte objects
by default here (mm_struct).

2008-12-12 02:49:26

by Andrew Morton

Subject: Re: [PATCH][v2] fork_init: fix division by zero

On Fri, 12 Dec 2008 12:31:33 +1000 Nick Piggin <[email protected]> wrote:

> On Friday 12 December 2008 07:43, Andrew Morton wrote:
> > On Thu, 11 Dec 2008 20:28:00 +0000
>
> > > Do they actually cross the page boundaries?
> >
> > Some flavours of slab have at times done an order-1 allocation for
> > objects which would fit into an order-0 page (etc) if it looks like
> > that will be beneficial from a packing POV. I'm unsure whether that
> > still happens - I tried to get it stamped out for reliability reasons.
>
> Hmph, SLUB uses order-3 allocations for 832-byte objects
> by default here (mm_struct).

That sucks, but at least it's <= PAGE_ALLOC_COSTLY_ORDER.

It's fortunate that everyone has more than 128GB of memory.

2008-12-12 03:36:19

by Nick Piggin

Subject: Re: [PATCH][v2] fork_init: fix division by zero

On Friday 12 December 2008 13:47, Andrew Morton wrote:
> On Fri, 12 Dec 2008 12:31:33 +1000 Nick Piggin <[email protected]>
wrote:
> > On Friday 12 December 2008 07:43, Andrew Morton wrote:
> > > On Thu, 11 Dec 2008 20:28:00 +0000
> > >
> > > > Do they actually cross the page boundaries?
> > >
> > > Some flavours of slab have at times done an order-1 allocation for
> > > objects which would fit into an order-0 page (etc) if it looks like
> > > that will be beneficial from a packing POV. I'm unsure whether that
> > > still happens - I tried to get it stamped out for reliability reasons.
> >
> > Hmph, SLUB uses order-3 allocations for 832-byte objects
> > by default here (mm_struct).
>
> That sucks, but at least it's <= PAGE_ALLOC_COSTLY_ORDER.

Which is a somewhat arbitrary value. order-1 is costly compared to
order-0...

After running my system here for a while and doing various things
with it, I have the ability to allocate 898 order-0 pages (3592K),
or 36 order-3 pages (1152K).

Not as bad as I expected, but the system's only been up for an hour,
and not exactly doing anything unusual (and it has nearly 30MB free,
out of 4GB).


> It's fortunate that everyone has more than 128GB of memory.

And that SLAB still works quite well :)

2008-12-18 22:47:26

by Andrew Morton

Subject: Re: [PATCH][v2] fork_init: fix division by zero

On Thu, 18 Dec 2008 10:47:50 +0300
Yuri Tikhonov <[email protected]> wrote:

> Hello Paul,
>
> On Friday 12 December 2008 03:48, Paul Mackerras wrote:
> > Andrew Morton writes:
> >
> > > > +#if (8 * THREAD_SIZE) > PAGE_SIZE
> > > > max_threads = mempages / (8 * THREAD_SIZE / PAGE_SIZE);
> > > > +#else
> > > > + max_threads = mempages * (PAGE_SIZE / (8 * THREAD_SIZE));
> > > > +#endif
> > >
> > > The expression you've chosen here can be quite inaccurate, because
> > > (PAGE_SIZE / (8 * THREAD_SIZE)) is a small number. The way to
> > > preserve accuracy is
> >
> > The assumption is that THREAD_SIZE is a power of 2, as is PAGE_SIZE.
> >
> > I think Yuri should be increasing THREAD_SIZE for the larger page
> > sizes he's implementing, because we have on-stack arrays whose size
> > depends on the page size. I suspect that having THREAD_SIZE less than
> > 1/8 of PAGE_SIZE risks stack overflows, and the better fix is for Yuri
> > to make sure THREAD_SIZE is at least 1/8 of PAGE_SIZE. (In fact, more
> > may be needed - someone should work out what fraction is actually
> > needed.)
>
> Right, thanks for pointing this out. I guess I was just lucky not to run
> into stack overflow problems. So I agree that we should increase
> THREAD_SIZE in the case of 256KB pages up to 1/8 of PAGE_SIZE, that is,
> up to 32KB.
>
> There is one more warning from the common code when I use 256KB pages:
>
> CC mm/shmem.o
> mm/shmem.c: In function 'shmem_truncate_range':
> mm/shmem.c:613: warning: division by zero
> mm/shmem.c:619: warning: division by zero
> mm/shmem.c:644: warning: division by zero
> mm/shmem.c: In function 'shmem_unuse_inode':
> mm/shmem.c:873: warning: division by zero
>
> The problem here is that ENTRIES_PER_PAGEPAGE becomes 0x1.0000.0000
> when PAGE_SIZE is 256K.
>
> How about the following fix?
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 0ed0752..99d7c91 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -57,7 +57,7 @@
> #include <asm/pgtable.h>
>
> #define ENTRIES_PER_PAGE (PAGE_CACHE_SIZE/sizeof(unsigned long))
> -#define ENTRIES_PER_PAGEPAGE (ENTRIES_PER_PAGE*ENTRIES_PER_PAGE)
> +#define ENTRIES_PER_PAGEPAGE ((unsigned long long)ENTRIES_PER_PAGE*ENTRIES_PER_PAGE)
> #define BLOCKS_PER_PAGE (PAGE_CACHE_SIZE/512)
>
> #define SHMEM_MAX_INDEX (SHMEM_NR_DIRECT + (ENTRIES_PER_PAGEPAGE/2) * (ENTRIES_PER_PAGE+1))
> @@ -95,7 +95,7 @@ static unsigned long shmem_default_max_inodes(void)
> }
> #endif
>
> -static int shmem_getpage(struct inode *inode, unsigned long idx,
> +static int shmem_getpage(struct inode *inode, unsigned long long idx,
> struct page **pagep, enum sgp_type sgp, int *type);
>
> static inline struct page *shmem_dir_alloc(gfp_t gfp_mask)
> @@ -533,7 +533,7 @@ static void shmem_truncate_range(struct inode *inode, loff_t start, loff_t end)
> int punch_hole;
> spinlock_t *needs_lock;
> spinlock_t *punch_lock;
> - unsigned long upper_limit;
> + unsigned long long upper_limit;
>
> inode->i_ctime = inode->i_mtime = CURRENT_TIME;
> idx = (start + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
> @@ -1175,7 +1175,7 @@ static inline struct mempolicy *shmem_get_sbmpol(struct shmem_sb_info *sbinfo)
> * vm. If we swap it in we mark it dirty since we also free the swap
> * entry since a page cannot live in both the swap and page cache
> */
> -static int shmem_getpage(struct inode *inode, unsigned long idx,
> +static int shmem_getpage(struct inode *inode, unsigned long long idx,
> struct page **pagep, enum sgp_type sgp, int *type)
> {
> struct address_space *mapping = inode->i_mapping;
>

Looks sane. But to apply this I'd prefer a changelog, a signoff and a
grunt from Hugh.

Thanks.

2008-12-19 05:49:34

by Yuri Tikhonov

Subject: Re[2]: [PATCH][v2] fork_init: fix division by zero


Hello Andrew,

On Friday, December 19, 2008 you wrote:
[snip]
>> There is one more warning from the common code when I use 256KB pages:
>>
>> CC mm/shmem.o
>> mm/shmem.c: In function 'shmem_truncate_range':
>> mm/shmem.c:613: warning: division by zero
>> mm/shmem.c:619: warning: division by zero
>> mm/shmem.c:644: warning: division by zero
>> mm/shmem.c: In function 'shmem_unuse_inode':
>> mm/shmem.c:873: warning: division by zero
>>
>> The problem here is that ENTRIES_PER_PAGEPAGE becomes 0x1.0000.0000
>> when PAGE_SIZE is 256K.
>>
>> How about the following fix?

[snip]

> Looks sane.

Thanks for reviewing.

> But to apply this I'd prefer a changelog, a signoff and a grunt from Hugh.

Sure, I'll post this in a separate thread then, keeping Hugh in CC.

Regards, Yuri

--
Yuri Tikhonov, Senior Software Engineer
Emcraft Systems, http://www.emcraft.com

2008-12-18 07:58:39

by Yuri Tikhonov

Subject: Re: [PATCH][v2] fork_init: fix division by zero

Hello Paul,

On Friday 12 December 2008 03:48, Paul Mackerras wrote:
> Andrew Morton writes:
>
> > > +#if (8 * THREAD_SIZE) > PAGE_SIZE
> > > max_threads = mempages / (8 * THREAD_SIZE / PAGE_SIZE);
> > > +#else
> > > + max_threads = mempages * (PAGE_SIZE / (8 * THREAD_SIZE));
> > > +#endif
> >
> > The expression you've chosen here can be quite inaccurate, because
> > (PAGE_SIZE / (8 * THREAD_SIZE)) is a small number. The way to
> > preserve accuracy is
>
> The assumption is that THREAD_SIZE is a power of 2, as is PAGE_SIZE.
>
> I think Yuri should be increasing THREAD_SIZE for the larger page
> sizes he's implementing, because we have on-stack arrays whose size
> depends on the page size. I suspect that having THREAD_SIZE less than
> 1/8 of PAGE_SIZE risks stack overflows, and the better fix is for Yuri
> to make sure THREAD_SIZE is at least 1/8 of PAGE_SIZE. (In fact, more
> may be needed - someone should work out what fraction is actually
> needed.)

Right, thanks for pointing this out. I guess I was just lucky not to run
into stack overflow problems. So I agree that we should increase
THREAD_SIZE in the case of 256KB pages up to 1/8 of PAGE_SIZE, that is,
up to 32KB.

There is one more warning from the common code when I use 256KB pages:

CC mm/shmem.o
mm/shmem.c: In function 'shmem_truncate_range':
mm/shmem.c:613: warning: division by zero
mm/shmem.c:619: warning: division by zero
mm/shmem.c:644: warning: division by zero
mm/shmem.c: In function 'shmem_unuse_inode':
mm/shmem.c:873: warning: division by zero

The problem here is that ENTRIES_PER_PAGEPAGE becomes 0x1.0000.0000
when PAGE_SIZE is 256K.

How about the following fix?

diff --git a/mm/shmem.c b/mm/shmem.c
index 0ed0752..99d7c91 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -57,7 +57,7 @@
#include <asm/pgtable.h>

#define ENTRIES_PER_PAGE (PAGE_CACHE_SIZE/sizeof(unsigned long))
-#define ENTRIES_PER_PAGEPAGE (ENTRIES_PER_PAGE*ENTRIES_PER_PAGE)
+#define ENTRIES_PER_PAGEPAGE ((unsigned long long)ENTRIES_PER_PAGE*ENTRIES_PER_PAGE)
#define BLOCKS_PER_PAGE (PAGE_CACHE_SIZE/512)

#define SHMEM_MAX_INDEX (SHMEM_NR_DIRECT + (ENTRIES_PER_PAGEPAGE/2) * (ENTRIES_PER_PAGE+1))
@@ -95,7 +95,7 @@ static unsigned long shmem_default_max_inodes(void)
}
#endif

-static int shmem_getpage(struct inode *inode, unsigned long idx,
+static int shmem_getpage(struct inode *inode, unsigned long long idx,
struct page **pagep, enum sgp_type sgp, int *type);

static inline struct page *shmem_dir_alloc(gfp_t gfp_mask)
@@ -533,7 +533,7 @@ static void shmem_truncate_range(struct inode *inode, loff_t start, loff_t end)
int punch_hole;
spinlock_t *needs_lock;
spinlock_t *punch_lock;
- unsigned long upper_limit;
+ unsigned long long upper_limit;

inode->i_ctime = inode->i_mtime = CURRENT_TIME;
idx = (start + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
@@ -1175,7 +1175,7 @@ static inline struct mempolicy *shmem_get_sbmpol(struct shmem_sb_info *sbinfo)
* vm. If we swap it in we mark it dirty since we also free the swap
* entry since a page cannot live in both the swap and page cache
*/
-static int shmem_getpage(struct inode *inode, unsigned long idx,
+static int shmem_getpage(struct inode *inode, unsigned long long idx,
struct page **pagep, enum sgp_type sgp, int *type)
{
struct address_space *mapping = inode->i_mapping;

Regards, Yuri