2002-09-04 13:08:04

by Craig Arsenault

Subject: consequences of lowering "MAX_LOW_MEM"?


Hi all,
I'll explain why I want to do what I'm asking below, but if
anyone has any reasons or explanations why it won't work, I'd love to
hear them.

In 2.4.x (currently using 2.4.18), for PPC, there is a value for
"MAX_LOW_MEM" defined in "arch/ppc/mm/pgtable.c" as 768MB of RAM. Any
memory above 768MB is considered "high" memory. Our problem is
that we have 1024MB of onboard RAM on our card. I do *NOT* wish to
compile with "CONFIG_HIGHMEM" set to true (see below for why), but I
do wish to have full use of the 1024MB of RAM onboard, or at least
992MB, which is the minimum for our app.
So what I did was just change "MAX_LOW_MEM" from 0x30000000 to
0x3E000000, i.e. from 768MB to 992MB. I recompiled and tested
our application. Things seemed to be running normally with a max of
992MB of RAM.

Is this a potential problem, or will it cause some lurking bug that
anyone can think of? (i.e. I'm sure "MAX_LOW_MEM" was set to 768MB for
a reason, but what is that reason?) We don't want to move higher
than 1GB of RAM for now, so are we going to be okay doing what I
describe above? Any suggestions or comments as to why that's a very
bad idea would be greatly appreciated. Again, this is for a
PPC-specific board; I'm not sure what the x86 architecture's low
memory max is.


REASON for asking:
Currently, one piece of hardware on our card (a PMC card) is using a
closed-source driver, and the vendor has less-than-stellar Linux drivers
and support. Their driver has problems with CONFIG_HIGHMEM turned on
(they are using kiobufs, and those are getting messed up), so as a hack
until they fix their driver, we were contemplating moving MAX_LOW_MEM.
Yes, I know closed-source drivers are bad in some cases, but we had
little choice in this product, and our goal is to move away from it
and use something else.

Thanks for any info/help.

--
Craig.
+------------------------------------------------------+
http://www.wombat.ca/rpmon.html RP Music Monitor
http://www.washington.edu/pine/ Pine @ the U of Wash.
+-------------=*sent via Pine4.44*=--------------------+

2002-09-04 14:22:15

by Tom Rini

Subject: Re: consequences of lowering "MAX_LOW_MEM"?

[cc'ed linuxppc-dev, which is where you should probably have asked
this..]

On Wed, Sep 04, 2002 at 09:12:31AM -0400, Craig Arsenault wrote:

> Hi all,
> Now I'll explain "why" i want to do what I'm asking below, but if
> anyone has any reasons/explanations why it won't work, I'd love to
> hear it.
>
> In 2.4.x (currently using 2.4.18), for PPC, there is a value for
> "MAX_LOW_MEM" defined in "arch/ppc/mm/pgtable.c" as 768MB RAM. Any
> memory above 768MB is considered "high" memory. Now our problem is
> that we have 1024MB of onboard RAM on our card. I do *NOT* wish to
> compile with "CONFIG_HIGHMEM" set to true (see below for why), but i
> do wish to have full use of the 1024MB of RAM onboard, or at least
> 992MB which is the minimum for our app.
> So what I did was just change "MAX_LOW_MEM" to be 0x3E000000
> (0x30000000), ie. change it to 992 from 768. I recompiled and tested
> our application. Things seemed to be running normal with a max of
> 992MB of RAM.
>
> Is this a potential problem, or will this cause some lurking bug that
> anyone can think of? (ie. I'm sure "MAX_LOW_MEM" was set to 768MB for
> a reason, but what is that reason). We don't want to move higher
> than 1Gig RAM for now, so are we going to be okay doing what I
> describe above? Any suggestions or comments as to why that's a very
> bad idea would be greatly appreciated. Again, this is for a
> PPC-specific board, I'm not sure what the x86 architecture's low
> memory max is.
>
>
> REASON for asking:
> Currently, one piece of hardware on our card (a PMC card) is using a
> closed-source driver, and they have less-than stellar linux drivers
> and support. Their driver has problems with CONFIG_HIGHMEM turned on
> (they are using kiobuf's and they are getting messed up), so as a hack
> until they fix their driver, we were contemplating moving MAX_LOW_MEM.
> Yes, I know closed-source drivers are bad in some cases, but we had
> little choice in this product, and our goal is to move away from it
> and use something else.

Well, in the linuxppc_2_4_devel tree
(http://penguinppc.org/dev/kernel.shtml), it's possible to change all of
these parameters for custom applications, like what you describe later.

One important thing to keep in mind is that MAX_LOW_MEM cannot be
bigger than 0xF0000000 - KERNELBASE (which by default is 0xC0000000), or
bad things can happen. And if you modify KERNELBASE, there are other
things you need to be aware of, so if you're going to do this, please
look at the linuxppc_2_4_devel tree (or at 2.5) to see all of the
interdependencies.

--
Tom Rini (TR1265)
http://gate.crashing.org/~trini/

2002-09-04 15:30:25

by Martin J. Bligh

Subject: Re: consequences of lowering "MAX_LOW_MEM"?

>> In 2.4.x (currently using 2.4.18), for PPC, there is a value for
>> "MAX_LOW_MEM" defined in "arch/ppc/mm/pgtable.c" as 768MB RAM. Any
>> memory above 768MB is considered "high" memory. Now our problem is
>> that we have 1024MB of onboard RAM on our card. I do *NOT* wish to
>> compile with "CONFIG_HIGHMEM" set to true (see below for why), but i
>> do wish to have full use of the 1024MB of RAM onboard, or at least
>> 992MB which is the minimum for our app.
>> So what I did was just change "MAX_LOW_MEM" to be 0x3E000000
>> (0x30000000), ie. change it to 992 from 768. I recompiled and tested
>> our application. Things seemed to be running normal with a max of
>> 992MB of RAM.
>>
>> Is this a potential problem, or will this cause some lurking bug that
>> anyone can think of? (ie. I'm sure "MAX_LOW_MEM" was set to 768MB for
>> a reason, but what is that reason). We don't want to move higher
>> than 1Gig RAM for now, so are we going to be okay doing what I
>> describe above? Any suggestions or comments as to why that's a very
>> bad idea would be greatly appreciated. Again, this is for a
>> PPC-specific board, I'm not sure what the x86 architecture's low
>> memory max is.

I think you'll find yourself with no virtual address space left to
do vmalloc / fixmap / kmap type stuff. Or at least you would on i386;
I presume it's the same for PPC. It sounds like you may have left
yourself enough space for fixmap and kmap, but any calls to vmalloc
will probably fail?

M.

2002-09-04 16:31:58

by Matt Porter

Subject: Re: consequences of lowering "MAX_LOW_MEM"?

On Wed, Sep 04, 2002 at 08:32:15AM -0700, Martin J. Bligh wrote:
>
> >> In 2.4.x (currently using 2.4.18), for PPC, there is a value for
> >> "MAX_LOW_MEM" defined in "arch/ppc/mm/pgtable.c" as 768MB RAM. Any
> >> memory above 768MB is considered "high" memory. Now our problem is
> >> that we have 1024MB of onboard RAM on our card. I do *NOT* wish to
> >> compile with "CONFIG_HIGHMEM" set to true (see below for why), but i
> >> do wish to have full use of the 1024MB of RAM onboard, or at least
> >> 992MB which is the minimum for our app.
> >> So what I did was just change "MAX_LOW_MEM" to be 0x3E000000
> >> (0x30000000), ie. change it to 992 from 768. I recompiled and tested
> >> our application. Things seemed to be running normal with a max of
> >> 992MB of RAM.
> >>
> >> Is this a potential problem, or will this cause some lurking bug that
> >> anyone can think of? (ie. I'm sure "MAX_LOW_MEM" was set to 768MB for
> >> a reason, but what is that reason). We don't want to move higher
> >> than 1Gig RAM for now, so are we going to be okay doing what I
> >> describe above? Any suggestions or comments as to why that's a very
> >> bad idea would be greatly appreciated. Again, this is for a
> >> PPC-specific board, I'm not sure what the x86 architecture's low
> >> memory max is.
>
> I think you'll find yourself with no virtual address space left to
> do vmalloc / fixmap / kmap type stuff. Or at least you would on i386,
> I presume it's the same for ppc. Sounds like you may have left
> yourself enough space for fixmap & kmap, but any calls to vmalloc
> will probably fail ?

Correct. The solution, in the context of the linuxppc_2_4_devel tree,
is to do the following:

Enable "Prompt for advanced kernel configuration options"
Enable "High memory support"
Enable "Set maximum low memory" and set to 0x40000000
Enable "Set custom kernel base address" and set to 0xa0000000

Note that highmem support will not be used in his case (he didn't
want to use it), because PAGE_OFFSET is at 0xa0000000 and
MAX_LOW_MEM is at 1GB. With this configuration, VMALLOC_START
will be at 0xe0000000 + VMALLOC_OFFSET, leaving ample vmalloc
space for most applications. All system memory is mapped as
lowmem.

Regards,
--
Matt Porter
[email protected]
This is Linux Country. On a quiet night, you can hear Windows reboot.

2002-09-04 18:29:10

by Benjamin Herrenschmidt

Subject: Re: consequences of lowering "MAX_LOW_MEM"?

>I think you'll find yourself with no virtual address space left to
>do vmalloc / fixmap / kmap type stuff. Or at least you would on i386,
>I presume it's the same for ppc. Sounds like you may have left
>yourself enough space for fixmap & kmap, but any calls to vmalloc
>will probably fail ?

Yes, same problem on PPC: you'll run out of virtual space quite
quickly for vmalloc and ioremap. Stuff a video board with lots
of VRAM or any PCI card exposing large MMIO regions into your
machine and it will probably not even boot.

Ben.


2002-09-04 18:31:46

by Craig Arsenault

Subject: Re: consequences of lowering "MAX_LOW_MEM"?

On Wed, 4 Sep 2002, Matt Porter wrote:

> On Wed, Sep 04, 2002 at 08:32:15AM -0700, Martin J. Bligh wrote:
> >
> > >> In 2.4.x (currently using 2.4.18), for PPC, there is a value for
> > >> "MAX_LOW_MEM" defined in "arch/ppc/mm/pgtable.c" as 768MB RAM. Any
> > >> memory above 768MB is considered "high" memory. Now our problem is
> > >> that we have 1024MB of onboard RAM on our card. I do *NOT* wish to
> > >> compile with "CONFIG_HIGHMEM" set to true (see below for why), but i
> > >> do wish to have full use of the 1024MB of RAM onboard, or at least
> > >> 992MB which is the minimum for our app.
> > >> So what I did was just change "MAX_LOW_MEM" to be 0x3E000000
> > >> (0x30000000), ie. change it to 992 from 768. I recompiled and tested
>
<snip>

> Correct. The solution, in the context of the linuxppc_2_4_devel tree,
> is to do the following:
>
> Enable "Prompt for advanced kernel configuration options"
> Enable "High memory support"
> Enable "Set maximum low memory" and set to 0x40000000
> Enable "Set custom kernel base address" and set to 0xa0000000
>
> Note that highmem support will not be used in his case (he didn't
> want to use it), because PAGE_OFFSET is at 0xa0000000 and
> MAX_LOW_MEM is at 1GB. With this configuration, VMALLOC_START
> will be at 0xe0000000 + VMALLOC_OFFSET leaving ample vmalloc
> space for most applications. All system memory is mapped as
> lowmem .
>

Matt,
That looks exactly like what I'd need. Thanks.
I grabbed the "2.4.20-pre5" source of linuxppc_2_4_devel, and
I do see those options in there.
However, my problem is that I cannot move to a development kernel for
this application -> I have to stick to the stable tree. So I could
either wait for 2.4.20 to come out, OR could I just look at where
"CONFIG_LOWMEM_SIZE" and "CONFIG_KERNEL_START" are used in 2.4.20-pre5
and patch them back myself into 2.4.18?


Regardless, thanks very much to everyone who replied for the info.


--
Craig.
+------------------------------------------------------+
http://www.wombat.ca/rpmon.html RP Music Monitor
http://www.washington.edu/pine/ Pine @ the U of Wash.
+-------------=*sent via Pine4.44*=--------------------+



2002-09-04 18:38:25

by Matt Porter

Subject: Re: consequences of lowering "MAX_LOW_MEM"?

On Wed, Sep 04, 2002 at 02:35:27PM -0400, Craig Arsenault wrote:
>
> On Wed, 4 Sep 2002, Matt Porter wrote:
> > Correct. The solution, in the context of the linuxppc_2_4_devel tree,
> > is to do the following:
> >
> > Enable "Prompt for advanced kernel configuration options"
> > Enable "High memory support"
> > Enable "Set maximum low memory" and set to 0x40000000
> > Enable "Set custom kernel base address" and set to 0xa0000000
> >
> > Note that highmem support will not be used in his case (he didn't
> > want to use it), because PAGE_OFFSET is at 0xa0000000 and
> > MAX_LOW_MEM is at 1GB. With this configuration, VMALLOC_START
> > will be at 0xe0000000 + VMALLOC_OFFSET leaving ample vmalloc
> > space for most applications. All system memory is mapped as
> > lowmem .
> >
>
> Matt,
> That looks exactly like what I'd need. Thanks.
> I grabbed the "2.4.20-pre5" source of linuxppc_2_4_devel, and
> I do see those options in there.
> However, my problem is that I cannot move to a development kernel for
> this application -> i have to stick on the stable tree. So i could
> either wait for 2.4.20 to come out, OR could i just look at where
> "CONFIG_LOWMEM_SIZE" and "CONFIG_KERNEL_START" are used in 2.4.20-pre5
> and patch them back myself to 2.4.18?

_devel is pretty stable. You don't have to use the latest; just
clone up to the changeset including 2.4.18/19 if you like. Or,
as you suggest, just use it as a reference to make a one-off
mod in your own tree. I put in these options because of the
number of times I had to hack these things for specific embedded
applications. :)

> Regardless, thanks very much to everyone who replied for the info.

np

Regards,
--
Matt Porter
[email protected]
This is Linux Country. On a quiet night, you can hear Windows reboot.

2002-09-04 18:59:06

by Craig Arsenault

Subject: Re: consequences of lowering "MAX_LOW_MEM"?

On Tue, 3 Sep 2002, Benjamin Herrenschmidt wrote:

> >I think you'll find yourself with no virtual address space left to
> >do vmalloc / fixmap / kmap type stuff. Or at least you would on i386,
> >I presume it's the same for ppc. Sounds like you may have left
> >yourself enough space for fixmap & kmap, but any calls to vmalloc
> >will probably fail ?
>
> Yes, same problem on PPC, you'll run out of virtual space quite
> quickly for vmalloc and ioremap. Stuff a video board with lots
> of VRAM or any PCI card exposing large MMIO regions into your
> machines and it will probably not even boot.
>
> Ben.
>

Ben,
But doesn't using Matt's suggestion, moving MAX_LOW_MEM and
changing KERNELBASE, take care of this? It's an embedded board with no
video, but it does have one PCI Mezzanine Card (PMC) on it.

Thanks.

--
Craig.
+------------------------------------------------------+
http://www.wombat.ca/rpmon.html RP Music Monitor
http://www.washington.edu/pine/ Pine @ the U of Wash.
+-------------=*sent via Pine4.44*=--------------------+

2002-09-04 19:57:14

by Benjamin Herrenschmidt

Subject: Re: consequences of lowering "MAX_LOW_MEM"?

>
>> >I think you'll find yourself with no virtual address space left to
>> >do vmalloc / fixmap / kmap type stuff. Or at least you would on i386,
>> >I presume it's the same for ppc. Sounds like you may have left
>> >yourself enough space for fixmap & kmap, but any calls to vmalloc
>> >will probably fail ?
>>
>> Yes, same problem on PPC, you'll run out of virtual space quite
>> quickly for vmalloc and ioremap. Stuff a video board with lots
>> of VRAM or any PCI card exposing large MMIO regions into your
>> machines and it will probably not even boot.
>>
>> Ben.
>>
>
>Ben,
> But doesn't using Matt's suggestion and moving both MAX_LOW_MEM and
>changing KERNELBASE take care of this? It's an embedded board with no
>video, but it does have one PCI Mezzanine Card (PMC) on it.

Yes, Matt's suggestion would work, though I've never tried lowering
KERNELBASE. I don't think the kernel supports lowering it below
0x80000000, btw.

Ben.


2002-09-04 20:37:24

by Matt Porter

Subject: Re: consequences of lowering "MAX_LOW_MEM"?

On Wed, Sep 04, 2002 at 10:02:27PM +0200, Benjamin Herrenschmidt wrote:
>
> >
> >> >I think you'll find yourself with no virtual address space left to
> >> >do vmalloc / fixmap / kmap type stuff. Or at least you would on i386,
> >> >I presume it's the same for ppc. Sounds like you may have left
> >> >yourself enough space for fixmap & kmap, but any calls to vmalloc
> >> >will probably fail ?
> >>
> >> Yes, same problem on PPC, you'll run out of virtual space quite
> >> quickly for vmalloc and ioremap. Stuff a video board with lots
> >> of VRAM or any PCI card exposing large MMIO regions into your
> >> machines and it will probably not even boot.
> >>
> >> Ben.
> >>
> >
> >Ben,
> > But doesn't using Matt's suggestion and moving both MAX_LOW_MEM and
> >changing KERNELBASE take care of this? It's an embedded board with no
> >video, but it does have one PCI Mezzanine Card (PMC) on it.
>
> Yes, Matt's suggestion would work, though I never tried lowering
> KERNELBASE. I don't think the kernel supports lowering it below
> 0x80000000 btw.

Take a look at those options. :) I added one to tweak TASK_SIZE
from the PPC default of 0x80000000.

I've run a system with a TASK_SIZE of 1GB, MAX_LOW_MEM at 16MB,
PAGE_OFFSET at 0x40000000, and HIGHMEM on, with PKMAP_BASE
placed higher than the default 0xfe000000 (with 1GB of RAM
installed), to allow for close to 3GB of vmalloc space. Certain
embedded systems with large PCI windows (non-transparent
bridges, for example) gobble up vmalloc space pretty darn quickly,
and this is only getting worse with RapidIO on a 32-bit processor.

Regards,
--
Matt Porter
[email protected]
This is Linux Country. On a quiet night, you can hear Windows reboot.