Does anyone have any production experience running Oracle 8i on Linux? I've
run it at home on RH 7.2 with a vanilla 2.4.16 kernel and all IDE drives, and
it's fast. We are replacing our Sun/Oracle 8 servers at work in the next
couple of months with Linux/Oracle 8i (Pentium 4, 1 GB RAM). My question is:
what is the best kernel version to use, vanilla 2.4.x or a RH kernel built
from the -ac tree with rmap? All drives will be SCSI.
I read an interview yesterday with Rik van Riel where he said rmap worked
better for db servers, but I expect that he is partial to rmap 8-).
Our web servers are running vanilla 2.4.16 and we haven't had a problem yet
(knock on wood).
Thanks !
--
Walter Anthony
System Administrator
National Electronic Attachment
"If it's not broke....tweak it"
> Does anyone have any production experience running Oracle 8i on Linux? I've
> run it at home, RH 7.2 with vanilla 2.4.16 kernel all IDE drives, and its
> fast. We are replacing our SUN/Oracle 8 servers at work in next couple of
> months with Linux/Oracle 8i (Pentium 4 1GB ram). My question is, what is the
> best kernel version to use, vanilla 2.4.x or a RH kernel built from the ac
> tree with rmap. All drives will be SCSI.
> I read an interview yesterday with Rik van Riel where he said rmap worked
> better for db servers but I expect that he is partial to rmap 8-).
> Our web servers are running vanilla 2.4.16 and we haven't had a problem yet
> (knock on wood).
The real answer is to try them and do a benchmark for your particular
application. Shouldn't take that long .... try the -aa tree too.
Martin.
On Tue, 12 Mar 2002, Martin J. Bligh wrote:
> > Does anyone have any production experience running Oracle 8i on Linux? I've
> > run it at home, RH 7.2 with vanilla 2.4.16 kernel all IDE drives, and its
> > fast. We are replacing our SUN/Oracle 8 servers at work in next couple of
>
> The real answer is to try them and do a benchmark for your particular
> application. Shouldn't take that long .... try the -aa tree too.
>
I can't speak for -aa, but I can say definitively: DO NOT stay with the
"stock" kernel for Oracle applications. We're using -rmap here (mostly 9i
with some 8 scattered around) and performance under moderate and heavy
load is an order of magnitude better.
--
-Jonathan <[email protected]>
Depending on your hardware configuration and how your db workload stresses
the system, you should consider some of the performance patches from the
Linux Scalability Effort project.
http://lse.sourceforge.net
Has anyone done any comparisons of Oracle performance on Linux/Intel vs
Solaris/Sun?
--Jauder
On Tue, 12 Mar 2002, walter wrote:
> Does anyone have any production experience running Oracle 8i on Linux? I've
> run it at home, RH 7.2 with vanilla 2.4.16 kernel all IDE drives, and its
> fast. We are replacing our SUN/Oracle 8 servers at work in next couple of
> months with Linux/Oracle 8i (Pentium 4 1GB ram). My question is, what is the
> best kernel version to use, vanilla 2.4.x or a RH kernel built from the ac
> tree with rmap. All drives will be SCSI.
> I read an interview yesterday with Rik van Riel where he said rmap worked
> better for db servers but I expect that he is partial to rmap 8-).
> Our web servers are running vanilla 2.4.16 and we haven't had a problem yet
> (knock on wood).
>
> Thanks !
> --
> Walter Anthony
> System Administrator
> National Electronic Attachment
> "If it's not broke....tweak it"
> better for db servers but I expect that he is partial to rmap 8-).
> Our web servers are running vanilla 2.4.16 and we haven't had a problem yet
> (knock on wood).
I think your .sig is the problem 8)
> "If it's not broke....tweak it"
If it aint broke don't fix it 8)
Alan
Chen, Kenneth W wrote:
> Depends on your hardware configuration and how you stress your system with
> db workload, you should consider some performance patch from the linux
> scalability effort project.
> http://lse.sourceforge.net
In particular, take a look at the rollup patches:
http://sourceforge.net/project/shownotes.php?release_id=77093
This one has been tested pretty well.
http://prdownloads.sourceforge.net/lse/lse01.patch
This could use some more testing, but is not bad by any means:
http://prdownloads.sourceforge.net/lse/lse02.patch
BTW, what SCSI controllers are you planning on using? Some are better
than others.
--
Dave Hansen
[email protected]
On Wednesday 13 March 2002 02:40 pm, you wrote:
> Chen, Kenneth W wrote:
> > Depends on your hardware configuration and how you stress your system
> > with db workload, you should consider some performance patch from the
> > linux scalability effort project.
> > http://lse.sourceforge.net
>
> In particular, take a look at the rollup patches:
> http://sourceforge.net/project/shownotes.php?release_id=77093
>
> This one has been tested pretty well.
> http://prdownloads.sourceforge.net/lse/lse01.patch
>
> This could use some more testing, but is not bad by any means:
> http://prdownloads.sourceforge.net/lse/lse02.patch
>
> BTW, what SCSI controllers are you planning on using? Some are better
> than others.
Not sure right off the top of my head. I'm planning on using 2 controllers,
each from a different manufacturer. My reasoning behind this is twofold.
Number one, in case a "bug" creeps up in one of the drivers I can still
string all the drives off the other controller. Performance will decrease,
but I'd rather be slow than dead in the water. The second reason is the
probability of both controllers failing (hardware) at the same time due to a
bad chip batch from the manufacturer. Do you have any suggestions on
controllers? Adaptec and IBM (not sure which models)?
Thanks for your input!
walt
walter wrote:
> Not sure right off the top of my head. I'm planning on using 2 controllers,
> each from a different manufactures. My reasoning behind this is two fold.
> Number one is in case a "bug" creeps up with one of the drivers I can still
> string all the drives off the other controller. Performance will decrease,
> but I'd rather be slow than dead in the water. The second reason is the
> probability of both controllers failing (hardware) at same time due to a bad
> chip batch at the manufacture. Do you have any suggestions on controllers?
> Adaptec and IBM (not sure which models) ?
I haven't done any of the testing myself, but I was told that the
Adaptec AIC stuff is good. I think that the LSE patch has been tested
with Adaptec (aic7xxx) and QLogic Fibre Channel controllers. I guess
that the QLogic stuff is liked because the drivers are open source.
I was surprised to see that the ServeRAID driver isn't touched by the
lse patch. I thought that it still uses the io_request_lock in 2.4.
Care to add anything, Gerrit?
--
Dave Hansen
[email protected]
The IPS/ServeRAID driver can work with the siorl patch; it just isn't in
the lse02 rollup. It will probably be in the lse04 rollup once I get done
testing the lse03 rollup. ;-)
If the source is available for a particular driver and you are interested
in some level of IO scalability in a 2.4 kernel, we have a fairly robust
patch that can easily be made to work. If the driver is reasonably
written, the modification to support the siorl patch just enables
the feature. If it is not well written, we might need to take a look
at the locking model used and propose a few mods. BTW, IDE is not
"well written" from this perspective. Also, I believe some future
Red Hat kernels will include a more wide-sweeping version of the siorl
patch which may support all drivers out of the box.
gerrit
In message <[email protected]>, > : [email protected] writes:
>
>
>
> To: walter <[email protected]>
> cc: [email protected], [email protected], Gerrit Huizenga
> <[email protected]>
>
>
>
>
>
> walter wrote:
> > Not sure right off the top of my head. I'm planning on using 2
> controllers,
> > each from a different manufactures. My reasoning behind this is two fold.
> > Number one is in case a "bug" creeps up with one of the drivers I can
> still
> > string all the drives off the other controller. Performance will
> decrease,
> > but I'd rather be slow than dead in the water. The second reason is the
> > probability of both controllers failing (hardware) at same time due to a
> bad
> > chip batch at the manufacture. Do you have any suggestions on
> controllers?
> > Adaptec and IBM (not sure which models) ?
>
> I haven't done any of the testing myself. But, I was told that the
> Adaptec AIC stuff is good. I think that the LSE patch has been tested
> on with Adaptec (aic7xxx) and QLogic fiber channel controllers. I guess
> that the QLogic stuff is liked because the drivers are open source.
>
> I was surprised to see that the ServeRAID driver isn't touched by the
> lse patch. I thought that it still uses the io_request_lock in 2.4.
>
> Care to add anything, Gerrit?
>
> --
> Dave Hansen
> [email protected]