What is the optimal TCQ depth for e.g. the aic7xxx driver on an Adaptec 39160
with 2 x U160 disks on each channel? (MAN3184MP and DDYS-T18350N)
What does it depend on?
Thanks
Margit
Margit Schubert-While wrote:
>
> What is the optimal TCQ depth for e.g. the aic7xxx driver on an Adaptec 39160
> with 2 x U160 disks on each channel? (MAN3184MP and DDYS-T18350N)
> What does it depend on?
The optimal depth depends on your workload, so the best
way to get a good answer is to try several
settings and see what works best.
Generally, a deeper tagged queue gives
better throughput. The difference
is biggest when going from 0 to 1 tag.
Making the queue progressively deeper
gives even more throughput, but the
improvement diminishes quickly.
You have to look up how deep a queue your disks
support, because this varies among
models. Some max out at 8,
others can handle 256. There seems to be little
benefit at all beyond 32.
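As a quick check, the 2.4 aic7xxx driver reports per-target state
through procfs. A minimal sketch, assuming the 39160's first channel
registered as SCSI host 0 (look in /proc/scsi/ for the actual host
numbers on your box):

  # show negotiated settings, including tagged queueing, per target
  cat /proc/scsi/aic7xxx/0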
There is currently a downside to deep queues.
Linux 2.5 has a fair I/O scheduler that ensures
that no process waits "too long" for its
disk accesses. This is particularly
important for reads. A deep tagged
queue may improve total throughput, but
it may defeat the I/O scheduler, so
processes doing random reads get starved
waiting for processes doing large linear
reads or writes.
If this doesn't bother you, go for a deep
queue, perhaps 32. If you want fairness
(users waiting for a response, i.e. a desktop
machine or a file/web server for multiple
people), go for something shallower.
I have seen recommendations as low as 2-4 tags
to let the I/O scheduler enforce fairness.
Even 2-4 tags is a lot better than none; going
from 4 to 32 probably makes less difference
than going from 0 to 4.
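With the aic7xxx driver the depth can be capped per target with its
tag_info option; the exact syntax is documented with the driver. A
sketch, assuming the 39160 shows up as two adapters and listing four
values per channel (the depth of 4 is just the shallow example from
above, not a measured optimum):

  # /etc/modules.conf, if aic7xxx is built as a module:
  options aic7xxx aic7xxx=tag_info:{{4,4,4,4},{4,4,4,4}}
  # or the same thing as a boot parameter, e.g. in lilo.conf:
  append="aic7xxx=tag_info:{{4,4,4,4},{4,4,4,4}}"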
For an example of what a fair scheduler does,
look at the previously posted benchmarks of
kernel compiles during a dbench run on the same disk.
Without fairness the compile takes many times longer;
with fairness the compile time is almost the same
as when compiling without the additional burden of dbench.
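Something along these lines reproduces that kind of test (the client
count of 32 is an arbitrary choice, not taken from the posted
benchmarks):

  # heavy streaming I/O load on the disk in the background ...
  dbench 32 &
  # ... while timing a compile that competes for the same disk
  time make bzImage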
Helge Hafting
Hi,
it seems that it isn't possible to map the memory of a PCMCIA card, or to read
from it, if it is behind a PCI-to-PCI bridge.
I know that there were some hacks/patches for problems like this, but none of
these patches worked with the hardware I use.
It doesn't matter which kind of CardBus adapter is used, so the problem depends
on the chipset I use:
it is an nVidia nForce 420 chipset on an Asus A7N266 mainboard.
There were some patches for the yenta driver, but I think it is possible
that the ds module has problems with this hardware environment, too.
I have tried various memory windows for the PCMCIA card, but it doesn't find a
clean memory window at all.
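For reference, the memory windows that cardmgr tries to reserve come
from /etc/pcmcia/config.opts; the lines below only illustrate the
syntax, and the ranges (taken from the log below) are examples rather
than values known to be free behind this bridge:

  # /etc/pcmcia/config.opts - memory ranges offered to card services
  include memory 0xd0000-0xdffff
  include memory 0x60000000-0x60ffffff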
greetings,
Harald Jung
lspci:
<=======
01:07.0 CardBus bridge: Texas Instruments PCI1410 PC card Cardbus Controller
(rev 01)
Subsystem: SCM Microsystems: Unknown device 3000
Flags: medium devsel, IRQ 5
Memory at e5801000 (32-bit, non-prefetchable) [disabled] [size=4K]
Bus: primary=01, secondary=02, subordinate=05, sec-latency=0
I/O window 0: 00000000-00000003 [disabled]
I/O window 1: 00000000-00000003 [disabled]
16-bit legacy interface ports at 0001
=======>
syslog:
<=======
Nov 12 17:38:59 tr rcpcmcia: /sbin/insmod
/lib/modules/2.4.18/kernel/drivers/pcmcia/pcmcia_core.o
Nov 12 17:38:59 tr kernel: Linux Kernel Card Services 3.1.22
Nov 12 17:38:59 tr kernel: options: [pci] [cardbus] [pm]
Nov 12 17:38:59 tr rcpcmcia: /sbin/insmod
/lib/modules/2.4.18/kernel/drivers/pcmcia/yenta_socket.o
Nov 12 17:38:59 tr kernel: PCI: Enabling device 01:07.0 (0000 -> 0002)
Nov 12 17:38:59 tr kernel: Yenta IRQ list 0000, PCI irq5
Nov 12 17:38:59 tr kernel: Socket status: 10000011
Nov 12 17:38:59 tr rcpcmcia: /sbin/insmod
/lib/modules/2.4.18/kernel/drivers/pcmcia/ds.o
Nov 12 17:39:01 tr cardmgr[1012]: watching 1 sockets
Nov 12 17:39:01 tr cardmgr[1012]: could not adjust resource: memory
0xd0000-0xdffff: Input/output error
Nov 12 17:39:01 tr kernel: cs: IO port probe 0x0c00-0x0cff: excluding
0xcf0-0xcff
Nov 12 17:39:01 tr kernel: cs: IO port probe 0x0800-0x08ff: clean.
Nov 12 17:39:01 tr kernel: cs: IO port probe 0x0100-0x04ff: excluding
0x170-0x177 0x200-0x207 0x330-0x337 0x370-0x37f 0x4d0-0x4d7
Nov 12 17:39:01 tr kernel: cs: IO port probe 0x0a00-0x0aff: clean.
Nov 12 17:39:01 tr cardmgr[1013]: starting, version is 3.2.3
Nov 12 17:39:01 tr kernel: cs: memory probe 0xa0000000-0xa0ffffff: excluding
0xa0000000-0xa0ffffff
Nov 12 17:39:01 tr kernel: cs: memory probe 0x60000000-0x60ffffff: excluding
0x60000000-0x60ffffff
Nov 12 17:39:01 tr kernel: cs: memory probe 0x2012000-0x2012fff: excluding
0x2012000-0x2013fff
Nov 12 17:39:01 tr cardmgr[1013]: socket 0: Anonymous Memory
Nov 12 17:39:01 tr cardmgr[1013]: executing: 'modprobe memory_cs'
Nov 12 17:39:01 tr cardmgr[1013]: + modprobe: Can't locate module memory_cs
Nov 12 17:39:01 tr cardmgr[1013]: modprobe exited with status 255
Nov 12 17:39:01 tr cardmgr[1013]: module
/lib/modules/2.4.18/pcmcia/memory_cs.o not available
Nov 12 17:39:02 tr cardmgr[1013]: get dev info on socket 0 failed: Resource
temporarily unavailable
Nov 12 17:39:30 tr cardmgr[1013]: executing: 'modprobe -r memory_cs'
Nov 12 17:39:30 tr cardmgr[1013]: exiting
Nov 12 17:39:30 tr kernel: unloading Kernel Card Services
=======>
On Wed, Nov 13, 2002 at 05:50:22PM +0100, Harald Jung wrote:
> it seems that it isn't possible to map the memory of a PCMCIA card, or to
> read from it, if it is behind a PCI-to-PCI bridge.
In its present state, yenta.c just can't handle this.
I've got a pile of patches which allow it to work in a similar situation
on non-x86 platforms, but they break x86 (since they don't use the
generic PCI resource code).
Still working on it though, between doing lots of other stuff.
--
Russell King ([email protected]) The developer of ARM Linux
http://www.arm.linux.org.uk/personal/aboutme.html