Hi Takashi!
I'm looking into adding proper sound support for the Mac Mini. The
problem is that, from what I've seen (the Apple driver is only partially
open source nowadays, it seems, and the latest Darwin drop is both
incomplete and doesn't build), that beast only has a fixed-function D->A
converter and no HW volume control.
It seems that Apple's driver has an in-kernel framework for doing volume
control, mixing, and other horrors right in the kernel, in temporary
buffers, just before they get DMA'ed (gack!)
I want to avoid something like that. How "friendly" would ALSA be to
drivers that don't have any HW volume control capability? Do typical
userland libraries provide software volume control? Do you
suggest I just don't do any control? Or should I implement a double-
buffer scheme with software gain as well in the kernel driver?
Ben.
On Mon, Mar 28, 2005 at 09:42:00AM +1000, Benjamin Herrenschmidt wrote:
> suggest I just don't do any control ? Or should I implement a double
> buffer scheme with software gain as well in the kernel driver ?
I recall having sometimes clicked on volume controls that weren't
hardware related; I don't pay much attention when stuff works, perhaps
it was the KDE sound system doing it or something like that.
I would suggest doing the D->A only, then adding a basic hack for the
G5 too ;), and then going back to the mini to do the gain emulation in
kernel space if somebody complains ;). Doing the software emulation
sounds quite orthogonal to the rest, so it can be done later if needed.
Too-loud sound is better than no sound.
On Mon, 2005-03-28 at 03:42 +0200, Andrea Arcangeli wrote:
> On Mon, Mar 28, 2005 at 09:42:00AM +1000, Benjamin Herrenschmidt wrote:
> > suggest I just don't do any control ? Or should I implement a double
> > buffer scheme with software gain as well in the kernel driver ?
>
> I recall to have sometime clicked on volume controls that weren't
> hardware related, I don't pay much attention when stuff works, perhaps
> it was the kde sound system doing it or something like that.
>
> I would suggest doing the D->A only, then adding a basic hack to
> g5 too ;), and then go back to the mini to do the gain emulation in
> kernel space if somebody complains ;). Doing the software emulation
> sounds quite orthogonal to the rest so it can be done later if needed.
>
> Too loud sound is better than no sound.
Will do, of course. As for the G5, yes, I need to work on that too.
Ben.
On Mon, 2005-03-28 at 09:42 +1000, Benjamin Herrenschmidt wrote:
> It seems that Apple's driver has an in-kernel framework for doing volume
> control, mixing, and other horrors right in the kernel, in temporary
> buffers, just before they get DMA'ed (gack !)
>
> I want to avoid something like that. How "friendly" would Alsa be to
> drivers that don't have any HW volume control capability ? Does typical
> userland libraries provide software processing volume control ? Do you
> suggest I just don't do any control ? Or should I implement a double
> buffer scheme with software gain as well in the kernel driver ?
alsa-lib handles both mixing (dmix plugin) and volume control (softvol
plugin) in software for codecs like this that don't do it in hardware.
Since Windows does mixing and volume control in the kernel (ugh) it's
increasingly common to find devices that cannot do these. You don't
need to handle it in the driver at all.
dmix has been around for a while, but the softvol plugin is very new;
you will need ALSA CVS or the upcoming 1.0.9 release.
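For reference, a minimal ~/.asoundrc dmix setup looks roughly like the
sketch below; the PCM name "mixed", the ipc_key and the hw:0,0 slave are
just placeholders, adjust them for the actual card:

pcm.mixed {
        type dmix
        ipc_key 1024
        slave {
                pcm "hw:0,0"
        }
}

Then several players can run at once, e.g. "aplay -Dmixed foo.wav", and
get mixed in software.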
Lee
On Mon, 2005-03-28 at 22:36 -0500, Lee Revell wrote:
> On Mon, 2005-03-28 at 09:42 +1000, Benjamin Herrenschmidt wrote:
> > It seems that Apple's driver has an in-kernel framework for doing volume
> > control, mixing, and other horrors right in the kernel, in temporary
> > buffers, just before they get DMA'ed (gack !)
> >
> > I want to avoid something like that. How "friendly" would Alsa be to
> > drivers that don't have any HW volume control capability ? Does typical
> > userland libraries provide software processing volume control ? Do you
> > suggest I just don't do any control ? Or should I implement a double
> > buffer scheme with software gain as well in the kernel driver ?
>
> alsa-lib handles both mixing (dmix plugin) and volume control (softvol
> plugin) in software for codecs like this that don't do it in hardware.
> Since Windows does mixing and volume control in the kernel (ugh) it's
> increasingly common to find devices that cannot do these. You don't
> need to handle it in the driver at all.
Yeah, OS X does it in the kernel too lately... at least Apple's drivers
are doing it; it's not a "common" lib. They also split treble/bass that
way when you have an iSub plugged in on USB and use the machine's
internal speakers for treble.
> dmix has been around for a while but softvol plugin is very new, you
> will need ALSA CVS or the upcoming 1.0.9 release.
Ok.
Ben.
On 2005-03-29, at 05:36, Lee Revell wrote:
> On Mon, 2005-03-28 at 09:42 +1000, Benjamin Herrenschmidt wrote:
>> It seems that Apple's driver has an in-kernel framework for doing
>> volume
>> control, mixing, and other horrors right in the kernel, in temporary
>> buffers, just before they get DMA'ed (gack !)
>>
>> I want to avoid something like that. How "friendly" would Alsa be to
>> drivers that don't have any HW volume control capability ? Does
>> typical
>> userland libraries provide software processing volume control ? Do you
>> suggest I just don't do any control ? Or should I implement a double
>> buffer scheme with software gain as well in the kernel driver ?
>
> alsa-lib handles both mixing (dmix plugin) and volume control (softvol
> plugin) in software for codecs like this that don't do it in hardware.
> Since Windows does mixing and volume control in the kernel (ugh) it's
> increasingly common to find devices that cannot do these. You don't
> need to handle it in the driver at all.
>
> dmix has been around for a while but softvol plugin is very new, you
> will need ALSA CVS or the upcoming 1.0.9 release.
Instead of the lame claims about how ugly it is to do mixing in
kernel space, the ALSA fans should ask themselves the following questions:
1. Where do you have true "real-time" under Linux? Kernel or user space?
2. Where would you put the firmware for a DSP? Far away, or as near to
the hardware as possible?
3. How do you synchronize devices on a non-real-time system?
4. Why the hell do we have whole network protocols inside the kernel?
Couldn't those be perfectly handled in user space? Or maybe there are
good reasons?
5. Should a driver just basically map the hardware to user space, or
shouldn't it perhaps provide an abstraction from the actual hardware
implementing it?
6. Is there really a conceptual difference between a DSP+CPU+driver and
just looking at the MMX IP core of the CPU as a DSP?
> > dmix has been around for a while but softvol plugin is very new, you
> > will need ALSA CVS or the upcoming 1.0.9 release.
>
> Instead of the lame claims on how ugly it is to do hardware mixing in
> kernel space the ALSA fans should ask them self the following questions:
Well, we are claiming _and_ obviously proposing a solution ;)
> 1. Where do you have true "real-time" under linux? Kernel or user space?
That's bullshit. You don't need "true" real time for the mixing/volume
processing in most cases. I've been doing sound drivers on various
platforms that don't have anything that looks like true real time either,
and believe me, it works. Besides, if doing it in Linux shows latency
problems, let's just fix them.
> 2. Where would you put the firmware for an DSP? Far away or as near to
> hardware as possible?
Yes. This point is moot. The firmware is somewhere in your filesystem
and obtained with the request_firmware() interface; it has no business
being in the kernel. If it's really small, it might be OK to stuff it in
the kernel. But anyway, this point is totally unrelated to the statement
you are replying to.
> 3. How do you synchronize devices on non real time system?
I'm not sure I understand what you mean here. I suppose it's about
propagation of clock sources, which is traditionally done in the slave
way; that is, the producer (whatever it is: mixer, app, ...) is "sucked"
by the lowest level at a given rate, the sample count being the
timestamp, with variable sample sizes having other (and of course less
precise) means to synchronize.
> 4. Why the hell do we have whole network protocols inside the kernel?
> Couldn't those
> be perfectly handled in user space? Or maybe there are good reasons?
Network protocols do very little computation on the data in the packets
(well, except for IPsec, mostly for security reasons), but this is again
totally unrelated. It's like comparing apples and pears.
> 5. Should a driver just basically map the hardware to the user space or
> shouldn't
> it perhaps provide abstraction from the actual hardware implementing it?
This is in no way incompatible with having the mixing and volume control
in userspace. It's actually quite a good idea to have a userland library
that isolates you from the low-level "raw" kernel interfaces of the
driver and, in the case of sound, provides you with the means to set up
codec chains, mixing components, etc.
> 6. Is there really a conceptual difference between a DSP+CPU+driver and
> just
> looking at the MMX IP core of the CPU as an DSP?
Again, I don't see how this makes any point in the context of the
discussion above and your heated reply.
Ben.
On 2005-03-29, at 10:18, Benjamin Herrenschmidt wrote:
>
> Well, we are claiming _and_ obviously proposing a solution ;)
I beg to differ.
>> 1. Where do you have true "real-time" under linux? Kernel or user
>> space?
>
> That's bullshit.
Wait a moment...
> you don't need "true" real time for the mixing/volume
> processing in most cases.
Yeah! Give me a break: *most cases*. Playing sound and video is the
paramount case of requiring asserted timing. Isn't that the property
RT is defined by?
> I've been doing sound drivers on various
> platforms who don't have anything that look like true realtime neither
> and beleive, it works. Besides, if doing it linux shows latency
> problems, let's just fix them.
Perhaps as an exercise you could fix the jerky mouse movements on
Linux too? I would be very glad to see the mouse, which has truly modest
RT requirements, start to behave the way it's supposed to. And yes, I
expect it to still move smoothly when doing "make -j100 world".
>> 2. Where would you put the firmware for an DSP? Far away or as near to
>> hardware as possible?
>
> Yes. This point is moot. The firmware is somewhere in your filesystem
> and obtained with the request_firmware() interface, that has nothing to
> do in the kernel. If it's really small, it might be ok to stuff it in
> the kernel. But anyway, this point is totally unrelated to the
> statement
> you are replying to.
No. You didn't get it. I'm taking the view that mixing sound is simply
a task you would typically love to have DSP firmware do.
However, providing a DSP for sound processing at 44 kHz on the same
PCB as a 1 GHz CPU is a ridiculous waste of resources. Thus most
hardware vendors out there decided to use the main CPU instead, and the
"firmware" is simply running on the main CPU now. Now where should it
go? I'm convinced that it's better to put it near the hardware in the
whole stack. You think it's best to put it far away and to invent
artificial synchronization problems between different applications
pushing data down to the same hardware device.
>> 3. How do you synchronize devices on non real time system?
>
> I'm not sure I understand what you mean here. I suppose it's about
> propagation of clock sources, which is traditionally done in the slave
> way; that is the producer (whatever it is, mixer, app, ...) is "sucked"
> by the lowest level at a given rate, the sample count beeing the
> timestamp, variable sample size having other means (and less precise of
> course) to synchronize.
No, I'm simply taking the view that most of the time it's not only a
single application that will feed the sound output. And quite frequently
you even have to synchronize with video output.
>
>> 4. Why the hell do we have whole network protocols inside the kernel?
>> Couldn't those
>> be perfectly handled in user space? Or maybe there are good reasons?
>
> Network protocol do very few computation on the data in the packets
> (well, except for IPsec for security reasons mostly) but this is a gain
> totally unrelated. Like comparing apples and pears.
No, it's not that far off. The same constraints that led most people
to move TCP into the kernel basically apply to sound output.
It's just a data stream these days, after all.
>> 5. Should a driver just basically map the hardware to the user space
>> or
>> shouldn't
>> it perhaps provide abstraction from the actual hardware implementing
>> it?
>
> This is in no way incompatible with having the mixing and volume
> control
> in userspace. It's actually quite a good idea to have a userland
> library
> that isolates you from the low level "raw" kernel intefaces of the
> driver, and in the case of sound, provides you with the means to setup
> codec chains, mixing components, etc...
It is not. At least, every other OS out there that takes significant
care over sound came to a different conclusion.
At Mon, 28 Mar 2005 22:36:09 -0500,
Lee Revell wrote:
>
> On Mon, 2005-03-28 at 09:42 +1000, Benjamin Herrenschmidt wrote:
> > It seems that Apple's driver has an in-kernel framework for doing volume
> > control, mixing, and other horrors right in the kernel, in temporary
> > buffers, just before they get DMA'ed (gack !)
> >
> > I want to avoid something like that. How "friendly" would Alsa be to
> > drivers that don't have any HW volume control capability ? Does typical
> > userland libraries provide software processing volume control ? Do you
> > suggest I just don't do any control ? Or should I implement a double
> > buffer scheme with software gain as well in the kernel driver ?
>
> alsa-lib handles both mixing (dmix plugin) and volume control (softvol
> plugin) in software for codecs like this that don't do it in hardware.
> Since Windows does mixing and volume control in the kernel (ugh) it's
> increasingly common to find devices that cannot do these. You don't
> need to handle it in the driver at all.
Yes.
> dmix has been around for a while but softvol plugin is very new, you
> will need ALSA CVS or the upcoming 1.0.9 release.
dmix currently doesn't work well on PPC, but I'll fix it soon.
Once it's confirmed to work, we can set the dmix/softvol plugins as the
default in the snd-powermac driver configuration. Hopefully this will be
finished before the 1.0.9 final.
Takashi
At Tue, 29 Mar 2005 11:22:07 +0200,
Marcin Dalecki wrote:
>
>
> On 2005-03-29, at 10:18, Benjamin Herrenschmidt wrote:
> >
> > Well, we are claiming _and_ obviously proposing a solution ;)
>
> I beg to differ.
>
> >> 1. Where do you have true "real-time" under linux? Kernel or user
> >> space?
> >
> > That's bullshit.
>
> Wait a moment...
>
> > you don't need "true" real time for the mixing/volume
> > processing in most cases.
>
> Yeah! Give me a break: *Most cases*. Playing sound and video is
> paramount for requiring asserted timing. Isn't that a property
> RT is defined by?
No, you still don't need a "true" real-time OS.
(More exactly, do we even have a "true" RT OS? :)
> > I've been doing sound drivers on various
> > platforms who don't have anything that look like true realtime neither
> > and beleive, it works. Besides, if doing it linux shows latency
> > problems, let's just fix them.
>
> Perhaps as an exercise you could fix the jerky mouse movements on
> Linux - too? I would be very glad to see the mouse, which has truly
> modest
> RT requirements, to start to behave the way it's supposed to do.
> And yes I expect it to still move smoothly when doing "make -j100
> world".
On the contrary, doing the soft mixing/volume in the kernel is a source
of latency when scheduling isn't done properly without preemption.
> >> 2. Where would you put the firmware for an DSP? Far away or as near to
> >> hardware as possible?
> >
> > Yes. This point is moot. The firmware is somewhere in your filesystem
> > and obtained with the request_firmware() interface, that has nothing to
> > do in the kernel. If it's really small, it might be ok to stuff it in
> > the kernel. But anyway, this point is totally unrelated to the
> > statement
> > you are replying to.
>
> No. You didn't get it. I'm taking the view that mixing sound is simply
> a task you would typically love to make a DSP firmware do.
> However providing a DSP for sound processing at 44kHZ on the same
> PCB as an 1GHZ CPU is a ridiculous waste of resources. Thus most
> hardware
> vendors out there decided to use the main CPU instead. Thus the
> "firmware"
> is simply running on the main CPU now. Now where should it go? I'm
> convinced
> that its better to put it near the hardware in the whole stack.
I don't understand this logic...
> You
> think
> it's best to put it far away and to invent artificial synchronization
> problems between different applications putting data down to the
> same hardware device.
>
> >> 3. How do you synchronize devices on non real time system?
> >
> > I'm not sure I understand what you mean here. I suppose it's about
> > propagation of clock sources, which is traditionally done in the slave
> > way; that is the producer (whatever it is, mixer, app, ...) is "sucked"
> > by the lowest level at a given rate, the sample count beeing the
> > timestamp, variable sample size having other means (and less precise of
> > course) to synchronize.
>
> No I'm simply taking the view that most of the time it's not only a
> single
> application which will feed the sound output. And quite frequently you
> have
> to synchronize even with video output.
Hmm, how is this related to the question of whether a job is done in
user or kernel space...?
> >> 4. Why the hell do we have whole network protocols inside the kernel?
> >> Couldn't those
> >> be perfectly handled in user space? Or maybe there are good reasons?
> >
> > Network protocol do very few computation on the data in the packets
> > (well, except for IPsec for security reasons mostly) but this is a gain
> > totally unrelated. Like comparing apples and pears.
>
> No it's not that far away. The same constraints which did lead most
> people
> to move TCP in to the kernel basically apply to sound output.
> It's just a data stream those days after all.
It depends on the efficiency, too. And, if you think of efficiency,
user space has the big advantage that it can use SIMD operations.
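To give an idea of how small the per-sample work is, here is a scalar
sketch of the kind of gain loop a software volume does on S16 samples
(illustrative only, not the actual softvol code); a SIMD version simply
does the same thing on several samples per instruction:

#include <stdint.h>

/* Apply a Q8.8 fixed-point gain (256 == unity) to signed 16-bit
 * samples, with saturation. */
static void apply_gain_s16(int16_t *buf, unsigned int samples,
                           uint16_t gain)
{
        unsigned int i;

        for (i = 0; i < samples; i++) {
                int32_t v = ((int32_t)buf[i] * gain) >> 8;

                if (v > 32767)
                        v = 32767;
                else if (v < -32768)
                        v = -32768;
                buf[i] = (int16_t)v;
        }
}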
> >> 5. Should a driver just basically map the hardware to the user space
> >> or
> >> shouldn't
> >> it perhaps provide abstraction from the actual hardware implementing
> >> it?
> >
> > This is in no way incompatible with having the mixing and volume
> > control
> > in userspace. It's actually quite a good idea to have a userland
> > library
> > that isolates you from the low level "raw" kernel intefaces of the
> > driver, and in the case of sound, provides you with the means to setup
> > codec chains, mixing components, etc...
>
> It is not. At least every other OS out there with significant care for
> sound did came to a different conclusion.
ALSA provides the "driver" feature in user space because it's more
flexible, more efficient and safer than doing it in the kernel. It's
transparent from the app's perspective; it really doesn't matter whether
it's in kernel or user space.
I think your misunderstanding is that you believe user space can't do
RT. That's wrong. See JACK (jackit.sf.net), for example.
Takashi
> Yes.
>
> > dmix has been around for a while but softvol plugin is very new, you
> > will need ALSA CVS or the upcoming 1.0.9 release.
>
> dmix currently doesn't work on PPC well but I'll fix it soon later.
> If it's confirmed to work, we can set dmix/softvol plugins for default
> of snd-powermac driver configuration. Hopefully this will be finished
> before 1.0.9 final.
Can the driver advertise in some way what it can do? Depending on the
machine we are running on, it will or will not be able to do HW volume
control... You probably don't want to use softvol in the former case...
dmix by default would be nice though :)
Ben.
At Tue, 29 Mar 2005 21:04:50 +1000,
Benjamin Herrenschmidt wrote:
>
>
> > Yes.
> >
> > > dmix has been around for a while but softvol plugin is very new, you
> > > will need ALSA CVS or the upcoming 1.0.9 release.
> >
> > dmix currently doesn't work on PPC well but I'll fix it soon later.
> > If it's confirmed to work, we can set dmix/softvol plugins for default
> > of snd-powermac driver configuration. Hopefully this will be finished
> > before 1.0.9 final.
>
> Can the driver advertize in some way what it can do ? depending on the
> machine we are running on, it will or will not be able to do HW volume
> control... You probably don't want to use softvol in the former case...
Add the following to ~/.asoundrc (or /etc/asound.conf for a system-wide setup):
pcm.softvol {
        type softvol
        slave.pcm {
                type hw
                card 0
                device 0
        }
        control {
                name "PCM Playback Volume"
                card 0
        }
}
Then you can use the PCM "softvol", e.g.
% aplay -Dsoftvol foo.wav
This will create a "PCM" volume control if one doesn't already exist, and
do the volume attenuation in software. If the control already exists (in
the driver), the software volume is skipped automatically.
The newly created volume can be saved/restored via alsactl.
In addition, you can override the ALSA default PCM by defining
~/.asoundrc like:
pcm.!default "softvol"
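softvol can also be stacked on top of dmix, so you get software mixing
and a volume control at the same time; a rough sketch (the PCM names and
the ipc_key here are arbitrary):

pcm.dmixer {
        type dmix
        ipc_key 2051
        slave {
                pcm "hw:0,0"
        }
}
pcm.softvol_dmix {
        type softvol
        slave.pcm "dmixer"
        control {
                name "PCM Playback Volume"
                card 0
        }
}
pcm.!default "softvol_dmix"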
> dmix by default would be nice though :)
Yeah, in a future version they will be set as the default, i.e. without
any extra definition in ~/.asoundrc.
Takashi
On Tue, 2005-03-29 at 21:04 +1000, Benjamin Herrenschmidt wrote:
> Can the driver advertize in some way what it can do ? depending on the
> machine we are running on, it will or will not be able to do HW volume
> control... You probably don't want to use softvol in the former case...
>
> dmix by default would be nice though :)
No, there's still no way to ask the driver whether hardware mixing is
supported. It's come up on alsa-devel before. Patches are welcome.
dmix by default would not be nice, as users whose sound cards can
do hardware mixing would be annoyed. However, in the upcoming 1.0.9
release, softvol will be used by default for all the mobo chipsets.
Lee
At Tue, 29 Mar 2005 14:05:08 -0500,
Lee Revell wrote:
>
> On Tue, 2005-03-29 at 21:04 +1000, Benjamin Herrenschmidt wrote:
> > Can the driver advertize in some way what it can do ? depending on the
> > machine we are running on, it will or will not be able to do HW volume
> > control... You probably don't want to use softvol in the former case...
> >
> > dmix by default would be nice though :)
>
> No, there's still no way to ask the driver whether hardware mixing is
> supported yet. It's come up on alsa-devel before. Patches are welcome.
Well, I don't remember the discussion thread on alsa-devel about this,
but it's a good idea for alsa-lib to check the capability of hw mixing
and apply dmix only if necessary. (In the case of softvol, it can
check the existence of the hw control by itself, though.)
Currently, dmix is enabled on a per-driver-type basis. That is, dmix is
set as the default in the configuration of each driver that is known to
have no hw mixing functionality.
> dmix by default would not be nice as users who have sound cards that can
> do hardware mixing would be annoyed. However, in the upcoming 1.0.9
> release softvol will be used by default for all the mobo chipsets.
In 1.0.9, dmix will be the default, too, for most of the mobo drivers.
Takashi
On Tue, 2005-03-29 at 21:31 +0200, Takashi Iwai wrote:
> Well I don't remember the discussion thread on alsa-devel about this,
> but it's a good idea that alsa-lib checks the capability of hw-mixing
> and apples dmix only if necessary. (In the case of softvol, it can
> check the existence of hw control by itself, though.)
>
> Currently, dmix is enabled per driver-type base. That is, dmix is set
> to default in each driver's configuration which is known to have no hw
> mixing functionality.
It was not discussed at length.
Anyway, I think you can detect hardware mixing support by comparing the
number of substreams to the number of streams for a device. If the
ratio is greater than 1, hardware mixing is supported.
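A rough sketch of what such a check could look like with alsa-lib's
control interface (card and device numbers are placeholders; build with
-lasound; more than one playback subdevice generally means the hardware
can mix several streams itself):

#include <stdio.h>
#include <alsa/asoundlib.h>

int main(void)
{
        snd_ctl_t *ctl;
        snd_pcm_info_t *info;
        int err;

        if ((err = snd_ctl_open(&ctl, "hw:0", 0)) < 0) {
                fprintf(stderr, "ctl open: %s\n", snd_strerror(err));
                return 1;
        }
        snd_pcm_info_alloca(&info);
        snd_pcm_info_set_device(info, 0);       /* PCM device 0 */
        snd_pcm_info_set_subdevice(info, 0);
        snd_pcm_info_set_stream(info, SND_PCM_STREAM_PLAYBACK);
        if ((err = snd_ctl_pcm_info(ctl, info)) < 0) {
                fprintf(stderr, "pcm info: %s\n", snd_strerror(err));
                snd_ctl_close(ctl);
                return 1;
        }
        printf("playback subdevices: %u\n",
               snd_pcm_info_get_subdevices_count(info));
        snd_ctl_close(ctl);
        return 0;
}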
Lee
On Tue, 2005-03-29 at 21:31 +0200, Takashi Iwai wrote:
> At Tue, 29 Mar 2005 14:05:08 -0500,
> Lee Revell wrote:
> >
> > On Tue, 2005-03-29 at 21:04 +1000, Benjamin Herrenschmidt wrote:
> > > Can the driver advertize in some way what it can do ? depending on the
> > > machine we are running on, it will or will not be able to do HW volume
> > > control... You probably don't want to use softvol in the former case...
> > >
> > > dmix by default would be nice though :)
> >
> > No, there's still no way to ask the driver whether hardware mixing is
> > supported yet. It's come up on alsa-devel before. Patches are welcome.
>
> Well I don't remember the discussion thread on alsa-devel about this,
> but it's a good idea that alsa-lib checks the capability of hw-mixing
> and apples dmix only if necessary. (In the case of softvol, it can
> check the existence of hw control by itself, though.)
Well, for pmac at least we'll need dmix all the time, but softvol
should be based on what the driver advertises, yes.
> Currently, dmix is enabled per driver-type base. That is, dmix is set
> to default in each driver's configuration which is known to have no hw
> mixing functionality.
>
> > dmix by default would not be nice as users who have sound cards that can
> > do hardware mixing would be annoyed. However, in the upcoming 1.0.9
> > release softvol will be used by default for all the mobo chipsets.
>
> On 1.0.9, dmix will be default, too, for most of mobo drivers.
>
>
> Takashi
--
Benjamin Herrenschmidt <[email protected]>
On Tue, 2005-03-29 at 11:22 +0200, Marcin Dalecki wrote:
> No. You didn't get it. I'm taking the view that mixing sound is simply
> a task you would typically love to make a DSP firmware do.
> However providing a DSP for sound processing at 44kHZ on the same
> PCB as an 1GHZ CPU is a ridiculous waste of resources. Thus most
> hardware
> vendors out there decided to use the main CPU instead. Thus the
> "firmware"
> is simply running on the main CPU now. Now where should it go? I'm
> convinced
> that its better to put it near the hardware in the whole stack. You
> think
> it's best to put it far away and to invent artificial synchronization
> problems between different applications putting data down to the
> same hardware device.
This is the exact line of reasoning that led to Winmodems.
Lee
Lee Revell wrote:
> This is the exact line of reasoning that led to Winmodems.
My main issue with winmodems is not so much the software offload, but
rather that the vendors don't release full specs.
If all winmodem manufacturers released full hardware specs, I doubt
people would really complain all that much. There's a fairly large pool
of talent available to write drivers once the interfaces are known.
Chris
On Tue, 2005-03-29 at 17:25 -0600, Chris Friesen wrote:
> Lee Revell wrote:
>
> > This is the exact line of reasoning that led to Winmodems.
>
> My main issue with winmodems is not so much the software offload, but
> rather that the vendors don't release full specs.
>
> If all winmodem manufacturers released full hardware specs, I doubt
> people would really complain all that much. There's a fairly large pool
> of talent available to write drivers once the interfaces are known.
Look at the pile of junk that most winmodem driver implementations are;
nothing I ever want to see in the kernel. Those things should be in
userland.
Ben.
On 2005-03-29, at 12:22, Takashi Iwai wrote:
>
> ALSA provides the "driver" feature in user-space because it's more
> flexible, more efficient and safer than doing in kernel. It's
> transparent from apps perspective. It really doesn't matter whether
> it's in kernel or user space.
Yes, because it's so wonderful that Linux sound processing sucks in
comparison to the other OSes out there doing it in the kernel.
> I think your misunderstanding is that you beliieve user-space can't do
> RT. It's wrong. See JACK (jackit.sf.net), for example.
I know JACK in and out. It doesn't provide what you claim.
On 2005-03-30, at 00:13, Lee Revell wrote:
> On Tue, 2005-03-29 at 11:22 +0200, Marcin Dalecki wrote:
>> No. You didn't get it. I'm taking the view that mixing sound is simply
>> a task you would typically love to make a DSP firmware do.
>> However providing a DSP for sound processing at 44kHZ on the same
>> PCB as an 1GHZ CPU is a ridiculous waste of resources. Thus most
>> hardware
>> vendors out there decided to use the main CPU instead. Thus the
>> "firmware"
>> is simply running on the main CPU now. Now where should it go? I'm
>> convinced
>> that its better to put it near the hardware in the whole stack. You
>> think
>> it's best to put it far away and to invent artificial synchronization
>> problems between different applications putting data down to the
>> same hardware device.
>
> This is the exact line of reasoning that led to Winmodems.
Yes, and BTW, from a hardware point of view those are a technically
perfectly fine solution. The obstacles are twofold: the Win32 kernel
sucks big rocks on latency issues (although since we got past 1 GHz and
use XP they work perfectly fine), and on Linux you don't get the
necessary DSP processing code/docs. Both are just pragmatic arguments
which don't apply to sound processing at all.
And for your note: I'm the guy who several years ago wrote the first
ever GDI printer driver for Linux (oki4linux), despite claims from quite
prominent people here that it could never be done. And yes, I did it in
user space, because pages are not data streams.
On 2005-03-30, at 01:39, Benjamin Herrenschmidt wrote:
> On Tue, 2005-03-29 at 17:25 -0600, Chris Friesen wrote:
>> Lee Revell wrote:
>>
>>> This is the exact line of reasoning that led to Winmodems.
>>
>> My main issue with winmodems is not so much the software offload, but
>> rather that the vendors don't release full specs.
>>
>> If all winmodem manufacturers released full hardware specs, I doubt
>> people would really complain all that much. There's a fairly large
>> pool
>> of talent available to write drivers once the interfaces are known.
>
> Look at the pile of junk that are most winmodem driver implementations,
> nothing I want to see in the kernel ever. Those things should be in
> userland.
Are you joking? Linux IS NOT an RT OS. And, well, not too long ago you
could be jailed, for example in Germany, for using badly behaved
communication devices.
On Wed, 2005-03-30 at 03:45 +0200, Marcin Dalecki wrote:
> On 2005-03-29, at 12:22, Takashi Iwai wrote:
> >
> > ALSA provides the "driver" feature in user-space because it's more
> > flexible, more efficient and safer than doing in kernel. It's
> > transparent from apps perspective. It really doesn't matter whether
> > it's in kernel or user space.
>
> Yes because it's that wonder full linux sound processing sucks in
> compare
> to the other OSs out there doing it in kernel.
What are you talking about? It's actually quite good.
Have you actually tried those other OSes lately? The devices in
question (those lacking hardware mixing and volume control) don't
exactly work great under that OS either.
Lee
On Wed, 2005-03-30 at 03:45 +0200, Marcin Dalecki wrote:
> > I think your misunderstanding is that you beliieve user-space can't do
> > RT. It's wrong. See JACK (jackit.sf.net), for example.
>
> I know JACK in and out. It doesn't provide what you claim.
>
This was just an example, to prove the point that user space can do RT
just fine. JACK can do low-latency, sample-accurate audio, and mixing
and volume control are fairly trivial compared to what some JACK clients
do. If it works well enough for professional hard disk recording
systems, then it can certainly handle system sounds and playing movies
and MP3s.
And as a matter of fact, you can implement all the audio needs of a
desktop system with JACK; this is what Linspire is doing for the next
release, even though JACK wasn't designed for this. The system mixer is
just a JACK mixing client, each app opens ports for I/O, and only
JACK talks to the hardware (through ALSA).
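To give an idea of how little a client needs, a toy JACK client that
does nothing but apply a software gain looks roughly like this (client
and port names are made up; build with -ljack):

#include <stdio.h>
#include <unistd.h>
#include <jack/jack.h>

static jack_port_t *in_port, *out_port;
static float gain = 0.5f;                 /* the "software volume" */

/* Called by JACK from its realtime thread once per period. */
static int process(jack_nframes_t nframes, void *arg)
{
        jack_default_audio_sample_t *in =
                jack_port_get_buffer(in_port, nframes);
        jack_default_audio_sample_t *out =
                jack_port_get_buffer(out_port, nframes);
        jack_nframes_t i;

        for (i = 0; i < nframes; i++)
                out[i] = in[i] * gain;    /* volume control is one multiply */
        return 0;
}

int main(void)
{
        jack_client_t *client = jack_client_new("gainclient");

        if (!client) {
                fprintf(stderr, "cannot connect to the JACK server\n");
                return 1;
        }
        jack_set_process_callback(client, process, NULL);
        in_port = jack_port_register(client, "in", JACK_DEFAULT_AUDIO_TYPE,
                                     JackPortIsInput, 0);
        out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                      JackPortIsOutput, 0);
        if (jack_activate(client)) {
                fprintf(stderr, "cannot activate client\n");
                return 1;
        }
        for (;;)
                sleep(1);                 /* audio runs in JACK's thread */
        return 0;
}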
The fact that OS X and Windows do this in the kernel is not a good
argument; those kernels are bloated. Windows drivers also do things
like AC3 decoding in the kernel, and the OS X kernel uses 16K stacks.
If audio does not work as well OOTB as on those other OSes, it's an
indication of their relative maturity vs. JACK/ALSA, not of an inherently
superior design. Most audio people consider JACK + ALSA a better design
than anything in the proprietary world (CoreAudio, ASIO).
Lee
On Wed, 2005-03-30 at 03:45 +0200, Marcin Dalecki wrote:
> > I think your misunderstanding is that you beliieve user-space can't do
> > RT. It's wrong. See JACK (jackit.sf.net), for example.
>
> I know JACK in and out. It doesn't provide what you claim.
Are you implying that "he don't know JACK"?
Sorry, couldn't resist. Move along now, nothing to see here :-) God,
it's late, I need to go to bed.
Is that an American phrase? If so, it might not be understood elsewhere.
So just in case others don't understand this stupid joke: there's a
phrase "you don't know jack", which is equivalent to saying "you don't
know what you're talking about". Which makes this kind of a pun.
-- Steve
On Wed, 2005-03-30 at 03:48 +0200, Marcin Dalecki wrote:
> On 2005-03-30, at 01:39, Benjamin Herrenschmidt wrote:
> > Look at the pile of junk that are most winmodem driver implementations,
> > nothing I want to see in the kernel ever. Those things should be in
> > userland.
>
> You are joking? Linux IS NOT an RT OS.
Are you joking? Any system that can capture audio, do a little DSP on
it and play it back without skipping can drive a Winmodem. Are you
saying Linux can't possibly do that because it's not an RTOS?
I bet you could implement a Winmodem driver as a JACK client.
Lee