Hi Folks,
since the progress on the A2DP support has stalled and because the problems
with the Logitech/HP headphones still exist, I thought it was time
to buy some of these myself. I now have the Aiptek, Bluewalker and the
HP headphones at home for testing. The first two are GCT based and the
last one is Zeevo based.
All GCT based headphones work fine and they have the nice SBC
over RFCOMM support for testing. They are not picky about any timing
related transfers and they cache the SBC stream. To deal with the Zeevo
based headphones you can only talk AVDTP and you need to get the timing
right; otherwise the sound is messed up.
So what I have done so far is that I extended the pcm_a2dp ALSA plugin
in the utils CVS repository with the ability to cache the connection.
This is needed because players like xmms and bmp close the PCM every
time they change the track. If we close the AVDTP channel it results
in closing the ACL link, and this is not what we want. The only problem
at the moment is that this plugin only supports SBC over RFCOMM and it
doesn't care about the timing. This means that if you change a track it
can take 1 or 2 seconds before the headphones react, because parts are
already cached. This is a problem (or feature) of the GCT based devices.
To fully support all A2DP headphones (including Logitech/HP) this plugin
needs an AVDTP implementation. I have a hacked one on my development
system and it works fine with the GCT headphones. The code is a little
bit messy, but in general it works. It is fully event driven and so it
deprecates the idea of a separate AVDTP library; it is a lot easier to do it
directly inside the plugin. I need to clean up the code and submit it to
the CVS when there is a little bit more time.
I also got first results with the HP headphones by adding a usleep(2500)
for every SBC frame. It is also needed to pack as many SBC frames into
an AVDTP frame as possible. In my case this was up to eight. So this is
all about the timing. The perfect thing now would be if we could tell ALSA
to time everything for us. The problem is that I am neither an audio expert
nor an ALSA expert. Maybe the SBC encoder should return/store the
time the SBC frame is encoded for.
Comments? Ideas?
And btw, for testing the pcm_a2dp plugin you need ALSA 1.0.9-rc2 and
a .asoundrc config file like this:

pcm.bluewalker {
    type a2dp
    bdaddr "00:0B:0D:xx:xx:xx"
}
Regards
Marcel
-------------------------------------------------------
SF email is sponsored by - The IT Product Guide
Read honest & candid reviews on hundreds of IT Products from real users.
Discover which products truly live up to the hype. Start reading now.
http://ads.osdn.com/?ad_id=6595&alloc_id=14396&op=click
_______________________________________________
Bluez-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/bluez-devel
Marcel
>>This all sounds very good and I feel like a dope for getting rid of my
>>bluetake headset when I had only ordered, but not received, a
>>replacement. I'm still waiting for my itech. :(
>
> if people start to donate some money, then I can buy you one of the HP
> headphones to work with.
I think I'll just order the HP set myself and get back into things. The
other set is taking too long.
Brad
Guylhem
> Brad, who now has an xscale board, now realises how precious space
> is. Honestly, using alsa is only good if you consider existing
> applications.
I think the bottom line is that the general solution (alsa plugin)
doesn't scale down to very small devices.
That is fine. Very small devices do a lot of things differently and we
will continue to maintain something like a2play. It should be improved
to be a little more general (e.g. no hardcoded usleep) and have more
features (like mp3 decoding, live line-in streaming), but the basic
operation wouldn't change.
Apps on the desktop should get used to using an alsa plugin and then we
have the freedom to move whatever is appropriate into the kernel without
changing the app interface. I don't believe we'd ever move "enough" into
the kernel to eliminate alsa-lib.
One thing I've been trying on my gumstix is to see if the
plugin design might scale down to a fairly capable arm board (xscale
400MHz/64MB RAM). alsa-lib is big, but it might work. Another thing I've
been doing is tweaking our project so our stuff is easily cross-compiled
and works well as a buildroot module.
Brad
Hi Guylhem,
> > My 0.02EUR: There is a problem here. Mayank turned up a very valid point
> > in his "a2play code not according to the specifications" posting: Having
> > the SBC encoding decoupled from the AVDTP negotiation is not a good
> > thing. One of the primary problems of the current pipe approach (and
> > expecting applications to stream SBC into a socket/device/whatever is
> > not different in this regard) is that the format of the SBC stream
> > (sample rate, channel mode, etc.) will have to depend on the
> > capabilities of the receiver, but there is no feedback to the encoder
> > about that.
>
> Maybe the supported encodings/frequencies/etc. could be advertised in
> a separate file? The player would open it first, then send the sbc
> in one of the supported format
the advertised endpoints, encodings etc. can change dynamically, so
you would always need an L2CAP link on PSM 25 first. So this can't be an
option.
> > And also coupling /dev/a2dpN tightly to SBC format would make it even
> > harder (or impossible) to support other audio formats as A2DP does.
>
> That was just a suggestion on the system behaviour - I mean having a
> device supporting a format. You could as well have
> /dev/a2dp0_sbc_someformat /dev/a2dp0_sbc_someotherformat etc. and try
> to open them until one doesn't return an error.
This is an ugly interface and won't even work. Do you want to create a
connection to the headphones for every device node that you are testing?
Regards
Marcel
Hi Henryk,
> > - for kernel integration, linus &co may not be interested in a
> > kernel-side sbc decoder- they will certainly ask for a /dev/ entry
> > taking the native format - in that case already encoded sbc.
>
> and
>
> > Later on, like rfcomm manages rfcomm0 etc. entries, some a2dp tool
> > could bind a2dp peripherals to /dev/a2dpN where SBC audio would be
> > expected and streamed to the a2dp peripheral bound there.
>
> My 0.02EUR: There is a problem here. Mayank turned up a very valid point
> in his "a2play code not according to the specifications" posting: Having
> the SBC encoding decoupled from the AVDTP negotiation is not a good
> thing. One of the primary problems of the current pipe approach (and
> expecting applications to stream SBC into a socket/device/whatever is
> not different in this regard) is that the format of the SBC stream
> (sample rate, channel mode, etc.) will have to depend on the
> capabilities of the receiver, but there is no feedback to the encoder
> about that.
>
> Therefore the best implementation would be to put the actual encoding
> into the same layer of code that does the negotiation, or at least put
> the control over the encoder there. Otherwise you wouldn't have the
> simple "open and write something into it" semantics most other devices
> have, but would need to have something along the lines of "open the
> device, offer it some audio formats, let it do the negotiation, read back
> the audio format that is to be used, encode the audio stream into that
> format".
if we do the kernel AVDTP socket interface, then it will do something
like that. However, this is not only about A2DP, because in the future we
may want to use it for VDP as well.
> And also coupling /dev/a2dpN tightly to SBC format would make it even
> harder (or impossible) to support other audio formats as A2DP does.
Don't worry about /dev/*whatever*, because it will be socket based and
not character devices.
Regards
Marcel
Hi Guylhem,
> > > Then my rant- why everyone wants an alsa plugin? It's ugly!
> >
> > ALSA is not perfect and it has its edges, but it is far from ugly.
> > It is actually the only sane thing to do on the desktop.
>
> Alsa is not ugly - I mean a plugin to do that is :-)
No, it is not. I can't understand why you think that.
> > So you are going to write IO plugins for Gnomemeeting, Totem, xine,
> > mplayer, sox, mpg123 and so on. Have fun with it ;)
>
> (I won't :-)
So you understand why an ALSA plugin is the only way to support every
audio capable application.
> Brad, who now has an xscale board, now realises how precious space
> is. Honestly, using alsa is only good if you consider existing
> applications. But then did you read Nicolas Pitre's and Alan Cox's
> replies? Apps should be fixed - nothing else.
>
> Another API idea:
> /dev/a2dp_capabilities
>
> listing the existing a2dp devices associated, their bt address, the
> audio formats they support
> /dev/a2dp1
> /dev/a2dp2
> etc.
Please stop designing weird interfaces. As I said, the preferred thing is a
socket API if we move AVDTP into the kernel.
And as a sidenote, you need a connection before you can request the
capabilities of the headphones. It is not going to work your way.
> mplayer/etc. plugins would have to open the first file before sending
> audio to the second file. This would avoid wasting almost 1 Mb for alsa.
>
> An alsa "plugin" could certainly also take advantage of that approach -
> it would simply do like an mplayer plugin, doing the same thing a
> mplayer plugin would do. It'd just have to be a daemon doing the
> recoding etc. in user space. Much cleaner, and desktop compatible.
This is not cleaner. It is more ugly. You must put the application
aside. We have ALSA and OSS as sound interfaces, and ALSA is used by
almost every distribution by now. So once we get A2DP to play nice with
ALSA, every audio application works.
> > for any other application. If then someone comes up with an integer
> > version of the SBC codec, I may think about moving that into the kernel,
> > too. But to be honest, this is future talk and has nothing to do with
> > reality right now.
>
> IMHO an encoder doesn't belong to the kernel.
I know, but it is still an option. Not my preferred one, but still an
option.
Regards
Marcel
Hello,
On Sunday, 24 April 2005 at 17:16:39 (+0200), Marcel Holtmann wrote:
> > Then my rant- why everyone wants an alsa plugin? It's ugly!
>
> ALSA is not perfect and it has its edges, but it is far from ugly.
> It is actually the only sane thing to do on the desktop.
Alsa is not ugly - I mean a plugin to do that is :-)
> So you are going to write IO plugins for Gnomemeeting, Totem, xine,
> mplayer, sox, mpg123 and so on. Have fun with it ;)
(I won't :-)
Brad, who now has an xscale board, now realises how precious space
is. Honestly, using alsa is only good if you consider existing
applications. But then did you read Nicolas Pitre's and Alan Cox's
replies? Apps should be fixed - nothing else.
Another API idea:
/dev/a2dp_capabilities
listing the existing a2dp devices associated, their bt address, the
audio formats they support
/dev/a2dp1
/dev/a2dp2
etc.
mplayer/etc. plugins would have to open the first file before sending
audio to the second file. This would avoid wasting almost 1 MB for alsa.
An alsa "plugin" could certainly also take advantage of that approach -
it would simply do like an mplayer plugin, doing the same thing a
mplayer plugin would do. It'd just have to be a daemon doing the
recoding etc. in user space. Much cleaner, and desktop compatible.
> We are going do this approach, but we will use sockets for it, because
> simple character devices makes no sense. The main interface for RFCOMM
> is also a socket and the RFCOMM TTYs are only for legacy apps.
(Ok never mind)
> for any other application. If then someone comes up with an integer
> version of the SBC codec, I may think about moving that into the kernel,
> too. But to be honest, this is future talk and has nothing to do with
> reality right now.
IMHO an encoder doesn't belong to the kernel.
--
Bien à vous - Best regards,
Guylhem P. Aznar
--
*@externe.net                                          http://externe.net
Périmé/Deprecated: @oeil.qc.ca, @metalab.unc.edu, @ibiblio.org, @7un.org
GPG: 92EB37C1 DD11C9C9 20519D01 E8FA1B11 42975AF7 http://externe.net/pubkey
On Sunday, 24 April 2005 at 18:19:47 (+0200), Henryk Plötz wrote:
> My 0.02EUR: There is a problem here. Mayank turned up a very valid point
> in his "a2play code not according to the specifications" posting: Having
> the SBC encoding decoupled from the AVDTP negotiation is not a good
> thing. One of the primary problems of the current pipe approach (and
> expecting applications to stream SBC into a socket/device/whatever is
> not different in this regard) is that the format of the SBC stream
> (sample rate, channel mode, etc.) will have to depend on the
> capabilities of the receiver, but there is no feedback to the encoder
> about that.
Maybe the supported encodings/frequencies/etc. could be advertised in
a separate file? The player would open it first, then send the sbc
in one of the supported formats.
> And also coupling /dev/a2dpN tightly to SBC format would make it even
> harder (or impossible) to support other audio formats as A2DP does.
That was just a suggestion on the system behaviour - I mean having a
device supporting a format. You could as well have
/dev/a2dp0_sbc_someformat /dev/a2dp0_sbc_someotherformat etc. and try
to open them until one doesn't return an error.
--
Bien à vous - Best regards,
Guylhem P. Aznar
Moin,
On Sun, 24 Apr 2005 09:47:47 -0400, Guylhem Aznar wrote:
> - for kernel integration, linus &co may not be interested in a
> kernel-side sbc decoder- they will certainly ask for a /dev/ entry
> taking the native format - in that case already encoded sbc.
and
> Later on, like rfcomm manages rfcomm0 etc. entries, some a2dp tool
> could bind a2dp peripherals to /dev/a2dpN where SBC audio would be
> expected and streamed to the a2dp peripheral bound there.
My 0.02EUR: There is a problem here. Mayank turned up a very valid point
in his "a2play code not according to the specifications" posting: Having
the SBC encoding decoupled from the AVDTP negotiation is not a good
thing. One of the primary problems of the current pipe approach (and
expecting applications to stream SBC into a socket/device/whatever is
not different in this regard) is that the format of the SBC stream
(sample rate, channel mode, etc.) will have to depend on the
capabilities of the receiver, but there is no feedback to the encoder
about that.
Therefore the best implementation would be to put the actual encoding
into the same layer of code that does the negotiation, or at least put
the control over the encoder there. Otherwise you wouldn't have the
simple "open and write something into it" semantics most other devices
have, but would need to have something along the lines of "open the
device, offer it some audio formats, let it do the negotiation, read back
the audio format that is to be used, encode the audio stream into that
format".
And also coupling /dev/a2dpN tightly to SBC format would make it even
harder (or impossible) to support other audio formats as A2DP does.
--
Henryk Plötz
Grüße aus Berlin
~~~~~~~ Un-CDs, nein danke! http://www.heise.de/ct/cd-register/ ~~~~~~~
~ Help Microsoft fight software piracy: Give Linux to a friend today! ~
Hi Brad,
> This all sounds very good and I feel like a dope for getting rid of my
> bluetake headset when I had only ordered, but not received, a
> replacement. I'm still waiting for my itech. :(
if people start to donate some money, then I can buy you one of the HP
headphones to work with.
> > So what I have done so far is that I extended the pcm_a2dp ALSA plugin
> > in the utils CVS repository with the ability to cache the connection.
>
> btsco now fires up the connection on demand and drops it immediately
> after the close. we are going to need caching too. I noticed my phone
> holds the connection open for about 3-5 seconds after it uses it.
Since I want to realize the headset/handsfree support as an ALSA
plugin as well, we will need that too. It is only for these buggy players.
> > To fully support all A2DP headphones (including Logitech/HP) this plugin
> > needs a AVDTP implementation. I have a hacked one on my development
> > system and it works fine with the GCT headphones. The code is a little
> > bit messy, but in general it works. It is fully event driven and so it
> > deprecates the idea of the AVDTP library. It is a lot easier to do it
> > directly inside the plugin. I need to cleanup the code and submit it to
> > the CVS when there is a little bit more time.
>
> I am excited to see this!
Don't expect too much, because I am totally busy next week.
> > all about the timing. The perfect thing now would be if we can tell ALSA
> > to time everything for us. Problem is that I am not an audio expert and
> > also not an ALSA expert. Maybe the SBC encoder should return/store the
> > time the SBC frame is encoded for.
>
> I think alsa should help us out here. If I have to learn more about the
> alsa api to help, I will do it.
Please do so. If we can use ALSA itself for timing, then we should get a
fully working pcm_a2dp plugin when they finally release the 1.0.9 version
of ALSA.
> Doing a2dp inside an alsa plugin is clearly the right way. Thomas was
> saying that requiring alsa-lib was not good for embedded. I'm working on
> an xscale board now and I do have to use a precious 700k of limited
> flash for alsa-lib, but the alternative is to write something into the
> kernel that would never be accepted by Linus.
Embedded stuff is always different. If you look at my LinuxTag 2004
slides, I had an audio sublayer inside the Bluetooth core in mind, but I
think that will never happen.
Regards
Marcel
Hi Brad,
> > Later on, like rfcomm manages rfcomm0 etc. entries, some a2dp tool
> > could bind a2dp peripherals to /dev/a2dpN where SBC audio would be
> > expected and streamed to the a2dp peripheral bound there.
>
> managing connections is kind of a pain. it can't be done entirely in
> kernel space. you can see how this gets spread into the btsco daemon.
putting AVDTP inside the kernel is quite easy from the connection
viewpoint, and in the long run we will need it anyway. Examples are
when it becomes possible to stream video and audio, or if you want to
access two separate sink endpoints on the same device. The kernel AVDTP
layer must do all the sync and multiplexing work here, and it is the only
place where this can be done in a sane way. However, this is out of focus
right now.
The SCO and headset/handsfree support is different, because of the
needed RFCOMM connection and the AT parser. Without the AT parser you
can't decide when to open the SCO link. It is actually broken by design.
Regards
Marcel
Hi Guylhem,
> > I also got first results with the HP headphones by adding a usleep(2500)
> > for every SBC frame. It is also needed to pack as many SBC frames into
> > an AVDTP frame as possible. In my case this was up to eight. So this is
> > all about the timing. The perfect thing now would be if we can tell ALSA
> > to time everything for us. Problem is that I am not an audio expert and
> > also not an ALSA expert. Maybe the SBC encoder should return/store the
> > time the SBC frame is encoded for.
> >
> > Comments? Ideas?
>
> (A question first - did you add the usleep to Mayank code? I'll then
> take it and try to optimise it for ARM. Last time I just made it
> barely compile :-)
No, and it is totally a hack that only works on my P4-HT desktop systems.
The time to wait will also depend on the time the SBC encoder needs to
actually create the SBC frame, and this of course depends on the CPU
etc.
> Then my rant- why everyone wants an alsa plugin? It's ugly!
ALSA is not perfect and it has its edges, but it is far from ugly.
It is actually the only sane thing to do on the desktop.
> IMHO a2dp will be mostly useful for embedded system or in specific
> applications on desktop systems (xmms, etc). Having to use alsa is at
> best useless and at worse a potential showstopper.
So you are going to write IO plugins for Gnomemeeting, Totem, xine,
mplayer, sox, mpg123 and so on. Have fun with it ;)
> - on most embedded systems resources are scarce. Having to include
> alsa for what can be currently done from the command line is a bad idea
We are not talking about ALSA and its pcm_a2dp plugin for embedded
systems here. The point of the plugin is the desktop; in the case of an
embedded system you can come up with something by yourself. The only
hard part of A2DP is the SBC encoder/decoder, and this will stay generic
as a library.
> - I think ideally a standalone application would be best suited for
> this. The standalone application could be built over mpg321, some ogg
> player, whatever, and do the mp3/ogg decoding, sbc encoding and a2dp
> streaming until some dedicated tool manage a2dp streaming.
> It would be much easier to optimise for embedded systems - most don't
> have a FPU, so minimizing floating point operation is very important
You are mixing things up here. AVDTP only describes the transport
protocol and it doesn't use floating point. The A2DP profile only talks
about how to deal with AVDTP and how to put the SBC frames into the
AVDTP payload. No floating point operations are involved there either. Now
we come down to the SBC encoding and decoding, and yes, this can be
problematic for embedded systems (especially if they don't have an FPU),
but if you come up with an all integer based implementation of SBC,
I am happy to adapt it (as long as it is under LGPL).
> - on desktop systems a2dp could also be managed by a standalone
> application, which could be an audio out plugin for xmms/mplayer etc.
> This plugin would also manage sbc encoding and a2dp streaming until
> some dedicated tool manage a2dp streaming.
Again, too many plugins to be written. This is why you are going to
choose ALSA as a middle layer between applications and the actual sound
hardware. A Bluetooth A2DP headphone is a virtual soundcard, nothing
more and nothing less. View it from this point and realize that it
only replaces the PCI bus with a Bluetooth connection. It is not that
easy in reality, but you must look at it this way.
> - for kernel integration, linus &co may not be interested in a
> kernel-side sbc decoder- they will certainly ask for a /dev/ entry
> taking the native format - in that case already encoded sbc. Remember
> the mono/stereo and 48 kHz rants:
> http://lists.arm.linux.org.uk/pipermail/linux-arm-kernel/2001-July/003997.html
> http://www.uwsg.indiana.edu/hypermail/linux/kernel/0105.3/0324.html
>
> So my suggestion, at least until a good strategy has been defined, would
> be to use a2play as a work base, and add mp3/ogg decoding - the "all
> in one" approach. Quite easy to test, enhance, fix, finetune, etc -
> better IMHO that separate tools for the moment.
Actually we used a2play as a working and test base and we learned a lot.
Now it is time to get this knowledge into the right shape and make it
easy for the end user who wants to use their new A2DP headphones with
Linux.
> Later on, like rfcomm manages rfcomm0 etc. entries, some a2dp tool
> could bind a2dp peripherals to /dev/a2dpN where SBC audio would be
> expected and streamed to the a2dp peripheral bound there.
>
> Then, applications would simply write to that device in sbc encoded
> format. A first candidate application would be a2play [if it follows
> the "all in one" approach], which would simply have to be stripped of
> its current main function [sbcencoding, a2dp streaming], to become a
> mp3/ogg to sbc decoder writting to /dev/a2dpN [that's why I advocate a
> all-in-one approach]
We are going to do this approach, but we will use sockets for it, because
simple character devices make no sense. The main interface for RFCOMM
is also a socket, and the RFCOMM TTYs are only for legacy apps.
The reason for not doing it at the moment is that it is too much work.
Writing a kernel layer (it is not a driver) for AVDTP is quite complex
and I don't want to deal with it at the moment. There are other things to
do that have a higher priority on my todo list, for example the L2CAP
flow control and retransmission support.
But once this AVDTP kernel layer is there, the pcm_a2dp ALSA plugin
will use it and there is no visible change for the end user. The
advantage will be that you can also use AVDTP (also for video streaming)
from any other application. If then someone comes up with an integer
version of the SBC codec, I may think about moving that into the kernel,
too. But to be honest, this is future talk and has nothing to do with
reality right now.
> Please don't take that personally, I just want to offer my opinion on
> the right way to do that. I fear an alsa plugin will cause immediate
> problems (to me on ARM systems) then later or for kernel integration.
No offense taken. And actually I asked for comments :)
Regards
Marcel
hey
> flash for alsa-lib, but the alternative is to write something into the
> kernel that would never be accepted by Linus.
I'm talking about the alternatives for a driver that is usable through
/dev/dsp (keeping a2play around is an even better fit for embedded).
It is hard to give up 700k on alsa-lib, but you can see how I've added
it to Erik Andersen's buildroot. I'll be making that available soon.
Brad
Guylhem
> IMHO a2dp will be mostly useful for embedded system or in specific
> applications on desktop systems (xmms, etc). Having to use alsa is at
> best useless and at worse a potential showstopper.
We will keep and update the standalone streamer. It doesn't integrate
well with other apps, but you can deal with that when embedding.
> - on desktop systems a2dp could also be managed by a standalone
> application, which could be an audio out plugin for xmms/mplayer etc.
> This plugin would also manage sbc encoding and a2dp streaming until
> some dedicated tool manage a2dp streaming.
We can write one plugin for alsa and get support for
xmms/mplayer/xine/etc. If someone else wants native plugins for all
those apps, that's great.
> So my suggestion, at least until a good strategy has been defined, would
> be to use a2play as a work base, and add mp3/ogg decoding - the "all
> in one" approach. Quite easy to test, enhance, fix, finetune, etc -
> better IMHO that separate tools for the moment.
I think this could be an attractive utility for embedded work, but I
would be careful how you add mp3 decoding. My gumstix has analog audio
out. I wouldn't want to use the space for an app that can decode mp3
(new a2play) but can't also send it to analog out.
> Later on, like rfcomm manages rfcomm0 etc. entries, some a2dp tool
> could bind a2dp peripherals to /dev/a2dpN where SBC audio would be
> expected and streamed to the a2dp peripheral bound there.
Managing connections is kind of a pain. It can't be done entirely in
kernel space. You can see how this gets spread into the btsco daemon.
Brad
Marcel
This all sounds very good and I feel like a dope for getting rid of my
bluetake headset when I had only ordered, but not received, a
replacement. I'm still waiting for my itech. :(
> So what I have done so far is that I extended the pcm_a2dp ALSA plugin
> in the utils CVS repository with the ability to cache the connection.
btsco now fires up the connection on demand and drops it immediately
after the close. We are going to need caching too; I noticed my phone
holds the connection open for about 3-5 seconds after it uses it.
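The caching Brad describes could look roughly like this inside the plugin. This is only a minimal sketch under assumed names (`a2dp_cache`, `cache_release`, `cache_acquire`, `CACHE_IDLE_SECS` are all invented here); the real plugin would hang this off its private data:

```c
/* Minimal sketch of caching the transport across PCM close/open so a track
 * change doesn't drop the ACL link. All names here are invented; the real
 * plugin would keep this in its private data. */
#include <time.h>

#define CACHE_IDLE_SECS 5   /* phone-like behaviour: hold the link ~3-5 s */

struct a2dp_cache {
	int    sock;        /* cached transport socket, -1 when none */
	time_t last_used;   /* set when the PCM is closed */
};

/* close hook: keep the socket open, just remember when we stopped using it */
static void cache_release(struct a2dp_cache *c)
{
	c->last_used = time(NULL);
}

/* open hook: reuse a fresh cached socket, otherwise signal a reconnect */
static int cache_acquire(struct a2dp_cache *c)
{
	if (c->sock >= 0 && time(NULL) - c->last_used <= CACHE_IDLE_SECS)
		return c->sock;     /* reuse: no new ACL/AVDTP setup */
	return -1;              /* caller reconnects and stores the new socket */
}
```

A background timer (or the next open) would then close sockets that sit idle past the timeout, which matches the phone behaviour above.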
> To fully support all A2DP headphones (including Logitech/HP) this plugin
> needs an AVDTP implementation. I have a hacked one on my development
> system and it works fine with the GCT headphones. The code is a little
> bit messy, but in general it works. It is fully event driven and so it
> deprecates the idea of the AVDTP library. It is a lot easier to do it
> directly inside the plugin. I need to clean up the code and submit it to
> the CVS when there is a little bit more time.
I am excited to see this!
> all about the timing. The perfect thing now would be if we can tell ALSA
> to time everything for us. Problem is that I am not an audio expert and
> also not an ALSA expert. Maybe the SBC encoder should return/store the
> time the SBC frame is encoded for.
I think ALSA should help us out here. If I have to learn more about the
ALSA API to help, I will do it.
Doing A2DP inside an ALSA plugin is clearly the right way. Thomas was
saying that requiring alsa-lib was not good for embedded. I'm working on
an XScale board now and I do have to use a precious 700k of limited
flash for alsa-lib, but the alternative is to write something into the
kernel that would never be accepted by Linus.
Brad
On Sunday, 24 April 2005 at 14:18:32 (+0200), Marcel Holtmann wrote:
> I also got first results with the HP headphone by adding an usleep(2500)
> for every SBC frame. It is also needed to pack as many SBC frames into
> an AVDTP frame as possible. In my case this was up to eight. So this is
> all about the timing. The perfect thing now would be if we can tell ALSA
> to time everything for us. Problem is that I am not an audio expert and
> also not an ALSA expert. Maybe the SBC encoder should return/store the
> time the SBC frame is encoded for.
>
> Comments? Ideas?
(A question first: did you add the usleep to Mayank's code? I'll then
take it and try to optimise it for ARM. Last time I just made it
barely compile :-)
Then my rant: why does everyone want an ALSA plugin? It's ugly!
IMHO A2DP will be mostly useful for embedded systems or in specific
applications on desktop systems (xmms, etc). Having to use ALSA is at
best useless and at worst a potential showstopper.
- on most embedded systems resources are scarce. Having to include
ALSA for what can currently be done from the command line is a bad idea.
- I think ideally a standalone application would be best suited for
this. The standalone application could be built over mpg321, some ogg
player, whatever, and do the mp3/ogg decoding, SBC encoding and A2DP
streaming until some dedicated tool manages A2DP streaming.
It would be much easier to optimise for embedded systems - most don't
have an FPU, so minimizing floating point operations is very important.
- on desktop systems A2DP could also be managed by a standalone
application, which could be an audio out plugin for xmms/mplayer etc.
This plugin would also manage SBC encoding and A2DP streaming until
some dedicated tool manages A2DP streaming.
- for kernel integration, Linus & co may not be interested in a
kernel-side SBC decoder - they will certainly ask for a /dev/ entry
taking the native format, in that case already encoded SBC. Remember
the mono/stereo and 48 kHz rants:
http://lists.arm.linux.org.uk/pipermail/linux-arm-kernel/2001-July/003997.html
http://www.uwsg.indiana.edu/hypermail/linux/kernel/0105.3/0324.html
So my suggestion, at least until a good strategy has been defined, would
be to use a2play as a work base and add mp3/ogg decoding - the "all
in one" approach. Quite easy to test, enhance, fix, finetune, etc -
better IMHO than separate tools for the moment.
Later on, like rfcomm manages rfcomm0 etc. entries, some a2dp tool
could bind a2dp peripherals to /dev/a2dpN where SBC audio would be
expected and streamed to the a2dp peripheral bound there.
Then, applications would simply write to that device in SBC encoded
format. A first candidate application would be a2play [if it follows
the "all in one" approach], which would simply have to be stripped of
its current main function [SBC encoding, A2DP streaming] to become an
mp3/ogg to SBC decoder writing to /dev/a2dpN [that's why I advocate an
all-in-one approach].
Please don't take that personally, I just want to offer my opinion on
the right way to do this. I fear an ALSA plugin will cause immediate
problems (for me on ARM systems) and later ones for kernel integration.
--
Bien à vous - Best regards,
Guylhem P. Aznar
--
*@externe.net                                          http://externe.net
Périmé/Deprecated: @oeil.qc.ca, @metalab.unc.edu, @ibiblio.org, @7un.org
GPG: 92EB37C1 DD11C9C9 20519D01 E8FA1B11 42975AF7  http://externe.net/pubkey