Hello!
On Wednesday 08 January 2020 14:34:32 Denis Kenzior wrote:
> Hi Pali,
>
> > Do you have a reasonable solution also for second issue?
> >
>
> HSP profile has always been a problem child. It isn't really all that
> useful as a profile, and given how minimal it is, the right place for it
> always seemed to be inside Pulse Audio itself. This is what Marcel & I
> agreed upon back about 8-9 years ago anyway.
>
> You are advocating that HSP is still useful, particularly with vendor
> extensions. Which is fair enough, but now you have to figure out how and
> where to put this support.
>
> As mentioned earlier, one idea you can explore is to create a small daemon
> (or maybe it can even be part of ofonod itself) that will handle HSP
> client/server roles. See for example the dundee daemon that is part of
> ofono.git. dundee handles Bluetooth DUN profile and might be a good model /
> starting point for what you're trying to accomplish.
I looked at dundee, but it does it is separated service, not on org.ofono
So it looks like it does not fit into HSP / HFP needs.
Currently you can list all audio cards by DBus call:
"org.ofono", "/", "org.ofono.HandsfreeAudioManager", "GetCards"
and so this (or some new) call should list all HSP and HFP devices/cards
for audio application (pulseaudio).
Audio application (e.g. pulseaudio) really do not want to handle two
separate services to monitor and process HSP/HFP devices.
For audio application are HSP and HFP devices equivalent, they provide
same features: SCO socket, API for controlling microphone and speaker
gain; plus optionally specify used codec.
So having two separate services which fully divided for audio
application purpose does not make sense.
So it is possible that both HSP and HFP audio cards would be available
via one audio API? Because I do not see how it could be done via design
like dundee.
> You can then implement the same API interfaces for setting up the HSP audio
> stream as oFono does today (i.e. https://git.kernel.org/pub/scm/network/ofono/ofono.git/tree/doc/handsfree-audio-api.txt),
This API is unusable for both HSP and HFP audio streams. In pulseaudio
it is somehow used, but it is not suitable.
In part of designing hsphfpd I created a new DBus API for audio
application to fit for audio daemons. See org.hsphfpd.AudioAgent:
https://github.com/pali/hsphfpd-prototype/blob/prototype/hsphfpd.txt#L600-L663
Main objection for handsfree-audio-api.txt is that it does not provide
information about locally used codec and somehow mixes air codec and
local codec. In my hsphfpd.txt I used term "AgentCodec" for bluetooth
local codec and term "AirCodec" for bluetooth air codec format.
Another objection against handsfree-audio-api.txt API is that it is
bound to HF codecs (via number) and does not support for pass e.g. CSR
codecs.
What is completely missing in that API is controlling volume level.
And also API does not provide socket MTU information or ability to
change/specify which codec would be used.
So something like org.hsphfpd.AudioAgent API in my hsphfpd design would
be needed.
> which would make PulseAudio's job much easier, since the audio stream
> aspects would be essentially identical to HFP. If you're part of oFono's
> tree, then in theory many implementation aspects could be reused.
>
> If you want to provide some higher-level APIs for external applications,
> then HSP specific interfaces (APIs) can be added as needed.
Non-audio APIs which are needed to export (for both HSP and HFP profiles):
* battery level (0% - 100%)
* power source (external, battery, unknown)
* ability to send "our laptop" battery level and power source to remote device
* send text message to embedded display
* process button press event (exported via linux kernel uinput)
(plus all telephony related operations, but those are already supported
and provided by ofono)
> If you decide this is something you want to pursue, then I'm happy to host
> this in the oFono tree.
>
> Regards,
> -Denis
--
Pali Rohár
[email protected]
Hello! I could you or other ofono developer please look at my comments
below and say how we can continue with these problems? Now another month
passed and I really would like to continue in effort to implement
HSP/HFP profiles with audio support on Linux desktop.
On Wednesday 08 January 2020 22:25:37 Pali Rohár wrote:
> Hello!
>
> On Wednesday 08 January 2020 14:34:32 Denis Kenzior wrote:
> > Hi Pali,
> >
> > > Do you have a reasonable solution also for second issue?
> > >
> >
> > HSP profile has always been a problem child. It isn't really all that
> > useful as a profile, and given how minimal it is, the right place for it
> > always seemed to be inside Pulse Audio itself. This is what Marcel & I
> > agreed upon back about 8-9 years ago anyway.
> >
> > You are advocating that HSP is still useful, particularly with vendor
> > extensions. Which is fair enough, but now you have to figure out how and
> > where to put this support.
> >
> > As mentioned earlier, one idea you can explore is to create a small daemon
> > (or maybe it can even be part of ofonod itself) that will handle HSP
> > client/server roles. See for example the dundee daemon that is part of
> > ofono.git. dundee handles Bluetooth DUN profile and might be a good model /
> > starting point for what you're trying to accomplish.
>
> I looked at dundee, but it does it is separated service, not on org.ofono
> So it looks like it does not fit into HSP / HFP needs.
>
> Currently you can list all audio cards by DBus call:
>
> "org.ofono", "/", "org.ofono.HandsfreeAudioManager", "GetCards"
>
> and so this (or some new) call should list all HSP and HFP devices/cards
> for audio application (pulseaudio).
>
> Audio application (e.g. pulseaudio) really do not want to handle two
> separate services to monitor and process HSP/HFP devices.
>
> For audio application are HSP and HFP devices equivalent, they provide
> same features: SCO socket, API for controlling microphone and speaker
> gain; plus optionally specify used codec.
>
> So having two separate services which fully divided for audio
> application purpose does not make sense.
>
> So it is possible that both HSP and HFP audio cards would be available
> via one audio API? Because I do not see how it could be done via design
> like dundee.
>
> > You can then implement the same API interfaces for setting up the HSP audio
> > stream as oFono does today (i.e. https://git.kernel.org/pub/scm/network/ofono/ofono.git/tree/doc/handsfree-audio-api.txt),
>
> This API is unusable for both HSP and HFP audio streams. In pulseaudio
> it is somehow used, but it is not suitable.
>
> In part of designing hsphfpd I created a new DBus API for audio
> application to fit for audio daemons. See org.hsphfpd.AudioAgent:
> https://github.com/pali/hsphfpd-prototype/blob/prototype/hsphfpd.txt#L600-L663
>
> Main objection for handsfree-audio-api.txt is that it does not provide
> information about locally used codec and somehow mixes air codec and
> local codec. In my hsphfpd.txt I used term "AgentCodec" for bluetooth
> local codec and term "AirCodec" for bluetooth air codec format.
>
> Another objection against handsfree-audio-api.txt API is that it is
> bound to HF codecs (via number) and does not support for pass e.g. CSR
> codecs.
>
> What is completely missing in that API is controlling volume level.
>
> And also API does not provide socket MTU information or ability to
> change/specify which codec would be used.
>
> So something like org.hsphfpd.AudioAgent API in my hsphfpd design would
> be needed.
>
> > which would make PulseAudio's job much easier, since the audio stream
> > aspects would be essentially identical to HFP. If you're part of oFono's
> > tree, then in theory many implementation aspects could be reused.
> >
> > If you want to provide some higher-level APIs for external applications,
> > then HSP specific interfaces (APIs) can be added as needed.
>
> Non-audio APIs which are needed to export (for both HSP and HFP profiles):
>
> * battery level (0% - 100%)
> * power source (external, battery, unknown)
> * ability to send "our laptop" battery level and power source to remote device
> * send text message to embedded display
> * process button press event (exported via linux kernel uinput)
>
> (plus all telephony related operations, but those are already supported
> and provided by ofono)
>
> > If you decide this is something you want to pursue, then I'm happy to host
> > this in the oFono tree.
> >
> > Regards,
> > -Denis
>
--
Pali Rohár
[email protected]
Hi Pali,
On 1/8/20 3:25 PM, Pali Rohár wrote:
> Hello!
>
Somehow this went straight to my Junk folder, so I didn't see this
message at all until now.
>
> Audio application (e.g. pulseaudio) really do not want to handle two
> separate services to monitor and process HSP/HFP devices. >
> For audio application are HSP and HFP devices equivalent, they provide
> same features: SCO socket, API for controlling microphone and speaker
> gain; plus optionally specify used codec.
>
> So having two separate services which fully divided for audio
> application purpose does not make sense.
>
> So it is possible that both HSP and HFP audio cards would be available
> via one audio API? Because I do not see how it could be done via design
> like dundee.
>
One API sure. Maybe on multiple services. Honestly, I don't see why
this would be such a burden for PA to watch 2 dbus services instead of
1. From oFono perspective it would make more sense to keep the HSP part
a separate daemon. I could be convinced otherwise if it is indeed a big
burden for PA...
>> You can then implement the same API interfaces for setting up the HSP audio
>> stream as oFono does today (i.e. https://git.kernel.org/pub/scm/network/ofono/ofono.git/tree/doc/handsfree-audio-api.txt),
>
> This API is unusable for both HSP and HFP audio streams. In pulseaudio
> it is somehow used, but it is not suitable.
>
Funny. "It is used but not suitable"?
> Main objection for handsfree-audio-api.txt is that it does not provide
> information about locally used codec and somehow mixes air codec and
> local codec. In my hsphfpd.txt I used term "AgentCodec" for bluetooth
> local codec and term "AirCodec" for bluetooth air codec format.
Okay. But, just FYI, at the time there was no hw that could do such
on-the-fly conversions, so this use case wasn't considered/implemented.
There's really no reason why we couldn't extend our APIs to handle this.
>
> Another objection against handsfree-audio-api.txt API is that it is
> bound to HF codecs (via number) and does not support for pass e.g. CSR
> codecs.
True. In retrospect we probably should have used strings. But it was
assumed that vendor extensions would go via the Bluetooth SIG Assigned
Numbers facility. Anyhow, we can always add a 'Register2' method that
could take codecs as a string array or a combination of strings & ints.
And if Register2 was used, then use 'NewConnection2' with a signature
that supports passing in vendor codecs and whatever else that might be
needed.
>
> What is completely missing in that API is controlling volume level.
>
It is there on the CallVolume interface
> And also API does not provide socket MTU information or ability to
> change/specify which codec would be used.
There was no need, we automatically defaulted to using Wide band if
available. Third party codecs are a new use case (for oFono HFP), so
would require an API extension.
>
> Non-audio APIs which are needed to export (for both HSP and HFP profiles):
>
> * battery level (0% - 100%)
> * power source (external, battery, unknown)
> * ability to send "our laptop" battery level and power source to remote device
> * send text message to embedded display
> * process button press event (exported via linux kernel uinput)
>
I think all of these are feasible to support under the current oFono
structure, either via plugins or API extensions.
Regards,
-Denis
On Wednesday 12 February 2020 15:29:04 Denis Kenzior wrote:
> On 1/8/20 3:25 PM, Pali Rohár wrote:
> > Audio application (e.g. pulseaudio) really do not want to handle two
> > separate services to monitor and process HSP/HFP devices. >
> > For audio application are HSP and HFP devices equivalent, they provide
> > same features: SCO socket, API for controlling microphone and speaker
> > gain; plus optionally specify used codec.
> >
> > So having two separate services which fully divided for audio
> > application purpose does not make sense.
> >
> > So it is possible that both HSP and HFP audio cards would be available
> > via one audio API? Because I do not see how it could be done via design
> > like dundee.
> >
>
> One API sure. Maybe on multiple services. Honestly, I don't see why this
> would be such a burden for PA to watch 2 dbus services instead of 1. From
> oFono perspective it would make more sense to keep the HSP part a separate
> daemon. I could be convinced otherwise if it is indeed a big burden for
> PA...
It is not only pulseaudio, but also other applications which are going
to use HSP and HFP profiles. Having more services just complicates
things. Most majority of devices support both HSP and HFP profiles and
target application would need to start merging these two services into
one to create object overview of one device.
I do not see search why to complicate it for HSP and HFP applications
users to divide these two profiles into separate services and daemons.
Due to these problems I designed my hsphfpd to handle both profiles as
for audio applications they are fully equivalent and for other are very
similar. In that way hsphfpd provides all supported HSP and HFP profiles
and therefore taraget application do not have to introspect lot of
places and then merge information together. Otherwise every one HSP/HFP
application need to do this.
> > > You can then implement the same API interfaces for setting up the HSP audio
> > > stream as oFono does today (i.e. https://git.kernel.org/pub/scm/network/ofono/ofono.git/tree/doc/handsfree-audio-api.txt),
> >
> > This API is unusable for both HSP and HFP audio streams. In pulseaudio
> > it is somehow used, but it is not suitable.
> >
>
> Funny. "It is used but not suitable"?
Used by who? Gateway role is fully broken and client (hfp) role is used
probably only by some power users. Also there is no support for mSBC in
pusleaudio.
So, no, really it is not used.
> > Main objection for handsfree-audio-api.txt is that it does not provide
> > information about locally used codec and somehow mixes air codec and
> > local codec. In my hsphfpd.txt I used term "AgentCodec" for bluetooth
> > local codec and term "AirCodec" for bluetooth air codec format.
>
> Okay. But, just FYI, at the time there was no hw that could do such
> on-the-fly conversions, so this use case wasn't considered/implemented.
This cannot be truth as probably every bluetooth HW is doing on-the-fly
conversion between CVSD and PCM. I was not able to find HW which allows
me to send raw CVSD samples...
But OK, I understand that just for one codec (or two?) API was designed
very simple and nobody though about possibility about usage of HW
encoders also for non-CVSD codecs. For that time in past, it make sense.
> There's really no reason why we couldn't extend our APIs to handle this.
>
> >
> > Another objection against handsfree-audio-api.txt API is that it is
> > bound to HF codecs (via number) and does not support for pass e.g. CSR
> > codecs.
>
> True. In retrospect we probably should have used strings. But it was
> assumed that vendor extensions would go via the Bluetooth SIG Assigned
> Numbers facility. Anyhow, we can always add a 'Register2' method that could
> take codecs as a string array or a combination of strings & ints. And if
> Register2 was used, then use 'NewConnection2' with a signature that supports
> passing in vendor codecs and whatever else that might be needed.
This is still not enough. Audio application (e.g. pulseaudio) need to
register AgentCodec, not AirCodec. And current API is somehow mixed.
Audio application needs to know what is the format which bluetooth chip
sends to userspace (PCM? mSBC? CVSD? PCMA? AuriStream?) and which format
bluetooth chip expects. I named this AgentCodec.
And it is fully different of codec negotiated by HFP protocol. Here is
negotiated codec which is transmitted over the air. Hence the name
AirCodec.
HFP daemon needs to negotiate AirCodec via HF codec and audio daemon
(e.g. pulseaudio) needs to negotiate AgentCodec via HFP daemon.
So API in that form is unusable. And e.g. API which I designed for
hsphfpd Audio Agent is needed.
> >
> > What is completely missing in that API is controlling volume level.
> >
>
> It is there on the CallVolume interface
>
> > And also API does not provide socket MTU information or ability to
> > change/specify which codec would be used.
>
> There was no need, we automatically defaulted to using Wide band if
> available. Third party codecs are a new use case (for oFono HFP), so would
> require an API extension.
MTU is needed also for mSBC codec if encoding is done in software
(pulseaudio). Without it, this wide band support in ofono is unusable
for pulseaudio.
And also API extension for choosing codec. Also for choosing if software
of hardware encoding would be used. We know that there are lot of broken
devices in different way and it is really needed for either blacklist
some codec or switch between hw and sw encoding if something strange
happen. Without it pulseaudio is not going to support more codes then
default required (CVSD).
> >
> > Non-audio APIs which are needed to export (for both HSP and HFP profiles):
> >
> > * battery level (0% - 100%)
> > * power source (external, battery, unknown)
> > * ability to send "our laptop" battery level and power source to remote device
> > * send text message to embedded display
> > * process button press event (exported via linux kernel uinput)
> >
>
> I think all of these are feasible to support under the current oFono
> structure, either via plugins or API extensions.
Ok. Are you going to implement them?
I think that all of these are missing parts in ofono and something which
is needed to be implemented for desktop/laptop HSP and HFP profile
support.
--
Pali Rohár
[email protected]
Hi Pali,
> Used by who? Gateway role is fully broken and client (hfp) role is used
I guess that depends on your perspective. I've already pointed out that
the desktop 'AG' use case was never something we needed to implement.
If you want to fix oFono to do that, great. If you want to write your
own daemon to do this, also great.
> probably only by some power users. Also there is no support for mSBC in
> pusleaudio.
Why is oFono at fault for this? Genuine question. What are the
roadblocks to mSBC support?
>
> So, no, really it is not used.
>
>>> Main objection for handsfree-audio-api.txt is that it does not provide
>>> information about locally used codec and somehow mixes air codec and
>>> local codec. In my hsphfpd.txt I used term "AgentCodec" for bluetooth
>>> local codec and term "AirCodec" for bluetooth air codec format.
>>
>> Okay. But, just FYI, at the time there was no hw that could do such
>> on-the-fly conversions, so this use case wasn't considered/implemented.
>
> This cannot be truth as probably every bluetooth HW is doing on-the-fly
> conversion between CVSD and PCM. I was not able to find HW which allows
> me to send raw CVSD samples...
At the time this was all done in software. Alternatively, the hardware
was directly wired between the sound card / modem and the bluetooth
chip. So just opening the SCO socket was enough.
>> True. In retrospect we probably should have used strings. But it was
>> assumed that vendor extensions would go via the Bluetooth SIG Assigned
>> Numbers facility. Anyhow, we can always add a 'Register2' method that could
>> take codecs as a string array or a combination of strings & ints. And if
>> Register2 was used, then use 'NewConnection2' with a signature that supports
>> passing in vendor codecs and whatever else that might be needed.
>
> This is still not enough. Audio application (e.g. pulseaudio) need to
> register AgentCodec, not AirCodec. And current API is somehow mixed.
> Audio application needs to know what is the format which bluetooth chip
> sends to userspace (PCM? mSBC? CVSD? PCMA? AuriStream?) and which format
> bluetooth chip expects. I named this AgentCodec.
So how do you negotiate the 'AgentCodec'? Does BlueZ expose this
information? If so, how? SCO socket option or ...?
>>> And also API does not provide socket MTU information or ability to
>>> change/specify which codec would be used.
>>
>> There was no need, we automatically defaulted to using Wide band if
>> available. Third party codecs are a new use case (for oFono HFP), so would
>> require an API extension.
>
> MTU is needed also for mSBC codec if encoding is done in software
> (pulseaudio). Without it, this wide band support in ofono is unusable
> for pulseaudio.
Isn't the MTU obtained from the SCO socket itself? How is oFono at
fault here?
>
> And also API extension for choosing codec. Also for choosing if software
> of hardware encoding would be used. We know that there are lot of broken
> devices in different way and it is really needed for either blacklist
> some codec or switch between hw and sw encoding if something strange
> happen. Without it pulseaudio is not going to support more codes then
> default required (CVSD).
This seems to be a kernel / device driver / firmware issue and should
be solved at that level.
>
>>>
>>> Non-audio APIs which are needed to export (for both HSP and HFP profiles):
>>>
>>> * battery level (0% - 100%)
>>> * power source (external, battery, unknown)
>>> * ability to send "our laptop" battery level and power source to remote device
>>> * send text message to embedded display
>>> * process button press event (exported via linux kernel uinput)
>>>
>>
>> I think all of these are feasible to support under the current oFono
>> structure, either via plugins or API extensions.
>
> Ok. Are you going to implement them?
> I think that all of these are missing parts in ofono and something which
> is needed to be implemented for desktop/laptop HSP and HFP profile
> support.
>
There are no plans to do this at the moment.
Regards,
-Denis
On Thursday 13 February 2020 12:14:06 Denis Kenzior wrote:
> Hi Pali,
>
> > Used by who? Gateway role is fully broken and client (hfp) role is used
>
> I guess that depends on your perspective. I've already pointed out that the
> desktop 'AG' use case was never something we needed to implement. If you
> want to fix oFono to do that, great. If you want to write your own daemon
> to do this, also great.
>
> > probably only by some power users. Also there is no support for mSBC in
> > pusleaudio.
>
> Why is oFono at fault for this? Genuine question. What are the roadblocks
> to mSBC support?
Because does not provide needed API.
> >
> > So, no, really it is not used.
> >
> > > > Main objection for handsfree-audio-api.txt is that it does not provide
> > > > information about locally used codec and somehow mixes air codec and
> > > > local codec. In my hsphfpd.txt I used term "AgentCodec" for bluetooth
> > > > local codec and term "AirCodec" for bluetooth air codec format.
> > >
> > > Okay. But, just FYI, at the time there was no hw that could do such
> > > on-the-fly conversions, so this use case wasn't considered/implemented.
> >
> > This cannot be truth as probably every bluetooth HW is doing on-the-fly
> > conversion between CVSD and PCM. I was not able to find HW which allows
> > me to send raw CVSD samples...
>
> At the time this was all done in software.
CVSD was never done in software. Always in hardware. As said, even now I
was not able to find bluetooth HW which would allow to do CVSD in software.
> Alternatively, the hardware was
> directly wired between the sound card / modem and the bluetooth chip. So
> just opening the SCO socket was enough.
>
> > > True. In retrospect we probably should have used strings. But it was
> > > assumed that vendor extensions would go via the Bluetooth SIG Assigned
> > > Numbers facility. Anyhow, we can always add a 'Register2' method that could
> > > take codecs as a string array or a combination of strings & ints. And if
> > > Register2 was used, then use 'NewConnection2' with a signature that supports
> > > passing in vendor codecs and whatever else that might be needed.
> >
> > This is still not enough. Audio application (e.g. pulseaudio) need to
> > register AgentCodec, not AirCodec. And current API is somehow mixed.
> > Audio application needs to know what is the format which bluetooth chip
> > sends to userspace (PCM? mSBC? CVSD? PCMA? AuriStream?) and which format
> > bluetooth chip expects. I named this AgentCodec.
>
> So how do you negotiate the 'AgentCodec'? Does BlueZ expose this
> information? If so, how? SCO socket option or ...?
It is done by HCI commands, therefore by kernel. There is discussion for
exporting userspace <--> kernel API to allow setting arbitrary
configurations for codecs supported by bluetooth HW.
Getting list of supported codecs can be done by this script:
https://github.com/pali/hsphfpd-prototype/blob/prototype/sco_features.pl
(needs to be run as root)
> > > > And also API does not provide socket MTU information or ability to
> > > > change/specify which codec would be used.
> > >
> > > There was no need, we automatically defaulted to using Wide band if
> > > available. Third party codecs are a new use case (for oFono HFP), so would
> > > require an API extension.
> >
> > MTU is needed also for mSBC codec if encoding is done in software
> > (pulseaudio). Without it, this wide band support in ofono is unusable
> > for pulseaudio.
>
> Isn't the MTU obtained from the SCO socket itself? How is oFono at fault
> here?
Yes, via some ioctl it can be done. But bluez for other bluetooth
profiles provides this information via dbus API. As bluez does not
support HSP/HFP it expects that software which implement it, provide
needed info.
> >
> > And also API extension for choosing codec. Also for choosing if software
> > of hardware encoding would be used. We know that there are lot of broken
> > devices in different way and it is really needed for either blacklist
> > some codec or switch between hw and sw encoding if something strange
> > happen. Without it pulseaudio is not going to support more codes then
> > default required (CVSD).
>
> This seems to be a kernel / device driver / firmware issue and should be
> solved at that level.
Why??? It is up to the application which owns SLC socket and this
application needs to provide API for it. Codecs are negotiated via AT
commands, so again only HFP / HSP daemon can do it.
> >
> > > >
> > > > Non-audio APIs which are needed to export (for both HSP and HFP profiles):
> > > >
> > > > * battery level (0% - 100%)
> > > > * power source (external, battery, unknown)
> > > > * ability to send "our laptop" battery level and power source to remote device
> > > > * send text message to embedded display
> > > > * process button press event (exported via linux kernel uinput)
> > > >
> > >
> > > I think all of these are feasible to support under the current oFono
> > > structure, either via plugins or API extensions.
> >
> > Ok. Are you going to implement them?
> > I think that all of these are missing parts in ofono and something which
> > is needed to be implemented for desktop/laptop HSP and HFP profile
> > support.
> >
>
> There are no plans to do this at the moment.
So, why should I even consider to use ofono at all? It does not support
none of above desktop feature, it does not support extended codecs, it
does not support HSP profile and also it does not support HFP profile
without physical modem (majority of desktops and laptops).
Please, do not take me wrong but after 2.5 months long discussion I just
see that ofono does not support needed parts, there are no plans for it
and everything needed is already designed and implemented in my prototype
hsphfpd daemon.
--
Pali Rohár
[email protected]
Hi Pali,
On 2/13/20 12:32 PM, Pali Rohár wrote:
>> At the time this was all done in software.
>
> CVSD was never done in software. Always in hardware. As said, even now I
> was not able to find bluetooth HW which would allow to do CVSD in software.
>
I don't remember the exact details. I seem to remember that for mSBC
the conversion was being done on the host and no 'on-the-fly' conversion
was done in hardware. Thus this host codec negotiation was not needed /
considered.
https://lists.ofono.org/hyperkitty/list/[email protected]/message/6CUFGDPUJBRIZA4GUVFD2EPOET25XTN3/
>> So how do you negotiate the 'AgentCodec'? Does BlueZ expose this
>> information? If so, how? SCO socket option or ...?
>
> It is done by HCI commands, therefore by kernel. There is discussion for
> exporting userspace <--> kernel API to allow setting arbitrary
> configurations for codecs supported by bluetooth HW.
>
> Getting list of supported codecs can be done by this script:
> https://github.com/pali/hsphfpd-prototype/blob/prototype/sco_features.pl
> (needs to be run as root)
So you might want to get BlueZ guys to expose this info properly first.
oFono is not in the business of opening raw hci sockets.
>> Isn't the MTU obtained from the SCO socket itself? How is oFono at fault
>> here?
>
> Yes, via some ioctl it can be done. But bluez for other bluetooth
> profiles provides this information via dbus API. As bluez does not
> support HSP/HFP it expects that software which implement it, provide
> needed info
Only PA (or whatever implements the audio agent) really cares about this
info and it can obtain it via getsockopt. So I really don't see why the
MTU should be exposed via D-Bus. And this is why it wasn't. I don't
see an issue here...?
>>
>> This seems to be a kernel / device driver / firmware issue and should be
>> solved at that level.
>
> Why??? It is up to the application which owns SLC socket and this
> application needs to provide API for it. Codecs are negotiated via AT
> commands, so again only HFP / HSP daemon can do it.
So in my opinion it is really up to the kernel to tell us whether a
given hardware supports wideband speech. So any quirks need to go into
the kernel. Then userspace can select the best available codec
automatically without resorting to having the user twiddle some settings.
> So, why should I even consider to use ofono at all? It does not support
> none of above desktop feature, it does not support extended codecs, it
> does not support HSP profile and also it does not support HFP profile
> without physical modem (majority of desktops and laptops).
Your initial proposal wanted to use oFono as some sort of helper for
your daemon, and that is just not going to be accepted by oFono
upstream. I gave you a few alternatives, including how to extend oFono
to do what you want. If you want to roll your own, go for it.
Regards,
-Denis
On Thursday 13 February 2020 13:08:56 Denis Kenzior wrote:
> > So, why should I even consider to use ofono at all? It does not support
> > none of above desktop feature, it does not support extended codecs, it
> > does not support HSP profile and also it does not support HFP profile
> > without physical modem (majority of desktops and laptops).
>
> Your initial proposal wanted to use oFono as some sort of helper for your
> daemon, and that is just not going to be accepted by oFono upstream. I gave
> you a few alternatives, including how to extend oFono to do what you want.
> If you want to roll your own, go for it.
You have said that ofono is not going to accept usage of hsphfpd for
telephony and also you said that ofono does not have plans to implement
needed HSP and HFP features for desktop users.
So conclusion from all these facts is that ofono is fully unusable for
desktop / latop usage of bluetooth HSP and HFP profiles.
--
Pali Rohár
[email protected]