Linux Enthusiasts:
Over the past year, Linux Support Services, Inc. and Zapata
Telephony, Inc. have been working together on building the "Zaptel" pseudo
TDM bus architecture, and having at least 7 supported boards in a variety
of roles (T1, E1, multi-port T1, E1, FXS and FXO with USB, PCI, ISA, and
Ethernet interfaces), we are now interested in getting comments on the
driver architecture and moving towards integration into the 2.5 kernel.
The Zaptel telephony infrastructure differs substantially from the
existing Linux telephony structure, because it's designed to provide a
framework for creating a "pseudo TDM" bus inside the kernel, allowing
features like conferencing, DAXing, bridging, echo cancellation, HDLC
packetization, and other functions typically done in hardware to be
implemented in software instead, by simulating a TDM bus in the Linux
kernel (thanks to its remarkably low interrupt latency).
The driver framework (and associated user-space library) currently
handles a variety of interfaces (including T1, E1, PRI, FXS, FXO, E&M,
Feature Group D) and features (DTMF detection, echo cancellation,
conferencing, digital gain adjustment, HDLC data modes via SyncPPP, frame
relay, ISDN RAS, etc.). Drivers for new hardware are very simple to
add, and channels from one driver can be bridged to those of another
driver, even if their timings are not synchronized.
The primary application we use on this interface (although
certainly not the only one) is the Asterisk Open Source PBX
(http://www.asterisk.org) which permits you to build a full featured PBX
(Private Branch eXchange) or IVR (Interactive Voice Response) server with
a Linux box. Using the zaptel infrastructure, Asterisk provides the
ability to deploy phone service with all the call features you would expect.
For more information, go to http://www.linux-support.net, or
http://www.asterisk.org.
I am very interested in seeking comments both on our driver
framework, and on how to go about submitting this for kernel inclusion if
appropriate.
Mark
On Sunday 21 July 2002 01:25, Mark Spencer wrote:
> The primary application we use on this interface (although
> certainly not the only one) is the Asterisk Open Source PBX
> (http://www.asterisk.org) which permits you to build a full featured PBX
> (Private Branch eXchange) or IVR (Interactive Voice Response) server with
> a Linux box. Using the zaptel infrastructure, Asterisk provides the
> ability to deploy phone service with all the call features you would expect.
But can it give error messages in Morse code?
> I am very interested in seeking comments both on our driver
> framework, and on how to go about submitting this for kernel inclusion if
> appropriate.
You haven't actually said a lot about the driver framework, judging from my
quick tour of the code and website. (Very quick, it's late here.) Perhaps
you could wax poetic on this subject?
In my quick tour I was looking for where the saw-off between kernel and
user space is in the source tree, and I began to get the feeling the
whole thing is kernel space, is this correct?
Anyway, this effort is exciting and ambitious. I *want* to use this, for
very practical reasons, never mind that it could well turn into yet another
vibrant embedded application area for Linux.
It strikes me that much of what you're doing qualifies as hard realtime
programming, particularly where you are doing things like interleaving file
transmission with realtime voice. I'm thinking that this may be a good
chance to give the new Adeos OS-layering technology a test drive, with a view
to achieving more reliable, lower latency signal processing and equipment
control. If things work out, this could qualify as the first genuine
consumer-oriented hard realtime application for Linux.
--
Daniel
A random question: is there any reason why Ogg isn't among the codecs?
--
Daniel
Greetings,
Mark Spencer <[email protected]> :
[...]
> Over the past year, Linux Support Services, Inc. and Zapata
> Telephony, Inc. have been working together on building the "Zaptel" pseudo
> TDM bus architecture, and having at least 7 supported boards in a variety
> of roles (T1, E1, multi-port T1, E1, FXS and FXO with USB, PCI, ISA, and
> Ethernet interfaces), we are now interested in getting comments on the
> driver architecture and moving towards integration into the 2.5 kernel.
A few remarks after a quick glance at zaptel-0.2.0:
- copy_{to/from}_user() may sleep. Don't call them with a spinlock held
(see Documentation/DocBook/kernel-locking.tmpl, and the sketch below).
- avoid unchecked copy_{to/from}_user() and friends.
- the failure paths aren't consistent: sometimes goto, sometimes not.
- the failure paths don't always keep kmalloc/kfree balanced.
- the fcstab[] declaration is duplicated.
- please put blank lines after variable declarations in function bodies.
- zt_common_ioctl()/zt_ctl_ioctl() at less than their current 14/24
screens would be nice.
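For the first two points, the usual pattern is sketched below (struct
zt_params, chan and argp are hypothetical placeholders here, not the
actual zaptel names):

    /* Sketch only: snapshot the data while holding the lock, drop the
     * lock, then do the *checked* copy to userspace. */
    struct zt_params params;
    unsigned long flags;

    spin_lock_irqsave(&chan->lock, flags);
    params = chan->params;                  /* copy under the lock */
    spin_unlock_irqrestore(&chan->lock, flags);

    if (copy_to_user(argp, &params, sizeof(params)))
            return -EFAULT;                 /* never ignore the result */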
[...]
> I am very interested in seeking comments both on our driver
> framework, and on how to go about submitting this for kernel inclusion if
> appropriate.
The kernel parts are appropriate imho (hint, hint: split these from the
userspace code in the zaptel package). Cosmetic/CodingStyle issues apart,
bugs are lurking in the code.
--
Ueimor
> A random question: is there any reason why Ogg isn't among the codecs?
It wasn't ready when I started. Ogg, like mp3, is generally a very poor
choice of codec for telephony, and even for the storage of files, unless
its performance has improved greatly.
On a 900 MHz Athlon, you can get *hundreds* of simultaneous full-duplex
GSM full-rate codecs running. Certainly that's an unrealistic expectation
even for half-duplex ogg or mp3. As for using ogg as an actual telephony
protocol, its frame size is (or at least was at the time I contacted the
author) much too long to be practical. Frame sizes for VoIP should be
around 160 to 240 samples in general.
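To put numbers on that, at the 8 kHz sampling rate used throughout
telephony:

    160 samples / 8000 Hz = 20 ms
    240 samples / 8000 Hz = 30 ms

so each frame already represents 20-30 ms of buffering delay before the
network adds anything.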
Mark
> In my quick tour I was looking for where the saw-off between kernel and
> user space is in the source tree, and I began to get the feeling the
> whole thing is kernel space, is this correct?
Nope, in general the saw-off is that signal processing and protocol
implementations that are beyond a few bits of logic are done in userspace.
The exceptions are echo cancellation (which has to be done very close to
the interface) and RBS protocols which are easy and important enough to be
done in the kernel. DTMF detection and the FSK modems for Caller*ID and
ADSI are done in userspace, as well as the PRI implementation (libpri).
> Anyway, this effort is exciting and ambitious. I *want* to use this, for
> very practical reasons, never mind that it could well turn into yet another
> vibrant embedded application area for Linux.
:) Just look at "zaptel" as being the kernel interface and "zapata" as
being the user-level interface. Biased as I am, of course, I think that
Asterisk is the most interesting application of the technology but not
everyone seems to agree ;-)
A direct link to the whitepaper is:
ftp://ftp.asterisk.org/pub/asterisk/misc/asterisk-whitepaper.pdf
> It strikes me that much of what you're doing qualifies as hard realtime
> programming, particularly where you are doing things like interleaving file
> transmission with realtime voice. I'm thinking that this may be a good
> chance to give the new Adeos OS-layering technology a test drive, with a view
> to achieving more reliable, lower latency signal processing and equipment
> control. If things work out, this could qualify as the first genuine
> consumer-oriented hard realtime application for Linux.
Actually, Linux right out of the box on modern PC hardware (400+ MHz) seems
more than fine for the task. We even have people running this on 200 MHz
Cyrix machines (single channels) and I've been able to drive a T1 on a
233 MHz PII, although I don't think I'd recommend this configuration
officially! I'd be happy to see what hard realtime might be able to do in
terms of pushing the hardware requirements even lower, but it seems pretty
exciting just as it is!
One more thing that might be interesting would be seeing how small we
could push the "chunk size". Right now, we use a chunk size of 8, but
a chunk size of 1 (that is, interrupting for EVERY sample) would put us
at a level indistinguishable from hard TDM.
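To spell out the arithmetic at our 8 kHz sample rate:

    chunk size 8  ->  8000 / 8 = 1000 interrupts/sec (1 ms deadline)
    chunk size 1  ->  8000 / 1 = 8000 interrupts/sec (125 us deadline)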
Mark
On Sunday 21 July 2002 21:01, Mark Spencer wrote:
> > A random question: is there any reason why Ogg isn't among the codecs?
>
> It wasn't ready when I started. Ogg, like mp3, is generally a very poor
> choice of codec for telephony, and even for the storage of files, unless
> its performance has improved greatly.
I don't know about performance (except for quality: it's said to require
about half the bitrate of mp3 for the same quality); however, it has one
killer advantage over mp3: it's patent-free, and hence royalty-free.
I'd think that would be important for your project.
> On a 900 MHz Athlon, you can get *hundreds* of simultaneous full-duplex
> GSM full-rate codecs running. Certainly that's an unrealistic expectation
> even for half-duplex ogg or mp3.
But that would be an argument against supporting mp3 as well.
> As for using ogg as an actual telephony
> protocol, its frame size is (or at least was at the time I contacted the
> author) much too long to be practical. Frame sizes for VoIP should be
> around 160 to 240 samples in general.
On a quick reading, this appears to indicate you can easily have what you
want:
http://www.xiph.org/ogg/vorbis/doc/framing.html
Perhaps the last time you looked at Ogg the streaming format had not yet been
completed?
Also, doesn't part of telephony consist of having lots of pre-recorded audio
around, for voice mail etc? Granted, an encoder optimized for music is not
necessarily optimized for voice. However, would that not be a matter of
tweaking the encoder? As I understand it, the vorbis compression format is
quite general, and in fact, all the recent work that improved the quality so
noticeably involved only the encoder.
OK, this isn't really a kernel issue, so... I'll clamp my hams and post it
anyway ;-)
--
Daniel
On Sunday 21 July 2002 20:59, Mark Spencer wrote:
> > In my quick tour I was looking for where the saw-off between kernel and
> > user space is in the source tree, and I began to get the feeling the
> > whole thing is kernel space, is this correct?
>
> Nope, in general the saw-off is that signal processing and protocol
> implementations that are beyond a few bits of logic are done in userspace.
See the other poster's comment about providing a clear separation of kernel
and userspace components in your source tree. It just makes it easier to
get oriented.
> The exceptions are echo cancellation (which has to be done very close to
> the interface) and RBS protocols which are easy and important enough to be
> done in the kernel. DTMF detection and the FSK modems for Caller*ID and
> ADSI are done in userspace, as well as the PRI implementation (libpri).
>
> [...]
>
> > It strikes me that much of what you're doing qualifies as hard realtime
> > programming, particularly where you are doing things like interleaving file
> > transmission with realtime voice. I'm thinking that this may be a good
> > chance to give the new Adeos OS-layering technology a test drive, with a view
> > to achieving more reliable, lower latency signal processing and equipment
> > control. If things work out, this could qualify as the first genuine
> > consumer-oriented hard realtime application for Linux.
>
> Actually, Linux right out of the box on modern PC hardware (400+ MHz) seems
> more than fine for the task. We even have people running this on 200 MHz
> Cyrix machines (single channels) and I've been able to drive a T1 on a
> 233 MHz PII, although I don't think I'd recommend this configuration
> officially! I'd be happy to see what hard realtime might be able to do in
> terms of pushing the hardware requirements even lower, but it seems pretty
> exciting just as it is!
Hard realtime isn't about efficiency, it's about meeting deadlines. While
you may be seeing very good average interrupt response latency in Linux, it's
well known that response times on the order of milliseconds are not uncommon,
and there is no way to prove that you won't get the occasional spike into the
tens or hundreds of milliseconds, even with low latency patches applied to
spinlocks and preemption enabled. Hard realtime is about being able to prove
that such spikes never happen. As a bonus, you can work with much smaller
packet sizes because you have confidence that you'll be able to service the
interrupts on time.
That said, Adeos does offer what appears to be a very efficient model for
handling interrupts. (Caveat: I haven't tried it myself yet, much less
measured it, just eyeballed the code.) You can load a module directly into
the interrupt pipeline and bypass all of Linux's interrupt machinery, even
bypass cli (it just sets a flag in Adeos).
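To give a flavor of it, here is a hypothetical sketch based on my reading
of the announcement; the identifiers below are stand-ins, so check the
actual Adeos headers for the real API:

    /* Hypothetical sketch, not the verbatim Adeos API: a domain
     * registers ahead of Linux in the interrupt pipeline, does its
     * hard-deadline work first, then passes the interrupt on down the
     * pipeline.  Linux's cli() meanwhile only stalls the Linux stage. */
    static adomain_t rt_domain;             /* our pipeline domain */

    static void rt_irq_handler(unsigned int irq)
    {
            process_tdm_chunk();            /* hard-deadline work first */
            adeos_propagate_irq(irq);       /* then let Linux see it */
    }

    static int __init rt_domain_init(void)
    {
            adattr_t attr;                  /* name, priority, entry, ... */

            adeos_init_attr(&attr);
            attr.name     = "zaptel-rt";
            attr.priority = ADEOS_ROOT_PRI + 1;     /* ahead of Linux */
            adeos_register_domain(&rt_domain, &attr);
            adeos_virtualize_irq(ZT_IRQ, rt_irq_handler, NULL, 0);
            return 0;
    }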
In any event, it's not clear to me how you are going to be able to do echo
cancellation reliably unless you are able to provide guaranteed response
latency.
> One more thing that might be interesting would be seeing how small we
> could push the "chunk size". Right now, we use a chunk size of 8, but
> a chunk size of 1 (that is, interrupting for EVERY sample) would put us
> at a level indistinguishable from hard TDM.
An interrupt rate of 20 kHz is no problem on modern hardware; in fact I
was able to do that with a 20 MHz 386. What is your sample rate?
--
Daniel
> I don't know about performance (except for quality: it's said to require
> about half the bitrate of mp3 for the same quality); however, it has one
> killer advantage over mp3: it's patent-free, and hence royalty-free.
> I'd think that would be important for your project.
Right. The mp3 decoder is also (as far as I know) unencumbered, and I
have no intention of putting an mp3 encoder in. I certainly don't mind
adding Ogg support, it's just not a really big priority.
> > On a 900 MHz Athlon, you can get *hundreds* of simultaneous full-duplex
> > GSM full-rate codecs running. Certainly that's an unrealistic expectation
> > even for half-duplex ogg or mp3.
>
> But that would be an argument against supporting mp3 as well.
Yes, in general mp3 doesn't make sense either, except perhaps for music on
hold. In fact I've considered dropping it entirely from Asterisk, but
since it's already there, I don't see an overwhelming reason to remove it.
> Perhaps the last time you looked at Ogg the streaming format had not yet been
> completed?
The streaming format is quite irrelevant here. It's the size of the window
that you are encoding that is important. Some review of the
vorbis-spec-intro reveals that it may be able to support smaller frame
sizes and sampling rates, so it might be worth giving them a call.
> Also, doesn't part of telephony consist of having lots of pre-recorded audio
> around, for voice mail etc? Granted, an encoder optimized for music is not
> necessarily optimized for voice. However, would that not be a matter of
> tweaking the encoder? As I understand it, the vorbis compression format is
> quite general, and in fact, all the recent work that improved the quality so
> noticeably involved only the encoder.
In general, Asterisk's prompts are stored in GSM since it is so
inexpensive to compute and generally about the same size as MP3 / Ogg when
corrected for bit rate (and obviously much smaller when not corrected for
bit rate).
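For scale, GSM full rate runs at a fixed 13 kbit/s, so:

    13000 bits/s / 8 = 1625 bytes/s  ~=  95 KB per minute of prompt audio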
> OK, this isn't really a kernel issue, so... I'll clamp my hams and post it
> anyway ;-)
If you'd like to sign up on the Asterisk mailing list, that is presumably
a much better arena for discussing this. If Ogg makes sense to add to
Asterisk I'd certainly be happy to do so.
Mark
> See the other poster's comment about providing a clear separation of kernel
> and userspace components in your source tree. It just makes it easier to
> get oriented.
*nods* Actually the kernel and user packages are in different projects
(zaptel is the kernel level interface, with only a couple of user tools
for its configuration, while zapata is the library interface). Perhaps
some people might want to contact me off-list to suggest cleaner ways of
organizing the code.
> Hard realtime isn't about efficiency, it's about meeting deadlines. While
> you may be seeing very good average interrupt response latency in Linux, it's
> well known that response times on the order of milliseconds are not uncommon,
> and there is no way to prove that you won't get the occasional spike into the
> tens or hundreds of milliseconds, even with low latency patches applied to
> spinlocks and preemption enabled. Hard realtime is about being able to prove
> that such spikes never happen. As a bonus, you can work with much smaller
> packet sizes because you have confidence that you'll be able to service the
> interrupts on time.
Hard realtime would presumably make sense here, but again the beauty of it
is that the current system works reasonably well without it. With 1 kHz
interrupts, I've been unable to generate misses (except with DMA turned
off on IDE) on most modern systems. In telephony systems, a miss every
now and then would not be noticed, but in data systems it could represent
a dropped frame.
> That said, Adeos does offer what appears to be a very efficient model for
> handling interrupts. (Caveat: I haven't tried it myself yet, much less
> measured it, just eyeballed the code.) You can load a module directly into
> the interrupt pipeline and bypass all of Linux's interrupt machinery, even
> bypass cli (it just sets a flag in Adeos).
If someone involved with, or familiar with, this project would care to
contact me, I'd be happy to talk about RT enabling zaptel / asterisk.
> In any event, it's not clear to me how you are going to be able to do echo
> cancellation reliably unless you are able to provide guaranteed response
> latency.
Just look at the code and you'll see.
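For anyone who doesn't want to dig through it, the textbook shape of an
adaptive canceller (not necessarily what zaptel implements; this is an
illustrative float sketch, not our fixed-point code) looks roughly like:

    /* Textbook NLMS echo canceller.  tx is the sample we sent toward
     * the line, rx is what came back; we subtract our running estimate
     * of the echo.  It consumes tx/rx pairs in strict lockstep, which
     * is why it has to live right next to the interface. */
    #include <string.h>

    #define TAPS 128                    /* ~16 ms echo tail at 8 kHz */
    static float w[TAPS];               /* adaptive filter weights   */
    static float x[TAPS];               /* recent transmit history   */

    short echo_cancel(short tx, short rx)
    {
            float est = 0.0f, power = 1e-6f, err;
            int i;

            memmove(&x[1], &x[0], (TAPS - 1) * sizeof(x[0]));
            x[0] = (float)tx;
            for (i = 0; i < TAPS; i++) {
                    est   += w[i] * x[i];       /* predicted echo   */
                    power += x[i] * x[i];       /* for step scaling */
            }
            err = (float)rx - est;              /* cancelled sample */
            for (i = 0; i < TAPS; i++)          /* NLMS weight update */
                    w[i] += 0.05f * err * x[i] / power;
            return (short)err;                  /* clipping omitted */
    }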
> An interrupt rate of 20 kHz is no problem on modern hardware; in fact I
> was able to do that with a 20 MHz 386. What is your sample rate?
8 kHz, of course, like all conventional telephony. Right now we run at
1 kHz for a number of reasons (less stringent timing requirements, better
CPU utilization, and the ability to use USB telephony interfaces easily,
since full-speed USB delivers one frame per millisecond), but there are
good reasons to push it up to 8 kHz in some situations.
Mark
Mark Spencer wrote:
> > That said, Adeos does offer what appears to be a very efficient model for
> > handling interrupts. (Caveat: I haven't tried it myself yet, much less
> > measured it, just eyeballed the code.) You can load a module directly into
> > the interrupt pipeline and bypass all of Linux's interrupt machinery, even
> > bypass cli (it just sets a flag in Adeos).
>
> If someone involved with, or familiar with, this project would care to
> contact me, I'd be happy to talk about RT enabling zaptel / asterisk.
Right here :)
Any driver that needs direct access to the interrupts can use Adeos' interrupt
pipeline. Your driver's low-level operations should be fairly easy to port to
Adeos.
Since Adeos' release, such ports have actually been done successfully a
couple of times already. Apart from Philippe's port of the Xenomai
interface, here's an interesting tidbit that appeared on the Adeos mailing
list:
> We managed to get a proprietary (sorry nobody's perfect) RT nanokernel
> working with ADEOS. Thus we have two different OSes working at the same
> time on the same computer thanks to ADEOS.
And yet another:
> I explored yesterday the Adeos nanokernel and was surprised by the
> possibilities and also the stability!
>
> I tested for example multiple domains having different priorities and
> installing handlers on the same interrupt, which works exactly as
> described.
>
> I also tested snooping other interrupts (mouse/keyboard/ide0/ethernet)
> beyond the timer example provided in the package (which of course also
> works great).
Feel free to post any questions to Adeos' mailing list. It's been quite
active lately and I'm sure your application will interest others.
Adeos' web site is at:
http://www.freesoftware.fsf.org/adeos/
If you're looking for an introduction to Adeos' functionality, have a look
at the original LKML announcement:
http://lwn.net/Articles/1743/
Best regards,
Karim
===================================================
Karim Yaghmour
[email protected]
Embedded and Real-Time Linux Expert
===================================================
On Sunday 21 July 2002 23:00, Mark Spencer wrote:
> > See the other poster's comment about providing a clear separation of
> > kernel and userspace components in your source tree. It just makes it
> > easier to get oriented.
>
> *nods* Actually the kernel and user packages are in different projects
> (zaptel is the kernel level interface, with only a couple of user tools
> for its configuration, while zapata is the library interface). Perhaps
> some people might want to contact me off-list to suggest cleaner ways of
> organizing the code.
I'll stay on-list just in case; it's better to have feedback.
I think you want to take everything that belongs in the kernel and put it
under a subdirectory, say, "kernel", and lay it out exactly as it would be if
your patch were accepted into Linus's tree. That is, the headers go in
kernel/include/linux, the modules in kernel/drivers/asterisk/whichever.
It nests your tree more deeply, but it's going to end up like that anyway
(you hope) so it might as well be that way now.
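Something like this, purely as an illustration (the names are invented):

    zaptel/
        kernel/
            include/linux/zaptel.h
            drivers/asterisk/
                zaptel.c
                (the individual board drivers)
        user/
            (libzapata, the configuration tools, etc.)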
--
Daniel