Hello,
Just as a reminder, Speakup is an in-kernel screen reader that uses
hardware speech synthesis to say what gets printed on the Linux console.
In the process of cleaning it up, we are moving its configuration stuff
into proper places. I believe there are two things:
- per-hardware speech synthesizer parameters (e.g. speed, pitch, etc.)
- screen reading parameters (e.g. character pronunciation, key_echo,
  the synthesizer currently in use, etc.)
Speech synthesizers should probably have their own device class; what
should it be called? "synth"? "speech"?
Synthesizers are usually plugged into serial ports, but there is no bus
abstraction for that, so I believe we can put them in the virtual bus.
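For instance, registration could look something like this (a rough,
untested sketch; the class name "speech" and device name "synth0" are
just placeholders until the naming question above is settled):

#include <linux/device.h>
#include <linux/err.h>
#include <linux/module.h>

static struct class *speech_class;
static struct device *synth_dev;

static int __init speech_class_init(void)
{
	/* creates /sys/class/speech */
	speech_class = class_create(THIS_MODULE, "speech");
	if (IS_ERR(speech_class))
		return PTR_ERR(speech_class);

	/*
	 * No parent device: since there is no serial-port bus
	 * abstraction, the synthesizer ends up under
	 * /sys/devices/virtual/speech/synth0.
	 */
	synth_dev = device_create(speech_class, NULL, MKDEV(0, 0),
				  NULL, "synth0");
	if (IS_ERR(synth_dev)) {
		class_destroy(speech_class);
		return PTR_ERR(synth_dev);
	}
	return 0;
}

static void __exit speech_class_exit(void)
{
	device_destroy(speech_class, MKDEV(0, 0));
	class_destroy(speech_class);
}

module_init(speech_class_init);
module_exit(speech_class_exit);
MODULE_LICENSE("GPL");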
Then there are the screen reading parameters. I'd tend to think that,
just as there are /sys/{block,firmware,fs,power}, there could be a
/sys/accessibility, or, even shorter, /sys/a11y? Speakup parameters
could then go in /sys/a11y/speakup?
Samuel
On Fri, Jun 26, 2009 at 12:04:52AM +0200, Samuel Thibault wrote:
> Hello,
>
> Just as a reminder, Speakup is an in-kernel screen reader that uses
> hardware speech synthesis to say what gets printed on the Linux console.
>
> In the process of cleaning it up, we are moving its configuration stuff
> into proper places. I believe there are two things:
>
> - per-hardware speech synthesizer parameters (e.g. speed, pitch, etc.)
> - screen reading parameters (e.g. character pronunciation, key_echo,
>   the synthesizer currently in use, etc.)
>
> Speech synthesizers should probably have their own device class; what
> should it be called? "synth"? "speech"?
Which do you think it should be?
> Synthesizers are usually plugged into serial ports, but there is no bus
> abstraction for that, so I believe we can put them in the virtual bus.
That would be fine.
> Then there are the screen reading parameters. I'd tend to think that,
> just as there are /sys/{block,firmware,fs,power}, there could be a
> /sys/accessibility, or, even shorter, /sys/a11y? Speakup parameters
> could then go in /sys/a11y/speakup?
Wouldn't these be on a "per-screen" basis? So they would live under the
screen reader device itself, not way up high in the device tree.
Actually, you are proposing them outside of the device tree, which I do
not think you want at all.
What specific files are you thinking you would need?
thanks,
greg k-h
Hello,
Greg KH, on Mon 29 Jun 2009 21:18:33 -0700, wrote:
> > I believe there are two things:
> >
> > - per-hardware speech synthesizer parameters (e.g. speed, pitch, etc.)
> > - screen reading parameters (e.g. character pronunciation, key_echo,
> >   the synthesizer currently in use, etc.)
> >
> > Speech synthesizers should probably have their own device class; what
> > should it be called? "synth"? "speech"?
>
> Which do you think it should be?
Speakup used to call them "synth", but anything other than speech could
be synthesized, so "speech" may be better.
> > Then there are the screen reading parameters. I'd tend to think that,
> > just as there are /sys/{block,firmware,fs,power}, there could be a
> > /sys/accessibility, or, even shorter, /sys/a11y? Speakup parameters
> > could then go in /sys/a11y/speakup?
>
> Wouldn't these be on a "per-screen" basis?
Mmm, what do you call a screen? I guess you mean
/sys/class/vtconsole/vtcon0? That would indeed make sense.
> So they would live under the screen reader device itself, not way up
> high in the device tree.
One problem is usability. That's something that users
will often want to tune, and /sys/a11y/speakup/key_echo is
definitely easier, for the very common case of a single head, than
/sys/class/vtconsole/vtcon0/reader/speakup/key_echo :)
> Actually, you are proposing them outside of the device tree, which I
> do not think you want at all.
It depends on what you call a "device". It's probably not obvious that
a screen reader is a device, but why not.
> What specific files are you thinking you would need?
There are actually a lot of them, around 20, tuning various aspects of
how what happens on the console is read.
Samuel
On Tue, Jun 30, 2009 at 03:08:22PM +0200, Samuel Thibault wrote:
> Hello,
>
> Greg KH, on Mon 29 Jun 2009 21:18:33 -0700, wrote:
> > > I believe there are two things:
> > >
> > > - per-hardware speech synthesizer parameters (e.g. speed, pitch, etc.)
> > > - screen reading parameters (e.g. character pronunciation, key_echo,
> > >   the synthesizer currently in use, etc.)
> > >
> > > Speech synthesizers should probably have their own device class; what
> > > should it be called? "synth"? "speech"?
> >
> > Which do you think it should be?
>
> Speakup used to call them "synth", but anything other than speech could
> be synthesized, so "speech" may be better.
Ok.
> > > Then there are the screen reading parameters. I'd tend to think that,
> > > just as there are /sys/{block,firmware,fs,power}, there could be a
> > > /sys/accessibility, or, even shorter, /sys/a11y? Speakup parameters
> > > could then go in /sys/a11y/speakup?
> >
> > Wouldn't these be on a "per-screen" basis?
>
> Mmm, what do you call a screen? I guess you mean
> /sys/class/vtconsole/vtcon0? That would indeed make sense.
Yes, that is what I was referring to.
> > So they would live under the screen reader device itself, not way up
> > high in the device tree.
>
> One problem is usability. That's something that users
> will often want to tune, and /sys/a11y/speakup/key_echo is
> definitely easier, for the very common case of a single head, than
> /sys/class/vtconsole/vtcon0/reader/speakup/key_echo :)
But as you can have multiple "screens" or readers, you really need to
set this on a per-device basis.
And just wrap all of that up in a simple userspace program if you think
users are going to want to tweak things on the devices. Don't worry
about where in /sys/ things live just for user "ease-of-use", as that's
not the point of sysfs. Otherwise we would just cram everything into the
root directory so people wouldn't have to type 'cd' :)
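To be concrete, a per-device key_echo would just be a device attribute,
something like this (completely untested sketch; struct speakup_reader
and its fields are made-up names here, use whatever your driver really
has):

#include <linux/device.h>
#include <linux/kernel.h>

struct speakup_reader {		/* made-up per-reader state */
	struct device *dev;
	int key_echo;
};

static ssize_t key_echo_show(struct device *dev,
			     struct device_attribute *attr, char *buf)
{
	struct speakup_reader *reader = dev_get_drvdata(dev);

	return sprintf(buf, "%d\n", reader->key_echo);
}

static ssize_t key_echo_store(struct device *dev,
			      struct device_attribute *attr,
			      const char *buf, size_t count)
{
	struct speakup_reader *reader = dev_get_drvdata(dev);
	unsigned long val;

	if (strict_strtoul(buf, 10, &val) || val > 1)
		return -EINVAL;
	reader->key_echo = val;
	return count;
}
static DEVICE_ATTR(key_echo, 0644, key_echo_show, key_echo_store);

/* and when registering the reader device: */
/*	error = device_create_file(reader->dev, &dev_attr_key_echo); */

Then each reader instance gets its own copy of the file, and a little
userspace tool can hide the sysfs paths from users entirely.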
> > Actually, you are proposing them outside of the device tree, which I
> > do not think you want at all.
>
> It depends on what you call a "device". It's probably not obvious that
> a screen reader is a device, but why not.
But it is within the kernel, so please treat it as one.
> > What specific files are you thinking you would need?
>
> There are actually a lot of them, around 20, tuning various aspects of
> how what happens on the console is read.
Ok, I suggest writing them all out, as you will need to add them to
Documentation/ABI/ when the patch goes in.
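Each file gets a short entry there; something like this (the path, date,
contact, and description here are made up just to show the format):

What:		/sys/class/vtconsole/vtcon<N>/speakup/key_echo
Date:		July 2009
Contact:	<the speakup mailing list>
Description:
		Whether keys pressed on the console are echoed back
		through the speech synthesizer.  0 disables key echo,
		1 enables it.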
thanks,
greg k-h
On Fri 2009-06-26 00:04:52, Samuel Thibault wrote:
> Hello,
>
> Just as a reminder, Speakup is an in-kernel screen reader that uses
> hardware speech synthesis to say what gets printed on the Linux console.
>
> In the process of cleaning it up, we are moving its configuration stuff
> into proper places. I believe there are two things:
>
> - per-hardware speech synthesizer parameters (e.g. speed, pitch, etc.)
> - screen reading parameters (e.g. character pronunciation, key_echo,
>   the synthesizer currently in use, etc.)
>
> Speech synthesizers should probably have their own device class; what
> should it be called? "synth"? "speech"?
speech. 'synth' sounds like something MIDI-related.
> Synthesizers are usually plugged into serial ports, but there is no bus
> abstraction for that, so I believe we can put them in the virtual bus.
>
> Then there are the screen reading parameters. I'd tend to think that,
> just as there are /sys/{block,firmware,fs,power}, there could be a
> /sys/accessibility, or, even shorter, /sys/a11y? Speakup parameters
> could then go in /sys/a11y/speakup?
Please keep a11y and similar madness far from the kernel. /sys/speech?
Or even better, /sys/class/speech? What is global about it?
BTW... from the 486 on, CPUs are fast enough for speech synthesis. Why
not do it in software, viewing hw synthesizers as 'flite coprocessors'?
What modifications would be needed to make a11y useful w/o
additional hw?
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
Hello,
Pavel Machek, on Tue 30 Jun 2009 08:34:54 +0200, wrote:
> > Then there are the screen reading parameters. I'd tend to think that,
> > just as there are /sys/{block,firmware,fs,power}, there could be a
> > /sys/accessibility, or, even shorter, /sys/a11y? Speakup parameters
> > could then go in /sys/a11y/speakup?
>
> Please keep a11y and similar madness far from the kernel.
What do you qualify as "madness", precisely? Could you explain why you
are using such an extreme word?
> /sys/speech? Or even better, /sys/class/speech? What is global about
> it?
As I said, there is a difference between speech synthesizers, which can
easily be considered devices, and screen readers (here, Speakup),
which could for instance use different synthesizers, so please don't mix
them up. Accessibility features, however, are not really devices, but as
was suggested, they could go into the vtcon directory.
> BTW... from the 486 on, CPUs are fast enough for speech synthesis. Why
> not do it in software, viewing hw synthesizers as 'flite coprocessors'?
At least because flite is very far from proprietary hardware
synthesizers in terms of quality.
> What modifications would be needed to make a11y useful w/o additional
> hw?
It depends on what you call "useful". The desktop can already use
software speech synthesis. When / can't be mounted, however, you're
hosed, unless you have shipped a full software speech synthesizer in the
initrd; and even in that case the initrd script could fail.
Samuel
On Thu 2009-07-02 00:19:04, Samuel Thibault wrote:
> Hello,
>
> Pavel Machek, on Tue 30 Jun 2009 08:34:54 +0200, wrote:
> > > Then there are the screen reading parameters. I'd tend to think that,
> > > just as there are /sys/{block,firmware,fs,power}, there could be a
> > > /sys/accessibility, or, even shorter, /sys/a11y? Speakup parameters
> > > could then go in /sys/a11y/speakup?
> >
> > Please keep a11y and similar madness far from the kernel.
>
> What do you qualify as "madness", precisely? Could you explain why you
> are using such an extreme word?
If the word is so long that you have to write the number of its letters
inside it... then you are using the wrong word.
Greg's suggestion seems ok.
> > /sys/speech? Or even better, /sys/class/speech? What is global about
> > it?
>
> As I said, there is a difference between speech synthesizers, which can
> easily be considered devices, and screen readers (here, Speakup),
> which could for instance use different synthesizers, so please don't mix
> them up. Accessibility features, however, are not really devices, but as
> was suggested, they could go into the vtcon directory.
Ack.
> > BTW... from the 486 on, CPUs are fast enough for speech synthesis. Why
> > not do it in software, viewing hw synthesizers as 'flite coprocessors'?
>
> At least because flite is very far from proprietary hardware
> synthesizers in terms of quality.
Well... but for reading boot messages, it might be adequate, right?
> > What modifications would be needed to make a11y useful w/o additional
> > hw?
>
> It depends on what you call "useful". The desktop can already use
> software speech synthesis. When / can't be mounted, however, you're
> hosed, unless you have shipped a full software speech synthesizer in the
> initrd; and even in that case the initrd script could fail.
I'd actually prefer a soft synthesizer in the initrd. You know... "normal"
consoles (such as vt) do fail sometimes, too.
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
Pavel Machek, on Wed 08 Jul 2009 11:35:16 +0200, wrote:
> On Thu 2009-07-02 00:19:04, Samuel Thibault wrote:
> > Pavel Machek, on Tue 30 Jun 2009 08:34:54 +0200, wrote:
> > > Please keep a11y and similar madness far from the kernel.
> >
> > What do you qualify as "madness", precisely? Could you explain why you
> > are using such an extreme word?
>
> If the word is so long that you have to write the number of its letters
> inside it... then you are using the wrong word.
Unfortunately that's the word. If the very notion of accessibility had
been realized by mankind earlier, maybe we'd have a shorter word for it.
> > > BTW... from the 486 on, CPUs are fast enough for speech synthesis. Why
> > > not do it in software, viewing hw synthesizers as 'flite coprocessors'?
> >
> > At least because flite is very far from proprietary hardware
> > synthesizers in terms of quality.
>
> Well... but for reading boot messages, it might be adequate, right?
I'd actually say it's particularly inadequate. Try feeding your dmesg
to a speech synthesizer and understanding it.
> You know... "normal" consoles (such as vt) do fail sometimes, too.
Yes, and in that case sighted and blind users are on an equal footing,
so there is no need for particular support for blind people.
Samuel
On Wed 2009-07-08 11:42:19, Samuel Thibault wrote:
> Pavel Machek, on Wed 08 Jul 2009 11:35:16 +0200, wrote:
> > On Thu 2009-07-02 00:19:04, Samuel Thibault wrote:
> > > Pavel Machek, on Tue 30 Jun 2009 08:34:54 +0200, wrote:
> > > > Please keep a11y and similar madness far from the kernel.
> > >
> > > What do you qualify as "madness", precisely? Could you explain why you
> > > are using such an extreme word?
> >
> > If the word is so long that you have to write the number of its letters
> > inside it... then you are using the wrong word.
>
> Unfortunately that's the word. If the very notion of accessibility had
> been realized by mankind earlier, maybe we'd have a shorter word for it.
"speech" would seem good enough substitute.
> > > > BTW... from the 486 on, CPUs are fast enough for speech synthesis. Why
> > > > not do it in software, viewing hw synthesizers as 'flite coprocessors'?
> > >
> > > At least because flite is very far from proprietary hardware
> > > synthesizers in terms of quality.
> >
> > Well... but for reading boot messages, it might be adequate, right?
>
> I'd actually say it's particularly inadequate. Try feeding your dmesg
> to a speech synthesizer and understanding it.
Do you really expect blind people to do kernel hacking?
> > You know... "normal" consoles (such as vt) do fail sometimes, too.
>
> Yes, and in that case sighted and blind users are on an equal footing,
> so there is no need for particular support for blind people.
You know, we do not translate kernel messages into other languages,
either. So maybe we should make sure that Linux machines can be used
without reading dmesg, and just do it from the initrd?
After all, most distributions _already_ put splashscreens on, so 99%
of people do not see kernel messages, either...
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
Pavel Machek, on Sun 12 Jul 2009 12:31:34 +0200, wrote:
> On Wed 2009-07-08 11:42:19, Samuel Thibault wrote:
> > Pavel Machek, on Wed 08 Jul 2009 11:35:16 +0200, wrote:
> > > On Thu 2009-07-02 00:19:04, Samuel Thibault wrote:
> > > > Pavel Machek, on Tue 30 Jun 2009 08:34:54 +0200, wrote:
> > > > > Please keep a11y and similar madness far from the kernel.
> > > >
> > > > What do you qualify as "madness", precisely? Could you explain why you
> > > > are using such an extreme word?
> > >
> > > If the word is so long that you have to write the number of its letters
> > > inside it... then you are using the wrong word.
> >
> > Unfortunately that's the word. If the very notion of accessibility had
> > been realized by mankind earlier, maybe we'd have a shorter word for it.
>
> "speech" would seem good enough substitute.
For the speech case. Then you could have braille, speech recognition,
etc.
> > > > > BTW... from the 486 on, CPUs are fast enough for speech synthesis. Why
> > > > > not do it in software, viewing hw synthesizers as 'flite coprocessors'?
> > > >
> > > > At least because flite is very far from proprietary hardware
> > > > synthesizers in terms of quality.
> > >
> > > Well... but for reading boot messages, it might be adequate, right?
> >
> > I'd actually say it's particularly inadequate. Try feeding your dmesg
> > to a speech synthesizer and understanding it.
>
> Do you really expect blind people to do kernel hacking?
They do. Why shouldn't they be able to?
> > > You know... "normal" consoles (such as vt) do fail sometimes, too.
> >
> > Yes, and in that case sighted and blind users are on an equal footing,
> > so there is no need for particular support for blind people.
>
> You know, we do not translate kernel messages into other languages,
> either. So maybe we should make sure that Linux machines can be used
> without reading dmesg, and just do it from the initrd?
People can learn English. Blind people can't learn to see.
> After all, most distributions _already_ put splashscreens on, so 99%
> of people do not see kernel messages, either...
Blindness is orthogonal to that 99%.
Samuel
> > > > If the word is so long that you have to write the number of its letters
> > > > inside it... then you are using the wrong word.
> > >
> > > Unfortunately that's the word. If the very notion of accessibility had
> > > been realized by mankind earlier, maybe we'd have a shorter word for it.
> >
> > "speech" would seem good enough substitute.
>
> For the speech case. Then you could have braille, speech recognition,
> etc.
Well, but maybe braille and speech recognition _don't_ belong together?
> > > I'd actually say it's particularly inadequate. Try feeding your dmesg
> > > to a speech synthesizer and understanding it.
> >
> > Do you really expect blind people to do kernel hacking?
>
> They do. Why shouldn't they be able to?
...
> > > > You know... "normal" consoles (such as vt) do fail sometimes, too.
> > >
> > > Yes, and in that case sighted and blind users are on an equal footing,
> > > so there is no need for particular support for blind people.
> >
> > You know, we do not translate kernel messages into other languages,
> > either. So maybe we should make sure that Linux machines can be used
> > without reading dmesg, and just do it from the initrd?
>
> People can learn English. Blind people can't learn to see.
I guess for such a case, a serial console to a machine with a running
system (with speech synthesis/braille/etc.) is the way to go. Anything
else just will not work early enough.
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
Pavel Machek, on Tue 14 Jul 2009 11:52:43 +0200, wrote:
> > > > > If the word is so long that you have to write the number of its letters
> > > > > inside it... then you are using the wrong word.
> > > >
> > > > Unfortunately that's the word. If the very notion of accessibility had
> > > > been realized by mankind earlier, maybe we'd have a shorter word for it.
> > >
> > > "speech" would seem good enough substitute.
> >
> > For the speech case. Then you could have braille, speech recognition,
> > etc.
>
> Well, but maybe braille and speech recognition _don't_ belong together?
They are often used together.
> > > > I'd actually say it's particularly inadequate. Try feeding your dmesg
> > > > to a speech synthesizer and understanding it.
> > >
> > > Do you really expect blind people to do kernel hacking?
> >
> > They do. Why shouldn't they be able to?
> ...
Yes, "..."
I'm amazed that you could think blind people shouldn't do kernel
hacking. Why shouldn't they? Actually, they could be even better at
it than sighted people, precisely because kernel work is mostly about
things you can't see.
> > > > > You know... "normal" consoles (such as vt) do fail sometimes, too.
> > > >
> > > > Yes, and in that case sighted and blind users are on an equal footing,
> > > > so there is no need for particular support for blind people.
> > >
> > > You know, we do not translate kernel messages into other languages,
> > > either. So maybe we should make sure that Linux machines can be used
> > > without reading dmesg, and just do it from the initrd?
> >
> > People can learn English. Blind people can't learn to see.
>
> I guess for such a case, a serial console to a machine with a running
> system (with speech synthesis/braille/etc.) is the way to go. Anything
> else just will not work early enough.
Yes, and so in such an extreme case there is no need for particular
support. But the "initrd is hosed" case is not so rare.
Samuel
> > > > > I'd actually say it's particularly inadequate. Try feeding your dmesg
> > > > > to a speech synthesizer and understanding it.
> > > >
> > > > Do you really expect blind people to do kernel hacking?
> > >
> > > They do. Why shouldn't they be able to?
> > ...
>
> Yes, "..."
>
> I'm amazed that you could think blind people shouldn't do kernel
> hacking. Why shouldn't they? Actually, they could be even better at
> it than sighted people, precisely because kernel work is mostly about
> things you can't see.
They are welcome to do kernel development. And yes, we probably should
support braille devices (that's just another piece of hw).
And we could support hw speech synthesizers, too, if those are simple
drivers.
But I do believe that sw speech synthesis is more important than
obscure hw synthesizers; it is available on way more machines, and
there's no reason why hw synthesizers should be superior.
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html