2009-11-27 15:57:56

by [email protected]

[permalink] [raw]
Subject: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Fri, Nov 27, 2009 at 2:45 AM, Christoph Bartelmus
<[email protected]> wrote:
> Hi Mauro,
>
> on 26 Nov 09 at 14:25, Mauro Carvalho Chehab wrote:
>> Christoph Bartelmus wrote:
> [...]
>>> But I'm still a bit hesitant about the in-kernel decoding. Maybe it's just
>>> because I'm not familiar at all with input layer toolset.
> [...]
>> I hope it helps for you to better understand how this works.
>
> So the plan is to have two ways of using IR in the future which are
> incompatible to each other, the feature-set of one being a subset of the
> other?

Take advantage of the fact that we don't have a twenty year old legacy
API already in the kernel. Design an IR API that uses current kernel
systems. Christoph, ignore the code I wrote and make a design proposal
that addresses these goals...

1) Unified input in Linux using evdev. IR is on equal footing with
mouse and keyboard.
2) plug and play for basic systems - you only need an external app for scripting
3) No special tools - use mkdir, echo, cat, shell scripts to build maps
4) Use of modern Linux features like sysfs, configfs and udev.
5) Direct multi-app support - no daemon
6) Hide timing data from user as much as possible.
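Goal 3 might look something like this in practice. This is only a sketch: the directory layout (one file per scancode under a per-protocol directory) is invented here for illustration, since no kernel interface with these exact paths exists yet; `/tmp` stands in for a configfs-style mount point.

```shell
# Hypothetical keymap built with nothing but mkdir, echo and cat.
MAP=/tmp/ir-keymap-demo/rc5    # stand-in for a configfs-style directory
mkdir -p "$MAP"

# Map two RC-5 scancodes to input-layer key names.
echo KEY_POWER > "$MAP/0x0c"
echo KEY_MUTE  > "$MAP/0x0d"

# Inspecting the map needs nothing beyond cat.
cat "$MAP/0x0c"    # KEY_POWER
```

The point being that the whole map stays scriptable and diffable with stock shell tools.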

What are other goals for this subsystem?

Maybe we decide to take the existing LIRC system as is and not
integrate it into the input subsystem. But I think there is a window
here to update the LIRC design to use the latest kernel features. We
don't want to build another /dev/mouse and have to rip it out in five
years.

>
> When designing the key mapping in the kernel you should be aware that
> there are remotes out there that send a sequence of scan codes for some
> buttons, e.g.
> http://lirc.sourceforge.net/remotes/pioneer/CU-VSX159

This is good input.


--
Jon Smirl
[email protected]


2009-11-27 17:00:18

by Andy Walls

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Fri, 2009-11-27 at 10:57 -0500, Jon Smirl wrote:
> On Fri, Nov 27, 2009 at 2:45 AM, Christoph Bartelmus
> <[email protected]> wrote:

> > So the plan is to have two ways of using IR in the future which are
> > incompatible to each other, the feature-set of one being a subset of the
> > other?
>
> Take advantage of the fact that we don't have a twenty year old legacy
> API already in the kernel. Design an IR API that uses current kernel
> systems. Christoph, ignore the code I wrote and make a design proposal
> that addresses these goals...

Jon,

It's good to have clear, sensible goals. I'd also like to have
concurrence on what are driving requirements vs. nice-to-have's and also
on priorities.


> 1) Unified input in Linux using evdev. IR is on equal footing with
> mouse and keyboard.

Sounds fine. I think some of the discussion so far indicates the devil
may be in the details. I understand the driving requirement is to avoid
user(?) problems experienced in the past with PS/2 keyboards, etc.


> 2) plug and play for basic systems - you only need an external app for scripting

I concur. Users needing hardware to "Just Work" is *the* driving
requirement for in-kernel IR from the discussion. I would only say that
you may not need any application at all for the default configuration on
basic systems.


> 3) No special tools - use mkdir, echo, cat, shell scripts to build maps

Sounds fine. I was also a user who used setkeys, loadkeys, et al. once,
years ago, and can't remember for the life of me why I had to do so or
how they work anymore.


> 4) Use of modern Linux features like sysfs, configfs and udev.

I'm not sure this is strictly driven by anything; it's an implementation
decision stated in advance. One uses features, if one needs them.


> 5) Direct multi-app support - no daemon

I understand the rationale for this to really be a desire for minimal
userspace components. If you think/know that the input system can
multiplex or multicast events in a sane way for applications, then I
suppose it's feasible.

I don't hear users asking for minimal userspace components, as their
distribution's packaging system usually handles this for them. I suspect
this is mostly driven by kernel developers or embedded systems
developers.


> 6) Hide timing data from user as much as possible.

I do not strictly agree with this. I understand this goal is to
insulate users from the low level details of IR protocols. I think that
hinders users' ability to solve or diagnose problems on their own. Some
people like details and flexible control. I think users being able to
report about timing data of unknown remotes or protocols has value as
well.


> What are other goals for this subsystem?

7. Support IR transmit by applications.

8. Support IR transmit without using special applications - just cat,
mkdir, shell, etc.
(Following your lead here.)

9. For flexible IR transmit hardware, the one IR transmitter should be
capable of sending codes to STBs with different protocols or keymaps
(not at the same time of course).



Regards,
Andy

2009-11-27 17:31:19

by lirc

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Hi Jon,

on 27 Nov 09 at 10:57, Jon Smirl wrote:
[...]
>>>> But I'm still a bit hesitant about the in-kernel decoding. Maybe it's
>>>> just because I'm not familiar at all with input layer toolset.
>> [...]
>>> I hope it helps for you to better understand how this works.
>>
>> So the plan is to have two ways of using IR in the future which are
>> incompatible to each other, the feature-set of one being a subset of the
>> other?

> Take advantage of the fact that we don't have a twenty year old legacy
> API already in the kernel. Design an IR API that uses current kernel
> systems. Christoph, ignore the code I wrote and make a design proposal
> that addresses these goals...
>
> 1) Unified input in Linux using evdev. IR is on equal footing with
> mouse and keyboard.

Full support given with LIRC by using uinput.

> 2) plug and play for basic systems - you only need an external app for
> scripting

LIRC is lacking in plug-and-play support. But it wouldn't be very
difficult to add something that works for all basic systems.
As I'm favouring a solution outside of the kernel, of course I can't offer
you a solution which works without userspace tools.

> 3) No special tools - use mkdir, echo, cat, shell scripts to build
> maps

A user-friendly GUI tool to configure the mapping of the remote buttons is
essential for a good user experience. I hope no one here expects users to
learn the command line or bash to configure their remotes.

> 4) Use of modern Linux features like sysfs, configfs and udev.

LIRC uses sysfs where appropriate. I have no problem using modern
interfaces where it makes sense. But I won't change working and well-
tested interfaces just because it's possible to implement the same thing a
different way. The interface is efficient and small. I don't see how it
could gain much from the mentioned features.
Tell me what exactly you don't like about the LIRC interface and we can
work on it.

> 5) Direct multi-app support - no daemon

lircd is multi-app. I want it to be in userspace, so I need a daemon.

> 6) Hide timing data from user as much as possible.

Nobody is manually writing lircd.conf files. Of course you don't want the
user to know anything about the technical details unless you really want
to get your hands dirty.

> What are other goals for this subsystem?
>
> Maybe we decide to take the existing LIRC system as is and not
> integrate it into the input subsystem. But I think there is a window
> here to update the LIRC design to use the latest kernel features.

If it ain't broke, don't fix it.

I'm also not against using the input layer where it makes sense.

For devices that do the decoding in hardware, the only thing that I don't
like about the current kernel implementation is the fact that there are
mapping tables in the kernel source. I'm not aware of any tools that let
you change them without writing some keymaps manually.

I'm also not against in-kernel decoding in general. We already agreed last
year that we can include an interface in lirc_dev that feeds the signal
data to an in-kernel decoder if no one from userspace reads it. That's
close to a one-line change in lirc_dev. You won't have to change a single
device driver for this. I think there also was common understanding that
there will be cases where in-kernel decoding will not be possible for
esoteric protocols and that there needs to be an interface to deliver the
raw data to userspace.

My point just is that it took LIRC a very long time until the most common
protocols were fully supported, with all the toggle bits, toggle
masks, repeat codes, sequences, headers, differing gap values, etc. Or
take a look at crappy hardware like Igor Cesko's USB IR Receiver. This
device cripples any incoming signal except RC-5 because it has a limited
buffer size. LIRC happily accepts the data because it does not make any
assumptions about the protocol or bit length. With the approach that you
suggested for the in-kernel decoder, this device simply will not work for
anything but RC-5. The devil is in all the details. If we decide to do the
decoding in-kernel, how long do you think this solution will need to
become really stable and mainline? Currently I don't even see any
consensus on the interface yet. But maybe you will prove me wrong and it's
just that easy to get it all working.
I also understand that people want to avoid dependency on external
userspace tools. All I can tell you is that the lirc tools already
support everything you need for IR control. And as LIRC includes a lot of
drivers that are implemented in userspace already, it will just continue
to do its work even when there is an alternative in-kernel.
If LIRC is being rejected I don't have a real problem with this either,
but we finally need a decision because for me this is definitely the last
attempt to get this into the kernel.

Christoph

2009-11-27 17:49:25

by [email protected]

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Fri, Nov 27, 2009 at 12:29 PM, Christoph Bartelmus <[email protected]> wrote:
>> Maybe we decide to take the existing LIRC system as is and not
>> integrate it into the input subsystem. But I think there is a window
>> here to update the LIRC design to use the latest kernel features.
>
> If it ain't broke, don't fix it.
>
> I'm also not against using the input layer where it makes sense.
>
> For devices that do the decoding in hardware, the only thing that I don't
> like about the current kernel implementation is the fact that there are
> mapping tables in the kernel source. I'm not aware of any tools that let
> you change them without writing some keymaps manually.
>
> I'm also not against in-kernel decoding in general. We already agreed last
> year that we can include an interface in lirc_dev that feeds the signal
> data to an in-kernel decoder if no one from userspace reads it. That's
> close to a one-line change in lirc_dev. You won't have to change a single
> device driver for this. I think there also was common understanding that
> there will be cases where in-kernel decoding will not be possible for
> esoteric protocols and that there needs to be an interface to deliver the
> raw data to userspace.
>
> My point just is that it took LIRC a very long time until the most common
> protocols were fully supported, with all the toggle bits, toggle
> masks, repeat codes, sequences, headers, differing gap values, etc. Or
> take a look at crappy hardware like Igor Cesko's USB IR Receiver. This
> device cripples any incoming signal except RC-5 because it has a limited
> buffer size. LIRC happily accepts the data because it does not make any
> assumptions about the protocol or bit length. With the approach that you
> suggested for the in-kernel decoder, this device simply will not work for
> anything but RC-5. The devil is in all the details. If we decide to do the
> decoding in-kernel, how long do you think this solution will need to
> become really stable and mainline? Currently I don't even see any
> consensus on the interface yet. But maybe you will prove me wrong and it's
> just that easy to get it all working.
> I also understand that people want to avoid dependency on external
> userspace tools. All I can tell you is that the lirc tools already
> support everything you need for IR control. And as LIRC includes a lot of
> drivers that are implemented in userspace already, it will just continue
> to do its work even when there is an alternative in-kernel.
> If LIRC is being rejected I don't have a real problem with this either,
> but we finally need a decision because for me this is definitely the last
> attempt to get this into the kernel.

Christoph, take what you know from all of the years of working on LIRC
and design the perfect in-kernel system. This is the big chance to
redesign IR support and get rid of any past mistakes. Incorporate any
useful chunks of code and knowledge from the existing LIRC into the
new design. Drop legacy APIs, get rid of daemons, etc. You can do this
redesign in parallel with existing LIRC. Everyone can continue using
the existing code while the new scheme is being built. Think of it as
LIRC 2.0. You can lead this design effort; you're the most experienced
developer in the IR area. Take advantage of this window to make a
design that is fully integrated with Linux - put IR on equal footing
with the keyboard and mouse as it should be.

--
Jon Smirl
[email protected]

2009-11-27 19:03:13

by Ferenc Wagner

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Jon Smirl <[email protected]> writes:

> On Fri, Nov 27, 2009 at 12:29 PM, Christoph Bartelmus <[email protected]> wrote:
>
>>> Maybe we decide to take the existing LIRC system as is and not
>>> integrate it into the input subsystem. But I think there is a window
>>> here to update the LIRC design to use the latest kernel features.
>>
>> If it ain't broke, don't fix it. [...]
>>
>> We already agreed last year that we can include an interface in
>> lirc_dev that feeds the signal data to an in-kernel decoder if no one
>> from userspace reads it. [...]
>>
>> I also understand that people want to avoid dependency on external
>> userspace tools. All I can tell you is that the lirc tools already
>> support everything you need for IR control. And as LIRC includes a lot of
>> drivers that are implemented in userspace already, it will just continue
>> to do its work even when there is an alternative in-kernel.
>
> Christoph, take what you know from all of the years of working on LIRC
> and design the perfect in-kernel system.

Hi,

I'm reading this thread with great interest. Thank you (plural) for the
very informative conversation, I think I learnt a lot. But now I
somehow lost the point, please correct me if the following is wrong.

It looks like having lirc_dev (or a similar raw interface) is a must.
It could be disguised as an input device, or changed in various ways,
but is it worth the effort? As I understand Christoph, he does not want
to do so, because he finds it wasted work, and also there's already a
*single* user space daemon using it and doing everything users could
want. Except for plug&play.

On the other hand, a one-liner could make in-kernel decoding possible,
so those who haven't got lircd running could have plug&play easily, if
somebody writes the necessary in-kernel decoders to feed the input
subsystem (which lircd also does, through uinput).

But even if you can't find anybody at the moment to write those, this is
still good stuff (I don't know about the code), which is hurt by being
developed out of kernel. Is there any reason to keep this so?
Admittedly, I don't know why /dev/mouse is evil, maybe I'd understand if
somebody pointed me to some reading.
--
Thanks,
Feri.

2009-11-27 19:21:11

by [email protected]

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Fri, Nov 27, 2009 at 2:03 PM, Ferenc Wagner <[email protected]> wrote:
> Jon Smirl <[email protected]> writes:
>
>> On Fri, Nov 27, 2009 at 12:29 PM, Christoph Bartelmus <[email protected]> wrote:
>>
>>>> Maybe we decide to take the existing LIRC system as is and not
>>>> integrate it into the input subsystem. But I think there is a window
>>>> here to update the LIRC design to use the latest kernel features.
>>>
>>> If it ain't broke, don't fix it. [...]
>>>
>>> We already agreed last year that we can include an interface in
>>> lirc_dev that feeds the signal data to an in-kernel decoder if no one
>>> from userspace reads it. [...]
>>>
>>> I also understand that people want to avoid dependency on external
>>> userspace tools. All I can tell you is that the lirc tools already
>>> support everything you need for IR control. And as LIRC includes a lot of
>>> drivers that are implemented in userspace already, it will just continue
>>> to do its work even when there is an alternative in-kernel.
>>
>> Christoph, take what you know from all of the years of working on LIRC
>> and design the perfect in-kernel system.
>
> Hi,
>
> I'm reading this thread with great interest. Thank you (plural) for the
> very informative conversation, I think I learnt a lot. But now I
> somehow lost the point, please correct me if the following is wrong.
>
> It looks like having lirc_dev (or a similar raw interface) is a must.
> It could be disguised as an input device, or changed in various ways,
> but is it worth the effort? As I understand Christoph, he does not want
> to do so, because he finds it wasted work, and also there's already a
> *single* user space daemon using it and doing everything users could
> want. Except for plug&play.

The high level summary:

LIRC has developed its own way of doing things. It has its own device
protocol, user space daemon, tools, etc. No one is denying that all of
that works.

The alternative is to rework IR to use standard kernel interfaces
(evdev, sysfs, configfs), standard user space tools (udev, ls, mkdir,
cat) and make the daemon optional.

Since IR hasn't been added to the kernel yet we are still free to
design the user space API before locking it in stone for the next
twenty years.


This is an architectural debate, not a debate on specific features.


>
> On the other hand, a one-liner could make in-kernel decoding possible,
> so those who haven't got lircd running could have plug&play easily, if
> somebody writes the necessary in-kernel decoders to feed the input
> subsystem (which lircd also does, through uinput).
>
> But even if you can't find anybody at the moment to write those, this is
> still good stuff (I don't know about the code), which is hurt by being
> developed out of kernel. Is there any reason to keep this so?
> Admittedly, I don't know why /dev/mouse is evil, maybe I'd understand if

/dev/mouse is evil because it is possible to read partial mouse
messages. evdev fixes things so that you only get complete messages.

> somebody pointed me to some reading.
> --
> Thanks,
> Feri.
>



--
Jon Smirl
[email protected]

2009-11-27 21:50:19

by Stefan Richter

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Jon Smirl wrote:
> 3) No special tools - use mkdir, echo, cat, shell scripts to build maps

From the POV of a distributor, there is always a special tool required.
Whether it is implemented in bash, Python, or C doesn't make a
difference to him.

For an end user whose distributor doesn't package that tool, it also
doesn't matter whether it is bash or Python. (C is awkward because it
needs to be run through gcc first.) A Python tool can operate the
existing EVIOCSKEYCODE interface just as well as a C tool.

Your mkdir/echo/cat programs would still be just that: programs. Sure,
these programs would be interpreted by an interpreter which is installed
everywhere, and the data they operate on is in a clear text format. The
downside is that these programs do not exist yet.

> 5) Direct multi-app support - no daemon

Think of lircd (when it feeds into uinput) as a userspace driver
rather than a daemon. The huge benefit of a userspace driver is that it
can load configuration files.

Multi-app support is provided by evdev of course.

> What are other goals for this subsystem?

- Minimal development cost; reduced maintenance cost relative
to the status quo.

- No regressions would be best.
--
Stefan Richter
-=====-==--= =-== ==-==
http://arcgraph.de/sr/

2009-11-28 01:08:26

by Maxim Levitsky

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Fri, 2009-11-27 at 22:49 +0100, Stefan Richter wrote:
> Jon Smirl wrote:
> > 3) No special tools - use mkdir, echo, cat, shell scripts to build maps
>
> From the POV of a distributor, there is always a special tool required.
> Whether it is implemented in bash, Python, or C doesn't make a
> difference to him.
>
> For an end user whose distributor doesn't package that tool, it also
> doesn't matter whether it is bash or Python. (C is awkward because it
> needs to be run through gcc first.) A Python tool can operate the
> existing EVIOCSKEYCODE interface just as well as a C tool.
>
> Your mkdir/echo/cat programs would still be just that: programs. Sure,
> these programs would be interpreted by an interpreter which is installed
> everywhere, and the data they operate on is in a clear text format. The
> downside is that these programs do not exist yet.
>
> > 5) Direct multi-app support - no daemon
>
> Think of lircd (when it feeds into uinput) as a userspace driver
> rather than a daemon. The huge benefit of a userspace driver is that it
> can load configuration files.
And bear in mind the fact that only a handful of lirc drivers are in the
kernel.
Many drivers are pure userspace and live in the lirc daemon itself.
These drivers will either have to be reimplemented in the kernel (a huge
job), or we will end up with a lot of duplication, because the same remote
can be used with kernel or userspace drivers.

Look at the daemons subdirectory of lircd to get an idea of how many such
drivers exist.


Btw, for _some_ userspace drivers it's not possible to reimplement them
in the kernel, like the driver that reads input from a sound card, which I
can say is a very nice, cheap way to have a receiver.


I want to repeat the correct way of doing things:

1 - all drivers that do all processing in hardware will use the input
system.

2 - all drivers that decode a protocol will use lirc, to keep the
configuration in one place.

3 - all drivers that send pulse/space will use lirc.

lirc will process the data, convert it to input events and feed them
back to the kernel.

Please note, and note again:
We aren't talking about two interfaces for userspace!
Everybody agrees that userspace programs will only receive input events.
The point is that we grant an exception for one program, which yes, is just
a userspace driver, to receive the raw data, process it, and feed it
back.

Also, the same program (lircd) could receive data from other sources and
convert that to input events.

What's wrong with that?

If we add in-kernel decoding, we will still end up with two different
decoders, one in the kernel and one in lirc.

Could we finally end this discussion and move forward?

Best regards,
Maxim Levitsky

2009-11-28 01:35:07

by Dmitry Torokhov

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Fri, Nov 27, 2009 at 02:21:13PM -0500, Jon Smirl wrote:
> On Fri, Nov 27, 2009 at 2:03 PM, Ferenc Wagner <[email protected]> wrote:
> > Admittedly, I don't know why /dev/mouse is evil, maybe I'd understand if
>
> /dev/mouse is evil because it is possible to read partial mouse
> messages. evdev fixes things so that you only get complete messages.
>

For me the main evil of /dev/mouse (and other multiplexing interfaces)
is that it is impossible to remove one of the streams from the
multiplexed stream without affecting other users. And so are born
various "grab" schemes where we declare one application _the
application_ and let it "grab" the device, which breaks when there are
other applications also interested in the same data stream.

--
Dmitry

2009-11-28 09:33:50

by Simon Kenyon

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Christoph Bartelmus wrote:
> A user-friendly GUI tool to configure the mapping of the remote buttons
> is essential for a good user experience. I hope no one here expects
> users to learn the command line or bash to configure their remotes.

oh please no
the major, major problem with bluetooth is that there is *only* a gui
the core system should use the command line and then a gui (or guis) can
follow

> Nobody is manually writing lircd.conf files. Of course you don't want
> the user to know anything about the technical details unless you really
> want to get your hands dirty.

speak for yourself

> If it ain't broke, don't fix it.

i have been hacking lirc for *so many years* because i needed two
separate serial inputs. so that is most assuredly broken

2009-11-28 11:20:21

by Krzysztof Halasa

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Maxim Levitsky <[email protected]> writes:

> If we add in-kernel decoding, we will still end up with two different
> decoders, one in the kernel and one in lirc.

And that's good. Especially for a popular and simple protocol such as
RC5.
Actually, it's not about adding the decoder. It's about fixing it.
I can fix it.
--
Krzysztof Halasa

2009-11-28 11:21:39

by Mauro Carvalho Chehab

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Hi Christoph,

Christoph Bartelmus wrote:

>> Maybe we decide to take the existing LIRC system as is and not
>> integrate it into the input subsystem. But I think there is a window
>> here to update the LIRC design to use the latest kernel features.
>
> If it ain't broke, don't fix it.

I don't know what's up with lirc development, but in the case of the media
drivers, the situation we have currently is different from what we had five
years ago. In the past, all drivers were developed by someone without any
official help from the hardware vendor. In several cases, they were
developed _despite_ vendors' efforts to hide technical info. Basically,
vendors weren't interested in officially supporting Linux.

Now the situation has changed. We have several vendors providing patches
and drivers to the community, and we have new vendors joining the efforts
almost every month. They are not only providing basic streaming capabilities
but also sending us patches for the shipped IRs.

If the developer community has changed, it is natural that the development
model needs improvements to better handle the new situation, as the
newcomers are bringing new demands and asking for API improvements.

One of the effects is that we're working very hard to improve the core
of the subsystem, in order to provide more flexibility in the subsystem and
to make it easier to receive patch contributions.

So, even though it isn't broken, the subsystem's internal media APIs have
changed a lot during the last years, and there are still several new
changes on our TODO list.

So, I'd say that if we can do it better, then let's do it.

> I'm also not against using the input layer where it makes sense.
>
> For devices that do the decoding in hardware, the only thing that I don't
> like about the current kernel implementation is the fact that there are
> mapping tables in the kernel source. I'm not aware of any tools that let
> you change them without writing some keymaps manually.

When the keymap tool I pointed to is built, the Makefile automatically
parses all kernel source files with IR keymaps and produces a directory
with all those keymaps (currently, it produces 89 keymap tables).

After we have some sort of tool that automatically loads the keymaps when
a new device is added by udev, we can deprecate the in-kernel keymaps and
use those files as the basis for such a tool.

Still, I prefer to first migrate all drivers to use the full scancode and
re-generate the keymaps before taking such a step.

> I'm also not against in-kernel decoding in general. We already agreed last
> year that we can include an interface in lirc_dev that feeds the signal
> data to an in-kernel decoder if no one from userspace reads it. That's
> close to a one-line change in lirc_dev. You won't have to change a single
> device driver for this. I think there also was common understanding that
> there will be cases where in-kernel decoding will not be possible for
> esoteric protocols and that there needs to be an interface to deliver the
> raw data to userspace.
>
> My point just is that it took LIRC a very long time until the most common
> protocols were fully supported, with all the toggle bits, toggle
> masks, repeat codes, sequences, headers, differing gap values, etc.

It also took a very long time to add support to the existing in-kernel
drivers to allow them to handle the shipped IRs and the hardware IR
decoding capabilities, in order to provide consistent interfaces that work
out-of-the-box.

Any API decision we take should be applied to all IR drivers: the
current in-kernel drivers and the new lirc drivers should both be
compliant with the APIs. So, we are all in the same boat.

As you probably know, on almost all multimedia drivers, the same driver
needs to support more than one way to receive IR. There are even some cases
where the same driver has different methods to talk to the same IR, due to
different design decisions that were taken by the same manufacturer on
different boards or board revisions.

For example, there are several cases where the same IR is shipped with
cards that only provide raw pulse/space interfaces and with devices that
have hardware decoding. Also, the same IR is sometimes used by different
vendors. It happens that the same driver needs different ways to talk to
the same IR. So, while a raw interface can be provided for those devices
that have raw IR reception to work with lirc, it doesn't make sense to
remove the existing event interface for the devices that require the
already existing in-kernel decoders. We shouldn't impose a penalty on users
just because a vendor decided to save a few cents by not adding a hardware
decoder.

Yet, I understand that having a raw interface for those devices that don't
have hardware IR decoding capabilities is interesting for lirc, as it can
use different algorithms to support unusual devices.

> Or take a look at crappy hardware like Igor Cesko's USB IR Receiver. This
> device cripples any incoming signal except RC-5 because it has a limited
> buffer size. LIRC happily accepts the data because it does not make any
> assumptions about the protocol or bit length.

For sure there will be cases where only with lirc you'll be able to get
an event. I'm not saying that we need to move all the lirc drivers into
the kernel. However, in the cases where adding a new kernel driver is
the better approach, the kernel driver should directly offer an evdev
interface, to allow wider usage of the IR. An IR should behave, by default,
just like a keyboard or a mouse: once the hardware is plugged in, it should
produce keystrokes.

Also, the solution of a kernel driver that sends a raw event to userspace,
processes it there, and returns it back to the kernel shouldn't be the
default, since it will add more delay than doing whatever is needed in the
kernel and directly emitting the keystroke.

> With the approach that you
> suggested for the in-kernel decoder, this device simply will not work for
> anything but RC-5. The devil is in all the details.

I haven't seen such limitations in his proposal. We currently have
in-kernel decoders for the NEC, pulse-distance and RC-5 protocols, and some
variants. If decoders for other protocols are missing, we certainly need to
add them.

> If we decide to do the
> decoding in-kernel, how long do you think this solution will need to
> become really stable and mainline? Currently I don't even see any
> consensus on the interface yet. But maybe you will prove me wrong and it's
> just that easy to get it all working.

The timeframe for going mainline will basically depend on reaching a
decision about the API and on people having time to work on it.

Provided that we agree on what to do, I don't see why we couldn't
add it to staging for 2.6.33 and target having
everything done for 2.6.34 or 2.6.35.

> I also understand that people want to avoid dependency on external
> userspace tools. All I can tell you is that the lirc tools already do
> support everything you need for IR control. And as it includes a lot of
> drivers that are implemented in userspace already, LIRC will just continue
> to do it's work even when there is an alternative in-kernel.

The point is that for simple usage, like a user plugging in a USB stick he
just bought, the shipped IR should work without him needing to configure
anything or manually start any daemon. This currently works with the
existing drivers and it is a feature that needs to be kept.

Of course lirc has a number of features that greatly improve the user
experience. Also, as you pointed out, there are cases where the user wants
to do something that requires setup, like using a non-standard remote, or
wants to use some home-made hardware to receive IR.

> If LIRC is being rejected I don't have a real problem with this either,
> but we finally need a decision because for me this is definitely the last
> attempt to get this into the kernel.

Nobody is rejecting it, but we need to carefully discuss the APIs that
will be available for IR.

In the kernel development model, APIs need to stay stable for a very long
time. So it is better to spend some time discussing them now than to suffer
for a long time trying to undo a bad decision.

Just as an example, the V4L1 API went into the kernel in 1999, and the
first V4L2 API drafts date back to 2002. The new API corrects several
serious design problems of the original V4L1 API. Still today, we're
converting drivers to the new API and wasting time fixing bugs and porting
applications to the new model.

Maybe if people had understood the needs better and had finished the V4L2
API before adding the first drivers, we would have saved a huge amount of
effort spent trying to fix things.

The same story played out with /dev/mouse, with OSS/ALSA, and others.
Now that we have a good understanding of the IR input needs, let's
focus on producing an API that will last for a long time, please.

Cheers,
Mauro.

2009-11-28 14:42:50

by Maxim Levitsky

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sat, 2009-11-28 at 12:20 +0100, Krzysztof Halasa wrote:
> Maxim Levitsky <[email protected]> writes:
>
> > If we add in-kernel decoding, we still will end up with two different
> > decoding, one in kernel and one in lirc.
>
> And that's good. Especially for a popular and simple protocol such as
> RC5.
> Actually, it's not about adding the decoder. It's about fixing it.
> I can fix it.

This is nonsense.


2009-11-28 15:25:03

by Krzysztof Halasa

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Maxim Levitsky <[email protected]> writes:

>> And that's good. Especially for a popular and simple protocol such as
>> RC5.
>> Actually, it's not about adding the decoder. It's about fixing it.
>> I can fix it.
>
> This is nonsense.

You forgot to say why you think so.
--
Krzysztof Halasa

2009-11-28 15:36:03

by Maxim Levitsky

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sat, 2009-11-28 at 16:25 +0100, Krzysztof Halasa wrote:
> Maxim Levitsky <[email protected]> writes:
>
> >> And that's good. Especially for a popular and simple protocol such as
> >> RC5.
> >> Actually, it's not about adding the decoder. It's about fixing it.
> >> I can fix it.
> >
> > This is nonsense.
>
> You forgot to say why you think so.

Because frankly, I am sick of this discussion.
The generic decoder that lirc has is actually much better and more tolerant
than the protocol-specific decoders that you propose.

You claim you 'fix' the decoder, right?
But what about all these lirc userspace drivers?
How are they supposed to use that 'fixed' decoder?


2009-11-28 15:44:40

by Krzysztof Halasa

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Maxim Levitsky <[email protected]> writes:

> The generic decoder that lirc has is actually much better and more tolerant
> than the protocol-specific decoders that you propose.

Actually, it is not the case. Why do you think it's better (let alone
"much better")? Have you at least seen my RC5 decoder?

> You claim you 'fix' the decoder, right?

Sure.

> But what about all these lirc userspace drivers?

Nothing. They are not relevant and obviously have to use lircd.
If you can have userspace driver, you can have lircd as well.

> How are they supposed to use that 'fixed' decoder?

They are not.

Is it a problem for you?
How is your keyboard supposed to use scanner driver?
--
Krzysztof Halasa

2009-11-28 16:26:19

by Maxim Levitsky

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sat, 2009-11-28 at 16:44 +0100, Krzysztof Halasa wrote:
> Maxim Levitsky <[email protected]> writes:
>
> > The generic decoder that lirc has is actually much better and more tolerant
> > than the protocol-specific decoders that you propose.
>
> Actually, it is not the case. Why do you think it's better (let alone
> "much better")? Have you at least seen my RC5 decoder?
Because the userspace decoder is generic, it doesn't depend on exact
timing; as long as pulses vary in length it can distinguish between keys,
and that is enough.
I haven't used your decoder, so in that particular case I don't know.


>
> > You claim you 'fix' the decoder, right?
>
> Sure.
Unless you put it up against an inaccurate decoder...
Ask the lirc developers.


>
> > But what about all these lirc userspace drivers?
>
> Nothing. They are not relevant and obviously have to use lircd.
> If you can have userspace driver, you can have lircd as well.
>
> > How are they supposed to use that 'fixed' decoder?
>
> They are not.
>
> Is it a problem for you?
> How is your keyboard supposed to use scanner driver?
Another piece of off-topic nonsense.

I have a VCR remote, ok?
I have a pulse/space decoder in my notebook, I created a config file for
it, and I did a lot of customization, because this remote isn't meant to
be used with a PC.

Now, I also have a desktop.
I don't have a receiver there, but someday I'll arrange some sort of one.
I have an IR dongle in the closet; it's a raw IR diode.
Probably with a few components I could connect it to the sound card (and I
have 3 independent inputs, of which only one is used), and then I would
use an ALSA input driver.

Now I would end up with 2 different configurations, one for the kernel,
another for lirc.
Great, isn't it?

The point is, again, and I *emphasize* it: as long as lirc is used to
decode everything but ready-to-use scancodes, everything is kept in one
place. Both the decoding algorithms and the configuration.

For ready-to-use scancodes, a hardcoded table can be used in the kernel to
translate them to input events.

How hard is that to understand?



Also, I repeat again that this discussion *IS NOT* about the userspace
API; it's about who decodes the input, kernel or lirc.

Raw access to timing will be available one way or another, either as the
primary way of decoding for lirc, or as a debug measure.

Regards,
Maxim Levitsky

2009-11-28 16:44:28

by Krzysztof Halasa

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Maxim Levitsky <[email protected]> writes:

>> Actually, it is not the case. Why do you think it's better (let alone
>> "much better")? Have you at least seen my RC5 decoder?
> Because the userspace decoder is generic, it doesn't depend on exact
> timing; as long as pulses vary in length it can distinguish between keys,
> and that is enough.
> I haven't used your decoder, so in that particular case I don't know.

I thought so.
FYI: a sane RC5 decoder doesn't depend on exact timing. Ever seen a
multi-function remote that can control many different devices, like a TV,
VCR, DVD player and so on, from different manufacturers?

> Unless you put it up against an inaccurate decoder...
> Ask the lirc developers.

Not sure what you mean.

> I have a VCR remote, ok?
> I have a pulse/space decoder in my notebook, I created a config file for
> it, and I did a lot of customization, because this remote isn't meant to
> be used with a PC.

There is no such thing as "being supposed to be used with a PC".
A space/mark receiver can receive data from any remote.

> Now, I also have a desktop.
> I don't have a receiver there, but someday I'll arrange some sort of one.
> I have an IR dongle in the closet; it's a raw IR diode.
> Probably with a few components I could connect it to the sound card (and I
> have 3 independent inputs, of which only one is used), and then I would
> use an ALSA input driver.
>
> Now I would end up with 2 different configurations, one for the kernel,
> another for lirc.
> Great, isn't it?

If you want such a setup - why not?
If you don't - you can use lirc in both cases.

> The point is, again, and I *emphasize* it: as long as lirc is used to
> decode everything but ready-to-use scancodes, everything is kept in one
> place. Both the decoding algorithms and the configuration.

Then keep it that way and let others use what they think is best for
them.

Now how hard is it to understand that?

> Also, I repeat again that this discussion *IS NOT* about the userspace
> API; it's about who decodes the input, kernel or lirc.

That could be the case if you were limited to "or". But we can do both.
--
Krzysztof Halasa

2009-11-28 16:45:48

by [email protected]

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sat, Nov 28, 2009 at 10:35 AM, Maxim Levitsky
<[email protected]> wrote:
> On Sat, 2009-11-28 at 16:25 +0100, Krzysztof Halasa wrote:
>> Maxim Levitsky <[email protected]> writes:
>>
>> >> And that's good. Especially for a popular and simple protocol such as
>> >> RC5.
>> >> Actually, it's not about adding the decoder. It's about fixing it.
>> >> I can fix it.
>> >
>> > This is nonsense.
>>
>> You forgot to say why you think so.
>
> Because frankly, I am sick of this discussion.
> The generic decoder that lirc has is actually much better and more tolerant
> than the protocol-specific decoders that you propose.

Porting the decoder engine from lirc into the kernel is also a possibility.

I'm asking to have an architecture design discussion, not to pick one
of the various implementations. This is something that we have to live
with for twenty years, and it is a giant pain to change if we get it wrong
initially.

> You claim you 'fix' the decoder, right?
> But what about all these lirc userspace drivers?
> How are they supposed to use that 'fixed' decoder?

Some of that user space hardware belongs in the trash can and will
never work reliably on a modern system. For example: sitting in a
tight user space loop reading the DTS bit from a serial port or
parallel port and then using the system clock to derive IR timings.
That process is either going to be inaccurate or it is going to make video
frames drop. Bit banging from user space is completely unreliable.

If you really want to use your microphone input as an ADC channel, run
a little app that reads the ALSA input and converts it to a timing
stream, then inject that data into the kernel input system using
uinput.
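That conversion step can be sketched as follows (illustrative only: the
float sample format, rate and threshold are assumptions for the example,
and a real receiver would also need filtering and gain control):

```python
# Rough sketch: turn audio samples from an IR diode wired to a sound-card
# input into alternating mark/space durations in microseconds.
def samples_to_timings(samples, rate_hz=48000, threshold=0.1):
    """Return run lengths, in microseconds, of above/below-threshold runs."""
    timings = []
    state = bool(samples) and abs(samples[0]) > threshold
    run = 0
    for s in samples:
        active = abs(s) > threshold
        if active == state:
            run += 1                 # still in the same mark/space run
        else:
            # run ended: convert its sample count to microseconds
            timings.append(round(run * 1_000_000 / rate_hz))
            state, run = active, 1
    if run:
        timings.append(round(run * 1_000_000 / rate_hz))
    return timings
```

The timing resolution is bounded by the sample rate (about 21 us per sample
at 48 kHz), which is adequate for typical consumer IR protocols.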

Both of these are hobbyist class solutions. They are extremely cheap
but they are unreliable and create large CPU loads. But some people
want to use a $300 CPU to eliminate $2 worth of IR hardware. This type
of hardware will continue to work via event injection. But neither of
these solutions belong in the kernel.

What are other examples of user space IR drivers?

--
Jon Smirl
[email protected]

2009-11-28 16:48:26

by lirc

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Hi Krzysztof and Maxim,

on 28 Nov 09 at 16:44, Krzysztof Halasa wrote:
> Maxim Levitsky <[email protected]> writes:

>> Generic decoder that lirc has is actually much better and more tolerant
>> that protocol specific decoders that you propose,

> Actually, it is not the case. Why do you think it's better (let alone
> "much better")? Have you at least seen my RC5 decoder?

Nobody here doubts that you can implement a working RC-5 decoder. It's
really easy. I'll give you an example of why Maxim thinks that the generic
LIRC approach has advantages:

Look at the Streamzap remote (I think Jarod submitted the lirc_streamzap
driver in his patchset):
http://lirc.sourceforge.net/remotes/streamzap/PC_Remote

This remote uses RC-5. But some of the developers must have thought that
it would be a smart idea to use 14 bits instead of the standard 13 bits
for this remote. In LIRC you won't care, because this is configurable and
irrecord will figure it out automatically for you. In the kernel decoders
proposed so far, you will have to treat this case specially in the decoder
because you expect 13 bits for RC-5, not 14.
Well, it can be done. But you'll have to add another IR protocol define
for RC-5_14, which will become very ugly with many non-standard protocol
variations.

@Maxim: I think Mauro is right. We need to find an approach that makes
everybody happy. We should take the time now to discuss all the
possibilities and choose the best solution. LIRC has lived outside the
kernel for so long that we can wait another couple of weeks or months
until we agree on something that will hopefully be a stable API for many
years to come.

Christoph

2009-11-28 17:06:44

by [email protected]

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sat, Nov 28, 2009 at 11:47 AM, Christoph Bartelmus <[email protected]> wrote:
> @Maxim: I think Mauro is right. We need to find an approach that makes
> everybody happy. We should take the time now to discuss all the
> possibilities and choose the best solution. LIRC has lived outside the
> kernel for so long that we can wait another couple of weeks or months
> until we agree on something that will hopefully be a stable API for many
> years to come.

Please do this. That's why I started this thread off with goals for
the implementation. After we settle on a set of goals we can move on
to how to implement those goals. The end result is almost certainly
going to combine aspects from all of the various proposals and the
LIRC code base is likely to be the largest contributor.

There are two very basic things that we need to reach consensus on first.

1) Unification with mouse/keyboard in evdev - put IR on equal footing.
2) Specific tools (xmodmap, setkeycodes, etc or the LIRC ones) or
generic tools (ls, mkdir, echo) for configuration

Once consensus is reached in those two areas everything else should be
much easier.

--
Jon Smirl
[email protected]

2009-11-28 17:21:41

by Krzysztof Halasa

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

[email protected] (Christoph Bartelmus) writes:

> Nobody here doubts that you can implement a working RC-5 decoder. It's
> really easy. I'll give you an example why Maxim thinks that the generic
> LIRC approach has advantages:

But surely not when compared to an in-kernel decoder _and_ the one in
lircd? :-)

> Look at the Streamzap remote (I think Jarod submitted the lirc_streamzap
> driver in his patchset):
> http://lirc.sourceforge.net/remotes/streamzap/PC_Remote
>
> This remote uses RC-5. But some of the developers must have thought that
> it would be a smart idea to use 14 bits instead of the standard 13 bits
> for this remote. In LIRC you won't care, because this is configurable and
> irrecord will figure it out automatically for you. In the kernel decoders
> proposed so far, you will have to treat this case specially in the decoder
> because you expect 13 bits for RC-5, not 14.

Well, 14-bit RC5 has been the de facto standard for some time now. One of
the start bits, inverted, now functions as the MSB of the command code.
13-bit receiver implementations (at least those aimed at "foreign"
remotes) are obsolete.

> Well, it can be done. But you'll have to add another IR protocol define
> for RC-5_14, which will become very ugly with many non-standard protocol
> variations.

No, the 14-bit version is designed to be backward compatible.
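The layout just described, as a sketch (the helper and the frame values
are illustrative, not taken from any existing decoder):

```python
# 14-bit RC-5 frame, MSB first: start1, start2, toggle,
# 5 address bits, 6 command bits.
def rc5_fields(frame):
    command = frame & 0x3F           # low 6 bits
    address = (frame >> 6) & 0x1F    # next 5 bits
    toggle  = (frame >> 11) & 1
    start2  = (frame >> 12) & 1
    # Extended RC-5: the inverted second start bit is command bit 6.
    # Plain remotes always send start2 == 1, so the extra bit is 0 for
    # them, which is what makes the extension backward compatible.
    command |= (1 - start2) << 6
    return toggle, address, command
```

So a 13-bit decoder and a 14-bit decoder agree on every classic frame;
only commands 64-127 need the extra bit.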
--
Krzysztof Halasa

2009-11-28 17:35:03

by Krzysztof Halasa

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Jon Smirl <[email protected]> writes:

> There are two very basic things that we need to reach consensus on first.
>
> 1) Unification with mouse/keyboard in evdev - put IR on equal footing.
> 2) Specific tools (xmodmap, setkeycodes, etc or the LIRC ones) or
> generic tools (ls, mkdir, echo) for configuration

I think we can do this gradually:
1. Merge the lirc drivers. The only thing that needs to be stable is the
   lirc interface.
2. Change the IR input layer interface (in the "media" drivers, and add it
   to the lirc drivers).
--
Krzysztof Halasa

2009-11-28 17:37:40

by [email protected]

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sat, Nov 28, 2009 at 12:35 PM, Krzysztof Halasa <[email protected]> wrote:
> Jon Smirl <[email protected]> writes:
>
>> There are two very basic things that we need to reach consensus on first.
>>
>> 1) Unification with mouse/keyboard in evdev - put IR on equal footing.
>> 2) Specific tools (xmodmap, setkeycodes, etc or the LIRC ones) or
>> generic tools (ls, mkdir, echo) for configuration
>
> I think we can do this gradually:
> 1. Merging the lirc drivers. The only stable thing needed is lirc
>    interface.

Doing that locks in a user space API that needs to be supported
forever. We need to think this API through before locking it in.

> 2. Changing IR input layer interface ("media" drivers and adding to lirc
>    drivers).
> --
> Krzysztof Halasa
>



--
Jon Smirl
[email protected]

2009-11-28 17:40:29

by Krzysztof Halasa

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Jon Smirl <[email protected]> writes:

>> 1. Merging the lirc drivers. The only stable thing needed is lirc
>>   interface.
>
> Doing that locks in a user space API that needs to be supported
> forever. We need to think this API through before locking it in.

Sure, that's why I wrote about the need for it to be "stable".
--
Krzysztof Halasa

2009-11-28 18:18:15

by Stefan Richter

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Jon Smirl wrote:
> There are two very basic things that we need to reach consensus on first.
>
> 1) Unification with mouse/keyboard in evdev - put IR on equal footing.
> 2) Specific tools (xmodmap, setkeycodes, etc or the LIRC ones) or
> generic tools (ls, mkdir, echo) for configuration

About 2: If at all, there only needs to be a decision about pulse/space
to scancode converter configuration. In contrast, scancode to keycode
converter configuration is already solved; the interface is
EVIOCSKEYCODE. If you find the EVIOCSKEYCODE interface lacking, extend
it or submit an alternative --- but this does not affect LIRC and
whether to merge it in any way.

PS: Drop your "specific vs. generic tools" terminology already. Your
configfs-based proposal requires "specific" tools as well; it's just that
they can be implemented in bash, among other things.
--
Stefan Richter
-=====-==--= =-== ===--
http://arcgraph.de/sr/

2009-11-28 18:46:02

by Maxim Levitsky

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sat, 2009-11-28 at 11:45 -0500, Jon Smirl wrote:
> On Sat, Nov 28, 2009 at 10:35 AM, Maxim Levitsky
> <[email protected]> wrote:
> > On Sat, 2009-11-28 at 16:25 +0100, Krzysztof Halasa wrote:
> >> Maxim Levitsky <[email protected]> writes:
> >>
> >> >> And that's good. Especially for a popular and simple protocol such as
> >> >> RC5.
> >> >> Actually, it's not about adding the decoder. It's about fixing it.
> >> >> I can fix it.
> >> >
> >> > This is nonsense.
> >>
> >> You forgot to say why you think so.
> >
> > Because frankly, I am sick of this discussion.
> > The generic decoder that lirc has is actually much better and more tolerant
> > than the protocol-specific decoders that you propose.
>
> Porting the decoder engine from lirc into the kernel is also a possibility.
>
> I'm asking to have an architecture design discussion, not to pick one
> of the various implementations. This is something that we have to live
> with for twenty years, and it is a giant pain to change if we get it
> wrong initially.
>
> > You claim you 'fix' the decoder, right?
> > But what about all these lirc userspace drivers?
> > How are they supposed to use that 'fixed' decoder?
>
> Some of that user space hardware belongs in the trash can and will
> never work reliably in a modern system. For example - sitting in a
> tight user space loop reading the DTS bit from a serial port or
> parallel port and then using the system clock to derive IR timings.
> That process is either going to be inaccurate or it is going to make video
> frames drop. Bit banging from user space is completely unreliable.
>
> If you really want to use your microphone input as an ADC channel, run
> a little app that reads the ALSA input and converts it to a timing
> stream, then inject that data into the kernel input system using
> uinput.
>
> Both of these are hobbyist class solutions. They are extremely cheap
> but they are unreliable and create large CPU loads. But some people
> want to use a $300 CPU to eliminate $2 worth of IR hardware. This type
> of hardware will continue to work via event injection. But neither of
> these solutions belong in the kernel.


>
> What are other examples of user space IR drivers?
>

many libusb based drivers?

Regards,
Maxim Levitsky

2009-11-28 18:56:21

by [email protected]

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sat, Nov 28, 2009 at 1:45 PM, Maxim Levitsky <[email protected]> wrote:
> On Sat, 2009-11-28 at 11:45 -0500, Jon Smirl wrote:
>> What are other examples of user space IR drivers?
>>
>
> many libusb based drivers?

If these drivers are for specific USB devices, it is straightforward to
turn them into kernel drivers. If we are going for plug and play, this
needs to happen. All USB device drivers can be implemented in user space,
but that doesn't mean you want to do that. Putting device drivers in the
kernel subjects them to code inspection; they get shipped everywhere, they
autoload when the device is inserted, they participate in suspend/resume,
and so on.

If these are generic USB serial devices being used to implement IR
that's the hobbyist model and the driver should stay in user space and
use event injection.

If an FT232 has been used to build a USB IR receiver, you should program
a specific USB ID into it rather than leaving the generic one in. FTDI
will assign you a specific USB ID out of their ID space for free; you
don't need to pay to get one from the USB forum. Once you put a
specific ID into the FT232, it will trigger the load of the correct
in-kernel driver.


>
> Regards,
> Maxim Levitsky
>
>



--
Jon Smirl
[email protected]

2009-11-28 18:58:23

by [email protected]

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sat, Nov 28, 2009 at 1:17 PM, Stefan Richter
<[email protected]> wrote:
> Jon Smirl wrote:
>> There are two very basic things that we need to reach consensus on first.
>>
>> 1) Unification with mouse/keyboard in evdev - put IR on equal footing.
>> 2) Specific tools (xmodmap, setkeycodes, etc or the LIRC ones) or
>> generic tools (ls, mkdir, echo) for configuration
>
> About 2: If at all, there only needs to be a decision about pulse/space
> to scancode converter configuration. In contrast, scancode to keycode
> converter configuration is already solved; the interface is
> EVIOCSKEYCODE. If you find the EVIOCSKEYCODE interface lacking, extend
> it or submit an alternative --- but this does not affect LIRC and
> whether to merge it in any way.

EVIOCSKEYCODE is lacking; its first parameter is an int, and some decoded
IR codes are over 32 bits. Christoph posted an example that needs 128
bits. This is a general problem with ioctls: they change size depending
on platform and endianness.
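For reference, the current definition is _IOW('E', 0x04, int[2]); a sketch
of how that request number is built (mirroring the kernel's _IOW() macro
from asm-generic/ioctl.h) makes the 32-bit limit explicit: the payload is
just two 32-bit ints, scancode and keycode.

```python
# Field shifts used by the kernel's ioctl number encoding on common
# architectures: nr | type | size | direction, packed into one 32-bit word.
_IOC_NRSHIFT, _IOC_TYPESHIFT, _IOC_SIZESHIFT, _IOC_DIRSHIFT = 0, 8, 16, 30
_IOC_WRITE = 1

def _IOW(type_ch, nr, size):
    # Build a write-direction ioctl request number, like the C _IOW() macro.
    return ((_IOC_WRITE << _IOC_DIRSHIFT) | (ord(type_ch) << _IOC_TYPESHIFT)
            | (size << _IOC_SIZESHIFT) | (nr << _IOC_NRSHIFT))

EVIOCSKEYCODE = _IOW('E', 0x04, 8)   # sizeof(int[2]) == 8: scancode, keycode
```

A scancode wider than 32 bits simply does not fit into the first int of
that payload; extending the interface means defining a new ioctl with a
bigger argument structure.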

Also, how do you create the devices for each remote? You would need to
create these devices before being able to do EVIOCSKEYCODE to them.

>
> PS: Drop your "specific vs. generic tools" terminology already. Your
> configfs based proposal requires "specific" tools as well, it's just
> that they can be implemented in bash, among else.

The shell commands are the most generic tools in Unix. udev already
knows how to run shell scripts.

But there is no technical reason why setkeycodes, getkeycodes,
showkey, loadkeys, xmodmap, and dumpkeys can't be modified to support
IR. I already have to keep the man page on the screen when using these
commands, so adding a bunch more parameters won't hurt.

> --
> Stefan Richter
> -=====-==--= =-== ===--
> http://arcgraph.de/sr/
>



--
Jon Smirl
[email protected]

2009-11-28 19:16:18

by Maxim Levitsky

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sat, 2009-11-28 at 13:56 -0500, Jon Smirl wrote:
> On Sat, Nov 28, 2009 at 1:45 PM, Maxim Levitsky <[email protected]> wrote:
> > On Sat, 2009-11-28 at 11:45 -0500, Jon Smirl wrote:
> >> What are other examples of user space IR drivers?
> >>
> >
> > many libusb based drivers?
>
> If these drivers are for specific USB devices, it is straightforward
> to turn them into kernel based drivers. If we are going for plug and
> play this needs to happen. All USB device drivers can be implemented
> in user space, but that doesn't mean you want to do that. Putting
> device drivers in the kernel subjects them to code inspection, they
> get shipped everywhere, they autoload when the device is inserted,
> they participate in suspend/resume, etc.
>
> If these are generic USB serial devices being used to implement IR
> that's the hobbyist model and the driver should stay in user space and
> use event injection.
>
> If a ft232 has been used to build a USB IR receiver you should program
> a specific USB ID into it rather than leaving the generic one in. FTDI
> will assign you a specific USB ID out of their ID space for free, you
> don't need to pay to get one from the USB forum. Once you put a
> specific ID into the ft232 it will trigger the load of the correct
> in-kernel driver.

If we could put *all* the lirc drivers in the kernel along with the
generic decoding algorithm, then it might begin to look a bit more sane.
We would also need a tool to upload the existing lirc config files to the
kernel. But this would essentially be the same as porting lirc to the
kernel. I don't see much gain in it, and either way, ALSA input won't be
possible.

Christoph Bartelmus, Jarod Wilson, what do you think?

Regards,
Maxim Levitsky

2009-11-28 19:31:09

by Stefan Richter

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Jon Smirl wrote:
> If these drivers are for specific USB devices, it is straightforward
> to turn them into kernel based drivers. If we are going for plug and
> play this needs to happen. All USB device drivers can be implemented
> in user space, but that doesn't mean you want to do that. Putting
> device drivers in the kernel subjects them to code inspection, they
> get shipped everywhere, they autoload when the device is inserted,
> they participate in suspend/resume, etc.

Huh? Userspace implementations /can/ be code-reviewed (but they can't
crash your machine), they /can/ be and are shipped everywhere, they /do/
auto-load when the device is inserted. And if there should be an issue
with power management (is there any?), then the ABI and libusb can surely
be improved. I don't see why a device with a userspace driver cannot be
included in power management.
--
Stefan Richter
-=====-==--= =-== ===--
http://arcgraph.de/sr/

2009-11-28 19:32:43

by [email protected]

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sat, Nov 28, 2009 at 2:30 PM, Stefan Richter
<[email protected]> wrote:
> Jon Smirl wrote:
>> If these drivers are for specific USB devices, it is straightforward
>> to turn them into kernel based drivers. If we are going for plug and
>> play this needs to happen. All USB device drivers can be implemented
>> in user space, but that doesn't mean you want to do that. Putting
>> device drivers in the kernel subjects them to code inspection, they
>> get shipped everywhere, they autoload when the device is inserted,
>> they participate in suspend/resume, etc.
>
> Huh? Userspace implementations /can/ be code-reviewed (but they can't
> crash your machine), they /can/ be and are shipped everywhere, they /do/
> auto-load when the device is inserted. And if there should be an issue
> with power management (is there any?), then improve the ABI and libusb
> can surely be improved. I don't see why a device with a userspace
> driver cannot be included in power management.

If you want a micro-kernel there are plenty to pick from. Linux has
chosen not to be a micro-kernel. The Linux model is device drivers in
the kernel.

--
Jon Smirl
[email protected]

2009-11-28 19:46:13

by Stefan Richter

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Jon Smirl wrote:
> On Sat, Nov 28, 2009 at 1:17 PM, Stefan Richter
> <[email protected]> wrote:
>> Jon Smirl wrote:
>>> There are two very basic things that we need to reach consensus on first.
>>>
>>> 1) Unification with mouse/keyboard in evdev - put IR on equal footing.
>>> 2) Specific tools (xmodmap, setkeycodes, etc or the LIRC ones) or
>>> generic tools (ls, mkdir, echo) for configuration
>> About 2: If at all, there only needs to be a decision about pulse/space
>> to scancode converter configuration. In contrast, scancode to keycode
>> converter configuration is already solved; the interface is
>> EVIOCSKEYCODE. If you find the EVIOCSKEYCODE interface lacking, extend
>> it or submit an alternative --- but this does not affect LIRC and
>> whether to merge it in any way.
>
> EVIOCSKEYCODE is lacking; its first parameter is an int, and some decoded
> IR codes are over 32 bits. Christoph posted an example that needs 128
> bits. This is a general problem with ioctls: they change size depending
> on platform and endianness.

No, they do not "change size depending on platform and endianness" if
basic rules are observed. Defining compatible ioctls is not rocket
science. Sure, "int" should not be used in ioctl arguments or other
binary interfaces.

(I never said EVIOCSKEYCODE was not lacking, I only said it exists
already. When you talk about better scancode-to-keycode converter
configuration, you are talking about EVIOCSKEYCODE, not about LIRC or a
hypothetical replacement of LIRC. Ergo, the decision about whether to
merge LIRC is not blocked by this configuration interface issue.)

> Also, how do you create the devices for each remote? You would need to
> create these devices before being able to do EVIOCSKEYCODE to them.

The input subsystem creates devices on behalf of input drivers. (Kernel
drivers, that is. Userspace drivers are per se not affected.)

>> PS: Drop your "specific vs. generic tools" terminology already. Your
>> configfs based proposal requires "specific" tools as well, it's just
>> that they can be implemented in bash, among else.
>
> The shell commands are the most generic tools in Unix.

The shell scripts are still special-purpose programs.

> udev already knows how to run shell scripts.
[...]

Udev can run any kind of program, compiled or interpreted.
--
Stefan Richter
-=====-==--= =-== ===--
http://arcgraph.de/sr/

2009-11-28 19:48:30

by Stefan Richter

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Jon Smirl wrote:
> On Sat, Nov 28, 2009 at 2:30 PM, Stefan Richter
> <[email protected]> wrote:
>> Jon Smirl wrote:
>>> If these drivers are for specific USB devices it is straight forward
>>> to turn them into kernel based drivers. If we are going for plug and
>>> play this needs to happen. All USB device drivers can be implemented
>>> in user space, but that doesn't mean you want to do that. Putting
>>> device drivers in the kernel subjects them to code inspection, they
>>> get shipped everywhere, they autoload when the device is inserted,
>>> they participate in suspend/resume, etc.
>> Huh? Userspace implementations /can/ be code-reviewed (but they can't
>> crash your machine), they /can/ be and are shipped everywhere, they /do/
>> auto-load when the device is inserted. And if there should be an issue
>> with power management (is there any?), then improve the ABI and libusb
>> can surely be improved. I don't see why a device with a userspace
>> driver cannot be included in power management.
>
> If you want a micro-kernel there are plenty to pick from. Linux has
> chosen not to be a micro-kernel. The Linux model is device drivers in
> the kernel.

Total nonsense. Neither am I arguing for a micro-kernel, nor are
userspace drivers alien to Linux. Not at all.
--
Stefan Richter
-=====-==--= =-== ===--
http://arcgraph.de/sr/

2009-11-28 19:55:02

by Krzysztof Halasa

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Jon Smirl <[email protected]> writes:

> EVIOCSKEYCODE is lacking, first parameter is an INT. Some decoded IR
> codes are over 32b. Christoph posted an example that needs 128b.

This only means that the existing interface is limited.

> This
> is a problem with ioctls, they change size depending on platform and
> endianess.

But not this: you can use fixed-width u16, u32, u64 and e.g. u8[x].
I don't know an arch which changes int sizes depending on endianness,
is there any?
32/64 binary compatibility needs some minimal effort.
--
Krzysztof Halasa

2009-11-28 20:08:37

by [email protected]

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sat, Nov 28, 2009 at 2:45 PM, Stefan Richter
<[email protected]> wrote:
> Jon Smirl wrote:
>> Also, how do you create the devices for each remote? You would need to
>> create these devices before being able to do EVIOCSKEYCODE to them.
>
> The input subsystem creates devices on behalf of input drivers. (Kernel
> drivers, that is. Userspace drivers are per se not affected.)

We have one IR receiver device and multiple remotes. How does the
input system know how many devices to create corresponding to how many
remotes you have? There is no current mechanism to do that. You need
an input device for each remote so that you can do the EVIOCSKEYCODE
against it. Some type of "create subdevice" IOCTL will need to be
built.

I handled that in configfs like this:
/configfs - mount the basic configfs
/configfs/remotes (created by loading IR support)
mkdir /configfs/remotes/remote_A - this causes the input subdevice to
be created; its name appears in the created directory.
mkdir /configfs/remotes/remote_A/... - now build the mapping entries.

This "create subdevice" IOCTL will need to take a name so that it can
be identified. You will probably need another IOCTL to enumerate which
subdevices belong to the root device, etc...

Keyboards don't have subdevices. There is a 1:1 mapping between the
keyboard and the device driver.

--
Jon Smirl
[email protected]

2009-11-28 20:14:30

by [email protected]

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sat, Nov 28, 2009 at 2:55 PM, Krzysztof Halasa <[email protected]> wrote:
> Jon Smirl <[email protected]> writes:
>
>> EVIOCSKEYCODE is lacking, first parameter is an INT. Some decoded IR
>> codes are over 32b. Christoph posted an example that needs 128b.
>
> This only means that the existing interface is limited.
>
>> This
>> is a problem with ioctls, they change size depending on platform and
>> endianess.
>
> But not this: you can use fixed-width u16, u32, u64 and e.g. u8[x].
> I don't know an arch which changes int sizes depending on endianness,
> is there any?

Endianness comes into play when sending/receiving multibyte integers on
platforms with different endianness. That's where the hton() stuff comes
from. IOCTLs obviously work; you just have to allow for all of this
stuff when writing them.

http://linux.die.net/man/3/htonl


> 32/64 binary compatibility needs some minimal effort.
> --
> Krzysztof Halasa
>



--
Jon Smirl
[email protected]

2009-11-28 20:21:56

by Krzysztof Halasa

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Jon Smirl <[email protected]> writes:

> We have one IR receiver device and multiple remotes. How does the
> input system know how many devices to create corresponding to how many
> remotes you have? There is no current mechanism to do that. You need
> an input device for each remote so that you can do the EVIOCSKEYCODE
> against it. Some type of "create subdevice" IOCTL will need to be
> built.

Thinking about it, I'm not sure. Why do we want multiple remote devices?
(not multiple remotes, that's clear).
--
Krzysztof Halasa

2009-11-28 20:29:40

by Stefan Richter

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Jon Smirl wrote:
> On Sat, Nov 28, 2009 at 2:45 PM, Stefan Richter
> <[email protected]> wrote:
>> Jon Smirl wrote:
>>> Also, how do you create the devices for each remote? You would need to
>>> create these devices before being able to do EVIOCSKEYCODE to them.
>> The input subsystem creates devices on behalf of input drivers. (Kernel
>> drivers, that is. Userspace drivers are per se not affected.)
>
> We have one IR receiver device and multiple remotes. How does the
> input system know how many devices to create corresponding to how many
> remotes you have?

If several remotes are to be used on the same receiver, then they
necessarily need to generate different scancodes, don't they? Otherwise
the input driver wouldn't be able to route their events to the
respective subdevice. But if they do generate different scancodes,
there is no need to create subdevices just for EVIOCSKEYCODE's sake. (It
might still be desirable to have subdevices for other reasons perhaps.)
--
Stefan Richter
-=====-==--= =-== ===--
http://arcgraph.de/sr/

2009-11-28 20:29:21

by Krzysztof Halasa

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Jon Smirl <[email protected]> writes:

> Endianess comes into play when send/receiving multibyte integers on
> platforms with different endianess.

It's the case when you're sending this data to a machine with
a different endianness. For example, in a network or to another CPU in
e.g. add-on card.
Ioctls are not affected by this, since both ends are the same.

Obviously you can be affected if you try to access data as integers in
one point and as arrays of bytes in the other, but it has nothing to do
with ioctls.
--
Krzysztof Halasa

2009-11-28 20:34:34

by Stefan Richter

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Stefan Richter wrote:
> Jon Smirl wrote:
>> We have one IR receiver device and multiple remotes. How does the
>> input system know how many devices to create corresponding to how many
>> remotes you have?
>
> If several remotes are to be used on the same receiver, then they
> necessarily need to generate different scancodes, don't they? Otherwise
> the input driver wouldn't be able to route their events to the
> respective subdevice. But if they do generate different scancodes,
> there is no need to create subdevices just for EVIOCSKEYCODE's sake. (It
> might still be desirable to have subdevices for other reasons perhaps.)

PS, forgot to add: If there is a real need to initiate device creation
from userspace, then ioctl is not the way to go.
--
Stefan Richter
-=====-==--= =-== ===--
http://arcgraph.de/sr/

2009-11-28 20:46:03

by [email protected]

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sat, Nov 28, 2009 at 3:29 PM, Stefan Richter
<[email protected]> wrote:
> Jon Smirl wrote:
>> On Sat, Nov 28, 2009 at 2:45 PM, Stefan Richter
>> <[email protected]> wrote:
>>> Jon Smirl wrote:
>>>> Also, how do you create the devices for each remote? You would need to
>>>> create these devices before being able to do EVIOCSKEYCODE to them.
>>> The input subsystem creates devices on behalf of input drivers. (Kernel
>>> drivers, that is. Userspace drivers are per se not affected.)
>>
>> We have one IR receiver device and multiple remotes. How does the
>> input system know how many devices to create corresponding to how many
>> remotes you have?
>
> If several remotes are to be used on the same receiver, then they
> necessarily need to generate different scancodes, don't they? Otherwise
> the input driver wouldn't be able to route their events to the
> respective subdevice. But if they do generate different scancodes,
> there is no need to create subdevices just for EVIOCSKEYCODE's sake. (It
> might still be desirable to have subdevices for other reasons perhaps.)

Multiple remotes will have duplicate buttons (1, 2, 3, power, mute,
etc); these should get mapped into the standard keycodes. You need
separate devices to keep things straight.

Push button 1 on Remote A. That should generate a KP_1 on the evdev
interface for that remote.
Push button 1 on Remote B. That should generate a KP_1 on the evdev
interface for that remote.

Scenario for this - a multifunction remote that is controlling two
different devices/apps. In one mode the 1 might be a channel number,
in the other mode it might be a telephone number.

The user may choose to make button 1 on both remote A/B map to KP_1 on
a single interface.

Scenario for this - I want to use two different remotes to control a
single device.
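
A sketch of the routing being described, assuming globally unique scancodes;
the scancode values, app names, and keycode strings here are made up for
illustration:

```python
# Hypothetical sketch: one receiver sees unique scancodes from every
# remote, and a per-application keymap routes each code to a keycode on
# that application's own (virtual) input device.
KEYMAPS = {
    "mythtv": {0xA01: "KEY_NUMERIC_1", 0xA02: "KEY_NUMERIC_2"},  # Remote A
    "voip":   {0xB01: "KEY_NUMERIC_1", 0xB02: "KEY_NUMERIC_2"},  # Remote B
}

def route(scancode):
    """Return (app, keycode) for a decoded scancode, or None if unmapped."""
    for app, keymap in KEYMAPS.items():
        if scancode in keymap:
            return app, keymap[scancode]
    return None

# Button "1" on either remote yields the same keycode, but it is
# delivered on a different device, so the apps never see each other's
# events.
print(route(0xA01))  # -> ('mythtv', 'KEY_NUMERIC_1')
print(route(0xB01))  # -> ('voip', 'KEY_NUMERIC_1')
```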

---------------------

I handled that in configfs like this:
/configfs - mount the basic configfs
/configfs/remotes (created by loading IR support)
mkdir /configfs/remotes/remote_A - this causes the input subdevice to
be created, the name of it appears in the created directory.

--- this entry really shouldn't be called "remote"; it should be named
"application". Then you build map entries under it for the keycodes
the app knows about. Nothing prevents you from adding entries that map
both Remote_A_1 and Remote_B_1 to KP_1.

-- it's not sufficient to support a single application. I might be
running mythtv, a voip phone, home automation, etc., all using a remote.
By switching modes on a multifunction remote I can switch apps.

mkdir /configfs/remotes/remote_A/... - now build the mapping entries.


> --
> Stefan Richter
> -=====-==--= =-== ===--
> http://arcgraph.de/sr/
>



--
Jon Smirl
[email protected]

2009-11-28 21:46:53

by Stefan Richter

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Jon Smirl wrote:
> On Sat, Nov 28, 2009 at 3:29 PM, Stefan Richter
> <[email protected]> wrote:
>> Jon Smirl wrote:
>>> We have one IR receiver device and multiple remotes. How does the
>>> input system know how many devices to create corresponding to how many
>>> remotes you have?
>> If several remotes are to be used on the same receiver, then they
>> necessarily need to generate different scancodes, don't they? Otherwise
^^^^^^^^^
I referred to scancodes, not keycodes.

>> the input driver wouldn't be able to route their events to the
>> respective subdevice. But if they do generate different scancodes,
>> there is no need to create subdevices just for EVIOCSKEYCODE's sake. (It
>> might still be desirable to have subdevices for other reasons perhaps.)
>
> Multiple remotes will have duplicate buttons (1, 2, 3, power, mute,
> etc); these should get mapped into the standard keycodes. You need
> separate devices to keep things straight.
>
> Push button 1 on Remote A. That should generate a KP_1 on the evdev
> interface for that remote.
> Push button 1 on Remote B. That should generate a KP_1 on the evdev
> interface for that remote.
>
> Scenario for this - a multifunction remote that is controlling two
> different devices/apps. In one mode the 1 might be a channel number,
> in the other mode it might be a telephone number.
>
> The user may choose to make button 1 on both remote A/B map to KP_1 on
> a single interface.
>
> Scenario for this - I want to use two different remotes to control a
> single device.
>
> ---------------------
>
> I handled that in configfs like this:
> /configfs - mount the basic configfs
> /configfs/remotes (created by loading IR support)
> mkdir /configfs/remotes/remote_A - this causes the input subdevice to
> be created, the name of it appears in the created directory.
[...]

I'm lost. If there are two remotes sending to a single receiver, and
their sets of scancodes do not overlap, then all is fine. You can map
either set of scancodes to keycodes independently. But if their ranges
of scancodes do overlap, then even the creation of subdevices does not
help --- the driver has no way to tell which of the remotes sent the
signal in order to select the corresponding input subdevice, does it?
--
Stefan Richter
-=====-==--= =-== ===--
http://arcgraph.de/sr/

2009-11-28 22:10:27

by [email protected]

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sat, Nov 28, 2009 at 4:46 PM, Stefan Richter
<[email protected]> wrote:
> Jon Smirl wrote:
>> On Sat, Nov 28, 2009 at 3:29 PM, Stefan Richter
>> <[email protected]> wrote:
>>> Jon Smirl wrote:
>>>> We have one IR receiver device and multiple remotes. How does the
>>>> input system know how many devices to create corresponding to how many
>>>> remotes you have?
>>> If several remotes are to be used on the same receiver, then they
>>> necessarily need to generate different scancodes, don't they? Otherwise
>                                          ^^^^^^^^^
> I referred to scancodes, not keycodes.
>
>>> the input driver wouldn't be able to route their events to the
>>> respective subdevice. But if they do generate different scancodes,
>>> there is no need to create subdevices just for EVIOCSKEYCODE's sake. (It
>>> might still be desirable to have subdevices for other reasons perhaps.)
>>
>> Multiple remotes will have duplicate buttons (1, 2, 3, power, mute,
>> etc); these should get mapped into the standard keycodes. You need
>> separate devices to keep things straight.
>>
>> Push button 1 on Remote A. That should generate a KP_1 on the evdev
>> interface for that remote.
>> Push button 1 on Remote B. That should generate a KP_1 on the evdev
>> interface for that remote.
>>
>> Scenario for this - a multifunction remote that is controlling two
>> different devices/apps. In one mode the 1 might be a channel number,
>> in the other mode it might be a telephone number.
>>
>> The user may choose to make button 1 on both remote A/B map to KP_1 on
>> a single interface.
>>
>> Scenario for this - I want to use two different remotes to control a
>> single device.
>>
>> ---------------------
>>
>> I handled that in configfs like this:
>> /configfs - mount the basic configfs
>> /configfs/remotes (created by loading IR support)
>> mkdir /configfs/remotes/remote_A - this causes the input subdevice to
>> be created, the name of it appears in the created directory.
> [...]
>
> I'm lost. If there are two remotes sending to a single receiver, and
> their sets of scancodes do not overlap, then all is fine. You can map
> either set of scancodes to keycodes independently. But if their ranges

You can do this, but now the events from both remotes are occurring on
a single evdev device. If I assign Remote_A_1 to KP_1, what am I going
to assign to Remote_B_1?

> of scancodes do overlap, then even the creation of subdevices does not
> help --- the driver has no way to tell which of the remotes sent the
> signal in order to select the corresponding input subdevice, does it?

The scancodes are always unique even among different remotes.

I have three apps: mythtv, voip and home automation. How can I use a
remote(s) to control these three apps? The concept of keyboard focus
doesn't map very well to remote controls.

My idea was to create an evdev device for each app:
mythtv - Remote_A_1 mapped KP_1, etc
voip - Remote_B_1 mapped KP_1, etc
home automation - etc

Note that there probably aren't really three remotes (A, B, C); it's a
multi-function remote. Picking a different context on a multi-function
remote doesn't generate an event.




> --
> Stefan Richter
> -=====-==--= =-== ===--
> http://arcgraph.de/sr/
>



--
Jon Smirl
[email protected]

2009-11-28 22:18:30

by [email protected]

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

I'm looking at a Sony multi-function remote right now. It has five
devices and forty keys. Each of the five devices can transmit 0-9,
power, volume, etc. It transmits 5*40 = 200 unique scancodes.

I want the five devices to correspond to five apps. What's the plan
for splitting those 200 scancodes into the five apps?

I did it by creating five evdev devices, each mapping 40 scancodes.
That lets me reuse KP_1 for each of the five apps.
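
The split can be sketched like this. The scancode scheme below
(device_index << 8 | key_index) and the keycode names are invented for
illustration; this is not the actual Sony encoding.

```python
# Hypothetical sketch: five logical devices, forty keys each, every
# (device, key) pair getting its own scancode, split into five keymaps.
DEVICES = ["tv", "vcr", "dvd", "amp", "cd"]

def scancode(dev_idx, key_idx):
    return (dev_idx << 8) | key_idx

# One keymap per logical device; keycode names are illustrative only.
keymaps = {
    dev: {scancode(d, k): "KEY_%d" % k for k in range(40)}
    for d, dev in enumerate(DEVICES)
}

# 5 * 40 = 200 unique scancodes overall, but each evdev device only
# maps its own 40, so "KEY_1" can be reused by all five applications.
all_codes = {sc for km in keymaps.values() for sc in km}
print(len(all_codes))  # -> 200
```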


--
Jon Smirl
[email protected]

2009-11-28 23:28:43

by Andy Walls

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Jon,

On Sat, 2009-11-28 at 12:37 -0500, Jon Smirl wrote:
> On Sat, Nov 28, 2009 at 12:35 PM, Krzysztof Halasa <[email protected]> wrote:
> > Jon Smirl <[email protected]> writes:
> >
> >> There are two very basic things that we need to reach consensus on first.
> >>
> >> 1) Unification with mouse/keyboard in evdev - put IR on equal footing.

The only thing this buys for the user is remote/product bundles that
work out of the box. That can only be a solution for the 80% case.

I don't hear users crying out "Please integrate IR with the input
system". I do hear users say "I want my remote to work", and "How can I
make my remote work?". Users are not specifically asking for this
integration of IR and the input system - a technical nuance. If such a
technical desire-ment drives excessive rework, I doubt anyone will care
enough about IR to follow through to make a complete system.

What does "equal footing" mean as an incentive anyway? The opportunity
to reimplement *everything* that exists for IR already over again in
kernel-space for the sake of developer technical desires? That's just a
lot of work for "not invented here" syndrome. IR transceivers are
arguably superior to keyboards and mice anyway because they can transmit
data too.


> >> 2) Specific tools (xmodmap, setkeycodes, etc or the LIRC ones) or
> >> generic tools (ls, mkdir, echo) for configuration
> >
> > I think we can do this gradually:
> > 1. Merging the lirc drivers. The only stable thing needed is lirc
> > interface.
>
> Doing that locks in a user space API that needs to be supported
> forever. We need to think this API through before locking it in.

No one gets things right the first time - no one.

Most designs are iterated with prototypes in the commercial world.
Prototypes keep costs low, so you can throw one away easily and try a
new approach if the current approach is not panning out.

Only governments try to get everything right on the first go. It takes
them too long, and the end product is usually still hosed.

Whatever gets developed won't be locked in for 20 years; that's absurd.
Technology moves on six-month to two-year cycles. Linux changes ABIs too.
V4L transitioned from V4L1 to V4L2, and that happened in less than 20
years for a much more complex set of devices with a more varied set of
userspace apps.

Regards,
Andy

> > 2. Changing IR input layer interface ("media" drivers and adding to lirc
> > drivers).
> > --
> > Krzysztof Halasa

2009-11-29 02:56:34

by Mike Lampard

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sat, 28 Nov 2009 02:27:59 am Jon Smirl wrote:
> On Fri, Nov 27, 2009 at 2:45 AM, Christoph Bartelmus
>
> <[email protected]> wrote:
> > Hi Mauro,
> >
> > on 26 Nov 09 at 14:25, Mauro Carvalho Chehab wrote:
> >> Christoph Bartelmus wrote:
> >
> > [...]
> >
> >>> But I'm still a bit hesitant about the in-kernel decoding. Maybe it's
> >>> just because I'm not familiar at all with input layer toolset.
> >
> > [...]
> >
> >> I hope it helps for you to better understand how this works.
> >
> > So the plan is to have two ways of using IR in the future which are
> > incompatible to each other, the feature-set of one being a subset of the
> > other?
>
> Take advantage of the fact that we don't have a twenty year old legacy
> API already in the kernel. Design an IR API that uses current kernel
> systems. Christoph, ignore the code I wrote and make a design proposal
> that addresses these goals...
>
> 1) Unified input in Linux using evdev. IR is on equal footing with
> mouse and keyboard.

I think this is a case where automating setup can be over-emphasised (in
the remote-as-keyboard case).

Apologies in advance if I've misunderstood the idea of utilising the 'input
subsystem' for IR. If the plan is to offer dedicated IR events via a yet-to-
be-announced input event subsystem and to optionally disallow acting as a
keyboard via a module option or similar then please ignore the following.

Whilst having remotes come through the input subsystem might be 'the correct
thing' from a purely technical standpoint, as an end-user I find the use-case
for remotes completely different in one key aspect: Keyboards and mice are
generally foreground-app input devices, whereas remotes are often controlling
daemons sitting in the background piping media through dedicated devices. As
an example I have a VDR instance running in the background on my desktop
machine outputting to a TV in another room via a pci mpeg decoder - I
certainly don't want the VDR remote control interacting with my X11 desktop in
any way unless I go out of my way to set it up to do so, nor do I want it
interacting with other applications (such as MPD piping music around the
house) that are controlled via other remotes in other rooms unless specified.

Setting this up with Lircd was easy, how would a kernel-based proposal handle
this?

Regards
Mike


> 2) plug and play for basic systems - you only need an external app for
> scripting
> 3) No special tools - use mkdir, echo, cat, shell scripts to
> build maps
> 4) Use of modern Linux features like sysfs, configfs and udev.
> 5) Direct multi-app support - no daemon
> 6) Hide timing data from user as much as possible.
>
> What are other goals for this subsystem?
>
> Maybe we decide to take the existing LIRC system as is and not
> integrate it into the input subsystem. But I think there is a window
> here to update the LIRC design to use the latest kernel features. We
> don't want to build another /dev/mouse and have to rip it out in five
> years.
>
> > When designing the key mapping in the kernel you should be aware that
> > there are remotes out there that send a sequence of scan codes for some
> > buttons, e.g.
> > http://lirc.sourceforge.net/remotes/pioneer/CU-VSX159
>
> This is good input.
>

2009-11-29 04:33:45

by Andy Walls

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sat, 2009-11-28 at 12:37 -0500, Jon Smirl wrote:
> On Sat, Nov 28, 2009 at 12:35 PM, Krzysztof Halasa <[email protected]> wrote:
> > Jon Smirl <[email protected]> writes:
> >
> >> There are two very basic things that we need to reach consensus on first.
> >>
> >> 1) Unification with mouse/keyboard in evdev - put IR on equal footing.

BTW, circa 1995 my serial mouse "Just Worked" in Linux. Sometime around
the release of Fedora Core 3 or 4, serial mice stopped being well
supported as input devices AFAICT. (I still have a dual boot
Windows95/Linux machine with a serial mouse because it has ISA slots.)

Are serial port connected IR devices going to see the same fate in this
model?


Why not consider IR devices as bi-directional communications devices vs.
input devices like mice or keyboards? Theoretically the TTY layer with
line discipline modules for underlying IR hardware could also interface
IR devices to user space.

Sorry, the input subsystem cannot meet all the end user IR requirements.
I doubt it could easily support all the current user space only IR
drivers moving into the kernel. I suspect the serial port connected IR
devices will be deemed "too hard" and IR Tx as "not input" and dropped
on the floor.


The more I think about IR integration with input, the more I think any
effort beyond the plug-and-play for default configurations is a waste of
time and effort. Something more is needed to handle the transmitters
and serial connected IRs. It's also too convenient to access USB IR
hardware from existing userspace drivers to bother porting into the
kernel.

Regards,
Andy

2009-11-29 04:50:46

by Dmitry Torokhov

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sat, Nov 28, 2009 at 11:32:01PM -0500, Andy Walls wrote:
> On Sat, 2009-11-28 at 12:37 -0500, Jon Smirl wrote:
> > On Sat, Nov 28, 2009 at 12:35 PM, Krzysztof Halasa <[email protected]> wrote:
> > > Jon Smirl <[email protected]> writes:
> > >
> > >> There are two very basic things that we need to reach consensus on first.
> > >>
> > >> 1) Unification with mouse/keyboard in evdev - put IR on equal footing.
>
> BTW, circa 1995 my serial mouse "Just Worked" in Linux. Sometime around
> the release of Fedora Core 3 or 4, serial mice stopped being well
> supported as input devices AFAICT. (I still have a dual boot
> Windows95/Linux machine with a serial mouse because it has ISA slots.)
>

serport + sermouse combo should work well. At least I don't get any bug
reports ;P


> Are serial port connected IR devices going to see the same fate in this
> model?
>
>
> Why not consider IR devices as bi-directional communications devices vs.
> input devices like mice or keyboards? Theoretically the TTY layer with
> line discipline modules for underlying IR hardware could also interface
> IR devices to user space.
>
> Sorry, the input subsystem cannot meet all the end user IR requirements.

Again, what end users are you talking about here? An application that
wants to process key (or button) presses? Or something entirely
different, like lirc itself?

> I doubt it could easily support all the current user space only IR
> drivers moving into the kernel. I suspect the serial port connected IR
> devices will be deemed "too hard" and IR Tx as "not input" and dropped
> on the floor.
>
>
> The more I think about IR integration with input, the more I think any
> effort beyond the plug-and-plug for default configurations is a waste of
> time and effort. Something more is needed to handle the transmitters
> and serial connected IRs. It's also too convenient to access USB IR
> hardware from existing userspace drivers to bother porting into the
> kernel.

Having to support 2 different interfaces for regular applications is also
a waste of time and effort. The applications that don't care about IR
protocol decoding should be able to just work with the standard input
interface.

--
Dmitry

2009-11-29 04:55:49

by Dmitry Torokhov

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sun, Nov 29, 2009 at 01:17:03PM +1030, Mike Lampard wrote:
> On Sat, 28 Nov 2009 02:27:59 am Jon Smirl wrote:
> > On Fri, Nov 27, 2009 at 2:45 AM, Christoph Bartelmus
> >
> > <[email protected]> wrote:
> > > Hi Mauro,
> > >
> > > on 26 Nov 09 at 14:25, Mauro Carvalho Chehab wrote:
> > >> Christoph Bartelmus wrote:
> > >
> > > [...]
> > >
> > >>> But I'm still a bit hesitant about the in-kernel decoding. Maybe it's
> > >>> just because I'm not familiar at all with input layer toolset.
> > >
> > > [...]
> > >
> > >> I hope it helps for you to better understand how this works.
> > >
> > > So the plan is to have two ways of using IR in the future which are
> > > incompatible to each other, the feature-set of one being a subset of the
> > > other?
> >
> > Take advantage of the fact that we don't have a twenty year old legacy
> > API already in the kernel. Design an IR API that uses current kernel
> > systems. Christoph, ignore the code I wrote and make a design proposal
> > that addresses these goals...
> >
> > 1) Unified input in Linux using evdev. IR is on equal footing with
> > mouse and keyboard.
>
> I think this a case where automating setup can be over-emphasised (in the
> remote-as-keyboard case).
>
> Apologies in advance if I've misunderstood the idea of utilising the 'input
> subsystem' for IR. If the plan is to offer dedicated IR events via a yet-to-
> be-announced input event subsystem and to optionally disallow acting as a
> keyboard via a module option or similar then please ignore the following.
>
> Whilst having remotes come through the input subsystem might be 'the correct
> thing' from a purely technical standpoint, as an end-user I find the use-case
> for remotes completely different in one key aspect: Keyboards and mice are
> generally foreground-app input devices, whereas remotes are often controlling
> daemons sitting in the background piping media through dedicated devices. As
> an example I have a VDR instance running in the background on my desktop
> machine outputting to a TV in another room via a pci mpeg decoder - I
> certainly don't want the VDR remote control interacting with my X11 desktop in
> any way unless I go out of my way to set it up to do so, nor do I want it
> interacting with other applications (such as MPD piping music around the
> house) that are controlled via other remotes in other rooms unless specified.
>
> Setting this up with Lircd was easy, how would a kernel-based proposal handle
> this?
>

Why would that be different really? On my keyboard there is a key for
an e-mail application (and many others) - what HID calls Application
Launch keys IIRC. There are also application control keys and system
control keys, e.g. KEY_COFFEE aka KEY_SCREENLOCK. Those are not to be
consumed by the foreground application but by daemons/session-wide
applications.

--
Dmitry

2009-11-29 04:58:29

by Dmitry Torokhov

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sat, Nov 28, 2009 at 06:26:55PM -0500, Andy Walls wrote:
> Jon,
>
> On Sat, 2009-11-28 at 12:37 -0500, Jon Smirl wrote:
> > On Sat, Nov 28, 2009 at 12:35 PM, Krzysztof Halasa <[email protected]> wrote:
> > > Jon Smirl <[email protected]> writes:
> > >
> > >> There are two very basic things that we need to reach consensus on first.
> > >>
> > >> 1) Unification with mouse/keyboard in evdev - put IR on equal footing.
>
> The only thing this buys for the user is remote/products bundles that
> work out of the box. That can only be a solution for the 80% case.
>
> I don't hear users crying out "Please integrate IR with the input
> system". I do hear users say "I want my remote to work", and "How can I
> make my remote work?". Users are not specifically asking for this
> integration of IR and the input system - a technical nuance.

Right, but if remotes work, users don't care whether we went through 20
revisions of the interface or how much effort was wasted. When we
talk about users here, we mostly mean application developers,
not the consumers.

Well, consumers would benefit from plug and play and proper power
management too.

--
Dmitry

2009-11-29 04:59:25

by Dmitry Torokhov

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sat, Nov 28, 2009 at 05:18:34PM -0500, Jon Smirl wrote:
> I'm looking at a Sony multi-function remote right now. It has five
> devices and forty keys. Each of the five devices can transmit 0-9,
> power, volume, etc. It transmits 5*40 = 200 unique scancodes.
>
> I want the five devices to correspond to five apps. What's the plan
> for splitting those 200 scancodes into the five apps?
>
> I did it by creating five evdev devices each mapping 40 scancodes.
> That lets me reuse KP_1 for each of the five apps.
>

KEY_NUMERIC_1 please (which should not be affected by numlock state).

--
Dmitry

2009-11-29 05:32:27

by Mike Lampard

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sun, 29 Nov 2009 03:25:49 pm Dmitry Torokhov wrote:
> On Sun, Nov 29, 2009 at 01:17:03PM +1030, Mike Lampard wrote:
> > On Sat, 28 Nov 2009 02:27:59 am Jon Smirl wrote:
> > > On Fri, Nov 27, 2009 at 2:45 AM, Christoph Bartelmus
> > >
> > > <[email protected]> wrote:
> > > > Hi Mauro,
> > > >
> > > > on 26 Nov 09 at 14:25, Mauro Carvalho Chehab wrote:
> > > >> Christoph Bartelmus wrote:
> > > >
> > > > [...]
> > > >
> > > >>> But I'm still a bit hesitant about the in-kernel decoding. Maybe
> > > >>> it's just because I'm not familiar at all with input layer toolset.
> > > >
> > > > [...]
> > > >
> > > >> I hope it helps for you to better understand how this works.
> > > >
> > > > So the plan is to have two ways of using IR in the future which are
> > > > incompatible to each other, the feature-set of one being a subset of
> > > > the other?
> > >
> > > Take advantage of the fact that we don't have a twenty year old legacy
> > > API already in the kernel. Design an IR API that uses current kernel
> > > systems. Christoph, ignore the code I wrote and make a design proposal
> > > that addresses these goals...
> > >
> > > 1) Unified input in Linux using evdev. IR is on equal footing with
> > > mouse and keyboard.
> >
> > I think this is a case where automating setup can be over-emphasised (in the
> > remote-as-keyboard case).
> >
> > Apologies in advance if I've misunderstood the idea of utilising the
> > 'input subsystem' for IR. If the plan is to offer dedicated IR events
> > via a yet-to-be-announced input event subsystem and to optionally
> > disallow acting as a keyboard via a module option or similar then please
> > ignore the following.
> >
> > Whilst having remotes come through the input subsystem might be 'the
> > correct thing' from a purely technical standpoint, as an end-user I find
> > the use-case for remotes completely different in one key aspect:
> > Keyboards and mice are generally foreground-app input devices, whereas
> > remotes are often controlling daemons sitting in the background piping
> > media through dedicated devices. As an example I have a VDR instance
> > running in the background on my desktop machine outputting to a TV in
> > another room via a pci mpeg decoder - I certainly don't want the VDR
> > remote control interacting with my X11 desktop in any way unless I go out
> > of my way to set it up to do so, nor do I want it interacting with other
> > applications (such as MPD piping music around the house) that are
> > controlled via other remotes in other rooms unless specified.
> >
> > Setting this up with Lircd was easy, how would a kernel-based proposal
> > handle this?
>
> Why would that be different, really? On my keyboard there is a key for the
> e-mail application (and many others) - what HID calls Application Launch
> keys, IIRC. There are also application control keys and system control keys,
> such as KEY_COFFEE aka KEY_SCREENLOCK. Those are not meant to be consumed by
> the foreground application but by daemons or session-wide applications.
>
In my real-world examples above, both VDR and MPD are started at system start
and are not associated with any user-initiated sessions (X login etc) - they
are not X11 clients. Their only input is via Lircd. Conversely todays
Xserver (if I read my logfiles correctly) consumes all input event devices by
default, turning them into keypresses for its client apps. This is exactly
the wrong behaviour for my use-case. In order to ensure that my daemons
receive their input I must first ensure that X doesn't receive those events -
assuming this is possible, it still complicates matters further than they are
today (I'd need a simple way of automatically differentiating between remote
devices and keyboard devices).

Mike

2009-11-29 07:14:54

by Dmitry Torokhov

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sun, Nov 29, 2009 at 04:01:53PM +1030, Mike Lampard wrote:
> On Sun, 29 Nov 2009 03:25:49 pm Dmitry Torokhov wrote:
> > On Sun, Nov 29, 2009 at 01:17:03PM +1030, Mike Lampard wrote:
> > > On Sat, 28 Nov 2009 02:27:59 am Jon Smirl wrote:
> > > > On Fri, Nov 27, 2009 at 2:45 AM, Christoph Bartelmus
> > > >
> > > > <[email protected]> wrote:
> > > > > Hi Mauro,
> > > > >
> > > > > on 26 Nov 09 at 14:25, Mauro Carvalho Chehab wrote:
> > > > >> Christoph Bartelmus wrote:
> > > > >
> > > > > [...]
> > > > >
> > > > >>> But I'm still a bit hesitant about the in-kernel decoding. Maybe
> > > > >>> it's just because I'm not familiar at all with input layer toolset.
> > > > >
> > > > > [...]
> > > > >
> > > > >> I hope it helps for you to better understand how this works.
> > > > >
> > > > > So the plan is to have two ways of using IR in the future which are
> > > > > incompatible to each other, the feature-set of one being a subset of
> > > > > the other?
> > > >
> > > > Take advantage of the fact that we don't have a twenty year old legacy
> > > > API already in the kernel. Design an IR API that uses current kernel
> > > > systems. Christoph, ignore the code I wrote and make a design proposal
> > > > that addresses these goals...
> > > >
> > > > 1) Unified input in Linux using evdev. IR is on equal footing with
> > > > mouse and keyboard.
> > >
> > > I think this is a case where automating setup can be over-emphasised (in the
> > > remote-as-keyboard case).
> > >
> > > Apologies in advance if I've misunderstood the idea of utilising the
> > > 'input subsystem' for IR. If the plan is to offer dedicated IR events
> > > via a yet-to-be-announced input event subsystem and to optionally
> > > disallow acting as a keyboard via a module option or similar then please
> > > ignore the following.
> > >
> > > Whilst having remotes come through the input subsystem might be 'the
> > > correct thing' from a purely technical standpoint, as an end-user I find
> > > the use-case for remotes completely different in one key aspect:
> > > Keyboards and mice are generally foreground-app input devices, whereas
> > > remotes are often controlling daemons sitting in the background piping
> > > media through dedicated devices. As an example I have a VDR instance
> > > running in the background on my desktop machine outputting to a TV in
> > > another room via a pci mpeg decoder - I certainly don't want the VDR
> > > remote control interacting with my X11 desktop in any way unless I go out
> > > of my way to set it up to do so, nor do I want it interacting with other
> > > applications (such as MPD piping music around the house) that are
> > > controlled via other remotes in other rooms unless specified.
> > >
> > > Setting this up with Lircd was easy, how would a kernel-based proposal
> > > handle this?
> >
> > Why would that be different, really? On my keyboard there is a key for the
> > e-mail application (and many others) - what HID calls Application Launch
> > keys, IIRC. There are also application control keys and system control keys,
> > such as KEY_COFFEE aka KEY_SCREENLOCK. Those are not meant to be consumed by
> > the foreground application but by daemons or session-wide applications.
> >
> In my real-world examples above, both VDR and MPD are started at system start
> and are not associated with any user-initiated sessions (X login etc) - they
> are not X11 clients. Their only input is via Lircd. Conversely todays
> Xserver (if I read my logfiles correctly) consumes all input event devices by
> default, turning them into keypresses for its client apps. This is exactly
> the wrong behaviour for my use-case. In order to ensure that my daemons
> receive their input I must first ensure that X doesn't receive those events -
> assuming this is possible, it still complicates matters further than they are
> today (I'd need a simple way of automatically differentiating between remote
> devices and keyboard devices).

But the setup you described only works for you because lirc is the only
consumer using the device _for now_. As soon as there are more users you
will have to solve the same issue as with evdev being consumed by
different applications. As soon as somebody says "gosh, I really don't
want my KDE application to depend on lirc; why doesn't X provide RC
data the same way it provides key presses?" your setup goes south.

As to how to solve such specific needs - there could be different
solutions. EVIOCGRAB can be used to gain exclusive access (but that
again breaks when another application appears wanting exclusive
access). Applications can limit themselves to opening only specific
event devices (and ignore others).

--
Dmitry

2009-11-29 12:07:26

by lirc

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Hi Krzysztof,

on 28 Nov 09 at 18:21, Krzysztof Halasa wrote:
[...]
>> This remote uses RC-5. But some of the developers must have thought that
>> it may be a smart idea to use 14 bits instead the standard 13 bits for
>> this remote. In LIRC you won't care, because this is configurable and
>> irrecord will figure it out automatically for you. In the proposed kernel
>> decoders I have seen until now, you will have to treat this case specially
>> in the decoder because you expect 13 bits for RC-5, not 14.

> Well, the 14-bit RC5 has been a de-facto standard for some time now. One of
> the start bits, inverted, now functions as the MSB of the command code.
> 13-bit receiver implementations (at least those aimed at "foreign"
> remotes) are obsolete.

Ah, sorry. I didn't mean the extension of the command code by inverting
one of the start bits.

The Streamzap really uses one more bit.
In the LIRC world the RC-5 start bit, which is fixed to "1", is not counted
as an individual bit. So translated to your notation, the Streamzap uses 15
bits, not 14 like the extended RC-5 or 13 like the original RC-5...

Christoph

2009-11-29 12:07:05

by lirc

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Hi Stefan,

on 28 Nov 09 at 21:29, Stefan Richter wrote:
> Jon Smirl wrote:
>> On Sat, Nov 28, 2009 at 2:45 PM, Stefan Richter
>> <[email protected]> wrote:
>>> Jon Smirl wrote:
>>>> Also, how do you create the devices for each remote? You would need to
>>>> create these devices before being able to do EVIOCSKEYCODE to them.
>>> The input subsystem creates devices on behalf of input drivers. (Kernel
>>> drivers, that is. Userspace drivers are per se not affected.)
>>
>> We have one IR receiver device and multiple remotes. How does the
>> input system know how many devices to create corresponding to how many
>> remotes you have?

> If several remotes are to be used on the same receiver, then they
> necessarily need to generate different scancodes, don't they? Otherwise
> the input driver wouldn't be able to route their events to the
> respective subdevice.

Consider this case:
Two remotes use different protocols. The scancodes after decoding happen
to overlap.
Just using the scancodes you cannot distinguish between the remotes.
You'll need to add the protocol information to be able to solve this, which
complicates the setup.

In LIRC this is solved by having protocol parameters and scancode mapping
in one place.

Christoph

2009-11-29 12:07:09

by lirc

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Hi Mauro,

on 28 Nov 09 at 09:21, Mauro Carvalho Chehab wrote:
> Hi Christoph,
>
> Christoph Bartelmus wrote:

>>> Maybe we decide to take the existing LIRC system as is and not
>>> integrate it into the input subsystem. But I think there is a window
>>> here to update the LIRC design to use the latest kernel features.
>>
>> If it ain't broke, don't fix it.
[...]
> So, even not being broken, the subsystem internal media API's changed
> a lot during the last years, and there are still several new changes
> on our TODO list.
>
> So, I'd say that if we can do it better, then let's do it.

I'm not against improving things.
If there are feature requests that cannot be handled with an interface, it
has to be extended or redesigned. But the LIRC interface currently supports
all the features that have come up over many years.
I just don't want to change a working interface just because it could also
be implemented in a different way, with no visible advantage other
than using more recent kernel features.

[...]
>> For devices that do the decoding in hardware, the only thing that I don't
>> like about the current kernel implementation is the fact that there are
>> mapping tables in the kernel source. I'm not aware of any tools that let
>> you change them without writing some keymaps manually.
[...]
> Still, I prefer first to migrate all drivers to use the full scancode and
> re-generate the keymaps before such step.

Good to see that this is in the works.

[...]
>> With the approach that you
>> suggested for the in-kernel decoder, this device simply will not work for
>> anything but RC-5. The devil is in all the details.

> I haven't seen such limitations on his proposal. We currently have in-kernel
> decoders for NEC, pulse-distance, RC4 protocols, and some variants. If
> non-RC5 decoders are missing, we need for sure to add them.

That was not my point. If you point a NEC remote at the Igor USB device,
you won't be able to use a NEC decoder because the device will swallow
half of the bits. LIRC won't care unless the resulting scancodes are
identical.
Granted, this is an esoteric argument, because this device is utter
garbage.

[...]
>> If we decide to do the
>> decoding in-kernel, how long do you think this solution will need to
>> become really stable and mainline? Currently I don't even see any
>> consensus on the interface yet. But maybe you will prove me wrong and it's
>> just that easy to get it all working.

> The timeframe to go to mainline will basically depend on taking a decision
> about the API and on people having time to work on it.
>
> Providing that we agree on what we'll do, I don't see why not
> adding it on staging for 2.6.33 and targeting to have
> everything done for 2.6.34 or 2.6.35.

The problem that I see here is that even with very talented
people working on this, pooling all resources, we won't be able
to cover all the corner cases with all the different receivers and remote
control protocols out there. It will still require the kind of fine-tuning
that was done in LIRC over the years.

>> I also understand that people want to avoid dependency on external
>> userspace tools. All I can tell you is that the lirc tools already do
>> support everything you need for IR control. And as it includes a lot of
>> drivers that are implemented in userspace already, LIRC will just continue
> to do its work even when there is an alternative in-kernel.

> The point is that for simple usage, like a user plugging in a new USB stick
> he just bought, he should be able to use the shipped IR without needing to
> configure anything or manually calling any daemon. This currently works
> with the existing drivers and it is a feature that needs to be kept.

Admittedly, LIRC is way behind when it comes to plug'n'play.

Christoph

2009-11-29 12:07:57

by lirc

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Hi Jon,

on 27 Nov 09 at 12:49, Jon Smirl wrote:
[...]
> Christoph, take what you know from all of the years of working on LIRC
> and design the perfect in-kernel system. This is the big chance to
> redesign IR support and get rid of any past mistakes. Incorporate any
> useful chunks of code and knowledge from the existing LIRC into the
> new design. Drop legacy APIs, get rid of daemons, etc. You can do this
> redesign in parallel with existing LIRC. Everyone can continue using
> the existing code while the new scheme is being built. Think of it as
> LIRC 2.0. You can lead this design effort; you're the most experienced
> developer in the IR area.

This is a very difficult thing for me to do. I must admit that I'm very
biased.
Because lircd is the only userspace application that uses the LIRC kernel
interface, we never had any problems changing the interface when needed.
I can't say there's much legacy stuff inside. I'm quite happy with the
interface.
The other thing is that I can't really move the decoder from userspace to
the kernel because there are way too many userspace drivers that require a
userspace decoder. LIRC also runs on FreeBSD, MacOS and even Cygwin.
So letting the userspace drivers take advantage of a potential Linux
in-kernel decoder is not an option for me either.
I'm wearing my 'LIRC maintainer' hat mostly during this discussion, and I
do understand that from the Linux kernel perspective things look different.

> Take advantage of this window to make a
> design that is fully integrated with Linux - put IR on equal footing
> with the keyboard and mouse as it should be.

That's a question that I have not answered for myself conclusively.
Is a remote control really on exactly the same level as a keyboard or
mouse?

Christoph

2009-11-29 12:39:15

by Alan

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

> BTW, circa 1995 my serial mouse "Just Worked" in Linux. Sometime around

Correct X11 just talked to the serial ports. In fact that is still the
way to configure it if you want any sanity in life.

> and serial connected IRs. It's also too convenient to access USB IR
> hardware from existing userspace drivers to bother porting into the
> kernel.

Userspace needs a way to identify IR hardware and to interface with it
using the right protocol. It's not clear the kernel needs to provide
anything more than minimal hardware interfaces in most cases - be that
serial, libusb, ...

2009-11-29 16:02:10

by Mauro Carvalho Chehab

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Jon Smirl wrote:
> On Sat, Nov 28, 2009 at 2:45 PM, Stefan Richter
> <[email protected]> wrote:
>> Jon Smirl wrote:
>>> Also, how do you create the devices for each remote? You would need to
>>> create these devices before being able to do EVIOCSKEYCODE to them.
>> The input subsystem creates devices on behalf of input drivers. (Kernel
>> drivers, that is. Userspace drivers are per se not affected.)
>
> We have one IR receiver device and multiple remotes. How does the
> input system know how many devices to create corresponding to how many
> remotes you have? There is no current mechanism to do that. You need
> an input device for each remote so that you can do the EVIOCSKEYCODE
> against it. Some type of "create subdevice" IOCTL will need to be
> built.
>
> I handled that in configds like this:
> /configfs - mount the basic configfs
> /configfs/remotes (created by loading IR support)
> mkdir /configfs/remotes/remote_A - this causes the input subdevice to
> be created, the name of it appears in the created directory.
> mkdir /configfs/remotes/remote_A/... - now build the mapping entries.
>
> This "create subdevice" IOCTL will need to take a name so that it can
> be identified. You will probably need another IOCTL to enumerate which
> subdevices belong to the root device, etc...
>
> Keyboards don't have subdevices. There is a 1:1 mapping between the
> keyboard and the device driver.

The above structure doesn't fit the already existing in-kernel drivers, since
you may have more than one IR driver in the kernel. I have some machines here
with 3 or 4 different input cards, each with its own IR hardware. How do you
propose to associate a created remote controller with the corresponding driver?

With EVIOCSKEYCODE, it is as simple as directing the ioctl to the corresponding
evdev interface.

Cheers,
Mauro.

2009-11-29 16:02:13

by Mauro Carvalho Chehab

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Jon Smirl wrote:
> I'm looking at a Sony multi-function remote right now. It has five
> devices and forty keys. Each of the five devices can transmit 0-9,
> power, volume, etc. It transmits 5*40 = 200 unique scancodes.
>
> I want the five devices to correspond to five apps. What's the plan
> for splitting those 200 scancodes into the five apps?
>
> I did it by creating five evdev devices each mapping 40 scancodes.
> That lets me reuse KP_1 for each of the five apps.
>
>

In this case, the evdev interface won't solve the issue alone. Some sort
of userspace tool will need to identify what application is expecting that
code and redirect it to that application.

IMO, the biggest LIRC benefit over a pure evdev interface, from the user's
perspective, is that it can redirect a keycode to a specific application.

Yet, I don't see why your configfs proposal will solve this issue, as userspace
will keep receiving duplicated KEY_* events.

2009-11-29 16:18:45

by Mauro Carvalho Chehab

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Jon Smirl wrote:
> On Sat, Nov 28, 2009 at 4:46 PM, Stefan Richter
> <[email protected]> wrote:
>> Jon Smirl wrote:
>>> On Sat, Nov 28, 2009 at 3:29 PM, Stefan Richter
>>> <[email protected]> wrote:
>>>> Jon Smirl wrote:
>>>>> We have one IR receiver device and multiple remotes. How does the
>>>>> input system know how many devices to create corresponding to how many
>>>>> remotes you have?
>>>> If several remotes are to be used on the same receiver, then they
>>>> necessarily need to generate different scancodes, don't they? Otherwise
>> ^^^^^^^^^
>> I referred to scancodes, not keycodes.
>>
>>>> the input driver wouldn't be able to route their events to the
>>>> respective subdevice. But if they do generate different scancodes,
>>>> there is no need to create subdevices just for EVIOCSKEYCODE's sake. (It
>>>> might still be desirable to have subdevices for other reasons perhaps.)
>>> Multiple remotes will have duplicate buttons (1, 2, 3, power, mute,
>>> etc); these should get mapped into the standard keycodes. You need
>>> two devices to keep things straight.
>>>
>>> Push button 1 on Remote A. That should generate a KP_1 on the evdev
>>> interface for that remote.
>>> Push button 1 on Remote B. That should generate a KP_1 on the evdev
>>> interface for that remote.
>>>
>>> Scenario for this - a multifunction remote that is controlling two
>>> different devices/apps. In one mode the 1 might be a channel number,
>>> in the other mode it might be a telephone number.
>>>
>>> The user may choose to make button 1 on both remote A/B map to KP_1 on
>>> a single interface.
>>>
>>> Scenario for this - I want to use two different remotes to control a
>>> single device.
>>>
>>> ---------------------
>>>
>>> I handled that in configds like this:
>>> /configfs - mount the basic configfs
>>> /configfs/remotes (created by loading IR support)
>>> mkdir /configfs/remotes/remote_A - this causes the input subdevice to
>>> be created, the name of it appears in the created directory.
>> [...]
>>
>> I'm lost. If there are two remotes sending to a single receiver, and
>> their sets of scancodes do not overlap, then all is fine. You can map
>> either set of scancodes to keycodes independently. But if their ranges
>
> You can do this, but now the events from both remotes are occurring on
> a single evdev device. If I assign Remote_A_1 to KP_1 what am I going
> to assign to Remote_B_1?
>
>> of scancodes do overlap, then even the creation of subdevices does not
>> help --- the driver has no way to tell which of the remotes sent the
>> signal in order to select the corresponding input subdevice, does it?
>
> The scancodes are always unique even among different remotes.
>
> I have three apps: mythtv, voip and home automation. How can I use a
> remote(s) to control these three apps? The concept of keyboard focus
> doesn't map very well to remote controls.
>
> My idea was to create an evdev device for each app:
> mythtv - Remote_A_1 mapped KP_1, etc
> voip - Remote_B_1 mapped KP_1, etc
> home automation - etc
>
> Note that there probably aren't really three remotes (A, B, C); it's a
> multi-function remote. Picking a different context on a multi-function
> remote doesn't generate an event.

In this case, the evdev interface won't solve the issue alone. Some sort
of userspace tool will need to identify what application is expecting that
code and redirect it to that application. So, you'll basically need
a table like:

scancode -> application PID | keycode

And, IMO, such mapping schema is better handled at userspace.

Yet, I don't see how your configfs proposal solves this issue, as userspace
will keep receiving duplicated events (on different evdev interfaces, but
still with the same keycode sent to userspace). You might be assuming
that each application will open a different set of evdev interfaces and
take exclusive locks, but this will require per-application setup, or
some proxy program that opens all the different evdev interfaces and does
the keycode redirects.

In a scenario where different scancodes produce the same KEY events, but
each duplicated scancode is sent to a different application, it would be
better to have an evdev interface that outputs the scancode directly and
let a userspace program "focus" the keystroke to the corresponding
application.

IMHO, the biggest LIRC benefit over a pure evdev interface, from the user's
perspective, is that it can redirect a keycode to a specific application.

Cheers,
Mauro.


2009-11-29 17:28:28

by Maxim Levitsky

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sun, 2009-11-29 at 12:40 +0000, Alan Cox wrote:
> > BTW, circa 1995 my serial mouse "Just Worked" in Linux. Sometime around
>
> Correct X11 just talked to the serial ports. In fact that is still the
> way to configure it if you want any sanity in life.
>
> > and serial connected IRs. It's also too convenient to access USB IR
> > hardware from existing userspace drivers to bother porting into the
> > kernel.
>
> Userspace needs a way to identify IR hardware and to interface with it
> using the right protocol. It's not clear the kernel needs to provide
> anything more than minimal hardware interfaces in most case - be that
> serial, libusb, ...

Exactly.
As it currently stands, the kernel provides lircd with the pulse/space
timing; lircd parses that and sends input events via uinput.
lircd behaves just like a userspace driver, and the biggest advantage
is that it can access its configuration directly, unlike a kernel solution
that will have to use some configfs hack.


It can use its own older interface, but that is now optional.
Also, it's not that hard to make lircd scan its database and adapt to the
remote that is used.
This should give the user absolutely zero configuration.

Instead, there is a strong push to put lircd, the userspace daemon's
functionality, into the kernel.
This has zero advantages besides the good developer feeling that "My system
has one less daemon..."

Best regards,
Maxim Levitsky

2009-11-29 17:50:08

by Ray Lee

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sun, Nov 29, 2009 at 9:28 AM, Maxim Levitsky <[email protected]> wrote:
> This has zero advantages besides good developer feeling that "My system
> has one less daemon..."

Surely it's clear that having an unnecessary daemon introduces
another point of failure? Reducing complexity is not just its own
reward in a 'Developer Feel Good' way.

If decoding can *only* be sanely handled in user-space, that's one
thing. If it can be handled in kernel, then that would be better.

2009-11-29 18:12:01

by Alan

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

> If decoding can *only* be sanely handled in user-space, that's one
> thing. If it can be handled in kernel, then that would be better.

Why ?

I can compute fast fourier transforms in the kernel but that doesn't make
it better than doing it in user space. I can write web servers in the
kernel and the same applies.

Alan

2009-11-29 18:19:08

by [email protected]

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sun, Nov 29, 2009 at 7:40 AM, Alan Cox <[email protected]> wrote:
>> BTW, circa 1995 my serial mouse "Just Worked" in Linux. Sometime around
>
> Correct X11 just talked to the serial ports. In fact that is still the
> way to configure it if you want any sanity in life.
>
>> and serial connected IRs. It's also too convenient to access USB IR
>> hardware from existing userspace drivers to bother porting into the
>> kernel.
>
> Userspace needs a way to identify IR hardware and to interface with it
> using the right protocol. It's not clear the kernel needs to provide
> anything more than minimal hardware interfaces in most case - be that
> serial, libusb, ...

That's a description of the current system and it is driver chaos.

Half of the drivers are in user space and there are two different
classes of kernel driver - LIRC and V4L.
A lot of the hardware doesn't identify itself.
There are two types of IR data in use - pulse timing and decoded protocol.

IR is an input device. We have a nice evdev input subsystem and it has
been demonstrated that IR can work with it.

Everybody keeps complaining that they want IR to "just work".
Consolidating all of this (under 50K of code) driver support in the
kernel is the way to make it "just work".

For example, some IR devices only report pulse timing data. There are
various protocols - RC5, RC6, etc. - for turning these pulse timings into
a decoded IR command. This is about 20K of code. Does it really make
sense to create a device, push this data out into user space, decode
it there, then inject the results back into the kernel (uinput) for
further processing by the input subsystem?

This decoding is done in user space because half of the IR
drivers are in user space. But the other half of them aren't in user
space, and that set can't work in user space. Most of the userspace
drivers can be pushed into the kernel, where they'll automatically load
when the device is detected.

Some of the drivers can't be moved, like the IR over ALSA. But is
attaching an IR diode to the mic input of your sound card really a
device, or is it a hack that should be dealt with in user space?
Another type is IR hardware that toggles the DTR output of a serial
port at 40kHz to make a signal. The same thing is done with parallel
ports. Those force the system into a bit-banging timing loop for
1/10th of a second.
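As a rough back-of-the-envelope illustration of why these bit-banged transmitters are costly (numbers only, not any driver's actual code): a 40kHz carrier means one line toggle every 12.5 microseconds, so a 1/10th second frame needs about 8000 toggles, each with timing too tight to sleep between:

```c
#include <assert.h>

/* Back-of-the-envelope numbers for a bit-banged 40 kHz IR carrier,
 * showing why such transmitters tie up the CPU: every carrier
 * half-period the DTR line must be flipped with ~microsecond
 * accuracy, which rules out sleeping between toggles. */
enum { CARRIER_HZ = 40000, FRAME_MS = 100 };

/* Nanoseconds between successive line toggles (half a carrier period). */
static long toggle_period_ns(void)
{
    return 1000000000L / (2L * CARRIER_HZ);
}

/* Total number of precisely timed toggles for one 100 ms frame. */
static long toggles_per_frame(void)
{
    return 2L * CARRIER_HZ * FRAME_MS / 1000;
}
```

At 12.5 us per toggle there is no room for scheduler latency, which is why the transmit loop busy-waits and monopolizes the CPU for the duration of the frame.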


--
Jon Smirl
[email protected]

2009-11-29 18:53:03

by Ray Lee

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sun, Nov 29, 2009 at 10:13 AM, Alan Cox <[email protected]> wrote:
>> If decoding can *only* be sanely handled in user-space, that's one
>> thing. If it can be handled in kernel, then that would be better.
>
> Why ?
>
> I can compute fast fourier transforms in the kernel but that doesn't make
> it better than doing it in user space.

Of course not.

> I can write web servers in the kernel and the same applies.

I'm not so young as to not recall Tux. That was again a bad idea, for
the same reason. It introduced unnecessary complexity. Enabling
userspace to be able to service web requests faster improved all
user-space code. Yay.

The question is which solution is more complex, the current one that
requires userspace to be an active participant in the decoding, so
that we can handle bare diodes hooked up to a sound-card, or having
the kernel do decode for the sane devices and providing some fall-back
for broken hardware. The former has the advantage of being flexible, at
the cost of increased fragility, security attack surface, and
latency in responding to events; the latter has the problem of
requiring two different decoding paths to be maintained, at least if
you want to support odd-ball hardware.

Jon is asking for an architecture discussion, y'know, with use cases.
Maxim seems to be saying it's obvious that what we have today works
fine. Except it doesn't appear that we have a consensus that
everything is fine, nor an obvious winner for how to reduce the
complexity here and keep the kernel in a happy, maintainable state for
the long haul.

Who knows, perhaps I misunderstood the dozens of messages up-thread --
wouldn't be the first time, in which case I'll shut up and let you get
back to work.

2009-11-29 18:59:27

by Alan

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

> Half of the drivers are in user space and there are two different
> classes of kernel driver - LIRC and V4L.


> A lot of the hardware doesn't identify itself.
> There are two types of IR data in use - pulse timing and decoded protocol.
>
> IR is an input device. We have a nice evdev input subsystem and it has
> been demonstrated that IR can work with it.

Evdev allows userspace to feed events into the kernel.

> Everybody keeps complaining that they want IR to "just work".
> Consolidating all of this (under 50K of code) driver support in the
> kernel is the way to make it "just work".

We have things called "Libraries" that unlike kernel code run out of a
secure context, can be paged and shared by applications dynamically.

Also the data rate of IR controllers puts it into the realm where the
kernel doesn't need to be involved, in fact you could turn them into
evdev events via user space quite acceptably, or even into meaningful
actions and onto dbus.

> For example. Some IR devices only record pulse timing data. There are
> various protocols - RC5, RC6, etc for turning these pulse timing into
> a decode IR command. This is about 20K of code. Does it really make
> sense to create a device, push this data out into user space, decode
> it there, then inject the results back into the kernel (uinput) for
> further processing by the input subsystem?

Does it really make sense to put big chunks of protocol decoding crap for
an interface which runs at about 1 character per second on a good day
into the kernel? Does it really make sense to move 50K of code from user
context to kernel context, where it must meet strict security
requirements, be extensively audited and cannot be paged? For embedded
users it will also have to be highly modular so no unused bits are loaded.

> This decoding is getting done in user space because half of the IR
> drivers are in user space. But the other half of them aren't in user
> space and that set can't work in user space. Most of the user space
> drivers can be pushed into the kernel where they'll automatically load
> when the device is detected.

So you propose to write another ton of new drivers in kernel space for
these devices currently only supported by user space, and to port, test and
submit them all. If you can't persuade the maintainers of all those
drivers to do so, you don't appear to have a credible proposal.

> attaching an IR diode to the mic input of your sound card really a
> device or is it a hack that should be dealt with in user space?

It's a device. There is no divide between "hack" and "device", as anyone
who ever worked on the Mac68K can assure you ;)

> Another type is IR hardware that toggles the DTR output of a serial
> port at 40Khz to make a signal. Same thing is done with parallel
> ports. Those force the system into a bit-banging timing loop for
> 1/10th second.

We have people who run wireless like that, people who ran SCSI and IDE
like that. In the embedded world it's the norm. If you sell 100,000
devices then saving that part, wiring and board space is often the right
choice. That kind of stuff needs doing in user space not kernel.

You stated the real problem at the start - devices don't identify
themselves well. That doesn't seem to be a kernel problem other than for
kernel drivers perhaps exposing more information on themselves via sysfs.

2009-11-29 19:03:01

by Alan

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

> Jon is asking for an architecture discussion, y'know, with use cases.
> Maxim seems to be saying it's obvious that what we have today works
> fine. Except it doesn't appear that we have a consensus that
> everything is fine, nor an obvious winner for how to reduce the
> complexity here and keep the kernel in a happy, maintainable state for
> the long haul.

The important point is that this is not an A or B discussion. There are
lots of ways to tackle it that are neither. If you look at things like
complex video format handling it is done in user space but with an
infrastructure to handle it.

I don't believe putting it in the kernel is the alternative to the
current setup. Cleaning up the way what we have today is presented to
applications is perfectly possible without a whole new pile of kernel
crap, because evdev was designed sensibly in the first place to allow
userspace-added events.

2009-11-29 19:16:10

by [email protected]

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sun, Nov 29, 2009 at 2:04 PM, Alan Cox <[email protected]> wrote:
>> Jon is asking for an architecture discussion, y'know, with use cases.
>> Maxim seems to be saying it's obvious that what we have today works
>> fine. Except it doesn't appear that we have a consensus that
>> everything is fine, nor an obvious winner for how to reduce the
>> complexity here and keep the kernel in a happy, maintainable state for
>> the long haul.
>
> The important point is that this is not an A or B discussion. There are
> lots of ways to tackle it that are neither. If you look at things like
> complex video format handling it is done in user space but with an
> infrastructure to handle it.
>
> I don't believe putting it in the kernel is the alternative to the
> current setup. Cleaning up the way what we have today is presented to
> applications is perfectly possible without a whole new pile of kernel
> crap, because evdev was designed sensibly in the first place to allow
> userspace added events.

So we're just back to the status quo of last year which is to do
nothing except some minor clean up.

We'll be back here again next year repeating this until IR gets
redesigned into something fairly invisible like keyboard and mouse
drivers.

--
Jon Smirl
[email protected]

2009-11-29 19:28:41

by Alan

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

> So we're just back to the status quo of last year which is to do
> nothing except some minor clean up.

Which in itself is vastly preferable to some grandiose scheme that turns
out to be wrong.

And no it's not a back to the status quo, it's a request to discuss the
actual problems and options not to simply try and force your chosen
solution down the throats of everyone else.

2009-11-29 19:50:08

by lirc

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Hi,

on 29 Nov 09 at 14:16, Jon Smirl wrote:
> On Sun, Nov 29, 2009 at 2:04 PM, Alan Cox <[email protected]> wrote:
>>> Jon is asking for an architecture discussion, y'know, with use cases.
[...]
> So we're just back to the status quo of last year which is to do
> nothing except some minor clean up.
>
> We'll be back here again next year repeating this until IR gets
> redesigned into something fairly invisible like keyboard and mouse
> drivers.

Last year everyone complained that LIRC does not support evdev - so I
added support for evdev.

This year everyone complains that LIRC is not plug'n'play - we'll fix that
'til next year.

There's progress. ;-)

Christoph

2009-11-29 20:28:12

by Krzysztof Halasa

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

1. Do we agree that a lirc (-style) kernel-user interface is needed at
least?

2. Is there any problem with lirc kernel-user interface?

If the answer for #1 is "yes" and for #2 is "no" then perhaps we merge
the Jarod's lirc patches (at least the core) so at least the
non-controversial part is done?

Doing so doesn't block improving input layer IR interface, does it?
--
Krzysztof Halasa

2009-11-29 20:44:36

by [email protected]

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sun, Nov 29, 2009 at 3:27 PM, Krzysztof Halasa <[email protected]> wrote:
> 1. Do we agree that a lirc (-style) kernel-user interface is needed at
> least?
>
> 2. Is there any problem with lirc kernel-user interface?

Can you consider sending the raw IR data as a new evdev message type
instead of creating a new device protocol?
evdev protects the messages in a transaction to stop incomplete
messages from being read.

You might also want to use evdev capabilities to describe what the
hardware can do. These were the capabilities I had made up:

#define IR_CAP_RECEIVE_BASEBAND 0
#define IR_CAP_RECEIVE_36K 1
#define IR_CAP_RECEIVE_38K 2
#define IR_CAP_RECEIVE_40K 3
#define IR_CAP_RECEIVE_56K 4
#define IR_CAP_SEND_BASEBAND 5
#define IR_CAP_SEND_36K 6
#define IR_CAP_SEND_38K 7
#define IR_CAP_SEND_40K 8
#define IR_CAP_SEND_56K 9
#define IR_CAP_XMITTER_1 10
#define IR_CAP_XMITTER_2 11
#define IR_CAP_XMITTER_3 12
#define IR_CAP_XMITTER_4 13
#define IR_CAP_RECEIVE_RAW 14
#define IR_CAP_SEND_RAW 15
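These IR_CAP_* values are only a proposal, not an existing kernel API, but the intended use is an ordinary bitmask: a driver advertises what its hardware can do, and an application tests for the features it needs. A sketch under that assumption, with a subset of the bits repeated for self-containment (the mceusb_caps name is made up for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical use of the proposed IR_CAP_* bits (not an existing
 * kernel interface): the driver reports one capability bitmask and
 * userspace checks it before attempting receive or transmit. */
#define IR_CAP_RECEIVE_BASEBAND 0
#define IR_CAP_RECEIVE_38K      2
#define IR_CAP_SEND_38K         7
#define IR_CAP_RECEIVE_RAW      14

#define IR_CAP(bit) (1u << (bit))

/* What a typical 38 kHz receiver/transmitter combo might advertise. */
static uint32_t mceusb_caps(void)
{
    return IR_CAP(IR_CAP_RECEIVE_38K) |
           IR_CAP(IR_CAP_SEND_38K) |
           IR_CAP(IR_CAP_RECEIVE_RAW);
}

/* Application-side feature test before configuring transmit. */
static int can_transmit_38k(uint32_t caps)
{
    return (caps & IR_CAP(IR_CAP_SEND_38K)) != 0;
}
```

In evdev terms these would be exposed the same way as EV_* capability bits, so existing ioctl-based capability queries would work unchanged.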


> If the answer for #1 is "yes" and for #2 is "no" then perhaps we merge
> the Jarod's lirc patches (at least the core) so at least the
> non-controversial part is done?
>
> Doing so doesn't block improving input layer IR interface, does it?
> --
> Krzysztof Halasa
>



--
Jon Smirl
[email protected]

2009-11-29 21:29:47

by Dmitry Torokhov

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Nov 29, 2009, at 12:44 PM, Jon Smirl <[email protected]> wrote:

> On Sun, Nov 29, 2009 at 3:27 PM, Krzysztof Halasa <[email protected]>
> wrote:
>> 1. Do we agree that a lirc (-style) kernel-user interface is needed
>> at
>> least?
>>
>> 2. Is there any problem with lirc kernel-user interface?
>
> Can you consider sending the raw IR data as a new evdev message type
> instead of creating a new device protocol?

No, I think it would be wrong. Such events are ill-suited for
consumption by regular applications and would introduce the "looping"
interface I described in my other email.

> evdev protects the messages in a transaction to stop incomplete
> messages from being read.

If such property is desired we can add it to the new lirc-like
interface, can't we?

--
Dmitry

2009-11-29 21:31:53

by Dmitry Torokhov

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Nov 29, 2009, at 12:27 PM, Krzysztof Halasa <[email protected]> wrote:

> 1. Do we agree that a lirc (-style) kernel-user interface is needed at
> least?
>
> 2. Is there any problem with lirc kernel-user interface?
>
> If the answer for #1 is "yes" and for #2 is "no" then perhaps we merge
> the Jarod's lirc patches (at least the core) so at least the
> non-controversial part is done?


Isn't the meat of Jarod's patch the lirc interface?

--
Dmitry

2009-11-29 21:47:16

by [email protected]

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sun, Nov 29, 2009 at 4:29 PM, Dmitry Torokhov
<[email protected]> wrote:
> On Nov 29, 2009, at 12:44 PM, Jon Smirl <[email protected]> wrote:
>
>> On Sun, Nov 29, 2009 at 3:27 PM, Krzysztof Halasa <[email protected]> wrote:
>>>
>>> 1. Do we agree that a lirc (-style) kernel-user interface is needed at
>>> least?
>>>
>>> 2. Is there any problem with lirc kernel-user interface?
>>
>> Can you consider sending the raw IR data as a new evdev message type
>> instead of creating a new device protocol?
>
> No, I think it would be wrong. Such events are ill-suited for consumption by
> regular applications and would introduce the "looping" interface I described
> in my other email.

Regular applications are going to ignore these messages. The only
consumer for them is the LIRC daemon. Which is just going to process
them and re-inject the events back into evdev again in a different
form.

IR is an input device; what makes it so special that it needs to
bypass this subsystem and implement its own private communications
scheme?

>> evdev protects the messages in a transaction to stop incomplete
>> messages from being read.
>
> If such property is desired we can add it to the new lirc-like interface,
> can't we?

Why do you want to redesign evdev instead of using it?


> --
> Dmitry



--
Jon Smirl
[email protected]

2009-11-29 21:59:30

by Artur Skawina

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Mike Lampard wrote:
> an example I have a VDR instance running in the background on my desktop
> machine outputting to a TV in another room via a pci mpeg decoder - I
> certainly don't want the VDR remote control interacting with my X11 desktop in
> any way unless I go out of my way to set it up to do so, nor do I want it
> interacting with other applications (such as MPD piping music around the
> house) that are controlled via other remotes in other rooms unless specified.
>
> Setting this up with Lircd was easy, how would a kernel-based proposal handle
> this?

eg

EV="/dev/input/"$( cd "/sys/class/input" &&
( grep -l 'X10' event*/device/name || grep -l 'X10' event*/device/manufacturer ) |
sed -e 's,/.*,,' )

./vdr [...] -P "remote -i $EV"

This is how it has worked for years, so there's no reason it wouldn't work w/ any
future scheme. The remote buttons normally arrive as normal kbd keys; once an app
grabs the input device corresponding to a remote, it receives the events exclusively.

artur

2009-11-29 22:48:33

by Dmitry Torokhov

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Nov 29, 2009, at 1:47 PM, Jon Smirl <[email protected]> wrote:

> On Sun, Nov 29, 2009 at 4:29 PM, Dmitry Torokhov
> <[email protected]> wrote:
>> On Nov 29, 2009, at 12:44 PM, Jon Smirl <[email protected]> wrote:
>>
>>> On Sun, Nov 29, 2009 at 3:27 PM, Krzysztof Halasa <[email protected]>
>>> wrote:
>>>>
>>>> 1. Do we agree that a lirc (-style) kernel-user interface is
>>>> needed at
>>>> least?
>>>>
>>>> 2. Is there any problem with lirc kernel-user interface?
>>>
>>> Can you consider sending the raw IR data as a new evdev message type
>>> instead of creating a new device protocol?
>>
>> No, I think it would be wrong. Such events are ill-suited for
>> consumption by
>> regular applications and would introduce the "looping" interface I
>> described
>> in my other email.
>
> Regular applications are going to ignore these messages. The only
> consumer for them is the LIRC daemon. Which is just going to process
> them and re-inject the events back into evdev again in a different
> form.
>
> IR is an input device; what makes it so special that it needs to
> bypass this subsystem and implement its own private communications
> scheme?

So are HID devices (both USB and BT), PS/2 and so on. You are not
arguing for sending unprocessed data from these devices through evdev.

>
>>> evdev protects the messages in a transaction to stop incomplete
>>> messages from being read.
>>
>> If such property is desired we can add it to the new lirc-like
>> interface,
>> can't we?
>
> Why do you want to redesign evdev instead of using it?
>

I just said why in my previous email: looping is a mark of a badly
designed interface.

--
Dmitry

2009-11-29 23:37:21

by Andy Walls

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sun, 2009-11-29 at 09:49 -0800, Ray Lee wrote:
> On Sun, Nov 29, 2009 at 9:28 AM, Maxim Levitsky <[email protected]> wrote:
> > This has zero advantages besides good developer feeling that "My system
> > has one less daemon..."
>
> Surely it's clear that having an unnecessary daemon is introducing
> another point of failure?

A failure in a userspace IR daemon is worst case loss of IR
functionality.

A failure in kernel space can oops or panic the machine.

> Reducing complexity is not just its own
> reward in a 'Developer Feel Good' way.

No complexity is being reduced here. It's being shoved from one side of
a fence to another. A bad part about the proposed move is that in user
space, an application's address space is fairly isolated from other applications and
separate from kernel space. Partitioning reduces complexity and the
impact of failures. Moving things into kernel space just adds more to
the pile of code; it should have a good reason for being there.


> If decoding can *only* be sanely handled in user-space, that's one
> thing. If it can be handled in kernel, then that would be better.

Why does the address space in which decoding is performed make the
decoding process better or worse? The in-kernel infrastructure and
restrictions add constraints to a decoding implementation. Userspace is
much more flexible.

Regards,
Andy

2009-11-30 00:06:20

by Andy Walls

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sun, 2009-11-29 at 20:49 +0100, Christoph Bartelmus wrote:
> Hi,
>
> on 29 Nov 09 at 14:16, Jon Smirl wrote:
> > On Sun, Nov 29, 2009 at 2:04 PM, Alan Cox <[email protected]> wrote:
> >>> Jon is asking for an architecture discussion, y'know, with use cases.
> [...]
> > So we're just back to the status quo of last year which is to do
> > nothing except some minor clean up.
> >
> > We'll be back here again next year repeating this until IR gets
> > redesigned into something fairly invisible like keyboard and mouse
> > drivers.
>
> Last year everyone complained that LIRC does not support evdev - so I
> added support for evdev.
>
> This year everyone complains that LIRC is not plug'n'play - we'll fix that
> 'til next year.

V4L-DVB is also making progress on the enumeration front. At least for
V4L devices a new media controller device node will be able to enumerate
all devices associated with a video card (or embedded system or SoC).
From one device node, an app should be able to discover all video, alsa,
dvb, and framebuffer device nodes on a video card, find out about
entities on the card, and set up the reconfigurable connections between
entities on the card. One should be able to discover subdevices on
cards like IR controllers.

The RFC before the mini-summit at the 2009 LPC is here:
http://lwn.net/Articles/352623/


The V4L media controller entity discovery mechanism won't completely
solve the general discovery problem for IR. It will be one way to
discover IR devices associated with V4L supported peripherals. I assume
discovering USB IR-only devices by USB Id is not a problem. That leaves
serial port, parallel port and sound jack connected devices as the
difficult ones to "discover".

Regards,
Andy

> Christoph
> --

2009-11-30 00:50:00

by Andy Walls

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sun, 2009-11-29 at 21:27 +0100, Krzysztof Halasa wrote:
> 1. Do we agree that a lirc (-style) kernel-user interface is needed at
> least?

Yes. Honestly, I'm just waiting on lirc_dev for the IR devices I work
with. With that I can get those new devices supported for both IR Rx
and Tx right now. I was holding off building a lirc_v4l module to see
if lirc_dev makes it in kernel.

However, because of the desire by V4L supporters to have IR just work
for remotes bundled with video capture cards, I have already added:

1. cx23885-input.c support with RC-5, and soon NEC support, in kernel.

2. new configurations for ir-kbd-i2c.c, due to IR hardware on some cx18
and ivtv supported cards.

So at least IR Rx works for the cheap bundled remote. I still think the
input subsystem will never be able to provide a feature-complete
interface though. I don't mind putting enough work in for the "Just
works" part with the bundled remote with -input, but not more.


> 2. Is there any problem with lirc kernel-user interface?

Here's my list

1. Unused transmit mode defines/enums. It's not a big deal really, but
I had a hard time figuring out the differences between CODE, MODE2,
PULSE since some are not well documented and some of those are not even
used for Tx.

2. The LIRC ioctls() wanting to change one parameter at a time will
cause me some thought/work. It would be better, I think, if LIRC would
do a get/set of all the parameters in one go.

That's what I do internally here:

http://linuxtv.org/hg/v4l-dvb/file/e0cd9a337600/linux/include/media/v4l2-subdev.h#l283
http://linuxtv.org/hg/v4l-dvb/file/e0cd9a337600/linux/drivers/media/video/cx23885/cx23888-ir.c#l746
http://linuxtv.org/hg/v4l-dvb/file/e0cd9a337600/linux/drivers/media/video/cx23885/cx23885-input.c#l269

The idea is that you set up the IR hardware infrequently and spend most
of the time letting it run. I'd have to think about whether setting up the
hardware one parameter at a time may leave it operating in a strange
mode until the sequence of configuration ioctl()s is complete.

I was planning on a lirc_v4l module aggregating all the lirc_dev ioctl
requests somehow to possibly avoid that potential problem.

Maybe an additional ioctl() from LIRC userspace and lirc_dev saying it
was done with the current IR device configuration sequence would be
another way to mitigate the problem for me.
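A sketch of the get/set-everything-in-one-go idea described above, loosely in the spirit of the v4l2-subdev parameter struct linked to; the field and function names here are illustrative, not the actual LIRC or V4L2 interface. The whole parameter block is read, modified, and applied in one operation, so the hardware never runs half-configured between individual ioctl()s:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical one-shot IR receiver configuration block.  An ioctl
 * handler would copy the whole struct from userspace and apply it
 * atomically, instead of changing one parameter per ioctl(). */
struct ir_rx_params {
    uint32_t carrier_hz;    /* expected carrier frequency */
    uint32_t max_pulse_us;  /* longest pulse before the receiver times out */
    uint8_t  invert_level;  /* demodulated signal polarity */
    uint8_t  enabled;
};

static struct ir_rx_params hw_shadow;  /* stands in for device registers */

/* Read back the complete current configuration. */
static void ir_rx_get_params(struct ir_rx_params *p)
{
    *p = hw_shadow;
}

/* Apply all fields together: the hardware never sees a mix of old
 * and new settings partway through a configuration sequence. */
static void ir_rx_set_params(const struct ir_rx_params *p)
{
    hw_shadow = *p;
}
```

Userspace then does get, modify the fields it cares about, set; the alternative "configuration done" ioctl mentioned above would achieve the same atomicity by deferring the hardware write until the sequence ends.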




> If the answer for #1 is "yes" and for #2 is "no" then perhaps we merge
> the Jarod's lirc patches (at least the core) so at least the
> non-controversial part is done?
>
> Doing so doesn't block improving input layer IR interface, does it?

It never has AFAICT.

A current related problem is that i2c based devices can be bound to
only one of ir-kbd-i2c *or* lirc_i2c *or* lirc_zilog at any one time.
Currently it is somewhat up to the bridge driver which binding is
preferred. Discussion about this for the pvrusb2 module had the biggest
email churn IIRC.

This will be a general problem to fix for all the V4L-DVB drivers where
both LIRC and input can handle the device. A reasonable default
assumption *may* be that the user who bothers to configure LIRC wants
LIRC to override handling by input. That's a detail though...

Regards,
Andy

2009-11-30 02:15:27

by Ray Lee

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sun, Nov 29, 2009 at 3:35 PM, Andy Walls <[email protected]> wrote:
>> If decoding can *only* be sanely handled in user-space, that's one
>> thing. If it can be handled in kernel, then that would be better.
>
> Why does the address space in which decoding is performed make the
> decoding process better or worse?  The in-kernel infrastructure and
> restrictions add constraints to a decoding implementation.  Userspace is
> much more flexible.

In which case I look forward to seeing your patches to move
drivers/hid/ to userspace. Please cc: me so I can enjoy the ensuing
festival.

2009-11-30 04:50:47

by Jarod Wilson

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Nov 29, 2009, at 4:31 PM, Dmitry Torokhov wrote:

> On Nov 29, 2009, at 12:27 PM, Krzysztof Halasa <[email protected]> wrote:
>
>> 1. Do we agree that a lirc (-style) kernel-user interface is needed at
>> least?
>>
>> 2. Is there any problem with lirc kernel-user interface?
>>
>> If the answer for #1 is "yes" and for #2 is "no" then perhaps we merge
>> the Jarod's lirc patches (at least the core) so at least the
>> non-controversial part is done?
>
>
> Isn't the meat of Jarod's patch the lirc interface?

Patch 1 was the lirc interface, 2 and 3 are individual device drivers that use it.

/me has some catching up to do on this thread after being partially detached from the computer over the holiday weekend here in the US...

--
Jarod Wilson
[email protected]


2009-11-30 09:59:53

by Mauro Carvalho Chehab

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Hi Alan,

Alan Cox wrote:

> Does it really make sense to put big chunks of protocol decoding crap for
> an interface which runs at about 1 character per second on a good day
> into the kernel? Does it really make sense to move 50K of code from user
> context to kernel context, where it must meet strict security
> requirements, be extensively audited and cannot be paged? For embedded
> users it will also have to be highly modular so no unused bits are loaded.

The same logic would apply to mice, keyboards and serial consoles.
It is possible to move everything to userspace.

However, there are some reasons for them to be in kernel space:
- you may need them during boot time;
- they are mandatory to allow the users interaction;
- you need low latency.

The same arguments apply to IR, especially on embedded devices: some devices,
like TVs, Set Top TV boxes and IPTV Set Top Boxes have IR as their primary
input device.

Also, as changing a digital TV or IPTV channel requires discarding the current
MPEG stream and acquiring a new one, and it takes a long time until you'll
be able to output something to the user, one of the needs is to handle IR keystrokes
(especially channel up/down) as fast as possible, to try to minimize the discomfort
of changing channels.

Using an approach where you send a raw event to userspace, process it there and return
it back to the kernel will increase the latency, and it can only be done after
the SysV runlevel stuff is loaded.

On the other hand, we already have IR decoding in-kernel. Most of the code is
at:
drivers/media/common/ir-functions.c

But there are also some other decoders at bttv, saa7134 and cx88 drivers.

In the case of the drivers/media stuff, the common case is that the drivers have
support for both space/pulse decoding and in-hardware decoding. In both cases,
the scancode is converted to a keystroke via evdev. IMHO, we shouldn't really
consider dropping those decoders from the kernel.

Cheers,
Mauro.

2009-11-30 09:58:00

by Artur Skawina

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Ray Lee wrote:
> On Sun, Nov 29, 2009 at 3:35 PM, Andy Walls <[email protected]> wrote:
>>> If decoding can *only* be sanely handled in user-space, that's one
>>> thing. If it can be handled in kernel, then that would be better.
>> Why does the address space in which decoding is performed make the
>> decoding process better or worse? The in-kernel infrastructure and
>> restrictions add constraints to a decoding implementation. Userspace is
>> much more flexible.
>
> In which case I look forward to seeing your patches to move
> drivers/hid/ to userspace. Please cc: me so I can enjoy the ensuing
> festival.

Umm, this thread is really about several independent issues

- Event delivery
There's an existing input system that works, there's no need to
reinvent the wheel; all remote events (usually key presses, but
there are also wheels/touchpads/joysticks/etc, which don't necessarily
map to buttons) should eventually arrive through it. Whether there's a
userspace component somewhere on the receiver->/dev/input path doesn't
change anything.

- Acquisition
If it isn't interrupt-driven it should be in userspace. ADC falls
into this category, but also various bitbanging approaches and probably
also many serial port (ab)uses where the decoding isn't trivial.
(Receivers that require accurate timestamps could be one exception)

- Decoding
There is "decoding" and there is "translation". With hw receivers
(such as usb ir/rf dongles and HID devices mentioned above) you often
only need to translate or map events sent by the hw to the correct input
event and that's it. This can easily be done in-kernel (modulo complex
remote/key mappings, maybe).
Decoding analog input (even if it's only timing) is a different story.
Would you want to worry about somebody with an IR transmitter (think
phone/pda/laptop, but it could also be a modded remote) crashing your
machine, just because the in-kernel decoder didn't handle some weird
data? Or somebody leaving a small device around, which over time led to
OOM, because the kernel decoder kept leaking memory?
The bandwidth requirements for a remote are minimal, say <=20 events/s,
and the max latency in the 100ms range would still be ok, so two, or six,
context switches per key pressed shouldn't be a problem.

- Configuration
This isn't actually as simple as it looks at first. If you want to support
multiple remotes (and you do), complex mappings (eg one->many or sequence->one),
multiple modes etc then going through a userspace mapper is probably better.
I looked briefly at Jon's configfs i/f and it seems it could handle the
multiple-physical-remotes-with-one-receiver-and-multiple-independent-devices
case, but being able to grab the receiver, process the data in userspace and
reinject it back would still be needed for some setups.

artur

2009-11-30 10:42:13

by Mauro Carvalho Chehab

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Christoph Bartelmus wrote:
> Hi Jon,
>
> on 27 Nov 09 at 12:49, Jon Smirl wrote:
> [...]
>> Christoph, take what you know from all of the years of working on LIRC
>> and design the perfect in-kernel system. This is the big chance to
>> redesign IR support and get rid of any past mistakes. Incorporate any
>> useful chunks of code and knowledge from the existing LIRC into the
>> new design. Drop legacy APIs, get rid of daemons, etc. You can do this
>> redesign in parallel with existing LIRC. Everyone can continue using
>> the existing code while the new scheme is being built. Think of it as
>> LIRC 2.0. You can lead this design effort, you're the most experience
>> developer in the IR area.
>
> This is a very difficult thing for me to do. I must admit that I'm very
> biased.
> Because lircd is the only userspace application that uses the LIRC kernel
> interface, we never had any problems changing the interface when needed.
> I can't say there's much legacy stuff inside. I'm quite happy with the
> interface.

It makes sense currently but, once it is added to the kernel, you won't be
able to change it again without a huge effort. So, if the interface has any
problems, we need to correct them before adding it to the kernel. You should
remember that a kernel driver shouldn't be bound to a specific userspace
application. So, the same kernel interface should work with every lircd
starting from the version where the interface was added. In other words,
it should be possible to use, say, a 5-year-old lirc with a brand
new kernel.

Also, some non lirc applications may arise, using the same kernel interface.
So, the API stability needs to be kept.

> The other thing is that I can't really move the decoder from userspace to
> kernel because there are way too many userspace drivers that do require a
> userspace decoder. LIRC also is running on FreeBSD, MacOS and even Cygwin.
> So letting the userspace drivers take advantage of a potential Linux in-
> kernel decoder is not an option for me either.

You can take advantage of an in-kernel decoder. Instead of receiving raw
pulse/space data, you'll be receiving keystrokes (or scancodes).

Probably, it doesn't make sense to port every single IR protocol decoder
to the kernel. We need support there for the protocols used by the IRs shipped
with the devices (I think that currently we have RC5, RC4, NEC and pulse-distance),
and the most used protocols on universal IRs (RC5 may be enough?).
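To make the idea concrete, a minimal pulse-distance decoder in the NEC style could look like the sketch below. It is illustrative only: the 9000/4500/560/1690 microsecond timings and the 25% tolerance follow the common NEC protocol description, not the exact values of any in-kernel driver, and a real driver would also need repeat-code and timeout handling.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

/* Match a duration (in microseconds) against a target with +/-25% tolerance. */
static bool close_to(int dur_us, int target_us)
{
    return abs(dur_us - target_us) < target_us / 4;
}

/*
 * Decode one NEC frame from an alternating mark/space duration list that
 * starts at the 9ms leader mark.  Bits are sent LSB first; a short space
 * after each ~560us mark is a 0, a long (~1690us) space is a 1.  On
 * success, the raw 32-bit pattern (address, ~address, command, ~command)
 * is stored in *scancode.
 */
bool nec_decode(const int *dur, int n, uint32_t *scancode)
{
    if (n < 2 + 64 || !close_to(dur[0], 9000) || !close_to(dur[1], 4500))
        return false;

    uint32_t bits = 0;
    for (int i = 0; i < 32; i++) {
        int mark  = dur[2 + 2 * i];
        int space = dur[3 + 2 * i];

        if (!close_to(mark, 560))
            return false;
        if (close_to(space, 1690))
            bits |= 1u << i;        /* long space: logical 1 */
        else if (!close_to(space, 560))
            return false;           /* neither bit value: give up */
    }
    *scancode = bits;
    return true;
}
```

A driver would feed this from the IRQ-measured durations and then run the resulting scancode through its keymap lookup.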

>> Take advantage of this window to make a
>> design that is fully integrated with Linux - put IR on equal footing
>> with the keyboard and mouse as it should be.
>
> That's a question that I have not answered for myself concludingly.
> Is a remote control really on exactly the same level as a keyboard or
> mouse?

On some devices like STBs and TV sets (most modern LCD/plasma TVs run Linux),
they are at the same level. I'd say that the same applies to PCs that the
user has dedicated to work as an MCE.

Cheers,
Mauro.

2009-11-30 11:57:13

by Mauro Carvalho Chehab

Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Andy Walls wrote:
> On Sun, 2009-11-29 at 09:49 -0800, Ray Lee wrote:
>> On Sun, Nov 29, 2009 at 9:28 AM, Maxim Levitsky <[email protected]> wrote:
>>> This has zero advantages besides good developer feeling that "My system
>>> has one less daemon..."
>> Surely it's clear that having an unnecessary daemon is introducing
>> another point of failure?
>
> A failure in a userspace IR daemon is worst case loss of IR
> functionality.
>
> A failure in kernel space can oops or panic the machine.

If IR is the only interface between the user and the system (like in a TV
or a Set Top Box), both will give you the same practical result: the system
will be broken if you get a crash in the IR driver.

> Userspace is much more flexible.

Why? The flexibility is about the same in both kernelspace and userspace,
except at boot time.

A kernelspace input device driver can start working from boot time.
On the other hand, a userspace device driver will be available only
after mounting the filesystems and starting the daemons
(e.g. after running inittab).

So, you cannot catch a key press that affects the boot
(for example, to ask the kernel to switch to a different runlevel or to
enter some administrative mode).

After boot, and provided that the kernel has the proper
APIs, a pure userspace driver can behave just like a kernelspace
driver and vice versa. The only differences may be in terms of device
transfer rate (not relevant for input devices) and latency.

Cheers,
Mauro.

2009-11-30 12:13:59

by Mauro Carvalho Chehab

Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Jon Smirl wrote:
> On Fri, Nov 27, 2009 at 2:45 AM, Christoph Bartelmus
> <[email protected]> wrote:
>> Hi Mauro,
>>
>> on 26 Nov 09 at 14:25, Mauro Carvalho Chehab wrote:
>>> Christoph Bartelmus wrote:
>> [...]
>>>> But I'm still a bit hesitant about the in-kernel decoding. Maybe it's just
>>>> because I'm not familiar at all with input layer toolset.
>> [...]
>>> I hope it helps for you to better understand how this works.
>> So the plan is to have two ways of using IR in the future which are
>> incompatible to each other, the feature-set of one being a subset of the
>> other?
>
> Take advantage of the fact that we don't have a twenty year old legacy
> API already in the kernel. Design an IR API that uses current kernel
> systems. Christoph, ignore the code I wrote and make a design proposal
> that addresses these goals...
>
> 1) Unified input in Linux using evdev. IR is on equal footing with
> mouse and keyboard.

This makes sense to me. Yet, I think that, in some specific cases, we'll
need a raw interface.

> 2) plug and play for basic systems - you only need an external app for scripting

Yes.

> 3) No special tools - use mkdir, echo, cat, shell scripts to build maps

I don't think this is relevant. We already have ioctls for building maps,
and all in-kernel drivers can handle scancodes of up to 32 bits, so I don't
see any reason to use anything different from what's currently available.
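For reference, the ioctls in question are evdev's EVIOCGKEYCODE/EVIOCSKEYCODE pair, which at this point take an int[2] of (scancode, keycode). Below is a hedged sketch of loading one map entry; the device node and scancode values in the usage comment are made-up examples, and error handling is minimal:

```c
#include <linux/input.h>
#include <sys/ioctl.h>

/*
 * Map one IR scancode to an input-layer keycode using the classic
 * int[2] form of EVIOCSKEYCODE.  Returns the ioctl() result
 * (negative on error).
 */
int set_keycode(int fd, unsigned int scancode, unsigned int keycode)
{
    unsigned int codes[2] = { scancode, keycode };

    return ioctl(fd, EVIOCSKEYCODE, codes);
}

/*
 * Usage (hypothetical device node and scancode):
 *
 *     int fd = open("/dev/input/event5", O_RDWR);
 *     set_keycode(fd, 0x1e3d, KEY_PLAY);
 */
```

A keymap-loading tool would simply loop over its table calling this once per entry.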

> 4) Use of modern Linux features like sysfs, configfs and udev.

sysfs/udev is needed for hot-plugging. I wouldn't use configfs for it: there
aren't many places using it, and AFAIK some distros are not compiling their
kernels with configfs enabled. Also, as we already have ioctls for keycode
maps, I don't think we should migrate to configfs.

> 5) Direct multi-app support - no daemon

For multi-app usage like your example (e.g. different IR keys mapped to the
same evdev keycode and sent to different applications), I think we would
need a daemon to handle it.

> 6) Hide timing data from user as much as possible.

I agree. Moving the IRQ/GPIO polling to userspace may mean that, on a system
with a high workload, the keycode will be misinterpreted.

Cheers,
Mauro.

2009-11-30 12:34:20

by Mauro Carvalho Chehab

Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Christoph Bartelmus wrote:
> Hi Mauro,
>
> I just don't want to change a working interface just because it could be
> also implemented in a different way, but having no other visible advantage
> than using more recent kernel features.

I agree. The main reasons to review the interface are:
1) to avoid any overlaps (if there are any) with the evdev interface;
2) to have it stable enough to be used, without changes, for a long
time.

>> I haven't seen such limitations on his proposal. We currently have in-kernel
>> decoders for NEC, pulse-distance, RC4 protocols, and some variants. If
>> non-RC5 decoders are missing, we need for sure to add them.
>
> That was not my point. If you point a NEC remote at the Igor USB device,
> you won't be able to use a NEC decoder because the device will swallow
> half of the bits. LIRC won't care unless the resulting scancodes are
> identical.

If the difference is just the bit order, and assuming that we use a standard
NEC decoder, a (kernel) driver will simply provide a different scancode for
that device; the keymap table will be different, but it will still work
(and can still be plug and play).

In this specific case, we can opt not to add any special hack for the Igor
USB device in the driver, but instead let the userspace tool invert the bit
order when loading the keymap for that device.
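If we take that route, the userspace transformation is a plain bit reversal of each scancode before it is loaded via the keymap ioctl. A sketch follows; the bit width is whatever the device actually delivers, the 8-bit example in the comment being only an assumption:

```c
#include <stdint.h>

/* Reverse the order of the low 'width' bits of a scancode. */
uint32_t reverse_bits(uint32_t value, int width)
{
    uint32_t out = 0;

    for (int i = 0; i < width; i++)
        if (value & (1u << i))
            out |= 1u << (width - 1 - i);
    return out;
}

/* e.g. with width 8, 11000101b becomes 10100011b */
```

The tool would apply this to every scancode in the keymap file before handing the entries to the kernel.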

>> Providing that we agree on what we'll do, I don't see why not
>> adding it on staging for 2.6.33 and targeting to have
>> everything done for 2.6.34 or 2.6.35.
>
> The problem that I see here is just that even when we have very talented
> people working on this, that put together all resources, we won't be able
> to cover all the corner cases with all the different receivers and remote
> control protocols out there. It will still require lots of fine-tuning
> which was done in LIRC over the years.

True, but even if we want to merge the lirc drivers "as-is", they will still
need changes: due to the kernel CodingStyle, the usage of some APIs that may
be deprecated, some breakage on non-Intel architectures, some bugs that
kernel hackers may discover, etc.

Also, there will be the needs for integrating with V4L/DVB code that may
also require some changes.

So, the drivers will still be different than what you currently have
and they may still need some fine-tuning after the merge.

Cheers,
Mauro

2009-11-30 12:59:25

by Andy Walls

Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Mon, 2009-11-30 at 09:56 -0200, Mauro Carvalho Chehab wrote:
> Andy Walls wrote:
> > On Sun, 2009-11-29 at 09:49 -0800, Ray Lee wrote:
> >> On Sun, Nov 29, 2009 at 9:28 AM, Maxim Levitsky <[email protected]> wrote:
> >>> This has zero advantages besides good developer feeling that "My system
> >>> has one less daemon..."
> >> Surely it's clear that having an unnecessary daemon is introducing
> >> another point of failure?
> >
> > A failure in a userspace IR daemon is worst case loss of IR
> > functionality.
> >
> > A failure in kernel space can oops or panic the machine.
>
> If IR is the only interface between the user and the system (like in a TV
> or a Set Top Box), both will give you the same practical result: the system
> will be broken, if you got a crash at the IR driver.

Yes, true. I had forgotten about the embedded space.

Nonetheless I'd still rather debug a problem with a dead process in
userspace than an oops or panic (not that an end user cares) and avoid
the risk of filesystem corruption.

> Userspace is much more flexible.
>
> Why? The flexibility about the same on both kernelspace and userspace,
> except for the boot time.

I suppose my best answer to that is a question back to you: why does udev
run in userspace versus as a kernel thread?


My personal thoughts on why user space is more flexible:

1. You have all of *NIX available to you to use as tools to achieve your
requirements.

2. You are not constrained to use C.

3. You can link in libraries with functions that are not available in
the kernel. (udev has libudev IIRC to handle complexities)

4. Reading a configuration file or other file from the filesystem is
trivial - file access from userspace is easy.

5. You don't have to be concerned about the running context (am I
allowed to sleep here or not?).






> A kernelspace input device driver can start working since boot time.
> On the other hand, an userspace device driver will be available only
> after mounting the filesystems and starting the deamons
> (e. g. after running inittab).
>
> So, you cannot catch a key that would be affecting the boot
> (for example to ask the kernel to run a different runlevel or entering
> on some administrative mode).

Right. That's another requirement that makes sense, if we're talking
about systems that don't have any other keyboard handy to the user.

So are we optimizing for the embedded/STB and HTPC with no keyboard use
case, or the desktop or HTPC with a keyboard for maintenance?


Regards,
Andy

2009-11-30 13:23:03

by [email protected]

Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Mon, Nov 30, 2009 at 7:57 AM, Andy Walls <[email protected]> wrote:
> I suppose my best answer to that is question back to you: Why does udev
> run in userspace versus a kernel thread?

Because udev is a scripting system. I've always said that the
scripting piece of IR belongs in user space. IR scripting should be
optional, none of the systems I work on need it.

This is the event flow being built...

device timing data
-- send timing data to user space
-- do protocol decode (40K code)
-- send decoded data back to kernel
other devices that decode in HW add events here
-- send decoded data to user space
-- map to keys (30K code)
-- send keys back to kernel
apps listen for keys
-- send keys back to user space
-- user space apps act on key (possibly run scripts)

I'd like to see...

device timing data
-- user space can inject timing data from user space drivers
do protocol decode (40K code)
other devices that decode in HW add events here
-- user space can inject decoded data from user space drivers
map to keys (30K code)
apps listen for keys
-- send keys back to user space
-- user space apps act on key (possibly run scripts)



>
> My personal thoughts on why user space is more flexible:
>
> 1. You have all of *NIX available to you to use as tools to achieve your
> requirements.
>
> 2. You are not constrained to use C.
>
> 3. You can link in libraries with functions that are not available in
> the kernel. (udev has libudev IIRC to handle complexities)
>
> 4. Reading a configuration file or other file from the filesystem is
> trivial - file access from usespace is easy.
>
> 5. You don't have to be concerned about the running context (am I
> allowed to sleep here or not?).
>
>
>
>
>
>
>> A kernelspace input device driver can start working since boot time.
>> On the other hand, an userspace device driver will be available only
>> after mounting the filesystems and starting the deamons
>> (e. g. after running inittab).
>>
>> So, you cannot catch a key that would be affecting the boot
>> (for example to ask the kernel to run a different runlevel or entering
>> on some administrative mode).
>
> Right. That's another requirement that makes sense, if we're talking
> about systems that don't have any other keyboard handy to the user.
>
> So are we optimizing for the embedded/STB and HTPC with no keyboard use
> case, or the desktop or HTPC with a keyboard for maintencance?
>
>
> Regards,
> Andy
>
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-input" in
> the body of a message to [email protected]
> More majordomo info at http://vger.kernel.org/majordomo-info.html
>



--
Jon Smirl
[email protected]

2009-11-30 13:25:12

by Mauro Carvalho Chehab

Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Andy Walls wrote:
> On Mon, 2009-11-30 at 09:56 -0200, Mauro Carvalho Chehab wrote:
>> Andy Walls wrote:
>>> On Sun, 2009-11-29 at 09:49 -0800, Ray Lee wrote:
>>>> On Sun, Nov 29, 2009 at 9:28 AM, Maxim Levitsky <[email protected]> wrote:
>>>>> This has zero advantages besides good developer feeling that "My system
>>>>> has one less daemon..."
>>>> Surely it's clear that having an unnecessary daemon is introducing
>>>> another point of failure?
>>> A failure in a userspace IR daemon is worst case loss of IR
>>> functionality.
>>>
>>> A failure in kernel space can oops or panic the machine.
>> If IR is the only interface between the user and the system (like in a TV
>> or a Set Top Box), both will give you the same practical result: the system
>> will be broken, if you got a crash at the IR driver.
>
> Yes, true. I had forgotten about the embedded space.
>
> Nonetheless I'd still rather debug a problem with a dead process in
> userspace than an oops or panic (not that an end user cares) and avoid
> the risk of filesystem corruption.
>
>> Userspace is much more flexible.
>>
>> Why? The flexibility about the same on both kernelspace and userspace,
>> except for the boot time.
>
> I suppose my best answer to that is question back to you: Why does udev
> run in userspace versus a kernel thread?

udev relies on a kernel implementation: sysfs.

> My personal thoughts on why user space is more flexible:
>
> 1. You have all of *NIX available to you to use as tools to achieve your
> requirements.
>
> 2. You are not constrained to use C.
>
> 3. You can link in libraries with functions that are not available in
> the kernel. (udev has libudev IIRC to handle complexities)
>
> 4. Reading a configuration file or other file from the filesystem is
> trivial - file access from usespace is easy.
>
> 5. You don't have to be concerned about the running context (am I
> allowed to sleep here or not?).

You can do all the above steps with the proper API, just like udev does
with the sysfs API.

After boot, an application can open the raw API, disabling any in-kernel
decoding/handling, and handle the IR directly. Alternatively, a udev rule
can load a different keymap based on some config written in a file.

So, you won't lose anything by having an in-kernel driver or decoder.
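For completeness, the raw API that lircd consumes today is essentially the "mode2" sample stream: as far as I recall lirc.h, each 32-bit word read from the lirc device carries the duration in microseconds in its low 24 bits, with bit 24 set for a pulse (mark). A hedged sketch of unpacking such samples, with the constants assumed to mirror lirc.h:

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumed to mirror the PULSE_BIT/PULSE_MASK definitions in lirc.h. */
#define LIRC_PULSE_BIT  0x01000000u
#define LIRC_VALUE_MASK 0x00FFFFFFu

/* True if the sample is a pulse (mark), false for a space. */
bool lirc_is_pulse(uint32_t sample)
{
    return (sample & LIRC_PULSE_BIT) != 0;
}

/* Duration of the sample in microseconds. */
uint32_t lirc_duration_us(uint32_t sample)
{
    return sample & LIRC_VALUE_MASK;
}
```

A userspace decoder would read() such samples from the lirc device and feed them to the same kind of state machine an in-kernel decoder would use.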

>> A kernelspace input device driver can start working since boot time.
>> On the other hand, an userspace device driver will be available only
>> after mounting the filesystems and starting the deamons
>> (e. g. after running inittab).
>>
>> So, you cannot catch a key that would be affecting the boot
>> (for example to ask the kernel to run a different runlevel or entering
>> on some administrative mode).
>
> Right. That's another requirement that makes sense, if we're talking
> about systems that don't have any other keyboard handy to the user.
>
> So are we optimizing for the embedded/STB and HTPC with no keyboard use
> case, or the desktop or HTPC with a keyboard for maintencance?

If we remove the in-kernel decoders/evdev handlers, you'll limit the use
cases, whereas if you keep them in the kernel, both ways can be used.

Cheers,
Mauro.

2009-11-30 13:43:32

by Maxim Levitsky

Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Mon, 2009-11-30 at 07:57 -0500, Andy Walls wrote:
> On Mon, 2009-11-30 at 09:56 -0200, Mauro Carvalho Chehab wrote:
> > Andy Walls wrote:
> > > On Sun, 2009-11-29 at 09:49 -0800, Ray Lee wrote:
> > >> On Sun, Nov 29, 2009 at 9:28 AM, Maxim Levitsky <[email protected]> wrote:
> > >>> This has zero advantages besides good developer feeling that "My system
> > >>> has one less daemon..."
> > >> Surely it's clear that having an unnecessary daemon is introducing
> > >> another point of failure?
> > >
> > > A failure in a userspace IR daemon is worst case loss of IR
> > > functionality.
> > >
> > > A failure in kernel space can oops or panic the machine.
> >
> > If IR is the only interface between the user and the system (like in a TV
> > or a Set Top Box), both will give you the same practical result: the system
> > will be broken, if you got a crash at the IR driver.
>
> Yes, true. I had forgotten about the embedded space.
>
> Nonetheless I'd still rather debug a problem with a dead process in
> userspace than an oops or panic (not that an end user cares) and avoid
> the risk of filesystem corruption.
>
> > Userspace is much more flexible.
> >
> > Why? The flexibility about the same on both kernelspace and userspace,
> > except for the boot time.
>
> I suppose my best answer to that is question back to you: Why does udev
> run in userspace versus a kernel thread?
>
>
> My personal thoughts on why user space is more flexible:
>
> 1. You have all of *NIX available to you to use as tools to achieve your
> requirements.
>
> 2. You are not constrained to use C.
>
> 3. You can link in libraries with functions that are not available in
> the kernel. (udev has libudev IIRC to handle complexities)
>
> 4. Reading a configuration file or other file from the filesystem is
> trivial - file access from usespace is easy.
>
> 5. You don't have to be concerned about the running context (am I
> allowed to sleep here or not?).


6. You can modify a userspace driver easily to cope with all kinds of weird
setups. For example, there are remotes that send a whole packet of data
consisting of many numbers that are also displayed on the LCD of the
remote.
Otherwise you will have to go through the same fight for every minor thing
you'd like to add to the kernel...


7. You don't have ABI constraints; your userspace program can read a
configuration file in any format you wish.
I, for example, was thinking about putting all the lirc config files into an
SQLite database, and pulling them out when a specific remote is detected.


>
>
>
>
>
>
> > A kernelspace input device driver can start working since boot time.
> > On the other hand, an userspace device driver will be available only
> > after mounting the filesystems and starting the deamons
> > (e. g. after running inittab).
> >
> > So, you cannot catch a key that would be affecting the boot
> > (for example to ask the kernel to run a different runlevel or entering
> > on some administrative mode).
>
> Right. That's another requirement that makes sense, if we're talking
> about systems that don't have any other keyboard handy to the user.
>
> So are we optimizing for the embedded/STB and HTPC with no keyboard use
> case, or the desktop or HTPC with a keyboard for maintencance?
>
>
> Regards,
> Andy
>
>

2009-11-30 14:01:27

by [email protected]

Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Mon, Nov 30, 2009 at 8:43 AM, Maxim Levitsky <[email protected]> wrote:
> On Mon, 2009-11-30 at 07:57 -0500, Andy Walls wrote:
>> On Mon, 2009-11-30 at 09:56 -0200, Mauro Carvalho Chehab wrote:
>> > Andy Walls wrote:
>> > > On Sun, 2009-11-29 at 09:49 -0800, Ray Lee wrote:
>> > >> On Sun, Nov 29, 2009 at 9:28 AM, Maxim Levitsky <[email protected]> wrote:
>> > >>> This has zero advantages besides good developer feeling that "My system
>> > >>> has one less daemon..."
>> > >> Surely it's clear that having an unnecessary daemon is introducing
>> > >> another point of failure?
>> > >
>> > > A failure in a userspace IR daemon is worst case loss of IR
>> > > functionality.
>> > >
>> > > A failure in kernel space can oops or panic the machine.
>> >
>> > If IR is the only interface between the user and the system (like in a TV
>> > or a Set Top Box), both will give you the same practical result: the system
>> > will be broken, if you got a crash at the IR driver.
>>
>> Yes, true. I had forgotten about the embedded space.
>>
>> Nonetheless I'd still rather debug a problem with a dead process in
>> userspace than an oops or panic (not that an end user cares) and avoid
>> the risk of filesystem corruption.
>>
>> > Userspace is much more flexible.
>> >
>> > Why? The flexibility about the same on both kernelspace and userspace,
>> > except for the boot time.
>>
>> I suppose my best answer to that is question back to you: Why does udev
>> run in userspace versus a kernel thread?
>>
>>
>> My personal thoughts on why user space is more flexible:
>>
>> 1. You have all of *NIX available to you to use as tools to achieve your
>> requirements.
>>
>> 2. You are not constrained to use C.
>>
>> 3. You can link in libraries with functions that are not available in
>> the kernel. (udev has libudev IIRC to handle complexities)
>>
>> 4. Reading a configuration file or other file from the filesystem is
>> trivial - file access from usespace is easy.
>>
>> 5. You don't have to be concerned about the running context (am I
>> allowed to sleep here or not?).
>
>
> 6. You can modify userspace driver easily to cope with all weird setups.
> Like you know that there are remotes that send whole packet of data that
> consist of many numbers that are also displayed on the LCD of the
> remote.
> Otherwise you will have to go through same fight for every minor thing
> you like to add to kernel...
>
>
> 7. You don't have an ABI constraints, your userspace program can read a
> configuration file in any format you wish.
> I for example was thinking about putting all lirc config files into an
> sqllite database, and pulling them out when specific remote is detected.

Linux is not a microkernel; it is a monolithic kernel.
http://en.wikipedia.org/wiki/Microkernel

If you want to push all of the device drivers into user space, go run a
microkernel. Even the X server has finally come around, abandoning its
cross-platform OS-in-user-space model and beginning the switch to
kernel drivers. That transition is going to take ten years to
complete.

Once things get into the kernel they become far harder to change.
Stop for a minute and think about designing the best IR system for
Linux and forget about making a cross platform solution. IR is an
input device, it should be integrated into the Linux input subsystem.
You may not like the designs I have proposed, but running IR in user
space and injecting a keystroke at the end of the process is not
integrating it into the input subsystem.


>
>
>>
>>
>>
>>
>>
>>
>> > A kernelspace input device driver can start working since boot time.
>> > On the other hand, an userspace device driver will be available only
>> > after mounting the filesystems and starting the deamons
>> > (e. g. after running inittab).
>> >
>> > So, you cannot catch a key that would be affecting the boot
>> > (for example to ask the kernel to run a different runlevel or entering
>> > on some administrative mode).
>>
>> Right. That's another requirement that makes sense, if we're talking
>> about systems that don't have any other keyboard handy to the user.
>>
>> So are we optimizing for the embedded/STB and HTPC with no keyboard use
>> case, or the desktop or HTPC with a keyboard for maintencance?
>>
>>
>> Regards,
>> Andy
>>
>>
>
>
>



--
Jon Smirl
[email protected]

2009-11-30 15:04:27

by Maxim Levitsky

Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Mon, 2009-11-30 at 09:01 -0500, Jon Smirl wrote:
> On Mon, Nov 30, 2009 at 8:43 AM, Maxim Levitsky <[email protected]> wrote:
> > On Mon, 2009-11-30 at 07:57 -0500, Andy Walls wrote:
> >> On Mon, 2009-11-30 at 09:56 -0200, Mauro Carvalho Chehab wrote:
> >> > Andy Walls wrote:
> >> > > On Sun, 2009-11-29 at 09:49 -0800, Ray Lee wrote:
> >> > >> On Sun, Nov 29, 2009 at 9:28 AM, Maxim Levitsky <[email protected]> wrote:
> >> > >>> This has zero advantages besides good developer feeling that "My system
> >> > >>> has one less daemon..."
> >> > >> Surely it's clear that having an unnecessary daemon is introducing
> >> > >> another point of failure?
> >> > >
> >> > > A failure in a userspace IR daemon is worst case loss of IR
> >> > > functionality.
> >> > >
> >> > > A failure in kernel space can oops or panic the machine.
> >> >
> >> > If IR is the only interface between the user and the system (like in a TV
> >> > or a Set Top Box), both will give you the same practical result: the system
> >> > will be broken, if you got a crash at the IR driver.
> >>
> >> Yes, true. I had forgotten about the embedded space.
> >>
> >> Nonetheless I'd still rather debug a problem with a dead process in
> >> userspace than an oops or panic (not that an end user cares) and avoid
> >> the risk of filesystem corruption.
> >>
> >> > Userspace is much more flexible.
> >> >
> >> > Why? The flexibility about the same on both kernelspace and userspace,
> >> > except for the boot time.
> >>
> >> I suppose my best answer to that is question back to you: Why does udev
> >> run in userspace versus a kernel thread?
> >>
> >>
> >> My personal thoughts on why user space is more flexible:
> >>
> >> 1. You have all of *NIX available to you to use as tools to achieve your
> >> requirements.
> >>
> >> 2. You are not constrained to use C.
> >>
> >> 3. You can link in libraries with functions that are not available in
> >> the kernel. (udev has libudev IIRC to handle complexities)
> >>
> >> 4. Reading a configuration file or other file from the filesystem is
> >> trivial - file access from usespace is easy.
> >>
> >> 5. You don't have to be concerned about the running context (am I
> >> allowed to sleep here or not?).
> >
> >
> > 6. You can modify userspace driver easily to cope with all weird setups.
> > Like you know that there are remotes that send whole packet of data that
> > consist of many numbers that are also displayed on the LCD of the
> > remote.
> > Otherwise you will have to go through same fight for every minor thing
> > you like to add to kernel...
> >
> >
> > 7. You don't have an ABI constraints, your userspace program can read a
> > configuration file in any format you wish.
> > I for example was thinking about putting all lirc config files into an
> > sqllite database, and pulling them out when specific remote is detected.
>
> Linux is not a microkernel it is a monolithic kernel.
> http://en.wikipedia.org/wiki/Microkernel


The above is trolling.

Maybe it will come as a surprise to you, but I am quite a big supporter of
in-kernel code.

For example, I don't quite like that ALSA doesn't do mixing and
re-sampling in the kernel.
These days PulseAudio works quite well, but it still falls short
sometimes.
I know about dmix/dsnoop, etc.; these are nice, but I would still be
happy if the kernel did that work, which is critical for both performance
and latency.

Some time ago an idea to move kernel VT support in userspace surfaced,
and I was against it too.


However, following an established concept in philosophy, extremes are
equally bad.

Pushing everything out of the kernel and shoving everything in are
equally bad.

So I am not blindly saying 'Everything belongs in the kernel!'
or 'Let's put everything out, it's more stable that way! Moore's law will
take care of performance...'

Instead, I consider the pros and cons of both solutions, picking the best
one.

In this particular case I was even happy to see your kernel patches at
first glance, but then, after a deep review, I found that the in-kernel
approach will only create problems, won't eliminate userspace decoding,
and solves only one problem, namely the good feeling of 'one less
daemon in the system'.



> Once things get into the kernel they become far harder to change.
> Stop for a minute and think about designing the best IR system for
> Linux and forget about making a cross platform solution. IR is an
> input device, it should be integrated into the Linux input subsystem.
> You may not like the designs I have proposed, but running IR in user
> space and injecting a keystroke at the end of the process is not
> integrating it into the input subsystem.
Yes it is, like it or not.

Best regards,
Maxim Levitsky



>
>
> >
> >
> >>
> >>
> >>
> >>
> >>
> >>
> >> > A kernelspace input device driver can start working since boot time.
> >> > On the other hand, an userspace device driver will be available only
> >> > after mounting the filesystems and starting the deamons
> >> > (e. g. after running inittab).
> >> >
> >> > So, you cannot catch a key that would be affecting the boot
> >> > (for example to ask the kernel to run a different runlevel or entering
> >> > on some administrative mode).
> >>
> >> Right. That's another requirement that makes sense, if we're talking
> >> about systems that don't have any other keyboard handy to the user.
> >>
> >> So are we optimizing for the embedded/STB and HTPC with no keyboard use
> >> case, or the desktop or HTPC with a keyboard for maintencance?
> >>
> >>
> >> Regards,
> >> Andy
> >>
> >>
> >
> >
> >
>
>
>

2009-11-30 16:14:17

by kevin granade

Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Mon, Nov 30, 2009 at 7:24 AM, Mauro Carvalho Chehab
<[email protected]> wrote:

> After the boot, a device can open the raw API, disabling any in-kernel
> decoding/handling and handle IR directly. Alternatively, an udev rule
> can load a different keymap based on some config written on a file.

This idea of the in-kernel decoding being disabled when the raw API is
opened worries me. What guarantees that the following scenario will
not happen?

User uses apps which retrieve the decoded IR messages from the kernel.
User installs an app which decodes messages via the raw API (not lirc).
User's other applications no longer receive IR messages.

I know the assumption has been that "only lirc will use the raw API",
but this seems like a poor assumption for an API design to me.

A related question, what is an application developer who wishes to
decode the raw IR signal (for whatever reason) to do? Are they
*required* to implement full decoding and feed all the messages back
to the kernel so they don't break other applications?

For clarity: I'm not arguing for a particular approach, and I'm not fully
able to follow the discussion on this issue, but this one point
bothered me.

Thank you for your time,
Kevin

> Cheers,
> Mauro.

2009-11-30 16:19:34

by Mauro Carvalho Chehab

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Andy Walls wrote:

> Nonetheless I'd still rather debug a problem with a dead process in
> userspace than an oops or panic (not that an end user cares) and avoid
> the risk of filesystem corruption.

Considering my experience adding in-kernel support for IRs, I'd say that,
in general, a driver does the following:

1) it polls or waits on IRQs for an IR event. On raw IR devices, the value read
means a mark or a space;
2) it measures the timing between pulses, and the pulse/space durations;
3) it runs protocol decoding logic that produces a scancode based on the
pulse/space durations;
4) it does a table lookup to convert the scancode into the corresponding keycode;
5) it generates an evdev event.

Steps 2 and 3 happen only when the device doesn't have hardware decoding capabilities.
For devices with hardware decoding, the polling/IRQ process already retrieves a scancode.

Based on my experience, the step most likely to generate an OOPS is the
first one, where you need, for example, the proper memory barriers to avoid
unregistering an IR device while you're in the middle of IRQ or poll handling.
In the IRQ case, you also need to take care not to sleep, since you're
in interrupt context.

If you're outputting raw pulse/space data to userspace (a lirc-like raw interface),
you still need to do steps (1) and (2) in the kernel, plus logic similar
to (5) to deliver an event to userspace.

So, the basic difference is that you won't run the decoder (3) nor do a table lookup (4).

The logic for (4) is trivial (a simple table lookup). If you make a mistake
there, the bug will likely show up during development. Besides, if you're not able
to write proper code to fetch a value from a table, you shouldn't be trying
to write a driver anyway.

The logic for (3) is as simple as identifying the lengths of the pulses and
the spaces; depending on the length, it produces a zero or a one. Pure integer math.
The only risk in such logic is a division by zero. Apart from that, this type
of code shouldn't cause any OOPS or panic.

Also, for (3) and (4), it is very easy to write the code first in userspace (if you
feel more comfortable doing so) and, after enough testing, add the same code to
kernelspace.

Cheers,
Mauro.


2009-11-30 17:34:23

by Mauro Carvalho Chehab

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

kevin granade wrote:
> On Mon, Nov 30, 2009 at 7:24 AM, Mauro Carvalho Chehab
> <[email protected]> wrote:
>
>> After the boot, a device can open the raw API, disabling any in-kernel
>> decoding/handling and handle IR directly. Alternatively, an udev rule
>> can load a different keymap based on some config written on a file.
>
> This idea of the in-kernel decoding being disabled when the raw API is
> opened worries me. What guarantees that the following scenario will
> not happen?
>
> User uses apps which retrieve the decoded IR messages from the kernel.
> User installs an app which decodes messages via the raw API (not lirc).
> User's other applications no longer receive IR messages.
>
> I know the assumption has been that "only lirc will use the raw API",
> but this seems like a poor assumption for an API design to me.

All those questions are theoretical, as we don't have raw API code
merged in the kernel yet. So, this is just my understanding of how
this should work.

If the user wants to use the raw interface, it is because the in-kernel
decoding is not appropriate for his usage (at least while such an application
is open). So, not disabling the evdev output seems senseless.

Btw, this is the same behavior that happens when some application directly
opens an evdev interface, instead of letting it be redirected to stdin.

> A related question, what is an application developer who wishes to
> decode the raw IR signal (for whatever reason) to do? Are they
> *required* to implement full decoding and feed all the messages back
> to the kernel so they don't break other applications?

If such an application doesn't do that, the IR will stop working while the
application is in use.

Cheers,
Mauro.

2009-11-30 17:43:17

by Lennart Sorensen

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sat, Nov 28, 2009 at 06:26:55PM -0500, Andy Walls wrote:
> The only thing this buys for the user is remote/products bundles that
> work out of the box. That can only be a solution for the 80% case.
>
> I don't hear users crying out "Please integrate IR with the input
> system". I do hear users say "I want my remote to work", and "How can I
> make my remote work?". Users are not specifically asking for this
> integration of IR and the input system - a technical nuance. If such a
> tecnical desire-ment drives excessive rework, I doubt anyone will care
> enough about IR to follow through to make a complete system.

Please integrate it so I can stop having issues with the lirc modules
when going to a new kernel version.

> What does "equal footing" mean as an incentive anyway? The opportunity
> to reimplement *everything* that exists for IR already over again in
> kernel-space for the sake of developer technical desires? That's just a
> lot of work for "not invented here" syndrome. IR transceivers are
> arguably superior to keyboards and mice anyway because they can transmit
> data too.

I have no idea. I am sure you guys will come up with a great interface.
I just use lirc with my mythtv box.

--
Len Sorensen

2009-11-30 18:02:48

by Dmitry Torokhov

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Mon, Nov 30, 2009 at 03:33:52PM -0200, Mauro Carvalho Chehab wrote:
> kevin granade wrote:
> > On Mon, Nov 30, 2009 at 7:24 AM, Mauro Carvalho Chehab
> > <[email protected]> wrote:
> >
> >> After the boot, a device can open the raw API, disabling any in-kernel
> >> decoding/handling and handle IR directly. Alternatively, an udev rule
> >> can load a different keymap based on some config written on a file.
> >
> > This idea of the in-kernel decoding being disabled when the raw API is
> > opened worries me. What guarantees that the following scenario will
> > not happen?
> >
> > User uses apps which retrieve the decoded IR messages from the kernel.
> > User installs an app which decodes messages via the raw API (not lirc).
> > User's other applications no longer receive IR messages.
> >
> > I know the assumption has been that "only lirc will use the raw API",
> > but this seems like a poor assumption for an API design to me.
>
> All those questions are theoretical, as we haven't a raw API code
> already merged in kernel. So, this is just my understanding on how
> this should work.
>
> If the user wants to use the raw interface, it is because the in-kernel
> decoding is not appropriate for his usage

Not necessarily, someone might just want to observe the data stream for
one reason or another. You would not believe how many times I wanted
to use evtest from X but could not because X grabs the device, and I had
to switch to the console....

> (at least while such application
> is opened). So, not disabling the evdev output seems senseless.

You know what they say when you assume things?

>
> Btw, this is the same behavior that happens when some application directly
> opens an evdev interface, instead of letting it to be redirected to stdin.

Well, console applications don't get their input directly from the event
device, but even if they did, "not redirecting it to stdin" would not
affect any other application that has the same event device open.

This is a _huge_ difference.

>
> > A related question, what is an application developer who wishes to
> > decode the raw IR signal (for whatever reason) to do? Are they
> > *required* to implement full decoding and feed all the messages back
> > to the kernel so they don't break other applications?
>
> If such application won't do it, the IR will stop working, while the
> application is in use.
>

I don't think that is an indication of a good solution.

--
Dmitry

2009-11-30 18:28:34

by Mauro Carvalho Chehab

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Dmitry Torokhov wrote:
> On Mon, Nov 30, 2009 at 03:33:52PM -0200, Mauro Carvalho Chehab wrote:
>> kevin granade wrote:
>>> On Mon, Nov 30, 2009 at 7:24 AM, Mauro Carvalho Chehab
>>> <[email protected]> wrote:
>>>
>>>> After the boot, a device can open the raw API, disabling any in-kernel
>>>> decoding/handling and handle IR directly. Alternatively, an udev rule
>>>> can load a different keymap based on some config written on a file.
>>> This idea of the in-kernel decoding being disabled when the raw API is
>>> opened worries me. What guarantees that the following scenario will
>>> not happen?
>>>
>>> User uses apps which retrieve the decoded IR messages from the kernel.
>>> User installs an app which decodes messages via the raw API (not lirc).
>>> User's other applications no longer receive IR messages.
>>>
>>> I know the assumption has been that "only lirc will use the raw API",
>>> but this seems like a poor assumption for an API design to me.
>> All those questions are theoretical, as we haven't a raw API code
>> already merged in kernel. So, this is just my understanding on how
>> this should work.
>>
>> If the user wants to use the raw interface, it is because the in-kernel
>> decoding is not appropriate for his usage
>
> Not necessarily, someone might just want to observe the data stream for
> one reason or enough. You would not believe how many times I wanted
> to use evtest from X but could not because X grabs the device and had
> to switch to console....
>
>> (at least while such application
>> is opened). So, not disabling the evdev output seems senseless.
>
> You know what they say when you assume things?
>
>> Btw, this is the same behavior that happens when some application directly
>> opens an evdev interface, instead of letting it to be redirected to stdin.
>
> Well, console applications don't get their input directly from event
> device but even if they did "not redirecting it to stdin" will not
> affect any other application that has the same event device open.
>
> This is a _huge_ difference.
>
>>> A related question, what is an application developer who wishes to
>>> decode the raw IR signal (for whatever reason) to do? Are they
>>> *required* to implement full decoding and feed all the messages back
>>> to the kernel so they don't break other applications?
>> If such application won't do it, the IR will stop working, while the
>> application is in use.
>>
>
> I don't think it is indication of a good solution.

Well, maybe then we could have an ioctl on the raw IR interface to explicitly
disable the evdev processing of the data, instead of making
assumptions.

2009-11-30 19:08:01

by Dmitry Torokhov

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Mon, Nov 30, 2009 at 04:27:56PM -0200, Mauro Carvalho Chehab wrote:
> Dmitry Torokhov wrote:
> > On Mon, Nov 30, 2009 at 03:33:52PM -0200, Mauro Carvalho Chehab wrote:
> >> kevin granade wrote:
> >>> On Mon, Nov 30, 2009 at 7:24 AM, Mauro Carvalho Chehab
> >>> <[email protected]> wrote:
> >>>
> >>>> After the boot, a device can open the raw API, disabling any in-kernel
> >>>> decoding/handling and handle IR directly. Alternatively, an udev rule
> >>>> can load a different keymap based on some config written on a file.
> >>> This idea of the in-kernel decoding being disabled when the raw API is
> >>> opened worries me. What guarantees that the following scenario will
> >>> not happen?
> >>>
> >>> User uses apps which retrieve the decoded IR messages from the kernel.
> >>> User installs an app which decodes messages via the raw API (not lirc).
> >>> User's other applications no longer receive IR messages.
> >>>
> >>> I know the assumption has been that "only lirc will use the raw API",
> >>> but this seems like a poor assumption for an API design to me.
> >> All those questions are theoretical, as we haven't a raw API code
> >> already merged in kernel. So, this is just my understanding on how
> >> this should work.
> >>
> >> If the user wants to use the raw interface, it is because the in-kernel
> >> decoding is not appropriate for his usage
> >
> > Not necessarily, someone might just want to observe the data stream for
> > one reason or enough. You would not believe how many times I wanted
> > to use evtest from X but could not because X grabs the device and had
> > to switch to console....
> >
> >> (at least while such application
> >> is opened). So, not disabling the evdev output seems senseless.
> >
> > You know what they say when you assume things?
> >
> >> Btw, this is the same behavior that happens when some application directly
> >> opens an evdev interface, instead of letting it to be redirected to stdin.
> >
> > Well, console applications don't get their input directly from event
> > device but even if they did "not redirecting it to stdin" will not
> > affect any other application that has the same event device open.
> >
> > This is a _huge_ difference.
> >
> >>> A related question, what is an application developer who wishes to
> >>> decode the raw IR signal (for whatever reason) to do? Are they
> >>> *required* to implement full decoding and feed all the messages back
> >>> to the kernel so they don't break other applications?
> >> If such application won't do it, the IR will stop working, while the
> >> application is in use.
> >>
> >
> > I don't think it is indication of a good solution.
>
> Well, maybe then we may have an ioctl to explicitly disable the evdev processing
> of the data that could be applied to the raw IR interface, instead of making
> assumptions.

This is, I think, better. Still, it takes the decision from one application
and gives it to another. Why don't we let consumers decide what they want to
use? I.e., if one does not want to use kernel-driven events, don't open
that particular /dev/input/eventX but rather open the event device created
through uinput by lirc?

--
Dmitry

2009-11-30 19:49:10

by Krzysztof Halasa

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Mauro Carvalho Chehab <[email protected]> writes:

>> That's a question that I have not answered for myself concludingly.
>> Is a remote control really on exactly the same level as a keyboard or
>> mouse?
>
> On some devices like STB and TV sets (most of modern LCD/Plasma TV's
> run Linux),
> they are at the same level. I'd say that the same applies to PC's that
> the user has dedicated to work as an MCE.

A remote can just _be_ keyboard and/or (sort of) mouse.
--
Krzysztof Halasa

2009-11-30 20:03:42

by Krzysztof Halasa

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Andy Walls <[email protected]> writes:

> Nonetheless I'd still rather debug a problem with a dead process in
> userspace than an oops or panic (not that an end user cares) and avoid
> the risk of filesystem corruption.

I'll concentrate on IRQ-driven space/mark drivers/devices since that's
what I've been using. They are: very simple hardware (as simple as a
TSOP1836 3-pin receiver "chip" + a resistor) and a very simple driver (the
hardware signals a change in input state with an IRQ). Something like maybe
50 lines of code + the (default) key mapping table.

Anyway, you can't move the whole driver to userspace, as it has to
handle IRQs with timestamps.

It doesn't have to sleep.

It's about the last thing I'd worry about WRT the stability.

> So are we optimizing for the embedded/STB and HTPC with no keyboard use
> case, or the desktop or HTPC with a keyboard for maintencance?

IOW the question is: do we want to continue supporting keyboard-less
machines?
--
Krzysztof Halasa

2009-11-30 20:07:19

by Krzysztof Halasa

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

kevin granade <[email protected]> writes:

> This idea of the in-kernel decoding being disabled when the raw API is
> opened worries me.

I don't think we need to disable the in-kernel decoding automatically.
That would be rather unfortunate.
--
Krzysztof Halasa

2009-11-30 21:35:32

by [email protected]

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sun, Nov 29, 2009 at 7:01 AM, Christoph Bartelmus <[email protected]> wrote:
> Hi Jon,
>
> on 27 Nov 09 at 12:49, Jon Smirl wrote:
> [...]
>> Christoph, take what you know from all of the years of working on LIRC
>> and design the perfect in-kernel system. This is the big chance to
>> redesign IR support and get rid of any past mistakes. Incorporate any
>> useful chunks of code and knowledge from the existing LIRC into the
>> new design. Drop legacy APIs, get rid of daemons, etc. You can do this
>> redesign in parallel with existing LIRC. Everyone can continue using
>> the existing code while the new scheme is being built. Think of it as
>> LIRC 2.0. You can lead this design effort, you're the most experience
>> developer in the IR area.
>
> This is a very difficult thing for me to do. I must admit that I'm very
> biased.
> Because lircd is the only userspace application that uses the LIRC kernel
> interface, we never had any problems changing the interface when needed.
> I can't say there's much legacy stuff inside. I'm quite happy with the
> interface.
> The other thing is that I can't really move the decoder from userspace to
> kernel because there are way too many userspace drivers that do require a
> userspace decoder. LIRC also is running on FreeBSD, MacOS and even Cygwin.
> So letting the userspace drivers take advantage of a potential Linux in-
> kernel decoder is not an option for me either.
> I'm having my 'LIRC maintainer' hat on mostly during this discussion and I
> do understand that from Linux kernel perspective things look different.

It would be interesting to split the lirc daemon. Put the protocol
decoder stuff in one daemon and the scripting support in the other.
The scripting daemon would then be optional. What would be the
relative sizes of the two daemons?

--------------

The LIRC daemon always works with timing data, right? When it reads
the config files generated by irrecord it internally converts those to
timing data and then matches the incoming data against it.

Have you looked at the protocol engine idea? Running the protocol
engines in parallel until a match is achieved. Then map the
vendor/device/command triplet. The protocol engine concept fixes the
problem of Sony remotes in irrecord. Various Sony remote buttons
transmit in different protocols. irrecord assumes that a remote is
only using a single protocol. Since it can't figure out a protocol it
always records these remotes as raw.

If the IR data is being decoded in a protocol engine it becomes
possible to get rid of the need to configure IR in some cases. Apps
like MythTV could pretend they are a common piece of electronics
hardware - say a Motorola DVR. MythTV can then look for the
vendor/device/command triplet from a Motorola DVR. Set your
programmable remote to send the Motorola DVR commands and everything
will "just work".

Button on remote programmed to be Mot DVR --> protocol engine -->
Mot/dev/command --> MythTV which is looking for Mot/dev/command
No config files needed.

Make a command in MythTV to switch to emulating a different DVR if you
happen to own this one. Take this a step further and register a MythTV
profile with the various IR databases.




>
>> Take advantage of this window to make a
>> design that is fully integrated with Linux - put IR on equal footing
>> with the keyboard and mouse as it should be.
>
> That's a question that I have not answered for myself concludingly.
> Is a remote control really on exactly the same level as a keyboard or
> mouse?
>
> Christoph
>



--
Jon Smirl
[email protected]

2009-12-01 07:46:16

by lirc

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Hi Jon,

on 30 Nov 09 at 16:35, Jon Smirl wrote:
[...]
> It would be interesting to split the lirc daemon. Put the protocol
> decoder stuff in one daemon and the scripting support in the other.
> The scripting daemon would then be optional. What would be the
> relative sizes of the two daemons?
>
> --------------
>
> The LIRC daemon always works with timing data, right?

Timing data or hex codes (if decoding is done in hardware).

> When it reads
> the config files generated by irrecord it internally converts those to
> timing data

No.

> and then matches the incoming data against it.

Pattern matching is only done with raw mode config files. The normal case
is that lircd is decoding the incoming data using the protocol description
found in the config file.

> Have you looked at the protocol engine idea? Running the protocol
> engines in parallel until a match is achieved. Then map the
> vendor/device/command triplet. The protocol engine concept fixes the
> problem of Sony remotes in irrecord.

No, only rewriting irrecord would fix the problem of Sony remotes.
irrecord tries to guess the protocol parameters without any prior
knowledge about any protocols.
irrecord could also be rewritten to use the protocol engine concept
without changing anything in the decoder itself. In fact, this is partly
already available. You can give irrecord a template config file and it
will skip the protocol guessing step.

This just would have to be extended so that the template config file could
contain several protocol descriptions to match against.
I haven't implemented this yet because I don't care much. Sony remotes
work flawlessly in raw mode, too. It's only a problem from an aesthetic
viewpoint.

> Various Sony remote buttons
> transmit in different protocols. irrecord assumes that a remote is
> only using a single protocol. Since it can't figure out a protocol it
> always records these remotes as raw.

With manual intervention you can convert these raw config files afterwards
with "irrecord -a".

[...]
> Button on remote programed to be Mot DVR --> protocol engine -->
> Mot/dev/command --> MythTV which is looking for Mot/dev/command
> No config files needed.

You just move complexity to the application. MythTV would have to know what
a Motorola command set looks like.

Currently I would tend to an approach like this:
- raw interface to userspace using LIRC
- fixed set of in-kernel decoders that can handle bundled remotes

That would allow zero configuration for simple use cases and full
flexibility for more advanced use cases.

Christoph

2009-12-01 09:52:33

by Gerd Hoffmann

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On 11/30/09 13:34, Mauro Carvalho Chehab wrote:
> Christoph Bartelmus wrote:
>> Hi Mauro,
>>
>> I just don't want to change a working interface just because it could be
>> also implemented in a different way, but having no other visible advantage
>> than using more recent kernel features.
>
> I agree. The main reasons to review the interface is:
> 1) to avoid any overlaps (if are there any) with the evdev interface;

Use lirc for raw samples.
Use evdev for decoded data.

Hardware/drivers which can handle both can support both interfaces.

IMHO it makes no sense at all to squeeze raw samples through the input
layer. It looks more like a serial line than an input device. In fact
you can homebrew a receiver and connect it to the serial port, which was
quite common in pre-usb-ir-receiver times.

> 2) to have it stable enough to be used, without changes, for a long
> time.

It isn't like lirc is a new interface. It has been used in practice for
years. I don't think API stability is a problem here.

> True, but even if we want to merge lirc drivers "as-is", the drivers will
> still need changes, due to kernel CodingStyle, due to the usage of some API's
> that may be deprecated, due to some breakage with non-Intel architectures, due
> to some bugs that kernel hackers may discover, etc.

I assume this already happened in preparation for this submission?

cheers,
Gerd

2009-12-01 10:20:32

by Gerd Hoffmann

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Hi,

>> The point is that for simple usage, like an user plugging his new USB stick
>> he just bought, he should be able to use the shipped IR without needing to
>> configure anything or manually calling any daemon. This currently works
>> with the existing drivers and it is a feature that needs to be kept.
>
> Admittedly, LIRC is way behind when it comes to plug'n'play.

Should not be that hard to fix up.

When moving the keytable loading from kernel to userspace, the kernel
drivers have to inform userspace anyway what kind of hardware the IR
device is, so udev can figure out which keytable it should load. A sysfs
attribute is the way to go here, I think.

lirc drivers can do the same, and lircd can startup with a reasonable
(default) configuration.

Of course evdev and lirc subsystems/drivers should agree on which
attributes should be defined and how they are filled.

cheers,
Gerd

2009-12-01 10:46:36

by Gerd Hoffmann

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Hi,

> A current related problem is that i2c based devices can only be bound to
> only one of ir-kbd-i2c *or* lirc_i2c *or* lirc_zilog at any one time.
> Currently it is somewhat up to the bridge driver which binding is
> preferred. Discussion about this for the pvrusb2 module had the biggest
> email churn IIRC.

Once lirc_dev is merged you can easily fix this: You'll have *one*
driver which supports *both* evdev and lirc interfaces. If lircd opens
the lirc interface raw data will be sent there, keystrokes come in via
uinput. Otherwise keystrokes are sent directly via evdev. Problem solved.

cheers,
Gerd

PS: Not sure this actually makes sense for the i2c case, as far I know
these do decoding in hardware and don't provide access to the raw
samples, so killing the in-kernel IR limits to make ir-kbd-i2c
being on par with lirc_i2c might be more useful in this case.

2009-12-01 11:39:57

by Andy Walls

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Tue, 2009-12-01 at 08:45 +0100, Christoph Bartelmus wrote:
> Hi Jon,
>
> on 30 Nov 09 at 16:35, Jon Smirl wrote:


> Currently I would tend to an approach like this:
> - raw interface to userspace using LIRC
> - fixed set of in-kernel decoders that can handle bundled remotes
>
> That would allow zero configuration for simple use cases and full
> flexibility for more advanced use cases.
>
> Christoph

I'd also prefer that approach.

That probably comes as no surprise, but I may not be able to keep
following/kibitzing in this thread. Christoph's statement sums up my
preference.

Regards,
Andy

2009-12-01 11:50:49

by Andy Walls

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Tue, 2009-12-01 at 11:46 +0100, Gerd Hoffmann wrote:
> Hi,
>
> > A current related problem is that i2c based devices can only be bound to
> > only one of ir-kbd-i2c *or* lirc_i2c *or* lirc_zilog at any one time.
> > Currently it is somewhat up to the bridge driver which binding is
> > preferred. Discussion about this for the pvrusb2 module had the biggest
> > email churn IIRC.
>
> Once lirc_dev is merged you can easily fix this: You'll have *one*
> driver which supports *both* evdev and lirc interfaces. If lircd opens
> the lirc interface raw data will be sent there, keystrokes come in via
> uinput. Otherwise keystrokes are send directly via evdev. Problem solved.

This will be kind of strange for lirc_zilog (aka lirc_pvr150). It
supports IR transmit on the PVR-150, HVR-1600, and HD-PVR. I don't know
if transmit is raw pulse timings, but I'm sure the unit provides codes
on receive. Occasionally blocks of "boot data" need to be programmed
into the transmitter side. I suspect lirc_zilog will likely need
rework....


> cheers,
> Gerd
>
> PS: Not sure this actually makes sense for the i2c case, as far I know
> these do decoding in hardware and don't provide access to the raw
> samples,

True.

> so killing the in-kernel IR limits to make ir-kbd-i2c
^^^^^^^^^^^^^^^^^^^
> being on par with lirc_i2c might be more useful in this case.

I didn't quite understand that. Can you provide a little more info?


Thanks,
Andy

2009-12-01 13:12:09

by Mauro Carvalho Chehab

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Gerd Hoffmann wrote:
> On 11/30/09 13:34, Mauro Carvalho Chehab wrote:
>> Christoph Bartelmus wrote:
>>> Hi Mauro,
>>>
>>> I just don't want to change a working interface just because it could be
>>> also implemented in a different way, but having no other visible
>>> advantage
>>> than using more recent kernel features.
>>
>> I agree. The main reasons to review the interface is:
>> 1) to avoid any overlaps (if are there any) with the evdev interface;
>
> Use lirc for raw samples.
> Use evdev for decoded data.
>
> Hardware/drivers which can handle both can support both interfaces.
> IMHO it makes no sense at all to squeeze raw samples through the input
> layer. It looks more like a serial line than a input device. In fact
> you can homebrew a receiver and connect it to the serial port, which was
> quite common in pre-usb-ir-receiver times.

I agree.
>
>> 2) to have it stable enough to be used, without changes, for a long
>> time.
>
> It isn't like lirc is a new interface. It has been used in practice for
> years. I don't think API stability is a problem here.

You're probably right here but, as changing the API is currently not a problem,
I don't doubt that the API has changed over those years (I haven't followed
the lirc API, so this is just an educated guess).

So, all I'm saying is that we should do a final review considering API stability
before merging it, eventually considering to add a few reserved fields there, if
we suspect that we might need more space for some reason.

>> True, but even if we want to merge lirc drivers "as-is", the drivers will
>> still need changes, due to kernel CodingStyle, due to the usage of
>> some API's
>> that may be deprecated, due to some breakage with non-Intel
>> architectures, due
>> to some bugs that kernel hackers may discover, etc.
>
> I assumed this did happen in already in preparation of this submission?

Yes, but only for the few drivers that went into the first series of patches (in Jarod's
proposal, only 2 drivers were submitted).

Cheers,
Mauro.

2009-12-01 14:03:06

by Gerd Hoffmann

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On 12/01/09 12:49, Andy Walls wrote:
> On Tue, 2009-12-01 at 11:46 +0100, Gerd Hoffmann wrote:
>> Once lirc_dev is merged you can easily fix this: You'll have *one*
>> driver which supports *both* evdev and lirc interfaces. If lircd opens
>> the lirc interface raw data will be sent there, keystrokes come in via
>> uinput. Otherwise keystrokes are sent directly via evdev. Problem solved.
>
> This will be kind of strange for lirc_zilog (aka lirc_pvr150). It
> supports IR transmit on the PVR-150, HVR-1600, and HD-PVR. I don't know
> if transmit is raw pulse timings, but I'm sure the unit provides codes
> on receive. Occasionally blocks of "boot data" need to be programmed
> into the transmitter side. I suspect lirc_zilog will likely need
> rework....

Well, for IR *output* it doesn't make sense to disable evdev. One more
reason indicating it probably is better to introduce an ioctl to
disable evdev reporting. lircd will probably turn it off, especially
when sending data via uinput. debug tools might not, likewise apps
sending IR.

>> so killing the in-kernel IR limits to make ir-kbd-i2c
> ^^^^^^^^^^^^^^^^^^^
>> being on par with lirc_i2c might be more useful in this case.
>
> I didn't quite understand that. Can you provide a little more info?

Such as throwing away the address part of rc5 codes ...

cheers,
Gerd

2009-12-01 14:10:38

by Maxim Levitsky

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Tue, 2009-12-01 at 06:38 -0500, Andy Walls wrote:
> On Tue, 2009-12-01 at 08:45 +0100, Christoph Bartelmus wrote:
> > Hi Jon,
> >
> > on 30 Nov 09 at 16:35, Jon Smirl wrote:
>
>
> > Currently I would tend to an approach like this:
> > - raw interface to userspace using LIRC
> > - fixed set of in-kernel decoders that can handle bundled remotes
> >
> > That would allow zero configuration for simple use cases and full
> > flexibility for more advanced use cases.
> >
> > Christoph
>
> I'd also prefer that approach.

I also agree with this approach.
This way, there will be no need for configfs hacks, just a static
table for bundled remotes; in fact, this is a very clean approach.
Also, since bundled remotes use standard protocols, there will be no
problem adding decoders for them.

For the rest, the remotes that were never meant to be used with a
computer, lircd will do just fine.

So, is it a deal?

Best regards,
Maxim Levitsky


2009-12-01 14:15:08

by Mauro Carvalho Chehab

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Gerd Hoffmann wrote:
> Hi,
>
>>> The point is that for simple usage, like an user plugging his new USB
>>> stick
>>> he just bought, he should be able to use the shipped IR without
>>> needing to
>>> configure anything or manually calling any daemon. This currently works
>>> with the existing drivers and it is a feature that needs to be kept.
>>
>> Admittedly, LIRC is way behind when it comes to plug'n'play.
>
> Should not be that hard to fixup.
>
> When moving the keytable loading from kernel to userspace the kernel
> drivers have to inform userspace anyway what kind of hardware the IR
> device is, so udev can figure what keytable it should load. A sysfs
> attribute is the way to go here I think.
>
> lirc drivers can do the same, and lircd can startup with a reasonable
> (default) configuration.
>
> Of course evdev and lirc subsytems/drivers should agree on which
> attributes should be defined and how they are filled.

Yes, a sysfs attribute seems appropriate in this case.

These are the attributes that are currently available via sysfs:

looking at device '/class/input/input13/event5':
KERNEL=="event5"
SUBSYSTEM=="input"
SYSFS{dev}=="13:69"

looking at parent device '/class/input/input13':
ID=="input13"
BUS=="input"
DRIVER==""
SYSFS{name}=="em28xx IR _em28xx #0_"
SYSFS{phys}=="usb-0000:00:1d.7-8/input0"
SYSFS{uniq}==""

For the currently used attributes, we have:

The name attribute. If we make some effort to standardize it, it could be an option.
However, on several drivers, this attribute is filled with something that is generic
for the entire driver, and in several cases, like the above, it adds a device number.

The phys attribute has to do only with the bus address. Btw, the lirc drivers need
to follow the conventions here. We made a great effort around 2.6.30 or 2.6.31 to standardize
the phys attribute, as some drivers were using different conventions for it.

The uniq attribute is meant to be used as a serial number (no driver seems to use
it currently, from my tests with git grep).

By looking on other subsystems, ALSA defines two name attributes: a shortname and a longname.

The current board naming schema at the V4L drivers are a long name. For example:
"Pinnacle Dazzle DVC 90/100/101/107 / Kaiser Baas Video to DVD maker"

The rationale is that they should be user-friendly.

Maybe a similar concept could be used here: we can add a sort of shortname string
that will uniquely describe a device, with a rule for naming devices uniquely.

For example, the above device is a Hauppauge HVR950 usb stick, that is supported
by em28xx driver.

We may call it "EM28xxHVR950-00" (the last 2 chars are there to allow having board revisions,
as some devices may have more than one variant).

Another alternative would be to create an integer SYSFS attribute and use some rule to
associate the device number with the driver.

The big issue here is: how do we document that "EM28xxHVR950-00" is the Hauppauge Grey IR that
is shipped with their newer devices.

A third approach would be to identify, instead, the Remote Controller directly. So, we would
add a sysfs field like ir_type.

There are two issues here:
1) What's the name for this IR? We'll need to invent names for the existing IR's, as
those devices don't have a known brand name;
2) there are cases where the same device is provided with two or more different IR
types. If we identify the board type instead of the IR type, userspace can better handle
it, by providing a list of the possibilities.

---

No matter how we map it, we'll still need to document it somehow for userspace. What would be
better? A header file? A set of keymaps for the default IR's, added in
some directory of the Linux tree? A Documentation/IR ?

I'm for having the keymaps in some file in the kernel tree, maybe at Documentation/IR,
but this is just my 2 cents. We need to think more about that.

Comments?

Anyway, we shouldn't postpone the lirc drivers' addition due to that. There is still a lot of work
to do before we'll be able to split the tables from the kernel drivers.

Cheers,
Mauro.

2009-12-01 14:18:58

by Mauro Carvalho Chehab

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Gerd Hoffmann wrote:
> On 12/01/09 12:49, Andy Walls wrote:
>> On Tue, 2009-12-01 at 11:46 +0100, Gerd Hoffmann wrote:
>>> Once lirc_dev is merged you can easily fix this: You'll have *one*
>>> driver which supports *both* evdev and lirc interfaces. If lircd opens
>>> the lirc interface raw data will be sent there, keystrokes come in via
>>> uinput. Otherwise keystrokes are sent directly via evdev. Problem
>>> solved.
>>
>> This will be kind of strange for lirc_zilog (aka lirc_pvr150). It
>> supports IR transmit on the PVR-150, HVR-1600, and HD-PVR. I don't know
>> if transmit is raw pulse timings, but I'm sure the unit provides codes
>> on receive. Occasionally blocks of "boot data" need to be programmed
>> into the transmitter side. I suspect lirc_zilog will likely need
>> rework....
>
> Well, for IR *output* it doesn't make sense to disable evdev. One more
> reason indicating it probably is better to introduce an ioctl to
> disable evdev reporting. lircd will probably turn it off, especially
> when sending data via uinput. debug tools might not, likewise apps
> sending IR.
>
>>> so killing the in-kernel IR limits to make ir-kbd-i2c
>> ^^^^^^^^^^^^^^^^^^^
>>> being on par with lirc_i2c might be more useful in this case.
>>
>> I didn't quite understand that. Can you provide a little more info?
>
> Such as throwing away the address part of rc5 codes ...

This limit was already removed from the subsystem core by the patches
I committed recently (still only at the devel tree - I should be adding
those patches to my linux-next tree likely today).

The remaining issue is that we'll need to re-scan the IR tables for
every supported remote to be sure that we're getting the full RC5 code.
It is not complex, but it requires lots of work from people who actually have
those IR's.

Cheers,
Mauro.

2009-12-01 14:32:46

by Jarod Wilson

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Dec 1, 2009, at 4:52 AM, Gerd Hoffmann wrote:

> On 11/30/09 13:34, Mauro Carvalho Chehab wrote:
>> Christoph Bartelmus wrote:
>>> Hi Mauro,
>>>
>>> I just don't want to change a working interface just because it could be
>>> also implemented in a different way, but having no other visible advantage
>>> than using more recent kernel features.
>>
>> I agree. The main reasons to review the interface is:
>> 1) to avoid any overlaps (if are there any) with the evdev interface;
>
> Use lirc for raw samples.
> Use evdev for decoded data.

This is the approach I'm pretty well settled on wanting to take myself.

> Hardware/drivers which can handle both can support both interfaces.

Exactly.

> IMHO it makes no sense at all to squeeze raw samples through the input layer. It looks more like a serial line than a input device. In fact you can homebrew a receiver and connect it to the serial port, which was quite common in pre-usb-ir-receiver times.
>
>> 2) to have it stable enough to be used, without changes, for a long
>> time.
>
> It isn't like lirc is a new interface. It has been used in practice for years. I don't think API stability is a problem here.

Yeah, in the ~3 years I've been maintaining lirc patches for the Fedora kernels, only once has something happened where new userspace could no longer talk to old kernelspace. The majority of the work has been keeping things running as kernel interfaces change -- the 2.6.31 i2c changes are still biting us, as some capture card devices lagged behind a bit on converting to the new i2c scheme, making it impossible for lirc_i2c and/or lirc_zilog to bind (and ir-kbd-i2c, for that matter).

>> True, but even if we want to merge lirc drivers "as-is", the drivers will
>> still need changes, due to kernel CodingStyle, due to the usage of some API's
>> that may be deprecated, due to some breakage with non-Intel architectures, due
>> to some bugs that kernel hackers may discover, etc.
>
> I assumed this did happen in already in preparation of this submission?

Yes. There may still be a bit of work to do here, but there was a crew of us about a year, year and a half ago, that did a major sweep through all the lirc drivers, reformatting things so that we were at least mostly checkpatch-clean. The original lirc patches I put into the Fedora Core 6 kernel had several thousand lines of warnings and errors, while with the current lirc patches in Fedora 12, I get:

total: 1 errors, 12 warnings, 15987 lines checked

The error is new, hadn't seen that one before, going to fix it now... :) The warnings are almost all the same thing, "WARNING: struct file_operations should normally be const", need to fix that too, though we actually do edit the lirc_fops on a per-device basis right now, so they can't be const...

Okay, the error and one of the warnings are gone from my local tree, now it's all just the above.

But yeah, for the most part, I think the coding style and formatting of the lirc drivers *does* look like kernel code these days, minor fixages suggested in Mauro's review aside. I submitted only a 3-part series (lirc_dev, lirc_mceusb and lirc_imon) to keep from overwhelming anyone (myself included) with too much code at once, and went with the two device drivers that I've personally done the most work on and have several devices driven by (which includes the IR parts I've been using in my "production" MythTV setup for years now).

--
Jarod Wilson
[email protected]


2009-12-01 15:29:08

by Gerd Hoffmann

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Hi,

> The big issue here is: how do we document that "EM28xxHVR950-00" is the Hauppauge Grey IR that
> is shipped with their newer devices.
>
> A third approach would be to identify, instead, the Remote Controller directly. So, we would
> add a sysfs field like ir_type.

I'd pick a more descriptive name like 'bundled_remote'.
Maybe an additional attribute could say which protocol the bundled
remote speaks (rc5, ...), so userspace could do something sensible by
default even if it has no data about the bundled remote.

> There are two issues here:
> 1) What's the name for this IR? We'll need to invent names for the existing IR's, as
> those devices don't have a known brand name;

Naming them by the hardware they are bundled with should work reasonably well.

> 2) there are cases where the same device is provided with two or more different IR
> types. If we identify the board type instead of the IR type, userspace can better handle
> it, by providing a list of the possibilities.

We could also provide a list of possible remotes directly via sysfs
instead of expecting userspace to know which remotes can come bundled with
which board.

> No matter how we map, we'll still need to document it somehow to userspace. What would be
> the better? A header file? A set of keymaps from the default IR's that will be added
> on some directory at the Linux tree? A Documentation/IR ?

I'd suggest tools/ir/ (map loader intended to be called by udev) and the
maps being files in the linux source tree (next to the drivers?). The
maps probably should be installed on some standard location (pretty much
like firmware).

> Anyway, we shouldn't postpone lirc drivers addition due to that. There are still lots of work
> to do before we'll be able to split the tables from the kernel drivers.

Indeed. The sysfs bits are future work for both lirc and evdev drivers.
There is no reason to make the lirc merge wait for it.

cheers,
Gerd

2009-12-03 04:29:51

by Jarod Wilson

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Dec 1, 2009, at 10:28 AM, Gerd Hoffmann wrote:

>> Anyway, we shouldn't postpone lirc drivers addition due to that. There are still lots of work
>> to do before we'll be able to split the tables from the kernel drivers.
>
> Indeed. The sysfs bits are future work for both lirc and evdev drivers. There is no reason to make the lirc merge wait for it.

At this point, my plan is to try to finish cleaning up lirc_dev and lirc_mceusb at least over the weekend while at FUDCon up in Toronto, and resubmit them next week.

I'm still on the fence over what to do about lirc_imon. The driver supports essentially 3 generations of devices. First-gen is very old imon parts that don't do onboard decoding. Second-gen is the devices that all got (insanely stupidly) tagged with the exact same usb device ID (0x15c2:0xffdc), some of which have an attached VFD, some with an attached LCD, some with neither, some that are actually RF parts, but all (I think) of which do onboard decoding. Third-gen is the latest stuff, which is all pretty sane, unique device IDs for unique devices, onboard decoding, etc.

So the lirc_imon I submitted supports all device types, with the onboard decode devices defaulting to operating as pure input devices, but an option to pass hex values out via the lirc interface (which is how they've historically been used -- the pure input stuff I hacked together just a few weeks ago), to prevent functional setups from being broken for those who prefer the lirc way.

What I'm debating is whether this should be split into two drivers, one for the older devices that don't do onboard decoding (which would use the lirc_dev interface) called 'lirc_imon' or 'lirc_imon_legacy', and one that is a pure input driver, not unlike the ati_remote{,2} drivers, with no lirc_dev dependency at all, probably called simply 'imon'. Could still be used with lirc via its devinput userspace driver, of course. But if I split it out, there may end up being a fair amount of code duplication, and the resulting lirc_imon wouldn't be as interesting to submit, and I wouldn't have any devices that worked with it, I've only got onboard decode devices... The new imon input driver would be a separate submission that is completely irrelevant to this whole discussion.

So perhaps for round three, lirc_dev, lirc_mceusb and lirc_zilog, to make it more interesting...

--
Jarod Wilson
[email protected]


2009-12-03 12:09:57

by Gerd Hoffmann

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On 12/03/09 05:29, Jarod Wilson wrote:
> On Dec 1, 2009, at 10:28 AM, Gerd Hoffmann wrote:
>
>>> Anyway, we shouldn't postpone lirc drivers addition due to that.
>>> There are still lots of work to do before we'll be able to split
>>> the tables from the kernel drivers.
>>
>> Indeed. The sysfs bits are future work for both lirc and evdev
>> drivers. There is no reason to make the lirc merge wait for it.
>
> At this point, my plan is to try to finish cleaning up lirc_dev and
> lirc_mceusb at least over the weekend while at FUDCon up in Toronto,
> and resubmit them next week.

Good plan IMHO. Having lirc_dev merged quickly allows in-kernel drivers
to start supporting lirc.

One final pass over the lirc interface would be good, taking the chance
to fixup anything before the ABI is set in stone with the mainline
merge. Things to look at:

(1) Make sure ioctl structs are 32/64 bit invariant.
(2) Maybe add some reserved fields to allow extending later
without breaking the ABI.
(3) Someone suggested a 'commit' ioctl which would activate
the parameters set in (multiple) previous ioctls. Makes sense?
(4) Add an ioctl to enable/disable evdev event submission for
evdev/lirc hybrid drivers.

> I'm still on the fence over what to do about lirc_imon. The driver
> supports essentially 3 generations of devices. First-gen is very old
> imon parts that don't do onboard decoding. Second-gen is the devices
> that all got (insanely stupidly) tagged with the exact same usb
> device ID (0x15c2:0xffdc), some of which have an attached VFD, some
> with an attached LCD, some with neither, some that are actually RF
> parts, but all (I think) of which do onboard decoding. Third-gen is
> the latest stuff, which is all pretty sane, unique device IDs for
> unique devices, onboard decoding, etc.

Do second-gen and third-gen devices have a 'raw mode'? If so, then
there should be a lirc interface for raw data access.

> So the lirc_imon I submitted supports all device types, with the
> onboard decode devices defaulting to operating as pure input devices,
> but an option to pass hex values out via the lirc interface (which is
> how they've historically been used -- the pure input stuff I hacked
> together just a few weeks ago), to prevent functional setups from
> being broken for those who prefer the lirc way.

Hmm. I'd tend to limit the lirc interface to the 'raw samples' case.

Historically it has also been used to pass decoded data (i.e. rc5) from
devices with onboard decoding, but for that in-kernel mapping + input
layer really fits better.

> What I'm debating is whether this should be split into two drivers,
> one for the older devices that don't do onboard decoding (which would
> use the lirc_dev interface) called 'lirc_imon' or 'lirc_imon_legacy',
> and one that is a pure input driver, not unlike the ati_remote{,2}
> drivers, with no lirc_dev dependency at all, probably called simply
> 'imon'.

i.e. lirc_imon would support first+second gen, and imon third-gen
devices, without overlap?

> But if I split it out, there may end up being a
> fair amount of code duplication,

You could try to split common code into a third module used by the other
two. Or have one module for all devices which is an evdev/lirc hybrid.

cheers,
Gerd

2009-12-03 17:31:21

by Krzysztof Halasa

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

[email protected] (Christoph Bartelmus) writes:

> Currently I would tend to an approach like this:
> - raw interface to userspace using LIRC
> - fixed set of in-kernel decoders that can handle bundled remotes

I'd modify it a bit:
- raw interface to userspace using LIRC
- fixed set of in-kernel decoders

Longer term:

Removing the key assignment tables from the kernel. Plug-and-play can
then be achieved with udev. The only thing needed from the kernel is
indicating the tuner/sensor type; udev can guess the bundled remote type.

Porting the in-kernel drivers (such as ir-common) to LIRC interface
(while not removing the input layer mode).
--
Krzysztof Halasa

2009-12-03 17:47:45

by Krzysztof Halasa

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Gerd Hoffmann <[email protected]> writes:

> I'd pick a more descriptive name like 'bundled_remote'.
> Maybe an additional attribute could say which protocol the bundled
> remote speaks (rc5, ...), so userspace could do something sensible by
> default even if it has no data about the bundled remote.

This is problematic since there can be more than one remote bundled.
If we export the sensor (tuner etc) name, userspace can make some
decision (or ask the user etc).

The protocol alone won't help - the user will have to teach the system
about the remote anyway. This should be made trivial at least for common
protocols, though.

> Naming them by the hardware they are bundled with should work reasonably
> well.

I guess udev can look at parent PCI vendor/device and subsystem
vendor/device for most PCI cases. Ditto with USB. For generic stuff like
TSOP* coupled with a resistor and connected to RS232 port, we can't
guess anyway.

> We also could also provide a list of possible remotes directly via
> sysfs

The kernel has no way to _know_ this information. The policy is better
handled in userland.

>> Anyway, we shouldn't postpone lirc drivers addition due to that.

Actually merging lirc is a prerequisite for a number of changes.
--
Krzysztof Halasa

2009-12-03 17:55:35

by Dmitry Torokhov

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Thu, Dec 03, 2009 at 01:09:14PM +0100, Gerd Hoffmann wrote:
> On 12/03/09 05:29, Jarod Wilson wrote:
>> On Dec 1, 2009, at 10:28 AM, Gerd Hoffmann wrote:
>>
>>>> Anyway, we shouldn't postpone lirc drivers addition due to that.
>>>> There are still lots of work to do before we'll be able to split
>>>> the tables from the kernel drivers.
>>>
>>> Indeed. The sysfs bits are future work for both lirc and evdev
>>> drivers. There is no reason to make the lirc merge wait for it.
>>
>> At this point, my plan is to try to finish cleaning up lirc_dev and
>> lirc_mceusb at least over the weekend while at FUDCon up in Toronto,
>> and resubmit them next week.
>
> Good plan IMHO. Having lirc_dev merged quickly allows in-kernel drivers
> start supporting lirc.

No, please, wait just a minute. I know it is tempting to just merge
lirc_dev and start working, but can we first agree on the overall
subsystem structure before doing so? It is still quite unclear to me.

The open questions (for me at least):

- do we create a new class infrastructure for all receivers or only for
ones plugged into lirc_dev? Remember that classifying objects affects
how udev and friends see them and may either help or hurt writing PnP
rules.

- do we intend to support in-kernel software decoders? What is the
structure? Do we organize them as a module to be used by driver
directly or the driver "streams" the data to IR core and the core
applies decoders (in the same fashion input events from drivers flow
into input core and then distributed to all bound interfaces for
processing/conversion/transmission to userspace)?

- how do we control which decoder should handle particular
receiver/remote? Is it driver's decision, decoder's decision, user's
or all of the above?

- do we allow having several decoders active at once for a receiver?

- who decides that we want to utilize lirc_dev? Driver's themselves, IR
core (looking at the driver/device "capabilities"), something else?

- do we recognize and create input devices on the fly or require user
intervention? Semantics for splitting into several input/event
devices?

Could anyone please draw me a picture, starting with a "receiver"
piece of hardware. I am not concerned much with how exactly receiver is
plugged into a particular subsystem (DVB/V4L etc) since it would be
_their_ implementation detail, but with the flow in/out of that
"receiver" device.

Now as far as input core goes I see very limited number of changes that
may be needed:

- Allow to extend size of "scancode" in EVIOC{S,G}KEYCODE if we are
unable to limit ourselves to 32 bits (keeping compatibility of course)

- Maybe adding new ioctl to "zap" the keymap table

- Adding more key EV_KEY/KEY_* definitons, if needed

Thanks.

--
Dmitry

2009-12-03 18:34:20

by Mauro Carvalho Chehab

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Let me draw my view:

On Thu, 3 Dec 2009 09:55:31 -0800,
Dmitry Torokhov <[email protected]> wrote:

> No, please, wait just a minute. I know it is tempting to just merge
> lirc_dev and start working, but can we first agree on the overall
> subsystem structure before doing so. It is still quite inclear to me.
>
> The open questions (for me at least):
>
> - do we create a new class infrastructure for all receivers or only for
> ones plugged into lirc_dev? Remember that classifying objects affects
> how udev and friemds see them and may either help or hurt writing PnP
> rules.

IMO, I would create it as /sys/class/input/IR (just like /mice). I
don't see why we need to treat lirc drivers differently from non-lirc drivers
in the case of the sysfs class. As devices with raw input capabilities will have
another dev to communicate through, this means that we'll need a /lirc node
there to point to the lirc dev.

>
> - do we intend to support in-kernel sotfware decoders?

Yes.

> - What is the structure? Do we organize them as a module to be used by driver
> directly or the driver "streams" the data to IR core and the core
> applies decoders (in the same fashion input events from drivers flow
> into input core and then distributed to all bound interfaces for
> processing/conversion/transmission to userspace)?

My plan is to expand the ir-common.ko module and rename it to ir-core, to be
the IR core module for the evdev interface. I'm already working on it.
My idea for the architecture is that the lirc-core module will use
ir-core, which is where the IR decoders and the evdev interface will be.

IMO, we should move them from /drivers/media/common to /drivers/input/ir.
It makes sense to use kfifo to send the data to the in-kernel decoders.

> - how do we control which decoder should handle particular
> receiver/remote? Is it driver's decision, decoder's decision, user's
> or all of the above?

It should be all of the above, since some hardware will only work with certain
decoders (hardware limitation), or they may already have a legacy raw mode->scancode
decoder. In the latter case, those decoders will be removed from
the existing drivers, but this action will take some time.

Some sysfs attributes are needed to specify a list of the supported protocols
and the currently used one. I'll prepare a proposed patch for it, after we
finish aligning the requirements.

> - do we allow to have several decorers active at once for a receiver?

Yes, as an optional requirement, since some hardware won't support it.

> - who decides that we want to utilize lirc_dev? Driver's themselves, IR
> core (looking at the driver/device "capabilities"), something else?

Drivers that support raw mode should interface via lirc-core, which will,
in turn, use ir-core.

Drivers that have in-hardware decode will directly use ir-core.

> - do we recognize and create input devices "on-fly" or require user
> intervention? Semantics for splitting into several input/event
> devices?

I don't have a strong opinion here.

I don't see any way of doing it, except with the very few protocols that
send vendor IDs. I don't mind if this feature is used by the
drivers/decoders that can support it.

> Could anyone please draw me a picture, starting with a "receiver"
> piece of hardware. I am not concerned much with how exactly receiver is
> plugged into a particular subsystem (DVB/V4L etc) since it would be
> _their_ implementation detail, but with the flow in/out of that
> "receiver" device.

Not sure if I got your idea. Basically, what I see is:

For device drivers that work in raw mode:
[IR physical device] ==> [IR receiver driver] ==> [lirc-core] ==> [decoder] ==> [ir-core] ==> [evdev]

(possibly, we can merge the decoders and ir-core into one module at the beginning,
depending on the size of the decoders)

For device drivers that work only in evdev mode (those with hardware decoders):

[IR physical device] ==> [IR receiver driver] ==> [ir-core] ==> [evdev]

>
> Now as far as input core goes I see very limited number of changes that
> may be needed:
>
> - Allow to extend size of "scancode" in EVIOC{S,G}KEYCODE if we are
> unable to limit ourselves to 32 bits (keeping compatibility of course)

Yes, but the way EVIOC{S,G}KEYCODE currently works, it performs poorly when you have a
table of 2^64 size. The table is very sparse, but, as the key to get/set a code is
the scancode, it is very hard to enumerate the actual entries there. It would be
better to use an index parameter for them, instead of using the scancode as the key.

> - Maybe adding new ioctl to "zap" the keymap table

Yes, this is needed.

> - Adding more key EV_KEY/KEY_* definitons, if needed

Probably.

--

Cheers,
Mauro

2009-12-03 18:56:16

by [email protected]

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Thu, Dec 3, 2009 at 12:55 PM, Dmitry Torokhov
<[email protected]> wrote:
> On Thu, Dec 03, 2009 at 01:09:14PM +0100, Gerd Hoffmann wrote:
>> On 12/03/09 05:29, Jarod Wilson wrote:
>>> On Dec 1, 2009, at 10:28 AM, Gerd Hoffmann wrote:
>>>
>>>>> Anyway, we shouldn't postpone lirc drivers addition due to that.
>>>>> There are still lots of work to do before we'll be able to split
>>>>> the tables from the kernel drivers.
>>>>
>>>> Indeed. The sysfs bits are future work for both lirc and evdev
>>>> drivers. There is no reason to make the lirc merge wait for it.
>>>
>>> At this point, my plan is to try to finish cleaning up lirc_dev and
>>> lirc_mceusb at least over the weekend while at FUDCon up in Toronto,
>>> and resubmit them next week.
>>
>> Good plan IMHO. Having lirc_dev merged quickly allows in-kernel drivers
>> start supporting lirc.
>
> No, please, wait just a minute. I know it is tempting to just merge
> lirc_dev and start working, but can we first agree on the overall
> subsystem structure before doing so? It is still quite unclear to me.
>
> The open questions (for me at least):

Great list, good starting point for evaluating the design alternatives.

Have the various use cases all been enumerated?

> - do we create a new class infrastructure for all receivers or only for
>   ones plugged into lirc_dev? Remember that classifying objects affects
>   how udev and friends see them and may either help or hurt writing PnP
>   rules.
>
> - do we intend to support in-kernel software decoders? What is the
>   structure? Do we organize them as a module to be used by the driver
>   directly, or does the driver "stream" the data to the IR core and the core
>   applies decoders (in the same fashion input events from drivers flow
>   into the input core and are then distributed to all bound interfaces for
>   processing/conversion/transmission to userspace)?
>
> - how do we control which decoder should handle a particular
>   receiver/remote? Is it the driver's decision, the decoder's decision, the
>   user's, or all of the above?
>
> - do we allow several decoders to be active at once for a receiver?
>
> - who decides that we want to utilize lirc_dev? Drivers themselves, the IR
>   core (looking at the driver/device "capabilities"), something else?

Is the lirc device protocol documented? The lirc device only allows a
single app to read from it; is that ok? And what about the problem the
mouse driver had with partial messages: you'd read half a message in one
read, and by the time you came back around to read the second half, the
first half had been overwritten? That led to the transactions in evdev.

>
> - do we recognize and create input devices "on-fly" or require user
>   intervention? Semantics for splitting into several input/event
>   devices?

Complete on-the-fly is not going to do what you want it to. For
example Sony TVs use three variations of the Sony protocol in a single
TV.

A slick solution would have unknown IR events trigger a mapping
definition app via udev. The mapping app would ask you to hit a few
more keys until it locates the corresponding device profile in the IR
database. Then it could write a script which will create a new evdev
device for this remote and load the keycode map for it at boot.

The big IR profile database from All-In-One is published. For this
application you'd need to add entries mapping each IR code to a Linux
keycode. This is the same problem you have with scancodes and various
languages on keyboards. I'll bet the guys at http://www.openremote.org
would help with this.

>
> Could anyone please draw me a picture, starting with a "receiver"
> piece of hardware. I am not concerned much with how exactly receiver is
> plugged into a particular subsystem (DVB/V4L etc) since it would be
> _their_ implementation detail, but with the flow in/out of that
> "receiver" device.
>
> Now as far as input core goes I see very limited number of changes that
> may be needed:
>
> - Allow to extend size of "scancode" in EVIOC{S,G}KEYCODE if we are
> unable to limit ourselves to 32 bits (keeping compatibility of course)
>
> - Maybe adding new ioctl to "zap" the keymap table
>
> - Adding more key EV_KEY/KEY_* definitons, if needed

Aren't EV_IR events needed so that an app for building keymaps can be written?
Normal apps would not use EV_IR events.

--
Jon Smirl
[email protected]

2009-12-03 21:10:49

by Mauro Carvalho Chehab

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Gerd Hoffmann wrote:

> One final pass over the lirc interface would be good, taking the chance
> to fixup anything before the ABI is set in stone with the mainline
> merge. Things to look at:
>
> (1) Make sure ioctl structs are 32/64 bit invariant.
> (2) Maybe add some reserved fields to allow extending later
> without breaking the ABI.
> (3) Someone suggested a 'commit' ioctl which would activate
> the parameters set in (multiple) previous ioctls. Makes sense?

A better approach is to create an ioctl that can send a group of value/attribute pairs
at the same time. We used this strategy for V4L extended controls to do things like
setting up an MPEG encoder (where we need to adjust several parameters at the same time,
and adding all of them in one struct would be hard, since you can't specify all
of them at the same time). The same strategy is also used by the DVB API to allow it
to use any arbitrary protocol. It was conceived to support DVB-S2.

> (4) Add a ioctl to enable/disable evdev event submission for
> evdev/lirc hybrid drivers.

Yes, all above makes sense.
>
>> I'm still on the fence over what to do about lirc_imon. The driver
>> supports essentially 3 generations of devices. First-gen is very old
>> imon parts that don't do onboard decoding. Second-gen is the devices
>> that all got (insanely stupidly) tagged with the exact same usb
>> device ID (0x15c2:0xffdc), some of which have an attached VFD, some
>> with an attached LCD, some with neither, some that are actually RF
>> parts, but all (I think) of which do onboard decoding. Third-gen is
>> the latest stuff, which is all pretty sane, unique device IDs for
>> unique devices, onboard decoding, etc.
>
> Do second-gen and third-gen devices have a 'raw mode'? If so, then
> there should be a lirc interface for raw data access.
>
>> So the lirc_imon I submitted supports all device types, with the
>> onboard decode devices defaulting to operating as pure input devices,
>> but an option to pass hex values out via the lirc interface (which is
>> how they've historically been used -- the pure input stuff I hacked
>> together just a few weeks ago), to prevent functional setups from
>> being broken for those who prefer the lirc way.
>
> Hmm. I'd tend to limit the lirc interface to the 'raw samples' case.

> Historically it has also been used to pass decoded data (i.e. rc5) from
> devices with onboard decoding, but for that in-kernel mapping + input
> layer really fits better.

I agree.

>
>> What I'm debating is whether this should be split into two drivers,
>> one for the older devices that don't do onboard decoding (which would
>> use the lirc_dev interface) called 'lirc_imon' or 'lirc_imon_legacy',
>> and one that is a pure input driver, not unlike the ati_remote{,2}
>> drivers, with no lirc_dev dependency at all, probably called simply
>> 'imon'.
>
> i.e. lirc_imon would support first+second gen, and imon third-gen
> devices, without overlap?
>
>> But if I split it out, there may end up being a
>> fair amount of code duplication,
>
> You could try to split common code into a third module used by the other
> two. Or have one module for all devices which is a evdev/lirc hybrid.
>
Splitting it into a core driver and two different drivers for raw and non-raw
devices makes sense to me.

An alternative would be to have just one module, but with the code split into
3 parts. That allows easier understanding, IMHO.

Cheers,
Mauro.

2009-12-03 21:52:47

by lirc

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Hi Mauro,

on 03 Dec 09 at 19:10, Mauro Carvalho Chehab wrote:
[...]
>>> So the lirc_imon I submitted supports all device types, with the
>>> onboard decode devices defaulting to operating as pure input devices,
>>> but an option to pass hex values out via the lirc interface (which is
>>> how they've historically been used -- the pure input stuff I hacked
>>> together just a few weeks ago), to prevent functional setups from
>>> being broken for those who prefer the lirc way.
>>
>> Hmm. I'd tend to limit the lirc interface to the 'raw samples' case.

>> Historically it has also been used to pass decoded data (i.e. rc5) from
>> devices with onboard decoding, but for that in-kernel mapping + input
>> layer really fits better.

> I agree.

Consider passing the decoded data through lirc_dev.
- there's a large user base already that uses this mode through lirc and
would be forced to switch to input layer if it disappears.
- that way all IR drivers would consistently use lirc interface and all
PnP hooks could be implemented there in one place.
- drivers like lirc_imon that have to support both raw and decoded mode,
currently have to implement both the lirc and the input interface.
Complexity could be reduced in such cases. But maybe this is necessary
anyway for lirc_imon that also includes mouse functionality. Jarod?

Christoph

2009-12-03 22:12:36

by Dmitry Torokhov

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Thu, Dec 03, 2009 at 10:51:00PM +0100, Christoph Bartelmus wrote:
> Hi Mauro,
>
> on 03 Dec 09 at 19:10, Mauro Carvalho Chehab wrote:
> [...]
> >>> So the lirc_imon I submitted supports all device types, with the
> >>> onboard decode devices defaulting to operating as pure input devices,
> >>> but an option to pass hex values out via the lirc interface (which is
> >>> how they've historically been used -- the pure input stuff I hacked
> >>> together just a few weeks ago), to prevent functional setups from
> >>> being broken for those who prefer the lirc way.
> >>
> >> Hmm. I'd tend to limit the lirc interface to the 'raw samples' case.
>
> >> Historically it has also been used to pass decoded data (i.e. rc5) from
> >> devices with onboard decoding, but for that in-kernel mapping + input
> >> layer really fits better.
>
> > I agree.
>
> Consider passing the decoded data through lirc_dev.
> - there's a large user base already that uses this mode through lirc and
> would be forced to switch to input layer if it disappears.

I believe it was agreed that lirc-dev should be used mainly for decoding
protocols that are more conveniently decoded in userspace and the
results would be looped back into input layer through evdev which will
be the main interface for consumer applications to use.

--
Dmitry

2009-12-03 23:47:35

by Andy Walls

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Thu, 2009-12-03 at 19:10 -0200, Mauro Carvalho Chehab wrote:
> Gerd Hoffmann wrote:
>
> > One final pass over the lirc interface would be good, taking the chance
> > to fixup anything before the ABI is set in stone with the mainline
> > merge. Things to look at:

> > (3) Someone suggested a 'commit' ioctl which would activate
> > the parameters set in (multiple) previous ioctls. Makes sense?
>
> A better approach is to create an ioctl that can send a group of value/attribute pairs
> at the same time. We used this strategy for V4L extended controls to do things like
> setting up an MPEG encoder (where we need to adjust several parameters at the same time,
> and adding all of them in one struct would be hard, since you can't specify all
> of them at the same time). The same strategy is also used by the DVB API to allow it
> to use any arbitrary protocol. It was conceived to support DVB-S2.

Gerd,

I mentioned it. The reason I mentioned it is that partial configuration
of the IR chips that I work with, before all the IOCTLs are done, *may*
cause:

1. Unnecessary, extra I2C bus operations leading to delay on
configuration. That's no big deal as it would really only matter for a
genuine discrete CX2584x chip with IR implemented using the integrated
IR controller. I do not know of any TV capture cards wired up like
that.

2. If the Low Pass Filter gets turned off, or set to very short time
interval, bad ambient light conditions could create an "interrupt
storm". As soon as all the IOCTLs complete, the storm would stop.


We can probably do without the change in lirc_dev ioctl() altogether,
since it only *really* affects one set of chips that I work with, and
only during configuration. I could instead implement interrupt storm
detection and interrupt rate limiting for those devices.


BTW IIRC, LIRC likes to resend the ioctl() to set the carrier frequency
over again when it goes to transmit. That's kind of annoying, but I can
work around that too by caching a copy of the carrier freq LIRC set the
last time.

Regards,
Andy

2009-12-04 07:38:04

by lirc

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Hi Dmitry,

on 03 Dec 09 at 14:12, Dmitry Torokhov wrote:
[...]
>> Consider passing the decoded data through lirc_dev.
[...]
> I believe it was agreed that lirc-dev should be used mainly for decoding
> protocols that are more conveniently decoded in userspace and the
> results would be looped back into input layer through evdev which will
> be the main interface for consumer applications to use.

Quoting myself:
> Currently I would tend to an approach like this:
> - raw interface to userspace using LIRC

For me this includes both the pulse/space data and also the scan codes
when hardware does the decoding.
Consider cases like this:
http://lirc.sourceforge.net/remotes/lg/6711A20015N

This is an air-conditioner remote.
The entries that you see in this config file are not really separate
buttons. Instead the remote just sends the current settings for e.g.
temperature encoded in the protocol when you press some up/down key. You
really don't want to map all possible temperature settings to KEY_*
events. For such cases it would be nice to have access to the raw scan
codes from user space to do interpretation of the data.
The default would still be to pass the data to the input layer, but it
won't hurt to have the possibility to access the raw data somehow.

Christoph

2009-12-04 10:06:47

by Dmitry Torokhov

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Thu, Dec 03, 2009 at 04:33:28PM -0200, Mauro Carvalho Chehab wrote:
> Let me draw my view:
>
> Em Thu, 3 Dec 2009 09:55:31 -0800
> Dmitry Torokhov <[email protected]> escreveu:
>
> > No, please, wait just a minute. I know it is tempting to just merge
> > lirc_dev and start working, but can we first agree on the overall
> > subsystem structure before doing so. It is still quite unclear to me.
> >
> > The open questions (for me at least):
> >
> > - do we create a new class infrastructure for all receivers or only for
> > ones plugged into lirc_dev? Remember that classifying objects affects
> > how udev and friends see them and may either help or hurt writing PnP
> > rules.
>
> IMO, I would create it as /sys/class/input/IR (just like the /mice). I

That will not work. The driver core does not support the notion of subclasses;
Greg and Kay insist on a flat class namespace. Mice do not belong to a
separate [sub]class, they are all members of the input class, with a peculiar
directory structure.

IRs however, I believe, deserve a full-fledged class (since they are, in
my view, parents to the input devices representing remotes). I would
argue for the following sysfs hierarchy for the main device tree:

/sys/devices/pci0000:00/../pci.../../irrcv0/input14/event16
/sys/devices/pci0000:00/../usb.../../irrcv1/input15/event17
                                           /input16/event18

And corresponding class:

/sys/class/irrcv/irrcv0
                 irrcv1

and so on.

>
don't see why we need to treat lirc drivers differently from non-lirc drivers
in the case of the sysfs class.

I do agree that _all_ infrared receivers should belong to this class,
and not only ones utilizing lirc_dev.

> As devices with raw input capabilities will have
> another dev to communicate, this means that we'll need a /lirc node
> there to point to lirc dev.
>
> >
> > > - do we intend to support in-kernel software decoders?
>
> Yes.
>

Good.

> > - What is the structure? Do we organize them as a module to be used by driver
> > directly or the driver "streams" the data to IR core and the core
> > applies decoders (in the same fashion input events from drivers flow
> > into input core and then distributed to all bound interfaces for
> > processing/conversion/transmission to userspace)?
>
> My plan is to expand ir-common.ko module and rename it to ir-core, to be
> the IR core module for the evdev interface. I'm already working on it.
> My idea for an architecture is that the lirc-core module will use
> ir-common where the IR decoders will be, and the evdev interface.
>

How is lirc-core related to the current lirc code? If it is not the same,
maybe we should not call it lirc, to avoid confusion.


> IMO, we should move them from /drivers/media/common to /drivers/input/ir.
> It makes sense to use kfifo to send the data to the in-kernel decoders.
>

OK.

> > - how do we control which decoder should handle particular
> > receiver/remote? Is it driver's decision, decoder's decision, user's
> > or all of the above?
>
> It should be all the above, since some hardware will only work with certain
> decoders (hardware limitation) or they may have already a raw mode->scancode
> legacy decoder. In the latter case, those decoders will be removed from
> the existing drivers, but this action will take some time.
>
> Some sysfs attributes are needed to specify a list of the supported protocols
> and the currently used one. I'll prepare a proposed patch for it, after we
> finish aligning the requirements.
>
> > - do we allow several decoders to be active at once for a receiver?
>
> Yes, as an optional requirement, since some hardware won't support it.
>
> > - who decides that we want to utilize lirc_dev? Drivers themselves, IR
> > core (looking at the driver/device "capabilities"), something else?
>
> Drivers that support raw mode, should interface via lirc-core, that will,
> in turn use ir-core.
>
> Drivers that have in-hardware decode will directly use ir-core.
>
> > - do we recognize and create input devices "on-fly" or require user
> > intervention? Semantics for splitting into several input/event
> > devices?
>
> I don't have a strong opinion here.
>
> I don't see any way for doing it, except with very few protocols that
> sends vendor IDs. I don't care if this feature can be used by the
> drivers/decoders that could support it.
>
> > Could anyone please draw me a picture, starting with a "receiver"
> > piece of hardware. I am not concerned much with how exactly receiver is
> > plugged into a particular subsystem (DVB/V4L etc) since it would be
> > _their_ implementation detail, but with the flow in/out of that
> > "receiver" device.
>
> Not sure if I got your idea. Basically, what I see is:
>
> For device drivers that work in raw mode:
> [IR physical device] ==> [IR receiver driver] ==> [lirc-core] ==> [decoder] ==> [ir-core] ==> [evdev]
>
> (eventually, we can merge decoder and ir-core into one module at the beginning,
> depending on the size of the decoders)
>
> For device drivers that work only in evdev mode (those with hardware decoders):
>
> [IR physical device] ==> [IR receiver driver] ==> [ir-core] ==> [evdev]
>

Maybe we are talking about the same things and it is just names that are
confusing. I'd imagine something like this:


In-kernel decoding:

[IR physical device] => [IR receiver driver] => [IR core] => [decoder] => [input core] => [evdev]
                                                          => [decoder] => [input core] => [evdev]

Hardware decoder:
[IR physical device] => [IR receiver driver] => [IR core]
                                                          => [input core] => [evdev]

I.e. we still register with the IR core, but the driver communicates directly with the input device.

Userspace decoding:
[IR physical device] => [IR receiver driver] => [IR core] => [lirc_dev] => [lircd] => [uinput] => [input core] => [evdev]

Essentially lirc_dev becomes a special case of decoder that, instead of
connecting to the input core and creating input devices, passes the data to
userspace.

I did not show the block that you call ir-core since I expect it to be more
like a library rather than an object in overall structure.


> >
> > Now as far as input core goes I see very limited number of changes that
> > may be needed:
> >
> > - Allow to extend size of "scancode" in EVIOC{S,G}KEYCODE if we are
> > unable to limit ourselves to 32 bits (keeping compatibility of course)
>
> Yes, but the way EVIOC{S,G}KEYCODE currently works, it performs poorly when you have a
> table with 2^64 size. The table is very sparse but, as the key to get/set a code is
> the scancode, it is very hard to enumerate the actual entries there. It would be
> better to use an index parameter for them, instead of using the scancode as the key.
>

evdev does not really care what you use as a scancode. So nothing stops
your driver from reporting an index as the scancode and accepting an index from the
ioctl. The true "scancode" will thus be completely hidden from userspace.
In fact a few drivers do just that.

--
Dmitry

2009-12-04 14:12:56

by Mauro Carvalho Chehab

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Em Fri, 4 Dec 2009 02:06:42 -0800
Dmitry Torokhov <[email protected]> escreveu:

> On Thu, Dec 03, 2009 at 04:33:28PM -0200, Mauro Carvalho Chehab wrote:
> > Let me draw my view:
> >
> > Em Thu, 3 Dec 2009 09:55:31 -0800
> > Dmitry Torokhov <[email protected]> escreveu:
> >
> > > No, please, wait just a minute. I know it is tempting to just merge
> > > lirc_dev and start working, but can we first agree on the overall
> > > subsystem structure before doing so. It is still quite unclear to me.
> > >
> > > The open questions (for me at least):
> > >
> > > - do we create a new class infrastructure for all receivers or only for
> > > ones plugged into lirc_dev? Remember that classifying objects affects
> > > how udev and friends see them and may either help or hurt writing PnP
> > > rules.
> >
> > IMO, I would create it as /sys/class/input/IR (just like the /mice). I
>
> That will not work. The driver core does not support the notion of subclasses;
> Greg and Kay insist on a flat class namespace. Mice do not belong to a
> separate [sub]class, they are all members of the input class, with a peculiar
> directory structure.
>
> IRs however, I believe, deserve a full-fledged class (since they are, in
> my view, parents to the input devices representing remotes). I would
> argue for the following sysfs hierarchy for the main device tree:
>
> /sys/devices/pci0000:00/../pci.../../irrcv0/input14/event16
> /sys/devices/pci0000:00/../usb.../../irrcv1/input15/event17
>                                            /input16/event18
>
> And corresponding class:
>
> /sys/class/irrcv/irrcv0
>                  irrcv1
>
> and so on.

Seems fine to me.
>
> >
> > don't see why we need to treat lirc drivers differently from non-lirc drivers
> > in the case of the sysfs class.
>
> I do agree that _all_ infrared receivers should belong to this class,
> and not only ones utilizing lirc_dev.
>
> > As devices with raw input capabilities will have
> > another dev to communicate, this means that we'll need a /lirc node
> > there to point to lirc dev.
> >
> > >
> > > - do we intend to support in-kernel software decoders?
> >
> > Yes.
> >
>
> Good.
>
> > > - What is the structure? Do we organize them as a module to be used by driver
> > > directly or the driver "streams" the data to IR core and the core
> > > applies decoders (in the same fashion input events from drivers flow
> > > into input core and then distributed to all bound interfaces for
> > > processing/conversion/transmission to userspace)?
> >
> > My plan is to expand ir-common.ko module and rename it to ir-core, to be
> > the IR core module for the evdev interface. I'm already working on it.
> > My idea for an architecture is that the lirc-core module will use
> > ir-common where the IR decoders will be, and the evdev interface.
> >
>
> How is lirc-core related to the current lirc code? If it is not the same,
> maybe we should not call it lirc, to avoid confusion.

Just to better illustrate what I'm seeing, I broke the IR generic
code into two components:

lirc core - the module that receives raw pulse/space and creates
a device to receive raw API pulse/space events;

IR core - the module that receives scancodes, convert them into
keycodes and send via evdev interface.

We may change the nomenclature later, but I'm seeing the core as two different
modules, since there are cases where the lirc core won't be used (those
devices where there's no way to get pulse/space events).

> > Not sure if I got your idea. Basically, what I see is:
> >
> > For device drivers that work in raw mode:
> > [IR physical device] ==> [IR receiver driver] ==> [lirc-core] ==> [decoder] ==> [ir-core] ==> [evdev]
> >
> > (eventually, we can merge decoder and ir-core into one module at the beginning,
> > depending on the size of the decoders)
> >
> > For device drivers that work only in evdev mode (those with hardware decoders):
> >
> > [IR physical device] ==> [IR receiver driver] ==> [ir-core] ==> [evdev]
> >
>
> Maybe we are talking about the same things and it is just names that are
> confusing. I'd imagine something like this:
>
>
> In-kernel decoding:
>
> [IR physical device] => [IR receiver driver] => [IR core] => [decoder] => [input core] => [evdev]
>                                                           => [decoder] => [input core] => [evdev]
>
> Hardware decoder:
> [IR physical device] => [IR receiver driver] => [IR core]
>                                                           => [input core] => [evdev]
>
> I.e. we still register with the IR core, but the driver communicates directly with the input device.
>
> Userspace decoding:
> [IR physical device] => [IR receiver driver] => [IR core] => [lirc_dev] => [lircd] => [uinput] => [input core] => [evdev]

I think we're thinking the same thing, but I've broken the IR core into two parts:
the lirc core, where the LIRC API will be handled, and the IR core, where the input API will be handled.

I've assumed that we'll use lirc API only for raw IR decode. So, in the hardware decoder case,
we will expose only the evdev.

So a drawing showing those two components will be:

In-kernel decoding:

[IR physical device] => [IR receiver driver] => [LIRC core] => [decoder] => [IR core] => [input core] => [evdev]
                                                     ||
                                                     => [Lirc API device]

Hardware decoder:
[IR physical device] => [IR receiver driver] => [IR core]
                                                          => [input core] => [evdev]

Userspace decoding:
[IR physical device] => [IR receiver driver] => [LIRC core] => [Lirc API device] => [lircd] => [uinput] => [input core] => [evdev]

Of course, for userspace, there is the trivial case where it will
just read directly from evdev without using any userspace program:

Userspace direct usage of IR:
[IR physical device] => [IR receiver driver] => [IR core] => [input core] => [evdev]

> Essentially lirc_dev becomes a special case of decoder that, instead of
> connecting to the input core and creating input devices, passes the data to
> userspace.

Yes.

> I did not show the block that you call ir-core since I expect it to be more
> like a library rather than an object in overall structure.
>
>
> > >
> > > Now as far as input core goes I see very limited number of changes that
> > > may be needed:
> > >
> > > - Allow to extend size of "scancode" in EVIOC{S,G}KEYCODE if we are
> > > unable to limit ourselves to 32 bits (keeping compatibility of course)
> >
> > Yes, but the way EVIOC{S,G}KEYCODE currently works, it performs poorly when you have a
> > table with 2^64 size. The table is very sparse but, as the key to get/set a code is
> > the scancode, it is very hard to enumerate the actual entries there. It would be
> > better to use an index parameter for them, instead of using the scancode as the key.
> >
>
> evdev does not really care what you use as a scancode. So nothing stops
> your driver from reporting an index as the scancode and accepting an index from the
> ioctl. The true "scancode" will thus be completely hidden from userspace.
> In fact a few drivers do just that.

Let me express it better here. It is all about how we'll expand the limits of those
ioctls to fulfill the needs.

The point is that we'll have, let's say, something like 50-500 scancode/keycode tuples
sparsely spread over a 2^64 scancode universe (assuming 64 bits; I'm not sure if there is
any IR protocol/code with a bigger scancode).

In such a universe, if we want to get all keycodes with the current ioctls for scancodes in
the 32-bit range, we need to do something like:

        u32 code;
        int codes[2];

        for (code = 0; ; code++) {
                codes[0] = (int)code;
                if (!ioctl(fd, EVIOCGKEYCODE, codes))
                        printf("scancode 0x%08x = keycode 0x%08x\n",
                               codes[0], codes[1]);
                if (code == 0xffffffff) /* avoid u32 wrap-around */
                        break;
        }

So, in the 32-bit case, we'll do about 4 billion calls to the EVIOCGKEYCODE ioctl to
read the complete scancode space, just to get those 50-500 useful codes.

Due to the current API limit, we don't have any way to use the full 64-bit space for scancodes.

If we use codes[0] as an index, this means that we'll need to share the 32 bits of codes[1]
between scancode and keycode. Even using a 32-bit integer for the keycode, it is currently limited to:

#define KEY_MAX 0x2ff
#define KEY_CNT (KEY_MAX+1)

So, we already have 10 bits used for the keycode. This leaves only 22 bits for scancodes if we
share codes[1] between keycode and scancode. By sharing the 32 bits, we'll need to be careful not
to extend KEY_MAX beyond 0x3ff, otherwise the keytable won't be able to represent all keys of the
key universe.

What is needed for this case is for the get-key/set-key arguments to be something like:

struct {
        u16 index;
        u64 scancode;
        u32 keycode;
};

Eventually, if we want to be more careful about the number of bits for the scancode, it would be
better to think of ways to allow extending the scancode universe, like using u64 scancode[2],
adding some reserved fields, or using a size/pointer pair for the scancode.
In the latter case, we'll need to write some compat32 code to handle the pointer. Comments?

It should be noted that just changing the number of bits of EVIO[G|S]KEYCODE will break
the kernel API. One alternative would be to define a new pair of ioctls that allows
using more bits there.

Cheers,
Mauro

2009-12-04 14:34:11

by Mauro Carvalho Chehab

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Christoph Bartelmus wrote:
> Hi Dmitry,
>
> on 03 Dec 09 at 14:12, Dmitry Torokhov wrote:
> [...]
>>> Consider passing the decoded data through lirc_dev.
> [...]
>> I believe it was agreed that lirc-dev should be used mainly for decoding
>> protocols that are more conveniently decoded in userspace and the
>> results would be looped back into input layer through evdev which will
>> be the main interface for consumer applications to use.
>
> Quoting myself:
>> Currently I would tend to an approach like this:
>> - raw interface to userspace using LIRC
>
> For me this includes both the pulse/space data and also the scan codes
> when hardware does the decoding.
> Consider cases like this:
> http://lirc.sourceforge.net/remotes/lg/6711A20015N
>
> This is an air-conditioner remote.
> The entries that you see in this config file are not really separate
> buttons. Instead the remote just sends the current settings for e.g.
> temperature encoded in the protocol when you press some up/down key. You
> really don't want to map all possible temperature settings to KEY_*
> events. For such cases it would be nice to have access to the raw scan
> codes from user space to do interpretation of the data.
> The default would still be to pass the data to the input layer, but it
> won't hurt to have the possibility to access the raw data somehow.

Interesting. IMHO, it would be better to add an evdev ioctl to return the
scancode for such cases, instead of returning the keycode.

Cheers,
Mauro.

2009-12-04 15:42:52

by [email protected]

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Fri, Dec 4, 2009 at 9:12 AM, Mauro Carvalho Chehab
<[email protected]> wrote:
>> In-kernel decoding:
>>

>> [IR physical device] => [IR receiver driver] => [LIRC core] => [decoder] => [IR core] => [input core] => [evdev]
>>                                                      ||
>>                                                      => [Lirc API device]
>>
>> Hardware decoder:
>> [IR physical device] => [IR receiver driver] => [IR core]
>>                                                           => [input core] => [evdev]
My hope is that 90% of users can achieve "just works" via these two
models. There will need to be several default device/keymaps installed
to avoid the need to configure. For example, have udev install a
default map from Motorola DVR IR to Linux keycodes. Installing that
map creates a new evdev device. Myth can then look for that device by
default and listen to it. Now the user just needs to set their
programmable remote to send Motorola DVR and everything should "just
work". Pick similar default maps/evdev device for music players and
home automation. Sophisticated users can change these default maps by
editing the udev scripts.

Long term the goal is to get IR profiles for Linux DVR, music player,
home automation, etc in to the IR database. Then we won't have to
override the profile for another company's device.

I believe it is a requirement to send the decoded IR events up to user
space (EV_IR). That would handle the case of the air conditioner
remote. It also allows construction of an app that automatically
searches the IR database for mapping tables. Normal apps would just
ignore these events.

Installing a map is what triggers the creation of another evdev
device. There should be one evdev device per map installed. The
current input API design does not provide for a way to do this.

The only solution I see to automatic device creation is to monitor
unmapped EV_IR events and search for them in the IR database. When a
match is found, install the map from the database - which then
triggers the creation of a new evdev device and the EV_IR event won't
be unmapped any more. Searching is not a simple process since the same
IR code often appears in more than one map. Some human intervention
will probably be required.

[IR physical device] => [IR receiver driver] => [LIRC core] => [decoder] => [IR core] => [input core] => [evdev]
                                                            ||
                                                            ||
                                                            => [Lirc API device] ==> [EV_IR events]

The EV_IR events also allow a user space protocol decoder to inject
those events back into the input subsystem where they will flow
through the mapping tables.

The in-kernel protocol decoders should be designed as a set of modules
with the pulse data being simultaneously supplied to all modules. That
will make it easy to work on the protocol converters - just
insmod/rmmod as you make changes. These engines can be extracted from
the LIRC code and turned into modules.

Where are IR repeat bits going to be handled?

--
Jon Smirl
[email protected]

2009-12-04 21:47:26

by lirc

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Hi Mauro,

on 04 Dec 09 at 12:33, Mauro Carvalho Chehab wrote:
> Christoph Bartelmus wrote:
>>>> Consider passing the decoded data through lirc_dev.
[...]
>> Consider cases like this:
>> http://lirc.sourceforge.net/remotes/lg/6711A20015N
>>
>> This is an air-conditioner remote.
>> The entries that you see in this config file are not really separate
>> buttons. Instead the remote just sends the current settings for e.g.
>> temperature encoded in the protocol when you press some up/down key. You
>> really don't want to map all possible temperature settings to KEY_*
>> events. For such cases it would be nice to have access at the raw scan
>> codes from user space to do interpretation of the data.
>> The default would still be to pass the data to the input layer, but it
>> won't hurt to have the possibility to access the raw data somehow.

> Interesting. IMHO, the better would be to add an evdev ioctl to return the
> scancode for such cases, instead of returning the keycode.

That means you would have to set up a pseudo keymap, so that you can get
the key event which you could then react on with an ioctl. Or are you
generating KEY_UNKNOWN for every scancode that is not mapped?
What if different scan codes are mapped to the same key event? How do you
retrieve the scan code for the key event?
I don't think it can work this way.

Christoph

2009-12-04 22:07:17

by Dmitry Torokhov

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Fri, Dec 04, 2009 at 10:46:00PM +0100, Christoph Bartelmus wrote:
> Hi Mauro,
>
> on 04 Dec 09 at 12:33, Mauro Carvalho Chehab wrote:
> > Christoph Bartelmus wrote:
> >>>> Consider passing the decoded data through lirc_dev.
> [...]
> >> Consider cases like this:
> >> http://lirc.sourceforge.net/remotes/lg/6711A20015N
> >>
> >> This is an air-conditioner remote.
> >> The entries that you see in this config file are not really separate
> >> buttons. Instead the remote just sends the current settings for e.g.
> >> temperature encoded in the protocol when you press some up/down key. You
> >> really don't want to map all possible temperature settings to KEY_*
> >> events. For such cases it would be nice to have access at the raw scan
> >> codes from user space to do interpretation of the data.
> >> The default would still be to pass the data to the input layer, but it
> >> won't hurt to have the possibility to access the raw data somehow.
>
> > Interesting. IMHO, the better would be to add an evdev ioctl to return the
> > scancode for such cases, instead of returning the keycode.
>
> That means you would have to set up a pseudo keymap, so that you can get
> the key event which you could than react on with a ioctl. Or are you
> generating KEY_UNKNOWN for every scancode that is not mapped?
> What if different scan codes are mapped to the same key event? How do you
> retrieve the scan code for the key event?
> I don't think it can work this way.
>

EV_MSC/MSC_SCAN.

--
Dmitry

2009-12-04 23:03:37

by lirc

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Hi Dmitry,

on 04 Dec 09 at 14:07, Dmitry Torokhov wrote:
> On Fri, Dec 04, 2009 at 10:46:00PM +0100, Christoph Bartelmus wrote:
>> Hi Mauro,
>>
>> on 04 Dec 09 at 12:33, Mauro Carvalho Chehab wrote:
>>> Christoph Bartelmus wrote:
>>>>>> Consider passing the decoded data through lirc_dev.
>> [...]
>>>> Consider cases like this:
>>>> http://lirc.sourceforge.net/remotes/lg/6711A20015N
>>>>
>>>> This is an air-conditioner remote.
>>>> The entries that you see in this config file are not really separate
>>>> buttons. Instead the remote just sends the current settings for e.g.
>>>> temperature encoded in the protocol when you press some up/down key. You
>>>> really don't want to map all possible temperature settings to KEY_*
>>>> events. For such cases it would be nice to have access at the raw scan
>>>> codes from user space to do interpretation of the data.
>>>> The default would still be to pass the data to the input layer, but it
>>>> won't hurt to have the possibility to access the raw data somehow.
>>
>>> Interesting. IMHO, the better would be to add an evdev ioctl to return the
>>> scancode for such cases, instead of returning the keycode.
>>
>> That means you would have to set up a pseudo keymap, so that you can get
>> the key event which you could than react on with a ioctl. Or are you
>> generating KEY_UNKNOWN for every scancode that is not mapped?
>> What if different scan codes are mapped to the same key event? How do you
>> retrieve the scan code for the key event?
>> I don't think it can work this way.
>>

> EV_MSC/MSC_SCAN.

How would I get the 64 bit scan codes that the iMON devices generate?
How would I know that the scan code is 64 bit?
input_event.value is __s32.

BTW, I just came across a XMP remote that seems to generate 3x64 bit scan
codes. Anyone here has docs on the XMP protocol?

Christoph

2009-12-04 23:15:30

by Dmitry Torokhov

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sat, Dec 05, 2009 at 12:01:00AM +0100, Christoph Bartelmus wrote:
> Hi Dmitry,
>
> on 04 Dec 09 at 14:07, Dmitry Torokhov wrote:
> > On Fri, Dec 04, 2009 at 10:46:00PM +0100, Christoph Bartelmus wrote:
> >> Hi Mauro,
> >>
> >> on 04 Dec 09 at 12:33, Mauro Carvalho Chehab wrote:
> >>> Christoph Bartelmus wrote:
> >>>>>> Consider passing the decoded data through lirc_dev.
> >> [...]
> >>>> Consider cases like this:
> >>>> http://lirc.sourceforge.net/remotes/lg/6711A20015N
> >>>>
> >>>> This is an air-conditioner remote.
> >>>> The entries that you see in this config file are not really separate
> >>>> buttons. Instead the remote just sends the current settings for e.g.
> >>>> temperature encoded in the protocol when you press some up/down key. You
> >>>> really don't want to map all possible temperature settings to KEY_*
> >>>> events. For such cases it would be nice to have access at the raw scan
> >>>> codes from user space to do interpretation of the data.
> >>>> The default would still be to pass the data to the input layer, but it
> >>>> won't hurt to have the possibility to access the raw data somehow.
> >>
> >>> Interesting. IMHO, the better would be to add an evdev ioctl to return the
> >>> scancode for such cases, instead of returning the keycode.
> >>
> >> That means you would have to set up a pseudo keymap, so that you can get
> >> the key event which you could than react on with a ioctl. Or are you
> >> generating KEY_UNKNOWN for every scancode that is not mapped?
> >> What if different scan codes are mapped to the same key event? How do you
> >> retrieve the scan code for the key event?
> >> I don't think it can work this way.
> >>
>
> > EV_MSC/MSC_SCAN.
>
> How would I get the 64 bit scan codes that the iMON devices generate?
> How would I know that the scan code is 64 bit?
> input_event.value is __s32.
>

I suppose we could add MSC_SCAN_END event so that we can transmit
"scancodes" of arbitrary length. You'd get several MSC_SCAN followed by
MSC_SCAN_END marker. If you don't get MSC_SCAN_END assume the code is 32
bit.

FWIW there is MSC_RAW as well.

--
Dmitry

2009-12-05 00:28:15

by [email protected]

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Fri, Dec 4, 2009 at 6:01 PM, Christoph Bartelmus <[email protected]> wrote:
> BTW, I just came across a XMP remote that seems to generate 3x64 bit scan
> codes. Anyone here has docs on the XMP protocol?

Assuming a general purpose receiver (not one with fixed hardware
decoding), is it important for Linux to receive IR signals from all
possible remotes no matter how old or obscure? Or is it acceptable to
tell the user to throw away their dedicated remote and buy a universal
multi-function one? Universal multi-function remotes are $12 in my
grocery store - I don't even have to go to an electronics store.

I've been working off the premise of getting rid of obscure remotes
and replacing them with a universal one. The universal one can be set
to send a common protocol like JVC or Sony. That implies that we only
need one or two protocol decoders in-kernel which greatly reduces the
surface area of the problem.

From my perspective Linux needs the capability to receive about 40
buttons on about five devices. How those 200 unique codes get into the
box doesn't really matter so I was picking a simple protocol and
setting the universal remote to support five devices in that protocol.

Of course transmitting is a completely different problem, but we
haven't been talking about transmitting. I can see how we would need
to record any IR protocol in order to retransmit it. But that's in the
5% of users world, not the 90% that want MythTV to "just work". Use
something like LIRC if you want to transmit.

My goal was to make it simple for people to do really basic tasks like
using a remote to pause their music player. Something like: plug in
MSMCE receiver, program remote to send codes for Sony CR-114 mp3
player, hit pause button, music stops.

--
Jon Smirl
[email protected]

2009-12-05 01:48:49

by Andy Walls

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Fri, 2009-12-04 at 19:28 -0500, Jon Smirl wrote:
> On Fri, Dec 4, 2009 at 6:01 PM, Christoph Bartelmus <[email protected]> wrote:
> > BTW, I just came across a XMP remote that seems to generate 3x64 bit scan
> > codes. Anyone here has docs on the XMP protocol?
>
> Assuming a general purpose receiver (not one with fixed hardware
> decoding), is it important for Linux to receive IR signals from all
> possible remotes no matter how old or obscure?

Importance of any particular requirement is relative/subjective. As is
usefulness of any existing functionality.

Personally, I just think it's cool to pick up a random remote and use
Linux to figure out its protocol and its codes and get it working.



> Or is it acceptable to
> tell the user to throw away their dedicated remote and buy a universal
> multi-function one?

Nope. That other OS provider forces device obsolescence or arbitrary
constraints on users quite often and I don't like it myself. That's why
I use Linux.


> Universal multi-function remotes are $12 in my
> grocery store - I don't even have to go to an electronics store.

The old remote in my possession costs $0, and I don't even have to leave
the house.


> I've been working off the premise of getting rid of obscure remotes
> and replacing them with a universal one. The universal one can be set
> to send a common protocol like JVC or Sony. That implies that we only
> need one or two protocol decoders in-kernel which greatly reduces the
> surface area of the problem.

The design should serve the users, the users should not serve the
design. If the reduction of requirements scope starts forcing users to
buy new hardware, are we really serving the users or just asking them to
pay to compensate for our shortcomings?


Regards,
Andy

2009-12-05 02:11:12

by Andy Walls

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Fri, 2009-12-04 at 20:48 -0500, Andy Walls wrote:
> On Fri, 2009-12-04 at 19:28 -0500, Jon Smirl wrote:
> > On Fri, Dec 4, 2009 at 6:01 PM, Christoph Bartelmus <[email protected]> wrote:
> > > BTW, I just came across a XMP remote that seems to generate 3x64 bit scan
> > > codes. Anyone here has docs on the XMP protocol?
> >
> > Assuming a general purpose receiver (not one with fixed hardware
> > decoding), is it important for Linux to receive IR signals from all
> > possible remotes no matter how old or obscure?

Google reveals that XMP is somewhat new, proprietary, and not limited to
remotes:

http://www.uei.com/html.php?page_id=89


UEI is apparently the company responsible for the "One for All" brand of
remote controls:

http://www.uei.com/html.php?page_id=62



Here's some random tech details about one XMP remote:

http://irtek.wikidot.com/remotecomcastxmp


Regards,
Andy

2009-12-05 03:45:52

by [email protected]

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Fri, Dec 4, 2009 at 8:48 PM, Andy Walls <[email protected]> wrote:
> On Fri, 2009-12-04 at 19:28 -0500, Jon Smirl wrote:
>> On Fri, Dec 4, 2009 at 6:01 PM, Christoph Bartelmus <[email protected]> wrote:
>> > BTW, I just came across a XMP remote that seems to generate 3x64 bit scan
>> > codes. Anyone here has docs on the XMP protocol?
>>
>> Assuming a general purpose receiver (not one with fixed hardware
>> decoding), is it important for Linux to receive IR signals from all
>> possible remotes no matter how old or obscure?
>
> Importance of any particular requirement is relative/subjective. As is
> usefulness of any existing functionality.
>
> Personally, I just think it's cool to pick up a random remote and use
> Linux to figure out its protocol and its codes and get it working.

You are a technical user.

>
>
>
>> Or is it acceptable to
>> tell the user to throw away their dedicated remote and buy a universal
>> multi-function one?
>
> Nope. That other OS provider forces device obsolescence or arbitrary
> constraints on users quite often and I don't like it myself. That's why
> I use Linux.
>
>
>> Universal multi-function remotes are $12 in my
>> grocery store - I don't even have to go to an electronics store.
>
> The old remote in my possession costs $0, and I don't even have to leave
> the house.
>
>
>> I've been working off the premise of getting rid of obscure remotes
>> and replacing them with a universal one. The universal one can be set
>> to send a common protocol like JVC or Sony. That implies that we only
>> need one or two protocol decoders in-kernel which greatly reduces the
>> surface area of the problem.
>
> The design should serve the users, the users should not serve the
> design. If the reduction of requirements scope starts forcing users to
> buy new hardware, are we really serving the users or just asking them to
> pay to compensate for our shortcomings?

Use of arbitrary remotes is a complex process. It almost certainly can
not be done in a transparent "just works" manner.

Let me rephrase, is it ok to tell people to buy a new remote if they
want to avoid a complex, technical configuration process that isn't
even guaranteed to work (they might have a 56K remote and a 38K
receiver or a Sony remote and a fixed RC-5 receiver).

I'm not proposing that we prevent arbitrary remotes from working,
you're just going to need to expend more effort to make them work.
For example, you have to have a fair amount of IR knowledge to figure
out why those two cases above don't work. You might have to install
LIRC and futz with irrecord and build your own config files and
mapping tables, etc...

It doesn't have to only be a universal remote, we can pre-install
mapping tables for the remotes commonly shipped with the v4l hardware.
When the v4l drivers load they could even poke the default map for
their bundled remotes directly into the input system if they wanted
to. Doing that might save a lot of config issues.

How's this for new goals?
Specific IR drivers autoload maps for their bundled remotes by
poking them into the input subsystem during module load
IR always has default map for a universal remote - it provides five
devices and uses a common protocol like JVC (may not work for fixed
hardware, you have to set these five common devices into the universal
remote)
All of these maps can be overridden with user space commands (which
lets you configure funky remotes)

--
Jon Smirl
[email protected]

2009-12-06 02:31:59

by Andy Walls

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Fri, 2009-12-04 at 22:45 -0500, Jon Smirl wrote:
> On Fri, Dec 4, 2009 at 8:48 PM, Andy Walls <[email protected]> wrote:
> > On Fri, 2009-12-04 at 19:28 -0500, Jon Smirl wrote:
> >> On Fri, Dec 4, 2009 at 6:01 PM, Christoph Bartelmus <[email protected]> wrote:
> >> > BTW, I just came across a XMP remote that seems to generate 3x64 bit scan
> >> > codes. Anyone here has docs on the XMP protocol?
> >>
> >> Assuming a general purpose receiver (not one with fixed hardware
> >> decoding), is it important for Linux to receive IR signals from all
> >> possible remotes no matter how old or obscure?
> >
> > Importance of any particular requirement is relative/subjective. As is
> > usefulness of any existing functionality.
> >
> > Personally, I just think it's cool to pick up a random remote and use
> > Linux to figure out its protocol and its codes and get it working.
>
> You are a technical user.

Yes, I agree. I do not know what percentage of current Linux users are
technical vs non-technical, so I cannot gauge the current importance.

I can see the trend line though: as time goes by, the percentage of all
Linux users that have a technical bent will only get smaller.




> >> I've been working off the premise of getting rid of obscure remotes
> >> and replacing them with a universal one. The universal one can be set
> >> to send a common protocol like JVC or Sony. That implies that we only
> >> need one or two protocol decoders in-kernel which greatly reduces the
> >> surface area of the problem.
> >
> > The design should serve the users, the users should not serve the
> > design. If the reduction of requirements scope starts forcing users to
> > buy new hardware, are we really serving the users or just asking them to
> > pay to compensate for our shortcomings?
>
> Use of arbitrary remotes is a complex process. It almost certainly can
> not be done in a transparent "just works" manner.
>
> Let me rephrase, is it ok to tell people to buy a new remote if they
> want to avoid a complex, technical configuration process that isn't
> even guaranteed to work (they might have a 56K remote and a 38K
> receiver or a Sony remote and a fixed RC-5 receiver).

"Recommended hardware" to guide users is usually an acceptable concept.

I have a feeling though, we may end up with a lot of "hey I got this
remote and video card on eBay and ..."

If the in kernel IR Rx handling is going to be really limited in trying
to "keep it simple", then that remote control hardware recommendation
should probably be strictly "the remote bundled with your IR receiver
hardware" to handle the most important use case for the in kernel IR Rx
handling to meet.



> I'm not proposing that we prevent arbitrary remotes from working,
> you're just going to need to expend more effort to make them work.
> For example, you have to have a fair amount of IR knowledge to figure
> out why those two cases above don't work. You might have to install
> LIRC and futz with irrecord and build your own config files and
> mapping tables, etc...
>
> It doesn't have to only be a universal remote, we can pre-install
> mapping tables for the remotes commonly shipped with the v4l hardware.

At least one vendor has shipped two different types of remote with the
same board over the years. Also, MCE versions of cards usually ship with
an MCE remote instead of the standard one.

I think it still could be possible to avoid a user interview process,
but I suspect you'll need a userspace set of "scripting" tools to take
detection data from the kernel and select the right keymap.



> When the v4l drivers load they could even poke the default map for
> their bundled remotes directly into the input system if they wanted
> to. Doing that might save a lot of config issues.

They do that right now (well, some of them and ir-kbd-i2c). But there is
*no* intelligence beyond the most current or popular remote for a board.
Also, some setups require a manual or scripted module load by the user:
ir-kbd-i2c and lirc_i2c can't currently hook to the same I2C
device at once, so the user has to set one or the other up to be loaded.


> How's this for new goals?
> Specific IR drivers autoload maps for their bundled remotes by
> poking them into the input subsystem during module load
> IR always has default map for a universal remote - it provides five
> devices and uses a common protocol like JVC (may not work for fixed
> hardware, you have to set these five common devices into the universal
> remote)
> All of these maps can be overridden with user space commands (which
> lets you configure funky remotes)

This looks like it just adds the option for the user to trade away
mental effort for purchasing power (for a universal remote). Not an
unacceptable trade for many people, I guess.

I suppose it gives the user one more option than he had before, if you have a
target set of universal remote hardware. I'd like to see some vendor
diversity in that target set of "just works" universal remotes.




My whole thought on the in-kernel IR Rx implementation is: either try to do
it all, or do just enough to get by.

The just works case is important enough for me to relent that something
should reside in kernel. I'm certainly OK with folks not spending a lot
of effort on it and not adding lots of code in kernel, so it works just
enough to hit the ground running for 80% of users. But if that's the
case, do we need to also allow LIRC to process more than just raw pulse
data?


BTW, I found this brief a while ago that gives an algorithm for multiple
protocol recognition/discrimination:

http://www.audiodevelopers.com/temp/Remote_Controls.ppt

Maybe this would be easier for protocol detection than running
"parallel" decoders that spit out an answer and a confidence factor.


Regards,
Andy

2009-12-06 03:38:55

by hermann pitton

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Hi,

Am Freitag, den 04.12.2009, 19:28 -0500 schrieb Jon Smirl:
> On Fri, Dec 4, 2009 at 6:01 PM, Christoph Bartelmus <[email protected]> wrote:
> > BTW, I just came across a XMP remote that seems to generate 3x64 bit scan
> > codes. Anyone here has docs on the XMP protocol?
>
> Assuming a general purpose receiver (not one with fixed hardware
> decoding), is it important for Linux to receive IR signals from all
> possible remotes no matter how old or obscure? Or is it acceptable to
> tell the user to throw away their dedicated remote and buy a universal
> multi-function one? Universal multi-function remotes are $12 in my
> grocery store - I don't even have to go to an electronics store.

finally we have some point here, IMHO, that is not acceptable and I told
you previously not to bet on such. Start some poll and win it, and I'll
shut up :)

To be frank, you are quite mad at this point, or deliver working other
remotes to __all__ for free.

Cheers,
Hermann



2009-12-06 06:55:19

by Dmitry Torokhov

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Hi,

On Sun, Dec 06, 2009 at 04:36:33AM +0100, hermann pitton wrote:
> Hi,
>
> Am Freitag, den 04.12.2009, 19:28 -0500 schrieb Jon Smirl:
> > On Fri, Dec 4, 2009 at 6:01 PM, Christoph Bartelmus <[email protected]> wrote:
> > > BTW, I just came across a XMP remote that seems to generate 3x64 bit scan
> > > codes. Anyone here has docs on the XMP protocol?
> >
> > Assuming a general purpose receiver (not one with fixed hardware
> > decoding), is it important for Linux to receive IR signals from all
> > possible remotes no matter how old or obscure? Or is it acceptable to
> > tell the user to throw away their dedicated remote and buy a universal
> > multi-function one? Universal multi-function remotes are $12 in my
> > grocery store - I don't even have to go to an electronics store.
>
> finally we have some point here, IMHO, that is not acceptable and I told
> you previously not to bet on such. Start some poll and win it, and I'll
> shut up :)
>

Who would participate in the poll though?

> To be frank, you are quite mad at this point, or deliver working other
> remotes to __all__ for free.
>

I do not believe you are being realistic. Sometimes we just need to say
that the device is a POS and is just not worth it. Remember, there is
still the "lirc hole" for the hard-core people still using solder to produce
something out of spare electronic components that may be made to
work (never mind that it causes the CPU to constantly poll the device, not
letting it sleep, wasting electricity as a result - just a hypothetical
example here).

We still need to do a cost-benefit analysis and decide whether supporting
exotic setups _in kernel_ makes sense if it encumbers the implementation
and causes issues for the other 95% of people.

--
Dmitry

2009-12-06 07:09:34

by Dmitry Torokhov

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Fri, Dec 04, 2009 at 12:12:34PM -0200, Mauro Carvalho Chehab wrote:
> Em Fri, 4 Dec 2009 02:06:42 -0800
> Dmitry Torokhov <[email protected]> escreveu:
>
> > >
> >
> > evdev does not really care what you use as scancode. So nobody stops
> > your driver to report index as a scancode and accept index from the
> > ioctl. The true "scancode" will thus be competely hidden from userspace.
> > In fact a few drivers do just that.
>
> Let me better express here. It is all about how we'll expand the limits of those
> ioctls to fulfill the needs.
>
> The point is that we'll have, let's say, something like 50-500 scancode/keycode tuples
> sparsely spread into a 2^64 scancode universe (assuming 64 bits - Not sure if is there any
> IR protocol/code with a bigger scancode).
>
> On such universe if we want to get all keycodes with the current ioctls for a scancode in
> the range of 32 bits, we need to do something like:
>
> u32 code;
> int codes[2];
> for (code = 0; code < 0xffffffffu; code++) {
>         codes[0] = (int)code;
>         if (!ioctl(fd, EVIOCGKEYCODE, codes))
>                 printf("scancode 0x%08x = keycode 0x%08x\n", codes[0], codes[1]);
> }
>
So, in the 32-bit case, we'll do about 4 billion calls to the EVIOCGKEYCODE ioctl to
read the complete scancode space, to get those 50-500 useful codes.
>

Right, currently there is no need to query all scancodes defined by a
device. Quite often drivers don't even know what scancodes the device
actually generates (e.g. the AT keyboard).

Could you describe in more detail how you are using this data?

>
> Due to the current API limit, we don't have any way to use the full 64-bit space for scancodes.
>

Can we perhaps reduce the "scancode" space? Are all 64 bits in these
protocols used to represent keypresses, or are some used for some kind of
addressing?

> if we use code[0] as an index, this means that we'll need to share the 32 bits on code[1]
> for scancode/keycode. Even using an 32 bits integer for keycode, it is currently limited to:
>
> #define KEY_MAX 0x2ff
> #define KEY_CNT (KEY_MAX+1)
>
> So, we have 10 bits already used for keycode. This gives only 22 bits for scancodes, if we share
> codes[1] for both keycode/scancode. By sharing the 32 bits, we'll need to be careful not to extend
> KEY_MAX to be bigger than 0x3ff, otherwise the keytable won't be able to represent all keys of the
> key universe.
>
> What is need for this case is that the arguments for get key/set key to be something like:
>
> struct {
> u16 index;
> u64 scancode;
> u32 keycode;
> };
>

Hmm, so what is this index? I am confused...

Thanks.

--
Dmitry

2009-12-06 07:14:54

by Dmitry Torokhov

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Fri, Dec 04, 2009 at 12:12:34PM -0200, Mauro Carvalho Chehab wrote:
> > >
> >
> > How related is lirc-core to the current lirc code? If it is not the same
> > maybe we should not call it lirc to avoid confusion.
>
> Just for better illustrate what I'm seeing, I broke the IR generic
> code into two components:
>
> lirc core - the module that receives raw pulse/space and creates
> a device to receive raw API pulse/space events;
>
> IR core - the module that receives scancodes, convert them into
> keycodes and send via evdev interface.
>
> We may change the nomenclature later, but I'm seeing the core as two different
> modules, since there are cases where lirc core won't be used (those
> devices where there's no way to get pulse/space events).
>

OK, I think we are close, but not quite there. I believe that what you
call lirc core will be used always - it is the code that creates class
devices, connects decoders with the data streams, etc. I believe it
will be utilized even in the case of devices using hardware decoders. That
is why we should probably stop calling it "lirc core", just so we don't
confuse it with the original lirc.

Then we have the decoders and lirc_dev - which implements the original lirc
interface (or maybe a modified version of it) and allows lircd to continue
working.

Lastly we have what you call IR core, which is an IR-to-input bridge of
sorts.

Right?

--
Dmitry

2009-12-06 11:03:46

by Mauro Carvalho Chehab

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Dmitry Torokhov wrote:
> On Fri, Dec 04, 2009 at 12:12:34PM -0200, Mauro Carvalho Chehab wrote:
>> Em Fri, 4 Dec 2009 02:06:42 -0800
>> Dmitry Torokhov <[email protected]> escreveu:
>>
>>> evdev does not really care what you use as scancode. So nobody stops
>>> your driver to report index as a scancode and accept index from the
>>> ioctl. The true "scancode" will thus be competely hidden from userspace.
>>> In fact a few drivers do just that.
>> Let me better express here. It is all about how we'll expand the limits of those
>> ioctls to fulfill the needs.
>>
>> The point is that we'll have, let's say, something like 50-500 scancode/keycode tuples
>> sparsely spread over a 2^64 scancode universe (assuming 64 bits - I'm not sure if there is any
>> IR protocol/code with a bigger scancode).
>>
>> In such a universe, if we want to get all keycodes with the current ioctls for a scancode in
>> the range of 32 bits, we need to do something like:
>>
>> u32 code;
>> int codes[2];
>> for (code = 0; code <= (u32) - 1; code++) {
>> codes[0] = (int)code;
>> if (!ioctl(fd, EVIOCGKEYCODE, codes))
>> printf("scancode 0x%08x = keycode 0x%08x\n", codes[0], codes[1]);
>> }
>>
>> So, in the 32 bits case, we'll do about 4 billion calls to the EVIOCGKEYCODE ioctl to
>> read the complete scancode space, just to get those 50-500 useful codes.
>>
>
> Right, currently there is no need to query all scancodes defined by
> device. Quite often drivers don't even know what scancodes device
> actually generates (ex AT keyboard).
>
> Could you describe in more detail how you are using this data?

It is useful if you want to dump the keycode maps into a file with the current
scancode attribution, in order to modify some keystrokes.

Right now, programs like dumpkeys (from kbd package) allow you to dump for example
the attribution keys from your keyboard.

In the case of IR's this functionality is very important.

For example, you may need to replace the scancode/KEY_CHANNELUP tuple by scancode/KEY_UP,
in order to make your IR to work with some applications that don't recognize the IR
specific keycodes.

In practice, with such applications, you'll need to replace several different scancodes.

So, you may end up having different scancodes producing the same keycode, as such applications
aren't capable of differentiating an UP key from a CHANNELUP key. This is the case, for example,
of the popular tvtime application.

The better way is to just generate a dump file, modify the needed entries and reload the
table by calling EVIOCSKEYCODE, in order to use the new table.

I wrote a small application that does just that, and I use it to load some special tables
to work with applications like tvtime and mplayer (with mplayer, you need to map
<channel down> as KEY_H and <channel up> as KEY_K).

I hope that, after we finish addressing IR's, we'll finally have media applications handling
directly the proper keycodes, but people may still need to write different keycodes to do
other things. I used to have a keymap file in order to use an IR to control the slide show
with openoffice.

>> Due to the current API limit, we don't have any way to use the full 64bits space for scancodes.
>>
>
> Can we probably reduce the "scancode" space? ARe all 64 bits in
> protocols used to represent keypresses or some are used for some kind of
> addressing?

All the IRs I found with V4L/DVB use up to 16-bit codes (or 24 bits, for the NEC extended protocol).
However, currently, the drivers were getting only 7 bits, due to the old way of implementing
EVIO[S|G]KEYCODE.

I know, however, one i2c chip that returns a 5-byte scancode when you press a key.
We're currently just discarding the remaining bits, so I'm not really sure what's there.


The usage of 7 bits meant, in practice, that it wasn't possible to use
a different remote than the one provided by the device manufacturer, as the scancodes produced
by other remotes differ in more than 7 bits. Also, this means that, if your TV and your PC
are using the same protocol, like RC5, and you press a button on your TV remote, the PC will
also get it.

I know, however, one IR driver that produces 6 bytes when you press a key.
We're currently just discarding the remaining bits, so I'm not really sure
what else is there. A badly optimized protocol? A bigger scancode? A protocol indication?

In general, the scancode contains 8 or 16 bits for address, and 8 bits for command.

However, the scancode table needs to handle the address as well, since we don't want a
scancode meant for your TV to be handled by the PC. On the other hand, we may want to accept codes from
different addresses, as we may need the address to differentiate a command meant to control the TV
volume, for example, from the same command meant to control the PC
master volume.

>> if we use code[0] as an index, this means that we'll need to share the 32 bits on code[1]
>> for scancode/keycode. Even using an 32 bits integer for keycode, it is currently limited to:
>>
>> #define KEY_MAX 0x2ff
>> #define KEY_CNT (KEY_MAX+1)
>>
>> So, we have 10 bits already used for keycode. This gives only 22 bits for scancodes, if we share
>> codes[1] for both keycode/scancode. By sharing the 32 bits, we'll need to be careful not to extend
>> KEY_MAX beyond 0x3ff, otherwise the keytable won't be able to represent all keys of the
>> key universe.
>>
>> What is needed for this case is for the arguments of get key/set key to be something like:
>>
>> struct {
>> u16 index;
>> u64 scancode;
>> u32 keycode;
>> };
>>
>
> Hmm, so what is this index? I am confused...

It is the sequence number of a scancode/keycode tuple stored in the keycode table.

Better than saying it in words, let me put a code snippet:

at include/linux/input.h, we'll add a code like:

struct input_keytable_entry {
u16 index;
u64 scancode;
u32 keycode;
} __attribute__ ((packed));

(the attribute packed avoids needing a compat for 64 bits)

#define EVIOGKEYCODEENTRY _IOR('E', 0x85, struct input_keytable_entry)

(and a similar ioctl for setkeycode)

This struct will be used by the new ioctls.

at include/media/ir-common.h, we already have:

struct ir_scancode {
u16 scancode;
u32 keycode;
};

struct ir_scancode_table {
struct ir_scancode *scan;
int size;
...
};

The code at the IR core that will handle the ioctl will look like:

static int ir_getkeycode_entry(struct input_dev *dev, struct input_keytable_entry *ike)
{
	struct ir_scancode_table *rc_tab = input_get_drvdata(dev);

	if (ike->index >= rc_tab->size)
		return -EINVAL;

	ike->scancode = rc_tab->scan[ike->index].scancode;
	ike->keycode = rc_tab->scan[ike->index].keycode;

	return 0;
}

---

As a reference, we currently implement the getkeycode at ir-keytable.c as:

static int ir_getkeycode(struct input_dev *dev,
			 int scancode, int *keycode)
{
	int elem;
	struct ir_scancode_table *rc_tab = input_get_drvdata(dev);

	elem = ir_seek_table(rc_tab, scancode);
	if (elem >= 0) {
		*keycode = rc_tab->scan[elem].keycode;
		return 0;
	}

	/*
	 * Scancode not found and the table can't be expanded
	 */
	if (elem < 0 && rc_tab->size == IR_TAB_MAX_SIZE)
		return -EINVAL;

	/*
	 * If there is extra space, return KEY_RESERVED,
	 * otherwise the input core won't let ir_setkeycode work
	 */
	*keycode = KEY_RESERVED;
	return 0;
}

where ir_seek_table is a function that returns the entry
that corresponds to the given scancode.

By using an index, both userspace and kernelspace code will be simpler
and fast enough to work even with the biggest scancode.

Cheers,
Mauro.

2009-12-06 11:23:12

by Mauro Carvalho Chehab

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Dmitry Torokhov wrote:
> On Fri, Dec 04, 2009 at 12:12:34PM -0200, Mauro Carvalho Chehab wrote:
>>> How related lirc-core to the current lirc code? If it is not the same
>>> maybe we should not call it lirc to avoid confusion.
>> Just for better illustrate what I'm seeing, I broke the IR generic
>> code into two components:
>>
>> lirc core - the module that receives raw pulse/space and creates
>> a device to receive raw API pulse/space events;
>>
>> IR core - the module that receives scancodes, convert them into
>> keycodes and send via evdev interface.
>>
>> We may change the nomenclature later, but I'm seeing the core as two different
>> modules, since there are cases where lirc core won't be used (those
>> devices where there's no way to get pulse/space events).
>>
>
> OK, I think we are close but not exactly close. I believe that what you
> call lirc core will be used always - it is the code that creates class
> devices, connects decoders with the data streams, etc. I believe it
> will be utilized even in case of devices using hardware decoders. That
> is why we should probably stop calling it "lirc core" just so we don't
> confuse it with original lirc.
>
> Then we have decoders and lirc_dev - which implements original lirc
> interface (or maybe its modified version) and allows lircd to continue
> working.
>
> Lastly we have what you call IR core which is IR-to-input bridge of
> sorts.

It seems to be just nomenclature ;)

what I called "IR core" you called "lirc core"
what I called "lirc core" you called "lirc_dev"

What I called IR core is the one that will be used by every IR driver, handling
sysfs, evdev's, calling decoders, etc.

I opted to use the nomenclature Lirc to the part of the IR subsystem that
will create the Lirc interface.

Currently, I have almost finished the evdev part of the "IR core", using the current
API to control the dynamic keycode allocation. It is already working fine with
V4L drivers.

Cheers,
Mauro.

2009-12-06 11:46:59

by Mauro Carvalho Chehab

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Dmitry Torokhov wrote:
> Hi,
>
> On Sun, Dec 06, 2009 at 04:36:33AM +0100, hermann pitton wrote:
>> Hi,
>>
>> Am Freitag, den 04.12.2009, 19:28 -0500 schrieb Jon Smirl:
>>> On Fri, Dec 4, 2009 at 6:01 PM, Christoph Bartelmus <[email protected]> wrote:
>>>> BTW, I just came across a XMP remote that seems to generate 3x64 bit scan
>>>> codes. Anyone here has docs on the XMP protocol?
>>> Assuming a general purpose receiver (not one with fixed hardware
>>> decoding), is it important for Linux to receive IR signals from all
>>> possible remotes no matter how old or obscure? Or is it acceptable to
>>> tell the user to throw away their dedicated remote and buy a universal
>>> multi-function one? Universal multi-function remotes are $12 in my
>>> grocery store - I don't even have to go to an electronics store.
>> finally we have some point here, IMHO, that is not acceptable and I told
>> you previously not to bet on such. Start some poll and win it, and I'll
>> shut up :)
>>
>
> Who would participate in the poll though?
>
>> To be frank, you are quite mad at this point, or deliver working other
>> remotes to __all__ for free.
>>
>
> I do not believe you are being realistic. Sometimes we just need to say
> that the device is a POS and is just not worth it. Remember, there is
> still the "lirc hole" for the hard core people still using solder to produce
> something out of spare electronic components that may be made to
> work (never mind that it causes the CPU to constantly poll the device, not
> letting it sleep and wasting electricity as a result - just a hypothetical
> example here).
>
> We still need to do cost-benefit analysis and decide whether supporting
> the exotic setups _in kernel_ makes sense if it encumbers implementation
> and causes issues to the other 95% people.

Fully agreed. The costs (our time) to add and keep supporting an in-kernel
driver for an IR that just one person is still using is higher than
asking the user to get a new IR. This time would be better spent adding a new
driver for other devices.

Cheers,
Mauro.

2009-12-06 12:02:43

by lirc

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Hi Dmitry,

on 04 Dec 09 at 15:15, Dmitry Torokhov wrote:
[...]
>>>>>> http://lirc.sourceforge.net/remotes/lg/6711A20015N
>>>>>>
>>>>>> This is an air-conditioner remote.
>>>>>> The entries that you see in this config file are not really separate
>>>>>> buttons. Instead the remote just sends the current settings for e.g.
>>>>>> temperature encoded in the protocol when you press some up/down key.
>>>>>> You really don't want to map all possible temperature settings to KEY_*
>>>>>> events. For such cases it would be nice to have access at the raw scan
>>>>>> codes from user space to do interpretation of the data.
>>>>>> The default would still be to pass the data to the input layer, but it
>>>>>> won't hurt to have the possibility to access the raw data somehow.
>>>>
>>>>> Interesting. IMHO, the better would be to add an evdev ioctl to return
>>>>> the scancode for such cases, instead of returning the keycode.
>>>>
>>>> That means you would have to set up a pseudo keymap, so that you can get
>>>> the key event which you could then react on with an ioctl. Or are you
>>>> generating KEY_UNKNOWN for every scancode that is not mapped?
>>>> What if different scan codes are mapped to the same key event? How do you
>>>> retrieve the scan code for the key event?
>>>> I don't think it can work this way.
>>>>
>>
>>> EV_MSC/MSC_SCAN.
>>
>> How would I get the 64 bit scan codes that the iMON devices generate?
>> How would I know that the scan code is 64 bit?
>> input_event.value is __s32.
>>

> I suppose we could add MSC_SCAN_END event so that we can transmit
> "scancodes" of arbitrary length. You'd get several MSC_SCAN followed by
> MSC_SCAN_END marker. If you don't get MSC_SCAN_END assume the code is 32
> bit.

And do I set a timeout to know that no MSC_SCAN_END will arrive? This is
a broken design, IMHO.

Furthermore lircd needs to know the length of the scan code in bits, not
as a multiple of 32.

> FWIW there is MSC_RAW as well.

It took me some time to convince people that this is not the right way to
handle raw timing data.

Christoph

2009-12-06 12:02:29

by lirc

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Hi Dmitry,

on 05 Dec 09 at 22:55, Dmitry Torokhov wrote:
[...]
> I do not believe you are being realistic. Sometimes we just need to say
> that the device is a POS and is just not worth it. Remember, there is
> still the "lirc hole" for the hard core people still using solder to produce
> something out of spare electronic components that may be made to
> work (never mind that it causes the CPU to constantly poll the device, not
> letting it sleep and wasting electricity as a result - just a hypothetical
> example here).

There still seems to be a persistent misconception that the home-brewed
receivers need polling or cause heavy CPU load. No, they don't. All of them
are IRQ based.
It's the commercial solutions like gpio-based IR that need polling.
For transmitters it's a different story, but you don't transmit 24h/7.

Christoph

2009-12-06 12:13:50

by lirc

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Hi Jon,

on 04 Dec 09 at 19:28, Jon Smirl wrote:
>> BTW, I just came across a XMP remote that seems to generate 3x64 bit
>> scan codes. Anyone here has docs on the XMP protocol?
>
> Assuming a general purpose receiver (not one with fixed hardware
> decoding), is it important for Linux to receive IR signals from all
> possible remotes no matter how old or obscure? Or is it acceptable to
[...]
> Of course transmitting is a completely different problem, but we
> haven't been talking about transmitting. I can see how we would need
> to record any IR protocol in order to retransmit it. But that's in the
> 5% of users world, not the 90% that want MythTV to "just work". Use
> something like LIRC if you want to transmit.

I don't think anyone here is in the position to be able to tell what is
90% or 5%. Personally I use LIRC exclusively for transmit to my settop box
using an old and obscure RECS80 protocol.
No, I won't replace my setup just because it's old and obscure.

Cable companies tend to provide XMP based boxes to subscribers more often
these days. Simply not supporting these setups is a no-go for me.

Christoph

2009-12-06 16:38:51

by [email protected]

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sun, Dec 6, 2009 at 7:12 AM, Christoph Bartelmus <[email protected]> wrote:
> Hi Jon,
>
> on 04 Dec 09 at 19:28, Jon Smirl wrote:
>>> BTW, I just came across a XMP remote that seems to generate 3x64 bit
>>> scan codes. Anyone here has docs on the XMP protocol?
>>
>> Assuming a general purpose receiver (not one with fixed hardware
>> decoding), is it important for Linux to receive IR signals from all
>> possible remotes no matter how old or obscure? Or is it acceptable to
> [...]
>> Of course transmitting is a completely different problem, but we
>> haven't been talking about transmitting. I can see how we would need
>> to record any IR protocol in order to retransmit it. But that's in the
>> 5% of users world, not the 90% that want MythTV to "just work". Use
>> something like LIRC if you want to transmit.
>
> I don't think anyone here is in the position to be able to tell what is
> 90% or 5%. Personally I use LIRC exclusively for transmit to my settop box
> using an old and obscure RECS80 protocol.
> No, I won't replace my setup just because it's old and obscure.

There are two groups, technically oriented people who can handle
messing around with IR protocols and everyone else. I'm not proposing
to remove any capabilities from the first group. Instead I'd like to
see the creation of a "just works" option for the other group. We
don't know the size of the everyone else group yet because that option
doesn't exist. In general, non-technical people far outnumber the
technical ones in broad user bases. For example, I had to use LIRC to
get my remotes working, but I would have rather been in the everyone
else group and not had to learn about IR.

> Cable companies tend to provide XMP based boxes to subscribers more often
> these days. Simply not supporting these setups is a no-go for me.

I suspect what we categorize as "just works" will expand over time.
The in-kernel support can start small and add protocols and maps over
time. Support for XMP can start in LIRC and migrate into the kernel
after we fully understand the protocol and know that enough people are
using it to justify the effort of maintaining it in-kernel. Adding
in-kernel support for a protocol is not going to make LIRC disappear.

The critical part is getting the initial design of the in-kernel IR
system right. That design is very hard to change after it gets into
everyone's systems and apps start depending on it. Writing up use
cases, modular protocols, figuring out how many bits are needed in
fields, how are maps written, can they be autoloaded, transmitting,
etc, etc. These are the important things to be discussing. LIRC users
obviously have a lot of knowledge in this area to contribute.

PS - another area we need to be thinking about is radio remotes like
the new RF4CE devices. The button presses from these remotes will come
in on the 802.15.4 networking stack and need to get routed into the
input subsystem somehow.

--
Jon Smirl
[email protected]

2009-12-06 17:26:39

by Krzysztof Halasa

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Andy Walls <[email protected]> writes:

> Yes, I agree. I do not know what percentage of current Linux users are
> technical vs non-technical, so I cannot gauge the current importance.
>
> I can see the trend line though: as time goes by, the percentage of all
> linux users that have a technical bent will only get smaller.

This IMHO shouldn't matter. If users can configure their keymaps for
e.g. games with a graphical utility (and they easily can), they can do
the same with their remotes, at least with these using common sane
protocols. The only thing needed is a good GUI utility. Ergo - it's not
a kernel issue.

The "default bundled", or PnP, won't work well in comparison to a GUI
utility, I wouldn't worry about it too much (though adding it to udev
and co is trivial and we should do it - even if not PnP but asking first
about the actual remote used).
--
Krzysztof Halasa

2009-12-06 17:48:20

by Krzysztof Halasa

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Mauro Carvalho Chehab <[email protected]> writes:

>> I do not believe you are being realistic. Sometimes we just need to say
>> that the device is a POS and is just not worth it. Remember, there is
>> still "lirc hole" for the hard core people still using solder to produce
>> something out of the spare electronic components that may be made to
>> work

Which device? It was about a remote controller, not the receiver. The IR
receivers are frequently coupled with a DVB etc. receiver. There is
absolutely no problem supporting almost any remote if the hardware is
compatible with the receiver (i.e. IR to IR, the carrier frequency is
not 36 vs 56 kHz, the receiver supports the protocol etc).

I don't say we need to support in-kernel decoding for arbitrary
protocols.

>> (never mind that it causes the CPU to constantly poll the device, not
>> letting it sleep and wasting electricity as a result - just a hypothetical
>> example here).

Very hypothetical, indeed :-)

Most (all?) home-made receivers don't need polling, they use IRQs
instead ("the" home-made receiver is based on serial port and uses IRQ).
They are hardly the "obscure hardware" that nobody has.

The "more advanced" receivers using shift registers may use polling.

> Fully agreed. The costs (our time) to add and keep supporting an in-kernel
> driver for an IR that just one person is still using is higher than
> asking the user to get a new IR. This time would be better spent adding a new
> driver for other devices.

Agreed, I think nobody argues we should support such things in the
kernel.


Once again: how about agreement about the LIRC interface
(kernel-userspace) and merging the actual LIRC code first? In-kernel
decoding can wait a bit, it doesn't change any kernel-user interface.
--
Krzysztof Halasa

2009-12-06 17:52:09

by [email protected]

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sun, Dec 6, 2009 at 12:48 PM, Krzysztof Halasa <[email protected]> wrote:
> Once again: how about agreement about the LIRC interface
> (kernel-userspace) and merging the actual LIRC code first? In-kernel
> decoding can wait a bit, it doesn't change any kernel-user interface.

I'd like to see a semi-complete design for an in-kernel IR system
before anything is merged from any source.

--
Jon Smirl
[email protected]

2009-12-06 20:19:05

by Krzysztof Halasa

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Mauro Carvalho Chehab <[email protected]> writes:

> All the IRs I found with V4L/DVB use up to 16-bit codes (or 24 bits, for the NEC extended protocol).
> However, currently, the drivers were getting only 7 bits, due to the old way of implementing
> EVIO[S|G]KEYCODE.
>
> I know, however, one i2c chip that returns a 5 byte scancode when you press a key.
> We're currently just discarding the remaining bits, so I'm not really sure what's there.

Right. This will have to be investigated by owners of the exact hardware
in question. What we can do is to try to make it easy for them.
There is no hurry, though - it can and will continue to work the current
way.

> In general, the scancode contains 8 or 16 bits for address, and 8 bits for command.

Right. I think the kernel shouldn't differentiate between address and
command too much.

> at include/linux/input.h, we'll add a code like:
>
> struct input_keytable_entry {
> u16 index;
> u64 scancode;
> u32 keycode;
> } __attribute__ ((packed));
>
> (the attribute packed avoids needing a compat for 64 bits)

Maybe { u64 scancode; u32 keycode; u16 index; u16 reserved } would be a
bit better, no alignment problems and we could eventually change
"reserved" into something useful.

But I think, if we are going to redesign it, we better use scancodes of
arbitrary length (e.g. protocol-dependent length). It should be opaque
except for the protocol handler.
--
Krzysztof Halasa

2009-12-06 20:22:12

by Krzysztof Halasa

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Jon Smirl <[email protected]> writes:

> The in-kernel support can start small and add protocols and maps over
> time.

Protocols, yes. Maps - we certainly don't want megatons of maps in the
kernel. The existing ones have to be removed, some time.
--
Krzysztof Halasa

2009-12-06 20:34:25

by Krzysztof Halasa

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Jon Smirl <[email protected]> writes:

>> Once again: how about agreement about the LIRC interface
>> (kernel-userspace) and merging the actual LIRC code first? In-kernel
>> decoding can wait a bit, it doesn't change any kernel-user interface.
>
> I'd like to see a semi-complete design for an in-kernel IR system
> before anything is merged from any source.

This is a way to nowhere, there is no logical dependency between LIRC
and input layer IR.

There is only one thing which needs attention before/when merging LIRC:
the LIRC user-kernel interface. The in-kernel "IR system" is irrelevant and,
actually, making a correct IR core design without LIRC merged can
only be harder.
--
Krzysztof Halasa

2009-12-06 21:23:48

by [email protected]

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sun, Dec 6, 2009 at 3:34 PM, Krzysztof Halasa <[email protected]> wrote:
> Jon Smirl <[email protected]> writes:
>
>>> Once again: how about agreement about the LIRC interface
>>> (kernel-userspace) and merging the actual LIRC code first? In-kernel
>>> decoding can wait a bit, it doesn't change any kernel-user interface.
>>
>> I'd like to see a semi-complete design for an in-kernel IR system
>> before anything is merged from any source.
>
> This is a way to nowhere, there is no logical dependency between LIRC
> and input layer IR.
>
> There is only one thing which needs attention before/when merging LIRC:
> the LIRC user-kernel interface. In-kernel "IR system" is irrelevant and,
> actually, making a correct IR core design without the LIRC merged can be
> only harder.

Here are a few design review questions on the LIRC drivers that were posted...

How is the pulse data going to be communicated to user space?
Can the pulse data be reported via an existing interface without
creating a new one?
Where is the documentation for the protocol?
Is it a device interface or something else?
Does it work with poll, epoll, etc?
What is the time standard for the data, where does it come from?
How do you define the start and stop of sequences?
What about capabilities of the receiver, what frequencies?
If a receiver has multiple frequencies, how do you report what
frequency the data came in on?
What about multiple apps simultaneously using the pulse data?
Is receiving synchronous or queued?
How big is the receive queue?
How does access work, root only or any user?
What about transmit, how do you get pulse data into the device?
Transmitter frequencies?
Multiple transmitters?
Is transmitting synchronous or queued?
How big is the transmit queue?
How are capabilities exposed, sysfs, etc?
What is the interface for attaching an in-kernel decoder?
If there is an in-kernel decoder should the pulse data stop being
reported, partially stopped, something else?
What is the mechanism to make sure both system don't process the same pulses?

> --
> Krzysztof Halasa
>



--
Jon Smirl
[email protected]

2009-12-07 07:48:27

by Dmitry Torokhov

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sun, Dec 06, 2009 at 09:03:31AM -0200, Mauro Carvalho Chehab wrote:
> Dmitry Torokhov wrote:
> > On Fri, Dec 04, 2009 at 12:12:34PM -0200, Mauro Carvalho Chehab wrote:
> >> Em Fri, 4 Dec 2009 02:06:42 -0800
> >> Dmitry Torokhov <[email protected]> escreveu:
> >>
> >>> evdev does not really care what you use as scancode. So nobody stops
> >>> your driver to report index as a scancode and accept index from the
> >>> ioctl. The true "scancode" will thus be competely hidden from userspace.
> >>> In fact a few drivers do just that.
> >> Let me better express here. It is all about how we'll expand the limits of those
> >> ioctls to fulfill the needs.
> >>
> >> The point is that we'll have, let's say, something like 50-500 scancode/keycode tuples
> >> sparsely spread over a 2^64 scancode universe (assuming 64 bits - I'm not sure if there is any
> >> IR protocol/code with a bigger scancode).
> >>
> >> In such a universe, if we want to get all keycodes with the current ioctls for a scancode in
> >> the range of 32 bits, we need to do something like:
> >>
> >> u32 code;
> >> int codes[2];
> >> for (code = 0; code <= (u32) - 1; code++) {
> >> codes[0] = (int)code;
> >> if (!ioctl(fd, EVIOCGKEYCODE, codes))
> >> printf("scancode 0x%08x = keycode 0x%08x\n", codes[0], codes[1]);
> >> }
> >>
> >> So, in the 32 bits case, we'll do about 4 billion calls to the EVIOCGKEYCODE ioctl to
> >> read the complete scancode space, just to get those 50-500 useful codes.
> >>
> >
> > Right, currently there is no need to query all scancodes defined by
> > device. Quite often drivers don't even know what scancodes device
> > actually generates (ex AT keyboard).
> >
> > Could you describe in more detail how you are using this data?
>
> It is useful if you want to dump the keycode maps into a file with the current
> scancode attribution, in order to modify some keystrokes.
>
> Right now, programs like dumpkeys (from kbd package) allow you to dump for example
> the attribution keys from your keyboard.
>
> In the case of IR's this functionality is very important.
>
> For example, you may need to replace the scancode/KEY_CHANNELUP tuple by scancode/KEY_UP,
> in order to make your IR to work with some applications that don't recognize the IR
> specific keycodes.
>
> In practice, with such applications, you'll need to replace several different scancodes.
>
> So, you may end up having different scancodes producing the same keycode, as such applications
> aren't capable of differentiating an UP key from a CHANNELUP key. This is the case, for example,
> of the popular tvtime application.
>
> The better way is to just generate a dump file, modify the needed entries and reload the
> table by calling EVIOCSKEYCODE, in order to use the new table.
>
> I wrote a small application that does just that, and I use it to load some special tables
> to work with applications like tvtime and mplayer (with mplayer, you need to map
> <channel down> as KEY_H and <channel up> as KEY_K).
>
> I hope that, after we finish addressing IR's, we'll finally have media applications handling
> directly the proper keycodes, but people may still need to write different keycodes to do
> other things. I used to have a keymap file in order to use an IR to control the slide show
> with openoffice.
>
> >> Due to the current API limit, we don't have any way to use the full 64bits space for scancodes.
> >>
> >
> > Can we probably reduce the "scancode" space? ARe all 64 bits in
> > protocols used to represent keypresses or some are used for some kind of
> > addressing?
>
> All the IRs I found with V4L/DVB use up to 16-bit codes (or 24 bits, for the NEC extended protocol).
> However, currently, the drivers were getting only 7 bits, due to the old way of implementing
> EVIO[S|G]KEYCODE.
>
> I know, however, one i2c chip that returns a 5-byte scancode when you press a key.
> We're currently just discarding the remaining bits, so I'm not really sure what's there.
>
>
> The usage of 7 bits meant, in practice, that it wasn't possible to use
> a different remote than the one provided by the device manufacturer, as the scancodes produced
> by other remotes differ in more than 7 bits. Also, this means that, if your TV and your PC
> are using the same protocol, like RC5, and you press a button on your TV remote, the PC will
> also get it.
>
> I know, however, one IR driver that produces 6 bytes when you press a key.
> We're currently just discarding the remaining bits, so I'm not really sure
> what else is there. A badly optimized protocol? A bigger scancode? A protocol indication?
>
> In general, the scancode contains 8 or 16 bits for address, and 8 bits for command.
>
> However, the scancode table needs to handle the address as well, since we don't want a
> scancode meant for your TV to be handled by the PC. On the other hand, we may want to accept codes from
> different addresses, as we may need the address to differentiate a command meant to control the TV
> volume, for example, from the same command meant to control the PC
> master volume.

Right, but this data is not interesting to userspace. For userspace, a
scancode is just a cookie that uniquely identifies a button to which
a keycode can be assigned.

>
> >> if we use code[0] as an index, this means that we'll need to share the 32 bits on code[1]
> >> for scancode/keycode. Even using an 32 bits integer for keycode, it is currently limited to:
> >>
> >> #define KEY_MAX 0x2ff
> >> #define KEY_CNT (KEY_MAX+1)
> >>
> >> So, we have 10 bits already used for the keycode. This leaves only 22 bits for scancodes, if we share
> >> codes[1] for both keycode and scancode. By sharing the 32 bits, we'll need to be careful not to extend
> >> KEY_MAX beyond 0x3ff, otherwise the keytable won't be able to represent all keys of the
> >> key universe.
> >>
> >> What is needed for this case is for the arguments of get key/set key to be something like:
> >>
> >> struct {
> >> u16 index;
> >> u64 scancode;
> >> u32 keycode;
> >> };
> >>
> >
> > Hmm, so what is this index? I am confused...
>
> It is the sequence number of a scancode/keycode tuple stored in the keycode table.
>
> Better than saying it in words, let me put a code snippet:
>
> at include/linux/input.h, we'll add code like:
>
> struct input_keytable_entry {
> u16 index;
> u64 scancode;
> u32 keycode;
> } __attribute__ ((packed));
>
> (the attribute packed avoids needing a compat for 64 bits)
>
> #define EVIOGKEYCODEENTRY _IOR('E', 0x85, struct input_keytable_entry)
>
> (and a similar ioctl for setkeycode)
>
> This struct will be used by the new ioctls.
>
> at include/media/ir-common.h, we already have:
>
> struct ir_scancode {
> u16 scancode;
> u32 keycode;
> };
>
> struct ir_scancode_table {
> struct ir_scancode *scan;
> int size;
> ...
> };
>
> The code at the IR core that will handle the ioctl will be like:
>
> static int ir_getkeycode_entry(struct input_dev *dev, struct input_keytable_entry *ike)
> {
> struct ir_scancode_table *rc_tab = input_get_drvdata(dev);
>
> if (ike->index >= rc_tab->size)
> return -EINVAL;
>
> ike->scancode = rc_tab->scan[ike->index].scancode;
> ike->keycode = rc_tab->scan[ike->index].keycode;
>
> return 0;
> }
>

OK, but why do you even want to expose the scancode to userspace using
evdev here? Lircd-type applications might be interested, but they are
going to use lircd. For the rest of userspace, the index can be used as a
"scancode" just as easily.

Scancodes in the input system have never been real scancodes. Even if you look
into atkbd, it uses some synthetic data composed out of the real scancodes
sent to the keyboard, and no one cares. If you are unsatisfied with a
mapping, you fire up evtest, press the key, take whatever the driver
[mis]represents as a scancode and use it to load the new definition. And
you don't care at all whether the thing that the driver calls a scancode makes
any sense to the hardware device.

--
Dmitry

2009-12-07 07:51:00

by Dmitry Torokhov

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sun, Dec 06, 2009 at 12:58:00PM +0100, Christoph Bartelmus wrote:
> Hi Dmitry,
>
> on 04 Dec 09 at 15:15, Dmitry Torokhov wrote:
> [...]
> >>>>>> http://lirc.sourceforge.net/remotes/lg/6711A20015N
> >>>>>>
> >>>>>> This is an air-conditioner remote.
> >>>>>> The entries that you see in this config file are not really separate
> >>>>>> buttons. Instead the remote just sends the current settings for e.g.
> >>>>>> temperature encoded in the protocol when you press some up/down key.
> >>>>>> You really don't want to map all possible temperature settings to KEY_*
> >>>>>> events. For such cases it would be nice to have access at the raw scan
> >>>>>> codes from user space to do interpretation of the data.
> >>>>>> The default would still be to pass the data to the input layer, but it
> >>>>>> won't hurt to have the possibility to access the raw data somehow.
> >>>>
> >>>>> Interesting. IMHO, it would be better to add an evdev ioctl to return
> >>>>> the scancode for such cases, instead of returning the keycode.
> >>>>
> >>>> That means you would have to set up a pseudo keymap, so that you can get
> >>>> the key event which you could than react on with a ioctl. Or are you
> >>>> generating KEY_UNKNOWN for every scancode that is not mapped?
> >>>> What if different scan codes are mapped to the same key event? How do you
> >>>> retrieve the scan code for the key event?
> >>>> I don't think it can work this way.
> >>>>
> >>
> >>> EV_MSC/MSC_SCAN.
> >>
> >> How would I get the 64 bit scan codes that the iMON devices generate?
> >> How would I know that the scan code is 64 bit?
> >> input_event.value is __s32.
> >>
>
> > I suppose we could add MSC_SCAN_END event so that we can transmit
> > "scancodes" of arbitrary length. You'd get several MSC_SCAN followed by
> > MSC_SCAN_END marker. If you don't get MSC_SCAN_END assume the code is 32
> > bit.
>
> And I set a timeout to know that no MSC_SCAN_END will arrive? This is a
> broken design IMHO.
>

EV_SYN signals the end of state transmission.

> Furthermore lircd needs to know the length of the scan code in bits, not
> as a multiple of 32.

I really do not think that lircd is the type of application that should
be using the evdev interface, but rather the other way around.

>
> > FWIW there is MSC_RAW as well.
>
> It took me some time to convince people that this is not the right way to
> handle raw timing data.
>
> Christoph

--
Dmitry

2009-12-07 15:34:39

by Mauro Carvalho Chehab

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Dmitry Torokhov wrote:
> On Sun, Dec 06, 2009 at 09:03:31AM -0200, Mauro Carvalho Chehab wrote:
>> Dmitry Torokhov wrote:
>>> On Fri, Dec 04, 2009 at 12:12:34PM -0200, Mauro Carvalho Chehab wrote:
>>>> Em Fri, 4 Dec 2009 02:06:42 -0800
>>>> Dmitry Torokhov <[email protected]> escreveu:
>>>>
>>>>> evdev does not really care what you use as scancode. So nobody stops
>>>>> your driver to report index as a scancode and accept index from the
>>>>> ioctl. The true "scancode" will thus be competely hidden from userspace.
>>>>> In fact a few drivers do just that.
>>>> Let me better express here. It is all about how we'll expand the limits of those
>>>> ioctls to fulfill the needs.
>>>>
>>>> The point is that we'll have, let's say something like to 50-500 scancode/keycode tuples
>>>> sparsely spread over a 2^64 scancode universe (assuming 64 bits - not sure if there is any
>>>> IR protocol/code with a bigger scancode).
>>>>
>>>> On such universe if we want to get all keycodes with the current ioctls for a scancode in
>>>> the range of 32 bits, we need to do something like:
>>>>
>>>> u32 code;
>>>> int codes[2];
>>>> for (code = 0; ; code++) {
>>>> codes[0] = (int)code;
>>>> if (!ioctl(fd, EVIOCGKEYCODE, codes))
>>>> printf("scancode 0x%08x = keycode 0x%08x\n", codes[0], codes[1]);
>>>> if (code == 0xffffffff)
>>>> break;
>>>> }
>>>>
>>>> So, in the 32-bit case, we'd do about 4 billion calls to the EVIOCGKEYCODE ioctl to
>>>> read the complete scancode space, just to get those 50-500 useful codes.
>>>>
>>> Right, currently there is no need to query all scancodes defined by a
>>> device. Quite often drivers don't even know what scancodes the device
>>> actually generates (e.g. the AT keyboard).
>>>
>>> Could you describe in more detail how you are using this data?
>> It is useful if you want to dump the keycode map into a file with the current
>> scancode attribution, in order to modify some keystrokes.
>>
>> Right now, programs like dumpkeys (from kbd package) allow you to dump for example
>> the attribution keys from your keyboard.
>>
>> In the case of IR's this functionality is very important.
>>
>> For example, you may need to replace the scancode/KEY_CHANNELUP tuple by scancode/KEY_UP,
>> in order to make your IR work with some applications that don't recognize the IR
>> specific keycodes.
>>
>> In practice, with such applications, you'll need to replace several different scancodes.
>>
>> So, you may end up having different scancodes producing the same keycode, as such applications
>> aren't capable of differentiating an UP key from a CHANNELUP key. This is the case, for example,
>> of the popular tvtime application.
>>
>> The best way is to just generate a dump file, modify the needed entries and reload the
>> table by calling EVIOCSKEYCODE, in order to use the new table.
>>
>> I wrote a small application that does just that, and I use it to load some special tables
>> to work with applications like tvtime and mplayer (with mplayer, you need to map
>> <channel down> as KEY_H and <channel up> as KEY_K).
>>
>> I hope that, after we finish addressing IRs, we'll finally have media applications handling
>> the proper keycodes directly, but people may still need to write different keycodes to do
>> other things. I used to have a keymap file in order to use an IR to control the slide show
>> with openoffice.
>>
>>>> Due to the current API limit, we don't have any way to use the full 64bits space for scancodes.
>>>>
>>> Can we perhaps reduce the "scancode" space? Are all 64 bits in
>>> the protocols used to represent keypresses, or are some used for some kind of
>>> addressing?
>> All the IRs I found with V4L/DVB use up to 16-bit codes (or 24 bits, for the NEC extended protocol).
>> However, the drivers were getting only 7 bits, due to the old way of implementing
>> EVIO[S|G]KEYCODE.
>>
>> I know, however, one i2c chip that returns a 5 byte scancode when you press a key.
>> We're currently just discarding the remaining bits, so I'm not really sure what's there.
>>
>>
>> The usage of 7 bits, in practice, meant that it wasn't possible to use
>> a different remote than the one provided by the device manufacturer, as the scancodes produced
>> by other remotes differ in more than 7 bits. Also, this means that, if your TV and your PC
>> are using the same protocol, like RC5, and you press a button on your TV remote, the PC will
>> also get it.
>>
>> I know, however, one IR driver that produces 6 bytes when you press a key.
>> We're currently just discarding the remaining bits, so I'm not really sure
>> what else is there. Some badly optimized protocol? A bigger scancode? A protocol indication?
>>
>> In general, the scancode contains 8 or 16 bits for address, and 8 bits for command.
>>
>> However, the scancode table needs to handle the address as well, since we don't want a
>> scancode meant for your TV to be handled by the PC. Still, we may want to accept codes from
>> different addresses there, as we may need the address to differentiate a command
>> meant to control the TV volume, for example, from the same command meant to control the PC
>> master volume.
>
> Right, but this data is not interesting to userspace. For userspace, a
> scancode is just a cookie that uniquely identifies a button to which
> a keycode can be assigned.
>
>>>> if we use code[0] as an index, this means that we'll need to share the 32 bits on code[1]
>>>> for scancode/keycode. Even using an 32 bits integer for keycode, it is currently limited to:
>>>>
>>>> #define KEY_MAX 0x2ff
>>>> #define KEY_CNT (KEY_MAX+1)
>>>>
>>>> So, we have 10 bits already used for the keycode. This leaves only 22 bits for scancodes, if we share
>>>> codes[1] for both keycode and scancode. By sharing the 32 bits, we'll need to be careful not to extend
>>>> KEY_MAX beyond 0x3ff, otherwise the keytable won't be able to represent all keys of the
>>>> key universe.
>>>>
>>>> What is needed for this case is for the arguments of get key/set key to be something like:
>>>>
>>>> struct {
>>>> u16 index;
>>>> u64 scancode;
>>>> u32 keycode;
>>>> };
>>>>
>>> Hmm, so what is this index? I am confused...
>> It is the sequence number of a scancode/keycode tuple stored in the keycode table.
>>
>> Better than saying it in words, let me put a code snippet:
>>
>> at include/linux/input.h, we'll add code like:
>>
>> struct input_keytable_entry {
>> u16 index;
>> u64 scancode;
>> u32 keycode;
>> } __attribute__ ((packed));
>>
>> (the attribute packed avoids needing a compat for 64 bits)
>>
>> #define EVIOGKEYCODEENTRY _IOR('E', 0x85, struct input_keytable_entry)
>>
>> (and a similar ioctl for setkeycode)
>>
>> This struct will be used by the new ioctls.
>>
>> at include/media/ir-common.h, we already have:
>>
>> struct ir_scancode {
>> u16 scancode;
>> u32 keycode;
>> };
>>
>> struct ir_scancode_table {
>> struct ir_scancode *scan;
>> int size;
>> ...
>> };
>>
>> The code at the IR core that will handle the ioctl will be like:
>>
>> static int ir_getkeycode_entry(struct input_dev *dev, struct input_keytable_entry *ike)
>> {
>> struct ir_scancode_table *rc_tab = input_get_drvdata(dev);
>>
>> if (ike->index >= rc_tab->size)
>> return -EINVAL;
>>
>> ike->scancode = rc_tab->scan[ike->index].scancode;
>> ike->keycode = rc_tab->scan[ike->index].keycode;
>>
>> return 0;
>> }
>>
>
> OK, but why do you even want to expose the scancode to userspace using
> evdev here? Lircd-type applications might be interested, but they are
> going to use lircd. For the rest of userspace, the index can be used as a
> "scancode" just as easily.

This is needed in order to load a keytable by udev, even for devices that
won't provide a lirc_dev interface.

> Scancodes in the input system have never been real scancodes. Even if you look
> into atkbd, it uses some synthetic data composed out of the real scancodes
> sent to the keyboard, and no one cares. If you are unsatisfied with a
> mapping, you fire up evtest, press the key, take whatever the driver
> [mis]represents as a scancode and use it to load the new definition. And
> you don't care at all whether the thing that the driver calls a scancode makes
> any sense to the hardware device.

We used mis-represented scancodes, but this proved to be a broken design
over time.

For users, whatever the scancode "cookie" means, the same IR device should
provide the same "cookie" no matter what IR receiver is used, since the same
IR may be found on different devices, or the user may simply buy a new card
and opt to use their old IR (there are very good reasons for that, since
several new devices are now coming with small IRs that have half of the
keys available on the older models).

To allow that, userspace should be able to completely override the scancode
table with a new one: the driver should support not only replacing the keycodes,
but also replacing the scancodes.

Also, the same physical IR device may be shipped with a wide range of different
boards, so the scancode should be portable between all drivers.

The solution of using mis-represented scancodes is currently used by the V4L
drivers, and we're getting rid of it, since it doesn't allow replacing the IR
keytable with the keytable from another IR.

Just as an example, I committed a patch yesterday that replaces the old
mis-represented scancode table for the Hauppauge Grey IR with the correct one, for
one device I have here (WinTV USB2):

http://linuxtv.org/hg/v4l-dvb/rev/6ec717e42b4a

After this patch, both WinTV USB2 and HVR-950 devices will use the same scancode
representation, also used by a completely different driver (dib0700).

After the patch, both devices can now support other IRs that generate scancodes
using the RC-5 protocol, and the bits are represented on both following the RC-5 specs.

So, the same keycode table for the Hauppauge Grey IR will work
perfectly with both devices.

Also, I can now replace the IR table to accept, for example, a PC-39 [1] IR I have here.
[1] http://www.cwtek.com.tw/product-en.php?layer1=6&layer2=37&lang=en

(this site is interesting: it shows several different types of IRs manufactured by a
Chinese company that are commonly found bundled with cheap media devices - I've seen
several cases of devices that used to be shipped with one type being replaced by
another type on a newer device, with the same PCI/USB IDs).

The PC-39 IR generates a completely different set of scancodes. For example, on this IR,
the scancode for the <Picture> key is 0x083a, while on the original IR, the scancode for
<Picture> is 0x1e1a.

After the patch, the same PC-39 IR table works with both WinTV USB2 and HVR-950.

Before it, we were using the new Hauppauge scancode table masked with 0xff (so,
instead of getting 0x1e00 + command, the table had just the command).

By not using the complete scancode, the driver needs either to accept
only codes that match (scancode & 0x1e00) == 0x1e00, or to accept
any scancode & 0xff.

In the first case, you can't replace the IR with one from another manufacturer. In
the latter case, if you use your TV's RC5 remote, your PC will mis-recognize the
scancode.

So, in both cases, the evdev interface will be taking undesired actions.

The V4L drivers had this broken behavior for a long time, causing lots of
complaints and, in practice, limiting the drivers to the provided IRs,
or preventing the usage of a PC close to a TV.

So, we're migrating the drivers to use a full scancode table there. This is
needed for users that use evdev.

So, while I agree that, in theory, we might use any arbitrary representation
for the scancode by just discarding information, practice has already proved that
such a design won't work.

In summary,

While the current EVIO[G|S]KEYCODE works sub-optimally for scancodes up to 16 bits
(since a read loop over 2^16 is not that expensive), the current approach
won't scale to bigger scancode spaces. So, it needs to be expanded
to work with the bigger scancode spaces used by more recent IR protocols.

I'm afraid that any tricks we may try to work around the current limits while
still keeping the same ioctl definition will sooner or later cause big headaches.
It is better to redesign it to allow different scancode spaces.

Cheers,
Mauro.

2009-12-07 15:37:05

by Mauro Carvalho Chehab

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Jon Smirl wrote:
> On Sun, Dec 6, 2009 at 12:48 PM, Krzysztof Halasa <[email protected]> wrote:
>> Once again: how about agreement about the LIRC interface
>> (kernel-userspace) and merging the actual LIRC code first?

That's fine for me.

>> In-kernel
>> decoding can wait a bit, it doesn't change any kernel-user interface.

This may occur in parallel, but, as we've been discussing, there are
still some needs there that will require kernel-user interfaces.

> I'd like to see a semi-complete design for an in-kernel IR system
> before anything is merged from any source.

There are some tasks there that are independent of any API design.

For example, I'm currently doing some cleanups and improvements to
the existing in-kernel IR code, creating a common IR core that replaces
the existing handling of 7-bit scancode/keycode tables with the
complete scancodes found in the current in-kernel drivers.

This approach works for the current drivers, as none of them currently supports
any protocol that requires more than 16 bits for scancodes. However, the
current EVIOCGKEYCODE implementation won't scale to bigger scancode spaces.

This code is written to be generic enough to be used by V4L, DVB and LIRC
drivers. So, after this work is done, it should be easy to connect lirc_dev
to a decoder and to this core support. There are already some in-kernel decoders
that can be used for some protocols, but it would be better to review the decoders in
the light of lirc. I expect that the lirc decoders will be in better shape.

While I'm here, I also intend to create the sysfs bits for /sys/class/irrcv,
as already discussed, and submit the patch here for discussion.

Of course, after writing the different APIs to control the IR tables, we'll
need to improve them, but this depends on the results of the architecture discussions.

Cheers,
Mauro

2009-12-07 18:34:21

by Dmitry Torokhov

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Mon, Dec 07, 2009 at 01:34:10PM -0200, Mauro Carvalho Chehab wrote:
>
> > Scancodes in the input system have never been real scancodes. Even if you look
> > into atkbd, it uses some synthetic data composed out of the real scancodes
> > sent to the keyboard, and no one cares. If you are unsatisfied with a
> > mapping, you fire up evtest, press the key, take whatever the driver
> > [mis]represents as a scancode and use it to load the new definition. And
> > you don't care at all whether the thing that the driver calls a scancode makes
> > any sense to the hardware device.
>
> We used mis-represented scancodes, but this proved to be a broken design
> over time.
>
> For users, whatever the scancode "cookie" means, the same IR device should
> provide the same "cookie" no matter what IR receiver is used, since the same
> IR may be found on different devices, or the user may simply buy a new card
> and opt to use their old IR (there are very good reasons for that, since
> several new devices are now coming with small IRs that have half of the
> keys available on the older models).

OK, this is a fair point. We need to keep the "scancodes" stable across
receivers.

However I am not sure if the "index" approach is the best - it will not
work well if a driver decides to implement the keymap using a data structure
other than an array, say a linked list or a hash table. Lists by
their nature do not have a stable index, and even if we were to generate
one "on the fly" we could not rely on it for a subsequent EVIOCSKEYCODE: some
other program may cause insertion or deletion of an element, making the
artificial index refer to another entry in the map.

While extending scancode size is pretty straightforward (well, almost
;) ) I am not sure what is the best way to enumerate keymap for a given
device.

--
Dmitry

2009-12-07 18:42:01

by Dmitry Torokhov

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sun, Dec 06, 2009 at 09:34:26PM +0100, Krzysztof Halasa wrote:
> Jon Smirl <[email protected]> writes:
>
> >> Once again: how about agreement about the LIRC interface
> >> (kernel-userspace) and merging the actual LIRC code first? In-kernel
> >> decoding can wait a bit, it doesn't change any kernel-user interface.
> >
> > I'd like to see a semi-complete design for an in-kernel IR system
> > before anything is merged from any source.
>
> This is a way to nowhere, there is no logical dependency between LIRC
> and input layer IR.
>
> There is only one thing which needs attention before/when merging LIRC:
> the LIRC user-kernel interface. In-kernel "IR system" is irrelevant and,
> actually, making a correct IR core design without LIRC merged can
> only be harder.

This sounds like "merge first, think later"...

The question is why we need to merge the lirc interface right now, before we
have agreed on the subsystem architecture. No one _in kernel_ uses lirc-dev
yet and, looking at the way things are shaping up, no drivers will be
_directly_ using it after it is complete. So, even if we merge it right
away, the code will have to be restructured and reworked. Unfortunately,
just merging what Jarod posted will introduce a sysfs hierarchy, which
is a userspace interface as well (although we are not as good at maintaining it
at times), and will add more constraints on us.

That is why I think we should go the other way around - introduce the
core which receivers could plug into and decoder framework and once it
is ready register lirc-dev as one of the available decoders.

--
Dmitry

2009-12-07 20:08:57

by Krzysztof Halasa

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Dmitry Torokhov <[email protected]> writes:

>> There is only one thing which needs attention before/when merging LIRC:
>> the LIRC user-kernel interface. In-kernel "IR system" is irrelevant and,
>> actually, making a correct IR core design without LIRC merged can
>> only be harder.
>
> This sounds like "merge first, think later"...

I'd say "merge the sane agreed and stable things first and think later
about improvements".

> The question is why we need to merge the lirc interface right now, before we
> have agreed on the subsystem architecture?

Because we need the features and we can't improve something which is
outside the kernel. What "subsystem architecture" do you want to
discuss? Unrelated (input layer) interface?

Those are simple things. The only part which needs to be stable is the
(in this case LIRC) kernel-user interface.

> No one _in kernel_ uses lirc-dev
> yet and, looking at the way things are shaping up, no drivers will be
> _directly_ using it after it is complete. So, even if we merge it right
> away, the code will have to be restructured and reworked.

Sure. We do this constantly to every part of the kernel.

> Unfortunately,
> just merging what Jarod posted will introduce a sysfs hierarchy, which
> is a userspace interface as well (although we are not as good at maintaining it
> at times), and will add more constraints on us.

Then perhaps it should be skipped, leaving only the things udev needs to
create /dev/ entries. They don't have to be particularly stable.
Perhaps it should go into staging first. We can't work with code
outside the kernel; staging has no such limitation.

> That is why I think we should go the other way around - introduce the
> core which receivers could plug into and decoder framework and once it
> is ready register lirc-dev as one of the available decoders.

That means all the work has to be kept and then merged "atomically",
it seems there is a lack of manpower for this.
--
Krzysztof Halasa

2009-12-07 21:38:17

by Dmitry Torokhov

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Mon, Dec 07, 2009 at 09:08:57PM +0100, Krzysztof Halasa wrote:
> Dmitry Torokhov <[email protected]> writes:
>
> >> There is only one thing which needs attention before/when merging LIRC:
> >> the LIRC user-kernel interface. In-kernel "IR system" is irrelevant and,
> >> actually, making a correct IR core design without the LIRC merged can be
> >> only harder.
> >
> > This sounds like "merge first, think later"...
>
> I'd say "merge the sane agreed and stable things first and think later
> about improvements".
>
> > The question is why we need to merge the lirc interface right now, before we
> > have agreed on the subsystem architecture?
>
> Because we need the features and we can't improve something which is
> outside the kernel. What "subsystem architecture" do you want to
> discuss? Unrelated (input layer) interface?
>

No, the IR core responsible for registering receivers and decoders.

> Those are simple things. The only part which needs to be stable is the
> (in this case LIRC) kernel-user interface.

For which some questions are still open. I believe Jon just outlined some
of them.

>
> > No one _in kernel_ uses lirc-dev
> > yet and, looking at the way things are shaping up, no drivers will be
> > _directly_ using it after it is complete. So, even if we merge it right
> > away, the code will have to be restructured and reworked.
>
> Sure. We do this constantly to every part of the kernel.

No, we do not. We do not merge something that we expect to rework almost
completely (no, not the lirc-style device userspace interface, although
even that is not completely finalized, I believe, but the rest of the
subsystem).

>
> > Unfortunately,
> > just merging what Jarod posted will introduce a sysfs hierarchy, which
> > is a userspace interface as well (although we are not as good at maintaining it
> > at times), and will add more constraints on us.
>
> Then perhaps it should be skipped, leaving only the things udev needs to
> create /dev/ entries. They don't have to be particularly stable.
> Perhaps it should go into staging first. We can't work with code
> outside the kernel; staging has no such limitation.

OK, say we add this to staging as is. What is next? Who will be using
this code that is now in staging? Do we encourage driver writers to
hook into it (given that we intend to redo it soon)? Do something
else?

>
> > That is why I think we should go the other way around - introduce the
> > core which receivers could plug into and decoder framework and once it
> > is ready register lirc-dev as one of the available decoders.
>
> That means all the work has to be kept and then merged "atomically",
> it seems there is a lack of manpower for this.

No, not at all. You merge the core subsystem code, then start adding
decoders... In the meantime, driver writers could start preparing their
drivers to plug into it.

In the meantime, out-of-tree LIRC can be used by consumers undisturbed.

--
Dmitry

2009-12-07 23:01:22

by Mauro Carvalho Chehab

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Dmitry Torokhov wrote:
> On Mon, Dec 07, 2009 at 01:34:10PM -0200, Mauro Carvalho Chehab wrote:
>>
>>> Scancodes in the input system have never been real scancodes. Even if you look
>>> into atkbd, it uses some synthetic data composed out of the real scancodes
>>> sent to the keyboard, and no one cares. If you are unsatisfied with a
>>> mapping, you fire up evtest, press the key, take whatever the driver
>>> [mis]represents as a scancode and use it to load the new definition. And
>>> you don't care at all whether the thing that the driver calls a scancode makes
>>> any sense to the hardware device.
>> We used mis-represented scancodes, but this proved to be a broken design
>> over time.
>>
>> For users, whatever the scancode "cookie" means, the same IR device should
>> provide the same "cookie" no matter what IR receiver is used, since the same
>> IR may be found on different devices, or the user may simply buy a new card
>> and opt to use their old IR (there are very good reasons for that, since
>> several new devices are now coming with small IRs that have half of the
>> keys available on the older models).
>
> OK, this is a fair point. We need to keep the "scancodes" stable across
> receivers.
>
> However I am not sure if the "index" approach is the best - it will not
> work well if a driver decides to implement the keymap using a data structure
> other than an array, say a linked list or a hash table. Lists by
> their nature do not have a stable index, and even if we were to generate
> one "on the fly" we could not rely on it for a subsequent EVIOCSKEYCODE: some
> other program may cause insertion or deletion of an element, making the
> artificial index refer to another entry in the map.

Any addition/deletion of an element will cause problems, even with a simple
table. I don't think we should consider a case where two applications are
changing the table at the same time; the end result will likely be different
from what's expected anyway. Btw, while an index for EVIOCGKEYCODE is really
important, except for symmetry there are no other reasons why we can't use
the scancode as the primary key for EVIOCSKEYCODE. We can't allow two identical
scancodes in the scancode/keycode table anyway. So, we can define
EVIOCSKEYCODE without an index.

> While extending scancode size is pretty straightforward (well, almost
> ;) ) I am not sure what is the best way to enumerate keymap for a given
> device.
>

Btw, if you want to take a look, I've finished implementing the table insert/delete
logic. Extending/reducing the space in the table required some care, but it is working
fine:

http://git.kernel.org/?p=linux/kernel/git/mchehab/linux-next.git;a=commitdiff;h=87d73cbd33235b162e8da62305ba8b5926a1fbf8

The code is not yet optimized with a hash table or a binary search (patches to
improve it are welcome), but it is already working as expected.

Cheers,
Mauro.

2009-12-07 23:44:35

by Mauro Carvalho Chehab

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Let me add my view for those questions.

Jon Smirl wrote:
> On Sun, Dec 6, 2009 at 3:34 PM, Krzysztof Halasa <[email protected]> wrote:
>> Jon Smirl <[email protected]> writes:
>>
>>>> Once again: how about agreement about the LIRC interface
>>>> (kernel-userspace) and merging the actual LIRC code first? In-kernel
>>>> decoding can wait a bit, it doesn't change any kernel-user interface.
>>> I'd like to see a semi-complete design for an in-kernel IR system
>>> before anything is merged from any source.
>> This is a way to nowhere, there is no logical dependency between LIRC
>> and input layer IR.
>>
>> There is only one thing which needs attention before/when merging LIRC:
>> the LIRC user-kernel interface. In-kernel "IR system" is irrelevant and,
>> actually, making a correct IR core design without the LIRC merged can be
>> only harder.
>
> Here's a few design review questions on the LIRC drivers that were posted....
>
> How is the pulse data going to be communicated to user space?

lirc_dev will implement a revised version of the lirc API. I'm assuming that
Jarod and Christoph will do this review, in order to be sure that it is stable
enough for kernel inclusion (as proposed by Gerd).

> Can the pulse data be reported via an existing interface without
> creating a new one?

Raw pulse data should be reported only via lirc_dev, but it can also be
converted into a keycode and reported via evdev, an existing interface.

> Where is the documentation for the protocol?

I'm not sure what you mean here. I've started a doc about IR in the media
docbook. This is currently inside the kernel's Documentation/DocBook. If you want
to browse, it is also available as:

http://linuxtv.org/downloads/v4l-dvb-apis/ch17.html

For sure we need to document the IRs better, and explain the APIs there.

> Is it a device interface or something else?

lirc_dev should create a device interface.

> What about capabilities of the receiver, what frequencies?
> If a receiver has multiple frequencies, how do you report what
> frequency the data came in on?

IMO, via sysfs.

> What about multiple apps simultaneously using the pulse data?

IMO, it is better to limit the raw interface to just one open.

> How big is the receive queue?

It should be big enough to receive at least one keycode event. Considering that
the driver will use a kfifo (IMO a good strategy, especially since no lock is
needed if just one open is allowed), it will require a power-of-two size.

> How does access work, root only or any user?

IMO, it should be the same requirement as used by an input interface.

> How are capabilities exposed, sysfs, etc?

IMO, sysfs.

> What is the interface for attaching an in-kernel decoder?

IMO, it should use the kfifo for it. However, if we allow both raw data and
in-kernel decoders to read data there, we'll need a spinlock to protect the
kfifo.

> If there is an in-kernel decoder should the pulse data stop being
> reported, partially stopped, something else?

I don't have a strong opinion here, but, from the previous discussions, it
seems that people want it to be double-reported by default. If so, I think
we need to implement a command at the raw interface to allow disabling the
in-kernel decoder, while the raw interface is kept open.

> What is the mechanism to make sure both system don't process the same pulses?

I don't see a good way to avoid it.

> Does it work with poll, epoll, etc?
> What is the time standard for the data, where does it come from?
> How do you define the start and stop of sequences?
> Is receiving synchronous or queued?
> What about transmit, how do you get pulse data into the device?
> Transmitter frequencies?
> Multiple transmitters?
> Is transmitting synchronous or queued?
> How big is the transmit queue?

I don't have a clear answer for those. I'll let those to LIRC developers to answer.


Cheers,
Mauro

2009-12-07 23:50:49

by Mauro Carvalho Chehab

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Christoph Bartelmus wrote:
> Hi Jon,
>
> on 04 Dec 09 at 19:28, Jon Smirl wrote:
>>> BTW, I just came across a XMP remote that seems to generate 3x64 bit
>>> scan codes. Anyone here has docs on the XMP protocol?
>> Assuming a general purpose receiver (not one with fixed hardware
>> decoding), is it important for Linux to receive IR signals from all
>> possible remotes no matter how old or obscure? Or is it acceptable to
> [...]
>> Of course transmitting is a completely different problem, but we
>> haven't been talking about transmitting. I can see how we would need
>> to record any IR protocol in order to retransmit it. But that's in the
>> 5% of users world, not the 90% that want MythTV to "just work". Use
>> something like LIRC if you want to transmit.
>
> I don't think anyone here is in the position to be able to tell what is
> 90% or 5%.

True. Still, something like an IR device built for someone's own use is a case
where we don't need to care about having an in-kernel driver.

> Personally I use LIRC exclusively for transmit to my settop box
> using an old and obscure RECS80 protocol.
> No, I won't replace my setup just because it's old and obscure.
>
> Cable companies tend to provide XMP based boxes to subscribers more often
> these days. Simply not supporting these setups is a no-go for me.

I don't see any reason not to support STB protocols. Much of that hardware
runs Linux anyway, so the STB manufacturers may eventually send us decoders
that work with their IRs.

Cheers,
Mauro.


2009-12-08 00:01:06

by Mauro Carvalho Chehab

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Krzysztof Halasa wrote:
> Mauro Carvalho Chehab <[email protected]> writes:
>

>> struct input_keytable_entry {
>> u16 index;
>> u64 scancode;
>> u32 keycode;
>> } __attribute__ ((packed));
>>
>> (the attribute packed avoids needing a compat for 64 bits)
>
> Maybe { u64 scancode; u32 keycode; u16 index; u16 reserved } would be a
> bit better, no alignment problems and we could eventually change
> "reserved" into something useful.
>
> But I think, if we are going to redesign it, we better use scancodes of
> arbitrary length (e.g. protocol-dependent length). It should be opaque
> except for the protocol handler.

Yes, an opaque type for the scancode at the userspace API can be better, but
passing a pointer to the kernel will require some compat32 logic (as pointer
size differs between 32 and 64 bits).

We may use something like a u8[] with an arbitrarily large number of bytes.
In this case, we need to take some care to avoid LSB/MSB (endianness) troubles.

Cheers,
Mauro.

2009-12-08 00:28:12

by [email protected]

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Mon, Dec 7, 2009 at 6:44 PM, Mauro Carvalho Chehab
<[email protected]> wrote:
> Let me add my view for those questions.
>
> Jon Smirl wrote:
>> On Sun, Dec 6, 2009 at 3:34 PM, Krzysztof Halasa <[email protected]> wrote:
>>> Jon Smirl <[email protected]> writes:
>>>
>>>>> Once again: how about agreement about the LIRC interface
>>>>> (kernel-userspace) and merging the actual LIRC code first? In-kernel
>>>>> decoding can wait a bit, it doesn't change any kernel-user interface.
>>>> I'd like to see a semi-complete design for an in-kernel IR system
>>>> before anything is merged from any source.
>>> This is a way to nowhere, there is no logical dependency between LIRC
>>> and input layer IR.
>>>
>>> There is only one thing which needs attention before/when merging LIRC:
>>> the LIRC user-kernel interface. In-kernel "IR system" is irrelevant and,
>>> actually, making a correct IR core design without the LIRC merged can be
>>> only harder.
>>
>> Here's a few design review questions on the LIRC drivers that were posted....
>>
>> How is the pulse data going to be communicated to user space?
>
> lirc_dev will implement a revised version of the lirc API. I'm assuming that
> Jarod and Christoph will do this review, in order to be sure that it is stable
> enough for kernel inclusion (as proposed by Gerd).
>
>> Can the pulse data be reported via an existing interface without
>> creating a new one?
>
> Raw pulse data should be reported only via lirc_dev, but it can be converted
> into a keycode and reported via evdev as well, via an existing interface.
>
>> Where is the documentation for the protocol?
>
> I'm not sure what you're meaning here. I've started a doc about IR at the media

What is the format of the pulse stream data coming out of the lirc device?

> docbook. This is currently inside the kernel Documents/DocBook. If you want
> to browse, it is also available as:
>
> ? ? ? ?http://linuxtv.org/downloads/v4l-dvb-apis/ch17.html
>
> For sure we need to better document the IR's, and explain the API's there.
>
>> Is it a device interface or something else?
>
> lirc_dev should create a device interface.
>
>> What about capabilities of the receiver, what frequencies?
>> If a receiver has multiple frequencies, how do you report what
>> frequency the data came in on?
>
> IMO, via sysfs.

Say you have a hardware device with two IR diodes, one at 38K and one
at 56K. Both of these receivers can get pulses. How do we tell the
user space app which frequency the pulses were received on? Seems to
me like there has to be a header on the pulse data indicating the
received carrier frequency. There is also baseband signaling. sysfs
won't work for this because of the queuing latency.

How is over-run signaled to the app? You'd get an over-run if the app
is too slow at reading the data out of the FIFO. If you ignore
over-run you'll be processing bad data because part of the message was
lost. An over-run signal tells the app to abort the message and start
over.

>> What about multiple apps simultaneously using the pulse data?
>
> IMO, the better is to limit the raw interface to just one open.
>
>> How big is the receive queue?
>
> It should be big enough to receive at least one keycode event. Considering that
> the driver will use kfifo (IMO, it is a good strategy, especially since you
> won't need any lock if just one open is allowed), it will require a power of two size.

How is the end of a pulse train detected? A timeout? Without decoding the
protocol there is no way to tell the end of a signal other than a timeout.

>
>> How does access work, root only or any user?
>
> IMO, it should be the same requirement as used by an input interface.
>
>> How are capabilities exposed, sysfs, etc?
>
> IMO, sysfs.
>
>> What is the interface for attaching an in-kernel decoder?
>
> IMO, it should use the kfifo for it. However, if we allow both raw data and
> in-kernel decoders to read data there, we'll need a spinlock to protect the
> kfifo.
>
>> If there is an in-kernel decoder should the pulse data stop being
>> reported, partially stopped, something else?
>
> I don't have a strong opinion here, but, from the previous discussions, it
> seems that people want it to be double-reported by default. If so, I think
> we need to implement a command at the raw interface to allow disabling the
> in-kernel decoder, while the raw interface is kept open.

Data could be sent to the in-kernel decoders first and then if they
don't handle it, send it to user space.


>
>> What is the mechanism to make sure both system don't process the same pulses?
>
> I don't see a good way to avoid it.
>
>> Does it work with poll, epoll, etc?
>> What is the time standard for the data, where does it come from?
>> How do you define the start and stop of sequences?
>> Is receiving synchronous or queued?
>> What about transmit, how do you get pulse data into the device?
>> Transmitter frequencies?
>> Multiple transmitters?
>> Is transmitting synchronous or queued?
>> How big is the transmit queue?
>
> I don't have a clear answer for those. I'll let those to LIRC developers to answer.
>
>
> Cheers,
> Mauro
>
>
>



--
Jon Smirl
[email protected]

2009-12-08 00:44:18

by [email protected]

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Mon, Dec 7, 2009 at 1:41 PM, Dmitry Torokhov
<[email protected]> wrote:
> That is why I think we should go the other way around - introduce the
> core which receivers could plug into and decoder framework and once it
> is ready register lirc-dev as one of the available decoders.

The core needs to allow for RF remotes too.

- Bluetooth remotes are already supported in the kernel somehow; I don't know how they work.
- RF4CE: the 802.15.4 stack has been recently merged, and the remotes use a
protocol on top of that. These remotes will hit the consumer market
next year. Sony, Panasonic and other big names are behind this.
- Z-Wave: the Harmony remotes use Z-Wave. There is no Z-Wave support in
the kernel that I am aware of. Z-Wave is proprietary.

After these protocols are decoded you end up with scancodes. The
scancodes need to get injected into input somehow and then flow
through the mapping process. Decoding down to the scancodes probably
happens over in the networking code.

After an in-kernel IR decoder runs it needs to hand off the scancodes
into the input subsystem. This same API can be used by the networking
code to hand off RF scancodes.

--
Jon Smirl
[email protected]

2009-12-08 04:11:46

by Andy Walls

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sun, 2009-12-06 at 16:23 -0500, Jon Smirl wrote:
> On Sun, Dec 6, 2009 at 3:34 PM, Krzysztof Halasa <[email protected]> wrote:
> > Jon Smirl <[email protected]> writes:
> >
> >>> Once again: how about agreement about the LIRC interface
> >>> (kernel-userspace) and merging the actual LIRC code first? In-kernel
> >>> decoding can wait a bit, it doesn't change any kernel-user interface.
> >>
> >> I'd like to see a semi-complete design for an in-kernel IR system
> >> before anything is merged from any source.
> >
> > This is a way to nowhere, there is no logical dependency between LIRC
> > and input layer IR.
> >
> > There is only one thing which needs attention before/when merging LIRC:
> > the LIRC user-kernel interface. In-kernel "IR system" is irrelevant and,
> > actually, making a correct IR core design without the LIRC merged can be
> > only harder.
>
> Here's a few design review questions on the LIRC drivers that were posted....

I will answer based on my understanding of LIRC as it exists today, but
I'm tired and am not going to dig into too many details I can't find
easily.

(Christoph can correct me if I get anything wrong.)


An architecture drawing can be found here:

http://www.lirc.org/html/technical.html

> How is the pulse data going to be communicated to user space?

Currently that is via lirc_dev which shows up as /dev/lircN (IIRC) in
userspace.

"The lirc_dev module is a helper and abstraction layer for other
modules. It registers /dev/lirc device in a system (including support
for devfs) and waits for plugin registration. After that it serves
device requests (open, read, poll, ioctl, close) and if needed calls
callback functions from plugin(s) to communicate with the physical
device."

The function call for hardware drivers to register with lirc_dev from
within kernel space is lirc_register_driver(), which requires a structure
with pointers to hardware-specific operations, IIRC.


> Can the pulse data be reported via an existing interface without
> creating a new one?

Yes.


> Where is the documentation for the protocol?

http://www.lirc.org/html/technical.html


> Is it a device interface or something else?

Device for a kernelspace driver/plugin registering with lirc_dev.


> Does it work with poll, epoll, etc?

lirc_dev has a function irctl_poll() that will call a hardware-specific
poll operation if it exists; otherwise it has default poll logic.


> What is the time standard for the data, where does it come from?

I think it is usec, IIRC.

I know that the hardware I work with has sub 100 ns resolution, so
that's what is used as the basis for v4l2_subdev_ir_ops time values in
kernel. The conversion to usec is rather trivial.

The hardware I work with is very configurable, but I always use the
BT.656 video pixel clock of 13.5 MHz * 8 = 108 MHz as the master
frequency reference for all the pulse width measurement circuitry.


> How do you define the start and stop of sequences?

For the end of Rx signalling:

Well, with the Conexant hardware I can set a maximum pulse (mark or
space) width, and the hardware will generate an Rx Timeout interrupt to
signal the end of Rx when a space ends up longer than that max pulse
width. The hardware also puts a special marker in the hardware pulse
width measurement FIFO (in-band signalling, essentially).

I'm not sure anything like that gets communicated to userspace via
lirc_dev, and I'm too tired to doublecheck right now.

If you have determined the protocol you are after, it's easy to know
what the pulse count should be and what the max pulse width should be (plus
slop for crappy hardware), so finding the end of an Rx isn't hard. The
button repeat intervals are *very* large. I've never seen a remote
rapid-fire codes back to back.


For the start of a sequence:

Easy, the first mark after a *very* long (10's of msec) space.
You could also look for very long mark header which many protocols (NEC,
RC-6, ...) have to help the IR hardware's AGC get set.


> What about capabilities of the receiver, what frequencies?

LIRC's API has a LIRC_GET_FEATURES ioctl().


> If a receiver has multiple frequencies, how do you report what
> frequency the data came in on?

I'm not sure most hardware can pick up a pulse on an arbitrary freq.
Usually you set a desired carrier and a window. The windows can be very
generous on some hardware: Fc * 16/20 to Fc * 16/12 (e.g. for 38 kHz
that's 30.4 kHz to 50.667 kHz).

Hardware can have a special "learn mode" to really make fine
measurements about the waveform without a specified carrier, but usually
requires some special setup and the user being prompted to take action
to get a good measurement.


> What about multiple apps simultaneously using the pulse data?

LIRC multiplexes a single device node with a daemon in userspace.


> Is receiving synchronous or queued?

kfifo's in lirc_dev IIRC.


> How big is the receive queue?

Device HW FIFOs can have a depth of 1 to 16.

My software queues for CX2388[58] devices are 512 pulse measurements deep
-- overkill except for maybe a protocol with a 256-bit Manchester-encoded
payload.

IIRC the lirc_dev per-device buffers (on top of the things I just
mentioned) are sized as requested of lirc_dev by the underlying
hardware driver/plugin.



> How does access work, root only or any user?

Depends on udev rules. *NIX systems by default should be doing mknod()
such that root only has access unless otherwise specified.

Connecting to the LIRC daemon is via socket interface.


> What about transmit, how do you get pulse data into the device?

With the LIRC daemon which uses, in this example, the /dev/lircN device
node.


> Transmitter frequencies?

Config file.

Also with the LIRC_SET_SEND_CARRIER and LIRC_SET_SEND_DUTY_CYCLE
ioctl()s.


> Multiple transmitters?

LIRC_SET_TRANSMITTER_MASK ioctl() for multiple Tx diodes on one IR
controller.

Multiple /dev/lircN nodes for multiple independent controllers.


> Is transmitting synchronous or queued?

kfifo's IIRC.


> How big is the transmit queue?

Likely variable again, requested by the underlying driver/plugin of
lirc_dev.



There are really 3 files I think you should look at for in kernel LIRC
drivers which may have answered many of those questions you had:

lirc-0.8.5/drivers/lirc.h
lirc-0.8.5/drivers/lirc_dev/lirc_dev.h
lirc-0.8.5/drivers/lirc_dev/lirc_dev.c

and IIRC you said you had an MCE USB device, so maybe

lirc-0.8.5/drivers/lirc_mceusb*/*[ch]

would interest you as well.


My particular gripes about the current LIRC interface:

1. The one thing that I wish were documented better is the distinction
between the LIRC_MODE_PULSE, LIRC_MODE_RAW, and LIRC_MODE2 modes of
operation. I think I've figured it out, but I had to look at a lot of
LIRC drivers to do so.

2. I have hardware where I can set max_pulse_width, so I can optimize
pulse timer resolution and have the hardware time out rapidly at the end of
Rx. I also have hardware where I can set a min_pulse_width to set a
hardware low-pass/glitch filter. Currently LIRC doesn't have any way to
set these, but it would be nice to have. In-band signalling of a
hardware-detected "end of Rx" may also make sense then too.

3. As I mentioned before, it would be nice if LIRC could set a batch of
parameters atomically somehow, instead of with a series of ioctl()s. I
can work around this in kernel though.


> How are capabilities exposed, sysfs, etc?
> What is the interface for attaching an in-kernel decoder?
> If there is an in-kernel decoder should the pulse data stop being
> reported, partially stopped, something else?
> What is the mechanism to make sure both system don't process the same pulses?

Mauro and Dmitry probably have better answers.

Regards,
Andy

2009-12-08 04:23:41

by Dmitry Torokhov

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Mon, Dec 07, 2009 at 09:44:14PM -0200, Mauro Carvalho Chehab wrote:
> Let me add my view for those questions.
>
> Jon Smirl wrote:
> > On Sun, Dec 6, 2009 at 3:34 PM, Krzysztof Halasa <[email protected]> wrote:
> >> Jon Smirl <[email protected]> writes:
> >>
> >>>> Once again: how about agreement about the LIRC interface
> >>>> (kernel-userspace) and merging the actual LIRC code first? In-kernel
> >>>> decoding can wait a bit, it doesn't change any kernel-user interface.
> >>> I'd like to see a semi-complete design for an in-kernel IR system
> >>> before anything is merged from any source.
> >> This is a way to nowhere, there is no logical dependency between LIRC
> >> and input layer IR.
> >>
> >> There is only one thing which needs attention before/when merging LIRC:
> >> the LIRC user-kernel interface. In-kernel "IR system" is irrelevant and,
> >> actually, making a correct IR core design without the LIRC merged can be
> >> only harder.
> >
> > Here's a few design review questions on the LIRC drivers that were posted....
> >
> > How is the pulse data going to be communicated to user space?
>
> lirc_dev will implement a revised version of the lirc API. I'm assuming that
> Jerod and Christoph will do this review, in order to be sure that it is stable
> enough for kernel inclusion (as proposed by Gerd).
>
> > Can the pulse data be reported via an existing interface without
> > creating a new one?
>
> Raw pulse data should be reported only via lirc_dev, but it can be converted
> into a keycode and reported via evdev as well, via an existing interface.
>
> > Where is the documentation for the protocol?
>
> I'm not sure what you're meaning here. I've started a doc about IR at the media
> docbook. This is currently inside the kernel Documents/DocBook. If you want
> to browse, it is also available as:
>
> http://linuxtv.org/downloads/v4l-dvb-apis/ch17.html
>
> For sure we need to better document the IR's, and explain the API's there.
>
> > Is it a device interface or something else?
>
> lirc_dev should create a device interface.
>
> > What about capabilities of the receiver, what frequencies?
> > If a receiver has multiple frequencies, how do you report what
> > frequency the data came in on?
>
> IMO, via sysfs.

We probably need to think about what exactly we report through sysfs, since it
is an ABI of sorts.

>
> > What about multiple apps simultaneously using the pulse data?
>
> IMO, the better is to limit the raw interface to just one open.
>

Why would we want to do this? Quite often there is a need for an "observer"
that maybe does not act on the data but allows capturing it. Single-user
interfaces are a PITA.

> > How big is the receive queue?
>
> It should be big enough to receive at least one keycode event. Considering that
> the driver will use kfifo (IMO, it is a good strategy, especially since you
> won't need any lock if just one open is allowed), it will require a power of two size.
>

Wouldn't it be either driver- or protocol-specific?

> > How does access work, root only or any user?
>
> IMO, it should be the same requirement as used by an input interface.
>
> > How are capabilities exposed, sysfs, etc?
>
> IMO, sysfs.
>
> > What is the interface for attaching an in-kernel decoder?
>
> IMO, it should use the kfifo for it. However, if we allow both raw data and
> in-kernel decoders to read data there, we'll need a spinlock to protect the
> kfifo.
>

I think Jon meant userspace interface for attaching particular decoder.

> > If there is an in-kernel decoder should the pulse data stop being
> > reported, partially stopped, something else?
>
> I don't have a strong opinion here, but, from the previous discussions, it
> seems that people want it to be double-reported by default. If so, I think
> we need to implement a command at the raw interface to allow disabling the
> in-kernel decoder, while the raw interface is kept open.

Why don't you simply let consumers decide where they will get their data?

>
> > What is the mechanism to make sure both system don't process the same pulses?
>
> I don't see a good way to avoid it.
>
> > Does it work with poll, epoll, etc?
> > What is the time standard for the data, where does it come from?
> > How do you define the start and stop of sequences?
> > Is receiving synchronous or queued?
> > What about transmit, how do you get pulse data into the device?
> > Transmitter frequencies?
> > Multiple transmitters?
> > Is transmitting synchronous or queued?
> > How big is the transmit queue?
>
> I don't have a clear answer for those. I'll let those to LIRC developers to answer.
>

--
Dmitry

2009-12-08 11:18:06

by Mauro Carvalho Chehab

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Jon Smirl wrote:
> On Mon, Dec 7, 2009 at 6:44 PM, Mauro Carvalho Chehab
> <[email protected]> wrote:

>>> Where is the documentation for the protocol?
>> I'm not sure what you're meaning here. I've started a doc about IR at the media
>
> What is the format of the pulse stream data coming out of the lirc device?

AFAIK, it is at:
http://www.lirc.org/html/index.html

It would be nice to add it to the DocBook after integrating the API into the kernel.

>> docbook. This is currently inside the kernel Documents/DocBook. If you want
>> to browse, it is also available as:
>>
>> http://linuxtv.org/downloads/v4l-dvb-apis/ch17.html
>>
>> For sure we need to better document the IR's, and explain the API's there.
>>
>>> Is it a device interface or something else?
>> lirc_dev should create a device interface.
>>
>>> What about capabilities of the receiver, what frequencies?
>>> If a receiver has multiple frequencies, how do you report what
>>> frequency the data came in on?
>> IMO, via sysfs.
>
> Say you have a hardware device with two IR diodes, one at 38K and one
> at 56K. Both of these receivers can get pulses. How do we tell the
> user space app which frequency the pulses were received on? Seems to
> me like there has to be a header on the pulse data indicating the
> received carrier frequency. There is also baseband signaling. sysfs
> won't work for this because of the queuing latency.

Simply create two interfaces, one for each IR receiver. In sysfs, you'll
have /sys/class/irrcv/irrcv0 for the first one and /sys/class/irrcv/irrcv1
for the second.
>
> How is over-run signaled to the app? You'd get an over-run if the app
> is too slow at reading the data out of the FIFO. If you ignore
> over-run you'll be processing bad data because part of the message was
> lost. An over-run signal tell the abort to abort the signal and start
> over.

The API should provide that info. Maybe it is already solved.

>>> What about multiple apps simultaneously using the pulse data?
>> IMO, the better is to limit the raw interface to just one open.
>>
>>> How big is the receive queue?
>> It should be big enough to receive at least one keycode event. Considering that
>> the driver will use kfifo (IMO, it is a good strategy, especially since you
>> won't need any lock if just one open is allowed), it will require a power of two size.
>
> How is end of a pulse train detected? timeout? without decoding the
> protocol there is no way to tell the end of signal other than timeout.

The API should provide that info. Provided that lirc works, I'm assuming that
this is already solved.

>>> How does access work, root only or any user?
>> IMO, it should be the same requirement as used by an input interface.
>>
>>> How are capabilities exposed, sysfs, etc?
>> IMO, sysfs.
>>
>>> What is the interface for attaching an in-kernel decoder?
>> IMO, it should use the kfifo for it. However, if we allow both raw data and
>> in-kernel decoders to read data there, we'll need a spinlock to protect the
>> kfifo.
>>
>>> If there is an in-kernel decoder should the pulse data stop being
>>> reported, partially stopped, something else?
>> I don't have a strong opinion here, but, from the previous discussions, it
>> seems that people want it to be double-reported by default. If so, I think
>> we need to implement a command at the raw interface to allow disabling the
>> in-kernel decoder, while the raw interface is kept open.
>
> Data could be sent to the in-kernel decoders first and then if they
> don't handle it, send it to user space.

Hmm... like adding a delay while the raw interface is open and, if the raw
userspace doesn't read all the pulse data, sending it via the in-kernel decoder
instead? This can work, but I'm not sure it is the best way, and it will
require some logic to synchronize the lirc_dev and IR core modules. Also, doing
it key by key will introduce some delay.

If you're afraid of the userspace app hanging and leaving no IR output,
it would be simpler to just close the raw interface if available data isn't
read within a longer timeout (3 seconds? 5 seconds?).

Cheers,
Mauro.

2009-12-08 11:23:30

by Mauro Carvalho Chehab

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Jon Smirl wrote:
> On Mon, Dec 7, 2009 at 1:41 PM, Dmitry Torokhov
> <[email protected]> wrote:
>> That is why I think we should go the other way around - introduce the
>> core which receivers could plug into and decoder framework and once it
>> is ready register lirc-dev as one of the available decoders.
>
> The core needs to allow for RF remotes too.
>
> -Bluetooth remotes are already in kernel somehow, I don't know how they work,
> -RF4CE, the 802.15.4 stack has been recently merged, the remotes use a
> protocol on top of that. These remotes will hit the consumer market
> next year. Sony, Panasonic and other big names are behind this.
> -Zwave, the Harmony remotes use Zwave. There is no Zwave support in
> the kernel that I am aware of. Zwave is proprietary.
>
> After these protocols are decoded you end up with scancodes. The
> scancodes need to get injected into input somehow and then flow
> through the mapping process. Decoding down to the scancodes probably
> happens over in the networking code.
>
> After an in-kernel IR decoder runs it needs to hand off the scancodes
> into the input subsystem. This same API can be used by the networking
> code to hand off RF scancodes.
>

Yes, the same core should be able to work with non-infrared remotes too,
depending on how the device is implemented.

Cheers,
Mauro.

2009-12-08 11:59:13

by Mauro Carvalho Chehab

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Dmitry Torokhov wrote:
> On Mon, Dec 07, 2009 at 09:44:14PM -0200, Mauro Carvalho Chehab wrote:

>>> What about capabilities of the receiver, what frequencies?
>>> If a receiver has multiple frequencies, how do you report what
>>> frequency the data came in on?
>> IMO, via sysfs.
>
> We probably need to think what exactly we report through sysfs since it
> is ABI of sorts.

Yes, sure.

The exact needs will probably only become clear once we actually start writing
that part of the core.

My intention for now is to just create a /sys/class/irrcv class, with one node
per IR receiver, add a protocol enumeration/selection node there, and add
hooks for the in-kernel decoders and lirc_dev to create new nodes under
that class.

When the decoders/lirc_dev patches appear, we'll need to review those sysfs
APIs.

>>> What about multiple apps simultaneously using the pulse data?
>> IMO, the better is to limit the raw interface to just one open.
>>
>
> Why would we want to do this? Quite often there is a need for an "observer"
> that maybe does not act on data but allows capturing it. Single-user
> interfaces are a PITA.

That should work fine as well, but I'm not sure how we'll detect overrun with
several kfifo readers.

>>> How big is the receive queue?
>> It should be big enough to receive at least one keycode event. Considering that
>> the driver will use kfifo (IMO, it is a good strategy, especially since you
>> won't need any lock if just one open is allowed), it will require a power of two size.
>>
>
> Wouldn't it be either driver- or protocol-specific?

Probably.

>
>>> How does access work, root only or any user?
>> IMO, it should be the same requirement as used by an input interface.
>>
>>> How are capabilities exposed, sysfs, etc?
>> IMO, sysfs.
>>
>>> What is the interface for attaching an in-kernel decoder?
>> IMO, it should use the kfifo for it. However, if we allow both raw data and
>> in-kernel decoders to read data there, we'll need a spinlock to protect the
>> kfifo.
>>
>
> I think Jon meant userspace interface for attaching particular decoder.

I don't think we need a userspace interface for the in-kernel decoders. All
that's needed is to enable/disable the protocol decoders, IMO via a sysfs interface.

>>> If there is an in-kernel decoder should the pulse data stop being
>>> reported, partially stopped, something else?
>> I don't have a strong opinion here, but, from the previous discussions, it
>> seems that people want it to be double-reported by default. If so, I think
>> we need to implement a command at the raw interface to allow disabling the
>> in-kernel decoder, while the raw interface is kept open.
>
> Why don't you simply let consumers decide where they will get their data?

How?

Cheers,
Mauro.

2009-12-08 13:34:25

by [email protected]

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Tue, Dec 8, 2009 at 6:17 AM, Mauro Carvalho Chehab
<[email protected]> wrote:
> Jon Smirl wrote:
>> On Mon, Dec 7, 2009 at 6:44 PM, Mauro Carvalho Chehab
>> <[email protected]> wrote:
>
>>>> Where is the documentation for the protocol?
>>> I'm not sure what you're meaning here. I've started a doc about IR at the media
>>
>> What is the format of the pulse stream data coming out of the lirc device?
>
> AFAIK, it is at:
> http://www.lirc.org/html/index.html
>
> It would be nice to add it to DocBook after integrating the API in kernel.
>

The point of those design review questions was to illustrate that the
existing LIRC system is only partially designed. Subsystems need to be
fully designed before they get merged.

For example 36-40K and 56K IR signals are both in use. It is a simple
matter to design a receiver (or buy two receivers) that would support
both these frequencies. But the current LIRC model only supports a
single IR receiver. Adjusting it to support two receivers is going to
break the ABI.

My choice would be to just tell the person with the 56K remote to just
buy a new 38K remote, but other people are against that choice. That
forces us into designing a system that can handle multiple receivers.
There is a parallel problem with baseband encoded IR signals.

We need to think about all of these use cases before designing the
ABI. Only after we think we have a good ABI design should code start
being merged. Of course we may make mistakes and have to fix the ABI,
but there is nothing to be gained by merging the existing ABI if we
already know it has problems.

--
Jon Smirl
[email protected]

2009-12-08 13:51:59

by Krzysztof Halasa

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Mauro Carvalho Chehab <[email protected]> writes:

>> What is the interface for attaching an in-kernel decoder?
>
> IMO, it should use the kfifo for it. However, if we allow both raw data and
> in-kernel decoders to read data there, we'll need a spinlock to protect the
> kfifo.

This may be an option, but I think we should be able to attach protocol
decoders in parallel, directly to the IRQ handler. At least with RC-5
(that's what I personally use) it means reliable decoding, no need for
any timeouts, the code is clean, fast (can be a part of hard IRQ
handler) and simple.

The decoder needs something like
rc5_signal_change(ptr, space_or_mark, microseconds).

At least mark->space or space->mark events must be reported. For better
reliability, both of them.
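The decoding scheme described above can be sketched in plain C. This is a hedged illustration, not code from any posted driver: the event structure, the function name, and the implicit leading half-space are assumptions, and real-world jitter tolerance and repeat handling are omitted.

```c
#include <assert.h>
#include <stddef.h>

/* One demodulated level: 1 = mark (IR seen), 0 = space. */
struct ir_event {
    int mark;
    unsigned int usec;   /* duration of this level */
};

#define RC5_HALF_BIT_US 889
#define RC5_NBITS       14

/*
 * Decode one RC-5 frame from a sequence of level changes.
 * Returns the 14-bit frame (start bits, toggle, address, command),
 * or -1 if the stream does not look like RC-5.
 */
static int rc5_decode(const struct ir_event *ev, size_t n)
{
    unsigned char slot[2 * RC5_NBITS] = { 0 };
    size_t pos = 1;   /* the first transmitted half-bit is a space that
                       * produces no edge, so assume one implicit space */

    for (size_t i = 0; i < n; i++) {
        /* round the measured duration to a whole number of half-bits */
        unsigned int k = (ev[i].usec + RC5_HALF_BIT_US / 2) / RC5_HALF_BIT_US;
        if (k == 0 || pos + k > 2 * RC5_NBITS)
            return -1;
        while (k--)
            slot[pos++] = (unsigned char)ev[i].mark;
    }
    if (pos != 2 * RC5_NBITS)
        return -1;

    /* Manchester pairing: the two halves of a bit must differ, and the
     * bit value is the level of the second half. */
    int code = 0;
    for (int b = 0; b < RC5_NBITS; b++) {
        if (slot[2 * b] == slot[2 * b + 1])
            return -1;
        code = (code << 1) | slot[2 * b + 1];
    }
    return code;
}
```

Because every signal change advances the state machine deterministically, no timeout is needed to finish a frame that ends in a mark, which is the property argued for here.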
--
Krzysztof Halasa

2009-12-08 13:54:05

by Krzysztof Halasa

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Jon Smirl <[email protected]> writes:

> Data could be sent to the in-kernel decoders first and then if they
> don't handle it, send it to user space.

Nope. It should be sent to all of them, they aren't dependent.
--
Krzysztof Halasa

2009-12-08 13:57:17

by Krzysztof Halasa

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Dmitry Torokhov <[email protected]> writes:

> Why woudl we want to do this? Quite often there is a need for "observer"
> that maybe does not act on data but allows capturing it. Single-user
> inetrfaces are PITA.

Lircd can work as a multiplexer. IMHO single-open lirc interface is ok,
though we obviously need simultaneous operation of in-kernel decoders.
--
Krzysztof Halasa

2009-12-08 14:01:26

by Krzysztof Halasa

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Mauro Carvalho Chehab <[email protected]> writes:

> I don't think we need an userspace interface for the in-kernel
> decoders.

Of course we need it, to set (and probably retrieve) scancode-keycode
mappings. This could probably be, ATM, the existing input layer channel.

> All
> it needs is to enable/disable the protocol decoders, imo via sysfs interface.

This isn't IMHO needed at all. The protocol is enabled when at least one
key using it is configured, otherwise it's disabled. We probably need
some "wildcard" as well, to capture decoded scancodes (through the input
layer).
This is BTW pure optimization, the protocol could stay enabled all the
time, only wasting the cycles.
--
Krzysztof Halasa

2009-12-08 14:06:30

by Krzysztof Halasa

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Mauro Carvalho Chehab <[email protected]> writes:

> Yes, an opaque type for scancode at the userspace API can be better, but
> passing a pointer to kernel will require some compat32 logic (as pointer
> size is different on 32 and 64 bits).

Yes. I think we can't avoid that, but it's a single compat handler,
I wouldn't worry about it too much. We don't need it in every driver
fortunately.

> We may use something like an u8[] with an arbitrary large number of
> bytes.

Yes. All of this pointed to by the pointer.

> In this case, we need to take some care to avoid LSB/MSB troubles.

Sure.
--
Krzysztof Halasa

2009-12-08 14:07:59

by Mauro Carvalho Chehab

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Krzysztof Halasa wrote:
> Mauro Carvalho Chehab <[email protected]> writes:
>
>>> What is the interface for attaching an in-kernel decoder?
>> IMO, it should use the kfifo for it. However, if we allow both raw data and
>> in-kernel decoders to read data there, we'll need a spinlock to protect the
>> kfifo.
>
> This may be an option, but I think we should be able to attach protocol
> decoders in parallel, directly to the IRQ handler. At least with RC-5
> (that's what I personally use) it means reliable decoding, no need for
> any timeouts, the code is clean, fast (can be a part of hard IRQ
> handler) and simple.
>
> The decoder needs something like
> rc5_signal_change(ptr, space_or_mark, microseconds).
>
> At least mark->space or space->mark events must be reported. For better
> reliability, both of them.

If you use a kfifo to store the event (space_or_mark, timestamp),
the IRQ handler can return immediately, and a separate kernel thread
can do the decoding without needing to touch the IRQ handler. It also helps to
keep the decoders independent of the individual kernel drivers.
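A kfifo-style single-producer/single-consumer ring with a power-of-two size can be sketched in plain C as follows. The names are illustrative and the real kernel kfifo API differs; in particular, genuinely concurrent IRQ/thread use also needs memory barriers, which this userspace sketch omits.

```c
#include <assert.h>

/* One demodulated level change, as queued by the IRQ handler. */
struct ir_sample {
    unsigned char mark;   /* 1 = mark, 0 = space */
    unsigned int usec;    /* duration in microseconds */
};

#define FIFO_SIZE 256     /* must be a power of two, as with kfifo */

/*
 * Single-producer/single-consumer ring: with exactly one writer (the IRQ
 * handler) and one reader (the decoder thread), no lock is needed, since
 * each index is advanced by only one side.
 */
struct ir_fifo {
    struct ir_sample buf[FIFO_SIZE];
    unsigned int head;    /* advanced only by the producer */
    unsigned int tail;    /* advanced only by the consumer */
};

/* Called from the IRQ handler: O(1), never blocks.  Returns 0 on overrun. */
static int ir_fifo_put(struct ir_fifo *f, struct ir_sample s)
{
    if (f->head - f->tail >= FIFO_SIZE)
        return 0;         /* full: drop the sample, caller flags overrun */
    f->buf[f->head & (FIFO_SIZE - 1)] = s;
    f->head++;
    return 1;
}

/* Called from the decoder kthread.  Returns 0 when the fifo is empty. */
static int ir_fifo_get(struct ir_fifo *f, struct ir_sample *s)
{
    if (f->head == f->tail)
        return 0;
    *s = f->buf[f->tail & (FIFO_SIZE - 1)];
    f->tail++;
    return 1;
}
```

The power-of-two size makes the index wrap a single AND operation, which is why the earlier messages insist on it.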

Cheers,
Mauro.

2009-12-08 14:13:33

by Mauro Carvalho Chehab

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Krzysztof Halasa wrote:
> Mauro Carvalho Chehab <[email protected]> writes:
>
>> I don't think we need an userspace interface for the in-kernel
>> decoders.
>
> Of course we need it, to set (and probably retrieve) scancode-keycode
> mappings. This could probably be, ATM, the existing input layer channel.

This is the standard/extended evdev interface already discussed.
>
>> All
>> it needs is to enable/disable the protocol decoders, imo via sysfs interface.
>
> This isn't IMHO needed at all. The protocol is enabled when at least one
> key using it is configured, otherwise it's disabled. We probably need
> some "wildcard" as well, to capture decoded scancodes (through the input
> layer).
> This is BTW pure optimization, the protocol could stay enabled all the
> time, only wasting the cycles.

The protocol decoder enable/disable interface is needed anyway, due to the
needs of the hardware IR decoders (and the legacy IR decoders, until we
convert all of them to the generic ones).

So, IMO, the interface will be cleaner if we use it also for the generic
IR decoders.

Cheers,
Mauro.

2009-12-08 14:51:16

by [email protected]

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Tue, Dec 8, 2009 at 9:07 AM, Mauro Carvalho Chehab
<[email protected]> wrote:
> Krzysztof Halasa wrote:
>> Mauro Carvalho Chehab <[email protected]> writes:
>>
>>>> What is the interface for attaching an in-kernel decoder?
>>> IMO, it should use the kfifo for it. However, if we allow both raw data and
>>> in-kernel decoders to read data there, we'll need a spinlock to protect the
>>> kfifo.
>>
>> This may be an option, but I think we should be able to attach protocol
>> decoders in parallel, directly to the IRQ handler. At least with RC-5
>> (that's what I personally use) it means reliable decoding, no need for
>> any timeouts, the code is clean, fast (can be a part of hard IRQ
>> handler) and simple.
>>
>> The decoder needs something like
>>      rc5_signal_change(ptr, space_or_mark, microseconds).
>>
>> At least mark->space or space->mark events must be reported. For better
>> reliability, both of them.
>
> If you use a kfifo to store the event (space_or_mark, timestamp),
> the IRQ handler can return immediately, and a separate kernel thread
> can do the decoding without needing to touch the IRQ handler. It also helps to
> keep the decoders independent of the individual kernel drivers.

The first version of my code ran the decoders from the IRQ. That
wasn't a good model for sharing decoders between drivers. So I
switched to using a kernel thread. There is also the problem of
handing decoded events off up the chain. You can't do that from IRQ
context.

If I remember correctly the kernel thread would run approximately two
times per IR message received. But sometimes it would only run once.
It's a random function of the load on the system. The kernel thread
empties the FIFO and sends the pulses in parallel to the decoders.

Code for doing this is in the patches I posted. I wasn't aware of
kfifo when I wrote them so I coded my own fifo.
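The drain-and-dispatch step described here, where the kernel thread feeds every queued pulse to all decoders in parallel, can be sketched roughly like this. Userspace C for illustration only; the structure and callback names are invented, not taken from the posted patches.

```c
#include <assert.h>
#include <stddef.h>

/*
 * A registered protocol decoder.  Every decoder sees the full sample
 * stream and keeps its own state; a decoder that fails to match simply
 * resets itself.  (Names are invented for illustration.)
 */
struct ir_decoder {
    const char *name;
    void *priv;     /* decoder-private state */
    void (*sample)(struct ir_decoder *self, int mark, unsigned int usec);
};

/* Drain one batch of samples, feeding each one to all decoders. */
static void ir_dispatch(struct ir_decoder **dec, size_t ndec,
                        const int *mark, const unsigned int *usec, size_t n)
{
    for (size_t i = 0; i < n; i++)
        for (size_t d = 0; d < ndec; d++)
            dec[d]->sample(dec[d], mark[i], usec[i]);
}

/* Trivial decoder stub that just counts samples (stands in for RC-5 etc.). */
static void count_sample(struct ir_decoder *self, int mark, unsigned int usec)
{
    (void)mark;
    (void)usec;
    *(int *)self->priv += 1;
}
```

Since each decoder holds its own state, the thread can deliver the same sample to every decoder without any cross-decoder coordination.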

--
Jon Smirl
[email protected]

2009-12-08 14:57:33

by Mauro Carvalho Chehab

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Jon Smirl wrote:
> On Tue, Dec 8, 2009 at 6:17 AM, Mauro Carvalho Chehab
> <[email protected]> wrote:
>> Jon Smirl wrote:
>>> On Mon, Dec 7, 2009 at 6:44 PM, Mauro Carvalho Chehab
>>> <[email protected]> wrote:
>>>>> Where is the documentation for the protocol?
>>>> I'm not sure what you're meaning here. I've started a doc about IR at the media
>>> What is the format of the pulse stream data coming out of the lirc device?
>> AFAIK, it is at:
>> http://www.lirc.org/html/index.html
>>
>> It would be nice to add it to DocBook after integrating the API in kernel.
>>
>
> The point of those design review questions was to illustrate that the
> existing LIRC system is only partially designed.

Ah, ok.

> Subsystems need to be
> fully designed before they get merged.

True.

> For example 36-40K and 56K IR signals are both in use. It is a simple
> matter to design a receiver (or buy two receivers) that would support
> both these frequencies. But the current LIRC model only supports a
> single IR receiver. Adjusting it to support two receivers is going to
> break the ABI.
>
> My choice would be to just tell the person with the 56K remote to just
> buy a new 38K remote, but other people are against that choice. That
> forces us into designing a system that can handle multiple receivers.
> There is a parallel problem with baseband encoded IR signals.
>
> We need to think about all of these use cases before designing the
> ABI. Only after we think we have a good ABI design should code start
> being merged. Of course we may make mistakes and have to fix the ABI,
> but there is nothing to be gained by merging the existing ABI if we
> already know it has problems.

I have here machines with 3 or 4 IR receivers (although, as a matter of fact,
I don't use all of them at the same time). Nothing stops someone from using
all IR receivers at the same time.

I've seen some interesting applications developed for students, where just
one computer running Linux is split into 4 different consoles. Nothing
stops you, in such setups, from having 4 different IR transceivers working
at the same time.

In order to keep supporting the in-kernel drivers, we should create one lirc
device per IR transceiver.

Cheers,
Mauro.

2009-12-08 15:24:40

by Krzysztof Halasa

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Dmitry Torokhov <[email protected]> writes:

> No, the IR core responsible for registering receivers and decoders.

Well. This makes me think now that LIRC can be just "another decoder".

>> Those are simple things. The only part which needs to be stable is the
>> (in this case LIRC) kernel-user interface.
>
> For which some questions are still open. I believe Jon just outlined some
> of them.

Those are rather "how does it work", not "let's change something because
it's not optimal".

> No we do not. We do not merge something that we expect to rework almost
> completely (no, not the lirc-style device userspace interface, although
> even it is not completely finalized I believe, but the rest of the
> subsystem).

I don't think we need to rework it almost completely. Perhaps we need to
change a hook here or there.

> No, not at all. You merge core subsystem code, then start adding
> decoders...

You must have at least one common decoder merged with the core code,
otherwise you don't know if the core is adequate. And you have to have
at least one common input device.

But perhaps it is a workable idea after all, given the "another
decoder".
--
Krzysztof Halasa

2009-12-08 15:26:54

by Krzysztof Halasa

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Mauro Carvalho Chehab <[email protected]> writes:

> The protocol decoder enable/disable interface is needed anyway,
> due to the needs of the hardware IR decoders

Why do they need it exactly?
The key tables say all they need I hope?
--
Krzysztof Halasa

2009-12-08 15:29:45

by Krzysztof Halasa

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Mauro Carvalho Chehab <[email protected]> writes:

> If you use a kfifo to store the event (space_or_mark, timestamp),
> the IRQ handler can return immediately, and a separate kernel thread
> can do the decode without needing to touch at the IRQ.

But the decoding itself is a really simple thing, why complicate it?
There is no need for the kernel thread if the handler is fast (and it
is).

Userspace is obviously different.
--
Krzysztof Halasa

2009-12-08 15:41:50

by Mauro Carvalho Chehab

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Krzysztof Halasa wrote:
> Mauro Carvalho Chehab <[email protected]> writes:
>
>> The protocol decoder enable/disable interface is needed anyway,
>> due to the needs of the hardware IR decoders
>
> Why do they need it exactly?
> The key tables say all they need I hope?

You can't upload a key for an unsupported protocol. Also, since hardware
decoders in general can't decode multiple protocols at the same time, there
has to be a way to select which protocol will be decoded.

So, userspace needs to:
- retrieve the list of supported protocols;
- identify whether a given IR receiver supports multiple protocols;
- select which protocol decoder(s) will be enabled.
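As a rough sketch of how such a sysfs node might behave: the `protocols` node name and the bracket convention for marking the active decoder are illustrative assumptions, not an existing ABI, so this exercises them against a mock directory rather than real hardware.

```shell
# Mock the proposed tree in a temp dir; no real hardware or kernel support
# is assumed. The node name "protocols" and the bracket convention for the
# active decoder are illustrative guesses, not an existing ABI.
SYSFS=$(mktemp -d)
mkdir -p "$SYSFS/irrcv0"
echo "rc5 [nec] rc6" > "$SYSFS/irrcv0/protocols"

# 1) retrieve the list of supported protocols
cat "$SYSFS/irrcv0/protocols"

# 2) the receiver is multi-protocol if more than one name is listed
count=$(tr -d '[]' < "$SYSFS/irrcv0/protocols" | wc -w)
[ "$count" -gt 1 ] && echo "multi-protocol receiver"

# 3) select another decoder by writing its name back (the mock just
#    rewrites the file; a real driver would move the brackets itself)
echo "[rc5] nec rc6" > "$SYSFS/irrcv0/protocols"
```

Plain cat/echo is all that's needed, which matches the "no special tools" goal stated at the start of the thread.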

Cheers,
Mauro.

2009-12-08 15:49:53

by Mauro Carvalho Chehab

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Krzysztof Halasa wrote:
> Mauro Carvalho Chehab <[email protected]> writes:
>
>> If you use a kfifo to store the event (space_or_mark, timestamp),
>> the IRQ handler can return immediately, and a separate kernel thread
>> can do the decode without needing to touch at the IRQ.
>
> But the decoding itself is a really simple thing, why complicate it?
> There is no need for the kernel thread if the handler is fast (and it
> is).

The decoding of just one protocol may be fast, but having several decoders
serialized (without kthreads, you're serializing the decoders) will possibly
not be that fast.

Also, you don't need to wake the decoder kthreads for every event; you can wait
for some number of events to happen before waking them. For example,
16 pulse/space events correspond to 8 bits of data in most protocols,
so you can wake the kthread only after 16 events for really simple decoders,
or when a timeout event is detected. The number of events needed to wake it
may be customized per decoder.

Cheers,
Mauro.

2009-12-08 16:26:12

by [email protected]

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Tue, Dec 8, 2009 at 10:49 AM, Mauro Carvalho Chehab
<[email protected]> wrote:
> Krzysztof Halasa wrote:
>> Mauro Carvalho Chehab <[email protected]> writes:
>>
>>> If you use a kfifo to store the event (space_or_mark, timestamp),
>>> the IRQ handler can return immediately, and a separate kernel thread
>>> can do the decode without needing to touch at the IRQ.
>>
>> But the decoding itself is a really simple thing, why complicate it?
>> There is no need for the kernel thread if the handler is fast (and it
>> is).
>
> The decoding of just one protocol may be fast, but having several decoders
> serialized (without kthreads, you're serializing the decoders) will possibly
> not be that fast.
>
> Also, you don't need to wake the decoder kthreads for every event, but wait

Just wake the default kthread on each event. If you wake the default
thread multiple times it is the same as waking it once.

The default kthread doesn't schedule very fast. If you get 120 events
and call wake 120 times, the thread is only going to visit your driver
one or two times, not 120.

> for some number of events to happen before waking it. For example,
> 16 pulse/space events correspond to 8 bits of data on most protocols,
> so you can wake the kthread only after 16 events for really simple decoders,
> or if a timeout event is detected. The number of events to wake may be customized
> per decoder.
>
> Cheers,
> Mauro.
>



--
Jon Smirl
[email protected]

2009-12-08 17:04:18

by Dmitry Torokhov

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Tue, Dec 08, 2009 at 09:17:42AM -0200, Mauro Carvalho Chehab wrote:
> Jon Smirl wrote:
> > On Mon, Dec 7, 2009 at 6:44 PM, Mauro Carvalho Chehab
> > <[email protected]> wrote:
>
> >>> Where is the documentation for the protocol?
> >> I'm not sure what you're meaning here. I've started a doc about IR at the media
> >
> > What is the format of the pulse stream data coming out of the lirc device?
>
> AFAIK, it is at:
> http://www.lirc.org/html/index.html
>
> It would be nice to add it to DocBook after integrating the API in kernel.
>
> >> docbook. This is currently inside the kernel Documents/DocBook. If you want
> >> to browse, it is also available as:
> >>
> >> http://linuxtv.org/downloads/v4l-dvb-apis/ch17.html
> >>
> >> For sure we need to better document the IR's, and explain the API's there.
> >>
> >>> Is it a device interface or something else?
> >> lirc_dev should create a device interface.
> >>
> >>> What about capabilities of the receiver, what frequencies?
> >>> If a receiver has multiple frequencies, how do you report what
> >>> frequency the data came in on?
> >> IMO, via sysfs.
> >
> > Say you have a hardware device with two IR diodes, one at 38K and one
> > at 56K. Both of these receivers can get pulses. How do we tell the
> > user space app which frequency the pulses were received on? Seems to
> > me like there has to be a header on the pulse data indicating the
> > received carrier frequency. There is also baseband signaling. sysfs
> > won't work for this because of the queuing latency.
>
> Simply create two interfaces, one for each IR receiver. In sysfs, you'll
> have /sys/class/irrcv/irrcv0 for the first one and /sys/class/irrcv/irrcv1
> for the second.

Yes, please. Distinct hardware - distinct representation in the kernel.
This is the most sane way.

...
> >>
> >>> What is the interface for attaching an in-kernel decoder?
> >> IMO, it should use the kfifo for it. However, if we allow both raw data and
> >> in-kernel decoders to read data there, we'll need a spinlock to protect the
> >> kfifo.

Probably we should do what the input layer does: the data is pushed into
all handlers that are signed up for it and they can deal with it at
their leisure.

> >>
> >>> If there is an in-kernel decoder should the pulse data stop being
> >>> reported, partially stopped, something else?
> >> I don't have a strong opinion here, but, from the previous discussions, it
> >> seems that people want it to be double-reported by default. If so, I think
> >> we need to implement a command at the raw interface to allow disabling the
> >> in-kernel decoder, while the raw interface is kept open.
> >
> > Data could be sent to the in-kernel decoders first and then if they
> > don't handle it, send it to user space.

You do not know what userspace wants to do with the data. They may want
to simply observe it, store it, or do something else. Since we do provide an
interface for such raw[ish] data, we just need to transmit it to
userspace as long as there are users (i.e. the interface is open).

>
> Hmm... like adding a delay if the raw userspace interface is open and, if the
> userspace app doesn't read all pulse data, sending it via the in-kernel decoder
> instead? This can work, but I'm not sure it's the best way, and it will require
> some logic to synchronize the lirc_dev and IR core modules. Also, doing it key
> by key will introduce some delay.
>
> If you're afraid of having the userspace app hanged and having no IR output,
> it would be simpler to just close the raw interface if available data isn't
> read within a longer timeout (3 seconds? 5 seconds?).

We can not foresee all use cases. Just let all parties signed up for the
data get and process it, do not burden the core with heuristics.

--
Dmitry

2009-12-08 17:12:39

by Dmitry Torokhov

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Tue, Dec 08, 2009 at 09:58:53AM -0200, Mauro Carvalho Chehab wrote:
> Dmitry Torokhov wrote:
> > On Mon, Dec 07, 2009 at 09:44:14PM -0200, Mauro Carvalho Chehab wrote:
>
> >>> What about capabilities of the receiver, what frequencies?
> >>> If a receiver has multiple frequencies, how do you report what
> >>> frequency the data came in on?
> >> IMO, via sysfs.
> >
> > We probably need to think what exactly we report through sysfs siunce it
> > is ABI of sorts.
>
> Yes, sure.
>
> The exact needs will probably only become clear once we actually start writing
> that part of the core.
>
> My intention for now is to just create a /sys/class/irrcv class, with one node
> per IR receiver, add a protocol enumeration/selection node there, and add
> hooks for the in-kernel decoders and lirc_dev to create new nodes under
> that class.
>
> When the decoders/lirc_dev patches appear, we'll need to review those sysfs
> APIs.
>
> >>> What about multiple apps simultaneously using the pulse data?
> >> IMO, the better is to limit the raw interface to just one open.
> >>
> >
> > Why would we want to do this? Quite often there is a need for an "observer"
> > that maybe does not act on data but allows capturing it. Single-user
> > interfaces are a PITA.
>
> That should work fine as well, but I'm not sure how we'll detect overrun with
> several kfifo readers.
>

Push the data into the readers so they can do the decoding at their own pace.
Some can do it in interrupt context, some will need a workqueue/thread.
They can also regulate the depth of the buffer according to their
needs.

> >>> How big is the receive queue?
> >> It should be big enough to receive at least one keycode event. Considering that
> >> the driver will use kfifo (IMO, it is a good strategy, especially since you
> >> won't need any lock if just one open is allowed), it will require a power of two size.
> >>
> >
> > Would not it be wither driver- or protocol-specific?
>
> Probably.
>
> >
> >>> How does access work, root only or any user?
> >> IMO, it should be the same requirement as used by an input interface.
> >>
> >>> How are capabilities exposed, sysfs, etc?
> >> IMO, sysfs.
> >>
> >>> What is the interface for attaching an in-kernel decoder?
> >> IMO, it should use the kfifo for it. However, if we allow both raw data and
> >> in-kernel decoders to read data there, we'll need a spinlock to protect the
> >> kfifo.
> >>
> >
> > I think Jon meant userspace interface for attaching particular decoder.
>
> I don't think we need a userspace interface for the in-kernel decoders. All
> that's needed is to enable/disable the protocol decoders, IMO via a sysfs interface.
>
> >>> If there is an in-kernel decoder should the pulse data stop being
> >>> reported, partially stopped, something else?
> >> I don't have a strong opinion here, but, from the previous discussions, it
> >> seems that people want it to be double-reported by default. If so, I think
> >> we need to implement a command at the raw interface to allow disabling the
> >> in-kernel decoder, while the raw interface is kept open.
> >
> > Why don't you simply let consumers decide where they will get their data?
>
> How?
>

You end up with N evdev devices. Let the application (MythTV) say "I want to
use /dev/input/event1" (well, it will need a persistent udev rule, but
that's a detail). Another application will choose another event node.
The user can decide she'd rather use lircd, and so configure applications
to use event5. And maybe turn off the in-kernel decoders if they are
of no use and there is a concern that they consume too many resources.

Won't this work?

--
Dmitry

2009-12-08 17:25:47

by Dmitry Torokhov

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Tue, Dec 08, 2009 at 02:57:15PM +0100, Krzysztof Halasa wrote:
> Dmitry Torokhov <[email protected]> writes:
>
> > Why woudl we want to do this? Quite often there is a need for "observer"
> > that maybe does not act on data but allows capturing it. Single-user
> > inetrfaces are PITA.
>
> Lircd can work as a multiplexer.

What does this have to do with my statement? Did you mean a retransmitter of sorts?

Also, I may explicitly not want the data stream to be multiplexed...

> IMHO single-open lirc interface is ok,
> though we obviously need simultaneous operation of in-kernel decoders.

Why the distinction?

--
Dmitry

2009-12-08 22:34:09

by lirc

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Hi Jon,

on 08 Dec 09 at 08:34, Jon Smirl wrote:
[...]
> The point of those design review questions was to illustrate that the
> existing LIRC system is only partially designed. Subsystems need to be
> fully designed before they get merged.

I'd say that a system that has proven itself in real-world applications
for more than 10 years does not deserve to be called partially designed.

> For example 36-40K and 56K IR signals are both in use. It is a simple
> matter to design a receiver (or buy two receivers) that would support
> both these frequencies. But the current LIRC model only supports a
> single IR receiver. Adjusting it to support two receivers is going to
> break the ABI.

Really? When we added support for multiple transmitters, we somehow
managed to do so without breaking the ABI. Am I missing something?

Your example could even now be solved by using the LIRC_SET_REC_CARRIER
ioctl. The driver would have to choose the receiver that best fits the
requested frequency.

[...]
> We need to think about all of these use cases before designing the
> ABI. Only after we think we have a good ABI design should code start
> being merged. Of course we may make mistakes and have to fix the ABI,
> but there is nothing to be gained by merging the existing ABI if we
> already know it has problems.

The point is that we did not get up this morning and start thinking
about how the LIRC interface should look. That happened 10 years ago.

I'm not saying that the interface is the nicest thing ever invented, but
it works and is extendable. If you see that something is missing please
bring it up.

Christoph

2009-12-08 22:33:58

by lirc

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Hi Dmitry,

on 06 Dec 09 at 23:51, Dmitry Torokhov wrote:
[...]
>>> I suppose we could add MSC_SCAN_END event so that we can transmit
>>> "scancodes" of arbitrary length. You'd get several MSC_SCAN followed by
>>> MSC_SCAN_END marker. If you don't get MSC_SCAN_END assume the code is 32
>>> bit.
>>
>> And I set a timeout to know that no MSC_SCAN_END will arrive? This is
>> broken design IMHO.
>>

> EV_SYN signals the end of state transmission.

>> Furthermore lircd needs to know the length of the scan code in bits, not
>> as a multiple of 32.

> I really do not think that LIRCD is the type of application that should
> be using evdev interface, but rather other way around.

Well, all I'm asking is that lircd can keep using the LIRC interface for
getting the scan codes. ;-)

Christoph

2009-12-08 22:34:14

by lirc

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Hi Andy,

on 07 Dec 09 at 23:10, Andy Walls wrote:
[...]
> (Christoph can correct me if I get anything wrong.)

Just a few additions.

[...]
>> What is the time standard for the data, where does it come from?

> I think it is usec, IIRC.

Yes, it is.

> I know that the hardware I work with has sub 100 ns resolution,

The highest IR carrier frequency I know of is 500 kHz. usec resolution is
enough even for raw modulated IR pulses. But you only look at the signal
after it has been demodulated by the IR chip, so higher resolution would be
overkill.

[...]
>> How do you define the start and stop of sequences?

> For the end of Rx signalling:
>
> Well with the Conexant hardware I can set a maximum pulse (mark or
> space) width, and the hardware will generate an Rx Timeout interrupt to
> signal the end of Rx when a space ends up longer than that max pulse
> width. The hardware also puts a special marker in the hardware pulse
> width measurement FIFO (in band signalling essentially).
>
> I'm not sure anything like that gets communicated to userspace via
> lirc_dev, and I'm too tired to doublecheck right now.

There is no such thing in the protocol. Some devices cannot provide any
end of signal marker, so lircd handles this using timers.

If there is some interest, the MODE2 protocol can be extended. We still
have 7 bits unused...

> If you have determined the protocol you are after, it's easy to know
> what the pulse count should be and what the max pulse width should be (+
> slop for crappy hardware) so finding the end of an Rx isn't hard. The
> button repeats intervals are *very* large. I've never seen a remote
> rapid fire codes back to back.

I did. There are some protocols that have a gap of only 6000 us between
signals. And the settop boxes are very picky about this. If you make it
too long, they won't accept the command.

[...]
>> Is transmitting synchronous or queued?

> kfifo's IIRC.

No, it's synchronous.

>> How big is the transmit queue?

No queue.

[...]
> My particular gripes about the current LIRC interface:
>
> 1. The one thing that I wish were documented better were the distinction
> between LIRC_MODE_PULSE, LIRC_MODE_RAW, and LIRC_MODE2 modes of
> operation. I think I've figured it out, but I had to look at a lot of
> LIRC drivers to do so.

No driver uses RAW so far, and lircd does not support it.
PULSE is used on the transmit path, MODE2 on the receive path.
There is no special reasoning for that, it's rather historic.
MODE2 makes sense on the receive path because you can easily distinguish
between pulse/space.

> 2. I have hardware where I can set max_pulse_width so I can optimize
> pulse timer resolution and have the hardware time out rapidly on end of
> RX. I also have hardware where I can set a min_pulse_width to set a
> hardware low-pass/glitch filter. Currently LIRC doesn't have any way to
> set these, but it would be nice to have.

Should be really easy to add these. The actual values could be derived
from the config files easily.

> In band signalling of a
> hardware detected "end of Rx" may also make sense then too.

See above.

> 3. As I mentioned before, it would be nice if LIRC could set a batch of
> parameters atomically somehow, instead of with a series of ioctl()s. I
> can work around this in kernel though.

Is there any particular sequence that you are concerned about?
Setting carrier frequency and then duty cycle is a bit problematic.
Currently it's solved by resetting the duty cycle to 50% each time you
change the carrier frequency.
But as the LIRC interface is "one user only", I don't see a real problem.

Christoph

2009-12-09 02:23:10

by Andy Walls

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Tue, 2009-12-08 at 23:30 +0100, Christoph Bartelmus wrote:
> Hi Andy,
>
> on 07 Dec 09 at 23:10, Andy Walls wrote:
> [...]
> > (Christoph can correct me if I get anything wrong.)
>
> Just a few additions.

Christoph,

Thanks for the corrections and additions. :)

> [...]

> > I know that the hardware I work with has sub 100 ns resolution,
>
> The highest IR carrier frequency I know of is 500 kHz. usec resolution is enough
> even for raw modulated IR pulses. But you only look at the signal after it
> has been demodulated by the IR chip, so higher resolution would be
> overkill.

Yes, it's overkill. It is more of a side effect of how I set up the
hardware to use as many of the bits in the pulse width measurement
counter as possible for the longest expected valid measurement width.
The LSB of the hardware pulse width measurement counter can convey a
time change of as little as 74 ns depending on the setup of the Conexant
integrated IR controller.


> [...]
> >> How do you define the start and stop of sequences?
>
> > For the end of Rx signalling:
> >
> > Well with the Conexant hardware I can set a maximum pulse (mark or
> > space) width, and the hardware will generate an Rx Timeout interrupt to
> > signal the end of Rx when a space ends up longer than that max pulse
> > width. The hardware also puts a special marker in the hardware pulse
> > width measurement FIFO (in band signalling essentially).
> >
> > I'm not sure anything like that gets communicated to userspace via
> > lirc_dev, and I'm too tired to doublecheck right now.
>
> There is no such thing in the protocol. Some devices cannot provide any
> end of signal marker, so lircd handles this using timers.
>
> If there is some interest, the MODE2 protocol can be extended. We still
> have 7 bits unused...

As I thought about this more, I could just pass up a space the length of
the pulse width measurement timeout from the kernel up to LIRC. LIRC's
decoders should know that the space is too long as well. No changes
needed - I think.




> [...]
> >> Is transmitting synchronous or queued?
>
> > kfifo's IIRC.
>
> No, it's synchronous.
>
> >> How big is the transmit queue?
>
> No queue.

Oops, thanks for the correction.



> [...]
> > My particular gripes about the current LIRC interface:

> > 2. I have hardware where I can set max_pulse_width so I can optimize
> > pulse timer resolution and have the hardware time out rapidly on end of
> > RX. I also have hardware where I can set a min_pulse_width to set a
> > hardware low-pass/glitch filter. Currently LIRC doesn't have any way to
> > set these, but it would be nice to have.
>
> Should be really easy to add these. The actual values could be derived
> from the config files easily.

Good. I thought it would be so.

> > In band signalling of a
> > hardware detected "end of Rx" may also make sense then too.
>
> See above.
>
> > 3. As I mentioned before, it would be nice if LIRC could set a batch of
> > parameters atomically somehow, instead of with a series of ioctl()s. I
> > can work around this in kernel though.
>
> Is there any particular sequence that you are concerned about?
> Setting carrier frequency and then duty cycle is a bit problematic.
> Currently it's solved by resetting the duty cycle to 50% each time you
> change the carrier frequency.
> But as the LIRC interface is "one user only", I don't see a real problem.

The case I worry about is enabling the IR Rx hardware without the low
pass filter properly set up to be consistent with the minimum expected
Rx pulse width and the desired Rx carrier window or maximum expected Rx
pulse width. The result could be a lot of useless interrupts from IR
"glitch" measurements in bad ambient light conditions until all the
parameters are consistent.

Regards,
Andy

> Christoph

2009-12-13 02:31:55

by Pavel Machek

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sat 2009-11-28 21:21:57, Krzysztof Halasa wrote:
> Jon Smirl <[email protected]> writes:
>
> > We have one IR receiver device and multiple remotes. How does the
> > input system know how many devices to create corresponding to how many
> > remotes you have? There is no current mechanism to do that. You need
> > an input device for each remote so that you can do the EVIOCSKEYCODE
> > against it. Some type of "create subdevice" IOCTL will need to be
> > built.
>
> Thinking about it, I'm not sure. Why do we want multiple remote devices?
> (not multiple remotes, that's clear).

Computer in the basement, one ir receiver + usb speakers in kitchen,
second ir receiver + speakers + monitor in living room.
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

2009-12-13 01:17:39

by David Lang

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sun, 6 Dec 2009, Krzysztof Halasa wrote:

> Andy Walls <[email protected]> writes:
>
>> Yes, I agree. I do not know what percentage of current Linux users are
>> technical vs non-technical, so I cannot gauge the current importance.
>>
>> I can see the trend line though: as time goes by, the percentage of all
>> linux users that have a technical bent will only get smaller.
>
> This IMHO shouldn't matter. If users can configure their keymaps for
> e.g. games with a graphical utility (and they easily can), they can do
> the same with their remotes, at least with these using common sane
> protocols. The only thing needed is a good GUI utility. Ergo - it's not
> a kernel issue.
>
> The "default bundled", or PnP, won't work well in comparison to a GUI
> utility, I wouldn't worry about it too much (though adding it to udev
> and co is trivial and we should do it - even if not PnP but asking first
> about the actual remote used).

how is this problem any different from figuring out the keymap of a
keyboard?

there are many defined keymaps (including cases where keys are labeled
differently on the keyboard but send identical codes)

currently in linux distros the user can either select the keymap, or the
installer will ask the user to press specific keys (or indicate that they
don't exist) until the installer can guess the keymap to use.

why would this not work for IR remotes as well?

and just like linux has some default keymaps that it uses that mostly work
for the common case, there could be default IR keymaps that map the common
keys for all remotes to the appropriate keycodes. it will mean that by
default you won't see a difference between a DVD, VCR, DVR, etc play
button, but it will mean that someone picking up a random remote and
pointing it at the linux box will probably get minimal functionality.

then with a utility to tweak the keymap (or load a more specific one) the
user can do better.

this would also integrate very nicely with the 'multimedia keyboards'
that have lots of buttons on them as well; unless you tell it otherwise,
play is play is play no matter which play button is pressed.

David Lang

2009-12-13 12:15:40

by Mauro Carvalho Chehab

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Dmitry Torokhov wrote:
> On Sun, Dec 06, 2009 at 09:34:26PM +0100, Krzysztof Halasa wrote:
>> Jon Smirl <[email protected]> writes:
>>
>>>> Once again: how about agreement about the LIRC interface
>>>> (kernel-userspace) and merging the actual LIRC code first? In-kernel
>>>> decoding can wait a bit, it doesn't change any kernel-user interface.
>>> I'd like to see a semi-complete design for an in-kernel IR system
>>> before anything is merged from any source.
>> This is a way to nowhere, there is no logical dependency between LIRC
>> and input layer IR.
>>
>> There is only one thing which needs attention before/when merging LIRC:
>> the LIRC user-kernel interface. In-kernel "IR system" is irrelevant and,
>> actually, making a correct IR core design without the LIRC merged can be
>> only harder.
>
> This sounds like "merge first, think later"...
>
> The question is why we need to merge the lirc interface right now, before we
> agree on the subsystem architecture? No one _in kernel_ uses lirc-dev
> yet and, looking at the way things are shaping, no drivers will be
> _directly_ using it after it is complete. So, even if we merge it right
> away, the code will have to be restructured and reworked. Unfortunately,
> just merging what Jarod posted will introduce a sysfs hierarchy which
> is a userspace interface as well (although we are not as good at maintaining
> it at times) and will add more constraints on us.
>
> That is why I think we should go the other way around - introduce the
> core which receivers could plug into and decoder framework and once it
> is ready register lirc-dev as one of the available decoders.
>

I've committed already some IR restruct code on my linux-next -git tree:

http://git.kernel.org/?p=linux/kernel/git/mchehab/linux-next.git

The code there basically moves the input/evdev registering code and
scancode/keycode management code into a separate ir-core module.

To make my life easy, I've moved the code temporarily into drivers/media/IR.
This way, it helps me to move V4L specific code outside ir-core and to later
use it for DVB. After that is done, it is probably better to move it to
be under /drivers or /drivers/input.

The enclosed patch just adds a skeleton for the new sysfs class for remote
controllers and registers a yet-unused ir_protocol attribute, creating this
tree:

/sys/class/irrcv/
|-- irrcv0
| |-- ir_protocol
| |-- power
| | `-- wakeup
| |-- subsystem -> ../../irrcv
| `-- uevent
`-- irrcv1
|-- ir_protocol
|-- power
| `-- wakeup
|-- subsystem -> ../../irrcv
`-- uevent

While writing the code, it occurred to me that calling it "IR" is not the best choice,
since nothing in the code is specific to infrared; instead, it is
related to remote controllers in general.

So, if it is ok with everybody, IMO we should instead use "rc", meaning remote controller,
naming the core module "rc-core" and putting it into drivers/rc.

Also, since the same rc chip can have a receiver and a transmitter, maybe we can create the
class as:
/sys/class/rc
rcrcv0/
rcrcv1/
...
rctx0/
rctx1/
...

Comments?


---
linux/drivers/media/IR/Makefile | 2
linux/drivers/media/IR/ir-keytable.c | 17 +++++-
linux/drivers/media/IR/ir-sysfs.c | 94 +++++++++++++++++++++++++++++++++++
linux/include/media/ir-core.h | 12 +++-
4 files changed, 119 insertions(+), 6 deletions(-)

--- master.orig/linux/drivers/media/IR/Makefile
+++ master/linux/drivers/media/IR/Makefile
@@ -1,5 +1,5 @@
ir-common-objs := ir-functions.o ir-keymaps.o
-ir-core-objs := ir-keytable.o
+ir-core-objs := ir-keytable.o ir-sysfs.o

obj-$(CONFIG_IR_CORE) += ir-core.o
obj-$(CONFIG_VIDEO_IR) += ir-common.o
--- master.orig/linux/drivers/media/IR/ir-keytable.c
+++ master/linux/drivers/media/IR/ir-keytable.c
@@ -448,12 +448,21 @@ int ir_input_register(struct input_dev *
input_set_drvdata(input_dev, ir_dev);

rc = input_register_device(input_dev);
+ if (rc < 0)
+ goto err;
+
+ rc = ir_register_class(input_dev);
if (rc < 0) {
- kfree(rc_tab->scan);
- kfree(ir_dev);
- input_set_drvdata(input_dev, NULL);
+ input_unregister_device(input_dev);
+ goto err;
}

+ return 0;
+
+err:
+ kfree(rc_tab->scan);
+ kfree(ir_dev);
+ input_set_drvdata(input_dev, NULL);
return rc;
}
EXPORT_SYMBOL_GPL(ir_input_register);
@@ -473,6 +482,8 @@ void ir_input_unregister(struct input_de
kfree(rc_tab->scan);
rc_tab->scan = NULL;

+ ir_unregister_class(dev);
+
kfree(ir_dev);
input_unregister_device(dev);
}
--- /dev/null
+++ master/linux/drivers/media/IR/ir-sysfs.c
@@ -0,0 +1,94 @@
+/* ir-sysfs.c - sysfs class and attributes for IR receivers
+ *
+ * Copyright (C) 2009 by Mauro Carvalho Chehab <[email protected]>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation version 2 of the License.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/input.h>
+#include <linux/device.h>
+#include <media/ir-core.h>
+
+#define IRRCV_NUM_DEVICES 256
+
+unsigned long ir_core_dev_number;
+
+static struct class *ir_input_class;
+
+static DEVICE_ATTR(ir_protocol, S_IRUGO | S_IWUSR, NULL, NULL);
+
+static struct attribute *ir_dev_attrs[] = {
+ &dev_attr_ir_protocol.attr,
+};
+
+int ir_register_class(struct input_dev *input_dev)
+{
+ int rc;
+ struct kobject *kobj;
+
+ struct ir_input_dev *ir_dev = input_get_drvdata(input_dev);
+ int devno = find_first_zero_bit(&ir_core_dev_number,
+ IRRCV_NUM_DEVICES);
+
+ if (unlikely(devno < 0))
+ return devno;
+
+ ir_dev->attr.attrs = ir_dev_attrs;
+ ir_dev->class_dev = device_create(ir_input_class, NULL,
+ input_dev->dev.devt, ir_dev,
+ "irrcv%d", devno);
+ kobj = &ir_dev->class_dev->kobj;
+
+ printk(KERN_WARNING "Creating IR device %s\n", kobject_name(kobj));
+ rc = sysfs_create_group(kobj, &ir_dev->attr);
+ if (unlikely(rc < 0)) {
+ device_destroy(ir_input_class, input_dev->dev.devt);
+ return -ENOMEM;
+ }
+
+ ir_dev->devno = devno;
+ set_bit(devno, &ir_core_dev_number);
+
+ return 0;
+};
+
+void ir_unregister_class(struct input_dev *input_dev)
+{
+ struct ir_input_dev *ir_dev = input_get_drvdata(input_dev);
+ struct kobject *kobj;
+
+ clear_bit(ir_dev->devno, &ir_core_dev_number);
+
+ kobj = &ir_dev->class_dev->kobj;
+
+ sysfs_remove_group(kobj, &ir_dev->attr);
+ device_destroy(ir_input_class, input_dev->dev.devt);
+
+ kfree(ir_dev->attr.name);
+}
+
+static int __init ir_core_init(void)
+{
+ ir_input_class = class_create(THIS_MODULE, "irrcv");
+ if (IS_ERR(ir_input_class)) {
+ printk(KERN_ERR "ir_core: unable to register irrcv class\n");
+ return PTR_ERR(ir_input_class);
+ }
+
+ return 0;
+}
+
+static void __exit ir_core_exit(void)
+{
+ class_destroy(ir_input_class);
+}
+
+module_init(ir_core_init);
+module_exit(ir_core_exit);
--- master.orig/linux/include/media/ir-core.h
+++ master/linux/include/media/ir-core.h
@@ -42,8 +42,11 @@ struct ir_scancode_table {
};

struct ir_input_dev {
- struct input_dev *dev;
- struct ir_scancode_table rc_tab;
+ struct input_dev *dev; /* Input device*/
+ struct ir_scancode_table rc_tab; /* scan/key table */
+ unsigned long devno; /* device number */
+ struct attribute_group attr; /* IR attributes */
+ struct device *class_dev; /* virtual class dev */
};

/* Routines from ir-keytable.c */
@@ -59,4 +62,9 @@ int ir_input_register(struct input_dev *
struct ir_scancode_table *ir_codes);
void ir_input_unregister(struct input_dev *input_dev);

+/* Routines from ir-sysfs.c */
+
+int ir_register_class(struct input_dev *input_dev);
+void ir_unregister_class(struct input_dev *input_dev);
+
#endif

2009-12-15 11:47:24

by Pavel Machek

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Sun 2009-12-06 12:59:00, Christoph Bartelmus wrote:
> Hi Dmitry,
>
> on 05 Dec 09 at 22:55, Dmitry Torokhov wrote:
> [...]
> > I do not believe you are being realistic. Sometimes we just need to say
> > that the device is a POS and is just not worth it. Remember, there is
> > still "lirc hole" for the hard core people still using solder to produce
> > something out of the spare electronic components that may be made to
> > work (never mind that it causes the CPU constantly poll the device, not
> > letting it sleep and wasting electricity as a result - just hypotetical
> > example here).
>
> There still seems to be a persistent misconception that the home-brewed
> receivers need polling or cause heavy CPU load. No, they don't. All of them
> are IRQ based.

I have at least one that needs polling/signal
processing... somewhere. IR LED connected to mic input.

Anyway, clearly hacked-up devices like that are better left for
userland solutions.
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

2009-12-15 11:50:28

by Pavel Machek

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?


> > This sounds like "merge first, think later"...
> >
> > The question is why we need to merge the lirc interface right now, before we
> > agree on the subsystem architecture? No one _in kernel_ uses lirc-dev
> > yet and, looking at the way things are shaping, no drivers will be
> > _directly_ using it after it is complete. So, even if we merge it right
> > away, the code will have to be restructured and reworked. Unfortunately,
> > just merging what Jarod posted will introduce a sysfs hierarchy which
> > is a userspace interface as well (although we are not as good at maintaining
> > it at times) and will add more constraints on us.
> >
> > That is why I think we should go the other way around - introduce the
> > core which receivers could plug into and decoder framework and once it
> > is ready register lirc-dev as one of the available decoders.
> >
>
> I've committed already some IR restruct code on my linux-next -git tree:
>
> http://git.kernel.org/?p=linux/kernel/git/mchehab/linux-next.git
>
> The code there basically moves the input/evdev registering code and
> scancode/keycode management code into a separate ir-core module.
>
> To make my life easy, I've moved the code temporarily into drivers/media/IR.
> This way, it helps me to move V4L specific code outside ir-core and to later
> use it for DVB. After having it done, probably the better is to move it to
> be under /drivers or /drivers/input.

Well, -next is for stuff to be merged into 2.6.34. You are quite an
optimist.
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

2009-12-15 13:33:50

by Mauro Carvalho Chehab

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Pavel Machek wrote:
>>> That is why I think we should go the other way around - introduce the
>>> core which receivers could plug into and decoder framework and once it
>>> is ready register lirc-dev as one of the available decoders.
>>>
>> I've committed already some IR restruct code on my linux-next -git tree:
>>
>> http://git.kernel.org/?p=linux/kernel/git/mchehab/linux-next.git
>>
>> The code there basically moves the input/evdev registering code and
>> scancode/keycode management code into a separate ir-core module.
>>
>> To make my life easy, I've moved the code temporarily into drivers/media/IR.
>> This way, it helps me to move V4L specific code outside ir-core and to later
>> use it for DVB. After having it done, probably the better is to move it to
>> be under /drivers or /drivers/input.
>
> Well, -next is for stuff to be merged into 2.6.34. You are quite an
> optimist.
> Pavel

Well, we need those changes anyway for the in-kernel drivers, and I don't see
any reason in the current patches for not having them in 2.6.34.

I've added all the ir-core patches I did so far at linux-next. This helps people
to review and contribute.

The patches are already working with the in-kernel em28xx driver, allowing to
replace the keycode table and the protocol used by the hardware IR decoder.
I tested here by replacing an RC-5 based IR table (Hauppauge Grey) by a NEC
based IR table (Terratec Cinergy XS remote controller).

The current Remote Controller core module (ir-core) is currently doing:

- Implementation of the existing EVIO[G|S]KEYCODE, expanding/freeing memory
dynamically, based on the needed size for scancode/keycode table;

- scancodes can be up to 16 bits currently;

- sysfs is registering /sys/class/irrcv and creating one branch for each
different RC receiver, numbering from irrcv0 to irrcv255;

- one irrcv node is created: current_protocol;

- reading /sys/class/irrcv/irrcv*/current_protocol returns the protocol
currently used by the driver;

- writing to /sys/class/irrcv/irrcv*/current_protocol changes the protocol
to a new one, by calling a callback that asks the driver to change the protocol. If
the protocol is not supported, it returns -EINVAL;

- all V4L drivers are already using ir-core;

- em28xx driver is implementing current_protocol show/store support.

TODO:
1) Port DVB drivers to use ir-core, removing the duplicated (and incomplete
- as table size can't change on DVB's implementation) code that exists there;

2) add current_protocol support on other drivers;

3) link the corresponding input/evdev interfaces with /sys/class/irrcv/irrcv*;

4) make the keytable.c application aware of the sysfs vars;

5) add an attribute to uniquely identify a remote controller;

6) write or convert an existing application to load IR tables at runtime;

7) get the complete 16-bit scancodes used by V4L drivers;

8) add decoder/lirc_dev glue to ir-core;

9) add lirc_dev module and in-kernel decoders;

10) extend keycode table replacement to support big/variable sized scancodes;

11) rename IR->RC;

12) redesign or remove ir-common module. It currently handles in-kernel
keycode tables and a few helper routines for raw pulse/space decode;

13) move drivers/media/IR to a better place;


comments:

Tasks (1) to (6) for sure can happen for 2.6.34, depending on people's spare
time for it;

(7) is probably the most complex task, since it requires re-testing all in-kernel
supported remote controller scancode/keycode tables, to get the complete IR keycodes,
and rewriting the getkeycode functions that are currently masking the IR code into 7 bits.
We'll need users' help on this task, but this can be done gradually, like I did with
two RC keytables on the em28xx driver, while preserving the other keytables as-is.

(8) I suggest that this glue be submitted together with the lirc_dev patch
series, as the biggest client for it is lirc. In principle, kfifo seems the best
interface for the lirc_dev -> decoders interface. For the decoders -> RC core interface,
there's an interface already used on V4L drivers, provided by ir-common, using the evdev
kernel API. This may need some review.

(9) depends on the lirc API discussions. My proposal is that people submit an RFC
with the reviewed lirc API to the MLs, for people to ack/nack/comment. After that,
re-submit the lirc_dev module integrating it into ir-core and with the reviewed API;

(10) depends on EVIO[G|S]KEYCODE discussions we've already started. I did a proposal
about it. I'll review, based on the comments and re-submit it;

(11) if no one is against renaming IR as RC, I'll do it in a next patch;

(12) depends on having lirc_dev added, for the removal of ir-functions.c. With
respect to the keytables, maybe one interesting alternative is to use a logic close to
the NLS tables that exist in fs/, allowing to individually insert or remove an IR keytable
in-kernel.

(13) has low priority, and can wait until the DVB integration with the RC core
is finished and the remaining bits of the ir-common module are reviewed.

Cheers,
Mauro.

2009-12-15 13:43:14

by [email protected]

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Tue, Dec 15, 2009 at 8:33 AM, Mauro Carvalho Chehab
<[email protected]> wrote:
> Pavel Machek wrote:
>>>> That is why I think we should go the other way around - introduce the
>>>> core which receivers could plug into and decoder framework and once it
>>>> is ready register lirc-dev as one of the available decoders.
>>>>
>>> I've committed already some IR restruct code on my linux-next -git tree:
>>>
>>> http://git.kernel.org/?p=linux/kernel/git/mchehab/linux-next.git
>>>
>>> The code there basically moves the input/evdev registering code and
>>> scancode/keycode management code into a separate ir-core module.
>>>
>>> To make my life easy, I've moved the code temporarily into drivers/media/IR.
>>> This way, it helps me to move V4L specific code outside ir-core and to later
>>> use it for DVB. After having it done, probably the better is to move it to
>>> be under /drivers or /drivers/input.
>>
>> Well, -next is for stuff to be merged into 2.6.34. You are quite an
>> optimist.
>> Pavel
>
> Well, we need those changes anyway for the in-kernel drivers, and I'm not seeing
> on the current patches any reason for not having them for 2.6.34.
>
> I've added all the ir-core patches I did so far at linux-next. This helps people
> to review and contribute.
>
> The patches are already working with the in-kernel em28xx driver, allowing to
> replace the keycode table and the protocol used by the hardware IR decoder.
> I tested here by replacing an RC-5 based IR table (Hauppauge Grey) by a NEC
> based IR table (Terratec Cinergy XS remote controller).
>
> The current Remote Controller core module (ir-core) is currently doing:
>
>        - Implementation of the existing EVIO[G|S]KEYCODE, expanding/freeing memory
> dynamically, based on the needed size for scancode/keycode table;
>
>        - scancodes can be up to 16 bits currently;
>
>        - sysfs is registering /sys/class/irrcv and creating one branch for each
> different RC receiver, numbering from irrcv0 to irrcv255;
>
>        - one irrcv node is created: current_protocol;
>
>        - reading /sys/class/irrcv/irrcv*/current_protocol returns the protocol
> currently used by the driver;
>
>        - writing to /sys/class/irrcv/irrcv*/current_protocol changes the protocol
> to a new one, by calling a callback that asks the driver to change the protocol. If
> the protocol is not supported, it returns -EINVAL;
>
>        - all V4L drivers are already using ir-core;
>
>        - em28xx driver is implementing current_protocol show/store support.
>
> TODO:

I'd add a pulse-based receiver like the MS MCE to make sure the core API is right.
The MCE hardware has transmit support too.

What about creating multiple evdev devices, each with its own keymap, hanging
off a single receiver? That's a key part of making multi-function
remotes work.


>        1) Port DVB drivers to use ir-core, removing the duplicated (and incomplete
>          - as table size can't change on DVB's implementation) code that exists there;
>
>        2) add current_protocol support on other drivers;
>
>        3) link the corresponding input/evdev interfaces with /sys/class/irrcv/irrcv*;
>
>        4) make the keytable.c application aware of the sysfs vars;
>
>        5) add an attribute to uniquely identify a remote controller;
>
>        6) write or convert an existing application to load IR tables at runtime;
>
>        7) get the complete 16-bit scancodes used by V4L drivers;
>
>        8) add decoder/lirc_dev glue to ir-core;
>
>        9) add lirc_dev module and in-kernel decoders;
>
>        10) extend keycode table replacement to support big/variable sized scancodes;
>
>        11) rename IR->RC;
>
>        12) redesign or remove ir-common module. It currently handles in-kernel
>            keycode tables and a few helper routines for raw pulse/space decode;
>
>        13) move drivers/media/IR to a better place;
>
>
> comments:
>
>        Tasks (1) to (6) for sure can happen for 2.6.34, depending on people's spare
> time for it;
>
>        (7) is probably the most complex task, since it requires re-testing all in-kernel
> supported remote controller scancode/keycode tables, to get the complete IR keycodes,
> and rewriting the getkeycode functions that are currently masking the IR code into 7 bits.
> We'll need users' help on this task, but this can be done gradually, like I did with
> two RC keytables on the em28xx driver, while preserving the other keytables as-is.
>
>        (8) I suggest that this glue be submitted together with the lirc_dev patch
> series, as the biggest client for it is lirc. In principle, kfifo seems the best
> interface for the lirc_dev -> decoders interface. For the decoders -> RC core interface,
> there's an interface already used on V4L drivers, provided by ir-common, using the evdev
> kernel API. This may need some review.
>
>        (9) depends on the lirc API discussions. My proposal is that people submit an RFC
> with the reviewed lirc API to the MLs, for people to ack/nack/comment. After that,
> re-submit the lirc_dev module integrating it into ir-core and with the reviewed API;
>
>        (10) depends on the EVIO[G|S]KEYCODE discussions we've already started. I did a proposal
> about it. I'll review, based on the comments, and re-submit it;
>
>        (11) if no one is against renaming IR as RC, I'll do it in a next patch;
>
>        (12) depends on having lirc_dev added, for the removal of ir-functions.c. With
> respect to the keytables, maybe one interesting alternative is to use a logic close to
> the NLS tables that exist in fs/, allowing to individually insert or remove an IR keytable
> in-kernel.
>
>        (13) has low priority, and can wait until the DVB integration with the RC core
> is finished and the remaining bits of the ir-common module are reviewed.
>
> Cheers,
> Mauro.
>



--
Jon Smirl
[email protected]

2009-12-15 14:20:53

by Mauro Carvalho Chehab

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Jon Smirl wrote:
> On Tue, Dec 15, 2009 at 8:33 AM, Mauro Carvalho Chehab
> <[email protected]> wrote:
>> Pavel Machek wrote:
>>>>> That is why I think we should go the other way around - introduce the
>>>>> core which receivers could plug into and decoder framework and once it
>>>>> is ready register lirc-dev as one of the available decoders.
>>>>>
>>>> I've committed already some IR restruct code on my linux-next -git tree:
>>>>
>>>> http://git.kernel.org/?p=linux/kernel/git/mchehab/linux-next.git
>>>>
>>>> The code there basically moves the input/evdev registering code and
>>>> scancode/keycode management code into a separate ir-core module.
>>>>
>>>> To make my life easy, I've moved the code temporarily into drivers/media/IR.
>>>> This way, it helps me to move V4L specific code outside ir-core and to later
>>>> use it for DVB. After having it done, probably the better is to move it to
>>>> be under /drivers or /drivers/input.
>>> Well, -next is for stuff to be merged into 2.6.34. You are quite an
>>> optimist.
>>> Pavel
>> Well, we need those changes anyway for the in-kernel drivers, and I'm not seeing
>> on the current patches any reason for not having them for 2.6.34.
>>
>> I've added all the ir-core patches I did so far at linux-next. This helps people
>> to review and contribute.
>>
>> The patches are already working with the in-kernel em28xx driver, allowing to
>> replace the keycode table and the protocol used by the hardware IR decoder.
>> I tested here by replacing an RC-5 based IR table (Hauppauge Grey) by a NEC
>> based IR table (Terratec Cinergy XS remote controller).
>>
>> The current Remote Controller core module (ir-core) is currently doing:
>>
>> - Implementation of the existing EVIO[G|S]KEYCODE, expanding/freeing memory
>> dynamically, based on the needed size for the scancode/keycode table;
>>
>> - scancodes can be up to 16 bits currently;
>>
>> - sysfs is registering /sys/class/irrcv and creating one branch for each
>> different RC receiver, numbering from irrcv0 to irrcv255;
>>
>> - one irrcv node is created: current_protocol;
>>
>> - reading /sys/class/irrcv/irrcv*/current_protocol returns the protocol
>> currently used by the driver;
>>
>> - writing to /sys/class/irrcv/irrcv*/current_protocol changes the protocol
>> to a new one, by calling a callback, asking the driver to change the protocol. If
>> the protocol is not supported, it returns -EINVAL;
>>
>> - all V4L drivers are already using ir-core;
>>
>> - em28xx driver is implementing current_protocol show/store support.
>>
>> TODO:
>
> I'd add a pulse based receiver like an MSMCE to make sure the core API is right.
> The MSMCE has transmit hardware too.

Makes sense. This can be done after lirc_dev is integrated.

> What about creating multiple evdev devices with their own keymaps off
> from a single receiver? That's a key part of making multi-function
> remotes work.

I was sure I missed something at the TODO :)

>
>
>> 1) Port DVB drivers to use ir-core, removing the duplicated (and incomplete
>> - as table size can't change on DVB's implementation) code that exists there;
>>
>> 2) add current_protocol support on other drivers;
>>
>> 3) link the corresponding input/evdev interfaces with /sys/class/irrcv/irrcv*;
>>
>> 4) make the keytable.c application aware of the sysfs vars;
>>
>> 5) add an attribute to uniquely identify a remote controller;
>>
>> 6) write or convert an existing application to load IR tables at runtime;
>>
>> 7) get the complete 16-bit scancodes used by V4L drivers;
>>
>> 8) add decoder/lirc_dev glue to ir-core;
>>
>> 9) add lirc_dev module and in-kernel decoders;
>>
>> 10) extend keycode table replacement to support big/variable sized scancodes;
>>
>> 11) rename IR->RC;
>>
>> 12) redesign or remove ir-common module. It currently handles in-kernel
>> keycode tables and a few helper routines for raw pulse/space decode;
>>
>> 13) move drivers/media/IR to a better place;
>>

So, we have also at the todo list:

14) add pulse based drivers;

15) make in-kernel pulse-based devices use lirc_dev;

16) add an API to dynamically create evdev interfaces for scancode filtering;

>>
>> comments:
>>
>> Tasks (1) to (6) can for sure happen for 2.6.34, depending on people's spare
>> time for it;
>>
>> (7) is probably the most complex task, since it requires re-testing all in-kernel
>> supported remote controller scancode/keycode tables to get the complete IR keycodes
>> and rewriting the getkeycode functions that are currently masking the IR code into 7 bits.
>> We'll need users' help on this task, but this can be done gradually, like I did with
>> two RC keytables on the em28xx driver, while preserving the other keytables as-is.
>>
>> (8) I suggest that this glue be submitted together with the lirc_dev patch
>> series, as the biggest client for it is lirc. In principle, kfifo seems the better
>> interface for the lirc_dev -> decoders interface. For the decoders -> RC core interface,
>> there's an interface already used on V4L drivers, provided by ir-common, using the evdev
>> kernel API. This may need some review.
>>
>> (9) depends on the lirc API discussions. My proposal is that people submit an RFC
>> with the reviewed lirc API to the MLs, for people to ack/nack/comment. After that,
>> re-submit the lirc_dev module integrating it into ir-core and with the reviewed API;
>>
>> (10) depends on the EVIO[G|S]KEYCODE discussions we've already started. I did a proposal
>> about it. I'll review it, based on the comments, and re-submit;
>>
>> (11) if no one is against renaming IR as RC, I'll do it in a next patch;
>>
>> (12) depends on having lirc_dev added, for the removal of ir-functions.c. With
>> respect to the keytables, maybe one interesting alternative is to use a logic close to
>> the nls tables that exist in fs, allowing to individually insert or remove an IR keytable
>> in-kernel.
>>
>> (13) has low priority, at least while the DVB integration with the RC core
>> isn't finished and the remaining bits of the ir-common module are still under review.
>>
>> Cheers,
>> Mauro.
>>
>
>
>

2009-12-15 19:59:14

by Pavel Machek

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

Hi!

> (11) if none is against renaming IR as RC, I'll do it on a next patch;

Call it irc -- infrared remote control. Bluetooth remote controls will
have very different characteristics.

Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

2009-12-15 20:14:08

by [email protected]

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Tue, Dec 15, 2009 at 2:58 PM, Pavel Machek <[email protected]> wrote:
> Hi!
>
>> (11) if no one is against renaming IR as RC, I'll do it in a next patch;
>
> Call it irc -- infrared remote control. Bluetooth remote controls will
> have very different characteristics.

How are they different after the scancode is extracted from the
network packet? The scancode still needs to be passed to the input
system, go through a keymap, and end up on an evdev device.

I would expect the code for extracting the scancode to live in the
networking stack, but after it is recovered the networking code would
use the same API as IR to submit it to input.

--
Jon Smirl
[email protected]

2009-12-15 20:19:44

by Pavel Machek

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Tue 2009-12-15 15:14:02, Jon Smirl wrote:
> On Tue, Dec 15, 2009 at 2:58 PM, Pavel Machek <[email protected]> wrote:
> > Hi!
> >
> >> (11) if no one is against renaming IR as RC, I'll do it in a next patch;
> >
> > Call it irc -- infrared remote control. Bluetooth remote controls will
> > have very different characteristics.
>
> How are they different after the scancode is extracted from the
> network packet? The scancode still needs to be passed to the input
> system, go through a keymap, and end up on an evdev device.
>
> I would expect the code for extracting the scancode to live in the
> networking stack, but after it is recovered the networking code would
> use the same API as IR to submit it to input.

For one thing, bluetooth (etc.) has a concept of devices (and reliable
transfer). If you have two identical bluetooth remotes, you can tell them
apart, unlike IR.

So yes, keymapping is the same, but that's pretty much it. Decoding
will not be the same (IR is special), etc...
Pavel

--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

2009-12-15 20:29:57

by [email protected]

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Tue, Dec 15, 2009 at 3:19 PM, Pavel Machek <[email protected]> wrote:
> On Tue 2009-12-15 15:14:02, Jon Smirl wrote:
>> On Tue, Dec 15, 2009 at 2:58 PM, Pavel Machek <[email protected]> wrote:
>> > Hi!
>> >
>> >> (11) if no one is against renaming IR as RC, I'll do it in a next patch;
>> >
>> > Call it irc -- infrared remote control. Bluetooth remote controls will
>> > have very different characteristics.
>>
>> How are they different after the scancode is extracted from the
>> network packet? The scancode still needs to be passed to the input
>> system, go through a keymap, and end up on an evdev device.
>>
>> I would expect the code for extracting the scancode to live in the
>> networking stack, but after it is recovered the networking code would
>> use the same API as IR to submit it to input.
>
> For one thing, bluetooth (etc.) has a concept of devices (and reliable
> transfer). If you have two identical bluetooth remotes, you can tell them
> apart, unlike IR.

IR has the same concept of devices. That's what those codes you enter
into a universal remote do - they set the device.

There are three classes of remotes:
Fixed function - the device is hardwired
Universal - you can change the device
Multi-function - a universal that can be multiple devices - TV, cable,
audio, etc

If you set two Bluetooth remotes both to the same device you can't
tell them apart either.
Two identical fixed function remotes can't be distinguished, and they
shouldn't be distinguishable.

To distinguish between universal remotes just change the device being emulated.


>
> So yes, keymapping is the same, but that's pretty much it. Decoding
> will not be the same (IR is special), etc...
> Pavel
>
> --
> (english) http://www.livejournal.com/~pavelmachek
> (cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
>



--
Jon Smirl
[email protected]

2009-12-15 20:33:16

by Pavel Machek

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Tue 2009-12-15 15:29:51, Jon Smirl wrote:
> On Tue, Dec 15, 2009 at 3:19 PM, Pavel Machek <[email protected]> wrote:
> > On Tue 2009-12-15 15:14:02, Jon Smirl wrote:
> >> On Tue, Dec 15, 2009 at 2:58 PM, Pavel Machek <[email protected]> wrote:
> >> > Hi!
> >> >
> >> >> (11) if no one is against renaming IR as RC, I'll do it in a next patch;
> >> >
> >> > Call it irc -- infrared remote control. Bluetooth remote controls will
> >> > have very different characteristics.
> >>
> >> How are they different after the scancode is extracted from the
> >> network packet? The scancode still needs to be passed to the input
> >> system, go through a keymap, and end up on an evdev device.
> >>
> >> I would expect the code for extracting the scancode to live in the
> >> networking stack, but after it is recovered the networking code would
> >> use the same API as IR to submit it to input.
> >
> > For one thing, bluetooth (etc.) has a concept of devices (and reliable
> > transfer). If you have two identical bluetooth remotes, you can tell them
> > apart, unlike IR.
>
> IR has the same concept of devices. That's what those codes you enter
> into a universal remote do - they set the device.

They set the device _model_.

> There are three classes of remotes..
> Fixed function - the device is hardwired
> Universal - you can change the device
> Multi-function - a universal that can be multiple devices - TV, cable,
> audio, etc
>
> If you set two Bluetooth remotes both to the same device you can't
> tell them apart either.

Untrue. Like ethernet and wifi devices, bluetooth devices have unique
addresses. Communication is bidirectional.

Imagine wifi connected bluetooth. It is very different from infrared.

--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

2009-12-15 20:45:26

by [email protected]

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Tue, Dec 15, 2009 at 3:33 PM, Pavel Machek <[email protected]> wrote:
> On Tue 2009-12-15 15:29:51, Jon Smirl wrote:
>> On Tue, Dec 15, 2009 at 3:19 PM, Pavel Machek <[email protected]> wrote:
>> > On Tue 2009-12-15 15:14:02, Jon Smirl wrote:
>> >> On Tue, Dec 15, 2009 at 2:58 PM, Pavel Machek <[email protected]> wrote:
>> >> > Hi!
>> >> >
>> >> >> (11) if no one is against renaming IR as RC, I'll do it in a next patch;
>> >> >
>> >> > Call it irc -- infrared remote control. Bluetooth remote controls will
>> >> > have very different characteristics.
>> >>
>> >> How are they different after the scancode is extracted from the
>> >> network packet? The scancode still needs to be passed to the input
>> >> system, go through a keymap, and end up on an evdev device.
>> >>
>> >> I would expect the code for extracting the scancode to live in the
>> >> networking stack, but after it is recovered the networking code would
>> >> use the same API as IR to submit it to input.
>> >
>> > For one thing, bluetooth (etc.) has a concept of devices (and reliable
>> > transfer). If you have two identical bluetooth remotes, you can tell them
>> > apart, unlike IR.
>>
>> IR has the same concept of devices. That's what those codes you enter
>> into a universal remote do - they set the device.
>
> They set the device _model_.
>
>> There are three classes of remotes..
>> Fixed function - the device is hardwired
>> Universal - you can change the device
>> Multi-function - a universal that can be multiple devices - TV, cable,
>> audio, etc
>>
>> If you set two Bluetooth remotes both to the same device you can't
>> tell them apart either.
>
> Untrue. Like ethernets and wifis, bluetooth devices have unique
> addresses. Communication is bidirectional.

I agree with that, but the 802.15.4 remote control software I've
worked with ignores the MAC address. You set your remote to send codes
for a specific device. The mac address of the remote is ignored so
that any remote can control the device. You don't need to pair
802.15.4 remotes like Bluetooth devices need to be paired.

I haven't played around with a Bluetooth remote. Nothing I own can
send the signals. How can a Bluetooth remote control multiple devices
in the same room if it needs to be paired?

If it doesn't use this API, how does a Bluetooth remote turn a button
press into a Linux keycode on an evdev device?


>
> Imagine wifi connected bluetooth. It is very different from infrared.
>
> --
> (english) http://www.livejournal.com/~pavelmachek
> (cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
>



--
Jon Smirl
[email protected]

2009-12-15 21:05:26

by Pavel Machek

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Tue 2009-12-15 15:45:14, Jon Smirl wrote:
> On Tue, Dec 15, 2009 at 3:33 PM, Pavel Machek <[email protected]> wrote:
> > On Tue 2009-12-15 15:29:51, Jon Smirl wrote:
> >> On Tue, Dec 15, 2009 at 3:19 PM, Pavel Machek <[email protected]> wrote:
> >> > On Tue 2009-12-15 15:14:02, Jon Smirl wrote:
> >> >> On Tue, Dec 15, 2009 at 2:58 PM, Pavel Machek <[email protected]> wrote:
> >> >> > Hi!
> >> >> >
> >> >> >> (11) if no one is against renaming IR as RC, I'll do it in a next patch;
> >> >> >
> >> >> > Call it irc -- infrared remote control. Bluetooth remote controls will
> >> >> > have very different characteristics.
> >> >>
> >> >> How are they different after the scancode is extracted from the
> >> >> network packet? The scancode still needs to be passed to the input
> >> >> system, go through a keymap, and end up on an evdev device.
> >> >>
> >> >> I would expect the code for extracting the scancode to live in the
> >> >> networking stack, but after it is recovered the networking code would
> >> >> use the same API as IR to submit it to input.
> >> >
> >> > For one thing, bluetooth (etc.) has a concept of devices (and reliable
> >> > transfer). If you have two identical bluetooth remotes, you can tell them
> >> > apart, unlike IR.
> >>
> >> IR has the same concept of devices. That's what those codes you enter
> >> into a universal remote do - they set the device.
> >
> > They set the device _model_.
> >
> >> There are three classes of remotes..
> >> Fixed function - the device is hardwired
> >> Universal - you can change the device
> >> Multi-function - a universal that can be multiple devices - TV, cable,
> >> audio, etc
> >>
> >> If you set two Bluetooth remotes both to the same device you can't
> >> tell them apart either.
> >
> > Untrue. Like ethernets and wifis, bluetooth devices have unique
> > addresses. Communication is bidirectional.
>
> I agree with that, but the 802.15.4 remote control software I've
> worked with ignores the MAC address. You set your remote to send codes
> for a specific device. The mac address of the remote is ignored so
> that any remote can control the device. You don't need to pair
> 802.15.4 remotes like Bluetooth devices need to be paired.
>
> I haven't played around with a Bluetooth remote. Nothing I own can
> send the signals. How can a Bluetooth remote control multiple devices
> in the same room if it needs to be paired?

I'd guess that a bluetooth remote would be very similar to a bluetooth
keyboard, and would present itself in a very similar way.

I still believe infrared is different -- it is essentially light with
very little protocol above.
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

2009-12-15 21:38:19

by [email protected]

[permalink] [raw]
Subject: Re: [RFC] What are the goals for the architecture of an in-kernel IR system?

On Tue, Dec 15, 2009 at 3:45 PM, Jon Smirl <[email protected]> wrote:
> On Tue, Dec 15, 2009 at 3:33 PM, Pavel Machek <[email protected]> wrote:
>> Untrue. Like ethernets and wifis, bluetooth devices have unique
>> addresses. Communication is bidirectional.

I read a little about how Bluetooth remotes work. Correct me if I get
things wrong....

They create pairings between the remote and the device. Each of these
pairings is assigned a device type. Multiple devices in the same room
are handled by the remote remembering the pairings and sending
directed packets instead of broadcast. That lets you have two TVs in
the same room.

Bluetooth devices need to advertise what profiles they support. So on
the Linux box you'd run a command to load the Bluetooth profile for
TV. This command would create an evdev subdevice, load the Bluetooth
keymap for TV, and tell the networking stack to advertise TV support.
Next you initiate the pairing from the Bluetooth remote and pick the
Linux box. This causes a pairing established exchange which tells the
Linux box to make the pairing persistent.

I believe the Bluetooth remote profile is handled in user space by the
BlueZ stack. BlueZ should be aware of the remote pairings. When it
decodes a button press it would need to inject the scancode into the
correct evdev subdevice. Evdev would translate it in the keymap and
create the keyevent. This is the same mechanism LIRC is using.


At a more general level we're missing a way for something like Myth to
declare that it is a DVR device. Myth should load, say "I'm a DVR", and
then the remote control subsystem should automatically create a Bluetooth
DVR profile or load an IR profile for a Motorola DVR on a universal
remote, depending on whether the box has Bluetooth, IR or 802.15.4.

The whole concept of a remote control subsystem seems like it needs
more design work. We keep coming up with big areas that no one
has thought about.

--
Jon Smirl
[email protected]