Hi Steve,
On Wed, Feb 15, 2017 at 06:19:16PM -0800, Steve Longerbeam wrote:
> v4l2_pipeline_inherit_controls() will add the v4l2 controls from
> all subdev entities in a pipeline to a given video device.
>
> Signed-off-by: Steve Longerbeam <[email protected]>
> ---
> drivers/media/v4l2-core/v4l2-mc.c | 48 +++++++++++++++++++++++++++++++++++++++
> include/media/v4l2-mc.h | 25 ++++++++++++++++++++
> 2 files changed, 73 insertions(+)
>
> diff --git a/drivers/media/v4l2-core/v4l2-mc.c b/drivers/media/v4l2-core/v4l2-mc.c
> index 303980b..09d4d97 100644
> --- a/drivers/media/v4l2-core/v4l2-mc.c
> +++ b/drivers/media/v4l2-core/v4l2-mc.c
> @@ -22,6 +22,7 @@
> #include <linux/usb.h>
> #include <media/media-device.h>
> #include <media/media-entity.h>
> +#include <media/v4l2-ctrls.h>
> #include <media/v4l2-fh.h>
> #include <media/v4l2-mc.h>
> #include <media/v4l2-subdev.h>
> @@ -238,6 +239,53 @@ int v4l_vb2q_enable_media_source(struct vb2_queue *q)
> }
> EXPORT_SYMBOL_GPL(v4l_vb2q_enable_media_source);
>
> +int __v4l2_pipeline_inherit_controls(struct video_device *vfd,
> + struct media_entity *start_entity)
I have a few concerns / questions:
- What's the purpose of this patch? Why not access the sub-device node
directly?
- This implementation is only workable as long as you do not modify the
pipeline. Once you disable a link along the pipeline, a device where the
control was inherited from may no longer be a part of the pipeline.
Depending on the hardware, it could be a part of another pipeline, in
which case it certainly must not be accessible through an unrelated video
node. As the function is added to the framework, I would expect it to
handle such a case correctly.
- I assume it is the responsibility of the caller of this function to ensure
the device in question will not be powered off whilst the video node is
used as another user space interface to such a sub-device. If the driver
uses the generic PM functions in the same file, this works, but it still
has to be documented.
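For the record, a rough example of what I mean by using the generic PM
functions (untested; the calls would go in the bridge driver's file
operations):

	/* in the video node's open() */
	ret = v4l2_pipeline_pm_use(&vfd->entity, 1);	/* power the pipeline up */
	if (ret)
		return ret;
	...
	/* in the video node's release() */
	v4l2_pipeline_pm_use(&vfd->entity, 0);		/* and back down */

As long as the video node holds such a use reference, the sub-devices
whose controls were inherited stay powered.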
> +{
> + struct media_device *mdev = start_entity->graph_obj.mdev;
> + struct media_entity *entity;
> + struct media_graph graph;
> + struct v4l2_subdev *sd;
> + int ret;
> +
> + ret = media_graph_walk_init(&graph, mdev);
> + if (ret)
> + return ret;
> +
> + media_graph_walk_start(&graph, start_entity);
> +
> + while ((entity = media_graph_walk_next(&graph))) {
> + if (!is_media_entity_v4l2_subdev(entity))
> + continue;
> +
> + sd = media_entity_to_v4l2_subdev(entity);
> +
> + ret = v4l2_ctrl_add_handler(vfd->ctrl_handler,
> + sd->ctrl_handler,
> + NULL);
> + if (ret)
> + break;
> + }
> +
> + media_graph_walk_cleanup(&graph);
> + return ret;
> +}
> +EXPORT_SYMBOL_GPL(__v4l2_pipeline_inherit_controls);
> +
> +int v4l2_pipeline_inherit_controls(struct video_device *vfd,
> + struct media_entity *start_entity)
> +{
> + struct media_device *mdev = start_entity->graph_obj.mdev;
> + int ret;
> +
> + mutex_lock(&mdev->graph_mutex);
> + ret = __v4l2_pipeline_inherit_controls(vfd, start_entity);
> + mutex_unlock(&mdev->graph_mutex);
> +
> + return ret;
> +}
> +EXPORT_SYMBOL_GPL(v4l2_pipeline_inherit_controls);
> +
> /* -----------------------------------------------------------------------------
> * Pipeline power management
> *
> diff --git a/include/media/v4l2-mc.h b/include/media/v4l2-mc.h
> index 2634d9d..9848e77 100644
> --- a/include/media/v4l2-mc.h
> +++ b/include/media/v4l2-mc.h
> @@ -171,6 +171,17 @@ void v4l_disable_media_source(struct video_device *vdev);
> */
> int v4l_vb2q_enable_media_source(struct vb2_queue *q);
>
> +/**
> + * v4l2_pipeline_inherit_controls - Add the v4l2 controls from all
> + * subdev entities in a pipeline to
> + * the given video device.
> + * @vfd: the video device
> + * @start_entity: Starting entity
> + */
> +int __v4l2_pipeline_inherit_controls(struct video_device *vfd,
> + struct media_entity *start_entity);
> +int v4l2_pipeline_inherit_controls(struct video_device *vfd,
> + struct media_entity *start_entity);
>
> /**
> * v4l2_pipeline_pm_use - Update the use count of an entity
> @@ -231,6 +242,20 @@ static inline int v4l_vb2q_enable_media_source(struct vb2_queue *q)
> return 0;
> }
>
> +static inline int __v4l2_pipeline_inherit_controls(
> + struct video_device *vfd,
> + struct media_entity *start_entity)
> +{
> + return 0;
> +}
> +
> +static inline int v4l2_pipeline_inherit_controls(
> + struct video_device *vfd,
> + struct media_entity *start_entity)
> +{
> + return 0;
> +}
> +
> static inline int v4l2_pipeline_pm_use(struct media_entity *entity, int use)
> {
> return 0;
--
Kind regards,
Sakari Ailus
e-mail: [email protected] XMPP: [email protected]
On 03/02/2017 08:02 AM, Sakari Ailus wrote:
> Hi Steve,
>
> On Wed, Feb 15, 2017 at 06:19:16PM -0800, Steve Longerbeam wrote:
>> v4l2_pipeline_inherit_controls() will add the v4l2 controls from
>> all subdev entities in a pipeline to a given video device.
>>
>> Signed-off-by: Steve Longerbeam <[email protected]>
>> <snip>
>>
>> +int __v4l2_pipeline_inherit_controls(struct video_device *vfd,
>> + struct media_entity *start_entity)
>
> I have a few concerns / questions:
>
> - What's the purpose of this patch? Why not access the sub-device node
> directly?
I don't really understand what you are trying to say. That's exactly
what this function is doing, accessing every subdevice in a pipeline
directly, in one convenient function call.
>
> - This implementation is only workable as long as you do not modify the
> pipeline. Once you disable a link along the pipeline, a device where the
> control was inherited from may no longer be a part of the pipeline.
That's correct. It's up to the media driver to clear the video device's
inherited controls whenever the pipeline is modified, and then call this
function again if need be.
In the imx-media driver, the function is called in link_setup when the link
from a source pad that is attached to a capture video node is enabled.
This is the last link that must be made to define the pipeline, so it
is at this time that a complete list of subdevice controls can be
gathered by walking the pipeline.
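Roughly like this, simplified from what imx-media does (the names here
are illustrative, and note that link_setup() is called with the graph
mutex already held, which is why the unlocked variant exists):

	static int capture_link_setup(struct media_entity *entity,
				      const struct media_pad *local,
				      const struct media_pad *remote,
				      u32 flags)
	{
		struct video_device *vfd = media_entity_to_video_device(entity);

		if (!(flags & MEDIA_LNK_FL_ENABLED))
			return 0;

		/*
		 * The last link is now in place, so every control in the
		 * pipeline is reachable from the remote sub-device.
		 */
		return __v4l2_pipeline_inherit_controls(vfd, remote->entity);
	}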
> Depending on the hardware, it could be a part of another pipeline, in
> which case it certainly must not be accessible through an unrelated video
> node. As the function is added to the framework, I would expect it to
> handle such a case correctly.
The function will not inherit controls from a device that is not
reachable from the given starting subdevice, so I don't understand
your point here.
>
> - I assume it is the responsibility of the caller of this function to ensure
> the device in question will not be powered off whilst the video node is
> used as another user space interface to such a sub-device. If the driver
> uses the generic PM functions in the same file, this works, but it still
> has to be documented.
I guess I'm missing something. Why are you bringing up the subject of
power? What does this function have to do with whether a subdevice is
powered or not? The function makes use of v4l2_ctrl_add_handler(), and
the latter has no requirements about whether the devices owning the
control handlers are powered or not.
Steve
On 03/02/2017 03:48 PM, Steve Longerbeam wrote:
>
>
> On 03/02/2017 08:02 AM, Sakari Ailus wrote:
>> Hi Steve,
>>
>> On Wed, Feb 15, 2017 at 06:19:16PM -0800, Steve Longerbeam wrote:
>>> v4l2_pipeline_inherit_controls() will add the v4l2 controls from
>>> all subdev entities in a pipeline to a given video device.
>>>
>>> Signed-off-by: Steve Longerbeam <[email protected]>
>>> <snip>
>>>
>>> +int __v4l2_pipeline_inherit_controls(struct video_device *vfd,
>>> + struct media_entity *start_entity)
>>
>> I have a few concerns / questions:
>>
>> - What's the purpose of this patch? Why not access the sub-device node
>> directly?
>
>
> I don't really understand what you are trying to say. That's exactly
> what this function is doing, accessing every subdevice in a pipeline
> directly, in one convenient function call.
>
>
>>
>> - This implementation is only workable as long as you do not modify the
>> pipeline. Once you disable a link along the pipeline, a device where
>> the
>> control was inherited from may no longer be a part of the pipeline.
>
> That's correct. It's up to the media driver to clear the video device's
> inherited controls whenever the pipeline is modified, and then call this
> function again if need be.
And here is where I need to eat my words :). I'm not actually
clearing the inherited controls if a link upstream of the
device node is modified after the whole pipeline has been
configured. If the user does that, the controls can become
invalid. I need to fix that by clearing the device node's controls in
the link_notify callback.
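Something along these lines (an untested sketch; find_capture_vdev()
is a made-up helper that maps the link to the affected video node):

	static int imx_media_link_notify(struct media_link *link, u32 flags,
					 unsigned int notification)
	{
		struct video_device *vfd = find_capture_vdev(link);

		if (vfd && notification == MEDIA_DEV_NOTIFY_PRE_LINK_CH &&
		    !(flags & MEDIA_LNK_FL_ENABLED)) {
			/*
			 * A link upstream of the capture node is about to be
			 * disabled, so the inherited controls may become
			 * stale: drop them and start over.
			 */
			v4l2_ctrl_handler_free(vfd->ctrl_handler);
			v4l2_ctrl_handler_init(vfd->ctrl_handler, 0);
		}

		return v4l2_pipeline_link_notify(link, flags, notification);
	}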
Steve
>
> In the imx-media driver, the function is called in link_setup when the link
> from a source pad that is attached to a capture video node is enabled.
> This is the last link that must be made to define the pipeline, so it
> is at this time that a complete list of subdevice controls can be
> gathered by walking the pipeline.
>
On 03/02/2017 03:48 PM, Steve Longerbeam wrote:
>
>
> On 03/02/2017 08:02 AM, Sakari Ailus wrote:
>> Hi Steve,
>>
>> On Wed, Feb 15, 2017 at 06:19:16PM -0800, Steve Longerbeam wrote:
>>> v4l2_pipeline_inherit_controls() will add the v4l2 controls from
>>> all subdev entities in a pipeline to a given video device.
>>>
>>> Signed-off-by: Steve Longerbeam <[email protected]>
>>> <snip>
>>>
>>> +int __v4l2_pipeline_inherit_controls(struct video_device *vfd,
>>> + struct media_entity *start_entity)
>>
>> I have a few concerns / questions:
>>
>> - What's the purpose of this patch? Why not access the sub-device node
>> directly?
>
>
> I don't really understand what you are trying to say.<snip>
>
Actually I think I understand what you mean now. Yes, the user can
always access a subdev's control directly from its /dev/v4l-subdevXX.
I'm only providing this feature as a convenience to the user, so that
all controls in a pipeline can be accessed from one place, i.e. the
main capture device node.
Steve
Hi Steve,
On Thu, Mar 02, 2017 at 06:12:43PM -0800, Steve Longerbeam wrote:
>
>
> On 03/02/2017 03:48 PM, Steve Longerbeam wrote:
> >
> >
> >On 03/02/2017 08:02 AM, Sakari Ailus wrote:
> >>Hi Steve,
> >>
> >>On Wed, Feb 15, 2017 at 06:19:16PM -0800, Steve Longerbeam wrote:
> >>>v4l2_pipeline_inherit_controls() will add the v4l2 controls from
> >>>all subdev entities in a pipeline to a given video device.
> >>>
> >>>Signed-off-by: Steve Longerbeam <[email protected]>
> >>> <snip>
> >>>
> >>>+int __v4l2_pipeline_inherit_controls(struct video_device *vfd,
> >>>+ struct media_entity *start_entity)
> >>
> >>I have a few concerns / questions:
> >>
> >>- What's the purpose of this patch? Why not access the sub-device node
> >> directly?
> >
> >
> >I don't really understand what you are trying to say.<snip>
> >
>
> Actually I think I understand what you mean now. Yes, the user can
> always access a subdev's control directly from its /dev/v4l-subdevXX.
> I'm only providing this feature as a convenience to the user, so that
> all controls in a pipeline can be accessed from one place, i.e. the
> main capture device node.
No other MC-based V4L2 driver does this. You'd be creating device-specific
behaviour that differs from what the rest of the drivers do. The purpose of
MC is to provide the user with knowledge of what devices are there, and the
V4L2 sub-devices interface is used to access them in this case.
It does matter where a control is implemented, too. If the pipeline contains
multiple sub-devices that implement the same control, only one of them may
be accessed. The driver calling the function (much less the function itself)
would not know which one of them should be ignored.
If you need such functionality, it should be implemented in the user space
instead.
--
Regards,
Sakari Ailus
e-mail: [email protected] XMPP: [email protected]
On 03/03/2017 11:17 AM, Sakari Ailus wrote:
> Hi Steve,
>
> On Thu, Mar 02, 2017 at 06:12:43PM -0800, Steve Longerbeam wrote:
>>
>>
>> On 03/02/2017 03:48 PM, Steve Longerbeam wrote:
>>>
>>>
>>> On 03/02/2017 08:02 AM, Sakari Ailus wrote:
>>>> Hi Steve,
>>>>
>>>> On Wed, Feb 15, 2017 at 06:19:16PM -0800, Steve Longerbeam wrote:
>>>>> v4l2_pipeline_inherit_controls() will add the v4l2 controls from
>>>>> all subdev entities in a pipeline to a given video device.
>>>>>
>>>>> Signed-off-by: Steve Longerbeam <[email protected]>
>>>>> <snip>
>>>>>
>>>>> +int __v4l2_pipeline_inherit_controls(struct video_device *vfd,
>>>>> + struct media_entity *start_entity)
>>>>
>>>> I have a few concerns / questions:
>>>>
>>>> - What's the purpose of this patch? Why not access the sub-device node
>>>> directly?
>>>
>>>
>>> I don't really understand what you are trying to say.<snip>
>>>
>>
>> Actually I think I understand what you mean now. Yes, the user can
>> always access a subdev's control directly from its /dev/v4l-subdevXX.
>> I'm only providing this feature as a convenience to the user, so that
>> all controls in a pipeline can be accessed from one place, i.e. the
>> main capture device node.
>
> No other MC-based V4L2 driver does this. You'd be creating device-specific
> behaviour that differs from what the rest of the drivers do. The purpose of
> MC is to provide the user with knowledge of what devices are there, and the
> V4L2 sub-devices interface is used to access them in this case.
Well, again, I don't mind removing this. As I said it is only a
convenience (although quite a nice one in my opinion). I'd like
to hear from others whether this is worth keeping though.
>
> It does matter where a control is implemented, too. If the pipeline contains
> multiple sub-devices that implement the same control, only one of them may
> be accessed. The driver calling the function (much less the function itself)
> would not know which one of them should be ignored.
Yes, the pipeline should not have any duplicate controls. In imx-media, no
pipeline that can be configured has duplicate controls.
Steve
>
> If you need such functionality, it should be implemented in the user space
> instead.
>
On Thu, Mar 02, 2017 at 06:02:57PM +0200, Sakari Ailus wrote:
> Hi Steve,
>
> On Wed, Feb 15, 2017 at 06:19:16PM -0800, Steve Longerbeam wrote:
> > v4l2_pipeline_inherit_controls() will add the v4l2 controls from
> > all subdev entities in a pipeline to a given video device.
> >
> > Signed-off-by: Steve Longerbeam <[email protected]>
> > <snip>
> >
> > +int __v4l2_pipeline_inherit_controls(struct video_device *vfd,
> > + struct media_entity *start_entity)
>
> I have a few concerns / questions:
>
> > - What's the purpose of this patch? Why not access the sub-device node
> directly?
What tools are in existence _today_ to provide access to these controls
via the sub-device nodes?
v4l-utils didn't, last time I looked - in fact, the only tool in v4l-utils
which is capable of accessing the subdevices is media-ctl, and that only
provides functionality for configuring the pipeline.
So, pointing people at vapourware userspace is really quite ridiculous.
The established way to control video capture is through the main video
capture device, not through the sub-devices. Yes, the controls are
exposed through sub-devices too, but that does not mean that is the
correct way to access them.
The v4l2 documentation (Documentation/media/kapi/v4l2-controls.rst)
even disagrees with your statements. That talks about control
inheritance from sub-devices to the main video device, and the core
v4l2 code provides _automatic_ support for this - see
v4l2_device_register_subdev():
/* This just returns 0 if either of the two args is NULL */
err = v4l2_ctrl_add_handler(v4l2_dev->ctrl_handler, sd->ctrl_handler, NULL);
which merges the subdev's controls into the main device's control
handler.
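In other words, a bridge driver gets the merged handler for free just
by setting both ctrl_handler pointers before registering its subdevs.
Schematically (error handling omitted, "mydev" is illustrative):

	struct mydev {
		struct v4l2_device v4l2_dev;
		struct v4l2_ctrl_handler hdl;
		struct video_device vfd;
	};

	v4l2_ctrl_handler_init(&dev->hdl, 8);
	dev->v4l2_dev.ctrl_handler = &dev->hdl;

	/* this is where sd->ctrl_handler gets merged into dev->hdl */
	v4l2_device_register_subdev(&dev->v4l2_dev, sd);

	/* the video node then exposes the merged handler to userspace */
	dev->vfd.v4l2_dev = &dev->v4l2_dev;
	dev->vfd.ctrl_handler = &dev->hdl;
	video_register_device(&dev->vfd, VFL_TYPE_GRABBER, -1);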
So, (a) I don't think Steve needs to add this code, and (b) I think
your statements about not inheriting controls go against the
documentation and API compatibility with _existing_ applications,
and ultimately hurts the user experience, since there's nothing
existing today to support what you're suggesting in userspace.
--
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.
On 03/03/2017 03:06 PM, Russell King - ARM Linux wrote:
> On Thu, Mar 02, 2017 at 06:02:57PM +0200, Sakari Ailus wrote:
>> Hi Steve,
>>
>> On Wed, Feb 15, 2017 at 06:19:16PM -0800, Steve Longerbeam wrote:
>>> v4l2_pipeline_inherit_controls() will add the v4l2 controls from
>>> all subdev entities in a pipeline to a given video device.
>>>
>>> Signed-off-by: Steve Longerbeam <[email protected]>
>>> <snip>
>>>
>>> +int __v4l2_pipeline_inherit_controls(struct video_device *vfd,
>>> + struct media_entity *start_entity)
>>
>> I have a few concerns / questions:
>>
>> - What's the purpose of this patch? Why not access the sub-device node
>> directly?
>
> What tools are in existence _today_ to provide access to these controls
> via the sub-device nodes?
>
> v4l-utils didn't, last time I looked - in fact, the only tool in v4l-utils
> which is capable of accessing the subdevices is media-ctl, and that only
> provides functionality for configuring the pipeline.
>
> So, pointing people at vapourware userspace is really quite ridiculous.
Hi Russell,
Yes, that's a big reason why I added this capability. The v4l2-ctl
tool won't accept subdev nodes, although Philipp Zabel has a quick hack
to get around this (ignoring the return code from VIDIOC_QUERYCAP).
>
> The established way to control video capture is through the main video
> capture device, not through the sub-devices. Yes, the controls are
> exposed through sub-devices too, but that does not mean that is the
> correct way to access them.
>
> The v4l2 documentation (Documentation/media/kapi/v4l2-controls.rst)
> even disagrees with your statements. That talks about control
> inheritance from sub-devices to the main video device, and the core
> v4l2 code provides _automatic_ support for this - see
> v4l2_device_register_subdev():
>
> /* This just returns 0 if either of the two args is NULL */
> err = v4l2_ctrl_add_handler(v4l2_dev->ctrl_handler, sd->ctrl_handler, NULL);
>
> which merges the subdev's controls into the main device's control
> handler.
Actually v4l2_dev->ctrl_handler is not of much use to me. This will
compose a list of controls from all registered subdevs, i.e. _all
possible controls_.
What v4l2_pipeline_inherit_controls() does is compose a list of
controls that are reachable and available in the currently configured
pipeline.
Steve
>
> So, (a) I don't think Steve needs to add this code, and (b) I think
> your statements about not inheriting controls go against the
> documentation and API compatibility with _existing_ applications,
> and ultimately hurts the user experience, since there's nothing
> existing today to support what you're suggesting in userspace.
>
Hi Russell,
On Fri, Mar 03, 2017 at 11:06:45PM +0000, Russell King - ARM Linux wrote:
> On Thu, Mar 02, 2017 at 06:02:57PM +0200, Sakari Ailus wrote:
> > Hi Steve,
> >
> > On Wed, Feb 15, 2017 at 06:19:16PM -0800, Steve Longerbeam wrote:
> > > v4l2_pipeline_inherit_controls() will add the v4l2 controls from
> > > all subdev entities in a pipeline to a given video device.
> > >
> > > Signed-off-by: Steve Longerbeam <[email protected]>
> > > <snip>
> > >
> > > +int __v4l2_pipeline_inherit_controls(struct video_device *vfd,
> > > + struct media_entity *start_entity)
> >
> > I have a few concerns / questions:
> >
> > - What's the purpose of this patch? Why not access the sub-device node
> > directly?
>
> What tools are in existence _today_ to provide access to these controls
> via the sub-device nodes?
yavta, for instance:
<URL:http://git.ideasonboard.org/yavta.git>
VIDIOC_QUERYCAP isn't supported on sub-devices and v4l2-ctl appears to be
checking for that. That check should be removed (with possible other
implications taken into account).
>
> v4l-utils didn't, last time I looked - in fact, the only tool in v4l-utils
> which is capable of accessing the subdevices is media-ctl, and that only
> provides functionality for configuring the pipeline.
>
> So, pointing people at vapourware userspace is really quite ridiculous.
Do bear in mind that there are other programs that can make use of these
interfaces. It's not just the test programs, or a test program you attempted
to use.
>
> The established way to control video capture is through the main video
> capture device, not through the sub-devices. Yes, the controls are
> exposed through sub-devices too, but that does not mean that is the
> correct way to access them.
It is. That's the very purpose of the sub-devices: to provide access to the
hardware independently of how the links are configured.
>
> The v4l2 documentation (Documentation/media/kapi/v4l2-controls.rst)
> even disagrees with your statements. That talks about control
> inheritance from sub-devices to the main video device, and the core
> v4l2 code provides _automatic_ support for this - see
> v4l2_device_register_subdev():
>
> /* This just returns 0 if either of the two args is NULL */
> err = v4l2_ctrl_add_handler(v4l2_dev->ctrl_handler, sd->ctrl_handler, NULL);
>
> which merges the subdev's controls into the main device's control
> handler.
That's done on a different kind of device: one that provides the plain V4L2
API to control the entire device. The V4L2 sub-device interface is used *in
kernel* as an interface to control sub-devices that do not need to be exposed
to user space.
Devices that have a complex pipeline and essentially require using the
Media controller interface to configure them are out of that scope.
--
Regards,
Sakari Ailus
e-mail: [email protected] XMPP: [email protected]
On 04/03/17 14:13, Sakari Ailus wrote:
> Hi Russell,
>
> On Fri, Mar 03, 2017 at 11:06:45PM +0000, Russell King - ARM Linux wrote:
>> On Thu, Mar 02, 2017 at 06:02:57PM +0200, Sakari Ailus wrote:
>>> Hi Steve,
>>>
>>> On Wed, Feb 15, 2017 at 06:19:16PM -0800, Steve Longerbeam wrote:
>>>> v4l2_pipeline_inherit_controls() will add the v4l2 controls from
>>>> all subdev entities in a pipeline to a given video device.
>>>>
>>>> Signed-off-by: Steve Longerbeam <[email protected]>
>>>> <snip>
>>>>
>>>> +int __v4l2_pipeline_inherit_controls(struct video_device *vfd,
>>>> + struct media_entity *start_entity)
>>>
>>> I have a few concerns / questions:
>>>
>>> - What's the purpose of this patch? Why not access the sub-device node
>>> directly?
>>
>> What tools are in existence _today_ to provide access to these controls
>> via the sub-device nodes?
>
> yavta, for instance:
>
> <URL:http://git.ideasonboard.org/yavta.git>
>
> VIDIOC_QUERYCAP isn't supported on sub-devices and v4l2-ctl appears to be
> checking for that. That check should be removed (with possible other
> implications taken into account).
No, the subdev API should get a similar QUERYCAP ioctl. There isn't a single
ioctl that is guaranteed to be available for all subdev devices. I've made
proposals for this in the past, and those have all been shot down.
Add that, and I'll add support for subdevs in v4l2-ctl.
>
>>
>> v4l-utils didn't, last time I looked - in fact, the only tool in v4l-utils
>> which is capable of accessing the subdevices is media-ctl, and that only
>> provides functionality for configuring the pipeline.
>>
>> So, pointing people at vapourware userspace is really quite ridiculous.
>
> Do bear in mind that there are other programs that can make use of these
> interfaces. It's not just the test programs, or a test program you attempted
> to use.
>
>>
>> The established way to control video capture is through the main video
>> capture device, not through the sub-devices. Yes, the controls are
>> exposed through sub-devices too, but that does not mean that is the
>> correct way to access them.
>
> It is. That's the very purpose of the sub-devices: to provide access to the
> hardware independently of how the links are configured.
>
>>
>> The v4l2 documentation (Documentation/media/kapi/v4l2-controls.rst)
>> even disagrees with your statements. That talks about control
>> inheritance from sub-devices to the main video device, and the core
>> v4l2 code provides _automatic_ support for this - see
>> v4l2_device_register_subdev():
>>
>> /* This just returns 0 if either of the two args is NULL */
>> err = v4l2_ctrl_add_handler(v4l2_dev->ctrl_handler, sd->ctrl_handler, NULL);
>>
>> which merges the subdev's controls into the main device's control
>> handler.
>
> That's done on a different kind of device: one that provides the plain V4L2
> API to control the entire device. The V4L2 sub-device interface is used *in
> kernel* as an interface to control sub-devices that do not need to be exposed
> to user space.
>
> Devices that have a complex pipeline and essentially require using the
> Media controller interface to configure them are out of that scope.
>
Way too much of how MC devices should be used exists only in the minds of
developers. There is a major lack of good, detailed documentation, utilities,
compliance tests (really needed!) and libv4l plugins.
Russell's comments are spot on and it is a thorn in my side that this still
hasn't been addressed.
I want to see if I can get time from my boss to work on this this summer, but
there is no guarantee.
The main reason this hasn't been a much bigger problem is that most end-users
make custom applications for this hardware. It makes sense: if you need full
control over everything, you make the application yourself; that's the whole point.
But there was always meant to be a layer (libv4l plugin) that could be used to
set up a 'default scenario' that existing applications could use, but that was
never enforced, sadly.
Anyway, regarding this specific patch and for this MC-aware driver: no, you
shouldn't inherit controls from subdevs. It defeats the purpose.
Regards,
Hans
On Fri, Mar 10, 2017 at 01:54:28PM +0100, Hans Verkuil wrote:
> But there was always meant to be a layer (libv4l plugin) that could be
> used to set up a 'default scenario' that existing applications could use,
> but that was never enforced, sadly.
However, there are other painful issues lurking in userspace, particularly
to do with the v4l libraries.
The idea that the v4l libraries should intercept the format negotiation
between the application and kernel is a particularly painful one - the
default gstreamer build detects the v4l libraries, and links against it.
That much is fine.
However, the problem comes when you're trying to use bayer formats. The
v4l libraries "helpfully" (or rather unhelpfully) intercept the format
negotiation, and decide that they'll invoke v4lconvert to convert the
bayer to RGB for you, whether you want them to do that or not.
v4lconvert may not be the most efficient way to convert, or even what
is desired (eg, you may want to receive the raw bayer image.) However,
since the v4l libraries/v4lconvert gives you no option but to have its
conversion forced into the pipeline, other options (such as using the
gstreamer neon-accelerated de-bayer plugin) aren't an option without
rebuilding gstreamer _without_ linking against the v4l libraries.
At that point, saying "this should be done in a libv4l plugin" becomes
total nonsense, because if you need to avoid libv4l due to its
stupidities, you don't get the benefit of subdevs, and it yet again
_forces_ people down the route of custom applications.
So, I really don't agree with pushing this into a userspace library
plugin - at least not with the current state there.
_At least_ the debayering in the v4l libraries needs to become optional.
--
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.
On 10/03/17 14:07, Russell King - ARM Linux wrote:
> On Fri, Mar 10, 2017 at 01:54:28PM +0100, Hans Verkuil wrote:
>> But there was always meant to be a layer (libv4l plugin) that could be
>> used to set up a 'default scenario' that existing applications could use,
>> but that was never enforced, sadly.
>
> However, there are other painful issues lurking in userspace, particularly
> to do with the v4l libraries.
>
> The idea that the v4l libraries should intercept the format negotiation
> between the application and kernel is a particularly painful one - the
> default gstreamer build detects the v4l libraries, and links against it.
> That much is fine.
>
> However, the problem comes when you're trying to use bayer formats. The
> v4l libraries "helpfully" (or rather unhelpfully) intercept the format
> negotiation, and decide that they'll invoke v4lconvert to convert the
> bayer to RGB for you, whether you want them to do that or not.
>
> v4lconvert may not be the most efficient way to convert, or even what
> is desired (eg, you may want to receive the raw bayer image.) However,
> since the v4l libraries/v4lconvert gives you no option but to have its
> conversion forced into the pipeline, other options (such as using the
> gstreamer neon-accelerated de-bayer plugin) aren't an option without
> rebuilding gstreamer _without_ linking against the v4l libraries.
>
> At that point, saying "this should be done in a libv4l plugin" becomes
> total nonsense, because if you need to avoid libv4l due to its
> stupidities, you don't get the benefit of subdevs, and it yet again
> _forces_ people down the route of custom applications.
>
> So, I really don't agree with pushing this into a userspace library
> plugin - at least not with the current state there.
>
> _At least_ the debayering in the v4l libraries needs to become optional.
>
I *thought* that when a plugin is used the format conversion code was disabled.
But I'm not sure.
The whole problem is that we still don't have a decent plugin for an MC driver.
There is one for the exynos4 floating around, but it's still not accepted.
Companies write the driver, but the plugin isn't really needed since their
customers won't use it anyway: they make their own embedded applications.
And none of the media core developers has the time to work on the docs,
utilities and libraries you need to make this all work cleanly and reliably.
As mentioned, I will try to get some time to work on this
later this year. Fingers crossed.
We also have a virtual MC driver floating around. I've pinged the author to see
if she can fix the last round of review comments and post a new version. Having
a virtual driver makes life much easier when writing docs, utilities, etc.,
since you don't need real hardware, which can be hard to obtain and run.
Again, I agree completely with you. But we don't have many core developers
who can do something like this, and it's even harder for them to find the time.
Solutions on a postcard...
BTW, Steve: this has nothing to do with your work, it's a problem in our subsystem.
Regards,
Hans
On Fri, Mar 10, 2017 at 02:22:29PM +0100, Hans Verkuil wrote:
> And none of the media core developers has the time to work on the docs,
> utilities and libraries you need to make this all work cleanly and reliably.
Well, talking about docs, and in connection with control inheritance,
this is already documented in at least three separate places:
Documentation/media/uapi/v4l/dev-subdev.rst:
Controls
========
...
Depending on the driver, those controls might also be exposed through
one (or several) V4L2 device nodes.
Documentation/media/kapi/v4l2-subdev.rst:
``VIDIOC_QUERYCTRL``,
``VIDIOC_QUERYMENU``,
``VIDIOC_G_CTRL``,
``VIDIOC_S_CTRL``,
``VIDIOC_G_EXT_CTRLS``,
``VIDIOC_S_EXT_CTRLS`` and
``VIDIOC_TRY_EXT_CTRLS``:
The controls ioctls are identical to the ones defined in V4L2. They
behave identically, with the only exception that they deal only with
controls implemented in the sub-device. Depending on the driver, those
controls can be also be accessed through one (or several) V4L2 device
nodes.
Then there's Documentation/media/kapi/v4l2-controls.rst, which gives a
step by step approach to the main video device inheriting controls from
its subdevices, and it says:
Inheriting Controls
-------------------
When a sub-device is registered with a V4L2 driver by calling
v4l2_device_register_subdev() and the ctrl_handler fields of both v4l2_subdev
and v4l2_device are set, then the controls of the subdev will become
automatically available in the V4L2 driver as well. If the subdev driver
contains controls that already exist in the V4L2 driver, then those will be
skipped (so a V4L2 driver can always override a subdev control).
What happens here is that v4l2_device_register_subdev() calls
v4l2_ctrl_add_handler() adding the controls of the subdev to the controls
of v4l2_device.
So, either the docs are wrong, or the advice being mentioned in emails
about subdev control inheritance is misleading. Either way, the two are
currently inconsistent.
As I've already mentioned, from talking about this with Mauro, it seems
Mauro is in agreement with permitting the control inheritance... I wish
Mauro would comment for himself, as I can't quote our private discussion
on the subject.
Right now, my view is that v4l2 is currently being screwed up by people
with different opinions - there is no unified consensus on how any of
this stuff is supposed to work, everyone is pulling in different
directions. That needs solving _really_ quickly, so I suggest that
v4l2 people urgently talk to each other and thrash out some of the
issues that Steve's patch set has brought up, and settle on a way
forward, rather than what is seemingly happening today - which is
everyone working in isolation of everyone else with their own bias on
how things should be done.
--
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.
On 10/03/17 15:01, Russell King - ARM Linux wrote:
> On Fri, Mar 10, 2017 at 02:22:29PM +0100, Hans Verkuil wrote:
>> And none of the media core developers has the time to work on the docs,
>> utilities and libraries you need to make this all work cleanly and reliably.
>
> Well, talking about docs, and in connection with control inheritance,
> this is already documented in at least three separate places:
>
> Documentation/media/uapi/v4l/dev-subdev.rst:
>
> Controls
> ========
> ...
> Depending on the driver, those controls might also be exposed through
> one (or several) V4L2 device nodes.
>
> Documentation/media/kapi/v4l2-subdev.rst:
>
> ``VIDIOC_QUERYCTRL``,
> ``VIDIOC_QUERYMENU``,
> ``VIDIOC_G_CTRL``,
> ``VIDIOC_S_CTRL``,
> ``VIDIOC_G_EXT_CTRLS``,
> ``VIDIOC_S_EXT_CTRLS`` and
> ``VIDIOC_TRY_EXT_CTRLS``:
>
> The controls ioctls are identical to the ones defined in V4L2. They
> behave identically, with the only exception that they deal only with
> controls implemented in the sub-device. Depending on the driver, those
> controls can be also be accessed through one (or several) V4L2 device
> nodes.
>
> Then there's Documentation/media/kapi/v4l2-controls.rst, which gives a
> step by step approach to the main video device inheriting controls from
> its subdevices, and it says:
>
> Inheriting Controls
> -------------------
>
> When a sub-device is registered with a V4L2 driver by calling
> v4l2_device_register_subdev() and the ctrl_handler fields of both v4l2_subdev
> and v4l2_device are set, then the controls of the subdev will become
> automatically available in the V4L2 driver as well. If the subdev driver
> contains controls that already exist in the V4L2 driver, then those will be
> skipped (so a V4L2 driver can always override a subdev control).
>
> What happens here is that v4l2_device_register_subdev() calls
> v4l2_ctrl_add_handler() adding the controls of the subdev to the controls
> of v4l2_device.
>
> So, either the docs are wrong, or the advice being mentioned in emails
> about subdev control inheritance is misleading. Either way, the two are
> currently inconsistent.
These docs were written for non-MC drivers, and for those the documentation
is correct. Unfortunately, this was never updated for MC drivers.
> As I've already mentioned, from talking about this with Mauro, it seems
> Mauro is in agreement with permitting the control inheritance... I wish
> Mauro would comment for himself, as I can't quote our private discussion
> on the subject.
I can't comment either, not having seen his mail and reasoning.
> Right now, my view is that v4l2 is currently being screwed up by people
> with different opinions - there is no unified consensus on how any of
> this stuff is supposed to work, everyone is pulling in different
> directions. That needs solving _really_ quickly, so I suggest that
> v4l2 people urgently talk to each other and thrash out some of the
> issues that Steve's patch set has brought up, and settle on a way
> forward, rather than what is seemingly happening today - which is
> everyone working in isolation of everyone else with their own bias on
> how things should be done.
The simple fact is that to my knowledge no other MC applications inherit
controls from subdevs. Suddenly doing something different here seems very
wrong to me and needs very good reasons.
But yes, the current situation sucks. Yelling doesn't help though if nobody
has time and there are several other high-prio projects that need our
attention as well.
If you know a good kernel developer who has a few months to spare, please
point him/her in our direction!
Regards,
Hans
On Fri, 10 Mar 2017 13:54:28 +0100,
Hans Verkuil <[email protected]> wrote:
> > Devices that have a complex pipeline and essentially require using the
> > Media controller interface to configure them are out of that scope.
> >
>
> Way too much of how MC devices should be used exists only in the minds of
> developers. There is a major lack of good, detailed documentation, utilities,
> compliance tests (really needed!) and libv4l plugins.
Unfortunately, we merged incomplete MC support into the kernel. We knew
all the problems with MC-based drivers and V4L2 applications by the time
it was developed, and we asked the Nokia developers (Nokia was sponsoring
MC development at that time) to work on a solution to allow standard V4L2
applications to work with MC-based boards.
Unfortunately, we took the decision to merge MC without that, because
Nokia was giving up on Linux development, and we didn't want to lose the
2 years of discussions and work around it, as Nokia employees were leaving
the company. Also, at that time, there were already some patches floating
around adding backward support via libv4l. Unfortunately, those patches
were never finished.
The net result is that MC was merged with some huge gaps, including
the lack of a proper solution for a generic V4L2 program to work
with V4L2 devices that use the subdev API.
That was not so bad back then, as MC was used only on cell phones
that ran custom-made applications.
The reality has changed: we now have lots of low-cost SoC-based
boards, used for all sorts of purposes. So we need a quick solution
for it.
In other words, while that would be acceptable to support special apps
on truly embedded systems, it is *not OK* for general-purpose SoC
hardware[1].
[1] I'm calling "general-purpose SoC hardware" those ARM boards,
like the Raspberry Pi, that are shipped to the masses and used by a wide
range of hobbyists and other people who just want to run Linux on
ARM. It is possible to buy such boards for a very cheap price,
so they are used not only in special projects, where a custom-made
application could be interesting, but also by a lot of
users who just want to run Linux on a low-cost ARM board while
continuing to use standard V4L2 apps, like "camorama".
That's perhaps one of the reasons why it took a long time for us to
start receiving drivers upstream for such hardware: it is quite
intimidating and not logical to require developers to implement
two complex APIs (MC, subdev) in their drivers for
hardware whose users mostly won't care. From the user's perspective,
being able to run generic applications like "camorama" and
"zbar" is all they want.
In summary, I'm pretty sure we need to support standard V4L2
applications on boards like Raspberry Pi and those low-cost
SoC-based boards that are shipped to end users.
> Anyway, regarding this specific patch and for this MC-aware driver: no, you
> shouldn't inherit controls from subdevs. It defeats the purpose.
Sorry, but I don't agree with that. The subdev API is an optional API
(and even the MC API can be optional).
I see the rationale for using the MC and subdev APIs on cell phones,
ISV and other embedded hardware, as it allows fine-tuning
the driver's support to provide the required quality for
certain custom-made applications. But on general SoC hardware,
supporting standard V4L2 applications is a necessity.
Ok, perhaps supporting both subdev API and V4L2 API at the same
time doesn't make much sense. We could disable one in favor of the
other, either at compilation time or at runtime.
This way, if the subdev API is disabled, the driver will be
functional for V4L2-based applications that support neither the
MC nor the subdev APIs.
> As mentioned, I will try to get some time to work on this
> later this year. Fingers crossed.
That will be good, and once we have a solution that works, we can
work on cleaning up the code. But until then, drivers for ARM-based boards
sold to end consumers should work out of the box with standard V4L2 apps.
While we don't have that, I'm OK with merging patches adding such support
upstream.
Thanks,
Mauro
Hi Russell,
On Fri, 10 Mar 2017 13:07:33 +0000,
Russell King - ARM Linux <[email protected]> wrote:
> The idea that the v4l libraries should intercept the format negotiation
> between the application and kernel is a particularly painful one - the
> default gstreamer build detects the v4l libraries, and links against it.
> That much is fine.
>
> However, the problem comes when you're trying to use bayer formats. The
> v4l libraries "helpfully" (or rather unhelpfully) intercept the format
> negotiation, and decide that they'll invoke v4lconvert to convert the
> bayer to RGB for you, whether you want them to do that or not.
>
> v4lconvert may not be the most efficient way to convert, or even what
> is desired (eg, you may want to receive the raw bayer image.) However,
> since the v4l libraries/v4lconvert gives you no option but to have its
> conversion forced into the pipeline, other options (such as using the
> gstreamer neon-accelerated de-bayer plugin) aren't an option
That's not true. There is a special flag, used only for libv4l2-emulated
formats, that indicates when a video format is handled via v4lconvert:
* - ``V4L2_FMT_FLAG_EMULATED``
- 0x0002
- This format is not native to the device but emulated through
software (usually libv4l2), where possible try to use a native
format instead for better performance.
Using this flag, if the application supports a video format directly
supported by the hardware, it can use its own video format decoder.
If not, it is still possible to use the V4L2 hardware by using
v4lconvert.
Unfortunately, very few applications currently check it.
I wrote a patch for zbar (a multi-format barcode reader) in the past,
adding logic there that gives high priority to hardware formats
and low priority to emulated ones:
https://lists.fedoraproject.org/pipermail/scm-commits/2010-December/537428.html
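The gist of that logic, as a sketch in C (error handling trimmed;
pick_pixelformat() is just an illustrative name):

	#include <string.h>
	#include <libv4l2.h>
	#include <linux/videodev2.h>

	/*
	 * Return the index of the first native format; fall back to an
	 * emulated one only if nothing else is available, -1 if none.
	 */
	static int pick_pixelformat(int fd)
	{
		struct v4l2_fmtdesc fmt;
		int i, fallback = -1;

		for (i = 0; ; i++) {
			memset(&fmt, 0, sizeof(fmt));
			fmt.index = i;
			fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
			if (v4l2_ioctl(fd, VIDIOC_ENUM_FMT, &fmt) < 0)
				break;
			if (!(fmt.flags & V4L2_FMT_FLAG_EMULATED))
				return i;	/* hardware format: take it */
			if (fallback < 0)
				fallback = i;	/* emulated: low priority */
		}
		return fallback;
	}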
> without
> rebuilding gstreamer _without_ linking against the v4l libraries.
I guess it wouldn't be complex to add similar logic to
gstreamer.
AFAICT, there's another problem that would prevent making
libv4l the default in gstreamer: right now, libv4l doesn't support
DMABUF. As gstreamer is being used on embedded hardware, I'd say
that DMABUF support should be the default there.
Thanks,
Mauro
On Fri, 10 Mar 2017 15:20:48 +0100,
Hans Verkuil <[email protected]> wrote:
>
> > As I've already mentioned, from talking about this with Mauro, it seems
> > Mauro is in agreement with permitting the control inheritance... I wish
> > Mauro would comment for himself, as I can't quote our private discussion
> > on the subject.
>
> I can't comment either, not having seen his mail and reasoning.
The rationale is that we should support the simplest use cases first.
In the case of the first MC-based driver (and several subsequent
ones), the simplest use case required MC, as it was meant to support
a custom-made sophisticated application that required fine control
on each component of the pipeline and to allow their advanced
proprietary AAA userspace-based algorithms to work.
That's not true, for example, for the UVC driver. There, MC
is optional, as it should be.
> > Right now, my view is that v4l2 is currently being screwed up by people
> > with different opinions - there is no unified concensus on how any of
> > this stuff is supposed to work, everyone is pulling in different
> > directions. That needs solving _really_ quickly, so I suggest that
> > v4l2 people urgently talk to each other and thrash out some of the
> > issues that Steve's patch set has brought up, and settle on a way
> > forward, rather than what is seemingly happening today - which is
> > everyone working in isolation of everyone else with their own bias on
> > how things should be done.
>
> The simple fact is that to my knowledge no other MC applications inherit
> controls from subdevs. Suddenly doing something different here seems very
> wrong to me and needs very good reasons.
That's because it was not needed before, as other subdev-based drivers
are meant to be used only in complex scenarios with custom-made apps.
Thanks,
Mauro
On Fri, Mar 10, 2017 at 12:26:34PM -0300, Mauro Carvalho Chehab wrote:
> Hi Russell,
>
> On Fri, 10 Mar 2017 13:07:33 +0000,
> Russell King - ARM Linux <[email protected]> wrote:
>
> > The idea that the v4l libraries should intercept the format negotiation
> > between the application and kernel is a particularly painful one - the
> > default gstreamer build detects the v4l libraries, and links against it.
> > That much is fine.
> >
> > However, the problem comes when you're trying to use bayer formats. The
> > v4l libraries "helpfully" (or rather unhelpfully) intercept the format
> > negotiation, and decide that they'll invoke v4lconvert to convert the
> > bayer to RGB for you, whether you want them to do that or not.
> >
> > v4lconvert may not be the most efficient way to convert, or even what
> > is desired (eg, you may want to receive the raw bayer image.) However,
> > since the v4l libraries/v4lconvert gives you no option but to have its
> > conversion forced into the pipeline, other options (such as using the
> > gstreamer neon-accelerated de-bayer plugin) aren't an option
>
> That's not true. There is a special flag, used only for libv4l2-emulated
> formats, that indicates when a video format is handled via v4lconvert:
I'm afraid that my statement comes from trying to use gstreamer with
libv4l2 and _not_ being able to use the 8-bit bayer formats there at
all - they are simply not passed across to the application through
libv4l2/v4lconvert.
Instead, the formats that are passed across are the emulated formats.
As I said above, that forces applications to use only the v4lconvert
formats, the raw formats are not available.
So, the presence or absence of V4L2_FMT_FLAG_EMULATED is quite
meaningless if you can't even enumerate the non-converted formats.
The problem comes from the "always needs conversion" stuff in
v4lconvert coupled with the way this subdev stuff works - since it
requires manual configuration of all the pads within the kernel
media pipeline, the kernel ends up only advertising _one_ format
to userspace - in my case, that's RGGB8.
When v4lconvert_create_with_dev_ops() enumerates the formats from
the kernel, it gets only RGGB8. That causes always_needs_conversion
in there to remain true, so the special v4l control which enables/
disables conversion gets created with a default value of "true".
The RGGB8 bit is also set in data->supported_src_formats.
This causes v4lconvert_supported_dst_fmt_only() to return true.
What this all means is that v4lconvert_enum_fmt() will _not_ return
any of the kernel formats, only the faked formats.
Ergo, the RGGB8 format from the kernel is completely hidden from the
application, and only the emulated format is made available. As I
said above, this forces v4lconvert's debayering on the application,
whether you want it or not.
In the gstreamer case, it knows nothing about this special control,
which means that trying to use this gstreamer pipeline:
$ gst-launch-1.0 v4l2src device=/dev/video6 ! bayer2rgbneon ! xvimagesink
is completely impossible without first rebuilding gstreamer _without_
libv4l support. Build gstreamer without libv4l support, and the above
works.
Enabling debug output in gstreamer's v4l2src plugin confirms that
the kernel's bayer format are totally hidden from gstreamer when
linked with libv4l2, but are present when it isn't linked with
libv4l2.
--
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.
On Fri, Mar 10, 2017 at 03:57:09PM +0000, Russell King - ARM Linux wrote:
> Enabling debug output in gstreamer's v4l2src plugin confirms that
> the kernel's bayer format are totally hidden from gstreamer when
> linked with libv4l2, but are present when it isn't linked with
> libv4l2.
Here's the information to back up my claims:
root@hbi2ex:~# v4l2-ctl -d 6 --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
Index : 0
Type : Video Capture
Pixel Format: 'RGGB'
Name : 8-bit Bayer RGRG/GBGB
root@hbi2ex:~# DISPLAY=:0 GST_DEBUG_NO_COLOR=1 GST_DEBUG=v4l2:9 gst-launch-1.0 v4l2src device=/dev/video6 ! bayer2rgbneon ! xvimagesink > gst-v4l2-1.log 2>&1
root@hbi2ex:~# cut -b65- gst-v4l2-1.log|less
v4l2_calls.c:519:gst_v4l2_open:<v4l2src0> Trying to open device /dev/video6
v4l2_calls.c:69:gst_v4l2_get_capabilities:<v4l2src0> getting capabilities
v4l2_calls.c:77:gst_v4l2_get_capabilities:<v4l2src0> driver: 'imx-media-camif'
v4l2_calls.c:78:gst_v4l2_get_capabilities:<v4l2src0> card: 'imx-media-camif'
v4l2_calls.c:79:gst_v4l2_get_capabilities:<v4l2src0> bus_info: ''
v4l2_calls.c:80:gst_v4l2_get_capabilities:<v4l2src0> version: 00040a00
v4l2_calls.c:81:gst_v4l2_get_capabilities:<v4l2src0> capabilites: 85200001
...
v4l2_calls.c:258:gst_v4l2_fill_lists:<v4l2src0> controls+menus
v4l2_calls.c:278:gst_v4l2_fill_lists:<v4l2src0> checking control 00000000
v4l2_calls.c:319:gst_v4l2_fill_lists:<v4l2src0> starting control class 'User Controls'
v4l2_calls.c:278:gst_v4l2_fill_lists:<v4l2src0> checking control 00980001
v4l2_calls.c:389:gst_v4l2_fill_lists:<v4l2src0> Adding ControlID white_balance_automatic (98090c)
v4l2_calls.c:278:gst_v4l2_fill_lists:<v4l2src0> checking control 0098090c
v4l2_calls.c:389:gst_v4l2_fill_lists:<v4l2src0> Adding ControlID gamma (980910)
v4l2_calls.c:278:gst_v4l2_fill_lists:<v4l2src0> checking control 00980910
v4l2_calls.c:389:gst_v4l2_fill_lists:<v4l2src0> Adding ControlID gain (980913)
v4l2_calls.c:278:gst_v4l2_fill_lists:<v4l2src0> checking control 00980913
v4l2_calls.c:278:gst_v4l2_fill_lists:<v4l2src0> checking control 00980914
v4l2_calls.c:278:gst_v4l2_fill_lists:<v4l2src0> checking control 00980915
v4l2_calls.c:319:gst_v4l2_fill_lists:<v4l2src0> starting control class 'Camera Controls'
v4l2_calls.c:278:gst_v4l2_fill_lists:<v4l2src0> checking control 009a0001
v4l2_calls.c:382:gst_v4l2_fill_lists:<v4l2src0> ControlID exposure_time_absolute (9a0902) unhandled, FIXME
v4l2_calls.c:278:gst_v4l2_fill_lists:<v4l2src0> checking control 009a0902
v4l2_calls.c:319:gst_v4l2_fill_lists:<v4l2src0> starting control class 'Image Source Controls'
v4l2_calls.c:278:gst_v4l2_fill_lists:<v4l2src0> checking control 009e0001
v4l2_calls.c:382:gst_v4l2_fill_lists:<v4l2src0> ControlID vertical_blanking (9e0901) unhandled, FIXME
v4l2_calls.c:278:gst_v4l2_fill_lists:<v4l2src0> checking control 009e0901
v4l2_calls.c:382:gst_v4l2_fill_lists:<v4l2src0> ControlID horizontal_blanking (9e0902) unhandled, FIXME
v4l2_calls.c:278:gst_v4l2_fill_lists:<v4l2src0> checking control 009e0902
v4l2_calls.c:382:gst_v4l2_fill_lists:<v4l2src0> ControlID analogue_gain (9e0903) unhandled, FIXME
v4l2_calls.c:278:gst_v4l2_fill_lists:<v4l2src0> checking control 009e0903
v4l2_calls.c:382:gst_v4l2_fill_lists:<v4l2src0> ControlID red_pixel_value (9e0904) unhandled, FIXME
v4l2_calls.c:278:gst_v4l2_fill_lists:<v4l2src0> checking control 009e0904
v4l2_calls.c:382:gst_v4l2_fill_lists:<v4l2src0> ControlID green_red_pixel_value (9e0905) unhandled, FIXME
v4l2_calls.c:278:gst_v4l2_fill_lists:<v4l2src0> checking control 009e0905
v4l2_calls.c:382:gst_v4l2_fill_lists:<v4l2src0> ControlID blue_pixel_value (9e0906) unhandled, FIXME
v4l2_calls.c:278:gst_v4l2_fill_lists:<v4l2src0> checking control 009e0906
v4l2_calls.c:382:gst_v4l2_fill_lists:<v4l2src0> ControlID green_blue_pixel_value (9e0907) unhandled, FIXME
v4l2_calls.c:278:gst_v4l2_fill_lists:<v4l2src0> checking control 009e0907
v4l2_calls.c:319:gst_v4l2_fill_lists:<v4l2src0> starting control class 'Image Processing Controls'
v4l2_calls.c:278:gst_v4l2_fill_lists:<v4l2src0> checking control 009f0001
v4l2_calls.c:340:gst_v4l2_fill_lists:<v4l2src0> Control type for 'Pixel Rate' not suppored for extra controls.
v4l2_calls.c:382:gst_v4l2_fill_lists:<v4l2src0> ControlID Pixel Rate (9f0902) unhandled, FIXME
v4l2_calls.c:278:gst_v4l2_fill_lists:<v4l2src0> checking control 009f0902
v4l2_calls.c:382:gst_v4l2_fill_lists:<v4l2src0> ControlID test_pattern (9f0903) unhandled, FIXME
v4l2_calls.c:278:gst_v4l2_fill_lists:<v4l2src0> checking control 009f0903
v4l2_calls.c:284:gst_v4l2_fill_lists:<v4l2src0> controls finished
v4l2_calls.c:451:gst_v4l2_fill_lists:<v4l2src0> done
v4l2_calls.c:587:gst_v4l2_open:<v4l2src0> Opened device 'imx-media-camif' (/dev/video6) successfully
gstv4l2object.c:804:gst_v4l2_set_defaults:<v4l2src0> tv_norm=0x0, norm=(nil)
v4l2_calls.c:734:gst_v4l2_get_norm:<v4l2src0> getting norm
v4l2_calls.c:1021:gst_v4l2_get_input:<v4l2src0> trying to get input
v4l2_calls.c:1031:gst_v4l2_get_input:<v4l2src0> input: 0
gstv4l2object.c:1106:gst_v4l2_object_fill_format_list:<v4l2src0> getting src format enumerations
gstv4l2object.c:1124:gst_v4l2_object_fill_format_list:<v4l2src0> index: 0
gstv4l2object.c:1125:gst_v4l2_object_fill_format_list:<v4l2src0> type: 1
gstv4l2object.c:1126:gst_v4l2_object_fill_format_list:<v4l2src0> flags: 00000002
gstv4l2object.c:1128:gst_v4l2_object_fill_format_list:<v4l2src0> description: 'RGB3'
gstv4l2object.c:1130:gst_v4l2_object_fill_format_list:<v4l2src0> pixelformat: RGB3
gstv4l2object.c:1124:gst_v4l2_object_fill_format_list:<v4l2src0> index: 1
gstv4l2object.c:1125:gst_v4l2_object_fill_format_list:<v4l2src0> type: 1
gstv4l2object.c:1126:gst_v4l2_object_fill_format_list:<v4l2src0> flags: 00000002
gstv4l2object.c:1128:gst_v4l2_object_fill_format_list:<v4l2src0> description: 'BGR3'
gstv4l2object.c:1130:gst_v4l2_object_fill_format_list:<v4l2src0> pixelformat: BGR3
gstv4l2object.c:1124:gst_v4l2_object_fill_format_list:<v4l2src0> index: 2
gstv4l2object.c:1125:gst_v4l2_object_fill_format_list:<v4l2src0> type: 1
gstv4l2object.c:1126:gst_v4l2_object_fill_format_list:<v4l2src0> flags: 00000002
gstv4l2object.c:1128:gst_v4l2_object_fill_format_list:<v4l2src0> description: 'YU12'
gstv4l2object.c:1130:gst_v4l2_object_fill_format_list:<v4l2src0> pixelformat: YU12
gstv4l2object.c:1124:gst_v4l2_object_fill_format_list:<v4l2src0> index: 3
gstv4l2object.c:1125:gst_v4l2_object_fill_format_list:<v4l2src0> type: 1
gstv4l2object.c:1126:gst_v4l2_object_fill_format_list:<v4l2src0> flags: 00000002
gstv4l2object.c:1128:gst_v4l2_object_fill_format_list:<v4l2src0> description: 'YV12'
gstv4l2object.c:1130:gst_v4l2_object_fill_format_list:<v4l2src0> pixelformat: YV12
gstv4l2object.c:1143:gst_v4l2_object_fill_format_list:<v4l2src0> got 4 format(s):
gstv4l2object.c:1149:gst_v4l2_object_fill_format_list:<v4l2src0> YU12 (emulated)
gstv4l2object.c:1149:gst_v4l2_object_fill_format_list:<v4l2src0> YV12 (emulated)
gstv4l2object.c:1149:gst_v4l2_object_fill_format_list:<v4l2src0> BGR3 (emulated)
gstv4l2object.c:1149:gst_v4l2_object_fill_format_list:<v4l2src0> RGB3 (emulated)
As you can see from this, the RGGB bayer format advertised from the
kernel is not listed - only the four emulated formats provided by
v4lconvert are listed, so the application has _no_ choice but to use
v4lconvert's RGGB conversion.
The result is that the above pipeline fails:
0:00:00.345739030 2794 0x3ade60 DEBUG v4l2 gstv4l2object.c:3812:gst_v4l2_object_get_caps:<v4l2src0> ret: video/x-raw, format=(string)I420, framerate=(fraction)[ 0/1, 2147483647/1 ], width=(int)816, height=(int)616, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)1/1; video/x-raw, format=(string)YV12, framerate=(fraction)[ 0/1, 2147483647/1 ], width=(int)816, height=(int)616, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)1/1; video/x-raw, format=(string)BGR, framerate=(fraction)[ 0/1, 2147483647/1 ], width=(int)816, height=(int)616, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)1/1; video/x-raw, format=(string)RGB, framerate=(fraction)[ 0/1, 2147483647/1 ], width=(int)816, height=(int)616, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)1/1
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data flow error.
Additional debug info:
gstbasesrc.c(2948): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming task paused, reason not-negotiated (-4)
as the v4l2src element is offering an I420-formatted buffer to the
gstreamer bayer converter, which obviously objects.
Rebuilding without libv4l2 linked results in gstreamer working.
Using a kernel driver which exposes some formats that libv4lconvert
_doesn't_ need to convert in addition to bayer _also_ works.
The only case where this fails is where the kernel device only
advertises formats where libv4lconvert is in "we must always convert"
mode, whereupon the unconverted formats are completely hidden from
the application.
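For anyone who wants to reproduce this without gstreamer, a minimal
enumeration loop is enough. This is a sketch (the device path matches
my setup above): calling the kernel's ioctl() directly lists RGGB,
while routing the same calls through v4l2_open()/v4l2_ioctl() from
libv4l2.h shows only the four emulated formats:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
	struct v4l2_fmtdesc fmt;
	int fd = open("/dev/video6", O_RDWR);

	if (fd < 0)
		return 1;

	for (int i = 0; ; i++) {
		memset(&fmt, 0, sizeof(fmt));
		fmt.index = i;
		fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
		if (ioctl(fd, VIDIOC_ENUM_FMT, &fmt) < 0)
			break;
		/* print the fourcc and whether libv4l faked the format */
		printf("%c%c%c%c%s\n",
		       fmt.pixelformat & 0xff,
		       (fmt.pixelformat >> 8) & 0xff,
		       (fmt.pixelformat >> 16) & 0xff,
		       (fmt.pixelformat >> 24) & 0xff,
		       (fmt.flags & V4L2_FMT_FLAG_EMULATED) ?
				" (emulated)" : "");
	}
	return 0;
}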
--
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.
Em Fri, 10 Mar 2017 15:57:09 +0000
Russell King - ARM Linux <[email protected]> escreveu:
...
> When v4lconvert_create_with_dev_ops() enumerates the formats from
> the kernel, it gets only RGGB8. That causes always_needs_conversion
> in there to remain true, so the special v4l control which enables/
> disables conversion gets created with a default value of "true".
> The RGGB8 bit is also set in data->supported_src_formats.
>
> This causes v4lconvert_supported_dst_fmt_only() to return true.
>
> What this all means is that v4lconvert_enum_fmt() will _not_ return
> any of the kernel formats, only the faked formats.
>
> Ergo, the RGGB8 format from the kernel is completely hidden from the
> application, and only the emulated format is made available. As I
> said above, this forces v4lconvert's debayering on the application,
> whether you want it or not.
...
> Enabling debug output in gstreamer's v4l2src plugin confirms that
> the kernel's bayer formats are totally hidden from gstreamer when
> linked with libv4l2, but are present when it isn't linked with
> libv4l2.
Argh! That is indeed a bug in libv4l (and maybe in gstreamer).
I guess the always_needs_conversion logic was meant to be used for
really odd proprietary formats, e.g.:
/* Vendor-specific formats */
#define V4L2_PIX_FMT_CPIA1 v4l2_fourcc('C', 'P', 'I', 'A') /* cpia1 YUV */
#define V4L2_PIX_FMT_WNVA v4l2_fourcc('W', 'N', 'V', 'A') /* Winnov hw compress */
#define V4L2_PIX_FMT_SN9C10X v4l2_fourcc('S', '9', '1', '0') /* SN9C10x compression */
...
I suspect that nobody uses libv4l2 with MC-based V4L2 devices. That's
likely why nobody reported this bug before (that I know of).
In any case, for non-proprietary formats, the default should be to
always offer both the emulated format and the original one.
I suspect that the enclosed patch should fix the issue with bayer formats.
>
Thanks,
Mauro
[PATCH RFC] libv4lconvert: by default, offer the original format to the client
Applications should have the right to decide between using a
libv4lconvert emulated format and implementing the decoding themselves,
as this may have significant performance impact.
So, change the default to always show both formats.
Also change the default for Bayer-encoded formats, as userspace
will likely want to handle them directly.
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
diff --git a/lib/libv4lconvert/libv4lconvert.c b/lib/libv4lconvert/libv4lconvert.c
index da718918b030..2e5458fa420d 100644
--- a/lib/libv4lconvert/libv4lconvert.c
+++ b/lib/libv4lconvert/libv4lconvert.c
@@ -118,10 +118,10 @@ static const struct v4lconvert_pixfmt supported_src_pixfmts[] = {
{ V4L2_PIX_FMT_OV511, 0, 7, 7, 1 },
{ V4L2_PIX_FMT_OV518, 0, 7, 7, 1 },
/* uncompressed bayer */
- { V4L2_PIX_FMT_SBGGR8, 8, 8, 8, 1 },
- { V4L2_PIX_FMT_SGBRG8, 8, 8, 8, 1 },
- { V4L2_PIX_FMT_SGRBG8, 8, 8, 8, 1 },
- { V4L2_PIX_FMT_SRGGB8, 8, 8, 8, 1 },
+ { V4L2_PIX_FMT_SBGGR8, 8, 8, 8, 0 },
+ { V4L2_PIX_FMT_SGBRG8, 8, 8, 8, 0 },
+ { V4L2_PIX_FMT_SGRBG8, 8, 8, 8, 0 },
+ { V4L2_PIX_FMT_SRGGB8, 8, 8, 8, 0 },
{ V4L2_PIX_FMT_STV0680, 8, 8, 8, 1 },
/* compressed bayer */
{ V4L2_PIX_FMT_SPCA561, 0, 9, 9, 1 },
@@ -178,7 +178,7 @@ struct v4lconvert_data *v4lconvert_create_with_dev_ops(int fd, void *dev_ops_pri
/* This keeps tracks of devices which have only formats for which apps
most likely will need conversion and we can thus safely add software
processing controls without a performance impact. */
- int always_needs_conversion = 1;
+ int always_needs_conversion = 0;
if (!data) {
fprintf(stderr, "libv4lconvert: error: out of memory!\n");
@@ -208,8 +208,8 @@ struct v4lconvert_data *v4lconvert_create_with_dev_ops(int fd, void *dev_ops_pri
if (j < ARRAY_SIZE(supported_src_pixfmts)) {
data->supported_src_formats |= 1ULL << j;
v4lconvert_get_framesizes(data, fmt.pixelformat, j);
- if (!supported_src_pixfmts[j].needs_conversion)
- always_needs_conversion = 0;
+ if (supported_src_pixfmts[j].needs_conversion)
+ always_needs_conversion = 1;
} else
always_needs_conversion = 0;
}
Hi!
...
> I suspect that the enclosed patch should fix the issue with bayer formats.
...
> @@ -178,7 +178,7 @@ struct v4lconvert_data *v4lconvert_create_with_dev_ops(int fd, void *dev_ops_pri
> /* This keeps tracks of devices which have only formats for which apps
> most likely will need conversion and we can thus safely add software
> processing controls without a performance impact. */
> - int always_needs_conversion = 1;
> + int always_needs_conversion = 0;
>
> if (!data) {
> fprintf(stderr, "libv4lconvert: error: out of memory!\n");
> @@ -208,8 +208,8 @@ struct v4lconvert_data *v4lconvert_create_with_dev_ops(int fd, void *dev_ops_pri
> if (j < ARRAY_SIZE(supported_src_pixfmts)) {
> data->supported_src_formats |= 1ULL << j;
> v4lconvert_get_framesizes(data, fmt.pixelformat, j);
> - if (!supported_src_pixfmts[j].needs_conversion)
> - always_needs_conversion = 0;
> + if (supported_src_pixfmts[j].needs_conversion)
> + always_needs_conversion = 1;
> } else
> always_needs_conversion = 0;
> }
Is the else still needed? You changed the default to 0...
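For reference, the resulting loop is effectively this (my paraphrase
of the hunk above, context elided):

	int always_needs_conversion = 0;	/* the new default */
	...
	if (j < ARRAY_SIZE(supported_src_pixfmts)) {
		...
		if (supported_src_pixfmts[j].needs_conversion)
			always_needs_conversion = 1;
	} else
		always_needs_conversion = 0;	/* same as the new default -
						   and it can even clear a 1
						   set by an earlier format */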
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
Hi Mauro (and others),
On Fri, Mar 10, 2017 at 12:53:42PM -0300, Mauro Carvalho Chehab wrote:
> Em Fri, 10 Mar 2017 15:20:48 +0100
> Hans Verkuil <[email protected]> escreveu:
>
> >
> > > As I've already mentioned, from talking about this with Mauro, it seems
> > > Mauro is in agreement with permitting the control inheritance... I wish
> > > Mauro would comment for himself, as I can't quote our private discussion
> > > on the subject.
> >
> > I can't comment either, not having seen his mail and reasoning.
>
> The rationale is that we should support the simplest use cases first.
>
> In the case of the first MC-based driver (and several subsequent
> ones), the simplest use case required MC, as it was meant to support
> a custom-made sophisticated application that required fine control
> on each component of the pipeline and to allow their advanced
> proprietary AAA userspace-based algorithms to work.
The first MC based driver (omap3isp) supports what the hardware can do; it
does not support applications as such.
Adding support to drivers for different "operation modes" --- this is
essentially what is being asked for --- is not an approach which could serve
either purpose (some functionality with simple interface vs. fully support
what the hardware can do, with interfaces allowing that) adequately in the
short or the long run.
If we are missing pieces in the puzzle --- in this case the missing pieces
in the puzzle are a generic pipeline configuration library and another
library that, with the help of pipeline autoconfiguration, would implement
"best effort" service for regular V4L2 on top of the MC + V4L2 subdev + V4L2
--- then these pieces need to be implemented. The solution is
*not* to attempt to support different types of applications in each driver
separately. That will make writing drivers painful, error prone, and is
unlikely to ever deliver what either purpose requires.
So let's continue to implement the functionality that the hardware supports.
Making a different choice here is bound to create a lasting conflict between
having to change kernel interface behaviour and the requirement of
supporting new functionality that hasn't been previously thought of, pushing
away SoC vendors from the V4L2 ecosystem. This is what we all want to avoid.
As far as the i.MX6 driver goes, it is always possible to implement an i.MX6 plugin
for libv4l to perform this. This should be much easier than getting the
automatic pipe configuration library and the rest working, and as it is
custom for i.MX6, the resulting plugin may make informed technical choices
for better functionality. Jacek has been working on such a plugin for
Samsung Exynos hardware, but I don't think he has quite finished it yet.
The original plan was and continues to be sound, it's just that there have
always been too few hands to implement it. :-(
>
> That's not true, for example, for the UVC driver. There, MC
> is optional, as it should be.
UVC is different. The device simply provides additional information through
MC to the user but MC (or V4L2 sub-device interface) is not used for
controlling the device.
--
Kind regards,
Sakari Ailus
e-mail: [email protected] XMPP: [email protected]
Em Sat, 11 Mar 2017 00:37:14 +0200
Sakari Ailus <[email protected]> escreveu:
> Hi Mauro (and others),
>
...
>
> The first MC based driver (omap3isp) supports what the hardware can do; it
> does not support applications as such.
All media drivers support a subset of what the hardware can do. The
question is whether such a subset covers the use cases or not.
The current MC-based drivers (except for uvc) took the approach of
offering a more advanced API, allowing direct control of each IP module.
As was said by the time we merged the OMAP3 driver, for the N9/N900
camera to work, it was mandatory to access the pipeline's individual
components.
Such an approach requires that some userspace software have knowledge
of the hardware details in order to set up pipelines and send controls
to the right components. That makes it really hard to have a generic,
user-friendly application for such devices.
Non-MC based drivers control the hardware via a portable interface which
doesn't require any knowledge of the hardware specifics, as either the
kernel or some firmware at the device will set up any needed pipelines.
In the case of V4L2 controls, when there's no subdev API, the main
driver (e.g. the driver that creates the /dev/video nodes) sends a
multicast message to all bound I2C drivers. The driver(s) that need
it handle it. When the same control may be implemented by different
drivers, the main driver sends a unicast message to just one driver[1].
[1] There are several non-MC drivers that have multiple ways to
control some things, like doing scaling or adjusting volume levels at
either the bridge driver or at a subdriver.
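Just to illustrate the dispatch pattern described above - a sketch,
not taken from any specific driver: "struct my_dev" and
MY_SENSOR_GRP_ID are made-up names, and s_stream is just an example
op (the same mechanism was used for controls before the control
framework existed):

#include <media/v4l2-device.h>

#define MY_SENSOR_GRP_ID 1

struct my_dev {
	struct v4l2_device v4l2_dev;
};

/* Both dispatch variants, side by side, using the helper macro
 * from include/media/v4l2-device.h. */
static void my_start_streaming(struct my_dev *dev)
{
	/* multicast: every bound subdev implementing the op handles it */
	v4l2_device_call_all(&dev->v4l2_dev, 0, video, s_stream, 1);

	/* unicast: only the subdev(s) whose grp_id matches get the call */
	v4l2_device_call_all(&dev->v4l2_dev, MY_SENSOR_GRP_ID,
			     video, s_stream, 1);
}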
There's nothing wrong with this approach: it works, it is simpler, and
it is generic. So, if it covers most use cases, why not allow it
for use cases where finer control is not a requirement?
> Adding support to drivers for different "operation modes" --- this is
> essentially what is being asked for --- is not an approach which could serve
> either purpose (some functionality with a simple interface vs. fully supporting
> what the hardware can do, with interfaces allowing that) adequately in the
> short or the long run.
Why not?
> If we are missing pieces in the puzzle --- in this case the missing pieces
> in the puzzle are a generic pipeline configuration library and another
> library that, with the help of pipeline autoconfiguration, would implement
> "best effort" service for regular V4L2 on top of the MC + V4L2 subdev + V4L2
> --- then these pieces need to be implemented. The solution is
> *not* to attempt to support different types of applications in each driver
> separately. That will make writing drivers painful, error prone, and is
> unlikely to ever deliver what either purpose requires.
>
> So let's continue to implement the functionality that the hardware supports.
> Making a different choice here is bound to create a lasting conflict between
> having to change kernel interface behaviour and the requirement of
> supporting new functionality that hasn't been previously thought of, pushing
> away SoC vendors from the V4L2 ecosystem. This is what we all want to avoid.
This situation has existed since 2009. If I remember well, you tried to write
such a generic plugin in the past, but never finished it, apparently because
it was too complex. Others have tried too over the years.
The last attempt was made by Jacek, trying to cover just the exynos4 driver.
Yet even such a limited-scope plugin was not good enough, as it was never
merged upstream. Currently, there are no such plugins upstream.
If we can't even merge a plugin that solves it for just *one* driver,
I have no hope that we'll be able to do it for the generic case.
That's why I'm saying that I'm OK with merging any patch that would allow
setting controls via the /dev/video interface on MC-based drivers when
compiled without the subdev API. I may also consider merging patches that
allow changing the behavior at runtime, when compiled with the subdev API.
> As far as the i.MX6 driver goes, it is always possible to implement an i.MX6 plugin
> for libv4l to perform this. This should be much easier than getting the
> automatic pipe configuration library and the rest working, and as it is
> custom for i.MX6, the resulting plugin may make informed technical choices
> for better functionality.
I wouldn't call "much easier" something that experienced media
developers failed to do over the last 8 years.
It is just the opposite: broadcasting a control via I2C is very easy:
there are several examples of how to do that all over the media
drivers.
> Jacek has been working on such a plugin for
> Samsung Exynos hardware, but I don't think he has quite finished it yet.
As Jacek answered when questioned about the merge status:
Hi Hans,
On 11/03/2016 12:51 PM, Hans Verkuil wrote:
> Hi all,
>
> Is there anything that blocks me from merging this?
>
> This plugin work has been ongoing for years and unless there are serious
> objections I propose that this is merged.
>
> Jacek, is there anything missing that would prevent merging this?
There were issues raised by Sakari during the last review, related to
the way v4l2 control bindings are defined. That discussion wasn't
finished, so I stayed with my approach. Other than that - I've tested it
and it works fine both with GStreamer and my test app.
After that, he sent a new version (v7.1), but it never got reviews.
> The original plan was and continues to be sound, it's just that there have
> always been too few hands to implement it. :-(
If there are no people to implement a plan, it doesn't matter how good
the plan is; it won't work.
> > That's not true, for example, for the UVC driver. There, MC
> > is optional, as it should be.
>
> UVC is different. The device simply provides additional information through
> MC to the user but MC (or V4L2 sub-device interface) is not used for
> controlling the device.
It is not different. If the kernel is compiled without the V4L2
subdev interface, the i.MX6 driver (or whatever other driver)
won't receive any controls via the subdev interface. So, it has to
handle the control logic via the only interface that
supports it, i.e. via the video devnode.
Thanks,
Mauro
On 10/03/17 16:09, Mauro Carvalho Chehab wrote:
> Em Fri, 10 Mar 2017 13:54:28 +0100
> Hans Verkuil <[email protected]> escreveu:
>
>>> Devices that have complex pipelines that do essentially require using the
>>> Media controller interface to configure them are out of that scope.
>>>
>>
>> Way too much of how the MC devices should be used is in the minds of developers.
>> There is a major lack of good detailed documentation, utilities, compliance
>> test (really needed!) and libv4l plugins.
>
> Unfortunately, we merged incomplete MC support into the kernel. We knew
> all the problems with MC-based drivers and V4L2 applications by the time
> it was developed, and we requested Nokia developers (Nokia was sponsoring MC
> development at that time) to work on a solution to allow standard V4L2
> applications to work with MC based boards.
>
> Unfortunately, we took the decision to merge MC without that, because
> Nokia was giving up on Linux development, and we didn't want to lose the
> 2 years of discussions and work around it, as Nokia employees were leaving
> the company. Also, at that time, there were already some patches floating
> around adding backward support via libv4l. Unfortunately, those patches
> were never finished.
>
> The net result is that MC was merged with some huge gaps, including
> the lack of a proper solution for a generic V4L2 program to work
> with V4L2 devices that use the subdev API.
>
> That was not that bad back then, as MC was used only on cell phones
> that ran custom-made applications.
>
> The reality changed: we now have lots of low-cost SoC-based
> boards, used for all sorts of purposes. So, we need a quick solution
> for it.
>
> In other words, while that would be acceptable for special apps
> on really embedded systems, it is *not OK* for general purpose SoC
> hardware[1].
>
> [1] I'm calling "general purpose SoC hardware" those ARM boards
> like Raspberry Pi that are shipped to the masses and used by a wide
> range of hobbyists and other people who just want to run Linux on
> ARM. It is possible to buy such boards for a very cheap price,
> so they are used not only on special projects, where a custom-made
> application could be interesting, but also by a lot of users who
> just want to run Linux on a low cost ARM board while continuing
> to use standard V4L2 apps, like "camorama".
>
> That's perhaps one of the reasons why it took a long time for us to
> start receiving drivers upstream for such hardware: it is quite
> intimidating and not logical to require developers to implement
> in their drivers 2 complex APIs (MC, subdev) for
> hardware that most users won't care about. From the user's
> perspective, being able to run generic applications like "camorama"
> and "zbar" is all they want.
>
> In summary, I'm pretty sure we need to support standard V4L2
> applications on boards like Raspberry Pi and those low-cost
> SoC-based boards that are shipped to end users.
>
>> Anyway, regarding this specific patch and for this MC-aware driver: no, you
>> shouldn't inherit controls from subdevs. It defeats the purpose.
>
> Sorry, but I don't agree with that. The subdev API is an optional API
> (and even the MC API can be optional).
>
> I see the rationale for using MC and subdev APIs on cell phones,
> ISV and other embedded hardware, as it allows fine-tuning
> the driver's support to provide the required quality for
> certain custom-made applications. But on general SoC hardware,
> supporting standard V4L2 applications is a need.
>
> Ok, perhaps supporting both subdev API and V4L2 API at the same
> time doesn't make much sense. We could disable one in favor of the
> other, either at compilation time or at runtime.
Right. If the subdev API is disabled, then you have to inherit the subdev
controls in the bridge driver (how else would you be able to access them?).
And that's the usual case.
If you do have the subdev API enabled, AND you use the MC, then the
intention clearly is to give userspace full control and inheriting controls
no longer makes any sense (and is highly confusing IMHO).
>
> This way, if the subdev API is disabled, the driver will be
> functional for V4L2-based applications that support neither
> MC nor subdev APIs.
I'm not sure if it makes sense for the i.MX driver to behave differently
depending on whether the subdev API is enabled or disabled. I don't know
enough of the hardware to tell if it would ever make sense to disable the
subdev API.
Regards,
Hans
>
>> As mentioned, I will try to get some time to work on this
>> later this year. Fingers crossed.
>
> That will be good, and, once we have a solution that works, we can
> work on cleaning up the code, but, until then, drivers for arm-based boards
> sold to end consumers should work out of the box with standard V4L2 apps.
>
> While we don't have that, I'm OK with merging patches adding such support
> upstream.
>
> Thanks,
> Mauro
>
Em Sat, 11 Mar 2017 12:32:43 +0100
Hans Verkuil <[email protected]> escreveu:
...
> > Ok, perhaps supporting both subdev API and V4L2 API at the same
> > time doesn't make much sense. We could disable one in favor of the
> > other, either at compilation time or at runtime.
>
> Right. If the subdev API is disabled, then you have to inherit the subdev
> controls in the bridge driver (how else would you be able to access them?).
> And that's the usual case.
>
> If you do have the subdev API enabled, AND you use the MC, then the
> intention clearly is to give userspace full control and inheriting controls
> no longer makes any sense (and is highly confusing IMHO).
I tend to agree with that.
> >
> > This way, if the subdev API is disabled, the driver will be
> > functional for V4L2-based applications that support neither
> > MC nor subdev APIs.
>
> I'm not sure if it makes sense for the i.MX driver to behave differently
> depending on whether the subdev API is enabled or disabled. I don't know
> enough of the hardware to tell if it would ever make sense to disable the
> subdev API.
Yeah, I don't know enough about it either. The point is: whether it
makes sense or not is something that the driver maintainer and driver
users should decide, based on the expected use cases.
Thanks,
Mauro
Hi Mauro and Hans,
On Sat, Mar 11, 2017 at 10:14:08AM -0300, Mauro Carvalho Chehab wrote:
> Em Sat, 11 Mar 2017 12:32:43 +0100
> Hans Verkuil <[email protected]> escreveu:
>
...
> > > Ok, perhaps supporting both subdev API and V4L2 API at the same
> > > time doesn't make much sense. We could disable one in favor of the
> > > other, either at compilation time or at runtime.
> >
> > Right. If the subdev API is disabled, then you have to inherit the subdev
> > controls in the bridge driver (how else would you be able to access them?).
> > And that's the usual case.
> >
> > If you do have the subdev API enabled, AND you use the MC, then the
> > intention clearly is to give userspace full control and inheriting controls
> > no longer makes any sense (and is highly confusing IMHO).
>
> I tend to agree with that.
I agree as well.
This is in line with how existing drivers behave, too.
>
> > >
> > > This way, if the subdev API is disabled, the driver will be
> > > functional for V4L2-based applications that support neither
> > > MC nor subdev APIs.
> >
> > I'm not sure if it makes sense for the i.MX driver to behave differently
> > depending on whether the subdev API is enabled or disabled. I don't know
> > enough of the hardware to tell if it would ever make sense to disable the
> > subdev API.
>
> Yeah, I don't know enough about it either. The point is: whether it
> makes sense or not is something that the driver maintainer and driver
> users should decide, based on the expected use cases.
My understanding of the i.MX6 case is that the hardware is configurable enough
to warrant the use of the Media controller API. Some patches indicate
there are choices to be made in data routing.
Steve: could you enlighten us on the topic, by e.g. doing media-ctl
--print-dot and sending the results to the list? What kind of different IP
blocks are there and what do they do? A pointer to hardware documentation
wouldn't hurt either (if it's available).
--
Kind regards,
Sakari Ailus
e-mail: [email protected] XMPP: [email protected]
On Sat, Mar 11, 2017 at 05:32:29PM +0200, Sakari Ailus wrote:
> My understanding of the i.MX6 case is that the hardware is configurable enough
> to warrant the use of the Media controller API. Some patches indicate
> there are choices to be made in data routing.
The iMX6 does have configurable data routing, but in some scenarios
(e.g. when receiving bayer data) there's only one possible routing.
> Steve: could you enlighten us on the topic, by e.g. doing media-ctl
> --print-dot and sending the results to the list? What kind of different IP
> blocks are there and what do they do? A pointer to hardware documentation
> wouldn't hurt either (if it's available).
Attached for the imx219 camera. Note that although the CSI2 block has
four outputs, each output is dedicated to a CSI virtual channel, so
they cannot be arbitrarily assigned without configuring the sensor.
Since the imx219 only produces bayer, the graph is also showing the
_only_ possible routing for the imx219 configured for CSI virtual
channel 0.
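Just to illustrate what configuring that routing means in practice,
the links and pad formats have to be set up by hand along these lines
(the entity and pad names here are invented for illustration - the
real ones come from media-ctl -p):

$ media-ctl -l '"imx219 0-0010":0 -> "imx6-mipi-csi2":0[1]'
$ media-ctl -l '"imx6-mipi-csi2":1 -> "ipu1_csi0":0[1]'
$ media-ctl -V '"imx219 0-0010":0 [fmt:SRGGB8_1X8/816x616]'
$ media-ctl -V '"imx6-mipi-csi2":1 [fmt:SRGGB8_1X8/816x616]'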
The iMX6 manuals are available on the 'net.
https://community.nxp.com/docs/DOC-101840
There are several chapters that cover the capture side:
* MIPI CSI2
* IPU CSI2 gasket
* IPU
The IPU not only performs capture, but display as well.
--
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.
On 03/11/2017 07:32 AM, Sakari Ailus wrote:
> Hi Mauro and Hans,
>
> On Sat, Mar 11, 2017 at 10:14:08AM -0300, Mauro Carvalho Chehab wrote:
...
>>> Right. If the subdev API is disabled, then you have to inherit the subdev
>>> controls in the bridge driver (how else would you be able to access them?).
>>> And that's the usual case.
>>>
>>> If you do have the subdev API enabled, AND you use the MC, then the
>>> intention clearly is to give userspace full control and inheriting controls
>>> no longer makes any sense (and is highly confusing IMHO).
>>
>> I tend to agree with that.
>
> I agree as well.
>
> This is in line with how existing drivers behave, too.
Well, sounds like there is consensus on this topic. I guess I'll
go ahead and remove the control inheritance support. I suppose
having a control appear in two places (subdev and video nodes) can
be confusing.
As for the configurability vs. ease-of-use debate, I added the
control inheritance to make it a little easier on the user, but,
as the dot graphs below will show, the user already needs quite
a lot of knowledge of the architecture in order to set up
the different pipelines. So perhaps the control inheritance is
rather pointless anyway.
>
>>
>>>>
>>>> This way, if the subdev API is disabled, the driver will be
>>>> functional for V4L2-based applications that support neither
>>>> MC nor subdev APIs.
>>>
>>> I'm not sure if it makes sense for the i.MX driver to behave differently
>>> depending on whether the subdev API is enabled or disabled. I don't know
>>> enough of the hardware to tell if it would ever make sense to disable the
>>> subdev API.
>>
>> Yeah, I don't know enough about it either. The point is: whether it
>> makes sense or not is something that the driver maintainer and driver
>> users should decide, based on the expected use cases.
>
> My understanding of the i.MX6 case is that the hardware is configurable enough
> to warrant the use of the Media controller API. Some patches indicate
> there are choices to be made in data routing.
>
> Steve: could you enlighten us on the topic, by e.g. doing media-ctl
> --print-dot and sending the results to the list? What kind of different IP
> blocks are there and what do they do? A pointer to hardware documentation
> wouldn't hurt either (if it's available).
Wow, I didn't realize there was so little knowledge of the imx6
IPU capture architecture.
Yes, the imx6 definitely warrants the need for MC, as the dot graphs
will attest.
The graphs follow very closely the actual hardware architecture of
the IPU capture blocks. I.e., all the subdevs and links shown correspond
to actual hardware connections and sub-blocks.
Russell just provided a link to the imx6 reference manual, and a dot
graph for the imx219-based platform.
Also I've added quite a lot of detail to the media doc at
Documentation/media/v4l-drivers/imx.rst.
The dot graphs for SabreSD, SabreLite, and SabreAuto reference platforms
are attached. They are generated from the most recent
(version 5) imx-media driver.
Steve
On Sat, Mar 11, 2017 at 10:08:23AM -0800, Steve Longerbeam wrote:
> On 03/11/2017 07:32 AM, Sakari Ailus wrote:
...
>
> Well, sounds like there is consensus on this topic. I guess I'll
> go ahead and remove the control inheritance support. I suppose
> having a control appear in two places (subdev and video nodes) can
> be confusing.
I would say _don't_ do that until there are tools/libraries in place
that are able to support controlling subdevs; otherwise it's just
going to be another reason for me to walk away from this stuff, and
stick with a version that does work sensibly.
> As for the configurability vs. ease-of-use debate, I added the
> control inheritance to make it a little easier on the user, but,
> as the dot graphs below will show, the user already needs quite
> a lot of knowledge of the architecture in order to set up
> the different pipelines. So perhaps the control inheritance is
> rather pointless anyway.
I really don't think expecting the user to understand and configure
the pipeline is a sane way forward. Think about it - should the
user need to know that, because they have a bayer-only CSI data
source, there is only one possible path, and that if they try to
configure a different path, things will just error out?
For the case of an imx219 connected to an iMX6, it really is as simple as
"there is only one possible path", and all the complexity of the media
interfaces/subdevs is completely unnecessary. Every other block in
the graph is just noise.
The fact is that these dot graphs show a complex picture, but reality
is somewhat different - there are only relatively few paths available
depending on the connected source, and the rest of the paths are
completely useless.
--
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.
On 03/11/2017 10:45 AM, Russell King - ARM Linux wrote:
> On Sat, Mar 11, 2017 at 10:08:23AM -0800, Steve Longerbeam wrote:
>> On 03/11/2017 07:32 AM, Sakari Ailus wrote:
>>> Hi Mauro and Hans,
>>>
>>> On Sat, Mar 11, 2017 at 10:14:08AM -0300, Mauro Carvalho Chehab wrote:
> >>>> On Sat, 11 Mar 2017 12:32:43 +0100,
> >>>> Hans Verkuil <[email protected]> wrote:
>>>>
>>>>> On 10/03/17 16:09, Mauro Carvalho Chehab wrote:
> >>>>>> On Fri, 10 Mar 2017 13:54:28 +0100,
> >>>>>> Hans Verkuil <[email protected]> wrote:
>>>>>>
> >>>>>>>> Devices that have a complex pipeline and essentially require using the
> >>>>>>>> Media controller interface to configure them are out of that scope.
>>>>>>>>
>>>>>>>
> >>>>>>> Way too much of how the MC devices should be used is in the minds of developers.
> >>>>>>> There is a major lack of good detailed documentation, utilities, compliance
> >>>>>>> tests (really needed!) and libv4l plugins.
>>>>>>
> >>>>>> Unfortunately, we merged incomplete MC support into the kernel. We knew
> >>>>>> all the problems with MC-based drivers and V4L2 applications by the time
> >>>>>> it was developed, and we asked the Nokia developers (Nokia was sponsoring
> >>>>>> MC development at that time) to work on a solution to allow standard V4L2
> >>>>>> applications to work with MC-based boards.
> >>>>>>
> >>>>>> Unfortunately, we took the decision to merge MC without that, because
> >>>>>> Nokia was giving up on Linux development, and we didn't want to lose the
> >>>>>> 2 years of discussions and work around it, as Nokia employees were leaving
> >>>>>> the company. Also, at that time, there were already some patches floating
> >>>>>> around adding backward support via libv4l. Unfortunately, those patches
> >>>>>> were never finished.
>>>>>>
>>>>>> The net result is that MC was merged with some huge gaps, including
>>>>>> the lack of a proper solution for a generic V4L2 program to work
>>>>>> with V4L2 devices that use the subdev API.
>>>>>>
> >>>>>> That was not so bad back then, as MC was used only on cell phones
> >>>>>> that ran custom-made applications.
> >>>>>>
> >>>>>> The reality has changed: now we have lots of low-cost SoC-based
> >>>>>> boards, used for all sorts of purposes. So we need a quick solution
> >>>>>> for it.
>>>>>>
> >>>>>> In other words, while that would be acceptable to support special apps
> >>>>>> on truly embedded systems, it is *not OK* for general-purpose SoC
> >>>>>> hardware[1].
>>>>>>
> >>>>>> [1] I'm calling "general-purpose SoC hardware" those ARM boards,
> >>>>>> like the Raspberry Pi, that are shipped to the masses and used by a
> >>>>>> wide range of hobbyists and other people who just want to run Linux
> >>>>>> on ARM. It is possible to buy such boards for a very cheap price,
> >>>>>> so they end up being used not only in special projects, where a
> >>>>>> custom-made application could be interesting, but also by a lot of
> >>>>>> users who just want to run Linux on a low-cost ARM board, while
> >>>>>> continuing to use standard V4L2 apps, like "camorama".
>>>>>>
> >>>>>> That's perhaps one of the reasons why it took so long for us to
> >>>>>> start receiving drivers upstream for such hardware: it is quite
> >>>>>> intimidating, and not logical, to require developers to implement
> >>>>>> two complex APIs (MC, subdev) in their drivers for hardware
> >>>>>> whose extra complexity most users won't care about. From the user's
> >>>>>> perspective, being able to run generic applications like "camorama"
> >>>>>> and "zbar" is all they want.
>>>>>>
>>>>>> In summary, I'm pretty sure we need to support standard V4L2
>>>>>> applications on boards like Raspberry Pi and those low-cost
>>>>>> SoC-based boards that are shipped to end users.
>>>>>>
>>>>>>> Anyway, regarding this specific patch and for this MC-aware driver: no, you
>>>>>>> shouldn't inherit controls from subdevs. It defeats the purpose.
>>>>>>
>>>>>> Sorry, but I don't agree with that. The subdev API is an optional API
>>>>>> (and even the MC API can be optional).
>>>>>>
> >>>>>> I see the rationale for using the MC and subdev APIs on cell phones,
> >>>>>> ISV and other embedded hardware, as they allow fine-tuning
> >>>>>> the driver's support to provide the required quality for
> >>>>>> certain custom-made applications. But on general SoC hardware,
> >>>>>> supporting standard V4L2 applications is a necessity.
>>>>>>
>>>>>> Ok, perhaps supporting both subdev API and V4L2 API at the same
>>>>>> time doesn't make much sense. We could disable one in favor of the
>>>>>> other, either at compilation time or at runtime.
>>>>>
>>>>> Right. If the subdev API is disabled, then you have to inherit the subdev
>>>>> controls in the bridge driver (how else would you be able to access them?).
>>>>> And that's the usual case.
>>>>>
>>>>> If you do have the subdev API enabled, AND you use the MC, then the
>>>>> intention clearly is to give userspace full control and inheriting controls
>>>>> no longer makes any sense (and is highly confusing IMHO).
>>>>
>>>> I tend to agree with that.
>>>
>>> I agree as well.
>>>
>>> This is in line with how existing drivers behave, too.
>>
>> Well, sounds like there is consensus on this topic. I guess I'll
>> go ahead and remove the control inheritance support. I suppose
>> having a control appear in two places (subdev and video nodes) can
>> be confusing.
>
> I would say _don't_ do that until there are tools/libraries in place
> that are able to support controlling subdevs, otherwise it's just
> going to be another reason for me to walk away from this stuff, and
> stick with a version that does work sensibly.
>
>> As for the configurability vs. ease-of-use debate, I added the
>> control inheritance to make it a little easier on the user, but,
>> as the dot graphs below will show, the user already needs quite
>> a lot of knowledge of the architecture in order to set up
>> the different pipelines. So perhaps the control inheritance is
>> rather pointless anyway.
>
> I really don't think expecting the user to understand and configure
> the pipeline is a sane way forward. Think about it - should the
> user need to know that, because they have a bayer-only CSI data
> source, there is only one possible path, and that if they try to
> configure a different path, things will just error out?
>
> For the case of an imx219 connected to an iMX6, it really is as simple as
> "there is only one possible path", and all the complexity of the media
> interfaces/subdevs is completely unnecessary. Every other block in
> the graph is just noise.
>
> The fact is that these dot graphs show a complex picture, but reality
> is somewhat different - there are only relatively few paths available
> depending on the connected source, and the rest of the paths are
> completely useless.
>
I totally disagree there. Raw bayer requires passthrough, yes, but for
all other media bus formats on a MIPI CSI-2 bus, and all other media
bus formats on 8-bit parallel buses, the conversion pipelines can be
used for scaling, CSC, rotation, and motion-compensated de-interlacing.
Steve
On Sat, Mar 11, 2017 at 10:54:55AM -0800, Steve Longerbeam wrote:
>
>
> On 03/11/2017 10:45 AM, Russell King - ARM Linux wrote:
> >I really don't think expecting the user to understand and configure
> >the pipeline is a sane way forward. Think about it - should the
> >user need to know that, because they have a bayer-only CSI data
> >source, there is only one possible path, and that if they try to
> >configure a different path, things will just error out?
> >
> >For the case of an imx219 connected to an iMX6, it really is as simple as
> >"there is only one possible path", and all the complexity of the media
> >interfaces/subdevs is completely unnecessary. Every other block in
> >the graph is just noise.
> >
> >The fact is that these dot graphs show a complex picture, but reality
> >is somewhat different - there are only relatively few paths available
> >depending on the connected source, and the rest of the paths are
> >completely useless.
> >
>
> I totally disagree there. Raw bayer requires passthrough, yes, but for
> all other media bus formats on a MIPI CSI-2 bus, and all other media
> bus formats on 8-bit parallel buses, the conversion pipelines can be
> used for scaling, CSC, rotation, and motion-compensated de-interlacing.
... which only makes sense _if_ your source can produce those formats.
We don't actually disagree on that.
Let me re-state. If the source can _only_ produce bayer, then there is
_only_ _one_ possible path, and all the overhead of the media controller
stuff is totally unnecessary.
Or, are you going to tell me that the user should have the right to
configure paths through the iMX6 hardware that are not permitted by the
iMX6 manuals for the data format being produced by the sensor?
--
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.
On 03/11/2017 10:59 AM, Russell King - ARM Linux wrote:
> On Sat, Mar 11, 2017 at 10:54:55AM -0800, Steve Longerbeam wrote:
>>
>>
>> On 03/11/2017 10:45 AM, Russell King - ARM Linux wrote:
>>> I really don't think expecting the user to understand and configure
>>> the pipeline is a sane way forward. Think about it - should the
>>> user need to know that, because they have a bayer-only CSI data
>>> source, there is only one possible path, and that if they try to
>>> configure a different path, things will just error out?
>>>
>>> For the case of an imx219 connected to an iMX6, it really is as simple as
>>> "there is only one possible path", and all the complexity of the media
>>> interfaces/subdevs is completely unnecessary. Every other block in
>>> the graph is just noise.
>>>
>>> The fact is that these dot graphs show a complex picture, but reality
>>> is somewhat different - there are only relatively few paths available
>>> depending on the connected source, and the rest of the paths are
>>> completely useless.
>>>
>>
>> I totally disagree there. Raw bayer requires passthrough, yes, but for
>> all other media bus formats on a MIPI CSI-2 bus, and all other media
>> bus formats on 8-bit parallel buses, the conversion pipelines can be
>> used for scaling, CSC, rotation, and motion-compensated de-interlacing.
>
> ... which only makes sense _if_ your source can produce those formats.
> We don't actually disagree on that.
>
> Let me re-state. If the source can _only_ produce bayer, then there is
> _only_ _one_ possible path, and all the overhead of the media controller
> stuff is totally unnecessary.
>
> Or, are you going to tell me that the user should have the right to
> configure paths through the iMX6 hardware that are not permitted by the
> iMX6 manuals for the data format being produced by the sensor?
>
Russell, I'm not following you. The imx6 pipelines allow for many
different sources, not just the imx219 that only outputs bayer. You
seem to be saying that those other pipelines should not be present
because they don't support raw bayer.
Steve
Hi!
> >>I tend to agree with that.
> >
> >I agree as well.
> >
> >This is in line with how existing drivers behave, too.
>
>
> Well, sounds like there is consensus on this topic. I guess I'll
> go ahead and remove the control inheritance support. I suppose
> having a control appear in two places (subdev and video nodes) can
> be confusing.
I guess that's the way to go. It is impossible to change userland APIs
once the patch is merged...
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
On 03/11/2017 12:26 PM, Pavel Machek wrote:
> Hi!
>
>>>> I tend to agree with that.
>>>
>>> I agree as well.
>>>
>>> This is in line with how existing drivers behave, too.
>>
>>
>> Well, sounds like there is consensus on this topic. I guess I'll
>> go ahead and remove the control inheritance support. I suppose
>> having a control appear in two places (subdev and video nodes) can
>> be confusing.
>
> I guess that's the way to go. It is impossible to change userland APIs
> once the patch is merged...
Ok, not including myself, it's now 4 in favor of removing, 1 against...
Steve
On Sat, Mar 11, 2017 at 11:06:55AM -0800, Steve Longerbeam wrote:
> On 03/11/2017 10:59 AM, Russell King - ARM Linux wrote:
> >On Sat, Mar 11, 2017 at 10:54:55AM -0800, Steve Longerbeam wrote:
> >>
> >>
> >>On 03/11/2017 10:45 AM, Russell King - ARM Linux wrote:
> >>>I really don't think expecting the user to understand and configure
> >>>the pipeline is a sane way forward. Think about it - should the
> >>>user need to know that, because they have a bayer-only CSI data
> >>>source, there is only one possible path, and that if they try to
> >>>configure a different path, things will just error out?
> >>>
> >>>For the case of an imx219 connected to an iMX6, it really is as simple as
> >>>"there is only one possible path", and all the complexity of the media
> >>>interfaces/subdevs is completely unnecessary. Every other block in
> >>>the graph is just noise.
> >>>
> >>>The fact is that these dot graphs show a complex picture, but reality
> >>>is somewhat different - there are only relatively few paths available
> >>>depending on the connected source, and the rest of the paths are
> >>>completely useless.
> >>>
> >>
> >>I totally disagree there. Raw bayer requires passthrough, yes, but for
> >>all other media bus formats on a MIPI CSI-2 bus, and all other media
> >>bus formats on 8-bit parallel buses, the conversion pipelines can be
> >>used for scaling, CSC, rotation, and motion-compensated de-interlacing.
> >
> >... which only makes sense _if_ your source can produce those formats.
> >We don't actually disagree on that.
> >
> >Let me re-state. If the source can _only_ produce bayer, then there is
> >_only_ _one_ possible path, and all the overhead of the media controller
> >stuff is totally unnecessary.
> >
> >Or, are you going to tell me that the user should have the right to
> >configure paths through the iMX6 hardware that are not permitted by the
> >iMX6 manuals for the data format being produced by the sensor?
> >
>
> Russell, I'm not following you. The imx6 pipelines allow for many
> different sources, not just the imx219 that only outputs bayer. You
> seem to be saying that those other pipelines should not be present
> because they don't support raw bayer.
What I'm saying is this:
_If_ you have a sensor connected that can _only_ produce bayer, _then_
there is only _one_ possible path through the imx6 pipelines that is
legal. Offering other paths from the source is noise, because every
other path can't be used with a bayer source.
_If_ you have a sensor connected which can produce RGB or YUV formats,
_then_ other paths are available, and the pipeline needs to be configured
to select the appropriate path with the desired features.
So, in the case of a bayer source, offering the user the chance to
manually configure the _single_ allowable route through the tree is
needless complexity. Forcing the user to have to use the subdev
interfaces to configure the camera is needless complexity. Such a
source can only ever be used with one single /dev/video* node.
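To make that concrete: wiring up the single allowable route comes down
to a couple of media-ctl invocations - a minimal sketch, where the
entity names ("imx219", "csi") and /dev/media0 are purely illustrative,
not taken from any actual driver:

  # Hypothetical entity names; list the real ones with:
  #   media-ctl -d /dev/media0 --print-topology
  media-ctl -d /dev/media0 --links '"imx219":0->"csi":0[1]'
  media-ctl -d /dev/media0 --set-v4l2 '"imx219":0 [fmt:SRGGB10/3280x2464]'

And that is exactly the kind of terminal work being objected to here.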
Moreover, this requires user education, and this brings me to much
larger concerns. We seem to be saying "this is too complicated, the
user can work it out!"
We've been here with VGA devices. Remember the old days when you had
to put mode lines into the Xorg.conf, or go through a lengthy setup
process to get X running? It wasn't very user-friendly. We seem to
be making the same mistake here.
Usability comes first and foremost - throwing complex problems at
users is not a solution.
Now, given that this media control API has been around for several
years, and the userspace side of the story has not really improved
(according to Mauro, several attempts have been made, and every single
attempt so far has failed, even for specific hardware), it seems to me
that using the media control API is a very poor choice for the very
simple reason that _no one_ knows how to configure a system using it.
Hans' thought of getting some funding to look at this aspect is a
good idea, but I really wonder, given the history so far, how long
this will take - and whether it will _ever_ get solved.
If it doesn't get solved, then we're stuck with quite a big problem.
So, I suggest that we don't merge any further media-controller based
kernel code _until_ we have the userspace side sorted out. Merging
the kernel side drivers when we don't even know that the userspace
API is functionally usable in userspace beyond test programs is
utterly absurd - what if it turns out that no one can write v4l
plugins that sort out the issues that have been highlighted throughout
these discussions?
--
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.
Hi!
> > > Ok, perhaps supporting both subdev API and V4L2 API at the same
> > > time doesn't make much sense. We could disable one in favor of the
> > > other, either at compilation time or at runtime.
> >
> > Right. If the subdev API is disabled, then you have to inherit the subdev
> > controls in the bridge driver (how else would you be able to access them?).
> > And that's the usual case.
> >
> > If you do have the subdev API enabled, AND you use the MC, then the
> > intention clearly is to give userspace full control and inheriting controls
> > no longer makes any sense (and is highly confusing IMHO).
>
> I tend to agree with that.
Well, having a different userspace interface depending on config options
is strange. I believe the right solution is to make complex drivers
depend on CONFIG_VIDEO_V4L2_SUBDEV_API...
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
Hi!
> > > The rationale is that we should support the simplest use cases first.
> > >
> > > In the case of the first MC-based driver (and several subsequent
> > > ones), the simplest use case required MC, as it was meant to support
> > > a custom-made, sophisticated application that required fine control
> > > over each component of the pipeline and to allow their advanced
> > > proprietary AAA userspace-based algorithms to work.
> >
> > The first MC-based driver (omap3isp) supports what the hardware can do; it
> > does not support applications as such.
>
> All media drivers support a subset of what the hardware can do. The
> question is whether that subset covers the use cases or not.
>
> The current MC-based drivers (except for uvc) took a patch to offer a
> more advanced API, allowing direct control of each IP module; it was
> said, by the time we merged the OMAP3 driver, that for the N9/N900 camera
> to work, it was mandatory to access the pipeline's individual components.
>
> Such an approach requires that some userspace software have knowledge
> of hardware details in order to set up pipelines and send controls
> to the right components. That makes it really hard to have a generic,
> user-friendly application use such devices.
Well. Even if you propagate controls to the right components, there's
still a lot an application needs to know about the camera
subsystem. Focal lengths, for example. The speed of the focus
coil. Whether or not aperture controls are available. If they are not,
what the fixed aperture is.
Dunno. Knowing which control to apply on which subdevice does not look
like the hardest part of a camera driver. Yes, it would be a tiny bit
easier if I had just one device to deal with, but... fcam-dev
has circa 20,000 lines of C++ code.
> In the case of V4L2 controls, when there's no subdev API, the main
> driver (e.g., the driver that creates the /dev/video nodes) sends a
> multicast message to all bound I2C drivers. The driver(s) that need
> them handle it. When the same control may be implemented on different
> drivers, the main driver sends a unicast message to just one
> driver[1].
Dunno. It's quite common to have two flashes. In that case, will the
application control both at the same time?
> There's nothing wrong with this approach: it works, it is simpler,
> it is generic. So, if it covers most use cases, why not allow it
> for use cases where finer control is not a requirement?
Because the resulting interface is quite ugly?
> That's why I'm saying that I'm OK with merging any patch that would allow
> setting controls via the /dev/video interface on MC-based drivers when
> compiled without the subdev API. I may also consider merging patches allowing
So... userspace will now have to detect whether subdev is available or
not, and access the hardware in different ways?
> > The original plan was and continues to be sound, it's just that there have
> > always been too few hands to implement it. :-(
>
> If there are no people to implement a plan, it doesn't matter how good
> the plan is; it won't work.
If the plan is good, someone will do it.
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
On Sat, Mar 11, 2017 at 08:25:49AM -0300, Mauro Carvalho Chehab wrote:
> This situation has existed since 2009. If I remember well, you tried to write
> such a generic plugin in the past, but never finished it, apparently because
> it is too complex. Others have tried too over the years.
>
> The last attempt was made by Jacek, trying to cover just the exynos4 driver.
> Yet even such a limited-scope plugin was not good enough, as it was never
> merged upstream. Currently, there are no such plugins upstream.
>
> If we can't even merge a plugin that solves it for just *one* driver,
> I have no hope that we'll be able to do it for the generic case.
This is what really worries me right now about the current proposal for
iMX6. What's being proposed is to make the driver exclusively MC-based.
What that means is that existing applications are _not_ going to work
until we have some answer for libv4l2, and from what you've said above,
it seems that this has been attempted multiple times over the last _8_
years, and each time it's failed.
When thinking about it, it's quite obvious why merely trying to push
the problem into userspace fails:
If we assert that the kernel does not have sufficient information to
make decisions about how to route and control the hardware, then under
what scenario does a userspace library have sufficient information to
make those decisions?
So, merely moving the problem into userspace doesn't solve anything.
Loading the problem onto the user in the hope that the user knows
enough to properly configure it also doesn't work - who is going to
educate the user about the various quirks of the hardware they're
dealing with?
I don't think pushing it into platform-specific libv4l2 plugins works
either - as you say above, even just trying to develop a plugin for
exynos4 seems to have failed, so what makes us think that developing
a plugin for iMX6 is going to succeed? Actually, that's exactly where
the problem lies.
Is "iMX6 plugin" even right? That only deals with the complexity of
one part of the system - what about the source device, which as we
have already seen can be a tuner or a camera with its own multiple
sub-devices. What if there's a video improvement chip in the chain
as well - how is a "generic" iMX6 plugin supposed to know how to deal
with that?
It seems to me that what's required is not an "iMX6 plugin" but a
separate plugin for each platform - or worse. Consider boards like
the Raspberry Pi, where users can attach a variety of cameras. I
don't think this approach scales. (This is relevant: the iMX6 board
I have here has a RPi compatible connector for a MIPI CSI2 camera.
In fact, the IMX219 module I'm using _is_ a RPi camera, it's the RPi
NoIR Camera V2.)
The iMX6 problem is way larger than just "which subdev do I need to
configure for control X" - if you look at the dot graphs both Steve
and I have supplied, you'll notice that there are eight (yes,
8) video capture devices. Let's say that we can solve the subdev
problem in libv4l2. There's another problem lurking here - libv4l2
is /dev/video* based. How does it know which /dev/video* device to
open?
We don't open by sensor, we open by /dev/video*. In my case, there
is only one correct /dev/video* node for the attached sensor; the
other seven are totally irrelevant. For other situations, there may
be a choice of three functional /dev/video* nodes.
Right now, for my case, the information needed to know which is the
correct one isn't exported from the kernel, since that requires knowing
which virtual channel the data is going to be sent over on the CSI2
interface. That information is not present in DT, or anywhere. It
only comes from system knowledge - in my case, I know that the IMX219
is currently being configured to use virtual channel 0. SMIA cameras
are also configurable. Then there are CSI2 cameras that can produce
different formats via different virtual channels (e.g., a JPEG-compressed
image on one channel while streaming an RGB image via another channel).
Whether you can use one or three in _this_ scenario depends on the
source format - again, another bit of implementation-specific
information that userspace would need to know. Kernel space should
know that, and it's discoverable by testing which paths accept the
source format - but that doesn't tell you ahead of time which
/dev/video* node to open.
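(For what it's worth, the only discovery mechanism available today is
exactly that brute-force probing - a sketch, assuming nothing beyond the
stock v4l2-ctl tool and whatever nodes happen to exist:

  # List all video nodes, then ask each which formats it accepts.
  v4l2-ctl --list-devices
  for dev in /dev/video*; do
          echo "== $dev =="
          v4l2-ctl -d "$dev" --list-formats
  done

That tells you what each node can capture, but still not which node is
wired to your sensor's virtual channel.)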
So, the problem space we have here is absolutely huge, and merely
having a plugin that activates when you open a /dev/video* node
really doesn't solve it.
All in all, I really don't think "let's hope someone writes a v4l2
plugin to solve it" is ever going to be successful. I don't even
see that there will ever be a userspace application that is anything
more than a representation of the dot graphs that users can use to
manually configure the capture system with system knowledge.
I think everyone needs to take a step back and think long and hard
about this from the system usability perspective - I seriously
doubt that we will ever see any kind of solution to this if we
continue to progress with "we'll sort it in userspace some day."
--
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.
On 03/11/2017 03:14 PM, Russell King - ARM Linux wrote:
> On Sat, Mar 11, 2017 at 08:25:49AM -0300, Mauro Carvalho Chehab wrote:
>> This situation has existed since 2009. If I remember well, you tried to write
>> such a generic plugin in the past, but never finished it, apparently because
>> it is too complex. Others have tried too over the years.
>>
>> The last attempt was made by Jacek, trying to cover just the exynos4 driver.
>> Yet even such a limited-scope plugin was not good enough, as it was never
>> merged upstream. Currently, there are no such plugins upstream.
>>
>> If we can't even merge a plugin that solves it for just *one* driver,
>> I have no hope that we'll be able to do it for the generic case.
>
> This is what really worries me right now about the current proposal for
> iMX6. What's being proposed is to make the driver exclusively MC-based.
>
I don't see anything wrong with that.
> What that means is that existing applications are _not_ going to work
> until we have some answer for libv4l2, and from what you've said above,
> it seems that this has been attempted multiple times over the last _8_
> years, and each time it's failed.
>
> When thinking about it, it's quite obvious why merely trying to push
> the problem into userspace fails:
>
> If we assert that the kernel does not have sufficient information to
> make decisions about how to route and control the hardware, then under
> what scenario does a userspace library have sufficient information to
> make those decisions?
>
> So, merely moving the problem into userspace doesn't solve anything.
>
> Loading the problem onto the user in the hope that the user knows
> enough to properly configure it also doesn't work - who is going to
> educate the user about the various quirks of the hardware they're
> dealing with?
Documentation?
>
> I don't think pushing it into platform-specific libv4l2 plugins works
> either - as you say above, even just trying to develop a plugin for
> exynos4 seems to have failed, so what makes us think that developing
> a plugin for iMX6 is going to succeed? Actually, that's exactly where
> the problem lies.
>
> Is "iMX6 plugin" even right? That only deals with the complexity of
> one part of the system - what about the source device, which as we
> have already seen can be a tuner or a camera with its own multiple
> sub-devices. What if there's a video improvement chip in the chain
> as well - how is a "generic" iMX6 plugin supposed to know how to deal
> with that?
>
> It seems to me that what's required is not an "iMX6 plugin" but a
> separate plugin for each platform - or worse. Consider boards like
> the Raspberry Pi, where users can attach a variety of cameras. I
> don't think this approach scales. (This is relevant: the iMX6 board
> I have here has a RPi compatible connector for a MIPI CSI2 camera.
> In fact, the IMX219 module I'm using _is_ a RPi camera, it's the RPi
> NoIR Camera V2.)
>
> The iMX6 problem is way larger than just "which subdev do I need to
> configure for control X" - if you look at the dot graphs both Steve
> and I have supplied, you'll notice that there are eight (yes,
> 8) video capture devices.
There are 4 video nodes (per IPU):
- unconverted capture from CSI0
- unconverted capture from CSI1
- scaled, CSC, and/or rotated capture from PRP ENC
- scaled, CSC, rotated, and/or de-interlaced capture from PRP VF
Configuring the imx6 pipelines is not that difficult. I've put quite
a bit of detail in the media doc, so any user with MC knowledge (even
those with absolutely no knowledge of imx) should be able to quickly
start getting working pipelines.
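To give a flavor of what that looks like, capturing scaled/CSC'd video
through PRP ENC comes down to a few format assignments along the chain -
a minimal sketch, with entity names invented for the example rather than
taken from the driver:

  # Hypothetical names; the real ones come from 'media-ctl --print-topology'.
  media-ctl -d /dev/media0 --set-v4l2 '"ipu1_csi0":1 [fmt:UYVY8_2X8/1920x1080]'
  media-ctl -d /dev/media0 --set-v4l2 '"ipu1_ic_prpenc":1 [fmt:UYVY8_2X8/640x480]'

with the downscale implied by the differing pad resolutions.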
> Let's say that we can solve the subdev
> problem in libv4l2. There's another problem lurking here - libv4l2
> is /dev/video* based. How does it know which /dev/video* device to
> open?
>
> We don't open by sensor, we open by /dev/video*. In my case, there
> is only one correct /dev/video* node for the attached sensor, the
> other seven are totally irrelevant. For other situations, there may
> be the choice of three functional /dev/video* nodes.
>
> Right now, for my case, the information needed to know which is the
> correct one isn't exported from the kernel, since that requires knowing
> which virtual channel the data is going to be sent over on the CSI2
> interface. That information is not present in DT, or anywhere.
It is described in the media doc:
"This is the MIPI CSI-2 receiver entity. It has one sink pad to receive
the MIPI CSI-2 stream (usually from a MIPI CSI-2 camera sensor). It has
four source pads, corresponding to the four MIPI CSI-2 demuxed virtual
channel outputs."
> It only comes from system knowledge - in my case, I know that the IMX219
> is currently being configured to use virtual channel 0. SMIA cameras
> are also configurable. Then there are CSI2 cameras that can produce
> different formats via different virtual channels (e.g., a JPEG-compressed
> image on one channel while streaming an RGB image via another channel).
>
> Whether you can use one or three in _this_ scenario depends on the
> source format - again, another bit of implementation-specific
> information that userspace would need to know. Kernel space should
> know that, and it's discoverable by testing which paths accept the
> source format - but that doesn't tell you ahead of time which
> /dev/video* node to open.
>
> So, the problem space we have here is absolutely huge, and merely
> having a plugin that activates when you open a /dev/video* node
> really doesn't solve it.
>
> All in all, I really don't think "let's hope someone writes a v4l2
> plugin to solve it" is ever going to be successful. I don't even
> see that there will ever be a userspace application that is anything
> more than a representation of the dot graphs that users can use to
> manually configure the capture system with system knowledge.
>
> I think everyone needs to take a step back and think long and hard
> about this from the system usability perspective - I seriously
> doubt that we will ever see any kind of solution to this if we
> continue to progress with "we'll sort it in userspace some day."
I admit that when I first came across the MC idea a couple of years ago,
my first impression was that it put a lot of burden on the user to
have detailed knowledge of the system in question. But I don't think
that is a problem given good documentation, and most people who have a
need to use a specific MC driver will already have that knowledge.
Steve
On 03/11/2017 10:59 AM, Russell King - ARM Linux wrote:
> On Sat, Mar 11, 2017 at 10:54:55AM -0800, Steve Longerbeam wrote:
>>
>>
>> On 03/11/2017 10:45 AM, Russell King - ARM Linux wrote:
>>> I really don't think expecting the user to understand and configure
>>> the pipeline is a sane way forward. Think about it - should the
>>> user need to know that, because they have a bayer-only CSI data
>>> source, there is only one possible path, and that if they try to
>>> configure a different path, things will just error out?
>>>
>>> For the case of an imx219 connected to an iMX6, it really is as simple as
>>> "there is only one possible path", and all the complexity of the media
>>> interfaces/subdevs is completely unnecessary. Every other block in
>>> the graph is just noise.
>>>
>>> The fact is that these dot graphs show a complex picture, but reality
>>> is somewhat different - there are only relatively few paths available
>>> depending on the connected source, and the rest of the paths are
>>> completely useless.
>>>
>>
>> I totally disagree there. Raw bayer requires passthrough, yes, but for
>> all other media bus formats on a MIPI CSI-2 bus, and all other media
>> bus formats on 8-bit parallel buses, the conversion pipelines can be
>> used for scaling, CSC, rotation, and motion-compensated de-interlacing.
>
> ... which only makes sense _if_ your source can produce those formats.
> We don't actually disagree on that.
...and there are lots of those sources! You should try getting out of
your imx219 shell some time, and have a look around! :)
>
> Let me re-state. If the source can _only_ produce bayer, then there is
> _only_ _one_ possible path, and all the overhead of the media controller
> stuff is totally unnecessary.
>
> Or, are you going to tell me that the user should have the right to
> configure paths through the iMX6 hardware that are not permitted by the
> iMX6 manuals for the data format being produced by the sensor?
Anyway, no, the user is not allowed to configure a path that is not
allowed by the hardware, such as attempting to pass raw bayer through
an Image Converter path.
I guess you are simply commenting that for users of bayer sensors, the
other pipelines can be "confusing". But I trust you're not saying those
other pipelines should therefore not be present, which would be a
completely nutty argument.
Steve
On Sat, Mar 11, 2017 at 07:31:18PM -0800, Steve Longerbeam wrote:
>
>
> On 03/11/2017 10:59 AM, Russell King - ARM Linux wrote:
> >On Sat, Mar 11, 2017 at 10:54:55AM -0800, Steve Longerbeam wrote:
> >>
> >>
> >>On 03/11/2017 10:45 AM, Russell King - ARM Linux wrote:
> >>>I really don't think expecting the user to understand and configure
> >>>the pipeline is a sane way forward. Think about it - should the
> >>>user need to know that, because they have a bayer-only CSI data
> >>>source, there is only one possible path, and that if they try to
> >>>configure a different path, things will just error out?
> >>>
> >>>For the case of an imx219 connected to an iMX6, it really is as simple as
> >>>"there is only one possible path", and all the complexity of the media
> >>>interfaces/subdevs is completely unnecessary. Every other block in
> >>>the graph is just noise.
> >>>
> >>>The fact is that these dot graphs show a complex picture, but reality
> >>>is somewhat different - there are only relatively few paths available
> >>>depending on the connected source, and the rest of the paths are
> >>>completely useless.
> >>>
> >>
> >>I totally disagree there. Raw bayer requires passthrough, yes, but for
> >>all other media bus formats on a MIPI CSI-2 bus, and all other media
> >>bus formats on 8-bit parallel buses, the conversion pipelines can be
> >>used for scaling, CSC, rotation, and motion-compensated de-interlacing.
> >
> >... which only makes sense _if_ your source can produce those formats.
> >We don't actually disagree on that.
>
> ...and there are lots of those sources! You should try getting out of
> your imx219 shell some time, and have a look around! :)
If you think that, you are insulting me. I've been thinking about this
from the "big picture" point of view. If you think I'm looking at this
from only the bayer point of view, you're wrong.
Given what Mauro has said, I'm convinced that the media controller stuff
is a complete failure for usability, and adding further drivers using it
is a mistake.
I counter your accusation by saying that you are actually so focused on
the media controller way of doing things that you can't see the bigger
picture here.
So, tell me how the user can possibly use iMX6 video capture without
resorting to opening up a terminal and using media-ctl to manually
configure the pipeline. How is the user going to control the source
device without using media-ctl to find the subdev node, and then using
v4l2-ctl on it. How is the user supposed to know which /dev/video*
node they should be opening with their capture application?
If you can actually respond to the points that I've been raising about
end user usability, then we can have a discussion.
--
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.
On 03/11/2017 11:37 PM, Russell King - ARM Linux wrote:
> On Sat, Mar 11, 2017 at 07:31:18PM -0800, Steve Longerbeam wrote:
>>
>>
>> On 03/11/2017 10:59 AM, Russell King - ARM Linux wrote:
>>> On Sat, Mar 11, 2017 at 10:54:55AM -0800, Steve Longerbeam wrote:
>>>>
>>>>
>>>> On 03/11/2017 10:45 AM, Russell King - ARM Linux wrote:
>>>>> I really don't think expecting the user to understand and configure
>>>>> the pipeline is a sane way forward. Think about it - should the
>>>>> user need to know that, because they have a bayer-only CSI data
>>>>> source, there is only one possible path, and that if they try to
>>>>> configure a different path, things will just error out?
>>>>>
>>>>> For the case of an imx219 connected to an iMX6, it really is as simple as
>>>>> "there is only one possible path", and all the complexity of the media
>>>>> interfaces/subdevs is completely unnecessary. Every other block in
>>>>> the graph is just noise.
>>>>>
>>>>> The fact is that these dot graphs show a complex picture, but reality
>>>>> is somewhat different - there are only relatively few paths available
>>>>> depending on the connected source, and the rest of the paths are
>>>>> completely useless.
>>>>>
>>>>
>>>> I totally disagree there. Raw bayer requires passthrough, yes, but for
>>>> all other media bus formats on a MIPI CSI-2 bus, and all other media
>>>> bus formats on 8-bit parallel buses, the conversion pipelines can be
>>>> used for scaling, CSC, rotation, and motion-compensated de-interlacing.
>>>
>>> ... which only makes sense _if_ your source can produce those formats.
>>> We don't actually disagree on that.
>>
>> ...and there are lots of those sources! You should try getting out of
>> your imx219 shell some time, and have a look around! :)
>
> If you think that, you are insulting me. I've been thinking about this
> from the "big picture" point of view. If you think I'm looking at this
> from only the bayer point of view, you're wrong.
No insult there, you have my utmost respect, Russell. Me gives you the
Ali-G "respec!" :)
It was just a light-hearted attempt at suggesting you might be too
entangled with the imx219 (or short on hardware access, which I can
certainly understand).
>
> Given what Mauro has said, I'm convinced that the media controller stuff
> is a complete failure for usability, and adding further drivers using it
> is a mistake.
>
I do agree with you that MC places a lot of burden on the user to
attain a lot of knowledge of the system's architecture. That's really
why I included that control inheritance patch, to ease the burden
somewhat.
On the other hand, I also think this just requires that MC drivers have
very good user documentation.
And my other point is, I think most people who have a need to work with
the media framework on a particular platform will likely already be
quite familiar with that platform.
> I counter your accusation by saying that you are actually so focused on
> the media controller way of doing things that you can't see the bigger
> picture here.
>
Yeah I've been too mired in the details of this driver.
> So, tell me how the user can possibly use iMX6 video capture without
> resorting to opening up a terminal and using media-ctl to manually
> configure the pipeline. How is the user going to control the source
> device without using media-ctl to find the subdev node, and then using
> v4l2-ctl on it. How is the user supposed to know which /dev/video*
> node they should be opening with their capture application?
The media graph for imx6 is fairly self-explanatory in my opinion.
Yes, that graph has to be generated, but with a simple 'media-ctl
--print-dot' I don't see how that is difficult for the user.
The graph makes it quite clear which subdev node belongs to which
entity.
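For anyone who hasn't done it, the whole round trip is two commands - a
sketch assuming only that v4l-utils and graphviz are installed:

  # Dump the media graph and render it to an image with graphviz.
  media-ctl -d /dev/media0 --print-dot > imx6.dot
  dot -Tpng imx6.dot > imx6.png

The rendered graph labels each entity with its /dev/v4l-subdev* or
/dev/video* node, which answers the "which node is which" question.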
As for which /dev/videoX node to use, I hope I made it fairly clear
in the user doc what functions each node performs. But I will review
the doc again and make sure it's been made explicitly clear.
>
> If you can actually respond to the points that I've been raising about
> end user usability, then we can have a discussion.
Right, I haven't added my input to the middleware discussions (libv4l,
v4lconvert, and the auto-pipeline-configuration library work). I can
only say at this point that v4lconvert does indeed sound broken w.r.t.
bayer formats from your description. But it also sounds like an isolated
problem that just needs a patch to allow passing bayer through without
software conversion.
I wish I had the IMX219 to help you debug these bayer issues. I don't
have any bayer sources.
In summary, I do like the media framework, it's a good abstraction of
hardware pipelines. It does require a lot of system level knowledge to
configure, but as I said that is a matter of good documentation.
Steve
On Sun 2017-03-12 07:37:45, Russell King - ARM Linux wrote:
> On Sat, Mar 11, 2017 at 07:31:18PM -0800, Steve Longerbeam wrote:
> >
> >
> > On 03/11/2017 10:59 AM, Russell King - ARM Linux wrote:
> > >On Sat, Mar 11, 2017 at 10:54:55AM -0800, Steve Longerbeam wrote:
> > >>
> > >>
> > >>On 03/11/2017 10:45 AM, Russell King - ARM Linux wrote:
> > >>>I really don't think expecting the user to understand and configure
> > >>>the pipeline is a sane way forward. Think about it - should the
> > >>>user need to know that, because they have a bayer-only CSI data
> > >>>source, there is only one possible path, and that if they try to
> > >>>configure a different path, things will just error out?
> > >>>
> > >>>For the case of an imx219 connected to an iMX6, it really is as simple as
> > >>>"there is only one possible path", and all the complexity of the media
> > >>>interfaces/subdevs is completely unnecessary. Every other block in
> > >>>the graph is just noise.
> > >>>
> > >>>The fact is that these dot graphs show a complex picture, but reality
> > >>>is somewhat different - there are only relatively few paths available
> > >>>depending on the connected source, and the rest of the paths are
> > >>>completely useless.
> > >>>
> > >>
> > >>I totally disagree there. Raw bayer requires passthrough, yes, but for
> > >>all other media bus formats on a MIPI CSI-2 bus, and all other media
> > >>bus formats on 8-bit parallel buses, the conversion pipelines can be
> > >>used for scaling, CSC, rotation, and motion-compensated de-interlacing.
> > >
> > >... which only makes sense _if_ your source can produce those formats.
> > >We don't actually disagree on that.
> >
> > ...and there are lots of those sources! You should try getting out of
> > your imx219 shell some time, and have a look around! :)
>
> If you think that, you are insulting me. I've been thinking about this
> from the "big picture" point of view. If you think I'm looking at this
> from only the bayer point of view, you're wrong.
Can you stop this insult nonsense?
> Given what Mauro has said, I'm convinced that the media controller stuff
> is a complete failure for usability, and adding further drivers using it
> is a mistake.
Hmm. But you did not present any alternative. It seems some hardware is
simply complex. So either we don't add complex drivers (_that_ would
be a mistake), or some userspace solution will be
needed. A shell script running media-ctl does not seem that hard.
> So, tell me how the user can possibly use iMX6 video capture without
> resorting to opening up a terminal and using media-ctl to manually
> configure the pipeline. How is the user going to control the source
> device without using media-ctl to find the subdev node, and then using
> v4l2-ctl on it. How is the user supposed to know which /dev/video*
> node they should be opening with their capture application?
Complex hardware sometimes requires userspace configuration. Running a
shell script on startup does not seem that hard.
And maybe we could do some kind of default setup in the kernel, but that
does not really solve the problem.
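The kind of boot-time script being suggested could be as small as the
following sketch - the media node, entity names and format are all made
up for illustration and would have to match the real topology:

  #!/bin/sh
  # One-shot pipeline setup at boot; all names/values are illustrative.
  MDEV=/dev/media0
  media-ctl -d $MDEV --links '"sensor":0->"csi":0[1]'
  media-ctl -d $MDEV --set-v4l2 '"sensor":0 [fmt:UYVY8_2X8/1280x720]'

After that, a standard V4L2 application can open the corresponding
/dev/video* node as usual.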
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
On Sat 2017-03-11 23:14:56, Russell King - ARM Linux wrote:
> On Sat, Mar 11, 2017 at 08:25:49AM -0300, Mauro Carvalho Chehab wrote:
> > This situation has existed since 2009. If I remember well, you tried to write
> > such a generic plugin in the past, but never finished it, apparently because
> > it is too complex. Others have tried too over the years.
> >
> > The last attempt was made by Jacek, trying to cover just the exynos4 driver.
> > Yet even such a limited-scope plugin was not good enough, as it was never
> > merged upstream. Currently, there are no such plugins upstream.
> >
> > If we can't even merge a plugin that solves it for just *one* driver,
> > I have no hope that we'll be able to do it for the generic case.
>
> This is what really worries me right now about the current proposal for
> iMX6. What's being proposed is to make the driver exclusively MC-based.
>
> What that means is that existing applications are _not_ going to work
> until we have some answer for libv4l2, and from what you've said above,
> it seems that this has been attempted multiple times over the last _8_
> years, and each time it's failed.
Yeah. We need a mid-layer between legacy applications and MC
devices. Such a layer does not exist in userspace or in the kernel.
> Loading the problem onto the user in the hope that the user knows
> enough to properly configure it also doesn't work - who is going to
> educate the user about the various quirks of the hardware they're
> dealing with?
We have docs. Users can write shell scripts. Still, a mid-layer would be
nice.
> So, the problem space we have here is absolutely huge, and merely
> having a plugin that activates when you open a /dev/video* node
> really doesn't solve it.
>
> All in all, I really don't think "let's hope someone writes a v4l2
> plugin to solve it" is ever going to be successful. I don't even
> see that there will ever be a userspace application that is anything
> more than a representation of the dot graphs that users can use to
> manually configure the capture system with system knowledge.
>
> I think everyone needs to take a step back and think long and hard
> about this from the system usability perspective - I seriously
> doubt that we will ever see any kind of solution to this if we
> continue to progress with "we'll sort it in userspace some day."
A mid-layer is difficult... there are _hundreds_ of possible
pipeline setups. Whether it should live in the kernel or in userspace is a
question... but I don't think having it in the kernel helps in any way.
Best regards,
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
On Sun, 12 Mar 2017 10:56:53 -0700,
Steve Longerbeam <[email protected]> wrote:
> On 03/11/2017 11:37 PM, Russell King - ARM Linux wrote:
> > On Sat, Mar 11, 2017 at 07:31:18PM -0800, Steve Longerbeam wrote:
> > Given what Mauro has said, I'm convinced that the media controller stuff
> > is a complete failure for usability, and adding further drivers using it
> > is a mistake.
I never said that. The thing is that the V4L2 API was designed in
1999, when video hardware was way simpler: just one DMA engine
on a PCI device, one video/audio input switch and a few video inputs.
In those days, setting up a pipeline on such devices was simple, and could
be done via the VIDIOC_*_INPUT ioctls.
Nowadays, the hardware used on SoC devices is way more complex.
SoC devices come with several DMA engines for buffer transfers, plus
video transform blocks whose pipeline can be set dynamically.
The MC API is needed to allow setting up such a complex pipeline, as
VIDIOC_*_INPUT cannot cope with that complexity.
The subdev API solves a different issue. On a "traditional" device,
we usually have a pipeline like:
<video input> ==> <processing> ==> /dev/video0
Here, <processing> controls something in the device (like
brightness and/or resolution); if you change something at the /dev/video0
node, it is clear that the <processing> block should handle it.
On complex devices, with a pipeline like:
<camera> ==> <processing0> ==> <CSI bus> ==> <processing1> ==> /dev/video0
If you send a command to adjust something at /dev/video0, it is
not clear to the device driver whether to do it at processing0 or at
processing1. Ok, the driver can decide, but this can be sub-optimal.
Yet, several drivers do that. For example, with em28xx-based drivers
several parameters can be adjusted either at the em28xx driver or at
the video decoder driver (saa711x). There's logic inside the driver
that decides it. The pipeline there is fixed, though, so it is
easy to hardcode the logic for that.
So, I've no doubt that both MC and subdev APIs are needed when full
hardware control is required.
I don't know how much flexibility the i.MX6 hardware gives,
nor whether all of that flexibility is needed for most use cases.
If I were to code a driver for such hardware, though, I would try to
provide a subset of the functionality that would work without the
subdev API, allowing it to work with standard V4L applications.
That doesn't sound hard to do, as the driver may limit the pipelines
to a subset that would make sense, in order to make it easier for the
driver to take the right decision about where to send a control
to set up some parameter.
> I do agree with you that MC places a lot of burden on the user to
> attain a lot of knowledge of the system's architecture.
Setting up the pipeline is not the hard part. One could write a
script to do that.
> That's really why I included that control inheritance patch, to
> ease the burden somewhat.
IMHO, that makes sense, as, once some script sets the pipeline, any
V4L2 application can work, if you forward the controls to the right
I2C devices.
> On the other hand, I also think this just requires that MC drivers have
> very good user documentation.
No, it is not a matter of just documentation. It is a matter of having
to rewrite applications for each device, as the information exposed by
MC is not enough for an application to do what's needed.
For a generic application to work properly with MC, we need to
add more stuff to MC, in order to allow applications to know more about
the features of each subdevice and to do things like discovering what kind
of signal is present on each pad. We're calling it the "properties API"[1].
[1] we discussed that on the ML and at the MC workshop:
https://linuxtv.org/news.php?entry=2015-08-17.mchehab
Unfortunately, nobody has sent any patches implementing it so far :-(
> And my other point is, I think most people who have a need to work with
> the media framework on a particular platform will likely already be
> quite familiar with that platform.
I disagree. The most popular platform device currently is the Raspberry Pi.
I doubt that many owners of an RPi + camera module know anything
about MC. They just use Raspberry's official driver, which just provides
the V4L2 interface.
I have a strong opinion that, for hardware like RPi, just the V4L2
API is enough for more than 90% of the cases.
> The media graph for imx6 is fairly self-explanatory in my opinion.
> Yes that graph has to be generated, but just with a simple 'media-ctl
> --print-dot', I don't see how that is difficult for the user.
Again, IMHO, the problem is not how to set up the pipeline but, instead,
the need to forward controls to the subdevices.
To use a camera, the user needs to set a number of controls for the
image to make sense (brightness, contrast, focus, etc.). If the driver
doesn't forward those controls to the subdevs, an application like
"camorama" won't actually work for real, as the user won't be able
to adjust those parameters via the GUI.
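Today, without such forwarding, those adjustments have to be made on the
subdev nodes directly - a sketch, with the node path and the control
name/value chosen arbitrarily for illustration:

  # Find which controls a subdev exposes, then set one.
  v4l2-ctl -d /dev/v4l-subdev2 --list-ctrls
  v4l2-ctl -d /dev/v4l-subdev2 --set-ctrl=contrast=64

That is precisely the step a GUI like camorama has no way to perform.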
Thanks,
Mauro
On Sun, 12 Mar 2017 22:29:04 +0100,
Pavel Machek <[email protected]> wrote:
> A mid-layer is difficult... there are _hundreds_ of possible
> pipeline setups. Whether it should live in the kernel or in userspace is a
> question... but I don't think having it in the kernel helps in any way.
A mid-layer is difficult, because we either need to feed some
library with knowledge of all kernel drivers or we need to improve
the MC API to provide more details.
For example, several drivers used to expose entities via the
generic MEDIA_ENT_T_DEVNODE to represent entities of different
types. See, for example, entities 1, 5 and 7 (and others) at:
https://mchehab.fedorapeople.org/mc-next-gen/igepv2_omap3isp.png
Device-specific code could either hardcode the entity number
or check the entity strings to add some logic to set up
controls on those "unknown" entities, but a generic app won't be able
to do anything with them, as it doesn't know what function(s) such
entities provide.
Also, some devices, like the analog TV decoder at:
https://mchehab.fedorapeople.org/mc-next-gen/au0828_test/mc_nextgen_test-output.png
may have pads with different signals on their outputs. In this
case, pads 1 and 2 provide video, while pad 3 provides audio using a
different type of output.
The application needs to know this kind of thing in order to be able
to properly set up the pipeline [1].
[1] this specific device has a fixed pipeline, but I'm aware of SoCs with
flexible pipelines that have this kind of issue (nobody has upstreamed the
V4L part of those devices yet).
Thanks,
Mauro
On 03/12/2017 06:56 PM, Steve Longerbeam wrote:
>
>
> On 03/11/2017 11:37 PM, Russell King - ARM Linux wrote:
>> On Sat, Mar 11, 2017 at 07:31:18PM -0800, Steve Longerbeam wrote:
>>>
>>>
>>> On 03/11/2017 10:59 AM, Russell King - ARM Linux wrote:
>>>> On Sat, Mar 11, 2017 at 10:54:55AM -0800, Steve Longerbeam wrote:
>>>>>
>>>>>
>>>>> On 03/11/2017 10:45 AM, Russell King - ARM Linux wrote:
>>>>>> I really don't think expecting the user to understand and configure
>>>>>> the pipeline is a sane way forward. Think about it - should the
>>>>>> user need to know that, because they have a bayer-only CSI data
>>>>>> source, that there is only one path possible, and if they try to
>>>>>> configure a different path, then things will just error out?
>>>>>>
>>>>>> For the case of imx219 connected to iMX6, it really is as simple as
>>>>>> "there is only one possible path" and all the complexity of the media
>>>>>> interfaces/subdevs is completely unnecessary. Every other block in
>>>>>> the graph is just noise.
>>>>>>
>>>>>> The fact is that these dot graphs show a complex picture, but reality
>>>>>> is somewhat different - there's only relatively few paths available
>>>>>> depending on the connected source and the rest of the paths are
>>>>>> completely useless.
>>>>>>
>>>>>
>>>>> I totally disagree there. Raw bayer requires passthrough yes, but for
>>>>> all other media bus formats on a mipi csi-2 bus, and all other media
>>>>> bus formats on 8-bit parallel buses, the conversion pipelines can be
>>>>> used for scaling, CSC, rotation, and motion-compensated de-interlacing.
>>>>
>>>> ... which only makes sense _if_ your source can produce those formats.
>>>> We don't actually disagree on that.
>>>
>>> ...and there are lots of those sources! You should try getting out of
>>> your imx219 shell some time, and have a look around! :)
>>
>> If you think that, you are insulting me. I've been thinking about this
>> from the "big picture" point of view. If you think I'm only thinking
>> about this from only the bayer point of view, you're wrong.
>
> No insult there, you have my utmost respect, Russell. Me gives you the
> Ali-G "respec!" :)
>
> It was just a light-hearted attempt at suggesting you might be too
> entangled with the imx219 (or short on hardware access, which I can
> certainly understand).
>
>
>>
>> Given what Mauro has said, I'm convinced that the media controller stuff
>> is a complete failure for usability, and adding further drivers using it
>> is a mistake.
>>
>
> I do agree with you that MC places a lot of burden on the user to
> attain a lot of knowledge of the system's architecture. That's really
> why I included that control inheritance patch, to ease the burden
> somewhat.
>
> On the other hand, I also think this just requires that MC drivers have
> very good user documentation.
>
> And my other point is, I think most people who have a need to work with
> the media framework on a particular platform will likely already be
> quite familiar with that platform.
>
>> I counter your accusation by saying that you are actually so focused on
>> the media controller way of doing things that you can't see the bigger
>> picture here.
>>
>
> Yeah I've been too mired in the details of this driver.
>
>
>> So, tell me how the user can possibly use iMX6 video capture without
>> resorting to opening up a terminal and using media-ctl to manually
>> configure the pipeline. How is the user going to control the source
>> device without using media-ctl to find the subdev node, and then using
>> v4l2-ctl on it. How is the user supposed to know which /dev/video*
>> node they should be opening with their capture application?
>
> The media graph for imx6 is fairly self-explanatory in my opinion.
> Yes that graph has to be generated, but just with a simple 'media-ctl
> --print-dot', I don't see how that is difficult for the user.
>
> The graph makes it quite clear which subdev node belongs to which
> entity.
>
> As for which /dev/videoX node to use, I hope I made it fairly clear
> in the user doc what functions each node performs. But I will review
> the doc again and make sure it's been made explicitly clear.
>
>
>>
>> If you can actually respond to the points that I've been raising about
>> end user usability, then we can have a discussion.
>
> Right, I haven't added my input to the middle-ware discussions (libv4l,
> v4lconvert, and the auto-pipeline-configuration library work). I can
> only say at this point that v4lconvert does indeed sound broken w.r.t
> bayer formats from your description. But it also sounds like an isolated
> problem and it just needs a patch to allow passing bayer through without
> software conversion.
>
> I wish I had the IMX219 to help you debug these bayer issues. I don't
> have any bayer sources.
>
> In summary, I do like the media framework, it's a good abstraction of
> hardware pipelines. It does require a lot of system level knowledge to
> configure, but as I said that is a matter of good documentation.
And the reason we went into this direction is that the end-users that use
these SoCs with complex pipelines actually *need* this functionality. Which
is also part of the reason why work on improved userspace support gets
little attention: they don't need to have a plugin that allows generic V4L2
applications to work (at least with simple scenarios).
If they needed it, it would have been done (and paid for) before.
And improving userspace support for this isn't even at the top of our prio
list: getting the request API and stateless codec support in is our highest
priority. And that's a big job as well.
If you want to blame anyone for this, blame Nokia who set fire to their linux-based
phones and thus to the funding for this work.
Yes, I am very unhappy with the current state, but given the limited resources
I understand why it is as it is. I will try to get time to work on this this summer,
but there is no guarantee that that will be granted. If someone else is interested
in doing this and can get funding for it, then that would be very welcome.
Regards,
Hans
On Mon, Mar 13, 2017 at 11:44:50AM +0100, Hans Verkuil wrote:
> On 03/12/2017 06:56 PM, Steve Longerbeam wrote:
> > In summary, I do like the media framework, it's a good abstraction of
> > hardware pipelines. It does require a lot of system level knowledge to
> > configure, but as I said that is a matter of good documentation.
>
> And the reason we went into this direction is that the end-users that use
> these SoCs with complex pipelines actually *need* this functionality. Which
> is also part of the reason why work on improved userspace support gets
> little attention: they don't need to have a plugin that allows generic V4L2
> applications to work (at least with simple scenarios).
If you stop inheriting controls from the capture sensor to the v4l2
capture device, then this breaks - generic v4l2 applications are not
going to be able to show the controls, because they're not visible at
the v4l2 capture device anymore. They're only visible through the
subdev interfaces, which these generic applications know nothing about.
> If you want to blame anyone for this, blame Nokia who set fire to
> their linux-based phones and thus to the funding for this work.
No, I think that's completely unfair to Nokia. If the MC approach is
the way you want to go, you should be thanking Nokia for the amount of
effort that they have put in to it, and recognising that it was rather
unfortunate that the market had changed, which meant that they weren't
able to continue.
No one has any right to require any of us to finish what we start
coding up in open source, unless there is a contractual obligation in
place. That goes for Nokia too.
Nokia's decision had ramifications far and wide (resulting in knock on
effects in TI and further afield), so don't think for a moment I wasn't
affected by what happened in Nokia. Even so, it was a decision for
Nokia to make, they had the right to make it, and we have no right to
attribute "blame" to Nokia for having made that decision.
To even suggest that Nokia should be blamed is absurd.
Open source gives rights to everyone. It gives rights to contribute
and use, but it also gives rights to walk away without notice (remember
the "as is" and "no warranty" clauses?)
--
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.
On 03/13/2017 11:58 AM, Russell King - ARM Linux wrote:
> On Mon, Mar 13, 2017 at 11:44:50AM +0100, Hans Verkuil wrote:
>> On 03/12/2017 06:56 PM, Steve Longerbeam wrote:
>>> In summary, I do like the media framework, it's a good abstraction of
>>> hardware pipelines. It does require a lot of system level knowledge to
>>> configure, but as I said that is a matter of good documentation.
>>
>> And the reason we went into this direction is that the end-users that use
>> these SoCs with complex pipelines actually *need* this functionality. Which
>> is also part of the reason why work on improved userspace support gets
>> little attention: they don't need to have a plugin that allows generic V4L2
>> applications to work (at least with simple scenarios).
>
> If you stop inheriting controls from the capture sensor to the v4l2
> capture device, then this breaks - generic v4l2 applications are not
> going to be able to show the controls, because they're not visible at
> the v4l2 capture device anymore. They're only visible through the
> subdev interfaces, which these generic applications know nothing about.
>
>> If you want to blame anyone for this, blame Nokia who set fire to
>> their linux-based phones and thus to the funding for this work.
>
> No, I think that's completely unfair to Nokia. If the MC approach is
> the way you want to go, you should be thanking Nokia for the amount of
> effort that they have put in to it, and recognising that it was rather
> unfortunate that the market had changed, which meant that they weren't
> able to continue.
>
> No one has any right to require any of us to finish what we start
> coding up in open source, unless there is a contractual obligation in
> place. That goes for Nokia too.
>
> Nokia's decision had ramifications far and wide (resulting in knock on
> effects in TI and further afield), so don't think for a moment I wasn't
> affected by what happened in Nokia. Even so, it was a decision for
> Nokia to make, they had the right to make it, and we have no right to
> attribute "blame" to Nokia for having made that decision.
>
> To even suggest that Nokia should be blamed is absurd.
>
> Open source gives rights to everyone. It gives rights to contribute
> and use, but it also gives rights to walk away without notice (remember
> the "as is" and "no warranty" clauses?)
Sorry, unfortunate choice of words. While it lasted they did great work.
But the reason why MC development stopped for quite some time (esp. the
work on userspace software) was that the funding from Nokia dried up.
Regards,
Hans
Em Mon, 13 Mar 2017 10:58:42 +0000
Russell King - ARM Linux <[email protected]> escreveu:
> On Mon, Mar 13, 2017 at 11:44:50AM +0100, Hans Verkuil wrote:
> > On 03/12/2017 06:56 PM, Steve Longerbeam wrote:
> > > In summary, I do like the media framework, it's a good abstraction of
> > > hardware pipelines. It does require a lot of system level knowledge to
> > > configure, but as I said that is a matter of good documentation.
> >
> > And the reason we went into this direction is that the end-users that use
> > these SoCs with complex pipelines actually *need* this functionality. Which
> > is also part of the reason why work on improved userspace support gets
> > little attention: they don't need to have a plugin that allows generic V4L2
> > applications to work (at least with simple scenarios).
>
> If you stop inheriting controls from the capture sensor to the v4l2
> capture device, then this breaks - generic v4l2 applications are not
> going to be able to show the controls, because they're not visible at
> the v4l2 capture device anymore. They're only visible through the
> subdev interfaces, which these generic applications know nothing about.
True. That's why, IMHO, the best approach is to do control inheritance when
there are use cases for generic applications and it is possible for
the driver to do it (e. g. when the pipeline is not so complex
as to prevent it from working).
As Hans said, for the drivers currently upstreamed at drivers/media,
there is currently very little interest in running generic apps
there, as they're meant to be used inside embedded hardware running
specialized applications.
I don't myself have any i.MX6 hardware. Yet, I believe that
a low-cost board like the SolidRun Hummingboard - which comes with a
CSI interface compatible with RPi camera modules - will likely
attract users who need to run generic applications on their
devices.
So, I believe that it makes sense for the i.MX6 driver to inherit
controls at the video devnode.
Thanks,
Mauro
On Mon, Mar 13, 2017 at 08:42:15AM -0300, Mauro Carvalho Chehab wrote:
> I don't myself have any i.MX6 hardware. Yet, I believe that
> a low-cost board like the SolidRun Hummingboard - which comes with a
> CSI interface compatible with RPi camera modules - will likely
> attract users who need to run generic applications on their
> devices.
As you've previously mentioned about camorama, I've installed it (I
run Ubuntu 16.04 with "gnome-flashback-metacity" on the HB) and I'm
able to use camorama to view the IMX219 camera sensor.
There are some gotchas, though:
* you need to start it on the command line, manually specifying
which /dev/video device to use, as it always wants to use
/dev/video0. With the CODA mem2mem driver loaded, this may not
be a camera device:
$ v4l2-ctl -d 0 --all
Driver Info (not using libv4l2):
Driver name : coda
Card type : CODA960
Bus info : platform:coda
Driver version: 4.11.0
* camorama seems to use the v4lconvert library, and judging by the
resulting image quality, which is rather pixelated, my guess is that
v4lconvert is using a basic algorithm to de-bayer the data. It
also appears to only manage 7fps at best. The gstreamer neon
debayer plugin appears to be faster and higher quality.
* it provides five controls - brightness/contrast/color/hue/white
balance - none of which is supported by the hardware (the IMX219
supports gain and analogue gain only.) These controls appear to
have no effect on the resulting image.
However, qv4l2 (with the segfault bug in
GeneralTab::updateFrameSize() fixed - m_frameSize, m_frameWidth and
m_frameHeight can be NULL, which happens if GeneralTab::inputSection()
is not called) provides access to all controls.
The USB uvcvideo camera achieves around 24fps with functional controls
in camorama (mainly because it provides those exact controls to
userspace.)
--
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.
Hi Mauro,
On Sat, Mar 11, 2017 at 08:25:49AM -0300, Mauro Carvalho Chehab wrote:
> Em Sat, 11 Mar 2017 00:37:14 +0200
> Sakari Ailus <[email protected]> escreveu:
>
> > Hi Mauro (and others),
> >
> > On Fri, Mar 10, 2017 at 12:53:42PM -0300, Mauro Carvalho Chehab wrote:
> > > Em Fri, 10 Mar 2017 15:20:48 +0100
> > > Hans Verkuil <[email protected]> escreveu:
> > >
> > > >
> > > > > As I've already mentioned, from talking about this with Mauro, it seems
> > > > > Mauro is in agreement with permitting the control inheritance... I wish
> > > > > Mauro would comment for himself, as I can't quote our private discussion
> > > > > on the subject.
> > > >
> > > > I can't comment either, not having seen his mail and reasoning.
> > >
> > > The rationale is that we should support the simplest use cases first.
> > >
> > > In the case of the first MC-based driver (and several subsequent
> > > ones), the simplest use case required MC, as it was meant to support
> > > a custom-made sophisticated application that required fine control
> > > on each component of the pipeline and to allow their advanced
> > > proprietary AAA userspace-based algorithms to work.
> >
> > The first MC based driver (omap3isp) supports what the hardware can do, it
> > does not support applications as such.
>
> All media drivers support a subset of what the hardware can do. The
> question is if such subset covers the use cases or not.
Can you name a feature in the OMAP 3 ISP that is not and cannot be
supported using the current driver model (MC + V4L2 sub-device + V4L2) that
could be even remotely useful?
>
> The current MC-based drivers (except for uvc) took a patch to offer a
> more advanced API, to allow direct control to each IP module, as it was
> said, by the time we merged the OMAP3 driver, that, for the N9/N900 camera
> to work, it was mandatory to access the pipeline's individual components.
>
> Such approach require that some userspace software will have knowledge
> about some hardware details, in order to setup pipelines and send controls
> to the right components. That makes really hard to have a generic user
> friendly application to use such devices.
The effect you described above is true, but I disagree about the cause. The
cause is that the hardware is more complex and variable than what has been
supported previously, and providing a generic interface for accessing such
hardware will require a more complex interface.
The hardware we have today and the use cases we have today are more --- not
less --- complex and nuanced than when the Media controller was merged back
in 2010. Arguably there is thus more need for the functionality it provides,
not less.
>
> Non-MC based drivers control the hardware via a portable interface which
> doesn't require any knowledge about the hardware specifics, as either the
> Kernel or some firmware at the device will set any needed pipelines.
>
> In the case of V4L2 controls, when there's no subdev API, the main
> driver (e. g. the driver that creates the /dev/video nodes) sends a
> multicast message to all bound I2C drivers. The driver(s) that need
> them handle it. When the same control may be implemented on different
> drivers, the main driver sends a unicast message to just one driver[1].
>
> [1] There are several non-MC drivers that have multiple ways to
> control some things, like doing scaling or adjust volume levels at
> either the bridge driver or at a subdriver.
>
> There's nothing wrong with this approach: it works, it is simpler,
> it is generic. So, if it covers most use cases, why not allow it
> for use cases where finer control is not a requirement?
Drivers are written to support hardware, not particular use cases. Use case
specific knowledge should be present only in applications, not in drivers.
Doing it otherwise will lead to use case specific drivers and more driver
code to maintain for any particular piece of hardware.
An individual could possibly choose the right driver for his / her use case,
but this approach could hardly work for Linux distribution kernels.
The plain V4L2 interface is generic within its own scope: hardware can be
supported within the hardware model assumed by the interface. However, on
some devices this will end up being a small subset of what the hardware can
do. Besides that, when writing the driver, you need to decide in detail
what kind of subset that might be.
That's not something anyone writing a driver should need to confront.
>
> > Adding support to drivers for different "operation modes" --- this is
> > essentially what is being asked for --- is not an approach which could serve
> > either purpose (some functionality with simple interface vs. fully support
> > what the hardware can do, with interfaces allowing that) adequately in the
> > short or the long run.
>
> Why not?
Let's suppose that the omap3isp driver provided an "operation mode" for
simpler applications.
Would you continue to have a V4L2 video device per DMA engine? Without Media
controller it'd be rather confusing for applications, since the format (and
level of processing) requested defines the video node where the images are
captured.
Instead you'd probably want to have a single video node. For the driver to
expose just a single device node, should that be a Kconfig option or a
module parameter, for instance?
I have to say I wouldn't even be particularly interested to know how many
driver changes you'd have to implement to achieve that and how
unmaintainable the end result would be. Consider inflicting the same on all
drivers.
That's just *one* of a large number of things you'd need to change in order
to support plain V4L2 applications from the driver, while still continuing to
support the current interface.
With the help of a user space library, we can show the omap3isp device as a
single video node with a number of inputs (sensors) that can provide some
level of service to the user. I say "can" because all that's missing is
the implementation of such a library.
It may be hardware specific or not. A hardware specific one may produce
better results than best effort since it may use knowledge of the hardware
not available through the kernel interfaces.
>
> > If we are missing pieces in the puzzle --- in this case the missing pieces
> > in the puzzle are a generic pipeline configuration library and another
> > library that, with the help of pipeline autoconfiguration would implement
> > "best effort" service for regular V4L2 on top of the MC + V4L2 subdev + V4L2
> > --- then these pieces need to be implemented. The solution is
> > *not* to attempt to support different types of applications in each driver
> > separately. That will make writing drivers painful, error prone and is
> > unlikely ever deliver what either purpose requires.
> >
> > So let's continue to implement the functionality that the hardware supports.
> > Making a different choice here is bound to create a lasting conflict between
> > having to change kernel interface behaviour and the requirement of
> > supporting new functionality that hasn't been previously thought of, pushing
> > away SoC vendors from V4L2 ecosystem. This is what we all do want to avoid.
>
> This situation is there since 2009. If I remember well, you tried to write
> such generic plugin in the past, but never finished it, apparently because
> it is too complex. Others tried too over the years.
I'd argue I know better what happened with that attempt than you do. I had a
prototype of a generic pipeline configuration library but due to various
reasons I haven't been able to continue working on that since around 2012.
The prototype could figure out that the ccdc -> resizer path isn't usable
with raw sensors due to the lack of common formats between the two,
something that was argued to be too complex to implement.
I'm not aware of anyone else who would have tried that. Are you?
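(For reference, the kind of check described above maps onto the V4L2
subdev uAPI quite directly. A rough userspace sketch of "do these two
pads share at least one media bus format?" - pads_share_format is an
invented name, and error handling is omitted:)

    #include <sys/ioctl.h>
    #include <linux/v4l2-subdev.h>

    /* src_fd/sink_fd are open subdev nodes (e.g. ccdc and resizer);
     * returns 1 if the two pads have a media bus code in common */
    static int pads_share_format(int src_fd, __u32 src_pad,
                                 int sink_fd, __u32 sink_pad)
    {
            struct v4l2_subdev_mbus_code_enum src = {
                    .pad = src_pad,
                    .which = V4L2_SUBDEV_FORMAT_ACTIVE,
            };

            for (src.index = 0;
                 !ioctl(src_fd, VIDIOC_SUBDEV_ENUM_MBUS_CODE, &src);
                 src.index++) {
                    struct v4l2_subdev_mbus_code_enum sink = {
                            .pad = sink_pad,
                            .which = V4L2_SUBDEV_FORMAT_ACTIVE,
                    };

                    for (sink.index = 0;
                         !ioctl(sink_fd, VIDIOC_SUBDEV_ENUM_MBUS_CODE, &sink);
                         sink.index++)
                            if (src.code == sink.code)
                                    return 1;
            }
            return 0;
    }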
>
> The last trial was done by Jacek, trying to cover just the exynos4 driver.
> Yet, even such limited scope plugin was not good enough, as it was never
> merged upstream. Currently, there's no such plugins upstream.
>
> If we can't even merge a plugin that solves it for just *one* driver,
> I have no hope that we'll be able to do it for the generic case.
I believe Jacek ceased to work on that plugin in his day job; other than
that, there are some matters left to be addressed in his latest patchset.
Having provided feedback on that patchset, I don't see additional technical
problems that require solving before the patches can be merged. The
remaining matters seem to be actually fairly trivial.
>
> That's why I'm saying that I'm OK on merging any patch that would allow
> setting controls via the /dev/video interface on MC-based drivers when
> compiled without subdev API. I may also consider merging patches allowing
> to change the behavior on runtime, when compiled with subdev API.
>
> > As far as i.MX6 driver goes, it is always possible to implement i.MX6 plugin
> > for libv4l to perform this. This should be much easier than getting the
> > automatic pipe configuration library and the rest working, and as it is
> > custom for i.MX6, the resulting plugin may make informed technical choices
> > for better functionality.
>
> I wouldn't call "much easier" something that experienced media
> developers failed to do over the last 8 years.
>
> It is just the opposite: broadcasting a control via I2C is very easy:
> there are several examples about how to do that all over the media
> drivers.
That's one of the things Jacek's plugin actually does. This is still
functionality that *only* some user applications wish to have, and as
implemented in the plugin it works correctly without use case specific
semantics --- please see my comments on the patch picking the controls from
the pipeline.
Drivers can also make use of v4l2_device_for_each_subdev() to distribute
setting controls on different sub-devices. A few drivers actually do that.
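(For instance - a sketch only, with an invented bridge_inherit_ctrls
name - a bridge driver can merge every registered sub-device's control
handler into the handler it exposes on its video node:)

    #include <media/v4l2-ctrls.h>
    #include <media/v4l2-device.h>

    /* merge the control handlers of all registered sub-devices into
     * the handler exposed by the bridge's video node */
    static int bridge_inherit_ctrls(struct v4l2_device *v4l2_dev,
                                    struct v4l2_ctrl_handler *hdl)
    {
            struct v4l2_subdev *sd;
            int ret;

            v4l2_device_for_each_subdev(sd, v4l2_dev) {
                    ret = v4l2_ctrl_add_handler(hdl, sd->ctrl_handler, NULL);
                    if (ret)
                            return ret;
            }
            return 0;
    }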
>
> > Jacek has been working on such a plugin for
> > Samsung Exynos hardware, but I don't think he has quite finished it yet.
>
> As Jacek answered when questioned about the merge status:
>
> Hi Hans,
>
> On 11/03/2016 12:51 PM, Hans Verkuil wrote:
> > Hi all,
> >
> > Is there anything that blocks me from merging this?
> >
> > This plugin work has been ongoing for years and unless there are serious
> > objections I propose that this is merged.
> >
> > Jacek, is there anything missing that would prevent merging this?
>
> There were issues raised by Sakari during last review, related to
> the way how v4l2 control bindings are defined. That discussion wasn't
> finished, so I stayed by my approach. Other than that - I've tested it
> and it works fine both with GStreamer and my test app.
>
> After that, he sent a new version (v7.1), but never got reviews.
There are a number of mutually agreed but unaddressed comments against v7.
Also, v7.1 is just a single patch that does not address issues pointed out
in v7, but something Jacek wanted to fix himself.
In other words, there are comments to address but no patches to review.
Let's see how to best address them. I could possibly fix at least some of
those, but due to lack of hardware I have no ability to test the end result.
>
> > The original plan was and continues to be sound, it's just that there have
> > always been too few hands to implement it. :-(
>
> If there are no people to implement a plan, it doesn't matter how good
> the plan is, it won't work.
Do you have other proposals than what we have commonly agreed on? I don't
see other approaches that could satisfactorily address all the requirements
going forward.
That said, I do think we need to reinvigorate the efforts to get things
rolling again on supporting plain V4L2 applications on devices that are
controlled through the MC interface. These matters have received undeservedly
little attention in recent years.
>
> > > That's not true, for example, for the UVC driver. There, MC
> > > is optional, as it should be.
> >
> > UVC is different. The device simply provides additional information through
> > MC to the user but MC (or V4L2 sub-device interface) is not used for
> > controlling the device.
>
> It is not different. If the Kernel is compiled without the V4L2
> subdev interface, the i.MX6 driver (or whatever other driver)
> won't receive any control via the subdev interface. So, it has to
> handle the control logic via the only interface that
> supports it, e. g. via the video devnode.
Looking at the driver code and the Kconfig file, the i.MX6 driver depends on
CONFIG_MEDIA_CONTROLLER and uses the Media controller API. So if the Media
controller is disabled in the kernel configuration, the i.MX6 IPU driver won't
be compiled.
--
Kind regards,
Sakari Ailus
e-mail: [email protected] XMPP: [email protected]
Hi Sakari,
I started preparing a long argument about it, but gave up in favor of a
simpler one.
Em Mon, 13 Mar 2017 14:46:22 +0200
Sakari Ailus <[email protected]> escreveu:
> Drivers are written to support hardware, not particular use cases.
No, it is just the reverse: drivers and hardware are developed to
support use cases.
Btw, you should remember that the hardware is the full board, not just the
SoC. In practice, the board does limit the use cases: several provide a
single physical CSI connector, allowing just one sensor.
> > This situation is there since 2009. If I remember well, you tried to write
> > such generic plugin in the past, but never finished it, apparently because
> > it is too complex. Others tried too over the years.
>
> I'd argue I know better what happened with that attempt than you do. I had a
> prototype of a generic pipeline configuration library but due to various
> reasons I haven't been able to continue working on that since around 2012.
...
> > The last trial was done by Jacek, trying to cover just the exynos4 driver.
> > Yet, even such limited scope plugin was not good enough, as it was never
> > merged upstream. Currently, there's no such plugins upstream.
> >
> > If we can't even merge a plugin that solves it for just *one* driver,
> > I have no hope that we'll be able to do it for the generic case.
>
> I believe Jacek ceased to work on that plugin in his day job; other than
> that, there are some matters left to be addressed in his latest patchset.
The two above basically summarize the issue: the task of doing a generic
plugin in userspace, even for a single driver, is complex enough not to be
achievable within a reasonable timeline.
From 2009 to 2012, you were working on it, but didn't finish it.
Apparently, nobody worked on it between 2013-2014 (but I may be wrong, as
I didn't check when the generic plugin interface was added to libv4l).
In the case of Jacek's work, the first patch I was able to find was
written in Oct, 2014:
https://patchwork.kernel.org/patch/5098111/
(not sure what happened with the version 1).
The last e-mail about this subject was issued in Dec, 2016.
In summary, you had this task on your plate for 3 years for an OMAP3
plugin (where you have good expertise), and Jacek for 2 years,
for Exynos 4, where he should also have good knowledge.
Yet, with all those efforts, no concrete results were achieved, as none
of the plugins got merged.
Even if they were merged, if we keep the same mean time to develop a
libv4l plugin, that would mean that a plugin for i.MX6 could take 2-3
years to be developed.
There's a clear message on it:
- we shouldn't keep pushing for a solution via libv4l.
Thanks,
Mauro
On 03/14/2017 04:45 AM, Mauro Carvalho Chehab wrote:
> Hi Sakari,
>
> I started preparing a long argument about it, but gave up in favor of a
> simpler one.
>
> Em Mon, 13 Mar 2017 14:46:22 +0200
> Sakari Ailus <[email protected]> escreveu:
>
>> Drivers are written to support hardware, not particular use cases.
>
> No, it is just the reverse: drivers and hardware are developed to
> support use cases.
>
> Btw, you should remember that the hardware is the full board, not just the
> SoC. In practice, the board does limit the use cases: several provide a
> single physical CSI connector, allowing just one sensor.
>
>>> This situation is there since 2009. If I remember well, you tried to write
>>> such generic plugin in the past, but never finished it, apparently because
>>> it is too complex. Others tried too over the years.
>>
>> I'd argue I know better what happened with that attempt than you do. I had a
>> prototype of a generic pipeline configuration library but due to various
>> reasons I haven't been able to continue working on that since around 2012.
>
> ...
>
>>> The last trial was done by Jacek, trying to cover just the exynos4 driver.
>>> Yet, even such limited scope plugin was not good enough, as it was never
>>> merged upstream. Currently, there's no such plugins upstream.
>>>
>>> If we can't even merge a plugin that solves it for just *one* driver,
>>> I have no hope that we'll be able to do it for the generic case.
>>
>> I believe Jacek ceased to work on that plugin in his day job; other than
>> that, there are some matters left to be addressed in his latest patchset.
>
> The two above basically summarize the issue: the task of doing a generic
> plugin in userspace, even for a single driver, is complex enough not to be
> achievable within a reasonable timeline.
>
> From 2009 to 2012, you were working on it, but didn't finish it.
>
> Apparently, nobody worked on it between 2013-2014 (but I may be wrong, as
> I didn't check when the generic plugin interface was added to libv4l).
>
> In the case of Jacek's work, the first patch I was able to find was
> written in Oct, 2014:
> https://patchwork.kernel.org/patch/5098111/
> (not sure what happened with the version 1).
>
> The last e-mail about this subject was issued in Dec, 2016.
>
> In summary, you had this task on your plate for 3 years for an OMAP3
> plugin (where you have good expertise), and Jacek for 2 years,
> for Exynos 4, where he should also have good knowledge.
>
> Yet, with all those efforts, no concrete results were achieved, as none
> of the plugins got merged.
>
> Even if they were merged, if we keep the same mean time to develop a
> libv4l plugin, that would mean that a plugin for i.MX6 could take 2-3
> years to be developed.
>
> There's a clear message on it:
> - we shouldn't keep pushing for a solution via libv4l.
Or:
- userspace plugin development had a very low priority and
never got the attention it needed.
I know that's *my* reason. I rarely if ever looked at it. I always assumed
Sakari and/or Laurent would look at it. If this reason is also valid for
Sakari and Laurent, then it is no wonder nothing has happened in all that
time.
We're all very driver-development-driven, and userspace gets very little
attention in general. So before just throwing in the towel we should take
a good look at the reasons why there has been little or no development: is
it because of fundamental design defects, or because nobody paid attention
to it?
I strongly suspect it is the latter.
In addition, I suspect end-users of these complex devices don't really care
about a plugin: they want full control and won't typically use generic
applications. If they needed support for that, we'd have seen much more
interest. The main reason for having a plugin is to simplify testing and
to support use on cheap hobbyist devkits.
An additional complication is simply that it is hard to find fully supported
MC hardware. omap3 boards are hard to find these days, renesas boards are not
easy to get, freescale isn't the most popular either. Allwinner, mediatek,
amlogic, broadcom and qualcomm all have closed source implementations or no
implementation at all.
I know it took me a very long time before I had a working omap3.
So I am not at all surprised that little progress has been made.
Regards,
Hans
Em Tue, 14 Mar 2017 08:55:36 +0100
Hans Verkuil <[email protected]> escreveu:
> On 03/14/2017 04:45 AM, Mauro Carvalho Chehab wrote:
> > Hi Sakari,
> >
> > I started preparing a long argument about it, but gave up in favor of a
> > simpler one.
> >
> > Em Mon, 13 Mar 2017 14:46:22 +0200
> > Sakari Ailus <[email protected]> escreveu:
> >
> >> Drivers are written to support hardware, not particular use cases.
> >
> > No, it is just the reverse: drivers and hardware are developed to
> > support use cases.
> >
> > Btw, you should remember that the hardware is the full board, not just the
> > SoC. In practice, the board does limit the use cases: several provide a
> > single physical CSI connector, allowing just one sensor.
> >
> >>> This situation is there since 2009. If I remember well, you tried to write
> >>> such generic plugin in the past, but never finished it, apparently because
> >>> it is too complex. Others tried too over the years.
> >>
> >> I'd argue I know better what happened with that attempt than you do. I had a
> >> prototype of a generic pipeline configuration library but due to various
> >> reasons I haven't been able to continue working on that since around 2012.
> >
> > ...
> >
> >>> The last trial was done by Jacek, trying to cover just the exynos4 driver.
> >>> Yet, even such limited scope plugin was not good enough, as it was never
> >>> merged upstream. Currently, there's no such plugins upstream.
> >>>
> >>> If we can't even merge a plugin that solves it for just *one* driver,
> >>> I have no hope that we'll be able to do it for the generic case.
> >>
> >> I believe Jacek ceased to work on that plugin in his day job; other than
> >> that, there are some matters left to be addressed in his latest patchset.
> >
> > The two above basically summarize the issue: the task of doing a generic
> > plugin in userspace, even for a single driver, is complex enough not to be
> > achievable within a reasonable timeline.
> >
> > From 2009 to 2012, you were working on it, but didn't finish it.
> >
> > Apparently, nobody worked on it between 2013-2014 (but I may be wrong, as
> > I didn't check when the generic plugin interface was added to libv4l).
> >
> > In the case of Jacek's work, the first patch I was able to find was
> > written in Oct, 2014:
> > https://patchwork.kernel.org/patch/5098111/
> > (not sure what happened with the version 1).
> >
> > The last e-mail about this subject was issued in Dec, 2016.
> >
> > In summary, you had this task on your plate for 3 years for an OMAP3
> > plugin (where you have good expertise), and Jacek for 2 years,
> > for Exynos 4, where he should also have good knowledge.
> >
> > Yet, with all those efforts, no concrete results were achieved, as none
> > of the plugins got merged.
> >
> > Even if they were merged, if we keep the same mean time to develop a
> > libv4l plugin, that would mean that a plugin for i.MX6 could take 2-3
> > years to be developed.
> >
> > There's a clear message on it:
> > - we shouldn't keep pushing for a solution via libv4l.
>
> Or:
>
> - userspace plugin development had a very low priority and
> never got the attention it needed.
The end result is the same: we can't count on it.
>
> I know that's *my* reason. I rarely if ever looked at it. I always assumed
> Sakari and/or Laurent would look at it. If this reason is also valid for
> Sakari and Laurent, then it is no wonder nothing has happened in all that
> time.
>
> We're all very driver-development-driven, and userspace gets very little
> attention in general. So before just throwing in the towel we should take
> a good look at the reasons why there has been little or no development: is
> it because of fundamental design defects, or because nobody paid attention
> to it?
No. We should look at it the other way: basically, there are patches
for the i.MX6 driver that send controls from the video node to the subdevs.
If we NACK them, who will write the userspace plugin? When
will such a change be merged upstream?
If we don't have answers to any of the above questions, we should not
nack it.
That said, that doesn't prevent merging a libv4l plugin if/when
someone can find time/interest to develop it.
> I strongly suspect it is the latter.
>
> In addition, I suspect end-users of these complex devices don't really care
> about a plugin: they want full control and won't typically use generic
> applications. If they needed support for that, we'd have seen much more
> interest. The main reason for having a plugin is to simplify testing and
> to support use on cheap hobbyist devkits.
What are the needs for a cheap hobbyist devkit owner? Do we currently
satisfy those needs? I'd say that having a functional driver when
compiled without the subdev API, one that implements the ioctls/controls
that a generic application like camorama/google talk/skype/zbar...
needs in order to work, should be enough to make them happy, even if they
need to add some udev rule and/or run some "prep" application that sets up
the pipelines via MC and eventually renames the device with a working
pipeline to /dev/video0.
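(For what it's worth, such a "prep" application mostly boils down to a
few MEDIA_IOC_SETUP_LINK calls on the media devnode. A minimal sketch -
enable_link is an invented name, and the entity ids and pad indexes are
board-specific, e.g. taken from the dot graph:)

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/media.h>

    /* enable one source->sink link on an opened /dev/mediaN fd */
    static int enable_link(int mc_fd, __u32 src_ent, __u16 src_pad,
                           __u32 sink_ent, __u16 sink_pad)
    {
            struct media_link_desc link;

            memset(&link, 0, sizeof(link));
            link.source.entity = src_ent;
            link.source.index = src_pad;
            link.sink.entity = sink_ent;
            link.sink.index = sink_pad;
            link.flags = MEDIA_LNK_FL_ENABLED;

            return ioctl(mc_fd, MEDIA_IOC_SETUP_LINK, &link);
    }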
>
> An additional complication is simply that it is hard to find fully supported
> MC hardware. omap3 boards are hard to find these days, renesas boards are not
> easy to get, freescale isn't the most popular either. Allwinner, mediatek,
> amlogic, broadcom and qualcomm all have closed source implementations or no
> implementation at all.
I'd say that we should not care anymore about providing a solution for
generic applications to run on boards like OMAP3[1]. For hardware that
is currently available, has a Kernel driver and comes on boards developed
to be used as "cheap hobbyist devkits", I'd say we should implement
a Kernel solution that allows them to be used without the subdev
API, e. g. having all the ioctls needed by generic applications
functional, after some external application sets up the pipeline.
[1] Yet, I might eventually do that for fun, an OMAP3 board with tvp5150
just arrived here last week. It would be nice to have xawtv3 running on it :-)
So, if I have a lot of spare time (which is very unlikely), I might eventually
do something for it to work.
> I know it took me a very long time before I had a working omap3.
My first OMAP3 board with working V4L2 source just arrived last week :-)
> So I am not at all surprised that little progress has been made.
I'm not surprised, but I'm disappointed, as I have tried to push toward a
solution for this problem ever since we had our initial meetings about
it.
Thanks,
Mauro
Hi!
> > Mid-layer is difficult... there are _hundreds_ of possible
> > pipeline setups. If it should live in kernel or in userspace is a
> > question... but I don't think having it in kernel helps in any way.
>
> Mid-layer is difficult, because we either need to feed some
> library with knowledge for all kernel drivers or we need to improve
> the MC API to provide more details.
>
> For example, several drivers used to expose entities via the
> generic MEDIA_ENT_T_DEVNODE to represent entities of different
> types. See, for example, entities 1, 5 and 7 (and others) at:
>
> > https://mchehab.fedorapeople.org/mc-next-gen/igepv2_omap3isp.png
Well... we provide enough information, so that device-specific code
does not have to be in kernel.
There are a few types of ENT_T_DEVNODE there. "ISP CCP2" does not really
provide any functionality to the user. It just has to be there,
because the pipeline needs to be connected. "ISP Preview" provides
format conversion. "ISP Resizer" provides rescaling.
I'm not sure if it ever makes sense to use "ISP Preview
output". Normally you take data for display from "ISP Resizer
output". (Would there be some power advantage from that?)
> Device-specific code could either hardcode the entity number
> or check the entity strings to add some logic to set up
> controls on those "unknown" entities, but a generic app won't be able
> to do anything with them, as it doesn't know what function(s) such an
> entity provides.
A generic app should know whether it wants RGGB10 data, in which case it
can use "ISP CCDC output", or "cooked" data suitable for display, in which
case it wants "ISP Resizer output". Once the application knows what
output it wants, there's just one path through the system. So being
able to tell the ENT_T_DEVNODEs apart does not seem to be critical.
OTOH, for a useful camera application, different paths are needed at
different phases.
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
Hi!
> > > Even if they were merged, if we keep the same mean time to develop a
> > > libv4l plugin, that would mean that a plugin for i.MX6 could take 2-3
> > > years to be developed.
> > >
> > > There's a clear message on it:
> > > - we shouldn't keep pushing for a solution via libv4l.
> >
> > Or:
> >
> > - userspace plugin development had a very low priority and
> > never got the attention it needed.
>
> The end result is the same: we can't count on it.
>
> >
> > I know that's *my* reason. I rarely if ever looked at it. I always assumed
> > Sakari and/or Laurent would look at it. If this reason is also valid for
> > Sakari and Laurent, then it is no wonder nothing has happened in all that
> > time.
> >
> > We're all very driver-development-driven, and userspace gets very little
> > attention in general. So before just throwing in the towel we should take
> > a good look at the reasons why there has been little or no development: is
> > it because of fundamental design defects, or because nobody paid attention
> > to it?
>
> No. We should look at it the other way: basically, there are patches
> for the i.MX6 driver that send controls from the video node to the subdevs.
>
> If we NACK them, who will write the userspace plugin? When
> will such a change be merged upstream?
Well, I believe the first question is: what applications would we want to
run on complex devices? Will sending controls from video to subdevs
actually help?
mplayer is useful for testing... but that one already works (after you
setup the pipeline, and configure exposure/gain).
But that's useful for testing, not really for production. Image will be
out of focus and with wrong white balance.
What I would really like is an application to get still photos. For
taking pictures with manual settings we need
a) units for controls: user wants to focus on 1m, and take picture
with ISO200, 1/125 sec. We should also tell him that lens is f/5.6 and
focal length is 20mm with 5mm chip.
But... autofocus/autogain would really be good to have. Thus we need:
b) for each frame, we need exposure settings and focus position at
time frame was taken. Otherwise autofocus/autogain will be too
slow. At least focus position is going to be tricky -- either kernel
would have to compute focus position for us (not trivial) or we'd need
enough information to compute it in userspace.
There are more problems: hardware-accelerated preview is not trivial
to set up (and I'm unsure if it can be done in generic way). Still
photos application needs to switch resolutions between preview and
photo capture. Probably hardware-accelerated histograms are needed for
white balance, auto gain and auto focus, ....
It seems like there's a _lot_ of stuff to be done before we have
useful support for complex cameras...
(And I'm not sure... when application such as skype is running, is
there some way to run autogain/autofocus/autowhitebalance? Is that
something we want to support?)
> If we don't have answers to any of the above questions, we should not
> nack it.
>
> That said, that doesn't prevent merging a libv4l plugin if/when
> someone can find time/interest to develop it.
I believe the other question is: will not having the same controls on the main
video device and subdevs be confusing? Does it actually help userspace
in any way? Yes, we can make controls accessible to old applications,
but does it make them more useful?
> > In addition, I suspect end-users of these complex devices don't really care
> > about a plugin: they want full control and won't typically use generic
> > applications. If they needed support for that, we'd have seen much more
> > interest. The main reason for having a plugin is to simplify testing and
> > to support use on cheap hobbyist devkits.
>
> What are the needs for a cheap hobbyist devkit owner? Do we currently
> satisfy those needs? I'd say that having a functional driver when
> compiled without the subdev API, one that implements the ioctls/controls
Having different interfaces based on config options... is just
weird. What about poor people (like me) trying to develop complex
applications?
> [1] Yet, I might eventually do that for fun, an OMAP3 board with tvp5150
> just arrived here last week. It would be nice to have xawtv3 running on it :-)
> So, if I have a lot of spare time (which is very unlikely), I might eventually
> do something for it to work.
>
> > I know it took me a very long time before I had a working omap3.
>
> My first OMAP3 board with working V4L2 source just arrived last week
> :-)
You can get Nokia N900 on aliexpress. If not, they are still available
between people :-).
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
Em Tue, 14 Mar 2017 23:32:54 +0100
Pavel Machek <[email protected]> escreveu:
> Hi!
>
> > > > Even if they were merged, if we keep the same mean time to develop a
> > > > libv4l plugin, that would mean that a plugin for i.MX6 could take 2-3
> > > > years to be developed.
> > > >
> > > > There's a clear message on it:
> > > > - we shouldn't keep pushing for a solution via libv4l.
> > >
> > > Or:
> > >
> > > - userspace plugin development had a very low priority and
> > > never got the attention it needed.
> >
> > The end result is the same: we can't count on it.
> >
> > >
> > > I know that's *my* reason. I rarely if ever looked at it. I always assumed
> > > Sakari and/or Laurent would look at it. If this reason is also valid for
> > > Sakari and Laurent, then it is no wonder nothing has happened in all that
> > > time.
> > >
> > > We're all very driver-development-driven, and userspace gets very little
> > > attention in general. So before just throwing in the towel we should take
> > > a good look at the reasons why there has been little or no development: is
> > > it because of fundamental design defects, or because nobody paid attention
> > > to it?
> >
> > No. We should look at it the other way: basically, there are patches
> > for the i.MX6 driver that send controls from the video node to the subdevs.
> >
> > If we NACK them, who will write the userspace plugin? When
> > will such a change be merged upstream?
>
> Well, I believe the first question is: what applications would we want to
> run on complex devices? Will sending controls from video to subdevs
> actually help?
I would say: camorama, xawtv3, zbar, google talk, skype. If it runs
with those, it will likely run with any other application.
> mplayer is useful for testing... but that one already works (after you
> setup the pipeline, and configure exposure/gain).
>
> But that's useful for testing, not really for production. Image will be
> out of focus and with wrong white balance.
>
> What I would really like is an application to get still photos. For
> taking pictures with manual settings we need
>
> a) units for controls: user wants to focus on 1m, and take picture
> with ISO200, 1/125 sec. We should also tell him that lens is f/5.6 and
> focal length is 20mm with 5mm chip.
>
> But... autofocus/autogain would really be good to have. Thus we need:
>
> b) for each frame, we need exposure settings and focus position at
> time frame was taken. Otherwise autofocus/autogain will be too
> slow. At least focus position is going to be tricky -- either kernel
> would have to compute focus position for us (not trivial) or we'd need
> enough information to compute it in userspace.
>
> There are more problems: hardware-accelerated preview is not trivial
> to set up (and I'm unsure if it can be done in generic way). Still
> photos application needs to switch resolutions between preview and
> photo capture. Probably hardware-accelerated histograms are needed for
> white balance, auto gain and auto focus, ....
>
> It seems like there's a _lot_ of stuff to be done before we have
> useful support for complex cameras...
Taking still pictures using a hardware-accelerated preview is
a sophisticated use case. I don't know any userspace application
that does that. Ok, several allow taking snapshots, by simply
storing the image of the current frame.
> (And I'm not sure... when application such as skype is running, is
> there some way to run autogain/autofocus/autowhitebalance? Is that
> something we want to support?)
Autofocus no. Autogain/Autowhite can be done via libv4l, provided that
it can access the device's controls via /dev/video devnode. Other
applications may be using some other similar algorithms.
Ok, they don't use histograms provided by the SoC. So, they do it in
software, which is slower. Still, it works fine when the light
conditions don't change too fast.
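(For reference, those algorithms boil down to plain control ioctls on
the video node - which is exactly what stops working when the controls
aren't inherited there. A minimal sketch; set_gain is an invented name:)

    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    /* what a software autogain loop ends up doing on each iteration */
    static int set_gain(int video_fd, int gain)
    {
            struct v4l2_control ctrl = {
                    .id = V4L2_CID_GAIN,
                    .value = gain,
            };

            return ioctl(video_fd, VIDIOC_S_CTRL, &ctrl);
    }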
> > If we don't have answers to any of the above questions, we should not
> > nack it.
> >
> > That said, that doesn't prevent merging a libv4l plugin if/when
> > someone can find time/interest to develop it.
>
> I believe the other question is: will not having the same controls on the main
> video device and subdevs be confusing? Does it actually help userspace
> in any way? Yes, we can make controls accessible to old applications,
> but does it make them more useful?
Yes. As I said, libv4l (and some apps) have logic inside to adjust
the image via brightness, contrast and white balance controls, using the
video devnode. They don't talk subdev API. So, if those controls
aren't exported, they won't be able to provide a good quality image.
> > > In addition, I suspect end-users of these complex devices don't really care
> > > about a plugin: they want full control and won't typically use generic
> > > applications. If they needed support for that, we'd have seen much more
> > > interest. The main reason for having a plugin is to simplify testing and
> > > to support use on cheap hobbyist devkits.
> >
> > What are the needs for a cheap hobbyist devkit owner? Do we currently
> > satisfy those needs? I'd say that having a functional driver when
> > compiled without the subdev API, one that implements the ioctls/controls
>
> Having different interfaces based on config options... is just
> weird. What about poor people (like me) trying to develop complex
> applications?
Well, that could be done using other mechanisms, like a modprobe
parameter or by switching the behaviour if a subdev interface is
opened. I don't see much trouble in allowing access to a control via
both interfaces.
>
> > [1] Yet, I might eventually do that for fun, an OMAP3 board with tvp5150
> > just arrived here last week. It would be nice to have xawtv3 running on it :-)
> > So, if I have a lot of spare time (which is very unlikely), I might eventually
> > do something for it to work.
> >
> > > I know it took me a very long time before I had a working omap3.
> >
> > My first OMAP3 board with working V4L2 source just arrived last week
> > :-)
>
> You can get Nokia N900 on aliexpress. If not, they are still available
> between people :-)
I have one. Unfortunately, I never had a chance to use it, as the display
stopped working one week after I got it.
Thanks,
Mauro
On Tue, Mar 14, 2017 at 09:54:31PM -0300, Mauro Carvalho Chehab wrote:
> Em Tue, 14 Mar 2017 23:32:54 +0100
> Pavel Machek <[email protected]> escreveu:
>
> >
> > Well, I believe the first question is: what applications would we want to
> > run on complex devices? Will sending controls from video to subdevs
> > actually help?
>
> I would say: camorama, xawtv3, zbar, google talk, skype. If it runs
> with those, it will likely run with any other application.
>
I would like to add the 'v4l2src' plugin of gstreamer, and on the imx6 its
imx-specific counterpart 'imxv4l2videosrc' from the gstreamer-imx package
at https://github.com/Freescale/gstreamer-imx, and 'v4l2-ctl'.
Philippe
--
Philippe De Muyter +32 2 6101532 Macq SA rue de l'Aeronef 2 B-1140 Bruxelles
Hi!
> > Well, I believe the first question is: what applications would we want to
> > run on complex devices? Will sending controls from video to subdevs
> > actually help?
>
> I would say: camorama, xawtv3, zbar, google talk, skype. If it runs
> with those, it will likely run with any other application.
I'll take a look when I have better internet access.
> > mplayer is useful for testing... but that one already works (after you
> > setup the pipeline, and configure exposure/gain).
> >
> > But that's useful for testing, not really for production. Image will be
> > out of focus and with wrong white balance.
> >
> > What I would really like is an application to get still photos. For
> > taking pictures with manual settings we need
> >
> > a) units for controls: user wants to focus on 1m, and take picture
> > with ISO200, 1/125 sec. We should also tell him that lens is f/5.6 and
> > focal length is 20mm with 5mm chip.
> >
> > But... autofocus/autogain would really be good to have. Thus we need:
> >
> > b) for each frame, we need exposure settings and focus position at
> > time frame was taken. Otherwise autofocus/autogain will be too
> > slow. At least focus position is going to be tricky -- either kernel
> > would have to compute focus position for us (not trivial) or we'd need
> > enough information to compute it in userspace.
> >
> > There are more problems: hardware-accelerated preview is not trivial
> > to set up (and I'm unsure if it can be done in generic way). Still
> > photos application needs to switch resolutions between preview and
> > photo capture. Probably hardware-accelerated histograms are needed for
> > white balance, auto gain and auto focus, ....
> >
> > It seems like there's a _lot_ of stuff to be done before we have
> > useful support for complex cameras...
>
> Taking still pictures using a hardware-accelerated preview is
> a sophisticated use case. I don't know any userspace application
> that does that. Ok, several allow taking snapshots, by simply
> storing the image of the current frame.
Well, there are applications that take still pictures. Android has
one. Maemo has another. Then there's fcam-dev. It's open source; with a
modified kernel it is fully usable. I have a version that runs on recent
nearly-mainline kernels on the N900.
So yes, I'd like solution for problems a) and b).
> > (And I'm not sure... when application such as skype is running, is
> > there some way to run autogain/autofocus/autowhitebalance? Is that
> > something we want to support?)
>
> Autofocus no. Autogain/Autowhite can be done via libv4l, provided that
> it can access the device's controls via /dev/video devnode. Other
> applications may be using some other similar algorithms.
>
> Ok, they don't use histograms provided by the SoC. So, they do it in
> software, which is slower. Still, it works fine when the light
> conditions don't change too fast.
I guess it is going to work well enough with higher CPU
usage. Question is if camera without autofocus is usable. I'd say "not
really".
> > I believe other question is: will not having same control on main
> > video device and subdevs be confusing? Does it actually help userspace
> > in any way? Yes, we can make controls accessible to old application,
> > but does it make them more useful?
>
> Yes. As I said, libv4l (and some apps) have logic inside to adjust
> the image via bright, contrast and white balance controls, using the
> video devnode. They don't talk subdev API. So, if those controls
> aren't exported, they won't be able to provide a good quality image.
Next question is if the libv4l will do the right thing if we just put
all controls to one node. For example on N900 you have exposure/gain
and brightness. But the brightness is applied at preview phase, so it
is "basically useless". You really need to adjust the image using the
exposure/gain.
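To illustrate, a minimal sketch (not tested) of adjusting exposure/gain
through the video devnode with libv4l2; the control values are made up,
a real application would first query the range with VIDIOC_QUERYCTRL:

#include <fcntl.h>
#include <stdio.h>
#include <libv4l2.h>
#include <linux/videodev2.h>

int main(void)
{
        struct v4l2_control ctrl = { 0 };
        int fd = v4l2_open("/dev/video0", O_RDWR);

        if (fd < 0)
                return 1;

        /* Adjust exposure and gain through the same devnode path a
         * generic application (or libv4l's auto algorithms) would use.
         * The values below are placeholders, not meaningful defaults. */
        ctrl.id = V4L2_CID_EXPOSURE;
        ctrl.value = 500;
        if (v4l2_ioctl(fd, VIDIOC_S_CTRL, &ctrl) < 0)
                perror("VIDIOC_S_CTRL exposure");

        ctrl.id = V4L2_CID_GAIN;
        ctrl.value = 16;
        if (v4l2_ioctl(fd, VIDIOC_S_CTRL, &ctrl) < 0)
                perror("VIDIOC_S_CTRL gain");

        v4l2_close(fd);
        return 0;
}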
> > > > In addition, I suspect end-users of these complex devices don't really care
> > > > about a plugin: they want full control and won't typically use generic
> > > > applications. If they would need support for that, we'd have seen much more
> > > > interest. The main reason for having a plugin is to simplify testing and
> > > > if this is going to be used on cheap hobbyist devkits.
> > >
> > > What are the needs for a cheap hobbyist devkit owner? Do we currently
> > > satisfy those needs? I'd say that having a functional driver when
> > > compiled without the subdev API, that implements the ioctl's/controls
> >
> > Having different interface based on config options... is just
> > weird. What about poor people (like me) trying to develop complex
> > applications?
>
> Well, that could be done using other mechanisms, like a modprobe
> parameter or by switching the behaviour if a subdev interface is
> opened. I don't see much trouble on allowing accessing a control via
> both interfaces.
If we really want to go that way (isn't modifying the library to access
the right files quite easy?), I believe a non-confusing option would be
to have '/dev/video0 -- omap3 camera for legacy applications' which
would include all the controls.
> > You can get Nokia N900 on aliexpress. If not, they are still available
> > between people :-)
>
> I have one. Unfortunately, I never had a chance to use it, as the display
> > stopped working one week after I got it.
Well, I guess the easiest option is to just get another one :-).
But otoh -- N900 is quite usable without the screen. 0xffff tool can
be used to boot the kernel, then you can use nfsroot and usb
networking. It also has serial port (over strange
connector). Connected over ssh over usb network is actually how I do
most of the v4l work.
Best regards,
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
On Wednesday 15 March 2017 at 11:50 +0100, Philippe De Muyter wrote:
> > I would say: camorama, xawtv3, zbar, google talk, skype. If it runs
> > with those, it will likely run with any other application.
> >
>
> I would like to add the 'v4l2src' plugin of gstreamer, and on the
> imx6 its
While it would be nice if somehow you would get v4l2src to work (in
some legacy/emulation mode through libv4l2), the longer-term plan is to
implement a smart bin that handles several v4l2src elements and can do the
required interactions, so we can expose a similar level of controls as
found in Android Camera HAL3, and maybe go even further, assuming userspace
can change the media tree at run-time. We might be a long way from
there, especially as some of the features depend on how much the
hardware can do. Just being able to figure out how to build the MC tree
dynamically seems really hard when thinking of a generic mechanism. Also,
the Request API will be needed.
I think for this one, we'll need some userspace driver that enables the
features (not hides them), and that's what I'd be looking for from
libv4l2 in this regard.
> imx-specific counterpart 'imxv4l2videosrc' from the gstreamer-imx
> package
> at https://github.com/Freescale/gstreamer-imx, and 'v4l2-ctl'.
This one is specific to IMX hardware using vendor driver. You can
probably ignore that.
Nicolas
On Wed, 15 Mar 2017 19:04:21 +0100,
Pavel Machek <[email protected]> wrote:
> Hi!
>
> > > Well, I believe first question is: what applications would we want to
> > > run on complex devices? Will sending control from video to subdevs
> > > actually help?
> >
> > I would say: camorama, xawtv3, zbar, google talk, skype. If it runs
> > with those, it will likely run with any other application.
>
> I'll take a look when I'm at better internet access.
Ok.
> > > mplayer is useful for testing... but that one already works (after you
> > > setup the pipeline, and configure exposure/gain).
> > >
> > > But that's useful for testing, not really for production. Image will be
> > > out of focus and with wrong white balance.
> > >
> > > What I would really like is an application to get still photos. For
> > > taking pictures with manual settings we need
> > >
> > > a) units for controls: user wants to focus on 1m, and take picture
> > > with ISO200, 1/125 sec. We should also tell him that lens is f/5.6 and
> > > focal length is 20mm with 5mm chip.
> > >
> > > But... autofocus/autogain would really be good to have. Thus we need:
> > >
> > > b) for each frame, we need exposure settings and focus position at
> > > time frame was taken. Otherwise autofocus/autogain will be too
> > > slow. At least focus position is going to be tricky -- either kernel
> > > would have to compute focus position for us (not trivial) or we'd need
> > > enough information to compute it in userspace.
> > >
> > > There are more problems: hardware-accelerated preview is not trivial
> > > to set up (and I'm unsure if it can be done in generic way). Still
> > > photos application needs to switch resolutions between preview and
> > > photo capture. Probably hardware-accelerated histograms are needed for
> > > white balance, auto gain and auto focus, ....
> > >
> > > It seems like there's a _lot_ of stuff to be done before we have
> > > useful support for complex cameras...
> >
> > Taking still pictures using a hardware-accelerated preview is
> > a sophisticated use case. I don't know any userspace application
> > that does that. Ok, several allow taking snapshots, by simply
> > storing the image of the current frame.
>
> Well, there are applications that take still pictures. Android has
> one. Maemo has another. Then there's fcam-dev. It's open source; with
> a modified kernel it is fully usable. I have a version that runs on recent
> nearly-mainline on the N900.
Hmm... it seems that FCam is specific to the N900:
http://fcam.garage.maemo.org/
If so, then we have just the opposite problem here: if we want it to be
used as a generic application, it very likely requires OMAP3-specific
graph/subdevs.
> So yes, I'd like solution for problems a) and b).
>
> > > (And I'm not sure... when application such as skype is running, is
> > > there some way to run autogain/autofocus/autowhitebalance? Is that
> > > something we want to support?)
> >
> > Autofocus no. Autogain/Autowhite can be done via libv4l, provided that
> > it can access the device's controls via /dev/video devnode. Other
> > applications may be using some other similar algorithms.
> >
> > Ok, they don't use histograms provided by the SoC. So, they do it in
> > software, which is slower. Still, it works fine when the light
> > conditions don't change too fast.
>
> I guess it is going to work well enough with higher CPU
> usage.
Yes.
> Question is if camera without autofocus is usable. I'd say "not
> really".qv4l2
That actually depends on the sensor and how focus is adjusted.
I'm testing right now this camera module for RPi:
https://www.raspberrypi.org/products/camera-module-v2/
I might be wrong, but this sensor doesn't seem to have auto-focus.
Instead, it seems to use a wide-angle lens. So, except when the
object is too close, the focus looks OK.
> > > I believe other question is: will not having same control on main
> > > video device and subdevs be confusing? Does it actually help userspace
> > > in any way? Yes, we can make controls accessible to old application,
> > > but does it make them more useful?
> >
> > Yes. As I said, libv4l (and some apps) have logic inside to adjust
> > the image via bright, contrast and white balance controls, using the
> > video devnode. They don't talk subdev API. So, if those controls
> > aren't exported, they won't be able to provide a good quality image.
>
> Next question is if the libv4l will do the right thing if we just put
> all controls to one node. For example on N900 you have exposure/gain
> and brightness. But the brightness is applied at preview phase, so it
> is "basically useless". You really need to adjust the image using the
> exposure/gain.
I've no idea, but I suspect it shouldn't be hard to teach libv4l to
prefer using exposure/gain instead of brightness when available.
> > > > > In addition, I suspect end-users of these complex devices don't really care
> > > > > about a plugin: they want full control and won't typically use generic
> > > > > applications. If they would need support for that, we'd have seen much more
> > > > > interest. The main reason for having a plugin is to simplify testing and
> > > > > if this is going to be used on cheap hobbyist devkits.
> > > >
> > > > What are the needs for a cheap hobbyist devkit owner? Do we currently
> > > > satisfy those needs? I'd say that having a functional driver when
> > > > compiled without the subdev API, that implements the ioctl's/controls
> > >
> > > Having different interface based on config options... is just
> > > weird. What about poor people (like me) trying to develop complex
> > > applications?
> >
> > Well, that could be done using other mechanisms, like a modprobe
> > parameter or by switching the behaviour if a subdev interface is
> > opened. I don't see much trouble on allowing accessing a control via
> > both interfaces.
>
> If we really want to go that way (isn't modifying the library to access
> the right files quite easy?), I believe a non-confusing option would be
> to have '/dev/video0 -- omap3 camera for legacy applications' which
> would include all the controls.
Yeah, keeping /dev/video0 reserved for generic applications is something
that could work. Not sure how easy it would be to implement.
>
> > > You can get Nokia N900 on aliexpress. If not, they are still available
> > > between people :-)
> >
> > I have one. Unfortunately, I never had a chance to use it, as the display
> > stopped working one week after I got it.
>
> Well, I guess the easiest option is to just get another one :-).
:-) Well, I guess very few units of the N900 were sold in Brazil. Importing
one is too expensive, due to taxes.
> But otoh -- N900 is quite usable without the screen. 0xffff tool can
> be used to boot the kernel, then you can use nfsroot and usb
> networking. It also has serial port (over strange
> connector). Connected over ssh over usb network is actually how I do
> most of the v4l work.
If you pass me the pointers, I can try it when I have some time.
Anyway, I got myself an ISEE IGEPv2, with the expansion board:
https://www.isee.biz/products/igep-processor-boards/igepv2-dm3730
https://www.isee.biz/products/igep-expansion-boards/igepv2-expansion
The expansion board comes with a tvp5150 analog TV demod. So, with
this device, I can simply connect it to a composite input signal.
I have some sources here that I can use to test it.
Thanks,
Mauro
On Wed, 2017-03-15 at 14:55 -0400, Nicolas Dufresne wrote:
> On Wednesday 15 March 2017 at 11:50 +0100, Philippe De Muyter wrote:
> > > I would say: camorama, xawtv3, zbar, google talk, skype. If it runs
> > > with those, it will likely run with any other application.
> > >
> >
> > I would like to add the 'v4l2src' plugin of gstreamer, and on the
> > imx6 its
>
> While it would be nice if somehow you would get v4l2src to work (in
> some legacy/emulation mode through libv4l2),
v4l2src works just fine, provided the pipeline is configured manually in
advance via media-ctl.
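For reference, each link media-ctl enables boils down to a
MEDIA_IOC_SETUP_LINK ioctl on the media device node. A minimal sketch,
where the entity and pad numbers are hypothetical (on a real board they
would first be queried with MEDIA_IOC_ENUM_ENTITIES/MEDIA_IOC_ENUM_LINKS):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/media.h>

/* Enable one source->sink link of the pipeline; entity/pad numbers
 * below are placeholders for whatever the real topology contains. */
int enable_link(int media_fd)
{
        struct media_link_desc link;

        memset(&link, 0, sizeof(link));
        link.source.entity = 1;         /* hypothetical sensor entity */
        link.source.index = 0;          /* its source pad */
        link.source.flags = MEDIA_PAD_FL_SOURCE;
        link.sink.entity = 5;           /* hypothetical CSI entity */
        link.sink.index = 0;            /* its sink pad */
        link.sink.flags = MEDIA_PAD_FL_SINK;
        link.flags = MEDIA_LNK_FL_ENABLED;

        return ioctl(media_fd, MEDIA_IOC_SETUP_LINK, &link);
}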
> the longer-term plan is to
> implement a smart bin that handles several v4l2src elements and can do the
> required interactions, so we can expose a similar level of controls as
> found in Android Camera HAL3, and maybe go even further, assuming userspace
> can change the media tree at run-time. We might be a long way from
> there, especially as some of the features depend on how much the
> hardware can do. Just being able to figure out how to build the MC tree
> dynamically seems really hard when thinking of a generic mechanism. Also,
> the Request API will be needed.
>
> I think for this one, we'll need some userspace driver that enables the
> features (not hides them), and that's what I'd be looking for from
> libv4l2 in this regard.
regards
Philipp
On Thu, Mar 16, 2017 at 10:26:00AM +0100, Philipp Zabel wrote:
> On Wed, 2017-03-15 at 14:55 -0400, Nicolas Dufresne wrote:
> > On Wednesday 15 March 2017 at 11:50 +0100, Philippe De Muyter wrote:
> > > > I would say: camorama, xawtv3, zbar, google talk, skype. If it runs
> > > > with those, it will likely run with any other application.
> > > >
> > >
> > > I would like to add the 'v4l2src' plugin of gstreamer, and on the
> > > imx6 its
> >
> > While it would be nice if somehow you would get v4l2src to work (in
> > some legacy/emulation mode through libv4l2),
>
> v4l2src works just fine, provided the pipeline is configured manually in
> advance via media-ctl.
Including choosing the framerate? Sorry, I have no time these days
to test it myself.
And I cited imxv4l2videosrc for its ability to provide the physical address
of the image buffers for further processing by other (not necessarily next
in the gstreamer pipeline, or for all frames) hardware-accelerated plugins like
the h.264 video encoder. As I am stuck with fsl/nxp kernel and driver on that
matter, I don't know how the interfaces have evolved in current linux kernels.
BR
Philippe
--
Philippe De Muyter +32 2 6101532 Macq SA rue de l'Aeronef 2 B-1140 Bruxelles
On Thu, 2017-03-16 at 10:47 +0100, Philippe De Muyter wrote:
> On Thu, Mar 16, 2017 at 10:26:00AM +0100, Philipp Zabel wrote:
> > On Wed, 2017-03-15 at 14:55 -0400, Nicolas Dufresne wrote:
> > > On Wednesday 15 March 2017 at 11:50 +0100, Philippe De Muyter wrote:
> > > > > I would say: camorama, xawtv3, zbar, google talk, skype. If it runs
> > > > > with those, it will likely run with any other application.
> > > > >
> > > >
> > > > I would like to add the 'v4l2src' plugin of gstreamer, and on the
> > > > imx6 its
> > >
> > > While it would be nice if somehow you would get v4l2src to work (in
> > > some legacy/emulation mode through libv4l2),
> >
> > v4l2src works just fine, provided the pipeline is configured manually in
> > advance via media-ctl.
>
> Including choosing the framerate ? Sorry, I have no time these days
> to test it myself.
No, the framerate is set with media-ctl on the CSI output pad. To really
choose the framerate, the element would indeed need a deeper
understanding of the pipeline, as the resulting framerate depends on at
least the source v4l2_subdevice (sensor) framerate and the CSI frame
skipping.
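In code, that per-pad setup is a VIDIOC_SUBDEV_S_FRAME_INTERVAL ioctl
on the subdev node. A minimal sketch; pad 2 as the CSI output pad is
just an assumption, check the topology first:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/v4l2-subdev.h>

/* Request 30 fps on a subdev pad. The driver may adjust the value, so
 * a caller should re-read fi.interval afterwards. */
int set_frame_interval(int subdev_fd)
{
        struct v4l2_subdev_frame_interval fi;

        memset(&fi, 0, sizeof(fi));
        fi.pad = 2;                     /* hypothetical CSI output pad */
        fi.interval.numerator = 1;
        fi.interval.denominator = 30;   /* 1/30 s per frame */

        return ioctl(subdev_fd, VIDIOC_SUBDEV_S_FRAME_INTERVAL, &fi);
}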
> And I cited imxv4l2videosrc for its ability to provide the physical address
> of the image buffers for further processing by other (not necessarily next
> in the gstreamer pipeline, or for all frames) hardware-accelerated plugins like
> the h.264 video encoder. As I am stuck with fsl/nxp kernel and driver on that
> matter, I don't know how the interfaces have evolved in current linux kernels.
The physical address of the image buffers is hidden from userspace by
dma-buf objects, but those can be passed around to the next driver
without copying the image data.
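As a sketch of the userspace side: a capture buffer is exported as a
dma-buf fd with VIDIOC_EXPBUF, assuming buffers were already allocated
with VIDIOC_REQBUFS, and that fd is what gets handed to the next device:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Export buffer 0 of a capture queue as a dma-buf file descriptor.
 * The returned fd can be imported by another device (e.g. an encoder)
 * for zero-copy processing of the image data. */
int export_buffer(int video_fd)
{
        struct v4l2_exportbuffer expbuf;

        memset(&expbuf, 0, sizeof(expbuf));
        expbuf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        expbuf.index = 0;

        if (ioctl(video_fd, VIDIOC_EXPBUF, &expbuf) < 0)
                return -1;

        return expbuf.fd;
}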
regards
Philipp
On Thu, Mar 16, 2017 at 11:01:56AM +0100, Philipp Zabel wrote:
> On Thu, 2017-03-16 at 10:47 +0100, Philippe De Muyter wrote:
> > On Thu, Mar 16, 2017 at 10:26:00AM +0100, Philipp Zabel wrote:
> > > On Wed, 2017-03-15 at 14:55 -0400, Nicolas Dufresne wrote:
> > > > On Wednesday 15 March 2017 at 11:50 +0100, Philippe De Muyter wrote:
> > > > > > I would say: camorama, xawtv3, zbar, google talk, skype. If it runs
> > > > > > with those, it will likely run with any other application.
> > > > > >
> > > > >
> > > > > I would like to add the 'v4l2src' plugin of gstreamer, and on the
> > > > > imx6 its
> > > >
> > > > While it would be nice if somehow you would get v4l2src to work (in
> > > > some legacy/emulation mode through libv4l2),
> > >
> > > v4l2src works just fine, provided the pipeline is configured manually in
> > > advance via media-ctl.
> >
> > Including choosing the framerate ? Sorry, I have no time these days
> > to test it myself.
>
> No, the framerate is set with media-ctl on the CSI output pad. To really
> choose the framerate, the element would indeed need a deeper
> understanding of the pipeline, as the resulting framerate depends on at
> least the source v4l2_subdevice (sensor) framerate and the CSI frame
> skipping.
Count me in then as a supporter of Steve's "v4l2-mc: add a function to
inherit controls from a pipeline" patch.
> > of the image buffers for further processing by other (not necessarily next
> > in the gstreamer pipeline, or for all frames) hardware-accelerated plugins like
> > the h.264 video encoder. As I am stuck with fsl/nxp kernel and driver on that
> > matter, I don't know how the interfaces have evolved in current linux kernels.
>
> The physical address of the image buffers is hidden from userspace by
> dma-buf objects, but those can be passed around to the next driver
> without copying the image data.
OK
thanks
Philippe
Hi!
> > > > mplayer is useful for testing... but that one already works (after you
> > > > setup the pipeline, and configure exposure/gain).
> > > >
> > > > But that's useful for testing, not really for production. Image will be
> > > > out of focus and with wrong white balance.
> > > >
> > > > What I would really like is an application to get still photos. For
> > > > taking pictures with manual settings we need
> > > >
> > > > a) units for controls: user wants to focus on 1m, and take picture
> > > > with ISO200, 1/125 sec. We should also tell him that lens is f/5.6 and
> > > > focal length is 20mm with 5mm chip.
> > > >
> > > > But... autofocus/autogain would really be good to have. Thus we need:
> > > >
> > > > b) for each frame, we need exposure settings and focus position at
> > > > time frame was taken. Otherwise autofocus/autogain will be too
> > > > slow. At least focus position is going to be tricky -- either kernel
> > > > would have to compute focus position for us (not trivial) or we'd need
> > > > enough information to compute it in userspace.
> > > >
> > > > There are more problems: hardware-accelerated preview is not trivial
> > > > to set up (and I'm unsure if it can be done in generic way). Still
> > > > photos application needs to switch resolutions between preview and
> > > > photo capture. Probably hardware-accelerated histograms are needed for
> > > > white balance, auto gain and auto focus, ....
> > > >
> > > > It seems like there's a _lot_ of stuff to be done before we have
> > > > useful support for complex cameras...
> > >
> > > Taking still pictures using a hardware-accelerated preview is
> > > a sophisticated use case. I don't know any userspace application
> > > that does that. Ok, several allow taking snapshots, by simply
> > > storing the image of the current frame.
> >
> > Well, there are applications that take still pictures. Android has
> > one. Maemo has another. Then there's fcam-dev. It's open source; with
> > a modified kernel it is fully usable. I have a version that runs on recent
> > nearly-mainline on the N900.
>
> Hmm... it seems that FCam is specific to the N900:
> http://fcam.garage.maemo.org/
>
> If so, then we have just the opposite problem here: if we want it to be
> used as a generic application, it very likely requires OMAP3-specific
> graph/subdevs.
Well... there's a quick and great version on maemo.org. I do have a local
version (still somewhat N900-specific), but it no longer uses hardware
histogram/sharpness support. Should be almost generic.
> > So yes, I'd like solution for problems a) and b).
...but it has camera parameters hardcoded (problem a) and is slow
(problem b).
> > Question is if camera without autofocus is usable. I'd say "not
> > really".qv4l2
>
> That actually depends on the sensor and how focus is adjusted.
>
> I'm testing right now this camera module for RPi:
> https://www.raspberrypi.org/products/camera-module-v2/
>
> I might be wrong, but this sensor doesn't seem to have auto-focus.
> Instead, it seems to use a wide-angle lens. So, except when the
> object is too close, the focus looks OK.
Well, cameras without autofocus are somewhat usable. But cameras
with autofocus don't work too well without it.
> > If we really want to go that way (isn't modifying the library to access
> > the right files quite easy?), I believe a non-confusing option would be
> > to have '/dev/video0 -- omap3 camera for legacy applications' which
> > would include all the controls.
>
> Yeah, keeping /dev/video0 reserved for generic applications is something
> that could work. Not sure how easy it would be to implement.
Plus, advanced applications would just ignore /dev/video0 and not be confused.
> > > > You can get Nokia N900 on aliexpress. If not, they are still available
> > > > between people :-)
> > >
> > > I have one. Unfortunately, I never had a chance to use it, as the display
> > > stopped working one week after I got it.
> >
> > Well, I guess the easiest option is to just get another one :-).
>
> :-) Well, I guess very few units of the N900 were sold in Brazil. Importing
> one is too expensive, due to taxes.
Try to ask at local mailing list. Those machines were quite common.
> > But otoh -- N900 is quite usable without the screen. 0xffff tool can
> > be used to boot the kernel, then you can use nfsroot and usb
> > networking. It also has serial port (over strange
> > connector). Connected over ssh over usb network is actually how I do
> > most of the v4l work.
>
> If you pass me the pointers, I can try it when I have some time.
Ok, I guess I'll do that in private email.
> Anyway, I got myself an ISEE IGEPv2, with the expansion board:
> https://www.isee.biz/products/igep-processor-boards/igepv2-dm3730
> https://www.isee.biz/products/igep-expansion-boards/igepv2-expansion
>
> The expansion board comes with a tvp5150 analog TV demod. So, with
> this device, I can simply connect it to a composite input signal.
> I have some sources here that I can use to test it.
Well... it looks like TV capture is a "solved" problem. Taking useful
photos is what is hard...
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
On Tue, Mar 14, 2017 at 08:55:36AM +0100, Hans Verkuil wrote:
> We're all very driver-development-driven, and userspace gets very little
> attention in general. So before just throwing in the towel we should take
> a good look at the reasons why there has been little or no development: is
> it because of fundamental design defects, or because nobody paid attention
> to it?
>
> I strongly suspect it is the latter.
>
> In addition, I suspect end-users of these complex devices don't really care
> about a plugin: they want full control and won't typically use generic
> applications. If they would need support for that, we'd have seen much more
> interest. The main reason for having a plugin is to simplify testing and
> if this is going to be used on cheap hobbyist devkits.
I think you're looking at it with a programmer's hat on, not a user's hat.
Are you really telling me that requiring users to 'su' to root, and then
use media-ctl to manually configure the capture device is what most
users "want" ?
Hasn't the way technology has moved towards graphical interfaces,
particularly smart phones, taught us that what the vast majority of users
want is intuitive, easy-to-use interfaces, and not the command line
with reams of documentation?
Why are smart phones so popular - it's partly because they're flashy,
but also because of the wealth of apps, and apps which follow the
philosophy of "do one job, do it well" (otherwise they get bad reviews.)
> An additional complication is simply that it is hard to find fully supported
> MC hardware. omap3 boards are hard to find these days, renesas boards are not
> easy to get, freescale isn't the most popular either. Allwinner, mediatek,
> amlogic, broadcom and qualcomm all have closed source implementations or no
> implementation at all.
Right, and that in itself tells us something - the problem that we're
trying to solve is not one that commonly exists in the real world.
Yes, the hardware we have in front of us may be very complex, but if
there's very few systems out there which are capable of making use of
all that complexity, then we're trying to solve a problem that isn't
the common case - and if it's going to take years to solve it (it
already has taken years) then it's the wrong problem to be solved.
I bet most of the problem can be eliminated if, rather than exposing
all this complexity, we instead expose a simpler capture system where
the board designer gets to "wire up" the capture system.
I'll go back to my Bayer example, because that's the simplest. As
I've already said many times in these threads, there is only one
possible path through the iMX6 device that such a source can be used
with - it's a fixed path. The actual path depends on the CSI2
virtual channel that the camera has been _configured_ to use, but
apart from that, it's effectively a well known set of blocks. Such
a configuration could be placed in DT.
For RGB connected to a single parallel CSI, things get a little more
complex - capture through the CSI or through two other capture devices
for de-interlacing or other features. However, I'm not convinced that
exposing multiple /dev/video* devices for different features for the
same video source is a sane approach - I think that's a huge usability
problem. (The user is expected to select the capture device on iMX6
depending on the features they want, and if they want to change features,
they're expected to shut down their application and start it up on a
different capture device.) For the most part on iMX6, there's one
path down to the CSI block, and then there's optional routing through
the rest of the IPU depending on what features you want (such as
de-interlacing.)
The complex case is a CSI2 connected camera which produces multiple
streams through differing virtual channels - and that's IMHO the only
case where we need multiple different /dev/video* capture devices to
be present.
--
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.
Hi Russell,
On 03/17/17 13:42, Russell King - ARM Linux wrote:
> On Tue, Mar 14, 2017 at 08:55:36AM +0100, Hans Verkuil wrote:
>> We're all very driver-development-driven, and userspace gets very little
>> attention in general. So before just throwing in the towel we should take
>> a good look at the reasons why there has been little or no development: is
>> it because of fundamental design defects, or because nobody paid attention
>> to it?
>>
>> I strongly suspect it is the latter.
>>
>> In addition, I suspect end-users of these complex devices don't really care
>> about a plugin: they want full control and won't typically use generic
>> applications. If they would need support for that, we'd have seen much more
>> interest. The main reason for having a plugin is to simplify testing and
>> if this is going to be used on cheap hobbyist devkits.
>
> I think you're looking at it with a programmer's hat on, not a user's hat.
>
> Are you really telling me that requiring users to 'su' to root, and then
> use media-ctl to manually configure the capture device is what most
> users "want" ?
It depends on who the user is. I don't think anyone is suggesting a
regular end user is the user of all these APIs: it is either an
application tailored for that given device, a skilled user with his test
scripts or as suggested previously, a libv4l plugin knowing the device
or a generic library geared towards providing best-effort service. The
last one on this list does not exist yet and the second-to-last item
requires help.
Typically this class of devices is simply not able to provide the level of
service you're requesting without an additional user space control library
which is responsible for automatic white balance, exposure and focus.
Making use of the full potential of the hardware requires using a more
expressive interface. That's what the kernel interface must provide. If
we decide to limit ourselves to a small sub-set of that potential on the
level of the kernel interface, we have made a wrong decision. It's as
simple as that. This is why the functionality (and which requires taking
a lot of policy decisions) belongs to the user space. We cannot have
multiple drivers providing multiple kernel interfaces for the same hardware.
That said, I'm not trying to provide an excuse for not having libraries
available to help the user to configure and control the device more or
less automatically even in terms of best effort. It's something that
does require attention, a lot more of it than it has received in the
last few years.
--
Kind regards,
Sakari Ailus
[email protected]
On Fri, 2017-03-17 at 11:42 +0000, Russell King - ARM Linux wrote:
> On Tue, Mar 14, 2017 at 08:55:36AM +0100, Hans Verkuil wrote:
> > We're all very driver-development-driven, and userspace gets very little
> > attention in general. So before just throwing in the towel we should take
> > a good look at the reasons why there has been little or no development: is
> > it because of fundamental design defects, or because nobody paid attention
> > to it?
> >
> > I strongly suspect it is the latter.
> >
> > In addition, I suspect end-users of these complex devices don't really care
> > about a plugin: they want full control and won't typically use generic
> > applications. If they would need support for that, we'd have seen much more
> > interest. The main reason for having a plugin is to simplify testing and
> > if this is going to be used on cheap hobbyist devkits.
>
> I think you're looking at it with a programmer's hat on, not a user's hat.
>
> Are you really telling me that requiring users to 'su' to root, and then
> use media-ctl to manually configure the capture device is what most
> users "want" ?
>
> Hasn't the way technology has moved towards graphical interfaces,
> particularly smart phones, taught us that what the vast majority of users
> want is intuitive, easy-to-use interfaces, and not the command line
> with reams of documentation?
>
> Why are smart phones so popular - it's partly because they're flashy,
> but also because of the wealth of apps, and apps which follow the
> philosophy of "do one job, do it well" (otherwise they get bad reviews.)
> > An additional complication is simply that it is hard to find fully supported
> > MC hardware. omap3 boards are hard to find these days, renesas boards are not
> > easy to get, freescale isn't the most popular either. Allwinner, mediatek,
> > amlogic, broadcom and qualcomm all have closed source implementations or no
> > implementation at all.
>
> Right, and that in itself tells us something - the problem that we're
> trying to solve is not one that commonly exists in the real world.
>
> Yes, the hardware we have in front of us may be very complex, but if
> there's very few systems out there which are capable of making use of
> all that complexity, then we're trying to solve a problem that isn't
> the common case - and if it's going to take years to solve it (it
> already has taken years) then it's the wrong problem to be solved.
>
> I bet most of the problem can be eliminated if, rather than exposing
> all this complexity, we instead expose a simpler capture system where
> the board designer gets to "wire up" the capture system.
>
> I'll go back to my Bayer example, because that's the simplest. As
> I've already said many times in these threads, there is only one
> possible path through the iMX6 device that such a source can be used
> with - it's a fixed path. The actual path depends on the CSI2
> virtual channel that the camera has been _configured_ to use, but
> apart from that, it's effectively a well known set of blocks. Such
> a configuration could be placed in DT.
>
> For RGB connected to a single parallel CSI, things get a little more
> complex - capture through the CSI or through two other capture devices
> for de-interlacing or other features. However, I'm not convinced that
> exposing multiple /dev/video* devices for different features for the
> same video source is a sane approach - I think that's a huge usability
> problem. (The user is expected to select the capture device on iMX6
> depending on the features they want, and if they want to change features,
> they're expected to shut down their application and start it up on a
> different capture device.) For the most part on iMX6, there's one
> path down to the CSI block, and then there's optional routing through
> the rest of the IPU depending on what features you want (such as
> de-interlacing.)
>
> The complex case is a CSI2 connected camera which produces multiple
> streams through differing virtual channels - and that's IMHO the only
> case where we need multiple different /dev/video* capture devices to
> be present.
I wanted to have the IC PRP outputs separate because the IC PRP should
support running both the VF and ENC tasks with different parameters from
the same input. That would allow to capture two different resolutions
(up to 1024x1024) at the same time.
I think most of the simple, fixed pipeline use cases could be handled by
libv4l2, by allowing to pass a v4l2 subdevice path to v4l2_open. If that
function internally would set up the media links to the
nearest /dev/video interface, propagate format, resolution and frame
intervals if necessary, and return an fd to the video device, there'd be
no additional complexity for the users beyond selecting the v4l2_subdev
instead of the video device.
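Roughly like this, as a sketch; every mc_*() helper below is made up
and only stands in for the graph walk, link setup and format
propagation described above, nothing like it exists in libv4l2 today:

/* Hypothetical: a v4l2_open() variant that accepts a subdev path and
 * resolves it to the nearest /dev/video* node. */
int v4l2_open_auto(const char *path, int oflag)
{
        char video_node[64];

        if (!mc_is_subdev(path))        /* plain video node: as before */
                return v4l2_open(path, oflag);

        if (mc_find_nearest_video_node(path, video_node,
                                       sizeof(video_node)) < 0 ||
            mc_setup_links(path, video_node) < 0 ||
            mc_propagate_format(path, video_node) < 0)
                return -1;

        return v4l2_open(video_node, oflag);
}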
regards
Philipp
On Fri, Mar 17, 2017 at 01:02:07PM +0100, Philipp Zabel wrote:
> I think most of the simple, fixed pipeline use cases could be handled by
> libv4l2, by allowing to pass a v4l2 subdevice path to v4l2_open. If that
> function internally would set up the media links to the
> nearest /dev/video interface, propagate format, resolution and frame
> intervals if necessary, and return an fd to the video device, there'd be
> no additional complexity for the users beyond selecting the v4l2_subdev
> instead of the video device.
... which would then require gstreamer to be modified too. The gstreamer
v4l2 plugin looks for /dev/video* or /dev/v4l2/video* devices and monitors
these for changes, so gstreamer applications know which capture devices
are available:
const gchar *paths[] = { "/dev", "/dev/v4l2", NULL };
const gchar *names[] = { "video", NULL };
/* Add some depedency, so the dynamic features get updated upon changes in
* /dev/video* */
gst_plugin_add_dependency (plugin,
    NULL, paths, names, GST_PLUGIN_DEPENDENCY_FLAG_FILE_NAME_IS_PREFIX);
I haven't checked yet whether sys/v4l2/gstv4l2deviceprovider.c knows
anything about the v4l2 subdevs.
--
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.
On Fri, 2017-03-17 at 10:24 -0300, Mauro Carvalho Chehab wrote:
[...]
> The big question, waiting for an answer for the last 8 years, is:
> who would do that? Such a person would need to have several different
> pieces of hardware from different vendors, in order to ensure that it is
> a generic solution.
>
> It is way more feasible for the kernel developers that already
> have certain hardware on their hands to add support inside the
> driver to forward the controls through the pipeline and to set up
> a "default" pipeline that would cover the common use cases at
> driver's probe.
Actually, would setting pipeline via libv4l2 plugin and letting drivers
provide a sane enabled default pipeline configuration be mutually
exclusive? Not sure about the control forwarding, but at least a simple
link setup and format forwarding would also be possible in the kernel
without hindering userspace from doing it themselves later.
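For the link setup part, the kernel side could be as small as creating
the default link already enabled at probe time. A sketch, where
"sensor", "csi" and the pad macros are placeholders:

/* Create the sensor->CSI link already ENABLED, so the default
 * pipeline streams out of the box. The link is deliberately not
 * IMMUTABLE, so userspace can still disable it and re-route later. */
static int setup_default_pipeline(struct v4l2_subdev *sensor,
                                  struct v4l2_subdev *csi)
{
        return media_create_pad_link(&sensor->entity, SENSOR_PAD_SOURCE,
                                     &csi->entity, CSI_PAD_SINK,
                                     MEDIA_LNK_FL_ENABLED);
}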
regards
Philipp
On Fri, 17 Mar 2017 13:55:33 +0200,
Sakari Ailus <[email protected]> wrote:
> Hi Russell,
>
> On 03/17/17 13:42, Russell King - ARM Linux wrote:
> > On Tue, Mar 14, 2017 at 08:55:36AM +0100, Hans Verkuil wrote:
> >> We're all very driver-development-driven, and userspace gets very little
> >> attention in general. So before just throwing in the towel we should take
> >> a good look at the reasons why there has been little or no development: is
> >> it because of fundamental design defects, or because nobody paid attention
> >> to it?
> >>
> >> I strongly suspect it is the latter.
> >>
> >> In addition, I suspect end-users of these complex devices don't really care
> >> about a plugin: they want full control and won't typically use generic
> >> applications. If they would need support for that, we'd have seen much more
> >> interest. The main reason for having a plugin is to simplify testing and
> >> if this is going to be used on cheap hobbyist devkits.
> >
> > I think you're looking at it with a programmer's hat on, not a user's hat.
I fully agree with you: whatever solution is provided, this should be fully
transparent for the end user, no matter what V4L2 application he's using.
> >
> > Are you really telling me that requiring users to 'su' to root, and then
> > use media-ctl to manually configure the capture device is what most
> > users "want" ?
The need for su can easily be fixed with a simple addition to the udev
rules.d, for it to consider the media controller as part of the v4l stuff:
--- udev/rules.d/50-udev-default.rules 2017-02-01 19:45:35.000000000 -0200
+++ udev/rules.d/50-udev-default.rules 2015-08-29 07:54:16.033122614 -0300
@@ -30,6 +30,7 @@ SUBSYSTEM=="mem", KERNEL=="mem|kmem|port
SUBSYSTEM=="input", GROUP="input"
SUBSYSTEM=="input", KERNEL=="js[0-9]*", MODE="0664"
+SUBSYSTEM=="media", GROUP="video"
SUBSYSTEM=="video4linux", GROUP="video"
SUBSYSTEM=="graphics", GROUP="video"
SUBSYSTEM=="drm", GROUP="video"
Ok, someone should base it on upstream and submit this to
udev maintainers[1].
[1] On a side note, it would also be possible to have an udev rule that
would automatically set up the pipeline when the media device pops
up.
I wrote something like that for remote controllers, which gets
installed together with the v4l-utils package: when a remote controller
is detected, it checks the driver and the remote controller table
that the driver wants and loads it from userspace.
It would be possible to do something like that for MC, but someone
would need to do such a task. Of course, that would require having
a way for a generic application to detect the board type and be
able to automatically set up the pipelines. So, we're back to
the initial problem: nobody has been able to do that so far.
> It depends on who the user is. I don't think anyone is suggesting a
> regular end user is the user of all these APIs: it is either an
> application tailored for that given device, a skilled user with his test
> scripts
Test scripts are just test scripts, meant for development purposes.
We shouldn't even consider this seriously.
> Making use of the full potential of the hardware requires using a more
> expressive interface.
That's the core of the problem: most users don't need "full potential
of the hardware". It is actually worse than that: several boards
don't allow "full potential" of the SoC capabilities.
Ok, when the user requires "full potential", they may need a complex
tailored application. But, in most cases, all that is needed is to
support a simple application that controls a video stream via
/dev/video0.
> That's what the kernel interface must provide. If
> we decide to limit ourselves to a small sub-set of that potential on the
> level of the kernel interface, we have made a wrong decision. It's as
> simple as that. This is why the functionality (and which requires taking
> a lot of policy decisions) belongs to the user space. We cannot have
> multiple drivers providing multiple kernel interfaces for the same hardware.
I strongly disagree. Looking only at the hardware capabilities without
having a solution to provide what the user wants is *wrong*.
The project decisions should be based on the common use cases, and, if
possible and not too expensive/complex, covering exotic cases.
The V4L2 API was designed to fulfill the user needs. Drivers should
take it in consideration when choosing policy decisions.
In order to give you some examples, before the V4L2 API, the bttv driver
used to have its own set of ioctls, meant to provide functionality based
on its hardware capabilities (like selecting other standards like PAL/M).
The end result is that applications written for bttv were not generic
enough[1]. The V4L2 API was designed to be generic enough to cover the
common use-cases.
[1] As the bttv boards were very popular, most userspace apps didn't
work on other hardware. The big advantage of the V4L2 API is that it
contains a set of functions that is good enough to control any
hardware. Ok, some features are missing.
For example, in the past, one could use the bttv hardware to "decrypt"
analog cable TV using custom ioctls. Those ioctls got removed during
the V4L2 conversion, as they are specific to the way bttv hardware
works.
Also, the cx88 hardware could be used as a generic fast A/D converter,
using a custom set of ioctls - something similar to SDR. In this
specific case, such patchset was never merged upstream.
To give you a few examples about policy decisions taken by the drivers
in order to fulfill the user needs, the Conexant chipsets (supported by
bttv, cx88, cx231xx and cx25821 drivers, among others) provide a way
more possibilities than what the driver supports.
Basically, they all have fast A/D converters, running at around
27-30 MHz clock. The samples of the A/D can be passed as-is to
userspace and/or handled by some IP blocks inside the chips.
Also, even the simplest one (bttv) uses a RISC with a firmware that
is built dynamically at runtime by the Kernel driver. Such firmware
needs to setup several DMA pipelines.
For example, the cx88 driver sets those DMA pipelines
(see drivers/media/pci/cx88/cx88-core.c):
* FIFO space allocations:
* channel 21 (y video) - 10.0k
* channel 22 (u video) - 2.0k
* channel 23 (v video) - 2.0k
* channel 24 (vbi) - 4.0k
* channels 25+26 (audio) - 4.0k
* channel 28 (mpeg) - 4.0k
* channel 27 (audio rds)- 3.0k
In order to provide the generic V4L2 API, the driver need to
take lots of device-specific decisions.
For example, the DMA from channel 28 is only enabled if the
device has an IP block to do MPEG, and the user wants to
receive mpeg data, instead of YUV.
The DMA from channel 27 has samples taken from the audio IF.
The audio decoding itself are at DMA channels 25 and 26.
The data from DMA channel 27 is used by a software dsp code,
implemented at drivers/media/pci/cx88/cx88-dsp.c, which detects
whether the IF contains AM or FM modulation and what the carriers are,
in order to identify the audio standard. Once the standard is
detected, it sets the hardware audio decoder to the right
standard, and the DMA from channels 25 and 26 will contain
audio PCM samples.
The net result is that a user of any of those V4L2 drivers simply
sees a standard V4L2 device, without having to care about those
internal details. That's way more complex than deciding if a pipeline
would require an IP block to convert from Bayer format to YUV.
Another case: the cx25821 hardware supports 12 video streams,
consuming almost all available bandwidth of an ePCI bus. Each video
stream connector can either be configured to be capture or output, in
runtime. The hardware vendor chose to hardcode the driver to provide
8 inputs and 4 outputs. Their decision was based on the fact that
the driver is already very complex, and it satisfies their customers'
needs. The cost/effort of making the driver reconfigurable at
runtime was too high for almost no benefit.
> That said, I'm not trying to provide an excuse for not having libraries
> available to help the user to configure and control the device more or
> less automatically even in terms of best effort. It's something that
> does require attention, a lot more of it than it has received in the
> last few years.
The big question, waiting for an answer for the last 8 years, is:
who would do that? Such a person would need to have several different
pieces of hardware from different vendors, in order to ensure that it is
a generic solution.
It is way more feasible for the kernel developers that already
have certain hardware on their hands to add support inside the
driver to forward the controls through the pipeline and to set up
a "default" pipeline that would cover the common use cases at
driver's probe.
Thanks,
Mauro
On Fri, Mar 17, 2017 at 02:51:10PM +0100, Philipp Zabel wrote:
> On Fri, 2017-03-17 at 10:24 -0300, Mauro Carvalho Chehab wrote:
> [...]
> > The big question, waiting for an answer for the last 8 years, is:
> > who would do that? Such a person would need to have several different
> > pieces of hardware from different vendors, in order to ensure that it is
> > a generic solution.
> >
> > It is way more feasible for the kernel developers that already
> > have certain hardware on their hands to add support inside the
> > driver to forward the controls through the pipeline and to set up
> > a "default" pipeline that would cover the common use cases at
> > driver's probe.
>
> Actually, would setting pipeline via libv4l2 plugin and letting drivers
> provide a sane enabled default pipeline configuration be mutually
> exclusive? Not sure about the control forwarding, but at least a simple
> link setup and format forwarding would also be possible in the kernel
> without hindering userspace from doing it themselves later.
I think this is the exact same problem as controls in ALSA.
When ALSA started off in life, the requirement was that all controls
shall default to minimum, and the user is expected to adjust controls
after the system is running.
After OSS, this gave quite a marked change in system behaviour, and
led to a lot of "why doesn't my sound work anymore" problems, because
people then had to figure out which combination of controls had to be
set to get sound out of their systems.
Now it seems to be much better, where install Linux on a platform, and
you have a working sound system (assuming that the drivers are all there
which is generally the case for x86.)
However, it's still possible to adjust all the controls from userspace.
All that's changed is the defaults.
Why am I mentioning this - because from what I understand Mauro saying,
it's no different from this situation. Userspace will still have the
power to disable all links and setup its own. The difference is that
there will be a default configuration that the kernel sets up at boot
time that will be functional, rather than the current default
configuration where the system is completely non-functional until
manually configured.
However, at the end of the day, I don't care _where_ the usability
problems are solved, only that there is some kind of solution. It's not
the _where_ that's the real issue here, but the _how_, and discussion of
the _how_ is completely missing.
So, let's try kicking off a discussion about _how_ to do things.
_How_ do we setup a media controller system so that we end up with a
usable configuration - let's start with the obvious bit... which links
should be enabled.
I think the first prerequisite is that we stop exposing capture devices
that can never be functional for the hardware that's present on the board,
so that there isn't this plethora of useless /dev/video* nodes and useless
subdevices.
One possible solution to finding a default path may be "find the shortest
path between the capture device and the sensor and enable intervening
links".
Then we need to try configuring that path with format/resolution
information.
However, what if something in the shortest path can't handle the format
that the sensor produces? I think at that point, we'd need to drop that
subdev out of the path resolution, re-run the "find the shortest path"
algorithm, and try again.
Repeat until success or no path between the capture and sensor exists.
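Sketched in pseudo-C, just to spell out the heuristic; the entity
struct and all helpers below are placeholders, not an existing kernel
API:

#include <errno.h>
#include <stdbool.h>

#define MAX_ENTITIES 64

struct entity {
        bool visited;
        struct entity *prev;            /* BFS back-pointer */
        struct entity *neighbours[MAX_ENTITIES];
        int num_neighbours;
};

/* Placeholders, for illustration only. */
bool is_capture_node(struct entity *e);
bool format_ok(struct entity *e, struct entity *sensor);
int enable_links_towards(struct entity *capture, struct entity *sensor);

/* Breadth-first search from the sensor, so the first capture node
 * reached is at minimum hop distance. Entities that cannot handle
 * the sensor's format are skipped, which is the "drop that subdev
 * and try again" rule folded into the search itself. */
static int enable_shortest_path(struct entity *sensor)
{
        struct entity *queue[MAX_ENTITIES];
        int head = 0, tail = 0, i;

        sensor->visited = true;
        queue[tail++] = sensor;

        while (head < tail) {
                struct entity *e = queue[head++];

                if (is_capture_node(e))
                        /* Walk e->prev back to the sensor, enabling
                         * every intervening link on the way. */
                        return enable_links_towards(e, sensor);

                for (i = 0; i < e->num_neighbours; i++) {
                        struct entity *n = e->neighbours[i];

                        if (n->visited || !format_ok(n, sensor))
                                continue;
                        n->visited = true;
                        n->prev = e;
                        queue[tail++] = n;
                }
        }

        return -ENOLINK;        /* no path between capture and sensor */
}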
This works fine if you have just one sensor visible from a capture device,
but not if there's more than one (which I suspect is the case with the
Sabrelite board with its two cameras and video receiver.) That breaks
the "find the shortest path" algorithm.
So, maybe it's a lot better to just let the board people provide via DT
a default setup for the connectivity of the modules somehow - certainly
one big step forward would be to disable in DT parts of the capture
system that can never be used (remembering that boards like the RPi /
Hummingboard may end up using DT overlays to describe this for different
cameras, so the capture setup may change after initial boot.)
--
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.
On Fri, 17 Mar 2017 12:16:08 +0000,
Russell King - ARM Linux <[email protected]> wrote:
> On Fri, Mar 17, 2017 at 01:02:07PM +0100, Philipp Zabel wrote:
> > I think most of the simple, fixed pipeline use cases could be handled by
> > libv4l2, by allowing to pass a v4l2 subdevice path to v4l2_open. If that
> > function internally would set up the media links to the
> > nearest /dev/video interface, propagate format, resolution and frame
> > intervals if necessary, and return an fd to the video device, there'd be
> > no additional complexity for the users beyond selecting the v4l2_subdev
> > instead of the video device.
>
> ... which would then require gstreamer to be modified too. The gstreamer
> v4l2 plugin looks for /dev/video* or /dev/v4l2/video* devices and monitors
> these for changes, so gstreamer applications know which capture devices
> are available:
>
> const gchar *paths[] = { "/dev", "/dev/v4l2", NULL };
> const gchar *names[] = { "video", NULL };
>
> /* Add some depedency, so the dynamic features get updated upon changes in
> * /dev/video* */
> gst_plugin_add_dependency (plugin,
> NULL, paths, names, GST_PLUGIN_DEPENDENCY_FLAG_FILE_NAME_IS_PREFIX);
>
> I haven't checked yet whether sys/v4l2/gstv4l2deviceprovider.c knows
> anything about the v4l2 subdevs.
Not only does gstreamer do that, but all simple V4L2 applications do,
although on most of them you can either pass a command line argument
or set up the path via a GUI.
Btw, I've no idea from where gstreamer took /dev/v4l2 :-)
I'm yet to find a distribution using it.
On the other hand, /dev/v4l/by-path and /dev/v4l/by-id are usual directories
where V4L2 devices can be found, and should provide persistent names. So, IMHO,
gst should prefer those names, when they exist:
$ tree /dev/v4l
/dev/v4l
├── by-id
│  ├── usb-046d_HD_Pro_Webcam_C920_55DA1CCF-video-index0 -> ../../video1
│  └── usb-Sunplus_mMobile_Inc_USB_Web-CAM-video-index0 -> ../../video0
└── by-path
├── platform-3f980000.usb-usb-0:1.2:1.0-video-index0 -> ../../video1
└── platform-3f980000.usb-usb-0:1.5:1.0-video-index0 -> ../../video0
Thanks,
Mauro
On Fri 2017-03-17 11:42:03, Russell King - ARM Linux wrote:
> On Tue, Mar 14, 2017 at 08:55:36AM +0100, Hans Verkuil wrote:
> > We're all very driver-development-driven, and userspace gets very little
> > attention in general. So before just throwing in the towel we should take
> > a good look at the reasons why there has been little or no development: is
> > it because of fundamental design defects, or because nobody paid attention
> > to it?
> >
> > I strongly suspect it is the latter.
> >
> > In addition, I suspect end-users of these complex devices don't really care
> > about a plugin: they want full control and won't typically use generic
> > applications. If they would need support for that, we'd have seen much more
> > interest. The main reason for having a plugin is to simplify testing and
> > if this is going to be used on cheap hobbyist devkits.
>
> I think you're looking at it with a programmer's hat on, not a user's hat.
>
> Are you really telling me that requiring users to 'su' to root, and then
> use media-ctl to manually configure the capture device is what most
> users "want" ?
If you want to help users, the right way is to improve userland support.
> Hasn't the way technology has moved towards graphical interfaces,
> particularly smart phones, taught us that what the vast majority of users
> want is intuitive, easy-to-use interfaces, and not the command line
> with reams of documentation?
How is it relevant to _kernel_ interfaces?
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
On 03/17/2017 12:55 PM, Sakari Ailus wrote:
> Hi Russell,
>
> On 03/17/17 13:42, Russell King - ARM Linux wrote:
>> On Tue, Mar 14, 2017 at 08:55:36AM +0100, Hans Verkuil wrote:
>>> We're all very driver-development-driven, and userspace gets very little
>>> attention in general. So before just throwing in the towel we should take
>>> a good look at the reasons why there has been little or no development: is
>>> it because of fundamental design defects, or because nobody paid attention
>>> to it?
>>>
>>> I strongly suspect it is the latter.
>>>
>>> In addition, I suspect end-users of these complex devices don't really care
>>> about a plugin: they want full control and won't typically use generic
>>> applications. If they would need support for that, we'd have seen much more
>>> interest. The main reason for having a plugin is to simplify testing and
>>> if this is going to be used on cheap hobbyist devkits.
>>
>> I think you're looking at it with a programmer's hat on, not a user's hat.
>>
>> Are you really telling me that requiring users to 'su' to root, and then
>> use media-ctl to manually configure the capture device is what most
>> users "want" ?
>
> It depends on who the user is. I don't think anyone is suggesting a
> regular end user is the user of all these APIs: it is either an
> application tailored for that given device, a skilled user with his test
> scripts or as suggested previously, a libv4l plugin knowing the device
> or a generic library geared towards providing best-effort service. The
> last one on this list does not exist yet and the second-to-last item
> requires help.
>
> Typically this class of devices is simply not able to provide the level of
> service you're requesting without an additional user space control library
> which is responsible for automatic white balance, exposure and focus.
>
> Making use of the full potential of the hardware requires using a more
> expressive interface. That's what the kernel interface must provide. If
> we decide to limit ourselves to a small sub-set of that potential on the
> level of the kernel interface, we have made a wrong decision. It's as
> simple as that. This is why the functionality (and which requires taking
> a lot of policy decisions) belongs to the user space. We cannot have
> multiple drivers providing multiple kernel interfaces for the same hardware.
Right. With my Cisco hat on I can tell you that Cisco would want full low-level
control. If the driver would limit us we would not be able to use it.
Same with anyone who wants to put Android CameraHAL on top of a V4L2 driver:
they would need full control. Some simplified interface would be unacceptable.
>
> That said, I'm not trying to provide an excuse for not having libraries
> available to help the user to configure and control the device more or
> less automatically even in terms of best effort. It's something that
> does require attention, a lot more of it than it has received in the
> past few years.
Right.
Regards,
Hans
On 03/17/2017 03:37 PM, Russell King - ARM Linux wrote:
> On Fri, Mar 17, 2017 at 02:51:10PM +0100, Philipp Zabel wrote:
>> On Fri, 2017-03-17 at 10:24 -0300, Mauro Carvalho Chehab wrote:
>> [...]
>>> The big question, waiting for an answer on the last 8 years is
>>> who would do that? Such person would need to have several different
>>> hardware from different vendors, in order to ensure that it has
>>> a generic solution.
>>>
>>> It is a way more feasible that the Kernel developers that already
>>> have a certain hardware on their hands to add support inside the
>>> driver to forward the controls through the pipeline and to setup
>>> a "default" pipeline that would cover the common use cases at
>>> driver's probe.
>>
>> Actually, would setting the pipeline via a libv4l2 plugin and letting drivers
>> provide a sane enabled default pipeline configuration be mutually
>> exclusive? Not sure about the control forwarding, but at least a simple
>> link setup and format forwarding would also be possible in the kernel
>> without hindering userspace from doing it themselves later.
>
> I think this is the exact same problem as controls in ALSA.
>
> When ALSA started off in life, the requirement was that all controls
> shall default to minimum, and the user is expected to adjust controls
> after the system is running.
>
> After OSS, this gave quite a marked change in system behaviour, and
> led to a lot of "why doesn't my sound work anymore" problems, because
> people then had to figure out which combination of controls had to be
> set to get sound out of their systems.
>
> Now it seems to be much better: install Linux on a platform, and
> you have a working sound system (assuming that the drivers are all there,
> which is generally the case for x86.)
>
> However, it's still possible to adjust all the controls from userspace.
> All that's changed is the defaults.
>
> Why am I mentioning this - because from what I understand Mauro to be saying,
> it's no different from this situation. Userspace will still have the
> power to disable all links and setup its own. The difference is that
> there will be a default configuration that the kernel sets up at boot
> time that will be functional, rather than the current default
> configuration where the system is completely non-functional until
> manually configured.
>
> However, at the end of the day, I don't care _where_ the usability
> problems are solved, only that there is some kind of solution. It's not
> the _where_ that's the real issue here, but the _how_, and discussion of
> the _how_ is completely missing.
>
> So, let's try kicking off a discussion about _how_ to do things.
>
> _How_ do we set up a media controller system so that we end up with a
> usable configuration - let's start with the obvious bit... which links
> should be enabled.
>
> I think the first prerequisite is that we stop exposing capture devices
> that can never be functional for the hardware that's present on the board,
> so that there isn't this plethora of useless /dev/video* nodes and useless
> subdevices.
>
> One possible solution to finding a default path may be "find the shortest
> path between the capture device and the sensor and enable intervening
> links".
>
> Then we need to try configuring that path with format/resolution
> information.
>
> However, what if something in the shortest path can't handle the format
> that the sensor produces? I think at that point, we'd need to drop that
> subdev out of the path resolution, re-run the "find the shortest path"
> algorithm, and try again.
>
> Repeat until success or no path between the capture and sensor exists.
>
> This works fine if you have just one sensor visible from a capture device,
> but not if there's more than one (which I suspect is the case with the
> Sabrelite board with its two cameras and video receiver.) That breaks
> the "find the shortest path" algorithm.
>
> So, maybe it's a lot better to just let the board people provide via DT
> a default setup for the connectivity of the modules somehow - certainly
> one big step forward would be to disable in DT parts of the capture
> system that can never be used (remembering that boards like the RPi /
> Hummingboard may end up using DT overlays to describe this for different
> cameras, so the capture setup may change after initial boot.)
The MC was developed before the device tree came along. But now that the DT
is here, I think it could be a sensible idea to let the DT provide an
initial path.
Sakari, Laurent, Mauro: any opinions?
Regards,
Hans
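As an aside, the search-and-retry scheme sketched above is easy to
prototype. Below is a rough userspace sketch in C: it only shows the
breadth-first "shortest path" search itself, and it assumes the
adjacency matrix has already been filled in from
MEDIA_IOC_ENUM_ENTITIES / MEDIA_IOC_ENUM_LINKS (that enumeration, and
the actual link enabling, are elided):

#include <stdbool.h>
#include <string.h>

#define MAX_ENTITIES 64

/* adj[i][j]: a link from entity i to entity j exists. Assumed to be
 * filled in from MEDIA_IOC_ENUM_ENTITIES / MEDIA_IOC_ENUM_LINKS. */
static bool adj[MAX_ENTITIES][MAX_ENTITIES];

/* Breadth-first search from one entity to another, skipping entities
 * marked in blacklisted[]. Fills path[] in reverse (sink first) and
 * returns its length, or -1 when no usable path exists. */
static int shortest_path(int from, int to, const bool *blacklisted,
                         int path[MAX_ENTITIES])
{
        int prev[MAX_ENTITIES], queue[MAX_ENTITIES];
        bool seen[MAX_ENTITIES] = { false };
        int head = 0, tail = 0, len = 0;

        memset(prev, -1, sizeof(prev));
        queue[tail++] = from;
        seen[from] = true;

        while (head < tail) {
                int cur = queue[head++];

                if (cur == to)
                        break;
                for (int next = 0; next < MAX_ENTITIES; next++) {
                        if (!adj[cur][next] || seen[next] ||
                            blacklisted[next])
                                continue;
                        seen[next] = true;
                        prev[next] = cur;
                        queue[tail++] = next;
                }
        }
        if (!seen[to])
                return -1;
        for (int cur = to; cur != -1; cur = prev[cur])
                path[len++] = cur;
        return len;
}

The retry loop described above would then enable the links along
path[], try to propagate the formats, and on failure mark the
offending subdev in blacklisted[] and run shortest_path() again, until
it succeeds or returns -1.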
On 03/14/2017 11:21 AM, Mauro Carvalho Chehab wrote:
> Em Tue, 14 Mar 2017 08:55:36 +0100
> Hans Verkuil <[email protected]> escreveu:
>
>> On 03/14/2017 04:45 AM, Mauro Carvalho Chehab wrote:
>>> Hi Sakari,
>>>
>>> I started preparing a long argument about it, but gave up in favor of a
>>> simpler one.
>>>
>>> Em Mon, 13 Mar 2017 14:46:22 +0200
>>> Sakari Ailus <[email protected]> escreveu:
>>>
>>>> Drivers are written to support hardware, not particular use case.
>>>
>>> No, it is just the reverse: drivers and hardware are developed to
>>> support use cases.
>>>
>>> Btw, you should remember that the hardware is the full board, not just the
>>> SoC. In practice, the board does limit the use cases: several provide a
>>> single physical CSI connector, allowing just one sensor.
>>>
>>>>> This situation has been there since 2009. If I remember well, you tried to
>>>>> write such a generic plugin in the past, but never finished it, apparently
>>>>> because it is too complex. Others tried too over the years.
>>>>
>>>> I'd argue I know better what happened with that attempt than you do. I had a
>>>> prototype of a generic pipeline configuration library but due to various
>>>> reasons I haven't been able to continue working on that since around 2012.
>>>
>>> ...
>>>
>>>>> The last attempt was made by Jacek, trying to cover just the exynos4 driver.
>>>>> Yet, even such a limited-scope plugin was not good enough, as it was never
>>>>> merged upstream. Currently, there are no such plugins upstream.
>>>>>
>>>>> If we can't even merge a plugin that solves it for just *one* driver,
>>>>> I have no hope that we'll be able to do it for the generic case.
>>>>
>>>> I believe Jacek ceased to work on that plugin in his day job; other than
>>>> that, there are some matters left to be addressed in his latest patchset.
>>>
>>> The two above basically summarize the issue: the task of doing a generic
>>> plugin in userspace, even for a single driver, is too complex to
>>> complete within a reasonable timeline.
>>>
>>> From 2009 to 2012, you were working on it, but didn't finish it.
>>>
>>> Apparently, nobody worked on it between 2013-2014 (but I may be wrong, as
>>> I didn't check when the generic plugin interface was added to libv4l).
>>>
>>> In the case of Jacek's work, the first patch I was able to find was
>>> written in Oct, 2014:
>>> https://patchwork.kernel.org/patch/5098111/
>>> (not sure what happened to version 1).
>>>
>>> The last e-mail about this subject was issued in Dec, 2016.
>>>
>>> In summary, you had this task for 3 years for an OMAP3
>>> plugin (where you have good expertise), and Jacek had it for 2 years,
>>> for Exynos 4, where he should also have good knowledge.
>>>
>>> Yet, with all those efforts, no concrete results were achieved, as none
>>> of the plugins got merged.
>>>
>>> Even if they were merged, if we keep the same mean time to develop a
>>> libv4l plugin, that would mean that a plugin for i.MX6 could take 2-3
>>> years to be developed.
>>>
>>> There's a clear message in it:
>>> - we shouldn't keep pushing for a solution via libv4l.
>>
>> Or:
>>
>> - userspace plugin development had a very low priority and
>> never got the attention it needed.
>
> The end result is the same: we can't count on it.
>
>>
>> I know that's *my* reason. I rarely if ever looked at it. I always assumed
>> Sakari and/or Laurent would look at it. If this reason is also valid for
>> Sakari and Laurent, then it is no wonder nothing has happened in all that
>> time.
>>
>> We're all very driver-development-driven, and userspace gets very little
>> attention in general. So before just throwing in the towel we should take
>> a good look at the reasons why there has been little or no development: is
>> it because of fundamental design defects, or because nobody paid attention
>> to it?
>
> No. We should look at it the other way: basically, there are patches
> for the i.MX6 driver that send controls from the video node to subdevs.
>
> If we nack it instead of applying it, who will write the userspace
> plugin? When will such a change be merged upstream?
>
> If we don't have answers to any of the above questions, we should not
> nack it.
>
> That said, that doesn't prevent merging a libv4l plugin if/when
> someone can find time/interest to develop it.
I don't think this control inheritance patch will magically prevent you
from needing a plugin.
>
>> I strongly suspect it is the latter.
>>
>> In addition, I suspect end-users of these complex devices don't really care
>> about a plugin: they want full control and won't typically use generic
>> applications. If they needed support for that, we'd have seen much more
>> interest. The main reason for having a plugin is to simplify testing and
>> for cases where this is going to be used on cheap hobbyist devkits.
>
> What are the needs of a cheap hobbyist devkit owner? Do we currently
> satisfy those needs? I'd say that having a functional driver that, when
> compiled without the subdev API, implements the ioctls/controls that a
> generic application like camorama/google talk/skype/zbar... needs in
> order to work should be enough to make them happy, even if they need to
> add some udev rule and/or run some "prep" application that would set up
> the pipelines via MC and eventually rename the device with a working
> pipeline to /dev/video0.
This is literally the first time we have to cater to a cheap devkit. We
were always aware of this issue, but nobody really needed it.
>
>>
>> An additional complication is simply that it is hard to find fully supported
>> MC hardware. omap3 boards are hard to find these days, renesas boards are not
>> easy to get, freescale isn't the most popular either. Allwinner, mediatek,
>> amlogic, broadcom and qualcomm all have closed source implementations or no
>> implementation at all.
>
> I'd say that we should not care anymore about providing a solution for
> generic applications to run on boards like OMAP3[1]. For currently
> available hardware that has a kernel driver and boards developed
> to be used as "cheap hobbyist devkits", I'd say we should implement
> a kernel solution that would allow them to be used without the subdev
> API, e. g. having all ioctls needed by generic applications working
> after some external application sets up the pipeline.
I liked Russell's idea of having the DT set up an initial video path.
This would (probably) make it much easier to provide a generic plugin since
there is already an initial valid path when the driver is loaded, and it
doesn't require custom code in the driver since this is left to the DT
which really knows about the HW.
>
> [1] Yet, I might eventually do that for fun, an OMAP3 board with tvp5150
> just arrived here last week. It would be nice to have xawtv3 running on it :-)
> So, if I have a lot of spare time (which is very unlikely), I might eventually
> do something for it to work.
>
>> I know it took me a very long time before I had a working omap3.
>
> My first OMAP3 board with working V4L2 source just arrived last week :-)
>
>> So I am not at all surprised that little progress has been made.
>
> I'm not surprised, but I'm disappointed, as I have tried to push toward
> a solution for this problem ever since our initial meetings about it.
So many things to do, so little time. Sounds corny, but really, that's what
this is all about. There were always (and frankly, still are) more important
things that needed to be done.
Regards,
Hans
Em Mon, 20 Mar 2017 14:10:30 +0100
Hans Verkuil <[email protected]> escreveu:
> On 03/17/2017 03:37 PM, Russell King - ARM Linux wrote:
> > On Fri, Mar 17, 2017 at 02:51:10PM +0100, Philipp Zabel wrote:
> >> On Fri, 2017-03-17 at 10:24 -0300, Mauro Carvalho Chehab wrote:
> >> [...]
> >>> The big question, waiting for an answer on the last 8 years is
> >>> who would do that? Such person would need to have several different
> >>> hardware from different vendors, in order to ensure that it has
> >>> a generic solution.
> >>>
> >>> It is a way more feasible that the Kernel developers that already
> >>> have a certain hardware on their hands to add support inside the
> >>> driver to forward the controls through the pipeline and to setup
> >>> a "default" pipeline that would cover the common use cases at
> >>> driver's probe.
> >>
> >> Actually, would setting the pipeline via a libv4l2 plugin and letting drivers
> >> provide a sane enabled default pipeline configuration be mutually
> >> exclusive? Not sure about the control forwarding, but at least a simple
> >> link setup and format forwarding would also be possible in the kernel
> >> without hindering userspace from doing it themselves later.
> >
> > I think this is the exact same problem as controls in ALSA.
> >
> > When ALSA started off in life, the requirement was that all controls
> > shall default to minimum, and the user is expected to adjust controls
> > after the system is running.
> >
> > After OSS, this gave quite a marked change in system behaviour, and
> > led to a lot of "why doesn't my sound work anymore" problems, because
> > people then had to figure out which combination of controls had to be
> > set to get sound out of their systems.
> >
> > Now it seems to be much better: install Linux on a platform, and
> > you have a working sound system (assuming that the drivers are all there,
> > which is generally the case for x86.)
> >
> > However, it's still possible to adjust all the controls from userspace.
> > All that's changed is the defaults.
> >
> > Why am I mentioning this - because from what I understand Mauro to be saying,
> > it's no different from this situation. Userspace will still have the
> > power to disable all links and set up its own. The difference is that
> > there will be a default configuration that the kernel sets up at boot
> > time that will be functional, rather than the current default
> > configuration where the system is completely non-functional until
> > manually configured.
> >
> > However, at the end of the day, I don't care _where_ the usability
> > problems are solved, only that there is some kind of solution. It's not
> > the _where_ that's the real issue here, but the _how_, and discussion of
> > the _how_ is completely missing.
> >
> > So, let's try kicking off a discussion about _how_ to do things.
> >
> > _How_ do we set up a media controller system so that we end up with a
> > usable configuration - let's start with the obvious bit... which links
> > should be enabled.
> >
> > I think the first prerequisite is that we stop exposing capture devices
> > that can never be functional for the hardware that's present on the board,
> > so that there isn't this plethora of useless /dev/video* nodes and useless
> > subdevices.
> >
> > One possible solution to finding a default path may be "find the shortest
> > path between the capture device and the sensor and enable intervening
> > links".
> >
> > Then we need to try configuring that path with format/resolution
> > information.
> >
> > However, what if something in the shortest path can't handle the format
> > that the sensor produces? I think at that point, we'd need to drop that
> > subdev out of the path resolution, re-run the "find the shortest path"
> > algorithm, and try again.
> >
> > Repeat until success or no path between the capture and sensor exists.
> >
> > This works fine if you have just one sensor visible from a capture device,
> > but not if there's more than one (which I suspect is the case with the
> > Sabrelite board with its two cameras and video receiver.) That breaks
> > the "find the shortest path" algorithm.
> >
> > So, maybe it's a lot better to just let the board people provide via DT
> > a default setup for the connectivity of the modules somehow - certainly
> > one big step forward would be to disable in DT parts of the capture
> > system that can never be used (remembering that boards like the RPi /
> > Hummingboard may end up using DT overlays to describe this for different
> > cameras, so the capture setup may change after initial boot.)
>
> The MC was developed before the device tree came along. But now that the DT
> is here, I think it could be a sensible idea to let the DT provide an
> initial path.
>
> Sakari, Laurent, Mauro: any opinions?
It makes perfect sense to me.
By setting the pipeline via DT on boards with simple configurations,
e. g. just one physical CSI interface, the driver can create just one
devnode (e. g. /dev/video0) which would fully control the device,
without enabling the subdev API for such hardware, making the hardware
usable with all V4L2 applications.
Regards,
Mauro
Em Mon, 20 Mar 2017 14:24:25 +0100
Hans Verkuil <[email protected]> escreveu:
> On 03/14/2017 11:21 AM, Mauro Carvalho Chehab wrote:
> > Em Tue, 14 Mar 2017 08:55:36 +0100
> > Hans Verkuil <[email protected]> escreveu:
> >
> >> On 03/14/2017 04:45 AM, Mauro Carvalho Chehab wrote:
> >>> Hi Sakari,
> >>>
> >> We're all very driver-development-driven, and userspace gets very little
> >> attention in general. So before just throwing in the towel we should take
> >> a good look at the reasons why there has been little or no development: is
> >> it because of fundamental design defects, or because nobody paid attention
> >> to it?
> >
> > No. We should look at it the other way: basically, there are patches
> > for the i.MX6 driver that send controls from the video node to subdevs.
> >
> > If we nack it instead of applying it, who will write the userspace
> > plugin? When will such a change be merged upstream?
> >
> > If we don't have answers to any of the above questions, we should not
> > nack it.
> >
> > That said, that doesn't prevent merging a libv4l plugin if/when
> > someone can find time/interest to develop it.
>
> I don't think this control inheritance patch will magically prevent you
> from needing a plugin.
Yeah, it is not just control inheritance. The driver needs to work
without subdev API, e. g. mbus settings should also be done via the
video devnode.
Btw, Sakari made a good point on IRC: what happens if some app
tries to change the pipeline or subdev settings while another
application is using the device? The driver should block such
changes, maybe using the V4L2 priority mechanism.
> This is literally the first time we have to cater to a cheap devkit. We
> were always aware of this issue, but nobody really needed it.
That's true. Now that we have a real need for that, we need to
provide a solution.
> > I'd say that we should not care anymore about providing a solution for
> > generic applications to run on boards like OMAP3[1]. For currently
> > available hardware that has a kernel driver and boards developed
> > to be used as "cheap hobbyist devkits", I'd say we should implement
> > a kernel solution that would allow them to be used without the subdev
> > API, e. g. having all ioctls needed by generic applications working
> > after some external application sets up the pipeline.
>
> I liked Russell's idea of having the DT set up an initial video path.
> This would (probably) make it much easier to provide a generic plugin since
> there is already an initial valid path when the driver is loaded, and it
> doesn't require custom code in the driver since this is left to the DT
> which really knows about the HW.
Setting up the device via DT indeed makes it easier (either for a kernel
or userspace solution), but things like resolution changes should
be possible without needing to change DT.
Also, as MC and subdev changes should be blocked while a V4L2 app
is using the device, using a mechanism like the VIDIOC_S_PRIORITY
ioctl via the V4L2 interface, the kernel will require changes anyway.
My suggestion is to start with one driver and make it work with a
generic application. As we currently have efforts and needs for
the i.MX6 to do it, I'd say that the best would be to make it
work on that driver. Once the work is done, we can see whether the
approach taken there would work in libv4l or not.
In parallel, someone has to fix libv4l for it to be the default in
applications like gstreamer, e. g. adding support for DMABUF
and fixing other issues that are preventing it from being used by
default.
Nicolas,
Why is libv4l currently disabled in gstreamer's default settings?
> > [1] Yet, I might eventually do that for fun, an OMAP3 board with tvp5150
> > just arrived here last week. It would be nice to have xawtv3 running on it :-)
> > So, if I have a lot of spare time (which is very unlikely), I might eventually
> > do something for it to work.
> >
> >> I know it took me a very long time before I had a working omap3.
> >
> > My first OMAP3 board with working V4L2 source just arrived last week :-)
> >
> >> So I am not at all surprised that little progress has been made.
> >
> > I'm not surprised, but I'm disappointed, as I have tried to push toward
> > a solution for this problem ever since our initial meetings about it.
>
> So many things to do, so little time. Sounds corny, but really, that's what
> this is all about. There were always (and frankly, still are) more important
> things that needed to be done.
What's most important for some developer may not be so important for
another developer.
My understanding here is that there are developers wanting/needing
to have standard V4L2 apps support for (some) i.MX6 devices. Those are
the ones that may/will allocate some time for it to happen.
Thanks,
Mauro
On Mon, Mar 20, 2017 at 12:39:38PM -0300, Mauro Carvalho Chehab wrote:
> Em Mon, 20 Mar 2017 14:24:25 +0100
> Hans Verkuil <[email protected]> escreveu:
> > I don't think this control inheritance patch will magically prevent you
> > from needing a plugin.
>
> Yeah, it is not just control inheritance. The driver needs to work
> without subdev API, e. g. mbus settings should also be done via the
> video devnode.
>
> Btw, Sakari made a good point on IRC: what happens if some app
> tries to change the pipeline or subdev settings while another
> application is using the device? The driver should block such
> changes, maybe using the V4L2 priority mechanism.
My understanding is that there are already mechanisms in place to
prevent that, but it's driver dependent - certainly several of the
imx driver subdevs check whether they have an active stream, and
refuse (eg) all set_fmt calls with -EBUSY if that is so.
(That statement raises another question in my mind: if the subdev is
streaming, should it refuse all set_fmt, even for the TRY stuff?)
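For illustration, the kind of check in question might look like the
sketch below; the names are hypothetical, and whether TRY should be
exempted is exactly the open question (a TRY format only touches the
pad config, not the hardware, so arguably it is always safe to allow):

#include <media/v4l2-subdev.h>

struct foo_priv {                       /* hypothetical driver state */
        struct v4l2_subdev sd;
        bool streaming;                 /* set/cleared in s_stream */
};

static int foo_set_fmt(struct v4l2_subdev *sd,
                       struct v4l2_subdev_pad_config *cfg,
                       struct v4l2_subdev_format *sdformat)
{
        struct foo_priv *priv = container_of(sd, struct foo_priv, sd);

        /* Refuse ACTIVE format changes while streaming; TRY formats
         * only touch the pad config, so they could still be allowed. */
        if (sdformat->which == V4L2_SUBDEV_FORMAT_ACTIVE && priv->streaming)
                return -EBUSY;

        /* ... validate and apply the format as usual ... */
        return 0;
}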
> In parallel, someone has to fix libv4l for it to be the default in
> applications like gstreamer, e. g. adding support for DMABUF
> and fixing other issues that are preventing it from being used by
> default.
Hmm, not sure what you mean there - I've used dmabuf with gstreamer's
v4l2src linked to libv4l2, importing the buffers into etnaviv using
a custom plugin. There are distros around (ubuntu) where the v4l2
plugin is built against libv4l2.
> My understanding here is that there are developers wanting/needing
> to have standard V4L2 apps support for (some) i.MX6 devices. Those are
> the ones that may/will allocate some time for it to happen.
Quite - but we need to first know what is acceptable to the v4l2
community before we waste a lot of effort coding something up that
may not be suitable. Like everyone else, there's only a limited
amount of effort available, so using it wisely is a very good idea.
Up until recently, it seemed that the only solution was to solve the
userspace side of the media controller API via v4l2 plugins and the
like.
Much of my time that I have available to look at the imx6 capture
stuff at the moment is taken up by tripping over UAPI issues with the
current code (such as the ones about CSI scaling, colorimetry, etc)
and trying to get consensus on what the right solution to fix those
issues actually is, and at the moment I don't have spare time to
start addressing any kind of v4l2 plugin for user controls nor any
other solution.
Eg, I spent much of this last weekend sorting out my IMX219 camera
sensor driver for my new understanding about how scaling is supposed
to work, the frame enumeration issue (which I've posted patches for)
and the CSI scaling issue (for which I have a half-baked patch at the
moment, but probably by the time I've finished sorting that, Philipp
or Steve will already have a solution.)
That said, my new understanding of the scaling impacts the four patches
I posted, and probably makes the frame size enumeration in CSI (due to
its scaling) rather obsolete.
--
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.
Em Mon, 20 Mar 2017 16:10:03 +0000
Russell King - ARM Linux <[email protected]> escreveu:
> On Mon, Mar 20, 2017 at 12:39:38PM -0300, Mauro Carvalho Chehab wrote:
> > Em Mon, 20 Mar 2017 14:24:25 +0100
> > Hans Verkuil <[email protected]> escreveu:
> > > I don't think this control inheritance patch will magically prevent you
> > > from needing a plugin.
> >
> > Yeah, it is not just control inheritance. The driver needs to work
> > without subdev API, e. g. mbus settings should also be done via the
> > video devnode.
> >
> > Btw, Sakari made a good point on IRC: what happens if some app
> > tries to change the pipeline or subdev settings while another
> > application is using the device? The driver should block such
> > changes, maybe using the V4L2 priority mechanism.
>
> My understanding is that there are already mechanisms in place to
> prevent that, but it's driver dependent - certainly several of the
> imx driver subdevs check whether they have an active stream, and
> refuse (eg) all set_fmt calls with -EBUSY if that is so.
>
> (That statement raises another question in my mind: if the subdev is
> streaming, should it refuse all set_fmt, even for the TRY stuff?)
Returning -EBUSY only when streaming is too late, as ioctls
may be changing the pipeline configuration and/or buffer allocation
while the application is sending other ioctls in order to prepare
for streaming.
V4L2 has a mechanism for blocking other apps from changing such
parameters via VIDIOC_S_PRIORITY[1]. If an application sets
priority to V4L2_PRIORITY_RECORD, any other application attempting
to change the device via some other file descriptor will fail.
So, it is a sort of "exclusive write access".
On a quick look at the V4L2 core, sending a VIDIOC_S_PRIORITY ioctl
to a /dev/video device currently doesn't seem to have any effect on
either the MC or the V4L2 subdev API for the subdevs connected to it.
We'll likely need to add some code at v4l2_prio_change() for it to
notify the subdevs, so that they return -EBUSY if one tries to change
something there while the device is prioritized.
[1] https://linuxtv.org/downloads/v4l-dvb-apis/uapi/v4l/vidioc-g-priority.html
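From the application side, claiming that priority is only a few
lines; a minimal sketch, assuming the capture node is /dev/video0:

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
        enum v4l2_priority prio = V4L2_PRIORITY_RECORD;
        int fd = open("/dev/video0", O_RDWR);

        if (fd < 0)
                return 1;

        /* File descriptors with a lower priority will now get -EBUSY
         * when they try to change the state of this video node. */
        if (ioctl(fd, VIDIOC_S_PRIORITY, &prio) < 0)
                perror("VIDIOC_S_PRIORITY");

        /* ... negotiate formats, allocate buffers, stream ... */
        return 0;
}

As noted above, today this only guards the video node itself;
extending its effect to the MC and subdev nodes is the missing piece.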
> > In parallel, someone has to fix libv4l for it to be the default in
> > applications like gstreamer, e. g. adding support for DMABUF
> > and fixing other issues that are preventing it from being used by
> > default.
>
> Hmm, not sure what you mean there - I've used dmabuf with gstreamer's
> v4l2src linked to libv4l2, importing the buffers into etnaviv using
> a custom plugin. There are distros around (ubuntu) where the v4l2
> plugin is built against libv4l2.
Hmm... I guess some gst developer mentioned that there are/were
some restrictions in libv4l2 with regard to DMABUF. I may be
wrong.
>
> > My understanding here is that there are developers wanting/needing
> > to have standard V4L2 apps support for (some) i.MX6 devices. Those are
> > the ones that may/will allocate some time for it to happen.
>
> Quite - but we need to first know what is acceptable to the v4l2
> community before we waste a lot of effort coding something up that
> may not be suitable. Like everyone else, there's only a limited
> amount of effort available, so using it wisely is a very good idea.
Sure. That's why we're discussing here :-)
> Up until recently, it seemed that the only solution was to solve the
> userspace side of the media controller API via v4l2 plugins and the
> like.
>
> Much of my time that I have available to look at the imx6 capture
> stuff at the moment is taken up by tripping over UAPI issues with the
> current code (such as the ones about CSI scaling, colorimetry, etc)
> and trying to get consensus on what the right solution to fix those
> issues actually is, and at the moment I don't have spare time to
> start addressing any kind of v4l2 plugin for user controls nor any
> other solution.
I hear you. A solution via libv4l could be more elegant, but it
doesn't seem simple, as nobody has done it before, and it depends on the
libv4l plugin mechanism, which is currently unused.
Also, I think it is easier to provide a solution using DT and some
driver and/or core support for it, especially since, AFAICT,
currently there's no way to request exclusive access to MC and subdevs.
It is probably not possible to do that exclusively in userspace.
> Eg, I spent much of this last weekend sorting out my IMX219 camera
> sensor driver for my new understanding about how scaling is supposed
> to work, the frame enumeration issue (which I've posted patches for)
> and the CSI scaling issue (for which I have a half-baked patch at the
> moment, but probably by the time I've finished sorting that, Philipp
> or Steve will already have a solution.)
>
> That said, my new understanding of the scaling impacts the four patches
> I posted, and probably makes the frame size enumeration in CSI (due to
> its scaling) rather obsolete.
Yeah, when there's no scaler, it should report just the resolution(s)
supported by the sensor (actually, at the CSI) via
V4L2_FRMSIZE_TYPE_DISCRETE.
However, when there's a scaler in the pipeline, it should report the
range supported by the scaler, e. g.:
- V4L2_FRMSIZE_TYPE_CONTINUOUS - when an entire range of resolutions
is supported with step = 1 for both H and V.
- V4L2_FRMSIZE_TYPE_STEPWISE - when there's either an H or V step in
the possible values for resolutions. This is actually more common
in practice, as several encodings take a 2x2 pixel block. So, the
step should be at least 2.
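For reference, this is how the three cases surface to an application;
a minimal sketch, with the pixel format as an arbitrary example:

#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static void enum_sizes(int fd)
{
        struct v4l2_frmsizeenum fsize;

        memset(&fsize, 0, sizeof(fsize));
        fsize.pixel_format = V4L2_PIX_FMT_YUYV;  /* arbitrary example */

        while (ioctl(fd, VIDIOC_ENUM_FRAMESIZES, &fsize) == 0) {
                if (fsize.type == V4L2_FRMSIZE_TYPE_DISCRETE) {
                        printf("%ux%u\n", fsize.discrete.width,
                               fsize.discrete.height);
                } else {
                        /* CONTINUOUS is STEPWISE with step 1; for both,
                         * only index 0 is valid. */
                        printf("%ux%u .. %ux%u, step %ux%u\n",
                               fsize.stepwise.min_width,
                               fsize.stepwise.min_height,
                               fsize.stepwise.max_width,
                               fsize.stepwise.max_height,
                               fsize.stepwise.step_width,
                               fsize.stepwise.step_height);
                        break;
                }
                fsize.index++;
        }
}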
There is something to be considered by the logic that forwards the
resolution to the CSI: a lower resolution there means a higher number of
frames per second.
So, if the driver is setting the resolution via a V4L2 device, it
will provide a higher number of fps if it selects the lowest
resolution at CSI that is greater than or equal to the resolution
set at the scaler. On the other hand, the image quality could be
better if it doesn't scale at CSI.
Thanks,
Mauro
Hi!
> > Making use of the full potential of the hardware requires using a more
> > expressive interface.
>
> That's the core of the problem: most users don't need "full potential
> of the hardware". It is actually worse than that: several boards
> don't allow "full potential" of the SoC capabilities.
Well, in the kernel we usually try to support the "full hardware" potential.
And we are pretty sure users would like to take still photos, even if
common v4l2 applications cannot do it.
> > That's what the kernel interface must provide. If
> > we decide to limit ourselves to a small sub-set of that potential on the
> > level of the kernel interface, we have made a wrong decision. It's as
> > simple as that. This is why the functionality (which requires taking
> > a lot of policy decisions) belongs to the user space. We cannot have
> > multiple drivers providing multiple kernel interfaces for the same hardware.
>
> I strongly disagree. Looking only at the hardware capabilities without
> having a solution to provide what the user wants is *wrong*.
Hardware manufacturers already did this kind of research for us. They
don't usually include features no one wants...
> Another case: the cx25821 hardware supports 12 video streams,
> consuming almost all available bandwidth of an ePCI bus. Each video
> stream connector can be configured as either capture or output at
> runtime. The hardware vendor chose to hardcode the driver to provide
> 8 inputs and 4 outputs. Their decision was based on the fact that
> the driver is already very complex, and it satisfies their customers'
> needs. The cost/effort of making the driver reconfigurable at
> runtime was too high for almost no benefit.
Well, it is okay to provide a 'limited' driver -- there's the possibility
of fixing the driver. But IMO it is not okay to provide a 'limited' kernel
interface -- because if you try to fix it, you'll suddenly have
regressions.
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
Hi!
> > I do agree with you that MC places a lot of burden on the user to
> > attain a lot of knowledge of the system's architecture.
>
> Setting up the pipeline is not the hard part. One could write a
> script to do that.
Can you try to write that script? I believe it would solve a big part of
the problem.
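Most of such a script would boil down to a handful of link-setup
calls. A minimal C sketch of a single one, assuming the entity IDs and
pad indexes were already discovered (e.g. from media-ctl --print-dot):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/media.h>

/* Enable one source-pad -> sink-pad link on an open media device fd. */
static int enable_link(int media_fd, __u32 src_entity, __u16 src_pad,
                       __u32 sink_entity, __u16 sink_pad)
{
        struct media_link_desc link;

        memset(&link, 0, sizeof(link));
        link.source.entity = src_entity;
        link.source.index = src_pad;
        link.source.flags = MEDIA_PAD_FL_SOURCE;
        link.sink.entity = sink_entity;
        link.sink.index = sink_pad;
        link.sink.flags = MEDIA_PAD_FL_SINK;
        link.flags = MEDIA_LNK_FL_ENABLED;

        return ioctl(media_fd, MEDIA_IOC_SETUP_LINK, &link);
}

Propagating the formats with VIDIOC_SUBDEV_S_FMT on each subdev node
would follow the same pattern.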
> > And my other point is, I think most people who have a need to work with
> > the media framework on a particular platform will likely already be
> > quite familiar with that platform.
>
> I disagree. The most popular platform device currently is Raspberry PI.
>
> I doubt that most owners of an RPi + camera module know anything
> about MC. They just use Raspberry's official driver, which just provides
> the V4L2 interface.
>
> I have a strong opinion that, for hardware like RPi, just the V4L2
> API is enough for more than 90% of the cases.
Maybe the V4L2 API is enough for 90% of the users. But I don't believe
that means that we should provide compatibility. The V4L2 API is not good
enough for complex devices, and if we can make the RPi people fix
userspace... that's a good thing.
> > The media graph for imx6 is fairly self-explanatory in my opinion.
> > Yes that graph has to be generated, but just with a simple 'media-ctl
> > --print-dot', I don't see how that is difficult for the user.
>
> Again, IMHO, the problem is not how to set up the pipeline, but, instead,
> the need to forward controls to the subdevices.
>
> To use a camera, the user needs to set up a set of controls for the
> image to make sense (brightness, contrast, focus, etc.). If the driver
> doesn't forward those controls to the subdevs, an application like
> "camorama" won't really work, as the user won't be able
> to adjust those parameters via the GUI.
I believe this can be fixed in libv4l2.
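For what it's worth, a bare-bones sketch of what such a libv4l2
plugin could look like; the hook names and signatures are assumptions
taken from v4l-utils' libv4l-plugin.h, and the sub-device lookup is
elided:

#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <libv4l-plugin.h>

struct foo_plugin {                     /* hypothetical per-device state */
        int subdev_fd;                  /* sub-device owning the controls */
};

static void *foo_init(int fd)
{
        struct foo_plugin *plugin = calloc(1, sizeof(*plugin));

        /* ... walk the media graph behind this video node (fd) and open
         * the /dev/v4l-subdev* node that owns the user controls ... */
        return plugin;
}

static void foo_close(void *dev_ops_priv)
{
        free(dev_ops_priv);
}

static int foo_ioctl(void *dev_ops_priv, int fd, unsigned long cmd, void *arg)
{
        struct foo_plugin *plugin = dev_ops_priv;

        /* Redirect control ioctls to the sub-device that owns them;
         * everything else goes to the video node untouched. */
        switch (cmd) {
        case VIDIOC_G_CTRL:
        case VIDIOC_S_CTRL:
                return ioctl(plugin->subdev_fd, cmd, arg);
        default:
                return ioctl(fd, cmd, arg);
        }
}

const struct libv4l_dev_ops libv4l2_plugin = {
        .init  = foo_init,
        .close = foo_close,
        .ioctl = foo_ioctl,
};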
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
Hi Hans,
On Tuesday 14 Mar 2017 08:55:36 Hans Verkuil wrote:
> On 03/14/2017 04:45 AM, Mauro Carvalho Chehab wrote:
> > Hi Sakari,
> >
> > I started preparing a long argument about it, but gave up in favor of a
> > simpler one.
> >
> > Em Mon, 13 Mar 2017 14:46:22 +0200 Sakari Ailus escreveu:
> >> Drivers are written to support hardware, not particular use case.
> >
> > No, it is just the reverse: drivers and hardware are developed to
> > support use cases.
> >
> > Btw, you should remember that the hardware is the full board, not just the
> > SoC. In practice, the board does limit the use cases: several provide a
> > single physical CSI connector, allowing just one sensor.
> >
> >>> This situation has been there since 2009. If I remember well, you tried to
> >>> write such a generic plugin in the past, but never finished it, apparently
> >>> because it is too complex. Others tried too over the years.
> >>
> >> I'd argue I know better what happened with that attempt than you do. I
> >> had a prototype of a generic pipeline configuration library but due to
> >> various reasons I haven't been able to continue working on that since
> >> around 2012.
> > ...
> >
> >>> The last attempt was made by Jacek, trying to cover just the exynos4
> >>> driver. Yet, even such a limited-scope plugin was not good enough, as it
> >>> was never merged upstream. Currently, there are no such plugins upstream.
> >>>
> >>> If we can't even merge a plugin that solves it for just *one* driver,
> >>> I have no hope that we'll be able to do it for the generic case.
> >>
> >> I believe Jacek ceased to work on that plugin in his day job; other than
> >> that, there are some matters left to be addressed in his latest patchset.
> >
> > The two above basically summarize the issue: the task of doing a generic
> > plugin in userspace, even for a single driver, is too complex to
> > complete within a reasonable timeline.
> >
> > From 2009 to 2012, you were working on it, but didn't finish it.
> >
> > Apparently, nobody worked on it between 2013-2014 (but I may be wrong, as
> > I didn't check when the generic plugin interface was added to libv4l).
> >
> > In the case of Jacek's work, the first patch I was able to find was
> > written in Oct, 2014:
> > https://patchwork.kernel.org/patch/5098111/
> > (not sure what happened to version 1).
> >
> > The last e-mail about this subject was issued in Dec, 2016.
> >
> > In summary, you had this task for 3 years for an OMAP3
> > plugin (where you have good expertise), and Jacek had it for 2 years,
> > for Exynos 4, where he should also have good knowledge.
> >
> > Yet, with all those efforts, no concrete results were achieved, as none
> > of the plugins got merged.
> >
> > Even if they were merged, if we keep the same mean time to develop a
> > libv4l plugin, that would mean that a plugin for i.MX6 could take 2-3
> > years to be developed.
> >
> > There's a clear message in it:
> > - we shouldn't keep pushing for a solution via libv4l.
>
> Or:
> - userspace plugin development had a very low priority and
> never got the attention it needed.
>
> I know that's *my* reason. I rarely if ever looked at it. I always assumed
> Sakari and/or Laurent would look at it. If this reason is also valid for
> Sakari and Laurent, then it is no wonder nothing has happened in all that
> time.
The reason is also valid for me. I'd really love to work on the userspace
side, but I just can't find time at the moment.
> We're all very driver-development-driven, and userspace gets very little
> attention in general. So before just throwing in the towel we should take
> a good look at the reasons why there has been little or no development: is
> it because of fundamental design defects, or because nobody paid attention
> to it?
>
> I strongly suspect it is the latter.
>
> In addition, I suspect end-users of these complex devices don't really care
> about a plugin: they want full control and won't typically use generic
> applications. If they needed support for that, we'd have seen much more
> interest. The main reason for having a plugin is to simplify testing and
> for cases where this is going to be used on cheap hobbyist devkits.
>
> An additional complication is simply that it is hard to find fully supported
> MC hardware. omap3 boards are hard to find these days, renesas boards are
> not easy to get, freescale isn't the most popular either. Allwinner,
> mediatek, amlogic, broadcom and qualcomm all have closed source
> implementations or no implementation at all.
>
> I know it took me a very long time before I had a working omap3.
>
> So I am not at all surprised that little progress has been made.
--
Regards,
Laurent Pinchart