I want to toss out a few thoughts on adding support for thermopile
devices that output pixel data (the same approach could be used for
the FLIR Lepton as well).
These typically aren't DMA-capable devices, since they are low speed
(partly to limit their functionality for ITAR compliance) and the
data is piped over I2C/SPI.
My question: there doesn't seem to be any other driver that polls
frames off a device and pushes them to a video buffer, and I wanted
to be sure this doesn't already exist somewhere.
Also, more importantly, does the mailing list think this belongs in V4L2?
We already reached the opinion on the IIO list that it doesn't belong
in that subsystem, since pushing raw pixel data into a buffer there is
a bit hacky. The driver could also be written generically on top of
regmap so other devices (namely the FLIR Lepton) could be supported easily.
I also need some input on the video pixel data type. The device we
are using (see datasheet links below) outputs pixel data as
little-endian 16-bit words, of which a signed 12-bit value is used.
Does it make sense to do some basic processing on the data in the
driver, since greyscale is going to look weird for temperatures below
0 C? Namely, a cold object would appear brighter than the hottest
object the sensor can read. Or should a new V4L2_PIX_FMT_* be defined
and the processing done in software? Another issue is how to report
the scaling value of 0.25 C per LSB of each pixel to the recording
application.
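For reference, the decoding involved is roughly the following (an
untested sketch; the helper name is made up, and it assumes the
16-bit word is already in host byte order):

#include <stdint.h>

/*
 * Decode one Grid-EYE pixel: the low 12 bits are a two's complement
 * value in 0.25 C steps, so bit 11 is the sign bit.
 */
static inline int amg88_pixel_to_millicelsius(uint16_t raw)
{
	int val = raw & 0x0fff;		/* keep the 12-bit field */

	if (val & 0x0800)		/* sign-extend bit 11 */
		val -= 0x1000;

	return val * 250;		/* 0.25 C per LSB -> millidegrees C */
}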
Datasheet: http://media.digikey.com/pdf/Data%20Sheets/Panasonic%20Sensors%20PDFs/Grid-EYE_AMG88.pdf
Datasheet: https://eewiki.net/download/attachments/13599167/Grid-EYE%20SPECIFICATIONS%28Reference%29.pdf?version=1&modificationDate=1380660426690&api=v2
Thanks,
Matt
Hi Matt,
> Need some input for the video pixel data types, which the device we
> are using (see datasheet links below) is outputting pixel data in
> little endian 16-bit of which a 12-bits signed value is used. Does it
> make sense to do some basic processing on the data since greyscale is
> going to look weird with temperatures under 0C degrees? Namely a cold
> object is going to be brighter than the hottest object it could read.
> Or should a new V4L2_PIX_FMT_* be defined and processing done in
> software? Another issue is how to report the scaling value of 0.25 C
> for each LSB of the pixels to the respecting recording application.
Regarding the format for the pixel data: I did some research into
this when doing some driver work for the Seek Thermal (a product
similar to the FLIR Lepton). While it would be nice to be able to use
an existing application like VLC or GStreamer to capture video from
the V4L2 interface with no additional userland code, the reality is
that how you colorize the data is going to be highly user specific
(e.g. what thermal ranges to show with what colors, etc.). If your
goal is really a V4L2 driver that returns the raw data, then you're
probably best off returning it in the native greyscale format
(whether that is an existing V4L2 PIX_FMT or a new one that needs to
be defined), and then figuring out in software how to colorize it.
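To give an idea of what I mean by "user specific", here is a trivial
untested sketch of the kind of colorization a viewing application
might do in userland; min_raw/max_raw stand for whatever temperature
window the user selects, and none of this is existing code:

#include <stdint.h>

struct rgb { uint8_t r, g, b; };

/* Map one raw greyscale sample onto a simple "hot" ramp
 * (black -> red -> yellow -> white) over a user-chosen window. */
static struct rgb colorize(int raw, int min_raw, int max_raw)
{
	struct rgb c;
	int span = max_raw - min_raw;
	int v;

	if (span <= 0)
		span = 1;

	v = (raw - min_raw) * 255 / span;
	if (v < 0)
		v = 0;
	if (v > 255)
		v = 255;

	c.r = (uint8_t)v;
	c.g = (uint8_t)(v > 128 ? (v - 128) * 2 : 0);
	c.b = (uint8_t)(v > 192 ? (v - 192) * 4 : 0);
	return c;
}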
Just my opinion though....
Devin
--
Devin J. Heitmueller - Kernel Labs
http://www.kernellabs.com
On 10/28/2016 10:30 PM, Devin Heitmueller wrote:
> Hi Matt,
>
>> Need some input for the video pixel data types, which the device we
>> are using (see datasheet links below) is outputting pixel data in
>> little endian 16-bit of which a 12-bits signed value is used. Does it
>> make sense to do some basic processing on the data since greyscale is
>> going to look weird with temperatures under 0C degrees? Namely a cold
>> object is going to be brighter than the hottest object it could read.
>> Or should a new V4L2_PIX_FMT_* be defined and processing done in
>> software? Another issue is how to report the scaling value of 0.25 C
>> for each LSB of the pixels to the respecting recording application.
>
> Regarding the format for the pixel data: I did some research into
> this when doing some driver work for the Seek Thermal (a product
> similar to the FLIR Lepton). While it would be nice to be able to use
> an existing application like VLC or gStreamer to just take the video
> and capture from the V4L2 interface with no additional userland code,
> the reality is that how you colorize the data is going to be highly
> user specific (e.g. what thermal ranges to show with what colors,
> etc). If your goal is really to do a V4L2 driver which returns the
> raw data, then you're probably best returning it in the native
> greyscale format (whether that be an existing V4L2 PIX_FMT or a new
> one needs to be defined), and then in software you can figure out how
> to colorize it.
All true. I also did my share of poking at the Seek Thermal USB device,
and it is an excellent candidate for a V4L2 driver, that one. But I think
the device here produces much smaller images, something like 8x8 pixels.
--
Best regards,
Marek Vasut
Hi Matt,
On 28/10/16 22:14, Matt Ranostay wrote:
> So want to toss a few thoughts on adding support for thermopile
> devices (could be used for FLIR Lepton as well) that output pixel
> data.
> These typically aren't DMA'able devices since they are low speed
> (partly to limiting the functionality to be in compliance with ITAR)
> and data is piped over i2c/spi.
>
> My question is that there doesn't seem to be an other driver that
> polls frames off of a device and pushes it to the video buffer, and
> wanted to be sure that this doesn't currently exist somewhere.
Not anymore, but if you go back to kernel 3.6 then you'll find this driver:
drivers/media/video/bw-qcam.c
It was for a grayscale parallel port webcam (which explains why it was
removed in 3.7 :-) ), and it used polling to get the pixels.
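Very roughly, the videobuf2 equivalent of that polling approach would
be a kernel thread along these lines (just an untested sketch;
grideye_read_frame() and grideye_next_queued_buffer() are made-up
names standing in for the regmap bulk read and the driver's own
queued-buffer handling):

static int grideye_capture_thread(void *data)
{
	struct grideye *priv = data;	/* hypothetical driver state */

	while (!kthread_should_stop()) {
		struct grideye_buffer *buf;

		/* grideye_buffer is assumed to wrap a struct vb2_v4l2_buffer "vb" */
		buf = grideye_next_queued_buffer(priv);
		if (buf) {
			void *dst = vb2_plane_vaddr(&buf->vb.vb2_buf, 0);

			grideye_read_frame(priv, dst);	/* regmap bulk read over I2C */
			buf->vb.vb2_buf.timestamp = ktime_get_ns();
			vb2_buffer_done(&buf->vb.vb2_buf, VB2_BUF_STATE_DONE);
		}

		/* the AMG88xx updates at roughly 10 fps */
		schedule_timeout_interruptible(msecs_to_jiffies(100));
	}

	return 0;
}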
> Also more importantly does the mailing list thinks it belongs in v4l2?
I think it fits. It's a sensor, just with a very small resolution and
infrared instead of visible light.
> We already came up the opinion on the IIO list that it doesn't belong
> in that subsystem since pushing raw pixel data to a buffer is a bit
> hacky. Also could be generically written with regmap so other devices
> (namely FLIR Lepton) could be easily supported.
>
> Need some input for the video pixel data types, which the device we
> are using (see datasheet links below) is outputting pixel data in
> little endian 16-bit of which a 12-bits signed value is used. Does it
> make sense to do some basic processing on the data since greyscale is
> going to look weird with temperatures under 0C degrees? Namely a cold
> object is going to be brighter than the hottest object it could read.
> Or should a new V4L2_PIX_FMT_* be defined and processing done in
> software?
I would recommend that. It's no big deal, as long as the new format is
documented.
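Purely as an illustration (this is not an agreed-upon definition, just
a made-up fourcc), the new format would amount to something like:

/* 16-bit little-endian greyscale carrying a signed 12-bit thermal sample */
#define V4L2_PIX_FMT_Y12S	v4l2_fourcc('Y', '1', '2', 'S')

plus a matching entry in the pixfmt documentation.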
> Another issue is how to report the scaling value of 0.25 C
> for each LSB of the pixels to the respecting recording application.
Probably through a read-only control, but I'm not sure.
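If a control is the way to go, it could be a custom read-only integer
control, e.g. exposing millidegrees per LSB. A rough sketch only;
V4L2_CID_THERMAL_SCALE is a made-up CID:

static const struct v4l2_ctrl_config grideye_scale_ctrl = {
	.id	= V4L2_CID_THERMAL_SCALE,	/* hypothetical custom CID */
	.name	= "Thermal Scale (mC per LSB)",
	.type	= V4L2_CTRL_TYPE_INTEGER,
	.flags	= V4L2_CTRL_FLAG_READ_ONLY,
	.min	= 250,
	.max	= 250,
	.step	= 1,
	.def	= 250,
};

/* registered in the driver with:
 *	v4l2_ctrl_new_custom(&priv->ctrl_handler, &grideye_scale_ctrl, NULL);
 */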
Regards,
Hans
>
> Datasheet: http://media.digikey.com/pdf/Data%20Sheets/Panasonic%20Sensors%20PDFs/Grid-EYE_AMG88.pdf
> Datasheet: https://eewiki.net/download/attachments/13599167/Grid-EYE%20SPECIFICATIONS%28Reference%29.pdf?version=1&modificationDate=1380660426690&api=v2
>
> Thanks,
>
> Matt
>
On Fri, Oct 28, 2016 at 1:40 PM, Marek Vasut <[email protected]> wrote:
> On 10/28/2016 10:30 PM, Devin Heitmueller wrote:
>> Hi Matt,
>>
>>> Need some input for the video pixel data types, which the device we
>>> are using (see datasheet links below) is outputting pixel data in
>>> little endian 16-bit of which a 12-bits signed value is used. Does it
>>> make sense to do some basic processing on the data since greyscale is
>>> going to look weird with temperatures under 0C degrees? Namely a cold
>>> object is going to be brighter than the hottest object it could read.
>>> Or should a new V4L2_PIX_FMT_* be defined and processing done in
>>> software? Another issue is how to report the scaling value of 0.25 C
>>> for each LSB of the pixels to the respecting recording application.
>>
>> Regarding the format for the pixel data: I did some research into
>> this when doing some driver work for the Seek Thermal (a product
>> similar to the FLIR Lepton). While it would be nice to be able to use
>> an existing application like VLC or gStreamer to just take the video
>> and capture from the V4L2 interface with no additional userland code,
>> the reality is that how you colorize the data is going to be highly
>> user specific (e.g. what thermal ranges to show with what colors,
>> etc). If your goal is really to do a V4L2 driver which returns the
>> raw data, then you're probably best returning it in the native
>> greyscale format (whether that be an existing V4L2 PIX_FMT or a new
>> one needs to be defined), and then in software you can figure out how
>> to colorize it.
>
> All true, I also did my share of poking into SEEK Thermal USB and it is
> an excellent candidate for a V4L2 driver, that one. But I think this
> device here is producing much smaller images, something like 8x8 pixels.
Yes, this is only 64 pixels (an 8x8 grid), but it is still video. It
does have some major pluses over a FLIR camera though: mainly that
power usage is really low, and the cost is lower (although that
advantage is shrinking every day).
>
> --
> Best regards,
> Marek Vasut
On Fri, Oct 28, 2016 at 1:30 PM, Devin Heitmueller
<[email protected]> wrote:
> Hi Matt,
>
>> Need some input for the video pixel data types, which the device we
>> are using (see datasheet links below) is outputting pixel data in
>> little endian 16-bit of which a 12-bits signed value is used. Does it
>> make sense to do some basic processing on the data since greyscale is
>> going to look weird with temperatures under 0C degrees? Namely a cold
>> object is going to be brighter than the hottest object it could read.
>> Or should a new V4L2_PIX_FMT_* be defined and processing done in
>> software? Another issue is how to report the scaling value of 0.25 C
>> for each LSB of the pixels to the respecting recording application.
>
> Regarding the format for the pixel data: I did some research into
> this when doing some driver work for the Seek Thermal (a product
> similar to the FLIR Lepton). While it would be nice to be able to use
> an existing application like VLC or gStreamer to just take the video
> and capture from the V4L2 interface with no additional userland code,
> the reality is that how you colorize the data is going to be highly
> user specific (e.g. what thermal ranges to show with what colors,
> etc). If your goal is really to do a V4L2 driver which returns the
> raw data, then you're probably best returning it in the native
> greyscale format (whether that be an existing V4L2 PIX_FMT or a new
> one needs to be defined), and then in software you can figure out how
> to colorize it.
>
Good point. I was leaning toward having userspace do it, but I hadn't
thought of the color-mapping part, so that's even more reason.
> Just my opinion though....
>
> Devin
>
> --
> Devin J. Heitmueller - Kernel Labs
> http://www.kernellabs.com
On Fri, Oct 28, 2016 at 2:53 PM, Hans Verkuil <[email protected]> wrote:
> Hi Matt,
>
> On 28/10/16 22:14, Matt Ranostay wrote:
>>
>> So want to toss a few thoughts on adding support for thermopile
>> devices (could be used for FLIR Lepton as well) that output pixel
>> data.
>> These typically aren't DMA'able devices since they are low speed
>> (partly to limiting the functionality to be in compliance with ITAR)
>> and data is piped over i2c/spi.
>>
>> My question is that there doesn't seem to be an other driver that
>> polls frames off of a device and pushes it to the video buffer, and
>> wanted to be sure that this doesn't currently exist somewhere.
>
>
> Not anymore, but if you go back to kernel 3.6 then you'll find this driver:
>
> drivers/media/video/bw-qcam.c
>
> It was for a grayscale parallel port webcam (which explains why it was
> removed in 3.7 :-) ), and it used polling to get the pixels.
Yikes, a parallel port! But I'll take a look at that for some reference :)
>
>> Also more importantly does the mailing list thinks it belongs in v4l2?
>
>
> I think it fits. It's a sensor, just with a very small resolution and
> infrared
> instead of visible light.
>
>> We already came up the opinion on the IIO list that it doesn't belong
>> in that subsystem since pushing raw pixel data to a buffer is a bit
>> hacky. Also could be generically written with regmap so other devices
>> (namely FLIR Lepton) could be easily supported.
>>
>> Need some input for the video pixel data types, which the device we
>> are using (see datasheet links below) is outputting pixel data in
>> little endian 16-bit of which a 12-bits signed value is used. Does it
>> make sense to do some basic processing on the data since greyscale is
>> going to look weird with temperatures under 0C degrees? Namely a cold
>> object is going to be brighter than the hottest object it could read.
>
>
>> Or should a new V4L2_PIX_FMT_* be defined and processing done in
>> software?
>
>
> I would recommend that. It's no big deal, as long as the new format is
> documented.
>
>> Another issue is how to report the scaling value of 0.25 C
>> for each LSB of the pixels to the respecting recording application.
>
>
> Probably through a read-only control, but I'm not sure.
>
> Regards,
>
> Hans
>
>>
>> Datasheet:
>> http://media.digikey.com/pdf/Data%20Sheets/Panasonic%20Sensors%20PDFs/Grid-EYE_AMG88.pdf
>> Datasheet:
>> https://eewiki.net/download/attachments/13599167/Grid-EYE%20SPECIFICATIONS%28Reference%29.pdf?version=1&modificationDate=1380660426690&api=v2
>>
>> Thanks,
>>
>> Matt
>>
>
On Fri, Oct 28, 2016 at 7:59 PM, Matt Ranostay <[email protected]> wrote:
> On Fri, Oct 28, 2016 at 2:53 PM, Hans Verkuil <[email protected]> wrote:
>> Hi Matt,
>>
>> On 28/10/16 22:14, Matt Ranostay wrote:
>>>
>>> So want to toss a few thoughts on adding support for thermopile
>>> devices (could be used for FLIR Lepton as well) that output pixel
>>> data.
>>> These typically aren't DMA'able devices since they are low speed
>>> (partly to limiting the functionality to be in compliance with ITAR)
>>> and data is piped over i2c/spi.
>>>
>>> My question is that there doesn't seem to be an other driver that
>>> polls frames off of a device and pushes it to the video buffer, and
>>> wanted to be sure that this doesn't currently exist somewhere.
>>
>>
>> Not anymore, but if you go back to kernel 3.6 then you'll find this driver:
>>
>> drivers/media/video/bw-qcam.c
>>
>> It was for a grayscale parallel port webcam (which explains why it was
>> removed in 3.7 :-) ), and it used polling to get the pixels.
>
> Yikes parallel port, but I'll take a look at that for some reference :)
So does anyone know of any software that currently uses V4L2_PIX_FMT_Y12?
I want to test my driver, but it seems nothing (ffmpeg, mplayer, etc.)
supports that format.
The raw data seems correct, but I would like to visualize it :). I
suspect I'll need to write a test application, though.
>
>>
>>> Also more importantly does the mailing list thinks it belongs in v4l2?
>>
>>
>> I think it fits. It's a sensor, just with a very small resolution and
>> infrared
>> instead of visible light.
>>
>>> We already came up the opinion on the IIO list that it doesn't belong
>>> in that subsystem since pushing raw pixel data to a buffer is a bit
>>> hacky. Also could be generically written with regmap so other devices
>>> (namely FLIR Lepton) could be easily supported.
>>>
>>> Need some input for the video pixel data types, which the device we
>>> are using (see datasheet links below) is outputting pixel data in
>>> little endian 16-bit of which a 12-bits signed value is used. Does it
>>> make sense to do some basic processing on the data since greyscale is
>>> going to look weird with temperatures under 0C degrees? Namely a cold
>>> object is going to be brighter than the hottest object it could read.
>>
>>
>>> Or should a new V4L2_PIX_FMT_* be defined and processing done in
>>> software?
>>
>>
>> I would recommend that. It's no big deal, as long as the new format is
>> documented.
>>
>>> Another issue is how to report the scaling value of 0.25 C
>>> for each LSB of the pixels to the respecting recording application.
>>
>>
>> Probably through a read-only control, but I'm not sure.
>>
>> Regards,
>>
>> Hans
>>
>>>
>>> Datasheet:
>>> http://media.digikey.com/pdf/Data%20Sheets/Panasonic%20Sensors%20PDFs/Grid-EYE_AMG88.pdf
>>> Datasheet:
>>> https://eewiki.net/download/attachments/13599167/Grid-EYE%20SPECIFICATIONS%28Reference%29.pdf?version=1&modificationDate=1380660426690&api=v2
>>>
>>> Thanks,
>>>
>>> Matt
>>>
>>
On Wed, 2 Nov 2016 23:10:41 -0700
Matt Ranostay <[email protected]> wrote:
> On Fri, Oct 28, 2016 at 7:59 PM, Matt Ranostay <[email protected]> wrote:
> > On Fri, Oct 28, 2016 at 2:53 PM, Hans Verkuil <[email protected]> wrote:
> >> Hi Matt,
> >>
> >> On 28/10/16 22:14, Matt Ranostay wrote:
> >>>
> >>> So want to toss a few thoughts on adding support for thermopile
> >>> devices (could be used for FLIR Lepton as well) that output pixel
> >>> data.
> >>> These typically aren't DMA'able devices since they are low speed
> >>> (partly to limiting the functionality to be in compliance with ITAR)
> >>> and data is piped over i2c/spi.
> >>>
> >>> My question is that there doesn't seem to be an other driver that
> >>> polls frames off of a device and pushes it to the video buffer, and
> >>> wanted to be sure that this doesn't currently exist somewhere.
> >>
> >>
> >> Not anymore, but if you go back to kernel 3.6 then you'll find this driver:
> >>
> >> drivers/media/video/bw-qcam.c
> >>
> >> It was for a grayscale parallel port webcam (which explains why it was
> >> removed in 3.7 :-) ), and it used polling to get the pixels.
> >
> > Yikes parallel port, but I'll take a look at that for some reference :)
>
>
> So does anyone know of any software that is using V4L2_PIX_FMT_Y12
> currently? Want to test my driver but seems there isn't anything that
> uses that format (ffmpeg, mplayer, etc).
>
> Raw data seems correct but would like to visualize it :). Suspect I'll
> need to write a test case application though
>
You could add a conversion routine to libv4lconvert from v4l-utils to
get a grayscale representation of Y12; I did something similar for the
Kinect depth map, discarding the least significant bits:
https://git.linuxtv.org/v4l-utils.git/commit/lib/libv4lconvert?id=6daa2b1ce8674bda66b0f3bb5cf08089e42579fd
After that any v4l2 program using libv4l2 will at least be able to show
_an_ image.
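The conversion itself would be trivial, something like this (an
untested sketch, not actual v4l-utils code; the function name just
follows the naming pattern used in libv4lconvert):

#include <stdint.h>

/* Expand little-endian 12-bit greyscale to 8-bit greyscale by
 * discarding the 4 least significant bits, as was done for the
 * Kinect depth map. */
static void v4lconvert_y12le_to_grey(const uint8_t *src, uint8_t *dst,
				     int width, int height)
{
	int i;

	for (i = 0; i < width * height; i++) {
		uint16_t v = src[2 * i] | (src[2 * i + 1] << 8);

		dst[i] = (uint8_t)(v >> 4);
	}
}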
You can play with "false color" representations in libv4lconvert too;
however, I don't know whether such representations are generic enough
to be mainlined. In the Kinect case, the false-color representation of
the depth map was done in specialized software like libfreenect.
Ciao,
Antonio
--
Antonio Ospite
https://ao2.it
https://twitter.com/ao2it
A: Because it messes up the order in which people normally read text.
See http://en.wikipedia.org/wiki/Posting_style
Q: Why is top-posting such a bad thing?
On 03/11/16 08:35, Antonio Ospite wrote:
> On Wed, 2 Nov 2016 23:10:41 -0700
> Matt Ranostay <[email protected]> wrote:
>
>> On Fri, Oct 28, 2016 at 7:59 PM, Matt Ranostay <[email protected]> wrote:
>>> On Fri, Oct 28, 2016 at 2:53 PM, Hans Verkuil <[email protected]> wrote:
>>>> Hi Matt,
>>>>
>>>> On 28/10/16 22:14, Matt Ranostay wrote:
>>>>>
>>>>> So want to toss a few thoughts on adding support for thermopile
>>>>> devices (could be used for FLIR Lepton as well) that output pixel
>>>>> data.
>>>>> These typically aren't DMA'able devices since they are low speed
>>>>> (partly to limiting the functionality to be in compliance with ITAR)
>>>>> and data is piped over i2c/spi.
>>>>>
>>>>> My question is that there doesn't seem to be an other driver that
>>>>> polls frames off of a device and pushes it to the video buffer, and
>>>>> wanted to be sure that this doesn't currently exist somewhere.
>>>>
>>>>
>>>> Not anymore, but if you go back to kernel 3.6 then you'll find this driver:
>>>>
>>>> drivers/media/video/bw-qcam.c
>>>>
>>>> It was for a grayscale parallel port webcam (which explains why it was
>>>> removed in 3.7 :-) ), and it used polling to get the pixels.
>>>
>>> Yikes parallel port, but I'll take a look at that for some reference :)
>>
>>
>> So does anyone know of any software that is using V4L2_PIX_FMT_Y12
>> currently? Want to test my driver but seems there isn't anything that
>> uses that format (ffmpeg, mplayer, etc).
>>
>> Raw data seems correct but would like to visualize it :). Suspect I'll
>> need to write a test case application though
>>
>
> You could add a conversion routine in libv4lconvert from v4l-utils to
> have a grayscale representation of Y12, I did something similar for the
> kinect depth map, discarding the least significant bits:
>
> https://git.linuxtv.org/v4l-utils.git/commit/lib/libv4lconvert?id=6daa2b1ce8674bda66b0f3bb5cf08089e42579fd
>
> After that any v4l2 program using libv4l2 will at least be able to show
> _an_ image.
>
> You can play with "false color" representations too in libv4lconvert,
> however I don't know if such representations are generic enough to be
> mainlined, in the Kinect case the false color representation of the
> depth map was done in specialized software like libfreenect.
You can also try to add support for Y12 to the qv4l2 utility. It already has
Y16 support, so adding Y12 should be pretty easy.
Regards,
Hans
On Wed, 2 Nov 2016 23:10:41 -0700
Matt Ranostay <[email protected]> wrote:
>
> So does anyone know of any software that is using V4L2_PIX_FMT_Y12
> currently? Want to test my driver but seems there isn't anything that
> uses that format (ffmpeg, mplayer, etc).
>
> Raw data seems correct but would like to visualize it :). Suspect I'll
> need to write a test case application though
I was pretty sure that MPlayer supports 12-bit greyscale, but I cannot
find where it is handled. You can of course pass it to the MPlayer
internals as 8-bit greyscale, which would be IMGFMT_Y8, or just pass
it on as 16-bit, which would be IMGFMT_Y16_LE (LE = little endian).
You can find the internal #defines of the image formats in
libmpcodecs/img_format.h and can use https://www.fourcc.org/yuv.php
to decode their meaning.
The equivalent for libav would be libavutil/pixfmt.h
Luca Barbato tells me that adding Y12 support to libav would be easy.
Attila Kinali
--
It is upon moral qualities that a society is ultimately founded. All
the prosperity and technological sophistication in the world is of no
use without that foundation.
		-- Miss Matheson, The Diamond Age, Neal Stephenson
On 03/11/2016 14:21, Attila Kinali wrote:
> On Wed, 2 Nov 2016 23:10:41 -0700
> Matt Ranostay <[email protected]> wrote:
>
>>
>> So does anyone know of any software that is using V4L2_PIX_FMT_Y12
>> currently? Want to test my driver but seems there isn't anything that
>> uses that format (ffmpeg, mplayer, etc).
>>
>> Raw data seems correct but would like to visualize it :). Suspect I'll
>> need to write a test case application though
>
> I was pretty sure that MPlayer supports 12bit greyscale, but I cannot
> find where it was handled. You can of course pass it to the MPlayer
> internas as 8bit greyscale, which would be IMGFMT_Y8 or just pass
> it on as 16bit which would be IMGFMT_Y16_LE (LE = little endian).
>
> You can find the internal #defines of the image formats in
> libmpcodecs/img_format.h and can use https://www.fourcc.org/yuv.php
> to decode their meaning.
>
> The equivalent for libav would be libavutil/pixfmt.h
>
> Luca Barbato tells me that adding Y12 support to libav would be easy.
>
> Attila Kinali
>
So easy that it is [done][1]; it still needs to be
tested/reviewed/polished, though.
[1]:https://github.com/lu-zero/libav/commits/gray12
lu
On Thu, Nov 3, 2016 at 8:11 AM, Luca Barbato <[email protected]> wrote:
> On 03/11/2016 14:21, Attila Kinali wrote:
>> On Wed, 2 Nov 2016 23:10:41 -0700
>> Matt Ranostay <[email protected]> wrote:
>>
>>>
>>> So does anyone know of any software that is using V4L2_PIX_FMT_Y12
>>> currently? Want to test my driver but seems there isn't anything that
>>> uses that format (ffmpeg, mplayer, etc).
>>>
>>> Raw data seems correct but would like to visualize it :). Suspect I'll
>>> need to write a test case application though
>>
>> I was pretty sure that MPlayer supports 12bit greyscale, but I cannot
>> find where it was handled. You can of course pass it to the MPlayer
>> internas as 8bit greyscale, which would be IMGFMT_Y8 or just pass
>> it on as 16bit which would be IMGFMT_Y16_LE (LE = little endian).
>>
>> You can find the internal #defines of the image formats in
>> libmpcodecs/img_format.h and can use https://www.fourcc.org/yuv.php
>> to decode their meaning.
>>
>> The equivalent for libav would be libavutil/pixfmt.h
>>
>> Luca Barbato tells me that adding Y12 support to libav would be easy.
>>
>> Attila Kinali
>>
>
> So easy that is [done][1], it still needs to be tested/reviewed/polished
> though.
Cool. Although it needs to be processed, since it is a signed value
and the readings are really just 0 C based with 0.25 C steps. But I
will look into that when I get a chance.
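Probably the simplest pre-display step is an offset before handing the
samples to anything that expects unsigned greyscale; a rough untested
sketch, where the 0x800 re-centering offset is my assumption:

#include <stdint.h>

/* Re-center the signed 12-bit, 0.25 C/LSB sample into an unsigned
 * 12-bit range so cold shows up dark and hot shows up bright. */
static inline uint16_t grideye_signed_to_y12(int16_t sample)
{
	return (uint16_t)(sample + 0x800) & 0x0fff;
}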
Anyway, I did hack basic support into v4l2grab so I could test the
sensor, and it seems to work well, but it needs some colorized
processing to be useful, of course.
Soldering iron about 1 meter from sensor -> http://imgur.com/a/8totG
>
> [1]:https://github.com/lu-zero/libav/commits/gray12
>
> lu
Hi Matt and Hans,
On Wed, Nov 02, 2016 at 11:10:41PM -0700, Matt Ranostay wrote:
> On Fri, Oct 28, 2016 at 7:59 PM, Matt Ranostay <[email protected]> wrote:
> > On Fri, Oct 28, 2016 at 2:53 PM, Hans Verkuil <[email protected]> wrote:
> >> Hi Matt,
> >>
> >> On 28/10/16 22:14, Matt Ranostay wrote:
> >>>
> >>> So want to toss a few thoughts on adding support for thermopile
> >>> devices (could be used for FLIR Lepton as well) that output pixel
> >>> data.
> >>> These typically aren't DMA'able devices since they are low speed
> >>> (partly to limiting the functionality to be in compliance with ITAR)
> >>> and data is piped over i2c/spi.
> >>>
> >>> My question is that there doesn't seem to be an other driver that
> >>> polls frames off of a device and pushes it to the video buffer, and
> >>> wanted to be sure that this doesn't currently exist somewhere.
> >>
> >>
> >> Not anymore, but if you go back to kernel 3.6 then you'll find this driver:
> >>
> >> drivers/media/video/bw-qcam.c
> >>
> >> It was for a grayscale parallel port webcam (which explains why it was
> >> removed in 3.7 :-) ), and it used polling to get the pixels.
> >
> > Yikes parallel port, but I'll take a look at that for some reference :)
>
>
> So does anyone know of any software that is using V4L2_PIX_FMT_Y12
> currently? Want to test my driver but seems there isn't anything that
> uses that format (ffmpeg, mplayer, etc).
yavta can capture that; it doesn't convert it to anything else, though.
<URL:http://git.ideasonboard.org/yavta.git>
>
> Raw data seems correct but would like to visualize it :). Suspect I'll
> need to write a test case application though
>
>
> >
> >>
> >>> Also more importantly does the mailing list thinks it belongs in v4l2?
> >>
> >>
> >> I think it fits. It's a sensor, just with a very small resolution and
> >> infrared
> >> instead of visible light.
Agreed.
> >>
> >>> We already came up the opinion on the IIO list that it doesn't belong
> >>> in that subsystem since pushing raw pixel data to a buffer is a bit
> >>> hacky. Also could be generically written with regmap so other devices
> >>> (namely FLIR Lepton) could be easily supported.
> >>>
> >>> Need some input for the video pixel data types, which the device we
> >>> are using (see datasheet links below) is outputting pixel data in
> >>> little endian 16-bit of which a 12-bits signed value is used. Does it
> >>> make sense to do some basic processing on the data since greyscale is
> >>> going to look weird with temperatures under 0C degrees? Namely a cold
> >>> object is going to be brighter than the hottest object it could read.
> >>
> >>
> >>> Or should a new V4L2_PIX_FMT_* be defined and processing done in
> >>> software?
> >>
> >>
> >> I would recommend that. It's no big deal, as long as the new format is
> >> documented.
Agreed; in general such CPU-based conversion does not belong in drivers
(but rather in e.g. libv4l).
> >>
> >>> Another issue is how to report the scaling value of 0.25 C
> >>> for each LSB of the pixels to the respecting recording application.
> >>
> >>
> >> Probably through a read-only control, but I'm not sure.
As this is a property of the format, I'd try to represent it as such. But
that's another discussion...
--
Kind regards,
Sakari Ailus
e-mail: [email protected] XMPP: [email protected]
Hi!
> So want to toss a few thoughts on adding support for thermopile
> devices (could be used for FLIR Lepton as well) that output pixel
> data.
> These typically aren't DMA'able devices since they are low speed
> (partly to limiting the functionality to be in compliance with ITAR)
> and data is piped over i2c/spi.
>
> My question is that there doesn't seem to be an other driver that
> polls frames off of a device and pushes it to the video buffer, and
> wanted to be sure that this doesn't currently exist somewhere.
>
> Also more importantly does the mailing list thinks it belongs in v4l2?
> We already came up the opinion on the IIO list that it doesn't belong
> in that subsystem since pushing raw pixel data to a buffer is a bit
> hacky. Also could be generically written with regmap so other devices
> (namely FLIR Lepton) could be easily supported.
>
> Need some input for the video pixel data types, which the device we
> are using (see datasheet links below) is outputting pixel data in
> little endian 16-bit of which a 12-bits signed value is used. Does it
> make sense to do some basic processing on the data since greyscale is
> going to look weird with temperatures under 0C degrees? Namely a cold
> object is going to be brighter than the hottest object it could read.
> Or should a new V4L2_PIX_FMT_* be defined and processing done in
> software? Another issue is how to report the scaling value of 0.25 C
> for each LSB of the pixels to the respecting recording application.
Should we get some kind of flag saying "this is deep infrared"? Most software
won't care, but it would be nice to have enough information so that userspace
can do the right thing automatically...
Thanks,
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html